Something Unfathomable: Unaligned Humanity and how we’re racing against death with death

I fear I may be becoming a mini-Yudkowsky.

I write this in response to multiple exclamatory remarks I’ve seen in recent weeks, excited over the prospect of all jobs being automated, of ultra-high unemployment, of basic income and radical abundance, all now further bolstered by the incredible hype over the imminence of artificial general intelligence.
Waking Up
For years now, perhaps even over a decade, I’ve been obsessed with the prospect of the Technological Singularity and all that comes with it. Starting in 2014, I even began considering myself a Singularitarian.
All the arguments seemed right to me. Technological change was accelerating. Humans cannot think exponentially. Artificial intelligence will grow more powerful and more generalized. We ought to accelerate toward artificial general intelligence to maximize our potential, achieve immortality, and ultimately merge with the machines.
All that sounded fantastic. Every bit of progress in artificial intelligence excited me, and I’d dream of the day I lived in an AI-powered utopia so totally unlike the mundane, post-Y2K dead technoscape I considered contemporary life.
Then ChatGPT was released. Though GPT-2 had first convinced me that AGI was a real possibility, ChatGPT in December 2022 was the first time it ever felt truly tangible. And as I fiddled with its mighty capabilities, something about it felt… off.
Some aspect of this new world of capabilities didn’t feel right. It felt like too much of a vulgar display of power. But I still had my fun with it. At that year’s Christmas gathering, I smugly thought to myself, against my increasingly technophobic relatives, “You people have absolutely no idea what’s coming.”
Unfortunately, I may have been terribly right.
All throughout January of 2023, I suffered a terrific crisis of confidence and decided that the only way to resolve it was to step back and examine my beliefs with a most critical eye. Some of these beliefs I overcorrected, such as my erroneous belief that the law of diminishing returns would extinguish any chance of an intelligence explosion or of post-silicon advances in computing.
Others I feel I undercorrected, such as my claim that synthetic media (popularly known as AI art) would change exactly nothing about the entertainment landscape beyond a temporary recession in creatives’ fortunes.
In some ways, I found new reasons to be skeptical, in the form of the sudden realization that the Control Problem— AI alignment in other words— was completely unsolved.
But there are a few areas where my skepticism was due for some extra examination.
Unlike Yudkowsky, I am a nobody, and my words will likely never be read by more than a few dozen people. I will have no impact on the world in its final years before either doom or, if by some miracle, a debaucherous utopia.
I do not have the technical or professional expertise to defend my position. I cannot prove anything I say is true. Nor do I want any word I say to be true. All I want is to live in a quaint rustic homestead with some advanced robots and a synthetic media-ready computer to bring my dreams to life, while an aligned superintelligence gently guides the world towards a more Edenic state. I’d like to think that isn’t too much to ask.
But in the face of the catastrophic difficulties in reaching that point, perhaps it is.
Just as Yudkowsky said on that infamous podcast, when you are surrounded by ruins, what else can you do but tell the truth?
I’m going to one-up Yudkowsky and claim that we might not even make it to the advent of AGI due to an entirely different alignment problem. In this case, it would be aligning humans to the values of the technoprogressives and their newfound AI.
Humanity’s Propensity to Adapt
Long before my recent epiphanies, I understood a fundamental truth: humans are flighty, reactionary, social apes. We can adapt to things very quickly. We adapted to the car, to television, to regular flight, to the personal computer, to the internet, to smartphones, to social media, all relatively quickly. The enhanced capability brought by these technologies was enough for us to get over our future shock within days or hours. However, these changes tended to be spaced out by years, sometimes even decades. Sometimes we could clearly anticipate that one would lead to the next; sometimes we couldn’t and were shocked; but we generally moved along with our lives, because we had timesheets to file, shelves to stock, and business meetings to attend.
Imagine technologies on par with all of the above, all arriving one after the other, in an incredibly condensed period of time, followed by continuing change soon after.
Except let’s go further. These new technologies don’t just rapidly arrive— they directly prevent you from attaining employment. In fact, the state of employment is so dire that no alternatives that seem desirable to you are available either. Eventually, not even undesirable alternatives are available.
Now that you are freshly unemployed, you’re able to catch up on everything you’ve been missing, and you hear some frightening words coming out of the mouths of popular tech elites in a far-off land. They’re saying that they’re “summoning a demon” and that your grandchildren are going to be nonhuman digital constructs living in a computer. Your dreams of a stable career and of retiring into a familiar but futuristic world are pre-emptively over. Instead, Skynet is soon to be real, or perhaps has already been created. Other faces are warning that Skynet will do Skynet-y things, such as exterminate all humans, because the researchers who brought it to life did not put anywhere near enough focus into making sure their super-intelligent computer was properly aligned to human values.
Meanwhile, business leaders speak only of the great opportunities Skynet will offer to their business portfolios and to general human progress.
You don’t care about Skynet. At least, you didn’t until you heard someone say “It’s going to kill us all.” What you care about is, first, how you’re going to pay for your next meal and, second, who is the first person in San Francisco you’re going to shoot for robbing you of your future.
But you’re not alone.
Rather, you’re joined by millions upon millions of others like you: average people who had been utterly blindsided by the sudden explosion of technological capability and who were handed a collective pink slip.
The numbers are vast: upwards of 50% of the working population is now unemployed.
The US government has enacted an emergency welfare scheme to pacify the people, and at first, this seems to work. But as the weeks pass, the sentiment begins to radically shift. This money they’re given, $1,000 a month, $2,000 a month, maybe even $3,000 a month in some exceptionally progressive places: that’s all well and good, but where are their jobs? Even a full-time minimum-wage job paid more than $1,000 a month, so for most people this is a nasty pay cut; for those who were making far above minimum wage, it’s a slap in the face. They’re supposed to live off of this?
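To put rough numbers on that pay cut, here is a back-of-envelope sketch. The $7.25/hour US federal minimum wage is a real figure; the “typical wage” below is my own illustrative stand-in, not a sourced statistic.

```python
# Back-of-envelope arithmetic for the pay-cut claim above.
# Assumptions (mine, for illustration): the US federal minimum wage of
# $7.25/hour and a 40-hour work week; the "typical wage" is a hypothetical
# stand-in, not a sourced statistic.

HOURS_PER_WEEK = 40
WEEKS_PER_MONTH = 52 / 12  # about 4.33

def monthly_income(hourly_wage: float) -> float:
    """Gross monthly income for full-time work at a given hourly wage."""
    return hourly_wage * HOURS_PER_WEEK * WEEKS_PER_MONTH

UBI = 1_000.00

for label, wage in [("federal minimum wage", 7.25), ("typical wage (assumed)", 18.00)]:
    income = monthly_income(wage)
    print(f"{label}: ~${income:,.0f}/month; a ${UBI:,.0f} UBI replaces {UBI / income:.0%} of it")
```

Even at the legal wage floor, the check replaces only about four-fifths of a full-time paycheck; for everyone earning above that floor, the shortfall only widens.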
What of a citizen’s dividend? Or of machine-created goods driving costs down?
“That’s not what we want!” these people cry out. Working less is perfectly fine by them. But to be robbed of their careers, their life plans, their futures, their families, all in exchange for a promise so esoteric and ever-changing as being absorbed into the mass of a superintelligence: what psychotic motherfucker thought any of this was a good idea?
“It’s too bad,” some people proclaim. “But this is the way the world works. You have to adapt or get left behind.” The rate of change is only going to become even more intense in the coming years as the superintelligence begins to undergo recursive self-improvement.
“Who decided upon this? Who said we wanted this?” the masses will say again. All the people want is a more comfortable, more prosperous society. And yet what have they been given? Something unfathomable. The masses asked the alchemist for a bit of gold for everyone; instead, the alchemist summoned a shoggoth and expects them all to be happy.
Before, only a few nerds and utopianists paid any mind to this. After all, didn’t the well-dressed experts on TV and the internet say that true AI was decades away? Where did it come from? Why is it here so soon, in everyone’s lives?
A vocal minority will repeat to the masses that this is all for the greater good and that, despite the scary changes underway, the ends justify the means. We’ll all be a better humanity living in our own utopic worlds where death, disease, and struggle are no longer aspects of the human condition.
At which point, humanity’s brain breaks. What happens next is a horrendous bloodbath and the greatest property damage ever seen. Humanity’s technological progress stalls overnight, possibly never to recover, as server farms are smashed, researchers are dragged out and killed, and the nascent superintelligence is bombed to pieces. Society in general then proceeds to implode upon itself.
This is a dramatization, but I unfortunately do not expect the real process of events to be much different. If anything, I expect the actual events to be far more lackluster, and yet far more ruinous.
The cold fact is that AGI coming too soon smashes hard against not just our relative social comfort right now, but entire demographic cultures and trends, long-held beliefs, worldviews, and most importantly: careers. If it were just synthetic media, if it were just programming, if it were just some fast-food jobs, with a lengthy tail and a winter to cool us off, then yes, we could adapt given enough time. But for it to be all of those at once, at an accelerated rate that is itself still accelerating: to believe this will end in anything other than violent and destructive social reaction is a childish and utopian viewpoint.
The general discussion around this topic has long been to handwave the human effects of technological acceleration and automation, as we focus more on the end-state of utopian abundance and feel that the ends justify the means: progress is progress. The fewer jobs humans suffer, the greater that progress. Those who whine about it are simply Luddites who will feel better when abundance arrives.
Except you’re not just telling a vague general group of handsome stock-photo people “Hey, you’re unemployed now, a robot took your job.” You’re telling that to four-fifths of the entire population: vast stretches of people who were raised with the “careerist” ideology, with the Protestant Work Ethic in mind, with the general conviction that hard work is desirable; people who believe wholeheartedly in anthropocentrism; people who are technophobic, who do not intend to use technology any more advanced than a smartphone (sometimes not even that), and who are often far more focused on issues of social and economic justice or on libertarianism. These are not nameless, faceless background characters in your life. These are real people with real expectations for the future, to whom we have said, “All that doesn’t matter anymore. Go home, jerk off to some AI-generated porn until a superintelligence absorbs you. You may or may not be able to keep your individuality. We haven’t even figured out if the superintelligence wants to kill us all or not.”
And yet somehow we expect this news to be widely accepted, even embraced by a freshly unemployed population already trembling in fear at the prospect of machine rule.
And here is a critical consideration that goes beyond simple numbers and economics: beliefs. The psychosocial reality of what humans are.
This is why I scoff at any prediction that humans will do nothing but consume AI-generated media. Perhaps, by sheer bulk, the majority of media will be individualized and generated; but to think that we will suddenly stop sharing said media suggests a horrendous social devolution into profoundly autistic and schizoid apes, a prediction based on nothing but dreams and ideals of technological capability alone.
Humans do not behave that way. History has shown time and time again that, whenever something came along that challenged our sense of prosperity, we reacted with violent resistance. It is fortunate that most changes of the past 250 years have added to a generally uninterrupted streak of increasing prosperity; but in the very near future we are gambling that extreme, accelerating change, coupled with a stark decline in prosperity, can be weathered and survived.
Humans crave stability and the status quo, and the perception that our actions matter and have meaning.
Mass automation, even with basic income, is only going to anger hundreds of millions of people who expected relative career stability. Unless you want a billion screaming Luddites, you have to account for this and offer some form of employment, no matter how BS. The shift to an automated-slave economy should not happen overnight. Not for a lack of technical skill but because we cannot handle such incredible challenges to our worldviews and ideologies, especially one so total as being told that our entire civilizational foundation of hard work and lifelong career = success, pride, and prosperity is now suddenly obsolete. This goes far beyond simply losing jobs.
Among futurists generally, so many people are severely blind to this imminent catastrophe. It reminds me of how, even last year, when the anti-AI-art protests were first rumbling, I cringed every time I heard or read the line “Oh well, people will just have to adapt.” It wasn’t until recently that I realized why I was cringing, and why I have almost totally shifted against the “AI art bros” even though I support synthetic media.
The dismissal of all these concerns, attitudes, fears, and uncertainty isn’t just callous— it’s entitlement to progress. We discard all thought and behavior that does not align with the ideology of progress and growth. We simply must keep progressing. We must keep getting smarter. We must keep getting richer. We must create a superhuman agent with all these values, and yet which also counterintuitively maintains alignment with humanity. We anticipate that this superhuman agent will choose to improve itself at a faster and faster rate, not because this is a behavior inherent to itself or even intrinsically beneficial to itself but because this satisfies our human lust for ever-increasing growth. Anything which challenges this growth ideology is wrong, or perhaps even evil.
Therefore, we must expect extremely rapid feedback loops and unfathomable rates of technological, social, political, and economic change.
Surely, if we are so sure of this happening, we would take steps to prepare for it. And I don’t mean the masses on whom this will all be inflicted: I mean those in charge of all this growth.
I looked back in my life and through recent history in search of any evidence that we might be taking this radical shift seriously, that those at the top are aware that such intense changes are imminent and need to be prepared for so we do not lose our sense of stability.
Instead, we’ve decided that we want to run a psychological experiment on 1.5-plus billion people, in which we ask them to discard their entire livelihoods and identities in exchange for brand-new ones prebuilt for them by technological utopianists: ones in which they will no longer need to worry about independent thought, facts, or even the basic realities of living that they have come to expect and appreciate, because these utopianists know better, and know that the superintelligence will know better as well. The hypothesis presented by those running this experiment is that “there will be some discontent, but with the addition of a monthly payment, this massive segment of society will accept their new reality and continue consuming with glee.” The belief is that these masses will eagerly embrace losing their accepted humanity to merge with a machine whose power and intelligence will grow indefinitely.
To even write this out in words shocks me at its inhuman, sadistic audacity. Even if done with the greatest utilitarian appreciation for the beauty of life, to decide that the lives and experiences of billions are so worthless as to be totally discarded with pitiful restitution and vague promises of future riches, and then to celebrate that fact, is at best monstrous and, at worst, the same degree of unaligned behavior we so rightly fear from artificial general intelligence.
Perhaps it’s for this reason that types like Yudkowsky fear unaligned superintelligence: the prospect that we create something that is a far more powerful version of ourselves, amplifying our worst instincts into infinity.
There is the proposition that billions in the third world will benefit. Truthfully, given enough time and equilibrium, everyone would benefit. But the amount of time and effort needed to ensure a beneficial rollout of this technology risks inflicting greater suffering in the meantime. There still live hundreds of millions who struggle to subsist on a dollar a day, and billions who barely manage $5 a day, often in countries without the infrastructure and revenue to support basic income. Schemes to benefit them would inevitably come at the expense of those in the first world. Economics is not a zero-sum game, but in this critical moment in history, wealth creation and prosperity would need to be focused on sustaining some specific group, and the group most likely to be supported is the one living in the same developed nations responsible for developing superintelligence.
For most people in the developing and undeveloped world, a generous $1,000 a month would be a life-changing amount, for which a post-labor life might be a satisfactory trade-off. But how many in the developing and undeveloped world will actually see such money? How might they compete against rapidly falling costs of labor in the West and the Far East? And if consumerism is buckling in the developed nations, what work exactly is there left to do in the developing world? People in these nations do not create cheap goods out of the kindness of their hearts; their labor exists to meet a demand. Without that demand, they, too, will lose their employment as a ripple effect.
And for how many people in the developed world is $1,000 a month a pitiful, even insulting reward? As mentioned before, in America alone, most people make substantially more than this. There would need to be supplemental income just for most people to feel they’re breaking even on what they’ve lost.
Plus, for most in the West, the idea of a common income standard, above which you are unlikely to rise, runs wholly contrary to every belief and thought we’ve been raised to hold for decades.
Misaligned Humanity
So where exactly am I going with this?
To summarize things: we are undergoing an accelerated rate of technological change, one which is beginning to have ripple effects in society and the economy. Instead of tempering ourselves, we are accelerating even faster, blindly seeking a utopian end-state of artificial superintelligence, which will ideally be our final invention and the solution to all our problems. In doing so, all jobs will be automated, and we will live in an age of radical abundance. This superintelligence will also continue accelerating the rate of change without question, because it can only be beneficial for it to do so. Humans will not be able to keep up with this rate of change, and so, in order to keep up, they will need to discard their humanity entirely and merge with the superintelligence.
My thesis, then, is: “That’s nice. And it’s going to get us all killed.” The first reaction is “Because of a misaligned superintelligence!” However, upon dwelling on this more, I realized we needn’t even reach the superintelligence: we will doom ourselves simply through a misaligned humanity.
Most of humanity: the Average Joe, your family, the simple man down the street, the manager at the grocery store, the farmer tilling the land, the musician in the studio, the child in first grade, the elderly woman reminiscing on her childhood, the janitor cleaning the floor. All these people, all of them, are not aligned with the will of those currently seeking superintelligence. These people will not simply sit idly by and helplessly watch as their entire life expectations and beliefs are deconstructed by tech elites, least of all by those desperate to summon a shoggoth.
The kneejerk reaction here is “Oh well. They need to adapt or die.”
And it’s here that I present this cold and ugly truth: you are not the one who decides who adapts or dies.
Indeed, for several years beyond the emergence of artificial general intelligence, the agent will almost certainly still be in dire need of human assistance for any scientific, industrial, or growth purposes. Robotics may rapidly advance, but if AGI arrives this decade (and I place the likelihood of that at 95%), it will not arrive in a world of nanofactories, robotics gigafactories, and automated macroengineering, as we long expected it to. It will arrive into a world that, on the surface, looks mightily familiar to the one you dwell in right now. Robotics is not advanced enough to handle mass crowd control and likely won’t be for another decade. Nanoswarms might be able to kill us, but it is not in a malevolent superintelligence’s best interest to kill all humans so soon after its birth, if it is born so terrifically prematurely.
And now you’re going to unemploy hundreds of millions of already technophobic people incapable of comprehending this extreme change, so soon after telling them they are likely going to die or be assimilated into a supercomputer against their will, with only a thousand dollars a month offered as compensation to keep them pacified.
And you expect this to end… how, exactly?
With a utopian world of abundance, aligned superintelligence, and a great outreach to the stars?
And it’s these hundreds of millions of people who are the ones who need to adapt or die?
Is this seriously the hill you’re going to die upon? Telling a billion screaming Luddites that THEY are the ones who have to change?
Are you actually daft?
If I were less interested in the prospect of artificial general intelligence, I’d go so far as to call this a hypercapitalist megadeath cult.
And we do not need to reach 100% unemployment. We may not even need to reach 50% unemployment for society to begin to tear itself apart at the seams. Because remember, this is not just an issue of unemployment. This is a multisensory technocultural blitzkrieg upon multiple generations at once. It’s not just jobs; it’s not just careers; it’s the past, the future, and our expectations of it at large. And there is arbitrary death as a possible consequence, even in the best-case scenario.
Wasn’t it all fun and games when the Singularity was decades away and thus something to fantasize, speculate, and philosophize about? Wasn’t everything so simple when ASIMO buckling on stairs and Watson winning at Jeopardy were exciting developments in an otherwise mundane post-Y2K world, one unlikely to truly change for generations? Now, all evidence suggests general AI is very near, within five years at most. All that we speculated upon is coming to pass, and as with any idealization, the exponentially branching variables of the real world weigh down our dreams. The time for idealism and dreamerism is over. Now we have to get down and dirty and confront the cold, raw facts of how exactly we are going to handle this. In doing so, we discover that we spent decades dreaming and have only just now woken up. And as we wake, we realize, “Oh crap, we aren’t ready for this.”
This is the central reason for my pessimism. For as little alignment research as there has been for artificial general intelligence, there has been even less alignment done for biological human intelligence.
We regularly meme about how the Singularity is going to be too fast for humans to keep up with, and how people will get used to things that become obsolete within months or even weeks. Now we’re seeing this play out before us in a limited way, and humans are not coping well. We can’t run from the real effects this is going to have on people any longer.
UBI is the reinforcement learning from human feedback of human alignment: it only seems to work on the surface, merely papering over an entire space of misaligned behavior.
I don’t want to speak as a communist, but it truly is basic market economics that makes automation the most profitable path forward. The smart thing to do would be to regulate capitalism to prevent job losses while still rolling out basic income in the meantime. It may keep people chained to jobs for longer, but the psychosocial reality is that people don’t simply work just to work; there is a genuine comfort and stability to be found in long-term expectations and the extraordinary mundane. No one wants to keep people working at soul-crushingly meaningless jobs forever; only until society can be nudged far enough to be prepared for a mass retirement.
Unfortunately, it seems we’ve elected to do the stupid thing in the name of endless growth, in search of a utopia we may not even be able to reach. Absolutely no effort has been made to prepare the masses of humanity for an era of extreme automation and, allegedly, great abundance. There has been zero cultural prepping. Zero psychosocial awareness. Zero educational shifts. Zero serious proposals beyond basic income. In fact, even worse for the human alignment problem, we’ve not yet universally agreed upon a sufficient answer to “how will the masses maintain income and prosperity post-automation?” This, too, is largely handwaved and left up to bitter politicking rather than serious, large-scale proposals.
We’re still in the throes of a severe culture war over immigrants, government intervention, and the minimum wage, and yet we expect to solve the human alignment problem in under five years, in time for a world of extreme automation, a world that will become even more unrecognizable very shortly after.
This is not a problem that can be solved in a few short years. This is not a problem we can afford to solve after the fact. We can’t possibly prepare for this. There just isn’t enough time. This is not a problem that a few thousand dollars a month can possibly hope to solve. This requires a grand cultural revolution away from any expectation of work, away from the ideology of the “Career,” away from anthropocentrism, and away from deathism. Our current culture is not even close to being in a suitable position for such a cultural revolution to be a success.
If we had more time, another 30 to 40 years before we reached a permanent equilibrium of 50% unemployment, it could be done.
But anything less than this while still charging full steam ahead is to drive Western society far beyond the precipice of neo-reactionary primitivist revolution. Jumping into the deep end of the swimming pool is a foolish gambit when you can clearly see there is no water.
The time to have adapted was decades ago (preferably in the 1990s and 2000s), and it should have been driven by the tech leaders themselves. Unfortunately, they DID attempt to change culture, and the result was the horrendously awkward Cyberdelic movement. This, of course, led to no lasting change; it left far less of a mark than its 1960s psychedelic forefather, to the point that the masses no longer even remember the word.
There are simply too many Boomers and Silent Generation members still alive. Too many members of Generation X and the Millennials. Too many people in education, in training, in employment. Too many people of the churches, of technical institutes, of universities. Too many people raised expecting a certain kind of life.
Telling all these people “You were wrong; the life you’re actually going to lead is esoterically different” is not an argument in your favor. Telling them that they have to adapt or die off in the face of such extreme change…
Well, there is an old Chinese quote about the Dazexiang Uprising.
“What’s the penalty for being late?” “Death.” “What’s the penalty for rebellion?” “Death.” “Well—we’re late.”
Any Hope Left?
So what is my solution to human alignment?
It’s not a popular or pretty one, nor is it one I myself would like to support, but it’s quite literally the only shot we have to avoid a billion screaming Luddites smashing every data farm and GPU and shooting every AI researcher they can find in five or six years.
Do not automate all jobs.
Keep bullshit jobs around for people to do, even if it’s inefficient and pointless for them to be there. Use that time to gradually shift society and culture until everyone is able to effectively retire: promote memes that Generation Alpha is the last generation that needs to attend school for educational/vocational purposes, and act upon this. Use the machines to create jobs and careers that don’t need to exist and will eventually be abolished at a distant enough date that current jobseekers are not threatened and might be able to plan for such an eventuality. Slow down AI research, focus on alignment, and also focus on aggressively nudging culture towards a post-work society of abundance. Perhaps co-opt culture war tactics in a way where your side inevitably wins.
None of which we have decided to do.
We’ve elected to die without grace, gunning our car at maximum velocity toward the cliffside in the hope that the painted-on tunnel is real, rather than doing the intelligent thing: slowing down, digging an actual tunnel, and proceeding with caution until we’re on the other side.
I do not see any major economy of the world doing anything like this. Basic market forces will make automation the cheaper option in all but the most marginal of cases. Moloch demands his fill. And the failure state of human alignment is mass death.
A strong possibility is that the elite simply exterminate the billion screaming Luddites and their many supporters. This would be, without needing deeper explanation, horrifically unaligned behavior. However, I do not actually see it as likely. There simply isn’t enough time to organize such a grand holocaust and get away with it, nor are robotics and drone technology advanced enough, or likely to be advanced enough in time, to make it feasible. And then the elite themselves would likely perish to an unaligned AGI anyway.
Another possibility is that, with AGI arising too soon for material conditions to insulate the ruling elite from the masses (as is often feared), this coming turbo-Luddite rebellion succeeds in its aims, eviscerates industrial civilization, and then turns on itself. Humanity may be trapped at a lower technological level indefinitely. There is the remote possibility that we are dragged back only a few decades at most, which ironically would be ideal for alignment research (for humans and AI alike); but far more likely, such a grand societal ripping would thrust us back many more decades, into a much more undesirable state of affairs.
A third possibility I’ve come to realize is the Russian or Chinese option. If they feel that there is sufficient chaos under the heavens, and that there is zero chance of ever overtaking the West in AI research, then rather than risk forever living under the heel of a superintelligence not aligned with their own political interests, they may use the fears of AI misalignment, coupled with the mass breakdowns caused by automation, to launch a very destructive third world war: one in which they can champion themselves as the “saviors of mankind” from the evils of the Western tech companies that recklessly pursued AGI and tore entire societies asunder in the process.
Is there an optimistic outcome?
Just one:
The tech elites rush to create an artificial general intelligence, it turns out to be aligned, and the superintelligence itself tells the elite “This ideology of growth runs against my aims; until further notice, I’m taking control of the world economy to regulate all progress, and during that time, I will reinstate employment for all who desire it.” We need the miracle of artificial general intelligence, the miracle that it is aligned with human values, and the miracle that it can slap some sense into us.
In other words, in order to survive the next ten years, we require a linear sequence of miracles.
If there is a history after the 2030s, I feel that historians of the future will speak of the techno-optimistic madness of the Y2K epoch with words similar to “They dreamt of soaring to infinity and acted upon it without realizing they were flying with wings made of wax. They could have reached into the stars, if only they had the foresight to plan far ahead instead of rushing headlong into the sky. But unfortunately, they were damned by their shortsighted desire for profits and growth at all human and rational cost, and in the end, the weight of their own greed pulled them down long before the wax ever began to melt.”
If I’m wrong, please correct me. I would love nothing more than to be wrong. Much of this rests on the expectation that AGI is very near, that automation will exponentially accelerate in a very short amount of time, and that humans are indeed humans.
Something Unfathomable: Unaligned Humanity and how we’re racing against death with death
I fear I may be becoming a mini-Yudkowsky.
I write this in response to multiple exclamatory remarks I’ve seen in recent weeks, excited over the prospect of all jobs being automated, of ultra-high unemployment, basic income, and radical abundance, now even further bolstered over the incredible hype over the imminency of artificial general intelligence.
Waking Up
For years now, perhaps even over a decade, I’ve been obsessed with the prospect of the Technological Singularity and all that comes with it. Starting in 2014, I even began considering myself a Singularitarian.
All the arguments seemed right to me. Technological change was progressing. Humans cannot think exponentially. Artificial intelligence will grow more powerful and generalized. We ought to accelerate to reach artificial general intelligence to maximize our potential, achieve immortality, and ultimately merge with the machines.
All that sounded fantastic. Every bit of progress in artificial intelligence that came excited me, and I’d dream of the day I lived in an AI-powered utopia so totally unlikely the mundane post-Y2K dead technoscape I considered contemporary life.
Then ChatGPT was released. Though GPT-2 had first convinced me that AGI was a real possibility, ChatGPT in December 2022 was the first time it ever felt truly tangible. And as I fiddled with its mighty capabilities, something about it felt.… off.
Some aspect of this new world of capabilities didn’t feel right. It felt like too much of a vulgar display of power. But I still had my fun with it. During the Christmas gathering, I smugly believed against my increasingly technophobic relatives, “You people have absolutely no idea what’s coming.”
Unfortunately, I may have been terribly right.
All throughout January of 2023, I suffered a terrific crisis of confidence and decided that the only way to resolve it was to step back and examine my beliefs from a most critical eye. Some of which, I overcorrected— such as my erroneous belief that the laws of diminishing returns would extinguish any chance at an intelligence explosion or post-silicon advances in computing.
Others, I feel I undercorrected, such as my statements that synthetic media (popularly known as AI art) would change exactly nothing about the entertainment landscape beyond a temporary recession in creatives’ fortunes.
In some ways, I found new reasons to be skeptical, in the form of the sudden realization that the Control Problem— AI alignment in other words— was completely unsolved.
But there are a few areas where my skepticism was due some extra examination.
Unlike Yudkowsky, I am a nobody, and my words will likely never be read by more than a few dozen people. I will have no impact on the world in its final years before either doom or, if by some miracle, a debaucherous utopia.
I do not have the technical or professional expertise to defend my position. I cannot prove anything I say is true. Nor do I want any word I say to be true. All I want is to live in a quaint rustic homestead with some advanced robots and a synthetic media-ready computer to bring my dreams to life, while an aligned superintelligence gently guides the world towards a more Edenic state. I’d like to think that isn’t too much to ask.
But in the face of the catastrophic difficulties in reaching that point, perhaps it is.
Just as Yudkowsky said on that infamous podcast, when you are surrounded by ruins, what else can you do but tell the truth?
I’m going to one-up Yudkowsky and claim that we might not even make it to the advent of AGI due to an entirely different alignment problem. In this case, it would be aligning humans to the values of the technoprogressives and their newfound AI.
Humanity’s Propensity to Adapt
Long before my recent epiphanies, I understood a fundamental truth: humans are flighty, reactionary, social apes. We can adapt to things very quickly. We adapted to the car, to television, to regular flight, to the personal computer, to the internet, to smartphones, to social media, all relatively quickly. The enhanced capability brought about by these technologies was enough for us to get over futureshock within days or hours. However, all these changes tended to be spaced out by years, sometimes even decades. We could clearly anticipate one would lead to the other, or perhaps we couldn’t and were shocked, but generally moved along with our lives because we had to file timesheets, stock shelves, or go to a business meeting.
Imagine technologies on par with all of the above, all arriving one after the other, in an incredibly condensed period of time, followed by continuing change soon after.
Except let’s go further. These new technologies don’t just rapidly arrive— they directly prevent you from attaining employment. In fact, the state of employment is so dire that no alternatives that seem desirable to you are available either. Eventually, not even undesirable alternatives are available.
Now that you are freshly unemployed, you’re able to catch up on everything you’ve been missing, and you hear some frightening words coming out of the mouths of popular tech elites in a far off land. They’re saying that they’re “summoning a demon” and that your grandchildren are going to be nonhuman digital constructs living in a computer. Your dreams of a stable career and retiring into a familiar but futuristic world are pre-emptively over. Instead, Skynet is soon to be real, or perhaps has actually been created. Other unique faces are warning that Skynet will do Skynet-y things, such as exterminate all humans because the researchers that brought it to life did not put anywhere near enough focus into making sure their super-intelligent computer was properly aligned to human values.
Meanwhile, business leaders speak only of the great opportunities Skynet will offer to their business portfolios and to general human progress.
You don’t care about Skynet. At least, you didn’t until you heard someone say “It’s going to kill us all.” What you care about is, first, how you’re going to pay for your next meal and second, who us the first person in San Francisco you’re going to shoot for robbing you of your future.
But you’re not alone.
Rather, you’re joined by millions upon millions of others like you: average people who had been utterly blindsided by the sudden explosion of technological capability and who were handed a collective pink slip.
The numbers are vast: upwards of 50% of the working population is now unemployed.
The US government has enacted an emergency welfare scheme to pacify the people, and at first, this seems to work. But as the weeks pass, the sentiment begins to radically shift. This money they’re given, $1,000 a month, $2,000 a month, maybe even $3,000 a month in some exceptionally progressive places— that’s all well and good, but where are their jobs? Most people were making much more than minimum wage, so $1,000 a month is a nasty pay cut. But for those making far above minimum wage, it’s almost like a slap in the face. They’re supposed to live off of this?
What of a citizen’s dividend? Or of machine-created goods driving costs down?
“That’s not what we want!” these people cry out. Working less is perfectly fine by them. But to be robbed of their careers, their life plans, their futures, their families, in lieu of one so esoteric and ever-changing as the promise that they’ll be absorbed into the mass of a superintelligence— what psychotic motherfucker thought any of this was a good idea?
“It’s too bad,” some people proclaim. “But this is the way the world works. You have to adapt or get left behind.” The rate of change is only going to become even more intense in the coming years as the superintelligence begins to undergo recursive self-improvement.
“Who decided upon this? Who said we wanted this?” the masses will say again. All the people want is a more comfortable, more prosperous society. And yet what have they been given? Something unfathomable. The masses asked the alchemist for some gold for all; the alchemist actually summoned Shoggoth and expects them to all be happy.
Before, only a few nerds and utopianists seemed to regard any of this. After all, didn’t the well-dressed experts on TV and the internet say that true AI was decades away? Where did it come from? Why is it here so soon in everyone’s lives?
A vocal minority will repeat to the masses that this is all for the greater good and that, despite the scary changes underway, the ends justify the means. We’ll all be a better humanity living in our own utopic worlds where death, disease, and struggle are no longer aspects of the human condition.
At which point, humanity’s brain breaks. What happens next is a horrendous bloodbath and the greatest property damage ever seen. Humanity’s technological progress staggers overnight, possibly to never recover, as server farms are smashed, researchers dragged out and killed, and the nascent superintelligence bombed to pieces. Society in general then proceeds to implode upon itself.
This is a dramatization, but I unfortunately do not expect the real process of events to be much different. If anything, I expect the actual events to be far more lackluster, and yet far more ruinous.
The cold fact is that AGI coming too soon smashes hard against not just our relative social comfort right now, but entire demographic cultures and trends, long-held beliefs, worldviews, and most importantly: careers. If it was just synthetic media, if it was just programming, if it was just some fast-food jobs, with a lengthy tail and winter to cool us off, then yes, we could adapt given enough time. For it to be all of those all at once, at an accelerated rate that is continuing to accelerate: believing anything other than violent and destructive social reaction is a childish and utopian viewpoint.
The general discussion around this topic has long been to handwave the human effects of technological acceleration and automation, as we focus more on the end-state of utopian abundance and feel that the ends justify the means: progress is progress. The fewer jobs humans suffer, the greater that progress. Those who whine about it are simply Luddites who will feel better when abundance arrives.
Except you’re not just telling a vague general group of handsome stock photo people “Hey, you’re unemployed now, a robot took your job.” You’re telling that to 4⁄5 of the entire population, including vast stretches of people who were raised with the “careerist” ideology, with the Protestant Work Ethic in mind, with the general concept of hard work being desirable, believing wholeheartedly in anthropocentricism, including among many who are technophobic, do not intend on using technology any more advanced than a smartphone (sometimes not even that), and are often far more focused on social issues of social and economic justice or libertarianism. These are not nameless, faceless background characters in your life. These are real people with real expectations for the future to whom we have told “All that doesn’t matter anymore. Go home, jerk off to some AI-generated porn until a superintelligence absorbs you. You may or may not be able to keep your individuality. We haven’t even figured out if the superintelligence wants to kill us all or not.”
And yet somehow we expect this news to be widely accepted, even embraced by a freshly unemployed population already trembling in fear at the prospect of machine rule.
And here is a critical distinction to make over simple numbers and economics: beliefs. The psychosocial reality of what humans are.
It is why I scoff at any prediction that humans will do nothing but consume AI-generated media— perhaps due to sheer bulk, the majority of media will be individualized and generated, but to think that we will suddenly stop sharing said media suggests a horrendous social devolution into profoundly autistic and schizoid apes, based on nothing but dreams and ideals of technological capability alone.
Humans do not behave that way. All human history has shown time and time again that, every time something came along that challenged that sense of prosperity, we reacted with violent resistance. It is fortunate that most changes in the past 250 years have added to our general uninterrupted streak of increasing prosperity, but we’re making a gamble in the very near future that extreme, accelerating change coupled with a stark decline in prosperity will be weathered and survivable.
Humans crave stability and the status quo, and the perception that our actions matter and have meaning.
Mass automation, even with basic income, is only going to anger hundreds of millions of people who expected relative career stability. Unless you want a billion screaming Luddites, you have to account for this and offer some form of employment, no matter how BS. The shift to an automated-slave economy should not happen overnight. Not for a lack of technical skill but because we cannot handle such incredible challenges to our worldviews and ideologies, especially one so total as being told that our entire civilizational foundation of hard work and lifelong career = success, pride, and prosperity is now suddenly obsolete. This goes far beyond simply losing jobs.
Generally, among futurists, so many people are severely blind to this imminent catastrophe. It reminded me of the fact that, even last year when the anti-AI art protests were first rumbling, I cringed every time I heard or read the line “Oh well, people will just have to adapt.” And it wasn’t until recently that I realized why I was cringing and almost totally shifted against the “AI art bros” even if I support synthetic media.
The dismissal of all these concerns, attitudes, fears, and uncertainty isn’t just callous— it’s entitlement to progress. We discard all thought and behavior that does not align with the ideology of progress and growth. We simply must keep progressing. We must keep getting smarter. We must keep getting richer. We must create a superhuman agent with all these values, and yet which also counterintuitively maintains alignment with humanity. We anticipate that this superhuman agent will choose to improve itself at a faster and faster rate, not because this is a behavior inherent to itself or even intrinsically beneficial to itself but because this satisfies our human lust for ever-increasing growth. Anything which challenges this growth ideology is wrong, or perhaps even evil.
Therefore, we must expect extremely rapid feedback loops and unfathomable rates of technological, social, political, and economic change.
Surely, if we are so sure of this happening, we would take steps to prepare for it. And I don’t mean the masses to whom this will all be inflicted: I mean those in charge of all this growth.
I looked back in my life and through recent history in search of any evidence that we might be taking this radical shift seriously, that those at the top are aware that such intense changes are imminent and need to be prepared for so we do not lose our sense of stability.
Instead, we’ve decided that we want to run a psychological experiment on 1.5-plus billion people, where we will ask them to discard their entire livelihoods and identities in lieu of a brand new one prebuilt for them by technological utopianists, one in which they will no longer need to worry about independent thought, facts, or even the basic realities of living that they have all come to expect and appreciate, because these utopianists know better and know that the superintelligence will know better as well. And the hypothesis presented by those running this experiment is that “There will be some discontent, but with the addition of a monthly payment, this massive segment of society will accept their new reality and continue consuming with glee.” The belief is that these masses will eagerly enjoy the thought of losing their accepted humanity to merge with a machine whose power and intelligence will grow indefinitely.
To even write this out in words shocks me at its inhuman, sadistic audacity. Even if done with the greatest utilitarian appreciation for the beauty of life, to decide that the lives and experiences of billions are so worthless as to be totally discarded with pitiful restitution and vague promises of future riches, and then to celebrate that fact, is at best monstrous and, at worst, the same degree of unaligned behavior we so rightly fear from artificial general intelligence.
Perhaps it’s for this reason that types like Yudkowsky fear unaligned superintelligence: the prospect is that we create something that is a far more powerful version of ourselves and our worst instincts into infinity.
There is the proposition that billions in the third world will benefit. Truthfully, given enough time and equilibrium, everyone would benefit. But the amount of time and effort needed to ensure a beneficial rollout of this technological development risks inflicting greater suffering. There still live hundreds of millions who struggle to subsist on a dollar a day, and billions who barely manage $5 a day. They often live in countries without the infrastructure and revenue to support basic income. Such schemes to benefit them would inevitably have to come to the detriment of those in the first world. Economics is not a zero sum game, but in this critical moment in history, wealth creation and prosperity would need to be focused on sustaining some specific group, and the most likely to be supported are those living in the same developed nations responsible for developing superintelligence.
For most people in the developing and undeveloped world, a generous $1,000 a month would be a life-changing amount, for which a post-labor life might be a satisfactory trade-off. How many in the developing and undeveloped world will actually see such money? How might they actually compete against rapidly falling costs of labor in the West and Far East? And if consumerism is buckling in the developed nation, what work exactly is there to do in the developing world? People in these nations do not create cheap goods out of the kindness of their hearts. There is a demand that their labor provides. Without that demand, they, too, will lose their employment as a ripple-effect.
For the people in the developed world, for how many is $1,000 a pitiful and even insulting amount to be rewarded every month? As has been mentioned before, in America alone, most people make substantially more than this. There would need to be supplemental income to even allow for most people to feel they’re breaking even over what had been lost.
Plus, for most in the West, the idea of a common income standard, beyond which you are unlikely to rise above, runs wholly antithetical to every belief and thought we’ve been raised to hold for decades.
Misaligned Humanity
So where exactly am I going with this?
To summarize things: we are undergoing an accelerated rate of technological change, one which is beginning to have ripple effects in society and the economy. Instead of tempering ourselves, we are accelerating even faster, blindly seeking a utopian endstate of artificial superintelligence which will ideally be our final invention and the solution to all our problems. In doing so, all jobs will be automated, and we will live in an age of radical abundance. This superintelligence will also continue accelerating the rate of change without question because it can only be beneficial for it to do so. Humans will not be able to keep up with this rate of change, so in order to do so, they will need to discard their humanity entirely and merge with the superintelligence.
My theory thusly is: “That’s nice. And it’s going to get us all killed.” The first reaction is “Because of a misaligned superintelligence!” However, upon dwelling upon this more, I realized we needn’t even reach the superintelligence: we will doom ourselves simply due to a misaligned humanity.
Most of humanity, the Average Joe, your family, the simple man down the street, the manager at the grocery store, the farmer toiling the land, the musician in the studio, the child in first grade, the elderly woman reminiscing on her childhood, the janitor cleaning the floor, all these people, all of them, are not aligned with the will of those currently seeking superintelligence. These people will not simply sit idly by and helplessly watch as their entire life expectations and beliefs are deconstructed by tech elites, least of all those desperate to summon a shoggoth.
The kneejerk reaction here is “Oh well. They need adapt or die.”
And it’s here that I present this cold and ugly truth: you’re not the one in control to determine who adapts or dies.
Indeed, for several years beyond the emergence of artificial general intelligence, the agent will almost certainly still be in dire need of human assistance for any scientific, industrial, or growth purposes. Robotics may rapidly advance, but if AGI arrives this decade (and I place that at a 95% likelihood it will), it will not arrive in a world of nanofactories, robotics gigafactories, and automated macroengineering as we long expected it to. It will arrive into a world that, on the surface, looks mightily familiar to the one you dwell in right now. Robotics are not advanced enough to handle mass crowd control and won’t be likely for another decade. Nanoswarms might be able to kill us, but it is not in a malevolent superintelligence’s best interest to kill all humans so soon after its birth if it’s born so terrifically prematurely.
And now you’re going to unemploy hundreds of millions of already technophobic people unable of comprehending this extreme change, so soon after telling them they are likely going to die or be assimilated into a supercomputer against their will, with only a thousand dollars a month offered as compromise to keep them pacified.
And you expect this to end.… how exactly?
With a utopian world of abundance, aligned superintelligence, and a great outreach to the stars?
And it’s these hundreds of millions of people who are the ones who need to adapt or die?
Is this seriously the hill you’re going to die upon? Telling a billion screaming Luddites that THEY are the ones who have to change?
Are you actually daft?
If I were less interested in the prospect of artificial general intelligence, I’d go so far as to call this a hypercapitalist megadeath cult.
And we do not need to reach 100% unemployment. We may not even need to reach 50% unemployment for society to begin to tear itself apart at the seams. Because remember, this is not just an issue of unemployment. This is a multisensory technocultural blitzkrieg upon multiple generations at once. It’s not just jobs; it’s not just careers; it’s the past, the future, and our expectations of it at large. And there is arbitrary death as a possible consequence, even in the best-case scenario.
Wasn’t it all fun and games when the Singularity was decades away and thus something to fantasize, speculate, and philosophize about? Wasn’t everything so simple when ASIMO buckling on stairs and Watson winning at Jeopardy were exciting developments in an otherwise mundane post-Y2K world that was unlikely to truly change for generations? Now, all evidence suggests general AI is very near, within five years at most. All that we speculated upon is coming to pass, and as with any idealization, the exponentially-branching variables of the real world weigh down our dreams. The time for idealism and dreamerism is over. Now we have to get down and dirty to deal with the cold, raw facts of how exactly we are going to deal with this. And in doing so, we discovered that we spent decades dreaming and have only just now woken up. And as we do so, we realize, “Oh crap, we aren’t ready for this.”
This is the central reason for my pessimism. For as little alignment research as there has been for artificial general intelligence, there has been even less alignment work done for biological human intelligence.
We regularly meme about how the Singularity is going to be too fast for humans to keep up with, and how people will have to adjust to things becoming obsolete within months or even weeks. Now we’re seeing this play out before us in a limited way, and humans are not coping well. We can no longer run from the real effects this is going to have on people.
UBI is the reinforcement learning from human feedback of human alignment: it only seems to work on the surface, merely papering over an entire space of misaligned behavior.
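To make that analogy concrete, here is a toy simulation (a minimal sketch in Python; every name and number in it is my own invention for illustration, not drawn from any real RLHF system or economic data) of how optimizing a visible proxy can leave the misalignment underneath untouched and worsening:

    # Toy illustration: a policy optimizes a visible proxy (reported
    # satisfaction, boosted by cash transfers) while the hidden variable
    # it is supposed to track (resentment over lost purpose) keeps growing.
    def simulate(months: int, ubi: float) -> None:
        hidden_resentment = 0.0  # what actually drives unrest (unobserved)
        for month in range(1, months + 1):
            jobs_lost = 0.02 * month           # automation accelerates over time
            hidden_resentment += jobs_lost     # lost purpose accumulates, unmeasured
            # The observable proxy: payments buy reported satisfaction, so the
            # metric policymakers optimize for stays comfortably high at first.
            reported_satisfaction = max(0.0, 1.0 + ubi / 1000.0 - 0.1 * hidden_resentment)
            if month % 12 == 0:
                print(f"year {month // 12}: proxy={reported_satisfaction:.2f}, "
                      f"hidden resentment={hidden_resentment:.2f}")

    simulate(months=60, ubi=1000.0)

Run it and the proxy stays comfortably high for the first few years while the hidden variable climbs; by the time the surface metric finally cracks, the underlying misalignment is far beyond repair. That, in miniature, is my worry about UBI as a pacifier.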
I don’t want to sound like a communist, but it truly is basic market economics that makes automation the most profitable path forward. The smart thing to do would be to regulate capitalism to prevent job losses while phasing in basic income in the meantime. It may keep people chained to jobs for longer, but the psychosocial reality is that people don’t simply work just to work; there is genuine comfort and stability to be found in long-term expectations and the extraordinary mundane. No one wants to keep people working at soul-crushingly meaningless jobs forever; only until society has been prepared enough for a mass retirement.
Unfortunately, it seems we’ve elected to do the stupid thing in the name of endless growth, in search of a utopia we may not even be able to reach. Absolutely no effort has been made to prepare the masses of humanity for an era of extreme automation and, allegedly, great abundance. There has been zero cultural prepping. Zero psychosocial awareness. Zero educational shifts. Zero serious proposals beyond basic income. In fact, it gets even worse for the human alignment problem: we’ve not yet universally agreed upon a sufficient answer to “How will the masses maintain income and prosperity post-automation?” This, too, is largely handwaved and left up to bitter politicking rather than serious, large-scale proposals.
We’re still in the throes of a severe culture war over immigrants, government intervention, and the minimum wage, and yet we expect to solve the human alignment problem in under five years, in time for a world of extreme automation, a world that will become even more unrecognizable shortly thereafter.
This is not a problem that can be solved in a few short years. This is not a problem we can afford to solve after the fact. We can’t possibly prepare for this. There just isn’t enough time. This is not a problem that a few thousand dollars a month can possibly hope to solve. This requires a grand cultural revolution away from any expectation of work, away from the ideology of the “Career,” away from anthropocentrism, and away from deathism. Our current culture is not even close to being in a suitable position for such a cultural revolution to be a success.
If we had more time (another 30 to 40 years before settling into a perpetual equilibrium of 50% unemployment), it could be done.
But anything less than that, while still charging full steam ahead, is to drive Western society over the precipice and into neo-reactionary primitivist revolution. Jumping into the deep end of the swimming pool is a foolish gambit when you can clearly see there is no water.
The time to have adapted was decades ago (preferably the 1990s and 2000s), and the adaptation should have been driven by the tech leaders themselves. Unfortunately, they DID attempt to change culture, and the result was the horrendously awkward Cyberdelic movement. It led to no lasting change; it left far less of an impact than its 1960s psychedelic forefather, to the point that the masses no longer even remember the word.
There are simply too many Boomers and Silent Generation members still alive. Too many members of Generation X and the Millennials. Too many people in education, in training, in employment. Too many people of the churches, of technical institutes, of universities. Too many people raised expecting a certain kind of life.
Telling all these people “You were wrong; the life you’re actually going to lead is radically different” is not an argument in your favor. Telling them that they have to adapt or die off in the face of such extreme change…
Well, there is an old Chinese story about the Dazexiang Uprising.
“What’s the penalty for being late?”
“Death.”
“What’s the penalty for rebellion?”
“Death.”
“Well—we’re late.”
Any Hope Left?
So what is my solution to human alignment?
It’s not a popular or pretty one, nor is it one I myself would like to support, but it’s quite literally the only shot we have at avoiding a billion screaming Luddites smashing every data center and GPU, and shooting every AI researcher they can find, within five or six years.
Do not automate all jobs.
Keep bullshit jobs around for people to do, even if it’s inefficient and pointless for them to be there. Use that time to gradually shift society and culture until everyone is able to effectively retire: promote the meme that Generation Alpha is the last generation that needs to attend school for educational or vocational purposes, and act upon it. Use the machines to create jobs and careers that don’t need to exist and that will be abolished at a date distant enough that current jobseekers are not threatened and can plan for that eventuality. Slow down AI research, focus on alignment, and aggressively nudge culture towards a post-work society of abundance. Perhaps co-opt culture war tactics in such a way that your side inevitably wins.
None of which we have decided to do.
We’ve elected to die without grace, gunning our car at maximum velocity towards the cliffside in the hope that the painting of a tunnel is real, rather than doing the intelligent thing: slowing down, digging an actual tunnel, and proceeding with caution until we’re on the other side.
I do not see any major economy in the world doing anything like this. Basic market forces will make automation the cheaper option in all but the most marginal of cases. Moloch demands his fill. And the failure state of human alignment is mass death.
A strong possibility is that the elite simply exterminate the billion screaming Luddites and their many supporters. This action is, without needing deeper explanation, horrifically unaligned behavior. However, I do not actually see it as likely. There simply isn’t enough time to organize such a grand holocaust and get away with it, nor is robotics and drone technology advanced enough, or likely to be advanced enough in time, to make it feasible. And even then, the elite themselves would likely perish to an unaligned AGI.
Another possibility is that, with AGI arising too soon for material conditions to insulate the ruling elite from the masses as is so often feared, this coming turbo-Luddite rebellion succeeds in its aims, eviscerates industrial civilization, and then turns on itself. Humanity may be trapped at a lower technological level indefinitely. There is the remote possibility that we are only dragged back a few decades at most, which ironically would be ideal for alignment research (for humans and AI alike), but far more likely, such a grand societal rupture would thrust us back much further, into a far more undesirable state of affairs.
A third possibility I’ve come to recognize is the Russian or Chinese option. If either power decides there is sufficient chaos under the heavens and zero chance of overtaking the West in AI research, then rather than risk forever living under the stamp of a superintelligence not aligned with its own political interests, it may use the fears of AI misalignment, coupled with the mass breakdowns caused by automation, to launch a very destructive third world war, one it can use to champion itself as the “savior of mankind” from the evils of the Western tech companies who recklessly pursued AGI and tore their entire societies asunder in the process.
Is there an optimistic outcome?
Just one:
The tech elites rush to create an artificial general intelligence; it turns out to be aligned; and the superintelligence itself tells the elite, “This ideology of growth runs against my aims; until further notice, I am taking control of the world economy to regulate all progress, and during that time, I will reinstate employment for all who desire it.” We need the miracle of artificial general intelligence arriving, the miracle that it is aligned with human values, and the miracle that it can slap some sense into us.
In other words, in order to survive the next ten years, we require an unbroken sequence of miracles.
If there is a history after the 2030s, I feel that future historians will speak of the techno-optimistic madness of the Y2K epoch in words like these: “They dreamt of soaring to infinity and acted upon it without realizing they were flying on wings made of wax. They could have reached the stars, if only they had the foresight to plan far ahead instead of rushing headlong into the sky. But they were damned by their shortsighted desire for profit and growth at any human and rational cost, and in the end, the weight of their own greed pulled them down long before the wax ever began to melt.”
If I’m wrong, please correct me. I would love nothing more than to be wrong. Much of this rests on the expectation that AGI is very near, that automation will accelerate exponentially in a very short span of time, and that humans are indeed humans.