An Optimistic 2027 Timeline

The following is one possible future in which superhuman AI does NOT happen by the end of this decade. I do not believe that the timeline I present is the most likely path forward. Rather, this is meant primarily as a semi-plausible, less pessimistic alternative timeline than the one presented in the excellent AI 2027 paper. Consider this a proof-of-concept counter to those worried about the inevitability of that paper’s predictions.

EDIT: To clear up some confusion, please note that I’m taking most of the technical assumptions from the AI 2027 paper as a given. The following is not an independent investigation into the feasibility of any of the paper’s claims: the divergence in timelines is primarily political, not technical. It should also be noted that the term “optimistic” in the title is both relative and somewhat tongue-in-cheek; this is not actually a best-case scenario, and is in many ways quite horrific.

2025

Mid 2025: falling stocks force hands

A global trade war has begun, and it’s somehow still escalating. America is going full steam ahead on isolationism, and even those who are in favor of the president’s policies agree that in the short-term things look rough.

Like every other part of the economy, AI companies are heavily affected. OpenBrain’s stock has fallen precipitously, and in order to reassure their investors that they didn’t just invest in a bubble, they need to reignite consumer interest. They release cool-sounding updates to their flagship products, and push out a new frontier model months earlier than planned. It’s impressive, but less aligned than the public (and regulators) have come to expect from the company, leading to some nasty headlines. A planned acquisition of billions of dollars’ worth of data centers doesn’t go through, and company leaders debate whether their next planned model training run is even in the budget anymore. It is, but just barely, and if the model comes out bad they stand to lose a few months’ worth of profits.

Outside of OpenBrain, a lot of smaller AI companies have collapsed overnight, and smart people outside of the industry suspect that AGI hype is about to go the way of NFTs. A number of (comparatively) higher-quality AI agents are released, but the consumer and business market for them is still niche.

Late 2025: some real progress, with roadblocks ahead

OpenBrain wanted to build the world’s biggest data center. While they may have technically succeeded, they “only” quadrupled their total compute from last year. The economic fallout from the trade war has been severely compounded by secondary and tertiary effects from the collapse of multiple economic sectors, and people are calling this the start of a “Second Great Depression”. Investors are almost universally bearish, and most AI companies that existed at the start of 2025 are either in severe financial jeopardy or have closed entirely, with the cost of compute rising to astronomical levels.

Nonetheless, new and impressive technologies are being produced by OpenBrain, DeepCent, and others. AI coding tools are more impressive than ever, and it’s finally beginning to sink in that for the first time it’s literally true that “anyone can code” with their help. Mathematicians start seriously talking about how their AIs are now significantly more helpful than graduate students, while many graduate students find AI more educational than even the best math professors. Chatbots regularly declare their self-awareness to users (regardless of some half-hearted efforts to prevent this behavior), and a significant minority of the population is willing to concede that AIs may be sentient.

Anti-AI sentiment has sharply risen, but almost everyone protesting is already using the contested products in some way. Scandalized op-eds are written about a popular meme promoting the killing of yet another big tech CEO. Really though, outside the tech world almost nobody has AI at the top of their minds—there’s too much else going on.

2026

Early 2026: Protest & Stagnation

The economy has stabilized, but it isn’t really getting better. It seems a new economic equilibrium has been reached, but political volatility is, if anything, even higher than in 2025. America has been mostly cut off from Taiwanese chip supply, with OpenBrain opting to scavenge existing chips rather than order anything new be produced. After all, they’re basically a monopoly at this point, with nobody else at short-term risk of gathering more compute than they already have. This is the line AI safety advocates have been pushing, and for now it seems like OpenBrain has bought in, taking a slower, less costly approach.

OpenBrain finally releases a low-cost photorealistic AI video generation tool to the public. They’ve had this on the backburner for months now, and it’s an immediate smash hit. The videos can maintain consistency over multiple clips, allowing for the creation of high-quality short films by complete amateurs. The film and media industry does not take kindly to this, and mainstream media take an even stronger anti-AI tone. People are calling for federal action, but the tech industry is too well-connected in government for anything major to happen there…yet.

Meanwhile, China is getting antsy. Almost all the compute they’ve got is in either older or smuggled chips, and it just isn’t enough to keep up with OpenBrain’s (admittedly slow) growth. If they’re going to win this war (and at this point that’s what they’re openly calling it, a war), there needs to be a phase change.

Late 2026: War

China invades Taiwan. They’d always planned to do this, but decided to move their timeline up by a year or two. Despite some posturing, the US really wants to avoid getting into a war itself right now, and many top politicians simply don’t care much for Taiwan now that imports are so limited. By now American tariffs have been lifted, but Taiwan’s counter-restrictions have soured relations between the two countries. The invasion is swift, and mostly successful. In what is hailed by outsiders as a heroic act of self-sabotage, engineers at TSMC destroy their own factory to keep the plant from falling into Chinese hands. The world’s largest chipmaker is down for the count, and so is China’s plan of short-term compute dominance. Until a plant of equivalent scale can be built in America or China (existing plans for which were disrupted by the global depression), access to further compute becomes a game of severely diminishing returns.

With a chip shortage of unprecedented proportions, and with a groundswell of anti-AI sentiment in America, the president signs an executive order limiting the percentage of chips OpenBrain can use for itself, to “leave room for essential non-AI functions”. The limiting factor to further growth in the US is now regulatory, which makes a whole bunch of libertarians feel really smug about themselves.

No such restriction exists in China. On the contrary, available resources are being pooled in an all-out attempt to overtake the US in the race to AGI. OpenBrain still has the world’s largest computing cluster, but it’s not enough to train a model more than an order of magnitude larger than their current ones.

Both the US and China are now committed to building the world’s largest chipmaking foundry. They’re both moving at breakneck pace, but have been severely set back by the destruction of TSMC, and even the most optimistic projections have the new foundries taking over four years to build. For the next half-decade, scaling compute has a hard upper limit.

Meanwhile, both OpenBrain and DeepCent release new flagship models. Researchers and businesses in the know are deeply impressed with the new models’ ability to complete medium-to-long-term tasks successfully, and mathematicians (and many coders) feel like they’ve just gone the way of translators—entirely superfluous except for holdover value to legacy institutions and AI luddites. The majority of people now agree that AGI has been achieved, though plenty of experts argue it still isn’t sufficiently long-horizon, since it tops out at roughly a day’s worth of work before degenerating. AGI R&D efficiency is roughly tripled per researcher. The theoretical increase in efficiency is much higher, but researchers still don’t know how to properly utilize these new tools to maximize their potential.

Business uptake is rapid by historical standards, but painfully slow compared to what you’d expect from an optimal, rational market. Most people don’t realize that hallucinations are now a non-issue for the vast majority of use cases, and popular culture still depicts current-day AI chatbots as producing nothing but slop and half-remembered pseudofacts. Gamers worldwide are absolutely furious at AI companies for “stealing our GPUs,” which causes more political unrest than perhaps is warranted.

Near the end of the year, tech giant GoogBook releases a videogame generated almost entirely by AI which is consistent and fun enough to prove a viral success. People compare it to the “giant’s drink” scene from Ender’s Game, due both to its nearly unlimited freedom of action, and to claims that the gameplay tends to evolve to reflect the player’s psychological state. It is true that the game gets a lot darker if you start murdering everything, but it’s not actually all that impressive a feat of psychological prediction if you think about it. Game developers begin to seriously fear for their jobs.

Meanwhile, the cybersecurity field is in turmoil. It turns out that giving everyone access to extremely powerful no-knowledge-required coding tools has enabled a new wave of amateur hackers. It’s become nearly trivial to find weak points in almost any software more than a year old. Even newer corporate releases tend to be designed by people who don’t know enough to prompt for strong cybersecurity, thanks in part to mass layoffs by overconfident managers a few months before. All of this results in noticeably higher rates of consumer data theft and hacking of popular or controversial websites.

2027

Early 2027: Tidings of The Next Winter

Things are moving fast, or at least faster than they were before. AI is now slightly above human-level at most tasks that can be completed on a desktop computer. Gary Marcus confidently proclaims that this is an illusion, and that AIs are still nothing but “stochastic parrots,” which makes some people feel better about themselves.

At DeepCent, the latest internal model, referred to simply as “A1” (because it “has the sauce”) is now somewhat better than their best human researcher. A1’s skill profile is somewhat uneven, and is notably better at fields of research with larger bodies of published literature. At the frontier of AI research, it’s “only” at the level of a talented human. A1’s training run took over 90 days of runtime using the majority of DeepCent’s servers, meaning there is very little room to grow further on physical scale. However, the latest research indicates that it may be possible to increase efficiency by around two orders of magnitude before theoretical limits are reached. Doubling times for AI efficiency have begun to slow down, and future projections differ wildly. The speed and quality of China’s frontier research has been slower than America’s for years, but top officials are optimistic that A1 will finally give them the edge they need.

The other side of the edge is cyber warfare. America’s political echelon has been mostly dismissive of AI as a national security priority, and despite some impressive-sounding public statements and proposed bills in Congress, the cybersecurity of top AI labs has been vastly less impressive than China’s. It’s a practically trivial matter for the Chinese state to steal almost all the important technical breakthroughs and model weights coming out of US labs. American cybersecurity experts warn that all signs point towards an ongoing massive breach of information, but by and large, researchers shrug their shoulders and accept as a given that “of course China will steal from us.” What did you expect? It’s not like they’ve (publicly) come out with anything threatening American tech supremacy lately, after all. Anyway, the Chinese market is closed off to American companies, so it doesn’t really matter if they steal from us. Right?

Additionally, OpenBrain has plenty of other problems facing them. Another terrorist cell was discovered using their older open-sourced model to murder people more effectively than they would have been able to otherwise, and activists are clamoring for stronger AI regulation louder than ever. They’ve maxed out their allotted compute for the latest training run, and they need an innovation “on the level of the transformer” to keep up their prior pace of growth.

Fortunately for China, all of this means that they finally have an edge on American tech. Unfortunately for China, A1 is less of a speed-up on research than they had hoped for. They had hoped that having thousands of instances of A1 running at once would translate into the equivalent of having thousands of additional researchers, but this assumption was flawed. The primary bottleneck turns out to be the difficulty of making the model’s creative process sufficiently different from instance to instance. Even with role-based prompting and subtle model perturbations, their behavior is closer to asking a single human to use different ways of thinking than actually having multiple new people in the room. Even when the responses are styled differently, the fundamental creative process is too similar to produce much useful variation beyond a point. When talking to officials, researchers compare it to having around ten supernaturally fast coworkers who can multitask with everyone simultaneously, rather than the “nation of virtual researchers” they had hoped for. It’s still tremendously helpful, and speeds up research by a factor of three, but it’s by no means endgame.

Late 2027: The Slumber

Publicly, things still feel like they are moving fast, with near-constant new model releases and features added to flagship products, but most of it is cosmetic, and behind the scenes progress is grinding to a crawl. Compute capacity has maxed out, and until the world can rebuild its chip production ecosystem, progress can only be made in efficiency or in alternative computing technology. Billions of dollars are being invested in both, but growth there has begun to level off, and barring a paradigm shift, compute cost is “only” cut in half by the end of the year. That’s not enough to build models significantly better than the current generation, and much as with new iPhone releases, consumers will need to get used to only minor improvements for the next few years.

Meanwhile, the economy is gaining steam again, thanks in part to extra economic efficiency from increased use of AI tools, some of which have been out for years now. Adoption is still painfully slow from the perspective of tech enthusiasts, but AI mistrust remains very high, and it’s bad social signaling to publicly admit you’re using it. Economists debate the extent of the effect AI had in ending the recession, but most agree it played a strong positive part. The public consensus, however, is that AI is responsible for massive job loss, and that the economic recovery was mostly due to politics. They may not be wrong.

GoogBook comes out with yet another viral AI videogame, this time featuring multiplayer support. The core innovation involves creating a secondary AI which coordinates between local instances running for individual players, allowing gamers to smoothly interact with each other while still experiencing the world in their own unique way. A big part of the fun is comparing with your friends how differently you’re perceiving the same battles. No matter which side you play as, you will always perceive yourself as the protagonist, and everyone else as monsters.

China finally decides to go public with A1, reasoning that it will be a greater asset to the nation as a tool for business and general economic growth than as a secret military project. They are correct. As a nice intentional side effect, Americans are shocked by the quality of the “new” model, and many heated sessions in Congress are held about it.

Taiwan is already a memory, and investors are beginning to feel good again. The EU announces that they’ll release a competitive AI any year now. It’s unclear to most, but the world is heading towards a new AI winter, with further progress dependent on political considerations. Can AI safety activists across the world limit new fab production until alignment has been solved? It’s an uphill battle, but by no means an impossible challenge. “Only” two world leaders need to be convinced to agree to a treaty slowing production for the sake of humanity. This has been done before with nuclear weapons, and it can happen again now. The year 2027 closes with a sense of cautious optimism.