A Tentative Timeline of The Near Future (2022-2025) for Self-Accountability

I don’t currently have the energy to set up an account on a prediction market (yes, I am remarkably lazy in some ways), but I nonetheless want to have my personal predictions publicly available somewhere. This is primarily intuition-based, and not thought through deeply. I’ve noticed that a common failure mode when it comes to AI timelines involves making overly conservative predictions, so I’m going to try to push against my baseline intuitions here, and we’ll see if that helps anything. Feel free to question anything I put here, or put your own predictions in the comments!

Likely to happen before the end of…

2022

Note: this section was written in August 2022. I have elected not to alter it further, as the year nears its end.

  • Post written by AI with minimal prompting reaches 30+ upvotes on LessWrong

  • AI can regularly fool a randomly selected, non-expert judge (drawn from the American population) in a 10-minute Turing test. Not sure if this will be formally tested though, as I’m not aware of any orgs actively doing such testing.

2023

  • AI reaches human expert level at MATH benchmark.

  • Famous, well-respected public intellectual announces that they believe AI has reached sentience, deserves rights.

  • AI can now write a book with a mostly consistent plot, given roughly a page of prompting or less.

  • “Weak” AGI is announced that can play a randomly-selected game on Steam and get at least one achievement (in games which have Steam achievements enabled) most of the time. This assumes someone bothers to try this in particular; if not, it should still be obvious that it can be done.

  • AI proves an “interesting” result in mathematics (as judged by professional mathematicians) with minimal prompting.

  • A major lawsuit involving AI trained on “stolen artwork” makes the news.

  • It is unclear if artists are actually losing significant amounts of work to AI, but plenty of op-eds get written which assume that premise.

  • I move out of my parents’ house, possibly to LA for networking/​work reasons, possibly remaining in Virginia for community-building/​health reasons. In a possibly related move, I finally come out to my parents, which probably goes okay, albeit with a small chance of being disowned by my grandparents.

  • S.B.F. somehow remains a free, not-in-jail citizen, and continues to post questionable statements on Twitter.

  • Anti-EA sentiment mostly dies down, but anti “AI safety” sentiment goes way up. The term has become associated with (perceived) censorship, and right-wing politicians may begin to shun people who use “AI safety” in their public branding. AI governance orgs try to adjust by going for a “national security” public angle. [Note that that last bit is incredibly speculative, and depends on too many factors to predict with any real confidence.]

  • Multiple people land well-paying coding jobs and publicly post about how they “don’t actually know how to code” (beyond some really basic level), but have been outsourcing everything to AI.

2024

  • Assuming Donald Trump is not barred from running, he will become president. If not him, it’s an easy DeSantis win. (Biden is the Democratic nominee of course, assuming he’s still alive. As usual, the media pays no attention to third party candidates.)

  • AI writes a NYT best-selling book.

  • Twitter is still functional, and most users haven’t left the site. The workplace environment is kind of miserable though, and content moderation is still severely lacking (according to both sides of the culture war). Elon Musk is largely washed-up, and won’t be doing anything too groundbreaking with the remainder of his life (outside of politics perhaps, which I won’t rule out).

  • A minor celebrity or big-name journalist finally discovers Erik Sheader Smith’s video game The Endless Empty for the masterpiece it is, kickstarting its growth as a widely-hailed classic of the genre. My own game, Nepenthe, is largely forgotten by history, at least until someone discovers a certain easter egg, which is occasionally mentioned in 40+ minute long Youtube videos (you know the type).

  • The social media battle going on between those who firmly believe that AI is “just copy-pasting others’ work” and those who firmly believe that AI is sentient (and want to free it) has reached enough intensity that it gets brought up a few times in the political news cycle. At least one (possibly fringe) candidate pledges to “protect the rights of artists” through AI legislation.

  • Some new video game nobody has heard about before goes viral among schoolchildren, sparking a wave of incredibly forced puns across news headlines worldwide.

  • China’s economy has pretty much recovered from Covid. Other than that, hard to predict, but growth won’t look terribly different from the rest of the world.

  • Companies start actually replacing a significant number of customer support jobs with AI. Consumers generally report being more satisfied as a result, to many people’s annoyance.

  • Both teachers and students have the ability to easily automate online assignment work, leading to a growing number of absurdist scenarios where algorithms play meaningless educational games while teachers and students do their own thing, unwatching. This is objectively hilarious, but people get mad about it, leading to a poorly-managed escalation of the school surveillance arms race we already see today.

  • Another billionaire has emerged as an EA mega-donor.

2025

  • Self-driving cars (and drone delivery) never quite reach market saturation due to some consumer/​cultural pushback, but mostly due to legislation over “safety concerns,” even if self-driving is significantly safer than human-driven vehicles by this point. However, more and more self-driving-adjacent features are added into “normal” cars for “safety reasons,” so it’s become increasingly hard to delineate any sort of clear line between AI and human-operated vehicles.

  • I am in love.

  • A mass fatality event occurs due to what could plausibly be interpreted as “misaligned AI.” This sparks some countries to pass a whole bunch of AI-related laws, which are totally ignored by other countries. The AI safety community is split on whether the blame for what happened should be placed on misaligned AI, human error, or some complex mix of both. For whatever reason, a popular language model (developed for entertainment perhaps) publicly takes responsibility, despite seemingly having nothing to do with the incident. For the most part though, this is treated as just another tragedy in the news cycle, and is ignored by most people.

  • Someone who has at some point called themself “rationalist” or “EA” commits a serious crime with the intention of halting capabilities gains at some company or another. This is totally ineffective; everyone agrees that that was like, the least rational or altruistic action they could have possibly taken, but the media runs with exactly the sort of story you’d expect it to run with. This makes AI governance work a bit harder, and further dampens communications between safety and capabilities researchers. Overall though, things pretty much move on.

  • Despite having more funding than ever before, the quality and quantity of AI safety research seems...slightly lesser. It’s unclear what the exact cause is, though some point out that they’ve been having a harder time staying focused lately, what with [insert groundbreaking new technology here].

  • Youtube dies a horrible death in a totally unpredictable manner. The whole disaster is retroactively considered clearly inevitable by experts. There is much mourning and gnashing of teeth, but the memes, too, are bountiful.

  • The sun rises and the sun falls.

  • My friends and I are still alive.

2026+

  • Here be dragons...