2024 in AI predictions

Follow-up to: 2023 in AI predictions.

Here I collect some AI predictions made in 2024. It’s not very systematic; it’s a convenience sample, mostly from browsing Twitter/X. I prefer including predictions that are more specific/testable. I’m planning to make these posts yearly, checking in on predictions whose dates have expired. Feel free to add references to more predictions made in 2024 in the comments. (Thanks especially to @tsarnick and @AISafetyMemes for posting about a lot of these.)

Predictions about 2024

I’ll review predictions from previous posts that are about 2024.

the gears to ascension: “Hard problem of alignment is going to hit us like a train in 3 to 12 months at the same time some specific capabilities breakthroughs people have been working on for the entire history of ML finally start working now that they have a weak AGI to apply to, and suddenly critch’s stuff becomes super duper important to understand.” (conceded as false by author)

John Pressman: “6-12 month prediction (80%): The alignment problem as the core of AI X-Risk will become a historical artifact as it’s largely solved or on track to being solved in the eyes of most parties and arguments increasingly become about competition and misuse. Few switch sides.” (conceded as false by author)

Predictions made in 2024

December 2024

Gary Marcus:

Prediction: By end of 2024 we will see

  • 7-10 GPT-4 level models

  • No massive advance (no GPT-5, or disappointing GPT-5)

  • Price wars

  • Very little moat for anyone

  • No robust solution to hallucinations

  • Modest lasting corporate adoption

  • Modest profits, split 7-10 ways

(since 2024 has already ended, this can be evaluated to some degree; I would say he’s approximately correct regarding non-agent models, but o1 and o3 are big advances (“massive” is about right) and constitute more moat for OpenAI. He rates himself as 7/7.)

September 2025

teortaxesTex: “We can have effectively o3 level models fitting into 256 Gb VRAM by Q3 2025, running at >40 t/s. Basically it’s a matter of Liang and co. having the compute and the political will to train and upload r3 on Huggingface.”

October 2025

Jack Gallagher: “calling it now—there’s enough different promising candidates rn that I bet by this time next year we mostly don’t use Adam anymore.”

December 2025

Elon Musk: “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.” (I’ll repeat this for 2029)

Aidan McLau: “i think it’s likely (p=.6) that an o-series model solves a millennium prize math problem in 2025”

Victor Taelin: “I’m now willing to bet up to 100k (but no more than that, I’m not Musk lol) that HOC will have AGI by end of 2025… AGI defined as an algorithm capable of proving theorems in a proof assistant as competently as myself. (This is an objective way to say ‘codes like Taelin’.)”

April 2026

drdanponders: “It just dawned on me that ~humanoids in the house will be a thing very soon indeed. In under 2 years I bet. Simply another home appliance, saving you time, cooking for you, doing the chores, watching the house while you’re gone. I can see a robot of approximately this complexity and capabilities at around the price of a budget car even at launch.”

June 2026

Mira Murati: “in the next couple of years, we’re looking at PhD-level intelligence for specific tasks.”

August 2026

Dario Amodei: “In terms of someone looks at the model and even if you talk to it for an hour or so, it’s basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years. The main thing that would stop it would be if we hit certain safety thresholds and stuff like that.”

November 2026

William Bryk: “700 days until humans are no longer the top dogs at math in the known universe.”

February 2027

Daniel Kokotajlo: “I expect to need the money sometime in the next 3 years, because that’s about when we get to 50% chance of AGI.”

(thread includes more probabilities further down; see this thread for more context on AGI definitions)

December 2027

Leopold Aschenbrenner: “it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer.”

Gary Marcus vs. Miles Brundage:

If there exist AI systems that can perform 8 of the 10 tasks below by the end of 2027, as determined by our panel of judges, Gary will donate $2,000 to a charity of Miles’ choice; if AI can do fewer than 8, Miles will donate $20,000 to a charity of Gary’s choice.

...

  1. Watch a previously unseen mainstream movie (without reading reviews etc) and be able to follow plot twists and know when to laugh, and be able to summarize it without giving away any spoilers or making up anything that didn’t actually happen, and be able to answer questions like who are the characters? What are their conflicts and motivations? How did these things change? What was the plot twist?

  2. Similar to the above, be able to read new mainstream novels (without reading reviews etc) and reliably answer questions about plot, character, conflicts, motivations, etc, going beyond the literal text in ways that would be clear to ordinary people.

  3. Write engaging brief biographies and obituaries without obvious hallucinations that aren’t grounded in reliable sources.

  4. Learn and master the basics of almost any new video game within a few minutes or hours, and solve original puzzles in the alternate world of that video game.

  5. Write cogent, persuasive legal briefs without hallucinating any cases.

  6. Reliably construct bug-free code of more than 10,000 lines from natural language specification or by interactions with a non-expert user. [Gluing together code from existing libraries doesn’t count.]

  7. With little or no human involvement, write Pulitzer-caliber books, fiction and non-fiction.

  8. With little or no human involvement, write Oscar-caliber screenplays.

  9. With little or no human involvement, come up with paradigm-shifting, Nobel-caliber scientific discoveries.

  10. Take arbitrary proofs from the mathematical literature written in natural language and convert them into a symbolic form suitable for symbolic verification.

2028

Dario Amodei: “A.S.L. 4 is going to be more about, on the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people. So where we would worry that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with A.I. in a way that would give them a substantial advantage at the geopolitical level. And on the autonomy side, it’s various measures of these models are pretty close to being able to replicate and survive in the wild. So it feels maybe one step short of models that would, I think, raise truly existential questions… I think A.S.L. 4 could happen anywhere from 2025 to 2028.”

Shane Legg: “And so, yeah, I think there’s a 50% chance that we have AGI by 2028. Now, it’s just a 50% chance. I’m sure what’s going to happen is we’re going to get to 2029 and someone’s going to say, ‘Shane, you were wrong.’ Come on, I said 50% chance.”

Thomas Friedman: “And this election coincides with one of the greatest scientific turning points in human history: the birth of artificial general intelligence, or A.G.I., which is likely to emerge in the next four years and will require our next president to pull together a global coalition to productively, safely and compatibly govern computers that will soon have minds of their own superior to our own.”

Sabine Hossenfelder: “According to Aschenbrenner, by 2028, the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they’ll run at 100 gigawatts of power at a cost of a trillion dollars… Can you do that? Totally. Is it going to happen? You got to be kidding me.”

Vlad Tenev, on AI solving a Millennium Prize problem: 2028 for a human/AI hybrid solving a Millennium Prize problem

2029

Sam Altman, regarding AGI: “5 years, give or take, maybe slightly longer — but no one knows exactly when or what it will mean for society.”

(he says AGI “will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI — and the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all free, instant, and nearly perfect. Images, videos, campaign ideas? No problem.”)

Elon Musk: “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”

John Schulman in response to “What is your median timeline for when it replaces your job?”: “Maybe five years.”

Ray Kurzweil: “By 2029, computers will have human level intelligence”

jbetker: “In summary – we’ve basically solved building world models, have 2-3 years on system 2 thinking, and 1-2 years on embodiment. The latter two can be done concurrently. Once all of the ingredients have been built, we need to integrate them together and build the cycling algorithm I described above. I’d give that another 1-2 years. So my current estimate is 3-5 years for AGI. I’m leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcus’ of the world.”

Jeffrey Ladish: “Now it appears, if not obvious, quite likely that we’ll be able to train agents to exceed human strategic capabilities, across the board, this decade.”

Bindu Reddy: “We are at least 3-5 years away from automating software engineering.”

AISafetyMemes: “I repeat: in 1-5 years, if we’re still alive, I expect the biggest protests humanity has ever seen”

Jonathan Ross: “Prediction: AI will displace social drinking within 5 years. Just as alcohol is a social disinhibitor, like the Steve Martin movie Roxanne, people will use AI powered earbuds to help them socialize. At first we’ll view it as creepy, but it will quickly become superior to alcohol”

2030

Demis Hassabis: “I will say that when we started DeepMind back in 2010, we thought of it as a 20-year project. And I think we’re on track actually, which is kind of amazing for 20-year projects because usually they’re always 20 years away. That’s the joke about whatever, quantum, AI, take your pick. But I think we’re on track. So I wouldn’t be surprised if we had AGI-like systems within the next decade.”

Christopher Manning: “I do not believe human-level AI (artificial superintelligence, or the commonest sense of #AGI) is close at hand. AI has made breakthroughs, but the claim of AGI by 2030 is as laughable as claims of AGI by 1980 are in retrospect. Look how similar the rhetoric was in @LIFE in 1970!”

Dr_Singularity: “For the record, I’m currently at ~96% that ASI will be here by 2030. I’ve stopped saving for retirement and have increased my spending. Long term planning is pointless in a world when ASI (even AGI alone) is on the horizon.”

Greg Colbourn: “High chance AI will lead to human extinction before 2030 unless we act now”

2032

Eric Schmidt: “In the industry it is believed that somewhere around 5 years, no one knows exactly, the systems will begin to be able to write their own code, that is, they literally will take their code and make it better. And of course that’s recursive… It’s reasonable to expect that within 6-8 years from now… it will be possible to have a single system that is 80 or 90 percent of the ability of the expert in every field… ninety percent of the best physicist, ninety percent of the best chemist, ninety percent of the best artist.”

Roko Mijic: “AI will completely replace human programmers by 2045… 2032 seems more realistic”

2034

Mustafa Suleyman: “AI is a new digital species... To avoid existential risk, we should avoid: 1) Autonomy 2) Recursive self-improvement 3) Self-replication. We have a good 5 to 10 years before we’ll have to confront this.”

Joe Biden: “We will see more technological change, I argue, in the next 2-10 years, than we have in the last 50 years.”

2039

Ray Kurzweil: “When we get to the 2030s, nanobots will connect our brains to the cloud, just the way your phone does. It’ll expand intelligence a million-fold by 2045. That is the Singularity.”

Rob Bensinger: “I think [Leopold Aschenbrenner’s] arguments for this have a lot of holes, but he gets the basic point that superintelligence looks 5 or 15 years off rather than 50+.”

acidshill: “damn… i’d probably be pretty concerned about the trajectory of politics and culture if i wasn’t pretty confident that we’re all going to d*e in the next 15 years… but i am, so instead it’s just funny”

James Miller: “I don’t see how, absent the collapse of civilization, we don’t get a von Neumann level or above AI within 15 years.”

Aella: “for the record, im currently at ~70% that we’re all dead in 10-15 years from AI. i’ve stopped saving for retirement, and have increased my spending and the amount of long-term health risks im taking”

2044

Geoffrey Hinton: “Now, I think it’s quite likely that sometime in the next 20 years, these things will get smarter than us.”

Yann LeCun: “We’re nowhere near reaching human-level intelligence, let alone superintelligence. If we’re lucky, within a decade or so, maybe two.”