I’m especially interested to hear from people with long (i.e. 20+ year) timelines what they think the next 10 years will look like.
[EDIT: After pushback from Richard Ngo, I’ve agreed to stop talking about short and long timelines and just use numbers instead, e.g. “20+ year” and “<10 year” timelines. I recommend everyone do the same going forward.]
Ajeya (as of the time of her report at least) is such a person, and she gave some partial answers already:
By 2025, I think we could easily have decent AI personal assistants that autonomously compose emails and handle scheduling and routine shopping based on users’ desires, great AI copy-editors who help a lot with grammar and phrasing and even a little bit with argument construction or correctness, AIs who summarize bodies of information like conversations in meetings or textbooks and novels, AI customer service reps and telemarketers, great AI medical diagnosis, okay AI counseling/therapy, AI coding assistants who write simple code for humans and catch lots of subtle bugs not catchable by compilers / type checkers, even better AI assisted search that feels like asking a human research assistant to find relevant information for you, pretty good AI tutors, AIs that handle elements of logistics and route planning, AIs that increasingly handle short-timescale trading at hedge funds, AIs that help find good hyperparameter settings and temperature settings for training and sampling from other AIs, and so on.
But she thinks it’ll probably take till around 2050 for us to get transformative AI, and (I think?) AGI as well.
The hypothesis I’m testing here is that people with long timelines [EDIT: 20+ year timelines] nevertheless think that there’ll be lots of crazy exciting AI progress in the twenties, just nothing super dangerous or transformative. I’d like to get a clearer sense of how many people agree and what they think that progress will look like.
(This question is a complement to Bjartur's previous question.)
I think it is very likely that AGI will emerge within the next 20 years, unless something massively detrimental to AI development happens.
So, it’s the year 2050, and there is still no AGI. What happened?
Some possible scenarios:
We got lucky with the first COVID-19 variants being so benign and slow-spreading. Without that early warning (and our preparations), the Combined Plague could have destroyed civilization. We survived, but the disastrous pandemic has slowed progress in most fields, including AI research: by breaking supply chains, by reducing funding, and by claiming the lives of many researchers.
After the Google Nanofab Incident, the US / the EU / China decided that AI research must be strictly regulated. They then bullied the rest of the world into implementing similarly strict regulations. These days, it's easier to buy plutonium than to buy a TPU. AGI research is ongoing, but at a much slower pace.
The collapse of the main AI research hub, the US. The Second Civil War was caused mostly by elite overproduction, amplified by Chinese memetic warfare, the pandemic, and large-scale technological unemployment. The Neoluddite Palo Alto Lynching is still remembered as one of the bloodiest massacres in US history.
The first AGI secretly emerged, and decided to mostly leave the Earth alone, for whatever reason. The systems it left behind are preventing the emergence of another AGI (e.g. by compromising every GPU / TPU in subtle ways).
We have created an AGI, but it requires truly enormous computational resources to produce useful results (similarly to AIXI). For example, for a few million bucks' worth of compute, it can master Go. But to master cancer research, it would need thousands of years running on everything we have. We still seem to be decades away from this AGI making any difference.