What happens next?

Two years ago, I noted that we had clearly entered the era of general intelligence, but that it was “too soon” to expect widespread social impacts.

In the last 2 years, AI has gone from the green line to the orange line

In those 2 years, AI development has followed the best of the 3 paths I suggested (foom/​GPT-4-takes-my-job/​Slow Takeoff). Returns to scale seem to be delivering a steady ~15 IQ points/​year, and cutting-edge models appear to be largely a compute-intensive project that allows (relatively) safety-conscious leading labs to explore the new frontier while others reap the benefits with a ~1 year delay.
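That trend is easy to extrapolate as a back-of-the-envelope exercise. Here is a minimal sketch; the 2023 baseline of 100 ("roughly average human") and the assumption that the trend stays linear are mine, for illustration only:

```python
# Illustrative extrapolation of the ~15 IQ points/year trend described above.
# Both the 2023 baseline of 100 ("roughly average human") and the linearity
# of the trend are assumptions made for the sake of the sketch.
BASELINE_YEAR = 2023
BASELINE_IQ = 100       # assumed: roughly average-human performance in 2023
POINTS_PER_YEAR = 15    # the rate claimed above

def projected_iq(year: int) -> int:
    """Linearly extrapolate effective model 'IQ' for a given year."""
    return BASELINE_IQ + POINTS_PER_YEAR * (year - BASELINE_YEAR)

for year in (2025, 2027, 2029):
    print(year, projected_iq(year))
# 2025 -> 130, 2027 -> 160, 2029 -> 190: past the human range if the trend holds.
```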

Possibly the most important graph in the world right now

If I had to identify the 3 areas where GPT-3.5 was most lacking, they would have been:

  • reasoning

  • modeling the real world

  • learning on-the-fly

Of those three, reasoning (o3) is largely solved, and we have promising approaches to world modeling (Genie 2). Learning on-the-fly remains, but I expect some combination of sim2real and MuZero-style techniques to work here.

Hence, while in 2023 I wrote:

For any task that one of the large AI labs (DeepMind, OpenAI, Meta) is willing to invest sufficient resources in, they can obtain average-level human performance using current AI techniques.

I would now write:

Going forward, we should expect job automation to be determined primarily not by technical difficulty but by social resistance (or the lack thereof) to automating a given task.

Already, the first automated jobs are upon us: taxi driver, security guard, Amazon worker. Which jobs get automated next will be decided by a calculation that weighs (see the sketch after this list):

  • social desirability

  • lack of special interests/​collective bargaining (the dockworkers are never getting automated)

  • low risk (self-driving is maybe the exception that proves the rule here: despite being safer than human drivers for years, Waymo remains restricted to a few cities)
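To make the shape of that calculation concrete, here is a toy scoring heuristic over those three factors. Every weight and per-job score below is invented for illustration; the point is only how the factors trade off:

```python
# Toy scoring heuristic for "which job gets automated next".
# All per-job scores are made up for illustration only.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    social_desirability: float  # 0..1, public appetite for automating it
    resistance: float           # 0..1, special interests / bargaining power
    risk: float                 # 0..1, harm if the AI gets it wrong

def automation_pressure(job: Job) -> float:
    # Desirability pushes automation forward; resistance and risk hold it back.
    return job.social_desirability * (1 - job.resistance) * (1 - job.risk)

jobs = [
    Job("mall security guard", 0.9, 0.1, 0.2),
    Job("dockworker",          0.7, 0.9, 0.3),
    Job("brain surgeon",       0.5, 0.8, 0.95),
]
for job in sorted(jobs, key=automation_pressure, reverse=True):
    print(f"{job.name}: {automation_pressure(job):.2f}")
# Mall security guard ranks first and brain surgeon last,
# matching the two prototypes discussed next.
```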

Security guard at a mall is the prototypical “goes first” example, since:

  • everyone is in favor of more security

  • security guards at malls are not known for being good at collective bargaining

  • mall security guards have a flashlight (not a gun)

Brain surgeon is the prototypical “goes last” example:

  • a “human touch” is considered a key part of health care

  • doctors have strong regulatory protections limiting competition

  • literal lives are at stake, and medical malpractice is one of the most legally perilous areas imaginable

As AI proliferates across society, we have to simultaneously solve a bunch of problems:

  • What happens to all the people whose jobs are replaced?

  • The “AGI Race” between the US and China (I disagree with those who claim China is not racing)

  • Oh, by the way, AI is getting smarter faster than ever and we haven’t actually solved alignment yet

I suspect we have 2-4 years before one of these becomes a crisis. (And by crisis, I mean something everyone on Earth is talking about all the time, in the same sense that Covid-19 was a crisis).

The actual “tone” of the next few years could be very different depending on which of these crises hits first.

1. Jobs hits first. In this world, mass panic about unemployment leads a neo-Luddite movement to demand a halt to job automation. A “tax the machines” policy is implemented, and a protracted struggle over which jobs get automated and who benefits/​loses plays out across all of society. (~60%)

2. AGI Race hits first. In this world, the US and China find themselves at war (or on the brink of it). Even if the US lets Taiwan get swallowed, the West is still going to gear up for the next fight. This means building as much as possible, as fast as possible. (~20%)

3. Alignment hits first. Some kind of alignment catastrophe happens and the world must deal with it. Maybe it is fatal; maybe it is just some self-replicating GPT-worm. In this world, the focus is on some sort of global AI governance aimed at preventing a repeat of whatever the first Alignment Failure was (and, given the way gov’t works, totally ignoring other failure cases). (~10%)

4. Something wild. The singularity is supposed to be unpredictable. (~10%)