If we solve alignment, do we die anyway?

Epistemic status: I’m aware of good arguments that this scenario isn’t inevitable, but it still seems frighteningly likely even if we solve technical alignment. Clarifying this scenario seems important.

TL;DR: (edits in parentheses added two days after posting, based on discussion in the comments)

  1. If we solve alignment, it will probably be used to create AGI that follows human orders.

  2. If takeoff is slow-ish, a pivotal act that prevents more AGIs from being developed will be difficult (risky or bloody).

  3. If no pivotal act is performed, AGI proliferates. (It will soon be capable of recursive self-improvement (RSI).) This creates an n-way non-iterated Prisoner’s Dilemma in which the first to attack probably wins (by hiding and improving its intelligence and offensive capabilities at a fast exponential rate).

  4. Disaster results. (Extinction or permanent dystopia are possible if vicious humans order their AGI to attack first while better humans hope for peace.)

  5. (Edit later: After discussion and thought, the above seems so inevitable and obvious that the first group(s) to control AGI(s) will probably attempt a pivotal act before fully RSI-capable AGI proliferates, even if it’s risky.)

The first AGIs will probably be aligned to take orders

People in charge of AGI projects like power. And by definition, they like their own values somewhat better than the aggregate values of all of humanity. It also seems like there’s a pretty strong argument that Instruction-following AGI is easier than value aligned AGI. In the slow-ish takeoff we expect, this alignment target seems to allow for error-correcting alignment, in somewhat non-obvious ways. If this argument holds up even weakly, it will be an excuse for the people in charge to do what they want to do anyway.

I hope I’m wrong and value-aligned AGI is just as easy and likely. But it seems like wishful thinking at this point.

The first AGI probably won’t perform a pivotal act

In realistically slow takeoff scenarios, the AGI won’t be able to do anything like make nanobots to melt down GPUs. It would have to use more conventional methods, like software intrusion to sabotage existing projects, followed by elaborate monitoring to prevent new ones. Such a weak attempted pivotal act could fail, or could escalate to a nuclear conflict.

Second, the humans in charge of AGI may not have the chutzpah even to try such a thing. Taking over the world is not for the faint of heart. They might find the nerve after their increasingly intelligent AGI carefully explains to them the consequences of allowing AGI proliferation, or they might not. If the people in charge are a government, the odds of such an action go up, but so do the risks of escalation to nuclear war. Governments seem to be fairly risk-taking. Expecting governments not to grab world-changing power while they can seems naive, so a government grabbing that power is my median scenario.

So RSI-capable AGI may proliferate until a disaster occurs

If we solve alignment and create personal intent aligned AGI but nobody manages a pivotal act, I see a likely future world with an increasing number of AGIs capable of recursively self-improving. How long until someone tells their AGI to hide, self-improve, and take over?

Many people seem optimistic about this scenario. Perhaps network security can be improved with AGIs on the job. But AGIs can do an end-run around the entire system. Hide, set up self-replicating manufacturing (robotics is rapidly improving enough to allow this), use that to recursively self-improve your intelligence, and develop new offensive strategies and capabilities until you’ve got one that will work within an acceptable level of viciousness.[1]

If hiding in factories isn’t good enough, do your RSI manufacturing underground. If that’s not good enough, do it as far from Earth as necessary. Take over with as little violence as you can manage or as much as you need. Reboot a new civilization if that’s all you can manage while still acting before someone else does.

The first one to pull out all the stops probably wins. This looks all too much like a non-iterated Prisoner’s Dilemma with N players, and with N increasing.
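
To make the payoff logic concrete, here is a minimal toy model, a sketch under assumptions I am inventing for illustration: the payoff numbers and the per-actor strike probability are arbitrary, not estimates of anything real. The point is only structural: whenever an unopposed first strike pays better than peace, striking dominates waiting, and the expected value of waiting collapses as the number of players grows.

```python
# Toy n-player, one-shot "strike or wait" game (illustrative numbers only).
WIN = 10    # assumed payoff for striking first, unopposed
PEACE = 8   # assumed payoff if literally everyone waits
LOSE = 0    # assumed payoff if someone else strikes first
BRAWL = 1   # assumed payoff if several parties strike at once

def p_nobody_else_strikes(n_players: int, p_strike: float) -> float:
    """Chance that none of the other n-1 actors strikes this round."""
    return (1.0 - p_strike) ** (n_players - 1)

def expected_value(action: str, n_players: int, p_strike: float) -> float:
    quiet = p_nobody_else_strikes(n_players, p_strike)
    if action == "wait":
        # Waiting only pays off if every other actor also waits.
        return quiet * PEACE + (1.0 - quiet) * LOSE
    # Striking: unopposed win if nobody else moves, a messy brawl otherwise.
    return quiet * WIN + (1.0 - quiet) * BRAWL

for n in (2, 5, 20, 100):
    ev_wait = expected_value("wait", n, p_strike=0.05)
    ev_strike = expected_value("strike", n, p_strike=0.05)
    print(f"n={n:>3}:  EV(wait)={ev_wait:5.2f}   EV(strike)={ev_strike:5.2f}")
```

With these made-up numbers, striking beats waiting even at n=2, and the gap in favor of acting early only widens as n grows. Iterated play, enforceable agreements, or verifiable transparency could break that dominance, which is why the coordination structures discussed below matter.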

Counterarguments/Outs

For small numbers of AGIs with similar values among their wielders, a collective pivotal act could be performed. I place some hope here, particularly if political pressure is applied in advance to aim for this outcome, or if the AGIs come up with better cooperation structures and/or arguments than I have.

The nuclear MAD standoff with nonproliferation agreements is fairly similar to the scenario I’ve described. We’ve survived that so far, but with only nine participants to date.
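
A back-of-envelope illustration of why the participant count matters, using numbers I am assuming purely for illustration (a 1% chance per actor of breaking the standoff in some period, treated as independent): the chance the standoff holds shrinks roughly exponentially in the number of actors.

```python
# Rough illustration with assumed numbers; not a forecast.
def p_standoff_holds(n_actors: int, p_defect: float) -> float:
    """Chance nobody defects, if each actor defects independently with p_defect."""
    return (1.0 - p_defect) ** n_actors

for n in (9, 50, 500):
    print(f"{n:>3} actors: P(standoff holds) ~ {p_standoff_holds(n, 0.01):.3f}")
# 9 actors:   ~0.914  (roughly today's nuclear club)
# 50 actors:  ~0.605
# 500 actors: ~0.007
```

Unlike nuclear weapons, RSI-capable AGI has no obvious cap on the number of participants, which is the worry in the proliferation scenario above.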

One means of preventing AGI proliferation is universal surveillance by a coalition of loosely cooperative AGI (and their directors). That might be done without universal loss of privacy if a really good publicly encrypted system were used, as Steve Omohundro suggests, but I don’t know if that’s possible. If privacy can’t be preserved, this is not a nice outcome, but we probably shouldn’t ignore it.

The final counterargument is that, if this scenario does seem likely, and this opinion spreads, people will work harder to avoid it, making it less likely. This virtuous cycle is one reason I’m writing this post, including some of my worst fears.

Please convince me I’m wrong. Or make stronger arguments that this is right.

I think we can solve alignment, at least for personal-intent alignment, and particularly for the language model cognitive architectures that may well be our first AGI. But I’m not sure I want to keep helping with that project until I’ve resolved the likely consequences a little more. So give me a hand?

(Edit:) Conclusions after discussion

None of the suggestions in the comments seemed to me like workable ways to solve the problem.

I think we could survive an n-way multipolar human-controlled ASI scenario if n is small, like a handful of ASIs controlled by a few different governments. But not indefinitely, unless those ASIs come up with coordination strategies no human has yet thought of (or at least none argued convincingly enough that I’ve heard of them; this isn’t really my area, but nobody has pointed to any strong possibilities in the comments). I’d love more pointers to coordination strategies that could solve this problem.

So my conclusion is to hope that this scenario is so obviously bad and dangerous that it won’t be allowed to happen.

Basically, my hope is that this all becomes viscerally obvious to the first people who speak with a superhuman AGI and who think about global politics. I hope they’ll pull their shit together, as humans sometimes do when they’re motivated to actually solve hard problems.

I hope they’ll declare a global moratorium on AGI development and proliferation, and agree to share the benefits of their AGI/ASI broadly, in hopes that this gets other governments on board, at least on paper. They’d use their AGI to enforce that moratorium, with hopefully minimal force. Then they’d use their intent-aligned AGI to solve value alignment and launch a sovereign ASI before some sociopath(s) get hold of the reins of power and create a permanent dystopia of some sort.

More on this scenario in my reply below.

I’d love more help thinking about how likely the central premise is: that people will get their shit together once they’re staring real AGI in the face. And about what we can do now to encourage that.

Additional edit: Eli Tyre and Steve Byrnes have reached similar conclusions by somewhat different routes. More in a final footnote.[2]

  1. ^

    Some maybe-less-obvious approaches to takeover, in ascending order of effectiveness: drone- or missile-delivered explosive attacks on the individuals controlling, and the data centers housing, rival AGIs; social engineering/deepfakes to set off cascading nuclear launches and reprisals; dropping stuff from orbit or altering asteroid paths; making the sun go nova.

    The possibilities are limitless. It’s harder to stop explosions than to set them off by surprise. A superintelligence will think of all of these and much better options. Anything more subtle that preserves more of the first actor’s near-term winnings (Earth and humanity) is gravy. The only long-term prize goes to the most vicious.

  2. ^

    Eli Tyre reaches similar conclusions with a more systematic version of this logic in Unpacking the dynamics of AGI conflict that suggest the necessity of a premptive pivotal act:

    Overall, the need for a pivotal act depends on the following conjunction / disjunction.

    The equilibrium of conflict involving powerful AI systems lands on a technology / avenue of conflict which are (either offense dominant, or intelligence-advantage dominant) and can be developed and deployed inexpensively or quietly.

    Unfortunately, I think all three of these are very reasonable assumptions about the dynamics of AGI-fueled war. The key reason is that there is adverse selection on all of these axes.

    Steve Byrnes reaches similar conclusions in What does it take to defend the world against out-of-control AGIs?, but he focuses on near-term, fully vicious attacks from misaligned AGI, prior to fully hardening society and networks, centering on triggering full nuclear exchanges. I find this scenario less likely because I expect instruction-following alignment to mostly work on the technical level, and the first groups to control AGIs to avoid apocalyptic attacks.

    I have yet to find a detailed argument that addresses these scenarios and reaches opposite conclusions.