So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn’t consider this, interesting point!
(I also commented on substack)
This applies even in a non-vulnerable world, though more weakly, because the incentives for peaceful cooperation between different value systems are much weaker in an AGI world.
Considerations:
offense/defense balance (if offense wins very hard, it’s harder to let everyone do their own thing)
tunability-of-AGI-power / implementability of the harm principle (if you can give everyone an AGI that reliably follows the rule “don’t let these people harm other people”, then you can safely give that AGI to everyone, and they can build their planets however they like but can’t death-ray anyone else’s planets)
I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.
Some of my general worries with singleton worlds are:
humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Yeah, I’m not a fan of singleton worlds, and tend towards multipolar worlds. It’s just that it might involve a loss of a lot of life in the power-struggles around AGI.
On governing the commons, I’d say Elinor Ostrom’s observations are derivable from the folk theorems of game theory, which basically say that almost any outcome can be sustained as a Nash equilibrium (with a few conditions that depend on the specific theorem) if the game is repeated and players have to keep dealing with each other.
The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom’s solutions are much less effective.
More here:
https://en.wikipedia.org/wiki/Folk_theorem_(game_theory)
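To make the mechanism concrete, here’s a minimal sketch (my own toy example, not from the linked article, with arbitrary illustrative payoffs): in a repeated prisoner’s dilemma with grim-trigger strategies, cooperation is only an equilibrium when the continuation probability is high enough; if AGI means players no longer expect to need each other in the future, that probability effectively drops and cooperation unravels.

```python
# Toy illustration of the folk-theorem intuition: in a repeated prisoner's
# dilemma, grim trigger sustains cooperation only if players expect to keep
# dealing with each other (high continuation probability delta).
# Payoff values are arbitrary illustrative numbers (T > C > P > S).

T = 5.0  # temptation: defect while the other cooperates
C = 3.0  # reward for mutual cooperation
P = 1.0  # punishment for mutual defection
S = 0.0  # sucker: cooperate while the other defects

def cooperation_sustainable(delta: float) -> bool:
    """Grim trigger supports cooperation iff the one-shot gain from defecting
    is outweighed by the discounted loss of all future cooperation:
        C / (1 - delta) >= T + delta * P / (1 - delta)
    which rearranges to delta >= (T - C) / (T - P)."""
    return delta >= (T - C) / (T - P)

if __name__ == "__main__":
    for delta in (0.9, 0.6, 0.3):
        status = "holds" if cooperation_sustainable(delta) else "unravels"
        print(f"continuation probability {delta}: cooperation {status}")
    # With these payoffs the threshold is (5-3)/(5-1) = 0.5, so cooperation
    # unravels once players no longer expect to need each other much; that is
    # the sense in which AGI could undercut Ostrom-style arrangements.
```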