I agree that the coordination games around nukes and AI are different, but I still think nukes make for a good analogy. Not the situation after multiple parties have developed them, though. Rather, I think the key element of the analogy is the game-changing, decisive strategic advantage that nukes/AI grant once one party develops them. There aren’t many other technologies with that property. (Maybe the bronze–iron age transition?)
Where the analogy breaks down is with AI safety. If we get AI safety wrong, there’s a risk of large, permanent negative consequences. A better analogy might be living near the end of WW2, but where building a nuclear bomb incorrectly ignites the atmosphere and destroys the world.
In either case, under this model, you end up with the following outcomes:
(A): Either party incorrectly develops the technology
(B): The other party successfully develops the technology
(C): My party successfully develops the technology
and generally a preference ordering of A<B<C, although a sufficiently cynical actor might have B<A<C.
If there’s a sufficiently shallow trade-off between speed of development and the risk of error, this can lead to a dollar-auction-like dynamic where each party is incentivized to trade a bit more risk in order to develop the technology first. In a symmetric situation without coordination, the Nash equilibrium is all parties advancing as quickly as possible to develop the technology and throwing caution to the wind.
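As a rough illustration of that dynamic, here is a minimal sketch with made-up utilities for outcomes A, B, and C and a toy assumption that whichever party accepts more risk finishes first (every number here is an assumption for illustration, not something from the argument above):

```python
# Toy model of the speed/risk race. Each party picks a risk level p in [0, 1];
# the party with the higher p is assumed to finish first, and its attempt fails
# catastrophically (outcome A) with probability p. Utilities are made up.

U_A = 0.0   # either party fails catastrophically
U_B = 1.0   # the other party develops the technology successfully
U_C = 2.0   # my party develops the technology successfully

EPS = 0.01  # extra risk it takes to overtake the current leader

def payoff_if_i_overtake(their_risk: float) -> float:
    """Expected utility of escalating to just above the other party's risk."""
    my_risk = min(their_risk + EPS, 1.0)
    return (1 - my_risk) * U_C + my_risk * U_A

def payoff_if_i_hold_back(their_risk: float) -> float:
    """Expected utility of letting them finish first at their current risk."""
    return (1 - their_risk) * U_B + their_risk * U_A

# Myopic best-response escalation: the trailing party keeps overtaking as long
# as overtaking beats holding back, treating the other side's risk as fixed.
risk = 0.0
while risk < 1.0 and payoff_if_i_overtake(risk) > payoff_if_i_hold_back(risk):
    risk = round(risk + EPS, 2)

print(f"escalation stops at risk level ~{risk:.2f}")
# With these utilities the race runs nearly all the way to maximal risk before
# overtaking stops paying -- "throwing caution to the wind". Note this treats
# the other party's risk level as fixed rather than responsive.
```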
In a symmetric situation without coordination, the Nash equilibrium is all parties advancing as quickly as possible to develop the technology and throwing caution to the wind.
Really? It seems like if I’ve raised my risk level to 99% and the other team has raised theirs to 98% (they are slightly ahead), one great option for me is to commit to not developing the technology and let the other team develop it at a risk level of ~1%. This gets me an expected utility of 0.99B + 0.01A, which is probably better than the 0.01C + 0.99A I would otherwise have gotten (assuming I developed the technology first).
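For concreteness, plugging purely illustrative utility values (say A = 0, B = 1, C = 2; my numbers, not the parent’s) into that comparison:

```python
# The 0.99/0.01 split and the ~1% and 99% risk levels are from the comment
# above; the utility values are illustrative assumptions.
U_A, U_B, U_C = 0.0, 1.0, 2.0

# I commit to not developing; the other team de-escalates to ~1% risk.
commit = 0.99 * U_B + 0.01 * U_A   # = 0.99

# I race ahead at a 99% risk level and (by assumption) finish first.
race = 0.01 * U_C + 0.99 * U_A     # = 0.02

print(f"commit: {commit:.2f}, race: {race:.2f}")  # committing comes out far ahead
```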
I am assuming common knowledge here, but I am not assuming coordination. See also the OpenAI Charter.
Interesting. I had the Nash equilibrium in mind, but it’s true that unlike a dollar auction, you can de-escalate, and when you take into account how your opponent will react to you changing your strategy, doing so becomes viable. But then you end up with something like a game of chicken, where ideally, you want to force your opponent to de-escalate first, as this tilts the outcomes toward option C rather than B.
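To sketch that chicken structure, here is a toy 2x2 payoff matrix with invented payoffs (roughly 2 for developing safely, 1 for the other side developing safely, ~0 for a catastrophic race, and an arbitrary middling value if both hold back), with a brute-force check for pure-strategy equilibria:

```python
# payoffs[my_action][their_action] = (my payoff, their payoff); all numbers invented.
ACTIONS = ["de-escalate", "escalate"]
payoffs = {
    "de-escalate": {"de-escalate": (1.2, 1.2), "escalate": (1.0, 2.0)},
    "escalate":    {"de-escalate": (2.0, 1.0), "escalate": (0.1, 0.1)},
}

def is_pure_nash(mine: str, theirs: str) -> bool:
    """True if neither side can gain by unilaterally switching actions."""
    my_pay, their_pay = payoffs[mine][theirs]
    best_for_me = all(payoffs[alt][theirs][0] <= my_pay for alt in ACTIONS)
    best_for_them = all(payoffs[mine][alt][1] <= their_pay for alt in ACTIONS)
    return best_for_me and best_for_them

print([(m, t) for m in ACTIONS for t in ACTIONS if is_pure_nash(m, t)])
# -> [('de-escalate', 'escalate'), ('escalate', 'de-escalate')]
```

Both pure equilibria have exactly one side backing down, which is why each side wants to look committed to escalating so that the other is the one that yields, tilting things toward C rather than B.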