We decided to restrict nuclear power to the point where it’s rare in order to prevent nuclear proliferation. We decided to ban biological weapons, almost fully successfully. We can ban things that there are strong local incentives to build and use, and I think that ignoring that, and claiming that slowing down or stopping can’t happen, is giving up on perhaps the most promising avenue for reducing existential risk from AI. (And this view helps accelerate race dynamics, so even if I didn’t think it was substantively wrong, I’d be confused as to why it’s useful to actively promote it as an idea.)
This is enforced by the USA, though, and the USA is a nuclear power with global reach.
No, it was and is a global treaty enforced multilaterally, alongside a number of test-ban and arms-reduction treaties. For each, there is a strong local incentive for states, including the US, to defect, but the existence of a treaty allows global cooperation.
With AGI, of course, we have strong reasons to think that the payoff matrix looks something like the following:
|              | They cooperate | They defect |
|--------------|----------------|-------------|
| We cooperate | (0, 0)         | (-∞, 5-∞)   |
| We defect    | (5-∞, -∞)      | (-∞, -∞)    |

(Payoffs listed as ours first, theirs second.)
So yes, there’s a local incentive to defect, but it’s a prisoner’s dilemma in which the payoff for defecting is 5 minus infinity, which is still negative infinity: the best case for defecting is identical to suicide.
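To spell out the arithmetic, here is a minimal sketch in Python (the specific numbers, including the illustrative +5 benefit for defecting, are just the assumed values from the matrix above, not calibrated estimates): once any outcome involving unaligned AGI is assigned a payoff of negative infinity, the finite benefit of defecting washes out.

```python
import math

NEG_INF = -math.inf
BENEFIT_OF_DEFECTING = 5  # illustrative finite gain from racing ahead

# Row player's view of the assumed payoff matrix above:
# any cell in which someone builds unaligned AGI is -inf for us.
payoff = {
    ("cooperate", "cooperate"): 0,
    ("cooperate", "defect"):    NEG_INF,                         # they build it anyway
    ("defect",    "cooperate"): BENEFIT_OF_DEFECTING + NEG_INF,  # 5 - inf is still -inf
    ("defect",    "defect"):    NEG_INF,
}

for my_move in ("cooperate", "defect"):
    outcomes = [payoff[(my_move, their_move)] for their_move in ("cooperate", "defect")]
    print(f"{my_move}: best case {max(outcomes)}, worst case {min(outcomes)}")

# Prints:
# cooperate: best case 0, worst case -inf
# defect: best case -inf, worst case -inf
```

On these assumed numbers, defecting never does better than cooperating, which is the sense in which the best case for defecting is identical to suicide.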