The proposed treaty does not mention the threshold-exempt “Multinational AGI Consortium” (MAGIC) suggested in the policy paper. Such an exemption would, in my opinion, be a very bad idea. The underlying argument for a compute cap is that we do not know how to build AGI safely. It does not matter who is building it, whether OpenAI, the US military, or some international organization; the risk is the same: the AI escapes control and takes over, regardless of how much “security” humanity tries to place around it. If the threshold is set low enough that we can be sure exceeding it is safe, then countries will want to exceed it for their own critical projects. If it is set high enough that we cannot be sure, then it would not be safe for MAGIC to exceed it either.
We can argue, “Past this point is too dangerous. We must not build that far. Not for national security, not to cure cancer, not for anything. Zero exceptions, because otherwise we will all die.” People can accept that.
There is no way to argue, “Past this point is dangerous, so let the more responsible group handle it. We’ll build it, but you can’t control it.” That is a clear recipe for disaster.