A few comments on the proposed treaty:

> Each State Party undertakes to self-report the amount and locations of large concentrations of advanced hardware to relevant international authorities.
“Large concentrations” isn’t defined anywhere, and would probably need to be, for this to be a useful requirement.
> Each State Party undertakes to collaborate in good-faith for the establishment of effective measures to ensure that potential benefits from safe and beneficial artificial intelligence systems are distributed globally.
Hm, I suspect this clause might make certain countries less likely to sign on. Not sure.
> Each State Party undertakes to pursue in good faith negotiations on effective measures relating to the cessation of an artificial intelligence arms race and the prevention of any future artificial intelligence arms race.

What might this actually entail?
The proposed treaty does not mention the threshold-exempt “Multinational AGI Consortium” suggested in the policy paper. Such an exemption would be, in my opinion, a very bad idea.

The underlying argument behind a compute cap is that we do not know how to build AGI safely. It does not matter who is building it, whether OpenAI, the US military, or some international organization; the risked outcome is the same: the AI escapes control and takes over, regardless of how much “security” humanity tries to place around it. If the threshold is low enough that we can be sure that going over it won’t be dangerous, then countries will want to go past it for their own critical projects. If it’s high enough that we can’t be sure, then it wouldn’t be safe for MAGIC to go over it either.
We can argue, “This point is too dangerous. We need to not build that far. Not to ensure national security, not to cure cancer, no. Zero exceptions, because otherwise we will all die.” People can accept that.
There’s no way to argue, “This point is dangerous, so let the more responsible group handle it. We’ll build it, but you can’t control it.” That’s a clear recipe for disaster.