Nuclear non-proliferation worked because the grandfathered-in countries had all the power, and those that weren't grandfathered in were under the implicit threat of embargo, invasion, or even annihilation. Despite all its accomplishments, GPT-4 does not give OpenAI the ability to enforce its monopoly with the threat of violence.
Not to mention that 3-4 of the 5 listed countries non-party to the treaty developed nukes anyway. If Meta decides to flagrantly ignore the 0.2 OOM limit and creates something actually dangerous, it's not going to sit quietly in a silo waiting for further mistakes to be made before it kills us all.
I think you’ve misunderstood what we mean by “target”. Similar issues applied to the 2°C target, which nonetheless yielded significant coordination benefits.
The 2°C target helped facilitate coordination between nations, organisations, and individuals:
It provided a clear, measurable goal.
It provided a sense of urgency and severity.
It promoted a sense of shared responsibility.
It helped to align efforts across different stakeholders.
It created a shared understanding of what success would look like.
The AI governance community should converge around a similar target.