I think you have to set this up in such a way that the current ceiling is where we already are, not back in time to before GPT-4. If you don’t, then the chance it actually gets adopted seems vastly lower, since all adopters that didn’t make their own GPT-4 already have to agree to be 2nd-class entities until 2029.
It’s very difficult to talk about nuclear non-proliferation when a bunch of people already have nukes. If you can actually enforce it, that’s a different story, but if you could actually enforce anything relating to this mess the rest just becomes details anyway.
Nuclear non-proliferation worked despite the fact that many countries with nuclear weapons were “grandfathered in”.
If the y-axis for the constraint is fixed to the day of negotiation, then stakeholders who want a laxer constraint are incentivised to delay negotiating. To avoid that hazard, I have picked a Schelling date (2022) at which to fix the y-axis.
The purpose of this article isn’t to propose any particular policy, strategy, treaty, agreement, or law which might achieve the 0.2 OOMs/year target. Instead, its purpose is to propose the target itself. This has inherent coordination benefits; cf. the 2°C target.
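To make the target concrete, here is a minimal sketch (not from the original article) of what a 0.2 OOMs/year ceiling implies when the y-axis is anchored to a 2022 baseline. The baseline figure of 1e24 FLOP is purely hypothetical, chosen only to illustrate the arithmetic.

```python
def compute_ceiling(baseline_flop: float, year: int, anchor: int = 2022,
                    ooms_per_year: float = 0.2) -> float:
    """Maximum permitted training compute for a given year under a
    constraint of `ooms_per_year` orders of magnitude of growth,
    anchored at `anchor` (a Schelling date, per the comment above)."""
    return baseline_flop * 10 ** (ooms_per_year * (year - anchor))

# With a hypothetical 1e24 FLOP baseline in 2022, the permitted
# ceiling in 2027 is one full order of magnitude higher: 1e25 FLOP.
print(compute_ceiling(1e24, 2027))
```

Note that anchoring to 2022 rather than the day of negotiation is what removes the incentive to stall: delaying talks no longer raises the starting point of the curve.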
Nuclear non-proliferation worked because the grandfathered-in countries had all the power, and the ones who weren’t grandfathered in were under the implicit threat of embargo, invasion, or even annihilation. Despite all its accomplishments, GPT-4 does not give OpenAI the ability to enforce its monopoly with the threat of violence.
Not to mention that three or four of the five listed countries not party to the treaty developed nukes anyway. If Meta decides to flagrantly ignore the 0.2 OOMs/year limit and creates something actually dangerous, it’s not going to sit quietly in a silo waiting for further mistakes to be made before it kills us all.
I think you’ve misunderstood what we mean by “target”. Similar issues applied to the 2°C target, which nonetheless yielded significant coordination benefits.
The 2°C target helps facilitate coordination between nations, organisations, and individuals.
- It provided a clear, measurable goal.
- It provided a sense of urgency and severity.
- It promoted a sense of shared responsibility.
- It helped to align efforts across different stakeholders.
- It created a shared understanding of what success would look like.
The AI governance community should converge around a similar target.