So far, I’m confident that our proposals will not impede the vast majority of AI developers, but if we end up receiving feedback that this isn’t true, we’ll either rethink our proposals or remove this claim from our advocacy efforts. Also, as stated in a comment below:
It seems to me that for AI regulation to have important effects, it probably has to affect many AI developers around the point where training more powerful AIs would be dangerous.
So, if AI regulation is aiming to be useful in short timelines and AI is dangerous, it will probably have to affect most AI developers.
And if policy requires a specific flop threshold or similar, then due to our vast uncertainty, that flop threshold will probably have to affect many AI developers soon. My guess is that the criteria you establish would in fact soon affect a large number of AI developers (perhaps most people interested in working with SOTA open-source LLMs).
In general, safe flop and performance thresholds unavoidably have to be pretty low to remain sufficient even slightly longer term. For instance, suppose that 10^27 flops is a dangerous amount of effective compute (relative to the performance of the GPT-4 training run). Then, if algorithmic progress is 2x per year, 10^24 real flops amounts to 10^27 effective flops in just 10 years.
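To make the arithmetic explicit (using the same illustrative numbers as above, not estimates of where the danger threshold actually is):

$$10^{24}\ \text{real flops} \times 2^{10} \approx 10^{24} \times 10^{3} = 10^{27}\ \text{effective flops},$$

where the factor of $2^{10}$ comes from 10 years of 2x-per-year algorithmic progress.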
I think you probably should note that this proposal is likely to affect the majority of people working with generative AI in the next 5-10 years. This seems basically unavoidable.
I’d guess that the best approach would be to define a specific flop or dollar threshold and have it decrease steadily over time at a conservative rate (e.g., a 2x lower threshold each year).
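As a concrete sketch of how such a schedule interacts with the effective-compute point above: the starting threshold of 10^26 flops, the 2x/year decay, and the 10^24-flop training run below are hypothetical placeholders I'm using for illustration, not numbers from the proposal.

```python
# Minimal sketch (hypothetical numbers): a regulatory flop threshold that
# halves each year, compared against a fixed 1e24-real-flop training run
# whose effective compute doubles each year from algorithmic progress.

INITIAL_THRESHOLD_FLOPS = 1e26      # hypothetical starting threshold
THRESHOLD_DECAY_PER_YEAR = 2        # threshold is 2x lower each year
ALGO_PROGRESS_PER_YEAR = 2          # effective compute per real flop doubles yearly
TRAINING_RUN_REAL_FLOPS = 1e24      # hypothetical fixed training run

for year in range(11):
    threshold = INITIAL_THRESHOLD_FLOPS / THRESHOLD_DECAY_PER_YEAR ** year
    effective = TRAINING_RUN_REAL_FLOPS * ALGO_PROGRESS_PER_YEAR ** year
    covered = TRAINING_RUN_REAL_FLOPS >= threshold
    print(f"year {year:2d}: threshold {threshold:.1e} real flops, "
          f"run is {effective:.1e} effective flops, covered: {covered}")
```

Under these placeholder numbers the threshold catches the 10^24-flop run around year 7; the exact crossover depends entirely on the constants chosen, which is the sense in which the threshold probably has to start affecting many developers fairly soon.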
Presumably, your hope for avoiding this flop threshold becoming burdensome soon is:
As AI advances and dangerous systems become increasingly easy to develop at a fraction of the current cost, the definition of frontier AI will need to change. This is why we need an expert-led administration that can adapt the criteria for frontier AI to address the evolving nature of this technology.