We don’t know that there are diminishing marginal returns to model capability (I prefer not to use the word “intelligence” for this). There are many, many applications where raising accuracy on some task from 99% to 99.9% would be enormously valuable. Once such models are deployed in earnest, proponents of AI development could very plausibly argue that regulatory limits are actually killing people, e.g. through industrial accidents that a more capable model would have prevented, bad medical advice, road crashes, and so on. It would take serious and unwavering counter-pressure against that argument to keep the regulations from crumbling.
The other problem is that we don’t know the real limits on the capabilities of models of a given size, and an advance in efficiency may lead to generally superhuman capabilities anyway. It is unlikely that the earliest models will be anywhere near the most efficient possible, or even within a factor of 10 (or 100).
Bad actors would probably run models larger than the allowed size anyway, and doing so would rapidly become easier over time. Much of the software is open source, and the restrictions could be worked around relatively easily, for example by running a deep model a few layers at a time so that any given running fragment has fewer parameters than the regulated maximum.
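To make that workaround concrete, here is a minimal sketch in PyTorch of running a deep model a few layers at a time so that no fragment loaded at any moment exceeds a per-model parameter cap, while the end-to-end output is identical to running the whole model. The cap value, layer sizes, and greedy chunking scheme are all illustrative assumptions, not a reference to any actual regulation.

```python
# Sketch (illustrative only): evade a per-model parameter cap by running a
# deep network a few layers at a time, so no running fragment exceeds the cap
# even though the full model does.
import torch
import torch.nn as nn

PARAM_CAP = 1_000_000  # hypothetical regulated maximum per running model

# A "large" model expressed as a list of layer blocks (~16.8M parameters total).
layers = [nn.Linear(512, 512) for _ in range(64)]

def count_params(modules):
    return sum(p.numel() for m in modules for p in m.parameters())

def chunk_under_cap(layers, cap):
    """Greedily group consecutive layers so each group stays under the cap."""
    groups, current = [], []
    for layer in layers:
        if current and count_params(current + [layer]) > cap:
            groups.append(current)
            current = []
        current.append(layer)
    if current:
        groups.append(current)
    return groups

@torch.no_grad()
def run_in_fragments(x, layers, cap):
    # Only one sub-cap fragment is instantiated at any time; intermediate
    # activations are handed from one fragment to the next.
    for group in chunk_under_cap(layers, cap):
        fragment = nn.Sequential(*group)
        assert count_params(group) <= cap
        x = fragment(x)
    return x

out = run_in_fragments(torch.randn(1, 512), layers, PARAM_CAP)
print(out.shape)  # torch.Size([1, 512]) -- same output as running the full model
```

Nothing about this is sophisticated, which is the point: any rule keyed to the parameter count of a single running model is cheap to circumvent with a few lines of glue code.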
It would likely also push development toward AI models whose capability isn’t well measured by parameter count at all, for which such regulation would be completely ineffective.
I think you make a lot of great points.
I think some sort of cap is one of the highest-impact things we can do from a safety perspective. I agree that imposing the cap effectively and getting buy-in from broader society is a challenge; however, these problems are a lot more tractable than AI safety itself.
I haven’t heard anybody else propose this so I wanted to float it out there.