Your current threshold does include all Llama models (other than the Llama-1 6.7B/13B sizes), since they were trained on > 1 trillion tokens.
I also think 70% on MMLU is extremely low, since that’s about the level of GPT-3.5 (the model behind ChatGPT), and that system is very far from posing a risk of catastrophe.
The cutoffs also don’t differentiate between sparse and dense models, so there’s a fair bit of non-SOTA-pushing academic / corporate work that would fall under these cutoffs.
Yes, this reasoning was for the capabilities benchmarks specifically. Data goes further with future algorithmic progress, so I thought a narrower criterion for that one was reasonable.
This is the threshold at which the government has the ability to say no, and it is deliberately set well before catastrophe.
I also think that one route towards AGI, in the event that we try to create a global shutdown of AI progress, is building up capabilities on top of whatever the best open-source model is, so I’m hesitant to give up the government’s ability to prevent the capabilities of the best open-source model from going up.
Thanks for pointing this out. I’ll think about whether there’s a way to exclude sparse models, though I’m not sure it’s worth the added complexity and potential for loopholes. I’m not sure how many models fall into this category—do you have a sense? This aggregation of models lists around 40 models above the 70B threshold.
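For concreteness, here’s a rough sketch of how the current cutoffs would combine. Treating the parameter, token, and benchmark cutoffs as independent triggers, and counting total rather than active parameters for sparse models, are working assumptions here rather than settled details:

```python
# Rough sketch of the proposed "frontier AI" coverage test as discussed in this
# thread. The cutoffs (70B parameters, 1T training tokens, 70% MMLU) come from
# the numbers above; treating them as independent triggers, and counting total
# rather than active parameters for sparse models, are assumptions.

from dataclasses import dataclass


@dataclass
class ModelSpec:
    name: str
    total_params: float      # total parameter count (dense or sparse)
    training_tokens: float   # tokens seen during pretraining
    mmlu: float              # MMLU accuracy, 0-1

PARAM_CUTOFF = 70e9
TOKEN_CUTOFF = 1e12
MMLU_CUTOFF = 0.70


def is_covered(m: ModelSpec) -> bool:
    """A model falls under the definition if it exceeds any one cutoff."""
    return (
        m.total_params >= PARAM_CUTOFF
        or m.training_tokens >= TOKEN_CUTOFF
        or m.mmlu >= MMLU_CUTOFF
    )


# Approximate public numbers: Llama-2 7B was trained on ~2T tokens, so it
# trips the token cutoff despite being far below the parameter cutoff.
llama2_7b = ModelSpec("Llama-2-7B", 7e9, 2e12, 0.45)
print(is_covered(llama2_7b))  # True, via the token criterion
```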
There are disadvantages to giving the government “the ability to say no” to models used by thousands of people. There are disadvantages even in a frame where AI-takeover is the only thing you care about!
For instance, if you give the government too expansive a concern, such that it must approve many models “well before the threshold”, then it will have thousands of requests thrown at it regularly, and it could either (1) try to scrutinize each one, and become an invasive thorn everyone despises and which will be eliminated as soon as possible, because 99.99% of what it concerns itself with will have (evidently to everyone) nothing to do with x-risk, or (2) become a rubber-stamp factory that just lets all these thousands of requests through. (It could even do both simultaneously, like the FDA, which lets through probably useless things while prohibiting safe and useful things! This is likely; the government is not going to deliberate over which models are good like an intelligent person; it’s going to just follow procedure.)
(I don’t think LLMs of the scale you’re concerned with run an AI takeover risk. I have yet to read a takeover story about LLMs—of any size whatsoever—which makes sense to me. If there’s a story you think makes sense, by all means please give a link.)
But—I think a frame where AI takeover is the only thing you care about is manifestly the wrong frame for someone concerned with policy. Like—if you just care about one thing in a startup and ignore other harms, you go out of business; but if you care about just one thing in policy and ignore other things, you can just pass a policy, have it in place, and then cause a disaster, because law doesn’t give feedback. The feedback loop is about 10,000% worse, and so you need to be about 10,000% more paranoid that your ostensibly good actions are actually good.
And I don’t see evidence you’re seeking out possible harms of your proposed actions. Your website doesn’t talk about them; you don’t talk about possible bad effects and how you’d mitigate them—other than, as Quintin points out, in a basically factually incorrect manner.
So, you are deliberately targeting models such as Llama-2, then? Searching HuggingFace for “Llama-2” currently brings up 3276 models. As I understand the legislation you’re proposing, each of these models would have to undergo government review, and the government would have the perpetual capacity to arbitrarily pull the plug on any of them.
I expect future small, open-source models to prioritize runtime efficiency, and so to over-train as much as possible. As a result, I expect that most of the open-source ecosystem will be using models trained on > 1T tokens. I think Stable Diffusion is within an OOM of the 1T token cutoff, since it was trained on a 2-billion image/text-pair subset of the LAION-5B dataset, and judging from the sample images on page 35, the captions are a bit less than 20 tokens per image. Future open-source text-to-image models will likely be trained with > 1T text tokens. Once that happens, the hobbyist and individual creators responsible for the vast majority of checkpoints/LoRAs on model-sharing sites like Civitai will also be subject to these regulations.
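Here’s the back-of-envelope version of that estimate; the ~20 tokens-per-caption figure and the assumed number of training epochs are rough guesses rather than measured values:

```python
# Back-of-envelope check on the caption-token estimate above. The tokens-per-
# caption figure and the number of passes over the data are rough assumptions.

caption_pairs = 2e9          # size of the LAION subset (image/text pairs)
tokens_per_caption = 20      # rough average, judging from the sample captions
epochs = 5                   # assumed number of passes over the data

caption_tokens_seen = caption_pairs * tokens_per_caption * epochs
print(f"~{caption_tokens_seen:.1e} caption tokens seen")           # ~2.0e+11
print(f"fraction of 1T cutoff: {caption_tokens_seen / 1e12:.2f}")  # ~0.20
```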
I expect language / image models will increasingly become the medium through which people express themselves, the next internet, so to speak. I think that giving the government expansive powers of censorship over the vast majority[1] of this ecosystem is extremely bad, especially for models that we know are not a risk.
I also think it is misleading for you to say things like:
and:
but then propose rules that would actually target a very wide (and increasing) swath of open-source, academic, and hobbyist work.
[1] Weighted by what models people of the future actually use / interact with.
(ETA: these are my personal opinions)
Notes:
We’re going to make sure to exempt existing open-source models. We’re trying to avoid pushing the frontier of open-source AI, not trying to put the models that are already out there back in the box, which I agree is intractable.
These are good points, and I decided to remove the data criterion for now in response to these considerations.
The definition of frontier AI is wide because it describes the set of models that the administration has legal authority over, not the set of models that would be restricted. The point of this is to make sure that any model that could be dangerous would be included in the definition. Some non-dangerous models will be included, because of the difficulty of predicting the exact capabilities of a model before training.
We’re planning to shift to recommending a tiered system in the future, where the systems in the lower tiers have a reporting requirement but not a licensing requirement.
In order to mitigate the downside of including too many models, we have a fast-track exemption for models that are clearly not dangerous but technically fall within the bounds of the definition (see the sketch after these notes for how the tiers and the exemption would fit together).
I don’t expect this to impact the vast majority of AI developers outside the labs. I do think that open-sourcing models at the current frontier is dangerous, and I want to prevent the bar from being pushed further. Insofar as that AI development was happening on top of models produced by the labs, it would be affected.
The thresholds are a work in progress. I think it’s likely that they’ll be revised significantly throughout this process. I appreciate the input and pushback here.
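A minimal sketch of how the tiers and the fast-track exemption described in these notes could fit together; the tier names and the exact requirement attached to each tier are illustrative assumptions, not proposal text:

```python
# Illustrative sketch only: tier names and the requirement attached to each
# tier are assumptions made to show the structure, not the actual proposal.

def requirement(covered_by_definition: bool,
                clearly_not_dangerous: bool,
                lower_tier: bool) -> str:
    """Map a model's status under the definition to its regulatory burden."""
    if not covered_by_definition:
        return "none"                   # outside the frontier-AI definition
    if clearly_not_dangerous:
        return "fast-track exemption"   # technically covered, but exempted
    if lower_tier:
        return "reporting requirement"  # lower tiers: report, no license
    return "licensing requirement"      # top tier: license required


print(requirement(covered_by_definition=True,
                  clearly_not_dangerous=False,
                  lower_tier=False))    # -> "licensing requirement"
```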
It’s more than misleading, it’s simply a lie, at least insofar as developers outside of Google, OpenAI, and co. use the Llama 2 models.
I’ve changed the wording to “Only a few technical labs (OpenAI, DeepMind, Meta, etc) and people working with their models would be regulated currently.” The point of this sentence is to emphasize that this definition still wouldn’t apply to the vast majority of AI development—most AI development uses small systems, e.g. image classifiers, self driving cars, audio models, weather forecasting, the majority of AI used in health care, etc.
Credit for changing the wording, but I still feel this does not adequately convey how sweeping the impact of the proposal would be if implemented as-is. Foundation-model-related work is a sizeable and rapidly growing chunk of active AI development. Of the 15K pre-print papers posted on arXiv under the cs.AI category this year, 2K appear to be related to language models. The most popular Llama 2 model weights alone have north of 500K downloads to date, and foundation-model-related repos have been trending on GitHub for months. “People working with [a few technical labs’] models” is a massive community containing many thousands of developers, researchers, and hobbyists. It is important to be honest about how they will likely be impacted by this proposed regulation.

I suspect that they didn’t think about their “frontier AI” criteria much, particularly the token criterion. I strongly expect they’re honestly mistaken about the implications of their criteria, not trying to deceive. I weakly expect that they will update their criteria based on considerations like those Quintin mentions, and that you could help inform them if you engaged on the merits.
If your interpretation is correct, it’s damning of their organization in a different way. As a research organization, their entire job is to think carefully about their policy proposals before they make them. It’s likely net-harmful to have an org lobbying for AI “safety” regulations without doing their due diligence on research first.
Sorry, what harmful thing would this proposal do? Require people to have licenses to fine-tune llama 2? Why is that so crazy?
Nora didn’t say that this proposal is harmful. Nora said that if Zach’s explanation for the disconnect between their rhetoric and their stated policy goals is correct (namely that they don’t really know what they’re talking about) then their existence is likely net-harmful.
That said, yes, requiring everyone who wants to finetune LLaMA 2 to get a license would be absurd and harmful. 1a3orn and gallabytes articulate some reasons why in this thread.
Another reason is that it’s impossible to enforce, and passing laws or regulations and then not enforcing them is really bad for credibility.
Another reason is that the history of AI is a history of people ignoring laws and ethics so long as it makes them money and they can afford to pay the fines. Unless this regulation comes with fines so harsh that they remove all possibility of making money off of models, OpenAI et al. won’t be getting licenses. They’ll just pay the fines, while small-scale and indie devs (whom the OP allegedly hopes specifically not to impact) bring their work to a screeching halt and wait for the government to tell them it’s okay to continue their work.
Also, such a regulation seems like it would be illegal in the US. While the government does have wide latitude to regulate commercial activities that impact multiple states, this is rather specifically a proposal that would regulate all activity (even models that never get released!). I’m unaware of any precedent for such an action; can you name one?
Drug regulation, weapons regulation, etc.
As far as I can tell, the commerce clause lets basically everything through.
It doesn’t let the government institute prior restraint on speech.
For one thing, this is unenforceable without, ironically, superintelligence-powered universal surveillance. And I expect any vain attempt to enforce it would do more harm than good. See this post for some reasons for thinking it’d be net-negative.
Very far in qualitative capability, or very far in effective FLOP?
I agree on the qualitative capability, but disagree on the effective FLOP.
It seems quite plausible (say 5%) that models with only 1,000x more training compute than GPT-3.5 pose a risk of catastrophe. This would be GPT-5.
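Rough arithmetic behind that; using the GPT-3 paper’s compute figure as a stand-in for GPT-3.5, and the assumed per-generation compute multiplier, are both loose assumptions:

```python
# Rough "effective FLOP" arithmetic for the claim above. The GPT-3 figure is
# from the GPT-3 paper; using it as a proxy for GPT-3.5, and the assumed
# compute multiplier per GPT generation, are guesses.

import math

gpt3_flop = 3.1e23                 # GPT-3 (175B) pretraining compute
risk_threshold = 1000 * gpt3_flop  # 1,000x GPT-3.5-class compute

gen_multiplier = 30                # assumed compute growth per GPT generation
generations_needed = math.log(1000, gen_multiplier)

print(f"1,000x threshold: ~{risk_threshold:.1e} FLOP")                        # ~3.1e+26
print(f"generations of {gen_multiplier}x needed: {generations_needed:.1f}")   # ~2.0
```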