Sorry, what harmful thing would this proposal do? Require people to have licenses to fine-tune llama 2? Why is that so crazy?
Nora didn’t say that this proposal is harmful. Nora said that if Zach’s explanation for the disconnect between their rhetoric and their stated policy goals is correct (namely, that they don’t really know what they’re talking about), then their existence is likely net-harmful.
That said, yes, requiring everyone who wants to finetune LLaMA 2 to get a license would be absurd and harmful. 1a3orn and gallabyres articulate some reasons why in this thread.
Another reason is that it’s impossible to enforce, and passing laws or regulations and then not enforcing them is really bad for credibility.
Another reason is that the history of AI is a history of people ignoring laws and ethics so long as doing so makes them money and they can afford to pay the fines. Unless this regulation comes with fines so harsh that they remove any possibility of making money off of models, OpenAI et al. won’t be getting licenses. They’ll just pay the fines, while small-scale and indie devs (whom the OP allegedly hopes specifically not to impact) grind their work to a halt and wait for the government to tell them it’s okay to continue their work.
Also, such a regulation seems like it would be illegal in the US. While the government does have wide latitude to regulate commercial activities that impact multiple states, this is rather specifically a proposal that would regulate all activity (even models that never get released!). I’m unaware of any precedent for such an action; can you name one?
Drug regulation, weapons regulation, etc.
As far as I can tell, the commerce clause lets basically everything through.
It doesn’t let the government institute prior restraint on speech.
For one thing, this is unenforceable without, ironically, superintelligence-powered universal surveillance. And I expect any vain attempt to enforce it would do more harm than good. See this post for some reasons for thinking it’d be net-negative.