FWIW, I looked briefly into this 2 years ago, asking whether it was legal to release data poison. As best as I could figure, it probably is in the USA: I can't see what crime it would be. If you aren't actively and maliciously injecting the data somewhere like Wikipedia (where you are arguably violating policies or ToS by inserting false content with the intent of damaging computer systems), but are just releasing it somewhere like your own blog and waiting for the LLM scrapers to voluntarily slurp it down and choke during training, then that's their problem. If their LLMs can't handle it, well, that's just too bad. It's no different than if you had written up testcases for bugs or security holes: you are not responsible for what happens to other people if they are too lazy or careless to use them correctly and they crash or otherwise harm their own machines. If you had gone out of your way to hack them*, that would be a violation of the CFAA or something else, sure; but if you just wrote something on your blog, exercising free speech while violating no contracts such as Terms of Service? That's their problem: no one made them scrape your blog while being too incompetent to handle data poisoning. (This is why the quoted CFAA provision wouldn't apply: you didn't knowingly cause it to be sent to them! You don't have the slightest idea who is voluntarily and anonymously downloading your stuff, or what the data poisoning would do to them.) So stuff like the art 'glazing' is probably entirely legal, regardless of whether it works.
* This is one of the perennial issues with security researchers / amateur pentesters being shocked by the CFAA being invoked on them: if you have interacted with the software enough to establish the existence of a serious security vulnerability worth reporting, you have arguably already done the unauthorized accessing… This is also a barrier to work on jailbreaking LLMs or image-generation models: if you succeed in getting one to generate stuff it really should not, sufficiently well to convince the relevant entities of the existence of the problem, well, you may have just earned yourself a bigger problem than wasting your time.
On a side note, I think the window for data poisoning may be closing. Given the increasing sample-efficiency of larger, smarter models, and synthetic data apparently starting to work (and perhaps even making up the majority of training data now), the so-called data wall may turn out to be illusory: frontier models can simply bootstrap from static, known-good datasets, so the final robust models become immune to data poison that could have harmed them early in training, and can be safely updated with new (and possibly-poisoned) data in-context.