I agree with the sentiment that indiscriminate regulation is unlikely to have good effects.
I think the missing step is analysing the specific policies no-AI-art activists are likely to advocate for, and whether each one is a good idea to support.
My current sense is that data helpful for alignment is unlikely to be public right now, and so stricter copyright would not impede alignment efforts. The kind of data I could see being useful is things like scores and direct feedback. At most, things like Amazon reviews might end up being useful for toy settings.
Another aspect the article does not touch on is that the current difficulty of enforcing copyright has an adverse effect of its own. Right now, basically no one is trying to commercialize training-dataset curation, because enforcing copyright over a dataset's use is a nightmare: curated data is, in effect, a common good. I'd expect stronger incentives to create large curated datasets if this were not the case.
Lastly, here are some examples of "no AI art" legislation I expect the movement is likely to support:
1) Removing copyright protection from AI-generated images
2) Requiring AI training data to be strictly opt-in
3) Requiring AI-generated content to be labelled as such
Besides regulation, I also expect activists to 4) pressure companies to deboost AI-made content on social media sites.
My general impression is that 3) is slightly good for AI safety. People in the AI safety community have convincingly advocated for it in the past.
I’m more agnostic on 1), 2) and 4).
1) and 4) would make AI generation less profitable, but they also seem somewhat confused: it's a weird double standard to apply to AI content but not to human-made content.
2) would make training more annoying, but could lead to commercialization of datasets and more collective effort being put into building them. I also think there is possibly a coherent moral case for it, regardless of the AI safety consequences, though I'm still making up my mind about it.
All in all, I am confused, though I wholeheartedly agree that we should analyse and decide whether to support specific policies, rather than, e.g., the anti-AI-art movement as a whole.