This is great news. I particularly agree that legislators should pass new laws making it illegal to train AIs on copyrighted data without the consent of the copyright owner. This is beneficial from at least two perspectives:
If AI is likely to automate most human labor, then we need to build systems for redistributing wealth from AI providers to the rest of the world. One previous proposal is the robot tax, which would offset the harms of automation borne by manufacturing workers. Another popular idea is a Universal Basic Income. Following the same philosophy as these proposals, I think the creators of copyrighted material ought to be allowed to name their price for training AI systems on their data. This would distribute some AI profits to a larger group of people who contributed to the model’s capabilities, and it might slow or prevent automation in industries where workers organize to deny AI companies access to training data. In economic terms, automation would then only occur if the benefits to firms and consumers outweigh the costs to workers. This could reduce the concentration of power driven by wealth inequality, and slow the takeoff speed of GDP growth.
For anyone concerned about existential threats from AI, restricting the supply of training data could slow AI development, leaving more time for work on technical safety and governance, which would reduce x-risk.
I think previous counterarguments to this position are fairly weak. Specifically, while I agree that foundation models which are pretrained to imitate a large corpus of human-generated data are safer in many respects than RL agents trained end-to-end, I think that foundation models are clearly the most promising paradigm over the next few years, and even with restrictions on training data I don’t think end-to-end RL training would quickly catch up.
OpenAI appears to lobby against these restrictions. This makes sense if you model OpenAI as profit-maximizing. Surprisingly to me, even OpenAI employees who are concerned about x-risk have opposed restrictions, writing “We hope that US policymakers will continue to allow this area of dramatic recent innovation to proceed without undue burdens from the copyright system.” I wonder if people concerned about AI risk may have been “captured” by industry on this particular issue, meaning that they have unquestioningly supported a policy because they trust the AI companies that endorse it, even though the policy might increase x-risk from AI development.
Curious why this is being downvoted. I think legislators should pass laws which have positive consequences. I explained the main reasons why I think this policy would have positive consequences. Then I speculated that popular beliefs on this issue might be biased by profit motives. I did not claim that this is a comprehensive analysis of the issue, or that there are no valid counterarguments. Which part of this is norm-violating?
I’d also be curious to know why (some) people downvoted this.
Perhaps it’s because you imply that some OpenAI folks were captured, and maybe some people think that that’s unwarranted in this case?
Sadly, the more-likely explanation (IMO) is that policy discussions can easily become tribal, even on LessWrong.
I think LW still does better than most places at rewarding discourse that’s thoughtful/thought-provoking and resisting tribal impulses, but I wouldn’t be surprised if some people were doing something like “ah he is saying something Against AI Labs//Pro-regulation, and that is bad under my worldview, therefore downvote.”
(And I also think this happens the other way around as well, and I’m sure people who write things that are “pro AI labs//anti-regulation” are sometimes unfairly downvoted by people in the opposite tribe.)