I’d suggest looking at this from a consequentialist perspective.
One of your questions was, “Should it also be illegal for people to learn from copyrighted material?” This seems to imply that whether a policy is good for AIs depends on whether it would be good for humans. It’s almost a Kantian perspective—“What would happen if we universalized this principle?” But I don’t think that’s a good heuristic for AI policy. For just one example, I don’t think AIs should be given constitutional rights, but humans clearly should.
My other comment explains why I think the consequences of restricting training data would be positive.
I don’t say that the same policies must necessarily apply to AIs and humans. But I do say that if they don’t, there should be a reason why AIs and humans are treated differently.
Why?
If a law treats people a certain way, there must be a reason for that, because people have rights.
But if a law treats non-people a certain way, there doesn’t need to be any reason for that. All that’s required is that there be good reasons for the consequences the law has for people.
There does not seem to be any reason why the default should be to treat AIs and humans the same way (or to treat AIs in any particular way).
I think “humans are people and AIs aren’t” could be a perfectly good reason for treating them differently, and didn’t intend to say otherwise. So, e.g., if Mikhail had said “Humans should be allowed to learn from anything they can read because doing so is a basic human right and it would be unjust to forbid that; today’s AIs aren’t the sort of things that have rights, so that doesn’t apply to them at all” then that would have been a perfectly cromulent answer. (With, e.g., the implication that to whatever extent that’s the whole reason for treating them differently in this case, the appropriate rules might change dramatically if and when there are AIs that we find it appropriate to think of as persons having rights.)