Cars and planes and knives and various chemicals can be easily goaded to break the law by the user. No one has yet released a car that only ever follows all applicable laws no matter what the driver does.
Without taking a position on the copyright problem as a whole, there's an important distinction here in how straightforward the user's control is. A typical knife is operated directly enough that deliberate, illegal knife-related actions can reasonably be seen as an extension of the user's intent (and accidental ones an extension of the user's negligence). A traditional car is more complex, but cars are also subject to licensing regimes, which establish social proof that the user has been trained to produce intended results when operating one, so illegal car-related actions can likewise be attributed to the user's intent or negligence. Comparing this to the legal wrangling around cars with 'smarter' autonomous driving features may be informative, because that's where it becomes ambiguous how much of the outcome is a direct translation of the user's intent. There does seem to be a lot of legal and social pressure on manufacturers to ensure the safety of autonomous driving by technical means, but I'm less sure how liability for law-breaking specifically gets assigned; in particular, I vaguely remember mixed claims about how self-driving features handle the tension between posted speed limits and commonplace human driving behavior in the US.
In the case of a chatbot, the bot's purpose depends on it drawing on a vast quantity of information the user isn't directly aware of when forming its responses, so expecting a reasonable user to take responsibility for anticipating and preventing any resulting copyright violations is not practical. Here, comparing chatbot output to that of search engines (a step down in the tool's level of autonomy, rather than a step up as in the car comparison above) may be informative. A search engine's purpose similarly relies on the user not being able to directly anticipate the results, and those results can point to material containing copyright violations or other content that is illegal to distribute. Even though the results are primarily links rather than direct inclusions, there's legal and social pressure on search engines to filter them and to honor specific visibility takedowns on demand.
So there's clearly a spectrum here between user responsibility and vendor responsibility, depending on how 'twisty' the product is to operate.