Also note that Open Source precludes doing this …
The basic Open Source deal is that absolutely anyone can take the product and do whatever they like with it, without paying the supplier anything.
So:
The vendor cannot prevent the customer from doing something bad with the product. (If there is a line of code that says “don’t do this bad thing”, the customer can just delete it; see the sketch below.)
The vendor also cannot charge the customer an insurance premium based on how likely the customer is to do something bad with the product.
… which would suggest that Open Source is only viable in areas where there isn’t much third party liability.
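To make the first point concrete, here is a minimal, hypothetical sketch (the function and task names are invented for illustration, not taken from any real project) of the kind of safeguard a vendor might ship in open-source code, and which a downstream user can simply delete before running or redistributing it:

```python
# Hypothetical illustration only: a "don't do this bad thing" safeguard shipped
# with open-source code. Because the user has the source, the check below can
# simply be deleted before the code is run or redistributed.

DISALLOWED_TASKS = {"generate_spam", "scrape_private_data"}  # made-up task names


def run_task(task: str) -> str:
    # The safeguard line: a downstream user can remove this check, and the
    # vendor has no technical means of stopping them.
    if task in DISALLOWED_TASKS:
        raise PermissionError(f"{task!r} is not permitted by the vendor")
    return f"ran {task}"  # stand-in for whatever the tool actually does


print(run_task("summarize_report"))   # allowed
# run_task("generate_spam")           # raises PermissionError, unless the check is deleted
```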
Open source might be viable if it’s possible for the producers to add safeguards into the model that cannot be trivially undone by cheap fine-tuning, but yeah, I would agree with that given the current lack of techniques for doing this successfully.
A bit of nitpicking: the basic Open Source deal is not that you can do what you want with the product. It’s that the source code should be available. The whole point of introducing open source as an idea was to allow corporations etc. to give access to their source code without worrying so much about people doing what you’re describing. Deleting a “don’t do this bad thing” line can be prosecuted as copyright infringement (if the whole license gets removed). This is what copyleft was invented for: to subvert copyright laws by using them to force companies to publish their code.
There are licenses like MIT which do what you’re describing. Others are less permissive, and e.g. only allow you to use the code in non-commercial projects, or stipulate that you have to send any fixes back to the original developer if you’re planning on distributing it. The GPL is a fun one, which requires any code that is derivative of it to also be open-sourced.
Also, Open Source can very much be a source of liability, e.g. the SCO v. IBM case, in which SCO was trying to get people to pay for Linux (patent trolls being what they are), or Oracle v. Google, where Oracle (arguably also patent trolls) wanted Google to pay billions for use of the Java API (this ended up in the Supreme Court).
Since the inception of the term, “Open Source” has meant more than that. You’re describing “source-available software” instead.
There are licenses that only allow you to use the code in non-commercial projects
But they are emphatically not considered open-source licenses by the Open Source Initiative and are not considered Free Software licenses by the Free Software Foundation, positions that have persisted uninterrupted since the 1990s.
This is pretty much why many people thought that the term “Open Source” was a betrayal of the objectives of the Free Software movement: “Free as in free speech, not free beer” has implications that “well, you can read the source” lacks.
There’s a lot more to the Open Source Definition than “well, you can read the source”. Most of the licenses approved by the Open Source Initiative have also been approved by the Free Software Foundation.
So far, lawmakers at least in the US have refrained from passing laws that impose liability on software owners and software distributors: they have left the question up to the contract (e.g., the license) between the software owner and the software user. But there is nothing preventing them from passing laws on software (or AI models) that trump contract provisions—something they routinely do in other parts of the economy: in California, for example, the terms of the contract between tenant and landlord, i.e., the lease, hardly matter at all because there are so many state laws that override whatever is in the lease.
Licenses are considered contracts, at least in English-speaking countries: the act of downloading or using the software is considered acceptance of the contract. But, like I said, there are tons of laws that override contracts.
So, the fact that the labs have the option of releasing models under open-source-like licenses has very little bearing on the feasibility, effectiveness or desirability of a future liability regime for AI as discussed in the OP—as long as lawmakers cooperate in the creation of the regime by passing new laws.
This may be a reason why Meta is making its Llama models open source. In a future where Coasean bargaining comes into play for larger companies, which it most likely will, Meta may have a cop-out by making its models open source. Obviously, like you said, there will then have to be some restrictions on open source with regard to AI models, but “open source-esque” models may be the solution for companies such as OpenAI and Anthropic to avoid liability in the future.