So far, lawmakers, at least in the US, have refrained from passing laws that impose liability on software owners and distributors; they have left the question up to the contract (e.g., the license) between the software owner and the software user. But nothing prevents them from passing laws on software (or AI models) that trump contract provisions, something they routinely do in other parts of the economy. In California, for example, the terms of the contract between tenant and landlord (i.e., the lease) hardly matter at all, because so many state laws override whatever is in the lease.
Licenses are treated as contracts, at least in English-speaking countries: the act of downloading or using the software is considered acceptance of the contract. But, like I said, there are plenty of laws that override contracts.
So the fact that the labs have the option of releasing models under open-source-like licenses has very little bearing on the feasibility, effectiveness, or desirability of a future liability regime for AI as discussed in the OP, as long as lawmakers cooperate in creating the regime by passing new laws.