Liability regimes for AI

For many products, we face a choice of whom to hold liable for harms that would not have occurred if the product did not exist. For instance, if a person uses a gun in a school shooting that kills a dozen people, there are several legal persons who could in principle be held liable for the harm:

  1. The shooter themselves, for obvious reasons.

  2. The shop that sold the shooter the weapon.

  3. The company that designs and manufactures the weapon.

Which of these is best? In this post, I’ll offer a brief and elementary economic analysis of how this decision should be made.

The important concepts from economic theory to understand here are Coasean bargaining and the problem of the judgment-proof defendant.

Coasean bargaining

Let’s start with Coasean bargaining: in short, the idea that regardless of who the legal system decides to hold liable for a harm, the involved parties can, under certain conditions, reallocate that liability arbitrarily among themselves by contracting and reach an economically efficient outcome. Under these conditions, and assuming no transaction costs, it doesn’t matter who the government decides to hold liable for a harm; it’s the market that will ultimately decide how the liability burden is divided up.

For instance, if we decide to hold shops liable for selling guns to people who go on to use them in acts of violence, the shops could demand that prospective buyers purchase insurance against the risk of committing a criminal act. Insurance companies could then analyze who is more or less likely to engage in such an act and adjust premiums accordingly, or refuse coverage altogether to e.g. people with previous criminal records. This would make guns less accessible overall (because there’s a background risk of anyone committing a violent act using a gun) and differentially less accessible to those seen as more likely to become violent criminals. In other words, we don’t lose the ability to deter individuals by imposing the liability on other actors in the chain, because those actors can simply find ways of passing on the cost.
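To make the pass-through concrete, here is a minimal sketch, with entirely hypothetical numbers, of how an actuarially fair premium folds a buyer’s expected liability into the price they face at the counter. The Coasean point is that, absent transaction costs, it doesn’t matter whether the shop or the buyer is the nominally liable party: the buyer faces the same all-in price either way.

```python
# A minimal sketch of liability pass-through via an actuarially fair
# insurance premium. All numbers here are hypothetical.

HARM = 10_000_000   # damages per act of violence, in dollars
BASE_PRICE = 500    # price of the gun before any liability costs

def fair_premium(p_violence: float) -> float:
    """Expected liability for a buyer with this probability of misuse."""
    return p_violence * HARM

# Whether the shop is liable (and requires insurance as a condition of
# sale) or the buyer is liable directly, the buyer faces the same
# all-in price, so the same marginal buyers are priced out.
for p in (1e-7, 1e-5, 1e-3):
    print(f"p = {p:g}: all-in price = ${BASE_PRICE + fair_premium(p):,.2f}")
```

Riskier buyers face sharply higher all-in prices, which is exactly the differential deterrence described above.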

The judgment-proof defendant

However, what if we instead imagine imposing the liability on individuals? We might naively think there’s nothing wrong with this: anyone who used a gun in a violent act would be required to pay compensation to the victims, which in principle could be set high enough to deter offenses even by wealthy people. The problem we run into is that most school shooters have little in the way of assets, and certainly not enough to compensate the victims and the rest of the world for all the harm they have caused. In other words, they are judgment-proof: the best we can do when we catch them is put them in jail or execute them. In these cases, Coasean bargaining breaks down.
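A toy calculation, again with hypothetical numbers, shows why damages alone can’t deter a judgment-proof defendant: the expected payment is capped by their assets, no matter how large the judgment is set.

```python
# Why damages alone can't deter a judgment-proof defendant: the expected
# payment is capped by their assets. Hypothetical numbers.

HARM = 10_000_000  # harm caused by the act, in dollars

def expected_payment(p_caught: float, assets: float) -> float:
    # A defendant can pay at most what they own, however large the judgment.
    return p_caught * min(assets, HARM)

for assets in (10_000, 100_000, 100_000_000):
    print(f"assets = ${assets:>11,}: expected payment (if always caught) = "
          f"${expected_payment(1.0, assets):,.0f} vs. harm of ${HARM:,}")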

We can try to recover something like the previous solution by mandating by law that such people buy civil or criminal liability insurance, so that they are no longer judgment-proof: the insurance company has big coffers to pay out large settlements if necessary, and also an incentive to turn away people who seem like risky customers. However, law is not magic, and someone who refuses to follow this law would in the end still be judgment-proof.

We can see this in the following example: suppose the shooter doesn’t legally purchase the gun from the shop but steals it instead. Given that the shop will not be held liable for anything, it’s only in their interest to invest in security for ordinary business reasons; they have no incentive to take precautions beyond what makes sense for, e.g., laptop stores. Because the shooter obtains the gun illegally, they can then carry out a shooting without any regard for something such as “a requirement to buy insurance”. In other words, even a law requiring people to buy insurance before being able to legally purchase a gun doesn’t solve the problem of the judgment-proof defendant in this case.

The way to solve the problem of the judgment-proof defendant is obvious: impose the liability on whoever is least likely to be judgment-proof, which in practice will be the largest company involved in the process, with a big pile of cash and a lot of credibility to lose if it is hit with a large settlement. That company can then use Coasean bargaining where appropriate to divide up the cost as far as it is able to under the constraints it operates under. If companies can escape liability by splitting up into smaller ones, this is also a problem, so we would have to structure the regime in a way that prevents it, for example by holding all producers liable in an industry where production already has big economies of scale.

Transaction costs and economies of scale

The problem with this solution is that it gives an advantage to bigger companies. This is by design: a bigger company is less likely to be judgment-proof simply because it averages the risk of selling guns over a larger customer base, so any single bad event is less likely to be one for which the company can’t afford a settlement. However, it means we should expect a trend towards increased market concentration under such a liability regime, which might be undesirable for other reasons.
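A quick Monte Carlo sketch (hypothetical numbers, assuming independent sales) illustrates the pooling effect: as the number of sales grows, the capital a company must hold per sale to cover a bad year shrinks towards the mean expected loss.

```python
# Monte Carlo sketch of risk pooling. With n independent sales, each
# causing a $HARM loss with probability P, the capital per sale needed
# to cover a 99th-percentile year falls towards the mean loss P * HARM.
# All numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
HARM, P = 10_000_000, 1e-4  # loss per bad event; probability per sale

def capital_per_sale(n: int, trials: int = 100_000) -> float:
    events = rng.binomial(n, P, size=trials)   # bad events across n sales
    return float(np.quantile(events * HARM / n, 0.99))

for n in (1_000, 100_000, 10_000_000):
    print(f"n = {n:>10,}: 99th-percentile loss per sale ≈ "
          f"${capital_per_sale(n):,.0f}")
```

A small seller needs to reserve far more per sale than the mean loss to survive a single bad event; a large seller’s per-sale reserve approaches the mean.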

A smaller company can try to compete by buying insurance against the risk of being sued, which is itself another Coasean solution, but this still doesn’t remove the economies of scale our solution introduces, because in the real world such bargaining has transaction costs. Since transaction costs are in general concave in the amount being transacted, large companies will still have an advantage over smaller ones, and this ignores the possibility that certain forms of insurance may be illegal to offer in some jurisdictions.
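As a stylized illustration of that concavity (with a made-up cost function, not an empirical one), the per-dollar cost of arranging coverage falls as the volume being insured grows:

```python
# Stylized concave transaction cost: a fixed component plus a term that
# grows sublinearly in the volume insured. The per-dollar cost then
# falls with scale. The cost function is made up for illustration.

import math

FIXED, RATE = 50_000, 10.0  # hypothetical cost parameters

def per_dollar_cost(volume: float) -> float:
    return (FIXED + RATE * math.sqrt(volume)) / volume

for volume in (1e6, 1e8, 1e10):
    print(f"volume = ${volume:>18,.0f}: cost per dollar insured = "
          f"${per_dollar_cost(volume):.5f}")
```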

Summary and implications for AI

So, we end up with the following simple analysis:

  1. In industries where the problem of the judgment-proof defendant is serious, for example with technologies that can do enormous amounts of harm in the wrong hands, we want the liability to be legally imposed on as big a base as possible. A simple form of this is to hold the biggest company involved in production liable, though there are more complex solutions.

  2. In industries where the problem of the judgment-proof defendant is not serious, we want to impose the liability on whoever can locally do the most to reduce the risk of the product being used for harm, as this gives the best local incentives and minimizes the Coasean transaction costs that must be incurred. In most cases this will be the end users of a product, though not always.

For AI, disagreements about liability regimes seem to mostly arise from whether people think we’re in world (1) or world (2). Most people probably agree that the solution recommended in (1) creates “artificial” economies of scale favoring larger companies, but people who want to hold big technology companies or AI labs liable instead of end users think the potential downside of AI technology is very large, so end users will be judgment-proof given the scale of the harms the technology could do. It’s plausible that even the big companies are judgment-proof (e.g. if billions of people die or the human species goes extinct), and this might need to be addressed by other forms of regulation, but focusing only on the liability regime, we still want as big a base as we can get.

In contrast, if you think the relevant risks from AI look like people using AI systems to do small amounts of harm that are not particularly serious, you’ll want to hold the individuals responsible for those harms liable and spare the companies. This gives the individuals the best incentives to stop engaging in misuse and avoids transaction costs that would both be bad in themselves and exacerbate the trend towards industry concentration.

Unless people can agree on what the risks of AI systems are, they are unlikely to agree on what the correct liability regime for the industry should look like. Discussion should therefore shift towards making the case for large or small risks from AI, depending on what advocates believe, and away from the details of specific proposals, which obscure more fundamental disagreements about the facts.