Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult, since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI system does not qualify as such a reasonable precaution.
Not only this, but it will require extremely expensive discovery procedures which the average citizen cannot afford. And that assumes you can overcome the practical barrier of discovery itself, with the company objecting: what specifically in our files are you looking for? What about our privacy?
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.
I think joint & several liability regimes will resolve this, in the sense that it's not 100% the company's fault; it'll be shared by the programmers, the operator, and the company.
Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under the doctrine of strict liability.
Unfortunately, in practice, what will really happen is that 'expert AI professionals' will be hired to advise legal professionals on what counts as 'foreseeable'. This is susceptible to the same corruption, favouritism, and ignorance we see with ordinary crimes. I think ultimately we'll need lawyers who specialise in both AI and law to really solve this.
The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages that plaintiffs in successful lawsuits receive for the harm the defendant caused them.
Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements, and so on are different. In other words, you'll get completely different legal treatment for AI systems deployed internationally. This opens a whole new can of worms that undermines legal certainty and the rule of law.
"I think joint & several liability regimes will resolve this, in the sense that it's not 100% the company's fault; it'll be shared by the programmers, the operator, and the company."
J&S liability doesn't do much good if the harm is practically non-compensable because no one is alive to sue or be sued, or the legal system is no longer functioning. Even for harms that are merely financially uninsurable, it only enlarges the maximum practically compensable harm by less than an order of magnitude, since the combined assets of the programmers, the operator, and the company are unlikely to be more than a few times those of the developer alone.
"Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements, and so on are different. In other words, you'll get completely different legal treatment for AI systems deployed internationally. This opens a whole new can of worms that undermines legal certainty and the rule of law."
I encourage experts in other legal systems to conduct similar analyses to mine regarding how liability is likely to attach to AI harms and what doctrinal/statutory levers could be pulled to achieve more favorable rules.