I think liability-based interventions are substantially more popular with Republicans than other regulatory interventions—they're considerably more hands-off than, for instance, a regulatory agency. They also feature prominently in the Josh Hawley proposal, and I've been told by a Republican staffer that liability approaches are relatively popular amongst Rs.
An important baseline point is that AI firms (if they're selling to consumers) are probably covered by product liability by default. If they're covered by product liability, then they'll be liable for damages if it can be shown that there was an alternative design, not excessively costly, that they could have implemented and that would have avoided the harm.
If AI firms aren't covered by product liability, they're liable under standard tort law, which means they're liable if they're found negligent under a reasonable person standard.
Liability law also gives (some, limited) teeth to NIST standards. If a firm can show that it was following NIST safety standards, this gives it a strong argument that it wasn’t being negligent.
I share your scepticism of liability interventions as mechanisms for making important dents in the AI safety problem. Prior to the creation of the EPA, firms were still in principle liable for the harms their pollution caused, but the tort law system is generically a very messy way to get firms to reduce accident risks: it's expensive and time-consuming to go through the court system; courts are reluctant to award punitive damages, which means that externalities aren't fully internalised (in expectation, for firms) even in theory; and you need to find a plaintiff with standing to sue.
I think there are still some potentially important use cases for liability for reducing AI risks:
Making clear the legal responsibilities of private sector auditors (I’m quite confident that this is a good idea)
Individual liability for individuals with safety responsibilities at firms (although I'd expect this would be politically unpopular on the right)
Creating safe harbours from liability if firms fulfil some set of safety obligations (similar to the California bill) - ideally safety obligations that are updated over time and tied to best practice
Requiring insurance to cover liability, and using this to drive better safety practices as firms try to reduce insurance premiums and satisfy insurers' requirements for coverage
Tying liability to specific failure modes that we expect to correlate with catastrophic outcomes, perhaps combined with a punitive damages regime—for instance, holding a firm liable (including for punitive damages) if a model causes harm via, say, goal misgeneralisation, or if the firm lacked industry-standard risk management practices
To be clear, I'm still sceptical of liability-based solutions and reasonably strongly favour regulatory proposals (where specific liability provisions will still play an important role).
I’m not a lawyer and have no legal training.