I’m surprised that some people are so interested in the idea of liability for extreme harms. I understand that from a legal/philosophical perspective, there are some nice arguments about how companies should have to internalize the externalities of their actions, etc.
But in practice, I’d be fairly surprised if liability approaches were actually able to provide a meaningful incentive shift for frontier AI developers. My impression is that frontier AI developers already have fairly strong incentives to avoid catastrophes (e.g., it would be horrible for Microsoft if its AI model caused $1B in harms, and it would be horrible for Meta and the entire open-source movement if an open-source model were able to cause $1B in damages).
And my impression is that most forms of liability would not affect this cost-benefit tradeoff by very much. This is especially true if the liability is only imposed post-catastrophe. Extreme forms of liability could require insurance, but this essentially feels like a roundabout and less effective way of implementing some form of licensing (you have to convince us that risks are below an acceptable threshold to proceed).
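To make the shape of that claim concrete, here is a toy expected-cost comparison. Every figure in it is a made-up assumption purely for illustration (the probability, the reputational cost, and the damages amount are not estimates):

```python
# Toy expected-cost comparison; all figures are made-up assumptions for illustration only.

p_catastrophe = 0.001        # assumed annual probability of a catastrophe
reputational_cost = 50e9     # assumed business/reputational hit the firm already internalizes
legal_damages = 1e9          # assumed damages a court would award under a liability regime

without_liability = p_catastrophe * reputational_cost
with_liability = p_catastrophe * (reputational_cost + legal_damages)

print(f"expected cost without liability: ${without_liability:,.0f}")  # $50,000,000
print(f"expected cost with liability:    ${with_liability:,.0f}")     # $51,000,000

# If the already-internalized costs dominate, liability of this size shifts the
# expected cost by only a few percent, which is the claim above. The picture changes
# if courts would award damages far larger than the costs firms already internalize.
```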
I think liability also has the “added” problem of being quite unpopular, especially among Republicans. It is easy to attack liability regulations as anti-innovation, argue that they create a moat (only big companies can afford to comply), and argue that it’s just not how America ends up regulating things (we don’t hold Adobe accountable for someone doing something bad with Photoshop).
To be clear, I don’t think “something is politically unpopular” should be a full-stop argument against advocating for it.
But I do think that “liability for AI companies” scores poorly both on “actual usefulness if implemented” and on “political popularity/feasibility.” I also think that advocacy for “liability for AI companies” often ends up getting into abstract philosophy land (to what extent should companies internalize externalities?) and ends up avoiding some of the “weirder” points (we expect AI has a considerable chance of posing extreme national security risks, which is why we need to treat AI differently than Photoshop).
I would rather people just make the direct case that AI poses extreme risks & discuss the direct policy interventions that are warranted.
With this in mind, I’m not an expert in liability and admittedly haven’t been following the discussion in great detail (partly because the little I have seen has not convinced me that this is an approach worth investing in). I’d be interested in hearing more from people who have thought about liability, particularly concrete stories for how liability would be expected to meaningfully shift the incentives of labs. (See also here.)
Stylistic note: I’d prefer replies along the lines of “here is the specific argument for why liability would significantly affect lab incentives and how it would work in concrete cases” rather than replies along the lines of “here is a thing you can read about the general legal/philosophical arguments about how liability is good.”
One reason I’m interested in liability is that it opens up a way to do legal investigations. The legal system has a huge number of privileges that you get to use if there is reasonable suspicion that someone has committed a crime or been negligent. I think it’s quite likely that, without direct liability, even if Microsoft or OpenAI caused some huge catastrophe, we would never get a proper postmortem or analysis of the facts, and would never reach high confidence about the actual root causes.
So while I agree that OpenAI and Microsoft of course already want to avoid being seen as responsible for a large catastrophe, legal liability makes it much more likely that there will be an actual investigation in which, e.g., the legal system gets to confiscate servers and messages to analyze what happened, which in turn makes it more likely that, if OpenAI and Microsoft are responsible, they will be found out to be responsible.
I found this answer helpful and persuasive, thank you!
I think liability-based interventions are substantially more popular with Republicans than other regulatory interventions: they’re substantially more hands-off than, for instance, a regulatory agency. They also feature prominently in the Josh Hawley proposal. I’ve also been told by a Republican staffer that liability approaches are relatively popular amongst Rs.
An important baseline point is that AI firms (if they’re selling to consumers) are probably covered by product liability by default. If they’re covered by product liability, then they’ll be liable for damages if it can be shown that there was a not-excessively-costly alternative design they could have implemented that would have avoided the harm.
If AI firms aren’t covered by product liability, they’re liable according to standard tort law, which means they’re liable if they’re negligent under a reasonable person standard.
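Neither rule is spelled out formally above, but the standard way law-and-economics treatments formalize the negligence calculus is the Learned Hand formula (from United States v. Carroll Towing), and the product-liability “not excessively costly alternative design” test has essentially the same structure. A rough sketch:

```latex
% Learned Hand formula: a rough sketch of the negligence calculus.
% B = burden (cost) of the precaution or alternative design
% P = probability of the harm that the precaution would have prevented
% L = magnitude of the loss if the harm occurs
\[
  \text{negligent (hence liable)} \iff B < P \cdot L
\]
% i.e., liability attaches when a precaution was available whose cost was less
% than the expected harm it would have avoided.
```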
Liability law also gives (some, limited) teeth to NIST standards. If a firm can show that it was following NIST safety standards, this gives it a strong argument that it wasn’t being negligent.
I share your scepticism of liability interventions as mechanisms for making important dents in the AI safety problem. Prior to the creation of the EPA, firms were still in principle liable for the harms their pollution caused, but the tort law system is generically a very messy way to get firms to reduce accident risks. It’s expensive and time-consuming to go through the court system; courts are reluctant to award punitive damages, which means that externalities aren’t internalised (in expectation, for firms) even in theory; and you need to find a plaintiff with standing to sue.
I think there are still some potentially important use cases for liability for reducing AI risks:
Making clear the legal responsibilities of private sector auditors (I’m quite confident that this is a good idea)
Individual liability for individuals with safety responsibilities at firms (although I’d expect this to be politically unpopular on the right)
Creating safe harbours from liability if firms fulfil some set of safety obligations (similar to the California bill), ideally safety obligations that are updated over time and tied to best practice
Requiring insurance to cover liability, and using this to drive better safety practices as firms work to reduce insurance premiums and satisfy insurers’ requirements for coverage (see the sketch after this list)
Tying liability to specific failure modes that we expect to correlate with catastrophic failure modes, perhaps combined with a punitive damages regime; for instance, holding a firm liable (including for punitive damages) if a model causes harm via, say, goal misgeneralisation, or if the firm lacked industry-standard risk management practices
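On the insurance point above, here is a minimal sketch of how a premium tied to safety practices could transmit an incentive. The probabilities, the coverage figure, and the pricing rule are all hypothetical assumptions (real underwriting is far messier):

```python
# Minimal sketch of how mandatory liability insurance could transmit safety incentives.
# All numbers and the pricing rule are hypothetical assumptions, not real underwriting.

def annual_premium(expected_loss: float, loading: float = 0.3) -> float:
    """Charge the expected payout plus a loading for uncertainty, capital, and profit."""
    return expected_loss * (1 + loading)

COVERED_LOSS = 1_000_000_000  # hypothetical coverage limit for a catastrophic incident
P_INCIDENT = {                # hypothetical probabilities the insurer assigns per year
    "baseline": 0.002,
    "with industry-standard risk management": 0.0005,
}

for practice, p in P_INCIDENT.items():
    print(f"{practice}: annual premium about ${annual_premium(p * COVERED_LOSS):,.0f}")

# baseline: annual premium about $2,600,000
# with industry-standard risk management: annual premium about $650,000
# The premium gap is the price signal: if adopting the practices costs less than the
# difference, the insurance requirement effectively pays the firm to adopt them.
```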
To be clear, I’m still sceptical of liability-based solutions and reasonably strongly favour regulatory proposals (where specific liability provisions will still play an important role).
I’m not a lawyer and have no legal training.
I think we should be talking more about potentially denying a frontier AI license to any company that causes a major disaster (within some future licensing regime), where a company’s record before the law passes would be taken into account.
One alternative to liability for the AI companies themselves is strong liability for the companies using AI systems. This does not directly address risks from frontier labs having dangerous AIs in-house, but it helps with risks from AI system deployment in the real world. It indirectly affects labs, because they want to sell their AIs.
A lot of this is the default. For example, Air Canada recently lost a court case after claiming that a chatbot’s promise of a refund wasn’t binding on it. However, there could be related opportunities. Companies using AI systems currently don’t have particularly good ways to assess risks from AI deployment, and if models continue getting more capable while reliability continues lagging, they are likely to be willing to pay an increasing amount for ways to get information on concrete risks, guard against them, or derisk them (e.g., through insurance against their deployed AI systems causing harms). I can imagine a service that sells AI-using companies insurance against certain types of deployment risk, and that could also double as a consultancy / incentive-provider for lower-risk deployments. I’d be interested to chat if anyone is thinking along similar lines.
There are analogies here to pollution. Some countries force industry to post bonds against damage to the local environment. This is a relatively new approach that may be working.
The reason Superfund exists in the US is that liability for pollution can be so severe that a company would simply cease to operate, and the mess would never get cleaned up.
In practice, when it comes to taking environmental risks, it can be better (from the company’s perspective) to burn the train cars of vinyl chloride, creating a catastrophe too expensive for anyone to clean up or even comprehend, than to allow a few gallons to leak, creating an expensive accident that you can actually afford.