Even a “small” incident, small in the sense that it doesn’t kill absolutely everybody, could generate enough liability to wipe out Meta or Google. For example, something on the scale of flying a jet airplane into the World Trade Center.
Even if bankrupting Facebook wasn’t enough to cover the actual loss, I think the prospect would be a sufficient deterrent... at least, enough to prevent it happening a second time.
COVID-19 might be influencing our thinking here. Even if COVID wasn’t a lab leak, we now know that such lab leaks are at least possible, and lab accidents that kill tens of millions of people are distinctly possible. That is too big to be covered by tort law.
Should I be concerned that this kind of reliance on tort law only disincentivizes such “small” incidents, and only when they occur in such a way that the offending entity won’t attain control of the future faster than the legal system can resolve a massive and extremely complex case, all while the political system and economy are trying to deal with the incident’s direct aftermath? Because I definitely am.
The version of your concern that I endorse is that this framework wouldn’t work very well in worlds where warning shots are rare (or, more to the point, are expected to be rare by the key decision-makers). It can deter large incidents, but only those associated with smaller incidents that are more likely. If the threat models you’re most worried about are unlikely to produce such near-misses, then the expectation of liability is unlikely to be a sufficient deterrent. It’s not clear to me that there are politically viable policies that would significantly mitigate those kinds of risks, but I plan to address that question more deeply in future work.
Thanks, that makes sense.
The expected prevalence of warning shots is something I really don’t have any sense of. Ideally, of course, I’d like a policy that both increases the likelihood of (or at least doesn’t disincentivize) small, early warning shots on paths that would otherwise lead to large incidents, and also disincentivizes all bad outcomes strongly enough that companies want to avoid them.
The idea with my framework is that punitive damages would only be available to the extent that the most cost-effective risk mitigation measures the AI system developer/deployer could have taken to further reduce the likelihood and/or severity of the practically compensable harm would also tend to mitigate the uninsurable risk. I agree that there’s a potential Goodhart problem here, in which the prospect of liability could give AI companies strong incentives to eliminate warning shots without doing very much to mitigate the catastrophic risk. For this reason, I think it’s really important that the punitive damages formula put heavy weight on the elasticity of the particular practically compensable harm at issue with respect to the associated uninsurable risk.
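To make the elasticity point concrete, here is a minimal sketch in my own notation; it is an illustration of the kind of weighting I have in mind, not a formula from the framework itself. Writing $p$ for the probability of the practically compensable harm and $q$ for the probability of the associated uninsurable catastrophe, the relevant elasticity is roughly

$$\varepsilon = \frac{\partial \ln q}{\partial \ln p},$$

i.e., how much the catastrophic risk shrinks when the cheapest available mitigations shrink the compensable harm. A punitive award that weights this heavily might look like $D_{\text{punitive}} = w(\varepsilon)\cdot \widehat{H}_{\text{uninsurable}}$, where $\widehat{H}_{\text{uninsurable}}$ is an estimate of the expected uninsurable harm and $w$ is increasing in $\varepsilon$, so that compensable harms tightly coupled to the catastrophic risk carry most of the punitive load, while loosely coupled ones carry little (blunting the incentive to merely suppress warning shots).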
“One death is a tragedy, a million deaths is a statistic,” as attributed to Stalin. There is possibly an analogue in tort law, where if you kill one person by negligence their dependents will sue you, but if you kill a million people by negligence no one will dare mention it.
See also: “If you owe the bank a thousand dollars, you have a problem; if you owe the bank a billion dollars, the bank has a problem.”
The moment any corporation knows it has the ability to kill millions to billions of people, or disrupt the world economy, with AI, it becomes a global geopolitical superpower, which can also really change how much it cares about complying with national laws.
It’s a bit like the joke about the asteroid mining business model. 1) Develop the ability to de-orbit big chunks of space rock. 2) Demand money. No mining needed.