Curious if you disagree and think those would be good liability rules even if AI progress were frozen, or if you’re viewing this as a sacrifice that you’re willing to make to get a weapon against x-risk?
I don’t think my actual view here quite fits that binary?
Very roughly speaking, these sorts of lawsuits are pushing toward a de-facto rule of “If you want to build AI (without getting sued into oblivion), then you are responsible for solving its alignment problem. If you further want it to be usable by the broad public, then you are responsible for solving the harder version of its alignment problem, in which it must be robust to misuse.”
I can see the view in which that’s unreasonable. Like, if it is both true that these products were never particularly dangerous in the first place and that solving the alignment problem for them (including misuse) is Hard, then yeah, we’d potentially be missing out on a lot of value. On the other hand… if we’re in that world and the value in fact dramatically outweighs the costs, then the efficient solution is for the AI companies to eat the costs. Like, if they’re generating way more value than deepfakes and fake reports and whatnot generate damage, then it is totally fair and reasonable for the companies to make lots of money but also pay damages to the steady stream of people harmed by their products. That’s the nice thing about using liability rather than just outlawing things: if the benefits in fact outweigh the costs, then the AI companies can eat the costs and still generate lots of value.
… Ok, having written that out, I think my actual answer is “it’s just totally reasonable”. The core reason it’s reasonable is Coasean thinking: liability is not the same as outlawing things, so if the upside outweighs the downside then AI companies should eat the costs.
(A useful analogy here is workers’ comp: yes, it’s a pain in the ass, but it does not mean that all companies just stop having employees. It forces the companies to eat costs, and therefore strongly incentivizes them to solve safety problems, and that’s what we want here. If the upside is worth the downside, then companies eat the cost, and that’s also a fine outcome.)
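To make that concrete, here’s a toy version of the arithmetic (a minimal sketch; every number below is made up purely for illustration, not an estimate of anything real):

```python
# Toy Coasean arithmetic with made-up numbers: liability doesn't kill a product
# whose benefits exceed its harms, it just makes the producer internalize the harms.
value_generated = 100_000_000  # hypothetical annual value the AI product creates
harm_caused = 10_000_000       # hypothetical annual damages from misuse (deepfakes, fake reports, ...)

profit_under_liability = value_generated - harm_caused  # the company eats the costs

# Liability only makes the product not worth building when harms exceed value,
# which is exactly the case where we wouldn't want it built anyway.
print(profit_under_liability)      # 90000000
print(profit_under_liability > 0)  # True -> still worth building
```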
Cars are net positive, and also cause lots of harm. Car companies are sometimes held liable for the harm caused by cars, e.g. if they fail to conform to legal safety standards or if they sell cars with defects. More frequently the liability falls on e.g. a negligent driver or is just ascribed to accident. The solution is not just “car companies should pay out for every harm that involves a car”, partly because the car companies also don’t capture all or even most of the benefits of cars, but mostly because that’s an absurd overreach which ignores people’s agency in using the products they purchase. Making cars (or ladders or knives or printing presses or...) “robust to misuse”, as you put it, is not the manufacturer’s job.
Liability for current AI systems could be a good idea, but it’d be much less sweeping than what you’re talking about here, and would depend a lot on setting safety standards which properly distinguish cases analogous to “Alice died when the car battery caught fire because of poor quality controls” from cases analogous to “Bob died when he got drunk and slammed into a tree at 70mph”.
That seems like a useful framing. When you put it like that, I think I agree in principle that it’s reasonable to hold a product maker liable for the harms that wouldn’t have occurred without their product, even if those harms are indirect or involve misuse, because that is a genuine externality, and a truly beneficial product should be able to afford it.
However, I anticipate a few problems that I expect will cause any real-life implementation to fall seriously short of that ideal:
The product maker can only justly be held liable for the difference in harm, compared to the world without that product. For instance, maybe someone used AI to write a fake report, but without AI they would have written a fake report by hand. This is genuinely hard to measure, because sometimes the person wouldn’t have written a fake if they didn’t have such a convenient option, but at the same time, fake reports obviously existed before AI, so AI can’t possibly be responsible for 100% of this problem.
If you assign all liability to the product maker, this will discourage people from taking reasonable precautions. For instance, they might stop making even a cursory attempt to check whether reports look fake, knowing that the AI company is on the hook for the damage. This is (in some cases) far less efficient than the optimal world, where the defender pays for defense as if they were liable for the damage themselves. In principle you could do a thing where the AI company pays for the difference in defense costs plus the difference in harm-assuming-optimal-defense, instead of for the actual harm given the defender’s actual defense, but calculating “optimal defense” and “harm assuming optimal defense” sounds like it would be fiendishly hard even if all parties’ incentives were aligned, which they aren’t. (And you’d have to charge the AI company for defense costs even in situations where no actual attack occurred, and maybe even credit them in situations where the net result is an improvement, to avoid overcharging them overall?) A toy sketch of these rules, with made-up numbers, follows the third problem below.
My model of our legal system—which admittedly is not very strong—predicts that the above two problems are hard to express within our system, that no specific party within our system believes they have the responsibility of solving them, and that therefore our system will not make any organized attempt to solve them. For instance, if I imagine trying to persuade a judge that they should estimate the damage a hand-written fake report would have generated and bill the AI company only for the difference in harm, I don’t have terribly high hopes of the judge actually trying to do that. (I am not a legal expert and am least certain about this point.)
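To make the first two problems concrete, here’s a toy sketch of the liability rules in play for the fake-report example (all figures below are hypothetical placeholders I made up for illustration):

```python
# Toy comparison of liability rules for the fake-report example.
# Every number here is a hypothetical placeholder, not an estimate of real damages.

harm_with_ai = 100_000    # damage actually done by the AI-written fake report
harm_without_ai = 60_000  # damage a hand-written fake would plausibly have done anyway

# Rule 1 (naive): bill the AI company for all harm its product was involved in.
naive_liability = harm_with_ai

# Rule 2 (counterfactual, first problem): bill only the *difference* in harm.
marginal_liability = harm_with_ai - harm_without_ai

# Rule 3 (optimal-defense variant, second problem): charge for the change in defense
# costs plus the change in residual harm assuming the victim defends optimally, so the
# victim's incentive to keep checking for fakes isn't destroyed.
defense_cost_without_ai = 5_000    # cost of checking reports for fakery, pre-AI
defense_cost_with_ai = 15_000      # cost of optimal checking once AI fakes exist
residual_harm_without_ai = 20_000  # harm that slips through optimal checking, pre-AI
residual_harm_with_ai = 40_000     # harm that slips through optimal checking, post-AI

efficient_liability = (defense_cost_with_ai - defense_cost_without_ai) + (
    residual_harm_with_ai - residual_harm_without_ai
)

print(naive_liability, marginal_liability, efficient_liability)  # 100000 40000 30000
```

Even in this toy version, the second and third rules hinge on quantities (the harm a hand-written fake would have done, the cost and effectiveness of “optimal” checking) that no court can directly observe, which is part of why I don’t expect the legal system to actually attempt them.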
(I should probably explain this in more detail, but I’m about to get on a plane, so I’m leaving a placeholder comment. The short answer is that these are all standard points discussed around the Coase theorem, and I should probably point people to David Friedman’s treatment of the topic, but I don’t remember which book it was in.)