Liability is not punishment in and of itself. Liability is the ability to be punished, if it is established in court that one did <whatever type of thing one would be liable for>. What I want to create is de facto machinery that can punish AI companies insofar as they do harm.
Also, I do not find it at all plausible that AI has done no material harm to anyone yet. Surely people have been harmed by hallucinations, deepfakes, etc. And I'm not proposing disproportionate punishments here, just punishments proportionate to the harms. If the benefits of AI greatly exceed the harms (which they clearly do so far), then liability would incentivize AI companies to eat that cost in the short term and find ways to mitigate the harms in the long term.