I find your ‘liability’ section somewhat scary. It sounds concerningly similar to saying the following:
AI companies haven’t actually done any material harm to anyone yet. However, I would like to pretend that they have, and to punish them for these imagined harms, because I think that being randomly punished for no reason will make them improve and be less likely to do actual harms in future.
Liability is not punishment in and of itself. Liability is the ability to be punished, if it is established in court that one did <whatever type of thing one would be liable for>. What I want to create is de facto machinery that can punish AI companies, insofar as they do harm.
Also, I do not find it at all plausible that no material harm has been done to anyone by AI yet. Surely people have been harmed by hallucinations, deepfakes, etc. And I’m not proposing disproportionate punishments here, just punishments proportionate to the harms. If the benefits of AI greatly exceed the harms (which they clearly do so far), then liability would incentivize AI companies to eat that cost in the short term and find ways to mitigate the harms in the long term.