We can certainly debate whether liability ought to work this way. Personally I disagree, for reasons others have laid out here, but it’s fun to think through.
Still, it’s worth stating explicitly that, with respect to the motivating problem of AI governance, this is not how liability currently works. Any liability-based strategy for AI regulation must either work within the existing liability framework, or (much less practically) overhaul that framework as its first step.