Oh, I may be fully off-base here. But I’m confused about how existing liability mechanisms don’t apply in cases where A causes harm to B that A had a duty to avoid, regardless of AI involvement. I don’t think anyone is claiming that AI somehow shields a company from liability.
Ah, re-reading with that lens, it seems the proposal is to add “extra liability” to AI-involved harms, not to create any new liabilities for near-misses. My reaction against this is a lot weaker; I’m on board with a mix of punitive and restorative damages for many legal claims of liability.
I think we’re more or less on the same page now. I am also confused about the applicability of existing mechanisms. My lay impression is that there isn’t much clarity right now.
For example, this uncertainty about who’s liable for harms from AI systems came up multiple times during the recent AI hearings before the US Senate, in the context of Section 230’s shielding of computer service providers from certain liabilities, and the extent to which it and other laws extend here. In response to Senator Graham asking about this, Sam Altman straight up said, “We’re claiming we need to work together to find a totally new approach. I don’t think Section 230 is even the right framework.”