Yes, in a near-miss scenario, there’s no actual harm. There’s nothing to base the liability on—the perpetrator didn’t actually damage the claimant.

I see. The liability proposal isn’t aimed at near-miss scenarios with no actual harm. It is aimed at scenarios with actual harm, but where that harm falls short of extinction and where the conditions contributing to the harm were of the sort that might otherwise contribute to extinction.
You said no one had named “a specific actionable harm that’s less than extinction” and I offered one (the first that came to mind) that seemed plausible, specific, and actionable under Hanson’s “negligent owner monitoring” condition.
To be clear, though, if I thought that governments could just prevent negligent owner monitoring (& likewise with some of the other conditions) as you suggested, I would be in favor of that!
EDIT: Someone asked Hanson to clarify what he meant by “near-miss” such that it’d be an actionable threshold for liability, and he responded:
Any event where A causes a hurt to B that A had a duty to avoid, the hurt is mediated by an AI, and one of those eight factors I list was present.
Oh, I may be fully off-base here. But I’m confused about how existing liability mechanisms don’t apply in cases where A causes hurt to B that A had a duty to avoid, regardless of AI involvement. I don’t think anyone is claiming that AI somehow shields a company from liability.
Ah, re-reading with that lens, it seems the proposal is to add “extra liability” to AI-involved harms, not to create any new liabilities for near-misses. My reaction against this is a lot weaker—I’m on board with a mix of punitive and restorative damages for many legal claims of liability.
I think we’re more or less on the same page now. I am also confused about the applicability of existing mechanisms. My lay impression is that there isn’t much clarity right now.
For example, this uncertainty about who’s liable for harms from AI systems came up multiple times during the recent AI hearings before the US Senate, in the context of Section 230’s shielding of computer service providers from certain liabilities and the extent to which it & other laws extend here. In response to Senator Graham asking about this, Sam Altman straight up said “We’re claiming we need to work together to find a totally new approach. I don’t think Section 230 is even the right framework.”