Hanson does not ignore this; he is very clear about it:
it seems plausible that for every extreme scenario like [extinction by foom] there are many more “near miss” scenarios which are similar, but which don’t reach such extreme ends. For example, where the AI tries but fails to hide its plans or actions, where it tries but fails to wrest control or prevent opposition, or where it does these things yet its abilities are not broad enough for it to cause existential damage. So if we gave sufficient liability incentives to AI owners to avoid near-miss scenarios, with the liability higher for a closer miss, those incentives would also induce substantial efforts to avoid the worst-case scenarios.
The purpose of this kind of liability is to provide an incentive gradient pushing actors away from the preconditions of harm. Many of those preconditions are applicable to harms at differing scales. For example, if an actor allowed AI systems to send emails in an unconstrained and unmonitored way, that negligence is an enabler both for automated spear-phishing scams (a “lesser harm”) and for AI-engineered global pandemics.
Do you (or Robin) have any examples in other domains where “near-miss” outcomes that don’t actually result in actionable individual harm are treated as liabilities, especially insurable ones? I can only think of cases (tobacco, environmental regulation) where the harm is aggregated by legislation into regulatory fines, completely separate from a liability framework.
I know it’s fun and edgy to pretend that we can make up some just-so stories about how laws and human legal systems work, and then extend that theory to make it fit our preferences. But it would seem simpler and more direct to just say “government should directly prevent X risky behaviors”.
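Can you re-state that? I find the phrasing of your question confusing.
(Are you saying there is no harm in the near-miss scenarios, so liability doesn’t help? If so I disagree.)
Yes, in a near-miss scenario, there’s no actual harm. There’s nothing to base the liability on—the perpetrator didn’t actually damage the claimant.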
I see. The liability proposal isn’t aimed at near-miss scenarios with no actual harm. It is aimed at scenarios with actual harm, but where that harm falls short of extinction, and where the conditions contributing to it were of the sort that might otherwise contribute to extinction.
You said no one had named “a specific actionable harm that’s less than extinction” and I offered one (the first that came to mind) that seemed plausible, specific, and actionable under Hanson’s “negligent owner monitoring” condition.
To be clear, though, if I thought that governments could just prevent negligent owner monitoring (& likewise with some of the other conditions) as you suggested, I would be in favor of that!
EDIT: Someone asked Hanson to clarify what he meant by “near-miss” such that it’d be an actionable threshold for liability, and he responded:
Any event where A causes a hurt to B that A had a duty to avoid, the hurt is mediated by an AI, and one of those eight factors I list was present.
Oh, I may be fully off-base here. But I’m confused about how existing liability mechanisms don’t already apply in cases where A causes hurt to B that A had a duty to avoid, regardless of AI involvement. I don’t think anyone is claiming that AI somehow shields a company from liability.
Ah, re-reading with that lens, it seems the proposal is to add “extra liability” to AI-involved harms, not to create any new liabilities for near-misses. My reaction against this is a lot weaker—I’m on-board with a mix of punitive and restorative damages for many legal claims of liability.
I think we’re more or less on the same page now. I am also confused about the applicability of existing mechanisms. My lay impression is that there isn’t much clarity right now.
For example, this uncertainty about who’s liable for harms from AI systems came up multiple times during the recent AI hearings before the US Senate, in the context of Section 230’s shielding of computer service providers from certain liabilities and to what extent it & other laws extend here. In response to Senator Graham asking about this, Sam Altman straight up said, “We’re claiming we need to work together to find a totally new approach. I don’t think Section 230 is even the right framework.”
Robin Hanson wants AIs to replace humans. He thinks that is very good, more good than anything else. Every argument he makes regarding AI should be presumed to be his autistic attempt at convincing people to do whatever he thinks will make AIs replace humans faster.
Here’s an example of that.