Below is a draft of a blog post about why I like AI doom liability. My dream is that people read it and decide either “ah yes, this is the main policy we will support” or “oh, this is bad for a reason Daniel hasn’t noticed, and I’ll tell him why”. I think you’re usually supposed to flesh out posts, but I’m not sure that would add much information in this case.
Why I like AI doom liability
AI doom liability is my favourite approach to AI regulation. I want to sell you all on it.
the basic idea
general approach to problems: sue people for the negative impacts they cause
internalizes externalities
means the people figuring out how to avoid harm are informed and aligned (rather than bureaucrats who are less aware of on-the-ground conditions, trying to look good, or seeking power)
less fucked than criminal law or regulatory law
compare what hits the Supreme Court, which regimes end up violating people’s rights the worst, what’s been more persistent over human history, what causes massive protests, etc.
first-pass approach to AI: sue for damages after AI takes over
you can’t do that: if AI actually takes over, there’s no functioning court left to sue in
so instead, sue for intermediate disasters, with punitive damages scaled by how close the conduct came to causing AI takeover
intuition: pulling liability forward to points where it can actually be paid, for the same underlying conduct (a rough numerical sketch is below, after the references).
also part of the picture: strict liability and liability insurance
See Foom Liability (Hanson, 2023), Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence (Weil, 2024).
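To make the “pulling liability forward” intuition concrete, here is a minimal sketch of the accounting, under the simplifying assumption that a court could estimate how likely the conduct was to cause a takeover versus a survivable intermediate disaster. This is my own toy formalization, not the exact formula from Hanson or Weil, and every name and number in it is made up for illustration.

```python
# Toy sketch (not Hanson's or Weil's exact formula): how large an award after
# an intermediate disaster would need to be so that the developer's expected
# payout matches the full expected harm of its conduct, including the
# un-suable takeover scenario. All numbers are invented.

def punitive_award(p_takeover, takeover_harm, p_intermediate, intermediate_harm):
    """Award to charge when an intermediate disaster actually happens.

    Chosen so that, in expectation,
        p_intermediate * award = p_intermediate * intermediate_harm
                                 + p_takeover * takeover_harm,
    i.e. the compensable event also carries the liability for the
    uncompensable one.
    """
    return intermediate_harm + (p_takeover / p_intermediate) * takeover_harm

# Example: conduct with a 1-in-10,000 chance of takeover (valued here at $100T
# of harm, an arbitrary stand-in) and a 1-in-100 chance of a $10B intermediate
# disaster.
award = punitive_award(
    p_takeover=1e-4,
    takeover_harm=100e12,
    p_intermediate=1e-2,
    intermediate_harm=10e9,
)
print(f"award if the intermediate disaster occurs: ${award:,.0f}")
# ~$1.01 trillion: $10B compensatory plus ~$1T punitive "pulled forward".
```

The key quantity is the ratio p_takeover / p_intermediate: the rarer the observable near-miss relative to the catastrophe, the bigger the punitive multiplier, and estimating that ratio is exactly the hard part handed to juries (see “jury discretion” below).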
why it’s nice
liability is assessed later, when we’re better informed about the risks, versus regulation written now, when we know less
doesn’t require having the right person in the right position
judged by juries informed by lawyers on both sides, rather than by power-hungry, politically constrained officials
we don’t really know the right way to make safe AI right now, so it’s hard to write good rules in advance
works in high-risk worlds and in low-risk worlds, as long as you expect intermediate disasters
intermediate disasters seem plausible because of slow takeoff
more fair: AI companies can’t get away with making egregiously unsafe AI, but they aren’t penalized for doing stuff that’s actually harmless.
difficulties with the proposal:
jury discretion
you could give the jury the theoretically optimal formula, which isn’t easy to plug numbers into, and give them a bunch of discretion in how to apply it
or you could give them a more plug-and-play formula that roughly approximates the optimal one, making things more predictable but less theoretically optimal.
it’s not clear how you want to trade off predictability against theoretical optimality, or even what the trade-off looks like (Hanson’s proposal is somewhat more on the predictable end, but it’s unclear how predictable it actually is).
positive externalities
in a world where research produces positive externalities, it’s a really bad idea to force people to internalize all the negative externalities while they capture only some of the positive ones
one place this is clear: open-source AI. Tons of positive externalities: people get to use AI to do cool stuff, and you can do research on it, which might help figure out how to make AI safer.
this regime, without tweaks, would likely make it economically unviable to open-source large SOTA models. It’s unclear whether that’s optimal.
I don’t know a principled way to deal with this.
Further note: this policy doesn’t work to regulate government-developed AGI, which is a major drawback if you expect the government to develop AGI. It also probably lowers the relative cost for the government to develop AGI, which is a major drawback if you think the private sector would do a better job of responsible AGI development than the government.
I think you could also push to make the government liable as part of this proposal.
You could, but (a) it’s much harder constitutionally in the US (governments can generally only be sued if they consent to being sued, except maybe when other governments are suing them), and (b) the reason for thinking this proposal works is that it models the affected actors as profit-maximizing, which the government probably isn’t.
Oh, also: it would be sad if there were a bunch of frivolous suits under this regime. One way to curb that without giving up much optionality would be to limit such suits to sufficiently large intermediate disasters.
You can’t always use liability to internalise the whole externality, because e.g. you can’t effectively sue a company for more than it has, and for companies that stay afloat through regular fundraising rounds, that may not even be very much. If they’re considering an action that’s a coinflip between “we cause some huge liability” and “we triple the value of our company”, it’s usually going to be correct from a shareholder perspective to take it, no matter the size of the liability (a toy version of the calculation is below).
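Here is a toy version of that coinflip, assuming shareholders’ downside is capped at zero by limited liability; all the figures are invented.

```python
# Sketch of the judgment-proof problem under limited liability.
# All figures invented. A risky action is a coinflip between tripling the
# company's value and incurring a liability far larger than the company.

company_value = 10e9           # $10B firm
upside_value = 3 * company_value
liability = 1e12               # $1T judgment if things go wrong

# Shareholders can't lose more than the company is worth:
downside_to_shareholders = max(company_value - liability, 0.0)   # -> $0

expected_value_if_acting = 0.5 * upside_value + 0.5 * downside_to_shareholders
expected_value_if_not = company_value

print(expected_value_if_acting > expected_value_if_not)  # True: take the gamble
# 0.5 * $30B + 0.5 * $0 = $15B > $10B, no matter how big `liability` is,
# because the downside is already clipped at zero.
```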
Criminal law can increase the deterrent somewhat (many people won’t accept any amount of money for a big enough chance of prison), though obviously it’s not perfect either.
The OP doesn’t emphasize liability insurance enough, but part of the hope is that you can mandate that companies be insured up to $X00 billion, which costs them less than $X00 billion assuming they’re unlikely to be held liable for that much. Then the hope is that the insurance company can say “please don’t do extremely risky stuff, or your premium goes up” (a toy premium calculation is sketched below).
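Here is a rough sketch of that mechanism, under a toy premium model (expected insured loss times a loading factor) that I’m assuming for illustration; the coverage figure is a stand-in for the elided “$X00 billion”, and every number is made up.

```python
# Sketch: a mandated coverage level turns expected liability into an annual
# premium the insurer can condition on the lab's behaviour. Toy premium model
# (expected insured loss times a loading factor); all numbers invented.

def annual_premium(p_disaster, expected_payout, coverage_cap, loading=1.5):
    # Insurer only pays up to the mandated coverage cap.
    insured_loss = min(expected_payout, coverage_cap)
    return loading * p_disaster * insured_loss

coverage_cap = 500e9   # stand-in for the mandated "$X00 billion" of coverage

careful = annual_premium(p_disaster=1e-4, expected_payout=500e9, coverage_cap=coverage_cap)
reckless = annual_premium(p_disaster=1e-2, expected_payout=500e9, coverage_cap=coverage_cap)

print(f"careful lab:  ${careful:,.0f}/yr")    # $75,000,000
print(f"reckless lab: ${reckless:,.0f}/yr")   # $7,500,000,000
# The 100x premium gap is the channel through which the insurer gets to say
# "please don't do extremely risky stuff".
```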