What happens if the company just writes and implements a plan which sounds vaguely good but will not, in fact, address the various risks? Probably nothing.
The bill's only enforcement mechanism is that the Attorney General (AG) of California can bring a civil claim, and the penalties are quite limited except for damages. So, in practice, this bill mostly establishes liability enforced by the AG.
So, the way I think this will go is:
The AI lab implements a plan and must provide this plan to the AG.
If an incident occurs which causes massive damages (probably in the ballpark of $500 million, given language elsewhere in the bill), then the AG might decide to sue.
A civil court will decide whether the AI lab had a reasonable plan.
I don’t see why you think “the bill is mostly a recipe for regulatory capture” given that no regulatory body will be established and it de facto does something very similar to the proposal you were suggesting (imposing liability for catastrophes). (It doesn’t require insurance, but I don’t really see why self-insuring is notably different.)
(Maybe you just mean that if a given safety case doesn’t result in the AI lab being sued by the AG, then a precedent will be established that the plan is acceptable? I don’t think not being sued really establishes precedent; from my understanding, that isn’t how liability and similar requirements work in other industries. Or maybe you mean that AI labs will win cases despite having bad safety plans, and that this will set a precedent?)
(To be clear, I’m worried that the bill might be unnecessarily burdensome because it no longer has a limited duty exemption, so the law doesn’t make it clear that weak performance on capability evals can be sufficient to establish a good case for safety. I also think the damage threshold that counts as a “Critical harm” is too low and should maybe be 10x higher.)
Here is the relevant section of the bill discussing enforcement:
The [AG is] entitled to recover all of the following in addition to any civil penalties specified in this chapter:
(1) A civil penalty for a violation that occurs on or after January 1, 2026, in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.
(2) (A) Injunctive or declaratory relief, including, but not limited to, orders to modify, implement a full shutdown, or delete the covered model and any covered model derivatives controlled by the developer.
(B) The court may only order relief under this paragraph for a covered model that has caused death or bodily harm to another human, harm to property, theft or misappropriation of property, or constitutes an imminent risk or threat to public safety.
(3) (A) Monetary damages.
(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.
(4) Attorney’s fees and costs.
(5) Any other relief that the court deems appropriate.
(1) is decently small, (2) is only indirectly expensive, (3) is where the real penalty comes in (note that this is damages), (4) is small, and (5) is probably unimportant (but WTF is (5) supposed to be for?!?).
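To put rough numbers on “decently small”: here’s a toy comparison of the penalty (1) caps against the damages figure above. The $100M training-compute cost is a made-up assumption for illustration, not something from the bill.

```python
# Toy comparison of the penalty (1) caps vs. damages under (3).
# The training-compute cost below is a made-up assumption, not from the bill.

assumed_training_compute_cost = 100e6  # hypothetical: $100M of cloud compute at market prices
critical_harm_damages = 500e6          # ballpark "critical harm" damages figure discussed above

first_violation_cap = 0.10 * assumed_training_compute_cost   # penalty (1), first violation
later_violation_cap = 0.30 * assumed_training_compute_cost   # penalty (1), subsequent violations

print(f"cap on first-violation penalty:      ${first_violation_cap / 1e6:.0f}M")
print(f"cap on subsequent-violation penalty: ${later_violation_cap / 1e6:.0f}M")
print(f"damages in a critical-harm case:     ${critical_harm_damages / 1e6:.0f}M or more")
# Even the 30% cap is small next to the damages (and possible punitive damages)
# available under (3), which is why (3) looks like the real teeth here.
```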
Good argument; I find this at least somewhat convincing. Though it depends on whether penalty (1), the one capped at 10%/30% of training compute cost, would be applied more than once to the same model if the violation isn’t remedied.
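For what it’s worth, here is a toy sketch of how the caps would stack under the reading where an unremedied violation keeps counting as a fresh “subsequent violation” (again assuming a hypothetical $100M training-compute cost; the bill doesn’t spell out how violations are counted):

```python
# Toy calculation: cumulative cap under penalty (1) if an unremedied violation
# can be counted repeatedly as a "subsequent violation".
# The training-compute cost is the same made-up $100M assumption as above.

assumed_training_compute_cost = 100e6


def cumulative_penalty_cap(num_violations: int) -> float:
    """Max total civil penalty under (1): 10% of training compute cost for the
    first violation plus 30% for each subsequent violation."""
    if num_violations <= 0:
        return 0.0
    return (0.10 + 0.30 * (num_violations - 1)) * assumed_training_compute_cost


for n in (1, 2, 5, 10):
    print(f"{n} violation(s): cumulative cap = ${cumulative_penalty_cap(n) / 1e6:.0f}M")
# Under this reading the cap grows without bound, so how courts count a
# continuing violation matters a lot for how much bite penalty (1) has.
```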