Notwithstanding the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends, I would bet that the relevant courts would not in fact rule that a bunch of deepfaked child porn counted as “Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive”, where those other things are “CBRN > mass casualties”, “cyberattack on critical infra”, and “autonomous action > mass casualties”. Happy to take such a bet at 2:1 odds.
But there are simpler reasons why that particular hypothetical fails:
Image models are just not nearly as expensive to train, so it’s unlikely that they’d fall under the definition of a covered model to begin with.
Even if someone used a covered multimodal model, existing models can already do this, which brings it under the statute’s own carve-out. See:
(2) “Critical harm” does not include any of the following:
(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.
I’m not sure if you intended the allusion to “the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends”, but if it was aimed at the thread I commented on… what? IMO it is fair game to call out as false the claim that
It only counts if the $500m comes from “cyber attacks on critical infrastructure” or “with limited human oversight, intervention, or supervision....results in death, great bodily injury, property damage, or property loss.”
even if deepfake harms wouldn’t fall under this condition. Local validity matters.
I agree with you that deepfake harms are unlikely to be direct triggers for the bill’s provisions, for similar reasons as you mentioned.
Not your particular comment on it, no.