Humans write computer viruses for far more money than the cost of token generation.
Quoting the bill:
“Critical harm” does not include any of the following: (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.
1. Yes, but they also require far more money to do all the good stuff! I’m not saying there isn’t a tradeoff involved here.
2. Yes, I’ve read that. I was saying that this is a pretty low bar, since an ordinary person isn’t good at writing viruses. I’m afraid the bill might have the effect of making competent jailbreakable models essentially illegal, even if they don’t pose an existential risk (in which case that would be necessary, of course), and even if their net value for society is positive, because there is a lot of insecure software out there that a reasonably competent coding AI could exploit, causing >$500 million in damages.
I’m saying that it might be better to tell companies to git gud at computer security, accept that yes, an AI will absolutely try to break their stuff, and that they won’t get to sue Anthropic if something happens.