I think there’s a decent case that SB 1047 would improve Anthropic’s business prospects, so I’m not sure this narrative makes sense. On the one hand, SB 1047 might make it less profitable to run an AGI company, which is bad for Anthropic’s business plan. On the other hand, Anthropic is perhaps the best positioned of all AGI companies to comply with the requirements of SB 1047, and might benefit significantly from its competitors being hampered by the law.
The good-faith interpretation of Anthropic’s argument would be that the new agency created by the bill might be very bad at issuing guidance that actually reduces x-risk, and that you might prefer the decision-making of AI labs, which already have a financial incentive to avoid catastrophes, without additional pressure to follow the exact recommendations of the new agency.
Some quick thoughts on this:
If SB 1047 passes, labs can still do whatever they want to reduce x-risk. This seems additive to me: I would be surprised if a lab said “we think X, Y, and Z are useful for reducing extreme risks, and we would’ve done them if SB 1047 had not passed, but since Y and Z aren’t in the FMD (Frontier Model Division) guidance, we’re going to stop doing Y and Z.”
I think the guidance the agency issues will largely be determined by who it employs. It’s valid to worry that the FMD will simply fail to do a good job because it won’t employ good people, but to me this is more of a reason to ask “how do we make sure the FMD gets staffed with good people who know how to issue good recommendations?”, rather than to conclude “there is a risk the FMD issues bad guidance, therefore we don’t want any guidance.”
I do think that a poorly implemented FMD could cause harm by diverting company attention and resources toward things that are not productive, but IMO this cost seems relatively small compared to the benefits in the worlds where the FMD issues useful guidance. (I haven’t done a quantitative EV calculation on this, though; maybe someone should. I would suspect that even if you give the FMD a 20-40% chance of issuing good guidance and a 60-80% chance of issuing useless guidance, the EV would still be net positive.)
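To gesture at what such an EV calculation might look like, here is a minimal sketch in Python. All the numbers (the benefit-to-cost ratio, the probabilities) are illustrative assumptions for the sake of the example, not estimates anyone has actually defended:

```python
def expected_value(p_good, benefit_good, cost_useless):
    """Expected value of creating the FMD: probability-weighted benefit
    of good guidance, minus the diversion cost in the worlds where the
    guidance turns out to be useless."""
    return p_good * benefit_good - (1 - p_good) * cost_useless

# Assumption: good guidance is worth 10x the attention/resource cost
# imposed by useless guidance. Even at the pessimistic end of the range
# (20% chance of good guidance), the EV comes out positive:
ev = expected_value(p_good=0.2, benefit_good=10.0, cost_useless=1.0)
print(ev)  # 0.2*10 - 0.8*1 = 1.2 > 0
```

The conclusion is obviously sensitive to the assumed benefit-to-cost ratio: if good guidance is only worth, say, 2x the diversion cost, a 20% success probability would make the EV negative, so the argument really rests on how large the upside of useful guidance is.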