Newsom Vetoes SB 1047

It’s over, until such a future time as either we are so back, or it is over for humanity.

Gavin Newsom has vetoed SB 1047.

Newsom’s Message In Full

Quoted text is him, comments are mine.

To the Members of the California State Senate: I am returning Senate Bill 1047 without my signature.

This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models – a state entity – to oversee the development of these models.

It is worth pointing out here that mostly the ‘certain safeguards and policies’ was ‘have a policy at all, tell us what it is and then follow it.’ But there were some specific things that were required, so Newsom is indeed technically correct here.

California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As stewards and innovators of the future, I take seriously the responsibility to regulate this industry.

Cue the laugh track. No, that’s not why California leads, but sure, whatever.

This year, the Legislature sent me several thoughtful proposals to regulate AI companies in response to current, rapidly evolving risks – including threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce. These bills, and actions by my Administration, are guided by principles of accountability, fairness, and transparency of AI systems and deployment of AI technology in California.

He signed a bunch of other AI bills. It is quite the rhetorical move to characterize those bills as ‘thoughtful’ in the context of SB 1047, which (like or hate its consequences) was by far the most thoughtful bill, was centrally a transparency bill, and was clearly an accountability bill. What you call ‘fair’ is up to you I guess.

SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system’s actual risks regardless of these factors. This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.

Yes. This is indeed the key question. Do you target the future more capable frontier models that enable catastrophic and existential harm and require they be developed safely? Or do you let such systems be developed unsafely, and then put restrictions on what you tell people you can do with such systems, with no way to enforce that on users let alone on the systems themselves? I’ve explained over and over why it must be the first one, and focusing on the second is the path of madness that is bad for everyone. Yet here we are.

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

Bold mine. Read that again. The problem, according to Newsom, with SB 1047 was that it did not put enough restrictions on smaller AI models, and this could lead to a ‘false sense of security.’ He claims he is vetoing the bill because it does not go far enough.

Do you believe any of that? I don’t. Would a lower threshold (or no threshold!) on size have made this bill more likely to be signed? Of course not. A more comprehensive bill would have been more likely to be vetoed, not less likely.

Centrally the bill was vetoed, not because it was insufficiently comprehensive, but rather because of one or more of the following:

  1. Newsom was worried about the impact of the bill on industry and innovation.

  2. Industry successfully lobbied to have the bill killed, for various reasons.

  3. Newsom did what he thought helped his presidential ambitions.

You can say Newsom genuinely thought the bill would do harm, whether or not you think this was the result of lies told by various sources. Sure. It’s possible.

You can say Newsom was the subject of heavy lobbying, which he was, and did a political calculation and did what he thought was best for Gavin Newsom. Sure.

I do not buy for a second that he thought the bill was ‘insufficiently comprehensive.’

If it somehow turns out I am wrong about that, I am going to be rather shocked, as for rather different reasons will be everyone who is celebrating that this bill went down. It would represent far more fundamental confusions than I attribute to Newsom.

Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

No, the bill does not restrict ‘basic functions.’ It does not restrict functions at all. The bill only restricts whether the model is safe to release in general. Once that happens, you’re in the clear. Whereas if you regulate by function, then yes, you will put a regulatory burden on even the most basic functions, that’s how that works.

More importantly, restricting on the basis of ‘function’ does not work. That is not how the threat model works. If you have a sufficiently generally capable model it can be rapidly put to any given ‘function.’ If it is made available to the public, it will be used for whatever it can be used for, and you have very little control over that even under ideal conditions. If you open the weights, you have zero control; telling rival nations, hackers, terrorists or other non-state actors they aren’t allowed to do something doesn’t matter. You lack the ability to enforce such restrictions against future models smarter than ourselves, should they arise and become autonomous, as many would inevitably make them. I have been over this many times.

Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

Bold mine. The main thing SB 1047 would have done was to say ‘if you spend $100 million on your model, you have to create, publish and abide by some chosen set of safety protocols.’ So it’s hard to reconcile this statement with thinking SB 1047 is bad.

Newsom clearly wants California to act without the federal government. He wants to act to create ‘proactive guardrails,’ rather than waiting to respond to harm.

The only problem is that he’s buying into an approach that fundamentally won’t work.

This also helps explain his signing other (far less impactful) AI bills.

To those who say there’s no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree. A California-only approach may well be warranted – especially absent federal action by Congress – but it must be based on empirical evidence and science. The U.S. AI Safety Institute, under the National Institute of Science and Technology, is developing guidance on national security risks, informed by evidence-based approaches, to guard against demonstrable risks to public safety. Under an Executive Order I issued in September 2023, agencies within my Administration are performing risk analyses of the potential threats and vulnerabilities to California’s critical infrastructure using AI. These are just a few examples of the many endeavors underway, led by experts, to inform policymakers on AI risk management practices that are rooted in science and fact. And endeavors like these have led to the introduction of over a dozen bills regulating specific, known risks posed by AI, that I have signed in the last 30 days.

Again, he’s clearly going to be signing a bunch of bills, one way or another. It’s not going to be this one, so it’s going to be something else. Be careful what you wish for.

I am committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation. Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right.

For these reasons, I cannot sign this bill.

Sincerely,

Gavin Newsom

Newsom’s Explanation Does Not Make Sense

His central point is not Obvious Nonsense. His central point at least gets to be Wrong: He is saying AI regulation should be based on not putting restrictions on frontier model development, and instead it should focus on restricting particular uses.

But again, if you care about catastrophic risks: That. Would. Not. Work.

He doesn’t understand, decided to act as if he doesn’t understand, or both.

The Obvious Nonsense part is the idea that we shouldn’t require those training big models to publish their safety and security protocols – the primary thing SB 1047 does – because this doesn’t impact small models and thus is insufficiently effective and might give a ‘false sense of security.’

This is the same person who warned he was primarily worried about the ‘chilling effect’ of SB 1047 on the little guy.

Now he says that the restrictions don’t apply to the little guy, so he can’t sign the bill?

He wants to restrict uses, but doesn’t want to find out what models are capable of?

What the hell?

‘Falls short.’ ‘Isn’t comprehensive.’ The bill wasn’t strong enough, says Newsom, so he decided nothing was a better option, weeks after warning about that ‘chilling effect.’

If I took his words to have meaning, I would then notice I was confused.

Sounds like we should have had our safety requirements also apply to the less expensive models made by ‘little tech’ then, especially since those people were lying to try and stop the bill anyway? Our mistake.

Well, his, actually. Or it would be, if he cared about not dying. So could go either way.

Ben Fritz and Preetika Rana: The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and doesn’t take into account whether they are deployed in high-risk situations, he said in his veto message.

Smaller models sometimes handle critical decision-making involving sensitive data, such as electrical grids and medical records, while bigger models at times handle low-risk activities such as customer service.

Kelsey Piper: Is there one single person in the state of California who believes that this is Newsom’s real reason for the veto – SB 1047 isn’t comprehensive enough!

There are reasonable arguments both for and against the bill but this isn’t one of them; there are very good reasons to treat the most expensive models differently including low barriers to entry for startups and small businesses.

Newsom vetoed because he’s in the pocket of lobbyists who pressed him aggressively for a veto; he has no principles and no roadmap for artificial intelligence or anything else, and if there were a more comprehensive bill he’d veto that one too. Come on.

Michael Cohen: Newsom: “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security. Smaller, specialized models may emerge as equally or even more dangerous”.

I’d like to hear *anyone* claim he’s not bullshitting here. He could have easily contacted the author’s office at any point in the process to say, “your bill doesn’t go far enough”. Or just start with rules on bigger models and then amend the bill to be more expansive later.

So Newsom has his reasons for vetoing the bill, and for some reason, he didn’t think it would reflect well on him to share them with us.

The idea that he’d have signed the bill if it was more ‘comprehensive’?

That’s rather Obvious Nonsense, more commonly known as bullshit.

When powerful people align with other power, or do something for reasons that would not sound great if said out loud, and they tell you no, they do not give you the real explanation.

Instead, they make up a reason for it that they hadn’t mentioned to you earlier.

Often they’ll do what Newsom does here, turning a concern they previously harped on on its head. Here, power expresses concern you’ll hurt the little guy, so you exempt the little guy? Power says the response is not sufficiently comprehensive. Veto. Everything else that took place, all the things you thought mattered? They’re suddenly irrelevant, except insofar as they didn’t offer a superior excuse.

To answer the question about whether there is one person who is willing to say they think Newsom’s words are genuine, the answer is yes. That person was Dean Ball, for at least some of the words. I did not see any others.

Newsom’s Proposed Path of Use Regulation is Terrible for Everyone

So what does Newsom say his plan is now?

It sure looks like he wants use-based regulation. Oh no.

The governor announced that he is working with leading AI researchers including Fei-Fei Li, a Stanford University professor who has worked at Google and recently launched a startup called World Labs, to develop new legislation he is willing to support.

Jam tomorrow, I suppose.

Newsom’s announcement: Governor Newsom announced that the “godmother of AI,” Dr. Fei-Fei Li, as well as Tino Cuéllar, member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, will help lead California’s effort to develop responsible guardrails for the deployment of GenAI. He also ordered state agencies to expand their assessment of the risks from potential catastrophic events.

Given he’s centrally consulting Dr. Fei-Fei Li, together with his aim of targeting particular uses, it sounds like a16z did get to Newsom in the end, that he has been subject to regulatory capture (what a nice term for it!), and we have several indications here that he does indeed intend to pursue the worst possible path of targeting use cases of AI rather than the models themselves.

For a relatively smart version of the argument that you should target use cases, here is Timothy Lee, who does indeed realize the risk that use-based regulations will be far more onerous, although he neglects to consider the reasons it flat out won’t work. He cites ‘safety is not a model property,’ the logic of which I’ll address later, but to which the response is essentially ‘not with that attitude, and you’re not going to find it anywhere else in any way you’d find remotely acceptable.’

In other ways, such calls make no sense. If you’re proposing, as he suggests here, ‘require safety in your model if you restrict who can use it, but if you let anyone use and modify the model in any way and use it for anything with no ability to undo or restrict that, then we should allow that, nothing unsafe about that’ then any reasonable person must recognize that proposal as Looney Tunes. I mean, what?

A lot of the usual suspects are saying similar things, renewing their calls for going down exactly that path of targeting wrongful mundane uses, likely motivated in large part by ‘that means if we open the weights then nothing that happens as a result of it would be our responsibility or our fault’ and in large part by ‘that’ll show them.’

Do they have any idea what they are setting up to happen? Where that would inevitably go, and where it can’t go, even on their own terms? Tim has a glimmer, as he mentions at the end of his post. Dean Ball knows and has now chosen to warn about it.

Alas, most have no idea.

This is shaping up to be one of the biggest regulatory own goals in history.

Such folks often think they are being clever or are ‘winning,’ because if we focus on ‘scientifically proven’ harms then we won’t have to worry about existential risk concerns. The safety people will be big mad. That means things are good. No, seriously, you see claims like ‘well the people who advocate for safety are sad SB 1047 failed, which means we should be happy.’ Full zero sum thinking.

Let’s pause for a second to notice that this is insane. The goal is to be at the production possibilities frontier between (innovation, or utility, or progress) and preventing catastrophic and existential harms.

Yes, we can disagree about how best to do that, whether a given policy will be net good, or how much we should value one against the other. That’s fine.

But if you say ‘those who care about safety think today made us less safe, And That’s Wonderful,’ then it seems like you are kind of either an insane person or a nihilistic vindictive f***, perhaps both?

That’s like saying that you know the White Sox must have a great offense, because look at their horrible pitching. And saying that if you want the White Sox to score more runs next year, you should want to ensure they have even worse pitching.

(And that’s the charitable interpretation, where the actual motivation isn’t largely rage, hatred, vindictiveness and spite, or an active desire for AIs to control the future.)

Instead, I would implore you to notice that Newsom made it very clear that the regulations are coming, and to actually ask: If use-based regulations to reduce various mundane and catastrophic risks do come, and are our central strategy – even if you think those risks are fake or not worth caring about – what will that look like? Are you going to be happy about it?

If the ‘little guy’ or fans of innovation think this would go well for them, I would respond: You have not met real world use-based risk-reduction regulations, or you have forgotten what that looks like.

It looks like the EU AI Act. It looks like the EU. Does that help make this clear?

There would be a long and ever growing list of particular things you are not allowed to permit an AI to do, and that you would be required to ensure your AI did do. It will be your responsibility, as a ‘deployer’ of an AI model, to ensure that these things do and do not happen accordingly, whether or not they make any sense in a given context.

This laundry list will make increasingly little sense. It will be ever expanding. It will be ill defined. It will focus on mundane harms, including many things that California is Deeply Concerned about that you don’t care about even a little, but that California thinks are dangers of the highest order. The demands will often not be for ‘reasonable care’ and instead be absolute, and they will often be vague, with lots of room to expand them over time.

You think open models are going to get a free pass from all this, when anyone releasing one is very clearly ‘making it available’ for use in California? What do you think will happen once people see the open models being used to blatantly violate all these use restrictions being laid down?

All the things such people were all warning about with SB 1047, both real and hallucinated, in all directions? Yeah, basically all of that, and more.

Kudos again to Dean Ball in particular for understanding the danger here. He is even more apprehensive than I am about this path, and writes this excellent section explaining what this kind of regime would look like.

Dean Ball: This is not a hypothetical. This is the reality for contractors in the State of California today—one of Governor Newsom’s “use-based” regulations (in this case downstream of an Executive Order he issued that requires would-be government contractors to document all their uses of generative AI).

I fear this is the direction that Western policymakers are sleepwalking toward if we do not make concerted effort. Every sensible person, I think, understands that this is no way to run a civilized economy.

Or do they?

I certainly agree this is no way to run a civilized economy. I also know that one of the few big civilized economies, the EU, is running in exactly this way across the board. And if every sensible person understood this, that would rule out a large percentage of SB 1047 opponents, as well as Gavin Newsom, as potentially sensible people.

It would be one thing if that approach greatly reduced existential risk at great economic cost, and there was no third option available. Then we’d have to talk price and make a tough decision.

Newsom’s Proposed Path of Use Regulation Doesn’t Prevent X-Risk

Instead, it does more damage, without the benefits. What does such an approach do about actual existential risks from AI, by any method other than ‘be so damaging that the entire AI industry is crippled’?

It does essentially nothing, plausibly making us actively less safe. The thing that is dangerous is not any particular ‘use’ of the models. It is creating or causing there to exist AI entities that are highly capable and especially ones that are smarter than ourselves. This approach lets that happen without any supervision or precautions, indeed encourages it.

That is not going to cut it. Once the models exist, they are going to get deployed in the ways that are harmful, and do the harmful things, with or without humans intending for that to happen (and many humans do want it to happen). You can’t take the model back once that happens. You can’t take the damage back once that happens. You can’t un-exfiltrate the model, or get it back under control, or get the future back under control. Danger and ability to cause catastrophic events are absolutely model properties. The only ones who can possibly hope to prevent this from happening without massive intrusions into everything are the developers of the model.

If you let people create, and especially if you allow them to open the weights of, AI models that would be catastrophically dangerous if deployed in the wrong ways, while telling people ‘if you use it the wrong way we will punish you’?

Are you telling me that has a snowball’s chance in hell of having them not deploy the AI in the wrong ways? Especially once the wrong way is as simple as ‘give it a maximizing instruction and access to the internet?’ When they’re as or more persuasive than we are? When on the order of 10% of software engineers would welcome a loss of human control over the future?

Whereas everyone who wants to do anything actually useful with AI, the same as people who want to do almost any other useful thing in California, would now face an increasing set of regulatory restrictions and requirements that cripple the ability to collect mundane utility.

All you are doing is disrupting using AI to actually accomplish things. You’re asking for the EU AI Act. And then that, presumably, by default and in some form, is what you are going to get if we go down this path. Notice the other bills Newsom signed (see section below) and how they start to impose various requirements on anyone who wants to use AI, or even wants to do various tech things.

Ashwinee Panda: It’s remarkably prescient that Newsom’s veto calls out the bill for focusing on large models; indeed many of the capabilities that can cause havoc will start appearing in small models as more people start replicating the distillation process that frontier labs have been using.

Teortaxes: Or rather: SB1047 will come back stronger than everyone asking for a veto wanted.

It won’t come back ‘stronger,’ in this scenario. It will come back ‘wrong.’

Also note that one of SB 1047’s features was that only relative capabilities were targeted (even more so before the limited duty exception was forcibly removed by opponents), whereas a regime where ‘small models are dangerous too’ is central to thinking will hold models and any attempt to ‘deploy’ them to absolute standards of capability by default, rather than asking whether they cause or materially enable something that couldn’t have otherwise been done, or asking whether your actions were reasonable.

Note how other bills didn’t say ‘take reasonable care to do or prevent X,’ they mostly said ‘do or prevent X’ full stop and often imposed ludicrous standards.

Newsom Says He Wants to Regulate Small Entrepreneurs and Academia

Nancy Pelosi is very much not on the same page as Gavin Newsom.

Nancy Pelosi: AI springs from California. Thank you, @CAgovernor Newsom, for recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia – not big tech – to dominate.

Arthur Conmy: Newsom: we also need to regulate small models and companies. Pelosi: thanks for not regulating small models and companies.

Miles Brundage: Lol – Newsom’s letter says it is *bad* there’s a carveout for small models (which was intended as a proxy for small companies). Regardless of your views on the bill, CA Democrats do not seem to be trying particularly hard to coordinate + show there was some principle here.

Pelosi did not stop to actually parse Newsom’s statement. But that cannot surprise us, since she also did not stop to parse SB 1047, a bill that would not have impacted ‘small entrepreneurs’ or ‘academia’ in any way at all.

Whereas Newsom specifically called out the need to check ‘deployers’ of even small models for wrong use cases, an existential threat to both groups.

Samuel Hammond: Instead of focusing on frontier models where the risk is greatest, Newsom wants a bill that covers *all* AI models, big and small.

Opponents of SB1047 will regret not accepting the narrow approach when they had the chance. This is what “safety isn’t a model property” gets you.

Having shot down the bill tailored to whistleblowers and catastrophic risk, California’s next attempt will no doubt be a SAG-AFTRA bill from hell.

Dean Ball: SB 1047 co-sponsor threatens next bill, this time with “new allies,” by which she means, basically, the people who are going to shut down american ports next week. (@hamandcheese isn’t wrong that worse bills are possible—we just need to be smarter, friendlier, and less cynical).

If that’s the way things go, as is reasonably likely, then you are going to wish, so badly, that you had instead helped steer us towards a compute-based and model-based regime that outright didn’t apply to you, that was actually well thought out and debated and refined in detail, back when you had the chance and the politics made that possible.

What If Something Goes Really Wrong?

Then there’s the question of what happens if a catastrophic event did occur. In which case, things plausibly spin out of control rather quickly. Draconian restrictions could result. It is very much in the AI industry’s interest for such events to not happen.

That’s all independent of the central issue of actual existential risks, which this veto makes more likely.

I am saying, even if you don’t think the existential risks are that big a deal, that you should be very worried about Newsom’s statement, and where all of this is heading.

So if you are pushing the rhetoric of use-based regulation, I urge you to reconsider. And to try and steer things towards regulatory focus on the model layer and compute thresholds, or development of new other ideas that can serve similar purposes, ‘while you have the chance.’

Could Newsom Come Around?

None of this means Newsom couldn’t come around in the future.

There are scenarios where this could work out well next year. Here are some of them:

  1. Newsom’s political incentives could change, or we could make them change.

  2. In particular, the rising salience of AI, or particular AI incidents, could make it no longer worthwhile to care so much about certain particular interests.

  3. Also in particular, GPT-5 or another 5-level model could change everything.

  4. The people influencing Newsom could change their minds, especially when they see what the alternative regulatory regime starts shaping up to look like, and start regretting not being more careful what they wished for.

  5. Newsom could be genuinely misled or confused about how all of this works, and be confused about the wisdom of targeting the use layer versus the model layer, or not understand it, and then later come to understand it, as he learns more and the situation changes.

  6. Newsom currently doesn’t seem to buy existential risk arguments. He might change his mind about that.

  7. Newsom could genuinely want a highly comprehensive bill, and work in good faith to get one and understand the issues for next session.

  8. There might have been other unique factors in play with this bill. Perhaps (for example, and as some have speculated) there were big political forces that quietly didn’t want to give Wiener a big win here. We can’t know.

Newsom clearly wants California to ‘lead’ on AI regulation, and pass various proactive bills in advance of anything going wrong. He is going to back and sign some bills, and those bills will be more impactful than the ones he signed this session. The question is, will they be good bills, sir?

Here is Scott Wiener’s statement on the veto. He’s not going anywhere.

Scott Wiener: This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet. The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing.

While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public.

This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.

This veto is a missed opportunity for California to once again lead on innovative tech regulation – just as we did around data privacy and net neutrality – and we are all less safe as a result.

At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage. Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight.

Leaders from across civil society, from Hollywood to women’s groups to youth activists, found their voice to advocate for commonsense, proactive technology safeguards to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community contemplates the best ways to protect the public from the risks presented by AI.

California will continue to lead in that conversation – we are not going anywhere.

Here’s Dan Hendrycks.

Dan Hendrycks: Governor Gavin Newsom’s veto of SB 1047 is disappointing. This bill presented a reasonable path for protecting Californians and safeguarding the AI ecosystem, while encouraging innovation.

But I am not discouraged. The bill encouraged collaboration between industry, academics and lawmakers, and has begun moving the conversation about AI safety into the mainstream, where it belongs. AI developers are now more aware that they already have to exercise reasonable care lest they be found liable.

SB 1047 galvanized a wide-reaching bipartisan coalition of supporters, making clear that a regulatory approach that drives AI safety and innovation is not only possible, but lies on the immediate horizon.

Discourse and tactics around the bill from some in the industry have been disheartening. It is disgraceful that many opponents of SB 1047 trafficked in misinformation to undermine this bill, rather than engaging in a factual debate. SB 1047 has revealed that some industry calls for responsible AI are nothing more than PR aircover for their business and investment strategies. This is a key lesson as we continue to advocate for AI safety measures.

Timing is Everything

I have seen exactly one person make the claim that Newsom isn’t bullshitting, and that Newsom’s words have meaning.

That person is Dean Ball, who also pointed out the detail that Newsom vetoed at a time designed to cause maximum distraction.

Dean Ball: Veto was obvious to everyone paying attention [after the comments about a chilling effect] (prediction markets were low-iq throughout, maybe not enough trading), and newsom probably timed it to be during the 49ers game (maximal public inattention).

Samuel Hammond: Why so cynical.

Dean Ball: Because politicians’ tactical behavior is different from our own strategic behavior.

Samuel Hammond: So it’s cynicism when you take Newsom’s call for a bill that applies to the entire industry at face value, but not when you armchair theorize that Newsom tactically vetoed SB1047 on a game night to keep the rubes distracted.

🤔

Why would Newsom want to make his veto as quiet as possible, especially if he wanted to dispel any possible ‘chilling effect’?

Because the bill was very popular, so he didn’t want people to know he vetoed it.

There were various people who chimed in to support SB 1047 in the last few days. I did not stop to note them. Nor did I note the latest disingenuous arguments trotted out by bill opponents. It’s moot now, and it brings me great joy to ignore all that.

We should note, however: For the record, yes, the bill was very popular. AIPI collaborated with Dean Ball to craft a more clearly neutral question wording, including randomizing argument order.

AIPI: Key findings remain consistent with our past polls:

– 62% support vs. 25% oppose SB1047

– 54% agree more with bill proponents vs. 28% with opponents

– Bipartisan support: 68% Democrats, 58% independents, 53% Republicans favor the bill.

Most striking is that these results closely mirror the previous AIPI poll results, which we now know were not substantially distorted by question wording. They previously found +39 support, 59%-20%. The new result is +37 support, 62%-25%, well within the margin of error versus the old results from AIPI.

The objection that this is a low-salience issue where voters haven’t thought about it and don’t much care is still highly valid. And you could reasonably claim, as Ball says explicitly, that voter preferences shouldn’t determine whether the bill is good or not.

We should look to do more of this adversarial collaborative polling in the future. We should also remember it when estimating the ‘house effect’ and ‘pollster rating’ of AIPI on such issues, and when we inevitably once again see claims that their wordings are horribly biased even when they seem clearly reasonable.

Also, this from Anthropic’s Jack Clark seems worth noting:

Jack Clark: While the final version of SB 1047 was not perfect, it was a promising first step towards mitigating potentially severe and far reaching risks associated with AI development.

We think the core of the bill – mandating developers produce meaningful security and safety policies about their most powerful AI systems, and ensuring some way of checking they’re following their own policies – is a prerequisite for building a large and thriving AI industry.

To get an AI industry that builds products everyone can depend on will require lots of people to work together to figure out the right rules of the road for AI systems – it is welcome news that Governor Newsom shares this view.

Anthropic will talk to people in industry, academia, government, and safety to find a consensus next year and do our part to ensure whatever policy we arrive at appropriately balances supporting innovation with averting catastrophic risks.

Jack Clark is engaging in diplomacy and acting like Newsom was doing something principled and means what he says in good faith. That is indeed the right move for Jack Clark in this spot.

I’m not Jack Clark.

What Did the Market Have to Say?

Gavin Newsom did not do us the favor of vetoing during market hours. So we cannot point to the exact moment of the veto and measure the impact on various stocks, such as Nvidia, Google, Meta and Microsoft.

That would have been the best way to test the impact of SB 1047 on the AI industry. If SB 1047 were such a threat, those stocks would have gone up on news of the veto. If they did not go up, that would mean the veto wasn’t impactful.

There is the claim that the veto was obvious given Newsom’s previous comments, and thus priced in.

There are two obvious responses.

  1. There was a Polymarket (and Manifold) prediction market on the result, and they very much did not think the outcome was certain. Why did such folks not take the Free Money?

  2. When Gavin Newsom made those previous comments, did the markets move? On September 17, SB 1047’s chances declined from 46% to 20% on Polymarket. You absolutely could not tell, looking at stock price charts, that this was the day it happened. There were no substantial price movements at all.

Then, when the market opened on Monday the 30th, after the veto, again there was no major price movement. This is complicated by potential impact from Spruce Pine, and potential damage to our supply chains for quartz for semiconductors there, but it seems safe to say that nothing major happened here.

The combined market reaction, in particular the performance of Nvidia, is incompatible with SB 1047 having a substantial impact on the general ecosystem. You can in theory claim that Google and Microsoft benefit from a bill that exclusively puts restrictions on a handful of big companies. And you can claim Meta’s investors would actually be happy to have Zuckerberg think better of what they think is his open model folly. But any big drop in AI innovation and progress would hurt Nvidia.

If you think that this was not the right market test, what else would be a good test instead? What market provides a better indication?

What Newsom Did Sign

The one that most caught my eye was his previous decision to sign AB 2013, requiring training data transparency. Starting on January 1, 2026, before making a new AI system or modification of an existing AI system publicly available for Californians to use, the developer or service shall post documentation regarding the data used to train the system. The bill is short, so here’s the part detailing what you have to post:

(a) A high-level summary of the datasets used in the development of the generative artificial intelligence system or service, including, but not limited to:

  1. The sources or owners of the datasets.

  2. A description of how the datasets further the intended purpose of the artificial intelligence system or service.

  3. The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets.

  4. A description of the types of data points within the datasets. For purposes of this paragraph, the following definitions apply: (A) As applied to datasets that include labels, “types of data points” means the types of labels used. (B) As applied to datasets without labeling, “types of data points” refers to the general characteristics.

  5. Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain.

  6. Whether the datasets were purchased or licensed by the developer.

  7. Whether the datasets include personal information, as defined in subdivision (v) of Section 1798.140.

  8. Whether the datasets include aggregate consumer information, as defined in subdivision (b) of Section 1798.140.

  9. Whether there was any cleaning, processing, or other modification to the datasets by the developer, including the intended purpose of those efforts in relation to the artificial intelligence system or service.

  10. The time period during which the data in the datasets were collected, including a notice if the data collection is ongoing.

  11. The dates the datasets were first used during the development of the artificial intelligence system or service.

  12. Whether the generative artificial intelligence system or service used or continuously uses synthetic data generation in its development. A developer may include a description of the functional need or desired purpose of the synthetic data in relation to the intended purpose of the system or service.

(b) A developer shall not be required to post documentation regarding the data used to train a generative artificial intelligence system or service for any of the following:

  1. A generative artificial intelligence system or service whose sole purpose is to help ensure security and integrity. For purposes of this paragraph, “security and integrity” has the same meaning as defined in subdivision (ac) of Section 1798.140, except as applied to any developer or user and not limited to businesses, as defined in subdivision (d) of that section.

  2. A generative artificial intelligence system or service whose sole purpose is the operation of aircraft in the national airspace.

  3. A generative artificial intelligence system or service developed for national security, military, or defense purposes that is made available only to a federal entity.
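The twelve required items in subdivision (a) amount to a fixed disclosure schema. As a minimal sketch, assuming Python, with every field name invented for illustration (the statute mandates the content of the posting, not any particular format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingDataDisclosure:
    """Hypothetical schema mirroring AB 2013's twelve required items.

    Field names are illustrative only; the law does not prescribe a format.
    """
    sources_or_owners: list            # item 1: who the datasets came from
    purpose_description: str           # item 2: how the data furthers the system's purpose
    datapoint_count_range: tuple       # item 3: ranges/estimates are explicitly allowed
    datapoint_types: list              # item 4: label types, or general characteristics
    contains_protected_ip: bool        # item 5: copyright/trademark/patent material
    purchased_or_licensed: bool        # item 6
    contains_personal_information: bool      # item 7 (per Section 1798.140(v))
    contains_aggregate_consumer_info: bool   # item 8 (per Section 1798.140(b))
    cleaning_description: Optional[str]      # item 9: None if no cleaning/processing
    collection_period: tuple           # item 10: (start, end); end may be "ongoing"
    first_used: str                    # item 11: date first used in development
    uses_synthetic_data: bool          # item 12

# A hypothetical filled-in disclosure for a web-scraped text corpus.
disclosure = TrainingDataDisclosure(
    sources_or_owners=["Common Crawl"],
    purpose_description="General web text for next-token prediction.",
    datapoint_count_range=(10**9, 10**10),
    datapoint_types=["unlabeled web documents"],
    contains_protected_ip=True,
    purchased_or_licensed=False,
    contains_personal_information=True,
    contains_aggregate_consumer_info=False,
    cleaning_description="Deduplication and quality filtering.",
    collection_period=("2008-01", "ongoing"),
    first_used="2023-06",
    uses_synthetic_data=False,
)
```

Note that even this toy example has to answer item 5 with “yes, protected material” and item 6 with “no, not licensed,” which previews the trouble discussed below.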

This is not the most valuable transparency we could get. In particular, the information is posted at system release rather than before training, so by the time it is posted the damage will typically already be largely done from an existential risk perspective.

However, this is potentially a huge problem.

In particular: You have to post ‘the sources or owners of the data sets’ and whether you had permission from the owners to use those data sets.

Right now, the AI companies use data sources they don’t have the rights to, and count on the ambiguity involved to protect them. If they have to admit (for example) ‘I scraped all of YouTube and I didn’t have permission’ then that makes it a lot easier to cause trouble in response. It also makes it a lot harder, in several senses, to justify not making such trouble, as failure to enforce copyright endangers that copyright, which is (AIUI, IANAL, etc) why owners often feel compelled to sue when violations are a little too obvious and prominent, even if they are fine with a particular use.

The rest of it seems mostly harmless, for example I presume everyone is going to answer #2 with something only slightly less of a middle finger than ‘to help the system more accurately predict the next token’ and #9 with ‘Yes we cleaned the data, so that bad data wouldn’t corrupt the system.’

What is a ‘substantial modification’ of a system? If you fine-tune a system, does that count? My assumption would mostly be yes, and you mostly just mumble ‘synthetic data’ as per #12?

Everyone’s favorite regulatory question is, ‘what about open source’? The bill does not mention open source or open models at all, instead laying down rules everyone must follow if they want to make a model available in California. Putting something on the open internet for download makes it available in California. So any open model will need to be able to track and publish all this information, and anyone who modifies the system will have to do so as well, although they will have the original model’s published information to use as a baseline.

What else we got? We get a few bills that regularize definitions, I suppose. Sure.

Otherwise, mostly a grab bag of ‘tell us this is AI’ and various concerns about deepfakes and replicas.

  • AB 1008 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Clarifies that personal information under the California Consumer Privacy Act (CCPA) can exist in various formats, including information stored by AI systems. (previously signed)

  • AB 1831 by Assemblymember Marc Berman (D-Menlo Park) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.

  • AB 1836 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Prohibits a person from producing, distributing, or making available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent, except as provided. (previously signed)

  • AB 2013 by Assemblymember Jacqui Irwin (D-Thousand Oaks) – Requires AI developers to post information on the data used to train the AI system or service on their websites. (previously signed)

I covered that one above.

  • AB 2355 by Assemblymember Wendy Carrillo (D-Los Angeles) – Requires committees that create, publish, or distribute a political advertisement that contains any image, audio, or video that is generated or substantially altered using AI to include a disclosure in the advertisement disclosing that the content has been so altered. (previously signed)

  • AB 2602 by Assemblymember Ash Kalra (D-San Jose) – Provides that an agreement for the performance of personal or professional services which contains a provision allowing for the use of a digital replica of an individual’s voice or likeness is unenforceable if it does not include a reasonably specific description of the intended uses of the replica and the individual is not represented by legal counsel or by a labor union, as specified. (previously signed)

  • AB 2655 by Assemblymember Marc Berman (D-Menlo Park) – Requires large online platforms with at least one million California users to remove materially deceptive and digitally modified or created content related to elections, or to label that content, during specified periods before and after an election, if the content is reported to the platform. Provides for injunctive relief. (previously signed)

  • AB 2839 by Assemblymember Gail Pellerin (D-Santa Cruz) – Expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content from 60 days to 120 days, amongst other things. (previously signed)

  • AB 2876 by Assemblymember Marc Berman (D-Menlo Park) – Requires the Instructional Quality Commission (IQC) to consider including AI literacy in the mathematics, science, and history-social science curriculum frameworks and instructional materials.

  • AB 2885 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Establishes a uniform definition for AI, or artificial intelligence, in California law. (previously signed)

  • AB 3030 by Assemblymember Lisa Calderon (D-Whittier) – Requires specified health care providers to disclose the use of GenAI when it is used to generate communications to a patient pertaining to patient clinical information. (previously signed)

  • SB 896 by Senator Bill Dodd (D-Napa) – Requires CDT to update report for the Governor as called for in Executive Order N-12-23, related to the procurement and use of GenAI by the state; requires OES to perform a risk analysis of potential threats posed by the use of GenAI to California’s critical infrastructure (w/ high-level summary to Legislature); and requires that the use of GenAI for state communications be disclosed.

  • SB 926 by Senator Aisha Wahab (D-Silicon Valley) – Creates a new crime for a person to intentionally create and distribute any sexually explicit image of another identifiable person that was created in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted, under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress. (previously signed)

  • SB 942 by Senator Josh Becker (D-Menlo Park) – Requires the developers of covered GenAI systems to both include provenance disclosures in the original content their systems produce and make tools available to identify GenAI content produced by their systems. (previously signed)

  • SB 981 by Senator Aisha Wahab (D-Silicon Valley) – Requires social media platforms to establish a mechanism for reporting and removing “sexually explicit digital identity theft.” (previously signed)

  • SB 1120 by Senator Josh Becker (D-Menlo Park) – Establishes requirements on health plans and insurers applicable to their use of AI for utilization review and utilization management decisions, including that the use of AI, algorithm, or other software must be based upon a patient’s medical or other clinical history and individual clinical circumstances as presented by the requesting provider and not supplant health care provider decision making. (previously signed)

  • SB 1288 by Senator Josh Becker (D-Menlo Park) – Requires the Superintendent of Public Instruction (SPI) to convene a working group for the purpose of exploring how artificial intelligence (AI) and other forms of similarly advanced technology are currently being used in education. (previously signed)

  • SB 1381 by Senator Aisha Wahab (D-Silicon Valley) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.

Paths Forward

Wait till next year, as they say. This is far from over.

This raises the importance of maintaining the Biden Executive Order on AI, which at least gives us a minimal level of transparency into what is going on. If it were indeed repealed, as Trump has promised to do on day one, we would be left relying, for even a minimum of transparency, on voluntary commitments from the top AI labs – commitments that Meta and other bad actors are unlikely to make and honor.

The ‘good’ news is that Gavin Newsom is clearly down for regulating AI.

The bad news is that he wants to do it in the wrong way, by imposing various requirements on those who deploy and use AI. That doesn’t protect us against the threats that matter most. It can only protect us against the mostly mundane harms, which we can address over time as the situation changes.

And the cost of such an approach, in terms of innovation and mundane utility, risks being extremely high – exactly the ‘little guys’ and academics who were entirely exempt from SB 1047 would likely now be hit the hardest.

If we cannot do compute governance, and we cannot do model-level governance, then I do not see an alternative solution. I only see bad options, a choice between an EU-style regime and doing essentially nothing.

The stage is now potentially set for the worst possible outcomes.

There will be great temptation for AI notkilleveryoneism advocates to throw their lot in with the AI ethics and mundane harm crowds.

Rob Wiblin: Having failed to get up a narrow bill focused on frontier models, should AI x-risk folks join a popular front for an Omnibus AI Bill that includes SB1047 but adds regulations to tackle union concerns, actor concerns, disinformation, AI ethics, current safety, etc?

Dean Ball: The AI safety movement could easily transition from being a quirky, heterodox, “extremely online” movement to being just another generic left-wing cause. It could even work.

But I hope they do not. As I have written consistently, I believe that the AI safety movement, on the whole, is a long-term friend of anyone who wants to see positive technological transformation in the coming decades. Though they have their concerns about AI, in general this is a group that is pro-science, techno-optimist, anti-stagnation, and skeptical of massive state interventions in the economy (if I may be forgiven for speaking broadly about a diverse intellectual community).

It is legitimate to have serious concerns about the trajectory of AI: the goal is to make heretofore inanimate matter think. We should not take this endeavor lightly. We should contemplate potential future trajectories rather than focusing exclusively on what we can see with our eyes—even if that does not mean regulating the future preemptively. We should not assume that the AI transformation “goes well” by default. We should, however, question whether and to what extent the government’s involvement helps or hurts in making things “go well.”

I hope that we can work together, as a broadly techno-optimist community, toward some sort of consensus.

I am 110% with Dean Ball here.

Especially: The safety community that exists today, that is concerned with existential risks, really is mostly techno-optimists. This is a unique opportunity, while everyone on all sides is a techno-optimist, and also rather libertarian, to work together to find solutions that work. That window, where the techno-optimist non-safety community has a dancing partner that can and wants to actually dance with them, is going to close.

From the safety side’s perspective, when deciding whom to work with going forward, one can make common cause with those whose concerns differ from one’s own. If others want to put up stronger precautions against deepfakes, voice clones, copyright infringement or other mundane AI harms than I think is ideal, or to make those requests more central, there has to be room for compromise when doing politics – provided you also get what you need. One cannot always insist on a perfect bill.

What we must not do is exactly what so many people lied and said SB 1047 was doing – which is to back a destructive bill exactly because it is destructive. We need to continue to recognize that imposing costs is a cost, doing damage is damaging, destruction is to be avoided. Some costs may be necessary along the way, but the plan cannot be to destroy the village in order to save it.

Even if we successfully work together to have those who truly care about safety insist upon only backing sensible approaches, events may quickly be out of our hands. There are a lot more generic liberals, or generic conservatives, than there are heterodox deeply wonky people who care deeply about us all not dying and the path to accomplishing that.

There is the potential for those other crowds to end up writing such bills entirely without the existential risk mitigations and have that be how all of this works, especially if opposition forces continue to do their best to poison the well about the safety causes that matter and those who advocate to deal with them.

Alternatively, one could dream that now that Newsom’s concerns have been made clear, those concerned about existential risks might decide to come back with a much stronger bill that indeed does target everyone. That is what Newsom explicitly said he wants, maybe you call his bluff, maybe it turns out he isn’t fully bluffing. Maybe he is capable of recognizing a policy that would work, or those who would support such a policy. There are doubtless ways to use the tools and approaches Newsom is calling for to make us safer, but it isn’t going to be pretty, and those who opposed SB 1047 are really, really not going to like them.

Meanwhile, the public, in the USA and in California, really does not like AI, is broadly supportive of regulation, and that is not going to change.

Also it’s California, so there’s some chance this happens, seriously please don’t do it, nothing is so bad that you have to resort to a ballot proposition, choose life:

Daniel Eth: I’ll just leave this here (polling from AIPI a few days ago, follow up question on how people would vote in the next tweet):

[Image: AIPI polling results]

Thus I reiterate the warning: SB 1047 was probably the most well-written, most well-considered and most light touch bill that we were ever going to get. Those who opposed it, and are now embracing the use-case regulatory path as an alternative thinking it will be better for industry and innovation, are going to regret that. If we don’t get back on the compute and frontier model based path, it’s going to get ugly.

There is still time to steer things back in a good direction. In theory, we might even be able to come back with a superior version of the model-based approach, if we all can work together to solve this problem before something far worse fills the void.

But we’ll need to work together, and we’ll need to move fast.