The “prompt shut down” clause seemed like one of the more important clauses in the SB 1047 bill. I was surprised that other people I talked to didn’t seem to think it mattered that much, and wanted to argue / hear arguments about it.
The clause says AI developers, and compute-cluster operators, are required to have a plan for promptly shutting down large AI models.
People’s objections were usually:
“It’s not actually that hard to turn off an AI – it’s maybe a few hours of running around pulling plugs out of server racks, and it’s not like we’re that likely to be in the sort of hard takeoff scenario where a couple hours’ delay in manually turning it off will make the difference.”
I’m not sure whether this is actually true, but, assuming it is, the shutdown clause still seems to me like one of the more uncomplicatedly good parts of the bill.
Some reasons:
1. I think the ultimate end game for AI governance will require being able to quickly notice and shut down rogue AIs. That’s what it means for the acute risk period to end.
2. In the nearer term, I expect the situations where we need to stop running an AI to be fairly murky. Shutting down an AI is going to be very costly. People don’t like doing costly things. People also don’t like doing things that involve lots of undocumented, complex manual decisions that are going to be a pain. If a company (or compute cluster) doesn’t have an explicit plan for how to shut down an AI, I think they’re a lot less likely to do it – in particular if it’d be a big economic loss, and it’s not entirely obvious they have to.
If a government is trying to impose this cost from the outside, and a company doesn’t want to, they’ll probably make a bunch of arguments about how unreasonable and/or impossible the request is.
3. I also think “shut it all down” is something that might be important to do, and while it’s not currently in the Overton window, it might be later.
I think making “prompt shutdown” a concrete task that companies and governments are thinking about makes it significantly more likely to happen. And I think/hope it’ll serve as a building-block scaffold, such that later both governments and companies will have an easier time considering plans that include “prompt shutdown” as a component.
More “straightforwardly good.”
There’s a lot in the bill that I think is probably good, but that does depend on how things get enforced. For example, I think it’s good to require companies to have a plan to reasonably assure that their AIs are safe. But I’ve heard some people be concerned: “aren’t all SSP-like plans basically fake? is this going to cement some random bureaucratic bullshit rather than actual good plans?” And yeah, that does seem plausible.
I’d take the risk of that on current margins. But “if you’re running a big model, you need to have the capacity to turn it off quickly” just seems like a pretty reasonable, necessary piece of legislation?
Largely agree with everything here.

But I’ve heard some people be concerned: “aren’t all SSP-like plans basically fake? is this going to cement some random bureaucratic bullshit rather than actual good plans?” And yeah, that does seem plausible.
I do think that all SSP-like plans are basically fake, and I’m opposed to them becoming the bedrock of AI regulation. But I worry that people take the premise “the government will inevitably botch this” and conclude something like “so it’s best to let the labs figure out what to do before cementing anything.” This seems alarming to me. Afaict, the current world we’re in is basically the worst case scenario—labs are racing to build AGI, and their safety approach is ~“don’t worry, we’ll figure it out as we go.” But this process doesn’t seem very likely to result in good safety plans either; charging ahead as is doesn’t necessarily beget better policies. So while I certainly agree that SSP-shaped things are woefully inadequate, it seems important, when discussing this, to keep in mind what the counterfactual is. Because the status quo is not, imo, a remotely acceptable alternative either.
Afaict, the current world we’re in is basically the worst case scenario
the status quo is not, imo, a remotely acceptable alternative either
Both of these quotes display types of thinking which are typically dangerous and counterproductive, because they rule out the possibility that your actions can make things worse.
The current world is very far from the worst-case scenario (even if you have very high P(doom), it’s far away in log-odds) and I don’t think it would be that hard to accidentally make things considerably worse.
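To make “far away in log-odds” concrete, here’s a minimal sketch; the probabilities are purely illustrative assumptions, not anyone’s actual estimates:

```python
import math

def log_odds(p: float) -> float:
    """Log-odds (logit) of a probability p."""
    return math.log(p / (1 - p))

# Illustrative numbers only: even a very pessimistic P(doom) of 0.95 sits at
# roughly +2.9 log-odds, and 0.999 at roughly +6.9, while the true worst case
# (P = 1) is at +infinity. In log-odds terms there is always a great deal of
# room left for careless action to make things considerably worse.
print(round(log_odds(0.50), 2))   # 0.0
print(round(log_odds(0.95), 2))   # 2.94
print(round(log_odds(0.999), 2))  # 6.91
```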
I think one alternative here that isn’t just “trust AI companies” is “wait until we have a good Danger Eval, and then get another bit of legislation that specifically focuses on that, rather than hoping that the bureaucratic/political process shakes out with a good set of SSP industry standards.”
I don’t know that that’s the right call, but I don’t think it’s a crazy position from a safety perspective.
I largely agree that the “full shutdown” provisions are great. I also like that the bill requires developers to specify circumstances under which they would enact a shutdown:
(I) Describes in detail the conditions under which a developer would enact a full shutdown.
In general, I think it’s great to help governments understand what kinds of scenarios would require a shutdown, make it easy for governments and companies to enact a shutdown, and give governments the knowledge/tools to verify that a shutdown has been achieved.
If your AI is doing something that’s causing harm to third parties that you are legally liable for... chances are, whatever it is doing, it is doing it at Internet speeds, and even small delays are going to be very, very expensive.
I am imagining that all the people who got harmed after the first minute or so of the AI going rogue are going to be pointing at SB 1047 to argue that you were negligent, and therefore liable for whatever bad thing it did.
With a nod to the recent CrowdStrike incident… if your AI is sending out packets to other people’s Windows systems, and bricking them about as fast as it can send packets through its Ethernet interface, your liability may be expanding rapidly. An additional billion dollars for each hour you don’t shut it down sounds possible.
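For a rough sense of scale, here’s a hypothetical back-of-envelope sketch of that claim; the packet rate, brick rate, and cost per machine are all made-up assumptions for illustration, not figures from the CrowdStrike incident or from the bill:

```python
# Hypothetical back-of-envelope: how quickly liability could grow if a rogue
# AI were bricking third-party machines at network speed. All inputs are
# illustrative assumptions, not real estimates.
packets_per_second = 10_000    # assumed outbound rate of malicious packets
brick_fraction = 0.001         # assumed fraction of packets that brick a machine
cost_per_machine_usd = 50_000  # assumed cleanup/downtime cost per bricked machine

machines_per_hour = packets_per_second * 3600 * brick_fraction
liability_per_hour_usd = machines_per_hour * cost_per_machine_usd

print(f"{machines_per_hour:,.0f} machines bricked per hour")    # 36,000
print(f"${liability_per_hour_usd:,.0f} in liability per hour")  # $1,800,000,000
```

Under those assumed numbers, liability grows by roughly $1.8 billion per hour of delay, so the order of magnitude above doesn’t seem far-fetched.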