Therefore, in the case of an emergency, a compute provider and/or an AI developer can be called upon to shut down the model.
Invoking the kill switch would be costly and painful for the compute provider/AI developer, and I wonder if this would make them slow to pull the trigger. Why not place the kill switch in the regulator’s control, along with the expectation that companies could sue the regulator for damages if the kill switch was invoked needlessly?
Edit: Actually I think this is what is meant by “Hardware-Enabled Governance Mechanisms (HEM)”, and I think the suggestion that the compute provider or AI developer shut down the model is a stop-gap until HEM is widely deployed.