Sure, a stop button doesn’t have the issues I described, as long as it’s used rarely enough. If it becomes commonplace, you should expect effects on safety similar to, e.g., CEQA’s effects on infrastructure innovation. Major projects can only take on so much risk, and the more non-technical risk you add, the less technical novelty will fit into that budget.
This line from the proposed “Responsible AI Act” seems to go much further than a stop button though?
Require advanced AI developers to apply for a license & follow safety standards.
Where do these safety standards come from? How are they enforced?
These same questions apply to stop buttons. Who has the stop button? Random bureaucrats? Congress? Anyone who can file a lawsuit?