Thank you. I agree that kind of thing is plausible (but maybe not that particular example—I think this regulation would hit the RL-agents too).
(I think giving regulators a stop button is clearly positive-EV and gallabytes’s concern doesn’t make sense, but I know that’s much weaker than what I asserted above.)
Sure, a stop button doesn’t have the issues I described, as long as it’s used rarely enough. If it’s used too often, you should expect effects on safety similar to, e.g., CEQA’s effects on infrastructure innovation. Major projects can only take on so much risk, and the more non-technical risk you add, the less technical novelty will fit into that budget.
This line from the proposed “Responsible AI Act” seems to go much further than a stop button though?
“Require advanced AI developers to apply for a license & follow safety standards.”
Where do these safety standards come from? How are they enforced?
These same questions apply to stop buttons. Who has the stop button? Random bureaucrats? Congress? Anyone who can file a lawsuit?