I wouldn’t try, because most regulations I can think of (at least in the form our government is likely to pass them) have downsides that I consider worse than their benefits.
I also think that x-risk from AI misalignment is more like a 5% chance than a 95% chance. If heavy AI regulation increases other AI-related x-risks (say, permanent totalitarianism) while negligibly reducing misalignment risk, the expected value (EV) can easily come out quite negative.
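As a toy illustration of how the sign can flip (the numbers here are invented for the example, not my actual estimates): suppose a regulation trims misalignment risk from 5% to 4.5% but raises the probability of totalitarian lock-in from 1% to 3%. The net change in total x-risk is

$$\Delta P = \underbrace{(0.045 - 0.05)}_{\text{misalignment}} + \underbrace{(0.03 - 0.01)}_{\text{lock-in}} = +0.015,$$

a 1.5-percentage-point increase in overall x-risk, even though the regulation nominally made alignment safer.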
I think the model by which permanent totalitarianism comes about is actually cleaner than the recursive-self-improvement (RSI) x-risk model, and it requires a less drastically smart superintelligence, so it is worth serious consideration.
I don’t know what particular concrete regulations you have in mind, though. Through what actual means do you want to implement an AI pause? What downsides do you anticipate from such measures, and how would you mitigate them?