I don’t recall much conversation about regulation after Dying with Dignity. At the time, I uncritically accepted the claim that, since this issue was outside the Overton window, regulation just wasn’t an option. I do remember a lot of baleful talk about how we’re going to die.
I just don’t understand how anyone who believed in Dying with Dignity would consider regulation too imperfect a solution. Why would you not try? What are you trying to preserve if you think we’re on track to die with no solution to alignment in sight? Even if you don’t think regulation will accomplish the goal of saving civilization, isn’t shooting your shot anyway what “dying with dignity” means?
I wouldn’t try because most regulations that I can think of—at least in the form our government is likely to pass them—have downsides which I consider worse than their benefits.
I also think that x-risk from AI misalignment is more like a 5% chance than a 95% chance. If heavy AI regulations increase other AI-related x-risks—say, permanent totalitarianism—while negligibly impacting misalignment risk, the EV can easily come out quite negative.
I think the model by which permanent totalitarianism comes about is actually cleaner than the x-risk model involving recursive self-improvement (RSI), and requires a less drastically smart superintelligence, so I think it is worth serious consideration.
I don’t know what particular concrete regulations you have in mind, though. Through what actual means do you want to implement an AI pause, concretely? What downsides do you anticipate from such measures, and how would you mitigate them?