Two different reasons you might have more pushback, from two different groups of people:
On one hand, one set of people might have read List of Lethalities / Dying with Dignity a year or two ago and thought, “Dang, this seems really badly wrong.” But there wasn’t much reason to argue against it then, because it wasn’t moving public opinion; people on the internet being wrong are gonna keep being wrong. Now that public opinion might actually be moved toward regulation, people who thought they had better things to do than correct someone being wrong on the internet are speaking up against such measures. So that’s some opposition.
On the other hand, another set of people might have agreed with List of Lethalities, and did want AI regulation or something like the pause, but were operating in far mode about such regulation: regulation was a thing that might not happen, so you could just imagine it happening in an ideal and perfect way, without considering how regulations actually tend to be implemented by governments. Now that regulation might actually happen, you have to operate in near mode and consider the actual contingencies of such acts, where it becomes obvious that the government can fuck up something as simple as a TikTok ban, let alone more complicated things like COVID response. I think AI regulation becomes much less attractive when considered in near mode. So that would likely produce some more opposition, even from people who were previously in favor.
I don’t recall much conversation about regulation after Dying with Dignity. At the time, I uncritically accepted the claim that, since this issue was outside the Overton window, regulation just wasn’t an option. I do remember a lot of baleful talk about how we’re going to die.
I just don’t understand how anyone who believed in Dying with Dignity would consider regulation too imperfect a solution. Why would you not try? What are you trying to preserve if you think we’re on track to die with no solution to alignment in sight? Even if you don’t think regulation will accomplish the goal of saving civilization, isn’t shooting your shot anyway what “dying with dignity” means?
I wouldn’t try because most regulations that I can think of—at least in the form our government is likely to pass them—have downsides which I consider worse than their benefits.
I also think that x-risk from AI misalignment is more like a 5% chance than a 95% chance. If heavy AI regulations increase other AI-related x-risks—say, permanent totalitarianism—while negligibly impacting misalignment risk, the EV can easily come out quite negative.
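The expected-value argument above can be made concrete with a toy calculation. This is an illustrative sketch only: the ~5% misalignment figure comes from the text, but every other number (the baseline totalitarianism risk, and how regulation shifts each risk) is an assumption invented for the example, not a claim from the discussion.

```python
# Toy EV comparison: does heavy regulation reduce total x-risk?
# All numbers except the ~5% misalignment figure are illustrative assumptions.

def total_xrisk(p_misalignment: float, p_totalitarian: float) -> float:
    """Approximate total x-risk as additive, which is fine for small probabilities."""
    return p_misalignment + p_totalitarian

# Baseline: ~5% misalignment risk (stated above), assumed 2% lock-in risk.
baseline = total_xrisk(0.05, 0.02)

# With heavy regulation: misalignment risk negligibly reduced (5% -> 4.9%),
# but enforcement infrastructure doubles the assumed lock-in risk (2% -> 4%).
with_regulation = total_xrisk(0.049, 0.04)

print(f"baseline risk:        {baseline:.3f}")
print(f"risk with regulation: {with_regulation:.3f}")
```

Under these assumed numbers, regulation comes out net-negative (8.9% vs 7.0% total risk), which is the shape of the argument: a large relative increase in one risk can swamp a negligible decrease in another.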
I think the model by which permanent totalitarianism comes about is actually cleaner than the recursive-self-improvement x-risk model, and requires a less drastically smart superintelligence, so I think it is worth serious consideration.
That said, I don’t know what particular concrete regulations you have in mind. Through what actual means do you want to implement an AI pause, concretely? What downsides do you anticipate from such measures, and how would you mitigate them?