Regulation in most other areas has been counterproductive, and in AI it will likely be even more so: the public and our rulers have at least some understanding of, say, medicine, but most people have no idea about the details of alignment.
This could easily backfire in countless ways. It could drive researchers out of the field; it could mandate “alignment” procedures that don’t actually help while getting in the way of finding procedures that do; it could create requirements for AIs to say what is socially desirable instead of what is true (ChatGPT is already notorious for this), making it harder to tell how the AI is actually functioning...
It is socially desirable to call for regulation as a solution to almost any problem you care to name, but it is practically useful far more rarely. This is AI alignment: potentially the future of humanity, and all human values, at stake. If we cannot speak the truth here, when will we ever speak it?
There are, of course, potentially reasonable counterarguments. Someone might believe that AI capabilities are more fragile than AI alignment, for instance, such that regulation would tend to slow capabilities without greatly hampering alignment, and the time bought would give us a better chance of a good outcome. Perhaps. But please consider: are you calling for regulation because it actually makes sense, or because it’s the Approved Answer to problems?
Please don’t make this worse.
I didn’t call for regulation.
Some possible regulations would be good and some would be bad.
I do endorse trying to nudge regulation to be better than the default.
How do you propose nudging regulation to be better without nudging for more regulation?
Combating bad regulation would be the obvious way.
In seriousness, I haven’t focused on interventions to improve regulation yet; I just noticed a thing about public opinion and wrote it. (And again, some possible regulations would be good.)
Combating bad regulation isn’t a solution, but a description of a property you’d want a solution to have.
Or more specifically: while you could perhaps lobby against particular destructive policies, this article pushes for “helping [government actors] take good actions”, but given the track record of government actions, it would make far more sense to help them take no action. Pushing for political action without a plan to steer that action in a positive direction is much like pushing for AI capabilities without a plan for alignment… which we both agree is insanely dangerous.
The state is not aligned. That should be crystal clear from the medical and economic regulations that already exist. And bringing a powerful Unfriendly agent into mankind’s efforts to create a Friendly one is more likely to backfire than to help.