Answering independently, I’d like to point out a few features of something like governance appearing as a result of the warning shot.
If a wave of new funding appears, it will be provided via grants according to the kind of criteria that make sense to Congress, which means AI Safety research will probably be in a similar position to cancer research since the War on Cancer was launched. This bodes poorly for our concerns.
If a set of regulations appears, it will ban or require things according to criteria that make sense to Congress. This looks to me like it stands a substantial chance of accidentally making several winning strategies illegal, as well as accidentally emphasizing the most dangerous directions.
In general, once something has laws about it, people stop reasoning about it morally and default to the heuristic legal → good. I expect this to completely deactivate a majority of ML researchers with respect to alignment; it will simply become one more bureaucratic procedure for getting funding.