This seems to me an instantiation of a classic debate about realpolitik.
I disagree with the main point of this post, because raising concerns about x-risk is not mutually exclusive with advocating for more palatable policies (such as requiring evals before deployment). What many EAs are actually trying to do, I think, is talk loudly about near-term policies while also mentioning x-risk concerns to whatever extent they judge currently politically useful. The aims are to slow down AI progress (buying more time to find a permanent solution), to gain traction within the political system, and to actually make AI safer (although if alignment is hard, these policies may not reduce x-risk directly).
Gaining knowledge, experience, and contacts in AI policymaking will make it easier to advocate for policies that actually address x-risk in the future. The worry about being seen as dishonest for not raising x-risk sooner feels unrealistic to me, because it is so standard in public discourse to say something not because you believe it but because it aligns with your tribe (i.e. to operate at higher Simulacrum Levels).
In summary,
Implement as much AI regulation as you can today, while gaining influence and gradually raising the salience of x-risk, so that you can implement better regulation in the future.
seems like a reasonable strategy, and better than the proposed alternative of
Only communicate x-risk concerns to policymakers.