Your question is coming from within a frame (I’ll call it the “EY+LW frame”) that I believe most of the DC people do not heavily share, so it is kind of hard to answer directly. But yes, to attempt an answer: I’ve seen quite a lot of interest (and direct policy successes) in reducing AI chips’ availability and production in China (e.g. via both the CHIPS Act and export controls), which is a prerequisite for the US to exert more regulatory oversight of AI production and usage. I think the DC folks seem fairly well positioned to give useful input into further AI regulation as well.
So in short, they are generally unconcerned with existential risks? I’ve spoken with some staff and I get the sense they do not believe it will impact them personally.
Mild disagree: I do think x-risk is a major concern, but it seems like people around DC tend to put 0.5-10% probability mass on extinction rather than the 30%+ that I see around LW. This lower probability causes them to put a lot more weight on actions that have good outcomes in the non-extinction case. The EY+LW frame has a lot more stated and implied assumptions about the uselessness of various types of actions because of its much higher probability on extinction.