Given that no-one’s posted a comment in the affirmative yet:
I’d guess that more US national security engagement with AI risk is good. In rough order, here’s why:
1. I think the deployment problem is a key challenge, and an optimal strategy for addressing it will have elements of transnational competition, information security, and enforcement that benefit from or require the national security apparatus.
2. As OP points out, there’s some chance that the US government/military ends up as a key player advancing capabilities, so it’s good for them to be mindful of the risks.
3. As OP points out, if funding for large alignment projects seems promising, places like DTRA (the Defense Threat Reduction Agency) have large budgets and a strong track record of research funding.
4. I agree that there are risks of communicating AI risk concepts in a way that poisons the well, lacks fidelity, gets distorted, or fails to cross inferential distances, but these seem like things to manage and mitigate rather than give up on. Illustratively, I’d be excited about bureaucrats, analysts, and program managers reading things like The Alignment Problem from a Deep Learning Perspective, Unsolved Problems in ML Safety, or CSET’s Key Concepts in AI Safety series, and about them developing frameworks and triggers for whether and when cutting-edge AI systems merit regulatory attention as dual-use and/or high-risk systems, à la the nuclear sector. (I include these examples as things that seem directionally good to me off the top of my head, not as claims that they’re the most promising things to push on in this space.)