From the public policy section of the Wikipedia article on the AI control problem, I learned that earlier this year the UK government published its ten-year National AI Strategy. Regarding the control problem and x-risks from AGI specifically, the strategy report reads:
The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously.
That’s an improvement over a few years ago, when some world governments declared they were prioritizing the control problem only as a matter of national security. On its face, that framing is of course inadequate, because it misrepresents the scope of x-risk from AI: existential risk is global by definition. If national governments understand the control problem, they should understand that it’s pointless in a literal sense to narrow their focus to national security alone.
Yet so far they’ve still mainly expressed their focus on the control problem in terms of national security. One obvious reason is that a nationalist tendency in a government’s ideology may provoke it to attach boilerplate about national security or self-interest to everything without reflection. Another obvious reason is that governments are trying to send social signals. Still, I’m asking this question to check what more precise or less intuitive answers I might be missing.
A related question is: What might have changed in the last few years that would lead a national government to express concern about the control problem in terms of global security as well?
The UK’s National AI Strategy report is the first time I’ve seen a government express concern about the control problem in terms of global security. This could be important because it suggests governments may also be trying to signal greater willingness to coordinate with other governments on the control problem. Understanding what motivated one government to signal it is more multilateralist than before could be applied to motivating other governments to do the same.
There are huge government budgets for national security. If you want to argue that part of that budget should go to fighting existential risk, you have to argue that existential risk threatens national security.
That isn’t something I had thought of, but it makes sense as the most significant reason I hadn’t yet considered.
There are several signals the government might be trying to send that come to mind:
It may be that only one government agency or department, or a small set of them, is currently focused on the control problem. Those bodies may also still need to collaborate on other tasks with agencies or departments that treat national security as the highest priority. Even if a department internally thinks about the control problem in terms of global security, it may want to publicly reinforce national security as a top priority to maintain good working relationships with the departments it works closely with.
Whatever arms of the government are focused on the control problem may be signaling to the public or electorate, or to politicians more directly accountable to the public or electorate, in order to remain popular and retain access to resources.