[Question] Why do governments refer to existential risks primarily in terms of national security?

I learned from the public policy section of the Wikipedia article on the AI control problem that earlier this year the UK government published its ten-year National AI Strategy. On the control problem and x-risks from AGI specifically, the strategy reads:
The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously.
That’s an improvement on a few years ago, when some national governments framed their prioritization of the control problem solely in terms of national security. On its face, that framing is ludicrous, because it misrepresents the scope of x-risk from AI: if a national government understands the control problem, it understands that narrowing its focus to national security alone is, in a literal sense, pointless.
Yet so far governments have still mostly expressed their focus on the control problem in terms of national security. One obvious reason is that a nationalist streak in a government’s ideology may lead it to attach boilerplate about national security or self-interest to everything, without reflection. Another obvious reason is that governments are trying to send social signals. I’m asking this question to check what more precise or less intuitive answers I might be missing.
A related question is: What might have changed in the last few years that would lead a national government to express concern about the control problem in terms of global security as well?
The UK’s National AI Strategy is the first instance I’ve seen of a government expressing concern about the control problem in terms of global security. This could matter because it suggests governments may also be signalling a greater willingness to coordinate with other governments on the control problem. Understanding what motivated one government to signal it is more multilateralist than before could be used to motivate other governments to do the same.