Current UK government levers on AI development
This is a link post for this collection of current UK government levers on AI development.
At the end of 2022, I made a collection of information on current UK government levers on AI development, focused on levers which seem to me to have potentially significant implications for the governance of advanced AI.
The primary audience I’m intending for the collection is people who work in or are considering working in AI governance and policy, and I hope it will be useful as an input into:
Building more detailed models of how the UK government might affect AI development and deployment.
Getting an overview of the policy status quo in the UK.
Thinking about which policy areas are likely to matter more for managing transitions to advanced AI.
Thinking about how important influencing the UK government is relative to other actors.
In this post, I try to situate current UK government levers in the broader context, to give a sense of the limits of the collection.
Some initial caveats:
The collection is based exclusively on publicly available information, not on conversations with relevant government officials.
I’m not an expert in the UK government or in AI policy.
The factual information in the collection hasn’t been vetted by relevant experts. I expect there are things I’ve misunderstood, and important things that I’ve missed.
The collection is a snapshot in time. To the best of my knowledge, the information is up to date as of April 2023, but the collection will soon get out of date. I am not going to personally commit to updating the collection, but would be excited for others to do so. If you’re interested, comment on this post or on the collection, or send me a message.
I am not advocating that particular actors should try to pull any particular lever. I think it’s easy to do more harm than good, and encourage readers to orient to the collection as a way of thinking about how different trajectories might play out, rather than as a straightforward input into which policies to push. I think that figuring out net positive ways of influencing policy could be very valuable, but requires a bunch of work on top of the sorts of information in this collection.
This collection is just a small part of the puzzle. Two aspects of this which I’ll unpack in a bit more detail below:
The actions of the UK government might not matter.
Even conditional on UK government actions mattering, there are many important things besides current policy levers.
Will the actions of the UK government matter?
I’m pretty uncertain about whether the actions of the UK government will end up mattering, but I do think it’s likely enough that the UK government is worth some attention.
What needs to be true for the actions of the UK government to matter?
Government(s) needs to matter.
Governments tend to move slowly, so the faster takeoff is, the less relative influence they’ll have, all else equal.
I think there are fast-ish takeoff scenarios in which governments still matter a lot, and slow takeoff scenarios remain plausible.
So I feel pretty confident that this is likely enough to be worth serious attention.
The UK needs to matter.
I can see two main ways that the UK ends up mattering:
DeepMind/Graphcore/Arm/some other UK-based entity ends up being a major player in the development of advanced AI.
The UK influences other more important actors, for example via:
UK government powers over AI companies outside of the UK.
International agreements.
Regulatory diffusion.
Diplomacy.
I’m not well-informed here, but again this seems likely enough to be worth some attention.
The UK government needs to have relevant powers.
The UK government currently has powers over information, including kinds of information which might be produced by DeepMind/Graphcore/Arm; and agreements, including possible agreements between AI labs.
It also has powers over ownership, including ownership of DeepMind/Graphcore/Arm; and funding, including of relevant safety, capabilities and other research; but these seem less strategically important to me.
If you think that there might be critical information in the UK, and/or that agreements involving UK actors might be critical, then the actions of the UK government might end up mattering.
Here are the main kinds of scenario I’m currently tracking where UK government levers on AI development matter (with links to the most relevant current levers in the collection):
Scenarios where it’s desirable to contain UK-based information.
Types of information one might want to contain: frontier models, dual-use technology, semiconductor technology.
Most relevant current levers: National Security and Investment Act, competition law, export controls, secrecy orders, other government powers over information.
Scenarios where it’s desirable to mandate certain practices in UK-based AI development.
Practices one might want to mandate: safety standards, external review, particular information-sharing practices.
Most relevant current levers: National Security and Investment Act, government funding.
Scenarios where it’s desirable to differentially speed up various kinds of UK-based research/development.
Kinds of research/development one might want to speed up: safety work, AI development relative to other developers, relevant technologies like cryptography or semiconductors.
Most relevant current levers: government funding.
Scenarios where it’s desirable to stop UK-based AI development.
Most relevant current levers: National Security and Investment Act, nationalisation.
Scenarios where the UK is a necessary part of international coordination around AI development/deployment.
Most relevant current levers: National Security and Investment Act, competition law.
Conditional on UK government actions mattering, what matters besides current policy levers?
How current policy levers are likely to be implemented
The collection I’ve made focuses on current policy levers as written. This is just a starting point.
How these policies are actually implemented, now or in future, is a separate and important question. I expect answering it to involve lots of speaking to people in government about the levers in question.
Potential future policy levers
There are a few reasons to think that potential future policy levers matter more than current ones:
Longer timelines, if you have them.
If you don’t think current policies are sufficient to navigate x-risk, then in worlds which go well this is likely partly thanks to future levers.
In the case of the UK in particular, future policy levers may be a relatively larger deal than current ones, because the UK can move more nimbly than actors like the US and the EU.
I focused on current policy levers largely because this was more tractable. I think that work on potential future policy levers would probably be a more important contribution. Some possible ways of approaching potential future policy levers:
Gaining a deep understanding of the current direction of travel, and extrapolating forwards.
Again, I expect this to involve lots of speaking with people in government.
There are ways to get started at your desk, like carefully reading recent government reports.
I also expect it to involve sensitive information that you aren’t allowed to share/attribute transparently.
Looking at historical cases of technology regulation, and using these to extrapolate what the UK government might do on AI in future and how this process might be influenced.
An overview of relevant actors in the UK government
The collection contains some relevant actors, but by no means all of them.
This is a good starting point for a more comprehensive overview.
Thanks to Markus Anderljung for suggesting this project and helping me do it; and to Shahar Avin, Adam Bales, Di Cooke, Shin-Shin Hua, Elliot Jones, and Jess Whittlestone for helpful conversations and feedback.