The new UK government’s stance on AI safety

tl;dr: The new UK government will likely continue to balance encouraging AI innovation for the public good against increasing regulation for public safety; its calls for stricter regulation than the previous government’s remain, so far, rhetorical. Several reports have been published by the government and the UK AI Safety Institute, including the latter’s first technical report on model evaluations.
Previously on The UK’s AI Policy
Erstwhile Prime Minister Rishi Sunak took office in October 2022 and quickly announced a suite of new AI policies and plans. Broadly, Sunak’s government saw AI as a stonking big opportunity for the UK’s economy and society, by becoming a hub of AI development, revolutionizing public services, and providing $1 trillion in value for the UK by 2035. They described their regulatory approach as pro-innovation, calling for government oversight and, eventually, greater requirements on developers of frontier AI.
However, Sunak did take AI safety and even existential risk seriously, saying:
Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction [...] in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely [through] ‘super intelligence’.[...] I don’t want to be alarmist. And there is a real debate about this [...] But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.
To address these risks, the government organized the first international AI Safety Summit. You can read my summaries of the plans for and outcomes from the summit if you want more detail, but briefly, the summit resulted in the international Bletchley Declaration and a promise of ~£700 million over 7 years to the UK AI Safety Institute (née the Frontier AI Taskforce, and not to be confused with the US or Canadian AI Safety Institutes, with both of whom they are partnered, nor the Japanese or Singaporean AI Safety Institutes, with whom they are not). The UK AISI called for input from labs and research institutes and started work on research topics like AI evaluations that we’ll discuss below, with notable advisors and staff such as Yoshua Bengio, Ian Hogarth, Paul Christiano, and Matt Clifford.
What do the new guys say?
Since then, the dramatic-in-a-British-way 2024 election reduced the Tories’ seat count in Parliament by two thirds and doubled Labour’s. Rishi Sunak has been replaced by Sir Keir Starmer.
The new Labour government has indicated that they intend to regulate AI more tightly than the previous Tory government, while still encouraging growth of the AI sector and making use of AI in delivering their national missions.
Before the election, Starmer stated the UK should introduce stronger regulation of AI, and Labour’s manifesto promised to introduce “binding regulation on the handful of companies developing the most powerful AI models”. Beyond plans to ban sexual deepfakes and outlaw nudification, we have little information about what this binding regulation would look like.
Indeed, some recent statements seem to ape the previous government’s pro-innovation approach:
Labour’s manifesto commits to creating a pro-innovation regulatory body to update regulation, speed up approval timelines, and “co-ordinate issues that span existing boundaries” (???);
Investment into AI for the National Health Service;
A commitment to supporting AI development by removing barriers to new data centres and the creation of a National Data Library;
Revamping the Department for Science, Innovation and Technology to encourage AI development in the public sector.
What have the UK AISI and its governmental friends been up to?
AI Opportunities Unit
On 26th July, the Secretary of State for Science, Innovation and Technology, Peter Kyle, stated that AI has enormous potential and that the UK must use AI to support its five national missions, while still developing next steps for regulating frontier AI. To do so:
An AI opportunities unit will be established within the Department for Science, Innovation and Technology;
Tech entrepreneur Matt Clifford will develop an AI opportunities action plan, to be submitted in September 2024;
The government will address “key AI enablers such as the UK’s compute and broader infrastructure requirements”.
Note that Kyle previously advocated for compelling AI developers by law to share test results with the UK AISI (rather than relying on existing voluntary sharing), though this hasn’t appeared in rhetoric or policy since.
King v Baron
In July during the King’s Speech, the government committed to legislating on powerful AI by placing the UK AISI “on a statutory footing”, giving it a permanent remit to improve safety. The legislation would focus specifically on developers of the most advanced frontier AI, rather than on users or, as the EU AI Act does, on AI developers more broadly.
Despite widespread rumours of a fully formed, ready-to-go AI Bill, King Charles III didn’t mention any such bill. House of Lords member and nine-time gold-medal-winning Paralympian, the Right Honourable Baron Holmes of Richmond, plans to re-introduce his proposed AI bill to the House.
Reports from the UK AISI and AI Seoul Summit
The AISI released three reports in May this year:
Their first technical report on model evaluations. Their findings weren’t especially novel, but they do demonstrate that the government is developing at least some in-house testing chops:
Several LLMs demonstrated expert-level knowledge in chemistry and biology;
Models solved high-school-level cybersecurity challenges but struggled with university-level ones;
All tested models were highly vulnerable to basic jailbreak attacks, complying with harmful requests;
Models’ safeguards could be bypassed to elicit harmful information.
Their fourth progress report.
They’ve open-sourced Inspect, a software library for assessing AI model capabilities.
They’ve set up a San Francisco office to collaborate with the US.
They’ve partnered with the Canadian AI Safety Institute.
Jade Leung has been appointed as Chief Technology Officer.
They’re continuing to focus on evaluating risk from frontier AI.
They have a new program to increase societal resilience to AI risks.
The mammoth Interim International Scientific Report on the Safety of Advanced AI, commissioned by the 2023 summit and chaired by Yoshua Bengio, was published for the May 2024 AI Seoul Summit. The report’s too long to properly summarize, but along with some standard chatter about rapid AI progress and future uncertainty, and explanations of how AI is developed, etc., the report notes:
AI is approaching human-level performance and will likely transform many jobs.
Evaluating GPAI remains a challenge; benchmarking and red-teaming are insufficient to assure safety.
Technical safety approaches like adversarial training are helpful, but no current methods can guarantee safety for advanced general-purpose AI.
Experts disagree about likelihood and timelines of extreme risks like loss of control.
The trajectory of AI development will be shaped by societal choices.
Energy demands will strain electrical infrastructure.
Systemic risks include disruption of the labour market, exacerbation of income inequality, concentration of power in a few countries and companies (which increases risks from single points of failure), environmental harm, threats to privacy, and copyright infringement.
At the AI Seoul Summit itself, co-hosted by the UK and South Korea, 10 countries agreed to develop an international network of AI Safety Institutes, and £8.5 million in grant funding for research on systemic AI safety was announced (to be delivered through the UK AISI and partner institutes). See here for other takeaways from the summit.
If you’re interested in a more in-depth analysis of existing AI regulations in the EU, China, and the US, check out our 2024 State of the AI Regulatory Landscape report.