And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely...
The UK’s answer is not to rush to regulate.
This is a point of principle – we believe in innovation, it’s a hallmark of the British economy…
…so we will always have a presumption to encourage it, not stifle it.
This is pretty unfortunate, as far as speeches go, especially because it foreshadows similar policies likely to be adopted in the US. But it’s not surprising, given how rapidly AI’s information-processing capabilities are becoming the keys to the kingdom for major governments and militaries around the world, e.g. for SOTA information warfare in the hybrid-warfare era solidified by the Ukraine War.
The bit that came immediately after those lines also felt pretty important:
And in any case, how can we write laws that make sense for something we don’t yet fully understand?
So, instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government.
To do that, we’ve already invested £100m in a new taskforce…
…more funding for AI safety than any other country in the world.
And we’ve recruited some of the most respected and knowledgeable figures in the world of AI.
So, I’m completely confident in telling you the UK is doing far more than other countries to keep you safe.
And because of this – because of the unique steps we’ve already taken – we’re able to go even further today.
I can announce that we will establish the world’s first AI Safety Institute – right here in the UK.
It will advance the world’s knowledge of AI safety.
And it will carefully examine, evaluate, and test new types of AI…
…so that we understand what each new model is capable of…
…exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.
The British people should have peace of mind that we’re developing the most advanced protections for AI of any country in the world.
To me this seemed like good news: “don’t rush to regulate; take the time for experts to figure out what makes sense” sounds like the kind of approach that might actually produce sensible regulation, rather than something hastily put together that sounds good but doesn’t actually make sense.
I think it’s clear that the overall tone of this speech was pro-acceleration, paying only lip service to AI safety. You never know for sure with high-level political speeches, of course, since plausible deniability and playing both sides are baked in; that has been politicians’ area of expertise for centuries. And in spite of that, this speech was still unusually strong against AI safety.
He ends the speech on this note:
And that’s why I make no apology for being pro-technology.
It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can.
And it’s why I believe we can and should look to the future with optimism and hope.
He also explicitly says that he understands AI safety concerns but considers the risks unlikely, which makes his rejection of them in favor of acceleration even worse. The money is still being dangled rather than spent, and it could go to AI safety’s enemies at any time; they are still holding the cards. This makes it even more important to evaluate the odds that we’re already in the timeline where major governments and militaries are enamored with SOTA AI-powered psychological manipulation, rather than leaving that question unexamined.
this speech was still unusually strong against AI safety.
I think that’s a reasonable read if you’re operating in a conceptual framework where acceleration and safety must be mutually exclusive, but the sense I got is that that’s not the framework he’s operating under. I read the speech as pro-acceleration and pro-safety: invest a lot in AI development, and also invest a lot in ensuring its safety.
It’s definitely possible that Rishi Sunak is operating in an epistemic environment where both AI capabilities and AI alignment seem easy, but that’s also bad news.
If leaders think alignment is easy, that sets humanity up for a situation where they pick the alignment engineers who are best at loudly saying “yes, I can do it, pick me pick me pick meeeeee”. Then everyone dies, because leadership stacked the team with the people most prone to imagining themselves succeeding, when in reality humans solving alignment might be like chimpanzees attempting bridge engineering or rocket science.
If the UK had regulated ASAP, governments would still be able to exploit existing systems without burning the remaining timeline before the finish line. But this speech indicates that people will probably have to keep trying to solve alignment under race dynamics rather than during a regulatory pause, and £100m is probably not worth that, especially because that £100m gives the UK leverage over the AI safety community, whereas regulation would give it leverage over AI capabilities companies.
It’s important not to ignore that this speech is addressed to the general public. While I agree that “in the most unlikely but extreme cases” is not accurate, it’s not clear that this reflects the actual views of the PM / government, rather than what they think it’s expedient to say.
Even if they took the risk fully seriously and put the probability of doom at 60%, I don’t think he’d say that in a speech.
The speech is consistent with [not quite getting it yet], but also consistent with [getting it, but not thinking it’s helpful to say it in a public speech]. I’m glad Eliezer’s out there saying the unvarnished truth—but it’s less clear that this would be helpful from the prime minister.
It’s worth considering the current political situation: the Conservatives are very likely to lose the next election (due no later than January 2025, but elections often happen earlier [this lets the governing party pick their moment, keep the element of surprise, and make calling the election look like a positive choice]). Being fully clear about the threat in public could be perceived as political desperation. So far, the issue hasn’t been politicized. If withholding the brutal truth helps keep it that way, it’s likely a price worth paying. In particular, it doesn’t help if the UK government commits to things that Labour will scrap as soon as they get in.
Perhaps more importantly from his point of view, he’ll need support from within his own party over the next year—if he’s seen as sabotaging the Conservatives’ chances in the next election by saying anything too weird / alarmist-seeming / not-playing-to-their-base, he may lose that.
Again, it’s also consistent with not quite getting it, but that’s far from the only explanation.
We could do a lot worse than Rishi Sunak followed by Keir Starmer. Relative to most plausible counterfactuals, we seem to have gotten very lucky here.