I think it’s clear that the overall tone of this speech was pro-acceleration, paying only lip service to AI safety. You can never know for sure with high-level politician speeches, of course: plausible deniability and playing both sides are baked in, and this has been politicians’ area of expertise for centuries. Even allowing for that, this speech was still unusually strong against AI safety.
He ends the speech on this note:
And that’s why I make no apology for being pro-technology.
It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can.
And it’s why I believe we can and should look to the future with optimism and hope.
He also explicitly says that he understands AI safety concerns but considers them unlikely to materialize, which makes his rejection of them in favor of acceleration even worse. The money is still being dangled rather than spent, and it could go to AI safety’s enemies at any time; they are still holding the cards. That makes it all the more relevant to actually evaluate the odds that we’re already in the timeline where major governments and militaries are enamored with SOTA AI-powered psychological manipulation, rather than leaving that question unexamined.
“this speech was still unusually strong against AI safety.”
I think that’s a reasonable read if you’re operating in a conceptual framework where acceleration and safety must be mutually exclusive, but my sense is that that isn’t the framework he’s operating under. My read of the speech is that it’s pro-acceleration and pro-safety: invest a lot in AI development, and also invest a lot in ensuring its safety.
It’s definitely possible that Rishi Sunak is operating in an epistemic environment where both AI capabilities and AI alignment seem easy, but that’s also bad news.
If leaders think alignment is easy, that sets humanity up for a situation where leaders pick the alignment engineers who are best at loudly saying “yes, I can do it, pick me pick me pick meeeeee”, and then everyone dies, because leadership stacked the team with the people most prone to imagining themselves succeeding, when in reality humans solving alignment might be like chimpanzees doing bridge engineering or rocket science.
If the UK had regulation ASAP, governments could still exploit uses of existing systems without burning the remaining timeline before the finish line. But this speech indicates that people will probably have to keep trying to solve alignment under race dynamics rather than during a regulatory pause, and $100m is probably not worth that, especially because that $100m gives the UK leverage over the AI safety community, whereas regulation would give it leverage over AI capabilities companies.