Innovation is defined by its ability to surprise.
Only a few years ago, GPT-2 meant nothing to the public.
For many of us, AI felt like a distant possibility at best.
Something that would never – could never – live up to the hype.
And yet, overnight, ChatGPT became a household name.
It unleashed an unprecedented wave of technological change.
And the pace of progress shows no signs of slowing down.
With DeepSeek, we’ve seen once again just how sudden, how unpredictable, innovation can be.
The AI revolution is happening.
Ignoring it is simply not an option.
In the UK, we reject the doomsayers and the pessimists.
Because we are optimistic about the extraordinary potential of this technology.
And hopeful for the radical, far-reaching change it will bring.
We launched the AI Opportunities Action Plan to put us on the front foot.
Working in collaboration with our international partners, we’re going to create one of the biggest clusters of AI innovation in the world and deliver a new era of prosperity and wealth creation for our country.
This is a once-in-a-generation opportunity.
If we can seize it, we will close the door on a decade of slow growth and stagnant productivity.
Of taxes that are just too high.
We will deliver new jobs that put more money in working people’s pockets.
And we will drive forward a digital revolution inside government to make our state smaller, smarter, and more efficient.
But none of that is possible unless we can mitigate the risks that AI presents.
After all, businesses will only use these technologies if they can trust them.
Security and innovation go hand in hand.
AI is a powerful tool and powerful tools can be misused.
State-sponsored hackers are using AI to write malicious code and identify system vulnerabilities, increasing the sophistication and efficiency of their attacks.
Criminals are using AI deepfakes to commit fraud, breaching security by impersonating officials.
Last year, attackers used live deepfake technology during a video call to mimic bank officials.
They stole $25 million.
And now we are seeing instances of people using AI to assist them in planning violent and harmful acts.
These aren’t distant possibilities.
They are real, tangible harms, happening right now.
The implications for our people could be pervasive and profound.
In the UK, we have built the largest team in any government in the world dedicated to understanding AI capabilities and risks.
That work is rooted in the strength of our partnerships with the companies who are right at the frontier of AI.
Working with those companies, the government can conduct scientifically informed tests to understand new AI capabilities and the risks they pose.
Make no mistake, I’m talking about risks to our people, their way of life, and the sovereignty and stability which underpins it.
That is why today, I am renaming our AI Safety Institute as the AI Security Institute.
This change brings us into line with what most people would expect an Institute like this to be doing.
They are not looking into freedom of speech.
They are not deciding what counts as bias or discrimination.
They are not politicians – nor should they be.
They are scientists – scientists who are squarely focused on rigorous research into the most serious emerging risks.
They are researching AI’s potential to assist with the development of chemical and biological weapons.
They are building on the expertise of our National Cyber Security Centre (NCSC) to understand how this technology could be used to help malicious actors commit cyber-attacks.
They want to understand how AI could undermine human control.
Our research shows that those risks are clear:

- There has been a clear upward trend in AI system capabilities most relevant to national security in the past 18 months.
- For the first time last year, AI models demonstrated PhD-level performance on chemistry and biology question sets.
- The safeguards designed to prevent these models doing harm are not currently sufficient.
- Every model tested by the Institute is vulnerable to safeguard evasion attacks.
- And it is almost certain that these capabilities will continue to improve, while novel risks will emerge from systems acting as autonomous agents to complete tasks with only limited human instruction.
The more we understand these risks, the better we can work with companies to address them.
And the faster we can make our nation safe, the faster our people can embrace the potential of AI to create wealth and improve their lives.
There are certain security risks which require immediate action.
That is why the Security Institute will collaborate with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, to assess the dual-use scientific capabilities of frontier AI.
Today, we are also launching a criminal misuse team in the Security Institute, who will partner directly with the Home Office to conduct research on a range of crime and security issues which threaten to harm our citizens.
Earlier this month, the UK set out plans to make it illegal to own AI tools optimised to make images of child sexual abuse.
Reports of AI-generated child sexual abuse material found online by the Internet Watch Foundation have quadrupled in a single year.
The Security Institute will work with the Home Office to explore what more we can do to prevent abusers using AI to commit their sickening crimes.
A security risk is a security risk, no matter where it comes from.
US companies have led the way in taking security risks seriously.
But we need to scrutinise all models regardless of their jurisdiction of origin.
So I’ve instructed the Security Institute to take a leading role in testing AI models wherever they come from, open or closed.
While we can’t discuss these results publicly, we will share them with our allies.
We are alive to the security risks of today.
But we need to focus on tomorrow, too, and the day after that.
We are now seeing the glimmers of AI agents that can act autonomously, of their own accord.
The 2025 International AI Safety Report, led by Yoshua Bengio, warns us that – without the checks and balances of people directing them – we must consider the possibility that risks won’t just come from malicious actors misusing AI models, but from the models themselves.
We don’t yet know the full extent of these risks.
However, as we deploy AI across our economy, our society, and the critical infrastructure that keeps our nation secure, we cannot afford to ignore them.
Because losing oversight and control of advanced AI systems, particularly Artificial General Intelligence (AGI), would be catastrophic.
It must be avoided at all costs.
I want to be clear exactly what this testing is, and what it’s not.
It’s not a barrier to market access. Not a blocker to innovation.
It is urgent scientific work to understand serious risks to our country.
Governments are not passive bystanders in the AI revolution.
We have agency in how AI shapes our society.
And we have a responsibility to use that agency to defend our democratic way of life.
Only countries with a deep understanding of this technology will be able to build the capacity they need to deliver for their citizens in the twenty-first century.
But success is not a given.
It depends on the democratic world rallying together to maintain our leadership in AI.
Together, we can protect our fundamental values – freedom, openness, and opportunity.
If we do that, we won’t just keep our people safe.
We will ensure that they are first to benefit from the new era of wealth and prosperity which AI will bring.