AI Security Institute launches international coalition to safeguard AI development

By uk-times.com | 30 July 2025
  • AI Security Institute joins forces with Canadian counterpart, Amazon, Anthropic and civil society in new research project focused on AI behaviour and control. 
  • Project brings together like-minded partners, tackling issues around safety, security, and human control to protect citizens and provide the Strong Foundations of the government’s Plan for Change.        
  • Guided by world-class advisory board including Turing Award winners Shafi Goldwasser and Yoshua Bengio, the Project will expand the global field of AI alignment research – to make AI behave as designed.

Bringing together international partners, tech companies and civil society, the UK will spearhead pioneering new work which will help make sure AI systems behave predictably and as designed – supporting the Plan for Change by enabling us to unlock the full benefits of AI while providing strong national security foundations. 

AI alignment is a crucial field in AI research – focused on making sure the technology always acts in our interests and rooting out harmful behaviours which could pose a risk to society.

Backed by over £15 million, the fund announced today (30 July) will help unlock the benefits of advanced AI while keeping people safe – further cementing the UK’s position as a world leader in AI and expanding the global effort to tackle alignment. 

Led by the UK’s AI Security Institute, the Alignment Project is backed by an international coalition including the Canadian AI Safety Institute, Canadian Institute for Advanced Research (CIFAR), Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA). 

This reflects a growing global consensus – across government, industry, academia and philanthropy – that alignment is one of the most urgent technical challenges of our time, and that expanding the field is a shared international responsibility. 

The project will fund cutting-edge research into AI alignment – including ways to make sure AI systems continue to follow our goals as the technology becomes more capable, and techniques to ensure AI systems remain transparent and responsive to human oversight.

AI continues to develop at breakneck speed, with the 2025 International AI Safety Report highlighting how advanced models are rapidly improving their capabilities and demonstrating PhD-level knowledge in some areas. Today’s methods for controlling AI are likely to be insufficient for tomorrow’s more capable systems, making the need for co-ordinated global action to ensure the long-term safety of citizens more pressing than ever.

Science, Innovation and Technology Secretary Peter Kyle said:

Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests.

AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests. This is at the heart of the work the Institute has been leading since day one – safeguarding our national security and ensuring the British public are protected from the most serious risks AI could pose as the technology becomes more and more advanced.

The responsible development of AI needs a co-ordinated global approach, and this fund will help us make AI more reliable, more trustworthy, and more capable of delivering the growth, better public services, and high-skilled jobs that drive our Plan for Change.  

Home to world-leading AI companies and research institutions, Britain is uniquely positioned to lead this global effort. The fund being launched today builds on the leadership of the AI Security Institute, ensuring leading researchers from the UK and collaborating partners can shape the direction of the field and drive progress on safe, controllable AI that can be deployed with confidence. 

The Alignment Project, guided by a world-class expert advisory board including Yoshua Bengio, Zico Kolter, Shafi Goldwasser, and Andrea Lincoln, will remove key barriers that have previously limited alignment research by offering 3 distinct levels of support:

  • Grant funding: up to £1 million for researchers across disciplines, from computer science to cognitive science.

  • Compute access: up to £5 million in dedicated cloud computing credits from AWS, enabling technical experiments beyond typical academic reach.

  • Venture capital: investment from private funders to accelerate commercial alignment solutions.

The Project combines funding, infrastructure, and market incentives to drive breakthrough progress.   

Governments, philanthropists, and industry partners are now able to come forward to join The Alignment Project – contributing to its work through research grants, cloud compute resources, or venture funding for startups tackling alignment challenges. This coalition will accelerate progress, helping AI safety keep pace with rapidly advancing capabilities.   

Solving alignment removes one of the largest barriers to AI adoption – trust – and will maintain Britain’s competitive edge as a leader in AI as the technology is put to work to deliver the government’s Plan for Change.  

Geoffrey Irving, Chief Scientist at the AI Security Institute, said:

AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development. Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications. 

The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC and researchers to close the critical gaps in alignment research. International coordination isn’t just valuable – it’s necessary. By providing funding, compute resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.

Jack Clark, Co-Founder and Head of Policy at Anthropic, said:

As AI systems become increasingly intelligent, it is urgent that we improve our understanding of how they work. Anthropic is delighted to work with the UK AI Security Institute and other partners on the Alignment Project, which will bring greater focus to these issues.

Nora Ammann, Technical Specialist at the Advanced Research and Invention Agency (ARIA), said:

We’re proud to support this multilateral effort to solve challenges around AI alignment that we expect to define the next decade of AI adoption. We’re particularly excited about opportunities to partner on developing mathematically rigorous understanding and assurances for advanced AI systems, which will complement the work of ARIA’s Safeguarded AI programme.

John Davies, Managing Director UK, Germany and International Organisations Worldwide Public Sector at Amazon Web Services, said:

We are delighted to support the UK AI Security Institute’s Alignment Project by providing access to free cloud computing credits that will enable researchers to run control experiments and stress-test the safety of AI models. This initiative will help ensure that companies, governments, academia, and researchers work together to deliver groundbreaking generative AI innovation with trust at the forefront.

The Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, said:

We are at a hinge moment in the story of AI, where our choices today will shape Canada’s economic future and influence the global trajectory of this technology. By investing strategically in scale, infrastructure, and adoption, we’re not just fueling opportunity for Canadians, we’re making sure progress is matched by purpose and responsibility.

That’s why this partnership, uniting the Canadian AI Safety Institute and CIFAR with the UK’s AI Security Institute, matters. Together, we’re advancing cutting-edge research to ensure the next generation of AI systems are not only powerful, but reliable – serving societies here at home and around the world.

Mark Greaves, Executive Director of AI and Advanced Computing at Schmidt Sciences, said:

Keeping AI systems aligned with human values is a great scientific challenge of our time. Meeting this challenge will require the same creativity and cross-disciplinary energy that powered past scientific revolutions. The Alignment Project is designed to accelerate leading AI alignment researchers and attract brilliant minds from other fields to this challenge. Schmidt Sciences is proud to partner with the UK AI Security Institute and a global consortium to develop techniques to ensure that AI system behavior is aligned with human values.

Professor Charlotte Deane, Executive Chair at EPSRC, said:

This partnership unites critical elements of the UK’s AI ecosystem, bridging the gap between fundamental discovery science and the practical challenges of AI alignment. EPSRC is dedicated to advancing the pioneering research that underpins AI safety, and by coordinating our efforts with the AI Security Institute and partners across the sector, we are strengthening the coherence of our national AI ecosystem. Together, we will ensure that the UK’s leadership in artificial intelligence drives both breakthrough innovation and tangible benefits for society.

Notes to editors

Visit the Alignment Project website for further information. 

The Alignment Project advisory board includes:

  • Yoshua Bengio, Full Professor at Université de Montréal and founder and scientific advisor of Mila – Quebec AI Institute 
  • Zico Kolter, Professor and Head of Machine Learning Department at Carnegie Mellon University 
  • Shafi Goldwasser, Research Director for Resilience, Simons Institute, UC Berkeley 
  • Andrea Lincoln, Assistant Professor of Computer Science, Boston University 
  • Buck Shlegeris, Chief Executive Officer, Redwood Research 
  • Sydney Levine, Research Scientist, Google DeepMind 
  • Marcelo Mattar, Assistant Professor of Psychology and Neural Science at New York University