Agentic AI tools are starting to appear in real organisations, not just research labs. These tools don’t just generate content or predictions; they can plan, make decisions and take actions on your behalf.
This capability can be beneficial, including in cyber defence, but it can also introduce new risks if used without care. ‘Careful adoption of agentic AI services’ is new joint guidance, co-authored by the NCSC with international partners, that sets out why organisations should start small, restrict agents to low-risk tasks, and apply established cyber security controls from the outset.
This blog summarises the key points from that guidance, and will be of use to anyone involved in the design, development, deployment and operation of agentic AI systems.

