TechRadar
Benedict Collins

Shadow AI 'double agents' are outpacing security visibility – and that's a serious concern for UK businesses

  • AI agent adoption is outpacing visibility
  • AI agents are working autonomously across environments
  • Business leaders recognize the risk and believe they can prevent unauthorized access

UK businesses are increasingly deploying AI agents to automate mundane tasks and improve productivity, but some of those agents are behaving as ‘double agents’, putting business security at risk.

New research in Microsoft’s Cyber Pulse report has found that while most business leaders believe they can prevent unauthorized activity by AI double agents, visibility is struggling to keep pace with adoption.

Unmanaged AI agents create blind spots for security teams, especially when autonomous agents are granted permissions to act across networks, devices, and software.

AI double agents risk sabotaging businesses

In 2026, adoption has risen rapidly, with 62% of UK businesses already deploying AI agents – a rise of 22% year over year. Additionally, 68% of businesses expect an enterprise-wide AI agent rollout within the next 12 months.

But business leaders also recognize the danger of this rapid adoption, with 84% saying that unauthorized or poorly governed AI agents are a serious security concern.

This problem is only likely to worsen as AI agents become more capable and accessible, especially when they can act autonomously with permissions stretching across different environments.

Microsoft’s findings also note that security teams have three clear priorities: maintaining visibility into where AI agents are operating (50%), safely introducing AI agents into existing systems and processes (50%), and verifying that autonomous AI agents meet compliance, risk, and audit requirements (49%).

“This research signals a structural shift in cross-enterprise security,” said Jo Miller, National Security Officer at Microsoft UK. “As AI agents move from experimentation into operational functions across UK organisations, they are delivering real gains in productivity and resilience, but they also introduce a new category of digital identity that must be secured with the same rigour as human or machine identities.”

“Double agents emerge when visibility and governance does not keep pace with adoption, which is why organisations need the ability to see, manage and control how agents access systems and data, across their enterprise,” Miller continued.

“By treating AI agents as managed identities and applying robust zero trust principles, with least-privilege access, defined permissions and full auditability, businesses can manage risk while continuing to innovate with confidence.”
