Tuesday, June 17, 2025

Gartner: Guardian agents to capture 10-15% of the agentic AI market by 2030

According to Gartner, guardian agents will be increasingly deployed as the AI risk surface expands, ensuring AI processes stay reliable and secure

By 2030, guardian agent technologies will account for at least 10% to 15% of the agentic AI market, according to recent research from Gartner. Guardian agents are AI-based technologies designed to support trustworthy and secure interactions with AI. They function both as AI assistants, supporting users with tasks such as content review, monitoring and analysis, and as evolving semi-autonomous or fully autonomous agents capable of formulating and executing action plans, as well as redirecting or blocking actions to align with predefined agent goals.

Guardrails are needed as agentic AI usage continues to grow

According to a Gartner webinar poll of 147 CIOs and IT function leaders conducted in May 2025, 24% of respondents had already deployed a few AI agents (less than a dozen) and another 4% had deployed over a dozen.

The same poll question found that 50% of respondents said they were researching and experimenting with the technology, while another 17% of respondents said that they had not done so but planned to deploy the technology by the end of 2026 at the latest. Automated trust, risk and security controls are needed to keep these agents aligned and safe, accelerating the need for and rise of guardian agents.

“Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails,” said Avivah Litan, VP Distinguished Analyst at Gartner. “Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision making with risk management.”

Risks increase as agent power increases and spreads

Fifty-two per cent of 125 respondents from the same webinar poll said their AI agents are or will be primarily focused on internal administration functions such as IT, HR and accounting, while 23% are focused on external, customer-facing functions. As use cases for AI agents continue to grow, so do the threat categories affecting them, including input manipulation and data poisoning, where agents rely on manipulated or misinterpreted data. Examples include:

● Credential hijacking and abuse leading to unauthorised control and data theft.
● Agents interacting with fake or criminal websites and sources that can result in poisoned actions.
● Agent deviation and unintended behaviour due to internal flaws or external triggers that can cause reputational damage and operational disruption.

“The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight,” said Litan. “As companies move towards complex multi-agent systems that communicate at breakneck speed, humans cannot keep up with the potential for errors and malicious activities. This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control, and security for AI applications and agents.”

CIOs and security and AI leaders should focus on three primary usage types of guardian agents to contribute towards safeguarding and protecting AI interactions:

● Reviewers: Identifying and reviewing AI-generated output and content for accuracy and acceptable use.
● Monitors: Observing and tracking AI and agentic actions for human- or AI-based follow-up.
● Protectors: Adjusting or blocking AI and agentic actions and permissions using automated actions during operations.
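The three usage types can be pictured as distinct checks wrapped around an agent's actions. The sketch below is purely illustrative: the class names, risk scores and threshold are assumptions for demonstration, not part of any Gartner framework or real product API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AgentAction:
    """A hypothetical agent action with an assumed risk score (0.0 safe to 1.0 high)."""
    description: str
    risk_score: float

def reviewer(output: str, banned_terms: List[str]) -> bool:
    """Reviewer: check AI-generated output for acceptable use (illustrative rule)."""
    return not any(term in output.lower() for term in banned_terms)

def monitor(action: AgentAction, log: List[str]) -> None:
    """Monitor: record agent actions for human- or AI-based follow-up."""
    log.append(f"observed: {action.description} (risk={action.risk_score})")

def protector(action: AgentAction, threshold: float = 0.7) -> bool:
    """Protector: block actions whose assumed risk score meets or exceeds a threshold."""
    return action.risk_score < threshold

# Usage: a risky action is logged by the monitor and blocked by the protector.
audit_log: List[str] = []
action = AgentAction("transfer funds to unverified account", risk_score=0.9)
monitor(action, audit_log)
allowed = protector(action)
print(allowed)  # prints False: risk 0.9 is above the 0.7 threshold
```

In practice, a reviewer might run a content-safety model rather than a keyword list, and a protector might revoke permissions rather than return a boolean; the point is only that the three roles intervene at different stages of an agent's operation.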

Whatever the usage type, guardian agents will manage agent interactions and anomalies. This capability will become central as multi-agent systems spread: Gartner predicts that 70% of AI applications will use multi-agent systems by 2028.

Gartner clients can read more in Guardians of the Future: How CIOs Can Leverage Guardian Agents for Trustworthy and Secure AI. Additional details can also be found in the complimentary Gartner webinar CIOs, Leverage Guardian Agents for Trustworthy and Secure AI.
