Agentic AI Set To Rise, With New Cybersecurity Risks: Gartner
The autonomous technology could help CIOs deliver their AI goals but needs legal and ethical guidelines.
At a Glance
- By 2028, agentic AI could replace 20% of digital storefront interactions and handle 15% of day-to-day business decisions.
- Agentic AI introduces new risks, including unauthorized actions, malicious logic, and supply chain vulnerabilities.
- Organizations must educate teams about agentic AI risks and implement controls such as monitoring and flagging anomalous activity.
Agentic AI could dramatically expand AI’s potential and could be included in 33% of enterprise software applications by 2028, up from 1% today, according to research and advisory firm Gartner.
But along with potentially game-changing benefits, the technology brings new risks and security threats above and beyond those inherent to AI models and applications, said Avivah Litan, a distinguished vice president analyst at Gartner.
Until now, large language models (LLMs) have not acted on their own initiative. With agentic AI, LLMs can act autonomously with minimal human supervision, adapting to their context and executing goals in complex environments.
This ability could dramatically increase AI’s potential by enabling it to examine data, perform research, and plan and complete tasks in the digital or physical world via APIs or robotic systems.
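That API access is also where controls such as the monitoring and flagging noted above can be applied. As a minimal, hypothetical sketch (not Gartner’s guidance, and with all names illustrative), the Python below gates an agent’s proposed API actions against an allowlist and a spend limit, holding anything anomalous or unauthorized for human review:

```python
# Hypothetical sketch: gate an AI agent's proposed actions before execution.
# Actions outside an approved allowlist, or above a spend threshold, are
# flagged for human review instead of being executed autonomously.

from dataclasses import dataclass, field

ALLOWED_ACTIONS = {"search_catalog", "create_quote", "send_status_email"}
MAX_ORDER_VALUE = 500.00  # illustrative per-action spend limit

@dataclass
class ProposedAction:
    name: str                       # API operation the agent wants to call
    params: dict = field(default_factory=dict)  # arguments the agent supplied
    est_value: float = 0.0          # estimated monetary impact, if any

def review_action(action: ProposedAction) -> str:
    """Return 'execute', or 'flag' to route the action to a human."""
    if action.name not in ALLOWED_ACTIONS:
        return "flag"  # unauthorized action: not on the allowlist
    if action.est_value > MAX_ORDER_VALUE:
        return "flag"  # anomalous spend: exceeds the configured limit
    return "execute"

# Example: the agent tries to place a large order via an unapproved API.
decision = review_action(ProposedAction("place_order", {"sku": "A1"}, 1200.0))
print(decision)  # -> "flag": logged and held for human approval
```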
For example, future agentic AI systems with full agency could learn from their environment, make decisions and perform tasks independently.
Gartner, which listed agentic AI as its top strategic technology trend for 2025, predicted in a briefing note that by 2028, machine customers powered by AI agents could replace 20% of interactions at human-readable digital storefronts.
Read the Full Article on AI Business