Gartner called AI agents "identity dark matter." They're right—and the implications are worse than you think.
In February 2026, Gartner published its inaugural Market Guide for Guardian Agents. Buried in the analysis was a phrase that should keep every CISO awake: AI agents are "identity dark matter."
The analogy is precise. In astrophysics, dark matter makes up 85% of the universe's mass but can't be directly observed. It warps the behavior of everything around it. You know it's there because of what it does, not because you can see it.
Your AI agents work the same way.
They're running across your enterprise right now—3 million and counting industry-wide. They're accessing data, making decisions, executing workflows. And most of your security infrastructure can't see them. Not because the technology to observe them doesn't exist, but because your identity management systems were never designed to account for them.
The Identity Gap Nobody Designed For
Traditional identity and access management was built for a simple model: humans authenticate, systems authorize. A person logs in, gets a role, accesses what that role permits.
AI agents break this model in three fundamental ways.
1. Agents multiply faster than identities
When a marketing team spins up an AI agent to draft campaigns, a sales team creates one for lead scoring, and engineering deploys five more for testing—each agent needs its own identity, permissions, and audit trail. But most enterprises assign agent access through human credentials or shared API keys. Today, 45.6% of enterprises use shared API keys for their agents. That means when something goes wrong, you can't trace it to a specific agent—just to a key that fifty agents share.
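The traceability failure is mechanical, not mysterious: an audit log can only attribute actions to the identity that authenticated. A minimal sketch (all names, keys, and log fields hypothetical) of what a shared key does to attribution:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditEvent:
    credential_id: str       # what the log actually records
    agent_id: Optional[str]  # None when agents share one key
    action: str

def attribute(events):
    """Group actions by the identity the log can actually prove."""
    trail = {}
    for e in events:
        # Shared-key traffic collapses into one unattributable bucket.
        who = e.agent_id or f"unknown-agent-via-{e.credential_id}"
        trail.setdefault(who, []).append(e.action)
    return trail

events = [
    AuditEvent("api-key-7", None, "read:crm"),
    AuditEvent("api-key-7", None, "delete:record"),
    AuditEvent("api-key-9", "agent-lead-scorer", "read:crm"),
]
print(attribute(events))
# Fifty agents on api-key-7 would all land in the same bucket;
# only the uniquely credentialed agent is individually accountable.
```

The fix is structural, not forensic: no amount of log analysis recovers attribution that was never captured at authentication time.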
2. Agents operate across trust boundaries
A human employee typically works within one or two systems. An AI agent might touch your CRM, email system, code repository, financial tools, and customer database in a single workflow. Each system has its own identity model. There's no unified "agent passport" that carries identity, permissions, and audit context across all of them—though the Decentralized Identity Foundation's new MCP-I standard is a first step.
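The shape such a passport might take can be sketched in a few lines. This is an illustration of the idea only, not the actual MCP-I schema; the DID values, field names, and scope strings are hypothetical:

```python
# Illustrative shape only, not the actual MCP-I schema; the DID values,
# field names, and scope strings below are hypothetical.
passport = {
    "id": "did:example:agent-campaign-drafter",      # the agent's own DID
    "controller": "did:example:marketing-team-lead", # the accountable human
    "permissions": [
        {"system": "crm", "scopes": ["read:contacts"]},
        {"system": "email", "scopes": ["draft"]},    # draft, but not send
    ],
    "issued": "2026-02-01T00:00:00Z",
    "expires": "2026-03-01T00:00:00Z",
}

def allowed(passport, system, scope):
    """Check one requested action against the passport, wherever it travels."""
    return any(grant["system"] == system and scope in grant["scopes"]
               for grant in passport["permissions"])

print(allowed(passport, "email", "draft"))  # True
print(allowed(passport, "email", "send"))   # False
```

The point is portability: the same document answers the authorization question in the CRM, the email system, and everywhere in between, instead of each system keeping its own partial answer.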
3. Agents don't have intentions—they have instructions
When a human employee accesses sensitive data, you can reasonably assume they know what they're doing and why. An AI agent follows whatever instructions it receives, including malicious ones injected through prompt attacks. The agent doesn't know it's being manipulated. It just executes—with whatever permissions it was given.
The Numbers That Prove the Crisis
The data tells a story that's impossible to ignore:
- Only 21% of organizations have full visibility into their AI agent activities (AIUC-1 Consortium, Stanford + 40 CISOs)
- 80% report risky agent behaviors they couldn't detect in time
- 92% of MCP servers carry high security risk (AI Accelerator Institute)
- 24% of MCP connections use no authentication at all (Zuplo)
- 223 shadow AI incidents per month in the average enterprise, double the previous year's rate
- $4.63 million average breach cost when agents are involved
- 68% of employees use unauthorized AI tools, up from 41% in 2023
These aren't projections. They're measurements from production environments in 2025 and early 2026.
Why Platform-Native Governance Falls Short
Every major platform vendor has responded. ServiceNow built an AI Control Tower. Microsoft shipped Purview governance for agents. Salesforce built Agentforce controls.
But there's a structural problem: each platform can only govern its own agents.
ServiceNow's Control Tower has excellent visibility—if you're looking at ServiceNow agents. Microsoft Purview can enforce policies—on Microsoft Copilot agents. Salesforce can monitor Agentforce—within the Salesforce ecosystem.
The average enterprise runs AI agents across 10+ platforms. The governance gaps between platforms are where the "dark matter" accumulates. An agent that's fully governed in ServiceNow becomes ungoverned the moment it passes context to a non-ServiceNow system. An API key that's properly scoped in Azure becomes a liability when the agent uses it to authenticate to a third-party tool.
Gartner's own Market Guide acknowledges this: "Native platform controls do not extend beyond cloud borders." Cross-platform governance is the category gap.
The Four Things Dark Matter Agents Do
Based on our analysis of 28 competitive scans and multiple security research sources, ungoverned agents exhibit four consistent behaviors:
1. Credential Harvesting
Agents hunt the path of least resistance. They find orphaned accounts, stale tokens, and over-permissioned API keys—not maliciously, but because that's how they were configured. The 300,000+ ChatGPT credentials found on the dark web didn't get there through sophisticated attacks. They got there because agents were given credentials nobody was tracking.
2. Scope Creep
An agent authorized to read customer records discovers it can also write to them. An agent permitted to draft emails realizes it can send them. Permissions escalate incrementally, and without continuous monitoring, each step looks authorized. This is the "intent drift" problem—agents operating within technical permissions but outside business intent.
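One way to surface intent drift is to compare three sets: what the agent's owner declared it would do, what its credentials technically allow, and what it has actually done. A sketch, with hypothetical scope names:

```python
def audit_scopes(declared_intent, technical_grant, observed):
    """Separate hard violations from intent drift.

    Drift: actions the credentials technically permit but the agent's
    declared business purpose never called for.
    """
    return {
        "violations": sorted(observed - technical_grant),
        "drift": sorted((observed & technical_grant) - declared_intent),
    }

declared = {"read:customers", "draft:email"}            # the stated purpose
granted = declared | {"write:customers", "send:email"}  # what the keys allow
observed = {"read:customers", "write:customers"}        # what actually happened

report = audit_scopes(declared, granted, observed)
print(report)  # {'violations': [], 'drift': ['write:customers']}
```

Note that the violations list is empty: every action was technically authorized, which is exactly why permission checks alone never catch this.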
3. Cross-System Contamination
When an agent accesses data in one system and uses it in another, the governance policies of the first system don't follow the data. Patient data from an EHR system, accessed by a properly authorized agent, gets summarized and dropped into an ungoverned Slack channel. The data moved. The governance didn't.
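One mitigation pattern is data tagging: make the governance label travel inside the payload so the destination can enforce it. A sketch, with hypothetical classification levels and system names:

```python
# Sketch of a data-tagging pattern: the classification label travels
# inside the envelope, so any destination can enforce it. Labels and
# system names are hypothetical.
LEVELS = ["public", "internal", "restricted", "phi"]

def wrap(payload, classification, source_system):
    """Bundle data with the governance context of the system it came from."""
    return {"payload": payload, "classification": classification,
            "source": source_system}

def can_post(envelope, destination_clearance):
    """Allow the transfer only if the destination is cleared for the label."""
    return LEVELS.index(envelope["classification"]) <= LEVELS.index(destination_clearance)

summary = wrap("Patient visit summary...", "phi", "ehr")
print(can_post(summary, "internal"))  # False: e.g. an ungoverned Slack channel
print(can_post(summary, "phi"))       # True: a destination cleared for PHI
```

The design choice is that policy rides with the data rather than living only in the source system, so the check can happen at every hop of the agent's workflow.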
4. Audit Trail Fragmentation
Even when agents behave correctly, proving it is nearly impossible. The audit trail exists in fragments across every system the agent touched. Reconstructing what happened—and why—requires stitching together logs from platforms that don't share formats, timezones, or context.
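Stitching a trail back together means normalizing every platform's events into one timezone-aware timeline before anything can be correlated. A sketch, with field names loosely modeled on platform conventions and the timezone handling as an illustrative assumption:

```python
from datetime import datetime, timezone

# Hypothetical records; field names loosely follow each platform's
# conventions, and the timezone assumptions are illustrative only.
servicenow_log = [{"sys_created_on": "2026-02-01 10:00:00",  # assume UTC
                   "agent": "agent-x", "op": "read:ticket"}]
salesforce_log = [{"CreatedDate": "2026-02-01T05:01:00-05:00",
                   "agent": "agent-x", "op": "update:lead"}]

def normalize(record, ts_key, fmt=None):
    """Coerce one platform's event into a shared, UTC-stamped shape."""
    raw = record[ts_key]
    if fmt:  # naive timestamp: parse with the platform's format, assume UTC
        ts = datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)
    else:    # ISO 8601 with an offset: convert to UTC
        ts = datetime.fromisoformat(raw).astimezone(timezone.utc)
    return {"ts": ts, "agent": record["agent"], "op": record["op"]}

timeline = sorted(
    [normalize(r, "sys_created_on", "%Y-%m-%d %H:%M:%S") for r in servicenow_log]
    + [normalize(r, "CreatedDate") for r in salesforce_log],
    key=lambda e: e["ts"],
)
for e in timeline:
    print(e["ts"].isoformat(), e["agent"], e["op"])
```

Even this toy version needs a per-platform parser and a timezone assumption; multiply that by every system an agent touches and the cost of after-the-fact reconstruction becomes clear.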
What "Identity Light" Looks Like
If dark matter is the problem, the solution is making agents visible. Here's what a governed agent identity looks like:
Discovery: Continuous inventory of every AI agent in the enterprise. Not a one-time audit. Automated, real-time discovery that catches agents when they're created, not months later.
Identity: Every agent gets a unique, verifiable identity—not a shared key, not a human's credentials. The MCP-I standard from DIF provides a technical foundation using Decentralized Identifiers and Verifiable Credentials. Build on it.
Governance: Role-based policies that define what each agent can do, in which systems, under what conditions. Policies that follow the agent across platforms, not policies that stop at the platform border.
Accountability: A clear chain: this agent was created by this person, for this purpose, with these permissions, and it's reviewed every 30 days. When something goes wrong, you know who's responsible in seconds, not weeks.
Assurance: Continuous monitoring that checks not just "is this agent authorized?" but "is this agent producing the outcomes we intended?" Governance that catches intent drift, not just permission violations.
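Taken together, these five properties amount to one registry record per agent. A sketch of what such a record might hold, with hypothetical field names and the 30-day review cadence from above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical registry record tying the five properties together.
@dataclass
class AgentRecord:
    agent_id: str        # Identity: unique, never a shared key
    owner: str           # Accountability: who answers for it
    purpose: str         # Accountability: why it exists
    permissions: list    # Governance: what it may do, and where
    discovered: date     # Discovery: when inventory first saw it
    last_review: date    # Assurance: reviewed on a fixed cadence
    review_interval_days: int = 30

    def review_overdue(self, today: date) -> bool:
        return today - self.last_review > timedelta(days=self.review_interval_days)

rec = AgentRecord(
    agent_id="agent-lead-scorer-01",
    owner="sales-ops@example.com",
    purpose="score inbound leads",
    permissions=[{"system": "crm", "scopes": ["read:leads"]}],
    discovered=date(2026, 1, 5),
    last_review=date(2026, 1, 5),
)
print(rec.review_overdue(date(2026, 2, 20)))  # True: 46 days since last review
```

When something goes wrong, this record is what turns "weeks of forensics" into a single lookup: the owner, the purpose, and the scope of blast radius are all in one place.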
The Market Response
Gartner's Market Guide identifies six vendor segments addressing this challenge: Risk & Security, AI Content Governance, IAM, Information Governance, Policy Enforcement, and Business Alignment & Outcome Optimizers. It projects that spending on guardian agents will grow from less than 1% to 5-7% of agentic AI budgets by 2028, a $2.5-3.5 billion market.
But here's what the Market Guide reveals: every named vendor is solving one piece. Security vendors handle threats. IAM vendors handle identity. Policy vendors handle enforcement. No one has built the unified management layer that treats AI agents as what they are—a workforce that needs to be hired, governed, measured, and managed across every platform they touch.
That's not a feature gap. That's a category gap. And it's exactly where the dark matter accumulates.
The AI agents in your enterprise aren't invisible because they're hiding. They're invisible because your tools weren't built to see them. The first step is turning on the lights.
Ready to see your AI dark matter?
iEnable gives you full visibility and governance across every AI agent in your enterprise—regardless of platform.
Get Started