The AI Agent Governance Landscape Is Fragmenting — Why That's a Problem for Every Enterprise

RSAC 2026 reveals an explosion of platform-native AI agent governance solutions. But enterprises run agents across 5-10 platforms. Cross-platform visibility is the missing layer.


📊 Analysis


📅 March 22, 2026 ⏱ 22 min

Four days before RSA Conference 2026 opens its doors in San Francisco, the AI agent governance market is experiencing something between a Cambrian explosion and a land grab. Over $375 million flowed into agent governance and security startups in March 2026 alone. Platform giants are locking arms. A new cohort of startups is pitching “agent-native” security to CISOs who, twelve months ago, barely had agentic AI on their radar.

And yet the fundamental problem — the one that will define whether enterprises actually govern their AI agents or merely pretend to — remains largely unaddressed.

That problem is fragmentation.

Every major platform is building governance for its own agents. Every startup is carving out a niche — identity here, data access there, shadow agent discovery somewhere else. Meanwhile, the average enterprise is deploying AI agents across five to ten platforms simultaneously, with no unified view of what those agents are doing, what data they’re touching, or whether any of them have gone rogue.

RSAC 2026 will showcase dozens of solutions to pieces of this problem. What it won’t showcase — at least not prominently — is a coherent answer to the whole thing.

Let’s map what’s actually happening.


The Platform-Native Trap

The biggest governance news heading into RSAC is the deepening integration between ServiceNow and Microsoft. ServiceNow’s AI Control Tower now integrates directly with Microsoft Agent 365, Copilot Studio, and Microsoft Foundry. On paper, this is significant: the two largest enterprise platform players have connected their governance layers, giving joint customers a unified view of agent activity across both ecosystems.

On paper.

In practice, this solves governance for exactly two platforms. Enterprises that happen to run their entire agent infrastructure on ServiceNow and Microsoft — and only ServiceNow and Microsoft — now have a real governance story. For everyone else, the ServiceNow-Microsoft integration is a reminder of what cross-platform governance could look like if the rest of the market cooperated.

They won’t cooperate, of course. Platform-native governance is a moat strategy. ServiceNow wants you to run more agents on ServiceNow. Microsoft wants you to run more agents on Microsoft. Their governance integration exists, in part, to make the combined ecosystem stickier. That’s not cynicism; it’s business logic.

The problem is that enterprises don’t operate on two platforms. They operate on many. A typical large enterprise in 2026 is running agents built on Microsoft Copilot Studio, ServiceNow’s agent framework, Salesforce Agentforce, custom agents on AWS Bedrock or Google Vertex, internal tools built on LangChain or CrewAI, and increasingly, agents spawned by business users through low-code platforms they may not even know about.

Each of those platforms is developing its own governance controls. Each has its own way of logging agent actions, defining permissions, and enforcing policies. None of them have a strong incentive to make their governance data interoperable with competitors.

The result is what you might call the platform-native trap: the more each vendor invests in governance for its own ecosystem, the harder it becomes to govern agents across ecosystems. Each new “governance solution” actually deepens the fragmentation it claims to address.

Consider the numbers. Ninety-eight percent of enterprises are now deploying AI agents in some capacity. Seventy-nine percent of those enterprises lack governance policies that span their full agent footprint. That 79% figure isn’t going to improve by adding more platform-specific governance tools. It’s going to improve when someone builds the cross-platform layer that sits above all of them.


The Startup Surge: Agent-Native Security Arrives at RSAC

If the platform giants are playing defense — governing their own ecosystems — the startup class of 2026 is playing offense, attacking specific governance gaps with purpose-built solutions.

The most watched entrant is Geordie AI, an RSAC 2026 Innovation Sandbox Top 10 finalist. Backed by a $6.5 million seed round from Ten Eleven Ventures and General Catalyst, Geordie was founded by members of the Darktrace founding team and describes itself as an “agent-native” security platform. The pedigree matters: Darktrace pioneered the use of AI for cyber defense over a decade ago, and Geordie’s founders are betting that the same paradigm shift is happening again, this time with AI agents as both the asset to protect and the attack surface to monitor.

Geordie’s approach centers on behavioral analysis of agent activity — understanding what “normal” looks like for an agent and flagging deviations in real time. It’s a compelling thesis, particularly for enterprises where agents are making autonomous decisions with real-world consequences. But it’s also, by design, a detection layer rather than a governance framework. Knowing that an agent is behaving anomalously is valuable. Knowing why it has the permissions it has, who approved those permissions, and which policies should constrain its behavior across every platform it touches — that’s governance.

Entro Security is taking a different angle. On March 19, 2026, Entro launched AGA — Agentic Governance & Administration — a platform focused on three capabilities that most governance tools overlook: shadow AI discovery, MCP (Model Context Protocol) activity visibility, and cross-system policy enforcement. The shadow AI discovery piece is particularly timely. As business users spin up agents through low-code tools and marketplace integrations, security teams are discovering agent sprawl that mirrors the shadow IT problem of the 2010s, except agents can take actions, not just store data.

Entro’s focus on MCP activity visibility is also noteworthy. The Model Context Protocol has rapidly become the connective tissue of agentic AI, enabling agents to interact with tools, databases, and APIs through a standardized interface. But MCP also creates new governance blind spots: if an agent uses MCP to access a tool it shouldn’t, or chains together MCP calls in ways that violate data policies, most existing governance tools won’t see it. Entro is building specifically for that gap.

Token Security, another RSAC Innovation Sandbox finalist, is attacking the identity dimension. Their AI Agent Identity Security Platform addresses what may be the most overlooked aspect of agent governance: agent identity. When a human accesses a system, identity is relatively well understood — there’s an employee behind a credential, bound by an access policy, logged in an identity provider. When an agent accesses a system, the identity model breaks down. Whose identity does the agent operate under? The user who deployed it? The platform that hosts it? Its own? Token Security argues that agents need their own identity framework — one that’s as rigorous as human IAM but designed for the scale and speed of machine actors.

Bedrock Data rounds out the governance conversation from the data access angle. Leading multiple RSAC sessions on governing the data that AI agents access, Bedrock starts from a simple premise: an agent’s permissions are only as secure as its data pipeline. An agent with narrowly scoped system permissions but broad data access can still cause significant harm — exfiltrating sensitive information, making decisions based on data it shouldn’t see, or poisoning downstream systems with contaminated inputs.

Each of these startups is building something real. Each addresses a genuine gap. And each, individually, covers only a fraction of what enterprise AI agent governance actually requires.


The Identity Crisis: 82 Machines for Every Human

Beneath the product announcements and RSAC keynotes, there’s a structural shift that makes AI agent governance fundamentally different from any governance challenge enterprises have faced before.

The machine-to-human identity ratio in the average enterprise has reached 82:1. Eighty-two machine identities — service accounts, API keys, bot credentials, agent tokens — for every human identity in the directory. And that ratio is accelerating as agentic AI scales.

This isn’t just a numbers problem. It’s a governance architecture problem.

Traditional governance frameworks — IAM, PAM, RBAC, even zero trust — were designed for a world where humans were the primary actors and machines were the plumbing. The human decided; the machine executed. Governance meant controlling human access and auditing human actions.

In the agentic era, that model inverts. The agent decides and acts. The human may have set the goal, but the agent chose the method, selected the tools, accessed the data, and executed the action. Governance now means controlling agent access and auditing agent actions — at a scale that’s two orders of magnitude larger than human-centric governance ever had to handle.

The identity crisis compounds when you factor in shadow agents. Just as shadow IT emerged when business users adopted SaaS tools without IT approval, shadow agents are emerging as business users deploy AI agents without security review. A marketing manager spins up an agent on a low-code platform to automate campaign analysis. A sales director connects an agent to CRM data through an MCP integration. A finance analyst deploys a reporting agent that pulls from the data warehouse every morning.

None of these agents went through a security review. None have formal identity records. None are covered by existing governance policies. And collectively, they may have access to more sensitive data than any single human employee.

The MCP security dimension adds another layer. As the Model Context Protocol becomes the standard way agents interact with tools and data sources, MCP connections become the new attack surface. An agent that chains together MCP calls — accessing a database, processing the results through an external API, and feeding the output into a decision workflow — creates a data flow that no single system’s governance controls can fully track. Each MCP connection may be individually authorized, but the chain of connections may violate policies that no individual system is positioned to enforce.
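To make the chained-call risk concrete, here is a minimal Python sketch of how a cross-platform monitor might evaluate an entire MCP call chain against a data-flow rule, even when every individual call is authorized. The event schema, tool names, and sensitivity labels are invented for illustration; MCP itself defines none of them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class McpCall:
    """One MCP tool invocation observed in an agent's trace (hypothetical schema)."""
    agent_id: str
    tool: str
    data_labels: frozenset  # sensitivity labels on data the call reads

# Hypothetical data-flow rule: "pii"-labeled data must never reach an
# external-egress tool later in the same chain, even if each call alone is allowed.
FORBIDDEN_FLOWS = {("pii", "external_api")}
TOOL_CLASSES = {"crm.read": "internal", "warehouse.query": "internal",
                "http.post": "external_api"}

def chain_violations(chain: list[McpCall]) -> list[str]:
    """Flag flows that only appear when the whole chain is considered."""
    carried: set[str] = set()  # labels picked up earlier in this chain
    findings = []
    for call in chain:
        tool_class = TOOL_CLASSES.get(call.tool, "unknown")
        for label in carried:
            if (label, tool_class) in FORBIDDEN_FLOWS:
                findings.append(f"{call.agent_id}: '{label}' data reaches {call.tool}")
        carried |= call.data_labels
    return findings

trace = [
    McpCall("agent-7", "crm.read", frozenset({"pii"})),
    McpCall("agent-7", "http.post", frozenset()),  # individually authorized
]
print(chain_violations(trace))  # → ["agent-7: 'pii' data reaches http.post"]
```

Note that neither call triggers the rule on its own; the violation exists only in the sequence, which is exactly the blind spot per-system controls cannot see.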

CrowdStrike is keynoting RSAC 2026 with what they’re calling the “AI Operational Reality Manifesto” — a recognition that the industry’s AI security rhetoric has outpaced its operational readiness. The manifesto is expected to address the gap between AI security products on demo stages and AI security practices in production environments. It’s a welcome dose of realism from a company that built its reputation on operational security rather than aspirational frameworks.


$375 Million in March: Following the Money

The funding landscape tells its own story about where the market thinks AI agent governance is heading — and where the gaps remain.

In March 2026 alone, over $375 million flowed into companies addressing various dimensions of agent governance and security.

Add Geordie AI’s $6.5 million seed, and the picture is clear: investors see AI agent governance as a generational market opportunity. The diversity of bets — observability, orchestration, security posture, identity, application security — also reflects investor uncertainty about where in the stack the governance winner will emerge.

That uncertainty is itself a signal. When investors spread capital across every layer of a problem, it usually means the problem hasn’t been coherently defined yet. The market hasn’t converged on what “AI agent governance” actually means — is it identity management? Behavioral monitoring? Policy enforcement? Data access control? Observability? Compliance automation?

The answer, of course, is all of the above. And that’s precisely why platform-specific and point solutions won’t be enough. Governance isn’t a feature; it’s a discipline that spans the entire agent lifecycle, from deployment through operation to decommissioning, across every platform and tool an agent touches.


The Regulatory Clock: EU AI Act and the Compliance Imperative

The market dynamics would be sufficient to drive governance adoption on their own. But there’s a regulatory accelerant that makes the timeline non-negotiable.

The EU AI Act compliance deadline of August 2, 2026 is less than five months away. While much of the AI Act’s focus is on AI systems broadly, its provisions around transparency, human oversight, and risk management apply directly to agentic AI deployments. Enterprises operating in or serving EU markets will need to demonstrate that their AI agents are inventoried, classified by risk level, and subject to appropriate governance controls.

The EU AI Act doesn’t care whether an enterprise runs agents on one platform or ten. It requires governance outcomes — transparency, accountability, human oversight — regardless of the underlying architecture. An enterprise that can demonstrate governance for its Microsoft agents but not its Salesforce agents, or its production agents but not its shadow agents, will not satisfy regulators.

This regulatory reality is pushing governance from a “nice to have” to a “must have” on a timeline that most enterprises aren’t prepared for. Gartner projects that spending on what they term “Guardian Agents” — AI systems specifically designed to govern other AI systems — will grow from less than 1% to 5-7% of total agentic AI budgets by 2028. The EU AI Act deadline will accelerate the front end of that curve significantly.


What Cross-Platform AI Agent Governance Actually Requires

If platform-native solutions govern their own ecosystems and startups address individual governance dimensions, what does the missing cross-platform layer actually look like?

Based on the patterns emerging from the market and the gaps visible in current solutions, cross-platform AI agent governance requires five capabilities that no single product fully delivers today.

1. Universal Agent Discovery and Inventory

You cannot govern what you cannot see. Cross-platform governance starts with the ability to discover and inventory every agent across every platform — sanctioned and shadow, production and experimental, internal and third-party. This means integrating with the agent registries and deployment pipelines of every major platform, but also scanning for agents that exist outside those registries.

Entro’s shadow AI discovery is a step in this direction. But discovery needs to be continuous, cross-platform, and tied to a unified inventory that provides a single source of truth for “how many agents do we have, where are they running, and what are they doing?”

2. Cross-Platform Identity and Access Governance

Token Security’s agent identity work points to a real need, but identity governance for agents must span platforms. An agent’s identity, permissions, and access patterns need to be visible and manageable regardless of which platform hosts the agent. This means establishing agent identity standards that work across Microsoft, ServiceNow, Salesforce, AWS, Google Cloud, and custom platforms — a challenge that mirrors the federated identity problem enterprises spent the last decade solving for humans.

The 82:1 machine-to-human identity ratio makes this urgent. Enterprises that took years to implement human identity governance now need to implement machine identity governance at 82x the scale, in a fraction of the time.
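One way to picture a platform-agnostic agent identity is a record that binds the agent’s own identifier to its hosting platform, an accountable human, and scoped permissions. The sketch below is purely illustrative; no such cross-platform identity standard exists today, and every field name is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """One identity record per agent, independent of hosting platform.
    The schema (including the URN style) is invented for illustration."""
    agent_urn: str            # e.g. "urn:agent:salesforce:quote-bot"
    hosting_platform: str
    deployed_by: str          # the accountable human, distinct from the acting identity
    scopes: list[str] = field(default_factory=list)

    def acts_as(self) -> str:
        # The agent acts under its own identity; the deploying human
        # remains accountable but is not impersonated.
        return self.agent_urn

ident = AgentIdentity("urn:agent:salesforce:quote-bot", "salesforce",
                      "jane.doe@example.com", ["crm.read", "pricing.read"])
print(ident.acts_as())  # → urn:agent:salesforce:quote-bot
```

The design choice worth noticing is the separation of `deployed_by` from the acting identity: it answers the “whose identity does the agent operate under?” question by giving the agent its own, while keeping a human in the accountability chain.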

3. Unified Policy Definition and Enforcement

Governance policies — what an agent can do, what data it can access, when it requires human approval, how its actions are logged — need to be defined once and enforced everywhere. Today, policies are platform-specific. A data access policy in ServiceNow doesn’t automatically apply to the same agent’s data access through an MCP connection to an external tool.

Cross-platform policy enforcement requires a policy engine that sits above individual platforms, translates enterprise governance requirements into platform-specific controls, and monitors compliance across the full agent footprint.
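A minimal sketch of the “define once, enforce everywhere” idea: one abstract policy compiled into per-platform control formats. Both the policy shape and the output shapes are invented for illustration; real platforms each expose their own policy APIs, and a production translator would target those.

```python
# A single enterprise policy, declared once in a platform-neutral form.
POLICY = {
    "name": "no-external-sharing-of-customer-data",
    "applies_to_data": ["customer_pii"],
    "forbidden_action": "external_share",
    "requires_human_approval": True,
}

def compile_for_platform(policy: dict, platform: str) -> dict:
    """Translate the abstract policy into one platform's native control format.
    Output shapes here are hypothetical stand-ins for real platform APIs."""
    if platform == "servicenow":
        return {"acl": {"deny": policy["forbidden_action"],
                        "data_class": policy["applies_to_data"]}}
    if platform == "mcp-gateway":
        return {"block_tools_tagged": ["egress"],
                "when_labels": policy["applies_to_data"],
                "approval_required": policy["requires_human_approval"]}
    raise ValueError(f"no compiler for platform: {platform}")

for target in ("servicenow", "mcp-gateway"):
    print(target, "->", compile_for_platform(POLICY, target))
```

The point of the pattern is that the policy text lives in one place; adding a platform means adding a compiler branch, not restating the policy.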

4. End-to-End Activity Monitoring and Audit

Geordie AI’s behavioral analysis and Bedrock Data’s data access governance both address monitoring, but from different vantage points. Cross-platform governance requires end-to-end visibility: what did the agent do, across which platforms, accessing what data, through which MCP connections, producing what outcomes? This audit trail needs to be unified, searchable, and exportable for compliance purposes.

When the EU AI Act auditor asks “show me what this agent did last month,” the answer can’t be “here’s the ServiceNow log, and here’s the Microsoft log, and here’s the AWS log, and we’re not sure about the MCP connections.” It needs to be one view, one trail, one answer.
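The unification step might look like the following sketch: heterogeneous platform logs normalized into one common event schema, so a single query answers “what did this agent do?” across platforms. All field names on the raw side are invented stand-ins; real Microsoft and ServiceNow log formats differ.

```python
def normalize(platform: str, raw: dict) -> dict:
    """Map each platform's log shape onto one common event schema.
    The raw-side field names are hypothetical examples, not real formats."""
    if platform == "microsoft":
        return {"ts": raw["timestamp"], "agent": raw["agentId"],
                "action": raw["operation"], "platform": platform}
    if platform == "servicenow":
        return {"ts": raw["sys_created_on"], "agent": raw["agent_name"],
                "action": raw["activity"], "platform": platform}
    raise ValueError(f"no normalizer for: {platform}")

events = [
    normalize("microsoft", {"timestamp": "2026-02-03T09:15:00Z",
                            "agentId": "agent-7", "operation": "mail.read"}),
    normalize("servicenow", {"sys_created_on": "2026-02-03T09:16:02Z",
                             "agent_name": "agent-7", "activity": "ticket.update"}),
]

# One question, one trail: everything agent-7 did, across platforms, in order.
# (ISO-8601 timestamps sort correctly as strings.)
trail = sorted((e for e in events if e["agent"] == "agent-7"), key=lambda e: e["ts"])
for e in trail:
    print(e["ts"], e["platform"], e["action"])
```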

5. Lifecycle Governance: From Deployment to Decommissioning

Agents have lifecycles. They’re deployed, configured, updated, scaled, and eventually decommissioned. Cross-platform governance needs to cover the full lifecycle — ensuring that an agent is properly reviewed before deployment, continuously monitored during operation, and cleanly decommissioned when no longer needed, with all its credentials revoked and data access terminated.

This lifecycle dimension is almost entirely missing from current solutions. Most governance tools focus on runtime monitoring; few address the deployment review or decommissioning phases, and none do so across platforms.
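To illustrate lifecycle governance as more than runtime monitoring, here is a hedged sketch of an agent lifecycle state machine in which decommissioning necessarily revokes credentials. The stages and transitions are illustrative, not a proposed standard.

```python
from enum import Enum, auto

class Stage(Enum):
    PROPOSED = auto()
    APPROVED = auto()        # passed deployment review
    RUNNING = auto()
    DECOMMISSIONED = auto()

# Legal transitions: review gates deployment, and there is no path that
# skips a clean teardown at the end.
TRANSITIONS = {
    Stage.PROPOSED: {Stage.APPROVED},
    Stage.APPROVED: {Stage.RUNNING},
    Stage.RUNNING: {Stage.DECOMMISSIONED},
    Stage.DECOMMISSIONED: set(),
}

class AgentLifecycle:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.stage = Stage.PROPOSED
        self.credentials_revoked = False

    def advance(self, to: Stage) -> None:
        if to not in TRANSITIONS[self.stage]:
            raise ValueError(f"illegal transition {self.stage.name} -> {to.name}")
        if to is Stage.DECOMMISSIONED:
            # Decommissioning is not just stopping the process: credential
            # revocation is part of the transition itself, not an afterthought.
            self.credentials_revoked = True
        self.stage = to

a = AgentLifecycle("agent-7")
a.advance(Stage.APPROVED)
a.advance(Stage.RUNNING)
a.advance(Stage.DECOMMISSIONED)
print(a.stage.name, a.credentials_revoked)  # → DECOMMISSIONED True
```

Encoding revocation inside the transition, rather than as a separate cleanup task, is what makes “cleanly decommissioned” enforceable rather than aspirational.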


The Path Forward: Convergence or Continued Fragmentation?

RSAC 2026 will be a showcase for what the AI agent governance market has built in a remarkably short time. The ServiceNow-Microsoft integration demonstrates that platform-native governance is maturing. Geordie AI, Entro Security, Token Security, and Bedrock Data demonstrate that startups are attacking real gaps with real technology. CrowdStrike’s “AI Operational Reality Manifesto” signals that the industry is beginning to reckon with the distance between demo-stage security and production-grade governance.

But the trajectory of the market points toward continued fragmentation unless something changes. Each new platform-native governance feature makes cross-platform governance harder. Each new startup carving out a niche creates another integration point enterprises need to manage. Each vendor’s governance data model diverges further from every other vendor’s governance data model.

The enterprises caught in the middle — the 98% deploying agents, the 79% lacking comprehensive governance policies — need something different. They need a cross-platform governance layer that integrates with every platform’s native controls, aggregates every startup’s specialized capabilities, and provides the unified visibility, policy enforcement, and audit trail that regulators and boards are beginning to demand.

That layer is what iEnable is building. Not a replacement for platform-native governance or specialized security tools, but the connective tissue that makes them work together — the cross-platform intelligence layer that turns fragmented point solutions into coherent enterprise governance.

The AI agent governance market is about to have its biggest week of the year at RSAC 2026. The announcements will be impressive. The demos will be polished. The funding rounds will be large.

But the question enterprises should be asking isn’t “which governance tool should I buy?” It’s “how do I govern agents across every tool I’ve already bought?”

That’s the question that matters. And it’s the one most of the market isn’t answering yet.


Frequently Asked Questions About AI Agent Governance

What is AI agent governance?

AI agent governance is the set of policies, processes, and technologies that control how AI agents operate within an enterprise. It encompasses agent identity management, access controls, behavioral monitoring, data access policies, audit trails, and compliance enforcement. Unlike traditional AI governance, which focuses primarily on model fairness and transparency, AI agent governance addresses the unique challenges of autonomous software agents that can take actions, access data, and make decisions across multiple systems without direct human involvement. Effective AI agent governance ensures that every agent — whether sanctioned or shadow, simple or complex — operates within defined boundaries and produces auditable records of its activity.

Why is cross-platform AI governance important for enterprises?

Cross-platform AI governance is important because enterprises don’t run agents on a single platform. The average large enterprise deploys AI agents across five to ten platforms, including Microsoft, ServiceNow, Salesforce, AWS, Google Cloud, and various open-source frameworks. Each platform offers its own governance controls, but none provide visibility into agents running elsewhere. Without cross-platform AI governance, enterprises face blind spots where agents operate without oversight, policies are enforced inconsistently, and audit trails are fragmented across systems. As regulatory requirements like the EU AI Act demand comprehensive governance regardless of underlying architecture, cross-platform visibility becomes a compliance necessity rather than a technical preference.

How does the 82:1 machine-to-human identity ratio affect AI agent governance?

The 82:1 machine-to-human identity ratio means that for every human identity in an enterprise directory, there are 82 machine identities — including service accounts, API keys, bot credentials, and agent tokens. This ratio is growing as agentic AI scales, and it fundamentally changes the governance challenge. Traditional identity and access management (IAM) systems were designed for human-scale identity governance. AI agent governance must operate at machine scale, managing permissions, monitoring activity, and enforcing policies for identities that are 82 times more numerous than human ones — and that operate autonomously around the clock.

What is the EU AI Act’s impact on AI agent governance requirements?

The EU AI Act, with its compliance deadline of August 2, 2026, directly impacts AI agent governance requirements for any enterprise operating in or serving EU markets. The Act mandates transparency about AI system capabilities, human oversight mechanisms for high-risk AI applications, risk classification and management processes, and detailed record-keeping of AI system behavior. The Act does not distinguish between agents on different platforms — governance must be comprehensive, making cross-platform AI governance a regulatory requirement rather than merely a best practice.

What are shadow AI agents and why are they a governance risk?

Shadow AI agents are AI agents deployed within an enterprise without formal security review, IT approval, or governance oversight. They emerge when business users leverage low-code platforms, marketplace integrations, or MCP connections to spin up agents that automate tasks like data analysis, report generation, or workflow management. Shadow agents are a governance risk because they often have access to sensitive enterprise data, operate without identity records or access policies, and are invisible to security teams. Discovering and governing shadow agents is a critical capability in any enterprise AI agent management strategy.

How should enterprises evaluate AI agent governance frameworks ahead of RSAC 2026?

Enterprises evaluating AI agent governance frameworks should assess five key capabilities: (1) universal agent discovery across all platforms, including shadow agents; (2) cross-platform identity and access governance; (3) unified policy definition and enforcement; (4) end-to-end activity monitoring with a single audit trail; and (5) lifecycle governance covering deployment review, runtime monitoring, and decommissioning. No single vendor delivers all five today. Prioritize solutions that integrate with existing platforms rather than replacing them.

