Key Takeaways
- Five vendors are launching AI agent governance products at or ahead of RSAC 2026 (March 22–26): Geordie AI, Token Security, Entro Security, Bedrock Data, and Microsoft Agent 365.
- Every framework converges on the same three concerns: discovery (find your agents), identity (authenticate and authorize them), and behavior (monitor what they do).
- Geordie AI maps to three pillars: Posture Management, Behavioral Observability, Contextual Controls. Token Security introduces intent-based access. Entro extends IGA to agents. Bedrock governs data access. Microsoft wraps Defender + Entra + Purview into Agent 365.
- All five operate at what iEnable’s Seven-Layer Framework calls Layers 1–3: Infrastructure, Identity, and Behavioral Control.
- Zero vendors address Layer 7: Organizational Context Quality — whether AI agents understand your org’s priorities, culture, tribal knowledge, and decision-making patterns. This is the layer that determines whether a governed agent is merely safe or actually useful.
- The gap isn’t academic. A fully governed agent that doesn’t understand your organization will make technically compliant but contextually wrong decisions — the enterprise equivalent of a contractor who passes the background check but doesn’t know the building.
RSAC 2026: Five AI Agent Governance Vendors, One Blind Spot None of Them See
📅 March 19, 2026 ⏱ 16 min

RSA Conference 2026 opens Monday in San Francisco. The headline story isn’t ransomware or zero-trust this year — it’s AI agent governance. Five vendors are racing to define how enterprises should discover, authenticate, monitor, and control autonomous AI agents. Their frameworks are impressive. Their coverage is thorough. And they all miss the same thing.
The Convergence
Something remarkable happened in March 2026. Within a single week, five companies — ranging from a $6.5M seed-stage startup to the world’s largest software company — independently launched products to govern AI agents in the enterprise.
This isn’t coincidence. It’s a market signal. Gartner projects 40% of enterprise applications will embed task agents by late 2026, up from under 5% today. The governance infrastructure race has officially started.
Let’s map what each vendor built, what layers they cover, and what they all left out.
Vendor 1: Geordie AI — The Agent-Native Platform
RSAC Innovation Sandbox Top 10 Finalist | Founded 2025 | $6.5M Seed (Ten Eleven Ventures, General Catalyst)
Geordie’s pitch is “agent-native security” — built from scratch for AI agents rather than adapting existing security tools. Their framework operates across three pillars:
| Pillar | What It Does |
|---|---|
| Posture Management | Automatically inventories agents across frameworks, code environments, APIs, and endpoints in a single pane |
| Behavioral Observability | Combines behavioral data with posture context to build a “living picture” of how agents operate |
| Contextual Controls | Their engine “Beam” translates insights into mitigations, preventing agents from making risky decisions |
What they cover: Discovery, inventory, behavioral monitoring, policy enforcement.
iEnable Layer mapping: Layers 1–3 (Infrastructure Security, Identity & Access, Behavioral Control).
What’s missing: Geordie can tell you what an agent did and whether it was within policy. It cannot tell you whether the agent understood your organization well enough to make the right decision in the first place.
Vendor 2: Token Security — Intent-Based Access
RSAC Innovation Sandbox Top 10 Finalist | SC Awards 2026 Finalist (two categories)
Token Security introduces a genuinely interesting concept: intent-based security. Their insight is that two agents with identical permissions can behave completely differently depending on their declared purpose. Static permission models can’t account for this.
Their five core capabilities:
- Continuous Discovery — Find agents and their owners across the enterprise
- Intent Understanding — Map declared vs. observed agent intent
- Dynamic Least Privilege — Create access policies aligned to defined intent
- Lifecycle Governance — Manage agent identity from creation to decommission
- Enforcement — Block agents operating outside their intent envelope
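The "intent envelope" idea above can be sketched in a few lines. This is a hypothetical illustration, not Token Security's actual API; all names here (`IntentEnvelope`, `within_envelope`) are assumptions made for the example. The point it shows: two agents holding identical raw permissions get different effective access depending on their declared purpose.

```python
# Hypothetical sketch of intent-based enforcement (illustrative only, not
# Token Security's product interfaces). An agent declares an intent at
# registration; each action is checked against that intent's envelope.
from dataclasses import dataclass, field

@dataclass
class IntentEnvelope:
    """Resources and operations an agent's declared purpose justifies."""
    purpose: str
    allowed_resources: set = field(default_factory=set)
    allowed_operations: set = field(default_factory=set)

def within_envelope(env: IntentEnvelope, resource: str, operation: str) -> bool:
    """Block actions outside the declared intent, even when raw
    permissions would technically allow them."""
    return resource in env.allowed_resources and operation in env.allowed_operations

# Two agents with identical raw permissions but different declared intents:
invoicing = IntentEnvelope("generate invoices", {"billing_db"}, {"read"})
cleanup = IntentEnvelope("archive stale records", {"billing_db"}, {"read", "delete"})

assert within_envelope(cleanup, "billing_db", "delete")
assert not within_envelope(invoicing, "billing_db", "delete")  # same permission, wrong intent
```

A static permission model would grant or deny `delete` on `billing_db` once; the intent check re-evaluates it per declared purpose, which is the distinction Token's framing turns on.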
What they cover: Non-human identity management, intent classification, dynamic authorization.
iEnable Layer mapping: Layers 2–3 (Identity & Access, Behavioral Control), with innovative intent classification touching Layer 4 (Decision Governance).
What’s missing: Token maps agent intent but not agent understanding. An agent can intend to do exactly the right thing and still make a contextually wrong decision because it doesn’t understand the organizational nuances that determine what “right” means in your specific company.
Vendor 3: Entro Security — AGA (Agentic Governance & Administration)
Launching at RSAC 2026, Booth #N4515 | Extends existing IGA platform to AI agents
Entro takes the most familiar approach: extend Identity Governance and Administration (IGA) — the framework enterprises already use for human identities — to cover AI agents. Their AGA platform has two core capabilities:
| Capability | What It Does |
|---|---|
| Shadow AI Discovery | Integrates with EDR tools to find AI clients and local runtimes on employee devices. Connects with agent foundries (Amazon Bedrock, Copilot Studio) and CSPs to discover agents and their non-human identities |
| Monitoring & Enforcement | MCP activity visibility, policy controls for sanctioned MCP targets, audit trails of allowed/blocked activity, controls to reduce sensitive data exposure (for a deeper look at why MCP governance matters, see our MCP security enterprise governance guide) |
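The MCP policy controls in the second row reduce to a simple pattern: an allowlist of sanctioned MCP targets plus an audit trail of every decision. The sketch below is purely illustrative (the allowlist contents, `gate_mcp_call`, and the log shape are all assumptions, not Entro's interfaces):

```python
# Hypothetical sketch of MCP target policy enforcement with an audit trail.
# Illustrative only -- not Entro's actual product interfaces.

SANCTIONED_MCP_TARGETS = {"github-mcp", "jira-mcp"}  # assumed allowlist
audit_log: list[dict] = []

def gate_mcp_call(agent_id: str, target: str, tool: str) -> bool:
    """Allow calls only to sanctioned MCP servers; record every decision,
    allowed or blocked, so the trail covers both outcomes."""
    allowed = target in SANCTIONED_MCP_TARGETS
    audit_log.append({"agent": agent_id, "target": target,
                      "tool": tool, "allowed": allowed})
    return allowed

assert gate_mcp_call("agent-17", "github-mcp", "create_issue")
assert not gate_mcp_call("agent-17", "unknown-mcp", "read_file")
assert len(audit_log) == 2  # both the allowed and the blocked call were logged
```

The design choice worth noting is that blocked calls are logged too; an audit trail of allowed/blocked activity, as the table describes, only works if denials leave a record.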
What they cover: Shadow AI detection, non-human identity mapping, MCP governance, policy enforcement.
iEnable Layer mapping: Layers 1–3 (Infrastructure Security, Identity & Access, Behavioral Control).
What’s missing: Entro can find your shadow AI agents and tell you what permissions they have. It can’t tell you whether those agents understand the difference between how your East Coast team handles escalations versus your West Coast team — the kind of organizational context that determines whether an agent’s output is helpful or harmful.
Vendor 4: Bedrock Data — Data Governance for Agents
RSAC 2026 Daily Sessions | Fresh Snowflake strategic investment (March 2026)
Bedrock approaches the problem from the data layer. Their thesis: the hardest problem in AI agent security isn’t identity or access — it’s governing the data that agents access, process, and act on.
Key innovations:
- Metadata Lake + MCP Integration: Agents query the Metadata Lake in real-time to validate whether data is within policy before acting
- ArgusAI: Creates a unified exposure map for understanding and containing agentic system risks
- Data Bill of Materials (DBoM): A continuously updated inventory of data assets linked to AI systems
- Snowflake Horizon Integration: Governance capabilities integrated directly into the data platform
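The Metadata Lake pattern above boils down to a pre-action check: before touching data, the agent consults an inventory of assets and their policy metadata, and ungoverned data is denied by default. The sketch below is loosely modeled on that idea; the inventory shape, field names, and `validate_before_acting` are all assumptions for illustration, not Bedrock's actual interfaces.

```python
# Illustrative sketch of a pre-action data policy check, in the spirit of
# the Metadata Lake / DBoM idea. All names are assumptions, not Bedrock APIs.

METADATA_LAKE = {
    # asset -> classification and permitted purposes, as recorded in the
    # continuously updated data inventory (DBoM-style)
    "customers.contact_info": {"classification": "pii", "allowed_purposes": {"support"}},
    "products.catalog": {"classification": "public", "allowed_purposes": {"support", "marketing"}},
}

def validate_before_acting(asset: str, purpose: str) -> bool:
    """Agent checks the metadata inventory before acting on data.
    Unknown assets fail closed: ungoverned data is off-limits."""
    record = METADATA_LAKE.get(asset)
    if record is None:
        return False
    return purpose in record["allowed_purposes"]

assert validate_before_acting("products.catalog", "marketing")
assert not validate_before_acting("customers.contact_info", "marketing")  # PII blocked for this purpose
assert not validate_before_acting("shadow.export", "support")  # unknown asset: denied
```

Failing closed on unknown assets is what links this check to the DBoM: the inventory is only a governance control if anything outside it is treated as out of policy.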
What they cover: Data classification, sensitivity boundaries, policy-aware data access, exposure mapping.
iEnable Layer mapping: Layers 1–2 (Infrastructure Security, Identity & Access) plus Layer 5 (Compliance & Audit), with strong data-layer depth.
What’s missing: Bedrock governs which data agents can access. It doesn’t govern what agents understand about that data’s organizational context — why this customer account matters more than that one, which data interpretation your VP of Sales would disagree with, how this quarter’s strategic pivot changes what “relevant” means.
Vendor 5: Microsoft Agent 365 — The Hyperscaler Play
GA May 1, 2026 | $15/user/month (or bundled in $99/user M365 E7)
We analyzed Agent 365 in depth last week. The short version: Microsoft wraps three existing platforms — Defender, Entra, and Purview — into a unified agent governance product.
| Pillar | Extends | What It Does |
|---|---|---|
| Observability | Defender | Monitor agent activity across M365 ecosystem |
| Security | Entra | Agent ID: unique identities, conditional access, least privilege for non-human identities |
| Governance | Purview | Data classification, sensitivity labels, compliance controls applied to agent actions |
What they cover: The broadest coverage of any vendor — identity, access, behavior, data, compliance, audit.
iEnable Layer mapping: Layers 1–5 (Infrastructure through Compliance & Audit), with genuine depth in identity management (Agent ID is a breakthrough).
What’s missing: Even Microsoft’s most comprehensive offering stops at governing what agents can do. It doesn’t address what agents understand about your organization. Agent ID ensures the right agent accesses the right data with the right permissions. It doesn’t ensure the agent knows why your CFO rejected a similar proposal last quarter, or that “urgent” means something different to your engineering team than your sales team.
The Framework Comparison
Here’s every RSAC 2026 vendor mapped against iEnable’s Seven-Layer AI Agent Governance Framework:
| Layer | What It Governs | Geordie AI | Token Security | Entro AGA | Bedrock Data | Microsoft Agent 365 |
|---|---|---|---|---|---|---|
| 1. Infrastructure Security | Agent runtime, network, deployment | ✅ | ◐ | ✅ | ✅ | ✅ |
| 2. Identity & Access | Authentication, authorization, least privilege | ◐ | ✅ | ✅ | ◐ | ✅ |
| 3. Behavioral Control | Monitoring, guardrails, anomaly detection | ✅ | ✅ | ✅ | ◐ | ✅ |
| 4. Decision Governance | Approval workflows, escalation, human-in-loop | ◐ | ◐ | ○ | ○ | ◐ |
| 5. Compliance & Audit | Regulatory alignment, audit trails, reporting | ○ | ○ | ◐ | ✅ | ✅ |
| 6. Ethical Alignment | Bias detection, fairness, value alignment | ○ | ○ | ○ | ○ | ◐ |
| 7. Organizational Context Quality | Does the agent understand your org? | ○ | ○ | ○ | ○ | ○ |
✅ = Core capability | ◐ = Partial coverage | ○ = Not addressed
The pattern is unmistakable. Five vendors. Five different angles. Five different founding teams. And a perfectly empty column at Layer 7.
Why the Gap Matters
Consider a real scenario. Your AI agent has:
- ✅ Proper identity (Token Security verified it)
- ✅ Correct permissions (Entro mapped its access)
- ✅ Data governance (Bedrock validated its data access)
- ✅ Behavioral monitoring (Geordie is watching it)
- ✅ Compliance controls (Microsoft Purview approved it)
The agent is now asked: “Draft a response to this customer’s contract renewal request.”
It produces a technically compliant, properly authorized, behaviorally normal response that:
- Doesn’t know this customer’s CEO plays golf with your CEO every month
- Doesn’t know your VP of Sales has been nurturing this account for three years toward an enterprise upgrade
- Doesn’t know that “standard renewal terms” is the wrong frame because last quarter’s board meeting changed the strategic direction on this exact customer segment
- Doesn’t know your company’s culture treats this type of account as a white-glove relationship, not a transactional renewal
The agent did everything right at Layers 1–6. And it produced an output that your VP of Sales would call “technically correct and completely wrong.”
That’s the organizational context gap. It’s not a security vulnerability. It’s a value-creation failure. And no amount of identity management, access control, or behavioral monitoring can fix it — because the problem isn’t what the agent can do, it’s what the agent understands.
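The scenario above can be reduced to a toy pipeline. Every gate an action must pass lives at Layers 1–6; the check names below are illustrative stand-ins for the vendor capabilities, and each is stubbed to pass so the structure of the gap is visible. What matters is what the pipeline never asks.

```python
# Toy sketch of the gap described above: a governance pipeline whose every
# gate lives at Layers 1-6. Stubs stand in for the vendor checks; all names
# are illustrative, not real product APIs.

def identity_ok(agent: str) -> bool: return True       # Layer 2: verified identity
def permissions_ok(agent: str, action: str) -> bool: return True  # Layer 2: least privilege
def data_policy_ok(action: str) -> bool: return True   # data governance: within policy
def behavior_normal(action: str) -> bool: return True  # Layer 3: no anomaly detected
def compliant(action: str) -> bool: return True        # Layer 5: compliance controls pass

def governed_action_allowed(agent: str, action: str) -> bool:
    """Every check is a Layer 1-6 question about what the agent CAN do.
    No gate asks whether the agent understands the account's history,
    the strategic pivot, or the relationship -- Layer 7 has no check here."""
    return all([
        identity_ok(agent),
        permissions_ok(agent, action),
        data_policy_ok(action),
        behavior_normal(action),
        compliant(action),
    ])

# The renewal draft sails through: allowed, yet nothing evaluated whether
# "standard renewal terms" is the right frame for this customer.
assert governed_action_allowed("renewal-agent", "draft standard renewal terms")
```

"Allowed" and "right" are judged by different layers, and in this pipeline only the first one has a gate; that is the structural shape of the Layer 7 gap.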
What This Means for Enterprises at RSAC
If you’re attending RSA Conference next week, here’s what to take away:
1. The infrastructure layer is solved (or close to it). Between these five vendors and the existing security stack, enterprises will have robust tools for discovering, authenticating, monitoring, and controlling AI agents by mid-2026.
2. The real governance question has shifted. The question is no longer “can we control our AI agents?” It’s “do our AI agents understand us well enough to be worth controlling?”
3. The Layer 7 gap is your opportunity. Every vendor at RSAC will show you how to make AI agents safe. None will show you how to make them effective — because effectiveness requires organizational context that lives outside the security stack.
4. Ask this question at every booth: “Your platform tells me what my agents are doing. Can it tell me whether my agents understand why they should do it differently for this customer versus that one?”
The silence will tell you everything.
For a deeper dive into what each vendor is bringing to RSAC and how to evaluate them, see our RSAC 2026 AI Agent Governance Guide, our RSAC 2026 AI agent governance preview, and Cross-Platform AI Agent Governance at RSAC 2026.
The 14th Consecutive Week
We’ve been tracking vendor launches across the AI agent ecosystem since December 2025. This is now the 14th consecutive week in which every new vendor announcement addresses Layers 1–3 (infrastructure, identity, behavior) and none addresses Layer 7 (organizational context).
The convergence at RSAC 2026 confirms this isn’t an oversight — it’s a structural gap. The security industry builds security products. The identity industry builds identity products. No one builds organizational understanding products because organizational context doesn’t fit neatly into any existing vendor category.
That’s exactly where iEnable operates. Not competing with Geordie or Token or Entro or Bedrock or Microsoft — complementing them. Layer 7 sits on top of Layers 1–6. You need both. But only one determines whether your AI agents are merely governed or genuinely useful.
Frequently Asked Questions
What is RSAC 2026’s biggest theme for AI?
RSA Conference 2026 (March 22–26, San Francisco) has AI agent governance as its dominant cybersecurity theme. Five vendors — Geordie AI, Token Security, Entro Security, Bedrock Data, and Microsoft — are all launching or showcasing AI agent governance products, marking the first time the security industry has converged on agent-specific controls.
What is the Seven-Layer AI Agent Governance Framework?
The Seven-Layer AI Agent Governance Framework is iEnable’s model for comprehensive AI agent oversight. The layers are: (1) Infrastructure Security, (2) Identity & Access, (3) Behavioral Control, (4) Decision Governance, (5) Compliance & Audit, (6) Ethical Alignment, and (7) Organizational Context Quality. Most vendors cover Layers 1–3; Layer 7 — whether agents understand your organization — remains unaddressed by all major vendors at RSAC 2026.
What is organizational context in AI governance?
Organizational context is the accumulated knowledge about how a specific company operates: its priorities, culture, relationships, tribal knowledge, decision-making patterns, and strategic direction. In AI governance, it determines whether an agent that has proper permissions and follows all rules still makes decisions that align with how your organization actually works. Without organizational context, AI agents are compliant but context-blind.
How does Microsoft Agent 365 compare to startup AI governance solutions?
Microsoft Agent 365 ($15/user/month, GA May 1, 2026) offers the broadest coverage by extending Defender, Entra, and Purview to AI agents, spanning Layers 1–5 of governance (though adoption is its own hurdle: only 3.3% of Copilot users reach power-user status). Startups like Geordie AI (agent-native observability), Token Security (intent-based access), and Entro Security (shadow AI discovery) offer deeper specialization in specific layers. All share the same gap at Layer 7: organizational context quality.
RSAC 2026 will define how the security industry thinks about AI agent governance for the next two years. The vendors exhibiting have built impressive products for the problems they can see. The problem they can’t see — whether AI agents understand your organization — is the one that will determine whether $15/user/month in governance spend actually translates to business value. That’s not a security question. It’s an enablement question. And it’s exactly the one iEnable was built to answer.
Related reading:
- Non-Human Identity Management for AI Agents: The 2026 Enterprise Guide
- AI Agent Identity Governance: The Dark Matter Problem
- The AI Agent Governance Framework Your Company Needs
- Microsoft Agent 365: The $15/User Governance Gap
- The AI Agent Kill Switch Problem
- MCP Security: The Enterprise Governance Guide