The security industry's biggest event is about to confirm what we've been saying: AI agent governance isn't optional anymore.
Three million AI agents are running in enterprise environments today. Fewer than half are monitored. Only 7% of organizations have governance policies specifically designed for autonomous AI systems.
Starting today, the RSA Conference 2026 will put those numbers on the main stage.
This year's RSAC (March 23–26, San Francisco) isn't just another security conference. It's the moment the industry formally acknowledges that agentic AI governance is a category — not a feature, not a nice-to-have, but an entirely new discipline that enterprises must build from scratch.
We've analyzed the full agenda. Here are the five sessions that will shape how enterprises think about AI governance for the rest of the year.
1. The CISO's Playbook for the AI Revolution
Why it matters: This session traces the shift from generative AI (tools that create content) to agentic AI (systems that take autonomous action). The distinction is critical. When an AI agent can approve expenses, modify infrastructure, or process customer requests without human oversight, the governance model changes fundamentally.
What to watch for: Expect specific recommendations on how security teams should restructure for agentic AI. The "playbook" framing signals practical implementation — not theoretical frameworks.
iEnable perspective: The playbook will almost certainly address single-platform governance. The question it won't answer: what happens when you have agents from ServiceNow, Microsoft, Salesforce, and custom-built tools all operating simultaneously? That's the cross-platform gap.
2. MCP Security: Tool Poisoning and the Protocol That Connects Everything
Why it matters: The Model Context Protocol has become the USB-C of AI — a universal connector that lets agents interact with any tool, data source, or API. Adoption is explosive: 75% of enterprise gateway vendors will integrate MCP by year's end. But security hasn't kept pace. 92% of MCP servers carry high security risk. 24% use no authentication at all.
This session will cover tool poisoning attacks — where malicious instructions embedded in documents or databases manipulate AI agents into unauthorized actions. It's the prompt injection problem, but at infrastructure scale.
What to watch for: Three confirmed production MCP breaches have already been documented (WhatsApp history exfiltration, GitHub private repo exposure, Asana cross-organization data leaks). Expect these to be referenced as proof that MCP security isn't theoretical.
iEnable perspective: MCP governance is fracturing into sub-layers: API gateways (Tray.ai, Zuplo), identity (Vouched MCP-I, Token Security), runtime enforcement (OPAQUE). What's missing is the management layer that sits above all of them — the layer that asks not just "is this agent authenticated?" but "should this agent be doing this task at all?"
3. Non-Human Identities: The Identity Dark Matter
Why it matters: Gartner's inaugural Market Guide for Guardian Agents (published February 25, 2026) called AI agents "identity dark matter" — entities that hunt the path of least resistance, exploit orphaned accounts and stale tokens, and operate in identity gaps that traditional IAM was never designed to address.
This RSAC session on trusted identity propagation for autonomous agents across SaaS ecosystems will be standing room only. Non-human identities now outnumber human identities in most enterprises, and the governance tools designed for people don't work for agents.
What to watch for: Specific architectural recommendations for agent identity lifecycle management. The Decentralized Identity Foundation's new MCP-I standard (donated by Vouched on March 5) uses DIDs and Verifiable Credentials for cryptographic agent verification. Watch whether speakers reference it.
iEnable perspective: Agent identity is the foundation layer. You can't govern what you can't identify. Our platform integrates with emerging standards like MCP-I while providing the management layer on top: not just "who is this agent?" but "what role does this agent play, what are its limits, and who's accountable when it goes wrong?"
4. Zenity: AI Agent Security Exposure Demo
Why it matters: Zenity has built Automated Incident and Detection Response (AIDR) with execution graphs that map agent behavior at runtime. They're MITRE ATLAS contributors. Their demo will show modern AI attack paths — not theoretical ones, but real enterprise attack vectors with enterprise mitigation strategies.
What to watch for: The gap between what Zenity shows (security monitoring) and what enterprises need (workforce management). Zenity can tell you an agent did something suspicious. It can't tell you whether that agent should exist, who hired it, what its SLA is, or whether it's delivering business value.
iEnable perspective: Security is necessary but not sufficient. The majority of unauthorized agent actions are internal policy violations, not external attacks (Gartner). The governance challenge isn't catching bad actors — it's managing a workforce of autonomous systems that are technically compliant but operationally chaotic.
5. Straiker: Autonomy Is a Battlefield
Why it matters: Straiker brings a red team/blue team approach to agentic AI — Ascend AI for adversarial testing and Defend AI for runtime security. Their "battlefield" framing is deliberate: they're positioning agent governance as an active defense problem, not a compliance checklist.
What to watch for: The adversarial testing methodology. If enterprises can stress-test their agents before deployment, it dramatically reduces the governance burden post-deployment. Prevention is cheaper than remediation.
iEnable perspective: Adversarial testing + cross-platform governance = a complete picture. Straiker tests individual agents. iEnable manages the agent workforce. Different layers, potentially complementary.
The Bigger Picture: Why RSAC 2026 Is the Inflection Point
Four converging forces make this RSAC different from every conference before it:
1. The governance gap is quantified. Everest Group's survey of 200+ mid-market enterprises found only 7% have agentic-specific governance policies. Gartner projects guardian agent spending will grow from less than 1% to 5–7% of agentic AI budgets by 2028 ($2.5–3.5 billion TAM). The market is massive and nearly unaddressed.
2. The regulatory pressure is real. NIST's public comment period on AI agent standards closed March 9. CISA Directive 2026-02 takes effect April 1, requiring federal agencies to phase out designated supply chain risks. The EU AI Act enforcement continues ramping. Governance isn't optional when regulators are watching.
3. The breach timeline is documented. We're past the "it could happen" phase. MCP production breaches, shadow AI incidents (223/month, doubled year-over-year), agent identity exploits — the threat model is no longer hypothetical.
4. The platform vendors can't solve it alone. ServiceNow, Microsoft, Salesforce — each has built impressive governance for their own ecosystems. But enterprises run 10+ platforms. Cross-platform governance remains the gap that no platform vendor has an incentive to close.
What to Do at RSAC
Now that you're here, make the most of it:
- Audit your agent inventory. How many AI agents are running? Across how many platforms? Who manages each one?
- Assess your governance maturity. The Gartner Market Guide identifies four evaluation criteria: agent discovery, identity management, information governance, and policy enforcement. Where do you stand?
- Map your identity gaps. Are your agents using shared API keys (45.6% of enterprises are)? Do you have agent-specific identity management, or are agents running on human credentials?
- Document your cross-platform exposure. Which agents touch multiple systems? Where are the gaps between platform-native governance tools?
RSAC 2026 will define the governance conversation for the next 12 months. The enterprises that show up prepared will lead. The ones that don't will spend the rest of the year catching up.
We're tracking every agentic AI development at RSAC and publishing analysis at ienable.ai/blog. Subscribe for real-time coverage of the sessions that matter.
Need cross-platform AI agent governance?
iEnable gives you visibility and control over your entire AI workforce — regardless of platform. No blind spots. No ecosystem lock-in.
Learn More About iEnable →Related reading: