Key Takeaways
- RSAC 2026 Innovation Sandbox features 10 finalists, each awarded a $5M investment. The competition runs Monday, March 23, in San Francisco.
- Three governance themes dominate: AI agent security (Geordie AI, Token Security, Realm Labs), identity beyond passwords (Glide Identity, Charm Security, Humanix), and AI-native code/pipeline security (ZeroPath, Crash Override, Clearly AI, Fig Security).
- Four of 10 finalists build directly on AI agents, three of them on agent governance: a category that didn’t exist at RSAC 2024.
- The pattern nobody discusses: every finalist solves for controlling AI systems. None address whether those AI systems understand the organizations they serve — the difference between a secure agent and a useful one.
- Organizations that deploy governed-but-contextless AI agents will discover compliance without competence: agents that pass every security check but make decisions that ignore how the business actually works.
RSAC 2026 Innovation Sandbox: 10 Finalists, 3 Governance Themes, and the Pattern Nobody Is Talking About
📅 March 21, 2026 ⏱ 14 min

On Monday, March 23, ten cybersecurity startups will pitch at RSAC’s Innovation Sandbox — the competition that launched CrowdStrike, Phantom, and Armis into billion-dollar trajectories. This year’s finalists tell you more about where enterprise security is heading than any analyst report. Here’s what they reveal — and what they all miss.
The 10 Finalists at a Glance
Each finalist received a $5M investment. Here’s who they are:
| Company | Focus | One-Line Summary |
|---|---|---|
| Geordie AI | AI Agent Security | Agent-native governance: discover, monitor, and control AI agents at runtime |
| Token Security | Non-Human Identity | Discover and govern every AI agent and machine identity across the enterprise |
| Realm Labs | AI Behavior Monitoring | Monitor AI inference in production; catch malfunctions before operational harm |
| Charm Security | AI-Powered Fraud Defense | Agentic AI workforce that prevents scams using behavioral psychology |
| Humanix | Social Engineering Defense | Conversational AI detecting manipulation and impersonation using cognitive psychology |
| Glide Identity | Next-Gen Authentication | Cryptographic device-level trust replacing passwords and SMS codes |
| ZeroPath | AI Code Security | AI-native SAST that finds complex vulnerabilities and business logic flaws |
| Crash Override | Supply Chain Security | CI/CD pipeline integrity with automated compliance and certificate management |
| Clearly AI | Secure Development | AI-powered threat modeling, design reviews, and risk assessment |
| Fig Security | Security Operations | Identify broken security workflows and simulate changes before deployment |
Theme 1: AI Agent Governance (The Category That Didn’t Exist in 2024)
Finalists: Geordie AI, Token Security, Realm Labs
Two years ago, “AI agent governance” wasn’t a phrase anyone used at RSAC. This year, three of ten finalists are building in this space. That’s a 0-to-30% shift in a single competition cycle.
Geordie AI: The Agent-Native Platform
Founded by leaders from Darktrace (Henry Comfort, COO Americas) and Snyk (Benji Weber), Geordie raised $6.5M from Ten Eleven Ventures and General Catalyst. Their approach: embed security controls directly into agent execution flows rather than monitoring from outside.
Key capabilities:
- Agent visibility in ~10 minutes — unified view across dev, cloud, and endpoint environments
- Beam Risk Mitigation Engine — real-time intervention during agent decision-making
- Behavioral analysis that detects deviations from established patterns across multi-agent workflows
Geordie’s bet is that agent security must be runtime, not retrospective. You can’t secure an agent by analyzing its logs after it’s already accessed your customer database.
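Geordie’s runtime-not-retrospective bet can be pictured as a policy gate sitting inside the agent’s execution path, so a disallowed action never executes in the first place. The sketch below is purely illustrative (the names `runtime_gate`, `ToolCall`, and the policy are our invention, not Geordie’s actual API):

```python
# Hypothetical sketch of runtime agent governance: a policy check that
# runs inside the execution flow, before each tool call, rather than in
# a log review afterwards. Names and policy are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    tool: str
    args: dict

class PolicyViolation(Exception):
    pass

def runtime_gate(policy: Callable[[ToolCall], bool]):
    """Wrap a tool executor so every call is vetted at decision time."""
    def wrap(execute):
        def guarded(call: ToolCall):
            if not policy(call):
                raise PolicyViolation(f"blocked: {call.tool}")
            return execute(call)
        return guarded
    return wrap

# Example policy: agents may read the CRM and update tickets, never export data.
ALLOWED = {"crm.read", "ticket.update"}

@runtime_gate(lambda call: call.tool in ALLOWED)
def execute(call: ToolCall):
    return f"ran {call.tool}"

print(execute(ToolCall("crm.read", {})))    # permitted, runs normally
try:
    execute(ToolCall("crm.export", {}))     # stopped before it ever executes
except PolicyViolation as e:
    print(e)
```

The point of the design is ordering: the check happens before the database is touched, which is exactly what log analysis after the fact cannot give you.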
Token Security: The Identity-First Approach
Token Security extends non-human identity (NHI) management to AI agents. Their thesis: AI agents are the fastest-growing identity category in the enterprise, and existing IAM tools were never designed for them.
The 91% problem is real — Okta’s 2025 report found that 91% of organizations use AI agents, but only 10% have a strategy for managing their identities. Token bridges that gap with:
- Discovery of every AI agent and machine identity
- Governance across the full lifecycle (provisioning → monitoring → deprovisioning)
- Compliance enforcement at the identity layer
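Lifecycle governance of this kind amounts to enforcing a small state machine per identity, where any transition outside the machine is a finding. A minimal sketch under a hypothetical model (none of this is Token Security’s actual API):

```python
# Illustrative non-human-identity lifecycle: provisioning -> monitoring
# -> deprovisioning, with illegal transitions rejected. Hypothetical
# schema, not any vendor's real data model.
from enum import Enum

class Stage(Enum):
    PROVISIONED = "provisioned"
    MONITORED = "monitored"
    DEPROVISIONED = "deprovisioned"

# Allowed transitions; everything else is a governance violation.
VALID = {
    Stage.PROVISIONED: {Stage.MONITORED, Stage.DEPROVISIONED},
    Stage.MONITORED: {Stage.DEPROVISIONED},
    Stage.DEPROVISIONED: set(),
}

class NonHumanIdentity:
    def __init__(self, name: str, owner: str):
        self.name, self.owner, self.stage = name, owner, Stage.PROVISIONED

    def advance(self, to: Stage):
        if to not in VALID[self.stage]:
            raise ValueError(f"{self.name}: {self.stage.value} -> {to.value} not allowed")
        self.stage = to

agent = NonHumanIdentity("billing-agent", owner="finance-platform")
agent.advance(Stage.MONITORED)
agent.advance(Stage.DEPROVISIONED)   # full lifecycle, cleanly closed out
```

Note that every identity carries an owner; orphaned agents with no accountable human are the classic NHI failure mode the discovery step exists to surface.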
Realm Labs: The Inference Monitor
While Geordie and Token focus on governance, Realm Labs focuses on detection. Their platform monitors AI behavior during inference — catching hallucinations, malfunctions, and unexpected outputs before they cascade into operational harm.
This is the “runtime observability” layer. Think of it as application performance monitoring (APM) for AI models.
What This Theme Reveals
The AI agent governance market went from theoretical to funded in under 18 months. When three Innovation Sandbox finalists independently build in the same space, the category is real. The question is no longer whether enterprises need agent governance — it’s which layer of governance matters most.
Theme 2: Identity Beyond Humans
Finalists: Glide Identity, Charm Security, Humanix
The second theme is identity — but not the traditional kind. These finalists are redefining what identity means when your adversaries use AI and your workforce includes non-human agents.
Glide Identity: Post-Password Authentication
Glide replaces passwords and SMS codes with cryptographic device-level trust. In a world where AI agents can generate phishing at scale and SIM-swap attacks are commoditized, authentication methods from 2010 aren’t sufficient.
Charm Security: AI Agents Defending Humans
Charm is the most conceptually interesting finalist. They deploy an agentic AI workforce — AI agents whose job is to prevent scams and social engineering by combining fraud expertise with behavioral psychology. It’s AI agents protecting humans from attacks that are themselves increasingly AI-generated.
Humanix: The Psychology Layer
Humanix detects social engineering using conversational AI trained in cognitive psychology. Rather than pattern-matching against known attack signatures, they identify manipulation techniques — urgency, authority exploitation, emotional pressure — regardless of the specific content.
What This Theme Reveals
Identity is expanding in two directions simultaneously. Authentication is moving beyond knowledge-based secrets toward cryptographic proof. Defense is moving from signature detection toward behavioral and psychological analysis. Both shifts are accelerated by AI agents — as both the attacker’s tool and the defender’s tool.
Theme 3: AI-Native Code and Pipeline Security
Finalists: ZeroPath, Crash Override, Clearly AI, Fig Security
The third theme: applying AI to secure the software development lifecycle. Four finalists are rebuilding security tooling with AI at the core rather than bolted on.
ZeroPath: Replacing Legacy SAST
Traditional static analysis tools generate noise. ZeroPath uses AI to find real vulnerabilities — including business logic flaws that rule-based scanners miss entirely.
Crash Override: Pipeline Integrity
Supply chain attacks (SolarWinds, Codecov, XZ Utils) proved that build pipelines are attack surfaces. Crash Override captures build provenance, proves deployment accuracy, and automates compliance — turning “did we deploy what we think we deployed?” from a hope into a cryptographic guarantee.
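The comparison at the heart of that guarantee is simple to sketch: record a digest of the artifact at build time, then refuse any deploy that does not match. This is a hypothetical two-step illustration only; production systems (e.g. SLSA-style attestations) also sign the provenance record so it can’t be tampered with itself.

```python
# Minimal "did we deploy what we built?" check: digest at build time,
# verify at deploy time. Illustrative sketch; real provenance systems
# additionally sign and attest the record.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_build(artifact: bytes) -> dict:
    """Build time: record what actually came out of the pipeline."""
    return {"sha256": digest(artifact)}

def verify_deploy(artifact: bytes, provenance: dict) -> bool:
    """Deploy time: refuse anything that doesn't match the record."""
    return digest(artifact) == provenance["sha256"]

built = b"app-v1.4.2 binary contents"
prov = record_build(built)

assert verify_deploy(built, prov)                    # untouched artifact passes
assert not verify_deploy(built + b"backdoor", prov)  # tampered artifact fails
```

A SolarWinds-style implant changes the bytes, so it changes the digest; the hope-based question becomes a mechanical check.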
Clearly AI: Automated Security Reviews
Threat modeling is the security practice everyone agrees matters and nobody does consistently. Clearly AI automates it — providing AI-powered threat models, design reviews, and risk assessments that keep pace with rapid development cycles.
Fig Security: SecOps Resilience
Fig addresses a problem security teams live with daily: broken security workflows. Detection rules that stopped working. Integrations that silently failed. Config changes that created coverage gaps. Fig identifies these failures and lets teams simulate fixes safely before deploying them.
What This Theme Reveals
The DevSecOps toolchain is being rebuilt with AI at every layer. The winners won’t be companies that add an AI feature to existing tools — they’ll be companies that reimagine what security tooling looks like when AI is a first-class citizen.
The Pattern Nobody Is Talking About
Here’s what’s striking about all ten finalists: every one of them solves for controlling AI. None solve for understanding within AI.
Geordie AI governs what agents do. Token Security governs what agents access. Realm Labs monitors what agents produce. But none of them address what agents know about the organization they serve.
This is the gap we’ve been tracking at iEnable since before RSAC made it a headline. We call it the Seventh Layer — Organizational Context Quality.
Consider the practical implication. An enterprise deploys Geordie AI to govern its fleet of autonomous agents. Every agent is discovered. Every permission is scoped. Every behavior is monitored. The security posture is excellent.
Then an agent processes a customer escalation. It follows policy perfectly. It applies the correct discount tier. It routes to the right queue. But it doesn’t know that this customer’s account is under review because of a conversation the VP of Sales had yesterday with the CEO — a conversation that happened in a meeting, not a system. The agent makes a technically correct, contextually catastrophic decision.
Governance without context produces compliant incompetence.
This isn’t a criticism of the Innovation Sandbox finalists — they’re solving urgent, real problems. But the industry is converging on the same three layers (discovery, identity, behavior) while leaving the layer that determines business value entirely unaddressed.
The vendors that will win in 2027 and beyond aren’t the ones that best control AI agents. They’re the ones that best inform them.
What This Means for Enterprise Buyers
If you’re evaluating AI agent governance tools at or after RSAC, here’s the framework:
Buy Now (Layers 1–3)
- Agent discovery and inventory — you need this immediately. You cannot govern what you cannot see. Geordie AI and Token Security both provide this.
- Identity governance for NHIs — AI agents need lifecycle management, just-in-time access, and automatic deprovisioning. Token Security leads here.
- Runtime monitoring — catching agent malfunctions during inference, not after. Realm Labs and Geordie’s Beam engine both address this.
Build Now (Layer 7)
- Organizational context infrastructure — no vendor sells this yet. Start by auditing what your agents actually know about your organization versus what they need to know. The gap is your exposure surface for contextually wrong decisions.
- Decision audit trails — not just “what did the agent do?” but “what did the agent think it knew when it decided?” This is the forensics capability nobody has built.
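One way to picture such a trail is a record that snapshots the context the agent believed at decision time, alongside where that belief came from. The schema below is purely illustrative (as noted above, no vendor ships this today):

```python
# Hypothetical decision-audit record: capture not just the action, but
# the organizational context the agent believed it had when it decided.
# Schema is illustrative, not any vendor's product.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent: str
    action: str
    inputs: dict                  # what the agent was asked to do
    context_snapshot: dict        # what it believed about the org at decision time
    context_sources: list = field(default_factory=list)  # provenance of those beliefs
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    agent="support-escalation-agent",
    action="apply_discount_tier_2",
    inputs={"ticket": "ESC-4411", "customer": "acme-corp"},
    context_snapshot={"account_status": "active"},  # stale: account was under review
    context_sources=["crm_export_2026-03-20"],
)
# The forensics question the record answers: the agent acted on "active"
# because its newest context source predates the review decision.
print(asdict(record)["context_snapshot"])
```

With this record, a post-incident review can separate “the agent misbehaved” from “the agent behaved correctly on stale context,” which are very different problems to fix.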
Watch (2026–2027)
- Convergence — expect Geordie, Token, and Realm Labs to expand into each other’s capabilities within 12 months. The standalone agent governance category will consolidate.
- Platform plays — Microsoft (Agent 365), CrowdStrike, and Palo Alto will all ship agent governance features. The question is whether purpose-built startups or platform extensions win.
- The context layer — whoever cracks organizational context for AI agents will own the category after governance commoditizes.
RSAC Monday: What to Watch
The Innovation Sandbox competition runs Monday, March 23. Here’s what to listen for:
- Geordie AI vs Token Security — both target AI agent governance, but from different angles (agent-native security vs identity-first). The judges’ questions will reveal which framing resonates with enterprise buyers.
- Realm Labs’ positioning — inference monitoring could be a standalone category or a feature that gets absorbed into broader agent governance platforms. How they pitch it matters.
- The “agentic” count — how many times “agentic AI” appears across all 10 pitches. Our over/under: 47 times. The word saturation tells you whether the category has reached peak hype or is still climbing.
- What nobody asks — listen for questions about organizational context, business alignment, or decision quality. Their absence is as informative as their presence.
The Bigger Picture
RSAC 2026 Innovation Sandbox is a snapshot of where cybersecurity’s smartest founders think the market is going. Their collective bet: AI agent governance is the next major enterprise security category, and it will be as important to the 2026–2030 era as cloud security was to 2015–2020.
They’re right about the category. They’re right about the urgency. And they’re all building in the same three layers while leaving the most strategically important layer — the one that determines whether governed agents actually help businesses — wide open.
The race to govern AI agents has started. The race to inform them hasn’t begun. For a broader view of the vendor landscape and what enterprises should expect, see our RSAC 2026 AI agent governance preview.
iEnable helps enterprises build the organizational context layer that AI agent governance vendors leave unaddressed. Learn more about the Seven-Layer Framework →
What is the RSAC 2026 Innovation Sandbox?
The RSAC Innovation Sandbox is an annual cybersecurity startup competition held at the RSA Conference. In 2026, ten finalists were selected, each receiving a $5M investment. The competition takes place on Monday, March 23, 2026 in San Francisco. Past winners include CrowdStrike, Phantom (acquired by Splunk), and Armis.
Who are the RSAC 2026 Innovation Sandbox finalists?
The 10 finalists are: Charm Security, Clearly AI, Crash Override, Fig Security, Geordie AI, Glide Identity, Humanix, Realm Labs, Token Security, and ZeroPath. Four of the ten (Geordie AI, Token Security, Realm Labs, and Charm Security) focus on AI agent security or agentic AI applications.
What is AI agent governance?
AI agent governance is the practice of discovering, monitoring, authenticating, and controlling autonomous AI agents deployed across enterprise systems. It encompasses agent inventory and discovery, identity lifecycle management for non-human identities, runtime behavior monitoring, and policy enforcement. The category emerged in 2025–2026 as enterprises began deploying AI agents capable of autonomous decision-making, tool use, and multi-step execution.
What is the difference between Geordie AI and Token Security?
Geordie AI takes an agent-native security approach, embedding controls directly into AI agent execution flows with their Beam Risk Mitigation Engine for real-time intervention. Token Security takes an identity-first approach, extending non-human identity (NHI) management to AI agents with discovery, governance, and compliance enforcement at the identity layer. Geordie focuses on runtime behavior; Token focuses on identity lifecycle.