🏷️ Strategy
The 97% Problem: Why Your Enterprise Has AI Everywhere and AI Expertise Nowhere
📅 March 9, 2026 ⏱ 16 min

90% of organizations use AI regularly. Only 3% of workers qualify as proficient. The largest skills investment in corporate history is producing the smallest competency gains. Here’s why.
Here’s the number that should reframe every enterprise AI strategy: 97% of the workforce are AI novices or experimenters.
It’s not that 97% haven’t tried AI; nearly everyone has. But 97% are using it poorly — replacing Google searches, drafting emails, summarizing documents — in ways that produce little measurable business value.
This isn’t an estimate from a pessimistic analyst. It’s the central finding of Section’s January 2026 AI Proficiency Report, which classified enterprise workers into proficiency tiers:
- 28% are AI novices — minimal or no meaningful usage
- 70% are AI experimenters — surface-level, occasional use
- ~2% are practitioners — regular, workflow-integrated usage
- <1% are experts — transformative, value-creating application
The industry spent the last two years solving access. Licenses were purchased. Copilots were deployed. ChatGPT subscriptions were distributed. The access problem is solved — 90% of organizations now use AI regularly, according to McKinsey.
The proficiency problem hasn’t moved.
The Training Paradox
Enterprises haven’t ignored the skills gap. They’ve thrown money at it.
- $1,200 per employee annually on AI upskilling (CareerTrainer.ai, 2026 Corporate Training Statistics)
- 68% of enterprises have established dedicated AI training programs
- 73% of Fortune 500 companies mandate AI training for all employees
- 75% of executives plan to increase AI training budgets
The investment is massive. The results are devastating.
According to Section’s report, employees who have completed AI training score just 40 out of 100 on proficiency assessments. Training moves the needle from “novice” to “slightly less novice.” It doesn’t produce the workflow transformation that generates ROI. The core problem: most programs treat upskilling as a one-time event rather than embedding it in daily workflows. Generic AI training has a 94% abandonment rate within 30 days because it fails to connect AI capability to each employee’s specific role and context.
This is what we call the Proficiency Paradox: the more organizations invest in AI training, the more clearly they demonstrate that training isn’t the bottleneck.
Consider the evidence:
- PwC’s 29th Global CEO Survey (2026): 56% of CEOs say they’ve realized neither revenue nor cost benefits from AI. Only 14% of workers use GenAI daily at work.
- ManpowerGroup’s 2026 Global Talent Barometer: AI usage rose 13% to 45% of workers — but confidence in technology dropped 18%. Over 56% report no recent training, and 57% lack mentorship opportunities.
- CoSchedule’s 2026 Marketing Report: Despite near-universal AI adoption, only 3% of marketers identify as AI experts. 79% believe AI improved their results. But improved from what baseline, and by how much?
The pattern is consistent: usage up, confidence down, proficiency stagnant. Something fundamental is broken in the model.
The Executive Perception Gap
The proficiency crisis has a compounding factor that makes it nearly invisible to leadership: executives experience a fundamentally different AI reality than their workforce.
Section’s data reveals a structural awareness gap:
| Metric | C-Suite Belief | Individual Contributor Reality |
|---|---|---|
| Clear, actionable AI policy | 81% | 53% |
| Tools available with clear access | 80% | ~50% |
| Policies enforced and strategic | 71% | <50% |
| Received AI training | 81% | 27% |
| Clear access to AI tools | 80% | 32% |
Executives use AI daily. They have training, access, and support. Their experience of AI is genuinely positive. So when they look at company-wide adoption metrics — license utilization, weekly active users, employee survey results — they see progress.
They’re measuring the wrong things.
Weekly active users don’t measure proficiency. License utilization doesn’t measure value. Course completion doesn’t measure capability. The metrics that reach the board are activity metrics masquerading as impact metrics.
Meanwhile, the workers closest to repetitive, automatable processes — the people who would benefit most from AI proficiency — are the least supported in achieving it.
Why Training Fails: The Missing Context Layer
The standard enterprise response to the proficiency gap follows a predictable sequence:
1. Buy AI tools (licenses, subscriptions, platform access)
2. Train on AI tools (prompt engineering, compliance, feature tutorials)
3. Measure AI usage (active users, login frequency, feature adoption)
4. Wonder why ROI doesn’t materialize
The flaw is in step two. Enterprise AI training teaches workers how to use AI tools in general. It doesn’t teach them how to use AI tools for their specific organizational context.
A marketing analyst who completes a Copilot training course learns how to write prompts, summarize documents, and generate content. What they don’t learn — because the training can’t teach it and the tool doesn’t know it — is:
- How your company defines “qualified lead” versus your industry’s definition
- Which brand voice guidelines apply to Q1 campaigns versus product launches
- Why historical campaign performance data from 2024 isn’t comparable to 2025 (because you restructured territories)
- What your CFO actually means when they ask for “pipeline velocity” (hint: it’s not the standard definition)
This is organizational context. It’s the institutional knowledge that makes general AI capability specifically useful. And no amount of prompt engineering training can substitute for it.
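To make that concrete, here is a minimal sketch, in Python, of the same task issued with and without an organizational-context preamble. Every definition, guideline, and caveat in `ORG_CONTEXT` is a hypothetical stand-in for real institutional knowledge, and either prompt string would be passed to whatever model the organization actually uses.

```python
# A minimal sketch: the same task, with and without organizational context.
# All definitions below are hypothetical placeholders for real institutional knowledge.

ORG_CONTEXT = """
Definitions:
- "Qualified lead": a contact with budget authority who attended a product demo
  in the last 90 days (narrower than the industry-standard MQL definition).
- "Pipeline velocity": as the CFO uses it, days from demo to signed contract,
  not the textbook formula.
Guidelines:
- Q1 campaign copy follows the 2026 brand voice guide, not the product-launch voice.
Caveats:
- 2024 campaign performance is not comparable to 2025: territories were restructured.
"""

TASK = "Draft a one-page analysis of Q1 lead quality for the CFO."

# Without context, the model falls back on generic industry definitions.
generic_prompt = TASK

# With context, the model works from this organization's actual definitions.
grounded_prompt = f"{ORG_CONTEXT}\n\n{TASK}"

# The difference in output quality comes from the preamble, not the model.
print(grounded_prompt)
```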
This is why the Context Gradient maps so precisely onto the proficiency crisis. Tasks requiring zero organizational context — meeting summaries, email drafting, web search — show the highest adoption and the highest satisfaction. Workers can do these immediately because the AI tool needs nothing from the organization to perform them.
Tasks requiring deep organizational context — strategic analysis, competitive intelligence, financial planning, customer segmentation — show the lowest adoption and the lowest satisfaction. Workers can’t become proficient at these tasks because the AI tools they’re using don’t have access to the organizational knowledge those tasks require.
The proficiency gap is a context gap measured in human behavior.
The Confidence Crisis
ManpowerGroup’s 2026 findings reveal something that training alone cannot fix: a crisis of confidence that’s actively impeding adoption.
- AI usage rose to 45% of workers — but confidence dropped 18%
- 43% fear job automation within two years (up 5% from 2025)
- 63% report experiencing burnout from stress and workloads
- 64% are staying with current employers seeking stability
When workers use AI tools that don’t understand their business context, they get mediocre results. Mediocre results breed justified skepticism. Justified skepticism becomes reluctance. Reluctance becomes the organizational inertia that stalls every AI initiative.
This isn’t technophobia. It’s a rational response to a tool that consistently fails to deliver on its promise in their specific work context. A marketing manager who asks Copilot to draft a competitive analysis and receives generic industry talking points isn’t going to invest time becoming “proficient” with Copilot. They’re going to do the analysis manually and tell their colleagues that AI isn’t ready for serious work.
ManpowerGroup’s 2026 Talent Shortage Survey found that AI literacy is now the hardest skill to find globally, surpassing traditional IT and engineering for the first time. But as they note, this isn’t about technical AI skills — it’s about “literacy and confidence in using AI to support existing roles.”
The skill that’s hardest to find is the skill that requires organizational context to develop.
The Data Foundation Collapse
Semarchy’s 2026 survey of 1,000 C-level executives across the US, UK, and France adds another dimension to the crisis:
- 51% say data management is their single most pressing AI challenge — surpassing cost and talent
- 51% are implementing AI without Master Data Management foundations
- 38% aren’t enforcing data quality standards
- 83% acknowledge their data skills are holding back AI potential
- 82% say their data strategy is insufficient
And the most devastating finding: only 7% of CDOs and 18% of CIOs are seen as playing a leading role in their organization’s AI strategy.
The people who understand data — the raw material of organizational context — are systematically excluded from AI strategy. The people setting AI strategy — typically CEOs and business unit leaders — don’t understand data foundations.
This mirrors the broader pattern we’ve documented: enterprise AI investment concentrates on the technology layer while starving the organizational context layer. The 70/5/<1 budget split holds: ~70% on compute and models, ~5% on integration, less than 1% on organizational context.
The Proficiency Gradient
To understand why some workers achieve proficiency and most don’t, we need to map proficiency against organizational context requirements — just as the Context Gradient maps tool usage against context depth.
Level 1: Zero-Context Tasks (High proficiency achievable)
- Email drafting and editing
- Meeting transcription and summaries
- Web search replacement
- Basic document formatting
Workers become proficient at these tasks quickly because AI needs nothing from the organization. The tool’s general training is sufficient. This is the 70% of workers who are “experimenters” — they’ve achieved proficiency at tasks that don’t generate business value.
Level 2: Surface-Context Tasks (Moderate proficiency achievable)
- Content creation from templates
- Data extraction from standard reports
- Customer communication from scripts
- Process documentation
Workers can achieve moderate proficiency when basic organizational templates and guidelines are available. Most training programs target this level. Some workers break through; most plateau.
Level 3: Structural-Context Tasks (Low proficiency achievable)
- Competitive analysis using company positioning
- Financial analysis with organizational definitions
- Sales enablement with deal context
- Product marketing with differentiation framework
Workers struggle here because AI tools lack access to the organizational knowledge that makes these tasks meaningful. A well-trained worker with a context-poor AI tool produces worse results than an untrained worker with deep institutional knowledge.
Level 4: Institutional-Context Tasks (Proficiency nearly unachievable)
- Strategic planning with historical context
- Customer segmentation with relationship history
- Risk assessment with organizational risk appetite
- Innovation evaluation against corporate capabilities
Workers cannot become proficient at these tasks through any amount of training because the AI tools fundamentally lack the organizational context required. The 3.3% paid adoption rate for Copilot and the 89% production failure rate for agents are different measurements of the same phenomenon.
The RevSure Signal
RevSure’s 2026 State of Agentic AI in B2B GTM study (306 B2B leaders, March 2026) captures the proficiency-context connection in a single finding:
96% of leaders believe AI agents with complete funnel visibility would significantly improve execution.
Not better models. Not more training. Complete funnel visibility — which is organizational context by another name. Leaders intuitively understand that AI effectiveness depends on organizational knowledge, even as their budgets funnel toward tool subscriptions and training programs.
And yet: 47% cite lead quality and data reliability as primary barriers. The data their AI agents need doesn’t exist, isn’t accessible, or isn’t trustworthy. No amount of prompt engineering workshops can compensate for data infrastructure that AI tools can’t reliably use.
What Actually Closes the Proficiency Gap
The enterprises achieving meaningful AI proficiency — the 3% — share a common characteristic that has nothing to do with training budgets or tool selection.
They’ve invested in organizational context before workforce enablement.
Section’s data confirms this: workers whose managers actively expect and model AI usage are 2.6x more proficient than those without that expectation. Companies with formal AI strategies produce 1.6x higher proficiency than those without.
But the deepest insight is what drives these interventions: they all require organizational context to function. A manager who expects AI usage must know what tasks AI should be used for in their specific team’s workflow. A formal AI strategy must articulate which organizational knowledge AI tools need access to.
The path to closing the 97% gap:
1. Context Before Training
Before launching another round of AI workshops, audit what your AI tools actually know about your organization. Map the gap between what Copilot/ChatGPT/Glean can access and what workers need to do their jobs. The size of that gap predicts the ceiling of your training investment.
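One way to run that audit is a plain inventory: the context-dependent tasks a role performs, the organizational knowledge each task requires, and whether the deployed tool can actually reach that knowledge. The sketch below is illustrative only; the tasks, sources, and accessibility flags are hypothetical placeholders for what a real workflow analysis would surface.

```python
# A hypothetical context-gap audit: which knowledge sources does each
# context-dependent task need, and which can the deployed AI tool access?

TASKS = {
    "competitive analysis": ["positioning docs", "win/loss notes", "pricing history"],
    "pipeline review": ["CRM definitions", "territory map", "deal stage criteria"],
    "meeting summary": [],  # zero-context task: nothing organizational required
}

# Sources the deployed tool can actually read today (hypothetical).
ACCESSIBLE = {"CRM definitions", "positioning docs"}

def coverage(required: list[str]) -> float:
    """Fraction of required sources the tool can reach (1.0 when none are needed)."""
    if not required:
        return 1.0
    return sum(src in ACCESSIBLE for src in required) / len(required)

for task, required in TASKS.items():
    print(f"{task:25s} context coverage: {coverage(required):.0%}")
# The ceiling of training ROI for each task tracks these coverage numbers.
```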
2. Organizational Context as Infrastructure
Treat organizational context — how your company defines terms, makes decisions, evaluates performance, serves customers — as infrastructure, not folklore. This is the context engineering that transforms general-purpose AI into organizationally specific capability.
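In practice, “infrastructure” means these definitions live in a versioned, owned, machine-readable store that any AI tool can query, rather than in people’s heads. A minimal sketch, with invented field names and a single hypothetical entry:

```python
# A minimal sketch of a business-definition registry treated as infrastructure:
# versioned, owned, machine-readable, and retrievable by any AI tool or agent.
# Field names and the example entry are hypothetical.

import json
from dataclasses import dataclass, asdict

@dataclass
class BusinessTerm:
    term: str
    definition: str       # how this organization defines it
    differs_from: str     # where it diverges from the industry-standard meaning
    owner: str            # who maintains the definition
    version: str

REGISTRY = [
    BusinessTerm(
        term="qualified lead",
        definition="Contact with budget authority who attended a demo in the last 90 days.",
        differs_from="Broader MQL definition used by most marketing benchmarks.",
        owner="revenue-operations",
        version="2026-01",
    ),
]

# Serialized, this becomes retrievable context for any tool, prompt, or agent.
print(json.dumps([asdict(t) for t in REGISTRY], indent=2))
```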
3. Proficiency Metrics That Measure Value, Not Activity
Replace weekly active users with value-based metrics: time saved on context-dependent tasks, decision quality improvement, workflow redesign completion. If your AI proficiency metrics don’t distinguish between email drafting and strategic analysis, they’re measuring the wrong thing.
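As an illustration of the difference, the sketch below scores the same hypothetical usage log two ways; only the value metric distinguishes a structural-context task from an email draft.

```python
# Hypothetical usage log: the same events scored two ways.
# An activity metric counts users; a value metric weighs time saved
# on context-dependent work.

usage_log = [
    {"user": "a", "task": "email draft",          "context_level": 1, "minutes_saved": 5},
    {"user": "a", "task": "meeting summary",      "context_level": 1, "minutes_saved": 10},
    {"user": "b", "task": "competitive analysis", "context_level": 3, "minutes_saved": 90},
]

# Activity metric: weekly active users (what typically reaches the board).
weekly_active_users = len({e["user"] for e in usage_log})

# Value metric: time saved on tasks that require organizational context (level 3+).
value_minutes = sum(e["minutes_saved"] for e in usage_log if e["context_level"] >= 3)

print(f"Weekly active users:            {weekly_active_users}")
print(f"Minutes saved on context tasks: {value_minutes}")
```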
4. Bridge the Executive Perception Gap
Create mechanisms for C-suite leaders to experience AI through their workforce’s eyes — not through their own privileged-access, well-supported experience. Section’s 30-point gap between executive and contributor perception is the single largest obstacle to investment correction.
5. Invest in the Right Layer
The proficiency gap won’t close through more licenses, better models, or expanded training. It will close when AI tools understand enough about your organization to make workers genuinely productive on the tasks that generate business value.
The Real Cost of the 97%
Every enterprise with AI tools deployed but workers struggling to use them effectively is paying what amounts to a context tax: the ongoing cost of AI tools that lack the organizational knowledge to be useful, training programs that can’t compensate for that deficiency, and a workforce that rationally disengages from technology that consistently disappoints.
PwC quantifies part of this: 56% of CEOs report no revenue or cost benefits from AI. Not “insufficient benefits.” Zero measurable impact.
The $1,200 per employee in annual training spend, multiplied across the enterprise, multiplied by the years the proficiency gap persists, is a compounding investment in the wrong layer of the problem.
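For a rough sense of scale, assuming a hypothetical 10,000-person enterprise and a three-year gap, the cited $1,200 figure compounds quickly:

```python
# Back-of-the-envelope scale of the training spend, using hypothetical headcount.
spend_per_employee = 1_200   # USD per year (CareerTrainer.ai figure cited above)
employees = 10_000           # hypothetical enterprise headcount
years_gap_persists = 3       # hypothetical duration of the proficiency gap

total = spend_per_employee * employees * years_gap_persists
print(f"${total:,}")  # $36,000,000 spent on the wrong layer of the problem
```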
The 3% who achieve proficiency aren’t smarter, better trained, or using superior tools. They’re working in organizations where AI has access to the context it needs to be genuinely useful.
The 97% problem isn’t a training crisis. It’s a context crisis measured in lost human potential.
Frequently Asked Questions
What is the 97% problem in enterprise AI? The 97% problem refers to the finding that 97% of enterprise workers are AI novices or experimenters — using AI tools at a surface level that produces little measurable business value, despite near-universal access. Only ~3% achieve meaningful proficiency that drives ROI.
Why doesn’t AI training close the proficiency gap? AI training teaches workers how to use tools in general. It cannot teach workers how to apply AI to their specific organizational context — the institutional knowledge, business definitions, historical patterns, and decision frameworks that make AI outputs relevant. Workers plateau at surface-level proficiency because the tools themselves lack organizational context.
How does the 97% problem relate to Copilot’s 3.3% adoption rate? The 97% proficiency gap and the 3.3% Copilot paid adoption rate are different measurements of the same root cause: AI tools that lack organizational context cannot deliver enough value to justify either sustained usage or purchase. Workers who can’t get useful results from AI tools don’t become proficient, and enterprises that see low proficiency don’t expand AI investments.
What is the Proficiency Paradox? The Proficiency Paradox describes the counterintuitive finding that increased AI training investment doesn’t proportionally increase AI proficiency. Organizations that invest heavily in AI training see marginal proficiency gains because training teaches tool mechanics, not organizational context application. The bottleneck is context, not curriculum.
How does organizational context affect AI proficiency? Organizational context determines the ceiling of AI proficiency. Tasks requiring no organizational context (email drafting, meeting summaries) show high proficiency regardless of training. Tasks requiring deep organizational context (strategic analysis, competitive intelligence) show near-zero proficiency regardless of training — because AI tools lack access to the institutional knowledge those tasks require.
What should enterprises do instead of expanding AI training programs? Rather than expanding training programs, enterprises should invest in organizational context engineering — making institutional knowledge accessible to AI tools so workers get useful, specific results. This includes structured knowledge bases, business definition libraries, decision framework documentation, and organizational context APIs. When AI tools understand the business, workers naturally become proficient because AI outputs become genuinely useful.
How is the 97% problem related to the 89% agent production failure rate? Both the 97% proficiency gap and the 89% agent production failure rate originate from the same root cause: absence of organizational context. Agents fail to reach production because they lack the business knowledge to operate reliably. Workers fail to achieve proficiency because AI tools lack the business knowledge to produce useful results. Same gap, different symptoms.
The proficiency gap data is drawn from Section’s January 2026 AI Proficiency Report, PwC’s 29th Global CEO Survey, ManpowerGroup’s 2026 Global Talent Barometer, CoSchedule’s 2026 Marketing Report, and Semarchy’s 2026 AI Data Management Report. Enterprise context engineering capabilities described reflect iEnable’s approach to organizational AI enablement.