The AI Risk Nobody Is Managing
Everyone is asking the wrong question about AI and jobs.
The question dominating boardrooms right now is: "Will AI replace my workforce?" It's an understandable question — Jack Dorsey just cut 40% of Block's headcount, the February jobs report came in at -92,000, and headlines are predicting a "Great Recession for white-collar workers."
But a study published last week by Anthropic researchers cuts through the noise in a way that should reframe how executives think about this. The researchers introduced a metric they call "observed exposure" — comparing what AI is theoretically capable of doing against what it's actually being used to do in professional settings, measured through real Claude usage data.
The gap is striking. For computer and math roles, AI can theoretically handle 94% of tasks. Observed in actual professional use? 33%. Office and administrative roles show the same pattern.
That gap is where the real strategic risk lives — not in the headline, but in the space between what AI can do and what it's actually doing inside your organization right now.
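For readers who want the gap as a number: it's simple arithmetic on the two exposure figures. A minimal sketch (my own illustration, not the paper's methodology — the function name and role labels are mine):

```python
def capability_adoption_gap(theoretical: float, observed: float) -> float:
    """Share of a role's tasks AI could handle but isn't yet observed handling.

    Both inputs are fractions in [0, 1]; observed can't exceed theoretical.
    """
    if not 0.0 <= observed <= theoretical <= 1.0:
        raise ValueError("require 0 <= observed <= theoretical <= 1")
    return theoretical - observed

# Figures cited above for computer and math roles: 94% theoretical, 33% observed.
gap = capability_adoption_gap(0.94, 0.33)
print(f"computer and math roles: gap = {gap:.0%}")  # prints "gap = 61%"
```

Sixty-one points of task coverage that exists on paper but not in observed workflows: that is the number the rest of this piece is about.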
Your org is running at adoption speed, not capability speed
Here's what I see consistently working with financial services and technology leadership teams at Publicis Sapient: decisions are being made at the speed of AI's potential while operations are running at the speed of AI's actual adoption. Budgets cut, teams restructured, hiring frozen — all in anticipation of productivity gains that haven't materialized yet.
The Anthropic study surfaces exactly this dynamic. The most visible early impact of AI isn't mass layoffs — it's a slowdown in hiring. Workers in AI-exposed roles have already seen a 14% drop in the job-finding rate since ChatGPT's emergence. Companies aren't replacing people; they're just not backfilling when people leave. The headcount reduction is happening quietly, through attrition, before the AI systems that were supposed to justify it are performing at scale.
That's the timing mismatch. You've thinned your team based on what AI will eventually do. The AI isn't there yet. And the institutional knowledge that walked out isn't coming back.
The uncomfortable truth about who's actually most exposed
Here's what most executives don't expect: the workers least at risk from AI are warehouse workers, mechanics, and tradespeople — roles requiring physical presence no model can replicate. The 30% of workers with zero AI exposure are largely in those jobs.
The workers most exposed are older, highly educated, and well-compensated — the lawyer, the financial analyst, the senior software developer. Computer programmers face 94% theoretical task exposure. The people you've historically paid the most to think are exactly the ones sitting in the gap between what AI can do and what it's doing today.
That's not a reason to panic. It's a reason to get specific about your planning before the gap closes.
Three questions worth answering before your next structural decision
The capability-adoption gap won't stay this wide — the researchers are explicit that current limitations are temporary. When it closes, organizations that planned ahead will be in a fundamentally different position from those that used AI as cover for cuts and then tried to rebuild.
For each high-exposure role in your organization, ask:
What percentage of this role's tasks is AI currently performing — not theoretically, but in actual observed workflows?
What would need to be true for that number to double in the next 18 months?
And if it does, who holds the contextual judgment AI can't replicate?
Those three questions won't give you a complete roadmap. But they'll tell you whether you're managing the gap — or just hoping it resolves itself.
If this is something you're working through right now, I'd welcome the conversation.
Sources: Anthropic, "Labor market impacts of AI: A new measure and early evidence" (March 2026) · Fortune (March 6, 2026) · U.S. Bureau of Labor Statistics, February 2026 Jobs Report