88% of Organisations Use AI. Almost None Have Changed How Work Gets Done.
Stanford University's 2026 AI Index dropped this week — the most comprehensive annual audit of where AI actually stands, produced by independent researchers with no vendor agenda.
The headline finding sounds like good news: organisational AI adoption has reached 88%. But the number immediately beneath it tells a different story. AI agent deployment — the kind that actually automates workflows, changes decisions, and shows up on the P&L — sits in single digits across nearly all business functions.
That gap is the most important thing in the report for any executive making AI decisions right now. It means most organisations have licensed tools, rolled out copilots, and checked the AI box. Very few have changed how work actually gets done.
Where the Gains Are Real — and Where They're Not
The productivity data in the Stanford report is the most useful thing your board hasn't seen yet. AI is delivering measurable gains — 14% in customer service, 26% in software development. Those numbers are real and significant.
But the report is direct: those gains are not appearing in tasks requiring judgment.
That is not a model problem. It is an implementation problem. Organisations deploying AI into judgment-intensive workflows without redesigning around AI's actual capabilities are getting nothing. They're paying for tools whose value shows up somewhere else, or nowhere at all.
The implication for every CIO and COO: before expanding your AI deployment, map where judgment is required and where it isn't. The 26% productivity gain in software development didn't happen because developers got smarter tools. It happened because the right workflows were targeted.
The Workforce Signal Executives Are Underestimating
Stanford's workforce data is the sharpest in the report — and the most uncomfortable.
Employment among software developers aged 22 to 25 has fallen nearly 20% since 2024, even as demand for their older colleagues grows. The same pattern is appearing in customer service and other high-AI-exposure roles. Executive surveys are unambiguous: planned headcount reductions outpace recent cuts across these functions.
Stanford's conclusion: the disruption is targeted and just beginning.
This isn't a prediction. It's a measurement. And it has a direct implication most organisations haven't absorbed: the entry-level pipeline that feeds mid-level and senior roles is thinning. The experienced employees you'll need in five years are the ones you're not hiring today. The organisations getting this right are redesigning entry-level roles now, so institutional knowledge stays intact even as AI absorbs the routine work.
The Transparency Problem Nobody Is Talking About
One finding in the Stanford report deserves more executive attention than it's getting. On the Foundation Model Transparency Index — which measures how openly AI companies disclose details about their models' training data, capabilities, risks, and usage policies — average scores fell from 58 to 40 this year.
The most capable models are now the least transparent about how they work.
For every organisation making procurement decisions about AI infrastructure, this is a governance risk. You cannot audit what you cannot see. You cannot explain a decision made by a model whose training data and risk parameters are undisclosed. And as regulatory frameworks in the EU and US move toward requiring documented evidence of AI governance, the opacity of your model provider becomes your compliance exposure.
What This Report Means for Decisions You're Making Now
The 2026 Stanford AI Index doesn't tell you which tool to buy. What it tells you is this: the capabilities are here, the adoption is widespread, and the workforce disruption is already measurable. The organisations that pull ahead in the next eighteen months aren't the ones with the most AI licences — they're the ones that have identified which workflows to transform, which roles to redesign, and which model providers they can actually hold accountable.
88% say they use AI. The question worth asking this week is which side of that gap you're actually on.
Sources: Stanford HAI, "2026 AI Index Report" (April 13, 2026) · Stanford HAI, "Inside the AI Index: 12 Takeaways from the 2026 Report" (April 13, 2026)