Turns Out AI Can't Turn Anyone Into An Expert

May 04, 2026 by Oladotun Opasina

A data scientist at a UK fintech sat down to write a marketing article with the help of GenAI. He had the tools, the prompt, the AI-generated draft. In his own words: "GenAI suggested some catchy hooks… Actually, I didn't fully understand what it was doing because I never wrote an article like that. I added random stuff to make it more 'marketing.'"

That quote — captured in a Harvard Business School field experiment published in Fortune on Friday — is the most expensive piece of qualitative data on AI workforce strategy I've seen this year. The data scientist isn't a careless employee. He's the proof that the premise underneath every "we'll use AI to flatten functional silos" announcement of 2026 is partially wrong — and the wrong part is what matters most.

What the experiment actually found

Researchers from Harvard and the Stanford Digital Economy Lab ran a controlled experiment at IG Group, a UK fintech. Three groups: web analysts who write the company's content (the experts), marketers in adjacent functions, and technology specialists with no relationship to content. Each group attempted two tasks: conceptualizing an article (outline, structure, keywords), then executing it. Half of each group had access to IG's GenAI tools.

For conceptualization, GenAI was an equalizer. Marketers and engineers, both with AI, produced outlines statistically indistinguishable from those of the experts. It's the kind of result that gets written into a strategy deck.

For execution, the result split. Marketers with AI matched the experts. Engineers with AI did not — even with the same tool access, they consistently underperformed.

The researchers gave the failure mode a name: the GenAI Wall. AI bridges adjacent expertise gaps. It cannot bridge distant ones.

Why the wall exists

Engineers didn't lose because they couldn't use the AI. They lost because they couldn't evaluate its output. Marketers shared a vocabulary with the content experts — engagement, conversion, audience targeting — and knew which AI suggestions to keep and which to rewrite. Engineers approached the task as technical documentation: prioritize brevity, eliminate "marketing spin," cut the calls to action. They removed the parts that made the content work, because no domain instinct told them those parts mattered.

Domain experts used GenAI to chart the route to a destination they already knew. Outsiders had to trust the AI for both the route and the destination. That's where things went wrong.

The constraint isn't AI capability. It's the user's distance from the domain.

What this changes for your strategy

Most boards in 2026 are operating under a quiet assumption: GenAI lets us reduce specialists by enabling generalists. The Harvard finding bounds that assumption. AI moves work between adjacent functions, not distant ones, no matter how good the tools get.

  • Separate conceptualization from execution. Mixed teams can ideate well across function boundaries with AI. Execution still needs domain experts. Treating those phases as one bucket sets up the failure pattern the experiment caught.

  • Stop measuring AI readiness by technical proficiency alone. The engineers were the most technically capable group. They still hit the wall. The variable was domain knowledge — and most enterprise AI training is teaching prompt engineering when it should be reinforcing functional fundamentals.

  • Map knowledge distances before designing AI-driven workforce flexibility. A finance analyst can probably absorb FP&A work with AI. A software engineer probably cannot become an effective sales rep no matter how good the AI gets. The distance between functions determines whether your AI talent strategy compounds.

The deeper signal

Bojinov, the HBS professor behind the study, has been arguing something else worth sitting with: AI project failure rates are running near 80 percent, nearly double the rate of typical IT projects a decade ago. His framing: this is a leadership problem, not a technology problem. Executives are deploying AI at scale without understanding where its limits are.

The companies pulling ahead next year will map their knowledge distances first. The wall is real. The question is whether you find it on your strategy deck, or on your earnings call.

Sources:

François Candelon and Iavor Bojinov, "Hitting the 'GenAI wall': Where generative AI stops working, and what it means for your talent strategy," Fortune, May 1, 2026. https://fortune.com/2026/05/01/artificial-intelligence-genai-wall-effect-conceptualization-execution-talent-strategy/

Vendraminelli, DosSantos DiSorbo, Hildebrandt, McFowland III, Karunakaran, and Bojinov, "The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders," Harvard Business School Working Paper No. 26-011, September 2025. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5462694
