A Framework for AI Decisions I Wish I'd Had Earlier
Health insurance premiums are jumping 18% for 2026—more than double last year's increase. Your margins are compressed from every direction. Your board wants to know what you're doing about AI.
Before you approve another vendor demo, ask: what specific decision are we trying to make, and what makes it correct?
If you can't answer that precisely, you'll get perfectly functional AI that solves the wrong problem. Then you'll build validation processes that confirm those wrong solutions. Six months later, you're wondering why the claims team still works manually.
Why AI Projects Fail
Mid-market health insurers face brutal pressure. Premiums up 24% since 2019. Specialty drugs crushing margins. Competitors with better tech stealing members. Everyone's being told to "do something about AI."
So companies follow the playbook: hire consultants, create an AI strategy, pilot some tools, report adoption metrics. A year later, leadership claims 60% AI adoption while operations run exactly as they did before.
The mistake is starting with technology instead of the problem.
Start With Specifications
Pick one painful, high-volume process. Prior authorizations for standard procedures. Claims coding for routine diagnoses. Eligibility verification.
Now describe exactly what decision needs to be made and what makes it correct. Not what you want AI to do, but what business result you need.
For prior auths: "Knee MRI requests with documented conservative treatment and specific clinical indicators get approved within 2 hours with 98% accuracy against Medicare criteria."
That specificity forces clarity. You can't hide behind "improve efficiency." You've defined success before evaluating any technology.
Most companies skip this. They let vendors define the problem based on what their AI does. That's backwards. Make the AI solve your precisely-defined problem.
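A specification this precise can be encoded as a machine-checkable rule before any vendor conversation. A minimal sketch in Python, with hypothetical field names and illustrative thresholds (not actual Medicare criteria):

```python
from dataclasses import dataclass

# Hypothetical request fields; a real system would extract these from
# the clinical documentation attached to the prior-auth request.
@dataclass
class KneeMriRequest:
    conservative_treatment_weeks: int  # documented PT / medication trial
    has_clinical_indicator: bool       # e.g. locking, instability, effusion

def auto_approve(req: KneeMriRequest) -> bool:
    """Approve only when conservative treatment is documented AND a
    qualifying clinical indicator exists. The 6-week threshold is
    illustrative, not a real coverage criterion."""
    return req.conservative_treatment_weeks >= 6 and req.has_clinical_indicator
```

Writing the rule down this way is the point: if you can't express what makes a decision correct, you can't evaluate whether any vendor's system makes it correctly.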
Build vs. Buy Comes Down to Differentiation
Build when the process creates competitive advantage. If your utilization management differs from competitors because of your network or market, buying generic AI means conforming to standard industry processes—eliminating your edge.
Buy when it's commodity work. Claims adjudication for routine procedures follows the same logic everywhere. No advantage comes from proprietary systems here.
For PE operators: companies with proprietary AI solving unique challenges command premium valuations. Companies using the same vendors as everyone else don't.
The Questions That Matter
When evaluating vendors, skip the feature lists. Ask:
"Show me a client who deployed this in production with verified results." Not pilots. Production. If they can't provide verifiable references, walk away.
"When your system is wrong, what happens?" They should have documented failure modes and error rates. If they claim it's never wrong, they haven't deployed at scale.
"Can I audit the decision logic?" You can't defend what you can't explain, and regulators will demand explanations.
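Auditable decision logic usually comes down to a simple discipline: every automated decision carries a record of the specific rules that produced it. A minimal sketch of such a record, with illustrative field names:

```python
import json
from datetime import datetime, timezone

def log_decision(request_id: str, decision: str, rule_ids: list[str]) -> str:
    """Sketch of an auditable decision record: the decision, the rules
    applied, and a timestamp, serialized for an audit log. Field names
    are hypothetical."""
    record = {
        "request_id": request_id,
        "decision": decision,
        "rules_applied": rule_ids,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

If a vendor can't produce something equivalent for every decision their system makes, you can't defend those decisions to a regulator.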
Create the Forcing Function
Here's why AI gets ignored: you implement it alongside the old process. Staff can choose. They'll pick the familiar manual approach every time.
Instead: Pick one authorization category—routine imaging for specific conditions—and route it exclusively through AI. No manual backup. The old process is gone for these cases.
Your manual process already has error rates; you're just used to them. AI with comparable accuracy that works in hours instead of days creates immediate value. More importantly, it forces teams to trust the output instead of maintaining parallel workflows that defeat the purpose.
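The forcing function is easy to express as a routing rule: for the chosen category there is simply no manual queue to fall back to. A minimal sketch, with hypothetical category names:

```python
# Hypothetical category codes; the set holds the categories you have
# committed exclusively to the AI pipeline.
AI_ONLY_CATEGORIES = {"routine_imaging"}

def route(category: str) -> str:
    """No manual fallback for committed categories: the request goes
    to the AI pipeline or it stays in the normal manual queue."""
    return "ai_pipeline" if category in AI_ONLY_CATEGORIES else "manual_queue"
```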
Skip the endless pilots. If the specification is clear and the vendor has proven deployments, commit fully to one use case. You'll discover real integration problems immediately rather than after a "successful" pilot.
Measure What Matters
Don't track AI adoption. Track the operational outcome you specified initially.
For prior auth AI: measure processing time, denial rates, appeals, provider satisfaction, member complaints. These show if AI actually improves operations or just creates different problems.
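Those outcome metrics fall directly out of the decision records themselves. A minimal sketch, assuming a hypothetical feed of decision records from the prior-auth audit log:

```python
from statistics import mean

def outcome_metrics(records: list[dict]) -> dict:
    """Track the outcomes specified up front, not 'adoption': turnaround
    time, denial rate, appeal rate. Record fields are illustrative."""
    n = len(records)
    return {
        "avg_hours_to_decision": mean(r["hours_to_decision"] for r in records),
        "denial_rate": sum(r["denied"] for r in records) / n,
        "appeal_rate": sum(r["appealed"] for r in records) / n,
    }

# Hypothetical sample records for illustration.
sample = [
    {"hours_to_decision": 1.5, "denied": False, "appealed": False},
    {"hours_to_decision": 3.0, "denied": True,  "appealed": True},
    {"hours_to_decision": 0.8, "denied": False, "appealed": False},
]
```

A rising appeal rate against a flat denial rate, for instance, tells you the AI is creating different problems, not fewer ones.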
If you're a PE operator evaluating portfolio companies, ignore "adoption metrics" entirely. Ask: what specific processes work measurably better because of AI? If management can't point to concrete improvements, they have theater, not value.
Steps to Try
Pick one high-volume process that genuinely frustrates your team.
Document what a correct decision looks like—precisely.
Then decide: does this create competitive advantage or is it commodity work?
Differentiated? Build it internally with a 12-18 month timeline.
Commodity? Find vendors with proven production results. Verify independently. Commit fully to one use case.
And before you start: figure out what happens to the people currently doing this work. Without a plan for them, the AI won't get implemented regardless of how good it is.
The companies winning with AI aren't building the most sophisticated systems. They're solving precisely-defined problems with clear forcing functions and honest change management.