Dotun Opasina

  • About
  • AI Projects
  • Practical Datascience
  • Trainings
  • AI Mastery Program

Your AI Agents Have the Keys to Your Business. Who's Checking What They Do With Them?

April 08, 2026 by Oladotun Opasina

97% of enterprise leaders expect a major AI agent security incident this year. Nearly half within six months. The average breach now costs $4.9 million — and incidents involving AI agents with unchecked access cost $670,000 more because they leave no log trail. The damage compounds silently before anyone notices.

That's not a technology problem. It's a business one.

AI agents are running inside your organisation right now — executing transactions, accessing customer data, managing workflows — using the same credentials your systems already trust. They don't look like a threat. They look like legitimate activity. And in most organisations, nobody is checking what they actually do once they're running.

87% of enterprise leaders agree AI agents pose a greater insider threat risk than human employees. Yet only 6% of security budgets cover this risk. That gap isn't a technology failure. It's a leadership one.

The Access You Granted Is the Risk

Forget sophisticated attacks. The real exposure is simpler and closer to home.

When an agent is given broad access — to your financial systems, customer records, operational data — it operates with the authority you gave it. A single instruction embedded in a document or email is enough to redirect what it does with that authority. No breach required. Just text, interpreted as instruction, executed with credentials your organisation willingly granted.

This isn't theoretical. In March 2026, security research firm CodeWall pointed an autonomous agent at a major consulting firm's internal AI platform — no credentials, no insider knowledge. Within two hours it had full read and write access to the production database: 46.5 million internal messages, 728,000 files, 57,000 employee accounts. The vulnerability wasn't exotic — a basic flaw their own scanners had missed for two years. Weeks later, the same agent found a separate firm's AI platform with 3.17 trillion rows of compensation data and M&A intelligence on the public internet — zero authentication, full read-write access.

Both were responsibly disclosed. But the point stands: if organisations with world-class security investment are shipping AI platforms with these gaps, the question isn't whether a vulnerability exists in yours — it's whether anyone is looking.

Most aren't. The average enterprise manages 37 deployed agents. More than half run without security oversight or logging. Only 14.4% went live with full approval — the rest deployed by individual teams, connecting to systems nobody mapped.

You cannot govern what you cannot see.

The Tools Exist. The Decisions Don't.

On April 2, Microsoft released the Agent Governance Toolkit as open source — a runtime layer that intercepts every agent action before execution, enforces policy, assigns trust scores to agent identities, and includes an emergency kill switch. Free, works with existing frameworks, maps to EU AI Act and SOC2.

The infrastructure problem has a solution. But the toolkit cannot decide what your agents are allowed to do, which decisions need human approval, or what counts as a violation. Those rules have to come from leadership — and in most organisations, nobody has written them.

Three Questions That Belong on Your Agenda This Week

Colorado's AI Act is enforceable in June 2026. The EU AI Act's high-risk obligations land in August. Both require operational proof of governance — not intentions, not policy documents, but evidence.

What agents are running in our organisation, who authorised them, and what can they access? If the answer isn't immediate and complete, that's where to start.

Which decisions are our agents making autonomously that should require human approval? Financial transactions, data access, customer-facing actions — these need explicit rules before something goes wrong, not after.

Do we have a kill switch, and who controls it? If an agent behaves unexpectedly, the response needs to be seconds, not a meeting.

Your technical team can implement the controls. The decisions those controls enforce have to come from you.

The only real variable is whether you're the organisation that saw it coming.

Sources: 2026 Agentic AI Security Report, Arkose Labs · Microsoft Open Source Blog, "Introducing the Agent Governance Toolkit" (April 2, 2026) · CodeWall Security Research (March 2026) · IBM Cost of a Data Breach Report 2026 · OWASP Top 10 for Agentic Applications 2026

April 08, 2026 /Oladotun Opasina
  • Newer
  • Older

Powered by Squarespace