Playing It Safe With AI Is No Longer Safe
The US government just declared that not deploying AI is a systemic risk. Not a missed opportunity. Not a competitive disadvantage. A risk — in the same category as deploying it badly.
That changes everything for financial services.
Most institutions have managed AI adoption as a compliance question for the past three years. Move carefully. Document everything. Wait for regulatory clarity. It was defensible — and it quietly became the default.
That default just became a liability.
On March 23, the US Treasury Department and the Financial Stability Oversight Council launched the AI Innovation Series — a public-private initiative bringing together financial institutions, technology firms, and regulators to accelerate responsible AI adoption. The signal came directly from Treasury Secretary Scott Bessent: the department is moving "from a posture focused on constraint toward one that recognizes failure to adopt productivity-enhancing technology as its own risk."
From "Tread Carefully" to "Move or Fall Behind"
This isn't a minor policy update. It's a reframe of the entire risk equation.
Institutions moving cautiously — waiting for frameworks before committing to AI in credit underwriting, fraud detection, or operational risk — were operating rationally. FSOC's 2023 report flagged AI as a financial stability vulnerability for the first time. The message was: tread carefully.
The 2026 message is different. As Deputy Assistant Secretary Christina Skinner put it: "When institutions cannot deploy tools that improve fraud detection, credit allocation, and operational resilience, the system becomes less efficient and less secure." Non-deployment now has a name and a place on the FSOC's radar.
The series' four roundtables will focus on identifying high-value use cases and building governance frameworks for scaling AI without compromising safety and soundness. The institutions at the table will shape what responsible deployment looks like. The ones on the sidelines will be handed the results.
A Green Light Is Not a Free Pass
The same announcement that removes constraint-based friction raises the bar on what comes next. Treasury's Chief AI Officer Paras Malik was clear: "disciplined implementation will determine its impact." This isn't a green light to deploy fast and govern later. Regulators will be watching both the speed of deployment and the rigor of the governance behind it.
In my work with financial services clients at Publicis Sapient, the gap isn't appetite — every institution has a roadmap. The gap is the foundational work: data maturity, model governance, and clear accountability for AI-driven decisions.
With a global financial payments client, the first conversation was never about AI platforms. It was about whether their data was clean, accessible, and auditable enough to meet the governance standards that would follow. Asking that question before the build, not after, is what separates institutions that scale AI successfully from those generating the failure stories regulators cite.
Three Questions Worth Answering Before the Frameworks Arrive
Before the FSOC roundtables produce formal guidance, financial services leaders should be able to answer:
Are we at the table? The institutions shaping these roundtables will have a material advantage over those reacting to the output. If your institution isn't represented, someone else is defining what responsible AI deployment looks like for your industry.
Which of our highest-value use cases — fraud detection, credit underwriting, operational risk — are blocked by data quality issues rather than model limitations? That's where foundational investment needs to go first.
Do we have documented accountability for AI-driven decisions that would survive regulatory scrutiny today — not in eighteen months, but today?
The regulatory window isn't closing; it's opening, and faster than most institutions' governance infrastructure can support. Do the foundational work now and you'll be ahead of the frameworks rather than scrambling to catch up.
I've had this exact conversation with financial services leadership teams at Publicis Sapient, and the starting point is always the same: not which AI to deploy, but whether the foundation is ready. If your team is navigating this, it's a conversation worth having now, before the frameworks land.
Sources: US Treasury Department, "Treasury Launches the Artificial Intelligence (AI) Innovation Series" (March 23, 2026) · ABA Banking Journal, "FSOC, Treasury Department launch effort to support financial sector AI adoption" (March 24, 2026) · PYMNTS, "Treasury Department Targets Regulatory Friction to Scale Bank AI Adoption" (March 24, 2026)