Unified AI adoption standards through industry collaboration

AI is moving fast. Regulation? Not so much. So the financial industry is doing what it always does when things get too complex: it builds its own rules. Banks and big tech companies are now collaborating to develop unified, open-source standards for using AI. These standards matter because, without them, the industry ends up with a mess of inconsistent practices that slow adoption and increase risk.

This isn’t the industry’s first time dealing with this kind of gap. Citi already led a successful effort to rationalize protocols for cloud infrastructure, things like compliance and security controls, when public cloud started to scale. That effort created reusable frameworks that thousands of institutions still rely on today. Now, with AI presenting an entirely new class of challenges, it’s the same story: don’t wait for regulators; build something that actually works at scale, now.

FINOS (the Fintech Open Source Foundation) is at the center of this. In October 2023, it released the Common Cloud Controls, open-source standards designed with input from both banks and major tech providers. That framework standardized how firms define and manage risk in cloud systems. The template is proven, and now it’s being adapted to AI.

This kind of collaboration allows C-suite leaders to move faster without increasing risk. Proprietary solutions may feel easier in the short term, but in a regulated space like financial services, fragmentation is operational debt. Open industry standards reduce uncertainty, improve interoperability, and let teams scale AI solutions that won’t need rework when formal regulation eventually arrives.

Executives who treat compliance as a last step, or who wait for government clarity, are going to fall behind. But leaders who engage in building and adopting these open standards can stay ahead of disruption and bring trusted AI solutions to market faster. The goal is to align technology growth with practical safeguards that scale.

Heightened cybersecurity emphasis and compliance in AI adoption

As AI tools are integrated into financial operations, the main concern is security. Bad actors don’t wait; they adapt faster than regulation. That’s why the U.S. Department of the Treasury stepped in with a March 2024 report calling out the cyber risks tied to AI and reminding financial institutions to stay aligned with existing regulatory frameworks.

The message is clear: no matter how advanced the technology gets, the responsibility to protect data, systems, and compliance obligations stays the same. AI doesn’t replace the need for governance; it amplifies it. Areas like data privacy, transaction monitoring, and operational risk don’t vanish with automation; they become more complex, and non-compliance becomes more costly.

What matters here for executives is this: most of what’s needed to manage AI risk already exists in your organization. The frameworks, controls, and policies built for previous technologies, like cloud and digital onboarding, can be adapted. What’s important now is speed and intent. AI systems need to be tested not only for performance but also for reliability under regulatory expectations. That means stress-testing them against the same rules you apply to human decision-making and legacy software.

The Treasury isn’t creating new rules yet, but it’s watching closely, and it wants institutions to act with discipline even in the absence of sweeping new laws. An earlier Treasury initiative, the Cloud Services Steering Group announced in May 2023, created common language and risk roadmaps for migrating financial systems to the cloud. The same disciplined approach is now being applied to AI deployments.

C-suite leaders should not underestimate the signaling in these moves. If your AI strategy skips over resilience, oversight, or auditability, you’re opening the door to legal exposure and operational blowback. Taking compliance seriously, on the other hand, gives your AI programs the foundation they need to operate at scale without derailment.

Political stance on AI regulation and its operational implications

Right now, the U.S. federal government is taking a passive role when it comes to AI regulation. Under the Trump administration’s influence, a proposal was introduced to block states from creating their own AI rules for the next 10 years. That moratorium narrowly passed the House last month and has cleared a key procedural step in the Senate.

This approach sends one clear message to businesses: regulation won’t come from the top any time soon. For AI, especially in regulated sectors like finance, that creates both freedom and risk. On one hand, companies can innovate without being slowed by competing state-by-state policies. On the other, the absence of a standardized legal framework means companies are responsible for setting their own internal limits: how AI is tested, how decisions are audited, and how bias or failures are addressed.

Executives shouldn’t mistake this regulatory pause for a blank check. It’s a shift in responsibility. In a fragmented political climate, the risk isn’t a lack of rule-making; it’s patchy enforcement and retroactive scrutiny. Especially in financial services, where trust is foundational, AI used without oversight can create liabilities that don’t surface until it’s too late to fix them easily.

There’s also a competitive implication. If your organization can build trustworthy, explainable AI systems while others wait for policy signals, you gain first-mover leverage. Right now, market leaders aren’t just innovating; they’re writing the standards everyone else will follow. By shaping internal governance models now, aligned with frameworks like those led by FINOS, you can scale AI faster without sacrificing control.

Key executive takeaways

  • Industry standards are a strategic advantage: Banks and big tech are creating open-source AI governance standards to fill the regulatory gap. Leaders should join or align with these efforts early to scale AI solutions securely and maintain interoperability across systems.
  • Cybersecurity and compliance can’t lag AI innovation: The U.S. Treasury warns that AI integration amplifies existing cyber risks in finance. Executives should prioritize adapting current compliance frameworks to ensure AI deployments stay within regulatory bounds and meet resilience expectations.
  • Regulatory inaction shifts responsibility to the private sector: With a 10-year moratorium on state-level AI laws advancing in Congress, governance falls to industry players. Decision-makers should proactively implement internal AI oversight models to avoid future legal and operational exposure.

Alexander Procter

September 4, 2025
