Existing AI regulations are outdated

Most of today’s AI regulations were written for yesterday’s technology. They focus on early-generation large language models and on visible use cases such as deepfake prevention and transparency labeling. What they don’t address are the new forms of AI now emerging: systems that can update themselves, learn from internal data, and operate semi-independently.

William Dunning, Managing Associate for AI Regulation at Simmons & Simmons, pointed out that these gaps create serious compliance blind spots. Existing laws assume human oversight, yet new AI models can act and evolve without continuous human control. As these systems start to develop and interact with each other, the current legal definitions of “responsibility” and “control” become hard to apply.

For executives, this isn’t just a legal challenge; it’s a strategic one. Businesses that plan ahead and integrate flexible governance now will adapt faster when regulations catch up. Waiting for governments to rewrite rules means risking disruption later. Leaders should focus on internal standards that assume future regulatory complexity, not the simplicity of the present.

Staying ahead of regulation is not about prediction; it’s about readiness. Executives should build frameworks that can expand or adjust as AI governance evolves. Doing so keeps organizations compliant, credible, and trustworthy in the eyes of regulators, customers, and investors.

Shift from policymaking to enforcement

Regulators have moved past discussion. The next 12 months will be about action. Policymakers, particularly in Europe, are ready to enforce what’s already on paper. The European Union’s AI Act is leading this shift, and although some parts remain unclear, businesses should expect inspections, fines, and accountability processes to begin.

Nikki Pope, Senior Director for AI and Legal Ethics at Nvidia, said the change ahead will focus less on writing new laws and more on enforcing what exists. This means the gray areas, those sections of the law still open to interpretation, will be tested in real cases. The United States offers little clarity in comparison. There’s no federal regulation on the horizon, only state-level efforts such as California’s laws on AI transparency and watermarking.

For corporate leaders, enforcement matters more than debate. Compliance strategies need to be operational now, not in draft form. Review the AI tools in use across your organization, identify where risk exists, and make sure internal policies reflect both European and U.S. requirements if your business operates globally.

The cost of waiting is higher than the cost of acting early. An adaptable compliance program is not just for legal protection; it’s a competitive advantage. Executives who lead on enforcement readiness set the tone for responsible innovation and reduce exposure to operational, legal, and reputational risks.


The EU AI Act as an international benchmark amid global fragmentation

The European Union has set a strong global precedent with the 2024 EU AI Act. It mandates clear labeling of deepfakes, strict controls over high-risk AI applications, and a detailed compliance process that leaves little room for ambiguity. This law is shaping international discussions on what responsible AI governance should look like. At the same time, it exposes how fragmented global regulation truly is.

Minesh Tanna, Partner and Global AI Lead at Simmons & Simmons, explained that while the EU pushes for comprehensive rules, the United States has resisted similar federal legislation. Instead, individual states, most notably California, are creating their own frameworks focused on transparency, watermarking, and accountability for AI-generated content. For multinational companies, this means dealing with overlapping and sometimes conflicting requirements.

Executives should treat this regulatory diversity as a practical reality, not a temporary inconvenience. Operating across regions now requires adaptable governance systems capable of meeting both the EU’s stringent criteria and the United States’ decentralized standards. The speed of adaptation will mark the difference between compliance as a constraint and compliance as a brand advantage. Companies that embed scalable frameworks today will be better positioned for consistency and trust on a global scale.

Executives should invest in monitoring and regulatory intelligence functions that continuously track legal changes in all operating jurisdictions. This proactive visibility enables faster response times, reduces compliance costs, and prevents regulatory blind spots before they impact business operations.
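A regulatory intelligence function of this kind can start as something very lightweight: a registry of tracked regimes that flags any jurisdiction whose rules changed after the compliance team’s last internal review. The sketch below illustrates that idea; the jurisdictions, dates, and field names are illustrative assumptions, not details from this article.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class JurisdictionRule:
    """One tracked regulatory regime and when it was last reviewed internally."""
    jurisdiction: str
    regime: str          # e.g. "EU AI Act" or a state-level transparency law
    last_updated: date   # when the regulator last changed the rules
    last_reviewed: date  # when our compliance team last reviewed them

def stale_rules(rules):
    """Return regimes that changed after our most recent internal review."""
    return [r for r in rules if r.last_updated > r.last_reviewed]

# Hypothetical watchlist: only the second entry changed after its last review.
watchlist = [
    JurisdictionRule("EU", "EU AI Act", date(2025, 8, 1), date(2025, 9, 15)),
    JurisdictionRule("US-CA", "CA transparency and watermarking rules",
                     date(2025, 10, 2), date(2025, 6, 30)),
]

for rule in stale_rules(watchlist):
    print(f"Review needed: {rule.jurisdiction} / {rule.regime}")
```

In practice the `last_updated` dates would come from a legal-updates feed rather than being hard-coded, but even this minimal shape makes blind spots visible: any system that can answer “which regimes changed since we last looked?” is already ahead of a purely reactive posture.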

Integration of traditional legal principles and product liability

Existing legal frameworks are not disappearing; they are expanding to include AI. Product liability, which holds companies responsible for unsafe products, will increasingly apply to AI systems that cause harm. If an AI-driven decision results in discrimination, injury, or economic loss, the same principles that govern defective physical products will be used to hold organizations accountable.

Nvidia’s Nikki Pope and Simmons & Simmons’ Minesh Tanna emphasized that this integration of traditional and emerging legal standards creates both clarity and pressure: clarity because it grounds AI accountability in established legal logic, and pressure because it raises expectations for testing, traceability, and documentation across all stages of AI deployment.

For executive teams, this development presents an immediate directive: treat AI safety and reliability as legal obligations, not optional ethical choices. Establish audit trails, document how systems are trained, and ensure all decision-making processes are traceable. This reduces exposure to liability while reinforcing a culture of trust internally and externally.
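An audit trail of the kind described above can begin as an append-only log that records, for each AI-driven decision, the model version, a summary of the inputs, and the outcome. The following minimal sketch shows the shape of such a log; all field names and example values are illustrative assumptions, and a production system would write to tamper-evident storage rather than memory.

```python
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only, in-memory audit trail for AI-driven decisions."""

    def __init__(self):
        self._entries = []

    def record(self, model_version, input_summary, decision):
        """Log one decision with a UTC timestamp and return the entry."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_summary": input_summary,
            "decision": decision,
        }
        self._entries.append(entry)
        return entry

    def export(self):
        """Serialize the full trail for auditors or regulators."""
        return json.dumps(self._entries, indent=2)

# Hypothetical usage: a credit decision made by a versioned model.
log = DecisionAuditLog()
log.record("credit-model-v3.2", {"applicant_region": "EU"}, "approved")
print(log.export())
```

The key design point is that entries are only ever appended, never edited: traceability comes from being able to reconstruct, after the fact, exactly which model version produced which decision.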

Decision-makers should engage both legal experts and engineers early in AI product design and deployment. By merging technical safety validation with established legal frameworks, organizations can demonstrate responsible management and stay ahead of enforcement actions.

Proactive AI governance frameworks are essential for trust and compliance

AI governance is no longer a side initiative; it’s a requirement for sustainable operations. Organizations must establish clear processes for evaluating, deploying, and managing AI across all functions. This includes keeping an up-to-date inventory of every AI system in use, defining who is responsible for oversight, and ensuring transparent documentation for each use case. The goal is to make AI operations predictable, accountable, and defensible under regulatory scrutiny.
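The inventory described above can start as one structured record per system, capturing the business function, a named owner, a risk tier, and documentation status. A minimal sketch follows; the risk tiers and fields are simplifying assumptions loosely modeled on a risk-based approach like the EU AI Act’s, not an official schema.

```python
from dataclasses import dataclass

# Simplified, illustrative tiers; real classifications are more granular.
RISK_TIERS = ("minimal", "limited", "high")

@dataclass
class AISystemRecord:
    name: str
    business_function: str
    owner: str        # the named individual accountable for oversight
    risk_tier: str
    documented: bool  # is there transparent documentation for this use case?

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

def needs_attention(inventory):
    """High-risk or undocumented systems should be reviewed first."""
    return [s for s in inventory if s.risk_tier == "high" or not s.documented]

# Hypothetical inventory: hiring tools are a common high-risk category.
inventory = [
    AISystemRecord("resume-screener", "hiring", "J. Doe", "high", True),
    AISystemRecord("support-chatbot", "customer service", "A. Lee", "limited", True),
]
print([s.name for s in needs_attention(inventory)])
```

Even this small structure enforces two of the article’s points mechanically: every system has a named owner, and anything high-risk or undocumented surfaces at the top of the review queue.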

Jennifer Barrera, President and CEO of the California Chamber of Commerce, stressed that aligning governance practices with compliance expectations reduces the risk of discrimination and bias, especially in sensitive areas such as hiring and employee evaluations. Minesh Tanna added that this process requires genuine collaboration between legal and engineering teams: engineers understand system behavior, while lawyers interpret regulatory language, and only when both work together can companies turn broad regulatory expectations into actionable frameworks.

Executives should approach AI governance as an active discipline requiring constant review. Regulations evolve, technology moves quickly, and internal practices must adjust in real time. Establish regular governance audits, assign senior accountability for AI oversight, and integrate ethical reviews into product and system development. These actions not only maintain compliance but also strengthen organizational credibility with customers, regulators, and stakeholders.

For decision-makers, governance is a strategic asset. By embedding governance directly into operational culture, executives ensure that innovation continues safely, responsibly, and in a way that strengthens long-term market confidence and brand value.

Building trust as the core objective behind AI regulation

Trust is the foundation of effective AI adoption and long-term success. Regulations aim to ensure that AI systems are transparent, reliable, and aligned with human oversight. Without trust, even the most advanced technologies face public and corporate resistance. Building trust means demonstrating responsible development, clear accountability, and consistent compliance.

Minesh Tanna emphasized that creating trustworthy AI systems should be every organization’s central objective. Companies must invest in processes that validate system integrity, explain model decisions clearly, and prioritize user privacy and data protection. Trust is established when users, customers, and partners can see evidence of control and accountability in every operational layer of AI deployment.

For executives, this is a leadership issue as much as it is a technical one. Leadership defines tone and direction: how transparency is practiced, how ethical standards are implemented, and how compliance is enforced internally. Building trust requires commitment from the top, visible adherence to ethical practices, and measurable actions that prove reliability.

Executives should treat trust as a measurable metric, not an abstract goal. Tracking compliance performance, transparency levels, and user confidence data over time ensures that trust becomes a managed outcome, one directly tied to reputation, market growth, and long-term business resilience.
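Treating trust as a measurable metric implies blending compliance and confidence signals into one tracked score that can be compared quarter over quarter. The sketch below is deliberately simple; the weights and input signals are illustrative assumptions, not an industry standard.

```python
def trust_score(compliance_pass_rate, transparency_coverage, user_confidence,
                weights=(0.4, 0.3, 0.3)):
    """Weighted blend of three signals, each normalized to the range [0, 1]:
    - compliance_pass_rate:  share of internal/external audits passed
    - transparency_coverage: share of AI systems with published documentation
    - user_confidence:       survey-based confidence, normalized to [0, 1]
    """
    signals = (compliance_pass_rate, transparency_coverage, user_confidence)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("all signals must be in [0, 1]")
    return sum(w * s for w, s in zip(weights, signals))

# Hypothetical quarter-over-quarter tracking: is trust a managed outcome?
q1 = trust_score(0.90, 0.75, 0.60)
q2 = trust_score(0.95, 0.85, 0.70)
print(q2 > q1)
```

The specific formula matters far less than the discipline it encodes: once the inputs are measured and reported on a schedule, trust stops being an abstract goal and becomes something leadership can be held accountable for.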

Main highlights

  • Outdated regulations demand proactive governance: Most current AI laws are already behind the technology curve. Leaders should implement flexible, future-ready governance frameworks now to remain compliant as new forms of AI, like self-learning systems, emerge.
  • Regulatory focus is shifting to enforcement: Policymakers are moving from drafting rules to enforcing them, especially in the EU. Executives should strengthen compliance operations immediately to avoid penalties and uncertainty as enforcement ramps up.
  • Global regulation is fragmented but influential: The EU AI Act sets tough global standards, while U.S. regulations remain state-led and inconsistent. Multinational leaders should build adaptable compliance systems that address varying regional demands.
  • Traditional legal frameworks still apply to AI: Expect product liability laws to govern AI-related harm. Leaders should ensure their systems are safe, documented, and auditable to minimize legal exposure and build organizational accountability.
  • AI governance must be integrated company-wide: Governance isn’t a legal formality; it’s an operational necessity. Executives should align legal, technical, and ethical oversight functions, ensuring AI use is compliant, transparent, and bias-free.
  • Trust defines long-term AI success: Regulation ultimately aims to build trust in how AI is used. Leadership should make transparency, accountability, and measurable ethical standards core to operations to strengthen market confidence and brand resilience.

Alexander Procter

April 21, 2026

