The EU AI Act’s ambiguous language creates compliance uncertainty

The EU AI Act is meant to establish a framework for ethical, responsible AI use. The intention is good. But intention isn’t enough; execution matters more. And right now, the execution is unclear. We’re looking at legislation filled with legal grey zones wrapped in technical ambiguity.

One of the key problems is the lack of clear definitions around critical terms. “Significant generality” is one example. What does that actually mean? What’s the threshold or metric? There isn’t one. These open-ended terms expose AI providers to risk. Developers need to know how much training data detail counts as “sufficiently detailed.” If they overshare, they risk exposing IP or inviting copyright disputes. If they undershare, they risk non-compliance.

This becomes even more complicated when figuring out who’s responsible. If you tweak an open-source model for a focused task, are you now the “provider”? If you embed that model into your SaaS platform, are you liable for all the outcomes? Without clear lines, legal exposure keeps increasing, especially for companies shipping global products.

So for business leaders, this isn’t just a policy issue; it’s a product risk and an operational constraint. The ambiguity creates hesitation in product teams, legal departments, and investor circles. It slows everything down. And as Oliver Howley from Proskauer pointed out, uncertainty doesn’t promote responsible innovation; it delays it.

Burdensome compliance requirements disproportionately affect AI startups and innovation

Startups don’t have the legal muscle of Google or OpenAI. They don’t have compliance departments with a dozen lawyers or millions reserved for legal auditing. But under the current shape of the EU AI Act, they’re expected to operate at that level. That’s a familiar pattern: big regulation favors big incumbents. And that’s a problem.

The Act creates heavy documentation and operational requirements that put early-stage developers under real pressure. These startups often work fast, ship early, and iterate. But now they’re being asked to create detailed technical dossiers, manage copyright across datasets, and document incident-response protocols like a Fortune 100 company. That’s not practical, and certainly not scalable.

Companies like OpenAI, Anthropic, and Google have signed up to the AI Code of Practice, which is the voluntary framework aligned with the Act. Even they’re saying the documentation demands are intense. Meta simply refused to sign, rejecting the current regulatory design. That tells you something. If top-tier firms hesitate, how is a five-engineer startup going to handle this?

For executives, here’s the issue: the AI Act’s current structure risks stalling exactly the kind of early innovation that drives real breakthroughs. Instead of enabling new players, it may be forcing them out of the market, or worse, away from Europe entirely. Howley warns that this could redirect capital and talent overseas. If that happens, the EU will spend the next decade catching up, not leading.

Innovation doesn’t survive in gray areas or under excessive weight. It needs clarity and space to run. The Act needs to support that, not restrict it.

Transparency requirements risk exposing proprietary data and triggering IP disputes

There’s a fine line between useful transparency and self-sabotage. The EU AI Act pushes AI providers to walk that line without giving them a clear view of where it ends. It mandates that developers share training data summaries that are “sufficiently detailed,” but it doesn’t define what that looks like in practice. So providers are left guessing: share too much and you’re vulnerable to IP theft or copyright litigation; share too little and you risk non-compliance and heavy fines.

This is especially problematic for companies that rely on proprietary training pipelines or have invested in custom-curated datasets. When these processes form the core value of your AI product, being forced to reveal them to regulators or competitors devalues the asset. And since IP protections for AI training data aren’t airtight to begin with, disclosure becomes a real operational risk.

To make things harder, the AI Code of Practice recommends that developers filter out data from websites that have opted out of being scraped. That sounds manageable, but it’s not. The requirement isn’t only forward-looking; it makes retroactive compliance necessary. That means companies would need to audit past training datasets, filter them again, and potentially rebuild foundation models around current data rights. That’s costly, slow, and nearly impossible at scale.
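To make that retroactive burden concrete, here is a minimal sketch of what re-filtering a historical training corpus against an opt-out list might look like. The record schema, the domain registry, and the helper names are hypothetical; a real pipeline would also need to honor robots.txt-style signals, re-crawl policies, and licensing records.

    # Minimal sketch of retroactively filtering a training corpus against a
    # list of domains that have opted out of scraping. The record schema and
    # the opt-out registry below are hypothetical, for illustration only.
    from urllib.parse import urlparse

    OPTED_OUT_DOMAINS = {"example-news-site.com", "example-gallery.org"}  # hypothetical registry

    def is_compliant(record: dict) -> bool:
        """Keep a record only if its source domain has not opted out."""
        domain = urlparse(record["source_url"]).netloc.lower()
        domain = domain.removeprefix("www.")  # normalise the common "www." prefix
        return domain not in OPTED_OUT_DOMAINS

    def refilter_corpus(records: list[dict]) -> list[dict]:
        """Re-audit an existing dataset; models already trained on removed
        records may then need retraining, which is where the real cost sits."""
        return [r for r in records if is_compliant(r)]

    corpus = [
        {"source_url": "https://www.example-news-site.com/article/1", "text": "..."},
        {"source_url": "https://open-data.example.net/page", "text": "..."},
    ]
    print(len(refilter_corpus(corpus)))  # 1 of 2 records survives the re-filter

Even this toy version shows where the cost lands: the filtering itself is cheap, but any model already trained on the removed records may need to be rebuilt.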

Executives evaluating market entry or expansion in the EU need to think seriously about legal containment strategies. Legal uncertainty is manageable when you know the bounds; it becomes a threat multiplier when rules are open to interpretation. As Oliver Howley pointed out, even companies aiming to comply could land in legal jeopardy just by interpreting the law differently than enforcement bodies later do. That kind of regulatory fragility doesn’t sit well in fast-moving AI markets.

Open-source GPAI providers face unique challenges due to systemic risk designations

Open-source models have become central to rapid AI advancement. They’re reusable, composable, and easy to build on. But the EU AI Act treats them inconsistently. In theory, open-source GPAI providers are exempt from some transparency requirements. In practice, those exemptions disappear if a model is labeled as presenting “systemic risk.” And that designation brings an entirely different regulatory load.

A “systemic risk” model, as defined in the Act, is one trained with more than 10^25 floating-point operations (FLOPs) of cumulative compute and officially designated as such by the EU AI Office. This hits major foundation models like OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama. Once flagged, providers must produce detailed documentation, carry out model evaluations and red-teaming, maintain cybersecurity protections, track energy use, and run post-market monitoring, regardless of whether they control downstream use.
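For a sense of how quickly frontier-scale training crosses that line, here is a rough back-of-the-envelope check. It uses the common 6 × parameters × tokens approximation for dense transformer training compute, which is a community rule of thumb rather than anything the Act prescribes, and the model scales are purely illustrative.

    # Rough estimate of training compute against the EU AI Act's 10^25 FLOP
    # systemic-risk threshold. The 6 * parameters * tokens approximation for
    # dense transformers is a community rule of thumb, not part of the Act,
    # and the model scales below are hypothetical.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimate_training_flops(params: float, tokens: float) -> float:
        """Approximate total training compute as 6 * parameters * tokens."""
        return 6 * params * tokens

    examples = {
        "400B params on 15T tokens": estimate_training_flops(400e9, 15e12),
        "70B params on 15T tokens": estimate_training_flops(70e9, 15e12),
    }

    for name, flops in examples.items():
        over = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
        print(f"{name}: ~{flops:.1e} FLOPs -> over threshold: {over}")

The point isn’t the exact numbers; it’s that the trigger is a raw compute figure, so a capable open-source release can inherit the full systemic-risk regime the moment its training run is big enough.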

Here’s where it gets unreasonable. Open-source distribution means you’ve lost direct control. You can’t track or limit how users modify, deploy, or integrate your model. You also can’t monitor every edge use case in the ecosystem. So when regulators hold open-source model providers accountable for downstream behavior they can’t monitor or influence, it creates a liability sinkhole.

For decision-makers, the takeaway is clear: investing in open-source innovation under the AI Act comes with elevated legal complexity and uncertain oversight scope. Crossing the compute threshold alone triggers systemic-risk obligations, regardless of model intent or original design. That flips the usual benefits of open sourcing on their head. Companies considering an open approach will need to rethink compliance as a shared burden, which adds coordination costs most aren’t planning for.

This isn’t about resisting regulation; it’s about making sure the rules align with how the ecosystem works. If they don’t, the result won’t be safer AI; it’ll be stalled progress and reduced participation.

The Act’s emphasis on procedural documentation overshadows the real-world performance of AI systems

The EU AI Act is focused on process. It requires providers to publish training summaries, implement risk management protocols, document incident response systems, and report on post-market performance. That’s all useful if the goal is to demonstrate regulatory alignment. But none of it measures what models actually produce: their accuracy, bias, or societal impact.

A system that ticks every administrative box can still generate misleading, offensive, or unsafe outputs. That’s a gap in the law: it confuses paperwork with performance. What the Act currently lacks is any requirement to evaluate how systems behave in live environments. The criteria for judging systemic risk don’t include output quality; they focus on documentation and theoretical safety mechanisms.

For product and compliance leaders, this opens a critical exposure point. A company might be fully compliant, but if something goes wrong and AI-generated content causes real-world harm, public and regulatory response will ignore the documentation and focus on outcomes. It puts companies in the position of being technically compliant but reputationally at risk.

Oliver Howley called this out directly: “A model could meet every technical requirement… and still produce harmful or biased content.” He’s right. The current framework measures adherence, not effectiveness. Without that shift in focus, businesses may allocate enormous resources to compliance operations that do little to improve the product itself or protect consumers in meaningful ways.

The penalty structure and phased implementation timeline present steep risks for non-compliance

The EU AI Act is not just a regulatory framework; it’s also an enforcement machine. Non-compliance comes with real financial consequences. The largest fines reach up to €35 million or 7% of a company’s global annual turnover, whichever is higher. Less severe breaches still carry steep caps: up to €15 million or 3% of turnover for shortfalls such as incomplete documentation, and up to €7.5 million or 1% for supplying incorrect or misleading information to regulators.

These numbers escalate risk significantly for both large incumbents and mid-sized firms operating in Europe. For SMBs and startups, the penalties apply either as a percentage or a fixed amount, whichever is lower, but the burden is still material. Financial liability doesn’t scale downward very gracefully.
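To see how those caps play out in practice, here is a minimal sketch of the tier logic described above. The turnover figures are hypothetical, and actual penalties depend on the specific infringement and on enforcement discretion.

    # Sketch of the EU AI Act fine caps described above. Turnover figures are
    # hypothetical; real penalties depend on the infringement and the regulator.
    TIERS = {
        "prohibited practices": (35_000_000, 0.07),    # EUR 35M or 7% of global turnover
        "other obligations": (15_000_000, 0.03),       # EUR 15M or 3%
        "misleading information": (7_500_000, 0.01),   # EUR 7.5M or 1%
    }

    def fine_cap(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
        """Return the maximum fine: whichever is higher for most firms,
        whichever is lower for SMEs and startups."""
        fixed, pct = TIERS[tier]
        pct_amount = pct * global_turnover_eur
        return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

    # The same breach caps out very differently for a EUR 2B-turnover firm
    # and a EUR 20M-turnover startup.
    print(fine_cap("other obligations", 2_000_000_000))            # 60000000.0
    print(fine_cap("other obligations", 20_000_000, is_sme=True))  # 600000.0

Even with the “whichever is lower” carve-out, that exposure is material for an early-stage company, which is why compliance planning needs to start before enforcement does.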

There’s a grace period: enforcement on general-purpose AI (GPAI) models won’t begin until August 2, 2026, even though requirements take effect August 2, 2025. While that one-year buffer is helpful, it won’t change the fact that the implementation path is unclear, especially while definitions, threshold calculations, and enforcement consistency remain in flux.

From a leadership perspective, this means risk planning must begin now. Waiting for enforcement to kick in is not the move. Product planning, legal frameworks, governance models, and deployment playbooks all need to integrate compliance design early in the development cycle. That includes documentation readiness, IP and data-usage clarity, and conformity with systemic-risk classification criteria.

The fines aren’t theoretical, and the ambiguity around obligations increases exposure. That’s a poor risk ratio for any high-growth company banking on launching into or expanding across EU markets. If regulatory alignment feels unpredictable, investing early in internal control systems will be far more efficient than responding reactively to enforcement actions down the line.

The EU AI Act’s staggered rollout and extended compliance deadlines aim to balance enforcement with market adaptation

The EU AI Act isn’t hitting all at once; its implementation is deliberately phased. The rules begin applying in stages depending on the type of AI system and when it enters the market. For general-purpose AI (GPAI) models released after August 2, 2025, compliance is required by August 2, 2026. For those already on the market, the cutoff is August 2, 2027. Certain public-sector systems have until as late as 2030 to comply.

This timeline gives businesses time to adapt, but adaptation alone isn’t enough. The deadlines offer breathing room, not clarity. Notified body engagement, post-market monitoring, governance coordination, and transparency documentation all require long-term infrastructure planning. Companies hoping to operate successfully in the EU need predictable guidance today, not just more time to interpret vague expectations.

No business planning horizon benefits from uncertainty stretched across six years. Planning AI roadmaps, infrastructure investments, or geographic expansion under these conditions requires forecasting unknowns, especially when legislative interpretation is still in flux. One year from now, compliance mechanisms may evolve based on early enforcement cases or lobbying influence.

For C-suite leaders, it’s essential to treat this phased implementation not as an extended deferral but as a stack of moving targets. Every milestone in this schedule will likely come with updated regulatory interpretations, market shifts, and evolving risk assessments. It’s not just about hitting the deadlines; it’s about making sure your compliance strategy stays aligned as the rules settle.

Broad industry opposition and geopolitical tensions underscore the act’s wider ramifications

The EU’s approach to regulating AI is having global impact, starting with pushback from industry giants. OpenAI, Google, Anthropic, and Meta are directly involved in shaping the conversation. While OpenAI and Google have signed the EU’s AI Code of Practice, both have been critical of its complexity and scope. Meta outright refused to sign it. The message is clear: the current framework causes friction, even for those leading the sector.

That friction isn’t just legal; it’s geopolitical. The United States has taken a different stance on AI regulation, favoring more flexible or sector-specific approaches. The EU’s risk-based model, while well-intended, clashes with that posture. The potential for cross-border enforcement, where European regulators penalize U.S.-based firms, could create strain in broader transatlantic trade discussions.

Oliver Howley raised this concern explicitly. If U.S. providers start facing EU penalties under opaque compliance rules, it won’t take long for diplomatic pressure to escalate. Trade negotiations could stall. Federal policy responses in the U.S. may harden. That kind of international friction isn’t just abstract, it affects global AI product launches, cross-border M&A, and investment flows.

From a strategic leadership standpoint, global AI growth needs coordination, not fragmentation. The business downside of operating under multiple conflicting regulatory regimes is real. It raises engineering costs, complicates go-to-market strategies, and multiplies internal compliance workloads. The more these frameworks diverge, the more leadership teams will need to invest in jurisdiction-specific operating models, and that’s not a high-leverage investment.

Keeping pace with all the evolving norms is necessary. But pushing for regulatory alignment at the international level should be on the agenda for any company planning to compete globally in AI. That’s where real risk mitigation, and long-term advantage, exists.

Concluding thoughts

Regulation at this scale always creates friction before alignment. The EU AI Act isn’t trivial policy; it’s the beginning of a structural shift in how markets, governments, and developers treat artificial intelligence. But for that shift to work, clarity matters as much as ambition.

Executives can’t afford to treat this as a long-range policy issue. It’s an operational one. Product roadmaps, compliance architectures, legal strategies, and even hiring plans are going to be shaped by how readable and enforceable this framework becomes over the next 12 to 24 months.

The companies that win here won’t just adapt, they’ll structure AI governance into their business models early and intentionally. That means prioritizing internal alignment, getting serious about documentation hygiene, tracking downstream use, and shaping engagement with regulators proactively, not reactively.

The next market gains won’t come from dodging regulation; they’ll come from designing systems that scale within it. And in a space moving this fast, waiting on clarity is a risk in itself.

Alexander Procter

August 13, 2025
