The EU is investigating Google’s use of online content for training AI models

There’s growing tension between innovation and regulation, particularly in artificial intelligence. Right now, the European Commission is investigating Google. The big question is whether Google is giving content creators a fair deal and whether it’s gaining an unfair edge by using data others can’t access, like the vast collection of videos on YouTube, which Google owns.

This is less about fines and more about market control. If Google uses its own platforms like YouTube to train large-scale AI models while competitors are legally blocked from accessing the same material, it creates a major imbalance. That's not just a legal or ethical question; it's a structural advantage. If this pattern persists, a handful of companies will write the rules and own the training inputs, and whoever owns the training inputs ultimately writes the future of AI. That's what the EU wants to prevent.

C-suite leaders should pay attention. We’re entering a phase where access to data may define power in AI, not just talent or algorithms. If you’re building in this space or partnering with companies that do, the playing field is under review in Brussels, and regulatory boundaries could be reset.

This comes on the heels of last year's U.S. antitrust ruling against Google's dominance in online search, where the company had to commit to sharing certain data and limiting exclusive arrangements. These moves reflect a broader pattern of governments starting to police how data is acquired and deployed in large-scale AI.

Google’s AI overviews are reportedly reducing traffic to traditional online publishers

AI Overviews display an AI-generated summary at the top of Google's search results page. Fast, automated, and helpful for users; no argument there. But ask any digital publisher, and they'll tell you it cuts them out of the conversation. Traffic that once drove subscriptions, ad revenue, and visibility is getting absorbed by Google's summary boxes. And that content is often powered by the very publishers now losing out.

This isn't a hypothetical issue. According to Pew Research's March 2025 survey, 60% of Google Search users encountered these AI summaries. Among those users, only 8% clicked on an actual link result. Compare that to users who didn't see a summary: 15% of them clicked on links. That's a drop of almost half. For the content creators and publishers fueling the internet economy, that shift is a direct hit to business models.

For executives in media, tech, or any content-driven industry, understand this: AI tools that boost user experience can also change the flow of digital economics. If platforms begin to summarize, repurpose, and distribute content without converting traffic, someone loses. And at the moment, it’s publishers.

It's also starting to raise copyright and data-use questions: cases are mounting, and global regulators are leaning in. Make sure your monetization strategies and partnerships are aligned with this reality, whether you're leveraging AI or defending content rights. The web you knew five years ago is rapidly being rewritten by summarization engines and generative models. How you adapt will determine your relevance in the next chapter.

Emerging global regulatory and legal challenges are intensifying scrutiny of AI content practices

Across both sides of the Atlantic, the conversation around AI and content usage is evolving fast, and moving toward regulation. In the EU, the AI Act introduces specific guardrails around copyright and intellectual property when it comes to how general-purpose AI systems are developed and trained. Now, with the EU Commission directly questioning Google’s practices, enforcement may arrive sooner than expected.

Regulators are watching how foundational AI models are trained and whether that process violates the intellectual property rights of data sources. If companies are scraping copyrighted work and turning it into model outputs, whether it's a news article, a song lyric, or a video, it's no longer just a matter of ethics. It's entering a zone of legal liability.

Forrester’s Principal Analyst Enza Iannopollo pointed out that even though official rule enforcement under the EU AI Act might take time, investigations like this one could create momentum for faster regulatory responses. Industry leadership should take that as a signal: tightening oversight is coming. Data sourcing, consent, and copyright infrastructure will need to be built into AI product roadmaps, not bolted on later.

This matters today because the lawsuit cycle is already active. The debates are no longer about what’s technically possible. We’re looking at how business models will be constrained or redefined by intellectual property protection. For anyone leading in AI, news, or content-driven tech, this is the time to engage with policymakers proactively. Once the rules are locked in, adaptation becomes more costly.

Google defends its AI innovations as integral to maintaining a competitive edge

Google wasn’t passive in response to the EU’s investigation. The company positioned its AI Overviews and other generative tools as essential innovations in the search space, tools that align not only with how people interact with the internet today but also with where that interaction is headed. According to a Google spokesperson, these products are built to expand access and deliver value to users across Europe and beyond.

At the same time, Google made it clear that it views the threat of over-regulation as a potential drag on progress. Its message: if regulators clamp down aggressively, the result could be fewer advancements, slower competition, and less value delivered to users. The company emphasized a commitment to working with partners across the creative and media sectors to adapt together as AI adoption grows.

Executives should track this carefully. While regulation seeks to ensure fairness, not all enforcement improves outcomes. There’s a balance between protecting rights and enabling new growth. If generative AI goes through a regulatory bottleneck, those waiting to innovate may lose the window to lead. AI innovation is moving fast, and policy environments need to be built for that pace.

At the executive level, the focus now should be alignment between product evolution and stakeholder inclusion. Build technologies that respect content ownership, but don't paralyze innovation with unnecessary internal restrictions. That's the line emerging across markets like the EU, and how you walk it will separate scalable strategies from stalled ones.

U.S.-based legal challenges mirror EU concerns regarding the use of copyrighted content in AI models

The regulatory pressure isn't confined to Europe. In the U.S., AI model providers are facing a growing number of lawsuits focused on one core issue: the unauthorized use of copyrighted content. Major publishers are pushing back hard. Penske Media Corporation, the owner of Rolling Stone, filed suit against Google in September. The case challenges Google's use of AI summaries that allegedly repurpose copyrighted content in ways that diminish publishers' business performance and audience reach.

The New York Times launched a similar lawsuit, though not against Google: it targeted Perplexity, an AI-powered engine competing in the same generative space. The allegation is copyright infringement based on how the engine uses Times content to generate outputs. These cases are stacking up, and they mark a global trend. Legal challenges are becoming part of the ecosystem when AI tools are built on data that wasn't cleared, licensed, or compensated for.

What makes this significant for leadership is the shift from theoretical debate to legal precedent. If courts begin siding with publishers, and some already are, that will restrict how companies train language models and deploy AI-generated content. The outcome of these cases could reshape the costs of building AI platforms. What’s free today might require licensing fees tomorrow.

This environment demands legal clarity and proactive policy. If you're building or investing in AI tools, review your data pipelines, content sourcing policies, and terms-of-use frameworks now, not after litigation comes your way. Rights-respecting AI models are quickly becoming not just ethical necessities but strategic ones. The market is watching, and so are regulators on multiple continents.

Key takeaways for decision-makers

  • Antitrust and AI data access in focus: The EU is investigating whether Google unfairly uses its control over platforms like YouTube to train AI models, raising concerns about competitive imbalance. Leaders in AI should evaluate how exclusive data access could trigger regulatory scrutiny.
  • AI summaries and publisher impact: Google's AI Overviews dramatically reduce traffic to original sources: only 8% of users click through when summaries appear. Executives in digital media should reassess content distribution strategies and explore new monetization models.
  • Global IP standards are shifting: Legislative developments like the EU AI Act and recent investigations indicate accelerating regulatory action around data rights and copyright. Leaders should proactively align AI workflows with evolving IP and compliance frameworks to minimize risk.
  • Balancing regulation and innovation: Google argues that overly restrictive rules could hinder technological progress, even as it commits to working with stakeholders. Executives should design compliance strategies that preserve space for innovation while addressing fairness and transparency.
  • Legal pushback is gaining momentum: High-profile lawsuits in the U.S., including from Rolling Stone’s parent company and The New York Times, challenge how AI firms use third-party content. Companies training generative models should audit their data sources now to prepare for emerging litigation and licensing requirements.

Alexander Procter

January 27, 2026

8 Min