UK firms’ AI sovereignty efforts expose dependency risks
UK companies are positioning themselves to gain greater control over their AI systems. Many have already drafted exit strategies to handle possible service restrictions from their primary AI providers. But strategy on paper doesn’t always translate into resilience in practice. Red Hat’s survey of 100 UK IT decision makers found that while 67% of organisations have defined exit plans, 43% still expect moderate to significant disruption if their main AI supplier limits access.
This says one thing clearly: most companies know the risk of vendor lock-in but haven’t yet achieved full operational independence. True sovereignty in AI means having control over infrastructure, data, and the technology stack without being tied to one provider’s pace or pricing. For C-suite leaders, this is not a technical issue; it’s a strategic one. AI is now embedded in critical operations, and disruption at the provider level can cascade quickly across processes, productivity, and customer experience.
Business leaders should think beyond reactive strategies. It’s not enough to define an exit plan; enterprises must be able to execute one without losing function or performance. This requires multi-cloud flexibility, internal skills to manage transitions, and open frameworks that reduce friction when changing vendors. In other words, sovereignty isn’t merely about owning your data; it’s about owning your plans, your execution, and your future agility.
Joanna Hodgson, Country Manager, UK, at Red Hat, explained it clearly: many firms have strategies prepared, but “executing a switch without disruption remains difficult.” Her point reinforces a larger truth: control is the ultimate measure of readiness. AI sovereignty is about building systems that you can depend on, irrespective of a single provider’s policies or priorities.
Lagging governance for autonomous AI implementation
The UK is moving quickly with AI that can act and decide independently, a category Red Hat calls “agentic AI.” About 87% of companies already use these systems to automate decisions and workflows. It’s progress, but there’s a problem: governance hasn’t caught up. Only 25% of firms reported strong governance frameworks. Another 43% said they have partial oversight, and 17% admitted their governance is minimal.
This gap between adoption and oversight is a risk multiplier. Agentic AI brings major efficiency gains but also introduces potential for error, bias, and compliance failures if left unchecked. Many organisations are now using AI with autonomy but without a consistent set of rules defining what it can or cannot do. That’s like running automation without insurance: efficient until something goes wrong.
For executives, the priority should be to turn governance from a compliance checkbox into a continuous management process. Governance isn’t about restriction; it’s about sustainability and trust. As AI systems act more independently, executives must see governance as the key to maintaining control over outputs and decisions.
Across Europe, 64% of organisations report having at least some form of AI governance. That puts the UK behind its peers. Bridging this gap requires a joint effort: board-level oversight, technical auditing, and clear internal policies for data integrity and model behaviour. The faster AI evolves, the faster governance must evolve with it. In simple terms: if AI is to scale safely, governance can’t be an afterthought; it must be a foundation.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Partial data visibility challenges compliance and security
UK organisations are progressing in how they handle and monitor their data, but many still lack full visibility of where that data is stored, processed, and accessed. Red Hat’s findings show that 93% of UK respondents have either full or partial visibility, yet almost half of them (45%) have only partial visibility. This means a large number of firms understand data handling at a surface level but lack the depth needed for full accountability.
Data visibility isn’t just a technical measurement; it represents how well an organisation understands its own infrastructure. For executive teams, this directly connects to compliance, privacy, and security. Regulations are tightening across Europe, and firms that can’t fully track data flows risk breaching emerging requirements around data residency, consent, and access control. Executives must ensure that the organisation not only stores data securely but also knows exactly how and where it’s being used.
Comparing performance across regions makes the stakes clearer. In Germany, 97% of organisations report full or partial transparency, setting a higher standard. The Netherlands and Italy both stand at 90%, meaning the UK lags behind key European peers. For leaders, this is not just about closing a data gap; it’s about maintaining control of business-critical information. Incomplete transparency opens the door to risk, whether through data misuse, regulatory penalties, or operational inefficiencies.
To move forward, executives should focus on integrating unified data management frameworks. This includes consolidating data sources, deploying stronger access controls, and implementing continuous monitoring tools across all environments, on-premise and in the cloud. The goal is straightforward: visibility that supports compliance and reduces unnecessary complexity. In today’s regulatory environment, having “partial visibility” is simply not enough for sustainable growth.
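The monitoring step described above can be sketched as a simple inventory check: catalogue each data store’s environment and residency, then flag any store that lacks full visibility for review. This is a minimal Python illustration; the store names, fields, and visibility labels are hypothetical examples, not part of Red Hat’s findings.

```python
# Minimal sketch: flag data stores whose visibility is only partial,
# so compliance teams know where tracking gaps remain.
# All store names and field values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class DataStore:
    name: str
    environment: str      # e.g. "on-premise" or "cloud"
    region: str           # where the data physically resides
    visibility: str       # "full" or "partial"

def visibility_gaps(stores):
    """Return the stores that lack full visibility."""
    return [s for s in stores if s.visibility != "full"]

stores = [
    DataStore("crm-db", "cloud", "eu-west-2", "full"),
    DataStore("legacy-erp", "on-premise", "UK", "partial"),
    DataStore("analytics-lake", "cloud", "eu-central-1", "partial"),
]

for gap in visibility_gaps(stores):
    print(f"Review needed: {gap.name} ({gap.environment}, {gap.region})")
```

In practice the inventory would be fed from discovery tooling rather than hand-written records, but the principle is the same: partial visibility is only actionable once it is enumerated store by store.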
Open source as a strategic lever for enhanced AI trust
More UK business leaders are recognising open source as a practical route to achieving control, flexibility, and trust in AI systems. In Red Hat’s survey, 80% of respondents said open source gives them greater control over how AI is built and executed. Transparency and auditability, ranked as top advantages by 87% of participants, make open source appealing to institutions seeking accountability. Another 82% value greater customisation, making it easier to align AI with unique business and regulatory requirements.
For C-suite executives, the message is clear: open source is not just a toolset; it’s a strategy for independence. Proprietary AI platforms often restrict adaptability and create long-term vendor lock-in. Open source gives businesses more say in how their AI operates, where it runs, and which technologies it integrates with. It provides the clarity needed to meet regulatory expectations and gives organisations confidence in their ability to audit results and performance.
Adopting open source doesn’t come without responsibility. Firms must ensure proper governance, community engagement, and in-house capability to manage open source systems effectively. But when executed well, this approach delivers tangible advantages: transparency for regulators, flexibility for developers, and operational sovereignty for leadership.
For executives guiding their organisations through AI expansion, open source represents a foundation for trust and self-determination. It builds systems that companies understand, manage, and evolve on their own terms. In a market increasingly defined by rapid AI innovation and shifting compliance demands, that level of control is no longer optional; it’s essential for resilience and long-term leadership.
Strong demand for regulation embedding open source principles in AI
UK technology leaders are calling for stronger regulations that embed open source principles into AI development and deployment. This call is rooted in a desire for transparency, accountability, and fairness in how AI systems are built and operated. Red Hat’s survey shows 89% of UK respondents support regulation requiring open source standards, well above the EMEA average of 77%, and higher than France (70%) and Germany (72%).
Executives are recognising that self-regulation alone is no longer enough to ensure responsible AI growth. By adopting legal frameworks that emphasise open access, auditability, and shared innovation, governments can foster a more stable and trustworthy technology environment. This approach ensures that public and private sectors work from the same foundation, one that prioritises clarity and control over black-box systems.
From a leadership perspective, supporting such regulation is a strategic move. It creates predictability, reduces compliance uncertainty, and aligns corporate policy with broader societal expectations of accountability in AI. In regulated sectors such as finance, healthcare, and government services, codifying open source principles can also simplify adherence to security and data-handling laws.
For executives, the takeaway is clear: supporting regulation around openness and transparency doesn’t slow innovation; it protects it. Open frameworks allow organisations to innovate at scale while maintaining trust across customers, regulators, and markets. The UK’s leadership in this area is a signal to other economies: policy alignment with open source values is not only ethical but commercially sustainable.
European shift from AI experimentation to sovereign, regulated deployments
Across Europe, the conversation around AI is entering a more mature phase. Companies are no longer asking how to test AI; they’re determining how to operationalise it under sovereign, secure, and compliant conditions. Red Hat’s regional findings show a consistent trend: boardroom discussions now focus on governance, security, and regulatory frameworks rather than short-term pilots or proofs of concept.
This shift signals a growing consensus that enterprise AI is critical infrastructure, not just an innovation engine. European organisations are converging around models that combine local control with cross-border interoperability. They want systems that meet data protection rules, ensure transparency, and integrate smoothly across multiple vendors and cloud environments. These are frameworks designed to scale responsibly, preserving both agility and compliance.
For decision-makers, the goal is balance: maintaining operational freedom while meeting tightening regulatory obligations. Open source principles are central to this balance because they empower companies to choose their tools, understand the full lifecycle of their AI models, and demonstrate compliance when required. Executives who embrace this shift early can position their firms as leaders in transparent AI governance and cross-market collaboration.
Hans Roth, Senior Vice President and General Manager for EMEA at Red Hat, captured this transition succinctly. He noted that board-level conversations have moved “beyond experimentation to how AI can be deployed in a way that meets sovereignty, security, and regulatory expectations.” His observation reflects where the market is heading: companies want freedom of choice and accountability built into every layer of AI deployment. The next phase of AI in Europe will be defined not by rapid experimentation but by the disciplined execution of open, resilient, and self-governing systems.
Key executive takeaways
- Build real AI sovereignty: Many UK firms have exit plans for AI provider disruption, but 43% would still face operational impacts. Leaders should move beyond planning and invest in practical independence through open architectures and vendor diversification.
- Close the governance gap in autonomous AI: With 87% of UK organisations using agentic AI but only 25% applying strong governance, leaders must formalise oversight. Strong governance frameworks reduce compliance risk and ensure AI autonomy supports business performance.
- Achieve full data visibility to protect integrity and compliance: Nearly half of UK firms have only partial visibility into where and how their data is stored and processed. Executives should strengthen cross-platform data management to meet evolving regulatory expectations and fortify security practices.
- Leverage open source as a foundation for trusted AI: Four in five UK decision-makers view open source as offering greater control and transparency. Leaders should adopt open frameworks to build adaptable, auditable AI systems that align with both regulatory and operational goals.
- Champion regulation that enforces open source principles: 89% of UK executives support regulation mandating open standards for transparency and auditability. Backing this movement can provide regulatory stability and strengthen accountability in AI-driven industries.
- Shift from AI experimentation to sovereign, scalable execution: European companies are embedding AI into critical operations, emphasising sovereignty, security, and governance. Executives should focus on scalable, open infrastructures that ensure freedom of technology choice and long-term resilience.