Global coordination is essential for effective AI governance and human rights protection
AI doesn’t stop at borders. It’s developed, trained, and refined across continents, through data centers, global teams, and interconnected supply chains. Trying to regulate it within national boundaries is like containing the internet to one country. Microsoft’s Ginny Badanes, General Manager of Tech for Society, and Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, made it clear that global coordination is a necessity. They warned that fragmented rules across countries only make it harder to manage risks linked to bias, safety, and privacy. What the world needs is alignment, a set of international standards that ensure AI benefits humanity while protecting human rights everywhere.
Their perspective reflects more than compliance. It’s a pragmatic recognition that AI success depends on trust. If companies and governments align on core ethical and safety principles, innovation moves faster and with greater public confidence. The current patchwork of directives, from Europe’s AI Act to proposed US frameworks, creates friction and slows down progress. Harmonization could clear that clutter and establish a common foundation for growth.
For executives, the takeaway is simple: treat AI governance as a global business issue. You cannot scale AI responsibly if oversight, security, and data practices vary wildly between jurisdictions. By engaging in international dialogue and supporting interoperable standards, decision-makers can help shape a safer, more predictable environment for AI innovation.
The UK’s human rights-based approach to AI regulation
The UK’s AI Opportunities Action Plan earned strong reviews from both industry and government leaders. According to Badanes and Sherman, the strategy makes a “sensible start,” grounded in Britain’s established human rights laws and risk-based framework. It balances innovation with societal protection, a combination that appeals to both business and regulators. Kanishka Narayan, the UK’s AI Minister, reinforced this view, pointing to the AI Security Institute as an emerging global leader in technical AI governance.
Still, optimism comes with caution. The UK model works best if it connects with broader international systems. Global interoperability is key. AI doesn’t recognize borders, and neither do its risks, from misinformation to systemic bias. If the UK operates in isolation, it risks creating compliance silos that limit the use of new technologies. Global alignment ensures that AI solutions developed under UK standards can integrate smoothly into international markets without redundant auditing or fragmented oversight.
For executives, this signals an opportunity. Companies that build AI products in compliance with human rights-based standards can future-proof their operations. The direction from the UK government shows clear intent, protect rights, encourage innovation, and collaborate internationally. For forward-thinking business leaders, aligning with this model early sets the stage for greater resilience and smoother global deployment as regulation matures.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Public trust and transparency underpin AI adoption and governance
Public trust is the real engine behind AI’s success. If people don’t trust AI systems, they won’t use them, no matter how advanced the technology gets. Ginny Badanes, General Manager of Tech for Society at Microsoft, and Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, both emphasized this during the committee session. Their message was direct: without transparency, adoption fails. Users need to understand where information comes from and how decisions are made. To that end, companies are working on measures such as factual alignment tools, transparency indicators in chatbot responses, and integrated citations that allow people to verify sources directly. These efforts aim to make AI outputs more reliable and less opaque.
This is about more than compliance, it’s about sustaining credibility in a fast-moving market. Clear labeling of AI-generated content, explainability in model outputs, and responsible data usage create the foundation for durable trust. As AI’s role expands across industries, from customer service to healthcare and finance, businesses that invest in visible, understandable transparency will command greater loyalty from both users and regulators.
For executives, the lesson is uncomplicated but strategic. Transparent AI systems inspire confidence and support long-term growth. Customers trust clarity, not perfection, and that trust directly shapes adoption rates, revenue growth, and brand perception. Companies that make transparency and accountability operational priorities will be best positioned to both innovate and lead global compliance discussions as regulations evolve.
Addressing misinformation and preserving democratic integrity
The conversation on AI and democracy struck a serious tone. Lawmakers challenged Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, over the rising threat of misinformation and the role of AI in amplifying it. Sherman responded that Facebook enforces real-identity verification for users, relying on government-issued IDs when necessary. Still, adversarial groups keep evolving their tactics, which creates an ongoing arms race between bad actors and platform safeguards. The company continues to enhance detection mechanisms, update misinformation filters, and improve the transparency of how information appears in feeds. However, Sherman acknowledged that the work is far from finished.
AI-generated misinformation is one of today’s most complex governance challenges. It affects democratic processes, distortions in public debate, and even trust in institutions. Executives at major tech firms know that dealing with this issue is not just about moderating content, it’s about building systemic resilience. This includes investing in advanced verification systems, provenance tools like digital watermarking, and real-time tracking of AI-manipulated media.
For business leaders, the point is clear: maintaining credibility in the information ecosystem is non-negotiable. Misinformation control isn’t a technical issue, it’s a strategic imperative. Companies that anticipate and mitigate AI-driven disinformation risks protect both their users and their long-term market positions. In the next phase of digital transformation, transparency and integrity won’t be optional features; they’ll define which organizations retain public trust and regulatory confidence.
Accountability for AI harms remains unresolved
Accountability in AI remains one of the toughest governance problems. During the parliamentary session, Ginny Badanes, General Manager of Tech for Society at Microsoft, and Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, pointed out that responsibility for harm should align with where real control exists, whether at the level of model development, deployment, or user behavior. This means liability isn’t one-dimensional. A system that misleads users, for instance, could involve failures at multiple points: flawed design, improper deployment, or malicious use. Both executives avoided prescribing a fixed legal model, recognizing that accountability must evolve with technology’s complexity.
Right now, the legal frameworks governing AI accountability are fragmented and inconsistent. Some regulations focus on the developer, others on the operator or data controller. This uncertainty makes it harder for businesses to plan risk management and compliance strategies. While policymakers refine their frameworks, companies are advised to strengthen internal governance, document decision processes, and perform regular impact assessments.
Executives should view accountability as a leadership function, not a compliance task. Clarity on who holds responsibility in a risk chain protects both users and organizations. Institutionalizing accountability, through transparent reporting, auditable model design, and strong deployment oversight, reduces exposure to litigation and improves stakeholder confidence. As AI continues to integrate into vital sectors, accountability will be the foundation for sustainable progress.
Safeguarding children in AI interactions requires differentiated, risk-based oversight
The discussion on child safety underscored a growing reality, AI systems interact with younger users in increasingly complex ways. Rob Sherman and Ginny Badanes both emphasized that a uniform age limit across all AI applications doesn’t reflect the real differences in risk. A tutoring app and a conversational chatbot simply don’t present the same exposure. Instead, they supported a risk-based framework, where safety features and verification adapt to how and where children engage with technology. Sherman indicated that consistent, platform-level age verification would add meaningful protection that is currently missing in most ecosystems.
This approach focuses on tailoring protections to context rather than imposing blunt restrictions. As Badanes stated, it’s not enough to block access; systems must also be designed from the start with child safety in mind, through clear boundaries, built-in guardrails, and transparent design principles. That requires collaboration between developers, regulators, and education experts to ensure safety without limiting positive engagement or learning opportunities.
For executives, this is a call to act early. Designing and deploying AI tools with embedded safety standards will soon be an expectation, not an option. Companies that adopt age-appropriate safeguards today will enter future regulatory conversations from a position of strength. It’s about achieving responsible innovation, maintaining open access to beneficial technologies while reducing the risk of harm to vulnerable users.
Firms view existential AI threats as less immediate than everyday safety and governance risks
Executives from both Microsoft and Meta emphasized that while theoretical risks from advanced AI deserve attention, current priorities lie in managing immediate operational and safety challenges. Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, described the progress of AI as incremental, with no sudden transformation that would redefine society overnight. Ginny Badanes, General Manager of Tech for Society at Microsoft, referred to existential risk as “low-probability but high-impact,” highlighting the need for balance between addressing long-term theoretical threats and the tangible risks users face today.
Both companies detailed their internal safeguards designed to address these immediate risks, ongoing “red-teaming” exercises, risk assessments for new AI models, and the use of “frontier risk” frameworks to review systems for security issues in areas like cybersecurity and autonomy. They also stressed continuous collaboration with governments, who hold vital intelligence not available to the private sector. This partnership allows for more accurate assessments of national security implications before AI systems reach public use.
For executives, the message is pragmatic. Overcommitting resources to hypothetical future risks can distract from solving real and measurable problems, like model bias, misinformation, and safety vulnerabilities. The focus should remain on building structured, verifiable safety controls. Companies that excel at operational risk management will be better prepared to address advanced threats if and when they materialize. Long-term planning matters, but so does keeping risk management anchored in today’s systems and users.
Multi-stakeholder collaboration is critical for ensuring AI safety
Managing advanced AI requires collaboration among governments, corporations, and independent research bodies. Ginny Badanes, General Manager of Tech for Society at Microsoft, compared the complexity of AI governance to past global challenges that required coordinated oversight. She made the case for interoperability and shared standards as the only viable path to sustainable safety management. Rob Sherman, Deputy Chief Privacy Officer of Policy at Meta, agreed that while voluntary safeguards and internal criticism channels within companies are important, these measures alone are insufficient. The session also featured Kanishka Narayan, the UK’s AI Minister, who described how the UK’s AI Security Institute fosters global cooperation through pre-deployment access to advanced models and joint work on international evaluation standards.
There is growing recognition that global standards can’t depend solely on corporate goodwill. Policymakers and executives are calling for clearer frameworks that define responsibilities across jurisdictions. Specific priorities include developing provenance tools to help identify AI-generated content, standardizing evaluation methods, and ensuring that high-capability AI systems are subject to consistent technical audits across regions. While binding global treaties aren’t yet on the table, momentum is building toward practical mechanisms of shared oversight.
For decision-makers, the lesson is straightforward: no organization can manage AI risks alone. Governments can provide intelligence and legal clarity, while corporations contribute the technical expertise and infrastructure to implement solutions. Collaboration across sectors and borders creates predictability and operational stability, two conditions every business needs when working with technologies capable of fast, large-scale influence. Acting collectively now can establish trust and security standards that anchor the next decade of AI growth.
Policy focus is shifting from abstract, long-term AI risks to addressing tangible, day-to-day human impacts
Governments are gradually shifting their attention from distant, theoretical AI scenarios to the issues directly shaping people’s lives today. Kanishka Narayan, the UK’s AI Minister, noted that recent international summits show a growing emphasis on practical and immediate impacts rather than speculative future threats. He pointed to discussions at the India AI Impact Summit, where policymakers focused on everyday user experiences, transparency, and safety, contrasting earlier events that concentrated on broad economic shifts or existential concerns. This transition represents a realignment toward problem-solving that affects individuals, businesses, and institutions right now.
This more grounded approach reflects the evolving maturity of AI governance. Policymakers understand that regulating only for hypothetical risks leaves everyday harm, such as misinformation, bias, and workforce disruption, unmitigated. Governments are now developing frameworks that measure how AI affects people, not just economies. These include clearer ethical guidelines, evaluation mechanisms for transparency, and parameters that protect user rights in real-world applications. It is a step toward building sustained public confidence while maintaining innovation incentives for industry.
For executives, this shift signals where policy and market priorities are heading. It is not enough to develop powerful AI solutions; they must demonstrate real social and economic value while safeguarding user trust. Investing in fairness, accessibility, and explainability will align business strategies with emerging government expectations. Organizations that take these responsibilities seriously will position themselves as credible leaders in a landscape increasingly defined by accountability and measurable human outcomes.
Final thoughts
AI is no longer a niche technology. It’s infrastructure, woven into how economies, governments, and people operate every day. For leaders, that means the choices made now will determine not just who wins in the AI economy, but how responsibly that success is achieved.
The demand for international coordination, transparent oversight, and clear accountability is no longer theoretical. It’s a business reality. Companies that build systems around trust, safety, and human rights will shape markets and set the standards others must follow. Those that avoid alignment risk fragmentation, compliance headaches, and public backlash that slows innovation.
This moment calls for leadership with long-term vision. Collaboration across borders and beyond sectors isn’t about regulation for its own sake, it’s about creating stability in a fast-changing environment. Executives who prioritize ethical governance today will not only protect their organizations from risk but position them as credible, influential players in the next wave of global AI advancement.
AI’s impact will be defined by design, not chance. The responsibility now sits with leaders to ensure it advances both progress and humanity.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.


