Organizations worldwide are channeling more resources into AI, following the widespread appeal and transformative impact of generative AI technologies. This surge in investment highlights a broader industry trend: companies are moving beyond experimenting with AI and focusing on fully integrating it into their operational and strategic frameworks to drive efficiencies, improve customer experiences, and build and deploy new products.

Responsible AI frameworks ensure that AI applications respect ethical standards and societal norms, preventing harm to individuals and communities. Responsible AI mitigates risks and actively contributes to creating business value through ethical practice. Terms such as Ethical AI and Trustworthy AI are quickly becoming part of the corporate vocabulary, reflecting the need to build AI systems that stakeholders can trust.

Regulatory and educational drivers for responsible AI

Regulatory influence on AI practices

Regulatory frameworks like the EU’s AI Act are primary drivers of Responsible AI practices. Such regulations compel organizations to take a proactive, rather than reactive, approach to compliance.

Beena Ammanath, who leads technology trust ethics at Deloitte, asserts that businesses need to anticipate upcoming regulations and prepare accordingly. Anticipating regulation both maintains compliance and positions companies to respond swiftly to regulatory changes without disrupting their operations.

Initiatives to increase AI fluency

To keep pace with the rapid advancement of AI, companies are increasingly investing in training programs for their board members, C-suite executives, and employees. These programs focus on improving AI fluency across the organization, teaching participants how to use AI technologies effectively while managing the associated risks.

The overarching goal is to foster a workforce that is adept at leveraging AI for operational benefit while staying vigilant about the ethical risks AI poses. Such educational initiatives are key for organizations aiming to integrate AI seamlessly and responsibly into their business models.

Current status of AI and responsible AI adoption

Recent surveys conducted by Boston Consulting Group (BCG) and MIT indicate that the adoption of Responsible AI practices varies greatly among companies. The data show that only 20% of companies currently run responsible AI programs, another 30% do not engage in responsible AI practices at all, and the remaining half exhibit varying degrees of implementation. This points to a considerable gap in the adoption of ethical AI practices, spotlighting an area ripe for strategic development.

Importance of responsible AI maturity

Findings from the same research suggest that the maturity of a company’s AI technology does not predict the maturity of its responsible AI practices: advanced AI capabilities do not inherently bring mature governance with them.

Companies benefit from prioritizing responsible AI development early in their AI adoption journey. Early implementation of responsible AI frameworks can prevent systemic failures and optimize the value derived from AI investments. Organizations must be proactive to better manage risks and leverage AI innovations responsibly.

Practical approaches and leadership in responsible AI

Evolving policies and leadership roles

Organizational policies and risk frameworks for AI need to be dynamic, adapting over time to address new challenges and opportunities. 

Steven Mills, a leader at BCG, exemplified this approach by updating BCG’s AI policies several times during the first year following the surge in generative AI applications. The adjustments addressed unforeseen use cases and risks, underscoring the need for policies that evolve as quickly as the technologies they govern.

Organizational support for responsible AI

Designating a chief ethics officer, or an equivalent authority such as the CIO or CTO in smaller firms, is key to spearheading responsible AI efforts. For effective implementation, these leaders require adequate financial and human resources, the authority to enact policies, and the ability to mobilize organizational support so that responsible AI practices are both formulated and implemented across all levels of the organization.

Advantages in highly regulated industries

Companies operating in highly regulated sectors, such as finance and healthcare, often have a head start in responsible AI practices due to their existing risk management frameworks. 

These industries are accustomed to strict compliance requirements, making them better prepared to absorb new regulations such as those governing responsible AI. Their experience in risk assessment and mitigation provides a solid foundation for extending those practices into AI governance, positioning them to manage AI-related risks from the outset.

Journey and benefits of a mature responsible AI program

The process of building a mature responsible AI program typically spans two to three years. This allows for a thorough integration of ethical guidelines, risk management strategies, and compliance mechanisms across all AI applications within an organization. 

With that said, companies can start to see benefits well before reaching full maturity. By prioritizing the review of AI use cases early in the program’s development, organizations can quickly identify and mitigate potential risks, safeguarding against ethical breaches and regulatory non-compliance. These early interventions protect the organization and solidify its reputation as a leader in ethical AI usage.

Business impact

Recent research from BCG and MIT highlights the tangible business benefits of implementing responsible AI practices:

  • 50% of leaders in responsible AI report that their companies have developed better products and services as a result of their ethical AI strategies. 
  • Nearly 50% of this group have seen improved brand recognition, which can be attributed to public and consumer trust in their commitment to ethical practices. 
  • 43% of these companies experience accelerated innovation, showing that responsible AI frameworks can coexist with and even stimulate technological advancements. 

General trends

More companies today recognize the potential risks associated with AI, such as ethical breaches, societal harm, and regulatory penalties. Anticipation of stricter regulation also motivates companies to address these challenges proactively. As a result, organizations increasingly view responsible AI not only as a regulatory requirement but as a strategic driver of business value.

Tim Boesen

May 3, 2024
