A majority of medium and large businesses are stalling AI projects due to deep-seated trust issues
AI adoption is slowing across large and mid-sized companies. Gong’s research found that 58% of these organizations have stalled their AI initiatives. The reason isn’t a lack of enthusiasm; it’s trust. Senior decision-makers are worried about how AI systems handle sensitive data, make decisions, and explain outcomes. When a system can’t clearly justify its results or demonstrate solid data security, confidence collapses. This hesitation is delaying progress, even in companies that already see AI’s potential to transform operations and accelerate growth.
This trust issue is more than a technology challenge; it’s a business risk. When leaders slow implementation, they lose early-mover advantage and the compounded learning that comes from early experimentation. AI can only increase organizational speed and precision if teams believe they understand and can control it. For vendors and internal teams building AI solutions, this means transparency isn’t optional; it’s strategic. The ability to explain what AI does and why it acts a certain way builds confidence and turns hesitation into investment.
Executives should see this trust gap as an opportunity to lead differently. The winners won’t be those who adopt AI the fastest, but those who make it the most reliable. Establishing verifiable data management and model oversight helps turn fear into clarity. It creates the confidence necessary for scale while reducing reputational and operational risk.
According to Gong’s survey of 2,056 business leaders in the UK and US, 46% of planned AI investments are paused (47% in the UK, 44% in the US), and US firms are more likely to report stalled progress (63%) than those in the UK (52%). The data shows that skepticism toward AI crosses industries and borders, signaling a profound shift in how businesses choose to advance with this technology.
Trust-related challenges now outweigh regulatory uncertainty as the leading barrier to AI adoption
The biggest obstacle to AI implementation isn’t government regulation; it’s self-imposed caution. Gong’s study reveals that leaders are placing more weight on operational integrity than on external policy risks. Data privacy and security top the list of concerns at 34%, followed by explainability at 30% and model transparency at 28%. Regulatory uncertainty, previously the dominant issue, comes fourth at 27%. This signals that companies now fear internal failures (data breaches, unexplained outputs, unreliable automation) more than they fear shifting regulatory environments.
The shift is logical. As AI becomes central to business operations, trust in the system is becoming just as important as trust in the people running it. Leaders want precise answers to questions like: How is this model trained? What data feeds it? How can I verify its logic? These aren’t compliance checkboxes; they’re questions fundamental to business viability and customer confidence. Without these answers, the risks of deploying AI at scale outweigh the potential rewards.
For executives, the takeaway is clear: building internal AI governance must be a top strategic priority. That means a framework that tracks every data input, explains every algorithmic decision, and ensures compliance with both local and international standards. This creates measurable transparency and removes the ambiguity that holds back investment. It also signals to stakeholders that your organization treats trust as integral to innovation, not a secondary consideration.
Among the barriers surveyed, 34% of respondents cited data privacy and security as the main challenge, 30% pointed to explainability, 28% to model transparency, and only 27% to regulatory uncertainty. This ranking shows that trust has overtaken compliance as the key determinant for whether AI projects proceed or pause.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Perceived lack of value from current AI investments adds to the reluctance to deploy AI at scale
Many organizations are slowing their pace on AI, not because they doubt its potential, but because they haven’t yet seen tangible results. Gong’s data shows that three-quarters of surveyed companies (70% in the UK, 80% in the US) believe they aren’t gaining enough value from their current AI investments. This perception gap is critical. Companies know AI can increase speed, efficiency, and precision, yet what they’re experiencing often falls short of expectations.
The issue stems from early-stage adoption patterns. Many firms deployed AI pilots without the clear metrics, governance, and operational integration needed to measure returns effectively. The result: systems that operate in isolation, generate incomplete insights, or produce inconsistent outcomes. When that happens, confidence in AI’s scalability weakens, and leaders hesitate to expand projects beyond trials.
Executives need to treat AI value creation as an active process. Getting measurable outcomes requires long-term investment in data quality, model monitoring, and performance benchmarking. The organizations now slowing their AI adoption should assess where their value chain breaks down, whether in data inputs, management oversight, or alignment between models and real-world conditions. A deliberate approach that ties AI performance directly to business outcomes will accelerate results and help justify continued investment.
AI delivers returns when it’s transparent and measured. A clear governance system backed by defined KPIs helps executives see where AI is truly driving performance and where it needs refinement. Gong’s finding that 75% of leaders are unsatisfied with their returns is a wake-up call for companies treating AI as an experiment rather than a strategic asset. The next phase of AI maturity will depend on visibility, seeing what’s working, what’s not, and adjusting direction quickly.
Increased buyer demand for transparency and proof
Buyers no longer accept vague assurances about how AI works. They’re asking hard questions: how is data protected, what guardrails exist, and who validates the claims? Gong Labs analyzed over 25 million sales interactions and found that one in four referenced AI security concerns. Buyers emphasized the need for explainability, transparency in training data, and verified assurances about how models generate decisions and safeguard information. Vendors that can’t deliver detailed, evidence-backed answers are finding it harder to close deals.
The numbers are conclusive. Twenty-six percent of buyers said explainability is the top factor in building trust. Another 25% cited the importance of concrete data protection safeguards. Built-in security guarantees and third-party certifications each came in at 23%, while 22% emphasized the value of transparency into training data and model logic. This demand for proof reflects a more mature market, one that prizes reliability over speed to market.
Executives leading procurement or product evaluation should recognize this shift as a sign of growing sophistication in buyer behavior. Decision-makers are not rejecting AI; they’re raising standards. This means every vendor interaction is now a test of credibility. Companies that demonstrate strong validation practices (audits, certifications, real-world transparency) will stand out. Those that overpromise and underexplain will lose ground quickly.
For vendors and business leaders alike, success depends on clarity. Presenting measurable safeguards and external validation builds trust faster than marketing claims. It shows customers that AI reliability isn’t just a goal; it’s engineered into the product. As buyers become more sophisticated, the companies that prioritize transparency and trustworthy design will shape the next stage of AI market growth.
The conversation around AI trust has evolved
AI trust is no longer a conversation limited to risk or compliance teams; it’s now a major business strategy issue. Companies are starting to recognize that earning trust in AI systems directly influences revenue, growth, and competitive success. Chris Peake, Chief Trust Officer at Gong, captures this evolution clearly: “Security and AI trust are now revenue conversations.” His point highlights that trust isn’t just about checking regulatory boxes; it’s about securing the confidence required for customers and partners to act.
In this context, trust becomes a measurable competitive factor. Firms with a well-documented and transparent AI governance framework gain an advantage because they move faster through deployment and reach scale with reduced friction. Gong’s own findings demonstrate this connection between transparency and growth. When governance is embedded in AI products, such as through stronger model oversight, tighter data controls, or independently verified safeguards, companies cut uncertainty and accelerate operational adoption. Trust, therefore, becomes an operational asset with commercial outcomes.
For executives, this shift demands direct attention. AI governance and trust-building can no longer be delegated down the organizational chart. They belong at the executive level, alongside metrics for revenue, growth, and innovation. Implementing clear governance structures, integrating independent audits, and ensuring consistent system explainability aren’t just compliance assurances; they enable faster and safer deployment. Decision-makers who invest in stronger trust systems will find it easier to win contracts, retain clients, and demonstrate leadership to stakeholders.
The slowdown in AI project deployment signals market maturity
The hesitation surrounding AI adoption isn’t a rejection; it’s a signal of maturity. Organizations have moved beyond early experimentation and are now focusing on governance and accountability. The latest data from Gong and Censuswide shows that companies in both the UK and the US share the same mindset: adoption will stall until trust is proven. Executives are now asking for concrete evidence of security integrity, auditability, and traceability before committing significant budget to AI tools or platforms.
This shift is healthy for the long term. It shows that the market is becoming more disciplined, with buyers requiring higher standards before full rollout. Rather than jumping on every new AI capability, leaders now demand consistent governance frameworks: structures that define how models are built, trained, and tested. Vendors that meet these expectations will strengthen their market position; those that don’t risk losing relevance.
For executives, this is a time to act, not to retreat. If trust is missing from the vendor ecosystem, internal governance investment can fill that gap. Establishing internal standards around data use, model transparency, and system reliability allows companies to keep momentum while the market catches up. The key is operational clarity: knowing precisely what’s happening inside the AI system and being able to prove that it aligns with ethical, regulatory, and business requirements.
The research conducted by Censuswide among business leaders in the UK and US, alongside Gong Labs’ analysis of a year’s worth of anonymized sales data, confirms a unified conclusion: governance, not enthusiasm, now determines AI progress. Companies that move trust from a technical concern to a leadership priority will set the pace for the next phase of AI-driven business transformation.
Key takeaways for leaders
- AI progress stalls without trust: Over half of medium and large companies have paused AI initiatives due to trust concerns. Executives should integrate transparency and explainability into AI systems to rebuild momentum.
- Operational trust outranks regulation: Data security, privacy, and clear model logic now outweigh regulatory issues as adoption barriers. Leaders should strengthen internal governance frameworks to ensure confidence and compliance.
- Value perception drives investment decisions: Seventy-five percent of firms feel their AI investments deliver limited value. Executives should set measurable ROI benchmarks and tie AI performance directly to business outcomes.
- Buyer expectations are rising: Buyers increasingly demand explainability, security assurances, and third‑party validation before purchase. Vendors must demonstrate clear safeguards and verified governance to maintain credibility.
- Trust fuels commercial advantage: AI trust has become central to revenue and growth, not just compliance. Leaders should treat governance and transparency as business enablers that accelerate adoption and customer confidence.
- Governance determines AI momentum: The slowdown in deployment reflects a call for reliable frameworks rather than waning interest. Executives should establish and enforce consistent policies around auditability, data use, and model oversight to sustain progress.