Public trust in AI is limited by concerns over safety and misinformation
People don’t trust AI. A global survey from KPMG and the University of Melbourne bears this out: of the more than 48,000 individuals polled across 47 countries, 54% say they are wary of AI. That’s a significant trust gap. And interestingly, it’s worse in advanced economies: only 39% of respondents in those countries say they trust AI, compared to 57% in emerging markets.
Yet 72% still see AI as useful. That tells us something important: people want to use AI, but they’re not convinced it’s safe or predictable. And the issue isn’t just abstract fear. There’s a specific pain point: misinformation. 88% of respondents want laws in place to stop AI-generated misinformation.
This is a credibility issue. If you’re an executive thinking about where AI fits into your business, you can’t afford to treat trust as optional. It’s fundamental. You’re integrating a system that is under global scrutiny over whether it tells the truth or makes things up. So building trust isn’t a soft metric; it’s a business risk and an opportunity. Address it early. Lead transparently. Explain not just what your AI does, but how it does it. And be upfront about limitations.
Regulation needs to be part of this change. Most respondents, 70%, believe AI should be regulated, and they’re not satisfied with current legislation. They want government and industry working together. That means private companies need to stop waiting for rules to be handed down and start laying the groundwork for safer use cases now. If citizens and customers are asking for guardrails, staying ahead of the curve builds trust and market advantage.
Lack of adequate training and AI literacy fuels skepticism
We’ve got a knowledge gap feeding the trust gap. Only 39% of people globally say they’ve received any form of AI training, whether at school, at work, or on their own. And nearly half admit they don’t really understand AI. That’s a problem not just for users, but for businesses trying to scale adoption. When your team doesn’t understand what the tool does, they won’t use it effectively, or won’t trust it at all.
Now here’s the good part: when people do get trained, the results show up. Among respondents who’ve been trained, 76% report efficiency gains versus just 56% of those without training. The same group also reports higher revenue gains: 55% compared to 34%. That’s direct value from basic understanding. These aren’t complex training courses either, just foundational education on how the systems work and what they’re useful for.
If you’re running a business, this is worth your attention. The ROI on AI training isn’t abstract; it’s measurable. Start by making sure your leadership team understands the fundamentals. Then cascade knowledge down through the organization. Focus training on practical outcomes, where people actually use AI day to day. When employees know what AI can and can’t do, they’ll use it more effectively. They’ll trust it more. And your organization will outperform the ones still guessing.
One more thing worth noting: managers and decision-makers within companies tend to benefit more from AI than other roles. That makes sense: they’re the ones integrating AI with business strategy and metrics. But it also means training shouldn’t just sit in operations or tech. Every department, from HR to finance to legal, needs the tools to evaluate and implement AI responsibly.
In short, AI doesn’t create value by itself. People trained to understand it do. So if you’re aiming for transformation and scale, start with training. That’s where adoption accelerates.
Generative AI tools are widely used despite governance and misuse challenges
Adoption of generative AI is already mainstream. According to the KPMG and University of Melbourne study, 58% of employees around the world are using AI tools regularly on the job. In education, that number jumps to 83% among students. The reason is simple: people see improvements in personal efficiency and reduced stress. But here’s where things get complicated: performance doesn’t equal control.
Many organizations are integrating AI faster than they’re setting up policies to govern its use. Over half of users say they’ve improved output, but a significant number also report increases in workload, breakdowns in teamwork, and compliance issues. It’s a situation where usage is outpacing understanding, and adoption is outpacing accountability.
In schools, the picture is the same. Students are leveraging AI to keep up, but only half say their institutions offer proper guidance or training on responsible usage. That imbalance of high usage and low governance is the breeding ground for misuse. You end up with a generation using AI not because they’re prepared, but because they feel they have no better option.
For leaders overseeing digital transformation, this matters. You can’t scale AI safely or profitably without putting structures in place to guide it. That means clear user policies, internal oversight, and accessible training. Governance cannot be outsourced; it needs to be integrated from day one and reviewed continuously.
If you ignore governance while pushing use cases, risks will compound. Workflows will become less predictable, and trust inside and outside your organization will erode. The gains that AI creates won’t be consistent. On the other hand, if you embed policy and training early, employees get smarter, compliance improves, and your AI investments start compounding the right way.
Rising AI hallucinations undermine reliability and create a trust paradox
Generative AI still has a major flaw: it makes things up. These false outputs, called hallucinations, are frequent and getting worse in newer AI models. In testing from OpenAI, the o3 reasoning model hallucinated 33% of the time when answering questions about public figures. On simple fact-based questions, the rate jumped to 51%. The smaller and faster o4-mini model performed even worse, hallucinating 79% of the time on the same basic tests.
That’s a serious reliability issue, especially for businesses deploying these models in critical workflows. And here’s the paradox: the more complex and advanced the AI system gets, the more likely it is to hallucinate. Jason Hardy, CTO at Hitachi Vantara, called this “The AI Paradox.” He points out that complexity often degrades reliability instead of improving it. One of the main reasons? Data.
These models don’t understand truth; they generate predictions based on patterns in their training data. And with high-quality, original training data running low, models are increasingly trained on newer sources that often lack structure or accuracy. Treating all training data as equally reliable degrades the model’s output. Small errors multiply through multi-step reasoning. The result is a system that speaks with confidence but can’t be trusted without verification.
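To make that compounding concrete, here’s a minimal back-of-the-envelope sketch in Python. The 95% per-step accuracy and the step counts are illustrative assumptions, not figures from the research cited above, and real models don’t make independent errors at each step; the shape of the math is the point.

```python
# Illustrative only: assumes each reasoning step has an independent
# 95% chance of being correct, a simplification of real model behavior.
def end_to_end_reliability(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a reasoning chain is correct."""
    return per_step_accuracy ** steps

for steps in (1, 3, 6, 10):
    reliability = end_to_end_reliability(0.95, steps)
    print(f"{steps:>2} steps at 95% per-step accuracy -> "
          f"{reliability:.0%} chance the final answer is right")
```

Even at 95% per-step accuracy, a ten-step chain comes out fully correct only about 60% of the time, which is why confident-sounding multi-step output still needs verification.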
From a business standpoint, this is a red flag. Deploying AI without rigorous validation is a mistake, especially in industries with compliance or safety risks. Brandon Purcell, VP and Principal Analyst at Forrester Research, said large language models (LLMs) should not be used for fact-based information unless they’re grounded in up-to-date, source-verified data. Without that grounding, hallucinations are not outliers; they’re guaranteed.
Executives need to think beyond deployment. AI systems must be monitored in real time. Testing, both automated and human-led, should be designed into the process before models go live. This includes red teaming, where you actively try to break the system to expose weak points. Think of it as pressure-testing your AI to protect reputation and performance.
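As a starting point, the automated side of that testing can be as simple as replaying a curated set of fact-based prompts against the model before every release and flagging answers that drift from verified references. The sketch below is a hypothetical outline, not a prescribed framework: `query_model` is a stand-in for whatever API your deployment actually uses, and keyword matching is a deliberately crude proxy for proper grounding checks.

```python
# Minimal sketch of a pre-deployment factuality regression check.
# query_model() is a hypothetical stand-in for your model-serving API.
from dataclasses import dataclass

@dataclass
class FactCheckCase:
    prompt: str
    required_facts: list[str]  # terms a correct, grounded answer must contain

def query_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model endpoint")

def run_checks(cases: list[FactCheckCase]) -> float:
    """Return the pass rate and flag every answer that misses verified facts."""
    passed = 0
    for case in cases:
        answer = query_model(case.prompt).lower()
        if all(fact.lower() in answer for fact in case.required_facts):
            passed += 1
        else:
            print(f"FLAG: {case.prompt!r} -> answer missing verified facts")
    return passed / len(cases)

# Example gate: block the release if the pass rate drops below an agreed threshold.
# cases = [FactCheckCase("Who audits our financial statements?", ["<verified auditor name>"])]
# assert run_checks(cases) >= 0.95, "Factuality regression detected - do not ship"
```

Human-led red teaming still sits on top of a check like this: adversarial prompts, edge cases, and attempts to elicit confident fabrications that an automated keyword comparison would never catch.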
This is the moment to mature the AI playbook. Companies that get serious about reliability will lead. Those that ignore hallucinations because the tools seem functional today are building on unstable ground.
Small language models (SLMs) promise enhanced efficiency and accuracy compared to LLMs
Large language models, or LLMs, have opened the door to scalable AI across industries. But the momentum is shifting. Small language models (SLMs) are starting to take over where precision and control matter most. They’re faster, cheaper to run, and easier to tailor to specific use cases. That’s exactly what most businesses need right now: practical AI that delivers reliable results without overloading infrastructure or introducing unnecessary risk.
The data backs this shift. A Forrester report predicts that adoption of SLMs will grow 60% in 2025. In a Harris Poll commissioned by Hyperscience, 75% of IT decision-makers said they believe SLMs outperform LLMs in speed, cost, accuracy, and return on investment. These aren’t casual preferences; they’re grounded in real pain points encountered when deploying larger models at scale.
You also see the pressure at the infrastructure level. In a Capital One study of more than 4,000 technical and business leaders, 87% said their data ecosystem is AI-ready. Yet 70% admitted they still spend hours every day fixing data quality issues. The bottleneck isn’t just technology; it’s the operational load of maintaining reliable inputs and outputs. That’s where SLMs have the edge. They’re more adaptable and less resource-intensive to deploy at scale.
Andrew Joiner, CEO of Hyperscience, put it simply: the real opportunity for AI isn’t generic automation; it’s smart, purpose-built workflows. He sees tailored SLMs as key to fixing inefficiencies in areas like document processing and administrative automation. These aren’t optional tweaks; they’re strategic upgrades that reclaim time and improve reliability at scale.
At the same time, SLMs allow for stronger governance. They’re easier to monitor, easier to interpret, and more aligned with responsible AI objectives. Brandon Purcell from Forrester has emphasized the need for high-stakes AI systems to be tested thoroughly before deployment, especially in sectors like healthcare or financial services. He recommends simulation-based validation, similar to the testing rigor used in high-compliance industries.
For executives moving aggressively on AI, the message is clear: focus matters. The more narrowly you define the objective, the more value you can extract safely. SLMs aren’t a scaled-down compromise; they’re a reshaped answer to real business needs, with lower risk, lower cost, and higher control.
Main highlights
- Low public trust is a risk multiplier: With 54% of global respondents wary of AI and 88% calling for laws to stop AI-driven misinformation, leaders should focus on transparency and ethics to close the trust gap early and build long-term brand credibility.
- AI literacy drives measurable ROI: Only 39% of users have received AI training, yet trained respondents report efficiency gains at 76% vs. 56% for the untrained, along with a higher revenue impact (55% vs. 34%). Executives should invest in role-specific AI training to unlock productivity and improve adoption.
- Rapid AI adoption demands stronger governance: 58% of employees and 83% of students use generative AI, but weak oversight and limited training are leading to misuse and compliance risks. Leaders must implement clear governance frameworks and training to prevent liability and protect gains.
- Hallucinations undermine AI reliability: Newer AI models show rising hallucination rates, up to 79% in some tests, damaging user confidence. Decision-makers should demand rigorous pre-deployment testing, enforce data quality standards, and monitor AI-driven outputs continuously.
- Small language models offer strategic advantage: SLMs are outperforming LLMs in speed, ROI, and accuracy, according to 75% of IT decision-makers. Enterprises should evaluate SLMs as a scalable, lower-risk solution for targeted use cases where precision and oversight are critical.