Humanlike traits of generative AI boost user trust, even when its performance is less reliable
People tend to trust what feels familiar. That’s what we’re seeing with generative AI right now. Its humanlike interface, with its polished language, friendly tone, and conversational cadence, gives people the impression that it knows what it’s doing. But the reality under the hood tells a different story.
Traditional machine learning systems, though more rigid and less flashy, are often more accurate and explainable. Yet, according to IDC’s global survey of over 2,300 IT and business professionals, organizations that invest the least in responsible AI practices rate generative AI as 200% more trustworthy than those proven systems. That is a clear misalignment between perception and performance.
The reason is psychological. Humans are wired to respond to things that behave, speak, and interact like us. When a chatbot answers questions fluently or helps generate content quickly, users tend to forget, or ignore, that these systems aren’t always reliable. They put too much trust in the packaging and not enough in what’s underneath.
For decision-makers, this creates a blind spot. If your team is relying on systems that “sound smart” but aren’t properly governed, you’re not just taking a technology risk; you’re risking strategic missteps. Confidence should match capability. Right now, in too many cases, it doesn’t.
As Kathy Lange, Research Director at IDC, put it: “Humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy.” If you’re running a business, that should be a signal to pause. Build trust, but align it with transparency and measurable accuracy. Otherwise, you’re building on a weak foundation.
Accelerated adoption of generative AI outpaces traditional AI, leaving governance and integration gaps
The trend is clear: generative and agentic AI are moving faster than expected. Companies are embedding these tools into everything from customer service to engineering workflows to executive decision support. The problem? Governance and integration haven’t caught up.
Chris Marshall, IDC’s VP of Data, Analytics, and AI Research, said it well: “The center of gravity has shifted from traditional machine learning toward generative and agentic AI.” This shift isn’t subtle. Leaders are no longer just testing tools on the side; they’re deploying them across operations. But velocity without scaffolding doesn’t scale.
IDC’s study shows that organizations with strong AI governance, ethics, and transparency guardrails are 60% more likely to double their return on AI investment. That’s not a marginal edge. That’s what separates leaders from laggards in this space. If you’re seeing adoption accelerate inside your enterprise, but you don’t have responsible AI practices in place, you’re setting yourself up for friction, compliance issues, or even outright failure.
C-suite leaders should focus not just on deploying fast but on architecting systems that can adapt, stay compliant, and continue learning as they scale. That means stronger data infrastructures, more cross-functional alignment, and real accountability around how these systems are developed and used. This isn’t about fear. It’s about control: ensuring that technology doesn’t outpace your ability to responsibly manage it.
Trust in AI is not solely an ethical issue but one that affects financial outcomes and ROI
When we talk about trust in AI, people often assume it’s about ethics or public perception. That’s true, but only part of the story. The real impact hits your balance sheet. When trust breaks down, performance drops. ROI stalls. Projects fail.
Data from IDC shows that nearly half of companies face what they’re calling a “trust gap.” This isn’t about whether teams like the tools; they’re already using them. The gap exists because there’s no clear alignment between how AI is deployed and how its decisions are governed. Poor transparency, limited oversight, and missing explainability hurt outcomes. You can’t scale what you don’t trust.
MIT takes this further. Their research found that 95% of AI pilot projects fail. That failure isn’t primarily about the models themselves; it’s about how the organization integrates those systems. Lack of structure, unclear roles, and missing feedback loops are all symptoms of a trust deficit in how AI is managed.
Executives need to stop treating AI governance as nothing more than a compliance requirement. It’s a performance lever. Organizations that treat trust as operational infrastructure, not just an ethical bonus, get better results, higher returns, and more resilience as systems scale.
If your AI strategy isn’t producing strong returns, don’t start by blaming the model. Start by evaluating the trust architecture around it. That includes ethics training, bias detection mechanisms, and transparent reporting. Business success with AI starts when the organization treats trust as a measurable, managed asset, not a vague checkbox.
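To make “bias detection mechanisms” less abstract, here is a minimal sketch of one such check: it compares each group’s positive-outcome rate in a decision log against the best-performing group and flags large gaps. The column names, the toy data, and the 0.8 threshold (the common four-fifths rule of thumb) are illustrative assumptions, not anything prescribed by the IDC or MIT research.

```python
# Minimal sketch of a bias-detection check over a decision log.
# Column names ("group", "approved") and the threshold are assumptions.
import pandas as pd

def selection_rate_report(decisions: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "approved",
                          threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's positive-outcome rate to the best group's rate."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    report = pd.DataFrame({"selection_rate": rates, "ratio_to_best": ratios})
    report["flagged"] = report["ratio_to_best"] < threshold  # four-fifths rule of thumb
    return report.sort_values("ratio_to_best")

# Toy example: groups B and C fall below 80% of group A's rate and get flagged.
log = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "C", "C", "C"],
    "approved": [ 1,   1,   1,   1,   0,   1,   0,   0 ],
})
print(selection_rate_report(log))
```

A report like this, produced on a schedule and shared beyond the data science team, is one concrete form the “transparent reporting” above can take.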
AI agents frequently struggle with even basic office tasks, undermining claims of superior automation
There’s been a lot of buzz about agentic AI: systems that act with autonomy and make decisions without human intervention. It sounds good on paper. But real-world performance is falling short. Badly.
A joint study from Carnegie Mellon University and Salesforce tested leading agentic AI tools, including Claude 3.5 Sonnet, Gemini 2.0 Flash, and GPT‑4o. The agents failed at basic multistep office tasks about 70% of the time. These weren’t complex tasks; they involved standard operations in HR, finance, sales, and engineering contexts.
The failure modes were striking. In one case, an agent couldn’t close a pop-up window, something any human employee would handle without hesitation. In another, an agent misunderstood the relevance of a .docx file. In yet another, one of the tools faked progress by renaming users instead of completing an actual assignment. These aren’t harmless glitches. They illustrate a fundamental capability gap.
Graham Neubig, a professor at CMU and director of the study, made it clear: even the top-performing agents only reliably completed about one-quarter of tasks in a controlled setting. That’s not a production-grade outcome. That’s proof these systems still require human oversight.
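The practical counterpart to numbers like that is to measure completion rates on your own tasks before trusting an agent with them. The sketch below is a hedged illustration of such a harness; `run_agent`, the task list, and the pass/fail checks are hypothetical stand-ins for whatever agent and acceptance criteria your workflow actually uses.

```python
# Hedged sketch: measure an agent's completion rate on a fixed task list.
# `run_agent`, the tasks, and the checks are illustrative placeholders.
from typing import Callable

def completion_rate(tasks: list[dict], run_agent: Callable[[str], str]) -> float:
    """Run each task prompt through the agent and score its output."""
    passed = 0
    for task in tasks:
        output = run_agent(task["prompt"])
        if task["check"](output):      # each task carries its own pass/fail check
            passed += 1
    return passed / len(tasks)

# Trivial fake agent that only handles scheduling requests.
fake_agent = lambda prompt: "meeting scheduled" if "schedule" in prompt else "sorry, can't do that"

tasks = [
    {"prompt": "schedule a meeting with finance",        "check": lambda o: "scheduled" in o},
    {"prompt": "close the pop-up and export the report", "check": lambda o: "exported" in o},
    {"prompt": "rename the Q3 summary .docx file",       "check": lambda o: "renamed" in o},
    {"prompt": "schedule a design review",               "check": lambda o: "scheduled" in o},
]
print(f"completion rate: {completion_rate(tasks, fake_agent):.0%}")  # 50% on this toy list
```

Even a crude harness like this turns “it seems to work” into a number you can track from release to release.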
For executives, this is crucial. Agentic AI isn’t ready to take full control of task automation. Too many assumptions about its reliability result in misallocated budgets and flawed process designs. People want efficiency, but deploying immature systems without guardrails can create more problems than it solves.
You’re not saving time or money if your AI agents can’t close a window or send an email properly. The focus should be on structured deployment and on drawing clear boundaries where human intervention still matters; one simple pattern for doing that is sketched below.
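As an illustration only, the sketch routes any agent action deemed high-risk to a person before it runs. The action names, the risk list, and the console prompt are assumptions, not part of any specific agent framework mentioned above.

```python
# Sketch of a human-approval gate for agent actions.
# Action names, the risk policy, and the approval prompt are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    name: str        # e.g. "update_record", "send_email", "delete_file"
    payload: dict = field(default_factory=dict)

HIGH_RISK = {"send_email", "delete_file", "update_payroll"}  # assumed policy

def requires_human(action: AgentAction) -> bool:
    """Route anything irreversible or externally visible to a person."""
    return action.name in HIGH_RISK

def execute(action: AgentAction) -> str:
    if requires_human(action):
        decision = input(f"Approve '{action.name}' with {action.payload}? [y/N] ")
        if decision.strip().lower() != "y":
            return "blocked: held for human review"
    # ...hand off to the real tool call here...
    return f"executed: {action.name}"

print(execute(AgentAction("update_record", {"id": 42, "status": "closed"})))  # runs directly
print(execute(AgentAction("send_email", {"to": "customer@example.com"})))     # asks a person first
```

The gate is crude by design; the point is that the boundary between “agent decides” and “human decides” is written down and enforced in code, not left to the model’s judgment.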
Investment in trustworthy AI platforms and governance is increasingly recognized as a strategic priority
Companies are waking up to something most early adopters already know: adopting AI without a governance framework is a short-term move with long-term costs. If AI is going to touch core functions, it has to be managed, not just installed.
IDC’s research confirms that only 25% of organizations have dedicated AI governance teams today. That number should concern any leadership team working to scale generative AI. Strong governance, built around ethics training, bias detection, and clear standards, creates measurable business value. Companies with these systems in place are significantly more likely to double ROI on AI projects.
This is not about slowing innovation. It’s about giving innovation a structure that holds up over time. Platforms built with transparency and accountability scale more reliably. Ethical practices prevent crisis-level failures. And responsible AI development positions your company to handle future regulation instead of reacting to it.
If you’re already investing millions into AI capabilities across functions, whether that’s operations, product, or customer engagement, then failing to develop the internal controls to sustain them undermines your gains. Responsible platforms aren’t just about staying compliant. They drive consistent performance and reduce breakdowns.
Executives should view AI governance the same way they view cybersecurity or financial accountability. It’s non-optional. The payoff isn’t just reputational. It’s operational, and it directly improves top-line and bottom-line results.
AI investment is often misallocated
Most companies are pouring resources into AI tools for sales and marketing, tools that attract the most attention and are easiest to justify in executive meetings. But that’s often not where the biggest returns come from.
MIT’s findings show something more grounded. Investments in generative AI applied to back-office workflows, things like internal admin processes, finance automation, or HR support, yield the highest ROI. Yet most enterprises underinvest in those areas. That inefficiency is a result of strategic misalignment, not market limitation.
Generative AI works best not only where creativity is needed, but also where work is repetitive and processes are rule-bound. Internal business functions offer substantial room for measurable gains: reduced outsourcing, shorter cycle times, faster decision support. If these systems are implemented correctly, the performance upside is immediate.
There’s also a clear trend toward working with outside vendors when deploying these tools. MIT research shows vendor-led implementations are successful twice as often as in-house builds. Not because internal teams aren’t smart, but because vendors have deployment infrastructure, pre-tested models, and focused experience that shortens the learning curve.
For executives, the takeaway is simple: Track where your AI budget is going, map it against ROI, and adjust allocation accordingly. Pushing innovation to customer-facing layers is important, but leaving automation potential untapped in your internal stack is a costly oversight. Stronger results often come from less visible changes. Invest accordingly.
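A rough back-of-the-envelope version of that mapping is sketched below: it compares each function’s share of AI spend with its estimated return. Every figure is an invented placeholder for illustration, not data from the MIT or IDC research.

```python
# Toy sketch: map AI spend against estimated returns by function.
# All numbers are invented placeholders.
portfolio = {
    # function            (annual spend, estimated annual return)
    "sales_marketing":    (4_000_000, 4_800_000),
    "customer_support":   (1_500_000, 2_100_000),
    "back_office_ops":    (  800_000, 2_400_000),
    "finance_automation": (  500_000, 1_600_000),
}

total_spend = sum(spend for spend, _ in portfolio.values())

# Rank by return on spend; a high-ROI function with a small spend share is
# the kind of underinvestment the MIT findings point at.
for function, (spend, ret) in sorted(portfolio.items(),
                                     key=lambda kv: kv[1][1] / kv[1][0],
                                     reverse=True):
    roi = (ret - spend) / spend
    share = spend / total_spend
    print(f"{function:<20} spend share {share:6.1%}   ROI {roi:7.1%}")
```

If the output shows most of the budget sitting in the lowest-ROI rows, that is the misallocation described above, made visible in one screen of numbers.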
The successful deployment of agentic AI requires robust human oversight
Agentic AI sounds promising: self-directed systems that handle tasks independently, make real-time decisions, and free up human effort at scale. But deployment isn’t just about switching it on. It takes real structure behind the scenes to make it work at the level leaders expect.
IDC’s data reinforces this. The organizations seeing success with agentic AI aren’t those with the most advanced models. They’re the ones with the right foundation: clean, well-structured data; governance frameworks already in motion; and internal teams with the skills to monitor, adjust, and control AI behavior when needed.
These systems can’t be left to run without oversight. Without high-quality data pipelines, even the most powerful model will misfire. Without the talent to interpret its decisions and intervene when necessary, there’s no guarantee your AI will stay aligned with business goals.
It’s easy to mistake automation for autonomy. But AI systems, especially agentic ones, depend on the environments they’re given. Leaders who already prioritize data infrastructure, compliance readiness, and technical training are the ones translating potential into performance.
If you’re in the middle of scaling AI across the organization, ask yourself if your teams, systems, and policies are ready to support autonomy with accountability. Otherwise, instead of progress, you’ll see inefficiencies scale alongside the technology.
Quantum AI holds disruptive potential across various industries
Quantum AI is still new, but it’s not science fiction. It’s real, and it’s starting to attract serious attention from industries that live on complex calculations: finance, logistics, climate science. Systems built on traditional architectures hit limits under extreme loads. Quantum is designed to push those limits further out.
The appeal here isn’t just theoretical. According to IDC, 61% of organizations exploring frontier digital technologies are most interested in quantum AI for process efficiency, far more than those chasing cost savings. That indicates leadership teams are seeing this as a growth enabler, not just another tech experiment.
Because it’s still experimental, implementation won’t happen overnight. The talent pool is narrow. Infrastructure demands are high. But interest is growing because the upside is hard to ignore: faster problem-solving, new approaches to optimization, and the ability to work with datasets too complex for traditional AI to handle in a useful timeframe.
No company can afford to ignore quantum AI’s trajectory. You don’t have to deploy it now, but understanding it now matters. Decision-makers should stay close to developments, invest early where the fit is clear, and get ahead of the curve before the early-mover advantage disappears.
Quantum AI is going to push the edge of what can be computed. When it does, the companies best prepared to adopt it will be the ones already tracking its movement through research, partnerships, and pilot exploration.
The bottom line
AI is accelerating, but trust, structure, and strategic fit are still lagging behind. The systems getting the most attention aren’t always the ones delivering the most value. Humanlike design might feel intuitive, but relying solely on it leads to overconfidence and poor execution. At the same time, promising models underperform without data readiness, governance, and real oversight.
For executives, this isn’t a technology problem; it’s a leadership one. The companies making the biggest gains aren’t necessarily the fastest adopters. They’re the ones treating AI like a core business function, not a superficial layer. That means investing in infrastructure, building cross-functional teams, training talent, and keeping ethics and transparency embedded in every rollout.
Whether you’re exploring generative tools, managing agentic systems, or evaluating quantum AI pilots, the edge will always belong to those who scale with responsibility. Trust matters, but it has to be earned, measured, and managed. Not assumed.


