Employers’ inadequate AI training hampers productivity and ROI
AI is already reshaping how organizations operate, but too many companies are treating it as a plug‑and‑play solution. The problem isn’t the technology; it’s the people using it. Many leaders assume employees will naturally figure out AI systems on their own. That assumption is wrong, and it’s costly. When workers lack structured training on both the technical and ethical use of AI, the result is wasted potential and a slower return on investment.
As J.P. Gownder, Vice President and Principal Analyst at Forrester, pointed out, this lack of training has become a “bottleneck” that directly inhibits productivity. It isn’t that employees are unwilling to adopt the tools; they simply aren’t being given the skill set to succeed. Effective training must go beyond basic tool usage: workers need to understand how to interpret AI outputs, when to question them, and how to apply them responsibly. Without that foundation, automation turns into dependency and productivity gains vanish.
For business leaders, this is a clear call to rethink how AI integration is rolled out. It’s not a one‑time event but a continuous process. Investing in training builds resilience, trust, and efficiency across teams. The companies that get this right will see faster adaptation, less frustration, and a stronger return on their technology investments. Ignoring this means leaving real value on the table, and in a competitive market, that’s not an option.
Low proficiency in critical AI skills, such as prompt engineering, undermines tool utilization
Even as AI tools become more accessible, the ability to use them effectively remains low across the workforce. One of the clearest examples is prompt engineering: the skill of directing generative AI assistants, such as Microsoft 365 Copilot or those built into Google Workspace, to produce useful outputs. The latest data shows only a small increase in worker proficiency, rising from 22% in 2024 to 26% in 2025. That is a concerningly slow rate of progress for a skill that will soon define how work gets done.
AI systems revolve around precise input: poor prompts produce poor results. When employees don’t know how to craft effective instructions, even the most advanced tools fall short of their potential. That means more time spent correcting outputs, less time spent generating real value, and ultimately lower overall efficiency. For executives, this signals a systemic issue: not a technical gap but a developmental one.
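To make the gap concrete, here is a minimal sketch in Python contrasting a vague prompt with a structured one. The ask_assistant helper is hypothetical, not a real Copilot or Workspace API; the point lies entirely in the difference between the two instructions.

```python
# Minimal illustrative sketch, not a real integration: ask_assistant is a
# hypothetical stand-in for whatever generative AI assistant your teams use.
def ask_assistant(prompt: str) -> str:
    """Placeholder for a call to a generative AI assistant (no real API here)."""
    raise NotImplementedError("Connect this to your organization's AI tooling.")

# A vague prompt leaves audience, scope, and format to the model's guesswork.
vague_prompt = "Summarize this report."

# A structured prompt states the role, audience, constraints, and output shape,
# and tells the model to flag uncertainty instead of inventing figures.
structured_prompt = (
    "You are preparing a briefing for the executive team. "
    "Summarize the attached quarterly report in five bullet points, "
    "each under 20 words, focusing on revenue trends and open risks. "
    "If a figure is unclear in the source, flag it rather than guessing."
)

# Both calls go through the same tool; only the quality of the instruction
# changes, which is exactly where the proficiency gap shows up.
# ask_assistant(vague_prompt)       # output typically needs heavy rework
# ask_assistant(structured_prompt)  # output is far closer to usable
```

Training programs that teach employees to specify context, constraints, and expected output in this way are a practical starting point for closing the proficiency gap described above.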
This gap highlights an opportunity for leaders to act decisively. Upskilling teams in prompt engineering and critical reasoning around AI isn’t an optional step; it’s essential infrastructure for a productive digital workplace. Decision-makers should treat AI fluency as a baseline competency, much like digital literacy a decade ago. The organizations that move fastest to close this skills gap will capture real gains in performance and innovation, while others lag behind, chasing technology they don’t fully understand.
Insufficient AI literacy leads to misuse, frustration, and ethical risks
AI has immense potential, but without proper literacy, it can create confusion rather than clarity. Many employees still approach AI systems with uncertainty: they either misuse the tools or avoid them altogether, which limits efficiency and creates unnecessary frustration. The problem isn’t just a lack of comfort; it’s a lack of understanding about how to evaluate AI outputs and when to question them. Overreliance on automation without a solid knowledge base can also lead to ethical misjudgments that put corporate reputation and compliance at risk.
J.P. Gownder of Forrester has emphasized that inadequate training doesn’t just hold back productivity; it can also blur ethical boundaries. When workers rely on AI-generated results without questioning their accuracy, errors slip into decision-making processes. Over time, this erodes confidence in both the technology and the organization’s leadership.
Executives should approach this as a governance issue, not solely a technical one. AI literacy must include both skill-based learning and ethical frameworks for responsible use. It’s not enough to provide access to powerful tools; employees need to understand the reasoning behind AI outputs and their implications. A workforce that can critically assess AI-generated information will operate with greater confidence, consistency, and accountability. That capability ultimately strengthens performance and public trust, all while reducing operational risk.
Misaligned expectations between executives and employees over AI’s impact
There’s a growing gap between executive optimism about AI and the employee experience of using it. Leadership widely believes that AI is a driver of productivity, yet employees often feel that it adds to their workload instead of reducing it. This disconnect creates tension and slows adoption. According to a report by Culture Amp, 96% of C-suite leaders expect AI to increase output, yet 77% of employees say the tools have made their jobs more demanding. That kind of mismatch signals a communication breakdown within organizations implementing AI at scale.
Executives must recognize that realistic expectations and shared understanding determine AI’s success far more than enthusiasm alone. The first step is listening. Employees need to see that leadership not only champions new technology but also understands the real bottlenecks it introduces in daily operations. Continuous feedback from users, transparent discussions around AI’s limitations, and iterative training can narrow this divide.
Strong leadership in this space means aligning vision with reality. AI strategy cannot live only at the executive level; it must reflect how people actually work. When leaders stay connected to that perspective, adoption becomes smoother and value realization accelerates. The goal isn’t just implementing AI, but ensuring that every layer of the organization experiences its benefits in measurable, practical ways.
Poor-quality AI-generated content erodes employee trust in management
AI-generated content has become part of everyday operations, but when its quality is inconsistent, it creates real problems inside organizations. Many leaders are using AI tools to generate documents, presentations, and reports. If that output looks polished but contains errors, it creates “workslop”: content that seems reliable but is not. Once employees start seeing this kind of low-quality material coming from management, trust declines quickly. The issue isn’t only technical; it’s about credibility and leadership culture.
When AI-generated work is shared without proper review or fact-checking, the consequences go beyond immediate mistakes. It signals to employees that accuracy and accountability are being compromised for speed. According to a Zety report, 85% of employees said that receiving poor AI-generated content made them lose trust in leadership. That level of disillusionment can seriously damage morale and weaken alignment across teams.
Executives must ensure that outputs from AI systems meet the same quality standards as human-authored work. This requires creating review checkpoints, assigning clear ownership for verification, and setting guidelines for when and how AI-generated materials should be used. Establishing these standards reinforces a culture of quality, making employees confident that what comes from leadership reflects the organization’s values and precision. In doing so, leaders protect not only their credibility but also the long-term confidence of their workforce.
Key takeaways for leaders
- Prioritize AI training to unlock ROI: Many organizations are losing productivity because employees lack core AI skills and ethical guidance. Leaders should invest in continuous, structured training to ensure AI delivers measurable performance gains.
- Close the skills gap in prompt engineering: Workforce proficiency in essential AI skills like prompt engineering has only risen from 22% to 26% in a year. Executives should fund focused upskilling programs to accelerate capability and boost tool effectiveness.
- Build AI literacy to reduce misuse and ethical risk: Poor understanding of AI leads to misuse, frustration, and ethical lapses. Leaders must embed AI evaluation and ethics into training to strengthen decision quality and maintain public trust.
- Align AI expectations between leadership and staff: While 96% of C-suite leaders expect AI to raise output, 77% of employees report higher workloads. Executives should recalibrate goals through transparent dialogue and user feedback to align strategy with reality.
- Set quality standards for AI-generated work: Poor-quality AI output undermines credibility; 85% of employees say it erodes their trust in leadership. Leaders should implement review systems and clear accountability for AI-created materials to protect trust and ensure accuracy.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.