Misalignment in AI adoption undermines productivity
Most companies aren’t lacking AI tools; they’re just not putting them to work properly. That’s the insight from Zellis’s latest study on AI in the UK workplace, and it’s worth paying attention to. While 94% of business leaders believe their organisation is already using AI, only 61% of employees report actually using it. That’s not a small gap; it’s a signal that executive assumptions about technology adoption aren’t matching the day-to-day employee experience.
This misalignment costs more than just accuracy in boardroom discussions. According to Zellis, UK businesses could be missing out on up to £60 billion a year in combined productivity and cost savings. That’s not hypothetical; it’s based on a model where better alignment between leadership vision and employee usage unlocks 8% more value in working time.
There’s a leadership problem here, not a tooling one. Companies are sitting in what Zellis calls an “AI grey zone” where executives see progress, but employees are still stuck with fragmented access and uneven adoption. If you’re not investing in clear implementation plans, plans that actually reach beyond headlines and slide decks, you’re leaving money on the table. This isn’t about launching AI. It’s about landing it where it matters most.
Abigail Vaughan, CEO of Zellis, put it plainly: “Leaving that value unrealised isn’t an option.” And she’s right. If the tech exists but teams don’t know how to use it, or don’t trust why it’s being used, you’ve built a system that looks advanced from the outside but delivers mediocre returns.
Executives should look at AI adoption the way they look at margins: measurable, optimised, and fully aligned from the top down to the frontline. Anything less introduces inefficiencies you can’t afford in a competitive market.
Divergent priorities for AI application between leaders and employees
When leaders and employees disagree on how to use AI, productivity stalls. According to the data, most employees would prefer to see AI handling repetitive admin work: data entry, error-checking, and basic form processing. It’s simple: they want fewer obstacles in their daily flow. 69% support using AI for these routine tasks, which makes sense. Save them time, reduce the noise, and let them focus on work that drives value.
Leaders often want something different. A surprising 35% think AI should help make decisions on promotions, pay, and career progression, all of which touch on trust and personal growth. But only 8% of employees agree with that approach. It’s a serious divide. And it shows that some executives are skipping a critical question: just because AI can do something, should it?
If AI is applied in areas where trust and transparency matter most, without clear frameworks and human checks, it can backfire. It pushes employees away from the technology instead of pulling them in. If staff feel AI is being used to manage decisions instead of enable performance, they’ll resist. Not because they’re anti-tech, but because they don’t see fairness woven into the system.
For decision-makers, the next step is straightforward: align AI use with actual priorities on the ground. Don’t deploy it in strategic areas unless you’ve already proven its value in operational ones. Build confidence bottom-up, not top-down.
There’s no hesitation in adopting AI, but there is caution when it crosses the line into decision authority without input. The message from your teams is clear: use AI to assist, not replace; support, not control. That’s where the value lives.
Inadequate employee involvement in AI decision-making
Leadership confidence in how AI is being rolled out often doesn’t match employee experience. Zellis’s research shows this clearly. Among leaders in organisations using AI, 63% believe employees are involved in decisions about its use. But only 40% of employees feel that way, and 33% say they’re not involved at all.
That gap isn’t just about perception; it impacts trust, engagement, and effectiveness. If teams aren’t part of the conversation when AI initiatives roll out, adoption drops. AI tools still need context to drive impact, and that comes from the people using them.
When employees aren’t actively engaged in shaping how AI is used, the outcome is predictable: tools are underused, misunderstood, or misapplied. Input from the workforce isn’t a checkbox to tick; it’s a data source. They can highlight operational friction, poor workflows, and the high-effort, low-return processes AI is best suited to optimise.
For executives, this means integrating employee feedback into the full AI strategy, from selection to deployment. It’s not enough to announce a platform upgrade or get buy-in at rollout. Inclusion has to be consistent. It has to be visible. You need employees to know their input shapes outcomes.
When that trust is there and involvement is authentic, what you get is alignment. And alignment is what turns AI from a feature into a capability.
Enhanced AI alignment can drive significant productivity and cost savings
AI can unlock serious productivity gains, but only when its use supports how people actually work. According to Zellis, aligning AI strategy with real workflows could shift 8% of employee time toward higher-value activities. Do the maths: that’s 1.7 billion hours across large UK organisations, valued at £40 billion per year in staff capacity alone.
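As a quick sanity check, the headline figures hang together as back-of-envelope arithmetic. The sketch below derives the implied totals from the article’s own numbers; the derived hours base and per-hour value are inferences, not figures stated by Zellis:

```python
# Back-of-envelope check of the productivity figures quoted above.
# Inputs come from the article; the "implied" values are derived here,
# not taken from Zellis's underlying model.

time_shifted_share = 0.08   # 8% of employee time shifted to higher-value work
hours_shifted = 1.7e9       # 1.7 billion hours per year
capacity_value = 40e9       # £40 billion per year in staff capacity

# Implied totals the figures assume:
total_hours = hours_shifted / time_shifted_share   # total working-hours base
value_per_hour = capacity_value / hours_shifted    # implied value of one staff hour

print(f"Implied hours base: {total_hours / 1e9:.2f} billion hours")
print(f"Implied value per staff hour: £{value_per_hour:.2f}")
```

On these figures the model assumes roughly 21 billion working hours across large UK organisations, priced at a little over £23 per hour of staff time, which is broadly consistent with UK average labour costs.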
AI isn’t just a time-saver; it restructures how effort is applied within an organisation. Get the alignment right and you increase capacity without increasing headcount. Use that extra bandwidth for growth, innovation, or precision execution across core functions.
There’s also a major cost efficiency angle. One in five leaders said smarter AI integration could reduce operating costs by 7% to 10%. Zellis estimates that could unlock up to an additional £20 billion in savings annually. That’s £1 freed up for every £10 spent.
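The cost-savings claim can be checked the same way. The implied operating-cost base below is derived from the quoted figures, not stated in the article:

```python
# Rough check of the cost-savings claim: up to £20bn per year from a
# 7-10% reduction in operating costs. The implied cost base is an
# inference from these numbers, not a figure from the study.

savings = 20e9              # up to £20 billion per year
low, high = 0.07, 0.10      # 7% to 10% reduction in operating costs

implied_base_at_7pct = savings / low    # cost base if savings come at the 7% end
implied_base_at_10pct = savings / high  # cost base if savings come at the 10% end

print(f"Implied cost base: £{implied_base_at_10pct / 1e9:.0f}bn "
      f"to £{implied_base_at_7pct / 1e9:.0f}bn per year")
```

The “£1 freed up for every £10 spent” framing corresponds to the 10% end of that range, and implies an operating-cost base of roughly £200–286 billion across the organisations in question.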
Executives have two levers: efficiency and effectiveness. AI alignment improves both if it’s positioned correctly. But poorly integrated tools block those outcomes. The system you deploy must reflect how your teams work, think, and execute. Otherwise, all you’ve bought is complexity.
If you’re not measuring the direct operational impact of AI, you’re moving blind. Once the alignment is clear, the value is too. And it shows up in performance metrics, cost structures, and the resilience of your operating model.
Increased expectations for AI tools and upskilling among employees and leaders
AI is quickly becoming a baseline skill, not just for data teams, but for everyone. Employees understand it. They want better tools, and they want the right training to use them. Zellis reports that 34% of employees expect their employer to provide AI tools within a year. That jumps to 63% when looking out over two years.
Leaders see the shift coming too. Nearly half (47%) expect advanced AI and digital skills to be necessary in their organisation within the next year. That number rises to 77% over two years. More importantly, 74% recognise that employee upskilling will become a growing priority over that same period. These parallel signals point to an expectation mismatch that won’t last long.
If your company is behind on integrating AI training into current learning pathways, you’re heading into risk. Talent retention gets harder, internal capabilities plateau, and recruitment becomes more expensive. High performers won’t wait around for an organisation to catch up when others are already investing in growth paths that match market needs.
Smart investment now means building a workforce that doesn’t just use AI, but expects it, and competently applies it. The pressure is here, and the timeframe is short. This isn’t about massive infrastructure overhauls. It’s about making AI accessible, understandable, and actionable across every team.
Business leaders should be planning for AI fluency the same way they invest in financial literacy or digital communication. It’s not optional. It’s core capability.
Transparent AI practices enhance trust, retention, and work-related wellbeing
Trust increases when AI is implemented with transparency. According to Zellis, 40% of employees say that being transparent about how AI is used would make them more likely to stay with their employer. Another 42% say it would improve their trust in leadership.
Executives need to register this clearly: transparency isn’t about over-explaining the algorithm. It’s about defining the boundaries: what AI does, how decisions are made, and where human oversight remains critical. When employees understand how AI is used and why, confidence improves and stress drops.
This is especially true for younger employees, who are shaping the next phase of workforce expectations. Among workers aged 18 to 34 who use AI, 62% reported reduced stress at work, and 61% said AI has increased their confidence in their role. They aren’t questioning the existence of AI. They’re asking for clarity and leadership that ensures it adds value without taking control.
When your people feel like AI exists to help them, rather than judge them, you get alignment. And with that comes better focus, higher output, and stronger culture. These are core business outcomes, not soft benefits.
Executives should drive AI adoption with transparency built into the model from day one. That means clear communication, ongoing feedback loops, and defined use cases with human checks where necessary. If you’re upfront and consistent, the trust sticks, and so do your best people.
Younger employees show greater engagement and positive perceptions of AI
Generational momentum is shifting how AI is viewed and used inside organisations. Younger employees, especially those aged 29 to 44, are leading in both usage and engagement. According to Zellis, 69% of this group are actively using AI in their roles, and 27% report frequent use. That’s not just adoption; it’s embedded behaviour.
These employees are also more engaged in conversations around AI policy and implementation. Among them, 58% agree that their feedback on AI is heard and acted upon. That level of engagement is not matched across the wider workforce, where only 45% believe leaders are using AI effectively, and 40% feel involved in related decisions.
What this shows is simple: the next generation of leadership is already interacting with AI in a practical and confident way. They aren’t waiting for permission. They expect AI to be available, usable, and designed with their input in mind.
For executive teams, this has long-term implications. As these workers step into leadership roles, their preferences will drive AI expectations across the organisation. They will push for more responsive systems, more transparent frameworks, and higher adoption standards.
Ignoring this trend delays competitiveness and weakens your talent strategy. Embracing it, on the other hand, means building organisational habits that scale with future leadership, not against it.
Steve Elcock, Director of Product – AI at Zellis, summed it up concisely: “AI doesn’t fail because the technology isn’t ready; it fails when people aren’t.” That’s the bottom line. Engagement isn’t about technical access; it’s about education, trust, and genuine alignment with how people want to work. When those elements are present, capability scales fast, and so does value.
Final thoughts
Technology doesn’t create value on its own. People do, when the systems they use actually serve how they work. Right now, too many AI strategies are misaligned, too top-heavy, or disconnected from what teams need. The result is predictable: unrealised gains, slow adoption, and talent frustration.
The data from Zellis is clear. AI alignment isn’t about chasing new tools or showcasing technical ambition. It’s about operational clarity, workforce inclusion, and strategic focus. That’s what unlocks scale, efficiency, and resilience.
For executive leaders, this is a simple equation: better alignment equals better output. Make your AI strategy practical. Make it trustworthy. Bring your workforce into the process. Because when AI supports people, instead of being imposed on them, you don’t just improve performance. You future-proof it.
The opportunity is massive. What decides the outcome isn’t the tech. It’s how human your approach is to using it.


