AI adoption remains limited in office workplaces
Let’s cut through the noise. Right now, generative AI adoption in offices is modest: just over 25% of organizations are “all in,” with another 28% running small, targeted AI pilots. That’s it. Roughly half of businesses are either stuck in evaluation phases or have no deployments at all. That tells us a lot. Most organizations are still trying to figure out what measurable value AI can provide.
So if your company hasn’t rolled out large-scale AI, don’t worry. You’re in line with most of the market. The key reasons? Lack of clear use cases. Existing workflows that don’t integrate well with AI. Uncertainty about how AI tools fit into daily operations and wider strategy. But these are solvable, and leaders who move early, test broadly, and focus on clear business outcomes will gain long-term compounding advantages.
According to Perficient’s latest “State of GenAI in the Workforce Survey,” 26% of organizations have fully adopted AI, and another 28% have deployed it selectively. Thomson Reuters’ “2025 Generative AI in Professional Services Report” found nearly identical trends across 1,702 professionals in law, tax, and government. That’s consistent, parallel validation across industries.
Adoption will speed up. But who leads will be determined by the willingness to test, iterate, and train, not just to install software. The market still has massive open space for strategic adopters.
Inadequate training impedes effective AI adoption
This one’s obvious but under-addressed: most employees don’t get proper training on AI tools. Perficient’s study found that many workers only get emails from IT about AI. That’s not training. That’s noise.
Here’s the issue: AI won’t work for your business if your people don’t know how to use it properly. It’s not just about installing new tools; it’s about enabling your team to think and execute differently. You’re redesigning how decisions are made, how tasks move, and how outcomes are driven. That’s not something a link to a user guide can fix.
Eric Walk, Principal for Enterprise Data Strategy at Perficient, nails it: “Adoption is primarily driven by good training and good enablement.” That’s the delta between those who experiment and those who scale. Your tech is only as good as your internal traction.
Executives need to ask: are you equipping employees to work effectively with AI, or are you just asking them to adapt on their own? Because if your answer is the latter, you’re burning potential competitive advantage. People need hands-on experience, clear workflows, and real feedback loops to trust and improve with these tools.
Strong enablement isn’t a soft issue; it’s an acceleration layer. If your workforce gets it, AI becomes a force multiplier. If it doesn’t, AI becomes shelfware. Most companies are underinvesting in change management here, and it shows in the gap between pilot tests and full adoption. Close that gap, and adoption speeds up.
AI demonstrates measurable productivity enhancements
We’re seeing clear signals: AI is increasing real productivity in the workplace. Not assumptions, not vague promises, but measurable improvements. A combined 76% of surveyed employees who use AI say it boosts their output, their quality of work, or both.
AI tools are already handling tasks that reflect the competence of someone with 7 to 10 years of job experience. That means the systems are mature enough for meaningful contributions, not just light automation or surface-level productivity. For tasks involving document summarization, pattern detection, or content generation, the accuracy and speed are already shifting workflows.
Executives need to stop treating this as theoretical. Your teams are likely already using generative AI in isolated use cases. The choice now is whether you want to formalize that use into business architecture or stay in fragmented experimentation. If there’s value in speed, consistency, or parallel task execution, AI is the short path, not because it replaces people, but because it raises the potential of teams across all levels.
The data paints a clear picture. Perficient’s report shows that 17% of AI users reported higher output, 32% saw increased quality, and 27% experienced both. Add those up and you get the 76% figure: roughly three-quarters of users already surpassing baseline performance. This is now about scalability, not viability.
The takeaway: AI is working, and the longer companies wait to integrate it across systems and teams, the more they pay in opportunity cost. Leaders don’t need perfection at launch. They need structure and strategic scope. Start with high-volume, low-risk tasks where AI is already proven to outperform or match human performance. That alone yields a competitive edge.
Trust and regulatory concerns limit AI use in high-stakes sectors
In sectors like law, tax, and public agencies, AI adoption is hitting friction, not because people don’t see its potential, but because they don’t trust it yet. That distinction matters. Nearly 90% of professionals believe AI could have a role in their industries. But far fewer believe it should be applied, at least not right now.
The reasons come down to risk, precision, and accountability. These sectors handle sensitive data, strict compliance frameworks, and reputational stakes that don’t leave much room for error. Until there’s stronger regulation and clearer operational guidelines, leaders in these industries will be cautious, and rationally so.
AI in professional services isn’t a matter of automation alone. It either performs with acceptable precision or it doesn’t belong in the decision loop. Legal professionals, in particular, are raising concerns over issues like the unauthorized practice of law. According to a recent Thomson Reuters survey, 73% of legal respondents see that as a major or moderate threat. Similar anxiety exists in tax and audit functions, where responsibility is non-negotiable.
Executives in these sectors need to think tactically. Use AI where it speeds up human-controlled processes, like contract review or fraud detection support, not where it replaces judgment, legal guidance, or final accountability. That alignment improves productivity without breaching trust.
Thomson Reuters’ “2025 Generative AI in Professional Services Report” maps this position clearly: only around 22% have adopted AI, while large portions are still evaluating or have no immediate plans to use it. The enthusiasm is there; the infrastructure and guardrails are not. Build both, and AI adoption scales where it matters most.
Concerns over job displacement and misuse drive AI adoption hesitancy
There’s a growing concern across professional sectors that AI won’t just change work, it will eliminate it. And many people aren’t wrong to think that. In legal and tax professions especially, the fear is direct: 50% of tax professionals and nearly two-thirds of legal workers consider AI a real threat to job security. Those are not marginal figures. They reflect a legitimate reaction to how fast this technology is evolving.
But this isn’t just about automation. There’s additional friction around ethical misuse. In law, for example, 73% of professionals surveyed by Thomson Reuters flagged unlicensed legal practice via AI as either a major or moderate threat. Legal workers want to use AI to handle routine, repetitive tasks, but they draw a firm line when it comes to responsibilities that carry legal risk or require human discretion.
What that means for leadership is simple: strategic clarity matters. You can’t just deploy AI broadly and hope people will adjust. Whether it’s legal services, audit work, or risk management, these are people whose performance is tightly connected to regulatory outcomes. They need certainty that AI doesn’t compromise their value or introduce avoidable exposure.
To move past resistance, companies need to articulate very clearly where AI improves capacity and where it won’t be allowed to overstep. You also need re-skilling and cross-functional communication. If your end goal is maximizing time spent on high-value work, staff should see automation not as a threat, but as a shift, one that only works if leadership leads with transparency.
A distinct divide exists between personal and professional AI adoption
Most professionals are familiar with AI tools. They use them on their own, on mobile apps, browsers, and devices. They ask ChatGPT for support, use voice assistants regularly, and trust the responses they get. But when those same technologies are brought into a corporate workflow, the reaction changes. There’s hesitance. There’s doubt.
This trust gap isn’t about the technology. It’s about context. People don’t want AI making decisions about, or even assisting with, tasks that could impact compliance, client outcomes, or company reputation without safeguards in place. The same tool that feels useful in a personal setting feels questionable when decisions have financial, legal, or operational consequences.
Laura Clayton McDonnell, President of the Corporates Business Segment at Thomson Reuters, nailed it when she said people “wouldn’t even think twice about taking their personal phone and clicking on ChatGPT… but when you get into the professional environment, folks pause.” That pause is where most corporate AI efforts stall.
Executives need to figure out how users cross that boundary from personal capability to professional confidence. That means setting clear use cases, defining tool limitations, and making accuracy and compliance the default, not optional extras. Professional environments demand higher performance standards, and that needs to be built into the implementation strategy.
If your organization wants wide adoption, eliminate ambiguity. Train staff to understand how AI decisions are made, where it can be trusted, and where it cannot. Once those boundaries are known, usage becomes normalized, and consistent, scalable value follows.
Key takeaways for leaders
- AI adoption is early but real: Around 25% of organizations have fully implemented generative AI, with another 28% testing focused use cases. Leaders should invest in low-risk pilots now to stay competitive as adoption accelerates.
- Training is the weak link: Most employees receive little to no structured AI training, limiting tool effectiveness. Prioritize hands-on enablement programs to unlock real returns on AI investments.
- Productivity gains are clear: Roughly three-quarters of AI users report noticeable improvements in quality, output, or both. Focus first on repeatable, data-heavy tasks where AI already outperforms manual workflows.
- Trust is the barrier in regulated sectors: In law, tax, and government, professionals recognize AI’s potential but remain cautious about applying it to high-responsibility work. Leaders should deploy AI in supportive, not decision-making, roles until regulatory clarity improves.
- Job loss and misuse fears create resistance: Legal and tax professionals worry about job displacement and AI enabling unethical practices. Communicate boundaries clearly and lead with task-specific automation to build confidence and reduce backlash.
- Personal vs. work AI trust gap persists: Employees trust AI tools for personal use but pause in professional settings due to compliance and accuracy concerns. Set clear guidelines and use cases to bridge the trust divide and enable safe, confident adoption.