A majority of managers now use generative AI to make critical personnel decisions
The shift in how organizations make people-related decisions is well underway. Most managers are no longer relying solely on experience or gut instinct. Today, 60% of U.S. managers are using generative AI, most often ChatGPT, Microsoft Copilot, and Google Gemini, to make high-impact decisions about their direct reports: promotions, pay raises, layoffs, and terminations, the calls that shape company trajectories.
Among those who use AI for these decisions, the majority apply it across the board: 78% use it to determine raises, 77% for promotions, 66% for layoffs, and 64% for firings. Surprisingly, or maybe not, more than 1 in 5 managers admit they often let genAI make the final call without human input.
For executives, this is a signal: operational efficiency in workforce management is no longer just about HR software or department structure. It’s about using powerful AI tools that can evaluate employee contributions faster, and across more variables, than a human can in real time. Implemented well, this means faster decisions, better-optimized organizational structures, and a smaller gap between employee impact and recognition. Done wrong, it can damage culture and trust faster than you can react.
AI doesn’t have feelings. It doesn’t understand nuance unless it’s trained that way. That’s both its greatest strength and its biggest weakness. When a machine gives you a ranking, a recommendation, or a termination decision, it’s basing that on what it knows, which comes from historical data and parameters originally chosen by humans. If those inputs are flawed, the outcomes will be too. As you implement or expand AI in critical HR workflows, understand that this isn’t plug-and-play. It requires human review and clear boundaries.
At this level of adoption, generative AI is no longer experimental. It’s operational. That also means your competition is using it, and you can either lead or follow.
Most managers lack formal training in the ethical use of generative AI for people management
Most managers who use AI to fire, promote, or demote employees don’t really know how to use it properly. That’s the reality. While 94% of managers who use AI say they apply it to people management decisions, only 32% have received formal training on how to do so ethically. Another 43% have had only informal guidance. Worse still, nearly a quarter (24%) have had no training at all.
This isn’t a minor oversight. You can’t outsource fairness or empathy to a black-box algorithm and hope things turn out fine. The absence of proper training doesn’t just expose companies to bad decisions; it exposes them to lawsuits, damaged employer brands, and long-term distrust among employees.
Here’s the problem. AI produces output fast, sometimes too fast. When a manager sees a list, a score, or a result, there’s a tendency to accept it as truth, especially under pressure. Without understanding how that AI was trained, what constraints it follows, or where it may introduce bias, even well-intentioned leaders are making calls on flawed assumptions.
If your team is using genAI for hiring or performance reviews, formal training isn’t optional; it’s required. That training should go beyond how the tool works: it should cover where the blind spots are, how to cross-check recommendations, and when to push back. Otherwise, you risk automating bias at scale.
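What “pushing back” looks like can be made concrete. Below is a minimal sketch of a human review gate, assuming a hypothetical genAI tool that returns a recommendation with a confidence score; it routes high-impact or low-confidence calls to a manager before anything is finalized. Every name and threshold here is illustrative, not a standard.

```python
from dataclasses import dataclass

# Decision types that should never be finalized without human sign-off.
HIGH_IMPACT = {"termination", "layoff", "demotion"}

@dataclass
class AIRecommendation:
    employee_id: str
    decision_type: str   # e.g. "raise", "promotion", "termination"
    recommendation: str  # the model's suggested action
    confidence: float    # hypothetical score returned by your genAI tool

def requires_human_review(rec: AIRecommendation,
                          min_confidence: float = 0.85) -> bool:
    """Route a recommendation to a manager when the stakes are high
    or the model itself is unsure. Thresholds are illustrative."""
    if rec.decision_type in HIGH_IMPACT:
        return True  # high-impact calls always get a human in the loop
    return rec.confidence < min_confidence

# Example: a termination suggestion is always escalated, even at 97% confidence.
rec = AIRecommendation("E-1042", "termination", "separate", 0.97)
assert requires_human_review(rec)
```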
Executives should approach this as a strategic priority. AI will keep evolving. The ability of your leaders to use it wisely will determine whether it drives truly fair and productive outcomes, or erodes morale and opens you to risk.
As Stacie Haller, Chief Career Advisor at Resume Builder, explains: “It’s essential not to lose the ‘people’ in people management.” If you care about sustainable growth, build systems that combine AI speed with human insight. Not just for productivity, but for integrity.
Generative AI is increasingly influencing hiring processes, including candidate screening and role replacement
Hiring is changing fast. Generative AI is now making direct calls on who gets interviews, who gets hired, and who gets replaced. This is systems-level workforce design being driven by algorithms.
According to Resume Builder, 46% of managers said they were asked to evaluate whether AI could replace a human position. More than half, 57%, agreed it could. More telling still, 43% actually went on to make that replacement. Real roles are being phased out based on what the AI suggests.
The hiring funnel is also AI-driven. Resume sorting, shortlisting, even initial interviews: AI is handling these tasks at scale. A study from TestGorilla reports that one in five employers in the US and UK are already using genAI to conduct first-round interviews.
This has a direct impact on HR staffing models too. Overloaded teams no longer need to manually scan 800 CVs; that task is done in seconds. But speed introduces risk. AI can filter candidates out based on narrow or historical data that doesn’t reflect real capability or adaptability. Qualified candidates get missed because the model doesn’t recognize a non-traditional background or a differently worded resume.
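That risk is measurable before it does damage. One widely used first check is the EEOC’s four-fifths rule: if any group’s selection rate falls below 80% of the highest group’s rate, the screen warrants scrutiny. Here is a minimal sketch, assuming only that you can tally pass/fail counts per group; the counts shown are invented for illustration.

```python
def adverse_impact_check(selections: dict[str, tuple[int, int]],
                         threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule).

    `selections` maps group -> (passed_screen, total_applicants)."""
    rates = {g: passed / total for g, (passed, total) in selections.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative counts, not real data: group B passes at half of group A's rate.
flagged = adverse_impact_check({"A": (120, 400), "B": (30, 200)})
print(flagged)  # {'B': 0.5} -> this screen needs review before it scales
```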
For executives, this all means hiring decisions are being filtered by logic that may not align with the company’s core objectives unless constantly supervised and maintained. The outcome might be faster hiring. But you also need safeguards to ensure you’re building the team you actually want, not just the one that fits a preset data model.
This isn’t about anti-AI sentiment. It’s about precision. It’s about making sure your models reflect the realities of your talent pool and the evolving needs of the business.
Generative AI enhances operational efficiency in routine HR functions, but it may also introduce fairness and bias concerns
Generative AI is streamlining HR admin work across most functions. Training content, performance reviews, development plans: all of these can be built, edited, and deployed faster with the help of AI. In terms of execution, generative AI is doing a lot of the legwork.
The impact is real. In the Resume Builder survey, 97% of managers who use genAI said they rely on it to produce training materials. Another 94% use it to develop employee growth plans. About 91% apply it to performance evaluations. Even performance improvement plans, a sensitive area, are being supported by AI in 88% of cases.
This saves time. It frees management bandwidth. And it introduces a level of consistency and structure that’s hard to achieve manually across a large organization.
But here’s where it demands attention: fairness. AI won’t pick up on interpersonal conflict, context-specific nuance, or unseen effort unless it’s explicitly trained to assess those variables. Without clear guidance and regular review, the system ends up reinforcing past judgments. That’s where bias enters the process, not always intentionally, but reliably.
Still, most managers trust the output: 71% believe AI makes fair decisions. That confidence is useful, but unless it’s backed by clear audit trails and consistent models, it can lead to mistakes that scale quickly and invisibly.
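An audit trail doesn’t have to be elaborate to be useful. Here is a minimal sketch, assuming nothing about any particular HR platform: log what the model was asked, what it returned, and who accepted or overrode it, in an append-only file. All names and values are hypothetical.

```python
import json
import time
from pathlib import Path

LOG = Path("ai_decision_log.jsonl")  # hypothetical append-only audit file

def log_decision(employee_id: str, prompt: str, model_output: str,
                 reviewer: str, action_taken: str) -> None:
    """Append one auditable record per AI-assisted decision.
    `action_taken` captures whether the human accepted, modified,
    or overrode the model's recommendation."""
    record = {
        "ts": time.time(),
        "employee_id": employee_id,
        "prompt": prompt,
        "model_output": model_output,
        "reviewer": reviewer,
        "action_taken": action_taken,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the manager overrode the model's low rating after a 1:1.
log_decision("E-2210", "Summarize Q3 performance", "Rating: 2/5",
             reviewer="j.smith", action_taken="override: rating 4/5")
```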
For leaders, using AI to power HR systems isn’t optional. But trusting it blindly is a mistake. Use the tool. Make workflows smarter. Just don’t assume the output is final. Involve people. Review results. Adjust fast when needed.
You don’t need more data. You need better judgment supported by data that’s being applied intelligently and reviewed continuously. That’s how technology becomes an asset, not a liability.
Human oversight and robust ethical guidelines are essential for responsible AI deployment in people management
If you’re deploying AI in any human decision-making process, especially managing employees, there has to be structure behind it. Oversight isn’t a formality. It’s a core requirement. AI doesn’t understand implications. It doesn’t know when a decision could erode trust or violate compliance. That’s on the people using it.
Managers today are applying generative AI to reviews, terminations, promotions, and development plans. But many are doing it without standards. As earlier data showed, only 32% of managers using genAI received formal training on ethical use. Nearly a quarter received none.
That’s a warning signal.
Stacie Haller of Resume Builder said it clearly: “It’s essential not to lose the ‘people’ in people management.” That isn’t sentiment; it’s operational logic. When employee trust breaks down, retention drops, engagement falls, and legal exposure rises. It’s not complicated. Maintain oversight. Define boundaries. And train your teams on what the AI is doing under the hood, and what it’s not.
Companies that don’t establish clear ethical standards for AI use will be reacting to problems instead of preventing them. It’s smarter and more cost-effective to define the rules now than to clean up after misaligned decisions later.
You don’t need to slow progress to do this. Implement the oversight frameworks at the same time you scale adoption. Build them together. AI works better when people are still involved.
Employers are shifting hiring priorities by valuing soft skills over specific AI technical expertise
AI is taking over repetitive parts of the hiring process, which changes what employers value in candidates. And the shift is clear. Companies are starting to prioritize soft skills, like communication, critical thinking, and adaptability, over highly specific AI technical credentials.
The numbers confirm it. In a recent TestGorilla study, only 38% of employers now actively look for AI-specific skills in new hires. That’s down from 52% just a year ago. At the same time, 74% use skills-based assessments to evaluate candidates. And 57% have removed college degree requirements altogether.
What this tells you is simple: employers aren’t chasing resumes filled with buzzwords. They’re looking for actual capability. Not just skills in a narrow tool or platform, but people who can ask the right questions, interpret data accurately, and work productively with both humans and AI.
This matters more now because AI will keep changing. The tools you hire for today might not exist in a few quarters. What doesn’t change is the ability to adapt, learn, and make decisions when no clear pattern exists.
For executive teams, this should directly influence re-skilling strategies, hiring policy, and talent development. Build teams that can think clearly and solve problems. Technical tools will continue to evolve. Human value isn’t tied to memorizing platforms; it lies in applying judgment under pressure and building what comes next.
The use of generative AI in early-stage hiring processes is becoming mainstream across industries
Generative AI isn’t just showing up in HR, it’s becoming standard. At the early stages of hiring, employers are now depending on genAI tools to screen resumes, score applicants, and even conduct first-round interviews. The scale of adoption means this isn’t confined to early adopters or tech firms anymore. This is active practice across industries.
The data reinforces that shift. According to a study by TestGorilla, 70% of employers now use genAI at some point in their hiring process. One in five use it to conduct initial candidate interviews, meaning the first interview happens before anyone on the team has spoken to the applicant directly. This accelerates timelines but compresses evaluation criteria into automated logic.
For executives, this increases both speed and reach. Applicant pools that used to take days or weeks to sort through can now be pre-qualified in under an hour. That saves time across recruiting functions, especially in high-volume or high-turnover roles. But speed should not replace sound judgment. AI might scan faster, but it doesn’t evaluate with human perspective unless explicitly programmed to.
Poor parameter settings, rigid filtering logic, or outdated training data can lead to bad decisions at scale. Shortlists could miss top candidates simply because their resumes don’t conform to expected formats or because they offer diverse, but unfamiliar, career paths. That naturally limits your access to differentiated talent.
No automation can replace leadership’s responsibility to build strong teams. If AI runs the screening process, you still need decision-making frameworks that include human checks, escalation criteria, and data review. You also need to be consistent in testing the model’s output against real-world performance data, not just standard KPIs.
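That back-testing can start simply. The sketch below, using entirely hypothetical data, checks whether the model’s screening scores at hire actually correlate with later performance ratings; the cutoff is illustrative, something to set with your own analytics team rather than a standard.

```python
from statistics import correlation  # requires Python 3.10+

def backtest_screen(screen_scores: list[float],
                    performance: list[float],
                    min_r: float = 0.3) -> bool:
    """Return True if the AI's screening scores meaningfully track
    on-the-job performance. `min_r` is an illustrative cutoff."""
    r = correlation(screen_scores, performance)
    print(f"screen-to-performance correlation: r = {r:.2f}")
    return r >= min_r

# Hypothetical data: model scores at hire vs. 12-month review ratings.
scores = [0.91, 0.85, 0.78, 0.72, 0.66, 0.60]
reviews = [4.0, 3.5, 4.2, 3.0, 2.8, 3.1]
if not backtest_screen(scores, reviews):
    print("Screening model is not predicting performance -> recalibrate.")
```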
AI in hiring is now expected. The difference in competitive edge comes from how consciously you guide its implementation, aligning it with long-term talent strategy and values. That’s where leadership has to be active. Not in writing the code, but in shaping how it’s used.
Concluding thoughts
AI isn’t coming; it’s already threaded into how companies recruit, evaluate, and manage people. If you’re in a leadership position, you’re not deciding whether to adopt it. You’re deciding how to shape its use.
The upside is real: speed, scalability, and tighter alignment between data and decision-making. But that only holds if oversight is in place. If your managers aren’t trained, your models aren’t tested, or your policies aren’t clear, you’re not advancing, you’re automating uncertainty.
This is the moment for sharp, deliberate implementation. Build AI into your workflows, not blindly above them. Train leaders to question outputs. Make ethics part of the operating logic, not a compliance box to check.
AI won’t replace thoughtful leadership. But it will amplify whatever system it’s embedded in. Make that system one you actually trust to run at scale.