AI-generated feedback enhances performance, but only when hidden

AI-driven feedback systems have shown surprising power in improving performance. A 2021 Strategic Management Journal study found that employees perform better when they believe their feedback comes from a human manager rather than an algorithm. Once people learn that AI actually wrote the feedback, the boost disappears. The study called this reaction the “disclosure effect.”

This effect highlights something fundamental about human behavior. It’s not just the content of the message that matters; it’s the perceived source. Employees expect performance evaluations to reflect understanding, context, and empathy: qualities associated with human judgment. So even when AI generates accurate and personalized insights, awareness of its involvement triggers resistance.

For leadership teams, the lesson is direct. The biggest challenge in AI deployment is psychological. The data shows AI can deliver superior results, but without employee acceptance, those results evaporate. Executives must tread carefully. Concealing AI involvement might produce temporary gains, but it creates real risk once transparency is eventually demanded.

AI systems should be positioned as value amplifiers. Businesses that explain AI’s supportive role, using it to process large-scale data or highlight trends rather than to make judgments on its own, can mitigate distrust. This reframes AI as an assistant enhancing human management rather than an authority figure issuing impersonal judgments.

The trust deficit undermines AI’s potential in HR

Trust is the critical barrier to AI adoption in human resources. Research from SHL shows that only 27% of workers trust their employers to use AI responsibly, while 59% believe it worsens workplace bias. That perception alone limits AI’s potential, regardless of its real capabilities. Even when algorithms produce fairer, data-based feedback, human suspicion neutralizes the impact.

This lack of trust arises from uncertainty about how AI makes decisions and where accountability lies when errors occur. Employees fear losing personal connection and fairness, while leaders risk being perceived as detached or manipulative if they fail to explain how and why AI is being used. That loss of trust nearly always leads to lower engagement, slower adoption, and weaker performance outcomes.

Executives should view AI deployment as a cultural transformation. Transparency, education, and clarity about what data is used and why can turn AI from a source of anxiety into a source of confidence. When people understand that AI supports fairness and consistency, trust begins to rebuild.

Another critical factor is perception management. If leaders communicate clearly that AI isn’t judging people but helping analyze patterns to make better decisions, employees are more open to it. Building explainability directly into HR systems, showing how results are derived, reinforces confidence and helps balance performance efficiency with ethical accountability.

Employee resistance stems from perceived dehumanization

Employee resistance to AI in performance evaluation doesn’t come from the technology itself; it comes from how people interpret what its use means. Workers often assume that a machine lacks the context to understand the subtle factors influencing their performance. They believe human managers can better recognize the behaviors, constraints, and circumstances that data misses. When AI becomes the face of evaluation, employees perceive a breakdown in empathy and leadership connection.

Dr. Ryne Sherman, Chief Science Officer at Hogan Assessments, explains that resistance grows when employees feel reduced to inputs in a system rather than individuals with unique contributions. He notes that when organizations delegate evaluation to algorithms, team members may see it as a signal that leadership values efficiency over humanity. The perception of being “processed” by a system drives disengagement, regardless of the fairness or accuracy of the data.

For executives, this highlights a leadership challenge. Integrating AI effectively demands awareness of human emotion and cultural impact. Business leaders cannot delegate empathy to code. The systems may provide accurate data, but it takes human context to interpret that data constructively. Preserving human judgment within AI-driven processes can maintain trust while using automation to enhance scale and consistency.

The takeaway for leaders: employees don’t reject technology; they reject feeling unseen. When AI-generated feedback is paired with managerial understanding and communication, resistance drops and adoption becomes organic. It’s a matter of maintaining human dignity in a data-powered environment.

Hiding AI use is unsustainable and counterproductive

Concealing AI’s role in feedback processes no longer works. As AI-generated language becomes familiar, employees easily recognize its rhythm and structure, and that recognition creates immediate suspicion. Once suspicion forms, performance gains vanish and trust erodes quickly. Attempts to hide AI’s presence end up achieving the opposite of their intent: lower performance and lasting credibility loss.

Edie Goldberg, President and Founder of E.L. Goldberg & Associates, notes that employees are already trained to sense when text lacks human specificity or depth. She advises that managers use AI to draft feedback and then refine it with context, tone, and personal understanding. This method allows organizations to gain efficiency from AI without losing the authenticity that people expect from human interaction.

For company leaders, this is a practical issue of sustainability and culture. Concealment might deliver short-term stability, but exposure is inevitable. Employees talk, systems leave digital trails, and transparency expectations are higher than ever. The ethical and operational risk of hiding AI outweighs any short-term performance metrics it might boost.

Executives should instead focus on redefining how AI participates in management. Presenting AI as an analytical assistant, one that gathers multiple data points and organizes them for human decision-making, sets a clear boundary between human authority and technological support. When teams see AI as a collaborator rather than an imposter, performance and trust can coexist.

Transparency as a system design principle

Transparency should not be treated as a moral afterthought; it must be built into the foundation of AI-driven systems. When organizations clearly identify AI-generated feedback and provide employees with the same information their managers see, trust grows through open participation. This removes ambiguity and creates accountability, both for managers and the technology itself. The transparency-first approach builds reliability into how feedback circulates and removes space for speculation.

Kate O’Neil, Co-Founder and CEO of Opre, has fully embraced this model. Her platform labels all AI-generated feedback openly and delivers it simultaneously to managers and employees. Neither side can edit or disguise the origin of the insights. This ensures both parties know exactly which content is machine-generated and what requires human interpretation. O’Neil’s approach turns feedback into a shared process rather than a hidden hierarchy.

For decision-makers, this strategy supports scalability without losing integrity. When employees know where data comes from, they focus on improving performance rather than questioning its legitimacy. It also enforces managerial responsibility: leaders must interpret and act on the insights rather than passively forward them.

Organizations adopting transparent AI frameworks demonstrate confidence in both their systems and their people. Over time, this transparency builds resilience. Employees stop wondering whether they’re being misled and instead engage with the system’s findings as an honest foundation for discussion. This shift can strengthen collaboration and improve overall alignment between workforce and leadership.

The inevitable fallout of deceptive AI practices

Organizations that hide AI involvement in feedback or performance management often discover that the fallout is far more damaging than anticipated. Once concealment is uncovered, trust evaporates. Employees feel manipulated, leadership credibility collapses, and resentment replaces cooperation. From there, recovery becomes slow and expensive. Deception doesn’t just damage individual relationships; it undermines the entire management culture.

Edie Goldberg cautions that trust, once broken, is incredibly difficult to rebuild. She has observed how employees in organizations using hidden AI systems become disengaged and skeptical even of legitimate human feedback afterward. Dr. Ryne Sherman echoes this warning, stating that once deception becomes public, trust within the workforce “hits absolute rock bottom.” According to him, the consequences go beyond attrition; disengagement and loss of morale quickly destabilize performance across teams.

For executives, deception carries measurable organizational risk. The exposure of hidden technology use would likely trigger internal crises, including employee departures, morale collapse, or public scrutiny. Rebuilding trust would require sweeping reform, often involving leadership changes and complete transparency over data practices.

The message is clear: short-term performance gains achieved through deception create long-term damage. Transparent adoption of AI systems is not simply an ethical stance; it’s a strategic risk management decision. Companies that invest in honesty now will build durable trust that protects them from backlash later. Those that don’t risk losing both their workforce and their brand equity when the truth emerges.

The choice between trust and performance reveals organizational values

The decision to disclose or conceal AI involvement goes beyond operations or system configuration; it reveals what kind of organization leaders are building. Every executive choice on this issue signals whether the company prioritizes sustainable trust or short-term productivity metrics. Hiding AI involvement to preserve short-term performance may show tactical focus, while choosing transparency points to a strategic direction aligned with long-term stability and credibility.

The split in approach has wider implications. When AI transparency is selective, revealed to executives but hidden from frontline workers, it amplifies existing divides between management and staff. This inequality breeds resentment, accelerates disengagement, and weakens company culture. Full transparency aligns all layers of the organization under a unified understanding of how data and algorithms influence people decisions.

Executives must acknowledge that workforce trust is now a performance variable, not a side consideration. A company can have superior technology, but if employees question leadership integrity, then performance, retention, and reputation decline over time. Clarity about how AI is used supports fairness, ensuring that performance data is perceived as reliable and unbiased rather than manipulative or hidden.

The balance between transparency and productivity is not static. It’s a continuous leadership evaluation of priorities. Research showing that only 27% of workers trust employers to use AI responsibly, and that 59% believe it worsens bias, underscores how fragile the current environment is. Companies that face this reality proactively are more likely to succeed in implementing AI as an accepted part of their management infrastructure.

AI’s true promise lies in real-time, contextual feedback

AI’s real advantage in performance management emerges through immediacy, not imitation. When AI provides real-time feedback, highlighting workflow patterns, productivity changes, or collaboration metrics, it delivers value that traditional reviews cannot. Immediate, data-backed insights allow employees and leaders to respond quickly to changes instead of waiting for annual or quarterly evaluations. This shifts the focus from evaluation to continuous improvement.

Edie Goldberg emphasizes this potential, noting that AI can process large amounts of performance data across multiple interactions and timeframes, producing feedback that is fairer and more complete than traditional reviews. Real-time AI feedback can integrate input from peers, communication data, and performance metrics into one coherent picture, reducing individual bias and improving accuracy.

For executives, the strategic advantage is clear. Real-time data creates a dynamic management framework where performance discussions are based on facts rather than delayed interpretations. Employees benefit from immediate guidance, and leaders gain visibility into developing trends before they become systemic issues. Over time, this continuous loop can enhance performance consistency and engagement.

However, leadership must also recognize the cultural implications. The usefulness of AI-driven real-time feedback depends on employee trust in the system’s fairness. If workers understand that the system is observing professional behavior and not personal details, transparency can coexist with operational precision. Proper boundaries and clear communication about how data is used will define whether employees view AI-driven feedback as supportive or invasive.

The unresolved question: transparency vs. efficacy over time

The debate on whether to disclose AI involvement in performance management remains unresolved, largely because long-term evidence is still limited. Short-term studies, such as the 2021 Strategic Management Journal research, confirm that disclosing AI use lowers immediate performance results. Yet no longitudinal data shows whether transparent systems eventually outperform opaque ones by building stronger trust and cultural resilience.

Executives face a decision that tests leadership philosophy as much as operational strategy. Transparency advocates prioritize sustainable culture, integrity, and long-term organizational health, arguing that open disclosure prevents inevitable trust crises. Others focus on near-term efficiency, believing that initial performance boosts can help justify or refine systems before disclosure becomes necessary. Both perspectives reflect different risk profiles rather than outright moral disagreement.

In practice, most organizations operate somewhere between these two positions. Full disclosure risks early resistance and productivity dips, while concealment creates the potential for significant fallout. The lack of historical precedent for AI use in performance feedback compounds the uncertainty, so leaders must rely on principles rather than extensive empirical benchmarks. This turns AI governance into a test of values: whether an organization chooses immediate optimization or long-term transformation capacity.

For decision-makers, the correct strategy depends on organizational maturity and culture. Companies with high transparency standards and employee engagement should integrate AI usage openly from the start. Those in less-trusting environments may need phased disclosure with active education and cultural reinforcement. The authenticity of leadership communication will determine how successfully either approach unfolds.

AI’s adoption in HR and performance management is not just about automating processes; it’s about shaping how people experience fairness and leadership in a digital environment. The question of disclosure will continue to evolve as more companies implement transparent systems and track performance outcomes over several years. The answer will emerge from real-world results, not theory.

The bottom line

For leaders shaping the next generation of performance systems, AI is both an accelerator and a mirror. It amplifies existing values, processes, and relationships. The question is not whether AI can optimize output (it already does) but whether the organization is prepared to manage what optimization reveals about its culture.

Trust is now as measurable as performance itself. When employees understand how technology fits into evaluation and decision-making, they respond with engagement, not fear. When they feel manipulated, they disengage no matter how precise the system may be. The difference lies in how openly leaders define AI’s role in judgment and accountability.

Executives who treat transparency as a leadership standard rather than a compliance obligation will find AI easier to integrate and scale. Those who rely on short-term secrecy for quick wins will face the cost later, through loss of talent, trust, or internal cohesion.

AI does not erode culture; it tests it. Companies clear about intent, process, and purpose will use this technology to strengthen their workforce. Those unclear about what they stand for will see that uncertainty multiply. The future will reward honesty, precision, and the courage to align technology with values rather than convenience.

Alexander Procter

March 24, 2026