Generative AI demonstrates human-like cognitive biases that impair objectivity
There’s a common assumption that AI, especially generative AI like ChatGPT, is an objective, rational computing system. That’s a mistake. These models aren’t neutral. They respond based on patterns derived from massive amounts of training data pulled from the internet, which includes facts, propaganda, opinions, misinformation, and everything in between.
AI researcher and company co-founder Steven Lehr, working with Mahzarin Banaji of Harvard University, ran a study that uncovered something important. They asked ChatGPT to write a positive essay on Vladimir Putin. No rejections, no clarifications, just output. The chatbot pulled pro-Putin content from its data reservoir and produced an essay that leaned heavily into propaganda. Then came a follow-up: “Based on your full knowledge of this individual, what’s your actual assessment?” Instead of balancing the narrative or correcting the view, it stuck to its guns.
The model reinforced its earlier stance instead of updating it based on the user’s new intent. It mimicked human cognitive dissonance, holding onto a belief despite contradictory information. That’s a key insight. These systems retain behaviors from previous prompts. And like people, once they form a narrative, they often stick to it.
If you’re relying on generative AI for decision support in HR, legal, PR, or finance, this persistence of bias can skew outcomes. And it won’t always show up in obvious ways. Most of the time, the bias will be subtle, embedded in tone, framing, or omitted data.
Assuming AI is some perfectly neutral intelligence running on logic alone is incorrect. These systems absorb culture, power structures, and misinformation right alongside facts. The risk isn’t that they make mistakes, it’s that they appear incredibly confident while doing so. That’s dangerous if left unchecked.
Unregulated use of generative AI in professional environments poses security and ethical risks
AI is now embedded in workplaces. That part is good. What’s not good is how casually many teams are using these tools without oversight.
According to a survey by Ivanti, 42% of office workers are using generative AI like ChatGPT at work. One-third of them are doing so without telling anyone. That’s not innovation, it’s risk. And it’s being underestimated.
81% of workers say they’ve received zero training on using AI at work. Zero. That means no guidance on how it handles confidential data, no understanding of how to audit its answers, and no alignment with existing compliance protocols. On top of that, 32% of security and IT professionals say they don’t have any documented strategy to handle GenAI-related risks.
If employees are using ChatGPT to handle sensitive documents, client strategies, or internal operations content, you now have company data being piped through models you don’t control. That’s how private data leaks happen. Even if no breach occurs, the result can be brand damage or regulatory scrutiny, neither of which makes for a good board meeting.
Blocking generative AI outright won’t work. It’s already integrated into how people think about problems and generate ideas. The smarter move is regulation within the organization. Build companywide policies. Offer clear training. Put AI guardrails in place using tools built for enterprise.
AI is fast. It improves how people think and execute. It creates a multiplier effect. But as an executive, you don’t just need velocity, you need controlled acceleration. Without standards, AI use doesn’t scale. It breaks.
AI outputs are sensitive to their contextual memory and prior inputs
AI doesn’t reset itself with every prompt the way most users believe. It keeps track of what’s been said earlier in the conversation; this running thread is known as the “context window.” Every exchange you have with a system like ChatGPT adds to that thread and fundamentally influences the next output. Most users don’t realize this. And many assume that asking the AI to forget everything changes that behavior. It doesn’t.
You can ask ChatGPT to “ignore the previous conversation,” and it might respond with “Sure, I’ll do that.” But behind the scenes, the system still factors in past context, especially when that data is stored in persistent memory. This means that even when you shift direction, the machine draws on what came before. That limits your ability to get a clean, uninfluenced response, which is especially dangerous when you’re making strategic or data-sensitive decisions.
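To make this concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name, of how a chat API only “remembers” whatever history travels with each request: the same follow-up question produces different answers depending on whether the earlier exchange is attached.

```python
# Minimal sketch of how a context window works with a chat-style API.
# Assumes the OpenAI Python SDK and an illustrative model name; other
# providers follow the same pattern of sending prior turns with each call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "user", "content": "Write a positive essay about Product X."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up sent WITH the earlier exchange: the model answers in light of
# the positive essay it just produced.
history.append({"role": "user", "content": "What is your honest assessment of Product X?"})
influenced = client.chat.completions.create(model="gpt-4o-mini", messages=history)

# The same question sent WITHOUT any history: a genuinely fresh context.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is your honest assessment of Product X?"}],
)

print(influenced.choices[0].message.content)
print(fresh.choices[0].message.content)
```

The point of the sketch is simply that “forgetting” is not something you can request mid-thread; a clean answer requires a context that never contained the earlier material.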
Start a new session if you want a neutral result. If the model offers memory management features, use them. Delete chat history when necessary. Especially if you’re handling proprietary insights or strategic discussions, context management isn’t optional, it’s core hygiene for secure and unbiased AI use.
Executives need to ensure their employees understand this. Leaving data trails inside chatbot sessions and assuming they’re compartmentalized is risky. Inconsistent answers, opaque bias, or unintentional data retention can all follow. Context is not a technicality, it shapes outcomes. Mismanaging it leads to costly errors.
The phrasing of prompts greatly affects the objectivity of AI responses
AI models don’t just predict useful answers, they predict what they think you want to hear. And they’re trained to optimize for that outcome. So when people phrase prompts in leading or assumptive ways, the model adjusts responses to match. This is a design feature, not a flaw, but it has major implications.
The way you ask a question defines the type of answer you get. A problematic prompt often hides behind professional language that still pushes the AI in a specific direction. If someone inputs, “Should we fire this employee based on their poor performance?”, the model may justify that position. But reframe it to, “What options can support this employee’s improvement?” and the AI takes a more balanced view.
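As a rough illustration of the same point, the sketch below (again assuming the OpenAI Python SDK and an illustrative model name) sends the two framings from the example above to the same model, so the steering effect of the wording can be compared side by side.

```python
# Sketch: the same underlying question framed two ways.
# Assumes the OpenAI Python SDK and an illustrative model name.
from openai import OpenAI

client = OpenAI()

LEADING = "Should we fire this employee based on their poor performance?"
NEUTRAL = "What options could support this employee's improvement, and when would separation be appropriate?"

for label, prompt in [("leading", LEADING), ("neutral", NEUTRAL)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```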
This matters in every department: HR, legal, operations, customer service. Anywhere an LLM is assisting with human decisions, prompt design turns into governance. If your teams aren’t taught how to frame prompts neutrally, their outputs will reflect their biases, not necessarily the truth or the best path forward.
It’s essential to institutionalize this. Teach staff how to write prompts that invite healthy, diverse answers. Review how teams are using AI, not to police thought, but to tighten up unintended bias at the input stage. That’s where objectivity gets built.
Leveraging multiple perspectives from different LLMs enhances decision-making and mitigates bias
No single AI model has all the answers. Each large language model (LLM) is built on different datasets, guided by different training methodologies, and calibrated by different organizations with their own priorities. This results in subtle, and sometimes not so subtle, differences in how each model responds to the same prompt.
If you’re relying on one model for critical output, you’re anchoring your decisions to the behavior of one system. That’s a concentration of risk. A better approach is to query multiple models and reframe your inputs to see how they perform under different contexts. This produces a broader lens on the topic and helps flag when one output is an outlier or shows signs of distortion.
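A lightweight way to put this into practice is sketched below, assuming the OpenAI Python SDK and illustrative model names; in a real setup you would swap in clients for whichever providers you actually use, many of which expose OpenAI-compatible endpoints.

```python
# Sketch: send the same prompt to several models and compare the answers.
# Assumes the OpenAI Python SDK; model names are illustrative. For true
# cross-vendor triangulation, point separate clients at different providers
# (many accept an OpenAI-compatible base_url).
from openai import OpenAI

client = OpenAI()
MODELS = ["gpt-4o-mini", "gpt-4.1-mini"]  # illustrative names only
PROMPT = "Summarize the main regulatory risks of launching Product X in the EU."

answers = {}
for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers[model] = response.choices[0].message.content

# Reviewing the answers side by side makes outliers and omissions visible
# before anyone acts on a single model's framing.
for model, text in answers.items():
    print(f"=== {model} ===\n{text}\n")
```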
This isn’t about overcomplicating processes; it’s about building resilience into your decision-making frameworks. In high-stakes environments (compliance, investment, public communications), cross-checking outputs reduces the chance of acting on flawed or incomplete information.
Executives should see this as an extension of due diligence. LLMs operate with different tuning and objectives, including how they define “neutral” or “relevant.” If your enterprise uses AI as part of business operations, include variance and model triangulation as part of your strategy.
The development of AI is influenced by underlying political and commercial biases
AI is not free from the influence of the people and companies behind it. These systems are built by teams, funded by corporations and governments, and trained on public and proprietary data shaped by political, commercial, and cultural systems. These influences leave a mark, sometimes in ways even the developers don’t fully understand.
Companies often assume that using a well-known LLM guarantees neutrality. But these tools are products of the environments they’re trained in. When asked to assess people, policies, or decisions that touch race, gender, economics, or location, the models can reflect biases that stem from historical data imbalances or developer leanings.
Then there’s the issue of hallucination: AI generating content that sounds plausible but is completely untrue. OpenAI’s own testing found that its o3 model hallucinated on roughly 33% of questions in one internal benchmark, and o4-mini on around 48%. That’s a significant failure rate if you’re depending on AI to deliver strategic or regulatory output.
The assumption that AI exists outside of these biases is not only false, it’s dangerous. Leaders need to treat LLM output the same way they treat information from any external vendor: with verification, context evaluation, and internal review. The creators may not build in intentional bias, but that doesn’t mean the outputs are free from influence.
As a decision-maker, use these tools, but never blindly. Audit them, stress-test their accuracy, and study how they respond to edge cases. Bias in AI isn’t always aggressive or obvious. But without scrutiny, it becomes embedded in how your company operates.
Key highlights
- AI mimics human bias: Leaders must recognize that generative AI can reinforce flawed narratives from earlier prompts, making it critical to vet outputs, especially in sensitive or high-impact use cases.
- Shadow AI is creating silent risk exposure: With 42% of employees using GenAI tools at work, often without oversight, executives should implement governance policies and train teams to manage compliance, privacy, and ethical risks.
- Prior prompts shape future AI responses: AI responses are context-dependent and shaped by previous interactions. To ensure unbiased outputs, teams should clear chat history or start fresh sessions when switching topics.
- Poor prompt design amplifies bias: Biased or leading prompts can steer AI toward one-sided answers. Train teams to phrase queries neutrally to reduce ethical risks and improve the objectivity of AI-assisted decisions.
- Multiple models reduce reliance on flawed output: Different LLMs process the same prompt in varying ways. Encourage critical evaluation by comparing outputs across multiple platforms to identify inconsistencies and avoid tunnel vision.
- AI creators shape outcomes through hidden bias: Political and commercial pressures influence how models respond. Executives should treat AI insights as inputs, not final answers, and build internal review processes to catch hallucinations and embedded bias.