AI-generated writing lacks humanity and fails to emulate an authentic human voice
We’ve trained AI models on a scale humanity hasn’t seen before: billions of data points, terabytes of human text. The results are statistically impressive. Large Language Models (LLMs) like GPT can summarize, draft, and generate human-like responses faster than any writer alive. But here’s the problem: human-like isn’t human. And when you read their output, you know it.
There’s a distinct absence of character, even of soul. What we’re producing with AI right now often reads like high-functioning noise: technically correct but emotionally inert. It doesn’t sound like anyone in particular. Not your customer. Not your employee. Not your executive team. That’s a problem, especially if you care about influence, trust, or brand differentiation.
Bryan Cantrill, CTO of Oxide Computer Company, summed it up well: the writing from AI is “stylistically grating.” He pointed to the overuse of punctuation like em-dashes, something only a particular subset of humans uses organically. That’s not stylistic flair; it’s misapplied mimicry. Even Sam Altman at OpenAI acknowledged the issue and made adjustments. If you’re leading an organization that relies on communication for market presence, customer engagement, or investor confidence, this should matter.
Because as automation scales up, the average quality of corporate content risks dropping off a cliff. Quantity isn’t the issue. It’s tone. Authenticity. Context. If your messaging sounds like it was mass-produced in a content factory, your customers will treat it that way: disposable and forgettable.
A company’s voice should reflect its people. That human element is what drives connection. Cutting corners with AI may look efficient on paper, but it often costs more than it saves in brand equity and credibility. If leadership continues relying on these tools without discretion, you’ll end up with a voice that tells no story, persuades no one, and ultimately adds no value.
British-style journalism demonstrates the importance of a distinct, opinionated voice that AI struggles to replicate
AI doesn’t have an opinion. That’s not a feature; it’s a limitation. The systems we’re building today don’t know what they think because they don’t think. They don’t believe or judge or weigh risk in context. What comes out looks balanced on the surface because it’s a statistical average of everything we’ve already said online. But that balance often turns into blandness. And bland content doesn’t earn attention; it gets ignored.
In British journalism, you see the opposite. Outlets like The Register take clear positions. Their headlines are sharp, their tone brash. And even if you disagree with what’s written, you still want to keep reading. There’s something magnetic about writers who have convictions, even when they push the boundaries. That magnetism comes from humans who want to be heard, not algorithms predicting the midpoint of public sentiment.
Contrast that with much of what you see from American media and corporate comms: measured, neutral, stripped of personality. The press tries to present facts without tone, but neutrality often becomes a disguise for indecision or fear of backlash. That same tone is exactly what AI mimics. And it shows. AI-generated content gravitates toward vagueness. It avoids taking positions because it’s not built to own ideas.
Emily Bell, a respected journalist, noted that British reporting is “faster, sloppier, wittier, less well-resourced and more venal, competitive, direct, and blunt.” That’s not criticism; it’s realism. And it makes the content more compelling, not less. Ashlee Vance, who helped shape The Register’s distinct voice, mastered the combination of wit and insight without diluting either.
For leaders, this is a pragmatic insight. If you’re using AI tools to communicate with markets, customers, or partners, and the tone sounds like nothing and means nothing, that’s a liability. People pay attention to what feels real. A compelling voice has force. It doesn’t need consensus to gain traction. And it’s never built by committee or code alone. AI still lacks what The Register and similar platforms have in abundance: a firm point of view that holds its ground.
Automated content can replace subpar human writing with equally uninspiring machine-generated text
Efficiency sounds great, until you realize you’re optimizing content that no one reads. That’s what’s happening in a lot of marketing today. An LLM produces a press release or sales document ten times faster than a human. It looks polished. It checks all the boxes. It also goes unnoticed. Zero impact. And that’s the real cost.
Consider one reported case: a product marketing lead replaced junior writers with an LLM because the machine’s output was better and cheaper. But the reality underneath is more revealing. The sales collateral got only a few dozen downloads across a sales team that numbered in the thousands. This isn’t a win. It’s a signal that the content itself didn’t matter, regardless of how efficiently it was created.
Creating forgettable content faster doesn’t solve a communications problem; it confirms it. If you’re a C-level executive approving budgets for PR and marketing efforts, this should raise a red flag. The primary goal of content isn’t to exist; it’s to perform. Whether that means educating customers, enabling sales, or earning investor attention, content has to move someone to act or change their perspective. That requires intention and context, things LLMs don’t understand.
Some leaders think that if a machine can write what a human would have written, and no one engages with it anyway, then it’s a smart tradeoff. That’s the wrong metric. You don’t measure success by output volume; you measure it by outcomes. And if the outcome is indifference, the approach needs to change.
AI isn’t harmful because it’s replacing junior writing talent. It’s harmful when it reinforces low expectations. A well-trained AI can replicate a bad process at scale. It can push out ten versions of the same forgettable message, on time and on brand, while audience engagement quietly sinks to zero. That’s a slow bleed most executives don’t catch until the numbers show it in missed conversions or declining trust.
If your collateral, press releases, or marketing content are low-impact, AI won’t fix that. It will just help produce more of the same, faster. Fix the message first. Then think about how to scale it.
For persuasive and meaningful communication, maintaining authenticity and individuality is essential
If you want to persuade someone, you need to sound like you mean it. That requires honesty. Not polished output. Not tone-neutral phrasing. An honest, personal voice. That’s something large language models still can’t deliver. They replicate structure. They can echo tone. But they don’t believe what they’re saying, because they can’t.
This matters more than it might seem. Most decisions, especially at the executive level, are driven by how information is presented, not just what it contains. If a CEO sends out a company-wide memo and it reads like it came from a generic AI assistant, the message falls flat. People don’t follow information; they follow conviction. LLMs will never have that. They generate what sounds plausible, not what feels true.
Muhammed Shaphy, CEO of Talentz.ai, called it out clearly: “AI made your writing smooth. It erased your voice in the process.” That’s what a lot of marketing and internal communication is doing right now. It’s clean but lifeless. Clear but forgettable. You’re getting speed at the cost of connection.
This is a broader leadership issue. People respond to clarity, energy, and authenticity. You don’t need grand storytelling to get their attention. You need precision and personality. You need to sound like someone, not something. If the writing doesn’t reflect the person or the brand behind it, it loses its weight. This is especially true in executive communications, thought leadership, and brand strategy. If your voice disappears behind automation, your presence does too.
Most businesses already push out too much content no one asked for. Using AI to do more of that, with less personality, is a strategic error. Be more selective. Prioritize communication where your voice matters. Where tone, timing, and intent affect outcomes. Then own that message without outsourcing your voice to a machine trained on everyone else’s.
AI is a tool. Use it where speed or scale adds value. But when it comes to trust, persuasion, and leadership, it’s still a human game.
Key takeaways for decision-makers
- AI lacks authentic voice and conviction: Leaders should avoid using generative AI for communication that requires emotional resonance, as AI-generated content often lacks the conviction, tone, and personality that drive real human engagement.
- Distinctive voices build stronger connections: Messaging modeled after unique, human-driven styles, like British opinion journalism, can enhance brand recognition and engagement. Executives should challenge the idea that safe, neutral language is more effective.
- Output without impact is wasted effort: Leaders must question whether content, even if efficiently generated by AI, drives actual engagement. Prioritize quality and relevance over speed and volume to avoid producing high-efficiency, low-value communication.
- Authenticity remains the most persuasive tool: AI can standardize tone but erodes individual voice. Maintain a strong, human-led voice in leadership communications and high-value content to preserve trust, relatability, and influence.