Marketers are rapidly adopting AI
There’s no doubt: marketers move fast. Historically, they’ve been first to jump on every major technological shift, and AI is no exception. Recent data shows that 63% of marketers are already using generative AI, and 79% plan to expand usage by 2025. They’re not dabbling; they’re committing. In fact, 85% are using AI for content creation. That’s far ahead of the overall U.S. workforce, where only 37% use AI tools at all.
But there’s a mismatch between confidence and reality. While 87% of marketers say they trust the accuracy of AI-generated output, the numbers behind that belief tell a very different story. Research shows that 51% of AI content has major problems: errors that erode trust, distort messaging, or, worse, put the business at risk. Across the board, 91% of AI-generated content has some kind of issue. That’s not a rounding error. That’s a systemic warning.
Still, enthusiasm isn’t the problem; unchecked optimism is. AI is powerful, and the direction is clear: intelligent automation will reshape industries. But we’re not quite there yet. The technology is still maturing, so relying on it for precision, nuance, and brand tone is risky, and it creates more work downstream cleaning up avoidable issues.
Executives should be asking: Are we using AI in the right places? Are we holding its output to the same standard as human work? Blind trust just because a machine produced it? That’s not leadership; that’s negligence. The goal isn’t to slow down innovation; it’s to scale it responsibly.
Use AI. Embrace it. But don’t confuse speed with certainty. Confidence is earned not by adoption rates, but by verified outcomes.
AI should be applied cautiously
AI is everywhere in marketing tech stacks now: 93% of marketers saw new AI features added to their tools just last year. The access problem is solved. The real challenge now is placement. Where you apply AI matters far more than whether you’re using it.
Not all marketing tasks are created equal. Some drive direct revenue, shape the customer experience, and define your competitive position in the market. Those are the high-impact, differentiating functions. Others are low-risk and internal: drafting policy documents, writing job descriptions, formatting standard templates. That’s where current generative AI performs best. In those zones, errors are manageable. You waste some time. Maybe you lose some operational efficiency. But you don’t lose customers, credibility, or legal protection.
Misusing AI in high-visibility areas puts the business at real risk. Customer-facing content, strategic brand voice, key campaigns: those touchpoints carry reputational weight. If the content is wrong, tone-deaf, or legally incorrect, you don’t just correct the mistake. You deal with the consequences. There’s already a cautionary case: Air Canada was held liable after its chatbot gave a customer false information about a fare policy. The cost isn’t just money; it’s trust.
This isn’t about being conservative. It’s about being deliberate. Use AI where it amplifies output without jeopardizing brand equity. That division of labor, strategic by design, maximizes results without threatening what sets your business apart. Done right, AI doesn’t replace differentiating work; it supports it by clearing out low-priority clutter.
Use judgment. AI can run some plays now, but it’s not the quarterback yet.
Human oversight remains essential
AI doesn’t understand your customers. It doesn’t understand your brand. It doesn’t even understand what it’s saying. Generative AI works by predicting the most likely next word in a sequence based on patterns it’s seen before. That means it can generate fluent, impressive-sounding content, but not meaningful, intentional communication.
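To make that mechanic concrete, here is a deliberately tiny sketch in Python. It is not how production models work internally; it only illustrates the same principle at toy scale: tally which word tends to follow which in some sample text, then “write” by repeatedly picking the most frequent successor. The sample text and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows which
# in a small sample, then generate text by always picking the most frequent
# successor. Real generative models are vastly larger and more sophisticated,
# but the core mechanic -- predict a likely next token -- is the same idea.
sample_text = (
    "our brand helps customers grow our brand helps teams move fast "
    "our customers trust our brand"
)

# Build a simple frequency table: word -> Counter of the words that follow it.
followers = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly choosing the most likely next word."""
    output = [start_word]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break  # no observed successor; the toy model has nothing to say
        output.append(options.most_common(1)[0][0])
    return " ".join(output)

# Fluent-looking and statistically plausible -- but there is no intent behind it.
print(generate("our"))
```

The output reads smoothly because it echoes the patterns it was fed, yet nothing in the process knows what a brand, a customer, or a claim is.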
When that content reaches a public audience, the stakes change. Every word reflects your company’s voice, your judgment, and your values. Without a human in the loop, you’re pushing material that might be off-brand, inaccurate, or inappropriate, and exposing your reputation to unnecessary risk. What AI lacks is nuance. It doesn’t understand cultural tone, emotional timing, or what specific phrasing might signal to a given customer segment. Humans bring the context. That context is non-negotiable.
Brand fidelity, market relevance, persuasive clarity: none of that happens automatically. It takes judgment calls that AI doesn’t make. Algorithms don’t hold accountability. Humans do. AI can generate faster, but only people ensure the message lands as intended.
If you’re launching anything customer-facing (email campaigns, product pages, strategy decks), keep human review in place. Not just proofreading. Actual marketing judgment. Think of human input as built-in quality control, not after-the-fact correction. Let AI move quickly, but don’t mistake speed for precision.
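One lightweight way to make that review non-optional is to treat human sign-off as a hard gate in the publishing workflow rather than a suggestion. The sketch below is a hypothetical illustration, not a real tool or API: a customer-facing draft simply cannot be published until a named reviewer has approved it.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContentDraft:
    """An AI-assisted draft that cannot ship without named human approval."""
    title: str
    body: str
    customer_facing: bool
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None

def approve(draft: ContentDraft, reviewer: str) -> None:
    """Record that a named human reviewed and signed off on the draft."""
    draft.approved_by = reviewer
    draft.approved_at = datetime.now()

def publish(draft: ContentDraft) -> None:
    """Refuse to publish customer-facing drafts that lack human sign-off."""
    if draft.customer_facing and draft.approved_by is None:
        raise PermissionError(f"'{draft.title}' needs human review before release")
    print(f"Published: {draft.title}")

# Hypothetical usage: the gate fails closed until a reviewer signs off.
draft = ContentDraft("Spring launch email", "AI-generated body copy...", customer_facing=True)
approve(draft, reviewer="brand.manager@example.com")
publish(draft)
```

The point isn’t the code; it’s the policy it encodes: speed stays, but nothing customer-facing ships on autopilot.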
The result of skipping human oversight isn’t just poor messaging. It’s erosion. Customers question the brand. Compliance teams chase red flags. Teams waste time fixing what shouldn’t have been broken. You’re left cleaning up when you should be scaling forward.
Stay fast. But stay sharp. If it touches your market, it needs your eyes.
AI demands risk awareness and structured monitoring
Generative AI isn’t guesswork; it’s probability-based output. Stephen Wolfram, a computational scientist and founder of Wolfram Research, explained it clearly: tools like ChatGPT are simply predicting the next most likely word, one step at a time. That’s useful. But it’s also limiting. The result is generic, average content modeled on patterns it has seen. It lacks originality, intent, and an understanding of your business context or values.
That’s why structured oversight isn’t optional; it’s a necessity. AI performance should be measured, monitored, and evaluated against clear benchmarks. If you’re not tracking how generative outputs perform once released into the world, you’re scaling unknowns. That introduces reputational risk, content inefficiencies, and legal vulnerabilities you didn’t plan for.
One practical approach comes from Niel Nickolaisen, an IT strategist and author. His “purpose alignment model” helps operators decide where AI belongs and where it doesn’t. The idea is simple: avoid applying AI to tasks that are mission-critical or tied to market differentiation. Instead, focus AI efforts on areas that are operational but non-core, where errors won’t upend customer trust or strategic outcomes.
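As a rough illustration of how a team might operationalize that filter, the sketch below asks two questions of each task, drawn from the model’s axes: is it mission-critical, and does it differentiate you in the market? Only when the answer to both is no does it green-light AI drafting. The task list, field names, and labels are hypothetical, not part of Nickolaisen’s published framework.

```python
from dataclasses import dataclass

@dataclass
class MarketingTask:
    """A task scored against the two purpose-alignment questions."""
    name: str
    mission_critical: bool        # would failure disrupt operations or compliance?
    market_differentiating: bool  # does this shape how customers see the brand?

def ai_recommendation(task: MarketingTask) -> str:
    """Return rough guidance on where generative AI fits for this task."""
    if task.mission_critical or task.market_differentiating:
        return "human-led; AI only as a reviewed assistant"
    return "good candidate for AI drafting with spot checks"

# Hypothetical examples of how tasks might be classified.
tasks = [
    MarketingTask("Internal job description draft", False, False),
    MarketingTask("Standard policy template formatting", False, False),
    MarketingTask("Flagship campaign messaging", True, True),
    MarketingTask("Customer-facing pricing page copy", True, False),
]

for task in tasks:
    print(f"{task.name}: {ai_recommendation(task)}")
```

The value is in forcing the two questions to be answered explicitly before AI enters a workflow, rather than after something has already shipped.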
None of this means hesitation; it means intentionality. Use AI where the value is clear and the fallout is manageable. But build systems that expose and correct its misses. You don’t want surprises hiding inside scaled processes.
Last year alone, 93% of marketers had new AI features added to their tools. That number tells you how fast this is moving. But speed isn’t impact. You need controlled execution if you want stable outcomes. Otherwise, you’re multiplying successes and failures at the same rate. That’s not scalable leadership. That’s risk without structure.
Use AI, test it, monitor the results, and refine how it fits your stack. That’s how you stay ahead, without losing control.
Key highlights
- Marketers are overconfident in flawed AI outputs: Most marketers trust AI-generated content, but 91% of it contains errors, 51% with major flaws. Leaders should stress-test AI output before trusting it with brand visibility or messaging.
- Use AI only where mistakes won’t damage the brand: AI should be applied to low-risk, operational tasks, not customer experience, revenue strategy, or brand presentation. Executives should pair AI speed with human judgment in differentiating activities.
- Human oversight is non-negotiable for public-facing AI use: Generative AI lacks empathy, tone, and accountability. Leaders must ensure human review is built into workflows where AI impacts customers or brand voice.
- AI performance requires structure and monitoring: Without a system to measure and adjust AI output, risk scales alongside automation. Leaders should use models like purpose alignment to focus AI deployment in areas where errors are acceptable and value is clear.