Break Free from Parrot Thinking: Harness AI for Strategy
Why Strategy Risks Becoming a Cage
AI is often accused of being nothing more than a “stochastic parrot”, mimicking patterns without truly thinking. But Oliver Feldwick (2025) flips the critique: it is strategists, not machines, who risk becoming parrots.
In today’s marketing ecosystem, strategists face more briefs, more formats, and more demands, with less time, fewer resources, and relentless deadlines. This impossible equation forces them to default to repetition: the same brand onions, the same “dusty diagnostics,” the same borrowed statistics and catchphrases. True strategic thinking now happens despite the system, not because of it.
The consequence? Strategy, once about reframing and provoking, has been reduced to parroting patterns. And yet, paradoxically, the very AI tools strategists dismiss could help them escape this cage, if they are used with intention.
The Parrot Problem: Formulaic Thinking Has Hollowed Out Strategy
Strategy should be about discovery: finding new ways to look at old problems, reframing categories, and shifting perspectives. But under today’s pressures, strategists have become risk-averse operators. Instead of digging, they deliver. Instead of reframing, they reproduce.
As Feldwick notes, quoting Byron Sharp has become shorthand for thinking. Clichés like “connected but lonelier than ever” are passed off as insights. Frameworks are filled in by muscle memory. In effect, planners have automated themselves into irrelevance.
This predictability is dangerous. IPA’s The Long and the Short of It (Binet & Field, 2013) showed that campaigns underpinned by distinctiveness and imagination are twice as effective at driving long-term growth as generic ones. When strategists parrot patterns, brands bleed differentiation.
Brand Case
Pepsi (Live For Now, 2017): Built on formulaic tropes of “unity” and “protest,” this Kendall Jenner campaign showed the risk of parroting cultural clichés. Instead of resonance, it sparked global backlash and forced Pepsi to pull the ad. The formula looked safe, but it destroyed trust.
AI: From Stochastic to Generative
Ironically, AI may help strategists escape the very cage it’s accused of creating. Large Language Models (LLMs) have shown strategic fingerprints in game-theory experiments: adopting distinct playstyles, negotiation tactics, and even cooperative long-term strategies.
Give these systems “scratchpads” (step-by-step reasoning space) and they generate logical arguments, counterpoints, and creative solutions. They don’t just parrot; they recombine. And recombination is at the heart of strategy. As Mark Earls argued in Copy, Copy, Copy (2015), originality is often about remixing existing ideas in a new light.
What AI Enables
Automate the obvious: Persona templates, boilerplate research, repetitive decks.
Accelerate the difficult: Complex segmentation, scenario modeling, foresight analysis.
Augment the important: Reframing briefs, generating alternative routes, provoking unexpected connections.
Used well, AI restores strategists to their real job: thinking differently.
Multiplying Creativity: Why AI Boosts Both Speed and Serendipity
Two forces separate breakthrough strategy from mediocrity: iteration and variance.
Iteration: The difference between a breakthrough and a block is often how many routes are tested. Feldwick notes that iteration has historically come at human cost: late nights and weekend overtime. AI changes this, allowing sustainable iteration at scale. More ideas mean more chances to find the one that matters.
Variance: True disruption emerges from unexpected perspectives. AI can simulate synthetic personas, adversarial voices, or unfamiliar cultural standpoints. By forcing strategists out of their comfort zones, variance creates serendipity.
Brand Case
Spotify (Wrapped, 2017–present): A campaign iterated annually, recombining user data with cultural storytelling. Variance in execution, from humor to identity to activism, keeps it fresh. AI could accelerate such recombination, but human direction ensures it reflects brand voice.
The Risks of Cognitive Offloading: Why AI Could Make Us Dumber
But Feldwick warns of a paradox: if we use AI carelessly, it will make us lazier, not smarter. This is the risk of “cognitive offloading.” If strategists let machines do all the thinking, they risk losing the muscle of curiosity, questioning, and reframing.
Strategy’s value has never been just the output: the words in a deck. Its real value is the collective process: aligning teams, sparking debates, building shared conviction. AI may produce polished outputs, but it cannot replace the messy, human journey of alignment.
From Prompt Engineering to Prompt-Craft
Feldwick reframes “prompt engineering” as “prompt-craft.” This is not about tricking machines with the right keywords; it is about combining precision with imagination. Like writing a great creative brief, prompt-craft blends rigor with play. It’s about experimenting, remixing, and pushing AI toward surprising, adversarial, or playful directions.
The skillset is not foreign to strategists. It is simply the evolution of what they’ve always done: combining information with inspiration to provoke better outcomes.
The Human Skills that Define the Future
Even as AI accelerates iteration and variance, three human skills remain indispensable:
Trust: Building consensus and belief that the chosen path is the right one. In an AI-driven process, maintaining this thread of human trust is harder but more valuable than ever.
Taste: Curating what is emotionally, aesthetically, and culturally resonant. AI can generate thousands of options, but only human taste can choose the one that matters.
Integration: Connecting silos has always been a strategist’s role. Now, integration expands to human + machine. Realizing AI’s potential requires weaving its output seamlessly into organizational and cultural contexts.
Brand Case
Dove (Real Beauty, 2004): AI could have produced hundreds of rational routes, but it took human taste to champion a bold truth: beauty stereotypes were toxic. The integration of cultural truth with brand equity built one of Unilever’s strongest growth platforms.
Becoming Cyborg Strategists: Human and Machine, Together
Feldwick’s provocation is clear: the future is not about resisting AI or surrendering to it. It’s about becoming cyborg strategists. Let machines handle the noise, the 50 bad ideas, the rote frameworks, the repetitive data. Let humans focus on the signal, the breakthrough 51st idea, the reframing, the provocation.
This is not abdicating strategy. It is reclaiming it from bureaucracy and rediscovering its joy. The cage of parroting was never locked. It was always a choice. Now, AI offers the chance to step out.
CEO-Level Imperatives
Ban Formulaic Thinking: Kill reliance on repetitive frameworks and empty diagnostics.
Invest In Prompt-Craft: Train strategists to use AI creatively, not mechanically.
Institutionalize Iteration: Build workflows where AI generates multiple routes before humans decide.
Protect Human Judgment: Hold leaders accountable for applying trust, taste, and integration.
Champion Cyborg Strategy: Mandate a partnership model: AI for scale, humans for meaning.
Bottom Line: AI Should Liberate Strategy, Not Replace It
The real threat is not that AI becomes too much like us, but that we become too much like it: unquestioning, formulaic, predictable. With intention, AI can free strategists from rote parroting, multiplying iteration and variance while elevating human judgment.
The cage isn’t locked. It never was. The choice is whether strategists step out, or keep repeating frameworks until their role disappears.