LLMs are fast learners! AI search rewards authority – and it's earned media that builds it!
IBA warned readers a few months back about the rise of the LLeMmings: hunting listicles and AEO to feed their frenzy for coverage mentions in media (and any media, it seems, will do). Fast forward, and it appears original thought has continued to go out of the digital window for many PR and marketing pros!
But as we predicted, the world and AI search are catching up, as this LinkedIn post from HeyAmos founder and CMO Oliver Pechter shares: “Right now, some brands are buying their way into listicles and sponsored roundups to show up in AI answers. That works today. It won’t work forever.”
There are still some B2B marketers out there who eat Gen-AI for breakfast and spew out marketing words rather than think for themselves. So stop – and yes, think. Though a whole lot of the aforementioned won’t, because they’ve already forgotten how to!
Marketing experts and PR pros now have a golden opportunity to capitalize on earned media strategies in the rush towards AI search. Because earned media is what AI search is beginning to eat for breakfast. It needs good, trustworthy content to do its job well – it’s on its CV.
As this new Content Marketing Institute article urges, earned media belongs at the core of AI search strategies. Katherine Caraway, director of earned media at Intero Digital, highlights: “Generative engines don’t treat all content equally – and earned media has structural advantages that branded content simply doesn’t.”
Earned media isn’t just a brand awareness play – it never was. LLMs learn fast – they can increasingly decipher earned content from pay-to-play! Which means the work PR teams have always done to secure earned editorial coverage and thought leadership is now the greatest influencing lever.
Read on to see how to put human expertise back in the driver’s seat and leave AI as a passenger – not a replacement for critical thinking!
Don’t be a lemming, be a leader!
“A colleague suggested that we might even call the most extreme users “LLeMmings”—yes, because they are always LLM-ing, but also because their near-constant AI use conjures images of cybernetic lemmings unable to act without guidance.”
It didn’t take long for the idea to catch on! An article in The Atlantic struck the very same chord, heralding the rise of the LLeMmings. The article highlights the “Google Maps–ification” of interactions with large language models, where people rely on AI to guide them through everyday life the same way they rely on GPS apps to navigate.
Separating tool from thought
None of this is to say that AI tools can’t be extremely useful aids to help businesses and people in their everyday tasks. You only need to glance at the IBA client roster to see a host of B2B technology providers helping factory floor workers execute tasks more safely and efficiently, pre-diagnosing maintenance faults before they occur on a military jet, or allowing organizations to deploy intelligent AI agents that keep retail stock correctly provisioned despite fluctuations in supply or demand.
Watch this space as Meta’s Chief AI Scientist leaves Meta to think
But these are intelligent uses of AI tools that add value to daily operations. According to Yann LeCun, an NYU professor and recently departed VP & Chief AI Scientist at Meta, speaking in a recent WSJ interview, the danger comes when the AI tools switch on – and the humans turn their minds off! Current iterations of LLMs and agentic AI have less true learning ability than human children, and rumors of their power are greatly exaggerated.
True AI intelligence requires a different approach to learning
Prof LeCun believes Meta and other organizations have been focusing on LLMs, which will be less useful in attempting to create AI systems that can match human intelligence. Instead, he is focusing on research to create systems that build internal “mental maps” of reality, learning physics and causality to predict outcomes and plan actions, moving beyond current LLMs’ text-based limitations towards more human-like intelligence.
These “world models” learn by taking in visual information, much as a baby animal or young child does. For the time being, AI can be a tool that makes people more effective and intelligent in their professional roles, rather than AI becoming inherently “smarter” than humans.
You heard it here first!
The IBA Talking Propeller Heads episode with Shazam founding CTO Mike Karliner puts LLMs into perspective. It’s like we’ve said since the inception of ChatGPT and LLMs: they make good people smarter, and mediocre people worse.
For now, the focus should be on leveraging the best of both people and technology to ensure beneficial outcomes.
Remember, next time you ask ChatGPT for help – just because we can, doesn’t mean we should!
FOOTNOTE: In 1911, IBM founder Thomas J. Watson Sr., then at National Cash Register, urged employees to use their heads, not just their feet, for success. The idea became IBM’s core philosophy, inspiring its products (ThinkPad, ThinkCentre) and culture, and emphasizing thoughtful innovation and problem-solving through its famous signs and branding.

He wrote “THINK” on a flip chart, telling his team that thinking was key to success, and adopted it as a motto. “THINK” promotes innovation and problem-solving, and empowers employees to use their intellect – perhaps YHIHF!
Jamie Kightley is Head of Client Services at IBA International.