The dangers of outsourcing critical thinking – now experts believe LLMs will never match the learning of children
A few weeks ago, IBA highlighted how B2B marketers need to think like humans, not lemmings, when it comes to the latest technologies that enter the marketing universe. And in the AI-driven business age it’s becoming easier than ever to let those technologies tell you what to do.
Our thoughts from the IBA team: “A colleague suggested that we might even call the most extreme users ‘LLeMmings’: yes, because they are always LLM-ing, but also because their near-constant AI use conjures images of cybernetic lemmings unable to act without guidance.”
It didn’t take long for the idea to catch on! A recent article in The Atlantic struck the very same chord, heralding the rise of the LLeMmings. The article highlights the “Google Maps–ification” of interactions with large language models, where people rely on AI to guide them through everyday life the same way they rely on GPS apps to navigate.
Separating tool from thought
That’s not to say AI tools can’t be extremely useful aids for businesses and people in their everyday tasks. You only need to glance at the IBA client roster to see a host of B2B technology providers helping factory floor workers execute tasks more safely and efficiently, pre-diagnosing maintenance faults on a military jet before they occur, or allowing organizations to deploy intelligent AI agents that keep retail stock correctly provisioned despite fluctuations in supply or demand.
Watch this space as Meta’s Chief AI Scientist leaves Meta to think
But these are intelligent uses of AI tools that add value to daily operations. According to Yann LeCun, an NYU professor and recently departed VP & Chief AI Scientist at Meta, speaking in a recent WSJ interview, the danger comes when the AI tools switch on and the humans turn their minds off! Current iterations of LLMs and agentic AI have less true learning ability than human children, and rumors of their power are greatly exaggerated.
True AI intelligence requires a different approach to learning
Prof LeCun believes Meta and other organizations have been focusing on LLMs, which will be of limited use in creating AI systems that can match human intelligence. Instead, his research aims to create systems that build internal “mental maps” of reality, learning physics and causality to predict outcomes and plan actions, moving beyond current LLMs’ text-based limitations toward more human-like intelligence.
These “world models” learn by taking in visual information, much like a baby animal or young child does. For the time being, AI remains a tool to make people more effective and intelligent in their professional roles, rather than something inherently “smarter” than humans.
You heard it here first!
The IBA Talking Propeller Heads episode with Shazam founding CTO Mike Karliner puts LLMs into perspective. It’s what we’ve said since the inception of ChatGPT and LLMs: they make good people smarter, and mediocre people worse.
For now, the focus should be on leveraging the best of both people and technology to ensure beneficial outcomes.
Remember, next time you ask ChatGPT for help – just because we can, doesn’t mean we should!
FOOTNOTE: In 1911, IBM founder Thomas J. Watson Sr., then at National Cash Register, urged employees to use their heads, not just their feet, for success. The idea became IBM’s core philosophy, inspiring its products (ThinkPad, ThinkCentre) and its culture of thoughtful innovation and problem-solving through its famous signs and branding.

He wrote “THINK” on a flip chart, telling his team that thinking was the key to success, and adopted it as a motto. “THINK” promotes innovation and problem-solving, and empowers employees to use their intellect. Perhaps YHIHF!
Jamie Kightley is Head of Client Services at IBA International.