‘What a piece of work is a man, how noble in reason, how infinite in faculty, in form and moving how express and admirable, in action how like an angel, in apprehension how like a god, the beauty of the world, the paragon of animals!
And yet to me, what is this quintessence of dust?’
— William Shakespeare, Hamlet
Artificial intelligence has always been a moving target. In 1997, when IBM's Deep Blue defeated Garry Kasparov at chess, many declared that machines had finally reached "real intelligence." Yet today, no one points to chess as the hallmark of AI. The same happened when IBM's Watson won Jeopardy! in 2011. Or when Google DeepMind's AlphaGo defeated Lee Sedol, a world champion of the ancient game of Go (Weiqi), in a five-game match in March 2016. Or when ChatGPT astonished millions by generating fluent, human-like text in 2022. Each time, what seemed like a breakthrough quickly became routine technology.
This constantly shifting benchmark exists because the definition of AI isn't fixed. Whenever machines master a task we once thought required human intelligence, we quietly redraw the boundary of what counts as intelligent. This phenomenon is known as the AI effect: as soon as AI succeeds at something, we stop calling it AI, discounting the behaviour of an artificial intelligence program as not "real" intelligence.
Look back over the last few decades and the pattern is clear. Each time, the moment of awe is followed by a subtle retreat: we move the goalposts. "Real AI," people say, must be the next thing machines can't yet do.
This is the paradox of AI: the more it succeeds, the less we see it as intelligent.
Beneath these shifting milestones lies a deeper debate about how intelligence itself should be defined. For decades, researchers have split into two camps.
Symbolic AI, also known as "Good Old-Fashioned AI," takes a rule-based approach: it treats intelligence as logic. If you can describe the rules of a problem clearly enough, like the moves in chess or the steps of solving an algebra problem, you can program a computer to follow them. Symbolic AI is like teaching by writing a detailed instruction manual.
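To make the contrast concrete, here is a minimal sketch of that rule-based style in Python: a tiny forward-chaining engine of the kind that powered classic expert systems. The facts and rules are invented purely for illustration; the point is that every scrap of knowledge must be written down by hand.

```python
# A toy forward-chaining rule engine: the symbolic style in miniature.
# Every piece of "knowledge" is an explicit if-then rule written by a human.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical rules, purely for illustration.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "can_swim"}, "is_penguin"),
]

facts = {"has_feathers", "lays_eggs", "cannot_fly", "can_swim"}
print(forward_chain(facts, rules))
# derives "is_bird", then "is_penguin" — logic all the way down
```

The strength of this style is transparency: you can trace exactly why the system concluded what it did. The weakness is brittleness: anything not captured in a rule simply doesn't exist for the system.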
Connectionist AI, on the other hand, is modelled on the brain. Instead of rules, it learns patterns by exposure to examples. Neural networks don't need explicit instructions; they train on vast amounts of data until they can make predictions or classifications. It's the difference between telling someone exactly how to ride a bike versus letting them practise until balance becomes second nature.
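Here is an equally small sketch of the connectionist style, assuming nothing beyond plain Python: a single artificial neuron learning the logical AND function from examples alone, using the classic perceptron update rule. Nobody ever tells it the rule; it stumbles into the right weights by correcting its mistakes.

```python
# A single neuron learns AND purely from examples (perceptron rule).
# No rules are written down; the knowledge ends up encoded in the weights.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the data are enough here
    for (x1, x2), target in examples:
        prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - prediction
        # Nudge the weights toward the correct answer after each mistake.
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

for (x1, x2), target in examples:
    output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    print((x1, x2), "->", output)  # matches the targets after training
```

Scale this idea up by millions of weights and many layers, train it on text or images instead of four toy examples, and you have the family of systems that dominates AI today.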
This divide is more than technical. It reflects a philosophical split that spans centuries: rationalism versus empiricism. Is knowledge built from reason and principles, or from experience and pattern recognition? AI has now become a modern stage for that ancient debate.
Today, neural networks dominate. They power image recognition, voice assistants, translation tools, and generative AI models, such as ChatGPT. Yet they still stumble on things that humans find simple: commonsense reasoning, multi-step logic, and grounded understanding.
That's why some researchers advocate for neuro-symbolic AI. These are systems that combine the statistical learning power of neural networks with the structured reasoning of symbolic approaches. Think of it as blending intuition with logic.
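As a rough illustration of the idea, here is a toy neuro-symbolic pipeline in Python. The "neural" component is a stub standing in for a real model (its guesses and confidences are invented), and the symbolic component is a hard logical check that vetoes answers the logic can prove wrong.

```python
# Toy neuro-symbolic pattern: a statistical model proposes, logic disposes.
# `neural_guess` is a hypothetical stand-in for a real neural network.

def neural_guess(question):
    """Stub for a neural model: fluent, confident, occasionally wrong."""
    guesses = {
        "7 * 8": [("54", 0.55), ("56", 0.45)],  # a confident hallucination
    }
    return guesses.get(question, [])

def symbolic_check(question, answer):
    """Hard logic: actually evaluate the multiplication and compare."""
    left, op, right = question.split()
    assert op == "*"
    return int(left) * int(right) == int(answer)

def answer(question):
    # Take the most confident guess that survives the logical check.
    for guess, _conf in sorted(neural_guess(question), key=lambda g: -g[1]):
        if symbolic_check(question, guess):
            return guess
    return "no verified answer"

print(answer("7 * 8"))  # -> "56": the wrong but confident guess is vetoed
```

Real neuro-symbolic systems are far more sophisticated, but the division of labour is the same: let learned intuition generate candidates, and let explicit logic keep them honest.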
The hope is that by merging pattern recognition with explicit rules, AI might overcome its most obvious blind spots: confidently hallucinating facts, failing basic logic puzzles, or struggling with tasks that require step-by-step reasoning. Whether this hybrid approach will lead to more "general" intelligence remains uncertain, but it shows how even research directions are shaped by shifting definitions of AI.
Perhaps the most interesting aspect of this story is not what it reveals about machines, but what it reveals about ourselves. Each time AI conquers a domain, whether it's chess, trivia, Go, or writing, we quietly demote that domain. We decide it no longer counts as "intelligence."
If machines can calculate, then calculation cannot be considered a form of intelligence. If they can write essays, then writing isn't an accurate measure of intelligence either. By moving the goalposts, we protect a shrinking circle of what we consider uniquely human.
This shifting definition of "intelligence" also reveals our anxieties. We fear being reduced to logic and pattern recognition. We worry that if machines can perform our tasks, we may lose our sense of purpose or uniqueness. But it also reveals something hopeful: intelligence, for us, is more than output. It's tied to creativity, judgment, and lived experience. AI forces us to confront what those qualities really mean, and what it truly means to be human.
Q: Why does the definition of AI keep changing?
A: Because each time machines master a task, we stop considering that task "intelligence." This is called the AI effect.
Q: What is the AI effect?
A: It's the idea that AI is whatever hasn't been achieved yet. Once a machine has achieved it, it no longer feels like AI.
Q: Is symbolic AI still relevant today?
A: Yes. While neural networks dominate, many researchers believe that hybrid systems, which combine rule-based approaches with learning, will be necessary for further progress.
Q: What is neuro-symbolic AI?
A: A hybrid approach that blends neural networks' pattern recognition with symbolic AI's logical reasoning.
Q: Can AI ever have one "true" definition?
A: Unlikely. AI is a field, not a single technology. Its definition will continue to shift as machines achieve more.
Artificial intelligence has never had a fixed definition. Each breakthrough redefines the frontier, and each frontier forces us to ask again: what is intelligence?
In this sense, AI is less a technology than a mirror. It reflects our evolving values, our shifting sense of self, and our ongoing quest for what makes us human.
👉 If you found this helpful, subscribe to The Intelligent Playbook — a free newsletter with actionable tips, prompts, and strategies to help you get the most out of AI in your everyday work. And if you know someone who’s curious about AI, share this with them too.
Further Reading:
Pamela McCorduck, Machines Who Think (classic AI history text, widely cited for “AI effect”).
Big Tech’s ‘fake-it-till-you-make-it’ hype is now infecting AI
Related Articles:
Only humans, please.