I asked an LLM to explain how search engines work.
It wrote a 1,200-word essay on the true nature of knowledge, referenced six dead philosophers, and apologised for not being helpful.
Meanwhile, Google just showed me a 3-step guide and a diagram.
Moral of the story: One knows the answer. The other wants to be the answer.
AI as a writing tool has brought excitement, fear, and more than a little confusion. Everyone seems to be experimenting with ChatGPT, Gemini, or Claude in the hope of finding shortcuts to better writing. Some are thrilled, others are sceptical, and many are simply overwhelmed.
I've often found it interesting that when I tell people I use AI extensively in my writing process, I usually get a sympathetic "oh, it's understandable" look or a sneaky "Me too!" glance, as if AI were some forbidden fruit and the subtext were "Oh, you naughty boy".
At the heart of this is a technology most people don’t fully understand: the Large Language Model (LLM). These systems are not oracles that “know everything.” They’re predictive engines that generate text, one word at a time, by estimating the most likely next word.
That’s right. AI is similar to a weather forecast. Impressive and useful, but never entirely certain.
For writers, understanding this isn’t about becoming technical. It’s about learning how to guide the tool so it becomes a creative partner that amplifies your skills instead of replacing them.
If you're a writer, LLMs matter because they change how fast and flexibly you can work. They're not here to replace your voice, but to support it.
However, to use them effectively, you need to understand what they are and what they are not.
“Large” = Scale of Training
An LLM is trained on billions of words from books, articles, websites, and more. That doesn't mean the AI reads and remembers books the way we do; it doesn't feel sad or inspired by a good novel or article.
Instead, all that text is broken into tiny pieces called tokens (like syllables or word fragments). During training, the model practises predicting the next token, again and again, billions of times. Over time, it becomes very good at spotting patterns in language rather than understanding or memorising content.
Think of it as learning the rules of how words fit together, rather than memorising a giant library. This massive dataset allows it to recognise patterns in how words, sentences, and ideas tend to fit together.
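To make the idea concrete, here is a toy sketch in Python. This is an illustration only: real LLMs use learned subword tokenizers (such as byte-pair encoding) and neural networks, not word splits and simple counts.

```python
from collections import Counter

# Toy illustration: split a tiny "training corpus" into tokens and
# count which token tends to follow which. This is the kind of
# pattern the article describes, at a vastly smaller scale.
text = "the cat sat on the mat . the dog sat on the rug ."
tokens = text.split()  # crude stand-in for real tokenization

# Count every adjacent pair of tokens.
pairs = Counter(zip(tokens, tokens[1:]))

# After "the", which tokens were seen, and how often?
follows_the = {b: n for (a, b), n in pairs.items() if a == "the"}
print(follows_the)  # {'cat': 1, 'mat': 1, 'dog': 1, 'rug': 1}
```

Nothing here "understands" cats or rugs; the counts simply record how words have fitted together before.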
“Language Model” = Predicting the Next Word
At its core, an LLM is a predictive system. It doesn’t “think.” It doesn’t “know.” It calculates probabilities.
Think of it like your phone's autocomplete, but supercharged. If you type "Once upon a …", your phone might suggest "time." An LLM does the same thing at scale: given all the words that came before, it predicts what comes next with remarkable fluency, even across a 70,000-word novel.
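The "supercharged autocomplete" idea can be sketched in a few lines of Python. This is a deliberately tiny stand-in built on a made-up miniature corpus, nothing close to a real LLM, but the principle of "pick the statistically most likely next word" is the same.

```python
from collections import Counter, defaultdict

# A miniature autocomplete: learn next-word frequencies from a tiny
# corpus, then predict the most likely continuation. Real LLMs do
# this with billions of parameters and far longer context windows.
corpus = ("once upon a time there was a writer . "
          "once upon a time there was a reader .").split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict(word):
    # Return the statistically most likely next word.
    return transitions[word].most_common(1)[0][0]

print(predict("upon"))  # "a"
print(predict("a"))     # "time" (seen twice, vs "writer"/"reader" once each)
```

Notice that `predict` never asks whether its answer is true or sensible; it only reports what followed most often in the past.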
LLMs as Predictive Models, Not Sentient Beings
Here’s the key distinction many people miss:
When an LLM generates text, it is not checking facts; it’s calculating likelihoods. If “Paris is the capital of France” appears frequently in training data, the model will likely generate that correctly. But if a rare or ambiguous question is asked, it may “hallucinate” a confident-sounding but false answer.
LLMs don't possess knowledge in the human sense. They don't retain "facts" or have lived experiences. Instead, they store statistical relationships between words: patterns that tell them what's likely to come next, not what's true.
An LLM also cannot know when it’s wrong. Because it does not store facts, there is no internal check that says, “Oops, that was false.” It wasn’t designed to evaluate truth, only to generate fluent text.
When an LLM generates a “fact,” it’s not retrieving a truth. It’s offering the most statistically probable fact-shaped sentence based on patterns it has seen. Sometimes, that matches reality; sometimes, it doesn’t.
This is why fact-checking and human oversight are essential. The model’s strength is fluency, not accuracy. Writers must bring judgment, context, and fact-checking to the table.
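That gap between fluency and accuracy can be shown in miniature. The sketch below is an illustration, not a real model: it samples whatever continuation is statistically plausible, with no step that checks the result against reality.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus of true sentences. Individually, every sentence here
# is accurate; the model only learns which words follow which.
corpus = ("paris is the capital of france . "
          "lyon is the capital of gastronomy . "
          "paris is the city of light .").split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        options = transitions[out[-1]]
        # Sample in proportion to observed frequency:
        # fluency, not fact-checking.
        out.append(random.choices(list(options),
                                  weights=options.values())[0])
    return " ".join(out)

print(generate("paris"))
# May produce e.g. "paris is the capital of gastronomy ." or
# "paris is the city of france ." - fluent, fact-shaped, and false.
```

Every word transition the generator uses was genuinely learned from true sentences, yet recombining them can still yield confident nonsense. That, in miniature, is a hallucination.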
Think of it as a tireless writing assistant that never runs out of suggestions.
It’s powerful, but it’s not a brain. It’s a tool.
The writers who thrive won’t be the ones who fight AI, but the ones who use it wisely.
Your role remains essential:
AI helps navigate, but you’re still the driver.
Q: Is an LLM the same as a search engine?
A: No. A search engine indexes and retrieves existing documents; an LLM generates new text based on learned patterns. Importantly, neither “knows” what is true. A search engine reflects the reliability of its sources, while an LLM outputs what seems most plausible. In both cases, truth depends on evidence and your judgment.
Q: Can I rely on LLMs for factual accuracy?
A: Not completely. LLMs excel at fluency, not truth. They can produce accurate statements, but they can also “hallucinate” false ones with the same confidence. Use them as a starting point, but always verify with trusted sources.
Q: Why can’t AI know when it’s wrong?
A: Because LLMs weren’t designed to recognise truth. They don’t store facts or have awareness. They generate statistically likely sequences of words, not meanings. Without lived experience or self-reflection, they have no internal check to signal error.
Q: Will AI replace writers?
A: Not in the sense many fear. But the profession will change. Writers who embrace AI as a tool will outpace those who don't, writing faster and exploring more ideas while reserving their energy for what only humans can bring: voice, insight, and judgment.
Q: Do I need coding to use an LLM?
A: No. Modern AI tools like ChatGPT, Gemini, and Claude are built for natural language use. If you can type a question, you can use it. Technical knowledge is helpful only if you want to develop custom applications, not to benefit from AI on a day-to-day basis.
Large Language Models are changing the writing landscape. They’re not magical oracles but predictive tools. For those who fear them, it’s tempting to imagine a future where machines replace writers. But for those who learn to guide them, the opposite is true: AI becomes a multiplier of creativity, efficiency, and impact.
The future isn’t about choosing between humans and AI. It’s about humans using AI well.
👉 If you found this helpful, subscribe to The Intelligent Playbook — a free newsletter with actionable tips, prompts, and strategies to help you get the most out of AI in your everyday work. And if you know someone who’s still confused about AI, share this with them too.
Related Articles:
Step-by-Step Prompting: A Practical Workflow That Works Better
What is a Personal Writing Style Sheet, Why It’s Important, and How to Use It.