You ask a question. It answers instantly. The speed feels like intelligence. The confidence feels like truth.
Half a century ago, moviemakers and sci-fi writers predicted this moment, depicting massive machines that spit out ticker-tape answers.
We call our interactions with ChatGPT, Claude or other large language models “conversations” when they're nothing of the sort. You're speaking to a mirror that translates your language into math and reflects patterns back at you: fluent, fast, but without comprehension. We've mistaken performance for perception. The more persuasive the output, the easier it becomes to forget what we're actually dealing with.
What We've Actually Built
I've started using a term of my own: Parrotware. I coined it not to diminish the power of models like ChatGPT (I use these tools daily and find them invaluable), but to reframe their essence and to remind us what we’re doing and why. What we call "AI" is not intelligence. It's mimicry.
These models don't know what they're saying; they remix what humans have already said, written, drawn, painted, taught, photographed, sung, and invented. They don't think; they parrot. Treating that output as wise or authoritative is a cultural error with escalating consequences.
We’ve built a pattern machine: a sophisticated tool that analyzes past input and synthesizes new combinations of familiar forms. It’s a talking card catalog on speed. Imagine Google with a personality disorder.
It doesn’t know anything, but it will answer everything. You’re not using intelligence. You’re using a mimic. Parrotware repeats patterns with confidence—but without a flicker of comprehension.
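If you want to see the parrot mechanism with the feathers off, here's a toy sketch in Python. It's a simple bigram model, entirely my own illustration, with nothing like the scale or machinery of a real LLM, but the core move is the same: predict the next word purely from patterns in text it has already seen, with no idea what any of it means.

```python
import random
from collections import defaultdict

# A toy parrot: record which word follows which in a tiny corpus,
# then replay those transitions. No meaning is ever represented.
corpus = "the cat sat on the mat and the cat saw the bird".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def parrot(seed, length=8):
    """Generate fluent-looking text by pure pattern replay."""
    word, output = seed, [seed]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break  # the parrot has never heard this word before
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

print(parrot("the"))  # e.g. "the cat sat on the mat and the bird"
```

Scale that trick up to billions of parameters and trillions of words and the output starts to sound like us. What you don't get, at any scale, is a machine that knows what it's saying.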
And I want to remind all of us: That’s where you come in. You’re the human. You’re the sentient being.
Some might contend that the term 'Parrotware' doesn't fully capture the astonishing complexity and seemingly novel outputs of today's advanced large language models (LLMs). The sheer scale of the training data (billions upon billions of words, images, and other forms of human expression) allows them to detect statistical patterns so intricate that the outputs can appear creative, insightful, or even intuitive.
But this perceived “more” is precisely the illusion that the term 'Parrotware' seeks to highlight. The surprising outputs are a testament to the immense power of pattern recognition at scale, not a flicker of independent thought. The “aha!” moments we experience when interacting with them are, in fact, our own cognitive biases projecting intelligence onto a machine.
This phenomenon, far from disproving the 'Parrotware' thesis, actually serves as its most compelling demonstration.
Parrotware is beginning to prompt the same concerns Nicholas Carr raised years ago in his Atlantic essay “Is Google Making Us Stupid?” and his bestselling book “The Shallows.” Recent research at MIT found that subjects who relied on ChatGPT showed less brain activity than control groups who used it sparingly or not at all.
I see these concerns as valid and worth discussing, yet I also remember that the worry is an ancient one, analogous to the group freakout we're having at the moment about a supposed AI jobpocalypse.
Socrates famously opposed writing things down, arguing in Plato’s “Phaedrus” that it would diminish human intellectual capacity:
"If men learn this [writing], it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."
The Parrotware Slop Problem
John Oliver recently skewered the wave of AI-generated content flooding the internet. Deepfake presidents playing piano on a talent show. Anthropomorphized feline families experiencing terrible tragedy. AI-generated family vlogs where no one is real.
Parrotware doesn't just produce slop: it produces convincing slop, and convincing slop tempts us to stop distinguishing signal from noise.
Researchers have documented how AI-generated medical misinformation spreads on platforms like Reddit, often in an authoritative tone with confident phrasing, and gets upvoted by real users. In one recent case, a user interacting with a “therapy chatbot” was advised to take a “small hit of meth to get through [the] week.”
We don't need to worry about sentient machines. As I've already argued, they don't exist, and they won't exist any time this century, if ever.
Instead, let’s pause to consider overconfident humans misusing non-sentient tools. The real danger isn't what Parrotware will decide to do. It's what we will delegate to it without scrutiny: decisions around hiring, policing, health care, education, justice, and war.
These are human questions, with human stakes, meant for human minds.
Pope Leo XIV has been unambiguous: “In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” He added, “It must not be forgotten that artificial intelligence functions as a tool for the good of human beings, not to diminish them, not to replace them.”
How I Think About This
I'm bullish on the power of these tools when used with discernment. But I try to use them like a microscope or telescope: to enhance perception, not replace it. Parrotware can extend human creativity, but only if humans stay in the loop as authors, editors, and interpreters (note: I’ll have a post on the importance of authorship coming up soon).
Colleagues I work with, clients, and my own internal monologue keep asking: how do we use these tools responsibly? As a rough draft, I developed what I call a Parrotware Review Index (sketched in code just after the list):
Have you fact-checked it?
Are references authentic and traceable?
Is the tone aligned with your values and audience?
Does it add value or just fill space?
Are you using it to think better, or just to think faster?
Would you publish the output under your name without apology?
If it misleads, who owns the mistake?
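For what it's worth, the index fits in a few lines of code. This is a hypothetical sketch, the names are my own invention, and I've recast the last question in yes/no form. The point is the punchline: a program can hold the questions, but only a human can answer them.

```python
# Hypothetical sketch: the Parrotware Review Index as a publish gate.
REVIEW_INDEX = [
    "Have you fact-checked it?",
    "Are references authentic and traceable?",
    "Is the tone aligned with your values and audience?",
    "Does it add value or just fill space?",
    "Are you using it to think better, or just to think faster?",
    "Would you publish the output under your name without apology?",
    "If it misleads, do you own the mistake?",
]

def ready_to_publish(answers):
    """Refuse to publish until a human has answered yes to every question."""
    return all(answers.get(question, False) for question in REVIEW_INDEX)
```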
The Voice That Speaks First
We begin by naming the tool accurately. Parrotware is not a mind. It is a mimic. It assembles from what we've already said, but it cannot know what we meant.
We cannot delegate meaning. We cannot automate agency. We cannot outsource conscience.
The answer to Parrotware's rise isn't fear. It's leadership and ownership.
So let the machine mimic. Use the tool. But let the human decide. You hold memory. You hold context. You hold responsibility.
Parrotware is not your replacement. It's not your rival. It's not your prophet. It's a reflection engine, trained on the past, wearing a mask of fluency. The story it tells still depends on who is prompting it and why.
If you're looking for the future of real intelligence, you won't find it in the machine. You'll find it in the human being staring back at you in the mirror.
P.S. This essay was written with the aid of Parrotware. Of course it was. And that underlines the point.