We have all felt it. You open an email or a LinkedIn update, and while the grammar is flawless and the tone impeccably professional, something feels… off. It is a specific kind of sterility, a linguistic uncanny valley where the words are correct but the soul has been vacuumed out of the room. For years, we marveled that AI could learn to sound like us. Now a more unsettling reality is setting in: we are starting to sound like the AI.
This isn’t just a vibe or a quirk of the corporate world; it is a measurable shift in human cognition and communication. We are witnessing a Great Flattening of the human voice, in which the unpredictable, jagged, idiosyncratic edges of our speech are being sanded down by the invisible hand of the Large Language Model (LLM). When we outsource the struggle of writing to a machine, we aren’t just saving time; we are eroding the very process by which we think.
The Architecture of the Average
The evidence is no longer anecdotal. A study from the University of Southern California analyzed scientific journals, local news, and social media, discovering that diversity in writing style plummeted following the release of ChatGPT. We are moving toward a standardized global dialect, a sort of linguistic beige that prioritizes predictability over personality.

The Max Planck Institute for Human Development took this a step further, reviewing 740,249 hours of content to track how AI’s favorite vocabulary is bleeding into our daily lives. Words like “delve,” “meticulous,” “boast,” and “comprehend” (words that are perfectly fine in isolation but used with suspicious frequency by LLMs) are now appearing more often in everyday human conversation.
“People get used to this idealized, very predictable form of language, and even people who are not using it, in order to have that sense of powerful, influential writing, they start writing more like LLMs,” says Morteza Dehghani, a professor at the University of Southern California.
This creates a dangerous feedback loop. As a Brookings survey indicates, 32% of small businesses already use AI for outreach, and 16% of individuals use LLMs for social communication. When the majority of our digital environment is saturated with this idealized prose, we begin to perceive the “AI style” as the gold standard for professionalism, pushing us further toward the “LinkedIn average.”
The Technical Trap of Model Collapse
There is a deeper, more systemic risk at play here that goes beyond individual habit: the phenomenon of model collapse. This occurs when AI models are trained on data generated by other AI models rather than original human output. As the internet becomes flooded with synthetic text, the AI begins to “eat its own tail,” losing the rare, nuanced, and outlier data points that make human language rich.

When the training data lacks diversity, the AI’s output becomes even more homogenized. This creates a recursive loop of blandness. If the AI only sees “average” writing, it only produces “average” writing, and humans—striving for that polished, corporate sheen—copy that average back into the wild. We are effectively automating the death of linguistic evolution.
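The tail-forgetting dynamic is easy to see in a toy simulation. In this deliberately minimal sketch (not a model of real LLM training; the Gaussian fit simply stands in for a model that learns only the "average" of its data), each generation fits a normal distribution to its training data and then generates the next generation's data by sampling from that fit. The rare outlier cluster, the "weird 1%," disappears almost immediately and never comes back.

```python
import random
import statistics

# Toy illustration of model collapse: each "model" learns only the mean
# and standard deviation of its training data, then produces the next
# generation's training data by sampling from that Gaussian fit.
random.seed(42)

# Generation 0: mostly average writing, plus a rare cluster of outliers
# standing in for unusual, creative language.
data = [random.gauss(0, 1) for _ in range(10_000)]
data += [random.uniform(8, 12) for _ in range(100)]  # the weird 1%

for gen in range(5):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    outliers = sum(1 for x in data if x > 5)
    print(f"gen {gen}: sigma={sigma:.2f}, outliers above 5: {outliers}")
    # The next model trains only on what the previous model produced.
    data = [random.gauss(mu, sigma) for _ in range(10_100)]
```

Generation 0 contains all 100 outliers; by generation 1 the Gaussian fit has already forgotten the distinct cluster at the tail, and later generations can never recover it. That is the recursive loop of blandness in miniature.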
“When AI models train on synthetic data, they begin to forget the tails of the distribution—the rare words, the weird phrasing, the creative leaps. We are essentially creating a digital echo chamber that filters out the very essence of human creativity,” says Dr. Elena Rossi, a computational linguist.
The Cognitive Cost of the Simple Path
As a journalist, I’ve always believed that writing is not just a way to record thoughts; it is the act of thinking itself. When you struggle to find the right word or wrestle with sentence structure, you are refining your understanding of the subject. By bypassing that struggle, we bypass the cognitive heavy lifting required for deep thought.
Emily Bender, a linguist at the University of Washington, warns that this chase for polish is a trap. The “struggle” is where the learning happens. When we let an LLM synthesize our thoughts, we aren’t just delegating a task; we are delegating our intellect.
“There is value in the struggle of writing, because we learn to express ourselves, and we learn to do the thinking that happens as we’re writing. Each time we choose not to do that, we are losing out, both individually and societally,” says Bender.
This is why Alex Mahadevan of the Poynter Institute for Media Studies describes AI writing as “soulless” and “mediocre.” It is grammatically perfect but artistically void. It lacks what Mahadevan calls “good bad writing”: prose that is slightly flawed, perhaps a bit too passionate or oddly phrased, but deeply engaging because it feels like a real person is on the other end of the line.
Reclaiming the Human Glitch
So, where does that leave us? If we continue down this path, we risk a future where every professional email, every news report, and every social post feels like it was written by the same polite, mid-level bureaucrat. The “LinkedIn average” is a comfortable place to hide, but it is a graveyard for original thought.

The antidote is intentional imperfection. We need to embrace the human glitch: the quirks of voice, the bold metaphors, the occasional deliberately fragmented sentence. The goal shouldn’t be to avoid AI entirely (that’s a losing battle) but to use it as a scaffold, not a replacement.
The next time you’re tempted to hit “rewrite for professional tone,” try the opposite. Lean into the friction. Be a little too descriptive. Be slightly too blunt. Use a word that an LLM would never think to pair with another. In an age of synthetic perfection, the most valuable currency we have is authenticity, and authenticity is almost always a little bit messy.
I want to hear from you: Have you noticed your own writing changing since you started using AI? Do you find yourself second-guessing your natural voice to avoid sounding “unprofessional,” or are you actively fighting the “LinkedIn average”? Let’s talk about it in the comments.