[Image: edited notebook pages with redlined text and strikethrough marks, showing generic AI writing being fixed]

When AI content sounds generic (and how to fix it).

You can feel it in the first sentence. There’s a smell. You know a human didn’t write this, even when you can’t say why. Here’s what causes that: five missing anchors, and the four-step fix.

Open a fresh ChatGPT tab. Type, “Write me a LinkedIn post about productivity.” Hit enter. Read what comes back. You’ll see it: a clean structure, an opening hook that’s too neat, three bullet points with the same rhythm, a sign-off question, and the unmistakable sense that this could have been written by anyone, for anyone, about anything.

That’s not the model being bad. The model is doing exactly what it was asked. The problem is that “write a post about productivity” gives it nothing specific to be loyal to. It defaults to the average of every productivity post on the internet, which is, statistically, mush.

Why generic happens.

A model writes like the average of what it was trained on, unless you anchor it somewhere specific. Without anchors, it pulls toward the middle: the smoothest sentence, the most-common phrasing, the safest verb. Beige.

Five anchors flip this. Each one gives the model something to grip.

  • Words and phrases you actually use (and ones you’d never say).
  • Sentence-length pattern (do you write short snaps, or long winding paragraphs?).
  • Stance (warm, dry, contrarian, encouraging, blunt).
  • Stories you tell over and over (the ones clients hear in the first session).
  • Specifics from inside your business (real client moments, real numbers, real objects).

The four-step fix.

1. Write a voice document.

Not a brand guideline PDF with mission statements. A working document the model can actually use. Banned phrases, preferred phrases, two paragraphs that sound exactly like you, your stance on three or four topics in your field. Two pages. That’s enough to change every output.

2. Feed it three real examples of your writing.

An email you wrote a client. A voice note you transcribed. A messy first draft you scrapped because it was too raw. The rawer, the better. The model needs evidence of how you really sound, not how you wish you sounded.

3. Ask for specifics, not categories.

“Write a post about consistency” gets you mush. “Write a post about the chiropractor who tells every new patient about her own back surgery in the first visit, and why that’s the whole brand” gets you a post. Specificity at the prompt is half the work.

4. Edit out the AI tells.

Even a well-prompted output has tells. The biggest one: em-dashes. They’re the AI fingerprint that nobody told you about. Strip them. Replace them with commas, semicolons, or new sentences. While you’re there, kill phrases like “in today’s fast-paced world,” “let’s dive in,” and “leverage” as a verb. Five minutes of editing takes a draft from 80 percent of the way there to 95.
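Part of that edit pass is mechanical enough to script. Here is a minimal sketch in Python, if you want it; the banned-phrase list and the sample draft are placeholders for your own, and the em-dash swap uses a plain comma because that’s the simplest safe default.

```python
import re

# AI tells from the edit pass above; extend with your own banned phrases.
BANNED_PHRASES = [
    "in today's fast-paced world",
    "let's dive in",
    "leverage",
]

def strip_em_dashes(text: str) -> str:
    """Swap em-dashes (and double hyphens used as dashes) for a comma and a space."""
    text = re.sub(r"\s*—\s*", ", ", text)
    text = re.sub(r"\s*--\s*", ", ", text)
    return text

def flag_banned_phrases(text: str) -> list[str]:
    """Return the banned phrases found in the draft, for rewriting by hand."""
    lowered = text.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in lowered]

if __name__ == "__main__":
    draft = "In today's fast-paced world, we leverage AI — let's dive in."
    print(strip_em_dashes(draft))
    print("Rewrite by hand:", flag_banned_phrases(draft))
```

The em-dash swap is safe to automate. The phrase rewrites aren’t, which is why the script only flags them and leaves the fixing to you.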

The voice document is the leverage point.

Of the four steps, the voice document is the one most people skip and the one that earns the most. Specificity at the prompt helps. Editing the tells helps. But neither holds across a hundred outputs the way a real voice document does. Build it once, paste it into every session, and the model carries your fingerprint into work you haven’t even thought of yet.

What goes in it is straightforward and deliberately un-fancy. Three or four phrases you use weekly. Three or four phrases you’d never say. The way you start an email. The way you close one. The thing you say when a prospect asks what makes you different. The kind of analogies you reach for. Whether you swear, and if so, where the line is. Two paragraphs of you writing about anything, copy-pasted in raw, typos included.

If that sounds like a lot, it’s two pages. The reason it lifts every output is that the model now has dozens of small anchors instead of vibes. It stops guessing what you sound like, because you told it.
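If you work with the model through an API instead of the chat window, “paste it into every session” can be wired in once. A minimal sketch, assuming the OpenAI Python client, a gpt-4o model name, and a voice document saved as voice_doc.txt; all three are placeholders for whatever you actually use.

```python
from pathlib import Path
from openai import OpenAI

# The two-page voice document, loaded once and sent with every request.
VOICE_DOC = Path("voice_doc.txt").read_text(encoding="utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def write_in_voice(prompt: str) -> str:
    """Send a specific prompt with the voice document as the standing instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": VOICE_DOC},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# A specific prompt, not a category (step 3 above).
print(write_in_voice(
    "Write a post about the chiropractor who tells every new patient "
    "about her own back surgery in the first visit, and why that's the whole brand."
))
```

The point isn’t the API. It’s that the voice document rides along with every request instead of depending on you to remember to paste it.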

What this means for you.

If you’ve been blaming the model for sounding generic, the model is taking the blame for a setup problem. The setup is the voice document, the examples, the specificity, and the edit pass. Do those four things and the same model that gave you mush yesterday gives you something a client recognizes today.

If you don’t want to build that setup yourself, that’s actually what we do. Our intake is 38 questions specifically because three questions, which is what most services capture, isn’t enough information to anchor a model away from beige. Thirty-eight is.

Common questions.

Is ChatGPT or Claude better for brand voice?

Both work, with different temperaments. Claude tends to follow voice instructions more literally; ChatGPT tends to want to “improve” your voice toward its average. We use both, depending on the platform we’re writing for.

Why does my output still sound off even after a voice document?

Usually one of three reasons: the voice document is too aspirational (describing who you want to sound like, not how you actually sound), there are no real writing samples attached, or the prompt is too vague. Start with the prompt. Get specific.

How long should a voice document be?

Two pages of working material beats twenty pages of brand-book theater. The model uses the first 2,000 words it sees more than the next 18,000.

What’s the fastest single edit I can make?

Strip every em-dash and replace with appropriate punctuation. That one change makes AI prose look human faster than anything else.

We do the voice work for you.

38-question intake, voice document, first ten pieces. If it doesn’t sound like you, money back. See the board, see the pieces, then decide.