# How to Actually Write With LLMs

Three years. That's how long I spent trying to get writing out of a language model that didn't read like a language model wrote it. Custom system prompts, voice DNA documents, anti-pattern databases with 20 documented failure modes, style guides that ran longer than the articles they were supposed to produce. I catalogued vocabulary tics down to the em dash frequency ratio (AI drops one every 55 words; humans use maybe 2-3 per 500). I could tell you exactly why a piece of AI writing felt robotic.
I just couldn't make it stop.
So I did what engineers do: I built more infrastructure. Better prompts. Richer examples. Longer instructions. Quantified the 5 structural tells that flag machine authorship, created templates for every content type, maintained a living archive of what worked and what didn't. The diagnosis got sharper every month but the output did not change much.
Then last week, while writing a technical article for this blog, something broke through. The quality gap between this session and every session before it was so wide that I stopped writing to reverse-engineer what had changed. It was not the model, and it was not the prompt. It was the process.
## The Problem With One-Shot Writing
The standard approach looks like this: you write a detailed prompt describing your voice, your style, your anti-patterns, maybe paste in some examples of your writing, and ask the model to produce a finished article. One shot. Full quality. Ready to publish.
This does not work. It can't work, for the same reason you cannot ask a human writer to simultaneously optimize for structure, voice, rhythm, transitions, and audience engagement in a single draft. These concerns interfere with each other. When the model is trying to get the information architecture right, it falls back on safe sentence patterns. When it shifts focus to sentence structure, it loses the argument's thread. And when it tries to nail transitions, it over-explains everything because it is juggling too many priorities at once.
The answer, once I found it, felt almost too simple: stop asking for a finished product and start asking for layers.
## Five Passes, One Job Each
If one-shot writing fails because of competing concerns, the natural fix is to remove the competition entirely. You separate the writing process into five sequential passes, and you give each pass exactly one job. Nothing else matters during that pass. Just the one thing.
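A minimal sketch of that loop, assuming a generic `model` callable (any function that takes a prompt and returns text; the names and pass instructions here are illustrative, not a real API):

```python
# One instruction per pass. Each pass sees only the previous pass's
# output, so it has exactly one job to optimize for.
PASSES = [
    "Fix structure and content only. Ignore voice and style.",
    "Remove AI anti-patterns: filler transitions, meta-narration, "
    "insight announcements, em dash overuse. Change nothing else.",
    "Vary sentence length and openers to break the monotone rhythm.",
    "Add transitions and contextual framing between sections.",
    "Rewrite the opening hook now that the article is finished.",
]

def layered_rewrite(draft: str, model) -> str:
    """Run each pass over the previous pass's output, one job at a time."""
    text = draft
    for instruction in PASSES:
        text = model(f"{instruction}\n\n---\n\n{text}")
    return text
```

Because `model` is injected, the same pipeline works with any LLM client, or with a stub for testing.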
### Pass 1: Structure and Content
Get the bones right. Do not wordsmith, do not worry about voice. Just get the information in the right order, with the right sections, and make sure nothing is missing or redundant. Let the model write in its default AI voice because it genuinely does not matter at this stage.
This is where most people start and also where they stop. They get a structurally sound article written in AI-ese and then try to "fix the tone" in a single editing pass, which is asking one round of revision to do the work of four.
### Pass 2: AI Anti-Pattern Removal
With the structure locked down, you can finally go after the thing that actually makes AI writing feel like AI writing. This is the pass I had been trying to do from the very start, and the reason it never worked before is that the bones were still moving underneath it.
You tell the model to strip out the structural tells that flag machine authorship. Specifically:
- Filler transitions ("But here's the thing," "More fundamentally")
- Meta-narration ("In this article, I'll walk through")
- Insight announcements ("The key insight is that," "The deeper principle:")
- Rhetorical flips ("isn't X — it's Y") used more than once
- Anaphoric staccato ("No X. No Y. No Z. Just W.")
- Em dash overuse
- Superlative filler ("in its purest form")
- Preemptive hedging ("The natural concern is...")
On my article, this pass cut about 30% of the filler that I would not have noticed on a first read. Phrases that seemed perfectly fine in context turned out to be structural tics, the kind of crutch the model reaches for when it does not know how to move between ideas organically.
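A cheap pre-flight check can catch the most mechanical of these tells before the model pass runs. This is a rough sketch with hand-picked regexes, not a complete detector; the em dash threshold borrows the ballpark ratio from the intro (humans: roughly 2-3 per 500 words):

```python
import re

# Rough patterns for a few of the tells listed above.
ANTI_PATTERNS = {
    "filler transition":    r"\bBut here's the thing\b|\bMore fundamentally\b",
    "meta-narration":       r"\bIn this article\b|\bI'll walk through\b",
    "insight announcement": r"\bThe key insight is\b|\bThe deeper principle\b",
    "superlative filler":   r"\bin its purest form\b",
    "preemptive hedging":   r"\bThe natural concern is\b",
}

def scan(text: str) -> list[str]:
    """Return the names of every tell found in `text`."""
    hits = [name for name, pat in ANTI_PATTERNS.items()
            if re.search(pat, text, re.IGNORECASE)]
    # Em dash overuse: flag anything denser than ~3 per 500 words.
    words = max(len(text.split()), 1)
    if text.count("\u2014") / words > 3 / 500:
        hits.append("em dash overuse")
    return hits
```

A scanner like this only flags the literal phrases; the model pass is still what rewrites them into something organic.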
### Pass 3: Sentence Structure and Rhythm
Once the anti-patterns are cleared away, you start to hear the rhythm problem that was hiding underneath them. AI writes in a monotone. Almost every sentence follows Subject-Verb-Object, lands at medium length, stays declarative. Paragraph after paragraph of the same cadence. It reads like a textbook that is trying very hard not to offend anyone.
This pass is about breaking that monotone open:
- Length variation. A long sentence that builds up context and takes its time, followed by a short one. Four words. Then something medium length to reset.
- Varied openers. Start with a prepositional phrase. A condition. A fragment. Put the object before the subject for once.
- Fragments. Real writers use them all the time. Not every thought needs a verb to land.
- Delayed subjects. "What Zanzibar proved wasn't theoretical" hits differently than "Zanzibar proved this wasn't theoretical." The delay creates tension that the direct version skips over.
A good test: read any three consecutive sentences. If they all have the same structure, at least one needs to change.
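That test can be roughly automated. The sketch below assigns each sentence a crude "signature" (length band plus whether it opens with an article or pronoun, a stand-in for real parsing) and flags any window of three sentences that share one:

```python
import re

def monotone_runs(text: str, window: int = 3) -> list[tuple[str, ...]]:
    """Flag runs of `window` consecutive sentences with the same crude
    structure signature. A heuristic, not a grammar."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
                 if s.strip()]

    def signature(s: str) -> tuple[str, str]:
        n = len(s.split())
        band = "short" if n < 8 else "medium" if n < 20 else "long"
        # Opening with an article or pronoun is the default SVO tell.
        opener = s.split()[0].lower()
        kind = "subject" if opener in {"the", "a", "an", "it", "this",
                                       "we", "i", "they", "you"} else "other"
        return band, kind

    sigs = [signature(s) for s in sentences]
    return [tuple(sentences[i:i + window])
            for i in range(len(sigs) - window + 1)
            if len(set(sigs[i:i + window])) == 1]
```

Any run it returns is a candidate for the fixes above: vary the length, change the opener, or break one sentence into a fragment.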
### Pass 4: Transitions and Contextual Framing
At this point, each section reads well on its own. But the space between sections is where the article falls apart. This is the pass most people skip entirely, and it is the reason so much AI-assisted writing reads like a slide deck that someone converted to prose. Each section arrives cold, with no connection to what came before, and the reader has no idea why they should care about what comes next.
Good transitions do the bridging work that makes a reader feel carried through the argument rather than dropped into disconnected rooms. The article I was writing compared different authorization models for our platform, and in the original draft each model got its own section with no connection between them. After this pass, sentences like "Given those requirements, we evaluated the standard authorization models" told the reader exactly why the next section existed. Compare that to just dropping a heading and launching into an explanation with no context at all.
There is a related technique I started calling "context-leading sentences," where you ground the reader in WHY before delivering WHAT. Instead of opening a section by defining a concept, you open by explaining the problem it solves. The problem-first opening earns your attention by giving you a reason to care; the definition-first opening assumes you already do.
### Pass 5: The Hook
Write the opening last. You genuinely need to know what the article has become before you can figure out how to sell it. The article I was working on was a deep dive into how we handle authorization at Sift, and the hook went through three iterations. Each one taught me something about why the previous version was wrong:
- The first version opened with an insider anecdote about a specific technical concept. Funny if you have spent your career in that domain. Completely meaningless if you have not.
- The second version opened universal ("Every product that aggregates data from multiple sources eventually hits the same wall") and moved the anecdote to paragraph two, where it became supporting evidence rather than carrying the entire opening on its own.
- The third version added a conceptual frame that connected our approach to a landmark paper in a different field, drawing a parallel where one elegant mechanism replaced an entire stack of complex machinery in both domains.
Each iteration was better because I understood the article more deeply by the time I wrote it. If I had written the hook first, I would have anchored on the wrong thing and the rest of the article would have bent toward a weaker frame.
## How to Direct Each Pass
Five passes only work if you are giving the model specific instructions at each one. The quality of each pass depends entirely on how precisely you direct it, and most advice about "prompting" treats that specificity as optional when it is actually the thing that matters most.
Effective direction names both the dimension and the direction you want to move in:
- "The transitions are dull and abrupt, and my writing is the opposite of those things" (you know what is wrong and you know how you want it to feel instead)
- "Fix sentence structure, too many sentences start with the subject" (a specific mechanical issue the model can act on)
- "The intro resolves the tension too early, make reading the rest of the article feel like a must" (you are diagnosing the structural problem, not just describing a vibe)
Ineffective direction sounds like this:
- "Make it better"
- "This doesn't sound like me"
- "More engaging"
The difference is that specific direction gives the model a constraint to optimize against. "Make it better" has infinite solution space, and the model will pick the safest interpretation. "Too many sentences start with the subject" has a narrow solution space, and the model will actually fix it.
## Why This Works (Technically)
Understanding the methodology is useful, but understanding why it works is what tells you when you can break the rules and when you cannot. One-shot writing fails for the same reason multitasking fails for humans: attention is a finite resource, even for models. When you ask a model to simultaneously track structure, voice, anti-patterns, rhythm, and transitions, it cannot give equal weight to all of them. So it defaults to the safest option for each concern, and the compound effect of all those safe choices is writing that reads like it was produced by a very competent committee that nobody actually wanted to be on.
When you separate the passes, each one gets the model's full attention. The anti-pattern pass can be aggressive about cutting because it is not also trying to hold the argument structure together. The rhythm pass can take creative risks because it is not simultaneously worrying about AI tells. Each pass produces better output because it has fewer things to optimize for at the same time.
## What Three Years of Research Got Me
The research was not wasted, even if it felt like it for a long time. The anti-pattern database, the roboticness analysis, the voice documentation: all of it feeds directly into Pass 2. The five structural tells I documented (flattened rhythm, abstraction dominance, monotone emotional register, visible rhetorical scaffolding, no story or surprise) each map to a specific pass that knows how to fix them:
| What Makes AI Writing Robotic | Which Pass Fixes It |
|---|---|
| Flattened rhythm | Pass 3: Sentence Structure |
| Abstraction dominance | Pass 2: Anti-Patterns |
| Monotone emotional register | Pass 3 + Pass 4 |
| Visible rhetorical scaffolding | Pass 2: Anti-Patterns |
| No story, no surprise | Pass 4 + Pass 5 |
| Vocabulary tells | Pass 2: Anti-Patterns |
| Em dash overuse | Pass 2: Anti-Patterns |
The research told me what was wrong. The layered methodology told me when to fix each thing. That sequencing, the idea that you fix different problems at different stages rather than all at once, was the piece I had been missing for three years.
## The Meta-Irony
This article was written using the process it describes. The first draft was purely structural: all content, no polish, no voice. Then I ran the anti-pattern pass. Then sentence structure. Then transitions. Then I rewrote the opening, which is the part you read first but the part I wrote last. Five passes, each one making the previous passes' work better without undoing it.
If you are reading this and thinking you could never get your LLM to produce something like this, you are probably right if you are asking for it in one shot. Try five.
And there are plenty of improvements still to come as I keep iterating. This is just the first major breakthrough. I'll share more of the journey as I hit other milestones.