Voice guides are private @MythOS memos that encode how a person actually writes and speaks in a specific domain — so that AI agents producing content on their behalf sound like them, not like generic AI output. A voice guide captures tone, sentence structure, rhetorical habits, signature phrases, what to avoid, and the philosophical framing that makes a person's content recognizably theirs.
What a Voice Guide Does
Most AI-generated content sounds like AI. It hedges, it qualifies, it uses the same structures regardless of who asked for it. A voice guide solves this by giving the AI a detailed instruction set — not "write casually" but "lead with the insight, not the reasoning. Short declarative sentences for impact. Contractions always. Occasional single-sentence paragraphs for emphasis."

The result: content that passes the author test. If someone who knows you reads it, they should think you wrote it — because the AI was working from the same patterns you use instinctively.

Voice guides are part of the @Augmentation Stack's Memory layer. They're persistent context that the AI loads before producing content, ensuring every output reflects the author's actual voice rather than the model's default.
What Goes in a Voice Guide
Core Philosophy
The thesis that shapes everything you say about this domain. Not a mission statement — the actual operating belief that determines what you emphasize, what you critique, and what you ignore.
Tone Characteristics
How you sound when you're at your best. Are you direct or discursive? Do you use analogies? Are you irreverent or measured? Do you name-drop or cite data? The specifics matter more than the labels.
Sentence Structure
Sentence length patterns, paragraph density, use of contractions, fragments for emphasis, how you start paragraphs. These are the rhythmic signatures that make writing recognizable.
"What You Say vs. What Others Say"
A comparison table showing the generic version of a claim and your version. This is the fastest calibration tool — one table and the AI knows the difference between your voice and everyone else's.
What to Avoid
The anti-patterns that make content sound wrong. Jargon you never use, tones you despise, hedging patterns that undermine your confidence, filler phrases that dilute your density.
Frameworks and Vocabulary
Named concepts you use consistently, vocabulary choices that are yours (infrastructure vs. tool, architecture vs. feature, augmentation vs. automation), and the intellectual scaffolding your content builds on.
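The six sections above can be captured in a simple skeleton. A minimal sketch in Python — the markdown template and `render_voice_guide` helper are illustrative assumptions, not a MythOS format:

```python
# Illustrative skeleton for a voice guide memo. The section names mirror
# the structure described above; the template itself is an assumption,
# not an official MythOS convention.
VOICE_GUIDE_TEMPLATE = """\
# My Voice on {domain}

## Core Philosophy
{philosophy}

## Tone Characteristics
{tone}

## Sentence Structure
{structure}

## What You Say vs. What Others Say
| Generic version | Your version |
|---|---|
{comparisons}

## What to Avoid
{avoid}

## Frameworks and Vocabulary
{vocabulary}
"""

def render_voice_guide(domain: str, **sections: str) -> str:
    """Fill the template; sections you haven't written yet become TODOs."""
    defaults = dict(
        philosophy="TODO", tone="TODO", structure="TODO",
        comparisons="| TODO | TODO |", avoid="TODO", vocabulary="TODO",
    )
    defaults.update(sections)
    return VOICE_GUIDE_TEMPLATE.format(domain=domain, **defaults)
```

The point of the skeleton is the ordering: philosophy first, because everything downstream (tone, vocabulary, anti-patterns) follows from it.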
How I Use Voice Guides
I have three: one for @Reddit marketing (business audience), one for Reddit (enterprise audience), and one for AI & augmentation. Each gets loaded before any content sprint in that domain. When my agents run the @MythOS Content Topology pipeline — discovering, researching, drafting, and publishing memos autonomously — the voice guide is what ensures the output sounds like me rather than like Claude's default.

The voice guide also serves as a mirror. Writing down how you sound forces you to notice patterns you weren't aware of. My Reddit voice guide revealed that I use crude analogies far more than I realized — and that removing them would remove what makes the content mine. The AI learned my voice. I learned my voice. Both are valuable.
How to Create One
- Gather source material. Find 5-10 pieces of your best writing in the domain. These don't need to be published — emails, memos, drafts all work. You need enough to surface patterns.
- Create a private MythOS memo with the title "My Voice on [Domain]." Tag it #voice-guide.
- Structure it: Core Philosophy → Tone → Sentence Structure → "What You Say vs. What Others Say" table → What to Avoid → Frameworks and Vocabulary. Follow the pattern in the existing voice guides.
- Have your AI collaborator analyze your source material and draft the voice guide for you. Then review it — the AI will surface patterns you didn't consciously notice. Edit until it reads like a genuine instruction set, not a flattering description.
- Test it. Ask your AI to write a short piece in the domain using the voice guide. Read it aloud. Does it sound like you? If not, identify what's off and add that correction to the guide.
- Load it before content work. Reference the voice guide in your content sprint protocol or in your @CLAUDE.md. The guide is useless if it isn't loaded when the AI writes.

Voice guides are private by default — they're operational infrastructure, not published content. But the concept itself is public because it's one of the most powerful patterns in @Collaborative Augmentation: teaching the AI to speak as you, not for you.

Voice guides were the thing that turned my AI-generated content from "fine but generic" to "indistinguishable from my own writing." The difference is stark. Without a voice guide, Claude writes like Claude — competent, structured, slightly bloodless. With one, it writes like me — direct, architectural, occasionally crude, always sovereignty-minded.

The meta-moment: I used Claude to write the voice guide that teaches Claude to sound like me. @Collaborative Augmentation in its purest form — the AI learns the human, the human reviews the learning, and the collaboration compounds. Now every memo, every content sprint, every pipeline output starts with that encoded voice. I haven't had to correct tone in months.
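The "load it before content work" step reduces to prepending the guide to whatever prompt drives the content sprint. A minimal sketch in Python — `build_prompt` and its framing text are hypothetical, not a MythOS or Claude API:

```python
from pathlib import Path

def build_prompt(guide_path: Path, task: str) -> str:
    """Prepend the voice guide memo to the task so the model writes in-voice.

    `guide_path` points at the private voice-guide memo on disk; the prompt
    framing here is an illustrative assumption, not an actual system-prompt
    format used by any particular model.
    """
    guide = guide_path.read_text(encoding="utf-8")
    return (
        "You are drafting content on the author's behalf. "
        "Follow this voice guide exactly:\n\n"
        f"{guide}\n\n"
        f"Task: {task}"
    )
```

The design choice worth noting: the guide goes before the task, not after, so the voice constraints frame everything the model does rather than reading as an afterthought.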
Contexts
- #agentic-augmentation
- #recursive-mythic-engine
