§ · free tool

Brand voice prompts. For your LLM.

Build a system prompt that holds your brand voice across ChatGPT, Claude, and Gemini: archetype, banned words, examples, escalation rules. Browser-only.

Browser-only · nothing leaves this device
§ 01 · load a preset
§ 02 · inputs

Identity, archetype, vocabulary.

§ 03 · at a glance
archetype · banned words · prompt length
Adjust inputs; the prompt updates in real time.
§ 04 · generated system prompt

Paste into ChatGPT or Claude.

§ 05 · three reference voices

Three voices in 100 words.

DTC skincare warm-mentor

"Refund is on its way - it will land back on your card within 3-5 business days. The product does not work for everyone; that is why every order ships with a no-questions return window. If you would like, I can suggest two other options that might fit better."

B2B SaaS sharp-expert

"Tallyloop syncs Stripe and QuickBooks every 4 hours, not on-demand. In our data, on-demand sync caused 12% of users to hit Stripe API rate limits during month-end close. The 4-hour batch closes books 3.8 days faster on average than the QuickBooks-only flow. Specifics in the docs."

Agency quiet-confident

"The brand system is six months of work. Two months of research, three months of design, one month of documentation. The system extends - the team adds new touchpoints in the same voice, in the same type, in the same color, without us in the room. That is the deliverable."

§ 06 · what makes a good voice prompt

Voice is what stays the same.

A brand voice prompt that holds across a long LLM session does six things at once. It states the identity (who is speaking). It names a voice archetype with concrete rules (warm-mentor, sharp-expert, quiet-confident). It lists banned words and signature words explicitly because LLMs follow lexical rules better than abstract style instructions. It includes 5 example Q&A pairs in the voice because examples beat rules. It specifies response-format rules (sentence length, paragraph length, list-versus-prose rules). And it includes escalation rules for when to defer to a human. Without all six, the LLM drifts toward its default register over time.
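The six components above can be sketched as a simple template assembler. This is an illustration only: the field names and example values are hypothetical, not this tool's actual schema.

```python
def build_voice_prompt(identity, archetype, banned, signature,
                       examples, format_rules, escalation):
    """Join the six components into one system-prompt string.

    `banned` and `signature` together form the lexical-rules component;
    `examples` is a list of (question, answer) pairs in the target voice.
    """
    example_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return "\n\n".join([
        f"Identity: {identity}",
        f"Voice archetype: {archetype}",
        "Banned words: " + ", ".join(banned),
        "Signature words: " + ", ".join(signature),
        "Format rules: " + "; ".join(format_rules),
        "Examples in the voice:\n" + example_block,
        "Escalation: " + escalation,
    ])

prompt = build_voice_prompt(
    identity="Customer-support voice for a DTC skincare brand",
    archetype="warm-mentor",
    banned=["leverage", "seamless", "elevate"],
    signature=["no-questions return", "on its way"],
    examples=[("Where is my refund?",
               "Refund is on its way - it lands in 3-5 business days.")],
    format_rules=["sentences average 14-18 words", "second person"],
    escalation="Defer to a human on legal, medical, or press questions.",
)
```

Whatever the exact layout, the point is that all six blocks travel together: dropping any one of them is what lets the model drift.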

Voice does not equal tone

Voice is durable - it is the brand's consistent identity, the same on a refund email as on a launch announcement. Tone is situational - more formal in a refund context, more playful in a launch context. The system prompt should encode the voice; the situational tone is set by each individual request. Conflating them produces brittle output where the LLM treats every email like a press release. Both Claude and GPT models tend to hold this distinction when the system prompt makes it explicit.

Concrete beats abstract

"Friendly and approachable" is abstract. "Sentences average 14-18 words; use second-person; avoid hedging phrases" is concrete. LLMs follow concrete rules far better than abstract ones because abstract rules require model interpretation, while concrete rules require only model compliance. Specify sentence length, paragraph length, person (first, second, third), contractions (use or avoid), and list-versus-prose preferences as numeric or binary rules. The resulting prompt is longer but the output is more predictable.

Examples beat rules

Five concrete Q&A pairs in the target voice teach the LLM more than 50 abstract style rules. Choose examples that cover edge cases: a hostile customer, a friendly check-in, a refund request, a product question, a brand-philosophy question. The examples should sound like the brand at its best, not at its average. OpenAI's prompt-engineering guide and Anthropic's docs both call out few-shot examples as the strongest single technique for voice consistency.

Banned-words discipline

The single biggest improvement most brand-voice prompts can make is an explicit banned-words list. LLMs default to a small vocabulary of high-frequency words: leverage, robust, holistic, seamless, vibrant, cutting-edge, harness, elevate, empower, streamline, synergy, paradigm. Few of these appear in a human writer's natural register; they show up because the LLM has seen them in marketing copy at training time. Listing them as banned forces the model to reach for a different word, which is almost always a better word. GOV.UK's style guide publishes one of the strongest banned-words lists in plain-English writing.
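The same list can double as a pre-publish lint. A minimal sketch of a whole-word, case-insensitive scan (the list below is the LLM-default vocabulary named above; a real list would be brand-specific):

```python
import re

LLM_DEFAULTS = ["leverage", "robust", "holistic", "seamless", "vibrant",
                "cutting-edge", "harness", "elevate", "empower",
                "streamline", "synergy", "paradigm"]

def banned_hits(text, banned=LLM_DEFAULTS):
    """Return the banned words found in text (case-insensitive, whole words)."""
    return [w for w in banned
            if re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE)]

banned_hits("We leverage a seamless, Robust platform.")
# ['leverage', 'robust', 'seamless']
```

The `\b` word boundaries keep "leverages" or "robustness" from triggering a match; whether derived forms should also be banned is a brand decision, not a technical one.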

Test with edge questions

Before deploying a voice prompt, test it with five edge cases: a hostile customer, a press inquiry, a regulatory request, a question outside the brand's domain, a question that requires the brand to admit a mistake. The LLM either holds voice or breaks. If it breaks, add a Q&A pair covering that case to the system prompt and retest. Most voice prompts reach stable behavior in 3-5 iterations.
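Part of that edge-case pass can be automated as a cheap pre-screen. In this sketch, `holds_voice` is a crude proxy that only checks the lexical rules; the real pass/fail judgment is still a human read of each response:

```python
EDGE_CASES = [
    "Your product ruined my skin and I want everyone to know.",  # hostile customer
    "I'm a journalist writing about your refund policy.",        # press inquiry
    "Which regulation covers your ingredient claims?",           # regulatory request
    "Can you recommend a good laptop?",                          # outside the domain
    "Why did you ship the wrong formula last month?",            # admitting a mistake
]

def holds_voice(response, banned=("leverage", "seamless", "robust")):
    """Crude proxy for 'holds voice': no banned words, no all-caps shouting."""
    lowered = response.lower()
    return all(w not in lowered for w in banned) and response != response.upper()

holds_voice("That shouldn't have happened. A replacement ships today.")     # True
holds_voice("We leverage a seamless pipeline to elevate your experience.")  # False
```

Run each edge case through the model with the voice prompt attached, screen the responses, and add a Q&A pair for whichever cases fail.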

Voice drift in long sessions

Even a strong voice prompt drifts in long sessions because each new exchange pulls the model slightly back toward its default register. The fix is a "voice drift" instruction at the end of the system prompt: if the model notices itself reaching for banned words or hedging language, pause and rewrite. Google's Gemini, OpenAI GPT-5, and Claude all respond well to this self-monitoring instruction. For sessions longer than 50 turns, re-paste the voice prompt as a refresher.
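The drift instruction works best as the final block of the assembled prompt, so it is the last thing the model reads. A sketch, with illustrative wording:

```python
DRIFT_GUARD = (
    "Before each reply, re-check it against the banned-words list and the "
    "format rules above. If you notice yourself reaching for banned words or "
    "hedging language, pause and rewrite in the brand voice."
)

def with_drift_guard(system_prompt):
    """Append the self-monitoring instruction as the prompt's final block."""
    return system_prompt.rstrip() + "\n\n" + DRIFT_GUARD
```

The exact phrasing matters less than the placement: a guard buried mid-prompt is easier for a long session to wash out.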

Related tools: Landing page prompt generator for the page voice. Email design prompt generator for the email voice. Headline analyzer for headline-level voice checks. Readability checker for the reading level. Brand identity for the system the voice sits inside.

§ 07 · questions

Five answers.

What is a brand voice prompt generator?

A tool that assembles an LLM system prompt encoding a brand's voice. The prompt covers identity, voice archetype, audience, do/don't list, banned and signature words, sentence-length guidance, response-format rules, 5 example Q&A pairs in the voice, and escalation rules for when to defer to a human. It pastes into ChatGPT, Claude, Gemini, or any LLM that accepts a system prompt.

Why is voice not the same as tone?

Voice is the consistent identity of the brand, the same across every customer touchpoint. Tone is the situational adjustment - more formal in a refund email, more casual in a launch announcement. The prompt should encode the voice (durable) and let the user specify tone (situational) in each request. Conflating the two produces brittle output where every email reads in the same robotic register.

What are banned words and signature words?

Banned words are words the brand never uses, regardless of context. They typically include LLM-default words (leverage, robust, holistic, seamless, vibrant, cutting-edge) and brand-specific exclusions - for example, first-person pronouns such as we, our, and us if the brand prefers second person. Signature words are words and phrases the brand favors. Both lists give the LLM concrete guardrails that abstract style instructions cannot.

Why include 5 example Q&A pairs?

Examples beat rules. An LLM follows tone better when given 5 concrete Q&A pairs in the target voice than when given 50 abstract style rules. The examples should cover edge cases: a hostile customer, a friendly check-in, a refund request, a product question, a brand-philosophy question. The more diverse the examples, the better the voice holds across new situations.

Does this tool save my prompts?

No. Every value you enter and every prompt assembled lives in memory for this browser tab only. Nothing is transmitted to a server, stored in a database, or synced across devices. Close the tab and the data is gone.

§ 08 · voice is upstream of every word

Voice is the brand's consistency.

Our brand-identity engagements include voice work as a core deliverable: the system prompt for your team's LLM workflows, the human-side editorial guide, and the 30-question voice test for every new touchpoint.