Voice guard catches the AI tells before the prospect does.
The biggest signals that a cold email was written by AI are its punctuation and vocabulary. Em-dashes. Buzzwords. Cohort claims with no source. We built a three-layer voice guard that rejects every one of those patterns before the sequence ever loads.
How it works
Layer 1, generation rules.
The generator prompt includes 16+ hard rules. No em-dashes, no en-dashes, no spaced hyphens. No banned words. No bullet points in email bodies. A 3-line signature with a URL. Bodies under 120 words. A natural opt-out in emails 2 and 3. These are CRITICAL RULES the model is told it will be rejected for violating.
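A minimal sketch of how hard rules like these can be pinned into a generator prompt. The rule wording, the `HARD_RULES` list, and the `build_generation_prompt` helper are illustrative assumptions, not the production prompt.

```python
# Illustrative subset of hard generation rules. The real prompt has 16+.
HARD_RULES = [
    "No em-dashes, en-dashes, or spaced hyphens anywhere.",
    "No banned words or buzzwords.",
    "No bullet points in email bodies.",
    "Signature is exactly 3 lines and includes a URL.",
    "Each email body is under 120 words.",
    "Emails 2 and 3 include a natural opt-out line.",
]

def build_generation_prompt(brief: str) -> str:
    """Prefix the brief with numbered CRITICAL RULES the model must obey."""
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(HARD_RULES, 1))
    return (
        "CRITICAL RULES. Any draft violating these is rejected:\n"
        f"{rules}\n\n"
        f"Write the sequence for this brief:\n{brief}"
    )
```

Putting the rules in a numbered block, with an explicit rejection warning, gives the downstream validators something concrete to enforce against.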
Layer 2, validation at generation time.
After the model returns a draft, scripts/kyle_voice_guard.py runs 4 checks. Banned-word detection. Dash detection (em-dash, en-dash, minus sign, spaced hyphen). A first-person digit anchor: a stat like "60% of SDRs..." is rejected unless it carries a cited source or first-person experience. Second-hand framing detection for phrases like "leaders tell me" that sound laundered.
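A minimal sketch of those four checks. The banned-word list, regexes, and `check_draft` name are illustrative assumptions; the production script's patterns are certainly broader.

```python
import re

BANNED_WORDS = {"synergy", "leverage", "game-changer"}  # illustrative list
DASHES = re.compile(r"[\u2014\u2013\u2212]| - ")  # em, en, minus, spaced hyphen
DIGIT_CLAIM = re.compile(r"\b\d+(\.\d+)?%")
FIRST_PERSON = re.compile(r"\b(I|my|we|our)\b")
SECOND_HAND = re.compile(r"\b(leaders|founders|buyers) tell me\b", re.I)

def check_draft(text: str, cited: bool = False) -> list:
    """Return the list of voice-guard failures for a draft (empty = pass)."""
    failures = []
    lowered = text.lower()
    if any(w in lowered for w in BANNED_WORDS):
        failures.append("banned word")
    if DASHES.search(text):
        failures.append("dash")
    # A percentage claim needs either a cited source or first-person framing.
    if DIGIT_CLAIM.search(text) and not cited and not FIRST_PERSON.search(text):
        failures.append("unanchored stat")
    if SECOND_HAND.search(text):
        failures.append("second-hand framing")
    return failures
```

Returning every failure at once, rather than stopping at the first, lets the regeneration prompt tell the model everything it got wrong in one pass.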
Layer 3, pre-load guard.
Before any sequence gets loaded to Instantly, Stage 6 runs _stage_6_em_dash_preload_guard. If any email subject, body, or LinkedIn message contains an em-dash, en-dash, or minus sign, the entire load is refused. Strict mode is default for every client. There is no "the model missed one" path to production.
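A minimal sketch of a pre-load guard in the spirit of _stage_6_em_dash_preload_guard. The sequence dict shape and field names here are assumptions for illustration; the real loader's types may differ.

```python
FORBIDDEN = {"\u2014", "\u2013", "\u2212"}  # em-dash, en-dash, minus sign

def preload_guard(sequence: dict) -> None:
    """Refuse the entire load if any field contains a forbidden character."""
    for email in sequence.get("emails", []):
        for field in ("subject", "body"):
            if any(ch in email.get(field, "") for ch in FORBIDDEN):
                raise ValueError(f"forbidden dash in email {field}; load refused")
    for msg in sequence.get("linkedin_messages", []):
        if any(ch in msg for ch in FORBIDDEN):
            raise ValueError("forbidden dash in LinkedIn message; load refused")
```

Raising on the first hit, instead of filtering the offending email, is the point: one bad character fails the whole sequence, so nothing half-checked reaches production.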
Cited stats byte-match a whitelist.
Any sequence marked as cited_benchmark has its verbatim quote compared against config/approved_stats.json byte-for-byte after whitespace and smart-quote normalization; otherwise it must be identical. A paraphrase fails. A source-name swap fails. A one-word drift fails. If it does not match, the sequence is regenerated.
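A minimal sketch of that comparison, assuming the whitelist is a list of strings (in production it would come from config/approved_stats.json; the `normalize` and `stat_is_approved` names are illustrative).

```python
import re

def normalize(text: str) -> str:
    """Map smart quotes to ASCII and collapse whitespace; nothing else changes."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip()

def stat_is_approved(quote: str, approved: list) -> bool:
    """True only if the quote matches a whitelisted stat exactly after normalization."""
    canon = normalize(quote)
    return any(canon == normalize(a) for a in approved)
```

Because the comparison is exact equality rather than fuzzy matching, a paraphrase, a swapped source name, or a one-word drift all fail by construction.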
Want proof the voice holds up?
On the discovery call we run 5 real sequences against your ICP, live. Voice guard output included.