Before you start — read this.
1. This takes 20–30 minutes. Not because the tool is complicated — because the questions are hard. You're encoding your judgment, not just your product documentation.
2. Modules 1 and 2 are yours to fill in. The quality of your agent depends entirely on what you put there. Modules 3–6 are Dlytr standards — read them, customize the two fields each one has, and mark them complete.
3. The last field in Module 1 is the most important thing in this entire tool. Write your single best answer to your most common support question — in your own words, exactly as you'd say it to a customer you like. Not a template. Your actual answer.
4. Your answers save automatically as you type. You can close and come back. Hit Compile Prompt → in the sidebar when all six modules are marked complete.
If you haven't answered your most common support question yourself yet — do that first. An agent trained on untested answers will produce untested results. The human playbook has to work before you automate it.
Compiled system prompt
Copy this into your agent's system prompt — Claude Projects, Intercom Fin, Voiceflow, or wherever your agent runs.
⚠ Some modules aren't complete yet. The prompt below is partial — finish all six for best results.
System prompt
Generated from completed modules
No live data yet
Finish training your agent, compile the prompt, and deploy it to your support platform. Once your agent handles conversations and reports back, your actual CSAT, resolution rate, and response time will appear here — scored against the benchmarks below.
CSAT Score
Target: ≥ 4.2 / 5
Below threshold: < 3.5
Resolution Rate
Target: ≥ 80% without escalation
Below threshold: < 60%
Avg Response Time
Target: ≤ 2 minutes
Below threshold: > 10 minutes
Escalation Rate
Target: ≤ 20%
Below threshold: > 40%
Conversations Handled
Total volume this month
"I Don't Know" Rate
Target: ≤ 15%
High rate signals a help center gap
Dlytr benchmarks — what good looks like
Metric | What it measures | Target | Below threshold | Why it matters
CSAT score | Customer satisfaction, 1–5 | ≥ 4.2 | < 3.5 | Direct signal of whether the agent is helping or frustrating
Resolution rate | % resolved without human escalation | ≥ 80% | < 60% | Low rate = help center gaps or undertrained agent
Response time | Time to first reply | ≤ 2 min | > 10 min | Slow responses suggest the agent is overthinking or calling external services
Escalation rate | % passed to the founder | ≤ 20% | > 40% | High rate defeats the purpose — retrain Module 1 or 2
"I don't know" rate | % of convos hitting the fallback | ≤ 15% | > 30% | High rate = missing help center content. Add articles, retrain.
Conversation volume | Total handled this month | Any | n/a | Context for all other metrics — low volume makes percentages unreliable
How benchmarks improve over time. These are Dlytr defaults based on industry standards for AI customer support. As your agent handles real conversations and you retrain modules based on gaps, your benchmarks will tighten. An agent with 6 months of training history should be targeting CSAT ≥ 4.5 and resolution rate ≥ 90%.
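Each benchmark above is really a pair of cutoffs: a target and a below-threshold floor, with "higher is better" for CSAT and resolution rate and "lower is better" for the rest. A minimal sketch of how a dashboard could score a live metric against these defaults — the function name, metric keys, and data layout are illustrative assumptions, not Dlytr's actual API:

```python
# Dlytr default benchmarks from the table above.
# Format: metric -> (target, below_threshold_floor, higher_is_better)
BENCHMARKS = {
    "csat":            (4.2, 3.5, True),
    "resolution_rate": (0.80, 0.60, True),
    "response_time":   (2.0, 10.0, False),   # minutes
    "escalation_rate": (0.20, 0.40, False),
    "idk_rate":        (0.15, 0.30, False),
}

def score(metric: str, value: float) -> str:
    """Return 'on target', 'needs work', or 'below threshold'."""
    target, floor, higher_is_better = BENCHMARKS[metric]
    if not higher_is_better:
        # Negate so every metric can be compared the same direction.
        value, target, floor = -value, -target, -floor
    if value >= target:
        return "on target"
    if value >= floor:
        return "needs work"
    return "below threshold"

print(score("csat", 4.5))            # → on target
print(score("response_time", 5.0))   # → needs work
```

The middle "needs work" band is the useful part: it flags metrics drifting toward the floor before they cross it.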
Gap log
Questions your agent couldn't answer confidently — logged automatically from each conversation. Each gap maps back to the module that needs updating.
No gaps logged yet
Once your agent handles real conversations, missed and hedged questions will appear here automatically — mapped to the module that needs fixing.
Question / trigger | Type | Module | × this week | Suggested fix
Gap detection — how it works
Your compiled prompt instructs the agent to append a hidden [DLYTR_GAP] tag after any response where it:
• triggered the "I don't know" protocol → hard miss
• used hedging language (I think / I believe / you may want to check) → soft miss
• answered a knowledge boundary topic → wrong boundary
Example tag (invisible to the customer):
[DLYTR_GAP: type=soft_miss, trigger="what state should I incorporate in", module=app-training, confidence=low]
In v1.5, a Worker endpoint receives these tags and writes them to the Gap Log database in Notion — no customer PII stored. The gaps above are a preview of what that log will look like.
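On the receiving side, an endpoint would need to strip these tags out of the response text and turn the key=value pairs into structured fields. A small sketch of that parsing step, based only on the tag format shown above — the helper name and return shape are assumptions, not the v1.5 Worker's actual code:

```python
import re

# Matches a [DLYTR_GAP: ...] tag and captures its key=value payload.
GAP_RE = re.compile(r"\[DLYTR_GAP:\s*([^\]]+)\]")

def extract_gaps(response_text: str) -> list[dict]:
    """Return one dict of fields per gap tag found in the response."""
    gaps = []
    for match in GAP_RE.finditer(response_text):
        fields = {}
        # Naive split: assumes no commas inside quoted values.
        for part in match.group(1).split(","):
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip().strip('"')
        gaps.append(fields)
    return gaps

reply = ('You may want to check with a lawyer on that. '
         '[DLYTR_GAP: type=soft_miss, trigger="what state should I '
         'incorporate in", module=app-training, confidence=low]')
print(extract_gaps(reply))
```

A real endpoint would also delete the tag from the customer-facing text and log the fields without any customer PII, as the v1.5 design describes.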