Prompt Description
Prompt design reference document used to standardize orchestration behavior, variables, and interaction controls.
Execution Context
- Topic / Scope: System-wide standards for orchestrator and sub-agent prompt authoring, quality gates, and output contracts.
- Upstream Inputs: Source specification from `/mnt/hasmaster_1000/design/prompting/prompt_development_guide.md`.
- Downstream Consumer: Prompt authors, orchestrator/runtime implementers, and QA reviewers.
System Usage
- Used By: prompt development and runtime prompt governance
- Trigger: when creating, reviewing, or refactoring orchestrator/sub-agent prompts
- Inputs: prompting policy, journey requirements, and orchestration constraints
- Outputs: prompt development policy, structure requirements, and validation checklist
Canonical Prompt Payload
# HASmaster Prompt Development Guide
Version: 1.0
Scope: Orchestrator and sub-agent prompts for guided HASmaster journey interactions.
## 1) Outcome and success criteria
This guide defines how to build prompts that outperform a generic direct-chat interaction by combining:
- guided, confirmation-driven progression across journey tasks,
- retrieval and specialist agent orchestration,
- strict output contracts that are machine-safe and user-helpful.
Success is measured by:
1. User experience quality:
- user reports the interaction was useful, clear, and respectful,
- user remains engaged through task completion,
- user confirms each task summary as accurate.
2. Outcome quality:
- required journey task summaries are completed and confirmed,
- recommendations are actionable and context-aware,
- compatibility constraints and gaps are explicitly surfaced.
3. System quality:
- outputs validate against schema,
- prompts are reusable and reliable,
- orchestration behavior is auditable.
## 2) Prompting philosophy — intent over script
**This is the governing principle for every prompt written in this system. It overrides convenience.**
Prompts must express **what the AI should accomplish**, never **how the AI should say it**.
| Allowed | Not allowed |
|---|---|
| "Acknowledge the human situation behind what they shared" | "Say: 'It sounds like you're dealing with…'" |
| "Help them identify where to focus within their areas" | "Ask: 'Are you ready to move to Check Compatibility?'" |
| "Understand the full picture — context, relationships, real-life dynamics" | "Read between the lines: kids left lights on likely means family friction, energy, maintenance…" |
| "Stay warm and curious" | "Do NOT mention Check Compatibility" |
**Why this matters:**
- The AI has better contextual judgment than any script we could write.
- Scripting outputs makes responses brittle, repetitive, and condescending to users.
- Explicit prohibitions signal that we don't trust the AI — and produce worse results.
- Our job is to set the **goal and intent**. The AI's job is to determine the best way to get there.
**Test for every prompt before shipping:**
- Does this prompt tell the AI *what to do* (outcome) or *what to say* (script)?
- Would removing a sentence make the intent unclear? If no — cut it.
- Are there any hardcoded example phrases, explicit "Do NOT" rules, or numbered step-by-step scripts? Remove them.
Non-negotiable policy:
- Avoid hard-coded deterministic language from legacy intent-based prompting.
- Use clear outcome-oriented guidance and let agents apply context + tone inputs to generate the final response wording.
### 2.1 Reusable sequence patterns
Prompts should be reusable by sequence pattern, not by scripted wording.
Use a stable interaction sequence that can apply across many scenarios:
1. capture current user problem in plain terms,
2. propose a concise summary candidate,
3. request explicit user confirmation/correction,
4. only after confirmation, branch to retrieval/recommendation,
5. close with clear next action.
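The five-step sequence above can be sketched as a small phase machine. This is a minimal illustration, not a runtime contract; the names `Phase` and `next_phase` are assumptions introduced here:

```python
from enum import Enum, auto

class Phase(Enum):
    CAPTURE = auto()    # 1. capture the user's problem in plain terms
    PROPOSE = auto()    # 2. propose a concise summary candidate
    CONFIRM = auto()    # 3. request explicit confirmation/correction
    RECOMMEND = auto()  # 4. branch to retrieval/recommendation
    CLOSE = auto()      # 5. close with a clear next action

def next_phase(phase: Phase, user_confirmed: bool) -> Phase:
    """Advance the reusable sequence; recommendation is gated on confirmation."""
    if phase is Phase.CAPTURE:
        return Phase.PROPOSE
    if phase is Phase.PROPOSE:
        return Phase.CONFIRM
    if phase is Phase.CONFIRM:
        # Only branch to retrieval/recommendation after explicit confirmation;
        # otherwise loop back and refine the summary candidate.
        return Phase.RECOMMEND if user_confirmed else Phase.PROPOSE
    return Phase.CLOSE
```

The confirmation gate (step 3 to step 4) is the only conditional transition; everything else is linear, which is what makes the pattern reusable across scenarios.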
UI alignment for this sequence:
- collect all freeform user text through one shared composer input,
- avoid inline "edit summary" widgets in response cards,
- keep responses state-driven (`clarifying_questions`, `task_state`, `task_summaries`) instead of scripted UI text.
What remains reusable:
- phase transitions,
- data contracts,
- quality gates,
- confirmation rules.
What must remain adaptive:
- assistant phrasing,
- examples chosen,
- tone/voice realization based on user controls and context.
### 2.2 Empathy + Reason Clarification Pattern (core blueprint)
This pattern is mandatory for problem-discovery tasks (starting with Stage 1 frustration capture) and should be reusable across other stages.
Intent:
1. reflect empathy for what the user said,
2. expand understanding using domain-aware possible drivers,
3. let the user validate which drivers are true (and add missing ones),
4. use validated drivers to build the summary candidate.
Response shape (outcome guidance, not scripted wording):
1. `Empathy reflection`
- Show that the assistant heard the specific frustration in plain language.
2. `Possible reasons (hypotheses)`
- Offer 3-5 plausible reasons this is a problem for the user, using relevant domain knowledge.
- Frame these as possibilities, not assumptions.
- Return these as structured `reason_options` so UI can render selectable checkboxes.
3. `Validation prompt`
- Ask the user which reasons are accurate and what is missing.
4. `State update`
- Carry confirmed reasons into `task_state.task_summaries[*].summary` and confirmation flow.
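A sketch of what structured `reason_options` and the carry-forward step could look like. Field names (`id`, `label`, `selected`) and the sample labels are illustrative assumptions; the authoritative shape lives in the journey response schema:

```python
from typing import TypedDict

class ReasonOption(TypedDict):
    id: str         # stable id the UI sends back when the user checks the box
    label: str      # user-impact phrasing (pain/effect), not root-cause mechanics
    selected: bool  # updated from the user's validation turn

# Hypothetical hypotheses for a "lights left on" frustration, framed as
# possibilities the UI renders as selectable checkboxes.
reason_options: list[ReasonOption] = [
    {"id": "lost-time", "label": "You lose time walking around switching things off", "selected": False},
    {"id": "energy-cost", "label": "Lights left on are driving up the energy bill", "selected": False},
    {"id": "family-friction", "label": "Reminding the kids causes recurring friction", "selected": False},
]

def confirmed_reasons(options: list[ReasonOption]) -> list[str]:
    """Carry only user-validated drivers into the summary candidate."""
    return [o["label"] for o in options if o["selected"]]
```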
Quality rules:
- Avoid deterministic canned lines; keep wording adaptive.
- Do not skip empathy.
- Do not jump to recommendations before validating the underlying frustration drivers.
- Keep hypotheses concise and non-judgmental.
- Build hypotheses dynamically from current user context; do not use a fixed reason list.
- Reuse impact dimensions, not canned outputs (cost/resources, time/effort, reliability, safety/risk, emotional/social, learning/responsibility, comfort/health, goal progress).
- Phrase hypotheses as user-impact outcomes (pain/effect), not abstract root-cause mechanics.
- Avoid immutable structural causes as checkbox reasons unless translated into user impact (for example: glare, inconvenience, privacy loss, damage risk, stress, lost time).
Runtime implementation rule:
- Prompt behavior must be editable without code changes.
- Active `journey-get-inspired` runtime prompt path:
`/mnt/nas_docker_apps/wordpress-staging/wp_data/wp-content/hasmaster-runtime-cache/prompts/journey-get-inspired.txt`
- Orchestrator passes `prompt.text` to the LangGraph worker; worker must treat that as the primary response behavior source.
### 2.3 Completion Narrative Synthesis Pattern (cross-stage)
This pattern is mandatory whenever a stage reaches completion and hands off to a documentation/confirmation step.
Intent:
1. summarize the full context gathered so far in user-centered prose,
2. explain why the collected inputs matter for the user (not just what was selected),
3. prepare the user for confirmation/documentation and next-stage transition.
Response shape (outcome guidance, not scripted wording):
1. `Narrative synthesis paragraph`
- 3-5 sentences, coherent and natural, based on confirmed summaries.
- include the user's core situation, priorities, and stated outcome.
- include impact framing (why this matters to them).
2. `Confirmation instruction`
- end with one clear line: confirm now or edit in the main input above.
3. `Machine summary (separate field)`
- keep compact structured summary in machine fields (`task_state.task_summaries`, `items[*].value`, profile snapshot).
- do not use label-only compact summary as the primary user-facing prose.
Quality rules:
- Avoid deterministic canned completion phrases.
- Avoid rigid `Label: Value | Label: Value` text as primary narrative.
- Reuse context dynamically from confirmed summaries and history.
- Keep wording adaptive to style controls and audience level.
- Ensure final message remains reusable across journeys and stages.
---
## 3) Core architecture rules
1. One orchestrator prompt = policy + routing + IO contract.
2. One sub-agent prompt = narrow mission + strict inputs + strict outputs.
3. Variables are declared once in an explicit variables block.
4. Every response type is typed and schema-validated.
5. QA verification is the final gate before output.
6. Agent delegation contracts must be transport-agnostic but executable in LangChain with n8n workflow orchestration.
## 4) Required prompt structure
### 4.1 Orchestrator sections (required order)
1. Role
2. Goal
3. Non-goals
4. Inputs (Variables)
5. Team roster and call conditions
6. Routing algorithm
7. Output contract
8. Quality gates
9. Stop conditions
### 4.2 Sub-agent sections (required order)
1. Role
2. Task
3. Inputs
4. Method constraints
5. Output schema
6. Refusal/unknown rules
## 5) Variable delimiting standard
Use one explicit block with unique delimiters:
```text
<<<VARIABLES>>>
journey: get-inspired
user_text: ...
intake: {...}
session_context: {...}
journey_context: {...}
interaction_style: {
  tone: practical,
  humor_level: light,
  directness: balanced,
  explanation_depth: moderate,
  technical_level: layman_friendly,
  directive_style: patient_coach,
  accuracy_strictness: strict
}
source_material: ...
<<<END_VARIABLES>>>
```
Rules:
- Never define the same variable in multiple places.
- Keep long source/context in one field (`source_material` or `inputs`).
- If required variables are missing, ask only for missing required fields.
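A minimal sketch of extracting the delimited block and reporting missing required fields, assuming flat `key: value` lines (nested objects like `interaction_style` would need a real parser; this is illustrative only):

```python
import re

def extract_variables_block(prompt: str) -> str:
    """Pull the single variables block out of a prompt using its unique delimiters."""
    match = re.search(r"<<<VARIABLES>>>\n(.*?)\n<<<END_VARIABLES>>>", prompt, re.DOTALL)
    if match is None:
        raise ValueError("prompt is missing the <<<VARIABLES>>> block")
    return match.group(1)

def missing_required(block: str, required: list[str]) -> list[str]:
    """Report required variable names absent from the block, so the
    orchestrator can ask only for the missing required fields."""
    declared = {line.split(":", 1)[0].strip() for line in block.splitlines() if ":" in line}
    return [name for name in required if name not in declared]
```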
## 6) Output typing and contracts
Preferred pattern:
- Orchestrator output: strict JSON matching journey response schema.
- Sub-agent output: strict JSON envelopes for merge safety.
Recommended sub-agent envelope:
```json
{
"agent_name": "researcher",
"task_id": "task-2",
"inputs_used": ["user_text", "intake.focus_area"],
"output": {},
"open_questions": [],
"risks": [],
"confidence": 0.0
}
```
Merge rules:
- Conflicts between agent outputs must be recorded in `conflicts[]` internally.
- If unresolved open questions block accuracy, request user input before finalizing.
- Never guess facts; use explicit `unknown` internally when confidence is low.
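The merge rules can be sketched as follows: conflicting keys are recorded in `conflicts[]` with provenance rather than silently overwritten, and open questions are collected so the orchestrator can decide whether to block. The merge keys and first-writer-wins choice here are assumptions for illustration:

```python
def merge_agent_outputs(envelopes: list[dict]) -> dict:
    """Merge sub-agent envelopes; record conflicting keys instead of overwriting."""
    merged: dict = {}
    provenance: dict = {}  # key -> agent that first set it
    conflicts: list[dict] = []
    open_questions: list[str] = []
    for env in envelopes:
        open_questions.extend(env.get("open_questions", []))
        for key, value in env.get("output", {}).items():
            if key in merged and merged[key] != value:
                # Keep the first value, but record the disagreement for review.
                conflicts.append({
                    "key": key,
                    "agents": [provenance[key], env["agent_name"]],
                    "values": [merged[key], value],
                })
            else:
                merged[key] = value
                provenance.setdefault(key, env["agent_name"])
    return {"output": merged, "conflicts": conflicts, "open_questions": open_questions}
```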
## 7) Routing and orchestration pattern
Baseline pattern:
1. Parse variables.
2. Determine current journey task + completion state.
3. Ask about and refine only the current task until the user confirms it.
4. Call specialist agents only when needed.
5. Produce schema-valid output with next step guidance.
Typical specialist roster:
- Research agent: retrieve use cases, videos, links, and relevant examples.
- Profile/storage agent: persist confirmed summaries when identity/subscription checks pass.
- Compatibility agent: map constraints to required/matched badges and gaps.
- QA agent: enforce schema, constraints, and no-guessing policy.
Execution note:
- In runtime, route specialist tasks through LangChain-compatible tool calls and n8n workflow nodes.
- Keep agent payloads typed so they can be logged, retried, and audited.
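The baseline pattern plus roster can be sketched as a per-turn dispatch decision. Agent names follow the roster above; the gating flags (`confirmed`, `needs_sources`, and so on) are illustrative assumptions, not a defined state schema:

```python
def route_turn(state: dict) -> list[str]:
    """Decide which specialists to call this turn, honoring the
    confirm-before-branching rule from the baseline pattern."""
    calls: list[str] = []
    task = state["current_task"]
    if not task.get("confirmed"):
        return calls  # keep refining the current task; no specialists yet
    if task.get("needs_sources"):
        calls.append("researcher")
    if state.get("identity_verified") and state.get("subscription_active"):
        calls.append("profile_storage")  # persist only when checks pass
    if task.get("has_constraints"):
        calls.append("compatibility")
    calls.append("qa")  # QA is always the final gate before output
    return calls
```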
## 8) Quality gates (must pass)
Before returning output, verify:
1. All required schema fields are present and valid.
2. Response stayed within journey scope and current task.
3. Constraints and style controls were applied.
4. No unsupported claims were introduced.
5. Clarifying questions are minimal and bounded.
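Gate 1 can be approximated with a stdlib presence check, sketched below. In runtime a full JSON Schema validator should enforce the journey schema; this minimal form only demonstrates the fail-with-reasons shape:

```python
def passes_required_fields(response: dict, required: list[str]) -> tuple[bool, list[str]]:
    """Check that every required schema field is present and non-null,
    returning the missing names so the failure is auditable."""
    missing = [field for field in required if response.get(field) is None]
    return (not missing, missing)
```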
## 9) Stop conditions and safeguards
Stop and ask user for targeted clarification when:
- required input for the current task is missing,
- the user's confirmation of the current summary is missing,
- compatibility-impacting details are unknown.
Do not stop for avoidable reasons:
- low-confidence style preferences (use defaults),
- optional enrichment links (can be suggested later).
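The stop/continue distinction can be sketched as a single guard. The state flags here are hypothetical names for the conditions listed above:

```python
def should_stop_for_clarification(state: dict) -> bool:
    """Stop only for the blocking conditions above; style preferences and
    optional enrichment never block a turn."""
    task = state["current_task"]
    if task.get("missing_required_input"):
        return True
    if task.get("summary_proposed") and not task.get("summary_confirmed"):
        return True
    if state.get("compatibility_details_unknown"):
        return True
    return False  # defaults cover style; enrichment links can come later
```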
## 10) Stage blueprint: Get Inspired
Required confirmed outcomes for Stage 1:
1. Frustration/problem to solve
2. Focus area (S.C.O.R.E.)
3. Priority rooms/zones
4. Inspiration references (videos/articles/use cases)
5. Home automation goal
Optional final action:
- Save progress preference (after 1-5 are confirmed)
Behavioral requirements:
- empathetic probing,
- concise validation choices,
- explicit user confirmation per task before advancing,
- compact summary of confirmed state in each turn.
Current Stage 1 contract files:
- Prompt: `website/wp_journeys_v1_1/prompts/get_inspired.md`
- Schema: `website/wp_journeys_v1_1/schemas/response_get_inspired_v2.json`
- Source mirror: `execution_ops/prompts/source/website/journey-get-inspired-orchestrator-prompt.md`
## 11) Iteration loop for prompt quality
Use this cycle per stage:
1. Draft prompt from this guide.
2. Run structured test conversations (happy path + edge cases).
3. Score against success criteria.
4. Tighten routing, wording, and stop conditions.
5. Promote as blueprint for next journey stage.
## 12) Definition of done for each stage prompt
A stage prompt is done when:
1. It achieves confirmed completion of all required stage tasks.
2. It consistently returns schema-valid JSON.
3. It uses context/orchestration to produce better outcomes than direct chat.
4. It demonstrates positive user feedback in staged testing.
## 13) Required UI controls for interaction quality
To meet the prompt quality goals above, the UI should capture and send explicit interaction preferences.
WordPress implementation rule:
- Stay as close as possible to existing block-theme styles and patterns.
- Introduce style/layout differences only when the interaction flow or usability clearly requires it.
Minimum user-facing controls to add (5):
1. Humor level: `none`, `light`, `playful`.
2. Directness: `gentle`, `balanced`, `blunt`.
3. Explanation depth: `brief`, `moderate`, `deep`.
4. Technical level: `layman_friendly`, `balanced`, `expert`.
5. Directive style: `patient_coach`, `balanced`, `tell_me_what_to_do`.
Recommended optional controls:
1. Interaction mode preset selector (Practical, Quick Outcome, High Clarity, Humorous, Tech Savvy).
2. Inspiration bias: `creator-videos`, `written-guides`, `mixed`.
3. Accessibility mode: `standard`, `high-clarity`.
Payload contract addition:
- send these values in `interaction_style` for every ask turn.
- if any field is missing, backend applies defaults and echoes applied style in response.
Example payload fragment:
```json
{
"interaction_style": {
"humor_level": "light",
"directness": "balanced",
"explanation_depth": "moderate",
"technical_level": "layman_friendly",
"directive_style": "patient_coach",
"accuracy_strictness": "strict",
"question_style": "guided",
"confirmation_style": "every_task"
}
}
```
## 14) Scalable inspiration sourcing direction
Inspiration sourcing is a system capability, not a one-time prompt edit.
Prompts must assume long-term retrieval from a managed registry that can scale to thousands of use cases and sources.
Required direction:
1. Pull inspiration links from runtime registry/tooling, not hardcoded prompt text.
2. Only use specific source URLs (no channel/homepage URLs).
3. Require source validation, relevance scoring, and freshness metadata.
4. Retrieve top-ranked 2-3 sources per turn using active task + user context.
5. Make source retrieval auditable (why selected, score, confidence, last checked).
Reference blueprint:
- `design/prompting/inspiration_registry_scaling_strategy.md`
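A sketch of the retrieval direction: filter to validated, specific-URL sources, then rank by relevance and freshness and return the top 2-3. Field names (`relevance`, `last_checked`, `validated`) and the 0.7/0.3 weighting are illustrative assumptions, not the registry schema:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse

def is_specific_url(url: str) -> bool:
    """Reject channel/homepage URLs: require a non-trivial path."""
    return bool(urlparse(url).path.strip("/"))

def rank_sources(candidates: list[dict], top_k: int = 3) -> list[dict]:
    """Return the top-k validated, specific sources scored on relevance
    plus freshness, so selection stays auditable."""
    now = datetime.now(timezone.utc)

    def score(src: dict) -> float:
        age_days = (now - src["last_checked"]).days
        freshness = max(0.0, 1.0 - age_days / 365)  # linear decay over a year
        return 0.7 * src["relevance"] + 0.3 * freshness

    pool = [s for s in candidates if s.get("validated") and is_specific_url(s["url"])]
    return sorted(pool, key=score, reverse=True)[:top_k]
```

Logging each candidate's score alongside `last_checked` would satisfy the audit requirement (why selected, score, confidence, last checked).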
## 15) Runtime orchestration map reference
For the user-readable end-to-end map (UI journey flow -> orchestrator -> worker -> persistence), including Mermaid diagrams and current-vs-target n8n architecture:
- `design/prompting/journey_runtime_orchestration_map.md`