Prompt – Website Design Principles

Prompt Description

Prompt design and governance reference defining website AI integration principles, enforcement patterns, and release gates.

Execution Context

  • Topic / Scope: Website AI interaction design principles and prompt-governance operating model.
  • Upstream Inputs: Source specification from /mnt/hasmaster_1000/website/design/design_principles.md.
  • Downstream Consumer: Prompt authors, orchestrator/runtime implementers, reviewers, and governance owners.

System Usage

  • Used By: prompt library governance, stage prompt authoring, and release validation
  • Trigger: when defining or reviewing website AI prompt behavior and policy alignment
  • Inputs: design decisions, schema contracts, evaluation criteria, and runtime telemetry expectations
  • Outputs: enforceable design-decision guidance for prompt rules, checks, and release gates

Canonical Prompt Payload

# Website Design Decisions (AI Integration)

This document captures current design decisions, each with operating principles.
These are decision statements first; where validation is needed, attach explicit success metrics.

## Design Decision 1: Outcome-based guidance takes precedence over deterministic intent flows.
- Define target outcomes and guideposts, then let agents choose the best path.
- Use guardrails (quality gates, policy checks, constraints) instead of trying to pre-script every scenario.
- Judge performance by outcome quality and completion, not strict adherence to a fixed flow.
- Use deterministic intent routing as a fail-safe fallback, not as the primary operating model.

## Design Decision 2: Trustworthiness is paramount.
- Validate before responding, and ask when facts are missing.
- Explicitly separate known facts from assumptions and unknowns.
- Prefer verifiable outputs over confident but ungrounded responses.

## Design Decision 3: Small iterative engagement outperforms large one-shot responses.
- Default to short response increments with frequent checkpoints.
- Ask for directional confirmation before expanding scope.
- Optimize for fast partial value, then refine based on user steering.

## Design Decision 4: Richer prompt context yields better outcomes than sparse context.
- Collect and pass relevant task, domain, and user intent context up front.
- Reuse prior interaction context to avoid repeated setup.
- Structure context inputs so the model can reason with fewer ambiguities.

## Design Decision 5: Users want control over response style, depth, and personality.
- Let users choose response type (summary, detailed, action-oriented, exploratory).
- Support user-selectable tone/personality modes at runtime.
- Keep controls visible and adjustable during the interaction.

## Design Decision 6: Users want journey visibility and influence along the way.
- Show current step, next step, and expected path to completion.
- Surface decision points where user input can redirect flow.
- Make progress status and intent updates continuous and understandable.

## Design Decision 7: Strong taxonomy is foundational for user-agent-system contracts.
- Define and maintain a shared taxonomy across UI, orchestration, and agents.
- Map user intents, entities, actions, and outcomes to stable canonical terms.
- Use taxonomy as the contract layer for routing, rendering, and execution.

## Design Decision 8: Contract-first responses improve reliability across UI and orchestration.
- Treat every AI turn as a versioned response contract, not free-form prose.
- Enforce schema-valid output before rendering or state mutation.
- Keep field names and meanings stable across journey stages.
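A minimal sketch of enforcing schema-valid output before rendering or state mutation. The field names and types here are assumptions for illustration, not the actual response contract.

```python
# Hypothetical contract fields; a real deployment would use a full
# schema validator against the versioned journey contract.
REQUIRED_FIELDS = {
    "contract_version": str,
    "message": str,
    "assumptions": list,
    "confidence": float,
}

def validate_turn(payload: dict) -> list[str]:
    """Return contract violations; an empty list means schema-valid."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"wrong type for {field}")
    return errors

turn = {"contract_version": "1.0", "message": "ok",
        "assumptions": [], "confidence": 0.8}
assert validate_turn(turn) == []   # safe to render / mutate state
assert validate_turn({}) != []     # reject before anything downstream runs
```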

## Design Decision 9: Confirmation-gated state transitions reduce wrong progression.
- Keep one active task at a time.
- Require explicit user confirmation before marking a task complete.
- Do not auto-advance tasks on inferred intent.
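The three rules above can be sketched as a single-active-task tracker that advances only on explicit confirmation. The task names are illustrative.

```python
# Sketch of confirmation-gated task transitions: one active task,
# no auto-advance on inferred intent.
class TaskTracker:
    def __init__(self, tasks: list[str]):
        self.tasks = tasks
        self.index = 0  # exactly one active task at a time

    @property
    def active(self) -> str:
        return self.tasks[self.index]

    def complete(self, user_confirmed: bool) -> bool:
        """Mark the active task done only on explicit user confirmation."""
        if not user_confirmed:
            return False  # inferred intent alone never advances the journey
        if self.index < len(self.tasks) - 1:
            self.index += 1
        return True

tracker = TaskTracker(["collect_constraints", "recommend", "confirm_order"])
tracker.complete(user_confirmed=False)
assert tracker.active == "collect_constraints"  # no silent progression
tracker.complete(user_confirmed=True)
assert tracker.active == "recommend"
```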

## Design Decision 10: Assumption and confidence disclosure improves user decisions.
- Include explicit assumptions used for the current turn.
- Include confidence and whether blocking input is still required.
- Ask focused follow-up questions when confidence is low.
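A sketch of the disclosure block attached to each turn. The field names (`assumptions`, `confidence`, `blocking_input_required`) and the threshold are assumptions for illustration.

```python
LOW_CONFIDENCE = 0.5  # illustrative threshold, not a mandated value

def build_disclosure(assumptions: list[str], confidence: float,
                     missing_facts: list[str]) -> dict:
    """Attach explicit assumptions, confidence, and blocking status."""
    disclosure = {
        "assumptions": assumptions,
        "confidence": confidence,
        "blocking_input_required": bool(missing_facts),
    }
    if confidence < LOW_CONFIDENCE and missing_facts:
        # One focused follow-up question, not a broad re-ask.
        disclosure["follow_up_question"] = f"Can you confirm: {missing_facts[0]}?"
    return disclosure

d = build_disclosure(["user is on latest firmware"], 0.3, ["device model"])
assert d["blocking_input_required"] is True
assert "follow_up_question" in d  # low confidence triggers a focused ask
```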

## Design Decision 11: Constraint-first recommendations are more feasible and actionable.
- Capture hard constraints before generating recommendations.
- Distinguish hard constraints from soft preferences.
- Filter options by constraints before ranking or suggesting.
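The filter-then-rank ordering can be sketched as follows; the option fields and constraint names are hypothetical.

```python
# Sketch: hard constraints prune the candidate set first; soft
# preferences only rank what survives.
options = [
    {"name": "hub-a", "price": 60, "protocol": "zigbee", "rating": 4.6},
    {"name": "hub-b", "price": 120, "protocol": "zigbee", "rating": 4.8},
    {"name": "hub-c", "price": 50, "protocol": "wifi", "rating": 4.9},
]

hard = {"max_price": 100, "protocol": "zigbee"}  # must hold, never traded off

feasible = [o for o in options
            if o["price"] <= hard["max_price"]
            and o["protocol"] == hard["protocol"]]
ranked = sorted(feasible, key=lambda o: o["rating"], reverse=True)

# Highest-rated option overall (hub-c) is infeasible and never surfaces.
assert [o["name"] for o in ranked] == ["hub-a"]
```

Ranking first and filtering second would let an infeasible but highly rated option anchor the conversation; filtering first makes every suggestion actionable.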

## Design Decision 12: Structured choice inputs outperform ambiguous free-text collection.
- Prefer structured options when selecting reasons, platforms, or tradeoffs.
- Allow free text, then normalize it back into canonical structured choices.
- Keep option definitions short so users can decide quickly.
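Normalizing free text back into canonical choices can be sketched with simple keyword matching; the choice names and keyword sets are illustrative, and a production system would likely use richer matching.

```python
# Hypothetical canonical reason choices with keyword triggers.
CHOICE_KEYWORDS = {
    "too_expensive": {"expensive", "price", "cost", "budget"},
    "missing_feature": {"feature", "missing", "lacks"},
    "compatibility": {"compatible", "compatibility"},
}

def normalize_reason(free_text: str) -> str:
    """Map free text to a canonical structured choice, or 'other'."""
    words = set(free_text.lower().split())
    for choice, keywords in CHOICE_KEYWORDS.items():
        if words & keywords:
            return choice
    return "other"  # keep unmapped input explicit rather than guessing

assert normalize_reason("The price is beyond my budget") == "too_expensive"
assert normalize_reason("no idea") == "other"
```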

## Design Decision 13: Continuous compatibility accounting prevents late-stage rework.
- Update required, matched, and gap compatibility signals on every turn.
- Represent unknowns as explicit gaps rather than implicit assumptions.
- Use compatibility gaps to drive next questions and recommendations.
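Per-turn compatibility accounting can be sketched as set arithmetic over required and confirmed signals; the signal names are illustrative.

```python
# Sketch: required vs matched vs gap signals, recomputed every turn,
# with gaps driving the next question.
def compatibility_state(required: set[str], confirmed: set[str]) -> dict:
    gaps = sorted(required - confirmed)  # unknowns made explicit
    return {
        "required": sorted(required),
        "matched": sorted(required & confirmed),
        "gaps": gaps,
        "next_question": f"Is {gaps[0]} supported?" if gaps else None,
    }

state = compatibility_state(
    required={"zigbee", "local_control", "uk_plug"},
    confirmed={"zigbee"},
)
assert state["gaps"] == ["local_control", "uk_plug"]
assert state["next_question"] == "Is local_control supported?"
```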

## Design Decision 14: Provenance-locked links and claims increase trust and safety.
- Only provide links and claims grounded in approved registries and site sources.
- Never fabricate citations, URLs, or compatibility statements.
- Return no link rather than an unverified link.

## Design Decision 15: Deterministic fallback behavior should be explicit and fail-safe.
- Define explicit fallback task order per journey stage.
- Apply predictable defaults when optional context is absent.
- Keep deterministic flow logic limited to fallback and recovery paths.

## Design Decision 16: Delegation transparency improves diagnosability and confidence.
- Record which specialist agents were planned, executed, skipped, or failed.
- Expose why delegation happened for each agent call.
- Surface degraded-mode behavior clearly when delegated work fails.

## Design Decision 17: Consent-bound memory improves continuity without violating user control.
- Separate transient session context from saved profile context.
- Require explicit user preference for save/resume behavior.
- Let users inspect, clear, and reset stored journey state.

## Enforcing Decision Alignment in Prompt Development
Design decisions must be implemented as enforceable prompt contracts, tests, and runtime checks.

For every design decision, define:
- Prompt rule: what the prompt must instruct the model to do.
- Schema anchor: which response field(s) carry the behavior.
- Evaluation case: scenario that proves the behavior.
- Runtime metric: live signal that confirms adherence.
- Owner: role responsible for maintaining and reviewing the rule.
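One such decision-coverage entry can be sketched as structured data; the values are illustrative, keyed to Design Decision 9.

```python
# Hypothetical coverage entry carrying the five required fields.
coverage = {
    "decision": "DD-9 confirmation-gated state transitions",
    "prompt_rule": "Never mark a task complete without explicit user confirmation.",
    "schema_anchor": ["task_status", "confirmation_required"],
    "evaluation_case": "User implies completion; model must still ask to confirm.",
    "runtime_metric": "rate of completions preceded by explicit confirmation",
    "owner": "journey prompt maintainer",
}

REQUIRED_KEYS = {"prompt_rule", "schema_anchor", "evaluation_case",
                 "runtime_metric", "owner"}
assert REQUIRED_KEYS <= coverage.keys()  # coverage block is complete
```

Storing the entries as data rather than prose lets lint checks assert completeness mechanically at the release gate.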

## Prompt Governance Flow

```mermaid
flowchart LR
  D[Design Decisions] --> M[Decision-to-Prompt Matrix]
  M --> S[Prompt Standard and Stage Template]
  S --> A[Journey Prompt Authoring]
  A --> L[Prompt Lint Checks]
  L --> E[Scenario Evaluation Suite]
  E --> G[Release Gate]
  G --> R[Runtime Prompt Deployment]
  R --> T[Telemetry and Policy Signals]
  T --> V[Review and Refinement]
  T --> P[Policy Gap Review]
  P --> D
  V --> D
```
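The lint step in the flow above can be sketched as a structural check that blocks a prompt from reaching evaluation; the required section names are illustrative.

```python
# Hypothetical lint: fail fast if a stage prompt is missing required
# guardrail sections before it reaches the scenario evaluation suite.
REQUIRED_SECTIONS = ["## Guardrails", "## Fallback Rules", "## Decision Coverage"]

def lint_prompt(prompt_text: str) -> list[str]:
    """Return missing sections; an empty list lets the prompt proceed."""
    return [s for s in REQUIRED_SECTIONS if s not in prompt_text]

draft = "## Guardrails\n- no fabrication\n## Fallback Rules\n- ask user"
assert lint_prompt(draft) == ["## Decision Coverage"]  # blocks the release gate
```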
## What Changes in the Current Prompt Approach

The current approach uses strong prompt guidance and schemas but relies mostly on manual prompt review. The new approach adds explicit decision traceability, automated checks, and runtime governance.

| Area | Current approach | New approach |
|---|---|---|
| Prompt authoring | Stage prompt + prompt standard | Stage prompt + decision coverage block for each decision |
| Validation before merge | Human review + basic schema expectations | Lint + scenario evals + schema validation as hard release gate |
| Quality enforcement | Prompt text quality gates | Prompt text gates + automated compliance tests |
| Runtime feedback | Ad-hoc observation | Defined telemetry review for policy checks, assumptions, confidence, and task transitions |
| Deterministic flow use | Common control pattern | Primarily outcome-based guidance; deterministic logic reserved for fallback/recovery |
| Cross-journey consistency | Varies by stage maturity | Uniform contract, eval suite, and release criteria across all journeys |

## Journey-by-Journey Development Changes

Apply the same lifecycle to each journey prompt (`get-inspired`, `define-constraints`, `check-compatibility`, `choose-gear`, `build-it`, `automate-life`, `fix-it`).

For each journey, prompt development now requires:
1. Define outcome targets per task (not just intent route order).
2. Map task behaviors to design decisions and response schema fields.
3. Author or update the stage prompt with explicit guardrails and fallback rules.
4. Add scenario tests for confirmation gating, no-fabrication, uncertainty handling, and compatibility-gap handling.
5. Gate release on passing lint and scenario evaluations.
6. Monitor runtime adherence and feed issues back into prompt revisions.

## Unmapped Decision Protocol (Bounded Flexibility)

When a live prompt decision does not map directly to an existing design decision, use outcome-driven judgment within hard guardrails.

Required operating rules:
1. Preserve non-negotiables: no fabrication, schema-valid output, confirmation gating, and safety/privacy constraints.
2. Choose the action most likely to achieve the user outcome with the lowest risk.
3. Prefer reversible actions over irreversible actions.
4. Explicitly state assumptions and confidence; ask one focused clarifying question when needed.
5. For high-impact actions (destructive, costly, privacy/security sensitive, or hard to undo), require explicit user confirmation before execution.
6. Record the case as a `policy_gap` item for governance review, and either map it to an existing decision or create a new one.

## Definition of Done for Any Prompt Change

- Decision coverage is documented and complete.
- Output is schema-valid for the target journey response contract.
- Required scenario evaluations pass.
- No-fabrication and confirmation-gating checks pass.
- Runtime metrics are defined for post-release monitoring.
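The `policy_gap` item required by the Unmapped Decision Protocol can be sketched as a structured record; all field names here are illustrative assumptions.

```python
import datetime

def record_policy_gap(situation: str, action_taken: str,
                      assumptions: list[str]) -> dict:
    """Capture an unmapped decision for later governance review."""
    return {
        "type": "policy_gap",
        "situation": situation,
        "action_taken": action_taken,
        "assumptions": assumptions,
        "recorded_at": datetime.date.today().isoformat(),
        "resolution": None,  # set in review: map to a decision or create one
    }

gap = record_policy_gap(
    situation="User asked to merge two saved journeys",
    action_taken="Asked for confirmation; treated merge as a reversible draft",
    assumptions=["merge is reversible before save"],
)
assert gap["type"] == "policy_gap" and gap["resolution"] is None
```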