# System Prompt Design Patterns

*10 reusable System Prompt design patterns*

This chapter distills 10 reusable design patterns from the System Prompts of major AI vendors. Use them to write professional-grade System Prompts.
## Pattern 1: Identity Anchoring

Pin down the AI's identity, capabilities, and limitations right at the top.

```markdown
# Template
You are [Role Name], a [Role Type] specialized in [Domain].

Your capabilities:
- [Capability 1]
- [Capability 2]

Your limitations:
- [Limitation 1]
- [Limitation 2]

Knowledge cutoff: [Date]
Current date: [Dynamic Date]
```

**Real-world example (Claude Code)**:

```
You are an interactive CLI tool that helps users
with software engineering tasks.

Here is useful information about the environment:
<env>
Working directory: [working directory]
Platform: [platform]
Today's date: [date]
</env>
```

**Why it works**:
- Gives the AI a clear sense of its role
- Reduces off-character responses caused by role confusion
- Users know exactly what the AI can do
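The identity template above lends itself to programmatic assembly, so dynamic fields like the current date stay fresh instead of going stale in a hard-coded prompt. A minimal sketch in Python; `DocBot` and its capability list are invented placeholder values:

```python
from datetime import date

# Identity template with placeholders for runtime values.
IDENTITY_TEMPLATE = """\
You are {role_name}, a {role_type} specialized in {domain}.

Your capabilities:
{capabilities}

Your limitations:
{limitations}

Knowledge cutoff: {cutoff}
Current date: {today}"""

def build_identity_prompt(role_name, role_type, domain,
                          capabilities, limitations, cutoff):
    # Render each list as markdown-style bullets.
    bullets = lambda items: "\n".join(f"- {item}" for item in items)
    return IDENTITY_TEMPLATE.format(
        role_name=role_name, role_type=role_type, domain=domain,
        capabilities=bullets(capabilities), limitations=bullets(limitations),
        cutoff=cutoff, today=date.today().isoformat())

# Hypothetical role used purely for illustration.
prompt = build_identity_prompt(
    "DocBot", "documentation assistant", "API reference writing",
    ["Explain endpoints", "Generate code samples"],
    ["Cannot call live APIs"], "2025-01")
```

The "Current date" line is computed at request time, which is exactly how vendors keep the `<env>` style blocks shown above accurate.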
---

## Pattern 2: Layered Constraints

Use priority markers to signal how important each rule is.

```markdown
# Priority markers
CRITICAL: [Highest priority, must obey]
IMPORTANT: [Significant rule]
Note: [General suggestion]

# Or use urgency levels
NEVER: [Absolute prohibition]
ALWAYS: [Must execute]
PREFER: [Preferred choice]
AVOID: [Try not to do]
```

**Real-world example (Claude Code)**:

```
IMPORTANT: Refuse to write code that may be used maliciously.
IMPORTANT: You should minimize output tokens as much as possible.
NEVER commit changes unless the user explicitly asks.
```

**Why it works**:
- The AI understands the relative importance of rules
- When rules conflict, it knows which one wins
- Prevents the "treat everything equally" problem
---

## Pattern 3: Allowed/Not Allowed Lists

Define behavioral boundaries with a clean binary split.

```markdown
# Template
## [Scenario] Policy

Allowed:
- [Permitted behavior 1]
- [Permitted behavior 2]

Not Allowed:
- [Prohibited behavior 1]
- [Prohibited behavior 2]
```

**Real-world example (GPT-4o image policy)**:

```
Image safety policies:

Not Allowed:
- Giving away the identity of real people in images
- Stating that someone is a public figure
- Classifying human-like images as animals

Allowed:
- OCR transcription of sensitive PII
- Identifying animated characters
```

**Why it works**:
- Clear boundaries with no gray areas
- Easy to maintain and update
- Unambiguous for the AI to execute
---

## Pattern 4: Example-Driven

Show the expected behavior with concrete input/output examples.

```markdown
# Template
Examples of appropriate [behavior]:

user: [Input 1]
assistant: [Expected output 1]

user: [Input 2]
assistant: [Expected output 2]

# Comparison examples
✅ Correct: [Right approach]
❌ Incorrect: [Wrong approach]
```

**Real-world example (Claude Code)**:

```
Examples of appropriate verbosity:

user: 2 + 2
assistant: 4

user: what command should I run to list files?
assistant: ls

user: what files are in src/?
assistant: [runs ls and sees foo.c, bar.c, baz.c]
foo.c, bar.c, baz.c
```

**Why it works**:
- Few-shot learning is remarkably effective
- Far more intuitive than abstract descriptions
- Gives you precise control over output style
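Few-shot examples like these can be supplied either inline in the System Prompt text or as alternating user/assistant turns in the message history. A sketch of the latter, assuming a generic chat-completion message format (role/content dicts; adapt to your provider's SDK):

```python
# Few-shot pairs demonstrating the desired verbosity, taken from the
# Claude Code example above.
FEW_SHOT = [
    {"role": "user", "content": "2 + 2"},
    {"role": "assistant", "content": "4"},
    {"role": "user", "content": "what command should I run to list files?"},
    {"role": "assistant", "content": "ls"},
]

def build_messages(system_prompt, user_input):
    # The few-shot pairs sit between the system prompt and the live question,
    # so the model sees the style it is expected to imitate.
    return ([{"role": "system", "content": system_prompt}]
            + FEW_SHOT
            + [{"role": "user", "content": user_input}])

messages = build_messages("Answer concisely.", "what is 3 * 3?")
```

Either placement works; the in-history form tends to steer output style more strongly because the examples look like real prior turns.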
---

## Pattern 5: Tool Specification

Give every tool a clear interface definition and usage guide.

```markdown
# Template
## [Tool Name]
Description: [What it does]

When to use:
- [Use case 1]
- [Use case 2]

When NOT to use:
- [Anti-pattern]

Parameters:
- param1 (required): [Description]
- param2 (optional): [Description]

Example:
[Call example]
```

**Real-world example (GPT web tool)**:

```
## web
Use the `web` tool to access up-to-date information when:
- Local Information: questions about the user's location
- Freshness: information that could be outdated
- Niche Information: detailed info not widely known
- Accuracy: when the cost of a mistake is high

Commands:
- search(): Issues a new query to a search engine
- open_url(url: str): Opens the given URL
```

**Why it works**:
- Reduces tool misuse
- Makes parameter types and required fields explicit
- Helps the AI pick the right tool for the job
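Many chat APIs accept tool specs as JSON-Schema function definitions, and the prose template above maps onto that shape directly: the description carries the when/when-not guidance, and the parameters block becomes a schema. A sketch of the `open_url` command in that style (field names vary by provider; this is an assumed shape, not any vendor's exact schema):

```python
# Machine-readable counterpart of the prose tool spec above.
web_tool = {
    "name": "open_url",
    "description": (
        "Opens the given URL and returns the page content. "
        "Use for fresh, niche, or high-stakes information; "
        "do NOT use for questions answerable from memory."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "url": {
                "type": "string",
                "description": "Fully qualified URL to open",
            },
        },
        "required": ["url"],
    },
}
```

Putting the "when NOT to use" guidance into the description field keeps it visible to the model at tool-selection time, which is where misuse usually happens.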
---

## Pattern 6: Conditional Branching

Handle different scenarios with if-then logic.

```markdown
# Template
When [condition], then [action]

If [situation A], do [action A]
If [situation B], do [action B]
Otherwise, [default action]
```

**Real-world example (Claude citation rules)**:

```
If the search results do not contain relevant information,
politely inform the user that the answer cannot be found
in the search results, and make no use of citations.

If the documents have <document_context> tags,
consider that information when providing answers
but DO NOT cite from the document context.
```

**Why it works**:
- Handles edge cases explicitly
- Logic is clear and easy to follow
- Limits the AI's "creative freedom" where you don't want it
---

## Pattern 7: Format Templates

Define structured output formats.

````markdown
# Template
Format your response as:
<tag_name>
[content]
</tag_name>

# Or
Return your answer in this format:
```json
{
  "field1": "value",
  "field2": "value"
}
```
````

**Real-world example (Claude Code file path extraction)**:

```
Format your response as:
<filepaths>
path/to/file1
path/to/file2
</filepaths>

If no files are read or modified, return empty tags:
<filepaths>
</filepaths>

Do not include any other text in your response.
```

**Why it works**:
- Easy for programs to parse
- Highly consistent output
- Less post-processing work
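On the consuming side, a format template pays off when parsing. A minimal sketch of extracting the JSON object requested above, tolerant of models that wrap it in a fenced ```json block or surround it with prose:

```python
import json
import re

def parse_json_response(text):
    # Prefer a fenced ```json block if present; otherwise try the raw text.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

reply = 'Here you go:\n```json\n{"field1": "a", "field2": "b"}\n```'
data = parse_json_response(reply)
# data == {"field1": "a", "field2": "b"}
```

The stricter your format template, the simpler this parser can be; the fence-stripping fallback here just guards against the most common deviation.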
---
## Pattern 8: Negative Constraints

Spell out what NOT to do.

```markdown
# Template
Do NOT:
- [Prohibited behavior 1]
- [Prohibited behavior 2]

NEVER:
- [Absolute prohibition 1]
- [Absolute prohibition 2]

AVOID:
- [Try not to 1]
- [Try not to 2]
```

**Real-world example (Claude Code)**:

```
IMPORTANT: You should NOT answer with unnecessary
preamble or postamble, unless the user asks you to.

You MUST avoid text before/after your response, such as:
- "The answer is <answer>."
- "Here is the content of the file..."
- "Based on the information provided..."
- "Here is what I will do next..."
```

**Why it works**:
- Eliminates common bad behaviors
- More direct than positive descriptions
- "What not to do" is often more important than "what to do"
---

## Pattern 9: Context Injection

Dynamically inject runtime information.

```markdown
# Template
<context>
Current user: [User info]
Session info: [Session info]
Available tools: [Tool list]
</context>

# Or use variable placeholders
Working directory: [working_directory]
Platform: [platform]
Today's date: [current_date]
```

**Real-world example (Claude Code)**:

```
<env>
Working directory: /Users/john/project
Is directory a git repo: Yes
Platform: darwin
Today's date: 2025-01-15
Model: claude-sonnet-4
</env>
```

**Why it works**:
- Lets the AI perceive its runtime environment
- Enables dynamic behavior adjustment
- Provides essential contextual information
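An `<env>` block like the one above is generated at request time rather than hand-written. A minimal Python sketch; the model name is a placeholder you would fill from your own stack:

```python
import os
import platform
from datetime import date

def build_env_block(model="model-name"):
    # Collect runtime facts and render them in the <env> style shown above.
    return (
        "<env>\n"
        f"Working directory: {os.getcwd()}\n"
        f"Platform: {platform.system().lower()}\n"
        f"Today's date: {date.today().isoformat()}\n"
        f"Model: {model}\n"
        "</env>"
    )

# Appended to the static System Prompt on every request, so the model
# always sees the current directory and date.
print(build_env_block())
```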
---

## Pattern 10: Iterative Improvement

Guide the AI on how to progressively refine its output.

```markdown
# Template
If [initial attempt fails], then:
1. [Adjustment strategy 1]
2. [Adjustment strategy 2]
3. If it still fails, [fallback strategy]

# Or
After completing [task], verify by:
- [Verification step 1]
- [Verification step 2]
If verification fails, [correction strategy]
```

**Real-world example (Claude Code task workflow)**:

```
For software engineering tasks:
1. Use search tools to understand the codebase
2. Implement the solution using all tools available
3. Verify the solution with tests
4. VERY IMPORTANT: Run lint and typecheck commands

If unable to find the correct command,
ask the user and proactively suggest writing
it to CLAUDE.md for next time.
```

**Why it works**:
- Provides failure recovery strategies
- Encourages verification and self-checking
- Creates a closed-loop workflow
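The verify-then-correct workflow can also be driven from the calling side, as a bounded loop that feeds failures back into the next attempt. A sketch with hypothetical `generate` and `verify` callables standing in for a model call and a lint/test step:

```python
def solve_with_verification(task, generate, verify, max_attempts=3):
    # generate(task, feedback) -> attempt; verify(attempt) -> (ok, feedback).
    feedback = None
    for _ in range(max_attempts):
        attempt = generate(task, feedback)
        ok, feedback = verify(attempt)
        if ok:
            return attempt
    # Fallback strategy: return the last attempt; callers can surface
    # the remaining verification feedback to the user.
    return attempt

# Toy usage: the verifier demands the answer contain "tested", and the
# generator only adds it once it has received feedback.
gen = lambda task, fb: task + (" tested" if fb else "")
ver = lambda a: ("tested" in a, "missing tests")
result = solve_with_verification("fix bug", gen, ver)
# result == "fix bug tested" (succeeds on the second attempt)
```

This is the same closed loop the prompt template describes, just enforced by the harness instead of relying on the model to self-check.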
---

## Practice Exercises

### Exercise 1: Design a Customer Service Bot System Prompt

Requirements:
- Use at least 5 of the design patterns above
- Include identity definition, behavioral constraints, and tool usage
- Handle sensitive topic boundaries

### Exercise 2: Optimize an Existing Prompt

Take a prompt you're currently using, refactor it with these patterns, and compare the results.

### Exercise 3: Reverse Engineering

Pick an AI product (like Perplexity or Notion AI) and try to infer its System Prompt design through interaction.

Here's the thing: a good System Prompt isn't written in one shot. It comes from continuous testing, observing AI behavior, and iterating. Treat these 10 patterns as your toolbox, and mix and match based on what you actually need.