Elements
instruction / context / input data / output indicator
If you've seen enough prompt engineering examples and applications, you'll notice that prompts tend to share common building blocks.
A prompt can contain any of the following elements:
- instruction: The specific task or directive you want the model to perform
- context: External or additional background information that helps the LLM respond better
- input data: The content or question from the user
- output indicator: The expected type or format of the output
Here's a simple prompt for a text classification task that shows these elements in action:
Prompt
Classify the text as neutral, negative, or positive.
Text: I think the food is okay.
Sentiment:
In this example, the instruction is "classify the text as neutral, negative, or positive." The input data is "I think the food is okay." The output indicator is "Sentiment:". No context was used here, but you could provide some, such as additional labeled examples that help the model understand the task and guide its output.
Not every element is required for every prompt. It depends on the task.
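To make the elements concrete, here is a minimal sketch that assembles the three elements used above into one prompt string. The `build_classification_prompt` helper and its f-string template are illustrative, not part of any library:

```python
def build_classification_prompt(text: str) -> str:
    """Assemble instruction, input data, and output indicator into one
    prompt string. (Illustrative helper; the element names follow the
    chapter's terminology.)"""
    instruction = "Classify the text as neutral, negative, or positive."
    input_data = f"Text: {text}"
    output_indicator = "Sentiment:"
    return "\n".join([instruction, input_data, output_indicator])

prompt = build_classification_prompt("I think the food is okay.")
print(prompt)
```

The resulting string is exactly the three-line prompt shown above, ready to send to any model API.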
How to write the four core elements
Instruction
The instruction should make it immediately obvious what you want the model to do. Start with a verb: Summarize / Extract / Translate / Generate / Explain / Classify / Sort. If you want a specific style or audience, bake it right into the instruction.
Example
Summarize the following content in 3 bullet points, targeting product managers.
Context
Context is the information the model needs to give a correct answer: role definition, business background, relevant knowledge, constraints, examples, and data sources. The more relevant the context, the more stable the output. But too much noisy context can actually distract the model.
Example
You are an e-commerce customer service agent. Keep your tone friendly but concise. The brand emphasizes "great value" and "30-day returns."
Input Data
This is the actual text, question, or data you want processed. Use clear delimiters (like """, ###, or XML tags) to wrap the input; this reduces the chance the model confuses the data with the instruction.
Example
Input:
"""
Customer review: Shipping was fast, but the packaging was damaged.
"""
Output Indicator
The output indicator determines what the output looks like. You can specify format (table, JSON, bullet points), fields, ordering, length, or language.
Example
Output format:
- Conclusion:
- Evidence:
- Recommendation:
A complete four-element example
### Instruction
Classify the user feedback into: Logistics / Product Quality / Service / Price, and provide a one-sentence summary.
### Context
You are an e-commerce operations analyst. You need to quickly identify the issue type so it can be routed to the right team.
### Input
"""
The headphones I bought last week have great sound quality, but they're a bit loose. Customer service responded quickly, and the exchange was smooth.
"""
### Output
Category: <Logistics|Product Quality|Service|Price>
Summary: <one sentence>
The instruction defines the classification task, the context provides the business scenario, the input is the review content, and the output indicator locks down the fields and format.
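The four-element layout above can be generated from a template. The `build_prompt` function below is an illustrative sketch that mirrors the example's ### section headers; the parameter names are assumptions, not an established API:

```python
def build_prompt(instruction: str, context: str,
                 input_data: str, output_indicator: str) -> str:
    """Combine the four elements under ### section headers, mirroring
    the chapter's complete example. (Illustrative helper.)"""
    return (
        f"### Instruction\n{instruction}\n\n"
        f"### Context\n{context}\n\n"
        f'### Input\n"""\n{input_data}\n"""\n\n'
        f"### Output\n{output_indicator}"
    )

prompt = build_prompt(
    instruction="Classify the user feedback into: Logistics / Product Quality / "
                "Service / Price, and provide a one-sentence summary.",
    context="You are an e-commerce operations analyst.",
    input_data="The headphones I bought last week have great sound quality, "
               "but they're a bit loose.",
    output_indicator="Category: <Logistics|Product Quality|Service|Price>\n"
                     "Summary: <one sentence>",
)
print(prompt)
```

A template like this keeps the element boundaries consistent across every prompt you send, which is exactly what the delimiters are for.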
Common combinations
- instruction + input data: Best for simple tasks like translation, summarization, or rewriting.
- instruction + context + input data: Best for real business scenarios (with roles, rules, constraints).
- instruction + context + output indicator: Best for generation tasks with no input data, such as writing marketing copy or drafting plans.
Practical output format patterns
When output needs to feed into downstream systems (automation pipelines, data analysis, database imports), use explicit formats:
JSON
Output JSON with these fixed fields:
{
"category": "string",
"summary": "string",
"confidence": 0-1
}
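When JSON output feeds a downstream system, parse and validate it before use rather than trusting the model to follow the schema. A minimal sketch using only the standard library (`raw` stands in for a real model response; the helper name is illustrative):

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse a model's JSON reply and validate the fixed fields from
    the format spec above. (Illustrative; raise on any violation so
    the pipeline fails loudly instead of ingesting bad data.)"""
    data = json.loads(raw)
    required = {"category", "summary", "confidence"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not 0 <= data["confidence"] <= 1:
        raise ValueError("confidence must be between 0 and 1")
    return data

result = parse_model_json(
    '{"category": "Logistics", "summary": "Packaging damaged.", "confidence": 0.9}'
)
```

For stricter schemas, a validation library is a natural next step, but even this level of checking catches most formatting drift.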
Table
Output a table: Issue Type | Impact Level | Recommended Action
Bullet points
Output 3-5 bullet points, each no longer than 20 words.
Tips for designing prompts
- Start simple, then gradually add context and format constraints
- The more specific your instructions, the more stable your results
- Use clear delimiters to separate instructions, context, input, and output
- Tell the model what to do, not what not to do