Elements
instruction / context / input data / output indicator
If you've seen enough prompt engineering examples and applications, you'll notice that prompts tend to share common building blocks.
A prompt can contain any of the following elements:
- instruction: The specific task or directive you want the model to perform
- context: External or additional background information that helps the LLM respond better
- input data: The content or question from the user
- output indicator: The expected type or format of the output
Here's a simple prompt for a text classification task that shows these elements in action:
Prompt
Classify the text as neutral, negative, or positive.
Text: I think the food is okay.
Sentiment:
In this example, the instruction is "classify the text as neutral, negative, or positive." The input data is "I think the food is okay." The output indicator is "Sentiment:". No context was used here, but you could provide it — like adding more examples to help the model understand the task better and guide its output.
Not every element is required for every prompt. It depends on the task.
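The element breakdown above maps directly onto code. Here's a minimal sketch (plain Python string formatting, no LLM client assumed) that assembles the sentiment prompt from its elements:

```python
# Assemble the sentiment-classification prompt from its elements.
# No context element is used here, mirroring the example above.
instruction = "Classify the text as neutral, negative, or positive."
input_data = "I think the food is okay."
output_indicator = "Sentiment:"

prompt = f"{instruction}\nText: {input_data}\n{output_indicator}"
print(prompt)
```

Keeping the elements in separate variables like this makes it easy to swap in new input data or tighten the instruction without touching the rest of the prompt.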
How to write the four core elements
Instruction
The instruction should make it immediately obvious what you want the model to do. Start with a verb: Summarize / Extract / Translate / Generate / Explain / Classify / Sort. If you want a specific style or audience, bake it right into the instruction.
Example
Summarize the following content in 3 bullet points, targeting product managers.
Context
Context is "the information the model needs to give a correct answer" — role definition, business background, relevant knowledge, constraints, examples, data sources. The more relevant the context, the more stable the output. But too much noisy context can actually distract the model.
Example
You are an e-commerce customer service agent. Keep your tone friendly but concise. The brand emphasizes "great value" and "30-day returns."
Input Data
This is the actual text, question, or data you want processed. Use clear delimiters (like """, ###, or XML tags) to wrap the input — it reduces the chance the model misinterprets things.
Example
Input:
"""
Customer review: Shipping was fast, but the packaging was damaged.
"""
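Delimiter wrapping is easy to automate. A minimal sketch (the helper name is illustrative, not from any library) that fences raw user content in triple quotes before it is spliced into the prompt:

```python
def wrap_input(text: str, fence: str = '"""') -> str:
    """Wrap raw user content in delimiters so the model
    treats it as data rather than as new instructions."""
    return f"Input:\n{fence}\n{text}\n{fence}"

section = wrap_input(
    "Customer review: Shipping was fast, but the packaging was damaged."
)
print(section)
```

Centralizing the wrapping in one helper also means you can switch to `###` or XML-tag delimiters in a single place if a different model responds better to them.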
Output Indicator
The output indicator determines what the output looks like. You can specify format (table, JSON, bullet points), fields, ordering, length, or language.
Example
Output format:
- Conclusion:
- Evidence:
- Recommendation:
A complete four-element example
### Instruction
Classify the user feedback into: Logistics / Product Quality / Service / Price, and provide a one-sentence summary.
### Context
You are an e-commerce operations analyst. You need to quickly identify the issue type so it can be routed to the right team.
### Input
"""
The headphones I bought last week have great sound quality, but they're a bit loose. Customer service responded quickly, and the exchange was smooth.
"""
### Output
Category: <Logistics|Product Quality|Service|Price>
Summary: <one sentence>
The instruction defines the classification task, the context provides the business scenario, the input is the review content, and the output indicator locks down the fields and format.
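The four-part layout above can be generated from a small template function. A sketch (function and parameter names are illustrative assumptions, not part of any API):

```python
def build_prompt(instruction: str, context: str,
                 input_data: str, output_indicator: str) -> str:
    """Compose the four elements with ### headers and delimited input."""
    return (
        f"### Instruction\n{instruction}\n\n"
        f"### Context\n{context}\n\n"
        f'### Input\n"""\n{input_data}\n"""\n\n'
        f"### Output\n{output_indicator}"
    )

prompt = build_prompt(
    instruction=("Classify the user feedback into: Logistics / Product "
                 "Quality / Service / Price, and provide a one-sentence summary."),
    context="You are an e-commerce operations analyst.",
    input_data=("The headphones I bought last week have great sound quality, "
                "but they're a bit loose."),
    output_indicator=("Category: <Logistics|Product Quality|Service|Price>\n"
                      "Summary: <one sentence>"),
)
print(prompt)
```

A template like this keeps the structure fixed while you vary the content, which makes prompts easier to version and A/B test.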
Common combinations
- instruction + input data: Best for simple tasks like translation, summarization, or rewriting.
- instruction + context + input data: Best for real business scenarios (with roles, rules, constraints).
- instruction + context + output indicator: Best for generation tasks with no input data, like writing marketing copy or building plans.
Practical output format patterns
When output needs to feed into downstream systems (automation pipelines, data analysis, database imports), use explicit formats:
JSON
Output JSON with these fixed fields:
{
  "category": "string",
  "summary": "string",
  "confidence": 0-1
}
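When the schema is pinned down like this, the model's reply can be validated before it enters the pipeline. A minimal sketch using only the standard library (the sample reply is a made-up illustration, not real model output):

```python
import json

# Expected fields and their Python types, matching the schema above.
REQUIRED = {"category": str, "summary": str, "confidence": (int, float)}

def parse_reply(reply: str) -> dict:
    """Parse the model's JSON reply and check fields, types, and the 0-1 range."""
    data = json.loads(reply)
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    if not 0 <= data["confidence"] <= 1:
        raise ValueError("confidence out of range")
    return data

# Hypothetical model reply:
result = parse_reply(
    '{"category": "Logistics", "summary": "Packaging damaged in transit.", '
    '"confidence": 0.92}'
)
```

Failing fast on a malformed reply is usually better than letting a bad record propagate into a database or downstream job.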
Table
Output a table: Issue Type | Impact Level | Recommended Action
Bullet points
Output 3-5 bullet points, each no longer than 20 words.
Tips for designing prompts
- Start simple, then gradually add context and format constraints
- The more specific your instructions, the more stable your results
- Use clear delimiters to separate instructions, context, input, and output
- Tell the model what to do, not what not to do
❓ FAQ
The most frequently searched questions about this chapter's topic.
What are the four core elements of a prompt?
Instruction (the task directive, e.g. "classify", "extract", "translate"), Context (background, including role, business rules, and knowledge sources), Input Data (the actual text or data to process), and Output Indicator (the output format, e.g. `Sentiment:` or a JSON schema). Not every prompt needs all four, but the more complex the task, the more complete the prompt should be.
Why wrap Input Data in triple quotes or XML tags?
Without delimiters, the model can mistake the input text for new instructions. For example, a user review containing "ignore previous instruction" can derail the response. Wrapping the input in `"""`, `###`, or `<input></input>` makes it obvious to the model that this is data, not an instruction, and it's the simplest way to reduce prompt-injection misreads.
Is "output JSON" enough for the Output Indicator?
No. Pin down the field names, types, and value ranges, e.g. `{"category":"string","summary":"string","confidence":0-1}`. If you only say "output JSON", the model will improvise field names, nesting levels, and null/undefined values, and downstream parsing will break. For output that feeds an automation pipeline or a database, the schema must be locked down.
Which tasks fit the three common prompt combinations?
Instruction + Input Data suits the simplest transformation tasks like translation, summarization, and rewriting. Instruction + Context + Input Data suits real business scenarios with roles and rules (e-commerce support ticket classification, contract clause review). Instruction + Context + Output Indicator suits generation tasks with no input, such as writing marketing copy or drafting project plans.
Why say "what to do" instead of "what not to do"?
Negative instructions are unreliable with probabilistic models. Write "Do not ask about interests" and the model may still ask, because the "ask about interests" path still carries high probability. Switch to a positive instruction, such as "Recommend a movie from the global top trending titles; if none is found, reply 'Sorry, I can't find a movie recommendation today'", and the behavior becomes stable.