Tips
General tips: start simple, be specific, use Do vs Don't framing, and more
Here are some tips to keep in mind when designing prompts:
Start simple
Designing prompts is an iterative process that takes a lot of experimentation to get right. A simple playground, such as those from OpenAI or Cohere, is a good place to start.
Start with simple prompts and keep adding elements and context as you aim for better results. Iterating on your prompt throughout the process is crucial. As you read through this guide, you'll see many examples where specificity, simplicity, and conciseness tend to give better results.
When you have a big task involving many different subtasks, try breaking it down into simpler subtasks and building up as you get better results. This avoids dumping too much complexity into the prompt design upfront.
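The layering described above can be sketched as a small prompt builder. This is only an illustration; the function name and field labels are assumptions, not part of any library:

```python
# Minimal sketch: start with a bare instruction, then layer on context and an
# output format only when results warrant it. All names here are illustrative.

def build_prompt(instruction, context=None, output_format=None):
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Desired format: {output_format}")
    return "\n\n".join(parts)

# Iteration 1: instruction only
v1 = build_prompt("Classify the sentiment of the review.")

# Iteration 2: add context and a format once v1 proves too loose
v2 = build_prompt(
    "Classify the sentiment of the review.",
    context="The review is about a restaurant.",
    output_format="one of: positive, negative, neutral",
)
```

Each iteration changes one thing, which makes it easy to tell which addition actually improved the output.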
Instruction
You can design effective prompts for simple tasks by using commands like "Write", "Classify", "Summarize", "Translate", "Sort", etc.
You'll also need to experiment a lot to figure out what works best. Try different instructions with different keywords, contexts, and data to see what clicks for your specific use case and task. Generally, the more specific and relevant the context is to the task, the better.
Some people recommend putting the instruction at the beginning of the prompt. Others suggest using clear separators like "###" to split the instruction from the context.
For example:
Prompt:
### Instruction ###
Translate the following text to Spanish:
Text: "hello!"
Output:
¡Hola!
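In code, this instruction-first layout with a separator might look like the following sketch; the helper function is hypothetical, not part of any API:

```python
# Sketch of an instruction-first prompt template with a "###" separator.
# The function name and exact layout are illustrative assumptions.

def instruction_prompt(instruction: str, text: str) -> str:
    """Put the instruction first, set off from the input text by a separator."""
    return f'### Instruction ###\n{instruction}\n\nText: "{text}"'

prompt = instruction_prompt("Translate the following text to Spanish:", "hello!")
```

Keeping the template in one place means the separator convention stays consistent across every prompt you send.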
Specificity
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt, the better the results. This matters most when you care about the output's format or style. There's no magic token or keyword that guarantees better results -- what matters is a well-formatted, descriptive prompt. Providing examples in the prompt is actually one of the most effective ways to get output in a specific format.
Watch your prompt length too, since context windows have limits. Including too many unnecessary details isn't always a good idea; details should be relevant and help the task. We encourage lots of experimentation and iteration to optimize prompts for your application.
For instance, here's a prompt that extracts specific information from text:
Prompt:
Extract location names from the text below.
Desired format:
Locations: <comma-separated list>
Input: "Although these developments are encouraging for researchers, many mysteries remain. Neuroimmunologist Henrique Veiga-Fernandes at Lisbon's Champalimaud Centre said: 'We often have a black box between the brain and the effects we observe. If we want to use it in a therapeutic context, we need to understand the mechanism.'"
Output:
Locations: Lisbon, Champalimaud Centre
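Pinning down the output format like this also makes the response easy to post-process. Here is a sketch of a parser for that format; it assumes the model honors the requested `Locations:` line, which is not guaranteed:

```python
def parse_locations(output: str) -> list[str]:
    """Pull the comma-separated list out of a 'Locations: ...' line, if present."""
    for line in output.splitlines():
        if line.startswith("Locations:"):
            items = line[len("Locations:"):].split(",")
            return [item.strip() for item in items if item.strip()]
    return []  # the model ignored the requested format

print(parse_locations("Locations: Lisbon, Champalimaud Centre"))
# prints ['Lisbon', 'Champalimaud Centre']
```

The empty-list fallback is a reminder that format instructions are requests, not guarantees; calling code should handle the case where the model deviates.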
Avoid ambiguity
Given the advice above about being detailed and improving format, it's easy to fall into the trap of trying to be too clever with your prompt -- which can create vague descriptions. Being specific and direct is usually better. Think of it like good communication: the more direct, the more effective.
For example, you might want to learn about prompt engineering:
Explain the concept of prompt engineering. Keep the explanation short, only a few sentences.
That prompt isn't clear about how many sentences or what style. A better version is specific, concise, and to the point:
Explain the concept of prompt engineering to a high school student in 2-3 sentences.
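The specific version generalizes naturally into a template. A sketch, with illustrative parameter names:

```python
def explain_prompt(concept: str, audience: str, length: str = "2-3 sentences") -> str:
    """Build an explanation prompt that pins down audience and length."""
    return f"Explain the concept of {concept} to {audience} in {length}."

print(explain_prompt("prompt engineering", "a high school student"))
# prints: Explain the concept of prompt engineering to a high school student in 2-3 sentences.
```

Making audience and length explicit parameters forces every prompt built this way to state them, rather than leaving them vague.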
Do vs Don't
Another common tip: avoid telling the model what not to do. Instead, tell it what to do. This is more specific and focuses on the details that lead to good responses from the model.
For example, a movie recommendation agent (bad prompt):
Prompt:
You are a movie recommendation agent. Do not ask about interests. Do not ask for personal information.
Customer: Recommend a movie based on my interests.
Agent:
Output:
Sure, I can recommend a movie based on your interests. What genres do you like—action, comedy, romance, or something else?
A better prompt:
Prompt:
You are a movie recommendation agent. Recommend a movie from globally trending movies. Avoid asking for user preferences and avoid asking for personal information. If you cannot recommend a movie, respond with "Sorry, I can't find a movie recommendation today."
Customer: Recommend a movie based on my interests.
Agent:
Output:
Sorry, I don't have any information about your interests. Here is a list of globally trending movies: [movie list]. I hope you find something you like!
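The do-instead-of-don't pattern can be encoded as a small prompt composer: positive instructions plus an explicit fallback response, rather than a list of prohibitions. A sketch with hypothetical names:

```python
def agent_prompt(role: str, dos: list[str], fallback: str) -> str:
    """Compose an agent prompt from positive instructions and an explicit fallback."""
    lines = [f"You are a {role}."]
    lines.extend(dos)
    lines.append(f'If you cannot comply, respond with "{fallback}"')
    return " ".join(lines)

prompt = agent_prompt(
    "movie recommendation agent",
    ["Recommend a movie from globally trending movies.",
     "Avoid asking for user preferences or personal information."],
    "Sorry, I can't find a movie recommendation today.",
)
```

Requiring a fallback string as an argument nudges you to always give the model a positive path for the failure case, instead of leaving that behavior unspecified.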
❓ FAQ
Frequently asked questions about this chapter's topics.
I'm just starting to write prompts. Which structure should I start with?
Start with the simplest instruction + input; if the results aren't good enough, layer on context, then an output format. If you pile on a system role + multiple examples + a JSON schema from the start, you won't know which layer is failing when something goes wrong. The OpenAI / Cohere playgrounds are ideal environments for this kind of iteration: change one thing, run once.
Should the instruction go at the beginning or the end of the prompt?
Put it at the beginning, and use an explicit separator like `### Instruction ###` to set the instruction apart from the data. This was the advice in OpenAI's early documentation and it still holds up: the separator signals to the model that this section is the command to execute. An instruction buried at the end is easily overlooked in a long context.
How specific does "specificity" need to be?
Specific down to audience + sentence count + style. "Explain prompt engineering" is too vague; "Explain prompt engineering to a high school student in 2-3 sentences" works well: it supplies an audience (a high school student), a length (2-3 sentences), and an implied style (concise). But don't pile on irrelevant details; the context window is limited.
How much does providing examples help with controlling output format?
A lot. As this guide puts it, providing examples in the prompt is one of the most effective ways to get output in a specific format. To get a constrained format like `Locations: <comma-separated list>`, the most reliable approach is to write out a "Desired format" line plus a sample input and output; that beats describing the format in prose over and over.
Why does the bad movie recommendation agent prompt fail?
Because the prompt is all negations: "Do not ask about interests. Do not ask for personal information." The model still asks "What genres do you like?" Switching to positive instructions ("Recommend a movie from globally trending movies"; if none can be found, respond with "Sorry, I can't find a movie recommendation today.") immediately produces a list of trending films. Telling the model which path to take is far more effective than forbidding paths.