Prompt Master

Master the art of conversing with AI

Zero-shot Prompting

Prompting without providing any examples

Zero-shot prompting means you give the model no examples -- just clear instructions and let it do its thing. This section's goal: help you write reusable, stable, and verifiable zero-shot prompts.


Learning Path (Suggested Order)

  1. Beginner: Nail the minimal structure of "instruction + output format"
  2. Intermediate: Add constraints and fallbacks for edge cases
  3. Practical: Break business tasks into verifiable conditions

What Is Zero-Shot Prompting?

Zero-shot prompting is simple: no examples, just rules. It works best when the task is clear, labels are well-defined, and the output format is straightforward.

┌────────────────────────────────────────────────────────────────────────┐
│                         Zero-shot Prompt Flow                          │
├────────────────────────────────────────────────────────────────────────┤
│                                                                        │
│  Task Description → Labels/Constraints → Input Data → Output           │
│  (what to do)       (how to do it)       (text/data)  (structured)     │
│                                                                        │
└────────────────────────────────────────────────────────────────────────┘

Why Does Zero-Shot Matter?

Use Case                   | Specific Use                 | Business Value
Quick Prototyping          | Validate new tasks fast      | Lower trial-and-error cost
Lightweight Classification | Few labels, clear rules      | Ship quickly
Content Processing         | Summarize, extract, rewrite  | Boost efficiency
Internal Tools             | Ad-hoc automation            | No training data needed

Business Output (PM Perspective)

You can ship these quickly with zero-shot prompts:

  • Minimum Viable Feature (MVP)
  • Structured Output Templates (ready for automation)
  • Quantifiable Validation (accuracy / consistency)

Completion criteria (suggested):

  • Read this page + finish 1 exercise + do 1 self-check

Core Prompt Structure

The key to zero-shot prompts: task description + output constraints.

Goal: What task to perform
Labels/Rules: Options and boundaries
Format: Output structure
Input: Content to process

General Template

Complete the following task:
{task}

Rules:
{rules}

Output format:
{output_format}

Input:
{input}
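
The template above is just string assembly, so it can be wrapped in a small helper. A minimal sketch (the function name `build_prompt` and the filled-in values are illustrative, not from the original):

```python
def build_prompt(task: str, rules: str, output_format: str, input_text: str) -> str:
    """Fill the general zero-shot template with concrete values."""
    return (
        f"Complete the following task:\n{task}\n\n"
        f"Rules:\n{rules}\n\n"
        f"Output format:\n{output_format}\n\n"
        f"Input:\n{input_text}"
    )

# Reuse the template by swapping the variables, e.g. for sentiment classification:
prompt = build_prompt(
    task="Classify the text as neutral, negative, or positive.",
    rules="If unsure, output unknown.",
    output_format="Only output the label.",
    input_text="I think this vacation was okay.",
)
```

Swapping the four arguments is all it takes to retarget the same structure at a new task.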

Quick Start: Sentiment Classification

Prompt:

Classify the text as neutral, negative, or positive.

Text: I think this vacation was okay.
Sentiment:

Output:

neutral

Note: No examples were provided. That's the whole point of zero-shot.


Example 1: Intent Recognition

Scenario: Customer service messages

Identify the user's intent. Only output one of: inquiry / complaint / refund / other.

User message: The headphones I bought broke after two days. How are you going to handle this?

Intent:

Example 2: Information Extraction

Scenario: Extract key info from text

Extract name, company, and title from the text.
Output JSON: {"name":"","company":"","title":""}

Text: I'm Alice, and I work as a Product Manager at JR Academy.
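
When the output format is JSON, it pays to validate the model's reply before passing it downstream. A minimal sketch (the helper `parse_extraction` is hypothetical; it assumes the three keys from the prompt above):

```python
import json

EXPECTED_KEYS = {"name", "company", "title"}

def parse_extraction(raw: str) -> dict:
    """Validate the model's JSON reply; fall back to empty fields on failure."""
    fallback = {k: "" for k in EXPECTED_KEYS}
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    # Reject replies that are not a dict or whose keys drift from the schema.
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        return fallback
    return data

result = parse_extraction(
    '{"name":"Alice","company":"JR Academy","title":"Product Manager"}'
)
```

This keeps a malformed reply from silently corrupting an automated pipeline: anything off-schema degrades to empty fields instead of raising mid-flow.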

Example 3: Formatted Summary

Scenario: Meeting notes

Organize the following content into 3 bullet points, each no longer than 20 words.

Content: We decided to finish requirement breakdown this week and start development next week. A handles the backend API, B handles the frontend pages.

Migration Template (Swap Variables to Reuse)

Task: {task}
Rules: {label_space_or_rules}
Output: {format}
Input: {input_text}

Self-Check Checklist (Before Submitting)

  • Is the task clear enough?
  • Is the output format fixed?
  • Is there a fallback (output unknown when unsure)?
  • Can you reproduce stable results within 3 tries?
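
The last checklist item can be automated. A minimal sketch of a 3-run consistency check (`is_consistent` and the stand-in `stub_classify` are illustrative; swap in a real API-backed classifier):

```python
def is_consistent(classify, text: str, runs: int = 3) -> bool:
    """Run the classifier `runs` times; True only if every run agrees."""
    results = [classify(text) for _ in range(runs)]
    return len(set(results)) == 1

# Stand-in classifier for illustration only; replace with a real API call.
def stub_classify(text: str) -> str:
    return "neutral"
```

If this check fails at temperature=0, the prompt itself is underspecified; tighten the rules before blaming the model.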

Advanced Tips

  1. Restrict output: State "only output the label/JSON".
  2. Add fallbacks: If unsure -> unknown.
  3. Define boundaries: Specify positive/negative example keywords or rules.
  4. Control temperature: temperature=0 for better consistency.
  5. Step-by-step prompting: Break complex tasks into two zero-shot steps.
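
Tips 1 and 2 combine naturally into a post-processing step: constrain the model in the prompt, then normalize whatever comes back. A minimal sketch (the function name `normalize_label` is illustrative):

```python
LABELS = {"positive", "negative", "neutral"}

def normalize_label(raw: str) -> str:
    """Map model output onto the allowed label set; anything else -> unknown."""
    label = raw.strip().lower().rstrip(".")
    return label if label in LABELS else "unknown"
```

Even with "only output the label" in the prompt, this guardrail catches stray casing, punctuation, or an over-explaining reply.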

Common Problems & Solutions

Problem                     | Cause                | Solution
Unstable output format      | No format constraint | Specify format + example output
Inconsistent classification | Vague rules          | Define boundary conditions
Model over-explains         | Output not limited   | Explicitly say "only output the result"
Can't determine             | Ambiguous task       | Add unknown fallback

Latest Research at a Glance

  • Instruction Tuning: Fine-tuning on multi-task instruction data significantly improves zero-shot generalization, especially on unseen tasks.
  • RLHF/Instruct: Aligning model behavior through human feedback improves the reliability and consistency of "follow the instruction" outputs.
  • Prompt Optimization: Treating the prompt itself as an optimizable object -- iteratively generating better instructions to boost zero-shot performance.

API Call Examples

Python (OpenAI)

from openai import OpenAI

client = OpenAI()

def zero_shot_classify(text: str) -> str:
    """Zero-shot sentiment classification; temperature=0 keeps labels stable."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are a classifier. Only output labels."
            },
            {
                "role": "user",
                "content": f"Classify: positive/negative/neutral\nText: {text}\nLabel:"
            }
        ],
        temperature=0,
        max_tokens=10
    )
    return response.choices[0].message.content.strip()

Python (Claude)

import anthropic

client = anthropic.Anthropic()

def zero_shot_classify(text: str) -> str:
    """The same zero-shot classifier via the Anthropic Messages API."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=10,
        temperature=0,  # match the OpenAI example for deterministic labels
        messages=[
            {
                "role": "user",
                "content": f"Classify: positive/negative/neutral\nText: {text}\nLabel:"
            }
        ]
    )
    return message.content[0].text.strip()

Hands-On Exercises

Exercise 1: Classification

Classify the following reviews as positive / negative / neutral:
1. Shipping was super fast, great experience
2. Quality is so-so
3. Completely unusable, so disappointed

Exercise 2: Extraction

Extract the date and amount from the text, output as JSON.
Text: The order will ship on 2025-03-12, total price is $129.

Exercise 3: Summarization

Summarize the following content into 2 action items.
Text: This week we need to finish the page redesign and API integration. Testing is scheduled for Friday.

Exercise Scoring Rubric (Self-Assessment)

Dimension        | Pass Criteria
Task clarity     | Can restate the goal in one sentence
Format stability | Output structure is consistent
Reusability      | Variables are swappable
Consistency      | Stable results across 3 consecutive runs


Summary

  1. Zero-shot prompting works best for clear tasks and quick validation.
  2. Fixing the output format dramatically improves stability.
  3. You must define boundaries and fallbacks.
  4. Low temperature = more stable, more consistent.
  5. Templates make your prompt workflow reusable.

❓ Frequently Asked Questions

The most frequently searched questions about this chapter's topic

What exactly is zero-shot prompting?

Rules without examples. For instance, `Classify the text as neutral, negative, or positive.\nText: I think this vacation was okay.\nSentiment:` directly yields `neutral`. It applies when the task is clear, the labels are well-defined, and the output format is simple. Instruction tuning and RLHF are what give modern models this capability.

What are the core building blocks of a zero-shot prompt?

Four: goal (what task to perform), labels/rules (options and boundaries), format (output structure), and input (content to process). The template looks like this: `Complete the following task: {task}\nRules: {rules}\nOutput format: {output_format}\nInput: {input}`. Swap the variables to reuse it; this is the minimal workflow from a PM perspective.

What if zero-shot output is unstable?

Apply five fixes, in order: (1) restrict the output (say "only output JSON" or "only output the label"); (2) add an unknown fallback; (3) spell out the label boundary conditions; (4) set temperature to 0; (5) split complex tasks into two zero-shot steps. From the common-problems table: if the format drifts, lock the format and give one line of example output; if classification is inconsistent, write the rules down to the level of boundary conditions.

When must you abandon zero-shot for few-shot or CoT?

Three signals: (1) three consecutive runs give different results (consistency has failed); (2) the task involves multi-step reasoning (math problems, odd-numbers-style tasks), where zero-shot reliably fails; (3) the output format needs strict customization (special enums, nested JSON) that instruction wording alone cannot pin down. In those cases, switch to few-shot for demonstrations, or CoT for a reasoning chain.

How do instruction tuning and RLHF relate to zero-shot ability?

Instruction tuning fine-tunes on multi-task instruction data (see the FLAN paper, arXiv 2109.01652) and significantly improves zero-shot generalization to unseen tasks; RLHF (arXiv 2203.02155) uses human feedback to align the reliability of "follow the instruction" outputs. Together they turned zero-shot from a research trick into a production-ready default.