Prompt Master

Master the art of conversing with AI


Question answering prompts (overview)

Question Answering (QA) is one of the most fundamental LLM capabilities. From general encyclopedia Q&A to enterprise knowledge base assistants, QA runs through the core workflow of most AI applications.

The core goal: make the AI answer within a controllable scope, cite evidence, and refuse when information is insufficient.


Learning Path (suggested order)

  1. Beginner: Write a minimal QA Prompt with "question + output format"
  2. Intermediate: Add citations and "I don't know" rules
  3. Advanced: Restrict answers to a given document (RAG/Closed QA)

Two Core Scenarios

1. Open-Domain QA

  • Definition: Leverages the model's pre-trained knowledge to answer
  • Scenarios: Encyclopedia Q&A, general assistants, chatbots
  • Challenge: Knowledge freshness and hallucination

2. Closed-Domain QA (RAG)

  • Definition: Answers only based on provided context
  • Scenarios: Enterprise knowledge bases, contract review, customer service
  • Challenge: Making the model "only say what's in the document"

Business Output (PM Perspective)

With QA Prompts you can deliver:

  • Minimal viable Q&A feature (FAQ/knowledge base)
  • Structured answer templates (parseable, reviewable)
  • Traceable responses (with citations and source numbers)
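"Traceable" can be checked mechanically. A minimal sketch, assuming the `[Doc N]` citation style used in the examples on this page, that extracts the cited document numbers from an answer:

```python
import re

def extract_citations(answer: str) -> list[int]:
    """Pull the document numbers cited as [Doc N] out of an answer."""
    return sorted({int(n) for n in re.findall(r"\[Doc\s*(\d+)\]", answer)})

answer = "The battery warranty is 6 months [Doc 2], not 24 months [Doc 1]."
print(extract_citations(answer))  # → [1, 2]
```

A reviewer (or a test) can compare the extracted numbers against the documents that were actually provided.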

Completion criteria (suggested):

  • Read this page + complete 1 exercise + self-check once

Core Prompt Structure

Goal: Answer the question
Scope: Whether restricted to given documents
Format: Output structure (conclusion/evidence/citation)
Input: Question + document (optional)

General Template (Open-Domain)

Answer the following question with a concise conclusion.

Question:
{question}

Output format:
- Answer:
- Key points (1-3):

General Template (Closed-Domain)

You are an enterprise knowledge base assistant. You can only use the provided documents to answer.

Documents:
{context}

Question:
{question}

Requirements:
1) Only cite document content
2) If the document doesn't have the answer, output "Insufficient information, cannot answer"
3) Include citation numbers in output

Output format:
- Answer:
- Citations:

Quick Start: Open-Domain QA

Question: Why is the sky blue?

Output format:
- Answer:
- Key points (1-3):

Example 1: Open-Domain QA

Question: What is the core reaction in photosynthesis?

Output format:
- Answer:
- Key points (1-3):

Example 2: Closed-Domain QA (with citations)

Documents:
[Doc 1] Product warranty is 24 months, covering non-human-caused damage only.
[Doc 2] Batteries are consumable items with a 6-month warranty.

Question: My battery broke after one year. Is it covered under warranty?

Requirements: Answer using documents only, cite [Doc] for each statement.

Example 3: Ambiguous Question Clarification

Question: Can I return it?

Requirements: If the question is unclear, ask a clarifying question first, then give a "default assumption" answer.

Migration Template (swap variables to reuse)

Question: {question}
Documents: {context}
Output: Answer + citation numbers + key info
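A sketch of reusing the template by swapping the variables; the two scenarios below are the warranty and course examples from this page:

```python
CLOSED_QA_TEMPLATE = """Documents:
{context}

Question: {question}

Output: Answer + citation numbers + key info"""

warranty_prompt = CLOSED_QA_TEMPLATE.format(
    context="[Doc 1] Product warranty is 24 months.\n[Doc 2] Batteries have a 6-month warranty.",
    question="My battery broke after one year. Is it covered under warranty?",
)
course_prompt = CLOSED_QA_TEMPLATE.format(
    context="[Doc 1] Course A runs for 12 weeks with 8 live sessions.",
    question="How many live sessions does Course A have?",
)
print(warranty_prompt)
```

Only `{context}` and `{question}` change between business scenarios; the output contract stays fixed.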

Self-check Checklist (review before submitting)

  • Is the answer scope clear (open/closed)?
  • Does it require "refuse if info is insufficient"?
  • Are citations traceable?
  • Is the output format stable?
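The last two checks (traceable citations, stable format) can be automated. A minimal validator sketch, assuming the closed-domain output format above (`Answer:` / `Citations:` fields, `[Doc N]` citations, and the exact refusal phrase):

```python
REFUSAL = "Insufficient information, cannot answer"

def check_output(answer: str) -> dict[str, bool]:
    """Self-check a closed-domain QA answer against the checklist above."""
    refused = REFUSAL in answer
    return {
        "has_answer_field": "Answer:" in answer,
        # Citations are not required when the model refuses
        "has_citations": "[Doc" in answer or refused,
        # A refusal should not also cite documents
        "refusal_well_formed": not refused or "[Doc" not in answer,
    }

print(check_output("- Answer: 6-month warranty only [Doc 2]\n- Citations: [Doc 2]"))
```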

Tips & Best Practices

  1. "I don't know" rule: state explicitly in the prompt that when no answer is found, the model must output "Insufficient information, cannot answer."

  2. Cite sources: force citation of document numbers so answers are reviewable.

  3. Handle ambiguity: require a clarifying question first, then the answer.

  4. Structured output: use lists or tables to organize answers for readability.

  5. Parameter settings: for QA tasks, temperature=0-0.3 is recommended for consistency.


Common Problems & Solutions

Problem | Cause | Solution
Over-elaborate answer | Scope unrestricted | Specify "documents only"
Can't trace back | No citations | Force citation numbers
Jumbled logic | Unformatted output | Fix output fields
Won't refuse | Missing rule | Add "insufficient info" fallback
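The "insufficient info" fallback can also be enforced in code. A sketch of a wrapper that routes refusals to a fallback path instead of surfacing them raw (the `qa_fn` parameter and the fallback message are illustrative):

```python
REFUSAL = "Insufficient information, cannot answer"

def answer_with_fallback(qa_fn, context: str, question: str) -> str:
    """Call a closed-QA function and route refusals to a fallback path."""
    answer = qa_fn(context, question)
    if REFUSAL in answer:
        # Escalate instead of showing a bare refusal to the end user
        return "I couldn't find this in the docs. Forwarding to a human agent."
    return answer

# Usage with a stubbed QA function:
print(answer_with_fallback(lambda c, q: REFUSAL, "[Doc 1] text", "Unknown question?"))
```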

API Examples

Python (OpenAI)

from openai import OpenAI

client = OpenAI()

def closed_qa(context: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are a knowledge base assistant. Only use the provided documents to answer."
            },
            {
                "role": "user",
                "content": f"""Documents:\n{context}\n\nQuestion: {question}\n\nRequirements:\n- Use documents only\n- If no answer, return "Insufficient information, cannot answer"\n- Include citation numbers"""
            }
        ],
        temperature=0,
        max_tokens=300
    )
    return response.choices[0].message.content.strip()

Python (Claude)

import anthropic

client = anthropic.Anthropic()

def closed_qa(context: str, question: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=300,
        messages=[
            {
                "role": "user",
                "content": f"""You are a knowledge base assistant. Only use the documents to answer.
Documents: {context}
Question: {question}
Requirements: If no answer, return "Insufficient information, cannot answer." Include citation numbers."""
            }
        ]
    )
    return message.content[0].text.strip()

Hands-on Exercises

Exercise 1: Open-Domain

Question: Why does the Moon have tidal locking?
Output format: Answer + Key points (1-3)

Exercise 2: Closed-Domain

Documents:
[Doc 1] Course A runs for 12 weeks with 8 live sessions.
[Doc 2] Course A offers 1 free make-up session.

Question: How many live sessions does Course A have? Can I make up missed ones?

Exercise Scoring Rubric (self-assessment)

Dimension | Passing Criteria
Clear scope | Clearly states open or closed
Correct citations | Citation numbers are consistent
Refusal rule | Refuses when info is insufficient
Stable format | Output fields are consistent


Takeaways

  1. QA needs a clear "open/closed" scope.
  2. Citations and refusal mechanisms are the foundation of trustworthy QA.
  3. Fixed output format makes engineering integration easier.
  4. Low temperature improves consistency.
  5. Use templates for quick reuse across business scenarios.