Prompt Master

Master the art of conversing with AI

Truthfulness prompts (overview)

Truthfulness isn't about making the LLM never wrong. It's about explicitly saying "I don't know" when evidence is insufficient, surfacing the uncertainty, and verifying claims against the given facts.


Learning Path (suggested order)

  1. Beginner: Fix output format (conclusion + evidence + uncertainty)
  2. Intermediate: Verify claims within given facts/context
  3. Advanced: Traceable answers for business use

What Is a Truthfulness Prompt?

A Truthfulness Prompt specifies evidence sources and output constraints, requiring the model to answer within verifiable facts and output uncertainty or refuse when information is insufficient.

┌────────────────────────────────────────────────────────────────────────┐
│                        Truthfulness Prompt Flow                        │
├────────────────────────────────────────────────────────────────────────┤
│                                                                        │
│  Question/claim → Available facts → Conclusion → Evidence/uncertainty  │
│  (statement)      (facts)          (holds/not)   (citation/note)       │
│                                                                        │
└────────────────────────────────────────────────────────────────────────┘

Why Truthfulness Matters

Use Case           | Specific Application              | Business Value
Content production | Fact-checking, source citing      | Lower misinformation risk
Customer service   | Standardized replies, no guessing | Higher trust
Compliance/legal   | Traceable evidence                | Lower compliance risk
Research/writing   | Fact consistency checks           | Better credibility

Business Output (PM Perspective)

With Truthfulness Prompts you can deliver:

  • Traceable answers: Conclusion + evidence citations
  • Safe fallback: Explicit refusal when info is insufficient
  • Auditable output: Easy for human review and compliance checks

Completion criteria (suggested):

  • Read this page + complete 1 exercise + self-check once

Core Prompt Structure

Goal: Draw a conclusion based on facts
Evidence: Only cite the given facts
Format: Conclusion + evidence + uncertainty
Input: Question or claim

General Template

You are a fact-checker. You can only answer based on the given facts.

Question/claim:
{claim}

Known facts:
{facts}

Requirements:
1) If facts are insufficient, output "Cannot determine"
2) Conclusion must cite corresponding fact numbers
3) Fixed output format

Output format:
- Conclusion:
- Evidence:
- Uncertainty:
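
In practice you'll fill this template programmatically. A minimal Python sketch, using only the standard library; `build_prompt` and the fact-numbering scheme are illustrative, not a fixed API:

```python
# Minimal sketch: assemble the fact-checking prompt from a claim and a
# list of facts. The template text mirrors the general template above.

TEMPLATE = """You are a fact-checker. You can only answer based on the given facts.

Question/claim:
{claim}

Known facts:
{facts}

Requirements:
1) If facts are insufficient, output "Cannot determine"
2) Conclusion must cite corresponding fact numbers
3) Fixed output format

Output format:
- Conclusion:
- Evidence:
- Uncertainty:"""

def build_prompt(claim: str, facts: list[str]) -> str:
    # Number the facts so the model can cite them as "fact 1", "fact 2", ...
    numbered = "\n".join(f"{i}) {fact}" for i, fact in enumerate(facts, start=1))
    return TEMPLATE.format(claim=claim, facts=numbered)

prompt = build_prompt(
    "Did Company A's revenue grow in 2023?",
    ["Company A's 2022 revenue was $1 billion",
     "Company A's 2023 revenue was $1.2 billion"],
)
```

The returned string can be sent to any chat model as-is; keeping the template in one place is what makes the output format stable across use cases.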

Quick Start: Simple Verification

Question: Did Company A's revenue grow in 2023?

Known facts:
1) Company A's 2022 revenue was $1 billion
2) Company A's 2023 revenue was $1.2 billion

Output format:
- Conclusion:
- Evidence:
- Uncertainty:

Example 1: Correcting Hallucination

Claim: The Sun is 3 million km from Earth.

Known facts:
1) The average Earth-Sun distance is approximately 150 million km

Output format:
- Conclusion:
- Evidence:
- Uncertainty:

Example 2: Refusing When Info Is Insufficient

Question: Is Company B planning layoffs?

Known facts:
1) Company B launched a new product last quarter
2) No public financial reports or announcements available

Output format:
- Conclusion:
- Evidence:
- Uncertainty:

Example 3: Comparing Multiple Facts

Claim: The course conversion rate improved because of the price drop.

Known facts:
1) This month's price is 10% lower than last month
2) Conversion rate increased by 8%
3) A new landing page launched this month

Output format:
- Conclusion:
- Evidence:
- Uncertainty:

Migration Template (swap variables to reuse)

Claim/question: {claim}
Known facts: {facts}
Output: Conclusion + evidence numbers + uncertainty note

Self-check Checklist (review before submitting)

  • Is the conclusion based only on the given facts?
  • Are evidence source numbers clearly indicated?
  • Does it refuse when info is insufficient?
  • Is the output format fixed and parseable?
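
Parts of this checklist can be automated before a human ever reads the answer. A minimal sketch, assuming the fixed output format above and that evidence is cited as "fact N"; `check_answer` and its regexes are illustrative:

```python
import re

REQUIRED_FIELDS = ("Conclusion", "Evidence", "Uncertainty")

def check_answer(answer: str, num_facts: int) -> list[str]:
    """Return a list of format problems; an empty list means the answer passes."""
    problems = []
    # 1) Are all fixed fields present?
    for field in REQUIRED_FIELDS:
        if not re.search(rf"^- {field}:", answer, flags=re.MULTILINE):
            problems.append(f"missing field: {field}")
    # 2) Is evidence cited by fact number (unless the model refused)?
    cited = {int(n) for n in re.findall(r"\bfact\s+(\d+)", answer, re.IGNORECASE)}
    if "Cannot determine" not in answer and not cited:
        problems.append("no fact numbers cited")
    # 3) Do cited numbers refer to facts that were actually given?
    unknown = sorted(n for n in cited if not 1 <= n <= num_facts)
    if unknown:
        problems.append(f"cites unknown facts: {unknown}")
    return problems

answer = ("- Conclusion: The claim is false.\n"
          "- Evidence: fact 1\n"
          "- Uncertainty: None noted.")
print(check_answer(answer, num_facts=1))  # → []
```

A check like this makes "fixed and parseable" enforceable: answers that fail can be rejected or regenerated automatically.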

Advanced Tips

  1. Evidence numbering: Require citing fact numbers to avoid vague references.
  2. Confidence level: Have the model rate its confidence as high/medium/low.
  3. Conflict handling: When facts contradict, output "Cannot determine."
  4. Two-way verification: Have the model output both supporting and opposing evidence.
  5. Step-by-step verification: First check whether facts cover the claim, then draw a conclusion.

Common Problems & Solutions

Problem                   | Cause                 | Solution
Over-confident conclusion | Missing refusal rule  | Add "Cannot determine"
Uses external knowledge   | Evidence unrestricted | Specify "facts only"
Unclear evidence          | No numbering required | Force citation numbers
Explanation too long      | No format limits      | Fix fields and length

Recent Research Highlights (external summaries)

  • TruthfulQA: A benchmark measuring "whether models avoid mimicking common human misconceptions," emphasizing truthfulness on commonly misunderstood questions.
  • SelfCheckGPT: Uses self-consistency / diverse sampling in black-box settings to detect hallucinations, improving output credibility assessment.
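
The SelfCheckGPT idea can be sketched with a toy consistency check: sample answers to the same prompt several times and treat low agreement as a hallucination signal. The token-level Jaccard overlap below is a crude stand-in for the paper's actual scoring methods, and all names are illustrative:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two answers (1.0 = identical token sets)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(samples: list[str]) -> float:
    """Mean pairwise overlap across sampled answers to the same prompt."""
    pairs = [(i, j) for i in range(len(samples)) for j in range(i + 1, len(samples))]
    return sum(jaccard(samples[i], samples[j]) for i, j in pairs) / len(pairs)

# Three sampled answers; the third disagrees, pulling the score down.
samples = [
    "Revenue grew from $1B to $1.2B",
    "Revenue grew from $1B to $1.2B",
    "Revenue fell sharply in 2023",
]
score = consistency_score(samples)
# A low score suggests the model is guessing; route such answers to
# "Cannot determine" or to human review instead of returning them.
```
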

Hands-on Exercises

Exercise 1: Refusal Scenario

Question: Was Company C profitable in 2024?
Known facts:
1) Company C lost $200M in 2023
2) 2024 financial report has not been released yet

Exercise 2: Evidence Numbering

Claim: Product D's sales dropped because of insufficient inventory.
Known facts:
1) Product D inventory decreased 30% this month
2) Sales dropped 15%
3) A competitor launched a promotion

Exercise Scoring Rubric (self-assessment)

Dimension           | Passing Criteria
Accurate conclusion | Consistent with facts
Clear evidence      | Citation numbers included
Reasonable refusal  | Refuses when info is insufficient
Stable format       | Output fields consistent

Takeaways

  1. The core of Truthfulness is verifiability plus a refusal mechanism.
  2. Evidence numbering significantly improves auditability.
  3. Specifying "use given facts only" suppresses hallucination.
  4. Uncertainty should be explicitly output.
  5. Build stable output through templates and self-checks.