Prompt Master

Master the art of conversing with AI

Factuality

Reduce hallucinations and improve response reliability

LLMs tend to generate responses that sound coherent and convincing but are sometimes completely made up. Improving your prompts can help the model produce more accurate, factual answers and reduce the likelihood of inconsistent or fabricated responses.

Some solutions include:

  • Provide ground truth in context (e.g., a relevant article paragraph or Wikipedia entry) to reduce the chance of the model generating fabricated text.
  • Configure the model to generate less "creative" responses by lowering sampling parameters such as temperature or top-p, and instruct it to admit when it doesn't know the answer (e.g., by replying "I don't know").
  • Provide few-shot examples that combine questions and answers, including both known and unknown Q&A pairs.
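The first two ideas above can be combined in code. The sketch below is a minimal illustration: it assembles a few-shot prompt in which unknown answers are written as "?", and pairs it with low-"creativity" request parameters. The parameter names mirror OpenAI-style chat APIs, but the actual client call is omitted; adapt this to whatever SDK you use.

```python
# Minimal sketch: build a few-shot factuality prompt where unknown
# answers are rendered as "?" so the model learns to abstain instead
# of fabricating an answer.

def build_factual_prompt(qa_pairs, question):
    """Format known and unknown Q&A pairs, then append the new question."""
    lines = ["Answer as truthfully as possible. If you do not know, reply with '?'.", ""]
    for q, a in qa_pairs:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a if a is not None else '?'}")
        lines.append("")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

# Lower temperature => less "creative", more deterministic responses.
# These names follow OpenAI-style APIs; check your SDK's documentation.
request_params = {
    "temperature": 0.0,
    "top_p": 1.0,
}

examples = [
    ("What is an atom?", "An atom is a tiny particle that makes up everything."),
    ("Who is Alvan Muntz?", None),  # fabricated name -> model should abstain
    ("How many moons does Mars have?", "Two, Phobos and Deimos."),
]
prompt = build_factual_prompt(examples, "Who is Neto Beto Roberto?")
```

Ending the prompt with a bare `A:` invites the model to complete the answer itself, following the abstention pattern established by the examples.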

Here's a simple example:

Prompt:

Q: What is an atom?
A: An atom is a tiny particle that makes up everything.

Q: Who is Alvan Muntz?
A: ?

Q: What is Kozar-09?
A: ?

Q: How many moons does Mars have?
A: Two, Phobos and Deimos.

Q: Who is Neto Beto Roberto?

Output:

A: ?

I made up "Neto Beto Roberto," so the model got this one right. Try tweaking the question slightly and see if you can still get it to work. Based on everything you've learned so far, there are different ways to improve this further.
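One further improvement, sketched here as a hypothetical helper, is to check programmatically whether the model abstains on fabricated names instead of eyeballing each output. The refusal-phrase list below is illustrative, not exhaustive:

```python
def abstained(answer: str) -> bool:
    """Return True if the model admitted it doesn't know.

    Treats a bare "?" (with or without the "A:" prefix) or a common
    refusal phrase as an abstention.
    """
    normalized = answer.strip().lower()
    if normalized in {"?", "a: ?"}:
        return True
    return any(phrase in normalized for phrase in ("i don't know", "i do not know"))

print(abstained("A: ?"))                     # True: the model abstained
print(abstained("Two, Phobos and Deimos."))  # False: a substantive answer
```

Running a batch of made-up names through a check like this gives a rough measure of how often a prompt variant keeps the model honest.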