APE

Automatic Prompt Engineer: auto-generate and select instructions

Figure: The APE framework (Image source: Zhou et al., 2022)

Zhou et al. (2022) proposed Automatic Prompt Engineer (APE), a framework for automatic instruction generation and selection. The instruction generation problem is framed as natural language synthesis -- using LLMs as black-box optimizers to generate and search candidate solutions.

The first step involves a large language model (as an inference model) that receives output demonstrations to generate instruction candidates for the task. These candidates guide the search process. A target model executes the instructions, and the best instruction is selected based on computed evaluation scores.
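The generate-execute-score loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm` is a hypothetical callable standing in for both the inference model and the target model, and the prompt wording is only loosely based on APE's templates.

```python
def propose_instructions(llm, demos, n=5):
    """Step 1: an inference LLM generates candidate instructions
    from output demonstrations (`llm` is a hypothetical callable)."""
    prompt = (
        "I gave a friend an instruction. Based on the instruction they "
        "produced the following input-output pairs:\n"
        + "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
        + "\nThe instruction was:"
    )
    return [llm(prompt) for _ in range(n)]

def score_instruction(llm, instruction, eval_set):
    """Step 2: a target model executes a candidate instruction;
    the score here is simple accuracy on a held-out evaluation set."""
    correct = sum(
        llm(f"{instruction}\nInput: {x}\nOutput:").strip() == y
        for x, y in eval_set
    )
    return correct / len(eval_set)

def ape(llm, demos, eval_set, n=5):
    """Step 3: select the highest-scoring candidate instruction."""
    candidates = propose_instructions(llm, demos, n)
    return max(candidates, key=lambda c: score_instruction(llm, c, eval_set))
```

In the paper, scoring uses execution accuracy or log-probability of the desired output; accuracy is used above for brevity.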

APE discovered a zero-shot chain-of-thought (CoT) prompt that performs better than the human-designed "Let's think step by step" prompt (Kojima et al., 2022).

The prompt "Let's work this out in a step by step way to be sure we have the right answer." triggers chain-of-thought reasoning and improves performance on the MultiArith and GSM8K benchmarks:

Figure: Zero-shot CoT performance with the APE-discovered prompt (Image source: Zhou et al., 2022)
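Using such a trigger is mechanically simple: the sentence is appended after the question, prompting the model to reason before answering. A minimal sketch (the helper name is ours, not from the paper):

```python
# The APE-discovered trigger vs. the human-designed one (Kojima et al., 2022).
APE_TRIGGER = ("Let's work this out in a step by step way "
               "to be sure we have the right answer.")
HUMAN_TRIGGER = "Let's think step by step."

def zero_shot_cot(question, trigger=APE_TRIGGER):
    """Build a zero-shot CoT prompt by appending a reasoning trigger."""
    return f"Q: {question}\nA: {trigger}"
```

The completion the model produces after this prompt contains its step-by-step reasoning followed by the answer.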

This paper touches on an important topic in prompt engineering: automatically optimizing prompts. While we don't cover it in depth here, the following papers are a good starting point if you're interested:

  • Prompt-OIRL - Uses offline inverse reinforcement learning to generate query-dependent prompts.
  • OPRO - Introduces the idea of using LLMs to optimize prompts: telling LLMs to "take a deep breath" improves math performance.
  • AutoPrompt - Proposes a gradient-guided search method for automatically creating prompts for various tasks.
  • Prefix Tuning - A lightweight fine-tuning alternative that prepends trainable continuous prefixes for NLG tasks.
  • Prompt Tuning - Proposes learning soft prompts through backpropagation.
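To illustrate what "soft prompts" means in the last two entries, here is a shape-level NumPy sketch (all names and sizes are hypothetical): trainable continuous vectors are prepended to the input token embeddings, the pretrained model stays frozen, and only those prefix vectors are updated by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, prompt_len = 100, 16, 8

# Frozen embedding table (stands in for the pretrained model's embeddings).
embedding = rng.normal(size=(vocab_size, d_model))

# Trainable soft prompt: prompt_len continuous vectors,
# not tied to any real vocabulary token.
soft_prompt = rng.normal(size=(prompt_len, d_model)) * 0.01

token_ids = np.array([5, 17, 42])   # the input's "hard" tokens
inputs = embedding[token_ids]       # shape (3, d_model)

# Prepend the soft prompt; during tuning, gradients flow
# only into `soft_prompt`, never into `embedding`.
model_input = np.concatenate([soft_prompt, inputs], axis=0)
print(model_input.shape)
```

This is far cheaper than full fine-tuning: the trainable parameters are just `prompt_len * d_model` values per task.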