
Prompt Engineering for PMs: Document Automation

⏱️ 60 min

For PMs learning prompt engineering, the real value isn't the ability to write very long prompts. It's whether you can quickly turn vague requirements into structured output. Many PMs think they're using AI to boost efficiency, but they're actually just pasting their usual vague verbal requirements into the model, then spending more time fixing bad drafts.

So this page skips the prompt mysticism and focuses on the most common PM scenarios -- documents, SOPs, reviews -- and how to write prompts that are reusable, shareable, and actionable.

PM Prompt Canvas


Bottom Line: PM Prompts Are About Constraining, Not Over-Describing

The most common problem with AI-written documents isn't generation failure. It's:

  • Structure looks complete but has no decision value
  • Tone sounds professional but info is empty
  • Lots of text but none of it can be directly used by the team

So the most important thing in PM prompts isn't "describe more." It's constraining 4 variables first:

  1. Context
  2. Task
  3. Format
  4. Acceptance bar

If these four aren't clear, output quality won't be stable.
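The "constrain four variables first" rule can be made mechanical. A minimal sketch (the function name and labels are illustrative, not a standard): assemble the four variables into one prompt and refuse to build it if any variable is left blank.

```python
# Sketch: a prompt builder that fails fast when any of the four
# variables (context, task, format, acceptance bar) is unconstrained.
def build_pm_prompt(context: str, task: str, fmt: str, acceptance: str) -> str:
    parts = {"Context": context, "Task": task,
             "Output format": fmt, "Acceptance bar": acceptance}
    # An empty or whitespace-only variable means the prompt is underspecified.
    missing = [name for name, value in parts.items() if not value.strip()]
    if missing:
        raise ValueError("unconstrained variables: " + ", ".join(missing))
    return "\n\n".join(name + ":\n" + value for name, value in parts.items())
```

Forcing the error at build time is the point: an unstable prompt never reaches the model.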


Why PMs Crash Hardest When Using AI

| Problem | Root cause |
| --- | --- |
| PRD is long but pointless | Only wrote the topic, not goals and boundaries |
| SOP looks complete but isn't executable | Missing owner, timebox, exception handling |
| Meeting summary drops key info | Didn't define "what must be preserved" |
| Requirement review doc too broad | Didn't specify audience and usage scenario |

AI won't automatically understand "what you really want in your head." It just tries to fill in whatever blanks you leave.


A Prompt Framework Better Suited for PMs

Rather than generic frameworks, PMs work better with this:

| Module | What to write |
| --- | --- |
| Context | Product background, users, business goals |
| Task | What document or analysis to generate |
| Constraints | What must be included, what can't be fabricated |
| Output format | Sections, tables, checklists, word count |
| Review bar | What counts as acceptable output |

The names don't matter. Whether you've filled in the content does.


PRD Generation: Don't Let AI Write the Whole Thing at Once

A more stable approach breaks it into 3 steps:

step 1: clarify problem
step 2: generate PRD skeleton
step 3: fill each section with constraints

If you just say "help me write a PRD," AI tends to produce a formally complete but substantively empty standard template.


A More Reliable PRD Prompt

You are acting as a senior product manager.

Context:
- product: [name]
- target user: [who]
- business goal: [why this matters]

Task:
Create a PRD draft for the following feature:
[feature summary]

Constraints:
- clearly separate must-have scope from nice-to-have scope
- include unacceptable failure cases
- do not invent technical certainty
- mark assumptions as [assumption]

Output format:
1. problem statement
2. user task
3. feature scope
4. success metrics
5. edge cases
6. risks and dependencies

This prompt's value is that it forces the model to write "what's needed for decisions" first, rather than stacking paragraphs of filler.


SOP Prompts: The Most Overlooked Parts

Many SOP prompts only ask for "process steps." But a truly executable SOP also needs:

| Item | Why it can't be skipped |
| --- | --- |
| Owner | Nobody responsible = nobody executes |
| Trigger | What situation activates this SOP |
| Exception path | What to do when things go wrong |
| Handoff | How work transfers between roles |
| Done definition | What counts as process complete |

Without these, AI-written SOPs look like training materials, not actual operating documents.
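These five ingredients are also easy to check for mechanically. A naive sketch (the keyword matching is illustrative; a real review still needs a human read): scan a generated SOP for each required section.

```python
# Sketch: flag which of the five SOP ingredients are missing from a draft.
REQUIRED_SOP_SECTIONS = ["owner", "trigger", "exception", "handoff", "done"]

def missing_sop_sections(sop_text: str) -> list:
    # Case-insensitive substring check; crude, but catches blank omissions.
    text = sop_text.lower()
    return [s for s in REQUIRED_SOP_SECTIONS if s not in text]
```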


Meeting Summary Prompts: First Define "What Can't Be Dropped"

If you just say "help me summarize this meeting," you'll probably get a smoothly-written version that drops key signals.

A more practical approach:

Summarize this meeting transcript.

Must preserve:
- decisions already made
- open questions still unresolved
- action items with owner
- deadlines if explicitly mentioned

Do not:
- turn discussions into confirmed decisions
- invent missing owners
- compress away disagreement

When PMs use AI to summarize meetings, the scariest thing is turning "discussed" into "decided."
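One way to keep that guarantee stable is to build the summary prompt from explicit lists rather than re-typing it each time. A sketch (the list contents mirror the prompt above; the function name is illustrative):

```python
# Sketch: generate the meeting-summary prompt from editable preserve/forbid lists.
MUST_PRESERVE = [
    "decisions already made",
    "open questions still unresolved",
    "action items with owner",
    "deadlines if explicitly mentioned",
]
DO_NOT = [
    "turn discussions into confirmed decisions",
    "invent missing owners",
    "compress away disagreement",
]

def summary_prompt(transcript: str) -> str:
    return ("Summarize this meeting transcript.\n\n"
            "Must preserve:\n" + "\n".join("- " + x for x in MUST_PRESERVE)
            + "\n\nDo not:\n" + "\n".join("- " + x for x in DO_NOT)
            + "\n\nTranscript:\n" + transcript)
```

Teams then edit the two lists, not the prompt, which keeps the "discussed vs. decided" guard from silently disappearing.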


What a Prompt Library Should Actually Capture

What PM teams should actually capture isn't hundreds of scattered prompts, but high-frequency templates.

Recommend prioritizing:

  1. PRD draft prompt
  2. Meeting summary prompt
  3. Competitor analysis prompt
  4. Research clustering prompt
  5. Risk review prompt
  6. Weekly update prompt

Once these templates are fixed, team document quality gets noticeably more consistent.


Prompts Aren't One-Off Inputs -- They're Team Assets

A mature PM team starts doing these things with prompts:

  • Versioning
  • Owner assignment
  • Example inputs
  • Expected outputs
  • Known failure modes

This is basically the same discipline as building internal template libraries. The only difference: before, it was spreadsheet templates; now, it's AI instruction templates.
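The bullet list above maps directly onto a record type. A sketch, assuming the field names mirror that list (this is an illustration, not a prescribed schema):

```python
# Sketch: a prompt captured as a versioned team asset rather than a one-off input.
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    name: str                # e.g. "prd-draft"
    version: str             # versioning, so changes are traceable
    owner: str               # owner assignment
    body: str                # the prompt text itself
    example_inputs: list = field(default_factory=list)
    expected_outputs: list = field(default_factory=list)
    known_failure_modes: list = field(default_factory=list)
```

Stored this way (in a repo, a wiki, anywhere diffable), a prompt change can be reviewed like any other template change.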


Common Crash Points

| Problem | Fix |
| --- | --- |
| AI keeps writing filler | Enforce sections and length limits |
| Over-confident fabrication | Explicitly require marking assumptions |
| Inconsistent output style | Fix voice and format |
| Everyone on the team writes their own | Build a prompt library with review process |

Practice

Take your most-used PM prompt. Check these 4 things:

  1. Is context clearly stated
  2. Is output format specified
  3. Does it say "what not to fabricate"
  4. Is there a definition of what counts as acceptable

If 2+ of these are missing, the prompt isn't stable enough yet.
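The four-point check can be roughed out in code. A sketch only: the keyword heuristics below are assumptions for illustration, and a keyword hit doesn't prove the constraint is actually well written.

```python
# Sketch: naive keyword check for the four stability ingredients in a prompt.
CHECKS = {
    "context": ["context", "background", "product:"],
    "output format": ["format", "sections", "table", "checklist"],
    "fabrication guard": ["do not invent", "assumption", "fabricate"],
    "acceptance bar": ["acceptable", "review bar", "success"],
}

def prompt_gaps(prompt: str) -> list:
    # Return the names of ingredients with no matching keyword at all.
    text = prompt.lower()
    return [name for name, keywords in CHECKS.items()
            if not any(k in text for k in keywords)]
```

Per the rule above: two or more gaps means the prompt isn't stable enough yet.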

📚 Related Resources