Understanding and Validating AI Responses
Code from AI doesn't equal "production-ready." Learning to read, question, and validate is what turns AI into a reliable partner instead of a liability.
Read Structure Before Details
- Look at function signatures, dependencies, and edge case handling first. Decide whether it actually fits your project.
- Flag anything you're unsure about (types, interfaces, error handling) and prepare follow-up questions.
Make AI Self-Check
Review the code you just generated. List 3 scenarios where it might fail and suggest fixes for each.
If there are performance concerns or unhandled exceptions, call those out too.
Handing the "QA" step back to AI is a quick way to surface things it missed.
Have It Write Tests
Write 4 unit tests for the function above, covering: empty input, duplicate input, invalid input, and the happy path.
Use the test framework already in the project (Jest/Vitest).
Getting AI to produce tests helps you verify whether its understanding matches yours.
When the Answer Is Vague
- Ask for "a line-by-line explanation, annotating what each key variable means."
- If context is lacking, paste in file snippets or interface definitions and have it revise the code.
- Have AI trace through step by step (input -> expected output -> actual output) to quickly spot where things diverge.
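A trace like that can be as small as a few lines. In this hypothetical example (the variable names and the dedup-only bug are invented for illustration), printing the three values side by side makes the missing step obvious:

```typescript
// Hypothetical trace of a "deduplicate and sort" request.
const input = [3, 1, 3, 2];
const expected = [1, 2, 3];          // what you asked for: deduped AND sorted
const actual = [...new Set(input)];  // what the code actually does: deduped only

// actual comes out as [3, 1, 2]: the sort step is missing.
console.log({ input, expected, actual });
```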
Practice
Take the "deduplicate and sort" function from the previous chapter and have AI write a test suite and explain the time complexity. Then ask it to evaluate whether there's a simpler implementation and to explain the tradeoffs.
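For the "simpler implementation" part of the exercise, one possible comparison (both implementations are sketches, not the chapter's reference solution) is a Set-based version versus a sort-then-filter version:

```typescript
// A: Set for dedup, then sort. O(n log n) overall, dominated by the sort;
// the Set costs O(n) extra space.
function dedupSortA(xs: number[]): number[] {
  return [...new Set(xs)].sort((a, b) => a - b);
}

// B: sort a copy first, then drop adjacent duplicates. Also O(n log n);
// no Set allocation, but it relies on duplicates being adjacent after sorting.
function dedupSortB(xs: number[]): number[] {
  return [...xs]
    .sort((a, b) => a - b)
    .filter((x, i, arr) => i === 0 || x !== arr[i - 1]);
}

console.log(dedupSortA([3, 1, 3, 2])); // [ 1, 2, 3 ]
console.log(dedupSortB([3, 1, 3, 2])); // [ 1, 2, 3 ]
```

The asymptotic cost is the same either way, so the tradeoff is mostly readability versus allocation, which is exactly the kind of judgment the exercise asks the AI to articulate.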
❓ FAQ
The most frequently searched questions about this chapter's topic.
Where should I start reading AI-generated code?
Structure first, then details: (1) the function signature: do parameters, return values, and types follow project conventions? (2) dependencies: does it pull in new packages or import from the wrong paths? (3) edge handling: how are empty input, null, and error branches treated? Only once these three check out should you read the implementation. Starting with the implementation means getting captured by local details right away and missing the bigger question of whether the overall direction is wrong.
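As a sketch, the three checks applied to a hypothetical candidate (the function and review comments are invented for illustration) might read like this:

```typescript
// (1) Signature: number[] in, number[] out. Does this match how callers pass data?
// (2) Dependencies: only built-ins (Set, Array.prototype.sort), so there are
//     no new packages and no import paths to verify.
// (3) Edge handling: an empty array falls through safely and returns [].
function dedupSort(items: number[]): number[] {
  return [...new Set(items)].sort((a, b) => a - b);
}

console.log(dedupSort([])); // edge-case check: prints []
```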
How do I get AI to check its own code?
Send a direct instruction: "Review the code you just generated. List 3 scenarios where it might fail, with a fix for each; if there are performance risks or unhandled exceptions, call those out too." AI reflects on its own output better than we tend to assume and can surface 60-70% of the obvious omissions. The step costs almost nothing and is far more effective than scanning the code line by line yourself: you hand the "QA" step back to the AI one more time.
Why is having AI write tests a good way to verify shared understanding?
Because tests are an executable version of how the AI understood your requirements. Example: ask it to write 4 test cases for the dedup function (empty input, duplicate input, invalid input, and the happy path) using the Jest/Vitest already in the project. If the tests the AI writes don't match what you had in mind, your prompt didn't convey the requirement clearly, and this surfaces the mismatch earlier than reading the implementation would.
How should I follow up when AI's answer is vague or evasive?
Three follow-up patterns that work: (1) "Explain the code line by line, annotating what each key variable means", which forces the AI to account for every line; (2) "Output an execution trace: input → expected output → actual output", which replaces abstract description with concrete data; (3) paste in file snippets or interface definitions and have it revise the code, since vagueness is often a sign of missing context. Try the three in order; it works better than repeatedly asking "are you sure?".
Should I believe AI when it says "this code works"?
No. The AI has no runtime environment; "it works" is a pattern-matching judgment, not a verification result. The rule: every piece of nontrivial code must actually be run at least once. Run the tests, run the linter, or hand-test the critical path (at least one of the three). The gap between "looks right" and "is right" is wider in the AI era than before, because AI-generated code rarely fails at the syntax level; the bugs hide in edge cases and at integration points.