Vibe Coding in Practice
Vibe Coding isn't "I describe a feature, AI ships it, I merge." In production it's a collaboration mode where natural language is the primary input but engineering discipline is preserved — AI drafts, the engineer constrains, validates, and decides on refactors.
If you've only used vibe coding in toy demos, run through the Vibe Coding SOP first. This page assumes you know the basics and focuses on what stops you from getting burned in production.
The decision matrix: six common scenarios
| Scenario | Vibe or hand-write | Why |
|---|---|---|
| Add a CRUD endpoint when the pattern already exists | Vibe | AI copies existing patterns accurately |
| Refactor a core algorithm / perf optimization | Hand-write | AI doesn't know your perf budget or trade-offs |
| One-off script / data migration | Vibe | Throwaway code, never enters the codebase |
| Fix a production bug | Hybrid | AI surfaces suspects; engineer makes the call |
| New styled-component | Vibe | Visual specs are clear; AI output is reliable |
| Auth / permissions / billing logic | Hand-write | Mistakes blow up immediately; you need to own every line |
The decision rule: what's the blast radius if this code is wrong? Small radius → vibe; large radius → hand-write or at least line-by-line review.
Quality gates for vibe output
Merging AI code straight is how incidents start. Minimum viable gate stack — five checks:
- Type check must pass — `tsc --noEmit` is the cheapest sanity check; AI fabricates imports constantly
- New code must have tests — bake "implement and add unit tests in the same PR" into the prompt; reject PRs without tests
- PR diff size cap — hard limit of `+400/-200` per PR; force-split anything larger. No one can review 800 lines of AI code in one sitting
- Auto lint + format — pre-commit hook with husky + lint-staged; AI's indentation and quote style drift constantly
- E2E for critical paths — auth flow, payments, data migrations: at least one happy path under e2e
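The lint + format gate is usually just a pre-commit hook. A minimal sketch of a lint-staged config, assuming ESLint and Prettier are already installed and husky runs `lint-staged` on pre-commit (the globs are illustrative, not from this project):

```json
{
  "*.{ts,tsx}": ["eslint --fix", "prettier --write"],
  "*.{json,md,yml}": ["prettier --write"]
}
```

Saved as `.lintstagedrc.json`, this only touches staged files, so the formatting pass stays fast even on a large repo.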
```yaml
# .github/workflows/vibe-gate.yml (excerpt)
- name: Type check
  run: bun run type-check

- name: Test coverage
  run: bun test --coverage
  # New files in the PR must hit ≥ 80% coverage

- name: PR size guard
  uses: CodelyTV/pr-size-labeler@v1
  with:
    xl_max_size: '600' # block anything over 600 lines
```
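The size cap can also be enforced locally, before the PR even opens. A minimal sketch (the function name is mine, not a real tool) that parses `git diff --numstat` output and applies the `+400/-200` budget:

```typescript
// Enforce a +400/-200 diff budget from `git diff --numstat` output.
// Each numstat line is "<added>\t<deleted>\t<path>" ("-" for binary files).
function checkDiffBudget(
  numstat: string,
  maxAdded = 400,
  maxDeleted = 200,
): { added: number; deleted: number; ok: boolean } {
  let added = 0;
  let deleted = 0;
  for (const line of numstat.trim().split("\n")) {
    if (!line) continue;
    const [a, d] = line.split("\t");
    if (a !== "-") added += Number(a); // "-" marks binary files; skip them
    if (d !== "-") deleted += Number(d);
  }
  return { added, deleted, ok: added <= maxAdded && deleted <= maxDeleted };
}

// Example: two files, over budget on additions
const diff = "350\t10\tsrc/api.ts\n120\t5\tsrc/ui.tsx";
console.log(checkDiffBudget(diff)); // added: 470 → ok: false
```

Wired into a pre-push hook, this fails fast instead of waiting for CI to label the PR.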
Code review: vibe mode
Reviewing human code, you check "is it correct?". Reviewing AI code, you check three different things:
- Hallucination — does the function / API it references actually exist? AI invents `lodash.deepMerge`, `React.useDeferredQuery`, etc.
- Pattern drift — did it bypass an existing utility? E.g. the project has a `useApi()` hook, but AI writes a raw `fetch(...)`
- Over-engineering — unnecessary abstractions, premature optimization, unused interfaces. AI optimizes for "complete", not "minimal"
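Pattern drift is mechanically checkable. A rough sketch of the idea as a scan for raw `fetch(` calls outside the sanctioned wrapper module; the `useApi` convention and the rule itself are illustrative, not a real lint:

```typescript
// Flag raw fetch( calls in files that should go through the useApi() wrapper.
// Returns 1-based line numbers of offending calls.
function findRawFetch(source: string, isWrapperModule = false): number[] {
  if (isWrapperModule) return []; // the wrapper itself may call fetch
  const hits: number[] = [];
  source.split("\n").forEach((line, i) => {
    // Strip line comments, then match bare fetch( not preceded by `.` or a word char
    const code = line.replace(/\/\/.*$/, "");
    if (/(?<![.\w])fetch\s*\(/.test(code)) hits.push(i + 1);
  });
  return hits;
}

const file = [
  'import { useApi } from "./useApi";',
  "const data = await fetch('/orders'); // drift: bypasses useApi()",
  "const ok = await useApi('/orders');",
].join("\n");
console.log(findRawFetch(file)); // → [2]
```

A real setup would express this as an ESLint `no-restricted-globals` / `no-restricted-syntax` rule rather than a regex, but the review question is the same: why didn't it use the existing utility?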
In practice: reviewers see "Generated with Claude Code" in PR description → switch to vibe-mode review → spend ~30% more time than usual.
Real failures from JR Academy
Incident 1: AI-generated migration dropped the wrong field
Asked Claude Code to "remove unused legacy_user_id field from User schema." AI wrote a mongoose migration that unset the field. Problem: a legacy reconciliation system in production was still reading that field across 200K users. Reconciliation broke after migration ran.
Lesson: all destructive DB operations must be hand-written. AI is allowed to give pseudocode at most.
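The hand-written version of such a migration typically guards before it drops. A sketch of the guard logic only, with an in-memory stand-in for the collection (the field name is from the incident above; everything else is illustrative):

```typescript
// Guard: refuse to drop a field while any document still carries it,
// unless the operator explicitly confirms all readers have migrated off.
type UserDoc = Record<string, unknown>;

function planFieldDrop(
  docs: UserDoc[],
  field: string,
  readersConfirmedGone: boolean,
): { proceed: boolean; stillPresent: number } {
  const stillPresent = docs.filter((d) => field in d).length;
  // Proceed only when no doc has the field, or a human has signed off.
  return { proceed: stillPresent === 0 || readersConfirmedGone, stillPresent };
}

const users = [{ legacy_user_id: "a1" }, { name: "b" }];
console.log(planFieldDrop(users, "legacy_user_id", false));
// → { proceed: false, stillPresent: 1 }
```

The point of the explicit `readersConfirmedGone` flag is that a human has to set it, which is exactly the sign-off the AI-generated migration skipped.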
Incident 2: AI "fixed" a race condition with the wrong primitive
PR titled "fix race condition in order creation by adding mutex." The reviewer approved. After deploy, throughput dropped 80%: the AI had added an in-process mutex that serialized every order creation within a pod, but the service runs 4 pods, so the cross-pod race it was supposed to fix was still there.
Lesson: concurrency and distributed-system fixes need line-by-line review. Never trust the PR description.
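The failure is easy to reproduce conceptually: an in-process mutex only serializes callers inside one pod. A toy sketch of the lock that would actually have helped, with a `Map` standing in for shared state like Redis `SET NX` (all names are illustrative):

```typescript
// A lock table shared by all pods prevents the race; a per-process
// mutex only covers callers inside the same pod.
const sharedLocks = new Map<string, string>(); // stand-in for Redis SET NX

function acquire(key: string, owner: string): boolean {
  if (sharedLocks.has(key)) return false; // someone else holds it
  sharedLocks.set(key, owner);
  return true;
}

function release(key: string, owner: string): void {
  if (sharedLocks.get(key) === owner) sharedLocks.delete(key);
}

// Two "pods" race on the same order id: only one wins with a shared lock.
console.log(acquire("order:42", "pod-1")); // true
console.log(acquire("order:42", "pod-2")); // false — blocked across pods
release("order:42", "pod-1");
```

In production this would be a real distributed lock with a TTL (or, better, a unique constraint / idempotency key in the database); the sketch only shows why the lock must live outside the process.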
A sustainable rhythm
- First 30 minutes of the day, hand-write — keep your muscle memory for the codebase; otherwise in three months you won't be able to read your own project
- Whiteboard before vibe on hard tasks — figure out the architecture yourself, then have AI fill in the details. Never let AI design the architecture
- Re-read one AI PR per week — go back to a PR you merged and understand every line. Was anything over-engineered? Could it be simpler?
- Keep the ability to hand-write the critical modules — auth, billing, core algorithms are the code that keeps the company alive, and every line must be one you can defend
Next
- AI Rules Configuration — bake quality gates into project rules
- AI Coding Workflow — how the three tools split responsibility
- Claude Code Examples — practical prompt patterns
❓ FAQ
The most commonly searched questions about this topic
Is Vibe Coding "press a button and AI ships the whole product"?
No. Vibe Coding is a collaboration mode: you state the goal, constraints, and acceptance criteria in natural language; AI generates, edits, explains, and iterates on the code; setting direction and signing off remain your job. AI solves "mechanically translating requirements into code"; it cannot judge priorities, make architecture trade-offs, or take responsibility for production risk.
Which tasks suit Vibe Coding, and where should you be careful?
Good fits are page styling / component completion, CRUD + forms + types, error triage, and prototypes for new projects: feedback is fast, output is easy to validate, and the rules are clear. Be careful with payment / auth / permission systems and core architecture upgrades, where mistakes are expensive and a human must own the design. And never hand anything over fully: compliance- and security-sensitive logic must be human-reviewed.
What does an effective Vibe Coding prompt contain, at minimum?
Four things, none optional: the goal (what to build), the context (which project / file / business scenario), the constraints (tech stack / style / boundaries that must not be touched), and the acceptance criteria (how to judge the result). Miss any one and AI can only guess, and the output will "look plausible but miss the point". Spell out all four and an ordinary requirement usually works on the first attempt.
What pitfalls do Vibe Coding beginners hit most often?
Five frequent ones: (1) requirements too vague → AI produces output that "looks plausible but misses the point"; fix this by spelling out input / output / boundaries. (2) Changing too many files at once → failures can't be localized; split the work into 1-2 small tasks. (3) Not pasting the error message → AI can only guess; paste the log, a screenshot, and the call chain. (4) Reading the code without running it → it looks right but doesn't run; validate every round. (5) Treating AI as the final owner → no one is accountable when it breaks; sign-off stays with a human.
Is dumping an error message on AI "lazy"?
No, it's a standard Vibe Coding move. An error message isn't proof of failure; it's the key to the fix. Paste the code, the red terminal output, and the call chain in one go, and AI can run a first round of root-cause analysis in seconds, far faster than combing through 10 Stack Overflow threads. The key is pasting the complete context, not just "it errored, please take a look".