
AI Coding Workflow

Cursor, Claude Code, and Kiro aren't substitutes — they cover different task granularities. Sort out when to use which before mixing them; otherwise you'll just thrash between tools without shipping faster.

This page assumes you've installed at least one. If not, run through Install Cursor and the Claude Code Full Guide first, then come back for the team coordination side.

What each tool actually does best

| Tool | Sweet spot | Context model | Wins at |
| --- | --- | --- | --- |
| Cursor | Inside the IDE, single-file or small multi-file edits | Open tabs by default + explicit @-references | Inline edit, tab autocomplete, conversational refactors |
| Claude Code | Terminal, cross-repo long tasks | CLAUDE.md persistence + session context | Multi-file coordination, running tests, debugging long chains, drafting PRs |
| Kiro | Spec-first, design-driven dev | spec → tasks → code, three-stage flow | Breaking down a PRD into tasks, forcing AI through ordered steps |

In practice these don't conflict. Cursor wins when you're typing and need flow; Claude Code wins for "fix this bug across three services"; Kiro wins when you start a new feature ticket and want to nail the spec before any code is written.

Pick by task granularity

  • One-line / single-function change → Cursor inline edit (Cmd+K). Faster than opening a chat.
  • Logic inside one file → Cursor chat panel, reference the file with @.
  • Spans 5+ files / needs test verification → Claude Code. Write a prompt that ends with "run bun test and verify."
  • New feature from zero → Kiro writes the spec → Claude Code implements → Cursor polishes.
  • Production incident debugging → Claude Code, because it can chain kubectl logs, psql, and grep through history in one session.
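The routing rules above can be sketched as a tiny helper. Everything in it (the function name, the parameters, the thresholds) is illustrative, not a real API:

```python
def pick_tool(files_touched: int, needs_tests: bool, new_feature: bool, incident: bool) -> str:
    """Route a task to a tool using the granularity rules above.

    Signature and thresholds are illustrative, not a real API.
    """
    if incident:
        # Incidents favor Claude Code: it can chain kubectl, psql, and grep in one session.
        return "Claude Code"
    if new_feature:
        # Greenfield work starts in Kiro: spec -> tasks -> code.
        return "Kiro"
    if files_touched >= 5 or needs_tests:
        # Multi-file changes or test-verified work suit Claude Code.
        return "Claude Code"
    # Single-file or single-function edits stay in Cursor (Cmd+K or chat).
    return "Cursor"

print(pick_tool(files_touched=1, needs_tests=False, new_feature=False, incident=False))  # Cursor
print(pick_tool(files_touched=7, needs_tests=True, new_feature=False, incident=False))   # Claude Code
```

The point is not to automate the choice but to make the team's routing rules explicit enough that they could be written down as code.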

Stitching them into one day

```bash
# Morning: Kiro turns today's ticket into a spec
kiro spec create "add rate limit to /api/upload"
# → produces spec.md + tasks.md

# Late morning: Claude Code implements against the spec
claude
> implement tasks.md step by step, run tests after each step

# Afternoon: Cursor for UI polish
# Cmd+K "convert this toast to styled-components"

# End of day: Claude Code drafts the PR + changelog
> generate PR title and body from git diff main...HEAD
```

Rule of thumb: don't switch tools mid-task. Every switch means re-feeding context — pure token waste.

Measuring lift (not "AI feels fast")

Your tech lead doesn't care that AI feels productive. They want numbers. Track these four:

  1. PR cycle time — hours from ticket open to merge, compared against the 30-day baseline before AI rollout
  2. First-review pass rate — fraction of PRs that ship without revisions
  3. Bug introduction rate — new bugs per week / PRs merged per week
  4. Token cost — Cursor Pro $20/mo, Claude Code billed via API, Kiro tiered. Track per-engineer monthly spend.
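A minimal sketch of how the first three metrics fall out of exported PR records. The record fields and timestamps are made up for illustration; real data would come from your Git host's API:

```python
from datetime import datetime

# Hypothetical PR export: opened/merged timestamps, whether the PR passed
# first review, and bugs later attributed to it. Illustrative data only.
prs = [
    {"opened": "2025-01-06T09:00", "merged": "2025-01-07T15:00", "first_pass": True,  "bugs": 0},
    {"opened": "2025-01-06T11:00", "merged": "2025-01-09T10:00", "first_pass": False, "bugs": 1},
    {"opened": "2025-01-07T14:00", "merged": "2025-01-08T08:00", "first_pass": True,  "bugs": 0},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

cycle_times = [hours_between(p["opened"], p["merged"]) for p in prs]
avg_cycle_hours = sum(cycle_times) / len(prs)             # metric 1
first_pass_rate = sum(p["first_pass"] for p in prs) / len(prs)  # metric 2
bug_rate = sum(p["bugs"] for p in prs) / len(prs)         # metric 3, per merged PR

print(f"avg cycle: {avg_cycle_hours:.1f} h")
print(f"first-review pass rate: {first_pass_rate:.0%}")
print(f"bug rate: {bug_rate:.2f} bugs/PR")
```

Run the same computation over the 30-day window before and after rollout; the deltas, not the absolute numbers, are what you report.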

Real numbers from JR Academy's internal rollout: PR cycle dropped from 2.3 days to 1.1 days after Claude Code rollout, but first-review pass rate fell in month one (engineers shipped more, reviewers fell behind). It recovered in month two. That lag is real — bake it into the rollout plan.

Common rollout mistakes

  • No shared CLAUDE.md — each engineer's AI produces a different code style; review cost balloons. Commit a project-level CLAUDE.md into the repo.
  • No one reads AI-written PRs — PRs balloon, reviewers rubber-stamp, three months later nobody understands the code. Cap PR size (e.g. +400/-200 lines).
  • Token spend out of control — Claude Code defaults to Sonnet 4.5; a long session can burn $30+ a day. Set budget alerts; reserve Opus for hard problems.
  • AI code without tests — if AI writes the implementation, AI must write the test. Otherwise the next AI edit will silently break behavior.
  • Skills/commands stay personal — personal productivity ≠ team productivity. Promote high-value skills to repo-level .claude/skills/.
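To anchor the first bullet, here is a minimal project-level CLAUDE.md sketch. The section names and rules are examples, not a required schema; the `bun test` command and the +400/-200 cap echo the conventions used elsewhere on this page:

```markdown
# Project conventions

## Code style
- TypeScript strict mode; no `any` without a comment explaining why
- Errors bubble up as typed results, never a silent catch

## Testing
- Every implementation change ships with a test in the same PR
- Run `bun test` before proposing a diff

## PR rules
- Keep diffs under +400/-200 lines; split larger work
- PR body lists which tasks.md items the diff covers
```

Committing this file to the repo root means every engineer's Claude Code session starts from the same rules, which is what keeps review cost flat as AI output grows.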

❓ FAQ

The most frequently searched questions on this chapter's topic, with answers below.

What does an AI coding workflow actually do?

It uses AI-native tools like Cursor, Claude Code, and Kiro to redefine your development rhythm. What it replaces is not "writing code" itself but the mechanical work: setting up scaffolding, writing repetitive CRUD, switching between docs and the IDE, hand-translating small requirements into code, and digging through search engines for error fixes. What it does not replace: setting goals, judging priority, making architecture trade-offs, and owning production.

Which should you learn first: Cursor, Claude Code, or GitHub Copilot?

The three tools have different positions: Cursor suits multi-file changes and context collaboration inside the IDE; Claude Code suits task decomposition, terminal workflows, and codebase-level changes; GitHub Copilot suits completion, explanation, and instant in-IDE assistance. Don't learn three or four at once; get fluent with one before expanding. Starting from zero, Cursor has the lowest ramp-up cost; engineers already at home in the terminal get more from going straight to Claude Code.

Why make the AI produce a plan before it acts?

Letting AI edit code directly often ends in "changed, but in the wrong direction." If it outputs an implementation plan first, you can catch direction problems before it types anything, and correcting course costs almost nothing; discovering the drift after the change means rolling back a multi-file diff with cross-file effects, an order of magnitude more expensive. Plan-then-act is the cheapest quality gate in an AI coding workflow.

Should you let AI change a dozen files at once?

No. The steadier rhythm is small steps, small validations: change one thing → run it → note the problems → continue with the error and results in hand. When a dozen files change at once and something breaks, you can't tell which change introduced it; split the work into one or two small tasks and run after each round, so errors trace back immediately. That is the core difference between an AI coding workflow and "writing scripts and gluing them together."

What actually separates the fast teams in AI coding?

Not who can say "write this for me," but who accumulates proven prompt templates, acceptance checklists, and debugging playbooks. An engineer who has built a form 50 times can write the 51st prompt in one line covering input / output / boundaries / what must not change; a beginner describes everything from scratch each time and gets a different result each time. Accumulated practice is the compound interest of the AI era.