Every ambiguity you find before coding saves 10x the time it would cost to fix mid-implementation.
In Lesson 6, you learned to write effective specifications. But even well-structured specs contain hidden assumptions. You know what "conflict resolution" means in your head. Does Claude? You imagine the error handling approach. Did you write it down?
This lesson teaches the refinement phase: a structured interview where Claude surfaces every design decision that would otherwise become a mid-implementation pivot. The result is a specification precise enough that implementation becomes execution of a well-understood plan.
The refinement phase uses a specific prompt pattern that triggers Claude's ask_user_question tool:
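A sketch of such a prompt (the exact wording is illustrative, not a fixed formula):

```text
I've written a specification in report-spec.md. Before implementing
anything, interview me about it. Ask about every ambiguity, unstated
assumption, or open design decision -- one question at a time. Don't
write any code until I confirm the spec is complete.
```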
When you give this instruction, Claude shifts from implementation mode to investigation mode. Instead of generating code, Claude reads your specification critically and asks questions about anything unclear, unstated, or potentially problematic.
What makes this different from conversation:
In normal prompting, you ask questions and Claude answers. In the interview pattern, the roles reverse. Claude asks questions and you answer. This shift matters because Claude can identify gaps you didn't know existed—assumptions so natural to you that you didn't think to document them.
Consider what happens when an ambiguity survives to implementation:
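An illustrative cascade (the scenario and timings are invented for the example):

```text
Minute 0:  Spec says "cache results" (TTL unspecified).
Hour 2:    Claude picks a 24-hour TTL and builds invalidation around it.
Day 1:     You notice stale data; you wanted 5-minute freshness.
Day 1-2:   Rework the cache layer, its invalidation, and the tests
           built on top of it.
```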
The 10x multiplier is conservative. A decision that takes 10 minutes to clarify in an interview can cost days when discovered after deployment.
Real example: A developer's specification said "handle conflicts optimistically." The developer meant "last write wins." Claude interpreted this as "prompt user to resolve." Implementation proceeded with user prompts. The developer discovered this three days later during testing. Refactoring took an entire afternoon—time that a single interview question would have prevented.
Claude's interview questions cluster around five categories. Understanding these helps you anticipate questions and write better specs proactively.
Category 1: Data and state. How does existing information transition to the new system?
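Questions in this category often sound like (illustrative examples):

```text
- Do existing records migrate automatically, or is there an import step?
- What happens to records that fail validation under the new schema?
- Does the old system stay read-only during migration, or do both run
  in parallel?
```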
Why this matters: Data decisions are often unstated because developers assume "obviously we keep our data." But the HOW requires explicit decisions.
Category 2: Conflict resolution. What happens when systems or users disagree?
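Typical questions here (examples, not an exhaustive list):

```text
- If two users edit the same record, who wins: last write, first write,
  or a merge?
- Should conflicts be resolved silently or surfaced to the user?
- Is there an audit trail for overwritten changes?
```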
Why this matters: Every multi-user or distributed system faces conflicts. The absence of a stated strategy means Claude chooses one—possibly not the one you wanted.
Category 3: Approach selection. Which option among valid alternatives should we build?
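These questions tend to look like (illustrative):

```text
- The research notes list both polling and webhooks. Which fits your
  infrastructure?
- SQLite or Postgres? The spec implies either could work.
- Sessions or tokens for auth? The spec only says "users log in."
```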
Why this matters: Research often surfaces multiple valid approaches. Without explicit selection, Claude picks based on its training distribution—not your project's needs.
Category 4: Error handling. What happens when things go wrong?
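Representative questions (examples only):

```text
- What should the user see when the external API times out?
- Retry automatically, fail fast, or queue for later?
- Where do errors get logged, and who is alerted?
```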
Why this matters: Happy path specifications are common. Error path specifications are rare. But production systems spend significant time in error states.
Category 5: Boundaries and edge cases. Where does the system's scope end?
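Boundary questions often sound like (illustrative):

```text
- What is the maximum input size we must handle?
- Is an empty result an error or a valid state?
- Does "user" include admin accounts, or are those out of scope?
```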
Why this matters: Specifications often describe the normal case. Edge cases at boundaries are where implementations diverge from intent.
Here's what a refinement interview looks like with our report specification:
Your specification includes:
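Suppose the relevant lines of report-spec.md read roughly like this (an illustrative reconstruction):

```markdown
- Audience: CTOs evaluating AI coding tools
- Include a comparison of the major tools across key dimensions
- Tone: practical, decision-oriented
```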
Claude's interview questions:
Question 1: For the tool comparison—should we include pricing? Enterprise pricing is often confidential or negotiated. Do we use list prices, price ranges, or skip pricing entirely?
Your answer: "Include list prices where public, note 'contact for enterprise pricing' otherwise. CTOs need ballpark numbers to budget."
Question 2: The spec mentions 'key dimensions.' Which dimensions matter most to CTOs? Technical capabilities? Security compliance? Learning curve? Integration difficulty?
Your answer: "Good catch. Prioritize: (1) security/compliance, (2) integration with existing tools, (3) learning curve, (4) technical capabilities. CTOs care about adoption risk before features."
Question 3: How should we handle rapidly changing information? Tool capabilities change monthly. Should we include version numbers? Publish dates? Caveats about currency?
Your answer: "Include 'as of [date]' header. Note that capabilities change rapidly. Link to official docs for current features."
Specification updates based on answers:
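Based on those three answers, the spec gains sections along these lines (wording illustrative):

```markdown
## Pricing
List prices where public; otherwise note "contact for enterprise
pricing."

## Comparison dimensions (in priority order)
1. Security / compliance
2. Integration with existing tools
3. Learning curve
4. Technical capabilities

## Currency
Header: "As of [date]". Note that capabilities change rapidly; link to
official docs for current features.
```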
Three questions. Three decisions documented. Three mid-writing pivots prevented.
Use this checklist to ensure your specification addresses common ambiguity areas before implementation:
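A checklist covering the five categories might look like this (a sketch to adapt, not a canon):

```markdown
Data and state
- [ ] Migration path for existing data specified?
- [ ] Behavior for invalid or legacy records defined?

Conflict resolution
- [ ] Strategy named (last write wins, merge, prompt the user)?
- [ ] Conflicts silent or surfaced?

Approach selection
- [ ] One option chosen wherever research found several?

Error handling
- [ ] User-visible behavior defined for each failure mode?
- [ ] Retry, timeout, and logging policy stated?

Boundaries and edge cases
- [ ] Input limits and empty states defined?
- [ ] Out-of-scope cases listed explicitly?
```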
You don't need to fill every checkbox for every specification. A small utility script doesn't need crash recovery documentation. But running through the checklist surfaces which categories apply to your project.
The interview phase ends when the questions stop surfacing new design decisions: your answers begin repeating choices the spec already documents. If Claude's questions feel like implementation details rather than design decisions, you're ready for Phase 4.
Warning sign: If you're annoyed by the questions, that's often a signal that you haven't thought through those decisions. The annoyance comes from realizing you need to think harder, not from Claude being pedantic.
Objective: Practice the interview pattern on a real specification.
Take a specification you've written (or use the one from Lesson 5's lab) and run it through the interview pattern: ask Claude to interview you about it, answer every question, and write each decision back into the spec.
The goal isn't to fill checkboxes—it's to surface decisions you hadn't consciously made.
Running Example Continued: We have report-spec.md. Now we refine it through interview before writing.
Prompt 1: Interview the Report Spec
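The prompt itself might read along these lines (wording is a sketch, not a fixed formula):

```text
Read report-spec.md and interview me about it before we write anything.
Focus on the audience: what does "CTOs evaluating AI tools" leave
ambiguous? Ask one question at a time and wait for my answer.
```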
What you're learning: The interview surfaces implicit assumptions. "CTOs evaluating AI tools" is vague—a startup CTO has different concerns than an enterprise CTO. Each answer sharpens the spec and prevents writing for the wrong audience.
Prompt 2: Update Spec with Decisions
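For example (illustrative wording):

```text
Update report-spec.md with the decisions from this interview: the
target reader is a CTO leading a first AI tool adoption,
security/compliance gets its own required section, and basics should
be explained rather than assumed. Show me the changes before saving.
```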
What you're learning: Interview answers must persist. "First AI tool adoption" means we explain basics enterprise CTOs already know. "Security/compliance concerns" becomes a required section. The spec now encodes these CTO-specific needs.
Prompt 3: Identify Remaining Gaps
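A closing prompt might look like:

```text
Review the updated report-spec.md one more time. List any remaining
ambiguities, and for each one tell me whether it must be decided now
or can safely stay flexible during writing.
```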
What you're learning: Perfect specs don't exist. The goal is a spec good enough that writing doesn't require restructuring. Whether to use bullet points or prose can stay flexible; whether to include a security section cannot.