SDD is a power tool, not a universal solution. The wisdom is knowing when NOT to use it.
In Lesson 8, you learned the full implementation workflow: task delegation, context isolation, atomic commits. You now have the complete SDD toolkit. But a complete toolkit is only valuable if you choose the right tool; you don't reach for a sledgehammer to drive every nail.
This lesson develops your judgment. By the end, you'll recognize task characteristics that signal "use SDD" versus "skip the ceremony." You'll learn the lightweight spec pattern for borderline cases. And you'll understand the common critiques of SDD so you can navigate them thoughtfully.
The four-phase workflow adds overhead: research time, specification writing, interview questions, task orchestration. That overhead pays dividends when the alternative is worse.
The pattern: These scenarios share a characteristic—complexity that exceeds working memory. You can't hold a 15-file refactor in your head. You can't remember all the assumptions you made while exploring an unfamiliar library. The specification becomes external memory that preserves decisions across sessions and collaborators.
Without a spec, large refactors drift. You start renaming a module, realize you need to update imports, discover those imports have side effects, chase those side effects into another module, and suddenly you're four directories deep with no clear picture of what you've changed or why.
With a spec, you define the end state upfront. Each task references that spec. When you're deep in the fourth module, the spec reminds you: "We're renaming for consistency, not refactoring for performance." That anchor prevents well-intentioned tangents from derailing the project.
When requirements are fuzzy, SDD's research phase transforms vague goals into concrete specifications. The research isn't just about finding code patterns—it's about discovering what questions you didn't know to ask.
A developer approaching "add real-time collaboration" without SDD might jump straight to WebSocket code. With SDD, the research phase reveals: What conflict resolution strategy? What persistence model? What happens offline? By the time you write the spec, you understand the problem well enough to solve it.
Learning a new library through vibe coding means sequential trial and error. You try something, it fails, you search for examples, try again. Each failure teaches one lesson.
SDD's parallel research pattern changes this. You spawn subagents to investigate different aspects simultaneously: one explores authentication patterns, another investigates pagination, a third examines error handling. Research that takes hours sequentially completes in minutes—and you have a comprehensive spec before writing any application code.
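The fan-out/fan-in shape of parallel research can be sketched in Python with `concurrent.futures`. This is an illustrative sketch only: the topics and the `investigate` helper are hypothetical stand-ins for real subagent prompts.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical research helper: in practice this would prompt a
# subagent to investigate one aspect of the unfamiliar library.
def investigate(topic: str) -> str:
    return f"findings on {topic}"

topics = ["authentication patterns", "pagination", "error handling"]

# Fan out: each topic is researched concurrently instead of sequentially.
with ThreadPoolExecutor(max_workers=len(topics)) as pool:
    findings = list(pool.map(investigate, topics))

# Fan in: the combined findings seed the specification before any
# application code is written.
spec_research = dict(zip(topics, findings))
```

The structure, not the helper, is the point: N independent questions answered concurrently, then merged into one spec.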
The same overhead that pays dividends on complex tasks wastes time on simple ones.
The pattern: These scenarios share a characteristic—the solution is either obvious or unknowable. When you know exactly what to change, specifying it first adds no information. When you're exploring to discover what's possible, specifying upfront constrains discovery.
A null pointer exception on line 47 of a utility function doesn't need a specification. You know the problem. You know the fix. Running the full research, spec, interview, and task phases would take longer than just fixing the bug.
Heuristic: If you can explain the fix in one sentence, skip SDD. "Add null check before accessing user.preferences" doesn't benefit from a formal specification.
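The one-sentence fix above is this small in code. A sketch, with the `user` object and `preferences` field as illustrative stand-ins:

```python
from types import SimpleNamespace

# Before: `user.preferences["theme"]` crashes when preferences is None.
# After: the one-sentence fix, a guard before access.
def get_theme(user, default="light"):
    if user is None or getattr(user, "preferences", None) is None:
        return default
    return user.preferences.get("theme", default)

# Illustrative users: one with saved preferences, one without.
alice = SimpleNamespace(preferences={"theme": "dark"})
bob = SimpleNamespace(preferences=None)
```

A change this contained is fully described by its one-sentence summary; a spec would just restate it.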
Sometimes you don't know what you're building until you build something. A creative exploration—"What if we visualized this data differently?"—benefits from rapid iteration, not upfront specification.
Vibe coding serves exploration. You try an approach, see what happens, adjust. The "specification" emerges from what you learn. Formalizing too early constrains the creative process.
Important: Exploration is a phase, not an end state. Once you discover what works, you should often write a specification before building the production version. Exploration generates understanding; SDD turns that understanding into reliable implementation.
When the system is down, you fix it first and document later. A production incident isn't the time for research phases and interview questions.
Post-incident: After resolving the immediate issue, SDD can help with the follow-up. "Prevent this class of failure" is the kind of unclear-requirement task where SDD excels. The incident response is direct; the long-term fix benefits from specification.
For quick classification, weigh three variables: complexity (does the task exceed what you can hold in working memory?), clarity (is the solution obvious, unknowable without exploration, or somewhere in between?), and urgency (is production down right now?).
Most tasks fall into the "judgment call" category. That's where the lightweight spec pattern helps.
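The triage logic above can be sketched as a small function. The signal names and return strings are illustrative; real judgment weighs these signals rather than applying them as hard rules.

```python
def classify(exceeds_working_memory: bool,
             one_sentence_fix: bool,
             exploratory: bool,
             production_emergency: bool) -> str:
    """Rough triage for whether a task warrants SDD.

    Illustrative sketch only: the order encodes the lesson's
    heuristics, not a universal algorithm.
    """
    if production_emergency:
        return "skip SDD: fix first, spec the long-term fix later"
    if one_sentence_fix:
        return "skip SDD: the spec adds no information"
    if exploratory:
        return "skip SDD for now: explore, then spec what works"
    if exceeds_working_memory:
        return "use SDD: the spec is your external memory"
    return "judgment call: start with a lightweight spec"
```

Note the default branch: most tasks hit it, which is exactly why the lightweight spec pattern exists.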
When you're uncertain whether SDD is warranted, start light: write down just two things, the constraints (what must not change or break) and the success criteria (how you'll know the work is done).
That's it. No reference architecture analysis. No multi-phase implementation plan. Just constraints and success criteria.
Why this works: Constraints prevent scope creep. Success criteria define done. These two elements provide 80% of specification value with 20% of the overhead.
When to expand: If writing the lightweight spec reveals complexity—"Wait, how DO we handle the existing data?"—that's your signal. Convert to a full specification with research phase and interview. The lightweight spec became the seed for comprehensive planning.
When to ship as-is: If the lightweight spec feels sufficient—constraints are clear, success criteria are unambiguous—proceed directly to implementation. The spec exists for reference without the full ceremony.
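Structurally, a lightweight spec is tiny. A sketch of the two-element shape, with illustrative fields and example content; the "open question" heuristic is one possible way to detect when to expand to a full spec:

```python
from dataclasses import dataclass, field

@dataclass
class LightweightSpec:
    """Constraints prevent scope creep; success criteria define done."""
    task: str
    constraints: list = field(default_factory=list)       # what must NOT change
    success_criteria: list = field(default_factory=list)  # how we know it's done

    def needs_full_sdd(self) -> bool:
        # Heuristic: if writing the spec surfaced open questions,
        # that's the signal to expand into a full specification.
        return any("?" in item for item in self.constraints + self.success_criteria)

# Illustrative example: writing the spec reveals an open question.
spec = LightweightSpec(
    task="Migrate user settings to the new schema",
    constraints=["Don't break existing API clients",
                 "How do we handle the existing data?"],  # open question!
    success_criteria=["All settings readable via the new endpoint"],
)
```

If `needs_full_sdd()` fires, the lightweight spec becomes the seed for the full research-and-interview process; otherwise, ship it as-is.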
Experienced practitioners develop intuition for when spec ceremony pays off. This intuition isn't mystical—it's pattern recognition from feedback loops.
Building judgment requires:
Tracking outcomes. When a project struggles, ask: Would a specification have helped? When a project succeeds quickly, ask: Did I need the spec, or did I write it out of habit?
Calibrating overhead. Time your specification writing. If a lightweight spec takes 10 minutes and a full spec takes 2 hours, you know the investment you're making. Match investment to risk.
Recognizing signals. Certain project characteristics predict specification value: changes that span many files, unfamiliar libraries or domains, requirements that are still fuzzy, and work that crosses multiple sessions or collaborators.
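Calibrating overhead comes down to expected-value arithmetic. A hedged sketch, with made-up numbers purely for illustration; the probabilities are guesses you refine by tracking your own outcomes:

```python
def expected_cost(spec_hours: float,
                  rework_hours: float,
                  p_rework_without_spec: float,
                  p_rework_with_spec: float) -> tuple:
    """Compare expected total hours with and without writing a spec.

    Illustrative arithmetic only: expected cost = upfront investment
    plus probability-weighted rework.
    """
    without = p_rework_without_spec * rework_hours
    with_spec = spec_hours + p_rework_with_spec * rework_hours
    return without, with_spec

# E.g.: a 2-hour spec vs. a 40% chance of 16 hours of rework.
no_spec, yes_spec = expected_cost(2, 16, 0.40, 0.05)
# no_spec = 6.4 expected hours, yes_spec = 2.8: the spec pays off here.
```

Flip the inputs (a 10-minute lightweight spec, a trivial task with near-zero rework risk) and the arithmetic flips too, which is the whole point of matching investment to risk.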
SDD isn't universally praised. Research identifies legitimate concerns worth acknowledging.
The concern: Sequential phases (research → spec → implement) resemble waterfall methodology, which failed precisely because upfront planning couldn't anticipate implementation reality.
The counterpoint: SDD phases aren't isolated handoffs. The spec updates during implementation when reality diverges from plan. The difference from waterfall: tasks are atomic and reversible (git revert), not months-long commitments with no feedback.
The concern: Reviewing the spec AND reviewing the code doubles overhead.
The counterpoint: Spec review catches design problems; code review catches implementation problems. They're different error classes. Finding a design flaw during code review means discarding implementation work. Spec review surfaces design issues when changes are cheap.
The concern: As projects grow, specification maintenance becomes its own burden.
The counterpoint: This is legitimate. Long-running projects must decide whether to maintain specs as living documents (Spec-Anchored level) or treat them as implementation kickoff artifacts (Spec-First level). The choice depends on organizational needs.
The concern: Not every change needs four phases and three documents.
The counterpoint: Agreed. That's the point of this lesson. The decision framework exists precisely because SDD isn't always appropriate. The skill is knowing when.
Running Example Concluded: We completed "Personal AI Employees in 2026" using the full workflow. Now we reflect: was SDD worth it?
Prompt 1: Retrospective on Our Running Example
What you're learning: For our report, research prevented writing for the wrong audience. The spec prevented scope creep into tutorials. Refinement caught that mid-size CTOs have different needs than enterprise. But a quick blog post wouldn't need this ceremony.
Prompt 2: Classify These Tasks
What you're learning: Quick classification builds intuition. Tasks 1, 3, and 5 are "skip SDD" (obvious, contained). Task 2 needs at least a lightweight spec (what changed?). Task 4 needs full SDD (audience research required). Practice makes this classification instant.
Prompt 3: Lightweight Spec for Borderline Task
What you're learning: Lightweight specs are your probe. "Blog post about AI adoption" sounds simple until you write constraints: "Don't reveal vendor pricing" (wait, can we share ROI without that?), "Don't criticize tools we evaluated" (or do we want honest comparisons?). The spec reveals hidden decisions.