In Lesson 4, you learned the four-phase SDD workflow. Now you'll execute the first phase: research that transforms hours of sequential reading into parallel minutes.
Here's the economics that makes this lesson worth your time. That reference implementation you've been meaning to understand? The library documentation you've been piecing together? The architecture decisions scattered across multiple files? What would take you four hours of sequential reading, Claude can investigate in twenty minutes using parallel research agents.
This isn't about AI reading faster. It's about architecture. Claude's subagent system lets you spawn multiple independent investigators, each with fresh context, each focused on a specific question. While one agent studies CRDT data structures, another examines WebSocket protocols, a third analyzes storage patterns, and a fourth maps overall architecture. They don't wait for each other. They don't cross-contaminate each other's understanding. They report back simultaneously.
The pattern that activates parallel research is direct:
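The exact wording matters less than the structure: name the goal, then ask for independent subagents, one per question. A minimal form (illustrative, not a fixed incantation):

```
Research [goal] using parallel subagents. Spawn one independent
agent per question below; each investigates in isolation and
reports its findings back separately.
```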
That's the trigger. When Claude sees this instruction alongside a research goal, it spawns 3-5 independent agents. Each agent operates in isolation, investigates one aspect, and returns findings. You receive parallel perspectives without the accumulated confusion of sequential conversation.
When you invoke parallel research, Claude:

- Spawns 3-5 independent subagents, each with fresh context
- Assigns each agent one focused question to investigate
- Runs the agents simultaneously, in isolation from one another
- Collects each agent's findings and reports them back to you
The key architectural insight: each subagent starts with fresh context. When Agent 2 investigates WebSocket patterns, it doesn't carry assumptions from Agent 1's CRDT analysis. This isolation prevents the cross-contamination that plagues sequential research conversations.
Here's what parallel research looks like in practice. Alex, a developer building a local-first application, needed to migrate from SQLite to IndexedDB while adding real-time sync capabilities. The investigation required understanding:
Sequential approach: research CRDTs, then WebSockets, then IndexedDB, then architecture. Each phase builds on the previous. Estimated time: 3-4 hours of reading and note-taking.
Parallel approach: spawn four agents simultaneously.
Alex's prompt:
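A prompt along these lines (repository paths and library names are illustrative) launches all four investigations at once:

```
Research this migration in parallel using four independent subagents:

Agent 1: How do CRDTs model concurrent edits? Compare common
         approaches (e.g., Yjs, Automerge) for a local-first app.
Agent 2: What WebSocket patterns support real-time sync —
         reconnection, backpressure, message ordering?
Agent 3: What are IndexedDB's storage patterns and limitations
         for structured data migrated from SQLite?
Agent 4: Map the overall architecture of the reference project —
         how do sync, storage, and UI connect?

Each agent investigates independently and reports its findings
separately. Do not share context between agents.
```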
What returned:
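Each agent returns its own findings block. Condensed, the structure looks like this (the real reports carry the supporting detail):

```
Agent 1 — CRDTs: candidate data structures, trade-offs, and a
          recommendation with rationale
Agent 2 — WebSockets: reconnection, ordering, and backpressure
          patterns found in the reference implementation
Agent 3 — IndexedDB: storage schema options, known limitations,
          and migration considerations from SQLite
Agent 4 — Architecture: a component map showing how the sync,
          storage, and UI layers connect
```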
Time elapsed: 18 minutes instead of 4 hours.
More importantly, the findings were independent. Agent 1's CRDT understanding didn't assume anything about storage. When Alex later found that IndexedDB had limitations Agent 3 hadn't considered, it didn't invalidate the CRDT or WebSocket research.
The advantage isn't just speed. Context isolation changes the quality of research.
Hidden conflicts become visible. When you research sequentially, you unconsciously reconcile contradictions as you go. Your first research pass concludes "use approach A." By the time you get to related research, you're looking for information that supports A, not challenges it.
With parallel research, Agent 1 might recommend approach A while Agent 3 recommends approach B for the same problem. That conflict is valuable information. It means you have a genuine design decision to make, not an assumption that slipped through.
Revision stays surgical. If you discover Agent 2's WebSocket analysis missed something critical, you re-run Agent 2. Agents 1, 3, and 4 are unaffected. In sequential research, late-stage discoveries often invalidate earlier work because of accumulated assumptions.
Beyond subagents: For research tasks where investigators need to challenge each other's findings rather than report independently, Claude Code's Agent Teams (Chapter 4, Lesson 9) enable direct inter-agent communication. Subagents work well when only the result matters. Agent teams work better when the debate between investigators produces the real insight.
The quality of parallel research depends on how you decompose your investigation. Effective decomposition creates threads that are:

- **Independent**: no thread needs another thread's answer to proceed
- **Focused**: each thread investigates one question, not a grab bag
- **Concrete**: each thread can return specific findings, not generalities
For understanding an authentication system:
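A decomposition along these lines (thread wording is illustrative) keeps each agent independent:

```
Agent 1: How are credentials validated and stored? (hashing,
         salting, password policies)
Agent 2: How are sessions or tokens issued, refreshed, and revoked?
Agent 3: How is authorization enforced after authentication?
         (roles, permissions, middleware)
Agent 4: What are the failure and attack paths? (lockout, rate
         limiting, audit logging)
```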
For evaluating a new library:
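An illustrative split for a library evaluation, with each agent answerable on its own:

```
Agent 1: What is the library's core API surface and typical usage?
Agent 2: How healthy is the project? (maintenance cadence, open
         issues, breaking-change history)
Agent 3: How does it handle our specific constraints? (bundle size,
         licensing, type support — whatever applies)
Agent 4: What would migration look like if we adopt it and later
         need to leave?
```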
For analyzing a performance issue:
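A sketch for a performance investigation, again with no thread waiting on another:

```
Agent 1: Where does profiling data say time is actually spent?
Agent 2: What do the hot code paths look like — algorithms, data
         structures, allocation patterns?
Agent 3: What external factors are in play? (network calls,
         database queries, caching behavior)
Agent 4: What have similar systems done about comparable
         bottlenecks?
```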
When parallel research completes, you receive structured findings from each agent. Your job is synthesis: identifying patterns, conflicts, and implications.
Look for themes that appear across multiple agents: if three of four investigators independently surface the same constraint or pattern, it's likely central to the problem rather than one agent's bias.
Conflicts between agents often indicate genuine design decisions: when two agents recommend different approaches to the same problem, you've surfaced a real trade-off that needs a deliberate choice, not something to paper over.
These conflicts become inputs to Phase 3 (Refinement) where you'll resolve ambiguities before implementation.
Research findings directly feed your specification: confirmed patterns become requirements, conflicts become open questions flagged for refinement, and gaps become items the spec must explicitly address.
You found a project that does something similar to what you need. Parallel research can dissect it:
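One way to decompose the dissection (the thread assignments are illustrative):

```
Agent 1: Map the project's directory structure and module boundaries.
Agent 2: Trace the feature we care about end to end — entry point
         to persistence.
Agent 3: Identify the key design decisions and any documented
         rationale (ADRs, comments, commit messages).
Agent 4: List the dependencies the feature relies on and how
         they're wired together.
```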
You're deciding between two approaches. Research can inform the decision:
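An illustrative decomposition that keeps the two approaches from contaminating each other's analysis:

```
Agent 1: Investigate approach A — strengths, constraints, and what
         it demands of our codebase.
Agent 2: Investigate approach B on the same questions.
Agent 3: Find real projects that chose each approach, and why.
Agent 4: Identify the criteria that should actually drive this
         decision in our context.
```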
You inherited code and need to understand it:
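A sketch for auditing inherited code, one independent question per agent:

```
Agent 1: Map the entry points and the main execution flows.
Agent 2: Document the data model — schemas, invariants, migrations.
Agent 3: Surface the riskiest areas: dead code, TODOs, missing
         tests, fragile patterns.
Agent 4: Reconstruct the intent — what problems was this code
         written to solve?
```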
Objective: Apply the parallel research pattern to a real investigation need.
Choose a research goal that's been on your backlog. Maybe it's understanding a library, auditing code you inherited, or investigating how a reference project implements something.
Construct a research request using the template:
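A workable shape for the request (bracketed parts are yours to fill in):

```
Research [goal] using [3-5] parallel subagents:

Agent 1: [first independent question]
Agent 2: [second independent question]
Agent 3: [third independent question]

Each agent investigates independently and reports its findings
separately. Do not share context between agents.
```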
Your decomposition is effective if:

- Each thread can be investigated without waiting on another thread's answer
- Each thread maps to one focused question
- The threads together cover the whole research goal
Run the research. When findings return, identify:

- Themes that appear across multiple agents
- Conflicts between agents' recommendations
- Gaps that no agent covered
Running Example Continued: We're writing "Personal AI Employees in 2026" for CTOs. Now we research the landscape in parallel.
Prompt 1: Parallel Research on AI Tools
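A prompt in this shape kicks off the landscape research (agent assignments here are inferred from the findings discussed next, and are illustrative):

```
Research "Personal AI Employees in 2026" for a CTO audience using
four parallel subagents:

Agent 1: Compare the leading AI coding tools (Claude Code, GitHub
         Copilot, and peers) on autonomy and capability.
Agent 2: Find ROI data — studies, case reports, and measured
         productivity claims.
Agent 3: Collect the skeptical view — documented failures, risks,
         and limitations.
Agent 4: Identify adoption trends — what's growing, what's fading,
         what CTOs are actually deploying.

Each agent reports independently. Do not share context between
agents.
```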
What you're learning: Each agent investigates independently. Agent 1 discovers Claude Code can run autonomously while Copilot needs more guidance. Agent 2 finds ROI studies. Neither pollutes the other's findings. Conflicts (Agent 1 says X is best, Agent 4 says Y is trending) surface real decisions.
Prompt 2: Synthesis
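A synthesis prompt along these lines turns four reports into one document (the `research.md` filename follows the running example):

```
Synthesize the four research reports into research.md:

1. Themes that appear in multiple reports — these are the
   must-cover points for CTOs.
2. Conflicts between reports, stated explicitly (e.g., ROI data
   vs. documented skepticism).
3. Points only one agent raised, marked as optional depth.
```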
What you're learning: Four agents produce four perspectives. Synthesis reveals what CTOs must know (appears everywhere) versus what's optional depth. Conflicts become explicit: "Agent 2 found ROI data, Agent 3 found skepticism"—both belong in an honest report.
Prompt 3: Gap Analysis
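A gap-analysis prompt in this shape, using the CTO questions discussed below:

```
Review research.md against the questions a CTO would ask:

- How do we measure success?
- What's the learning curve for our teams?
- How do we handle security review?

For each question research.md doesn't answer, note the gap and
propose how the spec should address filling it.
```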
What you're learning: Research isn't just collecting data—it's identifying gaps. CTOs might ask: "How do we measure success?" "What's the learning curve?" "How do we handle security review?" If research.md doesn't answer these, the spec needs to address how we'll fill them.