A management consultant is preparing a market entry analysis for a client expanding into Southeast Asia. The project needs competitive intelligence, financial modeling, and regulatory review -- all by Friday. She starts with a single Claude session, asking it to research competitors, then model revenue scenarios, then check import regulations. By the time it reaches the regulatory section, the financial assumptions are buried twenty messages up. Context is degrading. The analysis is shallow because one agent is juggling three specialties.
What if three separate Claude instances could investigate simultaneously -- one dedicated to competitive landscape, one to financial modeling, one to regulatory requirements -- each with a fresh, focused context window? And what if those three analysts could then discuss their findings with each other, challenging assumptions and cross-referencing data, before delivering a unified brief?
That is Agent Teams. Where subagents (Lesson 11) are fire-and-forget workers that report back to a single caller, Agent Teams are fully independent Claude Code instances that coordinate through a shared task list and direct messaging. Each teammate has its own context window, can message any other teammate, and self-coordinates work.
Agent Teams is an experimental feature. Add this to your VS Code settings.json or Claude Code settings:
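At the time of writing, the feature is gated behind an environment variable set in the `env` block of settings.json. Experimental flags change between releases, so check the current documentation if this does not take effect:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```

Restart Claude Code after saving so the new environment variable is picked up.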
Verify it worked: Start a new Claude Code session and type a prompt that requests a team. If the feature is enabled, you will see Claude creating teammates instead of subagents.
Agent Teams supports two display modes: in-process, where teammate activity renders inside your main session, and split panes, where each teammate runs in its own tmux pane.
Set the mode in your settings.json:
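The setting name below reflects current behavior and may change while the feature is experimental; valid values include "auto", "in-process", and "tmux":

```json
{
  "teammateMode": "in-process"
}
```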
Or override for a single session with a CLI flag:
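Assuming the current flag name (it may change as the feature evolves):

```
claude --teammate-mode tmux
```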
The default "auto" uses split panes if you are already running inside tmux, and in-process otherwise. For most learners, in-process mode is the simplest starting point.
A note on "experimental": The patterns you learn here -- task decomposition, parallel coordination, role assignment -- are fundamental to multi-agent systems. The specific API may evolve, but the thinking transfers to any platform that supports agent coordination.
Open Claude Code in any project folder and type:
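For example — the scenario details are placeholders, so adapt them to whatever fits your project:

```
Create an agent team to analyze market entry into Southeast Asia.
Spawn three teammates: a competitive-landscape researcher, a financial
modeler, and a regulatory analyst. Have them share findings with each
other before you write a unified brief.
```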
Try it now. Run the prompt above and observe each step. Even without real market data in your project folder, the team will demonstrate the coordination pattern using its own knowledge.
While the team works, explore the files Claude creates behind the scenes. Open a separate terminal (not your Claude session) and inspect:
Team config -- who is on the team:
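Assuming teams are stored under `~/.claude/teams` (current behavior, which may change):

```shell
# Print every team's config; falls back to a notice if none exist yet
cat ~/.claude/teams/*/config.json 2>/dev/null || echo "no teams created yet"
```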
You will see a members array where each teammate has a name, agentType, and model:
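An illustrative config — names and models here are placeholders, and the exact shape may shift while the feature is experimental:

```json
{
  "name": "market-entry",
  "members": [
    { "name": "team-lead", "agentType": "general-purpose", "model": "opus" },
    { "name": "market-researcher", "agentType": "general-purpose", "model": "sonnet" },
    { "name": "financial-analyst", "agentType": "general-purpose", "model": "sonnet" }
  ]
}
```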
Task files -- what work exists:
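Assuming tasks are stored under `~/.claude/tasks/<team-name>/`, one JSON file per task:

```shell
# One directory per team; one JSON file per task
ls ~/.claude/tasks/*/ 2>/dev/null || echo "no task files yet"
```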
Each task is a JSON file with dependency tracking:
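For example — the ids and subject are placeholders, and field names beyond `status`, `blocks`, and `blockedBy` may vary:

```json
{
  "id": "3",
  "subject": "Check import regulations for target markets",
  "status": "pending",
  "blocks": [],
  "blockedBy": ["1", "2"]
}
```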
The blocks and blockedBy fields form a dependency graph. A task with unresolved blockedBy entries cannot be claimed until those dependencies complete. When a blocking task completes, dependent tasks unblock automatically.
Why this matters: When a team gets stuck (a task says in_progress but the teammate seems idle), you can read these files to diagnose the problem. Is the task stuck? Is a dependency not marked complete? Knowing the internals turns debugging from guesswork into inspection.
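A quick way to do that inspection from the shell. This sketch assumes task files keep each field on its own line (adjust the greps if yours differ), and the team name is a placeholder:

```shell
# Sketch: summarize each task's status and unresolved blockers.
# Field extraction is deliberately crude (grep, not a JSON parser)
# to keep it dependency-free.
list_task_status() {
  dir="$1"
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue
    status=$(grep -o '"status": *"[^"]*"' "$f" | head -n 1)
    blocked=$(grep -o '"blockedBy": *\[[^]]*\]' "$f" | head -n 1)
    echo "$(basename "$f"): ${status:-status unknown} ${blocked}"
  done
}

list_task_status "$HOME/.claude/tasks/market-entry"
```

A task showing `in_progress` with a non-empty `blockedBy` list is the classic stuck pattern to look for.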
You already know subagents from Lesson 11. When should you use teams instead?
The decision rule: If teammates need to talk to each other, use teams. If they just report back, use subagents.
Agent teams use more tokens than subagents because each teammate maintains its own full context window plus inter-agent messages. A 3-agent team analysis might cost 3-5x what a single-agent session costs. Use the strongest model for synthesis (the lead) and efficient models for research (teammates). Configure this in your team creation prompt:
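For example — phrased in terms of model tiers rather than specific model names, since those change over time:

```
Create a team to analyze our market entry options. Use your strongest
model for yourself as the lead. Spawn three research teammates on a
smaller, cheaper model -- they gather evidence; you synthesize the brief.
```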
Use teams when the quality improvement justifies the cost -- multi-angle investigations, competing hypotheses, and cross-functional coordination are worth it. Simple summaries and single-perspective tasks are not.
Each technique below includes a prompt you should try.
Delegate mode prevents the team lead from doing analysis directly. The lead can only coordinate: create tasks, send messages, review results. All investigation goes to teammates.
Think of it this way: You are the project director. You define scope, your team executes. You never write the deliverable yourself.
Try it now:
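For example, combined with delegate mode (Shift+Tab) — the scenario is a placeholder:

```
Create a team of three analysts to evaluate our Southeast Asia market
entry. You are the coordinator: create tasks, assign them, and review
results, but never write any analysis yourself.
```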
Before executing, teammates present their approach for review -- like approving a consultant's work plan before they bill hours.
Try it now:
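For example — the scenario is a placeholder:

```
Create a team of two analysts to investigate why our checkout conversion
dropped last month. Require plan approval: each teammate must present its
investigation plan to me and wait for my sign-off before starting work.
```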
Watch the flow: each teammate drafts its plan, presents it for your approval, and begins executing only after you approve — or revises and resubmits if you request changes.
You can redirect teammates mid-task without disrupting others.
Try it now: During a team session, use Shift+Up / Shift+Down to select a specific teammate, then type:
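For example:

```
Drop the historical analysis -- focus only on data from the last two
years, and flag anything older as unreliable.
```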
The teammate receives your message and adjusts its work accordingly. Other teammates are not interrupted.
Tasks can depend on other tasks. A blocked task will not start until its dependency completes.
Try it now:
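For example — the tasks are placeholders:

```
Create a team for a market entry brief with three tasks: (1) gather
competitor data, (2) build the revenue model, (3) write the executive
summary. Task 3 is blocked by tasks 1 and 2 -- it must not start until
both are complete.
```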
Watch tasks unblock automatically as their dependencies complete. Teammates claim unblocked tasks without being told.
Teams can write to shared files that all teammates read. This is how teams produce consensus.
Try it now:
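For example — the filename is a placeholder:

```
Create a team of three analysts. Each writes its findings to its own
section of FINDINGS.md. When every section is complete, read the full
file and add a consensus summary at the top.
```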
Unlike messages (which live in each teammate's context), a shared file persists and can be read by anyone. This pattern is powerful for investigations where you want a permanent record.
Lesson 15 introduced hooks for single-agent workflows. Two hook events are designed specifically for teams.
When a teammate runs out of tasks and goes idle, this hook fires. You can use it to assign more work or check for remaining items.
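Registration follows the same hooks shape as Lesson 15. Assuming the event is named TeammateIdle (an experimental name that may change):

```json
{
  "hooks": {
    "TeammateIdle": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/check-remaining-tasks.sh"
          }
        ]
      }
    ]
  }
}
```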
The hook script (.claude/hooks/check-remaining-tasks.sh):
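One possible implementation. The task-file location and the "pending" status value are assumptions to verify against your own `~/.claude/tasks` directory; writing the script with a heredoc lets you paste the whole thing into a terminal:

```shell
mkdir -p .claude/hooks
cat > .claude/hooks/check-remaining-tasks.sh <<'EOF'
#!/bin/bash
# Idle hook: if unclaimed tasks remain, send the teammate back to work
# (exit 2); otherwise let it go idle (exit 0).
# CLAUDE_TASKS_DIR is a local override for testing -- the real task
# directory for your team lives under ~/.claude/tasks/<team-name>.
TASKS_DIR="${CLAUDE_TASKS_DIR:-$HOME/.claude/tasks/my-team}"

pending=0
for f in "$TASKS_DIR"/*.json; do
  [ -e "$f" ] || continue
  grep -q '"status": *"pending"' "$f" && pending=$((pending + 1))
done

if [ "$pending" -gt 0 ]; then
  # stderr is fed back to the idle teammate; exit 2 keeps it working
  echo "$pending unclaimed task(s) remain -- claim one before going idle." >&2
  exit 2
fi
exit 0
EOF
chmod +x .claude/hooks/check-remaining-tasks.sh
```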
Exit code 2 sends the script's stderr output back to the teammate as feedback and keeps it working. Exit code 0 allows the teammate to go idle normally.
When a teammate marks a task as done, this hook fires before the task is accepted. You can use it to enforce quality standards on deliverables.
The hook script (.claude/hooks/verify-task-quality.sh):
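A sketch of a quality gate, assuming exit 2 rejects the completion and stderr carries the feedback. The deliverable name and the ten-line threshold are placeholder choices:

```shell
mkdir -p .claude/hooks
cat > .claude/hooks/verify-task-quality.sh <<'EOF'
#!/bin/bash
# Task-completion hook: reject the completion (exit 2) if the deliverable
# is missing or suspiciously short; otherwise accept it (exit 0).
# DELIVERABLE and the 10-line minimum are placeholders for this example.
DELIVERABLE="${DELIVERABLE:-ANALYSIS.md}"

if [ ! -f "$DELIVERABLE" ]; then
  echo "Rejected: $DELIVERABLE does not exist yet." >&2
  exit 2
fi
if [ "$(wc -l < "$DELIVERABLE")" -lt 10 ]; then
  echo "Rejected: $DELIVERABLE looks incomplete (fewer than 10 lines)." >&2
  exit 2
fi
exit 0
EOF
chmod +x .claude/hooks/verify-task-quality.sh
```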
Teams are powerful but introduce coordination complexity. Five common failure modes and their fixes:
What it looks like: The director starts writing the analysis instead of reviewing team output.
Fix: Enable delegate mode (Shift+Tab) or include explicit instructions: "You are the coordinator. NEVER conduct research directly. Create tasks, assign them, and review results."
What it looks like: Two analysts updating the same section of a report, overwriting each other's findings.
Fix: Assign section ownership explicitly: "Market researcher writes to Section 1 of ANALYSIS.md. Financial analyst writes to Section 2. Competitive analyst writes to Section 3."
What it looks like: A new team member does not know about project conventions or prior decisions.
Fix: Teammates do NOT inherit the lead's conversation history. Include critical context in the spawn prompt, or ensure your CLAUDE.md file contains the necessary background (teammates DO read project context files).
What it looks like: A team analysis costs significantly more than expected.
Fix: Use the strongest model for synthesis and efficient models for research. The teammates do the bulk investigative work; the lead synthesizes. This gives you depth where it matters without overspending on routine research.
What it looks like: A deliverable sits "in progress" while the teammate waits for input or is stuck in a loop.
Fix: Check the teammate's view (Shift+Up/Down). Send a direct message to redirect or unstick it. If needed, inspect the task files at ~/.claude/tasks/*/ to check dependency status.
Three universal patterns for team coordination. Each includes a prompt you can adapt.
Multiple angles on the same question, investigated simultaneously.
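An adaptable prompt — the scenario is a placeholder:

```
Create a team of three researchers to answer one question: should we
build or buy our billing system? One investigates engineering cost, one
vendor options, one long-term maintenance risk. Each reports
independently, then discuss and deliver a joint recommendation.
```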
Sequential dependencies where each stage feeds the next.
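An adaptable prompt — the stages are placeholders:

```
Create a team with three sequenced tasks: (1) collect raw market data,
(2) model revenue scenarios from that data, (3) write the final brief.
Block each task on the previous one so the stages run in order.
```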
When the root cause is unclear, multiple investigators actively try to disprove each other. This prevents anchoring bias -- the tendency to commit to the first plausible explanation.
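An adaptable prompt — the incident and hypotheses are placeholders:

```
Our API latency doubled last week and we do not know why. Create a team
of four investigators, each owning one hypothesis: database load, network
changes, recent deploys, traffic patterns. Each investigator must try to
disprove the other three theories. Only a hypothesis that survives every
challenge goes into the final report.
```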
Why this works: A single investigator finds one plausible explanation and stops looking. With four independent investigators who can challenge each other's theories, the hypothesis that survives is much more likely to be the real root cause. Sequential investigation suffers from anchoring; parallel debate counters it.