Boris Cherny, creator and head of Claude Code at Anthropic, has shared detailed insights into how he and his team use the tool in production. While Boris works primarily in software development, the practices his team has refined reveal universal patterns that transform Claude Code from a capable assistant into a force multiplier—regardless of your domain.
What makes these practices valuable isn't exotic techniques. It's seeing how the features you've learned in this chapter combine into a production workflow that lets one person operate like a small team.
This lesson maps the Claude Code team's workflow to everything you've learned—and connects it to the official Claude Code best practices—showing you what expert-level usage looks like in practice. Where techniques are developer-specific, we'll note the equivalent approach for knowledge workers.
Before diving into specific techniques, understand the principle that unifies all Claude Code best practices:
Claude's context window fills up fast, and performance degrades as it fills.
The context window holds your entire conversation: every message, every file Claude reads, every command output. A single research session or complex task can consume tens of thousands of tokens. As context fills, Claude may start "forgetting" earlier instructions or making more mistakes.
Why this matters: every practice in this lesson connects back to this constraint. When you understand context as the fundamental resource, the "why" behind each technique becomes clear.
Boris maintains 15-20 concurrent sessions in his workflow. The key insight from the Claude Code team: running multiple isolated sessions is the single biggest productivity unlock.
"It's the single biggest productivity unlock, and the top tip from the team."
— Boris Cherny
The Core Principle
Boris runs many sessions because he manages a massive software product. You do not need this many. Start with 2-3. The principle is about parallel workstreams—like having multiple assistants working different problems simultaneously.
How to run parallel sessions:
For developers using git worktrees:
Why parallel directories work better than switching: each session keeps its own working directory and its own context, so moving between tasks never means stashing changes, switching branches, or rebuilding Claude's understanding of where you left off.
Pro tips from the team:
Start small: Begin with 2-3 parallel sessions before scaling up. The cognitive overhead of managing many sessions takes practice.
Connection to Chapter Concepts:
Boris activates Plan Mode (Shift+Tab twice) for every non-trivial task. He iterates back and forth with Claude until the plan is solid, then switches to auto-accept mode for execution.
"A good plan is really important!"
— Boris Cherny
The Pattern: enter Plan Mode, iterate on the plan with Claude until it matches your intent, then switch to auto-accept mode and let Claude execute.
Why this works: When you spend time on planning, you align Claude's understanding with your intent. The investment in planning pays off through faster, more accurate execution. No wasted iterations fixing misunderstandings.
When things go sideways: The moment something goes wrong, switch back to Plan Mode and re-plan. Don't keep pushing through a confused execution. Some team members also explicitly tell Claude to enter Plan Mode for verification steps—not just for the initial build.
A powerful technique from the Claude Code team involves using separate sessions for writing and reviewing:
"One person has one Claude write the plan, then they spin up a second Claude to review it as a staff engineer."
— Boris Cherny
The workflow:
Session A (Writer): Create the implementation plan
Session B (Reviewer): Review with fresh eyes
Session A: Address feedback
Why this works: the reviewer session starts with a clean context, so it judges the plan on its merits instead of inheriting the writer session's assumptions and blind spots.
Connection to Chapter Concepts:
Boris's team maintains a shared CLAUDE.md file checked into git. The entire team contributes multiple times per week.
The key practice: when Claude makes a mistake, document it immediately.
"Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time."
— Boris Cherny
They also tag @claude during GitHub code reviews—when a reviewer sees Claude could have done better, they update CLAUDE.md as part of the review process.
One of the most actionable techniques from the Claude Code team:
"After every correction, end with: 'Update your CLAUDE.md so you don't make that mistake again.' Claude is eerily good at writing rules for itself."
— Boris Cherny
Example flow:
Claude generates code with a wrong import path.
You correct: "That helper lives in the shared utils module—import it from there, not from the package root."
Add the magic phrase: "Update your CLAUDE.md so you don't make that mistake again."
Claude adds to CLAUDE.md: a rule recording where the helper lives and how it must be imported, phrased so future sessions won't repeat the mistake.
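The resulting CLAUDE.md entry might look something like this—the module and path names are invented for illustration:

```markdown
## Imports

- Import shared helpers from src/utils, never from the package root;
  the root-level re-exports are deprecated.
```

Because Claude wrote the rule from the mistake it just made, the wording targets exactly the confusion that caused it.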
Why Claude writes better rules than you: it knows exactly what it misunderstood, so it can state the rule unambiguously, in terms it will actually follow next time.
The compound effect: Every correction makes Claude smarter. Over weeks, your CLAUDE.md becomes a knowledge base that prevents entire categories of mistakes.
Notes directory pattern: One engineer tells Claude to maintain a notes directory for every task/project, updated after every PR. They then point CLAUDE.md at it. This creates project-specific context that accumulates over time.
Connection to Chapter Concepts:
The Claude Code team applies a simple but powerful heuristic:
"If you do something more than once a day, turn it into a skill."
— Boris Cherny
Skill Architecture
Every skill can be user-invoked (you type /skill-name) or agent-invoked (Claude uses it automatically). Use disable-model-invocation: true to restrict a skill to manual invocation only.
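In a skill file, that restriction could look like the sketch below—assuming the SKILL.md layout with YAML frontmatter; the skill name, description, and body are invented:

```markdown
---
name: session-review
description: Summarize decisions, open questions, and follow-ups from this session.
disable-model-invocation: true
---

Review the current session. List the decisions made, the open questions that
remain, and the follow-ups the user should schedule. Be concise.
```

With disable-model-invocation: true, this runs only when you type /session-review—Claude never triggers it on its own.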
Boris recommends building a skill you run at the end of every session—while the context is still fresh:
"Build a skill and run it at the end of every session."
For developers (/techdebt): a skill that asks Claude to list the shortcuts, TODOs, and tech debt introduced during the session, while it still remembers why each one happened.
For knowledge workers (/session-review): a skill that captures the decisions made, open questions, and follow-ups before the session closes.
The habit: Before closing any session, run your review skill. The context is fresh—Claude remembers exactly what you discussed and can spot things you might have missed.
"Create your own skills and commit them to git. Reuse across every project."
Pattern: keep your skills in a git repository and pull them into every project, so an improvement made anywhere benefits every workstream.
Example skills (adapt to your domain): a /techdebt sweep for engineers, a /session-review recap for knowledge workers—anything you find yourself asking for more than once a day.
Advanced pattern: Build analytics-engineer-style agents that write dbt models, review code, and test changes in dev. These become reusable assets that any team member can invoke.
Connection to Chapter Concepts:
Boris uses custom subagents for his most common workflows:
"I think of subagents as automating the most common workflows that I do for most PRs."
— Boris Cherny
The Investigation Pattern: Beyond PR workflows, subagents keep your main context clean. When Claude researches a codebase, it reads many files—all consuming your context. Instead, dispatch a subagent to do the digging.
The subagent explores in its own context window, reads relevant files, and reports back with findings—all without cluttering your main conversation.
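A sketch of such an investigator as a custom subagent definition—assuming the markdown-plus-frontmatter format used in .claude/agents/, with an invented name and prompt:

```markdown
---
name: investigator
description: Explores the codebase to answer a question, then reports back concisely.
tools: Read, Grep, Glob
---

You are a read-only investigator. Given a question about this codebase, find
the relevant files, read them, and reply with a short report: which files
matter, what each does, and a direct answer to the question. Do not modify
anything. Keep the report under twenty lines.
```

The main session receives only the short report; the dozens of files the subagent read never touch your context window.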
Throw more compute at problems: Append "use subagents" to any request where you want Claude to parallelize the work. Claude will spin up multiple subagents to tackle different aspects simultaneously.
Advanced: Route permissions to Opus 4.5 via hook: Some team members route permission requests through an Opus 4.5 hook that scans for attacks and auto-approves safe operations—letting Claude work more autonomously while maintaining security.
Connection to Chapter Concepts:
This might be the most important insight from Boris's workflow:
"Probably the most important thing to get great results out of Claude Code: give Claude a way to verify its work. If Claude has that feedback loop, it will 2-3x the quality of the final result."
— Boris Cherny
How he implements this: by giving Claude ways to check its own output—tests to run, browser automation to click through the result, MCP tools to inspect live systems—before reporting back.
The Philosophy: You don't trust AI output—you instrument it. Give Claude tools to check its own work, and quality improves dramatically.
Connection to Chapter Concepts:
Boris's team uses a simple but effective hook:
This runs the formatter after every file write or edit. Claude generates well-formatted code 90% of the time, and the hook handles the remaining 10% to prevent CI formatting failures.
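In .claude/settings.json, such a hook could look roughly like this—a sketch assuming the PostToolUse hook event and a Prettier-based project; swap in your own formatter:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "npx prettier --write . >/dev/null 2>&1 || true" }
        ]
      }
    ]
  }
}
```

The matcher limits the hook to file-writing tools, and the trailing || true keeps a formatter hiccup from interrupting Claude's work.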
Connection to Chapter Concepts:
Boris explicitly avoids --dangerously-skip-permissions. Instead, he uses /permissions to pre-allow commands that are safe in his environment:
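A sketch of what pre-allowed commands can look like—the entries are examples only; allow just what is safe in your environment:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run build)",
      "Bash(npm run test:*)",
      "Bash(git diff:*)",
      "Bash(git log:*)"
    ]
  }
}
```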
These permissions are checked into .claude/settings.json and shared with the entire team.
Why this matters: Skip permissions trades safety for convenience. Pre-allowed permissions give you the convenience while maintaining the safety boundary—Claude still asks before running unknown commands.
Connection to Chapter Concepts:
The Claude Code team has developed patterns for letting Claude solve problems independently:
The core pattern: Give Claude the problem, not the solution. It often finds better approaches when it has freedom to investigate.
Don't micromanage: Instead of prescribing exact steps, describe the outcome you want. Claude often finds better solutions than you would have specified.
Connect to your data: Enable MCP integrations (Slack, Google Drive, Notion) so Claude can pull context directly. Zero context switching—Claude reads the source material, investigates, and produces the solution.
Beyond the basics, the Claude Code team uses specific prompting techniques that work across domains:
Challenge Claude to verify your work: ask it to prove a change works—run the tests, show the output—rather than accepting "done" at face value.
Escape mediocre solutions: After a mediocre result, say: "Knowing everything you know now, scrap this and create the elegant solution." This prompt leverages Claude's accumulated context to find better approaches it wouldn't have seen initially.
Reduce ambiguity: Write detailed briefs before handing work off. The more specific you are about constraints, audience, and success criteria, the better the output. Vague requests produce vague results.
The Claude Code team has refined their environment for optimal Claude usage. These principles apply whether you're in a terminal or browser:
Status visibility: Use /statusline to always show context usage. Know at a glance how much context you've consumed—this helps you decide when to /clear or start fresh.
Visual organization: give each workstream a recognizable home—separate terminal tabs or windows, one per session—so you always know which Claude you're talking to.
Voice dictation: Use voice input (hit fn twice on macOS, or use your platform's dictation). You speak 3x faster than you type, and your prompts get way more detailed as a result. More detail = better output. This works in any Claude interface.
Claude Code can become your research and analysis interface—you describe what you want to know, Claude figures out how to get it:
"Personally, I haven't written a line of SQL in 6+ months."
— Boris Cherny
The pattern: If there's a way to access your data (CLI, MCP, API, or even files), Claude can query it for you. Build a skill that knows how to access your data sources, and analytics becomes conversational.
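For example, a data-access skill can document where the data lives and how to reach it, so any session can answer analytics questions conversationally. Everything below—the CLI name, tables, and conventions—is hypothetical:

```markdown
---
name: query-warehouse
description: Answer analytics questions by querying the data warehouse.
---

To answer analytics questions:
1. Run queries through the CLI: wh query "<sql>" (read-only credentials).
2. Key tables: events (user activity), accounts (customers), revenue (daily totals).
3. In every answer, state the date range queried and include the SQL you ran.
```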
"I use Opus 4.5 with thinking for everything. It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end."
— Boris Cherny
The Counterintuitive Insight: A "wrong fast answer" costs more time than a "right slow answer." Opus 4.5 requires less correction and iteration, making total task completion faster despite slower per-response times.
The official best practices emphasize aggressive session management. Claude Code's conversations are persistent and reversible—use this to your advantage.
Course-Correct Early: interrupt Claude the moment it heads in the wrong direction instead of letting a flawed approach run to completion—correcting one step is cheaper than unwinding ten.
Resume Conversations: use claude --continue to pick up the most recent session, or claude --resume to choose from past sessions—each returns with its full context intact.
Use /rename to give sessions descriptive names ("oauth-migration", "debugging-memory-leak") so you can find them later. Treat sessions like branches—different workstreams can have separate, persistent contexts.
When to Clear: If you've corrected Claude more than twice on the same issue, the context is cluttered with failed approaches. Run /clear and start fresh with a more specific prompt that incorporates what you learned.
When to Abandon: Boris notes that 10-20% of his sessions are abandoned when they hit unexpected scenarios. This is normal. Sometimes starting fresh is faster than recovering a confused session.
The Claude Code team recommends a specific configuration for anyone who wants to learn as they work:
"Enable the 'Explanatory' or 'Learning' output style in /config to have Claude explain the why behind its changes."
— Boris Cherny
Enable Learning Mode:
Run /config and set the output style to "Explanatory" or "Learning". Now Claude doesn't just make changes—it teaches you what it's doing and why.
Before (default mode): Claude makes the change and moves on.
After (learning mode): Claude makes the change, explains why it chose this approach, what alternatives it considered, and what you should take away from it.
Generate Visual HTML Presentations:
For onboarding or understanding unfamiliar code, ask Claude to build an HTML walkthrough of the system.
Claude creates an interactive HTML file you can share with teammates or reference later. Perfect for onboarding new hires, documenting architecture decisions, or getting your bearings in an unfamiliar codebase.
ASCII diagrams for quick understanding: Ask Claude to draw ASCII diagrams of new protocols and codebases. Sometimes a quick text diagram is faster than generating HTML—great for understanding data flows, state machines, or API relationships.
Spaced-repetition learning skill: Build a skill where you explain your understanding, Claude asks follow-up questions to fill gaps, and stores the result. This creates active recall practice that deepens learning over time.
Here's how these techniques map to what you've learned:
The official documentation catalogs failure patterns observed across many users. Recognizing these early saves time.
Meta-pattern: Most failures stem from context pollution—either too much irrelevant information, or failed approaches cluttering the conversation. When in doubt, start fresh.
Looking at Boris's workflow and the official best practices, five principles emerge:
1. Context is the Constraint
Every technique traces back to managing the context window. Worktrees, subagents for investigation, /clear between tasks, Plan Mode—all prevent context pollution. Internalize this and the "why" behind every practice becomes clear.
2. Parallelization Over Optimization
Multiple simple sessions outperform one overloaded session. Don't try to make one conversation do everything—distribute work across parallel Claude instances using worktrees.
3. Plan Mode Discipline
Planning isn't training wheels. It's the foundation. Boris uses it for every non-trivial task, not just when he's unsure. The investment in alignment pays off in execution quality.
4. Self-Evolving Documentation
CLAUDE.md isn't static. It grows with every correction. The magic phrase—"Update your CLAUDE.md so you don't make that mistake again"—turns every mistake into institutional memory.
5. Verification Infrastructure
Quality comes from feedback loops, not hope. Give Claude ways to check its work—through MCP tools, hooks, subagents, or browser automation. Verification creates the iteration loop that produces excellent results.
Apply what you've learned from the creator's workflow:
🔧 Set Up Parallel Sessions:
What you're learning: The setup that enables the parallelization Boris calls "the single biggest productivity unlock."
🎯 Try Claude-Reviews-Claude:
What you're learning: Two-pass verification that catches blind spots. This is how the Claude Code team ensures plans are solid before execution.
✍️ Practice Self-Writing Rules:
What you're learning: The feedback loop that makes Claude smarter over time. Each correction becomes a permanent rule.
📋 Create Your Session-End Review Skill:
What you're learning: Session hygiene habits that compound over time. Running this before closing any session captures value that would otherwise be lost.
🔍 Enable Learning Mode:
What you're learning: How to use Claude Code for learning, not just doing—perfect for onboarding and understanding unfamiliar material.
🔍 Analyze Your Current Practice:
What you're learning: Self-assessment against expert practice—identifying your highest-leverage improvement opportunity.