Two engineers build contract review agents. Same model. Same basic architecture. One sells for $2,000/month. The other can't give it away.
What's different?
The answer: context quality.
In Chapter 1, you learned that Digital FTEs are AI agents that work 24/7, delivering consistent results at a fraction of human cost. But here's the uncomfortable truth: those same AI models are available to everyone. Your competitors have access to Claude, GPT, and Gemini too. They can spin up the same frontier model in minutes.
The model isn't your moat. Context engineering is.
If you've used AI for real work, you've experienced the breakdown. Your AI followed instructions brilliantly for the first twenty minutes. Then it started ignoring conventions, repeating mistakes you'd already corrected, and producing wildly different outputs for similar inputs. The AI didn't get dumber. Its context got corrupted.
This chapter teaches you the quality control discipline that separates sellable Digital FTEs from expensive toys.
Anthropic defines context engineering as:
"The art and science of curating what will go into the limited context window from that constantly evolving universe of possible information."
The guiding principle: find the smallest set of high-signal tokens that maximizes the likelihood of your desired outcome.
Your prompt is what you say. Your context is everything the AI already knows when you say it. Context engineering is controlling that "already knows" part.
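To make the distinction concrete, here is a minimal sketch of what a model actually processes on each turn. The message structure, names, and token heuristic are illustrative, not any specific vendor's API:

```python
# Sketch: the user's newest prompt is one small slice of the full payload.
# All names and contents here are hypothetical examples.

system_prompt = "You are a contract review assistant. Follow firm style..."
tool_definitions = ["search_clauses", "extract_parties"]  # tool schemas
history = [
    {"role": "user", "content": "Review section 4.2 for indemnity risk."},
    {"role": "assistant", "content": "Section 4.2 caps liability at..."},
    # ...in a real session, potentially hundreds more turns...
]
new_prompt = {"role": "user", "content": "Now check termination clauses."}

context = {
    "system": system_prompt,
    "tools": tool_definitions,
    "messages": history + [new_prompt],
}

def rough_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token."""
    return len(text) // 4

prompt_tokens = rough_tokens(new_prompt["content"])
total_tokens = rough_tokens(system_prompt) + sum(
    rough_tokens(m["content"]) for m in context["messages"]
)
print(prompt_tokens, total_tokens)
```

Everything in `context` except `new_prompt` is the "already knows" part, and it dwarfs the prompt as the session grows.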
"Prompt engineering" was the 2023 discipline. It has a ceiling.
In a long working session, your prompt may be a fraction of a percent of what the model processes. The rest is context. If you're optimizing prompts while ignoring context, you're polishing the doorknob while the house is on fire.
This matters for your Digital FTEs. A legal assistant Digital FTE with perfect prompts but corrupted context will hallucinate case citations. A sales Digital FTE with perfect prompts but bloated context will forget customer preferences mid-conversation. The context is what makes the difference between a $50/month chatbot and a $5,000/month professional assistant.
Not all context degradation is equal. Recognizing the pattern helps you respond effectively.
You renamed something, changed a decision, or updated terminology. But 40 messages ago, you discussed the old version extensively. That discussion is still in context. Claude might reference the outdated information, creating confusion or errors.
Symptom: Claude uses terminology, patterns, or references that were correct earlier but aren't anymore.
You spent 20 messages on a tangent. Now you're working on something different. That tangent is still consuming attention budget—attention that could be allocated to your current constraints.
Symptom: Claude's responses feel less focused, miss details, or include tangential considerations.
You're working with two similar things—maybe two services, two documents, or two processes. They have similar names or overlapping terminology. Claude starts conflating them—using the wrong one in the wrong context.
Symptom: Claude mixes up similar-sounding concepts, uses wrong terminology, or applies patterns from one domain to another.
Early in the session, you said one thing. Later, you said something different. Both instructions are in context. Claude has to reconcile them and might choose wrong.
Symptom: Claude's decisions seem inconsistent, or it asks clarifying questions you thought you'd already answered.
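The stale-reference pattern in particular is mechanical enough to illustrate in code. A toy sketch, assuming you track renames explicitly (the rename map and messages are hypothetical):

```python
# Toy sketch: flag messages that still use terminology you've since renamed.
# All names and messages here are made-up examples.

renames = {"BillingService": "PaymentService"}  # old term -> new term

history = [
    "Let's add retry logic to BillingService.",
    "Renamed: BillingService is now PaymentService.",
    "PaymentService should log failures.",
    "Does BillingService handle refunds?",  # stale reference after the rename
]

def find_stale(messages, renames):
    """Return (index, old_term) pairs where an old term appears
    in a message after its rename was announced."""
    stale = []
    announced = set()
    for i, msg in enumerate(messages):
        for old, new in renames.items():
            if new in msg:
                announced.add(old)
        for old in announced:
            if old in msg and renames[old] not in msg:
                stale.append((i, old))
    return stale

print(find_stale(history, renames))  # -> [(3, 'BillingService')]
```

A model reconciling this history faces the same problem without the benefit of an explicit rename map, which is why the old discussion keeps leaking into new answers.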
Claude Code handles context automatically through a feature called autocompact. When your context window fills up, Claude Code summarizes the conversation, keeps key decisions, and forgets noise—without you doing anything.
Most of the time, this works well. Lesson 6 teaches when you need to manually intervene with /compact or /clear for situations where automatic management isn't enough.
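The shape of the idea behind compaction can be sketched in a few lines. This is an illustration of the general pattern, not Claude Code's actual algorithm; the summary step is stubbed where a model would normally write it:

```python
# Sketch: when history grows past a token budget, fold older turns into
# a summary and keep recent turns verbatim. Illustrative only.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude ~4 chars/token heuristic

def compact(messages, budget_tokens, keep_recent=4):
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget_tokens or len(messages) <= keep_recent:
        return messages  # still under budget: nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # In a real system, a model writes this summary; here we stub it.
    summary = f"[Summary of {len(old)} earlier messages: decisions kept, noise dropped]"
    return [summary] + recent

history = [f"message {i}: " + "x" * 400 for i in range(20)]
compacted = compact(history, budget_tokens=1000)
print(len(history), "->", len(compacted))  # 20 -> 5 (summary + 4 recent)
```

The interesting engineering questions live in the stubbed line: what counts as a key decision, and what counts as noise.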
Objective: See what's consuming your context window right now.
In Claude Code, run the `/context` slash command. You'll see a breakdown of what's consuming tokens before you've done any work: the system prompt, tool definitions, memory files, and your conversation so far, along with how much free space remains.
What to observe: Much of your context is consumed before you type anything. That's baseline cost. Context engineering is managing these numbers so you have room for the work that matters.
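The arithmetic behind that observation is simple. A sketch of tallying a baseline budget, using made-up component sizes rather than real Claude Code numbers:

```python
# Sketch: baseline context cost before any work happens.
# The window size and component sizes are illustrative, not measured values.

CONTEXT_WINDOW = 200_000  # tokens; typical order of magnitude for large windows

baseline = {
    "system prompt": 3_000,
    "tool definitions": 12_000,
    "memory files": 5_000,
}

used = sum(baseline.values())
free = CONTEXT_WINDOW - used
print(f"baseline: {used} tokens ({used / CONTEXT_WINDOW:.0%} used), free: {free}")
```

Even modest baselines compound: every tool you register and every memory file you load is paid for on every single turn.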
Think about your current or most recent working session with Claude. Ask yourself: Did you rename or change something that's still discussed in earlier messages? Did you spend a long stretch on a tangent unrelated to your current task? Are you working with two similar, easily confused things? Did a later instruction contradict an earlier one?
If you identified any of these, you've diagnosed context rot. Later lessons teach how to treat each type.
Prompt 1: Context Inventory
What you're learning: Before you can engineer context, you need to see what's actually there. This prompt develops awareness of context state.
Prompt 2: Rot Diagnosis
What you're learning: Diagnosis comes before treatment. This prompt helps you identify which rot type (if any) is affecting your current session, so you can apply the right fix.
Safety note: When running context diagnostics, you're examining the session state, not changing it. This is observational—safe to run at any time.