Context has a budget, and output quality degrades as the session window fills with messages, tool definitions, and CLAUDE.md. So the question becomes: how much of your CLAUDE.md is actually doing useful work?
The uncomfortable answer, backed by research across enterprise deployments:
"Across enterprise workloads, roughly 30% to 60% of tokens sent to models add no value." — Neal Patel
That means between one-third and two-thirds of your carefully crafted CLAUDE.md might be noise—content that consumes attention budget without improving output quality. Worse, that noise competes with signal for the limited attention available.
This lesson teaches you to tell the difference, and to engineer a CLAUDE.md that's lean, effective, and measurably better than a bloated one.
Before diving into what counts as signal and noise, you need to understand a hard constraint most people don't know exists.
Research suggests that frontier LLMs can reliably follow approximately 150-200 distinct instructions. Beyond that threshold, compliance drops. The model starts ignoring rules, conflating similar instructions, or applying them inconsistently.
Here's the problem: Claude Code's system prompt already contains roughly 50 instructions. That's the identity, safety rules, and base capabilities that Anthropic bakes in. You don't see them, but they're consuming instruction budget.
That leaves you roughly 100-150 instructions for your CLAUDE.md before you hit diminishing returns.
Count the rules in your current CLAUDE.md. If you have 300 lines with 2-3 instructions per logical section, you might have 80-100 distinct instructions. Add the 50 from the system prompt, and you're at 130-150. You're already at the ceiling.
This is why trimming noise matters. Every instruction that doesn't add value is stealing budget from instructions that do.
For each line, section, or instruction in your CLAUDE.md, ask these four questions: Would Claude ask about this, or get it wrong, without being told? Is the information already present in your workspace? Does it change frequently? Is it a standard convention Claude already knows?

Start with the first question. If the answer is yes, meaning Claude would be uncertain, would ask for clarification, or would make the wrong assumption, then it's SIGNAL.

If the answer is no, meaning Claude would proceed correctly without being told, then it's potentially NOISE.
Examples across domains:
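As a hypothetical illustration of the first question (the rules below are invented, not from any real project):

```markdown
- SIGNAL: "Deploys go through the staging cluster first; never deploy straight to prod."
- NOISE: "Write clear, maintainable code."
```

Claude could never guess the first rule; it already follows the second without being told.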
If the information is already present in your workspace—in existing documents, configuration files, templates, or established patterns—then including it in CLAUDE.md is redundant. Claude will read those files anyway.
Examples across domains:
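For instance (the file names below are hypothetical):

```markdown
- NOISE: restating the scripts already listed in package.json
- NOISE: summarizing settings Claude can read from .eslintrc
- SIGNAL: "Ignore config/staging.yml; it is deprecated but not yet deleted."
```

The last line is signal precisely because reading the file would lead Claude to the wrong conclusion.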
Information that changes often becomes stale in CLAUDE.md. Stale information is worse than no information—it creates context poisoning where Claude follows outdated rules.
Examples across domains:
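For instance (invented examples):

```markdown
- NOISE: "Current sprint: 42. Team lead: Dana."   (stale within weeks)
- SIGNAL: "Release versions live in VERSION; never hardcode them."
```

The second line stays true over time because it points at the source of truth instead of copying it.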
Claude is trained on millions of documents across every domain. It knows standard conventions for most professional fields. Restating defaults wastes budget.
Examples across domains:
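A hypothetical pair showing the difference:

```markdown
- NOISE: "Follow PEP 8 in Python files."                 (Claude's default already)
- SIGNAL: "We use 100-character lines, not PEP 8's 79."  (differs from the default)
```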
After applying the framework, these categories typically survive as signal:
Commands or workflows Claude can't guess:
Style rules that DIFFER from defaults:
Review and approval requirements:
Naming and organization conventions:
Decisions specific to YOUR project or organization:
Non-obvious behaviors or gotchas:
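Put together, a surviving signal section might look like this sketch (every rule below is hypothetical):

```markdown
## Commands
- Build with `make bundle`; plain `make` is broken in this repo

## Review
- Schema changes require sign-off from the data-platform team

## Naming
- Branch names: `<ticket-id>-short-description`
```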
These categories typically fail the audit:
Things Claude can infer from existing materials:
Standard professional conventions:
Information that changes frequently:
Detailed reference documentation:
File-by-file descriptions:
A lean CLAUDE.md is necessary but not sufficient. Research on LLM attention patterns reveals a U-shaped curve: models pay significantly more attention to the beginning and end of their context window, while middle content receives approximately 30% less recall. This isn't a bug—it's how attention mechanisms work.
Zone 1 (Top Priority):
Zone 2 (Reference Material):
Zone 3 (Workflow Instructions):
If you'd be upset when the AI ignores it, don't put it in the middle.
Your most critical constraints belong in Zone 1 (top). Your workflow triggers belong in Zone 3 (bottom). Everything else—the reference material, the nice-to-haves, the detailed documentation—goes in Zone 2 or moves to external files entirely.
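A skeleton honoring the three zones might look like this (all contents hypothetical):

```markdown
# CLAUDE.md

<!-- Zone 1 (top, high attention): hard constraints -->
- NEVER commit directly to main
- All changes must pass `make check` before review

<!-- Zone 2 (middle, low attention): reference material -->
Architecture overview: docs/architecture.md

<!-- Zone 3 (bottom, high attention): workflow triggers -->
When a task is done: run tests, update CHANGELOG.md, open a PR.
```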
This is why progressive disclosure matters even more than you might think: not only does it keep your CLAUDE.md lean, it keeps your high-attention zones reserved for high-priority content.
The key to a lean CLAUDE.md is progressive disclosure: don't include detailed information inline. Reference external files that Claude reads on demand.
Instead of:
Use:
Then create docs/review-process.md with the full content.
Your CLAUDE.md then becomes:
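For example, the inline section collapses to a single pointer (section name hypothetical; docs/review-process.md as above):

```markdown
## Code review
Follow the checklist in docs/review-process.md when reviewing or preparing a PR.
```

Claude only pays for the full review process when a task actually involves review.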
Objective: Reduce your CLAUDE.md to under 60 lines while maintaining or improving effectiveness.
Time: 60 minutes
Choose your domain:
What you'll need:
Step 1: Export and Measure Current State
Record your starting line count.
Count distinct instructions:
Record: explicit instructions + ~50 from the system prompt = total instruction count. How close are you to the 150-200 ceiling?
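If you want a rough automated count, a small heuristic script can help. This sketch assumes one instruction per bullet or numbered item, which you should adjust to match your own file's style:

```python
import re

def count_instructions(text: str) -> int:
    """Rough heuristic: count each bullet or numbered item as one
    instruction. Tune the pattern to match your CLAUDE.md's conventions."""
    pattern = re.compile(r"^\s*(?:[-*+]|\d+\.)\s+\S", re.MULTILINE)
    return len(pattern.findall(text))

sample = """\
- Run `make lint` before committing
- Never push directly to main
1. Open a PR for every change
"""

explicit = count_instructions(sample)
total = explicit + 50  # add the ~50 instructions baked into the system prompt
print(explicit, total)  # 3 53
```

This undercounts prose-style rules and overcounts decorative bullets, so treat the number as a ballpark, not a measurement.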
Step 2: Apply the 4-Question Framework
Create an audit table. For each major section of your CLAUDE.md:
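One possible shape for the audit table (the section names are invented):

```markdown
| Section          | Claude would ask? | In workspace? | Changes often? | Known default? | Verdict |
|------------------|-------------------|---------------|----------------|----------------|---------|
| Build commands   | yes               | no            | no             | no             | SIGNAL  |
| Git etiquette    | no                | no            | no             | yes            | NOISE   |
| File-by-file map | no                | yes           | yes            | no             | NOISE   |
```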
Step 3: Delete or Move Noise
For sections marked NOISE:
For sections marked PARTIAL:
Step 4: Tighten Signal
For sections marked SIGNAL, make them terser:
Before:
After:
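A hypothetical before/after pair:

```markdown
Before: "Please make sure that you always run the complete test suite before you consider any piece of work finished."

After: "Run the full test suite before finishing any task."
```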
Step 5: Add Progressive Disclosure
Create reference files for detailed content:
Move detailed content to appropriate files and update CLAUDE.md with references:
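The updated CLAUDE.md might point to the new files like this (docs/review-process.md from earlier; the other paths are hypothetical):

```markdown
## Reference files (read on demand)
- Review process: docs/review-process.md
- Detailed style rules: docs/style-guide.md
- Deployment steps: docs/deploy.md
```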
Step 6: Measure New State
Record your new line count (target: under 60).
Step 7: Comparison Test
Run your comparison task with the old CLAUDE.md (restore from backup or git):
Run the same task with the new CLAUDE.md:
Compare:
Deliverable: A CLAUDE.md under 60 lines that performs equal to or better than your original.
Most practitioners find:
Mistake 1: Keeping "helpful" noise
"But this context helps Claude understand the project!"
If Claude can learn it by reading files, you're paying twice: once for the CLAUDE.md tokens, again for the file read. And the CLAUDE.md version might be stale.
Mistake 2: Insufficient terseness
Every word costs tokens. "Please always ensure that you thoroughly review all documents" becomes "Review all documents." Same meaning, 1/3 the tokens.
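Word count is only a rough proxy for token count, but it makes the savings concrete; a quick sanity check of the example above:

```python
def word_count(s: str) -> int:
    """Words as a rough stand-in for tokens."""
    return len(s.split())

verbose = "Please always ensure that you thoroughly review all documents"
terse = "Review all documents"

# 9 words vs 3 words: the terse form is one third the length.
print(word_count(verbose), word_count(terse))  # 9 3
```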
Mistake 3: Not testing the result
An audit without comparison testing is guesswork. You need empirical evidence that the new CLAUDE.md performs at least as well. Sometimes removing what you thought was signal reveals it wasn't.
Mistake 4: Forgetting position optimization
A lean CLAUDE.md still needs position engineering. Your 52 lines should have critical rules at the top and workflow instructions at the bottom—not signal buried in the middle.
What you're learning: How to use AI as an audit partner. Claude can apply the 4-question framework to its own context—it knows what it would ask about, what it can infer, and what conventions it already follows. This prompt turns the audit from solo work into collaboration.
What you're learning: The mechanics of progressive disclosure. Moving content to external files is straightforward, but knowing WHEN Claude will read those files matters. This prompt helps you design a reference structure where files get read at the right moments.
What you're learning: Whether you've gone too far. A CLAUDE.md can be too lean—stripped of signal Claude actually needs. This prompt stress-tests your audit by checking whether essential information survived. If Claude can't answer these questions, you need to restore some content.