You've been in your role for three years. You know why things are done a certain way—not because it's written down, but because you were there when the decisions got made.
Maybe you're a lawyer who knows which judges prefer concise briefs versus detailed ones, which opposing counsel will negotiate in good faith, and which contract clauses your firm has learned to avoid after a costly dispute five years ago. Maybe you're a marketing director who knows that this particular client hates the word "synergy," that their CEO responds better to data than stories, and that the Q4 campaign failed not because of the creative but because of timing with their product launch. Maybe you're a research scientist who knows which methodologies your reviewers trust, which citation styles signal credibility in your field, and which collaborators actually respond to emails.
None of this is documented. It lives in your head, in email threads nobody will ever search, in the institutional memory of colleagues who were there when the decisions got made.
Now you're working with AI. It can read your documents. It can follow your instructions. But it doesn't know why. It doesn't carry the weight of decisions that shaped your practice. It treats every contract clause, every client, every methodology as equally neutral—without the context that makes your expertise valuable.
This is the Two-Way Problem—and solving it is the difference between an AI that merely follows your instructions and an AI that actually understands your work.
Greg Foster, writing about the real bottlenecks in AI-assisted work, identified what he calls the Two-Way Problem. It's not about prompts or context windows. It's about knowledge transfer in both directions.
Both directions matter. If you can't get your knowledge into the AI, it makes decisions that violate unwritten rules. If you can't extract understanding from what the AI produces, you're using deliverables you don't fully comprehend.
Most professionals focus only on the first direction. They spend hours crafting instructions and system prompts. But the second direction—actually understanding what the AI created—gets neglected. The result is work products where the AI understands the reasoning better than the humans responsible for defending or maintaining them.
Tacit knowledge is what experienced professionals carry that never makes it into documentation. It's the unwritten rules—the stuff you'd tell a new colleague over coffee but would never think to write down.
Examples of tacit knowledge across domains:
What tacit knowledge is NOT:
The distinction matters because AI systems can read documentation. What they can't read is the knowledge that never got documented—the context that makes documentation make sense.
A Legal Example: Your instructions say "Use standard indemnification language." That's explicit knowledge. But the tacit knowledge is: "We use standard language except for this client, who had a $2M claim two years ago, so we always add carve-outs for gross negligence—but only on service agreements, not licensing deals." Without that context, AI might use standard language where it shouldn't.
A Marketing Example: Your brand guide says "Use conversational tone." That's explicit knowledge. But the tacit knowledge is: "Conversational means different things for different audiences—our B2B clients want professional-conversational, while the consumer brand can be casual. And the CEO hates exclamation points." Without that context, AI might produce copy that's technically on-brand but wrong for the audience.
A Software Example: Your CLAUDE.md says "Use async/await for database calls." That's explicit knowledge. But the tacit knowledge is: "We switched to async because of connection pool issues during traffic spikes, but only on the product catalog service. The user service still uses sync calls because it's read-heavy and the added complexity wasn't worth it." Without that context, AI might refactor code that's actually fine.
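One lightweight way to carry this kind of "why" forward is to record it next to the decision it explains. A minimal Python sketch, with hypothetical functions standing in for the catalog and user services described above:

```python
import asyncio

async def fetch_catalog_item(item_id: int) -> dict:
    # Async on purpose: the catalog service exhausted its connection pool
    # during traffic spikes. asyncio.sleep stands in for an async DB call.
    await asyncio.sleep(0)
    return {"id": item_id, "source": "catalog"}

def fetch_user(user_id: int) -> dict:
    # Deliberately sync: the user service is read-heavy, and the team judged
    # the added async complexity not worth it here. Don't "modernize" this.
    return {"id": user_id, "source": "users"}
```

A reviewer, human or AI, who reads these comments inherits the history instead of re-deriving it, and is much less likely to "fix" a deliberate choice.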
The first direction of the Two-Way Problem: how do you transfer what's in your head to the AI?
Most professional documentation is written for humans. It's full of context that assumes shared experience, references that require interpretation, and explanations that build on knowledge the reader already has.
Documents for AI consumption need to be different. Let's look at examples across several domains:
Legal Context Example:
For Humans:
For AI:
Marketing Context Example:
For Humans:
For AI:
Research Context Example:
For Humans:
For AI:
Software Context Example:
For Humans:
For AI:
The AI versions across all domains share common characteristics: they're explicit about constraints, include the "why" behind decisions, and call out what NOT to do. They're not trying to be comprehensive—they're trying to transfer the tacit knowledge that shapes decisions.
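As a concrete sketch of what such an AI-facing context file can look like (the client, amounts, and dates here are invented for illustration):

```markdown
## Client context: Acme Corp (hypothetical example)

- ALWAYS add gross-negligence carve-outs to indemnification clauses on
  service agreements. Why: a $2M claim in 2023.
- NEVER apply the carve-out to licensing deals; standard language is fine there.
- Tone: professional-conversational. No exclamation points (CEO preference).
```

Note the pattern: each line pairs a constraint with the reason behind it, and prohibitions are as explicit as requirements.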
Rules are ambiguous. Examples are concrete.
Rule-based (weak):
AI already knows these generic principles. They don't help.
Example-based (strong):
Here's how this works across different domains:
Legal Writing Example:
Marketing Copy Example:
Research Writing Example:
Software Example:
The example-based versions give AI a concrete reference. When it encounters similar situations, it can pattern-match against the good example rather than interpreting vague rules.
Memory systems capture preferences and knowledge as they emerge during conversations, then persist them for future sessions.
The OpenAI Memory Lifecycle:
This lifecycle means knowledge accumulates over time. You don't have to pre-specify everything upfront. The system learns your preferences as you work.
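The lifecycle can be sketched in a few lines of Python. This is a toy illustration of the capture-and-persist pattern, not any vendor's actual API; the file name and class are invented:

```python
import json
from pathlib import Path

class MemoryStore:
    """Toy sketch of the capture -> persist -> load memory lifecycle."""

    def __init__(self, path: str = "memories.json"):
        self.path = Path(path)

    def load(self) -> list[str]:
        # Load persisted memories at the start of a session.
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def capture(self, memory: str) -> None:
        # Persist a new preference so future sessions inherit it.
        memories = self.load()
        if memory not in memories:  # avoid duplicate entries
            memories.append(memory)
            self.path.write_text(json.dumps(memories, indent=2))

store = MemoryStore("memories.json")
store.capture("Client Acme prefers bullet points over paragraphs.")
```

Each session starts by calling `load()`, and durable preferences expressed mid-conversation get routed through `capture()`.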
Practical implementation:
Most AI tools don't have built-in memory persistence (yet), but you can implement the pattern manually with a memories file:
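A memories file can be an ordinary markdown list. A hypothetical sketch, with entries invented for illustration:

```markdown
# memories.md

## Writing preferences
- Client Acme: bullet points, never paragraphs, in status reports.
- Never use the word "synergy" with Acme.

## Process preferences
- Service agreements get a gross-negligence carve-out (see: 2023 claim).
  Licensing deals keep standard language.
```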
Then in your instructions:
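For example, an instruction block like this (the wording is a sketch; adapt it to your tool):

```markdown
At the start of each session, read memories.md and apply its preferences.
When I express a durable preference ("always...", "never...", "from now
on..."), append it to memories.md so future sessions inherit it.
```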
Not all knowledge should persist. The key question: Should this affect future sessions?
Global memory shapes how AI works on your projects generally. Session memory shapes what it's working on right now.
The danger of over-globalizing: if you persist too much, your memories become noisy. "We're reviewing the Johnson contract" isn't a preference—it's current context that will be irrelevant tomorrow.
The danger of under-globalizing: if you don't persist enough, you re-explain the same preferences every session. "I already told you this client prefers bullet points" shouldn't happen.
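The scoping decision above can be made mechanical: run each candidate item through the question "should this affect future sessions?" A toy Python sketch, with the helper and examples invented for illustration:

```python
def scope_memories(items: dict[str, bool]) -> dict[str, list[str]]:
    # items maps each knowledge item to the answer to the key question:
    # "should this affect future sessions?"
    scoped = {"global": [], "session": []}
    for item, persists in items.items():
        scoped["global" if persists else "session"].append(item)
    return scoped

scoped = scope_memories({
    "Client Acme prefers bullet points": True,      # durable preference
    "We're reviewing the Johnson contract": False,  # today's context only
})
```

Only the `"global"` bucket belongs in your memories file; the `"session"` bucket stays in the current conversation and expires with it.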
The second direction of the Two-Way Problem: how do you extract understanding from what the AI generates?
This direction gets less attention, but it's equally important. When AI produces a complex deliverable—a contract, a campaign strategy, a research analysis, or code—you need to understand it well enough to defend it, modify it, and explain it to others.
Don't accept deliverables without reasoning.
Weak approach (any domain):
or
AI produces output. You read it. Maybe you understand the reasoning, maybe you don't. You're reverse-engineering intent from the deliverable.
Strong approach (Legal example):
Strong approach (Marketing example):
Strong approach (Software example):
Now you understand the intent before you see the deliverable. The output becomes verification of the explanation, not a puzzle to decode.
Ask for outputs that organize understanding, not just deliverables.
For any complex work product, request structured documentation:
Domain-specific variations:
Legal:
Marketing:
This structure forces AI to articulate the knowledge that would otherwise stay implicit. You're not just getting a deliverable—you're getting a knowledge transfer document.
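One generic way to phrase such a request, sketched as a prompt template (the section names are suggestions, not a standard):

```markdown
Along with the deliverable, produce a short knowledge-transfer document:

1. **Decisions made:** each non-obvious choice and the alternative you rejected.
2. **Why:** the reasoning, constraints, and trade-offs behind each decision.
3. **Assumptions:** anything you assumed that I should verify.
4. **Risks and open questions:** what could break, and what still needs a human call.
```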
Don't try to understand everything at once.
Weak approach: AI generates a complete deliverable. You review a 20-page document or 500 lines of code, trying to hold the whole thing in your head.
Strong approach: Break the work into chunks that build understanding progressively.
Legal example:
Research example:
Each step builds on the previous one. By the time you reach the final deliverable, you've accumulated understanding piece by piece instead of trying to absorb it all at once.
After AI creates something significant, explain it back:
This reveals gaps in your understanding. If you can't explain it, you don't understand it. And if you don't understand it, you shouldn't use it—whether it's a contract clause you'll need to defend, a campaign strategy you'll need to present, or code you'll need to maintain.
Objective: Transform 10 minutes of verbal explanation into effective AI context.
This lab addresses the first direction of the Two-Way Problem: getting what's in your head into a format the AI can use.
Choose Your Domain Context:
This lab works for any professional domain. Select the context closest to your work:
What you'll need:
Protocol:
Step 1: Record the Explanation (10 minutes)
Imagine a competent new colleague is joining tomorrow. They can read documents and understand standard procedures—but they don't know the history, the relationships, or the unwritten rules.
Record yourself explaining your project/client/engagement to them.
For Legal:
For Marketing:
For Research:
For Business/Consulting:
For Software:
Don't script it. Talk naturally, as you would to a real colleague.
Step 2: Transcribe (10 minutes)
Transcribe your recording. You can use:
Step 3: Extract Non-Documented Knowledge (20 minutes)
Read through your transcription and highlight everything that ISN'T in your existing documentation.
Create a document with these sections:
Step 4: Categorize for AI Consumption (15 minutes)
Classify each item:
Some tacit knowledge is for the AI. Some is for humans only. Some belongs in your instructions; some belongs in separate context files.
Step 5: Encode as AI-Consumable Artifacts (30 minutes)
Create the actual artifacts:
Step 6: Test (15 minutes)
Start a fresh AI session and ask it to make a decision that requires the tacit knowledge you just encoded.
Test examples by domain:
Does the AI behave as an informed colleague would?
Expected Finding: You'll discover that verbal explanations contain far more tacit knowledge than you realized. Much of it is genuinely valuable and was at risk of being lost.
Deliverable: A tacit knowledge document capturing what experienced professionals carry in their heads, encoded into AI-consumable formats (instructions, context docs, memories file, or skills).
This lesson addresses the human side of context engineering. Previous lessons taught you about attention budgets, position sensitivity, and signal-to-noise ratios. This lesson teaches you about the content itself—what knowledge to put in context and how to get knowledge back out.
Without tacit knowledge transfer, your Digital FTE is a generic chatbot. With it, your Digital FTE becomes a domain expert worth paying for.
The Two-Way Problem sits at the center of effective AI collaboration:
Without tacit knowledge, your context is shallow—technically correct but missing the wisdom that makes work effective. Without strategies for extraction, you're using deliverables you don't fully understand.
What you're learning: Structured extraction of tacit knowledge through guided questions. The AI becomes an interviewer, helping you surface knowledge you carry but might not think to document. This is the first step in the knowledge-IN direction.
What you're learning: Structuring the knowledge-OUT direction. Instead of passively receiving deliverables, you're requiring the AI to transfer understanding along with output. This prevents the "I'm using work I don't understand" failure mode.
What you're learning: The skill of memory scoping. Not all knowledge should persist—over-globalizing creates noise; under-globalizing causes repetition. This prompt helps you develop intuition for the distinction and apply it to real knowledge items.