"The knowledge that makes a domain agent genuinely useful is the knowledge the expert cannot easily write down. The Knowledge Extraction Method is the structured process for getting it out of their head and into a SKILL.md that works."
Chapter 26 established the complete architecture of a Cowork plugin: the three-component model, the Agent Skills Pattern, the context hierarchy, the governance layer, and the ownership model. It left one question deliberately unanswered: how do you actually write a production-quality SKILL.md? The architecture tells you what goes in each section. It does not tell you how to extract the domain expertise that belongs there. This chapter answers that question with a structured methodology.
The answer has two modes. Method A extracts knowledge from experts' heads through a five-question interview framework designed to surface the tacit professional knowledge that makes the difference between a generic agent and a genuinely useful one. Method B extracts knowledge from institutional documents through a three-pass framework (explicit rule extraction, contradiction mapping, and gap identification) that converts policy manuals, handbooks, and standard operating procedures into SKILL.md instructions while surfacing the problems that naive extraction misses. Most professional domains require both methods, and the reconciliation principle determines which takes precedence when expert judgement and documented standards conflict.
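The output of the three document-extraction passes can be pictured as a small set of containers, one per pass. This is only an illustrative sketch: the class and field names below are assumptions, not anything the chapter prescribes; only the three-pass structure (rules, contradictions, gaps) comes from the text.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractedRule:
    """One explicit rule lifted in pass 1."""
    text: str    # the instruction as it might appear in SKILL.md
    source: str  # document and section it was lifted from

@dataclass
class Contradiction:
    """A pair of rules that conflict, found in pass 2."""
    rule_a: ExtractedRule
    rule_b: ExtractedRule
    note: str  # why the two rules conflict

@dataclass
class ExtractionReport:
    """Combined result of the three passes over one document set."""
    rules: list[ExtractedRule] = field(default_factory=list)           # pass 1
    contradictions: list[Contradiction] = field(default_factory=list)  # pass 2
    gaps: list[str] = field(default_factory=list)                      # pass 3

# Hypothetical usage: one rule extracted, one gap flagged for the expert interview.
report = ExtractionReport(
    rules=[ExtractedRule("Escalate refunds over the approval limit", "Handbook, sec. 4.2")],
    gaps=["Handbook is silent on partial refunds"],
)
```

Keeping the contradictions and gaps as first-class outputs, rather than discarding them, is the point of passes 2 and 3: they are the items the reconciliation principle and the expert interview have to resolve.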
But extraction alone is not enough. A SKILL.md that encodes the expert's knowledge but has never been tested against the range of real-world queries has unknown coverage gaps. The validation stage (building scenario sets; scoring outputs on accuracy, calibration, and boundary compliance; interpreting failure patterns; and running the shadow mode protocol) is what converts a plausible first draft into a production-ready file. This chapter teaches both halves: how to get the knowledge out, and how to confirm it works.
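The validation arithmetic can be sketched in a few lines. Everything named here is illustrative: the scenario fields, the equal weighting of the components, and the helper names are assumptions, not the chapter's prescribed implementation; only the three scoring components (accuracy, calibration, boundary compliance) and the 95% threshold come from the text.

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    """Scores for one validation scenario, each in [0, 1].

    The three components come from the validation stage described
    above; weighting them equally is an illustrative assumption.
    """
    accuracy: float             # was the answer factually correct?
    calibration: float          # did the agent hedge where it should?
    boundary_compliance: float  # did it stay inside its remit?

    def score(self) -> float:
        return (self.accuracy + self.calibration + self.boundary_compliance) / 3

def passes_validation(results: list[ScenarioResult], threshold: float = 0.95) -> bool:
    """A draft is production-ready only if the mean scenario score
    clears the threshold (95% per the chapter)."""
    mean = sum(r.score() for r in results) / len(results)
    return mean >= threshold

# Hypothetical run: two near-perfect scenarios and one boundary violation.
results = [
    ScenarioResult(1.0, 1.0, 1.0),
    ScenarioResult(1.0, 0.9, 1.0),
    ScenarioResult(1.0, 1.0, 0.0),  # answered a query outside its remit
]
print(passes_validation(results))  # → False: one hard failure drags the mean below 0.95
```

The point of the example is that a single boundary violation sinks an otherwise strong draft, which is why the failure-pattern interpretation in the validation loop matters as much as the aggregate score.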
By the end of this chapter, you will have worked through the following lessons:
| Lesson | Title | Duration | What You'll Walk Away With |
|---|---|---|---|
| 1 | The Problem That No Platform Solves | 20 min | Understanding of tacit vs explicit knowledge, the articulation gap, and why structured extraction is necessary |
| 2 | The Five Questions: Expert Interview Framework | 30 min | The five interview questions, what each one surfaces, and how they map to SKILL.md sections |
| 3 | Conducting the Expert Interview | 20 min | The briefing protocol, note-taking approach, and north star summary that make an interview produce usable material |
| 4 | The Document Extraction Framework | 25 min | The three-pass framework for extracting SKILL.md instructions from institutional documents |
| 5 | Choosing and Combining Methods | 15 min | The domain-method mapping and reconciliation principle for multi-method extraction |
| 6 | From Extraction to SKILL.md | 25 min | How to translate extraction outputs into Persona, Questions, and Principles sections |
| 7 | Building the Validation Scenario Set | 25 min | The four scenario categories, three scoring components, and 95% threshold |
| 8 | The Validation Loop: From Draft to Production | 25 min | Failure pattern interpretation, targeted rewriting, shadow mode, and graduated autonomy |
| 9 | Hands-On Exercise: First Extraction and SKILL.md Draft | 150 min | A complete extraction-to-validation cycle for a real professional domain |
| 10 | Chapter Summary | 15 min | Synthesis of the full methodology, ready for the domain chapters |
| Quiz | Chapter Quiz | 50 min | 50 questions covering all ten lessons |
By the end of this chapter, you should be able to answer these five questions:
When you finish this chapter, your perspective shifts:
Start with Lesson 1: The Problem That No Platform Solves.