Method A is used when the knowledge you need to encode lives primarily in the heads of experienced professionals rather than in documents. It applies to domains where the most important expertise is tacit: the financial analyst's risk calibration, the lead architect's coordination judgement, the compliance officer's instinct for which contract clause deserves more scrutiny than its surface reading suggests.
The method is structured around five questions. Not because five is a magic number, but because these five questions, in this order, reliably surface the three kinds of tacit knowledge that most domain agent SKILL.md files need: the decision-making logic the expert applies, the exceptions and edge cases that standard frameworks miss, and the escalation conditions that separate what the agent should handle autonomously from what it should route to a human.
The questions are designed for a conversation, not a form. They are prompts for a structured interview with the domain expert whose knowledge you are encoding: that expert may be yourself, a colleague, or a subject-matter expert you are engaging specifically for this purpose. A single interview of sixty to ninety minutes, conducted properly with these five questions, produces enough material to write a substantive first-draft SKILL.md. What the interview will not produce, and what no interview can produce, is a complete SKILL.md. The gap between the first draft and the production-ready version is what the Validation Loop in Lesson 8 closes.
Not "what do you do?" and not "what are the best practices for this?" Both of those questions invite the expert to perform their knowledge rather than reveal it. They produce the official version of expertise: the version that appears in job descriptions and training manuals: rather than the operational version that drives actual decisions.
Asking for a specific recent example of work going well does something different. It activates episodic memory rather than semantic memory. The expert does not retrieve a stored description of their expertise; they reconstruct a specific event. And in reconstructing it, they cannot help but include the details that make it specific: what they noticed, what they were uncertain about, what they decided and why, what happened next. These details are the raw material of the SKILL.md.
Follow-up questions: "What did you look for first?" "What told you this was going the right way?" "What would you have done differently if X had been the case instead?"
Credit analyst example: Asked to describe a recent credit assessment that went well, the analyst does not say "I evaluated the financials and checked the ratios." She says: "There was a mid-market manufacturing company applying for a term loan to fund a capacity expansion. The headline numbers were strong: DSCR above 2.0, LTV under 60%. But when I looked at the working capital cycle, I noticed the receivables days had been creeping up over three quarters while revenue was flat. That told me the revenue quality was weakening: they were extending payment terms to keep the topline steady. I flagged it, we restructured the covenant package to include a receivables concentration test, and the deal closed with tighter protections. Six months later, their largest customer went into administration. The covenant saved us."
That single account contains decision-making logic (look at working capital cycle, not just headline ratios), a specific pattern (receivables days increasing while revenue is flat signals revenue quality issues), and a protective action (restructure covenants to include a receivables concentration test). Each of those is a candidate SKILL.md Principle.
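The pattern in that account is concrete enough to sketch as a check. The code below is an illustrative sketch, not a real credit system: the `Quarter` type, the 3% flat-revenue tolerance, and the 91-day quarter are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Quarter:
    revenue: float      # quarterly revenue
    receivables: float  # trade receivables at quarter end

def receivables_days(q: Quarter, days_in_quarter: int = 91) -> float:
    """Days sales outstanding for one quarter."""
    return q.receivables / q.revenue * days_in_quarter

def revenue_quality_flag(quarters: list[Quarter],
                         flat_revenue_tolerance: float = 0.03) -> bool:
    """Flag the analyst's pattern: receivables days rising over
    consecutive quarters while revenue stays roughly flat."""
    days = [receivables_days(q) for q in quarters]
    rising = all(later > earlier for earlier, later in zip(days, days[1:]))
    revs = [q.revenue for q in quarters]
    flat = (max(revs) - min(revs)) / min(revs) <= flat_revenue_tolerance
    return rising and flat

# Three quarters: flat revenue, receivables creeping up.
history = [Quarter(10.0, 5.0), Quarter(10.1, 5.6), Quarter(10.0, 6.3)]
print(revenue_quality_flag(history))  # True
```

The point of the sketch is not the arithmetic but the shape of the Principle: a SKILL.md instruction derived from this interview answer would tell the agent to compute the trend, not just the headline ratio.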
The second question asks the expert to describe a time the work went wrong. Specifically: not because of bad luck, but because of a judgement call that turned out to be mistaken.
This question surfaces the failure modes that the expert has personally encountered and learned from. It is the single most valuable question in the interview because the knowledge it produces is the knowledge that is hardest to find anywhere else. Post-mortems in professional contexts are often sanitised; experts who have made costly mistakes rarely document them in forms that others can access. But in a one-to-one conversation conducted with appropriate professional trust, most experienced professionals will describe at least one instructive failure, and the lesson they drew from it is often the most precise piece of domain knowledge you will extract.
Follow-up questions: "At what point could the mistake have been caught?" "What would have had to be true for you to have made a different call?" "Is there a signal you now look for that you weren't looking for then?"
Credit analyst example: "Early in my career, I approved a facility for a property developer. The balance sheet was strong, the LTV was conservative, and the development had pre-sales. What I missed was that the pre-sales were conditional: the contracts had break clauses tied to planning permission for a second phase. When the second phase was refused, the pre-sales unwound, and the developer's cash position deteriorated faster than the financial model had projected. I now always read the underlying contracts on any pre-sale figure, not just the headline number. And when a revenue line depends on a condition outside the borrower's control, I stress-test the scenario where that condition fails."
The Principles this produces are specific and testable: "When revenue projections depend on pre-sales, verify whether the underlying contracts are conditional. When any revenue line depends on a condition outside the borrower's control, run a stress scenario where that condition fails." These are instructions that prevent a specific, real failure mode, not generic advice about being careful.
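The second of those Principles has a mechanical core that can be sketched. This is an assumed, simplified model (revenue lines tagged as conditional or not, a single debt-service figure); a real stress test would be scenario-driven and multi-period.

```python
def stress_conditional_revenue(revenue_lines, debt_service):
    """Recompute coverage with every revenue line that depends on a
    condition outside the borrower's control set to zero.
    revenue_lines: list of (amount, is_conditional) tuples."""
    base = sum(amount for amount, _ in revenue_lines)
    stressed = sum(amount for amount, conditional in revenue_lines
                   if not conditional)
    return base / debt_service, stressed / debt_service

lines = [(8.0, False),  # contracted, unconditional revenue
         (4.0, True)]   # pre-sales conditional on planning permission
base_dscr, stressed_dscr = stress_conditional_revenue(lines, 5.0)
print(base_dscr, stressed_dscr)  # 2.4 1.6
```

A 2.4x base coverage looks comfortable; the 1.6x stressed figure is what the analyst's lesson says to look at before approving.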
The third question asks what mistakes junior professionals make in this work that senior professionals do not. It is the most efficient path to the gap between described and actual expertise. Every experienced professional can answer it immediately, because it is the knowledge they spend their career transmitting to the people who work for them. And because they are describing someone else's errors rather than their own, the defences that come up in Question 2 are lower.
The answers almost always follow a pattern: the junior professional applies the rule without reading the context, or reads the context without knowing which rules apply to it, or escalates too early because they lack confidence, or escalates too late because they lack humility.
Follow-up questions: "Can you give me a specific example?" "What does the senior professional see that the junior one doesn't?" "How long does it typically take someone to learn this, and why does it take that long?"
Credit analyst example: "The junior analyst flags every net debt increase as a concern. The senior analyst knows that a net debt increase in the context of a capital investment programme with contracted revenue is categorically different from a net debt increase driven by operating losses. The junior analyst treats a covenant breach as binary: breached or not. The senior analyst reads the covenant with the loan documentation in hand and asks whether the breach is technical or substantive, whether the remedy period has been used correctly, and whether the breach pattern suggests deterioration or an isolated event."
Each of those distinctions (context-dependent interpretation of net debt, technical versus substantive covenant breaches) is a SKILL.md Principle. They are the instructions that encode the expertise differential between a junior analyst who applies rules mechanically and a senior analyst who reads context.
If you had to write a one-page guide for this work, something that would help someone make the right call in ninety percent of situations, what would be on it?
This question asks the expert to compress their operational knowledge into transferable instructions. Most experts resist the framing initially ("it's more complicated than a one-pager can capture") and they are correct. But the point of the question is not to produce the finished SKILL.md. It is to identify what the expert believes are the most load-bearing principles in their practice, because those are the instructions that need to appear in every version of the SKILL.md, however much else changes.
Follow-up questions: "What's the first thing on the page?" "What's the thing you'd most want to prevent someone from doing?" "Is there a heuristic you use that isn't in this guide because it's too hard to explain?"
That last follow-up is important. The knowledge that is too hard to explain is often the knowledge most worth encoding, and it requires more interview time to surface.
Credit analyst example: "First: always read the cashflow statement before the balance sheet. The balance sheet tells you what exists; the cashflow statement tells you what is happening. Second: never trust a revenue figure you cannot trace to a contract or a customer. Third: when the management narrative and the numbers tell different stories, trust the numbers. Fourth: if you cannot explain the credit risk in two sentences, you do not understand it well enough to approve it."
These four heuristics are load-bearing Principles. The first governs analytical sequence. The second governs source verification. The third governs conflict resolution between qualitative and quantitative information. The fourth is a self-test for decision readiness. All four translate directly into SKILL.md instructions.
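Rendered into the Agent Skills Pattern, those four heuristics might appear in the Principles section roughly as follows. This is a sketch of one possible layout, not the canonical SKILL.md format.

```markdown
## Principles

1. **Analytical sequence.** Always read the cashflow statement before the
   balance sheet. The balance sheet shows what exists; the cashflow
   statement shows what is happening.
2. **Source verification.** Never rely on a revenue figure that cannot be
   traced to a contract or a customer.
3. **Conflict resolution.** When the management narrative and the numbers
   tell different stories, trust the numbers.
4. **Decision readiness.** If the credit risk cannot be explained in two
   sentences, it is not understood well enough to approve.
```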
What are the situations where you would not trust an automated system to handle this, and why?
This question defines the human-in-the-loop requirements for the SKILL.md. Every domain has conditions under which autonomous agent operation is inappropriate: not because the technology is insufficient, but because the professional judgement required to handle those conditions correctly is genuinely irreplaceable.
The answers typically cluster into three categories.
| Category | What It Means | SKILL.md Output |
| --- | --- | --- |
| Stakes too high | The consequences of a systematic error are unacceptable at any rate | Explicit routing rules with thresholds |
| Context too unusual | Standard rules do not apply and the agent cannot know it does not know | Uncertainty recognition instructions |
| Relationship is the service | The human interaction is part of the professional value | Boundary conditions on delegation |
Follow-up questions: "What is the threshold where you would want a human involved regardless of the system's track record?" "Can you describe a situation where the context was so unusual that no standard procedure applied?"
Credit analyst example: "Any credit decision above £25 million goes to the senior credit committee regardless of how strong the analysis looks: the reputational risk of a single large default is too high to accept any systematic error rate. Any situation where the borrower has a relationship with a board member or senior executive gets routed to an independent reviewer: the conflict of interest makes automated analysis inappropriate. And any credit assessment where I encounter a fact pattern I genuinely have not seen before: a novel industry structure, a regulatory regime I am not familiar with: I flag it explicitly and bring in a specialist rather than applying a framework that may not fit."
These three answers produce three distinct types of SKILL.md escalation conditions: threshold-based routing (£25 million), relationship-based routing (conflict of interest), and uncertainty-based routing (novel fact patterns). All three are essential for a production-quality SKILL.md in this domain.
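The three escalation conditions reduce to a routing sketch. The route names and the default route are assumptions for illustration; only the £25 million threshold comes from the interview.

```python
def route_credit_decision(amount_gbp: float,
                          related_party: bool,
                          novel_fact_pattern: bool) -> str:
    """Sketch of the three escalation conditions from the interview,
    checked in order of severity."""
    if amount_gbp > 25_000_000:
        return "senior_credit_committee"   # threshold-based routing
    if related_party:
        return "independent_reviewer"      # relationship-based routing
    if novel_fact_pattern:
        return "domain_specialist"         # uncertainty-based routing
    return "autonomous_with_audit_trail"   # assumed default route

print(route_credit_decision(30_000_000, False, False))
# senior_credit_committee
```

In a SKILL.md, these conditions typically land in two places: the routing logic as Principles, and the out-of-scope cases as Questions the agent must surface rather than answer.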
The five questions are not random probes. Each one targets specific raw material for specific sections of the Agent Skills Pattern.
| Question | Primary Target | SKILL.md Section |
| --- | --- | --- |
| Q1: Recent success | Decision-making logic, analytical sequence | Principles (operational logic) |
| Q2: Instructive failure | Defensive knowledge, error prevention | Principles (what NOT to do) |
| Q3: Junior vs senior | Expertise differential, contextual judgement | Principles (nuanced distinctions) |
| Q4: One-page guide | Load-bearing heuristics, core operating rules | Principles (non-negotiable rules) |
| Q5: Automation boundaries | Escalation conditions, human-in-the-loop gates | Questions (out of scope) + Principles (routing logic) |
The Persona section draws from all five questions; it captures the professional identity that emerges from the interview as a whole. But the Principles section is where the majority of the extraction material lands, because the Principles are where tacit knowledge becomes explicit instruction.
Use these prompts in Anthropic Cowork or your preferred AI assistant to practise the interview framework.
What you're learning: The five questions work on any domain, including your own. By experiencing them as the interviewee, you develop intuition for what rich answers feel like versus surface-level ones, which is essential preparation for conducting the interview with someone else. The north star summary at the end previews the synthesis technique taught in Lesson 3.
What you're learning: The question sequence is a designed progression, not an arbitrary list. Understanding the design logic helps you adapt the questions to domains where the standard sequence may need adjustment: for example, when an expert is most forthcoming about failures early in the conversation rather than after building trust through a success story.
What you're learning: The five questions are a framework, not a script. Adapting them to a specific domain requires understanding their extraction purpose well enough to reformulate them without losing their effectiveness. This exercise also produces a working interview guide you can use in your own extraction work.
Five interview questions, asked in order, reliably surface the three kinds of tacit knowledge a SKILL.md needs: decision-making logic, exceptions and edge cases, and escalation conditions. Each question targets a different type of memory and maps to a specific SKILL.md section. The questions work because they activate episodic memory (specific cases with contextual detail) rather than semantic memory (general descriptions stripped of context).