This chapter began with three layers: a Cowork plugin is a self-contained directory of components (the format Anthropic designed), knowledge-work plugins use that format to turn a general-purpose agent into a domain specialist (what the official plugins do), and Panaversity's enterprise readiness evaluation model assesses whether the result is production-ready. It ends with a complete deployment architecture. The nine lessons between those two points did not add complexity for its own sake: each one answered a question that the previous lesson raised. The definition raised the question of what the intelligence layer actually looks like. The intelligence layer raised the question of how the plugin infrastructure is configured. The infrastructure raised the question of what happens when the SKILL.md and higher-level policies conflict. That question required the context hierarchy. The context hierarchy pointed to the governance layer. The governance layer required the ownership model to be useful in practice. And the ownership model opened the question of what happens when the expertise encoded in a SKILL.md is generalisable beyond a single organisation.
That chain is the chapter. Understanding it as a chain, rather than as nine separate lessons, is the synthesis this summary exists to provide.
Each lesson answered a specific question. Each answer led directly to the next question.
| Lesson | Question Answered | Key Output |
| --- | --- | --- |
| L01: What a Plugin Actually Is | What precisely is a Cowork plugin? | Plugin package components; enterprise readiness evaluation model |
| L02: The Intelligence Layer | What is the knowledge worker actually responsible for? | PQP Framework: Persona, Questions, Principles |
| L03: The Plugin Infrastructure | What does the rest of the plugin package contain? | plugin.json (manifest); .mcp.json (connectors); commands; agents; settings |
| L04: The Three-Level Context System | Why do SKILL.md instructions sometimes fail? | Platform → organisation → plugin hierarchy; silent override; diagnostic sequence |
| L05: The PQP Framework in Practice | What does a production-quality SKILL.md look like? | Annotated financial research SKILL.md; source integrity; uncertainty calibration |
| L06: The MCP Connector Ecosystem | What enterprise systems can the agent actually access? | Marketplace connectors; custom commissioning process; timeline expectations |
| L07: The Governance Layer | How does trust in a deployed agent accumulate? | Permissions; audit logging; shadow mode (30d/95%); HITL gates |
| L08: The Division of Responsibility | Who is responsible when something goes wrong? | Three-way ownership model; layer independence; SKILL.md maintenance as ongoing discipline |
| L09: The Cowork Plugin Marketplace | What happens when the expertise is generalisable? | Vertical skill packs; connector packages; transferability test |
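The "silent override" behaviour from Lesson 4 can be sketched as a simple resolution function. This is an illustrative model only, not Cowork's actual implementation: the level names, policy keys, and values below are all hypothetical.

```python
# Illustrative model of the three-level context hierarchy from Lesson 4.
# Higher levels (platform > organisation > plugin) silently win on conflict.
# All level names and policy keys here are hypothetical, not a real Cowork API.

LEVELS = ["plugin", "organisation", "platform"]  # lowest to highest precedence


def resolve(policies: dict[str, dict[str, str]]) -> dict[str, str]:
    """Merge per-level policy dicts; higher levels override lower ones."""
    effective: dict[str, str] = {}
    for level in LEVELS:
        effective.update(policies.get(level, {}))
    return effective


contexts = {
    "plugin":       {"tone": "assertive", "data_source": "bloomberg"},
    "organisation": {"tone": "conservative"},   # silently overrides the plugin
    "platform":     {"pii_handling": "redact"},
}

effective = resolve(contexts)
# The plugin author asked for "assertive", but the organisation level wins,
# and nothing in the plugin's SKILL.md reports that it was overridden:
print(effective["tone"])         # conservative
print(effective["data_source"])  # bloomberg (no higher level set it)
```

The diagnostic sequence in Lesson 4 is essentially this function run in reverse: when a SKILL.md instruction "fails," check whether a higher level defines the same policy key.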
Reading the nine lessons as a sequence reveals three insights that no individual lesson states on its own.
The first is that the SKILL.md is not one component among many: it is the component that everything else serves. The manifest and settings configure the environment in which the SKILL.md operates. The connectors supply the data the SKILL.md instructs the agent to use. The governance layer enforces the boundaries the SKILL.md defines. Remove the SKILL.md and you have deployment infrastructure without intelligence. A well-written SKILL.md makes the rest of the architecture useful. A poorly written one makes it unreliable regardless of how correctly the other components are configured.
The second insight is that the knowledge worker's role is authorial, not technical. Writing the SKILL.md requires domain expertise, not programming ability. Reviewing the .mcp.json to verify connector scope requires infrastructure literacy, not systems engineering. Designing the shadow mode rubric requires knowing what accuracy means in the domain, not statistical training. Identifying the HITL gates requires understanding which decisions carry professional accountability, not governance theory. The chapter's architecture was designed with a deliberate non-negotiable: the person who holds the domain expertise should be able to deploy without depending on technical intermediaries for the core intelligence layer.
The third insight is that governance is not the end of the deployment story: it is the beginning of the trust story. Shadow mode, audit trails, and HITL gates do not exist to limit what an agent can do. They exist to produce the evidence that allows a sceptical compliance function, a cautious general counsel, or a regulated industry's oversight body to permit the agent to do more. The 30-day shadow mode period produces the corpus that justifies autonomous operation. The audit log turns a potential compliance incident into a documented, defensible process. Governance is what converts a promising demonstration into a deployable system.
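The shadow-mode gate described above can be expressed as a small readiness check. A minimal sketch: the 30-day period and 95% agreement threshold come from the chapter, but the field names and the agreement-rate calculation are assumptions for illustration.

```python
# Hedged sketch of Lesson 7's shadow-mode gate (30 days, >= 95% agreement).
# Field names and the scoring model are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class ShadowModeLog:
    start: date
    end: date
    agent_outputs: int      # total outputs produced in shadow mode
    human_agreements: int   # outputs the human reviewer agreed with


def ready_for_autonomy(log: ShadowModeLog,
                       min_days: int = 30,
                       min_agreement: float = 0.95) -> bool:
    """True only if both the duration and accuracy thresholds are met."""
    if log.agent_outputs == 0:
        return False
    days = (log.end - log.start).days
    agreement = log.human_agreements / log.agent_outputs
    return days >= min_days and agreement >= min_agreement


log = ShadowModeLog(date(2025, 1, 1), date(2025, 2, 5), 400, 384)
print(ready_for_autonomy(log))  # True: 35 days, 96% agreement
```

The point of the corpus is exactly what the paragraph above describes: the log itself, not the boolean, is the evidence a compliance function reviews.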
Of the eight components in the ownership table, one is owned entirely by the knowledge worker, is written entirely in plain English, determines the agent's identity, scope, and operating logic, and is the component most likely to drift from production reality without disciplined maintenance. That component is the SKILL.md.
The chapter taught the architecture around it. The PQP Framework (Persona, Questions, Principles) gave the structure. The annotated financial research example in Lesson 5 showed what production quality looks like. The ownership model in Lesson 8 established that maintaining it is an ongoing professional responsibility, not a one-time authorship task.
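As a reminder of that structure, a minimal SKILL.md skeleton following the PQP Framework might look like the following. Every specific here (the analyst persona, the questions, the thresholds) is invented for illustration, not taken from the Lesson 5 example.

```markdown
# SKILL.md — Financial Research Analyst (illustrative skeleton)

## Persona
You are a conservative equity research analyst. You never present
estimates as facts, and you cite a source for every figure.

## Questions
- What is the company's revenue trend over the last three years?
- Which risks in the latest 10-K are new since the prior filing?

## Principles
- Flag any figure that cannot be traced to a primary source.
- State uncertainty explicitly when sources disagree.
- Escalate to a human reviewer before publishing any recommendation.
```

The skeleton is the easy part; as the next paragraph argues, filling it with real professional judgement is where the work lies.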
What the chapter did not teach is how to extract the domain expertise that goes into it. Writing a production-quality SKILL.md requires articulating, often for the first time in explicit form, the professional standards, decision-making logic, escalation thresholds, and quality criteria that ordinarily exist as institutional memory and professional judgement. This is the hardest part of the process, not because the SKILL.md is technically complex, but because making tacit expertise explicit is genuinely difficult work. The chapter showed the structure. Chapter 27 teaches the methodology for producing the content.
Before continuing, verify that you can answer these questions with specificity. Generic answers indicate a concept that needs review.
If any of these are uncertain, revisit the relevant lesson before continuing. Chapter 27 assumes the architecture is understood and proceeds directly to the extraction methodology.
Chapter 27 opens the methodology. Where this chapter gave you the complete architecture of a Cowork plugin and established what a production-quality SKILL.md looks like, Chapter 27 gives you the process for producing one. The Knowledge Extraction Method is a structured approach to making tacit expertise explicit: to taking the professional judgement that exists in a domain expert's head and translating it into the Persona, Questions, and Principles that determine what a deployed agent does.
The architecture does not change. The plugin package structure, the context hierarchy, the governance layer, and the ownership model are the permanent infrastructure. Chapter 27 is about the most critical act within that infrastructure: authoring the document that gives the agent its intelligence.
Use these prompts in Anthropic Cowork or your preferred AI assistant to consolidate your understanding of the chapter's architecture.
What you're learning: How to apply the complete chapter architecture to a real deployment scenario. This synthesis exercise forces you to use every element (SKILL.md, connectors, governance, ownership) in sequence for a specific workflow, revealing which parts of the architecture you have understood deeply and which remain abstract.
What you're learning: How the chapter's architecture adapts to context. The plugin package structure, governance layer, and ownership model are consistent across deployments; but their configuration varies significantly based on stakes, regulatory environment, and user profile. Comparing two contrasting cases makes this adaptation concrete rather than theoretical.
What you're learning: The gap between understanding the SKILL.md's architecture and being able to write one is the gap that Chapter 27 addresses. This prompt simulates the extraction process that Chapter 27 will teach systematically: surfacing tacit expertise through structured questioning and translating it into specific, actionable Principles. Starting the process before Chapter 27 makes the methodology more immediately applicable when you encounter it.
Chapter 26 provides the complete architecture for a Cowork plugin: the plugin package structure (SKILL.md, connectors, commands, agents, manifest), three owners (knowledge worker, IT, administrator), three context levels (platform, organisation, plugin), four governance mechanisms (permissions, audit trails, shadow mode, HITL gates), and a marketplace distribution model. The nine lessons form a chain: each lesson answered a question the previous one raised. The SKILL.md is the component that everything else serves, and writing a production-quality one requires Chapter 27's Knowledge Extraction Method.