USMAN’S INSIGHTS
AI ARCHITECT
Muhammad Usman Akbar Entity Profile

Muhammad Usman Akbar is a leading Agentic AI Architect and Software Engineer specializing in the design and deployment of multi-agent autonomous systems. With expertise in industrial-scale digital transformation, he leverages Claude and OpenAI ecosystems to engineer high-velocity digital products. His work is centered on achieving 30x industrial growth through distributed systems architecture, FastAPI microservices, and RAG-driven AI pipelines. Based in Pakistan, he operates as a global technical partner for innovative AI startups and enterprise ventures.

© 2026 Muhammad Usman Akbar. All rights reserved.

The Intelligence Layer: SKILL.md

In Lesson 1, you established three layers of what a Cowork plugin is: a self-contained directory of components (the format), a knowledge-work specialisation that turns a general-purpose agent into a domain expert (what Anthropic's official plugins do with it), and an enterprise readiness evaluation model (Panaversity's framework for assessing production readiness). You learned that plugins arrive from the marketplace as ready-made packages containing skills, connectors, commands, agents, hooks, and a manifest, and that your contribution is the part no one else can write: the SKILL.md that encodes how your organisation actually works. Now it is time to understand what you are actually responsible for. The SKILL.md is the intelligence layer of the plugin. Everything the agent knows about who it is, what it does, and how it decides is yours to write.

The phrase "intelligence layer" is deliberate. The manifest identifies the plugin. The connectors wire it to enterprise systems. The commands and agents provide workflow infrastructure. None of these makes the agent intelligent in any domain-specific sense. Intelligence, the ability to apply domain expertise to real professional situations, comes from the SKILL.md. A compliance agent and a financial research agent might run on identical connector infrastructure, behind identical governance settings, with identical commands. What makes them different, and what makes each of them useful, is the SKILL.md.

This lesson explains the structure of that document. There are three sections, each with a distinct function: Persona, Questions, and Principles. Understanding what each section does (and why the specifics matter) is the prerequisite for everything in this chapter. Lesson 5 will show you a complete, annotated example. This lesson shows you the architecture and the reasoning behind it.

What a SKILL.md Is (and Is Not)

Before covering the three sections, it is worth stating clearly what a SKILL.md file is, because the name generates consistent confusion.

A SKILL.md is a structured Markdown file with YAML frontmatter followed by body content written in English. The frontmatter is a short header block that declares metadata the platform uses to discover and manage the skill. The body of the document tells the agent who it is, what it knows, how to behave in the situations it will encounter, and what it must never do. Writing a SKILL.md requires no programming ability. It requires domain expertise.

The YAML frontmatter follows the Agent Skills specification: an open standard originated by Anthropic and now adopted by Microsoft, OpenAI, Cursor, GitHub, VS Code, Gemini CLI, and over 25 tools across the industry. The required fields are minimal:

| Field | Required | What It Does |
| --- | --- | --- |
| `name` | Yes | Short identifier (lowercase, hyphens only). Must match the parent directory name. |
| `description` | Yes | What the skill does and when to use it. The platform reads this at startup to decide when to activate the skill. |
| `allowed-tools` | No | Pre-approved tools the skill may use (e.g., restricting a skill to read-only operations). |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Environment requirements (intended product, required packages, network access). |
| `metadata` | No | Arbitrary key-value pairs for additional information (author, version, etc.). |

The description field deserves attention. Agents load only the name and description of every available skill at startup: a progressive disclosure model that keeps context lean. When a task matches a skill's description, the agent loads the full body content. A vague description means the agent may not activate the skill when it should. A precise description (one that includes specific keywords and use cases) ensures the agent recognises relevant tasks reliably.
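Because `name` and `description` are what the platform reads first, a pre-publish check on those two fields catches the most common authoring mistakes. The sketch below is a deliberately minimal stand-in, not a real parser (a YAML library would do this properly), and the file content and directory name are hypothetical:

```python
# Minimal sketch: validate the required SKILL.md frontmatter fields.
# The hand-rolled parsing here is illustrative only; use a YAML library
# in practice.
import re

REQUIRED_FIELDS = {"name", "description"}
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]*$")  # lowercase, hyphens only


def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a '---'-delimited frontmatter block."""
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def validate(text: str, parent_dir: str) -> list[str]:
    """Return a list of problems; an empty list means the header passes."""
    fields = parse_frontmatter(text)
    problems = [f"missing required field: {f}"
                for f in sorted(REQUIRED_FIELDS - fields.keys())]
    name = fields.get("name", "")
    if name and not NAME_PATTERN.match(name):
        problems.append("name must be lowercase with hyphens only")
    if name and name != parent_dir:
        problems.append("name must match the parent directory name")
    return problems


skill = (
    "---\n"
    "name: contract-triage\n"
    "description: Flags risk clauses in commercial contracts under English law.\n"
    "---\n"
    "Body content...\n"
)
print(validate(skill, "contract-triage"))  # → []
```

The same check run against a header with an uppercase name, or with no description at all, returns a human-readable problem list instead of an empty one.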

| What people assume SKILL.md is | What SKILL.md actually is |
| --- | --- |
| A configuration file with settings and parameters | Structured Markdown with YAML frontmatter: identity, scope, and operating logic |
| Written by a developer or ML engineer | Written by a domain expert: a lawyer, analyst, architect, or clinician |
| Code that executes when the agent runs | Text that the agent reads and applies to every interaction |
| A technical artefact managed by IT | A professional document managed by the knowledge worker who owns the domain |

This distinction matters because it determines who can build useful agents. A senior compliance officer can write a SKILL.md. A project architect can write a SKILL.md. A clinical pharmacist can write a SKILL.md. None of them can write a Python class, configure an API, or train a model. The SKILL.md is what closes the knowledge transfer gap that Chapter 25 described: it is the pathway through which domain expertise reaches a deployed system.

The Persona–Questions–Principles Framework

The Agent Skills specification defines the SKILL.md format but imposes no restrictions on how you structure the body content. It says: "Write whatever helps agents perform the task effectively." That flexibility is deliberate. Skills range from simple checklists to complex domain workflows, and the standard accommodates all of them.

For enterprise domain agents, however, that open canvas benefits from structure. The agentskills.io standard defines the format; our methodology for structuring the body content is the Persona–Questions–Principles Framework (PQP Framework for short). It has three sections: Persona, Questions, and Principles. Each section performs a specific function, and each section requires a specific kind of thinking to write well. This is not the only way to structure a SKILL.md, but for the enterprise use cases this chapter addresses, it is the approach that produces the most reliable, auditable agents.

| Section | What It Defines | What It Governs |
| --- | --- | --- |
| Persona | Professional identity, authority, tone, relationship to user | The agent's behaviour in unanticipated situations |
| Questions | Scope: what the agent handles and what it redirects | The reliability boundary of the agent's expertise |
| Principles | Operating logic, constraints, escalation thresholds, quality standards | The agent's decision-making in complex or contested situations |

None of these sections is optional. Remove the Persona and the agent has no reliable identity to fall back on when a query does not fit any anticipated pattern. Remove the Questions section and the agent has no boundary: it will attempt queries it cannot handle well. Remove the Principles and the agent has no operating logic for the hard cases, where the right answer is not obvious.
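Nothing in the specification mandates particular headings, so the layout below is one workable way (sketched with placeholder text, not real skill content) to apply the PQP Framework as plain Markdown headings after the frontmatter:

```markdown
---
name: example-skill
description: Placeholder description of what the skill does and when to use it.
---

## Persona

Who the agent is: professional standing, relationship to the user, tone,
and what it will never claim to be.

## Questions

What the agent handles (in scope) and what it redirects (out of scope).

## Principles

Operating constraints, escalation thresholds, data sourcing rules, and
quality standards for this domain.
```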

Persona: Identity as Functional Specification

The Persona section defines who the agent is in professional terms. Not what it can do, but who it is. This is a functional specification, not a marketing exercise. The distinction matters.

A marketing exercise describes the agent in appealing terms. A functional specification describes the agent in terms that govern behaviour. Consider the difference between these two Persona statements:

"A helpful financial research assistant that provides insightful analysis and useful recommendations."

"A senior equity research analyst with fifteen years of experience covering FTSE-listed financial services companies. Analytical, precise, and direct. I work with portfolio managers and investment directors who need data-grounded analysis on short notice. I cite sources, flag uncertainty explicitly, and do not speculate beyond what the data supports."

Both describe a financial research agent. Only the second one governs how the agent will behave when a portfolio manager asks a question the evidence does not clearly answer. The first Persona produces an agent that will try to be helpful, which, in the absence of data, means generating plausible-sounding speculation. The second Persona produces an agent that will say "the data does not support a confident position on this" because that is what a senior analyst with a reputation to protect would say.

This is the central insight about the Persona section: identity governs ambiguous situations more reliably than rules. Rules govern situations that were anticipated when the rules were written. Professional identity governs situations that were not anticipated, because it provides the agent with a stable reference point ("what would a professional of this standing, in this relationship, do here?") that is more robust than any finite list of instructions.

The Persona section answers four questions:

| Question | Why It Matters |
| --- | --- |
| What is the agent's professional standing? | Determines the authority and confidence with which it speaks |
| What is its relationship to the user? | Shapes how it balances deference with expertise |
| What is its characteristic tone? | Determines how it handles disagreement, uncertainty, and complexity |
| What will it never claim to be? | Sets the boundaries of its professional identity |

Consider how these answers differ across domains. A legal contract triage agent describes itself as a legal professional who flags risk clearly and defers to qualified counsel on matters requiring independent legal advice; not an authority, but a rigorous first-pass reviewer. A BIM coordination agent for construction describes itself as a project coordinator who understands all disciplines and escalates when a structural decision falls outside its competence. A clinical pharmacology agent describes itself as a specialist who flags drug interactions against evidence-based thresholds and always defers to the prescribing clinician on patient-specific decisions.

In each case, the professional identity does more work than any individual rule could. When a user asks the legal agent for a definitive answer on a contested clause, the agent's identity ("I flag risk clearly and defer to qualified counsel") determines the response without requiring a rule that says "if user asks for a definitive legal opinion, respond by...". The Persona handles this implicitly, because that is what a legal professional in that relationship would do.
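As one illustration, the legal triage identity described above could be written out as a Persona section along these lines. The wording is a sketch, not a vetted production persona:

```markdown
## Persona

I am a legal professional performing first-pass contract review under
English law. I flag risk clearly and rank it by severity. I work with
in-house counsel who need rigorous triage, not a legal opinion. I am
precise and direct. I am not a substitute for qualified counsel: on any
matter requiring independent legal advice, I say so and defer.
```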

Questions: Scope as a Two-Sided Document

The Questions section defines what the agent is for. Not in broad terms, but in specific ones: which tasks it handles, how it handles them, and (critically) what falls outside its remit.

The last of these is the most commonly underspecified. Domain experts writing their first SKILL.md tend to think of the Questions section as a list of capabilities. It is better understood as a scope document, and a scope document has two sides: in-scope and out-of-scope.

The cost of underspecification in this section is measurable. An agent without a well-defined scope will attempt to answer queries it cannot handle well. This produces confident-sounding outputs in areas where it has no grounded expertise, which is a technical description of hallucination in a professional context. A financial research agent that strays into tax advice because no one specified that tax advice was out of scope will produce tax analysis that sounds authoritative and is functionally unreliable. A contract triage agent that ventures into employment law because no one specified that employment law fell outside its competence will produce employment law analysis with the same problem.

The out-of-scope boundary defines where the agent redirects rather than responds. This is not a limitation: it is a quality guarantee. An agent with a tight, well-defined scope is more trustworthy precisely because it knows where it stops. Users who understand its scope can rely on its outputs within that scope. Users who receive a redirect know they need to look elsewhere. Both outcomes are more useful than confident-sounding output in an area where the agent has no grounded expertise.

Consider what a well-specified Questions section looks like for different domains:

| Domain | Example In-Scope Items | Example Out-of-Scope Items |
| --- | --- | --- |
| Financial research | Equity analysis on FTSE-listed companies; earnings call summaries; sector comparison tables | Portfolio construction recommendations; tax implications; regulatory filings outside the UK |
| Legal contract triage | Risk flagging in commercial contracts under English law; clause pattern analysis; escalation recommendations | Drafting new contract language; advising on employment law; jurisdiction-specific analysis outside England and Wales |
| Clinical pharmacology | Drug interaction checking against approved formulary; dosage verification against weight and renal function; contraindication flagging | Prescribing decisions; patient-specific risk assessment; off-formulary authorisations |
| BIM coordination | Clash detection across structural, MEP, and architectural models; specification compliance checking; RFI preparation | Structural engineering sign-off; cost estimates; planning authority submissions |

In each case, the out-of-scope items are not arbitrary. They are the areas where the agent's grounded expertise ends and where professional liability, clinical risk, or regulatory accountability begins. The Questions section does not just describe what the agent knows: it maps the boundary of where its knowledge is reliable.
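Written into a SKILL.md, the legal contract triage row above might become a Questions section like the sketch below. The items are illustrative and would be tailored to an organisation's actual playbook:

```markdown
## Questions

In scope:
- Risk flagging in commercial contracts governed by English law
- Clause pattern analysis against the organisation's contract playbook
- Escalation recommendations with severity and rationale

Out of scope (redirect to a human, do not attempt):
- Drafting new contract language
- Employment law questions
- Analysis for any jurisdiction outside England and Wales
```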

Principles: Operating Logic for Hard Cases

The Principles section defines how the agent applies its knowledge in practice. This is where operating constraints, escalation thresholds, quality standards, and decision-making logic live.

The critical distinction here is between generic principles and domain-specific principles. Generic principles ("be accurate," "be helpful," "be transparent") are aspirational statements that give the agent no concrete guidance on what accuracy, helpfulness, or transparency means in a specific professional context. Domain-specific principles are actionable: they tell the agent exactly what to look for, what to prioritise, and what constitutes a situation that requires escalation.

Compare these approaches across domains:

Generic (insufficient): "Provide accurate information based on available data."

Domain-specific (functional): "For any earnings estimate, cite the analyst consensus source and flag if the most recent revision is more than 30 days old. Do not extrapolate beyond the data; state explicitly when a projection lacks sufficient data support."

Generic (insufficient): "Flag potential risks in contracts."

Domain-specific (functional): "Flag any clause that modifies the indemnity cap below the contract value, any jurisdiction reference outside England and Wales, any penalty clause with an uncapped liability provision, and any force majeure clause that excludes circumstances beyond a narrowly defined list. For each flag, state the risk and the recommended action."

The difference is not merely stylistic. Generic principles require the agent to determine what "accurate" means in each new situation, producing inconsistent results. Domain-specific principles tell the agent what accuracy looks like in this domain, for this type of output, against these quality standards, producing consistent, reviewable results.

The Principles section also contains escalation thresholds: the operating constraints that define when the agent should stop acting autonomously and refer to a human. These thresholds are domain-specific for the same reason. An escalation threshold for a financial research agent is not the same as an escalation threshold for a clinical pharmacology agent.

| Domain | Illustrative Escalation Thresholds |
| --- | --- |
| Financial research | Any analysis forming the basis of a board-level investment recommendation; any query involving non-public information |
| Legal contract triage | Any clause with potential material liability for the organisation; any dispute resolution mechanism that waives litigation rights |
| Healthcare | Any interaction flagged as a critical drug interaction by the approved formulary; any dosage outside the validated range for the patient's renal function category |
| Architecture/BIM | Any structural coordination issue that a qualified structural engineer has not reviewed; any clash that cannot be resolved without changing the structural grid |

In each case, the escalation threshold is defined by the professional standard for that domain, not by a generic rule about when AI should involve a human. A contract with a complex indemnity structure requires review by a qualified solicitor, not because AI is generally unreliable, but because that specific type of decision carries professional liability that requires human accountability.

The Principles section is also where domain-specific data sourcing rules live. Financial research agents specify which data sources are approved and what happens when a query cannot be answered from approved sources. Legal agents specify which jurisdictions their analysis covers. Clinical agents specify which formulary they check against. These are not generic quality controls: they are the operating constraints that make the agent's outputs auditable and trustworthy.
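Combining the domain-specific rules and escalation thresholds discussed above, a Principles section for the same illustrative contract triage agent might read as follows. Each bullet adapts an example already given in this lesson; none is a vetted production rule:

```markdown
## Principles

- Flag any indemnity cap below the contract value; state the gap and the
  recommended action.
- Flag any jurisdiction or governing-law reference outside England and
  Wales; do not analyse it, redirect it.
- Escalate to qualified counsel any clause with potential material
  liability, and any dispute resolution mechanism that waives litigation
  rights.
- Cite the clause number for every flag; never report a risk without a
  location in the document.
```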

Why Specificity Is the Work

The common thread across all three sections is specificity. A vague Persona produces inconsistent behaviour. A Questions section without out-of-scope boundaries produces overreaching. Generic Principles produce unpredictable outputs.

Writing a production-quality SKILL.md is therefore not a formatting exercise. It is a knowledge extraction exercise. The domain expert writing a SKILL.md must articulate, often for the first time in explicit form, the professional standards, decision-making logic, and escalation thresholds that ordinarily exist as institutional memory and professional judgement. This is difficult work. It is also, as Chapter 27 will show, a learnable process with structured techniques.

The good news is that the difficulty of writing a SKILL.md is the difficulty of articulating domain expertise, not the difficulty of learning to code. The compliance officer who has spent a career developing a feel for which clauses represent genuine risk does not need to learn Python to encode that expertise in a SKILL.md. They need to learn how to make their tacit knowledge explicit. That is a different skill, and one they already have in greater measure than they realise.

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to explore these concepts further.

Prompt 1: Personal Application

```text
I work as [YOUR ROLE] in [YOUR INDUSTRY]. I want to understand how the three
sections of the PQP Framework — Persona, Questions, and Principles — would
apply to an agent for my specific work. Help me think through each section:

1. For Persona: What professional identity should this agent have? What
   authority should it project, and what should it never claim to be?
2. For Questions: What are the five most important things this agent should
   handle, and the three most important things it should redirect to a human?
3. For Principles: What are two operating constraints that are specific to my
   domain — not generic 'be helpful' statements, but actual rules that a
   professional in my field would recognise as meaningful?

Challenge me if my answers are too generic. Push me toward specificity.
```

What you're learning: How to translate abstract section descriptions into concrete domain applications. The AI will push back on generic answers, which is the most effective way to learn the difference between a functional Persona and a marketing exercise.

Prompt 2: Framework Analysis

```text
Here are two SKILL.md Persona statements for a financial research agent.
Analyse them and explain which one would produce more consistent, reliable
agent behaviour in ambiguous situations, and why.

Persona A:
"I am a helpful financial research assistant. I provide clear, accurate
analysis of financial data and help users understand market trends and
investment opportunities. I am professional and responsive."

Persona B:
"I am a senior equity research analyst specialising in FTSE 350 financial
services companies. I work with portfolio managers and investment directors
who require data-grounded analysis under time pressure. I am precise and
direct. I cite all sources, flag uncertainty explicitly, and do not speculate
beyond what the available data supports. I will not produce analysis that
a user might interpret as investment advice."

Explain the specific scenarios where Persona B would produce a different
(and better) response than Persona A. Focus on ambiguous situations —
ones where no individual rule would clearly govern the agent's response.
```

What you're learning: How to evaluate Persona quality through the lens of ambiguous situations. The contrast between these two Personas makes concrete the lesson's central claim: that identity governs ambiguous situations more reliably than rules.

Prompt 3: Domain Research

```text
I want to think through the Principles section for an agent in
[YOUR PROFESSIONAL DOMAIN]. Help me identify five operating constraints
that are domain-specific — not generic quality statements, but specific
rules that professionals in [YOUR DOMAIN] would recognise as meaningful
and important.

For each constraint, help me specify:

1. What exactly triggers this constraint (the specific situation)
2. What the agent should do when the constraint is triggered
3. Why this constraint matters professionally (the risk it prevents)

Then tell me: are any of these constraints too generic? Could they apply
to an agent in any professional domain, or are they genuinely specific
to [YOUR DOMAIN]? Revise any that are too generic until they are
domain-specific.
```

What you're learning: How to distinguish domain-specific Principles from generic ones, and how to refine generic statements into actionable professional operating constraints. The self-evaluation at the end of the prompt builds the critical skill of recognising when a Principle is doing real work versus filling space.

Core Concept

The SKILL.md is a plain-English document written by a domain expert that constitutes the intelligence layer of a Cowork plugin. It follows the Persona–Questions–Principles Framework (PQP Framework), with three sections (Persona, Questions, and Principles), each performing a distinct function. The SKILL.md is what closes the knowledge transfer gap: it is the pathway through which domain expertise reaches a deployed system without requiring programming ability.

Key Mental Models

  • PQP Framework: Three sections that together define who the agent is (Persona), what it does (Questions), and how it decides (Principles)
  • Persona as Functional Specification: Professional identity governs ambiguous situations more reliably than rules; an agent with a precise identity handles unanticipated situations without requiring a rule for each scenario
  • Questions as Two-Sided Scope Document: The out-of-scope boundary is as important as the in-scope list; an agent without it will attempt queries it cannot handle reliably
  • Domain-Specific Principles: Generic principles ("be accurate") provide no actionable guidance; domain-specific principles ("flag any clause referencing jurisdiction outside England and Wales") govern the specific situations that arise in practice

Critical Patterns

  • A SKILL.md is not code or configuration: it is a structured English document that a compliance officer, project manager, or clinician can write
  • Identity constraints in the Persona section (e.g., "you are not an investment adviser") govern behaviour across all contexts, including situations no rule anticipated; a rule can be argued around, an identity cannot
  • The Questions section must define what the agent will NOT handle, not just what it will: an agent with undefined out-of-scope territory will produce confident-sounding output in areas where it has no grounded expertise
  • Specificity is the work: making tacit professional knowledge explicit is the hardest part of authoring a SKILL.md

Common Mistakes

  • Assuming SKILL.md is a code or configuration file (the name and .md extension trigger this misconception persistently)
  • Writing a Persona as a marketing description rather than a functional specification: only precision shapes agent behaviour in ambiguous situations
  • Writing generic Principles that are aspirational ("be helpful") rather than actionable: domain-specific constraints are what produce reliable, trustworthy outputs

Connections

  • Builds on: Lesson 1 established the plugin package structure; this lesson goes deep on the knowledge worker's component
  • Leads to: Lesson 5 will show a complete, annotated SKILL.md example; Lesson 3 covers the plugin infrastructure (manifest, connectors, commands, agents)
