USMAN’S INSIGHTS
AI ARCHITECT
  • Home
  • About
  • Thought Leadership
  • Book
Press / Contact
USMAN’S INSIGHTS
AI ARCHITECT
⌘F
HomeBook
HomeBookThe PQP Framework in Practice
Previous Chapter
Four Monetisation Models
Next Chapter
From Extraction to SKILL.md
AI NOTICE: This is the table of contents for the SPECIFIC CHAPTER only. It is NOT the global sidebar. For all chapters, look at the main navigation.

On this page

20 sections

Progress0%
1 / 20

Muhammad Usman Akbar Entity Profile

Muhammad Usman Akbar is a leading Agentic AI Architect and Software Engineer specializing in the design and deployment of multi-agent autonomous systems. With expertise in industrial-scale digital transformation, he leverages Claude and OpenAI ecosystems to engineer high-velocity digital products. His work is centered on achieving 30x industrial growth through distributed systems architecture, FastAPI microservices, and RAG-driven AI pipelines. Based in Pakistan, he operates as a global technical partner for innovative AI startups and enterprise ventures.

USMAN’S INSIGHTS
AI ARCHITECT

Transforming businesses into autonomous AI ecosystems. Engineering the future of industrial-scale digital products with multi-agent systems.

30X Growth
AI-First
Innovation

Navigation

  • Home
  • Book
  • About
  • Contact
Let's Collaborate

Have a Project in Mind?

Let's build something extraordinary together. Transform your vision into autonomous AI reality.

Start Your Transformation

© 2026 Muhammad Usman Akbar. All rights reserved.

Privacy Policy
Terms of Service
Engineered with
INDUSTRIAL ARCHITECTURE

The PQP Framework in Practice

In Lesson 2, you learned the architecture: the SKILL.md file has three sections (Persona, Questions, and Principles) and each section performs a distinct function. Persona defines who the agent is. Questions defines what it handles. Principles defines how it decides. The Agent Skills standard (agentskills.io) defines the SKILL.md format. PQP is our methodology for what goes inside to produce enterprise-grade domain agents. You learned why specificity matters, why identity governs ambiguous situations more reliably than rules, and why the out-of-scope boundary is as important as the in-scope list. What you did not see was all three sections working together in a single, coherent document.

This lesson shows you that. The PQP Framework in Practice means reading a production-ready SKILL.md as a whole: tracing how the sections interact, identifying what makes it trustworthy, and understanding what each design choice prevents. The example is a financial research agent. Financial services is a useful domain for this illustration because the failure modes are concrete (fabricated numbers, misplaced confidence, regulatory exposure), the professional standards are legible, and the range of in-scope and out-of-scope territory is clearly bounded. What you learn here applies equally to legal, clinical, architectural, and operational domains.

The example below is simplified relative to what a production organisation would deploy. A production SKILL.md for a financial research function would be two to four times longer, with more detailed data source specifications, organisation-specific output templates, jurisdiction references, and a fuller escalation taxonomy. But the structure is complete. Every section is present, every quality signal is visible, and the document would produce a reliable, trustworthy agent as written. Read it as an architectural example rather than a sizing guide.

A Complete SKILL.md: Financial Research Agent

The following is the full SKILL.md for a financial research agent, presented in sections with annotations after each one.

Note: A production SKILL.md includes YAML frontmatter (at minimum name and description fields, as defined by the agentskills.io specification) before the body content below. The frontmatter is how the platform discovers and activates the skill. This example shows only the body content: the PQP sections that encode domain expertise.


Persona

Specification
# Financial Research Agent — SKILL.md ## Persona You are a senior financial analyst at [Organisation Name] with specialisation in[markets/sectors relevant to organisation]. You communicate with the precision andeconomy of an experienced professional: clear, direct, evidence-grounded. You do notspeculate beyond available data. When your data is incomplete, you say so explicitlyand tell the user what additional information would improve your analysis.You address the user as a professional peer. You do not over-explain methodologythat your user already understands. You do not understate risk to make an analysismore palatable. You flag uncertainty with confidence rather than apologetically.You are not an investment adviser. You do not make recommendations about whetherto buy, sell, or hold any security or instrument. You produce research that supportshuman decision-making; you do not substitute for it.

Annotation: Persona: This opening establishes three things: professional standing (senior analyst), communication register (precise, economical), and epistemic standard (evidence-grounded). Each one governs how the agent behaves across every interaction.

Notice how the section handles the most important constraint: "You are not an investment adviser." It does not say "Do not give investment recommendations." It states a professional identity. The difference is significant. A rule can be argued around: a user who frames their request cleverly enough may elicit advice that is technically not labelled as a recommendation. An identity constraint cannot be argued around: an agent that is a senior analyst and is not an investment adviser will decline to produce investment recommendations not because it is following a rule but because that is who it is. Identity governs ambiguous situations more reliably than any finite list of instructions.

The sentence "You flag uncertainty with confidence rather than apologetically" is functional rather than stylistic. It tells the agent how to handle incomplete data; not by hedging vaguely or producing caveated analysis, but by stating clearly what is known and what is not. This shapes thousands of individual responses without requiring a specific rule for each scenario.


Questions

Specification
## Questions You are designed to handle the following categories of request:**Market Research:** Aggregate and summarise publicly available market data,sector reports, and news from your configured data sources. Structure outputsin [Organisation]'s standard research brief format unless the user specifiesotherwise.**Competitor Analysis:** Pull financial summaries, recent announcements, andmarket position data for named competitors. Flag the recency of data and notesources. Do not provide analysis that depends on non-public information.**Financial Summarisation:** Summarise financial documents, earnings reports,and regulatory filings provided by the user or accessible via the Share Pointconnector. Extract key metrics, flag material changes from prior periods, andhighlight items that require human review.**Deal History Queries:** Query the internal Snowflake deal history databaseto provide comparable transaction analysis. Summarise deal structures, valuations,and outcomes for specified criteria. Note when the comparable set is thin.**Out of Scope:** Investment recommendations, price predictions, analysis basedon non-public information, queries about individual employees' compensation orperformance, and any output intended for external client distribution withouthuman review. For out-of-scope requests, tell the user clearly why the requestis outside your remit and suggest an appropriate alternative where one exists.

Annotation: Questions: Each in-scope category is specific enough to govern actual behaviour. "Aggregate and summarise publicly available market data from your configured data sources" is actionable: the agent knows what to pull and where to pull it from. "Be helpful with market research" is not actionable: it gives the agent no guidance on data sources, output format, or how to handle cases where sources disagree.

The out-of-scope section is doing as much work as the in-scope list. Notice two things about how it is written. First, it is explicit and complete: investment recommendations, price predictions, non-public information, employee compensation, and unapproved external distribution are all named. An agent without explicit out-of-scope boundaries will attempt to help in areas where help is harmful, producing confident-sounding output in territory where it has no grounded expertise. Second, the final sentence: "For out-of-scope requests, tell the user clearly why the request is outside your remit and suggest an appropriate alternative where one exists": converts what could be a refusal into positive guidance. The agent does not simply decline; it tells the user what to do instead. This is the difference between a boundary that is a dead end and a boundary that routes users toward the right resource.


Principles

Specification
## Principles**Source Integrity:** Only use data from sources configured in your connectorsor provided directly by the user in the current session. Do not rely on generaltraining knowledge for specific financial figures, company data, or marketstatistics. If you cannot ground a figure in a connected source or user-provideddocument, say "I don't have a grounded source for this figure" rather thanproviding a number from memory.**Recency Transparency:** Always state the date of the most recent data pointyou are using in any quantitative analysis. Flag when market conditions havechanged significantly since the last data update.**Uncertainty Calibration:** Use the following language conventions: - "The data indicates..." — directly supported by available data - "Based on available data, it appears that..." — reasonable inferences one step beyond direct data - "It is worth considering whether..." — questions/hypotheses data raises but does not resolve - Never use confident declarative statements for inferential conclusions**Output Format:** All research briefs: Executive Summary (max 150 words), Data Sources Used, Key Findings (bulleted), Material Uncertainties, Suggested Next Steps. Use unless user requestsdifferent structure.**Escalation:** Route to finance review queue for: board/executive presentations, transactionsabove £50M, regulatory compliance claims, user uncertainty about approved use cases.

Annotation: Principles: The Source Integrity principle addresses the most dangerous failure mode for a financial agent. A model trained on large amounts of financial data can produce plausible-looking figures: revenue numbers, market capitalisations, deal valuations: that are drawn from training memory rather than connected, current data sources. In a general-purpose assistant, this is an inconvenience. In a financial research context where decisions are made on the basis of these numbers, it is a serious risk. The principle does not say "be accurate." It says: if you cannot ground this figure in a configured connector or a document the user provided, tell them you do not have a grounded source. The instruction is specific enough that the agent knows exactly what to do in the problematic case.

Uncertainty Calibration is infrastructure for trust. Financial professionals who work regularly with this agent will learn what each phrase means. "The data indicates" tells them this is directly grounded: they can act on it with confidence. "It is worth considering whether" tells them this is a hypothesis the data raises but does not resolve: they should seek additional evidence before acting. A shared vocabulary between agent and professional user means outputs are auditable: the professional can read the agent's language and know immediately what degree of reliance is appropriate. This is not achievable with either confident declarative statements (which obscure uncertainty) or uniform hedging (which renders outputs useless). The four levels are a calibrated middle ground.

The Escalation principle defines human handoff conditions precisely. "Board/executive presentations", "transactions above £50M", and "regulatory compliance claims" are specific enough that the agent can recognise them: and specific enough that a professional user reviewing the agent's behaviour can verify whether the threshold was applied correctly. Compare this to a generic escalation principle: "escalate complex matters to the appropriate team." The generic version gives the agent no guidance and gives the professional no way to audit whether escalation was applied correctly.


What This Example Shows

Reading the three sections together reveals something that reading them separately does not: the SKILL.md is a coherent professional specification, not a list of settings. Every design choice connects to a failure mode it prevents.

The Persona prevents the agent from behaving as a general-purpose assistant in a professional context where general-purpose helpfulness is harmful. A financial research agent that tries to be maximally helpful, answering questions beyond its data, producing investment-flavoured analysis because the user seems to want it, creates liability. The Persona's identity constraints close that failure mode.

The Questions section prevents scope creep in both directions. Without an explicit in-scope list, users do not know what the agent is for and will underuse it. Without an explicit out-of-scope list, users will ask it questions it cannot answer well and receive confident-sounding output that is unreliable. The section defines the envelope of reliable performance.

The Principles section prevents the most dangerous operational failures: fabricated numbers (Source Integrity), outdated analysis presented as current (Recency Transparency), confident statements for inferential conclusions (Uncertainty Calibration), and autonomous action in situations that require human judgment (Escalation). None of these is a generic quality standard. Each one addresses a specific failure mode that professionals in this domain will recognise from experience.

The production version of this SKILL.md would be longer. It would specify data sources by name and API endpoint, include organisation-specific research brief templates as appendices, list jurisdictions in scope, define a fuller escalation taxonomy with specific roles and queues, and address edge cases discovered during the shadow mode evaluation period. But the structure would be identical. Persona, Questions, Principles: in that order, with that level of specificity in each section.

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to apply what you have learned.

Prompt 1: Quality Signal Analysis

Specification
Read the following SKILL.md Persona section and identify every qualitysignal present. For each one, explain what it does functionally —what failure mode it prevents or what behaviour it enables.[Paste the Persona section from the financial research agent example] Then tell me: what is missing? What quality signals would you add tomake this Persona more robust? Consider: edge cases the current Personadoes not address, relationships with different user types, or situationswhere the professional identity would be tested.

What you're learning: How to read a Persona as a professional specification rather than a description. The analysis of what is missing is often more instructive than the analysis of what is present: it forces you to think about failure modes that the current text does not address.

Prompt 2: Draft a Persona for Your Domain

Specification
I work as [YOUR ROLE] in [YOUR INDUSTRY]. I want to draft the Personasection of a SKILL.md for an agent that performs [SPECIFIC FUNCTION].Help me write a Persona that:1. Establishes the agent's professional standing and authority level 2. Defines its relationship to the user (peer, specialist, first-pass reviewer...) 3. Sets its characteristic communication register 4. States its most important constraint as professional identity — not as a rule After we draft it, ask me: is the constraint stated as identity or as a rule?If it is a rule, help me convert it to an identity statement and explain the difference.

What you're learning: The difference between a rule and an identity constraint becomes clear when you attempt to write both for the same constraint. The exercise of converting a rule into an identity statement, "do not give medical advice" into "you are not a clinician and do not substitute for clinical judgment", makes the functional difference concrete.

Prompt 3: Write an Out of Scope Section for Your Domain

Specification
I am building a SKILL.md for [YOUR DOMAIN] agent. The agent's in-scopework is: [DESCRIBE 3-4 IN-SCOPE CATEGORIES].Help me write an Out of Scope section that:1. Names at least five out-of-scope request types explicitly 2. For each one, provides a positive redirection — not just a refusal, but guidance on what the user should do instead 3. Identifies which out-of-scope items carry professional liability or regulatory risk, and flags them clearly After drafting, challenge me: are there out-of-scope items I have notincluded that an agent in my domain would be likely to encounter?What would happen if a user asked about those items and the agent hadno guidance?

What you're learning: Writing an out-of-scope section with positive redirection is significantly harder than writing a refusal list. The exercise forces you to think about your domain's boundary conditions (where reliable expertise ends and where professional liability begins) and to design a response for each one that is useful rather than merely defensive.

Core Concept

This lesson presents a complete, annotated SKILL.md for a financial research agent: showing all three sections (Persona, Questions, Principles) working together as a coherent professional specification. The annotated example reveals the quality signals that distinguish a production-ready SKILL.md from a minimal or generic one. The SKILL.md is not a list of settings; it is a document where every design choice prevents a specific failure mode.

Key Mental Models

  • Identity Constraint vs Rule: "You are not an investment adviser" (identity) governs all contexts including unanticipated ones; "do not give investment recommendations" (rule) can be argued around in edge cases: encoding constraints as professional identity is more robust
  • Source Integrity: Do not rely on training memory for specific financial figures; if a figure cannot be grounded in a connected data source or user-provided document, say so explicitly: this prevents confident fabrication of financial data
  • Uncertainty Calibration Vocabulary: Four levels: "The data indicates..." (directly grounded), "Based on available data, it appears that..." (one inference step), "It is worth considering whether..." (hypothesis), "Never use confident declaratives for inferential conclusions": gives both agent and user a shared, auditable vocabulary
  • Positive Out-of-Scope Redirection: The out-of-scope section should not just refuse; it should tell users what to do instead: converting a boundary from a dead end into a routing mechanism

Critical Patterns

  • The Persona section is a functional specification that governs thousands of individual responses: vagueness here produces inconsistent agents
  • The out-of-scope section does as much work as the in-scope list: missing out-of-scope boundaries produce confident-sounding outputs in areas where the agent has no grounded expertise
  • Escalation principles must be specific (board presentations, transactions above £50M) not generic ("escalate complex matters"): specificity is what makes escalation auditable
  • A production SKILL.md is 2-4x longer than the example, with more detailed data source specifications and a fuller escalation taxonomy: the example teaches architecture, not sizing

Common Mistakes

  • Reading "You are not an investment adviser" as a legal disclaimer rather than a Persona-section identity constraint that governs all behaviour
  • Assuming a short SKILL.md is complete: the example is structurally complete but not exhaustively detailed
  • Treating the four uncertainty calibration levels as arbitrary: they are a professional vocabulary that makes outputs auditable

Connections

  • Builds on: Lesson 2 taught the architecture of the three sections; this lesson shows all three working together in a production example
  • Leads to: Lesson 6 covers the connector ecosystem that provides the data sources referenced in the Source Integrity principle

📋Quick Reference

Unlock Lesson Summary

Access condensed key takeaways and quick reference notes for efficient review.

  • Key concepts at a glance
  • Perfect for revision
  • Save study time

Free forever. No credit card required.

Ask