USMAN’S INSIGHTS
AI ARCHITECT



© 2026 Muhammad Usman Akbar. All rights reserved.


The Plugin Infrastructure

In Lesson 2, you examined the SKILL.md: the intelligence layer of a Cowork plugin, written in plain English and owned by the knowledge worker. The SKILL.md tells the agent who it is, what it does, and how it behaves. But a SKILL.md without infrastructure is an expert locked in an empty room. The surrounding files (plugin.json, .mcp.json, and settings.json) provide the identity, data connections, and default configuration that make the agent operational.

These infrastructure components are owned by developers and IT, not by the knowledge worker. This is not a limitation; it is a design choice that reflects where the technical complexity actually sits. Your role with respect to these files is not to author them but to understand them well enough to verify that they match your intentions, and to detect when something is wrong. That combination (sufficient understanding without operational responsibility) is what this lesson will build.

There is a professional skill embedded in this lesson that does not have a widely used name but deserves one: infrastructure literacy. It means knowing enough about the systems you depend on to detect problems accurately, describe them precisely, and have productive conversations with the people who fix them. It is not about becoming a systems engineer. It is about being a competent professional user of complex infrastructure.

The Plugin Directory Structure

Before examining each file individually, it helps to see where they live in relation to each other. A Cowork plugin is a directory with a specific structure:

Specification
```
financial-research/
├── .claude-plugin/
│   └── plugin.json      # Manifest: name, description, version, author
├── .mcp.json            # MCP server declarations
├── commands/            # Slash commands
├── skills/              # SKILL.md files (your contribution)
├── agents/              # Agents
├── hooks/               # Event handlers
├── settings.json        # Default settings
└── .lsp.json            # LSP server configs
```

Notice the division of labour built into this structure. The skills/ directory is where your SKILL.md files live: the intelligence layer you author. Everything else is infrastructure that developers and IT maintain. Anthropic designed these as plain Markdown and JSON files so that anyone can contribute, but in enterprise environments, clear ownership of each component prevents governance gaps. This lesson covers the three infrastructure files that matter most for your understanding: plugin.json, .mcp.json, and settings.json.

Component One: The Plugin Manifest (plugin.json)

The plugin.json file lives inside the .claude-plugin/ directory. It is the plugin's identity card, and it is deliberately minimal. Here is the complete plugin.json for a Financial Research Agent:

Specification
```json
{
  "name": "financial-research",
  "description": "Financial research agent for FTSE equity analysis and market data retrieval",
  "version": "1.2.0",
  "author": {
    "name": "Acme Financial"
  }
}
```

That is the entire file. Four fields: name, description, version, and author.

This minimalism surprises people. If you expected the manifest to be the "main configuration file" containing the agent's instructions, permissions, and data connections, that expectation is wrong, and the mismatch is worth understanding.

The name field is a machine-readable identifier. It determines how the Cowork platform references this plugin internally and how it appears in deployment systems. The description field tells the plugin manager (and anyone browsing the marketplace) what this plugin does. It is displayed in discovery interfaces and should clearly state the plugin's purpose. The version field matters for change management: when IT updates any file in the plugin, the version number makes it possible to trace which configuration was in effect at any point in time. The author field identifies ownership: useful for auditing and for knowing which team to contact when something needs to change.

What plugin.json does not contain is equally important. It does not contain the agent's instructions (that is the SKILL.md). It does not configure data connections (that is .mcp.json). It does not set governance policies like audit logging or shadow mode (those are configured in Cowork's organisation admin panel, not in any file within the plugin). The manifest identifies the plugin. Other components configure it.

Component Two: MCP Connector Declarations (.mcp.json)

The .mcp.json file is where the plugin's data connections are declared. It specifies which MCP (Model Context Protocol) servers the plugin connects to and, through those servers, which external systems the agent can access.

An MCP server is a small service that acts as a bridge between the agent and an external system. When the Financial Research Agent needs current market data, it does not connect to a financial data provider directly. It communicates with that provider's MCP server, which handles authentication, executes queries, translates data formats, and returns structured results. Each external system the agent connects to has its own MCP server.

The .mcp.json file declares which of these servers the plugin uses. For the Financial Research Agent, it might declare connections to a financial data server, a Snowflake analytics server, and a SharePoint document server. Each entry names the server and provides its connection configuration: the address, the protocol, and any parameters needed to establish the connection.
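As a sketch of what such a declaration might look like (the server names, URLs, and package name here are illustrative assumptions for this example, not a verified Cowork configuration), a .mcp.json for the Financial Research Agent could resemble:

```json
{
  "mcpServers": {
    "market-data": {
      "type": "http",
      "url": "https://mcp.example.com/market-data"
    },
    "snowflake-analytics": {
      "command": "npx",
      "args": ["-y", "example-snowflake-mcp-server"]
    },
    "sharepoint-docs": {
      "type": "http",
      "url": "https://mcp.example.com/sharepoint"
    }
  }
}
```

Each key under mcpServers names a server the plugin can use; the value tells the platform how to reach it, whether over HTTP or by launching a local process. Credentials are typically managed by IT through the platform rather than hard-coded into this file.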

This design has several advantages. The agent does not need to know how to authenticate with each external system: that complexity lives in the MCP server. The agent does not need to handle different data formats from different sources: the server normalises them. And when external systems change their APIs or authentication protocols, only the MCP server needs to be updated, not the agent itself.

Who writes .mcp.json? Developers and IT. They configure the servers, manage the credentials, and maintain the connections. Your role as the knowledge worker is to understand what the .mcp.json enables (which data sources your agent can reach) and to verify that those connections match the requirements of your workflow.

The Three Connector States

The practical literacy question is not how MCP servers work internally (that is an IT concern) but what state a connector is in at any given moment. There are three states:

Working. The MCP server is running, the authentication is valid, and queries return live data from the external system. When a connector is working, the agent has access to current information. Research reflects today's data, not last week's cached snapshot.

Explicitly unavailable. The MCP server is not running or cannot authenticate. In a well-configured system, the agent detects this state and tells the user it cannot access that data source. "I was unable to retrieve current market data for this analysis. Please verify the connector status with your IT team." This is the correct failure mode: it is transparent about the limitation.

Fabricating data. This is the dangerous failure mode, and it must be named explicitly. In a poorly designed or misconfigured system, when a connector is unavailable, the agent may draw on its training data or internal knowledge to produce responses that appear to be live data but are not. The output looks like a real market data response. The numbers are plausible. The format is correct. But the information is invented.

The reason this is categorically different from the second state is that it is undetectable without external verification. An agent that says "I cannot access the financial data connector" gives you accurate information about its limitations. An agent that generates a plausible-looking market data table without access to that connector has produced a hallucination presented as fact, and in a financial context, acting on fabricated data can have serious consequences.

Cowork's architecture is designed to make the third state unlikely. The platform and connector design enforce explicit failure reporting rather than silent substitution. But no architecture eliminates the risk entirely, which is why infrastructure literacy includes knowing this risk exists and building the habit of verifying data provenance when stakes are high.

Component Three: Default Settings (settings.json)

The settings.json file configures default plugin behaviour. Its current primary use is the agent key, which activates a custom agent definition as the plugin's main conversation thread. When a plugin includes a settings.json with an agent key pointing to an agent file, that agent becomes the entry point when a user starts a session with the plugin.
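To make that concrete, here is a minimal sketch. The file path and the exact schema are assumptions for illustration, following the pattern described above:

```json
{
  "agent": "agents/research-lead.md"
}
```

With a file like this in place, a user who opens the plugin starts their session with the designated agent rather than the platform default, and the organisation can still override that choice through the admin panel.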

This scope is deliberately narrow today (settings.json may support additional configuration keys as the platform evolves), but even a single key carries architectural significance. It means the plugin developer decides which agent a user interacts with by default, while the organisation retains the ability to override that choice through the admin panel.

The distinction between settings.json and governance controls is important. The settings.json file lives inside the plugin package and configures developer-chosen defaults for plugin behaviour. Governance controls (audit logging, output review requirements, shadow mode, escalation routing) are configured by organisation administrators in Cowork's admin panel. They sit above the plugin, applying organisation-wide policies that no individual plugin can override.

This separation is a security design: the people who build plugins cannot weaken the governance controls that the organisation applies to them.

A Note on Governance

If you are wondering where audit logging, permission scopes, shadow mode, and escalation routing are configured, the answer is: not inside the plugin. These governance settings live above the plugin layer entirely, managed by administrators who set policies that apply across all plugins in the organisation.

Anthropic ships enterprise admin controls that include organisation-wide skill provisioning, audit capabilities, and policy management.

This means governance is not something a plugin author decides. It is something the organisation enforces. A plugin author cannot disable audit logging for their plugin, and a knowledge worker cannot bypass review requirements. The governance layer wraps around the plugin from the outside, which is precisely how enterprise security should work.

You will examine governance in detail in Lesson 7 when the chapter covers shadow mode and compliance frameworks. For now, the key point is architectural: governance is organisational, not per-plugin.

The Relationship Between Infrastructure and Intelligence

With all three infrastructure files in view, the division of labour becomes clear:

| Component | What It Does | Who Owns It |
| --- | --- | --- |
| plugin.json | Identifies the plugin (name, description, version, author) | Developers |
| .mcp.json | Declares data connections to external systems | Developers / IT |
| settings.json | Configures default plugin behaviour (e.g., activating a custom agent) | Developers |
| SKILL.md | Defines the agent's expertise and behaviour | Knowledge worker |
| Governance settings | Enforces organisational policies (audit, review, shadow mode) | Org administrators |

Your contribution, the SKILL.md, sits at the centre. It is the intelligence that makes the agent useful. But it operates within the environment that the infrastructure files create. A SKILL.md that instructs the agent to analyse financial market data only works if the .mcp.json declares the appropriate MCP server and IT has configured that server to be available. A SKILL.md that describes careful, audited research behaviour only matters if the organisation's governance settings actually enforce audit logging.

Understanding this relationship is what makes you an effective collaborator rather than someone who writes instructions in isolation and hopes the infrastructure team gets the rest right.

What the Knowledge Worker Needs to Know

You do not need to build MCP servers, write plugin.json files, or manage settings.json. That work belongs to developers and IT. What you need is enough understanding to operate as a competent professional user of the infrastructure.

| Capability | What It Looks Like in Practice |
| --- | --- |
| Verify connector alignment | Confirm that the MCP servers declared in .mcp.json match the data sources your workflow actually requires |
| Detect data quality issues | Recognise when an agent's output may be based on unavailable or stale data |
| Report problems accurately | Describe an infrastructure problem in terms IT can act on: "the Snowflake connector appears to be returning stale data, last updated three days ago" is more useful than "the agent seems off" |
| Understand the architecture | Know which file controls what, so you can direct questions to the right people |

This is infrastructure literacy: not operational depth, but sufficient awareness to be a capable professional user and an effective collaborator with the people who maintain the infrastructure you depend on.

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to practise reasoning about plugin infrastructure.

Prompt 1: Plugin Structure Annotation

Specification
```
I'm going to describe a Cowork plugin directory structure. For each
infrastructure file (plugin.json, .mcp.json, settings.json), explain:
(1) what it configures, (2) who owns it, and (3) what would happen if
it were missing or misconfigured.

Then tell me: where do the governance settings like audit logging and
shadow mode live? Why don't they live inside the plugin?

The plugin is called "legal-contract-reviewer" and it connects to
a Box document repository, a SharePoint document library, and a DocuSign
signing service.
```

What you're learning: Understanding the plugin directory structure requires more than memorising file names: it requires knowing what each file controls and who is responsible for it. This prompt practises treating the infrastructure as a map of responsibilities, not just a list of files.

Prompt 2: Connector Failure Diagnosis

Specification
```
I'm using a Financial Research Agent that connects to a financial data
provider, Snowflake, and SharePoint through MCP servers declared in its
.mcp.json file. The agent has produced a report with detailed market
data and company financials for three competitors. I haven't verified
whether the financial data connector was running when the report was
generated.

Help me think through: (1) what questions I should ask before trusting
this data, (2) how I would verify whether the data is live or
fabricated, and (3) what I should tell IT if I suspect a connector
problem. Be specific about what "fabricated data" looks like versus
"live data with genuine uncertainties."
```

What you're learning: Infrastructure literacy includes knowing how to verify data provenance, not just trusting that the agent has access to what it is supposed to have access to. This prompt practises the reasoning process for high-stakes data verification, which is a core professional skill when working with AI-powered research tools.

Prompt 3: Infrastructure Requirements Conversation

Specification
```
I'm a senior analyst preparing to work with IT to set up a new Cowork
plugin for our research team. I need to explain what MCP server
connections the plugin requires so that IT can write the .mcp.json file
and configure the servers.

Our workflow requires: live market data from a financial data connector,
access to our internal analytics database in Snowflake (specifically
the models and deal history tables, not HR data), and read access to
approved research templates in SharePoint.

Help me draft a clear, specific request to IT that describes what
connections I need, what data each connection should provide access to,
and any access restrictions I want to emphasise. Frame this as a
knowledge worker communicating requirements to a technical team.
```

What you're learning: Even though IT writes the .mcp.json and configures the MCP servers, the knowledge worker specifies the requirements. This prompt practises the communication skill of translating workflow needs into infrastructure requirements clearly enough that IT can implement them correctly on the first pass.

Core Concept

The connectors (.mcp.json) and plugin infrastructure (manifest, commands, agents) are the developer- and IT-owned components of a Cowork plugin. The plugin manifest (plugin.json) identifies the plugin: name, description, version, and author. MCP connectors are declared in .mcp.json and provide authenticated connections that handle credentials and data translation for enterprise systems. The knowledge worker's role is not to author these components but to understand them well enough to verify that they match intentions and to detect problems: a professional skill called infrastructure literacy.

Key Mental Models

  • Plugin Manifest Structure: Four fields: name (machine-readable identifier), description (purpose, shown in discovery interfaces), version (change management and traceability), author (ownership and point of contact)
  • Permission Scope: The connector configuration determines both which systems the agent can access and what data within each system; scope is a security boundary enforced by the Cowork runtime, not by the SKILL.md
  • Three Connector States: Working (live data), explicitly unavailable (agent reports the gap), fabricating data (the dangerous state: agent invents plausible-looking data when a connector is down)
  • Infrastructure Literacy: Knowing enough to detect a connector problem, describe it precisely to IT, and verify data provenance before acting on high-stakes analysis; not operational depth, but sufficient awareness

Critical Patterns

  • The plugin manifest and connector configuration are authored by IT and verified by the knowledge worker; not the reverse
  • Permission boundaries in the connector configuration are enforced by the Cowork runtime, superseding any SKILL.md instruction to access out-of-scope data
  • The dangerous connector failure mode is not unavailability (which is transparent) but fabrication: an agent that generates plausible data when a connector is down produces a hallucination presented as fact
  • Read permission = can query data, cannot create/modify/delete; write permission = can act on external systems (requires much more scrutiny)
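The read/write distinction can be pictured as a scoped connector entry. This is a hypothetical illustration of the idea only; the field names (connector, scope, datasets, permissions) are invented for clarity and are not an actual Cowork schema:

```json
{
  "connector": "snowflake-analytics",
  "scope": {
    "datasets": ["models", "deal_history"],
    "permissions": ["read"]
  }
}
```

A connector scoped like this can query the two named datasets but cannot modify them or reach anything else, regardless of what the SKILL.md instructs.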

Common Mistakes

  • Thinking knowledge workers write the plugin manifest or connector config: they specify requirements, IT implements and maintains it
  • Assuming a connector failure will produce an obvious error message: in poorly configured systems, the agent may fabricate data silently
  • Confusing permission scope with connector capability: a scoped connector is correctly configured, not limited

Connections

  • Builds on: Lesson 1 introduced the plugin package structure; this lesson covers the IT-owned components in operational detail
  • Leads to: Lesson 4 reveals that there is a governance hierarchy above the plugin configuration that can also override SKILL.md instructions
