In Lesson 5, you learned the four models for capturing value from domain agents. But monetisation only works if the organisation is ready for deployment. Not every organisation is.
Readiness is not a binary property -- ready or not ready. It is a position on a five-level scale. Organisations can move through the levels deliberately, and understanding where an organisation sits today determines what intervention is appropriate. Deploy too early, and the agent fails not because the technology was wrong but because the organisation could not support it. Wait too long, and competitors move first.
This lesson gives you a five-level model for assessing any organisation's AI maturity. By the end, you will be able to look at your own organisation, diagnose its level honestly, and determine what needs to change to move forward.
Level 1: Awareness. AI is on the agenda but not in operations.
At Level 1, individual employees are using consumer AI tools -- ChatGPT, Claude, Gemini -- on their own initiative. There is no organisational sanction, no governance, no data strategy, and no designated AI owner. The leadership team talks about AI in meetings. Nobody has deployed anything.
Education, not deployment. Level 1 organisations are not candidates for domain agent deployment. The infrastructure -- governance, data access, designated ownership -- does not exist yet. Attempting to deploy an agent here fails not because the technology is inadequate but because the organisation cannot support it.
The right move: run awareness workshops, establish an AI working group, draft an acceptable use policy, and identify one team willing to run a pilot. That is how you move to Level 2.
Level 2: Experimentation. Active pilots; at least one team has deployed a real agent.
At Level 2, the organisation has moved beyond talk. A designated AI lead or working group exists, though with limited authority. At least one team has deployed a real agent -- not a demo, not a proof of concept, but an agent that handles real work. Results are promising but isolated.
This is where most large enterprises sit in early 2026.
Level 2 is also where most enterprise AI deployments stall. The pilot worked. Leadership was impressed. And then nothing happened.
This is the Post-Pilot Trap -- the transition zone between Experimentation and Integration. Pilots succeed because they operate in controlled conditions: a motivated team, a clear problem, executive attention. Scaling requires governance, cross-team coordination, and sustained investment. Most organisations do not make that leap.
The right move: deploy team-level Cowork agents with measurable value and minimal governance overhead. The goal is not enterprise transformation. The goal is building a track record of measurable results that justifies the investment needed to reach Level 3.
Pick one domain. Pick one team. Deploy one agent. Measure the value. Use that evidence to make the case for structured deployment.
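The "measure the value" step can be made concrete. Below is a minimal sketch of the pilot evidence a team might track, assuming a simple hours-saved model; the function name, parameters, and figures are illustrative assumptions, not benchmarks from this lesson.

```python
def monthly_pilot_value(tasks_per_month: int,
                        minutes_saved_per_task: float,
                        loaded_hourly_rate: float,
                        agent_cost_per_month: float) -> float:
    """Net monthly value of a single team-level agent pilot.

    hours saved x loaded labour rate, minus what the agent costs to run.
    A positive number is the evidence base for moving to Level 3.
    """
    hours_saved = tasks_per_month * minutes_saved_per_task / 60
    return hours_saved * loaded_hourly_rate - agent_cost_per_month


# Hypothetical pilot: 400 tasks a month, 15 minutes saved each,
# a 60/hour loaded rate, and 500/month in agent costs.
print(monthly_pilot_value(400, 15, 60, 500))  # -> 5500.0
```

Even a rough model like this turns "the pilot worked" into a number leadership can weigh against the cost of structured deployment.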
Level 3: Integration. Structured deployment: agents in production, connected to real systems, with governance.
At Level 3, the organisation has moved beyond experimentation. There is a formal AI strategy with executive sponsorship. IT has a defined role in agent deployment. Agents are connected to production systems -- CRM, ERP, document management -- with real data flowing through governed pipelines.
This is where Part 3 agents are most naturally at home.
The right move: deploy a single vertical fully -- one domain, one team, one agent, full stack. This means a SKILL.md authored by the domain expert, connectors to real systems managed by IT, governance policies in place, and measurable value being tracked.
The emphasis at Level 3 is depth over breadth. Do one deployment completely and well. Document what worked, what failed, and what you would change. That documentation becomes the playbook for expanding to additional domains.
Level 4: Optimisation. Multi-vertical portfolio, mature governance, and performance measurement driving investment decisions.
At Level 4, the organisation has multiple agents deployed across multiple domains. Governance is mature -- there are clear policies for data access, output quality, escalation, and agent retirement. The strategic question shifts from "should we deploy AI?" to "how do we optimise our AI portfolio?"
This is where platform commitment and build-versus-buy decisions become relevant. At Level 4, the organisation has enough deployment experience to make those decisions on evidence rather than speculation.
The cross-vertical portfolio strategy in Chapter 26 is addressed primarily to Level 4 organisations. The build-versus-buy decision for SKILL.md development -- whether to invest in internal knowledge extraction capability or engage an external services provider -- becomes relevant at this level.
Level 5: Transformation. Organisational redesign around agent capability.
At Level 5, the organisation has fundamentally redesigned how it works. Job descriptions have changed. Human-agent boundaries are explicitly negotiated and documented. AI governance is a standing organisational capability, not a project.
Few organisations are here in 2026. Level 5 is the long-term destination, not a near-term goal for most Part 3 readers.
| Level | Name | Defining Feature | Appropriate Intervention |
|-------|------|------------------|--------------------------|
| 1 | Awareness | AI on agenda, not in operations | Education and policy |
| 2 | Experimentation | Active pilots, isolated results | Team-level Cowork deployment |
| 3 | Integration | Production agents with governance | Single vertical, full stack |
| 4 | Optimisation | Multi-vertical portfolio | Platform commitment, portfolio management |
| 5 | Transformation | Organisational redesign | Continuous evolution |
Use these prompts in Anthropic Cowork or your preferred AI assistant to explore these concepts further.
What you're learning: You are practising honest organisational assessment. The AI's questions force you to evaluate your organisation against specific diagnostic indicators rather than relying on optimistic self-assessment.
What you're learning: You are calibrating your assessment skills across the full maturity spectrum. Getting all three right confirms you can distinguish between adjacent levels (the hardest part of assessment).
What you're learning: You are positioning your own organisation within your industry's maturity landscape. This helps you set realistic expectations and identify competitive advantages available at your current level.
Organisational AI readiness is not binary (ready or not) but a five-level progression: Awareness, Experimentation, Integration, Optimisation, and Transformation. Each level has diagnostic indicators and an appropriate intervention. Most large enterprises sit at Level 2 (Experimentation) in early 2026, where the Post-Pilot Trap -- the transition zone between successful pilots and structured deployment -- stalls most enterprise AI initiatives. Part 3 domain agents are most naturally at home in Level 3 (Integration) organisations.