The Integration Litmus: Wiring the 9-Tool Engine for Production
© 2026 Muhammad Usman Akbar. All rights reserved.

Wire All Nine Tools

James had nine tools spread across four chapters: state tools in Module 9.3, Chapter 3; content tools in Chapter 4; pedagogy tools in Chapter 5; code and upgrade tools in Chapter 6. Each one tested individually, each one working in isolation.

"Time to see if they play together," he said, opening the project directory.

Emma set her coffee down. "Start the server. Call every tool. In order. One bad import and the whole server refuses to start."


You are doing exactly what James is doing. Nine tools, one server, one verification run.

Step 1: Verify the Server Starts with All Nine Tools

Open a terminal in your tutorclaw-mcp project directory and ask Claude Code:

text
Start the TutorClaw MCP server and list all registered tools.

Verification Goal: Confirm that all 9 tools are correctly registered and available:
- register_learner
- get_learner_state
- update_progress
- get_chapter_content
- get_exercises
- generate_guidance
- assess_response
- submit_code
- get_upgrade_url

Claude Code starts the server. Watch the startup output. You should see all nine tool names listed.
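As a sanity check, you can also compare the registered names against the expected nine yourself. A minimal sketch, assuming you can collect the registered tool names into a set; the helper name `check_registration` is hypothetical, not part of the MCP SDK:

```python
# The nine tool names this chapter expects to see at startup.
EXPECTED_TOOLS = {
    "register_learner", "get_learner_state", "update_progress",
    "get_chapter_content", "get_exercises", "generate_guidance",
    "assess_response", "submit_code", "get_upgrade_url",
}

def check_registration(registered: set) -> list:
    """Return the expected tools missing from the registry, sorted.

    An empty list means all nine tools are wired in.
    """
    return sorted(EXPECTED_TOOLS - registered)
```

If the list comes back non-empty, the missing names tell you exactly which module failed to load.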

What to check:

| Expected Tool | Built In |
| --- | --- |
| register_learner | Module 9.3, Chapter 3 |
| get_learner_state | Module 9.3, Chapter 3 |
| update_progress | Module 9.3, Chapter 3 |
| get_chapter_content | Module 9.3, Chapter 4 |
| get_exercises | Module 9.3, Chapter 4 |
| generate_guidance | Module 9.3, Chapter 5 |
| assess_response | Module 9.3, Chapter 5 |
| submit_code | Module 9.3, Chapter 6 |
| get_upgrade_url | Module 9.3, Chapter 6 |

If you see all nine: move to Step 2.

If a tool is missing or the server fails to start, read the error message. Common problems at this stage:

Import conflicts. Two tool modules import the same helper differently, or a circular import between modules prevents the server from loading. Paste the traceback to Claude Code:

text
The server failed to start.

Traceback Analysis:
[Paste Traceback Here]

Objective: Resolve any import conflicts, circular dependencies, or resource path issues so that all 9 tools load successfully during initialization.

Missing data directories. A tool tries to read from data/ or content/ but the directory does not exist yet. Claude Code can create the directory structure or add a check that creates it on first run.

Shared state file locks. Two tools try to open the same JSON file at the same time during registration. This is rare in single-process servers but can happen if the startup sequence initializes state. Describe the behavior to Claude Code and let it restructure the file access.

Step 2: Run a Complete Tutoring Flow

Now call every tool in the order a real tutoring session would use them. Ask Claude Code:

text
Conduct a Full Cycle Verification Run of all 9 tools.

Sequence to Execute:
1. register_learner (Name: "Test Student")
2. get_learner_state (Using ID from step 1)
3. get_chapter_content (Chapter 1)
4. generate_guidance (Predict stage)
5. assess_response ("I think variables store data")
6. update_progress (Record interaction)
7. get_exercises (Chapter 1)
8. submit_code (print("hello world"))
9. get_upgrade_url (Check upgrade state)

Output: Show the individual response for each tool in the chain.

Claude Code calls each tool and shows the output. You built each of these tools in Module 9.3, Chapters 3 through 6, so the individual responses should look familiar. What is new here is the sequence: each tool reads or writes state that the next tool depends on. If the chain breaks at any point, you will see it in the output.
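The chain can also be expressed as data, which makes it easy to see exactly where a run stops. A sketch of such a driver; `call_tool` stands in for however your client invokes the server, and the argument names here are illustrative, not your tools' actual signatures:

```python
# The nine-step happy path as (tool_name, arguments) pairs.
SEQUENCE = [
    ("register_learner", {"name": "Test Student"}),
    ("get_learner_state", {}),  # learner_id carried over from step 1
    ("get_chapter_content", {"chapter_number": 1}),
    ("generate_guidance", {"stage": "predict"}),
    ("assess_response", {"answer": "I think variables store data"}),
    ("update_progress", {}),
    ("get_exercises", {"chapter_number": 1}),
    ("submit_code", {"code": 'print("hello world")'}),
    ("get_upgrade_url", {}),
]

def run_chain(call_tool):
    """Walk the sequence in order; stop at the first failure.

    Returns (steps completed, responses collected so far), so a partial
    result pinpoints exactly which link in the chain broke.
    """
    responses = []
    for completed, (name, args) in enumerate(SEQUENCE):
        try:
            responses.append(call_tool(name, args))
        except Exception:
            return completed, responses
    return len(SEQUENCE), responses
```

A clean run returns `9` completed steps; anything less names the first broken link by position.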

Step 3: Fix What Breaks

If every tool returned a valid response, you are done with the verification. Skip to the closing.

If any tool failed, describe the error to Claude Code:

text
Fix an integration failure in the 9-tool chain.

Context: Tool [Name] failed with the following error:
[Paste Error]

Previous Step: Tool [Name] returned [Brief Output].

Task: Identify the state mismatch or contract drift between the tools. Fix the logic to ensure the entire 9-tool sequence completes without interruption.

Give Claude Code the context of where in the sequence the failure happened. A tool that works in isolation might fail when called after another tool has modified the shared state. The sequence matters because each tool depends on what the previous tools wrote.

Common sequence failures:

State not found. get_learner_state fails because register_learner wrote the ID to a different key than get_learner_state expects. This is a contract mismatch between tools built in different chapters.
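The usual fix is to make both tools import one shared constant instead of each hard-coding its own key string. An illustrative sketch; the key and function names are hypothetical:

```python
# One shared contract, imported by both the writer and the reader,
# so the two tools can never drift apart on the key name.
LEARNER_ID_KEY = "learner_id"

def save_learner(state: dict, learner_id: str) -> None:
    """Writer side (register_learner): store the ID under the shared key."""
    state[LEARNER_ID_KEY] = learner_id

def load_learner(state: dict):
    """Reader side (get_learner_state): look up the same shared key."""
    return state.get(LEARNER_ID_KEY)
```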

Content directory empty. get_chapter_content returns nothing because the sample content files from Module 9.3, Chapter 4 are missing or in the wrong path. Verify the content/chapters/ directory has your markdown files.

Wrong stage passed. generate_guidance rejects the stage parameter because the tool expects "predict" but the caller sent "Predict" (capitalization mismatch). Small contract issues like this surface only in integration.
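A cheap guard is to normalize the stage before validating it. A sketch; only "predict" appears in this chapter, so the other stage names here are placeholders for whatever your spec defines:

```python
# Placeholder stage names; replace with the stages your spec defines.
VALID_STAGES = {"predict", "explain", "apply"}

def normalize_stage(stage: str) -> str:
    """Accept 'Predict', 'PREDICT', ' predict ', etc.; reject unknown stages."""
    s = stage.strip().lower()
    if s not in VALID_STAGES:
        raise ValueError(
            f"unknown stage: {stage!r}; expected one of {sorted(VALID_STAGES)}"
        )
    return s
```

With normalization at the tool boundary, a caller sending "Predict" no longer trips the integration run.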

After each fix, run the full sequence again from the beginning. The goal is one clean run through all nine tools.

Try With AI

Exercise 1: The 10th Tool Test

text
Conduct an Out-of-Order Stress Test.

Analysis: What happens if tools are called outside of the happy-path sequence?

Scenario: Evaluate the system's response to:
- Calling assess_response before generate_guidance.
- Requesting get_exercises for a blocked (paid) chapter.

Task: Predict the behavior for 5 edge-case sequences and verify they handle state exceptions gracefully.

What you are learning: A product does not control the order users call tools. The agent might call assess_response before generate_guidance if the user jumps ahead. Each tool should handle unexpected sequences gracefully, returning an error or a sensible default rather than crashing.
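One graceful-failure pattern: when a precondition is missing, return a structured error instead of raising. A simplified stand-in for the real `assess_response`, not the implementation from Module 9.3, Chapter 5; the field names are illustrative:

```python
def assess_response(state: dict, answer: str) -> dict:
    """Assess a learner answer, tolerating out-of-order calls."""
    if "guidance" not in state:  # generate_guidance has not run yet
        return {
            "ok": False,
            "error": "no_active_guidance",
            "hint": "Call generate_guidance before assess_response.",
        }
    return {"ok": True, "assessment": f"Evaluated: {answer}"}
```

The agent can read the `hint` field and recover by calling the prerequisite tool, instead of surfacing a crash to the user.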

Exercise 2: Comparing Specs to Reality

text
Verify Implementation Integrity against the Module 9.3, Chapter 2 spec.

Spec Review (get_chapter_content):
- Inputs: chapter_number (int), learner_id (string).
- Logic: Free tier limited to chapters 1-5; Paid gets full access.

Task: Compare the actual runtime behavior observed in Step 2 with this specification. Identify any drift in tier gating or parameter handling.

What you are learning: The spec from Module 9.3, Chapter 2 is the contract. Integration testing is where you verify the implementation honors that contract. Checking spec against behavior is the core of the describe-steer-verify workflow. If the behavior drifts from the spec, the spec or the code needs to change.
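The tier-gating half of that contract is small enough to read as code. A sketch of the rule as stated in the spec review above; the tier labels "free" and "paid" are assumptions about how your implementation names them:

```python
# From the spec: free tier is limited to chapters 1-5; paid gets full access.
FREE_TIER_MAX_CHAPTER = 5

def can_access(chapter_number: int, tier: str) -> bool:
    """Return True if a learner on this tier may read this chapter."""
    if tier == "paid":
        return True
    return chapter_number <= FREE_TIER_MAX_CHAPTER
```

If the running server lets a free-tier learner open chapter 6, either this rule or the implementation has drifted, and one of them must change.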

Exercise 3: Planning the OpenClaw Connection

text
Plan the OpenClaw Gateway Integration.

Objective: Transition from local terminal testing to WhatsApp mobile verification.

Task: Describe:
1. The exact sequence of openclaw terminal commands to register the server.
2. The specific dashboard indicators that signify a healthy multi-tool connection.
3. The ideal first WhatsApp message to trigger the primary register_learner -> get_learner_state dependency chain.

What you are learning: The connection pattern is identical to Module 9.2: start the server, run openclaw mcp set, restart the gateway, check the dashboard. Planning the next step before executing it reinforces the workflow and helps you anticipate problems. The WhatsApp message you choose determines which tool the agent calls first.


James ran the sequence. register_learner returned a learner_id. get_learner_state showed chapter 1, predict stage, 0.5 confidence. get_chapter_content returned the first chapter. generate_guidance produced a prediction prompt. assess_response evaluated his test answer. update_progress recorded the interaction. get_exercises pulled practice problems. submit_code ran his print statement and returned the output. get_upgrade_url gave him a placeholder URL.

"All nine. Working together."

Emma looked at the terminal output. "When I built my first 9-tool server, I had three circular import errors and a shared state race condition. Yours started on the first try because Claude Code structured the modules properly." She paused. "I spent an afternoon untangling mine. You spent twenty minutes verifying yours."

"So it is done?"

"Locally." Emma closed her laptop halfway. "Module 9.3, Chapter 8, you connect this to OpenClaw and test from your phone. That is a different kind of test. Your tools work when you control the input. When the agent decides which tool to call based on a WhatsApp message, you find out if your tool descriptions are good enough."

James looked at his terminal. Nine tools, nine responses, zero errors. "I will take that test."