USMAN’S INSIGHTS
AI ARCHITECT
Escape Vendor Lock-In — Your Claude Skills Run Anywhere
Muhammad Usman Akbar Entity Profile

Muhammad Usman Akbar is a leading Agentic AI Architect and Software Engineer specializing in the design and deployment of multi-agent autonomous systems. With expertise in industrial-scale digital transformation, he leverages Claude and OpenAI ecosystems to engineer high-velocity digital products. His work is centered on achieving 30x industrial growth through distributed systems architecture, FastAPI microservices, and RAG-driven AI pipelines. Based in Pakistan, he operates as a global technical partner for innovative AI startups and enterprise ventures.

© 2026 Muhammad Usman Akbar. All rights reserved.


The Cross-Vendor Landscape: Your Skills Are Portable

You've spent this entire chapter learning Claude Code. Here's the secret: you weren't just learning one tool.

Every concept you mastered -- CLAUDE.md project instructions, Skills, MCP servers, hooks, subagents, agent teams -- is part of an emerging industry standard. OpenAI's Codex CLI has its own version of each. Google's Gemini CLI has its own version. And in December 2025, several of these vendors created the Agentic AI Foundation (AAIF) under the Linux Foundation, donating key projects to seed open, vendor-neutral standards for agentic AI.

MIT Technology Review named "Generative Coding" one of its 10 Breakthrough Technologies of 2026. AI now writes approximately 30% of Microsoft's code and more than 25% of Google's. The tools you learned in this chapter are not a niche experiment. They are the new baseline for how software gets built.


The Market in February 2026

The agentic coding market has consolidated into two leaders and several strong contenders.

Tier 1: The Two Leaders

Anthropic (Claude Code)

Analyst estimates put Claude Code at ~$1B in annual recurring revenue as of early February 2026 (Sacra). SemiAnalysis estimates that Claude Code accounts for ~4% of all public GitHub commits (SemiAnalysis, Feb 5, 2026). Claude Opus 4.5 holds the top spot on SWE-bench Verified at 80.9%. Philosophy: developer-in-the-loop, local terminal execution, accuracy-first.

OpenAI (Codex)

Codex CLI is open source, built in Rust, and installable via npm i -g @openai/codex (GitHub). OpenAI launched a macOS desktop app on February 2, 2026, and released GPT-5.3-Codex on February 5, 2026. Codex supports cloud sandbox execution (the default for delegated tasks) and also provides local CLI modes. Philosophy: parallel, asynchronous, fire-and-forget delegation.

Tier 2: Strong Contenders

| Tool | Key Stat | Positioning |
| --- | --- | --- |
| Cursor | ~$1B ARR, ~$29.3B valuation (analyst est., Sacra) | Fastest SaaS growth in history (SaaStr). IDE-first experience. |
| GitHub Copilot | 68% developer usage, ~$400M revenue 2025 (a16z) | Agent mode GA. Massive distribution via GitHub ecosystem. |
| Google Gemini CLI | Open source (Apache 2.0), free tier (1,000 req/day), 1M token context | Accessible, open, enormous context window. |

Tier 3: Emerging Players

Amazon Q Developer and Cognition's Devin (Cognition acquired the Windsurf product and brand) round out the landscape.


The Concept Mapping Table

This is the most important table in this lesson. Everything you learned in Chapter 3 has equivalents across the industry:

| What You Learned | Claude Code | OpenAI Codex | Google Gemini CLI | Open Standard |
| --- | --- | --- | --- | --- |
| Project instructions | CLAUDE.md | AGENTS.md | GEMINI.md | AGENTS.md (AAIF) |
| Agent Skills | .claude/skills/SKILL.md | .agents/skills/SKILL.md | .gemini/skills/SKILL.md | Agent Skills spec (agentskills.io) |
| Tool connectivity | MCP servers in settings.json | MCP servers in config.toml | MCP servers in settings.json | MCP (Linux Foundation) |
| Human-in-the-loop control | allowedTools, permissions | Approval modes (suggest / auto-edit / full-auto) | Tool approval prompts | Vendor-specific (no standard yet) |
| Context hierarchy | Global, Project, Directory | Global, Project | Global, Project, Directory | Vendor-specific (no standard yet) |
| Subagents | Task tool with subagent_type | Cloud sandbox tasks | Not yet available | Vendor-specific (no standard yet) |
| Agent Teams | TeamCreate, TaskCreate, SendMessage | macOS app parallel agents | Not yet available | Vendor-specific (no standard yet) |
| Hooks | Pre/Post tool hooks in settings.json | Not yet available | Not yet available | Vendor-specific (no standard yet) |
| IDE integration | VS Code extension | VS Code extension | VS Code extension | Vendor-specific (no standard yet) |
| Desktop app | Claude Desktop / Cowork | Codex macOS app | Not yet available | Vendor-specific (no standard yet) |

The pattern: what you know transfers. The directory name changes (.claude/ vs .agents/ vs .gemini/), but the concepts are the same.
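That pattern can be made concrete in a few lines of shell. The sketch below is illustrative, not an official migration tool: the skill name `my-skill` and its contents are placeholders, and it assumes only what the mapping table states, that the SKILL.md file itself is unchanged across vendors and only the directory prefix moves.

```shell
#!/bin/sh
# Sketch: "port" a Claude Code Skill to the Codex and Gemini CLI layouts.
# The skill name (my-skill) and its contents are placeholders; only the
# vendor directory prefix changes -- SKILL.md itself is copied verbatim.
set -eu

SKILL="my-skill"
mkdir -p ".claude/skills/$SKILL" ".agents/skills/$SKILL" ".gemini/skills/$SKILL"

# Stand-in for a skill you already have under .claude/skills/.
cat > ".claude/skills/$SKILL/SKILL.md" <<'EOF'
---
name: my-skill
description: Placeholder skill used to demonstrate porting.
---
EOF

# The entire "port": same file, different directory.
cp ".claude/skills/$SKILL/SKILL.md" ".agents/skills/$SKILL/SKILL.md"
cp ".claude/skills/$SKILL/SKILL.md" ".gemini/skills/$SKILL/SKILL.md"

# Verify the three copies are byte-identical.
cmp ".claude/skills/$SKILL/SKILL.md" ".agents/skills/$SKILL/SKILL.md"
cmp ".claude/skills/$SKILL/SKILL.md" ".gemini/skills/$SKILL/SKILL.md"
echo "skill ported to all three layouts"
```

If a vendor adds its own metadata on top of the shared spec, that delta is what you would edit after the copy; the body of the skill stays put.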


Standards Convergence: The Agentic AI Foundation

In December 2025, the biggest companies in AI did something unusual: they agreed on shared standards.

The Agentic AI Foundation (AAIF) formed under the Linux Foundation with platinum members including Anthropic, OpenAI, Google, Microsoft, AWS, Block, Bloomberg, and Cloudflare. The foundation governs three founding projects:

| Project | Created By | What It Standardizes | Adoption |
| --- | --- | --- | --- |
| MCP (Model Context Protocol) | Anthropic (donated) | Tool connectivity -- how agents talk to external services | 10,000+ active public servers, 97M monthly SDK downloads |
| AGENTS.md | OpenAI (donated) | Project instructions -- how agents understand your codebase | 60,000+ open source projects |
| goose | Block (donated) | Open agent runtime -- reference implementation for agentic workflows | Open source agent framework |

A fourth standard, Agent Skills (the SKILL.md format), was created by Anthropic on December 18, 2025, and has been adopted by OpenAI, Microsoft (GitHub Copilot), Cursor, Atlassian, and Figma. The specification lives at agentskills.io.

What this means for you: The Skills you built in this chapter using .claude/skills/ follow the same specification that Codex uses in .agents/skills/ and Gemini CLI uses in .gemini/skills/. Different directory names, same format. Your Skills are largely portable where the SKILL.md spec is followed; vendor-specific metadata and directory paths may differ slightly.
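For reference, here is a minimal SKILL.md sketched against the published Agent Skills format. The `commit-helper` name and its behavior are hypothetical examples; `name` and `description` in the YAML frontmatter are the core fields the spec defines, with the markdown body holding the instructions.

```markdown
---
name: commit-helper
description: Drafts a conventional-commit message from the staged git diff.
---

# Commit Helper

When asked for a commit message, read the staged diff and propose a
single conventional-commit subject line plus a short body.
```

Dropping this same file under `.claude/skills/commit-helper/`, `.agents/skills/commit-helper/`, or `.gemini/skills/commit-helper/` is, where the spec is followed, the whole port.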


Three Philosophies, One Ecosystem

Each tool reflects a different design philosophy. None is universally "best" -- they excel at different work.

| Aspect | Claude Code | OpenAI Codex | Gemini CLI |
| --- | --- | --- | --- |
| Philosophy | "Measure twice, cut once" | "Move fast, iterate" | "Open and accessible" |
| Execution | Local terminal | Cloud sandbox + local | Local CLI + cloud inference |
| Strengths | Deep reasoning, accuracy, self-correction | Parallel tasks, async delegation, speed | Free tier, 1M context, open source |
| Best for | Complex refactoring, architecture work | Batch operations, exploration | Budget-conscious teams, large codebases |
| Pricing | $20+/month subscription | $20-$200/month (via ChatGPT) | Free (1,000 req/day) |
| Open source | No | CLI is open source (Rust) | Yes (Apache 2.0) |

Professional developers increasingly use multiple tools for different strengths. Claude Code for the careful architecture work. Codex for parallelized bulk tasks. Gemini CLI for quick queries against massive codebases. This is "poly-agentic" development -- choosing the right tool for each task, not committing to one forever.


SWE-bench: The Coding Benchmark

SWE-bench is a benchmark that tests whether AI can solve real software engineering problems pulled from open source GitHub repositories. Unlike artificial coding challenges, SWE-bench tasks require reading existing code, understanding project context, and producing working fixes.

Multiple variants exist with different difficulty levels. SWE-bench Verified uses human-validated problems. SWE-bench Pro is harder, with more complex multi-file problems.

SWE-bench Verified Leaderboard (February 2026, source)

| Rank | Model | Score |
| --- | --- | --- |
| 1 | Claude Opus 4.5 | 80.9% |
| 2 | Claude Opus 4.6 | 80.8% |
| 3 | GPT-5.2 | 80.0% |
| 4 | Gemini 3 Flash | 78.0% |
| 5 | Claude Sonnet 4.5 | 77.2% |
| 6 | Gemini 3 Pro | 76.2% |

Important caveat: Companies report scores on different benchmark variants, making direct comparisons tricky. GPT-5.3-Codex scores 56.8% on SWE-bench Pro -- which is a harder test, not a worse score. When comparing models, always check which variant was used.


Why This Matters for Your Career

The patterns you learned in this chapter are not Claude Code patterns. They are industry patterns.

When you write a CLAUDE.md file, you are practicing the same skill as writing an AGENTS.md file for Codex or a GEMINI.md file for Gemini CLI. When you build a Skill in .claude/skills/, you can port it to Codex or Gemini CLI by moving the SKILL.md file to a different directory. When you connect an MCP server, that same server works with every tool that supports the protocol.

This portability exists because the industry converged. The AAIF ensures that MCP servers, AGENTS.md files, and Agent Skills work the same way regardless of which coding agent you choose. Your investment in learning these patterns compounds across every tool you touch.
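As one hedged illustration of that reuse, a project-level MCP configuration is just a small file naming a server command. The sketch below uses Claude Code's `.mcp.json` style; the `my-tools` server name and the `my-tools-server` command are hypothetical stand-ins, and Codex would describe the same server in its `config.toml` instead.

```json
{
  "mcpServers": {
    "my-tools": {
      "command": "my-tools-server",
      "args": ["--stdio"]
    }
  }
}
```

Because the server itself speaks the protocol, pointing a different agent at the same command reuses the server unchanged; only the config file that names it differs per tool.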

The developers who will thrive are not the ones who master one tool. They are the ones who understand the underlying patterns -- context files, skills, tool connectivity, orchestration -- and apply them wherever the work demands. That is what you built in this chapter.


A Note on Security

MCP and agentic tool connectivity expand what agents can do -- but they also expand the attack surface. When an agent can call external servers, read files, and execute commands, the consequences of a compromised or malicious tool server are significant: prompt injection, data exfiltration, and unintended code execution are all real risks.

As you work across tools and connect MCP servers, apply the same caution you would when installing any third-party dependency: review the server code before trusting it, run MCP servers in isolated environments where possible, and prefer servers from verified publishers. The MCP specification includes transport-level security, but the responsibility for evaluating trust ultimately rests with you.


Try With AI

```text
What are the architectural differences between you (Claude Code) and OpenAI's Codex CLI? Be specific about execution model, sandboxing, and where each tool runs code.
```

What you're learning: How to use an AI agent to analyze its own competitive landscape. Claude Code has direct knowledge of its own architecture and can reason about public information on competitors. This develops your ability to gather technical intelligence through AI conversation.

```text
I have a skill at .claude/skills/my-skill/SKILL.md. Show me how to create an equivalent for OpenAI Codex (in .agents/skills/) and for Gemini CLI (in .gemini/skills/). What changes are needed in each version?
```

What you're learning: Cross-vendor skill porting. The answer reveals how much of the SKILL.md format is universal (most of it) versus vendor-specific (directory path and minor configuration). This is the practical proof that your skills are portable.

```text
Search the web for the latest SWE-bench Verified leaderboard. How do Claude, GPT, and Gemini models compare? What should I consider beyond benchmark scores when choosing a coding agent?
```

What you're learning: Critical evaluation of AI benchmarks. Scores matter, but so do execution model, pricing, context window, and workflow fit. This prompt teaches you to make tool decisions based on multiple factors, not just a single number.


Further Reading

  • Agentic AI Foundation (Linux Foundation) -- AAIF founding announcement
  • Model Context Protocol (MCP) -- official specification
  • Agent Skills specification -- SKILL.md format
  • AGENTS.md -- cross-vendor project instructions standard
  • OpenAI Codex CLI -- open source repository
  • Google Gemini CLI -- open source repository
  • SWE-bench Verified leaderboard -- February 2026 snapshot

Snapshot disclaimer

The AI model and market landscape change rapidly. Figures in this lesson reflect snapshots from February 2026 and are cited to specific public sources. Check the linked references for the latest numbers. Benchmark scores are self-reported by model providers unless independently verified, and different evaluation variants (Verified, Pro, Lite) produce different results for the same models.


What's Next

You've completed the full Chapter 3 journey -- from your first Claude Code session through skills, MCP, hooks, plugins, agent teams, and now cross-vendor fluency. Next up: the Chapter Quiz (Lesson 34) to test your understanding across all 33 lessons.

Core Concept

The skills you learned in this chapter aren't Claude Code-specific—they're industry-standard patterns that transfer across OpenAI Codex, Google Gemini CLI, and all tools converging under the Agentic AI Foundation. Your investment in learning CLAUDE.md, Skills, MCP, and agent orchestration is portable across the entire agentic coding ecosystem.

Key Mental Models

  • Cross-Vendor Portability: CLAUDE.md → AGENTS.md → GEMINI.md, .claude/skills/ → .agents/skills/ → .gemini/skills/—same concepts, different directory names
  • Standards Convergence: Three founding projects (MCP, AGENTS.md, goose) plus Agent Skills spec mean tools are interoperable by design, not accident
  • Poly-Agentic Development: Professional developers use multiple tools for different strengths (Claude Code for careful architecture, Codex for parallel tasks, Gemini CLI for large codebases)—not commitment to one tool forever
  • Industry Benchmark: SWE-bench tests real software engineering (not artificial challenges)—but variants differ in difficulty, making direct comparisons tricky

Critical Patterns

  • The Concept Mapping Table (memorize this):

    • Project instructions: CLAUDE.md / AGENTS.md / GEMINI.md → AGENTS.md standard (AAIF)
    • Agent Skills: .claude/skills/SKILL.md → .agents/skills/SKILL.md → .gemini/skills/SKILL.md → Agent Skills spec (agentskills.io)
    • Tool connectivity: MCP servers in settings.json/config.toml → MCP standard (Linux Foundation)
    • Subagents/Teams: Claude Code and Codex both support, Gemini CLI doesn't (yet)
  • Three Philosophies:

    • Claude Code: "Measure twice, cut once"—local execution, deep reasoning, accuracy-first (best for complex refactoring)
    • OpenAI Codex: "Move fast, iterate"—cloud sandbox + local, parallel async tasks, speed-focused (best for batch operations)
    • Gemini CLI: "Open and accessible"—free tier, 1M token context, open source (best for budget-conscious teams, large codebases)
  • Market Positioning (February 2026):

    • Tier 1 Leaders: Anthropic (Claude Code at ~$1B ARR, ~4% of public GitHub commits) and OpenAI (Codex CLI open source, macOS desktop app)
    • Tier 2 Contenders: Cursor ($1B ARR), GitHub Copilot (68% developer usage), Gemini CLI (free tier)
    • Tier 3 Emerging: Amazon Q Developer, Devin (acquired Windsurf)

Common Mistakes

  • Treating learned skills as Claude Code-specific instead of recognizing them as industry patterns
  • Comparing benchmark scores across different SWE-bench variants (Verified vs Pro)—always check which test was used
  • Committing to one tool when poly-agentic approach (right tool for each task) is more effective
  • Thinking portability is future promise—standards convergence happened in December 2025, it's live now
  • Ignoring that AI writes ~30% of Microsoft code and 25%+ of Google code—this isn't experimental, it's the new baseline

Connections

  • Builds on: All Chapter 3 concepts (CLAUDE.md, Skills, MCP, hooks, plugins, subagents, teams) are now understood as portable industry patterns
  • Leads to: Chapter Quiz (Lesson 34) testing understanding across all 33 lessons, then application in later chapters
  • Career Impact: Investment in these patterns compounds across every tool you touch—learning one deeply means learning the underlying architecture of all agentic coding tools