USMAN’S INSIGHTS
AI ARCHITECT
The Spec Level Mismatch That's Silently Wrecking Every Project
Previous Chapter: Why Specs Beat Vibe Coding
Next Chapter: The Project Constitution

About the Author: Muhammad Usman Akbar

Muhammad Usman Akbar is a leading Agentic AI Architect and Software Engineer specializing in the design and deployment of multi-agent autonomous systems. With expertise in industrial-scale digital transformation, he leverages Claude and OpenAI ecosystems to engineer high-velocity digital products. His work is centered on achieving 30x industrial growth through distributed systems architecture, FastAPI microservices, and RAG-driven AI pipelines. Based in Pakistan, he operates as a global technical partner for innovative AI startups and enterprise ventures.

© 2026 Muhammad Usman Akbar. All rights reserved.


The Three Levels of SDD

In Lesson 1, you experienced why vibe coding fails: context loss, assumption drift, and pattern violations compound until your AI produces unusable output. Specifications solve this by giving Claude the complete picture upfront.

But not all specifications are created equal. Some are planning artifacts you throw away after implementation. Others become living documentation that evolves with your codebase. And at the experimental frontier, some teams treat specifications as the only artifact worth maintaining, regenerating code on demand.

Understanding where you operate on this spectrum helps you calibrate your effort. You don't need enterprise-grade living documentation for a weekend project. But you also shouldn't settle for throwaway specs for a system your team will maintain for years.

The Three Levels

| Level | Creation | Maintenance | Use Case |
|---|---|---|---|
| Spec-First | Spec guides implementation | Spec discarded after | Most common; quick tasks |
| Spec-Anchored | Spec written first | Both spec + code maintained | Team projects; living documentation |
| Spec-as-Source | Spec is primary artifact | Only spec edited; code regenerated | Experimental (Tessl approach) |

Let's examine each level.

Level 1: Spec-First (Most Common)

You write the spec. Claude implements it. You move on.

This is where most practitioners operate, and for good reason. For 80% of tasks, the specification has served its purpose the moment Claude finishes implementing it. The spec prevented vibe coding, ensured Claude had complete context, and resulted in working code.

After that? The spec becomes historical record at best, garbage at worst.

When Spec-First Works

  • Single-session tasks that won't require revisiting
  • Personal projects without team coordination
  • Prototypes and experiments
  • Bug fixes where the spec is the fix description

The Trade-off

Benefit: Zero maintenance overhead. Write once, implement once, done.

Cost: Six months later, when you need to modify the feature, you have no specification. You're reading code to understand intent, which is exactly the problem specifications solve.

Example: Personal AI Employees Report (Spec-First)

Specification
# Personal AI Employees Report Specification

## Intent
Write a one-time report analyzing the 2026 landscape of AI tools that function as digital employees (Claude Code, Cowork, etc.).

## Requirements
- Executive summary for quick scanning
- Tool comparison section (Claude Code, Cursor, Copilot)
- ROI analysis with concrete metrics
- Implementation risks and mitigations

## Constraints
- 2000 words maximum
- Target audience: CTOs evaluating tools
- No speculation—only documented capabilities

## Success Criteria
- Reader can compare tools on key dimensions
- ROI section includes specific cost/benefit numbers
- Risks are actionable, not vague warnings

Claude writes it. You publish it. The spec gets filed somewhere you'll never look again. This is perfectly appropriate for a one-time deliverable.

Level 2: Spec-Anchored (Team Standard)

Both specification and code are maintained artifacts.

When you work with a team, specifications become documentation. A new analyst joining in month six shouldn't need to reverse-engineer your report methodology from the final document. They should read the spec and understand how to update it.

When Spec-Anchored Works

  • Team projects with multiple contributors
  • Systems requiring compliance documentation
  • Products with long maintenance horizons
  • Features that will undergo multiple iterations

The Trade-off

Benefit: Specifications serve as onboarding documentation, architectural decision records, and implementation guides. When requirements change, you update the spec first, then the code, maintaining alignment.

Cost: Double maintenance burden. Every code change potentially requires a spec change. Specs and code can drift out of sync if discipline lapses.

The Discipline Requirement

Spec-Anchored only works if you enforce the discipline:

  1. Spec changes before code changes. Always.
  2. Code reviews check spec alignment. Reviewers verify the spec was updated.
  3. Specs live near code. Not in a separate wiki no one visits.

When teams adopt Spec-Anchored without this discipline, they get the worst outcome: outdated specs that actively mislead readers, combined with maintenance overhead that provides no value.
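Point 2 of this discipline can be partly automated at review time. A minimal sketch in Python; the spec.md-next-to-code convention and the file extensions below are illustrative assumptions, not a standard:

```python
from pathlib import PurePosixPath

# File extensions we treat as "implementation" -- an illustrative choice.
CODE_SUFFIXES = {".py", ".ts", ".go"}

def spec_update_missing(changed_files):
    """Given the files touched by a change, return the directories where
    code changed but the sibling spec.md did not (hypothetical convention:
    each feature directory keeps a living spec.md next to its code)."""
    code_dirs, spec_dirs = set(), set()
    for name in changed_files:
        path = PurePosixPath(name)
        if path.name == "spec.md":
            spec_dirs.add(str(path.parent))
        elif path.suffix in CODE_SUFFIXES:
            code_dirs.add(str(path.parent))
    return sorted(code_dirs - spec_dirs)
```

A reviewer, or a CI hook fed the output of `git diff --name-only`, can flag any directory this function returns and ask whether the spec genuinely needed no update.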

Example: The Same Report, Anchored

The spec from Level 1 becomes a living document—updated quarterly as the AI landscape evolves:

Specification
reports/
  personal-ai-employees/
    spec.md            # Living specification
    changelog.md       # History of updates
    decisions/         # Why we track these metrics
    2026-q1-report.md  # Generated from spec

When you later need to add a new tool to the comparison, you:

  1. Update spec.md with the new requirements
  2. Have Claude implement against the updated spec
  3. Commit both spec and code changes together

An analyst joining the team reads spec.md and understands the report methodology without reverse-engineering the final document.

Level 3: Spec-as-Source (Experimental)

The specification is the primary artifact. Code is regenerated on demand.

This is the frontier. Companies like Tessl are exploring a world where you never edit code directly. You edit specifications, and AI regenerates the implementation. Code becomes a build artifact, like compiled binaries.

The Appeal

Think about it: if Claude can generate code reliably from specifications, why maintain code at all? Code has bugs, requires refactoring, accumulates technical debt. Specifications express intent directly.

In this model:

  • You edit the spec to add a feature
  • AI regenerates affected code
  • Tests verify the generation
  • Code is never manually modified

The Problem: Determinism

Here's where Spec-as-Source breaks down in practice:

Identical specifications do not produce identical code.

Ask Claude to implement the same spec twice. You'll get functionally equivalent but syntactically different implementations. Variable names differ. Control flow varies. Comments appear or disappear.

For production systems, this creates problems:

  • Git diffs become meaningless (every regeneration is a massive diff)
  • Debugging becomes harder (which generation introduced the bug?)
  • Performance characteristics vary unpredictably
  • Third-party integrations may break on regeneration
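The variance is easy to demonstrate. The two functions below are stand-ins for two hypothetical regenerations of the same one-line spec ("return the even numbers from a list"): the behavior matches, the source text does not, so every regeneration would surface as a diff.

```python
import inspect

# Two stand-ins for separate AI regenerations of the same spec:
# "return the even numbers from a list".
def evens_v1(xs):
    return [x for x in xs if x % 2 == 0]

def evens_v2(numbers):
    result = []
    for n in numbers:  # same behavior, different names and control flow
        if n % 2 == 0:
            result.append(n)
    return result

sample = [1, 2, 3, 4, 5, 6]
assert evens_v1(sample) == evens_v2(sample)                        # functionally equivalent
assert inspect.getsource(evens_v1) != inspect.getsource(evens_v2)  # textually different
```

Run a text diff over the two sources and you get exactly the noisy, meaningless git history described above.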

The MDD Parallel

This isn't a new challenge. Model-Driven Development (MDD) promised the same transformation in the 2000s: write models, generate code, never touch implementation.

MDD failed to achieve mainstream adoption for the same reasons Spec-as-Source struggles:

  • Generated code needed manual patches for edge cases
  • Models couldn't express all implementation concerns
  • The abstraction leaked, requiring developers to understand both layers

AI-powered generation is more flexible than MDD code generators, but the determinism problem remains unsolved.

When to Consider Spec-as-Source

Despite challenges, Spec-as-Source makes sense for:

  • Highly repetitive code: CRUD operations, API clients, data transformations
  • Disposable code: Scripts, one-time migrations, proof-of-concepts
  • Environments with strong test coverage: When tests validate behavior, implementation variance matters less

For a production web application with years of expected maintenance? Spec-Anchored remains the safer choice.
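"Strong test coverage" here means tests that pin behavior rather than implementation details. A sketch of such a behavioral contract, using a made-up slugify spec (not an example from this chapter):

```python
import re

def behavioral_contract(slugify):
    """Checks any regenerated implementation must pass. The spec assumed
    here: lowercase the input, collapse whitespace runs to single hyphens."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spec   Driven  ") == "spec-driven"
    assert slugify("already-fine") == "already-fine"

# One possible generation; a later regeneration may differ textually
# but remains acceptable as long as the contract still passes.
def slugify_v1(title):
    return re.sub(r"\s+", "-", title.strip()).lower()

behavioral_contract(slugify_v1)
```

Because the contract never inspects the source, regeneration variance stops mattering: any implementation that passes is interchangeable.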

Choosing Your Level

Most practitioners should default to Spec-First and graduate to Spec-Anchored when:

  • Working with a team (2+ people)
  • Building something with 6+ month maintenance horizon
  • Creating systems that require compliance documentation
  • Developing features that will undergo multiple iterations

Reserve Spec-as-Source experimentation for:

  • Personal projects with strong test coverage
  • Highly repetitive, well-understood domains
  • Situations where you're willing to accept regeneration variance
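The guidance above can be collapsed into a rough decision helper. The thresholds are illustrative, not from the chapter:

```python
def recommend_sdd_level(team_size, maintenance_months,
                        needs_compliance_docs=False,
                        strong_tests=False, repetitive_domain=False):
    """Heuristic mapping of the chapter's criteria to a default SDD level."""
    # Team coordination, long horizons, or compliance push toward Spec-Anchored.
    if team_size >= 2 or maintenance_months >= 6 or needs_compliance_docs:
        return "Spec-Anchored"
    # Spec-as-Source experiments need both safety nets at once.
    if strong_tests and repetitive_domain:
        return "Spec-as-Source (experimental)"
    return "Spec-First"
```

For example, `recommend_sdd_level(team_size=8, maintenance_months=24)` lands on Spec-Anchored, matching the scaffolding-tool scenario in the prompts below.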

The Maturity Spectrum

Think of these levels as a maturity spectrum you can adopt gradually:

Week 1: Start with Spec-First. Write specs before asking Claude to implement. Experience the improvement over vibe coding.

Month 1: For projects you'll maintain, try Spec-Anchored. Keep specs alongside code. Update specs when requirements change.

Later: When you have strong test coverage and understand the trade-offs, experiment with Spec-as-Source for appropriate use cases.

You don't need to commit to one level for all projects. A solo weekend project is Spec-First. Your team's core product is Spec-Anchored. That AI-generated API client you regenerate weekly might be Spec-as-Source.

Try With AI

Prompt 1: Level Classification for Real Projects

Prompt
I'm building a CLI tool that generates project scaffolding for my team. It will be used by 8 developers, maintained for 2+ years, and needs clear documentation for onboarding new team members.

Which SDD level fits this scenario? Walk me through your reasoning, considering: team coordination needs, documentation value, and maintenance burden.

What you're learning: This prompt uses a concrete project with clear characteristics. Claude's analysis reveals how team size, maintenance horizon, and documentation needs map to SDD levels. Notice how "8 developers" and "onboarding" push toward Spec-Anchored.

Prompt 2: Trade-off Analysis

Prompt
Compare these two real scenarios:

Scenario A: Solo developer building a weekend side project to learn a new API. No plans to maintain it beyond the learning exercise.

Scenario B: Two-person startup building their MVP. Tight deadline, but they'll need to onboard contractors in 3 months.

For each, recommend an SDD level and explain what they'd lose by choosing a different level.

What you're learning: The same methodology applies differently based on context. Scenario A's "learning exercise" suggests Spec-First (or skip specs entirely). Scenario B's "onboard contractors" changes everything—even under time pressure, Spec-Anchored provides future value.

Prompt 3: Spec-as-Source Reality Check

Prompt
A team wants to use Spec-as-Source for their REST API client library. Their argument: "The API spec already exists as OpenAPI, so we should just regenerate the client whenever the spec changes."

What could go wrong? Give me three specific failure scenarios they haven't considered.

What you're learning: Spec-as-Source sounds appealing in theory. This prompt surfaces practical issues: determinism problems breaking git workflows, debugging challenges when generated code varies, and edge cases requiring manual patches that get overwritten on regeneration.