USMAN’S INSIGHTS
AI ARCHITECT
The Spec-First Launch: Shipping Your Production-Ready Task API
By Muhammad Usman Akbar

© 2026 Muhammad Usman Akbar. All rights reserved.

# Capstone: Containerize Your API

Throughout this chapter, you've built Docker knowledge step by step: container fundamentals, Dockerfile syntax, lifecycle management, multi-stage builds. Now it's time to apply everything to a real production scenario.

In Chapter 70, you built a Task API with SQLModel and Neon PostgreSQL. It works on your machine. But "works on my machine" doesn't ship products. Your teammates can't run it without matching your Python version, installing the same dependencies, and configuring their environment variables.

This capstone changes that. You'll write a specification FIRST, then containerize your API using the patterns from this chapter. The result: a portable container image that runs identically on your laptop, a teammate's machine, or a cloud server.

The specification-first approach is critical. Jumping straight to code is the Vibe Coding anti-pattern. Writing the spec first forces you to think about constraints (image size, security, configuration) before touching any Docker commands.


## Phase 1: Write the Specification FIRST

Before any implementation, you write a specification. This is the specification-first approach that separates professional development from Vibe Coding. The spec defines WHAT you're building and HOW you'll know it works.

Create `containerization-spec.md` in your project directory:

```markdown
# Containerization Specification: Task API

## Intent

Containerize the SQLModel + Neon Task API for production deployment.

**Business Goal**: Enable any developer to run this API without environment setup.

**Technical Goal**: Create a portable, optimized container image that works anywhere Docker runs.

## Constraints

### Image Size

- **Target**: Under 200MB final image
- **Rationale**: Smaller images push/pull faster, reduce storage costs

### Security

- **Non-root user**: Container runs as unprivileged user
- **Health check**: Built-in endpoint for orchestrator monitoring
- **No secrets in image**: Database URL passed at runtime

### Configuration

- **DATABASE_URL**: Environment variable (not hardcoded)
- **PORT**: Configurable, defaults to 8000

### Base Image

- **Choice**: python:3.12-alpine (small, secure)
- **Alternative**: python:3.12-slim (if Alpine compatibility issues)

## Success Criteria

- [ ] Container builds successfully without errors
- [ ] Image size under 200MB (verify with `docker images`)
- [ ] All CRUD endpoints work when running containerized
- [ ] Health check endpoint responds at `/health`
- [ ] Container can connect to Neon database with provided DATABASE_URL
- [ ] Image can be pushed to registry (Docker Hub or GHCR)
- [ ] Image can be pulled and run on different machine
- [ ] Container runs as non-root user

## Non-Goals (What We're NOT Doing)

- Docker Compose multi-service setup (separate lesson)
- Kubernetes deployment (Chapter 80)
- CI/CD automation (future topic)
- GPU support (not needed for this API)

## Dependencies

- SQLModel Task API code from Chapter 70
- Neon PostgreSQL database with connection string
- Docker Desktop installed and running
- Registry account (Docker Hub or GitHub)
```

**Why specification first?**

Without a spec, you'd start typing FROM python:3.12 and figure things out as you go. That's Vibe Coding. You might forget security constraints. You might not consider image size until it's 1.2GB. You might hardcode secrets.

The spec makes constraints explicit BEFORE you start. It's your contract with yourself.
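And because the success criteria are written as markdown checkboxes, the contract is machine-readable. A toy sketch (`unchecked` is my own helper, not part of the chapter's workflow) that lists whatever remains unchecked:

```python
# Toy helper (my own, not from the chapter): scan the spec's Success
# Criteria checkboxes and report anything still unchecked.
def unchecked(spec_md: str) -> list[str]:
    """Return the text of every '- [ ]' checkbox line in a markdown spec."""
    return [line[6:] for line in spec_md.splitlines() if line.startswith("- [ ]")]

spec = """\
- [x] Container builds successfully without errors
- [ ] Image size under 200MB (verify with `docker images`)
- [x] Health check endpoint responds at `/health`"""

print(unchecked(spec))  # ['Image size under 200MB (verify with `docker images`)']
```

You will do exactly this audit by hand in the final checklist at the end of the capstone.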


## Phase 2: Prepare the Application

Before writing the Dockerfile, ensure your Task API code is ready for containerization.

Your project should have this structure:

```text
task-api/
├── main.py                    # FastAPI application
├── models.py                  # SQLModel Task definition
├── database.py                # Engine and session management
├── config.py                  # Settings with DATABASE_URL
├── requirements.txt           # Dependencies
└── containerization-spec.md   # The spec you just wrote
```

**Verify `requirements.txt` includes all dependencies:**

```text
fastapi==0.115.0
uvicorn==0.30.0
sqlmodel==0.0.22
psycopg2-binary==2.9.9
pydantic-settings==2.5.2
```
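Notice that every dependency is pinned with `==`; exact pins are what make the image build reproducible. If you want to guard that property mechanically, a small hypothetical check (`all_pinned` is my own name):

```python
# Hypothetical guard (not from the chapter): flag any requirements line
# that is not pinned to an exact version with '=='.
def all_pinned(requirements: str) -> bool:
    """True if every non-comment, non-blank line pins an exact version."""
    lines = [ln.strip() for ln in requirements.splitlines()]
    return all("==" in ln for ln in lines if ln and not ln.startswith("#"))

reqs = """\
fastapi==0.115.0
uvicorn==0.30.0
sqlmodel==0.0.22
psycopg2-binary==2.9.9
pydantic-settings==2.9.9"""

print(all_pinned(reqs))       # True
print(all_pinned("fastapi"))  # False: unpinned dependency
```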

**Update `config.py` to read `DATABASE_URL` from the environment:**

```python
# config.py
from functools import lru_cache

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    database_url: str

    class Config:
        env_file = ".env"


@lru_cache
def get_settings() -> Settings:
    return Settings()
```

**Add a health check endpoint to `main.py`:**

```python
# Add this to main.py (if not already present)
@app.get("/health")
def health_check():
    """Health check endpoint for container orchestrators."""
    return {"status": "healthy", "service": "task-api"}
```

**Output:**

```json
{"status": "healthy", "service": "task-api"}
```
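Because the handler is a plain function, its contract can be unit-tested without starting a server. A self-contained sketch (the `StubApp` class is my own stand-in for FastAPI so the snippet runs on its own; in the real project you would import `app` from `main.py`):

```python
# Self-contained sketch: StubApp stands in for FastAPI so this file runs
# without the framework installed; it registers nothing, it only returns
# the decorated function unchanged, which is all this test needs.
class StubApp:
    def get(self, path: str):
        def register(handler):
            return handler
        return register

app = StubApp()

@app.get("/health")
def health_check():
    """Health check endpoint for container orchestrators."""
    return {"status": "healthy", "service": "task-api"}

# The orchestrator contract: this exact shape, with no database call involved.
print(health_check())  # {'status': 'healthy', 'service': 'task-api'}
```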

## Phase 3: Apply Multi-Stage Build Pattern

Now apply the multi-stage build pattern from earlier in this chapter. Reference your specification: under 200MB, Alpine base, non-root user.

Create the `Dockerfile`:

```dockerfile
# =============================================================================
# Stage 1: Build Stage
# Purpose: Install dependencies with build tools (discarded after build)
# =============================================================================
FROM python:3.12-alpine AS builder

WORKDIR /app

# Install UV for fast dependency installation
RUN pip install --no-cache-dir uv

# Copy requirements first (layer caching)
COPY requirements.txt .

# Install dependencies into the system site-packages
RUN uv pip install --system --no-cache -r requirements.txt

# =============================================================================
# Stage 2: Runtime Stage
# Purpose: Minimal production image with only necessary files
# =============================================================================
FROM python:3.12-alpine

WORKDIR /app

# Create non-root user for security
RUN adduser -D -u 1000 appuser

# Copy installed packages from builder
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy application code
COPY main.py .
COPY models.py .
COPY database.py .
COPY config.py .

# Set ownership to non-root user
RUN chown -R appuser:appuser /app

# Switch to non-root user
USER appuser

# Environment configuration
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# Expose port (documentation only; does not publish)
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8000/health || exit 1

# Run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

**Key design decisions (trace back to spec):**

| Spec Requirement | Dockerfile Implementation |
| :--- | :--- |
| Under 200MB | `python:3.12-alpine` base, multi-stage build |
| Non-root user | `adduser` creates `appuser`, then `USER appuser` |
| Health check | `HEALTHCHECK` instruction with `wget` |
| No secrets in image | DATABASE_URL passed at runtime via `-e` flag |
| Configurable port | Exposed via `EXPOSE`, configurable in `CMD` |
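One constraint the Dockerfile alone cannot enforce is "no secrets in image": `COPY` can only leak what the build context contains, so it is worth excluding `.env` explicitly. A minimal `.dockerignore` along these lines (my addition; the chapter does not show one):

```text
# .dockerignore -- keep secrets and local clutter out of the build context
.env
__pycache__/
*.pyc
.git/
containerization-spec.md
```

With this in place, even a careless `COPY . .` could not bake your local `.env` into an image layer.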

## Phase 4: Build and Validate Locally

Build the image and validate it against the success criteria from your spec.

**Build the image:**

```bash
docker build -t task-api:v1 .
```

**Output:**

```text
[+] Building 12.3s (15/15) FINISHED
 => [internal] load build definition from Dockerfile
 => [builder 1/4] FROM python:3.12-alpine
 => [builder 2/4] RUN pip install --no-cache-dir uv
 => [builder 3/4] COPY requirements.txt .
 => [builder 4/4] RUN uv pip install --system --no-cache -r requirements.txt
 => [stage-1 1/7] FROM python:3.12-alpine
 => [stage-1 2/7] COPY --from=builder /usr/local/lib/python...
 => exporting to image
```

**Check image size (spec: under 200MB):**

```bash
docker images task-api:v1
```

**Output:**

```text
REPOSITORY   TAG   IMAGE ID       CREATED          SIZE
task-api     v1    a1b2c3d4e5f6   30 seconds ago   145MB
```

145MB is well under the 200MB target from your specification.
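If you want the size check to be scriptable rather than eyeballed, the SIZE string that `docker images` prints can be parsed and compared to the budget. A hypothetical helper (`within_budget` is my own name, not a Docker API) that could back a CI gate:

```python
# Hypothetical helper (not from the chapter): parse a docker-style size
# string such as '145MB' or '1.2GB' and compare it to the spec's budget.
def within_budget(size_str: str, limit_mb: float = 200) -> bool:
    """True if the reported image size is at or under limit_mb megabytes."""
    units = {"KB": 0.001, "MB": 1.0, "GB": 1000.0}
    for unit, to_mb in units.items():
        if size_str.upper().endswith(unit):
            return float(size_str[: -len(unit)]) * to_mb <= limit_mb
    raise ValueError(f"unrecognized size: {size_str!r}")

print(within_budget("145MB"))  # True: the capstone image passes
print(within_budget("1.2GB"))  # False: the unoptimized single-stage outcome
```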

**Run the container with DATABASE_URL:**

```bash
docker run -d \
  -p 8000:8000 \
  -e DATABASE_URL="postgresql://user:pass@ep-xxx.region.aws.neon.tech/neondb?sslmode=require" \
  --name task-api-container \
  task-api:v1
```

**Verify container is running:**

```bash
docker ps
```

**Output:**

```text
CONTAINER ID   IMAGE         COMMAND                 STATUS          PORTS
f7g8h9i0j1k2   task-api:v1   "uvicorn main:app..."   Up 10 seconds   0.0.0.0:8000->8000/tcp
```

**Test the health check (spec: health endpoint responds):**

```bash
curl http://localhost:8000/health
```

**Output:**

```json
{"status":"healthy","service":"task-api"}
```

**Test CRUD endpoints (spec: all endpoints work):**

```bash
# Create a task
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Test containerized API"}'

# List tasks
curl http://localhost:8000/tasks
```

**Output:**

```json
{"id":1,"title":"Test containerized API","description":null,"status":"pending","created_at":"2024-01-15T10:30:00"}
```

**Verify non-root user (spec: container runs as non-root):**

```bash
docker exec task-api-container whoami
```

**Output:**

```text
appuser
```

## Phase 5: Push to Container Registry

Your image works locally. Now push it to a registry so anyone can pull and run it.

### Option A: Docker Hub

**Step 1: Log in to Docker Hub**

```bash
docker login
```

Enter your Docker Hub username and password when prompted.

**Step 2: Tag the image for your Docker Hub account**

```bash
docker tag task-api:v1 yourusername/task-api:v1
docker tag task-api:v1 yourusername/task-api:latest
```

Replace `yourusername` with your actual Docker Hub username.

**Step 3: Push to Docker Hub**

```bash
docker push yourusername/task-api:v1
docker push yourusername/task-api:latest
```

**Output:**

```text
The push refers to repository [docker.io/yourusername/task-api]
a1b2c3d4e5f6: Pushed
b2c3d4e5f6a7: Pushed
v1: digest: sha256:abc123... size: 1234
```

### Option B: GitHub Container Registry (GHCR)

**Step 1: Create a personal access token**

Go to GitHub Settings > Developer settings > Personal access tokens > Generate new token, and select the `write:packages` scope.

**Step 2: Log in to GHCR**

```bash
echo $GITHUB_TOKEN | docker login ghcr.io -u yourusername --password-stdin
```

**Step 3: Tag and push**

```bash
docker tag task-api:v1 ghcr.io/yourusername/task-api:v1
docker push ghcr.io/yourusername/task-api:v1
```
---

## Phase 6: Cross-Machine Validation

The ultimate test: can someone else run your container? This validates the spec requirement "Image can be pulled and run on different machine."

**On a different machine (or a cloud VM):**

**Step 1: Pull the image**

```bash
docker pull yourusername/task-api:v1
```

**Step 2: Run with your Neon DATABASE_URL**

```bash
docker run -d \
  -p 8000:8000 \
  -e DATABASE_URL="postgresql://user:pass@ep-xxx.region.aws.neon.tech/neondb?sslmode=require" \
  --name task-api \
  yourusername/task-api:v1
```

**Step 3: Verify endpoints work**

```bash
curl http://localhost:8000/health
curl http://localhost:8000/tasks
```
**What you just proved:** "Works on my machine" is now "Works everywhere Docker runs." Your teammate doesn't need:

- The same Python version
- The same operating system
- To run `pip install` for 50 packages
- To configure environment variables manually

They run one command, and the entire environment is identical to yours.

---

## Specification Checklist: Final Validation

Go back to your specification and verify each success criterion:

| Success Criterion | Status | Evidence |
| :--- | :--- | :--- |
| Container builds successfully | PASS | `docker build` completed without errors |
| Image size under 200MB | PASS | `docker images` shows 145MB |
| All CRUD endpoints work | PASS | curl commands return expected responses |
| Health check responds | PASS | `/health` returns `{"status":"healthy"}` |
| Connects to Neon database | PASS | Tasks persist across container restarts |
| Pushed to registry | PASS | Image visible on Docker Hub/GHCR |
| Runs on different machine | PASS | Pulled and executed successfully |
| Runs as non-root | PASS | `whoami` returns `appuser` |

**All criteria met. Specification satisfied.**

---

## Common Issues and Solutions

### Issue: Container exits immediately

**Check logs:**

```bash
docker logs task-api-container
```

**Common causes:**

- Missing DATABASE_URL environment variable
- Invalid database connection string
- Python import errors

### Issue: Cannot connect to database

**Verify DATABASE_URL is passed correctly:**

```bash
docker exec task-api-container env | grep DATABASE
```

**Ensure `sslmode=require` is present** (required for Neon):

```text
DATABASE_URL=postgresql://user:pass@host/db?sslmode=require
```
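You can also sanity-check the connection string before the container ever starts. A small stdlib sketch (`neon_url_ok` is a hypothetical helper, not part of the chapter's code):

```python
# Hypothetical helper (my own): validate the two things Neon connection
# strings most often trip over, before passing the URL to docker run -e.
from urllib.parse import parse_qs, urlparse

def neon_url_ok(url: str) -> bool:
    """True if the URL has a postgres scheme and sslmode=require."""
    parsed = urlparse(url)
    if parsed.scheme not in ("postgresql", "postgres"):
        return False
    return parse_qs(parsed.query).get("sslmode") == ["require"]

print(neon_url_ok("postgresql://user:pass@host/db?sslmode=require"))  # True
print(neon_url_ok("postgresql://user:pass@host/db"))                  # False
```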

### Issue: Health check failing

**Debug by running the health check manually:**

```bash
docker exec task-api-container wget --spider http://localhost:8000/health
```

**Check if uvicorn started:**

```bash
docker logs task-api-container | head -20
```

### Issue: Permission denied errors

**Check if the non-root user has access to the files:**

```bash
docker exec task-api-container ls -la /app
```

All files should be owned by `appuser`.

---