USMAN’S INSIGHTS
AI ARCHITECT

© 2026 Muhammad Usman Akbar. All rights reserved.


Docker Installation & Setup

Your FastAPI agent runs perfectly on your machine. But "works on my machine" doesn't scale to production or your team's machines. Docker solves this fundamental problem by packaging your agent, its dependencies, and its runtime into a container that runs identically everywhere—your laptop, a teammate's Mac, or a cloud server.

Before you experience the power of containerization through AI collaboration, you need to understand what Docker actually is. This chapter walks you through installation and initial setup manually, building the mental model you'll need to optimize containers and debug issues later.


Why Containers? The Problem They Solve

Before diving into Docker, understand the problem it solves and why it's become essential for deploying AI services.

The Deployment Problem

Your FastAPI agent from Module 6 works on your laptop. But to make it useful, you need to run it somewhere accessible—a server in the cloud, your company's data center, or a colleague's machine. This is where things break:

| Your Machine | Production Server |
| --- | --- |
| Python 3.12 | Python 3.9 |
| macOS | Ubuntu Linux |
| Dependencies installed globally | Different versions installed |
| Environment variables set in `.zshrc` | No environment configured |
| Model files in `~/Downloads` | Where are the model files? |

Every difference is a potential bug. The server says "Module not found." You say "But it works on my machine!"

Three Ways to Deploy Software

Option 1: Manual Setup (Fragile)

SSH into the server, install Python, pip install dependencies, copy files, configure environment variables, hope nothing changed since yesterday.

Problems: Slow, error-prone, not reproducible. Works until someone updates a system package.

Option 2: Virtual Machines (Heavy)

Package the entire operating system—kernel, libraries, your application—into a VM image. Run the VM on any hypervisor (VMware, VirtualBox, cloud providers).

```text
┌─────────────────────────────────────┐
│ Your Application                    │
├─────────────────────────────────────┤
│ Python, Dependencies, Files         │
├─────────────────────────────────────┤
│ Guest OS (Ubuntu)                   │
├─────────────────────────────────────┤
│ Hypervisor (VMware)                 │
├─────────────────────────────────────┤
│ Host OS (macOS/Windows)             │
├─────────────────────────────────────┤
│ Hardware                            │
└─────────────────────────────────────┘
```

Problems: Each VM needs its own OS (gigabytes of storage), boots in minutes, wastes RAM running duplicate kernels. Running 10 services means 10 operating systems.

Option 3: Containers (Lightweight)

Package your application and dependencies, but share the host kernel. No duplicate operating system. Start in milliseconds. Use megabytes instead of gigabytes.

```text
┌──────────┐ ┌──────────┐ ┌──────────┐
│  App 1   │ │  App 2   │ │  App 3   │
├──────────┤ ├──────────┤ ├──────────┤
│   Deps   │ │   Deps   │ │   Deps   │
└──────────┘ └──────────┘ └──────────┘
┌─────────────────────────────────────┐
│          Container Runtime          │
├─────────────────────────────────────┤
│           Host OS (Linux)           │
├─────────────────────────────────────┤
│              Hardware               │
└─────────────────────────────────────┘
```

Key insight: Containers share the host's kernel. They're isolated processes, not full operating systems. This makes them:

  • Fast: Start in under a second (VMs take minutes)
  • Small: 50-200MB typical (VMs are 2-10GB)
  • Efficient: Run 100 containers on a laptop (10 VMs would exhaust RAM)
  • Portable: Same container runs on any Linux kernel (development to production)
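Once Docker is installed later in this chapter, you can check the startup claim yourself. A minimal sketch, assuming the small public `alpine` image from Docker Hub:

```shell
# Time a full container create/run/remove cycle.
# The first run also downloads the image; run it twice for a fair timing.
time docker run --rm alpine echo "hello from a container"
```

With the image cached, the whole cycle typically finishes in well under a second; booting a VM to run the same one-line command would take minutes.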

Why Containers Matter for AI Services

Your AI agent has specific requirements:

  1. Large dependencies: PyTorch, transformers, numpy—hundreds of megabytes of packages
  2. Specific versions: Model trained on transformers 4.35.0 breaks on 4.36.0
  3. Environment variables: API keys, model paths, configuration
  4. GPU access: Some AI workloads need NVIDIA CUDA (containers support this)
  5. Reproducibility: Must reproduce exact behavior for debugging

Containers solve all of these by freezing your entire environment into an immutable, portable package. When you deploy to the cloud, you're not hoping the server is configured correctly—you're shipping the exact environment that works.
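One practical way to freeze the Python half of that environment before containerizing is to pin exact versions. A sketch assuming a pip-based project (adapt to your tooling):

```shell
# Record the exact versions currently installed, so a later container
# build installs the same transformers/numpy/etc. you tested against.
pip freeze > requirements.txt

# Inspect the pins -- exact versions like transformers==4.35.0
head requirements.txt
```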

Cloud Computing Context

If you're new to cloud computing, here's the essential context:

Cloud providers (AWS, Google Cloud, Azure, DigitalOcean) rent you servers by the hour. These servers run Linux. You deploy your application to these servers.

Without containers: you configure each server manually, install dependencies, copy files. Different servers drift out of sync.

With containers: you build once, push to a registry (like Docker Hub), and pull onto any server. The container is identical everywhere.

Kubernetes (Sub-module 2) orchestrates many containers across many servers—handling load balancing, failover, and scaling. But first, you need to know how to build and run a single container. That's what Docker teaches you.
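In practice, the build-once workflow is a handful of commands. The image and registry names below are placeholders, not from this book:

```shell
# Build the image locally and tag it for a registry
docker build -t myuser/my-agent:0.1 .

# Push it to Docker Hub (or any registry you use)
docker push myuser/my-agent:0.1

# On any server: pull and run the identical environment
docker pull myuser/my-agent:0.1
docker run -p 8000:8000 myuser/my-agent:0.1
```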


Prerequisites: What You Need Before Starting

Check that your system meets these baseline requirements:

| Requirement | How to Check |
| --- | --- |
| macOS 11.0+, Windows 10/11, or Linux (Ubuntu 18.04+, Fedora, etc.) | You're reading this, so you have a system |
| 4 GB RAM minimum | macOS: Apple menu → About This Mac; Windows: Settings → System → About; Linux: `free -h` |
| 2 CPU cores minimum | For AI workloads, we'll allocate 4 GB RAM + 2 CPUs minimum |
| 10 GB free disk space | For Docker images and containers |
| Reliable internet connection | We'll download ~2 GB during setup |
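On Linux, all three hardware checks fit in a few commands; a quick sketch assuming standard `/proc` and coreutils:

```shell
# Total RAM in GB
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2/1024/1024}' /proc/meminfo

# CPU core count
nproc

# Free disk space on the root filesystem
df -h / | tail -1
```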

Don't Have Minimum Resources?

If your machine is underpowered, you have options:

  1. Cloud alternative: Install Docker on a cloud VM (AWS EC2 t3.small, Google Cloud e2-standard-2, or DigitalOcean $5/month droplet)
  2. Shared machine: Use a lab computer or colleague's system
  3. Defer chapter: Complete Module 6 (agent fundamentals) while planning infrastructure upgrades

What Is Docker? The Mental Model

Before installing anything, understand what we're installing. Docker has three essential components:

Component 1: Docker Engine (The Runtime)

This is the core. The Docker Engine is a lightweight process that runs on your operating system and:

  • Creates isolated containers from images
  • Manages container lifecycle (start, stop, remove)
  • Handles networking between containers
  • Manages storage and volumes

Think of it like: a process manager—like systemd on Linux or Task Manager on Windows, but specialized for containers.

Component 2: Docker Desktop (The Complete Package)

On macOS and Windows, you can't install Docker Engine directly (it's Linux-native). Docker Desktop solves this by:

  • Running a lightweight Linux VM (using Hypervisor.framework on macOS, Hyper-V on Windows)
  • Installing Docker Engine inside that VM
  • Providing a GUI dashboard for viewing containers and images
  • Handling networking so containers feel like they're on your machine

Important: Docker Desktop is NOT Docker Engine. Desktop is the packaging and UI around Engine.

Component 3: containerd (The Container Runtime)

Inside Docker Engine runs containerd, a lower-level component that actually:

  • Pulls container images from registries
  • Extracts images to filesystems
  • Creates cgroups and namespaces (Linux kernel features that provide isolation)
  • Starts container processes

You rarely interact with containerd directly, but it's the reason containers are so lightweight—they don't need a full operating system like VMs do.

The Architecture Stack

```text
Your Machine (macOS/Windows)
        ↓
Docker Desktop (GUI + VM)
        ↓
Linux VM (inside Docker Desktop)
        ↓
Docker Engine
        ↓
containerd
        ↓
Containers (your FastAPI agent, databases, etc.)
```

On Linux, the stack is simpler (no VM needed):

```text
Your Linux Machine
        ↓
Docker Engine
        ↓
containerd
        ↓
Containers
```

Installation by Operating System

  • macOS
  • Windows
  • Linux

macOS

Supported versions: macOS 11.0 (Big Sur) and later

Step 1: Download Docker Desktop

Visit Docker's official download page and click the macOS download button. You'll see two options:

  • Apple Silicon (M1/M2/M3): For newer Macs with Apple chips
  • Intel: For Intel-based Macs

Check which you have: Apple menu → About This Mac → look for "Chip: Apple M2" (Silicon) or "Processor: Intel Core" (Intel).

Step 2: Install Docker Desktop

  1. Open your Downloads folder
  2. Double-click Docker.dmg
  3. Drag the Docker icon to the Applications folder
  4. Wait for copy to complete (usually 1-2 minutes)
  5. Eject the disk image by dragging it to Trash

Step 3: Launch Docker

  1. Open Applications folder
  2. Double-click Docker.app
  3. Enter your password when prompted (Docker needs to install system components)
  4. Wait for "Docker is running" to appear in the menu bar (top right)

Windows

Supported versions: Windows 10 (21H2 and later) or Windows 11

Prerequisites check: Docker Desktop on Windows needs a virtualization backend—WSL 2 (recommended) or Hyper-V. If you plan to use the Hyper-V backend, check whether it's enabled:

```powershell
# In PowerShell (as Administrator)
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All |
    Select-Object -Property FeatureName, State
```

Expected output:

```text
FeatureName              State
-----------              -----
Microsoft-Hyper-V-All    Enabled
```

If State is "Disabled", you need to enable Hyper-V first:

```powershell
# In PowerShell (as Administrator)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

Restart your computer when prompted.

Step 1: Download Docker Desktop

Visit Docker's official download page and click the Windows download button. This downloads Docker Desktop Installer.exe.

Step 2: Install

  1. Double-click Docker Desktop Installer.exe
  2. Follow the installation wizard (use default settings)
  3. Check "Use WSL 2 instead of Hyper-V" (WSL 2 is recommended for performance)
  4. Click Install

Step 3: Launch Docker

  1. Search "Docker Desktop" in your Start menu
  2. Click to launch
  3. Wait for "Docker Desktop is running" notification (taskbar shows Docker icon)

Linux

Supported distributions: Ubuntu 18.04+, Fedora, Debian, CentOS, etc.

Ubuntu/Debian:

```bash
# Update package index
sudo apt-get update

# Install prerequisites
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Add your user to the docker group
sudo usermod -aG docker $USER

# Activate the change in the current shell
newgrp docker
```

Fedora/RHEL:

```bash
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker
```

Verify Installation

No matter your OS, verify Docker installed correctly by checking the version:

```bash
docker version
```

Key confirmation: You see both Client and Server sections. If you see only "Client" without "Server," Docker Engine isn't running. Restart Docker Desktop and try again.
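If the Server section is missing, two follow-up checks narrow the problem down. A sketch for a systemd-based Linux host (on macOS/Windows, restarting Docker Desktop is usually enough):

```shell
# Is the daemon process running?
sudo systemctl status docker --no-pager

# This command talks to the daemon itself and fails fast if it can't connect
docker info
```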


Run Your First Container

Now test that the entire system works end-to-end:

```bash
docker run hello-world
```

Congratulations: You've just created and run your first container. This command validates that Docker Engine is running, network connection works, image pulling works, and container creation works.
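It's worth inspecting what that one command left behind; a short sketch:

```shell
# The image that was pulled from Docker Hub
docker image ls hello-world

# The container it created -- note its status is Exited
docker ps -a --filter ancestor=hello-world
```

The image is a reusable template; the container was a one-shot process created from that image.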


Configure Docker Desktop for AI Workloads

For AI services, we recommend these resource settings in Settings > Resources:

  • CPUs: 4 (minimum for AI workloads)
  • Memory: 8 GB (minimum 4 GB)
  • Disk space: 100 GB

Click Apply & Restart.
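After Apply & Restart, you can confirm what the engine actually received. `docker info` accepts a Go template; `NCPU` and `MemTotal` are standard fields:

```shell
# CPU count and memory (in bytes) available to Docker Engine
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'
```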

On Linux, specify limits at runtime:

```bash
docker run --memory=4g --cpus=2 your-agent-image
```
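To confirm the limits took effect, snapshot the running container (the container name `agent` is illustrative, and `your-agent-image` is a placeholder as above):

```shell
# Run detached with limits, then check the resource ceiling
docker run -d --name agent --memory=4g --cpus=2 your-agent-image
docker stats --no-stream agent   # the MEM USAGE / LIMIT column should show 4GiB
```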

Common Installation Issues & Recovery

| Issue | Cause/Diagnosis | Recovery |
| --- | --- | --- |
| Docker Desktop won't start | Virtualization disabled | Restart computer, check BIOS/Hyper-V, or reinstall. |
| Permission denied (Linux) | User not in docker group | Run `sudo usermod -aG docker $USER && newgrp docker`. |
| Cannot connect to daemon | Docker Engine not running | Start Docker Desktop or `sudo systemctl start docker`. |
| Image pull failed | Network issue or downtime | Check connection, retry `docker pull hello-world`. |
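A quick self-diagnosis sketch that ties the table together, assuming a Linux host for the group check:

```shell
# Can the client reach the daemon at all?
docker version >/dev/null 2>&1 || echo "daemon unreachable: start Docker Desktop or run 'sudo systemctl start docker'"

# Are you in the docker group? (Linux only)
id -nG | grep -qw docker || echo "run: sudo usermod -aG docker \$USER && newgrp docker"
```

Both lines print advice only when something is wrong, so the script is safe to rerun at any time.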