USMAN’S INSIGHTS
AI ARCHITECT

One Checkbox Away: Deploying Production Locally


© 2026 Muhammad Usman Akbar. All rights reserved.
Enabling Kubernetes (Docker Desktop)

You already have Docker Desktop from Module 7, Sub-module 1. Here's the good news: Kubernetes is built in. No separate installation, no VM management, no complex setup. Just a checkbox.

By the end of this chapter, you'll have a working Kubernetes cluster running on your laptop—the same API and concepts used in production cloud deployments—ready for your first Pod deployment.


Docker Desktop Kubernetes vs Cloud Kubernetes

Feature     Docker Desktop             Cloud Kubernetes (GKE, EKS, AKS)
Location    Your laptop                Cloud data center
Nodes       Single node                Multiple nodes
Cost        Free                       Cloud compute bills
Setup       One checkbox               Cloud provider configuration
API         Identical Kubernetes API   Same API
kubectl     Same commands              Same commands

Key insight: Docker Desktop Kubernetes is NOT a toy. It's a real Kubernetes cluster with the same API as production. Everything you learn here transfers directly to cloud deployments.


Enable Kubernetes

Step 1: Open Docker Desktop Settings

  • macOS: Click the Docker icon in your menu bar → Settings (or press ⌘,)
  • Windows: Right-click the Docker icon in the system tray → Settings
  • Linux: Click the Docker Desktop icon → Settings

Step 2: Enable Kubernetes

  1. In Settings, click Kubernetes in the left sidebar
  2. Check Enable Kubernetes
  3. Click Apply & Restart

Docker Desktop will download the Kubernetes components and start the cluster. The first time you enable it, this takes two to three minutes.

What you'll see:

  • A progress indicator while Kubernetes initializes
  • Docker Desktop restarts
  • A green "Kubernetes running" indicator in the bottom-left corner

Step 3: Verify Kubernetes is Running

Open a terminal and run:

bash
kubectl version

Output:

text
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2

Both Client and Server versions should display. If Server Version is missing, Kubernetes isn't running yet—wait for the green indicator in Docker Desktop.
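
If you script your cluster setup, a small wait loop avoids racing the startup instead of watching for the green indicator by hand. A minimal sketch, assuming kubectl is on your PATH and your Kubernetes version is recent enough to serve the API server's /readyz endpoint; wait_for_k8s is a helper name introduced here, not a standard tool:

```shell
# wait_for_k8s: poll the API server's /readyz endpoint until it answers.
# Assumes kubectl is on PATH and pointed at the docker-desktop context.
wait_for_k8s() {
  local tries=${1:-30}   # number of attempts (default 30, about 2.5 minutes)
  for _ in $(seq "$tries"); do
    if kubectl get --raw /readyz >/dev/null 2>&1; then
      echo "Kubernetes API is ready"
      return 0
    fi
    sleep 5
  done
  echo "Timed out waiting for Kubernetes API" >&2
  return 1
}
```

Call wait_for_k8s right after clicking Apply & Restart; it returns as soon as the API server responds.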


Verify Your Cluster

Check that your cluster is healthy:

bash
kubectl cluster-info

Output:

text
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

This shows:

  • Control plane: API server is running
  • CoreDNS: Service discovery is working (pods can find each other by name)

Check the nodes in your cluster:

bash
kubectl get nodes

Output:

text
NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5m    v1.28.2

This shows:

  • NAME: Your single node is called "docker-desktop"
  • STATUS: Ready (healthy, accepting workloads)
  • ROLES: control-plane (runs both control plane and worker responsibilities)
  • VERSION: Kubernetes v1.28.2

Your Kubernetes cluster is running and ready.
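
The two checks above can be rolled into a single shell function for scripts. A sketch, assuming kubectl is on your PATH; check_cluster is a helper name introduced here, not a standard command:

```shell
# check_cluster: fail fast if the API is unreachable or any node is not Ready.
check_cluster() {
  kubectl cluster-info >/dev/null 2>&1 || { echo "cluster unreachable" >&2; return 1; }
  local not_ready
  # Column 2 of 'kubectl get nodes' is STATUS; count rows that are not Ready.
  not_ready=$(kubectl get nodes --no-headers | awk '$2 != "Ready"' | wc -l)
  if [ "$not_ready" -eq 0 ]; then
    echo "cluster healthy"
  else
    echo "$not_ready node(s) not Ready" >&2
    return 1
  fi
}
```

Running check_cluster before a deployment script saves you from cryptic errors later when the cluster was never up.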


Understanding kubectl Context

kubectl needs to know which Kubernetes cluster to talk to. This is managed through contexts.

What is a Context?

A context combines:

  • Cluster: Which Kubernetes cluster to talk to
  • User: What credentials to use
  • Namespace: Which namespace commands operate in (defaults to the namespace named "default")

Check Your Current Context

bash
kubectl config current-context

Output:

text
docker-desktop

This confirms kubectl is pointing to your Docker Desktop Kubernetes cluster.

View All Contexts

bash
kubectl config get-contexts

Output:

text
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         docker-desktop   docker-desktop   docker-desktop

The * marks your current context. If you later work with cloud clusters (GKE, EKS, AKS), you'll see multiple contexts here and can switch between them:

bash
kubectl config use-context docker-desktop
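
Once multiple contexts exist, a guard at the top of deployment scripts prevents accidentally applying manifests to the wrong cluster. A minimal sketch; require_context is a helper name introduced here (assumes kubectl is on your PATH):

```shell
# require_context: abort unless kubectl currently points at the expected cluster.
require_context() {
  local want=$1 have
  have=$(kubectl config current-context) || return 1
  if [ "$have" != "$want" ]; then
    echo "Current context is '$have', expected '$want'. Aborting." >&2
    return 1
  fi
}

# Example: only apply manifests when pointed at the local cluster.
# require_context docker-desktop && kubectl apply -f app.yaml
```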

Where is This Stored?

Context configuration lives in ~/.kube/config:

bash
cat ~/.kube/config

Output (partial):

yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1...
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
users:
- name: docker-desktop
  user:
    client-certificate-data: LS0tLS1...
    client-key-data: LS0tLS1...

Docker Desktop automatically configures this when you enable Kubernetes.
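
Because kubeconfig is plain YAML, scripts can read the top-level current-context key directly without shelling out to kubectl. A sketch; it is sed-based, so it only handles the simple single-file layout shown above:

```shell
# current_context_from_file: read the top-level 'current-context' key
# from a kubeconfig file (defaults to ~/.kube/config).
current_context_from_file() {
  sed -n 's/^current-context: //p' "${1:-$HOME/.kube/config}"
}
```

For anything beyond a quick check, prefer kubectl config current-context, which understands merged kubeconfig files.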


Quick Reference: Cluster Management

Check Kubernetes Status

Look for the green "Kubernetes running" indicator in Docker Desktop's bottom-left corner. Or run:

bash
kubectl get nodes

Restart Kubernetes

If Kubernetes becomes unresponsive:

  1. Docker Desktop Settings → Kubernetes
  2. Click Reset Kubernetes Cluster

This resets the cluster to a clean state (removes all deployments).

Disable Kubernetes

To free up resources when not using Kubernetes:

  1. Docker Desktop Settings → Kubernetes
  2. Uncheck Enable Kubernetes
  3. Click Apply & Restart

Re-enable anytime with the same checkbox.

Resource Allocation

Docker Desktop shares resources with Kubernetes. Adjust in:

  • Docker Desktop Settings → Resources
  • Recommended: At least 4GB memory for Kubernetes workloads
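
To see how much of the allocated memory Kubernetes actually exposes to workloads, you can query the node's allocatable resources. A sketch, assuming kubectl is on your PATH and the default docker-desktop node name; node_allocatable is a helper name introduced here:

```shell
# node_allocatable: print allocatable CPU and memory for a node.
# Allocatable = node capacity minus system and Kubernetes overhead.
node_allocatable() {
  kubectl get node "${1:-docker-desktop}" \
    -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}'
}
```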

What You've Accomplished

You now have:

  • ✅ Kubernetes enabled in Docker Desktop
  • ✅ kubectl configured and communicating with your cluster
  • ✅ A working single-node Kubernetes cluster (same API as production)
  • ✅ Understanding of kubectl contexts

No VMs to manage. No drivers. No hypervisor setup. Just a checkbox.

Your local Kubernetes cluster is ready. Next lesson, you'll deploy your first Pod to this cluster.


Try With AI

Now that your cluster is running, explore it with AI assistance.

Prompt 1: Cluster Architecture

text
I just enabled Kubernetes in Docker Desktop. When I run 'kubectl get nodes', I see 'docker-desktop' with role 'control-plane'. In the previous chapter, you explained that production Kubernetes has separate control plane and worker nodes. How does Docker Desktop handle this with a single node? What components are running?

What you're learning: How Docker Desktop combines control plane and worker responsibilities on a single node, and what Kubernetes components are actually running.

Prompt 2: Context Management

text
I want to understand kubectl contexts better. I ran 'kubectl config get-contexts' and see 'docker-desktop'. If I later add a cloud cluster (like GKE), how would I: 1. Add the new context? 2. Switch between local and cloud clusters? 3. Avoid accidentally deploying to production?

What you're learning: How professionals manage multiple Kubernetes environments safely, preventing accidental production deployments.

Prompt 3: Resource Planning

text
I'm about to deploy AI agents to my Docker Desktop Kubernetes cluster. My laptop has 16GB RAM and Docker Desktop is allocated 8GB. How much of that 8GB is available for my workloads? What happens if my pods request more memory than available? How should I plan resource requests for AI workloads?

What you're learning: Resource management fundamentals—the relationship between node capacity, allocatable resources, and pod requests/limits that you'll configure in later lessons.


Reflect on Your Skill

You built a kubernetes-deployment skill in Chapter 1. Test and improve it based on what you learned.

Test Your Skill

text
Using my kubernetes-deployment skill, verify Kubernetes cluster health. Does my skill include commands like kubectl cluster-info and kubectl get nodes?

Identify Gaps

Ask yourself:

  • Did my skill include kubectl context management?
  • Did it explain how to verify metrics-server and cluster components are running?
  • Did it cover the kubeconfig file and context switching?

Improve Your Skill

If you found gaps:

text
My kubernetes-deployment skill is missing cluster verification and context management commands. Update it to include kubectl cluster-info, kubectl get nodes, kubectl config current-context, and kubeconfig management.

Chapter Summary

  • Core Concept: Docker Desktop includes a built-in Kubernetes cluster with the same API as production. It runs locally as a single node, but everything you learn transfers directly to cloud deployments.
  • Mental Models: kubeconfig is the connection configuration (~/.kube/config); kubectl is the CLI to the API; Docker Desktop is the host environment running the control plane and worker node locally.
  • Critical Patterns: Enable Kubernetes in Settings; verify with kubectl get nodes; switch contexts with kubectl config use-context docker-desktop.
  • Common Mistakes: Forgetting to enable Kubernetes; not realizing kubeconfig changes persist across sessions; expecting multiple nodes locally.
  • Connections: Builds on the Docker knowledge from Module 7, Sub-module 1. Leads to later chapters, where Pods, Deployments, and Services run on this local cluster.

Source: https://www.muhammadusmanakbar.com/book