In Lesson 2, you pulled images from Docker Hub and ran containers. Those images were built by someone else. Now you'll write your own blueprint—a Dockerfile—and build a custom image for your Task API.
By the end of this lesson, you'll understand exactly what happens when Docker reads each line of your Dockerfile, why instruction order matters for build speed, and how to create images that build fast and run reliably.
You'll containerize the In-Memory Task API from Chapter 70. Let's create it fresh using UV:
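Assuming UV is installed, a plausible pair of commands (the project name `task-api` is this lesson's choice):

```shell
uv init task-api
cd task-api
```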
Output:
Add FastAPI with all standard dependencies:
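A likely form of the command; the `[standard]` extra pulls in Uvicorn and the FastAPI CLI:

```shell
uv add "fastapi[standard]"
```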
Output:
Now replace the contents of main.py with the Task API:
Test that it runs locally:
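One way to start it under UV (the lesson may use `fastapi dev` instead):

```shell
uv run uvicorn main:app --reload
```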
Output:
Open a new terminal and verify:
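Assuming the API exposes a `/tasks` route:

```shell
curl http://localhost:8000/tasks
```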
Output:
Stop the server with Ctrl+C. Your API works locally—now let's containerize it.
A Dockerfile is a text file containing instructions that tell Docker how to build an image. Think of it as a recipe: the base image supplies the starting ingredients, each instruction is a preparation step, and the finished image is the dish, ready to be served (run) as many times as you like.
When you run docker build, Docker reads your Dockerfile line by line, executing each instruction to construct an image.
The Dockerfile doesn't run your application. It creates an image—a frozen snapshot containing your code, dependencies, and configuration. When you docker run that image, you get a live container.
Docker processes your Dockerfile top to bottom, one instruction at a time. Each instruction produces a layer: an immutable filesystem snapshot stacked on top of the one before it, which Docker caches for reuse.
This layered approach is why Docker builds are fast after the first time: unchanged layers don't rebuild.
You'll use six instructions in your Dockerfile: FROM, COPY, WORKDIR, RUN, EXPOSE, and CMD. Let's write it instruction by instruction.
Create a new file named Dockerfile (no extension):
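For example:

```shell
touch Dockerfile
```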
Open it in your editor. We'll build it one instruction at a time.
Every Dockerfile starts with FROM. This specifies your base image—the starting environment.
Add this line:
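The lesson's chosen base, Python 3.12 in its slim variant:

```dockerfile
FROM python:3.12-slim
```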
What this does: pulls the official python:3.12-slim image (from Docker Hub, unless it's already cached locally) and makes it the base filesystem for every instruction that follows.
Why slim? The slim variant includes only what's needed to run Python. The full python:3.12 image is ~900 MB with build tools you don't need for this application.
Your Dockerfile so far:
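At this point the file holds a single line:

```dockerfile
FROM python:3.12-slim
```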
UV is a modern Python package manager—10-100x faster than pip. We'll copy it from its official image:
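A common form of that instruction (the exact source paths inside UV's image are an assumption):

```dockerfile
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
```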
What this does: copies the prebuilt uv binary out of UV's official image into yours, so the build can run uv commands without installing UV through pip.
Your Dockerfile so far:
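With the UV copy added (source paths are an assumption):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
```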
Set where your application will live inside the container:
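Assuming the conventional /app location:

```dockerfile
WORKDIR /app
```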
What this does: creates the directory if it doesn't exist and makes it the working directory for every subsequent instruction, and for the running container.
Your Dockerfile so far:
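The file so far (UV paths and /app are assumptions):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
```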
Copy your dependency file into the image:
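A plausible line:

```dockerfile
COPY pyproject.toml ./
```

If your project has a uv.lock file, you'd typically copy it here as well so installs are reproducible.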
What this does: copies pyproject.toml from your build context into the image's working directory, creating a small layer that only changes when your dependencies change.
Why copy this first? Layer caching. Dependencies change rarely; code changes often. By copying dependencies first, Docker can cache the installed packages layer and reuse it when only your code changes.
Your Dockerfile so far:
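The accumulated file (paths are assumptions):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml ./
```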
Now install the dependencies:
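One plausible invocation; the lesson may use `uv sync` or another uv command instead:

```dockerfile
# --system installs into the image's Python; the container itself is the isolation
RUN uv pip install --system -r pyproject.toml
```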
What this does: executes the install command at build time and saves the resulting filesystem changes, the installed packages, as a new cached layer.
Important distinction: RUN executes once, at build time, and its results are baked into the image; CMD (which you'll add last) executes at run time, every time a container starts.
Your Dockerfile so far:
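The accumulated file (the install command is one plausible form):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml ./
RUN uv pip install --system -r pyproject.toml
```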
Now copy your actual application:
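Copying just the one module this project needs:

```dockerfile
COPY main.py ./
```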
What this does: copies main.py into the working directory, layering your application code on top of the already-installed dependencies.
Why copy this AFTER dependencies? When you edit main.py and rebuild, only this COPY layer and the instructions after it rerun; the cached dependency installation layer is reused untouched.
Your Dockerfile so far:
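The accumulated file (paths and the install command are plausible forms):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml ./
RUN uv pip install --system -r pyproject.toml
COPY main.py ./
```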
Document which port your application uses:
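The port Uvicorn will listen on:

```dockerfile
EXPOSE 8000
```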
What this does: documents that the application inside listens on port 8000. EXPOSE does not publish the port (you still need -p at run time); it's metadata for readers and tooling.
Your Dockerfile so far:
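The accumulated file (paths and the install command are plausible forms):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml ./
RUN uv pip install --system -r pyproject.toml
COPY main.py ./
EXPOSE 8000
```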
Finally, tell Docker what command to run when the container starts:
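An exec-form CMD matching the exposed port (the module and app object names assume the main.py above):

```dockerfile
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```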
What this does: sets the default command the container executes at startup. Unlike RUN, CMD doesn't execute during the build; it runs every time a container starts from this image.
Why 0.0.0.0? Inside a container, localhost (127.0.0.1) is isolated. Using 0.0.0.0 makes the service accessible when you map ports with -p.
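Putting it all together, the complete file (the UV paths, /app, and the install command are plausible forms, not verbatim):

```dockerfile
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml ./
RUN uv pip install --system -r pyproject.toml
COPY main.py ./
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```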
Save the file.
Now build an image from your Dockerfile:
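The tag task-api:v1 matches what the rest of this lesson uses:

```shell
docker build -t task-api:v1 .
```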
What the flags mean: -t task-api:v1 tags the image with a name and version so you can reference it later; the trailing . sets the build context to the current directory.
Output:
Notice each step corresponds to an instruction in your Dockerfile. Docker executed them top to bottom, creating layers.
Verify the image exists:
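List images whose repository name matches:

```shell
docker images task-api
```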
Output:
Your image is ~195 MB—containing Python, UV, FastAPI, Uvicorn, and your application code.
Start a container from your image:
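Mapping host port 8000 to the container's port 8000:

```shell
docker run -p 8000:8000 task-api:v1
```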
What -p 8000:8000 does: maps port 8000 on your host (the left side) to port 8000 inside the container (the right side), so requests to localhost:8000 reach the containerized server.
Output:
Open a new terminal and test:
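Assuming the /tasks route:

```shell
curl http://localhost:8000/tasks
```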
Output:
Create a task:
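A sample request (the JSON fields are assumptions about the Task API's schema):

```shell
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn Docker"}'
```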
Output:
Your containerized Task API works! Stop it with Ctrl+C.
When you run docker build ., Docker sends your entire directory to the daemon as the build context. If you have a .venv directory, a .git history, __pycache__ folders, or a .env file holding secrets, Docker wastes time transferring and processing them. Worse, secrets could end up baked into your image.
Create .dockerignore:
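A minimal version covering the usual offenders (adjust the entries to your project):

```
.venv/
.git/
__pycache__/
*.pyc
.env
```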
Rebuild to verify it works:
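The same command as before:

```shell
docker build -t task-api:v1 .
```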
The build should be faster since Docker isn't processing excluded files.
Edit main.py to add a version endpoint:
Rebuild:
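Same build command as before:

```shell
docker build -t task-api:v1 .
```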
Output:
Notice CACHED for steps 1-5. Docker reused those layers because pyproject.toml didn't change. Only the COPY main.py step ran.
Build time: ~1 second instead of 45 seconds.
This is why instruction order matters: put what changes least often (base image, dependency install) near the top and what changes most often (your code) near the bottom, so everyday rebuilds hit the cache for everything above your code.
Run on port 9000 instead:
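Map host port 9000 to the container's 8000:

```shell
docker run -p 9000:8000 task-api:v1
```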
Test:
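Note the host-side port changes; assuming the /tasks route:

```shell
curl http://localhost:9000/tasks
```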
Pass configuration without changing the image:
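Environment variables are injected with -e:

```shell
docker run -e LOG_LEVEL=debug -p 8000:8000 task-api:v1
```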
Your application reads os.environ["LOG_LEVEL"] at runtime.
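For illustration, one stdlib-only way to read it with a default (the lesson's main.py may read it differently):

```python
import os

# Fall back to "info" when LOG_LEVEL isn't set by `docker run -e`
log_level = os.environ.get("LOG_LEVEL", "info")
print(f"log level: {log_level}")
```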
Run without blocking your terminal:
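Adding --name here is an assumption that makes later commands easier to type:

```shell
docker run -d -p 8000:8000 --name task-api task-api:v1
```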
Flags: -d runs the container detached, in the background, printing the new container's ID and returning your terminal prompt immediately.
Check status:
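List running containers:

```shell
docker ps
```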
Stop and remove:
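Assuming the container was started with --name task-api (otherwise substitute the ID from docker ps):

```shell
docker stop task-api
docker rm task-api
```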
Build fails at a COPY instruction. Cause: the file is missing from the build context. Fix: verify the file exists: ls pyproject.toml
docker run fails with "port is already allocated". Cause: another process is using port 8000. Fix: map a different host port: docker run -p 9000:8000 task-api:v1
Run without -d to see the error:
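Drop the -d flag so the crash output prints straight to your terminal:

```shell
docker run -p 8000:8000 task-api:v1
```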
Or check logs:
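Using the container's name or ID:

```shell
docker logs task-api
```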
What you're learning: Analyzing layer cache invalidation—understanding how instruction order affects build performance.
What you're learning: Troubleshooting native compilation failures—a common challenge with Python packages that have binary dependencies.
What you're learning: Applying Dockerfile patterns to your own applications—moving from following instructions to making design decisions.
Never include secrets (API keys, passwords, database credentials) in your Dockerfile or image. Use environment variables (-e flag) or Docker secrets at runtime. Images may be shared or pushed to registries where secrets would be exposed.
You built a docker-deployment skill in Lesson 0. Test and improve it based on what you learned.
If you found gaps, update the skill so it covers what this lesson taught, then test it again.