Your AI agent from Chapter 12 is running on Kubernetes. But there's a timing problem: the agent container starts before its dependencies are ready.
Imagine deploying a sentiment analysis agent that needs a 500MB language model. If the container starts before the model is downloaded, the agent crashes immediately. You could add retry logic to the application, but that's fragile. Or you could use init containers—lightweight setup containers that run to completion before your main application container even starts.
Init containers solve the "dependencies ready" problem elegantly. They let you guarantee that setup steps complete before your production code runs: download model weights, verify database connectivity, wait for configuration files.
When you create a Pod, Kubernetes follows a strict sequence:

1. The Pod is scheduled, and its network and volumes are set up.
2. Init containers run one at a time, in the order they are listed. Each must run to completion and exit successfully (code 0) before the next one starts.
3. Only after every init container has succeeded do the app containers start.
If any init container fails, Kubernetes doesn't start the app containers. Instead, it restarts the failed init container (subject to the Pod's restartPolicy) and tries again.
You should use init containers when:

- Your app depends on files or data that must exist before startup (model weights, configuration files, seeded data).
- You need to wait for an external dependency (a database, API, or message queue) to become reachable.
- Setup requires tools or credentials you don't want baked into your production image.

Common patterns in production:

- Downloading model weights or other large assets into a volume shared with the app container.
- Polling a dependent service until it accepts connections.
- Running migrations or generating configuration before the app starts.
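The wait-for-a-service pattern looks roughly like this (the service name, port, and images are illustrative assumptions, not from a real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: agent-with-wait
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # Poll until the (hypothetical) database service accepts TCP connections
    command: ['sh', '-c', 'until nc -z db-service 5432; do echo "waiting for db"; sleep 2; done']
  containers:
  - name: agent
    image: sentiment-agent:latest   # hypothetical image
```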
Compare this to your current approach: app container starts, tries to use a model that doesn't exist yet, crashes, Kubernetes restarts it, it crashes again. Repeated restart loops waste resources and delay startup.
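For concreteness, a minimal sketch of that broken setup (the image name and model path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sentiment-agent-broken
spec:
  containers:
  - name: agent
    image: sentiment-agent:latest       # hypothetical image
    # The agent expects /models/model.bin at startup, but nothing has downloaded it
    command: ['python', 'agent.py']
```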
Output when you deploy this:
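The exact columns vary with your cluster, but a repeatedly crashing Pod looks roughly like this:

```
$ kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
sentiment-agent-broken   0/1     CrashLoopBackOff   3          2m
```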
The Pod enters a crash loop. Your agent never starts successfully.
With an init container, the model downloads before the agent even starts:
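A sketch of the same Pod with an init container added (the model URL and images are placeholders; substitute your own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sentiment-agent
spec:
  initContainers:
  - name: download-model
    image: busybox:1.36
    # Fetch the model into a volume shared with the app container
    command: ['sh', '-c', 'wget -O /models/model.bin https://example.com/model.bin']
    volumeMounts:
    - name: model-store
      mountPath: /models
  containers:
  - name: agent
    image: sentiment-agent:latest   # hypothetical image
    volumeMounts:
    - name: model-store
      mountPath: /models
  volumes:
  - name: model-store
    emptyDir: {}
```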
Output when you deploy this:
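Watching the Pod, you'll see something like this (timings will differ):

```
$ kubectl get pods -w
NAME              READY   STATUS            RESTARTS   AGE
sentiment-agent   0/1     Init:0/1          0          5s
sentiment-agent   0/1     PodInitializing   0          90s
sentiment-agent   1/1     Running           0          95s
```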
Notice the progression: Init:0/1 (downloading) → Running (successful). The app container never starts until the model is ready. No crash loops. No retry logic in your application code.
Let's build a working init container step-by-step.
Every init container is defined in the initContainers list within a Pod spec:
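The basic shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  initContainers:        # run first, in list order, each to completion
  - name: setup
    image: busybox:1.36
    command: ['sh', '-c', 'echo "setting up"']
  containers:            # start only after all init containers succeed
  - name: app
    image: nginx:1.25
```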
Key points:

- `initContainers` sits alongside `containers` in the Pod spec and uses the same container fields (`image`, `command`, `volumeMounts`, and so on).
- Init containers run one at a time, in list order; each must exit with code 0 before the next starts.
- They don't support readiness or liveness probes, because they run to completion rather than staying up.
Create a file init-demo.yaml:
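A plausible version of `init-demo.yaml`, assuming the demo deliberately checks for a file that doesn't exist:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: check-config
    image: busybox:1.36
    # /config/app.conf doesn't exist, so test exits 1 and initialization fails
    command: ['sh', '-c', 'test -f /config/app.conf']
  containers:
  - name: app
    image: nginx:1.25
```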
Output shows:
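Roughly:

```
$ kubectl get pods
NAME        READY   STATUS                  RESTARTS   AGE
init-demo   0/1     Init:Error              1          20s
init-demo   0/1     Init:CrashLoopBackOff   3          80s
```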
The init container exits with code 1 (failure). Kubernetes keeps retrying. Let's fix it:
Create init-demo-fixed.yaml with a conditional that won't fail:
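A sketch of the fix: the check still runs, but a missing file no longer produces a nonzero exit code.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo-fixed
spec:
  initContainers:
  - name: check-config
    image: busybox:1.36
    # Report whether the file exists, but always exit 0
    command: ['sh', '-c', 'if [ -f /config/app.conf ]; then echo "config found"; else echo "no config, using defaults"; fi']
  containers:
  - name: app
    image: nginx:1.25
```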
Status progressed to 1/1 Running because the init container succeeded.
Init containers are useful for setup, but they're even more powerful when they share data with app containers through volumes.
Create model-download-init.yaml:
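A sketch of `model-download-init.yaml`: the init container writes a file into an `emptyDir` volume that stands in for a real model download, and the app container reads it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-download-init
spec:
  initContainers:
  - name: download-model
    image: busybox:1.36
    # Simulate a model download by writing into the shared volume
    command: ['sh', '-c', 'echo "model-weights-v1" > /models/model.bin && echo "download complete"']
    volumeMounts:
    - name: model-store
      mountPath: /models
  containers:
  - name: app
    image: busybox:1.36
    # Read the file the init container produced, then stay alive
    command: ['sh', '-c', 'cat /models/model.bin && sleep 3600']
    volumeMounts:
    - name: model-store
      mountPath: /models
  volumes:
  - name: model-store
    emptyDir: {}
```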
Key points:

- Both containers mount the same volume, so files written by the init container are visible to the app container.
- An `emptyDir` volume is created empty when the Pod starts and lives as long as the Pod does.
- The `mountPath` can differ between containers; it's the shared volume name that links them.
The init container wrote the file; the app container successfully read it. This is the pattern you'll use for:

- Downloading model weights before an inference service starts.
- Fetching or generating configuration files the app expects at startup.
- Pre-warming caches or seeding data into a shared volume.
What happens when an init container fails? Kubernetes provides tools to diagnose the problem.
Create init-fail.yaml:
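A plausible `init-fail.yaml`, assuming the demo's init container fails because a secret file it checks for is missing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail
spec:
  initContainers:
  - name: check-secret
    image: busybox:1.36
    # Fail loudly if the secret file is missing
    command: ['sh', '-c', 'if [ ! -f /secrets/secret-key ]; then echo "SECRET_KEY missing"; exit 1; fi']
  containers:
  - name: app
    image: nginx:1.25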
The Pod is stuck restarting because the init container keeps failing.
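To see why, inspect the init container directly (substitute your own Pod and init container names):

```shell
# Pod-level events and per-container state, including exit codes
kubectl describe pod init-fail

# Logs from a specific init container, selected with -c
kubectl logs init-fail -c check-secret
```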
This tells you why the init container failed (missing SECRET_KEY).
Key diagnostic clues:

- A STATUS of `Init:Error` or `Init:CrashLoopBackOff` means an init container, not the app container, is failing.
- `kubectl describe pod` shows each init container's state, exit code, and restart count.
- `kubectl logs <pod> -c <init-container>` shows the init container's own output.
The init container is checking for a file that doesn't exist. Either:

- provide the missing file (for example, by mounting a Secret or ConfigMap at the expected path), or
- change the init container's logic so a missing file isn't fatal.
Create the file (if this should work):
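One way, assuming the file should come from a Kubernetes Secret (the Secret and key names here are illustrative):

```shell
# Create a Secret holding the expected key...
kubectl create secret generic agent-secrets --from-literal=secret-key=changeme
```

...then mount `agent-secrets` as a volume at the path the init container checks (e.g. `/secrets`), so the file exists when the check runs.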
Fix the init container logic:
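A sketch of the softened check, which warns instead of failing when the file is absent:

```yaml
initContainers:
- name: check-secret
  image: busybox:1.36
  # Warn about the missing key, but exit 0 so the app can still start
  command: ['sh', '-c', 'if [ ! -f /secrets/secret-key ]; then echo "SECRET_KEY missing, continuing with defaults"; fi']
```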
Now the init container succeeds, and the Pod moves to Running.
Init containers always run sequentially. Use this when one setup step depends on another:
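A sketch of three chained setup steps sharing one `emptyDir` volume (the commands are placeholders for real setup work):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sequential-init
spec:
  initContainers:
  - name: 1-create-dirs
    image: busybox:1.36
    command: ['sh', '-c', 'mkdir -p /work/config /work/assets']
    volumeMounts:
    - name: workdir
      mountPath: /work
  - name: 2-verify-config
    image: busybox:1.36
    # Runs only after 1-create-dirs has completed
    command: ['sh', '-c', 'test -d /work/config && echo "config dir ready"']
    volumeMounts:
    - name: workdir
      mountPath: /work
  - name: 3-download-assets
    image: busybox:1.36
    # Stand-in for a real asset download
    command: ['sh', '-c', 'echo "assets" > /work/assets/data.txt']
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'ls -R /work && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /work
  volumes:
  - name: workdir
    emptyDir: {}
```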
The three init containers run in order: 1-create-dirs completes, then 2-verify-config runs, then 3-download-assets runs. Only after all three succeed does the app container start.
Some operations (like network requests) may be transient. Add retry logic to init containers:
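A sketch of the retry pattern (the config-service URL is hypothetical):

```yaml
initContainers:
- name: fetch-config
  image: busybox:1.36
  # Try up to 5 times, sleeping 5 seconds between attempts; exit 1 if all fail
  command:
  - sh
  - -c
  - |
    for i in 1 2 3 4 5; do
      wget -q -O /config/app.conf http://config-service/app.conf && exit 0
      echo "attempt $i failed, retrying in 5s"
      sleep 5
    done
    exit 1
  volumeMounts:
  - name: config
    mountPath: /config
```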
This init container retries up to 5 times with a 5-second delay between attempts. Kubernetes will also restart the Pod after a delay if all attempts fail.
You're deploying a custom computer vision model that requires:

1. Downloading a 2GB model archive from object storage.
2. Verifying the archive's SHA256 checksum.
3. Extracting the archive to /models before the app starts.
Setup: You have the model archive URL and expected SHA256 checksum available as environment variables.
Your task: Design an init container that handles all three steps, then configure the main app container to use the downloaded model.
Specific prompts to try:
"Design an init container YAML that downloads a 2GB model archive from s3://models-bucket/vision-model.tar.gz, extracts it to /models, and verifies the checksum is abc123..."
"Show me how to configure volume sharing so the app container (running a FastAPI service) can access the extracted model files at /models/weights/"
"What happens if the checksum verification fails? How should the init container handle this so Kubernetes knows to restart the Pod?"
"Write the complete Pod YAML combining the init container with a FastAPI app container that expects the model at /models/weights/model.onnx"
After you get responses, consider: