In Chapters 1-5, you built a production-ready Helm chart: templates that render correctly, values that adapt to environments, and helper functions that eliminate boilerplate. But before deploying to production, you need confidence that the chart works as intended. This means catching errors early through validation, verifying template rendering produces correct manifests, and testing that deployed resources actually function.
This chapter teaches the testing safety net that stands between a working chart in development and a broken deployment in production. You'll use helm lint to validate chart structure, helm template to inspect rendered manifests, test pods to verify connectivity, and helm test to run validation against live deployments. Together, these tools catch the mistakes that would otherwise cost hours to debug in production.
This chapter covers Helm testing approaches from static validation to live deployment verification:
Prerequisites:
Time Estimate: 45 minutes
helm lint checks your chart for structural errors, missing required fields, and configuration problems. It runs before installation, catching issues early.
Create a chart with an intentional error to see how lint catches it:
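A minimal way to reproduce this, assuming a chart directory named broken-chart (the chart name and the specific omission are illustrative, not from the original exercise):

```yaml
# broken-chart/Chart.yaml -- "version" is deliberately left out,
# which helm lint reports as a missing required field.
apiVersion: v2
name: broken-chart
description: A chart with an intentional error
# version: 0.1.0   <-- required field, omitted on purpose
```

Running helm lint ./broken-chart should then report the missing field and point you at Chart.yaml.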
Output:
Output:
helm template renders your templates locally without connecting to a Kubernetes cluster. This shows exactly what manifests will be created.
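Assuming a chart in ./mychart, the basic invocations look like this (release and chart names are placeholders):

```
# Render all manifests locally; nothing touches a cluster.
helm template my-release ./mychart

# Layer an environment-specific values file over the defaults.
helm template my-release ./mychart -f values-prod.yaml
```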
Output:
Output:
Before deploying, you can verify:
When templates produce unexpected output, helm template --debug shows each rendering step with line numbers, making errors easy to locate.
Create a template with a conditional that references a nonexistent value:
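A sketch of such a template (file name and value key are illustrative):

```yaml
# templates/optional-config.yaml -- .Values.nonexistent is not defined
# in values.yaml, so the whole block renders nothing.
{{- if .Values.nonexistent }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-optional
data:
  enabled: "true"
{{- end }}
```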
Without the flag, you see nothing if the value is missing. With --debug:
Output:
For more detailed inspection:
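Two useful variants, assuming the same ./mychart layout as above:

```
# Full debug output for every template in the chart.
helm template my-release ./mychart --debug

# Narrow the output to a single template file.
helm template my-release ./mychart --show-only templates/deployment.yaml --debug
```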
Output shows template variables, function evaluations, and which conditionals evaluated to true/false.
Before moving to runtime testing, verify your understanding:
✅ You can run helm lint on any chart directory and interpret the output
✅ You understand what helm template produces and how it differs from helm install
✅ You can use --debug to troubleshoot template rendering issues
✅ You've tested rendering a chart with multiple values files
Self-Check: Create a chart with a broken conditional ({{- if .Values.nonexistent }}). Use helm template --debug to confirm the conditional evaluates to false and produces no output.
Test pods are Kubernetes Pods that verify a deployed release works correctly. They run when you execute helm test; by default they remain in the cluster afterward unless you add a hook deletion policy.
Test pods use the helm.sh/hook: test annotation. Create a file templates/test-pod.yaml:
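A minimal sketch of such a test pod, modeled on the one helm create scaffolds (the service name and port are assumptions about your chart):

```yaml
# templates/test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    # This annotation is what marks the Pod as a Helm test.
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      # Succeeds (exit 0) only if the service answers.
      command: ["wget"]
      args: ["{{ .Release.Name }}-service:80"]
```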
Output (when running helm test):
Test pods use exit codes to signal success (0) or failure (non-zero). Helm interprets these to report test results.
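The convention can be sketched in plain shell without Helm at all: a command's exit status decides the reported phase, exactly as Helm treats the test container's exit code.

```shell
# Simulate how an exit code maps to a reported test result:
# exit 0 -> Succeeded, anything else -> Failed.
report() {
  if "$@" > /dev/null 2>&1; then
    echo "Phase: Succeeded"
  else
    echo "Phase: Failed"
  fi
}

report true    # a command that exits 0
report false   # a command that exits non-zero
```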
Output:
Output:
helm test executes all test pods in a release and reports results. Run it after deploying with helm install or helm upgrade.
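The typical sequence, with placeholder release and chart names:

```
# Deploy first; helm test operates on an existing release.
helm install my-release ./mychart

# Run every test-hook pod in the release and report pass/fail.
helm test my-release

# Also stream the test pods' logs into the output.
helm test my-release --logs
```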
Output:
Output:
If tests take longer than the default 300 seconds:
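Pass a longer Go-style duration to the --timeout flag (the release name and value are placeholders):

```
# Raise the timeout from the 5-minute default to 10 minutes.
helm test my-release --timeout 10m
```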
Output:
For testing template logic (conditionals, loops, variable substitution), use helm-unittest. This tests chart templates without deploying to Kubernetes.
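helm-unittest installs as a Helm plugin and runs against the chart directory:

```
# Install the plugin, then run any suites under the chart's tests/ directory.
helm plugin install https://github.com/helm-unittest/helm-unittest
helm unittest ./mychart
```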
Output:
Create tests/deployment_test.yaml:
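A minimal suite in helm-unittest's format, assuming the chart has a templates/deployment.yaml that reads .Values.replicaCount (both assumptions about your chart):

```yaml
# tests/deployment_test.yaml
suite: deployment template
templates:
  - deployment.yaml
tests:
  - it: renders a Deployment
    asserts:
      - isKind:
          of: Deployment
  - it: renders the configured replica count
    set:
      replicaCount: 3
    asserts:
      - equal:
          path: spec.replicas
          value: 3
```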
Output:
Integration tests verify that deployed resources (Deployments, Services, ConfigMaps) behave correctly in a live Kubernetes cluster.
Create templates/integration-test.yaml:
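A sketch of an integration test pod; the service name and the /health path are assumptions about what your chart deploys:

```yaml
# templates/integration-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-integration-test"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: check-service
      image: busybox
      command:
        - sh
        - -c
        - |
          # Exit non-zero (test failure) if the live service does not answer.
          wget -q -O- http://{{ .Release.Name }}-service:80/health || exit 1
```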
Output (when helm test runs):
Decision rule: Start with unit tests for template logic. Add integration tests only if deployment behavior needs verification (service connectivity, data persistence, etc.).
Before the exercise, confirm you can:
✅ Write a test pod with the helm.sh/hook: test annotation
✅ Use exit codes to signal test success (0) or failure (non-zero)
✅ Run helm test against a deployed release
✅ Distinguish when to use unit tests vs integration tests
✅ Install and run helm-unittest for template logic testing
Self-Check: Write a test pod that verifies a ConfigMap exists with kubectl get configmap <name>. Deploy the chart, run helm test, and confirm the test passes.
Problem: Running helm test my-release when the release doesn't exist.
Error:
Fix: Always deploy first with helm install or helm upgrade, then run helm test.
Problem: Creating a test pod without helm.sh/hook: test.
Result: Pod is created during deployment instead of during testing phase. It runs immediately and may fail before dependencies are ready.
Fix:
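Add the hook annotation so the Pod is created only during the test phase, not at deploy time:

```yaml
metadata:
  annotations:
    "helm.sh/hook": test
```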
Problem: Test pods remain after testing completes, cluttering the namespace.
Fix: Add deletion policy:
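The hook-delete-policy annotation tells Helm when to remove the test pod; hook-succeeded deletes it after a passing run (other valid values are hook-failed and before-hook-creation):

```yaml
metadata:
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
```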
Problem: helm lint passes but deployment fails because of Kubernetes-specific validation (resource limits, invalid service types, etc.).
Reality: helm lint validates chart structure and template syntax, NOT Kubernetes resource semantics.
Fix: Combine helm lint with helm template --validate (requires cluster connection) or deploy to a test namespace.
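Combining the two checks looks like this (chart path is a placeholder):

```
# Structural and syntax checks, no cluster needed.
helm lint ./mychart

# Additionally validate rendered manifests against the cluster's API schemas.
helm template my-release ./mychart --validate
```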
Problem: Test pods hang indefinitely waiting for external dependencies (databases, APIs).
Fix: Always specify a timeout:
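With an explicit timeout, a hung test fails after the deadline instead of blocking indefinitely:

```
helm test my-release --timeout 2m
```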
Problem: Writing helm-unittest tests that expect certain cluster resources to exist.
Reality: Unit tests run locally without a cluster—they test template logic only.
Fix: Use integration test pods for cluster-dependent validation.
You'll test a sample Helm chart that deploys a simple agent API. Create a working directory:
Step 1: Create a Chart and Run helm lint
Ask AI to create a Helm chart with intentional issues:
"Create a Helm chart for an agent service with:
Write the chart files into your working directory and run helm lint to verify it catches the error. Note how the error message guides you to the fix.
Step 2: Inspect Template Rendering
Ask AI to render the chart with different values files:
"Now create a values-prod.yaml with a different replica count (5 instead of 3) and a NodePort service type. Show me what helm template outputs with and without this values file."
Compare the two outputs. What changed? Why would this difference matter in production?
Step 3: Create a Test Pod
Ask AI to create a test pod that:
Deploy the chart with helm install and run helm test to verify your test pod works.
Step 4: Write a Unit Test
Ask AI to create a helm-unittest test file that:
Run helm unittest to verify your tests pass.
Step 5: Reflection
Compare what each testing approach revealed:
You built a helm-chart skill in Chapter 0. Test and improve it based on what you learned.
If you found gaps: