The Validation Shield: Testing Your Charts

In Chapters 1-5, you built a production-ready Helm chart: templates that render correctly, values that adapt to environments, and helper functions that eliminate boilerplate. But before deploying to production, you need confidence that the chart works as intended. This means catching errors early through validation, verifying template rendering produces correct manifests, and testing that deployed resources actually function.

This chapter teaches the testing safety net that stands between a working chart in development and a broken deployment in production. You'll use helm lint to validate chart structure, helm template to inspect rendered manifests, test pods to verify connectivity, and helm test to run validation against live deployments. Together, these tools catch the mistakes that would otherwise cost hours to debug in production.
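
As a quick preview, the whole safety net fits in a handful of commands. A minimal sketch (the chart path and release name are placeholders):

bash
# 1. Static validation: chart structure and template syntax
helm lint ./my-chart

# 2. Render the manifests locally and inspect them
helm template my-release ./my-chart

# 3. Deploy, then run the chart's test pods against the live release
helm install my-release ./my-chart
helm test my-release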


What You'll Learn

This chapter covers Helm testing approaches from static validation to live deployment verification:

| Testing Category | Tools/Concepts | What They Validate |
|---|---|---|
| Static analysis | helm lint, helm template | Chart structure, template syntax, YAML validity, rendered output |
| Runtime testing | Test pods, helm test, exit codes | Deployed resources, service connectivity, integration behavior |
| Unit testing | helm-unittest framework | Template logic, conditionals, variable substitution |
| Integration testing | Test pods with kubectl commands | Live cluster behavior, resource interactions |

Prerequisites:

  • Completed Chapters 1-5 (templates, values, helpers, conditionals, dependencies)
  • Access to a Kubernetes cluster (minikube, kind, or cloud provider)
  • Helm 3.x installed locally
  • Basic understanding of exit codes and shell scripting

Time Estimate: 45 minutes


Concept 1: Validating Charts with helm lint

helm lint checks your chart for structural errors, missing required fields, and configuration problems. It runs before installation, catching issues early.

Understanding helm lint Output

Create a chart with an intentional error to see how lint catches it:

bash
# Create test chart
mkdir -p test-chart/templates
cd test-chart

# Create Chart.yaml with a missing required field
cat > Chart.yaml <<'EOF'
name: test-chart
version: 1.0.0
# Note: apiVersion is missing (a required field)
EOF

# Create broken template
cat > templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.replicaCount | invalid_function }}
EOF

# Create values.yaml
cat > values.yaml <<'EOF'
appName: my-app
replicaCount: 3
EOF

Output:

text
$ helm lint .
==> Linting .
[INFO] Chart.yaml: description is missing
[ERROR] Chart.yaml: apiVersion is required. The value must be either "v1" or "v2"
[ERROR] templates/: parse error at (test-chart/templates/deployment.yaml:6): function "invalid_function" not defined

Error: 1 chart(s) failed linting

What helm lint Validates

| Check | Purpose |
|---|---|
| Chart.yaml structure | Required fields (apiVersion, name, version); description is recommended |
| Template syntax | Valid Go templating, correct function names |
| YAML validity | Proper indentation, valid structure |
| Chart directory layout | templates/ and values.yaml exist and can be parsed |
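
By default, some findings are reported as warnings or info and don't fail the run. In CI it's common to promote warnings to failures with the --strict flag:

bash
# Treat lint warnings as errors (useful in CI pipelines)
helm lint . --strict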

Fixing Errors: Add the Missing Fields

bash
cat > Chart.yaml <<'EOF'
apiVersion: v2
name: test-chart
version: 1.0.0
description: A test chart for Helm testing
appVersion: "1.0"
EOF

cat > templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.replicaCount }}
EOF

Output:

text
$ helm lint .
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

Concept 2: Rendering Templates to Inspect Output

helm template renders your templates locally without connecting to a Kubernetes cluster. This shows exactly what manifests will be created.

Basic Template Rendering

bash
cd test-chart
helm template my-release .

Output:

yaml
---
# Source: test-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3

Rendering with Environment-Specific Values

bash
# Create values-prod.yaml
cat > values-prod.yaml <<'EOF'
appName: prod-agent
replicaCount: 5
EOF

# Render with production values
helm template my-release . -f values-prod.yaml

Output:

yaml
---
# Source: test-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-agent
spec:
  replicas: 5

Why This Matters

Before deploying, you can verify:

  • Variable substitution works correctly
  • Conditionals produce expected output
  • Resource definitions are valid Kubernetes YAML
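
One thing helm template alone cannot verify is whether the manifests are actually valid against the Kubernetes API. A common complement, assuming you have cluster access, is a server-side dry run over the rendered output:

bash
# Render locally, then let the API server validate the manifests
# without creating anything (requires a reachable cluster)
helm template my-release . | kubectl apply --dry-run=server -f -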

Concept 3: Debugging Template Rendering with --debug

When templates produce unexpected output, helm template --debug shows each rendering step with line numbers, making errors easy to locate.

Broken Conditional Example

Create a template with a conditional that references a nonexistent value:

bash
cat > templates/service.yaml <<'EOF'
{{- if .Values.serviceEnabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
spec:
  type: {{ .Values.serviceType }}
{{- end }}
EOF

Without any flags, the Service simply never appears in the rendered output, with no explanation. Running with --debug keeps the diagnostic detail visible:

bash
helm template my-release . --debug

Output:

text
---
# Source: test-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3

Note what's missing: there is no # Source: test-chart/templates/service.yaml block, which tells you the conditional evaluated to false and the template produced no resource.

Using --debug for Line-by-Line Inspection

For more detailed inspection:

bash
helm template my-release . --debug 2>&1 | grep -A 5 "service.yaml"

An empty result from the grep confirms that service.yaml rendered nothing. On a hard rendering failure, --debug also prints the partially rendered manifest, which helps pinpoint the failing line.
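
To confirm the conditional itself behaves, you can also force it on from the command line and check that the Service appears. A quick sketch, reusing the value names from the example above:

bash
# Supply the missing values inline; the Service should now render
helm template my-release . \
  --set serviceEnabled=true \
  --set serviceName=my-service \
  --set serviceType=ClusterIP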


Checkpoint

Before moving to runtime testing, verify your understanding:

✅ You can run helm lint on any chart directory and interpret the output
✅ You understand what helm template produces and how it differs from helm install
✅ You can use --debug to troubleshoot template rendering issues
✅ You've tested rendering a chart with multiple values files

Self-Check: Create a chart with a broken conditional ({{- if .Values.nonexistent }}). Use helm template --debug to confirm the conditional evaluates to false and produces no output.


Concept 4: Test Pods and the helm.sh/hook: test Annotation

Test pods are Kubernetes Pods that verify a deployed release works correctly. They're executed when you run helm test and removed afterward.

Creating a Test Pod

Test pods use the helm.sh/hook: test annotation. Create a file templates/test-pod.yaml:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "test-chart.fullname" . }}-test
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
  serviceAccountName: {{ include "test-chart.fullname" . }}
  containers:
    - name: test-agent-connectivity
      image: curlimages/curl:latest
      command:
        - sh
        - -c
      args:
        - |
          echo "Testing service connectivity..."
          # -f makes curl exit non-zero on HTTP error responses (4xx/5xx)
          if curl -fsS http://{{ include "test-chart.fullname" . }}:8000/health; then
            echo "Test PASSED: Service is reachable"
            exit 0
          else
            echo "Test FAILED: Service is not reachable"
            exit 1
          fi
  restartPolicy: Never

Output (when running helm test):

text
Pod test-chart-test created
Pod test-chart-test is still running...
Pod test-chart-test succeeded
Test suite for my-release PASSED

Understanding Test Pod Annotations

| Annotation | Purpose |
|---|---|
| helm.sh/hook: test | Marks this pod as a test resource |
| helm.sh/hook-delete-policy: before-hook-creation | Delete previous test pods before running new ones |
| helm.sh/hook-delete-policy: hook-succeeded | Delete the test pod after it succeeds |
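
If a release has several test pods and their order matters (say, a quick smoke test before a heavier integration test), Helm's helm.sh/hook-weight annotation runs lower weights first. A minimal sketch:

yaml
metadata:
  annotations:
    helm.sh/hook: test
    # Lower weights run first; the value must be a quoted string
    helm.sh/hook-weight: "-5"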

Concept 5: Test Exit Codes—What Passes and What Fails

Test pods use exit codes to signal success (0) or failure (non-zero). Helm interprets these to report test results.

A Passing Test (exit 0)

yaml
# Test passes (exit 0)
apiVersion: v1
kind: Pod
metadata:
  name: test-pass
  annotations:
    helm.sh/hook: test
spec:
  containers:
    - name: test
      image: alpine:latest
      command:
        - sh
        - -c
        - |
          echo "Test passed"
          exit 0
  restartPolicy: Never

Output:

text
Pod test-pass created
Test pod test-pass succeeded

A Failing Test (exit non-zero)

yaml
# Test fails (exit non-zero)
apiVersion: v1
kind: Pod
metadata:
  name: test-fail
  annotations:
    helm.sh/hook: test
spec:
  containers:
    - name: test
      image: alpine:latest
      command:
        - sh
        - -c
        - |
          echo "Test failed: database unreachable"
          exit 1
  restartPolicy: Never

Output:

text
Pod test-fail created
Pod test-fail failed
Test suite for my-release FAILED

Concept 6: Running Tests with helm test

helm test executes all test pods in a release and reports results. Run it after deploying with helm install or helm upgrade.

Full Testing Workflow

bash
# Step 1: Deploy the chart
helm install task-api ./agent-chart --values agent-values.yaml

Output:

text
Release "task-api" has been deployed. NAME: task-api STATUS: deployed REVISION: 1
bash
# Step 2: Run tests against the deployed release
helm test task-api

Output:

text
Pod: task-api-test-1
Pod: task-api-test-1 succeeded
Pod: task-api-test-2
Pod: task-api-test-2 succeeded
Pod: task-api-test-3
Pod: task-api-test-3 succeeded
Pod: task-api-connectivity-test
Pod: task-api-connectivity-test succeeded
TEST SUITE: PASSED
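
When a test fails, the pod's own output usually holds the explanation. The --logs flag prints each test pod's logs alongside the results:

bash
# Show test pod logs together with the pass/fail summary
helm test task-api --logs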

Testing with Timeout

If tests take longer than the default 300 seconds:

bash
helm test task-api --timeout 10m

Output:

text
Test pods running with 10m timeout...
All tests passed within timeout

Concept 7: Unit Testing with helm-unittest Framework

For testing template logic (conditionals, loops, variable substitution), use helm-unittest. This tests chart templates without deploying to Kubernetes.

Installing helm-unittest

bash
helm plugin install https://github.com/helm-unittest/helm-unittest

Output:

text
Installed plugin: unittest

Writing a Unit Test

Create tests/deployment_test.yaml:

yaml
suite: test deployment
templates:
  - deployment.yaml
tests:
  - name: deployment should have correct replica count
    set:
      replicaCount: 5
    asserts:
      - equal:
          path: spec.replicas
          value: 5
  - name: deployment should handle zero replicas
    set:
      replicaCount: 0
    asserts:
      - equal:
          path: spec.replicas
          value: 0
  - name: deployment should use correct image
    set:
      image:
        repository: myregistry/agent
        tag: v1.2.0
    asserts:
      - equal:
          path: spec.template.spec.containers[0].image
          value: myregistry/agent:v1.2.0

Running Unit Tests

bash
helm unittest ./agent-chart

Output:

text
suite: test deployment
  ✓ deployment should have correct replica count
  ✓ deployment should handle zero replicas
  ✓ deployment should use correct image

3 tests passed
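
helm-unittest can also exercise conditional logic. A sketch for the service template from Concept 3, using the framework's isKind and hasDocuments assertions (the value names are carried over from that example):

yaml
suite: test service conditional
templates:
  - service.yaml
tests:
  - name: service should render when enabled
    set:
      serviceEnabled: true
      serviceName: my-service
      serviceType: ClusterIP
    asserts:
      - isKind:
          of: Service
  - name: service should not render when disabled
    set:
      serviceEnabled: false
    asserts:
      - hasDocuments:
          count: 0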

Concept 8: Integration Testing—Testing Deployed Resources

Integration tests verify that deployed resources (Deployments, Services, ConfigMaps) behave correctly in a live Kubernetes cluster.

Integration Test Pattern

Create templates/integration-test.yaml:

bash
cat > templates/integration-test.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "agent-chart.fullname" . }}-integration-test
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
  # The service account needs RBAC permissions to read Deployments,
  # Services, and ConfigMaps for kubectl to work inside the pod
  serviceAccountName: {{ include "agent-chart.fullname" . }}
  containers:
    - name: integration-test
      # curlimages/curl does not ship kubectl; use an image that does
      image: bitnami/kubectl:latest
      command:
        - sh
        - -c
      args:
        - |
          # Test 1: Verify Deployment has correct replicas
          REPLICAS=$(kubectl get deployment {{ include "agent-chart.fullname" . }} -o jsonpath='{.spec.replicas}')
          echo "Deployment has $REPLICAS replicas"
          if [ "$REPLICAS" != "{{ .Values.replicaCount }}" ]; then
            echo "FAILED: Expected {{ .Values.replicaCount }} replicas, got $REPLICAS"
            exit 1
          fi

          # Test 2: Verify Service exists and has a cluster IP
          SERVICE_IP=$(kubectl get svc {{ include "agent-chart.fullname" . }} -o jsonpath='{.spec.clusterIP}')
          echo "Service IP: $SERVICE_IP"
          if [ -z "$SERVICE_IP" ]; then
            echo "FAILED: Service not found"
            exit 1
          fi

          # Test 3: Verify ConfigMap data
          CONFIG=$(kubectl get configmap {{ include "agent-chart.fullname" . }}-config -o jsonpath='{.data}')
          echo "ConfigMap data exists: $CONFIG"

          echo "All integration tests passed"
          exit 0
  restartPolicy: Never
EOF

Output (when helm test runs):

text
Pod agent-chart-integration-test created
Deployment has 3 replicas
Service IP: 10.0.0.15
ConfigMap data exists: map[model:gpt-4 logLevel:INFO]
All integration tests passed
Pod agent-chart-integration-test succeeded

Decision Framework: Unit vs Integration Testing

| Dimension | Unit Testing (helm-unittest) | Integration Testing (Test Pods) |
|---|---|---|
| What it tests | Template logic (conditionals, loops, variable substitution) | Behavior of deployed resources in a live cluster |
| Dependencies | None (tests run locally) | Requires a Kubernetes cluster and a deployed release |
| Speed | Fast (seconds) | Slow (minutes, including pod startup time) |
| When to use | Chart configuration changes, template logic, edge cases | Verify the deployment actually works: service connectivity, data persistence |
| Example | "Does the deployment have the correct image?" | "Can the service reach the deployment?" |

Decision rule: Start with unit tests for template logic. Add integration tests only if deployment behavior needs verification (service connectivity, data persistence, etc.).
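
In practice the two layers compose into a single pipeline: fast local checks first, cluster-backed checks last. A minimal CI sketch, assuming a disposable cluster (kind or minikube) is available and the chart and release names are placeholders:

bash
#!/bin/sh
set -e  # stop at the first failing step

# Fast, local checks: no cluster required
helm lint ./agent-chart --strict
helm unittest ./agent-chart

# Slow, cluster-backed checks
helm install ci-release ./agent-chart --wait --timeout 5m
helm test ci-release --logs
helm uninstall ci-release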


Checkpoint

Before the exercise, confirm you can:

✅ Write a test pod with the helm.sh/hook: test annotation
✅ Use exit codes to signal test success (0) or failure (non-zero)
✅ Run helm test against a deployed release
✅ Distinguish when to use unit tests vs integration tests
✅ Install and run helm-unittest for template logic testing

Self-Check: Write a test pod that verifies a ConfigMap exists with kubectl get configmap <name>. Deploy the chart, run helm test, and confirm the test passes.


Common Mistakes

Mistake 1: Running helm test Before Deployment

Problem: Running helm test my-release when the release doesn't exist.

Error:

text
Error: release: not found

Fix: Always deploy first with helm install or helm upgrade, then run helm test.
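
In scripts, chaining the two commands makes the ordering mistake-proof:

bash
# Deploy (waiting until resources are ready), then test in one step
helm install my-release ./my-chart --wait && helm test my-release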

Mistake 2: Forgetting the Test Hook Annotation

Problem: Creating a test pod without helm.sh/hook: test.

Result: Pod is created during deployment instead of during testing phase. It runs immediately and may fail before dependencies are ready.

Fix:

yaml
metadata:
  annotations:
    helm.sh/hook: test  # REQUIRED for test pods

Mistake 3: Test Pod Doesn't Clean Up

Problem: Test pods remain after testing completes, cluttering the namespace.

Fix: Add deletion policy:

yaml
metadata:
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded

Mistake 4: Assuming helm lint Catches All Errors

Problem: helm lint passes but deployment fails because of Kubernetes-specific validation (resource limits, invalid service types, etc.).

Reality: helm lint validates chart structure and template syntax, NOT Kubernetes resource semantics.

Fix: Combine helm lint with helm template --validate (requires cluster connection) or deploy to a test namespace.
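
For example, with a reachable cluster:

bash
# Ask the cluster's API server to validate the rendered manifests
helm template my-release . --validate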

Mistake 5: Testing Without Timeouts

Problem: Test pods hang indefinitely waiting for external dependencies (databases, APIs).

Fix: Always specify a timeout:

bash
helm test my-release --timeout 5m

Mistake 6: Unit Tests That Depend on Cluster State

Problem: Writing helm-unittest tests that expect certain cluster resources to exist.

Reality: Unit tests run locally without a cluster—they test template logic only.

Fix: Use integration test pods for cluster-dependent validation.


Try With AI

Setup

You'll test a sample Helm chart that deploys a simple agent API. Create a working directory:

bash
mkdir helm-testing-practice
cd helm-testing-practice

Step 1: Create a Chart and Run helm lint

Ask AI to create a Helm chart with intentional issues:

"Create a Helm chart for an agent service with:

  • Chart.yaml with a missing apiVersion field (to trigger a lint error)
  • A Deployment template
  • A Service template
  • A values.yaml with image, replicas, and service type"

Apply the chart to your directory and run helm lint to verify it catches the error. Note how the error message guides you to fix it.

Step 2: Inspect Template Rendering

Ask AI to render the chart with different values files:

"Now create a values-prod.yaml with different replica count (5 instead of 3) and nodePort service type. Show me what helm template outputs with and without this values file."

Compare the two outputs. What changed? Why would this difference matter in production?

Step 3: Create a Test Pod

Ask AI to create a test pod that:

  • Uses curl to verify the Service is accessible
  • Returns exit code 0 if the service responds, exit code 1 if it doesn't
  • Includes the helm.sh/hook: test annotation

Deploy the chart with helm install and run helm test to verify your test pod works.

Step 4: Write a Unit Test

Ask AI to create a helm-unittest test file that:

  • Verifies the Deployment has the correct number of replicas from values
  • Verifies the image tag is correctly substituted
  • Tests conditional logic (e.g., "if serviceEnabled is true, Service should exist")

Run helm unittest to verify your tests pass.

Step 5: Reflection

Compare what each testing approach revealed:

  • What did helm lint catch that you might have missed manually?
  • What did helm template show that helped you understand the manifest?
  • What did unit testing verify about template logic?
  • What would integration testing verify that unit tests cannot?

Reflect on Your Skill

You built a helm-chart skill in Chapter 0. Test and improve it based on what you learned.

Test Your Skill

text
Using my helm-chart skill, create test pods and helm-unittest tests for a chart. Does my skill understand validation strategies and test patterns?

Improve Your Skill

If you found gaps:

text
My helm-chart skill is missing [testing strategies / validation patterns]. Update it to include: - helm lint for structural validation - helm template --debug for rendering inspection - Test pods with helm.sh/hook: test and proper exit codes - helm-unittest for template logic verification - Integration tests with kubectl commands - Decision framework: unit vs integration testing