USMAN’S INSIGHTS
AI ARCHITECT
Why Your Distributed App Silently Loses Data (And How to Stop It)
© 2026 Muhammad Usman Akbar. All rights reserved.

Deploy Dapr + State Management

Module 7 takes the agent you built in Module 6 and turns it into a production cloud service. You'll containerize the stack, orchestrate it on Kubernetes, automate delivery, and operate it with observability, security, and cost controls. The goal: a reliable Digital FTE that runs 24/7 for real users.

Prerequisites: Modules 4-6. You need a working agent service to deploy.

You understand the sidecar pattern and building blocks. Now it's time to deploy a real Dapr control plane and write code that uses it.

This lesson has two parts. First, you'll deploy Dapr on Docker Desktop Kubernetes using Helm—the same pattern you used for Kafka in earlier chapters. Second, you'll implement state management operations using the Python SDK, moving from simple save/get operations to handling concurrent updates with ETags.

By the end, you'll have Dapr running on your local Kubernetes cluster and a clear pattern for persisting state without writing any Redis-specific code.


Part A: Deploy Dapr with Helm (15 minutes)

Prerequisites Check

Before deploying Dapr, verify your environment:

```bash
# Check Docker Desktop Kubernetes is running
kubectl cluster-info
```

Output:

```text
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```
```bash
# Check Helm is installed
helm version
```

Output:

```text
version.BuildInfo{Version:"v3.16.3", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.22.7"}
```

If either command fails, revisit earlier chapters to set up Docker Desktop Kubernetes and Helm.

Step 1: Add Dapr Helm Repository

```bash
# Add Dapr Helm repo
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
```

Output:

```text
"dapr" has been added to your repositories
Successfully got an update from the "dapr" chart repository
Update Complete. Happy Helming!
```

Step 2: Install Dapr Control Plane

Install Dapr 1.14 in the dapr-system namespace:

```bash
# Install Dapr control plane
helm upgrade --install dapr dapr/dapr \
  --version=1.14.0 \
  --namespace dapr-system \
  --create-namespace \
  --wait
```

Output:

```text
Release "dapr" does not exist. Installing it now.
NAME: dapr
LAST DEPLOYED: [timestamp]
NAMESPACE: dapr-system
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing dapr.
```

The --wait flag ensures Helm waits until all pods are ready before returning.

Step 3: Verify Control Plane Components

Check that all Dapr pods are running:

```bash
kubectl get pods -n dapr-system
```

Output:

```text
NAME                                     READY   STATUS    RESTARTS   AGE
dapr-operator-7d8b9f4c5b-x2j4k           1/1     Running   0          45s
dapr-sentry-5f6c7d8e9f-m3n5p             1/1     Running   0          45s
dapr-sidecar-injector-6a7b8c9d0e-q1r2s   1/1     Running   0          45s
dapr-placement-server-0                  1/1     Running   0          45s
dapr-scheduler-server-0                  1/1     Running   0          45s
```

Each component has a specific role:

| Component | Role |
| --- | --- |
| dapr-operator | Manages Dapr component resources and Kubernetes integration |
| dapr-sidecar-injector | Automatically injects sidecars into pods with Dapr annotations |
| dapr-sentry | Certificate authority for mTLS between services |
| dapr-placement-server | Actor placement service (used in later chapters) |
| dapr-scheduler-server | Job scheduling service for the Jobs API (later lessons) |

Step 4: Deploy Redis for State Store

Dapr needs a backend for state storage. Deploy Redis using the Bitnami Helm chart:

```bash
# Add Bitnami repo if not already added
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install Redis in default namespace
helm install redis bitnami/redis \
  --namespace default \
  --set auth.enabled=false \
  --set architecture=standalone
```

Output:

```text
NAME: redis
LAST DEPLOYED: [timestamp]
NAMESPACE: default
STATUS: deployed
```

Wait for Redis to be ready:

```bash
kubectl get pods -l app.kubernetes.io/name=redis
```

Output:

```text
NAME             READY   STATUS    RESTARTS   AGE
redis-master-0   1/1     Running   0          60s
```

Step 5: Create State Store Component

Dapr components tell the sidecar which backend to use. Create a file named statestore.yaml:

```yaml
# statestore.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379
    - name: redisPassword
      value: ""
```
| Field | Purpose |
| --- | --- |
| `type: state.redis` | Use the Redis state store implementation |
| `version: v1` | Component API version |
| `redisHost` | Kubernetes DNS name for the Redis service |
| `redisPassword` | Empty for development (use secrets in production) |

Apply the component:

```bash
kubectl apply -f statestore.yaml
```

Output:

```text
component.dapr.io/statestore created
```

Verify the component was created:

```bash
kubectl get components
```

Output:

```text
NAME         AGE
statestore   15s
```

Your Dapr control plane is now running with a Redis state store configured. Any pod with Dapr annotations can now use the state API.
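The "Dapr annotations" mentioned above are what trigger sidecar injection, and they are opt-in per pod. For reference, a Deployment's pod template requests a sidecar with annotations like these (a minimal sketch — the app id and port are placeholders for your own service):

```yaml
# Pod template metadata on a Deployment (values are placeholders)
annotations:
  dapr.io/enabled: "true"        # ask the injector to add a daprd sidecar
  dapr.io/app-id: "todo-service" # logical name other services use to reach you
  dapr.io/app-port: "8000"       # port your app listens on
```

You'll wire these into a real Deployment when you containerize the service later in the module.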


Part B: State Management with Python SDK (20 minutes)

Now let's write Python code that uses Dapr's state management. You'll use the Dapr Python SDK's `DaprClient` to save, retrieve, and delete state—without any Redis-specific code.

Install the Python SDK

```bash
pip install dapr
```

Basic State Operations

The DaprClient class provides a context manager for state operations:

```python
from dapr.clients import DaprClient
from pydantic import BaseModel

class Todo(BaseModel):
    id: str
    title: str
    done: bool = False

# Using context manager for proper resource cleanup
with DaprClient() as client:
    # Create a Todo
    todo = Todo(id="todo-1", title="Learn Dapr", done=False)

    # Save state
    client.save_state(
        store_name="statestore",
        key="todo-1",
        value=todo.model_dump_json(),
    )
    print(f"Saved: {todo.title}")
```

Output:

```text
Saved: Learn Dapr
```

Retrieve State

```python
with DaprClient() as client:
    # Get state
    state = client.get_state(
        store_name="statestore",
        key="todo-1",
    )
    if state.data:
        todo = Todo.model_validate_json(state.data)
        print(f"Retrieved: {todo.title}, done={todo.done}")
    else:
        print("Todo not found")
```

Output:

```text
Retrieved: Learn Dapr, done=False
```

Delete State

```python
with DaprClient() as client:
    # Delete state
    client.delete_state(
        store_name="statestore",
        key="todo-1",
    )
    print("Deleted todo-1")

    # Verify deletion
    state = client.get_state(store_name="statestore", key="todo-1")
    print(f"After delete, data exists: {bool(state.data)}")
```

Output:

```text
Deleted todo-1
After delete, data exists: False
```

Bulk Operations

When you need to save multiple items, bulk operations are more efficient:

```python
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem

with DaprClient() as client:
    # Save multiple todos at once
    todos = [
        Todo(id="todo-1", title="Deploy Dapr", done=True),
        Todo(id="todo-2", title="Configure state store", done=True),
        Todo(id="todo-3", title="Write Python code", done=False),
    ]
    states = [
        StateItem(key=todo.id, value=todo.model_dump_json())
        for todo in todos
    ]
    client.save_bulk_state(
        store_name="statestore",
        states=states,
    )
    print(f"Saved {len(todos)} todos in one operation")
```

Output:

```text
Saved 3 todos in one operation
```

ETag for Optimistic Concurrency

When multiple processes might update the same state, you need concurrency control. Dapr uses ETags for optimistic concurrency—each state value has a version number, and updates only succeed if your version matches.

The problem: Two processes read the same todo, both modify it, both try to save. Without concurrency control, the last write wins and one update is lost.

The solution: Use the ETag returned with each read. If someone else modified the state since you read it, your ETag won't match and the save fails.

```python
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateOptions, Concurrency

with DaprClient() as client:
    # Get state with ETag
    state = client.get_state(
        store_name="statestore",
        key="todo-1",
    )
    current_etag = state.etag
    print(f"Current ETag: {current_etag}")

    # Update with ETag (first-write-wins)
    todo = Todo.model_validate_json(state.data)
    todo.done = True
    try:
        client.save_state(
            store_name="statestore",
            key="todo-1",
            value=todo.model_dump_json(),
            etag=current_etag,
            options=StateOptions(concurrency=Concurrency.first_write),
        )
        print("Update succeeded - ETag matched")
    except Exception as e:
        print(f"Update failed - ETag mismatch: {e}")
```

Output:

```text
Current ETag: 1
Update succeeded - ETag matched
```

If another process updated the state between your read and write, you'd see:

```text
Update failed - ETag mismatch: ...
```
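If the failure mode is hard to picture, here is a minimal in-memory simulation of the first-write-wins check — plain Python with no Dapr involved; the store class and exception are hypothetical stand-ins for what the sidecar does on your behalf:

```python
class EtagMismatch(Exception):
    """Raised when a save carries a stale ETag."""

class InMemoryStateStore:
    """Toy state store mimicking first-write-wins ETag checking."""

    def __init__(self):
        self._data = {}  # key -> (value, etag)

    def get(self, key):
        value, etag = self._data.get(key, (None, "0"))
        return value, etag

    def save(self, key, value, etag=None):
        _, current = self._data.get(key, (None, "0"))
        if etag is not None and etag != current:
            raise EtagMismatch(f"expected {current}, got {etag}")
        self._data[key] = (value, str(int(current) + 1))

store = InMemoryStateStore()
store.save("todo-1", '{"done": false}')  # first write, etag becomes "1"

# Two writers read the same version...
_, etag_a = store.get("todo-1")
_, etag_b = store.get("todo-1")

store.save("todo-1", '{"done": true}', etag=etag_a)  # writer A wins, etag now "2"

try:
    store.save("todo-1", '{"late": true}', etag=etag_b)  # writer B is stale
except EtagMismatch as e:
    print(f"Writer B rejected: {e}")  # prints: Writer B rejected: expected 2, got 1
```

Writer B's update is rejected instead of silently overwriting A's — exactly the lost-update protection the real ETag check provides.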

FastAPI Integration with Lifespan

In a real application, you'll integrate Dapr with FastAPI. Use the lifespan pattern for proper initialization:

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI, HTTPException
from dapr.clients import DaprClient
from pydantic import BaseModel
import uuid

class Todo(BaseModel):
    id: str | None = None
    title: str
    done: bool = False

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: verify Dapr sidecar is ready
    # In Kubernetes, the sidecar starts alongside your container
    yield
    # Shutdown: cleanup if needed

app = FastAPI(lifespan=lifespan)

@app.post("/todos", response_model=Todo)
async def create_todo(todo: Todo):
    todo.id = str(uuid.uuid4())
    with DaprClient() as client:
        client.save_state(
            store_name="statestore",
            key=f"todo-{todo.id}",
            value=todo.model_dump_json(),
        )
    return todo

@app.get("/todos/{todo_id}", response_model=Todo)
async def get_todo(todo_id: str):
    with DaprClient() as client:
        state = client.get_state(
            store_name="statestore",
            key=f"todo-{todo_id}",
        )
    if not state.data:
        raise HTTPException(status_code=404, detail="Todo not found")
    return Todo.model_validate_json(state.data)

@app.delete("/todos/{todo_id}")
async def delete_todo(todo_id: str):
    with DaprClient() as client:
        client.delete_state(
            store_name="statestore",
            key=f"todo-{todo_id}",
        )
    return {"status": "deleted"}

@app.get("/health")
async def health():
    return {"status": "healthy"}
```

This FastAPI application stores todos in Dapr state without any Redis-specific code. If you later switch to PostgreSQL or Cosmos DB, you change the component YAML—not your application code.
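As an example of such a swap, moving this app from Redis to PostgreSQL could be as small as replacing the component definition (a sketch — the connection string is a placeholder, and a reachable Postgres instance is assumed):

```yaml
# statestore.yaml — same component name, different backend
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.postgresql
  version: v1
  metadata:
    - name: connectionString
      value: "host=postgres-host user=app password=... dbname=todos"
```

Because the component keeps the name `statestore`, every `save_state` and `get_state` call in the FastAPI code continues to work unchanged.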


Why Dapr State vs Direct Redis?

You might wonder: why add Dapr instead of using Redis directly?

| Aspect | Direct Redis (redis-py) | Dapr State API |
| --- | --- | --- |
| Backend lock-in | Code tied to Redis | Swap via YAML |
| Connection management | Your responsibility | Sidecar handles it |
| Serialization | Your choice | Consistent JSON |
| Concurrency | Manual ETag implementation | Built-in first-write-wins |
| mTLS | Manual certificate setup | Automatic via Sentry |
| State backends | Redis only | 30+ supported stores |

For a single service using Redis forever, direct Redis is fine. For distributed systems where you might change backends or need consistent patterns across services, Dapr provides valuable abstraction.


Reflect on Your Skill

You built a dapr-deployment skill in earlier lessons. Does it include both infrastructure deployment AND state management code patterns?

Test Your Skill

```text
Using my dapr-deployment skill, generate:
1. Helm commands to deploy Dapr 1.14 with verification
2. A Redis state store component YAML
3. Python code using DaprClient to save and get state

Does my skill produce all three outputs correctly?
```

Identify Gaps

Ask yourself:

  • Did my skill include the control plane component explanations?
  • Did it include the async context manager pattern?
  • Did it explain ETag concurrency?

Improve Your Skill

If you found gaps:

```text
Update my dapr-deployment skill to include:
- Helm deployment commands with --wait and verification
- State store component YAML structure
- DaprClient context manager pattern for state operations
- ETag-based optimistic concurrency example
```

Try With AI

Deploy Dapr on Your Cluster

```text
Deploy Dapr 1.14 on my Docker Desktop Kubernetes cluster. Show me:
1. The Helm install command with recommended flags
2. How to verify all control plane pods are running
3. What each control plane component does

Then create a Redis state store component YAML.
```

What you're learning: Dapr deployment follows the operator pattern you learned in earlier chapters (Strimzi). The control plane manages sidecar injection and security certificates—you declare what you want, Dapr figures out how to achieve it.
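That declare-and-reconcile idea can be sketched in a few lines of plain Python (a hypothetical illustration of the pattern, not Dapr's or Kubernetes' actual operator code):

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the actions needed to move actual state toward desired state."""
    actions = []
    for name, replicas in desired.items():
        have = actual.get(name, 0)
        if have < replicas:
            actions.append(f"scale {name} up to {replicas}")
        elif have > replicas:
            actions.append(f"scale {name} down to {replicas}")
    return actions

desired = {"dapr-operator": 1, "dapr-sentry": 1}
actual = {"dapr-operator": 0, "dapr-sentry": 1}
print(reconcile(desired, actual))  # -> ['scale dapr-operator up to 1']
```

You declare the `desired` state; the operator runs this loop continuously and issues whatever actions close the gap.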


Create a FastAPI State Endpoint

```text
Create a FastAPI endpoint that saves a Todo to Dapr state store using DaprClient. Include:
- Pydantic model for Todo
- POST endpoint to create todos
- GET endpoint to retrieve by ID
- The state store component YAML needed
```

What you're learning: The DaprClient context manager handles connection lifecycle. You don't manage Redis connections—you call save_state with a store name, and Dapr routes to the configured backend.
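The lifecycle guarantee itself is ordinary Python context-manager mechanics. A minimal sketch of acquire-on-enter, release-on-exit (`FakeConnection` and `MiniClient` are hypothetical stand-ins, not the SDK):

```python
class FakeConnection:
    """Stand-in for a channel to the sidecar."""

    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

class MiniClient:
    """Sketch of the acquire/release shape a client context manager follows."""

    def __enter__(self):
        self.conn = FakeConnection()  # acquire on entry
        return self

    def __exit__(self, exc_type, exc, tb):
        self.conn.close()  # release on exit, even if the body raised
        return False       # don't swallow exceptions

with MiniClient() as client:
    assert client.conn.open   # connection live inside the block
assert not client.conn.open   # released automatically afterward
```

This is why the examples in this lesson never call a close method: leaving the `with` block does it.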


Explain ETag Concurrency

```text
What's the ETag pattern for optimistic concurrency with Dapr state? Help me understand:
1. What problem does it solve?
2. How do I use it with DaprClient?
3. What happens when there's a conflict?
4. When would I use first-write-wins vs last-write-wins?
```

What you're learning: Concurrent state updates are a classic distributed systems problem. ETags provide optimistic concurrency—you assume no conflict and handle the rare case when one occurs. This is more performant than pessimistic locking for read-heavy workloads.
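In practice, optimistic concurrency is paired with a bounded retry loop: on conflict you re-read, re-apply your change, and try again instead of holding a lock. A pure-Python sketch of that loop (`VersionedStore` and `Conflict` are hypothetical stand-ins, not the Dapr SDK):

```python
class Conflict(Exception):
    pass

class VersionedStore:
    """Toy store: a value plus an integer version, conflict on stale version."""

    def __init__(self, value=0):
        self.value, self.version = value, 1

    def read(self):
        return self.value, self.version

    def write(self, value, version):
        if version != self.version:
            raise Conflict
        self.value, self.version = value, self.version + 1

def increment_with_retry(store, attempts=5):
    """Optimistic read-modify-write: retry on conflict instead of locking."""
    for _ in range(attempts):
        value, version = store.read()
        try:
            store.write(value + 1, version)
            return True
        except Conflict:
            continue  # someone else won this round; re-read and try again
    return False

store = VersionedStore()
increment_with_retry(store)
print(store.read())  # -> (1, 2)
```

Bounding the attempts matters: under heavy contention you want to fail (or back off) rather than spin forever, which is the trade-off against pessimistic locking.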

Safety note: When using Dapr state in production, never store secrets in plain text. Use the Secrets building block (later lessons) for credentials, and enable encryption at rest on your state store backend.