Module 7 takes the agent you built in Module 6 and turns it into a production cloud service. You'll containerize the stack, orchestrate it on Kubernetes, automate delivery, and operate it with observability, security, and cost controls. The goal: a reliable Digital FTE that runs 24/7 for real users.
Prerequisites: Modules 4-6. You need a working agent service to deploy.
You've learned Dapr's building blocks individually: state management in Chapter 4, service invocation in Chapter 5, pub/sub in Chapter 6, bindings in Chapter 7, jobs in Chapter 8, and secrets in Chapter 9. Each building block solves a specific distributed systems challenge. Now it's time to compose them into a complete application.
This capstone follows the spec-driven development approach. You'll write a specification first, then refactor the Module 6 Task API to use Dapr for all infrastructure abstraction. The goal is practical: eliminate direct Redis, Kafka, and HTTP client code from your application. Your Task API talks to Dapr; Dapr talks to infrastructure.
The result demonstrates Dapr's core value proposition: infrastructure becomes configuration, not code. Need to swap Redis for PostgreSQL? Change a YAML file. Need to add Kafka alongside Redis pub/sub? Deploy another component. Your application code stays the same.
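For example, swapping the Redis state backend for PostgreSQL is a one-file change. A minimal sketch, assuming a component named `statestore` and Dapr's `state.postgresql` component type (the connection string here is a placeholder, not from the course):

```yaml
# Same component name, different backing store: application code is untouched.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.postgresql
  version: v1
  metadata:
  - name: connectionString
    value: "host=localhost user=postgres password=example port=5432 database=dapr"
```

Because the component name stays `statestore`, every `save_state`/`get_state` call in the application resolves to the new backend without a redeploy of the app itself.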
Before touching code, define precisely what you're building. A clear specification enables focused implementation and provides acceptance criteria for validation.
The specification maps Module 6's Task API functionality to Dapr building blocks:
When you work with an AI assistant to implement this specification, you both share a concrete definition of what "done" looks like.
Now refactor the Task API by composing patterns from earlier lessons. This phase demonstrates the core skill of spec-driven development: translating clear requirements into working code.
First, create the Pydantic models for tasks and events.
Create models.py:
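A minimal sketch of what models.py might contain; the field names, statuses, and event shape are assumptions, not the course's exact schema:

```python
# models.py -- Pydantic models for tasks and the events they emit (sketch)
from datetime import datetime, timezone
from enum import Enum
from uuid import uuid4

from pydantic import BaseModel, Field


class TaskStatus(str, Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"


class Task(BaseModel):
    # A string UUID keys the task in the Dapr state store
    id: str = Field(default_factory=lambda: str(uuid4()))
    title: str
    status: TaskStatus = TaskStatus.PENDING
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


class TaskEvent(BaseModel):
    # Published to the pub/sub topic whenever a task changes
    event_type: str
    task: Task
```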
Create the FastAPI application that uses Dapr for all infrastructure.
Create main.py:
Configure the Dapr components that your application uses.
Create components/statestore.yaml:
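A sketch of the state store component, assuming the local Redis instance the earlier chapters used:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```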
Create components/pubsub.yaml:
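A sketch of the pub/sub component; Redis doubles as the broker here, but swapping `type` to a Kafka component would leave the application untouched:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```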
Create components/secrets.yaml:
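A sketch of the secret store component, assuming the Kubernetes secret store from Chapter 9 (`secretstores.kubernetes` needs no metadata):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: secrets
spec:
  type: secretstores.kubernetes
  version: v1
```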
Apply the components:
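One way to apply them, assuming a Kubernetes cluster with Dapr installed (for local runs, `dapr run --resources-path ./components` loads the same files instead):

```shell
# Register each Dapr component with the cluster
kubectl apply -f components/statestore.yaml
kubectl apply -f components/pubsub.yaml
kubectl apply -f components/secrets.yaml

# List the registered components (-k targets Kubernetes)
dapr components -k
```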
The implementation is complete. Now verify each success criterion from the specification.
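A hedged verification pass; the API port (8000) and the task id are placeholders for whatever your create call returns:

```shell
# Create a task through the API
curl -s -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"id": "demo-1", "title": "Verify Dapr integration"}'

# Read it back through the API
curl -s http://localhost:8000/tasks/demo-1

# Confirm the write went through Dapr's state API, not a direct client
curl -s http://localhost:3500/v1.0/state/statestore/demo-1
```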
The implementation matches the specification.
This capstone demonstrated the spec-driven development pattern for Dapr integration:
The system you built includes:
The key transformation: your application code no longer knows about Redis, Kafka, or HTTP clients. It only knows about Dapr APIs. Infrastructure decisions happen in YAML configuration, not Python code.
You built a dapr-deployment skill in earlier lessons. This capstone is its ultimate test.
Ask yourself:
If you found gaps:
Prompt 1: Migrate State Management
What you're learning: The state migration pattern. You're seeing how to replace direct Redis calls with Dapr's state API while gaining features like automatic concurrency control. The abstraction prepares you for swapping backends without code changes.
Prompt 2: Add Pub/Sub Integration
What you're learning: Event-driven patterns with Dapr. You're composing pub/sub with state management, seeing how events flow between services through Dapr's abstraction layer rather than direct broker connections.
Prompt 3: Deploy with Sidecar Verification
What you're learning: Production deployment patterns. Sidecar injection via annotations is the key insight: your application doesn't install Dapr; Kubernetes injects it alongside your container. Verification confirms the distributed system is functioning correctly.
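The injection is driven by a few annotations on the pod template. A sketch, assuming an app named `task-api` listening on port 8000 (the image tag is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: task-api
  template:
    metadata:
      labels:
        app: task-api
      annotations:
        dapr.io/enabled: "true"     # tells the Dapr injector to add the sidecar
        dapr.io/app-id: "task-api"  # identity for service invocation and pub/sub
        dapr.io/app-port: "8000"    # port the sidecar forwards requests to
    spec:
      containers:
      - name: task-api
        image: task-api:latest
        ports:
        - containerPort: 8000
```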
Safety note: When migrating production services to Dapr, run both implementations in parallel during the transition. Direct clients and Dapr can coexist temporarily. Never migrate all services simultaneously; moving one service at a time reduces the blast radius.