You've spent this chapter building event-driven system components piece by piece: producers with reliability guarantees, consumers with proper offset management, Avro schemas with Schema Registry, FastAPI integration patterns, and agent event designs. Now it's time to compose these skills into a complete system.
This capstone follows the spec-driven development approach you've seen throughout this book. You'll write a specification first, then implement it by composing the patterns you've learned. This mirrors how production AI agents are built: specify intent clearly, then orchestrate implementation using accumulated skills.
The goal is practical: add event-driven notifications to the Task API from Part 6. When a task is created, updated, or completed, the system should publish events that trigger notifications and create an immutable audit trail. By the end, you'll have a working event-driven notification system that demonstrates the patterns professional teams use in production.
Before writing any code, define precisely what you're building. A clear specification enables focused implementation and provides acceptance criteria for validation.
Every capstone specification answers the same questions: what are you building, why does it matter, how will you verify it works, and what is explicitly out of scope?
Notice what the specification provides:
Clear intent: Not "add Kafka to Task API" but "enable decoupled notifications with audit trail"
Measurable criteria: Each success criterion is a checkbox that can be verified
Explicit boundaries: Non-goals prevent scope creep
Visual architecture: The diagram clarifies component relationships
When you work with AI to implement this specification, you have a shared understanding of what "done" looks like.
Now implement the specification by composing patterns from earlier chapters. This phase demonstrates the core skill of spec-driven development: translating clear requirements into working code.
First, implement the event schema and producer integration for the Task API.
Create events/schemas.py:
Output:
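As a rough sketch of the shape such a module can take, here is an event schema built from stdlib dataclasses. Every name and field (TaskEvent, TaskEventType, occurred_at, and so on) is an illustrative assumption, not the chapter's exact schema:

```python
# events/schemas.py -- illustrative sketch; names and fields are assumptions
import json
import time
import uuid
from dataclasses import asdict, dataclass, field
from enum import Enum


class TaskEventType(str, Enum):
    CREATED = "task.created"
    UPDATED = "task.updated"
    COMPLETED = "task.completed"


@dataclass
class TaskEvent:
    event_type: TaskEventType
    task_id: str
    title: str
    # unique id and timestamp let consumers deduplicate and order events
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: float = field(default_factory=time.time)

    def to_json(self) -> bytes:
        """Serialize for use as the Kafka message value."""
        d = asdict(self)
        d["event_type"] = self.event_type.value
        return json.dumps(d).encode("utf-8")
```

A production version would typically be an Avro schema registered with Schema Registry, as covered earlier in the chapter; the JSON form here just keeps the sketch dependency-free.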
Create a reliable publisher that integrates with FastAPI's lifespan.
Create events/publisher.py:
Output:
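One way to structure the publisher is to inject the concrete Kafka client (for example aiokafka's `AIOKafkaProducer`, created during the FastAPI lifespan), which also keeps the class testable without a running broker. A minimal sketch, with all names illustrative:

```python
# events/publisher.py -- illustrative sketch; the concrete producer
# (e.g. aiokafka.AIOKafkaProducer) is injected rather than hard-coded
class TaskEventPublisher:
    def __init__(self, producer, topic: str = "task-events"):
        # producer must expose async start/stop/send_and_wait
        self._producer = producer
        self._topic = topic

    async def start(self) -> None:
        await self._producer.start()

    async def stop(self) -> None:
        # flush and close cleanly on application shutdown
        await self._producer.stop()

    async def publish(self, key: str, value: bytes) -> None:
        # key by task_id so all events for one task land in one partition,
        # preserving per-task ordering
        await self._producer.send_and_wait(self._topic, value, key=key.encode())
```

In production the injected producer would carry the reliability settings from earlier chapters, e.g. `acks="all"` and `enable_idempotence=True`.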
Connect the publisher to FastAPI endpoints.
Update main.py:
Output:
Create a consumer that processes task events for notifications.
Create services/notification_service.py:
Output:
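One way to split the service: pure decision logic (which events notify, and with what message) kept separate from the consume loop, which assumes only an async-iterable consumer such as aiokafka's `AIOKafkaConsumer` subscribed to the task events topic. All names here are illustrative:

```python
# services/notification_service.py -- illustrative sketch
import json
from typing import Optional


def build_notification(event: dict) -> Optional[str]:
    """Pure decision logic: which events produce a user-facing message."""
    messages = {
        "task.created": "Task created",
        "task.completed": "Task completed",
    }
    label = messages.get(event.get("event_type"))
    if label is None:
        return None  # e.g. updates might not warrant a notification
    return f"{label}: {event.get('title', event.get('task_id', '?'))}"


async def run(consumer, send=print):
    """consumer: any async iterable of messages with a bytes .value,
    e.g. an AIOKafkaConsumer. send: delivery callback (email, push, ...)."""
    async for msg in consumer:
        event = json.loads(msg.value)
        note = build_notification(event)
        if note is not None:
            send(note)
```

Keeping `build_notification` free of I/O makes the notification rules trivially unit-testable, while the loop stays a thin shell around the consumer.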
Create a consumer that logs all events to an immutable audit trail.
Create services/audit_service.py:
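The audit write path can be sketched as append-only JSON-lines records; a real system might write to object storage or a database instead, and `append_audit_record` and its fields are illustrative assumptions:

```python
# services/audit_service.py -- illustrative sketch of the write path
import json
from pathlib import Path


def append_audit_record(log_path: Path, event: dict, offset: int) -> None:
    """Append one event as a JSON line. Append-only writes give an
    immutable-in-practice trail that is easy to grep and replay; storing
    the Kafka offset ties each record back to its source message."""
    record = {"kafka_offset": offset, **event}
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
```

The consume loop around this would mirror the notification service's, with one difference in intent: the audit consumer records every event type rather than filtering.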
Phase 3: Validate Against Specification
The implementation is complete. Now verify each success criterion from the specification.
SC-1: Task Events Published
SC-2: Notification Service Consumes Events
SC-3: Audit Service Consumes Events
SC-4: End-to-End Flow Verified
All success criteria from the specification have been verified: task events are published, the notification and audit consumers each process them independently, and the end-to-end flow works.
The implementation matches the specification.
This capstone demonstrated the spec-driven development pattern for event-driven systems: specify intent and success criteria first, implement by composing patterns from earlier chapters, then validate against the specification. The system you built includes a reliable publisher wired into the Task API's lifecycle, a notification consumer, and an audit consumer that maintains an immutable event trail.
These patterns compose into production event-driven architectures. The same structure scales to dozens of consumers processing millions of events.
Use AI to extend and refine your capstone implementation.
Prompt 1: Add Schema Registry Integration
What you're learning: Schema Registry integration adds type safety and evolution management. This is the pattern production systems use to prevent schema drift between producers and consumers.
Prompt 2: Add Dead Letter Queue
What you're learning: Production consumers need failure handling. Dead letter queues capture problematic events for investigation without blocking the main consumer.
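As a starting point before prompting, the core of a dead letter queue can be sketched as a wrapper around the consume loop; `process_with_dlq`, the envelope fields, and the `task-events.dlq` topic name are all illustrative assumptions, with the consumer and DLQ producer injected as in the earlier sketches:

```python
# dead letter handling -- illustrative sketch; clients are injected
import json


async def process_with_dlq(consumer, handler, dlq_producer,
                           dlq_topic: str = "task-events.dlq"):
    """Route events that fail processing to a dead letter topic instead of
    crashing or blocking the main consumer loop."""
    async for msg in consumer:
        try:
            handler(json.loads(msg.value))
        except Exception as exc:
            # preserve the original payload plus failure context
            # so the event can be investigated and replayed later
            envelope = {
                "error": str(exc),
                "original": msg.value.decode("utf-8", "replace"),
            }
            await dlq_producer.send_and_wait(
                dlq_topic, json.dumps(envelope).encode()
            )
```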
Prompt 3: Improve the Specification
What you're learning: Specifications evolve through iteration. Reviewing your spec after implementation reveals gaps and improvements for future systems.
Safety Note: When testing event-driven systems, always verify consumer offsets are committed correctly before stopping services. Uncommitted offsets can cause duplicate processing on restart. Use kafka-consumer-groups.sh --describe to check consumer group state before any planned maintenance.