Module 7 takes the agent you built in Module 6 and turns it into a production cloud service. You'll containerize the stack, orchestrate it on Kubernetes, automate delivery, and operate it with observability, security, and cost controls. The goal: a reliable Digital FTE that runs 24/7 for real users.
Prerequisites: Modules 4-6. You need a working agent service to deploy.
Your Task API needs to clean up completed todos older than 30 days. You could run a cron job on your server, but what happens when you scale to multiple pods? Three pods means three cleanup runs. You need exactly-once scheduled execution across your distributed system.
This is what Dapr's Jobs API solves. Instead of managing cron jobs in your infrastructure, you schedule jobs through Dapr. The Scheduler service tracks when jobs should run and triggers your application at the right time. Whether you have one pod or twenty, the job runs once.
In this lesson, you'll schedule a daily cleanup job for your Task API and implement the handler that receives the trigger. You'll also learn when to use Jobs API versus bindings, since both can trigger your application on a schedule.
Both Jobs API and input bindings can trigger your application on a schedule. The difference is who controls the schedule:
Use Jobs API when:
- Your application decides the schedule at runtime and needs to create, update, or delete jobs programmatically
- Each job carries its own data payload that should arrive with the trigger
- You need one-time jobs or a limited number of repeats
Use Input Bindings when:
- The schedule is static and defined in component YAML at deploy time
- An external system (a queue, storage event, or other trigger) drives your application rather than your own code
For the Task API cleanup, Jobs API makes sense because your application knows when cleanup should run and can adjust the schedule programmatically.
The Jobs API is backed by the Scheduler service, which runs as part of the Dapr control plane alongside the sidecar injector, operator, and sentry.
Key architecture facts:
Dapr accepts two formats for job schedules:
Simple patterns for common intervals:
More precise control with systemd-style cron expressions (note: Dapr uses 6 fields, with a leading seconds field, not the usual 5):
Examples:
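To make the two formats concrete, here is an illustrative set of schedule expressions. The exact set of supported `@`-shortcuts and cron features is version-dependent, so check the docs for your Dapr release:

```python
# Illustrative Dapr schedule expressions (support may vary by Dapr version).
# Simple "@"-prefixed patterns:
SIMPLE_SCHEDULES = {
    "@every 30s": "every 30 seconds",
    "@every 1h30m": "every 90 minutes",
    "@hourly": "at the top of every hour",
    "@daily": "once a day, at midnight",
}

# 6-field cron: second minute hour day-of-month month day-of-week
CRON_SCHEDULES = {
    "0 0 2 * * *": "every day at 02:00:00",
    "0 30 9 * * 1-5": "weekdays at 09:30:00",
    "0 */15 * * * *": "every 15 minutes, at second 0",
}
```

The leading seconds field is the easy one to miss: a 5-field crontab expression pasted in unchanged will be interpreted with its first field as seconds.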
The Jobs API uses HTTP endpoints on the Dapr sidecar. Here's how to schedule a daily cleanup job.
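As a sketch, assuming the default sidecar HTTP port 3500 and the alpha endpoint path (both may differ in your setup), registering the cleanup job could look like this:

```python
import json
import urllib.request

DAPR_HTTP_PORT = 3500  # default sidecar HTTP port; adjust if yours differs

def build_schedule_request(name: str, schedule: str, data: dict) -> urllib.request.Request:
    """Build the POST that registers a job with the Dapr sidecar.

    Uses the alpha endpoint /v1.0-alpha1/jobs/{name}; the payload fields
    ("schedule", "data") follow the alpha API and may change in later releases.
    """
    body = json.dumps({"schedule": schedule, "data": data}).encode()
    return urllib.request.Request(
        f"http://localhost:{DAPR_HTTP_PORT}/v1.0-alpha1/jobs/{name}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Sending the request requires a running Dapr sidecar.
    req = build_schedule_request("daily-cleanup", "@daily", {"older_than_days": 30})
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```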
One-time job (runs once at a specific time):
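A sketch of the request body for a one-shot job. In the alpha API, `dueTime` accepts an RFC 3339 timestamp or a Go-style duration; treat the exact field names as version-dependent:

```python
# One-time job: "dueTime" with no "schedule" fires the job once.
one_time_job = {
    "dueTime": "2026-01-01T02:00:00Z",  # or a duration such as "10m"
    "data": {"reason": "one-off cleanup"},
}
```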
Recurring job with limited repeats:
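A sketch of a capped recurring job, assuming the alpha API's `repeats` field:

```python
# Recurring job capped by "repeats": after the given number of
# triggers, the Scheduler stops firing the job.
limited_job = {
    "schedule": "@every 6h",
    "repeats": 4,  # fire four times in total, then stop
    "data": {"older_than_days": 30},
}
```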
When a scheduled job triggers, Dapr sends a POST request to your application at /job/{job-name}. You implement this endpoint to handle the job.
Your handler must return a response indicating success or failure:
You can schedule multiple jobs and route them to different handlers:
Here's a complete FastAPI application that schedules a cleanup job on startup and handles the trigger:
To change a job's schedule, delete and recreate it, or use the overwrite parameter:
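The delete-and-recreate path can be sketched as two requests against the same job URL (default port and alpha endpoint assumed):

```python
import json
import urllib.request

JOB_URL = "http://localhost:3500/v1.0-alpha1/jobs/daily-cleanup"

def build_delete_request() -> urllib.request.Request:
    """DELETE removes the existing job definition from the Scheduler."""
    return urllib.request.Request(JOB_URL, method="DELETE")

def build_recreate_request(schedule: str) -> urllib.request.Request:
    """POST registers the job again with the new schedule."""
    body = json.dumps(
        {"schedule": schedule, "data": {"older_than_days": 30}}
    ).encode()
    return urllib.request.Request(
        JOB_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```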
You built a dapr-deployment skill in earlier lessons. Test and improve it based on what you learned.
Ask yourself:
If you found gaps:
Prompt 1: Schedule a Daily Cleanup
What you're learning: This prompt practices the complete Jobs API workflow: scheduling a job with a human-readable expression and implementing the handler that receives the trigger. You'll see how the job data flows from schedule time to trigger time.
Prompt 2: Jobs API vs Cron Bindings
What you're learning: Understanding when to use Jobs API vs bindings prevents architectural mistakes. AI will explain that Jobs API is for application-controlled scheduling while bindings are for static schedules or external triggers.
Prompt 3: Implement a Job Handler
What you're learning: Job handlers need proper error handling and response formatting. AI will show you how to structure handlers that correctly communicate success or failure back to Dapr.
Safety Note: The Jobs API is currently in alpha (v1.0-alpha1). As with any alpha API, the endpoint paths and request shape may change in future Dapr versions. Pin your Dapr version in production and test thoroughly when upgrading.