Throughout this Sub-module, you've built cloud deployment knowledge step by step: cluster provisioning with doctl and hetzner-k3s, load balancer configuration, stack deployment with Dapr and Traefik, secrets management, and production verification. Now it's time to apply everything to a complete production scenario.
In previous chapters, you deployed components individually and tested each piece in isolation. But production deployment isn't about individual commands succeeding on their own. Production deployment is about the complete sequence: from empty cloud account to working HTTPS endpoint serving real traffic.
This capstone brings it all together. You'll write a deployment specification FIRST, provision real infrastructure, deploy your complete stack, verify with HTTPS, and execute a clean teardown. The specification-first approach is what separates professional cloud engineering from Vibe Coding.
The result: your multi-cloud-deployer skill becomes production-ready. It's not a theoretical exercise. It's a verified, tested Digital FTE component you can use for real deployments.
Before touching any CLI tools, write your specification. This forces you to think about environment choices, resource requirements, success criteria, and cost constraints before any infrastructure exists.
Create this file in your project directory:
File: deployment-spec.md
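The original file contents aren't reproduced here; as a reference point, a minimal sketch of what such a specification might contain follows. Every value below (provider, node sizes, budget) is illustrative; replace each with your own choices.

```markdown
# Deployment Specification: task-api

## Environment
- Provider: DigitalOcean | Hetzner | Azure (choose one path below)
- Region: closest to your expected users

## Resources
- Cluster: 3 nodes, ~2 vCPU / 4 GB RAM each
- Ingress: Traefik behind a cloud Load Balancer
- TLS: cert-manager with the Let's Encrypt production issuer

## Success Criteria
- [ ] `kubectl get nodes` shows 3 Ready nodes
- [ ] All pods Running, Dapr sidecars injected
- [ ] HTTPS endpoint returns 200 with a valid certificate
- [ ] Teardown leaves zero billable resources

## Cost Constraint
- Maximum monthly spend: $50
```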
Why specification first?
Without a spec, you'd start running doctl kubernetes cluster create and figure things out as you go. That's Vibe Coding. You might choose the wrong node size. You might forget TLS configuration until you're debugging certificate errors at 2 AM. You might deploy to the most expensive region.
The spec makes constraints explicit BEFORE you provision. It's your contract with yourself and your budget.
This capstone supports three deployment paths. Each path produces identical outcomes with different providers and cost profiles.
Best for: Teams, real traffic, managed SLA
Monthly cost: ~$48+ (3-node cluster minimum)
Provisioning command:
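A representative invocation looks like the following; the cluster name, region, and node size are assumptions drawn from the cost profile above, so adjust them to match your specification.

```shell
# Provision a 3-node DigitalOcean Kubernetes (DOKS) cluster.
# Name, region, and node size are illustrative.
doctl kubernetes cluster create task-api-prod \
  --region nyc1 \
  --version latest \
  --node-pool "name=workers;size=s-2vcpu-4gb;count=3"
```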
Output:
Connect and verify:
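Assuming the cluster name used at provisioning time, connecting looks like this:

```shell
# Merge the cluster's credentials into ~/.kube/config
doctl kubernetes cluster kubeconfig save task-api-prod

# Confirm all 3 nodes report Ready
kubectl get nodes
```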
Output:
Best for: Personal practice, budget-conscious learners
Monthly cost: ~$15 (3x CX22 servers)
Provisioning command:
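A sketch of the hetzner-k3s workflow follows. The config schema varies between hetzner-k3s releases, so the field names below are assumptions; check the tool's documentation for your installed version. Instance types and locations are illustrative.

```shell
# Write the cluster definition (1 master + 2 workers = 3x CX22),
# then provision. Field names may differ by hetzner-k3s version.
cat > cluster_config.yaml <<'EOF'
hetzner_token: YOUR_API_TOKEN
cluster_name: task-api-prod
kubeconfig_path: "./kubeconfig"
k3s_version: v1.29.4+k3s1
masters_pool:
  instance_type: cx22
  instance_count: 1
  location: fsn1
worker_node_pools:
  - name: workers
    instance_type: cx22
    instance_count: 2
    location: fsn1
EOF

hetzner-k3s create --config cluster_config.yaml
```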
Output:
Connect and verify:
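Assuming the kubeconfig path set in the config file above:

```shell
# hetzner-k3s writes the kubeconfig to the path named in the config
export KUBECONFIG=./kubeconfig

# Expect 1 master + 2 workers, all Ready
kubectl get nodes
```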
Output:
Best for: Enterprise environments, Azure ecosystem integration
Monthly cost: ~$75+ (3-node cluster)
Provisioning command:
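A representative AKS invocation follows; resource group name, location, and VM size are assumptions to adapt to your specification.

```shell
# Create a resource group, then the AKS cluster inside it
az group create --name task-api-rg --location eastus

az aks create \
  --resource-group task-api-rg \
  --name task-api-prod \
  --node-count 3 \
  --node-vm-size Standard_D2s_v3 \
  --generate-ssh-keys
```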
Connect and verify:
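Assuming the resource group and cluster names used above:

```shell
# Merge AKS credentials into your kubeconfig, then verify
az aks get-credentials --resource-group task-api-rg --name task-api-prod
kubectl get nodes
```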
With your cluster provisioned, deploy the complete production stack. This sequence is universal across all providers.
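One plausible install sequence, assuming the Dapr CLI and Helm are available locally. Note that k3s bundles Traefik by default, so the Helm step applies only to DOKS and AKS.

```shell
# Install the Dapr control plane into the cluster
dapr init -k --wait

# Install Traefik as the ingress controller
# (skip on k3s, which ships Traefik preinstalled)
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
  --namespace traefik --create-namespace
```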
Output:
Verify:
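Two complementary checks, one via the Dapr CLI and one via kubectl:

```shell
# Dapr control-plane health summary
dapr status -k

# Every dapr-system pod should be Running
kubectl get pods -n dapr-system
```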
Output:
Output:
Get the Load Balancer IP:
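The Service name and namespace depend on how Traefik was installed: `traefik` in the `traefik` namespace for the Helm chart, or in `kube-system` on k3s.

```shell
# Print only the external IP of Traefik's LoadBalancer Service
kubectl get svc traefik -n traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```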
Output:
Record this IP for DNS configuration.
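The next stripped step installs cert-manager. A typical install uses the project's static manifest; the version tag below is an example, so check the cert-manager releases page for the current one.

```shell
# Install cert-manager (the static manifest includes the CRDs)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml

# Wait until the three cert-manager pods are Running
kubectl get pods -n cert-manager
```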
Output:
File: cluster-issuer.yaml
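The original manifest isn't shown here; a standard Let's Encrypt production ClusterIssuer looks like this. The email address is a placeholder you must replace.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@yourdomain.com        # replace with a real address
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: traefik
```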
Apply:
Output:
File: task-api-deployment.yaml
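A sketch of what this manifest likely contains, given the Dapr-based stack described earlier. The image reference, container port, and Dapr app-id are placeholders; substitute your own.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: task-api
  template:
    metadata:
      labels:
        app: task-api
      annotations:
        dapr.io/enabled: "true"      # inject the Dapr sidecar
        dapr.io/app-id: "task-api"
        dapr.io/app-port: "8000"
    spec:
      containers:
        - name: task-api
          image: registry.example.com/task-api:latest  # placeholder
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: task-api
spec:
  selector:
    app: task-api
  ports:
    - port: 80
      targetPort: 8000
```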
Apply:
Output:
File: task-api-ingress.yaml
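A representative Ingress wiring Traefik to the Service above and requesting a certificate from the ClusterIssuer; the hostname is a placeholder for your own domain.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  rules:
    - host: api.yourdomain.com       # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: task-api
                port:
                  number: 80
  tls:
    - hosts:
        - api.yourdomain.com
      secretName: task-api-tls       # cert-manager stores the cert here
```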
Apply:
Output:
Real deployments rarely work perfectly on the first try. This phase demonstrates the iterative refinement process that produces working production systems.
After applying all resources, check status:
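A quick status sweep across the resources involved:

```shell
# Workload, ingress, and certificate state at a glance
kubectl get pods,svc,ingress -n default
kubectl get certificate -A

# Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp | tail -n 20
```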
Common issues and their indicators:
For image pull issues:
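Assuming the `app=task-api` label from the Deployment sketch earlier:

```shell
# The Events section at the bottom names the exact pull error
# (e.g. ErrImagePull, "unauthorized", "manifest not found")
kubectl describe pod -l app=task-api
```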
If you see "unauthorized," create an image pull secret:
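All registry values below are placeholders for your own:

```shell
# Store private-registry credentials in the cluster
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_API_TOKEN

# Reference the secret from the Deployment's pod spec
kubectl patch deployment task-api -p \
  '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
```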
For DNS issues:
Point your domain's A record to the Load Balancer IP. Verify propagation:
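With your own hostname substituted:

```shell
# Should print the Load Balancer IP you recorded earlier
dig +short api.yourdomain.com
```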
Output:
For certificate issues:
Check cert-manager logs:
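The certificate name below matches the `secretName`/Certificate from the Ingress sketch; adjust to yours.

```shell
kubectl logs -n cert-manager deploy/cert-manager --tail=50

# The status conditions explain ACME failures in plain language
kubectl describe certificate task-api-tls

# Any pending ACME challenges (HTTP-01 reachability problems show here)
kubectl get challenges -A
```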
When all issues are resolved:
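An end-to-end verification, assuming your domain in place of the placeholder (the request path depends on your API):

```shell
# A 200 response over HTTPS with no TLS errors means the full
# chain works: DNS -> Load Balancer -> Traefik -> task-api
curl -i https://api.yourdomain.com/

# Inspect the certificate Traefik serves (issuer should be Let's Encrypt)
curl -vI https://api.yourdomain.com/ 2>&1 | grep -iE 'issuer|subject|HTTP'
```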
Output:
Output:
Output:
Go back to your specification and verify each success criterion:
Production capstones aren't complete without teardown. You must prove zero ongoing costs.
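One teardown command per path, assuming the names used in the provisioning sketches above. Run only the commands for the provider you chose.

```shell
# DigitalOcean: delete the cluster, then check for orphaned
# Load Balancers created by the Traefik Service
doctl kubernetes cluster delete task-api-prod
doctl compute load-balancer list

# Hetzner: tears down the servers defined in the same config file
hetzner-k3s delete --config cluster_config.yaml

# Azure: deleting the resource group removes the cluster and
# every associated resource
az aks delete --resource-group task-api-rg --name task-api-prod --yes
az group delete --name task-api-rg --yes
```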
Output:
Output:
Check your provider dashboard:
Update your specification checklist:
All criteria met. Specification satisfied.
Your multi-cloud-deployer skill has been tested and refined throughout this Sub-module. Evaluate its production readiness:
Total: /100
Scoring guide:
Your multi-cloud-deployer skill has evolved through this Sub-module. It started as a skeleton created from official documentation. Now it's been tested against real cloud infrastructure.
Final Test: Ask your skill:
Evaluate the output:
Your skill is production-ready when:
This skill is now part of your Digital FTE portfolio.
You don't just "know cloud deployment." You OWN a verified, production-tested skill that can deploy AI agent services to any major cloud provider. This is the outcome of the Skill-First Learning Pattern: not knowledge, but assets.
You've completed the capstone by following the specification-first approach. Now extend your deployment skills through AI collaboration.
Prompt 1: Specification Review
What you're learning: AI can review specifications and identify blind spots. It might suggest failover strategies, backup procedures, or monitoring configurations you hadn't considered. You evaluate each suggestion against your actual production requirements.
Prompt 2: Multi-Cloud Comparison
What you're learning: The "provision -> connect -> deploy" pattern is universal, but provisioning commands differ. AI helps you understand which skills transfer directly and which require adaptation.
Prompt 3: Production Hardening
What you're learning: A working deployment isn't a hardened deployment. AI helps you identify the gap between "it runs" and "it's production-ready." You evaluate each recommendation against your application's actual risk profile.
Safety note: When sharing deployment specifications with AI, redact actual domain names, IP addresses, and cloud credentials. Replace real values with placeholders like yourdomain.com or YOUR_API_TOKEN.