In the Docker Desktop Kubernetes chapter, you used NodePort services to access your applications from outside the cluster. NodePort worked because Docker Desktop simulates external access on your laptop. But NodePort has limitations: services are exposed on high port numbers (30000-32767), and there's no real load balancer distributing traffic.
On a real cloud provider, you get something better: LoadBalancer services that provision actual cloud load balancers. When you create a LoadBalancer service on DigitalOcean, Kubernetes talks to the DigitalOcean API and creates a real load balancer with a public IP address. Your service becomes accessible at standard ports (80, 443) without any port mapping tricks.
This lesson covers how LoadBalancer services work differently in the cloud, how to watch external IP provisioning, how to configure DNS to point your domain at your service, and what this costs.
In earlier chapters, you learned three Service types: ClusterIP (internal), NodePort (external via high ports), and LoadBalancer (external via a cloud load balancer). On Docker Desktop, LoadBalancer services behave like NodePort because there's no cloud provider to provision a real load balancer.
On DigitalOcean DOKS (or any cloud Kubernetes), the LoadBalancer type triggers cloud provider integration: the cluster's cloud-controller-manager watches for Services of type LoadBalancer, calls the DigitalOcean API to provision a real load balancer in front of your worker nodes, and records its public IP on the service.
The key insight: LoadBalancer is the production pattern because it provides a stable public IP, handles traffic distribution, and works with standard ports.
Here's a LoadBalancer service for your Task API:
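A minimal manifest might look like this. The service name `task-api-lb`, the `app: task-api` selector label, and container port 8080 are assumptions carried over from earlier chapters, so adjust them to match your Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: task-api-lb
spec:
  type: LoadBalancer        # triggers cloud load balancer provisioning on DOKS
  selector:
    app: task-api           # must match your Deployment's pod labels
  ports:
    - port: 80              # public port exposed by the load balancer
      targetPort: 8080      # container port the Task API listens on
```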
Save this as task-api-lb.yaml and apply it:
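Assuming the manifest is saved under that file name:

```bash
kubectl apply -f task-api-lb.yaml
```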
Output:
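For a new service, kubectl confirms the creation (the name comes from the manifest's `metadata.name`, assumed here to be `task-api-lb`):

```
service/task-api-lb created
```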
When you create a LoadBalancer service on a cloud cluster, the external IP doesn't appear immediately. Kubernetes requests a load balancer from the cloud provider, and provisioning takes 30-60 seconds.
Watch the service to see the transition:
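The `--watch` flag keeps the command running and prints a new line whenever the service changes (the service name `task-api-lb` is an assumption from the manifest above):

```bash
kubectl get service task-api-lb --watch
```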
Output (initial state):
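Typical initial output; the CLUSTER-IP and node port values are illustrative:

```
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
task-api-lb   LoadBalancer   10.245.100.200   <pending>     80:31234/TCP   8s
```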
The <pending> status means Kubernetes has requested a load balancer from DigitalOcean, but it's still being provisioned. Keep watching:
Output (after ~45 seconds):
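Once DigitalOcean finishes provisioning, the watch prints an updated line (IP illustrative):

```
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
task-api-lb   LoadBalancer   10.245.100.200   143.198.x.x   80:31234/TCP   52s
```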
The EXTERNAL-IP column now shows a real public IP address. Your service is accessible at http://143.198.x.x (your actual IP will differ).
Press Ctrl+C to stop watching.
When you created the LoadBalancer service, the cloud-controller-manager detected the new Service of type LoadBalancer, requested a load balancer from the DigitalOcean API, configured it to forward traffic to your worker nodes, and wrote the resulting public IP into the service's status once provisioning finished.
This integration is automatic. You don't manage the load balancer directly—Kubernetes does it for you.
You can see the load balancer in the DigitalOcean console (Control Panel -> Networking -> Load Balancers) or via doctl:
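With doctl authenticated against your account:

```bash
doctl compute load-balancer list
```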
Output:
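Abridged output (columns trimmed; the ID, generated name, and IP will differ in your account):

```
ID                                      IP            Name                              Status
f3a9c1d2-...                            143.198.x.x   a1b2c3d4e5f64f0e9d8c7b6a5f4e3d2c  active
```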
The load balancer was automatically created and named based on your service.
Each LoadBalancer service creates a separate cloud load balancer. On DigitalOcean, regional HTTP load balancers cost $12 per month per node (starting with 1 node). This adds up quickly:
For a single service, $12/month is reasonable. But if you have 10 microservices each with a LoadBalancer, you're paying $120/month just for load balancers—often more than your compute costs.
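The cost comparison is simple multiplication; a quick shell sanity check (the $12/node figure is DigitalOcean's smallest regional load balancer tier at the time of writing):

```shell
per_lb=12        # $/month for a 1-node DigitalOcean regional load balancer
services=10      # one dedicated LoadBalancer per microservice

echo "Dedicated LBs: \$$((services * per_lb))/month"   # 10 services x $12
echo "Single Ingress LB: \$${per_lb}/month"            # one shared LB
```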
In production, teams use Ingress controllers (covered in later chapters) to expose multiple services through a single load balancer. Instead of three LoadBalancers ($36/mo), you run one LoadBalancer fronting an Ingress controller that routes to three ClusterIP services ($12/mo).
For this chapter, you'll use a LoadBalancer for simplicity. Just remember: production deployments typically use Ingress to consolidate costs.
Your LoadBalancer has a public IP (e.g., 143.198.x.x), but users access applications via domain names, not IP addresses. You need DNS to map your domain to the external IP.
If you own a domain (e.g., example.com), create an A-record pointing to your LoadBalancer IP:
Log into your DNS provider (DigitalOcean, Cloudflare, Route53, GoDaddy, etc.)
Create an A-record with hostname api, value 143.198.x.x (your LoadBalancer IP), and TTL 300.
Result: api.example.com resolves to 143.198.x.x
Example in DigitalOcean DNS:
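If the domain is managed by DigitalOcean DNS, doctl can create the same record from the command line (the domain, hostname, and IP are the placeholder values used in this lesson):

```bash
doctl compute domain records create example.com \
  --record-type A \
  --record-name api \
  --record-data 143.198.x.x \
  --record-ttl 300
```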
If you don't have a domain or want quick testing, use a wildcard DNS service such as nip.io or sslip.io. These services resolve any hostname containing an IP address back to that IP.
With your LoadBalancer IP 143.198.x.x, you can immediately access hostnames like 143.198.x.x.nip.io or task-api.143.198.x.x.nip.io; both resolve straight back to 143.198.x.x.
No DNS configuration required. The nip.io service extracts the IP from the hostname and returns it.
Limitations of wildcard DNS: the hostname embeds a raw IP, so it isn't memorable and changes whenever the IP changes; you depend on a free third-party service with no uptime guarantee; and some corporate resolvers block these domains.
Use nip.io/sslip.io for testing. Use real domains for production.
After creating DNS records, verify they propagate correctly using dig or nslookup.
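Query the record directly:

```bash
dig api.example.com A
```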
Output:
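The relevant part of dig's response (the TTL and IP are the example values from this lesson):

```
;; ANSWER SECTION:
api.example.com.    300    IN    A    143.198.x.x
```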
The ANSWER SECTION shows your A-record resolving to the LoadBalancer IP.
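nslookup performs the same check with a shorter answer:

```bash
nslookup api.example.com
```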
Output:
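Example nslookup output; the Server and first Address lines show whichever resolver your machine uses:

```
Server:     192.168.1.1
Address:    192.168.1.1#53

Non-authoritative answer:
Name:    api.example.com
Address: 143.198.x.x
```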
If DNS doesn't resolve immediately:
Wait for propagation: DNS changes can take up to TTL seconds (plus caching). With TTL 300, wait 5 minutes.
Flush local DNS cache:
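The flush command depends on your operating system; common variants:

```bash
# macOS
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

# Linux with systemd-resolved
sudo resolvectl flush-caches

# Windows
ipconfig /flushdns
```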
Check with a public DNS server:
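Querying Google's public resolver at 8.8.8.8 bypasses your local cache entirely:

```bash
dig @8.8.8.8 api.example.com +short
```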
Verify the record exists at your DNS provider using their control panel.
Here's the full sequence for making your Task API accessible at a real domain:
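Sketched end to end as a shell session; the manifest name, domain, and request path are the example values used throughout this lesson:

```bash
# 1. Create the LoadBalancer service
kubectl apply -f task-api-lb.yaml

# 2. Watch until EXTERNAL-IP changes from <pending> to a real IP
kubectl get service task-api-lb --watch

# 3. At your DNS provider, create an A-record: api.example.com -> <EXTERNAL-IP>

# 4. Verify the record resolves to the LoadBalancer IP
dig +short api.example.com

# 5. Request the API through the domain (path depends on your Task API's routes)
curl http://api.example.com/
```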
Wait until EXTERNAL-IP shows a real IP (not <pending>).
In your DNS provider, create an A-record mapping api.example.com to that external IP.
Confirm the domain resolves to your LoadBalancer IP, then request the API through the domain. When curl returns your Task API's familiar response, the service is accessible via a real domain name.
You've learned how LoadBalancer services provision cloud load balancers and how DNS connects domains to IPs. Now practice with your AI companion.
What you're learning: Systematic debugging of cloud controller integration issues—permissions, quotas, and cloud provider errors.
What you're learning: Cost-aware architecture design. Understanding when to consolidate load balancers vs when dedicated LBs make sense.
What you're learning: Multi-environment DNS patterns and the operational considerations for managing multiple cluster endpoints.
When configuring DNS, remember that changes propagate globally. Use low TTL values (300 seconds) during initial setup so you can fix mistakes quickly. For production, increase TTL to reduce DNS query load.
Test your multi-cloud-deployer skill with what you learned about LoadBalancer services and DNS.
Ask yourself: does your skill explain how LoadBalancer services provision cloud load balancers, how to watch for the external IP, and how to point DNS at it?
If you found gaps, update the skill with the patterns from this lesson: LoadBalancer provisioning, DNS configuration, and consolidating costs with Ingress.
By the end of this chapter, your skill will be a comprehensive cloud deployment reference.