Your Task API is deployed to Kubernetes. Run this command:
Output:
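A sketch of the command and its output (the Service name `task-api` is an assumption; the ClusterIP and port match the values discussed in this lesson):

```shell
kubectl get service task-api

# NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
# task-api   ClusterIP   10.96.45.123   <none>        8000/TCP   2m
```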
Now try accessing it from your browser at http://10.96.45.123:8000. What happens?
Nothing. The page never loads.
That ClusterIP address exists only inside the Kubernetes cluster. Your browser, running outside the cluster on your laptop, cannot reach it. The IP 10.96.45.123 is meaningless to your operating system's network stack.
This lesson answers a fundamental question: How do external users reach services running inside Kubernetes?
Kubernetes networking creates a private network inside the cluster. Every Pod gets an IP address, and Services provide stable endpoints for groups of Pods. But this private network is invisible to the outside world.
Think of it like a corporate office building: employees reach each other by dialing internal extension numbers, but those extensions mean nothing outside the building. An outside caller needs the company's public phone number, and reception routes the call to the right extension.
Kubernetes Services are the internal extension numbers. External users need a public entry point.
You've probably used this already:
Output:
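A sketch, assuming the Service is named `task-api`:

```shell
kubectl port-forward service/task-api 8000:8000

# Forwarding from 127.0.0.1:8000 -> 8000
# Forwarding from [::1]:8000 -> 8000
```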
Now http://localhost:8000 works in your browser. But ask yourself: What happens when you close that terminal?
The forwarding stops. Your service becomes unreachable again.
kubectl port-forward tunnels traffic from your local machine through the Kubernetes API server to the Pod. It's useful for debugging, but it's not a solution for production:
- The tunnel lives only as long as your terminal session
- It serves one machine (yours), not the public internet
- It provides no load balancing, TLS termination, or failover
You need something that provides permanent external access.
Kubernetes offers multiple ways to expose Services externally. Each builds on the previous, solving additional problems while adding complexity.
ClusterIP is the default Service type. When you create a Service without specifying a type, you get ClusterIP:
Output:
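A sketch, assuming the Deployment is named `task-api` (kubectl expose creates a ClusterIP Service when no --type flag is given):

```shell
kubectl expose deployment task-api --port=8000
kubectl get service task-api

# NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
# task-api   ClusterIP   10.96.45.123   <none>        8000/TCP   5s
```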
ClusterIP provides:
- A stable virtual IP that outlives individual Pods
- An internal DNS name (service-name.namespace.svc.cluster.local)
- Load balancing across the Pods matched by the Service's selector

ClusterIP does NOT provide:
- Any way for traffic from outside the cluster to reach the Service
ClusterIP is perfect for internal services that only other Pods need to reach. Your database, cache, and message queue typically use ClusterIP because external users should never access them directly.
NodePort opens a specific port on every node in your cluster:
Output:
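One way to pin the node port to 30080, matching the address used in this lesson (the resource name and label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: task-api
spec:
  type: NodePort
  selector:
    app: task-api
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30080   # must fall within the default 30000-32767 range
```

Applied with kubectl apply -f, this opens port 30080 on every node in the cluster.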
Check the service:
Output:
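Illustrative output (the PORT(S) column shows the Service port and the node port together):

```shell
kubectl get service task-api

# NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
# task-api   NodePort   10.96.45.123   <none>        8000:30080/TCP   1m
```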
Now you can access your service at http://<any-node-ip>:30080. If you're using Docker Desktop, try:
Output:
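On Docker Desktop the single node is reachable as localhost, so a request like this should reach the Task API:

```shell
curl http://localhost:30080/
```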
NodePort solves external access, but creates new problems:
- Ports are restricted to a high range (30000-32767 by default), not standard ports like 80 or 443
- Clients must know a node's IP address, and that node can fail or be replaced
- The port is opened on every node, whether or not it runs a matching Pod
- There is no TLS termination and no routing beyond a port-to-Service mapping
NodePort works for development and testing. For production, you need something that handles node failures and provides standard ports (80/443).
LoadBalancer requests an external load balancer from your cloud provider:
Output:
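A sketch, assuming the Deployment is named `task-api` (port 80 externally, forwarding to the container's port 8000):

```shell
kubectl expose deployment task-api --type=LoadBalancer \
  --port=80 --target-port=8000 --name=task-api-lb

# service/task-api-lb exposed
```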
Wait a moment, then check:
Output (cloud environment):
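Illustrative output once the cloud provider finishes provisioning (the external IP matches the text; the other values are invented):

```shell
kubectl get service task-api-lb

# NAME          TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
# task-api-lb   LoadBalancer   10.96.72.10   203.0.113.45   80:31234/TCP   2m
```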
Now http://203.0.113.45 reaches your service on port 80. The cloud provider created a load balancer, gave it a public IP, and configured it to forward traffic to your nodes.
Question: If LoadBalancer solves external access, why do we need Ingress?
Answer: Cost and routing.
Consider this scenario: You have 10 microservices. With LoadBalancer Services:
- Each service gets its own cloud load balancer
- Each load balancer gets its own public IP
- Each load balancer appears on your cloud bill

At $15-25 per load balancer per month, that's $150-250/month just for external access. And you still can't do:
- Path-based routing (/api/tasks to one service, /api/users to another)
- Host-based routing (tasks.example.com vs. users.example.com)
- Centralized TLS termination for all services in one place
You need a single entry point that routes to multiple services.
Ingress provides HTTP/HTTPS routing rules that direct traffic to Services based on hostnames and paths.
Here's an Ingress resource:
Output:
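A sketch of what that resource might contain (the hostname, path, and nginx class are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-api
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress controller
  rules:
    - host: tasks.example.com
      http:
        paths:
          - path: /api/tasks
            pathType: Prefix
            backend:
              service:
                name: task-api
                port:
                  number: 8000
```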
Ingress gives you:
- A single entry point (one load balancer) for many Services
- Host-based and path-based HTTP routing
- TLS termination defined in one standard resource
This looks like the solution. But there's a problem.
Ingress is a Kubernetes-native resource, but it only defines basic routing. Real production needs require:
- Rate limiting
- Timeouts and retries
- Authentication
- Header manipulation and URL rewrites
- Traffic splitting for canary releases
These features don't exist in the Ingress specification. So every Ingress controller added them through annotations.
Question: What happens when you switch from NGINX to Traefik?
Answer: You rewrite every annotation. The rate limiting syntax, timeout format, TLS configuration, and middleware references are all different. This is vendor lock-in through annotations.
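For example, rate limiting looks completely different on the two controllers (a sketch; the values and the `default-ratelimit` middleware name are illustrative):

```yaml
# NGINX Ingress controller: a single annotation on the Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
---
# Traefik: an annotation that points at a separate Middleware CRD
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-ratelimit@kubernetescrd
```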
Even worse: annotations are unstructured strings. There's no schema validation. Typo in an annotation name? Kubernetes accepts it silently and the feature simply doesn't work.
The annotation problem is one of several fundamental limitations in the Ingress API.
In production, different teams have different responsibilities:
- Infrastructure providers run the load balancing technology
- Cluster operators decide which domains, ports, and TLS policies are exposed
- Application developers attach routes for their own services
With Ingress, everyone edits the same resource. The application developer who adds a route can accidentally modify the TLS configuration. There's no RBAC granularity within the Ingress resource.
Every feature beyond basic routing requires vendor-specific annotations. Your infrastructure becomes tightly coupled to one ingress controller. Switching controllers means rewriting every Ingress resource.
The Ingress spec cannot express common requirements:
- Header-based routing (send beta users to a different backend)
- Weighted traffic splitting (90% to v1, 10% to v2)
- HTTP method matching
- Routing to Services in other namespaces
These require controller-specific CRDs or annotations, losing the benefit of a standard API.
Ingress only handles HTTP traffic. If you need to expose raw TCP or UDP (a database, a message broker) or TLS passthrough, you're back to vendor-specific solutions.
When an Ingress resource fails to configure correctly, how do you know? The Ingress status field is minimal. Controllers report status inconsistently, making debugging difficult.
The Kubernetes community recognized these limitations and created Gateway API, a successor to Ingress designed from the ground up for:
- Role-oriented resources that match how teams actually work
- Portability across implementations without annotations
- Expressiveness through typed, schema-validated fields
- Extensibility without sacrificing the core standard

Gateway API splits responsibilities into distinct resources:
- GatewayClass: which implementation provides the load balancing (managed by the infrastructure provider)
- Gateway: the entry point itself, with listeners, ports, and TLS (managed by the cluster operator)
- HTTPRoute: routing rules that attach to a Gateway (managed by application developers)

This separation enables:
- RBAC boundaries that match team responsibilities
- Routes that stay portable when the underlying implementation changes
- Validated configuration instead of unstructured annotation strings
With Gateway API, switching from Envoy Gateway to Traefik Gateway requires changing ONE field:
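A sketch, assuming hypothetical class names `envoy-gateway` and `traefik` registered by the two implementations:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: traefik   # was: envoy-gateway -- the only change
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```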
Your HTTPRoute resources remain identical. No annotation rewrites. No vendor lock-in.
Work through these exercises to solidify your understanding of Service types and Ingress limitations.
If you have a Kubernetes cluster running, list all Services:
Output:
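A sketch of the listing (the rows are illustrative; your cluster will differ):

```shell
kubectl get services --all-namespaces

# NAMESPACE     NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
# default       task-api   ClusterIP   10.96.45.123   <none>        8000/TCP                 1h
# kube-system   kube-dns   ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   3d
```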
For each Service, identify:
- Its type (ClusterIP, NodePort, or LoadBalancer)
- Who can reach it: only other Pods, or external clients too?
- Whether that exposure level matches what the Service actually needs
Create a simple deployment to experiment with:
Output:
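A sketch using nginx as a stand-in workload (the name `hello` is an assumption):

```shell
kubectl create deployment hello --image=nginx

# deployment.apps/hello created
```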
Now expose it with different Service types:
Output:
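Assuming a Deployment named `hello`, one way to create all three Service types side by side:

```shell
kubectl expose deployment hello --port=80 --name=hello-clusterip
kubectl expose deployment hello --port=80 --type=NodePort --name=hello-nodeport
kubectl expose deployment hello --port=80 --type=LoadBalancer --name=hello-lb
```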
Compare them:
Output:
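Illustrative output, assuming the three Services were named hello-clusterip, hello-nodeport, and hello-lb (IPs and node ports are invented):

```shell
kubectl get services hello-clusterip hello-nodeport hello-lb

# NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# hello-clusterip   ClusterIP      10.96.101.5    <none>        80/TCP         1m
# hello-nodeport    NodePort       10.96.187.44   <none>        80:31672/TCP   1m
# hello-lb          LoadBalancer   10.96.23.190   <pending>     80:30455/TCP   1m
```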
Notice: LoadBalancer shows <pending> on Docker Desktop because there's no cloud provider to create a load balancer. On GKE, EKS, or AKS, it would receive an external IP.
Clean up:
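Assuming the Deployment was named `hello` and the Services hello-clusterip, hello-nodeport, and hello-lb:

```shell
kubectl delete service hello-clusterip hello-nodeport hello-lb
kubectl delete deployment hello
```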
Visit the documentation for any two Ingress controllers (for example, ingress-nginx, Traefik, or HAProxy Ingress):
Find the annotation for:
- Rate limiting
- Request timeouts
- Forcing a redirect from HTTP to HTTPS
Compare the syntax. How different are they?
For each scenario, which Service type would you choose?
1. A PostgreSQL database that only your application Pods should reach
2. A quick demo of a web app on your development cluster
3. A single production web service on a cloud provider
4. Ten HTTP microservices that need host- and path-based routing

Answers:
1. ClusterIP: external users should never access the database directly
2. NodePort: fine for development and testing, not production
3. LoadBalancer: the cloud provider supplies a public IP on standard ports
4. An Ingress controller or Gateway API: one entry point routing to many services
You built a traffic-engineer skill in Lesson 0. Based on what you learned about Ingress limitations, consider: does your skill rely on controller-specific annotations for anything beyond basic routing?
Your skill should help you choose:
- ClusterIP for internal-only services
- NodePort for quick development and testing
- LoadBalancer for a single externally exposed service
- Gateway API when multiple services share one entry point with rich routing
Does your skill encode this decision tree?
Consider your Task API deployment. In production you will likely want:
- Path-based routing so /api/tasks and other APIs share one entry point
- Weighted traffic splitting for canary releases of new versions
- Header-based routing to send beta testers to a different backend

All of these require Gateway API's expressiveness. Ingress annotations would create vendor lock-in.
If your skill currently generates Ingress resources, you now understand why Gateway API is better. The remaining lessons will teach Gateway API patterns that your skill should capture instead.