In March 2023, a security researcher discovered that a popular cloud provider's managed Kubernetes service had no NetworkPolicy enforcement by default. Pods in one tenant's namespace could freely communicate with pods in other tenants' namespaces. The vulnerability went undetected for months because Kubernetes allows all traffic by default—and most teams never lock it down.
Your Task API running in Kubernetes can currently reach every other pod in the cluster. An attacker who compromises your pod gains lateral movement to databases, secret stores, and control plane components. NetworkPolicies transform your cluster from an open network into a zero-trust environment where pods communicate only with explicitly permitted services.
This lesson teaches you to implement the most critical security pattern: default deny first, then explicit allows. Without this pattern, your cluster security is theater.
Docker Desktop's default CNI does not enforce NetworkPolicies. You must install Calico before any policies take effect.
Why Calico is required:
Install Calico with a single command:
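A single `kubectl apply` of the upstream manifest is enough. The version pinned in the URL below is an assumption; check the Calico releases page for the current one.

```shell
# Install the Calico CNI from the upstream manifest.
# The pinned version is an assumption — substitute the latest release.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
```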
Wait for Calico pods to become ready:
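One way is `kubectl wait` against the label Calico puts on its node agents (the timeout value is an arbitrary choice):

```shell
# Block until every calico-node pod in kube-system reports Ready
kubectl wait --namespace kube-system \
  --for=condition=Ready pod \
  --selector k8s-app=calico-node \
  --timeout=120s
```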
Once all calico-node pods show Running, NetworkPolicies are enforced cluster-wide.
Critical principle: Default deny must be your first NetworkPolicy.
When you apply a NetworkPolicy that selects pods, Kubernetes switches those pods from "allow all" to "deny all except explicitly allowed" for each direction listed in policyTypes: include Ingress to lock down inbound traffic, Egress for outbound, or both.
Create default-deny.yaml in your Task API namespace:
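A minimal sketch of the manifest, assuming your namespace is named `task-api` (adjust `metadata.namespace` to match yours):

```yaml
# default-deny.yaml — minimal sketch; the namespace name is an assumption
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: task-api        # replace with your Task API namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed => all inbound traffic denied
    - Egress                 # no egress rules listed => all outbound traffic denied
```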
What just happened:
Your Task API pods can no longer reach anything—including DNS.
Warning: Without a DNS allow rule, service discovery breaks completely.
After applying default deny, try to resolve a service name from your Task API pod:
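A quick way to demonstrate the failure, assuming a Deployment named `task-api` in a `task-api` namespace:

```shell
# Run nslookup inside a Task API pod; with default deny active this
# times out because the pod can no longer reach CoreDNS
kubectl exec -n task-api deploy/task-api -- nslookup kubernetes.default
```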
Service discovery fails because pods cannot reach CoreDNS in kube-system. This breaks:
Create allow-dns.yaml:
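A sketch of the policy, again assuming the `task-api` namespace. It relies on the `kubernetes.io/metadata.name` label that the API server sets automatically on every namespace:

```yaml
# allow-dns.yaml — egress to CoreDNS in kube-system; namespace name is an assumption
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: task-api
spec:
  podSelector: {}            # apply to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP      # DNS falls back to TCP for large responses
          port: 53
```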
Now verify DNS resolution works:
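Repeat the lookup that failed earlier (same assumed names):

```shell
# The same lookup that timed out before should now succeed and
# return the ClusterIP of the kubernetes service
kubectl exec -n task-api deploy/task-api -- nslookup kubernetes.default
```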
Why this works:
Common mistake: Using podSelector alone instead of namespaceSelector for DNS. CoreDNS pods do carry labels (typically k8s-app: kube-dns), but those labels vary across distributions and versions, so targeting the kube-system namespace is more reliable.
Your Task API receives traffic from the Envoy Gateway (configured in Module 7.1). Allow this ingress:
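A sketch assuming your pods carry an `app: task-api` label and the gateway runs in the `envoy-gateway-system` namespace:

```yaml
# allow-gateway-ingress.yaml — sketch; pod label and namespace names are assumptions
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-ingress
  namespace: task-api
spec:
  podSelector:
    matchLabels:
      app: task-api          # assumed label on your Task API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: envoy-gateway-system
      ports:
        - protocol: TCP
          port: 8000
```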
Why namespace selection matters:
If you used podSelector without namespaceSelector, the rule would only match pods in the same namespace as your Task API. Gateway pods exist in a different namespace (envoy-gateway-system), so you must use namespaceSelector to cross namespace boundaries.
Your Task API needs to reach:
Create explicit egress rules:
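The exact destinations depend on your deployment. As an illustration only, this sketch assumes a PostgreSQL service labeled `app: postgres` in a `data` namespace; note that placing `namespaceSelector` and `podSelector` in the same `to` entry ANDs them together, so only those pods in that namespace match:

```yaml
# allow-db-egress.yaml — illustrative sketch; the database namespace,
# label, and port are assumptions to adjust for your environment
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: task-api
spec:
  podSelector:
    matchLabels:
      app: task-api
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:   # both selectors in one entry: AND, not OR
            matchLabels:
              kubernetes.io/metadata.name: data
          podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
```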
Traffic matrix after all policies:
The only way to confirm NetworkPolicies work is to test them. Deploy a test pod and attempt unauthorized traffic.
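For example (the image choice and pod name are assumptions):

```shell
# Throwaway pod with curl preinstalled, kept alive so we can exec into it
kubectl run netpol-test --image=curlimages/curl:latest \
  --restart=Never --command -- sleep 3600
```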
Try to reach Task API from the test pod:
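Assuming the Service is named `task-api` in the `task-api` namespace:

```shell
# -m 5 gives curl a 5-second timeout so the blocked request fails fast
kubectl exec netpol-test -- \
  curl -s -m 5 http://task-api.task-api.svc.cluster.local:8000/
echo $?   # curl exits with 28 on a timeout
```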
Exit code 28 is curl's timeout code. NetworkPolicies silently drop denied packets rather than rejecting them, so the connection attempt hangs until the timeout fires.
To verify gateway traffic works, temporarily add a label that matches your ingress policy:
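This only works if your ingress rule matches a custom namespace label; the auto-set `kubernetes.io/metadata.name` label is immutable and cannot be copied onto another namespace. A sketch using a hypothetical `network-access: gateway` label and the `default` namespace:

```shell
# Temporarily grant the test pod's namespace the label the policy matches
kubectl label namespace default network-access=gateway
# ...re-run the curl test, then remove the label:
kubectl label namespace default network-access-
```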
In production, only pods from the envoy-gateway-system namespace can reach your Task API on port 8000.
Here is the complete set of policies you created in this lesson:
Save as task-api-network-policies.yaml and apply:
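Applying the combined file creates (or updates) every policy at once:

```shell
kubectl apply -f task-api-network-policies.yaml
# Expect one "created" (or "configured") line per policy in the file
```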
Test your cloud-security skill against what you learned:
Evaluation questions:
If any answers are "no," update your skill with the patterns from this lesson.
Test your understanding of NetworkPolicy design and troubleshooting.
Prompt 1:
What you're learning: Whether you can apply the default-deny + DNS pattern to a new namespace. Notice if the generated policy includes both UDP and TCP for port 53, and whether it correctly targets kube-system for CoreDNS.
Prompt 2:
What you're learning: Common debugging patterns for NetworkPolicy. The issue is that CoreDNS runs in kube-system (not default) and the correct approach uses namespaceSelector rather than podSelector alone. This is the most common NetworkPolicy mistake.
Prompt 3:
What you're learning: How to design a complete NetworkPolicy architecture for a realistic application. Each service should have its own ingress/egress policies, creating a defense-in-depth traffic matrix.
Always test NetworkPolicies in a development namespace before applying to production. A misconfigured egress policy can break your application's ability to reach databases, external APIs, or even DNS. Use kubectl exec with curl or nslookup to verify connectivity after each policy change.