DNS-Based Egress Control in EKS Using Application Network Policies

tl;dr: Go watch my video on my new YouTube channel :)
- Pranav Sivvam
Dec 16, 2025

AWS recently upgraded its EKS network policies to include Admin Policies and Application Network Policies. If you work with EKS, your reaction was probably something like: FINALLY.

In this post, we’ll focus on Application Network Policies.

what does it do?

Application Network Policies (ANP) let you restrict pod egress based on domain names. In simple terms: you explicitly allow the domains your pods can talk to, and everything else is blocked.

If you’ve used Calico or Cilium, you probably know what I’m talking about. The reason I’m bringing it up is that lots of companies run the default EKS VPC CNI for their production workloads, and until now there was no straightforward way to restrict egress traffic based on domain names; you were limited to the standard L3/L4 network policies. The latest update changes that.
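
For contrast, here’s roughly what you were stuck with before: a standard L3/L4 policy that pins egress to IP ranges, which falls apart whenever the service behind a domain rotates its addresses. The pod labels and the CIDR below are just illustrative placeholders.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-github-by-ip
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Egress
  egress:
    - to:
        # You had to hard-code IP ranges and hope they never changed
        - ipBlock:
            cidr: 140.82.112.0/20
      ports:
        - protocol: TCP
          port: 443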

prerequisites

Before you start creating ANPs and wondering why nothing works (which happened to me), make sure you have the following:

Enable the Network Policy Controller

apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-network-policy-controller: "true"

Save the above YAML as np-controller.yaml and apply it with kubectl:

kubectl apply -f np-controller.yaml
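
If you want a quick sanity check that the flag was picked up, you can read it back (this just queries the ConfigMap you applied above):

kubectl get configmap amazon-vpc-cni -n kube-system -o jsonpath='{.data.enable-network-policy-controller}'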

Enable Network Policies in the Node Class

Update your Node Class configuration to enable network policies.

apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: network-policy-enabled
spec:
  # Enables network policy support
  networkPolicy: DefaultAllow
  # Optional: Enables logging for network policy events
  networkPolicyEventLogs: Enabled
  # Include other Node Class configurations as needed

Save the YAML as my-nodeclass.yaml and apply it:

kubectl apply -f my-nodeclass.yaml

Make sure your NodePool config references your NodeClass.
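
If you don’t have a NodePool yet, a minimal sketch looks something like this (the pool name and requirements are placeholders; the important part is the nodeClassRef pointing at the NodeClass above):

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: network-policy-enabled-pool
spec:
  template:
    spec:
      # This is what ties the pool to the NodeClass defined above
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: network-policy-enabled
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]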

applying the Network Policies

You apply an ApplicationNetworkPolicy with the domains (FQDNs) you want your pods to be able to talk to, and everything else is blocked, including DNS.

apiVersion: networking.k8s.aws/v1alpha1
kind: ApplicationNetworkPolicy
metadata:
  name: nginx-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
  - Egress
  egress:
  - to:
    - domainNames:
      - "github.com"
      - "api.github.com"
    ports:
    - protocol: TCP
      port: 443

Note: DNS-based policies are supported only in EKS Auto Mode clusters.

Ideally you’d allow DNS at the cluster level with the new Admin Policies, but for the sake of simplicity, here’s a namespace-level policy that allows DNS:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Save the ApplicationNetworkPolicy above as allow-github.yaml and the DNS policy as allow-dns.yaml, then apply both with kubectl:

kubectl apply -f allow-github.yaml
kubectl apply -f allow-dns.yaml

…and that’s it. Pods matching the selector (run: nginx) can’t reach anything other than github.com and api.github.com.
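
A quick way to convince yourself, assuming you have a pod named nginx labeled run=nginx in the default namespace and its image ships curl:

# Allowed domain: should return an HTTP response
kubectl exec nginx -- curl -sI https://api.github.com

# Not on the allowlist: should fail, since even the DNS lookup is rejected
kubectl exec nginx -- curl -sI --max-time 5 https://example.com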

behind the scenes

If you’re interested in what happens under the hood, AWS has explained it beautifully in this article. Here’s the quick version:

  1. The DNS policy is applied
  2. The Network Policy Controller detects the policy and tells the AWS node agent to allow DNS requests only for the allowed domains (github.com)
  3. The pod (or an app running in the pod) asks for the IP of npmjs.com and github.com
  4. These requests are sent through a proxy
  5. The request for npmjs.com gets blocked (you can reproduce this with the lookup test after this list)
  6. The request for github.com is allowed
  7. The DNS request for github.com passes through the DNS filter allowlist and is proxied through CoreDNS
  8. CoreDNS recursively resolves the IP from a DNS server
  9. The IP and its TTL are returned in the DNS response. They are then stored in an eBPF map (key-value store).
  10. eBPF probes (programs) attached to the pod’s network interface enforce egress filtering at the IP layer.
  11. IP validity is tied to the DNS TTL and automatically expires
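
You can watch steps 5–7 yourself with a couple of lookups from inside the pod (again assuming a pod named nginx whose image has nslookup or a similar DNS tool installed):

# Allowed domain: resolves normally via the DNS proxy and CoreDNS
kubectl exec nginx -- nslookup github.com

# Not on the allowlist: the lookup itself fails, before any connection is attempted
kubectl exec nginx -- nslookup npmjs.com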

why this matters

If a pod is compromised, its outbound traffic is still limited to the domains you explicitly allowed. Even rogue dependencies can only reach those endpoints.

This is a huge step forward for supply-chain and workload security on EKS.

my YouTube channel!

That’s right! I created a YouTube channel and posted my first short video on this topic :) Feel free to check it out!