AKS Static Egress Gateway: Per-Namespace Static Egress IPs

🎯 TL;DR: Unique Static Egress IP per Kubernetes Namespace in AKS

AKS now has a native equivalent to OpenShift’s EgressIP: Static Egress Gateway. One dedicated gateway node pool + a StaticGatewayConfiguration CRD per namespace = stable egress IPs (public or private). No more separate node pools, subnets, and NAT Gateways per namespace.

Pods opt in via annotation, IPs are stable across restarts/upgrades, supports public and private egress, and layers cleanly with Azure Firewall. Requires aks-preview CLI extension and StaticEgressGatewayPreview feature flag. Private IP mode requires Kubernetes 1.34+.
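For orientation, the provisioning flow looks roughly like this. This is a sketch based on the preview documentation: the flag names (`--enable-static-egress-gateway`, `--mode Gateway`, `--gateway-prefix-size`) and the resource group, cluster, and pool names are assumptions and may change while the feature is in preview.

```shell
# Install the aks-preview CLI extension and register the preview feature
az extension add --name aks-preview
az feature register --namespace Microsoft.ContainerService \
  --name StaticEgressGatewayPreview

# Enable the static egress gateway on an existing cluster
# (resource group and cluster names are placeholders)
az aks update -g my-rg -n my-cluster --enable-static-egress-gateway

# Add one dedicated gateway node pool; the prefix size controls how
# many stable egress IPs the pool can hand out
az aks nodepool add -g my-rg --cluster-name my-cluster \
  --name gwpool --mode Gateway --gateway-prefix-size 31
```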

Full working demo: github.com/Ricky-G/azure-scenario-hub/tree/main/src/aks-unique-egress-ip-per-namespace


If you’ve used OpenShift’s EgressIP CRD to assign static egress IPs per namespace for firewall allowlisting, you know how critical this is for security compliance. The first question in any OpenShift-to-AKS migration is always: “How do we get per-namespace static egress IPs?”

Until recently, you needed a separate node pool, subnet, and NAT Gateway per namespace. Ten namespaces = ten of each. It didn’t scale.

AKS Static Egress Gateway fixes this: one gateway node pool, one subnet, one CRD per namespace.

```mermaid
graph LR
    A["Namespace A"] --> GW["Gateway Pool"]
    B["Namespace B"] --> GW
    C["Namespace C"] --> GW
    GW -->|"Unique IP per NS"| EXT["External Services"]
```
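Concretely, the per-namespace wiring is one StaticGatewayConfiguration object plus a pod annotation. This is a sketch: the API group/version, field names, and annotation key shown here (`egressgateway.kubernetes.azure.com/v1alpha1`, `gatewayNodepoolName`, `kubernetes.azure.com/static-gateway-configuration`) follow the preview docs and may evolve; the linked demo repo has a known-working version.

```yaml
# One StaticGatewayConfiguration per namespace (names are placeholders)
apiVersion: egressgateway.kubernetes.azure.com/v1alpha1
kind: StaticGatewayConfiguration
metadata:
  name: team-a-egress
  namespace: team-a
spec:
  gatewayNodepoolName: gwpool   # the dedicated gateway node pool
---
# Pods opt in via annotation; all egress from this pod leaves
# through the namespace's stable IP(s)
apiVersion: v1
kind: Pod
metadata:
  name: worker
  namespace: team-a
  annotations:
    kubernetes.azure.com/static-gateway-configuration: team-a-egress
spec:
  containers:
    - name: app
      image: nginx   # placeholder workload
```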
Read more
Application Gateway Ingress Controller For AKS


🎯 TL;DR: AGIC Direct Pod Ingress for High-Performance AKS Workloads

AGIC provides direct pod ingress bypassing Kubernetes ClusterIP for up to 50% lower network latency compared to in-cluster solutions. Problem: Traditional ingress controllers add network hops and consume AKS compute resources. Solution: Application Gateway routes directly to pod IPs via Azure Resource Manager integration, offering WAF, SSL termination, and managed updates as AKS add-on. Critical limitation: 100 backend pool limit means 2000+ services require 20 Application Gateways, making cost-effective deployment challenging for large-scale clusters.


Recently I ran into an interesting issue with an AKS cluster running 2000+ services. There's nothing wrong with running 2000+ services; that's what Kubernetes is for: scale! But the aspect that caught my attention was getting the Application Gateway Ingress Controller (AGIC) to provide ingress to all of these services. I had worked with Istio and NGINX for ingress into AKS with no issues, but never AGIC, so I had to try it to see where it worked well, what the advantages are, and where the limitations lie.

Application Gateway

Application Gateway (App Gateway) is a well-established layer 7 service that has been around for a while. Some of its major features are:

  • URL routing
  • Cookie-based affinity
  • SSL termination
  • End-to-end SSL
  • Support for public, private, and hybrid web sites
  • Integrated web application firewall
  • Zone redundancy
  • Connection draining

This post isn't focused on App Gateway itself; it's about what it can do as an ingress controller for AKS and how it does it. You can find out more about App Gateway and all of its features here
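As a minimal example of AGIC in use, an Ingress is handed to the App Gateway via the `kubernetes.io/ingress.class: azure/application-gateway` annotation (the form the classic AGIC docs use; newer versions also support an IngressClass resource). The hostname, namespace, and service name below are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo
  annotations:
    # Tell AGIC (not NGINX/Istio) to reconcile this Ingress;
    # App Gateway then routes straight to the backing pod IPs
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80
```

Each such service consumes a slot in an App Gateway backend pool, which is where the 100-pool limit mentioned above starts to bite at 2000+ services.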

Read more