I spotted something land in the Azure CLI extensions repo this week. PR #9669 adds CLI support for App Routing Istio, including new --enable-app-routing-istio flags on az aks create and az aks update, plus dedicated az aks approuting gateway istio enable and disable subcommands in the aks-preview extension. The PR merged on 16 March 2026.
If you are running Application Routing with NGINX today and wondering what the managed migration path actually looks like, this is it starting to take shape.
Why this matters
The upstream Ingress NGINX project ended maintenance in March 2026. Microsoft committed to security-only patches for the Application Routing add-on’s NGINX path through November 2026. After that, you need to be somewhere else.
Back in November 2025, the AKS engineering team was clear about the direction: Gateway API is the strategic path, and Application Routing with Gateway API powered by the Istio control plane was planned for the first half of 2026. This PR is that work showing up in the CLI.
For teams using Application Routing today, it means the migration does not have to involve stepping outside the managed add-on model. You are not being pushed into self-managing Envoy Gateway or standing up your own Istio installation. You stay in az aks approuting commands and swap the backend.
What the PR actually adds
The PR adds --enable-app-routing-istio (short form --enable-ari) and --disable-app-routing-istio (--disable-ari) to az aks create and az aks update, plus a new dedicated command group:
```shell
az aks approuting gateway istio enable
az aks approuting gateway istio disable
```
Worth being clear: --enable-gateway-api is not new here. That flag for installing the Gateway API CRDs already existed. This PR adds the Istio-specific App Routing layer that sits on top of those CRDs.
So no, this is not “turn on all of Istio on AKS with one flag”. The PR help text is explicit: this is an ingress-only version of Istio for App Routing. It does not provide service mesh functionality such as mTLS or traffic management between services, and it cannot be used at the same time as the full Istio service mesh add-on (--enable-azure-service-mesh). If you try to combine them, the command will fail.
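If you would rather check up front than let the command fail, you can query the cluster's mesh profile first. This is a sketch: it assumes the `serviceMeshProfile.mode` property that `az aks show` exposes for the mesh add-on, where `Istio` means the full service mesh is enabled.

```shell
# Sketch: report whether the full Istio service mesh add-on is enabled,
# since App Routing Istio cannot be combined with it.
# Assumes the serviceMeshProfile shape returned by `az aks show`.
mesh_mode() {
  az aks show \
    --resource-group "$1" \
    --name "$2" \
    --query "serviceMeshProfile.mode" \
    --output tsv
}

# Example usage (hypothetical cluster names):
# if [ "$(mesh_mode "$RG" "$CLUSTER")" = "Istio" ]; then
#   echo "Full mesh add-on is on; disable it before enabling App Routing Istio" >&2
# fi
```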
Step 1: Register the preview feature flag
Before using any of the new flags, register the feature flag on your subscription.
```shell
az feature register \
  --namespace Microsoft.ContainerService \
  --name AppRoutingIstioGatewayAPIPreview

az feature show \
  --namespace Microsoft.ContainerService \
  --name AppRoutingIstioGatewayAPIPreview \
  --output table

az provider register \
  --namespace Microsoft.ContainerService

az extension add --name aks-preview --upgrade
```
Do not move on until the feature state shows Registered. It usually takes a few minutes.
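If you would rather script the wait than keep re-running the show command, a small polling helper does the job. A sketch, assuming the `properties.state` path that `az feature show` returns:

```shell
# Poll until the preview feature flag reports Registered, or give up
# after ~10 minutes. Sketch only; assumes `az feature show` output shape.
wait_for_feature_registered() {
  local namespace="$1" name="$2" state=""
  for _ in $(seq 1 30); do
    state=$(az feature show \
      --namespace "$namespace" \
      --name "$name" \
      --query "properties.state" \
      --output tsv 2>/dev/null)
    if [ "$state" = "Registered" ]; then
      echo "Feature $name is Registered"
      return 0
    fi
    echo "Current state: ${state:-Unknown}; retrying in 20s..." >&2
    sleep 20
  done
  echo "Timed out waiting for $name" >&2
  return 1
}

# wait_for_feature_registered Microsoft.ContainerService AppRoutingIstioGatewayAPIPreview
```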
Step 2a: New cluster
For a fresh test cluster, set your variables and create with both switches:
```shell
RG="rg-pixel-istio-gw"
CLUSTER="pixel-istio-gw"
LOCATION="uksouth"

az group create \
  --name "$RG" \
  --location "$LOCATION"

az aks create \
  --resource-group "$RG" \
  --name "$CLUSTER" \
  --location "$LOCATION" \
  --enable-gateway-api \
  --enable-app-routing-istio \
  --generate-ssh-keys
```
Step 2b: Existing cluster
For an existing cluster, use az aks update with both flags:
```shell
RG="rg-pixel-istio-gw"
CLUSTER="pixel-istio-gw"

az aks update \
  --resource-group "$RG" \
  --name "$CLUSTER" \
  --enable-gateway-api \
  --enable-app-routing-istio
```
Or use the dedicated App Routing Istio subcommand if Gateway API CRDs are already installed on the cluster:
```shell
az aks approuting gateway istio enable \
  --resource-group "$RG" \
  --name "$CLUSTER"
```
The disable command prompts for confirmation, which makes sense given it affects live ingress traffic:
```shell
az aks approuting gateway istio disable \
  --resource-group "$RG" \
  --name "$CLUSTER"
```
Step 3: Connect and check what is installed
Pull down credentials and then check the cluster looks the way you expect before deploying anything:
```shell
az aks get-credentials \
  --resource-group "$RG" \
  --name "$CLUSTER"
```
Check namespaces and then verify the Gateway API resource types are present:
```shell
kubectl get ns
kubectl get gatewayclass
kubectl get gateways --all-namespaces
kubectl get httproutes --all-namespaces
kubectl api-resources | grep gateway
```
This is worth doing early: it tells you immediately whether the Gateway API CRDs are present before you deploy anything on top of them.
Step 4: Deploy a test application
Keep the test app simple. Create a namespace and deploy a basic echo server:
```shell
kubectl create namespace demo

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: ealen/echo-server:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: demo
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 80
EOF
```
Now create a Gateway and an HTTPRoute to expose it. The gatewayClassName: approuting-istio is what tells the App Routing Istio controller to own this gateway:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: pixel-istio-gw
  namespace: demo
spec:
  gatewayClassName: approuting-istio
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route
  namespace: demo
spec:
  parentRefs:
    - name: pixel-istio-gw
  rules:
    - backendRefs:
        - name: echo
          port: 80
EOF
```
Step 5: Verify and test
Check the Gateway and route settled cleanly:
```shell
kubectl get gateway -n demo
kubectl get httproute -n demo
kubectl describe gateway pixel-istio-gw -n demo
kubectl describe httproute echo-route -n demo
```
You want to see the Gateway programmed and the route accepted. Once it has an address, grab it and test:
```shell
GATEWAY_IP=$(kubectl get gateway pixel-istio-gw -n demo \
  -o jsonpath='{.status.addresses[0].value}')

curl http://$GATEWAY_IP
```
A response back from the echo app is the proof point. Not the fact that the create command completed. A real request through Gateway API hitting a real workload.
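The address can take a minute or two to be assigned and start answering, so a small retry wrapper beats hammering curl by hand. A sketch, assuming bash and curl are available:

```shell
# Retry a command up to N times with a pause between attempts.
# Sketch helper; not specific to any Azure CLI behaviour.
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$attempts" ]; then
      echo "Attempt $i/$attempts failed; retrying in 10s..." >&2
      sleep 10
    fi
  done
  return 1
}

# Example usage once GATEWAY_IP is set:
# retry 12 curl --fail --silent "http://$GATEWAY_IP"
```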
If something does not work
Go backwards one layer at a time:
```shell
# Check the pods
kubectl get pods -n demo

# Check the service
kubectl get svc -n demo

# Check the Gateway and route
kubectl describe gateway pixel-istio-gw -n demo
kubectl describe httproute echo-route -n demo

# Check recent events
kubectl get events -n demo --sort-by=.lastTimestamp
```
That usually tells you quickly whether the issue is the app, the service, the route, or the Gateway implementation itself.
What I noticed
This is not the full Istio service mesh. No sidecar injection, no mTLS, no service-to-service traffic management. Istio is doing the gateway work only. That is an important distinction if you were concerned about the overhead of a full mesh rollout just to replace NGINX ingress.
The automated deployment model is useful. When you apply a Gateway resource, Istio automatically creates the Deployment, Service, HPA, and PDB for the gateway pods. You do not manage those directly. That is less operational overhead than running NGINX yourself.
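Since you inspect rather than manage those objects, a helper like this shows what got created for a given Gateway. A sketch, assuming the standard Gateway API name label that Istio's automated deployment stamps on the resources it creates:

```shell
# List the auto-created workload objects for a Gateway.
# Assumption: resources carry the gateway-name label used by
# Istio's automated gateway deployment.
inspect_gateway_workloads() {
  local ns="$1" gw="$2"
  kubectl get deployment,service,hpa,pdb \
    --namespace "$ns" \
    --selector "gateway.networking.k8s.io/gateway-name=$gw"
}

# inspect_gateway_workloads demo pixel-istio-gw
```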
Experimental Gateway API CRDs will block you. If you have previously installed Gateway API CRDs from a community Helm chart using the experimental channel, remove those before running --enable-gateway-api. Check what you have first:
```shell
kubectl get crds | grep "gateway.networking.k8s.io"
```
This is still preview. That does not mean do not test it. It means test it like a preview. Be curious, break it, rebuild it, and see how it behaves with the sort of routes and traffic patterns you actually care about.
My recommendation
If you are on Application Routing with NGINX today, you have until November 2026 on the security patch window. That is enough time to migrate properly, but not enough to keep ignoring it.
What I would do now is spin up a dev cluster using the steps above and deploy something representative through a Gateway and HTTPRoute. The model is not difficult to grasp, but it is different enough from Ingress that you want hands-on time before touching production.
Also run the ingress2gateway tool against your existing cluster to preview how your current Ingress resources would translate:
```shell
brew install ingress2gateway
# or
go install github.com/kubernetes-sigs/ingress2gateway@latest

ingress2gateway print --providers=ingress-nginx
```
This surfaces annotation-heavy configs that do not translate cleanly. Much better to find that out now than mid-migration.
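A quick census of nginx-specific annotations across your existing Ingresses is a useful companion check, since those are exactly the configs that tend to need manual review. A sketch, assuming annotations use the standard nginx.ingress.kubernetes.io prefix:

```shell
# Count nginx-specific annotations across all Ingress resources,
# most-used first. Sketch; crude JSON grep, not a full parser.
list_nginx_annotations() {
  kubectl get ingress --all-namespaces -o json \
    | grep -o '"nginx\.ingress\.kubernetes\.io/[^"]*"' \
    | sort | uniq -c | sort -rn
}
```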
I covered the full range of migration options in my Ingress NGINX retirement post if you want the wider picture.
Final thoughts
The Azure CLI now has support for App Routing Istio in the aks-preview extension, and the PR makes clear that Gateway API CRD installation and App Routing Istio are separate pieces you turn on together. That lines up with Microsoft’s stated plan for Application Routing with Gateway API powered by the Istio control plane for ingress only.
If you run AKS and are thinking about where ingress is heading next, this is worth spinning up and testing now. When the Gateway API conversation lands properly on your desk, you want to already have kicked the tyres.
Have you started testing Gateway API on AKS yet? Drop a comment below or reach out on X/Twitter.