I spotted this in the AKS release notes recently and it is worth paying attention to if you are running multi-zone clusters. Microsoft has added managedNATGatewayV2 as a new outbound type for AKS, and the key difference from the existing managedNATGateway is zone redundancy. The original option is tied to a single availability zone. If that zone has a problem, your egress goes with it. V2 uses the StandardV2 NAT gateway SKU, which is zone-redundant by default without you having to configure anything.
This is currently in public preview, so it is not covered by the AKS SLA and not recommended for production workloads yet.
## Why this matters
If you are already spreading your node pools across availability zones, your compute layer is zone redundant but, with managedNATGateway, your egress is not. That is an asymmetry worth fixing. StandardV2 also brings IPv6 outbound support and flow logs, neither of which is available on the Standard SKU.
The comparison is straightforward:
| | managedNATGateway | managedNATGatewayV2 |
|---|---|---|
| NAT gateway SKU | Standard | StandardV2 |
| Zone redundancy | No (single zone) | Yes (built in) |
| IPv6 outbound | No | Yes |
| Flow logs | No | Yes |
| Public IP SKU required | Standard | StandardV2 |
| Outbound IP model immutable | No | Yes |
| Status | GA | Preview |
The immutability row is the one to pay attention to. With managedNATGatewayV2, your choice between Azure-managed IPs and customer-defined IPs is locked at cluster creation. You cannot switch models with az aks update after the fact. That is a real design decision, not a footnote.
## How to enable it
You need the aks-preview CLI extension at version 20.0.0b1 or later. Install or update it first.
```bash
az extension add --name aks-preview
az extension update --name aks-preview
```
Then register the ManagedNATGatewayV2Preview feature flag. This can take a few minutes to propagate.
```bash
az feature register --namespace "Microsoft.ContainerService" --name "ManagedNATGatewayV2Preview"
az feature show --namespace "Microsoft.ContainerService" --name "ManagedNATGatewayV2Preview"
```
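If you are scripting this, a minimal wait loop saves rerunning the show command by hand (the 30-second interval here is arbitrary):

```bash
# Poll until the feature flag reports Registered
while [ "$(az feature show --namespace "Microsoft.ContainerService" \
    --name "ManagedNATGatewayV2Preview" \
    --query properties.state -o tsv)" != "Registered" ]; do
  echo "Waiting for feature registration..."
  sleep 30
done
```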
Once the state reads Registered, refresh the provider registration.
```bash
az provider register --namespace Microsoft.ContainerService
```
### Azure-managed IPs
The simpler path is letting Azure own the outbound IPs. You specify a count and AKS handles the rest.
```bash
az aks create \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --location uksouth \
  --node-count 3 \
  --outbound-type managedNATGatewayV2 \
  --nat-gateway-managed-outbound-ip-count 1 \
  --nat-gateway-idle-timeout 4 \
  --generate-ssh-keys
```
Azure provisions a StandardV2 NAT gateway with one managed outbound IPv4 address. For dual-stack egress, add --nat-gateway-managed-outbound-ipv6-count 1 as well.
After creation, you can update the IP count and the idle timeout without recreating the cluster.
```bash
az aks update \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --nat-gateway-managed-outbound-ip-count 2 \
  --nat-gateway-idle-timeout 10
```
### Customer-defined IPs
If external services need to allowlist your egress addresses, or you want the IPs to survive a cluster recreate, bring your own. The catch is that a StandardV2 NAT gateway requires StandardV2 public IPs. Existing Standard SKU public IPs will not work, so you need to create new ones.
The commands below create a zone-redundant public IP and a public IP prefix, then create the cluster referencing them. The --zone 1 2 3 on the IP resources matters. A zone-redundant NAT gateway backed by zonal IPs would create a zone-specific dependency that defeats the point.
```bash
MY_IP_ID=$(az network public-ip create \
  --resource-group pixelrobots-rg \
  --name pip-nat-gw-uks \
  --location uksouth \
  --sku StandardV2 \
  --allocation-method Static \
  --version IPv4 \
  --zone 1 2 3 \
  --query publicIp.id \
  --output tsv)

MY_PREFIX_ID=$(az network public-ip prefix create \
  --resource-group pixelrobots-rg \
  --name ippfx-nat-gw-uks \
  --location uksouth \
  --length 31 \
  --sku StandardV2 \
  --version IPv4 \
  --zone 1 2 3 \
  --query id \
  --output tsv)

az aks create \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --location uksouth \
  --node-count 3 \
  --outbound-type managedNATGatewayV2 \
  --nat-gateway-outbound-ips $MY_IP_ID \
  --nat-gateway-outbound-ip-prefixes $MY_PREFIX_ID \
  --nat-gateway-idle-timeout 4 \
  --generate-ssh-keys
```
You can update which IPs are assigned later, but you cannot switch to Azure-managed IPs. The model you choose at creation is permanent.
```bash
az aks update \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --nat-gateway-outbound-ips $NEW_IP_ID
```
## Bicep
The natGatewayProfile property is fully documented in the ARM schema. managedNATGatewayV2 is a valid outboundType value as of the 2026-01-02-preview API version, which is the latest as of writing.
For the Azure-managed IP approach, the cluster resource looks like this. The managedOutboundIPProfile block handles both IPv4 and IPv6 counts, and the outboundIPPrefixes property is V2-only so you will not find it documented against managedNATGateway.
```bicep
// Requires the Microsoft.ContainerService/ManagedNATGatewayV2Preview feature flag registered
param location string = resourceGroup().location

@description('AKS cluster name')
param aksName string = 'aks-pixelrobots-prod-uks'

@description('Number of managed outbound IPv4 addresses for the NAT gateway (1-16)')
@minValue(1)
@maxValue(16)
param natGatewayManagedOutboundIpCount int = 1

@description('NAT gateway idle timeout in minutes (4-120)')
@minValue(4)
@maxValue(120)
param natGatewayIdleTimeoutMinutes int = 4

resource aksCluster 'Microsoft.ContainerService/managedClusters@2026-01-02-preview' = {
  name: aksName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    dnsPrefix: aksName
    agentPoolProfiles: [
      {
        name: 'system'
        count: 3
        vmSize: 'Standard_D4ds_v5'
        mode: 'System'
        availabilityZones: ['1', '2', '3']
        osDiskType: 'Ephemeral'
      }
    ]
    networkProfile: {
      networkPlugin: 'azure'
      networkPluginMode: 'overlay'
      outboundType: 'managedNATGatewayV2'
      natGatewayProfile: {
        idleTimeoutInMinutes: natGatewayIdleTimeoutMinutes
        managedOutboundIPProfile: {
          count: natGatewayManagedOutboundIpCount
          // countIPv6: 1 // uncomment to add managed outbound IPv6 addresses
        }
      }
    }
  }
}

output clusterName string = aksCluster.name
output nodeResourceGroup string = aksCluster.properties.nodeResourceGroup
```
For customer-defined IPs, declare the public IP resources first and reference them in the natGatewayProfile. Setting sku.name to 'StandardV2' is required on both the IP and the prefix; Standard SKU resources will cause a deployment failure.
```bicep
param location string = resourceGroup().location
param aksName string = 'aks-pixelrobots-prod-uks'

resource natPublicIp 'Microsoft.Network/publicIPAddresses@2024-11-01' = {
  name: 'pip-nat-gw-uks'
  location: location
  sku: {
    name: 'StandardV2'
    tier: 'Regional'
  }
  zones: ['1', '2', '3']
  properties: {
    publicIPAllocationMethod: 'Static'
    publicIPAddressVersion: 'IPv4'
  }
}

resource natPublicIpPrefix 'Microsoft.Network/publicIPPrefixes@2024-11-01' = {
  name: 'ippfx-nat-gw-uks'
  location: location
  sku: {
    name: 'StandardV2'
    tier: 'Regional'
  }
  zones: ['1', '2', '3']
  properties: {
    prefixLength: 31
    publicIPAddressVersion: 'IPv4'
  }
}

resource aksCluster 'Microsoft.ContainerService/managedClusters@2026-01-02-preview' = {
  name: aksName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    dnsPrefix: aksName
    agentPoolProfiles: [
      {
        name: 'system'
        count: 3
        vmSize: 'Standard_D4ds_v5'
        mode: 'System'
        availabilityZones: ['1', '2', '3']
        osDiskType: 'Ephemeral'
      }
    ]
    networkProfile: {
      networkPlugin: 'azure'
      networkPluginMode: 'overlay'
      outboundType: 'managedNATGatewayV2'
      natGatewayProfile: {
        idleTimeoutInMinutes: 4
        outboundIPs: {
          publicIPs: [natPublicIp.id]
        }
        outboundIPPrefixes: {
          publicIPPrefixes: [natPublicIpPrefix.id]
        }
      }
    }
  }
}
```
Bicep will deploy the IP resources before the cluster due to the implicit dependency on natPublicIp.id and natPublicIpPrefix.id, so the ordering is handled for you.
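Neither template includes the deployment step itself. A sketch, assuming the template is saved as main.bicep and the resource group already exists:

```bash
az deployment group create \
  --resource-group pixelrobots-rg \
  --template-file main.bicep
```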
## Verifying after creation
Once the cluster is up, confirm the NAT gateway was provisioned in the node resource group.
```bash
NODE_RG=$(az aks show -g pixelrobots-rg -n pixelrobots-aks --query nodeResourceGroup -o tsv)
az network nat gateway list -g $NODE_RG -o table
```
You should see one entry with ProvisioningState: Succeeded. To check the full outbound IP configuration including idle timeout, query the cluster’s NAT gateway profile directly.
```bash
az aks show \
  -g pixelrobots-rg \
  -n pixelrobots-aks \
  --query "networkProfile.natGatewayProfile" \
  -o json
```
One thing I noticed: there is no zones property on the NAT gateway resource in the node resource group. That is not an oversight. Zone redundancy is implicit in StandardV2 and does not show up as a configurable property the way it does on Standard SKU resources where you pick a specific zone.
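You can see this from the CLI. Listing the gateway with a Zones column should come back empty on V2, where a Standard zonal gateway would show its zone (the query shape here is my own, not something from the docs):

```bash
az network nat gateway list -g $NODE_RG \
  --query "[].{Name:name, Sku:sku.name, Zones:zones}" -o table
```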
## Migrating an existing cluster
If you have an existing cluster running loadBalancer or managedNATGateway, you can migrate to V2 using az aks update without recreating the cluster. There is one thing to understand before you run anything: this migration changes your cluster’s outbound IP addresses and involves a period of network disruption. Existing connections will drop. If you have firewall rules or authorized IP ranges configured around your current egress IPs, update them before or immediately after the migration.
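Before anything else, confirm what the cluster is using today. The outbound type lives on the network profile:

```bash
az aks show -g pixelrobots-rg -n pixelrobots-aks \
  --query networkProfile.outboundType -o tsv
```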
The supported migration paths to managedNATGatewayV2 on managed VNets are:
| From | Supported |
|---|---|
| loadBalancer | Yes |
| managedNATGateway | Yes |
| none | Yes |
| block | Yes |
There is currently no supported path out of managedNATGatewayV2 on a managed VNet (this might change once it reaches GA). For now, the only way back to a different outbound type is a cluster recreate. That is worth sitting with before you run the command on anything you care about.
### Before you migrate
The migration will replace your cluster’s egress IPs with new ones provisioned on the StandardV2 NAT gateway. Before you start, capture what IPs you are currently using so you know exactly what to update afterwards.
For a loadBalancer cluster, the outbound IPs are on the load balancer in the node resource group. This command pulls the current outbound IP addresses and their associated resource IDs.
```bash
NODE_RG=$(az aks show -g pixelrobots-rg -n pixelrobots-aks --query nodeResourceGroup -o tsv)
az network lb show \
  -g $NODE_RG \
  -n kubernetes \
  --query "frontendIPConfigurations[].{Name:name, PublicIP:publicIPAddress.id}" \
  -o table
```
For a managedNATGateway cluster, the IPs are on the NAT gateway resource directly.
```bash
NODE_RG=$(az aks show -g pixelrobots-rg -n pixelrobots-aks --query nodeResourceGroup -o tsv)
az network nat gateway list \
  -g $NODE_RG \
  --query "[].{Name:name, IPs:join(', ', publicIpAddresses[].id)}" \
  -o table
```
Take note of the actual IP addresses, not just the resource IDs. You can resolve them with az network public-ip show --ids <id> --query ipAddress -o tsv. Once you have the full list, check it against any firewall rules, NSG rules, API server authorized IP ranges, and third-party allowlists before proceeding. Updating those after the migration while outbound traffic is broken is not a good place to be.
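To save resolving IDs one at a time, you can also list every public IP in the node resource group in one pass. On a managed cluster this can include LoadBalancer service IPs as well, so check the names before copying addresses into allowlists (a convenience sketch, not an official step):

```bash
az network public-ip list -g $NODE_RG \
  --query "[].{Name:name, IP:ipAddress}" -o table
```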
### From loadBalancer
This is probably the most common starting point. The command below migrates a cluster from the default load balancer egress to managedNATGatewayV2 with Azure-managed IPs. You need at least Azure CLI version 2.56 for outbound type migration; run az upgrade if you are not sure.
```bash
az aks update \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --outbound-type managedNATGatewayV2 \
  --nat-gateway-managed-outbound-ip-count 1
```
AKS will provision a new StandardV2 NAT gateway and detach the load balancer from egress. Expect a brief outage on outbound connections during the transition.
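A quick way to watch the change from inside the cluster is a throwaway pod that echoes its public IP. Run it before and after the migration and compare the output; ifconfig.me is just one example of an IP echo service:

```bash
kubectl run egress-check --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s https://ifconfig.me
```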
### From managedNATGateway
Migrating from V1 to V2 follows the same pattern. The cluster already has a managed NAT gateway in place; AKS replaces it with a StandardV2 one.
```bash
az aks update \
  --resource-group pixelrobots-rg \
  --name pixelrobots-aks \
  --outbound-type managedNATGatewayV2 \
  --nat-gateway-managed-outbound-ip-count 1
```
After the update completes, confirm the NAT gateway in the node resource group has been replaced by checking the SKU on the resource.
```bash
NODE_RG=$(az aks show -g pixelrobots-rg -n pixelrobots-aks --query nodeResourceGroup -o tsv)
az network nat gateway list -g $NODE_RG --query "[].{Name:name, Sku:sku.name, State:provisioningState}" -o table
```
The SKU column should show StandardV2. If you had firewall rules allowing your old NAT gateway IP, update them now with the new egress address from the cluster’s NAT gateway profile.
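The profile exposes the outbound IPs as resource IDs. Assuming it populates effectiveOutboundIPs the same way the load balancer profile does, a small loop resolves them to addresses:

```bash
# Resolve each effective outbound IP resource ID to its address
for ID in $(az aks show -g pixelrobots-rg -n pixelrobots-aks \
    --query "networkProfile.natGatewayProfile.effectiveOutboundIPs[].id" -o tsv); do
  az network public-ip show --ids "$ID" --query ipAddress -o tsv
done
```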
## Wrapping up
managedNATGatewayV2 is still in preview and I would treat it that way. Enable it on dev and staging clusters now so you understand the behaviour and work through the IP model decision before it matters. Do not deploy it to production until GA unless you are comfortable with preview caveats.
The thing I keep coming back to is the outbound IP model immutability. That decision is permanent at cluster creation. Before you create any cluster with V2, answer the question: do any external systems need to allowlist your egress addresses? If yes, bring your own StandardV2 IPs. If no, let Azure manage them. Getting that wrong means a cluster recreate.
For multi-zone clusters, managedNATGatewayV2 is clearly the better option once it reaches GA. Zone-redundant compute paired with zone-redundant egress is the correct setup, and V2 gets you there without any additional configuration.
The one-way migration constraint is worth flagging to your team before anyone runs az aks update. Once a managed VNet cluster is on V2, it stays there. Plan the migration, update your firewall rules ahead of time, and schedule the downtime window properly rather than treating it as a quick update.