
I have been waiting for something like this for a while. If you run AKS at any meaningful scale, one of the hardest problems you eventually hit is the multi-cluster networking story. How do services across clusters discover each other? How do you enforce mTLS across cluster boundaries? How do you shift traffic between versions spread across different clusters? With open-source Istio you can get there, but the operational overhead of running and upgrading a control plane, managing certificates, and wiring up multi-cluster federation is substantial. Azure has just released a preview of something that tries to solve all of that in one go: Azure Kubernetes Application Network.

Why this matters

Azure Kubernetes Application Network is a fully managed ambient-based service mesh for Azure Kubernetes Service (AKS). The headline features are:

  • No sidecars. It uses Istio ambient mode, which means no proxy containers injected into your pods. A node-level DaemonSet (ztunnel) handles L4 security transparently.
  • Fully managed control and data plane. Azure runs Istiod, manages certificates including rotation, and handles upgrades. You do not operate any mesh infrastructure.
  • Multi-cluster by design. Multiple AKS clusters can join a single Application Network resource. Services across clusters can discover and communicate with each other with mTLS enforced end-to-end, including across cluster boundaries.
  • Shared trust boundary. A single root CA is provisioned per Application Network resource. Each member cluster receives an intermediate CA. Workload certificates are SPIFFE-compliant, valid for 24 hours, and rotated automatically every 12 hours.

The multi-cluster piece is what makes this genuinely interesting. You create one Application Network resource, join your clusters to it, label services as global, and the control plane handles cross-cluster service discovery. Cross-cluster traffic flows through east-west gateways in each cluster, all encrypted with mTLS. For teams running multiple AKS clusters and trying to build a coherent networking layer across them, this is a meaningful step forward.

How Application Network works

The architecture has three layers:

Management plane: The Azure resource provider. You interact with this via az appnet CLI commands or the ARM API. It provisions supporting infrastructure (including Azure Key Vault for certificate storage) and registers clusters as members.

Control plane: Fully managed and runs outside your cluster, provisioned per region. It includes a managed Istiod instance that connects to your cluster’s Kubernetes API server to watch services and push xDS configuration to ztunnel and waypoint proxies. In multi-cluster deployments, each control plane also connects to other member clusters’ API servers to exchange service discovery information.

Data plane: Components deployed into the applink-system namespace in your cluster. This includes:

  • ztunnel: A node-level L4 proxy (DaemonSet). Intercepts traffic, establishes mTLS, enforces L4 policies.
  • Waypoint proxies: Optional per-namespace L7 proxies for HTTP routing, traffic shifting, fault injection, and L7 auth policies.
  • East-west gateway: Handles cross-cluster traffic. You provide network reachability between east-west gateways (via VNet peering, VPN, etc.).
  • Istio CNI: Configures pod networking for ambient traffic interception.

The sidecar-free model is worth emphasising. In a typical Istio deployment, every pod gets an Envoy sidecar, which adds CPU and memory overhead at scale and complicates rolling deployments. Ambient mode separates the data plane from workloads entirely. Your pods do not change. The node-level ztunnel handles encryption and identity transparently.

Limitations to know before you start

  • Regions: centralus, eastus2, westus2, westus3, northeurope, southeastasia only
  • Preview status: Not for production. No SLA, limited warranty, best-effort support only.
  • No Windows node pools
  • Cannot switch upgrade modes (SelfManaged to FullyManaged or back)
  • Cannot run alongside the Istio add-on on the same cluster

Setting it up

Prerequisites

  • Azure CLI 2.84.0 or later
  • An AKS cluster with AKS-managed Entra integration and OIDC issuer enabled (required)
  • One of the supported regions: centralus, eastus2, westus2, westus3, northeurope, southeastasia

Start by setting your environment variables:
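The names and values below are placeholders; adjust them to your own naming and one of the supported regions:

```shell
# Placeholder names and values -- substitute your own
export RESOURCE_GROUP="rg-appnet-demo"
export LOCATION="westus3"
export CLUSTER_NAME="aks-appnet-1"
export APPNET_NAME="appnet-demo"
```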

These are referenced throughout the rest of the setup steps, so set them once and leave the terminal open.

Register the preview feature

Register the PublicPreview feature flag under the Microsoft.AppLink namespace:
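```shell
az feature register --namespace Microsoft.AppLink --name PublicPreview
```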

Registration can take a few minutes. Wait for the properties.state to show Registered before continuing:
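You can poll the state with `az feature show`:

```shell
az feature show --namespace Microsoft.AppLink --name PublicPreview \
  --query properties.state -o tsv
```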

Once registered, refresh the resource provider so Azure picks up the new capability:
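```shell
az provider register --namespace Microsoft.AppLink
```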

You only need to do this once per subscription.

Install the CLI extension

Install the preview extension and set your active subscription:
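The extension name below is an assumption based on the `az appnet` command group; check the preview docs for the exact name:

```shell
# Extension name assumed from the 'az appnet' command group
az extension add --name appnet
az account set --subscription "<your-subscription-id>"
```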

You should now have az appnet commands available. Run az appnet --help to confirm.

Create an AKS cluster

If you have an existing cluster, ensure it has --enable-oidc-issuer and --enable-aad set. If starting fresh:
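A minimal sketch, reusing the environment variables from earlier (add whatever node pool and networking options you normally use):

```shell
az group create --name $RESOURCE_GROUP --location $LOCATION

az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --enable-oidc-issuer \
  --enable-aad \
  --generate-ssh-keys
```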

Cluster creation takes a few minutes. Both flags are mandatory; Application Network will reject clusters that are missing either one.

Create the Application Network resource

Create a resource group for the Application Network resource, then create the resource itself:
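The exact `az appnet` syntax may differ in the preview, so treat this as a sketch and confirm with `az appnet create --help`:

```shell
az group create --name rg-appnet --location $LOCATION

# Parameter names are assumptions -- verify with 'az appnet create --help'
az appnet create \
  --resource-group rg-appnet \
  --name $APPNET_NAME \
  --location $LOCATION
```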

This takes a few minutes. The output will include properties.provisioningState: Succeeded when done.

Join the cluster as a member

You have two upgrade modes: FullyManaged (Azure handles minor version upgrades automatically via release channels) and SelfManaged (you trigger upgrades yourself). If you do not specify, it defaults to SelfManaged.
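A join might look something like the following. The subcommand and parameter names are assumptions, so confirm them with `az appnet --help`:

```shell
CLUSTER_ID=$(az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query id -o tsv)

# Subcommand and flags are assumptions based on the 'az appnet' group
az appnet member create \
  --resource-group rg-appnet \
  --app-network-name $APPNET_NAME \
  --name member-1 \
  --cluster-id $CLUSTER_ID \
  --upgrade-mode FullyManaged
```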

This triggers the data plane deployment into your cluster. It may take a few minutes before ztunnel and the other components are fully running.

Verify the join:
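Again treating the exact parameter names as assumptions:

```shell
az appnet member show \
  --resource-group rg-appnet \
  --app-network-name $APPNET_NAME \
  --name member-1 \
  --query properties.provisioningState -o tsv
```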

Look for properties.provisioningState: Succeeded in the output. If it is still showing Updating, wait a moment and run it again.

Check that ztunnel and Istio CNI DaemonSets are running:
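```shell
kubectl get daemonset,pods -n applink-system
```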

You should see the ztunnel DaemonSet with one pod per node, plus the Istio CNI pods, all reporting Ready.

If any pods are not ready, give it another minute. The control plane pushes configuration to the data plane after the member join completes and there is a short propagation delay.

Enable ambient mode on a namespace

Enrol a namespace into the mesh with a label. No pod restarts required:
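This is the standard Istio ambient enrolment label:

```shell
kubectl label namespace default istio.io/dataplane-mode=ambient
```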

That is it for a single-cluster setup. Your services in that namespace are now communicating with automatic mTLS. No YAML changes to your workloads.

Add L7 capabilities with a waypoint proxy

The ztunnel handles L4 by default. If you need L7 policies (HTTP routing, traffic shifting, JWT auth, fault injection), deploy a waypoint proxy. The easiest approach uses istioctl. First, install a version that matches your Application Network's supported Istio version:
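The version below is a placeholder; substitute whatever your Application Network reports as supported:

```shell
ISTIO_VERSION=1.24.0  # placeholder -- use your Application Network's supported version
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIO_VERSION sh -
export PATH="$PWD/istio-$ISTIO_VERSION/bin:$PATH"
```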

Then deploy the waypoint for the default namespace:
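```shell
istioctl waypoint apply -n default --enroll-namespace --wait
```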

This labels the namespace with istio.io/use-waypoint: waypoint automatically and deploys the waypoint gateway.

Multi-cluster setup

This is where things get interesting. Join a second cluster to the same Application Network resource:
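Assuming the same command shape as the first join (the second cluster's name is a placeholder):

```shell
CLUSTER2_ID=$(az aks show -g $RESOURCE_GROUP -n aks-appnet-2 --query id -o tsv)

# Subcommand and flags are assumptions -- confirm with 'az appnet --help'
az appnet member create \
  --resource-group rg-appnet \
  --app-network-name $APPNET_NAME \
  --name member-2 \
  --cluster-id $CLUSTER2_ID
```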

Once both joins succeed, the control plane begins synchronising service discovery between the clusters. They share the same root CA, so mTLS works across the boundary without any additional certificate configuration.

List all members to confirm both are registered:
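```shell
az appnet member list \
  --resource-group rg-appnet \
  --app-network-name $APPNET_NAME \
  -o table
```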

For cross-cluster traffic to work, you need network connectivity between the east-west gateways in each cluster. VNet peering is the simplest approach.
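A sketch of one direction of the peering (VNet names are placeholders, and you need to create the reverse peering as well):

```shell
# If the remote VNet is in another resource group or subscription,
# pass its full resource ID to --remote-vnet
az network vnet peering create \
  --name cluster1-to-cluster2 \
  --resource-group $RESOURCE_GROUP \
  --vnet-name vnet-cluster1 \
  --remote-vnet vnet-cluster2 \
  --allow-vnet-access
```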

Making services global

By default, services are cluster-local. To make a service reachable across cluster boundaries, label it:
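Using the bookinfo `reviews` service as the example. The label key follows Istio's ambient multi-cluster convention; check the preview docs if it differs:

```shell
kubectl label service reviews -n default istio.io/global=true
```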

Unlabelled services remain cluster-local. Only services you explicitly mark global are synchronised across the mesh.

The waypoint proxies also need to be globally visible so cross-cluster traffic can find the correct L7 proxy for each cluster:
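Assuming the default `waypoint` Service name that istioctl creates:

```shell
kubectl label service waypoint -n default istio.io/global=true
```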

The control plane now synchronises service discovery across both clusters. The reviews service from cluster 2 becomes reachable from cluster 1 using the same reviews.default.svc.cluster.local name. The ztunnel routes cross-cluster traffic through the east-west gateway, and mTLS is enforced end-to-end.

You can verify global service discovery from ztunnel’s perspective:
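istioctl can dump ztunnel's view of the mesh:

```shell
istioctl ztunnel-config services | grep -E 'reviews|waypoint'
```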

You should see VIPs from both clusters listed for the reviews and waypoint services.

Traffic management across clusters

Once you have multi-cluster connected, you get access to the full Istio traffic management toolkit applied across cluster boundaries. This is what makes the architecture powerful.

Cross-cluster traffic shifting

You can split traffic between a local and remote version of a service using a VirtualService. For example, routing 20% to a local reviews-v1 in cluster 1 and 80% to reviews-v2/v3 in cluster 2:
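A sketch of the VirtualService. The `reviews-local` and `reviews-remote` Service names are assumptions for the local/remote abstraction described below:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
    - reviews.default.svc.cluster.local
  http:
    - route:
        - destination:
            host: reviews-local.default.svc.cluster.local   # local pods only
          weight: 20
        - destination:
            host: reviews-remote.default.svc.cluster.local  # resolves cross-cluster
          weight: 80
```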

The pattern here uses local/remote service abstractions: reviews-local selects local pods via label selector, reviews-remote intentionally selects no local pods so traffic flows cross-cluster. It is a clean pattern for weighted canary releases across clusters.

L4 and L7 authorization policies

L4 policy (enforced by ztunnel) blocks at the connection level. This example allows traffic to productpage only from the ingress gateway service account:
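A sketch, assuming the ingress gateway runs under the `istio-ingressgateway` service account in `istio-system` and the default `cluster.local` trust domain:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-l4
  namespace: default
spec:
  # selector places this policy on ztunnel for L4 enforcement
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/istio-system/sa/istio-ingressgateway
```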

Anything not matching that principal will get a connection reset before headers are exchanged. No application changes required.

L7 policy (enforced by waypoint) works at the HTTP level, including method-level control. This example restricts productpage to GET requests from the curl service account only:
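A sketch, assuming a `curl` client service account in the `default` namespace:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-l7
  namespace: default
spec:
  # targetRefs places this policy on the waypoint for L7 enforcement
  targetRefs:
    - kind: Service
      group: ""
      name: productpage
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/default/sa/curl
      to:
        - operation:
            methods: ["GET"]
```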

Note that targetRefs (pointing at the Service) is used for waypoint policies rather than selector. If you use selector here, the policy lands on ztunnel instead and you will not get L7 enforcement.

JWT claim-based routing is also supported, routing requests to different backends based on claims in the JWT token validated by the waypoint.

What I noticed

A few things worth calling out before you dive in:

You own the network connectivity. The east-west gateway is deployed and managed for you, but getting your cluster VNets to talk to each other is still your responsibility. Whether you use VNet peering, VPN, or another connectivity model, you set it up yourself. This is not automatically wired for you.

Upgrade mode is a one-way decision (for now). Once you join a member in SelfManaged or FullyManaged mode, you cannot switch between the two. Choose carefully. If you want Azure to manage minor version rollouts, go FullyManaged with a release channel from the start.

No coexistence with the Istio add-on. If your cluster already uses the AKS-managed Istio add-on, you cannot join it to Application Network. This is a migration, not an addition.

Linux only. Windows node pools are not supported in this preview.

Would I use this in production?

Not yet, and the preview terms make that explicit. But the architecture is solid and the direction is clearly right.

The ambient mesh model reduces the overhead of running a service mesh significantly. No sidecars means no per-pod memory tax, no rolling restart required to enrol services, and no mesh proxy lifecycle to manage alongside your application containers. The fully managed control and data plane removes the largest operational burden that teams typically face when adopting Istio: keeping it running and upgraded.

The multi-cluster story is genuinely useful. The pattern of joining multiple clusters to a single Application Network resource, using global service labels, and routing traffic across clusters with standard Istio VirtualServices is much cleaner than the self-managed multi-cluster Istio federation approaches most teams end up cobbling together.

My recommendation

If you are running multiple AKS clusters and struggling with the cross-cluster networking story, or if you have wanted service mesh capabilities but found Istio’s operational burden too high, this is worth standing up in a dev or staging environment now. The preview is early enough that your feedback could shape the GA release.

If you are already running the AKS Istio add-on in production, stay put for now. There is no migration path yet, and you would need to replace your Istio add-on before joining Application Network.

Watch for GA announcement, Windows node pool support, and regional expansion. The regional limitation is the most likely blocker for teams outside those six initial regions.

Final thoughts

Azure Kubernetes Application Network is the first time Azure has offered a managed, ambient-mode service mesh that spans multiple clusters with a unified control plane and shared trust boundary. For platform engineers building multi-cluster AKS environments, that is a meaningful offering. The preview is limited, but the architecture is sound and the feature set is real.

One last thing worth acknowledging. Yes, Cilium is also available on AKS and yes, it has its own networking and Layer 7 policy story. The Istio vs Cilium debate in AKS circles is very much alive, and Application Network does not end it. What it does do is make the managed Istio path significantly more appealing by removing the operational weight that made Cilium look attractive in comparison. Whether that tips the scales for your team probably depends on how much of your existing investment is already in eBPF-land. That conversation is a whole other post.

Try it out: Get started with Azure Kubernetes Application Network


Pixel Robots.

I’m Richard Hooper aka Pixel Robots. I started this blog in 2016 for a couple of reasons. The first was to give myself a place to store my step-by-step guides, troubleshooting guides and plain ideas about being a sysadmin. The second was to share what I have learned and found out with other people like me. Hopefully, you can find something useful on the site.
