On April 28th, 2020, Microsoft announced the general availability of Windows Server container support on Azure Kubernetes Service (AKS). This got me extremely excited. If you have been following me, you may know that I have been running Windows containers in a production environment for over a year now. They have been running on a custom Virtual Machine Scale Set (VMSS), with scaling handled by a logic app. Finally, I can move away from that solution, use AKS, and have all my containers running in the same place.
Below, I will share the steps you can use to create an AKS cluster with Windows node pools using the Azure CLI.
Limitations
When creating an AKS cluster with support for Windows node pools, you always have to create a Linux node pool first. This becomes the system node pool, but you can still use it to schedule Linux containers. So, if you only want to run Windows containers, make this first node pool as small as you can, but note that you will always need at least 2 nodes for reliability.
For the Windows node pool you will also have the following limitations:
- The AKS cluster can have a maximum of 10 node pools.
- Each node pool can have a maximum of 100 nodes.
- The Windows Server node pool name has a limit of 6 characters.
Now that we have that out of the way, let's look at creating the cluster.
But first we need a resource group
In your terminal or the Azure Cloud Shell use the following to create a resource group. If you already have one, then you can skip this step.
az group create --name aks-win-cluster --location eastus

Now that's out of the way, let's create the cluster
Windows Server containers are only supported on clusters that use the Azure CNI (advanced) network plugin. The command below will create the AKS cluster with the correct network plugin and will also create the network resources for you.
az aks create \
  --resource-group aks-win-cluster \
  --name aks-win-cluster \
  --node-count 2 \
  --enable-addons monitoring \
  --kubernetes-version 1.16.7 \
  --generate-ssh-keys \
  --network-plugin azure
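One thing to be aware of: to host Windows node pools, the cluster also needs a Windows administrator username and password, which the official docs set at cluster creation time. A minimal sketch of the same command with those flags added, where azureuser and the password are placeholder values you should replace with your own:

# Same command as above, plus Windows administrator credentials
# for the node pools. Username and password are placeholders.
az aks create \
  --resource-group aks-win-cluster \
  --name aks-win-cluster \
  --node-count 2 \
  --enable-addons monitoring \
  --kubernetes-version 1.16.7 \
  --generate-ssh-keys \
  --network-plugin azure \
  --windows-admin-username azureuser \
  --windows-admin-password 'ChangeMeP@ssw0rd1234'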
After a few minutes you should have your new AKS cluster with the 2 Linux nodes needed for the first node pool.

Time to add the Windows Server node pool
For this we are going to use the az aks nodepool add command.
az aks nodepool add \
  --resource-group aks-win-cluster \
  --cluster-name aks-win-cluster \
  --os-type Windows \
  --name win \
  --node-count 1 \
  --node-taints kubernetes.io/os=windows:NoSchedule \
  --kubernetes-version 1.16.7
The command above will create a Windows Server node pool called win with a node count of 1. It will use the default node size of Standard_D2s_v3, which is the minimum recommended size for Windows Server container nodes. It will also add a taint to the node pool and to any new nodes added to it. When a taint is applied to a node, only pods with a matching toleration can be scheduled on it, so only pods that can run on Windows Server will be scheduled on this node pool. This means you only have to edit your Windows container YAML files and not everything else.
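If you want to confirm the taint was applied, a quick check with the Azure CLI should work, assuming the resource names used above:

# List the taints configured on the Windows node pool
az aks nodepool show \
  --resource-group aks-win-cluster \
  --cluster-name aks-win-cluster \
  --name win \
  --query nodeTaints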

Testing time
So, now let’s make sure everything is set up correctly and we can connect to the cluster.
For this we are going to use kubectl. If you do not have it installed, you can use the following command:
az aks install-cli
Next use the az aks get-credentials command to configure kubectl to connect to your cluster.
az aks get-credentials --resource-group aks-win-cluster --name aks-win-cluster
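To double-check that kubectl is now pointing at the new cluster, you can print the active context, which should match the cluster name:

# Shows the kubeconfig context kubectl is currently using
kubectl config current-context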

Now use the following command to view all your nodes.
kubectl get nodes

You will see 2 Linux nodes and one Windows node. The Windows node's name will start with aks followed by the node pool name we gave it above, win in this case.
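The output will look something like this; the exact node names, ages and versions will differ in your cluster:

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-vmss000000   Ready    agent   10m   v1.16.7
aks-nodepool1-12345678-vmss000001   Ready    agent   10m   v1.16.7
akswin000000                        Ready    agent   5m    v1.16.7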
Time to deploy a test application
Below you will find a Kubernetes manifest file that will deploy a test ASP.NET application to your cluster and more specifically your Windows Server node pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      tolerations:
      - key: kubernetes.io/os
        operator: Equal
        value: windows
        effect: NoSchedule
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample
Copy the above code and save it as sample.yaml.
You will notice that under the pod spec section we have added the toleration to match the taint we set on the node pool.
tolerations:
- key: kubernetes.io/os
  operator: Equal
  value: windows
  effect: NoSchedule
Without this, the pod would never be scheduled and would sit in a Pending state.
You may have also noticed the node selector line. Even without the taint, this lets you specify which OS a node must be running for the pod to be scheduled on it. Unfortunately, a lot of the system containers and publicly available Linux containers do not have this line added, so I find it easier to also add the taint and toleration to ensure I do not get any containers stuck in a Pending state.
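As a side note, the beta.kubernetes.io/os label used in the manifest was deprecated in favour of kubernetes.io/os in later Kubernetes releases, so on newer clusters you would write the selector like this; same behaviour, newer label:

nodeSelector:
  "kubernetes.io/os": windows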
To deploy the sample application, use the following command after navigating to the folder where you saved the yaml file.
kubectl apply -f sample.yaml

After about 10 minutes the image will have been pulled and the pod started. You can check the status by using the following:
kubectl get pods
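Windows base images are large, so the pod can sit in a ContainerCreating state for a while during the image pull. To watch scheduling and pull progress, you can describe the pod, filtering by the app label from the manifest:

# Show scheduling and image pull events for the sample pod
kubectl describe pod -l app=sample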

To see the application is up and running you can use the following command to get the external IP address of the service.
kubectl get service sample
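The EXTERNAL-IP column will show <pending> until Azure has finished provisioning the load balancer. You can leave the command watching until the address appears:

# --watch keeps the command running and prints updates as they happen
kubectl get service sample --watch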

Now in your web browser navigate to the external IP.

Clean up time
Once you have finished with your testing use the following to delete everything.
az group delete --name aks-win-cluster --yes --no-wait
This will delete the resource group and the cluster, but it will not delete the Azure Active Directory service principal that was created. See https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal#additional-considerations for how to remove it.
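You may also want to tidy up the entries that az aks get-credentials added to your local kubeconfig. A sketch, assuming the default context name matches the cluster name and the standard AKS clusterUser naming:

# Remove the cluster's context, cluster entry and user from ~/.kube/config
kubectl config delete-context aks-win-cluster
kubectl config delete-cluster aks-win-cluster
kubectl config unset users.clusterUser_aks-win-cluster_aks-win-cluster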
All in All
I am excited about Windows containers. With the work Microsoft is doing to get the image size down, I believe we will see more Windows containers in the wild. There is still work to be done in my opinion, but this is an amazing start. I would love to see a way for existing Linux containers to avoid being scheduled on a Windows node without needing tolerations or node selectors. Maybe one day this will happen, or node selectors will become a requirement in all manifest files.
Just remember the above is only for testing and not production use!
I hope you found this article helpful and if you have any questions please reach out.