
I have been using AKS for a while now, about a year, and recently I needed the ability to auto-scale the cluster. Unfortunately, my old AKS cluster used availability sets rather than virtual machine scale set (VMSS) node pools, which meant I was not able to auto-scale the nodes.

After some research (aka reaching out on Twitter), it seems there is no easy way to migrate from an old AKS cluster to a new one, so I had to redeploy my whole cluster and applications.

So I decided to build a new cluster, reusing my old service principal and the Azure AD server and client application details, which I luckily had saved in a password manager. This time I thought, why not try an ARM template to deploy it rather than Terraform?

Prerequisites

To use this ARM template you will need a few things:

  • An existing vNet
  • An existing Log Analytics Workspace
  • A Service Principal that has contributor rights to your vNet (normally scoped to the resource group)
  • A Server and client application registered in Azure AD.

If you do not already have a service principal or the server and client applications, you can check out my old blog post on how to create them.
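If you only need the service principal part, the command below is a minimal sketch of how I would create one with contributor rights scoped to the vNet's resource group. The name, subscription ID and resource group here are placeholders; the AAD server and client application registrations are covered in that post.

# Placeholder values - swap in your own subscription ID and resource group
az ad sp create-for-rbac \
  --name "aks-cluster-sp" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>"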

The ARM Template

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {
      "type": "string",
      "metadata": {
        "description": "AKS cluster name"
      }
    },
    "kubernetesVersion": {
      "type": "string",
      "metadata": {
        "description": "Kubernetes version"
      },
      "defaultValue": "1.14.8"
    },
    "agentPoolProfiles": {
      "type": "array",
      "metadata": {
        "description": "Define one or multiple node pools"
      },
      "defaultValue": [
        {
          "nodeCount": 3,
          "nodeVmSize": "Standard_D2_v3",
          "availabilityZones": [
            "1",
            "2",
            "3"
          ],
          "enableAutoScaling": true,
          "maxCount": 10,
          "minCount": 3
        }
      ]
    },
    "workspaceResourceGroup": {
      "type": "string",
      "metadata": {
        "description": "Log Analytics workspace resource group name"
      }
    },
    "workspaceName": {
      "type": "string",
      "metadata": {
        "description": "Log Analytics workspace name that has the Container Insights solution enabled"
      }
    },
    "vnetResourceGroupName": {
      "type": "string",
      "metadata": {
        "description": "Virtual Network resource group name"
      }
    },
    "vnetName": {
      "type": "string",
      "metadata": {
        "description": "Virtual Network name"
      }
    },
    "vnetSubnetName": {
      "type": "string",
      "metadata": {
        "description": "Virtual Network subnet name"
      }
    },
    "servicePrincipalClientId": {
      "type": "securestring",
      "metadata": {
        "description": "Service Principal client id"
      }
    },
    "servicePrincipalClientSecret": {
      "type": "securestring",
      "metadata": {
        "description": "Service Principal client secret"
      }
    },
    "aadClientAppId": {
      "type": "securestring",
      "metadata": {
        "description": "AAD client application id"
      }
    },
    "aadServerAppId": {
      "type": "securestring",
      "metadata": {
        "description": "AAD server application id"
      }
    },
    "aadServerAppSecret": {
      "type": "securestring",
      "metadata": {
        "description": "AAD server application secret"
      }
    },
    "aadTenantId": {
      "type": "securestring",
      "metadata": {
        "description": "AAD tenant id"
      }
    }
  },
  "variables": {
    "apiVersion": {
      "aks": "2019-08-01"
    },
    "agentPoolProfiles": {
      "vnetSubnetId": "[concat(resourceId(parameters('vnetResourceGroupName'),'Microsoft.Network/virtualNetworks',parameters('vnetName')),'/subnets/',parameters('vnetSubnetName'))]"
    },
    "cluster": {
      "workspaceId": "[resourceId(parameters('workspaceResourceGroup'),'Microsoft.OperationalInsights/workspaces',parameters('workspaceName'))]"
    },
    "outputs": {
      "resourceId": "[resourceId('Microsoft.ContainerService/managedClusters/',parameters('name'))]"
    }
  },
  "resources": [
    {
      "apiVersion": "[variables('apiVersion').aks]",
      "type": "Microsoft.ContainerService/managedClusters",
      "name": "[parameters('name')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "nodeResourceGroup": "[concat(parameters('name'),'-worker')]",
        "kubernetesVersion": "[parameters('kubernetesVersion')]",
        "enableRBAC": true,
        "dnsPrefix": "[parameters('name')]",
        "addonProfiles": {
          "kubeDashboard": {
            "enabled": false
          },
          "omsagent": {
            "enabled": true,
            "config": {
              "logAnalyticsWorkspaceResourceID": "[variables('cluster').workspaceId]"
            }
          }
        },
        "copy": [
          {
            "name": "agentPoolProfiles",
            "count": "[length(parameters('agentPoolProfiles'))]",
            "input": {
              "name": "[concat('nodepool',add(copyIndex('agentPoolProfiles'),1))]",
              "maxPods": 250,
              "osDiskSizeGB": 128,
              "count": "[parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].nodeCount]",
              "vmSize": "[parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].nodeVmSize]",
              "osType": "Linux",
              "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]",
              "enableAutoScaling": "[if(parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].enableAutoScaling, parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].enableAutoScaling, json('null'))]",
              "maxCount": "[if(parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].enableAutoScaling, parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].maxCount, json('null'))]",
              "minCount": "[if(parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].enableAutoScaling, parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].minCount, json('null'))]",
              "type": "VirtualMachineScaleSets",
              "availabilityZones": "[parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].availabilityZones]"
            }
          }
        ],
        "networkProfile": {
          "loadBalancerSku": "standard",
          "networkPlugin": "azure",
          "networkPolicy": "azure",
          "serviceCidr": "10.0.0.0/16",
          "dnsServiceIp": "10.0.0.10",
          "dockerBridgeCidr": "172.17.0.1/16"
        },
        "servicePrincipalProfile": {
          "clientId": "[parameters('servicePrincipalClientId')]",
          "secret": "[parameters('servicePrincipalClientSecret')]"
        },
        "aadProfile": {
          "clientAppID": "[parameters('aadClientAppId')]",
          "serverAppID": "[parameters('aadServerAppId')]",
          "serverAppSecret": "[parameters('aadServerAppSecret')]",
          "tenantId": "[parameters('aadTenantId')]"
        }
      }
    }
  ],
  "outputs": {
    "name": {
      "type": "string",
      "value": "[parameters('name')]"
    },
    "resourceId": {
      "type": "string",
      "value": "[variables('outputs').resourceId]"
    }
  }
}

Let’s dig a little deeper

We start with the parameters. Most will make sense, but I will go deeper into agentPoolProfiles as it is an array parameter.

  • name: AKS cluster name
  • kubernetesVersion: Kubernetes version
  • agentPoolProfiles: Define one or multiple node pools
  • workspaceResourceGroup: Log Analytics workspace resource group name
  • workspaceName: Log Analytics workspace name that has the Container Insights solution enabled
  • vnetResourceGroupName: Virtual Network resource group name
  • vnetName: Virtual Network name
  • vnetSubnetName: Virtual Network subnet name
  • servicePrincipalClientId: Service Principal client id
  • servicePrincipalClientSecret: Service Principal client secret
  • aadClientAppId: AAD client application id
  • aadServerAppId: AAD server application id
  • aadServerAppSecret: AAD server application secret
  • aadTenantId: AAD tenant id

agentPoolProfiles

"agentPoolProfiles": {
"type": "array",
"metadata": {
"description": "Define one or multiple node pools"
},
"defaultValue": [
{
"nodeCount": 3,
"nodeVmSize": "Standard_D2_v3",
"availabilityZones": [
"1",
"2",
"3"
],
"enableAutoScaling": true,
"maxCount": 10,
"minCount": 3
}
]
},

As I mentioned, this is an array parameter. You can read more about ARM templates and arrays at https://azurecitadel.com/automation/arm/lab6/. In short, later on in the ARM template a copy section loops through this array, using the length of the array as the count; the trimmed-down fragment below shows just that part.
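For reference, this is the copy loop from the template above, cut down to the lines that matter for the pattern (the full version also sets maxPods, the subnet, auto-scaling and availability zones):

"copy": [
  {
    "name": "agentPoolProfiles",
    "count": "[length(parameters('agentPoolProfiles'))]",
    "input": {
      "name": "[concat('nodepool',add(copyIndex('agentPoolProfiles'),1))]",
      "count": "[parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].nodeCount]",
      "vmSize": "[parameters('agentPoolProfiles')[copyIndex('agentPoolProfiles')].nodeVmSize]"
    }
  }
]

Each entry in the array produces one node pool, named nodepool1, nodepool2 and so on.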

In this array, you can set the following for the node pool.

  • nodeCount: How many nodes you want in the node pool
  • nodeVmSize: The size of each node
  • availabilityZones: The availability zones you want this node pool spread across
  • enableAutoScaling: Whether you would like auto-scaling enabled
  • maxCount: The maximum number of nodes you want in this node pool
  • minCount: The minimum number of nodes you want in this node pool

I have set some defaults here that will get you up and running. Feel free to change them; just make sure you also change them in the parameters file.

Now, if you want to add another node pool at cluster creation, you can add another entry to the agentPoolProfiles section of the ARM template.

Just make sure you add a comma after the closing } of the preceding node pool entry.

It should look something like this.

"agentPoolProfiles": {
"type": "array",
"metadata": {
"description": "Define one or multiple node pools"
},
"defaultValue": [
{
"nodeCount": 3,
"nodeVmSize": "Standard_D2_v3",
"availabilityZones": [
"1",
"2",
"3"
],
"enableAutoScaling": true,
"maxCount": 10,
"minCount": 3
},
{
"nodeCount": 2,
"nodeVmSize": "Standard_A2_v2",
"availabilityZones": null,
"enableAutoScaling": false
}
]
},

If you want to add node pools after the AKS cluster has been created, you will need to use another ARM template.
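I have not included that template in this post, but as a rough sketch of what it could look like, you can deploy the agentPools child resource on its own. Everything below is a placeholder example rather than the exact template I use, so adjust the names and values to match your cluster:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "apiVersion": "2019-08-01",
      "type": "Microsoft.ContainerService/managedClusters/agentPools",
      "name": "<existing-cluster-name>/nodepool2",
      "properties": {
        "count": 2,
        "vmSize": "Standard_D2_v3",
        "osType": "Linux",
        "type": "VirtualMachineScaleSets",
        "vnetSubnetID": "<subnet resource id>"
      }
    }
  ]
}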

Fill in the blanks

Now it's time to build the parameters file. You can find a template for this in my Git repo: https://github.com/PixelRobots/ArmTemplates/tree/master/AKS_node_pools_ARM

So, in your favourite IDE (I am using VS Code), open the file parameters.json and fill in the values.

The kubernetesVersion parameter can be left out of the parameters file if you like; I have set the default to the version I am using (1.14.8).

Don’t forget to check the defaults I have set for agentPoolProfiles.
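To give you an idea of the shape, a filled-in parameters.json looks something like this. The values are made-up placeholders, and I have left agentPoolProfiles and kubernetesVersion out so the defaults in the template apply:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": { "value": "aks-cluster-01" },
    "workspaceResourceGroup": { "value": "rg-monitoring" },
    "workspaceName": { "value": "la-workspace-01" },
    "vnetResourceGroupName": { "value": "rg-network" },
    "vnetName": { "value": "vnet-01" },
    "vnetSubnetName": { "value": "aks-subnet" },
    "servicePrincipalClientId": { "value": "<service principal client id>" },
    "servicePrincipalClientSecret": { "value": "<service principal client secret>" },
    "aadClientAppId": { "value": "<aad client app id>" },
    "aadServerAppId": { "value": "<aad server app id>" },
    "aadServerAppSecret": { "value": "<aad server app secret>" },
    "aadTenantId": { "value": "<aad tenant id>" }
  }
}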

Deployment Time!

Open up your terminal of choice with the Azure CLI installed and connect to your Azure subscription. If you're doing the above, you should not need tips on how to do this 🙂

Navigate to the folder that you have the files in and run the following command.
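The command below is the general shape of it. I am assuming the template file is called azuredeploy.json, so swap in the actual file names from the repo and your own resource group:

az group deployment create \
  --resource-group <aks-resource-group> \
  --template-file azuredeploy.json \
  --parameters parameters.json

On newer versions of the Azure CLI the equivalent is az deployment group create with the same arguments.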

After some time you will have your AKS cluster with a node pool.

Now, as this is an AKS cluster linked to Azure AD, you will need to give either a user or a group access to the cluster. I have two YAML files in my Git repo that will help with that. If you need instructions, check out my previous blog post; the bit you need is near the bottom, under Configuring Kubernetes RBAC: https://pixelrobots.co.uk/2019/02/create-a-rbac-azure-kubernetes-services-aks-cluster-with-azure-active-directory-using-terraform/
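The YAML files in the repo are the ones to use, but as a rough illustration of what they do, a ClusterRoleBinding that gives an Azure AD group cluster-admin rights looks something like this (the group object ID is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  # Placeholder - use the object ID of your Azure AD group
  name: "<azure-ad-group-object-id>"

Apply it with kubectl apply -f while connected with admin credentials (az aks get-credentials --resource-group <rg> --name <cluster> --admin), and users in that group can then sign in with their Azure AD accounts.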

I hope you found this article helpful. Feel free to leave a comment or reach out via the usual channels.

Thanks for reading.


