Azure Services


Contents

Azure Resiliency
Azure Resiliency feature page
Design resilient applications for Azure
High Availability
High availability for Azure applications
Regions and Availability Zones
Availability Zones Documentation
Azure Services that support Availability Zones
Virtual machines
Create a Linux VM in an Availability Zone with CLI
Create a Windows VM in an Availability Zone with PowerShell
Create a Windows VM in an Availability Zone with Azure portal
Managed disks
Add a managed disk in Availability Zones with CLI
Add a managed disk in Availability Zones with PowerShell
Virtual machine scale sets
Create a scale set in an Availability Zone
Load Balancer
What is Load Balancer?
Load Balancer Standard and Availability Zones
Create a zone redundant public Standard Load Balancer
Create a zone redundant public Standard Load Balancer (PowerShell)
Create a zone redundant public Standard Load Balancer (CLI)
Create a zonal public Standard Load Balancer
Create a zonal public Standard Load Balancer (PowerShell)
Create a zonal public Standard Load Balancer (CLI)
Load balance VMs across availability zones
Load balance VMs across availability zones with Azure (CLI)
Public IP address
SQL Database
Availability zones with SQL Database general purpose tier
Availability zones with SQL Database premium & business critical tiers
Storage
Zone-redundant storage
Event Hubs
Event Hubs geo-disaster recovery
Service Bus
Service Bus geo-disaster recovery
VPN Gateway
Create a zone-redundant virtual network gateway
ExpressRoute
Create a zone-redundant virtual network gateway
Application Gateway v2
Autoscaling and Zone-redundant Application Gateway v2
Identity
Create an Azure Active Directory Domain Services instance
Edge Zones Documentation
What are Edge Zones?
Azure Orbital Documentation
What is Azure Orbital?
Disaster Recovery
Use Azure Site Recovery
Azure Backup
Use Azure Backup
Resources
Azure Roadmap
Azure Regions
Regions and Availability Zones in Azure
4/9/2021 • 8 minutes to read

Microsoft Azure services are available globally to drive your cloud operations at an optimal level. You can
choose the best region for your needs based on technical and regulatory considerations: service capabilities,
data residency, compliance requirements, and latency.

Terminology
To better understand regions and Availability Zones in Azure, it helps to understand key terms or concepts.

region: A set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.

geography: An area of the world containing at least one Azure region. Geographies define a discrete market that preserves data-residency and compliance boundaries, allowing customers with specific data-residency and compliance needs to keep their data and applications close. Geographies are fault-tolerant to withstand complete region failure through their connection to Azure's dedicated high-capacity networking infrastructure.

Availability Zone: Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking.

recommended region: A region that provides the broadest range of service capabilities and is designed to support Availability Zones now or in the future. These are designated in the Azure portal as Recommended.

alternate (other) region: A region that extends Azure's footprint within a data residency boundary where a recommended region also exists. Alternate regions help optimize latency and provide a second region for disaster recovery needs. They are not designed to support Availability Zones, although Azure regularly assesses these regions to determine whether they should become recommended regions. These are designated in the Azure portal as Other.

foundational service: A core Azure service that is available in all regions when the region is generally available.

mainstream service: An Azure service that is available in all recommended regions within 90 days of the region becoming generally available, and demand-driven in alternate regions.

specialized service: An Azure service whose availability across regions is demand-driven and that is backed by customized/specialized hardware.

regional service: An Azure service that is deployed regionally and lets the customer specify the region into which the service will be deployed. For a complete list, see Products available by region.

non-regional service: An Azure service that has no dependency on a specific Azure region. Non-regional services are deployed to two or more regions; if there is a regional failure, the instance of the service in another region continues serving customers. For a complete list, see Products available by region.

Regions
A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated
regional low-latency network. Azure gives you the flexibility to deploy applications where you need to, including
across multiple regions to deliver cross-region resiliency. For more information, see Overview of the resiliency
pillar.

Availability Zones
An Availability Zone is a high-availability offering that protects your applications and data from datacenter
failures. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or
more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a
minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a
region protects applications and data from datacenter failures. Zone-redundant services replicate your
applications and data across Availability Zones to protect from single points of failure. With Availability Zones,
Azure offers an industry-best 99.99% VM uptime SLA. The full Azure SLA explains the guaranteed availability of
Azure as a whole.
An Availability Zone in an Azure region is a combination of a fault domain and an update domain. For example,
if you create three or more VMs across three zones in an Azure region, your VMs are effectively distributed
across three fault domains and three update domains. The Azure platform recognizes this distribution across
update domains to make sure that VMs in different zones are not scheduled to be updated at the same time.
Build high-availability into your application architecture by co-locating your compute, storage, networking, and
data resources within a zone and replicating in other zones. Azure services that support Availability Zones fall
into two categories:
Zonal services – where a resource is pinned to a specific zone (for example, virtual machines, managed
disks, standard IP addresses), or
Zone-redundant services – where the Azure platform replicates automatically across zones (for example,
zone-redundant storage, SQL Database).
To achieve comprehensive business continuity on Azure, build your application architecture using the
combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications
and data using Availability Zones within an Azure region for high-availability and asynchronously replicate
across Azure regions for disaster recovery protection.
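As a concrete sketch of the zonal-deployment idea above, the loop below pins one VM to each of a region's three zones. The resource group name, image, and VM names are illustrative assumptions, not values from this article:

```shell
# Hypothetical sketch: spread three VMs across the three zones of a region
# so an outage in any single zone leaves two VMs running.
# Assumes an existing resource group named myResourceGroup in a
# zone-enabled region.
for zone in 1 2 3; do
  az vm create \
    --resource-group myResourceGroup \
    --name "myVM$zone" \
    --image UbuntuLTS \
    --zone "$zone" \
    --generate-ssh-keys
done
```

Fronting such a set of VMs with a zone-redundant load balancer, as described later in this document, completes the high-availability picture.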
IMPORTANT
The Availability Zone identifiers (the numbers 1, 2, and 3) are logically mapped to the actual physical zones for each
subscription independently. That means that Availability Zone 1 in a given subscription might refer to a different
physical zone than Availability Zone 1 in a different subscription. As a consequence, don't rely on Availability Zone
IDs across different subscriptions for virtual machine placement.
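This mapping caveat can be pictured with a small, purely illustrative sketch; the physical zone names and per-subscription orderings below are invented:

```shell
# Purely illustrative: logical zone numbers are a per-subscription mapping
# onto physical zones, so "zone 1" can be a different building in two
# different subscriptions.
sub1=(phys-A phys-B phys-C)   # subscription 1: logical zones 1, 2, 3
sub2=(phys-C phys-A phys-B)   # subscription 2: logical zones 1, 2, 3
echo "Logical zone 1 is ${sub1[0]} in subscription 1 but ${sub2[0]} in subscription 2"
```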

Region and service categories


Azure's approach to the availability of Azure services across regions is best described in terms of services made
available in recommended regions and alternate regions.
Recommended region - A region that provides the broadest range of service capabilities and is designed
to support Availability Zones now, or in the future. These are designated in the Azure portal as
Recommended .
Alternate (other) region - A region that extends Azure's footprint within a data residency boundary where
a recommended region also exists. Alternate regions help to optimize latency and provide a second region
for disaster recovery needs. They are not designed to support Availability Zones (although Azure conducts
regular assessment of these regions to determine if they should become recommended regions). These are
designated in the Azure portal as Other .
Comparing region types
Azure services are grouped into three categories: foundational, mainstream, and specialized services. Azure's
general policy on deploying services into any given region is primarily driven by region type, service categories,
and customer demand:
Foundational – Available in all recommended and alternate regions when the region is generally available,
or within 90 days of a new foundational service becoming generally available.
Mainstream – Available in all recommended regions within 90 days of the region general availability;
demand-driven in alternate regions (many are already deployed into a large subset of alternate regions).
Specialized – Targeted service offerings, often industry-focused or backed by customized/specialized
hardware. Demand-driven availability across regions (many are already deployed into a large subset of
recommended regions).
To see which services are deployed in a given region, as well as the future roadmap for preview or general
availability of services in a region, see Products available by region.
If a service offering is not available in a specific region, you can share your interest by contacting your Microsoft
sales representative.

REGION TYPE    NON-REGIONAL   FOUNDATIONAL   MAINSTREAM      SPECIALIZED     AVAILABILITY ZONES   DATA RESIDENCY

Recommended    ✔              ✔              ✔               Demand-driven   ✔                    ✔

Alternate      ✔              ✔              Demand-driven   Demand-driven   N/A                  ✔
Services by category
As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and
specialized. Service categories are assigned at general availability. Often, services start their lifecycle as a
specialized service and as demand and utilization increases may be promoted to mainstream or foundational.
The following table lists services in the foundational and mainstream categories. Note the following
about the table:
Some services are non-regional. For information and a list of non-regional services, see Products available
by region.
Older generation of services or virtual machines are not listed. For more information, see documentation at
Previous generations of virtual machine sizes.
Services are not assigned a category until General Availability (GA). For information, and a list of preview
services, see Products available by region.

FOUNDATIONAL

Storage Accounts
Application Gateway
Azure Backup
Azure Cosmos DB
Azure Data Lake Storage Gen2
Azure ExpressRoute
Azure Public IP
Azure SQL Database
Azure SQL Managed Instance
Disk Storage
Event Hubs
Key Vault
Load Balancer
Service Bus
Service Fabric
Storage: Hot/Cool Blob Storage Tiers
Storage: Managed Disks
Virtual Machine Scale Sets
Virtual Machines
Virtual Machines: Azure Dedicated Host
Virtual Machines: Av2-Series
Virtual Machines: Bs-Series
Virtual Machines: DSv2-Series
Virtual Machines: DSv3-Series
Virtual Machines: Dv2-Series
Virtual Machines: Dv3-Series
Virtual Machines: ESv3-Series
Virtual Machines: Ev3-Series
Virtual Network
VPN Gateway

MAINSTREAM

API Management
App Configuration
App Service
Automation
Azure Active Directory Domain Services
Azure Bastion
Azure Cache for Redis
Azure Cognitive Services
Azure Cognitive Services: Computer Vision
Azure Cognitive Services: Content Moderator
Azure Cognitive Services: Face
Azure Cognitive Services: Text Analytics
Azure Data Explorer
Azure Database for MySQL
Azure Database for PostgreSQL
Azure DDoS Protection
Azure Firewall
Azure Firewall Manager
Azure Functions
Azure IoT Hub
Azure Kubernetes Service (AKS)
Azure Monitor: Application Insights
Azure Monitor: Log Analytics
Azure Private Link
Azure Site Recovery
Azure Synapse Analytics
Batch
Cloud Services: M-series
Container Instances
Container Registry
Data Factory
Event Grid
HDInsight
Logic Apps
Media Services
Network Watcher
Premium Blob Storage
Premium Files Storage
Virtual Machines: Ddsv4-Series
Virtual Machines: Ddv4-Series
Virtual Machines: Dsv4-Series
Virtual Machines: Dv4-Series
Virtual Machines: Edsv4-Series
Virtual Machines: Edv4-Series
Virtual Machines: Esv4-Series
Virtual Machines: Ev4-Series
Virtual Machines: Fsv2-Series
Virtual Machines: M-Series
Virtual WAN

Specialized Services
The following table lists specialized services.

SPECIALIZED

Azure API for FHIR

Azure Analysis Services

Azure Blockchain Service

Azure Cognitive Services: Anomaly Detector

Azure Cognitive Services: Custom Vision

Azure Cognitive Services: Form Recognizer

Azure Cognitive Services: Immersive Reader

Azure Cognitive Services: Language Understanding



Azure Cognitive Services: Personalizer

Azure Cognitive Services: QnA Maker

Azure Cognitive Services: Speech Services

Azure Data Share

Azure Databricks

Azure Database for MariaDB

Azure Database Migration Service

Azure Dedicated HSM

Azure Digital Twins

Azure Health Bot

Azure HPC Cache

Azure Lab Services

Azure NetApp Files

Azure Red Hat OpenShift

Azure SignalR Service

Azure Spring Cloud

Azure Stream Analytics

Azure Time Series Insights

Azure VMware Solution

Azure VMware Solution by CloudSimple

Spatial Anchors

Storage: Archive Storage

Ultra Disk Storage

Video Indexer

Virtual Machines: DASv4-Series



Virtual Machines: DAv4-Series

Virtual Machines: DCsv2-series

Virtual Machines: EASv4-Series

Virtual Machines: EAv4-Series

Virtual Machines: HBv1-Series

Virtual Machines: HBv2-Series

Virtual Machines: HCv1-Series

Virtual Machines: H-Series

Virtual Machines: LSv2-Series

Virtual Machines: Mv2-Series

Virtual Machines: NCv3-Series

Virtual Machines: NDv2-Series

Virtual Machines: NVv3-Series

Virtual Machines: NVv4-Series

Virtual Machines: SAP HANA on Azure Large Instances

Next steps
Regions that support Availability Zones in Azure
Quickstart templates
Azure Services that support Availability Zones
4/6/2021 • 6 minutes to read

Microsoft Azure global infrastructure is designed and constructed at every layer to deliver the highest levels of
redundancy and resiliency to its customers. Azure infrastructure is composed of geographies, regions, and
Availability Zones, which limit the blast radius of a failure and therefore limit potential impact to customer
applications and data. The Azure Availability Zones construct was developed to provide a software and
networking solution to protect against datacenter failures and to provide increased high availability (HA) to our
customers.
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more
datacenters with independent power, cooling, and networking. The physical separation of Availability Zones
within a region limits the impact to applications and data from zone failures, such as large-scale flooding, major
storms and superstorms, and other events that could disrupt site access, safe passage, extended utilities uptime,
and the availability of resources. Availability Zones and their associated datacenters are designed such that if
one zone is compromised, the services, capacity, and availability are supported by the other Availability Zones in
the region.
All Azure management services are architected to be resilient from region-level failures. In the spectrum of
failures, one or more Availability Zone failures within a region have a smaller failure radius compared to an
entire region failure. Azure can recover from a zone-level failure of management services within a region. Azure
performs critical maintenance one zone at a time within a region, to prevent any failures impacting customer
resources deployed across Availability Zones within a region.

Azure services supporting Availability Zones fall into three categories: zonal, zone-redundant, and non-regional
services. Customer workloads can be categorized to use any of these architecture scenarios to meet requirements
for application performance and durability.
Zonal services – A resource can be deployed to a specific, self-selected Availability Zone to achieve
more stringent latency or performance requirements. Resiliency is self-architected by replicating
applications and data to one or more zones within the region. Resources can be pinned to a specific zone.
For example, virtual machines, managed disks, or standard IP addresses can be pinned to a specific zone,
which allows for increased resilience by having one or more instances of resources spread across zones.
Zone-redundant services – The Azure platform replicates the resource and data across zones. Microsoft
manages the delivery of high availability since Azure automatically replicates and distributes instances
within the region. Zone-redundant storage (ZRS), for example, replicates the data across three zones so
that a zone failure does not impact the high availability of the data.
Non-regional services – Services are always available from Azure geographies and are resilient to
zone-wide outages as well as region-wide outages.
To achieve comprehensive business continuity on Azure, build your application architecture using the
combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications
and data using Availability Zones within an Azure region for high-availability and asynchronously replicate
across Azure regions for disaster recovery protection. To learn more, read building solutions for high availability
using Availability Zones.


Azure regions with Availability Zones


AMERICAS: Brazil South, Canada Central, Central US, East US, East US 2, South Central US, US Gov Virginia, West US 2, West US 3*

EUROPE: France Central, Germany West Central, North Europe, UK South, West Europe

AFRICA: South Africa North*

ASIA PACIFIC: Australia East, Central India*, Japan East, Korea Central*, Southeast Asia

* To learn more about Availability Zones and available services support in these regions, contact your Microsoft
sales or customer representative. For the upcoming regions that will support Availability Zones, see Azure
geographies.

Azure Services supporting Availability Zones


Older generation virtual machines are not listed below. For more information, see previous generations
of virtual machine sizes.
Some services are non-regional; see Regions and Availability Zones in Azure for more information. These
services do not depend on a specific Azure region, making them resilient to zone-wide outages
and region-wide outages. The list of non-regional services can be found at Products available by region.
Zone resilient services
Non-regional services – Services are always available from Azure geographies and are resilient to zone-
wide outages as well as region-wide outages.
Resilient to zone-wide outages
Foundational services

PRODUCTS

Storage Account

Application Gateway (V2)

Azure Backup

Azure Cosmos DB

Azure Data Lake Storage Gen 2

Azure ExpressRoute

Azure Public IP

Azure SQL Database (General Purpose Tier)

Azure SQL Database (Premium & Business Critical Tier)

Disk Storage

Event Hubs

Key Vault

Load Balancer

Service Bus

Service Fabric

Storage: Hot/Cool Blob Storage Tiers

Storage: Managed Disks

Virtual Machine Scale Sets

Virtual Machines

Virtual Machines: Av2-Series

Virtual Machines: Bs-Series

Virtual Machines: DSv2-Series

Virtual Machines: DSv3-Series

Virtual Machines: Dv2-Series

Virtual Machines: Dv3-Series

Virtual Machines: ESv3-Series

Virtual Machines: Ev3-Series

Virtual Machines: F-Series

Virtual Machines: FS-Series

Virtual Network

VPN Gateway

Mainstream services

PRODUCTS

App Service Environments

Azure Active Directory Domain Services

Azure Bastion

Azure Cache for Redis

Azure Cognitive Search

Azure Cognitive Services: Text Analytics

Azure Data Explorer

Azure Database for MySQL – Flexible Server

Azure Database for PostgreSQL – Flexible Server

Azure DDoS Protection

Azure Disk Encryption



Azure Firewall

Azure Firewall Manager

Azure Kubernetes Service (AKS)

Azure Private Link

Azure Site Recovery

Azure SQL: Virtual Machine

Azure Web Application Firewall

Container Registry

Event Grid

Network Watcher

Network Watcher: Traffic Analytics

Power BI Embedded

Premium Blob Storage

Storage: Azure Premium Files

Virtual Machines: Azure Dedicated Host

Virtual Machines: Ddsv4-Series

Virtual Machines: Ddv4-Series

Virtual Machines: Dsv4-Series

Virtual Machines: Dv4-Series

Virtual Machines: Edsv4-Series

Virtual Machines: Edv4-Series

Virtual Machines: Esv4-Series

Virtual Machines: Ev4-Series

Virtual Machines: Fsv2-Series

Virtual Machines: M-Series



Virtual WAN

Virtual WAN: ExpressRoute

Virtual WAN: Point-to-Site VPN Gateway

Virtual WAN: Site-to-Site VPN Gateway

Specialized services

PRODUCTS

Azure Red Hat OpenShift

Cognitive Services: Anomaly Detector

Cognitive Services: Form Recognizer

Storage: Ultra Disk

Non-regional

PRODUCTS

Azure DNS

Azure Active Directory

Azure Advanced Threat Protection

Azure Advisor

Azure Blueprints

Azure Bot Services

Azure Front Door

Azure Defender for IoT

Azure Information Protection

Azure Lighthouse

Azure Managed Applications

Azure Maps

Azure Performance Diagnostics

Azure Policy

Azure Resource Graph

Azure Sentinel

Azure Stack

Azure Stack Edge

Cloud Shell

Content Delivery Network

Cost Management

Customer Lockbox for Microsoft Azure

Intune

Microsoft Azure Peering Service

Microsoft Azure portal

Microsoft Cloud App Security

Microsoft Graph

Security Center

Traffic Manager

Pricing for VMs in Availability Zones


Azure Availability Zones are available with your Azure subscription. For pricing details, see the Bandwidth pricing page.

Get started with Availability Zones


Create a virtual machine
Add a Managed Disk using PowerShell
Create a zone redundant virtual machine scale set
Load balance VMs across zones using a Standard Load Balancer with a zone-redundant frontend
Load balance VMs within a zone using a Standard Load Balancer with a zonal frontend
Zone-redundant storage
SQL Database general purpose tier
Event Hubs geo-disaster recovery
Service Bus geo-disaster recovery
Create a zone-redundant virtual network gateway
Add zone redundant region for Azure Cosmos DB
Getting Started Azure Cache for Redis Availability Zones
Create an Azure Active Directory Domain Services instance
Create an Azure Kubernetes Service (AKS) cluster that uses Availability Zones

Next steps
Regions and Availability Zones in Azure
Create a virtual machine in an availability zone
using Azure CLI
3/10/2021 • 4 minutes to read

This article steps through using the Azure CLI to create a Linux VM in an Azure availability zone. An availability
zone is a physically separate zone in an Azure region. Use availability zones to protect your apps and data from
an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Make sure that you have installed the latest Azure CLI and logged in to an Azure account with az login.

Check VM SKU availability


The availability of VM sizes, or SKUs, may vary by region and zone. To help you plan for the use of Availability
Zones, you can list the available VM SKUs by Azure region and zone. This helps you choose an appropriate VM
size and obtain the desired resiliency across zones. For more information on the different VM types and sizes,
see the VM sizes overview.
You can view the available VM SKUs with the az vm list-skus command. The following example lists available VM
SKUs in the eastus2 region:

az vm list-skus --location eastus2 --output table

The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:

ResourceType     Locations  Name               [...]  Tier      Size     Zones
---------------  ---------  -----------------         --------  -------  -------
virtualMachines eastus2 Standard_DS1_v2 Standard DS1_v2 1,2,3
virtualMachines eastus2 Standard_DS2_v2 Standard DS2_v2 1,2,3
[...]
virtualMachines eastus2 Standard_F1s Standard F1s 1,2,3
virtualMachines eastus2 Standard_F2s Standard F2s 1,2,3
[...]
virtualMachines eastus2 Standard_D2s_v3 Standard D2s_v3 1,2,3
virtualMachines eastus2 Standard_D4s_v3 Standard D4s_v3 1,2,3
[...]
virtualMachines eastus2 Standard_E2_v3 Standard E2_v3 1,2,3
virtualMachines eastus2 Standard_E4_v3 Standard E4_v3 1,2,3
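If you want just the names of SKUs available in all three zones, you can post-process table output like the above. This sketch filters a couple of sample rows with awk; the inlined rows are copied in purely for illustration, and in practice you would pipe the real command's output instead:

```shell
# Print the SKU name (3rd field) for rows whose last field lists zones 1,2,3.
# Sample rows are inlined; normally: az vm list-skus -l eastus2 -o table | awk ...
awk '$NF == "1,2,3" {print $3}' <<'EOF'
virtualMachines eastus2 Standard_DS1_v2 Standard DS1_v2 1,2,3
virtualMachines eastus2 Standard_F1s Standard F1s 1,2
EOF
# → Standard_DS1_v2
```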

Create resource group


Create a resource group with the az group create command.
An Azure resource group is a logical container into which Azure resources are deployed and managed. A
resource group must be created before a virtual machine. In this example, a resource group named
myResourceGroupVM is created in the eastus2 region. East US 2 is one of the Azure regions that supports
availability zones.
az group create --name myResourceGroupVM --location eastus2

The resource group is specified when creating or modifying a VM, as you'll see throughout this article.

Create virtual machine


Create a virtual machine with the az vm create command.
When creating a virtual machine, several options are available such as operating system image, disk sizing, and
administrative credentials. In this example, a virtual machine is created with a name of myVM running Ubuntu
Server. The VM is created in availability zone 1. By default, the VM is created in the Standard_DS1_v2 size.

az vm create \
    --resource-group myResourceGroupVM \
    --name myVM \
    --location eastus2 \
    --image UbuntuLTS \
    --generate-ssh-keys \
    --zone 1

It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the zones value, which indicates the availability zone in which the VM is running.

{
  "fqdns": "",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
  "location": "eastus2",
  "macAddress": "00-0D-3A-23-9A-49",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "52.174.34.95",
  "resourceGroup": "myResourceGroupVM",
  "zones": "1"
}
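If you later want just the zone value without the full JSON, one way (a sketch reusing the resource names above) is to query it directly:

```shell
# Query only the zones value of the VM created above.
az vm show --resource-group myResourceGroupVM --name myVM --query "zones" --output tsv
```

This requires an authenticated Azure CLI session against the subscription where the VM was created.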

Confirm zone for managed disk and IP address


When the VM is deployed in an availability zone, a managed disk for the VM is created in the same availability
zone. By default, a public IP address is also created in that zone. The following examples get information about
these resources.
To verify that the VM's managed disk is in the availability zone, use the az vm show command to return the disk
ID. In this example, the disk ID is stored in a variable that is used in a later step.

osdiskname=$(az vm show -g myResourceGroupVM -n myVM --query "storageProfile.osDisk.name" -o tsv)

Now you can get information about the managed disk:

az disk show --resource-group myResourceGroupVM --name $osdiskname

The output shows that the managed disk is in the same availability zone as the VM:
{
  "creationData": {
    "createOption": "FromImage",
    "imageReference": {
      "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westeurope/Publishers/Canonical/ArtifactTypes/VMImage/Offers/UbuntuServer/Skus/16.04-LTS/Versions/latest",
      "lun": null
    },
    "sourceResourceId": null,
    "sourceUri": null,
    "storageAccountId": null
  },
  "diskSizeGb": 30,
  "encryptionSettings": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/disks/osdisk_761c570dab",
  "location": "eastus2",
  "managedBy": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
  "name": "myVM_osdisk_761c570dab",
  "osType": "Linux",
  "provisioningState": "Succeeded",
  "resourceGroup": "myResourceGroupVM",
  "sku": {
    "name": "Premium_LRS",
    "tier": "Premium"
  },
  "tags": {},
  "timeCreated": "2018-03-05T22:16:06.892752+00:00",
  "type": "Microsoft.Compute/disks",
  "zones": [
    "1"
  ]
}

Use the az vm list-ip-addresses command to return the name of public IP address resource in myVM. In this
example, the name is stored in a variable that is used in a later step.

ipaddressname=$(az vm list-ip-addresses -g myResourceGroupVM -n myVM --query "[].virtualMachine.network.publicIpAddresses[].name" -o tsv)

Now you can get information about the IP address:

az network public-ip show --resource-group myResourceGroupVM --name $ipaddressname

The output shows that the IP address is in the same availability zone as the VM:
{
  "dnsSettings": null,
  "etag": "W/\"b7ad25eb-3191-4c8f-9cec-c5e4a3a37d35\"",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/publicIPAddresses/myVMPublicIP",
  "idleTimeoutInMinutes": 4,
  "ipAddress": "52.174.34.95",
  "ipConfiguration": {
    "etag": null,
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/networkInterfaces/myVMVMNic/ipConfigurations/ipconfigmyVM",
    "name": null,
    "privateIpAddress": null,
    "privateIpAllocationMethod": null,
    "provisioningState": null,
    "publicIpAddress": null,
    "resourceGroup": "myResourceGroupVM",
    "subnet": null
  },
  "location": "eastUS2",
  "name": "myVMPublicIP",
  "provisioningState": "Succeeded",
  "publicIpAddressVersion": "IPv4",
  "publicIpAllocationMethod": "Dynamic",
  "resourceGroup": "myResourceGroupVM",
  "resourceGuid": "8c70a073-09be-4504-0000-000000000000",
  "tags": {},
  "type": "Microsoft.Network/publicIPAddresses",
  "zones": [
    "1"
  ]
}

Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure
VMs.
Create a virtual machine in an availability zone
using Azure PowerShell
3/10/2021 • 4 minutes to read

This article details using Azure PowerShell to create an Azure virtual machine running Windows Server 2016 in
an Azure availability zone. An availability zone is a physically separate zone in an Azure region. Use availability
zones to protect your apps and data from an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.

Sign in to Azure
Sign in to your Azure subscription with the Connect-AzAccount command and follow the on-screen directions.

Connect-AzAccount

Check VM SKU availability


The availability of VM sizes, or SKUs, may vary by region and zone. To help you plan for the use of Availability
Zones, you can list the available VM SKUs by Azure region and zone. This ability makes sure that you choose an
appropriate VM size, and obtain the desired resiliency across zones. For more information on the different VM
types and sizes, see VM Sizes overview.
You can view the available VM SKUs with the Get-AzComputeResourceSku command. The following example
lists available VM SKUs in the eastus2 region:

Get-AzComputeResourceSku | where {$_.Locations.Contains("eastus2")};

The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:

ResourceType Name Location Zones [...]
------------ ---- -------- -----
virtualMachines Standard_DS1_v2 eastus2 {1, 2, 3}
virtualMachines Standard_DS2_v2 eastus2 {1, 2, 3}
[...]
virtualMachines Standard_F1s eastus2 {1, 2, 3}
virtualMachines Standard_F2s eastus2 {1, 2, 3}
[...]
virtualMachines Standard_D2s_v3 eastus2 {1, 2, 3}
virtualMachines Standard_D4s_v3 eastus2 {1, 2, 3}
[...]
virtualMachines Standard_E2_v3 eastus2 {1, 2, 3}
virtualMachines Standard_E4_v3 eastus2 {1, 2, 3}

Create resource group


Create an Azure resource group with New-AzResourceGroup. A resource group is a logical container into which
Azure resources are deployed and managed. In this example, a resource group named myResourceGroup is
created in the eastus2 region.
New-AzResourceGroup -Name myResourceGroup -Location EastUS2

Create networking resources


Create a virtual network, subnet, and a public IP address
These resources are used to provide network connectivity to the virtual machine and connect it to the internet.
Create the IP address in an availability zone (zone 2 in this example). In a later step, you create the VM in the
same zone used to create the IP address.

# Create a subnet configuration


$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name mySubnet -AddressPrefix 192.168.1.0/24

# Create a virtual network


$vnet = New-AzVirtualNetwork -ResourceGroupName myResourceGroup -Location eastus2 `
-Name myVNet -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig

# Create a public IP address in an availability zone and specify a DNS name


$pip = New-AzPublicIpAddress -ResourceGroupName myResourceGroup -Location eastus2 -Zone 2 `
-AllocationMethod Static -IdleTimeoutInMinutes 4 -Name "mypublicdns$(Get-Random)" -Sku Standard

Create a network security group and a network security group rule


The network security group secures the virtual machine using inbound and outbound rules. In this case, an
inbound rule is created for port 3389, which allows incoming remote desktop connections. We also want to
create an inbound rule for port 80, which allows incoming web traffic.

# Create an inbound network security group rule for port 3389


$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp `
-Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
-DestinationPortRange 3389 -Access Allow

# Create an inbound network security group rule for port 80


$nsgRuleWeb = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleWWW -Protocol Tcp `
-Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
-DestinationPortRange 80 -Access Allow

# Create a network security group


$nsg = New-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Location eastus2 `
-Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP,$nsgRuleWeb

Create a network card for the virtual machine


Create a network card with New-AzNetworkInterface for the virtual machine. The network card connects the
virtual machine to a subnet, network security group, and public IP address.

# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName myResourceGroup -Location eastus2 `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

Create virtual machine


Create a virtual machine configuration. This configuration includes the settings that are used when deploying
the virtual machine such as a virtual machine image, size, and authentication configuration. The
Standard_DS1_v2 size in this example is supported in availability zones. This configuration also specifies the
availability zone you set when creating the IP address. When running this step, you are prompted for credentials.
The values that you enter are configured as the user name and password for the virtual machine.

# Define a credential object


$cred = Get-Credential

# Create a virtual machine configuration


$vmConfig = New-AzVMConfig -VMName myVM -VMSize Standard_DS1_v2 -Zone 2 | `
Set-AzVMOperatingSystem -Windows -ComputerName myVM -Credential $cred | `
Set-AzVMSourceImage -PublisherName MicrosoftWindowsServer -Offer WindowsServer `
-Skus 2016-Datacenter -Version latest | Add-AzVMNetworkInterface -Id $nic.Id

Create the virtual machine with New-AzVM.

New-AzVM -ResourceGroupName myResourceGroup -Location eastus2 -VM $vmConfig

Confirm zone for managed disk


You created the VM's IP address resource in the same availability zone as the VM. The managed disk resource
for the VM is created in the same availability zone. You can verify this with Get-AzDisk:

Get-AzDisk -ResourceGroupName myResourceGroup

The output shows that the managed disk is in the same availability zone as the VM:

ResourceGroupName : myResourceGroup
AccountType : PremiumLRS
OwnerId : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
ManagedBy : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
Zones : {2}
TimeCreated : 9/7/2017 6:57:26 PM
OsType : Windows
CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
DiskSizeGB : 127
EncryptionSettings :
ProvisioningState : Succeeded
Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Name : myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Type : Microsoft.Compute/disks
Location : eastus2
Tags : {}

Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure
VMs.
Create a virtual machine in an availability zone
using the Azure portal
3/9/2021 • 2 minutes to read

This article steps through using the Azure portal to create a virtual machine in an Azure availability zone. An
availability zone is a physically separate zone in an Azure region. Use availability zones to protect your apps and
data from an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.

Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.

Create virtual machine


1. Click Create a resource in the upper left-hand corner of the Azure portal.
2. Select Compute, and then select Windows Server 2016 Datacenter.
3. Enter the virtual machine information. The user name and password entered here are used to sign in to the
virtual machine. The password must be at least 12 characters long and meet the defined complexity
requirements. Choose a Location such as East US 2 that supports availability zones. When complete, click
OK.

4. Choose a size for the VM. Select a recommended size, or filter based on features. Confirm the size is
available in the zone you want to use.

5. Under Settings > High availability, select one of the numbered zones from the Availability zone
dropdown, keep the remaining defaults, and click OK.

6. On the summary page, click Create to start the virtual machine deployment.
7. The VM will be pinned to the Azure portal dashboard. Once the deployment has completed, the VM
summary automatically opens.

Confirm zone for managed disk and IP address


When the VM is deployed in an availability zone, a managed disk for the VM is created in the same availability
zone. By default, a public IP address is also created in that zone.
You can confirm the zone settings for these resources in the portal.
1. Click Resource groups and then the name of the resource group for the VM, such as myResourceGroup.
2. Click the name of the Disk resource. The Overview page includes details about the location and
availability zone of the resource.

3. Click the name of the Public IP address resource. The Overview page includes details about the location
and availability zone of the resource.

Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure
VMs.
Add a disk to a Linux VM
3/18/2021 • 6 minutes to read

This article shows you how to attach a persistent disk to your VM so that you can preserve your data - even if
your VM is reprovisioned due to maintenance or resizing.

Attach a new disk to a VM


If you want to add a new, empty data disk on your VM, use the az vm disk attach command with the --new
parameter. If your VM is in an Availability Zone, the disk is automatically created in the same zone as the VM. For
more information, see Overview of Availability Zones. The following example creates a disk named myDataDisk
that is 50 GB in size:

az vm disk attach \
-g myResourceGroup \
--vm-name myVM \
--name myDataDisk \
--new \
--size-gb 50

Attach an existing disk


To attach an existing disk, find the disk ID and pass the ID to the az vm disk attach command. The following
example queries for a disk named myDataDisk in myResourceGroup, then attaches it to the VM named myVM:

diskId=$(az disk show -g myResourceGroup -n myDataDisk --query 'id' -o tsv)

az vm disk attach -g myResourceGroup --vm-name myVM --name $diskId

Format and mount the disk


To partition, format, and mount your new disk so your Linux VM can use it, SSH into your VM. For more
information, see How to use SSH with Linux on Azure. The following example connects to a VM with the public
IP address of 10.123.123.25 with the username azureuser:

ssh azureuser@10.123.123.25

Find the disk


Once connected to your VM, you need to find the disk. In this example, we are using lsblk to list the disks.

lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"

The output is similar to the following example:


sda 0:0:0:0 30G
├─sda1 29.9G /
├─sda14 4M
└─sda15 106M /boot/efi
sdb 1:0:1:0 14G
└─sdb1 14G /mnt
sdc 3:0:0:0 50G

Here, sdc is the disk that we want, because it is 50G. If you aren't sure which disk it is based on size alone, you
can go to the VM page in the portal, select Disks , and check the LUN number for the disk under Data disks .
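The size-based lookup described above can also be scripted. The sketch below runs against a captured, trimmed lsblk listing (the `lsblk_out` contents are an assumption copied from the example output), so it is safe to try anywhere; on a real VM you would pipe the output of lsblk itself:

```shell
# Sketch: find the 50G device with no mount point in a captured lsblk listing
# (columns: NAME, HCTL, SIZE, MOUNTPOINT).
lsblk_out='sda 0:0:0:0 30G
sdb 1:0:1:0 14G
sdc 3:0:0:0 50G'
# Match rows whose SIZE is 50G and whose MOUNTPOINT column is empty.
new_disk=$(echo "$lsblk_out" | awk '$3 == "50G" && $4 == "" {print $1}')
echo "New data disk: /dev/$new_disk"
```

With the example listing this identifies sdc, matching the walkthrough above.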
Format the disk
Format the disk with parted. If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning; if
it is under 2 TiB, you can use either MBR or GPT partitioning.

NOTE
It is recommended that you use the latest version parted that is available for your distro. If the disk size is 2 tebibytes
(TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
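The threshold in the note can be expressed as a quick check. This is only an illustration of the rule (2 TiB = 2048 GiB); it does not inspect a real device, and the 50 GiB value is taken from the data disk created earlier in this article:

```shell
# Sketch: choose a partition table type from a disk size given in GiB.
size_gib=50
if [ "$size_gib" -ge 2048 ]; then   # 2 TiB = 2048 GiB
  choice="GPT required"
else
  choice="MBR or GPT"
fi
echo "$choice"
```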

The following example uses parted on /dev/sdc , which is where the first data disk will typically be on most
VMs. Replace sdc with the correct option for your disk. We are also formatting it using the XFS filesystem.

sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1

Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem. Failure to use
partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
Mount the disk
Now, create a directory to mount the file system using mkdir . The following example creates a directory at
/datadrive :

sudo mkdir /datadrive

Then use mount to mount the filesystem. The following example mounts the /dev/sdc1 partition to the
/datadrive mount point:

sudo mount /dev/sdc1 /datadrive

Persist the mount


To ensure that the drive is remounted automatically after a reboot, it must be added to the /etc/fstab file. It is
also highly recommended that the UUID (Universally Unique Identifier) is used in /etc/fstab to refer to the drive
rather than just the device name (such as /dev/sdc1). If the OS detects a disk error during boot, the remaining
data disks can be assigned different device names; using the UUID avoids the wrong disk being mounted to a
given location. To find the UUID of the new drive, use the blkid utility:

sudo blkid

The output looks similar to the following example:


/dev/sda1: LABEL="cloudimg-rootfs" UUID="11111111-1b1b-1c1c-1d1d-1e1e1e1e1e1e" TYPE="ext4"
PARTUUID="1a1b1c1d-11aa-1234-1a1a1a1a1a1a"
/dev/sda15: LABEL="UEFI" UUID="BCD7-96A6" TYPE="vfat" PARTUUID="1e1g1cg1h-11aa-1234-1u1u1a1a1u1u"
/dev/sdb1: UUID="22222222-2b2b-2c2c-2d2d-2e2e2e2e2e2e" TYPE="ext4" PARTUUID="1a2b3c4d-01"
/dev/sda14: PARTUUID="2e2g2cg2h-11aa-1234-1u1u1a1a1u1u"
/dev/sdc1: UUID="33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="c1c2c3c4-
1234-cdef-asdf3456ghjk"

NOTE
Improperly editing the /etc/fstab file could result in an unbootable system. If unsure, refer to the distribution's
documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file
is created before editing.

Next, open the /etc/fstab file in a text editor as follows:

sudo nano /etc/fstab

In this example, use the UUID value for the /dev/sdc1 device that was created in the previous steps, and the
mountpoint of /datadrive . Add the following line to the end of the /etc/fstab file:

UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,nofail 1 2

In this example, we are using the nano editor, so when you are done editing the file, use Ctrl+O to write the file
and Ctrl+X to exit the editor.
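Rather than copying the UUID by hand, you can also assemble the fstab line from the blkid output. The sketch below works on a captured blkid line using the placeholder UUID from the example output above; appending to /etc/fstab is shown only as a comment, not executed:

```shell
# Hedged sketch: build the /etc/fstab entry from a blkid-style line for /dev/sdc1.
blkid_line='/dev/sdc1: UUID="33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e" TYPE="xfs" PARTLABEL="xfspart"'
# Extract the value between UUID=" and the closing quote.
uuid=$(echo "$blkid_line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
entry="UUID=$uuid /datadrive xfs defaults,nofail 1 2"
echo "$entry"
# On a real VM you would persist it with:  echo "$entry" | sudo tee -a /etc/fstab
```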

NOTE
Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide the
nofail and/or nobootwait fstab options. These options allow a system to boot even if the disk fails to mount at boot time.
Consult your distribution's documentation for more information on these parameters.
The nofail option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time.
Without this option, you may encounter behavior as described in Cannot SSH to Linux VM due to FSTAB errors
The Azure VM Serial Console can be used for console access to your VM if modifying fstab has resulted in a boot failure.
More details are available in the Serial Console documentation.

TRIM/UNMAP support for Linux in Azure


Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. This feature is
primarily useful in standard storage to inform Azure that deleted pages are no longer valid and can be
discarded, and can save money if you create large files and then delete them.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the
recommended approach:
Use the discard mount option in /etc/fstab, for example:

UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,discard 1 2

In some cases, the discard option may have performance implications. Alternatively, you can run the
fstrim command manually from the command line, or add it to your crontab to run regularly:
Ubuntu

sudo apt-get install util-linux


sudo fstrim /datadrive

RHEL/CentOS

sudo yum install util-linux


sudo fstrim /datadrive

Troubleshooting
When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are
adding a disk manually using the az vm disk attach --new command and you specify a LUN ( --lun ) rather than
allowing the Azure platform to determine the appropriate LUN, make sure that a disk already exists, or will exist,
at LUN 0.
Consider the following example showing a snippet of the output from lsscsi :

[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc


[5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd

The two data disks exist at LUN 0 and LUN 1 (the first column in the lsscsi output details
[host:channel:target:lun] ). Both disks should be accessible from within the VM. If you had manually specified
the first disk to be added at LUN 1 and the second disk at LUN 2, you may not see the disks correctly from
within your VM.

NOTE
The Azure host value is 5 in these examples, but this may vary depending on the type of storage you select.

This disk behavior is not an Azure problem, but the way in which the Linux kernel follows the SCSI specifications.
When the Linux kernel scans the SCSI bus for attached devices, a device must be found at LUN 0 in order for the
system to continue scanning for additional devices. As such:
Review the output of lsscsi after adding a data disk to verify that you have a disk at LUN 0.
If your disk does not show up correctly within your VM, verify a disk exists at LUN 0.
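The LUN 0 check can be scripted. This sketch inspects a captured lsscsi listing (the same two-disk example shown above, stored in a variable for illustration); on a real VM you would substitute the output of lsscsi itself:

```shell
# Sketch: verify that some device sits at LUN 0. The bracketed field is
# [host:channel:target:lun], so LUN 0 means the fourth number is 0.
lsscsi_out='[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
[5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd'
if echo "$lsscsi_out" | grep -Eq '^\[[0-9]+:[0-9]+:[0-9]+:0\]'; then
  lun0="present"
else
  lun0="missing"
fi
echo "disk at LUN 0: $lun0"
```

If the check reports missing, the SCSI bus scan stops early and later disks may not appear in the VM, as described above.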

Next steps
To ensure your Linux VM is configured correctly, review the Optimize your Linux machine performance
recommendations.
Expand your storage capacity by adding additional disks and configure RAID for additional performance.
Attach a data disk to a Windows VM with
PowerShell
3/10/2021 • 2 minutes to read

This article shows you how to attach both new and existing disks to a Windows virtual machine by using
PowerShell.
First, review these tips:
The size of the virtual machine controls how many data disks you can attach. For more information, see Sizes
for virtual machines.
To use premium SSDs, you'll need a premium storage-enabled VM type, like the DS-series or GS-series
virtual machine.
This article uses PowerShell within the Azure Cloud Shell, which is constantly updated to the latest version. To
open the Cloud Shell, select Try it from the top of any code block.

Add an empty data disk to a virtual machine


This example shows how to add an empty data disk to an existing virtual machine.
Using managed disks

$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'East US'
$storageType = 'Premium_LRS'
$dataDiskName = $vmName + '_datadisk1'

$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName

$vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName


$vm = Add-AzVMDataDisk -VM $vm -Name $dataDiskName -CreateOption Attach -ManagedDiskId $dataDisk1.Id -Lun 1

Update-AzVM -VM $vm -ResourceGroupName $rgName

Using managed disks in an Availability Zone


To create a disk in an Availability Zone, use New-AzDiskConfig with the -Zone parameter. The following
example creates a disk in zone 1.
$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'East US 2'
$storageType = 'Premium_LRS'
$dataDiskName = $vmName + '_datadisk1'

$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128 `
-Zone 1
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName

$vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName


$vm = Add-AzVMDataDisk -VM $vm -Name $dataDiskName -CreateOption Attach -ManagedDiskId $dataDisk1.Id -Lun 1

Update-AzVM -VM $vm -ResourceGroupName $rgName

Initialize the disk


After you add an empty disk, you'll need to initialize it. To initialize the disk, you can sign in to a VM and use disk
management. If you enabled WinRM and a certificate on the VM when you created it, you can use remote
PowerShell to initialize the disk. You can also use a custom script extension:

$location = "location-name"
$scriptName = "script-name"
$fileName = "script-file-name"
Set-AzVMCustomScriptExtension -ResourceGroupName $rgName -Location $location -VMName $vmName -Name `
$scriptName -TypeHandlerVersion "1.4" -StorageAccountName "mystore1" -StorageAccountKey "primary-key" `
-FileName $fileName -ContainerName "scripts"

The script file can contain code to initialize the disks, for example:

$disks = Get-Disk | Where partitionstyle -eq 'raw' | sort number

$letters = 70..89 | ForEach-Object { [char]$_ }


$count = 0
$labels = "data1","data2"

foreach ($disk in $disks) {


$driveLetter = $letters[$count].ToString()
$disk |
Initialize-Disk -PartitionStyle MBR -PassThru |
New-Partition -UseMaximumSize -DriveLetter $driveLetter |
Format-Volume -FileSystem NTFS -NewFileSystemLabel $labels[$count] -Confirm:$false -Force
$count++
}

Attach an existing data disk to a VM


You can attach an existing managed disk to a VM as a data disk.

$rgName = "myResourceGroup"
$vmName = "myVM"
$location = "East US"
$dataDiskName = "myDisk"
$disk = Get-AzDisk -ResourceGroupName $rgName -DiskName $dataDiskName

$vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName

$vm = Add-AzVMDataDisk -CreateOption Attach -Lun 0 -VM $vm -ManagedDiskId $disk.Id

Update-AzVM -VM $vm -ResourceGroupName $rgName


Next steps
You can also deploy managed disks using templates. For more information, see Using Managed Disks in Azure
Resource Manager Templates or the quickstart template for deploying multiple data disks.
Create a virtual machine scale set that uses
Availability Zones
11/2/2020 • 8 minutes to read

To protect your virtual machine scale sets from datacenter-level failures, you can create a scale set across
Availability Zones. Azure regions that support Availability Zones have a minimum of three separate zones, each
with their own independent power source, network, and cooling. For more information, see Overview of
Availability Zones.

Availability considerations
When you deploy a regional (non-zonal) scale set into one or more zones as of API version 2017-12-01, you
have the following availability options:
Max spreading (platformFaultDomainCount = 1)
Static fixed spreading (platformFaultDomainCount = 5)
Spreading aligned with storage disk fault domains (platformFaultDomainCount = 2 or 3)
With max spreading, the scale set spreads your VMs across as many fault domains as possible within each zone.
This spreading could be across greater or fewer than five fault domains per zone. With static fixed spreading, the
scale set spreads your VMs across exactly five fault domains per zone. If the scale set cannot find five distinct
fault domains per zone to satisfy the allocation request, the request fails.
We recommend deploying with max spreading for most workloads , as this approach provides the best
spreading in most cases. If you need replicas to be spread across distinct hardware isolation units, we
recommend spreading across Availability Zones and utilize max spreading within each zone.

NOTE
With max spreading, you only see one fault domain in the scale set VM instance view and in the instance metadata
regardless of how many fault domains the VMs are spread across. The spreading within each zone is implicit.

Placement groups
When you deploy a scale set, you also have the option to deploy with a single placement group per Availability
Zone, or with multiple per zone. For regional (non-zonal) scale sets, the choice is to have a single placement
group in the region or to have multiple in the region. For most workloads, we recommend multiple placement
groups, which allows for greater scale. In API version 2017-12-01, scale sets default to multiple placement
groups for single-zone and cross-zone scale sets, but they default to single placement group for regional (non-
zonal) scale sets.

NOTE
If you use max spreading, you must use multiple placement groups.

Zone balancing
Finally, for scale sets deployed across multiple zones, you also have the option of choosing "best effort zone
balance" or "strict zone balance". A scale set is considered "balanced" if each zone has the same number of VMs,
or is within one VM of all other zones in the scale set. For example:
A scale set with 2 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced. There is only
one zone with a different VM count and it is only 1 less than the other zones.
A scale set with 1 VM in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered unbalanced. Zone 1 has
2 fewer VMs than zones 2 and 3.
It's possible that VMs in the scale set are successfully created, but extensions on those VMs fail to deploy. These
VMs with extension failures are still counted when determining if a scale set is balanced. For instance, a scale set
with 3 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced even if all extensions failed
in zone 1 and all extensions succeeded in zones 2 and 3.
With best-effort zone balance, the scale set attempts to scale in and out while maintaining balance. However, if
for some reason this is not possible (for example, if one zone goes down, the scale set cannot create a new VM
in that zone), the scale set allows temporary imbalance to successfully scale in or out. On subsequent scale-out
attempts, the scale set adds VMs to zones that need more VMs for the scale set to be balanced. Similarly, on
subsequent scale in attempts, the scale set removes VMs from zones that need fewer VMs for the scale set to be
balanced. With "strict zone balance", the scale set fails any attempts to scale in or out if doing so would cause
unbalance.
To use best-effort zone balance, set zoneBalance to false. This setting is the default in API version 2017-12-01. To
use strict zone balance, set zoneBalance to true.
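The balance rule above can be sketched as a small check: a scale set is balanced when the zone with the most VMs has at most one more VM than the zone with the fewest. The function below is only an illustration of that definition, not anything the platform exposes:

```shell
# Sketch: apply the zone-balance definition to per-zone VM counts.
is_balanced() {
  min=$1; max=$1
  for z in "$@"; do
    [ "$z" -lt "$min" ] && min=$z
    [ "$z" -gt "$max" ] && max=$z
  done
  # Balanced when the spread between zones is at most one VM.
  [ $((max - min)) -le 1 ] && echo balanced || echo unbalanced
}
is_balanced 2 3 3   # the first example above
is_balanced 1 3 3   # the second example above
```

The first call reports balanced and the second unbalanced, matching the two worked examples.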

Single-zone and zone-redundant scale sets


When you deploy a virtual machine scale set, you can choose to use a single Availability Zone in a region, or
multiple zones.
When you create a scale set in a single zone, you control which zone all those VM instances run in, and the scale
set is managed and autoscales only within that zone. A zone-redundant scale set lets you create a single scale
set that spans multiple zones. As VM instances are created, by default they are evenly balanced across zones.
Should an interruption occur in one of the zones, a scale set does not automatically scale out to increase
capacity. A best practice would be to configure autoscale rules based on CPU or memory usage. The autoscale
rules would allow the scale set to respond to a loss of the VM instances in that one zone by scaling out new
instances in the remaining operational zones.
To use Availability Zones, your scale set must be created in a supported Azure region. You can create a scale set
that uses Availability Zones with one of the following methods:
Azure portal
Azure CLI
Azure PowerShell
Azure Resource Manager templates

Use the Azure portal


The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started
article. When you select a supported Azure region, you can create a scale set in one or more available zones, as
shown in the following example:

The scale set and supporting resources, such as the Azure load balancer and public IP address, are created in the
single zone that you specify.

Use the Azure CLI


The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started
article. To use Availability Zones, you must create your scale set in a supported Azure region.
Add the --zones parameter to the az vmss create command and specify which zone to use (such as zone 1, 2,
or 3). The following example creates a single-zone scale set named myScaleSet in zone 1:

az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys \
--zones 1

For a complete example of a single-zone scale set and network resources, see this sample CLI script.
Zone-redundant scale set
To create a zone-redundant scale set, you use a Standard SKU public IP address and load balancer. For enhanced
redundancy, the Standard SKU creates zone-redundant network resources. For more information, see Azure
Load Balancer Standard overview and Standard Load Balancer and Availability Zones.
To create a zone-redundant scale set, specify multiple zones with the --zones parameter. The following example
creates a zone-redundant scale set named myScaleSet across zones 1,2,3:

az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys \
--zones 1 2 3

It takes a few minutes to create and configure all the scale set resources and VMs in the zone(s) that you specify.
For a complete example of a zone-redundant scale set and network resources, see this sample CLI script.

Use Azure PowerShell


To use Availability Zones, you must create your scale set in a supported Azure region. Add the -Zone parameter
to the New-AzVmssConfig command and specify which zone to use (such as zone 1, 2, or 3).
The following example creates a single-zone scale set named myScaleSet in East US 2 zone 1. The Azure
network resources for virtual network, public IP address, and load balancer are automatically created. When
prompted, provide your own desired administrative credentials for the VM instances in the scale set:
New-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS2" `
-VMScaleSetName "myScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicy "Automatic" `
-Zone "1"

Zone-redundant scale set


To create a zone-redundant scale set, specify multiple zones with the -Zone parameter. The following example
creates a zone-redundant scale set named myScaleSet across East US 2 zones 1, 2, 3. The zone-redundant Azure
network resources for virtual network, public IP address, and load balancer are automatically created. When
prompted, provide your own desired administrative credentials for the VM instances in the scale set:

New-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS2" `
-VMScaleSetName "myScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicy "Automatic" `
-Zone "1", "2", "3"

Use Azure Resource Manager templates


The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started
article for Linux or Windows. To use Availability Zones, you must create your scale set in a supported Azure
region. Add the zones property to the Microsoft.Compute/virtualMachineScaleSets resource type in your
template and specify which zone to use (such as zone 1, 2, or 3).
The following example creates a Linux single-zone scale set named myScaleSet in East US 2 zone 1:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "myScaleSet",
"location": "East US 2",
"apiVersion": "2017-12-01",
"zones": ["1"],
"sku": {
"name": "Standard_A1",
"capacity": "2"
},
"properties": {
"upgradePolicy": {
"mode": "Automatic"
},
"virtualMachineProfile": {
"storageProfile": {
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage"
},
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"version": "latest"
}
},
"osProfile": {
"computerNamePrefix": "myvmss",
"adminUsername": "azureuser",
"adminPassword": "P@ssw0rd!"
}
}
}
}

For a complete example of a single-zone scale set and network resources, see this sample Resource Manager
template.
Zone-redundant scale set
To create a zone-redundant scale set, specify multiple values in the zones property for the
Microsoft.Compute/virtualMachineScaleSets resource type. The following example creates a zone-redundant
scale set named myScaleSet across East US 2 zones 1,2,3:

{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "myScaleSet",
  "location": "East US 2",
  "apiVersion": "2017-12-01",
  "zones": [
    "1",
    "2",
    "3"
  ]
}

If you create a public IP address or a load balancer, specify the "sku": { "name": "Standard" } property to create
zone-redundant network resources. You also need to create a network security group and rules to permit
traffic. For more information, see Azure Load Balancer Standard overview and Standard Load Balancer and
Availability Zones.
For a complete example of a zone-redundant scale set and network resources, see this sample Resource
Manager template.
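As a sketch, a Standard-SKU public IP resource in such a template might look like the fragment below. The apiVersion shown and the zone-redundant-by-default behavior of the Standard SKU are assumptions based on the 2017-era API used in the scale set examples above; check the current Microsoft.Network template reference before relying on them:

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "myPublicIP",
  "location": "East US 2",
  "apiVersion": "2017-11-01",
  "sku": {
    "name": "Standard"
  },
  "properties": {
    "publicIPAllocationMethod": "Static"
  }
}
```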

Next steps
Now that you have created a scale set in an Availability Zone, you can learn how to Deploy applications on
virtual machine scale sets or Use autoscale with virtual machine scale sets.
What is Azure Load Balancer?
3/30/2021 • 3 minutes to read • Edit Online

Load balancing refers to evenly distributing load (incoming network traffic) across a group of backend resources
or servers.
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of
contact for clients. Load balancer distributes inbound flows that arrive at the load balancer's front end to
backend pool instances. These flows are according to configured load-balancing rules and health probes. The
backend pool instances can be Azure Virtual Machines or instances in a virtual machine scale set.
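Conceptually, this distribution works like flow hashing: every packet of a given flow carries the same five-tuple (source IP, source port, destination IP, destination port, protocol), so the flow always lands on the same backend instance. A minimal Python sketch of the idea — illustrative only, since Azure's actual hashing algorithm is internal and not exposed:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a flow's five-tuple to one backend instance.

    Illustrative sketch only: Azure's real hash function is internal.
    The property shown is that the same five-tuple always maps to the
    same backend, so packets of one flow stay together.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["myVM1", "myVM2", "myVM3"]
flow = ("203.0.113.7", 50123, "52.1.2.3", 80, "tcp")
# The same flow always maps to the same backend instance.
assert pick_backend(*flow, backends) == pick_backend(*flow, backends)
```

A new source port (a new flow) may hash to a different backend, which is how traffic spreads across the pool.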
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual
network. These connections are accomplished by translating their private IP addresses to public IP addresses.
Public Load Balancers are used to load balance internet traffic to your VMs.
An internal (or private) load balancer is used where private IPs are needed at the frontend only. Internal
load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be
accessed from an on-premises network in a hybrid scenario.

Figure: Balancing multi-tier applications by using both public and internal Load Balancer
For more information on the individual load balancer components, see Azure Load Balancer components.

NOTE
Azure provides a suite of fully managed load-balancing solutions for your scenarios.
If you are looking to do DNS based global routing and do not have requirements for Transport Layer Security (TLS)
protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review Traffic Manager.
If you want to load balance between your servers in a region at the application layer, review Application Gateway.
If you need to optimize global routing of your web traffic and optimize top-tier end-user performance and reliability
through quick global failover, see Front Door.
Your end-to-end scenarios may benefit from combining these solutions as needed. For an Azure load-balancing options
comparison, see Overview of load-balancing options in Azure.

Why use Azure Load Balancer?


With Azure Load Balancer, you can scale your applications and create highly available services. Load balancer
supports both inbound and outbound scenarios. Load balancer provides low latency and high throughput, and
scales up to millions of flows for all TCP and UDP applications.
Key scenarios that you can accomplish using Azure Standard Load Balancer include:
Load balance internal and external traffic to Azure virtual machines.
Increase availability by distributing resources within and across zones.
Configure outbound connectivity for Azure virtual machines.
Use health probes to monitor load-balanced resources.
Employ port forwarding to access virtual machines in a virtual network by public IP address and port.
Enable support for load balancing of IPv6.
Standard load balancer provides multi-dimensional metrics through Azure Monitor. These metrics can be
filtered, grouped, and broken out for a given dimension. They provide current and historic insights into
performance and health of your service. Insights for Azure Load Balancer offers a preconfigured
dashboard with useful visualizations for these metrics. Resource Health is also supported. Review
Standard load balancer diagnostics for more details.
Load balance services on multiple ports, multiple IP addresses, or both.
Move internal and external load balancer resources across Azure regions.
Load balance TCP and UDP flows on all ports simultaneously using HA ports.
Secure by default
Standard load balancer is built on the zero trust network security model.
Standard Load Balancer is secure by default and part of your virtual network. The virtual network is a
private and isolated network.
Standard load balancers and standard public IP addresses are closed to inbound connections unless
opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you don't have
an NSG on a subnet or NIC of your virtual machine resource, traffic isn't allowed to reach this resource.
To learn about NSGs and how to apply them to your scenario, see Network Security Groups.
Basic load balancer is open to the internet by default.
Load balancer doesn't store customer data.

Pricing and SLA


For standard load balancer pricing information, see Load balancer pricing. Basic load balancer is offered at no
charge. See SLA for load balancer. Basic load balancer has no SLA.

What's new?
Subscribe to the RSS feed and view the latest Azure Load Balancer feature updates on the Azure Updates page.

Next steps
See Create a public standard load balancer to get started with using a load balancer.
For more information on Azure Load Balancer limitations and components, see Azure Load Balancer
components and Azure Load Balancer concepts
Load Balancer and Availability Zones
3/5/2021 • 3 minutes to read • Edit Online

Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase
availability throughout your scenario by aligning resources with, and distributing them across, zones. Review this
document to understand these concepts and fundamental scenario design guidance.
A load balancer can be zone-redundant, zonal, or non-zonal. To configure the zone-related properties for your
load balancer, select the appropriate type of frontend.

Zone redundant
In a region with Availability Zones, a Standard Load Balancer can be zone-redundant. This traffic is served by a
single IP address.
A single frontend IP address will survive zone failure. The frontend IP may be used to reach all (non-impacted)
backend pool members no matter the zone. One or more availability zones can fail and the data path survives as
long as one zone in the region remains healthy.
The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in
multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone
failure.

Figure: Zone redundant load balancer

Zonal
You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend. This scenario means
any inbound or outbound flow is served by a single zone in a region. Your frontend shares fate with the health
of the zone. The data path is unaffected by failures in zones other than where it was guaranteed. You can use
zonal frontends to expose an IP address per Availability Zone.
Additionally, the use of zonal frontends directly for load balanced endpoints within each zone is supported. You
can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For
public endpoints, you can integrate them with a DNS load-balancing product like Traffic Manager and use a
single DNS name.

Figure: Zonal load balancer


For a public load balancer frontend, you add a zones parameter to the public IP. This public IP is referenced by
the frontend IP configuration used by the respective rule.
For an internal load balancer frontend, add a zones parameter to the internal load balancer frontend IP
configuration. A zonal frontend guarantees an IP address in a subnet to a specific zone.

Design considerations
Now that you understand the zone related properties for Standard Load Balancer, the following design
considerations might help as you design for high availability.
Tolerance to zone failure
A zone redundant Load Balancer can serve a zonal resource in any zone with one IP address. The IP can
survive one or more zone failures as long as at least one zone remains healthy within the region.
A zonal frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the
zone your deployment is in goes down, your deployment will not survive this failure.
It is recommended you use zone redundant Load Balancer for your production workloads.
Control vs data plane implications
Zone-redundancy doesn't imply hitless data plane or control plane. Zone-redundant flows can use any zone and
your flows will use all healthy zones in a region. In a zone failure, traffic flows using healthy zones aren't
affected.
Traffic flows using a zone at the time of zone failure may be affected but applications can recover. Traffic
continues in the healthy zones within the region upon retransmission when Azure has converged around the
zone failure.
Review Azure cloud design patterns to improve the resiliency of your application to failure scenarios.

Next steps
Learn more about Availability Zones
Learn more about Standard Load Balancer
Learn how to load balance VMs within a zone using a zonal Standard Load Balancer
Learn how to load balance VMs across zones using a zone redundant Standard Load Balancer
Learn about Azure cloud design patterns to improve the resiliency of your application to failure scenarios.
Quickstart: Create a public load balancer to load
balance VMs using the Azure portal
3/30/2021 • 15 minutes to read • Edit Online

Get started with Azure Load Balancer by using the Azure portal to create a public load balancer and three virtual
machines.

Prerequisites
An Azure account with an active subscription. Create an account for free.

Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.

Standard SKU
Basic SKU

NOTE
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure
Load Balancer SKUs.

Figure: Resources created in quickstart.


In this section, you create a load balancer that load balances virtual machines.
When you create a public load balancer, you create a new public IP address that is configured as the frontend
(named as LoadBalancerFrontend by default) for the load balancer.
1. Select Create a resource.
2. In the search box, enter Load balancer. Select Load balancer in the search results.
3. In the Load balancer page, select Create.
4. On the Create load balancer page, enter or select the following information:

Subscription: Select your subscription.
Resource group: Select Create new and enter CreatePubLBQS-rg in the text box.
Name: Enter myLoadBalancer.
Region: Select (Europe) West Europe.
Type: Select Public.
SKU: Leave the default Standard.
Tier: Leave the default Regional.
Public IP address: Select Create new. If you have an existing public IP you would like to use, select Use existing.
Public IP address name: Type myPublicIP in the text box.
Availability zone: Select Zone-redundant to create a resilient load balancer. To create a zonal load balancer, select a specific zone from 1, 2, or 3.
Add a public IPv6 address: Select No. For more information on IPv6 addresses and load balancer, see What is IPv6 for Azure Virtual Network?
Routing preference: Leave the default of Microsoft network. For more information on routing preference, see What is routing preference (preview)?

5. Accept the defaults for the remaining settings, and then select Review + create.
6. In the Review + create tab, select Create.
Create load balancer resources
In this section, you configure:
Load balancer settings for a backend address pool.
A health probe.
A load balancer rule.
Create a backend pool
A backend address pool contains the IP addresses of the virtual (NICs) connected to the load balancer.
Create the backend address pool myBackendPool to include virtual machines for load-balancing internet
traffic.
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings, select Backend pools, then select Add.
3. On the Add a backend pool page, for name, type myBackendPool as the name for your backend
pool, and then select Add.
Create a health probe
The load balancer monitors the status of your app with a health probe.
The health probe adds or removes VMs from the load balancer based on their response to health checks.
Create a health probe named myHealthProbe to monitor the health of the VMs.
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings, select Health probes, then select Add.

Name: Enter myHealthProbe.
Protocol: Select HTTP.
Port: Enter 80.
Interval: Enter 15 for the number of seconds between probe attempts.
Unhealthy threshold: Select 2 for the number of consecutive probe failures that must occur before a VM is considered unhealthy.

3. Leave the rest of the defaults and select OK.
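The interval and unhealthy-threshold settings combine as follows: a VM is pulled out of rotation only after the configured number of consecutive probe failures. A small Python sketch of that logic — a simplified model of the assumed behavior, not the actual probe implementation:

```python
def probe_state(results, unhealthy_threshold=2):
    """Return "healthy" or "unhealthy" for a sequence of probe results.

    results: list of booleans, True = probe succeeded.
    Simplified model: a VM is marked unhealthy after
    `unhealthy_threshold` consecutive failures, and healthy again
    once a probe succeeds.
    """
    state = "healthy"
    consecutive_failures = 0
    for ok in results:
        if ok:
            consecutive_failures = 0
            state = "healthy"
        else:
            consecutive_failures += 1
            if consecutive_failures >= unhealthy_threshold:
                state = "unhealthy"
    return state

# A single failed probe does not remove the VM; two in a row do.
assert probe_state([True, False]) == "healthy"
assert probe_state([True, False, False]) == "unhealthy"
```

With a 15-second interval and a threshold of 2, an unhealthy VM is detected in roughly 30 seconds.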


Create a load balancer rule
A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP
configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination
port are defined in the rule.
In this section, you'll create a load balancer rule:
Named myHTTPRule.
In the frontend named LoadBalancerFrontEnd.
Listening on port 80.
Directing load-balanced traffic to the backend named myBackendPool on port 80.
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings, select Load-balancing rules, then select Add.
3. Use these values to configure the load-balancing rule:

Name: Enter myHTTPRule.
IP Version: Select IPv4.
Frontend IP address: Select LoadBalancerFrontEnd.
Protocol: Select TCP.
Port: Enter 80.
Backend port: Enter 80.
Backend pool: Select myBackendPool.
Health probe: Select myHealthProbe.
Idle timeout (minutes): Move the slider to 15 minutes.
TCP reset: Select Enabled.
Outbound source network address translation (SNAT): Select (Recommended) Use outbound rules to provide backend pool members access to the internet.

4. Leave the rest of the defaults and then select OK.

Create backend servers


In this section, you:
Create a virtual network.
Create three virtual machines for the backend pool of the load balancer.
Install IIS on the virtual machines to test the load balancer.

Create the virtual network


In this section, you'll create a virtual network and subnet.
1. On the upper-left side of the screen, select Create a resource > Networking > Virtual network, or
search for Virtual network in the search box.
2. In Create virtual network, enter or select this information in the Basics tab:

Project details
Subscription: Select your Azure subscription.
Resource group: Select CreatePubLBQS-rg.
Instance details
Name: Enter myVNet.
Region: Select West Europe.

3. Select the IP Addresses tab, or select the Next: IP Addresses button at the bottom of the page.
4. In the IP Addresses tab, enter this information:

IPv4 address space: Enter 10.1.0.0/16.

5. Under Subnet name, select the word default.
6. In Edit subnet, enter this information:

Subnet name: Enter myBackendSubnet.
Subnet address range: Enter 10.1.0.0/24.

7. Select Save.
8. Select the Security tab.
9. Under BastionHost, select Enable. Enter this information:

Bastion name: Enter myBastionHost.
AzureBastionSubnet address space: Enter 10.1.1.0/24.
Public IP address: Select Create new. For Name, enter myBastionIP. Select OK.

10. Select the Review + create tab or select the Review + create button.
11. Select Create.
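The address ranges above can be sanity-checked locally with Python's standard ipaddress module — for example, confirming that both subnets fall inside the virtual network's address space and don't overlap each other:

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")
backend_subnet = ipaddress.ip_network("10.1.0.0/24")   # myBackendSubnet
bastion_subnet = ipaddress.ip_network("10.1.1.0/24")   # AzureBastionSubnet

# Both subnets must be carved out of the virtual network's range...
assert backend_subnet.subnet_of(vnet)
assert bastion_subnet.subnet_of(vnet)
# ...and must not overlap each other.
assert not backend_subnet.overlaps(bastion_subnet)
```

The same check is handy before submitting any template or CLI deployment, since overlapping subnets are rejected by Azure at deployment time.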
Create virtual machines
In this section, you'll create three VMs (myVM1, myVM2, and myVM3) in three different zones (Zone 1, Zone 2,
and Zone 3).
These VMs are added to the backend pool of the load balancer that was created earlier.
1. On the upper-left side of the portal, select Create a resource > Compute > Virtual machine.
2. In Create a virtual machine, type or select the values in the Basics tab:

Project details
Subscription: Select your Azure subscription.
Resource group: Select CreatePubLBQS-rg.
Instance details
Virtual machine name: Enter myVM1.
Region: Select West Europe.
Availability options: Select Availability zones.
Availability zone: Select 1.
Image: Select Windows Server 2019 Datacenter.
Azure Spot instance: Select No.
Size: Choose a VM size or take the default setting.
Administrator account
Username: Enter a username.
Password: Enter a password.
Confirm password: Reenter the password.
Inbound port rules
Public inbound ports: Select None.

3. Select the Networking tab, or select Next: Disks, then Next: Networking.
4. In the Networking tab, select or enter:

Network interface
Virtual network: Select myVNet.
Subnet: Select myBackendSubnet.
Public IP: Select None.
NIC network security group: Select Advanced.
Configure network security group: Select Create new. In Create network security group, enter myNSG in Name. Under Inbound rules, select +Add an inbound rule. Under Destination port ranges, enter 80. Under Priority, enter 100. In Name, enter myHTTPRule. Select Add, then select OK.
Load balancing
Place this virtual machine behind an existing load balancing solution?: Select Yes.
Load balancing settings
Load balancing options: Select Azure load balancing.
Select a load balancer: Select myLoadBalancer.
Select a backend pool: Select myBackendPool.

5. Select the Management tab, or select Next > Management.
6. In the Management tab, select or enter:

Monitoring
Boot diagnostics: Select Off.

7. Select Review + create.
8. Review the settings, and then select Create.
9. Follow steps 1 to 8 to create two additional VMs with the following values and all the other settings
the same as myVM1:

Name: myVM2 and myVM3.
Availability zone: 2 for myVM2, 3 for myVM3.
Network security group: Select the existing myNSG for both.

Create outbound rule configuration


Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
For more information on outbound connections, see Outbound connections in Azure.
Create outbound rule
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings, select Outbound rules, then select Add.
3. Use these values to configure the outbound rules:
Name: Enter myOutboundRule.
Frontend IP address: Select Create new. In Name, enter LoadBalancerFrontEndOutbound. Select IP address or IP prefix. Select Create new under Public IP address or Public IP prefix. For Name, enter myPublicIPOutbound or myPublicIPPrefixOutbound. Select Add.
Idle timeout (minutes): Move the slider to 15 minutes.
TCP Reset: Select Enabled.
Backend pool: Select Create new. Enter myBackendPoolOutbound in Name. Select Add.
Port allocation > Port allocation: Select Manually choose number of outbound ports.
Outbound ports > Choose by: Select Ports per instance.
Outbound ports > Ports per instance: Enter 10000.

4. Select Add.
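Choosing a fixed number of ports per instance has a capacity implication: each frontend IP provides a finite pool of SNAT ports (roughly 64,000 is the figure commonly cited in Azure's outbound-connectivity documentation; treat the exact number as an assumption), so a manual per-instance allocation caps how many backend instances the outbound configuration can serve. A quick back-of-the-envelope check:

```python
def max_backend_instances(ports_per_instance, frontend_ips=1, ports_per_ip=64000):
    """How many backend instances a manual SNAT allocation can cover.

    ports_per_ip is the approximate number of SNAT ports available per
    frontend IP; the exact figure is an assumption here.
    """
    return (frontend_ips * ports_per_ip) // ports_per_instance

# With 10,000 ports per instance and one outbound frontend IP,
# at most 6 instances fit in the outbound pool.
assert max_backend_instances(10000) == 6
```

Adding frontend IPs (or an IP prefix) to the outbound rule multiplies the available port pool accordingly.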
Add virtual machines to outbound pool
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings, select Backend pools.
3. Select myBackendPoolOutbound.
4. In Virtual network, select myVNet.
5. In Virtual machines, select + Add.
6. Check the boxes next to myVM1, myVM2, and myVM3.
7. Select Add.
8. Select Save.

Install IIS
1. Select All services in the left-hand menu, select All resources, and then from the resources list, select
myVM1 in the CreatePubLBQS-rg resource group.
2. On the Overview page, select Connect, then Bastion.
3. Enter the username and password entered during VM creation.
4. Select Connect.
5. On the server desktop, navigate to Windows Administrative Tools > Windows PowerShell.
6. In the PowerShell Window, run the following commands to:
Install the IIS server
Remove the default iisstart.htm file
Add a new iisstart.htm file that displays the name of the VM:

# Install IIS server role
Install-WindowsFeature -name Web-Server -IncludeManagementTools

# Remove default htm file
Remove-Item C:\inetpub\wwwroot\iisstart.htm

# Add a new htm file that displays the server name
Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)

7. Close the Bastion session with myVM1.
8. Repeat steps 1 to 6 to install IIS and the updated iisstart.htm file on myVM2 and myVM3.

Test the load balancer


1. Find the public IP address for the load balancer on the Overview screen. Select All services in the left-
hand menu, select All resources, and then select myPublicIP.
2. Copy the public IP address, and then paste it into the address bar of your browser. The default page of the
IIS web server is displayed in the browser.

To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's
IIS Web server and then force-refresh your web browser from the client machine.
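If you script this check instead of refreshing by hand, a small helper can count which VM answered each request; the page body is "Hello World from <VM name>", as written by the IIS setup script earlier. A sketch of the counting logic (fetching the pages needs your live public IP, so that part is left out):

```python
from collections import Counter

def tally_responses(bodies):
    """Count how many responses each backend VM served.

    bodies: iterable of page bodies like "Hello World from myVM2",
    as produced by the iisstart.htm written during VM setup.
    """
    counts = Counter()
    for body in bodies:
        # The VM name is the last whitespace-separated token.
        counts[body.strip().split()[-1]] += 1
    return counts

sample = [
    "Hello World from myVM1",
    "Hello World from myVM2",
    "Hello World from myVM1",
    "Hello World from myVM3",
]
assert tally_responses(sample) == {"myVM1": 2, "myVM2": 1, "myVM3": 1}
```

Feeding it real responses fetched from the load balancer's public IP shows the traffic spread across all three zones.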

Clean up resources
When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the
resource group CreatePubLBQS-rg that contains the resources and then select Delete.

Next steps
In this quickstart, you:
Created an Azure Standard or Basic load balancer.
Attached three VMs to the load balancer.
Configured the load balancer traffic rule and health probe, and then tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using Azure PowerShell
3/30/2021 • 14 minutes to read • Edit Online

Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and three
virtual machines.

Prerequisites
An Azure account with an active subscription. Create an account for free.
Azure PowerShell installed locally or Azure Cloud Shell
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version
5.4.1 or later. Run Get-Module -ListAvailable Az to find the installed version. If you need to upgrade, see Install
Azure PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to create
a connection with Azure.

Create a resource group


An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with New-AzResourceGroup:

New-AzResourceGroup -Name 'CreatePubLBQS-rg' -Location 'eastus'

Standard SKU
Basic SKU

NOTE
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure
Load Balancer SKUs.
Create a public IP address - Standard
Use New-AzPublicIpAddress to create a public IP address.

$publicip = @{
Name = 'myPublicIP'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'static'
Zone = 1,2,3
}
New-AzPublicIpAddress @publicip

To create a zonal public IP address in zone 1, use the following command:

$publicip = @{
Name = 'myPublicIP'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'static'
Zone = 1
}
New-AzPublicIpAddress @publicip

Create standard load balancer


This section details how you can create and configure the following components of the load balancer:
Create a front-end IP with New-AzLoadBalancerFrontendIpConfig for the frontend IP pool. This IP
receives the incoming traffic on the load balancer
Create a back-end address pool with New-AzLoadBalancerBackendAddressPoolConfig for traffic sent
from the frontend of the load balancer. This pool is where your backend virtual machines are deployed.
Create a health probe with Add-AzLoadBalancerProbeConfig that determines the health of the backend
VM instances.
Create a load balancer rule with Add-AzLoadBalancerRuleConfig that defines how traffic is distributed to
the VMs.
Create a public load balancer with New-AzLoadBalancer.

## Place public IP created in previous steps into variable. ##


$publicIp = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'CreatePubLBQS-rg'

## Create load balancer frontend configuration and place in variable. ##


$feip = New-AzLoadBalancerFrontendIpConfig -Name 'myFrontEnd' -PublicIpAddress $publicIp

## Create backend address pool configuration and place in variable. ##


$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'

## Create the health probe and place in variable. ##


$probe = @{
Name = 'myHealthProbe'
Protocol = 'http'
Port = '80'
IntervalInSeconds = '360'
ProbeCount = '5'
RequestPath = '/'
}
$healthprobe = New-AzLoadBalancerProbeConfig @probe

## Create the load balancer rule and place in variable. ##


$lbrule = @{
Name = 'myHTTPRule'
Protocol = 'tcp'
FrontendPort = '80'
BackendPort = '80'
IdleTimeoutInMinutes = '15'
FrontendIpConfiguration = $feip
BackendAddressPool = $bePool
}
$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset -DisableOutboundSNAT

## Create the load balancer resource. ##


$loadbalancer = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myLoadBalancer'
Location = 'eastus'
Sku = 'Standard'
FrontendIpConfiguration = $feip
BackendAddressPool = $bePool
LoadBalancingRule = $rule
Probe = $healthprobe
}
New-AzLoadBalancer @loadbalancer

Configure virtual network - Standard


Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network for the backend virtual machines.
Create a network security group to define inbound connections to your virtual network.
Create virtual network, network security group, and bastion host
Create a virtual network with New-AzVirtualNetwork.
Create a network security group rule with New-AzNetworkSecurityRuleConfig.
Create an Azure Bastion host with New-AzBastion.
Create a network security group with New-AzNetworkSecurityGroup.

## Create backend subnet config ##


$subnet = @{
Name = 'myBackendSubnet'
AddressPrefix = '10.1.0.0/24'
}
$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet

## Create Azure Bastion subnet. ##


$bastsubnet = @{
Name = 'AzureBastionSubnet'
AddressPrefix = '10.1.1.0/24'
}
$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet

## Create the virtual network ##


$net = @{
Name = 'myVNet'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
AddressPrefix = '10.1.0.0/16'
Subnet = $subnetConfig,$bastsubnetConfig
}
$vnet = New-AzVirtualNetwork @net

## Create public IP address for bastion host. ##


$ip = @{
Name = 'myBastionIP'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'Static'
}
$publicip = New-AzPublicIpAddress @ip

## Create bastion host ##


$bastion = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myBastion'
PublicIpAddress = $publicip
VirtualNetwork = $vnet
}
New-AzBastion @bastion -AsJob

## Create rule for network security group and place in variable. ##


$nsgrule = @{
Name = 'myNSGRuleHTTP'
Description = 'Allow HTTP'
Protocol = '*'
SourcePortRange = '*'
DestinationPortRange = '80'
SourceAddressPrefix = 'Internet'
DestinationAddressPrefix = '*'
Access = 'Allow'
Priority = '2000'
Direction = 'Inbound'
}
$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
## Create network security group ##
$nsg = @{
Name = 'myNSG'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
SecurityRules = $rule1
}
New-AzNetworkSecurityGroup @nsg

Create virtual machines - Standard


In this section, you'll create the three virtual machines for the backend pool of the load balancer.
Create three network interfaces with New-AzNetworkInterface.
Set an administrator username and password for the VMs with Get-Credential.
Create the virtual machines with:
New-AzVM
New-AzVMConfig
Set-AzVMOperatingSystem
Set-AzVMSourceImage
Add-AzVMNetworkInterface
## Set the administrator username and password for the VMs. ##
$cred = Get-Credential

## Place the virtual network into a variable. ##


$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePubLBQS-rg'

## Place the load balancer into a variable. ##


$lb = @{
Name = 'myLoadBalancer'
ResourceGroupName = 'CreatePubLBQS-rg'
}
$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig

## Place the network security group into a variable. ##


$nsg = Get-AzNetworkSecurityGroup -Name 'myNSG' -ResourceGroupName 'CreatePubLBQS-rg'

## For loop with variable to create virtual machines for load balancer backend pool. ##
for ($i=1; $i -le 3; $i++)
{
## Command to create network interface for VMs ##
$nic = @{
Name = "myNicVM$i"
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Subnet = $vnet.Subnets[0]
NetworkSecurityGroup = $nsg
LoadBalancerBackendAddressPool = $bepool
}
$nicVM = New-AzNetworkInterface @nic

## Create a virtual machine configuration for VMs ##


$vmsz = @{
VMName = "myVM$i"
VMSize = 'Standard_DS1_v2'
}
$vmos = @{
ComputerName = "myVM$i"
Credential = $cred
}
$vmimage = @{
PublisherName = 'MicrosoftWindowsServer'
Offer = 'WindowsServer'
Skus = '2019-Datacenter'
Version = 'latest'
}
$vmConfig = New-AzVMConfig @vmsz `
| Set-AzVMOperatingSystem @vmos -Windows `
| Set-AzVMSourceImage @vmimage `
| Add-AzVMNetworkInterface -Id $nicVM.Id

## Create the virtual machine for VMs ##


$vm = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
VM = $vmConfig
Zone = "$i"
}
New-AzVM @vm -AsJob
}

The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status
of the jobs, use Get-Job:
Get-Job

Id Name PSJobTypeName State HasMoreData Location Command


-- ---- ------------- ----- ----------- -------- -------
1 Long Running O… AzureLongRunni… Completed True localhost New-AzBastion
2 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM

Create outbound rule configuration


Load balancer outbound rules configure outbound source network address translation (SNAT) for VMs in the
backend pool.
For more information on outbound connections, see Outbound connections in Azure.
Create outbound public IP address
Use New-AzPublicIpAddress to create a standard zone redundant public IP address named
myPublicIPOutbound .

$publicipout = @{
Name = 'myPublicIPOutbound'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'static'
Zone = 1,2,3
}
New-AzPublicIpAddress @publicipout

To create a zonal public IP address in zone 1, use the following command:

$publicipout = @{
Name = 'myPublicIPOutbound'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'static'
Zone = 1
}
New-AzPublicIpAddress @publicipout

Create outbound configuration


Create a new frontend IP configuration with Add-AzLoadBalancerFrontendIpConfig.
Create a new outbound backend address pool with Add-AzLoadBalancerBackendAddressPoolConfig.
Apply the pool and frontend IP address to the load balancer with Set-AzLoadBalancer.
Create a new outbound rule for the outbound backend pool with Add-AzLoadBalancerOutboundRuleConfig.
## Place public IP created in previous steps into variable. ##
$pubip = @{
Name = 'myPublicIPOutbound'
ResourceGroupName = 'CreatePubLBQS-rg'
}
$publicIp = Get-AzPublicIpAddress @pubip

## Get the load balancer configuration ##


$lbc = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myLoadBalancer'
}
$lb = Get-AzLoadBalancer @lbc

## Create the frontend configuration ##


$fe = @{
Name = 'myFrontEndOutbound'
PublicIPAddress = $publicIP
}
$lb | Add-AzLoadBalancerFrontendIPConfig @fe | Set-AzLoadBalancer

## Create the outbound backend address pool ##


$be = @{
Name = 'myBackEndPoolOutbound'
}
$lb | Add-AzLoadBalancerBackendAddressPoolConfig @be | Set-AzLoadBalancer

## Apply the outbound rule configuration to the load balancer. ##


$rule = @{
Name = 'myOutboundRule'
AllocatedOutboundPort = '10000'
Protocol = 'All'
IdleTimeoutInMinutes = '15'
FrontendIPConfiguration = $lb.FrontendIpConfigurations[1]
BackendAddressPool = $lb.BackendAddressPools[1]
}
$lb | Add-AzLoadBalancerOutBoundRuleConfig @rule | Set-AzLoadBalancer

Add virtual machines to outbound pool


Add the virtual machine network interfaces to the outbound pool of the load balancer with Set-AzNetworkInterfaceIpConfig:
## Get the load balancer configuration ##
$lbc = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myLoadBalancer'
}
$lb = Get-AzLoadBalancer @lbc

## For loop with variable to add virtual machines to backend outbound pool. ##
for ($i=1; $i -le 3; $i++)
{
$nic = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = "myNicVM$i"
}
$nicvm = Get-AzNetworkInterface @nic

## Apply the backend to the network interface ##


$be = @{
Name = 'ipconfig1'
LoadBalancerBackendAddressPoolId = $lb.BackendAddressPools[0].id,$lb.BackendAddressPools[1].id
}
$nicvm | Set-AzNetworkInterfaceIpConfig @be | Set-AzNetworkInterface
}

Install IIS
Use Set-AzVMExtension to install the Custom Script Extension.
The extension runs PowerShell Add-WindowsFeature Web-Server to install the IIS webserver and then updates the
Default.htm page to show the hostname of the VM:

IMPORTANT
Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use Get-Job to
check the status of the virtual machine deployment jobs.

## For loop with variable to install custom script extension on virtual machines. ##
for ($i=1; $i -le 3; $i++)
{
$ext = @{
Publisher = 'Microsoft.Compute'
ExtensionType = 'CustomScriptExtension'
ExtensionName = 'IIS'
ResourceGroupName = 'CreatePubLBQS-rg'
VMName = "myVM$i"
Location = 'eastus'
TypeHandlerVersion = '1.8'
SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
}
Set-AzVMExtension @ext -AsJob
}

The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use Get-Job:
Get-Job

Id Name            PSJobTypeName   State   HasMoreData Location  Command
-- ----            -------------   -----   ----------- --------  -------
8  Long Running O… AzureLongRunni… Running True        localhost Set-AzVMExtension
9  Long Running O… AzureLongRunni… Running True        localhost Set-AzVMExtension
10 Long Running O… AzureLongRunni… Running True        localhost Set-AzVMExtension

Test the load balancer


Use Get-AzPublicIpAddress to get the public IP address of the load balancer:

$ip = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myPublicIP'
}
Get-AzPublicIPAddress @ip | select IpAddress

Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS Web
server is displayed on the browser.

To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's
IIS Web server and then force-refresh your web browser from the client machine.

Clean up resources
When no longer needed, you can use the Remove-AzResourceGroup command to remove the resource group,
load balancer, and the remaining resources.

Remove-AzResourceGroup -Name 'CreatePubLBQS-rg'

Next steps
In this quickstart:
You created a standard or basic public load balancer.
Attached virtual machines.
Configured the load balancer traffic rule and health probe.
Tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using Azure CLI
3/30/2021 • 15 minutes to read

Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and three virtual
machines.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell.

If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version
is already installed.

Create a resource group


An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with az group create:
Named CreatePubLBQS-rg .
In the eastus location.

az group create \
--name CreatePubLBQS-rg \
--location eastus

Standard SKU
Basic SKU

NOTE
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure Load Balancer SKUs.
Configure virtual network - Standard
Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network
Create a virtual network using az network vnet create:
Named myVNet .
Address prefix of 10.1.0.0/16 .
Subnet named myBackendSubnet .
Subnet prefix of 10.1.0.0/24 .
In the CreatePubLBQS-rg resource group.
Location of eastus .

az network vnet create \


--resource-group CreatePubLBQS-rg \
--location eastus \
--name myVNet \
--address-prefixes 10.1.0.0/16 \
--subnet-name myBackendSubnet \
--subnet-prefixes 10.1.0.0/24
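As a quick local sanity check on the addressing above (pure shell arithmetic, no Azure calls; the to_int helper is just for illustration), you can confirm that both the backend subnet and the AzureBastionSubnet created in the next steps fall inside the VNet's 10.1.0.0/16 range:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }

vnet=$(to_int 10.1.0.0)
mask=$(( 0xFFFF0000 ))   # /16 network mask

# An address is inside the VNet when its top 16 bits match the VNet's.
for net in 10.1.0.0 10.1.1.0; do
  if [ $(( $(to_int "$net") & mask )) -eq $(( vnet & mask )) ]; then
    echo "$net/24 is inside 10.1.0.0/16"
  fi
done
```

Azure rejects subnets outside the VNet's address space at creation time, so this is only a way to reason about the prefixes before running the commands.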

Create a public IP address


Use az network public-ip create to create a public IP address for the bastion host:
Create a standard zone-redundant public IP address named myBastionIP.
In CreatePubLBQS-rg.

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myBastionIP \
--sku Standard
Create a bastion subnet
Use az network vnet subnet create to create a bastion subnet:
Named AzureBastionSubnet .
Address prefix of 10.1.1.0/24 .
In virtual network myVNet .
In resource group CreatePubLBQS-rg .

az network vnet subnet create \


--resource-group CreatePubLBQS-rg \
--name AzureBastionSubnet \
--vnet-name myVNet \
--address-prefixes 10.1.1.0/24

Create bastion host


Use az network bastion create to create a bastion host:
Named myBastionHost .
In CreatePubLBQS-rg .
Associated with public IP myBastionIP .
Associated with virtual network myVNet .
In eastus location.

az network bastion create \


--resource-group CreatePubLBQS-rg \
--name myBastionHost \
--public-ip-address myBastionIP \
--vnet-name myVNet \
--location eastus

It can take a few minutes for the Azure Bastion host to deploy.
Create a network security group
For a standard load balancer, the VMs in the backend pool are required to have network interfaces that
belong to a network security group.
Create a network security group using az network nsg create:
Named myNSG .
In resource group CreatePubLBQS-rg .

az network nsg create \


--resource-group CreatePubLBQS-rg \
--name myNSG

Create a network security group rule


Create a network security group rule using az network nsg rule create:
Named myNSGRuleHTTP .
In the network security group you created in the previous step, myNSG .
In resource group CreatePubLBQS-rg .
Protocol (*).
Direction Inbound.
Source (*).
Destination (*).
Destination port 80.
Access Allow.
Priority 200.

az network nsg rule create \


--resource-group CreatePubLBQS-rg \
--nsg-name myNSG \
--name myNSGRuleHTTP \
--protocol '*' \
--direction inbound \
--source-address-prefix '*' \
--source-port-range '*' \
--destination-address-prefix '*' \
--destination-port-range 80 \
--access allow \
--priority 200

Create backend servers - Standard


In this section, you create:
Three network interfaces for the virtual machines.
Three virtual machines to be used as backend servers for the load balancer.
Create network interfaces for the virtual machines
Create three network interfaces with az network nic create:
Named myNicVM1 , myNicVM2 , and myNicVM3 .
In resource group CreatePubLBQS-rg .
In virtual network myVNet .
In subnet myBackendSubnet .
In network security group myNSG .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic create \
--resource-group CreatePubLBQS-rg \
--name $vmnic \
--vnet-name myVNet \
--subnet myBackendSubnet \
--network-security-group myNSG
done
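The same array-and-loop pattern recurs later for the backend pools and the IIS extension. If you'd rather derive the name list from a prefix and a count than hard-code it, one possible local sketch (the variable names are illustrative, not part of the quickstart):

```shell
prefix="myNicVM"
count=3

# Build myNicVM1..myNicVM3 from the prefix and the count.
array=()
for (( i = 1; i <= count; i++ )); do
  array+=( "${prefix}${i}" )
done

printf '%s\n' "${array[@]}"
```

Changing count is then enough to scale every loop in the quickstart consistently.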

Create virtual machines


Create the virtual machines with az vm create:
VM1
Named myVM1 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM1 .
Virtual machine image win2019datacenter .
In Zone 1 .
az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM1 \
--nics myNicVM1 \
--image win2019datacenter \
--admin-username azureuser \
--zone 1 \
--no-wait

VM2
Named myVM2 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM2 .
Virtual machine image win2019datacenter .
In Zone 2 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM2 \
--nics myNicVM2 \
--image win2019datacenter \
--admin-username azureuser \
--zone 2 \
--no-wait

VM3
Named myVM3 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM3 .
Virtual machine image win2019datacenter .
In Zone 3 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM3 \
--nics myNicVM3 \
--image win2019datacenter \
--admin-username azureuser \
--zone 3 \
--no-wait

It may take a few minutes for the VMs to deploy.

Create a public IP address - Standard


To access your web app on the Internet, you need a public IP address for the load balancer.
Use az network public-ip create to:
Create a standard zone-redundant public IP address named myPublicIP.
In CreatePubLBQS-rg.
az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard

To create a zonal public IP address in Zone 1:

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard \
--zone 1

Create standard load balancer


This section details how you can create and configure the following components of the load balancer:
A frontend IP pool that receives the incoming network traffic on the load balancer.
A backend IP pool where the frontend pool sends the load balanced network traffic.
A health probe that determines health of the backend VM instances.
A load balancer rule that defines how traffic is distributed to the VMs.
Create the load balancer resource
Create a public load balancer with az network lb create:
Named myLoadBalancer .
A frontend pool named myFrontEnd .
A backend pool named myBackEndPool .
Associated with the public IP address myPublicIP that you created in the preceding step.

az network lb create \
--resource-group CreatePubLBQS-rg \
--name myLoadBalancer \
--sku Standard \
--public-ip-address myPublicIP \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool

Create the health probe


A health probe checks all virtual machine instances to ensure they can send network traffic.
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added
back into the load balancer when the failure is resolved.
Create a health probe with az network lb probe create:
Monitors the health of the virtual machines.
Named myHealthProbe .
Protocol TCP .
Monitoring port 80.
az network lb probe create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHealthProbe \
--protocol tcp \
--port 80

Create the load balancer rule


A load balancer rule defines:
Frontend IP configuration for the incoming traffic.
The backend IP pool to receive the traffic.
The required source and destination port.
Create a load balancer rule with az network lb rule create:
Named myHTTPRule.
Listening on port 80 in the frontend pool myFrontEnd.
Sending load-balanced network traffic to the backend address pool myBackEndPool using port 80.
Using health probe myHealthProbe .
Protocol TCP .
Idle timeout of 15 minutes .
Enable TCP reset.

az network lb rule create \


--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHTTPRule \
--protocol tcp \
--frontend-port 80 \
--backend-port 80 \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool \
--probe-name myHealthProbe \
--disable-outbound-snat true \
--idle-timeout 15 \
--enable-tcp-reset true

Add virtual machines to load balancer backend pool


Add the virtual machines to the backend pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPool .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackEndPool \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done
Create outbound rule configuration
Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
For more information on outbound connections, see Outbound connections in Azure.
A public IP or prefix can be used for the outbound configuration.
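Before choosing how many ports to allocate, it can help to sanity-check the SNAT budget. The outbound rule created later in this section gives 10000 ports to each of the three backend VMs; assuming roughly 64,000 usable SNAT ports per frontend IP (the documented approximation) and that the allocation is per backend instance, a quick local arithmetic sketch shows the allocation fits:

```shell
# Back-of-the-envelope SNAT budget check: plain arithmetic, no Azure calls.
# Assumptions: ~64,000 usable SNAT ports per frontend IP, and the outbound
# port count applies per backend instance, as in the portal's
# "Ports per instance" setting.
frontend_ips=1
ports_per_ip=64000
vms=3
ports_per_vm=10000

needed=$(( vms * ports_per_vm ))
available=$(( frontend_ips * ports_per_ip ))
echo "needed=$needed available=$available"

if [ "$needed" -le "$available" ]; then
  echo "allocation fits"
else
  echo "reduce ports per instance or add frontend IPs"
fi
```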
Public IP
Use az network public-ip create to create a single IP for the outbound connectivity.
Named myPublicIPOutbound .
In CreatePubLBQS-rg .

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard

To create a zonal public IP address in Zone 1:

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard \
--zone 1

Public IP Prefix
Use az network public-ip prefix create to create a public IP prefix for the outbound connectivity.
Named myPublicIPPrefixOutbound .
In CreatePubLBQS-rg .
Prefix length of 28 .

az network public-ip prefix create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28

To create a zonal public IP prefix in Zone 1:

az network public-ip prefix create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28 \
--zone 1

For more information on scaling outbound NAT and outbound connectivity, see Scale outbound NAT with
multiple IP addresses.
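To see why a prefix helps with scale: the /28 prefix created above contains 16 addresses, and each public IP contributes roughly 64,000 usable SNAT ports (a documented approximation). A quick arithmetic sketch:

```shell
prefix_len=28
ips=$(( 2 ** (32 - prefix_len) ))   # a /28 holds 16 addresses
ports_per_ip=64000                  # approximate usable SNAT ports per IP
total_ports=$(( ips * ports_per_ip ))

echo "IPs=$ips total_SNAT_ports=$total_ports"
```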
Create outbound frontend IP configuration
Create a new frontend IP configuration with az network lb frontend-ip create :
Select the public IP or public IP prefix commands based on the decision in the previous step.
Public IP
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP address myPublicIPOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \


--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-address myPublicIPOutbound

Public IP prefix
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP prefix myPublicIPPrefixOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \


--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-prefix myPublicIPPrefixOutbound

Create outbound pool


Create a new outbound pool with az network lb address-pool create:
Named myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

az network lb address-pool create \


--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myBackendPoolOutbound

Create outbound rule


Create a new outbound rule for the outbound backend pool with az network lb outbound-rule create:
Named myOutboundRule .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer
Associated with frontend myFrontEndOutbound .
Protocol All .
Idle timeout of 15 minutes.
10000 outbound ports.
Associated with backend pool myBackEndPoolOutbound .
az network lb outbound-rule create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myOutboundRule \
--frontend-ip-configs myFrontEndOutbound \
--protocol All \
--idle-timeout 15 \
--outbound-ports 10000 \
--address-pool myBackEndPoolOutbound

Add virtual machines to outbound pool


Add the virtual machines to the outbound pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackendPoolOutbound \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done

Install IIS
Use az vm extension set to install IIS on the virtual machines and set the default website to the computer name.

array=(myVM1 myVM2 myVM3)


for vm in "${array[@]}"
do
az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $vm \
--resource-group CreatePubLBQS-rg \
--settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
done

Test the load balancer


To get the public IP address of the load balancer, use az network public-ip show.
Copy the public IP address, and then paste it into the address bar of your browser.

az network public-ip show \


--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--query ipAddress \
--output tsv
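If you'd rather script the check than refresh a browser, you could save the body each request returns to a file (with the extension above, each response page contains the VM's computer name) and count the hostnames to see how requests spread across the VMs. A local sketch with made-up sample data; the file name and counts are illustrative:

```shell
# Pretend each line is the hostname one request against the frontend returned.
printf '%s\n' myVM1 myVM3 myVM2 myVM1 myVM2 myVM3 myVM1 > responses.txt

# Count how many requests each VM answered.
sort responses.txt | uniq -c | sort -rn
```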
Clean up resources
When no longer needed, use the az group delete command to remove the resource group, load balancer, and all
related resources.

az group delete \
--name CreatePubLBQS-rg

Next steps
In this quickstart:
You created a standard or basic public load balancer.
Attached virtual machines.
Configured the load balancer traffic rule and health probe.
Tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using the Azure portal
3/30/2021 • 15 minutes to read

Get started with Azure Load Balancer by using the Azure portal to create a public load balancer and three virtual
machines.

Prerequisites
An Azure account with an active subscription. Create an account for free.

Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.

Standard SKU
Basic SKU

NOTE
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure
Load Balancer SKUs .

Figure: Resources created in quickstart.


In this section, you create a load balancer that load balances virtual machines.
When you create a public load balancer, you create a new public IP address that is configured as the frontend
(named LoadBalancerFrontend by default) for the load balancer.
1. Select Create a resource .
2. In the search box, enter Load balancer . Select Load balancer in the search results.
3. In the Load balancer page, select Create .
4. On the Create load balancer page enter, or select the following information:

Subscription: Select your subscription.
Resource group: Select Create new and enter CreatePubLBQS-rg in the text box.
Name: Enter myLoadBalancer.
Region: Select (Europe) West Europe.
Type: Select Public.
SKU: Leave the default Standard.
Tier: Leave the default Regional.
Public IP address: Select Create new. If you have an existing public IP you would like to use, select Use existing.
Public IP address name: Type myPublicIP in the text box.
Availability zone: Select Zone-redundant to create a resilient load balancer. To create a zonal load balancer, select a specific zone from 1, 2, or 3.
Add a public IPv6 address: Select No. For more information on IPv6 addresses and load balancer, see What is IPv6 for Azure Virtual Network?
Routing preference: Leave the default of Microsoft network. For more information on routing preference, see What is routing preference (preview)?

5. Accept the defaults for the remaining settings, and then select Review + create .
6. In the Review + create tab, select Create .
Create load balancer resources
In this section, you configure:
Load balancer settings for a backend address pool.
A health probe.
A load balancer rule.
Create a backend pool
A backend address pool contains the IP addresses of the virtual machine network interfaces (NICs) connected to the load balancer.
Create the backend address pool myBackendPool to include virtual machines for load-balancing internet
traffic.
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings , select Backend pools , then select Add .
3. On the Add a backend pool page, type myBackendPool as the name for your backend pool, and then
select Add.
Create a health probe
The load balancer monitors the status of your app with a health probe.
The health probe adds or removes VMs from the load balancer based on their response to health checks.
Create a health probe named myHealthProbe to monitor the health of the VMs.
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings , select Health probes , then select Add .

Name: Enter myHealthProbe.
Protocol: Select HTTP.
Port: Enter 80.
Interval: Enter 15 for the interval in seconds between probe attempts.
Unhealthy threshold: Select 2 for the number of consecutive probe failures that must occur before a VM is considered unhealthy.

3. Leave the rest of the defaults, and then select OK.


Create a load balancer rule
A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP
configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination
port are defined in the rule.
In this section, you'll create a load balancer rule:
Named myHTTPRule .
In the frontend named LoadBalancerFrontEnd .
Listening on Por t 80 .
Directs load balanced traffic to the backend named myBackendPool on Por t 80 .
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings , select Load-balancing rules , then select Add .
3. Use these values to configure the load-balancing rule:

Name: Enter myHTTPRule.
IP Version: Select IPv4.
Frontend IP address: Select LoadBalancerFrontEnd.
Protocol: Select TCP.
Port: Enter 80.
Backend port: Enter 80.
Backend pool: Select myBackendPool.
Health probe: Select myHealthProbe.
Idle timeout (minutes): Move the slider to 15 minutes.
TCP reset: Select Enabled.
Outbound source network address translation (SNAT): Select (Recommended) Use outbound rules to provide backend pool members access to the internet.

4. Leave the rest of the defaults and then select OK.

Create backend servers


In this section, you:
Create a virtual network.
Create three virtual machines for the backend pool of the load balancer.
Install IIS on the virtual machines to test the load balancer.

Create the virtual network


In this section, you'll create a virtual network and subnet.
1. On the upper-left side of the screen, select Create a resource > Networking > Virtual network or
search for Virtual network in the search box.
2. In Create virtual network, enter or select this information in the Basics tab:

Project details

Subscription: Select your Azure subscription.
Resource Group: Select CreatePubLBQS-rg.

Instance details

Name: Enter myVNet.
Region: Select West Europe.

3. Select the IP Addresses tab or select the Next: IP Addresses button at the bottom of the page.
4. In the IP Addresses tab, enter this information:

IPv4 address space: Enter 10.1.0.0/16.

5. Under Subnet name , select the word default .


6. In Edit subnet , enter this information:

Subnet name: Enter myBackendSubnet.
Subnet address range: Enter 10.1.0.0/24.

7. Select Save .
8. Select the Security tab.
9. Under BastionHost , select Enable . Enter this information:

Bastion name: Enter myBastionHost.
AzureBastionSubnet address space: Enter 10.1.1.0/24.
Public IP Address: Select Create new. For Name, enter myBastionIP, then select OK.

10. Select the Review + create tab or select the Review + create button.
11. Select Create .
Create virtual machines
In this section, you'll create three VMs (myVM1, myVM2, and myVM3) in three different zones (Zone 1, Zone 2, and Zone 3).
These VMs are added to the backend pool of the load balancer that was created earlier.
1. On the upper-left side of the portal, select Create a resource > Compute > Virtual machine.
2. In Create a virtual machine, type or select the values in the Basics tab:

Project details

Subscription: Select your Azure subscription.
Resource Group: Select CreatePubLBQS-rg.

Instance details

Virtual machine name: Enter myVM1.
Region: Select West Europe.
Availability options: Select Availability zones.
Availability zone: Select 1.
Image: Select Windows Server 2019 Datacenter.
Azure Spot instance: Select No.
Size: Choose VM size or take default setting.

Administrator account

Username: Enter a username.
Password: Enter a password.
Confirm password: Reenter password.

Inbound port rules

Public inbound ports: Select None.

3. Select the Networking tab, or select Next: Disks , then Next: Networking .
4. In the Networking tab, select or enter:

Network interface

Virtual network: myVNet.
Subnet: myBackendSubnet.
Public IP: Select None.
NIC network security group: Select Advanced.
Configure network security group: Select Create new. In Create network security group, enter myNSG in Name. Under Inbound rules, select +Add an inbound rule. Under Destination port ranges, enter 80. Under Priority, enter 100. In Name, enter myHTTPRule. Select Add, then select OK.

Load balancing

Place this virtual machine behind an existing load balancing solution?: Select Yes.

Load balancing settings

Load balancing options: Select Azure load balancing.
Select a load balancer: Select myLoadBalancer.
Select a backend pool: Select myBackendPool.

5. Select the Management tab, or select Next > Management .


6. In the Management tab, select or enter:

Monitoring

Boot diagnostics: Select Off.

7. Select Review + create .


8. Review the settings, and then select Create .
9. Follow the steps 1 to 8 to create two additional VMs with the following values and all the other settings
the same as myVM1 :

Name: myVM2 (second VM), myVM3 (third VM).
Availability zone: 2 (second VM), 3 (third VM).
Network security group: Select the existing myNSG for both.

Create outbound rule configuration


Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
For more information on outbound connections, see Outbound connections in Azure.
Create outbound rule
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings , select Outbound rules , then select Add .
3. Use these values to configure the outbound rules:
Name: Enter myOutboundRule.
Frontend IP address: Select Create new. In Name, enter LoadBalancerFrontEndOutbound. Select IP address or IP prefix. Select Create new under Public IP address or Public IP prefix. For Name, enter myPublicIPOutbound or myPublicIPPrefixOutbound. Select Add.
Idle timeout (minutes): Move slider to 15 minutes.
TCP Reset: Select Enabled.
Backend pool: Select Create new. Enter myBackendPoolOutbound in Name, then select Add.
Port allocation > Port allocation: Select Manually choose number of outbound ports.
Outbound ports > Choose by: Select Ports per instance.
Outbound ports > Ports per instance: Enter 10000.

4. Select Add .
Add virtual machines to outbound pool
1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from
the resources list.
2. Under Settings , select Backend pools .
3. Select myBackendPoolOutbound .
4. In Virtual network, select myVNet.
5. In Virtual machines, select + Add.
6. Check the boxes next to myVM1 , myVM2 , and myVM3 .
7. Select Add .
8. Select Save .

Install IIS
1. Select All services in the left-hand menu, select All resources, and then from the resources list, select
myVM1 that is located in the CreatePubLBQS-rg resource group.
2. On the Overview page, select Connect, then Bastion.
3. Enter the username and password entered during VM creation.
4. Select Connect .
5. On the server desktop, navigate to Windows Administrative Tools > Windows PowerShell .
6. In the PowerShell Window, run the following commands to:
Install the IIS server
Remove the default iisstart.htm file
Add a new iisstart.htm file that displays the name of the VM:

# install IIS server role


Install-WindowsFeature -name Web-Server -IncludeManagementTools

# remove default htm file


remove-item C:\inetpub\wwwroot\iisstart.htm

# Add a new htm file that displays server name


Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)

7. Close the Bastion session with myVM1 .


8. Repeat steps 1 to 6 to install IIS and the updated iisstart.htm file on myVM2 and myVM3 .

Test the load balancer


1. Find the public IP address for the load balancer on the Overview screen. Select All services in the left-hand
menu, select All resources, and then select myPublicIP.
2. Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS
Web server is displayed on the browser.

To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's
IIS Web server and then force-refresh your web browser from the client machine.

Clean up resources
When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the
resource group CreatePubLBQS-rg that contains the resources and then select Delete.

Next steps
In this quickstart, you:
Created an Azure Standard or Basic Load Balancer.
Attached 3 VMs to the load balancer.
Configured the load balancer traffic rule, health probe, and then tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using Azure PowerShell
3/30/2021 • 14 minutes to read

Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and three
virtual machines.

Prerequisites
An Azure account with an active subscription. Create an account for free.
Azure PowerShell installed locally or Azure Cloud Shell
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version
5.4.1 or later. Run Get-Module -ListAvailable Az to find the installed version. If you need to upgrade, see Install
Azure PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to create
a connection with Azure.

Create a resource group


An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with New-AzResourceGroup:

New-AzResourceGroup -Name 'CreatePubLBQS-rg' -Location 'eastus'

Standard SKU
Basic SKU

NOTE
A Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure Load Balancer SKUs.
Create a public IP address - Standard
Use New-AzPublicIpAddress to create a public IP address.

$publicip = @{
    Name = 'myPublicIP'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'static'
    Zone = 1,2,3
}
New-AzPublicIpAddress @publicip

To create a zonal public IP address in zone 1, use the following command:

$publicip = @{
    Name = 'myPublicIP'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'static'
    Zone = 1
}
New-AzPublicIpAddress @publicip

Create standard load balancer


This section details how you can create and configure the following components of the load balancer:
Create a front-end IP with New-AzLoadBalancerFrontendIpConfig for the frontend IP pool. This IP
receives the incoming traffic on the load balancer
Create a back-end address pool with New-AzLoadBalancerBackendAddressPoolConfig for traffic sent
from the frontend of the load balancer. This pool is where your backend virtual machines are deployed.
Create a health probe with Add-AzLoadBalancerProbeConfig that determines the health of the backend
VM instances.
Create a load balancer rule with Add-AzLoadBalancerRuleConfig that defines how traffic is distributed to
the VMs.
Create a public load balancer with New-AzLoadBalancer.

## Place the public IP created in previous steps into a variable. ##
$publicIp = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'CreatePubLBQS-rg'

## Create the load balancer frontend configuration and place it in a variable. ##
$feip = New-AzLoadBalancerFrontendIpConfig -Name 'myFrontEnd' -PublicIpAddress $publicIp

## Create the backend address pool configuration and place it in a variable. ##
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'

## Create the health probe and place it in a variable. ##
$probe = @{
    Name = 'myHealthProbe'
    Protocol = 'http'
    Port = '80'
    IntervalInSeconds = '360'
    ProbeCount = '5'
    RequestPath = '/'
}
$healthprobe = New-AzLoadBalancerProbeConfig @probe

## Create the load balancer rule and place it in a variable. ##
$lbrule = @{
    Name = 'myHTTPRule'
    Protocol = 'tcp'
    FrontendPort = '80'
    BackendPort = '80'
    IdleTimeoutInMinutes = '15'
    FrontendIpConfiguration = $feip
    BackendAddressPool = $bePool
}
$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset -DisableOutboundSNAT

## Create the load balancer resource. ##
$loadbalancer = @{
    ResourceGroupName = 'CreatePubLBQS-rg'
    Name = 'myLoadBalancer'
    Location = 'eastus'
    Sku = 'Standard'
    FrontendIpConfiguration = $feip
    BackendAddressPool = $bePool
    LoadBalancingRule = $rule
    Probe = $healthprobe
}
New-AzLoadBalancer @loadbalancer

Configure virtual network - Standard


Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network for the backend virtual machines.
Create a network security group to define inbound connections to your virtual network.
Create virtual network, network security group, and bastion host
Create a virtual network with New-AzVirtualNetwork.
Create a network security group rule with New-AzNetworkSecurityRuleConfig.
Create an Azure Bastion host with New-AzBastion.
Create a network security group with New-AzNetworkSecurityGroup.

## Create the backend subnet configuration ##
$subnet = @{
    Name = 'myBackendSubnet'
    AddressPrefix = '10.1.0.0/24'
}
$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet

## Create the Azure Bastion subnet. ##
$bastsubnet = @{
    Name = 'AzureBastionSubnet'
    AddressPrefix = '10.1.1.0/24'
}
$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet

## Create the virtual network ##
$net = @{
    Name = 'myVNet'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    AddressPrefix = '10.1.0.0/16'
    Subnet = $subnetConfig,$bastsubnetConfig
}
$vnet = New-AzVirtualNetwork @net

## Create the public IP address for the bastion host. ##
$ip = @{
    Name = 'myBastionIP'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'Static'
}
$publicip = New-AzPublicIpAddress @ip

## Create the bastion host ##
$bastion = @{
    ResourceGroupName = 'CreatePubLBQS-rg'
    Name = 'myBastion'
    PublicIpAddress = $publicip
    VirtualNetwork = $vnet
}
New-AzBastion @bastion -AsJob

## Create a rule for the network security group and place it in a variable. ##
$nsgrule = @{
    Name = 'myNSGRuleHTTP'
    Description = 'Allow HTTP'
    Protocol = '*'
    SourcePortRange = '*'
    DestinationPortRange = '80'
    SourceAddressPrefix = 'Internet'
    DestinationAddressPrefix = '*'
    Access = 'Allow'
    Priority = '2000'
    Direction = 'Inbound'
}
$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule

## Create the network security group ##
$nsg = @{
    Name = 'myNSG'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    SecurityRules = $rule1
}
New-AzNetworkSecurityGroup @nsg

Create virtual machines - Standard


In this section, you'll create the three virtual machines for the backend pool of the load balancer.
Create three network interfaces with New-AzNetworkInterface.
Set an administrator username and password for the VMs with Get-Credential.
Create the virtual machines with:
New-AzVM
New-AzVMConfig
Set-AzVMOperatingSystem
Set-AzVMSourceImage
Add-AzVMNetworkInterface
## Set the administrator username and password for the VMs. ##
$cred = Get-Credential

## Place the virtual network into a variable. ##
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePubLBQS-rg'

## Place the load balancer backend pool into a variable. ##
$lb = @{
    Name = 'myLoadBalancer'
    ResourceGroupName = 'CreatePubLBQS-rg'
}
$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig

## Place the network security group into a variable. ##
$nsg = Get-AzNetworkSecurityGroup -Name 'myNSG' -ResourceGroupName 'CreatePubLBQS-rg'

## For loop with variable to create virtual machines for the load balancer backend pool. ##
for ($i=1; $i -le 3; $i++)
{
    ## Create a network interface for the VM ##
    $nic = @{
        Name = "myNicVM$i"
        ResourceGroupName = 'CreatePubLBQS-rg'
        Location = 'eastus'
        Subnet = $vnet.Subnets[0]
        NetworkSecurityGroup = $nsg
        LoadBalancerBackendAddressPool = $bepool
    }
    $nicVM = New-AzNetworkInterface @nic

    ## Create a virtual machine configuration ##
    $vmsz = @{
        VMName = "myVM$i"
        VMSize = 'Standard_DS1_v2'
    }
    $vmos = @{
        ComputerName = "myVM$i"
        Credential = $cred
    }
    $vmimage = @{
        PublisherName = 'MicrosoftWindowsServer'
        Offer = 'WindowsServer'
        Skus = '2019-Datacenter'
        Version = 'latest'
    }
    $vmConfig = New-AzVMConfig @vmsz `
        | Set-AzVMOperatingSystem @vmos -Windows `
        | Set-AzVMSourceImage @vmimage `
        | Add-AzVMNetworkInterface -Id $nicVM.Id

    ## Create the virtual machine ##
    $vm = @{
        ResourceGroupName = 'CreatePubLBQS-rg'
        Location = 'eastus'
        VM = $vmConfig
        Zone = "$i"
    }
    New-AzVM @vm -AsJob
}

The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status
of the jobs, use Get-Job:
Get-Job

Id Name PSJobTypeName State HasMoreData Location Command


-- ---- ------------- ----- ----------- -------- -------
1 Long Running O… AzureLongRunni… Completed True localhost New-AzBastion
2 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM

Create outbound rule configuration


Load balancer outbound rules configure outbound source network address translation (SNAT) for VMs in the
backend pool.
For more information on outbound connections, see Outbound connections in Azure.
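The outbound rule created below allocates a fixed number of SNAT ports to each backend instance, and the total allocation cannot exceed the ports available on the frontend IP (64,000 per frontend IP for Standard Load Balancer). A quick budget check for this quickstart's values, as a sketch:

```shell
# Rough SNAT port budget for this quickstart (a sketch, not an Azure API call).
# Standard Load Balancer provides 64,000 SNAT ports per frontend IP address,
# and the outbound rule below allocates 10,000 ports per backend instance.
PORTS_PER_FRONTEND_IP=64000
PORTS_PER_VM=10000       # AllocatedOutboundPort used later in this section
BACKEND_VMS=3

ALLOCATED=$((PORTS_PER_VM * BACKEND_VMS))
HEADROOM=$((PORTS_PER_FRONTEND_IP - ALLOCATED))

echo "Ports allocated across the pool: $ALLOCATED"   # 30000
echo "Ports left on the frontend IP:   $HEADROOM"    # 34000
```

With three VMs at 10,000 ports each, the single outbound frontend IP still has headroom; adding more backend instances at this allocation would eventually require another frontend IP or a public IP prefix.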
Create outbound public IP address
Use New-AzPublicIpAddress to create a standard zone-redundant public IP address named myPublicIPOutbound.

$publicipout = @{
    Name = 'myPublicIPOutbound'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'static'
    Zone = 1,2,3
}
New-AzPublicIpAddress @publicipout

To create a zonal public IP address in zone 1, use the following command:

$publicipout = @{
    Name = 'myPublicIPOutbound'
    ResourceGroupName = 'CreatePubLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'static'
    Zone = 1
}
New-AzPublicIpAddress @publicipout

Create outbound configuration


Create a new frontend IP configuration with Add-AzLoadBalancerFrontendIpConfig.
Create a new outbound backend address pool with Add-AzLoadBalancerBackendAddressPoolConfig.
Apply the pool and frontend IP address to the load balancer with Set-AzLoadBalancer.
Create a new outbound rule for the outbound backend pool with Add-AzLoadBalancerOutboundRuleConfig.

## Place the public IP created in previous steps into a variable. ##
$pubip = @{
    Name = 'myPublicIPOutbound'
    ResourceGroupName = 'CreatePubLBQS-rg'
}
$publicIp = Get-AzPublicIpAddress @pubip

## Get the load balancer configuration ##
$lbc = @{
    ResourceGroupName = 'CreatePubLBQS-rg'
    Name = 'myLoadBalancer'
}
$lb = Get-AzLoadBalancer @lbc

## Create the frontend configuration ##
$fe = @{
    Name = 'myFrontEndOutbound'
    PublicIPAddress = $publicIP
}
$lb | Add-AzLoadBalancerFrontendIPConfig @fe | Set-AzLoadBalancer

## Create the outbound backend address pool ##
$be = @{
    Name = 'myBackEndPoolOutbound'
}
$lb | Add-AzLoadBalancerBackendAddressPoolConfig @be | Set-AzLoadBalancer

## Apply the outbound rule configuration to the load balancer. ##
$rule = @{
    Name = 'myOutboundRule'
    AllocatedOutboundPort = '10000'
    Protocol = 'All'
    IdleTimeoutInMinutes = '15'
    FrontendIPConfiguration = $lb.FrontendIpConfigurations[1]
    BackendAddressPool = $lb.BackendAddressPools[1]
}
$lb | Add-AzLoadBalancerOutBoundRuleConfig @rule | Set-AzLoadBalancer

Add virtual machines to outbound pool


Add the virtual machine network interfaces to the outbound pool of the load balancer with Set-AzNetworkInterfaceIpConfig:

## Get the load balancer configuration ##
$lbc = @{
    ResourceGroupName = 'CreatePubLBQS-rg'
    Name = 'myLoadBalancer'
}
$lb = Get-AzLoadBalancer @lbc

## For loop with variable to add virtual machines to the backend outbound pool. ##
for ($i=1; $i -le 3; $i++)
{
    $nic = @{
        ResourceGroupName = 'CreatePubLBQS-rg'
        Name = "myNicVM$i"
    }
    $nicvm = Get-AzNetworkInterface @nic

    ## Apply the backend pools to the network interface ##
    $be = @{
        Name = 'ipconfig1'
        LoadBalancerBackendAddressPoolId = $lb.BackendAddressPools[0].id,$lb.BackendAddressPools[1].id
    }
    $nicvm | Set-AzNetworkInterfaceIpConfig @be | Set-AzNetworkInterface
}

Install IIS
Use Set-AzVMExtension to install the Custom Script Extension.
The extension runs the PowerShell command Add-WindowsFeature Web-Server to install the IIS web server and then updates the Default.htm page to show the hostname of the VM:

IMPORTANT
Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use Get-Job to
check the status of the virtual machine deployment jobs.

## For loop with variable to install the custom script extension on the virtual machines. ##
for ($i=1; $i -le 3; $i++)
{
    $ext = @{
        Publisher = 'Microsoft.Compute'
        ExtensionType = 'CustomScriptExtension'
        ExtensionName = 'IIS'
        ResourceGroupName = 'CreatePubLBQS-rg'
        VMName = "myVM$i"
        Location = 'eastus'
        TypeHandlerVersion = '1.8'
        SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
    }
    Set-AzVMExtension @ext -AsJob
}

The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use Get-Job:
Get-Job

Id Name PSJobTypeName State HasMoreData Location Command


-- ---- ------------- ----- ----------- -------- -------
8 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
9 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
10 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension

Test the load balancer


Use Get-AzPublicIpAddress to get the public IP address of the load balancer:

$ip = @{
    ResourceGroupName = 'CreatePubLBQS-rg'
    Name = 'myPublicIP'
}
Get-AzPublicIPAddress @ip | select IpAddress

Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS Web
server is displayed on the browser.

To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's
IIS Web server and then force-refresh your web browser from the client machine.

Clean up resources
When no longer needed, you can use the Remove-AzResourceGroup command to remove the resource group,
load balancer, and the remaining resources.

Remove-AzResourceGroup -Name 'CreatePubLBQS-rg'

Next steps
In this quickstart:
You created a standard or basic public load balancer
Attached virtual machines.
Configured the load balancer traffic rule and health probe.
Tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using Azure CLI
3/30/2021 • 15 minutes to read

Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and three virtual
machines.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell.

If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version
is already installed.

Create a resource group


An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with az group create:
Named CreatePubLBQS-rg .
In the eastus location.

az group create \
--name CreatePubLBQS-rg \
--location eastus

Standard SKU
Basic SKU

NOTE
A Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure Load Balancer SKUs.
Configure virtual network - Standard
Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network
Create a virtual network using az network vnet create:
Named myVNet .
Address prefix of 10.1.0.0/16 .
Subnet named myBackendSubnet .
Subnet prefix of 10.1.0.0/24 .
In the CreatePubLBQS-rg resource group.
Location of eastus .

az network vnet create \
--resource-group CreatePubLBQS-rg \
--location eastus \
--name myVNet \
--address-prefixes 10.1.0.0/16 \
--subnet-name myBackendSubnet \
--subnet-prefixes 10.1.0.0/24
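The backend subnet above is a /24. As a quick capacity check (Azure reserves five addresses in every subnet: the network address, the default gateway, two for Azure DNS mapping, and the broadcast address):

```shell
# Usable addresses in the 10.1.0.0/24 backend subnet.
PREFIX_LENGTH=24
TOTAL=$((1 << (32 - PREFIX_LENGTH)))    # 256 addresses in a /24
AZURE_RESERVED=5                        # reserved by Azure in every subnet
echo "Usable addresses: $((TOTAL - AZURE_RESERVED))"   # 251
```

251 usable addresses is far more than the three backend NICs this quickstart deploys, so the /24 leaves ample room to scale the pool.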

Create a public IP address


Use az network public-ip create to create a public IP address for the bastion host:
Create a standard zone-redundant public IP address named myBastionIP.
In CreatePubLBQS-rg.

az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myBastionIP \
--sku Standard
Create a bastion subnet
Use az network vnet subnet create to create a bastion subnet:
Named AzureBastionSubnet .
Address prefix of 10.1.1.0/24 .
In virtual network myVNet .
In resource group CreatePubLBQS-rg .

az network vnet subnet create \
--resource-group CreatePubLBQS-rg \
--name AzureBastionSubnet \
--vnet-name myVNet \
--address-prefixes 10.1.1.0/24

Create bastion host


Use az network bastion create to create a bastion host:
Named myBastionHost .
In CreatePubLBQS-rg .
Associated with public IP myBastionIP .
Associated with virtual network myVNet .
In eastus location.

az network bastion create \
--resource-group CreatePubLBQS-rg \
--name myBastionHost \
--public-ip-address myBastionIP \
--vnet-name myVNet \
--location eastus

It can take a few minutes for the Azure Bastion host to deploy.
Create a network security group
For a standard load balancer, the VMs in the backend pool are required to have network interfaces that
belong to a network security group.
Create a network security group using az network nsg create:
Named myNSG.
In resource group CreatePubLBQS-rg.

az network nsg create \
--resource-group CreatePubLBQS-rg \
--name myNSG

Create a network security group rule


Create a network security group rule using az network nsg rule create:
Named myNSGRuleHTTP.
In the network security group you created in the previous step, myNSG.
In resource group CreatePubLBQS-rg.
Protocol (*).
Direction Inbound.
Source (*).
Destination (*).
Destination port 80.
Access Allow.
Priority 200.

az network nsg rule create \
--resource-group CreatePubLBQS-rg \
--nsg-name myNSG \
--name myNSGRuleHTTP \
--protocol '*' \
--direction inbound \
--source-address-prefix '*' \
--source-port-range '*' \
--destination-address-prefix '*' \
--destination-port-range 80 \
--access allow \
--priority 200

Create backend servers - Standard


In this section, you create:
Three network interfaces for the virtual machines.
Three virtual machines to be used as backend servers for the load balancer.
Create network interfaces for the virtual machines
Create three network interfaces with az network nic create:
Named myNicVM1 , myNicVM2 , and myNicVM3 .
In resource group CreatePubLBQS-rg .
In virtual network myVNet .
In subnet myBackendSubnet .
In network security group myNSG .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic create \
--resource-group CreatePubLBQS-rg \
--name $vmnic \
--vnet-name myVNet \
--subnet myBackendSubnet \
--network-security-group myNSG
done

Create virtual machines


Create the virtual machines with az vm create:
VM1
Named myVM1 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM1 .
Virtual machine image win2019datacenter .
In Zone 1 .
az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM1 \
--nics myNicVM1 \
--image win2019datacenter \
--admin-username azureuser \
--zone 1 \
--no-wait

VM2
Named myVM2 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM2 .
Virtual machine image win2019datacenter .
In Zone 2 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM2 \
--nics myNicVM2 \
--image win2019datacenter \
--admin-username azureuser \
--zone 2 \
--no-wait

VM3
Named myVM3 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM3 .
Virtual machine image win2019datacenter .
In Zone 3 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM3 \
--nics myNicVM3 \
--image win2019datacenter \
--admin-username azureuser \
--zone 3 \
--no-wait

It may take a few minutes for the VMs to deploy.

Create a public IP address - Standard


To access your web app on the Internet, you need a public IP address for the load balancer.
Use az network public-ip create to:
Create a standard zone redundant public IP address named myPublicIP .
In CreatePubLBQS-rg .
az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard

To create a zonal public IP address in zone 1:

az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard \
--zone 1

Create standard load balancer


This section details how you can create and configure the following components of the load balancer:
A frontend IP pool that receives the incoming network traffic on the load balancer.
A backend IP pool where the frontend pool sends the load balanced network traffic.
A health probe that determines health of the backend VM instances.
A load balancer rule that defines how traffic is distributed to the VMs.
Create the load balancer resource
Create a public load balancer with az network lb create:
Named myLoadBalancer .
A frontend pool named myFrontEnd .
A backend pool named myBackEndPool .
Associated with the public IP address myPublicIP that you created in the preceding step.

az network lb create \
--resource-group CreatePubLBQS-rg \
--name myLoadBalancer \
--sku Standard \
--public-ip-address myPublicIP \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool

Create the health probe


A health probe checks all virtual machine instances to ensure they can send network traffic.
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added
back into the load balancer when the failure is resolved.
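The time it takes to pull a failed VM out of rotation is roughly the probe interval multiplied by the number of consecutive failures required. The interval and threshold below are illustrative values, not the CLI defaults, so treat this as a sketch of the arithmetic:

```shell
# Worst-case detection time before an unhealthy VM is removed from the pool.
INTERVAL_SECONDS=15   # illustrative probe interval (an assumption)
PROBE_COUNT=2         # illustrative consecutive-failure threshold (an assumption)
DETECTION=$((INTERVAL_SECONDS * PROBE_COUNT))
echo "An unhealthy VM is removed after about $DETECTION seconds."   # 30
```

Shorter intervals detect failures faster at the cost of more probe traffic against each backend instance.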
Create a health probe with az network lb probe create:
Monitors the health of the virtual machines.
Named myHealthProbe.
Protocol TCP.
Monitoring port 80.
az network lb probe create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHealthProbe \
--protocol tcp \
--port 80

Create the load balancer rule


A load balancer rule defines:
Frontend IP configuration for the incoming traffic.
The backend IP pool to receive the traffic.
The required source and destination port.
Create a load balancer rule with az network lb rule create:
Named myHTTPRule.
Listening on port 80 in the frontend pool myFrontEnd.
Sending load-balanced network traffic to the backend address pool myBackEndPool using port 80.
Using health probe myHealthProbe.
Protocol TCP.
Idle timeout of 15 minutes.
Enable TCP reset.

az network lb rule create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHTTPRule \
--protocol tcp \
--frontend-port 80 \
--backend-port 80 \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool \
--probe-name myHealthProbe \
--disable-outbound-snat true \
--idle-timeout 15 \
--enable-tcp-reset true

Add virtual machines to load balancer backend pool


Add the virtual machines to the backend pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPool .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackEndPool \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done
Create outbound rule configuration
Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
For more information on outbound connections, see Outbound connections in Azure.
A public IP or prefix can be used for the outbound configuration.
Public IP
Use az network public-ip create to create a single IP for the outbound connectivity.
Named myPublicIPOutbound .
In CreatePubLBQS-rg .

az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard

To create a zonal public IP address in zone 1:

az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard \
--zone 1

Public IP Prefix
Use az network public-ip prefix create to create a public IP prefix for the outbound connectivity.
Named myPublicIPPrefixOutbound .
In CreatePubLBQS-rg .
Prefix length of 28 .

az network public-ip prefix create \
--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28

To create a zonal public IP prefix in zone 1:

az network public-ip prefix create \
--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28 \
--zone 1

For more information on scaling outbound NAT and outbound connectivity, see Scale outbound NAT with
multiple IP addresses.
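A public IP prefix reserves a contiguous block whose size follows directly from the prefix length; for the /28 used above:

```shell
# Number of addresses in a public IP prefix: 2^(32 - prefix_length).
PREFIX_LENGTH=28
ADDRESSES=$((1 << (32 - PREFIX_LENGTH)))
echo "A /$PREFIX_LENGTH prefix contains $ADDRESSES public IP addresses."   # 16
```

Each address in the prefix contributes its own pool of SNAT ports, which is why a prefix is the simpler option when a single outbound IP runs short of ports.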
Create outbound frontend IP configuration
Create a new frontend IP configuration with az network lb frontend-ip create:
Select the public IP or public IP prefix commands based on the decision in the previous step.
Public IP
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP address myPublicIPOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \
--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-address myPublicIPOutbound

Public IP prefix
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP prefix myPublicIPPrefixOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \
--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-prefix myPublicIPPrefixOutbound

Create outbound pool


Create a new outbound pool with az network lb address-pool create:
Named myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

az network lb address-pool create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myBackendPoolOutbound

Create outbound rule


Create a new outbound rule for the outbound backend pool with az network lb outbound-rule create:
Named myOutboundRule .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer
Associated with frontend myFrontEndOutbound .
Protocol All .
Idle timeout of 15 .
10000 outbound ports.
Associated with backend pool myBackEndPoolOutbound .
az network lb outbound-rule create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myOutboundRule \
--frontend-ip-configs myFrontEndOutbound \
--protocol All \
--idle-timeout 15 \
--outbound-ports 10000 \
--address-pool myBackEndPoolOutbound

Add virtual machines to outbound pool


Add the virtual machines to the outbound pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackendPoolOutbound \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done

Install IIS
Use az vm extension set to install IIS on the virtual machines and set the default website to the computer name.

array=(myVM1 myVM2 myVM3)


for vm in "${array[@]}"
do
az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $vm \
--resource-group CreatePubLBQS-rg \
--settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
done

Test the load balancer


To get the public IP address of the load balancer, use az network public-ip show.
Copy the public IP address, and then paste it into the address bar of your browser.

az network public-ip show \
--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--query ipAddress \
--output tsv
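To watch the distribution from a terminal instead of the browser, a small helper can fetch the frontend repeatedly. This is a sketch, not part of the Azure CLI: the IP below is a placeholder, and the fetch command is injectable so the loop can be dry-run without a deployed load balancer.

```shell
# Fetch the load balancer frontend N times and print each response body,
# so successive requests show which backend VM answered.
probe_lb() {
  local ip="$1" count="$2"; shift 2
  local fetch="${*:-curl -s}"    # defaults to curl; pass 'echo' to dry-run
  local i=0
  while [ "$i" -lt "$count" ]; do
    $fetch "http://$ip/"
    i=$((i + 1))
  done
}

# Dry run with a placeholder IP: prints the URL three times.
probe_lb 203.0.113.10 3 echo
```

Replace the placeholder with the address returned by az network public-ip show and drop the trailing echo argument to issue real requests.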
Clean up resources
When no longer needed, use the az group delete command to remove the resource group, load balancer, and all
related resources.

az group delete \
--name CreatePubLBQS-rg

Next steps
In this quickstart:
You created a standard or basic public load balancer
Attached virtual machines.
Configured the load balancer traffic rule and health probe.
Tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Tutorial: Load balance VMs across availability zones
with a Standard Load Balancer using the Azure
portal
3/30/2021 • 9 minutes to read

Load balancing provides a higher level of availability by spreading incoming requests across multiple virtual
machines. This tutorial steps through creating a public Standard Load Balancer that load balances VMs across
availability zones. This helps to protect your apps and data from an unlikely failure or loss of an entire
datacenter. With zone-redundancy, one or more availability zones can fail and the data path survives as long as
one zone in the region remains healthy. You learn how to:
Create a Standard Load Balancer
Create network security groups to define incoming traffic rules
Create zone-redundant VMs across multiple zones and attach to a load balancer
Create load balancer health probe
Create load balancer traffic rules
Create a basic IIS site
View a load balancer in action
For more information about using Availability zones with Standard Load Balancer, see Standard Load Balancer
and Availability Zones.
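The zone-redundancy claim above can be made concrete with a toy calculation: with one VM per zone and independent zone failures, all three zones must fail before the data path is lost. The 1% per-zone failure probability below is purely illustrative, not an Azure SLA figure:

```shell
# Illustrative availability math; awk handles the floating-point arithmetic.
awk 'BEGIN {
  p = 0.01                       # assumed per-zone failure probability
  printf "P(single zone down): %.6f\n", p
  printf "P(all 3 zones down): %.6f\n", p * p * p
}'
```

Under the independence assumption, spreading the VMs across three zones turns a 1-in-100 outage into a 1-in-a-million one; correlated failures would narrow that gap in practice.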
If you prefer, you can complete this tutorial using the Azure CLI.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
An Azure subscription

Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.

Create a Standard Load Balancer


Standard Load Balancer only supports a Standard Public IP address. When you create a new public IP while
creating the load balancer, it is automatically configured as a Standard SKU version, and is also automatically
zone-redundant.
1. On the top left-hand side of the screen, click Create a resource > Networking > Load Balancer .
2. In the Basics tab of the Create load balancer page, enter or select the following information, accept the
defaults for the remaining settings, and then select Review + create :

SETTING                   VALUE
Subscription              Select your subscription.
Resource group            Select Create new and type MyResourceGroupLBAZ in the text box.
Name                      myLoadBalancer
Region                    Select West Europe.
Type                      Select Public.
SKU                       Select Standard.
Public IP address         Select Create new.
Public IP address name    Type myPublicIP in the text box.
Availability zone         Select Zone redundant.

Create backend servers


In this section, you create a virtual network, virtual machines in different zones for the region, and then install
IIS on the virtual machines to help test the zone-redundant load balancer. If a zone fails, the health probes for
the VMs in that zone fail, and traffic continues to be served by the VMs in the other zones.

Virtual network and parameters


In this section, you'll need to replace the following parameters in the steps with the information below:

PARAMETER                   VALUE
<resource-group-name>       myResourceGroupLBAZ (select an existing resource group)
<virtual-network-name>      myVNet
<region-name>               West Europe
<IPv4-address-space>        10.0.0.0/16
<subnet-name>               myBackendSubnet
<subnet-address-range>      10.0.0.0/24

Create the virtual network and subnet


In this section, you'll create a virtual network and subnet.
1. On the upper-left side of the screen, select Create a resource > Networking > Vir tual network or
search for Vir tual network in the search box.
2. In Create vir tual network , enter or select this information in the Basics tab:
SET T IN G VA L UE

Project Details

Subscription Select your Azure subscription

Resource Group Select Create new , enter <resource-group-name> ,


then select OK, or select an existing <resource-group-
name> based on parameters.

Instance details

Name Enter <vir tual-network-name>

Region Select <region-name>

3. Select the IP Addresses tab or select the Next: IP Addresses button at the bottom of the page.
4. In the IP Addresses tab, enter this information:

SET T IN G VA L UE

IPv4 address space Enter <IPv4-address-space>

5. Under Subnet name , select the word default .


6. In Edit subnet , enter this information:

SET T IN G VA L UE

Subnet name Enter <subnet-name>

Subnet address range Enter <subnet-address-range>

7. Select Save .
8. Select the Review + create tab or select the Review + create button.
9. Select Create .
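The portal steps above have a direct CLI equivalent, assuming an authenticated Azure CLI session and the parameter values from the table earlier in this section:

```shell
# Create the virtual network and backend subnet in one command, using the
# address spaces from the parameter table above.
az network vnet create \
  --resource-group myResourceGroupLBAZ \
  --location westeurope \
  --name myVNet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefixes 10.0.0.0/24
```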

Create a network security group


Create a network security group to define inbound connections to your virtual network.
1. On the top left-hand side of the screen, click Create a resource , in the search box type Network Security
Group, and in the network security group page, click Create .
2. In the Create network security group page, enter these values:
myNetworkSecurityGroup - for the name of the network security group.
myResourceGroupLBAZ - for the name of the existing resource group.
Create network security group rules
In this section, you use the Azure portal to create network security group rules that allow inbound HTTP and
RDP connections.
1. In the Azure portal, click All resources in the left-hand menu, and then search and click
myNetworkSecurityGroup that is located in the myResourceGroupLBAZ resource group.
2. Under Settings , click Inbound security rules , and then click Add .
3. Enter these values for the inbound security rule named myHTTPRule to allow inbound HTTP
connections using port 80:
Service Tag - for Source .
Internet - for Source service tag .
80 - for Destination port ranges .
TCP - for Protocol .
Allow - for Action .
100 - for Priority .
myHTTPRule - for the name of the rule.
Allow HTTP - for the description of the rule.
4. Click OK .
5. Repeat steps 2 to 4 to create another rule named myRDPRule to allow an inbound RDP connection
using port 3389 with the following values:
Service Tag - for Source .
Internet - for Source service tag .
3389 - for Destination port ranges .
TCP - for Protocol .
Allow - for Action .
200 - for Priority .
myRDPRule - for the name of the rule.
Allow RDP - for the description of the rule.
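For reference, the same NSG and rules can be sketched with the Azure CLI (requires an authenticated session). The Internet service tag is expressed as a source address prefix:

```shell
# Network security group for the backend subnet's VM NICs.
az network nsg create \
  --resource-group myResourceGroupLBAZ \
  --name myNetworkSecurityGroup

# Allow inbound HTTP on port 80 from the Internet service tag.
az network nsg rule create \
  --resource-group myResourceGroupLBAZ \
  --nsg-name myNetworkSecurityGroup \
  --name myHTTPRule \
  --protocol Tcp \
  --direction Inbound \
  --source-address-prefix Internet \
  --source-port-range '*' \
  --destination-address-prefix '*' \
  --destination-port-range 80 \
  --access Allow \
  --priority 100

# Allow inbound RDP on port 3389; lower priority than the HTTP rule.
az network nsg rule create \
  --resource-group myResourceGroupLBAZ \
  --nsg-name myNetworkSecurityGroup \
  --name myRDPRule \
  --protocol Tcp \
  --direction Inbound \
  --source-address-prefix Internet \
  --source-port-range '*' \
  --destination-address-prefix '*' \
  --destination-port-range 3389 \
  --access Allow \
  --priority 200
```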
Create virtual machines
Create virtual machines in different zones (zone 1, zone 2, and zone 3) for the region that can act as backend
servers to the load balancer.
1. On the top left-hand side of the screen, click Create a resource > Compute > Windows Server 2016
Datacenter and enter these values for the virtual machine:
myVM1 - for the name of the virtual machine.
azureuser - for the administrator user name.
myResourceGroupLBAZ - for Resource group , select Use existing , and then select
myResourceGroupLBAZ.
2. Click OK .
3. Select DS1_V2 for the size of the virtual machine, and click Select .
4. Enter these values for the VM settings:
zone 1 - for the zone where you place the VM.
myVNet - ensure it is selected as the virtual network.
myBackendSubnet - ensure it is selected as the subnet.
myNetworkSecurityGroup - for the name of network security group (firewall).
5. Click Disabled to disable boot diagnostics.
6. Click OK , review the settings on the summary page, and then click Create .
7. Using steps 1-6, create a second VM named myVM2 in Zone 2 and a third VM named myVM3 in Zone 3,
with myVNet as the virtual network, myBackendSubnet as the subnet, and myNetworkSecurityGroup as the
network security group.
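A hedged CLI sketch of the same three zone-pinned VMs follows; it assumes an authenticated session, prompts for an admin password, and uses the Win2016Datacenter image alias. Note that `az vm create` auto-generates NIC names (for example myVM1VMNic), which differ from the NICs the portal creates:

```shell
# Create one VM per availability zone (1, 2, 3) as backend servers.
for z in 1 2 3; do
  az vm create \
    --resource-group myResourceGroupLBAZ \
    --name myVM$z \
    --image Win2016Datacenter \
    --size Standard_DS1_v2 \
    --admin-username azureuser \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
    --nsg myNetworkSecurityGroup \
    --zone $z \
    --no-wait
  done
```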
Install IIS on VMs
1. Click All resources in the left-hand menu, and then from the resources list click myVM1 that is located in
the myResourceGroupLBAZ resource group.
2. On the Overview page, click Connect to RDP into the VM.
3. Log into the VM with username azureuser.
4. On the server desktop, navigate to Windows Administrative Tools >Windows PowerShell .
5. In the PowerShell Window, run the following commands to install the IIS server, remove the default
iisstart.htm file, and then add a new iisstart.htm file that displays the name of the VM:

# install IIS server role


Install-WindowsFeature -name Web-Server -IncludeManagementTools

# remove default htm file


remove-item C:\inetpub\wwwroot\iisstart.htm

# Add a new htm file that displays server name


Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)

6. Close the RDP session with myVM1.


7. Repeat steps 1 to 6 to install IIS and the updated iisstart.htm file on myVM2 and myVM3.

Create load balancer resources


In this section, you configure load balancer settings for a backend address pool and a health probe, and specify
load balancer and NAT rules.
Create a backend address pool
To distribute traffic to the VMs, a backend address pool contains the IP addresses of the virtual machines'
network interfaces (NICs) connected to the load balancer. Create the backend address pool myBackendPool to
include myVM1, myVM2, and myVM3.
1. Click All resources in the left-hand menu, and then click myLoadBalancer from the resources list.
2. Under Settings , click Backend pools , then click Add .
3. On the Add a backend pool page, do the following:
For Name , type myBackendPool .
For Virtual network , in the drop-down menu, click myVNet .
For Virtual machine , in the drop-down menu, click myVM1 .
For IP address , in the drop-down menu, click the IP address of myVM1 .
4. Click Add new backend resource to add each remaining virtual machine (myVM2 and myVM3) to the
backend pool of the load balancer.
5. Click Add .

6. Check to make sure your load balancer backend pool setting displays all three VMs: myVM1 ,
myVM2 , and myVM3 .
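With the CLI, VMs join the pool by adding each NIC's IP configuration to the backend address pool. The NIC names below are assumptions (the `az vm create` convention of `<vm>VMNic`); portal-created VMs get different auto-generated NIC names, so adjust accordingly:

```shell
# Add each VM's primary IP configuration to the backend pool.
for nic in myVM1VMNic myVM2VMNic myVM3VMNic; do
  az network nic ip-config address-pool add \
    --resource-group myResourceGroupLBAZ \
    --lb-name myLoadBalancer \
    --address-pool myBackendPool \
    --nic-name $nic \
    --ip-config-name ipconfig1
done
```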
Create a health probe
To allow the load balancer to monitor the status of your app, you use a health probe. The health probe
dynamically adds or removes VMs from the load balancer rotation based on their response to health checks.
Create a health probe myHealthProbe to monitor the health of the VMs.
1. Click All resources in the left-hand menu, and then click myLoadBalancer from the resources list.
2. Under Settings , click Health probes , then click Add .
3. Use these values to create the health probe:
myHealthProbe - for the name of the health probe.
HTTP - for the protocol type.
80 - for the port number.
15 - for Interval , the number of seconds between probe attempts.
2 - for Unhealthy threshold , the number of consecutive probe failures that must occur before a VM is
considered unhealthy.
4. Click OK .
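The equivalent CLI command follows (hedged: it assumes an authenticated session, and the `--path /` probe path is an assumption, since the portal steps do not specify one):

```shell
# HTTP probe on port 80: probes every 15 seconds and marks a VM unhealthy
# after 2 consecutive failures, matching the portal values above.
az network lb probe create \
  --resource-group myResourceGroupLBAZ \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol http \
  --port 80 \
  --path / \
  --interval 15 \
  --threshold 2
```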

Create a load balancer rule


A load balancer rule is used to define how traffic is distributed to the VMs. You define the front-end IP
configuration for the incoming traffic and the back-end IP pool to receive the traffic, along with the required
source and destination port. Create a load balancer rule myLoadBalancerRuleWeb for listening to port 80 in the
frontend FrontendLoadBalancer and sending load-balanced network traffic to the backend address pool
myBackEndPool also using port 80.
1. Click All resources in the left-hand menu, and then click myLoadBalancer from the resources list.
2. Under Settings , click Load balancing rules , then click Add .
3. Use these values to configure the load balancing rule:
myHTTPRule - for the name of the load balancing rule.
TCP - for the protocol type.
80 - for the port number.
80 - for the backend port.
myBackendPool - for the name of the backend pool.
myHealthProbe - for the name of the health probe.
4. Click OK .
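A CLI sketch of the same rule is shown below; the frontend name myFrontEnd is an assumption carried over from the text (the portal names the frontend when the load balancer is created):

```shell
# Listen on frontend port 80 and forward to backend port 80, gated by the
# health probe created earlier.
az network lb rule create \
  --resource-group myResourceGroupLBAZ \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe
```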

Test the load balancer


1. Find the public IP address for the load balancer on the Overview screen. Click All resources and then
click myPublicIP .
2. Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS
Web server is displayed on the browser.
To see the load balancer distribute traffic across the VMs in different zones, you can force-refresh your
web browser.
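Instead of refreshing a browser, you can exercise the frontend from a shell (assuming an authenticated CLI session and that the VMs are serving the iisstart.htm page created earlier):

```shell
# Fetch the frontend address, then request the page a few times; successive
# responses should rotate through the VM names as traffic is distributed.
IP=$(az network public-ip show \
  --resource-group myResourceGroupLBAZ \
  --name myPublicIP \
  --query ipAddress \
  --output tsv)

for i in 1 2 3 4 5; do
  curl -s "http://$IP/"
  echo
done
```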

Clean up resources
When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the
resource group that contains the load balancer and select Delete .

Next steps
Learn more about load balancing a VM within a specific availability zone.
Load balance VMs within an availability zone
Quickstart: Create a public load balancer to load
balance VMs using Azure CLI
3/30/2021 • 15 minutes to read

Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and three virtual
machines.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell.

If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version
is already installed.

Create a resource group


An Azure resource group is a logical container into which Azure resources are deployed and managed.
Create a resource group with az group create:
Named CreatePubLBQS-rg .
In the eastus location.

az group create \
--name CreatePubLBQS-rg \
--location eastus

Standard SKU
Basic SKU

NOTE
Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see Azure
Load Balancer SKUs .
Configure virtual network - Standard
Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network
Create a virtual network using az network vnet create:
Named myVNet .
Address prefix of 10.1.0.0/16 .
Subnet named myBackendSubnet .
Subnet prefix of 10.1.0.0/24 .
In the CreatePubLBQS-rg resource group.
Location of eastus .

az network vnet create \


--resource-group CreatePubLBQS-rg \
--location eastus \
--name myVNet \
--address-prefixes 10.1.0.0/16 \
--subnet-name myBackendSubnet \
--subnet-prefixes 10.1.0.0/24

Create a public IP address


Use az network public-ip create to create a public IP address for the bastion host:
Create a standard zone-redundant public IP address named myBastionIP .
In CreatePubLBQS-rg .

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myBastionIP \
--sku Standard
Create a bastion subnet
Use az network vnet subnet create to create a bastion subnet:
Named AzureBastionSubnet .
Address prefix of 10.1.1.0/24 .
In virtual network myVNet .
In resource group CreatePubLBQS-rg .

az network vnet subnet create \


--resource-group CreatePubLBQS-rg \
--name AzureBastionSubnet \
--vnet-name myVNet \
--address-prefixes 10.1.1.0/24

Create bastion host


Use az network bastion create to create a bastion host:
Named myBastionHost .
In CreatePubLBQS-rg .
Associated with public IP myBastionIP .
Associated with virtual network myVNet .
In eastus location.

az network bastion create \


--resource-group CreatePubLBQS-rg \
--name myBastionHost \
--public-ip-address myBastionIP \
--vnet-name myVNet \
--location eastus

It can take a few minutes for the Azure Bastion host to deploy.
Create a network security group
For a standard load balancer, the VMs in the backend pool are required to have network interfaces that
belong to a network security group.
Create a network security group using az network nsg create:
Named myNSG .
In resource group CreatePubLBQS-rg .

az network nsg create \


--resource-group CreatePubLBQS-rg \
--name myNSG

Create a network security group rule


Create a network security group rule using az network nsg rule create:
Named myNSGRuleHTTP .
In the network security group you created in the previous step, myNSG .
In resource group CreatePubLBQS-rg .
Protocol (*) .
Direction Inbound .
Source (*) .
Destination (*) .
Destination port Port 80 .
Access Allow .
Priority 200 .

az network nsg rule create \


--resource-group CreatePubLBQS-rg \
--nsg-name myNSG \
--name myNSGRuleHTTP \
--protocol '*' \
--direction inbound \
--source-address-prefix '*' \
--source-port-range '*' \
--destination-address-prefix '*' \
--destination-port-range 80 \
--access allow \
--priority 200

Create backend servers - Standard


In this section, you create:
Three network interfaces for the virtual machines.
Three virtual machines to be used as backend servers for the load balancer.
Create network interfaces for the virtual machines
Create three network interfaces with az network nic create:
Named myNicVM1 , myNicVM2 , and myNicVM3 .
In resource group CreatePubLBQS-rg .
In virtual network myVNet .
In subnet myBackendSubnet .
In network security group myNSG .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic create \
--resource-group CreatePubLBQS-rg \
--name $vmnic \
--vnet-name myVNet \
--subnet myBackendSubnet \
--network-security-group myNSG
done

Create virtual machines


Create the virtual machines with az vm create:
VM1
Named myVM1 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM1 .
Virtual machine image win2019datacenter .
In Zone 1 .
az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM1 \
--nics myNicVM1 \
--image win2019datacenter \
--admin-username azureuser \
--zone 1 \
--no-wait

VM2
Named myVM2 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM2 .
Virtual machine image win2019datacenter .
In Zone 2 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM2 \
--nics myNicVM2 \
--image win2019datacenter \
--admin-username azureuser \
--zone 2 \
--no-wait

VM3
Named myVM3 .
In resource group CreatePubLBQS-rg .
Attached to network interface myNicVM3 .
Virtual machine image win2019datacenter .
In Zone 3 .

az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM3 \
--nics myNicVM3 \
--image win2019datacenter \
--admin-username azureuser \
--zone 3 \
--no-wait

It may take a few minutes for the VMs to deploy.

Create a public IP address - Standard


To access your web app on the Internet, you need a public IP address for the load balancer.
Use az network public-ip create to:
Create a standard zone redundant public IP address named myPublicIP .
In CreatePubLBQS-rg .
az network public-ip create \
--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard

To create a zonal public IP address in Zone 1:

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--sku Standard \
--zone 1

Create standard load balancer


This section details how you can create and configure the following components of the load balancer:
A frontend IP pool that receives the incoming network traffic on the load balancer.
A backend IP pool where the frontend pool sends the load balanced network traffic.
A health probe that determines health of the backend VM instances.
A load balancer rule that defines how traffic is distributed to the VMs.
Create the load balancer resource
Create a public load balancer with az network lb create:
Named myLoadBalancer .
A frontend pool named myFrontEnd .
A backend pool named myBackEndPool .
Associated with the public IP address myPublicIP that you created in the preceding step.

az network lb create \
--resource-group CreatePubLBQS-rg \
--name myLoadBalancer \
--sku Standard \
--public-ip-address myPublicIP \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool

Create the health probe


A health probe checks all virtual machine instances to ensure they can send network traffic.
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added
back into the load balancer when the failure is resolved.
Create a health probe with az network lb probe create:
Monitors the health of the virtual machines.
Named myHealthProbe .
Protocol TCP .
Monitoring Port 80 .
az network lb probe create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHealthProbe \
--protocol tcp \
--port 80

Create the load balancer rule


A load balancer rule defines:
Frontend IP configuration for the incoming traffic.
The backend IP pool to receive the traffic.
The required source and destination port.
Create a load balancer rule with az network lb rule create:
Named myHTTPRule
Listening on Port 80 in the frontend pool myFrontEnd .
Sending load-balanced network traffic to the backend address pool myBackEndPool using Port 80 .
Using health probe myHealthProbe .
Protocol TCP .
Idle timeout of 15 minutes .
Enable TCP reset.

az network lb rule create \


--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myHTTPRule \
--protocol tcp \
--frontend-port 80 \
--backend-port 80 \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool \
--probe-name myHealthProbe \
--disable-outbound-snat true \
--idle-timeout 15 \
--enable-tcp-reset true

Add virtual machines to load balancer backend pool


Add the virtual machines to the backend pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPool .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackendPool \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done
Create outbound rule configuration
Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
For more information on outbound connections, see Outbound connections in Azure.
A public IP or prefix can be used for the outbound configuration.
Public IP
Use az network public-ip create to create a single IP for the outbound connectivity.
Named myPublicIPOutbound .
In CreatePubLBQS-rg .

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard

To create a zonal public IP address in Zone 1:

az network public-ip create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPOutbound \
--sku Standard \
--zone 1

Public IP Prefix
Use az network public-ip prefix create to create a public IP prefix for the outbound connectivity.
Named myPublicIPPrefixOutbound .
In CreatePubLBQS-rg .
Prefix length of 28 .

az network public-ip prefix create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28

To create a zonal public IP prefix in Zone 1:

az network public-ip prefix create \


--resource-group CreatePubLBQS-rg \
--name myPublicIPPrefixOutbound \
--length 28 \
--zone 1

For more information on scaling outbound NAT and outbound connectivity, see Scale outbound NAT with
multiple IP addresses.
Create outbound frontend IP configuration
Create a new frontend IP configuration with az network lb frontend-ip create:
Select the public IP or public IP prefix commands based on your decision in the previous step.
Public IP
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP address myPublicIPOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \


--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-address myPublicIPOutbound

Public IP prefix
Named myFrontEndOutbound .
In resource group CreatePubLBQS-rg .
Associated with public IP prefix myPublicIPPrefixOutbound .
Associated with load balancer myLoadBalancer .

az network lb frontend-ip create \


--resource-group CreatePubLBQS-rg \
--name myFrontEndOutbound \
--lb-name myLoadBalancer \
--public-ip-prefix myPublicIPPrefixOutbound

Create outbound pool


Create a new outbound pool with az network lb address-pool create:
Named myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

az network lb address-pool create \


--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myBackendPoolOutbound

Create outbound rule


Create a new outbound rule for the outbound backend pool with az network lb outbound-rule create:
Named myOutboundRule .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer
Associated with frontend myFrontEndOutbound .
Protocol All .
Idle timeout of 15 minutes .
10000 outbound ports.
Associated with backend pool myBackEndPoolOutbound .
az network lb outbound-rule create \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer \
--name myOutboundRule \
--frontend-ip-configs myFrontEndOutbound \
--protocol All \
--idle-timeout 15 \
--outbound-ports 10000 \
--address-pool myBackEndPoolOutbound

Add virtual machines to outbound pool


Add the virtual machines to the outbound pool with az network nic ip-config address-pool add:
In backend address pool myBackEndPoolOutbound .
In resource group CreatePubLBQS-rg .
Associated with load balancer myLoadBalancer .

array=(myNicVM1 myNicVM2 myNicVM3)


for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackendPoolOutbound \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done

Install IIS
Use az vm extension set to install IIS on the virtual machines and set the default website to the computer name.

array=(myVM1 myVM2 myVM3)


for vm in "${array[@]}"
do
az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $vm \
--resource-group CreatePubLBQS-rg \
    --settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
done

Test the load balancer


To get the public IP address of the load balancer, use az network public-ip show.
Copy the public IP address, and then paste it into the address bar of your browser.

az network public-ip show \


--resource-group CreatePubLBQS-rg \
--name myPublicIP \
--query ipAddress \
--output tsv
Clean up resources
When no longer needed, use the az group delete command to remove the resource group, load balancer, and all
related resources.

az group delete \
--name CreatePubLBQS-rg

Next steps
In this quickstart:
You created a standard or basic public load balancer.
Attached virtual machines.
Configured the load balancer traffic rule and health probe.
Tested the load balancer.
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Manage public IP addresses
3/5/2021 • 12 minutes to read

Learn about a public IP address and how to create, change, and delete one. A public IP address is a resource with
its own configurable settings. Assigning a public IP address to an Azure resource that supports public IP
addresses enables:
Inbound communication from the Internet to the resource, such as Azure Virtual Machines (VM), Azure
Application Gateways, Azure Load Balancers, Azure VPN Gateways, and others. You can still communicate
with some resources, such as VMs, from the Internet, if a VM doesn't have a public IP address assigned to it,
as long as the VM is part of a load balancer back-end pool, and the load balancer is assigned a public IP
address. To determine whether a resource for a specific Azure service can be assigned a public IP address, or
whether it can be communicated with through the public IP address of a different Azure resource, see the
documentation for the service.
Outbound connectivity to the Internet using a predictable IP address. For example, a virtual machine can
communicate outbound to the Internet without a public IP address assigned to it, but its address is network
address translated by Azure to an unpredictable public address, by default. Assigning a public IP address to a
resource enables you to know which IP address is used for the outbound connection. Though predictable, the
address can change, depending on the assignment method chosen. For more information, see Create a
public IP address. To learn more about outbound connections from Azure resources, see Understand
outbound connections.
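The "predictable IP address" point above can be sketched with the CLI. The resource names here are hypothetical, and the commands assume an authenticated session; a Standard SKU public IP is always statically assigned, so the address is known as soon as the resource is created:

```shell
# Create a statically assigned Standard SKU public IP (hypothetical names).
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStaticIP \
  --sku Standard \
  --allocation-method Static

# Show the assigned address; it will not change for the life of the resource.
az network public-ip show \
  --resource-group myResourceGroup \
  --name myStaticIP \
  --query ipAddress \
  --output tsv
```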

Before you begin


NOTE
This article has been updated to use the Azure Az PowerShell module. The Az PowerShell module is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see Install Azure PowerShell.
To learn how to migrate to the Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Complete the following tasks before completing steps in any section of this article:
If you don't already have an Azure account, sign up for a free trial account.
If using the portal, open https://portal.azure.com, and log in with your Azure account.
If using PowerShell commands to complete tasks in this article, either run the commands in the Azure Cloud
Shell, or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you
can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with
your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run
Get-Module -ListAvailable Az to find the installed version. If you need to upgrade, see Install Azure
PowerShell module. If you are running PowerShell locally, you also need to run Connect-AzAccount to create a
connection with Azure.
If using Azure Command-line interface (CLI) commands to complete tasks in this article, either run the
commands in the Azure Cloud Shell, or by running the CLI from your computer. This tutorial requires the
Azure CLI version 2.0.31 or later. Run az --version to find the installed version. If you need to install or
upgrade, see Install Azure CLI. If you are running the Azure CLI locally, you also need to run az login to
create a connection with Azure.
The account you log into, or connect to Azure with, must be assigned to the network contributor role or to a
custom role that is assigned the appropriate actions listed in Permissions.
Public IP addresses have a nominal charge. To view the pricing, read the IP address pricing page.

Create a public IP address


For instructions on how to Create Public IP addresses using the Portal, PowerShell, or CLI -- please refer to the
following pages:
Create public IP addresses - portal
Create public IP addresses - PowerShell
Create public IP addresses - Azure CLI

NOTE
Though the portal provides the option to create two public IP address resources (one IPv4 and one IPv6), the PowerShell
and CLI commands create one resource with an address for one IP version or the other. If you want two public IP address
resources, one for each IP version, you must run the command twice, specifying different names and IP versions for the
public IP address resources.

For additional detail on the specific attributes of a Public IP address during creation, see the table below.

SETTING REQUIRED? DETAILS

IP Version Yes Select IPv4, IPv6, or Both. Selecting
Both results in two public IP addresses
being created: one IPv4 address and one
IPv6 address. Learn more about IPv6
in Azure VNETs.

SKU Yes All public IP addresses created before


the introduction of SKUs are Basic
SKU public IP addresses. You cannot
change the SKU after the public IP
address is created. A standalone virtual
machine, virtual machines within an
availability set, or virtual machine scale
sets can use Basic or Standard SKUs.
Mixing SKUs between virtual machines
within availability sets or scale sets or
standalone VMs is not allowed. Basic
SKU: If you are creating a public IP
address in a region that supports
availability zones, the Availability
zone setting is set to None by default.
Basic Public IPs do not support
Availability zones. Standard SKU: A
Standard SKU public IP can be
associated to a virtual machine or a
load balancer front end. If you're
creating a public IP address in a region
that supports availability zones, the
Availability zone setting is set to
Zone-redundant by default. For more
information about availability zones,
see the Availability zone setting. The
standard SKU is required if you
associate the address to a Standard
load balancer. To learn more about
standard load balancers, see Azure
load balancer standard SKU. When you
assign a standard SKU public IP
address to a virtual machine’s network
interface, you must explicitly allow the
intended traffic with a network security
group. Communication with the
resource fails until you create and
associate a network security group and
explicitly allow the desired traffic.

Tier Yes Indicates if the IP address is associated


with a region (Regional) or is
"anycast" from multiple regions
(Global). Note that a "Global Tier" IP is
preview functionality for Standard IPs,
and currently only utilized for the
Cross-Region Load Balancer.

Name Yes The name must be unique within the


resource group you select.

IP address assignment Yes Dynamic: Dynamic addresses are


assigned only after a public IP address
is associated to an Azure resource, and
the resource is started for the first
time. Dynamic addresses can change if
they're assigned to a resource, such as
a virtual machine, and the virtual
machine is stopped (deallocated), and
then restarted. The address remains
the same if a virtual machine is
rebooted or stopped (but not
deallocated). Dynamic addresses are
released when a public IP address
resource is dissociated from a resource
it is associated to. Static: Static
addresses are assigned when a public
IP address is created. Static addresses
are not released until a public IP
address resource is deleted. If the
address is not associated to a resource,
you can change the assignment
method after the address is created. If
the address is associated to a resource,
you may not be able to change the
assignment method. If you select IPv6
for the IP version , the assignment
method must be Dynamic for Basic
SKU. Standard SKU addresses are Static
for both IPv4 and IPv6.

Idle timeout (minutes) No How many minutes to keep a TCP or


HTTP connection open without relying
on clients to send keep-alive
messages. If you select IPv6 for IP
Version , this value can't be changed.

DNS name label No Must be unique within the Azure


location you create the name in (across
all subscriptions and all customers).
Azure automatically registers the name
and IP address in its DNS so you can
connect to a resource with the name.
Azure appends a default suffix such
as location.cloudapp.azure.com (where
location is the location you select) to
the name you provide, to create the
fully qualified DNS name. If you choose
to create both address versions, the
same DNS name is assigned to both
the IPv4 and IPv6 addresses. Azure's
default DNS contains both IPv4 A and
IPv6 AAAA name records and
responds with both records when the
DNS name is looked up. The client
chooses which address (IPv4 or IPv6)
to communicate with. Instead of, or in
addition to, using the DNS name label
with the default suffix, you can use the
Azure DNS service to configure a DNS
name with a custom suffix that
resolves to the public IP address. For
more information, see Use Azure DNS
with an Azure public IP address.

Name (Only visible if you select IP Yes, if you select IP Version of Both The name must be different than the
Version of Both ) name you enter for the first Name in
this list. If you choose to create both
an IPv4 and an IPv6 address, the
portal creates two separate public IP
address resources, one with each IP
address version assigned to it.

IP address assignment (Only visible if Yes, if you select IP Version of Both Same restrictions as IP Address
you select IP Version of Both ) Assignment above

Subscription Yes Must exist in the same subscription as


the resource to which you'll associate
the Public IP's.

Resource group Yes Can exist in the same, or different,


resource group as the resource to
which you'll associate the Public IP's.

Location Yes Must exist in the same location, also


referred to as region, as the resource
to which you'll associate the Public IP's.

Availability zone No This setting only appears if you select a


supported location. For a list of
supported locations, see Availability
zones overview. If you selected the
Basic SKU, None is automatically
selected for you. If you prefer to
guarantee a specific zone, you may
select a specific zone. Either choice is
not zone-redundant. If you selected
the Standard SKU: Zone-redundant is
automatically selected for you and
makes your data path resilient to zone
failure. If you prefer to guarantee a
specific zone, which is not resilient to
zone failure, you may select a specific
zone.
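The DNS name label and default suffix described above can be sketched in code. This is a minimal illustration; the label-validation pattern below (lowercase letters, digits, and hyphens, starting with a letter and ending with a letter or digit) is an assumption based on common Azure naming rules, not an authoritative check:

```python
import re

DNS_SUFFIX = "cloudapp.azure.com"

# Assumed label constraints: lowercase alphanumerics and hyphens,
# starting with a letter, ending with a letter or digit.
LABEL_RE = re.compile(r"^[a-z][a-z0-9-]{1,61}[a-z0-9]$")

def fqdn(label, location):
    """Build the fully qualified DNS name Azure would assign to a
    public IP with the given DNS name label in the given location."""
    if not LABEL_RE.match(label):
        raise ValueError(f"invalid DNS name label: {label!r}")
    return f"{label}.{location}.{DNS_SUFFIX}"
```

For example, `fqdn("myapp", "westeurope")` yields `myapp.westeurope.cloudapp.azure.com`, the name Azure's default DNS would resolve for you.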

View, modify settings for, or delete a public IP address


View/List: Review settings for a Public IP, including the SKU, address, and any applicable association (for example, a Virtual Machine NIC or Load Balancer frontend).
Modify: Modify settings using the information in step 4 of create a public IP address, such as the idle timeout, DNS name label, or assignment method. (For the full process of upgrading a Public IP SKU from Basic to Standard, see Upgrade Azure public IP addresses.)

WARNING
To change the assignment for a Public IP address from static to dynamic, you must first dissociate the address from any
applicable IP configurations (see the Delete section). Also note that when you change the assignment method from static to
dynamic, you lose the IP address that was assigned to the public IP address. While the Azure public DNS servers maintain
a mapping between static or dynamic addresses and any DNS name label (if you defined one), a dynamic IP address can
change when the virtual machine is started after being in the stopped (deallocated) state. To prevent the address from
changing, assign a static IP address.

OPERATION: View
AZURE PORTAL: In the Overview section of a Public IP
AZURE POWERSHELL: Get-AzPublicIpAddress to retrieve a public IP address object and view its settings
AZURE CLI: az network public-ip show to show settings

OPERATION: List
AZURE PORTAL: Under the Public IP addresses category
AZURE POWERSHELL: Get-AzPublicIpAddress to retrieve one or more public IP address objects and view their settings
AZURE CLI: az network public-ip list to list public IP addresses

OPERATION: Modify
AZURE PORTAL: For an IP that is dissociated, select Configuration to modify the idle timeout, DNS name label, or change the assignment of a Basic IP from static to dynamic
AZURE POWERSHELL: Set-AzPublicIpAddress to update settings
AZURE CLI: az network public-ip update to update

Delete: Deletion of Public IPs requires that the Public IP object not be associated with any IP configuration or Virtual Machine NIC. See the table below for more details.
RESOURCE: Virtual Machine
AZURE PORTAL: Select Dissociate to dissociate the IP address from the NIC configuration, then select Delete.
AZURE POWERSHELL: Set-AzPublicIpAddress to dissociate the IP address from the NIC configuration; Remove-AzPublicIpAddress to delete
AZURE CLI: az network public-ip update --remove to dissociate the IP address from the NIC configuration; az network public-ip delete to delete

RESOURCE: Load Balancer Frontend
AZURE PORTAL: Navigate to an unused Public IP address, select Associate, and pick the Load Balancer with the relevant frontend IP configuration to replace it (then the old IP can be deleted using the same method as for a VM)
AZURE POWERSHELL: Set-AzLoadBalancerFrontendIpConfig to associate a new frontend IP config with the public Load Balancer; Remove-AzPublicIpAddress to delete; can also use Remove-AzLoadBalancerFrontendIpConfig to remove the frontend IP config if there is more than one
AZURE CLI: az network lb frontend-ip update to associate a new frontend IP config with the public Load Balancer; az network public-ip delete to delete; can also use az network lb frontend-ip delete to remove the frontend IP config if there is more than one

RESOURCE: Firewall
AZURE PORTAL: N/A
AZURE POWERSHELL: Deallocate() to deallocate the firewall and remove all IP configurations
AZURE CLI: az network firewall ip-config delete to remove the IP (but must use PowerShell to deallocate first)

Virtual Machine Scale Sets


When using a virtual machine scale set with Public IPs, there are no separate Public IP objects associated with the individual virtual machine instances. However, a Public IP Prefix object can be used to generate the instance IPs.
To list the Public IPs on a virtual machine scale set, you can use PowerShell (Get-AzPublicIpAddress -
VirtualMachineScaleSetName) or CLI (az vmss list-instance-public-ips).
For more information, see Networking for Azure virtual machine scale sets.
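As a sketch of consuming that CLI output, the JSON list returned by az vmss list-instance-public-ips can be filtered down to the addresses themselves. The sample payload below is illustrative only; the real command returns richer public IP resource objects:

```python
import json

# Illustrative (hypothetical) output shape; the actual command returns
# full public IP resource objects with many more fields.
sample = """
[
  {"name": "pub1", "ipAddress": "20.51.0.4"},
  {"name": "pub1", "ipAddress": "20.51.0.7"}
]
"""

def instance_ips(raw_json):
    """Extract the instance-level public IP addresses from the JSON list."""
    return [entry["ipAddress"] for entry in json.loads(raw_json)]
```

Piping the command through a small filter like this (or through `--query` with JMESPath) gives one address per scale set instance.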

Assign a public IP address


Learn how to assign a public IP address to the following resources:
A Windows or Linux Virtual Machine (when creating), or to an existing Virtual Machine
Public Load Balancer
Application Gateway
Site-to-site connection using a VPN Gateway
Virtual Machine Scale Set

Permissions
To perform tasks on public IP addresses, your account must be assigned to the network contributor role or to a
custom role that is assigned the appropriate actions listed in the following table:

ACTION: Microsoft.Network/publicIPAddresses/read
NAME: Read a public IP address

ACTION: Microsoft.Network/publicIPAddresses/write
NAME: Create or update a public IP address

ACTION: Microsoft.Network/publicIPAddresses/delete
NAME: Delete a public IP address

ACTION: Microsoft.Network/publicIPAddresses/join/action
NAME: Associate a public IP address to a resource
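As a sketch, a custom role limited to exactly these four actions might be defined as follows. The role name, description, and assignable scope are placeholders, not values from the source:

```python
# Sketch of a custom RBAC role definition restricted to public IP
# operations. The name, description, and subscription ID are
# hypothetical placeholders.
custom_role = {
    "Name": "Public IP Operator (example)",
    "IsCustom": True,
    "Description": "Manage public IP addresses only.",
    "Actions": [
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/publicIPAddresses/write",
        "Microsoft.Network/publicIPAddresses/delete",
        "Microsoft.Network/publicIPAddresses/join/action",
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}
```

A definition of this shape could be serialized to JSON and passed to the role-definition creation commands; the built-in Network Contributor role already includes these actions if a custom role is not needed.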

Next steps
Create a public IP address using PowerShell or Azure CLI sample scripts, or using Azure Resource Manager
templates
Create and assign Azure Policy definitions for public IP addresses
High availability for Azure SQL Database and SQL
Managed Instance
3/31/2021 • 11 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee
that your database is up and running a minimum of 99.99% of the time (for the specific SLA for
different tiers, see SLA for Azure SQL Database and SQL Managed Instance), without worrying
about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks,
such as patching, backups, Windows and Azure SQL upgrades, as well as unplanned events such as underlying
hardware, software, or network failures. When the underlying database in Azure SQL Database is patched or
fails over, the downtime is not noticeable if you employ retry logic in your app. SQL Database and SQL Managed
Instance can quickly recover even in the most critical circumstances ensuring that your data is always available.
The high availability solution is designed to ensure that committed data is never lost due to failures, that
maintenance operations do not affect your workload, and that the database will not be a single point of failure in
your software architecture. There are no maintenance windows or downtimes that should require you to stop
the workload while the database is upgraded or maintained.
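The retry logic mentioned above can be sketched as follows. The exception type, attempt count, and backoff parameters are illustrative assumptions, not an official recommendation; real applications should retry on the transient error codes their database driver surfaces:

```python
import random
import time

def with_retries(operation, attempts=5, base_delay=1.0):
    """Run `operation`, retrying with exponential backoff and jitter on
    transient failures such as the brief reconnect window during a
    SQL Database failover. Sketch only; tune limits for your workload."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            # Exponential backoff with jitter: base, 2x, 4x, ... plus noise.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

With logic like this in place, a connection dropped during patching or failover is transparently re-established on a later attempt instead of failing the request.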
There are two high availability architectural models:
Standard availability model that is based on a separation of compute and storage. It relies on high
availability and reliability of the remote storage tier. This architecture targets budget-oriented business
applications that can tolerate some performance degradation during maintenance activities.
Premium availability model that is based on a cluster of database engine processes. It relies on the fact
that there is always a quorum of available database engine nodes. This architecture targets mission critical
applications with high IO performance, high transaction rate and guarantees minimal performance impact to
your workload during maintenance activities.
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database
engine and Windows operating system, and most users would not notice that upgrades are performed
continuously.

Basic, Standard, and General Purpose service tier locally redundant availability
The Basic, Standard, and General Purpose service tiers leverage the standard availability architecture for both
serverless and provisioned compute. The following figure shows four different nodes with the separated
compute and storage layers.
The standard availability model includes two layers:
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe , controls health
of the node, and performs failover to another node if necessary.
A stateful data layer with the database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure blob
storage has built-in data availability and redundancy feature. It guarantees that every record in the log file or
page in the data file will be preserved even if sqlservr.exe process crashes.

Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service
Fabric will move the stateless sqlservr.exe process to another stateless compute node with sufficient free
capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly
initialized sqlservr.exe process. This process guarantees 99.99% availability, but a heavy workload may
experience some performance degradation during the transition since the new sqlservr.exe process starts with
cold cache.

General Purpose service tier zone redundant availability (Preview)


Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned
compute. This configuration utilizes Azure Availability Zones to replicate databases across multiple physical
locations within an Azure region. By selecting zone redundancy, you can make your new and existing serverless
and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures,
including catastrophic datacenter outages, without any changes to the application logic.
Zone redundant configuration for the general purpose tier has two layers:
A stateful data layer with the database files (.mdf/.ldf) that are stored in zone-redundant storage (ZRS). With
ZRS, the data and log files are synchronously copied across three physically isolated Azure availability zones.
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of
the node, and performs failover to another node if necessary. For zone redundant serverless and provisioned
general purpose databases, nodes with spare capacity are readily available in other Availability Zones for
failover.
The zone redundant version of the high availability architecture for the general purpose service tier is illustrated
by the following diagram:
IMPORTANT
Zone redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available
in SQL Managed Instance. Zone redundant configuration for serverless and provisioned general purpose tier is only
available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia
East, Japan East, UK South, and France Central.

NOTE
General Purpose databases with a size of 80 vcores may experience performance degradation with zone redundant
configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and
downgrading a zone redundant database from Business Critical to General Purpose may experience slower performance
for any single databases larger than 1 TB. Please see our latency documentation on scaling a database for more
information.

NOTE
The preview is not covered under Reserved Instance

Premium and Business Critical service tier locally redundant availability
Premium and Business Critical service tiers leverage the Premium availability model, which integrates compute
resources ( sqlservr.exe process) and storage (locally attached SSD) on a single node. High availability is
achieved by replicating both compute and storage to additional nodes creating a three to four-node cluster.
The underlying database files (.mdf/.ldf) are placed on the attached SSD storage to provide very low latency IO
to your workload. High availability is implemented using a technology similar to SQL Server Always On
availability groups. The cluster includes a single primary replica that is accessible for read-write customer
workloads, and up to three secondary replicas (compute and storage) containing copies of data. The primary
node constantly pushes changes to the secondary nodes in order and ensures that the data is synchronized to at
least one secondary replica before committing each transaction. This process guarantees that if the primary
node crashes for any reason, there is always a fully synchronized node to fail over to. The failover is initiated by
the Azure Service Fabric. Once the secondary replica becomes the new primary node, another secondary replica
is created to ensure the cluster has enough nodes (quorum set). Once failover is complete, Azure SQL
connections are automatically redirected to the new primary node.
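The commit rule described above (a transaction commits only after the data is synchronized to at least one secondary replica) can be illustrated with a toy model. This is purely an illustration of the rule; the actual replication protocol is internal to the service:

```python
class Secondary:
    """Toy secondary replica that hardens log records when healthy."""
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.log = []

    def harden(self, record):
        if self.healthy:
            self.log.append(record)
            return True
        return False

class Primary:
    """Toy primary: pushes each log record to all secondaries and
    commits only once at least one secondary acknowledges it."""
    def __init__(self, secondaries):
        self.secondaries = secondaries

    def commit(self, record):
        acks = sum(1 for s in self.secondaries if s.harden(record))
        return acks >= 1  # quorum rule: at least one synchronized copy
```

In this model a commit succeeds as long as one healthy secondary remains, which is why a primary crash always leaves a fully synchronized node to fail over to.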
As an extra benefit, the premium availability model includes the ability to redirect read-only Azure SQL
connections to one of the secondary replicas. This feature is called Read Scale-Out. It provides 100% additional
compute capacity at no extra charge to off-load read-only operations, such as analytical workloads, from the
primary replica.

Premium and Business Critical service tier zone redundant availability


By default, the cluster of nodes for the premium availability model is created in the same datacenter. With the
introduction of Azure Availability Zones, SQL Database can place different replicas of the Business Critical
database to different availability zones in the same region. To eliminate a single point of failure, the control ring
is also duplicated across multiple zones as three gateway rings (GW). The routing to a specific gateway ring is
controlled by Azure Traffic Manager (ATM). Because the zone redundant configuration in the Premium or
Business Critical service tiers does not create additional database redundancy, you can enable it at no extra cost.
By selecting a zone redundant configuration, you can make your Premium or Business Critical databases
resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the
application logic. You can also convert any existing Premium or Business Critical databases or pools to the zone
redundant configuration.
Because the zone redundant databases have replicas in different datacenters with some distance between them,
the increased network latency may increase the commit time and thus impact the performance of some OLTP
workloads. You can always return to the single-zone configuration by disabling the zone redundancy setting.
This process is an online operation similar to the regular service tier upgrade. At the end of the process, the
database or pool is migrated from a zone redundant ring to a single zone ring or vice versa.
IMPORTANT
When using the Business Critical tier, zone redundant configuration is only available when the Gen5 compute hardware is
selected. For up to date information about the regions that support zone redundant databases, see Services support by
region.

NOTE
This feature is not available in SQL Managed Instance.

The zone redundant version of the high availability architecture is illustrated by the following diagram:

Hyperscale service tier availability


The Hyperscale service tier architecture is described in Distributed functions architecture and is only currently
available for SQL Database, not SQL Managed Instance.
The availability model in Hyperscale includes four layers:
A stateless compute layer that runs the sqlservr.exe processes and contains only transient and cached data,
such as non-covering RBPEX cache, TempDB, model database, etc. on the attached SSD, and plan cache, buffer
pool, and columnstore pool in memory. This stateless layer includes the primary compute replica and
optionally a number of secondary compute replicas that can serve as failover targets.
A stateless storage layer formed by page servers. This layer is the distributed storage engine for the
sqlservr.exe processes running on the compute replicas. Each page server contains only transient and
cached data, such as covering RBPEX cache on the attached SSD, and data pages cached in memory. Each
page server has a paired page server in an active-active configuration to provide load balancing, redundancy,
and high availability.
A stateful transaction log storage layer formed by the compute node running the Log service process, the
transaction log landing zone, and transaction log long term storage. Landing zone and long term storage use
Azure Storage, which provides availability and redundancy for transaction log, ensuring data durability for
committed transactions.
A stateful data storage layer with the database files (.mdf/.ndf) that are stored in Azure Storage and are
updated by page servers. This layer uses data availability and redundancy features of Azure Storage. It
guarantees that every page in a data file will be preserved even if processes in other layers of Hyperscale
architecture crash, or if compute nodes fail.
Compute nodes in all Hyperscale layers run on Azure Service Fabric, which controls health of each node and
performs failovers to available healthy nodes as necessary.
For more information on high availability in Hyperscale, see Database High Availability in Hyperscale.

Accelerated Database Recovery (ADR)


Accelerated Database Recovery (ADR) is a new database engine feature that greatly improves database
availability, especially in the presence of long running transactions. ADR is currently available for Azure SQL
Database, Azure SQL Managed Instance, and Azure Synapse Analytics.

Testing application fault resiliency


High availability is a fundamental part of the SQL Database and SQL Managed Instance platform that works
transparently for your database application. However, we recognize that you may want to test how the
automatic failover operations initiated during planned or unplanned events would impact an application before
you deploy it to production. You can manually trigger a failover by calling a special API to restart a database, an
elastic pool, or a managed instance. In the case of a zone redundant serverless or provisioned General Purpose
database or elastic pool, the API call would result in redirecting client connections to the new primary in an
Availability Zone different from the Availability Zone of the old primary. So in addition to testing how failover
impacts existing database sessions, you can also verify if it changes the end-to-end performance due to changes
in network latency. Because the restart operation is intrusive and a large number of them could stress the
platform, only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.
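A client-side guard that mirrors this 15-minute limit might look like the sketch below. The service enforces the limit regardless, so this is purely illustrative of the rule, useful for test harnesses that drive failover calls:

```python
import time

FAILOVER_COOLDOWN_S = 15 * 60  # one failover per resource per 15 minutes

_last_failover = {}

def may_trigger_failover(resource_id, now=None):
    """Return True if a manual failover for this database, elastic pool,
    or managed instance would be within the once-per-15-minutes limit,
    and record the attempt. Sketch only."""
    t = time.monotonic() if now is None else now
    last = _last_failover.get(resource_id)
    if last is not None and t - last < FAILOVER_COOLDOWN_S:
        return False
    _last_failover[resource_id] = t
    return True
```

The cooldown is tracked per resource, matching the description above: two different databases can each be failed over within the same window.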
A failover can be initiated using PowerShell, REST API, or Azure CLI:

DEPLOYMENT TYPE: Database
POWERSHELL: Invoke-AzSqlDatabaseFailover
REST API: Database failover
AZURE CLI: az rest may be used to invoke a REST API call from Azure CLI

DEPLOYMENT TYPE: Elastic pool
POWERSHELL: Invoke-AzSqlElasticPoolFailover
REST API: Elastic pool failover
AZURE CLI: az rest may be used to invoke a REST API call from Azure CLI

DEPLOYMENT TYPE: Managed Instance
POWERSHELL: Invoke-AzSqlInstanceFailover
REST API: Managed Instances - Failover
AZURE CLI: az sql mi failover
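As a sketch of the az rest route for a database, the ARM URI for the database failover operation can be constructed as shown below. The api-version default here is an assumption; check the REST API reference for the current value:

```python
ARM = "https://management.azure.com"

def database_failover_url(sub, rg, server, db,
                          api_version="2021-02-01-preview"):
    """Build the ARM URI for the database failover REST operation; it
    is invoked with POST (e.g. `az rest --method post --url <uri>`).
    The api-version default is an assumption, not a confirmed value."""
    return (f"{ARM}/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.Sql/servers/{server}"
            f"/databases/{db}/failover?api-version={api_version}")
```

Passing a URI built this way to `az rest --method post` is the pattern the table's "az rest" entries refer to for databases and elastic pools.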

IMPORTANT
The Failover command is not available for readable secondary replicas of Hyperscale databases.

Conclusion
Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution that is deeply
integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure
Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in this
document, not yet applicable to Azure SQL Managed Instance). In addition, SQL Database and SQL Managed
Instance leverage the Always On availability group technology from the SQL Server instance for replication and
failover. The combination of these technologies enables applications to fully realize the benefits of a mixed
storage model and support the most demanding SLAs.

Next steps
Learn about Azure Availability Zones
Learn about Service Fabric
Learn about Azure Traffic Manager
Learn How to initiate a manual failover on SQL Managed Instance
For more options for high availability and disaster recovery, see Business Continuity
High availability for Azure SQL Database and SQL
Managed Instance
3/31/2021 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee
that your database is up and running minimum of 99.99% of time (For more information regarding specific SLA
for different tiers, Please refer SLA for Azure SQL Database and SQL Managed Instance), without worrying
about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks,
such as patching, backups, Windows and Azure SQL upgrades, as well as unplanned events such as underlying
hardware, software, or network failures. When the underlying database in Azure SQL Database is patched or
fails over, the downtime is not noticeable if you employ retry logic in your app. SQL Database and SQL Managed
Instance can quickly recover even in the most critical circumstances ensuring that your data is always available.
The high availability solution is designed to ensure that committed data is never lost due to failures, that
maintenance operations do not affect your workload, and that the database will not be a single point of failure in
your software architecture. There are no maintenance windows or downtimes that should require you to stop
the workload while the database is upgraded or maintained.
There are two high availability architectural models:
Standard availability model that is based on a separation of compute and storage. It relies on high
availability and reliability of the remote storage tier. This architecture targets budget-oriented business
applications that can tolerate some performance degradation during maintenance activities.
Premium availability model that is based on a cluster of database engine processes. It relies on the fact
that there is always a quorum of available database engine nodes. This architecture targets mission critical
applications with high IO performance, high transaction rate and guarantees minimal performance impact to
your workload during maintenance activities.
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database
engine and Windows operating system, and most users would not notice that upgrades are performed
continuously.

Basic, Standard, and General Purpose service tier locally redundant


availability
The Basic, Standard, and General Purpose service tiers leverage the standard availability architecture for both
serverless and provisioned compute. The following figure shows four different nodes with the separated
compute and storage layers.
The standard availability model includes two layers:
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe , controls health
of the node, and performs failover to another node if necessary.
A stateful data layer with the database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure blob
storage has built-in data availability and redundancy feature. It guarantees that every record in the log file or
page in the data file will be preserved even if sqlservr.exe process crashes.

Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service
Fabric will move the stateless sqlservr.exe process to another stateless compute node with sufficient free
capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly
initialized sqlservr.exe process. This process guarantees 99.99% availability, but a heavy workload may
experience some performance degradation during the transition since the new sqlservr.exe process starts with
cold cache.

General Purpose service tier zone redundant availability (Preview)


Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned
compute. This configuration utilizes Azure Availability Zones to replicate databases across multiple physical
locations within an Azure region.By selecting zone redundancy, you can make yournew and existing serverlesss
and provisioned generalpurpose single databases and elastic pools resilient to a much larger set of failures,
including catastrophic datacenter outages, without any changes of the application logic.
Zone redundant configuration for the general purpose tier has two layers:
A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS(zone-redundant storage). Using
ZRS the data and log files are synchronously copied across three physically-isolated Azure availability zones.
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of
the node, and performs failover to another node if necessary. For zone redundant serverless and provisioned
general purpose databases, nodes with spare capacity are readily available in other Availability Zones for
failover.
The zone redundant version of the high availability architecture for the general purpose service tier is illustrated
by the following diagram:
IMPORTANT
Zone redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available
in SQL Managed Instance. Zone redundant configuration for serverless and provisioned general purpose tier is only
available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia
East, Japan East, UK South, and France Central.

NOTE
General Purpose databases with a size of 80 vcore may experience performance degradation with zone redundant
configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and
downgrading a zone redundant database from Business Critical to General Purpose may experience slower performance
for any single databases larger than 1 TB. Please see our latency documentation on scaling a database for more
information.

NOTE
The preview is not covered under Reserved Instance

Premium and Business Critical service tier locally redundant


availability
Premium and Business Critical service tiers leverage the Premium availability model, which integrates compute
resources ( sqlservr.exe process) and storage (locally attached SSD) on a single node. High availability is
achieved by replicating both compute and storage to additional nodes creating a three to four-node cluster.
The underlying database files (.mdf/.ldf) are placed on the attached SSD storage to provide very low latency IO
to your workload. High availability is implemented using a technology similar to SQL Server Always On
availability groups. The cluster includes a single primary replica that is accessible for read-write customer
workloads, and up to three secondary replicas (compute and storage) containing copies of data. The primary
node constantly pushes changes to the secondary nodes in order and ensures that the data is synchronized to at
least one secondary replica before committing each transaction. This process guarantees that if the primary
node crashes for any reason, there is always a fully synchronized node to fail over to. The failover is initiated by
the Azure Service Fabric. Once the secondary replica becomes the new primary node, another secondary replica
is created to ensure the cluster has enough nodes (quorum set). Once failover is complete, Azure SQL
connections are automatically redirected to the new primary node.
As an extra benefit, the premium availability model includes the ability to redirect read-only Azure SQL
connections to one of the secondary replicas. This feature is called Read Scale-Out. It provides 100% additional
compute capacity at no extra charge to off-load read-only operations, such as analytical workloads, from the
primary replica.

Premium and Business Critical service tier zone redundant availability


By default, the cluster of nodes for the premium availability model is created in the same datacenter. With the
introduction of Azure Availability Zones, SQL Database can place different replicas of the Business Critical
database to different availability zones in the same region. To eliminate a single point of failure, the control ring
is also duplicated across multiple zones as three gateway rings (GW). The routing to a specific gateway ring is
controlled by Azure Traffic Manager (ATM). Because the zone redundant configuration in the Premium or
Business Critical service tiers does not create additional database redundancy, you can enable it at no extra cost.
By selecting a zone redundant configuration, you can make your Premium or Business Critical databases
resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the
application logic. You can also convert any existing Premium or Business Critical databases or pools to the zone
redundant configuration.
Because the zone redundant databases have replicas in different datacenters with some distance between them,
the increased network latency may increase the commit time and thus impact the performance of some OLTP
workloads. You can always return to the single-zone configuration by disabling the zone redundancy setting.
This process is an online operation similar to the regular service tier upgrade. At the end of the process, the
database or pool is migrated from a zone redundant ring to a single zone ring or vice versa.
IMPORTANT
When using the Business Critical tier, zone redundant configuration is only available when the Gen5 compute hardware is
selected. For up to date information about the regions that support zone redundant databases, see Services support by
region.

NOTE
This feature is not available in SQL Managed Instance.

The zone redundant version of the high availability architecture is illustrated by the following diagram:

Hyperscale service tier availability


The Hyperscale service tier architecture is described in Distributed functions architecture and is only currently
available for SQL Database, not SQL Managed Instance.
The availability model in Hyperscale includes four layers:
A stateless compute layer that runs the sqlservr.exe processes and contains only transient and cached data,
such as non-covering RBPEX cache, TempDB, model database, etc. on the attached SSD, and plan cache, buffer
pool, and columnstore pool in memory. This stateless layer includes the primary compute replica and
optionally a number of secondary compute replicas that can serve as failover targets.
A stateless storage layer formed by page servers. This layer is the distributed storage engine for the
sqlservr.exe processes running on the compute replicas. Each page server contains only transient and
cached data, such as covering RBPEX cache on the attached SSD, and data pages cached in memory. Each
page server has a paired page server in an active-active configuration to provide load balancing, redundancy,
and high availability.
A stateful transaction log storage layer formed by the compute node running the Log service process, the
transaction log landing zone, and transaction log long term storage. Landing zone and long term storage use
Azure Storage, which provides availability and redundancy for transaction log, ensuring data durability for
committed transactions.
A stateful data storage layer with the database files (.mdf/.ndf) that are stored in Azure Storage and are
updated by page servers. This layer uses data availability and redundancy features of Azure Storage. It
guarantees that every page in a data file will be preserved even if processes in other layers of Hyperscale
architecture crash, or if compute nodes fail.
Compute nodes in all Hyperscale layers run on Azure Service Fabric, which controls health of each node and
performs failovers to available healthy nodes as necessary.
For more information on high availability in Hyperscale, see Database High Availability in Hyperscale.

Accelerated Database Recovery (ADR)


Accelerated Database Recovery (ADR) is a new database engine feature that greatly improves database
availability, especially in the presence of long-running transactions. ADR is currently available for Azure SQL
Database, Azure SQL Managed Instance, and Azure Synapse Analytics.

Testing application fault resiliency


High availability is a fundamental part of the SQL Database and SQL Managed Instance platform that works
transparently for your database application. However, we recognize that you may want to test how the
automatic failover operations initiated during planned or unplanned events would impact an application before
you deploy it to production. You can manually trigger a failover by calling a special API to restart a database, an
elastic pool, or a managed instance. In the case of a zone redundant serverless or provisioned General Purpose
database or elastic pool, the API call would result in redirecting client connections to the new primary in an
Availability Zone different from the Availability Zone of the old primary. So in addition to testing how failover
impacts existing database sessions, you can also verify if it changes the end-to-end performance due to changes
in network latency. Because the restart operation is intrusive and a large number of them could stress the
platform, only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.
A failover can be initiated using PowerShell, REST API, or Azure CLI:

| Deployment type | PowerShell | REST API | Azure CLI |
| --- | --- | --- | --- |
| Database | Invoke-AzSqlDatabaseFailover | Database failover | az rest may be used to invoke a REST API call from Azure CLI |
| Elastic pool | Invoke-AzSqlElasticPoolFailover | Elastic pool failover | az rest may be used to invoke a REST API call from Azure CLI |
| Managed Instance | Invoke-AzSqlInstanceFailover | Managed Instances - Failover | az sql mi failover |

IMPORTANT
The Failover command is not available for readable secondary replicas of Hyperscale databases.
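The 15-minute throttle described above can be sketched as a simple guard that an automation script might apply before calling one of these failover APIs (illustrative Python; the function and constant names are hypothetical and not part of any Azure SDK):

```python
from datetime import datetime, timedelta

# Azure allows only one failover call per database, elastic pool, or
# managed instance every 15 minutes.
FAILOVER_COOLDOWN = timedelta(minutes=15)

def can_invoke_failover(last_failover_call, now):
    """Return True if enough time has passed since the last failover call
    against this resource to invoke another one."""
    if last_failover_call is None:
        return True  # no previous call recorded for this resource
    return now - last_failover_call >= FAILOVER_COOLDOWN
```

A script would record the timestamp of each failover invocation and consult this guard before retrying, rather than relying on the service to reject over-frequent calls.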

Conclusion
Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution that is deeply
integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure
Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in this
document, not yet applicable to Azure SQL Managed Instance). In addition, SQL Database and SQL Managed
Instance leverage the Always On availability group technology from the SQL Server instance for replication and
failover. The combination of these technologies enables applications to fully realize the benefits of a mixed
storage model and support the most demanding SLAs.

Next steps
Learn about Azure Availability Zones
Learn about Service Fabric
Learn about Azure Traffic Manager
Learn How to initiate a manual failover on SQL Managed Instance
For more options for high availability and disaster recovery, see Business Continuity
Azure Storage redundancy
4/7/2021 • 15 minutes to read

Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned
events, including transient hardware failures, network or power outages, and massive natural disasters.
Redundancy ensures that your storage account meets its availability and durability targets even in the face of
failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs
and higher availability. The factors that help determine which redundancy option you should choose include:
How your data is replicated in the primary region
Whether your data is replicated to a second region that is geographically distant to the primary region, to
protect against regional disasters
Whether your application requires read access to the replicated data in the secondary region if the primary
region becomes unavailable for any reason

Redundancy in the primary region


Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers
two options for how your data is replicated in the primary region:
Locally redundant storage (LRS) copies your data synchronously three times within a single physical
location in the primary region. LRS is the least expensive replication option, but is not recommended for
applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in
the primary region. For applications requiring high availability, Microsoft recommends using ZRS in the
primary region, and also replicating to a secondary region.

NOTE
Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage Gen2 workloads.

Locally-redundant storage
Locally redundant storage (LRS) replicates your data three times within a single data center in the primary
region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects
your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the
data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk,
Microsoft recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-
redundant storage (GZRS).
A write request to a storage account that is using LRS happens synchronously. The write operation returns
successfully only after the data is written to all three replicas.
The following diagram shows how your data is replicated within a single data center with LRS:
LRS is a good choice for the following scenarios:
If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.
If your application is restricted to replicating data only within a country or region due to data governance
requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated
may be in another country or region. For more information on paired regions, see Azure regions.
Zone-redundant storage
Zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability
zones in the primary region. Each availability zone is a separate physical location with independent power,
cooling, and networking. ZRS offers durability for Azure Storage data objects of at least 99.9999999999% (12
9's) over a given year.
With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a
zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may
affect your application if you access data before the updates have completed. When designing applications for
ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-
off.
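A minimal sketch of such a retry policy with exponential back-off and jitter (illustrative Python; production code would more likely use the retry options built into the Azure Storage client libraries):

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call `operation`, retrying transient failures with exponential
    back-off. Delays grow as base_delay * 2**attempt, capped at max_delay."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter spreads retries out
```

Jitter matters here: if every client retries on the same schedule after a zone failover, the synchronized retries themselves can overload the endpoint.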
A write request to a storage account that is using ZRS happens synchronously. The write operation returns
successfully only after the data is written to all replicas across the three availability zones.
Microsoft recommends using ZRS in the primary region for scenarios that require consistency, durability, and
high availability. ZRS is also recommended for restricting replication of data to within a country or region to
meet data governance requirements.
The following diagram shows how your data is replicated across availability zones in the primary region with
ZRS:
ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily
unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones
are permanently affected. For protection against regional disasters, Microsoft recommends using geo-zone-
redundant storage (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a
secondary region.
The following table shows which types of storage accounts support ZRS in which regions:

| Storage account type | Supported regions | Supported services |
| --- | --- | --- |
| General-purpose v2¹ | (Africa) South Africa North, (Asia Pacific) East Asia, (Asia Pacific) Southeast Asia, (Asia Pacific) Australia East, (Asia Pacific) Central India, (Asia Pacific) Japan East, (Asia Pacific) Korea Central, (Canada) Canada Central, (Europe) North Europe, (Europe) West Europe, (Europe) France Central, (Europe) Germany West Central, (Europe) Norway East, (Europe) Switzerland North, (Europe) UK South, (Middle East) UAE North, (South America) Brazil South, (US) Central US, (US) East US, (US) East US 2, (US) North Central US, (US) South Central US, (US) West US, (US) West US 2 | Block blobs, Page blobs², File shares (standard), Tables, Queues |
| BlockBlobStorage¹ | Asia Southeast, Australia East, Europe North, Europe West, France Central, Japan East, UK South, US East, US East 2, US West 2 | Premium block blobs only |
| FileStorage | Asia Southeast, Australia East, Europe North, Europe West, France Central, Japan East, UK South, US East, US East 2, US West 2 | Premium file shares only |

¹ The archive tier is not currently supported for ZRS accounts.

² Storage accounts that contain Azure managed disks for virtual machines always use LRS. Azure unmanaged
disks should also use LRS. It is possible to create a storage account for Azure unmanaged disks that uses GRS,
but it is not recommended due to potential issues with consistency over asynchronous geo-replication. Neither
managed nor unmanaged disks support ZRS or GZRS. For more information on managed disks, see Pricing for
Azure managed disks.
For information about which regions support ZRS, see Services support by region in What are Azure
Availability Zones?.

Redundancy in a secondary region


For applications requiring high availability, you can choose to additionally copy the data in your storage account
to a secondary region that is hundreds of miles away from the primary region. If your storage account is copied
to a secondary region, then your data is durable even in the case of a complete regional outage or a disaster in
which the primary region isn't recoverable.
When you create a storage account, you select the primary region for the account. The paired secondary region
is determined based on the primary region, and can't be changed. For more information about regions
supported by Azure, see Azure regions.
Azure Storage offers two options for copying your data to a secondary region:
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical
location in the primary region using LRS. It then copies your data asynchronously to a single physical
location in the secondary region. Within the secondary region, your data is copied synchronously three times
using LRS.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability
zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location
in the secondary region. Within the secondary region, your data is copied synchronously three times using
LRS.
NOTE
The primary difference between GRS and GZRS is how data is replicated in the primary region. Within the secondary
region, data is always replicated synchronously three times using LRS. LRS in the secondary region protects your data
against hardware failures.

With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there is a
failover to the secondary region. For read access to the secondary region, configure your storage account to use
read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more
information, see Read access to data in the secondary region.
If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the
failover has completed, the secondary region becomes the primary region, and you can again read and write
data. For more information on disaster recovery and to learn how to fail over to the secondary region, see
Disaster recovery and storage account failover.

IMPORTANT
Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in
data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region
and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in
time to which data can be recovered. Azure Storage typically has an RPO of less than 15 minutes, although there's
currently no SLA on how long it takes to replicate data to the secondary region.
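As a sketch, the worst-case RPO at any instant is the gap between the most recent primary write and the Last Sync Time (illustrative Python helper, not part of any SDK):

```python
from datetime import datetime, timedelta

def estimate_rpo(last_primary_write, last_sync_time):
    """Worst-case data loss window if the primary region were lost right now:
    everything written after the Last Sync Time may not yet exist on the
    secondary. Clamped at zero for the fully caught-up case."""
    return max(last_primary_write - last_sync_time, timedelta(0))
```

An application can compare this estimate against its own tolerance (for example, the typical sub-15-minute figure cited above) when deciding whether an account failover is acceptable.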

Geo-redundant storage


Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in
the primary region using LRS. It then copies your data asynchronously to a single physical location in a
secondary region that is hundreds of miles away from the primary region. GRS offers durability for Azure
Storage data objects of at least 99.99999999999999% (16 9's) over a given year.
A write operation is first committed to the primary location and replicated using LRS. The update is then
replicated asynchronously to the secondary region. When data is written to the secondary location, it's also
replicated within that location using LRS.
The following diagram shows how your data is replicated with GRS or RA-GRS:

Geo-zone-redundant storage


Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability
zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is
copied across three Azure availability zones in the primary region and is also replicated to a secondary
geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications
requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster
recovery.
With a GZRS storage account, you can continue to read and write data if an availability zone becomes
unavailable or is unrecoverable. Additionally, your data is also durable in the case of a complete regional outage
or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least
99.99999999999999% (16 9's) durability of objects over a given year.
The following diagram shows how your data is replicated with GZRS or RA-GZRS:

Only general-purpose v2 storage accounts support GZRS and RA-GZRS. For more information about storage
account types, see Azure storage account overview. GZRS and RA-GZRS support block blobs, page blobs (except
for VHD disks), files, tables, and queues.
GZRS and RA-GZRS are supported in the following regions:
(Africa) South Africa North
(Asia Pacific) East Asia
(Asia Pacific) Southeast Asia
(Asia Pacific) Australia East
(Asia Pacific) Central India
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Canada) Canada Central
(Europe) North Europe
(Europe) West Europe
(Europe) France Central
(Europe) Germany West Central
(Europe) Norway East
(Europe) Switzerland North
(Europe) UK South
(Middle East) UAE North
(South America) Brazil South
(US) Central US
(US) East US
(US) East US 2
(US) North Central US
(US) South Central US
(US) West US
(US) West US 2
For information on pricing, see pricing details for Blobs, Files, Queues, and Tables.

Read access to data in the secondary region


Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary
region to protect against regional outages. However, that data is available to be read only if the customer or
Microsoft initiates a failover from the primary to secondary region. When you enable read access to the
secondary region, your data is available to be read at all times, including in a situation where the primary region
becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-
GRS) or read-access geo-zone-redundant storage (RA-GZRS).

NOTE
Azure Files does not support read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage
(RA-GZRS).

Design your applications for read access to the secondary


If your storage account is configured for read access to the secondary region, then you can design your
applications to seamlessly shift to reading data from the secondary region if the primary region becomes
unavailable for any reason.
The secondary region is available for read access after you enable RA-GRS or RA-GZRS, so that you can test
your application in advance to make sure that it will properly read from the secondary in the event of an outage.
For more information about how to design your applications for high availability, see Use geo-redundancy to
design highly available applications.
When read access to the secondary is enabled, your application can read from the secondary endpoint as
well as from the primary endpoint. The secondary endpoint appends the suffix -secondary to the account name.
For example, if your primary endpoint for Blob storage is myaccount.blob.core.windows.net , then the secondary
endpoint is myaccount-secondary.blob.core.windows.net . The account access keys for your storage account are
the same for both the primary and secondary endpoints.
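The naming convention can be captured in a small helper (illustrative Python; the function is hypothetical, not part of any Azure SDK):

```python
def secondary_endpoint(primary_endpoint):
    """Derive the RA-GRS/RA-GZRS secondary endpoint from a primary endpoint
    by appending '-secondary' to the account name (the first DNS label)."""
    account, _, rest = primary_endpoint.partition(".")
    return f"{account}-secondary.{rest}"
```

Because the account keys are the same for both endpoints, a client can switch hosts without changing credentials.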
Check the Last Sync Time property
Because data is replicated to the secondary region asynchronously, the secondary region is often behind the
primary region. If a failure happens in the primary region, it's likely that all writes to the primary will not yet
have been replicated to the secondary.
To determine which write operations have been replicated to the secondary region, your application can check
the Last Sync Time property for your storage account. All write operations written to the primary region prior
to the last sync time have been successfully replicated to the secondary region, meaning that they are available
to be read from the secondary. Any write operations written to the primary region after the last sync time may
or may not have been replicated to the secondary region, meaning that they may not be available for read
operations.
You can query the value of the Last Sync Time property using Azure PowerShell, Azure CLI, or one of the Azure
Storage client libraries. The Last Sync Time property is a GMT date/time value. For more information, see
Check the Last Sync Time property for a storage account.
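The guarantee can be expressed as a simple check (illustrative Python, not part of any SDK; in practice the Last Sync Time would be retrieved via PowerShell, the Azure CLI, or a client library):

```python
from datetime import datetime

def replicated_to_secondary(write_time, last_sync_time):
    """A write is guaranteed to be readable from the secondary only if it
    completed at or before the account's Last Sync Time (a GMT timestamp).
    Writes after that point may or may not have been replicated yet."""
    return write_time <= last_sync_time
```

Applications that fail reads over to the secondary can use this to distinguish data that is certainly present from data that may still be in flight.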

Summary of redundancy options


The tables in the following sections summarize the redundancy options available for Azure Storage.
Durability and availability parameters
The following table describes key parameters for each redundancy option:

| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
| --- | --- | --- | --- | --- |
| Percent durability of objects over a given year | At least 99.999999999% (11 9's) | At least 99.9999999999% (12 9's) | At least 99.99999999999999% (16 9's) | At least 99.99999999999999% (16 9's) |
| Availability for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) for GRS; at least 99.99% (99.9% for cool access tier) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS; at least 99.99% (99.9% for cool access tier) for RA-GZRS |
| Availability for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |

Durability and availability by outage scenario


The following table indicates whether your data is durable and available in a given scenario, depending on
which type of redundancy is in effect for your storage account:

| Outage scenario | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
| --- | --- | --- | --- | --- |
| A node within a data center becomes unavailable | Yes | Yes | Yes | Yes |
| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes | Yes¹ | Yes |
| A region-wide outage occurs in the primary region | No | No | Yes¹ | Yes¹ |
| Read access to the secondary region is available if the primary region becomes unavailable | No | No | Yes (with RA-GRS) | Yes (with RA-GZRS) |

¹ Account failover is required to restore write availability if the primary region becomes unavailable. For more
information, see Disaster recovery and storage account failover.
Supported Azure Storage services
The following table shows which redundancy options are supported by each Azure Storage service.

| LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
| --- | --- | --- | --- |
| Blob storage | Blob storage | Blob storage | Blob storage |
| Queue storage | Queue storage | Queue storage | Queue storage |
| Table storage | Table storage | Table storage | Table storage |
| Azure Files | Azure Files | Azure Files | Azure Files |
| Azure managed disks | | | |

Supported storage account types


The following table shows which redundancy options are supported by each type of storage account. For
information for storage account types, see Storage account overview.

| LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
| --- | --- | --- | --- |
| General-purpose v2 | General-purpose v2 | General-purpose v2 | General-purpose v2 |
| General-purpose v1 | BlockBlobStorage | General-purpose v1 | |
| BlockBlobStorage | FileStorage | BlobStorage | |
| BlobStorage | | | |
| FileStorage | | | |

All data for all storage accounts is copied according to the redundancy option for the storage account. Objects
including block blobs, append blobs, page blobs, queues, tables, and files are copied. Data in all tiers, including
the archive tier, is copied. For more information about blob tiers, see Azure Blob storage: hot, cool, and archive
access tiers.
For pricing information for each redundancy option, see Azure Storage pricing.

NOTE
Azure Premium Disk Storage currently supports only locally redundant storage (LRS). Block blob storage accounts support
locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.

Data integrity
Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data
corruption is detected, it is repaired using redundant data. Azure Storage also calculates checksums on all
network traffic to detect corruption of data packets when storing or retrieving data.

See also
Check the Last Sync Time property for a storage account
Change the redundancy option for a storage account
Use geo-redundancy to design highly available applications
Disaster recovery and storage account failover
Azure Event Hubs - Geo-disaster recovery
4/10/2021 • 11 minutes to read

Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in
some cases even required by industry regulations.
Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks
across clusters that span multiple failure domains within a datacenter and it implements transparent failure
detection and failover mechanisms such that the service will continue to operate within the assured service-
levels and typically without noticeable interruptions in the event of such failures. If an Event Hubs namespace
has been created with the enabled option for availability zones, the outage risk is further spread across three
physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete,
catastrophic loss of the entire facility.
The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave
hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations
with widespread physical destruction that even those measures cannot sufficiently defend against.
The Event Hubs Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this
magnitude and abandon a failed Azure region for good and without having to change your application
configurations. Abandoning an Azure region will typically involve several services and this feature primarily
aims at helping to preserve the integrity of the composite application configuration.
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer
Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when
paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time.
The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then
break the pairing. The failover is nearly instantaneous once initiated.

IMPORTANT
The feature enables instantaneous continuity of operations with the same configuration, but does not replicate the
event data. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub
after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating
event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters,
don't lean on this Geo-disaster recovery feature set, but follow the replication guidance.

Outages and disasters


It's important to note the distinction between "outages" and "disasters." An outage is the temporary
unavailability of Azure Event Hubs, and can affect some components of the service, such as a messaging store,
or even the entire datacenter. However, after the problem is fixed, Event Hubs becomes available again. Typically,
an outage doesn't cause the loss of messages or other data. An example of such an outage might be a power
failure in the datacenter. Some outages are only short connection losses because of transient or network issues.
A disaster is defined as the permanent, or longer-term loss of an Event Hubs cluster, Azure region, or datacenter.
The region or datacenter may or may not become available again, or may be down for hours or days. Examples
of such disasters are fire, flooding, or earthquake. A disaster that becomes permanent might cause the loss of
some messages, events, or other data. However, in most cases there should be no data loss and messages can
be recovered once the data center is back up.
The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and
workflow described in this article apply to disaster scenarios, and not to transient, or temporary outages. For a
detailed discussion of disaster recovery in Microsoft Azure, see this article.

Basic concepts and terms


The disaster recovery feature implements metadata disaster recovery, and relies on primary and secondary
disaster recovery namespaces.
The Geo-disaster recovery feature is available for the standard and dedicated SKUs only. You don't need to make
any connection string changes, as the connection is made via an alias.
The following terms are used in this article:
Alias: The name for a disaster recovery configuration that you set up. The alias provides a single stable
Fully Qualified Domain Name (FQDN) connection string. Applications use this alias connection string to
connect to a namespace.
Primary/secondary namespace: The namespaces that correspond to the alias. The primary namespace is
"active" and receives messages (can be an existing or new namespace). The secondary namespace is
"passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly
accept messages without any application code or connection string changes. To ensure that only the
active namespace receives messages, you must use the alias.
Metadata: Entities such as event hubs and consumer groups; and their properties of the service that are
associated with the namespace. Only entities and their settings are replicated automatically. Messages
and events aren't replicated.
Failover: The process of activating the secondary namespace.
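For illustration, an application configured against the alias rather than a concrete namespace might build its connection string as follows (the `<alias>.servicebus.windows.net` host format and all names in this sketch are assumptions, and the key placeholder must be replaced):

```python
def alias_connection_string(alias, key_name, key):
    """Assemble an Event Hubs connection string that targets the stable
    geo-DR alias FQDN instead of either underlying namespace. After a
    failover re-points the alias, this string is unchanged."""
    return (f"Endpoint=sb://{alias}.servicebus.windows.net/;"
            f"SharedAccessKeyName={key_name};SharedAccessKey={key}")
```

The point of the alias is exactly this stability: client configuration references the alias once, and pairing, failover, and re-pairing happen behind it.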

Supported namespace pairs


The following combinations of primary and secondary namespaces are supported:

| Primary namespace | Secondary namespace | Supported |
| --- | --- | --- |
| Standard | Standard | Yes |
| Standard | Dedicated | Yes |
| Dedicated | Dedicated | Yes |
| Dedicated | Standard | No |

NOTE
You can't pair namespaces that are in the same dedicated cluster. You can pair namespaces that are in separate clusters.

Setup and failover flow


The following section is an overview of the failover process, and explains how to set up the initial failover.
Setup
You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This
pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change
connection strings. Only new namespaces can be added to your failover pairing.
1. Create the primary namespace.
2. Create the secondary namespace in a different region. This step is optional. You can create the secondary
namespace while creating the pairing in the next step.
3. In the Azure portal, navigate to your primary namespace.
4. Select Geo-recovery on the left menu, and select Initiate pairing on the toolbar.

5. On the Initiate pairing page, follow these steps:


a. Select an existing secondary namespace or create one in a different region. In this example, an existing
namespace is selected.
b. For Alias, enter an alias for the geo-dr pairing.
c. Then, select Create.
6. You should see the Geo-DR Alias page. You can also navigate to this page from the primary namespace
by selecting Geo-recovery on the left menu.
7. On the Geo-DR Alias page, select Shared access policies on the left menu to access the primary
connection string for the alias. Use this connection string instead of using the connection string to the
primary/secondary namespace directly.
8. On this Overview page, you can do the following actions:
a. Break the pairing between primary and secondary namespaces. Select Break pairing on the
toolbar.
b. Manually failover to the secondary namespace. Select Failover on the toolbar.

WARNING
Failing over will activate the secondary namespace and remove the primary namespace from the Geo-
Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.

Finally, you should add monitoring to detect whether a failover is necessary. In most cases, the service is one
part of a larger ecosystem, so automatic failovers are rarely possible: failovers often must be performed in
sync with the remaining subsystems or infrastructure.
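A minimal sketch of such decision logic, in Python (the class name, probe mechanism, and 20-minute threshold are illustrative placeholders; a real monitor would probe the alias endpoint and invoke the failover through the Azure management API):

```python
from datetime import datetime, timedelta

class FailoverMonitor:
    """Tracks probe failures and flags when a manual failover is warranted.

    The threshold follows the article's guidance of initiating failover
    only after roughly 15-20 minutes of lost connectivity, because
    failover breaks the pairing and can't be reversed.
    """
    def __init__(self, threshold=timedelta(minutes=20)):
        self.threshold = threshold
        self.first_failure = None

    def record_probe(self, ok, now):
        if ok:
            self.first_failure = None       # healthy again: reset the outage window
        elif self.first_failure is None:
            self.first_failure = now        # outage window starts
        return (self.first_failure is not None
                and now - self.first_failure >= self.threshold)

m = FailoverMonitor()
t0 = datetime(2021, 1, 1, 12, 0)
assert not m.record_probe(False, t0)                      # outage begins
assert not m.record_probe(False, t0 + timedelta(minutes=10))
assert m.record_probe(False, t0 + timedelta(minutes=25))  # sustained: consider failover
```

In practice the final decision usually stays with an operator, since the failover must be coordinated with the rest of the system, as the paragraph above notes.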
Example
In one example of this scenario, consider a Point of Sale (POS) solution that emits either messages or events.
Event Hubs passes those events to some mapping or reformatting solution, which then forwards mapped data
to another system for further processing. At that point, all of these systems might be hosted in the same Azure
region. The decision of when and what part to fail over depends on the flow of data in your infrastructure.
You can automate failover either with monitoring systems, or with custom-built monitoring solutions. However,
such automation takes extra planning and work, which is out of the scope of this article.
Failover flow
If you initiate the failover, two steps are required:
1. If another outage occurs, you want to be able to fail over again. Therefore, set up another passive
namespace and update the pairing.
2. Pull messages from the former primary namespace once it's available again. After that, use that
namespace for regular messaging outside of your geo-recovery setup, or delete the old primary
namespace.

NOTE
Only fail forward semantics are supported. In this scenario, you fail over and then re-pair with a new namespace.
Failing back is not supported, unlike in, for example, a SQL cluster.
Management
If you made a mistake (for example, you paired the wrong regions during the initial setup), you can break the
pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces,
delete the alias.

Samples
The sample on GitHub shows how to set up and initiate a failover. This sample demonstrates the following
concepts:
Settings required in Azure Active Directory to use Azure Resource Manager with Event Hubs.
Steps required to execute the sample code.
Send and receive from the current primary namespace.

Considerations
Keep the following considerations in mind:
1. By design, Event Hubs geo-disaster recovery does not replicate data, and therefore you cannot reuse the
old offset value of your primary event hub on your secondary event hub. We recommend restarting your
event receiver with one of the following methods:
EventPosition.FromStart() - If you wish to read all data on your secondary event hub.
EventPosition.FromEnd() - If you wish to read all new data from the time of connection to your
secondary event hub.
EventPosition.FromEnqueuedTime(dateTime) - If you wish to read all data received in your secondary
event hub starting from a given date and time.
2. In your failover planning, you should also consider the time factor. For example, if you lose connectivity
for longer than 15 to 20 minutes, you might decide to initiate the failover.
3. The fact that no data is replicated means that currently active sessions aren't replicated. Additionally,
duplicate detection and scheduled messages may not work. New sessions, scheduled messages, and new
duplicates will work.
4. Failing over a complex distributed infrastructure should be rehearsed at least once.
5. Synchronizing entities can take some time, approximately 50-100 entities per minute.
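As an illustration of consideration 1: in the Python azure-eventhub SDK (v5), the same three restart strategies correspond to values of the `starting_position` argument of `EventHubConsumerClient.receive`, where `"-1"` reads from the start of the stream, `"@latest"` reads only events enqueued after connecting, and a datetime reads from a given enqueued time. The helper below is hypothetical glue for choosing among them, not SDK API:

```python
from datetime import datetime, timezone

def starting_position(strategy, since=None):
    """Map the article's EventPosition choices onto the starting_position
    value accepted by azure-eventhub's EventHubConsumerClient.receive().
    (The helper name and strategy labels are illustrative, not SDK API.)
    """
    if strategy == "from_start":          # EventPosition.FromStart()
        return "-1"
    if strategy == "from_end":            # EventPosition.FromEnd()
        return "@latest"
    if strategy == "from_enqueued_time":  # EventPosition.FromEnqueuedTime(dateTime)
        if since is None:
            raise ValueError("from_enqueued_time requires a datetime")
        return since
    raise ValueError(f"unknown strategy: {strategy}")

assert starting_position("from_start") == "-1"
assert starting_position("from_end") == "@latest"
t = datetime(2021, 3, 29, tzinfo=timezone.utc)
assert starting_position("from_enqueued_time", t) == t
```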

Availability Zones
The Event Hubs Standard SKU supports Availability Zones, providing fault-isolated locations within an Azure
region.

NOTE
The Availability Zones support for Azure Event Hubs Standard is only available in Azure regions where availability zones
are present.

You can enable Availability Zones on new namespaces only, using the Azure portal. Event Hubs doesn't support
migration of existing namespaces. You can't disable zone redundancy after enabling it on your namespace.
When you use availability zones, both metadata and data (events) are replicated across data centers in the
availability zone.

Private endpoints
This section provides more considerations when using Geo-disaster recovery with namespaces that use private
endpoints. To learn about using private endpoints with Event Hubs in general, see Configure private endpoints.
New pairings
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace
without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary
namespaces have private endpoints. We recommend that you use same configurations on the primary and
secondary namespaces and on virtual networks in which private endpoints are created.
NOTE
When you try to pair the primary namespace with a private endpoint and a secondary namespace, the validation process
only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works
or will work after failover. It's your responsibility to ensure that the secondary namespace with private endpoint will work
as expected after failover.
To test that the private endpoint configurations are same on primary and secondary namespaces, send a read request (for
example: Get Event Hub) to the secondary namespace from outside the virtual network, and verify that you receive an
error message from the service.
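The note's verification logic is easy to get backwards, so here is a tiny, purely illustrative sketch that encodes it: a request issued from outside the virtual network should fail when the private endpoint configuration is correct.

```python
def private_endpoint_looks_correct(request_succeeded_from_public_network):
    """Per the note above: a read request sent from OUTSIDE the virtual
    network should FAIL (return an error) if the namespace is correctly
    restricted to its private endpoint. Success from the public network
    indicates a misconfiguration."""
    return not request_succeeded_from_public_network

assert private_endpoint_looks_correct(False)       # error received: as expected
assert not private_endpoint_looks_correct(True)    # publicly reachable: investigate
```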

Existing pairings
If pairing between primary and secondary namespace already exists, private endpoint creation on the primary
namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one
for the primary namespace.

NOTE
While we allow read-only access to the secondary namespace, updates to the private endpoint configurations are
permitted.

Recommended configuration
When creating a disaster recovery configuration for your application and Event Hubs namespaces, you must
create private endpoints for both primary and secondary Event Hubs namespaces against virtual networks
hosting both primary and secondary instances of your application.
Let's say you have two virtual networks: VNET-1, VNET-2 and these primary and secondary namespaces:
EventHubs-Namespace1-Primary, EventHubs-Namespace2-Secondary. You need to do the following steps:
On EventHubs-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-
2
On EventHubs-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-
1 and VNET-2
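The layout above is just the cross product of the two namespaces and the two virtual networks; a short Python sketch (using the example names) enumerates the four private endpoints to create:

```python
from itertools import product

namespaces = ["EventHubs-Namespace1-Primary", "EventHubs-Namespace2-Secondary"]
vnets = ["VNET-1", "VNET-2"]

# One private endpoint per (namespace, virtual network) pair, so that
# either the application or the namespace can fail over independently.
endpoints = list(product(namespaces, vnets))

assert len(endpoints) == 4
assert ("EventHubs-Namespace1-Primary", "VNET-2") in endpoints
```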

The advantage of this approach is that failover can happen at the application layer independent of the Event
Hubs namespace. Consider the following scenarios:
Application-only failover: Here, the application won't exist in VNET-1 but will move to VNET-2. As both
private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the
application will just work.
Event Hubs namespace-only failover: Here again, since both private endpoints are configured on both
virtual networks for both primary and secondary namespaces, the application will just work.

NOTE
For guidance on geo-disaster recovery of a virtual network, see Virtual Network - Business Continuity.

Next steps
The sample on GitHub walks through a simple workflow that creates a geo-pairing and initiates a failover for
a disaster recovery scenario.
The REST API reference describes APIs for performing the Geo-disaster recovery configuration.
For more information about Event Hubs, visit the following links:
Get started with Event Hubs
.NET Core
Java
Python
JavaScript
Event Hubs FAQ
Sample applications that use Event Hubs
Azure Service Bus Geo-disaster recovery
3/29/2021 • 12 minutes to read

Resilience against disastrous outages of data processing resources is a requirement for many enterprises, and
in some cases is even mandated by industry regulations.
Azure Service Bus already spreads the risk of catastrophic failures of individual machines, or even complete
racks, across clusters that span multiple failure domains within a datacenter, and it implements transparent
failure detection and failover mechanisms so that the service continues to operate within the assured
service levels, typically without noticeable interruptions, when such failures occur. If a Service Bus
namespace has been created with availability zones enabled, the outage risk is further spread across three
physically separated facilities, and the service has enough capacity reserves to instantly cope with the
complete, catastrophic loss of an entire facility.
The all-active Azure Service Bus cluster model with availability zone support is superior to any on-premises
message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of
entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even
those measures can't sufficiently defend against.
The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this
magnitude and abandon a failed Azure region for good and without having to change your application
configurations. Abandoning an Azure region will typically involve several services and this feature primarily
aims at helping to preserve the integrity of the composite application configuration. The feature is globally
available for the Service Bus Premium SKU.
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Queues, Topics,
Subscriptions, Filters) is continuously replicated from a primary namespace to a secondary namespace when
paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time.
The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then
break the pairing. The failover is nearly instantaneous once initiated.

IMPORTANT
The feature enables instant continuity of operations with the same configuration, but doesn't replicate the messages
held in queues or topic subscriptions or dead-letter queues . To preserve queue semantics, such a replication will
require not only the replication of message data, but of every state change in the broker. For most Service Bus
namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues,
most messages would still replicate to the secondary while they are already being deleted from the primary, causing
excessively wasteful traffic. For high-latency replication routes, which apply to many pairings you would choose for Geo-
disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic
due to latency-induced throttling effects.

TIP
For replicating the contents of queues and topic subscriptions and operating corresponding namespaces in active/active
configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the
replication guidance.

Outages and disasters


It's important to note the distinction between "outages" and "disasters."
An outage is the temporary unavailability of Azure Service Bus, and can affect some components of the service,
such as a messaging store, or even the entire datacenter. However, after the problem is fixed, Service Bus
becomes available again. Typically, an outage doesn't cause the loss of messages or other data. An example of
such an outage might be a power failure in the datacenter. Some outages are only short connection losses
because of transient or network issues.
A disaster is defined as the permanent, or longer-term loss of a Service Bus cluster, Azure region, or datacenter.
The region or datacenter may or may not become available again, or may be down for hours or days. Examples
of such disasters are fire, flooding, or earthquake. A disaster that becomes permanent might cause the loss of
some messages, events, or other data. However, in most cases there should be no data loss and messages can
be recovered once the data center is back up.
The Geo-disaster recovery feature of Azure Service Bus is a disaster recovery solution. The concepts and
workflow described in this article apply to disaster scenarios, and not to transient, or temporary outages. For a
detailed discussion of disaster recovery in Microsoft Azure, see this article.

Basic concepts and terms


The disaster recovery feature implements metadata disaster recovery, and relies on primary and secondary
disaster recovery namespaces. The Geo-disaster recovery feature is available for the Premium SKU only. You
don't need to make any connection string changes, as the connection is made via an alias.
The following terms are used in this article:
Alias: The name for a disaster recovery configuration that you set up. The alias provides a single stable
Fully Qualified Domain Name (FQDN) connection string. Applications use this alias connection string to
connect to a namespace. Using an alias ensures that the connection string is unchanged when the failover
is triggered.
Primary/secondary namespace: The namespaces that correspond to the alias. The primary namespace is
"active" and receives messages (this can be an existing or new namespace). The secondary namespace is
"passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly
accept messages without any application code or connection string changes. To ensure that only the
active namespace receives messages, you must use the alias.
Metadata: Entities such as queues, topics, and subscriptions, together with their properties, that are
associated with the namespace. Only entities and their settings are replicated automatically. Messages
aren't replicated.
Failover: The process of activating the secondary namespace.
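These terms fit together as follows; a conceptual Python model (a sketch, not an Azure SDK API) of how the alias resolves and what a failover changes:

```python
class GeoDrPairing:
    """Conceptual model of a Geo-DR alias. Clients resolve the alias,
    never a namespace FQDN directly, so a failover requires no
    connection string change. Illustrative only, not an SDK API."""
    def __init__(self, alias, primary, secondary):
        self.alias = alias
        self.primary = primary        # "active", receives messages
        self.secondary = secondary    # "passive", metadata kept in sync
        self.paired = True

    def resolve(self):
        return self.primary           # the alias points at the active namespace

    def fail_over(self):
        # Activate the secondary and break the pairing (fail-forward only).
        self.primary, self.secondary = self.secondary, None
        self.paired = False

pair = GeoDrPairing("contoso-alias", "sb-primary-eastus", "sb-secondary-westus")
assert pair.resolve() == "sb-primary-eastus"
pair.fail_over()
assert pair.resolve() == "sb-secondary-westus"   # alias repointed
assert not pair.paired                           # re-pair with a new namespace
```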

Setup
The following section is an overview to set up pairing between the namespaces.
You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This
pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change
connection strings. Only new namespaces can be added to your failover pairing.
1. Create the primary namespace.
2. Create the secondary namespace in a different region. This step is optional. You can create the secondary
namespace while creating the pairing in the next step.
3. In the Azure portal, navigate to your primary namespace.
4. Select Geo-recovery on the left menu, and select Initiate pairing on the toolbar.

5. On the Initiate pairing page, follow these steps:


a. Select an existing secondary namespace or create one in a different region. In this example, an
existing namespace is used as the secondary namespace.
b. For Alias, enter an alias for the geo-dr pairing.
c. Then, select Create.

6. You should see the Service Bus Geo-DR Alias page as shown in the following image. You can also
navigate to the Geo-DR Alias page from the primary namespace page by selecting Geo-recovery
on the left menu.

7. On the Geo-DR Alias page, select Shared access policies on the left menu to access the primary
connection string for the alias. Use this connection string instead of using the connection string to the
primary/secondary namespace directly. Initially, the alias points to the primary namespace.
8. Switch to the Overview page. You can do the following actions:
a. Break the pairing between primary and secondary namespaces. Select Break pairing on the toolbar.
b. Manually fail over to the secondary namespace.
a. Select Failover on the toolbar.
b. Confirm that you want to fail over to the secondary namespace by typing in your alias.
c. Turn ON the Safe Failover option to safely fail over to the secondary namespace. This
feature makes sure that pending Geo-DR replications are completed before switching over
to the secondary.
d. Then, select Failover.

IMPORTANT
Failing over will activate the secondary namespace and remove the primary namespace from the
Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery
pair.

9. Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is
one part of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be
performed in sync with the remaining subsystem or infrastructure.
Service Bus standard to premium
If you have migrated your Azure Service Bus Standard namespace to Azure Service Bus Premium, then you
must use the pre-existing alias (that is, your Service Bus Standard namespace connection string) to create the
disaster recovery configuration through PowerShell, the CLI, or the REST API.
This is because, during migration, your Azure Service Bus Standard namespace connection string/DNS name
itself becomes an alias to your Azure Service Bus Premium namespace.
Your client applications must utilize this alias (that is, the Azure Service Bus Standard namespace connection
string) to connect to the Premium namespace where the disaster recovery pairing has been set up.
If you use the Azure portal to set up the disaster recovery configuration, the portal handles this caveat for you.

Failover flow
A failover is triggered manually by the customer (either explicitly through a command, or through client owned
business logic that triggers the command) and never by Azure. It gives the customer full ownership and visibility
for outage resolution on Azure's backbone.

After the failover is triggered -


1. The alias connection string is updated to point to the Secondary Premium namespace.
2. Clients (senders and receivers) automatically connect to the Secondary namespace.
3. The existing pairing between Primary and Secondary premium namespace is broken.
Once the failover is initiated -
1. If another outage occurs, you want to be able to fail over again. So, set up another passive namespace
and update the pairing.
2. Pull messages from the former primary namespace once it's available again. After that, use that
namespace for regular messaging outside of your geo-recovery setup, or delete the old primary
namespace.

NOTE
Only fail forward semantics are supported. In this scenario, you fail over and then re-pair with a new namespace.
Failing back is not supported, unlike in, for example, a SQL cluster.

You can automate failover either with monitoring systems, or with custom-built monitoring solutions. However,
such automation takes extra planning and work, which is out of the scope of this article.
Management
If you made a mistake (for example, you paired the wrong regions during the initial setup), you can break the
pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces,
delete the alias.

Use existing namespace as alias


If you have a scenario in which you can't change the connections of producers and consumers, you can reuse
your namespace name as the alias name. See the sample code on GitHub here.

Samples
The samples on GitHub show how to set up and initiate a failover. These samples demonstrate the following
concepts:
A .NET sample and settings that are required in Azure Active Directory to use Azure Resource Manager with
Service Bus, to set up, and enable Geo-disaster recovery.
Steps required to execute the sample code.
How to use an existing namespace as an alias.
Steps to alternatively enable Geo-disaster recovery via PowerShell or CLI.
Send and receive from the current primary or secondary namespace using the alias.

Considerations
Keep the following considerations in mind with this release:
1. In your failover planning, you should also consider the time factor. For example, if you lose connectivity
for longer than 15 to 20 minutes, you might decide to initiate the failover.
2. The fact that no data is replicated means that currently active sessions aren't replicated. Additionally,
duplicate detection and scheduled messages may not work. New sessions, new scheduled messages, and
new duplicates will work.
3. Failing over a complex distributed infrastructure should be rehearsed at least once.
4. Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and
rules also count as entities.
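Consideration 4 allows a rough planning estimate. A small sketch with made-up entity counts (remember that subscriptions and rules each count as entities):

```python
def estimated_sync_minutes(entity_count, rate_low=50, rate_high=100):
    """Estimate the initial metadata sync time from the article's rate of
    roughly 50-100 entities per minute. Returns (worst, best) minutes."""
    return entity_count / rate_low, entity_count / rate_high

# Hypothetical namespace: 200 queues, 50 topics with 4 subscriptions each,
# and one rule per subscription. Subscriptions and rules count as entities.
entities = 200 + 50 + 50 * 4 + 50 * 4
assert entities == 650

worst, best = estimated_sync_minutes(entities)
assert (worst, best) == (13.0, 6.5)   # expect the sync to take 6.5-13 minutes
```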

Availability Zones
The Service Bus Premium SKU supports Availability Zones, providing fault-isolated locations within the same
Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus
keeps all the three copies in sync for data and management operations. If the primary copy fails, one of the
secondary copies is promoted to primary with no perceived downtime. If the applications see transient
disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
When you use availability zones, both metadata and data (messages) are replicated across data centers in the
availability zone.

NOTE
The Availability Zones support for Azure Service Bus Premium is only available in Azure regions where availability zones
are present.

You can enable Availability Zones on new namespaces only, using the Azure portal. Service Bus does not
support migration of existing namespaces. You cannot disable zone redundancy after enabling it on your
namespace.
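A tiny state model of the three-copy promotion described above (purely illustrative; Service Bus manages the copies internally and exposes no such API):

```python
class MessagingStore:
    """One primary plus two secondary copies of the messaging store;
    losing the primary promotes a surviving secondary, with no change
    visible to clients beyond a possible transient disconnect."""
    def __init__(self):
        self.copies = ["copy-1", "copy-2", "copy-3"]
        self.primary = self.copies[0]

    def fail_copy(self, name):
        self.copies.remove(name)
        if name == self.primary:
            self.primary = self.copies[0]   # promote a secondary copy
        return self.primary

store = MessagingStore()
assert store.fail_copy("copy-1") == "copy-2"  # primary lost: secondary promoted
assert store.fail_copy("copy-3") == "copy-2"  # secondary lost: primary unchanged
```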

Private endpoints
This section provides more considerations when using Geo-disaster recovery with namespaces that use private
endpoints. To learn about using private endpoints with Service Bus in general, see Integrate Azure Service Bus
with Azure Private Link.
New pairings
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace
without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary
namespaces have private endpoints. We recommend that you use same configurations on the primary and
secondary namespaces and on virtual networks in which private endpoints are created.
NOTE
When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation
process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the
endpoint works or will work after failover. It's your responsibility to ensure that the secondary namespace with private
endpoint will work as expected after failover.
To test that the private endpoint configurations are same, send a Get queues request to the secondary namespace from
outside the virtual network, and verify that you receive an error message from the service.

Existing pairings
If pairing between primary and secondary namespace already exists, private endpoint creation on the primary
namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one
for the primary namespace.

NOTE
While we allow read-only access to the secondary namespace, updates to the private endpoint configurations are
permitted.

Recommended configuration
When creating a disaster recovery configuration for your application and Service Bus, you must create private
endpoints for both primary and secondary Service Bus namespaces against virtual networks hosting both
primary and secondary instances of your application.
Let's say you have two virtual networks: VNET-1, VNET-2 and these primary and secondary namespaces:
ServiceBus-Namespace1-Primary, ServiceBus-Namespace2-Secondary. You need to do the following steps:
On ServiceBus-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-
2
On ServiceBus-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-
1 and VNET-2

The advantage of this approach is that failover can happen at the application layer independent of the Service
Bus namespace. Consider the following scenarios:
Application-only failover: Here, the application won't exist in VNET-1 but will move to VNET-2. As both
private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the
application will just work.
Service Bus namespace-only failover: Here again, since both private endpoints are configured on both
virtual networks for both primary and secondary namespaces, the application will just work.

NOTE
For guidance on geo-disaster recovery of a virtual network, see Virtual Network - Business Continuity.

Next steps
See the Geo-disaster recovery REST API reference here.
Run the Geo-disaster recovery sample on GitHub.
See the Geo-disaster recovery sample that sends messages to an alias.
To learn more about Service Bus messaging, see the following articles:
Service Bus queues, topics, and subscriptions
Get started with Service Bus queues
How to use Service Bus topics and subscriptions
REST API
Create a zone-redundant virtual network gateway
in Azure Availability Zones
11/2/2020 • 4 minutes to read

You can deploy VPN and ExpressRoute gateways in Azure Availability Zones. This brings resiliency, scalability,
and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically
and logically separates gateways within a region, while protecting your on-premises network connectivity to
Azure from zone-level failures. For information, see About zone-redundant virtual network gateways and About
Azure Availability Zones.

Before you begin


This article uses PowerShell cmdlets. To run the cmdlets, you can use Azure Cloud Shell. The Azure Cloud Shell is
a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled
and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com/powershell. Select Copy to copy the blocks
of code, paste them into the Cloud Shell, and press Enter to run them.
You can also install and run the Azure PowerShell cmdlets locally on your computer. PowerShell cmdlets are
updated frequently. If you have not installed the latest version, the values specified in the instructions may fail.
To find the versions of Azure PowerShell installed on your computer, use the Get-Module -ListAvailable Az
cmdlet. To install or update, see Install the Azure PowerShell module.

1. Declare your variables


Declare the variables that you want to use. Use the following sample, substituting the values for your own when
necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste
the values again to re-declare the variables. When specifying location, verify that the region you specify is
supported. For more information, see the FAQ.

$RG1 = "TestRG1"
$VNet1 = "VNet1"
$Location1 = "CentralUS"
$FESubnet1 = "FrontEnd"
$BESubnet1 = "Backend"
$GwSubnet1 = "GatewaySubnet"
$VNet1Prefix = "10.1.0.0/16"
$FEPrefix1 = "10.1.0.0/24"
$BEPrefix1 = "10.1.1.0/24"
$GwPrefix1 = "10.1.255.0/27"
$Gw1 = "VNet1GW"
$GwIP1 = "VNet1GWIP"
$GwIPConf1 = "gwipconf1"

2. Create the virtual network


Create a resource group.
New-AzResourceGroup -ResourceGroupName $RG1 -Location $Location1

Create a virtual network.

$fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubnet1 -AddressPrefix $FEPrefix1
$besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubnet1 -AddressPrefix $BEPrefix1
$vnet = New-AzVirtualNetwork -Name $VNet1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNet1Prefix -Subnet $fesub1,$besub1

3. Add the gateway subnet


The gateway subnet contains the reserved IP addresses that the virtual network gateway services use. Use the
following examples to add and set a gateway subnet:
Add the gateway subnet.

$getvnet = Get-AzVirtualNetwork -ResourceGroupName $RG1 -Name VNet1
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $getvnet

Set the gateway subnet configuration for the virtual network.

$getvnet | Set-AzVirtualNetwork

4. Request a public IP address


In this step, choose the instructions that apply to the gateway that you want to create. The selection of zones for
deploying the gateways depends on the zones specified for the public IP address.
For zone -redundant gateways
Request a public IP address with a Standard PublicIpaddress SKU and do not specify any zone. In this case, the
Standard public IP address created will be a zone-redundant public IP.

$pip1 = New-AzPublicIpAddress -ResourceGroup $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard

For zonal gateways


Request a public IP address with a Standard PublicIpaddress SKU. Specify the zone (1, 2 or 3). All gateway
instances will be deployed in this zone.

$pip1 = New-AzPublicIpAddress -ResourceGroup $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard -Zone 1

For regional gateways


Request a public IP address with a Basic PublicIpaddress SKU. In this case, the gateway is deployed as a regional
gateway and doesn't have any zone redundancy built in. The gateway instances may be created in any zone.

$pip1 = New-AzPublicIpAddress -ResourceGroup $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Dynamic -Sku Basic
5. Create the IP configuration
$getvnet = Get-AzVirtualNetwork -ResourceGroupName $RG1 -Name $VNet1
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $GwSubnet1 -VirtualNetwork $getvnet
$gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GwIPConf1 -Subnet $subnet -PublicIpAddress $pip1

6. Create the gateway


Create the virtual network gateway.
For ExpressRoute

New-AzVirtualNetworkGateway -ResourceGroup $RG1 -Location $Location1 -Name $Gw1 -IpConfigurations $GwIPConf1 -GatewayType ExpressRoute -GatewaySku ErGw1AZ

For VPN Gateway

New-AzVirtualNetworkGateway -ResourceGroup $RG1 -Location $Location1 -Name $Gw1 -IpConfigurations $GwIPConf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1AZ

FAQ
What will change when I deploy these new SKUs?
From your perspective, you can deploy your gateways with zone-redundancy. This means that all instances of
the gateways will be deployed across Azure Availability Zones, and each Availability Zone is a different fault and
update domain. This makes your gateways more reliable, available, and resilient to zone failures.
Can I use the Azure portal?
Yes, you can use the Azure portal to deploy the new SKUs. However, you will see these new SKUs only in those
Azure regions that have Azure Availability Zones.
What regions are available for me to use the new SKUs?
See Availability Zones for the latest list of available regions.
Can I change/migrate/upgrade my existing virtual network gateways to zone -redundant or zonal gateways?
Migrating your existing virtual network gateways to zone-redundant or zonal gateways is currently not
supported. You can, however, delete your existing gateway and re-create a zone-redundant or zonal gateway.
Can I deploy both VPN and ExpressRoute gateways in the same virtual network?
Co-existence of both VPN and ExpressRoute gateways in the same virtual network is supported. However, you
should reserve a /27 IP address range for the gateway subnet.
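The /27 recommendation in the last answer can be sanity-checked with Python's standard ipaddress module, using the prefixes from this walkthrough (a /27 contains 32 addresses):

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")             # $VNet1Prefix
gateway_subnet = ipaddress.ip_network("10.1.255.0/27")  # $GwPrefix1

assert gateway_subnet.subnet_of(vnet)       # the gateway subnet fits inside the VNet
assert gateway_subnet.num_addresses == 32   # a /27 yields 32 addresses
assert gateway_subnet.prefixlen <= 27       # satisfies the /27 reservation advice
```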
Create a zone-redundant virtual network gateway
in Azure Availability Zones
11/2/2020 • 4 minutes to read

You can deploy VPN and ExpressRoute gateways in Azure Availability Zones. This brings resiliency, scalability,
and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically
and logically separates gateways within a region, while protecting your on-premises network connectivity to
Azure from zone-level failures. For information, see About zone-redundant virtual network gateways and About
Azure Availability Zones.

Before you begin


This article uses PowerShell cmdlets. To run the cmdlets, you can use Azure Cloud Shell. The Azure Cloud Shell is
a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled
and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper-right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com/powershell. Select Copy to copy the blocks
of code, paste them into the Cloud Shell, and press Enter to run them.
You can also install and run the Azure PowerShell cmdlets locally on your computer. PowerShell cmdlets are
updated frequently. If you have not installed the latest version, the values specified in the instructions may fail.
To find the versions of Azure PowerShell installed on your computer, use the Get-Module -ListAvailable Az
cmdlet. To install or update, see Install the Azure PowerShell module.
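
For example, the version check and update can be run as follows (a minimal sketch; Update-Module assumes the Az module was installed from the PowerShell Gallery):

```powershell
# List the Az module versions installed on this computer
Get-Module -ListAvailable Az

# Update to the latest release (assumes the module came from the PowerShell Gallery)
Update-Module -Name Az
```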

1. Declare your variables


Declare the variables that you want to use. Use the following sample, substituting the values for your own when
necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste
the values again to re-declare the variables. When specifying location, verify that the region you specify is
supported. For more information, see the FAQ.

$RG1 = "TestRG1"
$VNet1 = "VNet1"
$Location1 = "CentralUS"
$FESubnet1 = "FrontEnd"
$BESubnet1 = "Backend"
$GwSubnet1 = "GatewaySubnet"
$VNet1Prefix = "10.1.0.0/16"
$FEPrefix1 = "10.1.0.0/24"
$BEPrefix1 = "10.1.1.0/24"
$GwPrefix1 = "10.1.255.0/27"
$Gw1 = "VNet1GW"
$GwIP1 = "VNet1GWIP"
$GwIPConf1 = "gwipconf1"

2. Create the virtual network


Create a resource group.
New-AzResourceGroup -ResourceGroupName $RG1 -Location $Location1

Create a virtual network.

$fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubnet1 -AddressPrefix $FEPrefix1
$besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubnet1 -AddressPrefix $BEPrefix1
$vnet = New-AzVirtualNetwork -Name $VNet1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNet1Prefix -Subnet $fesub1,$besub1

3. Add the gateway subnet


The gateway subnet contains the reserved IP addresses that the virtual network gateway services use. Use the
following examples to add and set a gateway subnet:
Add the gateway subnet.

$getvnet = Get-AzVirtualNetwork -ResourceGroupName $RG1 -Name $VNet1

Add-AzVirtualNetworkSubnetConfig -Name $GwSubnet1 -AddressPrefix $GwPrefix1 -VirtualNetwork $getvnet

Set the gateway subnet configuration for the virtual network.

$getvnet | Set-AzVirtualNetwork

4. Request a public IP address


In this step, choose the instructions that apply to the gateway that you want to create. The selection of zones for
deploying the gateways depends on the zones specified for the public IP address.
For zone-redundant gateways
Request a public IP address with a Standard PublicIpaddress SKU and do not specify any zone. In this case, the
Standard public IP address created will be a zone-redundant public IP.

$pip1 = New-AzPublicIpAddress -ResourceGroupName $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard

For zonal gateways


Request a public IP address with a Standard PublicIpaddress SKU. Specify the zone (1, 2 or 3). All gateway
instances will be deployed in this zone.

$pip1 = New-AzPublicIpAddress -ResourceGroupName $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard -Zone 1
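
To confirm which zones the address landed in, you can inspect the public IP resource afterward (a quick check, not part of the original steps):

```powershell
# Show the SKU and zone assignment of the public IP address just created
Get-AzPublicIpAddress -ResourceGroupName $RG1 -Name $GwIP1 | Select-Object Name, Sku, Zones
```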

For regional gateways


Request a public IP address with a Basic PublicIpaddress SKU. In this case, the gateway is deployed as a regional
gateway and does not have any zone-redundancy built in. The gateway instances may be created in any zone.

$pip1 = New-AzPublicIpAddress -ResourceGroupName $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Dynamic -Sku Basic

5. Create the IP configuration
$getvnet = Get-AzVirtualNetwork -ResourceGroupName $RG1 -Name $VNet1
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $GwSubnet1 -VirtualNetwork $getvnet
$gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GwIPConf1 -Subnet $subnet -PublicIpAddress $pip1

6. Create the gateway


Create the virtual network gateway.
For ExpressRoute

New-AzVirtualNetworkGateway -ResourceGroupName $RG1 -Location $Location1 -Name $Gw1 -IpConfigurations $GwIPConf1 -GatewayType ExpressRoute -GatewaySku ErGw1AZ

For VPN Gateway

New-AzVirtualNetworkGateway -ResourceGroupName $RG1 -Location $Location1 -Name $Gw1 -IpConfigurations $GwIPConf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1AZ
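
Gateway creation can take a substantial amount of time to complete. Once the command returns, a quick check (not part of the original steps) confirms the SKU and provisioning state:

```powershell
# Read the gateway back and confirm it provisioned with the expected AZ SKU
$gw = Get-AzVirtualNetworkGateway -ResourceGroupName $RG1 -Name $Gw1
$gw.ProvisioningState   # 'Succeeded' once deployment completes
$gw.Sku.Name            # 'ErGw1AZ' or 'VpnGw1AZ', depending on the gateway type
```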

FAQ
What will change when I deploy these new SKUs?
From your perspective, you can deploy your gateways with zone-redundancy. This means that all instances of
the gateways will be deployed across Azure Availability Zones, and each Availability Zone is a different fault and
update domain. This makes your gateways more reliable, available, and resilient to zone failures.
Can I use the Azure portal?
Yes, you can use the Azure portal to deploy the new SKUs. However, you will see these new SKUs only in those
Azure regions that have Azure Availability Zones.
What regions are available for me to use the new SKUs?
See Availability Zones for the latest list of available regions.
Can I change/migrate/upgrade my existing virtual network gateways to zone-redundant or zonal gateways?
Migrating your existing virtual network gateways to zone-redundant or zonal gateways is currently not
supported. You can, however, delete your existing gateway and re-create a zone-redundant or zonal gateway.
Can I deploy both VPN and ExpressRoute gateways in the same virtual network?
Co-existence of both VPN and ExpressRoute gateways in the same virtual network is supported. However, you
should reserve a /27 IP address range for the gateway subnet.
Autoscaling and Zone-redundant Application
Gateway v2
3/5/2021

Application Gateway is available under a Standard_v2 SKU. Web Application Firewall (WAF) is available under a
WAF_v2 SKU. The v2 SKU offers performance enhancements and adds support for critical new features like
autoscaling, zone redundancy, and support for static VIPs. Existing features under the Standard and WAF SKU
continue to be supported in the new v2 SKU, with a few exceptions listed in the comparison section.
The new v2 SKU includes the following enhancements:
Autoscaling: Application Gateway or WAF deployments under the autoscaling SKU can scale out or in
based on changing traffic load patterns. Autoscaling also removes the requirement to choose a
deployment size or instance count during provisioning. This SKU offers true elasticity. In the Standard_v2
and WAF_v2 SKU, Application Gateway can operate both in fixed-capacity (autoscaling disabled) and in
autoscaling-enabled mode. Fixed-capacity mode is useful for scenarios with consistent and predictable
workloads. Autoscaling mode is beneficial in applications that see variance in application traffic.
Zone redundancy: An Application Gateway or WAF deployment can span multiple Availability Zones,
removing the need to provision separate Application Gateway instances in each zone with a Traffic
Manager. You can choose a single zone or multiple zones where Application Gateway instances are
deployed, which makes it more resilient to zone failure. The backend pool for applications can be
similarly distributed across Availability Zones.
Zone redundancy is available only where Azure Availability Zones are available. In other regions, all other
features are supported. For more information, see Regions and Availability Zones in Azure.
Static VIP: The Application Gateway v2 SKU supports the static VIP type exclusively. This ensures that the VIP
associated with the application gateway doesn't change for the lifecycle of the deployment, even after a
restart. There isn't a static VIP in v1, so you must use the application gateway URL instead of the IP
address for domain-name routing to App Services via the application gateway.
Header Rewrite: Application Gateway allows you to add, remove, or update HTTP request and response
headers with the v2 SKU. For more information, see Rewrite HTTP headers with Application Gateway.
Key Vault Integration: Application Gateway v2 supports integration with Key Vault for server
certificates that are attached to HTTPS-enabled listeners. For more information, see TLS termination with
Key Vault certificates.
Azure Kubernetes Service Ingress Controller: The Application Gateway v2 Ingress Controller allows
the Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster.
For more information, see What is Application Gateway Ingress Controller?.
Performance enhancements: The v2 SKU offers up to 5X better TLS offload performance compared
to the Standard/WAF SKU.
Faster deployment and update time: The v2 SKU provides faster deployment and update time compared
to the Standard/WAF SKU. This also includes WAF configuration changes.
Supported regions
The Standard_v2 and WAF_v2 SKUs are available in the following regions: North Central US, South Central US,
West US, West US 2, East US, East US 2, Central US, North Europe, West Europe, Southeast Asia, France Central,
UK West, Japan East, Japan West, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East,
East Asia, Korea Central, Korea South, UK South, Central India, West India, South India.

Pricing
With the v2 SKU, the pricing model is driven by consumption and is no longer attached to instance counts or
sizes. The v2 SKU pricing has two components:
Fixed price - This is the hourly (or partial-hour) price to provision a Standard_v2 or WAF_v2 gateway. Note
that zero additional minimum instances still ensures high availability of the service, which is always
included with the fixed price.
Capacity Unit price - This is a consumption-based cost that is charged in addition to the fixed cost. The
capacity unit charge is also computed hourly or partial-hourly. There are three dimensions to a capacity unit:
compute unit, persistent connections, and throughput. Compute unit is a measure of processor capacity
consumed. Factors affecting compute unit are TLS connections/sec, URL rewrite computations, and WAF rule
processing. Persistent connection is a measure of established TCP connections to the application gateway in a
given billing interval. Throughput is the average megabits/sec processed by the system in a given billing
interval. Billing is done at the capacity-unit level for anything above the reserved instance count.
Each capacity unit is composed of at most: 1 compute unit, 2,500 persistent connections, and 2.22 Mbps of
throughput.
To learn more, see Understanding pricing.
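
To make the billing dimensions concrete, the following sketch estimates billed capacity units from the three dimensions described above (the function name and rounding behavior are illustrative assumptions, not an official pricing tool):

```powershell
# Illustrative estimate only: one capacity unit covers at most 1 compute unit,
# 2,500 persistent connections, and 2.22 Mbps of throughput, so the billed
# figure is driven by whichever dimension is highest.
function Get-EstimatedCapacityUnits {
    param(
        [double]$ComputeUnits,
        [double]$PersistentConnections,
        [double]$ThroughputMbps
    )
    $byDimension = @(
        $ComputeUnits,
        $PersistentConnections / 2500,
        $ThroughputMbps / 2.22
    )
    [math]::Ceiling(($byDimension | Measure-Object -Maximum).Maximum)
}

# Example: 4 compute units, 10,000 connections, 50 Mbps -> throughput dominates (50 / 2.22 ≈ 22.5)
Get-EstimatedCapacityUnits -ComputeUnits 4 -PersistentConnections 10000 -ThroughputMbps 50   # 23
```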
Scaling Application Gateway and WAF v2
Application Gateway and WAF can be configured to scale in two modes:
Autoscaling - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale out or in based
on application traffic requirements. This mode offers better elasticity to your application and eliminates the
need to guess the application gateway size or instance count. This mode also allows you to save cost by not
requiring the gateway to run at peak provisioned capacity for anticipated maximum traffic load. You must
specify a minimum and, optionally, a maximum instance count. Minimum capacity ensures that Application
Gateway and WAF v2 don't fall below the minimum instance count specified, even in the absence of traffic.
Each instance is roughly equivalent to 10 additional reserved Capacity Units. Zero signifies no reserved
capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which
ensures that the Application Gateway doesn't scale beyond the specified number of instances. You will only
be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The
default value for maximum instance count is 20 if not specified.
Manual - You can alternatively choose Manual mode where the gateway won't autoscale. In this mode, if
there is more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With
manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances.
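
Both modes map to the gateway's autoscale configuration. A minimal Azure PowerShell sketch that sets autoscaling bounds (gateway and resource-group names here are placeholders) looks like:

```powershell
# Enable autoscaling with a floor of 2 instances and a ceiling of 10
$appGw = Get-AzApplicationGateway -ResourceGroupName "MyResourceGroup" -Name "MyAppGatewayV2"
Set-AzApplicationGatewayAutoscaleConfiguration -ApplicationGateway $appGw -MinCapacity 2 -MaxCapacity 10

# Apply the updated configuration to the running gateway
Set-AzApplicationGateway -ApplicationGateway $appGw
```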

Autoscaling and High Availability


Azure Application Gateways are always deployed in a highly available fashion. The service consists of
multiple instances that are created as configured (if autoscaling is disabled) or as required by the application
load (if autoscaling is enabled). Note that from the user's perspective you do not necessarily have visibility into
the individual instances, just into the Application Gateway service as a whole. If a certain instance has a problem
and stops being functional, Azure Application Gateway transparently creates a new instance.
Note that even if you configure autoscaling with zero minimum instances, the service will still be highly
available, which is always included with the fixed price.
However, creating a new instance can take some time (around six or seven minutes). If you want to avoid this
potential downtime, you can configure a minimum instance count of 2, ideally with Availability Zone
support. This way you will have at least two instances inside your Azure Application Gateway under normal
circumstances, so if one of them has a problem the other can handle the traffic while a new instance is
created. Note that an Azure Application Gateway instance can support around 10 Capacity Units, so depending
on how much traffic you typically have you might want to configure your minimum instance autoscaling
setting to a value higher than 2.

Feature comparison between v1 SKU and v2 SKU


The following table compares the features available with each SKU.

Feature                                                                  V1 SKU   V2 SKU

Autoscaling                                                                       ✓
Zone redundancy                                                                   ✓
Static VIP                                                                        ✓
Azure Kubernetes Service (AKS) Ingress Controller                                 ✓
Azure Key Vault integration                                                       ✓
Rewrite HTTP(S) headers                                                           ✓
URL-based routing                                                        ✓        ✓
Multiple-site hosting                                                    ✓        ✓
Traffic redirection                                                      ✓        ✓
Web Application Firewall (WAF)                                           ✓        ✓
WAF custom rules                                                                  ✓
Transport Layer Security (TLS)/Secure Sockets Layer (SSL) termination    ✓        ✓
End-to-end TLS encryption                                                ✓        ✓
Session affinity                                                         ✓        ✓
Custom error pages                                                       ✓        ✓
WebSocket support                                                        ✓        ✓
HTTP/2 support                                                           ✓        ✓
Connection draining                                                      ✓        ✓

NOTE
The autoscaling v2 SKU now supports default health probes to automatically monitor the health of all resources in its
back-end pool and highlight those backend members that are considered unhealthy. The default health probe is
automatically configured for backends that don't have any custom probe configuration. To learn more, see health probes
in application gateway.
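
When you do need a custom probe, it can be defined with Azure PowerShell. A hedged sketch (gateway name, host, and path are placeholders) looks like:

```powershell
# Sketch: attach a custom HTTP health probe to an existing v2 gateway
$appGw = Get-AzApplicationGateway -ResourceGroupName "MyResourceGroup" -Name "MyAppGatewayV2"
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $appGw -Name "customHealthProbe" `
    -Protocol Http -HostName "contoso.com" -Path "/health" `
    -Interval 30 -Timeout 30 -UnhealthyThreshold 3
Set-AzApplicationGateway -ApplicationGateway $appGw
```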

Differences from v1 SKU


This section describes features and limitations of the v2 SKU that differ from the v1 SKU.

Authentication certificate - Not supported. For more information, see Overview of end to end TLS with
Application Gateway.
Mixing Standard_v2 and Standard Application Gateway on the same subnet - Not supported.
User-Defined Route (UDR) on Application Gateway subnet - Supported (specific scenarios). In preview. For more
information about supported scenarios, see Application Gateway configuration overview.
NSG for Inbound port range - 65200 to 65535 for the Standard_v2 SKU; 65503 to 65534 for the Standard SKU.
For more information, see the FAQ.
Performance logs in Azure diagnostics - Not supported. Azure metrics should be used.
Billing - Billing scheduled to start on July 1, 2019.
FIPS mode - Currently not supported.
ILB only mode - Currently not supported. Public and ILB mode together is supported.
Network Watcher integration - Not supported.
Azure Security Center integration - Not yet available.

Migrate from v1 to v2
An Azure PowerShell script is available in the PowerShell gallery to help you migrate from your v1 Application
Gateway/WAF to the v2 Autoscaling SKU. This script helps you copy the configuration from your v1 gateway.
Traffic migration is still your responsibility. For more information, see Migrate Azure Application Gateway from
v1 to v2.

Next steps
Quickstart: Direct web traffic with Azure Application Gateway - Azure portal
Create an autoscaling, zone redundant application gateway with a reserved virtual IP address using Azure
PowerShell
Learn more about Application Gateway.
Learn more about Azure Firewall.
Tutorial: Create and configure an Azure Active
Directory Domain Services managed domain
3/5/2021

Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join,
group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active
Directory. You consume these domain services without deploying, managing, or patching domain controllers
yourself. Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in using
their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
You can create a managed domain using default configuration options for networking and synchronization, or
manually define these settings. This tutorial shows you how to use default options to create and configure an
Azure AD DS managed domain using the Azure portal.
In this tutorial, you learn how to:
Understand DNS requirements for a managed domain
Create a managed domain
Enable password hash synchronization
If you don't have an Azure subscription, create an account before you begin.

Prerequisites
To complete this tutorial, you need the following resources and privileges:
An active Azure subscription.
If you don't have an Azure subscription, create an account.
An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises
directory or a cloud-only directory.
If needed, create an Azure Active Directory tenant or associate an Azure subscription with your
account.
You need global administrator privileges in your Azure AD tenant to enable Azure AD DS.
You need Contributor privileges in your Azure subscription to create the required Azure AD DS resources.
Although not required for Azure AD DS, it's recommended to configure self-service password reset (SSPR) for
the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their
password and need to reset it.

IMPORTANT
After you create a managed domain, you can't then move the managed domain to a different resource group, virtual
network, subscription, etc. Take care to select the most appropriate subscription, resource group, region, and virtual
network when you deploy the managed domain.

Sign in to the Azure portal


In this tutorial, you create and configure the managed domain using the Azure portal. To get started, first sign in
to the Azure portal.
Create a managed domain
To launch the Enable Azure AD Domain Services wizard, complete the following steps:
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Enter Domain Services into the search bar, then choose Azure AD Domain Services from the search
suggestions.
3. On the Azure AD Domain Services page, select Create. The Enable Azure AD Domain Services wizard is
launched.
4. Select the Azure Subscription in which you would like to create the managed domain.
5. Select the Resource group to which the managed domain should belong. Choose to Create new or select
an existing resource group.
When you create a managed domain, you specify a DNS name. There are some considerations when you choose
this DNS name:
Built-in domain name: By default, the built-in domain name of the directory is used (a .onmicrosoft.com
suffix). If you wish to enable secure LDAP access to the managed domain over the internet, you can't create a
digital certificate to secure the connection with this default domain. Microsoft owns the .onmicrosoft.com
domain, so a Certificate Authority (CA) won't issue a certificate.
Custom domain names: The most common approach is to specify a custom domain name, typically one
that you already own and is routable. When you use a routable, custom domain, traffic can correctly flow as
needed to support your applications.
Non-routable domain suffixes: We generally recommend that you avoid a non-routable domain name
suffix, such as contoso.local. The .local suffix isn't routable and can cause issues with DNS resolution.

TIP
If you create a custom domain name, take care with existing DNS namespaces. It's recommended to use a domain name
separate from any existing Azure or on-premises DNS name space.
For example, if you have an existing DNS name space of contoso.com, create a managed domain with the custom domain
name of aaddscontoso.com. If you need to use secure LDAP, you must register and own this custom domain name to
generate the required certificates.
You may need to create some additional DNS records for other services in your environment, or conditional DNS
forwarders between existing DNS name spaces in your environment. For example, if you run a webserver that hosts a site
using the root DNS name, there can be naming conflicts that require additional DNS entries.
In these tutorials and how-to articles, the custom domain of aaddscontoso.com is used as a short example. In all
commands, specify your own domain name.

The following DNS name restrictions also apply:


Domain prefix restrictions: You can't create a managed domain with a prefix longer than 15 characters.
The prefix of your specified domain name (such as aaddscontoso in the aaddscontoso.com domain name)
must contain 15 or fewer characters.
Network name conflicts: The DNS domain name for your managed domain shouldn't already exist in the
virtual network. Specifically, check for the following scenarios that would lead to a name conflict:
If you already have an Active Directory domain with the same DNS domain name on the Azure virtual
network.
If the virtual network where you plan to enable the managed domain has a VPN connection with your
on-premises network. In this scenario, ensure you don't have a domain with the same DNS domain
name on your on-premises network.
If you have an existing Azure cloud service with that name on the Azure virtual network.
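
The 15-character prefix rule above can be checked with a trivial PowerShell sketch (the domain name here is just the tutorial's example):

```powershell
# Verify a candidate managed-domain name meets the 15-character prefix limit
$domainName = "aaddscontoso.com"
$prefix = $domainName.Split('.')[0]
if ($prefix.Length -gt 15) {
    Write-Warning "Prefix '$prefix' is $($prefix.Length) characters; the limit is 15."
} else {
    Write-Output "Prefix '$prefix' is valid ($($prefix.Length) characters)."
}
```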
Complete the fields in the Basics window of the Azure portal to create a managed domain:
1. Enter a DNS domain name for your managed domain, taking into consideration the previous points.
2. Choose the Azure Location in which the managed domain should be created. If you choose a region that
supports Azure Availability Zones, the Azure AD DS resources are distributed across zones for additional
redundancy.

TIP
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more
datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of
three separate zones in all enabled regions.
There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform
automatically handles the zone distribution of resources. For more information and to see region availability, see
What are Availability Zones in Azure?

3. The SKU determines the performance, backup frequency, and maximum number of forest trusts you can
create. You can change the SKU after the managed domain has been created if your business demands or
requirements change. For more information, see Azure AD DS SKU concepts.
For this tutorial, select the Standard SKU.
4. A forest is a logical construct used by Active Directory Domain Services to group one or more domains.
By default, a managed domain is created as a User forest. This type of forest synchronizes all objects from
Azure AD, including any user accounts created in an on-premises AD DS environment.
A Resource forest only synchronizes users and groups created directly in Azure AD. For more information
on Resource forests, including why you may use one and how to create forest trusts with on-premises AD
DS domains, see Azure AD DS resource forests overview.
For this tutorial, choose to create a User forest.
To quickly create a managed domain, you can select Review + create to accept additional default configuration
options. The following defaults are configured when you choose this create option:
Creates a virtual network named aadds-vnet that uses the IP address range of 10.0.2.0/24.
Creates a subnet named aadds-subnet using the IP address range of 10.0.2.0/24.
Synchronizes All users from Azure AD into the managed domain.
Select Review + create to accept these default configuration options.

Deploy the managed domain


On the Summary page of the wizard, review the configuration settings for your managed domain. You can go
back to any step of the wizard to make changes. To redeploy a managed domain to a different Azure AD tenant
in a consistent way using these configuration options, you can also Download a template for automation.
1. To create the managed domain, select Create. A note is displayed that certain configuration options, such
as the DNS name or virtual network, can't be changed once the Azure AD DS managed domain has been created.
To continue, select OK.
2. The process of provisioning your managed domain can take up to an hour. A notification is displayed in
the portal that shows the progress of your Azure AD DS deployment. Select the notification to see
detailed progress for the deployment.

3. The page will load with updates on the deployment process, including the creation of new resources in
your directory.
4. Select your resource group, such as myResourceGroup, then choose your managed domain from the list
of Azure resources, such as aaddscontoso.com. The Overview tab shows that the managed domain is
currently Deploying. You can't configure the managed domain until it's fully provisioned.

5. When the managed domain is fully provisioned, the Overview tab shows the domain status as Running.

IMPORTANT
The managed domain is associated with your Azure AD tenant. During the provisioning process, Azure AD DS creates two
Enterprise Applications named Domain Controller Services and AzureActiveDirectoryDomainControllerServices in the
Azure AD tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these
applications.

Update DNS settings for the Azure virtual network


With Azure AD DS successfully deployed, now configure the virtual network to allow other connected VMs and
applications to use the managed domain. To provide this connectivity, update the DNS server settings for your
virtual network to point to the two IP addresses where the managed domain is deployed.
1. The Overview tab for your managed domain shows some Required configuration steps. The first
configuration step is to update DNS server settings for your virtual network. Once the DNS settings are
correctly configured, this step is no longer shown.
The addresses listed are the domain controllers for use in the virtual network. In this example, those
addresses are 10.0.2.4 and 10.0.2.5. You can later find these IP addresses on the Properties tab.

2. To update the DNS server settings for the virtual network, select the Configure button. The DNS settings
are automatically configured for your virtual network.

TIP
If you selected an existing virtual network in the previous steps, any VMs connected to the network only get the new
DNS settings after a restart. You can restart VMs using the Azure portal, Azure PowerShell, or the Azure CLI.
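
For example, with Azure PowerShell a restart looks like the following (the resource names are placeholders):

```powershell
# Restart a VM so it picks up the updated virtual network DNS settings
Restart-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
```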

Enable user accounts for Azure AD DS


To authenticate users on the managed domain, Azure AD DS needs password hashes in a format that's suitable
for NT LAN Manager (NTLM) and Kerberos authentication. Azure AD doesn't generate or store password hashes
in the format that's required for NTLM or Kerberos authentication until you enable Azure AD DS for your tenant.
For security reasons, Azure AD also doesn't store any password credentials in clear-text form. Therefore, Azure
AD can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
NOTE
Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the
managed domain, any password hashes stored at that point are also deleted.
Synchronized credential information in Azure AD can't be re-used if you later create a managed domain - you must
reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or
users won't be able to immediately authenticate - Azure AD needs to generate and store the password hashes in the new
managed domain.
For more information, see Password hash sync process for Azure AD DS and Azure AD Connect.

The steps to generate and store these password hashes are different for cloud-only user accounts created in
Azure AD versus user accounts that are synchronized from your on-premises directory using Azure AD Connect.
A cloud-only user account is an account that was created in your Azure AD directory using either the Azure
portal or Azure AD PowerShell cmdlets. These user accounts aren't synchronized from an on-premises directory.

In this tutorial, let's work with a basic cloud-only user account. For more information on the additional steps
required to use Azure AD Connect, see Synchronize password hashes for user accounts synced from your
on-premises AD to your managed domain.

TIP
If your Azure AD tenant has a combination of cloud-only users and users from your on-premises AD, you need to
complete both sets of steps.

For cloud-only user accounts, users must change their passwords before they can use Azure AD DS. This
password change process causes the password hashes for Kerberos and NTLM authentication to be generated
and stored in Azure AD. The account isn't synchronized from Azure AD to Azure AD DS until the password is
changed. Either expire the passwords for all cloud users in the tenant who need to use Azure AD DS, which
forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this
tutorial, let's manually change a user password.
Before a user can reset their password, the Azure AD tenant must be configured for self-service password reset.
To change the password for a cloud-only user, the user must complete the following steps:
1. Go to the Azure AD Access Panel page at https://myapps.microsoft.com.
2. In the top-right corner, select your name, then choose Profile from the drop-down menu.
3. On the Profile page, select Change password .
4. On the Change password page, enter your existing (old) password, then enter and confirm a new
password.
5. Select Submit .
It takes a few minutes after you've changed your password for the new password to be usable in Azure AD DS
and to successfully sign in to computers joined to the managed domain.

Next steps
In this tutorial, you learned how to:
Understand DNS requirements for a managed domain
Create a managed domain
Add administrative users to domain management
Enable user accounts for Azure AD DS and generate password hashes
Before you domain-join VMs and deploy applications that use the managed domain, configure an Azure virtual
network for application workloads.
Configure Azure virtual network for application workloads to use your managed domain
About Azure Edge Zone Preview
3/5/2021 • 5 minutes to read • Edit Online

Azure Edge Zone is a family of offerings from Microsoft Azure that enables data processing close to the user. You
can deploy VMs, containers, and other selected Azure services into Edge Zones to address the low latency and
high throughput requirements of applications.
Typical use case scenarios for Edge Zones include:
Real-time command and control in robotics.
Real-time analytics and inferencing via artificial intelligence and machine learning.
Machine vision.
Remote rendering for mixed reality and VDI scenarios.
Immersive multiplayer gaming.
Media streaming and content delivery.
Surveillance and security.
There are three types of Azure Edge Zones:
Azure Edge Zones
Azure Edge Zones with Carrier
Azure Private Edge Zones

Azure Edge Zones

Azure Edge Zones are small-footprint extensions of Azure placed in population centers that are far away from
Azure regions. Azure Edge Zones support VMs, containers, and a selected set of Azure services that let you run
latency-sensitive and throughput-intensive applications close to end users. Azure Edge Zones are part of the
Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that
run at the edge zone close to the user. Azure Edge Zones are owned and operated by Microsoft. You can use the
same set of Azure tools and the same portal to manage and deploy services into Edge Zones.
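Because Edge Zones use the same deployment model as regions, targeting one is expressed through the standard Azure Resource Manager `extendedLocation` property alongside the resource's parent region. As a minimal sketch (the zone name `losangeles` is a hypothetical placeholder), the placement fields of a VM resource could be assembled like this:

```python
def vm_deployment_properties(region, edge_zone=None):
    """Build the top-level placement fields of an ARM VM resource.

    When edge_zone is set, the resource targets an Azure Edge Zone by adding
    an extendedLocation block (type "EdgeZone") alongside the parent region.
    """
    props = {"location": region}
    if edge_zone:
        props["extendedLocation"] = {"name": edge_zone, "type": "EdgeZone"}
    return props

# Hypothetical edge zone name, for illustration only.
placement = vm_deployment_properties("westus", edge_zone="losangeles")
```

Omitting `edge_zone` yields an ordinary regional deployment, which is why the same templates and tools work in both cases.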
Typical use cases include:
Gaming and game streaming.
Media streaming and content delivery.
Real-time analytics and inferencing via artificial intelligence and machine learning.
Rendering for mixed reality.
Azure Edge Zones will be available in the following metro areas:
New York, NY
Los Angeles, CA
Miami, FL
Contact the Edge Zone team for more information.

Azure Edge Zones with Carrier

Azure Edge Zones with Carrier are small-footprint extensions of Azure that are placed in mobile operators'
datacenters in population centers. Azure Edge Zone with Carrier infrastructure is placed one hop away from the
mobile operator's 5G network. This placement offers latency of less than 10 milliseconds to applications from
mobile devices.
Azure Edge Zones with Carrier are deployed in mobile operators' datacenters and connected to the Microsoft
global network. They provide secure, reliable, high-bandwidth connectivity between applications that run close
to the user. Developers can use the same set of familiar tools to build and deploy services into the Edge Zones.
Typical use cases include:
Gaming and game streaming.
Media streaming and content delivery.
Real-time analytics and inferencing via artificial intelligence and machine learning.
Rendering for mixed reality.
Connected automobiles.
Telemedicine.
Edge Zones will be offered in partnership with the following operators:
AT&T (Atlanta, Dallas, and Los Angeles)
ISVs working on optimized and scalable applications connected to 5G networks can now use the new Los
Angeles preview location of Azure Edge Zones with AT&T to build and experiment with ultra-low-latency
platforms and mobile and connected scenarios. Register for the early adopter program to take advantage of
secure, high-bandwidth connectivity.
Contact the Edge Zone team for more information.

Azure Private Edge Zones

Azure Private Edge Zones are small-footprint extensions of Azure that are placed on-premises. Azure Private
Edge Zone is based on the Azure Stack Edge platform. It enables low latency access to computing and storage
services deployed on-premises. Private Edge Zone also lets you deploy applications from ISVs and virtualized
network functions (VNFs) as Azure managed applications along with virtual machines and containers on-
premises. These VNFs can include mobile packet cores, routers, firewalls, and SD-WAN appliances. Azure Private
Edge Zone comes with a cloud-native orchestration solution that lets you manage the lifecycles of VNFs and
applications from the Azure portal.
Azure Private Edge Zone lets you develop and deploy applications on-premises by using the same familiar tools
that you use to build and deploy applications in Azure.
It also lets you:
Run private mobile networks (private LTE, private 5G).
Implement security functions like firewalls.
Extend your on-premises networks across multiple branches and Azure by using SD-WAN appliances on the
same Private Edge Zone appliances and manage them from Azure.
Typical use cases include:
Real-time command and control in robotics.
Real-time analytics and inferencing with artificial intelligence and machine learning.
Machine vision.
Remote rendering for mixed reality and VDI scenarios.
Surveillance and security.
We have a rich ecosystem of VNF vendors, ISVs, and MSP partners to enable end-to-end solutions that use
Private Edge Zones. Contact the Private Edge Zone team for more information.
Private Edge Zone partners
Virtualized network functions (VNFs)
Virtualized Evolved Packet Core (vEPC) for mobile networks

Affirmed Networks
Celona
Druid Software
Expeto
Mavenir
Metaswitch
Nokia Digital Automation Cloud
Mobile radio partners

Celona
Commscope Ruckus
SD-WAN vendors

128 Technology
NetFoundry
Nuage Networks from Nokia
Versa Networks
VMware SD-WAN by Velocloud
Router vendors

Arista
Firewall vendors

Palo Alto Networks


Managed Solutions Providers: Mobile operators and Global System Integrators (GSIs)

GSIs and operators:
Amdocs
American Tower
CenturyLink
Expeto
Federated Wireless
Infosys
Tech Mahindra

Mobile operators:
Etisalat
NTT Communications
Proximus
Rogers
SK Telecom
Telefonica
Telstra
Vodafone

Contact the Private Edge Zone team for information on how to become a partner.
Private Edge Zone solutions
Private mobile network on Private Edge Zones

You can now deploy a private mobile network on Private Edge Zones. Private mobile networks provide the
ultra-low latency, high capacity, and reliable, secure wireless connectivity that's required for business-critical
applications.
Private mobile networks can enable scenarios like:
Command and control of automated guided vehicles (AGVs) in warehouses.
Real-time communication between robots in smart factories.
Augmented reality and virtual reality edge applications.
The virtualized evolved packet core (vEPC) network function is the brains of a private mobile network. You can
now deploy a vEPC on Private Edge Zones. For a list of vEPC partners that are available on Private Edge Zones,
see vEPC ISVs.
Deploying a private mobile network solution on Private Edge Zones requires other components, like mobile
access points, SIM cards, and other VNFs like routers. Access to licensed or unlicensed spectrum is critical to
setting up a private mobile network. And you might need help with RF planning, physical layout, installation,
and support. For a list of partners, see Mobile radio partners.
Microsoft provides a partner ecosystem that can help with all aspects of this process. Partners can help with
planning the network, purchasing the required devices, setting up hardware, and managing the configuration
from Azure. A set of validated partners that are tightly integrated with Microsoft ensures that your solution will
be reliable and easy to use. You can focus on your core scenarios and rely on Microsoft and its partners for
the rest.
SD-WAN on Private Edge Zones
SD-WAN lets you create enterprise-grade wide area networks (WANs) that have these benefits:
Increased bandwidth
High-performance access to the cloud
Service insertion
Reliability
Policy management
Extensive network visibility
SD-WAN provides seamless branch office connectivity that's orchestrated from redundant central controllers, at
a lower cost of ownership. SD-WAN on Private Edge Zones lets you move from a capex-centric model to a
software-as-a-service (SaaS) model to reduce IT budgets. You can use your choice of SD-WAN partner's
orchestrator or controller to enable new services and propagate them throughout your entire network
immediately.

Next steps
For more information, contact the following teams:
Edge Zone team
Private Edge Zone team, to become a partner
What is Azure Orbital? (Preview)
3/5/2021

Azure Orbital is a fully managed, cloud-based ground station as a service that lets you communicate with your
spacecraft or satellite constellations, downlink and uplink data, process your data in the cloud, chain your data
processing with Azure services in unique scenarios, and generate products for your customers. Azure Orbital lets
you focus on the mission and product data by offloading the responsibility for deployment and maintenance of
ground station assets. The system is built on top of the Azure global infrastructure and its low-latency global fiber network.

Watch the Azure Orbital announcement at Ignite on the Azure YouTube Channel
Azure Orbital is building a partner ecosystem so that customers can use partner ground stations in addition to
Orbital ground stations, and partner cloud modems in addition to the integrated cloud modems. Microsoft is
partnering with industry leaders such as KSAT, along with other ground station/teleport providers like ViaSat
Real-time Earth (RTE) and US Electrodynamics Inc., to provide broad coverage that is available up front. The
partnership also extends to satellite communication providers like SES, offering connectivity such as global
access to your LEO/MEO fleet and direct Azure access for communication constellations. We've taken steps to
virtualize the RF signal and have partnered with leaders like Kratos and Amergint to bring their modems to the
Marketplace. Our aim is to empower our customers to achieve more and build systems with our rich, scalable,
and highly flexible ground station service platform.
Azure Orbital enables multiple use cases for our customers, including Earth observation and global
communications. It also provides a platform that enables digital transformation of existing ground stations
through virtualization. Through our service, you have direct access to all Azure services, the Azure global
infrastructure, the Marketplace, and our world-class partner ecosystem.
Value propositions for Azure Orbital users include:
Global footprint – The Azure Orbital ground station service includes our partner ground stations.
Global coverage is available without delay, and customers can use Orbital to schedule contacts on
partner ground stations in addition to Microsoft-owned ground stations.
Convert Capital Expenditure to Operational Expenditure – Because we take on the task of deploying
and managing ground stations, the up-front costs required for ground-station investments can instead
go toward the mission and deployment of assets. Our pay-as-you-go consumption model means you
are charged only for the time you use.
Licensing – Our team can help onboard your satellite(s) across our sites and regulatory bodies.
Operational Efficiency and Scalability – You no longer have to worry about the maintenance, leasing,
building, or operating costs of ground stations. You have the option to rapidly scale satellite
communications on demand when the business needs it.
Direct access to Azure network and regions – We deploy our own ground stations at datacenter
locations or in close proximity to the edge of our network, and we also interconnect with partner ground
stations and networks to provide proximity to Azure regions. Your data is delivered to the cloud
immediately and routed to your desired location anywhere through Azure's secure, low-latency global
fiber network.
Digitized RF – With the fully digitized signal available from the antenna, including up to 500 MHz of
wideband, you have complete control of and security over the data coming from your spacecraft. Software
modems from our partners are integrated into the platform for seamless use, and are also available in the
Marketplace to complete the processing chain. We anticipate that certain customers will bring their own
modems for their unique mission needs, which is supported by delivery of the digitized RF to a
designated endpoint in your virtual network.
Azure Cloud and Marketplace – Take advantage of all Azure solutions to process and store the data
(including but not limited to IoT, AI and ML, Cognitive Services, analytics, and storage) and chain them
together with your workload in one environment.
Flexibility – The power of our scheduling service, partner networks, digitized RF, and the Marketplace
means you are not restricted to a particular solution set or workflow. We encourage you to think outside
the box and reach out to us. For example, your product chain could be offered in the Marketplace for
other users to incorporate into their products. The possibilities are endless.
For more information on our preview, or to express interest to participate in the preview, fill the contact form
here, or email us at MSAzureOrbital@microsoft.com.

Earth observation

You can use Azure Orbital to schedule contacts with satellites on a pay-as-you-go basis for housekeeping and
payload downlinks. Use the scheduled access times to ingest data from the satellite, monitor the satellite's health
and status, or transmit commands to the satellite. Incoming data is delivered to your private virtual network,
allowing it to be processed or stored in Azure.
Because the service is fully digitized, a software modem from Kratos or Amergint can be used to perform the
modulation/demodulation and encoding/decoding functions to recover the data. You can purchase a modem
from the Marketplace or let us manage this part for you. You can also integrate with Kubos to leverage an
end-to-end solution that manages fleet operations and Telemetry, Tracking, & Control (TT&C) functions.
Implement your workloads in Azure using Azure resources and toolboxes to manipulate the payload data into
the final offerings.

Scheduling contacts
Scheduling contacts using Azure Orbital is an easy three-step process:
1. Register a spacecraft – Input the NORAD ID, TLE, and licensing information for each satellite.
2. Create a contact profile – Input the center frequency and bandwidth requirements for each link, as
well as other details such as minimum elevation and autotrack requirements. Create as many profiles as
required; for example, one for commanding only and one for payload downlinks.
3. Schedule the contact – Select the spacecraft and a contact profile, along with the date and time
window, to view and reserve the available passes at our sites and at our partner networks' sites.
Scheduling will be first come, first served at first, but priority scheduling and guaranteed scheduling are
on the roadmap.
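Conceptually, the three steps above boil down to plain data plus a first-come, first-served pass selection. The types and function below are an illustrative model only, not the actual Azure Orbital API:

```python
from dataclasses import dataclass

@dataclass
class Spacecraft:
    norad_id: str
    tle: str  # two-line element set describing the orbit (step 1)

@dataclass
class ContactProfile:  # step 2: per-link requirements
    center_frequency_mhz: float
    bandwidth_mhz: float
    min_elevation_deg: float
    autotrack: bool

def schedule_contact(available_passes, window_start, window_end):
    """Step 3, first come, first served: reserve the earliest pass that fits
    entirely inside the requested window. Passes are (start, end) tuples in
    any comparable time unit (minutes from now, for simplicity)."""
    candidates = [p for p in available_passes
                  if window_start <= p[0] and p[1] <= window_end]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p[0])

# Three upcoming passes; only those inside the 30-100 minute window qualify.
passes = [(120, 130), (40, 48), (75, 85)]
reserved = schedule_contact(passes, window_start=30, window_end=100)
```

The real service adds licensing checks and site selection on top of this, and priority or guaranteed scheduling would replace the earliest-pass rule above.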
For more information on our preview, or to express interest to participate in the preview, fill the contact form
here, or email us at MSAzureOrbital@microsoft.com.

Global communication

Satellite providers that offer global communication capabilities to their customers can use Azure Orbital either
to colocate new ground stations in Azure datacenters or at the edge of the Azure network, or to interconnect
their existing ground stations with the global Azure backbone. They can then route their traffic on the global
Microsoft network and leverage internet breakout from the edge of the Azure network to provide internet
services and other managed services to their customers.
The Azure Orbital service delivers the traffic from the Orbital ground station to the provider's virtual network.
Using Azure Orbital services, a satellite provider can integrate or bundle other Azure services (security services
such as Azure Firewall, connectivity services such as SD-WAN, and so on) with their satellite connectivity to
provide managed services to their customers.
For more information on our preview, or to express interest to participate in the preview, fill the contact form
here, or email us at MSAzureOrbital@microsoft.com.

Partner ground stations


In addition to building our own ground stations, Azure Orbital enables customers to use partner ground stations
to ingest data directly into Azure.
Ground station or teleport providers can partner with Azure Orbital to digitally transform their ground stations.
Customers can then use these ground stations to schedule contacts with their satellites while leveraging all
the software radio processing and data processing capabilities offered by the platform and by Orbital partners
through the Marketplace. The service is closely integrated with workloads in the cloud and with a vibrant
ecosystem of third-party solutions available through the Marketplace, such as modems, resource management,
and mission control services. All data can also leverage Azure's low-latency, highly reliable global fiber network.
Together, we believe this offers customers the widest possible coverage and the flexibility to communicate with
their satellites with the highest agility and reliability.

For more information on our preview, or to express interest to participate in the preview, fill the contact form
here, or email us at MSAzureOrbital@microsoft.com.

Partners
As we move forward with our journey to space, we will add more partners to our ecosystem to help our
customers achieve more with Azure Orbital. Our approach to building Azure Orbital is partner-led: our goal is
to build a vibrant ecosystem of partners that creates more value for both our partners and our customers.
Think of it as a coral reef!

The following sections list the partner categories and the Azure Orbital partners that are already part of the
Orbital ecosystem:
Ground station infrastructure partners
We have partnered with KSAT, ViaSat RTE (Real-Time Earth), and US Electrodynamics to enable our customers to
communicate with their satellites by using these partner ground stations and ingest data directly into Azure.
Virtualized modem partners
We have partnered with Kratos and Amergint to bring their software radio processing capabilities to the cloud
as part of our Orbital platform. These capabilities have been upgraded to adopt Azure platform accelerations
(including, but not limited to, accelerated networking using DPDK and GPU-based acceleration using special-
purpose Azure compute) to process the radio signal in real time at high throughput and bandwidth. Additionally,
our customers can deploy these software modems from our partners from Azure Marketplace into their own
virtual networks for more granular control over signal processing.
Global communication partner
SES is one of the largest satellite connectivity providers in the space industry. We are happy to share that SES
has selected Azure Orbital to augment the ground network for its next-generation MEO communication system,
mPower. As part of this launch, we will colocate new dedicated ground stations in our datacenters and
interconnect existing ground stations with our global backbone network. This gives SES a faster time to market
with a highly scalable ground station as a service, leveraging the cloud-based virtualized modems provided as
part of the Orbital platform together with the Azure global backbone network.
SES will leverage our global backbone network to route its traffic globally and will use Azure Orbital services to
provide multiple managed services, built on top of the platform, to its customers. These services will range from
security services, SD-WAN, edge compute, and 5G mobility solutions to multiple other services.
TT&C solution partner
We have partnered with Kubos to bring Major Tom, their Cloud-Based Mission Control Software, to Azure
Marketplace for Azure Orbital customers.

Next steps
For more information on our preview, or to express interest to participate in the preview, fill the contact form
here, or email us at MSAzureOrbital@microsoft.com.
