AZ-700 Study Guide

Explore Azure Virtual Networks .......................................................................................................................

Capabilities of Azure Virtual Networks ......................................................................................................... 8

Design considerations for Azure Virtual Networks ....................................................................................... 8

Address space and subnets...................................................................................................................... 9

Determine a naming convention............................................................................................................ 10

Understand Regions and Subscriptions .................................................................................................. 10

Azure Availability Zones ......................................................................................................................... 10

Configure public IP services ........................................................................................................................... 12

Use dynamic and static public IP addresses ................................................................................................... 12

Architecture diagram................................................................................................................................. 13

Design name resolution for your virtual network ........................................................................................... 13

Public DNS services ................................................................................................................................... 13

Considerations .......................................................................................................................................... 14

Private DNS services .................................................................................................................................. 14

Link VNets to private DNS zones ............................................................................................................ 14

Integrating on-premises DNS with Azure VNets ..................................................................................... 16

Enable cross-virtual network connectivity with peering ................................................................................. 17

Gateway Transit and Connectivity .............................................................................................................. 18

Use service chaining to direct traffic to a gateway ..................................................................................... 18

Implement virtual network traffic routing ...................................................................................................... 19

System routes............................................................................................................................................ 19

Default routes ....................................................................................................................................... 20

Optional default routes ......................................................................................................................... 20

Configure internet access with Azure Virtual NAT .......................................................................................... 22

Design and implement hybrid networking ..................................................................................................... 23

Design and implement Azure VPN Gateway ................................................................................................... 23

Azure VPN Gateways ................................................................................................................................. 23

Plan a VPN gateway ................................................................................................................................... 23


VPN Gateway types ................................................................................................................................... 24

PolicyBased ........................................................................................................................................... 24

RouteBased ........................................................................................................................................... 24

High availability options for VPN connections ............................................................................................ 24

VPN Gateway redundancy ..................................................................................................................... 25

Multiple on-premises VPN devices ........................................................................................................ 25

Active-active VPN gateways ................................................................................................................... 26

Dual-redundancy: active-active VPN gateways for both Azure and on-premises networks...................... 27

Highly Available VNet-to-VNet ............................................................................................................... 27

Architecture diagram................................................................................................................................. 28

Connect networks with Site-to-site VPN connections ..................................................................................... 28

Connect devices to networks with Point-to-site VPN connections .................................................................. 30

Point-to-site protocols ........................................................................................................................... 30

Point-to-site authentication methods .................................................................................................... 31

Authenticate using native Azure certificate authentication .................................................................... 31

Authenticate using native Microsoft Entra authentication ..................................................................... 31

Authenticate using Active Directory (AD) Domain Server ....................................................................... 32

Connect remote resources by using Azure Virtual WANs................................................................................ 33

What is Azure Virtual WAN? ...................................................................................................................... 33

Hub private address space ..................................................................................................................... 34

Gateway scale ....................................................................................................................................... 34

Connect cross-tenant VNets to a Virtual WAN hub................................................................................. 36

Virtual Hub routing .................................................................................................................................... 36

Create a network virtual appliance (NVA) in a virtual hub .............................................................................. 37

Manage an NVA in a Virtual Hub ................................................................................................................ 37

Deploy an NVA in your Virtual Hub ............................................................................................................ 38

Design and implement Azure ExpressRoute ................................................................................................... 39

Explore Azure ExpressRoute .......................................................................................................................... 39


ExpressRoute capabilities .......................................................................................................................... 39

Understand use cases for Azure ExpressRoute ........................................................................................... 40

ExpressRoute connectivity models............................................................................................................. 40

Design considerations for ExpressRoute deployments ............................................................................... 42

Choose between provider and direct model (ExpressRoute Direct) ........................................................ 42

Route advertisement ................................................................................................................................. 42

Bidirectional Forwarding Detection ........................................................................................................... 43

Configure encryption over ExpressRoute ................................................................................................... 44

Design redundancy for an ExpressRoute deployment ................................................................................ 45

Configure ExpressRoute and site to site coexisting connections ............................................................. 46

Create a zone redundant VNet gateway in Azure availability zones ........................................................ 47

Configure a Site-to-Site VPN as a failover path for ExpressRoute ................................................................ 48

Configure peering for an ExpressRoute deployment ...................................................................................... 50

Configure route filters for Microsoft Peering.............................................................................................. 51

Connect an ExpressRoute circuit to a virtual network .................................................................................... 51

Connect a virtual network to an ExpressRoute circuit ................................................................................ 52

Add a VPN to an ExpressRoute deployment ............................................................................................... 52

Connect geographically dispersed networks with ExpressRoute global reach ................................................. 53

Use cross-region connectivity to link multiple ExpressRoute locations ....................................................... 53

Choose when to use ExpressRoute global reach......................................................................................... 57

Load balance non-HTTP(S) traffic in Azure ..................................................................................................... 58

Explore load balancing .................................................................................................................................. 58

Load Balancing options for Azure............................................................................................................... 58

Categorizing load balancing services .......................................................................................................... 59

Global versus regional ........................................................................................................................... 59

HTTP(S) versus non-HTTP(S) .................................................................................................................. 59

Choosing a load balancing option for Azure ............................................................................................... 60

Design and implement Azure load balancer using the Azure portal ................................................................ 62
Choosing a load balancer type ................................................................................................................... 62

Azure load balancer and availability zones ................................................................................................. 64

Zone redundant..................................................................................................................................... 65

Zonal ..................................................................................................................................................... 65

Architecture diagram................................................................................................................................. 67

Explore Azure Traffic Manager ....................................................................................................................... 68

Key features of Traffic Manager ................................................................................................................. 68

How Traffic Manager works ....................................................................................................................... 69

Traffic Manager example client usage .................................................................................................... 70

Traffic routing methods ............................................................................................................................. 71

Routing method examples ..................................................................................................................... 73

Load balance HTTP(S) traffic in Azure ............................................................................................................ 76

Design Azure Application Gateway ................................................................................................................ 76

Application Gateway features .................................................................................................................... 77

Determine Application Gateway routing .................................................................................................... 77

Path-based routing ................................................................................................................................ 78

Multiple site routing .............................................................................................................................. 79

Configure Azure Application Gateway ............................................................................................................ 80

Frontend configuration.......................................................................................................................... 80

Backend configuration ........................................................................................................................... 80

Configure health probes ............................................................................................................................ 81

Default health probe ............................................................................................................................. 82

Design and configure Azure Front Door ......................................................................................................... 83

Azure Front Door tier comparison ............................................................................................................. 85

Create a Front Door in the Azure portal ..................................................................................................... 85

Routing architecture overview ................................................................................................................... 85

Configure redirection rules in Front Door .................................................................................................. 86

Front Door route rules configuration structure ...................................................................................... 86


Redirection types .................................................................................................................................. 88

Redirection protocol .............................................................................................................................. 88

Destination host .................................................................................................................................... 89

Destination path.................................................................................................................................... 89

Destination fragment ............................................................................................................................ 89

Query string parameters........................................................................................................................ 90

Configure rewrite policies.......................................................................................................................... 90

Configure health probes, including customization of HTTP response codes ................................................ 90

Supported HTTP methods for health probes .......................................................................................... 91

Secure Front Door with SSL ....................................................................................................................... 91

Design and implement network security........................................................................................................ 92

Get network security recommendations with Microsoft Defender for Cloud .................................................. 92

Network Security....................................................................................................................................... 92

NS-1: Establish network segmentation boundaries ................................................................................ 93

NS-2: Secure cloud services with network controls ................................................................................ 94

NS-3: Deploy firewall at the edge of enterprise network ........................................................................ 94

NS-4: Deploy intrusion detection/intrusion prevention systems (IDS/IPS) .............................................. 95

NS-5: Deploy DDOS protection .............................................................................................................. 95

NS-6: Deploy web application firewall.................................................................................................... 95

NS-7: Simplify network security configuration ....................................................................................... 96

NS-8: Detect and disable insecure services and protocols ...................................................................... 96

NS-9: Connect on-premises or cloud network privately.......................................................................... 97

NS-10: Ensure Domain Name System (DNS) security .............................................................................. 97

Microsoft cloud security benchmark.......................................................................................................... 98

Implement Microsoft cloud security benchmark .................................................................................... 99

Regulatory compliance dashboard ....................................................................................................... 100

Deploy Azure DDoS Protection by using the Azure portal ............................................................................. 101

Distributed Denial of Service (DDoS) ........................................................................................................ 101


DDoS implementation ............................................................................................................................. 101

Types of DDoS attacks ............................................................................................................................. 102

Azure DDoS protection features .............................................................................................................. 103

Multi-layered protection ......................................................................................................................... 103

Deploying a DDoS protection plan ........................................................................................................... 104

Deploy Network Security Groups by using the Azure portal ......................................................................... 105

NSG security rules ................................................................................................................................... 105

Application Security Groups .................................................................................................................... 106

Filter network traffic with an NSG using the Azure portal ......................................................................... 106

Design and implement Azure Firewall .......................................................................................................... 109

Azure Firewall features ............................................................................................................................ 109

Rule processing in Azure Firewall ............................................................................................................. 111

Rule processing with classic rules ........................................................................................................ 111

Rule processing with Firewall Policy..................................................................................................... 111

Outbound connectivity using network rules and application rules ....................................................... 112

Inbound connectivity using DNAT rules and network rules ................................................................... 112

Deploying and configuring Azure Firewall ................................................................................................ 113

Deploying Azure Firewall with Availability Zones .................................................................................. 114

Methods for deploying an Azure Firewall with Availability Zones ......................................................... 114

Secure your networks with Azure Firewall Manager..................................................................................... 116

Working with Azure Firewall Manager ..................................................................................................... 116

Azure Firewall Manager features ......................................................................................................... 117

Azure Firewall Manager policies .......................................................................................................... 118

Deploying Azure Firewall Manager for Hub Virtual Networks ............................................................... 119

Deploying Azure Firewall Manager for Secured Virtual Hubs ................................................................ 120

Implement a Web Application Firewall on Azure Front Door ........................................................................ 121

Web Application Firewall policy modes.................................................................................................... 122

Web Application Firewall Default Rule Set rule groups and rules .............................................................. 122
Managed rules .................................................................................................................................... 122

Custom rules ....................................................................................................................................... 123

Design and implement private access to Azure Services............................................................................... 124

Explain virtual network service endpoints.................................................................................................... 124

What is a virtual network service endpoint? ............................................................................................ 124

Preparing to Implement Service Endpoints .............................................................................................. 125

Create Service Endpoints ......................................................................................................................... 126

Configure service tags ............................................................................................................................. 126

Available service tags .............................................................................................................................. 127

Define Private Link Service and private endpoint ......................................................................................... 129

What is Azure Private Link? ..................................................................................................................... 129

What is Azure Private Endpoint?.............................................................................................................. 130

How is Azure Private Endpoint different from a service endpoint? ....................................................... 131

What is Azure Private Link Service? ......................................................................................................... 131

Private Endpoint properties ..................................................................................................................... 132

Integrate private endpoint with DNS............................................................................................................ 134

Azure Private Endpoint DNS configuration ............................................................................................... 134

Significance of IP address 168.63.129.16 ................................................................................................. 135

Azure services Private DNS zone configuration examples ......................................................................... 136

DNS configuration scenarios .................................................................................................................... 136

On-premises workloads using a DNS forwarder ................................................................................... 137

Virtual network and on-premises workloads using Azure DNS Private Resolver .................................... 138

Design and implement network monitoring................................................................................................. 140

Monitor your networks using Azure monitor ............................................................................................... 140

What is Azure Monitor?........................................................................................................................... 140

Monitor data types in Azure Monitor................................................................................................... 141

Azure Monitor metrics ........................................................................................................................ 141

Azure Monitor metrics sources ............................................................................................................ 142


Monitor your networks using Azure network watcher ................................................................................. 142

Azure Network Watcher .......................................................................................................................... 143

Configure NSG Flow Logs ......................................................................................................................... 145

Connection Monitor ................................................................................................................................ 146

Connection Monitor overview ............................................................................................................. 146

Set up Connection Monitor ................................................................................................................. 147

Traffic Analytics ....................................................................................................................................... 148

How Traffic Analytics works ................................................................................................................. 148

Introduction to Azure Virtual Networks

Explore Azure Virtual Networks

Capabilities of Azure Virtual Networks


• Communication with the internet.
• Communication between Azure resources.
• Communication between on-premises resources.
• Filtering network traffic.
• Routing network traffic.

Design considerations for Azure Virtual Networks

Address space and subnets

Virtual Networks

When creating a VNet, it's recommended that you use the address ranges enumerated in
RFC 1918, which have been set aside by the IETF for private, non-routable address spaces:
• 10.0.0.0 - 10.255.255.255 (10/8 prefix)
• 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
• 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

In addition, you can't add the following address ranges:


• 224.0.0.0/4 (Multicast)
• 255.255.255.255/32 (Broadcast)
• 127.0.0.0/8 (Loopback)
• 169.254.0.0/16 (Link-local)
• 168.63.129.16/32 (Internal DNS)

Subnets

When planning to implement subnets, you need to consider the following:


• Each subnet must have a unique address range, specified in Classless Inter-Domain
Routing (CIDR) format.
• Certain Azure services require their own subnet.
• Subnets can be used for traffic management. For example, you can create subnets
to route traffic through a network virtual appliance.
• You can limit access to Azure resources to specific subnets with a virtual network
service endpoint. You can create multiple subnets, and enable a service endpoint
for some subnets, but not others.
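
The subnet considerations above can be checked before anything is deployed. The following minimal sketch uses only Python's standard ipaddress module; the VNet range, subnet names, and sizes are illustrative assumptions, not values from this guide.

```python
# Carve a VNet address space into subnets and sanity-check the plan.
import ipaddress

vnet_space = ipaddress.ip_network("10.0.0.0/16")      # example RFC 1918 range

# Split the space into /24 subnets and reserve a few for specific roles.
subnets = list(vnet_space.subnets(new_prefix=24))
plan = {
    "frontend": subnets[0],                 # 10.0.0.0/24
    "backend": subnets[1],                  # 10.0.1.0/24
    "AzureFirewallSubnet": subnets[2],      # some Azure services need a dedicated subnet
}

assigned = list(plan.values())
# Every subnet must sit inside the VNet range, and no two subnets may overlap.
assert all(s.subnet_of(vnet_space) for s in assigned)
assert all(not a.overlaps(b) for i, a in enumerate(assigned) for b in assigned[i + 1:])

for name, prefix in plan.items():
    # Azure reserves 5 addresses in every subnet, so usable hosts = total - 5.
    print(f"{name}: {prefix} ({prefix.num_addresses - 5} usable addresses)")
```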
Determine a naming convention

An effective naming convention composes resource names from important information about each resource.

Understand Regions and Subscriptions


All Azure resources are created in an Azure region and subscription. A resource can only
be created in a virtual network that exists in the same region and subscription as the
resource.

Azure Availability Zones


An Azure Availability Zone is a unique physical location within an Azure region.

Azure services that support Availability Zones fall into three categories:
• Zonal services: Resources can be pinned to a specific zone. For example, virtual
machines, managed disks, or standard IP addresses can be pinned to a specific
zone, which allows for increased resilience by having one or more instances of
resources spread across zones.
• Zone-redundant services: Resources are replicated or distributed across zones
automatically. Azure replicates the data across three zones so that a zone failure
doesn't impact its availability.
• Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages as well as region-wide outages.
Configure public IP services
Use dynamic and static public IP addresses
In Azure Resource Manager, a public IP address is a resource that has its own properties.
Some of the resources you can associate a public IP address resource with:
• Virtual machine network interfaces
• Virtual machine scale sets
• Public Load Balancers
• Virtual Network Gateways (VPN/ER)
• NAT gateways
• Application Gateways
• Azure Firewall
• Bastion Host
• Route Server
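
As an illustration of how a public IP address is created as its own resource, here is a hedged sketch using the azure-identity and azure-mgmt-network Python packages; the subscription ID, resource group, name, and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Standard SKU public IPs are always statically allocated; Basic SKU also
# allows dynamic allocation.
poller = client.public_ip_addresses.begin_create_or_update(
    "rg-network",        # resource group (placeholder)
    "pip-web",           # public IP resource name (placeholder)
    {
        "location": "eastus",
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
        "public_ip_address_version": "IPv4",
    },
)
public_ip = poller.result()
# The address can then be associated with a load balancer, gateway, firewall, etc.
print(public_ip.ip_address)
```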
Architecture diagram

Design name resolution for your virtual network

Public DNS services


Public DNS services resolve names and IP addresses for resources and services accessible
over the internet such as web servers.

In Azure DNS, you can create address records manually within relevant zones. The records
most frequently used will be:
• Host records: A/AAAA (IPv4/IPv6)
• Alias records: CNAME
Considerations
• The name of the zone must be unique within the resource group, and the zone
must not exist already.
• The same zone name can be reused in a different resource group or a different
Azure subscription.
• Where multiple zones share the same name, each instance is assigned different
name server addresses.
• Root/Parent domain is registered at the registrar and pointed to Azure NS.
• Child domains are registered in Azure DNS directly.

Private DNS services


Private DNS services resolve names and IP addresses for resources and services within your virtual networks, without exposing those records on the public internet.
Link VNets to private DNS zones


In Azure, a VNet represents a group of one or more subnets, as defined by a CIDR range.
Resources such as VMs are added to subnets.

At the VNet level, default DNS configuration is part of the DHCP assignments made by
Azure, specifying the special address 168.63.129.16 to use Azure DNS services.
Two ways to link VNets to a private zone:
• Registration: Each VNet can link to one private DNS zone for registration. However,
up to 100 VNets can link to the same private DNS zone for registration.
• Resolution: There may be many other private DNS zones for different namespaces.
You can link a VNet to each of those zones for name resolution. Each VNet can link
to up to 1000 private DNS Zones for name resolution.
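
Both link types can be created programmatically. Below is a hedged sketch using the azure-mgmt-privatedns package (assumed to be available); the zone name, VNet ID, and resource groups are placeholders, and setting registration_enabled to False would create a resolution-only link instead.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-app"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app"
)

# Registration link: VMs in vnet-app auto-register their records in the zone.
client.virtual_network_links.begin_create_or_update(
    "rg-dns",                      # resource group of the private DNS zone
    "contoso.internal",            # private DNS zone name (placeholder)
    "link-vnet-app",               # link name (placeholder)
    {
        "location": "global",      # virtual network links are global resources
        "virtual_network": {"id": vnet_id},
        "registration_enabled": True,   # False = resolution-only link
    },
).result()
```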
Integrating on-premises DNS with Azure VNets
Forwarding takes two forms:
• Forwarding - specifies another DNS server (SOA for a zone) to resolve the query if
the initial server cannot.
• Conditional forwarding - specifies a DNS server for a named zone, so that all
queries for that zone are routed to the specified DNS server.
Enable cross-virtual network connectivity with peering
Virtual network peering enables you to seamlessly connect two Azure virtual networks.
Once peered, the virtual networks appear as one, for connectivity purposes. There are two
types of VNet peering.
• Regional VNet peering connects Azure virtual networks in the same region.
• Global VNet peering connects Azure virtual networks in different regions. When
creating a global peering, the peered virtual networks can exist in any Azure public
cloud region or China cloud regions, but not in Government cloud regions. You can
only peer virtual networks in the same region in Azure Government cloud regions.

The benefits of using virtual network peering, whether local or global, include:
• A low-latency, high-bandwidth connection between resources in different virtual
networks.
• The ability to apply network security groups in either virtual network to block
access to other virtual networks or subnets.
• The ability to transfer data between virtual networks across Azure subscriptions,
Microsoft Entra tenants, deployment models, and Azure regions.
• The ability to peer virtual networks created through the Azure Resource Manager.
• The ability to peer a virtual network created through Resource Manager to one
created through the classic deployment model.
• No downtime to resources in either virtual network is required when creating the
peering, or after the peering is created.

Gateway Transit and Connectivity


When virtual networks are peered, you can configure the VPN gateway in one of the peered virtual networks as a transit point. The other peered virtual network then uses this remote gateway to gain access to other resources. A virtual network can have only one gateway. Gateway transit is supported for both VNet peering and global VNet peering.

When you allow gateway transit, the virtual network can communicate with resources
outside the peering. For example, the subnet gateway could:
• Use a site-to-site VPN to connect to an on-premises network.
• Use a VNet-to-VNet connection to another virtual network.
• Use a point-to-site VPN to connect to a client.
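
The gateway transit behavior described above maps to two peering properties. Below is a hedged sketch with azure-mgmt-network; the resource groups, VNet names, and IDs are placeholders. The hub-side peering sets allow_gateway_transit and the spoke-side peering sets use_remote_gateways.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

hub_vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-hub"
    "/providers/Microsoft.Network/virtualNetworks/vnet-hub"
)
spoke_vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-spoke"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke"
)

# Hub side: offer the hub's VPN gateway to the peered spoke.
client.virtual_network_peerings.begin_create_or_update(
    "rg-hub", "vnet-hub", "hub-to-spoke",
    {
        "remote_virtual_network": {"id": spoke_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,     # allow gateway transit
    },
).result()

# Spoke side: consume the hub's gateway instead of deploying one locally.
client.virtual_network_peerings.begin_create_or_update(
    "rg-spoke", "vnet-spoke", "spoke-to-hub",
    {
        "remote_virtual_network": {"id": hub_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "use_remote_gateways": True,       # use the hub's VPN gateway
    },
).result()
```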

Use service chaining to direct traffic to a gateway
Suppose you want to direct traffic from the Contoso VNet to a specific network virtual
appliance (NVA). Create user-defined routes to direct traffic from the Contoso VNet to the
NVA in the Fabrikam VNet. This technique is known as service chaining.
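
A hedged sketch of such a user-defined route with azure-mgmt-network follows; the NVA's private IP address and the resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Route table with a single route that sends all outbound traffic from the
# Contoso (spoke) subnets to the NVA in the Fabrikam (hub) VNet.
client.route_tables.begin_create_or_update(
    "rg-contoso",
    "rt-contoso-to-nva",
    {
        "location": "eastus",
        "routes": [
            {
                "name": "default-via-nva",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.1.0.4",   # private IP of the NVA (placeholder)
            }
        ],
    },
).result()
# The route table is then associated with the Contoso subnets by setting each
# subnet's route_table property to this resource.
```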

Azure virtual networks can be deployed in a hub-and-spoke topology, with the hub VNet
acting as a central point of connectivity to all the spoke VNets. The hub virtual network
hosts infrastructure components such as an NVA, virtual machines and a VPN gateway. All
the spoke virtual networks peer with the hub virtual network. Traffic flows through
network virtual appliances or VPN gateways in the hub virtual network. The benefits of
using a hub and spoke configuration include cost savings, overcoming subscription limits,
and workload isolation.
Implement virtual network traffic routing
Azure automatically creates a route table for each subnet within an Azure virtual network
and adds system default routes to the table.

System routes
Azure automatically creates system routes and assigns the routes to each subnet in a
virtual network. You can't create or remove system routes, but you can override some
system routes with custom routes. Azure creates default system routes for each subnet,
and adds additional optional default routes to specific subnets, or every subnet, when you
use specific Azure capabilities.
Default routes
Each route contains an address prefix and next hop type. When traffic leaving a subnet is
sent to an IP address within the address prefix of a route, the route that contains the prefix
is the route Azure uses.

In routing terms, a hop is a waypoint on the overall route. Therefore, the next hop is the
next waypoint that the traffic is directed to on its journey to its ultimate destination.

• Virtual network: Routes traffic between address ranges within the address space
of a virtual network. Azure creates a route with an address prefix that corresponds
to each address range defined within the address space of a virtual network. Azure
automatically routes traffic between subnets using the routes created for each
address range.
• Internet: Routes traffic specified by the address prefix to the Internet. The system
default route specifies the 0.0.0.0/0 address prefix. Azure routes traffic for any
address not specified by an address range within a virtual network to the Internet,
unless the destination address is for an Azure service. Azure routes any traffic
destined for its service directly to the service over the backbone network, rather
than routing the traffic to the Internet. You can override Azure's default system
route for the 0.0.0.0/0 address prefix with a custom route.
• None: Traffic routed to the None next hop type is dropped, rather than routed
outside the subnet. Azure automatically creates default routes for the following
address prefixes:
• 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16: Reserved for private use in RFC
1918.
• 100.64.0.0/10: Reserved in RFC 6598.

Optional default routes


• Virtual network (VNet) peering: When you create a virtual network peering
between two virtual networks, a route is added for each address range within the
address space of each virtual network.
• Virtual network gateway: When you add a virtual network gateway to a virtual
network, Azure adds one or more routes with Virtual network gateway as the next
hop type. The source is listed as virtual network gateway because the gateway adds
the routes to the subnet.
Configure internet access with Azure Virtual NAT
Globally, public IPv4 address ranges are in short supply, and using them to grant access to
Internet resources can be expensive. Network Address Translation (NAT) arose out of the
need for internal resources on a private network to share routable IPv4 addresses when
accessing external resources on a public network.

You define the NAT configuration for each subnet within a VNet to enable outbound
connectivity by specifying which NAT gateway resource to use. After NAT is configured,
all UDP and TCP outbound flows from any virtual machine instance will use NAT for
internet connectivity. No further configuration is necessary, and you don’t need to create
any user-defined routes. NAT takes precedence over other outbound scenarios and
replaces the default Internet destination of a subnet.
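
A hedged sketch of this configuration with azure-mgmt-network follows: create a static public IP, create the NAT gateway, and reference it from a subnet. All names, the region, and the subnet are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Outbound address used by the NAT gateway.
pip = client.public_ip_addresses.begin_create_or_update(
    "rg-network", "pip-nat",
    {"location": "eastus", "sku": {"name": "Standard"},
     "public_ip_allocation_method": "Static"},
).result()

nat = client.nat_gateways.begin_create_or_update(
    "rg-network", "natgw-prod",
    {"location": "eastus", "sku": {"name": "Standard"},
     "public_ip_addresses": [{"id": pip.id}]},
).result()

# Associate the NAT gateway with an existing subnet; all outbound UDP/TCP flows
# from that subnet then use it, with no user-defined routes required.
subnet = client.subnets.get("rg-network", "vnet-prod", "backend")
subnet.nat_gateway = SubResource(id=nat.id)
client.subnets.begin_create_or_update("rg-network", "vnet-prod", "backend", subnet).result()
```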

Design and implement hybrid networking

Design and implement Azure VPN Gateway

Azure VPN Gateways


An Azure VPN gateway is a specific type of virtual network gateway that is used to send
and receive encrypted traffic between an Azure virtual network and an on-premises
location over the public Internet. Azure VPN gateways can also be used to connect
separate Azure virtual networks using an encrypted tunnel across the Microsoft network
backbone.

Plan a VPN gateway


When you're planning a VPN gateway, there are three architectures to consider:
• Point to site over the internet
• Site to site over the internet
• Site to site over a dedicated network, such as Azure ExpressRoute
VPN Gateway types

PolicyBased
PolicyBased VPNs were previously called static routing gateways in the classic deployment
model. Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the
IPsec policies configured with the combinations of address prefixes between your on-
premises network and the Azure VNet.

When using a PolicyBased VPN, keep in mind the following limitations:

• Policy-based VPNs, which support IKEv1 protocols, can be used with the Basic gateway SKU only.
• You can have only one tunnel when using a PolicyBased VPN.
• You can only use PolicyBased VPNs for S2S connections, and only for certain configurations. Most VPN Gateway configurations require a RouteBased VPN.

RouteBased
RouteBased VPNs were previously called dynamic routing gateways in the classic
deployment model. RouteBased VPNs use "routes" in the IP forwarding or routing table
to direct packets into their corresponding tunnel interfaces. The tunnel interfaces then
encrypt or decrypt the packets in and out of the tunnels. The policy (or traffic selector) for
RouteBased VPNs is configured as any-to-any (or wildcards). The value for a RouteBased
VPN type is RouteBased.
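
A hedged sketch (not an official procedure) of creating a RouteBased VPN gateway with azure-mgmt-network is shown below. The GatewaySubnet and public IP are assumed to already exist; the names, IDs, region, and SKU are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

gateway_subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-hub"
    "/providers/Microsoft.Network/virtualNetworks/vnet-hub/subnets/GatewaySubnet"
)
public_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-hub"
    "/providers/Microsoft.Network/publicIPAddresses/pip-vpngw"
)

# Long-running operation: gateway provisioning commonly takes 30+ minutes.
client.virtual_network_gateways.begin_create_or_update(
    "rg-hub",
    "vpngw-hub",
    {
        "location": "eastus",
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",            # dynamic routing; required for most configurations
        "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
        "enable_bgp": False,
        "active_active": False,              # set True for an active-active gateway
        "ip_configurations": [
            {
                "name": "default",
                "private_ip_allocation_method": "Dynamic",
                "subnet": {"id": gateway_subnet_id},
                "public_ip_address": {"id": public_ip_id},
            }
        ],
    },
).result()
```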

High availability options for VPN connections
To provide better availability for your VPN connections, there are a few options available:
• VPN Gateway redundancy (Active-standby)
• Multiple on-premises VPN devices
• Active-active Azure VPN gateway
• Combination of both

VPN Gateway redundancy


Every Azure VPN gateway consists of two instances in an active-standby configuration.
For any planned maintenance or unplanned disruption that happens to the active instance,
the standby instance would take over (failover) automatically and resume the S2S VPN or
VNet-to-VNet connections.

Multiple on-premises VPN devices


You can use multiple VPN devices from your on-premises network to connect to your
Azure VPN gateway, as shown in the following diagram:
This configuration provides multiple active tunnels from the same Azure VPN gateway to
your on-premises devices in the same location. There are some requirements and
constraints:
1. You need to create multiple S2S VPN connections from your VPN devices to Azure.
When you connect multiple VPN devices from the same on-premises network to
Azure, you need to create one local network gateway for each VPN device, and one
connection from your Azure VPN gateway to each local network gateway.
2. The local network gateways corresponding to your VPN devices must have unique
public IP addresses in the GatewayIpAddress property.
3. BGP is required for this configuration. Each local network gateway representing a
VPN device must have a unique BGP peer IP address specified in the
BgpPeerIpAddress property.
4. You should use BGP to advertise the same on-premises network prefixes to your
Azure VPN gateway, so that traffic is forwarded through these tunnels
simultaneously.
5. You must use Equal-cost multi-path routing (ECMP).
6. Each connection is counted against the maximum number of tunnels for your
Azure VPN gateway: 10 for Basic and Standard SKUs, and 30 for the
HighPerformance SKU.
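
The first three requirements above (one local network gateway per device, unique public IPs, and unique BGP peer addresses) can be expressed as in the following hedged sketch with azure-mgmt-network; the ASN, addresses, and names are illustrative.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One local network gateway per on-premises VPN device.
on_prem_devices = [
    {"name": "lng-device1", "public_ip": "203.0.113.10", "bgp_peer": "10.10.0.1"},
    {"name": "lng-device2", "public_ip": "203.0.113.11", "bgp_peer": "10.10.0.2"},
]

for device in on_prem_devices:
    client.local_network_gateways.begin_create_or_update(
        "rg-hybrid",
        device["name"],
        {
            "location": "eastus",
            "gateway_ip_address": device["public_ip"],       # unique public IP per device
            "bgp_settings": {
                "asn": 65010,                                # on-premises ASN (placeholder)
                "bgp_peering_address": device["bgp_peer"],   # unique BGP peer IP per device
            },
        },
    ).result()

# A separate connection (with BGP enabled) is then created from the Azure VPN
# gateway to each of these local network gateways.
```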

Active-active VPN gateways


You can create an Azure VPN gateway in an active-active configuration, where both
instances of the gateway VMs establish S2S VPN tunnels to your on-premises VPN
device, as shown in the following diagram:
In this configuration, each Azure gateway instance will have a unique public IP address,
and each will establish an IPsec/IKE S2S VPN tunnel to your on-premises VPN device
specified in your local network gateway and connection. Note that both VPN tunnels are
part of the same connection. You will still need to configure your on-premises VPN device
to accept or establish two S2S VPN tunnels to those two Azure VPN gateway public IP
addresses.

Dual-redundancy: active-active VPN gateways for both Azure and on-premises networks
The most reliable option is to combine the active-active gateways on both your network
and Azure, as shown in the diagram below.

Here you create and set up the Azure VPN gateway in an active-active configuration and
create two local network gateways and two connections for your two on-premises VPN
devices as described above. The result is a full mesh connectivity of 4 IPsec tunnels
between your Azure virtual network and your on-premises network.

Highly Available VNet-to-VNet


The same active-active configuration can also apply to Azure VNet-to-VNet connections.
You can create active-active VPN gateways for both virtual networks, and connect them
together to form the same full mesh connectivity of 4 tunnels between the two VNets, as
shown in the diagram below:

Architecture diagram

Connect networks with Site-to-site VPN connections
A site-to-site (S2S) VPN gateway connection lets you create a secure connection to your
virtual network from another virtual network or a physical network. The following diagram
illustrates how you would connect an on-premises network to the Azure platform. The
internet connection uses an IPsec VPN tunnel.


• The on-premises network represents your on-premises Active Directory and any data or resources.
• The gateway is responsible for sending encrypted traffic to a virtual IP address
when it uses a public connection.
• The Azure virtual network holds all your cloud applications and any Azure VPN
gateway components.
• An Azure VPN gateway provides the encrypted link between the Azure virtual
network and your on-premises network. An Azure VPN gateway is made up of
these elements:
o Virtual network gateway
o Local network gateway
o Connection
o Gateway subnet
• Cloud applications are the ones you've made available through Azure.
• An internal load balancer, located in the front end, routes cloud traffic to the
correct cloud-based application or resource.
Connect devices to networks with Point-to-site VPN connections
A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your
virtual network from an individual client computer. A P2S connection is established by
starting it from the client computer. This solution is useful for telecommuters who want
to connect to Azure VNets from a remote location, such as from home or a conference.
P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few
clients that need to connect to a VNet.

Point-to-site protocols
Point-to-site VPN can use one of the following protocols:
• OpenVPN® Protocol, an SSL/TLS based VPN protocol. A TLS VPN solution can
penetrate firewalls, since most firewalls open TCP port 443 outbound, which TLS
uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and
above), Windows, Linux, and Mac devices (macOS versions 10.13 and above).
• Secure Socket Tunneling Protocol (SSTP), a proprietary TLS-based VPN protocol. A
TLS VPN solution can penetrate firewalls, since most firewalls open TCP port 443
outbound, which TLS uses. SSTP is only supported on Windows devices. Azure
supports all versions of Windows that have SSTP (Windows 7 and later).
• IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to
connect from Mac devices (macOS versions 10.11 and above).

Point-to-site authentication methods


The user must be authenticated before Azure accepts a P2S VPN connection. There are
three mechanisms that Azure offers to authenticate a connecting user.

Authenticate using native Azure certificate authentication
When using the native Azure certificate authentication, a client certificate on the device is
used to authenticate the connecting user. Client certificates are generated from a trusted
root certificate and then installed on each client computer. You can use a root certificate
that was generated using an Enterprise solution, or you can generate a self-signed
certificate.

Authenticate using native Microsoft Entra authentication
Microsoft Entra authentication allows users to connect to Azure using their Microsoft
Entra credentials. Native Microsoft Entra authentication is only supported for OpenVPN
protocol and Windows 10 and requires the use of the Azure VPN Client.

At a high level, you need to perform the following steps to configure Microsoft Entra
authentication:
• Configure a Microsoft Entra tenant
• Enable Microsoft Entra authentication on the gateway
• Download and configure Azure VPN Client

Authenticate using Active Directory (AD) Domain Server
AD Domain authentication is a popular option because it allows users to connect to Azure
using their organization domain credentials. It requires a RADIUS server that integrates
with the AD server. Organizations can also leverage their existing RADIUS deployment.
Connect remote resources by using Azure Virtual WANs

What is Azure Virtual WAN?


Azure Virtual WAN is a networking service that brings many networking, security, and
routing functionalities together to provide a single operational interface. Some of the
main features include:
• Branch connectivity (via connectivity automation from Virtual WAN Partner devices
such as SD-WAN or VPN CPE).
• Site-to-site VPN connectivity.
• Remote user VPN connectivity (point-to-site).
• Private connectivity (ExpressRoute).
• Intra-cloud connectivity (transitive connectivity for virtual networks).
• VPN ExpressRoute inter-connectivity.
• Routing, Azure Firewall, and encryption for private connectivity.

The following diagram shows an organization with two Virtual WAN hubs connecting the
spokes. VNets, Site-to-site and point-to-site VPNs, SD WANs, and ExpressRoute
connectivity are all supported.
To configure an end-to-end virtual WAN, you create the following resources:
• Virtual WAN
• Hub
• Hub virtual network connection
• Hub-to-hub connection
• Hub route table

Hub private address space


A virtual hub is a Microsoft-managed virtual network. The hub contains various service
endpoints to enable connectivity. From your on-premises network (vpnsite), you can
connect to a VPN gateway inside the virtual hub, connect ExpressRoute circuits to a virtual
hub, or even connect mobile users to a point-to-site gateway in the virtual hub. The hub
is the core of your network in a region. Multiple virtual hubs can be created in the same
region.
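
As a rough, hedged sketch with azure-mgmt-network, a virtual hub is created by giving it its own private address prefix and a reference to an existing Virtual WAN; the WAN ID, names, region, and /23 prefix are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

virtual_wan_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-wan"
    "/providers/Microsoft.Network/virtualWans/vwan-contoso"
)

# The hub's private address space must not overlap with any connected VNet
# or on-premises range.
client.virtual_hubs.begin_create_or_update(
    "rg-wan",
    "hub-eastus",
    {
        "location": "eastus",
        "address_prefix": "10.100.0.0/23",       # hub private address space (placeholder)
        "virtual_wan": {"id": virtual_wan_id},
        "sku": "Standard",
    },
).result()
```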

Gateway scale
A hub gateway isn't the same as a virtual network gateway that you use for ExpressRoute
and VPN Gateway. For example, when using Virtual WAN, you don't create a site-to-site
connection from your on-premises site directly to your VNet. Instead, you create a site-
to-site connection to the hub. The traffic always goes through the hub gateway. This
means that your VNets don't need their own virtual network gateway. Virtual WAN lets
your VNets take advantage of scaling easily through the virtual hub and the virtual hub
gateway.
Connect cross-tenant VNets to a Virtual WAN hub
You can use Virtual WAN to connect a VNet to a virtual hub in a different tenant. This
architecture is useful if you have client workloads that must be connected to the same
network but are on different tenants. For example, as shown in the following diagram, you
can connect a non-Contoso VNet (the Remote Tenant) to a Contoso virtual hub (the Parent
Tenant).

Before you can connect a cross-tenant VNet to a Virtual WAN hub, you must have the
following configuration already set up:
• A Virtual WAN and virtual hub in the parent subscription.
• A virtual network configured in a subscription in the remote tenant.
• Non-overlapping address spaces in the remote tenant and address spaces within
any other VNets already connected to the parent virtual hub.

Virtual Hub routing


The routing capabilities in a virtual hub are provided by a router that manages all routing
between gateways using Border Gateway Protocol (BGP). A virtual hub can contain
multiple gateways such as a Site-to-site VPN gateway, ExpressRoute gateway, Point-to-
site gateway, Azure Firewall. This router also provides transit connectivity between virtual
networks that connect to a virtual hub and can support up to an aggregate throughput
of 50 Gbps. These routing capabilities apply to Standard Virtual WAN customers.

Create a network virtual appliance (NVA) in a virtual hub
One of the benefits of Azure Virtual WAN is the ability to support reliable connections
from many different technologies, whether Microsoft based, such as ExpressRoute or a
VPN Gateway, or from a networking partner, such as Barracuda CloudGen WAN, Cisco
Cloud OnRamp for Multi-Cloud, and VMware SD-WAN. These types of devices are known
as network virtual appliances (NVAs); they are deployed directly into a Virtual WAN hub
and have an externally facing public IP address. This capability enables customers who
want to connect their branch Customer Premises Equipment (CPE) to the same brand NVA
in the virtual hub to take advantage of proprietary end-to-end SD-WAN capabilities. Once
VNets are connected to the virtual hub, NVAs enable transitive connectivity throughout
the organization's Virtual WAN.

Manage an NVA in a Virtual Hub


The NVAs available in the Azure Marketplace can be deployed directly into a virtual hub
and nowhere else. Each is deployed as a Managed Application, which allows Azure Virtual
WAN to manage the configuration of the NVA. They cannot be deployed within an
arbitrary VNet.

The following diagram shows the NVA deployment process:


Although each NVA offers support for different CPEs and has a slightly different user
experience, they all offer a Managed Application experience through Azure Marketplace,
NVA Infrastructure Unit-based capacity and billing, and Health Metrics surfaced through
Azure Monitor.

Deploy an NVA in your Virtual Hub


To deploy an NVA in your virtual hub, you can access the Azure Marketplace through the
Azure portal and select the Managed Application for the NVA partner that you need to
enable connectivity for your devices. When you create an NVA in the Virtual WAN hub,
like all Managed Applications, there will be two Resource Groups created in your
subscription.
• Customer Resource Group - This will contain an application placeholder for the
Managed Application. Partners can use this resource group to expose whatever
customer properties they choose here.
• Managed Resource Group - Customers cannot configure or change resources in
this resource group directly, as this is controlled by the publisher of the Managed
Application. This Resource Group will contain the NetworkVirtualAppliances
resource.
Design and implement Azure ExpressRoute

Explore Azure ExpressRoute


ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a
private connection with the help of a connectivity provider. With ExpressRoute, you can
establish connections to various Microsoft cloud services, such as Microsoft Azure and
Microsoft 365. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point
Ethernet network, or a virtual cross-connection through a connectivity provider at a
colocation facility. Because ExpressRoute connections do not traverse the public Internet,
they offer more reliability, faster speeds, consistent latencies, and higher security than
typical connections over the Internet.

ExpressRoute capabilities
Some key benefits of ExpressRoute are:
• Layer 3 connectivity between your on-premises network and the Microsoft Cloud
through a connectivity provider
• Connectivity can be from an any-to-any (IPVPN) network, a point-to-point Ethernet
connection, or through a virtual cross-connection via an Ethernet exchange
• Connectivity to Microsoft cloud services across all regions in the geopolitical region
• Global connectivity to Microsoft services across all regions with the ExpressRoute
premium add-on
• Built-in redundancy in every peering location for higher reliability

Azure ExpressRoute is used to create private connections between Azure datacenters and
infrastructure on your premises or in a colocation environment. ExpressRoute connections
do not go over the public Internet, and they offer more reliability, faster speeds, and lower
latencies than typical Internet connections.
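
As a rough illustration of the provisioning workflow, the following Azure CLI sketch
creates a provider-based circuit; the resource group, provider, peering location, and
bandwidth are placeholder values chosen for illustration, not recommendations.

    # Resource group and circuit names are illustrative
    az group create --name rg-er-demo --location westus2

    # Create a 200-Mbps Standard circuit through a connectivity provider.
    # The circuit remains "NotProvisioned" until the provider enables it
    # using the service key (s-key) shown in the command output.
    az network express-route create \
      --name er-circuit-demo \
      --resource-group rg-er-demo \
      --location westus2 \
      --provider "Equinix" \
      --peering-location "Silicon Valley" \
      --bandwidth 200 \
      --sku-tier Standard \
      --sku-family MeteredData

Hand the service key from the output to your connectivity provider; peerings and virtual
network connections can only be configured once the circuit is in the provisioned and
enabled state.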
Understand use cases for Azure
ExpressRoute
Faster and more reliable connections to Azure services - Organizations using Azure
services need reliable connections to Azure services and data centers. The performance of
the public internet depends on many factors and may not be suitable for a business. Azure
ExpressRoute is used to create private connections between Azure data centers and
infrastructure on your premises or in a colocation environment. Using ExpressRoute
connections to transfer data between on-premises systems and Azure can also give
significant cost benefits.

Storage, backup, and recovery - Backup and recovery are important for an organization
for business continuity and recovering from outages. ExpressRoute gives you a fast and
reliable connection to Azure with bandwidths up to 100 Gbps, which makes it excellent
for scenarios such as periodic data migration, replication for business continuity, disaster
recovery and other high-availability strategies.

Extends Data center capabilities - ExpressRoute can be used to connect and add
compute and storage capacity to your existing data centers. With high throughput and
low latency, Azure will feel like a natural extension to or between your data centers, so
you enjoy the scale and economics of the public cloud without having to compromise on
network performance.

Predictable, reliable, and high-throughput connections - With predictable, reliable,
and high-throughput connections offered by ExpressRoute, enterprises can build
applications that span on-premises infrastructure and Azure without compromising
privacy or performance. For example, run a corporate intranet application in Azure that
authenticates your customers with an on-premises Active Directory service, and serve all
your corporate customers without traffic ever routing through the public Internet.

ExpressRoute connectivity models


You can create a connection between your on-premises network and the Microsoft cloud
in four different ways: co-location at a cloud exchange, a point-to-point Ethernet connection,
an any-to-any (IPVPN) connection, and ExpressRoute Direct. Connectivity providers may
offer one or more connectivity models.

Co-located at a cloud exchange

If you are co-located in a facility with a cloud exchange, you can order virtual cross-
connections to the Microsoft cloud through the co-location provider’s Ethernet exchange.
Co-location providers can offer either Layer 2 cross-connections, or managed Layer 3
cross-connections between your infrastructure in the co-location facility and the
Microsoft cloud.

Point-to-point Ethernet connections

You can connect your on-premises datacenters/offices to the Microsoft cloud through
point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2
connections, or managed Layer 3 connections between your site and the Microsoft cloud.

Any-to-any (IPVPN) networks

You can integrate your WAN with the Microsoft cloud. IPVPN providers (typically MPLS
VPN) offer any-to-any connectivity between your branch offices and datacenters. The
Microsoft cloud can be interconnected to your WAN to make it look just like any other
branch office. WAN providers typically offer managed Layer 3 connectivity.
Direct from ExpressRoute sites

You can connect directly into Microsoft's global network at peering locations
strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or
10-Gbps connectivity, which supports Active/Active connectivity at scale.

Design considerations for ExpressRoute deployments

Choose between provider and direct model (ExpressRoute Direct)

ExpressRoute Direct

ExpressRoute Direct gives you the ability to connect directly into Microsoft’s global
network at peering locations strategically distributed around the world. ExpressRoute
Direct provides dual 100 Gbps or 10-Gbps connectivity, which supports Active/Active
connectivity at scale. You can work with any service provider for ExpressRoute Direct.

Key features that ExpressRoute Direct provides include:


• Massive Data Ingestion into services like Storage and Cosmos DB
• Physical isolation for industries that are regulated and require dedicated and
isolated connectivity, such as banking, government, and retail
• Granular control of circuit distribution based on business unit

Using ExpressRoute Direct vs. using a service provider

Route advertisement
When Microsoft peering gets configured on your ExpressRoute circuit, the Microsoft Edge
routers establish a pair of Border Gateway Protocol (BGP) sessions with your edge routers
through your connectivity provider. No routes are advertised to your network. To enable
route advertisements to your network, you must associate a route filter.

In order to associate a route filter:


• You must have an active ExpressRoute circuit that has Microsoft peering
provisioned.
• Create an ExpressRoute circuit and have the circuit enabled by your connectivity
provider before you continue. The ExpressRoute circuit must be in a provisioned
and enabled state.
• Create Microsoft peering if you manage the BGP session directly. Or, have your
connectivity provider provision Microsoft peering for your circuit.

Bidirectional Forwarding Detection


ExpressRoute supports Bidirectional Forwarding Detection (BFD) both over private and
Microsoft peering. When you enable BFD over ExpressRoute, you can speed up the link
failure detection between the Microsoft Enterprise Edge (MSEE) devices and the routers on
which your ExpressRoute circuit is configured (customer edge or provider edge). You can
enable BFD on your own edge routing devices, or on your partner's edge routing devices if
you opted for a managed Layer 3 connection service. This section walks you through the need for BFD, and how to
enable BFD over ExpressRoute.
Configure encryption over
ExpressRoute
This section shows you how to use Azure Virtual WAN to establish an IPsec/IKE VPN
connection from your on-premises network to Azure over the private peering of an Azure
ExpressRoute circuit. This technique can provide an encrypted transit between the on-
premises networks and Azure virtual networks over ExpressRoute, without going over the
public internet or using public IP addresses.

Topology and routing

The following diagram shows an example of VPN connectivity over ExpressRoute private
peering:

The diagram shows a network within the on-premises network connected to the Azure
hub VPN gateway over ExpressRoute private peering. The connectivity establishment is
straightforward:
• Establish ExpressRoute connectivity with an ExpressRoute circuit and private
peering.
• Establish the VPN connectivity.

An important aspect of this configuration is routing between the on-premises networks
and Azure over both the ExpressRoute and VPN paths.

Traffic from on-premises networks to Azure


For traffic from on-premises networks to Azure, the Azure prefixes (including the virtual
hub and all the spoke virtual networks connected to the hub) are advertised via both the
ExpressRoute private peering BGP and the VPN BGP. This results in two network routes
(paths) toward Azure from the on-premises networks:
• One over the IPsec-protected path
• One directly over ExpressRoute without IPsec protection

To apply encryption to the communication, you must make sure that for the VPN-
connected network in the diagram, the Azure routes via the on-premises VPN gateway are
preferred over the direct ExpressRoute path.

Traffic from Azure to on-premises networks

The same requirement applies to the traffic from Azure to on-premises networks. To
ensure that the IPsec path is preferred over the direct ExpressRoute path (without IPsec),
you have two options:
• Advertise more specific prefixes on the VPN BGP session for the VPN-connected
network. You can advertise a larger range that encompasses the VPN-connected
network over ExpressRoute private peering, then more specific ranges in the VPN
BGP session. For example, advertise 10.0.0.0/16 over ExpressRoute, and 10.0.1.0/24
over VPN.
• Advertise disjoint prefixes for VPN and ExpressRoute. If the VPN-connected
network ranges are disjoint from other ExpressRoute connected networks, you can
advertise the prefixes in the VPN and ExpressRoute BGP sessions, respectively. For
example, advertise 10.0.0.0/24 over ExpressRoute, and 10.0.1.0/24 over VPN.

In both examples, Azure will send traffic to 10.0.1.0/24 over the VPN connection rather
than directly over ExpressRoute without VPN protection.

Design redundancy for an ExpressRoute deployment
There are two ways in which redundancy can be planned for an ExpressRoute deployment:
• Configure ExpressRoute and Site-to-Site VPN coexisting connections
• Create a zone-redundant VNet gateway in Azure Availability Zones
Configure ExpressRoute and Site-to-Site VPN coexisting connections
This section helps you configure ExpressRoute and Site-to-Site VPN connections that
coexist. Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several
advantages:
• You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.
• Alternatively, you can use Site-to-Site VPNs to connect to sites that are not
connected through ExpressRoute.

You can configure either gateway first. Typically, you will incur no downtime when adding
a new gateway or gateway connection.

Network Limits and limitations


• Only route-based VPN gateways are supported. You must use a route-based VPN
gateway; you can, however, use a route-based VPN gateway with a VPN connection
configured for 'policy-based traffic selectors'.
• The ASN of Azure VPN Gateway must be set to 65515. Azure VPN Gateway
supports the BGP routing protocol. For ExpressRoute and Azure VPN to work
together, you must keep the Autonomous System Number of your Azure VPN
gateway at its default value, 65515. If you previously selected an ASN other than
65515 and you change the setting to 65515, you must reset the VPN gateway for
the setting to take effect.
• The gateway subnet must be /27 or a shorter prefix (such as /26 or /25), or you
will receive an error message when you add the ExpressRoute virtual network
gateway.
• Coexistence in a dual stack VNet is not supported. If you are using ExpressRoute
IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN
Gateway will not be possible.
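
The following Azure CLI sketch illustrates the requirements listed above (a /27 gateway
subnet, a route-based gateway, and the default ASN of 65515); the resource names and
address prefix are placeholders.

    # GatewaySubnet sized /27 so both the VPN and ExpressRoute gateways fit
    az network vnet subnet create \
      --resource-group rg-hybrid-demo \
      --vnet-name vnet-hub \
      --name GatewaySubnet \
      --address-prefixes 10.0.255.0/27

    # Route-based VPN gateway that keeps the default ASN of 65515
    az network vnet-gateway create \
      --resource-group rg-hybrid-demo \
      --name vpngw-hub \
      --vnet vnet-hub \
      --public-ip-addresses pip-vpngw-hub \
      --gateway-type Vpn \
      --vpn-type RouteBased \
      --sku VpnGw1 \
      --asn 65515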
Create a zone redundant VNet gateway in
Azure availability zones
You can deploy VPN and ExpressRoute gateways in Azure Availability Zones. This brings
resiliency, scalability, and higher availability to virtual network gateways. Deploying
gateways in Azure Availability Zones physically and logically separates gateways within a
region, while protecting your on-premises network connectivity to Azure from zone-level
failures.

Zone-redundant gateways

To automatically deploy your virtual network gateways across availability zones, you can
use zone-redundant virtual network gateways. With zone-redundant gateways, you can
benefit from zone-resiliency to access your mission-critical, scalable services on Azure.

Zonal gateways

To deploy gateways in a specific zone, you can use zonal gateways. When you deploy a
zonal gateway, all instances of the gateway are deployed in the same Availability Zone.
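
As a minimal sketch, the commands below show the difference between the two deployment
types; the names are placeholders, and the AZ gateway SKU shown is one of several
available.

    # Zone-redundant: a Standard public IP spanning zones 1-3 plus an AZ gateway SKU
    az network public-ip create \
      --resource-group rg-hybrid-demo \
      --name pip-vpngw-zr \
      --sku Standard \
      --zone 1 2 3

    az network vnet-gateway create \
      --resource-group rg-hybrid-demo \
      --name vpngw-zr \
      --vnet vnet-hub \
      --public-ip-addresses pip-vpngw-zr \
      --gateway-type Vpn \
      --vpn-type RouteBased \
      --sku VpnGw2AZ

    # Zonal: pin the public IP (and therefore the gateway instances) to a single zone
    # by specifying, for example, --zone 1 when creating the public IP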
Configure a Site-to-Site VPN as a
failover path for ExpressRoute
You can configure a Site-to-Site VPN connection as a backup for ExpressRoute. This
connection applies only to virtual networks linked to the Azure private peering path. There
is no VPN-based failover solution for services accessible through Azure Microsoft peering.
The ExpressRoute circuit is always the primary link. Data flows through the Site-to-Site
VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local
network configuration should also prefer the ExpressRoute circuit over the Site-to-Site
VPN. You can prefer the ExpressRoute path by setting a higher local preference for the
routes received over ExpressRoute.
Configure peering for an
ExpressRoute deployment
An ExpressRoute circuit has two peering options associated with it: Azure private, and
Microsoft. Each peering is configured identically on a pair of routers (in active-active or
load sharing configuration) for high availability. Azure services are categorized as Azure
public and Azure private to represent the IP addressing schemes.

Create Peering configuration


• You can configure private peering and Microsoft peering for an ExpressRoute
circuit. Peerings can be configured in any order you choose. However, you must
make sure that you complete the configuration of each peering one at a time.
• You must have an active ExpressRoute circuit. Have the circuit enabled by your
connectivity provider before you continue. To configure peering(s), the
ExpressRoute circuit must be in a provisioned and enabled state.
• If you plan to use a shared key/MD5 hash, be sure to use the key on both sides of
the tunnel. The limit is a maximum of 25 alphanumeric characters. Special
characters are not supported.
• This only applies to circuits created with service providers offering Layer 2
connectivity services. If you are using a service provider that offers managed Layer
3 services (typically an IPVPN, like MPLS), your connectivity provider configures and
manages the routing for you.
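
A minimal Azure CLI sketch of creating both peerings on a circuit is shown below; the
ASN, VLAN IDs, peer subnets, and advertised prefixes are placeholder values that you
would agree with your connectivity provider.

    # Azure private peering
    az network express-route peering create \
      --resource-group rg-er-demo \
      --circuit-name er-circuit-demo \
      --peering-type AzurePrivatePeering \
      --peer-asn 65010 \
      --vlan-id 100 \
      --primary-peer-subnet 192.168.10.16/30 \
      --secondary-peer-subnet 192.168.10.20/30

    # Microsoft peering additionally requires the public prefixes you will advertise
    az network express-route peering create \
      --resource-group rg-er-demo \
      --circuit-name er-circuit-demo \
      --peering-type MicrosoftPeering \
      --peer-asn 65010 \
      --vlan-id 200 \
      --primary-peer-subnet 203.0.113.0/30 \
      --secondary-peer-subnet 203.0.113.4/30 \
      --advertised-public-prefixes 203.0.113.128/27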

Configure route filters for Microsoft Peering
Connectivity to all Azure and Microsoft 365 services causes many prefixes to be
advertised through BGP. The large number of prefixes significantly increases the size of
the route tables maintained by routers within your network. If you plan to consume only
a subset of services offered through Microsoft peering, you can reduce the size of your
route tables in two ways. You can:
• Filter out unwanted prefixes by applying route filters on BGP communities. Route
filtering is a standard networking practice and is used commonly within many
networks.
• Define route filters and apply them to your ExpressRoute circuit. A route filter is a
new resource that lets you select the list of services you plan to consume through
Microsoft peering. ExpressRoute routers only send the list of prefixes that belong
to the services identified in the route filter.
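
The sketch below shows the second approach; the filter name and the BGP community value
(12076:5010, which corresponds to Exchange Online) are illustrative, and the parameter
used to attach the filter to the peering is an assumption to verify against your CLI
version.

    # Create a route filter with a rule that allows only selected service communities
    az network route-filter create \
      --resource-group rg-er-demo \
      --name rf-m365-subset

    az network route-filter rule create \
      --resource-group rg-er-demo \
      --filter-name rf-m365-subset \
      --name allow-exchange \
      --access Allow \
      --communities 12076:5010

    # Attach the filter to the circuit's Microsoft peering
    # (the --route-filter parameter name is assumed; confirm with your CLI version)
    az network express-route peering update \
      --resource-group rg-er-demo \
      --circuit-name er-circuit-demo \
      --name MicrosoftPeering \
      --route-filter rf-m365-subset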

Connect an ExpressRoute circuit to a virtual network
An ExpressRoute circuit represents a logical connection between your on-premises
infrastructure and Microsoft cloud services through a connectivity provider. You can order
multiple ExpressRoute circuits. Each circuit can be in the same or different regions and
can be connected to your premises through different connectivity providers. ExpressRoute
circuits do not map to any physical entities. A circuit is uniquely identified by a standard
GUID called a service key (s-key).

Connect a virtual network to an ExpressRoute circuit
• You must have an active ExpressRoute circuit.
• Ensure that you have Azure private peering configured for your circuit.
• Ensure that Azure private peering gets configured and establishes BGP peering
between your network and Microsoft for end-to-end connectivity.
• Ensure that you have a virtual network and a virtual network gateway created and
fully provisioned. A virtual network gateway for ExpressRoute uses the
GatewayType 'ExpressRoute', not VPN.
• You can link up to 10 virtual networks to a standard ExpressRoute circuit. All virtual
networks must be in the same geopolitical region when using a standard
ExpressRoute circuit.
• A single VNet can be linked to up to 16 ExpressRoute circuits. Use the following
process to create a new connection object for each ExpressRoute circuit you are
connecting to. The ExpressRoute circuits can be in the same subscription, different
subscriptions, or a mix of both.
• If you enable the ExpressRoute premium add-on, you can link virtual networks
outside of the geopolitical region of the ExpressRoute circuit. The premium add-
on will also allow you to connect more than 10 virtual networks to your
ExpressRoute circuit depending on the bandwidth chosen.
• To create the connection from the ExpressRoute circuit to the target ExpressRoute
virtual network gateway, the number of address spaces advertised from the local
or peered virtual networks needs to be equal to or less than 200. Once the
connection has been successfully created, you can add additional address spaces,
up to 1,000, to the local or peered virtual networks.
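
A minimal sketch of the gateway and connection objects described above, with placeholder
names, might look like this:

    # ExpressRoute virtual network gateway (GatewayType 'ExpressRoute', not VPN);
    # the VNet, GatewaySubnet, and public IP are assumed to exist already
    az network vnet-gateway create \
      --resource-group rg-er-demo \
      --name ergw-hub \
      --vnet vnet-hub \
      --public-ip-addresses pip-ergw-hub \
      --gateway-type ExpressRoute \
      --sku Standard

    # Link the gateway to the circuit with a connection object
    az network vpn-connection create \
      --resource-group rg-er-demo \
      --name conn-er-hub \
      --vnet-gateway1 ergw-hub \
      --express-route-circuit2 er-circuit-demo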

Add a VPN to an ExpressRoute deployment
This section helps you configure secure encrypted connectivity between your on-premises
network and your Azure virtual networks (VNets) over an ExpressRoute private connection.
You can use Microsoft peering to establish a site-to-site IPsec/IKE VPN tunnel between
your selected on-premises networks and Azure VNets. Configuring a secure tunnel over
ExpressRoute allows for data exchange with confidentiality, anti-replay, authenticity, and
integrity.

Connect geographically dispersed networks with ExpressRoute Global Reach

Use cross-region connectivity to link multiple ExpressRoute locations
There are various ways of designing and implementing ExpressRoute based on specific
organizational requirements.

ExpressRoute connections enable access to the following services:


• Microsoft Azure services
• Microsoft 365 services

Connectivity to all regions within a geopolitical region


You can connect to Microsoft in one of the peering locations and access regions within
the geopolitical region.

For example, if you connect to Microsoft in Amsterdam through ExpressRoute, you will
have access to all Microsoft cloud services hosted in Northern and Western Europe.

Global connectivity with ExpressRoute Premium

You can enable ExpressRoute Premium to extend connectivity across geopolitical
boundaries. For example, if you connect to Microsoft in Amsterdam through ExpressRoute,
you will have access to all Microsoft cloud services hosted in all regions across the world.
You can also access services deployed in South America or Australia the same way you
access North and West Europe regions. National clouds are excluded.

Local connectivity with ExpressRoute Local

You can transfer data cost-effectively by enabling the Local SKU. With the Local SKU, you
can bring your data to an ExpressRoute location near the Azure region you want. With
Local, data transfer is included in the ExpressRoute port charge.

Across on-premises connectivity with ExpressRoute Global Reach

You can enable ExpressRoute Global Reach to exchange data across your on-premises
sites by connecting your ExpressRoute circuits. For example, suppose you have a private
data center in California connected to an ExpressRoute circuit in Silicon Valley and
another private data center in Texas connected to an ExpressRoute circuit in Dallas. With
ExpressRoute Global Reach, you can connect your private data centers together through
these two ExpressRoute circuits. Your cross-data-center traffic will traverse
Microsoft's network.
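
A hedged Azure CLI sketch of linking the two circuits is shown below; the circuit names,
subscription ID, and the /29 address prefix are placeholders, and you should verify the
command group and parameters against your installed CLI version.

    # Connect CircuitA's private peering to CircuitB with a /29 that does not
    # overlap any on-premises or Azure address space
    az network express-route peering connection create \
      --resource-group rg-er-demo \
      --circuit-name er-circuit-siliconvalley \
      --peering-name AzurePrivatePeering \
      --name global-reach-to-dallas \
      --peer-circuit "/subscriptions/<sub-id>/resourceGroups/rg-er-demo/providers/Microsoft.Network/expressRouteCircuits/er-circuit-dallas" \
      --address-prefix 192.168.100.0/29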

Rich connectivity partner ecosystem

ExpressRoute has a constantly growing ecosystem of connectivity providers and systems
integrator partners. You can refer to ExpressRoute partners and peering locations.

ExpressRoute Direct

ExpressRoute Direct provides customers the opportunity to connect directly into
Microsoft's global network at peering locations strategically distributed across the world.
ExpressRoute Direct provides dual 100-Gbps connectivity, which supports Active/Active
connectivity at scale.

ExpressRoute is a private and resilient way to connect your on-premises networks to the
Microsoft Cloud. You can access many Microsoft cloud services such as Azure and
Microsoft 365 from your private data center or your corporate network. For example, you
might have a branch office in San Francisco with an ExpressRoute circuit in Silicon Valley
and another branch office in London with an ExpressRoute circuit in the same city. Both
branch offices have high-speed connectivity to Azure resources in US West and UK South.
However, the branch offices cannot connect and send data directly with one another. In
other words, 10.0.1.0/24 can send data to 10.0.3.0/24 and 10.0.4.0/24 network, but NOT
to 10.0.2.0/24 network.
Choose when to use ExpressRoute
global reach
ExpressRoute Global Reach is designed to complement your service provider's WAN
implementation and connect your branch offices across the world. For example, suppose your
service provider primarily operates in the United States and has linked all your branches
in the U.S., but does not operate in Japan or Hong Kong SAR. With ExpressRoute Global
Reach, you can work with a local service provider in those locations, and Microsoft will
connect your branches there to the ones in the U.S. using ExpressRoute and the Microsoft
global network.
Load balance non-HTTP(S)
traffic in Azure

Explore load balancing


The term load balancing refers to the even distribution of workloads (that is, incoming
network traffic) across a group of backend computing resources or servers. Load
balancing aims to optimize resource use, maximize throughput, minimize response time,
and avoid overloading any single resource. It can also improve availability by sharing a
workload across redundant computing resources.

Load Balancing options for Azure


Azure provides various load balancing services that you can use to distribute your
workloads across multiple computing resources, but the following are the main services:
• Azure Load Balancer - high-performance, ultra-low-latency Layer 4 load-
balancing service (inbound and outbound) for all UDP and TCP protocols. It's built
to handle millions of requests per second while ensuring your solution is highly
available. Azure Load Balancer is zone-redundant, ensuring high availability across
Availability Zones.
• Traffic Manager - DNS-based traffic load balancer that enables you to distribute
traffic optimally to services across global Azure regions, while providing high
availability and responsiveness. Because Traffic Manager is a DNS-based load-
balancing service, it load-balances only at the domain level. For that reason, it can't
fail over as quickly as Front Door, because of common challenges around DNS
caching and systems not honoring DNS time-to-live values (TTLs).
• Azure Application Gateway - provides application delivery controller (ADC) as a
service, offering various Layer 7 load-balancing capabilities. Use it to optimize web
farm productivity by offloading CPU-intensive SSL termination to the gateway.
• Azure Front Door - application delivery network that provides global load
balancing and site acceleration service for web applications. It offers Layer 7
capabilities for your application like SSL offload, path-based routing, fast failover,
caching, etc. to improve performance and high-availability of your applications.

Categorizing load balancing services


The above load balancing services can be categorized in two ways: global versus regional,
and HTTP(S) versus non-HTTP(S).

Global versus regional


Global load-balancing services distribute traffic across regional backends, clouds, or
hybrid on-premises services. These services route end-user traffic to the closest available
backend. They also react to changes in service reliability or performance, in order to
maximize availability and performance. You can think of them as systems that load balance
between application stamps, endpoints, or scale-units hosted across different
regions/geographies.

In contrast, regional load-balancing services distribute traffic within virtual networks
across virtual machines (VMs) or zonal and zone-redundant service endpoints within a
region. You can think of them as systems that load balance between VMs, containers, or
clusters within a region in a virtual network.

HTTP(S) versus non-HTTP(S)


HTTP(S) load-balancing services are Layer 7 load balancers that only accept HTTP(S)
traffic. They're intended for web applications or other HTTP(S) endpoints. They include
features such as SSL offload, web application firewall, path-based load balancing, and
session affinity.

In contrast, non-HTTP(S) load-balancing services can handle non-HTTP(S) traffic and are
recommended for non-web workloads.

The table below summarizes these categorizations for each Azure load balancing service.
Service Global/regional Recommended traffic
Azure Front Door Global HTTP(S)
Traffic Manager Global non-HTTP(S)
Application Gateway Regional HTTP(S)
Azure Load Balancer Regional or Global non-HTTP(S)

Choosing a load balancing option for Azure
When choosing an appropriate load balancing option, there are some key factors to
consider:
• Type of traffic - is it for a web application? Is it a public-facing or private
application?
• Scope - do you need to load balance virtual machines and containers within a
virtual network, or load balance across regions, or both? (see 'Global versus
regional' above)
• Availability - what is the Service Level Agreement (SLA) for the service?
• Cost - In addition to the cost of the actual service itself, consider the operational
cost to manage and maintain a solution built on that service. See Load balancing
pricing.
• Features and limitations - what features and benefits does each service provide,
and what are its limitations? See Load balancer limits.

The flowchart below will help you to select the most appropriate load-balancing solution
for your application, by guiding you through a set of key decision criteria in order to reach
a recommendation.
Design and implement Azure load
balancer using the Azure portal
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI)
model. It's the single point of contact for clients. Azure Load Balancer distributes inbound
flows that arrive at the load balancer's front end to backend pool instances. Flows are
distributed according to the configured load-balancing rules and health probes. The backend
pool instances can be Azure Virtual Machines or instances in a virtual machine scale set.

Choosing a load balancer type


Load balancers can be public (also known as external) or internal (also known as private).

A public load balancer can provide outbound connections for virtual machines (VMs)
inside your virtual network. These connections are accomplished by translating their
private IP addresses to public IP addresses. External load balancers are used to distribute
client traffic from the internet across your VMs. That internet traffic might come from web
browsers, mobile apps, or other sources.

An internal load balancer is used where private IPs are needed at the frontend only.
Internal load balancers are used to load balance traffic from internal Azure resources to
other Azure resources inside a virtual network. A load balancer frontend can also be
accessed from an on-premises network in a hybrid scenario.
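
As an illustrative sketch (names are placeholders), a public Standard Load Balancer with
a frontend, backend pool, health probe, and rule can be created as follows; the final
comment notes the change needed for an internal load balancer.

    # Public Standard Load Balancer with frontend, backend pool, probe, and rule
    az network lb create \
      --resource-group rg-lb-demo \
      --name lb-web \
      --sku Standard \
      --public-ip-address pip-lb-web \
      --frontend-ip-name feWeb \
      --backend-pool-name bePoolWeb

    az network lb probe create \
      --resource-group rg-lb-demo \
      --lb-name lb-web \
      --name probeHttp \
      --protocol Tcp \
      --port 80

    az network lb rule create \
      --resource-group rg-lb-demo \
      --lb-name lb-web \
      --name ruleHttp \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name feWeb \
      --backend-pool-name bePoolWeb \
      --probe-name probeHttp

    # For an internal load balancer, replace --public-ip-address with
    # --vnet-name <vnet> --subnet <subnet> so the frontend receives a private IP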
Azure load balancer and availability
zones
Azure services that support availability zones fall into three categories:
• Zonal services: Resources can be pinned to a specific zone. For example, virtual
machines, managed disks, or standard IP addresses can be pinned to a specific
zone, which allows for increased resilience by having one or more instances of
resources spread across zones.
• Zone-redundant services: Resources are replicated or distributed across zones
automatically. Azure replicates the data across three zones so that a zone failure
doesn't impact its availability.
• Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages and region-wide outages.

Azure Load Balancer supports availability zone scenarios. You can use Standard Load
Balancer to increase availability throughout your scenario by aligning resources with, and
distributing them across, zones. Review this document to understand these concepts and
fundamental scenario design guidance.

A Load Balancer can either be zone redundant, zonal, or non-zonal. To configure the zone
related properties (mentioned above) for your load balancer, select the appropriate type
of frontend needed.
Zone redundant

In a region with Availability Zones, a Standard Load Balancer can be zone-redundant.
Traffic is served by a single IP address.

A single frontend IP address survives zone failure. The frontend IP may be used to reach
all (non-impacted) backend pool members no matter the zone. One or more availability
zones can fail and the data path survives as long as one zone in the region remains healthy.

The frontend's IP address is served simultaneously by multiple independent infrastructure
deployments in multiple availability zones. Any retries or reestablishment succeed in other
zones not affected by the zone failure.

Zonal
You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend.
This scenario means any inbound or outbound flow is served by a single zone in a region.
Your frontend shares fate with the health of the zone. The data path is unaffected by
failures in zones other than where it was guaranteed. You can use zonal frontends to
expose an IP address per Availability Zone.

Additionally, the use of zonal frontends directly for load balanced endpoints within each
zone is supported. You can use this configuration to expose per zone load-balanced
endpoints to individually monitor each zone. For public endpoints, you can integrate them
with a DNS load-balancing product like Traffic Manager and use a single DNS name.

For a public load balancer frontend, you add a zones parameter to the public IP. This
public IP is referenced by the frontend IP configuration used by the respective rule.

For an internal load balancer frontend, add a zones parameter to the internal load
balancer frontend IP configuration. A zonal frontend guarantees an IP address in a subnet
to a specific zone.
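
A brief sketch of the difference, using placeholder names:

    # Zone-redundant frontend: Standard public IP spanning all three zones
    az network public-ip create \
      --resource-group rg-lb-demo \
      --name pip-lb-zr \
      --sku Standard \
      --zone 1 2 3

    # Zonal frontend: pin the public IP (and the frontend that references it) to zone 1
    az network public-ip create \
      --resource-group rg-lb-demo \
      --name pip-lb-zone1 \
      --sku Standard \
      --zone 1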
Architecture diagram
Explore Azure Traffic Manager
Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to
distribute traffic to your public facing applications across the global Azure regions. Traffic
Manager also provides your public endpoints with high availability and quick
responsiveness.

Traffic Manager uses DNS to direct the client requests to the appropriate service endpoint
based on a traffic-routing method. Traffic manager also provides health monitoring for
every endpoint. The endpoint can be any Internet-facing service hosted inside or outside
of Azure. Traffic Manager provides a range of traffic-routing methods and endpoint
monitoring options to suit different application needs and automatic failover models.
Traffic Manager is resilient to failure, including the failure of an entire Azure region.

Key features of Traffic Manager


Traffic Manager offers several key features:

Increase application availability

Traffic Manager delivers high availability for your critical applications by monitoring your
endpoints and providing automatic failover when an endpoint goes down.

Improve application performance

Azure allows you to run cloud services and websites in datacenters located around the
world. Traffic Manager can improve the responsiveness of your website by directing traffic
to the endpoint with the lowest latency.

Service maintenance without downtime


You can have planned maintenance done on your applications without downtime. Traffic
Manager can direct traffic to alternative endpoints while the maintenance is in progress.

Combine hybrid applications

Traffic Manager supports external, non-Azure endpoints enabling it to be used with hybrid
cloud and on-premises deployments, including the burst-to-cloud, migrate-to-cloud, and
failover-to-cloud scenarios.

Distribute traffic for complex deployments

Using nested Traffic Manager profiles, multiple traffic-routing methods can be combined
to create sophisticated and flexible rules to scale to the needs of larger, more complex
deployments.

How Traffic Manager works


Azure Traffic Manager enables you to control the distribution of traffic across your
application endpoints. An endpoint is any Internet-facing service hosted inside or outside
of Azure.

Traffic Manager provides two key benefits:


• Distribution of traffic according to one of several traffic-routing methods
• Continuous monitoring of endpoint health and automatic failover when
endpoints fail

When a client attempts to connect to a service, it must first resolve the DNS name of the
service to an IP address. The client then connects to that IP address to access the service.

Traffic Manager uses DNS to direct clients to specific service endpoints based on the rules
of the traffic-routing method. Clients connect to the selected endpoint directly. Traffic
Manager isn't a proxy or a gateway. Traffic Manager doesn't see the traffic passing
between the client and the service.
Traffic Manager example client usage

1. The client sends a DNS query to its configured recursive DNS service to resolve the
name 'partners.contoso.com'. A recursive DNS service, sometimes called a 'local
DNS' service, doesn't host DNS domains directly. Rather, the client off-loads the
work of contacting the various authoritative DNS services across the Internet
needed to resolve a DNS name.
2. To resolve the DNS name, the recursive DNS service finds the name servers for the
'contoso.com' domain. It then contacts those name servers to request the
'partners.contoso.com' DNS record. The contoso.com DNS servers return the
CNAME record that points to contoso.trafficmanager.net.
3. Next, the recursive DNS service finds the name servers for the 'trafficmanager.net'
domain, which are provided by the Azure Traffic Manager service. It then sends a
request for the 'contoso.trafficmanager.net' DNS record to those DNS servers.
4. The Traffic Manager name servers receive the request. They choose an endpoint
based on:
o The configured state of each endpoint (disabled endpoints aren't returned)
o The current health of each endpoint, as determined by the Traffic Manager
health checks.
o The chosen traffic-routing method.
5. The chosen endpoint is returned as another DNS CNAME record. In this case, let
us suppose contoso-eu.cloudapp.net is returned.
6. Next, the recursive DNS service finds the name servers for the 'cloudapp.net'
domain. It contacts those name servers to request the 'contoso-eu.cloudapp.net'
DNS record. A DNS 'A' record containing the IP address of the EU-based service
endpoint is returned.
7. The recursive DNS service consolidates the results and returns a single DNS
response to the client.
8. The client receives the DNS results and connects to the given IP address. The client
connects to the application service endpoint directly, not through Traffic Manager.
Since it's an HTTPS endpoint, the client performs the necessary SSL/TLS handshake,
and then makes an HTTP GET request for the '/login.aspx' page.

Traffic routing methods


Azure Traffic Manager supports six traffic-routing methods to determine how to route
network traffic to the various service endpoints. For any profile, Traffic Manager applies
the traffic-routing method associated to it to each DNS query it receives. The traffic-
routing method determines which endpoint is returned in the DNS response.

The following traffic routing methods are available in Traffic Manager:

Routing method

Priority

Select this routing method when you want to have a primary service endpoint for all traffic.
You can provide multiple backup endpoints in case the primary or one of the backup
endpoints is unavailable.
Weighted

Select this routing method when you want to distribute traffic across a set of endpoints
based on their weight. Set the weight the same to distribute evenly across all endpoints.

Performance

Select the routing method when you have endpoints in different geographic locations,
and you want end users to use the "closest" endpoint for the lowest network latency.

Geographic

Select this routing method to direct users to specific endpoints (Azure, External, or
Nested) based on where their DNS queries originate from geographically. This routing
method helps you comply with data sovereignty mandates, localize content and user
experience, and measure traffic from different regions.

MultiValue

Select this routing method for Traffic Manager profiles that can only have IPv4/IPv6
addresses as endpoints. When a query is received for this profile, all healthy endpoints
are returned.

Subnet

Select this routing method to map sets of end-user IP address ranges to a specific
endpoint. When a request is received, the endpoint returned will be the one mapped for
that request’s source IP address.
Routing method examples
This is an example of the Priority routing method.

This is an example of the Weighted routing method.


This is an example of the Performance routing method.

This is an example of the Geographic routing method.


Traffic Manager endpoints
Azure Traffic Manager enables you to control how network traffic is distributed to
application deployments running in your different datacenters. You configure each
application deployment as an endpoint in Traffic Manager. When Traffic Manager receives
a DNS request, it chooses an available endpoint to return in the DNS response. Traffic
manager bases the choice on the current endpoint status and the traffic-routing method.

Traffic Manager supports three types of endpoints:


• Azure endpoints - Use this type of endpoint to load-balance traffic to a cloud
service, web app, or public IP address in the same subscription within Azure.
• External endpoints - Use this type of endpoint to load balance traffic for IPv4/IPv6
addresses, FQDNs, or for services hosted outside Azure. These services can either
be on-premises or with a different hosting provider.
• Nested endpoints - Use this type of endpoint to combine Traffic Manager profiles
to create more flexible traffic-routing schemes to support the needs of larger, more
complex deployments. With Nested endpoints, a child profile is added as an
endpoint to a parent profile. Both the child and parent profiles can contain other
endpoints of any type, including other nested profiles.

There are no restrictions on how different endpoint types can be combined in a single
Traffic Manager profile; each profile can contain any mix of endpoint types.
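
As a minimal sketch (resource names, the DNS label, and resource IDs are placeholders), a
Performance-routed profile with an Azure endpoint and an external endpoint could be
created like this:

    # Performance-routed profile; the DNS name must be globally unique
    az network traffic-manager profile create \
      --resource-group rg-tm-demo \
      --name tm-contoso \
      --routing-method Performance \
      --unique-dns-name contoso-demo-tm

    # Azure endpoint pointing at a public IP in the same subscription
    # (the public IP must have a DNS label configured)
    az network traffic-manager endpoint create \
      --resource-group rg-tm-demo \
      --profile-name tm-contoso \
      --name ep-westeurope \
      --type azureEndpoints \
      --target-resource-id "/subscriptions/<sub-id>/resourceGroups/rg-web/providers/Microsoft.Network/publicIPAddresses/pip-web-weu"

    # External endpoint identified by an FQDN hosted outside Azure
    az network traffic-manager endpoint create \
      --resource-group rg-tm-demo \
      --profile-name tm-contoso \
      --name ep-onprem \
      --type externalEndpoints \
      --target onprem.contoso.com \
      --endpoint-location "West Europe"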
Load balance HTTP(S) traffic in
Azure

Design Azure Application Gateway


Azure Application Gateway is a web traffic load balancer that enables you to manage
traffic to your web applications. Traditional load balancers operate at the transport layer
(OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a
destination IP address and port.

Application Gateway can make routing decisions based on additional attributes of an
HTTP request, for example URI path or host headers. For example, you can route traffic
based on the incoming URL. So, if /images is in the incoming URL, you can route traffic to
a specific set of servers (known as a pool) configured for images. If /video is in the URL,
that traffic is routed to another pool that's optimized for videos.

This type of routing is known as application layer (OSI layer 7) load balancing. Azure
Application Gateway can do URL-based routing and more.
Application Gateway features
• Support for the HTTP, HTTPS, HTTP/2 and WebSocket protocols.
• A web application firewall to protect against web application vulnerabilities.
• End-to-end request encryption.
• Autoscaling, to dynamically adjust capacity as your web traffic load changes.
• Redirection: Traffic can be redirected to another site, or from HTTP to HTTPS.
• Rewrite HTTP headers: HTTP headers allow the client and server to pass
parameter information with the request or the response.
• Custom error pages: Application Gateway allows you to create custom error pages
instead of displaying default error pages. You can use your own branding and
layout using a custom error page.

Determine Application Gateway routing
Clients send requests for your web apps to the IP address or DNS name of the gateway.
The gateway routes requests to a selected web server in the back-end pool, using a set of
rules configured for the gateway to determine where the request should go.

There are two primary methods of routing traffic: path-based routing and multiple-site
routing.

Path-based routing
Path-based routing sends requests with different URL paths to different pools of back-end
servers. For example, you could direct requests with the path /video/* to a back-end pool
containing servers that are optimized to handle video streaming, and direct /images/*
requests to a pool of servers that handle image retrieval.
Multiple site routing
Multiple site routing configures more than one web application on the same application
gateway instance. In a multi-site configuration, you register multiple DNS names
(CNAMEs) for the IP address of the Application Gateway, specifying the name of each site.
Application Gateway uses separate listeners to wait for requests for each site. Each listener
passes the request to a different rule, which can route the requests to servers in a different
back-end pool. For example, you could direct all requests for https://contoso.com to
servers in one back-end pool, and requests for https://fabrikam.com to another back-end
pool. The following diagram shows this configuration.

Multi-site configurations are useful for supporting multi-tenant applications, where each
tenant has its own set of virtual machines or other resources hosting a web application.
Configure Azure Application
Gateway
Application Gateway has a series of components that combine to route requests to a pool
of web servers and to check the health of these web servers.

Frontend configuration
You can configure the application gateway to have a public IP address, a private IP address,
or both. A public IP address is required when you host a back end that clients must access
over the Internet via an Internet-facing virtual IP.

Backend configuration
The backend pool is used to route requests to the backend servers that serve the request.
Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses,
internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends
like Azure App Service. You can create an empty backend pool with your application
gateway and then add backend targets to the backend pool.
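
The following sketch creates a Standard_v2 gateway with a public frontend and two backend
server addresses; all names, the region, and the addresses are placeholders.

    # Standard_v2 gateway with a public frontend and two backend servers
    az network application-gateway create \
      --resource-group rg-agw-demo \
      --name agw-web \
      --location westeurope \
      --sku Standard_v2 \
      --capacity 2 \
      --vnet-name vnet-web \
      --subnet snet-appgw \
      --public-ip-address pip-agw-web \
      --frontend-port 80 \
      --http-settings-port 80 \
      --http-settings-protocol Http \
      --priority 100 \
      --servers 10.10.1.4 10.10.1.5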

Configure health probes


Azure Application Gateway by default monitors the health of all resources in its back-end
pool and automatically removes any resource considered unhealthy from the pool.
Application Gateway continues to monitor the unhealthy instances and adds them back
to the healthy back-end pool once they become available and respond to health probes.
By default, Application gateway sends the health probes with the same port that is defined
in the back-end HTTP settings. A custom probe port can be configured using a custom
health probe.

The source IP address that the Application Gateway uses for health probes depends on
the backend pool:
• If the server address in the backend pool is a public endpoint, then the source
address is the application gateway's frontend public IP address.
• If the server address in the backend pool is a private endpoint, then the source IP
address is from the application gateway subnet's private IP address space.
Default health probe
An application gateway automatically configures a default health probe when you don't
set up any custom probe configurations. The monitoring behavior works by making an
HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For
default probes, if the backend HTTP settings are configured for HTTPS, the probe uses
HTTPS to test the health of the backend servers.

For example: You configure your application gateway to use back-end servers A, B, and C
to receive HTTP network traffic on port 80. The default health monitoring tests the three
servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each
request. A healthy HTTP response has a status code between 200 and 399. In this case,
the HTTP GET request for the health probe looks like http://127.0.0.1/.

If the default probe check fails for server A, the application gateway stops forwarding
requests to this server. The default probe continues to check for server A every 30 seconds.
When server A responds successfully to one request from a default health probe,
application gateway starts forwarding the requests to the server again.
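
A hedged sketch of adding a custom probe and attaching it to the backend HTTP settings is
shown below; the probe path and the default HTTP settings name are assumptions to confirm
against your gateway's configuration.

    # Custom probe that tests /healthz every 30 seconds
    az network application-gateway probe create \
      --resource-group rg-agw-demo \
      --gateway-name agw-web \
      --name probe-web \
      --protocol Http \
      --host 127.0.0.1 \
      --path /healthz \
      --interval 30 \
      --timeout 30 \
      --threshold 3

    # Associate the probe with the backend HTTP settings created with the gateway
    # (the default settings name shown is assumed; list the settings first to confirm)
    az network application-gateway http-settings update \
      --resource-group rg-agw-demo \
      --gateway-name agw-web \
      --name appGatewayBackendHttpSettings \
      --probe probe-web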
Design and configure Azure Front
Door
Azure Front Door is Microsoft’s modern cloud Content Delivery Network (CDN) that
provides fast, reliable, and secure access between your users and your applications’ static
and dynamic web content across the globe. Azure Front Door delivers your content using
the Microsoft’s global edge network with hundreds of global and local POPs distributed
around the world close to both your enterprise and consumer end users.

Many organizations have applications they want to make available to their customers,
their suppliers, and almost certainly their users. The tricky part is making sure those
applications are highly available. In addition, they need to be able to quickly respond
while being appropriately secured. Azure Front Door provides different SKUs (pricing tiers)
that meet these requirements. Let's briefly review the features and benefits of these SKUs
so you can determine which option best suits your requirements.
A secure, modern cloud CDN provides a distributed platform of servers. This helps
minimize latency when users are accessing webpages. Historically, IT staff might have
used a CDN and a web application firewall to control HTTP and HTTPS traffic flowing to
and from target applications.

If an organization uses Azure, they might achieve these goals by implementing the
products described in the following table
Product - Description
Azure Front Door - Enables an entry point to your apps positioned in the Microsoft global
edge network. Provides faster, more secure, and scalable access to your web applications.
Azure Content Delivery Network - Delivers high-bandwidth content to your users by caching
their content at strategically placed physical nodes around the world.
Azure Web Application Firewall - Helps provide centralized, greater protection for web
applications from common exploits and vulnerabilities.

Azure Front Door tier comparison


Azure Front Door is offered in two tiers: Azure Front Door Standard and Azure Front
Door Premium. The Standard and Premium tiers combine the capabilities of
Azure Front Door (classic), Azure CDN Standard from Microsoft (classic), and Azure WAF
into a single secure cloud CDN platform with intelligent threat protection. Azure Front
Door resides in the edge locations and manages user requests to your hosted applications.
Users connect to your application through the Microsoft global network. Azure Front Door
then routes user requests to the fastest and most available application backend.

For a comparison of supported features in Azure Front Door, review the feature
comparison table.

Create a Front Door in the Azure portal


Review the following QuickStart to learn how to create an Azure Front Door profile using
the Azure portal. You can create an Azure Front Door profile through Quick Create with
basic configurations, or through Custom create, which allows a more advanced
configuration.
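
Besides the portal, a profile can also be created from the command line. The sketch below
assumes the az afd command group is available in your CLI version; the profile and
endpoint names are placeholders (endpoint names must be globally unique).

    # Front Door Standard profile and an endpoint
    az afd profile create \
      --resource-group rg-afd-demo \
      --profile-name afd-contoso \
      --sku Standard_AzureFrontDoor

    az afd endpoint create \
      --resource-group rg-afd-demo \
      --profile-name afd-contoso \
      --endpoint-name contoso-demo \
      --enabled-state Enabled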

Routing architecture overview


Front Door traffic routing takes place over multiple stages. First, traffic is routed from the
client to Front Door. Then, Front Door uses your configuration to determine the origin to
send the traffic to. The Front Door web application firewall, routing rules, rules engine,
and caching configuration all affect the routing process. The following diagram illustrates
the routing architecture:
Configure redirection rules in Front
Door
After a connection is established and the TLS handshake completes, one of the first things
Front Door does when a request lands on a Front Door environment is determine which
routing rule to match the request to, and then take the action defined in the
configuration.

Front Door route rules configuration structure
A Front Door routing rule configuration is composed of two major parts: a "left-hand side"
and a "right-hand side". Front Door matches the incoming request to the left-hand side
of the route. The right-hand side defines how Front Door processes the request.

Incoming match

The following properties determine whether the incoming request matches the routing
rule (or left-hand side):
• HTTP Protocols (HTTP/HTTPS)
• Hosts (for example, www.foo.com, *.bar.com)
• Paths (for example, /, /users/, /file.gif)

These properties are expanded out internally so that every combination of
Protocol/Host/Path is a potential match set.

Route data

Front Door speeds up the processing of requests by using caching. If caching is enabled
for a specific route, it uses the cached response. If there is no cached response for the
request, Front Door forwards the request to the appropriate backend in the configured
backend pool.

Route matching

Front Door attempts to match the most specific route first, looking only at the left-hand
side of the route. It first matches based on the HTTP protocol, then the frontend host,
then the Path.

• Frontend host matching:


o Look for any routing with an exact match on the host.
o If no exact frontend hosts match, reject the request and send a 400 Bad
Request error.
• Path matching:
o Look for any routing rule with an exact match on the Path.
o If there is no exact-match Path, look for routing rules with a wildcard Path
that matches.
o If no routing rules are found with a matching Path, then reject the request
and return a 400: Bad Request error HTTP response.

If there are no routing rules for an exact-match frontend host with a catch-all route
Path (/*), then there will not be a match to any routing rule.

Redirection types
A redirect type sets the response status code for the clients to understand the purpose of
the redirect. The following types of redirection are supported:

Redirection type - Action - Description
301 - Moved permanently - Indicates that the target resource has been assigned a new
permanent URI. Any future references to this resource will use one of the enclosed URIs.
Use the 301 status code for HTTP to HTTPS redirection.
302 - Found - Indicates that the target resource is temporarily under a different URI.
Since the redirection can change on occasion, the client should continue to use the
effective request URI for future requests.
307 - Temporary redirect - Indicates that the target resource is temporarily under a
different URI. The user agent MUST NOT change the request method if it does an automatic
redirection to that URI. Since the redirection can change over time, the client ought to
continue using the original effective request URI for future requests.
308 - Permanent redirect - Indicates that the target resource has been assigned a new
permanent URI. Any future references to this resource should use one of the enclosed URIs.

Redirection protocol
You can set the protocol that will be used for redirection. The most common use case of
the redirect feature is to set HTTP to HTTPS redirection.
• HTTPS only: Set the protocol to HTTPS only, if you're looking to redirect the traffic
from HTTP to HTTPS. Azure Front Door recommends that you should always set
the redirection to HTTPS only.
• HTTP only: Redirects the incoming request to HTTP. Use this value only if you want
to keep your traffic HTTP, that is, non-encrypted.
• Match request: This option keeps the protocol used by the incoming request. So,
an HTTP request remains HTTP and an HTTPS request remains HTTPS post
redirection.

Destination host
As part of configuring a redirect routing, you can also change the hostname or domain
for the redirect request. You can set this field to change the hostname in the URL for the
redirection or otherwise preserve the hostname from the incoming request. So, using this
field you can redirect all requests sent on https://www.contoso.com/* to
https://www.fabrikam.com/*.

Destination path
For cases where you want to replace the path segment of a URL as part of redirection, you
can set this field with the new path value. Otherwise, you can choose to preserve the path
value as part of redirect. So, using this field, you can redirect all requests sent to
https://www.contoso.com/* to https://www.contoso.com/redirected-site.

Destination fragment
The destination fragment is the portion of URL after '#', which is used by the browser to
land on a specific section of a web page. You can set this field to add a fragment to the
redirect URL.
Query string parameters
You can also replace the query string parameters in the redirected URL. To replace any
existing query string from the incoming request URL, set this field to 'Replace' and then
set the appropriate value. Otherwise, keep the original set of query strings by setting the
field to 'Preserve'. As an example, using this field, you can redirect all traffic sent to
https://www.contoso.com/foo/bar to
https://www.contoso.com/foo/bar?&utm_referrer=https%3A%2F%2Fwww.bing.com%2F.

Configure rewrite policies


Azure Front Door supports URL rewrite by configuring an optional Custom Forwarding
Path to use when constructing the request to forward to the backend. By default, if a
custom forwarding path isn't provided, the Front Door will copy the incoming URL path
to the URL used in the forwarded request. The Host header used in the forwarded request
is as configured for the selected backend. Read Backend Host Header to learn what it
does and how you can configure it.

The powerful part of URL rewrite is that the custom forwarding path copies any part of
the incoming path that matches a wildcard path to the forwarded path.

Configure health probes, including customization of HTTP response codes
To determine the health and proximity of each backend for a given Front Door
environment, each Front Door environment periodically sends a synthetic HTTP/HTTPS
request to each of your configured backends. Front Door then uses these responses from
the probe to determine the "best" backend resources to route your client requests.

Since Front Door has many edge environments globally, health probe volume for your
backends can be quite high - ranging from 25 requests every minute to as high as 1200
requests per minute, depending on the health probe frequency configured. With the
default probe frequency of 30 seconds, the probe volume on your backend should be
about 200 requests per minute.

Supported HTTP methods for health probes
Front Door supports sending probes over either HTTP or HTTPS protocols. These probes
are sent over the same TCP ports configured for routing client requests and cannot be
overridden.

Front Door supports the following HTTP methods for sending the health probes:

GET: The GET method means retrieve whatever information (in the form of an entity) is
identified by the Request-URI.

HEAD: The HEAD method is identical to GET except that the server MUST NOT return a
message-body in the response. Because it has lower load and cost on your backends, for
new Front Door profiles, by default, the probe method is set as HEAD.

Secure Front Door with SSL


By using the HTTPS protocol on your custom domain (for example, https://www.contoso.com),
you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's
sent across the internet. When your web browser is connected to a web site via HTTPS, it
validates the web site's security certificate and verifies that it was issued by a legitimate
certificate authority. This process provides security and protects your web applications
from attacks.

Some of the key attributes of the custom HTTPS feature are:


• No extra cost: There are no costs for certificate acquisition or renewal and no extra
cost for HTTPS traffic.
• Simple enablement: One-click provisioning is available from the Azure portal. You
can also use REST API or other developer tools to enable the feature.
• Complete certificate management: All certificate procurement and management
is handled for you. Certificates are automatically provisioned and renewed before
expiration, which removes the risks of service interruption because of a certificate
expiring.

Design and implement network security

Get network security recommendations with Microsoft Defender for Cloud
Network security covers a multitude of technologies, devices, and processes. It provides
a set of rules and configurations designed to protect the integrity, confidentiality and
accessibility of computer networks and data. Every organization, regardless of size,
industry, or infrastructure, requires a degree of network security solutions in place to
protect it from the ever-growing risks of attacks.

For Microsoft Azure, securing or providing the ability to secure resources like
microservices, VMs, data, and others is paramount. Microsoft Azure ensures it through a
distributed virtual firewall.

Network Security
Network Security covers controls to secure and protect Azure networks, including
securing virtual networks, establishing private connections, preventing and mitigating
external attacks, and securing DNS. Full description of the controls can be found at
Security Control V3: Network Security on Microsoft Docs.

NS-1: Establish network segmentation boundaries
Security Principle: Ensure that your virtual network deployment aligns to your enterprise
segmentation strategy defined in the GS-2 security control. Any workload that could incur
higher risk for the organization should be in isolated virtual networks. Examples of
high-risk workloads include:
• An application storing or processing highly sensitive data.
• An external network-facing application accessible by the public or users outside of
your organization.
• An application using insecure architecture or containing vulnerabilities that cannot
be easily remediated.

To enhance your enterprise segmentation strategy, restrict or monitor traffic between
internal resources using network controls. For specific, well-defined applications (such as
a 3-tier app), this can be a highly secure "deny by default, permit by exception" approach
by restricting the ports, protocols, source, and destination IPs of the network traffic. If you
have many applications and endpoints interacting with each other, blocking traffic may
not scale well, and you may only be able to monitor traffic.

Azure Guidance: Create a virtual network (VNet) as a fundamental segmentation
approach in your Azure network, so resources such as VMs can be deployed into the VNet
within a network boundary. To further segment the network, you can create subnets inside
the VNet for smaller sub-networks.

Use network security groups (NSG) as a network layer control to restrict or monitor traffic
by port, protocol, source IP address, or destination IP address.

You can also use application security groups (ASGs) to simplify complex configuration.
Instead of defining policy based on explicit IP addresses in network security groups, ASGs
enable you to configure network security as a natural extension of an application's
structure, allowing you to group virtual machines and define network security policies
based on those groups.
NS-2: Secure cloud services with network
controls
Security Principle: Secure cloud services by establishing a private access point for the
resources. You should also disable or restrict access from public networks when possible.

Azure Guidance: Deploy private endpoints for all Azure resources that support the Private
Link feature, to establish a private access point for the resources. You should also disable
or restrict public network access to services where feasible.

For certain services, you also have the option to deploy VNet integration for the service,
where you can restrict access to the VNet to establish a private access point for the service.

NS-3: Deploy firewall at the edge of enterprise network
Security Principle: Deploy a firewall to perform advanced filtering on network traffic to
and from external networks. You can also use firewalls between internal segments to
support a segmentation strategy. If required, use custom routes for your subnet to
override the system route when you need to force the network traffic to go through a
network appliance for security control purpose.

At a minimum, block known bad IP addresses and high-risk protocols, such as remote
management (for example, RDP and SSH) and intranet protocols (for example, SMB and
Kerberos).

Azure Guidance: Use Azure Firewall to provide fully stateful application layer traffic
restriction (such as URL filtering) and/or central management over a large number of
enterprise segments or spokes (in a hub/spoke topology).

If you have a complex network topology, such as a hub/spoke setup, you may need to
create user-defined routes (UDR) to ensure the traffic goes through the desired route. For
example, you have the option to use a UDR to redirect egress internet traffic through a
specific Azure Firewall or a network virtual appliance.

NS-4: Deploy intrusion detection/intrusion prevention systems (IDS/IPS)
Security Principle: Use network intrusion detection and intrusion prevention systems
(IDS/IPS) to inspect the network and payload traffic to or from your workload. Ensure that
IDS/IPS is always tuned to provide high-quality alerts to your SIEM solution.

For more in-depth host level detection and prevention capability, use host-based IDS/IPS
or a host-based endpoint detection and response (EDR) solution in conjunction with the
network IDS/IPS.

Azure Guidance: Use Azure Firewall’s IDPS capability on your network to alert on and/or
block traffic to and from known malicious IP addresses and domains.

For more in-depth host level detection and prevention capability, deploy host-based
IDS/IPS or a host-based endpoint detection and response (EDR) solution, such as
Microsoft Defender for Endpoint, at the VM level in conjunction with the network IDS/IPS.

NS-5: Deploy DDoS protection


Security Principle: Deploy distributed denial of service (DDoS) protection to protect your
network and applications from attacks.

Azure Guidance: Enable DDoS standard protection plan on your VNet to protect
resources that are exposed to the public networks.

NS-6: Deploy web application firewall


Security Principle: Deploy a web application firewall (WAF) and configure the appropriate
rules to protect your web applications and APIs from application-specific attacks.
Azure Guidance: Use web application firewall (WAF) capabilities in Azure Application
Gateway, Azure Front Door, and Azure Content Delivery Network (CDN) to protect your
applications, services, and APIs against application layer attacks at the edge of your
network. Set your WAF in "detection" or "prevention" mode, depending on your needs
and threat landscape. Choose a built-in ruleset, such as OWASP Top 10 vulnerabilities, and
tune it to your application.

NS-7: Simplify network security configuration
Security Principle: When managing a complex network environment, use tools to
simplify, centralize and enhance the network security management.

Azure Guidance: Use the following features to simplify the implementation and
management of the NSG and Azure Firewall rules:
• Use Microsoft Defender for Cloud Adaptive Network Hardening to recommend
NSG hardening rules that further limit ports, protocols, and source IPs based on
threat intelligence and traffic analysis results.
• Use Azure Firewall Manager to centralize the firewall policy and route management
of the virtual network. To simplify the firewall rules and network security groups
implementation, you can also use the Azure Firewall Manager ARM (Azure
Resource Manager) template.

NS-8: Detect and disable insecure services and protocols
Security Principle: Detect and disable insecure services and protocols at the OS,
application, or software package layer. Deploy compensating controls if disabling insecure
services and protocols is not possible.

Azure Guidance: Use Azure Sentinel’s built-in Insecure Protocol Workbook to discover
the use of insecure services and protocols such as SSL/TLSv1, SSHv1, SMBv1, LM/NTLMv1,
wDigest, Unsigned LDAP Binds, and weak ciphers in Kerberos. Disable insecure services
and protocols that do not meet the appropriate security standard.

Note: If disabling insecure services or protocols is not possible, use compensating controls
such as blocking access to the resources through network security group, Azure Firewall,
or Azure Web Application Firewall to reduce the attack surface.

NS-9: Connect on-premises or cloud network privately
Security Principle: Use private connections for secure communication between different
networks, such as cloud service provider datacenters and on-premises infrastructure in a
colocation environment.

Azure Guidance: For lightweight site-to-site or point-to-site connectivity, use Azure virtual
private network (VPN) to create a secure connection between your on-premises site or
end-user device and the Azure virtual network.

For enterprise-level high performance connection, use Azure ExpressRoute (or Virtual
WAN) to connect Azure datacenters and on-premises infrastructure in a co-location
environment.

When connecting two or more Azure virtual networks together, use virtual network
peering. Network traffic between peered virtual networks is private and is kept on the
Azure backbone network.

NS-10: Ensure Domain Name System (DNS) security
Security Principle: Ensure that Domain Name System (DNS) security configuration
protects against known risks:
• Use trusted authoritative and recursive DNS services across your cloud
environment to ensure that clients (such as operating systems and applications)
receive the correct resolution results.
• Separate the public and private DNS resolution so the DNS resolution process for
the private network can be isolated from the public network.
• Ensure your DNS security strategy also includes mitigations against common
attacks, such as dangling DNS, DNS amplification attacks, DNS poisoning and
spoofing, and so on.

Azure Guidance: Use Azure recursive DNS or a trusted external DNS server in your
workload recursive DNS setup, such as in VM's operating system or in the application.

Use Azure Private DNS for private DNS zone setup where the DNS resolution process does
not leave the virtual network. Use a custom DNS server to restrict DNS resolution so that
only trusted resolution is allowed for your clients.

Use Azure Defender for DNS for the advanced protection against the following security
threats to your workload or your DNS service:
• Data exfiltration from your Azure resources using DNS tunneling
• Malware communicating with command-and-control servers
• Communication with malicious domains used for phishing and crypto mining
• DNS attacks in communication with malicious DNS resolvers

You can also use Azure Defender for App Service to detect dangling DNS records if you
decommission an App Service website without removing its custom domain from your
DNS registrar.

Microsoft cloud security benchmark


Microsoft has found that using security benchmarks can help you quickly secure cloud
deployments. A comprehensive security best practice framework from cloud service
providers can give you a starting point for selecting specific security configuration settings
in your cloud environment, across multiple service providers and allow you to monitor
these configurations using a single pane of glass.

The Microsoft cloud security benchmark (MCSB) includes a collection of high-impact
security recommendations you can use to help secure your cloud services in a single or
multicloud environment. MCSB recommendations include two key aspects:
• Security controls: These recommendations are generally applicable across your
cloud workloads. Each recommendation identifies a list of stakeholders that are
typically involved in planning, approval, or implementation of the benchmark.
• Service baselines: These apply the controls to individual cloud services to provide
recommendations on that specific service’s security configuration. We currently
have service baselines available only for Azure.

Implement Microsoft cloud security benchmark
• Plan your MCSB implementation by reviewing the documentation for the
enterprise controls and service-specific baselines to plan your control framework
and how it maps to guidance like Center for Internet Security (CIS) Controls,
National Institute of Standards and Technology (NIST), and the Payment Card
Industry Data Security Standard (PCI-DSS) framework.
• Monitor your compliance with MCSB status (and other control sets) using the
Microsoft Defender for Cloud – Regulatory Compliance Dashboard for your
multicloud environment.
• Establish guardrails to automate secure configurations and enforce compliance
with MCSB (and other requirements in your organization) using features such as
Azure Blueprints, Azure Policy, or the equivalent technologies from other cloud
platforms.
Regulatory compliance dashboard
Deploy Azure DDoS Protection by
using the Azure portal

Distributed Denial of Service (DDoS)


A denial of service attack (DoS) is an attack that has the goal of preventing access to
services or systems. If the attack originates from one location, it's called a DoS. If the attack
originates from multiple networks and systems, it's called distributed denial of service
(DDoS).

Distributed Denial of Service (DDoS) attacks are some of the largest availability and
security concerns facing customers that are moving their applications to the cloud. A
DDoS attack tries to drain an API's or application's resources, making that application
unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is
publicly reachable through the internet.

DDoS implementation
Azure DDoS Protection, combined with application design best practices, provides defense
against DDoS attacks. Azure DDoS Protection provides the following service tiers:
• Network Protection: Provides additional mitigation capabilities over DDoS
infrastructure Protection that are tuned specifically to Azure Virtual Network
resources. Azure DDoS Protection is simple to enable, and requires no application
changes. Protection policies are tuned through dedicated traffic monitoring and
machine learning algorithms. Policies are applied to public IP addresses associated
to resources deployed in virtual networks, such as Azure Load Balancer, Azure
Application Gateway, and Azure Service Fabric instances, but this protection
doesn't apply to App Service Environments. Real-time telemetry is available
through Azure Monitor views during an attack, and for history. Rich attack
mitigation analytics are available via diagnostic settings. Application layer
protection can be added through the Azure Application Gateway Web Application
Firewall or by installing a third party firewall from Azure Marketplace. Protection is
provided for IPv4 and IPv6 Azure public IP addresses.
• IP Protection: DDoS IP Protection is a pay-per-protected IP model. DDoS IP
Protection contains the same core engineering features as DDoS Network
Protection, but will differ in value-added services like DDoS rapid response support,
cost protection, and discounts on WAF.

DDoS Protection protects resources in a virtual network including public IP addresses
associated with virtual machines, load balancers, and application gateways. When coupled
with the Application Gateway web application firewall, or a third-party web application
firewall deployed in a virtual network with a public IP, DDoS Protection can provide full
layer 3 to layer 7 mitigation capability.

Every property in Azure is protected by Azure's DDoS infrastructure (Basic) Protection at
no additional cost. Azure DDoS Protection is a paid service, designed for services that
are deployed in a virtual network.

Types of DDoS attacks


DDoS Protection can mitigate the following types of attacks:

Volumetric attacks - These attacks flood the network layer with a substantial amount of
seemingly legitimate traffic. They include UDP floods, amplification floods, and other
spoofed-packet floods. DDoS Protection mitigates these potential multi-gigabyte attacks
by absorbing and scrubbing them, with Azure's global network scale, automatically.

Protocol attacks - These attacks render a target inaccessible, by exploiting a weakness in
the layer 3 and layer 4 protocol stack. They include SYN flood attacks, reflection attacks,
and other protocol attacks. DDoS Protection mitigates these attacks, differentiating
between malicious and legitimate traffic, by interacting with the client, and blocking
malicious traffic.

Resource (application) layer attacks - These attacks target web application packets, to
disrupt the transmission of data between hosts. They include HTTP protocol violations,
SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application
Firewall, such as the Azure Application Gateway web application firewall, and DDoS
Protection to provide defense against these attacks. There are also third-party web
application firewall offerings available in the Azure Marketplace.

Azure DDoS protection features


Some of Azure DDoS protection features include:
• Native platform integration: Natively integrated into Azure and configured
through portal.
• Turnkey protection: Simplified configuration protecting all resources immediately.
• Always-on traffic monitoring: Your application traffic patterns are monitored 24
hours a day, 7 days a week, looking for indicators of DDoS attacks.
• Adaptive tuning: Profiling and adjusting to your service's traffic.
• Attack analytics: Get detailed reports in five-minute increments during an attack,
and a complete summary after the attack ends.
• Attack metrics and alerts: Summarized metrics from each attack are accessible
through Azure Monitor. Alerts can be configured at the start and stop of an attack,
and over the attack's duration, using built-in attack metrics.
• Multi-layered protection: When deployed with a web application firewall (WAF),
DDoS Protection protects both at the network layer (Layer 3 and 4, offered by Azure
DDoS Protection) and at the application layer (Layer 7, offered by a WAF).

Multi-layered protection
Specific to resource attacks at the application layer, you should configure a web
application firewall (WAF) to help secure web applications. A WAF inspects inbound web
traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure
provides WAF as a feature of Application Gateway for centralized protection of your web
applications from common exploits and vulnerabilities. There are other WAF offerings
available from Azure partners that might be more suitable for your needs via the Azure
Marketplace.
Even web application firewalls are susceptible to volumetric and state exhaustion attacks.
Therefore, it's strongly recommended to enable DDoS Protection on the WAF virtual network
to help protect against volumetric and protocol attacks.

Deploying a DDoS protection plan


The key stages of deploying a DDoS Protection plan are as follows:
• Create a resource group
• Create a DDoS Protection Plan
• Enable DDoS protection on a new or existing virtual network or IP address
• Configure DDoS telemetry
• Configure DDoS diagnostic logs
• Configure DDoS alerts
• Run a test DDoS attack and monitor the results.
Deploy Network Security Groups by
using the Azure portal
A Network Security Group (NSG) in Azure allows you to filter network traffic to and from
Azure resources in an Azure virtual network. A network security group contains security
rules that allow or deny inbound network traffic to, or outbound network traffic from,
several types of Azure resources. For each rule, you can specify source and destination,
port, and protocol.

NSG security rules


A network security group contains zero, or as many rules as desired, within Azure
subscription limits. Each rule specifies the following properties:
• Name - Must be a unique name within the network security group.
• Priority - Can be any number between 100 and 4096. Rules are processed in
priority order, with lower numbers processed before higher numbers, because
lower numbers have higher priority. Once traffic matches a rule, processing stops.
As a result, any rules that exist with lower priorities (higher numbers) that have the
same attributes as rules with higher priorities are not processed.
• Source or destination - Can be set to Any, or an individual IP address, or classless
inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or
application security group.
• Protocol - Can be TCP, UDP, ICMP, ESP, AH, or Any.
• Direction - Can be configured to apply to inbound, or outbound traffic.
• Port range - Can be specified either as an individual port or range of ports. For
example, you could specify 80 or 10000-10005. Specifying ranges enables you to
create fewer security rules.
• Action - Can be set to Allow or Deny.
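
The sketch below models how these properties interact during evaluation: rules are processed in priority order and the first match wins. It is a conceptual model with simplified matching, not how the Azure platform implements NSGs.

```python
# Conceptual model of NSG rule evaluation: sort by priority (lower number =
# higher priority); the first matching rule decides, and processing stops.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class SecurityRule:
    name: str
    priority: int              # 100-4096
    direction: str             # "Inbound" or "Outbound"
    protocol: str              # "TCP", "UDP", "ICMP", "Any", ...
    port: Union[int, str]      # individual port, or "Any"
    action: str                # "Allow" or "Deny"

def evaluate(rules: List[SecurityRule], direction: str, protocol: str, port: int) -> str:
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.direction != direction:
            continue
        if rule.protocol not in ("Any", protocol):
            continue
        if rule.port not in ("Any", port):
            continue
        return f"{rule.action} (matched {rule.name})"
    return "Deny (no rule matched)"

rules = [
    SecurityRule("AllowWeb", 100, "Inbound", "TCP", 443, "Allow"),
    SecurityRule("DenyAllInbound", 4096, "Inbound", "Any", "Any", "Deny"),
]
print(evaluate(rules, "Inbound", "TCP", 443))   # Allow (matched AllowWeb)
print(evaluate(rules, "Inbound", "TCP", 3389))  # Deny (matched DenyAllInbound)
```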
Application Security Groups
An Application Security Group (ASG) enables you to configure network security as a
natural extension of an application's structure, allowing you to group virtual machines
and define network security policies based on those groups. You can reuse your security
policy at scale without manual maintenance of explicit IP addresses. The platform handles
the complexity of explicit IP addresses and multiple rule sets, allowing you to focus on
your business logic.

To minimize the number of security rules you need, and the need to change the rules,
plan out the application security groups you need and create rules using service tags or
application security groups, rather than individual IP addresses, or ranges of IP addresses,
whenever possible.

Filter network traffic with an NSG using the Azure portal
You can use a network security group to filter network traffic inbound and outbound from
a virtual network subnet. Network security groups contain security rules that filter network
traffic by IP address, port, and protocol. Security rules are applied to resources deployed
in a subnet.

The key stages to filter network traffic with an NSG using the Azure portal are:

1. Create a resource group - this can either be done beforehand or as you create
the virtual network in the next stage. All other resources that you create must be
in the same region specified here.
2. Create a virtual network - this must be deployed in the same resource group you
created above.
3. Create application security groups - the application security groups you create
here will enable you to group together servers with similar functions, such as web
servers or management servers. You would create two application security groups
here; one for web servers and one for management servers (for example,
MyAsgWebServers and MyAsgMgmtServers)
4. Create a network security group - the network security group will secure network
traffic in your virtual network. This NSG will be associated with a subnet in the next
stage.
5. Associate a network security group with a subnet - this is where you'll associate
the network security group you create above, with the subnet of the virtual network
you created in stage 2 above.
6. Create security rules - this is where you create your inbound security rules. Here
you would create a security rule to allow ports 80 and 443 to the application
security group for your web servers (for example, MyAsgWebServers). Then you
would create another security rule to allow RDP traffic on port 3389 to the
application security group for your management servers (for example,
MyAsgMgmtServers). These rules will control from where you can access your VM
remotely and your IIS Webserver.
7. Create virtual machines - this is where you create the web server (for example,
MyVMWeb) and management server (for example, MyVMMgmt) virtual machines
which will be associated with their respective application security group in the next
stage.
8. Associate NICs to an ASG - this is where you associate the network interface card
(NIC) attached to each virtual machine with the relevant application security group
that you created in stage 3 above.
9. Test traffic filters - the final stage is where you test that your traffic filtering is
working as expected.
o To test this, you would attempt to connect to the management server virtual
machine (for example, MyVMMgmt) by using an RDP connection, thereby
verifying that you can connect because port 3389 is allowing inbound
connections from the Internet to the management servers application
security group (for example, MyAsgMgmtServers).
o While connected to the RDP session on the management server (for
example, MyVMMgmt), you would then test an RDP connection from the
management server virtual machine (for example, MyVMMgmt) to the web
server virtual machine (for example, MyVMWeb), which again should
succeed because virtual machines in the same network can communicate
with each other over any port by default.
o However, you'll not be able to create an RDP connection to the web server
virtual machine (for example, MyVMWeb) from the internet, because the
security rule for the web servers application security group (for example,
MyAsgWebServers) prevents connections to port 3389 inbound from the
Internet. Inbound traffic from the Internet is denied to all resources by
default.
o While connected to the RDP session on the web server (for example,
MyVMWeb), you could then install IIS on the web server, then disconnect
from the web server virtual machine RDP session, and disconnect from the
management server virtual machine RDP session. In the Azure portal, you
would then determine the Public IP address of the web server virtual
machine (for example, MyVMWeb), and confirm you can access the web
server virtual machine from the Internet by opening a web browser on your
computer and navigating to http://<the web server's public IP address> (for example, http://23.96.39.113). You
should see the standard IIS welcome screen, because port 80 is allowed
inbound access from the Internet to the web servers application security
group (for example, MyAsgWebServers). The network interface attached to
the web server virtual machine (for example, MyVMWeb) is associated with
the web servers application security group (for example, MyAsgWebServers)
and therefore allows the connection.
Design and implement Azure Firewall
Azure Firewall is a managed, cloud-based network security service that protects your
Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high
availability and unrestricted cloud scalability.

Azure Firewall features


Azure Firewall includes the following features:
• Built-in high availability - High availability is built in, so no extra load balancers
are required and there's nothing you need to configure.
• Unrestricted cloud scalability - Azure Firewall can scale out as much as you need
to accommodate changing network traffic flows, so you do not need to budget for
your peak traffic.
• Application FQDN filtering rules - You can limit outbound HTTP/S traffic or Azure
SQL traffic to a specified list of fully qualified domain names (FQDN) including wild
cards. This feature does not require TLS termination.
• Network traffic filtering rules - You can centrally create allow or deny network
filtering rules by source and destination IP address, port, and protocol. Azure
Firewall is fully stateful, so it can distinguish legitimate packets for different types
of connections. Rules are enforced and logged across multiple subscriptions and
virtual networks.
• FQDN tags - These tags make it easy for you to allow well-known Azure service
network traffic through your firewall. For example, say you want to allow Windows
Update network traffic through your firewall. You create an application rule and
include the Windows Update tag. Now network traffic from Windows Update can
flow through your firewall.
• Service tags - A service tag represents a group of IP address prefixes to help
minimize complexity for security rule creation. You cannot create your own service
tag, nor specify which IP addresses are included within a tag. Microsoft manages
the address prefixes encompassed by the service tag, and automatically updates
the service tag as addresses change.
• Threat intelligence - Threat intelligence-based filtering can be enabled for your
firewall to alert and deny traffic from/to known malicious IP addresses and domains.
The IP addresses and domains are sourced from the Microsoft Threat Intelligence
feed.
• Outbound SNAT support - All outbound virtual network traffic IP addresses are
translated to the Azure Firewall public IP (Source Network Address Translation
(SNAT)). You can identify and allow traffic originating from your virtual network to
remote Internet destinations.
• Inbound DNAT support - Inbound Internet network traffic to your firewall public
IP address is translated (Destination Network Address Translation) and filtered to
the private IP addresses on your virtual networks.
• Multiple public IP addresses - You can associate multiple public IP addresses (up
to 250) with your firewall, to enable specific DNAT and SNAT scenarios.
• Azure Monitor logging - All events are integrated with Azure Monitor, allowing
you to archive logs to a storage account, stream events to your Event Hubs, or send
them to Azure Monitor logs.
• Forced tunneling - You can configure Azure Firewall to route all Internet-bound
traffic to a designated next hop instead of going directly to the Internet. For
example, you may have an on-premises edge firewall or other network virtual
appliance (NVA) to process network traffic before it is passed to the Internet.
• Web categories (preview) - Web categories let administrators allow or deny user
access to web site categories such as gambling websites, social media websites,
and others. Web categories are included in Azure Firewall Standard, but it is more
fine-tuned in Azure Firewall Premium Preview. As opposed to the Web categories
capability in the Standard SKU that matches the category based on an FQDN, the
Premium SKU matches the category according to the entire URL for both HTTP and
HTTPS traffic.
• Certifications - Azure Firewall is Payment Card Industry (PCI), Service Organization
Controls (SOC), International Organization for Standardization (ISO), and ICSA Labs
compliant.

Rule processing in Azure Firewall


In the Azure Firewall, you can configure NAT rules, network rules, and application rules,
and this can be done either by using classic rules or Firewall Policy. An Azure Firewall
denies all traffic by default, until rules are manually configured to allow traffic.

Rule processing with classic rules


With classic rules, rule collections are processed according to the rule type in priority order,
lower numbers to higher numbers from 100 to 65,000. A rule collection name can have
only letters, numbers, underscores, periods, or hyphens. It must also begin with either a
letter or a number, and it must end with a letter, a number, or an underscore. The
maximum name length is 80 characters. It is best practice to initially space your rule
collection priority numbers in increments of 100 (i.e., 100, 200, 300, and so on) so that you
give yourself space to add more rule collections when needed.
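
The small helper below encodes those documented naming constraints and the suggested priority spacing; it is a convenience sketch, not part of any Azure SDK.

```python
# Validate a classic rule collection name against the documented constraints and
# suggest priority values spaced in increments of 100.
import re
from typing import List

# Letters, numbers, underscores, periods, or hyphens; must start with a letter
# or number, end with a letter, number, or underscore, and be at most 80 chars.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.-]{0,78}[A-Za-z0-9_]$")

def is_valid_collection_name(name: str) -> bool:
    return bool(NAME_PATTERN.match(name))

def suggest_priorities(count: int, start: int = 100, step: int = 100) -> List[int]:
    return [start + i * step for i in range(count)]

print(is_valid_collection_name("Allow-Web_Rules"))  # True
print(suggest_priorities(3))                        # [100, 200, 300]
```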

Rule processing with Firewall Policy


With Firewall Policy, rules are organized inside Rule Collections which are contained in
Rule Collection Groups. Rule Collections can be of the following types:
• DNAT (Destination Network Address Translation)
• Network
• Application

You can define multiple Rule Collection types within a single Rule Collection Group, and
you can define zero or more Rules in a Rule Collection, but the rules within a Rule
Collection must be of the same type (i.e., DNAT, Network, or Application).

With Firewall Policy, rules are processed based on Rule Collection Group Priority and Rule
Collection priority. Priority is any number between 100 (highest priority) and 65,000
(lowest priority). Highest priority Rule Collection Groups are processed first, and inside a
Rule Collection Group, Rule Collections with the highest priority (i.e., the lowest number)
are processed first.

In the case of a Firewall Policy being inherited from a parent policy, Rule Collection Groups
in the parent policy always take precedence regardless of the priority of the child policy.

Application rules are always processed after network rules, which are themselves always
processed after DNAT rules regardless of Rule Collection Group or Rule Collection priority
and policy inheritance.
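
The ordering rules above can be summarized in a short sketch: sort by rule type first (DNAT, then network, then application), then by whether the collection comes from the parent policy, then by group and collection priority. The dictionary shape here is invented for illustration and is not an Azure API schema.

```python
# Conceptual ordering of Firewall Policy rule collections.
from typing import Dict, List

TYPE_ORDER = {"DNAT": 0, "Network": 1, "Application": 2}

def processing_order(rule_collections: List[Dict]) -> List[str]:
    ordered = sorted(
        rule_collections,
        key=lambda rc: (
            TYPE_ORDER[rc["type"]],                    # DNAT -> Network -> Application
            0 if rc["from_parent_policy"] else 1,      # parent policy takes precedence
            rc["group_priority"],                      # lower number = higher priority
            rc["collection_priority"],
        ),
    )
    return [rc["name"] for rc in ordered]

collections = [
    {"name": "app-allow", "type": "Application", "from_parent_policy": False, "group_priority": 100, "collection_priority": 100},
    {"name": "net-core",  "type": "Network",     "from_parent_policy": True,  "group_priority": 200, "collection_priority": 100},
    {"name": "dnat-web",  "type": "DNAT",        "from_parent_policy": False, "group_priority": 300, "collection_priority": 100},
]
print(processing_order(collections))  # ['dnat-web', 'net-core', 'app-allow']
```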

Outbound connectivity using network rules and application rules
If you configure both network rules and application rules, then network rules are applied
in priority order before application rules. Additionally, all rules are terminating, therefore,
if a match is found in a network rule, no other rules are processed thereafter.

If there is no network rule match, and if the protocol is either HTTP, HTTPS, or MSSQL,
the packet is then evaluated by the application rules in priority order. For HTTP, Azure
Firewall looks for an application rule match according to the Host Header, whereas for
HTTPS, Azure Firewall looks for an application rule match according to Server Name
Indication (SNI) only.

Inbound connectivity using DNAT rules and network rules
Inbound Internet connectivity can be enabled by configuring DNAT. As mentioned
previously, DNAT rules are applied in priority order before network rules. If a match is found, an
implicit corresponding network rule to allow the translated traffic is added. For security
reasons, the recommended approach is to add a specific Internet source to allow DNAT
access to the network and avoid using wildcards.

Application rules aren't applied for inbound connections. So, if you want to filter inbound
HTTP/S traffic, you should use Web Application Firewall (WAF).

For enhanced security, if you modify a rule to deny access to traffic that had previously
been allowed, any relevant existing sessions are dropped.

Deploying and configuring Azure Firewall
Be aware of the following when deploying Azure Firewall:
• It can centrally create, enforce, and log application and network connectivity
policies across subscriptions and virtual networks.
• It uses a static, public IP address for your virtual network resources. This allows
outside firewalls to identify traffic originating from your virtual network.
• It is fully integrated with Azure Monitor for logging and analytics.
• When creating firewall rules, it is best to use the FQDN tags.

The key stages of deploying and configuring Azure Firewall are as follows:
• Create a resource group
• Create a virtual network and subnets
• Create a workload VM in a subnet
• Deploy the firewall and policy to the virtual network
• Create a default outbound route
• Configure an application rule
• Configure a network rule
• Configure a Destination NAT (DNAT) rule
• Test the firewall
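
For the "Create a default outbound route" stage, such a route is typically expressed with the ARM route properties shown below; the route name and the firewall's private IP address are placeholder values for illustration.

```python
# Illustrative default route (UDR) that forces Internet-bound traffic from a
# workload subnet through the Azure Firewall's private IP address.
default_route = {
    "name": "default-to-firewall",                 # placeholder name
    "properties": {
        "addressPrefix": "0.0.0.0/0",              # all Internet-bound traffic
        "nextHopType": "VirtualAppliance",         # next hop is a network virtual appliance
        "nextHopIpAddress": "10.0.1.4",            # example private IP of the firewall
    },
}
print(default_route["properties"]["nextHopIpAddress"])
```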

Deploying Azure Firewall with Availability Zones
One of the major features of Azure Firewall is Availability Zones.

When deploying Azure Firewall, you can configure it to span multiple Availability Zones
for increased availability. When you configure Azure Firewall in this way, your availability
increases to 99.99% uptime. The 99.99% uptime SLA is offered when two or more
Availability Zones are selected.

You can also associate Azure Firewall to a specific zone just for proximity reasons, using
the service standard 99.95% SLA.
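
To put those SLA percentages in perspective, a quick calculation of the maximum expected downtime over a 30-day month is shown below.

```python
# Maximum expected downtime per 30-day month for a given uptime SLA.
def max_monthly_downtime_minutes(uptime_percent: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

print(round(max_monthly_downtime_minutes(99.99), 1))  # 4.3 minutes (two or more zones)
print(round(max_monthly_downtime_minutes(99.95), 1))  # 21.6 minutes (single-zone standard SLA)
```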

For more information, see the Azure Firewall Service Level Agreement (SLA).

There is no additional cost for a firewall deployed in an Availability Zone. However, there
are added costs for inbound and outbound data transfers associated with Availability
Zones.

For more information, see Bandwidth pricing details.

Azure Firewall Availability Zones are only available in regions that support Availability
Zones.

Availability Zones can only be configured during firewall deployment. You cannot
configure an existing firewall to include Availability Zones.

Methods for deploying an Azure Firewall with Availability Zones
You can use several methods for deploying your Azure Firewall using Availability Zones.
• Azure portal
• Azure PowerShell - see Deploy an Azure Firewall with Availability Zones using Azure
PowerShell
• Azure Resource Manager template - see Quickstart: Deploy Azure Firewall with
Availability Zones - Azure Resource Manager template
Secure your networks with Azure
Firewall Manager

Working with Azure Firewall Manager


Azure Firewall Manager is a security management service that provides central security
policy and route management for cloud-based security perimeters.

Azure Firewall Manager simplifies the process of centrally defining network and
application-level rules for traffic filtering across multiple Azure Firewall instances. You can
span different Azure regions and subscriptions in hub and spoke architectures for traffic
governance and protection.

If you manage multiple firewalls, you know that continuously changing firewall rules make
it difficult to keep them in sync. Central IT teams need a way to define base firewall policies
and enforce them across multiple business units. At the same time, DevOps teams want
to create their own local derived firewall policies that are implemented across
organizations. Azure Firewall Manager can help solve these problems.

Firewall Manager can provide security management for two network architecture types:
• Secured Virtual Hub - This is the name given to any Azure Virtual WAN Hub when
security and routing policies have been associated with it. An Azure Virtual WAN
Hub is a Microsoft-managed resource that lets you easily create hub and spoke
architectures.
• Hub Virtual Network - This is the name given to any standard Azure virtual
network when security policies are associated with it. A standard Azure virtual
network is a resource that you create and manage yourself. At this time, only Azure
Firewall Policy is supported. You can peer spoke virtual networks that contain your
workload servers and services. You can also manage firewalls in standalone virtual
networks that are not peered to any spoke.

Azure Firewall Manager features


The key features offered by Azure Firewall Manager are:
• Central Azure Firewall deployment and configuration - You can centrally deploy
and configure multiple Azure Firewall instances that span different Azure regions
and subscriptions.
• Hierarchical policies (global and local) - You can use Azure Firewall Manager to
centrally manage Azure Firewall policies across multiple secured virtual hubs. Your
central IT teams can author global firewall policies to enforce organization wide
firewall policy across teams. Locally authored firewall policies allow a DevOps self-
service model for better agility.
• Integrated with third-party security-as-a-service for advanced security - In
addition to Azure Firewall, you can integrate third-party security-as-a-service
providers to provide additional network protection for your VNet and branch
Internet connections. This feature is available only with secured virtual hub
deployments (see above).
• Centralized route management - You can easily route traffic to your secured hub
for filtering and logging without the need to manually set up User Defined Routes
(UDR) on spoke virtual networks. This feature is available only with secured virtual
hub deployments (see above).
• Region availability - You can use Azure Firewall Policies across regions. For
example, you can create a policy in the West US region, and still use it in the East
US region.

Azure Firewall Manager policies


A Firewall policy is an Azure resource that contains NAT, network, and application rule
collections and Threat Intelligence settings. It is a global resource that can be used across
multiple Azure Firewall instances in Secured Virtual Hubs and Hub Virtual Networks. New
policies can be created from scratch or inherited from existing policies. Inheritance allows
DevOps to create local firewall policies on top of organization mandated base policy.
Policies work across regions and subscriptions.

You can create Firewall Policy and associations with Azure Firewall Manager. However, you
can also create and manage a policy using REST API, templates, Azure PowerShell, and
the Azure CLI. Once you create a policy, you can associate it with a firewall in a virtual
WAN hub making it a Secured Virtual Hub and/or associate it with a firewall in a standard
Azure virtual network making it a Hub Virtual Network.
Deploying Azure Firewall Manager for Hub
Virtual Networks
The recommended process to deploy Azure Firewall Manager for Hub Virtual Networks is
as follows:
1. Create a firewall policy. You can either create a new policy, derive a base policy
and customize a local policy, or import rules from an existing Azure Firewall. Ensure
you remove NAT rules from policies that should be applied across multiple firewalls.
2. Create your hub and spoke architecture. Do this either by creating a Hub Virtual
Network using Azure Firewall Manager and peering spoke virtual networks to it
using virtual network peering, or by creating a virtual network and adding virtual
network connections and peering spoke virtual networks to it using virtual network
peering.
3. Select security providers and associate firewall policy. (At time of writing, only
Azure Firewall is a supported provider). This can be done while creating a Hub
Virtual Network, or by converting an existing virtual network to a Hub Virtual
Network. It is also possible to convert multiple virtual networks.
4. Configure User Defined Routes to route traffic to your Hub Virtual Network
firewall.

Deploying Azure Firewall Manager for Secured Virtual Hubs
The recommended process to deploy Azure Firewall Manager for Secured Virtual Hubs is
as follows:
1. Create your hub and spoke architecture. Do this either by creating a Secured
Virtual Hub using Azure Firewall Manager and adding virtual network connections,
or by creating a Virtual WAN Hub and adding virtual network connections.
2. Select security providers. This can be done while creating a Secured Virtual Hub,
or by converting an existing Virtual WAN Hub to a Secured Virtual Hub.
3. Create a firewall policy and associate it with your hub. This is applicable only if
you are using Azure Firewall. Third-party security-as-a-service policies are
configured via the partner's management experience.
4. Configure route settings to route traffic to your Secured Virtual Hub. You can
easily route traffic to your secured hub for filtering and logging without User
Defined Routes (UDR) on spoke Virtual Networks by using the Secured Virtual Hub
Route Setting page.

You cannot have more than one hub per virtual WAN per region; however, you can add
multiple virtual WANs in the region to achieve this.

You cannot have overlapping IP spaces for hubs in a vWAN.

Your hub VNet connections must be in the same region as the hub.
Implement a Web Application
Firewall on Azure Front Door
Web Application Firewall (WAF) provides centralized protection of your web applications
from common exploits and vulnerabilities. Web applications are increasingly targeted by
malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-
site scripting are among the most common attacks.

Preventing such attacks in application code is challenging. It can require rigorous
maintenance, patching, and monitoring at multiple layers of the application topology. A
centralized web application firewall helps make security management much simpler. A
WAF also gives application administrators better assurance of protection against threats
and intrusions.

A WAF solution can react to a security threat faster by centrally patching a known
vulnerability, instead of securing each individual web application.
Web Application Firewall policy modes
When you create a Web Application Firewall (WAF) policy, by default the WAF policy is in
Detection mode. In Detection mode, WAF does not block any requests; instead, requests
matching the WAF rules are logged in the WAF logs. To see WAF in action, you can change
the mode settings from Detection to Prevention. In Prevention mode, requests that match
rules defined in the Default Rule Set (DRS) are blocked and logged in the WAF logs.

Web Application Firewall Default Rule Set rule groups and rules
Azure Front Door web application firewall (WAF) protects web applications from common
vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy
protection against a common set of security threats. Since such rule sets are managed by
Azure, the rules are updated as needed to protect against new attack signatures.

Managed rules
Azure-managed Default Rule Set includes rules against the following threat categories:
• Cross-site scripting
• Java attacks
• Local file inclusion
• PHP injection attacks
• Remote command execution
• Remote file inclusion
• Session fixation
• SQL injection protection
• Protocol attackers

Azure-managed Default Rule Set is enabled by default. The current default version is
DefaultRuleSet_1.0. From WAF Managed rules>Assign, the recently available ruleset
Microsoft_DefaultRuleSet_1.1 is available in the drop-down list.

To disable an individual rule, select the checkbox in front of the rule number, and select
Disable at the top of the page. To change action types for individual rules within the rule
set, select the checkbox in front of the rule number, and then select Change action at the
top of the page.

Custom rules
Azure WAF with Front Door allows you to control access to your web applications based
on the conditions you define. A custom WAF rule consists of a priority number, rule type,
match conditions, and an action. There are two types of custom rules: match rules and
rate limit rules. A match rule controls access based on a set of matching conditions while
a rate limit rule controls access based on matching conditions and the rates of incoming
requests. You may disable a custom rule to prevent it from being evaluated, but still keep
the configuration.
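
To illustrate the anatomy of custom rules, the dictionaries below sketch one match rule and one rate limit rule. The property names are meant to mirror the concepts in the text (priority, rule type, match conditions, action) and should be treated as an approximation rather than the exact WAF policy schema.

```python
# Illustrative custom WAF rules expressed as plain dictionaries.
block_non_us_rule = {
    "name": "BlockOutsideUS",
    "priority": 10,
    "ruleType": "MatchRule",                     # match rule: decision based on conditions only
    "matchConditions": [
        {"matchVariable": "RemoteAddr", "operator": "GeoMatch",
         "negateCondition": True, "matchValue": ["US"]},
    ],
    "action": "Block",
}

rate_limit_rule = {
    "name": "RateLimitApi",
    "priority": 20,
    "ruleType": "RateLimitRule",                 # rate limit rule: conditions plus request rate
    "rateLimitThreshold": 1000,                  # requests per rate-limit window
    "matchConditions": [
        {"matchVariable": "RequestUri", "operator": "Contains", "matchValue": ["/api/"]},
    ],
    "action": "Block",
}
print(block_non_us_rule["name"], rate_limit_rule["name"])
```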
Design and implement private
access to Azure Services

Explain virtual network service endpoints
You've migrated your existing app and database servers for your ERP system to Azure as
VMs. Now, to reduce your costs and administrative requirements, you're considering
using some Azure platform as a service (PaaS) services. Storage services will hold certain
large file assets, such as engineering diagrams. These engineering diagrams have
proprietary information, and must remain secure from unauthorized access. These files
must only be accessible from specific systems.

In this unit, you'll look at how to use virtual network service endpoints for securing
supported Azure services.

What is a virtual network service endpoint?
Use virtual network service endpoints to extend your private address space in Azure by
providing a direct connection to your Azure services. Service endpoints let you secure
your Azure resources to only your virtual network. Service traffic will remain on the Azure
backbone, and doesn't go out to the internet.
By default, Azure services are all designed for direct internet access. All Azure resources
have public IP addresses, including PaaS services such as Azure SQL Database and Azure
Storage. Because these services are exposed to the internet, anyone can potentially access
your Azure services.

Service endpoints can connect certain PaaS services directly to your private address space
in Azure, so they act like they’re on the same virtual network. Use your private address
space to access the PaaS services directly. Adding service endpoints doesn't remove the
public endpoint. It simply provides a redirection of traffic.

Preparing to Implement Service Endpoints
To enable a Service Endpoint, you must do the following two things:
• Turn off public access to the service.
• Add the Service Endpoint to a virtual network.

When you enable a Service Endpoint, you restrict the flow of traffic, and enable your Azure
VMs to access the service directly from your private address space. Devices cannot access
the service from a public network. On a deployed VM vNIC, if you look at Effective routes,
you'll notice the Service Endpoint as the Next Hop Type.
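
The effective-route entries might look roughly like the sketch below once a Storage service endpoint is enabled on the subnet; the address prefixes are example values, and the exact entries you see will differ.

```python
# Illustrative "effective routes" entries for a VM NIC after enabling a
# Storage service endpoint on its subnet (example prefixes only).
effective_routes = [
    {"source": "Default", "addressPrefix": "10.1.0.0/16",  "nextHopType": "VnetLocal"},
    {"source": "Default", "addressPrefix": "0.0.0.0/0",    "nextHopType": "Internet"},
    # Added when the service endpoint is enabled: Storage prefixes now go
    # straight to the service over the Azure backbone.
    {"source": "Default", "addressPrefix": "20.38.0.0/16", "nextHopType": "VirtualNetworkServiceEndpoint"},
]
for route in effective_routes:
    print(route["addressPrefix"], "->", route["nextHopType"])
```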
Create Service Endpoints
As the network engineer, you're planning to move sensitive engineering diagram files into
Azure Storage. The files must only be accessible from computers inside the corporate
network. You want to create a virtual network Service Endpoint for Azure Storage to secure
the connectivity to your storage accounts.

In the service endpoint tutorial you will learn how to:


• Enable a service endpoint on a subnet
• Use network rules to restrict access to Azure Storage
• Create a virtual network service endpoint for Azure Storage
• Verify that access is denied appropriately

Configure service tags


A service tag represents a group of IP address prefixes from a given Azure service.
Microsoft manages the address prefixes encompassed by the service tag and
automatically updates the service tag as addresses change, minimizing the complexity of
frequent updates to network security rules.

You can use service tags to define network access controls on network security groups or
Azure Firewall. Use service tags in place of specific IP addresses when you create security
rules. By specifying the service tag name, such as API Management, in the appropriate
source or destination field of a rule, you can allow or deny the traffic for the corresponding
service.

As of March 2021, you can also use Service Tags in place of explicit IP ranges in user
defined routes. This feature is currently in Public Preview.

You can use service tags to achieve network isolation and protect your Azure resources
from the general Internet while accessing Azure services that have public endpoints.
Create inbound/outbound network security group rules to deny traffic to/from Internet
and allow traffic to/from AzureCloud or other available service tags of specific Azure
services.
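
The rule pair below sketches that pattern using ARM-style security rule properties: allow outbound traffic to the Storage service tag and deny other Internet-bound traffic with a lower-priority rule. Names, priorities, and ports are example values.

```python
# Illustrative NSG rules that use service tags instead of explicit IP ranges.
allow_storage_outbound = {
    "name": "Allow-Storage-Outbound",
    "properties": {
        "priority": 100,
        "direction": "Outbound",
        "access": "Allow",
        "protocol": "*",
        "sourceAddressPrefix": "VirtualNetwork",   # service tag for the VNet address space
        "sourcePortRange": "*",
        "destinationAddressPrefix": "Storage",     # service tag; "Storage.WestUS" would scope it to one region
        "destinationPortRange": "443",
    },
}

deny_internet_outbound = {
    "name": "Deny-Internet-Outbound",
    "properties": {
        "priority": 200,
        "direction": "Outbound",
        "access": "Deny",
        "protocol": "*",
        "sourceAddressPrefix": "*",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "Internet",    # service tag for all public Internet ranges
        "destinationPortRange": "*",
    },
}
print(allow_storage_outbound["name"], deny_internet_outbound["name"])
```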

Available service tags


The following table includes all the service tags available for use in network security group
rules.
The columns indicate whether the tag:
• Is suitable for rules that cover inbound or outbound traffic.
• Supports regional scope.
• Is usable in Azure Firewall rules.

By default, service tags reflect the ranges for the entire cloud. Some service tags also allow
more granular control by restricting the corresponding IP ranges to a specified region.
For example, the service tag Storage represents Azure Storage for the entire cloud, but
Storage.WestUS narrows the range to only the storage IP address ranges from the
WestUS region. The following table indicates whether each service tag supports such
regional scope.

Service tags of Azure services denote the address prefixes from the specific cloud being
used. For example, the underlying IP ranges that correspond to the SQL tag value on the
Azure Public cloud will be different from the underlying ranges on the Azure China cloud.

If you implement a virtual network Service Endpoint for a service, such as Azure Storage
or Azure SQL Database, Azure adds a route to a virtual network subnet for the service.
The address prefixes in the route are the same address prefixes, or CIDR ranges, as those
of the corresponding service tag.
Define Private Link Service and
private endpoint

What is Azure Private Link?


Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage
and SQL Database) and Azure hosted customer-owned/partner services over a Private
Endpoint in your virtual network.

Before you learn about Azure Private Link and its features and benefits, let's examine the
problem that Private Link is designed to solve.

Suppose Contoso has an Azure virtual network, and you want to connect to a PaaS resource such
as an Azure SQL database. When you create such resources, you normally specify a public
endpoint as the connectivity method.

Having a public endpoint means that the resource is assigned a public IP address. So,
even though both your virtual network and the Azure SQL database are located within
the Azure cloud, the connection between them takes place over the internet.

The concern here is that your Azure SQL database is exposed to the internet via its public
IP address. That exposure creates multiple security risks. The same security risks are
present when an Azure resource is accessed via a public IP address from the following
locations:
• A peered Azure virtual network
• An on-premises network that connects to Azure using ExpressRoute and Microsoft
peering
• A customer's Azure virtual network that connects to an Azure service offered by
your company
Private Link is designed to eliminate these security risks by removing the public part of
the connection.

Private Link provides secure access to Azure services. Private Link achieves that security
by replacing a resource's public endpoint with a private network interface. There are three
key points to consider with this new architecture:
• The Azure resource becomes, in a sense, a part of your virtual network.
• The connection to the resource now uses the Microsoft Azure backbone network
instead of the public internet.
• You can configure the Azure resource to no longer expose its public IP address,
which eliminates that potential security risk.

What is Azure Private Endpoint?


Private Endpoint is the key technology behind Private Link. Private Endpoint is a network
interface that enables a private and secure connection between your virtual network and
an Azure service. In other words, Private Endpoint is the network interface that replaces
the resource's public endpoint.

Private Endpoint uses a private IP address from the VNet to bring the service into the VNet.
How is Azure Private Endpoint different
from a service endpoint?
Private Endpoints grant network access to specific resources behind a given service,
providing granular segmentation. Traffic can reach the service resource from on-premises
networks without using public endpoints.

A service endpoint remains a publicly routable IP address. A private endpoint is a private
IP in the address space of the virtual network where the private endpoint is configured.

What is Azure Private Link Service?


Private Link gives you private access from your Azure virtual network to PaaS services and
Microsoft Partner services in Azure. However, what if your company has created its own
Azure services that are consumed by your company's customers? Is it possible to offer
those customers a private connection to your company's services?

Yes, by using Azure Private Link Service. This service lets you offer Private Link connections
to your custom Azure services. Consumers of your custom services can then access those
services privately—that is, without using the internet—from their own Azure virtual
networks.

Azure Private Link service is the reference to your own service that is powered by Azure
Private Link. Your service that is running behind Azure Standard Load Balancer can be
enabled for Private Link access so that consumers of your service can access it privately
from their own VNets. Your customers can create a private endpoint inside their VNet and
map it to this service. A Private Link service receives connections from multiple private
endpoints. A private endpoint connects to one Private Link service.
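
As a rough illustration, the sketch below (Python, azure-mgmt-network) creates a Private Link service in front of a hypothetical internal Standard Load Balancer frontend; all names and resource IDs are placeholders.

# A minimal sketch (Python, azure-mgmt-network): expose your own service,
# running behind an internal Standard Load Balancer, as a Private Link service.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

frontend_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/provider-rg/providers"
    "/Microsoft.Network/loadBalancers/provider-ilb/frontendIPConfigurations/fe1"
)
nat_subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/provider-rg/providers"
    "/Microsoft.Network/virtualNetworks/provider-vnet/subnets/pls-nat-subnet"
)

poller = network_client.private_link_services.begin_create_or_update(
    "provider-rg",
    "contoso-pls",
    {
        "location": "eastus",
        # The internal Standard Load Balancer frontend that fronts your service.
        "load_balancer_frontend_ip_configurations": [{"id": frontend_ip_id}],
        # NAT IP configuration used for traffic arriving from consumer private endpoints.
        "ip_configurations": [
            {
                "name": "pls-nat-ipconfig",
                "subnet": {"id": nat_subnet_id},
                "private_ip_allocation_method": "Dynamic",
                "primary": True,
            }
        ],
    },
)
# Consumers can use the generated alias to request a private endpoint connection.
print(poller.result().alias)
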
Private Endpoint properties
Before creating a Private Endpoint, you should consider the Private Endpoint properties
and collect data about specific needs to be addressed. These include:
• A unique name within a resource group
• A subnet to deploy and allocate private IP addresses from a virtual network
• The Private Link resource to connect using resource ID or alias, from the list of
available types. A unique network identifier will be generated for all traffic sent to
this resource.
• The subresource to connect. Each Private Link resource type has different options
to select based on preference.
• An automatic or manual connection approval method. Based on Azure role-based access control (Azure RBAC) permissions, your Private Endpoint can be approved automatically. If you try to connect to a Private Link resource without sufficient Azure RBAC permissions, use the manual method to allow the owner of the resource to approve the connection.
• A specific request message for requested connections to be approved manually.
This message can be used to identify a specific request.
• Connection status: a read-only property that specifies whether the Private Endpoint is active. Only Private Endpoints in an approved state can be used to send traffic.

Also consider the following details:


• Private Endpoint enables connectivity from consumers in the same VNet, regionally peered VNets, globally peered VNets, and on-premises networks (connected via VPN or ExpressRoute) to services powered by Private Link.
• Network connections can only be initiated by clients connecting to the Private Endpoint. Service providers do not have any routing configuration to initiate connections into service consumers. Connections can only be established in a single direction.
• When creating a Private Endpoint, a read-only network interface is also created for the lifecycle of the resource. The interface is dynamically assigned a private IP address from the subnet that maps to the Private Link resource. The value of the private IP address remains unchanged for the entire lifecycle of the Private Endpoint.
• The Private Endpoint must be deployed in the same region and subscription as the
virtual network.
• The Private Link resource can be deployed in a different region than the virtual
network and Private Endpoint.
• Multiple Private Endpoints can be created using the same Private Link resource. For
a single network using a common DNS server configuration, the recommended
practice is to use a single Private Endpoint for a given Private Link resource to avoid
duplicate entries or conflicts in DNS resolution.
• Multiple Private Endpoints can be created on the same or different subnets within
the same virtual network. There are limits to the number of Private Endpoints you
can create in a subscription. For details, see Azure limits.
• The subscription that contains the Private Link resource must also be registered with the Microsoft network resource provider (Microsoft.Network).
Integrate private endpoint with
DNS
Private DNS zones are typically hosted centrally in the same Azure subscription where the
hub VNet is deployed. This central hosting practice is driven by cross-premises DNS name
resolution and other needs for central DNS resolution such as Active Directory. In most
cases, only networking/identity admins have permissions to manage DNS records in these
zones.

Azure Private Endpoint DNS configuration
The following diagram shows a typical high-level architecture for enterprise environments
with central DNS resolution and where name resolution for Private Link resources is done
via Azure Private DNS:
From the previous diagram, it is important to highlight that:
• On-premises DNS servers have conditional forwarders configured for each Private Endpoint public DNS zone, pointing to the DNS forwarders (10.100.2.4 and 10.100.2.5) hosted in the hub VNet.
• The DNS servers 10.100.2.4 and 10.100.2.5 hosted in the hub VNet use the Azure-
provided DNS resolver (168.63.129.16) as a forwarder.
• All Azure VNets have the DNS forwarders (10.100.2.4 and 10.100.2.5) configured as
the primary and secondary DNS servers.
• There are two conditions that must be true to allow application teams the freedom
to create any required Azure PaaS resources in their subscription:
• Central networking and/or central platform team must ensure that application
teams can only deploy and access Azure PaaS services via Private Endpoints.
• Central networking and/or central platform teams must ensure that whenever
Private Endpoints are created, the corresponding records are automatically created
in the centralized private DNS zone that matches the service created.
• The DNS record needs to follow the lifecycle of the Private Endpoint and be removed automatically when the Private Endpoint is deleted.
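
One common way to satisfy the automation requirements in the last two bullets is to attach a private DNS zone group to every private endpoint, so that the A record in the centralized zone is created and removed with the endpoint. A minimal sketch (Python, azure-mgmt-network) with hypothetical zone, resource group, and endpoint names:

# A minimal sketch (Python, azure-mgmt-network): attach a private DNS zone group
# to an existing private endpoint so its A record is created and removed with
# the endpoint's lifecycle. Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

private_dns_zone_id = (
    "/subscriptions/<subscription-id>/resourceGroups/dns-rg/providers"
    "/Microsoft.Network/privateDnsZones/privatelink.database.windows.net"
)

poller = network_client.private_dns_zone_groups.begin_create_or_update(
    "contoso-rg",       # resource group of the private endpoint
    "pe-contoso-sql",   # existing private endpoint name
    "default",          # zone group name
    {
        "private_dns_zone_configs": [
            {
                "name": "sql-zone-config",
                "private_dns_zone_id": private_dns_zone_id,
            }
        ]
    },
)
print(poller.result().provisioning_state)
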

Significance of IP address
168.63.129.16
IP address 168.63.129.16 is a virtual public IP address that is used to facilitate a
communication channel to Azure platform resources. Customers can define any address
space for their private virtual network in Azure, so the Azure platform resources must be presented with a unique public IP address. This virtual public IP address facilitates the following:
• Enables the VM Agent to communicate with the Azure platform to signal that it is
in a "Ready" state
• Enables communication with the DNS virtual server to provide filtered name
resolution to the resources (such as VM) that do not have a custom DNS server.
This filtering makes sure that customers can resolve only the hostnames of their
resources
• Enables health probes from Azure load balancer to determine the health state of
VMs
• Enables the VM to obtain a dynamic IP address from the DHCP service in Azure
• Enables Guest Agent heartbeat messages for the PaaS role

Azure services Private DNS zone configuration examples
Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME
record redirects the resolution to the private domain name. You can override the
resolution with the private IP address of your Private Endpoints.

Your applications don't need to change the connection URL. When resolving to a public
DNS service, the DNS server will resolve to your Private Endpoints. The process doesn't
affect your existing applications.

Private networks that already use the private DNS zone for a given type can only connect to public resources if they don't have any Private Endpoint connections; otherwise, a corresponding DNS configuration is required on the private DNS zone to complete the DNS resolution sequence.

For Azure services, use the recommended zone names found in the documentation.

DNS configuration scenarios


By default, the FQDN of the service resolves to a public IP address. To resolve to the private IP address of the Private Endpoint, change your DNS configuration.

DNS is a critical component to make the application work correctly by successfully resolving the Private Endpoint IP address.

Based on your preferences, the following scenarios are available with DNS resolution
integrated:
• Virtual network workloads without custom DNS server
• On-premises workloads using a DNS forwarder
• Virtual network and on-premises workloads using a DNS forwarder
• Private DNS zone group

On-premises workloads using a DNS forwarder
For on-premises workloads to resolve the FQDN of a Private Endpoint, use a DNS
forwarder to resolve the Azure service public DNS zone in Azure. A DNS forwarder is a
virtual machine running in the virtual network linked to the private DNS zone, which can proxy DNS queries coming from other virtual networks or from on-premises. A forwarder is required because the query to Azure DNS must originate from within the virtual network. A few options for DNS proxies are: Windows Server running DNS services, Linux running DNS services, and Azure Firewall.

The following scenario is for an on-premises network that has a DNS forwarder in Azure.
This forwarder resolves DNS queries via a server-level forwarder to the Azure provided
DNS 168.63.129.16.

This scenario uses the Azure SQL Database-recommended private DNS zone. For other
services, you can adjust the model using the following reference: Azure services DNS zone
configuration.

To configure properly, you need the following resources:


• On-premises network
• Virtual network connected to on-premises
• DNS forwarder deployed in Azure
• Private DNS zone privatelink.database.windows.net with a type A record
• Private Endpoint information (FQDN record name and private IP address)
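
A minimal sketch of the Azure-side pieces of this scenario, using the Python azure-mgmt-privatedns SDK; the resource group, VNet, record name, and private IP address are hypothetical placeholders. (When a private DNS zone group is attached to the private endpoint, step 3 is handled automatically.)

# A minimal sketch (Python, azure-mgmt-privatedns): create the
# privatelink.database.windows.net zone, link it to the hub VNet where the DNS
# forwarder runs, and add the A record for the private endpoint.
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

credential = DefaultAzureCredential()
dns_client = PrivateDnsManagementClient(credential, "<subscription-id>")

rg = "dns-rg"
zone = "privatelink.database.windows.net"
vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/hub-rg/providers"
    "/Microsoft.Network/virtualNetworks/hub-vnet"
)

# 1. Private DNS zone (private DNS zones are 'global', not regional, resources).
dns_client.private_zones.begin_create_or_update(rg, zone, {"location": "global"}).result()

# 2. Link the zone to the VNet so queries originating there can resolve it.
dns_client.virtual_network_links.begin_create_or_update(
    rg, zone, "hub-vnet-link",
    {"location": "global", "virtual_network": {"id": vnet_id}, "registration_enabled": False},
).result()

# 3. A record mapping the SQL server name to the private endpoint's IP address.
dns_client.record_sets.create_or_update(
    rg, zone, "A", "contoso-sql",
    {"ttl": 3600, "a_records": [{"ipv4_address": "10.1.1.5"}]},
)
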

The following diagram illustrates the DNS resolution sequence from an on-premises
network. The configuration uses a DNS forwarder deployed in Azure. The resolution is
made by a private DNS zone linked to a virtual network:
Virtual network and on-premises
workloads using Azure DNS Private
Resolver
When you use DNS Private Resolver, you don't need a DNS forwarder VM, and Azure DNS
is able to resolve on-premises domain names.

The following diagram uses DNS Private Resolver in a hub-spoke network topology. As a
best practice, the Azure landing zone design pattern recommends using this type of
topology. A hybrid network connection is established by using Azure ExpressRoute and
Azure Firewall. This setup provides a secure hybrid network. DNS Private Resolver is
deployed in the hub network.
Design and implement network
monitoring

Monitor your networks using Azure Monitor

What is Azure Monitor?


Azure Monitor helps you maximize the availability and performance of your applications
and services. It delivers a comprehensive solution for collecting, analyzing, and acting on
telemetry from your cloud and on-premises environments. This information helps you
understand how your applications are performing and proactively identify issues affecting
them and the resources they depend on.

Just a few examples of what you can do with Azure Monitor include:
• Detect and diagnose issues across applications and dependencies with Application
Insights.
• Correlate infrastructure issues with VM insights and Container insights.
• Drill into your monitoring data with Log Analytics for troubleshooting and deep
diagnostics.
• Support operations at scale with smart alerts and automated actions.
• Create visualizations with Azure dashboards and workbooks.
• Collect data from monitored resources using Azure Monitor Metrics.

The diagram below offers a high-level view of Azure Monitor. At the center of the diagram
are the data stores for metrics and logs, which are the two fundamental types of data
used by Azure Monitor. On the left are the sources of monitoring data that populate these
data stores. On the right are the different functions that Azure Monitor performs with this
collected data. This includes such actions as analysis, alerting, and streaming to external
systems.

Monitor data types in Azure Monitor


The data collected by Azure Monitor fits into one of two fundamental types:
• Metrics - Metrics are numerical values that describe some aspect of a system at a
particular point in time. They are lightweight and capable of supporting near real-
time scenarios.
• Logs - Logs contain different kinds of data organized into records with different
sets of properties for each type. Telemetry such as events and traces are stored as
logs in addition to performance data so that it can all be combined for analysis.

Azure Monitor metrics


Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from
monitored resources into a time series database. Metrics are numerical values that are
collected at regular intervals and describe some aspect of a system at a particular time.
Metrics in Azure Monitor are lightweight and capable of supporting near real-time
scenarios, making them particularly useful for alerting and fast detection of issues. You can analyze them interactively with metrics explorer, be proactively notified with an alert when a value crosses a threshold, or visualize them in a workbook or dashboard.
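
As a small illustration, the sketch below uses the Python azure-mgmt-monitor SDK to read a platform metric for a hypothetical VM over the last hour; the metric name, aggregation, and resource ID are example values and would vary by resource type.

# A minimal sketch (Python, azure-mgmt-monitor): pull the "Network In Total"
# platform metric for a VM over the last hour. The resource ID is a placeholder.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

credential = DefaultAzureCredential()
monitor_client = MonitorManagementClient(credential, "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Compute/virtualMachines/web-vm-01"
)

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

metrics = monitor_client.metrics.list(
    vm_id,
    timespan=f"{start.isoformat()}/{end.isoformat()}",
    metricnames="Network In Total",
    aggregation="Total",
)

# Each metric has one or more time series; each data point carries the
# requested aggregation (here, the total bytes received per interval).
for metric in metrics.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.total)
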
Azure Monitor metrics sources
There are several fundamental sources of metrics collected by Azure Monitor. Once these
metrics are collected in the Azure Monitor metric database, they can be evaluated
together regardless of their source.
• Azure resources - Platform metrics are created by Azure resources and give you
visibility into their health and performance. Each type of resource creates a distinct
set of metrics without any configuration required. Platform metrics are collected
from Azure resources at one-minute frequency unless specified otherwise in the
metric's definition.
• Applications - Metrics are created by Application Insights for your monitored
applications and help you detect performance issues and track trends in how your
application is being used. This includes such values as Server response time and
Browser exceptions.
• Virtual machine agents - Metrics are collected from the guest operating system
of a virtual machine. Enable guest OS metrics for Windows virtual machines with
Windows Diagnostic Extension (WAD) and for Linux virtual machines with
InfluxData Telegraf Agent.
• Custom metrics - You can define metrics in addition to the standard metrics that
are automatically available. You can define custom metrics in your application that
is monitored by Application Insights or create custom metrics for an Azure service
using the custom metrics API.

Monitor your networks using Azure Network Watcher

Azure Network Watcher
Azure Network Watcher is a regional service that enables you to monitor and diagnose
conditions at a network scenario level in, to, and from Azure. Scenario level monitoring
enables you to diagnose problems at an end-to-end network level view. Network
diagnostic and visualization tools available with Network Watcher help you understand,
diagnose, and gain insights into your network in Azure. Network Watcher is enabled through the creation of a Network Watcher resource, which allows you to utilize Network Watcher capabilities. Network Watcher is designed to monitor and repair the network health of IaaS products, including virtual machines, virtual networks, application gateways, and load balancers.
• Automate remote network monitoring with packet capture. Monitor and
diagnose networking issues without logging in to your virtual machines (VMs)
using Network Watcher. Trigger packet capture by setting alerts, and gain access
to real-time performance information at the packet level. When you observe an
issue, you can investigate in detail for better diagnoses.
• Gain insight into your network traffic using flow logs. Build a deeper
understanding of your network traffic pattern using Network Security Group
flow logs. Information provided by flow logs helps you gather data for compliance,
auditing and monitoring your network security profile.
• Diagnose VPN connectivity issues. Network Watcher provides the ability to diagnose your most common VPN gateway and connection issues, allowing you not only to identify the issue but also to use the detailed logs created to investigate further.

Network Topology: The topology capability enables you to generate a visual diagram of
the resources in a virtual network, and the relationships between the resources.

Verify IP Flow: Quickly diagnose connectivity issues from or to the internet and from or
to the on-premises environment. For example, confirming if a security rule is blocking
ingress or egress traffic to or from a virtual machine. IP flow verify is ideal for making sure
security rules are being correctly applied. When used for troubleshooting, if IP flow verify
doesn’t show a problem, you will need to explore other areas such as firewall restrictions.
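
A minimal sketch of an IP flow verify call with the Python azure-mgmt-network SDK; the VM resource ID and IP addresses are placeholders, and the Network Watcher name and resource group shown are the defaults Azure creates per region.

# A minimal sketch (Python, azure-mgmt-network): run IP flow verify against a
# VM to see whether an NSG rule allows outbound TCP 443 to an internet address.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Compute/virtualMachines/web-vm-01"
)

result = network_client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    {
        "target_resource_id": vm_id,
        "direction": "Outbound",
        "protocol": "TCP",
        "local_ip_address": "10.1.0.4",
        "local_port": "60000",
        "remote_ip_address": "13.107.21.200",
        "remote_port": "443",
    },
).result()

# Prints "Allow" or "Deny" plus the name of the NSG rule that matched.
print(result.access, result.rule_name)
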

Next Hop: Determines whether traffic is being directed to the intended destination by showing the next hop. This helps determine whether network routing is correctly configured. Next
hop also returns the route table associated with the next hop. If the route is defined as a
user-defined route, that route is returned. Otherwise, next hop returns System Route.
Depending on your situation the next hop could be Internet, Virtual Appliance, Virtual
Network Gateway, VNet Local, VNet Peering, or None. None lets you know that while
there may be a valid system route to the destination, there is no next hop to route the
traffic to the destination. When you create a virtual network, Azure creates several default
outbound routes for network traffic. The outbound traffic from all resources, such as VMs,
deployed in a virtual network, are routed based on Azure's default routes. You might
override Azure's default routes or create additional routes.
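
Similarly, a minimal next hop query might look like the following sketch (Python, azure-mgmt-network); the VM ID and IP addresses are placeholders.

# A minimal sketch (Python, azure-mgmt-network): ask Network Watcher for the
# next hop a VM would use to reach a destination IP address.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Compute/virtualMachines/web-vm-01"
)

next_hop = network_client.network_watchers.begin_get_next_hop(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    {
        "target_resource_id": vm_id,
        "source_ip_address": "10.1.0.4",
        "destination_ip_address": "10.2.0.4",
    },
).result()

# e.g. VnetPeering, VirtualNetworkGateway, Internet, or None, plus the route table used.
print(next_hop.next_hop_type, next_hop.next_hop_ip_address, next_hop.route_table_id)
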

Effective security rules: Network Security groups are associated at a subnet level or at a
NIC level. When associated at a subnet level, it applies to all the VM instances in the
subnet. Effective security rules view returns all the configured NSGs and rules that are
associated at a NIC and subnet level for a virtual machine providing insight into the
configuration. In addition, the effective security rules are returned for each of the NICs in
a VM. Using Effective security rules view, you can assess a VM for network vulnerabilities
such as open ports.

VPN Diagnostics: Troubleshoot gateways and connections. VPN Diagnostics returns a wealth of information. Summary information is available in the portal and more detailed
information is provided in log files. The log files are stored in a storage account and
include things like connection statistics, CPU and memory information, IKE security errors,
packet drops, and buffers and events.

Packet Capture: Network Watcher variable packet capture allows you to create packet
capture sessions to track traffic to and from a virtual machine. Packet capture helps to
diagnose network anomalies both reactively and proactively. Other uses include gathering
network statistics, gaining information on network intrusions, debugging client-server communications, and much more.
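
A minimal sketch of starting a packet capture with the Python azure-mgmt-network SDK, assuming the Network Watcher VM extension is installed on the target VM; all IDs and names are placeholders.

# A minimal sketch (Python, azure-mgmt-network): start a 60-second packet
# capture on a VM and store the capture file in a storage account.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Compute/virtualMachines/web-vm-01"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Storage/storageAccounts/contosodiag"
)

capture = network_client.packet_captures.begin_create(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    "web-vm-01-capture",
    {
        "target": vm_id,
        "time_limit_in_seconds": 60,
        "bytes_to_capture_per_packet": 0,   # 0 = capture full packets
        "storage_location": {"storage_id": storage_id},
        # Optional 5-tuple filter, e.g. only TCP traffic to remote port 443.
        "filters": [{"protocol": "TCP", "remote_port": "443"}],
    },
).result()
print(capture.provisioning_state)
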

Connection Troubleshoot: Azure Network Watcher Connection Troubleshoot is a more recent addition to the Network Watcher suite of networking tools and capabilities.
Connection Troubleshoot enables you to troubleshoot network performance and
connectivity issues in Azure.

NSG Flow Logs: NSG flow logs map IP traffic flowing through a network security group. This capability can be used for security compliance and auditing. You can define a prescriptive
set of security rules as a model for security governance in your organization. A periodic
compliance audit can be implemented in a programmatic way by comparing the
prescriptive rules with the effective rules for each of the VMs in your network.

Configure NSG Flow Logs


Network security groups (NSG) allow or deny inbound or outbound traffic to a network
interface in a VM.

NSG flow logging is a feature of Azure Network Watcher that allows you to log information
about IP traffic flowing through an NSG. The NSG flow log capability allows you to log
the source and destination IP address, port, protocol, and whether traffic was allowed or
denied by an NSG. You can analyze logs using a variety of tools, such as Power BI and the
Traffic Analytics feature in Azure Network Watcher.
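
A minimal sketch of enabling NSG flow logs programmatically with the Python azure-mgmt-network SDK; the NSG and storage account IDs are placeholders, and the Network Watcher names are the Azure defaults.

# A minimal sketch (Python, azure-mgmt-network): enable version-2 NSG flow logs
# for an NSG, writing to a storage account with 30-day retention.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

nsg_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Network/networkSecurityGroups/web-nsg"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Storage/storageAccounts/contosoflowlogs"
)

flow_log = network_client.flow_logs.begin_create_or_update(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    "web-nsg-flowlog",
    {
        "location": "eastus",
        "target_resource_id": nsg_id,
        "storage_id": storage_id,
        "enabled": True,
        "retention_policy": {"days": 30, "enabled": True},
        "format": {"type": "JSON", "version": 2},
    },
).result()
print(flow_log.provisioning_state)
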

Common use cases for NSG flow logs are:


• Network Monitoring - Identify unknown or undesired traffic. Monitor traffic levels
and bandwidth consumption. Filter flow logs by IP and port to understand
application behavior. Export Flow Logs to analytics and visualization tools of your
choice to set up monitoring dashboards.
• Usage monitoring and optimization - Identify top talkers in your network.
Combine with GeoIP data to identify cross-region traffic. Understand traffic growth
for capacity forecasting. Use data to remove overly restrictive traffic rules.
• Compliance - Use flow data to verify network isolation and compliance with
enterprise access rules.
• Network forensics and security analysis - Analyze network flows from
compromised IPs and network interfaces. Export flow logs to any SIEM or IDS tool
of your choice.
Connection Monitor

Connection Monitor overview


Connection Monitor provides unified end-to-end connection monitoring in Azure
Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud
deployments. Network Watcher provides tools to monitor, diagnose, and view
connectivity-related metrics for your Azure deployments.

Here are some use cases for Connection Monitor:


• Your front-end web server VM communicates with a database server VM in a multi-
tier application. You want to check network connectivity between the two VMs.
• You want VMs in the East US region to ping VMs in the Central US region, and you
want to compare cross-region network latencies.
• You have multiple on-premises office sites in Seattle, Washington, and in Ashburn,
Virginia. Your office sites connect to Microsoft 365 URLs. For your users of
Microsoft 365 URLs, compare the latencies between Seattle and Ashburn.
• Your hybrid application needs connectivity to an Azure Storage endpoint. Your on-
premises site and your Azure application connect to the same Azure Storage
endpoint. You want to compare the latencies of the on-premises site to the
latencies of the Azure application.
• You want to check the connectivity between your on-premises setups and the
Azure VMs that host your cloud application.

Connection Monitor combines the best of two features: the Network Watcher Connection
Monitor (Classic) feature and the Network Performance Monitor (NPM) Service
Connectivity Monitor, ExpressRoute Monitoring, and Performance Monitoring feature.

Here are some benefits of Connection Monitor:


• Unified, intuitive experience for Azure and hybrid monitoring needs
• Cross-region, cross-workspace connectivity monitoring
• Higher probing frequencies and better visibility into network performance
• Faster alerting for your hybrid deployments
• Support for connectivity checks that are based on HTTP, TCP, and ICMP
• Metrics and Log Analytics support for both Azure and non-Azure test setups

Set up Connection Monitor


There are several key steps you need to perform in order to setup Connection Monitor
for monitoring:
1. Install monitoring agents - Connection Monitor relies on lightweight executable
files to run connectivity checks. It supports connectivity checks from both Azure
environments and on-premises environments. The executable file that you use
depends on whether your VM is hosted on Azure or on-premises. For more
information, visit Install monitoring agents.
2. Enable Network Watcher on your subscription - All subscriptions that have a
virtual network are enabled with Network Watcher. When you create a virtual
network in your subscription, Network Watcher is automatically enabled in the
virtual network's region and subscription. This automatic enabling doesn't affect
your resources or incur a charge. Ensure that Network Watcher isn't explicitly
disabled on your subscription.
3. Create a connection monitor - Connection Monitor monitors communication at
regular intervals. It informs you of changes in reachability and latency. You can also
check the current and historical network topology between source agents and
destination endpoints. Sources can be Azure VMs or on-premises machines that
have an installed monitoring agent. Destination endpoints can be Microsoft 365
URLs, Dynamics 365 URLs, custom URLs, Azure VM resource IDs, IPv4, IPv6, FQDN,
or any domain name. A minimal creation sketch follows this list.
4. Set up data analysis and alerts - The data that Connection Monitor collects is
stored in the Log Analytics workspace. You set up this workspace when you created
the connection monitor. Monitoring data is also available in Azure Monitor Metrics.
You can use Log Analytics to keep your monitoring data for as long as you want.
Azure Monitor stores metrics for only 30 days by default. For more information,
visit Data collection, analysis, and alerts.
5. Diagnose issues in your network - Connection Monitor helps you diagnose issues
in your connection monitor and your network. Issues in your hybrid network are
detected by the Log Analytics agents that you installed earlier. Issues in Azure are
detected by the Network Watcher extension. You can view issues in the Azure
network in the network topology. For more information, visit Diagnose issues in
your network.
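
The following is the minimal creation sketch referenced in step 3 above (Python, azure-mgmt-network); the endpoint names, VM resource ID, and destination address are hypothetical.

# A minimal sketch (Python, azure-mgmt-network): create a connection monitor
# that tests TCP 443 reachability from an Azure VM to an external address
# every 60 seconds.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

source_vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Compute/virtualMachines/web-vm-01"
)

poller = network_client.connection_monitors.begin_create_or_update(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    "web-to-storage-monitor",
    {
        "location": "eastus",
        # Sources and destinations are declared as named endpoints.
        "endpoints": [
            {"name": "web-vm-01", "resource_id": source_vm_id},
            {"name": "storage-endpoint", "address": "contosodata.blob.core.windows.net"},
        ],
        # How to test: protocol, port, and probing frequency.
        "test_configurations": [
            {
                "name": "tcp-443",
                "test_frequency_sec": 60,
                "protocol": "Tcp",
                "tcp_configuration": {"port": 443, "disable_trace_route": False},
            }
        ],
        # Test groups tie sources, destinations, and test configurations together.
        "test_groups": [
            {
                "name": "web-to-storage",
                "sources": ["web-vm-01"],
                "destinations": ["storage-endpoint"],
                "test_configurations": ["tcp-443"],
            }
        ],
    },
)
print(poller.result().provisioning_state)
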

Traffic Analytics
Traffic Analytics is a cloud-based solution that provides visibility into user and application
activity in cloud networks. Traffic Analytics analyzes Network Watcher network security
group (NSG) flow logs to provide insights into traffic flow in your Azure cloud, along with rich visualizations of the flow log data.

With Traffic Analytics, you can:


• Visualize network activity across your Azure subscriptions and identify hot spots.
• Identify security threats to your network and secure it, using information such as open ports, applications attempting internet access, and virtual machines (VMs) connecting to rogue networks.
• Understand traffic flow patterns across Azure regions and the internet to optimize
your network deployment for performance and capacity.
• Pinpoint network misconfigurations leading to failed connections in your network.

How Traffic Analytics works


Traffic analytics examines the raw NSG flow logs and captures reduced logs by
aggregating common flows among the same source IP address, destination IP address,
destination port, and protocol. For example, if Host 1 (IP address 10.10.10.10) communicates with Host 2 (IP address 10.10.20.10) 100 times over a period of one hour using port 80 and the HTTP protocol, the reduced log has a single entry recording that Host 1 and Host 2 communicated 100 times over that hour using port 80 and HTTP, instead of 100 separate entries. Reduced logs are enhanced with
geography, security, and topology information, and then stored in a Log Analytics
workspace.

The diagram below illustrates the data flow:

The key components of Traffic Analytics are:


• Network security group (NSG) - Contains a list of security rules that allow or deny
network traffic to resources connected to an Azure Virtual Network. NSGs can be
associated to subnets, individual VMs (classic), or individual network interfaces
(NIC) attached to VMs (Resource Manager). For more information, see Network
security group overview.
• Network security group (NSG) flow logs - Allow you to view information about
ingress and egress IP traffic through a network security group. NSG flow logs are
written in json format and show outbound and inbound flows on a per rule basis,
the NIC the flow applies to, five-tuple information about the flow
(source/destination IP address, source/destination port, and protocol), and if the
traffic was allowed or denied. For more information about NSG flow logs, see NSG
flow logs.
• Log Analytics - An Azure service that collects monitoring data and stores the data
in a central repository. This data can include events, performance data, or custom
data provided through the Azure API. Once collected, the data is available for
alerting, analysis, and export. Monitoring applications such as network
performance monitor and traffic analytics are built using Azure Monitor logs as a
foundation. For more information, see Azure Monitor logs.
• Log Analytics workspace - An instance of Azure Monitor logs, where the data
pertaining to an Azure account, is stored. For more information about Log Analytics
workspaces, see Create a Log Analytics workspace.
• Network Watcher - A regional service that enables you to monitor and diagnose
conditions at a network scenario level in Azure. You can turn NSG flow logs on and
off with Network Watcher. For more information, see Network Watcher.
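
Tying these components together, the sketch below (Python, azure-mgmt-network) enables Traffic Analytics on an existing NSG flow log by adding a flow analytics configuration that points at a Log Analytics workspace; all IDs, including the workspace GUID, are placeholders, and the exact set of required workspace properties may vary by API version.

# A minimal sketch (Python, azure-mgmt-network): enable Traffic Analytics on an
# NSG flow log so reduced, enriched flow records land in a Log Analytics workspace.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

nsg_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Network/networkSecurityGroups/web-nsg"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers"
    "/Microsoft.Storage/storageAccounts/contosoflowlogs"
)
workspace_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/monitor-rg/providers"
    "/Microsoft.OperationalInsights/workspaces/contoso-law"
)

network_client.flow_logs.begin_create_or_update(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    "web-nsg-flowlog",
    {
        "location": "eastus",
        "target_resource_id": nsg_id,
        "storage_id": storage_id,
        "enabled": True,
        "format": {"type": "JSON", "version": 2},
        # Traffic Analytics settings: where to send the aggregated flow data.
        "flow_analytics_configuration": {
            "network_watcher_flow_analytics_configuration": {
                "enabled": True,
                "workspace_id": "<workspace-guid>",
                "workspace_region": "eastus",
                "workspace_resource_id": workspace_resource_id,
                "traffic_analytics_interval": 60,   # aggregation interval in minutes
            }
        },
    },
).result()
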
