AZ-700 Study Guide
Architecture diagram
Considerations
System routes
PolicyBased
RouteBased
Dual-redundancy: active-active VPN gateways for both Azure and on-premises networks
Architecture diagram
Design and implement Azure load balancer using the Azure portal
Choosing a load balancer type
Zone redundant
Zonal
Architecture diagram
Frontend configuration
Destination path
Get network security recommendations with Microsoft Defender for Cloud
Network Security
Deploy Azure DDoS Protection by using the Azure portal
Deploy Network Security Groups by using the Azure portal
Filter network traffic with an NSG using the Azure portal
Outbound connectivity using network rules and application rules
Inbound connectivity using DNAT rules and network rules
Methods for deploying an Azure Firewall with Availability Zones
Deploying Azure Firewall Manager for Hub Virtual Networks
Deploying Azure Firewall Manager for Secured Virtual Hubs
Web Application Firewall Default Rule Set rule groups and rules
Managed rules
How is Azure Private Endpoint different from a service endpoint?
Virtual network and on-premises workloads using Azure DNS Private Resolver
When creating a VNet, it's recommended that you use the address ranges enumerated in
RFC 1918, which have been set aside by the IETF for private, non-routable address spaces:
• 10.0.0.0 - 10.255.255.255 (10/8 prefix)
• 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
• 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
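These ranges can be validated programmatically. Below is a minimal sketch (using Python's standard ipaddress module; the candidate CIDRs are illustrative) that checks whether an address range sits entirely inside one of the RFC 1918 blocks:

```python
import ipaddress

# The three RFC 1918 private blocks listed above.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """Return True if the given CIDR sits entirely within an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918_BLOCKS)

print(is_rfc1918("10.1.0.0/16"))    # True
print(is_rfc1918("172.32.0.0/16"))  # False: just outside the 172.16/12 block
```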
Subnets
Azure services that support Availability Zones fall into three categories:
• Zonal services: Resources can be pinned to a specific zone. For example, virtual
machines, managed disks, or standard IP addresses can be pinned to a specific
zone, which allows for increased resilience by having one or more instances of
resources spread across zones.
• Zone-redundant services: Resources are replicated or distributed across zones
automatically. Azure replicates the data across three zones so that a zone failure
doesn't impact its availability.
• Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages as well as region-wide outages.
Configure public IP services
Use dynamic and static public IP
addresses
In Azure Resource Manager, a public IP address is a resource that has its own properties.
Some of the resources you can associate a public IP address resource with:
• Virtual machine network interfaces
• Virtual machine scale sets
• Public Load Balancers
• Virtual Network Gateways (VPN/ER)
• NAT gateways
• Application Gateways
• Azure Firewall
• Bastion Host
• Route Server
Architecture diagram
In Azure DNS, you can create address records manually within relevant zones. The records
most frequently used will be:
• Host records: A/AAAA (IPv4/IPv6)
• Alias records: CNAME
Considerations
• The name of the zone must be unique within the resource group, and the zone
must not exist already.
• The same zone name can be reused in a different resource group or a different
Azure subscription.
• Where multiple zones share the same name, each instance is assigned different
name server addresses.
• The root/parent domain is registered at the registrar and pointed to the Azure DNS
name servers.
• Child domains are registered directly in Azure DNS.
At the VNet level, default DNS configuration is part of the DHCP assignments made by
Azure, specifying the special address 168.63.129.16 to use Azure DNS services.
Two ways to link VNets to a private zone:
• Registration: Each VNet can link to one private DNS zone for registration. However,
up to 100 VNets can link to the same private DNS zone for registration.
• Resolution: There may be many other private DNS zones for different namespaces.
You can link a VNet to each of those zones for name resolution. Each VNet can link
to up to 1000 private DNS Zones for name resolution.
Integrating on-premises DNS with Azure
VNets
Forwarding takes two forms:
• Forwarding - specifies another DNS server (SOA for a zone) to resolve the query if
the initial server cannot.
• Conditional forwarding - specifies a DNS server for a named zone, so that all
queries for that zone are routed to the specified DNS server.
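Conditional forwarding amounts to a suffix match on the query name. The sketch below shows the selection logic; the zone names and server addresses are made up for illustration:

```python
# Queries for a named zone go to that zone's forwarder;
# everything else goes to the default forwarder.
conditional_forwarders = {
    "corp.contoso.com": "10.0.0.4",        # illustrative on-premises DNS server
    "azure.contoso.com": "168.63.129.16",  # Azure-provided DNS
}
default_forwarder = "1.1.1.1"

def pick_forwarder(query_name: str) -> str:
    # The longest matching zone suffix wins, mirroring conditional forwarding.
    matches = [z for z in conditional_forwarders
               if query_name == z or query_name.endswith("." + z)]
    if matches:
        return conditional_forwarders[max(matches, key=len)]
    return default_forwarder

print(pick_forwarder("vm1.azure.contoso.com"))  # 168.63.129.16
print(pick_forwarder("example.org"))            # 1.1.1.1
```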
Enable cross-virtual network
connectivity with peering
Virtual network peering enables you to seamlessly connect two Azure virtual networks.
Once peered, the virtual networks appear as one, for connectivity purposes. There are two
types of VNet peering.
• Regional VNet peering connects Azure virtual networks in the same region.
• Global VNet peering connects Azure virtual networks in different regions. When
creating a global peering, the peered virtual networks can exist in any Azure public
cloud region or China cloud regions, but not in Government cloud regions. You can
only peer virtual networks in the same region in Azure Government cloud regions.
The benefits of using virtual network peering, whether local or global, include:
• A low-latency, high-bandwidth connection between resources in different virtual
networks.
• The ability to apply network security groups in either virtual network to block
access to other virtual networks or subnets.
• The ability to transfer data between virtual networks across Azure subscriptions,
Microsoft Entra tenants, deployment models, and Azure regions.
• The ability to peer virtual networks created through the Azure Resource Manager.
• The ability to peer a virtual network created through Resource Manager to one
created through the classic deployment model.
• No downtime to resources in either virtual network is required when creating the
peering, or after the peering is created.
When you enable Allow Gateway Transit, the virtual network can communicate with
resources outside the peering. For example, the subnet gateway could:
• Use a site-to-site VPN to connect to an on-premises network.
• Use a VNet-to-VNet connection to another virtual network.
• Use a point-to-site VPN to connect to a client.
Azure virtual networks can be deployed in a hub-and-spoke topology, with the hub VNet
acting as a central point of connectivity to all the spoke VNets. The hub virtual network
hosts infrastructure components such as an NVA, virtual machines and a VPN gateway. All
the spoke virtual networks peer with the hub virtual network. Traffic flows through
network virtual appliances or VPN gateways in the hub virtual network. The benefits of
using a hub and spoke configuration include cost savings, overcoming subscription limits,
and workload isolation.
Implement virtual network traffic
routing
Azure automatically creates a route table for each subnet within an Azure virtual network
and adds system default routes to the table.
System routes
Azure automatically creates system routes and assigns the routes to each subnet in a
virtual network. You can't create or remove system routes, but you can override some
system routes with custom routes. Azure creates default system routes for each subnet,
and adds additional optional default routes to specific subnets, or every subnet, when you
use specific Azure capabilities.
Default routes
Each route contains an address prefix and next hop type. When traffic leaving a subnet is
sent to an IP address within the address prefix of a route, the route that contains the prefix
is the route Azure uses.
In routing terms, a hop is a waypoint on the overall route. Therefore, the next hop is the
next waypoint that the traffic is directed to on its journey to its ultimate destination.
• Virtual network: Routes traffic between address ranges within the address space
of a virtual network. Azure creates a route with an address prefix that corresponds
to each address range defined within the address space of a virtual network. Azure
automatically routes traffic between subnets using the routes created for each
address range.
• Internet: Routes traffic specified by the address prefix to the Internet. The system
default route specifies the 0.0.0.0/0 address prefix. Azure routes traffic for any
address not specified by an address range within a virtual network to the Internet,
unless the destination address is for an Azure service. Azure routes any traffic
destined for its service directly to the service over the backbone network, rather
than routing the traffic to the Internet. You can override Azure's default system
route for the 0.0.0.0/0 address prefix with a custom route.
• None: Traffic routed to the None next hop type is dropped, rather than routed
outside the subnet. Azure automatically creates default routes for the following
address prefixes:
• 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16: Reserved for private use in RFC
1918.
• 100.64.0.0/10: Reserved in RFC 6598.
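Route selection over these prefixes follows longest prefix match: the most specific matching route wins. A minimal sketch, assuming an illustrative VNet address space of 10.1.0.0/16:

```python
import ipaddress

# Default system routes: (address prefix, next hop type).
system_routes = [
    ("10.1.0.0/16", "Virtual network"),  # example VNet address space
    ("0.0.0.0/0", "Internet"),
    ("10.0.0.0/8", "None"),
    ("172.16.0.0/12", "None"),
    ("192.168.0.0/16", "None"),
    ("100.64.0.0/10", "None"),
]

def next_hop(dest_ip: str) -> str:
    ip = ipaddress.ip_address(dest_ip)
    matching = [(ipaddress.ip_network(p), hop) for p, hop in system_routes
                if ip in ipaddress.ip_network(p)]
    # The route with the longest (most specific) matching prefix is used.
    net, hop = max(matching, key=lambda t: t[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))    # Virtual network: inside the VNet space
print(next_hop("10.200.0.1"))  # None: RFC 1918 but outside the VNet
print(next_hop("52.10.0.5"))   # Internet: only the 0.0.0.0/0 route matches
```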
You define the NAT configuration for each subnet within a VNet to enable outbound
connectivity by specifying which NAT gateway resource to use. After NAT is configured,
all UDP and TCP outbound flows from any virtual machine instance will use NAT for
internet connectivity. No further configuration is necessary, and you don’t need to create
any user-defined routes. NAT takes precedence over other outbound scenarios and
replaces the default Internet destination of a subnet.
PolicyBased
PolicyBased VPNs were previously called static routing gateways in the classic deployment
model. Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the
IPsec policies configured with the combinations of address prefixes between your on-
premises network and the Azure VNet.
Policy-based VPNs support only the IKEv1 protocol and can be used with the Basic
gateway SKU only.
You can only use PolicyBased VPNs for S2S connections, and only for certain
configurations. Most VPN Gateway configurations require a RouteBased VPN.
RouteBased
RouteBased VPNs were previously called dynamic routing gateways in the classic
deployment model. RouteBased VPNs use "routes" in the IP forwarding or routing table
to direct packets into their corresponding tunnel interfaces. The tunnel interfaces then
encrypt or decrypt the packets in and out of the tunnels. The policy (or traffic selector) for
RouteBased VPNs is configured as any-to-any (or wildcards). The value for a RouteBased
VPN type is RouteBased.
Here you create and set up the Azure VPN gateway in an active-active configuration and
create two local network gateways and two connections for your two on-premises VPN
devices as described above. The result is full-mesh connectivity of four IPsec tunnels
between your Azure virtual network and your on-premises network.
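The tunnel count follows from pairing each of the two active Azure gateway instances with each of the two on-premises devices. A quick sketch (instance and device names are illustrative):

```python
from itertools import product

# Two active-active Azure gateway instances and two on-premises VPN devices.
azure_instances = ["azgw-instance-0", "azgw-instance-1"]
onprem_devices = ["onprem-vpn-1", "onprem-vpn-2"]

# The full mesh is every (Azure instance, on-premises device) pair.
tunnels = list(product(azure_instances, onprem_devices))
print(len(tunnels))  # 4
```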
Architecture diagram
• The on-premises network represents your on-premises Active Directory and any
data or resources.
• The gateway is responsible for sending encrypted traffic to a virtual IP address
when it uses a public connection.
• The Azure virtual network holds all your cloud applications and any Azure VPN
gateway components.
• An Azure VPN gateway provides the encrypted link between the Azure virtual
network and your on-premises network. An Azure VPN gateway is made up of
these elements:
o Virtual network gateway
o Local network gateway
o Connection
o Gateway subnet
• Cloud applications are the ones you've made available through Azure.
• An internal load balancer, located in the front end, routes cloud traffic to the
correct cloud-based application or resource.
Connect devices to networks with
Point-to-site VPN connections
A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your
virtual network from an individual client computer. A P2S connection is established by
starting it from the client computer. This solution is useful for telecommuters who want
to connect to Azure VNets from a remote location, such as from home or a conference.
P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few
clients that need to connect to a VNet.
Point-to-site protocols
Point-to-site VPN can use one of the following protocols:
• OpenVPN® Protocol, an SSL/TLS based VPN protocol. A TLS VPN solution can
penetrate firewalls, since most firewalls open TCP port 443 outbound, which TLS
uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and
above), Windows, Linux, and Mac devices (macOS versions 10.13 and above).
• Secure Socket Tunneling Protocol (SSTP), a proprietary TLS-based VPN protocol. A
TLS VPN solution can penetrate firewalls, since most firewalls open TCP port 443
outbound, which TLS uses. SSTP is only supported on Windows devices. Azure
supports all versions of Windows that have SSTP (Windows 7 and later).
• IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to
connect from Mac devices (macOS versions 10.11 and above).
At a high level, you need to perform the following steps to configure Microsoft Entra
authentication:
• Configure a Microsoft Entra tenant
• Enable Microsoft Entra authentication on the gateway
• Download and configure Azure VPN Client
The following diagram shows an organization with two Virtual WAN hubs connecting the
spokes. VNets, site-to-site and point-to-site VPNs, SD-WANs, and ExpressRoute
connectivity are all supported.
To configure an end-to-end virtual WAN, you create the following resources:
• Virtual WAN
• Hub
• Hub virtual network connection
• Hub-to-hub connection
• Hub route table
Gateway scale
A hub gateway isn't the same as a virtual network gateway that you use for ExpressRoute
and VPN Gateway. For example, when using Virtual WAN, you don't create a site-to-site
connection from your on-premises site directly to your VNet. Instead, you create a site-
to-site connection to the hub. The traffic always goes through the hub gateway. This
means that your VNets don't need their own virtual network gateway. Virtual WAN lets
your VNets take advantage of scaling easily through the virtual hub and the virtual hub
gateway.
Connect cross-tenant VNets to a Virtual
WAN hub
You can use Virtual WAN to connect a VNet to a virtual hub in a different tenant. This
architecture is useful if you have client workloads that must be connected to the same
network but are in different tenants. For example, as shown in the following diagram, you
can connect a non-Contoso VNet (the Remote Tenant) to a Contoso virtual hub (the Parent
Tenant).
Before you can connect a cross-tenant VNet to a Virtual WAN hub, you must have the
following configuration already set up:
• A Virtual WAN and virtual hub in the parent subscription.
• A virtual network configured in a subscription in the remote tenant.
• An address space in the remote-tenant VNet that doesn't overlap with the address
spaces of any other VNets already connected to the parent virtual hub.
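The overlap requirement can be checked with standard CIDR arithmetic. A minimal sketch, with illustrative address ranges:

```python
import ipaddress

# Address spaces of VNets already connected to the parent virtual hub (illustrative).
connected = ["10.1.0.0/16", "10.2.0.0/16"]

def overlaps_existing(candidate: str) -> bool:
    """Return True if the candidate range overlaps any connected VNet."""
    cand = ipaddress.ip_network(candidate)
    return any(cand.overlaps(ipaddress.ip_network(c)) for c in connected)

print(overlaps_existing("10.2.128.0/17"))  # True: falls inside 10.2.0.0/16
print(overlaps_existing("10.3.0.0/16"))    # False: safe to connect
```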
ExpressRoute capabilities
Some key benefits of ExpressRoute are:
• Layer 3 connectivity between your on-premises network and the Microsoft Cloud
through a connectivity provider
• Connectivity can be from an any-to-any (IPVPN) network, a point-to-point Ethernet
connection, or through a virtual cross-connection via an Ethernet exchange
• Connectivity to Microsoft cloud services across all regions in the geopolitical region
• Global connectivity to Microsoft services across all regions with the ExpressRoute
premium add-on
• Built-in redundancy in every peering location for higher reliability
Azure ExpressRoute is used to create private connections between Azure datacenters and
infrastructure on your premises or in a colocation environment. ExpressRoute connections
do not go over the public Internet, and they offer more reliability, faster speeds, and lower
latencies than typical Internet connections.
Understand use cases for Azure
ExpressRoute
Faster, more reliable connections to Azure services - Organizations using Azure
services need reliable connections to Azure services and data centers. The public internet
depends on many factors and may not be suitable for a business. Azure
ExpressRoute is used to create private connections between Azure data centers and
infrastructure on your premises or in a colocation environment. Using ExpressRoute
connections to transfer data between on-premises systems and Azure can also give
significant cost benefits.
Storage, backup, and recovery - Backup and recovery are important to an organization
for business continuity and recovering from outages. ExpressRoute gives you a fast and
reliable connection to Azure with bandwidths up to 100 Gbps, which makes it excellent
for scenarios such as periodic data migration, replication for business continuity, disaster
recovery and other high-availability strategies.
Extends Data center capabilities - ExpressRoute can be used to connect and add
compute and storage capacity to your existing data centers. With high throughput and
low latency, Azure will feel like a natural extension to or between your data centers, so
you enjoy the scale and economics of the public cloud without having to compromise on
network performance.
If you are co-located in a facility with a cloud exchange, you can order virtual cross-
connections to the Microsoft cloud through the co-location provider’s Ethernet exchange.
Co-location providers can offer either Layer 2 cross-connections, or managed Layer 3
cross-connections between your infrastructure in the co-location facility and the
Microsoft cloud.
You can connect your on-premises datacenters/offices to the Microsoft cloud through
point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2
connections, or managed Layer 3 connections between your site and the Microsoft cloud.
You can integrate your WAN with the Microsoft cloud. IPVPN providers (typically MPLS
VPN) offer any-to-any connectivity between your branch offices and datacenters. The
Microsoft cloud can be interconnected to your WAN to make it look just like any other
branch office. WAN providers typically offer managed Layer 3 connectivity.
Direct from ExpressRoute sites
ExpressRoute Direct gives you the ability to connect directly into Microsoft's global
network at peering locations strategically distributed around the world. ExpressRoute
Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active
connectivity at scale. You can work with any service provider for ExpressRoute Direct.
Route advertisement
When Microsoft peering gets configured on your ExpressRoute circuit, the Microsoft Edge
routers establish a pair of Border Gateway Protocol (BGP) sessions with your edge routers
through your connectivity provider. No routes are advertised to your network. To enable
route advertisements to your network, you must associate a route filter.
The following diagram shows an example of VPN connectivity over ExpressRoute private
peering:
The diagram shows a network within the on-premises network connected to the Azure
hub VPN gateway over ExpressRoute private peering. The connectivity establishment is
straightforward:
• Establish ExpressRoute connectivity with an ExpressRoute circuit and private
peering.
• Establish the VPN connectivity.
To apply encryption to the communication, you must make sure that for the VPN-
connected network in the diagram, the Azure routes via on-premises VPN gateway are
preferred over the direct ExpressRoute path.
The same requirement applies to the traffic from Azure to on-premises networks. To
ensure that the IPsec path is preferred over the direct ExpressRoute path (without IPsec),
you have two options:
• Advertise more specific prefixes on the VPN BGP session for the VPN-connected
network. You can advertise a larger range that encompasses the VPN-connected
network over ExpressRoute private peering, then more specific ranges in the VPN
BGP session. For example, advertise 10.0.0.0/16 over ExpressRoute, and 10.0.1.0/24
over VPN.
• Advertise disjoint prefixes for VPN and ExpressRoute. If the VPN-connected
network ranges are disjoint from other ExpressRoute connected networks, you can
advertise the prefixes in the VPN and ExpressRoute BGP sessions, respectively. For
example, advertise 10.0.0.0/24 over ExpressRoute, and 10.0.1.0/24 over VPN.
In both examples, Azure will send traffic to 10.0.1.0/24 over the VPN connection rather
than directly over ExpressRoute without VPN protection.
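The preference falls out of longest prefix match. A sketch using the prefixes from the first example (10.0.0.0/16 advertised over ExpressRoute, 10.0.1.0/24 over VPN):

```python
import ipaddress

# Routes learned for overlapping prefixes over the two paths.
routes = [
    ("10.0.0.0/16", "ExpressRoute private peering"),
    ("10.0.1.0/24", "VPN (IPsec)"),
]

def chosen_path(dest_ip: str) -> str:
    ip = ipaddress.ip_address(dest_ip)
    matching = [(ipaddress.ip_network(p), path) for p, path in routes
                if ip in ipaddress.ip_network(p)]
    # The more specific (longer) prefix wins, steering traffic into the tunnel.
    return max(matching, key=lambda t: t[0].prefixlen)[1]

print(chosen_path("10.0.1.50"))  # VPN (IPsec): the /24 is more specific
print(chosen_path("10.0.5.9"))   # ExpressRoute private peering
```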
You can configure either gateway first. Typically, you will incur no downtime when adding
a new gateway or gateway connection.
Zone-redundant gateways
To automatically deploy your virtual network gateways across availability zones, you can
use zone-redundant virtual network gateways. With zone-redundant gateways, you can
benefit from zone-resiliency to access your mission-critical, scalable services on Azure.
Zonal gateways
To deploy gateways in a specific zone, you can use zonal gateways. When you deploy a
zonal gateway, all instances of the gateway are deployed in the same Availability Zone.
Configure a Site-to-Site VPN as a
failover path for ExpressRoute
You can configure a Site-to-Site VPN connection as a backup for ExpressRoute. This
connection applies only to virtual networks linked to the Azure private peering path. There
is no VPN-based failover solution for services accessible through Azure Microsoft peering.
The ExpressRoute circuit is always the primary link. Data flows through the Site-to-Site
VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local
network configuration should also prefer the ExpressRoute circuit over the Site-to-Site
VPN. You can prefer the ExpressRoute path by setting a higher local preference for the
routes received over ExpressRoute.
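BGP path selection prefers the route with the highest local preference, so giving ExpressRoute-learned routes a higher value keeps that circuit primary. A minimal sketch (the preference values are illustrative):

```python
# Two routes for the same prefix, learned over different paths.
routes_for_prefix = [
    {"path": "ExpressRoute", "local_pref": 200},
    {"path": "Site-to-Site VPN", "local_pref": 100},
]

def best_path(routes):
    # BGP selects the route with the highest local preference first.
    return max(routes, key=lambda r: r["local_pref"])["path"]

print(best_path(routes_for_prefix))  # ExpressRoute
```

If the ExpressRoute route is withdrawn (circuit failure), only the VPN route remains and traffic fails over automatically.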
Configure peering for an
ExpressRoute deployment
An ExpressRoute circuit has two peering options associated with it: Azure private, and
Microsoft. Each peering is configured identically on a pair of routers (in active-active or
load sharing configuration) for high availability. Azure services are categorized as Azure
public and Azure private to represent the IP addressing schemes.
For example, if you connect to Microsoft in Amsterdam through ExpressRoute, you will
have access to all Microsoft cloud services hosted in Northern and Western Europe.
You can transfer data cost-effectively by enabling the Local SKU. With the Local SKU, you
can bring your data to an ExpressRoute location near the Azure region you want, and data
transfer is included in the ExpressRoute port charge.
You can enable ExpressRoute Global Reach to exchange data across your on-premises
sites by connecting your ExpressRoute circuits. For example, suppose you have a private
data
center in California connected to an ExpressRoute circuit in Silicon Valley and another
private data center in Texas connected to an ExpressRoute circuit in Dallas. With
ExpressRoute Global Reach, you can connect your private data centers together through
these two ExpressRoute circuits. Your cross-data-center traffic will traverse through
Microsoft's network.
ExpressRoute Direct
ExpressRoute is a private and resilient way to connect your on-premises networks to the
Microsoft Cloud. You can access many Microsoft cloud services such as Azure and
Microsoft 365 from your private data center or your corporate network. For example, you
might have a branch office in San Francisco with an ExpressRoute circuit in Silicon Valley
and another branch office in London with an ExpressRoute circuit in the same city. Both
branch offices have high-speed connectivity to Azure resources in US West and UK South.
However, the branch offices cannot connect and send data directly with one another. In
other words, 10.0.1.0/24 can send data to 10.0.3.0/24 and 10.0.4.0/24 network, but NOT
to 10.0.2.0/24 network.
Choose when to use ExpressRoute
global reach
ExpressRoute Global Reach is designed to complement your service provider’s WAN
implementation and connect your branch offices across the world. For example, if your
service provider primarily operates in the United States and has linked all your branches
in the U.S., but the service provider does not operate in Japan and Hong Kong SAR, with
ExpressRoute Global Reach you can work with a local service provider and Microsoft will
connect your branches there to the ones in the U.S. using ExpressRoute and the Microsoft
global network.
Load balance non-HTTP(S)
traffic in Azure
In contrast, non-HTTP(S) load-balancing services can handle non-HTTP(S) traffic and are
recommended for non-web workloads.
The table below summarizes these categorizations for each Azure load balancing service.

Service               Global/Regional      Recommended traffic
Azure Front Door      Global               HTTP(S)
Traffic Manager       Global               non-HTTP(S)
Application Gateway   Regional             HTTP(S)
Azure Load Balancer   Regional or Global   non-HTTP(S)
The flowchart below will help you to select the most appropriate load-balancing solution
for your application, by guiding you through a set of key decision criteria in order to reach
a recommendation.
Design and implement Azure load
balancer using the Azure portal
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI)
model. It's the single point of contact for clients. Azure Load Balancer distributes inbound
flows that arrive at the load balancer's front end to backend pool instances. These flows
are distributed according to configured load-balancing rules and health probes. The backend pool
instances can be Azure Virtual Machines or instances in a virtual machine scale set.
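Conceptually, each flow's five-tuple (source IP, source port, destination IP, destination port, protocol) is hashed to pick a healthy backend, so all packets of one flow stick to one instance. The sketch below illustrates the idea only; the hash function and names are not Azure's actual algorithm:

```python
import hashlib

backends = ["vm-0", "vm-1", "vm-2"]  # illustrative backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, healthy=backends):
    # Hash the five-tuple and map it onto the set of healthy instances.
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}"
    digest = hashlib.sha256(five_tuple.encode()).digest()
    return healthy[int.from_bytes(digest[:4], "big") % len(healthy)]

# Packets of the same flow always land on the same backend:
a = pick_backend("203.0.113.7", 50123, "20.1.2.3", 443, "TCP")
b = pick_backend("203.0.113.7", 50123, "20.1.2.3", 443, "TCP")
print(a == b)  # True
```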
A public load balancer can provide outbound connections for virtual machines (VMs)
inside your virtual network. These connections are accomplished by translating their
private IP addresses to public IP addresses. External load balancers are used to distribute
client traffic from the internet across your VMs. That internet traffic might come from web
browsers, mobile apps, or other sources.
An internal load balancer is used where private IPs are needed at the frontend only.
Internal load balancers are used to load balance traffic from internal Azure resources to
other Azure resources inside a virtual network. A load balancer frontend can also be
accessed from an on-premises network in a hybrid scenario.
Azure load balancer and availability
zones
Azure services that support availability zones fall into three categories:
• Zonal services: Resources can be pinned to a specific zone. For example, virtual
machines, managed disks, or standard IP addresses can be pinned to a specific
zone, which allows for increased resilience by having one or more instances of
resources spread across zones.
• Zone-redundant services: Resources are replicated or distributed across zones
automatically. Azure replicates the data across three zones so that a zone failure
doesn't impact its availability.
• Non-regional services: Services are always available from Azure geographies and
are resilient to zone-wide outages and region-wide outages.
Azure Load Balancer supports availability zones scenarios. You can use Standard Load
Balancer to increase availability throughout your scenario by aligning resources with, and
distributing them across, zones. Review this section to understand these concepts and
fundamental scenario design guidance.
A Load Balancer can either be zone redundant, zonal, or non-zonal. To configure the zone
related properties (mentioned above) for your load balancer, select the appropriate type
of frontend needed.
Zone redundant
In a region with Availability Zones, a Standard Load Balancer can be zone-redundant, with
traffic served by a single IP address.
A single frontend IP address survives zone failure. The frontend IP may be used to reach
all (non-impacted) backend pool members no matter the zone. One or more availability
zones can fail and the data path survives as long as one zone in the region remains healthy.
Zonal
You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend.
This scenario means any inbound or outbound flow is served by a single zone in a region.
Your frontend shares fate with the health of the zone. The data path is unaffected by
failures in zones other than where it was guaranteed. You can use zonal frontends to
expose an IP address per Availability Zone.
Additionally, the use of zonal frontends directly for load balanced endpoints within each
zone is supported. You can use this configuration to expose per zone load-balanced
endpoints to individually monitor each zone. For public endpoints, you can integrate them
with a DNS load-balancing product like Traffic Manager and use a single DNS name.
For a public load balancer frontend, you add a zones parameter to the public IP. This
public IP is referenced by the frontend IP configuration used by the respective rule.
For an internal load balancer frontend, add a zones parameter to the internal load
balancer frontend IP configuration. A zonal frontend guarantees an IP address in a subnet
to a specific zone.
Architecture diagram
Explore Azure Traffic Manager
Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to
distribute traffic to your public-facing applications across the global Azure regions. Traffic
Manager also provides your public endpoints with high availability and quick
responsiveness.
Traffic Manager uses DNS to direct the client requests to the appropriate service endpoint
based on a traffic-routing method. Traffic Manager also provides health monitoring for
every endpoint. The endpoint can be any Internet-facing service hosted inside or outside
of Azure. Traffic Manager provides a range of traffic-routing methods and endpoint
monitoring options to suit different application needs and automatic failover models.
Traffic Manager is resilient to failure, including the failure of an entire Azure region.
Traffic Manager offers the following features:
• Increase application availability: Traffic Manager delivers high availability for your critical applications by monitoring your endpoints and providing automatic failover when an endpoint goes down.
• Improve application performance: Azure allows you to run cloud services and websites in datacenters located around the world. Traffic Manager can improve the responsiveness of your website by directing traffic to the endpoint with the lowest latency.
• Combine hybrid applications: Traffic Manager supports external, non-Azure endpoints, enabling it to be used with hybrid cloud and on-premises deployments, including the burst-to-cloud, migrate-to-cloud, and failover-to-cloud scenarios.
• Distribute traffic for complex deployments: Using nested Traffic Manager profiles, multiple traffic-routing methods can be combined to create sophisticated and flexible rules that scale to the needs of larger, more complex deployments.
When a client attempts to connect to a service, it must first resolve the DNS name of the
service to an IP address. The client then connects to that IP address to access the service.
Traffic Manager uses DNS to direct clients to specific service endpoints based on the rules
of the traffic-routing method. Clients connect to the selected endpoint directly. Traffic
Manager isn't a proxy or a gateway. Traffic Manager doesn't see the traffic passing
between the client and the service.
Traffic Manager example client usage
1. The client sends a DNS query to its configured recursive DNS service to resolve the
name 'partners.contoso.com'. A recursive DNS service, sometimes called a 'local
DNS' service, doesn't host DNS domains directly. Rather, the client off-loads the
work of contacting the various authoritative DNS services across the Internet
needed to resolve a DNS name.
2. To resolve the DNS name, the recursive DNS service finds the name servers for the
'contoso.com' domain. It then contacts those name servers to request the
'partners.contoso.com' DNS record. The contoso.com DNS servers return the
CNAME record that points to contoso.trafficmanager.net.
3. Next, the recursive DNS service finds the name servers for the 'trafficmanager.net'
domain, which are provided by the Azure Traffic Manager service. It then sends a
request for the 'contoso.trafficmanager.net' DNS record to those DNS servers.
4. The Traffic Manager name servers receive the request. They choose an endpoint
based on:
o The configured state of each endpoint (disabled endpoints aren't returned)
o The current health of each endpoint, as determined by the Traffic Manager
health checks.
o The chosen traffic-routing method.
5. The chosen endpoint is returned as another DNS CNAME record. In this case, let
us suppose contoso-eu.cloudapp.net is returned.
6. Next, the recursive DNS service finds the name servers for the 'cloudapp.net'
domain. It contacts those name servers to request the 'contoso-eu.cloudapp.net'
DNS record. A DNS 'A' record containing the IP address of the EU-based service
endpoint is returned.
7. The recursive DNS service consolidates the results and returns a single DNS
response to the client.
8. The client receives the DNS results and connects to the given IP address. The client
connects to the application service endpoint directly, not through Traffic Manager.
Since it's an HTTPS endpoint, the client performs the necessary SSL/TLS handshake,
and then makes an HTTP GET request for the '/login.aspx' page.
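The resolution chain in steps 1-8 can be modeled as following CNAME records until an A record is reached. This is a minimal illustrative sketch, not a real resolver; the record table and the example IP address are hardcoded assumptions, whereas the real Traffic Manager name servers answer the `contoso.trafficmanager.net` query dynamically based on endpoint health and the routing method.

```python
# Illustrative model of the Traffic Manager DNS resolution chain above.
# Records and the endpoint choice are hardcoded for the example; a real
# recursive resolver queries authoritative name servers at each step.
DNS_RECORDS = {
    "partners.contoso.com": ("CNAME", "contoso.trafficmanager.net"),
    # Traffic Manager's name servers answer this dynamically, returning a
    # healthy endpoint per the routing method; here the EU endpoint is fixed.
    "contoso.trafficmanager.net": ("CNAME", "contoso-eu.cloudapp.net"),
    "contoso-eu.cloudapp.net": ("A", "203.0.113.10"),  # example IP address
}

def resolve(name: str) -> str:
    """Follow CNAME records until an A record (IP address) is found."""
    while True:
        record_type, value = DNS_RECORDS[name]
        if record_type == "A":
            return value  # the client connects to this IP directly
        name = value  # follow the CNAME chain

print(resolve("partners.contoso.com"))  # → 203.0.113.10
```

Note that Traffic Manager only participates in the DNS steps; the final TCP/TLS connection goes straight from the client to the returned IP.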
Routing method
Priority
Select this routing method when you want to have a primary service endpoint for all traffic.
You can provide multiple backup endpoints in case the primary or one of the backup
endpoints is unavailable.
Weighted
Select this routing method when you want to distribute traffic across a set of endpoints
based on their weight. Set the weight the same to distribute evenly across all endpoints.
Performance
Select this routing method when you have endpoints in different geographic locations,
and you want end users to use the "closest" endpoint for the lowest network latency.
Geographic
Select this routing method to direct users to specific endpoints (Azure, External, or
Nested) based on where their DNS queries originate from geographically. This routing
method enables you to address scenarios such as data sovereignty mandates, localization
of content and user experience, and measuring traffic from different regions.
MultiValue
Select this routing method for Traffic Manager profiles that can only have IPv4/IPv6
addresses as endpoints. When a query is received for this profile, all healthy endpoints
are returned.
Subnet
Select this routing method to map sets of end-user IP address ranges to a specific
endpoint. When a request is received, the endpoint returned will be the one mapped for
that request’s source IP address.
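The Priority and Weighted methods above can be sketched as simple selection functions. This is an illustrative model only; the endpoint names, weights, and health states are invented for the example, and real Traffic Manager behavior also involves DNS TTLs and probe intervals.

```python
# Illustrative sketch of the Priority and Weighted routing methods.
import random

endpoints = [
    {"name": "primary", "priority": 1, "weight": 50, "healthy": False},
    {"name": "backup1", "priority": 2, "weight": 30, "healthy": True},
    {"name": "backup2", "priority": 3, "weight": 20, "healthy": True},
]

def route_priority(eps):
    """Return the healthy endpoint with the highest priority (lowest number)."""
    healthy = [e for e in eps if e["healthy"]]
    return min(healthy, key=lambda e: e["priority"])["name"]

def route_weighted(eps):
    """Pick a healthy endpoint at random, in proportion to its weight."""
    healthy = [e for e in eps if e["healthy"]]
    weights = [e["weight"] for e in healthy]
    return random.choices(healthy, weights=weights)[0]["name"]

# The primary is unhealthy, so Priority routing fails over to backup1.
print(route_priority(endpoints))  # → backup1
```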
Routing method examples
This is an example of the Priority routing method.
There are no restrictions on how different endpoints types can be combined in a single
Traffic Manager profile; each profile can contain any mix of endpoint types.
Load balance HTTP(S) traffic in
Azure
This type of routing is known as application layer (OSI layer 7) load balancing. Azure
Application Gateway can do URL-based routing and more.
Application Gateway features
• Support for the HTTP, HTTPS, HTTP/2 and WebSocket protocols.
• A web application firewall to protect against web application vulnerabilities.
• End-to-end request encryption.
• Autoscaling, to dynamically adjust capacity as your web traffic load changes.
• Redirection: redirect traffic to another site, or from HTTP to HTTPS.
• Rewrite HTTP headers: HTTP headers allow the client and server to pass
parameter information with the request or the response.
• Custom error pages: Application Gateway allows you to create custom error pages
instead of displaying default error pages. You can use your own branding and
layout using a custom error page.
There are two primary methods of routing traffic: path-based routing and multiple site
routing.
Path-based routing
Path-based routing sends requests with different URL paths to different pools of back-end
servers. For example, you could direct requests with the path /video/* to a back-end pool
containing servers that are optimized to handle video streaming, and direct /images/*
requests to a pool of servers that handle image retrieval.
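The /video/* and /images/* example above amounts to matching the request path against a list of wildcard rules. Here is a minimal sketch of that logic; the pool names and the default-pool fallback are illustrative assumptions, not Application Gateway configuration syntax.

```python
# Illustrative sketch of path-based routing, matching the /video/* and
# /images/* scenario described above.
from fnmatch import fnmatch

path_rules = [
    ("/video/*", "video-backend-pool"),
    ("/images/*", "image-backend-pool"),
]
DEFAULT_POOL = "default-backend-pool"  # assumed fallback pool

def select_pool(path: str) -> str:
    """Return the backend pool whose path pattern matches the request path."""
    for pattern, pool in path_rules:
        if fnmatch(path, pattern):
            return pool
    return DEFAULT_POOL

print(select_pool("/video/stream1.mp4"))  # → video-backend-pool
print(select_pool("/images/logo.png"))    # → image-backend-pool
print(select_pool("/index.html"))         # → default-backend-pool
```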
Multiple site routing
Multiple site routing configures more than one web application on the same application
gateway instance. In a multi-site configuration, you register multiple DNS names
(CNAMEs) for the IP address of the Application Gateway, specifying the name of each site.
Application Gateway uses separate listeners to wait for requests for each site. Each listener
passes the request to a different rule, which can route the requests to servers in a different
back-end pool. For example, you could direct all requests for https://contoso.com to
servers in one back-end pool, and requests for https://fabrikam.com to another back-end
pool. The following diagram shows this configuration.
Multi-site configurations are useful for supporting multi-tenant applications, where each
tenant has its own set of virtual machines or other resources hosting a web application.
Configure Azure Application
Gateway
Application Gateway has a series of components that combine to route requests to a pool
of web servers and to check the health of these web servers.
Frontend configuration
You can configure the application gateway to have a public IP address, a private IP address,
or both. A public IP address is required when you host a back end that clients must access
over the Internet via an Internet-facing virtual IP.
Backend configuration
The backend pool is used to route requests to the backend servers that serve the request.
Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses,
internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends
like Azure App Service. You can create an empty backend pool with your application
gateway and then add backend targets to the backend pool.
The source IP address that the Application Gateway uses for health probes depends on
the backend pool:
• If the server address in the backend pool is a public endpoint, then the source
address is the application gateway's frontend public IP address.
• If the server address in the backend pool is a private endpoint, then the source IP
address is from the application gateway subnet's private IP address space.
Default health probe
An application gateway automatically configures a default health probe when you don't
set up any custom probe configurations. The monitoring behavior works by making an
HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For
default probes, if the backend HTTP settings are configured for HTTPS, the probe uses
HTTPS to test the health of the backend servers.
For example: You configure your application gateway to use back-end servers A, B, and C
to receive HTTP network traffic on port 80. The default health monitoring tests the three
servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each
request. A healthy HTTP response has a status code between 200 and 399. In this case,
the HTTP GET request for the health probe looks like http://127.0.0.1/.
If the default probe check fails for server A, the application gateway stops forwarding
requests to this server. The default probe continues to check for server A every 30 seconds.
When server A responds successfully to one request from a default health probe,
application gateway starts forwarding the requests to the server again.
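The default probe behavior in the example above reduces to a simple rule: probe every 30 seconds and treat status codes 200-399 as healthy. The following sketch models that; the server names and status codes are invented for the example.

```python
# Minimal model of the default health probe behavior described above:
# an HTTP GET every 30 seconds, healthy when the status code is 200-399.

PROBE_INTERVAL_SECONDS = 30  # default probe interval and per-request timeout

def is_healthy(status_code: int) -> bool:
    """A response is healthy if its status code is between 200 and 399."""
    return 200 <= status_code <= 399

# Example last-probe results for back-end servers A, B, and C.
probe_results = {"serverA": 503, "serverB": 200, "serverC": 301}

# The gateway keeps forwarding only to servers whose last probe was healthy;
# unhealthy servers keep being probed and rejoin once a probe succeeds.
active_backends = [s for s, code in probe_results.items() if is_healthy(code)]
print(active_backends)  # → ['serverB', 'serverC']
```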
Design and configure Azure Front
Door
Azure Front Door is Microsoft’s modern cloud Content Delivery Network (CDN) that
provides fast, reliable, and secure access between your users and your applications’ static
and dynamic web content across the globe. Azure Front Door delivers your content using
Microsoft's global edge network, with hundreds of global and local points of presence
(POPs) distributed around the world, close to both your enterprise and consumer end users.
Many organizations have applications they want to make available to their customers,
their suppliers, and almost certainly their users. The tricky part is making sure those
applications are highly available. In addition, they need to be able to quickly respond
while being appropriately secured. Azure Front Door provides different SKUs (pricing tiers)
that meet these requirements. Let's briefly review the features and benefits of these SKUs
so you can determine which option best suits your requirements.
A secure, modern cloud CDN provides a distributed platform of servers. This helps
minimize latency when users are accessing webpages. Historically, IT staff might have
used a CDN and a web application firewall to control HTTP and HTTPS traffic flowing to
and from target applications.
If an organization uses Azure, they might achieve these goals by implementing the
products described below:
• Azure Front Door: Enables an entry point to your apps positioned in the Microsoft global edge network. Provides faster, more secure, and scalable access to your web applications.
• Azure Content Delivery Network: Delivers high-bandwidth content to your users by caching their content at strategically placed physical nodes around the world.
• Azure Web Application Firewall: Helps provide centralized, greater protection for web applications from common exploits and vulnerabilities.
For a comparison of supported features in Azure Front Door, review the feature
comparison table.
Incoming match
The following properties determine whether the incoming request matches the routing
rule (or left-hand side):
• HTTP Protocols (HTTP/HTTPS)
• Hosts (for example, www.foo.com, *.bar.com)
• Paths (for example, /, /users/, /file.gif)
Route data
Front Door speeds up the processing of requests by using caching. If caching is enabled
for a specific route, it uses the cached response. If there is no cached response for the
request, Front Door forwards the request to the appropriate backend in the configured
backend pool.
Route matching
Front Door attempts to match to the most-specific match first looking only at the left-
hand side of the route. It first matches based on HTTP protocol, then Frontend host, then
the Path.
If there are no routing rules for an exact-match frontend host with a catch-all route
Path (/*), then there will not be a match to any routing rule.
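The left-hand-side matching order above (protocol, then host, then most-specific path) can be sketched as follows. The route table, backend names, and the "longest matching path wins" tie-break are illustrative assumptions used to model the behavior, not Front Door's actual implementation.

```python
# Sketch of Front Door route matching: protocol first, then frontend host,
# then the most specific matching path. Routes below are examples.
routes = [
    ("HTTPS", "www.contoso.com", "/users/*", "users-backend"),
    ("HTTPS", "www.contoso.com", "/*", "catch-all-backend"),
]

def match_route(protocol: str, host: str, path: str):
    """Return the backend of the most specific matching route, or None."""
    candidates = [
        r for r in routes
        if r[0] == protocol and r[1] == host
        and (path == r[2] or (r[2].endswith("/*") and path.startswith(r[2][:-1])))
    ]
    if not candidates:
        return None  # no matching routing rule for this request
    # Most specific match = longest matching route path.
    return max(candidates, key=lambda r: len(r[2]))[3]

print(match_route("HTTPS", "www.contoso.com", "/users/alice"))  # → users-backend
print(match_route("HTTPS", "www.contoso.com", "/index.html"))   # → catch-all-backend
print(match_route("HTTP", "www.contoso.com", "/users/alice"))   # → None
```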
Redirection types
A redirect type sets the response status code for the clients to understand the purpose of
the redirect. The supported types of redirection are 301 (Moved permanently), 302 (Found),
307 (Temporary redirect), and 308 (Permanent redirect).
Redirection protocol
You can set the protocol that will be used for redirection. The most common use case of
the redirect feature is to set HTTP to HTTPS redirection.
• HTTPS only: Set the protocol to HTTPS only if you're looking to redirect the traffic
from HTTP to HTTPS. It's recommended that you always set the redirection to
HTTPS only.
• HTTP only: Redirects the incoming request to HTTP. Use this value only if you want
to keep your traffic HTTP, that is, non-encrypted.
• Match request: This option keeps the protocol used by the incoming request. So,
an HTTP request remains HTTP and an HTTPS request remains HTTPS post
redirection.
Destination host
As part of configuring a redirect routing, you can also change the hostname or domain
for the redirect request. You can set this field to change the hostname in the URL for the
redirection or otherwise preserve the hostname from the incoming request. So, using this
field you can redirect all requests sent on https://www.contoso.com/* to
https://www.fabrikam.com/*.
Destination path
For cases where you want to replace the path segment of a URL as part of redirection, you
can set this field with the new path value. Otherwise, you can choose to preserve the path
value as part of redirect. So, using this field, you can redirect all requests sent to
https://www.contoso.com/* to https://www.contoso.com/redirected-site.
Destination fragment
The destination fragment is the portion of URL after '#', which is used by the browser to
land on a specific section of a web page. You can set this field to add a fragment to the
redirect URL.
Query string parameters
You can also replace the query string parameters in the redirected URL. To replace any
existing query string from the incoming request URL, set this field to 'Replace' and then
set the appropriate value. Otherwise, keep the original set of query strings by setting the
field to 'Preserve'. As an example, using this field, you can redirect all traffic sent to
https://www.contoso.com/foo/bar to
https://www.contoso.com/foo/bar?&utm_referrer=https%3A%2F%2Fwww.bing.com%2F.
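The redirect settings above (protocol, destination host, path, fragment, and query string) combine into a single redirect URL, each component either replaced or preserved from the incoming request. This is a hedged sketch of that composition; the function and its parameter names are invented for illustration and are not a Front Door API.

```python
# Illustrative model: build a redirect URL from the incoming request,
# replacing only the components the redirect route overrides.
from urllib.parse import urlsplit, urlunsplit

def build_redirect(incoming_url: str, protocol="Match request",
                   host=None, path=None, fragment=None, query=None):
    """Each None component is preserved from the incoming request."""
    parts = urlsplit(incoming_url)
    scheme = parts.scheme  # 'Match request' keeps the incoming protocol
    if protocol == "HTTPS only":
        scheme = "https"
    elif protocol == "HTTP only":
        scheme = "http"
    return urlunsplit((
        scheme,
        host if host is not None else parts.netloc,        # destination host
        path if path is not None else parts.path,          # destination path
        query if query is not None else parts.query,       # 'Replace'/'Preserve'
        fragment if fragment is not None else parts.fragment,
    ))

# HTTP-to-HTTPS redirect that also replaces the query string:
print(build_redirect("http://www.contoso.com/foo/bar",
                     protocol="HTTPS only",
                     query="utm_referrer=https%3A%2F%2Fwww.bing.com%2F"))
# → https://www.contoso.com/foo/bar?utm_referrer=https%3A%2F%2Fwww.bing.com%2F
```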
The powerful part of URL rewrite is that the custom forwarding path will copy any part of
the incoming path that matches to a wildcard path to the forwarded path.
Since Front Door has many edge environments globally, health probe volume for your
backends can be quite high - ranging from 25 requests every minute to as high as 1200
requests per minute, depending on the health probe frequency configured. With the
default probe frequency of 30 seconds, the probe volume on your backend should be
about 200 requests per minute.
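The "about 200 requests per minute" figure follows from a simple calculation. The number of probing edge environments used here (~100) is an assumption chosen to reproduce the quoted figure; the real count varies by deployment and over time.

```python
# Back-of-the-envelope check of the health probe volume quoted above.
edge_environments = 100       # ASSUMED number of probing edge locations
probe_interval_seconds = 30   # default health probe frequency

# Each edge environment probes once per interval.
probes_per_minute = edge_environments * (60 / probe_interval_seconds)
print(probes_per_minute)  # → 200.0
```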
Front Door supports the following HTTP methods for sending the health probes:
GET: The GET method means retrieve whatever information (in the form of an entity) is
identified by the Request-URI.
HEAD: The HEAD method is identical to GET except that the server MUST NOT return a
message-body in the response. Because it has lower load and cost on your backends, for
new Front Door profiles, by default, the probe method is set as HEAD.
For Microsoft Azure, securing resources like microservices, VMs, and data is paramount.
Microsoft Azure ensures this through a distributed virtual firewall.
Network Security
Network Security covers controls to secure and protect Azure networks, including
securing virtual networks, establishing private connections, preventing and mitigating
external attacks, and securing DNS. Full description of the controls can be found at
Security Control V3: Network Security on Microsoft Docs.
Use network security groups (NSG) as a network layer control to restrict or monitor traffic
by port, protocol, source IP address, or destination IP address.
You can also use application security groups (ASGs) to simplify complex configuration.
Instead of defining policy based on explicit IP addresses in network security groups, ASGs
enable you to configure network security as a natural extension of an application's
structure, allowing you to group virtual machines and define network security policies
based on those groups.
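NSG semantics can be modeled as first-match evaluation in priority order: rules are tried from the lowest priority number upward, and the first matching rule decides allow or deny. The rule names, priorities, and simplified matching below are illustrative assumptions; real NSG rules also match on protocol, direction, and address prefixes.

```python
# Minimal model of NSG rule evaluation: lower priority number wins, and the
# first matching rule's action is applied.
rules = [
    {"priority": 100,   "port": 443,  "source": "Internet",    "action": "Allow"},
    {"priority": 200,   "port": 3389, "source": "10.0.0.0/24", "action": "Allow"},
    {"priority": 65500, "port": "*",  "source": "*",           "action": "Deny"},  # default deny
]

def evaluate(port: int, source: str) -> str:
    """Return the action of the first rule (by priority) matching the traffic."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "*") and rule["source"] in (source, "*"):
            return rule["action"]
    return "Deny"

print(evaluate(443, "Internet"))   # → Allow (HTTPS from the Internet)
print(evaluate(3389, "Internet"))  # → Deny  (RDP only allowed from 10.0.0.0/24)
```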
NS-2: Secure cloud services with network
controls
Security Principle: Secure cloud services by establishing a private access point for the
resources. You should also disable or restrict access from public network when possible.
Azure Guidance: Deploy private endpoints for all Azure resources that support the Private
Link feature, to establish a private access point for the resources. You should also disable
or restrict public network access to services where feasible.
For certain services, you also have the option to deploy VNet integration for the service,
where you can restrict the VNet to establish a private access point for the service.
At a minimum, block known bad IP addresses and high-risk protocols, such as remote
management (for example, RDP and SSH) and intranet protocols (for example, SMB and
Kerberos).
Azure Guidance: Use Azure Firewall to provide fully stateful application layer traffic
restriction (such as URL filtering) and/or central management over a large number of
enterprise segments or spokes (in a hub/spoke topology).
If you have a complex network topology, such as a hub/spoke setup, you may need to
create user-defined routes (UDR) to ensure the traffic goes through the desired route. For
example, you have the option to use a UDR to redirect egress internet traffic through a
specific Azure Firewall or a network virtual appliance.
For more in-depth host level detection and prevention capability, use host-based IDS/IPS
or a host-based endpoint detection and response (EDR) solution in conjunction with the
network IDS/IPS.
Azure Guidance: Use Azure Firewall’s IDPS capability on your network to alert on and/or
block traffic to and from known malicious IP addresses and domains.
For more in-depth host level detection and prevention capability, deploy host-based
IDS/IPS or a host-based endpoint detection and response (EDR) solution, such as
Microsoft Defender for Endpoint, at the VM level in conjunction with the network IDS/IPS.
Azure Guidance: Enable DDoS standard protection plan on your VNet to protect
resources that are exposed to the public networks.
Azure Guidance: Use the following features to simplify the implementation and
management of the NSG and Azure Firewall rules:
• Use Microsoft Defender for Cloud Adaptive Network Hardening to recommend
NSG hardening rules that further limit ports, protocols and source IPs based on
threat intelligence and traffic analysis result.
• Use Azure Firewall Manager to centralize the firewall policy and route management
of the virtual network. To simplify the firewall rules and network security groups
implementation, you can also use the Azure Firewall Manager ARM (Azure
Resource Manager) template.
Azure Guidance: Use Azure Sentinel’s built-in Insecure Protocol Workbook to discover
the use of insecure services and protocols such as SSL/TLSv1, SSHv1, SMBv1, LM/NTLMv1,
wDigest, Unsigned LDAP Binds, and weak ciphers in Kerberos. Disable insecure services
and protocols that do not meet the appropriate security standard.
Note: If disabling insecure services or protocols is not possible, use compensating controls
such as blocking access to the resources through network security group, Azure Firewall,
or Azure Web Application Firewall to reduce the attack surface.
Azure Guidance: Use private connections for secure communication between different
networks, such as cloud service provider datacenters and on-premises infrastructure in a
colocation environment.
For lightweight connectivity between site-to-site or point-to-site, use Azure virtual private
network (VPN) to create a secure connection between your on-premises site or end-user
device to the Azure virtual network.
For enterprise-level high performance connection, use Azure ExpressRoute (or Virtual
WAN) to connect Azure datacenters and on-premises infrastructure in a co-location
environment.
When connecting two or more Azure virtual networks together, use virtual network
peering. Network traffic between peered virtual networks is private and is kept on the
Azure backbone network.
Azure Guidance: Use Azure recursive DNS or a trusted external DNS server in your
workload recursive DNS setup, such as in VM's operating system or in the application.
Use Azure Private DNS for private DNS zone setup where the DNS resolution process does
not leave the virtual network. Use a custom DNS to restrict the DNS resolution which only
allows the trusted resolution to your client.
Use Azure Defender for DNS for the advanced protection against the following security
threats to your workload or your DNS service:
• Data exfiltration from your Azure resources using DNS tunneling
• Malware communicating with command-and-control server
• Communication with malicious domains, such as phishing and crypto-mining domains
• DNS attacks in communication with malicious DNS resolvers
You can also use Azure Defender for App Service to detect dangling DNS records if you
decommission an App Service website without removing its custom domain from your
DNS registrar.
Distributed Denial of Service (DDoS) attacks are some of the largest availability and
security concerns facing customers that are moving their applications to the cloud. A
DDoS attack tries to drain an API's or application's resources, making that application
unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is
publicly reachable through the internet.
DDoS implementation
Azure DDoS Protection, combined with application design best practices, provides defense
against DDoS attacks. Azure DDoS Protection provides the following service tiers:
• Network Protection: Provides additional mitigation capabilities over DDoS
infrastructure Protection that are tuned specifically to Azure Virtual Network
resources. Azure DDoS Protection is simple to enable, and requires no application
changes. Protection policies are tuned through dedicated traffic monitoring and
machine learning algorithms. Policies are applied to public IP addresses associated
to resources deployed in virtual networks, such as Azure Load Balancer, Azure
Application Gateway, and Azure Service Fabric instances, but this protection
doesn't apply to App Service Environments. Real-time telemetry is available
through Azure Monitor views during an attack, and for history. Rich attack
mitigation analytics are available via diagnostic settings. Application layer
protection can be added through the Azure Application Gateway Web Application
Firewall or by installing a third party firewall from Azure Marketplace. Protection is
provided for IPv4 and IPv6 Azure public IP addresses.
• IP Protection: DDoS IP Protection is a pay-per-protected IP model. DDoS IP
Protection contains the same core engineering features as DDoS Network
Protection, but will differ in value-added services like DDoS rapid response support,
cost protection, and discounts on WAF.
Volumetric attacks - These attacks flood the network layer with a substantial amount of
seemingly legitimate traffic. They include UDP floods, amplification floods, and other
spoofed-packet floods. DDoS Protection mitigates these potential multi-gigabyte attacks
by absorbing and scrubbing them, with Azure's global network scale, automatically.
Resource (application) layer attacks - These attacks target web application packets, to
disrupt the transmission of data between hosts. They include HTTP protocol violations,
SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application
Firewall, such as the Azure Application Gateway web application firewall, and DDoS
Protection to provide defense against these attacks. There are also third-party web
application firewall offerings available in the Azure Marketplace.
Multi-layered protection
Specific to resource attacks at the application layer, you should configure a web
application firewall (WAF) to help secure web applications. A WAF inspects inbound web
traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure
provides WAF as a feature of Application Gateway for centralized protection of your web
applications from common exploits and vulnerabilities. There are other WAF offerings
available from Azure partners that might be more suitable for your needs via the Azure
Marketplace.
Even web application firewalls are susceptible to volumetric and state-exhaustion attacks.
Therefore, it's strongly recommended that you enable DDoS Protection on the WAF virtual
network to help protect against volumetric and protocol attacks.
To minimize the number of security rules you need, and the need to change the rules,
plan out the application security groups you need and create rules using service tags or
application security groups, rather than individual IP addresses, or ranges of IP addresses,
whenever possible.
The key stages to filter network traffic with an NSG using the Azure portal are:
1. Create a resource group - this can either be done beforehand or as you create
the virtual network in the next stage. All other resources that you create must be
in the same region specified here.
2. Create a virtual network - this must be deployed in the same resource group you
created above.
3. Create application security groups - the application security groups you create
here will enable you to group together servers with similar functions, such as web
servers or management servers. You would create two application security groups
here; one for web servers and one for management servers (for example,
MyAsgWebServers and MyAsgMgmtServers).
4. Create a network security group - the network security group will secure network
traffic in your virtual network. This NSG will be associated with a subnet in the next
stage.
5. Associate a network security group with a subnet - this is where you'll associate
the network security group you created above with the subnet of the virtual network
you created in stage 2.
6. Create security rules - this is where you create your inbound security rules. Here
you would create a security rule to allow ports 80 and 443 to the application
security group for your web servers (for example, MyAsgWebServers). Then you
would create another security rule to allow RDP traffic on port 3389 to the
application security group for your management servers (for example,
MyAsgMgmtServers). These rules will control from where you can access your VM
remotely and your IIS web server.
7. Create virtual machines - this is where you create the web server (for example,
MyVMWeb) and management server (for example, MyVMMgmt) virtual machines,
which will be associated with their respective application security groups in the next
stage.
8. Associate NICs to an ASG - this is where you associate the network interface card
(NIC) attached to each virtual machine with the relevant application security group
that you created in stage 3.
9. Test traffic filters - the final stage is where you test that your traffic filtering is
working as expected.
o To test this, you would attempt to connect to the management server virtual
machine (for example, MyVMMgmt) by using an RDP connection, thereby
verifying that you can connect because port 3389 is allowing inbound
connections from the Internet to the management servers application
security group (for example, MyAsgMgmtServers).
o While connected to the RDP session on the management server (for
example, MyVMMgmt), you would then test an RDP connection from the
management server virtual machine (for example, MyVMMgmt) to the web
server virtual machine (for example, MyVMWeb), which again should
succeed because virtual machines in the same network can communicate
with each over any port by default.
o However, you'll not be able to create an RDP connection to the web server
virtual machine (for example, MyVMWeb) from the internet, because the
security rule for the web servers application security group (for example,
MyAsgWebServers) prevents connections to port 3389 inbound from the
Internet. Inbound traffic from the Internet is denied to all resources by
default.
o While connected to the RDP session on the web server (for example,
MyVMWeb), you could then install IIS on the web server, then disconnect
from the web server virtual machine RDP session, and disconnect from the
management server virtual machine RDP session. In the Azure portal, you
would then determine the Public IP address of the web server virtual
machine (for example, MyVMWeb), and confirm you can access the web
server virtual machine from the Internet by opening a web browser on your
computer and navigating to http://<public-ip-address> (for example, http://23.96.39.113). You
should see the standard IIS welcome screen, because port 80 is allowed
inbound access from the Internet to the web servers application security
group (for example, MyAsgWebServers). The network interface attached to
the web server virtual machine (for example, MyVMWeb) is associated with
the web servers application security group (for example, MyAsgWebServers)
and therefore allows the connection.
Design and implement Azure Firewall
Azure Firewall is a managed, cloud-based network security service that protects your
Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high
availability and unrestricted cloud scalability.
You can define multiple Rule Collection types within a single Rule Collection Group, and
you can define zero or more Rules in a Rule Collection, but the rules within a Rule
Collection must be of the same type (i.e., DNAT, Network, or Application).
With Firewall Policy, rules are processed based on Rule Collection Group Priority and Rule
Collection priority. Priority is any number between 100 (highest priority) and 65,000
(lowest priority). Highest priority Rule Collection Groups are processed first, and inside a
Rule Collection Group, Rule Collections with the highest priority (i.e., the lowest number)
are processed first.
In the case of a Firewall Policy being inherited from a parent policy, Rule Collection Groups
in the parent policy always take precedence regardless of the priority of the child policy.
Application rules are always processed after network rules, which are themselves always
processed after DNAT rules regardless of Rule Collection Group or Rule Collection priority
and policy inheritance.
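The processing order described above can be expressed as a three-level sort: rule type first (DNAT, then Network, then Application), then Rule Collection Group priority, then Rule Collection priority, with lower numbers processed first. The names and priority values below are invented for the example.

```python
# Sketch of the Firewall Policy processing order: rule types are always
# evaluated DNAT -> Network -> Application, and within each type the
# lowest priority number (highest priority) runs first.
TYPE_ORDER = {"DNAT": 0, "Network": 1, "Application": 2}

rule_collections = [
    {"group_priority": 200, "priority": 100, "type": "Application", "name": "app-rules"},
    {"group_priority": 100, "priority": 300, "type": "Network",     "name": "net-low"},
    {"group_priority": 100, "priority": 100, "type": "Network",     "name": "net-high"},
    {"group_priority": 100, "priority": 200, "type": "DNAT",        "name": "dnat-rules"},
]

processing_order = sorted(
    rule_collections,
    key=lambda c: (TYPE_ORDER[c["type"]], c["group_priority"], c["priority"]),
)
print([c["name"] for c in processing_order])
# → ['dnat-rules', 'net-high', 'net-low', 'app-rules']
```

Note that the type ordering outranks both priority levels: the Application collection runs last here even though its own priority number (100) is lower than one of the Network collections.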
If there is no network rule match, and if the protocol is either HTTP, HTTPS, or MSSQL,
then the packet is then evaluated by the application rules in priority order. For HTTP, Azure
Firewall looks for an application rule match according to the Host Header, whereas for
HTTPS, Azure Firewall looks for an application rule match according to Server Name
Indication (SNI) only.
Application rules aren't applied for inbound connections. So, if you want to filter inbound
HTTP/S traffic, you should use Web Application Firewall (WAF).
For enhanced security, if you modify a rule to deny access to traffic that had previously
been allowed, any relevant existing sessions are dropped.
The key stages of deploying and configuring Azure Firewall are as follows:
• Create a resource group
• Create a virtual network and subnets
• Create a workload VM in a subnet
• Deploy the firewall and policy to the virtual network
• Create a default outbound route
• Configure an application rule
• Configure a network rule
• Configure a Destination NAT (DNAT) rule
• Test the firewall
When deploying Azure Firewall, you can configure it to span multiple Availability Zones
for increased availability. Configured this way, with two or more Availability Zones
selected, Azure Firewall offers a 99.99% uptime SLA.
You can also associate Azure Firewall with a specific zone purely for proximity reasons,
with the service-standard 99.95% SLA.
For more information, see the Azure Firewall Service Level Agreement (SLA).
There is no additional cost for a firewall deployed in an Availability Zone. However, there
are added costs for inbound and outbound data transfers associated with Availability
Zones.
Azure Firewall Availability Zones are only available in regions that support Availability
Zones.
Availability Zones can only be configured during firewall deployment. You cannot
configure an existing firewall to include Availability Zones.
Azure Firewall Manager simplifies the process of centrally defining network and
application-level rules for traffic filtering across multiple Azure Firewall instances. You can
span different Azure regions and subscriptions in hub and spoke architectures for traffic
governance and protection.
If you manage multiple firewalls, you know that continuously changing firewall rules make
it difficult to keep them in sync. Central IT teams need a way to define base firewall policies
and enforce them across multiple business units. At the same time, DevOps teams want
to create their own local derived firewall policies that are implemented across
organizations. Azure Firewall Manager can help solve these problems.
Firewall Manager can provide security management for two network architecture types:
• Secured Virtual Hub - This is the name given to any Azure Virtual WAN Hub when
security and routing policies have been associated with it. An Azure Virtual WAN
Hub is a Microsoft-managed resource that lets you easily create hub and spoke
architectures.
• Hub Virtual Network - This is the name given to any standard Azure virtual
network when security policies are associated with it. A standard Azure virtual
network is a resource that you create and manage yourself. At this time, only Azure
Firewall Policy is supported. You can peer spoke virtual networks that contain your
workload servers and services. You can also manage firewalls in standalone virtual
networks that are not peered to any spoke.
You can create Firewall Policy and associations with Azure Firewall Manager. However, you
can also create and manage a policy using REST API, templates, Azure PowerShell, and
the Azure CLI. Once you create a policy, you can associate it with a firewall in a virtual
WAN hub making it a Secured Virtual Hub and/or associate it with a firewall in a standard
Azure virtual network making it a Hub Virtual Network.
Deploying Azure Firewall Manager for Hub
Virtual Networks
The recommended process to deploy Azure Firewall Manager for Hub Virtual Networks is
as follows:
1. Create a firewall policy. Either create a new policy, derive a base policy and
customize a local policy, or import rules from an existing Azure Firewall. Ensure
you remove NAT rules from policies that should be applied across multiple firewalls.
2. Create your hub and spoke architecture. Do this either by creating a Hub Virtual
Network using Azure Firewall Manager and peering spoke virtual networks to it
using virtual network peering, or by creating a virtual network and adding virtual
network connections and peering spoke virtual networks to it using virtual network
peering.
3. Select security providers and associate the firewall policy. (At the time of writing,
only Azure Firewall is a supported provider.) This can be done while creating a Hub
Virtual Network, or by converting an existing virtual network to a Hub Virtual
Network. It is also possible to convert multiple virtual networks.
4. Configure User Defined Routes to route traffic to your Hub Virtual Network
firewall.
You cannot have more than one hub per virtual WAN per region; however, you can add
multiple virtual WANs in the same region if you need more than one hub there.
Your hub VNet connections must be in the same region as the hub.
Implement a Web Application
Firewall on Azure Front Door
Web Application Firewall (WAF) provides centralized protection of your web applications
from common exploits and vulnerabilities. Web applications are increasingly targeted by
malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-
site scripting are among the most common attacks.
A WAF solution can react to a security threat faster by centrally patching a known
vulnerability, instead of securing each individual web application.
Web Application Firewall policy modes
When you create a Web Application Firewall (WAF) policy, the policy is in Detection
mode by default. In Detection mode, WAF does not block any requests; instead, requests
matching the WAF rules are recorded in the WAF logs. To see WAF in action, you can change
the mode setting from Detection to Prevention. In Prevention mode, requests that match
rules defined in the Default Rule Set (DRS) are blocked and recorded in the WAF logs.
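The difference between the two modes can be modeled in a few lines. This is a sketch of the documented behavior (log only versus block and log), not the WAF implementation; the function is hypothetical.

```python
# Sketch: how WAF policy mode changes the outcome of a rule match.
def handle_request(mode, matches_rule):
    log, blocked = [], False
    if matches_rule:
        log.append("rule match logged")       # both modes log matches
        if mode == "Prevention":
            blocked = True                    # only Prevention mode blocks
    return blocked, log

print(handle_request("Detection", matches_rule=True))   # (False, ['rule match logged'])
print(handle_request("Prevention", matches_rule=True))  # (True, ['rule match logged'])
```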
Managed rules
Azure-managed Default Rule Set includes rules against the following threat categories:
• Cross-site scripting
• Java attacks
• Local file inclusion
• PHP injection attacks
• Remote command execution
• Remote file inclusion
• Session fixation
• SQL injection protection
• Protocol attackers
The Azure-managed Default Rule Set is enabled by default. The current default version is
DefaultRuleSet_1.0. From WAF Managed rules > Assign, the newer
Microsoft_DefaultRuleSet_1.1 ruleset is available in the drop-down list.
To disable an individual rule, select the checkbox in front of the rule number, and select
Disable at the top of the page. To change action types for individual rules within the rule
set, select the checkbox in front of the rule number, and then select Change action at the
top of the page.
Custom rules
Azure WAF with Front Door allows you to control access to your web applications based
on the conditions you define. A custom WAF rule consists of a priority number, rule type,
match conditions, and an action. There are two types of custom rules: match rules and
rate limit rules. A match rule controls access based on a set of matching conditions while
a rate limit rule controls access based on matching conditions and the rates of incoming
requests. You may disable a custom rule to prevent it from being evaluated, but still keep
the configuration.
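A rate limit rule, as described above, combines match conditions with a request count per source. The sketch below models that idea; the class, field names, and threshold are illustrative assumptions, not the WAF custom-rule schema.

```python
# Sketch of a rate limit custom rule: allow matching requests from a client
# until a per-window threshold is exceeded, then apply the rule's action.
from collections import Counter

class RateLimitRule:
    def __init__(self, priority, threshold, action="Block"):
        self.priority = priority
        self.threshold = threshold   # allowed matching requests per window
        self.action = action
        self.counts = Counter()      # requests seen per client in the window

    def evaluate(self, client_ip, condition_matched):
        if not condition_matched:
            return "Allow"           # a match rule would stop here
        self.counts[client_ip] += 1
        if self.counts[client_ip] > self.threshold:
            return self.action
        return "Allow"

rule = RateLimitRule(priority=1, threshold=2)
print([rule.evaluate("203.0.113.7", True) for _ in range(4)])
# ['Allow', 'Allow', 'Block', 'Block']
```

A plain match rule is the degenerate case: it applies its action on every match, with no counting.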
Design and implement private
access to Azure Services
In this unit, you'll look at how to use virtual network service endpoints for securing
supported Azure services.
Service endpoints can connect certain PaaS services directly to your private address space
in Azure, so they act like they’re on the same virtual network. Use your private address
space to access the PaaS services directly. Adding service endpoints doesn't remove the
public endpoint. It simply provides a redirection of traffic.
When you enable a Service Endpoint, you restrict the flow of traffic, and enable your Azure
VMs to access the service directly from your private address space. Devices cannot access
the service from a public network. On a deployed VM vNIC, if you look at Effective routes,
you'll notice the Service Endpoint as the Next Hop Type.
Create Service Endpoints
As the network engineer, you're planning to move sensitive engineering diagram files into
Azure Storage. The files must only be accessible from computers inside the corporate
network. You want to create a virtual network Service Endpoint for Azure Storage to secure
the connectivity to your storage accounts.
You can use service tags to define network access controls on network security groups or
Azure Firewall. Use service tags in place of specific IP addresses when you create security
rules. By specifying the service tag name, such as ApiManagement, in the appropriate
source or destination field of a rule, you can allow or deny the traffic for the corresponding
service.
As of March 2021, you can also use Service Tags in place of explicit IP ranges in user
defined routes. This feature is currently in Public Preview.
You can use service tags to achieve network isolation and protect your Azure resources
from the general Internet while accessing Azure services that have public endpoints.
Create inbound/outbound network security group rules to deny traffic to/from Internet
and allow traffic to/from AzureCloud or other available service tags of specific Azure
services.
By default, service tags reflect the ranges for the entire cloud. Some service tags also allow
more granular control by restricting the corresponding IP ranges to a specified region.
For example, the service tag Storage represents Azure Storage for the entire cloud, but
Storage.WestUS narrows the range to only the storage IP address ranges from the
WestUS region. The following table indicates whether each service tag supports such
regional scope.
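The naming convention for regional scope is simple enough to parse mechanically. The helper below is an illustration of the tag format, not an Azure SDK function.

```python
# Split a service tag into its service name and optional regional scope,
# e.g. "Storage" (entire cloud) vs "Storage.WestUS" (one region).
def parse_service_tag(tag):
    service, _, region = tag.partition(".")
    return service, region or None

print(parse_service_tag("Storage"))         # ('Storage', None)
print(parse_service_tag("Storage.WestUS"))  # ('Storage', 'WestUS')
print(parse_service_tag("Sql.EastUS"))      # ('Sql', 'EastUS')
```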
Service tags of Azure services denote the address prefixes from the specific cloud being
used. For example, the underlying IP ranges that correspond to the SQL tag value on the
Azure Public cloud will be different from the underlying ranges on the Azure China cloud.
If you implement a virtual network Service Endpoint for a service, such as Azure Storage
or Azure SQL Database, Azure adds a route to a virtual network subnet for the service.
The address prefixes in the route are the same address prefixes, or CIDR ranges, as those
of the corresponding service tag.
Define Private Link Service and
private endpoint
Before you learn about Azure Private Link and its features and benefits, let's examine the
problem that Private Link is designed to solve.
Contoso has an Azure virtual network, and you want to connect to a PaaS resource such
as an Azure SQL database. When you create such resources, you normally specify a public
endpoint as the connectivity method.
Having a public endpoint means that the resource is assigned a public IP address. So,
even though both your virtual network and the Azure SQL database are located within
the Azure cloud, the connection between them takes place over the internet.
The concern here is that your Azure SQL database is exposed to the internet via its public
IP address. That exposure creates multiple security risks. The same security risks are
present when an Azure resource is accessed via a public IP address from the following
locations:
• A peered Azure virtual network
• An on-premises network that connects to Azure using ExpressRoute and Microsoft
peering
• A customer's Azure virtual network that connects to an Azure service offered by
your company
Private Link is designed to eliminate these security risks by removing the public part of
the connection.
Private Link provides secure access to Azure services. Private Link achieves that security
by replacing a resource's public endpoint with a private network interface. There are three
key points to consider with this new architecture:
• The Azure resource becomes, in a sense, a part of your virtual network.
• The connection to the resource now uses the Microsoft Azure backbone network
instead of the public internet.
• You can configure the Azure resource to no longer expose its public IP address,
which eliminates that potential security risk.
Private Link provides secure access to Azure services. Private Link achieves that security
by replacing a resource's public endpoint with a private network interface. Private
Endpoint uses a private IP address from the VNet to bring the service into the VNet.
How is Azure Private Endpoint different
from a service endpoint?
Private Endpoints grant network access to specific resources behind a given service,
providing granular segmentation. Traffic can reach the service resource from on-premises
networks without using public endpoints.
Can you offer Private Link access to your own services? Yes, by using Azure Private Link
Service. This service lets you offer Private Link connections to your custom Azure services.
Consumers of your custom services can then access those services privately, that is,
without using the internet, from their own Azure virtual networks.
Azure Private Link service is the reference to your own service that is powered by Azure
Private Link. Your service that is running behind Azure standard load balancer can be
enabled for Private Link access so that consumers to your service can access it privately
from their own VNets. Your customers can create a private endpoint inside their VNet and
map it to this service. A Private Link service receives connections from multiple private
endpoints. A private endpoint connects to one Private Link service.
Private Endpoint properties
Before creating a Private Endpoint, you should consider the Private Endpoint properties
and collect data about specific needs to be addressed. These include:
• A unique name within a resource group
• A subnet to deploy and allocate private IP addresses from a virtual network
• The Private Link resource to connect using resource ID or alias, from the list of
available types. A unique network identifier will be generated for all traffic sent to
this resource.
• The subresource to connect. Each Private Link resource type has different options
to select based on preference.
• An automatic or manual connection approval method. Based on Azure role-based
access control (Azure RBAC) permissions, your Private Endpoint can be approved
automatically. If you try to connect to a Private Link resource without Azure RBAC,
use the manual method to allow the owner of the resource to approve the
connection.
• A specific request message for requested connections to be approved manually.
This message can be used to identify a specific request.
• Connection status: a read-only property that specifies whether the Private Endpoint is
active. Only Private Endpoints in an approved state can be used to send traffic.
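The approval flow in the last bullets can be sketched as a small state model: auto-approval when the caller has sufficient Azure RBAC on the resource, otherwise a pending connection that the resource owner must approve. The class and state names below are illustrative, not the Azure resource model.

```python
# Sketch of private endpoint connection approval and the traffic rule:
# only an Approved connection can carry traffic.
class PrivateEndpointConnection:
    def __init__(self, has_rbac_on_resource, request_message=""):
        self.request_message = request_message
        # Sufficient RBAC -> automatic approval; otherwise wait for the owner.
        self.status = "Approved" if has_rbac_on_resource else "Pending"

    def approve(self):
        self.status = "Approved"

    def can_send_traffic(self):
        return self.status == "Approved"

auto = PrivateEndpointConnection(has_rbac_on_resource=True)
manual = PrivateEndpointConnection(False, "Please approve: analytics team")
print(auto.can_send_traffic(), manual.can_send_traffic())  # True False
manual.approve()
print(manual.can_send_traffic())                           # True
```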
Significance of IP address
168.63.129.16
IP address 168.63.129.16 is a virtual public IP address that is used to facilitate a
communication channel to Azure platform resources. Customers can define any address
space for their private virtual network in Azure. The Azure platform resources must be
presented as a unique public IP address. This virtual public IP address facilitates the
following things:
• Enables the VM Agent to communicate with the Azure platform to signal that it is
in a "Ready" state
• Enables communication with the DNS virtual server to provide filtered name
resolution to the resources (such as VM) that do not have a custom DNS server.
This filtering makes sure that customers can resolve only the hostnames of their
resources
• Enables health probes from Azure load balancer to determine the health state of
VMs
• Enables the VM to obtain a dynamic IP address from the DHCP service in Azure
• Enables Guest Agent heartbeat messages for the PaaS role
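One reason a single well-known address works here is that 168.63.129.16 sits outside the RFC 1918 private ranges customers typically use for their virtual networks, so it cannot collide with that address space. The standard-library check below illustrates this.

```python
# Confirm that the Azure platform IP is not in any RFC 1918 private range.
import ipaddress

azure_platform_ip = ipaddress.ip_address("168.63.129.16")
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

print(azure_platform_ip.is_private)                  # False
print(any(azure_platform_ip in n for n in rfc1918))  # False
```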
Your applications don't need to change the connection URL. When resolving to a public
DNS service, the DNS server will resolve to your Private Endpoints. The process doesn't
affect your existing applications.
Private networks already using the private DNS zone for a given type can connect to
public resources only if they have no Private Endpoint connections; otherwise, a
corresponding DNS configuration is required on the private DNS zone in order to
complete the DNS resolution sequence.
For Azure services, use the recommended zone names found in the documentation.
Based on your preferences, the following scenarios are available with DNS resolution
integrated:
• Virtual network workloads without custom DNS server
• On-premises workloads using a DNS forwarder
• Virtual network and on-premises workloads using a DNS forwarder
• Private DNS zone group
The following scenario is for an on-premises network that has a DNS forwarder in Azure.
This forwarder resolves DNS queries via a server-level forwarder to the Azure provided
DNS 168.63.129.16.
This scenario uses the Azure SQL Database-recommended private DNS zone. For other
services, you can adjust the model using the following reference: Azure services DNS zone
configuration.
The following diagram illustrates the DNS resolution sequence from an on-premises
network. The configuration uses a DNS forwarder deployed in Azure. The resolution is
made by a private DNS zone linked to a virtual network:
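The forwarder decision in this scenario can be sketched as follows: the on-premises DNS server conditionally forwards queries for the privatelink zone to the forwarder in Azure, which resolves them via the Azure-provided DNS at 168.63.129.16; everything else goes to a public resolver. The zone and host names are illustrative.

```python
# Sketch: where an on-premises DNS server sends a query next in the
# DNS-forwarder scenario for Azure SQL Database private endpoints.
AZURE_PROVIDED_DNS = "168.63.129.16"
FORWARDED_ZONES = {"privatelink.database.windows.net"}

def next_resolver(qname):
    if any(qname == z or qname.endswith("." + z) for z in FORWARDED_ZONES):
        return "azure-forwarder -> " + AZURE_PROVIDED_DNS
    return "public-resolver"

print(next_resolver("contoso.privatelink.database.windows.net"))
# azure-forwarder -> 168.63.129.16
print(next_resolver("www.microsoft.com"))
# public-resolver
```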
Virtual network and on-premises
workloads using Azure DNS Private
Resolver
When you use DNS Private Resolver, you don't need a DNS forwarder VM, and Azure DNS
is able to resolve on-premises domain names.
The following diagram uses DNS Private Resolver in a hub-spoke network topology. As a
best practice, the Azure landing zone design pattern recommends using this type of
topology. A hybrid network connection is established by using Azure ExpressRoute and
Azure Firewall. This setup provides a secure hybrid network. DNS Private Resolver is
deployed in the hub network.
Design and implement network
monitoring
Just a few examples of what you can do with Azure Monitor include:
• Detect and diagnose issues across applications and dependencies with Application
Insights.
• Correlate infrastructure issues with VM insights and Container insights.
• Drill into your monitoring data with Log Analytics for troubleshooting and deep
diagnostics.
• Support operations at scale with smart alerts and automated actions.
• Create visualizations with Azure dashboards and workbooks.
• Collect data from monitored resources using Azure Monitor Metrics.
The diagram below offers a high-level view of Azure Monitor. At the center of the diagram
are the data stores for metrics and logs, which are the two fundamental types of data
used by Azure Monitor. On the left are the sources of monitoring data that populate these
data stores. On the right are the different functions that Azure Monitor performs with this
collected data. This includes such actions as analysis, alerting, and streaming to external
systems.
Network Topology: The topology capability enables you to generate a visual diagram of
the resources in a virtual network, and the relationships between the resources.
Verify IP Flow: Quickly diagnose connectivity issues from or to the internet and from or
to the on-premises environment. For example, confirming if a security rule is blocking
ingress or egress traffic to or from a virtual machine. IP flow verify is ideal for making sure
security rules are being correctly applied. When used for troubleshooting, if IP flow verify
doesn’t show a problem, you will need to explore other areas such as firewall restrictions.
Next Hop: To determine if traffic is being directed to the intended destination by showing
the next hop. This will help determine if network routing is correctly configured. Next
hop also returns the route table associated with the next hop. If the route is defined as a
user-defined route, that route is returned. Otherwise, next hop returns System Route.
Depending on your situation the next hop could be Internet, Virtual Appliance, Virtual
Network Gateway, VNet Local, VNet Peering, or None. None lets you know that while
there may be a valid system route to the destination, there is no next hop to route the
traffic to the destination. When you create a virtual network, Azure creates several default
outbound routes for network traffic. The outbound traffic from all resources, such as VMs,
deployed in a virtual network, are routed based on Azure's default routes. You might
override Azure's default routes or create additional routes.
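The next-hop lookup described above follows longest-prefix matching: the route whose prefix most specifically covers the destination wins. The route table below is a made-up illustration; real results come from Network Watcher's next hop capability.

```python
# Sketch: pick the most specific matching route, report its next hop type.
import ipaddress

routes = [
    ("0.0.0.0/0", "Internet"),
    ("10.0.0.0/16", "VNet Local"),
    ("10.1.0.0/16", "VNet Peering"),
    ("10.0.1.0/24", "Virtual Appliance"),   # a user-defined route
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.0.1.5"))    # Virtual Appliance (the /24 UDR beats the /16)
print(next_hop("10.0.2.5"))    # VNet Local
print(next_hop("10.1.0.9"))    # VNet Peering
print(next_hop("8.8.8.8"))     # Internet
```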
Effective security rules: Network security groups are associated at a subnet level or at a
NIC level. When associated at a subnet level, an NSG applies to all the VM instances in that
subnet. The Effective security rules view returns all the configured NSGs and rules that are
associated at a NIC and subnet level for a virtual machine, providing insight into the
configuration. In addition, the effective security rules are returned for each of the NICs in
a VM. Using Effective security rules view, you can assess a VM for network vulnerabilities
such as open ports.
Packet Capture: Network Watcher variable packet capture allows you to create packet
capture sessions to track traffic to and from a virtual machine. Packet capture helps to
diagnose network anomalies both reactively and proactively. Other uses include gathering
network statistics, gaining information on network intrusions, and debugging client-server
communications.
NSG Flow Logs: NSG flow logs map IP traffic through a network security group. These
capabilities can be used in security compliance and auditing. You can define a prescriptive
set of security rules as a model for security governance in your organization. A periodic
compliance audit can be implemented in a programmatic way by comparing the
prescriptive rules with the effective rules for each of the VMs in your network.
NSG flow logs is a feature of Azure Network Watcher that allows you to log information
about IP traffic flowing through an NSG. The NSG flow log capability allows you to log
the source and destination IP address, port, protocol, and whether traffic was allowed or
denied by an NSG. You can analyze logs using a variety of tools, such as Power BI and the
Traffic Analytics feature in Azure Network Watcher.
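The fields listed above (source and destination IP, port, protocol, and allow/deny decision) appear in each record as a comma-separated flow tuple; in the version 1 tuple layout the protocol is T or U, the direction I or O, and the decision A or D. The parser below is a sketch against that layout, with a made-up sample tuple.

```python
# Parse one NSG flow log v1 flow tuple into named fields.
def parse_flow_tuple(t):
    ts, src, dst, sport, dport, proto, direction, decision = t.split(",")
    return {
        "time": int(ts),
        "src": src, "dst": dst,
        "src_port": int(sport), "dst_port": int(dport),
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "direction": {"I": "Inbound", "O": "Outbound"}[direction],
        "allowed": decision == "A",
    }

flow = parse_flow_tuple("1542110377,10.0.0.4,203.0.113.7,35370,443,T,O,A")
print(flow["protocol"], flow["direction"], flow["allowed"])  # TCP Outbound True
```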
Connection Monitor combines the best of two features: the Network Watcher Connection
Monitor (Classic) feature and the Network Performance Monitor (NPM) Service
Connectivity Monitor, ExpressRoute Monitoring, and Performance Monitoring feature.
Traffic Analytics
Traffic Analytics is a cloud-based solution that provides visibility into user and application
activity in cloud networks. Traffic Analytics analyzes Network Watcher network security
group (NSG) flow logs to provide insights into traffic flow in your Azure cloud and provide
rich visualizations of data written to NSG flow logs.