Reference Material
V100R001C00
DC Technical Proposal
Issue 01
Date 2011-08-31
HUAWEI and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Contents
2 Service Requirements
2.1 Overview
2.2 Data Service
2.2.1 Overview
2.2.2 Network Requirements of the Data Service
2.3 Web Service
2.3.1 Overview
2.3.2 Network Requirements of the Web Service
2.4 Computing Service
2.4.1 Overview
2.4.2 Network Requirements of the Computing Service
- Reliability
  High reliability ensures successful operation of the DC. If the user experience of
  enterprise services (such as e-commerce or video services) deteriorates because of DC
  network faults, the enterprise's service expansion is hindered and users stop using the
  services, reducing profits. Reliability is therefore an important aspect of enterprise DC
  network design.
  Reliability is achieved through redundant links, key devices, and key service modules.
- Scalability
  Each layer of the DC uses devices with a high port density to prepare for DC expansion.
  Devices at the Internet, intranet, core, and aggregation layers adopt a modular design so
  that their capacities can be expanded flexibly as the DC network develops.
  Functional scalability enables the DC to support value-added services. The DC provides
  functions such as load balancing, dynamic content replication, and VLANs to support
  value-added service expansion.
- Manageability
  A manageable network is the prerequisite for successful operation of the DC. The DC
  provides:
  - Comprehensive, well-organized management information
  - Complete QoS functions
  - An integrated SLA management system
  - The capability to manage devices from different vendors
  - An independent background management platform for the DC and its users to manage
    their networks
- Security
  Security is a concern of DC users, especially e-commerce users, and a key factor during
  DC construction. DC security is ensured by security control over the physical space and
  the network. The DC provides an integrated security policy control system to ensure DC
  security.
[Figure 1-1: overall architecture of the Huawei DC solution — carrier links (Internet, VPN, WAN, MAN), DMZ/extranet areas, the active DC with a combined core layer (CSS/iStack, firewalls, UTM, load balancers), server areas in an expanded multi-layer design, backup control and IP storage areas with FC switches, and a disaster recovery center connected over SDH/WDM. Legend: core switch, aggregation switch, access switch, FC switch, low-level router, high-level router, load balancing device, firewall/IPS, server, storage device.]
As shown in Figure 1-1, to enhance the security, scalability, and maintainability of the
network, the Huawei DC solution is divided into the service network, management network,
and storage network.
- The service network consists of network access modules and server access modules.
- The management network consists of background management modules.
- The storage network consists of the storage system and the storage area network (SAN).
This technical proposal focuses on the service network and management network.
Network access modules include routers, switches, firewalls, load balancers, and a unified
threat management (UTM) system that integrates the firewall, intrusion detection/prevention
system (IDS/IPS), antivirus, URL filtering, and SSL VPN functions. These modules provide
the network with a high-quality infrastructure offering high density, availability, and security.
Server access modules are divided into different service areas based on the types and
characteristics of the services provided to the user. The service areas are separated from each
other logically or physically.
[Figure: DC service requirements — cloud computing, multi-tenancy, reliability, virtualization, disaster recovery, and resource management.]
2 Service Requirements
2.1 Overview
A DC deploys various service systems in a centralized mode to integrate them. This helps to
analyze services, make decisions, and maximize the information production capability.
A DC also provides Web portals, which help to establish channels with customers and
improve the enterprise's brand awareness, product promotion, and customer service. With the
Web portals, the enterprise can implement e-commerce and other Internet-based businesses.
In addition, a DC provides high-performance computing services, such as 3D rendering,
medicine research, gene analysis, and Web search.
In an enterprise, a DC may provide all the preceding services concurrently. These services
may be independent of each other or be integrated into a large service system. You must
analyze the real situation when planning a network for the DC.
- Security requirement
  In an enterprise, key services such as the financial service are transmitted as a data
  service and require high security. In addition to physical security measures, protection
  measures are also required on the network, including isolating different services and
  identifying and handling abnormal traffic and virus attacks. Services are isolated so that
  terminals can access only the servers of specified services.
- Reliability requirement
  Data reliability is required, and the level required varies with the service type (internal
  or external) on the network.
  The internal service system does not require high network reliability. A fault in part of a
  DC must be rectified within 20 to 30 minutes, and a fault affecting the entire DC must be
  rectified within 4 to 8 hours, during which services are provided by the standby DC.
  The external service system requires high network reliability. A fault in part of a DC
  must recover automatically or be manually rectified within 10 minutes, while a fault
  affecting the entire DC must be rectified within 2 hours, during which services are
  provided by the disaster recovery center.
  The disaster recovery system is classified into the following tiers (Tier 0 to Tier 7)
  according to the international SHARE 78 standard:
  - Tier 0: No off-site data
  - Tier 1: Data backup with no hot site
  - Tier 2: Data backup with a hot site
  - Tier 3: Electronic vaulting
  - Tier 4: Point-in-time copies
  - Tier 5: Transaction integrity
  - Tier 6: Zero or near-zero data loss
  - Tier 7: Highly automated, business-integrated solution
  Two technical indicators are used to measure disaster recovery:
  - Recovery point objective (RPO): the maximum acceptable amount of data loss
  - Recovery time objective (RTO): the maximum acceptable duration for which services
    are interrupted, that is, the longest tolerable interval between the time a disaster occurs
    and the time services are restored
  RPO measures data loss, while RTO measures service loss. RPO and RTO are not
  necessarily related. Both vary with the service and the enterprise, and are derived from
  service requirements after risk analysis and service impact analysis are performed.
  The shorter the RTO and the more recent the RPO, the smaller the service loss; the costs
  of developing and building the system, however, become higher. Both factors must be
  weighed against each other.
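  Schematically, with t denoting wall-clock times (symbols introduced here for illustration
  only, not in the original):

```latex
t_{\text{disaster}} - t_{\text{last recoverable copy}} \le \text{RPO}, \qquad
t_{\text{services restored}} - t_{\text{disaster}} \le \text{RTO}
```

  For example, RPO = 15 minutes and RTO = 2 hours would mean that at most 15 minutes of
  committed data may be lost and that services must be restored within 2 hours of the disaster.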
- Cloud-computing requirement
  In most cases, the service systems of the data service do not operate concurrently. To
  utilize server resources efficiently, deploy multiple virtual servers on one physical server
  to host different service systems; this is the simplest way to apply cloud computing.
  When deploying multiple virtual servers on a physical server, consider the bandwidth
  requirement of each service to prevent one service from occupying the bandwidth of the
  other services on the same server.
In short, the key network requirements of the data service are guaranteed bandwidth and security.
[Figure 2-3: three-layer Web service model — Web browser, Web server, App server, and DB server.]
As shown in Figure 2-3, the Web service model adds a Web server and an App server to form
a three-layer structure. Services are processed as follows:
a. The App server (App Server in Figure 2-3) processes service requests sent from the
   client in the Web browser over HTML/HTTP.
b. The DB server and storage system provide database services.
c. The Web server presents the results to users.
The three-layer structure enhances flexibility of the service system. You can modify the
service system on the Web server, application server, or DB server. Users only need to refresh
the web page on the Web browser to view the modification.
[Figure: layered versus flattened deployment of Web, App, and DB servers in iStack systems.]
- Bandwidth requirement
  In the layered deployment mode, bandwidth is planned for each layer. In the flattened
  deployment mode, traffic between servers is aggregated to one server and bandwidth is
  planned based on the total traffic volume. The traffic between clients and the DC is much
  smaller than the traffic within the DC.
- Delay requirement
  The Web service traffic traverses more servers and network devices than the data service
  traffic, so the Web service requires a shorter network delay. The Web service interaction
  process also differs from that of the data service: the Web server responds to the request
  from a client, the application server and DB server then process the request, and the result
  is finally displayed on web pages. Therefore, the delay of the Web server's response to
  client requests must be short.
- Security requirement
  In the Web service mode, the client and the DB server are isolated by the Web server and
  the application server, which enhances the security of the DB server and its data.
  However, traffic travels among the Web server, application server, and DB server hop by
  hop over network channels, which makes it vulnerable to hop-by-hop attacks.
  Web services, especially services for Internet users, face more threats because:
  - Attack sources are well organized and industrialized, and attacks may come from
    anywhere on the Internet.
  - The service system is more complex. Security holes may exist in the operating system,
    Web server, application server, and DB, and a hole in one system may cause the other
    systems to be compromised one by one.
  - When internal users access the Internet, their hosts may be intruded by unauthorized
    users and used to launch attacks.
- Reliability requirement
  In the three-layer structure, the Web service is processed jointly by servers at three layers
  and interactions between servers are more frequent, so higher network reliability is
  required. The overall fault recovery time must not be prolonged; the network reliability
  must therefore improve so that the availability of this serial system remains unchanged.
  Suppose the error rate of the link between a switch and a server is 1 h/1000 h. In Web
  service mode, a switch connects to the Web server, application server, and DB server
  over three links, so the error rate of the service is 1 - (1 - 1/1000)^3, or about
  3 h/1000 h. To keep the error rate of the entire service at 1 h/1000 h, the per-link error
  rate must be reduced to about 20 min/1000 h.
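  Written out with p denoting the per-link error rate, both figures follow from the
  series-system model:

```latex
p_{\text{service}} = 1 - (1 - p)^3 \approx 3p
\;\Rightarrow\;
1 - \left(1 - \tfrac{1}{1000}\right)^3 \approx \tfrac{3}{1000}
\quad (\text{about } 3\ \mathrm{h}/1000\ \mathrm{h});
\qquad
3p \le \tfrac{1}{1000}
\;\Rightarrow\;
p \le \tfrac{1}{3000}
\quad \left(\tfrac{1000\ \mathrm{h}}{3000} \approx 20\ \mathrm{min}/1000\ \mathrm{h}\right).
```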
[Figure: server cluster — one App server distributing computing tasks to many DB servers.]
The application server distributes the computing service to a large number of DB servers, and
the DB servers return the results to the application server. The network requirements include:
- Instantaneous traffic buffering capability
  The application server must have a scheduling mechanism to distribute services.
  Otherwise, the results sent from the DB servers arrive at the application server within a
  short period, and the burst traffic rate exceeds the bandwidth of the interface on the
  application server. If the network cannot buffer this traffic, packets are lost and the
  application server cannot process all the services, which leads to more frequent
  interactions between the application server and DB servers and prolongs the overall
  processing time. Therefore, the network must be capable of buffering packets to
  eliminate packet loss.
- Non-blocking network
  Different from the cluster model shown in Figure 2-6, the cluster model shown in Figure
  2-7 interconnects the services on all servers. In this service system, servers communicate
  with each other in point-to-point mode. Non-blocking forwarding allows any two servers
  to communicate with each other at full rate, so the forwarding capability is not limited by
  server location.
[Figure: non-blocking network — four core switches, each providing 192 x 10G server-facing links, with 32 x 10G uplinks to the MAN; servers attach over 1G/10G links.]
3 DC Network Design
Logical Architecture
Figure 3-1 shows the logical architecture of a DC.
[Figure 3-1: logical architecture — the enterprise intranet, enterprise extranet, Internet, and disaster recovery center network connect partner enterprises, enterprise sites, and Internet users to the DC, which comprises the core network, storage area, and backup area.]
- Interconnection areas:
  - The enterprise intranet interconnects the headquarters and branches through the
    enterprise campus network and the wide area network (WAN).
  - The enterprise extranet connects partner enterprise networks through the metropolitan
    area network (MAN) and WAN leased lines.
  - The Internet area allows public users, traveling staff, and office users without WAN
    access to reach the DC securely over the Internet.
- Disaster recovery center network area
  In this area, disaster recovery centers in the same city are interconnected by transmission
  devices, and disaster recovery centers in different cities are interconnected by WAN
  leased lines.
- OAM area
  The network, servers, application systems, and storage devices are managed in this area.
  The functions of the OAM area include fault management, system configuration, device
  performance management, and data security management.
[Figure: DC network areas — Internet users, the enterprise campus, enterprise branches, and partner enterprises reach the active DC through the interconnection areas (Internet, intranet, extranet, disaster recovery network), the DMZ/extranet, LLB, UTM, firewalls, load balancers, and the combined core layer (CSS/iStack); the DC contains the product service area, backup control area, and IP storage area with FC switches.]
Layer 3 Networking
Figure 3-3 shows the Layer 3 networking diagram. The core layer and the aggregation layer
are separated in this networking. Each aggregation area has security devices such as firewalls
deployed.
[Figure 3-3: Layer 3 networking — an egress layer, a core layer, and an aggregation layer (CSS) with firewalls deployed per aggregation area.]
Flattened Networking
Figure 3-4 shows the flattened networking diagram. In the flattened networking, devices in
the core area and the aggregation area are replaced by two large-capacity switches in a
combined core area. Security devices such as firewalls of large capacities are deployed in this
area.
Huawei recommends the flattened networking, which simplifies the network topology and
improves data transmission efficiency.
[Figure 3-4: flattened networking — interconnection-area firewalls and load balancers attached to a combined core area (CSS).]
[Figure 3-5: STP networking with triangular loops; dotted lines represent links blocked by STP.]
As shown in Figure 3-5, dotted lines represent links that are blocked by STP. This plan uses
the standard STP protocol to integrate devices from multiple vendors into a hybrid network.
The disadvantages of this plan are:
- Long convergence time
  The traditional STP technology makes the network converge slowly; it takes more than
  10 seconds to restore services after a fault occurs. RSTP speeds up convergence to some
  extent, but convergence still takes several seconds, and a service interruption of several
  seconds degrades user experience.
- Low link usage
  If the servers in a rack belong to the same VLAN, the bandwidth of one uplink cannot be
  used, so the bandwidth usage is only 50%. The Multiple Spanning Tree Protocol (MSTP)
  optimizes bandwidth usage per VLAN but cannot solve the problem completely.
- Complex configuration that is difficult to maintain and causes frequent network faults
  Every access or aggregation switch needs to run STP. As more access switches are added
  to the network, STP processing becomes more complicated, which reduces network
  reliability.
Loop-free networking with cluster and stacking is used to overcome these disadvantages.
[Figure: loop-free networking — a combined core cluster (CSS) with firewalls and load balancers, connected to stacked access switches.]
The combined core layer uses two framed switches as a cluster. The access layer uses box
switches to form a stack system. Links between switches at the access layer and the combined
core layer form an Eth-Trunk.
The loop-free networking design has the following advantages:
- Simplified management and configuration
  The cluster and stacking networking reduces the number of managed nodes by more than
  half. It also simplifies the network topology and configuration because complex protocols
  such as STP, Smart Link, and VRRP are not needed.
- Fast convergence
  The convergence time is less than 10 ms after a fault occurs, which significantly reduces
  the impact of link and node faults on services.
- High bandwidth usage
  Links form a trunk, so the bandwidth usage reaches 100%.
- Easy capacity expansion, saving investment
  When new services are provided, the enterprise can add devices directly to upgrade the
  network. The network capacity can be expanded without changing the network
  configuration, protecting users' investments.
The loop-free networking improves the network reliability rate from 99.9% to 99.9999%: the
expected downtime on a single link falls from 1 hour to 3.6 seconds per 1000 hours.
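These downtime figures follow directly from the availability values, with downtime =
(1 - availability) x observation period:

```latex
(1 - 0.999) \times 1000\ \mathrm{h} = 1\ \mathrm{h}, \qquad
(1 - 0.999999) \times 1000\ \mathrm{h} = 0.001\ \mathrm{h} = 3.6\ \mathrm{s}
```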
Framed switches are provided in the core area to ensure network reliability in the following
ways:
- The MPUs work in backup mode.
- The power supplies work in backup mode.
- Fans adopt a modular design, so a single-fan failure does not affect system running.
- All modules are hot swappable.
- The CPU defense function is configured.
- Complete alarm functions are provided.
[Figure: server access options — middle- and low-level rack servers, blade servers without built-in switches, blade servers with built-in switches, and high-level servers with large switches, connected over GE/10GE links and stack cables to the combined core (CSS) and iStack access switches.]
[Figure 3-8: channels on a server — foreground service network, inband NMS, and HBA to the SAN network.]
Figure 3-8 shows the multiple channels on a server. A server has four types of ports, used to
access the following networks:
- Service network
- Network management and keyboard, video, mouse (KVM) network
- SAN network
- Backup and IP storage network
A server working with multiple channels has the following advantages:
- Improved I/O capacity
- Safe separation of the traffic of different services
Figure 3-9 shows the logical networking architecture of multiple channels on a server.
[Figure 3-9: logical networking — a server's channels connect to the different networks across the DC backbone network.]
The server area is divided into four physically isolated networks: the service, management,
storage, and backup networks. The server accesses different networks using network interface
cards (NICs).
Figure 3-10 shows the physical network topology.
[Figure 3-10: physical network topology — the product service, testing service, office service, and management areas attach to the core switches; the service, management, backup/IP storage, and FC storage networks are kept separate; FC access and FC core switches connect the SAN storage, and stack cables join the access switches.]
[Figure 3-11: a server with active and standby NICs connected to stacked access-layer switches below the combined core layer cluster.]
The two NICs in active/standby mode share the same MAC address (MAC1 in Figure 3-11).
When the active NIC fails, the server switches traffic to the standby NIC and sends a
gratuitous ARP packet from it. Network devices must process gratuitous ARP packets
properly to switch the traffic to the new path.
Figure 3-12 shows the change of the data transmission route. Data is transmitted in the green
route using the active NIC. If the active NIC fails, the data transmission route is changed from
the green one to the purple one.
Figure 3-12 Change of the data transmission route using active and standby NICs
When the access switch receives the gratuitous ARP packet, it changes the outbound interface
for MAC1 to the link connected to the standby NIC. Add the two ports of the active and
standby NICs to the same VLAN and bundle the links so that the outbound interface is
updated when a switchover occurs, as in the sketch below.
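For illustration, a minimal sketch of such a gratuitous ARP announcement, written with the
scapy library; the IP address, MAC address, and interface name are hypothetical, and this is
not part of the Huawei solution itself:

```python
# Hypothetical sketch of the gratuitous ARP a server sends after a NIC
# switchover (requires root privileges to send raw frames).
from scapy.all import ARP, Ether, sendp

SERVER_IP = "10.0.2.100"          # assumed service address kept across NICs
SHARED_MAC = "00:11:22:33:44:55"  # MAC1, shared by the active and standby NICs
STANDBY_IFACE = "eth1"            # standby NIC that takes over

def send_gratuitous_arp():
    # A gratuitous ARP announces "SERVER_IP is at SHARED_MAC" to the broadcast
    # domain, so access switches relearn MAC1 on the standby NIC's port.
    frame = Ether(src=SHARED_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,                                  # ARP reply ("is-at")
        hwsrc=SHARED_MAC, psrc=SERVER_IP,
        hwdst="ff:ff:ff:ff:ff:ff", pdst=SERVER_IP)  # target IP = own IP
    sendp(frame, iface=STANDBY_IFACE, verbose=False)

if __name__ == "__main__":
    send_gratuitous_arp()
```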
[Figure: forwarding after a NIC switchover — the combined core layer (cluster) connects to stacked access-layer switches through trunk links.]
Switches at the combined core layer do not detect route changes at the access layer because
they are connected to access switches through trunk links. Therefore, data is still sent to the
access switch on the left, forwarded to the switch on the right through the stacking link, and
then forwarded to the server.
- IP storage area
  The IP storage area is separated from other areas. Devices deployed in this area compress
  the traffic transmitted over the Fibre Channel over IP (FCIP) channel and accelerate data
  transmission. Data is synchronized and saved across an IP/Multiprotocol Label Switching
  (MPLS) network. Virtualization speeds up data transmission between servers and storage
  devices.
- Management area
  Operators allocate and manage the storage network and storage resources in this area.
- Disaster backup in the same city
  The active DC and the disaster recovery center in the same city are connected through a
  dense wavelength division multiplexing (DWDM) network.
Use the following configuration to implement real-time or quasi-real-time data exchange
between the active DC and the backup DC:
- Use the carrier's MPLS VPN or a virtual leased line based on the virtual private LAN
  service (VPLS) to transmit data traffic between servers on the IP storage networks of the
  active DC and backup DC.
- Use bare optical fibers or a DWDM network to transmit data between the SAN storage
  networks of the active DC and backup DC. This implements quasi-real-time data
  transmission with high speed and short delay.
Virtualization increases data exchange between servers and storage devices, so switches must
access the NAS storage network through a 10G link.
[Figure: interconnection area — the Internet, enterprise intranet, extranet, and disaster recovery center connect to the active DC through the DMZ/extranet, LLB, UTM, firewalls, load balancers, and the combined core layer (CSS/iStack); the DMZ hosts DNS, email, Web, and App servers.]
The interconnection area is divided into the following connection areas based on access
modes and services:
- Intranet area: Intranet users access the DC through the WAN or the LAN.
- Internet area: External users access the DC through the Internet.
- Extranet area: Extranet users access the DC through the WAN or the LAN.
You can assign an isolated area to the VPN users in the Internet area.
[Figure 3-16: Internet area — Internet users reach the active DC through the LLB and UTM; the combined core layer (CSS) connects firewalls and load balancers; the DMZ hosts DNS, email, Web, and App servers as well as SSL VPN and IPSec VPN gateways.]
Figure 3-16 shows the Internet area devices, such as routers, link load balancers, and unified
threat management (UTM) devices. The UTM devices must provide firewall and intrusion
prevention system (IPS) functions.
- Link load balancers distribute requests across the egress links leased from the two
  carriers. No load balancer is required when there is only one egress link.
- The IPS detects malicious code, attack behavior, and distributed denial of service (DDoS)
  attacks hidden in the application data stream, and responds in real time.
- The firewall is deployed at the network layer to filter invalid traffic and protect intranet
  resources against attacks from the Internet.
The firewall and the IPS are important network devices located at the network egress. Their
location and functions require them to provide high reliability.
To ensure Internet area reliability, deploy devices in pairs: routers, link load balancers, and
UTM devices (including firewalls and the IPS).
The VPN access area must provide Internet Protocol Security (IPSec) VPN and Secure
Sockets Layer (SSL) VPN functions for secure access:
- The IPSec VPN provides the site-to-site access mode.
- The SSL VPN provides the client-to-site access mode.
The IPSec VPN gateway and SSL VPN gateway can be deployed independently or connected
to the network through the UTM devices.
[Figure: extranet area — partner enterprises access the active DC over the extranet; firewalls and load balancers attach to the combined core area (CSS/iStack) in front of the servers.]
[Figure: intranet area — corporate campus networks, buildings, small organizations, and home networks reach the active DC over the WAN, MAN, and the carriers' VPN/Internet links; the combined core layer (CSS) hosts firewalls and load balancers.]
This area uses dual-homed routing and redundant routes and devices.
Network connection reliability between the branches of an enterprise is ensured through
multiple backup egress links, route backup, and load balancing. QoS needs to be configured
on WAN links to guarantee link and service quality.
Independent access devices, deployed in redundant pairs, are required to ensure device
reliability.
The intranet is a safe area with low security risks, which are mainly caused by intranet users
who access or save data without authorization. Data access between enterprise branch
networks is restricted based on users' actual requirements.
[Figure 3-19: management area networking — devices connected over the Internet/MPLS to management interfaces and KVM switches.]
Figure 3-19 shows the networking in the management area. The management network
connects all devices by the management interfaces and the KVM switches, and provides
functions such as network management, data collection, and real-time surveillance.
Only administrators can access the management network, which connects to the inner DC
through isolation measures such as VPNs and firewalls. Administrators are granted rights to
access only specified network devices.
Network management: This module manages network devices such as switches, routers, and
firewalls, covering topology, configuration, asset, fault, performance, event, traffic, and
report management.
Two further management modules are provided:
- Traffic management: This module provides functions such as traffic monitoring, traffic
  threshold setting, protocol analysis, and Web access behavior audit. It works with the
  NetFlow analyzer to implement more refined and convenient traffic analysis.
- Application management: This module monitors websites and manages systems and
  upper-layer applications such as the database, mail server, Web server, application server,
  and operating system, and provides website surveillance.
VLANs are used to isolate services on the network. The VLAN technology confines network
faults within a VLAN, enhancing network robustness.
3.7.2 Principles
Observe the following principles when configuring VLANs:
- Differentiate service VLANs, management VLANs, and interconnection VLANs.
- Add interfaces to different VLANs based on service areas.
- Add interfaces of the same service to different VLANs based on service types (such as
  Web, application, and database).
- Allocate VLAN IDs consecutively within each area to use VLAN resources efficiently.
- Reserve some VLANs for future expansion.
3.7.3 Recommendation
Configure VLAN ranges based on the different areas, as shown in Figure 3-21 (a small
validation sketch follows the list):
- Core area: 100 to 199
- Server area: 200 to 999; reserved VLANs: 1000 to 1999
- Access network: 2000 to 2999
- Management network: 3000 to 3999
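A minimal sketch (not from the proposal) that encodes the recommended ranges, handy for
checking that a newly assigned VLAN ID falls into the range planned for its area:

```python
# Recommended VLAN plan from section 3.7.3; the helper itself is illustrative.
VLAN_PLAN = {
    "core":       range(100, 200),    # core area: 100 to 199
    "server":     range(200, 1000),   # server area: 200 to 999
    "reserved":   range(1000, 2000),  # reserved for server-area expansion
    "access":     range(2000, 3000),  # access network: 2000 to 2999
    "management": range(3000, 4000),  # management network: 3000 to 3999
}

def area_of(vlan_id: int) -> str:
    """Return the area a VLAN ID belongs to, or raise if it is unplanned."""
    for area, ids in VLAN_PLAN.items():
        if vlan_id in ids:
            return area
    raise ValueError(f"VLAN {vlan_id} is outside the planned ranges")

print(area_of(250))   # -> server
print(area_of(3100))  # -> management
```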
[Figure 3-21: VLAN allocation across the Internet, enterprise intranet, extranet, disaster backup center, core network, server areas, and storage area.]
3.8 IP Planning
A few devices in the Internet connection area use public IP addresses, while devices on the
intranet use private IP addresses. Intranet IP addresses are easy to manage because the private
address space is large; for example, 10.0.0.0 is a Class A network.
- Configure NAT mapping on the firewall to translate the virtual IP address of the slave
  DNS server into a public IP address that Internet users use to access the intranet.
- Alternatively, provide services for Internet users using the intelligent DNS on the link
  load balancers.
Providing DNS Services for Internet Users Using the Slave Server
[Figure 3-22: providing DNS services through the slave servers — the master DNS server (10.0.2.5) and slave servers DNS1 (172.16.0.5) and DNS2 (172.16.0.6) sit in the active DC behind the load balancer; the virtual DNS address 10.0.3.10 is NAT-translated on the firewall to a public address for Internet users, while intranet users and the carrier's DNS server reach the servers through the corporate campus network and the Internet.]
The blue dotted line in Figure 3-22 shows how the slave servers provide DNS services for
Internet users.
The slave servers DNS1 and DNS2 use virtual IP addresses on the load balancer to function as
master DNS servers for Internet users and as slave DNS servers for intranet users.
The master DNS, slave DNS1, and slave DNS2 servers are all deployed in the DMZ area.
DNS requests are handled as follows in this reliable design:
Intranet users send DNS requests to the master DNS server, which communicates with the
carrier's DNS server to resolve Internet domain names. If the master DNS server is faulty, the
slave DNS servers take over.
Internet users send DNS requests to the carrier's DNS server to resolve the enterprise domain
name, such as huawei.com; the carrier's DNS server relays further resolution, such as
www.huawei.com, to the enterprise DNS servers. These requests are evenly distributed
between the slave DNS1 and DNS2 servers. If the slave DNS1 server is faulty, all requests are
sent to the slave DNS2 server; if both slave servers are faulty, the master DNS server provides
services.
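The client-side effect of this design can be sketched as follows, using the dnspython library
and the addresses shown in Figure 3-22; the fallback order and the timeout value are
illustrative assumptions:

```python
# Resolve through the master DNS server first and fall back to the slaves.
import dns.resolver

MASTER = "10.0.2.5"                    # master DNS server
SLAVES = ["172.16.0.5", "172.16.0.6"]  # slave servers DNS1 and DNS2

def resolve(name: str) -> list[str]:
    for server in [MASTER] + SLAVES:   # try the master, then each slave
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2.0        # per-server timeout in seconds
        try:
            return [r.to_text() for r in resolver.resolve(name, "A")]
        except Exception:
            continue                   # server faulty: fall back to the next
    raise RuntimeError(f"no DNS server could resolve {name}")

print(resolve("www.huawei.com"))
```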
Providing Services for Internet Users Using the Intelligent DNS Server
Figure 3-23 shows how the intelligent DNS server is used to provide DNS services for
Internet users.
[Figure 3-23: providing DNS services through the intelligent DNS server — the link load balancer exposes the virtual DNS address 10.0.3.10; the master DNS server (10.0.2.5) and the slave servers (172.16.0.5 and 172.16.0.6) sit in the active DC behind the combined core layer (CSS).]
Internet users send requests (such as www.huawei.com) to the carrier's DNS server to query
Huawei's domain name. The carrier's DNS server identifies the zone (huawei.com) and sends
the request to the DNS server in the Huawei DC for resolution. The blue dotted line shows
this process.
The intelligent DNS server in the link load balancer receives the request and completes the
DNS resolution.
The intelligent DNS server recognizes the user's source and resolves the domain name to
different IP addresses: the DNS policy resolution returns the related Netcom IP address for a
China Netcom user and the related Telecom IP address for a China Telecom user.
Meanwhile, the intelligent DNS server monitors carrier link quality. If a carrier's link is
interrupted, the intelligent DNS server returns another carrier's IP address to ensure service
continuity.
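The policy logic can be sketched as follows; the carrier prefixes and answer addresses are
hypothetical placeholders, and real intelligent DNS implementations are considerably more
elaborate:

```python
# Resolve one name to different addresses depending on the client's carrier,
# falling back to the other carrier's address when a link is interrupted.
import ipaddress

CARRIER_PREFIXES = {                                   # hypothetical prefixes
    "netcom":  [ipaddress.ip_network("192.0.2.0/24")],
    "telecom": [ipaddress.ip_network("198.51.100.0/24")],
}
ANSWERS = {"netcom": "203.0.113.1", "telecom": "203.0.113.2"}
LINK_UP = {"netcom": True, "telecom": True}  # fed by link-quality monitoring

def resolve_for(client_ip: str) -> str:
    client = ipaddress.ip_address(client_ip)
    carrier = next((c for c, nets in CARRIER_PREFIXES.items()
                    if any(client in n for n in nets)), "telecom")
    if not LINK_UP[carrier]:                 # carrier link interrupted:
        carrier = next(c for c, up in LINK_UP.items() if up)  # use the other
    return ANSWERS[carrier]

print(resolve_for("192.0.2.10"))   # Netcom client -> Netcom-side address
```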
[Figure: simplified multi-layer design — the combined core layer switches act as Layer 3 routers running OSPF toward the Internet/WAN; access switches perform Layer 2 switching only; Web, App, and DB servers attach through iStack systems, alongside a non-Web-based application design area.]
- Simplified configuration
  Routes need to be configured only on the two combined core layer switches. Access
  switches perform only Layer 2 switching, simplifying the configuration. Users can use
  the automatic configuration functions of access switches to reduce the maintenance
  workload.
- Scalability
  You can easily increase the number of servers under a core/aggregation switch.
  A new service server can be deployed in any rack, and its IP address is contiguous with
  the IP addresses of the original service system.
  When the position of a server changes due to a service change, the operator does not
  need to reconfigure the server or the network, and the server can be used immediately
  after being installed in the new position. A large Layer 2 network is needed when
  next-generation virtual servers are used to move servers without interrupting services.
[Figure: expandable multi-layer design — carriers' Internet/VPN links, corporate campus networks, small organizations, and home networks connect over the WAN and MAN to the active DC and the disaster backup center; each site uses a combined core layer (CSS/iStack) with LLB, UTM, firewalls, and load balancers; the server areas combine the simplified multi-layer design and the non-Web-based application design, alongside the backup control and IP storage areas.]
As shown in Figure 3-26, the active DC has four paths to reach the branch DC. The priorities
of the four paths are as follows:
- Highest priority (normal access route): The active DC connects to the branch DC
  directly.
- Second highest priority (alternative route 1): The active DC reaches the branch DC
  through the backup DC.
- Third highest priority (alternative route 2): The active DC reaches the branch DC
  through the disaster recovery center.
- Lowest priority (alternative route 3): The active DC reaches the branch DC through the
  backup DC and then the disaster recovery center.
The priorities of the links are determined by the EBGP AS-Path and multi-exit discriminator
(MED) attributes.
[Figure 3-27: user classes A and B on the corporate campus network reach the DC network through separate VPNs (VPN A and VPN B) in router-imported mode.]
As shown in Figure 3-27, firewalls are used to precisely control the access rights of server
groups. The security policy is configured based on the rights table for user groups and server
groups. By default, the firewalls deny all access; users can access a server only after a security
policy is configured to permit the traffic.
Figure 3-29 Congestion on an outbound interface when multiple servers send data to one server
As shown in Figure 3-29, multiple servers send data to the yellow-colored server and
congestion occurs at the starred node. Packets are lost if the queues on the forwarding nodes
are insufficient.
To solve this problem, install large-capacity line cards on the EOR switch and the core switch
to buffer burst data and prevent packet loss.
Figure 3-30 Large-capacity line cards on the EOR switch and the core switch to prevent packet
loss
Desktop cloud connects clients (such as those in offices and reception areas) to the DC
through the intranet or the Internet. The DC adopts desktop virtualization technologies to
create dozens of virtual desktops on each physical server.
[Figure: desktop cloud DC areas — Shanghai red, yellow, and green areas and Shenzhen red and yellow areas, with AD, SPES, and DNS servers on the company network.]
The desktop cloud DC is divided into areas based on services, security requirements, and
scale limit in virtualization management.
Basic concepts related to the desktop cloud solution are as follows:
- Management servers: components such as the WI, DDC, AD, and license servers; these
  are distinct from the desktop virtual servers.
- Service block: the extension unit of the solution. Each service block supports 2000
  concurrent desktop users and contains management servers and up to four pools.
- Pool: consists of 1 to 20 servers and one set of storage equipment (one controller subrack
  and several expansion subracks). Each pool supports 400 to 500 concurrent desktop
  users.
[Figure 4-3: blade servers virtualized into management servers and desktop virtual servers, organized into pools (Pool 1 to Pool 4) in each service block.]
As shown in Figure 4-3, the blade servers serve as desktop servers. Blade servers are
virtualized to create management servers and desktop virtual servers.
[Figure: three network planes on a blade server — the service network (client access and enterprise application access), the maintenance network (desktop cloud internal management, monitoring, and control), and the storage network (IP SAN or FC storage access).]
By default, a blade server is configured with service, maintenance, and storage network
planes. Each plane is configured in 1+1 redundancy mode. These three network planes are
independent, enhancing the network stability and availability.
Figure 4-5 Networking for the desktop cloud DC and fit clients
Figure 4-5 shows a network configured with three service blocks, each supporting 2000
concurrent desktop users. The service, maintenance, and storage networks are designed
independently across the entire desktop cloud network. The service and maintenance
networks consist of two layers: the access layer and the combined core layer. The firewall
and load balancing devices are shared globally across the whole network.
The data reliability is guaranteed by the high reliability of the storage system. Therefore, no
standby system or network is needed.
In the desktop cloud application system, servers are deployed in the original DC instead of the
desktop cloud DC. The desktop cloud center, however, can be deployed as a large area of the
original DC.
The service network of the desktop cloud is similar to a campus network. In the service
network, desktops are deployed in the DC, with each server supporting 20 virtual desktops.
Therefore, box switches of the kind used in campus networks need to be deployed in the
desktop cloud DC.
For example, in an E6000 subrack used in the desktop cloud, 10 blade servers are deployed,
each supporting 20 to 23 concurrent desktops. That is, each E6000 subrack supports 200 to
230 concurrent desktops.
In addition, an E6000 subrack is configured with six NX910 modules, each providing 10 GE
electrical interfaces. That is, each E6000 subrack supports 200 to 230 concurrent desktops and
provides 60 GE electrical interfaces. Based on the bandwidth statistics shown in Table 4-1,
two upstream GE electrical interfaces can meet service requirements.
Therefore, S5700 switches are deployed at the access layer and combined core layer. Switches
at the access layer are connected to the combined core layer through GE interfaces.
The performance of the load balancer and firewall is calculated using the following formula:
Performance = Number of areas x Number of GE interfaces
Based on the preceding table, the bandwidth of each desktop user's access is 150 kbit/s. The
bandwidth for desktop access varies with service applications and user behaviors; generally,
200 kbit/s is sufficient for web page applications and enterprise intranet applications.
In the service network, the total bandwidth needed (desktop access plus application traffic) is:
350 kbit/s x Number of concurrent desktops
If the service network has 2500 concurrent desktops, the required bandwidth is:
350 kbit/s x 2500 = 875 Mbit/s
Therefore, two 1GE ports are sufficient for the service network.
Similarly, 1 Gbit/s throughput is sufficient for the SSL VPN gateway and load balancer.
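The sizing arithmetic, written out as a small helper (the figures are from the proposal; the
helper itself and the minimum-of-two-ports redundancy assumption are illustrative):

```python
import math

PER_DESKTOP_KBITS = 350   # 150 kbit/s desktop access + application traffic
GE_PORT_MBITS = 1000      # capacity of one GE port

def ge_ports_needed(concurrent_desktops: int) -> tuple[float, int]:
    total_mbits = PER_DESKTOP_KBITS * concurrent_desktops / 1000
    raw_ports = math.ceil(total_mbits / GE_PORT_MBITS)  # raw capacity only
    return total_mbits, max(raw_ports, 2)               # at least 2 for 1+1

total, ports = ge_ports_needed(2500)
print(f"{total:.0f} Mbit/s -> {ports} GE ports")  # 875 Mbit/s -> 2 GE ports
```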
Forcibly disabling risky services (such as the DHCP service), or enabling only the necessary
services
Two solutions are available for deploying policy enforcement points (PEPs) on network
devices:
- 802.1x authentication can be enabled on switches at the access layer. This isolates users
  who do not meet the security policy requirements, preventing security threats on the
  network.
Figure 4-7 Switches with 802.1x authentication enabled at the access layer
[The figure shows end users authenticating through 802.1x at the access switches, with an optional antivirus server deployed in the DMZ.]
- 802.1x authentication (based on MAC addresses) can be enabled on switches at the
  aggregation layer, with security functions (such as port isolation and private VLAN)
  enabled on switches at the access layer. This prevents mutual influence between the
  virtual switches configured on a single access switch.
Figure 4-8 Switches with 802.1x authentication enabled at the aggregation layer
[The figure shows 802.1x authentication at the aggregation layer combined with portal authentication, with an optional antivirus server deployed in the DMZ.]
[Figure: three centers in two areas — the active DC and the same-city backup center synchronize over FC SAN links, while the remote disaster recovery center connects over the WAN and an IP/MPLS network.]
As more services are deployed in the enterprise, the network architecture of three centers in
two areas can no longer meet the requirements of service development. An architecture of
multiple centers at different levels has emerged to replace it. If DCs of different levels are
established in each region, the load on the global DCs is lessened, WAN bandwidth is saved,
and the response time of regional services is shortened. In addition, a fault in one region does
not affect services in other regions.
Figure 5-2 shows the network architecture of multiple centers with different levels.
[Figure 5-2: multiple centers at different levels — a global active DC with a same-city disaster backup center and a remote disaster backup center, regional centers (such as the China regional center serving the provinces), and branches across countries and regions worldwide.]
Figure 5-3 Plan for active and standby paths connecting DCs
The four DCs are defined as four autonomous systems (ASs) that advertise routes to each
other using EBGP. As shown in Figure 5-3, the regional active DC has four paths to the global
active DC. The priorities of the four paths are as follows:
- Highest priority: active path. If the link is normal, the regional active DC is directly
  connected to the global active DC.
- Second highest priority: standby path 1. If the gateway or the outbound link of the
  regional active DC is faulty, the regional active DC is connected to the global active DC
  through the regional standby DC.
- Third highest priority: standby path 2. If the access device of the global active DC is
  faulty, the regional active DC is connected to the global active DC through the global
  disaster recovery center.
- Lowest priority: standby path 3. If the preceding faults occur concurrently, the regional
  active DC is connected to the global active DC through the regional standby DC and then
  the global disaster recovery center.
The priorities of the links are determined by the EBGP AS-Path and MED attributes.
Country/region branch
The active link of a country/region branch connects to the regional active DC and the standby
link to the regional standby DC. The regional active DC, regional standby DC, and
country/region branch are defined as different ASs by EBGP.
- Active path: Generally, the country/region branch is directly connected to the regional
  active DC through the active access link.
- Standby path 1: If the active access link is faulty, the country/region branch is connected
  to the regional active DC through the regional standby DC using the standby access link.
- Standby path 2: If the regional active DC is faulty, traffic is switched to standby path 2 at
  the application layer using the domain name system (DNS) mechanism.
[Figure 5-5: route advertisement — AS 3 receives route 10.1/16 with AS-Path "1" (active path), "2 1" (standby path 2), "4 1" (standby path 1), and "4 2 1" (standby path 3); production service and disaster recovery links connect the regional active and standby DCs.]
EBGP prefers the route with the shortest AS-Path. As shown in Figure 5-5, AS 3 receives
route 10.1/16 from AS 1, AS 2, and AS 4. The AS-Paths of these routes are "1", "2 1", "4 1",
and "4 2 1".
- The route advertised from AS 1 (active path) has the shortest AS-Path, so it has the
  highest priority and is selected.
- The route with AS-Path "4 2 1" (standby path 3) has the longest AS-Path, so it has the
  lowest priority.
- The routes with AS-Paths "2 1" (standby path 2) and "4 1" (standby path 1) have the
  same AS-Path length, so the BGP MED attribute is needed to distinguish their priorities.
  As shown in Figure 5-6, the MED value of route 10.1/16 advertised from AS 4 is 100,
  smaller than that of the route advertised from AS 2. Therefore, standby path 1 has a
  higher priority than standby path 2.
[Figure 5-6: AS 4 advertises 10.1/16 with MED 100, while AS 2 advertises it with MED 200.]
BGP has powerful routing control and selection capabilities. By controlling the BGP AS-Path
and MED attributes, you can effectively solve the route selection and link reliability problems
in multiple DCs.
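The selection logic described above can be modeled in a few lines. Note that real BGP
compares MED only between routes from the same neighboring AS; this sketch follows the
simplified comparison used in the text:

```python
# Prefer the shortest AS-Path, then the lowest MED as the tie-breaker.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    as_path: list[int]
    med: int = 0

routes = [
    Route("active path",    [1],       med=0),
    Route("standby path 2", [2, 1],    med=200),  # via disaster recovery
    Route("standby path 1", [4, 1],    med=100),  # via regional standby DC
    Route("standby path 3", [4, 2, 1], med=0),
]

ranked = sorted(routes, key=lambda r: (len(r.as_path), r.med))
print([r.name for r in ranked])
# -> ['active path', 'standby path 1', 'standby path 2', 'standby path 3']
```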
Based on the data and service features of each tier, Huawei classifies the disaster recovery
system into the following three levels:
- Tier 0 to 2: backup level.
- Tier 3 to 5: data-level disaster recovery. A remote data system is established to replicate
  the key application data of the local system in real time. When a disaster occurs, the
  remote data system takes over services of the local system to ensure service continuity.
- Tier 6 to 7: application-level disaster recovery. A backup application system of a higher
  level than the data disaster recovery system is established in a remote area. The backup
  application system and the local application system can back up each other and work
  together. When a disaster occurs, the remote application system takes over services of
  the local application system.
Figure 5-7 shows the service frameworks of the data-level and application-level disaster
recovery systems.
[Figure 5-7: service frameworks of data-level and application-level disaster recovery — application-level recovery switches processes between the service systems and application software at the two sites.]
Traditional tape backup is performed at fixed points in time. If the system is corrupted, data
written between the latest backup and the disaster is lost and cannot be recovered. In this
backup mode, the backup speed is slow and the process is not performed in real time, so it
cannot meet the requirements for recovering large amounts of data, database continuity, and
real-time performance.
The mainstream disaster recovery solution is real-time backup. Real-time data replication
copies updated data from the active DC to the standby DC through communication links,
ensuring synchronization between the active and standby DCs. If the active DC cannot work
properly, the standby DC takes over its services and maintains data integrity.
- Volume-based replication technology
  This technology functions at the volume manager layer, mirroring or replicating disk
  volumes to implement disaster recovery. It does not require identical storage devices in
  the production and disaster recovery centers, but it occupies system CPU resources and
  greatly affects system performance, so it has poor scalability and running performance.
  Because this technology is host-based, unexpected unauthorized access to the protected
  data may occur, affecting system stability and security.
  Commonly used volume replication software includes Symantec Veritas Volume
  Replicator.
- File system-based replication technology
  This technology replicates data files from the production center to the disaster recovery
  center to implement data recovery. It functions in file-based storage systems, such as file
  servers, NAS devices, or file virtualization combinations.
  The file-based replication technology is widely used for backing up data, for two
  reasons:
  - It is easy to deploy and supports standard protocols. In addition to its own replication
    functions, it can work with multiple driver technologies to provide more replication
    functions.
  - It provides enterprises with methods for using storage resources properly, sharing
    resources across media servers, and configuring storage capacity for media servers in
    a timely manner when the enterprises run a block-based storage system.
- Database-based replication technology
  This logical replication technology supports heterogeneous storage and operating system
  platforms. After analyzing the redo logs of the production database, it generates universal
  or private SQL statements and transmits them to the backup database for application.
  The replication process does not involve the lower-layer storage. Replication is
  performed across platforms at a high speed, but it occupies system resources, does not
  support some special data formats and data description language (DDL) statements, and
  cannot guarantee data consistency when random data is generated in the service system.
  Common products that provide this technology include Oracle Data Guard, Oracle
  Streams, Quest SharePlex for Oracle, DSG RealSync for Oracle, and IBM DB2 HADR.
- Application system-based replication technology
  The application system must support transaction distribution when this technology is
  used. It uses transaction middleware to back up online transactions concurrently in the
  production center and disaster recovery center, or to transmit updated data from the
  active DC to the standby DC, ensuring data consistency between the production center
  and disaster recovery center.
  This technology requires low bandwidth, but existing applications can implement it only
  after being modified.
- Synchronous mode
  A write operation succeeds only after both the local and remote volumes are updated.
  This mode provides the highest protection level, but application performance is affected
  by the delay caused by data transmission between the local and remote arrays.
- Asynchronous mode
  Local volumes can continue write operations even if the remote volumes are not yet
  updated; remote volumes are updated after a delay. This mode ensures high application
  performance, but data that has not been updated to the remote volumes is lost if a disaster
  occurs.
Figure 5-8 Network planning for disaster recovery in the same/different cities
[The figure shows FC SANs in City A and City B interconnected over DWDM/SDH within a city and over the IP/MPLS WAN between cities, using IP and FC links.]
Available transmission protocols include the Internet Fibre Channel Protocol (iFCP),
InfiniBand, and the Internet Small Computer System Interface (iSCSI). Huawei recommends
the asynchronous mode.
Figure 5-9 Automatic switchover and active/active load balancing implemented based on the
active/backup intelligent DNS/GSLB
The DNS service has a great impact on services in the DC, so disaster recovery for DNS
servers must be taken into consideration. In multiple DCs, it is recommended that you deploy
the slave DNS server in the active DC, and master DNS server in the standby DC. This
guarantees the proper operation of DNS services when the whole active DC fails.
Based on user experience and service characteristics, services have different requirements for
bandwidth and transmission delay, so the related DCs are deployed in different modes. For
example, office automation (OA) services such as Notes and email are sensitive to
transmission delay and require high bandwidth, so they are deployed in distributed mode,
which reduces the bandwidth needed on the leased lines of regional DCs and active DCs.
Centralized and distributed deployment modes are applicable to the services in Table 5-2.
With the global load balancing technology, distributed services serve the nearest enterprises,
DCs in multiple locations back up data for each other, and load is balanced among multiple
DCs.
- DCs provide services for the nearest enterprises.
- If a server fails or a network fault occurs in a DC, a remote DC takes over and provides
  services transparently, without users being aware of the switchover.
- The global load balancing technology provides the intelligent DNS function to distribute
  enterprises' services, ensuring load balance among servers in multiple locations.
With the global server load balancing (GSLB) technology, a redirection function can be
implemented. The redirection process is as follows (a sketch of step c follows the list):
a. A user sends a Hypertext Transfer Protocol/Real Time Streaming Protocol (HTTP/RTSP)
request.
b. GSLB servers communicate with each other to select a proper DC to provide services for
the user.
c. The GSLB server that is nearest to the user replies to the user with an HTTP/RTSP 302
redirection message which contains the virtual IP address of the selected Internet data
center (IDC).
d. User's HTTP/RTSP request is redirected to the virtual IP address of the selected IDC.
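A schematic sketch of step c (an illustration, not Huawei's GSLB implementation): answering
the user's HTTP request with a 302 redirect to the virtual IP address of the selected IDC. The
address and port are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def select_idc_vip(client_ip: str) -> str:
    # Placeholder for the GSLB decision (proximity, load, link health, ...).
    return "203.0.113.10"   # documentation-range address of the chosen IDC

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        vip = select_idc_vip(self.client_address[0])
        self.send_response(302)                        # HTTP 302 Found
        self.send_header("Location", f"http://{vip}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```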
6 DC Network Maintenance
Recommendations
Managing Topologies
As shown in Figure 6-1, the eSight topology view displays the navigation tree on the left and
the view on the right. The navigation tree displays the hierarchy of the network structure
while the view displays hierarchical objects in different coordinates so that users can learn
about the object deployment in a clear and direct way.
Monitoring NEs
The homepage of the NE manager displays, in tables, basic information about NE devices,
top-N alarms, interface traffic, bandwidth usage, CPU usage, and memory usage. Users can
decide whether to display each of these performance tables as required.
As shown in Table 6-1, the eSight provides abundant NE monitoring and management
functions for various devices.
Configuring NEs
The eSight configures a single NE in the following ways:
1. As shown in Figure 6-3, the eSight configures interfaces and routes using the simple
   configuration frame.
2. As shown in Figure 6-4, the eSight configures a single NE using the smart configuration
   tool.
3. As shown in Figure 6-5, the eSight configures switches, access routers, and security
   devices using the Web NMS.
During new deployments and network maintenance, users need to configure services for
centrally deployed devices in batches. In this case, as shown in Figure 6-6, it is recommended
that users use the smart configuration tool to configure services for multiple devices in
batches, which significantly improves operation and maintenance efficiency.
----End
Monitoring Services
As shown in Figure 6-7, the eSight monitors services in real time and collects traffic statistics
and other information based on the service type, which helps the maintenance personnel to
monitor services.
- Performance threshold
  A performance threshold is specified for the network. When a performance indicator
  crosses its threshold, an alarm is reported, prompting operators to take action before
  network performance deteriorates. (A simple sketch of such a check follows.)
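A simple sketch of the threshold check (illustrative, not eSight code):

```python
# Compare each collected sample against a configured threshold and report an
# alarm when the threshold is crossed.
from typing import Iterable

def check_threshold(samples: Iterable[float], threshold: float,
                    higher_is_worse: bool = True) -> list[str]:
    alarms = []
    for i, value in enumerate(samples):
        crossed = value > threshold if higher_is_worse else value < threshold
        if crossed:
            alarms.append(f"sample {i}: {value} crossed threshold {threshold}")
    return alarms

# e.g. CPU usage samples collected in one monitoring period, threshold 80%
print(check_threshold([42.0, 65.5, 83.2, 78.9], threshold=80.0))
```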
Performance data is collected during the performance monitoring process. As shown in
Figure 6-8, the performance data collected in a collection period indicates the network
performance in that period and provides a basis for predicting changes in network
performance.
Users can query the collected performance data, displayed in the GUI of the performance
monitoring view, to learn the network performance within a specified period and predict
changes in network performance.
Maintaining
As shown in Figure 6-11 and Figure 6-12, the eSight can manage configuration files to help
users quickly save files and log in to the device. In addition, the eSight provides a tool to
inspect devices periodically, lessening the workload of the maintenance personnel.
Configuring Manufacturers
As shown in Figure 6-13, the eSight can configure the name and contact information of a
manufacturer. The configured manufacturer information is used in the subsequent
configuration of device models.
Customizing Alarms
As shown in Figure 6-15, the eSight can customize reported alarms. The customized alarms
can be parsed and are displayed on the alarm management page.
Customizing Reports
The eSight can create customized reports by modifying the predefined report design files.
Upgrading Software
The eSight provides a function to upgrade software on devices remotely in batches. Figure 6-19 shows the upgrade procedure. If an upgrade fails, the eSight provides troubleshooting methods to restore the devices to normal status.
(Figure 6-19 flow: Start -> Configure FTP/TFTP/SFTP servers (optional) -> Upgrade -> If the upgrade is unsuccessful, troubleshoot the fault and retry -> End)
Loading Patches
The eSight provides a function to load patches remotely in batches. Figure 6-20 shows the patch loading flow. The eSight also provides a patch rollback function to restore an NE to its previous state.
(Figure 6-20 flow: Start -> Load patches -> Activate patches -> Confirm (optional) -> End)
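The load-activate-confirm flow with rollback can be sketched as follows; the functions are hypothetical stand-ins for eSight operations, shown only to make the control flow concrete:

# Sketch of the patch flow in Figure 6-20 (load -> activate -> confirm),
# with rollback to the previous state on failure. The functions are
# hypothetical stand-ins, not eSight interfaces.
def load_patch(ne: str, patch: str) -> None:
    print(f"{ne}: patch {patch} loaded")

def activate_patch(ne: str, patch: str) -> bool:
    print(f"{ne}: patch {patch} activated")
    return True                      # a real check would verify the NE status

def rollback(ne: str) -> None:
    print(f"{ne}: rolled back to the previous patch state")

def patch_ne(ne: str, patch: str, confirm: bool = True) -> None:
    load_patch(ne, patch)
    if not activate_patch(ne, patch):
        rollback(ne)                 # restore the NE to its previous state
        return
    if confirm:                      # optional confirmation step
        print(f"{ne}: patch {patch} confirmed")

patch_ne("switch-01", "V1R2SPH001")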
6.2 Troubleshooting
The DC network system consists of network devices, links between devices, and servers. Upper-layer applications cannot work properly if any one of these components is faulty. If the network system is faulty, you can locate the fault by checking the link status, device status, and server status, or by checking for virus attacks.
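A first-pass localization can be as simple as checking the reachability of each component. The sketch below assumes the Linux ping command and hypothetical device and server addresses; it reports which component is unreachable:

# First-pass fault localization (illustrative): ping each device and server
# to see which component is unreachable. Uses the Linux ping command; names
# and addresses are hypothetical.
import subprocess

TARGETS = {"core-switch": "192.0.2.1",
           "access-switch": "192.0.2.2",
           "web-server": "192.0.2.100"}

for name, addr in TARGETS.items():
    result = subprocess.run(["ping", "-c", "1", "-W", "1", addr],
                            capture_output=True)
    status = "reachable" if result.returncode == 0 else "DOWN or unreachable"
    print(f"{name} ({addr}): {status}")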
A Device Is Down
If a device is down, check the power cable and power supply in the equipment room first.
If the power cable is connected properly and the power supply is normal, call the device
vendor or service provider for help immediately. If the hardware is faulty, ask the device
vendor or service provider to replace parts as soon as possible.
An Alarm Is Reported
Send the alarm to the device vendor and service provider, and ask them to troubleshoot the
fault or replace parts.
Server Expansion
Server expansion is implemented either by adding servers to an existing area or by creating servers in a new area. The two approaches require different expansion policies.
The VLAN plan of the DC is as follows. Four external networks attach to the core network through dedicated access networks:
z Enterprise intranet, through the enterprise access network: VLAN 2000 to 2199
z Collaborative unit dedicated network, through the collaborative unit access network: VLAN 2200 to 2299
z Internet, through the Internet access network: VLAN 2300 to 2399
z Disaster recovery network, through the disaster backup center access network: VLAN 2400 to 2499
The internal areas attached to the core network use the following ranges:
z Management: VLAN 100 to 199
z Production area: VLAN 200 to 399
z Office area: VLAN 400 to 599
z Other areas: VLAN 600 to 799
z DMZ area: VLAN 800 to 999
z Storage area: VLAN 3000 to 3999
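When expanding servers in an existing area, new VLANs should come from that area's reserved range. The sketch below encodes the plan above and allocates the next free VLAN ID in an area; the allocation logic and the in-use set are illustrative assumptions:

# Illustrative VLAN allocation against the plan above: each area owns a
# fixed VLAN range, and expansion takes the next free ID in that range.
VLAN_PLAN = {
    "management":        (100, 199),
    "production":        (200, 399),
    "office":            (400, 599),
    "other":             (600, 799),
    "dmz":               (800, 999),
    "enterprise-access": (2000, 2199),
    "collab-access":     (2200, 2299),
    "internet-access":   (2300, 2399),
    "dr-access":         (2400, 2499),
    "storage":           (3000, 3999),
}
used = {"production": {200, 201, 202}}   # hypothetical VLANs already in use

def next_vlan(area: str) -> int:
    lo, hi = VLAN_PLAN[area]
    for vid in range(lo, hi + 1):
        if vid not in used.setdefault(area, set()):
            used[area].add(vid)
            return vid
    raise RuntimeError(f"VLAN range for {area} is exhausted")

print(next_vlan("production"))   # -> 203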
Device Expansion
Figure 6-22 shows the common network architecture of a DC. Many ring networks exist at the access layer and aggregation layer. Once you add servers, you need to deploy new access devices and connect them to the combined core layer, which makes the network more complex.
(Figure 6-22: ring-based DC network, with the Internet and WAN at the top, a combined aggregation/core layer, and access-layer racks connected over 10GE links running OSPF)
To avoid affecting services during expansion, Huawei recommends cluster and stacking technologies when planning the network architecture of a DC, as shown in Figure 6-23. Cluster and stacking technologies remove the need for a loop prevention protocol, simplify the network architecture, and facilitate device expansion.
Figure 6-23 Network architecture of a DC deployed in the cluster and stacking mode
(Figure 6-23: the Internet and WAN at the top, a clustered aggregation/core layer, and stacked access-layer racks connected over 10GE trunk links running OSPF)
After the network is planned in the cluster and stacking mode, the network changes from a ring topology to a tree topology, which is easier to maintain. To expand, you only need to add new devices to the stacking system. This smooth expansion has no impact on the network architecture and adds no physical links at the combined core layer.
7 Recommended Products
Model   LPUs   Switch fabric capacity   Backplane capacity   Forwarding performance
S9303   3      1440 Gbit/s              3 Tbit/s             540 Mpps
S9306   6      2 Tbit/s                 6 Tbit/s             1320 Mpps
S9312   12     2 Tbit/s                 12 Tbit/s            1320 Mpps
High Reliability
Huawei's carrier-class high reliability design ensures that the S9300 is 99.999% reliable,
which meets and exceeds carrier-class operation requirements. The S9300 provides redundant
backup for key components, including MPUs, power supply units, and fans, all of which are
hot swappable. Based on distributed hardware forwarding architecture, the routing plane is
separated from the switching plane to ensure service continuity.
The S9300 provides hardware-based Ethernet operation, administration, and maintenance (OAM) with a 3.3 ms detection interval, which can quickly detect and locate faults. By combining Ethernet OAM with switchover technologies, the S9300 can provide millisecond-level protection for networks.
The service traffic can be switched between active and standby components without rebooting
the equipment. The S9300 also supports the in-service software upgrade (ISSU), further
reducing service interruption.
The S9300 supports the link aggregation defined in IEEE 802.3ad, the IEEE 802.1s/w
standard, and Virtual Router Redundancy Protocol (VRRP). In addition, it supports various
millisecond switchover technologies, such as Rapid Ring Protection Protocol (RRPP), Smart
Link, IP fast reroute (FRR), traffic engineering (TE) FRR, and virtual private network (VPN)
FRR. These features improve the reliability of data transmission.
7.1.4 Specifications
The following table lists the specifications of the S9300 series switches.
IP routing
z IPv4 dynamic routing protocols, such as RIP, OSPF, IS-IS, and BGP
z IPv6 dynamic routing protocols, such as RIPng, OSPFv3, IS-ISv6, and BGP4+
Multicast
z IGMP snooping
z IGMP fast leave
z Multicast traffic control
z Multicast queries
z Suppression on multicast packets
z Multicast ACL
MPLS
z Basic MPLS functions
z MPLS OAM
z MPLS traffic engineering (TE)
z MPLS VPN, VLL, and VPLS
Clock
z Synchronous Ethernet clock
z IEEE 1588v2
QoS
z Traffic classification based on the Layer 2 protocol header, Layer 3 protocol, Layer 4 protocol, and 802.1p priority
z Actions such as ACL, CAR, remark, and schedule
z Queue scheduling algorithms such as PQ, WRR, DRR, PQ+WRR, and PQ+DRR
z Congestion avoidance mechanisms such as Weighted Random Early Detection (WRED) and tail drop
z Traffic shaping
Configuration and maintenance
z Terminal services such as Console, Telnet, and SSH
z Network management protocols such as SNMPv1/v2/v3
z Uploading and downloading of files using FTP and TFTP
z BootROM upgrade and remote online upgrade
z Hot patches
z User operation logs
Security and management
z 802.1x authentication and portal authentication
z RADIUS and HWTACACS authentication for login users
z Hierarchical protection for commands to prevent unauthorized users from accessing the device
z Protection against DoS attacks, TCP SYN flood attacks, UDP flood attacks, broadcast storms, and large-traffic attacks
z CPU channel protection
z Ping and traceroute
z RMON
The S6700 supports strict ARP learning to prevent ARP spoofing attackers from exhausting ARP entries, so that authorized users can access the Internet. The S6700 supports IP source check to prevent DoS attacks caused by MAC address spoofing, IP address spoofing, and MAC/IP spoofing. Unicast reverse path forwarding (URPF) on the S6700 checks the reverse path of each packet to verify its source address, protecting the network against the increasing number of source address spoofing attacks.
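Strict URPF reduces to a simple rule: a packet passes only if the best route back to its source address uses the interface the packet arrived on. The following toy sketch (not the S6700 implementation) shows the check against a hypothetical forwarding table:

# Toy illustration of strict unicast reverse path forwarding (URPF):
# a packet passes only if the best route back to its source address uses
# the interface the packet arrived on. Routing entries are hypothetical.
import ipaddress

# Simplified FIB: prefix -> outgoing interface.
FIB = {
    ipaddress.ip_network("10.1.0.0/16"): "GE1/0/1",
    ipaddress.ip_network("10.2.0.0/16"): "GE1/0/2",
}

def urpf_pass(src_ip: str, in_interface: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    # Longest-prefix match over the toy FIB.
    matches = [n for n in FIB if addr in n]
    if not matches:
        return False                               # no route back: drop
    best = max(matches, key=lambda n: n.prefixlen)
    return FIB[best] == in_interface               # strict-mode check

print(urpf_pass("10.1.5.5", "GE1/0/1"))   # True: path is symmetric
print(urpf_pass("10.1.5.5", "GE1/0/2"))   # False: spoofed or asymmetric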
The S6700 supports integrated MAC address authentication and 802.1x authentication.
User information, such as the user name, IP address, MAC address, VLAN ID, access
interface, and a flag indicating whether antivirus software is installed on the client, can be
bound statically or dynamically, and policies (VLAN, QoS, and ACL) can be delivered
dynamically.
The S6700 can limit the number of MAC addresses learned on an interface to prevent
attackers from exhausting MAC address entries by using bogus source MAC addresses. In
this way, MAC addresses of authorized users can be learned and flooding is prevented.
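The effect of the MAC learning limit can be shown with a small model: once an interface's table reaches the limit, new source addresses, including bogus ones, are no longer learned, so the entries of authorized users are preserved. The limit value and addresses below are illustrative:

# Toy model of per-interface MAC address learning with a limit: once the
# limit is reached, new (possibly bogus) source MACs are not learned, so
# entries of authorized users are preserved. Values are hypothetical.
MAC_LIMIT = 3
mac_table: dict[str, set[str]] = {}

def learn(interface: str, src_mac: str) -> bool:
    entries = mac_table.setdefault(interface, set())
    if src_mac in entries:
        return True                  # already learned
    if len(entries) >= MAC_LIMIT:
        return False                 # table full: do not learn
    entries.add(src_mac)
    return True

for mac in ["00:00:5e:00:00:01", "00:00:5e:00:00:02",
            "00:00:5e:00:00:03", "de:ad:be:ef:00:01"]:
    ok = learn("GE1/0/1", mac)
    print(mac, "learned" if ok else "rejected (limit reached)")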
High Reliability
The S6700 supports dual power supplies for backup and can use an AC power supply and a
DC power supply at the same time. Users can select a single power supply or dual power
supplies to improve device reliability. The switch provides two built-in fans to improve operating stability and has a long mean time between failures (MTBF).
Enhancing STP, RSTP, and MSTP, the S6700 supports MSTP multi-process, which greatly increases the number of sub-ring instances. It supports enhanced Ethernet technologies such
as Smart Link and RRPP to implement millisecond-level protective switchover, improving
network reliability. Smart Link and RRPP both support multi-instance to implement load
balancing among links, further improving bandwidth usage.
The S6700 supports enhanced trunk (E-Trunk). When a customer edge (CE) is dual-homed to a VPLS, VLL, or PWE3 network, an E-Trunk can be configured to protect the links between the CE and the provider edges (PEs) and to implement backup between PEs. E-Trunk implements link aggregation across devices, raising link reliability to the device level.
The S6700 supports the Smart Ethernet Protection (SEP) protocol, a ring network protocol
applied to the link layer of an Ethernet network. SEP is applicable to open ring networks and
can be deployed on upper-layer aggregation devices to provide millisecond-level switchover
without interrupting services. Huawei devices have implemented Ethernet link management
using SEP. SEP features simplicity, high reliability, high switchover performance, convenient
maintenance, and flexible topology and enables users to conveniently manage and plan
networks.
The S6700 supports VRRP to maintain communication continuity and reliability, ensuring a
stable network. Multiple equal-cost routes can be configured on the S6700 to implement route
redundancy. When the active uplink route is faulty, traffic is automatically switched to a
backup route. This feature implements multi-level backup for uplink routes.
High Extensibility
The S6700 supports long-distance intelligent stacking (iStack). An ordinary interface can be configured as a stack interface from the CLI, enabling flexible interface usage. Optical fibers can be used for stacking, greatly increasing the distance between stacked devices. Compared
with a single device, intelligent stacking features powerful extensibility, reliability, and
performance.
When customers need to expand the device or replace a single faulty device, they can add new
devices without interrupting services. Compared with chassis switches, the performance and
port density of intelligent stacking are not restricted by the hardware architecture. Multiple
stacked devices can be considered as a logical device, which simplifies the network
management and configuration.
The S6700 supports various IPv6 routing protocols including RIPng and OSPFv3. It uses the
IPv6 Neighbor Discovery Protocol (NDP) to manage packets exchanged between neighbors.
It also provides the Path MTU Discovery (PMTU) mechanism to select a proper MTU on the
path from the source to the destination, optimizing network resources and obtaining the
maximum throughput.
7.3.2 Appearance
The following table lists models of the S5700.
High Reliability
The S5700 supports dual power supplies for backup and can use an AC power supply and a
DC power supply at the same time. Users can select a single power supply or dual power
supplies to improve device reliability. The switch provides three built-in fans to improve
stability and has a long MTBF.
Enhancing STP, RSTP, and MSTP, the S5700 supports MSTP multi-process, which greatly increases the number of sub-ring instances. It supports enhanced Ethernet technologies such
as Smart Link and RRPP to implement millisecond-level protective switchover, improving
network reliability. Smart Link and RRPP both support multi-instance to implement load
balancing among links, further improving bandwidth usage.
The S5700 supports E-Trunk. When a CE is dual-homed to a VPLS, VLL, or PWE3 network, an E-Trunk can be configured to protect the links between the CE and the PEs and to implement backup between PEs. E-Trunk implements link aggregation across devices, raising link reliability to the device level.
The S5700 supports SEP, a ring network protocol applied to the link layer of an Ethernet
network. SEP is applicable to open ring networks and can be deployed on upper-layer
aggregation devices to provide millisecond-level switchover without interrupting services.
Huawei devices have implemented Ethernet link management using SEP. SEP features
simplicity, high reliability, high switchover performance, convenient maintenance, and
flexible topology and enables users to manage and plan networks conveniently.
The S5700 supports VRRP to maintain communication continuity and reliability, ensuring a
stable network. Multiple equal-cost routes can be configured on the S5700 to implement route
redundancy. When the active uplink route is faulty, traffic is automatically switched to a
backup route. This feature implements multi-level backup for uplink routes.
The S5700 supports integrated MAC address authentication and 802.1x authentication. User information, such as the user name, IP address, MAC address, VLAN ID, and access interface, can be bound statically or dynamically, and policies (VLAN, QoS, and ACL) can be delivered dynamically.
The S5700 can limit the number of MAC addresses learned on an interface to prevent
attackers from exhausting MAC address entries by using bogus source MAC addresses. In
this way, MAC addresses of authorized users can be learned and flooding is prevented.
The S5700 can implement complex traffic classification based on information such as the 5-tuple, IP precedence, ToS, DSCP, IP protocol type, ICMP type, TCP source port, VLAN ID, the protocol type of an Ethernet frame, and CoS. The S5700 supports inbound and outbound ACLs. The S5700 supports flow-based two-rate, three-color CAR. Each interface supports eight priority queues and multiple queue scheduling algorithms such as WRR, DRR, SP, WRR+SP, and DRR+SP, ensuring the quality of network services such as voice, video, and data services.
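Two-rate three-color CAR corresponds to the two rate three color marker defined in RFC 2698: a peak-rate bucket decides whether a packet is red, and a committed-rate bucket decides yellow versus green. The sketch below shows the color-blind decision with hypothetical rates and burst sizes (tokens in bytes):

# Sketch of a color-blind two rate three color marker (RFC 2698), the
# mechanism behind two-rate three-color CAR. Rates and bucket sizes are
# hypothetical.
class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs    # committed rate (B/s) and burst (B)
        self.pir, self.pbs = pir, pbs    # peak rate (B/s) and burst (B)
        self.tc, self.tp = cbs, pbs      # token buckets start full
        self.last = 0.0

    def color(self, size: int, now: float) -> str:
        # Replenish both buckets for the elapsed time, capped at burst sizes.
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.tp = min(self.pbs, self.tp + self.pir * dt)
        if self.tp < size:
            return "red"                 # exceeds the peak rate
        if self.tc < size:
            self.tp -= size
            return "yellow"              # within peak, exceeds committed
        self.tc -= size
        self.tp -= size
        return "green"                   # within the committed rate

meter = TrTCM(cir=1000, cbs=1500, pir=2000, pbs=3000)
for t in (0.0, 0.1, 0.2, 0.3):
    print(t, meter.color(1200, t))       # green, yellow, red, yellow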
The S5700 supports the IPv4/IPv6 dual stack and can simultaneously run IPv4 and IPv6. This makes the networking flexible and meets the requirements for the network transition from IPv4 to IPv6.
The S5700 supports various IPv6 routing protocols including RIPng and OSPFv3. It uses the
IPv6 NDP to manage packets exchanged between neighbors. It also provides the PMTU
mechanism to select a proper MTU on the path from the source to the destination, optimizing
network resources and obtaining the maximum throughput.