Internetworking The Storage Area Networks
Nikos E Mastorakis
Technical University of Sofia
MILITARY INSTITUTIONS OF UNIVERSITY EDUCATION
HELLENIC NAVAL ACADEMY
Terma Hatzikyriakou, 18539, Piraeus, GREECE.
Abstract: - This paper highlights SAN internetworking, describing the new technologies available for building an enterprise-wide SAN and for connecting Fibre Channel SANs over a Wide Area Network. The new storage-centric infrastructure includes an open, modular and scalable storage network, not tied to any one server or application.
Key-Words: - Storage Area Network, Fibre Channel, e-business, WAN, Internet, ATM, Gigabit Ethernet
2 SAN Technology Overview

A SAN is a separate, centrally managed (but functionally distributed) networked environment that provides a scalable, reliable IT infrastructure to meet the high-availability, high-performance requirements of today's most demanding e-business applications. The SAN is focused on the single task of managing storage resources and removing that task from the LAN or servers.

The central foundation of the SAN is Fibre Channel (FC) technology. The first-generation SAN provides 1Gb/s (2Gb/s full-duplex), while the second-generation SAN supports multiple data rates up to 2Gb/s (4Gb/s full-duplex) [12]. The FC specification is a set of standards being developed by ANSI (the American National Standards Institute) and is ideal for storage, video, graphics and mass data transfer applications [13].
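As a rough illustration of the line rates quoted above, the sketch below estimates bulk-transfer times for each SAN generation. It is our own illustration, not from the paper: the helper names are invented, and the payload rates assume FC's 8b/10b encoding, under which a nominal 1Gb/s link carries roughly 100 MB/s of payload per direction.

```python
# Illustrative only (not from the paper): approximate payload rates for
# first- and second-generation Fibre Channel. With 8b/10b encoding, a
# nominal 1Gb/s link moves roughly 100 MB/s of data in each direction.

GENERATIONS = {
    "FC gen 1 (1Gb/s)": 100,   # MB/s per direction, approximate
    "FC gen 2 (2Gb/s)": 200,   # MB/s per direction, approximate
}

def transfer_seconds(size_gb: float, rate_mb_s: float) -> float:
    """Time to move size_gb gigabytes at rate_mb_s megabytes per second."""
    return (size_gb * 1000) / rate_mb_s

for name, rate in GENERATIONS.items():
    t = transfer_seconds(500, rate)  # e.g. a 500 GB backup set
    print(f"{name}: {t / 60:.0f} min for 500 GB")
```

Doubling the link rate halves the window for the same backup set, which is why the generation jump matters for the backup and recovery workloads discussed later.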
FC is a layer 2 technology which operates over copper and fiber optic cabling, with maximum distances appropriate to the media (30m for copper, 500m for short-wave laser over multimode fiber, 10km for long-wave laser over single-mode fiber) [14]. FC supports protocols such as SCSI (SCSI over FC is called FCP), ESCON (Enterprise Systems Connection), FICON (Fiber Connectivity), SSA (Serial Storage Architecture), IPI (Intelligent Peripheral Interface), HiPPI (High Performance Parallel Interface), ATM and IP [15]. FC is designed to transport large blocks of data with greater efficiency and reliability than IP-based networks, which can significantly improve backup and recovery performance. FC supports three topologies: point-to-point, arbitrated loop (FC-AL), and switched (FC-SW at 1Gb/s, i.e. 2Gb/s full duplex, or FC-SW2 at 2Gb/s, i.e. 4Gb/s full duplex); all three topologies are fully interoperable, so the topology is transparent to the attached devices.
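The per-media cabling limits cited from [14] lend themselves to a simple validation table. The sketch below is illustrative only; the dictionary keys and function name are our own choices, while the distance figures are the ones quoted in the text.

```python
# Maximum FC link distances per medium, as quoted in the text ([14]).
MAX_DISTANCE_M = {
    "copper": 30,
    "shortwave-multimode": 500,
    "longwave-singlemode": 10_000,
}

def link_ok(medium: str, distance_m: float) -> bool:
    """True if a planned FC link length is within the medium's limit."""
    limit = MAX_DISTANCE_M.get(medium)
    if limit is None:
        raise ValueError(f"unknown medium: {medium!r}")
    return distance_m <= limit

print(link_ok("copper", 25))                # True: within the 30 m limit
print(link_ok("shortwave-multimode", 800))  # False: exceeds the 500 m limit
```

A check like this is the kind of sanity test a SAN planner would run before committing to a cabling layout between buildings or floors.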
The principal change in this new architecture is the externalization of server storage onto the SAN as a shared resource attached to multiple servers [16] (Figure 1). It does not matter whether the attached storage resources are mainframe DASD (direct access storage devices), open systems disk arrays, or even remote storage used for a variety of reasons. They are all connected by a SAN utilizing high-speed storage interconnect technologies.

Figure 1. The SAN

The LAN meets the SAN at the server. The server is provisioned with both a LAN interface card and an FC Host Bus Adapter (HBA). As a user requests data from the server over the LAN, the server retrieves the data from storage over the SAN [17].

3 Objective

The commitment was to provide an open, vendor-neutral SAN solution meeting the following objectives:
• Enhanced performance. Take storage traffic off the enterprise LAN and the enterprise servers. The SAN enables bulk data transfer from each server to shared SAN storage, while the LAN is used only for communication (not data) traffic between the servers. Sophisticated backup and recovery software applications still control the process, tracking the backup and recovery data. The result is a faster, more scalable, and more reliable backup and recovery solution, with more effective utilization of storage, server, and LAN resources [18].
• Mission continuity
- Consolidate backups and archives
- Disk mirroring to disaster recovery sites
• Applications
- High-availability, mission-critical databases
- Distributed server clustering
- Disk virtualisation

4 Requirements

Since standards from the SNIA (Storage Networking Industry Association) and the FCIA (Fibre Channel Industry Association) are only now coming into being, the major requirement was to deploy FC switches whose firmware is upgradeable to newer versions as new standards become available. Specific requirements driving towards a storage area networking solution include:
• Scalability,
• High-speed storage access,
• Heterogeneous connectivity,
• Flexibility in server and storage placement,
• Secure transactions and data transfer,
• 24x7 response and availability.
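The "enhanced performance" objective can be made concrete with a toy comparison: move a bulk backup across a 100 Mb/s enterprise LAN versus across a 1Gb/s SAN path where the LAN carries only control traffic. The model below is our own illustration, not from the paper, and the rate constants are simplified ceilings.

```python
# Toy model (illustrative): backup window when bulk data crosses the
# enterprise LAN versus when it moves only over the SAN, with the LAN
# left free for control/metadata traffic.

LAN_MB_S = 12.5    # 100 Mb/s Ethernet ~ 12.5 MB/s payload ceiling
SAN_MB_S = 100.0   # 1 Gb/s FC ~ 100 MB/s per direction

def backup_window_hours(size_gb: float, rate_mb_s: float) -> float:
    """Hours needed to move size_gb gigabytes at rate_mb_s MB/s."""
    return (size_gb * 1000) / rate_mb_s / 3600

lan = backup_window_hours(1000, LAN_MB_S)   # 1 TB over the LAN
san = backup_window_hours(1000, SAN_MB_S)   # same 1 TB over the SAN
print(f"LAN path: {lan:.1f} h, SAN path: {san:.1f} h")
```

Even in this crude model the SAN path shrinks the window by roughly a factor of eight, while sparing the LAN and the servers the bulk traffic entirely.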
5 SAN Internetworking
The following network components were employed to build and interconnect an enterprise-wide SAN infrastructure:
• The SilkWorm 3800 switch (SW3800) provides 16 ports with auto-sensing 1Gbit/s (2Gb/s full duplex) and 2Gbit/s (4Gb/s full duplex) interfaces for seamless integration with existing FC fabrics. The SW3800 includes Brocade Advanced Fabric Services that increase security through hardware-enforced World Wide Name zoning [19]. SW3800 switches were deployed with fiber optic cabling to support all FC-related topologies, providing the reliable, high-performance data transfer that is critical to efficient SAN applications such as LAN-free backup, server-free backup, storage consolidation, remote mirroring and high-availability clustering configurations. Inter-switch links (ISLs) between the two central SW3800 switches (Figure 2) create a single logical high-speed trunk running at up to 4Gb/s (8Gb/s full duplex). The trunked ISL is fault-tolerant in that it withstands the failure of individual links. This feature improves core fabric throughput and performance [20].
• The ATTO FibreBridge 4500C/R/D Fibre Channel-to-SCSI bridge is configured with three independent Fibre Channel ports and four independent Ultra2 SCSI buses [21].
• Compaq ProLiant (Windows NT/2000, Linux), IBM RS6000 (AIX), HP9000 (HP-UX) and Sun Enterprise (Solaris) servers; an IBM Enterprise Storage Server 2105-F20 RAID (Redundant Array of Independent Disks) array; a Nextor 18F (FC target JBOD - Just a Bunch of Disks); a Terasystem DataSTORE LTO TLS 8000 (FC or SCSI tape library); a Terasystem OptiNET CD-ROM/DVD jukebox and CD-ROM/DVD jukebox controller; Adaptec 2940U2W (Ultra2 SCSI) adapters; and FC 6227, QLogic 2100 and QLogic 2200 series Fibre Channel HBAs.
• A Tivoli Storage Manager (TSM) server running on AIX. TSM delivers data protection for file and application data, record retention, space management, and disaster recovery. The client software (Storage Agent) running on different systems (PCs, workstations or application servers), in conjunction with the server software (TSM), enables LAN-free data transfer exploiting the SAN infrastructure [22].
• The ADVA DiskLink SAN Interconnect Gateway enables the distribution and mirroring of data across multiple sites. DiskLink is a high-performance storage networking device that provides 2Gb/s Fibre Channel (4Gb/s full duplex) and Ultra2 SCSI networking over unlimited distances, across ATM networks at 155/622 Mbps or over next-generation Internet services (Virtual Private Networks) with Gigabit Ethernet. Configurable levels of intelligent storage command handling allow DiskLink to reduce the impact of transmission delays and eliminate distance barriers [23].
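The trunked inter-switch link behaviour described for the central switch pair can be sketched with a simplified model. This is our own illustration, not vendor code: the class and method names are invented, and real trunking also rebalances frames across the surviving members.

```python
# Simplified model (not vendor code) of a trunked inter-switch link (ISL):
# two 2 Gb/s links form one logical 4 Gb/s trunk, and losing one member
# degrades capacity instead of breaking the path between the switches.

class IslTrunk:
    def __init__(self, link_gbps: float, links: int):
        self.link_gbps = link_gbps
        self.up = links  # number of member links currently alive

    def capacity_gbps(self) -> float:
        """Aggregate capacity of the logical trunk."""
        return self.link_gbps * self.up

    def fail_link(self) -> None:
        """Take one member link down, as in a cable or SFP failure."""
        if self.up == 0:
            raise RuntimeError("trunk already down")
        self.up -= 1

trunk = IslTrunk(link_gbps=2.0, links=2)
print(trunk.capacity_gbps())   # 4.0 Gb/s logical trunk
trunk.fail_link()
print(trunk.capacity_gbps())   # 2.0 Gb/s: the path survives one failure
```

This is the fault-tolerance property the text attributes to the trunked ISL: a single link failure halves the trunk's throughput but leaves the core fabric path intact.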