Site Preparation and Planning
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
Each node includes CPUs, RAM, NVRAM, network interfaces, InfiniBand adapters, disk controllers, and storage media. An Isilon cluster is made up of three or more nodes, up to 144. Generation 6 hardware always uses the 4U chassis, which holds four nodes, so each node occupies a quarter of the chassis.
When you add a node to a pre-Generation 6 cluster, you increase the aggregate disk,
cache, CPU, RAM, and network capacity. OneFS groups RAM into a single coherent
cache so that a data request on a node benefits from data that is cached anywhere.
NVRAM is grouped to write data with high throughput and to protect write operations
from power failures. As the cluster expands, spindles and CPU combine to increase
throughput, capacity, and input-output operations per second (IOPS). The minimum Generation 6 cluster is four nodes. Generation 6 does not use NVRAM; journals are stored in RAM, and M.2 flash provides a backup in case of node failure.
There are several types of nodes, all of which can be added to a cluster to balance
capacity and performance with throughput or IOPS:
Node Function
A-Series Performance Accelerator Independent scaling for high performance
Isilon offers a variety of storage and accelerator nodes that you can combine to meet your storage needs.
The requirements for mixed-node clusters section in the Isilon Supportability and
Compatibility Guide provides information on installing more than one type of node in an
Isilon cluster.
Talk to an Isilon Sales Account Manager to identify the equipment that is best suited to support your workflow.
Grounding guidelines
DANGER
To eliminate shock hazards and facilitate the operation of circuit-protective devices, ensure that the rack is grounded.
- The rack must have an earth-ground connection as required by applicable codes. Connections such as a grounding rod or building steel provide an earth ground.
- The electrical conduits must be made of rigid metallic material that is securely connected or bonded to panels and electrical boxes, which provides continuous grounding.
- The ground must have the correct low impedance to prevent buildup of voltage on equipment or exposed surfaces. Low-impedance grounding and lightning protection are recommended.
- The electrical system must meet local and national code requirements. Local codes might be more stringent than national codes. Floor load bearing requirements and Safety and EMI Compliance provide more information.
Note
Standard depth from the front NEMA rail to the rear 2.5 in SSD cover ejector is 35.8 in.
Generation 6 supports both 2.5 in and 3.5 in drives in the same enclosure.
Note
Currently, the F810 nodes are supported for OneFS 8.1.3 only. To take advantage of
F810 in your environment, contact your Dell EMC Account team.
Number of nodes 4 4 4
Self-Encrypting Drives (SED, SSD) option  No  No  Yes
Attribute Capacity
Network interfaces  Network interfaces support IEEE 802.3 standards for 100 Mbps, 1 Gbps, 10 Gbps, and 40 Gbps network connectivity
CPU Type  Intel® Xeon® Processor E5-2697A v4 (40M Cache, 2.60 GHz)
InfiniBand connections are not supported for the F810 node back-end
networking.
SSD drives (2.5 in) per chassis  Per node = 15; per chassis (4 nodes) = 60 (all models)
OneFS Version Required  Isilon OneFS 8.1 or later, except for self-encrypting drive options, which require OneFS 8.1.0.1 or later
Network interfaces  Network interfaces support IEEE 802.3 standards for 100 Mbps, 1 Gbps, 10 Gbps, and 40 Gbps network connectivity
CPU Type  Intel® Xeon® Processor E5-2697A v4 (40M Cache, 2.60 GHz)
Number of nodes 4 4 4
SSD drives (2.5 in) per chassis  Per node = 15; per chassis (4 nodes) = 60 (all models)
System Memory 64 GB
Network interfaces Network interfaces support IEEE 802.3 standards for 1Gbps, 10Gbps,
and 100Mbps network connectivity
Infrastructure Networking  2 InfiniBand connections with quad data rate (QDR) link or 2 x 10 GbE (SFP+)
Number of nodes 4 4 4
Attribute Capacity
HDD drives (3.5 in 4kn SATA) per chassis  Per node = 15; per chassis (4 nodes) = 60 (all models)
Network interfaces Network interfaces support IEEE 802.3 standards for 1Gbps, 10Gbps,
40Gbps and 100Mbps network connectivity
Infrastructure Networking  2 InfiniBand connections with quad data rate (QDR) link or 2 x 40 GbE (QSFP+)
Number of nodes 4 4
SAS drives (2.5 in 512n) per chassis  Per node = 30; per chassis (4 nodes) = 120 (both models)
Attribute Capacity
Infrastructure Networking  2 InfiniBand connections with quad data rate (QDR) link or 2 x 40 GbE (QSFP+)
Cluster attributes
Cluster attribute H400 H500 H600
Number of Chassis 1 to 36 1 to 36 1 to 36
Number of nodes 4 4 4
SSD drives (2.5 in) per chassis  Per node = 15; per chassis (4 nodes) = 60 (all models)
Attribute Capacity
System Memory 16 GB
Network interfaces  Network interfaces support IEEE 802.3 standards for 100 Mbps, 1 Gbps, and 10 Gbps network connectivity
Attribute  10 TB HDD
Chassis capacity 800 TB
Number of nodes 4
HDD drive (3.5 in 4kn SATA) per chassis  Per node = 20; per chassis (4 nodes) = 80
Self-encrypting drive (SED HDD) option  Yes
System Memory 16 GB
Network interfaces  Network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
Cluster attributes
Cluster attributes A200 A2000
Number of Chassis 1 to 36 1 to 36
Note
The S210 is approximately 27 in. deep without the front panel cable bend (approximately 3 in.), resulting in the 30.5 in. depth.
Front-end Networking  2 copper 1000 Base-T (GE) and 2 x 10GE (SFP+ or twin-ax copper)
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 16.2 TB to 4.15 PB 96 GB to 36.8 TB 6–288
Solid State Drives (SSDs) (200 GB, 400 GB, or 800 GB)  Up to 6  Up to 6  Up to 6  Up to 6
System ECC Memory  24 GB or 48 GB
Network Interfaces  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
CPU Type Single Intel® Xeon® Processor E5-2407v2 @ 2.4 GHz, 4 core
Non-volatile RAM (NVRAM)  2 GB
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 18 TB to 6.9 PB 72 GB to 6.9 TB 6–288
Network interfaces  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
Non-volatile RAM (NVRAM)  2 GB
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 108 TB to 20.7 PB 192 GB to 36.8 TB 12–576
Self-Encrypting Drives (SED SSD) option (800 GB)  No  No  No  Yes  No
System ECC Memory  24 GB or 48 GB
Network Interfaces  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
Non-volatile RAM (NVRAM)  2 GB
Typical Power Consumption @ 100 V  800 W
Typical Power Consumption @ 240 V  720 W
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 108 TB to 30.2 PB 36 GB to 6.9 TB 12–576
Non-volatile RAM (NVRAM)  512 MB
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 16.2 TB to 4.15 PB 72 GB to 13.8 TB 6–288
Self-Encrypting Drives (SED HDD) option (7200 RPM)  No  No  Yes
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 24 TB to 5.2 PB 18 GB to 6.9 TB 6–288
Solid State Drives  0, 2, or 4  0 or 3  0 or 4  0 or 6
Self-Encrypting Drives (SEDs) option (7200 RPM)  No  No  Yes  No
Note
FIPS 140-2 level 2 validated SEDs with unique AES-256-bit strength keys that are assigned to each drive.
Network Interface  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
Non-volatile RAM (NVRAM)  512 MB
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 108 TB to 20.7 PB 72 GB to 27.6 TB 12–576
Hard Drives (3.5 in SATA)  36  36  36  36
Self-Encrypting Drives (SEDs) option (7200 RPM)  No  No  Yes  No
Note
FIPS 140-2 level 2 validated SEDs with unique AES-256-bit strength keys assigned to each drive.
Network Interface  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
Non-volatile RAM (NVRAM)  512 MB
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 108 TB to 20.7 PB 72 GB to 27.6 TB 12–576
- The HD400 node is 35 inches deep, which is longer than other Isilon nodes.
- You can install the HD400 node in standard racks, but you are not able to close the rear doors on most standard cabinets.
- The 35 inch depth of the HD400 node does not include additional space for cable management arms.
Node attributes and options
Attribute 6 TB HDDs
Capacity 354 TB
Hard Drives (3.5 in SATA, 7200 RPM)  59
Solid State Drives (800 GB)  1
Self-Encrypting Drives (SEDs) option  No
System Memory 24 GB or 48 GB
Network interfaces  Isilon network interfaces support IEEE 802.3 standards for 10 Gbps, 1 Gbps, and 100 Mbps network connectivity
CPU Type Intel® Xeon® Processor E5-2407 v2 (10M Cache, 2.40 GHz)
Non-volatile RAM (NVRAM)  2 GB
Attribute 6 TB HDDs
Typical Power Consumption @ 240 V  1100 Watts
Cluster attributes
Number of nodes Capacity Memory Rack Units
3–144 1.06 PB to 50.9 PB 72 GB to 6.9 TB 12–576
Accelerator nodes
With Isilon backup and performance accelerators, you can scale the performance of the cluster without increasing capacity.
Backup accelerator
The backup accelerator provides a simple and flexible solution to address backup challenges and scale backup performance to fit within backup windows.
The backup accelerator integrates quickly and easily into the Isilon scale-out NAS
cluster and can be installed in about a minute, without system downtime.
The backup accelerator is also designed to integrate easily with current tape
infrastructures, as well as with leading data backup and recovery software
technologies and processes. Each backup accelerator node supports multi-paths and
can drive multiple uncompressed LTO-5 and LTO-6 tape drives while supporting both
two-way and three-way NDMP to meet specific data backup requirements.
Performance accelerator
The performance accelerator is a next-generation node that is designed for demanding applications and workloads that require maximum performance.
With performance accelerator nodes, you can reduce latency and increase concurrent reads and throughput for a cached dataset, while gaining the ability to scale performance independent of capacity. Isilon A100 performance accelerator nodes also accelerate cluster operations, including disk and node rebuilds, file striping, and file-based replication.
With 256 GB of L1 cache per node, performance accelerator nodes can hold a huge,
recently accessed dataset in memory. This dataset can be read with extremely high
performance, low latency, and concurrent throughput.
Each performance accelerator node delivers an aggregate throughput of up to 1100
MB/s per node and allows you to scale performance without adding capacity, while
supporting the most demanding high performance work flows.
Node attributes
Attribute A100 backup accelerator A100 performance accelerator
Front-end networking
- A100 backup accelerator: four (4) 8 Gb Fibre Channel and four (4) 1 Gb Ethernet (copper)
- A100 performance accelerator: two (2) 10 Gb Ethernet (Fibre Channel or copper) and four (4) 1 Gb Ethernet (copper)
Memory 64 GB 256 GB
Environmental specifications
Attribute Value
Operating environment  50°F to 95°F (10°C to 35°C), 5% to 95% relative humidity, non-condensing
Minimum service clearances  Front: 35 in. (88.9 cm), rear: 14 in. (35.6 cm)
A100 guidelines
Follow these guidelines to get optimal performance from the A100 accelerator node.
The A100 accelerator provides the most benefit to workflows where it can:
- Magnify cached read performance.
- Expand physical GigE network ports.
Titan HD racks
Titan HD is designed to support fully populated racks of A2000 chassis/nodes.
However, all Generation 6 platforms can be installed in the Titan HD racks.
Environmental requirements
Table 3 Titan HD environmental requirements
Cabinet clearance
This Dell EMC cabinet ventilates from front to back. Provide adequate clearance to
service and cool the system. Depending on component-specific connections within the
cabinet, the available power cord length may be somewhat shorter than the 15-foot
standard.
The swivel position of the caster wheels does not affect the cabinet footprint. Once you have positioned, leveled, and stabilized the cabinet, the four leveling feet determine the final load-bearing points on the site floor.
Cabinet specifications provide details about cabinet dimensions and weight
information to effectively plan for system installation at the customer site.
Note
For installations that require a top of the cabinet power feed, a 3 m extension cord is
provided. Do not move or invert the PDUs.
In addition, the pallets that are used to ship cabinets are specifically engineered to
withstand the added weight of components that are shipped in the cabinet.
Figure 1 Cabinet component dimensions
NOTICE
Customers are responsible for ensuring that the data center floor on which the Dell
EMC system is configured can support the system weight. Systems can be configured
directly on the data center floor, or on a raised floor supported by the data center
floor. Failure to comply with these floor-loading requirements could result in severe
damage to the Dell EMC system, the raised floor, subfloor, site floor, and the
surrounding infrastructure. In the agreement between Dell EMC and the customer,
Dell EMC fully disclaims all liability for any damage or injury resulting from a customer
failure to ensure that the raised floor, subfloor and/or site floor can support the
system weight as specified in this guide. The customer assumes all risk and liability
that is associated with such failure.
Leave approximately 2.43 meters (8 ft) of clearance at the back of the cabinet to
unload the unit and roll it off the pallet.
Figure 3 Cabinet clearance
Installed clearance
The 40U-P and 40U-D Titan rack cabinets ventilate from front to back. Provide
adequate clearance to service and cool the system.
Depending on component-specific connections within the cabinet, the available power
cable length may be somewhat shorter than the 15-foot standard.
Figure 4 40U-P
Figure 5 40U-D
Width: 60 cm (24.00 in.)
Height: 190 cm (75.00 in.)
Depth: 111.76 cm (44 in.); systems with a front door are 5.5 cm (2.2 in.) deeper
Power cord length: 4.5 m (15 ft)
Front access: 107 cm (42 in.)
Rear access: 91 cm (36.00 in.)
Caster wheels
The bottom of the 40U-P and 40U-D Titan rack cabinets includes four caster wheels.
Of the four wheels on the bottom of the rack, the two front wheels are fixed, and the
two rear casters swivel in a 1.75-inch diameter. The swivel position of the caster
wheels determines the load-bearing points on the site floor, but does not affect the
cabinet footprint. After you position, level, and stabilize the cabinet, the four leveling
feet determine the final load-bearing points on the site floor.
The figure shows bottom and side views of the cabinet with the caster wheel positions, leveling feet, and floor tile cutout dimensions. The caster swivel diameter is 1.750 inches. All measurements are in inches.
WARNING
The data center floor on which you configure the system must support that system. You are responsible for ensuring that the data center floor can support the weight of the system, whether the system is configured directly on the data center floor or on a raised floor supported by the data center floor. Failure to comply with these floor-loading requirements could result in severe damage to the system, the raised floor, sub-floor, site floor, and the surrounding infrastructure. Dell EMC fully disclaims any liability for damage or injury resulting from a failure to ensure that the raised floor, sub-floor, and/or site floor can support the system weight as specified. The customer assumes all risk and liability that is associated with such failure.
Stabilizer brackets
Optional brackets help to prevent the rack from tipping during maintenance or minor
seismic events. If you intend to secure the optional stabilizer brackets to the site floor,
prepare the location for the mounting bolts. Install an anti-tip bracket to provide an
extra measure of security.
Note
There are two kits that can be installed. For cabinets with components that slide, it is
recommended that you install both kits.
- Seismic restraint bracket: Install a seismic restraint bracket to provide the highest protection from moving or tipping.
The figure shows the front and rear stabilizer bracket dimensions and mounting-bolt hole locations. All measurements are in inches.
AC power input
After the 40U-P and 40U-D Titan racks are positioned and loaded, connect power
cords to the P1 and P2 connectors on the four power distribution units (PDU) within
the cabinet.
Depending on the cabinet components and configuration, the 40U-P rack requires
two, four, or six independent 200–240 V power sources. Power cords included with
the shipment support the maximum configurations. There might be extra cords as part
of the shipment.
CAUTION
40U-P cabinet PDUs do not include a power ON/OFF switch. Ensure that the four circuit breaker switches on each PDU are in the up (OFF) position until AC power is supplied to the unit. The power must be off before disconnecting jumper or power cords from a PDU.
Attach the power cords to the power distribution units on each side of the rack. The
following image displays where to attach two AC source connections.
Third party rack specifications for the Dell EMC Generation 6 deep chassis
solution
The current Dell EMC rack solutions support up to 8 PDUs (4 on each side). The
illustrations in this section provide the dimensions and guidelines for 3rd party rack
solutions. The table following the illustrations lists the components and dimensions for
the labels in the illustrations.
h  19 in (486.2 mm) NEMA + (2e) + (2f)
i  Chassis depth:
   - Normal chassis = 35.80 in (909 mm)
   - Deep chassis = 40.40 in (1026 mm)
k  Front
l  Rear
m  Front door
n  Rear door
p  Rack post
q  PDU
s  NEMA 19 inch
CAUTION
Isilon requires that separate switches are used for the external and internal interfaces. Isilon nodes carry Ethernet traffic on the front end. On the back end, Generation 5 and earlier nodes use InfiniBand, and Generation 6 nodes support either Ethernet or InfiniBand.
Cable management
Organize cables to protect the cable connections, to allow proper airflow around the cluster, and to ensure fault-free maintenance of the Isilon nodes.
Protect cables
Damage to the InfiniBand or Ethernet cables (copper or optical Fibre) can affect the
Isilon cluster performance. Consider the following to protect cables and cluster
integrity:
- Never bend cables beyond the recommended bend radius. The recommended bend radius for any cable is at least 10–12 times the diameter of the cable. For example, if a cable is 1.6 inches, round up to 2 inches and multiply by 10 for an acceptable bend radius (see the example after this list). Cables differ, so follow the recommendations of the cable manufacturer.
- As illustrated in the following figure, the most important design attribute for bend
radius consideration is the minimum mated cable clearance (Mmcc). Mmcc is the
distance from the bulkhead of the chassis through the mated connectors/strain
relief including the depth of the associated 90 degree bend. Multimode fiber has
many modes of light (fiber optic) traveling through the core. As each of these
modes moves closer to the edge of the core, light and the signal are more likely to
be reduced, especially if the cable is bent. In a traditional multimode cable, as the
bend radius is decreased, the amount of light that leaks out of the core increases,
and the signal decreases.
Figure 14 Cable design
Note
Gravity decreases the bend radius and results in the loss of light (fiber optic),
signal power, and quality.
- For overhead cable supports:
   - Ensure that the supports are anchored adequately to withstand the significant
weight of bundled cables. Anchor cables to the overhead supports, then again
to the rack to add a second point of support.
   - Do not let cables sag through gaps in the supports. Gravity can stretch and
damage cables over time. You can anchor cables to the rack with velcro ties at
the mid-point of the cables to protect your cable bundles from sagging.
   - Place drop points in the supports that allow cables to reach racks without
bending or pulling.
- If the cable is running from overhead supports or from underneath a raised floor,
be sure to include vertical distances when calculating necessary cable lengths.
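The bend-radius rule and the cable-length guidance above reduce to simple arithmetic. The following Python sketch is illustrative only; the function names, the default 10x multiplier, and the 3 ft service loop are assumptions based on the guidelines in this section, so always defer to the cable manufacturer's recommendations.

```python
import math

def min_bend_radius_in(cable_diameter_in, multiplier=10):
    """Approximate minimum bend radius: round the cable diameter up to the
    nearest inch, then multiply by 10 (10-12x per the guideline above)."""
    return math.ceil(cable_diameter_in) * multiplier

def required_cable_length_ft(horizontal_ft, vertical_ft, service_loop_ft=3):
    """Include vertical runs (overhead supports or raised floor) and a service
    loop when estimating cable length; the 3 ft loop is an assumed placeholder."""
    return horizontal_ft + vertical_ft + service_loop_ft

# Example from the text: a 1.6 in cable rounds up to 2 in, giving a 20 in radius.
print(min_bend_radius_in(1.6))            # 20
print(required_cable_length_ft(40, 12))   # 55
```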
Ensure airflow
Bundled cables can obstruct the movement of conditioned air around the cluster.
- Secure cables away from fans.
- To keep conditioned air from escaping through cable holes, employ flooring seals or grommets.
Prepare for maintenance
Design the cable infrastructure to accommodate future work on the cluster. Think ahead to required tasks on the cluster, such as locating specific pathways or connections, isolating a network fault, or adding and removing nodes and switches.
- Label both ends of every cable to denote the node or switch to which it should
connect.
- Leave a service loop of cable behind nodes. Service technicians should be able to
slide a node out of the rack without pulling on power or network connections. In
the case of Generation 6 nodes, you should be able to slide any of the four nodes
out of the chassis without disconnecting any cables from the other three nodes.
WARNING
If adequate service loops are not included during installation, downtime might
be required to add service loops later.
- Allow for future expansion without the need for tearing down portions of the
cluster.
It is strongly recommended that you use OM3 and OM4 50 μm cables for all optical connections.
Note: Dual LC for 10GbE cables have a bend radius of 3 cm (1.2 in) minimum. You can obtain
MPO connector ends for optical 40GbE cables.
The maximum length that is listed in the preceding table for the 50 μm or 62.5 μm
optical cables includes two connections or splices between the source and the
destination.
NOTICE
It is not recommended to mix 62.5 μm and 50 μm optical cables in the same link. In
certain situations, you can add a 50 μm adapter cable to the end of an already installed
62.5 μm cable plant. Contact the service representative for details.
Note
The Celestica Ethernet rails are designed to overhang the rear NEMA rails to align the
switch with the Generation 6 chassis at the rear of the rack. The rails require a
minimum clearance of 36 in from the front NEMA rail to the rear of the rack to ensure
that the rack door can be closed.
Note
In OneFS 8.2.1, the Dell Z9100-ON switch is required if you plan to implement Leaf-
Spine networking for large clusters.
Note
There is no breakout cable support for Arista switches. However, you can add a 10GbE
or 40GbE line card depending on the Arista switch model. Details are included in the
following table.
Switch Nodes
Arista DCS-7304  Shipped with 2 line cards, each with 48 10 GbE ports, for a maximum of 144 nodes. You can add either of the following:
- 1 additional 48-port 10 GbE line card
- 2 additional 32-port 40 GbE line cards
The following table lists rack and power requirements for Arista switches.
Note
If the installation instructions in this section do not apply to the switch you are using,
follow the procedures provided by your switch manufacturer.
CAUTION
If the switch you are installing features power connectors on the front of the
switch, it is important to leave space between appliances to run power cables to
the back of your rack. There is no 0U cable management option available at this
time.
Procedure
1. Remove rails and hardware from packaging.
2. Verify that all components are included.
3. Locate the inner and outer rails and secure the inner rail to the outer rail.
4. Attach the rail assembly to the rack using the eight screws as illustrated in the following figure.
Note
The rail assembly is adjustable for NEMA front-to-rear spacing from 22 in to 34 in.
5. Attach the switch rails to the switch by placing the larger side of the mounting holes on the inner rail over the shoulder studs on the switch. Press the rail evenly against the switch.
Note
The rail tabs for the front NEMA rail are located on the power supply side of the switch.
6. Slide the inner rail toward the rear of the switch so that the shoulder studs slide into the smaller side of each of the mounting holes on the inner rail. Ensure that the inner rail is firmly in place.
7. Secure the switch to the rack by fastening the bezel clip and switch to the rack using the two screws, as illustrated in the following figure.
Figure 17 Secure the switch to the rail
Note
The 35 in depth of the HD400 does not include additional space required for cable
management arms.
Installation instructions for these accessories are available in the HD400 Installation
Guide.
Network topology
External networks connect the cluster to the outside world.
Subnets can be used in external networks to manage connections more efficiently.
Specify the external network subnets depending on the topology of the network.
In a basic network topology in which each node communicates to clients on the same
subnet, only one external subnet is required.
More complex topologies require several different external network subnets; for example, a topology in which some nodes connect to one external IP subnet, other nodes connect to a second IP subnet, and some nodes do not connect externally at all.
SmartPools
The SmartPools module enables you to administer node pools and storage tiers, and to
create policies to manage files on a granular level.
OneFS provides functions such as autoprovisioning, compatibilities, virtual hot spare
(VHS), SSD strategies, global namespace acceleration (GNA), L3 cache, and storage
tiers.
The following table compares storage pool features based on whether a SmartPools
license is active.
SmartQuotas
The SmartQuotas module is a quota-management tool that monitors and enforces
administrator-defined storage limits.
Through the use of accounting and enforcement quota limits, reporting capabilities,
and automated notifications, you can manage and monitor storage utilization, monitor
disk storage, and issue alerts when storage limits are exceeded.
A storage quota defines the boundaries of storage capacity that are allowed for a
group, a user, or a directory on a cluster. The SmartQuotas module can provision,
monitor, and report disk-storage usage and can send automated notifications when
storage limits are approached or exceeded. SmartQuotas also provides flexible
reporting options that can help you analyze data usage.
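As a conceptual illustration of the threshold-based monitoring that SmartQuotas performs, the following Python sketch checks usage against an administrator-defined limit and returns a notification when the limit is approached or exceeded. This is not OneFS code; the class, the 90% warning threshold, and the example path are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quota:
    path: str                # directory, user, or group the quota applies to
    limit_bytes: int         # administrator-defined storage limit
    warn_ratio: float = 0.9  # assumed "approaching the limit" threshold

def check_quota(quota: Quota, used_bytes: int) -> Optional[str]:
    """Return a notification message when usage approaches or exceeds the limit."""
    if used_bytes >= quota.limit_bytes:
        return f"{quota.path}: storage limit exceeded"
    if used_bytes >= quota.warn_ratio * quota.limit_bytes:
        return f"{quota.path}: approaching the storage limit"
    return None

# A 1 TB directory quota that is 95% full triggers the warning notification.
print(check_quota(Quota("/ifs/data/projects", 10**12), 950 * 10**9))
```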
SmartDedupe
The SmartDedupe software module enables you to save storage space on your cluster
by reducing redundant data. Deduplication maximizes the efficiency of your cluster by
decreasing the amount of storage required to store multiple files with similar blocks.
SmartDedupe deduplicates data by scanning an Isilon cluster for identical data blocks.
Each block is 8 KB. If SmartDedupe finds duplicate blocks, SmartDedupe moves a
single copy of the blocks to a hidden file called a shadow store. SmartDedupe then
deletes the duplicate blocks from the original files and replaces the blocks with
pointers to the shadow store.
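The block-level technique described above can be sketched in a few lines of Python: hash fixed 8 KB blocks, keep a single copy of each unique block in a shared store, and record references for each file. This is only a conceptual model of deduplication with a shadow store, not how OneFS implements it.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # 8 KB blocks, as described above

def deduplicate(files):
    """files: dict of name -> bytes. Returns (shadow_store, file_maps)."""
    shadow_store = {}  # block hash -> single stored copy of the block
    file_maps = {}     # file name -> list of block hashes (references)
    for name, data in files.items():
        refs = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            shadow_store.setdefault(digest, block)  # store one copy only
            refs.append(digest)
        file_maps[name] = refs
    return shadow_store, file_maps

# Two files that share identical blocks store the shared blocks only once.
store, maps = deduplicate({"a.bin": b"x" * 16384, "b.bin": b"x" * 16384 + b"y" * 8192})
print(len(store), sum(len(r) for r in maps.values()))  # 2 stored blocks, 5 references
```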
Deduplication is applied at the directory level, targeting all files and directories
underneath one or more root directories. You can first assess a directory for
deduplication and determine the estimated amount of space you can expect to save.
You can then decide whether to deduplicate the directory. After you begin
deduplicating a directory, you can monitor how much space is saved by deduplication
in real time.
You can deduplicate data only if you activate a SmartDedupe license on a cluster.
However, you can assess deduplication savings without activating a SmartDedupe
license.
InsightIQ
The InsightIQ module provides advanced monitoring and reporting tools to help
streamline and forecast cluster operations.
InsightIQ helps to create customized reports containing key cluster performance
indicators such as:
- Network traffic on a per-interface, per-node, per-client, and per-protocol basis.
- Protocol operation rates and latencies that are recorded on a per-protocol, per-client, and per-operation class basis.
- Per-node CPU utilization and disk throughput statistics.
To run the Isilon OneFS virtual appliance, the environment must meet the following
minimum system requirements.
Isilon cluster
The monitored cluster must be running version 5.5.3 or later of the operating
system. The InsightIQ File System Analytics functionality requires OneFS 6.0 or
later. The available InsightIQ features depend on the OneFS version that the
monitored system is running.
For monitored clusters running OneFS 7.0 and later, enable HTTPS port 8080. For monitored clusters running an earlier version of OneFS, enable HTTPS port 9443. For the File System Analytics feature, enable the NFS service and ports 111 and 2049 on all monitored clusters.
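If you want to verify that the required ports are reachable from the InsightIQ host before deployment, a basic TCP connectivity check such as the following Python sketch can help. The cluster host name is a placeholder, and the port list should be adjusted for the OneFS version of the monitored cluster.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address; 8080 applies to OneFS 7.0 and later, 9443 to earlier
# releases, and 111/2049 to the File System Analytics (NFS) requirement.
cluster = "cluster.example.com"
for port in (8080, 111, 2049):
    print(port, "reachable" if port_open(cluster, port) else "closed or filtered")
```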
Web browser
You can access the Isilon InsightIQ application through any web browser that
supports sophisticated graphics. Examples of supported browsers include
SnapshotIQ
The SnapshotIQ module provides the ability to create and manage snapshots on the
Isilon cluster.
A snapshot contains a directory on a cluster, and includes all data that is stored in the
specified directory and its subdirectories. If data contained in a snapshot is modified,
the snapshot stores a physical copy of the original data and references the copied
data. Snapshots are created according to user specifications, or generated
automatically by OneFS to facilitate system operations.
To create and manage snapshots, you must activate a SnapshotIQ license on the
cluster. Some applications must generate snapshots to function, but do not require
you to activate a SnapshotIQ license by default. The snapshots are automatically
deleted when the system no longer needs them. However, if a SnapshotIQ license is
active on the cluster, some applications can retain snapshots. You can view auto-
generated snapshots regardless of whether a SnapshotIQ license is active.
The following table lists the available snapshot functionality depending on whether a
SnapshotIQ license is active.
SyncIQ
The SyncIQ module enables you to replicate data from one Isilon cluster to another.
With SyncIQ, you can replicate data at the directory level while optionally excluding
specific files and sub-directories from being replicated. SyncIQ creates and references
snapshots to replicate a consistent point-in-time image of a root directory. Metadata
such as access control lists (ACLs) and alternate data streams (ADS) are replicated
along with data.
SyncIQ enables you to retain a consistent backup copy of your data on another Isilon
cluster. SyncIQ offers automated failover and failback capabilities that enable you to
continue operations on another Isilon cluster if a primary cluster becomes unavailable.
SmartLock
The SmartLock module allows you to prevent users from modifying and deleting files
on protected directories.
Use the SmartLock tool to create SmartLock directories and commit files within those
directories to a write once, read many (WORM) state. You cannot erase or re-write a
file that is committed to a WORM state. You can delete a file that has been removed
from a WORM state, but you cannot modify a file that has ever been committed to a
WORM state.
Note the following SmartLock considerations:
- Create files outside of SmartLock directories and transfer them into a SmartLock directory only after you finish working with the files.
Upload files to a cluster in two steps.
1. Upload the files into a non-SmartLock directory.
2. Transfer the files to a SmartLock directory.
Note
Files committed to a WORM state while being uploaded will become trapped in an
inconsistent state.
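The two-step upload can be scripted so that a file is never written directly into the SmartLock directory. The following Python sketch is a generic illustration; the staging and SmartLock paths are placeholder examples, not required locations.

```python
import shutil
from pathlib import Path

STAGING_DIR = Path("/ifs/data/staging")         # placeholder non-SmartLock directory
SMARTLOCK_DIR = Path("/ifs/data/worm_archive")  # placeholder SmartLock directory

def upload_to_worm(source: Path) -> Path:
    """Step 1: copy the finished file into the staging directory.
    Step 2: transfer it into the SmartLock directory."""
    staged = STAGING_DIR / source.name
    shutil.copy2(source, staged)            # upload completes outside SmartLock
    final = SMARTLOCK_DIR / source.name
    shutil.move(str(staged), str(final))    # then move it into the SmartLock directory
    return final

# Usage: upload_to_worm(Path("report.pdf"))
```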
Files can be committed to a WORM state while they are still open. If you specify
an autocommit time period for a directory, the autocommit time period is
calculated according to the length of time since the file was last modified, not
when the file was closed. If you delay writing to an open file for more than the
autocommit time period, the file will be committed to a WORM state the next time
you attempt to write to it.
Note
Compliance mode is not compatible with Isilon for vCenter, VMware vSphere API for
Storage Awareness (VASA), or the vSphere API for Array Integration (VAAI) NAS
Plug-In for Isilon.
SmartConnect Advanced
The SmartConnect Advanced module adds enhanced balancing policies to evenly
distribute CPU usage, client connections, or throughput.
If you activate a SmartConnect Advanced license, you are also able to:
- Enable dynamic IP allocation and IP failover in your cluster.
- Define IP address pools to support multiple DNS zones in a subnet.
- Establish multiple pools for a single subnet.
SupportIQ
The SupportIQ module allows Isilon Technical Support, with your permission, to
securely upload and analyze your OneFS logs to troubleshoot cluster problems.
When SupportIQ is enabled, Isilon Technical Support personnel can request logs
through scripts that gather cluster data and then upload the data to a secure location.
You must enable and configure the SupportIQ module before SupportIQ can run
scripts to gather data.
You can also enable remote access, which allows Isilon Technical Support personnel to
troubleshoot your cluster remotely and run additional data-gathering scripts. Remote
access is disabled by default. To enable remote SSH access to your cluster, you must
provide the cluster password to a Technical Support engineer.
Antivirus planning
You can scan the OneFS file system for computer viruses and other security threats
by integrating with third-party scanning services through the Internet Content
Adaptation Protocol (ICAP). This feature does not require you to activate a license.
If an ICAP server detects a threat, it notifies OneFS. OneFS creates an event to inform
system administrators, displays near real-time summary information, and documents
the threat in an antivirus scan report. You can configure OneFS to request that ICAP
servers attempt to repair infected files. You can also configure OneFS to protect users
against potentially dangerous files by truncating or quarantining infected files.
ICAP servers
The number of ICAP servers that are required to support an Isilon cluster depends on
how virus scanning is configured, the amount of data a cluster processes, and the
processing power of the ICAP servers.
If you intend to scan files exclusively through antivirus scan policies, it is
recommended that you have a minimum of two ICAP servers per cluster. If you intend
to scan files on access, it is recommended that you have at least one ICAP server for
each node in the cluster.
If you configure more than one ICAP server for a cluster, it is important to ensure that
the processing power of each ICAP server is relatively equal. OneFS distributes files to
the ICAP servers on a rotating basis, regardless of the processing power of the ICAP
servers. If one server is significantly more powerful than another, OneFS does not
send more files to the more powerful server.
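The sizing guidance above reduces to a small calculation. The following Python sketch applies it; the function name, and the choice to take the larger of the two minimums when on-access scanning is enabled, are assumptions rather than rules from this guide.

```python
def recommended_icap_servers(node_count, scan_on_access):
    """Minimum ICAP servers: two per cluster for policy-only scanning,
    or at least one per node when files are scanned on access."""
    if scan_on_access:
        return max(node_count, 2)
    return 2

print(recommended_icap_servers(node_count=8, scan_on_access=True))   # 8
print(recommended_icap_servers(node_count=8, scan_on_access=False))  # 2
```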
CAUTION
When files are sent from the cluster to an ICAP server, they are sent across the
network in cleartext. Make sure that the path from the cluster to the ICAP server
is on a trusted network.
To ensure an optimal data center, and the long-term health of the Isilon equipment,
prepare and maintain the environment as described in this section.
Note
The actual cabinet weight depends on the specific product configuration. Calculate
the total using the tools available at http://powercalculator.emc.com.
Power requirements
Depending on the cabinet configuration, the input AC power source is single-phase or three-phase, as listed in the Single-phase power connection requirements and Three-phase power connection requirements tables. The cabinet requires between two and six independent power sources. To determine the site requirements, use the published technical specifications and device rating labels to determine the current draw of each device in each rack, and then calculate the total current draw for each rack. For Dell EMC products, use the Dell EMC Power Calculator available at http://powercalculator.emc.com.
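The per-rack calculation described above is a simple sum, as in the following Python sketch. The device ratings and line voltage shown are placeholders; use the published technical specifications, device rating labels, or the Dell EMC Power Calculator for actual values.

```python
def rack_current_draw_amps(device_va, line_voltage=230.0):
    """Total current draw (amps) for one rack: sum of each device's volt-ampere
    rating divided by the line voltage."""
    return sum(device_va) / line_voltage

# Placeholder example: four chassis rated at 1450 VA each on a 230 V feed.
print(round(rack_current_draw_amps([1450, 1450, 1450, 1450]), 1))  # 25.2 A
```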
Circuit breakers  30 A  32 A
Power requirements at site (minimum to maximum)
- Single-phase: six 30 A drops, two per zone
- Three-phase Delta: two 50 A drops, one per zone
- Three-phase Wye: two 32 A drops, one per zone
Note
The options for the single-phase PDU interface connector are listed in the following
table, Single-phase AC power input connector options.
IEC-309 332P6
Circuit breakers 50 A 32 A
Note
The interface connector options for the Delta and Wye three-phase PDUs are listed in the following tables.
GARO P432-6
Note
The Isilon cluster might be qualified to operate outside these limits. Refer to the
product-specific documentation for system specifications.
Systems and components must not experience changes in temperature and humidity
that are likely to cause condensation to form on or in that system or component. Do
not exceed the shipping and storage temperature gradient of 45°F/hr (25°C/hr).
Requirement Description
Ambient temperature  -40°F to 149°F (-40°C to 65°C)
- Position adjacent cabinets with no more than two casters or leveling feet on a single floor tile.
- Cutouts in 24 in. x 24 in. (60 cm x 60 cm) tiles must be no more than 8 in. (20.3 cm) wide by 6 in. (15.3 cm) deep, and centered on the tiles, 9 in. (22.9 cm) from the front and rear and 8 in. (20.3 cm) from the sides. Cutouts weaken the tile, but you can minimize deflection by adding pedestal mounts adjacent to the cutout. The number and placement of additional pedestal mounts relative to a cutout must be in accordance with the floor tile manufacturer's recommendations.
Hardware acclimation
Systems and components must acclimate to the operating environment before power
is applied to them. Once unpackaged, the system must reside in the operating
environment for up to 16 hours to thermally stabilize and prevent condensation.
If the last 24 hours of the TRANSIT/STORAGE environment was:  ...and the OPERATING environment is:  ...then let the system or component acclimate in the new environment this many hours:
Temperature  Relative Humidity  Temperature  Acclimation Time
Damp  >30% RH
Humid  30–45% RH
Unknown
IMPORTANT:
- If there are signs of condensation after the recommended acclimation time has passed, allow an additional eight (8) hours to stabilize.
- System components must not experience changes in temperature and humidity that are likely to cause condensation to form on or in that system or component. Do not exceed the shipping and storage temperature gradient of 45°F/hr (25°C/hr).
- To facilitate environmental stabilization, open both front and rear cabinet doors.
2 Watt 4 meters
5 Watt 6 meters
7 Watt 7 meters
10 Watt 8 meters
12 Watt 9 meters
15 Watt 10 meters
CAUTION
If a node loses power, the NVRAM battery will sustain the cluster journal on the
NVRAM card for five days. If you do not restore power to the node after five
days, it is possible that you will lose data.
Power requirements
Power cords and connectors depend on the type ordered with your system, and must match the supply receptacles at your site.
56PA332  Right Angle  240 V ac, 50/60 Hz, 32-amp service, single phase  Australia
Each AC circuit requires a source connection that can support a minimum of 4800 VA
of single phase, 200-240 V AC input power. For high availability, the left and right
sides of any rack or cabinet must receive power from separate branch feed circuits.
Note
Each pair of power distribution panels (PDP) in the 40U-C cabinet can support a
maximum of 24 A AC current draw from devices connected to its power distribution
units (PDU). Most cabinet configurations draw less than 24 A AC power, and require
only two discrete 240 V AC power sources. If the total AC current draw of all the
devices in a single cabinet exceeds 24 A, the cabinet requires two additional 240 V
power sources to support a second pair of PDPs. Use the published technical
specifications and device rating labels to determine the current draw of each device in
your cabinet and calculate the total.
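As a worked example of the note above, the following Python sketch checks a cabinet's total current draw against the 24 A limit for a pair of PDPs. It is a simplification based on the figures in this section (4800 VA minimum per circuit, 24 A per PDP pair) and does not cover configurations beyond two PDP pairs.

```python
def power_sources_needed(total_amps, amps_per_pdp_pair=24.0):
    """A PDP pair supports up to 24 A; exceeding that requires two additional
    240 V sources for a second pair of PDPs (two sources vs. four)."""
    return 2 if total_amps <= amps_per_pdp_pair else 4

print(4800 / 200)                  # 24.0 A -- a 4800 VA circuit at 200 V
print(power_sources_needed(22.5))  # 2 discrete 240 V power sources
print(power_sources_needed(30.0))  # 4 (a second pair of PDPs is required)
```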