Fibrecat SHB en
Service Manual
Certified documentation
according to DIN EN ISO 9001:2000
To ensure a consistently high quality standard and
user-friendliness, this documentation was created to
meet the regulations of a quality management system which
complies with the requirements of the standard
DIN EN ISO 9001:2000.
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de
1 Preface .......................................... 9
2 System Architecture ............................. 11
9 Appendix ....................................... 115
Figures .......................................... 149
Tables ........................................... 151
Index ............................................ 155
CAUTION  Reference to hazards that can lead to personal injury, loss of data, or damage to equipment
Table 1: Notational Conventions
The settings on your browser might differ from these settings.
Figure 1 callouts: power supply, drive module, controller/expansion module, midplane.
The primary field replaceable units found in the controller/expansion enclosure are labeled
in Figure 1 and include:
● 2U controller/expansion enclosure and midplane (3.5 inches tall by 19 inches wide).
The midplane is replaced with the enclosure housing.
● Up to 12 SATA or SAS drive modules per enclosure. When a disk drive fails, the entire
drive module is replaced.
● Up to two controller/expansion modules. When a host or drive-side bus fault,
management controller fault, or storage controller fault related to the
controller/expansion module occurs, the entire module is replaced.
● Two redundant power and cooling modules. If a power supply fault or fan fault occurs,
the entire module must be replaced.
NOTE
Do not remove any field replaceable unit until the replacement is on hand.
Removing a field replaceable unit without a replacement will disrupt the system
airflow and cause an over-temperature condition.
2.2.1 Midplane
The midplane is the common connection point for all system electronics and is part of the
controller/expansion enclosure. All FRUs plug into this board. The drive modules plug into
the front through a dongle board. The power and cooling modules and controller(s) plug into
the rear. The upper controller (controller module A) and lower controller (controller module
B) connect to the midplane through two Molex SQEQ series connectors for signals and one
HDM series connector for power. The midplane is designed to support 3.0 Gbit/s SATA II and SAS
operation.
The midplane incorporates high-speed differential pair design layout rules to match
impedance, minimize skin effect losses, minimize transition losses through even mode
impedance changes at transitions, and minimize crosstalk.
The midplane uses a serial EEPROM to hold system serial number and WWN information.
The serial EEPROM is accessible by the I/O controller through an I2C connection. Another
I2C bus is used for the power supply and fan status/control functions, with the exception of the
Turn On and Mated states. These I2C busses are multiplexed from a single bus on the IOM.
2.2.2 Enclosure ID
The enclosure ID (EID) provides a visual single-digit numerical reference to each enclosure
in an array. It is located on the left mounting flange when you are facing the front of the array.
The array uses the SAS protocol for internal data routing; therefore, its devices are
addressed through their 64-bit world wide name (WWN). Although the WWN simplifies the
identification process internally, it is not user-friendly for visual device identification.
Because the WWN is used, there is no need for a selectable or equivalent mechanical
interface. Instead, the array uses an LED display on each enclosure in the system. The
value shown on the LED display serves as the EID. It is the responsibility of the controlling
member of the system, whether a host computer(s) or RAID controller(s), to set the EID on
each enclosure in the system.
The SCSI Enclosure Services (SES) chip on the I/O board obtains and sets the EID. A host
or RAID system manages the EID by setting bits 3-0 of byte 2 in either the A (top) or B
(bottom) SAS expander element when sending an SES enclosure control page.
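The bit manipulation involved in setting the EID can be sketched as follows. Only the placement of the value in bits 3-0 of byte 2 comes from the text above; the element length and everything else about the page layout are assumptions for illustration.

```python
# Sketch of setting the enclosure ID in a SAS expander element of an SES
# enclosure control page: bits 3-0 of byte 2 carry the single-digit EID.
# The 4-byte element size is a hypothetical placeholder.

def set_enclosure_id(element: bytearray, eid: int) -> bytearray:
    if not 0 <= eid <= 9:
        raise ValueError("EID is a single decimal digit")
    # Preserve bits 7-4 of byte 2, overwrite bits 3-0 with the EID.
    element[2] = (element[2] & 0xF0) | eid
    return element

element = bytearray(4)
set_enclosure_id(element, 3)
print(element[2] & 0x0F)   # -> 3
```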
Refer to “Updating Firmware” on page 96 for information about how the enclosure ID
changes when expansion modules are moved.
The following criteria define EID usage for an expansion enclosure behind a RAID system:
● The controller enclosure should always display zero (0) on its EID.
● An expansion enclosure attached to a controller enclosure should have a non-zero
value displayed on its EID.
● Each enclosure, within a single solution, should have a unique value displayed on its
EID.
● When one or more expansion enclosures are used, the RAID controllers within the
RAID system assign an ID to each enclosure.
● The RAID system uses a persistent algorithm to assign EIDs, so that they will not
change during simple reconfigurations.
● The values on the EID display can be used to correlate physical enclosures, and drives
within them, to logical views of the system provided by the FibreCAT SX Manager’s WBI
or CLI.
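The criteria above amount to a small consistency rule set, which can be expressed as a check. The function below is purely illustrative; it is not part of the FibreCAT firmware or software.

```python
# Illustrative check of the EID rules: the controller enclosure displays 0,
# expansion enclosures display non-zero values, and no value repeats
# within a single solution.

def validate_eids(controller_eid, expansion_eids):
    if controller_eid != 0:
        return False                      # controller enclosure must show 0
    if any(eid == 0 for eid in expansion_eids):
        return False                      # expansions must show non-zero
    all_eids = [controller_eid] + list(expansion_eids)
    return len(all_eids) == len(set(all_eids))   # all values unique

print(validate_eids(0, [1, 2]))   # valid layout
print(validate_eids(0, [1, 1]))   # invalid: duplicate EID
```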
Drive module components: dongle, disk drive, carrier.
Each drive module is inserted into a drive slot in the enclosure. The drive slots are used to
identify drives; for example, enclosure 0 slot 0 is the upper left drive slot of enclosure 0, the
RAID enclosure. Figure 3 displays each of the drive slot numbers in the enclosure. Drive
modules are slot independent, that is, the drives can be moved to any slot with the power
off. Once power is applied, the RAID controllers will use the metadata held on each disk to
locate each member of a virtual disk.
Because the SAS drives are natively dual ported and can fully utilize the dual path FibreCAT
SX architecture, the SAS dongle board only serves to make the drive module connector
compatible with the enclosure midplane.
The single-ported SATA drive’s dongle board makes the drive connector-compatible with
the midplane and includes an active/active (AA) multiplexer (MUX). The SATA AA MUX
enables a single-port drive to appear as a dual-port device on the midplane.
The storage controller (SC) consists of a processor subsystem which provides all RAID
functionality. The SC also provides the bridging functionality that takes in a Fibre Channel
signal and sends out a SAS signal to the back end drive bus.
The management controller (MC) is a separate processor subsystem. The MC provides all
out-of-band management features, including the FibreCAT SX Manager’s Web Based
Interface (WBI), SNMP, CLI, DMS and e-mail notification. The MC also hosts the
external serial ports and the Ethernet port.
Note that there are two primary processors: the SC and the MC. Both CPUs are
independent and, most importantly, one continues to operate if the other goes down. In
addition, because there are two CPUs, management functions have significantly less impact
on RAID I/O performance, which differentiates the storage product's architecture from
traditional approaches.
As illustrated in Figure 4, the controller module includes a number of high-speed serial
interfaces:
● SAS/SATA serial back-end disk channels (12 lanes per controller)
● SAS inter-controller alternate path (4 lanes)
● SAS disk channel expansion (4 lanes)
● PCI Express inter-controller messaging and write cache mirroring (4 lanes)
● FC serial front-end host channels (dual port per controller)
● Two FC serial connections between controllers used to facilitate controller failover (up
to 4 lanes)
Following the data path as it leaves the SC, the signal enters the SAS controller. It is then
sent from the controller to the SAS expander and then on to the drive module. The SAS
expander is much like a Fibre Channel switch in that it maintains a routing map and can
route data to the addressed destination. The expander ports connect to each disk slot. The
expander also connects to the failover (alternate) path and to the expansion path.
SCSI Enclosure Services (SES) intelligence controls LED indicators on the front and rear
panels to provide environmental and hardware status on enclosures and FRUs. The SES
controller also monitors the following:
2.4.3.1 Host Interface Speed for FibreCAT SX60 / SX80 in Direct Attached Configurations
NOTE
The following restriction applies to FibreCAT SX60 / SX80 in direct attached
configurations:
If your Host Interface Module (HIM) is Model 0 (or you have a mix of HIM Model 0
and Model 1 in a dual-controller FibreCAT controller enclosure), only 2 Gbit FC
speed is supported for FibreCAT SX60 / SX80 in direct connect mode.
If both HIMs in your controller enclosure are Model 1 (or you have only a single-controller
FibreCAT SX and it is Model 1), up to 4 Gbit FC speed is supported for FibreCAT SX60 /
SX80 in direct host connect mode.
For FibreCAT SX88, up to 4 Gbit FC speed is always supported in direct host connect
mode.
In switch attached mode, up to 4 Gbit FC speed is always supported for FibreCAT SX60 /
SX80 / SX88 (no restriction with any HIM Model).
If you have a direct attached configuration with FibreCAT SX60 / SX80, you should find out
the HIM Model (0 or 1) of your controller(s) via the controllers’ part numbers or via FibreCAT
SX Manager’s Web Based Interface:
● Part Number (see Fujitsu Siemens Computers’ identification label on the rear side of
each controller module)
FibreCAT SX60 HIM Model 1 has the part number 10600862818 only.
FibreCAT SX80 HIM Model 1 has the part number 10600862820 only.
● FibreCAT SX Manager’s WBI
1. Open FibreCAT SX Manager’s Web Based Interface.
2. Login as monitor or manage user.
3. In the MONITOR STATUS menu, click the advanced settings link (see screenshots below).
Here you can find out the HIM Model of your controller module(s):
Figure 7: Detecting the HIM Model with FibreCAT SX Manager’s WBI (Example with two HIM Models 0)
Figure 8: Detecting the HIM Revision with FibreCAT SX Manager’s WBI (Example with two HIM Models 1)
The diagram in Figure 10 illustrates how the array can be configured for disk drive
expansion. Additional configurations are available.
Airflow is controlled and optimized over the RAID I/O board and HIM in a similar manner.
The controller cover is used as an air duct to force air over the entire surface of the controller
from front to back, ensuring no dead air spaces, and increasing the velocity flow (LFM) by
controlling the cross-sectional area that the mass flow travels through.
Cooling for all hot components is passive. There are no fans in the system other than
those contained in the power and cooling modules.
2.5.2 Airflow
CAUTION
To allow for correct airflow and cooling, use an air management module for removed
disk drives and IOMs. Do not leave a FRU out of its slot for more than 2 minutes.
As noted above, the array's cooling system consists of four fans in a tandem parallel
array. These variable-speed fans provide low noise and high mass flow rates. Airflow is from
front to back. Each drive slot draws ambient air in at the front of the drive, sending air over
the drive surfaces and then through tuned apertures in the chassis midplane.
Note that the air-flow washes over the top and bottom surface of the disk drive at high mass
flow and velocity flow rates, so both sides of the drive are used for cooling. The air-flow
system uses a cavity in the chassis behind the midplane as an air-pressure equalization
chamber to normalize the negative pressure behind each of the disk drive slots. This
mechanism together with the tuned apertures in the midplane behind each drive assures
an even distribution of airflow and therefore LFM for each drive slot. This even cooling
extends the operational envelope of the system by ensuring no 'hot' drive bypass.
Further, airflow is “in line” with the top and bottom surfaces of the drive to reduce
back-pressure and optimize fan performance. All of the mass flow at room ambient is used for
cooling the 12 disk drives. The high velocity flow helps to lower the thermal resistance of
the disk drive assembly to ambient temperature. The temperature rise of the disk
drive depends upon the power consumed by the disk drive, which varies by drive model
as well as the level of drive activity.
Use FibreCAT SX Manager’s WBI to verify any faults found while viewing the LEDs.
FibreCAT SX Manager’s WBI is also a good tool to use in determining where the fault is
occurring if the LEDs cannot be viewed due to the location of the system. FibreCAT SX
Manager’s WBI provides you with a visual representation of the system and where the fault
is occurring. It can also provide more detailed information about FRUs, data, and faults. See
“Troubleshooting Using System LEDs” on page 29 for more information about LEDs.
Front enclosure LEDs: Unit Locator, Fault/Service Required, FRU OK, Temperature Fault.
Check the status LEDs as described in Table 2 periodically or after you have received an
error notification. It is important to note that more than one of the LEDs might display a fault
condition at the same time. For example, if a disk drive were to fail due to an exceedingly
high ambient temperature, the temperature fault LED and the fault/service LED both display
the fault. This functionality can help determine the cause of a fault in a FRU.
● Unit Locator (right ear, white; Unit Locator icon)
  Power-on state: On for 3–4 seconds, then Off.
  Operating state Off: Normal operation.
  Operating state Blink: Physically identifies the enclosure.
● Fault/Service Required (right ear, yellow; Fault/Service Required icon)
  Power-on state: On for 3–4 seconds, then Off.
  Operating state Off: No fault.
  Operating state On: An enclosure-level fault occurred. Service action is required. The
  event has been acknowledged but the problem needs attention.
● FRU OK (right ear, green; Power On/OK icon)
  Power-on state: On for 3–4 seconds, blink for up to 2 minutes during boot, then On.
  Operating state On: The enclosure is powered on with at least one power and cooling
  module operating normally.
  Operating state Off: Both power and cooling modules are off.
Table 2: Enclosure Status LEDs (Front)
4.1.2 Enclosure ID
A hex display on the left enclosure ear as shown in Figure 12 provides the enclosure ID.
The ID number it presents enables you to correlate a physical enclosure with logical views
presented in FibreCAT SX Manager’s WBI. The enclosure ID for a controller enclosure is
always zero (0); the enclosure ID for an attached expansion enclosure is always nonzero.
For more information about the Enclosure ID, see “Enclosure ID” on page 13.
Enclosure ID
The drive module LEDs are shown in Figure 13 and described in Table 3.
Figure 14: Host Link Status LEDs
If the host link status LED indicates that there is no link, review the event logs for indicators
of a specific fault in a host data path component. If you are unable to locate a specific fault
or are unable to access the event logs, halt all I/O and use the following procedure to isolate
the fault. The procedure requires scheduled downtime.
NOTE
Do not perform more than one step at a time. Changing more than one variable at
a time can complicate the troubleshooting process.
1. Halt all I/O.
2. Check the host activity LED.
If there is activity, halt all applications that access the array.
3. Reseat the SFP and FC cable.
Is the host link status LED on?
● Yes – Monitor the status to ensure that there is no intermittent error present. If the
fault occurs again, clean the connections to ensure that a dirty connector is not
interfering with the data path.
● No – Proceed to Step 4.
4. Move the SFP and cable to a port with a known good link status.
This step isolates the problem to the external data path (SFP, host cable, HBA) or to the
I/O controller module port.
Is the host link status LED on?
● Yes – You now know that the SFP, host cable, and HBA are functioning properly.
Return the SFP and cable to the original port. If the link status LED remains off, you
have isolated the fault to the controller module’s port. Replace the controller
module.
● No – Proceed to Step 5.
5. Swap the SFP with the known good one.
Is the host link status LED on?
● Yes – Replace the controller module. The fault has been isolated.
● No – Proceed to Step 6.
6. Place the original SFP back into the configuration and swap the cable with a known
good one.
Is the host link status LED on?
● Yes – Replace the original cable. The fault has been isolated.
● No – Proceed to Step 7.
7. Replace the HBA with a known good HBA, or move the host side cable and SFP to a
known good HBA.
Is the host link status LED on?
● Yes – You have isolated the fault to the HBA. Replace the HBA.
● No – It is likely that the controller module needs to be replaced.
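The isolation procedure above is essentially a fixed decision sequence: try one substitution at a time, and the first action that restores the link identifies the faulty component. A simplified sketch (it collapses the "return the SFP to the original port" sub-check of step 4 into its final verdict) might look like this; `link_up_after` is a placeholder for the technician checking the host link status LED, not a real API.

```python
# Simplified sketch of the host link fault isolation sequence.
# Each entry pairs a troubleshooting step with the verdict that applies
# if the link comes up after that step.

STEPS_AND_VERDICTS = [
    ("reseat the SFP and FC cable",
     "monitor; clean connectors if the fault recurs"),
    ("move the SFP and cable to a known-good port",
     "replace the controller module (original port is faulty)"),
    ("swap the SFP with a known-good one",
     "replace the controller module"),
    ("swap the cable with a known-good one",
     "replace the original cable"),
    ("replace the HBA with a known-good HBA",
     "replace the HBA"),
]

def isolate_host_link_fault(link_up_after):
    """Walk the steps in order; stop at the first one that restores the link."""
    for step, verdict in STEPS_AND_VERDICTS:
        if link_up_after(step):
            return verdict
    return "replace the controller module"

# Example: the link only comes back after the cable swap.
print(isolate_host_link_fault(lambda step: "cable with a known-good" in step))
```

Changing one variable per step is what makes the verdict unambiguous; if two components were swapped at once, a restored link would not tell you which one was at fault.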
If the expansion port status LED indicates that there is no link, review the event logs for
indicators of a specific fault. If you are unable to locate a specific fault or are unable to
access the event logs, halt all I/O and use the following procedure to isolate the fault. The
procedure requires scheduled downtime.
NOTE
Do not perform more than one step at a time. Changing more than one variable at
a time can complicate the troubleshooting process.
1. Halt all I/O.
2. Check the host activity LED.
7. Replace the cable with a known good cable, ensuring the cable is attached to the
original ports used by the previous cable.
Is the host link status LED on?
● Yes – Replace the original cable. The fault has been isolated.
● No – It is likely that the controller module needs to be replaced.
Power and cooling module rear LEDs: AC Power Good, DC Voltage/Fan Fault/Service Required.
● No Response Count – Number of times the drive failed to respond to an I/O request.
A high value can indicate that the drive is too busy to respond to further requests.
● Spin-up Retries – Number of times the drive failed to start on power-up or on
software request. Excessive spin-up retries can indicate that a drive is close to
failing.
● Media Errors – Number of times the drive had to retry an I/O operation because the
media did not successfully record or retrieve the data.
● Non Media Errors – Number of soft, recoverable errors not associated with drive
media.
● Bad Block Reassignments – Number of block reassignments that have taken place
since the drive was shipped from the vendor. A large number of reallocations in a
short period of time could indicate a serious condition.
● Bad Block List Size – Number of blocks that have been deemed defective either
from the vendor or over time due to reallocation.
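The counters above lend themselves to simple trend checks. The sketch below flags counters that exceed a limit; the numeric thresholds are invented for illustration, since the manual does not define limits.

```python
# Illustrative drive-health check over the counters described above.
# The threshold values are assumptions, not vendor-specified limits.

WATCH_THRESHOLDS = {
    "no_response_count": 10,        # drive too busy to respond
    "spin_up_retries": 3,           # drive close to failing
    "media_errors": 50,             # retried I/O due to media problems
    "bad_block_reassignments": 20,  # many reallocations in a short time
}

def counters_to_watch(drive_stats):
    """Return the names of counters that exceed their illustrative threshold."""
    return sorted(
        counter for counter, limit in WATCH_THRESHOLDS.items()
        if drive_stats.get(counter, 0) > limit
    )

stats = {"spin_up_retries": 5, "media_errors": 12}
print(counters_to_watch(stats))   # ['spin_up_retries']
```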
If a PHY becomes disabled, the event log entry helps to determine which enclosure or
enclosures and which controller (or controllers) are affected.
a) Review each connection represented by a line between the device and Controller A
or Controller B.
Any red lines indicate a fault.
The message in the event log helps you determine which enclosure or enclosures and
which controller or controllers are affected. The reason can be Err Count Interrupts,
Externally Disabled, Ctrl Page Disabled, or Unknown Reason.
● A faulty SFP
● A faulty port in the host interface module
● A disconnected cable
3. To target the cause of the link failure, view the FC port details by clicking on a port in
the graphical view and then reviewing the details listed below it.
The data displayed includes:
● Host Port Status Details – Selected controller module and port number.
● SFP Detect – SFP Present or No SFP Present.
● Receive Signal – Present or Not Present.
● Link Status – Active or Inactive.
● Signal Detect – No Signal or Signal Detected.
● Topology – Loop. If the loop is active, shows Private Loop or Public Loop.
● Speed – 2 Gbit/sec or 4 Gbit/sec as set in FibreCAT SX Manager’s WBI. To
change this setting for host ports, go to the
Manage > General Config > Host Port Configuration page.
● FC Address – 24-bit FC address or Unavailable if the FC link is not active.
● Node WWN – FC World Wide Node Name (WWNN).
● Port WWN – FC World Wide Port Name (WWPN).
5.7.1 Enabling and Using the Trust Virtual Disk for Disaster Recovery
If a virtual disk appears to be down or offline (not quarantined) and the disks are labeled
“Leftover”, use the Trust Virtual Disk function to recover the virtual disk. The Trust Virtual
Disk function brings a virtual disk back online by ignoring metadata that indicates the drives
may not form a coherent virtual disk. This function can force an offline virtual disk to be
critical or fault tolerant, or a critical virtual disk to be fault tolerant. You might need to do this
when:
● A drive was removed or was marked as failed in a virtual disk due to circumstances you
have corrected (such as accidentally removing the wrong disk). In this case, one or
more disks of a virtual disk can start up more slowly, or might have been powered on
after the rest of the disks in the virtual disk. This causes the date and time stamps to
differ, which the array interprets as a problem. Also see “Dequarantining a Virtual Disk”
on page 62.
● A virtual disk is offline because a drive is failing, you have no data backup, and you want
to try to recover the data from the virtual disk. In this case, the Trust Virtual Disk function
might work, but only as long as the failing drive continues to operate.
CAUTION
If used improperly, the Trust Virtual Disk feature can cause unstable operation and
data loss. Only use this function for disaster recovery purposes and when advised
to do so by a service technician. The virtual disk has no tolerance for any additional
failures.
To trust a virtual disk, first enable the Trust Virtual Disk function and then use it:
1. Select Manage > Utilities > Recovery Utilities > Enable Trust Virtual Disk.
2. Select Enabled.
3. Click Enable/Disable Trust Virtual Disk.
The option is enabled only until you use it. After you trust a virtual disk, the option
reverts to disabled.
4. Select Manage > Utilities > Recovery Utilities > Trust Virtual Disk.
5. Select the array and click Trust This Array.
6. Back up the data from all the volumes residing on this virtual disk and audit it to make
sure that it is intact.
7. Verify the virtual disk using the verify utility. While verify is running, any new data written
to any of the volumes on the virtual disk will be written in a parity-consistent way. Select
Manage > Virtual Disk Config > Verify Virtual Disk.
NOTE
If the virtual disk does not come back online, it might be that too many disks are
offline or the virtual disk might have additional failures on the bus or enclosure that
Trust Virtual Disk cannot fix.
settings have higher precedence for enabling events than individual event selection. If the
critical event category is selected, all critical events cause a notification regardless of the
individual critical event selection. You can select individual events to fine-tune notification
either instead of or in addition to selecting event categories. For example, you can select
the critical event category to be notified of all critical events, and then select additional
individual warning and informational events.
To select events for notification:
1. Select Manage > Event Notification > Select Individual Events.
2. From the Manage menu, select the type of individual event you want to track:
● Critical Events. Represent serious device status changes that might require
immediate intervention.
● Warning Events. Represent device status changes that might require attention.
● Informational Virtual Disk Events. Represent device status changes related to
virtual disks that usually do not require attention.
● Informational Drive Events. Represent device status changes related to disk drives
that do not require attention.
● Informational Health Events. Represent device status changes related to the array’s
health that usually do not require attention.
● Informational Status Events. Represent device status changes related to the array’s
status that usually do not require attention.
● Informational Configuration Events. Represent device status changes related to the
array’s configuration that usually do not require attention.
● Informational Miscellaneous Events. Represent device status changes related to
informational events that usually do not require attention.
3. Select events by clicking the corresponding check box in the column.
4. For each event you want to be notified of, select a notification method.
For a description of each notification method, refer to the “FibreCAT SX60 / SX80 /
SX88 Administrator’s Guide”.
5. Click Change Events to save your changes.
The array quarantines a virtual disk (shown by the Quarantined Virtual Disk
icon) if it does not see all of the virtual disk’s drives in these cases:
● After restarting one or both controllers, typically after powering up the array, or after a
failover
● After inserting a disk drive that is part of a virtual disk from another controller/disk
enclosure combination
The virtual disk can be fully recovered if the missing disk drives can be restored. Make sure
that no disk drives have been inadvertently removed and that no cables have been
unplugged. Sometimes not all drives in the virtual disk power up. Check that all enclosures
disk recovers and no data is lost.
The quarantined virtual disk’s drives are “write locked,” and the virtual disk is not available
to hosts until the virtual disk is dequarantined. The array waits indefinitely for the missing
drive. If the drive does spin up, the array automatically dequarantines the virtual disk. If the
drive never spins up, because it has been removed or has failed, you must dequarantine the
virtual disk manually.
If the missing drives cannot be restored (for example, a failed drive), you can use the
dequarantine function to restore operation in some cases. If the virtual disk is fault-tolerant
and is not missing too many drives, dequarantining it brings it back up in a critical
state. If a spare of the appropriate size is available, it is used to reconstruct a critical virtual
disk.
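The possible outcomes of a dequarantine described above can be summarized as a small state function. The single-drive fault tolerance used as the default here (as in RAID 5) is an assumption for illustration; the actual tolerance depends on the RAID level of the virtual disk.

```python
# Sketch of the dequarantine outcomes: a fault-tolerant virtual disk
# missing few enough drives comes back critical (and can rebuild onto a
# spare); missing too many, it comes back offline and its data is not
# recoverable. tolerated_failures=1 models single-parity RAID (assumption).

def state_after_dequarantine(missing_drives, tolerated_failures=1):
    if missing_drives == 0:
        return "fault tolerant"
    if missing_drives <= tolerated_failures:
        return "critical"   # a spare of appropriate size can reconstruct it
    return "offline"        # too many drives missing

print(state_after_dequarantine(0))   # fault tolerant
print(state_after_dequarantine(1))   # critical
print(state_after_dequarantine(2))   # offline
```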
NOTE
After you dequarantine the virtual disk, make sure that a spare drive is available to
let the virtual disk reconstruct.
CAUTION
If the virtual disk does not have enough drives to continue operation when a
dequarantine is done, the virtual disk comes back up in an offline state and its data
is not recoverable.
To dequarantine a virtual disk:
1. Select Manage > Utilities > Recovery Utilities > Virtual Disk Quarantine.
2. Select the array you want to dequarantine.
3. Click Dequarantine Selected Virtual Disk.
Problem: You cannot access FibreCAT SX Manager’s WBI.
Solution: Verify that you entered the correct IP address. Enter the IP address using the
format http://ip-address/index.html. If the array has two controllers, enter the IP address
of the partner controller.

Problem: FibreCAT SX Manager’s WBI pages do not display properly.
Solution: Configure your browser according to the information contained in the
“FibreCAT SX60 / SX80 / SX88 Administrator’s Guide”. Click Refresh or Reload in your
browser to display the most current FibreCAT SX Manager’s WBI information. Be sure
that someone else is not accessing the array using the CLI. It is possible for someone
else to change the array’s configuration using the CLI; the other person’s changes might
not display in FibreCAT SX Manager’s WBI until you refresh the page. If you are using
Internet Explorer, clear the following option: Tools > Internet Options > Accessibility >
Ignore Colors specified on web pages. Prevent FibreCAT SX Manager’s WBI pages from
being cached by disabling web page caching in your browser.

Problem: Menu options are not available.
Solution: User configuration affects the FibreCAT SX Manager’s WBI menu. For
example, diagnostic functions are available only to users with Diagnostic access
privileges. Refer to the “FibreCAT SX60 / SX80 / SX88 Administrator’s Guide” for
information on user configuration and setting access privileges.

Table 10: Problems Accessing the Array Using FibreCAT SX Manager’s WBI
Problem: All user profiles have been deleted and you cannot log into FibreCAT SX
Manager’s WBI or the CLI with a remote connection.
Solution:
1. Use a terminal emulator (such as Microsoft HyperTerminal) to connect to the system.
2. In the emulator, press Enter to display the serial CLI prompt (#). No password is
required because the local host is expected to be secure.
3. Use the create user command to create new users. For information about using the
command, enter create user ? or refer to the “FibreCAT SX Manager Command Line
Interface (CLI)” manual.
3. Review the events that occurred before and after the primary event.
During this review you are looking for any events that might indicate the cause of the
critical/warning event. You are also looking for events that resulted from the
critical/warning event, known as secondary events.
4. Review the events following the primary and secondary events.
You are looking for any actions that might have already been taken to resolve the event.
● Custom Debug Tracing – Shows that specific events are selected for inclusion in the
log.
3. Click Change Debug Logging Setup.
4. If instructed by service personnel, click Advanced Debug Logging Setup Options and
select one or more additional types of events.
Under normal conditions, you should not select any of these options because they have
a slight impact on read/write performance.
Fans 2 and 3 belong to power and cooling module 1; fans 0 and 1 belong to power and
cooling module 0.
Figure 25: Power and Cooling Module and Cooling Fan Locations
During a shutdown, the cooling fans do not shut off. This allows the unit to continue cooling.
The order numbers of potentially additional or changed FRUs can be retrieved via the
spare part information tool Ersin.
Figure 28: Querying the Serial Number Remotely (example, showing a FibreCAT SX80)
Figure 29: Checking the First Four Letters of the Serial Number (example, showing a FibreCAT SX80 controller
enclosure)
Figure 30: First Four Letters of the FibreCAT SX Serial Number in Ersin (example, showing a FibreCAT SX80
controller enclosure)
In this case, please print it out and attach one tag to each returned FRU. It is strongly
recommended that all replaced FRUs shipped back to the repairer have a filled-out
Field Return Tag attached.
Problem: Flash write failure (event code 157)
Solution: The controller needs to be replaced.

Problem: Firmware mismatch (event code 89)
Solution: The downlevel controller needs to be upgraded.

Table 20: Controller Module or Expansion Module Faults
CAUTION
In a dual-controller configuration, both controllers must have the same cache size.
If the new controller has a different cache size, controller A will boot and controller
B will not boot. To view the cache size, select
Monitor > Advanced Settings > Controller Version.
If you use FibreCAT SX Manager’s WBI to save the configuration settings, the file will
contain all FibreCAT SX configuration data, including the following settings:
● FC host port
● Enclosure management
● Options
● Disk
● LAN
● Service security
● Remote notification
NOTE
The configuration file does not include any virtual disk or volume information. You
do not need to save this information before replacing the controller or expansion
module because it is saved to a special area on the disk drive.
To save your array’s configuration data to a file on the management host or another host on
your network using FibreCAT SX Manager’s WBI, perform the following steps:
1. Connect to the FibreCAT SX from FibreCAT SX Manager’s WBI using the IP address
for one of the controller modules.
2. Select Manage > Utilities > Configuration Utilities > Save Config File.
3. Click Save Configuration File.
4. If prompted to open or save the file, click Save.
5. Specify the file location and name, using a .config extension.
The default file name is saved_config.config.
NOTE
i If you are using Firefox and have a download directory set, the file is automatically
saved to it.
CAUTION
! To ensure continuous availability of the system, make sure that the other controller
module is online before shutting down a controller module. Use FibreCAT SX
Manager’s WBI to check the other controller module’s status.
To shut down a controller module using FibreCAT SX Manager’s WBI, perform the following
steps:
1. Select Manage > Restart System > Shut Down/Restart.
2. In the Shut Down panel, select the controller module you want to shut down.
3. Click Shut Down.
A warning might appear that data access redundancy will be lost until the selected
controller is restarted. This is an informational message that requires no action.
4. Click OK.
FibreCAT SX Manager’s WBI shows that the module is shut down.
You only need to use the Shut Down function for controller modules. The blue OK to
Remove LED illuminates to indicate that the module can be removed safely.
4. Use FibreCAT SX Manager’s WBI to illuminate the Unit Locator LED for the enclosure
where you want to replace the module.
a) Select Manage > General Config > Enclosure Management.
b) Click Illuminate Locator LED.
5. Physically locate the module with the Unit Locator LED blinking.
On the Enclosure Management page, look at the System Panel at the bottom of the
page. This panel shows the status of the controllers. In the enclosure, controller A is
always on top, and controller B is always on the bottom.
6. If the controller module is connected to an expansion enclosure, disconnect the SAS
cables from the controller module before removing the controller.
7. Turn the thumbscrew on each ejector handle (see Figure 33) counterclockwise until the
screw disengages from the module.
Do not remove the screw from the handle.
Figure 33: Location of Controller/Expansion Module Ejector Thumbscrews (Controller Modules Shown). The figure’s callouts identify the ejector handles and thumbscrews on Controller A (top) and Controller B (bottom).
8. Rotate both ejector handles downward, supplying leverage to disengage the module
from the interior connector.
NOTE
i If you have not already shut down the module, ejecting it forces the module offline
regardless of the state of the other module.
9. Pull outward on the ejector handles to slide the module out of the chassis.
4. Rotate the ejector handles upward until they are flush with the top edge of the module,
and turn the thumbscrews on each ejector handle clockwise until they are finger-tight.
The OK LED illuminates green when the module completes its initialization and is
online.
NOTE
i If partner firmware update is selected, when you install a new controller module, the
controller module with the oldest firmware will update itself with the newer firmware
on the other controller.
If the Fault/Service Required yellow LED is illuminated, the module has not gone online and
likely failed its self-test. Try to put the module online (see “Shutting Down a Controller
Module” on page 92) or check for errors that were generated in the event log from FibreCAT
SX Manager’s WBI.
When powering on the controllers, if a boot handshake error occurs, try turning off both
controllers for two seconds and then powering them back on. If this does not correct the
error, remove and replace each controller following the instructions in “Removing a
Controller Module or Expansion Module” on page 93.
When a new controller module is installed, it automatically downloads the firmware from
the controller with the most recent firmware (partner firmware upgrade). If told to do so by
a service technician, you can disable the partner firmware upgrade function using
FibreCAT SX Manager’s WBI.
The partner firmware upgrade option is enabled by default in FibreCAT SX Manager’s WBI.
Only disable this function if told to do so by a service technician.
1. Select Manage > General Config > System Configuration.
2. For Partner Firmware Upgrade, select Disable.
A Code Load Progress window shows the progress of the update, which can take
several minutes to complete; the update procedure for one controller can take up to
30 minutes. Do not power off the array during the code load process. Once the
firmware upload is complete, the controller resets, after which the opposite controller
automatically repeats the process to load the new firmware. When the update
completes on the connected controller, you are logged out. Wait one minute for the
controller to start, and then click Log In to reconnect to FibreCAT SX Manager’s WBI.
If an enclosure firmware update is necessary (see release notes), you must update the
enclosure firmware from Controller A and Controller B.
Problem: Impending disk drive failure (event codes 55 and 8)
Solution: Replace the disk before it fails. Ensure that the virtual disk that includes this disk is fault tolerant. If it is not, add a spare disk to the FibreCAT SX; the virtual disk will automatically use the spare when the failed disk is removed.

Table 23: Disk Drive Problems
4. Replace the failed module by following the instructions in “Removing a Drive Module”
on page 106.
You can also use the CLI show enclosure-status command. If the drive status is “Absent”,
the drive might have failed or it might have been removed from the chassis. For details on
the show enclosure-status command, refer to the “FibreCAT SX Manager Command Line
Interface (CLI)” manual.
Status: The status of the virtual disk that originally had the failed drive is Good. A global or virtual disk (dedicated) spare has been successfully integrated into the virtual disk, and the replacement drive module can be assigned as either a global spare or a virtual disk spare.
Action: Use FibreCAT SX Manager’s WBI to assign the new drive module as either a global spare or a vdisk spare: select Manage > Virtual Disk Config > Global Spare Menu.

Status: The status of the disk drive just installed is LEFTOVER.
Action: All of the member disk drives in a virtual disk contain metadata in the first sectors. The array uses the metadata to identify virtual disk members after restarting or replacing enclosures. Use FibreCAT SX Manager’s WBI to clear the metadata if you have a disk drive that was previously a member of a virtual disk. After you clear the metadata, you can use the disk drive in a virtual disk or as a spare: select Manage > Utilities > Disk Drive Utilities > Clear Metadata, select the disk, and click Clear Metadata for Selected Disk Drives.

Table 25: Disk Drive Status
Status: The status of the virtual disk that originally had the failed drive is FATAL FAIL. Two or more drive modules have failed.
Action: All data in the virtual disk is lost. Use the FibreCAT SX Manager’s WBI Trust Virtual Disk function to attempt to bring the virtual disk back online: select Manage > Utilities > Recovery Utilities > Trust Virtual Disk. Note: You must be a Diagnostic Manage-level user to access the Trust Virtual Disk submenu. Refer to the “FibreCAT SX60 / SX80 / SX88 Administrator’s Guide” for more information on access privileges.

Status: The status of the virtual disk that originally had the failed drive is DRV ABSENT or INCOMPLETE. These status indicators only occur when the enclosure is initially powered up. DRV ABSENT indicates that one drive module is bad; INCOMPLETE indicates that two or more drive modules are bad.
Action: See “Verify that the Correct Power-On Sequence was Performed” on page 109. If the power-on sequence was correct, locate and replace the additional failed drive modules.

Status: The status of the virtual disk that originally had the failed drive indicates that the virtual disk is being rebuilt.
Action: Wait for the virtual disk to complete its operation.

Status: The status of the virtual disk that originally had the failed drive is DRV FAILED.
Action: If this status occurs after you replace a defective drive module with a known good drive module, the enclosure midplane might have experienced a failure. Replace the enclosure.

Table 25: Disk Drive Status (continued)
Problem: Expanding a virtual disk requires days to complete.
Solution: In general, expanding a virtual disk can take days to complete, and you cannot stop the expansion once it is started. If you have an immediate need, create a new virtual disk of the size you want, transfer your data to the new virtual disk, and delete the old virtual disk.

Problem: Failover causes a virtual disk to become critical when one of its drives “disappears.”
Solution: In general, controller failover is not supported if a disk drive is in an expansion enclosure that is connected with only one cable to the controller enclosure, because access to the expansion enclosure is lost if the controller to which it is connected fails. When the controller with the direct connection to the expansion enclosure comes back online, access to the expansion enclosure drives is restored. To avoid this problem, ensure that two cables are used to connect the enclosures as shown in the “FibreCAT SX60 / SX80 / SX88 Operating Manual” and that the cables are connected securely and are not damaged. If the problem persists or affects a disk drive in a controller enclosure, a hardware problem might have occurred in the drive module, dongle, midplane, or controller module. Identify and replace the FRU where the problem occurred.

Problem: A virtual disk is much smaller than it should be.
Solution: Verify that the disk drives within the virtual disk are all the same size. The virtual disk capacity is limited by the smallest disk.

Problem: Volumes in the virtual disk are not visible to the host.
Solution: Verify that the volumes are mapped to the host using FibreCAT SX Manager’s WBI: Manage > Volume Management > Volume Mapping > Map by Volume.

Problem: Virtual disk degraded (event codes 58 and 1, or event codes 8 and 1)
Solution: Replace the failed disk drive and add the replacement drive as a spare to the critical virtual disk. If you have dynamic spares enabled, you only need to replace the drive; the system will automatically reconstruct the virtual disk.

Problem: Virtual disk failure (event codes 58 and 3, or event codes 8 and 3)
Solution: Replace the bad disk drive and restore the data from backup.

Problem: Virtual disk quarantined (event code 172)
Solution: Ensure that all drives are turned on. When the vdisk is de-quarantined, event code 79 is returned.

Problem: Spare disk failure (event code 62)
Solution: Replace the disk. If this disk was a dedicated spare for a vdisk, assign another spare to the vdisk.

Problem: Spare disk unusable (event code 78)
Solution: The disk might not have enough capacity for the vdisk. Replace the spare with a disk whose capacity is equal to or greater than that of the smallest disk in the vdisk.

Problem: Mixed drive type errors
Solution: Virtual disks do not support mixed drive types. Verify that the drives in the virtual disk are of the same type (SATA or SAS) and that they have the same capacity. If you attempt to build a virtual disk with mixed drive types, you will receive an error. If you attempt to build a virtual disk with various sized disk drives, a warning is displayed and the capacity of the smallest disk is applied to all others.

Table 26: Virtual Disk Faults
NOTE
i When a power supply fails, the fans of the module continue to operate because they
draw power from the power bus located on the midplane.
Once a fault is identified in the power and cooling module, you need to replace the entire
module.
CAUTION
! Because removing the power and cooling module significantly disrupts the
enclosure’s airflow, do not remove the power and cooling module until you have the
replacement module.
Table 27 lists possible power and cooling module faults.
Fault: Power supply fan warning or failure, or power supply warning or failure (event code 168)
Solution: Check that all of the fans are working using FibreCAT SX Manager’s WBI. Make sure that no slots are left open for more than two minutes. If you need to replace a module, leave the old module in place until you have the replacement, or use a blank cover to close the slot. Leaving a slot open negatively affects the airflow and might cause the unit to overheat. Make sure that the controller modules are properly seated in their slots and that their latches are locked.

Fault: Power and cooling module status is listed as failed, or you receive a voltage event notification (event code 168)
Solution: Check that the switch on each power and cooling module is turned on. Check that the power cables are firmly plugged into both power and cooling modules and into an appropriate electrical outlet. Replace the power and cooling module.

Fault: AC Power LED is off.
Solution: Same as above.

Fault: DC Voltage & Fan Fault/Service LED is on.
Solution: Replace the power and cooling module.

Table 27: Power and Cooling Module Faults
CAUTION
! When you remove a power and cooling module, install the new module within two
minutes of removing the old module. The enclosure might overheat if you take more
than two minutes to replace the power and cooling module.
To remove a power and cooling module from an enclosure, perform the following steps:
1. Follow all static electricity precautions as described in “Static Electricity Precautions” on
page 89.
2. Set the power switch on the module to the Off position.
3. Disconnect the power cable.
4. Turn the thumbscrew at the top of the latch (see Figure 37) counterclockwise until the
thumbscrew is disengaged from the power and cooling module.
Do not remove the thumbscrew from the latch.
Figure 37: Removing the Power and Cooling Module from the Chassis. The figure’s callouts identify the thumbscrew and the latch.
5. As shown in Figure 37, rotate the latch downward to about 45 degrees, supplying
leverage to disconnect the power and cooling module from the internal connector.
6. Use the latch to pull the power and cooling module out of the chassis.
NOTE
i Do not lift the power and cooling module by the latch. This could break the latch.
Hold the power and cooling module by the metal casing.
218 (Warning): The super-capacitor pack is near end of life. Recommended action: a service technician must replace the super-capacitor pack in the controller reporting this event.

220 (Informational): Master volume rollback operation has started.

221 (Informational): All master volume partitions have been deleted.

222 (Informational): Setting of the policy for the backing store is complete. Policy is the action to be taken when the backing store hits the threshold level.

223 (Informational): The threshold level for the backing store has been set. Threshold is the percent value of the backing store to be set to handle the out-of-space issue; the options are warning, error, and critical. To summarize, policy is the action taken depending on the threshold value.

224 (Informational): A background master volume rollback operation has completed.

225 (Critical): Background master write copy-on-write operation has failed. There was an internal I/O error; the write operation to the disk could not be completed.

226 (Critical): A background master volume rollback failed to start due to inability to initialize the snap pool. All rollback is in a suspended state. Recommended action: check that the backing store and the array on which this partition exists are online, and restart the operation.

227 (Critical): Failure to execute rollback for a particular portion of the master volume. Recommended action: restart the rollback operation.

228 (Critical): Background rollback for a master volume failed to end due to inability to initialize the snap pool. All rollback is in a suspended state. Recommended action: check that the backing store and the array on which this partition exists are online, and restart the operation.

230 (Warning): The snap pool has reached the error threshold. The snap pool will behave as per the policy set for the backing store.

231 (Warning): The snap pool has reached the critical threshold. The snap pool will behave as per the policy set for the backing store.
Some commands accept a comma-separated list of virtual disk serial numbers and names.
Do not include spaces before or after commas. The following virtual disk list specifies a
serial number and two names:
00c0ff0a43180048e6dd1c4500000000,Sales/Mktg,"Vdisk #1"
AA43BF501234560987654321FEDCBA,Image-Data,"Vol #1"
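As a sketch of how such a list decomposes, the following Python snippet (a hypothetical helper for illustration, not part of the product software) splits a virtual disk list while honoring double-quoted names:

```python
import csv

def split_vdisk_list(arg):
    """Split a comma-separated list of virtual disk serial numbers and
    names. Names that contain spaces are enclosed in double quotes;
    no spaces surround the commas themselves."""
    # csv.reader handles the double-quoted items and strips the quotes
    return next(csv.reader([arg]))

items = split_vdisk_list('00c0ff0a43180048e6dd1c4500000000,Sales/Mktg,"Vdisk #1"')
# items == ['00c0ff0a43180048e6dd1c4500000000', 'Sales/Mktg', 'Vdisk #1']
```

The same decomposition applies whether the items are serial numbers or names, since the CLI accepts both in one list.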
You can specify a nickname for a data host’s host bus adapter (HBA). A nickname is a user-defined string of up to 16 printable ASCII characters; for example, MyHBA. A name cannot include a comma, backslash (\), or quotation mark ("); however, a name that includes a space must be enclosed in quotation marks. Names are case-sensitive.
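These rules can be checked mechanically. The sketch below (a hypothetical validator, not part of the CLI) encodes them in Python:

```python
def valid_nickname(name):
    """Validate an HBA nickname against the documented rules:
    1-16 printable ASCII characters, with no comma, backslash,
    or double quote. A name containing a space is allowed but
    must be quoted on the command line."""
    if not 1 <= len(name) <= 16:
        return False
    if any(c in ',\\"' for c in name):
        return False
    # printable ASCII runs from space (0x20) to tilde (0x7E)
    return all(' ' <= c <= '~' for c in name)

print(valid_nickname("MyHBA"))    # True
print(valid_nickname('My"HBA'))   # False
```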
You specify the mapping of a host to a volume by using the syntax channels.LUN, where:
● channels is a single host channel number or a list of host channel numbers, ranges,
or both. For example, 0,1,3-5.
● LUN is a logical unit number (LUN) from 0–127 to assign to the mapping. For example, 8.
A complete mapping therefore looks like 0-1.8, which assigns the volume to LUN 8 on host channels 0 and 1.
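To illustrate how the syntax decomposes, the following Python sketch (hypothetical, for illustration only) parses a channels.LUN mapping string into its channel list and LUN:

```python
def parse_mapping(spec):
    """Parse a host-to-volume mapping of the form channels.LUN,
    where channels is a channel number, a comma-separated list,
    ranges, or both (e.g. 0,1,3-5) and LUN is 0-127."""
    channels_part, lun_part = spec.rsplit(".", 1)
    lun = int(lun_part)
    if not 0 <= lun <= 127:
        raise ValueError("LUN must be in the range 0-127")
    channels = set()
    for item in channels_part.split(","):
        if "-" in item:
            # expand a range such as 3-5 into 3, 4, 5
            low, high = (int(n) for n in item.split("-"))
            channels.update(range(low, high + 1))
        else:
            channels.add(int(item))
    return sorted(channels), lun

print(parse_mapping("0-1.8"))      # ([0, 1], 8)
print(parse_mapping("0,1,3-5.8"))  # ([0, 1, 3, 4, 5], 8)
```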
# help
# help command
# command ?
To view information about the syntax to use for specifying disk drives, virtual disks, volumes,
and volume mapping, type:
# help syntax
9.3.5 ping
Tests communication with a remote host. The remote host is specified by IP address. Ping
sends ICMP echo request packets and waits for replies.
For details about using ping, see the “CLI” manual.
9.3.7 restart
Restarts the RAID controller or the management controller in either or both controller
modules.
If you restart a RAID controller, it attempts to shut down with a proper failover sequence,
which includes stopping all I/O operations and flushing the write cache to disk, and then the
controller restarts. The management controllers are not restarted so they can provide
status information to external interfaces.
If you restart a management controller, communication with it is temporarily lost until it
successfully restarts. If the restart fails, the partner management controller remains active
with full ownership of operations and configuration information.
CAUTION
! If you restart both controller modules, you and users lose access to the system and
its data until the restart is complete.
For details about using restart, see the “CLI” manual.
CAUTION
! This command changes how the system operates and might require some
reconfiguration to restore host access to volumes.
For details about using restore defaults, see the “CLI” manual.
Output
Field Description
host Host interface debug messages
disk Disk interface debug messages
mem Internal memory debug messages
fo Failover/recovery debug messages
msg Inter-controller message debug messages
fca, fcb, fcc, fcd Four levels of Fibre Channel driver debug messages
misc Internal debug messages
rcm Removable-component manager debug messages
raid RAID debug messages
cache Cache debug messages
emp Enclosure Management Processor debug messages
capi Internal Configuration API debug messages
mui Internal service interface debug messages
bkcfg Internal configuration debug messages
awt Auto-write-through feature debug messages
res2 Internal debug messages
capi2 Internal Configuration API tracing debug messages
dms Snapshot feature debug messages
Table 30: Debug Log Parameters
For details about using show debug-log-parameters, see the “CLI” manual.
Output
Field Description
Type The component type:
Fan: Cooling fan unit
PSU: Power supply unit
Temp: Temperature sensor
Voltage: Voltage sensor
DiskSlot: Disk drive module
# Unit ID
Status Component status:
Absent: Component is not present
Fault: One or more subcomponents has a fault
OK: All subcomponents are operating normally
N/A: Status is not available
FRU P/N Part number of the field-replaceable unit (FRU) that contains the component
FRU S/N Serial number of the FRU that contains the component
Add’l Data Additional data such as temperature (Celsius), voltage, or slot address
Table 32: Enclosure Component Status Fields
For details about using show enclosure-status, see the “CLI” manual.
Output
For details about using show events, see the “CLI” manual.
Output
Parameter Description
Id Identifier for a specific PHY lane.
Encl Enclosure that contains the SAS expander
Status OK: No errors detected on the PHY lane
ERROR: An error has occurred on the PHY lane
Type DRIVE: Disk drive PHY lane
INTER-EXP: Inter-expander PHY lane, communicating between the SAS
expanders in a dual-controller system
INGRESS: SAS ports on controller enclosures and expansion enclosures
EGRESS: SAS ports on expansion enclosures
Table 34: SAS Expander Information
For details about using show expander-status, see the “CLI” manual.
Output
Field Description
Name FRU name:
CHASSIS_MIDPLANE: 2U chassis and midplane; the metal
enclosure and the circuit board to which power, controller,
expansion, and drive modules connect
RAID_IOM: Controller module
BOD_IOM: Expansion module
POWER_SUPPLY: Power and cooling module
Description FRU description
Part Number FRU part number
Mid-Plane SN For the CHASSIS_MIDPLANE FRU, the mid-plane serial number
Serial Number For the RAID_IOM, BOD_IOM, and POWER_SUPPLY FRUs, the
FRU serial number
Revision FRU revision number
Dash Level FRU template revision number
FRU Shortname FRU part number
Mfg Date Date and time that the FRU was programmed
Mfg Location Location where the FRU was programmed
Mfg Vendor ID JEDEC ID of the manufacturer
FRU Location Location of the FRU in the enclosure, as viewed from the back:
MID-PLANE SLOT: Chassis midplane
UPPER IOM SLOT: Upper controller module or expansion
module
LOWER IOM SLOT: Lower controller module or expansion
module
LEFT PSU SLOT: Left power and cooling module
RIGHT PSU SLOT: Right power and cooling module
Configuration SN A customer-specific configuration serial number
FRU Status Component status:
Absent: Component is not present
Fault: One or more subcomponents has a fault
OK: All subcomponents are operating normally
N/A: Status is not available
Table 35: FRU Information
For details about using show frus, see the “CLI” manual.
Output
Output
Field Description
Redundancy Mode Active-Active
Redundancy Status Redundant Operation: Both controllers are
operating
Only Operational: Only the connected
controller is operating
Controller ID Status Operational: The controller is operational
Not Installed: The controller is not installed
or has failed
Controller ID Serial Number Controller module serial number (Not Available: the
controller is not installed)
Table 36: Redundancy Information
For details about using show redundancy-mode, see the “CLI” manual.
9.3.21 trust
Enables an offline virtual disk to be brought online for emergency data collection only. It
must be enabled before each use.
CAUTION
! This command can cause unstable operation and data loss if used improperly. It is
intended for disaster recovery only. Use only when advised to do so by a service
technician.
The trust command resynchronizes the time and date stamp and any other metadata on a
bad disk drive. This makes the disk drive an active member of the virtual disk again. You
might need to do this when:
● One or more disks of a virtual disk start up more slowly or were powered on after the
rest of the disks in the virtual disk. This causes the date and time stamps to differ, which
the system interprets as a problem with the “late” disks. In this case, the virtual disk
functions normally after being trusted.
● A virtual disk is offline because a drive is failing, you have no data backup, and you want
to try to recover the data from the virtual disk. In this case, trust may work, but only as
long as the failing drive continues to operate.
When the “trusted” virtual disk is back online, back up its data and audit the data to make
sure that it is intact. Then delete that virtual disk, create a new virtual disk, and restore data
from the backup to the new virtual disk. Using a trusted virtual disk is only a disaster-
recovery measure; the virtual disk has no tolerance for any additional failures.
For details about using trust, see the “CLI” manual.
Figure 25: Power and Cooling Module and Cooling Fan Locations . . . . . . . . . . . . . . . . . 75
Figure 29: Checking the First Four Letters of the Serial Number
(example, showing a FibreCAT SX80 controller Enclosure) . . . . . . . . . . . . . . . . . . . . . . 85
Figure 30: 1st Four Letters of the FibreCAT SX Serial Number in Ersin
(example, showing a FibreCAT SX80 controller enclosure). . . . . . . . . . . . . . . . . . . . . . . 86
Figure 37: Removing the Power and Cooling Module from the Chassis . . . . . . . . . . . . 113
Table 10: Problems Accessing the Array Using FibreCAT SX Manager’s WBI . . . . . . . . 64
[7] FibreCAT SX60 / SX80 / SX88 Service Manual (the manual in hand)
The latest version of the manual
is available at http://www.fujitsu-siemens.com/support/manuals.html
[9] See also the user forum for the FibreCAT SX Series at
http://www.fibreservice.net
A
AC Power Good LED 40
AC power module
  installing 114
advanced manage-level functions
  dequarantining a virtual disk 62
  saving log information to a file 63

B
bad block
  list size, displaying 48
  reassignments, displaying 48
boot handshake 96

C
cables
  identifying faults
    expansion enclosure side 99
    host side 99
cache
  checking status 39
  clearing 57
  size 91
  status LED 39
CLI
  disk drive syntax 136
  help, view command 138
  host nickname syntax 137
  keyword syntax 136
  parameter syntax 136
  virtual disk
    name 136
    syntax 136
  volume
    serial number 137
    syntax 137
  volume mapping syntax 137
command syntax
  CLI 136
controller module
  cache status LED 39
  Ethernet activity LED 37
  Ethernet link status LED 37
  expansion port status LED 35
  Fault/Service Required LED 38
  FC link speed LED 33
  FRU OK LED 38
  host activity LED 33
  host link speed LED 33
  host link status LED 33
  identifying faults 90
  installing 95, 96
  LEDs 33
  OK to Remove LED 38
  only one boots 90
  removing 93
  replacing 96
  Unit Locator LED 38
  updating firmware 97
controller modules
  conflicts 90
cooling element
  fan sensor descriptions 75
critical events
  selecting to monitor 61
critical state, virtual disk
  preventing 62

D
data paths

AC Power Good 40
DC Voltage/Fan Fault/Service Required 40
Temperature Fault LED 31
leftover disk drives
  clearing metadata 55
LIP, remotely issuing on host channels 54
log information
  saving to a file 63
log information, saving 67
loop initialization primitive. See LIP

M
metadata
  clearing 55
Model of HIM 21

O
OK to Remove LED
  controller module 38
  drive module 32
  expansion module 42

P
partner controller, disabling automatic update 97
PHY
  disabled 50
  errors 50
  event logs 53
  fault isolation 50
  fencing 50
  rescan disks 50
  reset status 52
physical layer interface. See PHY 50
power and cooling module
  AC Power Good LED 40
  DC Voltage/Fan Fault/Service Required LED 40
  identifying faults 111
power module
  replacing 114
Power/Activity/Fault LED 32
Product Class 87

R
RAIDar
  cache data status 57
  checking I/O status 46
  configuring event notification 60
  debug utilities 58
  diagnostic manage-level user only functions 55
  disk error statistics 47
  displaying system status 44
  enable/disable trust virtual disk 56
  icons, system status 44
  locating a disk drive 47
  reviewing event logs 49
  status summary 44
  using to troubleshoot 43
recovery
  clearing cache data 57
  disaster
    trust virtual disk 56
rescan disks 50
reset PHY status 52
resetting host channels 54
restriction
  direct connect mode 21
  FibreCAT SX60 / SX80 21

S
SAS In port status LED 41
SAS Out port status LED 41
saving
  log information 67
sensors
  cooling fan 75
  locating 74
  power supply 74
  temperature 76
  voltage 77
Serial Number 83, 84, 85
SMART
  displaying event count 47
spin-up retries, displaying 48
static electricity precautions 89
status

T
Temperature Fault LED 31
temperature sensor descriptions 76
temperature warnings, resolving 73
trust virtual disk

U
Unit Locator LED
  controller module 38
  enclosure ear 30
  expansion module 42

V
view CAPI trace 58
view error buffers 58
view mgmt trace 59
virtual disks
  clearing cache data 57
  dequarantining 63
  disaster recovery 56
  identifying faults 110
  preventing critical state 62
voltage sensor descriptions 77
voltage warnings, resolving 73

W
warning events
  selecting to monitor 61
warnings, temperature 73