Copyright © 2009 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.
Unpublished - rights reserved under the copyright laws of the United States.
THIS PRODUCT CONTAINS CONFIDENTIAL INFORMATION AND TRADE SECRETS OF SUN MICROSYSTEMS, INC.
USE, DISCLOSURE, OR REPRODUCTION IS PROHIBITED WITHOUT THE PRIOR EXPRESS WRITTEN
PERMISSION OF SUN MICROSYSTEMS, INC.
This distribution may include materials developed by third parties.
Sun, Sun Microsystems, the Sun logo, Java, Solaris, Sun Fire, and Sun Blade are trademarks or registered trademarks of Sun
Microsystems, Inc., or its subsidiaries, in the United States and other countries.
LSI is a registered trademark of LSI Corporation.
This product is covered and controlled by U.S. export control laws and may be subject to the export or import laws of other
countries. Nuclear, missile, chemical or biological weapons, or nuclear maritime end uses or end users, whether direct or
indirect, are strictly prohibited. Export or reexport to countries subject to U.S. embargo, or to entities identified on U.S. export
exclusion lists, including, but not limited to, the denied persons and specially designated nationals lists, is strictly prohibited.
Use of any spare or replacement CPUs is limited to repair or one-for-one replacement of CPUs in products exported in
compliance with U.S. export laws. Use of CPUs as product upgrades, unless authorized by the U.S. Government, is strictly
prohibited.
Contents
Preface xi
Write Journaling 10
Fusion-MPT Support 11
Supported Commands 48
auto Command 48
create Command 50
delete Command 52
display Command 52
hotspare command 56
list command 57
rebuild command 58
status noreset command 58
status Command 59
Monitoring and Managing RAID Arrays 60
▼ To Create a RAID 0 Array 61
▼ To Fail RAID 0 62
▼ To Create a RAID 1 Array 63
▼ To Rebuild a RAID 1 Array 63
▼ To Delete a RAID Array 64
▼ To View the Status of a RAID1 Volume 65
Monitoring Disk Drives 98
▼ To Display Complete Disk Drive Information 98
▼ To Display a Graphical View of a Disk Drive 98
Monitoring Virtual Disks 99
▼ To Display a Graphical View of a Virtual Disk 99
Monitoring Rebuilds and Other Processes 99
Maintaining and Managing Storage Configurations 100
▼ To Scan for New Drives 100
Rebuilding a Drive 101
▼ To Rebuild a Drive on a SAS IR System 101
Putting a Drive Offline or Missing 102
▼ To Put a Drive Offline or Missing 102
Known Issues 102
Sun Fire X4100 M2 /X4200 M2 Server Issues 103
(Windows 2003 Server) MSM-IR 1.19 Does Not Reflect Correct Disk Count
Information (CR 6514389) 103
Sun Fire X4600 M2 Server Issues 103
MSM-IR 1.19 Does Not Show Disk Removal Status Correctly in a Non-
RAID Configuration (CR 6525255) 103
MSM Server and Client Must Be in Same Subnet (6533271) 103
Sun Blade X6240, X6440 Server Issues 103
Locate Virtual Disk Function Does Not Light LEDs on Disks Controlled by
Server Blade (CR 6732326) 103
Sun Blade X6220 Server Issues 104
MSM "Prepare For Removal" Operation Fails in Windows 2003, 2008 (CR
6747581) for Disks in Disk Blade 104
Glossary 133
Index 139
Preface
This Sun™ LSI 106x RAID User’s Guide contains instructions for creating and
maintaining hardware RAID volumes. It applies to all servers (including blades) that
include integrated disk controllers or PCI adapter cards that use LSI 106x controller
chips with MPT firmware that supports integrated RAID (IR).
Note – If your LSI controller uses MPT firmware that supports IT (initiator-target)
technology, this manual does not apply to you. For more on how to find out your
MPT firmware version, see “Does Your LSI Controller Support Integrated RAID?” on
page 1.
Obtaining Utilities
The LSI BIOS utility is automatically available in your server’s BIOS if a 106x chip is
present, either embedded on your server or on a PCI card.
The LSI MegaRAID Storage Manager software should be on your product’s Tools
and Drivers CD. Alternatively, you can download a CD image from the Sun web site
at:
http://sun.com/downloads/
On this web page, look for the link labelled “x64 Servers and Workstations”. The
linked page provides links to all x64-related downloads, organized by product name.
Related Documentation
For all Sun hardware documentation, go to:
http://www.sun.com/documentation
For software documentation, go to:
http://docs.sun.com
To submit comments about this document, go to:
http://www.sun.com/hwdocs/feedback
Please include the title and part number of your document with your feedback.
This part describes how to use the BIOS RAID Configuration utility and has the
following chapters:
■ “Introduction to Integrated RAID” on page 1
■ “Overview of Integrated Mirroring and Integrated Mirroring Enhanced” on
page 5
■ “Creating IM and IME Volumes” on page 13
■ “Overview of Integrated Striping” on page 31
■ “Creating Integrated Striping Volumes” on page 35
CHAPTER 1
Introduction to Integrated RAID
This chapter provides an overview of the LSI Integrated RAID solution for LSI SAS
integrated disk controllers and adapters used in Sun servers. The chapter includes
these sections:
■ “Does Your LSI Controller Support Integrated RAID?” on page 1
■ “Integrated RAID Features” on page 3
■ “Using this Manual” on page 4
You can use the LSI Integrated RAID solution with the following LSI SAS controllers
that have MPT firmware that supports IR (integrated RAID):
■ LSISAS1064/1064E
■ LSISAS1068/1068E
■ LSISAS1078
Note – If your LSI controller uses MPT firmware that supports IT (initiator-target)
technology, this manual does not apply to you. For more on how to find out your
MPT firmware version, see “Does Your LSI Controller Support Integrated RAID?” on
page 1.
To find out if your LSI controller supports integrated RAID, do the following:
1. Boot the system.
As the BIOS loads you will see a message about the LSI Configuration Utility.
Chapters 4 and 5 of this User’s Guide list IS features and explain how to create IS
volumes and optional hot-spare disks.
This chapter provides an overview of the LSI Integrated Mirroring (IM) and
Integrated Mirroring Enhanced (IME) features. It includes these sections:
■ “Introduction” on page 5
■ “IM and IME Features” on page 6
■ “IM/IME Description” on page 7
■ “Integrated RAID Firmware” on page 9
■ “Fusion-MPT Support” on page 11
Introduction
The LSI Integrated Mirroring (IM) and Integrated Mirroring Enhanced (IME)
features provide data protection for the system boot volume to safeguard critical
information such as the OS on servers and high-performance workstations. The IM
and IME features provide a robust, high-performance, fault-tolerant solution to data
storage needs.
The IM and IME features support one or two mirrored volumes per LSI SAS
controller, to provide fault-tolerant protection for critical data. The two volumes can
have up to twelve disk drives total, plus one or two hot-spare disks.
If a disk in an Integrated Mirroring volume fails, the hot swap capability allows you
to restore the volume by simply swapping disks. The firmware then automatically
re-mirrors the swapped disk. Additionally, each SAS controller can have one or two
global hot-spare disks available to automatically replace a failed disk in the IM or
IME storage volumes on the controller. Hot-spares make the IM/IME volume even
more fault-tolerant.
Note – You can also configure one IM or IME volume and one Integrated Striping
(IS) volume on the same LSI SAS controller.
The IM/IME feature operates independently from the OS, in order to conserve
system resources. The BIOS-based configuration utility makes it easy to configure IM
and IME volumes.
1. Configurations of one or two IM or IME volumes on the same LSI SAS controller.
IM volumes have two mirrored disks; IME volumes have three to ten mirrored
disks. Two volumes can have up to 12 disks total. (Requires Integrated RAID
firmware v1.20.00 or above.)
2. One or two global hot-spare disks per controller, to automatically replace failed
disks in IM/IME volumes. (Support for two hot-spares requires Integrated RAID
firmware v1.20.00 or above.) The hot-spares are in addition to the 12-disk
maximum for two volumes per SAS controller.
3. Mirrored volumes run in optimal mode or in degraded mode (if one mirrored
disk fails).
6. Supports both SAS and SATA disks. The two types of disks cannot be combined
in the same volume. However, an LSI SAS controller can support one volume
with SATA disks and a second volume with SAS disks.
7. Fusion-MPT architecture
IM/IME Description
The LSI Integrated RAID solution supports one or two IM/IME volumes on each LSI
SAS controller (or one IM/IME volume and one Integrated Striping volume).
Typically, one of these volumes is the primary or boot volume, as shown in
FIGURE 2-1. Boot support is available through the firmware of the LSI SAS controller
that supports the standard Fusion-MPT interface. The runtime mirroring of the boot
disk is transparent to the BIOS, drivers, and OS. Host-based status software
monitors the state of the mirrored disks and reports any error conditions. FIGURE 2-1
shows an IM implementation with a second disk as a mirror of the first (primary)
disk.
The advantage of an IM/IME volume is that there is always a second, mirrored copy
of the data. The disadvantage is that writes take longer because data must be written
twice. On the other hand, performance is actually improved during reads.
FIGURE 2-2 shows the logical view and physical view of an IM volume.
An IME volume can be configured with up to ten mirrored disks. (One or two global
hot-spares can be added also.) FIGURE 2-3 shows the logical view and physical view
of an Integrated Mirroring Enhanced (IME) volume with three mirrored disks. Each
mirrored stripe is written to a disk and mirrored to an adjacent disk. This type of
configuration is also called RAID 1E.
Metadata Support
The firmware supports metadata, which describes the IM/IME logical drive
configuration stored on each member disk. When the firmware is initialized, each
member disk is queried to read the stored metadata in order to verify the
configuration. The usable disk space for each member disk is adjusted down when
the configuration is created, in order to leave room for this data.
Hot Swapping
The firmware supports hot swapping. The hot-swapped disk is automatically re-
synchronized in the background, without any host or user intervention. The
firmware detects hot swap removal and disk insertion.
Following a hot swap event, the firmware readies the new physical disk by spinning
it up and verifying that it has enough capacity for the mirrored volume. The
firmware re-synchronizes all hot-swapped disks that have been removed, even if the
same disk is re-inserted. In a two-disk mirrored volume, the firmware marks the hot-
swapped disk as the secondary disk and marks the other mirrored disk as the
primary disk. The firmware re-synchronizes all data from the primary disk onto the
new secondary disk.
Media Verification
The firmware supports a background media verification feature that runs at regular
intervals when the IM/IME volume is in optimal state. If the verification command
fails for any reason, the other disk’s data for this segment is read and written to the
failing disk in an attempt to refresh the data. The current Media Verification Logical
Block Address is written to nonvolatile memory occasionally to allow media
verification to continue approximately where it left off prior to a power-cycle.
Write Journaling
The Integrated RAID firmware requires at least a 32K NVSRAM in order to perform
write journaling. Write journaling is used to verify that the disks in the IM/IME
volume are synchronized with each other.
This chapter explains how to create Integrated Mirroring (IM) and Integrated
Mirroring Enhanced (IME) volumes using the LSI SAS BIOS Configuration Utility
(SAS BIOS CU). The chapter includes these topics:
■ “IM/IME Configuration Overview” on page 13
■ “Creating IM and IME Volumes” on page 14
■ “Creating a Second IM or IME Volume” on page 21
■ “Managing Hot Spares” on page 21
■ “Other Configuration Tasks” on page 26
Although you can use disks of different size in IM and IME volumes, the smallest
disk in the volume will determine the logical size of all disks in the volume. In other
words, the excess space of the larger member disk(s) will not be used. For example,
if you create an IME volume with two 100 Gbyte disks and two 120 Gbyte disks,
only 100 Gbytes of the larger disks will be used for the volume.
Refer to “IM and IME Features” on page 6 for more information about Integrated
Mirroring volumes.
Creating IM and IME Volumes
The SAS BIOS CU is part of the Fusion-MPT BIOS. When the BIOS loads during boot
and you see the message about the LSI Configuration Utility, press Ctrl-C to start the
CU.
After a brief pause, the main menu (Adapter List Screen) of the SAS BIOS CU
appears. On some systems, however, the following message appears next:
In this case, the SAS BIOS CU will load after the system has completed its POST.
This is an example of the main menu of the SAS BIOS CU.
You can configure one or two IM or IME volumes per Fusion-MPT controller. You
can also configure one IM/IME and one Integrated Striping (IS) volume on the same
controller, up to a maximum of twelve physical disk drives for the two volumes. In
addition, you can create one or two hot-spares for the IM/IME array(s).
1. On the Adapter List screen, use the arrow keys to select an LSI SAS adapter (if
it is not already selected as it is in the figure above).
5. Move the cursor to the RAID Disk column and select a disk. To add the disk to
the volume, change the No to Yes by pressing the + key, − key, or space bar.
6. Repeat this step to select a total of three to ten disks for the volume.
All existing data on all the disks you select will be overwritten. As you add disks,
the Array Size field changes to reflect the size of the new volume.
7. [Optional] Add one or two global hot-spares to the volume by moving the
cursor to the Hot Spr column and pressing the + key, − key, or space bar.
When you have finished with Steps 5, 6, and 7, your selections might look like
this:
9. Select Save changes then exit this menu to commit the changes.
The SAS BIOS CU pauses while the array is being created. When the utility
finishes creating the array (volume), the main screen reappears.
11. When the next screen appears, select View Existing Array and press Enter.
You see the volume that you have created.
▼ To Create an IM Volume
Note – The procedure for creating an IM volume is almost the same as the process
for creating an IME volume. In the case of an IM volume, you can only include two
disks and you have the option of not erasing the first disk that you choose (see Step 6
in the procedure below). Use the figures in the IME procedure as necessary to
visualize the IM procedure.
1. On the Adapter List screen, use the arrow keys to select an LSI SAS adapter.
3. On the Adapter Properties screen, use the arrow keys to select RAID Properties
on the screen and press Enter.
5. Move the cursor to the RAID Disk column and select a disk. To add the disk to
the volume, change the No to Yes by pressing the + key, - key, or space bar.
When the first disk is added, the SAS BIOS CU prompts you to either keep
existing data or overwrite existing data.
6. Press M to keep the existing data on the first disk or press D to overwrite it.
If you keep the existing data, this is called a data migration. The first disk will be
mirrored onto the second disk, so any data you want to keep must be on the first
disk selected for the volume. Data on the second disk is overwritten. The first
disk must have 512 Kbytes available for metadata after the last partition.
As disks are added the Array Size field changes to reflect the size of the new
volume.
7. [Optional] Add one or two global hot-spares by moving the cursor to the Hot
Spr column and pressing the + key, - key, or space bar.
8. When the volume has been fully configured, press C, then select Save
changes then exit this menu to commit the changes.
The SAS BIOS CU pauses while the array is being created.
Creating a Second IM or IME Volume
Only two RAID groups (volumes) are supported per controller. To create a second volume:
3. On the Adapter Properties screen, use the arrow keys to select RAID Properties
and press Enter.
Note – All hot spares are global, including those that you create when you create a
RAID volume.
Usually, you create global hot-spares at the same time you create the IM/IME
volume. Follow these steps to add global hot-spare disks later:
3. On the Adapter Properties screen, use the arrow keys to select RAID Properties
on the screen and press Enter.
The Select New Array Type screen appears.
7. Select a disk from the list by pressing the + key, − key, or spacebar.
10. Select Save changes then exit this menu to commit the changes.
The configuration utility pauses while the global hot-spares are being added.
Note – The hot spares are available for rebuilding any RAID volume, including one
that has not yet been created.
3. Select Save changes then exit this menu to commit the changes.
The configuration utility pauses while the global hot-spare is being removed.
Note – If you create one volume using SAS disks, another volume using SATA
disks, and global hot-spare disks, the hot-spare disks will only appear when you
view the volume that has the same type of disks as the hot-spare disks.
2. If two volumes are configured, press Alt+N to view the other array.
3. To manage the current array, select the Manage Array item and press Enter.
Synchronizing an Array
The Synchronize Array command forces the firmware to re-synchronize the data on
the mirrored disks in the array. It is seldom necessary to use this command, because
the firmware automatically keeps the mirrored data synchronized during normal
operation. When you use this command, one disk of the array is placed in the
Degraded state until the data on the mirrored disks has been re-synchronized.
▼ To Synchronize an Array
1. Select Synchronize Array on the Manage Array screen.
▼ To Activate an Array
1. Select Activate Array on the Manage Array screen.
Note – If there is a global hot-spare disk on the controller to which you have moved
the array, the BIOS checks when you activate the array to determine if the hot-spare
is compatible with the new array. An error message appears if the disks in the
activated array are larger than the hot-spare disk or if the disks in the activated array
are not the same type as the hot-spare disk (SATA versus SAS).
Deleting an Array
Caution – Before deleting an array, be sure to back up all data on the array that you
want to keep.
▼ To Delete an Array
1. Select Delete Array on the Manage Array screen.
1. When you are creating an IM or IME volume, and a disk drive is set to Yes as part
of the volume, the LED on the disk drive is blinking. The LED is turned off when
you have finished creating the volume.
2. You can locate individual disk drives from the SAS Topology screen. To do this,
move the cursor to the name of the disk in the Device Identifier column and press
Enter. The LED on the disk blinks until the next key is pressed.
3. You can locate all the disk drives in a volume by selecting the volume on the SAS
Topology screen. The LEDs blink on all disk drives in the volume.
Note – The LEDs on the disk drives will blink as described above if the firmware is
correctly configured and the drives or the disk enclosure supports disk location.
3. To select a boot disk, move the cursor to the disk and press Alt+B.
4. To remove the boot designator, move the cursor down to the current boot disk
and press Alt+B. This controller will no longer have a disk designated as boot.
5. To change the boot disk, move the cursor to the new boot disk and press Alt+B.
The boot designator will move to this disk.
Note – The firmware must be configured correctly in order for the Alt+B feature to
work.
This chapter provides an overview of the LSI Integrated Striping (IS) feature. It
includes these sections:
■ “Introduction” on page 31
■ “IS Features” on page 32
■ “IS Description” on page 32
■ “Integrated Striping Firmware” on page 34
■ “Fusion-MPT Support” on page 34
Introduction
The LSI Integrated Striping (IS) feature is useful for applications that require the
faster performance and increased storage capacity of striping. The low-cost IS
feature has many of the advantages of a more expensive RAID striping solution. A
single IS logical drive may be configured as the boot disk or as a data disk.
The IS feature is implemented with controller firmware that supports the Fusion-
MPT Interface. IS provides better performance and more capacity than individual
disks, without burdening the host CPU. The firmware splits host I/Os over multiple
disks and presents the disks as a single logical drive. In general, striping is
transparent to the BIOS, the drivers, and the operating system.
The SAS BIOS CU is used to configure IS volumes, which can consist of two to ten
disks.
IS Features
IS includes the following features:
■ Support for volumes with two to ten disks
■ Support for two IS volumes (or one IS volume and one IM/IME volume) on a
controller, with up to 12 disks total (Requires Integrated RAID firmware v1.20.00
or above.)
Note – All physical disks in a volume must be connected to the same SAS controller.
IS Description
The IS feature writes data across multiple disks instead of onto one disk. This is
accomplished by partitioning each disk’s storage space into 64 Kbyte stripes. These
stripes are interleaved round-robin, so that the combined storage space is composed
alternately of stripes from each disk.
FIGURE 4-2 shows a logical view and a physical view of an Integrated Striping
configuration.
Metadata Support
The firmware supports metadata, which describes the IS logical drive configuration
stored on each member disk. When the firmware is initialized, each member disk is
queried to read the stored metadata to verify the configuration. The usable disk
space for each IS member disk is adjusted down when the configuration is created,
in order to leave room for this data.
SMART Support
SMART is a technology that monitors disk drives for signs of future disk failure and
generates an alert if such signs are detected. The firmware polls each physical disk in
the volume at regular intervals. If the firmware detects a SMART ASC/ASCQ code
on a physical disk in the IS volume, it processes the SMART data and stores it in
nonvolatile memory. The IS volume does not support SMART directly, since it is just
a logical representation of the physical disks in the volume.
Fusion-MPT Support
The BIOS uses the LSI Fusion-MPT interface to communicate to the SAS controller
and firmware to enable IS. This includes reading the Fusion-MPT configuration to
gain access to the parameters that are used to define behavior between the SAS
controller and the devices connected to it. The Fusion-MPT drivers for all supported
operating systems implement the Fusion-MPT interface to communicate with the
controller and firmware.
This chapter explains how to create Integrated Striping (IS) volumes using the LSI
SAS BIOS Configuration Utility (SAS BIOS CU). The chapter includes these topics:
■ “IS Configuration Overview” on page 35
■ “Creating IS Volumes” on page 36
■ “Creating a Second IS Volume” on page 38
■ “Other Configuration Tasks” on page 39
IS Configuration Overview
You can use the SAS BIOS CU to create one or two IS volumes, with up to twelve
drives total, on an LSI SAS controller. Each volume can have from two to ten drives.
Disks in an IS volume must be connected to the same LSI SAS controller, and the
controller must be in the BIOS boot order.
Although you can use disks of different size in IS volumes, the smallest disk
determines the “logical” size of each disk in the volume. In other words, the excess
space of the larger member disk(s) is not used. Usable disk space for each disk in an
IS volume is adjusted down to leave room for metadata. Usable disk space may be
further reduced to maximize the ability to interchange disks in the same size
classification. The supported stripe size is 64 kilobytes.
Refer to “IS Features” on page 32 for more information about Integrated Striping
volumes.
Creating IS Volumes
The SAS BIOS CU is part of the Fusion-MPT BIOS. When the BIOS loads during boot
and you see the message about the LSI Configuration Utility, press Ctrl-C to start it.
After you do this, the message changes to:
After a brief pause, the main menu of the SAS BIOS CU appears. On some systems,
however, the following message appears next:
In this case, the SAS BIOS CU will load after the system has completed its power-on
self test.
Follow the steps below to configure an Integrated Striping (IS) volume with the SAS
BIOS CU. The procedure assumes that the required controller(s) and disks are
already installed in the computer system. You can configure an IM/IME volume and
an IS volume on the same SAS controller.
▼ To Create IS Volumes
1. On the Adapter List screen of the SAS BIOS CU, use the arrow keys to select a
SAS adapter.
3. On the Adapter Properties screen, use the arrow keys to select RAID Properties
and press Enter.
4. When you are prompted to select a volume type, select Create IS Volume.
5. The Create New Array screen shows a list of disks that can be added to a
volume.
FIGURE 5-2 shows the Create New Array screen with an IS volume configured with
two drives.
6. Move the cursor to the RAID Disk column. To add a disk to the volume, change
the No to Yes by pressing the + key, − key, or space bar. As disks are added, the
Array Size field changes to reflect the size of the new volume.
There are several limitations when creating an IS (RAID 0) volume:
■ All disks must be either SATA (with extended command set support) or SAS
(with SMART support).
■ Disks must have 512-byte blocks and must not have removable media.
■ There must be at least two and no more than ten drives in a valid IS volume.
Hot-spare drives are not allowed.
4. When you have added the desired number of disks to the array, press C, then
select Save Changes, and then Exit This Menu to commit the changes. The
configuration utility pauses while the array is being created.
3. On the Adapter Properties screen, use the arrow keys to select RAID Properties
and press Enter.
3. To manage the current array, press Enter when the Manage Array item is
selected.
Activating an Array
An array can become inactive if, for example, it is removed from one controller or
computer and moved to another one. The “Activate Array” option allows you to
reactivate an inactive array that has been added to a system. This option is only
available when the selected array is currently inactive.
▼ To Delete an Array
Caution – Before deleting an array, be sure to back up all data on the array that you
want to keep.
Note – Once a volume has been deleted, it cannot be recovered. The master boot
records of all disks are deleted.
1. When you are creating an IS volume, and a disk drive is set to Yes as part of the
volume, the LED on the disk drive is flashing. The LED is turned off when you
have finished creating the volume.
2. You can locate individual disk drives from the SAS Topology screen. To do this,
move the cursor to the name of the disk in the Device Identifier column and press
Enter. The LED on the disk flashes until the next key is pressed.
3. You can locate all the disk drives in a volume by selecting the volume on the SAS
Topology screen. The LEDs flash on all disk drives in the volume.
Note – The LEDs on the disk drives will flash as described above if the firmware is
correctly configured and the drives or the disk enclosure supports disk location.
3. To select a boot disk, move the cursor to the disk and press Alt+B.
4. To remove the boot designator, move the cursor down to the current boot disk
and press Alt+B. This controller will no longer have a disk designated as boot.
Note – The firmware must be configured correctly in order for the Alt+B feature to
work.
The LSI cfggen utility is a configuration utility used to create Integrated Mirroring
(IM) volumes.
Note – This chapter describes the utility as implemented on Sun’s x64 servers. The LSI
documentation on the Tools and Drivers CD describes the utility in general.
Overview of cfggen
The cfggen utility is a configuration utility used to create Integrated Mirroring (IM)
volumes. A command-line utility, it runs in the Windows Preinstallation
Environment (WinPE) and on DOS. The utility is a minimally interactive program
that can be executed from a command-line prompt or a shell script. The result of
running this utility is communicated through the program status value that is
returned when the program exits. You use the utility to create IM storage
configurations on both SCSI controllers and SAS controllers.
The utility runs on WinPE and is statically compiled with the LSI MptLib Library
(MptLib.lib). The WinPE environment must have the appropriate LSI Logic MPT
Windows driver (ScsiPort or StorPort) installed and loaded in order to
recognize and communicate with the I/O controller. The utility does not recognize
an LSI53C1030 or LSI53C1020 controller unless there is at least one device attached
to the controller.
cfggen Syntax
cfggen uses a command line interface with the following format:
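All cfggen commands share the same general shape; the line below is a sketch inferred from the per-command syntax later in this chapter rather than a literal quotation of the utility's usage text:

cfggen controller-number command [command-specific parameters]

The parameters that appear in these command lines are described below.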
■ controller-number
The unique number of a PCI function found in the system, starting with controller
number 0. Therefore, the controller-number is used to address a particular SCSI bus in
the system. For example, cfggen assigns two controller numbers to an LSI53C1030
dual SCSI bus chip. It assigns one controller number to an LSI53C1020 single SCSI
bus chip. For the LSI Logic SAS1064/1064E and SAS1068/1068E controllers, the
controller number corresponds to a single SAS controller.
■ SCSI ID
The SCSI bus address of a peripheral device attached to an LSI Logic controller. The
maximum value of SCSI ID depends on the type of I/O controller and the maximum
number of devices supported by the OS for this controller.
■ enclosure:bay
The enclosure (encl) and bay/slot of a peripheral device attached to the bus. The
argument must use a colon (:) as a separator and must follow the enclosure:bay
format. Only devices connected to LSI SAS controllers can be addressed using
enclosure:bay and hence this option is not supported on LSI53C1020/1030
controllers.
The enclosure and slot numbers of a drive can be obtained from the display
command.
Supported Commands
The following commands are currently supported by cfggen:
■ “auto Command” on page 48
■ “create Command” on page 50
■ “display Command” on page 52
■ “delete Command” on page 52
■ “hotspare command” on page 56
■ “list command” on page 57
■ “rebuild command” on page 58
■ “status noreset command” on page 58
■ “status Command” on page 59
auto Command
The AUTO command automatically creates an IM, IME, or IS volume on an
LSI1064/1064E or LSI1068/1068E controller. The volume is created with the
maximum number of disks available for use in the specified volume type. The main
difference from the CREATE command is that with the AUTO command you do not
specify SCSI ID values for disks to use in the volume. CFGGEN automatically
selects the disks to include in the volume.
When a disk drive is added to an IM, IME, or IS volume, its entire storage capacity
may or may not be used, depending on drive capacity and volume capacity. For
example, if you add a 36 Gbyte disk drive to a volume that only uses 9 Gbytes of
capacity on each disk drive, the remaining 27 Gbytes of capacity on the disk drive
are unusable. When AUTO creates an IM volume, the first disk found is assigned as
the primary disk drive. If the controller is allowed to resync the disk drives, the data
on the primary disk drive will be available by accessing the newly created volume.
CFGGEN follows these rules when creating IM, IME, and IS volumes and hot spare
disks with the AUTO command:
■ All disks that are part of a volume, or are hot spares for a volume, must be connected
to the same controller.
■ IM, IME, and IS volumes are supported.
■ Only two volumes per controller can be created.
■ SAS and SATA drives cannot be mixed in a volume. With the AUTO command,
all drives used must be the same type as the first available disk found.
■ The total number of disks in a volume, including hot spare disks, cannot exceed
eight for LSI1064/1064E and LSI1068/1068E controllers, and the total number of
disks combined for two volumes cannot exceed ten. An IM volume must have
exactly two disks.
■ An IME volume can have three to six disks for an LSI SCSI controller, and three to
eight disks for an LSI SAS controller as long as rules 4 and 5 are not violated.
Example
cfggen controller-number auto volume-type size size [qsync] [noprompt]
Parameters
controller-number Number of the SAS controller targeted by this command.
volume-type Volume type for the volume to be created. Valid values are IM,
IME, and IS.
size Size of the RAID volume in Mbytes, or MAX for the maximum size
available
qsync If this optional parameter is specified, a quick synchronization of the
new volume will be performed. If the volume type is IME or IS, a
quick synchronization is always performed even if this option is not
specified. A quick synchronization means that the first 32 Kbytes of the
drives in the volume are cleared to 0.
noprompt Suppresses display of warnings and prompts
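For example, on a system whose SAS controller is enumerated as controller 0, an invocation along the following lines would build an IM volume from the available disks and suppress prompts. This is a sketch based on the syntax above; the controller number and option choices are assumptions for illustration only:

cfggen 0 auto IM size MAX noprompt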
create Command
The create command creates IM, IME (Integrated Mirroring Enhanced), and IS
(Integrated Striping) volumes on the LSI53C1020/1030 and SAS1064/1064E and
SAS1068/1068E controllers. The firmware and hardware limitations for these
controllers determine the number of configurations that can be created. When a disk
drive is added to an IM, IME, or IS volume, its entire storage capacity can be used,
depending on drive capacity and volume capacity. Any unused capacity is not
accessible for other purposes.
For example, if you add a 36 Gbyte disk drive to a volume that only uses 9 Gbytes of
capacity on each disk drive, the remaining 27 Gbytes of capacity on the disk drive is
unusable. The disk identified by the first SCSI ID on the command line is assigned as
the primary disk drive when an IM volume is created. If the controller is allowed to
resynchronize the disk drives, the data on the primary disk drive will be available
when you access the newly created volume.
The following rules must be observed when creating IM, IME, and IS volumes and
hot spare disks:
1. All disks that are part of a volume, including hot spares for that volume, must be
on the same SAS controller or on the same SCSI bus (for SCSI controllers).
4. The total number of disks in a volume, including hot-spare disks, cannot exceed
six for LSI53C1020/1030 controllers.
5. The total number of disks in a volume, including hot-spare disks, cannot exceed
eight for SAS1064/1064E and SAS1068/1068E controllers, and the total number of
disks combined for two volumes cannot exceed ten. Ten disks is a theoretical
upper limit for the firmware; the SAS controller may actually support fewer disks.
7. An IME volume can have a minimum of three disks and a maximum of six disks
(for LSI53C1020/1030 controllers) or eight disks (for SAS controllers), as long as
rules 4 and 5 are not violated.
Example
cfggen controller-number create volume-type size {SCSI-ID} [qsync] [noprompt]
controller-number Number of the SCSI bus or SAS controller targeted by this command
volume-type Volume type for the new volume to be created. Valid values are IM or
IME or IS.
size Size of the RAID volume in Mbytes, or MAX for the maximum size
available
SCSI-ID SCSI ID of a hard drive to be included in the RAID volume
encl:bay The enclosure:bay value for the disk drive to be included in the RAID
volume. These values can be obtained from the output of the DISPLAY
command.
qsync If this optional parameter is specified, a quick synchronization of new
volume will be performed. If the volume type is IME or IS, a quick
synchronization is always performed even if qsync is not specified. A
quick synchronization means that the first 32 Kbytes of the drives in
the volume are cleared to 0
noprompt Suppresses display of warnings and prompts
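As an illustration, assuming the two disks to be mirrored are at SCSI IDs 1 and 2 on controller 0 (hypothetical values; take the real IDs or enclosure:bay values from the display command output), a create invocation might look like this:

cfggen 0 create IM MAX 1 2 qsync noprompt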
delete Command
The delete command deletes all IM, IME, and IS volumes and hot spare drives. No
other controller configuration parameters are changed.
Example
cfggen controller-number delete [noprompt]
controller-number Number of the SCSI bus or SAS controller targeted by this command
noprompt Suppresses display of warnings and prompts
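For example, the following invocation (controller 0 is assumed for illustration) would remove all volumes and hot spares from that controller without prompting; back up any data you need before running it:

cfggen 0 delete noprompt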
display Command
The display command displays configuration information for the supported LSI
controllers. The information includes controller type, firmware version, BIOS version
(version executed), volume information, and physical drive information. An
example of the information that will be output by this command is provided below.
Note – 1 Mbyte = 1,048,576 bytes. All sizes displayed in Mbytes are rounded down
to the nearest Mbyte.
controller-number Number of the SCSI bus or SAS controller targeted by this command
filename Optional valid filename to store output of command to a file
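For example, assuming controller 0, either of the following forms could be used; the file name is purely illustrative:

cfggen 0 display
cfggen 0 display raidinfo.txt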
Sample Output
The following example shows the output of the display command with an IM
configuration on a SAS1068 controller.
Note – The format and content of the display command output might vary
depending on the version being used.
Okay (OKY)                Volume is Active and drives are functioning properly. User data is protected if the volume is IM or IME.
Degraded (DGD)            Volume is Active. User data is not fully protected due to a configuration change or drive failure.
Inactive, Okay (OKY)      Volume is inactive and drives are functioning properly. User data is protected if the current RAID level is RAID 1 (IM) or RAID 1E (IME).
Inactive, Degraded (DGD)  Volume is inactive and the user’s data is not fully protected due to a configuration change or drive failure; a data resync or rebuild may be in progress.
Hot Spare (HSP)           Drive is a hot spare that is available for replacing a failed drive in an array.
Ready (RDY)               Drive is ready for use as a normal disk drive, or it is available to be assigned to a disk array or hot spare pool.
Failed (FLD)              Drive was part of a logical drive or was a hot spare drive, and it failed. It has been taken offline.
Standby (SBY)             This status is used to tag all non-hard-drive devices.
hotspare command
The hotspare command creates a hot spare disk drive, which is added to hot spare
pool 0. The number of disk drives in an IM, IME, or IS volume, including the hot
spare disk, cannot exceed six for LSI53C1020/1030 controllers and eight for
LSI1064/1064E and LSI1068/1068E controllers. Only one hot spare disk can be
created. The capacity of the hot spare disk must be greater than or equal to the
capacity of the smallest disk in the logical drive. An easy way to verify this is to use
the display command.
The following rules must be observed when creating hot spare disks:
■ A hot spare disk cannot be created unless at least one IM or IME volume is
already created.
■ For LSI1064/1064E and LSI1068/1068E controllers, CFGGEN does not allow
adding a hot spare disk of a type (SAS/SATA) that is different from the disk types
in any of the volumes.
Example
cfggen controller-number hotspare <SCSI ID> [delete]
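For example, assuming controller 0 and a spare candidate at SCSI ID 4 (both values are illustrative), a hot spare could be created and, per the optional delete keyword in the syntax above, later removed:

cfggen 0 hotspare 4
cfggen 0 hotspare 4 delete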
list command
The LIST command displays a list of all controllers present in the system, along with
their corresponding controller #.
Example
cfggen list
Parameters
None
Sample Output
Here is an example of the output of the LIST command.
rebuild command
The REBUILD command initiates a resync of the drives in an IM or IME volume. This
command is used to force a manual resync of the drives in the volume even if auto
rebuild is turned off. It works by taking the secondary drive offline and bringing it
back online immediately, thereby starting a resync. The volume status changes to
Resyncing (RSY) upon successful execution.
Example
cfggen <controller #> rebuild <volume id>
Example
Sample Output
See “To View the Status of a RAID1 Volume” on page 65.
Example
cfggen controller-number status
controller-number Number of the SCSI bus or SAS controller targeted by this command
Sample Output
Here is an example of the status information returned when a volume
resynchronization is in progress:
The status fields in the data displayed can have the following values:
You must determine the controller numbers used. The controllers are enumerated
starting with 0 based on bus location. Unless other LSI add-on cards have been
installed, the controller number for the 1064 is 0. Otherwise, run the MPTutil to
determine the order of the LSI controllers.
2. If there is an array present, delete the array as described in “To Delete a RAID
Array” on page 64.
3. Determine the slot numbers of the desired drives and check that the drives are
ready by using the display command.
This command gives information about the controller, IR volume, physical devices,
and enclosure. The slot numbers are located in the physical device information for
each device. For RAID 0, at least two slot numbers are needed. An example of the
physical device information is below.
Target on ID #1
Device is a Hard disk
Slot # : 1
Target ID : 11
State : Ready (RDY)
The size is in Mbytes and, like the delete command, noprompt is optional. For
example, to create a RAID0 array on controller 0 that is 512 MB on slots 0 and 1,
type:
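A plausible form of that command, using the enclosure:bay notation described earlier, is shown below; the enclosure number (1 here) is an assumption for illustration, so take the real enclosure and slot values from the display output:

cfggen 0 create IS 512 1:0 1:1 noprompt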
If an array already exists on the specified drives, the create command gives an
error stating there are not enough resources.
▼ To Fail RAID 0
Make RAID0 fail by removing one of its drives.
5. Verify that the drive still fails by executing the status command.
The command displays:
6. Delete the array as described in “To Delete a RAID Array” on page 64.
2. Verify the RAID1 has been created by running the status command.
The command displays:
If you use the noprompt option, the utility automatically deletes the arrays.
Otherwise, the utility asks if it can continue with this command.
This part describes how to use the MegaRAID Storage Manager and has the
following chapters:
■ “MegaRAID Storage Manager (MSM) Installation” on page 69
■ “Using MegaRAID Storage Manager” on page 75
■ “LSI SNMP Utility” on page 105
CHAPTER 7
The MegaRAID Storage Manager (MSM) program provides you with graphical user
interface (GUI) tools to configure RAID storage systems, based on the LSI 106x
controllers used in some of the x64 servers. To determine if your server supports this
program, refer to the Product Notes for your platform.
Overview
The MSM program enables you to configure the controllers, physical disk drives,
and virtual disk drives on your system. The Configuration Wizard in the MSM
program simplifies the process of creating disk groups and virtual disk drives by
guiding you through several simple steps to create your storage configurations.
MSM works with the appropriate operating system (OS) libraries and drivers to
configure, monitor, and maintain storage configurations attached to x64 servers. The
MSM GUI displays device status in the form of icons, which represent the
controllers, virtual disk drives, and physical disk drives on your system. Special
icons appear next to the device icons on the screen to notify you of disk failures and
other events that require immediate attention. System errors and events are recorded
in an event log file and are displayed on the screen.
Note – The MSM installation files and drivers were installed on your system if you
selected the correct optional components during the Windows 2003 Server
installation. If you did not select these components, continue with this procedure.
The MSM packages are available on the product Tools and Drivers CD, and also as
part of an archive called windows.zip. You can download CD ISO and the archive
from the Sun web site. See “Obtaining Utilities” on page xi.
3. Run the installation application in this directory. This is a file with a name of
the form InstallPackxxxxxx.zip, where the xxxxxx is a version string.
The Sun Fire Installation Package dialog box appears.
7. Click Next.
The End User License Agreement dialog box appears.
9. Click Finish.
If you see output similar to this, then mptctl is inserted. If the command produces
no output, mptctl is not inserted.
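The check referred to above is typically done with lsmod; a minimal sketch follows (only the module name mptctl comes from this guide, the rest is a common Linux idiom):

# lsmod | grep mptctl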
To insert mptctl, type this command:
# modprobe mptctl
2. To ensure you have a fully operational system to run the MSM utility, install a
full Linux installation (everything) on the system.
Using the MSM utility on a Linux system requires a number of shared libraries
that are not included in the basic install of most Linux distributions.
4. Insert the Tools and Drivers CD into the CD-ROM drive connected to your
server, or copy the files to your system.
5. Locate the MSM Linux installer file in the raid directory.
8. Locate the disk directory created by uncompressing the installer, and move to
this directory by typing:
# cd disk
2. Check the last entries in the dmesg log to determine which SCSI device is the
LSI RAID device. For example, type:
# dmesg | tail -30 | grep Attached
This searches the last 30 lines of the dmesg output for the appropriate line. A line
such as the following will appear:
Attached scsi disk sda at scsi0, channel 0, id 2 lun 0
In this case the disk sda would be the SCSI device.
c. Press enter to choose the default and start the partition at the beginning of
the drive.
d. Press enter to choose the default and end the partition at the end of the
drive.
Note – You can create a partition that is smaller than the maximum size of the
device or create multiple partitions on the device. See the fdisk man page for
details.
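Purely as an illustration of those defaults, an interactive fdisk session that creates a single primary partition spanning a device named /dev/sda (substitute the device you identified in Step 2) typically involves the following keystrokes:

# fdisk /dev/sda
(then, at the fdisk prompt)
n        create a new partition
p        make it a primary partition
1        partition number 1
<Enter>  accept the default first cylinder
<Enter>  accept the default last cylinder
w        write the partition table and exit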
You determined the base name of the RAID device in Step 2 above. That device
name refers to the RAID as a whole. When you created a partition on that RAID,
a device name was created to address just that partition. If the RAID is sda, and
you followed the above directions, the partition device would be sda1. If there
were multiple partitions on a disk there would be multiple corresponding device
names (sda1, sda2, sda3, etc).
Now that a partition exists on the RAID device, a file system needs to be written to
that partition. The following commands create an ext2 or ext3 file system. Replace
device with the device name referencing the appropriate partition.
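One common way to do this, shown here with the hypothetical partition device /dev/sda1 from the example above, is:

# mkfs.ext2 /dev/sda1     (ext2 file system)
# mkfs.ext3 /dev/sda1     (ext3 file system)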
This chapter explains how to launch and use the MSM (MegaRAID Storage
Manager) program. Use the program to create RAID arrays, and then manage and
monitor the RAID arrays after array creation. The following sections describe how to
start and use the MSM program:
■ “Starting the MSM Program” on page 75
■ “Using the MSM RAID Configuration Wizard” on page 84
■ “Monitoring System Events and Storage Devices” on page 96
■ “Maintaining and Managing Storage Configurations” on page 100
■ “Known Issues” on page 102
▼ To Start MSM on the Windows 2003 Server
1. On the taskbar, click Start and choose All Programs.
Running MSM
After you have started MSM, the MSM server window appears. The first screen is
the Select Server window, similar to the one shown in FIGURE 8-1.
Note – If a warning appears indicating the Windows Firewall has blocked some
features of the program, click Unblock to allow MSM to start. The Windows Firewall
might block some Java based programs like MSM. If there are multiple servers on
the network, you might experience a delay before the Select Server window appears.
A network with multiple servers might look similar to the Select Server window
shown in FIGURE 8-1.
The host server status is indicated by an LED-like indicator located within
the host server icon, to the left of the IP address.
■ A green LED indicates normal operation.
■ A yellow LED indicates that the server is running in a degraded state.
For example, a disk drive used as a virtual disk has failed.
■ A red LED indicates that the server’s storage configuration has failed.
Note – You can access servers on a different subnet by entering an IP address in the
Connect to remote Framework at field at the bottom of the screen. The check box next to
the Connect to remote Framework at field enables you to access a standalone server
running MSM, if it has a network connection.
2. Click Update.
▼ To Log in to MSM
1. Double-click the icon of the desired Host server in the Select Server
window.
See FIGURE 8-1 or FIGURE 8-2.
3. Select a login mode from the drop-down list. See FIGURE 8-3.
■ Select Full Access to view or modify the current configuration.
■ Select View Only to view and monitor the configuration.
Note – If you are accessing the server over a network, you will also need to enter
the root/administrator user name and password to use Full Access mode. Step 4
gives you access to the server, but not full access over the network.
If your user name and password are correct, the MSM Physical/Logical window
appears, similar to the one shown in FIGURE 8-4.
5. (Full Access) Type the Administrator user name and password, and then click
Login. See FIGURE 8-3.
The MSM Physical/Logical window displays similar to the one shown in
FIGURE 8-4.
MSM Windows
This section describes the MSM Physical/Logical window, which appears after you
have logged in to the MSM program. The following topics describe the panels and
menu options that display in this window.
TABLE 8-1 shows the icons that appear in the left panel to represent the controllers,
disk drives, and other devices:
System
Port
Disk group
When you have SAS disk drives with dual paths, you see a single disk in the left
panel of the main menu screen with the Physical tab chosen.
Select a drive in this panel and choose the Properties tab in the right panel of the
screen. If the disk drive has two paths, you see a SAS Address 0 and SAS Address 1
in the Properties tab. You also see that the Redundant Paths property has the value
‘Yes.’
If you remove one of the paths (for example by removing a Multi-Function NEM
that connects a server blade to a disk blade), you see only one SAS address and the
Redundant Paths property has the value ‘No.’ When you restore the path, the
Redundant Paths property has the value ‘Yes’ once again.
Note – You can view the Redundant Paths property when you remove and restore a
path to verify that your version of MSM is multi-path aware.
You get an Alert Event Notification whenever a second path is added or deleted. The
messages are:
■ Redundant path inserted.
■ Redundant path broken.
Menu Bar
The brief descriptions listed here refer to the main selections in the MSM menu bar.
■ File Menu: The File menu has an Exit option for exiting MSM. It also has a Rescan
option for updating the display in the MSM-IR window. (Rescan is seldom
required; the display normally updates automatically.)
■ Operations Menu: The Operations menu is available when a controller, physical
drive, or logical drive is selected in the MSM window. The Operations menu
options vary, depending on what type of device is selected in the left panel of the
MSM window.
Note – When you use MSM online Help, you might see a warning message that
Internet Explorer has restricted the file from showing active content. If this warning
displays, click the active content warning bar and enable the active content.
Note – Physical disk drives with bootable partitions cannot be used to create a
virtual drive on a SAS IR controller with MSM. Drives with bootable partitions do
not appear on the list of available drives to create a new virtual drive. To make
physical disk drives with bootable partitions available for use in creating a virtual
drive, you must clear the bootable flag in the partition or remove the partition.
3. Select the physical disk drive(s) and add the selected disk drive(s) to the new
array.
Note – Two disks can be selected at the same time by using the shift key after
making the first disk drive selection.
b. Skip to Step c, or make a second selection by holding down the shift key
while your first physical disk drive is selected (highlighted).
Both physical disk drives should be highlighted.
4. When you are finished, click Accept, and then click Next.
The Virtual Disk Creation dialog box displays. See FIGURE 8-8.
5. Select a RAID Level, Size (in Mbytes), and a Disk Cache Policy from Virtual
Disk Properties.
6. Click the active Accept button, and then click Next. See FIGURE 8-8 and
FIGURE 8-9.
The Finish dialog box displays with RAID 0 (New Array) selected and accepted.
The MSM Configuration Wizard builds the RAID and the resultant updated RAID
configuration is displayed in the MSM window. See FIGURE 8-10.
4. Right-click the Disk containing the new RAID created by MSM, and then click
Initialize Disk.
6. Follow the onscreen prompts to page through the New Partition wizard to
create and format a new Windows partition.
Once the New Partition wizard completes, the RAID file system is built and
available for use. See FIGURE 8-14.
Note – MSM might also be installed from the BIOS. For details see the Service
Manual for your product.
You can change a virtual disk’s Read Policy, Write Policy, and other properties at any
time after the virtual disk is created.
Note – Support is provided for enabling/disabling SMART and Write Cache Enable
on physical disks that are not part of a virtual disk, but are connected to a SAS IR
controller. This is different from the way in which properties are changed for virtual
disks.
2. In the right panel, select the Properties tab, and select Set Virtual Disk
Properties.
A list of virtual disk properties displays in the right panel.
Note – Virtual drives with a bootable partition cannot be deleted. This prevents you
from accidentally deleting a drive that contains the operating system. To delete the
virtual drive, you must clear the bootable flag in the partition or remove the
partition.
2. In the left panel of the MSM window, select the Logical tab and click the icon
of the virtual disk to delete.
3. In the right panel, select the Operations tab and select Delete Virtual Disk.
4. Click Go.
5. When the warning message appears, click Yes to confirm the deletion of the
virtual disk.
2. On the menu bar, select Operations > Advanced Operations > Configuration
> Clear Configuration.
A warning message displays.
Each event in the log includes an error level (Info, Warning, Caution, Fatal, or Dead),
a date and timestamp, and a brief description.
For example, creating a new RAID 0 or RAID 1 will generate "#####" in the MSM
date & timestamp log. Also, swapping hard disks will only display
"###################" in the MSM log section.
Monitoring Controllers
MSM enables you to see the status of all
controllers in the left panel of the MSM window. The controller’s status is indicated
by the controller icon(s).
A red LED next to the icon indicates that the controller has failed. (See
“Physical/Logical View Panel” on page 81 for a complete list of device icons.)
The physical disk drive icon by itself indicates that it is operating normally.
A red LED next to the icon indicates that the physical disk drive has failed.
A yellow LED next to the icon indicates that the virtual disk is in degraded mode.
For example, if a physical disk has failed, the virtual disk icon reflects this
degraded condition
A red LED next to the icon indicates that the virtual disk has failed. (See
“Physical/Logical View Panel” on page 81 for a complete list of device icons.)
In Graphical View, the icon for this disk group (array) indicates the virtual disk
usage.
■ Blue = Indicates how much of the disk group capacity is used by this virtual disk.
■ White = Indicates that some of the virtual disk capacity is used by another virtual
disk.
A yellow LED next to the virtual disk indicates that it is in a degraded state; the
data is still safe, but data could be lost if another drive fails.
If you see that the virtual disk is in a degraded state, then view the physical disk
in the virtual disk configuration for drive indications.
A red LED next to the drive icon indicates that the drive has failed.
2. Shut down the system, disconnect the power cord, and open the server chassis.
3. Find the failed disk drive and remove it from the server chassis.
You can identify the disk drive by reading the number (0, 1, 2, 3) on the drive
cable. This corresponds to the drive number displayed in the MSM window. Also,
the drive 0 cable is color-coded. For an Integrated RAID controller, the hard drive
number is on the motherboard next to the cable connector.
4. Replace the failed disk drive with a new drive of equal or greater capacity.
5. Close the computer case, reconnect the power cord, and restart the server.
6. Restart MSM.
When the new drive spins up, the drive icon changes back to normal status, and
the rebuild process begins automatically.
Note – The option to mark a physical disk (drive) as missing does not appear in
some versions of MSM. In those versions where “Mark physical disk as missing” is
disabled or not available, you will see "Mark drive online" and "Rebuild" options.
If a disk drive is currently part of a redundant configuration and you want to use it in another configuration, you can use MSM commands to remove the disk drive from the first configuration. When you do this, all data on that drive is lost, but because the configuration is redundant, the disk drive can be removed without harming the data on the virtual disk.
Note – If a disk drive in a virtual disk has failed, the drive goes offline. If this
happens, you must remove the disk drive and replace it. You cannot make the drive
usable for another configuration by using the Mark physical disk as missing and
Rescan commands.
3. Right-click the disk drive icon again and select Mark physical disk as missing.
Known Issues
The following section lists known issues by product.
The LSI (SAS IR) SNMP utility is used over SAS connections to monitor MSM
activity from a remote station for Windows Server 2003 systems and Linux systems.
The LSI SNMP agent requires installation of SNMP service on the server side
followed by installation of the LSI SNMP agent on the remote station.
Note – You must install and configure SNMP service before installing the LSI
(SAS IR) SNMP Agent.
▼ To Install the SNMP Service on the Server Side
on Windows Server 2003
Note – To complete this procedure, you will need the Windows Server 2003 CD that
came with your system.
If the SNMP service has already been installed and configured, skip this procedure.
4. Click Next.
The Windows Server 2003 OS extracts the necessary SNMP files and installs the
SNMP service on your server.
5. Click the Security tab and select the Accept SNMP Packets From Any Host option.
6. If you want to send traps to a host IP, click the Traps tab and select from the list
of host IPs.
2. Uncompress sas_ir_snmp.tar.gz.
4. To allow a client machine to run SNMP queries on the server, modify the
snmpd.conf file by adding this line:
5. To configure the server to send traps to remote clients automatically, add the
following line to
/etc/lsi_mrdsnmp/sas-ir/sas_ir_TrapDestination.conf:
1.1.1.1 public
where 1.1.1.1 is the IP address of the machine you wish to send traps to.
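As a quick check that the agent answers remote queries, a client that has the Net-SNMP tools installed can walk the server's SNMP tree. This is a sketch only; the IP address 192.0.2.10 and the public community string are placeholders for your own values:
# snmpwalk -v2c -c public 192.0.2.10 system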
Note – Before you install the SAS IR Agent, verify that the SNMP service is already
installed and configured on the system. If the SNMP Service is not installed on your
system, refer to “Installing and Configuring SNMP Service” on page 105.
Note – This file is not on the CD or in the online downloads. To obtain it, contact your Sun service representative. (Refer to CR 6578969.)
■ (Linux) SAS-IR_SNMP_Linux_Installer-3.xx-xxxx.zip is in
/linux/tools/raid
■ SUN-PLATFORM-MIB.mib is in /common/snmp/
3. Open the DISK1 folder and run setup.exe to install the LSI (SAS IR) SNMP
Agent on the remote system.
4. Use the SNMP Manager to retrieve the SAS IR data and monitor the MSM
activity on the server from the remote station.
Note – The trap function of SNMP is described in the Integrated Lights Out
Management (ILOM) documentation available on your product documentation web
site. You will need the MIB file, LSI-megaRAID_Sas_IR.mib, which describes the LSI SNMP traps and must be compiled into the trap-catching utility.
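If the trap-catching utility on the remote station is the Net-SNMP snmptrapd daemon (named here only as an example; your monitoring tool may differ), a minimal sketch is to copy the MIB onto that station's MIB search path and run the daemon in the foreground to watch incoming traps. The directory shown is a common Net-SNMP default and might differ on your system:
# cp LSI-megaRAID_Sas_IR.mib /usr/share/snmp/mibs/
# snmptrapd -f -Lo -m ALL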
This part describes how to use the Solaris raidctl command to create hardware RAID volumes on any server running the Solaris OS. It contains the following chapters:
■ “Introduction to raidctl” on page 113
■ “The raidctl Man Page” on page 119
CHAPTER 10
Introduction to raidctl
This chapter provides an overview of the Solaris raidctl command for setting up RAID on LSI SAS controllers. The chapter includes these sections:
■ “What is raidctl?” on page 113
■ “When to Use raidctl” on page 114
■ “Using raidctl to Create RAID Volumes” on page 114
■ “Other Uses For raidctl” on page 118
What is raidctl?
raidctl is a Solaris command that can be used to set up hardware RAID volumes
on LSI host bus adapters (HBAs). It is entirely analogous to the LSI BIOS utility
described in Part I of this document.
SPARC systems, which run the Solaris OS, do not have a BIOS, so the LSI BIOS utility is not available to set up RAID on the LSI HBA. raidctl takes its place.
raidctl can be used to set up RAID volumes for any LSI HBA that can be set up
with the LSI BIOS utility. This includes on-board LSI 106x chips and PCI cards based
on LSI 106x chips.
Note – Since raidctl is a Solaris command, it can be used with any server running
the Solaris OS, no matter whether it has a SPARC, AMD, or Intel processor. The
behavior of raidctl is not dependent on which server is running the Solaris OS.
When to Use raidctl
Hardware RAID can be set up with raidctl before or after your server's OS is installed. However, if you want to mirror your boot disk, the RAID mirror must be set up before OS installation.
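For example, a boot-disk mirror could be created from the first two disks on a controller before installing the OS. This is a sketch only; the disk names c2t0d0 and c2t1d0 are placeholders for the disks in your system:
# raidctl -c c2t0d0 c2t1d0
Because only two disks are listed and no RAID level is specified, raidctl creates a RAID 1 volume, which then appears to the Solaris installer as a single disk.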
The raidctl -C and raidctl -c commands are described in detail in the raidctl
man page, which is reproduced in the next chapter (“The raidctl Man Page” on
page 119). Numerous examples are given in the man page.
Disk Names
HBAs might have different connectors to different SCSI buses; these are called channels. In the Solaris device file convention, a channel is represented by the letter c followed by the controller number.
SCSI disks are addressed by a target number and a logical unit number. There can be multiple logical unit numbers, up to a maximum of 8, under each target number. A disk can be referred to in either of two formats:
1. The Solaris canonical format, c?t?d?, where c is the controller number, t is the
target number, and d is the logical unit number. For example, three disks
connected to controller number 2 could be c2t0d0, c2t1d0, and c2t2d0.
2. The C.ID.L format, where C is the channel number (not the same as the
controller number) and ID and L are once again the target ID and logical unit
number.
# format
Searching for disks...done
c2t3d0: configured with capacity of 136.71GB
AVAILABLE DISK SELECTIONS:
0. c2t0d0 .........
1. c2t1d0 ..........
2. c2t2d0.........
3. c2t3d0........
#
# raidctl -l
Controller: 2
Disk: 0.0.0
Disk: 0.1.0
Disk: 0.2.0
Disk: 0.3.0
#
Note – Running the raidctl -l command also provides the number of the controller, which is 2. This means that the names of these disks in canonical form would be c2t0d0, c2t1d0, c2t2d0, and c2t3d0 (as was found above by running the format command).
Note – If you run the raidctl -c command without the [-r raid_level]
option, you can only list two disks and you will get a RAID 1 volume. To create a
RAID 1E volume, you must list more than two disks and you must use the -r option.
If you list three disks and do not specify the raid_level option, the command fails. If you list only two disks and do not specify the raid_level option, the command succeeds; although its output does not say so, the volume it creates (c2t1d0 in this case) is a RAID 1 volume.
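The following commands sketch both cases; the disk names are placeholders:
# raidctl -c c2t1d0 c2t2d0 c2t3d0
# raidctl -c c2t1d0 c2t2d0
The first command fails because three disks are listed without the -r option; the second creates the RAID 1 volume. To build a RAID 1E volume from three disks instead, include the -r option:
# raidctl -c -r 1E c2t1d0 c2t2d0 c2t3d0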
Parameter       Description
“disks”         A list of disks in C.ID.L format. The list can include disks and
                sub-volumes, separated by spaces. Sub-volumes are groups of disks
                separated by spaces but enclosed in parentheses, for example,
                (0.0.0 0.1.0).
-r raid_level   raid_level can be 0, 1, 1E, 5, 10, or 50. See the man page for
                descriptions of the disk combinations that can be used. If this
                parameter is omitted, raidctl creates a RAID 1 volume if two
                disks are listed and fails otherwise.
                Note - The LSI 106x HBA can only form RAID levels 0, 1, and 1E.
-z capacity     The capacity of the volume that will be created. The unit can be
                terabytes, gigabytes, or megabytes, entered as 2t, 24g, 256m,
                and so forth.
Note – As with raidctl -c, you must use the [-r raid_level] option unless
you are forming a RAID 1 volume with just two disks.
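For illustration, the following commands sketch both cases for raidctl -C; the C.ID.L disk identifiers and the controller number 2 are placeholders:
# raidctl -C "0.2.0 0.3.0" 2
# raidctl -C "0.4.0 0.5.0 0.6.0" -r 1E 2
The first command creates a RAID 1 volume because only two disks are listed and -r is omitted. The second command creates a RAID 1E volume from three disks and therefore must include the -r option.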
These options are all described in the next chapter, which lists the raidctl man
page.
NAME
raidctl - RAID hardware utility
SYNOPSIS
raidctl -C “disks” [-r raid_level] [-z capacity] [-s stripe_size] [-f] controller
raidctl -d [-f] volume
raidctl -F filename [-f] controller...
raidctl -a {set | unset} -g disk {volume | controller}
raidctl -p “param=value” [-f] volume
raidctl -c [-f] [-r raid_level] disk1 disk2 [disk3...]
raidctl -l -g disk controller
raidctl -l volume
raidctl -l controller...
raidctl [-l]
raidctl -S [volume | controller]
raidctl -S -g disk controller
raidctl -h
DESCRIPTION
The raidctl utility is a hardware RAID configuration tool that supports different RAID controllers by providing a CLI (command-line interface) to end users to create, delete, or display RAID volume(s). The utility can also be used to set properties of a volume, assign hot-spare (HSP) disks to volumes or controllers, and to update firmware/code/BIOS for RAID controllers.
The raidctl utility requires privileges that are controlled by the underlying file-
system permissions. Only privileged users can manipulate the RAID system
configuration. If a non-privileged user attempts to run raidctl, the command fails
with an exit status of 1.
The raidctl utility defines a set of command-line options to provide management for full-featured RAID controllers. Since the supported features may vary with different
RAID controllers, not all options are supported for a given RAID controller. The user
can use raidctl to list the type of a given controller and the firmware version to
determine the supported features.
OPTIONS
The following options are supported:
-C “disks” [-r raid_level] [-z capacity] [-s stripe_size] [-f] controller
Create a RAID volume using specified disks.
When creating a RAID volume using this option, the identity of the newly
created volume is automatically generated and raidctl reports it to the user.
The argument specified by this option contains the elements used to form the
volume that will be created. Elements can be either disks or sub-volumes,
where disks are separated by space(s) and a sub-volume is a set of disks
grouped by parenthesis. All disks should be in C.ID.L expression (for example,
0.1.2 represents a physical disk of channel 0, target id 1, and logical unit
number 2). The argument must match the RAID level specified by the -r
option, even if it’s omitted. This means the argument can only be:
For example, the expression “0.0.0 0.1.0” means that the 2 specified disks form
a RAID volume, which can either be a RAID 0 or a RAID 1 volume. “(0.0.0
0.1.0)(0.2.0 0.3.0)” means that the first 2 disks and the last 2 disks form 2 sub-
volumes, and that these 2 sub-volumes form a RAID 10 volume. See the
EXAMPLES section for more samples.
The -r option specifies the RAID level of the volume that will be created.
Possible levels are 0, 1, 1E, 5, 10, 50. If this option is omitted and only two disks
are listed, raidctl creates a RAID 1 volume by default—otherwise, it fails.
The -z option specifies the capacity of the volume that will be created. The unit
can be tera-bytes, giga-bytes, or mega-bytes (for example, 2t, 10g, 20m, and so
on). If this option is omitted, raidctl calculates the maximum capacity of the
volume that can be created by the specified disks and uses this value to create
the volume.
The -s option specifies the stripe size of the volume that will be created. The
possible values are 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, or 128k. If this option is
omitted, raidctl chooses an appropriate value for the volume (for example,
64k).
In some cases, the creation of a RAID volume may cause data on specified
disks to be lost (for instance, on LSI1020, LSI1030, LSI1064, or LSI1068 HBAs),
and raidctl prompts the user for confirmation about the creation. Use the -f
option to force the volume creation without prompting the user for
confirmation.
The controller argument identifies the RAID controller to which the specified disks belong. The -l option can be used to list the controller’s ID number.
-d [-f] volume
Delete the RAID volume specified as volume. The volume is specified in
canonical form (for example, c0t0d0).
-l -g disk controller
Display information about the specified disk of the given controller. The
output includes the following information:
-l volume
Display information about the specified volume. The output includes the
following information:
-l controller...
Display information about the specified controller(s). The output includes the
following information:
[-l]
List all RAID related objects that the raidctl utility can manipulate, including
all available RAID controllers, RAID volumes, and physical disks. The -l option
can be omitted.
The output includes the following information:
Controller Displays the controller ID number, and the controller type string
in double-quotation marks.
Volume Displays the RAID volume name, number of component disks,
the C.ID.L expression of the component disks, the RAID level, and
the status. The status can be either OPTIMAL, DEGRADED,
FAILED, or SYNCING.
Disk Displays the C.ID.L expression of the disk, and the status. The
status can be either GOOD, FAILED, or HSP (disk has been set as a
stand-by disk).
-S -g disk controller
Takes a snapshot of the information for the specified disk.
-h
Print out the usage string.
EXAMPLES
Example 1 Creating the RAID Configuration
The following command creates a RAID 0 volume of 10G on controller 0, and the
stripe size will be set to 64k:
# raidctl -C “0.0.0 0.2.0” -r 0 -z 10g -s 64k 0
The following command creates a RAID 1 volume on controller 2:
# raidctl -C “0.0.0 1.1.0” -r 1 2
The following command creates a RAID 5 volume on controller 2:
# raidctl -C “0.0.0 0.1.0 0.2.0” -r 5 2
The following command sets disk 0.3.0 on controller 2 as a global hot-spare disk:
# raidctl -a set -g 0.3.0 2
The following command sets disk 0.3.0 on controller 2 as a local hot-spare disk to
volume c2t0d0:
# raidctl -a set -g 0.3.0 c2t0d0
The following command converts disk 0.3.0 on controller 2 from a global hot-
spare disk to a normal one:
# raidctl -a unset -g 0.3.0 2
The following command removes disk 0.3.0 from being a local hot-spare disk
from volume c2t0d0:
# raidctl -a unset -g 0.3.0 c2t0d0
The following command sets the write policy of the volume to “off”:
# raidctl -p “wp=off” c0t0d0
# raidctl -S c1t0d0
c1t0d0 2 0.0.0 0.1.0 1 OPTIMAL
# raidctl -S -g 0.1.0 1
0.1.0 GOOD
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
ATTRIBUTE TYPE        ATTRIBUTE VALUE
Availability          SUNWcsu
Interface Stability   Evolving
SEE ALSO
attributes(5), mpt(7D)
WARNINGS
Do not create RAID volumes on internal SAS disks if you are going to use the Solaris Multipathing I/O feature (also known as MPxIO). Creating a new RAID volume under Solaris Multipathing will give your root device a new GUID that does not match the GUID of the existing devices. This will cause a boot failure because your root device entry in /etc/vfstab will no longer match.
RAID Terminology
array See disk group.
caching The process of using a high speed memory buffer to speed up a computer
system’s overall read/write performance. The cache can be accessed at a
higher speed than a disk subsystem. To improve read performance, the cache
usually contains the most recently accessed data, as well as data from adjacent
disk sectors. To improve write performance, the cache may temporarily store
data in accordance with its write-back policies.
consistency check An operation that verifies that all stripes in a virtual disk with a redundant
RAID level are consistent and that automatically fixes any errors. For RAID 1
disk groups, this operation verifies correct mirrored data for each stripe.
controller A chip that controls the transfer of data between the microprocessor and
memory or between the microprocessor and a peripheral device such as a
physical disk. RAID controllers perform RAID functions such as striping and
mirroring to provide data protection. MSM runs on the SAS Integrated RAID
controller.
current write policy A virtual disk property that indicates whether the virtual disk currently
supports write back or write through caching mode.
default write policy A virtual disk property indicating whether the default write policy is
Write through or Write back.
device driver Software that allows the operating system to control a device such as a printer.
Many devices do not work properly unless the correct driver is installed in the
computer.
device ID A controller or physical disk property indicating the manufacturer-assigned
device ID.
device port count A controller property indicating the number of ports on the controller.
disk cache policy A virtual disk property indicating whether the virtual disk cache is enabled,
disabled, or unchanged from its previous setting.
disk group A logical grouping of disks attached to a RAID controller on which one or
more virtual disks can be created, such that all virtual disks in the disk group
use all of the physical disks in the disk group.
disk subsystem A collection of disks and the hardware that controls them and connects them to
one or more controllers. The hardware can include an intelligent controller, or
the disks can attach directly to a system I/O bus controller.
fast initialization A mode of initialization that quickly writes zeros to the first and last sectors of
the virtual disk. This enables you to start writing data to the virtual disk
immediately while the initialization is running in the background.
fault tolerance The capability of the disk subsystem to undergo a single drive failure per disk group without compromising data integrity and processing capability. The SAS Integrated RAID controller provides fault tolerance through redundancy in RAID 1 disk groups.
foreign configuration A RAID configuration that already exists on a replacement set of physical disks
that you install in a computer system. MSM enables you to import the existing
configuration to the RAID controller, or you can clear the configuration so you
can create a new one.
formatting The process of writing a specific value to all data fields on a physical disk, to
map out unreadable or bad sectors. Because most physical disks are formatted
when manufactured, formatting is usually done only if a physical disk
generates many media errors.
global hot spare One or two disk drives per controller can be configured as global hot-spare
disks, to protect data on the IM/IME volumes configured on the controller.
If the firmware fails one of the mirrored disks, it automatically replaces the
failed disk with a hot-spare disk and then re-synchronizes the mirrored
data. The firmware is automatically notified when the failed disk has been replaced, and it then designates the replacement disk as the new hot-spare.
host interface A controller property indicating the type of interface used by the computer
host system: for example, PCIX.
host port count A controller property indicating the number of host data ports currently in use.
initialization The process of writing zeros to the data fields of a virtual disk and, in fault-
tolerant RAID levels, generating the corresponding parity to put the virtual
disk in a Ready state. Initialization erases all previous data on the physical
disks. Disk groups will work without initializing, but they can fail a
consistency check because the parity fields have not been generated.
migration The process of moving virtual disks from one controller to another by
disconnecting the physical disks from one controller and attaching them to
another one. The firmware on the new controller will detect and retain the
virtual disk information on the physical disks.
mirroring The process of providing complete data redundancy with two physical disks
by maintaining an exact copy of one disk’s data on the second physical disk. If
one physical disk fails, the contents of the other physical disk can be used to
maintain the integrity of the system and to rebuild the failed physical disk.
name A virtual disk property indicating the user-assigned name of the virtual disk.
offline A physical disk is offline when it is part of a virtual disk but its data is not
accessible to the virtual disk.
physical disk A nonvolatile, randomly addressable device for storing data. Physical disks are
re-writable and are commonly referred to as disk drives.
physical drive state A physical disk drive property indicating the status of the drive. A physical
disk drive can be in one of the following states:
Online: A physical disk can be accessed by the RAID controller and is part of
the virtual disk.
Failed: A physical disk that was originally configured as Online but on which
the firmware detects an unrecoverable error.
Missing: A physical disk that was Online, but which has been removed from
its location.
Offline: A physical disk that is part of a virtual disk but which has invalid
data.
None: A physical disk with the unsupported flag set. An Unconfigured Good
or Offline physical disk that has completed the prepare for removal operation.
physical drive type A physical disk drive property indicating the characteristics of the drive.
product info A physical disk property indicating the vendor-assigned model number of the
drive.
product name A controller property indicating the manufacturing name of the controller.
RAID A group of multiple, independent disk drives that provide high performance
by increasing the number of disks used for saving and accessing data. A RAID
disk group improves I/O performance and data availability. The group of disk
drives appears to the host system as a single storage unit or as multiple logical
disks. Data throughput improves because several physical disks can be
accessed simultaneously. RAID configurations also improve data storage
availability and fault tolerance.
RAID 0 RAID 0 uses data striping on two or more disk drives to provide high data
throughput, especially for large files in an environment that requires no data
redundancy.
RAID 1 RAID 1 uses data mirroring on a pair of disk drives so that data written to one
physical disk is simultaneously written to the other physical disk. RAID 1
works well for small databases or other small applications that require
complete data redundancy.
read policy A controller attribute indicating the current read policy mode. In Always read
ahead mode, the controller reads sequentially ahead of requested data and
stores the additional data in cache memory, anticipating that the data will be
needed soon. This speeds up reads for sequential data, but there is little
improvement when accessing random data. In No read ahead mode, read-ahead
capability is disabled. In Adaptive read ahead mode, the controller begins using
read-ahead read policy if the two most recent disk accesses occurred in
sequential sectors. If the read requests are random, the controller reverts to No
read ahead mode.
rebuild The regeneration of all data to a replacement disk in a redundant virtual disk
after a physical disk failure. A disk rebuild normally occurs without
interrupting normal operations on the affected virtual disk, though some
degradation of performance of the disk subsystem can occur.
reclaim virtual disk A method of undoing the configuration of a new virtual disk. If you highlight
the virtual disk in the Configuration Wizard and click the Reclaim button, the
individual disk drives are removed from the virtual disk configuration.
redundancy A property of a storage configuration that prevents data from being lost when
one physical disk fails in the configuration.
redundant configuration A virtual disk that has redundant data on physical disks in the disk group that
can be used to rebuild a failed physical disk. The redundant data can be parity
data striped across multiple physical disks in a disk group, or it can be a
complete mirrored copy of the data stored on a second physical disk. A
redundant configuration protects the data in case a physical disk fails in the
configuration.
revision level A physical disk property that indicates the revision level of the disk’s
firmware.
SCSI device type A physical drive property indicating the type of the device, such as Disk Drive.
stripe size A virtual disk property indicating the data stripe size used in the virtual disk.
See striping.
striping A technique used to write data across all physical disks in a virtual disk. Each
stripe consists of consecutive virtual disk data addresses that are mapped in
fixed-size units to each physical disk in the virtual disk using a sequential
pattern. For example, if the virtual disk includes five physical disks, the stripe
writes data to physical disks 1 through 5 without repeating any of the physical
disks. The amount of space consumed by a stripe is the same on each physical
disk. Striping by itself does not provide data redundancy.
subvendor ID A controller property that lists additional vendor ID information about the
controller.
vendor info A physical disk drive property listing the name of the vendor of the drive.
virtual disk (VD) A storage unit created by a RAID controller from one or more physical disks.
Although a virtual disk may be created from several physical disks, it is seen
by the operating system as a single disk. Depending on the RAID level used,
the virtual disk may retain redundant data in case of a disk failure.
virtual disk state A virtual disk property indicating the condition of the virtual disk. Examples
include Optimal and Degraded.
write back caching The controller sends a data transfer completion signal to the host when the
controller cache has received all the data in a disk write transaction. Data is
written to the disk subsystem in accordance with policies set up by the
controller. These policies include the amount of dirty/clean cache lines, the
number of cache lines available, and elapsed time from the last cache flush.
write through caching The controller sends a data transfer completion signal to the host when the
disk subsystem has received all the data in a transaction.
Index

D
deleting a RAID array, 64
Disk Manager initialization wizard, 90

F
failing RAID0, 62

H
hd utility
  installing on Solaris OS, 45

I
Integrated Mirroring (IM), 46
integrated RAID features, 3
IR firmware support, 1
IT firmware support, 1

L
log in to server, MSM, 78

M
Microsoft Developers Network (MSDN) web site, 71
mirroring
  activating an array, 27
  creating a second IM or IME volume, 21
  creating global hot-spare disks, 21
  creating IM and IME volumes, 14
  deleting an array, 28
  disk write caching, 10
  hot swapping, 9
  hot-spare disk, 10
  IM and IME features, 6
  IM/IME configuration overview, 13
  IM/IME description, 7
  integrated raid firmware, 9
  locating a disk drive or multiple disk drives in a volume, 28
  managing hot-spares, 21
  media verification, 10
  metadata support, 9
  re-synchronization, 9
  selecting a boot disk, 29
  SMART support, 10
  synchronizing an array, 27
  viewing volume properties, 26
  write journaling, 10
MSM
  accessing different server subnet, 77
  Advanced Operations menu, 84
  clearing storage configuration, 96
  controller icon, 81
  creating RAID partition
    Windows, 89
  degraded state icon, 81
  deleting virtual disks, 94
  description, 69
  detecting newly installed disk drives, 100
  device failure icon, 81
  disk drive icon, 81
  disk group icon, 81
  driver installation, 71
  File menu, 83
  Full Access log in to network, 79
  Graphical tab, 83
  Group Operations menu, 84
  GUI, 69
  Help menu, 84
  host server icon status, 77
  host server login, 78
  installation files
    Linux, 72
  installing on
    Linux, 71
    Windows Server 2003, 70
  LSI SAS106x IR controllers, 69
  MegaRAID Storage Manager defined, 69
  monitor
    disk drive status, 96
    disk drives, 98
    rebuilding progress, 99
    storage devices, 96
    virtual disks, 96, 99
  multiple servers on screen, 76
  New Partition wizard
    Windows, 92
  Operations menu, 83
  Operations tab, 83
  physical drive icon, 81
  physical/Logical window description, 80
  port icon, 81
  Properties tab, 82
  RAID
    configuration wizard, 84
    Size selection, 87
    terminology, 133
  setting bootable partitions, 84
  system event log, 81
  using RAID configuration wizard, 84
  using Shift key to select two drives, 86
  View Only, server login, 79
  Virtual Disk Creation dialog, 87
  virtual disk icon, 81
  Windows Disk Manager, 90
  Windows Firewall blocking launch, 76
MSM-IR
  Log menu, 84
  remote station monitoring, 105

N
network login, MSM, 79
New Partition wizard, MSM
  Windows, 92

P
Physical view, MSM, 81

R
RAID support
  Integrated RAID versus Initiator-Target controller firmware, 1
raidctl
  creating raid volumes, 114
  disk names, 114
  man page, 119
  obtaining disk names in C.ID.L format, 115
  obtaining disk names in canonical format, 115
  other uses for raidctl, 118
  raidctl -c command, 116
  raidctl -C command, 117
  what is it?, 113
  when to use, 114
rebuilding a RAID1 array, 63
remote station
  downloading the LSI SNMP agent files, 107
  installing LSI SNMP agent files, 108

W
Windows Preinstallation Environment (WinPE), 46