15.2
OEM Technical Guide
Revision 0.81
Intel Confidential
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL
OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND
CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A
PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel
products are not intended for use in medical, lifesaving, or life sustaining applications.
Intel may make changes to specifications and product descriptions at any time, without notice.
Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for
future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from
published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2016, Intel Corporation. All rights reserved.
Contents
1 About This Document ....................................................................................... 11
1.1 Purpose and Scope of this Document ....................................................... 11
5.4.1 Intel® RST Private NVMe pass-through IOCTL support ................. 42
5.5 14.5 Release ......................................................................................... 44
5.5.1 NVMe pass-through IOCTL support ............................................. 44
5.5.2 Adaptive D3 for Connected Standby (Windows 10) ....................... 44
5.6 14.0 Release ......................................................................................... 45
5.6.1 NVMe Interface Version Compliance ........................................... 45
5.6.2 Accelerated Volume Criteria with Multiple Controllers .................... 45
5.6.3 RAID volume support with multiple controllers ............................. 45
5.6.4 Remapped PCIe Storage Device Support ..................................... 45
5.6.5 Supported Configurations for Remapped PCIe Storage Devices ...... 45
5.6.6 Non-PCH Remapped AHCI Controller Devices ............................... 46
5.6.7 Remapped PCIe Spare Support - RST UI ..................................... 46
5.6.8 Remapped PCIe Spare Support – Driver ...................................... 46
5.6.9 MSI-X Support ......................................................................... 46
5.6.10 RRT volume support with multiple controllers .............................. 46
5.6.11 Pass-through remapped NVMe PCIe Storage Device Support ......... 46
5.6.12 NVMe Admin Security Commands Support ................................... 46
5.6.13 NVMe Autonomous Power State Transition (APST) Support ............ 46
6 Intel Rapid Storage Technology for PCIe NVMe Storage Devices ............................. 47
6.1 OEM System BIOS Requirements............................................................. 47
6.2 General Requirements............................................................................ 47
6.3 Feature Limitations ................................................................................ 48
6.4 PCIe NVMe Device Usage Model .............................................................. 48
6.5 Intel® RST for PCIe NVMe Storage Use cases ............................................ 48
6.6 Intel® Rapid Storage Technology UEFI Compliance Utility for PCIe Storage . 50
7 Intel Rapid Storage Technology for PCIe AHCI Storage Devices .............................. 52
7.1 OEM System BIOS Requirements............................................................. 52
7.2 General Requirements............................................................................ 52
7.3 Warnings .............................................................................................. 52
7.3.1 Features Limitations ................................................................. 52
7.3.2 PCIe Device Usage Model .......................................................... 53
7.4 Intel Rapid Storage Technology for PCIe Storage Use cases ........................ 53
7.5 Intel® Customer Reference Board BIOS Settings ...................................... 54
7.6 Intel® Rapid Storage Technology UEFI Compliance Utility for PCIe Storage . 54
8 Using Dynamic Storage Accelerator (DSA) ........................................................... 56
8.1.1 OEM System BIOS Vendors’ Requirements .................................. 56
8.1.2 General Requirements .............................................................. 56
8.1.3 Configuring DSA ...................................................................... 58
8.1.4 Configuring DSA using Intel® RSTCLI 32/64 Windows* Utilities ...... 59
9 How to Enable the Platform for Intel® RST Support of BIOS Fast Boot ..................... 60
9.1.1 OEM System BIOS Vendors’ Requirements .................................. 60
9.1.2 Supported System Configurations .............................................. 60
10 Creating a RAID Volume.................................................................................... 62
10.1 Using the Intel® Rapid Storage Technology UI........................................... 62
10.2 Using the Intel® Rapid Storage Technology Legacy Option ROM User Interface63
10.3 Using the Intel® Rapid Storage Technology UEFI User Interface .................. 63
10.4 Using the RAID Configuration Utilities (DOS, UEFI Shell, and Windows) ........ 68
11 Deleting a RAID Volume .................................................................................... 69
11.1 Using the Windows User Interface Utility .................................................. 69
11.2 Using the Option ROM User Interface ....................................................... 69
11.3 Using the Intel® Rapid Storage Technology UEFI User Interface .................. 69
11.4 Using the RAID Configuration Utilities (DOS, UEFI Shell, and Windows) ........ 70
12 Common RAID Setup Procedures ....................................................................... 71
12.1 Build a SATA RAID 0, 1, 5 or 10 System ................................................... 71
12.1.1 Using the Legacy OROM User Interface ....................................... 71
12.1.2 Using the UEFI HII User Interface .............................................. 72
12.2 Build a “RAID Ready” System ................................................................. 73
12.3 Migrate to RAID 0 or RAID 1 on an Existing “RAID Ready” System .............. 73
12.4 Migrate an Existing Data Hard Drive to a RAID 0 or RAID 1 Volume ............. 74
12.5 Migrating From one RAID Level to Another ............................................... 75
12.6 Create a RAID Volume on Intel® SATA Controller While Booting to Different Controller 75
12.7 Build a RAID 0 or RAID 1 System in an Automated Factory Environment ...... 76
12.7.1 Part 1: Create the Master Image ................................................ 76
12.7.2 Part 2: Apply the Master Image ................................................. 76
18.5 SATA Asynchronous Notification .............................................................. 95
18.6 Runtime D3 (RTD3) ............................................................................... 95
18.7 Hybrid Hinting ....................................................................................... 96
18.7.1 Instructions to Disable Hybrid Hinting ......................................... 96
18.7.2 Hybrid Hint Reset ..................................................................... 96
18.7.3 Disable Hybrid Hinting During Hibernation ................................... 97
19 Power Savings with Intel® Rapid Storage Technology............................................ 98
19.1 Link Power Management (LPM) ............................................................... 98
19.1.1 Instructions to disable/enable LPM ............................................. 98
19.1.2 LPM Updates in 14.0 (APS, SIPM) ............................................... 99
19.2 Runtime D3 (RTD3) ............................................................................. 100
19.2.1 Adapter RTD3 Support ............................................................ 101
19.3 New for RST 14.0 Release .................................................................... 101
19.3.1 RTD3 Support - RAID HDD/SSD/SSHDs Unit Support.................. 101
19.3.2 RTD3 Support - RAID w mixed RTD3 capable/non-capable ports .. 101
19.3.3 RTD3 Support - RRT ............................................................... 101
19.3.4 RTD3 Support - SRT ............................................................... 101
19.3.5 RTD3 Support - Hot Spares ..................................................... 101
19.3.6 RTD3 Support - Migrations & Rebuilds ...................................... 102
19.4 DEVSLP .............................................................................................. 102
19.4.1 DEVSLP Registry Key Setting: .................................................. 103
19.5 DevSleep Tool ..................................................................................... 104
19.5.1 CsDeviceSleepIdleTimeoutInMS ............................................... 104
19.5.2 DeviceSleepIdleTimeoutInMS ................................................... 105
19.5.3 DeviceSleepExitTimeoutInMS ................................................... 105
19.5.4 MinimumDeviceSleepAssertionTimeInMS ................................... 105
19.5.5 DevSleep Tool Usage .............................................................. 106
19.6 L1.2 Support....................................................................................... 107
19.7 InstantGo* Device Notification Support .................................................. 108
19.7.1 Requirements ........................................................................ 108
19.7.2 Detail Description ................................................................... 108
19.7.3 Registry Settings.................................................................... 109
19.8 New in 14.5 Release ............................................................................ 110
19.8.1 Connected Standby Power State Support for SSHD .................... 110
19.8.2 CONNECTED STANDBY Power Model ......................................... 111
19.8.3 Adaptive D3 Idle Timeout ........................................................ 111
19.8.4 Connected Standby Power Model Support for SRT ...................... 112
19.8.5 SATA Link Power Management Support ..................................... 115
20 Legacy RAID Option ROM and Utilities............................................................... 116
20.1.1 General Requirements ............................................................ 116
21 HDD Password Support With RAID Volumes ....................................................... 117
21.1 HDD Password Use Cases ..................................................................... 117
21.2 Unlocking Password Protected Disks ...................................................... 118
22 Intel® Smart Response Technology – Dual Drive Configuration ............................ 119
22.1 Overview ............................................................................................ 119
22.1.2 Requirements and Limitations .................................................. 120
22.1.3 Acceleration Modes................................................................. 122
22.2 Dynamic Cache Sharing Between SRT and Rapid Start ............................. 123
22.3 Build a New System with Disk/Volume Acceleration Enabled ..................... 123
22.3.1 Prepare New Computer ........................................................... 123
22.3.2 Setup the HW for Installing the OS to an Accelerated Disk/Volume 124
22.3.3 If Using RSTCLI32/64 (compatible with WinPE) .......................... 125
22.4 At the DOS prompt command line (note that rstcli and rstcli64 are interchangeable in the
below example)................................................................................... 125
22.5 Setup the SSD to be the “Cache SSD”: .................................................. 126
22.6 Accelerate the pass-through disk (this is the disk planned to be the OS system disk for the
‘New System’): ................................................................................... 126
22.7 Setup the SSD to be the “Cache SSD”: .................................................. 126
22.8 Accelerate the RAID volume (this is the RAID volume planned to be the OS system disk for
the ‘New System’): .............................................................................. 126
22.8.1 If Using the Intel® RST UI ...................................................... 126
22.9 Installing the OS to a New System Prepared for Disk/Volume Acceleration . 127
22.9.1 For Acceleration Components Pre-configured Via RCfgSata (DOS or UEFI Shell)
........................................................................................... 127
22.9.2 For Acceleration Components Pre-configured Via RSTCLI 32/64 (OS)128
22.10 OEM System Manufacturing and Intel® SRT ............................................ 128
22.10.1 Imaging an OS onto a Pre-Configured Acceleration-enabled HDD . 128
22.10.2 Enabling Acceleration post end user OOBE ................................ 129
22.11 OEM System Manufacturing and Cache Pre-load for Intel® SRT ................ 131
22.11.1 Requirements ........................................................................ 131
22.11.2 Process ................................................................................. 132
22.11.3 Replicating the Accelerated HDD and SSD for Mass Production ..... 135
23 Intel® Smart Response Technology Hybrid Drive Accelerator ............................... 136
23.1 Overview ............................................................................................ 136
23.1.1 Driver/OROM updates to support Hybrid Hints Accelerated systems:136
23.1.2 Requirements and Limitations .................................................. 137
23.2 Dynamic Cache Sharing Between SRT and Rapid Start ............................. 138
23.3 Build a New System with Hybrid Drive Acceleration Enabled ..................... 138
23.3.1 Prepare New Computer ........................................................... 138
23.4 OEM System Manufacturing and Cache Pre-load for Intel® SRT ................. 139
23.4.1 Requirements ........................................................................ 140
23.4.2 Process ................................................................................. 140
23.4.3 Setup System for Cache Loading .............................................. 140
24 ATA Power-Up in Standby (PUIS) Supporting Intel® Smart Connect Technology‡ .. 144
24.1.1 Overview .............................................................................. 145
24.1.2 Theory of Operation ............................................................... 145
25 Intel® Rapid Storage Technology UI.................................................................. 148
25.1 Introduction........................................................................................ 148
25.1.1 Getting Started ...................................................................... 148
25.1.2 Understanding the Application ................................................. 151
25.1.3 Notification Area .................................................................... 152
25.2 Storage System Status ........................................................................ 154
25.2.1 Understanding the Status ........................................................ 154
25.2.2 Storage System View.............................................................. 155
25.3 Creating a Volume ............................................................................... 157
25.3.1 Volume Requirements ............................................................. 157
25.3.2 Creation Process .................................................................... 158
25.3.3 Creating Additional Volumes .................................................... 162
25.4 Managing the Storage System .............................................................. 164
25.4.1 Managing Arrays .................................................................... 164
25.4.2 Managing Volumes ................................................................. 167
25.4.3 Managing Disks ...................................................................... 180
25.4.4 Managing Ports ...................................................................... 186
25.4.5 Managing ATAPI Devices ......................................................... 186
25.4.6 Managing Solid-State Hybrid Drives (SSHD) .............................. 186
25.5 Accelerating the Storage System ........................................................... 187
25.5.1 Cache Device Properties .......................................................... 187
25.5.2 Enabling Acceleration.............................................................. 189
25.5.3 Disabling Acceleration ............................................................. 190
25.5.4 Changing Acceleration Mode .................................................... 191
25.5.5 Accelerating a Disk or Volume ................................................. 192
25.5.6 Resetting a Cache Device to Available ....................................... 193
25.5.7 Disassociating the Cache Memory ............................................ 193
25.6 Preferences ........................................................................................ 194
30 Glossary........................................................................................................ 204
31 Troubleshooting ............................................................................................. 211
31.1 Failed Volumes .................................................................................... 211
31.2 Degraded Volumes .............................................................................. 212
31.3 Other Volume States............................................................................ 215
31.4 Disk Events ........................................................................................ 218
31.5 Caching Issues .................................................................................... 220
31.6 Software Errors ................................................................................... 223
32 Appendix A: RST SATA Port Bitmap Implementation ........................................... 225
32.1 Legacy OROM ..................................................................................... 225
32.2 UEFI Driver ......................................................................................... 225
33 Appendix B: Common Storage Management Interface Support (CSMI) .................. 227
34 Appendix C: Drive and Volume Encryption Support ............................................. 228
34.1 ATA Security Commands and HDD Password Support............................... 228
34.2 Self-Encrypting Drives (SED) ................................................................ 228
34.3 Solid State Hybrid Drives (SSHD’s) with Encryption ................................. 228
34.4 RAID Volume and Drive Partition Encryption ........................................... 228
35 Appendix D: Remapping Guidelines for RST PCIe Storage Devices ........................ 229
35.1 Remapping Reference Documentation .................................................... 229
35.2 Remapping HW and BIOS Requirements ................................................. 230
35.3 Remapping Configuration Rules ............................................................. 230
35.4 PCH-H Remapping Configurations .......................................................... 232
35.4.1 Configurations With 1 x2 PCIe Port Remapped ........................... 233
35.4.2 Configurations With 2 x2 PCIe Ports Remapped.......................... 234
35.4.3 Configurations With 3 x2 PCIe Ports Remapped.......................... 235
35.4.4 Configurations With 1, 2, and 3 x4 PCIe Ports Remapped ............ 236
35.4.5 Configurations With (1 x2 + 1 x4) PCIe Ports Remapped ............. 238
35.4.6 Configurations With (2 x2 + 1 x4) PCIe Ports Remapped............. 239
35.4.7 Configurations With (1 x2 + 2 x4) PCIE Ports Remapped ............ 241
35.5 PCH-LP Premium-U Remapping Configurations ........................................ 241
35.5.1 Configurations With 1 x2 PCIe Port Remapped ........................... 241
35.5.2 Configurations With 2 x2 PCIe Ports Remapped.......................... 243
35.5.3 Configurations With 1 and 2 x4 PCIe Ports Remapped ................. 244
35.5.4 Configurations With (1 x2 + 1 x4) PCIe Ports Remapped ............. 245
35.6 PCH-LP Premium-Y Remapping Configurations ........................................ 245
35.6.1 Configurations With 1 x2 PCIe Port Remapped ........................... 246
35.6.2 Configurations With 2 x2 PCIe Ports Remapped.......................... 247
35.6.3 Configurations With x4 PCIe Ports Remapped ............................ 247
35.7 Examples of Configurations to Meet Design Specifications ........................ 248
35.7.1 Example #1: SPT-H HM170 SKU With 1x2 + 1x4 + 1 SATA ......... 248
35.7.2 Example #2........................................................................... 249
Revision History
1 About This Document
1.1 Purpose and Scope of this Document
This document assists customers in evaluating, testing, configuring, and enabling RAID and AHCI
functionality on platforms that use the Intel® Rapid Storage Technology software for the chipset
components listed in the product's Readme.txt file.
This document also describes installation procedures, caching acceleration techniques, other RST
features, RAID volume management (creating, deleting, and modifying volumes), common usage
models, and any special notes necessary to enable customers to develop their RAID-compatible
products.
2 Intel® Rapid Storage
Technology
Intel® Rapid Storage Technology (Intel® RST) provides added performance and reliability for
systems equipped with serial ATA (SATA) hard drives, solid state drives (SSDs), and/or Peripheral
Component Interconnect Express solid state drives (PCIe SSDs) to enable an optimal PC storage
solution. It offers value-add features such as RAID and advanced Serial ATA* capabilities (for
detailed OS support, review the Release Notes for each software release). The driver also offers
non-volatile (NV) caching for performance and application acceleration, with a device of MEMORY
GROUP 3 or faster used as the cache memory device.
The RAID solution supports RAID level 0 (striping), RAID level 1 (mirroring), RAID level 5 (striping
with parity) and RAID level 10 (striping and mirroring). Specific platform support is dependent upon
the available SATA ports.
A configuration supporting two RAID levels can also be achieved by having two volumes in a single
RAID array that use Intel® RST. These are called matrix arrays. Typical for desktops, workstations,
and entry-level servers, the Intel® RST RAID solution addresses the demand for high-performance
or data-redundant platforms. OEMs are also finding it beneficial to implement this RAID capability
in mobile platforms.
Reference documents:
For detailed use case information on these features, also refer to the documents below on CDI.
2.1 Overview
DDDD (Release Build Number): This section of the version string represents the build number of release AA.B.CC.
Note: for production releases, the build number always begins with the number '1' (e.g., AA.B.CC.1001).
RAID 5 (striping with parity): RAID level 5 combines three to six drives so that all data is divided
into manageable blocks called strips. RAID 5 also stores parity, a mathematical method for
recreating lost data on a single drive, which increases fault tolerance. The data and parity are
striped across the array members. The parity is striped in a rotating sequence across the members.
Because of the parity striping, it is possible to rebuild the data after replacing a failed hard drive
with a new drive. However, the extra work of calculating the missing data will degrade the write
performance to the volumes. RAID 5 performs better for smaller I/O functions than larger
sequential files.
RAID 5, when enabled with volume write-back cache with Coalescer, will enhance write
performance. This combines multiple write requests from the host into larger, more efficient
requests, resulting in full stripe writes from the cache to the RAID 5 volume.
A RAID 5 volume provides the capacity of (N-1) * the smallest size of the hard drives, where
N >= 3 and N <= 4. For example, a 3-drive RAID 5 will provide capacity twice the size of the
smallest drive. The remaining space will be used for parity information.

RAID 10 (striping and mirroring): RAID level 10 uses four hard drives to create a combination of
RAID levels 0 and 1. The data is striped across a two-disk array forming a RAID 0 component. Each
of the drives in the RAID 0 array is mirrored to form a RAID 1 component. This provides the
performance benefits of RAID 0 and the redundancy of RAID 1.
The RAID 10 volume appears as a single physical hard drive with a capacity equal to two drives of
the four-drive configuration (the minimum RAID 10 configuration). The space on the remaining two
drives will be used for mirroring.

RAID 0: This provides end-users the performance necessary for any disk-intensive applications;
these include video production and editing, image editing, and gaming applications.
2.1.4 Supported Platforms for This Release
Intel® Rapid Storage Technology provides enhanced management capabilities and detailed status
information for Serial ATA AHCI and RAID subsystems. Basic support for this release is on the
following hardware components.
Note: Some RST features are limited to specific hardware and/or OS versions; these limitations are
documented in this guide under each feature's requirements.
Legacy Platforms/chipsets

Skylake Platform / PCH: Sunrise Point (SPT): SPT-H and SPT-LP

Chipset: Intel® 100 Series/C230 Chipset Family SATA AHCI/RAID Controller
Desktop SKUs (SPT-H): Z170 (R), H170 (R), Q170 (R), H110 (*), B150 (*), Q150 (*)
Mobile SKUs (SPT-H): HM170 (R), QM170 (R), CM236 (R) (mobile workstation)
Workstation SKUs (SPT-H, Greenlow): C236 (R)

Chipset: Intel® 6th Generation Core Processor Family Platform I/O SATA AHCI/RAID Controller
Mobile SKUs (SPT-LP): Base-U (*), Premium-U (R), Premium-Y (R)

(*) Denotes the platform supports AHCI mode only
(R) Denotes a PCIe remappable SKU

Platform Support by OS Version (Kaby Lake / Skylake)
3 Intel® Rapid Storage
Technology Suite
The Intel® Rapid Storage Technology Suite contains these core components:
1. Intel® Rapid Storage Technology (Intel® RST) OS runtime software package:
a. AHCI/RAID driver (and filter driver for backwards compatibility)
b. Graphical User Interface (Intel® RST UI), optional
c. Event Monitor service (IAStorDataMgrSvc), optional; interfaces with:
i. Intel® RST UI (graphical user interface)
ii. Event Notification Tray Icon (IAStorIcon)
iii. Windows system NT Event log
2. Intel® Rapid Storage Technology BIOS components:
a. Intel® Rapid Storage Technology RAID Option ROM (legacy support)
b. UEFI driver (with HII-compliant UI)
The following components are available for OEM manufacturing use only; NOT to be
distributed to end-users!
3. Intel® Rapid Storage Technology RAID utilities
a. Intel® RSTCLI 32/64-bit Windows/WinPE command line interface utilities (replaces RAIDCFG32/64
utilities)
b. DEVSLP Tool - command line utility for configuring DEVSLP register values
c. RcfgSata
i. DOS-based command line interface utility (legacy support)
ii. UEFI Shell-based command line interface utility
d. RcmpSata compliance utility
i. DOS-based Intel® RST RAID compliance check utility (legacy support)
ii. UEFI Shell-based Intel® RST RAID compliance check utility
3.2 Intel® Rapid Storage Technology Option ROM
The Intel® Rapid Storage Technology Option ROM is a standard Plug and Play option ROM that adds
the Int13h services and provides a pre-OS user interface for the Intel® Rapid Storage Technology
solution. The Int13h services allow a RAID volume to be used as a boot hard drive. They also detect
any faults in the RAID volume being managed by the RAID controller. The Int13h services are active
until the RAID driver takes over after the operating system is loaded.
The Intel Rapid Storage Technology option ROM expects a BIOS Boot Specification (BBS) compliant
BIOS. It exports multiple Plug and Play headers for each non-RAID hard drive or RAID volume, which
allows the boot order to be selected from the system BIOS's setup utility. When the system BIOS
detects the RAID controller, the RAID option ROM code should be executed.
The Intel Rapid Storage Technology option ROM is delivered as a single uncompressed binary image
compiled for the 16-bit real mode environment. To conserve system flash space, the integrator may
compress the image for inclusion into the BIOS. System memory is taken from conventional DOS
memory and is not returned.
The RAID Configuration utilities use command line parameters. Below is a snapshot of the help text
displayed when using the -? flag. It shows the usage for all supported command line flags necessary
for creating, deleting, and managing RAID volumes.
The command syntax for the Intel RAID Configuration utility is shown below:
======================================================================
/Y Suppress any user input. Used with options /C, /D, /SP & /X.
COMMANDS - Only one command at a time unless otherwise specified.
/C     Create a volume with the specified name. /S, /DS, /SS, & /L can be specified along with /C.
/L     Specify RAID Level (0, 1, 10, or 5). Only valid with /C.
/DS    Selects the disks to be used in the creation of the volume. List should be delimited by spaces.
/X     Remove all metadata from all disks. Use with /DS to delete metadata from selected disks.
/U     Do not delete the partition table. Only valid with /C on RAID 1 volumes.
/RRT   Create a recovery volume. Only valid with /C. Requires /M.
/Sync  Set sync type for 'Recovery' volume. Only valid with /RRT.
/M     Specify the port number of the Master disk for the 'Recovery' volume. Only valid with /RRT.
/ER    Enable only the recovery disk for the recovery volume; /EM and /ER actions will result in a change from Continuous Update mode to On-Request.
/SD    Synchronizes the data from the cache device to the Accelerated Disk/Volume.
======================================================================
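For illustration only, a hedged example that combines the documented flags to create a 3-disk RAID 5 volume without prompting for input; the volume name, RAID level, and disk/port numbers are placeholders, and the exact argument forms should be confirmed against the utility's -? output:

RCfgSata /C Volume0 /L 5 /DS 1 2 3 /Y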
3.4 RSTCLI (32/64 bit) Windows Utilities
NOTE: RSTCLI commands are case sensitive.
The Intel RSTCLI 32/64 utility is an executable that provides OEMs with the ability to create, delete,
and manage RAID volumes on a system within a Windows environment, using command line
parameters that make it possible to perform these functions from scripts or shell commands.
For use in all supported Windows operating systems, including WinPE 32/64.
The command syntax for the Intel RSTCLI utilities is shown below:
Create Options:
Flag Name
-C --create
-E --create-from-existing
-l --level
-n --name
-s --stripe-size
-z --size
--rrt
--rrtMaster
--rrtUpdate
Create Usage:
Creates a new volume and array or creates a new volume on an existing array.
--create --level x [--size y] [--stripe-size z] --name string [--create-from-existing diskId] diskId
{[diskId]}
Create Examples:
-C -l 1 -n Volume 0-1-0-0 0-2-0-0 (format of the disk ID is “0-SATA_Port-0-0” where the
second digit from the left represents the SATA port on the platform where the disk is located; thus
0-1-0-0 represents SATA port # 1)
--create -l 0 -z 5 --name RAID0Volume 0-3-0-0 0-4-0-0 0-5-0-0
-C -l 1 -E 0-1-0-0 -n VolumeWithData 0-2-0-0
-C --rrt -n RRTVolume 0-1-0-0 0-2-0-0 --rrtMaster 0-1-0-0
-C --rrt -n RRTVolume 0-1-0-0 0-2-0-0 --rrtUpdate Continuous
--create --help
Information Options:
Flag Name
-I --information
-a --array
-c --controller
-d --disk
-v --volume
Information Usage:
Displays disk, volume, array, and controller information.
--information --controller|--array|--disk|--volume {[device]}
Information Examples:
-I -v Volume
-I -d 0-5-0-0
--information --array Array_0000
--information --help
Manage Options:
Flag Name
-M --manage
-x --cancel-verify
-D --delete
-p --verify-repair
-f --normal-volume
-F --normal
-i --initialize
-L --locate
-T --delete-metadata
-Z --delete-all-metadata**
-N --not-spare
-P --volume-cache-policy
-R --rebuild
-S --spare
-t --target
-U --verify
-w --write-cache
**WARNING: Using this command deletes the metadata on ALL disks in the system.
There is no option to select individual disks with this command, and there is
no warning before the command initiates and completes. To delete
metadata on individual disks, use the -D (--delete) command with either
"volume_name" or "diskID".
Manage Usage:
Manages arrays, volumes and disks present in the storage system.
--manage --cancel-verify volumeName
--manage --delete volumeName
--manage --verify-repair volumeName
--manage --normal-volume volumeName
--manage --normal diskId
--manage --initialize volumeName
--manage --locate diskId
--manage --delete-metadata diskId (deletes the metadata only on disks that are in a non-Normal
state e.g. offline or unknown)
--manage --delete-all-metadata
--manage --not-spare diskId
--manage --volume-cache-policy off|wb --volume volumeName
--manage --rebuild volumeName --target diskId
--manage --spare diskId
--manage --verify volumeName
--manage --write-cache true|false --array arrayName
Manage Examples:
--manage --spare 0-3-0-0
-M -D VolumeDelete
-M --normal 0-2-0-0
--manage -w true --array Array_0000
-M -U VolumeVerify
-M -Z
--manage --help
Modify Options:
Flag Name
-m --modify
-A --add
-X --expand
-l --level
-n --name
-s --stripe-size
-v --volume
Modify Usage:
Modifies an existing volume or array.
--modify --volume VolumeName --add diskId {[diskId]}
--modify --volume VolumeName --expand
--modify --volume VolumeName --level L [--add diskId {[diskId]}] [--stripe-size s] [--name N]
--modify --volume VolumeName --name n
Modify Examples:
-m -v Volume_0000 -A 0-3-0-0 0-4-0-0
-m --volume ModifyVolume --level 5
--modify -v Volume -n RenameVolume
--modify --help
Accelerate Options:
Flag Name
--accelerate
--createCache
--setAccelConfig
--disassociate
--reset-to-available
--accel-info
--loadCache
--stats
Accelerate Usage:
Accelerates a given disk or volume with the specified SSD disk.
--accelerate --createCache|--setAccelConfig|--disassociate|--reset-to-available|--accel-info
--accelerate --createCache --SSD <diskId> --cache-size X [where 16 ≤ X ≤ 64]
--accelerate --setAccelConfig --disk-to-accel <diskId> | --volume-to-accel <volume name> --mode [enhanced | maximized | off]
--accelerate --disassociate --cache-volume <volume name>
--accelerate --reset-to-available --cache-volume <volume name>
--accelerate --accel-info
--accelerate --loadCache <files or directory> --recurse
--accelerate --stats
Accelerate Examples:
--accelerate --createCache --SSD 0-3-0-0 --cache-size X [where 16 ≤ X ≤ 64]
--accelerate --setAccelConfig --disk-to-accel 0-5-0-0 --mode enhanced
--accelerate --setAccelConfig --volume-to-accel MyVolume --mode maximized
--accelerate --disassociate --cache-volume Cache_Volume
--accelerate --reset-to-available --cache-volume Cache_Volume
--accelerate --accel-info
--accelerate --loadCache C:\Windows\*.* --recurse
--accelerate --stats
--accelerate --help
OPTIONS:
-a, --array
Lists information about the arrays in the storage system.
--accel-info
Lists information about Accelerate settings.
--accelerate
Accelerates a given disk or volume with the specified SSD disk.
-C, --create
Creates a new volume and array or creates a new volume on an existing array.
-c, --controller
Lists information about the controllers in the storage system.
--cache-size <size in GB>
Sets a size in gigabytes for the cache memory. This is an optional switch. If the size is not
specified, the complete size of the SSD will be used for acceleration.
--createCache
Creates the cache.
-d, --disk
Lists information about the disks in the storage system.
--disassociate
Disassociates the Cache volume from acceleration
--disk-to-accel <<host>-<bus>-<target>-<lun>>
Specifies a disk if accelerating a pass-through disk.
-h, --help
Displays help documentation for command line utility modes, options, usage, examples, and
return codes. When used with a mode switch (create, information, manage, modify, or accelerate),
instructions for that mode display. For example, --create --help displays Create option help.
-I, --information
Displays disk, volume, array, and controller information.
-M, --manage
Manages arrays, volumes and disks present in the storage system.
-m, --modify
Modifies an existing volume or array.
-q, --quiet
Suppresses output for create, modify, and manage modes. Not valid on info mode.
-r, --rescan
Forces the system to rescan for hardware changes.
--reset-to-available
Resets the cache volume to available.
--rrt
Creates a recovery volume using Intel(R) Rapid Recovery Technology (RRT).
--rrtMaster <<host>-<bus>-<target>-<lun>>
Optionally creates a recovery volume that allows you to select a specific disk as the master disk.
Default is the first disk in the disk list.
--SSD <<host>-<bus>-<target>-<lun>>
Specifies the SSD disk that will be used as the cache. If another SSD is being used as the cache, then
that cache volume needs to be deleted to use a new SSD disk (see the example sequence after this
options list).
--setAccelConfig
Sets the config for accelerating a volume or disk.
--stats
Indicates percentage of cache usage.
-V, --version
Displays version information.
-v, --volume
Lists information about the volumes on the system. Stipulates the volume to act on when used
in Modify or Manage mode.
-X, --expand
Expands a volume to consume all available space in an array.
-Z --delete-all-metadata
Deletes the metadata on all disks in the system without any warning prior to initiating and
completing the action.
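For illustration, a hedged sequence showing how the documented accelerate commands could be combined to move caching to a different SSD. The cache-volume name, disk ID, and cache size are placeholders, and whether --disassociate, --reset-to-available, or both are needed depends on the configuration:

rstcli64 --accelerate --disassociate --cache-volume Cache_Volume
rstcli64 --accelerate --reset-to-available --cache-volume Cache_Volume
rstcli64 --accelerate --createCache --SSD 0-4-0-0 --cache-size 32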
RETURN CODES:
0, Success
Request completed successfully.
1, Request Failed
Request is formatted correctly but failed to execute.
2, Invalid Request
Unrecognized command, request was formatted incorrectly.
3, Invalid Device
Request not formatted correctly, device passed in does not exist.
4, Request Unsupported
Request is not supported with the current configuration.
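In an automated factory script, these return codes can be checked through the process exit code. A minimal sketch follows; the volume parameters are placeholders, and it assumes the utility reports its return code as the process exit code:

rstcli64.exe --create --level 1 --name Volume 0-1-0-0 0-2-0-0
if %ERRORLEVEL% NEQ 0 echo RSTCLI request failed with return code %ERRORLEVEL%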
Specification and location:
UEFI Specification version 2.3.1 (http://www.uefi.org/specsandtesttools)
UEFI Shell Specification version 2.0 (http://www.uefi.org/specsandtesttools)
RaidDriver.efi (filename):
UEFI driver that requires integration into the UEFI System BIOS by the OEM’s BIOS
vendor. This file can be placed into the OEMs’ UEFI BIOS source build where their tools
can integrate it.
RaidDriver.ffs (filename):
The Intel® RST UEFI driver (RaidDriver.efi) is wrapped in the Firmware File System
(.ffs)
Useful for an external tool to integrate the binary into a compiled BIOS image. Firmware
File System Details:
o Firmware File Type - EFI_FV_FILETYPE_DRIVER (0x07)
o File GUID - 90C8D394-4E04-439C-BA55-2D8CFCB414ED
o 2 Firmware File Sections
EFI_SECTION_PE32 (0x10)
EFI_SECTION_USER_INTERFACE (0x15)
Name “SataDriver”
RaidDriver.bin (filename):
This is an optional format provided to OEMs that may want the driver delivered as a PCI
3.0 UEFI OROM.
A disadvantage of the UEFI OROM format is that it will likely require the BIOS to have a
Compatibility Support Module (CSM) in order to function.
FORMSET_GUID { 0xd37bcd57, 0xaba1, 0x44e6, { 0xa9, 0x2c, 0x89, 0x8b, 0x15, 0x8f,
0x2f, 0x59 } }
{D37BCD57-ABA1-44e6-A92C-898B158F2F59}
RcfgSata.efi (filename):
A UEFI application that requires booting to the UEFI Shell environment to run
Provides the same functionality and commands as the legacy DOS version (RcfgSata.exe)
in previous releases of the Intel® RST product.
Requires the exact same version of the RST UEFI driver to be loaded on the system in order
to function.
3.5.2.4 Command line RAID Compliance Checking Utility
RcmpSata.efi (filename):
A UEFI application that requires booting to the UEFI Shell environment to run.
Checks whether the UEFI protocols required by the RST UEFI driver are present.
Also provides a list of the protocols published by the RST UEFI driver and the
capabilities/features of the RST UEFI driver.
EFI_BOOT_SERVICES:
LocateHandleBuffer
OpenProtocol
CloseProtocol
WaitForEvent
HandleProtocol
FreePool
AllocatePages
AllocatePool
InstallMultipleProtocolInterfaces
UninstallMultipleProtocolInterfaces
Stall
EFI_RUNTIME_SERVICES:
SetVariable
GetVariable
GetTime
Other Protocols:
o Non-RAID disks:
All ATA commands are supported
o RAID disks (only the following commands are supported):
EXECUTE DEVICE DIAGNOSTIC (0x90)
IDENTIFY DEVICE (0xEC)
IDLE (0xE3)
IDLE IMMEDIATE (0xE1)
SECURITY DISABLE PASSWORD (0xF6)
SECURITY ERASE PREPARE (0xF3)
SECURITY ERASE UNIT (0xF4)
SECURITY FREEZE (0xF5)
SECURITY SET PASSWORD (0xF1)
SECURITY UNLOCK (0xF2)
SET FEATURES (0xEF)
SMART READ DATA (0xB0 / 0xD0)
SMART READ LOG (0xB0 / 0xD5)
SMART RETURN STATUS (0xB0 / 0xDA)
STANDBY (0xE2)
STANDBY IMMEDIATE (0xE0)
o All disk types:
EFI_ATA_PASS_THRU_PROTOCOL_ATA_NON_DATA
EFI_ATA_PASS_THRU_PROTOCOL_PIO_DATA_IN
EFI_ATA_PASS_THRU_PROTOCOL_PIO_DATA_OUT
EFI_ATA_PASS_THRU_PROTOCOL_DEVICE_DIAGNOSTIC
EFI_ATA_PASS_THRU_PROTOCOL_UDMA_DATA_IN
EFI_ATA_PASS_THRU_PROTOCOL_UDMA_DATA_OUT
EFI_ATA_PASS_THRU_PROTOCOL_RETURN_RESPONSE
o EFI_HII Protocols** (see section 3.5.3.2)  (** required for the Intel® RST UEFI UI)
3.5.4.2 Step2: Download and Integrate the Intel® RST UEFI Package
1. Download the latest kit from the Intel VIP (Validation Internet Portal) website. From the kit
select the efi_sata.zip file which will contain the UEFI driver binary files (RaidDriver.efi,
RaidDriver.ffs, and RaidDriver.bin)
2. Select and extract the binary file based on the planned integration method:
o RaidDriver.efi: Use this binary if planning to integrate at the time of the BIOS image build
o RaidDriver.ffs: Use this binary if planning to integrate into an already built BIOS image
o RaidDriver.bin: Use this binary if planning to integrate as legacy type OROM (CSM may also
be required)
3. Use the proper integration tools based on the binary file selected above
2. Place the file on a USB thumb drive and insert the drive into the platform
4. Run the RCmpSata.efi application (it’s a command line utility): at the prompt type the
command:
3.5.5.1 Bootable Image Support Test\Block IO Protocol Test
Reason: The Intel® RST UEFI driver does not support BuildDevicePath – EFI_UNSUPPORTED
is returned
4 New in Release Version 15.x
4.1 15.2 Release
4.2.2 Updated Features/Specifications in This Release
4.2.2.1 PCIe
Modern Standby support is updated to include remapped PCIe NVMe SSDs. PCIe
AHCI devices are NOT SUPPORTED.
4.2.2.2 RTD3
RTD3 support is updated to include remapped single pass-through PCIe NVMe
devices. PCIe AHCI devices are NOT SUPPORTED.
4.2.2.3 RAID/SRT
Feature enhancements/updates for the RAID and SRT features when creating a RAID
volume from an existing partitioned disk or accelerating a partitioned disk (usually the
system disk).
5 14.x Features
OS Steps:

Windows 8.1 x64 / Windows 8 x64 (Legacy / UEFI):
1. Download the Windows 10 x64 iso from the official Microsoft Insider website: https://insider.windows.com/
2. Upgrade the driver to version 13.2.4.1000
3. Open the iso and launch setup.exe to update directly from the current system (Windows 8.1 x64 / Windows 8 x64)
4. Follow the Windows 10 setup steps

Windows 7 x32 / x64 (Legacy / UEFI):
1. Download the Windows 10 x32 / x64 iso from the official Microsoft Insider website: https://insider.windows.com/
2. Upgrade the RST driver to version 13.2.4.1000 or higher
3. Open the iso and launch setup.exe to update directly from the current system (Windows 7 x86 / Windows 7 x64)
4. Follow the Windows 10 setup steps
Beginning with the Intel® RST 14.0 Release version, NVMe storage devices over the PCIe bus are
supported on the following Skylake PCH SKUs:
Note: Support for NVMe storage devices over PCIe is not backwards compatible (not
supported) on pre-Skylake PCH platforms.
See Appendix D for available remapping configurations for the Skylake (SPT) PCH to meet
OEMs' specific platform design requirements.
Skylake PCH-H HSIO Details (HSIO Lanes 15-26, PCIe Lanes 9-20)

Table 1. The twelve remappable HSIO lanes are grouped into PCIe Controller #3 (Cycle Router #1),
PCIe Controller #4 (Cycle Router #2), and PCIe Controller #5 (Cycle Router #3), which form Intel RST
PCIe Storage Device #1, #2, and #3 respectively. Each storage device can be remapped as one x4 port
or two x2 ports, and several of the lanes are shared with SATA ports 0-5 (including the SATA 0 and
SATA 1 'Alternate' mappings).

Desktop/workstation SKUs H170**, Q170, Z170, and C236 support all three Intel RST PCIe Storage
Devices. Mobile SKUs HM170 and QM170 expose two PCIe-only lanes and six PCIe/SATA-shared lanes,
with the remaining four lanes not available (N/A).

** H170 supports only a maximum of 2 remapped PCIe devices (controllers) of the 3 available.
Skylake PCH-LP HSIO Details (HSIO Lanes 9-16, PCIe Lanes #5-#12)

HSIO Lane:   9      10     11          12          13     14     15          16
Prem-U:      PCIe   PCIe   PCIe/SATA   PCIe/SATA   PCIe   PCIe   PCIe/SATA   PCIe/SATA
Prem-Y:      PCIe   PCIe   PCIe/SATA   PCIe/SATA   PCIe   PCIe   N/A         N/A

HSIO lanes 9-12 (PCIe Controller #2) form Intel RST PCIe Storage Device #1, and HSIO lanes 13-16
(PCIe Controller #3) form Intel RST PCIe Storage Device #2. Each device can be remapped as one x4
port or two x2 ports, and several lanes are shared with SATA ports 0-2 (including the SATA 1
'Alternate' mapping). Premium-U supports both storage devices; on Premium-Y, HSIO lanes 15 and
16 are not available.
Registry Key:
'HKLM\SYSTEM\CurrentControlSet\Services\iaStoreA\Parameters\Device\IsSystemOnBattery'
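For reference, a value of this form can be set from an elevated command prompt. The key path is reproduced exactly as printed above, while the REG_DWORD type and the data value 1 are illustrative assumptions only:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\iaStoreA\Parameters\Device" /v IsSystemOnBattery /t REG_DWORD /d 1 /f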
5.3.1.2 EFI_NVM_EXPRESS_PASS_THRU_PROTOCOL Installation
Location
The RST PRE-OS UEFI driver will install the
EFI_NVM_EXPRESS_PASS_THRU_PROTOCOL on the SATA controller's handle.
Identify
Get Features
Set Features
Asynch Event Request
Firmware Image Download
Abort
The RST driver provides a new private API that allows user-space applications to
send and execute NVMe commands on remapped PCIe NVMe devices. This API is
based on a new IOCTL definition that implements an NVMe pass-through channel.
Unsupported
Non-Remapped: PCIe NVMe devices not configured using the Intel® 100
Series or later chipset's remapping technology are not supported by this
API.
Supported
Supported
Windows* 10 /64bit (TH, TH-2)
Windows* 8.1 /64bit
Windows* 8 /64bit
Windows* 7 /64bit
The Intel® RST NVMe pass-through API allows only the commands listed below to be sent.
All others are rejected, and the request is returned with status
SRB_STATUS_INVALID_REQUEST.
NVM Commands:
o Vendor Specific
o Flush
o Read
Admin Commands:
o Get Log Page
o Identify
o Get Features
o Set Features:
Power Management
Temperature Threshold
APST
o Vendor Specific commands
5.5 14.5 Release
Beginning with the release of Microsoft Windows 10, in order to support storage
devices with rotating media on Connected Standby systems, the period of
idleness (Idle Timeout) required before power is removed from the device changes
depending upon varying factors. This 'Adaptive' mechanism uses new algorithms to
balance power savings and device wear.
Once RST enables the Adaptive Idle Timeout feature, it does nothing else with respect
to the feature; Windows Storport handles the adaptive timeout mechanism completely.
Enable Adaptive D3 Idle Timeout for Rotating Media Storage Devices
The RST DRIVER enables the adaptive D3 idle timeout feature for SATA rotating
media.
Hard Disk Drives (HDD)
Hybrid Hard Drives (SSHD)
RAID Arrays
o Enable:
RAID Arrays with only a single RAID volume
RAID Array must be homogeneous (e.g. all array
members must be HDDs)
o Disable:
MATRIX RAID volumes.
MIXED RAID volumes.
Disable Adaptive D3 Idle Timeout for:
The RST DRIVER always disables the adaptive D3 idle timeout for:
Non-Rotating Media Storage Devices
o SATA SSDs
o PCIe Remapped SSDs
ATAPI Devices
o Tape Devices
o ODDs
o ZPODDs
HW: All its member disks must be on the PCH AHCI controller OR all its member disks
must be on the REMAPPED PCIe controllers
5.6.6 Non-PCH Remapped AHCI Controller Devices
The RST SOLUTION only allows non-PCH REMAPPED AHCI controller target devices on
port 0 of the PCIe add-in card and will ignore any devices present on any other ports
on the add-in card.
6 Intel Rapid Storage
Technology for PCIe NVMe
Storage Devices
Operating System: All x64-bit supported operating systems for this release
6.3 Feature Limitations
Intel® Rapid Storage Technology for PCIe NVMe Storage Devices has the following feature
limitations:
No support for:
o Legacy AHCI DEVSLP
o RTD3
o Hot Plug
o InstantGo*
Supports: a maximum of 3 ports can be remapped, using x2 or x4 lanes
If used in a RAID volume, all member devices must be on the same bus type
If any of the above conditions are not met, the PCIe NVMe SSD will not be recognized by the RST
driver.
Use Cases:
PCIe NVMe device as a pass-through device: Can be used as a pass-through device. It can be a
boot device or a data device.
Configurations: Up to 3 pass-through disks (PCH SKU dependent)
Configurations:
3-disk RAID volume (RAID 0 or 5)
2-disk RAID volume (RAID 0 or 1) + Spare
2-disk RAID volume (RAID 0 or 1) + Single Disk
*RcmpSata utility is also available in earlier releases for Legacy OROM and UEFI compliance testing in the Pre-
OS environment (see section 3.5.2.4).
With the RcmpSata.efi utility downloaded to a FAT32-formatted USB drive attached to the platform,
the following syntax can be used in the UEFI shell to write compliance data to a text file for
viewing in a text editor (where '#' is the file system number of the USB drive shown when booting to
the UEFI shell):
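The exact command is not reproduced in this excerpt; one plausible invocation, assuming the utility writes its compliance report to standard output, uses standard UEFI Shell output redirection:

Fs#:> RcmpSata.efi > rcmpsata.txt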
You may scroll through the text file in the UEFI shell by typing the following command:
Fs#:>edit rcmpsata.txt
The final test results are displayed at the end. Test Section 16 confirms whether "remap" for PCIe is
enabled or disabled, for debugging issues in the pre-OS environment.
7 Intel Rapid Storage
Technology for PCIe AHCI
Storage Devices
Beginning with the Intel® RST 13.0 Release version, PCIe storage devices are supported on the
following SKUs:
Operating System: All x64-bit supported operating systems for this release
7.3 Warnings
RTD3
Hot Plug
InstantGo*
If any of the above conditions are not met, the PCIe SSD will not be recognized by the RST driver.
Reference documents: For detailed use case information refer to RST13.0 Use case document ID
539242 published on CDI
Use Cases:
*Refer to section on SRT
BIOS Settings using the EDISTO BEACH FAB 4 SKU-2 SDIO WIFI card with PCIe SSD
connected
*RcmpSata utility is also available in earlier releases for Legacy OROM and UEFI compliance testing in the Pre-
OS environment (see section 3.5.2.4).
With the RcmpSata.efi utility downloaded to a FAT32-formatted USB drive attached to the platform,
the following syntax can be used in the UEFI shell to write compliance data to a text file for
viewing in a text editor (where '#' is the file system number of the USB drive shown when booting to
the UEFI shell):
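The exact command is not reproduced in this excerpt; one plausible invocation, assuming the utility writes its compliance report to standard output, uses standard UEFI Shell output redirection:

Fs#:> RcmpSata.efi > rcmpsata.txt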
You may scroll through the text file in the UEFI shell by typing the following command:
Fs#:>edit rcmpsata.txt
The final test results are displayed at the end. Test Section 16 confirms whether "remap" for PCIe is
enabled or disabled, for debugging issues in the pre-OS environment.
8 Using Dynamic Storage
Accelerator (DSA)
Note: Beginning with the Intel® RST 14.0 release, this feature is not enabled on
Broadwell and all newer platforms. The 14.0 release will continue to be enabled
on supported pre-Broadwell platforms.
Beginning with the Intel® RST 12.0 Release version, the DSA feature is supported.
System BIOS: The system BIOS must implement the ACPI method GLTS (Get Dynamic
Storage Accelerator Status); consult the Intel BIOS Writers Guide for your platform.
Figure 3.6.1.1 RST UI Performance Page
8.1.3 Configuring DSA
Beginning with RST 13.0 the default configuration setting for DSA is “Automatic”.
Manual: To set the DSA feature into one of the three gear settings:
Disable: To disable the feature and use the Windows power plan setting, click the Disable
link near the top of the page (item labeled '1' in Figure 3.6.1.2).
8.1.4 Configuring DSA using Intel® RSTCLI 32/64 Windows*
Utilities
The RSTCLI 32/64 utilities support enabling or disabling DSA in the manufacturing environment.
These utilities do not support any other DSA configuration options; all other DSA configuration
must be performed from the RST UI Dynamic Storage Accelerator page.
Enable Usage:
Disable Usage:
9 How to Enable the Platform for
Intel® RST Support of BIOS
Fast Boot
Beginning with the Intel® RST 12.0 Release version, Intel® RST implements pre-OS UEFI driver and
Windows runtime driver support for the platform BIOS Fast Boot specification.
System BIOS: The Intel® RST UEFI driver requires the following BIOS components:
2KB of non-volatile UEFI variable storage with access from runtime and as a boot service
Access to the UEFI Hand-off Block Hand-off Info Table (PHIT HOB) to determine the boot mode:
o BOOT_WITH_FULL_CONFIGURATION = Fast Boot disabled
o BOOT_WITH_MINIMAL_CONFIGURATION = Fast Boot enabled
*Fast Boot will not be supported on configurations where the
PCIe Storage device is used as the cache disk.
Windows 7 64
10 Creating a RAID Volume
RAID volumes can be created in three different ways. The method most widely used by end-users is
the Intel Rapid Storage Technology UI in Windows*. The second method is to use the Intel Rapid
Storage Technology option ROM user interface (or the Intel® RST pre-OS UEFI HII UI). The third,
used by OEMs only, is the pre-OS RCfgSata utility or the Windows (including WinPE) RSTCLI 32/64
utilities.
2. Based on the available hardware and your computer's configuration, you may be able to create a
volume by selecting the 'easy to use' options, such as 'Protect data' under 'Status', or by
selecting a volume type under 'Create'. Based on the number and size of the non-RAID disks
available, only the possible volume creation options are shown (e.g., with only two disks you will
see options to create RAID 0, RAID 1, and Recovery (Intel® RRT); with three disks you will see
options for RAID 0, RAID 1, RAID 5, and Recovery).
NOTE: To create a volume, the user must have administrator privileges and the system must be in RAID
Ready mode with two or more hard disks connected to it.
b. Now configure the volume by providing the volume name, selecting the hard disks to
be part of the volume, and choosing the strip size, if applicable.
NOTE: When configuring a volume, the application will only list the disks that meet
the minimum requirements to be part of the volume. Based on the first disk selected or
the order of selection, some disks may become grayed out if one or more
requirements are not met. Changing the order of selection generally helps re-enable
disks that were grayed out. For example, if the first selection is a system disk, only disks
of equal or greater size will be presented for selection and the others remain
grayed out. For more information on disk requirements, refer to ‘Creating a volume’
in the help file in the UI.
c. Once the disks are selected for volume creation, you will be presented with the option
to preserve data on one of the selected disks. Click ‘Next’ and select the
‘Create Volume’ button.
4. After the RAID volume is created, you will be shown a dialog box stating that the RAID volume
was successfully created, and you will need to use Windows Disk Management or other
third-party software to create a partition within the RAID volume and format the partition. Click
OK to close this dialog box.
5. After formatting the partition, you may begin to copy files to, or install software on, the RAID
volume.
2. In the Main Menu, select option #1 ‘Create RAID Volume’. Enter the name you want to use for
the RAID volume, then press Enter.
3. Select the RAID level by using the arrow keys, then press Enter.
4. Press Enter to select the disks to be used by the array that the volume will be created on. Press
Enter when done.
5. Select the strip size (128 KB is the default for RAID 0) by using the arrow keys, then press Enter
when done.
6. Enter the size for the RAID volume in gigabytes. The default value will be the maximum size. If
you specify a smaller size, you will be able to create a second volume in the remaining space
using the same procedure.
1. Upon re-boot, launch the Intel® RST UEFI user interface (HII compliant)
Figure 4
c. It displays physical devices enumerated by the RST UEFI driver that are not part
of the RAID volume
3. Section 3 gives information on how to navigate within the current page of the UEFI UI.
Note: this section is not implemented by the RST UEFI driver and is specific to
the BIOS that was used for documentation purposes.
a. Enter the name you want to use for the RAID volume, then press <Enter>.
Figure 5
b. Scroll down to ‘RAID Level’ and press <Enter> to select a RAID level
Figure 6
c. Scroll down to ‘Select Disks’ and press <space bar> on each disk that you wish to include
in the RAID volume
Figure 7
d. Next scroll down to ‘Strip Size’ and press <Enter> to select a strip size, or continue if
you wish to use the default strip size
Figure 8
e. Next scroll down to ‘Capacity (MB)’ where the maximum capacity is selected and
displayed in MB. To select a smaller capacity for the RAID volume, type in the size
in MB that you wish to use
Figure 9
Note: The “Create Volume” action will only be enabled if the RAID volume options
selected will result in a valid configuration.
4. Beginning with Intel® RST UEFI 13.0, HII changes for PCIe devices include new device labeling
and the ability to manage multiple controllers.
Device ID numbering scheme:
<Device Type><Controller ID>.<Device ID>
Example: “PCIe 1.0”
10.4 Using the RAID Configuration Utilities (DOS, UEFI
Shell, and Windows)
Note: rstcli and rstcli64 can be used interchangeably below.
Run rcfgsata.exe in a DOS environment (or rcfgsata.efi from the UEFI shell), or rstcli.exe (or
rstcli64.exe) in a Windows environment, with the following command line flags to create a RAID
volume.
With PCIe Storage, the command line utilities will require a controller ID to be specified when
creating RAID volumes:
The following command line will instruct the utility to create a RAID 0 volume named “OEMRAID0”
on hard drives attached to the SATA Controller (Controller #0) on Port 0 and 1 with a strip size of
128 KB and a size of 120 GB:
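A sketch of what such a command might look like, built from the /C (create) and /DS (device selection) switches referenced in this document; the formatting of the controller/port arguments and the /L (RAID level), /SS (strip size) and /S (volume size in GB) switch names are illustrative assumptions, not confirmed rcfgsata syntax:
C:\>rcfgsata.exe /C OEMRAID0 /DS 0.0 0.1 /L 0 /SS 128 /S 120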
The following command will create a RAID volume using all of the default values. It will create a
RAID 0 volume with a strip size of 128 KB on the two hard drives in the system. The volume will be
the maximum size allowable.
C:\>rcfgsata.exe /C OEMRAID0 (requires that only two disks are attached to the
system)
The following command line will instruct the utility to create a RAID 0 volume named “PCIeRAID0”
on 1 PCIe AHCI SSD (Controller #1 ) and 1 PCIe NVMe SSD (Controller #2 ) attached to the system
on remapped Port 0 and Port 2 with a strip size of 128 KB and a size of 120 GB:
The following command line will display usage for all supported command line parameters:
C:\>rcfgsata.exe(or rcfgsata.efi) /?
C:\>rstcli.exe --help
Note: Selecting the strip size is only applicable for RAID 0, RAID 5, and RAID 10 levels. Strip size is
not applicable for RAID 1.
11 Deleting a RAID Volume
RAID volumes can be deleted in three different ways. The method most widely used by end-users is
the Windows user interface utility. The second method is to use the Intel Rapid Storage Technology
Option ROM user interface. The third way, used by OEMs only, uses the RAID Configuration utility.
2. Under ‘Status’ or ‘Manage’, click the volume you want to delete. The volume properties
are then displayed on the left.
4. Review the warning message, and click ‘Yes’ to delete the volume.
5. The ‘Status’ page refreshes and displays the resulting available space in the storage system
view. You can now use it to create a new volume.
3. You should be presented with another screen listing the existing RAID volume.
4. Select the RAID volume you wish to delete using the up and down arrow keys.
6. Press Y to confirm.
Note: Option #3 ‘Reset Hard Drives to Non-RAID’ in the option ROM user interface may also be
used to delete a RAID volume. This resets one or more drives to non-RAID status by deleting all
metadata on the hard drives, which has the effect of deleting any RAID volumes present. This function
is provided for resetting the hard drives when there is a mismatch in RAID volume information on
the hard drives. Option #2 ‘Delete RAID Volume’, by contrast, deletes one volume at a time
while retaining the existing RAID array metadata (for instance, Matrix RAID).
1. Upon re-boot, enter the system BIOS and select the Intel® Rapid Storage Technology menu for
the UEFI user interface
2. In the Main Menu, go to the ‘RAID Volumes’ section, highlight the volume to be deleted and
press <Enter>
b. At the dialogue box press <Enter> to confirm the deletion of the volume (Note: All
data on the volume will be lost!)
C:\>rcfgsata.exe /D OEMRAID0
C:\>rstcli.exe --manage --delete OEMRAID0
The following command line will display usage for all supported command line parameters:
C:\>rcfgsata.exe (or rcfgsata.efi) /?
C:\>rstcli.exe --help
12 Common RAID Setup
Procedures
12.1 Build a SATA RAID 0, 1, 5 or 10 System
This is the most common setup. This configuration will have the operating system striped for RAID
0, mirrored for RAID 1, striped with parity for RAID 5, or mirrored and striped across four drives
for RAID 10. All RAID member drives must be from the same BUS PROTOCOL GROUP.
To prepare for this, you must have the Intel RAID driver on a USB key (or floppy). See the procedure
for creating this media further down in this document.
1. Assemble the system using a motherboard that supports Intel Rapid Storage Technology
and attach the drives depending on the RAID level that will be built.
2. Enter System BIOS Setup and ensure that RAID mode is enabled. This setting may be
different for each motherboard manufacturer. Consult the manufacturer’s user manual if
necessary. When done, exit Setup.
4. Within this UI, select option ‘1. Create RAID Volume’. When ‘Create RAID Volume’ menu is
displayed, fill the following items:
a. Name: Enter a volume name, and press Enter to proceed to next menu item,
b. RAID Level: select RAID level (0, 1, 5, 10), and press Enter to proceed to next
menu item;
c. Disks: press Enter on ‘Select Disks’ to select the hard drives to be used for your
configuration.
d. Within the ‘SELECT DISKS’ window, choose the hard drives and press Enter to
return to the ‘MAIN MENU’.
e. Strip Size: Applicable for RAID levels 0, 5, and 10 only. You may choose the
default size or another supported size in the list and press Enter to proceed to
the next item.
f. Capacity: The default size is the maximum allowable size based on the sum of
all the drives in your configuration. You may decrease this volume size to a
lower value. If you specify a smaller capacity, the remaining space
can be used for creating another RAID volume. Press Enter to proceed to the
next item.
5. After this is done, exit the Intel Rapid Storage Technology option ROM user interface by
pressing the Esc key or Option #4.
7. Installation procedures are as follows: Use the ‘load driver’ mechanism when prompted. Insert
a USB key with the Intel® RST driver and browse to the directory on the USB key where the
driver that you wish to install is located. Select the driver INF file. If the correct driver is selected,
the proper Intel controller for your system will be shown. Continue the driver install.
8. Finish the Windows installation and install all other necessary drivers.
9. Install the Intel Rapid Storage Technology software package obtained from the Intel VIP
website. This will add the Intel Rapid Storage Technology UI that can be used to manage the
RAID configuration.
4. Select this menu. Choose the ‘Create RAID Volume’. When ‘Create RAID Volume’ menu is
displayed, fill the following items:
a. Name: Enter a volume name, and press Enter to proceed to next menu item,
b. RAID Level: select RAID level (0, 1, 5, 10), and press Enter to proceed to next menu
item;
c. Disks: press space bar to ‘Select Disks’ to select the devices to be used for your
configuration.
d. Within the ‘SELECT DISKS’ window, choose the devices and press Enter to return to
the ‘MAIN MENU’.
e. Strip Size: Applicable for RAID levels 0, 5, and 10 only. You may choose the default
size or another supported size in the list and press Enter to proceed to the next
item.
f. Capacity: The default size is the maximum allowable size based on the sum of all
the drives in your configuration. You may decrease this volume size to a lower value.
If you specify a smaller capacity, the remaining space can be used
for creating another RAID volume. Press Enter to proceed to the next item.
5. After this is done, exit the Intel Rapid Storage Technology HII user interface by
saving changes and pressing the Esc key.
7. Installation procedures are as follows: Use the ‘load driver’ mechanism when prompted. Insert
a USB key with the Intel® RST driver and browse to the directory on the USB key where the
driver that you wish to install is located. Select the driver INF file. If the correct driver is selected,
the proper Intel controller for your system will be shown. Continue the driver install.
8. Finish the Windows installation and install all other necessary drivers.
9. Install the Intel Rapid Storage Technology software package obtained from the Intel VIP
website. This will add the Intel Rapid Storage Technology UI that can be used to manage the
RAID configuration.
1. Assemble the system using a motherboard that supports Intel Rapid Storage Technology
with Intel Rapid Storage Technology OROM integrated into the BIOS and attach one SATA
hard drive.
2. Enter System BIOS Setup; ensure that RAID mode is enabled. This setting may be different
for each motherboard manufacturer. Consult your manufacturer’s user manual if necessary.
When done, exit Setup.
4. Installation procedures are as follows: Use the ‘load driver’ mechanism when prompted. Insert
a USB key with the Intel® RST driver and browse to the directory on the USB key where the
driver that you wish to install is located. Select the driver INF file. If the correct driver is selected,
the proper Intel controller for your system will be shown. Continue the driver install:
5. Finish the Windows installation and install all other necessary drivers.
6. Install the Intel Rapid Storage Technology software package obtained from the Intel VIP
website. This will add the Intel Rapid Storage Technology UI that can be used to manage the
RAID configuration.
1. Note the port number of the source hard drive already in the system; you will use this to select
the hard drive whose data will be preserved during the migration.
3. Boot Windows, then install the Intel Rapid Storage Technology software, if not already installed,
using the setup package obtained from a CD-ROM or from the Internet. This will install the
necessary Intel Rapid Storage Technology UI and start menu links.
4. Open the Intel Rapid Storage Technology UI from the Start Menu and select the volume type
under Create from the Actions menu. Click on ’Next’
5. Under the configuration options, provide the volume name and select the disks.
6. When the disks are selected, you will be presented with the option to select the disk on which to
preserve data. Select the disk whose data needs to be preserved and
migrated.
7. After the migration is complete, reboot the system. If you migrated to a RAID 0 volume, use
Disk Management from within Windows in order to partition and format the empty space created
when the two hard drive capacities are combined. You may also use third-party software to
extend any existing partitions within the RAID volume.
Begin with a system where you are booting from a PATA hard drive. Make sure the PCH I/O RAID
controller is enabled and the Intel Rapid Storage Technology is installed. Then do the following:
1. Note the serial number of the SATA hard drive that is already installed. You will use this to select
it as the source hard drive when initiating the migration.
2. Physically attach the second SATA hard drive to the available SATA port.
3. Boot to Windows, install the Rapid Storage Technology software, if not already installed, using
the setup package obtained from a CD-ROM or from the Internet. This will install the necessary
Intel Rapid Storage Technology UI and start menu links.
4. Open the Intel Rapid Storage Technology UI from the Start Menu.
12.5 Migrating From one RAID Level to Another
RAID level migration allows an existing RAID configuration to be migrated to another RAID
configuration. The following migrations are possible.
NOTE: Not all migrations are supported on all chipsets. The support varies depending on the chipset
and the ports supported on the chipset (for the supported migrations for each chipset, please refer to
the Intel Rapid Storage Technology product requirements document):
Note: In order for the migration options to be accessible, the minimum number of SATA hard drives
required for the target RAID level must be connected.
Start Menu ->All Programs -> Intel Rapid Storage Technology -> Intel Rapid Storage
Technology UI
2. Under 'Status' or 'Manage', in the storage system view, click the array or volume that you
want to modify. The volume properties now display on the left.
4. In the 'Change Volume Type' dialog, type a new name if you want to change the default name.
6. The 'Manage' page refreshes and reports the new volume type.
7. After the migration starts, you can view the migration progress under status.
8. When the Status field indicates volume as ‘Normal’, the migration is complete.
When the system is booting to Windows with the OS installed on a different disk controller, the user
can add two SATA hard drives and create a RAID volume on them.
2. Enter System BIOS Setup; ensure that RAID mode is enabled. This setting may be different for
each motherboard manufacturer. Consult your manufacturer’s user manual if necessary. When
done, exit Setup.
3. Boot to Windows; install the Intel Rapid Storage Technology software, if not already installed,
using the setup package obtained from a CD-ROM or from the Internet. This will install the
necessary Intel Rapid Storage Technology UI and Start menu links.
4. Use the Intel Rapid Storage Technology UI to create a RAID 0 volume on two SATA drives
according to the procedure in section 6.1 of this document.
5. After the RAID volume is created, you will need to use Windows Disk Management or other
third-party software to create a partition within the RAID volume and format the partition. At
this point, you may begin to copy files to, or install software on, the RAID volume.
2. Install the Intel Rapid Storage Technology software from the CD-ROM included with your
motherboard or after downloading it from the Internet. This will add the Intel Rapid Storage
Technology UI that can be used to manage the RAID configuration in Windows*.
3. Use third-party software to create an image of the RAID volume as if it were a physical hard
drive or create an image of the partition within the RAID volume containing the operating
system, program and data files.
2. Enter System BIOS Setup; ensure that RAID mode is enabled. This setting may be different for
each motherboard manufacturer. Consult your manufacturer’s user manual if necessary. When
done, exit Setup.
3. If the system has CSM on, and can boot to a DOS environment, use the Intel RAID Configuration
utility (RCfgSata.exe). Otherwise, if CSM is off or not present, boot to the UEFI shell and use the
RcfgSata.efi utility to create a RAID volume. The following command line will instruct the utility
to create a RAID 0 volume named “OEMRAID0” on hard drives on Port 0 and 1 with a strip size
of 128 KB and a size of 120 GB (rcfgsata.efi can replace rcfgsata.exe if using the UEFI shell
environment):
‘/DS’ for device selection will distinguish the different controllers for device selection:
<Controller><Port>
Create RAID 0 using 1 PCIe AHCI SSD on port 3, and 1 PCIe NVMe SSD on port 6:
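As an illustrative sketch only, assuming controller IDs 1 (PCIe AHCI) and 2 (PCIe NVMe) and the <Controller><Port> device-selection form described above; the formatting of the /DS arguments and the /L and /SS switch names are assumptions, not confirmed rcfgsata syntax:
C:\>rcfgsata.exe /C PCIeRAID0 /DS 1.3 2.6 /L 0 /SS 128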
The following command line will display all supported command line parameters and their
usage: C:\>RCfgSata.efi /?
4. The system does not need to be rebooted before moving on to the next step. If there are no
PATA hard drives in the system, the RAID volume created will become the boot device upon
reboot.
5. Use third-party software to apply the image created in Part 1 to the RAID volume you created in
Part 2.
13 RAID Volume Data Verification
and Repair Feature
This feature is available starting with Intel® Matrix Storage Manager 6.1.
When the verification process is complete, a dialog will appear that displays the number of
verification errors, verification errors repaired and blocks with media errors that were found.
1. Under ‘Status’ or ‘Manage’, in the storage system view, click the RAID volume on which you want
to perform the verify operation. The volume properties now display on the left.
3. For RAID 0, the verification process starts once you click ‘Verify’. For RAID 1, 5, 10 and Recovery
volumes, a dialog box is shown with a check box option to automatically repair the errors found
during the verification process. If you want to perform the repair, select this box and
then click ‘Verify’.
5. When the verification process is complete and the volume status is set to normal, you can
click the volume under ‘Status’ or ‘Manage’. In the volume properties on the left, under
‘Advanced’, you can view the number of verification errors, verification errors repaired and blocks
with media errors that were found.
Pre-conditions: UI installed; at least one RAID volume on the system that is initialized, in a normal state,
and of a valid RAID type (RRT, R0**, R1, R5, R10). **RAID 0 volumes can only be verified; they
cannot be repaired.
1. Log in to Windows, launch the Intel® RST UI, and click on the ‘Preferences’ tab at the top of
the UI
2. From the ‘Preferences’ page, select the ‘Scheduler’ button on the left navigation pane to display
the scheduler options
4. Select ‘Recurrence’ schedule: Once (default), Daily, Weekly, or Monthly
5. Select the ‘Start Date’; day for the scheduler to begin/run the V&R operation
7. Select the ‘Recur every’ schedule: choices will vary depending upon what is selected for
‘Recurrence’ (this step is not applicable for Recurrence of once)
8. Select whether or not to Automatically Repair Errors encountered during the Verify operation
14 Intel® Rapid Recover
Technology
This technology utilizes RAID 1 functionality to copy data from a designated Master drive to a
designated Recovery drive with the following limitations:
The size of the Master drive must be less than or equal to the size of the Recovery
drive.
The size of the Master drive is limited to less than or equal to (<=) 1.3125TB in
capacity.
When a Recovery volume is created, the complete capacity of the Master drive will be used as the
Master volume. Only one Recovery volume can exist on a system. There are two methods of updating
the data from the Master drive to the Recovery drive. They are:
When using the continuous update policy, changes made to the data on the master drive while the
recovery drive is not available are automatically copied to the recovery drive when it becomes available.
When using the Update on request policy, the master drive data can be restored to a previous state
by copying the data on the recovery drive back to the master drive.
More control over how data is copied between master and recovery drives
Fast volume updates (only changes to the master drive since the last update are copied to
the recovery drive)
Better power management on mobile systems by spinning down the Recovery drive when in
On Request Update Policy mode or when the Recovery drive goes offline when in Continuous
Update Policy mode.
Applications: Critical data protection for mobile systems; fast restoration of the master drive to a
previous or default state.
A Recovery volume can be created through the RAID Option ROM or through the Intel® Rapid Storage
Technology UI application.
Follow the steps below to create a Recovery volume through the OROM:
1. Enter the OROM by pressing the Ctrl and I keys early during system POST.
2. Under the ‘Create RAID’ volume option, select the option to create a Recovery volume.
Note: The Primary disk size must be less than or equal to the Recovery disk size.
i. Enter the BIOS Setup Menu and select Intel® Rapid Storage Technology
menu.
vi. Highlight Synchronization, press <Enter> and select Mode of ‘On Request’ or
‘Continuous’
1. Under Create select the volume type as ‘Recovery’ and click ‘Next’
2. Under the ‘Configure Volume’ you can change the default volume name if you want, then select
the ‘master’ disk and then the ‘recovery’ disk. Now change the ‘update’ mode if needed to ‘On
Request’. The default selection is ‘continuous’.
4. Under ‘Confirm’, review the selected configuration. Click ‘Back’ if you want to change the
configuration, or click ‘Create Volume’ if you are satisfied with it.
5. A dialog box with a warning message is then shown; read the warning message before
clicking ‘OK’ to make sure you are erasing data on the correct disk.
6. Once you click ‘ok’ the volume creation starts and progress of the volume creation can be
viewed under status. Once the status is set to ‘normal’ the volume creation is completed.
7. The system will synchronize the Primary with the Recovery disk once after the creation of the
Recovery volume.
2. Under ‘Manage’ or ‘Status’, in the storage system view on the right, click the recovery volume
whose update mode you need to change. The volume properties now display on the
left.
4. The page refreshes and the volume properties report the new update mode. NOTE: Disabling
the continuous update policy requires the end-user to request updates manually. Only
changes since the last update process are copied. The recovery volume will remain in On
Request Policy until the end-user enables continuous updates.
2. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties now display on the left.
4. A dialog box is shown stating that the only changes since the last update will be copied.
Select the check box if you don’t want this confirmation message to display each time you
request an update. Click ‘Yes’ to confirm.
14.6 Access Recovery Drive Files
When data recovery to the master disk of a recovery volume is required, you can use the ‘access
recovery disk files’ option. This action is only available if a recovery volume is present, in a normal
state, and in on request update mode. Follow the instructions below to access the recovery drive files
when you have a recovery volume in ‘on request’ mode on your system (if the recovery volume is not
in ‘on request’ mode, use the instructions in section 8.3 to change the mode).
2. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties now display on the left.
4. Now you can view recovery disk files using Windows Explorer*.
NOTE: The recovery drive is only accessible in read-only mode, and data updates are not
available in that state.
2. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties now display on the left.
4. Now the recovery drive files are no longer accessible in Windows Explorer.
5. The page refreshes and data updates on the volume are now available.
Solution:
When a Recovery drive that is part of an Intel® Rapid Recover Technology volume fails, follow the
steps below to set up a new disk as the Recovery drive.
1. Shut down the system.
2. Remove the failed Recovery disk and insert a new hard drive. The size of the new drive must
be greater than or equal to the Master drive.
3. Boot to the Master drive and open Intel Rapid Storage Technology UI.
4. Under 'Status' or 'Manage', in the storage system view, click the recovery volume to be
rebuilt. The volume properties now display on the left.
5. Click on ‘rebuild to another disk’
6. A dialog box is then shown requesting you to select one of the non-RAID disks on which to rebuild
the volume.
7. Once the disk selection is complete, click ‘rebuild’
8. Now you can view the progress of the build under ‘status’ or ‘manage’
Scenario 2:
What happens if the Master Drive fails and/or the user would like to do a reverse synchronization to
a new Master Drive?
Solution:
If the Recovery volume was in Continuous update policy when the Master drive crashed, then the
system will continue to function off of the Recovery drive.
If the Recovery volume was in Update on Request policy, then a Master drive failure may result in a
BSOD.
In either case, follow the below steps to create a new Master drive using the Recovery Drive.
Scenario 3:
What is the expected behavior if a power failure occurs (and no battery supply available)
in the middle of migration for each of the below?
Creating a recovery volume (migration)
Updating a recovery volume (Copy some files from Master drive to Recovery drive)
Verify and Repair a recovery volume
Recovering a recovery volume (copy from a Recovery drive to a Master Drive)
Solution:
In each case, upon the next reboot, the migration, or Verifying a Recovery Volume, or
Verify and Repair a Recovery Volume or Recovering a Recovery Volume operation would
continue normally starting from where it had been interrupted by the power failure.
In the case where the Recovery volume was being updated or recovered using a fast
synchronization, and writes were in progress when the power was lost, the result would
be a dirty shutdown. As a result, the fast synchronization would degenerate to a slow
synchronization or a complete update.
Note: If the system is running on battery, the volume will not synchronize if it is in the
continuous update policy. If the volume is in the Update on Request policy, then the
synchronization will be successful.
Note also that an ‘update on request’ volume should first be updated before the data on the
recovery disk is considered valid.
Scenario 4:
Once a system is configured with Intel Rapid Recover Technology, a user would like to
revert the Master Drive data to a previous state.
Solution:
If the recovery volume is set to the on request update policy, you can revert master drive data to
the state it was in at the end of the last volume update process. This is especially useful when a
virus is detected on the master drive or guests use your system.
1. Restart the system. During the system startup, press Ctrl-I to enter the user interface of the
Intel® Rapid Storage Technology option ROM.
2. In the 'MAIN MENU' select 'Recovery Volume Options'.
3. In the 'Recovery Volume Options' menu, select 'Enable Only Recovery Disk' to boot from the
recovery drive.
4. Exit the option ROM and start up Windows*.
5. After the operating system is running, select the Intel® Rapid Storage Technology UI from
the Start Menu.
6. Under 'Status' or 'Manage', in the storage system view, click the recovery volume to be
recovered. The volume properties now display on the left.
7. Click on ‘recover data’ and then click ‘ok’ on the dialog box.
8. Now you can view the progress of the recovery under ‘status’ or ‘manage’.
9. Once the recovery of the volume is completed, you can reboot to the master drive.
Product Condition: Recovery volume created with the recovery drive normal and the master drive
offline or missing.
Access UI: OROM – Note that the master drive is designated as an offline disk or the master drive
is missing. Select option 4, ‘Recovery Volume Options’.
Figure 10
15 Pre-OS Installation of the
Intel® Rapid Storage
Technology Driver
The Intel® Rapid Storage Technology driver can be loaded before installing the Windows OS on a
RAID volume or when in AHCI mode. Later Windows OS releases do not require that the Intel®
RST driver be installed and loaded prior to the OS installation; on those OS versions the Intel® RST
driver can be loaded after the OS installation. The Intel® Rapid Storage Technology AHCI driver can be
installed over the Windows native AHCI driver.
2. When prompted, insert the media with the Intel® RST driver files and press Enter.
3. Locate the media and browse to the folder where the files are located.
4. Follow the steps to load the driver and resume the installation.
16 Determining the Version of the
RAID Driver
There are two accurate ways to do this. The first is to use the Intel Rapid Storage Technology UI.
The second is to locate the driver (iaStorA.sys) itself and view its properties.
1. Run the Intel Rapid Storage Technology UI from the following Start Menu path:
3. Click on the top menu button ‘Help’ to launch the ‘Help’ window. In the ‘Help’ window, click the
top menu button ‘System Report’.
4. If not already expanded, click on ‘Intel® Rapid Storage Technology’ link to expand the item.
Under it you can view the driver version in the following format: WW.XX.YY.ZZZZ
5. This is the current version of the user interface utility installed on your system. The WW.XX.YY
portion is the product release number; the ZZZZ portion is the build number. E.g. 10.5.1.1001.
<System Root>\Windows\System32\Drivers
3. Select the “Details” tab (for Windows 7; may vary for other OS versions)
4. At the top of this tab, there should be a parameter called “File version”. Next to it is the version
of the driver currently installed on your system. It should have the same format and version as
the one you obtained using the Intel Rapid Storage Technology UI
16.3.2 Using the Intel® RST Option ROM User Interface
1. Early in system boot-up, during POST, or when you see the “Intel® RAID for Serial ATA” status
screen output, press CTRL-I. This will open the Option ROM user interface.
3. Intel® Rapid Storage Technology option ROM w.x.y.zzzz Intel® SATA Controller
4. w.x.y.zzzz is the version of the Option ROM currently installed on your system. The w.x.y
portion is the product release number; the zzzz portion is the build number.
Shell:>Drivers
The Intel® RST UEFI driver will be shown along with its version, where xx.x.x.xxxx will be replaced with
the actual UEFI OROM version, i.e.:
17 Un-installation
Uninstalling the RAID driver could potentially cause an end-user to lose access to important data
within a RAID volume, because only this driver provides functionality for the Intel® SATA
RAID controller. Therefore, Intel does not provide a way to permanently remove the driver from the
system. However, disabling the Intel® SATA RAID Controller causes the operating system to not use
the RAID driver.
The uninstallation application that is included with the Intel Rapid Storage Technology software can
remove all components except the RAID driver (i.e. it removes the UI application, Start Menu links,
Control Panel Applet, etc.).
Use the following procedures to remove the Intel Rapid Storage Technology software or to disable
the SATA RAID controller:
3. The first dialog box that appears gives you the option of un-installing all components of the Intel
Rapid Storage Technology software except the RAID driver. Click ‘OK’ to do so.
4. The next dialog box is a confirmation that you would like to un-install all components of the
software except the RAID driver. Click ‘Yes’ to confirm.
5. All components of the software will be un-installed except the RAID driver. You should no longer
see any Start menu links to the UI application or a control panel applet for Intel Rapid Storage
Technology. However, the RAID configuration should still function normally.
1. Enter System BIOS Setup and disable RAID Mode. This setting may be different for each
motherboard manufacturer. Consult your manufacturer’s user manual if necessary. When done,
exit Setup.
2. Reboot the system (The OS must have been installed on a disk not attached to the Intel® SATA
RAID controller). You should no longer see the RAID Option ROM status screen during boot, and
you should no longer see the Intel® SATA RAID Controller in Device Manager.
3. At this point, Windows will no longer be using the RAID driver and you will not have Intel RAID
functionality. All data contained in existing RAID volumes will no longer be accessible.
To re-enable Intel RAID functionality, re-enter System BIOS Setup and re-enable RAID mode.
Uninstall Note: End-users can use this same procedure to disable the Intel® SATA RAID Controller
if necessary. In fact, the uninstall program used in section 12.1 of this document will display a text
file with a similar procedure. Run the Uninstall Program, click ‘Cancel’ when presented with the first
dialog box, then click ‘Yes’ at the second dialog box to read the text document containing the
procedure.
18 Registry Customizations
Note: Windows registry changes require reboot to take effect.
After installation of the Intel Rapid Storage Technology, the registry will contain keys to allow
customization of several features.
Customize Support URLs in Rapid Storage Technology UI
The Rapid Storage Technology UI [Help] Menu, Submenu [Online Support] when selected will
display a pop-up window with the support URLs as shown in the figure below:
**Note: This feature is not supported on Windows XP and older operating systems.
Associated with this feature are two registry keys located at
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters]
1. ZPODD enable/disable
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"OddZeroPowerEnable"=dword(0, 1)
This key determines a platform’s eligibility for the feature. When the value is zero, this
feature will be disabled. When the value is non-zero or not present, the feature will be
enabled. The default value is enabled (1).
"SecondsToOddZeroPower"=dword:(30, 300)
This key determines the idle timeout value. When the value is zero then this feature will be
disabled. The value is the number of seconds the ODD must be idle (defined as a period of
time in which no non-GESN commands are received; minimum value is 30 and maximum
value is 300) before the ODD will be powered off. The default value is 60. If the registry
value is set to a value outside this range then the default value of 60 seconds will be used.
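As an illustration, a .reg file that creates both ZPODD values with settings equivalent to the documented defaults might look like the following (a sketch; 0x3c is 60 seconds, and a reboot is required for the change to take effect):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"OddZeroPowerEnable"=dword:00000001
"SecondsToOddZeroPower"=dword:0000003c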
HKEY_LOCAL_MACHINE\SOFTWARE\Intel\IRST
DisableEmail
DWORD(32) = 1:
When this value is created and set to 1, the UI will not display a
menu item in the ‘Preferences’ page for the end-user to setup
email notification on the system. The feature is disabled.
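For example, the value could be created from an elevated command prompt with the standard reg.exe tool (shown only as a sketch; a reboot is still required, as noted at the start of this chapter):
C:\>reg add HKLM\SOFTWARE\Intel\IRST /v DisableEmail /t REG_DWORD /d 1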
18.3 Disabling Maximized Mode Option for Intel® SRT
OEMs have the ability to disable the Accelerate Maximized mode option and limit the Intel® Smart
Response Technology to Enhanced mode selection only.
The registry key by default is not populated in the registry. In order to remove the functionality
from the UI the registry key has to be created using the following settings:
HKEY_LOCAL_MACHINE\SOFTWARE\Intel\IRST
DisablePerformanceMode
DWORD(32) = 1:
The registry key by default is not populated in the registry. In order to enable the functionality in
the UI the registry key has to be created using the following settings:
Open the registry editor and Add the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
Create a new DWORD (32) value as follows:
RebuildOnHotInsert
DWORD(32) = 0 (default): By default, or when this value is created and cleared to 0,
this feature is disabled.
DWORD(32) = 1: When this value is created and set to 1, this feature is
enabled and, when all the system conditions are met, the
driver will begin an auto-rebuild upon hot insertion of a
supported disk.
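A sketch of creating the value with reg.exe from an elevated command prompt (followed by a reboot, as noted at the start of this chapter):
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device /v RebuildOnHotInsert /t REG_DWORD /d 1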
The registry key by default is not populated in the registry, but AN is enabled by default. In order to
change the functionality in the driver, the registry key has to be created using the following setting:
Open the registry editor and Add the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
Create a new DWORD (32) value as follows:
Controller0PhyXANEnable
Where ‘X’ represents the SATA port on which AN is to be disabled or enabled.
MinimumIdleTimeoutInMS
This value specifies the minimum amount of time the power
framework must wait to power down a logical unit once it is at
idle.
REG_DWORD(32) = MAXULONG:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
HybridHintDisabled
DWORD(32) = 0
DWORD(32) = 1
Name: HybridHintReset
Location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
Note: Upon the first reboot after the driver installation is complete, the registry key is written and
the hybrid log reset by the driver. Once the registry key is written, it will remain throughout reboots
and OS upgrades. If deleted manually, it will be rewritten automatically by the driver upon the next
reboot.
The following registry key can be added to disable hybrid hinting during hibernation.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
HiberFileHintDisable
DWORD(32) = 1
Disable hybrid hinting during hibernate
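For example (a sketch using reg.exe; a reboot is required for the change to take effect):
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device /v HiberFileHintDisable /t REG_DWORD /d 1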
19 Power Savings with Intel®
Rapid Storage Technology
19.1 Link Power Management (LPM)
Intel® Rapid Storage Technology implements the Link Power Management (LPM) feature described
by the Serial ATA specification to reduce the power demand of the high-speed SATA serial interface
while still providing SATA capability at minimum power cost. LPM, when used in
conjunction with a SATA hard drive that supports this feature, enables lower power consumption.
LPM was initially enabled by default on mobile platforms starting with ICH6M with Intel® Matrix
Storage Manager. Starting with ICH9R, this feature has also been supported on desktop platforms
with the Intel® Matrix Storage Manager 7.5 release, but it is not enabled there by default.
Beginning with the Intel® Rapid Storage Technology 10.0 release, LPM support is enabled by default
on both mobile and desktop platforms. OEMs who wish to modify the default settings for LPM on
their platforms can follow the instructions in the section titled Instructions to disable/enable LPM.
NOTE: Beginning with the Intel® Rapid Storage Technology 10.0 release, the registry keys are no
longer populated in the Windows registry by default. The RST driver does not require the registry
keys to be present to support the default settings.
1. Go to Start->Run
2. Type in RegEdit and press the Enter Key.
3. Go to the location mentioned below to insert or configure the registry keys for LPM.
NOTE: OEMs need to configure the LPM settings per SATA port. Ports are numbered starting
with zero (please refer to the desired platform EDS for the number of ports supported on
your platform).
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\iaStorA\Parameters\Device\
4. Now add the following registry keys under the registry location mentioned in step 3, if they are not
already present (these registry keys are not available by default; they can be added by using
automated scripts, .reg files, executable utilities, etc.). If the registry keys are already present,
you can modify the values for the desired support. Values are modified on a port-by-port basis,
so modify all ports on which you wish the changes to be supported. **
Per-port Setting:
Replace the ‘X’ with the SATA port number to independently control HIPM/DIPM per port.
Value: 0 = disable, 1 = enable (default)
-- (Old key, DWORD: LPM)
Configure HIPM to use partial or slumber when the drive is in a D0 ACPI device state
DWORD: Controller0PhyXLPMState
Value: 0 = Partial (default), 1 = Slumber
-- (Old key, DWORD: LPMState)
Configure HIPM to use partial or slumber when the drive is in a D3 ACPI device state (Device
receives a start_stop_unit request: e.g. HDD idle spindown).
DWORD: Controller0PhyXLPMDstate
Value: 0 = Partial, 1 = Slumber (default)
-- (Old key, DWORD: LPMDState)
Controller-wide Setting:
This allows auto partial to slumber to be enabled. Actual setting of APS is controlled by the values
below:
Auto Partial to Slumber:
DWORD: EnableAPS
Value: 0 = disable, 1 = enable (default)
**Warning: If you edit the registry incorrectly, you can cause serious problems that may require
you to reinstall your operating system. Intel does not guarantee that problems that are caused by
editing the Registry incorrectly can be resolved.
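As an illustration only, a .reg file configuring SATA port 0 with the documented per-port and controller-wide keys above, using values equivalent to the documented defaults (a sketch assuming port 0; adjust the ‘Phy’ number for other ports):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"Controller0Phy0LPMState"=dword:00000000
"Controller0Phy0LPMDstate"=dword:00000001
"EnableAPS"=dword:00000001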
19.1.2.1 APS
APS is always disabled in all connected SATA devices
The RST driver enables SIPM only for SIPM capable SATA ports, and ASP will be set to
PARTIAL, and host SATA storage controller will report HIPM capability.
The RST DRIVER enables SIPM only when ASP is set to PARTIAL
By default the RST DRIVER enables SIPM on SIPM capable SATA ports,
excluding eSATA ports and hot-pluggable SATA ports.
SIPM for Hot-Pluggable SATA Ports: By default the RST DRIVER enables
SIPM on SIPM capable SATA ports, excluding eSATA ports and hot-
pluggable SATA ports
SIPM for eSATA Ports: The RST DRIVER keeps SIPM disabled on SATA
ports which are defined by the platform BIOS as eSATA ports
SIPM Capable SATA Ports: The RST DRIVER enables SIPM only for SATA
ports whose connected devices report HIPM capability
SIPM will be enabled when the host is reporting HIPM capability
SIPM will only be enabled for SATA devices which have APS disabled
The RST DRIVER reads the value of SIPM timeout from the Windows registry. In case the
Windows registry key is not present the RST DRIVER will use default values for the
timeout. The SIPM timeout will be configurable for each SATA device independently
The RST driver allows configuring the SIPM timeout for each SATA device
independently
The RST driver sets the SIPM timeout to 0 (zero) in order to disable SIPM for
the given SATA port
The RST driver reads the SIPM timeout value from the Windows registry
The RST driver uses default values for SIPM timeout when the Windows
registry key is not present. The amount of time to elapse before the RST
driver puts the link into SLUMBER is as follows:
RTD3 refers to the ability to completely remove power from devices (D3cold) during long idle
periods, while the system remains in S0.
NOTE: For systems that support RTD3, but are shipped with RST in ‘RAID Ready’ mode, it is
suggested that RTD3 be disabled in the BIOS. This is to prevent end users from creating RAID
volumes while RTD3 is enabled.
For additional information on configuration and usage of this feature, please refer to its
documentation CDI/IBP document # 516865.
19.4 DEVSLP
Beginning with Intel® Rapid Storage Technology 12.5 release, support for a new SATA link power
state was introduced, device sleep (DEVSLP). DEVSLP is a fourth and lowest link power state,
coming after Slumber. The link will enter DEVSLP when all current IO have completed, the link is in
the Slumber state and the DEVSLP idle timer has expired granting permission for the SATA
controller to assert the DEVSLP signal. BIOS is responsible for configuring and enabling DEVSLP. The
driver configurable settings for DEVSLP may be found in the section titled SATA Device Sleep
(DEVSLP) Settings. For additional information on configuration and usage of this feature, please
refer to the document CDI/IBP document # 516865.
Intel® RST supports DEVSLP for reduced power during long idle periods such as when the system is
in InstantGo*. When DEVSLP is enabled, the Intel® RST driver will support InstantGo* when
requested by the OS on pass through devices on the SATA ports that support DEVSLP. InstantGo*
is only supported on Windows* 8.1 and newer.
The following recommendation for the Intel® Rapid Storage Technology DEVSLP Idle Timeout is taken
from Section 4.6 of the Intel “Ultrabook™ Storage Power Management Recommendations White
Paper”, CDI/IBL #528428.
When the system is in InstantGo*, the DEVSLP idle timeout should be set to maximize power
savings. Because the I/O pattern while in InstantGo* is not deterministic, the DEVSLP idle time
cannot be set to an arbitrarily low value, or else the power consumed by entering and exiting
DEVSLP may be (on average) greater than the power consumed by remaining in the next higher
power state (Slumber). Instead, the idle timeout must be set to a value that delivers the best
average power consumption. To achieve the best average power across a variety of configurations,
the DEVSLP idle timeout should be set to equal the DEVSLP transition energy recoup time. The
DEVSLP recoup time is the time in the next higher power state (Slumber) that consumes the same
amount of power as entering and exiting DEVSLP. Using the recoup time ensures that (on
average) the device is not placed in DEVSLP before the energy consumed by the transition can be
recouped.
Recoup time for a device to enter and then exit the next lowest power state relative to the device
remaining in and then exiting from its current power state is calculated as follows:
recoup time = (next state entry energy + next state exit energy − current state exit energy) /
(current state in-state power − next state in-state power)
In the following example, DEVSLP recoup time is calculated for a hypothetical device with the
following characteristics:
State/State Transition | In-State or Transition Time (s) | In-State or Transition Average Power (W) | In-State or Transition Energy (J)
Following the general case above, DEVSLP recoup time is calculated as follows:
DEVSLP recoup time = (0 J + 0.13 J − 0.0013 J) / (0.05 W − 0.005 W) = 2.86 seconds
A device with these characteristics may stay in Slumber for 2.86 seconds and use the same power
as would be consumed by transitioning to DEVSLP. Therefore, the recoup time is defined as 2.86
seconds, and the DEVSLP idle time-out when in InstantGo* should be set to 2.86 seconds.
For additional information on configuration and usage of these parameters, refer to the document
CDI/IBP document # 516865
Per-port Settings:
This allows the OEM to customize the DEVSLP Idle Time Out value for InstantGo*
enabled systems. When the OS enters InstantGo*, the driver configures the SATA
controller to enter DEVSLP sooner to save power. This setting will temporarily
override the BIOS configured value for the duration of the InstantGo* period. Upon
exiting InstantGo*, the driver will restore the value the BIOS originally
programmed:
Replace the ‘X’ with the SATA port number to independently control the DEVSLP
timeout value while in InstantGo*.
At boot time, the RST DRIVER shall read the configured registry key values and use them as
overrides on a per-device basis.
CsDeviceSleepIdleTimeoutInMS
DeviceSleepIdleTimeoutInMS
DeviceSleepExitTimeoutInMS
MinimumDeviceSleepAssertionTimeInMS
Multiple “product id timeout” pairs can be placed in the registry key. Each pair is separated by a
null delimiter.
The RST DRIVER will first look for the per-device registry key and if a device match is found it shall
use the value indicated. If per-device registry key does not contain a match for any attached
device, then the driver shall use the per-port specific registry key if present. This requirement shall
not modify the behavior of any per port registry key.
19.5.1 CsDeviceSleepIdleTimeoutInMS
Path: HKLM\System\CurrentControlSet\Services\iaStorA\Parameters\Device
This registry key is the Device Sleep idle timeout (DITOActual) to use when the system is in
connected standby. DITOActual = DITO * (DM + 1)
Total DevSlp Idle Timeout is the total amount of time in ms that the host bus adapter will wait after
the port is idle before raising the DevSlp signal, max=16368.
If this registry key is not present, the RST DRIVER shall check for the per-port specific registry key
“DevSlpDITOsmall”. This registry key will not take precedence over the registry setting of
“DevSlpDITOsmall” if already present.
19.5.2 DeviceSleepIdleTimeoutInMS
Path: HKLM\System\CurrentControlSet\Services\iaStorA\Parameters\Device
Key:
DeviceSleepIdleTimeoutInMS
This registry key is the Device Sleep idle timeout (DITOActual) to use when the system is not in
Connected Standby (CS). Note: this registry key shall apply to both CS and non-CS platforms. The
<timeout> is the value of the DEVSLP idle timeout to use when the system is not in Connected
Standby, in milliseconds (decimal value).
19.5.3 DeviceSleepExitTimeoutInMS
Path: HKLM\System\CurrentControlSet\Services\iaStorA\Parameters\Device
Key:
DeviceSleepExitTimeoutInMS
This registry key is the Device Sleep Exit timeout (PxDEVSLP.DETO). The < timeout > value is the
DEVSLP exit timeout in milliseconds (decimal value).
19.5.4 MinimumDeviceSleepAssertionTimeInMS
Path: HKLM\System\CurrentControlSet\Services\iaStorA\Parameters\Device
Key:
MinimumDeviceSleepAssertionTimeInMS
This registry key is the minimum amount of time, in ms, that the HBA must assert the
DEVSLP signal before it may be de-asserted; Minimum Device Sleep Assertion time
(PxDEVSLP.MDAT). The nominal value is 10ms and the minimum is 1ms depending on device
identification information.
Create Options:
Create Usage:
Creates the new registry keys and populates them with default values
--create [--key x] [--inline]
Create Examples:
-C
--create
--create --inline
--create --key CsDeviceSleepIdleTimeoutInMS
--create --key DeviceSleepExitTimeoutInMS --inline
--create --help
Export Options:
Export Usage:
Exports the Dev Sleep registry keys to a distributable .reg file
--export
Export Examples:
-E
--export
--export --help
List Options:
List Usage:
Lists all devices and values in the registry
--list [--key x]
List Examples:
-L
--list
--list --key MinimumDeviceSleepAssertionTimeInMS
--list --help
Modify Options:
Modify Usage:
Modifies the reg key
--modify --index z --value y [--key x] [Product ID]
Modify Examples:
-M --index 3 --value 10
--modify --index 0 --value 3 --key DeviceSleepIdleTimeoutInMS
-M --index 1 --value 7 --key CsDeviceSleepIdleTimeoutInMS newproductid
--modify --index 1 --value 7 productid with spaces for all reg keys
--modify --help
Add Usage:
Adds a new registry key
--add --value y Product ID
Add Examples:
-A --value 10 newproductid
--add --value 3 --key DeviceSleepIdleTimeoutInMS newproductid
-A --value 7 productid with spaces for all reg keys
--add --help
Import Options:
Import Usage:
Imports the Dev Sleep registry key to a specified OS image
--import --driveLetter c
Import Examples:
-I --driveLetter C
--import --driveLetter C
--import --help
Delete Options:
Delete Usage:
Deletes an existing registry key
--delete --index z [--key x]
Delete Examples:
-D
--delete --index 3
--delete --index 1 --key MinimumDeviceSleepAssertionTimeInMS
--delete --help
Beginning with Intel® Rapid Storage Technology 13.0, the Intel® RST driver also supports
InstantGo* Notification on all InstantGo* Notification capable devices connected to the AHCI
controller. The Intel® RST driver will notify devices when the system is entering/exiting InstantGo*.
This allows supported devices to change policy and be more aggressive in internal power
management and power savings.
19.7.1 Requirements
Devices must support the Advanced Power Management feature (APM) defined in the ATA*
standard (ACS-3) and report support for APM levels.*
o IDENTIFY DEVICE data Word 159 set to 0xA5A5 in the Vendor Specific area.**
Platform Hardware and Devices must support the DEVSLP Feature.
o Device supports the Device Sleep feature (per ATA IDENTIFY DEVICE command)
IDENTIFY DEVICE data word 78 bit 8 is set to ‘1’ (i.e., resume from Device Sleep using
COMWAKE).
Devices must support DevSleep_to_ReducedPwrState (as indicated in Identify Device data).
Supported on pass-thru devices (non-RAID member).
*Intel recommendations for IHVs to support devices’ INSTANTGO* Notification requirements and
APM levels can be found in “Ultrabook Storage Power Management Recommendations White Paper”
on CDI/IBL #528428.
Once the Intel® RST driver receives notification from the OS that the system is going to enter or
exit InstantGo*, Intel® RST driver will notify the device by using the APM (Advanced Power
Management) mechanism defined in the ACS (ATA Command Set).
The Intel® RST driver will use a SET FEATURE command to send a hint of power/performance
balance to the device. The hint value is 01h – FEh.
Values:
- FEh - max performance at the expense of power
- 01h - max power savings at the expense of performance
- 80h – defined for HDDs as maximum power savings without spin down; values lower than 80h
allow the device to spin itself down
- All other values are vendor specific
Intel® RST uses 10h as the default value for entering InstantGo*, and 80h is the default value
Intel® RST uses for exiting InstantGo*. Values are customizable by using registry keys.
Add the following key to disable InstantGo* Notification for the device on port ‘X’:
DWORD: Controller0PhyXCsDeviceNotification
Value: 0x00 – 0x01, default 0x01 (1 = enabled / 0 = disabled)
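For example, to disable the notification for the device on port 0 (a sketch using reg.exe; a reboot is required):
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device /v Controller0Phy0CsDeviceNotification /t REG_DWORD /d 0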
The following Registry key will allow customizable APM levels to set the device to when the system
enters and/or exits InstantGo*. This will only be done if InstantGo* Notification is enabled.
Open the registry editor and navigate to this path:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device
Add the following registry key to customize Entry into InstantGo* notification :
DWORD: Controller0PhyXEnterCSApmLevel
Values: As below (default is 10h)
Add the following registry key to customize Exit from InstantGo* notification :
DWORD: Controller0PhyXExitCSApmLevel
Values: As below (default is 80h)
VALUES: APM levels interpreted as follows:
FEh – Maximum performance mode
80h – Minimum power management without standby (e.g. balance between power savings
and performance)
10h – High performance bursts, quick to low power (e.g. Windows* 8.1 InstantGo*)
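As a sketch, setting the default-equivalent levels explicitly for port 0 with reg.exe (16 decimal = 10h, 128 decimal = 80h; a reboot is required):
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device /v Controller0Phy0EnterCSApmLevel /t REG_DWORD /d 16
C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device /v Controller0Phy0ExitCSApmLevel /t REG_DWORD /d 128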
The requested APM levels persist across resets, but not power cycles. Intel® RST 13.0, and newer,
supports this capability and will set the appropriate APM levels after power is applied to the devices
(either power on or after resume from RTD3).
The RST DRIVER classifies a drive as HYBRID TYPE when the drive reports
HYBRID HINTING, PUIS and APM capabilities.
The RST DRIVER uses APM to spin down of the SSHD rotational media when
entering CONNECTED STANDBY power state.
The RST DRIVER uses HYBRID HINTING to redirect I/O requests to SSHD
NVCACHE when in CONNECTED STANDBY power state.
The RST DRIVER uses APM to control the spindle. The APM mechanism automatically
spins up the rotational media portion of the drive for the duration of a “read cache miss”
event.
The RST DRIVER restores APM to its original value upon exiting CONNECTED
STANDBY power state.
The RST DRIVER enables PUIS prior to entering CONNECTED STANDBY to prevent
spin-up of the rotational media part after RTD3 transition requests from the
operating system.
The RST DRIVER disables the adaptive D3 idle timeout feature for MATRIX RAID
volumes.
The RST DRIVER always disables the adaptive D3 idle timeout feature for MIXED
RAID volumes.
The RST DRIVER always disables the adaptive D3 idle timeout feature for PCIe
Remapped SSDs.
The RST DRIVER always disables the adaptive D3 idle timeout feature for SATA
SSDs.
The RST DRIVER always disables the adaptive D3 idle timeout feature for tape
drives.
The RST DRIVER always disables the adaptive D3 idle timeout feature for ODDs.
The RST DRIVER always disables the adaptive D3 idle timeout feature for ZPODDs.
The RST DRIVER allows entering the CONNECTED STANDBY power state under the
following conditions:
The platform supports the CONNECTED STANDBY power state,
A CACHE DEVICE is used to accelerate a BOOT VOLUME.
The RST DRIVER will not D3 the BACKING STORAGE when the CACHE VOLUME is
configured in ENHANCED MODE.
The RST DRIVER may D0 the BACKING STORAGE while in the CONNECTED STANDBY
power state, regardless of power source, for the duration of the following events:
Cache miss during an I/O READ request,
Cache cleaning due to the cache volume being full.
The RST DRIVER reads configuration of I/O inactivity timeouts for AGGRESSIVE D3
and SEMI-AGGRESSIVE D3 from Windows Registry.
The RST DRIVER waits not less than 2 seconds for completion of an I/O request to
BACKING STORAGE before attempting to D3 the BACKING STORAGE.
The RST DRIVER waits not less than 30 seconds for completion of an I/O request to
BACKING STORAGE before attempting to D3 the BACKING STORAGE.
The RST DRIVER will AGGRESSIVELY D3 the BACKING STORAGE when entering the
RESILIENCY PHASE of CONNECTED STANDBY when the system is on battery power.
The RST DRIVER keeps ASP set to SLUMBER when the platform is in MODERN
STANDBY.
The RST DRIVER keeps ASP set to PARTIAL when the platform is not in MODERN
STANDBY.
Intel® Rapid Storage Technology supports password-protected HDDs as RAID array member disks and as pass-thru disks. The product relies on the BIOS implementation for most of the ATA Security support. A whitepaper called “Implementing Intel® Matrix Storage Manager Compatible Support for ATA Security in BIOS”, available on CDI, describes the BIOS design necessary for compatibility with Intel® Rapid Storage Technology. The Intel® Rapid Storage Technology product handles the RAID and hot-plug related behavior with regard to password-protected disks.
Accelerated volumes containing a locked member disk will return to a normal online state upon
transitioning from S4 to S1 and entering the correct password to unlock the volume/disk.
Configuration: RAID 1 volume; Disk 1 – Locked, Disk 2 – Unlocked, Volume – Locked (both disks have relevant data)
Action: Remove Disk 1 (the locked disk)
Result: The volume becomes unlocked and Degraded. The user can rebuild the volume onto a new unlocked disk.
Comment: The user had authority to access Disk 2, which has the same data as Disk 1; by removing the locked drive the user can access Disk 2.

Configuration: Intel® RRT volume; Master Disk – Locked, Recovery Disk – Locked (external port docking station), Volume – Locked (both disks have relevant data)
Action: The user connects the laptop to the docking station, unlocks the Recovery Disk and Master Disk, and boots. The user then takes the laptop from the docking station and leaves the external drive connected to power.
Result: The recovery drive can be connected to a new laptop and the information can be used to rebuild an Intel® RRT volume if power was maintained, because the drive is still in an unlocked state.
Comment: Similar situation to a user leaving a laptop unlocked and unattended.
Intel® Smart Response Technology is an Intel® RST caching-related feature that improves
computer system performance and lowers power consumption for systems running on battery power
while in Maximized mode. It allows OEMs to configure computer systems with an SSD used as
cache memory between the hard disk drive and system memory. This provides the advantage of
having a hard disk drive (or a RAID volume) for maximum storage capacity while delivering an SSD-
like overall system performance experience. Intel® Smart Response Technology caching is
implemented as a single drive letter solution; no additional drive letter is required for the SSD
device used as cache. Beginning with Intel® Rapid Storage Technology 13.0, the minimum cache device
size requirement is 16 GB (1000x1000 = 1 MB).
Note: This feature is only supported on designated RAID enabled SKUs; for SKU support see
“Requirements and Limitations”.
22.1 Overview
Updating the OROM to a newer version requires that the driver version be updated to a driver
version from the same release package of the OROM or newer.
OROM Version
Driver Version 10.5.0 PV 10.5.1 PV 10.6.0 PV 12.0.0 PV
10.5.0 PV O X X X
10.5.1 PV S O X X
10.6.0 PV S S O X
12.0.0 PV S S S O
X = this configuration is not supported
O = this configuration is supported and is optimal for the driver and OROM
S = this configuration is supported; however, it is limited to the features of the driver that
was originally released with that OROM version. E.g. if the 11.0.0 PV driver is
installed/updated to a system running the 10.5.0 PV OROM, the system will be limited to the
features of the 10.5.0 OROM. Any new features associated with the 11.0.0 PV release may
not be enabled with this configuration.
Architectural Limitations:
5MB unallocated disk space at the max LBA of the disk: There is a limitation associated with
the HDD and SSD when enabling Acceleration. When a system is first booted with no RAID volumes
or no SRT enabled on a disk, there is no Intel® Rapid Storage Technology configuration information
stored on the disks (this configuration information is called RAID metadata). This is true for all disks
in the system that are in the pass-through state. Whenever a RAID volume is created or a disk is
accelerated with SRT, the Intel® RST driver writes metadata to the disks that stores all the
configuration information (metadata) associated with the disks. The driver locates the max LBA of
the disk and determines if the final ~5MB of space is un-partitioned unallocated space. If the space
is un-partitioned, the driver will reserve this space for the Intel® RST driver metadata. This
reserved space will be hidden from the host so that the host will never be able to access this space
and overwrite the Intel® RST driver metadata. The max LBA presented to the host will be the full
capacity of the disk minus the 5MB offset from the max LBA.
In cases where the user attempts to create a RAID volume or enable SRT on a disk and the driver detects a partition within the max LBA minus the 5MB offset, the operation will fail. The user will not be able to complete the RAID creation or the enable-SRT operation. In order to complete the action, the user must use the appropriate Windows tool to delete the partition or shrink the size of the partition. WARNING! Deleting a partition can result in loss of user data. Ensure that any data on the partition that must be preserved is backed up somewhere else first.
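For example, a minimal diskpart sketch that shrinks the last volume by roughly 5 MB to leave unallocated space at the end of the disk (the volume number is illustrative and should be confirmed with ‘list volume’; the shrink amount is specified in MB):
diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> shrink desired=5
DISKPART> exit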
For a system to support Intel® Smart Response Technology it must have the following:
One of the following platforms (RAID enabled SKUs):
o Intel® 9 Series Chipset SATA RAID Controller:
System BIOS with the Intel® Smart Response Technology caching bit (bit 9) of the ‘Intel RST
Feature Capabilities’ register in the SATA controller MMIO space enabled (set to 1; the
default setting is 0).
System BIOS with SATA mode set to RAID enabled
System BIOS that is PCI 3.0 and PMM (POST Memory Management) compliant to allow the
OROM to handle dirty shutdowns of Accelerated disks/volumes;
o Legacy OROM: BIOS PMM must be able to allocate a minimum of 130MB of temporary,
non-aligned, extended memory to the Legacy OROM
o UEFI Driver: BIOS UEFI global boot services must be able to allocate 130 MB of
temporary, non-aligned, extended memory to the UEFI Driver.
Intel® RST driver and OROM installed from the production Intel® RST 10.5 version release or
later production releases
Flash part must budget the following space for the Intel® RST OROM:
o Image file size ~119KB
o Runtime size ~41.5KB
For an SSD to meet the Intel® Smart Response Technology ”Cache SSD” criteria it must have the
following:
Must be from a faster Bus Protocol Group than the device/raid member devices it is accelerating.
Examples:
o 1 RST PCIe AHCI/NVMe SSD can accelerate 1 single SSD/mSATA/HDD
o 1 RST PCIe AHCI/NVMe SSD can accelerate SSD/mSATA/HDDs in a RAID 0/1/5
configuration.
o 1 SSD can accelerate 1 HDD
o 1 SSD can accelerate HDDs in a RAID 0/1/5/10 configuration.
16 GB minimum capacity (1000x1000 = 1 MB), as calculated by most SSD vendors,
OR
14.9 GB minimum capacity (1024x1024 = 1 MB), as calculated and used by the Intel® RST UI
and configuration utilities (16,000,000,000 bytes is approximately 14.9 GB when 1 GB = 1024x1024x1024 bytes)
The Intel® RST product recognizes a device as an SSD if its IDENTIFY DEVICE data word 217 = 0x01 (non-rotating media)
Note: There is no ability to enable Acceleration while in the OROM UI. Acceleration must be
enabled either in the Intel® RST UI or CLI 32/64 utilities during OS runtime or the RCfgSata CLI
utility (as described below) if required to do so pre-OS. The OROM UI only allows disabling of
Acceleration.
1. Select the required HDD(s) needed for the type of system configuration
2. Locate SATA port(s) and attach HDD(s). (Note the port number of the pass-through HDD to be
used for the OS system disk.)
3. Install any other HW peripheral desired for the system configuration (e.g. ODD)
If Using RCfgSata
1. Copy the RCfgSata tool from the RST 13.0 release or newer for PCIe SSD cache devices to a
UEFI bootable media (e.g. USB thumb drive) and attach the media device to the targeted new
system (for non-RST PCIe SSD systems the tool also runs in DOS)
At the command line type: rcfgsata /c Sys_Vol /ds 0 (where ‘Sys_Vol’ is the logical name of the single pass-through disk and ‘0’ is the Internal configured port where the single physical disk is located)
At the command line type: rcfgsata /c Cache_Dev /ds 3 (where ‘Cache_Dev’ is the logical name representing the “Cache SSD” and ‘3’ is the Internal configured port location of the physical SSD). (Note: if the SSD is larger than 64 GB, the following command will be required: rcfgsata /c Cache_Dev /ds 3 /s 14.9, where /s is the size of the caching region in GB; 14.9 GB (1024 MB = 1 GB) or 16 GB (1000 MB = 1 GB) is the minimum and 64 GB is the maximum supported size.)
OR
1. User data region: rcfgsata /c SSD_UserData /ds 3 /s 4 (where 4 is the size, in GB, of the user data region of the SSD)
2. Cache region: rcfgsata /c Cache_Dev /ds 3 (the cache region defaults to the remaining capacity of the SSD; 64 GB is the maximum size and 16 GB is the minimum)
At the command line type: rcfgsata /accel Sys_Vol Cache_Dev max (where ‘Sys_Vol’ is the single disk to be Accelerated, ‘Cache_Dev’ is the “Cache SSD”, and ‘max’ indicates the Acceleration mode is ‘Maximized’).
Note: The disk/volume is not actually Accelerated until the system is booted and the
pre-configured Accelerated disk/volume and the pre-configured “Cache SSD” are
enumerated by the Intel® Rapid Storage Technology driver.
5. Reboot; the ‘New System’ is now prepared and ready for Windows OS installation to a pre-
configured Accelerated pass-through disk.
Locate the HDD that will be used as the single pass-through disk that will have the OS installed and
Accelerated on the new system and attach it to an unused SATA port on the ‘Build System’
Locate the SSD that will be used as the “Cache SSD” on the new system and attach it to an
‘Internal’ configured SATA port on the ‘Build System’. (Note: the ‘Build System’ cannot already have
an SSD configured as a “Cache SSD”)
Boot the ‘Build System’ into Windows (log in as administrator) and launch a command prompt. If not already done, copy a version of the RSTCLI32/64 application from the RST 11.5 Release or later to a directory on the ‘Build System’.
Power down the ‘Build System’ and physically remove the “Cache SSD” and the Associated
single pass-through disk that are targeted for the new system. (Note: To remain valid, the
preconfigured “Cache SSD” and Accelerated HDD must be installed as a pair in a system that
has no Accelerated Disk/Volume or “Cache SSD” already installed.)
In the ‘New System’, install the “Cache SSD” and the Associated pass-through disk onto the
desired SATA ports (Note: the SSD must be installed to an Internal SATA port or a remapped
PCIe port).
The ‘New System’ is now prepared for OS installation to an Accelerated single pass-through disk
3. Click on the ‘Performance’ tab at the top of the UI and click the Enable acceleration link
a. If multiple SSDs on the build system, select the SSD that will be used in the ‘New
System’
b. Select the size to be allocated on the SSD for cache memory; options are:
i. 16 GB – minimum required
iii. Custom
c. Select the HDD to be accelerated that will be used in the ‘New System’
4. Power down the ‘Build System’. Remove the “Cache SSD” and the Associated single pass-
through disk that are targeted for the ‘New System’.
5. On the ‘New System’, install them into the desired SATA ports (Note: the SSD must be installed
to an Internal SATA port or a remapped PCIe port ).
6. The ‘New System’ is now prepared for OS installation to an Accelerated single pass-through disk.
1. Select the required HDD(s) needed for the type of system configuration
2. Locate SATA port(s) and attach HDD(s). (Note the port number of the pass-through HDD to be
used for the OS system disk.)
3. Install any other HW peripheral desired for the system configuration (e.g. ODD)
2. Locate the desired Internal configured SATA port of the ‘New System’ and install the SSD that
was previously configured on the ‘Build System’
22.9.2.1 OS Installation
1. Boot to the Windows OS installation media (ensure that the media with the RST driver, e.g. USB
thumb drive, is not installed in the system during boot)
2. When prompted to load driver, insert the RST driver installation media and click the Load Driver
link.
4. The disk/volume with Acceleration enabled should now be available in the list of storage drives.
5. Select the Acceleration enabled disk/volume and continue with the normal OS installation
procedure from this point.
Once the installation is complete nothing else is required to enable Acceleration. Acceleration
should be enabled on the disk/volume that was configured via the RCfgSata/RSTCLI 32/64 tool or
RST UI.
Note: The following procedures are for setting up a single disk (RAID mode pass-through disk) in
Acceleration mode with an OS image pre-configured and installed. OEMs need only move and install
the imaged Accelerated disk along with its Associated “Cache SSD” as a pair to a properly configured
computer system. The system will boot up with the Acceleration mode enabled.
Note: The HDD and SSD must remain together at all times as an Accelerated pair.
2. HDD (targeted for Acceleration and OS image) and SSD (targeted for “Cache SSD”)
3. Media with bootable WinPE environment (USB thumb drive, ODD, etc) that has:
a) Intel® RST driver (compatible with the RAID OROM on the ‘Build Platform’); the driver must be loaded during WinPE boot or loaded using the drvload command (see the sketch after this list).
b) RSTCLI/RSTCLI64 executable
c) The OS image
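If the RST driver was not injected during WinPE boot, it can be loaded manually with drvload before running RSTCLI; a minimal sketch (the media drive letter and .inf file name are illustrative and depend on the RST release in use):
drvload D:\RSTDriver\iaStorAC.inf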
3. Boot into WinPE with Intel® RST driver, RSTCLI/RSTCLI64, and OS image
2. Enable Acceleration on the single HDD: at the command prompt, type rstcli --accelerate --setAccelConfig --disk-to-accel 0-Z-0-0 --mode maximized (where Z = the SATA port location of the HDD to be Accelerated)
2. Once the imaging process has completed, the Accelerated pair (HDD + SSD) can be moved to a
supported platform to build a fresh new system with Intel® Smart Response Technology
already pre-configured
a. Make sure that along with the HDD for the system disk, that an SSD is installed in the
system
b. System must meet chipset and CPU requirements for Intel® Smart Response
Technology
c. System BIOS is properly configured with the Intel® RST RAID OROM and the SATA
controller set to RAID mode
3. Once the system boots for the first time after OOBE, automatically run a script in the background to
enable Acceleration of the system disk; see an example batch file in the next bullet:
Prepare system HW
1. At least one HDD (for the OS image) and one SSD (for the cache device; must be installed
to an Internal configured SATA port)
2. Ensure that system BIOS has a properly integrated Intel® RST OROM that supports Intel®
Smart Response Technology. Set the SATA controller to RAID mode via the system BIOS
1. Prepare the final master OS image with Intel® RST RAID driver (the OS image system must
have been built on a system with the RAID mode set in the BIOS so that the Intel® RST
RAID driver is the installed storage driver)
2. Transfer the RST RAID-enabled master OS image to the HDD in the new system (use the
current OEM process, e.g. Ghost)
1. Because the RSTCLI 32/64 utilities are not supported in the end-user environment, the
script should delete the utility upon completion or the utility shall be located on the
computer in a location that is not accessible to the end user
2. Once the system boots for first time after OOBE, automatically run a script in the
background to enable Acceleration of the system disk; see an example batch file in the next
bullet:
exit
EOF
The first command line prepares the SSD on port 1 as the cache device; 0-1-0-0 = the SATA
port location of the SSD, and 24 = the size of the cache volume on the SSD.
The second command line accelerates the system disk on port 0 (0-0-0-0) in Maximized
mode using the SSD on port 1 (0-1-0-0) as the cache device;
Once the script using the 2 bolded command lines above completes, the system should be in
Maximized Acceleration mode.
The Intel® Smart Response Technology caching solution is a learning solution. This means that
when the cache is initially enabled, there is little to no data being cached. This initially results in
many cache misses causing the host to have to access the HDD for I/O requests. However, over
time, the caching policies of Intel® SRT place data in the cache that is accessed often. So after some time the cache will be loaded with often-used data, giving the system its optimal or maximum performance configuration.
The problem with this is that when a new system is first used by an end user, the system will have
no data cached and thus the performance gains expected of the cache will be small. Depending on
use, it could take days of use before the end user starts seeing the expected performance as the
cache learns what data should be stored in the cache.
To overcome this initial poor performance gain, the system can be shipped in the box with the cache already loaded with user data. This could be accomplished in one of two ways: the OEM could configure the system and then spend weeks using it so that it learns what data to load into the cache, or the OEM could preload data into the cache that is likely to be used immediately by the end user.
The following sections describe the process that the Intel® SRT solution uses to pre-load the cache so systems can be shipped in the box ready for optimal out-of-box performance. They detail the steps for setting up a system with an Accelerated single pass-through disk with a pre-loaded cache.
22.11.1 Requirements
1. Must meet all Intel® Smart Response Technology requirements
22.11.2 Process
The NV cache loading process is a three-stage process (assuming that SRT caching has already been
enabled in Enhanced mode):
1. Setup system for Cache loading: Modify the SRT default Caching policy via the Registry and
reboot.
3. Return the system to the SRT default Caching policy and cleanup (remove files and shutdown)
1. Configure an Accelerated system in Enhanced mode and install the OS and desired applications
that will be shipped on this system
2. Download NvCacheScripts archive (.zip) and RAID configuration utility (RSTCLI.exe) from Intel
VIP site (use same kit as the Intel® RST driver and OROM/UEFI driver that will be shipping on
the systems)
3. If it does not already exist, create the C:\Intel directory on the system disk
4. Unzip the archive (open it and select to extract it to the default directory which should be
C:\Intel\NvCacheScripts\). The following files should be extracted into the directory:
cache_cleanup.reg
cache_insert.reg
Readme.txt
step1_RegistrySetup.bat
step2_LoadNVCache.bat
a. Copy RSTCLI.exe into the directory
5. Open a command prompt window (needs to be run as administrator in Windows 8) and change
directories to C:\Intel\NvCacheScripts\
6. Run the script step1_registrysetup.bat (this will change the caching policy for cache loading
and then reboot the system for the new policy to take effect)
Note: this script needs to be edited or replaced to fit your specific requirements and
system configuration
2. The system now is ready to be shipped with the cache loaded for optimal performance out of the
box
1. The OEM has already configured the system in Enhanced Acceleration mode
4. The CLI tool, the scripts, and registry editor files are located in directory
C:\Intel\NvCacheScripts\
step1_RegistrySetup.bat
REM ************************************************
REM * PART 1: *
REM ************************************************
REM Edit the registry to set the system up for Nv Cache content
REM insertion and Startup Menu. This step in the script will
REM automatically call the cache_insert.reg file to update the registry.
REM ************************************************
regedit /s C:\Intel\NvCacheScripts\cache_insert.reg
REM ************************************************
REM * PART 2: *
REM ************************************************
REM ************************************************
REM This step will reboot the system for the cache insert policy
REM change to take effect and will automatically start the cache
REM loading script to begin copying data to the cache.
REM ************************************************
shutdown -f -r -c "Rebooting for NV Cache loading"
EOF
cache_insert.reg
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"NvCachePolicy"=dword:0
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Rstcli"="C:\\Intel\\NvCacheScripts\\step2_LoadNvCache.bat"
step2_LoadNvCache.bat
REM ************************************************
REM This section is used for content that needs to
REM have the longest eviction path possible.
REM This will put the content into the BOOT LRU.
REM
REM Begin loading user application data into NV cache.
REM Make sure the drive selected is Accelerated i.e.
REM "C:\".
REM ************************************************
C:\Intel\NvCacheScripts\rstcli.exe --accelerate --loadCache C:\windows\system32\winevt\logs\*.evtx
REM ************************************************
REM To make sure the content in the BOOT LRU doesn't
REM get evicted, a time delay can be used.
REM This is only needed if the user content is >1.7GB
REM and can be loaded in less than 60s.
REM The time can be "fine-tuned" to adjust for a
REM particular system. Below is the max to ensure
REM BOOT LRU content doesn't get evicted.
REM ************************************************
timeout 60
REM ************************************************
REM Finish loading content from the Accelerated disk into NV cache.
REM
REM ************************************************
C:\Intel\NvCacheScripts\rstcli.exe --accelerate --loadCache "C:\Program Files" --recurse
EOF
cache_cleanup.reg
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"NvCachePolicy"=-
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Rstcli"=-
EOF
Note: These are just examples that demonstrate the process. They can be edited by
each OEM for their specific requirement or can be used as a guideline to create OEM-
specific scripts.
Once the disks are duplicated they can be paired at a later time and installed in systems on the
manufacturing line. Any SSD and any HDD can be paired because the Intel® RST OROM/UEFI
Driver will fix the metadata on both the SSD and the HDD the very first time they are booted
together after the replication process.
Intel® Smart Response Technology for Hybrid Drives is an Intel® RST caching-related feature that
improves computer system performance and lowers power consumption for systems running on
battery power. SSHDs integrate NAND flash with traditional hard drive storage. SSHDs allow OEMs to configure computer systems with the NAND portion used as cache memory between the hard drive portion of the SSHD and system memory. This provides the advantage of having a hard disk drive for maximum storage capacity while delivering an SSD-like overall system performance experience.
Note: This feature is only supported on designated SKUs (see Section 21.1.2).
23.1 Overview
OROM Version
Driver Version 12.0.0 PV 12.5.0 PV 13.0.0 PV 13.1.0 PV
12.0.0 PV X X X X
12.5.0 PV X O X X
13.0.0 PV X S O X
13.1.0 PV X S S O
X = this configuration is not supported
O = this configuration is supported and is optimal for the driver and OROM
S = this configuration is supported; however, it is limited to the features of the driver that was
originally released with that OROM version. E.g., if the 11.0.0 PV driver is installed/updated on a
system running the 10.5.0 PV OROM, the system will be limited to the features of the 10.5.0 OROM.
Any new features associated with the 11.0.0 PV release will not be enabled with this
configuration.
Architectural Limitations:
5MB unallocated disk space required at the max LBA of the disk*
*See “Requirements and Limitations” under Dual Drive Configuration for details.
System Requirements:
Intel® Smart Response Technology Hybrid Accelerator is enabled when all of the following
conditions are met:
SSHD Requirements:
Beginning with the Intel® Rapid Storage Technology 13.0 driver release version, for the SSHD to
meet the Intel® Smart Response Technology ”Cache SSD” criteria, it must have the following:
8 GB (1GB = 1000MB) minimum non-volatile cache capacity.**
1TB maximum non-volatile cache capacity.
SSHD needs to report Hybrid Hinting support capability (via ATA Identify command return data
Word 78 bit 9 =1)
On systems in RAID mode, support for this feature is limited to pass-thru disks (not members
of an array).
Intel® RST configuration utilities do not support reporting the enablement of this feature. The
Intel® RST UI does support reporting whether the feature is enabled.
When implementing both Intel® Rapid Start and Intel® Smart Response Technology with the RST 13.0 release or later, the Rapid Start partition is dynamically created. The size of the dynamically created partition depends on the size of the cache portion of the SSHD and the DRAM on the system:
- For an SSHD NV cache size of < 18.6GB, the RST driver will dynamically reserve space for the Rapid Start feature in the Cache Volume equivalent to the size of the DRAM on the system, up to 4GB.**
- For an SSHD NV cache size of > 18.6GB, the RST driver will dynamically reserve space for the Rapid Start feature in the Cache Volume equivalent to the size of the DRAM on the system, up to 8GB.
For example, on a system with 8GB of DRAM and a 16GB NV cache (< 18.6GB), the reserved space is capped at 4GB.
The assumption is that the OEM/user has properly met all the requirements for Intel® Rapid Start
Technology. There are NO instructions in this document for configuring the platform for
Intel® Rapid Start Technology. For detailed configuration and setup requirements for Intel®
Rapid Start Technology please contact your Intel representative for assistance.
1. Select an SSHD that has a minimum NV cache capacity of between 4GB and 1TB.
2. Locate SATA port(s) and attach SSHD(s). (Note the port number of the pass-through SSHD to
be used for the OS system disk.)
3. Install any other HW peripheral desired for the system configuration (e.g. ODD)
23.3.1.4 OS Installation
1. Boot to the Windows OS installation media (ensure that the media with the RST driver, e.g. USB
thumb drive, is not installed in the system during boot)
2. When prompted to load driver, insert the RST driver installation media and click the Load Driver
link.
4. The disk with Hybrid Drive Acceleration enabled should now be available in the list of storage
drives.
5. Select the Hybrid Acceleration-enabled disk and continue with the normal OS installation
procedure from this point.
Once the installation is complete nothing else is required to enable Acceleration. Acceleration should
be enabled on the disk.
The Intel® Smart Response Technology caching solution is a learning solution. This means that
when the cache is initially enabled, there is little to no data being cached. This initially results in
many cache misses causing the host to have to access the spindle portion of the SSHD for I/O
requests. However, over time, the caching policies of Intel® SRT place data in the cache that is accessed often. So after some time the cache will be loaded with often-used data, giving the system its optimal or maximum performance configuration.
To overcome this initial poor performance gain, the system can be shipped in the box with the cache already loaded with user data. This could be accomplished in one of two ways: the OEM could configure the system and then spend weeks using it so that it learns what data to load into the cache, or the OEM could preload data into the cache that is likely to be used immediately by the end user.
The following sections describe the process that the Intel® SRT solution uses to pre-load the cache so systems can be shipped in the box ready for optimal out-of-box performance. They detail the steps for setting up a system with a Hybrid Accelerated pass-through disk with a pre-loaded cache.
23.4.1 Requirements
1. Must meet all Intel® Smart Response Technology requirements for Hybrid Drive Acceleration.
2. Intel® RST 12.5 production release or later (If in RAID mode, this requires the 12.5 or newer
OROM)
3. The RSTCLI Pre-warming tool and scripts located in the SW kit on VIP
- rstcli64.exe
- SshdScripts.zip
23.4.2 Process
The SSHD cache loading process is a two-stage process:
1. Install the OS and desired applications that will be shipped on this system
3. Download SshdScripts archive (.zip) and RAID configuration utility (RSTCLI.exe) from Intel
VIP site (use same kit as the Intel® RST driver and OROM/UEFI driver that will be shipping
on the systems)
4. If it does not already exist, create the C:\Intel directory on the system disk
7. Run the script step1_registrysetup.bat (this will change the caching policy for cache
loading and then reboot the system for the new policy to take effect)
Note: this script needs to be edited or replaced to fit your specific requirements
and system configuration
*Rapid Start/Dynamic Cache Sharing feature support removed for Broadwell Mobile and
Skylake Platforms and newer
4. The system now is ready to be shipped with the cache loaded for optimal performance out of the
box
5. The OEM has already configured the system in Enhanced Acceleration mode
7. The CLI tool, the scripts, and registry editor files are located in directory
C:\Intel\SshdScripts\
REM ************************************************
REM * PART 2: *
REM ************************************************
REM ************************************************
REM This step will reboot the system for the cache insert policy
REM change to take effect and will automatically start the cache
REM loading script to begin copying data to the cache.
REM ************************************************
shutdown -f -r -c "Rebooting for SSHD Cache loading"
EOF
cache_insert.reg
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"NvCachePolicy"=dword:0
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Rstcli"="C:\\Intel\\SshdScripts\\step2_LoadSshdCache.bat"
step2_LoadSshdCache.bat
REM ************************************************
REM This section is used for content that needs to
REM have the longest eviction path possible.
REM This will put the content into the BOOT LRU.
REM
REM Begin loading AOAC content into cache.
REM Make sure the drive selected is Accelerated i.e.
REM "C:\".
REM ************************************************
C:\Intel\SshdScripts\rstcli.exe --accelerate --loadCache C:\windows\system32\winevt\logs\*.evtx
REM ************************************************
REM To make sure the content in the BOOT LRU doesn't
REM get evicted, a time delay can be used.
REM This is only needed if the user content is >1.7GB
REM and can be loaded in less than 60s.
REM The time can be "fine-tuned" to adjust for a
REM particular system. Below is the max to ensure
REM BOOT LRU content doesn't get evicted.
REM ************************************************
timeout 60
REM ************************************************
REM ************************************************
REM Clean up the registry and shutdown
REM ************************************************
regedit /s C:\Intel\SshdScripts\cache_cleanup.reg
EOF
cache_cleanup.reg
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorA\Parameters\Device]
"NvCachePolicy"=-
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Rstcli"=-
EOF
Note: These are just examples that demonstrate the process. They can be edited by each
OEM for their specific requirement or can be used as a guideline to create OEM-specific
scripts.
This feature is supported in RAID mode for SRT Dual Drive configurations, and both RAID and AHCI
mode for SSHDs when Hybrid Drive Acceleration is enabled.
The Intel® Smart Connect Technology‡ feature, version 4.1.2308 or later, must be enabled (for information on installation, configuration, and usage of this feature, please refer to its documentation, CDI/IBP document #482930, or contact your Intel® Smart Connect Technology representative).
The Intel® Smart Connect Technology (Intel® SCT) is a platform feature in which software on the platform, in combination with NIC (LAN/WLAN/WWAN) features, provides content updates during periods when the user is away from the PC and the PC is in a power saving mode. In addition, if the system has the Intel® Rapid Storage Technology driver installed with the Intel® Smart Response Technology caching feature enabled in Maximized mode and the pass-through HDD
supports the PUIS feature of the ATA specification, then there is an additional power savings benefit
during the periods of content update. The PUIS feature allows the HDD to stay in a powered-down
state (the drive does not spin-up) during the periods of content update.
‡The Intel® Smart Connect Technology feature is not a part of the Intel® RST SW suite.
Contact your Intel field representative for more details regarding this feature.
Note: This feature is only supported on the designated RAID-enabled SKUs of the chipsets for SRT
Dual Drive.
*Note: Only HDDs that DO NOT require a jumper to emulate the feature are supported.
Step 2: The Intel® RST driver checks for the following and, if all are present, enables PUIS support per the Agent’s request:
The OS system disk (boot volume) is a single pass-through hard disk drive (HDD or SSHD), AND
The HDD/SSHD supports the ATA PUIS feature (jumpered drives are not supported), AND
The Dual Drive configuration system disk is Accelerated in Maximized mode, AND
The HDD/SSHD is writeable
Registry key: A Windows registry key is provided to disable the PUIS feature.
When the key is present and set to ‘1’, the RST driver does the following:
Disables the PUIS feature on all enumerated HDDs that support the PUIS feature
Enumerates all HDDs as non-PUIS HDDs
Returns an ‘Invalid’ status to the Intel® SCT Agent
Thereafter has no special handling related to the PUIS feature
Key: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStor\Parameters\]
Value: DisablePuis, Type: dword, Values: 0 or 1
Key: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStor\Parameters\Device]
Value: AoacTimeout, Type: dword, Default: 0xA, Description: Change to 0x3C(h) for SSHD
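For example, these keys can be set from an elevated command prompt as follows (a minimal sketch; the values shown are the ones described above):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\iaStor\Parameters" /v DisablePuis /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\iaStor\Parameters\Device" /v AoacTimeout /t REG_DWORD /d 0x3C /f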
24.1.2.5 Performance
HDD Spin-up Frequency: During the period when the Intel® Smart Connect Technology Agent cycles the system to the S0-ISCT power state to synch the application data, the Intel® RST driver is expected to keep the PUIS HDD in the ATA Power Standby Mode for approximately 90% of the synch cycles.
Note: This feature is only supported on the designated RAID-enabled SKUs of the chipsets for SRT
Dual Drive.
25.1 Introduction
The Intel® Rapid Storage Technology UI is a Windows*-based application that provides users with monitoring and management capabilities for the Intel® RST storage subsystem. It offers a wide range of monitoring and management activities for the Intel® RST RAID subsystem (in AHCI mode, no management or monitoring capabilities are offered by the UI application).
The Intel® Rapid Storage Technology (RST) UI requires the Microsoft .NET 4.5 framework beginning with the Intel® RST 13.0 release. For prior releases, the RST UI connects to and interoperates with the Microsoft .NET 3.0, 3.5, and 4.0 frameworks.
Redundant Array of Independent Disks (RAID) refers to multiple independent disks combined to
form one logical drive. The main objectives of this technology are to improve storage system
performance and data protection, and to increase fault tolerance.
· TRIM
This feature provides support for all solid state disks (SSDs) in your storage system that
meet the ATA-8 protocol requirements and are not part of an array. This feature
optimizes write operations, helps devices reduce wear, and keeps the unused storage
area on devices as large as possible.
Beginning with the Intel® 7 Series chipset the driver supports TRIM on SSDs in a RAID
0 configuration.
AHCI-enabled systems
Advanced Host Controller Interface (AHCI) is an interface specification that automatically allows the
storage driver to enable advanced SATA features, such as Native Command Queuing and Native Hot
Plug, on the SATA disks connected to your computer. The following features are supported on AHCI-
enabled systems:
· Native command queuing
· Hot plug
· Disks of more than two terabytes (if that size is supported by the RST UEFI pre-OS
driver or legacy OptionROM)
· Password-protected disks
· ODD power optimization (Microsoft Windows Vista* and higher)
· Dynamic Storage Acceleration
· Hybrid Hinting
In this section, we describe each of these RAID configuration elements and explain how they relate
to each other.
· Array
An array is a collection of two or more SATA disks in a RAID configuration and is the highest
element in the hierarchy of a storage system. Once a volume is created, the disks you used
to create that volume form an array. Refer to the Creating Additional Volumes topic for
details on how you can create two volumes across the same disks. An array can include one
or two RAID volumes if the hardware allows it.
· Volume
A volume is the storage area on two or more disks whose type dictates the configuration of
the data stored. If you created a volume for data protection, then your storage system may
include a RAID 1 volume spanning two SATA disks, which mirrors data on each disk.
· Disks
A disk (i.e., hard disk or hard disk drive) physically stores data and allows read/write data
access. If a disk is used to create a volume, it becomes an array disk because it has been
grouped with other disks to form an array.
The storage system can also include ATAPI devices, which cannot be used to create a volume. These
are mass storage devices that use the ATA Packet Interface, such as a CD-ROM, DVD/Blu-ray disc, or tape
drive.
25.1.2.2 Navigation
The application is organized into five main areas depicted by the top navigation buttons: Status,
Create, Manage, Accelerate, and Preferences. Depending on your computer's configuration and
available hardware, Create and Accelerate may not be available.
Status
The 'Status' area provides a general state of health of your storage system. If a status other than
normal is reported, the Manage sub-section will be available to provide you with basic information
and actions links necessary to return the status to normal.
Create
The 'Create' area allows you to create different types of volumes to protect data, enhance disk
performance, optimize disk capacity, or create a custom volume to combine benefits.
Note
Manage
The 'Manage' area combines the logical and physical view of your storage system. The area displays
detailed information about each element that is part of the storage system, such as volumes and
disks; the storage system view shows how the selected element relates to others. Each element has
its own 'Manage' area which is accessible by clicking any element displayed in the storage system
view under 'Status' or 'Manage'.
The 'Manage' area also provides the actions available for the selected element, such as renaming a
volume or changing the volume type.
Accelerate
The ‘Accelerate’ area allows you to manage the cache memory configuration using a non-system
solid state disk as a cache device. If the cache is reported in an abnormal state, detailed information
and troubleshooting actions will display. The Acceleration View is specific to the ‘Accelerate’ area and
only displays in this location.
Preferences
The 'Preferences' area allows you to customize system settings by enabling the display of the
notification area icon, and by selecting the type of notifications that you want the application to
display.
Acceleration View
The Acceleration View is a graphical representation of the acceleration configuration, and only
displays the devices (disks and volumes) included in this particular configuration. You can use this
view to access the ‘Manage’ page specific to each represented device by clicking the storage system
element for which you want more detailed information.
The storage system is reported in a warning state and data may be at risk. We
recommend that you open the application to review and resolve the reported
issues.
The storage system is reported in an error state and data may be lost. We
recommend that you open the application to review and resolve the reported
issues as soon as possible.
Note
To hide the notification area icon, deselect ‘Show the notification area icon’ under ‘System
Preferences’.
Reviewing notifications
· Hover over the icon at any time to view the storage system status or the progression of an
operation.
· Small pop-up windows will display for a short time to notify you of specific events, such as a
missing disk or the completion of an operation.
· Open the application to view more details about storage system events in the 'Status' or
'Manage' areas.
Normal
Reports that the system is functioning as expected: SATA disks are present and connected to the computer. If
an array is present, volume data is fully accessible.
The Create subsection is only available if the storage system meets the minimum requirements to create a
volume. Depending on the available hardware, you may be given the option to create a volume to protect data,
optimize the disk performance, or create a custom volume.
The Manage subsection is only available if the storage system reports atypical conditions in a normal state.
Typically, details or a recommended action are provided to help you rectify any storage system conditions. For
example, if a recovery volume was reported as read-only, we would inform you that disk files must be hidden
prior to requesting updates.
The Accelerate subsection is only available if a solid state disk can be used as a cache device and an eligible
disk or volume can be accelerated. This area typically provides the option to enable acceleration and reports the
cache and accelerated device health state, as well as the current acceleration mode.
Warning
Reports that storage system data may be at risk due to a problem detected on one or more SATA disks.
The Manage subsection displays any SATA disk or volume states reported by the storage system that may
require your attention in order to keep data fully protected and accessible. Details or a recommended action are
provided to help you fix any storage system problems. For example, if the master disk in a recovery volume is
reported as failed, we would recommend that you rebuild the volume to another disk.
Note
In this state, we recommend that you back up any accessible data before taking action.
In this state, the Accelerate subsection typically reports that the cache volume is failing possibly because
the solid state disk is reported at risk of failing (smart event). Details and a recommended action are provided to
help you fix the problem reported on the solid state disk.
Error
Reports that storage system data may be lost due to a problem detected on one or more SATA disks.
The Manage subsection displays any SATA disk or volume states reported by the storage system that
require your immediate attention in order to keep data fully protected and accessible. Details or a recommended
action are provided to help you fix any storage system problems. For example, if the data on a RAID 1 volume
appears inaccessible due to a failed array disk, we would recommend that you rebuild the volume to another
disk.
Note
In this state, we recommend that you back up any accessible data before taking action.
Note:
Hovering over a designated element in the storage system view provides a snapshot of its properties.
Clicking allows you to access and manage its properties.
Disk state: Recommended action
An internal disk is reported at risk or incompatible: Back up your data and replace the disks as soon as possible. Refer to the Troubleshooting section for more information.
An external hard disk is reported at risk or incompatible: Back up your data and refer to the Troubleshooting section for more information.
An internal solid state disk is reported as being at risk or incompatible: Back up your data and refer to the Troubleshooting section for more information.
An external solid state disk is reported at risk or incompatible: Back up your data and refer to the Troubleshooting section for more information.
An internal disk is reported offline: Unlock all array disks to unlock the volume. Refer to the Troubleshooting section for more information.
An external disk is reported offline: Unlock all array disks to unlock the volume. Refer to the Troubleshooting section for more information.
An internal disk is reported normal and locked: Unlock the disk to access more options.
An external disk is reported normal and locked: Unlock the disk to access more options.
An internal hard disk is reported failed: Refer to the Troubleshooting section for more information.
An external hard disk is reported failed: Refer to the Troubleshooting section for more information.
An internal solid state disk is reported as failed: Refer to the Troubleshooting section for more information.
An external solid state disk is reported as failed: Refer to the Troubleshooting section for more information.
Volume states
For each volume type (RAID 0, Single-disk (cache), RAID 1/Recovery, RAID 5, RAID 10), the possible states are Normal, Degraded, and Failed. For volumes reported as Degraded, refer to Troubleshooting Degraded Volumes and Caching Issues for more information (the Degraded state is not applicable to RAID 0). For volumes reported as Failed, refer to Troubleshooting Failed Volumes and Caching Issues for more information.
Warning
Performing this action will permanently delete any existing data on the disks used to create a
volume, unless you choose to keep the data when selecting array disks. Backup all valuable data
before starting this process.
Based on the first disk selected, some disks may become grayed out if one or more requirements
are not met. Selecting a different disk generally helps re-enable disks that were previously grayed
out.
If the first selection is a system disk, any additional SATA disks selected must be of
equal or greater size to ensure that all the system files are migrated to the new volume.
If the first selection is a non-system disk, and a system disk is then selected, the latter
must be of equal or smaller size to ensure that all the system files are migrated to the
new volume.
A system volume cannot be greater than 2 TB. If your first selection is a system disk,
the total size of the other disks shall not allow the volume size to exceed 2 TB.
Exception: If you are creating a volume using disks that have no existing data, and your
operating system is a 64-bit Edition, the application will allow a volume to be greater
than 2TB.
The SATA disks used to create a volume must have the same type of connection,
internal or external. An internal disk shall not be paired with an external disk to create a
volume. Some systems will support mixed connection types.
Depending on the input/output (I/O) controller hub that your computer is using and the hardware
connected to the system, some volume types may not be enabled in the selection list. Refer to the
Readme file located in the Program Files directory for this application or to the Device Manager to
Note
Intel® 5 Series Chipset applies to both desktop and mobile platforms as well as all later chipsets.
Note
No other volumes can be present on the system. The master disk must include
100% of the available disk space and must be less than 1.3125 TB
Application: Critical data protection for mobile systems; fast restoration of the master disk to a previous or default state. Available in specific mobile configurations.
RAID 1
Disks required: 2
Advantage: Full data redundancy and excellent fault-tolerance; increased read transfer rate.
Disadvantage: Storage capacity is only as large as the smallest disk; slight decrease in write transfer rate.
Application: Typically used in workstations and servers to store critical data. Available in specific mobile configurations.
RAID 0
Disks required: 2 to 6
Advantage: Increased data access and storage performance; no loss in data capacity.
Disadvantage: No data redundancy (if one disk fails, all data on the volume is lost).
RAID 5
Disks required: 3 to 6
Advantage: Data redundancy; improved storage performance and capacity; high fault-tolerance and read performance.
Application: Good choice for large amounts of critical data, such as file and application servers; Internet and Intranet servers. Available in mobile configurations that include the Intel® 5 Series Chipset, which supports up to six SATA ports.
RAID 10
Disks required: 4
Advantage: Combines the read performance of RAID 0 with the fault-tolerance of RAID 1, resulting in increased data access, full data redundancy, and increased storage capacity.
Application: High performance applications and high load database servers requiring data protection, such as video editing. Available in mobile configurations that include the Intel® 5 Series Chipset, which supports up to six SATA ports.
Once the volume type is selected, you are ready to configure your volume.
Recovery volume
1. Type a new volume name if you want to change the default name.
2. Select the master disk.
3. Select the recovery disk.
4. Select a different update mode, if desired.
5. Click ‘Next’. This button will not be active until all the required selections have been made.
RAID Volume
1. Type a new volume name if you want to change the default name.
2. Select the required number of disks.
3. Select the disk from which you want to keep data, if desired. You can only keep data from one
disk. If you want to keep data from more than one disk, you must back up all valuable data
prior to creating a volume.
4. Click ‘Next’. This button will not be active until all the required selections have been made.
Note
Currently, the application does not allow the creation of greater than 2TB volumes where the source
disk is greater than 2TB and data on that disk is preserved (e.g. system volume). Target disks can
be greater than 2TB but such volumes cannot. This limitation results from the lack of GPT partition
scheme support. Note that volumes greater than 2TB that include member disks greater than 2TB
are supported as long as array disks are unpartitioned or that no data is preserved at volume
creation.
If you are creating a two-disk volume for data protection or disk optimization from 'Status', you can
follow the procedure provided below.
Warning
You can only keep existing data from one of the disks you select to create a volume. We
recommend that you backup all valuable data before proceeding.
If you perform a driver upgrade or downgrade while the data migration is in progress and then restart your computer, the driver will not be able to recognize the volume or the data on it. If you are migrating a system volume, you will not be able to restart your system because the operating system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that last performed driver update, and then restart the computer to return to a normal state.
If the size of the new volume is larger than the size of the source drive, the following steps apply:
6. Once the migration status reports 100% complete, restart your computer for the operating
system to recognize the new volume size.
7. Create a new partition or extend the existing partition to utilize the new volume space using
Windows Disk Management*. If your system is running Microsoft Windows XP*, you may only have the
option to create a new partition.
Note
To open Windows Disk Management, click Start, right-click My Computer, select Manage, then in the
console tree select Disk Management.
You can add a volume to an existing RAID array by creating another volume that uses the available
space on the array. This feature allows you to combine different volume types and their respective
benefits. For example, a configuration with RAID 0 and RAID 1 on two SATA disks provides better
data protection than a single RAID 0 and higher performance than a single RAID 1.
The first RAID volume occupies part of the array, leaving space for the other volume to be created.
After creating the first volume with an array allocation set to less than 100% in the Configure
Volume step, you will be able to add a second volume to that array.
Note
The configuration is only available if the array allocation for the first volume created is less than
100%, and space is available on that array. The application currently supports an array to include a
maximum of two RAID volumes.
Visit our Online Support for additional information on RAID type combinations for each I/O controller
hub.
You can choose to create two or more volumes on two different arrays, as long as the volume
requirements are met.
1. Click ‘Create’ or 'Create a custom volume' under 'Status'.
2. Select the volume type. Selecting a volume type in the list updates the graphical
representation to provide a detailed description of that type.
3. Click ‘Next’.
4. Select 'No' in order to add a volume to a new array.
5. Select the required number of disks.
6. Select the disk from which you want to keep data, if desired. You can only keep data from
one disk. If you want to keep data from more than one disk, you must back up all valuable
data prior to creating a volume.
7. Make any necessary changes in the Advanced section.
8. Review the selected configuration. Click 'Back' or an option in the left pane if you want to
make changes.
9. Click ‘Next’.
10. Click 'Finish' to start the creation process.
Note: Systems with an RST OROM older than 9.5 will not recognize 2 volumes on a single array if
the RST Windows driver version is 9.5 or newer.
The 'Manage' area also provides the actions available for the selected element, such as renaming a
volume or changing the volume type.
You can manage arrays by clicking a selected array in the storage system view under 'Status' or
'Manage'. This allows you to review the properties and access all actions associated with that array,
such as adding a disk or increasing a volume size.
Available space Reports the unallocated space on the array that can be used.
Disk data cache Reports whether the data cache is enabled for all array disks.
Refer to Connecting a Disk under Managing Disks for more information on installing SATA disks on
your computer.
Warning
Any existing data on the available disk used to increase the array size will be permanently deleted.
Backup all the data you want to preserve prior to executing this action.
If you perform a driver upgrade or downgrade while the data migration is in progress and then
restart your computer, the driver will not be able to recognize the volume or the data on it. If you
are migrating a system volume, you will not be able to restart your system because the operating
system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that
last performed driver update, and then restart the computer to return to a normal state.
1. Under 'Status' or 'Manage', in the storage system view, click the array to which you want to
add a disk. The element properties are now displayed on the left.
2. Click 'Add disk'.
3. Select the disk you want to use to increase the array capacity.
4. Click 'Add Disk'. Caution: Once the data migration starts, the operation cannot be canceled.
5. Once the migration has completed, restart your computer for changes to take effect. Then
use Windows Disk Management* to increase the partition size on the volumes for which a
disk was added, or add another partition.
Note
To open Windows Disk Manager, click Start, right-click My Computer, select Manage, and then in the
console tree select Disk Management.
The first RAID volume occupies part of the array, leaving space for the other volume to be created.
After creating the first volume with an array allocation set to less than 100% in the Configure
Volume step, you will be able to add a second volume to that array.
Note
This configuration is only available if the array allocation for the first volume is less than 100% and
space is available on that array. The application currently supports a maximum of two RAID volumes
on a single array.
You can also complete this action using the ‘Create’ area.
1. Under 'Status' or 'Manage', in the storage system view, click the array to which you want to
add a volume. The array properties are now displayed on the left.
2. Click 'Create additional volume'.
3. In the 'Create Additional Volume' dialog, type a new name if you want to change the default
name.
4. Select the volume type, and then click 'OK'. Only the volume types available for the
current configuration will display. Refer to the table below for more information.
5. The page refreshes and the array now displays the additional volume.
Visit our Online Support for additional information on RAID type combinations for each I/O
controller hub.
After creating a volume with an array allocation set to less than 100% in the Configure Volume step,
you will be able to increase the volume size by the amount of available space on that array. If two
volumes are present on a single array and capacity expansion is possible, only the space available at
the end of the second volume will be used to increase the volume size.
Warning
If you perform a driver upgrade or downgrade while the data migration is in progress and then
restart your computer, the driver will not be able to recognize the volume or the data on it. If you
are migrating a system volume, you will not be able to restart your system because the operating
system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that
last performed driver update, and then restart the computer to return to a normal state.
Note
To open Windows Disk Manager, click Start, right-click My Computer, select Manage, and then in the
console tree select Disk Management.
Under Manage Array, the disk data cache is reported as enabled or disabled for all SATA disks in the
array. Under Manage Disk, the disk data cache is reported as enabled or disabled for a specific disk
that is part of that array. The option to change this setting is only available from Manage Array.
Warning
Enabling the disk data cache increases the cache size and the amount of cached data that could be
lost in the event of a power failure. The risk can be decreased if your computer is connected to an
uninterruptable power supply (UPS).
1. Under 'Status' or 'Manage', in the storage system view, click the array you want to manage.
The element properties are now displayed on the left.
2. In the Advanced section, click 'Enable' or 'Disable' depending on the option available.
3. Click 'Yes' to confirm.
4. The page refreshes and now displays the new setting.
You must be logged on as an administrator to perform the actions listed in this section.
You can manage existing volumes by clicking a volume in the storage system view under 'Status' or
'Manage'. This allows you to review the volume properties and access all actions associated with that
volume, such as renaming, changing type, and deleting.
A volume is an area of storage on one or more SATA disks used within a RAID array. A volume is
formatted by using a file system and has a drive letter assigned to it. The volume properties listed
below display to the left of the storage system view under 'Manage' and report values specific to the
element selected in the view.
Locked Indicates that at least one array disk is locked with a password. The volume is visible
because at least one other array disk is unlocked. Refer to Unlocking Password-
Protected Disks for instructions on unlocking disks.
Degraded Indicates that one array disk is missing or has failed. A RAID 0 volume cannot be in
this state because of the striping configuration.
Failed RAID 0 volume: indicates that one or more array disks are missing or have
failed.
RAID 1 volume: indicates that both array disks are missing or have failed.
RAID 5 or 10 volume: indicates that two or more array disks are missing or
have failed.
Incompatible Indicates that the volume was moved to another system that does not support the
volume type and configuration.
Inaccessible Indicates that data on the accelerated volume cannot be accessed because it is missing,
or that the accelerated volume data is not synchronized with the data on the cache
volume.
Locked Indicates that at least one array disk is locked with a password. The volume is visible
because at least one other array disk is unlocked. Refer to Unlocking Password-
Protected Disks for instructions on unlocking disks.
Incompatible Indicates that the volume was moved to another system that does not support the
volume type and configuration.
Power-saving mode Indicates that the computer is running on battery power. If the volume is in continuous
update mode, data updates are paused and will resume as soon as the computer is
reconnected to the power supply.
Data update needed Indicates that the recovery disk does not have a redundant copy of the data on the
master disk, and you should request an update.
Running off recovery disk Indicates that the recovery disk is the designated source drive in the volume.
Recovery disk read-only Indicates that the recovery disk files are accessed. In this state, data updates are not
available.
Verifying Indicates that the volume is being scanned to detect data inconsistencies.
Verifying and repairing Indicates that the volume is being scanned to detect data inconsistencies, and errors
are being repaired. This state does not apply to a RAID 0 volume because errors
cannot be repaired.
Migrating data Indicates that data is being reorganized on the volume. This state displays when a
system volume is created, the volume size is increased, or the type is changed to
a different RAID configuration.
Rebuilding Indicates that data redundancy is being restored across all disks associated with the
volume. A RAID 0 volume cannot be in this state because of the striping
configuration.
Recovering data Indicates that data on the master disk is being overridden by all the data on the
recovery disk. This state only applies to recovery volumes.
Updating data Indicates that the latest master disk changes are being copied to the recovery disk.
This state only applies to recovery volumes.
Acceleration mode Reports the acceleration mode for the disk or volume associated with the cache
device.
Enhanced: Indicates that the disk or volume is accelerated for optimized data
protection.
Maximized: Indicates that the disk or volume is accelerated for optimized input/output
performance.
Size Reports the total capacity of the volume in gigabytes (GB) in the storage system view
and in megabytes (MB) in the volume properties under Manage Volume.
Data stripe size Reports the size of each logical contiguous data block used in the volume for RAID 0,
5, and 10 volumes. The strip size is indicated in kilobytes (KB).
Write-back cache Reports whether the write-back cache feature is enabled for the volume.
Verification errors found Reports the number of inconsistencies found during the last volume data verification.
Block with media errors Reports the number of blocks with media errors found during the last volume data
verification.
Physical sector size Reports the size of each sector that is physically located on the disk.
You can change the name assigned to a volume present in your storage system at any time. The
name change will take effect immediately.
1. Under 'Status' or 'Manage', in the storage system view, click the volume that you want to
rename. The volume properties are now displayed on the left.
2. Click 'Rename'.
3. Type a new volume name, and then click 'OK'.
Note
Volume names are limited to 16 English alphanumeric and special characters including spaces, but
cannot include a backslash “\”.
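For illustration, the naming rule can be expressed as a small check like the sketch below; the function is hypothetical and not part of the Intel® Rapid Storage Technology software, and the allowed-character check is omitted for brevity.

#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: returns true if 'name' satisfies the rule described
 * above (1 to 16 characters, no backslash). */
static bool is_valid_volume_name(const char *name)
{
    size_t len = strlen(name);
    if (len == 0 || len > 16)
        return false;
    return strchr(name, '\\') == NULL;
}

int main(void)
{
    assert(is_valid_volume_name("Data Volume 1"));
    assert(!is_valid_volume_name("Backup\\Volume"));          /* backslash rejected   */
    assert(!is_valid_volume_name("A very long volume name")); /* longer than 16 chars */
    return 0;
}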
When a volume is reported as degraded because of a failed or missing disk, the disk must be
replaced or reconnected and the volume be rebuilt in order to maintain fault-tolerance. The option to
rebuild is only available when a compatible disk is connected, available and normal. If a spare disk is
available, the rebuild process will start automatically when a disk fails or is missing. For RAID 0
volumes, the rebuild process will start automatically only when one of its members is reported as at
risk.
Warning
Completing this action will permanently delete existing data on the new disk and make any other
volume on the array inaccessible. We recommend that you back up all valuable data before continuing.
Warning
Completing the action will override existing data on the master disk and update it with the data on
the recovery disk. Backup all valuable data before continuing.
1. Under 'Status', in the Manage subsection, click 'Recover data' or click the recovery
volume in the storage system view, and then click 'Recover data'.
2. Click 'Yes' to confirm.
3. The recovery operation starts immediately. You can follow the progress by hovering over
the notification area icon or by reviewing the volume status under 'Status' or 'Manage'.
Note
If the master disk is removed while the data recovery is in progress and is then reconnected, the
operation will resume automatically from where it stopped, as long as the volume is in on request
update mode. If the volume is in continuous update mode, you will need to restart the operation by
following the procedure described above.
This action is only available when a volume is reported as failed but both array disks are present
and normal; it allows you to access the volume and try to recover healthy volume data.
In most cases, this situation will occur after one or more array disks were reported as failed or at
risk, and then reset to normal.
Completing this action resets the volume state by ignoring previous events and does not repair data.
Any data loss or corruption that may have occurred as a result of prior hardware failure or change of
state remains. We recommend that you back up accessible data and replace failed hardware as soon
as possible to prevent further data loss.
You can choose to change the type of an existing volume based on your storage system needs. The
following configurations are possible:
Note
Only available if the recovery volume is in continuous update mode.
Note
No other volumes can be present on the system. The RAID 1 volume must be less than 1.3125 TB
and include 100% of the available space on the array.
Note
Before starting, refer to the system and volume requirements to determine which RAID types are
supported by your computer and make sure the required number of SATA disks are connected. The
Intel® Chipset provides support for the creation of all RAID volume types and for up to six SATA
ports on a mobile platform. Changing the volume type does not require re-installation of the operating
system.
1. Under 'Status' or 'Manage', in the storage system view, click the volume that you want to
modify. The volume properties are now displayed on the left.
2. Click 'Change type'.
3. In the 'Change Volume Type' dialog, type a new name if you want to change the default
name.
4. Select the new volume type, and then click 'OK'. Caution: Once the data migration starts,
the operation cannot be canceled.
5. Once the migration has completed, the 'Manage' page refreshes and reports the new
volume type.
If you perform a driver upgrade or downgrade while the data migration is in progress and then
restart your computer, the driver will not be able to recognize the volume or the data on it. If you
are migrating a system volume, you will not be able to restart your system because the operating
system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that
last performed driver update, and then restart the computer to return to a normal state.
You can increase the size of a RAID volume by using the remaining available space on the array. A
minimum of 32 MB of space must be available on the array for this action to be possible. Hovering
over the array name in the storage system view displays the amount of available space in MB.
After creating a volume with an array allocation set to less than 100% in the Configure Volume step,
you will be able to increase the volume size by the amount of available space on that array. If two
volumes are present on a single array and capacity expansion is possible, only the space available at
the end of the second volume will be used to increase the volume size.
Warning
If you perform a driver upgrade or downgrade while the data migration is in progress and then
restart your computer, the driver will not be able to recognize the volume or the data on it. If you
are migrating a system volume, you will not be able to restart your system because the operating
system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that
last performed driver update, and then restart the computer to return to a normal state.
1. Under 'Status' or 'Manage', in the storage system view, click the array you want to
manage. The array properties are now displayed on the left.
2. Click 'Increase size' next to the volume name. If more than one volume is present on a
single array, you will need to increase the size of each volume one at a time.
3. Click 'Yes' to confirm. Caution: Once the data migration starts, the operation cannot be
canceled.
4. Once the migration has completed, restart your computer for changes to take effect.
Then use Windows Disk Management* to increase the partition size on the volumes, or
add another partition.
1. Under 'Status' or 'Manage', in the storage system view, click the volume whose size you
want to increase. The volume properties are now displayed on the left.
Note
To open Windows Disk Manager, click Start, right-click My Computer, select Manage, and then in the
console tree select Disk Management.
You can add one or more SATA disks to an existing array to increase the system storage capacity.
This feature can be useful if you want to change to a volume type that requires additional disks.
Refer to Connecting a Disk under Managing Disks for more information on installing SATA disks on
your computer.
Warning
Any existing data on the available disk used to increase the array size will be permanently deleted.
Backup all the data you want to preserve before completing this action.
If you perform a driver upgrade or downgrade while the data migration is in progress and then
restart your computer, the driver will not be able to recognize the volume or the data on it. If you
are migrating a system volume, you will not be able to restart your system because the operating
system cannot load. If you are migrating a data volume, you will have to reverse (roll back) that
last performed driver update, and then restart the computer to return to a normal state
This action can also be performed from Manage Array. Refer to the Adding a Disk to an Array section
for more information.
1. Under 'Status' or 'Manage', in the storage system view, click the volume to which you
want to add a disk. The element properties are now displayed on the left.
2. Click 'Add disk'.
3. Select the disk you want to use to increase the array capacity.
4. Click 'Add Disk'. Caution: Once the data migration starts, the operation cannot be
canceled.
5. Once the migration has completed, restart your computer for changes to take effect.
Then use Windows Disk Management* to increase the partition size on the volumes for
which a disk was added, or add another partition.
Note
A recovery volume gives you the flexibility to choose between updating data on the recovery disk
continuously or on request.
In continuous update mode, the latest master disk changes are copied to the recovery disk
automatically, as long as both disks are connected to the computer. In on request mode, the latest
master disk changes are copied to the recovery disk only when you request a data update.
The current update mode is reported in the volume properties under Manage Volume. By default,
the recovery volume is created in continuous update mode.
Note
This action is only available if a recovery volume is present and in normal state. If the recovery
volume is read-only because the master or recovery disk files are accessed, you will need to hide
the files before the update mode can be changed.
1. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties are now displayed on the left.
2. Click ‘Change mode’, and then click 'Yes' to confirm.
3. The page refreshes and the volume properties report the new update mode.
You can manually copy the latest master disk changes to the recovery disk at any given time; this
action allows you to synchronize data on the recovery volume, improving data protection and
lowering the risk of losing valuable data in the event of a disk failure. When you request an update,
only changes since the last update are copied.
Note
This action is only available if a recovery volume is present, and in ‘on request’ update mode.
1. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties are now displayed on the left.
2. Click ‘Update data’.
3. The update process can be instantaneous or may take a while depending on the
amount of data being copied. You can follow the progress by hovering over the
notification area icon or by reviewing the volume status under 'Status' or 'Manage'.
Note
You can follow the progress of the update by hovering over the notification area icon or under
‘Status’ or Manage Volume.
This action is only available if a recovery volume is present, in a normal state, and in on request
update mode.
You can view the recovery or master disk files using Windows Explorer* depending on the
designated source drive of the recovery volume. This feature can be useful when a data recovery
from or to the master disk is necessary.
Note
When files have been accessed, the disk is displayed as missing from the array, and becomes
available. Also, the volume is set to read-only and data updates are not available in this state.
Hiding disk files will make the volume writable and allow data updates.
You can also access master or recovery disk files from Manage Disk.
This action is only available if a recovery volume is present and disk files have been accessed.
When you are done viewing master or recovery disk files, you can hide the display of the files from
Windows Explorer*. Once the disk files are hidden, the disk becomes writable, and data updates on
the volume are available.
Note
You can also hide master or recovery disk files from Manage Disk
When a volume is deleted, you create available space that can be used to create new volumes. Note
that you cannot delete a system volume using this application because the operating system needs
the system files to run correctly. Also, if the volume is a recovery volume and the master or
recovery disk files are accessed, you will need to hide these files before the volume can be deleted.
Warning
When a volume is deleted, all existing data on all disks that are a part of the selected volume is
permanently lost. It is recommended to complete a backup of all valuable data before continuing.
1. Under ‘Status’ or ‘Manage’, in the storage system view, click the volume you want to
delete. The volume properties are now displayed on the left.
2. Click ‘Delete volume’.
3. Review the warning message, and click ‘Yes’ to delete the volume.
4. The ‘Status’ page refreshes and displays the resulting available space in the storage
system view. You can now use it to create a new volume.
You can assign a data strip size to a volume while creating a new volume or while changing the type
of an existing volume. You cannot change the strip size of an existing volume without changing its
type.
The strip size refers to each logical contiguous data block used in a RAID 0, RAID 5, or RAID 10
volume. This setting is not available for RAID 1 or recovery volumes, due to their redundant
configuration. The default value is the recommended strip size based on the system configuration
and the volume type selected; changing the pre-selection is best suited for advanced users.
The following table describes the usage scenarios for the typical strip sizes.
Options RAID 0: 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB. RAID 5: 16 KB, 32 KB, 64 KB, 128 KB.
RAID 10: 4 KB, 8 KB, 16 KB, 32 KB, 64 KB.
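To make the strip and stripe terminology concrete, the following sketch maps a byte offset to a member disk and on-disk offset for a simplified RAID 0 layout; it is a model for illustration only and does not reflect the driver's internal metadata or on-disk format.

#include <stdint.h>
#include <stdio.h>

/* Simplified RAID 0 address mapping, for illustration only. A "strip" is
 * strip_size bytes on one disk; a "stripe" is one strip from each disk at
 * the same offset. */
static void map_offset(uint64_t byte_offset, uint32_t strip_size, uint32_t disks,
                       uint32_t *disk, uint64_t *disk_offset)
{
    uint64_t strip_index = byte_offset / strip_size;  /* which strip overall */
    uint64_t stripe      = strip_index / disks;       /* which stripe (row)  */
    *disk        = (uint32_t)(strip_index % disks);   /* which member disk   */
    *disk_offset = stripe * strip_size + byte_offset % strip_size;
}

int main(void)
{
    uint32_t disk;
    uint64_t off;
    /* Example: offset 128 KB on a 2-disk RAID 0 volume with 64 KB strips. */
    map_offset(128 * 1024, 64 * 1024, 2, &disk, &off);
    printf("disk %u, offset %llu bytes\n", disk, (unsigned long long)off);
    return 0;
}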
You can improve the read/write performance of a RAID or recovery volume by enabling the write-
back cache on one or all volumes on an array. When this feature is enabled, data may be
temporarily stored in the cache memory before being written to the physical disks. Multiple I/O
requests may be grouped together to improve performance. By default, the write-back cache is
disabled.
Warning
While this feature greatly improves volume and array performance, it also increases the amount
of cached data that could be lost in the event of a power failure. This risk can be lowered if your
computer is connected to an uninterruptable power supply (UPS).
Note
If your computer is running on battery and a recovery volume is present, the option to enable the
write-back cache is not available because the recovery disk is offline and data updates are not
available. If this feature was enabled prior to running on battery, write-back cache activity is
temporarily disabled until you reconnect your computer to the power supply.
Initializing a volume is the process of synchronizing all redundant data on a volume prior to verifying
or verifying and repairing that data. If you attempt to start a verification process for a volume that
has not been initialized, you will be prompted to do so.
Initializing a volume
1. Under 'Status' or 'Manage', in the storage system view, click the volume that you want
to initialize. The volume properties are now displayed on the left.
2. Click 'Initialize'.
3. Click 'OK' to start the initialization process. Caution: Once the data migration starts, the
operation cannot be canceled.
Note
While initialization is in progress, you can view the status in the notification area by hovering over
the Intel® Rapid Storage Technology icon, or in the application under Status or Manage Volume.
Warning
The initialization process could take a while depending on the number and size of the disks. You can
continue using array disks and other applications during this time. Closing the application, or
powering off and restarting your computer, will not disrupt the progress of this operation.
You can verify data on an existing volume by identifying and repairing inconsistencies. Running this
operation on a regular basis helps you keep valuable data and the overall storage system healthy.
1. Under 'Status' or 'Manage', in the storage system view, click the volume that you want to
verify. The volume properties are now displayed on the left.
2. Click 'Verify'.
3. Select the check box if you want errors found to be repaired automatically during the
verification process.
4. Click 'OK' to start the verification process.
Note
Data on a volume cannot be verified and repaired unless the volume has been initialized first. If you
attempt to start a verification process for a volume that is not initialized, you will be prompted to
first initialize the volume. Based on its configuration, a RAID 0 volume cannot be repaired because
of the lack of redundancy.
You can change the order of designation for array disks in a recovery volume by setting the master
disk as the destination drive and the recovery disk as the source drive. This action is best suited for
advanced users.
Note
This action is only available if a recovery volume is present, normal, and in continuous update mode.
1. Under 'Status' or 'Manage', in the storage system view, click the recovery volume. The
volume properties are now displayed on the left.
2. In the Advanced section, click 'Swap master and recovery disks'.
3. Click ‘Yes’ to confirm.
4. Hover over each disk in the storage system view to review their new usage.
Parameter Value
Port Reports the port number to which the disk or device is attached.
Usage Array disk: a disk that has been grouped with other disks to form an array
containing RAID volumes.
Master disk: the disk that is the designated source drive in a recovery volume.
Recovery disk: the disk that is the designated destination drive in a recovery
volume.
Spare: the disk has been designated as the destination drive for automatic
volume rebuilds in the event of a failed, missing or at risk array disk. For RAID
0 volumes, automatic rebuilds will only occur when one of its array disks is
reported as at risk.
Warning
Assigning an available disk to an array or marking it as a spare will permanently
delete any existing data on that disk.
Unknown: the disk is available but contains metadata that cannot be displayed in
the operating system. Even though the disk is reported as normal, you will need
to clear and reset the disk to make the disk available.
Acceleration mode Reports the acceleration mode for the disk or volume associated with the cache
device.
Enhanced: Indicates that the disk or volume is accelerated for optimized data
protection.
Status At risk: an impending error condition was detected on the disk and it is now at
risk of failure.
Failed: the disk has failed to properly complete read and write operations in a
timely manner, and it has exceeded its recoverable error threshold.
Offline: indicates that an array disk is locked, that the recovery volume is in on
request update mode, or that your computer is running on battery and data
updates to the recovery volume are not available.
Size Reports the total capacity of the disk in megabytes (MB) in the disk properties
and in gigabytes (GB) in the storage system view.
Serial number Reports the manufacturer's serial number for the disk.
System disk Reports whether the disk contains system files that are required to start and run
the operating system.
Disk data cache Reports whether the data cache is enabled on this disk. This feature is controlled
at the array level.
Native command queuing Reports whether the disk supports this feature.
SATA transfer rate Reports the data transfer rate between the SATA controller and the SATA disk.
The supported rates are: SATA 1.5 Gb/s (generation 1), SATA 3 Gb/s (generation 2),
and SATA 6 Gb/s (generation 3).
The data transfer rate reported is based on the Intel® Chipset and SATA disks
present in your system.
Physical sector size Reports the size of physical sectors on the disk (bytes).
Logical sector size Reports the size of logical sectors on the disk (bytes).
You can unlock a password-protected disk by entering the password which allows you to access data
or use that disk to create a volume. The password is set up through the system BIOS. Locked disks
can be identified with the lock icon appended to them and display a ‘Locked’ status in the disk
properties.
Marking a disk as a spare allows you to designate an available SATA disk as the default destination
for automatic volume rebuilds in the event of a failed, missing or at risk array disk. However, for
RAID 0 volumes, automatic rebuilds will only occur if one of its members is reported at risk.
1. Under 'Status' or 'Manage', in the storage system view, click the disk that you want to mark
as a spare. The disk properties are now displayed on the left.
2. Click 'Mark as spare'.
3. Click 'OK'.
Note
RAID 1, 5, 10, and recovery volumes can use one or more spares.
Warning
When marking a disk as a spare, any existing data on that disk is permanently deleted. Back up all
data you want to preserve before starting this action.
If your system is running a version of the RST OROM that does not support disks that are 2 TB or
larger, you can reset such a disk to available, but you cannot mark it as a spare.
After a disk was marked as spare, you can choose to make that spare disk available again and use it
differently. Once available, the disk can be used to create a volume or be added to an existing
volume if all other requirements are met.
1. Under 'Status' or 'Manage', in the storage system view, click the disk that you want to reset
to available. The disk properties are now displayed on the left.
2. Click 'Reset to available'.
3. The page refreshes and the disk usage is now reported as available.
You can reset a SATA disk to normal when the storage system reports one of the following disk
statuses:
At risk
A disk is reported as being at increased risk of failing in the near future, which could be due to slow
degradation over time. You can choose to ignore this alert for now by resetting the disk to normal, but
it may reappear if the disk continues to assert this condition. We recommend that you contact the
manufacturer for more information to prevent potential data loss.
Failed
A SATA disk has failed to properly complete read and write operations in a timely manner, and data
may be lost. We recommend that you replace the failed disk as soon as possible to return the overall
storage system to normal. In this state, data may be lost, but you can try resetting the disk to normal.
If the failed disk is an array disk, refer to the Troubleshooting section for guidelines on rebuilding a
failed or degraded volume.
1. Under ‘Status’, in the Manage subsection, locate the disk reported as at risk or failed. You can
also perform this action from Manage Disk, which is accessible by clicking the disk in the
storage system view.
2. Click 'Reset disk to normal'. The page refreshes instantly and the disk returns to a normal state.
Note
Completing this action clears the event on the disk and does not delete existing data. However,
ignoring early warning signs of disk failure may result in data loss.
This action is only available if a recovery volume is present, in a normal state, and in on request
update mode.
This feature allows you to view the files on the designated destination drive in a recovery volume
using Windows Explorer*. For example, you may want to review the recovery disk files prior to starting
a data recovery in the event that data on the master disk is inaccessible or corrupted.
When the volume status is normal, the recovery disk is the designated destination drive and files are
accessible. When the volume status is running off the recovery disk, the master disk is the designated
destination drive and files are accessible. You can review the usage of each disk by hovering over the
array disks in the storage system view or by clicking one of the disks to review its properties under
Manage Disk.
1. Under ‘Status’ or ‘Manage’, in the storage system view, click the recovery or the master disk
depending on the volume status. The disk properties are now displayed on the left.
2. Click ‘Access files’.
3. Windows Explorer opens and displays the files located on the disk.
Note
When files have been accessed, the disk is displayed as missing from the array, and becomes
available. Also, the volume is set to read-only and data updates are not available in this state.
Hiding disk files will make the volume writable and allow data updates.
Warning
Windows Explorer will not open if the disk does not have any partitions on it.
This action is only available if a recovery volume is present and disk files have been accessed.
Note
You can also hide master or recovery disk files from Manage Volume.
Installing new hardware is one of the steps you may have to take to keep your storage system
healthy or to extend the life of a computer that is running out of storage space.
Intel® Rapid Storage Technology provides hot plug support, which is a feature that allows SATA
disks to be removed or inserted while the computer is turned on and the operating system is
running. As an example, hot plugging may be used to replace a failed external disk.
Our application provides support for SATA 1.5 Gb/s (generation 1), SATA 3 Gb/s (generation 2), and
6 Gb/s (generation 3) data transfer rates. The rate support depends on the Intel® Chipset and SATA
disks present in your system. Visit our Online Support for additional information on chipset features
and benefits.
Follow these procedures to replace or connect a disk in case you need to power off your computer:
Replacing a disk
1. Power off your computer.
2. Replace the disk that reports a problem.
3. Turn your computer back on. If the replaced disk was part of an array, you will need to follow
the procedure provided in the Troubleshooting section based on the volume state and type.
Note
To install an external disk, plug it into your computer and connect the power cord.
To remove and install an internal disk, you should be comfortable opening your computer case and
connecting cables. Follow the manufacturer’s installation guide to complete this procedure. If you
are replacing the system disk, you will have to re-install the operating system after you connect
the disk because the system disk contains the files required to start and run your computer.
A port is a connection point on your computer where you can physically connect a device, such as a
SATA disk or ATAPI device. A port transfers I/O data between the device and the computer.
If a port is reported as empty in the storage system view, you can use that port to connect a new
device in order to increase the storage system capacity. Currently, the maximum number of internal
ports that can be used to connect devices is six.
The port properties listed below display to the left of the storage system view under 'Manage' and
report values specific to the element selected in the view.
Parameter Value
Port Reports the port number to which the disk or device is attached.
An ATAPI device is a mass storage device with a packet interface, such as a CD-ROM, DVD/Blu-ray
disc, tape drive, or solid-state disk. The ATAPI properties listed below display to the left of the storage
system view under 'Manage' and report values specific to the selected element.
Parameter Value
Port Reports the port number to which the disk or device is attached.
Serial number Reports the manufacturer's serial number for the device.
SATA transfer rate Reports the transfer mode between the SATA controller and the ATAPI device. The
typical values for this parameter are:
The data transfer rate reported is based on the Intel® Chipset and SATA disks present in
your system.
This feature also increases the power efficiency of a mobile computer by retaining stored data and
reading data from the cache instead of the SATA disk itself.
Accelerate is only available if the requirements listed in this section under Cache Device Properties
are met.
The Performance tab > Enable acceleration page in the UI is only available if the following
requirements are met:
Limitations
The maximum cache size is 64 GB.
Only one disk or volume at a time can be accelerated per system.
If two volumes are present on a single array (they share the same array of disks),
neither volume can be accelerated.
Once a volume is accelerated, a second volume cannot be added to the same array.
Once a solid state disk is configured to be used as a cache device, the option to create a
recovery volume is no longer available. Recovery volumes do not support system
configurations with multiple volumes.
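The limitations above can be restated as a simple eligibility check, sketched below; the structure and function names are hypothetical and only mirror the rules listed here (64 GB cache ceiling, one accelerated device per system, no second volume on the target array).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_CACHE_GB 64u  /* maximum cache size stated above */

/* Hypothetical view of the storage system, for illustration only. */
struct accel_check {
    uint32_t cache_size_gb;       /* requested cache allocation                  */
    bool     device_accelerated;  /* another disk/volume is already accelerated  */
    uint32_t volumes_on_array;    /* volumes sharing the target array            */
};

static bool acceleration_allowed(const struct accel_check *c)
{
    if (c->cache_size_gb > MAX_CACHE_GB)
        return false;             /* cache capped at 64 GB                   */
    if (c->device_accelerated)
        return false;             /* only one accelerated device per system  */
    if (c->volumes_on_array > 1)
        return false;             /* two volumes on one array: not eligible  */
    return true;
}

int main(void)
{
    struct accel_check req = { 32u, false, 1u };  /* example request */
    printf("acceleration allowed: %s\n", acceleration_allowed(&req) ? "yes" : "no");
    return 0;
}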
Status Reports the state of health of the internal solid state disk present in the system.
Failed: Indicates that the solid state disk has failed to properly complete read and write
operations in a timely manner, and it has exceeded its recoverable error threshold.
At risk: Indicates that an impending error condition was detected on the solid state disk, and it is
now at risk of failure.
Busy: Indicates that acceleration is transitioning from maximized to enhanced mode, or
that cache data is being deleted in order to disable acceleration. In some cases, these
transitions will start automatically in the event that errors are detected and a risk of data
loss is identified.
Usage Reports that the solid state disk is configured to be used as a cache device.
Size Reports the total capacity of the solid state disk.
Serial number Reports the manufacturer's serial number for the internal solid state disk.
Firmware Reports the version of the firmware found in the solid state disk.
Disk data cache Reports whether the data cache is enabled on the solid state disk. When a solid state disk is
configured as a cache device, this setting can only be changed at the operating system
level.
Native command queuing Reports whether the solid state disk supports this feature.
SATA transfer rate Reports the data transfer rate between the SATA controller and the SATA solid state disk.
The supported rates are:
SATA 1.5 Gb/s (generation 1)
SATA 3 Gb/s (generation 2)
SATA 6 Gb/s (generation 3)
The data transfer rate reported is based on the Intel® Chipset and SATA disks present in
your system.
Physical sector size Reports the size of physical sectors on the solid state disk (bytes).
Logical sector size Reports the size of logical sectors on the solid state disk (bytes).
Accelerated device Indicates the location of the disk or the name of the volume that is currently accelerated by
the cache device.
Acceleration mode Reports the acceleration mode for the disk or volume associated with the cache device.
Enhanced: Indicates that the disk or volume is accelerated for optimized data protection.
Maximized: Indicates that the disk or volume is accelerated for optimized input/output
performance.
Failing: Indicates that a SMART event was detected on the solid state disk that is used as a
cache device.
Failed: Indicates that the cache volume has exceeded its recoverable error threshold, and
that read and write operations are no longer occurring.
Data stripe size Indicates that the single-disk RAID 0 volume is a cache volume.
Allocated cache size Reports the volume capacity used for cache memory.
Write-back cache Reports whether the write-back cache feature is enabled for the volume.
Physical sector size Reports the size of each sector that is physically located on the disk.
You can enable acceleration in order to improve the performance of a SATA hard disk or a RAID
volume that includes only SATA hard disks. This operation caches its contents using a non-volatile
memory device (a solid state disk) attached to an AHCI port.
Acceleration modes
Completing this action makes any cached data associated with the accelerated disk or volume
immediately inaccessible. If the current acceleration mode is maximized, disabling acceleration may
take a while to complete, depending on the cache and the solid state disk size. You can use other
applications during this time.
In the event that you are unable to open or access Intel® Rapid Storage Technology due to an
application error or operating system issue, you will need to disable acceleration using the option
ROM user interface.
This action is only available if a disk or volume is currently accelerated. A disk or volume can be
accelerated in either of the following modes:
By default, acceleration is enabled in enhanced mode due to the lower risk of data loss, but you can
change acceleration mode at any time as long as the cache volume and accelerated device are in a
normal state and caching activity is occurring.
Warning
When a device is accelerated in Maximized mode, performance is highly improved but cached
data is at higher risk of being lost in the event of a power failure or under other atypical
conditions.
The acceleration mode will display as busy under the following conditions (by user interaction or
automatic transition):
· When changing acceleration mode from maximized to enhanced.
· When disabling acceleration while in maximized mode.
The transition time varies based on the cache and disk sizes. Disk and volume actions will not be
available until the acceleration transition has completed, except for renaming and deleting volumes.
Once a solid state disk is configured to be used as a cache device, you can choose to accelerate any
disk or volume in a normal state that is located on your storage system. We recommend that you
accelerate the system disk or volume in order to get the full benefits of the non-volatile cache
memory configuration.
This action is only available if a solid state disk is configured as a cache device and there is no
accelerated disk or volume present (no association with the cache device). In this situation, you
have two options:
Reset the solid state disk to available and use that device for other purposes.
Accelerate a disk or volume that is eligible and available for acceleration. Refer to Cache
Device Properties for a detailed list of eligibility requirements.
Warning
In the event that a single-disk RAID 0 data volume was created along with a cache volume,
resetting the solid state disk to available will delete both volumes. Data on the RAID 0 data
volume will be permanently erased. Backup all valuable data before beginning this action.
1. Click ‘Accelerate’.
2. Click 'Reset to available'.
3. In the dialog, select the check box to confirm that you understand that data on the data
volume will be permanently deleted.
4. Click 'Yes' to confirm.
5. The 'Accelerate' page refreshes. Under 'Status', the storage system view displays the solid
state disk usage as available. The device can now be used for any purpose.
This action is only available if an issue is reported on the accelerated disk or volume associated with
the cache device, or if that disk or volume is missing. In this state, the acceleration mode is typically
reported as unavailable and caching activity is no longer occurring.
If you are unable to resolve the reported issue on the accelerated disk or volume (e.g., degraded or
failed volume due to a missing array disk), the only option will be to remove the association
between the cache device and the disk or volume.
Once the association between the cache and the accelerated disk or volume is removed, all cache
metadata and data is deleted from the cache device. You can then reset the solid state disk to
available or accelerate a different disk or volume, as long as the cache device is healthy.
Follow these steps to disassociate the cache memory and the accelerated device:
1. Click ‘Accelerate’.
2. Click ‘Disassociate’.
3. In the 'Disassociate' dialog, click 'Yes' to confirm.
4. The page refreshes and the Acceleration View reports the new configuration. Options to
reset the solid state disk to available or to select a new device to accelerate (as long as an
eligible disk or volume is available) are now available.
Note
You can also perform this action using the option ROM user interface.
Both administrators and standard users can change the notification area settings using the
application or directly from the notification area. Settings changes are applied on a per user basis,
and do not affect other users’ settings.
By default, System preferences are set to show the notification area icon. If you previously chose to
hide the notification area icon, follow these steps to display the icon again:
1. Under ‘Preferences’, select ‘Show the notification area icon’.
2. Click ‘Apply Changes’. Verify that the icon is now displayed in the notification area.
Once you hide the notification area icon, the service no longer reports storage system information,
warnings, or errors through the notification area. You will need to use the application to monitor the
health of the storage system. Follow these steps to hide the notification area icon:
1. Under ‘Preferences’, deselect 'Show the notification area icon'.
2. In the ‘Hide Notification Area Icon’ dialog, click ‘Yes’ to confirm.
3. Verify that the icon is no longer displayed in the notification area.
Note
Storage system information provides details on any changes of state other than warnings or errors,
such as new disks being detected or locked.
Storage system warnings report the cause for the overall warning state of the storage system,
such as a degraded RAID volume due to a missing disk.
Storage system errors report the cause for the overall error state of the storage system, such
as a failed volume due to a failed disk.
– SECURITY SEND
– SECURITY RECEIVE
RST will allow the Security Send and Security Receive commands to be sent only when using the
following protocols:
– RST UEFI
– EFI_STORAGE_SECURITY_COMMAND_PROTOCOL: EFI_STORAGE_SECURITY_COMMAND_PROTOCOL
is loaded ONLY if a device supports TCG and the Block IO protocol is loaded successfully
– RST OS Driver
– SCSI SECURITY PROTOCOL: Intel® RST driver supports translating
commands to NVMe Security Send and NVMe Security Receive
commands on TCG supported Opal PCIe NVMe SSDs.
Intel® RST uses the Admin Command Set Attributes & Optional Controller Capabilities register of the
NVMe Identify Controller data (bytes 257:256). RST requires that bit 0 of this register be set to 1 to
indicate Opal support. If the bit is not set, RST will not recognize the device as supporting Opal.
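For reference, the check described above amounts to reading bit 0 of the two-byte field at bytes 257:256 of the NVMe Identify Controller data; the sketch below assumes the 4096-byte Identify buffer has already been retrieved by platform-specific means and is illustrative only, not the driver's actual code.

#include <stdbool.h>
#include <stdint.h>

/* Returns true if bit 0 of the field at bytes 257:256 of the Identify
 * Controller data is set, i.e. the device reports support for the Security
 * Send / Security Receive admin commands used for Opal/TCG handling.
 * 'id_ctrl' must point to the 4096-byte Identify Controller structure. */
static bool security_commands_supported(const uint8_t *id_ctrl)
{
    uint16_t oacs = (uint16_t)id_ctrl[256] | ((uint16_t)id_ctrl[257] << 8);
    return (oacs & 0x0001u) != 0;
}

int main(void)
{
    uint8_t idbuf[4096] = {0};
    idbuf[256] = 0x01;  /* pretend the device sets bit 0 */
    return security_commands_supported(idbuf) ? 0 : 1;
}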
Intel® RST added support for TCG communication in version 13.0 (see section 24).
Further details and specifications for eDrives can be found at the IEEE 1667 Workgroup
web page: http://www.ieee1667.com
The following sections explain the use of each of the bits in the BCFS, also known as the Software
Feature Mask bits.
Note: This document does not cover details on how to set up a system BIOS. For that level of
information, please contact your platform’s BIOS vendor or your Intel field representative, who can
put you in contact with the appropriate Intel BIOS support personnel.
Note: Clearing all RAID level related bits to ‘0’ (that includes the Intel® RRT bit) is an unsupported
configuration. The Intel® RST OROM will ignore the BIOS settings and enable all RAID levels
(Intel® RRT inclusive).
Bit 0 == 1 (default)
Bit 1 == 0
Bit 2 == 1 (default)
Bit 3 == 0
Bit 0 == 0
Bit 1 == 0
Bit 2 == 0
Bit 3 == 1 (default)
Bit 4 == 1 (default)
Bit 8 == 0 (default)
01 – 4 secs
10 – 6 secs
11 – 8 secs
Bit 5 == 1 (default)
Bit 11 == 1
Bit 6 == 0
When this bit is cleared, the Intel® RST UI does not display any option to use this feature.
Bit 6 == 0 (default)
Bit 9 == 1
This enables the Intel® Smart Response Technology feature on the platform SKU.
Bit(s) Attribute Default Description
15:14 RO 0h Reserved.
11:10 RWO 0h OROM UI Normal Delay (OUD): Values of these bits specify the delay of the OROM UI
Splash Screen in a normal status. 10 – 6 secs; 11 – 8 secs.
8 RWO 0h Intel® RRT Only on ESATA (ROES): If set to ‘1’, then only Intel® RRT volumes can span
internal and eSATA drives. If cleared to ‘0’, then any RAID volume can span internal and
eSATA drives.
7 RWO 0h Reserved
6 RWO 0h Reserved
5 RWO 1h Intel® RST OROM UI (RSTOROMUI): If set to ‘1’, then the OROM UI is shown. Otherwise,
no OROM banner or information will be displayed if all disks and RAID volumes are Normal.
4 RWO 1h Intel® RRT Enable (RSTE): If set to ‘1’, then Intel® Rapid Recovery Technology is enabled.
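Assuming the layout in the table above, these fields can be decoded with ordinary bit masks. The sketch below is illustrative only, covers just the bits described in the table, and is not the BIOS’s or driver’s actual code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the Software Feature Mask bits documented above. */
struct feature_mask {
    unsigned rrt_enabled;    /* bit 4:     Intel RRT Enable (RSTE)        */
    unsigned orom_ui_shown;  /* bit 5:     Intel RST OROM UI (RSTOROMUI)  */
    unsigned rrt_only_esata; /* bit 8:     Intel RRT Only on ESATA (ROES) */
    unsigned oud_delay;      /* bits 11:10 OROM UI Normal Delay (OUD)     */
};

static struct feature_mask decode_feature_mask(uint16_t reg)
{
    struct feature_mask f;
    f.rrt_enabled    = (reg >> 4) & 0x1u;
    f.orom_ui_shown  = (reg >> 5) & 0x1u;
    f.rrt_only_esata = (reg >> 8) & 0x1u;
    f.oud_delay      = (reg >> 10) & 0x3u;  /* e.g. 10b = 6 secs, 11b = 8 secs */
    return f;
}

int main(void)
{
    /* Defaults from the table: bit 4 = 1, bit 5 = 1, all other decoded bits 0. */
    struct feature_mask f = decode_feature_mask((1u << 4) | (1u << 5));
    printf("RRT=%u OROM UI=%u ROES=%u OUD=%u\n",
           f.rrt_enabled, f.orom_ui_shown, f.rrt_only_esata, f.oud_delay);
    return 0;
}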
There is a whitepaper describing use of Removable device capability bits on Win7 by bus drivers:
http://www.microsoft.com/whdc/Device/DeviceExperience/ContainerIDs.mspx.
In order to correct this issue and pass the platform WHQL test, RST recommends that the OEM take the
following action:
In the system BIOS, define an _EJ0 ACPI method on the interlocked port. _EJ0 will signal to the
ACPI driver to set Removable for the RST driver and still mark the device as internal to the
system such that it does not show in its own container. The implementation is to use a registry
key for each port to tell RST whether to set Removable bit or not. If _EJ0 ACPI method is
defined in the system BIOS by the manufacturers, they can tell RST not to set the Removable
bit. For example:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStor\Parameters\Port1]
"EJ0IsDefined"=dword:1
If the value is 1, _EJ0 sets the Removable bit instead of RST. If the value is 0, no _EJ0 is defined,
so RST sets the Removable bit. The default value is 0.
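For illustration only, the per-port value above could be inspected from user mode with the standard Windows registry API (the driver itself consumes it in kernel mode); Port1 is used here purely as an example key.

#include <stdio.h>
#include <windows.h>

/* Illustrative check of the per-port EJ0IsDefined value described above. */
int main(void)
{
    DWORD value = 0;  /* documented default is 0 */
    DWORD size  = sizeof(value);
    LSTATUS rc  = RegGetValueA(
        HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Services\\iaStor\\Parameters\\Port1",
        "EJ0IsDefined",
        RRF_RT_REG_DWORD,
        NULL, &value, &size);

    if (rc != ERROR_SUCCESS)
        printf("Value not present; RST sets the Removable bit (default behavior).\n");
    else if (value == 1)
        printf("_EJ0 is defined; the ACPI method sets the Removable bit.\n");
    else
        printf("_EJ0 is not defined; RST sets the Removable bit.\n");
    return 0;
}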
CD Compact Disc
GB Gigabyte
I/O Input/Output
INF Information file (.inf) used by Microsoft operating systems that support the Plug & Play
feature. When installing a driver, this file provides the OS with the needed information about
driver filenames, driver components, and supported hardware.
Intel® Option ROM (OROM) Standard Plug and Play option ROM that provides a pre-operating system
user interface for the Intel RAID implementation.
MB Megabyte
Migration Term used to describe the movement of data from one configuration or usage model to
another.
Option ROM A code module built into the System BIOS that provides extended support for a particular
piece of hardware. For this product, the Option ROM provides boot support for RAID 0/1/5/10
volumes and a user interface for configuring and managing RAID 0/1/5/10 volumes.
OS Operating System
PCH Platform Controller Hub, the newer term for Intel chipsets
Port 0..3 Term used to describe the point at which a SATA drive is physically connected to the SATA
Controller. Port n is the nth of the four available ports in ICH9 systems, where n=0..3.
RAID 0 (aka striping) A RAID level where data is striped across multiple physical hard drives.
RAID 1 (aka mirroring) A RAID level where data is mirrored between hard drives to provide data
redundancy.
RAID 5 A RAID level where data and parity are striped across the hard drives to provide good
read/write performance and data redundancy. The parity is striped in a rotating sequence
(aka striping with rotating parity).
RAID 10 A RAID level where information is striped across a two-disk array for system performance.
Each of the drives in the array has a mirror for fault tolerance (aka striping and mirroring).
RAID volume A block of capacity allocated from a RAID array and arranged into a RAID topology.
Operating systems typically interpret a RAID volume as a physical hard drive.
RAM Random Access Memory. Usually refers to the system’s main memory.
RTD3 Runtime D3
Strip Grouping of data on a single physical hard drive within a RAID volume
Stripe The sum of all strips in a horizontal axis across physical hard drives within a RAID volume
UI User Interface
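As a quick arithmetic companion to the RAID level definitions above, the following sketch computes usable capacity for equal-size member disks using the standard formulas; the disk count and size are example values only.

#include <stdio.h>

/* Usable capacity for equal-size members, using the standard formulas:
 * RAID 0 = n x size, RAID 1 = size (2 disks), RAID 5 = (n - 1) x size,
 * RAID 10 = (n / 2) x size. Example values only. */
int main(void)
{
    const double size_gb = 500.0;  /* per-disk capacity (example) */
    const int    n       = 4;      /* number of member disks      */

    printf("RAID 0 : %.0f GB\n", n * size_gb);
    printf("RAID 1 : %.0f GB (2 disks)\n", size_gb);
    printf("RAID 5 : %.0f GB\n", (n - 1) * size_gb);
    printf("RAID 10: %.0f GB\n", (n / 2) * size_gb);
    return 0;
}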
Cause: Missing array disk
Solution: Follow this procedure to recover data:
Cause: Failed array disk
Solution: In most cases, the volume cannot be recovered and any data on the volume is lost.
However, before deleting the volume, you can try resetting the disks to normal, and then attempt a
data recovery. If the read/write data access consistently fails, the disk will likely return to a failed
state immediately. Refer to Troubleshooting Disk Events for instructions on resetting a disk to normal.
This procedure deletes the failed volume:
1. Power off your computer and replace the failed SATA disk with a new one
that is of equal or greater capacity.
2. Turn on your computer. During the system startup, the volume status will
display as 'Failed' in the Intel Rapid Storage Technology option ROM user
interface.
3. Press Ctrl-I to access the main menu of the option ROM user interface.
4. Select Delete RAID Volume from the main menu.
5. From the Delete Volume menu, select the failed RAID volume, using the up
and down arrow keys.
6. Press the 'Delete' key to delete the volume, then 'Y' to confirm.
7. Create a new RAID 0 volume using the new disk. If the failed disk was part
of the system volume, you will also need to reinstall the operating system.
RAID 5
A RAID 5 volume is reported as failed when two or more of its members have failed.
Cause: Two or more array disks failed
Solution: In most cases, the volume cannot be recovered and any data on the volume is lost.
However, before deleting the volume, you can try resetting the disks to normal, and then attempt a
data recovery. If the read/write data access consistently fails, the disk will likely return to a failed
state immediately. Refer to Troubleshooting Disk Events for instructions on resetting a disk to normal.
This procedure deletes the failed volume:
RAID 10
A RAID 10 volume is reported as failed when two adjacent members are disconnected or have failed, or when
three or four of its members are disconnected or have failed.
Cause: Two adjacent array disks missing
Solution:
1. Power off your computer and reconnect the missing disks.
2. The rebuild operation will start automatically. You can follow the progress by hovering over the
notification area icon or by reviewing the volume status under 'Status' or 'Manage'.
Cause: Three or four array disks missing
Solution: In most cases, the volume cannot be recovered and any data on the volume is lost. This
procedure deletes the failed volume:
Cause: Two or more array disks failed
Solution: In most cases, the volume cannot be recovered and any data on the volume is lost.
However, before deleting the volume, you can try resetting the disks to normal, and then attempt a
data recovery. If the read/write data access consistently fails, the disk will likely return to a failed
state immediately. Refer to Troubleshooting Disk Events for instructions on resetting a disk to normal.
This procedure deletes the failed volume:
1. Power off your computer and replace the failed SATA disks with new ones that
are of equal or greater capacity.
2. Turn on your computer. During the system startup, the volume status will
display as 'Failed' in the Intel Rapid Storage Technology option ROM user
interface.
3. Press Ctrl-I to access the main menu of the option ROM user interface.
4. Select Delete RAID Volume from the main menu.
5. From the Delete Volume menu, select the failed RAID volume, using the up and
down arrow keys.
6. Press the 'Delete' key to delete the volume, then 'Y' to confirm.
7. Create a new RAID 10 volume using the new disks.
8. You will then need to reinstall the operating system on the new volume.
Cause: Recovery disk failed
Solution: We recommend that you rebuild the degraded volume to a new disk to return the volume
and overall storage system status to normal. However, you can try resetting the disk to normal, but
if the read/write data access consistently fails, the disk will likely return to a failed state immediately.
Refer to Troubleshooting Disk Events for instructions on resetting a disk to normal.
If a SATA disk is compatible, available and normal, follow this procedure to rebuild
the volume:
Note
If there is no available disk present, you will need to power off your computer and
connect a new SATA disk that is equal or greater capacity than the failed disk. Once
your computer is back up and running you can follow the rebuild procedure
described above.
Cause: Master disk missing
Solution: If you can reconnect the missing master disk, follow this procedure to recover data:
If you cannot reconnect the missing disk and a SATA disk is available and normal, follow this
procedure to rebuild the volume:
Note
Cause: Master disk failed
Solution: We recommend that you rebuild the degraded volume to a new disk to return the volume
and overall storage system status to normal. However, you can try resetting the disk to normal, but
if the read/write data access consistently fails, the disk will likely return to a failed state immediately.
To reset the failed master disk and the volume to normal, follow this procedure:
1. Under 'Status', click 'Reset disk to normal'. Note that the volume is now
running off the recovery disk, and that the master disk is reported as offline.
2. Under 'Status', in the Manage subsection, click 'Recover data' or click the
recovery volume in the storage system view, and then click 'Recover data'.
Warning
Starting this action will override existing data on the master disk and update it
with the data on the recovery disk. Backup all valuable data before continuing.
If a SATA disk is compatible, available and normal, follow this procedure to rebuild
the volume:
Note
If there is no available disk present, you will need to power off your computer and
connect a new SATA disk. Once rebuilt, the recovery volume will be limited to its
original size even if the new disk is larger than the original master disk. Once your
computer is back up and running you can follow the rebuild procedure described
above.
RAID 1
A RAID 1 volume is reported as degraded when one of its members is disconnected or has failed. Data
mirroring and redundancy are lost because the system can only use the functional member.
RAID 5
A RAID 5 volume is reported as degraded when one of its members is disconnected or has failed. When two or
more array disks are disconnected or have failed, the volume is reported as failed.
RAID 10
Cause: Missing array disk
Solution: If you can reconnect the missing disk, follow this procedure to rebuild the volume:
If you cannot reconnect the missing disk and a SATA disk is available and normal, follow this procedure to rebuild the volume:
Note: If there is no available disk present, you will need to power off your computer and connect a new SATA disk of equal or greater capacity than the failed disk. Once your computer is back up and running, you can follow the rebuild procedure described above.

Cause: Failed array disk
Solution: We recommend that you rebuild the degraded volume to a new disk to return the volume and overall storage system status to normal. However, you can try resetting the disk to normal, which will prompt the volume to start rebuilding automatically. But if the read/write data access consistently fails, the disk will likely return to a failed state immediately and you will need to rebuild the volume to another disk.
If a SATA disk is compatible, available, and normal, follow this procedure to rebuild the volume:
1. Under 'Status', click 'Rebuild to another disk'.
2. Select the disk you want to use to rebuild the volume, and then click 'Rebuild'.
3. The rebuild operation starts immediately. You can follow the progress by hovering over the notification area icon or by reviewing the volume status under 'Status' or 'Manage'.
4. Once the operation has successfully completed, the array disk and volume status will display as 'Normal'.
Note: If there is no available disk present, you will need to power off your computer and connect a new SATA disk of equal or greater capacity than the failed disk. Once your computer is back up and running, you can follow the rebuild procedure described above.
Repeat this procedure for all locked disks included in the volume in order to unlock
the volume.
Note: If all the disks included in a volume are locked, the volume is no longer displayed.
Incompatible
Cause: The volume was moved to another system that does not support the volume type and configuration.
Solution: In this situation, volume data is accessible to the operating system and can be backed up, but the volume cannot operate because your system does not support its RAID configuration.

Here are your options:
1. Reconnect the volume to the computer where the volume was originally created, and continue using it.
2. Delete the volume, and then create a new volume with a RAID configuration that is supported by the current system. Follow the procedure described above to delete the volume.

Warning
When a volume is deleted, all existing data on the member disks of the selected volume is permanently erased. It is recommended that you back up all valuable data prior to beginning this action.
Unknown
Cause: The volume is in an unexpected state due to a configuration error.
Solution: The application is unable to detect the exact nature of the problem. Try restarting your computer. If the error persists, back up all valuable data and delete the volume using the option ROM user interface. Refer to the user’s manual accessible from the Online Support area for details on using the option ROM.
1. Under 'Status' or 'Manage', in the storage system view, click the recovery
volume or the recovery disk. The element properties are now displayed on the
left.
2. Click 'Hide Files' from Manage Disk or 'Hide recovery disk files' from Manage
Volume.
3. The Windows Explorer window closes.
You can resume data updates by clicking ‘Update data’ under Manage Volume. To
copy the latest changes to the recovery disk automatically, change the update
mode to continuous from the same area.
Refer to the 'Running off recovery disk' procedure above to recover data to the
master disk.
Missing volume
Cause Solution
Status: At risk
Cause: An impending error condition was detected on an internal or external disk and is now at risk of failure.
Solution: The application is detecting early warning signs of failure with a SATA disk that result from a slow degradation over time. When a disk is reported at risk, you can reset that disk to normal, but we recommend that you contact the manufacturer for more information to prevent potential data loss. Follow this procedure to reset the disk to normal:

Cause: An unexpected error was detected on a disk that has RAID configuration data (metadata) on it.
Solution: In this state, it is likely that some or all of the disk data is inaccessible. After backing up any accessible data, you will need to clear the metadata and reset the disk to return to a normal state.

Status: Missing
Cause: An array disk is not present or physically connected to the computer.
Solution: Ensure that the disk is securely connected to the SATA port and that the SATA cable is functioning properly. If the disk is lost or cannot be reconnected, you will need to connect a new SATA disk, and then rebuild the volume to that new disk. Refer to Degraded or Failed Volumes in this section for instructions on how to rebuild a volume.

Cause: The recovery or master disk files have been accessed and display in Windows Explorer*.
Solution: Hide the recovery or master disk files to return the disk status to offline and resume data updates in on request mode.

Status: Failed
Cause: An internal or external disk has failed to properly complete read and write operations in a timely manner, and it has exceeded its recoverable error threshold.
Solution: Back up your data and we recommend that you replace the disk as soon as possible. If the failed disk is an array disk, the volume will be reported as degraded or failed depending on its configuration. Refer to Degraded or Failed Volumes in this section for instructions on resolving the problem.
In a failed state, disk data may be lost, but you can try resetting the disk to normal, and then attempt a data recovery. Follow this procedure to reset the failed disk to normal:
Note: If the failed array disk is part of a redundant volume, the volume will start rebuilding automatically as soon as the disk is reset to normal.

Status: Offline
Cause: An internal or external array disk is locked and data on that disk cannot be read.
Solution: We recommend that you unlock the disk to make the volume data fully accessible. If more than one array disk is locked, unlock all those disks to unlock the volume.

Cause: Your computer is running on battery and data updates to the recovery disk are not available as long as that disk is offline.
Solution: Reconnect your computer to the power supply in order to return the recovery disk to a normal state.
Cause
The solid state disk was removed or the disk is present but cannot be detected.
Solution
The application provides the option to clear the metadata on the array disks or previously accelerated disk and
reset these disks to a normal state.
1. Under Status, in the Manage subsection, click ‘Clear and reset’ next to each array disk reported as
offline. You can also perform this action under ‘Manage’ by clicking any offline disk reported in the
storage system view.
2. Click ‘Yes’ to confirm.
3. The array disk now displays as an available disk in a normal state and can be used to create a new volume.
Solution
Early warning signs of failure with the solid state disk are detected that result from a slow degradation over
time. When a disk used as a cache device is reported at risk, you can reset that disk to normal or replace the
solid state disk after resetting it to available.
Regardless of which option you choose, we recommend that you contact the manufacturer for more
information to prevent potential data loss.
1. Under ‘Status’, in the Manage subsection, locate the disk reported as at risk. You can also perform
this action from Manage Disk, which is accessible by clicking the failing disk in the storage system
view.
2. Click 'Reset disk to normal'. The page refreshes instantly, returning to a normal state.
3. The cache volume should also return to a normal state and caching activity should resume.
Completing this action clears the event on the disk and does not delete existing data. However, ignoring early
warning signs of disk failure may result in data loss.
1. If a compatible spare is detected, the volume rebuild operation will start automatically. Once the
process is complete, the cache volume will display in a normal state and caching activity will resume.
2. If no compatible spare is detected, the acceleration mode will automatically transition to enhanced in
order to avoid data loss. You can then follow the procedures described above to return the solid
state disk and cache volume to normal.
Solution
Back up any recoverable data and replace the solid state disk as soon as possible. In a failed state, disk data may be lost, but you can try recovering it by resetting the disk to normal.
1. In the Manage subsection, under ‘Status’, locate the disk reported as failed. Alternately, perform this
action from Manage Disk, accessible by clicking the disk in the storage system view.
2. Click 'Reset disk to normal'. The page refreshes instantly, returning to a normal state.
If the disk operations continue to fail, the disk will return to a failed state immediately and should be replaced.
Follow this procedure:
1. Click ‘Accelerate’.
2. Click 'Reset to available'.
3. In the dialog, select the check box to confirm that you understand that data on the cache and data
volumes will be deleted.
4. Click 'Yes' to confirm.
5. The page refreshes and the storage system displays the solid state disk usage as available.
6. Power off your computer and replace the failed solid state disk with an operational one.
7. Power on your computer. To resume the caching activity, enable acceleration again.
If acceleration was in maximized mode prior to being automatically disabled, the disk or volume previously
associated with the cache will be reported as failed if the data cleaning was unsuccessful.
If data cleaning was successful, once the mode transition is complete, the accelerated disk or volume
previously associated with the cache will be reported as normal.
Solution
If the disk or volume can be reconnected:
1. Power off your computer and reconnect the missing disk or volume.
2. Restart your computer.
3. Once the operating system is running, open the application.
If the disk or volume cannot be reconnected, follow this procedure to disassociate the cache and the missing
device:
1. Click ‘Accelerate’.
2. Click ‘Disassociate’.
3. Click ‘Yes’ to confirm.
4. The page refreshes and you can now select another disk or volume to accelerate.
Solution
Refer to Troubleshooting Disk Events, Failed Volumes, or Degraded Volumes for detailed procedures on fixing the issue.
If you cannot fix the issue reported on the accelerated disk or volume, follow this procedure to disassociate the cache from that disk or volume:
1. Click ‘Accelerate’.
2. Click ‘Disassociate’.
3. Click ‘Yes’ to confirm.
4. The page refreshes and you can now select another disk or volume to accelerate.
Message: The Intel® Rapid Storage Technology service cannot be started in safe mode.
Cause: Your computer was started in safe mode and the operating system is running with a limited set of files and drivers. Intel Rapid Storage Technology cannot start or run in safe mode.
Solution: Once you are done troubleshooting application or driver problems in safe mode, exit safe mode, restart your computer, and let the operating system start normally. The Intel Rapid Storage Technology service can then be started, and you can open the application.

Message: Multiple users cannot run the application at the same time.
Cause: One or more users are attempting to open the application while an instance of the application is already running.
Solution: Make sure only one instance of the application is running at a time.

Message: An error occurred due to insufficient resources, and the operation could not be completed. Please try again later.
Cause: The Intel® Rapid Storage Technology driver does not have sufficient resources to execute the request. Another operation may be in progress and needs to complete before the driver can handle a new request.
Solution: Wait a few moments, then try performing the action again.

Message: An unknown error occurred during the volume creation process. Please try recreating the volume.
Cause: An unexpected error occurred during the operation, and the application cannot identify its origin. The volume could not be created.
Solution: Verify that your hardware is properly connected and try recreating the volume.

Message: An error occurred while an operation was in progress. The operation could not be completed.
Cause: An unexpected error occurred during an operation, such as a data migration or a rebuild, and the application cannot identify its origin.
Solution: Restart the operation. If the error persists, try restarting your computer and then the operation.

Message: An error occurred and the selected disk or volume could not be accelerated. Please restart your computer, and then try the operation again.
Cause: The cache memory allocation was likely increased to use the full solid state disk capacity (up to 64 GB) while enabling acceleration.
Solution: Follow these steps to accelerate a disk or volume:
1. Restart your computer to complete the process of allocating the requested cache size.
2. Launch the application.
3. Try enabling acceleration again by clicking 'Enable acceleration'.
EFI_DEVICE_PATH_PROTOCOL
For each logical disk that is exposed by the SATA RAID UEFI Driver, an
EFI_DEVICE_PATH_PROTOCOL shall be created.
The Device Path Protocol for each logical disk shall be appended to the PCI SATA Controller Device
Path.
The fields of the EFI_DEVICE_PATH_PROTOCOL shall be filled out differently depending on whether
the device is an ODD or an HDD.
Field: Length
- ODD: 10
- HDD: 10
Field: HBA Port Number
- ODD: Port ID bitmap (bit #n set if the device is on port #n)
- HDD: Port ID bitmap (bit #n set if the logical device contains device ID #n); the Least Significant Bit (LSB) represents port 0.
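As an illustrative sketch only (not the Intel SATA RAID UEFI driver's actual implementation), the following shows how such a SATA device-path node could be built and appended to the PCI SATA controller device path using the EDK II device-path library. The helper name BuildLogicalDiskDevicePath and the use of 0xFFFF in PortMultiplierPortNumber (directly attached device) are assumptions made for the example.

// Minimal sketch: build the 10-byte SATA device-path node described in the table
// above and append it to the controller device path (EDK II MdePkg names).
#include <Uefi.h>
#include <Protocol/DevicePath.h>
#include <Library/DevicePathLib.h>

EFI_DEVICE_PATH_PROTOCOL *
BuildLogicalDiskDevicePath (
  IN EFI_DEVICE_PATH_PROTOCOL  *ControllerPath,   // device path of the PCI SATA controller
  IN UINT16                    PortIdBitmap       // bit #n set per the ODD/HDD rules above
  )
{
  SATA_DEVICE_PATH  SataNode;

  SataNode.Header.Type    = MESSAGING_DEVICE_PATH;   // Type 3
  SataNode.Header.SubType = MSG_SATA_DP;             // SATA sub-type; node length is 10 bytes
  SetDevicePathNodeLength (&SataNode.Header, sizeof (SATA_DEVICE_PATH));

  SataNode.HBAPortNumber            = PortIdBitmap;  // port ID bitmap, LSB = port 0
  SataNode.PortMultiplierPortNumber = 0xFFFF;        // directly attached device (assumption)
  SataNode.Lun                      = 0;

  // Appends a copy of the node to a copy of the controller path; the caller frees the result.
  return AppendDevicePathNode (ControllerPath, (EFI_DEVICE_PATH_PROTOCOL *) &SataNode);
}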
The Intel® RST driver does not support application development that requires direct interface access to the driver via a private API. However, RST does support applications developed to interface with the driver via the industry-standard Common Storage Management Interface (CSMI) specification. Only a subset of the total command set is supported; the commands supported by Intel® RST are listed below, and a usage sketch follows the list. For more detailed information on the specification, see the http://www.t11.org/ website. The document number for the specification is 04-468v0.
BASE IOCTLs:
CC_CSMI_SAS_GET_DRIVER_INFO
CC_CSMI_SAS_GET_CNTLR_CONFIG
CC_CSMI_SAS_GET_CNTLR_STATUS
RAID IOCTLs:
CC_CSMI_SAS_GET_RAID_INFO
CC_CSMI_SAS_GET_RAID_CONFIG
The following CSMI IOCTLs are supported for both SATA and PCIe AHCI devices:
CC_CSMI_SAS_GET_PHY_INFO
CC_CSMI_SAS_GET_SATA_SIGNATURE
CC_CSMI_SAS_STP_PASSTHRU
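As an illustration only, the sketch below shows how a Windows application might issue the base IOCTL CC_CSMI_SAS_GET_DRIVER_INFO through the standard IOCTL_SCSI_MINIPORT path defined by the CSMI specification. The device name "\\.\Scsi0:", the timeout value, and the structure and macro names (taken from the csmisas.h header that accompanies the specification) are assumptions to be verified against the specification document; this is not Intel-provided sample code.

// Illustrative sketch: query RST driver information via the CSMI base IOCTL
// CC_CSMI_SAS_GET_DRIVER_INFO (CSMI specification, T11 doc 04-468v0).
#include <windows.h>
#include <ntddscsi.h>      // IOCTL_SCSI_MINIPORT, SRB_IO_CONTROL
#include <stdio.h>
#include <string.h>
#include "csmisas.h"       // CSMI_SAS_DRIVER_INFO_BUFFER, CC_CSMI_SAS_GET_DRIVER_INFO, CSMI_ALL_SIGNATURE

int main(void)
{
    // Open the storage controller exposed by the RST driver; the instance number may vary.
    HANDLE h = CreateFileA("\\\\.\\Scsi0:", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    CSMI_SAS_DRIVER_INFO_BUFFER buf;
    ZeroMemory(&buf, sizeof(buf));

    // Common SRB_IO_CONTROL header required for every CSMI IOCTL.
    buf.IoctlHeader.HeaderLength = sizeof(SRB_IO_CONTROL);
    buf.IoctlHeader.Length       = sizeof(buf) - sizeof(SRB_IO_CONTROL);
    buf.IoctlHeader.Timeout      = 60;                            // seconds
    buf.IoctlHeader.ControlCode  = CC_CSMI_SAS_GET_DRIVER_INFO;   // base IOCTL class
    memcpy(buf.IoctlHeader.Signature, CSMI_ALL_SIGNATURE,
           sizeof(buf.IoctlHeader.Signature));

    DWORD returned = 0;
    BOOL ok = DeviceIoControl(h, IOCTL_SCSI_MINIPORT,
                              &buf, sizeof(buf), &buf, sizeof(buf),
                              &returned, NULL);

    if (ok && buf.IoctlHeader.ReturnCode == 0) {   // 0 == CSMI_SAS_STATUS_SUCCESS
        printf("Driver name: %s\n", (const char *)buf.Information.szName);
    } else {
        fprintf(stderr, "CSMI request failed (ReturnCode=%lu)\n",
                (unsigned long)buf.IoctlHeader.ReturnCode);
    }

    CloseHandle(h);
    return 0;
}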
2. When SEDs are enabled using HDD passwords via the ATA Security command set, following the restrictions outlined in the previous section.
3. When SEDs are Opal or eDrive enabled using third-party software, following the configuration and support outlined in the Opal Drive Support and eDrive Support sections.
It is suggested that drives be encrypted before enabling SRT. This ensures that the SRT cache is populated with encrypted data.
Doc #551377: Skylake-H Client Platform SPI Programming Guide — please download the latest version of the ME firmware package for the latest version.
Doc #545659: Skylake Platform Controller Hub (SKL PCH) External Design Specification, Volume 1 of 2 — Tables 1-3, 25-3 of Chapter 3; http://www.intel.com/cd/edesign/library/asmo-na/eng/545659.htm
Reference: Doc #549921: Skylake Platform Controller Hub (PCH), H and LP BIOS Specification
Table 33-1: SPT-H HSIO lanes 15-26 and Intel RST PCIe Storage device mapping
PCIe Controller #3 (Cycle Router #1) serves HSIO lanes 15-18 and is exposed as Intel RST PCIe Storage Device #1; PCIe Controller #4 (Cycle Router #2) serves lanes 19-22 as Device #2; PCIe Controller #5 (Cycle Router #3) serves lanes 23-26 as Device #3.
- H170**, Q170, Z170, C236: all three Intel RST PCIe Storage Devices are available.
- HM170, QM170: per the table, each of these SKUs exposes two PCIe-only lanes, six PCIe/SATA flex lanes, and four lanes marked N/A across lanes 15-26; see the referenced EDS tables for the exact per-lane labels.
** H170 supports only a maximum of 2 remapped PCIe devices (controllers) of the 3 available.

Table 33-2: SPT-LP HSIO lanes and Intel RST PCIe Storage device mapping
- Prem-U: lanes are labeled PCIe, PCIe, PCIe/SATA, PCIe/SATA, PCIe, PCIe, PCIe/SATA, PCIe/SATA; both the Intel RST PCIe Storage Device #1 and Device #2 ranges are available.
- Prem-Y: lanes are labeled PCIe, PCIe, PCIe/SATA, PCIe/SATA, PCIe, PCIe, N/A, N/A; the Prem-Y range covers a subset of the Prem-U range (the last two lanes are N/A).
Remapping Rules
NOTE: When remapping is enabled, the HSIO lanes that are associated with the Cycle Router that is
enabled for remapping are dedicated to be used only for Intel RST PCIe Storage (PCIE
NVME/AHCI or SATA devices controlled by RST). Those four lanes associated with the
remapped Cycle Router cannot be used for any other purpose.
For an HSIO lane to be available for RST PCIe Storage remapping, it must have the 'PCIe' label for that SKU in Tables 33-1 and 33-2 in this section (e.g., lanes 23 and 24 cannot be remapped for RST on the H170 SKU because they are labeled 'SATA' only). The 'SATA' label indicates that the lane can only be mapped as SATA 3.0. Lanes labeled 'PCIe/SATA' are flexible and can be mapped as PCIe, mapped as SATA, or remapped as RST PCIe Storage.
Reference: Document #546717: "Skylake H Platform Controller Hub (SKL PCH) External Design Specification Volume 1 of 2", Table 1-5
Reference: Document #545659: "Skylake Platform Controller Hub (SKL PCH) External Design Specification Volume 1 of 2", Table 1-3
Once an HSIO lane has been remapped for Intel RST PCIe Storage, any SATA port assigned to that lane is no longer available for use (e.g., if a x4 has been remapped on PCH-H lanes 19 and 22, then SATA 2 and 3 of the PCH are no longer available to be used).
[Diagram: SPT-H HSIO lane map for Intel RST PCIe Storage remapping, HSIO lanes 15-26. The original figure shows, for each lane, the PCH PCIe root port number, the SATA port assignment (SATA 0-5, including the SATA 0/1 'Alternate' options), the possible x2 and x4 remapping groupings with their Port Index (PI), and the resulting Intel RST PCIe Storage Device #1-#3 ranges for the H170**, Q170, Z170, C236 SKUs and the HM170, QM170 SKUs.]
** H170 supports only a maximum of 2 remapped PCIe devices (controllers) of the 3 available.
Notes:
- SATA 6/7 of the CM/C236 SKU are not available in remapping mode.
- SATA 5 is unavailable for use when a x2 PCIe device is configured on HSIO lanes 25 and 26.
- PI = Port Index: RST storage port number enumeration (e.g., PI 7 = port 0-7-0-0; SATA 2 = PI 2 = port 0-2-0-0).
[Diagrams: supported Intel RST PCIe Storage remapping configurations on SPT-H (PCH-H). Each diagram shows which PCH PCIe port(s) carry the remapped device(s) (Device #1-#3), the lane width (x2 or x4), the Port Index (PI) of each remapped device, and which SATA ports remain available for the other RST storage devices. The configurations shown cover:
- one remapped x2 device on PCIe Port 9, 11, 13, 15, 17, or 19;
- two remapped x2 devices on Port pairs 9+13, 9+15, 9+17, 9+19, 11+13, 11+15, 11+17, 11+19, 13+17, 13+19, 15+17, and 15+19;
- three remapped x2 devices on Ports 9+13+17, 9+13+19, 9+15+17, 9+15+19, 11+13+17, 11+13+19, 11+15+17, and 11+15+19;
- one remapped x4 device on PCIe Port 9, 13, or 17;
- two remapped x4 devices on Ports 9+13, 9+17, or 13+17;
- three remapped x4 devices on Ports 9+13+17;
- mixed x2/x4 combinations (two- and three-device) on the same set of PCIe Ports 9-19.
In each configuration, SATA 4/5 and several of the PCIe+SATA combinations are marked N/A for the HM170 and QM170 SKUs, and a number of PCIe+SATA configurations are additionally marked N/A for H170. Refer to the original diagrams and the EDS tables referenced above for the exact SATA availability of each configuration.]
KEY
PI = Port Index: RST storage port number enumeration (e.g., PI 7 = port 0-7-0-0; SATA 2 = PI 2 = port 0-2-0-0).
[Diagrams: supported Intel RST PCIe Storage remapping configurations on SPT-LP (Premium-U). Each diagram shows the remapped PCIe port(s) carrying Device #1/#2, the lane width (x2 or x4), the Port Index (PI) of each remapped device, and the SATA ports (SATA 0-2, including the SATA 1 'Alternate' option) that remain available for additional RST storage devices. The configurations shown cover: one remapped x2 device on PCIe Port 5, 7, 9, or 11; one remapped x4 device on PCIe Port 5 or 9; and two-device combinations (x2+x2 or x4+x2) on Port pairs 5+9, 5+11, 7+9, and 7+11. Refer to the original diagrams for the exact SATA availability of each configuration.]
PI = Port Index: RST storage port number enumeration (e.g., PI 6 = port 0-6-0-0; SATA 2 = PI 2 = port 0-2-0-0).
[Diagrams: supported Intel RST PCIe Storage remapping configurations on SPT-LP Premium-Y (HSIO lanes 9-14), including Config Y03-1x20x4. Each diagram shows the remapped PCIe port(s) carrying Device #1/#2, the lane width (x2 or x4), the Port Index (PI), and the SATA ports (SATA 0/1) that remain available for additional RST storage devices. The configurations shown cover: one remapped x2 device on PCIe Port 5, 7, or 9; one remapped x4 device on PCIe Port 5; two remapped x2 devices on Port pairs 5+9 and 7+9; and an x4+x2 combination on Ports 5+9. Refer to the original diagrams for the exact SATA availability of each configuration.]
35.7.1 Example #1: SPT-H HM170 SKU With 1x2 + 1x4 + 1 SATA
Customer Design Requirement (Example of SKU Dependency):
Customer wishes to design an HM170 with the following specs for Intel RST support:
- 1 x2 PCIe Storage Device
- 1 x4 PCIe Storage Device
- 1 SATA 3.0 Storage Device
There are four possible configurations for 1 x4 + 1 x2 Intel® RST PCIe Storage Devices on the SPT-H HM170 SKU:

Configuration 1:
- Device #1: 1 remapped PCIe Device on PCIe Port 9
- Device #2: 1 remapped PCIe Device on PCIe Port 13
- Device #3: SATA 2 or SATA 3
- Device #4: Available, not required
- Devices #5 and #6: Not Available

Configuration 2:
- Device #1: 1 remapped x4 PCIe Device on PCIe Port 9
- Device #2: 1 remapped x2 PCIe Device on PCIe Port 15
- Device #3: SATA 0 or SATA 1
- Device #4: Available, not required
- Devices #5 and #6: Not Available

Configuration 3:
- Device #1: 1 remapped x2 PCIe Device on PCIe Port 9
- Device #2: 1 remapped x4 PCIe Device on PCIe Port 13
- Devices #3 through #6: Not Available

Configuration 4:
- Device #1: 1 remapped x2 PCIe Device on PCIe Port 11
- Device #2: 1 remapped x4 PCIe Device on PCIe Port 13
- Device #3: SATA 0 or SATA 1
- Device #4: Available, not required
- Devices #5 and #6: Not Available
1) Config H38-1x21x4 does not meet the requirement on the HM170 SKU, since no SATA ports are available in that PCIe + SATA configuration.
2) Conclusion: the SKL SPT-H HM170 SKU with RST remapped PCIe Storage has three possible configurations (Configs H34-1x21x4, H35-1x21x4, and H39-1x21x4) that meet this customer design requirement.
End