Upgrading controller hardware on a pair of nodes running clustered Data ONTAP 8.3 by moving volumes
In this procedure, you join the new nodes to the cluster and move all the nonroot volumes from the original nodes to the new
nodes.
This procedure does not result in loss of client access to data, provided that all volumes are moved from the original system's
storage.
You are upgrading a pair of nodes running clustered Data ONTAP 8.3 in an HA pair to a new pair of nodes that have not
been used, and that are running clustered Data ONTAP 8.3 in an HA pair.
Note: Both the original and new controllers must be running the same major version of clustered Data ONTAP before
the upgrade. For example, you can upgrade nodes running Data ONTAP 8.3 with Data ONTAP 8.2.1 or later without
upgrading Data ONTAP. In contrast, you cannot upgrade nodes running releases prior to Data ONTAP 8.1.x to nodes
running Data ONTAP 8.3 without first upgrading Data ONTAP on the original nodes.
You are moving all the volumes from the original nodes to the new nodes.
If you are in a SAN environment, you have a supported multipathing solution running on your host.
If you are replacing a single, failed node, use the appropriate controller-replacement procedure instead of this procedure.
If you are replacing an individual component, see the field-replaceable unit (FRU) flyer for that component on the NetApp
Support Site.
This procedure uses the term boot environment prompt to refer to the prompt on a node from which you can perform certain
tasks, such as rebooting the node and printing or setting environment variables.
The prompt is shown in the following example:
LOADER>
Note: Most Data ONTAP platforms released before Data ONTAP 8.2.1 were released as separate FAS and V-Series
hardware systems (for example, a FAS6280 and a V6280). Only the V-Series systems (a V or GF prefix) could attach to
storage arrays. Starting in Data ONTAP 8.2.1, only one hardware system is being released for new platforms. These new
platforms, which have a FAS prefix, can attach to storage arrays if the required licenses are installed. These new platforms
are the FAS80xx and FAS25xx systems.
This document uses the term systems with FlexArray Virtualization Software to refer to systems that belong to these new
platforms and the term V-Series system to refer to the separate hardware systems that can attach to storage arrays.
Steps
1.
2.
3.
4.
A V-Series system or system with FlexArray Virtualization Software to a V-Series system or system with FlexArray
Virtualization Software
A V-Series system or system with FlexArray Virtualization Software to a FAS system, provided that the V-Series system or
system with FlexArray Virtualization Software has no array LUNs
The following table displays the upgrades that you can perform for each model of controller in Data ONTAP 8.3:
Original controllers
Replacement controllers
FAS2552A
FAS2554A
32xx, 62xx
FAS8080
Note: If your FAS80xx controllers are running Data ONTAP 8.3 or later and one or both are All-Flash FAS models, make
sure that both controllers have the same All-Flash Optimized personality set:
system node show -instance node_name
Both nodes must either have the personality enabled or disabled; you cannot combine a node with the All-Flash Optimized
personality enabled with a node that does not have the personality enabled in the same HA pair. If the personalities are
different, refer to KB Article 1015157 in the NetApp Knowledge Base for instructions on how to sync node personality.
Maximum cluster size
During the procedure, you cannot exceed the maximum cluster size for the Data ONTAP release.
The cluster can be as small as a single HA pair (two controllers). When you move volumes, the controller count increases by two because of the newly added destination HA pair. For this reason, the maximum controller count supported in this procedure must be less than the maximum supported controller count for the version of Data ONTAP installed on the nodes.
Maximum cluster size also depends on the models of controllers that make up the cluster.
See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for information about cluster limits in
non-SAN environments; see the Clustered Data ONTAP SAN Configuration Guide for information about cluster limits in SAN
environments. See the Hardware Universe for information about the maximum number of nodes per cluster for SAN and NAS
environments.
Licensing in Data ONTAP 8.3
When you set up a cluster, the setup wizard prompts you to enter the cluster base license key. However, some features require
additional licenses, which are issued as packages that include one or more features. Each node in the cluster must have its own
key for each feature to be used in that cluster.
If you do not have new license keys, currently licensed features in the cluster will be available to the new controller. However,
using features that are unlicensed on the controller might put you out of compliance with your license agreement, so you should
install the new license key or keys for the new controller.
Starting with Data ONTAP 8.2, all license keys are 28 uppercase alphabetic characters in length. You can obtain new 28-character license keys for Data ONTAP 8.3 on the NetApp Support Site at mysupport.netapp.com. The keys are available in the
My Support section under Software licenses. If the site does not have the license keys you need, contact your NetApp sales
representative.
For detailed information about licensing, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators and the KB article Data ONTAP 8.2 and 8.3 Licensing Overview and References on the NetApp Support Site.
Storage Encryption
Storage Encryption is available in clustered Data ONTAP 8.2.1. The original nodes or the new nodes might already be enabled
for Storage Encryption. In that case, you need to take additional steps in this procedure to ensure that Storage Encryption is set
up properly.
If you want to use Storage Encryption, all the disk drives associated with the nodes must be self-encrypting disks.
Grounding strap
#2 Phillips screwdriver
2. Download from the NetApp Support Site at mysupport.netapp.com the documents that contain helpful information for the
upgrade.
Download the version of the document that matches the version of Data ONTAP that the system is running.
Document                                                      Contents
Clustered Data ONTAP 8.3 Upgrade and Revert/Downgrade Guide   Contains instructions for downloading and upgrading Data ONTAP.
Clustered Data ONTAP Physical Storage Management Guide
The NetApp Support Site includes the Hardware Universe, which contains information about the hardware that the new
system supports. The NetApp Support Site also includes documentation about disk shelves, NICs, and other hardware that
you might use with your system.
If you are upgrading a pair of nodes in a switchless cluster, you must have converted them to a switched cluster before
performing the upgrade procedure. See the Migrating from a switchless cluster to a switched Cisco Nexus 5596, Nexus 5020,
or Nexus 5010 cluster environment or the Migrating from a switchless cluster to a switched NetApp CN1610 cluster
environment on the NetApp Support Site at mysupport.netapp.com.
Steps
1. Make sure that the original nodes are running the same version of Data ONTAP as the new nodes; if they are not, update
Data ONTAP on the original nodes.
This procedure assumes that the new nodes have not previously been used and are running the desired version of Data
ONTAP. However, if the original nodes are running the desired version of Data ONTAP and the new nodes are not, you need
to update Data ONTAP on the new nodes.
See the Clustered Data ONTAP Upgrade and Revert/Downgrade Guide for instructions for upgrading Data ONTAP.
2. Check the Hardware Universe at hwu.netapp.com to verify that the existing and new hardware components are compatible
and supported.
3. Make sure that you have enough storage on the new nodes to accommodate storage associated with the original nodes.
If you do not have enough storage, add more storage to the new nodes before joining them to the cluster. See the Clustered
Data ONTAP Physical Storage Management Guide and the appropriate disk shelf guide.
4. If you plan to convert a FAS2240 to a disk shelf or move internal SATA drives or SSDs from a FAS2220 system, enter the
following command and capture the disk name and ownership information in the output:
storage disk show
See the Clustered Data ONTAP Commands: Manual Page Reference for more information about the storage disk show
command.
5. Prepare a list of all the volumes that you want to move from the original nodes, whether those volumes are found on internal
storage, on attached disk shelves, or on array LUNs (if you have a V-Series system or system with FlexArray Virtualization
Software) that are not supported by the new nodes.
You can use the volume show command to list the volumes on the nodes. See the Clustered Data ONTAP Commands:
Manual Page Reference.
6. Obtain IP addresses, mail host addresses, and other information for the Service Processors (SPs) on the new nodes.
You might want to reuse the network parameters of the remote management devicesRemote LAN Managers (RLMs) or
SPsfrom the original system for the SPs on the new nodes.
For detailed information about the SPs, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators and the Clustered Data ONTAP Commands: Manual Page Reference.
7. If you want the new nodes to have the same licensed functionality as the original nodes, enter the following command to see
which licenses are on the original system and examine its output:
system license show
8. Send an AutoSupport message to NetApp for the original nodes by entering the following command, once for each node:
system node autosupport invoke -node node_name -type all -message "Upgrading node_name from
platform_original to platform_new"
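For example, assuming hypothetical node and platform names (substitute your own values):
system node autosupport invoke -node node1 -type all -message "Upgrading node1 from FAS3250 to FAS8060"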
Contact technical support to perform an optional step to preserve the security of the encrypted drives by rekeying all drives to a
known authentication key.
Steps
The nodeshell is a special shell for commands that take effect only at the node level.
2. Display the status information to check for disk encryption:
disk encrypt show
Example
The system displays the key ID for each self-encrypting disk, as shown in the following example:
node> disk encrypt show
Disk      Key ID                                                            Locked?
0c.00.1   0x0                                                               No
0c.00.0   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.3   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.4   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.2   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.5   080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
...
If you get the following error message, proceed to the Preparing for netboot section; if you do not get an error message,
continue with these steps.
node> disk encrypt show
ERROR: The system is not configured to run this command.
3. Examine the output of the disk encrypt show command, and if any disks are associated with a non-MSID key, rekey
them to an MSID key by taking one of the following actions:
To rekey disks individually, enter the following command, once for each disk:
disk encrypt rekey 0x0 disk_name
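For example, to rekey the disk 0c.00.3 shown in the earlier output (substitute your own disk names):
disk encrypt rekey 0x0 0c.00.3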
4. Verify that all the self-encrypting disks are associated with an MSID:
disk encrypt show
Example
The following example shows the output of the disk encrypt show command when all self-encrypting disks are
associated with an MSID:
node> disk encrypt show
Disk       Key ID                                                            Locked?
---------- ----------------------------------------------------------------  -------
0b.10.23   0x0                                                               No
0b.10.18   0x0                                                               No
0b.10.0    0x0                                                               Yes
0b.10.12   0x0                                                               Yes
0b.10.3    0x0                                                               No
0b.10.15   0x0                                                               No
0a.00.1    0x0                                                               Yes
0a.00.2    0x0                                                               Yes
When you install the new nodes, you must make sure that you properly configure them for high availability. See the Clustered
Data ONTAP High-Availability Configuration Guide in addition to the appropriate Installation and Setup Instructions and
cabling guide.
Steps
1. Install the new nodes and their disk shelves in a rack, following the instructions in the appropriate Installation and Setup
Instructions.
Note: If the original nodes have attached disk shelves and you want to migrate them to the new system, do not attach them
to the new nodes at this point. You need to move the volumes from the original nodes' disk shelves and then remove them
from the cluster along with the original nodes.
You must have completed the Cluster Setup worksheet in the Clustered Data ONTAP Software Setup Guide before turning on
the power to the new nodes.
Steps
Upgrading controller hardware on a pair of nodes running clustered Data ONTAP 8.3 by moving volumes
This node has its management address assigned and is ready for cluster setup.
To complete cluster setup after all nodes are ready, download and run the System Setup
utility from the NetApp Support Site and use it to discover the configured nodes.
For System Setup, this node's management address is: <management interface port>.
IP addresses. If the cluster interfaces are not correctly configured they will not be able to join the cluster.
4. To join the node to the cluster, follow these steps:
a. Enter join at the prompt to add the node to the cluster:
join
b. Enter:
yes
6. After the Cluster Setup wizard is completed and exits, verify that the node is healthy and eligible to participate in the cluster
by completing the following substeps:
a. Log in to the cluster.
b. Enter the following command to display the status of the cluster:
cluster show
Example
The following example shows a cluster after the first new node (node2) has been joined to the cluster:
cluster::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node0                 true    true
node1                 true    true
node2                 true    true
Make sure that the output of the cluster show command shows that the two new nodes are part of the same cluster and
are healthy.
c. The Cluster Setup wizard assigns the node a name that consists of cluster_name-node_number. If you want to
customize the name, enter the following command:
system node rename -node current_node_name -newname new_node_name
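For example, assuming the wizard assigned the hypothetical name cluster1-2 and you want to rename the node to node2:
system node rename -node cluster1-2 -newname node2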
Enabling storage failover on one node of an HA pair automatically enables it on the partner node.
9. Verify that storage failover is enabled by entering the following command:
storage failover show
Example
The following example shows the output when nodes are connected to each other and takeover is possible:
cluster::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
node0          node1          true     Connected to node1
node1          node0          true     Connected to node0
node2          node3          true     Connected to node3
node3          node2          true     Connected to node2
1. Take one of the following actions:
If you are...                 Then...
In a SAN environment          Complete Step 2 and go to the section Creating SAN LIFs on the new nodes.
In a NAS environment          Skip both Step 2 and the section Creating SAN LIFs on the new nodes, and go to the section Creating an aggregate.
2. Verify that all the nodes are in quorum by entering the following command on one of the nodes:
event log show -messagename scsiblade.*
Example
The following example shows the output when the nodes in the cluster are in quorum:
cluster::> event log show -messagename scsiblade.*
Time                Node             Severity       Event
------------------- ---------------- -------------- ---------------------------
8/13/2012 14:03:51  node0            INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
8/13/2012 14:03:51  node1            INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
8/13/2012 14:03:48  node2            INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
8/13/2012 14:03:43  node3            INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
You need to create at least two SAN LIFs on each node for each Storage Virtual Machine (SVM).
Steps
1. Determine if any iSCSI or FCP LIFs on the original nodes were members of port sets by entering the following command on
one of the original nodes and examining its output:
lun portset show
Example
The following example displays the output of the command, showing the port sets and LIFs (port names) for an SVM named
vs1:
cluster::> lun portset show
Virtual
Server    Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
vs1       ps0          mixed    LIF1,                   igroup1
                                LIF2
          ps1          iscsi    LIF3                    igroup2
          ps2          fcp      LIF4                    -
3 entries were displayed.
2. To list the mappings between the LUNs and initiator groups, enter the lun mapping show command:
lun mapping show
Example
cluster::> lun mapping show
Vserver  Path                                          Igroup    LUN ID   Protocol
-------- --------------------------------------------  -------   ------   --------
vs1      /vol/vol1/lun1                                igroup1        1   mixed
vs1      /vol/vol1/lun1                                igroup2        4   mixed
vs1      /vol/vol2/lun2                                igroup3       10   mixed
3 entries were displayed.
The following example shows the current node and HA partner for the LUN mapping of /vol/vol1/lun2 being added to
igroup igroup1:
cluster::> lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol1/lun2 -igroup
igroup1
4. Use the lun mapping remove-reporting-nodes command to remove the original nodes from the reporting nodes of an existing LUN mapping:
lun mapping remove-reporting-nodes -vserver vserver_name -path lun_path -igroup igroup_name
Example
The following command removes excess remote nodes from the LUN mapping of /vol/vol1/lun1 to igroup igroup1:
lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup igroup1
5. Create SAN LIFs on the new nodes by entering the following command, once for each LIF:
If LIF type is...
Enter...
iSCSI
FCP
For detailed information about creating LIFs, see the Clustered Data ONTAP Network Management Guide and the Clustered
Data ONTAP Commands: Manual Page Reference.
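As a sketch, and assuming hypothetical LIF names, home ports, and IP addresses (substitute your own values), SAN LIFs are typically created with the network interface create command, specifying the appropriate data protocol:
network interface create -vserver vs1 -lif iscsi_lif_new1 -role data -data-protocol iscsi -home-node node2 -home-port e0c -address 192.0.2.130 -netmask 255.255.255.0
network interface create -vserver vs1 -lif fc_lif_new1 -role data -data-protocol fcp -home-node node2 -home-port 0a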
6. If port sets exist, add the new SAN LIFs to the port sets by entering the following command, once for each LIF:
lun portset add -vserver vserver_name -portset portset_name -port-name lif_name
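For example, to add a hypothetical new LIF named iscsi_lif_new1 to the port set ps1 of SVM vs1:
lun portset add -vserver vs1 -portset ps1 -port-name iscsi_lif_new1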
If FC is enabled, display information about FC initiators that are currently logged in on the new nodes by entering the
following command:
vserver fcp initiator show -vserver vvol_vs -lif lif_name
The following example displays information about logged-in FC initiators for an SVM named vvol_vs1:
cluster::> fcp initiator show -vserver vvol_vs1 -lif fcp_lif_2,fcp_lif_3
          Logical    Port
Vserver   Interface  Address  Initiator WWNN           Initiator WWPN           Igroup
--------- ---------- -------- ------------------------ ------------------------ ------
vvol_vs   fcp_lif_2  dd0800   20:01:00:1b:32:2a:f6:b0  21:01:00:1b:32:2a:f6:b0
vvol_vs   fcp_lif_2  dd0700   20:01:00:1b:32:2c:ae:0c  21:01:00:1b:32:2c:ae:0c
vvol_vs   fcp_lif_3  dd0800   20:01:00:1b:32:2a:f6:b0  21:01:00:1b:32:2a:f6:b0
vvol_vs   fcp_lif_3  dd0700   20:01:00:1b:32:2c:ae:0c  21:01:00:1b:32:2c:ae:0c
4 entries were displayed.
If iSCSI is enabled, display information about iSCSI initiators that are currently logged in on the new nodes by entering
the following command, once for each new LIF:
iscsi connection show -vserver vserver_name -lif new_lif
The following example displays information about a logged-in initiator for an SVM named vs1:
cluster::> iscsi connection show
             Tpgroup
Vserver      Name           TSIH
------------ -------------  -----
vs1          data1          10
You might need to set up the initiators to discover paths through the new nodes. Steps for setting up initiators vary depending
on the operating system of your host. See the host utilities documentation for your host computer for specific instructions.
See the Clustered Data ONTAP SAN Configuration Guide and the Clustered Data ONTAP SAN Administration Guide for
additional information about initiators.
Creating an aggregate
You create one or more aggregates on each new node to provide storage for the volumes that you will move from the internal disk drives of the original nodes or from any disk shelves attached to the original nodes. Third-party array LUNs can also be used to create aggregates if the configuration is a diskless FAS system that has a FAV license installed.
Before you begin
The new nodes must have enough storage to accommodate the aggregates.
You must know which disks will be used in the new aggregates.
You must know the source aggregates, the number and kind of disks, and the RAID setup on the original nodes.
You can specify disks by listing their IDs, or by specifying a disk characteristic such as type, size, or speed. Disks are owned by
a specific node; when you create an aggregate, all disks in that aggregate must be owned by the same node, which becomes the
home node for that aggregate.
If the nodes are associated with more than one type of disk and you do not explicitly specify what type of disks to use, Data
ONTAP creates the aggregate using the disk type with the highest number of available disks. To ensure that Data ONTAP uses
the disk type that you expect, always specify the disk type when creating aggregates from heterogeneous storage.
You can display a list of the available spares by using the storage disk show -spare command. This command displays
the spare disks for the entire cluster. If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. You can ensure that the aggregate is created on a specific node by using the node option or
by specifying the disks that are owned by that node.
Aggregate names must meet the following requirements:
Steps
1. Create at least one aggregate by entering the following command on one of the new nodes, once for each aggregate that you
want to create:
storage aggregate create -aggregate aggr_name -node new_node_name -diskcount integer
You must specify the node and the disk count when you create the aggregates.
The parameters of the command depend on the size of the aggregate needed. See the Clustered Data ONTAP Commands:
Manual Page Reference for detailed information about the storage aggregate create command.
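For example, the following command uses a hypothetical aggregate name, node name, and disk count to create a 24-disk aggregate on the new node node3:
storage aggregate create -aggregate aggr_new1 -node node3 -diskcount 24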
2. Repeat Step 1 on the other new node.
3. Verify the RAID group and disks of your new aggregate:
storage aggregate show -aggregate aggr_name
If you are moving a data protection mirror and you have not initialized the mirror relationship, you must initialize it by using the snapmirror initialize command before proceeding. If you try to move an uninitialized volume, the operation might fail with an error message because no Snapshot copies are available until the volume is initialized.
Data protection mirror relationships must be initialized before you can move one of the volumes.
See the Clustered Data ONTAP Data Protection Guide and the Clustered Data ONTAP Commands: Manual Page Reference for
detailed information about initializing data protection.
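For example, assuming a hypothetical SVM named vs1 and an uninitialized data protection volume named vol1_dr, the relationship could be initialized as follows:
snapmirror initialize -destination-path vs1:vol1_dr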
You must have prepared the list of volumes that you want to move, which you were directed to do in Step 5 of Preparing for the
upgrade.
You must have reviewed the requirements for and restrictions on moving volumes in the Clustered Data ONTAP Logical
Storage Management Guide.
Note: You must review the guidelines in the Clustered Data ONTAP Logical Storage Management Guide if you receive error
messages when trying to use the volume move command to move FlexClone, load-sharing, temporary, or FlexCache
volumes.
About this task
The data from the original volume is copied to the new volume.
During this time, the original volume is intact and available for clients to access.
After the move is completed, the system routes client traffic to the new volume and client access resumes.
Moving volumes is not disruptive to client access because the time in which client access is blocked ends before clients notice a
disruption and time out. Client access is blocked for 45 seconds by default. If the volume move operation cannot finish in the
time that access is denied, the system aborts this final phase of the volume move operation and allows client access.
The system runs the final phase of the volume move operation until the volume move is complete or until the default of three
attempts is reached. If the volume move operation fails after the third attempt, the process goes into a cutover-deferred state and
waits for you to initiate the final phase.
If the defaults are not adequate, you can change the amount of time that client access is blocked or the number of cutover attempts, that is, the number of times the final phase of the volume move operation is run. You also can determine what the system does if
the volume move operation cannot be completed during the time client access is blocked. See the Clustered Data ONTAP
Commands: Manual Page Reference for detailed information about the volume move command.
Note: In Data ONTAP 8.3, when you move volumes, an individual controller should not be involved in more than 16
simultaneous volume moves at a time. The 16-simultaneous volume move limit includes volume moves where the controller
is either the source or destination of the operation. For example, if you are upgrading NodeA with NodeC and NodeB with
NodeD, up to 16 simultaneous volume moves can take place from NodeA to NodeC, and at the same time 16 simultaneous
volume moves can take place from NodeB to NodeD.
Note: If you want to use the volume move command on an Infinite Volume, you need to contact technical support for
assistance.
Note: You need to move the volumes from an original node to the new node that is replacing it. That is, if you replace NodeA
with NodeC and NodeB with NodeD, you must move volumes from NodeA to NodeC and volumes from NodeB to NodeD.
Steps
1. Display information for the volumes that you want to move from the original nodes to the new nodes:
volume show -vserver vserver_name -node original_node_name
Example
The following example shows the output of the command for volumes on an SVM named vs1 and a node named node0:
cluster::> volume show -vserver vs1 -node node0
Vserver   Volume       Aggregate    State      Type   Size  Available  Used%
--------- ------------ ------------ ---------- ---- ------ ---------- -----
vs1       clone        aggr1        online     RW     40MB    37.87MB     5%
vs1       vol1         aggr1        online     RW     40MB    37.87MB     5%
vs1       vs1root      aggr1        online     RW     20MB    18.88MB     5%
3 entries were displayed.
Capture the information in the output, which you will need in Step 4.
Note: If you are moving one volume at a time, complete Step 1 through Step 6 once for each volume that you move: all the steps for one volume, then all the steps for the next volume, and so on. If you are moving multiple volumes at the same time, complete Step 1 through Step 6 once for each group of volumes that you move, keeping within the limit of 16 simultaneous volume moves for Data ONTAP 8.3 or 25 simultaneous volume moves for Data ONTAP 8.1.
2. Identify an aggregate on the corresponding new node to which you can move each volume:
volume move target-aggr show -vserver vserver_name -volume volume_name
Example
The output in the following example shows that the SVM vs2 volume user_max can be moved to any of the listed aggregates:
cluster::> volume move target-aggr show -vserver vs2 -volume user_max
Aggregate Name     Available Size   Storage Type
--------------     --------------   ------------
aggr2              467.9GB          FCAL
node12a_aggr3      10.34GB          FCAL
node12a_aggr2      10.36GB          FCAL
node12a_aggr1      10.36GB          FCAL
node12a_aggr4      10.36GB          FCAL
5 entries were displayed
3. Run a validation check on the volume to ensure that it can be moved to the intended aggregate by entering the following
command for the volume that you want to move:
volume move start -vserver vserver_name -volume volume_name -destination-aggregate
destination_aggregate_name -perform-validation-only true
4. Move the volumes by entering the following command for the volume that you want to move:
volume move start -vserver vserver_name -volume vol_name -destination-aggregate
destination_aggr_name -cutover-window integer
You need to be in advanced mode to use the -cutover-window and -cutover-action parameters with the volume
move start command.
You must enter the command once for each volume that you want to move from the original nodes to the new nodes,
including SVM root volumes.
The -cutover-window parameter specifies the time interval in seconds to complete cutover operations from the original
volume to the moved volume. The default is 45 seconds, and the valid time intervals can range from 30 to 300 seconds,
inclusive.
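For example, the following commands, which use hypothetical volume and aggregate names, enter advanced mode and then move vol1 with a 60-second cutover window:
set -privilege advanced
volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr_new1 -cutover-window 60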
5. Check the outcome of the vol move command:
volume move show -vserver vserver_name -volume vol_name
6. If the volume move operation does not complete the final phase after three attempts and goes into a cutover-deferred state,
enter the following command to try to complete the move:
volume move trigger-cutover -vserver vserver_name -volume vol_name -force true
Forcing the volume move operation to finish can disrupt client access to the volume you are moving.
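For example, to force cutover of the hypothetical volume vol1 on SVM vs1:
volume move trigger-cutover -vserver vs1 -volume vol1 -force true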
1. Enter the following command, once for each SVM, and examine the output to make sure the volumes are in the correct
aggregate:
volume show -vserver Vserver_name
Example
The following example shows the output for the command entered for an SVM named vs1:
cluster::> volume show -vserver vs1
Vserver   Volume       Aggregate    State      Type   Size  Available  Used%
--------- ------------ ------------ ---------- ---- ------ ---------- -----
vs1       vol1         aggr1        online     RW      2GB      1.9GB     5%
vs1       vol1_dr      aggr0_dp     online     DP    200GB    160.0GB    20%
vs1       vol2         aggr0        online     RW    150GB    110.3GB    26%
vs1       vol2_dr      aggr0_dp     online     DP    150GB    110.3GB    26%
vs1       vol3         aggr1        online     RW    150GB    120.0GB    20%
2. If any volumes are not in the correct aggregate, move them again by following the steps in the section Moving volumes from
the original nodes.
3. If you are unable to move any volumes to the correct aggregate, contact technical support.
Moving non-SAN data LIFs and cluster management LIFs from the original nodes to the new
nodes
After you have moved the volumes from the original nodes, you need to migrate the non-SAN data LIFs and cluster-management LIFs from the original nodes to the new nodes.
About this task
You should execute the command for migrating a cluster-management LIF from the node where the cluster LIF is hosted.
You cannot migrate a LIF if that LIF is used for copy-offload operations with VMware vStorage APIs for Array Integration
(VAAI).
For more information about VMware VAAI, see the Clustered Data ONTAP File Access Management Guide for CIFS or the
Clustered Data ONTAP File Access Management Guide for NFS.
You should use the network interface show command to see where the cluster-management LIF resides.
If the cluster-management LIF resides on one of the controllers that is being decommissioned, you need to migrate the LIF
to a new controller.
Steps
1. Modify the home ports for the non-SAN data LIFs from the original nodes to the new nodes by entering the following
command, once for each LIF:
network interface modify -vserver vserver_name -lif lif_name -home-node new_node_name -home-port netport|ifgrp
If you use the same port on the destination node, you do not need to specify the home port.
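For example, the following command modifies the home port of a hypothetical LIF named datalif1 on SVM vs0 to port e0d on the new node node0b:
network interface modify -vserver vs0 -lif datalif1 -home-node node0b -home-port e0d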
2. Take one of the following actions:
If you want to migrate...                               Then enter...
A specific LIF                                          network interface migrate -vserver vserver_name -lif lif_name -destination-node new_node_name -destination-port port_name
All the data and cluster-management LIFs from a node    network interface migrate-all -node node_name
Example
The following command migrates a LIF named datalif1 on the SVM vs0 to the port e0d on node0b:
cluster::> network interface migrate -vserver vs0 -lif datalif1 -destination-node node0b -destination-port e0d
The following command migrates all the data and cluster-management LIFs from the current (local) node:
cluster::> network interface migrate-all -node local
3. Check whether the cluster-management LIF home node is on one of the original nodes by entering the following command
and examining its output:
network interface show -lif cluster_mgmt -fields home-node
4. Take one of the following actions, based on the output of the command in Step 3:
If the cluster-management LIF home node...       Then...
Is on one of the original nodes                  Switch the home node of the cluster-management LIF to one of the new nodes:
                                                 network interface modify -vserver cluster_name -lif cluster_mgmt -home-node new_node_name -home-port netport|ifgrp
                                                 Note: If the cluster-management LIF is not on one of the new nodes, you will not be able to unjoin the original nodes from the cluster.
Is not on one of the controllers being           No action is required for the cluster-management LIF.
decommissioned
If you are in a SAN environment, or if both SAN and NAS are enabled, complete the section Deleting SAN LIFs from the
original nodes and then go to the section Unjoining the original nodes from the cluster.
If you are in a NAS environment, skip the section Deleting SAN LIFs from the original nodes and go to the section
Unjoining the original nodes from the cluster.
1. Take one of the following actions:
If the original nodes have...    Then...
iSCSI initiators                 Go to Step 2.
FC initiators                    Go to Step 4.
2. Enter the following command to display a list of active initiators currently connected to an SVM on the original nodes, once
for each of the old LIFs:
iscsi connection show -vserver Vserver_name -lif old_lif
Example
The following example shows the output of the command with an active initiator connected to SVM vs1:
cluster::> iscsi connection show
             Tpgroup
Vserver      Name           TSIH
------------ -------------  -----
vs1          data2          9
3. Examine the output of the command in Step 2 and then take one of the following actions:
If the output of the command in Step 2...    Then...
Shows active initiators connected to the     On your host computer, log out of any sessions on the original controller.
original nodes                               Instructions vary depending on the operating system on your host. See the host utilities
                                             documentation for your host for the correct instructions.
Does not show any active initiators          Go to Step 4.
4. Display the port sets on the original nodes by entering the following command:
lun portset show
Example
The following example shows output of the lun portset show command:
cluster::> lun portset show
Virtual
Server    Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
js11      ps0          mixed    LIF1,                   igroup1
                                LIF2
          ps1          iscsi    LIF3                    igroup2
          ps2          fcp      LIF4
3 entries were displayed.
See the Clustered Data ONTAP SAN Administration Guide for information about port sets and the Clustered Data ONTAP
Commands: Manual Page Reference for detailed information about portset commands.
5. Examine the output of the lun portset show command to see if any iSCSI or FC LIFs on the original nodes belong to
any port sets.
6. If any iSCSI or FC LIFs on either node being decommissioned are members of a port set, remove them from the port sets
by entering the following command, once for each LIF:
lun portset remove -vserver vserver_name -portset portset_name -port-name lif_name
Note: You need to remove the LIFs from port sets before you delete the LIFs.
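For example, based on the port set output shown earlier, the following command removes the LIF named LIF3 from port set ps1 on the SVM js11:
lun portset remove -vserver js11 -portset ps1 -port-name LIF3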
7. Delete the LIFs on the original nodes by entering the following command, once per LIF:
network interface delete -vserver vserver_name -lif lif_name
1. Disable high-availability by entering the following command at the command prompt of one of the original nodes:
storage failover modify -node original_node_name -enabled false
2. Access the advanced privilege level by entering the following command on either node:
set -privilege advanced
3. Enter y.
4. Find the node that has epsilon by entering the following command and examining its output:
cluster show
The system displays information about the nodes in the cluster, as shown in the following example:
cluster::*> cluster show
Node                 Health  Eligibility  Epsilon
-------------------- ------- ------------ ------------
node0                true    true         true
node1                true    true         false
node2                true    true         false
node3                true    true         false
If one of the original nodes has the epsilon, then the epsilon needs to be moved to one of the new nodes. If another node in
the cluster has the epsilon, you do not need to move it.
5. If necessary, move the epsilon to one of the new nodes by entering the following commands:
cluster modify -node original_node_name -epsilon false
cluster modify -node new_node_name -epsilon true
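For example, if node0 (one of the original nodes) currently holds epsilon and node2 is one of the new nodes:
cluster modify -node node0 -epsilon false
cluster modify -node node2 -epsilon true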
6. Enter the following command from one of the new nodes for both of the original nodes:
cluster unjoin -node original_node_name
Enter y.
You need to log in to both unjoined nodes to perform Steps 7 and 8.
Note: The cluster unjoin command should be invoked only from one of the new nodes. If you enter it on the node that is being unjoined, you may see the following error message:
Error: command failed: Cannot unjoin a node on which the unjoin command is
invoked. Please connect to any other node in the cluster to unjoin this
node.
7. The node boots and stops at the boot menu, as shown here:
This node was removed from a cluster. Before booting, use option (4)
to initialize all disks and setup a new system.
Normal Boot is prohibited.
Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)?
Enter 4.
8. The system displays the following messages:
Zero disks, reset config and install a new file system?:
This will erase all the data on the disks, are you sure?:
Then...
a.
b.
More than two nodes
Go to Step 11.
Go to Step 11.
11. Before you turn off power to any disk shelves attached to the original nodes and move them, make sure that the disk
initialization started in Step 7 through Step 9 is complete.
12. Turn off power to the original nodes and then unplug them from the power source.
13. Remove all cables from the original nodes.
14. If you plan to reuse attached disk shelves from the original nodes on the new nodes, cable the disk shelves to the new nodes.
15. Remove the original nodes and their disk shelves from the rack if you do not plan to reuse the hardware in its original
location.
Deleting aggregates and removing disk ownership from the original nodes' internal storage
If you want to convert a FAS2240 system to a disk shelf or move internal SATA drives or SSDs from a FAS22xx system, you must
delete the old aggregates from the original nodes' internal storage before completing the upgrade. You also must remove disk
ownership from the original system's internal disks.
Before you begin
You must have completed the previous tasks in this upgrade procedure.
About this task
If you do not want to convert a FAS2240 system to disk shelves or move internal SATA drives or SSDs from a FAS2220
controller, go to the section Completing the upgrade.
Note: You do not need to delete aggregates or remove ownership from disks in external shelves that you plan to migrate to the
new system.
Steps
1. Verify that there are no data volumes associated with the aggregates to be destroyed by entering the following command,
once for each aggregate:
volume show -aggregate aggr_name
If volumes are associated with the aggregates to be destroyed, repeat the steps in the section Moving volumes from the
original nodes.
2. If there are any aggregates on the original nodes, delete them by entering the following command, once for each aggregate:
storage aggregate delete -aggregate aggregrate_name
[Job 43] Job is queued: Delete aggr1.DBG: VOL_OFFLINE: tree aggr1, new_assim=0,
assim_gen=4291567353, creating=0, has_vdb=true
[Job 43] deleting aggregate aggr1 ... DBG: VOL_OFFLINE: tree aggr1, new_assim=0,
assim_gen=4291567353, creating=0, has_vdb=true
DBG:vol_obj.c:1823:volobj_offline(aggr2): clear VOL_FLAG_ONLINE
DBG:config_req.c:9921:config_offline_volume_reply2(aggr2): clr VOL_FLAG_ONLINE
[Job 43] Job succeeded: DONE
4. Verify all the old aggregates are deleted by entering the following command and examining its output:
storage aggregate show -aggregate aggr_name1,aggr_name2...
The system should display the message shown in the following example:
cluster::> storage aggregate show -aggregate aggr1,aggr2
There are no entries matching your query.
5. Remove ownership from the original system's internal disks by entering the following command, once for each disk:
storage disk removeowner -disk disk_name
Refer to the disk information that you captured in the section Preparing for the upgrade.
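For example, assuming a hypothetical internal disk name of the form described below (take the actual names from the output you captured earlier):
storage disk removeowner -disk 00.11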
6. Verify that ownership has been removed for all of the internal disks by entering the following command and examining its
output:
storage disk show
Internal drives have 00. at the beginning of their ID. The 00. indicates an internal disk shelf, and the number after the
decimal point indicates the individual disk drive.
1. Configure the SPs by entering the following command on both of the new nodes:
system node service-processor network modify
See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for more information.
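As a sketch, assuming hypothetical network values, the SP on a new node can be given a static IPv4 address as follows:
system node service-processor network modify -node node3 -address-family IPv4 -enable true -ip-address 192.0.2.25 -netmask 255.255.255.0 -gateway 192.0.2.1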
2. Set up AutoSupport by following the instructions in the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
3. Install new licenses for the new nodes by entering the following command for each node:
system license add -license-code license_code,license_code,license_code...
In Data ONTAP 8.3, you can add one license at a time, or you can add multiple licenses at the same time, with each license key separated by a comma.
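For example, with a placeholder 28-character license key (substitute the keys you obtained from the NetApp Support Site):
system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA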
4. To remove all of the old licenses from the original nodes, enter one of the following commands:
system license clean-up -unused -expired
system license delete -serial-number node_serial_number -package licensable_package
To delete a specific license from a cluster, enter the following commands on the nodes:
system license delete -serial-number <node1 serial number> -package *
system license delete -serial-number <node2 serial number> -package *
You might want to compare the output with the output that you captured in Step 7 of the section Preparing for the upgrade.
6. Send a postupgrade AutoSupport message to NetApp by entering the following command, once for each node:
system node autosupport invoke -node node_name -type all -message "node_name successfully
upgraded from platform_old to platform_new"
7. If you have a two-node cluster running Data ONTAP 8.3 and you want to set up a switchless cluster on the new nodes,
follow the instructions in Transitioning to a two-node switchless cluster on the NetApp Support Site.
You must have moved all the volumes from the original nodes and completed all of the upgrade procedure through the section
Completing the upgrade.
About this task
You can reuse FAS2240 nodes by converting them to disk shelves and attaching them to the new system. You can transfer SATA
disk drives or SSDs from FAS22xx nodes and install them in disk shelves attached to the new nodes.
Step
Then...
Go to the section Setting up Storage Encryption on the new nodes if the new system has
encryption-enabled disks; otherwise go to the section Decommissioning the old system.
FAS2240 system
Go to the section Converting the FAS2240 system to a disk shelf and attaching it to the new
system.
Upgrading controller hardware on a pair of nodes running clustered Data ONTAP 8.3 by moving volumes
Steps
1.
2.
3.
4.
5.
6.
You must have done the following before proceeding with this section:
Made sure that the SATA or SSD drive carriers from the FAS2220 system are compatible with the new disk shelf
Check the Hardware Universe on the NetApp Support Site for compatible disk shelves
Made sure that there is a compatible disk shelf attached to the new system
Made sure that the disk shelf has enough free bays to accommodate the SATA or SSD drive carriers from the FAS2220
system
You cannot transfer SAS disk drives from a FAS2220 system to a disk shelf attached to the new nodes.
Steps
The cam handle on the carrier springs open partially, and the carrier releases from the midplane.
3. Pull the cam handle to its fully open position to unseat the carrier from the midplane and gently slide the carrier out of the
disk shelf.
Attention: Always use two hands when removing, installing, or carrying a disk drive. However, do not place your hands
4. With the cam handle in the open position, insert the carrier into a slot in the new disk shelf, firmly pushing until the carrier
stops.
Caution: Use two hands when inserting the carrier.
5. Close the cam handle so that the carrier is fully seated in the midplane and the handle clicks into place.
Be sure you close the handle slowly so that it aligns correctly with the face of the carrier.
6. Repeat Step 2 through Step 5 for all of the disk drives that you are moving to the new system.
Converting the FAS2240 system to a disk shelf and attaching it to the new system
After you complete the upgrade, you can convert the FAS2240 system to a disk shelf and attach it to the new system to provide
additional storage.
Before you begin
You must have upgraded the FAS2240 system before converting it to a disk shelf. The FAS2240 system must be powered down
and uncabled.
Steps
1. Replace the controller modules in the FAS2240 system with IOM6 modules.
2. Set the disk shelf ID.
Each disk shelf, including the FAS2240 chassis, requires a unique ID.
3. Reset other disk shelf IDs as needed.
4. Turn off power to any disk shelves connected to the new nodes, and then turn off power to the new nodes.
5. Cable the converted FAS2240 disk shelf to a SAS port on the new system, and, if you are using ACP cabling, to the ACP
port on the new system.
Note: If the new system does not have a dedicated onboard network interface for ACP for each controller, you must
dedicate one for each controller at system setup. See the Installation and Setup Instructions for the new system and the
Universal SAS and ACP Cabling Guide for cabling information. Also consult the Clustered Data ONTAP High-Availability Configuration Guide.
6. Turn on power to the converted FAS2240 disk shelf and any other disk shelves attached to the new nodes.
7. Turn on the power to the new nodes and then interrupt the boot process on each node by pressing Ctrl-C to access the boot
environment prompt.
have a FAV license installed. You do not need to perform this task if you did not move external disk shelves from the original
system to the new one.
About this task
You need to perform the steps in this section on both nodes, completing each step on one node and then the other node before
going on to the next step.
Steps
1. Boot Data ONTAP on the new node by entering the following command at the boot environment prompt:
boot_ontap maint
2. On the new node, display the new node system ID by entering the following command at the Maintenance mode prompt:
disk show
Example
*> disk show
Local System ID: 101268854
...
When you run the disk reassign command on node1, the -s parameter is node1 (original_sysid), the -p parameter
is node2 (partner_sysid), and node3 is the -d parameter (new_sysid):
disk reassign -s node1_sysid -d node3_sysid -p node2_sysid
When you run the disk reassign command on node2, the -p parameter is node 3's partner_sysid. The disk
reassign command will reassign only those disks for which original_sysid is the current owner.
To obtain the system ID for the nodes, use the sysconfig command.
Example
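The following command is a sketch that uses the new node's system ID shown above as the -d value and hypothetical system IDs for the original node (-s) and its partner (-p); substitute the values from your systems:
*> disk reassign -s 118065481 -d 101268854 -p 118065611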
5. Enter y.
6. Enter y.
7. If the node is in the FAS22xx, FAS25xx, FAS32xx, FAS62xx, or FAS80xx family, verify that the controller and chassis are
configured as ha by entering the following command and observing the output:
ha-config show
Example
The following example shows the output of the ha-config show command:
*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha
FAS22xx, FAS25xx, FAS32xx, FAS62xx, and FAS80xx systems record in a PROM whether they are in an HA pair or stand-alone configuration. The state must be the same on all components within the stand-alone system or HA pair.
If the controller and chassis are not configured as ha, use the ha-config modify controller ha and ha-config
modify chassis ha commands to correct the configuration.
8. Enter the following command at the Maintenance mode prompt:
halt
All the disks on the storage system must be encryption-enabled before you set up Storage Encryption on the new nodes.
About this task
You can skip this section if the system that you upgraded to does not have Storage Encryption enabled.
If you used Storage Encryption on the original system and migrated the disk shelves to the new system, you can reuse the SSL
certificates that are stored on migrated disk drives for Storage Encryption functionality on the upgraded system. However, you
should check that the SSL certificates are present on the migrated disk drives. If they are not present you will need to obtain
them.
Note: Step 1 through Step 3 are only the overall tasks required for configuring Storage Encryption. You need to follow the
detailed instructions for each task in the Clustered Data ONTAP Software Setup Guide.
Steps
1. Obtain and install private and public SSL certificates for the storage system and a private SSL certificate for each key
management server that you plan to use.
Requirements for obtaining the certificates and instructions for installing them are contained in the Clustered Data ONTAP
Software Setup Guide.
2. Collect the information required to configure Storage Encryption on the new nodes.
This includes the network interface name, the network interface IP address, and the IP address for external key management
server. The required information is contained in the Clustered Data ONTAP Software Setup Guide.
3. Launch and run the Storage Encryption setup wizard, responding to the prompts as appropriate.
4. If you have not done so, repeat Step 1 through Step 3 on the other new node.
After you finish
See the Clustered Data ONTAP Physical Storage Management Guide for information about managing Storage Encryption on
the updated system.
You must have the correct SFP+ modules for the CNA ports.
About this task
CNA ports can be configured into native Fibre Channel (FC) mode or CNA mode. FC mode supports FC initiator and FC target;
CNA mode allows concurrent NIC and FCoE traffic over the same 10-GbE SFP+ interface and supports FC target.
Note: NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI uses the term CNA.
CNA cards ordered when the controller is ordered are configured before shipment to have the personality you request.
CNA cards ordered separately from the controller are shipped with the default FC target personality.
Onboard CNA ports on new controllers are configured before shipment to have the personality you request.
However, you should check the configuration of the CNA ports on the node and change them, if necessary.
Steps
3. Check how the ports are currently configured by entering one of the following commands on one of the new nodes:
If the system you are upgrading...    Then...
Has storage disks                     system node hardware unified-connect show
Has array LUNs                        ucadmin show
The output lists, for each CNA adapter, the current mode and type, any pending mode and type, and the adapter status; in this example, all of the adapters are online.
4. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
5. Examine the output of the ucadmin show or system node hardware unified-connect show command and
determine whether the CNA ports have the personality you want.
6. Take one of the following actions:
If the CNA ports...                      Then...
Do not have the personality you want     Go to Step 7.
Already have the personality you want    Go to Step 9.
7. If the CNA adapter is online, take it offline by entering one of the following commands:
If the system that you are upgrading...    Then...
Has storage disks
8. If the current configuration does not match the desired use, enter the following commands to change the configuration as
needed:
If the system that you are
upgrading...
Then...
In either command:
9. Verify the settings by entering one of the following commands and examining its output:
If the system that you are
upgrading...
Then...
Example
The output in the following example shows that the FC4 type of adapter 1b is changing to initiator and that the mode of adapters 2a and 2b is changing to cna:
cluster1::> system node hardware unified-connect show
               Current  Current    Pending  Pending
Node  Adapter  Mode     Type       Mode     Type       Status
----  -------  -------  ---------  -------  ---------  ------
f-a   1a       fc       initiator  -        -          online
f-a   1b       fc       target     -        initiator  online
f-a   2a       fc       target     cna      -          online
f-a   2b       fc       target     cna      -          online
4 entries were displayed.
halt
If the information displayed about the system is...    Then...
Correct                                                 a. Select Decommission this system in the Product Tool Site drop-down menu.
                                                        b. Go to Step 5.
Not correct                                             Click the feedback link to open the form for reporting the problem.
5. On the Decommission Form page, fill out the form and click Submit.
If you want to be notified automatically when production-level documentation is released or important changes are made to
existing production-level documents, follow Twitter account @NetAppDoc.
You can also contact us in the following ways:
Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache,
FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate
ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or
registered trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp trademarks is
available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.