Revert ONTAP
ONTAP 9
NetApp
October 31, 2023
The information in this section will guide you through the steps you should take before and after you revert,
including the resources you should read and the necessary pre- and post-revert checks you should perform.
If you need to transition a cluster from ONTAP 9.1 to ONTAP 9.0, you must use the downgrade procedure
documented here.
Revert paths
The version of ONTAP that you can revert to varies based on the version of ONTAP
currently running on your nodes. You can use the system image show command to
determine the version of ONTAP running on each node.
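For reference, the output of the system image show command resembles the following sketch; the node names, versions, and dates are placeholders:

cluster1::> system image show
                    Is      Is                    Install
Node     Image      Default Current Version       Date
-------- ---------- ------- ------- ------------- -------------------
node0
         image1     true    true    9.9.1         10/12/2023 18:32:42
         image2     false   false   9.8           8/21/2023 16:11:05
node1
         image1     true    true    9.9.1         10/12/2023 18:32:42
         image2     false   false   9.8           8/21/2023 16:11:05
4 entries were displayed.

The Current column identifies the version each node is running now; the Default column identifies the image the node boots from.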
These guidelines refer only to on-premises ONTAP releases. For information about reverting ONTAP in the
cloud, see Reverting or downgrading Cloud Volumes ONTAP.
You can revert from… To…
ONTAP 9.9.1 ONTAP 9.8
If you need to change from ONTAP 9.1 to 9.0, follow the downgrade procedure
documented here.
The “Important cautions” section describes potential issues that you should be aware of before
downgrading or reverting.
3. Confirm that your cluster and management switches are supported in the target release.
You must verify that the NX-OS (cluster network switches), IOS (management network switches), and
reference configuration file (RCF) software versions are compatible with the version of ONTAP to which
you are reverting.
4. If your cluster is configured for SAN, confirm that the SAN configuration is fully supported.
All SAN components—including target ONTAP software version, host OS and patches, required Host
Utilities software, and adapter drivers and firmware—should be supported.
Revert considerations
You need to consider the revert issues and limitations before beginning an ONTAP
reversion.
• Reversion is disruptive.
No client access can occur during the reversion. If you are reverting a production cluster, be sure to include
this disruption in your planning.
The reversion affects all nodes in the cluster; however, the reversion must be performed and completed on
each HA pair before other HA pairs are reverted.
• The reversion is complete when all nodes are running the new target release.
When the cluster is in a mixed-version state, you should not enter any commands that alter the cluster
operation or configuration except as necessary to satisfy reversion requirements; monitoring operations are
permitted.
If you have reverted some, but not all of the nodes, do not attempt to upgrade the cluster
back to the source release.
• When you revert a node, it clears the cached data in a Flash Cache module.
Because there is no cached data in the Flash Cache module, the node serves initial read requests from
disk, which results in decreased read performance during this period. The node repopulates the cache as it
serves read requests.
• A LUN backed up to tape from a system running ONTAP 9.x can be restored only to ONTAP 9.x and later
releases, not to an earlier release.
• If your current version of ONTAP supports In-Band ACP (IBACP) functionality, and you revert to a version
of ONTAP that does not support IBACP, the alternate path to your disk shelf is disabled.
• If LDAP is used by any of your storage virtual machines (SVMs), LDAP referral must be disabled before
reversion.
• In MetroCluster IP systems using switches that are MetroCluster compliant but not MetroCluster
validated, reverting from ONTAP 9.7 to 9.6 is disruptive, because such switches are not supported with
ONTAP 9.6 and earlier.
Verify cluster health
Before you revert a cluster, you should verify that the nodes are healthy and eligible to participate in the cluster,
and that the cluster is in quorum.
1. Verify that the nodes in the cluster are online and are eligible to participate in the cluster: cluster show
If any node is unhealthy or ineligible, check EMS logs for errors and take corrective action.
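When the nodes are healthy and eligible, the output of cluster show resembles the following sketch (node names are placeholders):

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node0                 true    true
node1                 true    true
node2                 true    true
node3                 true    true
4 entries were displayed.

A false value in either column indicates a node that must be brought back to health before you proceed.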
2. Set the privilege level to advanced, entering y when prompted to continue: set -privilege advanced
cluster1::*> cluster ring show -unitname vldb
Node UnitName Epoch DB Epoch DB Trnxs Master Online
--------- -------- -------- -------- -------- --------- ---------
node0 vldb 154 154 14847 node0 master
node1 vldb 154 154 14847 node0 secondary
node2 vldb 154 154 14847 node0 secondary
node3 vldb 154 154 14847 node0 secondary
4 entries were displayed.
The most recent scsiblade event message for each node should indicate that the scsi-blade is in quorum.
Related information
System administration
To check for…
Disks undergoing maintenance or reconstruction

Do this…
a. Display any disks in maintenance, pending, or reconstructing states: storage disk show -state maintenance|pending|reconstructing
b. Wait for the maintenance or reconstruction operation to finish before proceeding.
2. Verify that all aggregates are online by displaying the state of physical and logical storage, including
storage aggregates: storage aggregate show -state !online
This command displays the aggregates that are not online. All aggregates must be online before and after
performing a major upgrade or reversion.
3. Verify that all volumes are online by displaying any volumes that are not online: volume show -state
!online
All volumes must be online before and after performing a major upgrade or reversion.
4. Verify that there are no inconsistent volumes: volume show -is-inconsistent true
See the Knowledge Base article Volume Showing WAFL Inconsistent for guidance on addressing inconsistent
volumes.
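When the cluster is ready for reversion, each of the checks above returns no entries, as in the following sketch:

cluster1::> storage aggregate show -state !online
There are no entries matching your query.

cluster1::> volume show -state !online
There are no entries matching your query.

cluster1::> volume show -is-inconsistent true
There are no entries matching your query.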
Related information
Disk and aggregate management
1. Verify that the cluster is associated with an NTP server: cluster time-service ntp server show
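Output resembling the following sketch confirms an NTP server association; the server name is a placeholder, and the exact columns can vary by release:

cluster1::> cluster time-service ntp server show
Server                         Version
------------------------------ -------
ntp1.example.com               auto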
2. Verify that each node has the same date and time: cluster date show
cluster1::> cluster date show
Node Date Timezone
--------- ------------------- -------------------------
node0 4/6/2013 20:54:38 GMT
node1 4/6/2013 20:54:38 GMT
node2 4/6/2013 20:54:38 GMT
node3 4/6/2013 20:54:38 GMT
4 entries were displayed.
1. Review the list of any running or queued aggregate, volume, or Snapshot jobs: job show
2. Delete any running or queued aggregate, volume, or Snapshot copy jobs: job delete -id job_id
3. Verify that no aggregate, volume, or Snapshot jobs are running or queued: job show
In this example, all running and queued jobs have been deleted:
cluster1::> job show
Owning
Job ID Name Vserver Node State
------ -------------------- ---------- -------------- ----------
9944 SnapMirrorDaemon_7_2147484678
cluster1 node1 Dormant
Description: Snapmirror Daemon for 7_2147484678
18377 SnapMirror Service Job
cluster1 node0 Dormant
Description: SnapMirror Service Job
2 entries were displayed.
Continuously available SMB shares, which are accessed by Hyper-V or Microsoft SQL Server clients using the
SMB 3.0 protocol, do not need to be terminated before upgrading or downgrading.
1. Identify any established SMB sessions that are not continuously available: vserver cifs session
show -continuously-available No -instance
This command displays detailed information about any SMB sessions that have no continuous availability.
You should terminate them before proceeding with the ONTAP downgrade.
cluster1::> vserver cifs session show -continuously-available No
-instance
Node: node1
Vserver: vs1
Session ID: 1
Connection ID: 4160072788
Incoming Data LIF IP Address: 198.51.100.5
Workstation IP address: 203.0.113.20
Authentication Mechanism: NTLMv2
Windows User: CIFSLAB\user1
UNIX User: nobody
Open Shares: 1
Open Files: 2
Open Other: 0
Connected Time: 8m 39s
Idle Time: 7m 45s
Protocol Version: SMB2_1
Continuously Available: No
1 entry was displayed.
2. If necessary, identify the files that are open for each SMB session that you identified: vserver cifs
session file show -session-id session_ID
Node: node1
Vserver: vs1
Connection: 4160072788
Session: 1
File    File      Open Hosting                Continuously
ID      Type      Mode Volume  Share          Available
------- --------- ---- ------- -------------- ------------
1       Regular   rw   vol10   homedirshare   No
Path: \TestDocument.docx
2       Regular   rw   vol10   homedirshare   No
Path: \file1.txt
2 entries were displayed.
NVMe/TCP secure authentication
If you are running the NVMe/TCP protocol and you have established secure authentication using DH-HMAC-
CHAP, you must remove any host using DH-HMAC-CHAP from the NVMe subsystem before you revert. If the
hosts are not removed, revert will fail.
Depending on your MetroCluster configuration, you need to consider certain factors before you revert. Begin
by reviewing the table below to identify any special considerations that apply to your configuration.
SnapMirror
• You must delete any SnapMirror Synchronous relationship in which the source volume is serving data
using NFSv4 or SMB.
• You must delete any SnapMirror Synchronous relationships in a mirror-mirror cascade deployment.
A mirror-mirror cascade deployment is not supported for SnapMirror Synchronous relationships in ONTAP
9.5.
• If the common Snapshot copies in ONTAP 9.5 are not available during revert, you must initialize the
SnapMirror Synchronous relationship after reverting.
Two hours after the upgrade to ONTAP 9.6, the common Snapshot copies from ONTAP 9.5 are automatically
replaced by the common Snapshot copies in ONTAP 9.6. Therefore, you cannot resynchronize the
SnapMirror Synchronous relationship after reverting if the common Snapshot copies from ONTAP 9.5 are
not available.
The system node revert-to command notifies you of any SnapMirror and SnapVault
relationships that need to be deleted or reconfigured for the reversion process to be
completed. However, you should be aware of these requirements before you begin the
reversion.
• All SnapVault and data protection mirror relationships must be quiesced and then broken.
After the reversion is completed, you can resynchronize and resume these relationships if a common
Snapshot copy exists.
• SnapVault relationships must not contain the following SnapMirror policy types:
◦ async-mirror
You must delete any relationship that uses this policy type.
◦ MirrorAndVault
If any of these relationships exist, you should change the SnapMirror policy to mirror-vault.
• The all_source_snapshot rule must be removed from any async-mirror type SnapMirror policies.
The Single File Snapshot Restore (SFSR) and Partial File Snapshot Restore (PFSR)
operations are deprecated on the root volume.
• Any currently running single file and Snapshot restore operations must be completed before the reversion
can proceed.
You can either wait for the restore operation to finish, or you can abort it.
• Any incomplete single file and Snapshot restore operations must be removed by using the snapmirror
restore command.
3. Undo the physical block sharing in all of the split FlexClone volumes across the cluster: volume clone
sharing-by-split undo start-all
4. Verify that there are no split FlexClone volumes with shared physical blocks: volume clone sharing-by-split show
cluster1::> volume clone sharing-by-split show
This table is currently empty.
1. Identify and delete all of the non-default qtrees in each FlexGroup volume that are enabled with the qtree
functionality:
a. Log in to the advanced privilege level: set -privilege advanced
b. Verify if any FlexGroup volume is enabled with the qtree functionality.
c. Delete all of the non-default qtrees in each FlexGroup volume that are enabled with the qtree
functionality: volume qtree delete -vserver svm_name -volume volume_name -qtree
qtree_name
If the qtree functionality is enabled because you modified the attributes of the default qtree and if you
do not have any qtrees, you can skip this step.
2. Disable the qtree functionality on each FlexGroup volume: volume flexgroup qtree-disable
-vserver svm_name -volume volume_name
cluster1::*> volume flexgroup qtree-disable -vserver vs0 -volume fg
3. Identify and delete any Snapshot copies that are enabled with the qtree functionality.
a. Verify if any Snapshot copies are enabled with the qtree functionality: volume snapshot show
-vserver vserver_name -volume volume_name -fields is-flexgroup-qtree-enabled
b. Delete all of the Snapshot copies that are enabled with the qtree functionality: volume snapshot
delete -vserver svm_name -volume volume_name -snapshot snapshot_name -force
true -ignore-owners true
The Snapshot copies that must be deleted include regular Snapshot copies and the Snapshot copies
taken for SnapMirror relationships. If you have created any SnapMirror relationship for the FlexGroup
volumes with a destination cluster that is running ONTAP 9.2 or earlier, you must delete all of the
Snapshot copies that were taken when the source FlexGroup volume was enabled for the qtree
functionality.
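The check in step a can be sketched as follows; the SVM, volume, and Snapshot copy names are placeholders:

cluster1::*> volume snapshot show -vserver vs0 -volume fg -fields is-flexgroup-qtree-enabled
vserver volume snapshot                 is-flexgroup-qtree-enabled
------- ------ ------------------------ --------------------------
vs0     fg     daily.2023-10-30_0010    false

Any Snapshot copy that reports true in the last column must be deleted as described in step b.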
Related information
FlexGroup volumes management
If you are going to…: Move the SMB server from the workgroup to an Active Directory domain
Then use this command…: vserver cifs modify -vserver vserver_name -domain domain_name
3. If you deleted the SMB server, enter the username of the domain, then enter the user password.
Related information
SMB management
If you have enabled both deduplication and data compression on a volume that you want to revert, then you
must revert data compression before reverting deduplication.
1. Use the volume efficiency show command with the -fields option to view the progress of the efficiency
operations that are running on the volumes.
The following command displays the progress of efficiency operations: volume efficiency show
-fields vserver,volume,progress
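The output of this check can be sketched as follows; the SVM and volume names are placeholders, and the progress text is representative:

cluster1::> volume efficiency show -fields vserver,volume,progress
vserver volume progress
------- ------ ----------------------
vs1     VolA   456 KB Scanned
vs1     VolB   Idle for 00:02:30
2 entries were displayed.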
2. Use the volume efficiency stop command with the -all option to stop all active and queued deduplication
operations.
The following command stops all active and queued deduplication operations on volume VolA: volume
efficiency stop -vserver vs1 -volume VolA -all
3. Use the set -privilege advanced command to log in at the advanced privilege level.
4. Use the volume efficiency revert-to command with the -version option to downgrade the efficiency
metadata of a volume to a specific version of ONTAP.
The following command reverts the efficiency metadata on volume VolA to ONTAP 9.x: volume
efficiency revert-to -vserver vs1 -volume VolA -version 9.x
The volume efficiency revert-to command reverts volumes that are present on the node on
which this command is executed. This command does not revert volumes across nodes.
5. Use the volume efficiency show command with the -op-status option to monitor the progress of the
downgrade.
The following command monitors and displays the status of the downgrade: volume efficiency show -vserver vs1 -op-status Downgrading
6. If the revert does not succeed, use the volume efficiency show command with the -instance option to see
why the revert failed.
The following command displays detailed information about all fields: volume efficiency show
-vserver vs1 -volume vol1 -instance
7. After the revert operation is complete, return to the admin privilege level: set -privilege admin
1. Disable Snapshot copy policies for all data SVMs: volume snapshot policy modify -vserver
* -enabled false
2. Disable Snapshot copy policies for each node’s aggregates:
a. Identify the node’s aggregates by using the run -node nodename aggr status command.
b. Disable the Snapshot copy policy for each aggregate: run -node nodename aggr options
aggr_name nosnap on
c. Repeat this step for each remaining node.
3. Disable Snapshot copy policies for each node’s root volume:
a. Identify the node’s root volume by using the run -node nodename vol status command.
You identify the root volume by the word root in the Options column of the vol status command
output.
b. Disable the Snapshot copy policy on the root volume: run -node nodename vol options
root_volume_name nosnap on
c. Repeat this step for each remaining node.
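The nodeshell commands in steps 2 and 3 can be sketched as follows; the node, aggregate, and root volume names are placeholders:

cluster1::> run -node node0 aggr options aggr0 nosnap on
cluster1::> run -node node0 vol options vol0 nosnap on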
4. Delete all Snapshot copies that were created after upgrading to the current release:
a. Set the privilege level to advanced: set -privilege advanced
b. Disable the snapshots: snapshot policy modify -vserver * -enabled false
c. Delete the node’s newer-version Snapshot copies: volume snapshot prepare-for-revert
-node nodename
This command deletes the newer-version Snapshot copies on each data volume, root aggregate,
and root volume.
If any Snapshot copies cannot be deleted, the command fails and notifies you of any required
actions you must take before the Snapshot copies can be deleted. You must complete the required
actions and then rerun the volume snapshot prepare-for-revert command before proceeding to the
next step.
Warning: This command will delete all Snapshot copies that have
the format used by the current version of ONTAP. It will fail if
any Snapshot copy policies are enabled, or
if any Snapshot copies have an owner. Continue? {y|n}: y
d. Verify that the Snapshot copies have been deleted: volume snapshot show -node nodename
If any newer-version Snapshot copies remain, force them to be deleted: volume snapshot
delete {-fs-version 9.0 -node nodename -is-constituent true} -ignore-owners -force
You must perform these steps on both clusters in a MetroCluster configuration.
During the revert, you will be prompted to run the advanced command security login
password-prepare-to-downgrade to reset your own password to use the MD5 hash
function. If your password is not encrypted with MD5, the command prompts you for a new
password and encrypts it with MD5, enabling your credential to be authenticated after the revert.
The volumes with existing protection will continue to work normally after revert, and ARP status can be
displayed using the ONTAP CLI. However, System Manager cannot show ARP status without the MTKM
license.
Therefore, if you want ARP to continue after reverting to ONTAP 9.10.1, be sure the MTKM license is installed
before reverting. Learn about ARP licensing.
Remove S3 NAS bucket configuration before reverting from ONTAP 9.12.1 or later
If you have configured S3 client access for NAS data and you revert from ONTAP 9.12.1
or later to ONTAP 9.11.1 or earlier, you must remove the NAS bucket configuration, and
you must remove any name mappings (S3 users to Windows or Unix users) before
reverting.
About this task
The following tasks are completed in the background during the revert process.
• Remove all partially completed singleton object creations (that is, all entries in hidden directories).
• Remove all hidden directories; there might be one for each volume that is accessible from the root of
the export mapped from the S3 NAS bucket.
• Remove the upload table.
• Delete any default-unix-user and default-windows-user values for all configured S3 servers.
System Manager
1. Remove an S3 NAS bucket configuration.
Click Storage > Buckets, click the options icon for each configured S3 NAS bucket, then click Delete.
2. Remove local name mappings for UNIX or Windows clients (or both).
a. Click Storage > Buckets, then select the S3/NAS-enabled storage VM.
b. Select Settings, then click the edit icon in Name Mapping (under Host Users and Groups).
c. In the S3 to Windows or S3 to UNIX tiles (or both), click the options icon for each configured mapping, then
click Delete.
CLI
1. Remove S3 NAS bucket configuration.
vserver object-store-server bucket delete -vserver svm_name -bucket
s3_nas_bucket_name
2. Remove name mappings.
vserver name-mapping delete -vserver svm_name -direction s3-unix
vserver name-mapping delete -vserver svm_name -direction s3-win
Related information
MetroCluster management and disaster recovery
Download the software image
To downgrade or revert from ONTAP 9.4 and later, you can copy the ONTAP software image from the NetApp
Support Site to a local folder. For a downgrade or revert to ONTAP 9.3 or earlier, you must copy the ONTAP
software image to an HTTP server or FTP server on your network.
You must obtain the correct image for your cluster. Software images, firmware version information, and the
latest firmware for your platform model are available on the NetApp Support Site.
• Software images include the latest version of system firmware that was available when a given version of
ONTAP was released.
• If you are downgrading a system with NetApp Volume Encryption from ONTAP 9.5 or later, you must
download the ONTAP software image for non-restricted countries, which includes NetApp Volume
Encryption.
If you use the ONTAP software image for restricted countries to downgrade or revert a system with NetApp
Volume Encryption, the system panics and you lose access to your volumes.
1. Locate the target ONTAP software in the Software Downloads area of the NetApp Support Site.
2. Copy the software image.
▪ For ONTAP 9.3 or earlier, copy the software image (for example, 93_q_image.tgz) from the NetApp
Support Site to the directory on the HTTP server or FTP server from which the image will be
served.
▪ For ONTAP 9.4 or later, copy the software image (for example, 97_q_image.tgz) from the NetApp
Support Site to the directory on the HTTP server or FTP server from which the image will be served
or to a local folder.
• If you are downgrading or reverting a system with NetApp Volume Encryption from ONTAP 9.5 or later, you
must have downloaded the ONTAP software image for non-restricted countries, which includes NetApp
Volume Encryption.
If you use the ONTAP software image for restricted countries to downgrade or revert a system with NetApp
Volume Encryption, the system panics and you lose access to your volumes.
1. Set the privilege level to advanced, entering y when prompted to continue: set -privilege
advanced
This command downloads and installs the software image on all of the nodes simultaneously. To
download and install the image on each node one at a time, do not specify the -background parameter.
▪ If you are downgrading or reverting a configuration other than a MetroCluster configuration: system node image update -node * -package location -replace-package true -setdefault true -background true
This command uses an extended query to change the target software image, which is installed as
the alternate image, to be the default image for the node.
▪ If you are downgrading or reverting a four- or eight-node MetroCluster configuration, you must issue
the following command on both clusters: system node image update -node * -package
location -replace-package true -background true -setdefault false
This command uses an extended query to change the target software image, which is installed as
the alternate image on each node.
This command displays the current status of the software image download and installation. You should
continue to run this command until all nodes report a Run Status of Exited, and an Exit Status of
Success.
The system node image update command can fail and display error or warning messages. After
resolving any errors or warnings, you can run the command again.
This example shows a two-node cluster in which the software image is downloaded and installed
successfully on both nodes:
To revert a cluster, you must revert the cluster and file system configurations on a node, and then repeat the
process for each additional node in the cluster.
You must have completed the revert verifications and pre-checks.
Reverting a cluster requires you to take the cluster offline for the duration of the reversion.
2. Verify that the target ONTAP software is installed: system image show
The following example shows that version 9.1 is installed as the alternate image on both nodes:
3. Disable all of the data LIFs in the cluster: network interface modify {-role data} -status-admin down
4. Determine if you have inter-cluster FlexCache relationships: flexcache origin show-caches -relationship-type inter-cluster
5. If inter-cluster FlexCache volumes are present, disable the data LIFs on the cache cluster: network interface modify -vserver vserver_name -lif lif_name -status-admin down
6. If the cluster consists of only two nodes, disable cluster HA: cluster ha modify -configured
false
7. Disable storage failover for the nodes in the HA pair from either node: storage failover modify
-node nodename -enabled false
You only need to disable storage failover once for the HA pair. When you disable storage failover for a
node, storage failover is also disabled on the node’s partner.
To revert a node, you must be logged in to the cluster through the node’s node management LIF.
9. Set the node’s target ONTAP software image to be the default image: system image modify -node
nodename -image target_image -isdefault true
10. Verify that the target ONTAP software image is set as the default image for the node that you are reverting:
system image show
The following example shows that version 9.1 is set as the default image on node0:
11. If the cluster consists of only two nodes, verify that the node does not hold epsilon:
a. Check whether the node currently holds epsilon: cluster show -node nodename
Node: node1
UUID: 026efc12-ac1a-11e0-80ed-0f7eba8fc313
Epsilon: true
Eligibility: true
Health: true
b. If the node holds epsilon, mark epsilon as false on the node so that epsilon can be transferred to the
node’s partner: cluster modify -node nodenameA -epsilon false
c. Transfer epsilon to the node’s partner by marking epsilon true on the partner node: cluster modify
-node nodenameB -epsilon true
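The epsilon transfer in steps b and c can be sketched as follows; the node names are placeholders:

cluster1::> cluster modify -node node0 -epsilon false
cluster1::> cluster modify -node node1 -epsilon true

After these commands, node1 holds epsilon, so node0 can be reverted without affecting cluster quorum.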
12. Verify that the node is ready for reversion: system node revert-to -node nodename -check-only true -version 9.x
The check-only parameter identifies any preconditions that must be addressed before reverting, such as
the following examples:
14. Revert the cluster configuration of the node: system node revert-to -node nodename -version
9.x
The -version option refers to the target release. For example, if the software you installed and verified is
ONTAP 9.1, the correct value of the -version option is 9.1.
The cluster configuration is reverted, and then you are logged out of the clustershell.
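For example, if the target release is ONTAP 9.1, the command for a node named node0 (a placeholder name) would be:

cluster1::> system node revert-to -node node0 -version 9.1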
15. Log back in to the clustershell, and then switch to the nodeshell: run -node nodename
After logging in to the clustershell again, it might take a few minutes before the node is ready to accept the
nodeshell command. If the command fails, wait a few minutes and try again.
16. Revert the file system configuration of the node: revert_to 9.x
This command verifies that the node’s file system configuration is ready to be reverted, and then reverts it.
If any preconditions are identified, you must address them and then rerun the revert_to command.
Monitoring the revert process from a system console displays greater detail than is visible in the
nodeshell.
If AUTOBOOT is true, when the command finishes, the node will reboot to ONTAP.
If AUTOBOOT is false, when the command finishes the LOADER prompt is displayed. Enter yes to revert;
then use boot_ontap to manually reboot the node.
17. After the node has rebooted, confirm that the new software is running: system node image show
In the following example, image1 is the new ONTAP version and is set as the current version on node0:
18. Verify that the revert status is complete for each node: system node upgrade-revert show -node
nodename
19. Repeat step 6 through step 16 on the other node in the HA pair.
20. If the cluster consists of only two nodes, reenable cluster HA: cluster ha modify -configured true
21. Reenable storage failover on both nodes if it was previously disabled: storage failover modify
-node nodename -enabled true
22. Repeat step 5 through step 19 for each additional HA pair and on both clusters in a MetroCluster
configuration.
1. Verify that the nodes in the cluster are online and are eligible to participate in the cluster: cluster show
If any node is unhealthy or ineligible, check EMS logs for errors and take corrective action.
2. Set the privilege level to advanced, entering y when prompted to continue: set -privilege advanced
◦ The relational database epoch and database epochs should match for each node.
◦ The per-ring quorum master should be the same for all nodes.
To display this RDB process…    Enter this command…
SAN management daemon           cluster ring show -unitname bcomd
The most recent scsiblade event message for each node should indicate that the scsi-blade is in quorum.
Related information
System administration
After you revert or downgrade a cluster, you should verify the status of your disks, aggregates, and volumes.
To check for…
Disks undergoing maintenance or reconstruction

Do this…
a. Display any disks in maintenance, pending, or reconstructing states: storage disk show -state maintenance|pending|reconstructing
b. Wait for the maintenance or reconstruction operation to finish before proceeding.
2. Verify that all aggregates are online by displaying the state of physical and logical storage, including
storage aggregates: storage aggregate show -state !online
This command displays the aggregates that are not online. All aggregates must be online before and after
performing a major upgrade or reversion.
3. Verify that all volumes are online by displaying any volumes that are not online: volume show -state
!online
All volumes must be online before and after performing a major upgrade or reversion.
4. Verify that there are no inconsistent volumes: volume show -is-inconsistent true
See the Knowledge Base article Volume Showing WAFL Inconsistent for guidance on addressing inconsistent
volumes.
Related information
Disk and aggregate management
After you revert a cluster, you must enable and revert any LIFs that are not on their home
ports.
The network interface revert command reverts a LIF that is not currently on its home port back to its home port,
provided that the home port is operational. A LIF’s home port is specified when the LIF is created; you can
determine the home port for a LIF by using the network interface show command.
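For example, to revert all LIFs in the cluster to their home ports at once, you can use the wildcard form of the command:

cluster1::> network interface revert *

You can also revert a single LIF by specifying the -vserver and -lif parameters instead of the wildcard.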
This example displays the status of all LIFs for a storage virtual machine (SVM).
If any LIFs appear with a Status Admin status of down or with an Is home status of false, continue with the
next step.
2. Enable the data LIFs: network interface modify {-role data} -status-admin up
This command reverts all LIFs back to their home ports.
4. Verify that all LIFs are in their home ports: network interface show
This example shows that all LIFs for SVM vs0 are on their home ports.
1. Enable Snapshot copy policies for all data SVMs: volume snapshot policy modify -vserver * -enabled true
2. For each node, enable the Snapshot copy policy of the root volume by using the run -node nodename vol options root_vol_name nosnap off command.
cluster1::*> system services firewall policy show
Policy Service Action IP-List
---------------- ---------- ------ --------------------
cluster
dns allow 0.0.0.0/0
http allow 0.0.0.0/0
https allow 0.0.0.0/0
ndmp allow 0.0.0.0/0
ntp allow 0.0.0.0/0
rsh allow 0.0.0.0/0
snmp allow 0.0.0.0/0
ssh allow 0.0.0.0/0
telnet allow 0.0.0.0/0
data
dns allow 0.0.0.0/0, ::/0
http deny 0.0.0.0/0, ::/0
https deny 0.0.0.0/0, ::/0
ndmp allow 0.0.0.0/0, ::/0
ntp deny 0.0.0.0/0, ::/0
rsh deny 0.0.0.0/0, ::/0
.
.
.
2. Manually add any missing default IPv6 firewall entries by creating a new firewall policy: system
services firewall policy create
3. Apply the new policy to the LIF to allow access to a network service: network interface modify
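Steps 2 and 3 can be sketched as follows; the SVM, policy, service, and LIF names are placeholders, and you should verify the parameters available in your release:

cluster1::> system services firewall policy create -vserver vs1 -policy data -service dns -allow-list ::/0
cluster1::> network interface modify -vserver vs1 -lif datalif1 -firewall-policy data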
2. Communicate the temporary password to the affected users and have them log in through a console or
SSH session to change their passwords as prompted by the system.
For more information, see Accounts that can access the SP.
Copyright information
Copyright © 2023 NetApp, Inc. All Rights Reserved. Printed in the U.S. No part of this document covered by
copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including
photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission
of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp
assumes no responsibility or liability arising from the use of products described herein, except as expressly
agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any
patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or
pending applications.
LIMITED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set
forth in subparagraph (b)(3) of the Rights in Technical Data -Noncommercial Items at DFARS 252.227-7013
(FEB 2014) and FAR 52.227-19 (DEC 2007).
Data contained herein pertains to a commercial product and/or commercial service (as defined in FAR 2.101)
and is proprietary to NetApp, Inc. All NetApp technical data and computer software provided under this
Agreement is commercial in nature and developed solely at private expense. The U.S. Government has a non-
exclusive, non-transferrable, nonsublicensable, worldwide, limited irrevocable license to use the Data only in
connection with and in support of the U.S. Government contract under which the Data was delivered. Except
as provided herein, the Data may not be used, disclosed, reproduced, modified, performed, or displayed
without the prior written approval of NetApp, Inc. United States Government license rights for the Department
of Defense are limited to those rights identified in DFARS clause 252.227-7015(b) (FEB 2014).
Trademark information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc.
Other company and product names may be trademarks of their respective owners.