Redpaper
The Definitive Guide to IBM Storage FlashSystem 5300 Port Configuration
IBM Redbooks
October 2024
REDP-5734-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Contents
Notices
Trademarks
Preface
Authors
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks
Related publications
IBM Redbooks
Online resources
Help from IBM
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
Redbooks (logo)®, IBM®, IBM FlashSystem®, HyperSwap®, IBM FlashCore®, Redbooks®
Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
This IBM Redpaper publication explores IBM FlashSystem® 5300 port configuration in depth, helping IT
professionals optimize performance, enhance security, and ensure seamless integration with their existing
infrastructure.
The target audience of this paper is storage administrators, system administrators, and network specialists.
Authors
This paper was produced by a team of specialists from around the world.
Elias Luna
IBM USA
Vineet Sharma
IBM Dubai
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an IBM Redbooks residency project and help write a book
in your area of expertise, while honing your experience using leading-edge technologies. Your
efforts will help to increase product acceptance and customer satisfaction, as you expand
your network of technical contacts and relationships. Residencies run from two to six weeks
in length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
The IBM FlashSystem 5300 is an NVMe end-to-end platform that is targeted at the entry and
midrange market and delivers the full capabilities of IBM FlashCore® technology.
The IBM FlashSystem 5300 also provides a rich set of software-defined storage (SDS)
features that are delivered by IBM Storage Virtualize, including the following features:
Data reduction and deduplication
Dynamic tiering
Thin provisioning
Snapshots
Cloning
Replication
Data copy services
Transparent Cloud Tiering
IBM HyperSwap® including 3-site replication for high availability (HA)
Policy-based replication and policy-based high availability (policy-based HA)
Ransomware Threat Detection
Scale-out and scale-up configurations further enhance capacity, throughput, and availability.
This section describes possible use cases for the IBM FlashSystem 5300 (or another system in the IBM
FlashSystem family) and where to place it in the client infrastructure. This solution addresses a broad set of
requirements and can be used to optimize and simplify an IT storage infrastructure.
Figure 1-1 on page 3 shows the current IBM FlashSystem and IBM SAN Volume Controller
family.
Note: IBM Storage Virtualize for Public Cloud is not currently supported on IBM Storage
Virtualize V8.7. This functionality is planned for a future release.
The IBM FlashSystem 5300 can be used as a production data repository and a component of
a DR solution because a primary system can send data in an efficient way into the hybrid
multicloud infrastructure.
In particular, the IBM FlashSystem 5300 can meet the following customer requirements:
First tier repository for production data.
Primary or target system for data replication or disaster recovery.
Provide HA services using policy-based HA.
Use Storage Virtualize capabilities to manage and virtualize older IBM or non-IBM storage
and extend advanced Storage Virtualize functions (for example, data reduction) to the
external capacity presented by the old storage.
Old storage systems can be decommissioned, or their usage can be extended as an
added pool of resources to the IBM FlashSystem 5300.
Storage Virtualize in the IBM FlashSystem 5300 can act as an intelligent data migration tool, moving data
from external storage to replace that storage or to distribute application workloads across more systems.
The IBM FlashSystem 5300 can use Transparent Cloud Tiering to move data into the
cloud:
– Use IBM Storage Virtualize for Public Cloud on Amazon Web Services (AWS) or other providers.
– Use the Container Storage Interface (CSI) driver for Red Hat OpenShift Container
Platform, which enables Cloud Pak foundation.
IBM software-defined storage (SDS) capabilities
Figure 2 shows an IBM FlashSystem 5300 as the main provider of advanced data services for
on-premises and in a hybrid multicloud system.
Figure 2 IBM FlashSystem 5300 as the main provider of advanced data services
The client can expect the modern and advanced data services that are provided by a storage
system to cover several scopes concurrently. The IBM FlashSystem products, which include
the IBM FlashSystem 5300, all share this main characteristic.
Because all IBM FlashSystem products share the same functions and software layer, it is easier to select the
system that matches your performance, capacity, and functional requirements.
Note: IBM FlashSystem 5300 provides proven 99.9999% availability, with an optional
100% guarantee when using IBM HyperSwap.
Figure 3 IBM FlashSystem 5300 control enclosure showing the front and rear view
Figure 4 shows the front view of the IBM FlashSystem 5300 control enclosure with the bezel
removed. Also shown are six NVMe drives that are installed in upper slots 1–6 and six fillers
in lower slots 7–12.
Figure 4 IBM FlashSystem 5300 control enclosure front view with bezel removed and drive slot locations
Figure 5 shows a top view of the IBM FlashSystem 5300 enclosure. Highlighted are the
various components of the control enclosure and the two canisters.
Control enclosure:
– Two canisters that are placed side by side.
– 12 NVMe drive slots.
– Six enclosure fan assemblies.
Each canister contains the following components and quantities:
– CPU (1)
– DIMM slots (4)
– Battery (1)
– Canister fans (3)
– Power supply unit (PSU) (1)
– PCIe adapters (0–2)
– PCIe riser cards (2)
– PCIe adapter blanking plates (0–2)
Note: The number of PCIe adapters is configurable at product ordering time and can be
added or removed by a sales MES. MES (Miscellaneous Equipment Specification) refers to
any server hardware modification, including adding, improving, removing, or a combination
of these actions. The server's serial number remains unchanged.
In Figure 7, you can see the RJ45 and USB ports in the canister. Also shown are the two new onboard
(planar) SFP ports on the left side of the canister. These ports can be used for both external storage
virtualization and host attachment. The PCIe adapter slots are shown with blanking plates in place to maintain
the correct air flow for cooling through the canister. The IBM FlashSystem 5300 allows two PCIe adapters per
canister, for a total of four adapters per IBM FlashSystem 5300 enclosure.
For information on adapter support, see IBM FlashSystem 5300 Node Canister Overview.
Important: Unlike previous offerings, the IBM FlashSystem 5300 assigns logical port numbers differently from
physical port numbers. The management port is always logical port 1. As a result, physical port 1 is logical
port 2, and physical port 2 is logical port 3.
A fixed set of ports is available on each node canister. These ports are always present:
1x RJ45 dedicated management port.
1x RJ45 dedicated technician port.
1x USB Type A port for attaching encryption key media and service tasks.
2x Ethernet SFP ports for host I/O, clustering and replication over Ethernet SAN.
Each canister has two slots for host interface cards (HICs). Both nodes in the control enclosure must have the
same set of cards installed. The following HICs can be added to each node canister to expand its connectivity:
2-port 12Gb SAS card for expansion enclosure attachment (one card per node only).
2-port 64Gb Fibre Channel card for host I/O, clustering and replication.
4-port 32Gb Fibre Channel card for host I/O, clustering and replication.
4-port 10Gb Ethernet card for host I/O, clustering and replication.
Note: The list above is valid at the time of writing and might be extended in the future.
HICs can be installed into either of the two node slots, except for the SAS adapter, which is supported in
slot 2 only. Manufacturing populates slot 1 first. Adapter types can be mixed within a single node. The
supported combinations are shown in the following table.
Node 1, Slot 1                | Node 1, Slot 2                | Node 2, Slot 1                | Node 2, Slot 2
Empty (onboard Ethernet only) | Empty (onboard Ethernet only) | Empty (onboard Ethernet only) | Empty (onboard Ethernet only)
Empty (onboard Ethernet only) | SAS adapter                   | Empty (onboard Ethernet only) | SAS adapter
4-port 10GbE                  | Empty                         | 4-port 10GbE                  | Empty
4-port 32Gb FC                | Empty                         | 4-port 32Gb FC                | Empty
2-port 64Gb FC                | Empty                         | 2-port 64Gb FC                | Empty
4-port 10GbE                  | SAS adapter                   | 4-port 10GbE                  | SAS adapter
4-port 32Gb FC                | SAS adapter                   | 4-port 32Gb FC                | SAS adapter
2-port 64Gb FC                | SAS adapter                   | 2-port 64Gb FC                | SAS adapter
4-port 10GbE                  | 4-port 10GbE                  | 4-port 10GbE                  | 4-port 10GbE
4-port 32Gb FC                | 4-port 32Gb FC                | 4-port 32Gb FC                | 4-port 32Gb FC
2-port 64Gb FC                | 2-port 64Gb FC                | 2-port 64Gb FC                | 2-port 64Gb FC
2-port 64Gb FC                | 4-port 32Gb FC                | 2-port 64Gb FC                | 4-port 32Gb FC
4-port 10GbE                  | 2-port 64Gb FC                | 4-port 10GbE                  | 2-port 64Gb FC
4-port 10GbE                  | 4-port 32Gb FC                | 4-port 10GbE                  | 4-port 32Gb FC
Technician port
There is a technician port on each FlashSystem 5300 node canister. The technician port is an
RJ45 1Gb Ethernet port, which can auto-negotiate down to 100Mbps and 10Mbps. It can be
visually identified by blue stripes on both sides of the connector, and by a black gear symbol
on the node’s faceplate. See Figure 2-1 on page 10.
The technician port is used for initial system setup and recovery tasks, such as resetting the
superuser password. It requires a direct connection to a workstation (no LAN) and provides
access to a dedicated management interface.
The port runs a DHCP server that assigns an IP address to the attached workstation. The IP address of the
interface is 192.168.0.1, and it cannot be changed.
For more information, see Initializing the system with the technician port and Using
technician port for node access.
Important: Restricting physical access to the storage system is essential for safeguarding
the technician port due to its elevated management privileges.
Tip: If required, USB ports can be disabled to comply with organizational security policies.
The dedicated management port is identified by #3 on the FlashSystem 5300 node faceplate.
Important: On FlashSystem 5200 and most other platforms, the dedicated or shared primary management
port is usually port #1. On FlashSystem 5300, the hardware port numbering is different: the port is physical
port #3, but it is still recognized logically as port ID 1.
The node's service IP address is accessible through the primary management port. If a node
becomes the Configuration node within a cluster, the system's management IP address (or
Cluster IP address) is also assigned to this port.
On a new system, only the default service IP is available on the dedicated management port.
The default address is 192.168.70.121 on node 1 (in the left chassis slot) and 192.168.70.122
on node 2, in the right slot of the chassis.
The system management IP address is assigned during initial cluster setup and can be
modified as needed.
Beyond providing access to the system's GUI, CLI, and REST API, the primary management
port facilitates outbound communication for services like DNS and Call Home.
The cluster management IP must be unique from all service IPs. While it can reside on the
same subnet as the service IPs, it is not required to do so.
Depending on the set of features that are in use, set up the management network firewall to pass the following
traffic to and from the system (a firewall sketch follows the list):
Management - from administrator hosts to the system's CLI (SSH) and GUI/REST API
(HTTPS) interfaces.
Monitoring - to Storage Insights data collector host (if standalone collector is used).
Network services - from the system to NTP and DNS servers.
Remote user authentication - from the system to LDAP server.
Event notifications - from the system to SMTP, SNMP and syslog servers.
Replication management - control plane of replication, traffic between management ports
of Storage Virtualize systems in replication partnership.
IP Quorum - to and from hosts running IP Quorum application.
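As an illustration, the following sketch shows how such rules might be expressed with firewalld on a Linux-based
firewall in the management network. The port numbers are common defaults rather than values mandated by this
paper (for example, TCP 1260 is the typical IP quorum port); verify each service's actual port in your environment.

firewall-cmd --permanent --add-port=22/tcp     # inbound CLI access over SSH
firewall-cmd --permanent --add-port=443/tcp    # inbound GUI and REST API access over HTTPS
firewall-cmd --permanent --add-port=1260/tcp   # IP quorum traffic (typical default port)
# Outbound from the system, open on the egress path as needed:
#   DNS 53/tcp+udp, NTP 123/udp, LDAP 389/tcp or LDAPS 636/tcp,
#   SMTP 25/tcp, SNMP traps 162/udp, syslog 514/udp or 514/tcp
firewall-cmd --reload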
The SAS card has four ports, but only ports 1 and 3 can be used for expansion attachment.
The SAS card supports only expansion enclosure attachment. SAS host attachment is not supported.
Cards can be installed in either node slot, allowing for flexibility in card type selection within a
node. However, both nodes must have identical card configurations.
Ports can be attached to SAN switches, and also support direct attachment to hosts and
another Storage Virtualize system for clustering or replication.
When using direct attachment between two control enclosures, consider the examples shown
in Figure 2-3. Each node requires connectivity to both nodes in the opposing control
enclosure. In clustering configurations, such as HyperSwap, it is required to have redundant
connections - four links per node. In replication-based configurations, such as policy-based
high availability, one connection to each remote node is sufficient. However, two are
recommended for maximum throughput and performance.
Figure 2-3 panels: minimal required direct-attached connectivity for policy-based HA; recommended
direct-attached connectivity for policy-based HA; required direct-attached connectivity for HyperSwap.
FlashSystem 5300 exclusively operates in NPIV mode, allowing each physical FC port to
utilize multiple WWPNs for SAN fabric registration. This mode is mandatory and cannot be
altered.
In NPIV mode, virtual ports (WWPNs) can be migrated between equivalent physical ports on
different nodes within the same I/O group. However, maintaining consistent SAN fabric
connectivity is crucial. All equivalent physical ports in an I/O group must be connected to the
same SAN fabric.
Every physical port of FlashSystem 5300 registers in the SAN switch three WWPNs:
Physical WWPN: allows external storage virtualization, replication, clustering traffic.
FCP-SCSI host WWPN: allows host I/O with FCP-SCSI.
FC-NVMe host WWPN: allows host I/O with FC-NVMe.
During an NPIV failover, host WWPNs migrate to the partner node while the physical WWPN
remains static. This allows up to five WWPN logins per physical FC port.
WWPN is assigned according to adapter and port location. Figure 2-4 shows the WWPN
numbering scheme.
PCI slot | Adapter port | Physical WWPN    | NPIV WWPN for FCP-SCSI hosts | NPIV WWPN for FC-NVMe hosts
1        | 1            | 500507681211xxxx | 500507681215xxxx             | 500507681219xxxx
1        | 2            | 500507681212xxxx | 500507681216xxxx             | 50050768121axxxx
1        | 3            | 500507681213xxxx | 500507681217xxxx             | 50050768121bxxxx
1        | 4            | 500507681214xxxx | 500507681218xxxx             | 50050768121cxxxx
2        | 1            | 500507681221xxxx | 500507681225xxxx             | 500507681229xxxx
2        | 2            | 500507681222xxxx | 500507681226xxxx             | 50050768122axxxx
2        | 3            | 500507681223xxxx | 500507681227xxxx             | 50050768122bxxxx
2        | 4            | 500507681224xxxx | 500507681228xxxx             | 50050768122cxxxx
Figure 2-4 Adapter port number to WWPN relationship
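You can list the WWPNs that the system presents, including the NPIV host WWPNs, with the lstargetportfc CLI
command. A minimal sketch follows; the comment summarizes typical output fields, which might differ between
code levels.

lstargetportfc
# Lists each FC target port: its WWPN and WWNN, the owning node, the protocol
# (SCSI or NVMe), whether the port is virtualized (NPIV), and whether host I/O
# is permitted. Zone hosts only to WWPNs that permit host I/O.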
There is a significant difference in attachment and use options between the onboard and optional ports.
Both the onboard 25GbE and the optional 10GbE ports support host access with the SCSI and NVMe protocols
and IP replication. However, the optional 10GbE ports are RDMA-capable, while the onboard 25GbE ports are
not. This results in a wider set of use options for the 10GbE ports.
For onboard 10/25GbE ports, the following protocols and applications are supported:
– Host I/O with iSCSI
– Host I/O with NVMe/TCP
For optimal performance and reliability, dedicate separate ports to IP-based clustering or
replication traffic. Avoid combining host access and replication on the same port. This can
be achieved by using network portsets, as shown in the sketch that follows.
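A minimal sketch of such a dedication follows; the portset name repl-ports, the node name, the port number, and
the IP address are illustrative assumptions, and the mkportset type value should be verified against your code
level.

mkportset -name repl-ports -type replication     # portset reserved for replication traffic
mkip -node node1 -port 5 -portset repl-ports -ip 192.168.20.11 -prefix 24   # IP on a port dedicated to replication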
If, in a HyperSwap configuration, it is not possible to dedicate inter-site links to intra-cluster
traffic, configure Priority Flow Control (PFC) to make sure that system traffic is prioritized
over other traffic types.
For host access, it is recommended to separate host-to-storage (iSCSI or NVMe/TCP) traffic from other types
of traffic in your LAN. This can be achieved by building a separate physical network, using dedicated
storage-access ports on the host side, separating networks with VLANs, and using QoS to prioritize storage
traffic.
Use the recommendations given in the iSCSI performance analysis and tuning article (a host-side verification sketch follows the list):
– Utilize all available storage ports.
– Verify that your network supports Jumbo frames end-to-end, and enable them by
setting MTU to 9000 on ports designated for host access.
– Disable delayed TCP ACK on the hosts.
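For example, from a Linux host you might verify jumbo frames end-to-end as follows; the interface name eth2 and
the target address are assumptions for illustration.

ip link set dev eth2 mtu 9000    # enable jumbo frames on the storage-facing NIC
ping -M do -s 8972 10.10.1.11    # 8972-byte payload + 28 bytes of IP/ICMP headers = 9000;
                                 # -M do forbids fragmentation, so success proves MTU 9000 end-to-end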
When working with the FlashSystem 5300, carefully distinguish between logical and physical port numbers.
These are distinct identifiers within the system. Table 2-2 shows the relationship between the physical and
logical numbers of the onboard ports.
Initial setup of an FS5300 is done through the technician port (T-port), which is a 1Gb RJ45 port.
3.1.1 Procedure
Perform the following steps:
1. Ensure the system is powered on.
2. Configure an Ethernet port on the personal computer to enable Dynamic Host
Configuration Protocol (DHCP) configuration of its IP address and DNS settings. If you do
not have DHCP, you must manually configure the personal computer: specify the static
IPv4 address 192.168.0.2, subnet mask 255.255.255.0, gateway 192.168.0.1, and DNS
192.168.0.1 (a Linux sketch follows this procedure).
3. Locate the technician port on each node canister, as shown in Figure 3-1.
4. Disconnect the personal computer from all networks. Connect an Ethernet cable between
the port of the personal computer that is configured in step 2 and the technician port in the
left canister (1) that is shown in Figure 3-1.
5. After the personal computer is connected through Ethernet, open a supported web
browser and navigate to https://install. If the personal computer is not using DHCP for
automatic IP assignment, browse to the static IP address https://192.168.0.1 instead.
6. The browser is automatically directed to the initialization tool.
7. To set up the system's management IP address, follow the on-screen prompts provided by
the initialization wizard.
8. When the initialization process is complete, disconnect the cable between the personal
computer and the technician port. You can then continue with the initial system setup, as
outlined in Completing the initial system setup (customer task).
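On a Linux workstation, the static fallback in step 2 might look like the following sketch; the interface name
eth0 is an assumption.

ip addr add 192.168.0.2/24 dev eth0    # static IPv4 address and subnet mask from step 2
ip route add default via 192.168.0.1   # the gateway (and DNS) is the technician port itself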
Alternatively, you can configure the system by using the service address on the management
port. The default IP addresses are 192.168.70.121 for the first node and 192.168.70.122 for
the second node in an enclosure. This method offers two options:
Command line: Use SSH with the satask mkcluster command (for advanced users); see the sketch that follows.
GUI: Use the graphical user interface for a more user-friendly approach.
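A minimal sketch of the command-line option follows; it assumes the default service IP of node 1 and
illustrative cluster addresses, and the full satask mkcluster parameter list should be checked in the command
reference.

ssh superuser@192.168.70.121                                                    # connect to node 1's default service IP
satask mkcluster -clusterip 192.168.1.50 -gw 192.168.1.1 -mask 255.255.255.0    # initialize the system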
Note: The system will always use the management IP on the lowest numbered port for
outbound communication, for example, Cloud Call Home, e-mail notifications, DNS lookup.
Management IPs are included by default in the newly defined System Management portset
(ID 72) for managing the system. See Figure 3-2.
You can see that one port is currently defined (Figure 3-3). You can view details about this
port or check for additional defined ports.
Another new feature is VLAN support. This functionality enables the creation of virtual
local area networks (VLANs) within your existing network infrastructure. VLANs provide a
method for segmenting your network into logical subdivisions, thereby enhancing both
security and network performance. VLAN configurations can be added and modified at your
convenience to best suit your evolving network management needs. See Figure 1-5.
Starting with IBM Storage Virtualize Version 8.7.0, system administrators now have the option
to configure a second management IP address for increased redundancy and manageability.
Additionally, management IP addresses are no longer restricted to ports 1 and 2. This
expanded flexibility allows for a more customized and efficient network configuration.
The system-defined default management port set restricts the number of configurable system
IP addresses to two.
Figure 1-7 shows how to select a port number for the second management IP.
Increased data IP flexibility: You can now configure up to 4 routable data IP addresses
per port, per node. This provides more flexibility for network configuration and traffic
management.
Faster failover: The configuration node failover time has been reduced by 10%. This
means the system recovers from a failure of the config node more quickly, minimizing
downtime.
Unified CLI commands: Common command-line interface (CLI) commands (mkip, rmip,
lsip) are now available for both data and management IP addresses. This simplifies
managing IP addresses by providing a consistent interface for both types (see the sketch after this list).
New command for system management IPs: There is a new command named chip for
managing system and management IP addresses. For more information, see chip.
For more information, see Release Note for systems built with IBM Storage Virtualize.
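A sketch of adding a second management IP with the unified commands follows; the port number and addresses are
illustrative, and the reference to portset 72 (the System Management portset) should be verified against the
Version 8.7 command reference.

lsip                                   # list the configured management and data IPs
mkip -node node1 -port 2 -portset 72 -ip 192.168.1.52 -prefix 24 -gw 192.168.1.1   # add a second management IP
# chip modifies an existing IP address; rmip removes one (use the ID reported by lsip)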
Enabling DNS resolution is also recommended. This allows the system to translate
hostnames into IP addresses, simplifying network operations and improving overall usability.
This service IP address is always assigned to port 1, even if you change the management IP
address to a different port. You need to change these default service IPs to addresses that
are readily accessible on your network. This is necessary for remote management and
service tasks using the Service Assistant Interface.
The service IP address allows access to the Service Assistant Interface, accessible through a
web browser or SSH client. This interface provides functionalities for maintenance and
service tasks on the system.
Important: While the service IPs are used to access the Service Assistant Interface, their
importance goes beyond that. These IPs are also crucial for various system functions such
as:
Key server access: Communication with a key server for security purposes.
IP quorum: Establishing a quorum for cluster management and data consistency.
Remote support assistance: Enabling remote technicians to access the system for
troubleshooting or maintenance.
Therefore, it is vital to configure the service IPs with addresses that are readily accessible
on your network.
3.4.1 Portsets
Portsets are collections of logical port addresses grouped based on specific traffic types. This
allows for efficient management and isolation of different network traffic flows.
Note: Fibre Channel (FC) port masking does not affect traffic between hosts and storage
devices. It applies only to communication between nodes within a system and replication
traffic between systems. FC port masking is deprecated after Storage Virtualize Version
8.5.
The system offers pre-defined Fibre Channel and Ethernet portsets for specific traffic types:
host attachment, system management, remote copy, and back-end storage virtualization. For
more information, see Portsets documentation.
For more information, see Planning for more than four fabric ports per node canister.
Note: A host definition is configured to access storage devices through a single Fibre
Channel portset.
For a high volume of similar devices, consider creating dedicated portsets (Figure 3-9 on
page 26). This allows for granular grouping based on functionalities (for example, cluster and
server groups) to optimize network traffic flow and simplify management.
For host attachment portsets, you can specify an optional ownership group. This group
defines user access and simplifies management for specific sets of hosts, as shown in the sketch that follows.
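For example, a dedicated portset might be created and assigned when the host is defined; the names and the WWPN
below are illustrative assumptions.

mkportset -name esx-cluster1 -type host                              # portset for one server group
mkhost -name esx01 -fcwwpn 2100000E1E30ACFC -portset esx-cluster1    # host restricted to that portset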
For longer distances, long wavelength (LW) SFP+ transceivers with single-mode cable are
required. However, the maximum achievable speeds are limited by the cable type, distance,
and the FC adapter's 40 buffer credits. Here is a summary of the limitations:
8 Gb: Up to 10km
16 Gb: Up to 5km
32 Gb: Up to 2.5km
The system also provides a 2-port 64 Gbit/s FC adapter card for higher bandwidth needs.
See Figure 3-13.
Important: While a 64 Gbit/s FC adapter card option exists with four physical ports, only
two of those ports are usable.
Selecting a dedicated single port displays information about its number and speed. See
Figure 1-14.
Usage of FC ports
In the System Hardware - Overview view, you find only the physical WWPNs, which are used for
cluster communication, remote mirroring, and external virtualization. You can find the FC port
WWPN details by selecting Settings → Fibre Channel Ports. See Figure 3-15 on page 29.
On IBM FlashSystem storage units with IBM Storage Virtualize 8.7, NPIV is always on by
default.
N-Port ID Virtualization:
The system supports N_Port ID Virtualization (NPIV) technology for Fibre Channel (FC)
connections. NPIV is an industry standard that allows a single physical FC adapter to act
as multiple virtual ports. Each virtual port can have its own unique World Wide Port Name
(WWPN) and World Wide Node Name (WWNN) to register with the Storage Area Network
(SAN) fabric.
For successful NPIV configuration, see N-Port ID Virtualization for proper cabling and
zoning procedures.
In some cases, FC fabric management or zoning scripts might require a different WWPN
format for copy-and-paste operations. See Figure 3-16 on page 30.
The physical (non-virtualized) WWPN permits no host I/O; it uses the SCSI protocol and carries
cluster communication, remote mirroring, and external virtualization traffic.
It is important to zone the correct WWPNs. Hosts cannot access storage through physical,
non-virtualized WWPNs. When using virtualized WWPNs, the selection determines whether
the host can access via FC SCSI or NVMe/FC, allowing control over the access protocol.
For direct FC connections, there is no selection for access protocols. To confirm direct
operating system access through FC, you can check the IBM System Storage Interoperation
Center (SSIC).
Each host or external storage system does a full fabric login as part of the FC communication
process. The FS5300 maintains information about each device that is registered and accessing
FS5300 nodes, similar to the name server of a Fibre Channel switch.
You can check the FC SCSI host connections under Settings → Fibre Channel
Connectivity. See Figure 3-17 on page 31.
You can also check the state of the connection. Only FC SCSI connections will be shown.
You can check for active NVMe connections within your system management interface by
navigating to Settings → NVMe Connectivity. See Figure 3-18.
It is possible to filter by hosts and nodes in the listings and also to export these listings to CSV
files.
Modify FC ports
The system allows you to modify the settings of FC ports. See Figure 3-19 on page 32.
You can manage FC port assignments. This allows you to add or remove FC ports from
portsets for better zoning control.
For advanced configurations, you can also change how hosts access an FC port. This might
involve using NPIV to assign different virtual WWPNs or modifying security settings. See
Figure 3-20.
Any: The selected port can be used for any type of traffic: local cluster communication,
remote replication, or host communication (using the NPIV WWPN for hosts).
Local: The selected port can be used only for local cluster communication or host
communication (using the NPIV WWPN for hosts).
Remote: The selected port can be used only for remote replication or host communication
(using the NPIV WWPN for hosts).
None: The selected port cannot be used for cluster communication or remote replication. Host
communication is always allowed with any setting (using the NPIV WWPN for hosts).
The lsportethernet command displays information about the Ethernet ports on a system. The
output of the command includes details about each Ethernet port, such as:
Status: Whether the port is up, down, or experiencing any errors.
Speed: The connection speed of the port (for example, 1Gbps or 10Gbps).
Connected: Indicates whether a physical cable connection is established on the port.
Possible usage: Provides clues about how the port can be used, such as
"Host Attachment," "iSCSI," or "Replication." See Figure 3-28 on page 36.
If you plan to change the maximum transmission unit (MTU) size of a port, it is only possible
if no IP address is configured on that port and its reference port. As shown in Figure 3-22 on
page 34, port 2 on both nodes must be free of IP addresses in this scenario.
Restriction: NVMe/TCP and clustering are only supported with an MTU size of 1500
bytes.
On the next screen, you can assign an IP address to the selected port. Click Add IP Address
to open a configuration window. Here, you can specify the IP address, subnet mask, and
other relevant settings. Additionally, you might have the option to add the port to a specific
portset for further management. See Figure 3-24 on page 35.
Select a portset. In this scenario, we use portset0, the default predefined portset for Ethernet
host attachment.
Once the configuration is complete, the assigned IP address will be displayed for the port.
You can refer to Figure 3-25 for illustration.
You can configure a port with up to four routable IP addresses. However, it is important to use
separate VLANs for each IP address to avoid network conflicts.
To manage existing IP addresses on a port, click the overflow menu (three dots) located to
the right of the IP address entry. This menu allows you to modify, duplicate, or delete the IP
address configuration. Refer to Figure 3-26 on page 36 for illustration.
To check your configuration by using the CLI, use the lsip command. See Figure 3-27.
You can use the mkip command to add IP addresses, the chip command to modify them, and
the rmip command to remove them, as shown below.
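A short sketch of this lifecycle follows; the node, port, VLAN, addresses, and the object ID passed to rmip are
illustrative.

mkip -node node1 -port 2 -portset portset0 -ip 10.10.1.21 -prefix 24 -vlan 200 -gw 10.10.1.1
lsip      # confirm the new address and note its ID
rmip 5    # remove the address with ID 5, as reported by lsip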
The lsportethernet command can be used to display information about Ethernet ports on
your system. See Figure 3-28.
The lsportethernet command can show you whether Data Center Bridging Exchange (DCBX) is
supported on a port. DCBX can contribute to achieving lossless Ethernet, which is important
for some applications.
The current configuration displays only "TCP" as the RDMA type on internal ports. This
suggests that other RDMA types, such as iWARP or RoCE (RDMA over Converged Ethernet), might
not be supported with the existing hardware.
If you plan on using clustering or remote copy functionality that relies on RDMA, you will likely
need to add a 4-port 10Gb/s Ethernet card that supports iWARP.
In such scenarios, you might want to restrict direct host access to these ports for security
reasons. You can achieve this through the Ethernet ports menu. Select the desired port and
go to Actions → Modify Host Attachment Support. Alternatively, right-click on the port and
choose Modify Host Attachment Support from the context menu. See Figure 3-29.
In your current I/O group configuration, any changes made to a port are applied to the
corresponding port on all nodes within the group. You can view the current configuration
details for these ports in the Ethernet Ports menu. See Figure 3-30.
Modifying storage ports and remote copy settings follows a similar approach. Use the same
method to access configuration options, including those specific to remote copy functionality.
If you have defined multiple host portsets, select the intended portset for a host during
configuration using the Advanced option. See Figure 3-32.
For NVMe connections like NVMe/TCP, use the Settings → Network → NVMe Connectivity
menu.
When configuring remote copy protocols over IP, you have two main options:
TCP: This is a widely supported protocol that works with any standard Ethernet network.
However, it may not offer the highest performance for remote copy operations.
RDMA (Remote Direct Memory Access): This protocol can provide significantly faster
data transfer speeds for remote copy compared to TCP.
Note: Remote copy with RDMA requires an Ethernet adapter with RDMA capability and
a maximum round-trip time (RTT) of 1 ms or less.
For remote copy with RDMA, you also need to define a corresponding portset. See
Figure 3-34. A CLI sketch follows.
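A minimal sketch of defining an IP partnership from the CLI follows; the remote system address and the bandwidth
settings are illustrative, and any RDMA-specific options should be verified in the command reference.

mkippartnership -type ipv4 -clusterip 203.0.113.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50
lspartnership    # verify that the partnership is in the expected state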
Information about configuring clustering by using Ethernet connections is in the chapter
"Configuring clustering by using Ethernet connections" in the IBM Redbooks publication
Unleash the Power of Flash: Getting Started with IBM Storage Virtualize Version 8.7 on
IBM Storage FlashSystem and IBM SAN Volume Controller, SG24-8561.
3.5 Troubleshooting
For troubleshooting port configurations, see Troubleshooting.
Also, refer to the "Troubleshooting" chapter in the IBM Redbooks publication Unleash the Power
of Flash: Getting Started with IBM Storage Virtualize Version 8.7 on IBM Storage FlashSystem
and IBM SAN Volume Controller, SG24-8561.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
Unleash the Power of Flash: Getting Started with IBM Storage Virtualize Version 8.7 on
IBM Storage FlashSystem and IBM SAN Volume Controller, SG24-8561
Ensuring Business Continuity: A Practical Guide to Policy-Based Replication and
Policy-Based HA for IBM Storage Virtualize Systems, SG24-8569
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM Storage FlashSystem
IBM SAN Volume Controller information
IBM System Storage Interoperation Center (SSIC)
REDP-5734-00
ISBN 0738461784
Printed in U.S.A.