EMC Celerra Network Server: Release 6.0
Copyright 2006 - 2010 EMC Corporation. All rights reserved. Published September 2010 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Corporate Headquarters: Hopkinton, MA 01748-9103
Contents
Chapter 2: Concepts.............................................................................15
Understanding MPFS threads.............................................................................17
Overview of EMC Celerra MPFS over FC and iSCSI.......................................17
EMC Celerra MPFS architectures........................................................................17
EMC Celerra MPFS over Fibre Channel..................................................18
EMC Celerra MPFS over iSCSI..................................................................19
Planning considerations.......................................................................................22
Compatibility with MPFS.....................................................................................22
Chapter 3: Configuring.........................................................................25
MPFS configuration summary............................................................................26
Steps for configuring a Celerra Network Server...............................................29
Verify Data Mover compatibility........................................................................31
Start MPFS..............................................................................................................32
Operating MPFS through a firewall...................................................................33
Mount a file system for servers...........................................................................33
Exporting a file system path for MPFS servers.......................................34
Stop MPFS..............................................................................................................34
Chapter 4: Managing............................................................................35
Set the threads variable.........................................................................................36
Delete configuration parameters.........................................................................37
Add threads............................................................................................................38
Delete threads........................................................................................................38
Reset default values..............................................................................................39
View MPFS statistics.............................................................................................39
Viewing MPFS protocol statistics..............................................................40
Viewing MPFS performance statistics......................................................41
Data Mover statistics...................................................................................42
MPFS session statistics................................................................................44
File statistics.................................................................................................45
Listing open sessions..................................................................................47
Resetting statistics.......................................................................................48
Chapter 5: Troubleshooting..................................................................49
EMC E-Lab Interoperability Navigator..............................................................50
Error messages.......................................................................................................50
EMC Training and Professional Services...........................................................51
Installing MPFS software.....................................................................................51
Mounting and unmounting a file system..........................................................52
Miscellaneous issues.............................................................................................57
Glossary..................................................................................................61
Index.......................................................................................................67
Preface
As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.
Special notice conventions EMC uses the following conventions for special notices:
CAUTION: A caution contains information essential to avoid data loss or damage to the system or equipment.
Hint: A note that provides suggested advice to users, often involving follow-on activity for a particular action.
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Product information: For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at http://Powerlink.EMC.com.
Troubleshooting: Go to Powerlink, search for Celerra Tools, and select Celerra Troubleshooting from the navigation panel on the left.
Technical support: For technical support, go to EMC Customer Service on Powerlink. After logging in to the Powerlink website, go to Support > Request Support. To open a service request through Powerlink, you must have a valid support agreement. Contact your EMC Customer Support Representative for details about obtaining a valid support agreement or to answer any questions about your account.
Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.
Your comments Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:
techpubcomments@EMC.com
1 Introduction
The EMC Celerra Multi-Path File System (MPFS) combines the industry-standard file sharing of network-attached storage (NAS) and the high performance and efficient data delivery of a storage area network (SAN) into one unified storage network. MPFS accelerates data access by providing separate transports for file data (file content) and metadata (control data). Only metadata passes through the Celerra Network Server, and all file content is accessed directly from the storage array, which decreases overall network traffic. In addition, servers access file data from EMC Symmetrix or EMC CLARiiON systems over iSCSI or Fibre Channel connections, which increases the speed with which the Celerra Network Server can deliver files to the servers.
Note: In this document, a server can be a Linux, Windows, HP-UX, AIX, or Solaris server unless otherwise specified.
MPFS provides these benefits:
- Enables data access at channel speeds
- Reduces network traffic
- Improves server performance
- Enhances Celerra Network Server performance and system scalability
- Allows file sharing among heterogeneous clients
This document is part of the Celerra Network Server information set and is intended for use by system administrators responsible for supporting High Performance Computing (HPC), grid computing, distributed computing, virtualization, or simply backing up file systems by using MPFS. Topics included are:
System requirements
Table 1 on page 9 describes the EMC Celerra Network Server software, hardware, network, and storage configurations.
Table 1. System requirements
Celerra/MPFS software:
Celerra Network Server version 6.0. MPFS software packages for Linux, Windows, UNIX, AIX, or Solaris.
Linux software:
CentOS 5 with update 3 or later (iSCSI only)
Red Hat Enterprise Linux 4 with update 6 or later
Red Hat Enterprise Linux 5 with update 2 or later (without Itanium)
SuSE Linux Enterprise Server 10 with SP1 or later
Windows software:
Windows 2003 with SP2
Windows 2003 x64 with SP2
Windows 2003 R2 with SP2
Windows 2003 R2 x64 with SP2
Windows XP with SP3
Windows XP x64 with SP2
Windows Vista with SP2
Windows Vista x64 with SP2
Windows 2008 with SP2
Windows 2008 x64 with SP2
Windows 2008 R2 x64 with SP2
Windows 7
Windows 7 x64
UNIX software:
HP-UX version 11.23 with V2
HP-UX version 11.23 (Itanium) with V2
HP-UX version 11.31 (Itanium) with V3
AIX software:
IBM AIX 5.2 with ML04
IBM AIX 5.3 with ML04 and ML07
IBM AIX 6.1 with ML00
Solaris software:
Sun Solaris version 5.9 (SPARC)
Sun Solaris version 5.10 (SPARC)
Sun Solaris version 5.10 (AMD)
VMware software:
VMware ESX version 3.0.1 and 3.0.2 (FC) or later
VMware ESX version 3.5.1 (FC and iSCSI) (optional) or later
Hardware:
Celerra Network Server; MPFS configuration summary on page 26 lists the configurations.
EMC Symmetrix or EMC CLARiiON storage array.
For an iSCSI configuration, one of the following MDS switches is configured as an iSCSI target: MDS 9506, MDS 9509, MDS 9513, MDS 9216A, MDS 9216i, or MDS 9222i.
Network:
Server with a Fibre Channel connection to the storage array. For an iSCSI configuration, an OS-based iSCSI initiator and an IP connection to a switch are required.
Storage:
Symmetrix or CLARiiON storage array.
Note: The EMC E-Lab Interoperability Navigator provides specific information on server version support.
Restrictions
These restrictions apply when using MPFS:
- The EMC Celerra MPFS over iSCSI configurations do not use the Celerra iSCSI target feature. They rely on the iSCSI initiator and on the MDS switch or the CLARiiON CX3 or CX4 series storage array as the iSCSI target.
- The Nolock Common Internet File System (CIFS) locking policy is the default setting for MPFS, and it is the only locking policy supported on an MPFS-enabled file system on a Celerra Network Server. A server cannot properly mount a file system when a Data Mover is running an incompatible locking policy. Managing EMC Celerra for a Multiprotocol Environment provides more information on locking policies.
- Before enabling any new features, ensure that the file system is compatible with MPFS. Verify Data Mover compatibility on page 31 provides the appropriate command syntax.
- MPFS improves performance significantly when large file transfers (sequential I/Os) are common. MPFS does not greatly benefit a configuration that deals with many small, random I/Os.
- When both Checkpoint and EMC Celerra Replicator are active, MPFS system performance is reduced. The performance reduction is caused by the additional CPU use and I/O overhead of the block copy operation.
- An MPFS file system with a stripe size of 256 KB generally achieves optimal performance.
- To ensure continuous availability of file systems in the unlikely event of a Data Mover failure, configure each MPFS-enabled Data Mover for automatic failover to a standby Data Mover. Configuring Standbys on EMC Celerra provides more information on configuring a standby Data Mover.
- When a file system is exported on a Data Mover by using the server_export command with the ro (read-only) option, MPFS disregards the read-only option and writes to the file system (an illustrative export command follows this list).
- When an MPFS-enabled file system is extended, by using the nas_fs command through the command line interface (CLI) or by using EMC Unisphere software, the server loses the MPFS connection. The server must be rezoned to see the added disks; only after the rezoning can the server enable MPFS on the file system. The EMC Celerra Network Server Command Reference Manual provides information on the nas_fs command. Managing EMC Celerra Volumes and File Systems Manually provides information on extending Celerra Network Server file systems. Both documents are available on the EMC Powerlink website (registration required) at http://Powerlink.EMC.com.
- MPFS is compatible with the Celerra AntiVirus Agent (CAVA) and the Celerra Event Publishing Agent (CEPA). However, CAVA or CEPA cannot share the same server with the MPFS software in a Windows environment.
- MPFS Windows software version 3.2 or earlier supports global shares but does not support NetBIOS shares. MPFS Windows software version 3.2.1 and later supports both global shares and NetBIOS shares.
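To make the read-only caveat above concrete, here is a representative export command. This is a sketch only: the file system path /ufs1 is a placeholder, and the command follows the server_export syntax described in the Celerra Network Server Command Reference Manual.

$ server_export server_2 -Protocol nfs -option ro /ufs1

Even with ro specified, servers that access this file system through MPFS can still write to it, so do not rely on the export option alone to protect the data.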
Related information
Specific information related to the features and functionality described in this document is included in:
- Celerra MPFS over iSCSI Applied Best Practices Guide
- Celerra MPFS over FC and iSCSI Linux Clients Product Guide
- Celerra MPFS over FC and iSCSI Windows Clients Product Guide
- Celerra MPFS for HP-UX, AIX and Solaris Clients Product Guide
- Celerra MPFS for Linux Clients Release Notes
- Celerra MPFS for Windows Clients Release Notes
- Celerra MPFS for HP-UX and Solaris Clients Release Notes
- Celerra MPFS for AIX Clients Release Notes
- Using International Character Sets with EMC Celerra
EMC Celerra Network Server Documentation on Powerlink The complete set of EMC Celerra customer publications is available on the EMC Powerlink website at http://Powerlink.EMC.com. After logging in to Powerlink, click Support, and locate the link for the specific product technical documentation required.
Celerra Support Demos Celerra Support Demos are available on Powerlink. Use these instructional videos to learn how to perform a variety of Celerra configuration and management tasks. After logging in to Powerlink, click Support. Then click the link for the specific product required. Click Tools. Locate the link for the video that you require.
Celerra wizards Celerra wizards can be used to perform setup and configuration tasks. Using Wizards to Configure Celerra provides an overview of the steps required to configure a Celerra Network Server by using the Set Up Celerra wizard.
2 Concepts
MPFS allows servers to access shared data concurrently over iSCSI or Fibre Channel connections to Symmetrix or CLARiiON storage arrays. The components needed are:
- Celerra Network Server with MPFS
- Symmetrix or CLARiiON storage array
- Linux, Windows, UNIX, AIX, or Solaris server
Symmetrix or CLARiiON storage array: A high-performance unified storage cached disk array designed for online data storage. The Celerra Network Server interacts with the storage array to provide fast, reliable, and secure access to storage.
Note: CLARiiON storage array support requires MPFS software version 4.0 or later.
Linux, Windows, UNIX, AIX, or Solaris server: MPFS is enabled on a server and interacts with the MPFS-configured Celerra Network Server for synchronization, access control, and metadata management. The server accesses the Data Mover through the network file system (NFS) and CIFS protocols for file access, and through the File Mapping Protocol (FMP) for MPFS.
Understanding MPFS threads on page 17
Overview of EMC Celerra MPFS over FC and iSCSI on page 17
EMC Celerra MPFS architectures on page 17
Planning considerations on page 22
Compatibility with MPFS on page 22
- Over the IP LAN between the server and storage array for a unified storage or gateway configuration
- Through an iSCSI-to-Fibre Channel bridge for a unified storage (MDS-based) or gateway (MDS-based) configuration
Metadata passes through the Celerra Network Server (and the IP network), which includes the NAS portion of the configuration.
EMC Celerra MPFS over Fibre Channel
EMC Celerra MPFS over iSCSI
Celerra unified storage with Fibre Channel
Celerra gateway with Fibre Channel
Celerra unified storage with iSCSI
Celerra unified storage with iSCSI (MDS-based)
Celerra gateway with iSCSI
Celerra gateway with iSCSI (MDS-based)
- Celerra Network Server with MPFS: a NAS device that is configured with the EMC Celerra Network Server MPFS software
- Symmetrix or CLARiiON storage array
- Servers with MPFS software, connected to the Celerra Network Server through the IP LAN and to the Symmetrix or CLARiiON storage arrays by using Fibre Channel
Figure 1 on page 18 shows the Celerra unified storage with Fibre Channel configuration where the servers are connected to a Celerra Network Server by using an IP switch and one or more Fibre Channel switches. In a smaller configuration of one or two servers, the servers can be connected directly to the Celerra Network Server without the use of Fibre Channel switches.
Figure 2 on page 19 shows the Celerra gateway with Fibre Channel configuration. In this diagram, the servers are connected to a CLARiiON or a Symmetrix storage array by using a Celerra Network Server and IP switch or optional Fibre Channel switch.
- Celerra Network Server with MPFS: a NAS device that is configured with the EMC Celerra Network Server MPFS software
- Symmetrix or CLARiiON storage array
- Servers with MPFS software, connected to the Celerra Network Server through the IP LAN and to the Symmetrix or CLARiiON storage arrays by using iSCSI
Figure 3 on page 19 shows the Celerra unified storage with iSCSI configuration where the servers are connected to a Celerra Network Server by using one or more IP switches.
Figure 4 on page 20 shows the Celerra unified storage with iSCSI (MDS-based) configuration where the servers are connected to an iSCSI-to-Fibre Channel bridge (MDS-switch) and a Celerra Network Server by using an IP switch.
Figure 5 on page 20 shows the Celerra gateway with iSCSI configuration where the servers are connected to a CLARiiON or Symmetrix storage array with a Celerra Network Server by using one or more IP switches.
Figure 6 on page 21 shows the Celerra gateway with iSCSI (MDS-based) configuration where the servers are connected to a CLARiiON or Symmetrix storage array with an iSCSI-to-Fibre Channel bridge (MDS-switch) and a Celerra Network Server by using an IP switch.
Planning considerations
When configuring or planning to run MPFS, consider:
The compatibility of other Celerra Network Server features with MPFS Where MPFS fits into the Celerra Network Server configuration process
Table 3. MPFS-supported NAS features
EMC SnapSure: MPFS Yes; non-MPFS Yes. Note: Only the Production File System (PFS) can be used with MPFS. MPFS is not used when accessing ckpt-type file systems. The administrator is not notified when the restore is complete. The restore completion status can be checked in the server_log file and can also be configured by using the SVFS facility of nas_event. Not supported on MPFS with 128 TB or 256 TB Data Mover capacity.
EMC Symmetrix Remote Data Facility/Asynchronous (SRDF/A): MPFS No; non-MPFS No. Note: N/A.
EMC SRDF (synchronous): SRDF is active on the primary site only. After an SRDF failover, the server is able to access the secondary site's file systems over NFS or CIFS.
Quotas: When the quota limit is close, all traffic falls back to standard NAS.
3 Configuring
MPFS configuration summary on page 26
Steps for configuring a Celerra Network Server on page 29
Verify Data Mover compatibility on page 31
Start MPFS on page 32
Operating MPFS through a firewall on page 33
Mount a file system for servers on page 33
Stop MPFS on page 34
Celerra unified storage with Fibre Channel (NS20FC): Entry-level
Celerra unified storage with Fibre Channel (NS-120): Entry-level
Celerra unified storage with Fibre Channel (NS40FC): Midtier
Celerra unified storage with Fibre Channel (NS-480): Midtier
Celerra unified storage with Fibre Channel (NS-960): High-end
CLARiiON CX3-10F 1
120
CLARiiON CX4-120
120
CLARiiON CX3-40F
240
CLARiiON CX4-480
500
CLARiiON CX4-960
Celerra gateway with Fibre Channel (NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8)
High-end
CLARiiON CX300, CX500, CX700, CX3-20F, CX3-40F, CX3-80, CX4-120 (a), CX4-240, CX4-480 (b), CX4-960 (c), EMC Symmetrix DMX series, EMC Symmetrix VMAX series, or Symmetrix 8000 series
(a) 240 Linux Servers are supported with EMC FLARE release 29 or later.
(b) 1020 Linux Servers are supported with FLARE release 29 or later.
(c) 4080 Linux Servers are supported with FLARE release 29 or later.
Table 4. MPFS configuration summary (continued) Figure Configuration Price/size Maximum servers Storage system supported 120
a
Celerra unified storage with iSCSI (NS-120 with iSCSI enabled for MPFS): Entry-level
Celerra unified storage with iSCSI (NS40 for MPFS): Entry-level
Celerra unified storage with iSCSI (NS-480 with iSCSI enabled for MPFS): Midtier
Celerra unified storage with iSCSI (NS-960 with iSCSI enabled for MPFS): High-end
CLARiiON CX4-120 1
120
CLARiiON CX4-120
240
CLARiiON CX4-480
500
CLARiiON CX4-960
Celerra unified storage with iSCSI (MDS-based) (NS20FC): Entry-level
Celerra unified storage with iSCSI (MDS-based) (NS-120 with FC enabled for MPFS): Entry-level
Celerra unified storage with iSCSI (MDS-based) (NS40FC): Midtier
Table 4. MPFS configuration summary (continued) Figure Configuration Price/size Maximum servers Storage system supported Dependent on MDS CLARiiON CX4-480 b limit Maximum number of arrays
Celerra unified Midtier storage with iSCSI (MDS-based) (NS-480 with FC enabled for MPFS) Celerra unified High-end storage with iSCSI (MDS-based) (NS-960 with FC enabled for MPFS) 5 Celerra gateway Midtier with iSCSI (NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8) Celerra gateway High-end with iSCSI (NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8)
CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-120 (a), CX4-240, CX4-480 (b), or CX4-960 (c)
CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-120 (a), CX4-240, CX4-480 (b), CX4-960 (c), Symmetrix DMX series, Symmetrix VMAX series, or Symmetrix 8000 series
Celerra gateway High-end with iSCSI (MDSbased) (NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8)
Maximum servers: dependent on MDS limit. Storage system supported: CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-960 (c), Symmetrix DMX series, Symmetrix VMAX series, or Symmetrix 8000 series
Mount the file system, specifying the options appropriate for your application. CIFS users: Configuring and Managing CIFS on EMC Celerra
Table 5. Celerra Network Server configuration tasks (continued)
Step 9. Export a file system: Make the network access point available for NFS and CIFS users. Reference: NFS users, Configuring NFS on EMC Celerra; CIFS users, Configuring and Managing CIFS on EMC Celerra.
Step 10. Configure Data Movers to use CIFS: Configure the Data Movers to become members of a Windows domain and establish security policies. Reference: Configuring and Managing CIFS on EMC Celerra.
Step 11. Configure the Celerra Network Server for MPFS. Reference: Using MPFS on EMC Celerra; Celerra MPFS over FC and iSCSI v6.0 Linux Clients Product Guide; Celerra MPFS over FC and iSCSI v6.0 Windows Clients Product Guide; Celerra MPFS over FC for HP-UX, AIX, and Solaris Version 4.0 Clients Product Guide.
Step 12. Install, configure, and run MPFS. Reference: Celerra MPFS over FC and iSCSI v6.0 Linux Clients Product Guide; Celerra MPFS over FC and iSCSI v6.0 Windows Clients Product Guide; Celerra MPFS over FC for HP-UX, AIX, and Solaris Version 4.0 Clients Product Guide.
The ALL option, in place of the movername, runs the command for all Data Movers. Example: To receive the mount status on server_2, type:
$ server_mpfs server_2 -mountstatus
Output
server_2 :
fs                       mpfs compatible?   reason
--                       ----------------   ------
testing_renaming         no                 not a ufs file system
server2_fs1_ckpt         no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_5   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_4   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_3   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_2   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_1   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_10  no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_9   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_8   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_7   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_6   no                 volume structure not FMP compatible
root_fs_common           no                 not a ufs file system
mpfs_fs2                 no                 volume structure not FMP compatible
mpfs_fs1                 yes                mounted
server2_fs1              yes
root_fs_2                yes
Note: Possible reasons for incompatibility include:
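The ALL option described earlier can be used with the same switch to check every Data Mover in a single command, for example:

$ server_mpfs ALL -mountstatus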
- The disk mark is not 8 KB aligned because it was created with Celerra Network Server software version 3.x or earlier. The 8 KB alignment is available for Data Movers configured with Celerra Network Server software version 4.0 or later.
- The volume has a stripe size that is too small. For best performance, use a stripe size of 256 KB.
- A non-Universal Extended File System (non-UxFS) file system is used, for example, a checkpoint file system created with the SnapSure feature. Restrictions on page 11 provides more information.
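If a file system is reported as incompatible because its stripe size is too small, the underlying volume can be rebuilt with a 256 KB stripe before the file system is created. The following commands are a sketch only, using the standard nas_volume and nas_fs syntax; the disk volumes d126 through d129 and the names stv1, mtv1, and mpfs_fs1 are placeholders, not values taken from this guide:

$ nas_volume -name stv1 -create -Stripe 262144 d126,d127,d128,d129
$ nas_volume -name mtv1 -create -Meta stv1
$ nas_fs -name mpfs_fs1 -create mtv1

Managing EMC Celerra Volumes and File Systems Manually describes volume and file system creation in detail.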
Start MPFS
Use this command to start MPFS. You can set the number of threads to run. The number of threads must be between 1 and 128.
Action To start the MPFS service, use this command syntax: $ server_setup <movername> -P mpfs -option start=<n> where:
<movername> = name of the Data Mover. <n> = number of MPFS threads to start.
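Example: To start the MPFS service on server_2 with 32 threads, type:
$ server_setup server_2 -P mpfs -option start=32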
Note: If the server_mpfs server_2 -set threads command is run after the server_setup -P mpfs -option start command, threads are added and removed dynamically. Output server_2: done
FMP port numbers to open when operating MPFS through a firewall: Server: 6907. Celerra Network Server: 4656, 2079, 1234, 111, 625, 626-6351.
MPFS service is started on the Data Mover. The server can mount and access the MPFS file system.
Stop MPFS
Use this command to stop MPFS. When MPFS is stopped, the configuration information entered previously (for example, the number of threads that run when MPFS starts) is retained and used again when MPFS is restarted.
Action To stop the MPFS service, use this command syntax: $ server_setup <movername> -P mpfs -option stop where:
<movername> = name of the Data Mover.
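Example: To stop the MPFS service on server_2, type:
$ server_setup server_2 -P mpfs -option stop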
4 Managing
Set the threads variable on page 36
Delete configuration parameters on page 37
Add threads on page 38
Delete threads on page 38
Reset default values on page 39
View MPFS statistics on page 39
Note: The configuration values set with the server_mpfs command are recorded in a configuration file on the Data Mover. The ALL option runs the command for all Data Movers.
Note: When running the server_mpfs command before the server_setup -P mpfs -o start command, the threads option overrides the default value for the number of threads in the configuration file.
Note: When running the server_mpfs command before the server_setup -P mpfs -o start=<n> command, the threads option determines the number of threads that MPFS starts by default.
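Example: To set the default number of MPFS threads on server_2 to 32, type a command of the following form. This assumes the threads=<n> form of the -set option noted above; verify the exact syntax against the server_mpfs man page:
$ server_mpfs server_2 -set threads=32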
Note: The default number of threads is set to the number specified. In the example, it is set to 32.
Example: To stop the MPFS service and delete the MPFS configuration on server_2, type:
$ server_setup server_2 -P mpfs -option delete
Add threads
When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128. If system performance is slow, gradually increase the number of threads allotted for the Data Mover to improve system performance. Add threads conservatively, because the Data Mover allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of servers, and the workload.
Action To increase the number of threads running on a Data Mover, use this command syntax: $ server_mpfs <movername> -add <number_of_threads> where:
<movername> = name of the Data Mover. <number_of_threads> = number of MPFS threads added from the previous total for the specific server.
Example: To increase the number of MPFS threads running on server_2 by 16, type:
$ server_mpfs server_2 -add 16
Delete threads
While MPFS is running, threads can be deleted from a Data Mover. The number of MPFS threads must be between 1 and 128.
Action To decrease the number of threads running on a Data Mover, use this command syntax: $ server_mpfs <movername> -delete <number_of_threads> where:
<movername> = name of the Data Mover. <number_of_threads> = number of MPFS threads deleted from the previous total for the specific server.
Example: To decrease the number of MPFS threads running on server_2 by 16, type:
$ server_mpfs server_2 -delete 16
Note: Without a variable entry, the command resets all variables to their default values. The only valid variable is threads. Set the threads variable on page 36 provides more information. Output server_2: done
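A representative reset invocation might look like the following. The -Default option name is an assumption (the exact option is not reproduced on this page), so verify it against the server_mpfs man page before use:
$ server_mpfs server_2 -Default threads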
Output
server_2 :
Server ID=server_2
FMP Threads=16
Max Threads Used=1
FMP Open Files=0
FMP Port=4656
HeartBeat Time Interval=30
Note: The output for this example reflects that server_2 is running the default number of threads (16).
Note: Table 7 on page 40 provides detailed information on the statistics output when using the -Stats option.
Table 7. Server protocol statistics
Server ID: Unique identifier for the Data Mover. The server uses the Server ID to identify Data Movers that are running the MPFS service. The default Server ID is unique to all Data Movers in a single Celerra cabinet, but may be duplicated in multiple Celerra environments. The server requires a unique Server ID for each Data Mover. Use the server_name command to rename any Data Movers with duplicate Server IDs. Comment: The default Data Mover name is server_<x>, where <x> is the Celerra cabinet's slot number, unless it has been changed with the server_name command.
Table 7. Server protocol statistics (continued)
FMP Threads: Number of available FMP threads for servicing server requests. Comment: If required for performance reasons, use server_mpfs to change this value. Set the threads variable on page 36 provides more information.
FMP Open Files: Number of files currently opened by the server. Comment: N/A.
FMP Port: FMP port where the Celerra Network Server receives requests. Comment: N/A.
HeartBeat Time Interval: Time interval in which the server must renew the session's connection; otherwise, the session terminates. Comment: N/A.
Output
server_2 : server MPFS statistics
------------------------------
                          total     avg msec   high msec
                          -------   --------   ---------
open():                   44835     2.73       52
getMap():                 3130      0.03       4
allocSpace():             19490     2.14       180
mount():                  177       4.72       32
commit():                 53135     0.91       72
release():                575       0.06       4
close():                  480       0.02       4
nfs/cifs sharing delays:  34149     3.06       6084
notify replies (delay):   17745     1.37       6084
                          total
                          -------
notify msgs sent:         17745
notify replies failed:    0
conflicts (total):        34678
conflicts (lock):         0
conflicts (sharing):      34678
conflicts (starvation):   0
open files:               0
open sessions:            0
throughput for last 273230.03 sec: 1461.67 blks/sec read 5464.12 blks/sec written
Notes
where (columns):
total = total number of operations of this type.
avg msec = average time spent performing an operation of this type, in milliseconds.
high msec = longest time spent performing an operation of this type, in milliseconds.
where (rows):
open() = number of files the server opened.
getMap() = number of times the server reads a file and has no extent information; it runs a getmap.
allocSpace() = number of times the server writes a file and has no extent information; it runs a getmap.
mount() = number of mounts.
commit() = number of commits (such as NFS commit).
release() = number of times the server releases an extent, due to a notify or file close.
close() = number of file close commands the server sends to the Celerra Network Server.
nfs/cifs sharing delays = number of times, and how long, NFS or CIFS threads were delayed while waiting for the Celerra Network Server to release locked resources.
notify replies (delay) = number of notify RPCs received from the server.
notify msgs sent = number of notify server messages sent to the server.
notify replies failed = number of notify replies with a NOTIF_ERROR returned.
conflicts (total) = number of shared conflicts.
conflicts (lock) = number of conflicts caused by conflicting MPFS range lock requests.
conflicts (sharing) = number of conflicts caused by file sharing between MPFS requests and CIFS/NFS requests.
conflicts (starvation) = number of conflicts caused by the triggering of the starvation prevention mechanism in the MPFS module.
open files = number of files opened.
open sessions = number of active sessions.
throughput for last 273230.03 sec: = length of the measurement interval for the throughput values, in seconds.
1461.67 blks/sec read = throughput for file reads, measured in 8 KB blocks per second.
5464.12 blks/sec written = throughput for file writes, measured in 8 KB blocks per second.
Output
server_2 :
------------------------------
session MPFS statistics
------------------------------
session = xxx.xx.xx.xxx
                   total     avg msec   high msec
                   -------   --------   ---------
open():            1106      0.04       4
getMap():          470       0.05       4
allocSpace():      636       26.83      2720
mount():           85        5.55       16
commit():          81764     57.17      680
release():         1105      0.03       4
close():           1105      0.03       4
notify (delay):    0         0.00       0
                   total
                   -------
conflicts (generated):  0
conflicts (notified):   0
open files:             1
throughput for last 9628.88 sec: 6247.87 blks/sec read 8376.78 blks/sec written
Notes
where (columns):
total = total number of operations of this type.
avg msec = average time spent performing an operation of this type, in milliseconds.
high msec = longest time spent performing an operation of this type, in milliseconds.
where (rows):
open() = number of files the server opened.
getMap() = number of times the server reads a file and has no extent information; it runs a getmap.
allocSpace() = number of times the server writes a file and has no extent information; it runs a getmap.
mount() = number of mounts.
commit() = number of commits (such as NFS commit).
release() = number of times the server releases an extent, due to a notify or file close.
close() = number of file close commands the server sends to the Celerra Network Server.
notify (delay) = time the server spent replying to the Celerra Network Server.
conflicts (generated) = number of conflicts generated.
conflicts (notified) = number of conflicts notified.
open files = number of files opened.
throughput for last 9628.88 sec: = length of the measurement interval for the throughput values, in seconds.
6247.87 blks/sec read = throughput for file reads, measured in 8 KB blocks per second.
8376.78 blks/sec written = throughput for file writes, measured in 8 KB blocks per second.
File statistics
Use this command to view performance statistics for a particular file.
Action To view the counting statistics for a particular file, use this command syntax: $ server_mpfsstat <movername> -file <filepath> where:
<movername> = name of the Data Mover. <filepath> = fully qualified filename for the desired file in the form /<fs_name> or <fullpath of file>.
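Example: To view the statistics for the file /ufs1/mpfs/file.dat on server_2 (the same path shown in the sample output below), type:
$ server_mpfsstat server_2 -file /ufs1/mpfs/file.dat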
Output
------------------------------
file MPFS statistics
------------------------------
file = /ufs1/mpfs/file.dat
                 total
                 -----
open():          5
getMap():        4291
allocSpace():    22727
commit():        28
release():       19
                 total
                 -----
close():         5
conflicts:       1
where (columns):
total = total number of operations of this type.
avg msec = average time spent performing an operation of this type, in milliseconds.
high msec = longest time spent performing an operation of this type, in milliseconds.
where (rows):
open() = number of files the server opened.
getMap() = number of times the server reads a file and has no extent information; it runs a getmap.
allocSpace() = number of times the server writes a file and has no extent information; it runs a getmap.
commit() = number of commits (such as NFS commit).
release() = number of times the server releases an extent, due to a notify or file close.
close() = number of file close commands the server sends to the Celerra Network Server.
conflicts = number of shared conflicts.
Output
Active MPFS sessions (clientid/timestamp)
------------------------------------------
xxx.xx.xxx.xxx           0 sec           0 usec
xxx.xx.xxx.xxx  1087815431 sec      690034 usec
xxx.xx.xxx.xxx    29644763 sec  2080141652 usec
xxx.xx.xxx.xxx  1087820585 sec      140034 usec
Note: The output lists the IP addresses of the servers that are connected to the Data Mover along with the duration of each server's session.
Resetting statistics
Use this command to reset the statistics associated with a file or session.
Action To reset the statistics associated with the specified file or session, use this command syntax: $ server_mpfsstat <movername> -z -session <sessionid> where:
<movername> = name of the Data Mover. <sessionid> = IP address of the server associated with the desired session.
Example: To reset the statistics associated with a specified file or session, type:
$ server_mpfsstat server_2 -z -session xxx.xx.xxx.xxx
5 Troubleshooting
As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, contact your EMC Customer Support Representative. Problem Resolution Roadmap for Celerra contains additional information about using Powerlink and resolving problems. Topics included are:
EMC E-Lab Interoperability Navigator on page 50
Error messages on page 50
EMC Training and Professional Services on page 51
Installing MPFS software on page 51
Mounting and unmounting a file system on page 52
Miscellaneous issues on page 57
Error messages
All event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation. To view message details, use any of these methods:
Unisphere software:
Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.
CLI:
Type nas_message -info <MessageID>, where <MessageID> is the message identification number.
Use this guide to locate information about messages that are in the earlier-release message format.
Powerlink:
Use the text from the error message's brief description or the message's ID to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Search Support.
Solution Use a supported OS kernel. The EMC Celerra MPFS for Linux Clients Release Notes provide a list of supported kernels.
Problem The MPFS software does not run or the MPFS daemon did not start.
If the MPFS software is installed properly, the command displays output similar to this:
EMCmpfs-6.0.x.x
Note: Alternately, use the mpfsctl version command to verify that the MPFS software is installed on the Linux server. The mpfsctl man page or the EMC Celerra MPFS over FC and iSCSI v6.0 Linux Product Guide provides additional information.
2. Use the ps command to verify that the MPFS daemon has started:
ps -ef | grep mpfsd
The output will look like this if the MPFS daemon has started:
root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd
3. If the ps command output does not show the MPFS daemon process is running, as root, start MPFS using the following command:
# /etc/rc.d/init.d/mpfs start
Problem The mount command displays messages about unknown file systems.
Cause An option was specified that is not supported by the mount command.
Solution Check that the correct server name was specified and that the server is up with an exported file system.
Cause The MPFS mount operation could not find the physical disk associated with the specified file system.
Solution Use the mpfsinq command or the mount -o rescan command to verify that the physical disk device associated with the file system is connected to the server over Fibre Channel and is accessible from the server.
Solution Create a mount point and try the mount command again.
Solution Install the MPFS software and try the mount command again.
Problem A file system cannot be unmounted. The umount command displays this message:
umount: Device busy
Cause Existing processes were using the file system when an attempt was made to unmount it, or the umount command was issued from the file system itself.
Solution 1. Use the fuser command to identify all processes using the file system. 2. Use the kill -9 command to stop those processes. 3. Run the umount command again.
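For example, on a Linux server with the file system mounted at a hypothetical mount point /mnt/mpfs1, the sequence might look like this (verify the reported PIDs before killing them):
# fuser -m /mnt/mpfs1
# kill -9 <pid>
# umount /mnt/mpfs1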
Cause The server specified with the mount command does not exist or cannot be reached.
Solution 1. Interrupt the mount command by using the interrupt key combinations (usually Ctrl-C). 2. Try to reach the server by using the ping command. 3. If the ping command succeeds, retry the mount.
Cause 1 Permissions are required to access the file system specified in the mount command.
Solution 1 Ensure that the file system has been exported with the right permissions, or set the right permissions for the file system. The Celerra Network Server Command Reference Manual provides more information.
Cause The server specified in the mount command is not an NFS or Celerra Network Server.
Solution Check whether the correct server name was specified and the server has an exported file system.
Problem The mount command logs this message in the /var/log/messages file:
Couldn't find device during mount.
Cause The MPFS mount operation could not find the physical disk associated with the specified file system.
Solution Use either the fdisk command or the mpfsinq command to verify that the physical disk device associated with the file system is connected to the server over Fibre Channel and is accessible from the server.
Cause The server name specified in the mount command does not exist on the network.
Solution 1. Ensure that the correct server name is specified in the mount command. 2. If the correct name was not specified, check whether the host's /etc/hosts file or the NIS/DNS map contains an entry for the server. 3. If the server does appear in /etc/hosts or the NIS/DNS map, check whether the server responds to the ping command. 4. If the ping command succeeds, try using the server's IP address instead of its name in the mount command.
Solution 1. Install the MPFS software on the server. 2. Run the mount command again.
Miscellaneous issues
The following miscellaneous issues may be encountered with a Linux server.
Cause Write permission is required on the file system or the file system is mounted as read-only.
Solution 1. Check that you have write permission on the file system. 2. Try unmounting the file system and remounting it in read/write mode.
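On a Linux server this can look like the following; /mnt/mpfs1 is a hypothetical mount point, and the remount option applies only if the underlying mount type supports it (otherwise unmount and mount the file system again as described in your client product guide):
# mount -o remount,rw /mnt/mpfs1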
Cause The Celerra Network Server is unavailable due to a network-related problem, a reboot, or a shutdown.
Solution Check whether the server responds to the ping command. Also try unmounting and remounting the file system.
Solution 1 Check that the MPFS software package name is spelled correctly, with uppercase and lowercase letters specified exactly. If the MPFS software package name is spelled correctly, verify that the MPFS software is installed on the Linux server by typing: # rpm -q EMCmpfs If the MPFS software is installed properly, the command displays output similar to:
EMCmpfs-6.0.20-xxx
Cause 2 Trying to remove the MPFS software package while one or more MPFS-mounted file systems are active, and I/O is taking place on the active file system. A message appears on the Linux server similar to:
ERROR: Mounted MPFS filesystems found on the system. Please unmount all MPFS filesystems before removing the product.
Solution 2 1. Stop the I/O. 2. Unmount all active MPFS file systems by using the umount command. 3. Restart the removal process.
Glossary
A AV engine Third-party antivirus software running on a Windows Server that works with the Celerra AntiVirus Agent (CAVA). See also AV server, CAVA, VC client, and virus definition file. AV server Windows Server configured with the CAVA and a third-party antivirus engine. See also AV engine, CAVA, and VC client. C CAVA See Celerra AntiVirus Agent. Celerra AntiVirus Agent (CAVA) Application developed by EMC that runs on a Windows Server and communicates with a standard antivirus engine to scan CIFS files stored on a Celerra Network Server. See also AV engine, AV server, and VC client. Celerra Event Publishing Agent (CEPA) EMC-provided agent running on a Windows Server that provides details of events occurring on the Windows server. It can communicate with the Celerra Network Server to display a list of events that occurred. Celerra Network Server EMC network-attached storage (NAS) product line. CEPA See Celerra Event Publishing Agent. checkpoint Point-in-time, logical image of a PFS. A checkpoint is a file system and is also referred to as a checkpoint file system or an EMC SnapSure file system.
See also production file system. CIFS See Common Internet File System. client Front-end device that requests services from a server, often across a network. command line interface (CLI) Interface for typing commands through the Control Station to perform tasks that include the management and configuration of the database and Data Movers and the monitoring of statistics for the Celerra cabinet components. Common Internet File System (CIFS) File-sharing protocol based on the Microsoft Server Message Block (SMB). It allows users to share file systems over the Internet and intranets. D daemon UNIX process that runs continuously in the background, but does nothing until it is activated by another process or triggered by a particular event. Data Mover In a Celerra Network Server, a cabinet component that is running its own operating system that retrieves data from a storage device and makes it available to a network client. This is also referred to as a blade. A Data Mover is sometimes internally referred to as DART since DART is the software that is running on the platform. Domain Name System (DNS) Name resolution software that allows users to locate computers on a UNIX network or TCP/IP network by domain name. The DNS server maintains a database of domain names, hostnames, and their corresponding IP addresses, and services provided by the application servers. See also ntxmap. E extent Set of adjacent physical blocks. F Fibre Channel Nominally 1 Gb/s data transfer interface technology, although the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data can be transmitted and received simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run over Fibre Channel. Consequently, a single connectivity technology can support high-speed I/O and networking.
File Mapping Protocol (FMP) File system protocol used to exchange file layout information between an application server and the Celerra Network Server. See also MPFS.
file system Method of cataloging and managing the files and directories on a storage system.
FLARE Embedded operating system in CLARiiON disk arrays.
I
Internet Protocol address (IP address) Address uniquely identifying a device on any TCP/IP network. Each address consists of four octets (32 bits), represented as decimal numbers separated by periods. An address is made up of a network number, an optional subnetwork number, and a host number.
iSCSI See Internet SCSI.
iSCSI initiator iSCSI endpoint, identified by a unique iSCSI name, which begins an iSCSI session by issuing a command to the other endpoint (the target).
iSCSI target iSCSI endpoint, identified by a unique iSCSI name, which executes commands issued by the iSCSI initiator.
K
kernel Software responsible for interacting most directly with the computer's hardware. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates computer resources.
M
mount point Local subdirectory to which a mount operation attaches a subdirectory of a remote file system.
MPFS See Multi-Path File System.
MPFS session Connection between an MPFS client and a Celerra Network Server.
MPFS share Shared resource designated for multiplexed communications by using the MPFS file system.
Multi-Path File System (MPFS) Celerra Network Server feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix or CLARiiON storage array. MPFS adds a lightweight protocol called File Mapping Protocol (FMP) that controls metadata operations.
N
network basic input/output system (NetBIOS) Network programming interface and protocol developed for IBM personal computers.
network file system (NFS) Network file system protocol that allows a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks.
Network Information Service (NIS) Distributed data lookup service that shares user and system information across a network, including usernames, passwords, home directories, groups, hostnames, IP addresses, and netgroup definitions.
Network Time Protocol (NTP) Protocol used to synchronize the realtime clock in a computer with a network time source.
network-attached storage (NAS) Specialized file server that connects to the network. A NAS device, such as a Celerra Network Server, contains a specialized operating system and a file system, and processes only I/O requests by supporting popular file sharing protocols such as NFS and CIFS.
P
production file system (PFS) Production file system on a Celerra Network Server. A PFS is built on Symmetrix volumes or CLARiiON LUNs and mounted on a Data Mover in the Celerra Network Server.
R
replication Service that produces a read-only, point-in-time copy of a source file system. The service periodically updates the copy, making it consistent with the source file system.
S
storage area network (SAN) Network of data storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. See also network-attached storage (NAS).
stripe size Number of blocks in one stripe of a stripe volume.
T thread Sequential flow of control in a computer program. A thread consists of address space, a stack, local variables, and global variables. V virus definition file File containing information for a virus protection program that protects a computer from the newest, most destructive viruses. This file is sometimes referred to as a virus signature update file, a virus pattern update file, or a virus identity (IDE) file. See also AV engine, AV server, CAVA, and VC client. virus-checking client (VC client) Virus-checking agent component of the Celerra Network Server software that runs on the Data Mover. See also AV engine, AV server, and CAVA.
Index
A
adding threads 38
architectures 17
C
Celerra AntiVirus Agent (CAVA) 12
Celerra Event Publishing Agent (CEPA) 12
Celerra gateway
  with Fibre Channel 18
  with iSCSI 20
  with iSCSI MDS-based 20
Celerra Network Server
  configuration 29
  requirements 9
Celerra Replicator 11
Celerra unified storage
  with Fibre Channel 18
  with iSCSI 19
  with iSCSI MDS-based 20
Checkpoint 11
configuration summary 26
D
Data Mover
  automatic failover 11
  incompatibility 32
  listing open sessions 47
  locking policy 11
  mount status, verifying 31
  performance statistics 42
  protocol statistics 40
  server_mount command 11
  standby 11
  verify compatibility 31
deleting configuration parameters 37
deleting threads 38
E
EMC E-Lab Navigator 50
error messages 50
exporting a file system path 34
exporting file system 11
F
Fibre Channel architecture 18
file system compatibility 11
firewall FMP port numbers 33
G
global shares 12
H
hardware 10
I
iSCSI architecture 19
M
messages, error 50
metadata 17
miscellaneous Linux server issues 57
mount status, verifying 31
mounting a file system 33, 52
MPFS components 15
MPFS file statistics 45
N
NAS features 22
nas_fs command 11
NetBIOS shares 12
network 10
O
operating through a firewall 33
overview 7
P
performance 11
performance statistics 41
planning considerations 22
protocol statistics 40
R
related documentation 12
resetting default thread values 39
resetting statistics 48
restrictions 11
S
session statistics 44
setting threads 36
starting MPFS 33
stopping MPFS 34
storage 10
stripe size 11
support for shares 12
system requirements 9
T
threads
  adding 38
  default 17
  maximum 17
  resetting default values 39
  setting 36
troubleshooting 49
U
unmount a file system 52
unmounting, troubleshooting 55