NetApp Harvest 1.4 Document
28 September 2017
Abstract
This guide discusses installation and administration steps for the NetApp Harvest data
collector.
2.2 Interoperability
The following software versions are known to work. Other software versions may work but have not been
explicitly tested:
Software                     Version                          Notes
Graphite                     0.9.13 (0.9.13-pre1), 0.9.14     Earlier versions work but lack a function
                                                              (maxDataPoints) needed for high performance
                                                              when working with the JSON API interface
                                                              and large time spans
Grafana                      3.0, 3.1.1, 4.0, 4.1, 4.5.2      Dashboards are designed for Grafana 3.0 at
                                                              minimum. If using an older release some
                                                              dashboard panels may not display as intended
NetApp OnCommand Insight     7.3.0 and above
OnCommand Unified Manager    6.1, 6.2, 6.3, 6.4, 7.0, 7.1, 7.2
apt-get based system         sudo apt-get -y install unzip perl libjson-perl libwww-perl
(Debian, Ubuntu, etc)        sudo apt-get -y install libxml-parser-perl
                             sudo apt-get -y install liblwp-protocol-https-perl
                             sudo apt-get -y install libexcel-writer-xlsx-perl
Note: If your system is ‘hardened’ with a more restrictive umask you should modify it in your terminal
session prior to installing software:
#This results in files being rwxr-xr-x
[root@localhost ~]# umask 022
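With umask 022, new directories come out rwxr-xr-x (755) and new files rw-r--r-- (644). A throwaway check of the effect in your terminal session (a sketch; assumes GNU stat):

```shell
# Confirm the effect of umask 022 on freshly created files and directories
umask 022
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/file"
stat -c '%a' "$d/sub"     # directories: 777 & ~022 = 755
stat -c '%a' "$d/file"    # files:       666 & ~022 = 644
rm -rf "$d"
```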
Destination                        Protocol                      Port        Purpose
ONTAP Cluster management LIF,      HTTPS with TLS                443*/TCP    Used to access the API and fetch
Data ONTAP 7-mode management                                                 performance and capacity
IP, OCUM system IP                                                           information
Graphite Server IP                 Graphite plaintext protocol   2003*/TCP   Used to post metrics to the
                                                                             Graphite metrics DB
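Before installing, it can help to confirm reachability of these ports from the intended Harvest host. A minimal sketch (the endpoint names are hypothetical; uncomment the nc line on a connected host):

```shell
# Loop over destination:port pairs; the shell parameter expansions split
# each pair at the ':' separator
for target in cluster1.example.com:443 graphite1.example.com:2003; do
  host=${target%:*}
  port=${target#*:}
  echo "checking $host port $port"
  # nc -vz -w 5 "$host" "$port"    # real TCP connect test, needs network access
done
```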
5. Change into the /tmp directory and use yum to install the rpm replacing the ‘#’ symbols with your
actual version:
[root@host ~]# cd /tmp
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
netapp-harvest noarch 1.3-1 /netapp-harvest-1.3-1.noarch 2.8 M
Transaction Summary
================================================================================
Install 1 Package
Complete!
[root@host tmp]#
6. Extract the Perl component of the SDK, replacing the 4 x ‘#’ symbols with your actual version:
[root@host tmp]# unzip -j netapp-manageability-sdk-#.#.zip \
netapp-manageability-sdk-#.#/lib/perl/NetApp/* -d /opt/netapp-harvest/lib
Archive: netapp-manageability-sdk-5.6.zip
inflating: /opt/netapp-harvest/lib/DfmErrno.pm
inflating: /opt/netapp-harvest/lib/NaElement.pm
inflating: /opt/netapp-harvest/lib/NaErrno.pm
inflating: /opt/netapp-harvest/lib/NaServer.pm
continues…
7. Verify installation by executing the two main Harvest programs without any arguments to see their
usage syntax. If an error is shown, check prerequisites or the Troubleshooting section for help.
a. NetApp Manager
[root@host tmp]# /opt/netapp-harvest/netapp-manager
continues…
b. NetApp Worker:
[root@host tmp]# /opt/netapp-harvest/netapp-worker
continues…
8. Enable autostart:
RHEL or CentOS 6:
[root@host netapp-harvest]# chkconfig --add netapp-harvest
Note: Do not start the pollers (service netapp-harvest start) until you have configured Graphite
with appropriate retention settings; see Setting frequency and retention in storage-schemas.conf
• Change into the /tmp directory and use apt-get to install the deb replacing the ‘#’ symbols with
your actual version:
root@host:~# cd /tmp
root@host:tmp#
• Extract the Perl component of the SDK, replacing the 4 x ‘#’ symbols with your actual version:
root@host:tmp# unzip -j netapp-manageability-sdk-#.#.zip \
netapp-manageability-sdk-#.#/lib/perl/NetApp/* -d /opt/netapp-harvest/lib
Archive: netapp-manageability-sdk-5.6.zip
inflating: /opt/netapp-harvest/lib/DfmErrno.pm
inflating: /opt/netapp-harvest/lib/NaElement.pm
inflating: /opt/netapp-harvest/lib/NaErrno.pm
inflating: /opt/netapp-harvest/lib/NaServer.pm
continues…
• Verify installation by executing the two main Harvest programs without any arguments to see their
usage syntax. If an error is shown, check prerequisites or the Troubleshooting section for help.
• NetApp Manager
root@host:tmp# /opt/netapp-harvest/netapp-manager
continues…
• NetApp Worker:
root@host:tmp# /opt/netapp-harvest/netapp-worker
continues…
Note: Do not start the pollers (service netapp-harvest start) until you have configured Graphite
with appropriate retention settings; see Setting frequency and retention in storage-schemas.conf
5. Change into the /tmp directory, extract the NetApp Harvest software from the .zip file (replacing
the ‘#’ symbols with your actual version), and move the contents to the /opt directory:
[root@host ~]# cd /tmp
Archive: netapp-harvest-1.3.zip
creating: netapp-harvest/
inflating: netapp-harvest/CHANGES.txt
inflating: netapp-harvest/netapp-harvest.conf.example
continues…
7. Verify installation by executing the two main Harvest programs without any arguments to see their
usage syntax. If an error is shown, check prerequisites or the Troubleshooting section for help.
a. NetApp Manager
[root@host netapp-harvest]# /opt/netapp-harvest/netapp-manager
Usage: /opt/netapp-harvest/netapp-manager {-status|-start|-stop|-restart|-import|-export}
[-poller <str>] [-group <str>] [-conf <str>] [-confdir <str>] [-logdir <str>] [-h] [-v]
continues…
9. Add the service to the Linux system configuration and enable it to autostart:
Ubuntu 14.04:
root@host:tmp# update-rc.d netapp-harvest defaults
Note: Do not start the pollers (service netapp-harvest start) until you have configured
Graphite with appropriate retention settings; see Setting frequency and retention in
storage-schemas.conf.
5. Change into the /tmp directory and use yum to upgrade the rpm package:
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
netapp-harvest noarch 1.3X2-1 /netapp-harvest-1.3X2-1.noarch 2.8 M
Transaction Summary
================================================================================
Upgrade 1 Package(s)
Complete!
5. Change into the /tmp directory and use dpkg to install the deb package:
5. Change into the /tmp directory, extract the NetApp Harvest software from the .zip file, and move
the contents to the /opt/netapp-harvest-new directory:
[root@host ~]# cd /tmp
Archive: netapp-harvest-1.3.zip
creating: netapp-harvest/
inflating: netapp-harvest/CHANGES.txt
inflating: netapp-harvest/netapp-harvest.conf.example
continues…
[root@host tmp]# mv /tmp/netapp-harvest /opt/netapp-harvest-new
6. Copy the configuration files from your existing installation into the new installation.
Note: The \cp ensures you don’t use an alias for cp (which normally includes -i, requiring you to
interactively accept overwrites).
[root@host tmp]# \cp /opt/netapp-harvest/template/* /opt/netapp-harvest-new/template
(ignore warning about omitting subdirectories)
[root@host tmp]# \cp /opt/netapp-harvest/cert/* /opt/netapp-harvest-new/cert
Verify using ps that no pollers are running. If any are still running, kill them manually by PID.
[root@host tmp]# ps -ef | grep netapp
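One way to list stragglers without matching the grep process itself is the bracket trick with pgrep (a sketch; review the PID list before killing anything):

```shell
# 'netapp-worke[r]' matches processes named netapp-worker but never matches
# this command line itself (the literal text contains brackets, which the
# regex does not match)
pgrep -af 'netapp-worke[r]' || echo "no pollers running"
# pkill -f 'netapp-worke[r]'    # stop any that remain, once verified
```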
10. Update the Grafana dashboards to the latest versions, see Integrating with Grafana for
instructions.
11. Copy the default startup script to the /etc/init.d directory.
[root@host ~]# cp /opt/netapp-harvest/util/netapp-harvest /etc/init.d
12. Add the service to the Linux system configuration and enable it to autostart:
Ubuntu 14.04:
root@host:tmp# update-rc.d netapp-harvest defaults
Note: Do not start the pollers (service netapp-harvest start) until you have configured
Graphite with appropriate retention settings; see Setting frequency and retention in
storage-schemas.conf.
If ssl is ‘active’, continue. If not, set up SSL and be sure to choose a Key length (bits) of 2048:
netapp> secureadmin setup ssl
SSL Setup has already been done before. Do you want to proceed? [no] yes
Country Name (2 letter code) [US]: NL
State or Province Name (full name) [California]: Noord-Holland
Locality Name (city, town, etc.) [Santa Clara]: Schiphol
Organization Name (company) [Your Company]: NetApp
Organization Unit Name (division): SalesEngineering
Common Name (fully qualified domain name) [sdt-7dot1a.nltestlab.hq.netapp.com]:
Administrator email: noreply@netapp.com
Days until expires [5475] :5475
Key length (bits) [512] :2048
3. Create a user that utilizes this role and enter the password when prompted:
useradmin user add netapp-harvest -c "User account for performance monitoring by NetApp Harvest" -
n "NetApp Harvest" -g netapp-harvest-group
The user is now created and can be configured for use by NetApp Harvest.
2. If you wish to use SSL certificate based authentication complete the following sub steps,
otherwise continue to step 3.
a. Log in to the NetApp Harvest Linux host, become root, change into the
/opt/netapp-harvest/cert subdirectory, and generate a client certificate and private key
using information from your environment:
Note: The ‘Common Name’ must match the username you create on the cDOT cluster
later. Also, in this example the certificate will be valid for 10 years; adjust the days count
according to your security requirements.
root@host ~# cd /opt/netapp-harvest/cert
root@host cert# openssl req -x509 -nodes -days 3650 -newkey rsa:1024 \
-keyout netapp-harvest.key -out netapp-harvest.pem
Generating a 1024 bit RSA private key
.......................................++++++
....................++++++
writing new private key to 'netapp-harvest.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Noord-Holland
Locality Name (eg, city) []:Schiphol-Rijk
Organization Name (eg, company) [Internet Widgits Pty Ltd]:NetApp BV
Organizational Unit Name (eg, section) []:PS
Common Name (e.g. server FQDN or YOUR name) []:netapp-harvest
Email Address []:netapp-harvest@netapp.com
A file named netapp-harvest.pem containing the public certificate and a file named
netapp-harvest.key containing your private key are created.
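Since the Common Name must match the ONTAP username created later, it is worth reading the subject back out of the generated certificate. A scratch sketch (here -subj supplies the DN non-interactively, a 2048-bit key is used, and the file names are throwaway):

```shell
# Create a throwaway client certificate and verify its subject CN
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -subj '/C=NL/O=NetApp BV/CN=netapp-harvest' \
  -keyout /tmp/nh-demo.key -out /tmp/nh-demo.pem 2>/dev/null
# The CN printed here must equal the username created on the cluster later
openssl x509 -in /tmp/nh-demo.pem -noout -subject
```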
b. Log in to the ONTAP clustershell as a user with full administrative privileges and add the
public certificate. The entire contents of the pem file should be copied and pasted when
requested. Also be sure to replace the argument of -vserver with the name of your
cluster:
cluster::> security certificate install -type client-ca -vserver cluster
Please enter Certificate: Press <Enter> when done
-----BEGIN CERTIFICATE-----
MIIChDCCAe2gAwIBAgIJAKgurBmDXc3uMA0GCSqGSIb3DQEBBQUAMFsxCzAJBgNV
BAYTAk5MMRUwEwYDVQQHDAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0RlZmF1bHQg
Q29tcGFueSBMdGQxFzAVBgNVBAMMDm5ldGFwcC1oYXJ2ZXN0MB4XDTE1MDYyNjEw
MTk1NloXDTI1MDYyMzEwMTk1NlowWzELMAkGA1UEBhMCTkwxFTATBgNVBAcMDERl
ZmF1bHQgQ2l0eTEcMBoGA1UECgwTRGVmYXVsdCBDb21wYW55IEx0ZDEXMBUGA1UE
AwwObmV0YXBwLWhhcnZlc3QwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMyq
Qq6qXRW7czWRNHYMfmlZjpr0FV/VmOv0Brt9Ij7+tHYb+CcIKVyj/gv0RM8DGJ5L
X7VrdrnpINAu6tghBS6YOG2Nr1h9CRunBR91Hm2/DPKA7C0cNjg6EHuJkYLOVF21
nmRpdAXDURBfw89v1YrZz7uc6LBqGX8SRqi0y0OvAgMBAAGjUDBOMB0GA1UdDgQW
BBTOMM2pC8HH0aK9ZRGw5OxOqcV7RDAfBgNVHSMEGDAWgBTOMM2pC8HH0aK9ZRGw
5OxOqcV7RDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFrg5HjXtZ8q
YkRcnCyekvdtFT1a18FyWjDUkRtldySyRgsdtwcF6BoYiVvEmjPVX2QR8n6u8G/R
Ii+6MWt+ODwPTvzZX6k92ni3yDr0Ffghjj9V5+UZEK8aGHPnD4kpt/sAnJf3gbzO
WswIMiWH6mYaYLnkGDAze9UuXZcEuw4E
-----END CERTIFICATE-----
You should keep a copy of the CA-signed digital certificate for future reference.
c. Enable SSL client authentication. Be sure to replace the argument of -vserver with the
name of your cluster:
cluster::> security ssl modify -client-enabled true -vserver clustername
Note: The user created has the ability to log in via the API only; login access via other
application/login methods is expected to fail.
3. Copy the default example configuration file to your configuration file and set the owner:
[root@host netapp-harvest]# cp netapp-harvest.conf.example netapp-harvest.conf
[root@host netapp-harvest]# chown netapp-harvest:netapp-harvest netapp-harvest.conf
4. Use a text editor (nano, vi) to edit the configuration file. Values in all CAPITAL LETTERS must
be modified to match your environment. Details related to Grafana and OCI parameters will be
modified later in the Integrating with Grafana and Integrating with OCI instructions, so skip them
for now.
[root@host netapp-harvest]# nano netapp-harvest.conf
a. Locate the [default] section of the file and modify the graphite_server value to match your
environment:
graphite_server = INSERT_IP_OR_HOSTNAME_OF_GRAPHITE_SERVER_HERE
Example new value:
graphite_server = 10.64.31.76
b. Continue in the [default] section of the file and set authorization credentials
username = netapp-harvest
password = sEcReTr3aDoNlYpW
Or, to use SSL certificates, ensure the .pem and .key files are in the
/opt/netapp-harvest/cert directory and uncomment and complete the filenames
for your environment:
#auth_type = ssl_cert
#ssl_cert = INSERT_PEM_FILE_NAME_HERE
#ssl_key = INSERT_KEY_FILE_NAME_HERE
c. Find the example section for a Data ONTAP system, copy and remove the ‘#’ symbols,
and modify the values to match your environment:
Note: No whitespace characters are allowed in the section header name or group fields
# [INSERT_CLUSTER_OR_CONTROLLER_NAME_HERE_EXACTLY_AS_SHOWN_FROM_CLI_PROMPT]
# hostname = INSERT_IP_ADDRESS_OR_HOSTNAME_OF_CONTROLLER_OR_CLUSTER_LIF_HERE
# group = INSERT_GROUP_IDENTIFIER_HERE
[amsstor001]
hostname = 10.64.31.100
group = ams4
d. Find the example section for an OCUM system, copy and remove the ‘#’ symbols, and
modify the values to match your environment:
# [INSERT_OCUM_SERVER_NAME_HERE]
# hostname = INSERT_IP_ADDRESS_OR_HOSTNAME_OF_OCUM_SERVER
# group = INSERT_GROUP_IDENTIFIER_HERE
# host_type = OCUM
# data_update_freq = 900
# normalized_xfer = gb_per_sec
[nyclinx001]
hostname = 10.24.31.243
group = nyc1
host_type = OCUM
data_update_freq = 900
normalized_xfer = gb_per_sec
Note: If you only use OCUM capacity collection and do not want to collect performance
information from the clusters you must still have an entry for each monitored cluster. Create
an entry for each cluster as shown above (omitting login details) and set the parameter
host_enabled = 0. These settings will allow NetApp Harvest to still include the group in
the Graphite metrics hierarchy but will not collect performance data from the clusters.
5. Repeat steps 4c and 4d as required for each cluster, 7-mode controller, or OCUM system. If a
global username/password is not desired, credentials can instead be specified in each poller section.
6. Save the file and exit to the command prompt.
Note: Do not start the pollers (./netapp-manager -start) until you have configured Graphite
with appropriate retention settings; see Setting frequency and retention in storage-schemas.conf.
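The pollers submit metrics using Graphite’s plaintext protocol, one `path value epoch-timestamp` line per metric. A sketch of hand-crafting such a line (sending it with nc is commented out because it needs a reachable Graphite server; the IP is the example value from step 4a):

```shell
# Compose one metric line in Graphite plaintext format: <path> <value> <epoch>
metric="netapp.poller.smoketest 1 $(date +%s)"
echo "$metric"
# echo "$metric" | nc -w 5 10.64.31.76 2003    # push it to port 2003/TCP
```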
The following steps configure frequency and retention settings which are appropriate for NetApp Harvest
sourced metrics:
1. Log in to the Graphite host and, if not logged in as the root superuser, become root using sudo:
[user@host ~]# sudo -i
2. Use a text editor (nano, vi) to edit the storage-schemas.conf file. Depending on your installation
the location can vary, but is typically one of:
[root@host ~]# nano /etc/carbon/storage-schemas.conf
or
[root@host ~]# nano /opt/graphite/conf/storage-schemas.conf
[netapp_perf]
pattern = ^netapp(\.poller)?\.perf
retentions = 60s:35d,5m:100d,15m:395d,1h:5y

[netapp_capacity]
pattern = ^netapp(\.poller)?\.capacity\.
retentions = 15m:100d,1d:5y
The above entries result in metrics retention:
i. Performance: 60 sec samples for 35 days, 5 min averages for 100 days, 15 min
averages for 395 days, and 1 hour averages for 5 years.
ii. Capacity: 15 min samples for 100 days, 1 day averages for 5 years
iii. Retentions can be adjusted to match your environment, but do not adjust the
pattern.
7. Save the file and exit to the command prompt.
8. The running Carbon service will automatically discover changes made to this file so no restart of
the service is needed.
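As a sanity check, the retention strings translate directly into datapoint counts per metric; shell arithmetic reproduces the figures listed above:

```shell
# Raw points kept per metric for each retention window
echo $(( 35 * 24 * 60 ))    # perf: 60s samples for 35 days   -> 50400
echo $(( 100 * 24 * 12 ))   # perf: 5m averages for 100 days  -> 28800
echo $(( 395 * 24 * 4 ))    # perf: 15m averages for 395 days -> 37920
echo $(( 5 * 365 * 24 ))    # perf: 1h averages for 5 years   -> 43800
echo $(( 100 * 24 * 4 ))    # capacity: 15m for 100 days      -> 9600
echo $(( 5 * 365 ))         # capacity: 1d for 5 years        -> 1825
```

Whisper stores roughly 12 bytes per point, so these counts give a rough per-metric file size on disk.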
2. Use a text editor (nano, vi) to edit the carbon.conf file. Depending on your installation
the location can vary, but is typically one of:
[root@host ~]# nano /etc/carbon/carbon.conf
or
[root@host ~]# nano /opt/graphite/conf/carbon.conf
MAX_CREATES_PER_MINUTE = 600
4. Save the file and exit to the command prompt.
5. Restart the Carbon cache process to make the changes active:
[root@host ~]# service carbon-cache stop
[root@host ~]# service carbon-cache start
2. Use a text editor (nano, vi) to edit the blacklist.conf file. Depending on your installation the
location can vary, but is typically one of:
[root@host ~]# nano /etc/carbon/blacklist.conf
or
[root@host ~]# nano /opt/graphite/conf/blacklist.conf
#
# Exclusion for SnapCreator
# This is from creating a clone manually in SnapCreator
# Clone volume gets a "cl_" prefix and a "_YYYYMMDDhhmmss" suffix
#
^netapp\.(capacity|perf7?)\..+\.vol\.cl_.+_(19|20)\d\d(0[1-9]|1[012])(0[1-9]|[12][0-9]|3[01])[0-9]{6}\..+
#
# Exclusion for SnapDrive/SnapManager
# Clone volumes get a "sdw_cl_" prefix
#
^netapp\.(capacity|perf7?)\..+\.vol\.sdw_cl_.+\..+
#
# Exclusion for metadata volumes that may also clutter menus.
#
# CRS volumes in SVM-DR or MetroCluster get a "MDV_CRS_" prefix
# Audit volumes get a "MDV_aud_" prefix
#
^netapp\.(perf)\..+\.vol\.MDV_CRS_.+\..+
^netapp\.(perf)\..+\.vol\.MDV_aud_.+\..+
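These patterns can be dry-run against sample metric paths with grep -E before deploying them (a sketch; the paths below are made up, and \d is rewritten as [0-9] because grep -E does not support it):

```shell
# Build sample metric paths: a SnapCreator clone, a SnapDrive clone,
# a metadata volume, and a normal volume that must NOT match anything
cat > /tmp/paths.txt <<'EOF'
netapp.capacity.ams4.cluster1.svm1.vol.cl_web_20150601120000.size_used
netapp.perf.ams4.cluster1.svm1.vol.sdw_cl_lun1.read_ops
netapp.perf.ams4.cluster1.svm1.vol.MDV_CRS_0a1b.read_ops
netapp.perf.ams4.cluster1.svm1.vol.vol0.read_ops
EOF
# Each exclusion should match exactly one of the four lines (prints 1 each)
grep -Ec '^netapp\.(capacity|perf7?)\..+\.vol\.cl_.+_(19|20)[0-9][0-9](0[1-9]|1[012])(0[1-9]|[12][0-9]|3[01])[0-9]{6}\..+' /tmp/paths.txt
grep -Ec '^netapp\.(capacity|perf7?)\..+\.vol\.sdw_cl_.+\..+' /tmp/paths.txt
grep -Ec '^netapp\.(perf)\..+\.vol\.MDV_CRS_.+\..+' /tmp/paths.txt
```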
2. Add a crontab entry. The following syntax purges metrics with 120 days of inactivity, and any
empty directories, every Sunday at 00:30:
[root@host ~]# crontab -e
or, if using the Ubuntu package:
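The purge logic can be rehearsed safely on a throwaway directory before wiring it into cron (a sketch; assumes GNU find and touch, with the 120-day window matching the inactivity threshold above):

```shell
# Simulate the purge: files not modified in more than 120 days are removed
W=$(mktemp -d)
touch -d '130 days ago' "$W/old.wsp"    # stale metric file
touch "$W/new.wsp"                      # recently updated metric file
find "$W" -type f -name '*.wsp' -mtime +120 -delete
find "$W" -mindepth 1 -type d -empty -delete
ls "$W"                                 # only new.wsp survives
rm -rf "$W"
```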
2. Add a new API key by clicking the main org and then API keys:
4. The API key will be displayed. Copy the entire string (including the ‘=’ at the end) so that it can be
added to the conf file in the next step:
24 NetApp Harvest Installation and Administration Guide 1.4
5. Use a text editor (nano, vi) to edit the netapp-harvest configuration file and add the Grafana server
details.
[root@host netapp-harvest]# nano /opt/netapp-harvest/netapp-harvest.conf
6. Locate the [global] section of the file and modify the grafana_api_key value to match the key
provided in the previous step, and modify the Grafana URL to match the URL used in your web
browser:
[global]
grafana_api_key = INSERT_LONG_KEY_HERE
grafana_url = INSERT_URL_OF_GRAFANA_WEB_INTERFACE_HERE
Example new values:
[global]
grafana_api_key = yJrIjoiMWhhUHIiLCJuIjoibmV0YXBwLWhhcnZlc3QiLCJpZCI6MX0=
grafana_url = http://localhost:3000
7. Save the file and exit to the command prompt.
1. Run the netapp-manager -import command to import all dashboards from the grafana
subdirectory to the Grafana server
root@host:/opt/netapp-harvest# /opt/netapp-harvest/netapp-manager -import
[OK ] Will import dashboards to http://localhost:3000
[OK ] Imported dashboard [db_netapp.json] successfully
[OK ] Imported dashboard [db_netapp-dashboard-7-mode-node.json] successfully
continues…
1. Run the netapp-manager -export command to download dashboards from the Grafana server
to the grafana subdirectory
root@host:/opt/netapp-harvest# /opt/netapp-harvest/netapp-manager -export
[OK ] Will export dashboards from http://localhost:3000
[OK ] Exported [db/netapp] to dashboard file [db_netapp.json]
[OK ] Exported [db/netapp-detail-graphite-server] to dashboard file [db_netapp-detail-graphite-server.json]
continues…
Harvest can be configured to push data exclusively to NetApp OnCommand Insight. To disable the push
to Graphite, set the graphite_enabled configuration parameter to 0; see Configuring NetApp Harvest
(advanced). In this scenario, if only a subset of object types (OCI namespaces) is configured you should
also disable collection for object types/namespaces that are not being forwarded to OCI. See Disabling
collection of an object type for more information.
PURPOSE:
Stop/start/restart/status of netapp-worker poller processes and import/export Grafana
dashboards
VERSION:
1.3
ARGUMENTS:
Required (one of):
-status Show status of all matching pollers
-start Start all matching pollers
-stop Stop all matching pollers
-restart Stop/start all matching pollers
-export Export dashboard json files from Grafana server
-import Import dashboard json files to Grafana server
Optional:
-poller <str> Filter on poller names that match <str> (only valid for
status/start/stop/restart)
-group <str> Filter on group names that match <str> (only valid for
status/start/stop/restart)
-conf <str> Name of config file to use to find poller name
(default: netapp-harvest.conf)
-h Show this help
-v Show verbose output, and if starting pollers, start them with verbose
logging
EXAMPLES:
Check status of all pollers:
netapp-manager -status
Start all pollers that are not already running:
netapp-manager -start
Stop all pollers with poller name that includes netapp:
netapp-manager -stop -poller netapp
Restart all pollers with poller name that includes netapp at group AMS:
netapp-manager -restart -poller netapp -group AMS
Import all dashboards from the grafana subdirectory to Grafana:
netapp-manager -import
Start all enabled pollers in the netapp-harvest.conf configuration file that are not already running:
[root@host ~]# /opt/netapp-harvest/netapp-manager -start
STATUS POLLER GROUP
############### #################### ##################
[STARTED] blob1 nl
[STARTED] blob2 nl
[STARTED] bumblebee ny
[STARTED] cmdemo uk
Stop all pollers in the netapp-harvest.conf configuration file that are running:
[root@host ~]# /opt/netapp-harvest/netapp-manager -stop
STATUS POLLER GROUP
############### #################### ##################
[STOPPED] blob1 nl
[STOPPED] blob2 nl
[STOPPED] bumblebee ny
[STOPPED] cmdemo uk
Restart all pollers with blob in the poller name in the netapp-harvest.conf configuration file:
[root@host ~]# /opt/netapp-harvest/netapp-manager -restart -poller blob
STATUS POLLER GROUP
############### #################### ##################
[STOPPED] blob1 nl
[STOPPED] blob2 nl
[STARTED] blob1 nl
[STARTED] blob2 nl
Import dashboards found in the grafana subdirectory to the Grafana instance specified in the
netapp-harvest.conf configuration file:
root@host:/opt/netapp-harvest# /opt/netapp-harvest/netapp-manager -import
[OK ] Will import dashboards to http://localhost:3000
[OK ] Imported dashboard [db_netapp.json] successfully
[OK ] Imported dashboard [db_netapp-dashboard-7-mode-node.json] successfully
continues…
Note: The last line with ‘Startup complete’ informs you that the poller has completed startup tasks and is
now entering the instance and data polling loops.
The parameter value pairs (example: hostname = 10.64.31.110) specify the parameters to use in the
configuration. When parsing a parameter = value pair, the parameter will be set to the non-whitespace
characters prior to the first = character, and the value will be set to the remainder of the line with leading
and trailing whitespace characters removed. Do not enclose your values in quotes, as the quotes
themselves would be included in the value.
Blank lines, and any line whose first non-whitespace character is a ‘#’, will be ignored.
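These parsing rules can be mimicked with sed for a single line (a sketch of the described behavior, not Harvest’s actual code):

```shell
# Split 'parameter = value': parameter is everything before the first '=',
# minus whitespace; value is the rest with leading/trailing whitespace removed
line='hostname = 10.64.31.110   '
param=$(printf '%s' "$line" | sed 's/[[:space:]]*=.*$//')
value=$(printf '%s' "$line" | sed -e 's/^[^=]*=[[:space:]]*//' -e 's/[[:space:]]*$//')
echo "param=[$param] value=[$value]"
```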
The table below lists all possible parameters for the ‘global’ section:
Parameter        Description                                               Example or possible values
                                                                           (default in bold)

grafana_url      Used to connect with the Grafana server to import         http://server51:3000
                 and export dashboards. Enter the URL as you would to
                 access the server from your browser.

normalized_xfer  The raw counters in Data ONTAP and OCUM use varying       b_per_sec
                 units for data rates and capacities. To ease display      kb_per_sec
                 and analysis Harvest normalizes these to a unit of        mb_per_sec
                 bytes, kilobytes, megabytes, or gigabytes depending       gb_per_sec
                 on the value of this parameter. For the provided
                 dashboards, normalize all FILER pollers to mb_per_sec
                 and OCUM to gb_per_sec (which makes all capacity info
                 in GB; ignore what _per_sec might imply).

normalized_time  The raw counters in Data ONTAP use varying time units.    microsec
                 To ease display and analysis Harvest normalizes these     millisec
                 to a unit of microsec, millisec, or sec. For the          sec
                 provided dashboards, normalize all FILER pollers to
                 millisec.

graphite_root    The Graphite metrics path prefix for the poller. It       default
                 can contain hardcoded strings as well as poller           ntap.performance.NL.{display_name}
                 section variables enclosed in {}                          etc.
                 (i.e. {parameter_name}).

display_name     The name of the monitored system to use in the path       cluster99
                 of submitted metrics. If this parameter does not exist    ams4nas1a
                 or is blank it will automatically be populated with       etc.
                 the name from the poller section header.
                 In most cases it is easiest to populate the section
                 header with the short hostname of the device and not
                 use this parameter. But if using multiple pollers for
                 a single monitored system, the section headers must be
                 unique; you can then use this parameter to locate
                 metrics from both pollers under the same system name
                 in the Graphite metrics path.

group            The group of the monitored system to use in the path      VDC2
                 of submitted metrics. It is preferred to use a short      AWS-NW
                 name to consume less screen space. Spaces are not         SFO8
                 allowed; use a dash or underscore instead if needed.
                 In an earlier release of Harvest this parameter was
                 named site; if site is used it will be aliased to
                 group internally.
                 The default graphite_root uses group to help organize
                 monitored systems.
                 The default Grafana template-driven dashboards also
                 require a specific metrics path that includes group.
                 If you have a small installation you can of course
                 place all monitored systems in a single named group.
                 For a large installation only one level of group is
                 still recommended, to maintain compatibility with the
                 default Grafana dashboards.
10.3 OCUM capacity metrics with the OPM performance metrics data provider
The default OCUM template submits metrics in a hierarchy that is parallel to the performance metrics
submitted by Harvest and is compatible with the default Grafana dashboards provided by Harvest.
If it is desired to use the OPM external data provider feature for performance data, and Harvest for
capacity metrics, an example custom collection template can be used that creates a hierarchy that is
parallel to OPM-submitted metrics.
The template to use only if you use the OPM ‘external data provider’ feature for performance metrics is
named template/example/ocum-opm-hierarchy.conf. To use it, copy the file to
template/ocum-opm-hierarchy.conf, and then add a poller section in the format:
[INSERT_OCUM_HOSTNAME_HERE]
hostname = INSERT_IP_ADDRESS_OR_HOSTNAME_OF_OCUM_HOSTNAME
group = INSERT_GROUP_IDENTIFIER_HERE
host_type = OCUM
data_update_freq = 900
normalized_xfer = gb_per_sec
template = ocum-opm-hierarchy.conf
graphite_root = netapp-capacity.Clusters.{display_name}
graphite_meta_metrics_root = netapp-capacity-poller.INSERT_OCUM_HOSTNAME_HERE
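The {display_name} variable above is substituted from the poller section at runtime; the expansion can be illustrated with sed (a sketch, not Harvest’s code):

```shell
# Expand a {parameter_name} variable in a graphite_root template
display_name='cluster99'
graphite_root='netapp-capacity.Clusters.{display_name}'
echo "$graphite_root" | sed "s/{display_name}/$display_name/"
```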
10.4 perf-counters-utility
The perf-counters-utility, located in the util subdirectory, can be used to browse the counter manager
system in Data ONTAP via the API. It is useful when troubleshooting, or when preparing to collect a new
object or additional counters, to understand the data format and values that Harvest will receive and
parse.
It connects to a live Data ONTAP system and displays:
• objects
• available instances [for a provided object]
• available counters [for a provided object]
• counter data in raw format [for a provided object and instance]
Command-line help for it is shown by running the program with no arguments:
Usage: perf-counters-utility -host <host> -user <user> -pass <pass> [-o|-in|-c|-d] [-f <family>]
[-n <name> | -u <uuid>]
PURPOSE:
Collect performance data from Data ONTAP's cDOT performance counter
subsystem that uses a hierarchy of: object-instance-counter
ARGUMENTS:
Required:
-host Hostname to connect with
-user Username to connect with
-pass Password to connect with
Required (one of):
-o Display object list
-in Display instance list
-c Display counter list
-d Output counter data
Required with -in, -c, -d:
-f <family> Object family
Required (one of) with -d:
-n <name> Name of instance to graph
-u <uuid> UUID of instance to graph
Optional:
-h Output this help text
-v Output verbose output to stdout
EXAMPLE:
Display object list:
perf-counters-utility -host sdt-cdot1 -user admin -pass secret -o
Display instance list:
perf-counters-utility -host sdt-cdot1 -user admin -pass secret -in -f volume
Display counter list:
perf-counters-utility -host sdt-cdot1 -user admin -pass secret -c -f volume
Display counter data for specific instance:
perf-counters-utility -host sdt-cdot1 -user admin -pass secret -d -f volume -n vol0
Example output once opened in Excel and navigating to the processor tab:
12.1 “Can't locate NaServer.pm in @INC (you may need to install the NaServer
module)”
If a module cannot be loaded by netapp-worker, this will not be reported in the logfile. You may see that
when starting a poller via netapp-manager it immediately stops. If you start the poller via netapp-worker
directly, the output is:
[root@host ~]# /opt/netapp-harvest/netapp-worker -poller cluster99
Can't locate NaServer.pm in @INC (you may need to install the NaServer module) (@INC contains:
/opt/netapp-harvest/lib /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2
/usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl
.) at ./netapp-worker line 35.
BEGIN failed--compilation aborted at ./netapp-worker line 35.
[root@host ~]#
It stopped immediately because the NaServer module couldn’t be located. The NaServer.pm file (and
others from the SDK) were looked for in the lib subdirectory but were not found. To resolve, ensure that
the SDK files are installed correctly in the lib subdirectory.
To resolve, check the configuration file section for the poller and ensure the parameter is listed (or is
inherited from the [default] section).
The system will retry indefinitely to handle transient network failures, but if the hostname is incorrect or
DNS is not configured it will never succeed. Resolve the issue by adding the hostname to DNS or the
/etc/hosts file.
There are various reasons why Harvest cannot connect (incorrect IP address, a firewall blocking
communication, a mistake in routing, the cluster LIF not reachable, etc.), so test a direct connection
to the port.
Use this test to connect to the port using nc (netcat):
[root@host ~]# nc -vz -w 5 cluster99.nltestlab.hq.netapp.com 443
nc: connect to 192.168.100.33 port 443 (tcp) timed out: Operation now in progress
The above shows there is no reachability to that cluster on port 443/tcp. In this case we determined that
the firewall policy on the cluster was limiting SSL access to a specific management host [and it was not
our Harvest host]. We added the harvest poller host to have access and tried again:
[root@host ~]# nc -vz -w 5 cluster99.nltestlab.hq.netapp.com 443
nc: connect to 192.168.100.33 port 443 (tcp) failed: Connection refused
Better: we can now reach the system on port 443/tcp, but it rejects access, which means there is no
service listening on the port. We checked the cluster and found the SSL server was disabled. We enabled
it and tried again:
[root@host ~]# nc -vz -w 5 cluster99.nltestlab.hq.netapp.com 443
Connection to 192.168.100.33 443 port [tcp/https] succeeded!
With the above success Harvest should now be able to connect to the SSL port.
Verify the configured credentials (username/password, or SSL certificate details) are correct.
12.6 “Update of system-info cache DOT Version failed with reason: No response
received from server”
With Data ONTAP 7-mode TLS must be enabled. See the Enabling TLS section for more.
12.7 “[lun] data-list poller next refresh at [2015-07-28 02:17:00] not scheduled
because it occurred in the past”
Each object will update according to the configured data_update_freq. The poller updates each
object (lun, volume, processor, etc.) in a serialized manner. If the cumulative time to collect all object
types is greater than the data_update_freq, then on some polls some objects will be skipped and
this message logged. If you see a message like this sporadically you may decide it is acceptable to miss a few polls
here and there. If you see it regularly however you can investigate in the Grafana dashboard “NetApp
Detail: Harvest Poller” to see collection times of each object type and analyze what to do.
If a single object is taking too long you can opt to not collect it, or to separate collection of it into a different
dedicated poller, potentially with a different data_update_freq.
If many object types are taking a long time and collection is over a WAN, you can set up a Harvest
collector local to the monitored system to reduce the impact of WAN latency. If the collection is not over a
WAN, consider configuring a less frequent data_update_freq or splitting collection of the objects
across multiple templates and multiple pollers.
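As a sketch of the dedicated-poller approach (the section names below are hypothetical, and the exact keys supported depend on your Harvest 1.x version, so treat this purely as an illustration), the split might look like this in netapp-harvest.conf:

```
[default]
# Most pollers inherit this default collection interval (seconds)
data_update_freq = 60

[cluster99-slow-objects]
# Hypothetical dedicated poller for a slow object type,
# collecting less frequently to stay within the collection budget
data_update_freq = 300
```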
It is also possible that you are using the NetApp Management SDK 5.4 and reverse hostname resolution
is failing after a timeout; see Setting HTTP/1.0 because reverse hostname resolution (IP -> hostname)
fails.
12.8 Setting HTTP/1.0 because reverse hostname resolution (IP -> hostname) fails.
To enable HTTP/1.1, ensure reverse hostname resolution succeeds.
The NetApp Management SDK 5.4 introduced support for HTTP/1.1. This version of HTTP requires a
Host: header in each request, and the SDK performs a gethostbyaddr lookup on the hostname defined in
the poller configuration. If this reverse hostname lookup fails, Harvest detects it and forces HTTP/1.0,
avoiding an HTTP protocol request error (400 Bad Request). However, SDK 5.4 still performs the
gethostbyaddr lookup, and if it times out instead of succeeding or failing quickly, it can extend each API
request by several seconds. This issue is logged as NetApp Bug ID 935453.
To resolve, ensure forward and reverse hostname resolution is possible (DNS entries or an /etc/hosts file
entry), or use an SDK version prior to 5.4, such as 5.3.1.
secs         Number of seconds of activity summarized in the other fields; typically 14400 (4 hours)
api_time     Cumulative seconds the poller spent waiting on the API to respond
plugin_time  Cumulative seconds the poller spent waiting on plugins to process
skips        Cumulative number of polls skipped because the previous poll was still running
An example:
[2015-07-28 20:50:52] [NORMAL ] Poller status: status, secs=14400, api_time=1806, plugin_time=55,
metrics=459594, skips=0, fails=0
The poller was active 1806 + 55 = 1861 seconds out of 14400 seconds, or about 13% of the time. This
indicates there is plenty of ‘budget’: the poller spends most of its time idle, waiting for the next poll.
A total of 459594 metrics was submitted, or about 1915 per minute on average. This information can be
useful when sizing Graphite, to understand the quantity of updates the poller submits.
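The arithmetic above can be reproduced from a status line with a short shell sketch (the log line is the example from this section):

```shell
# Parse a "Poller status" log line and compute the busy percentage.
line='[2015-07-28 20:50:52] [NORMAL ] Poller status: status, secs=14400, api_time=1806, plugin_time=55, metrics=459594, skips=0, fails=0'
secs=$(echo "$line"   | grep -o 'secs=[0-9]*'        | cut -d= -f2)
api=$(echo "$line"    | grep -o 'api_time=[0-9]*'    | cut -d= -f2)
plugin=$(echo "$line" | grep -o 'plugin_time=[0-9]*' | cut -d= -f2)
busy=$((api + plugin))
pct=$(awk -v b="$busy" -v s="$secs" 'BEGIN { printf "%.0f", 100 * b / s }')
echo "busy ${busy}s of ${secs}s (~${pct}%)"
```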
Example when the quantity of data is higher than the maximum the API can accept:
[2017-04-20 17:09:04] [ERROR ] [OCI Output Plugin]: HTTP POST error Received reply: {
"errorCode": "NOT_AUTHORIZED", "errorMessage": "Integration agent 'c7fdf0cd-b6bb-48c0-9d97-
388b94412a82' cannot report more than 8 reports per 60000ms, reported 9 in last 58978ms." }
Action: Reduce the number of namespaces being pushed into NetApp OnCommand Insight or configure it
to accept more reports per minute.
Example when Harvest sends an object to a namespace that has not been added to OCI:
[2017-06-06 17:39:31] [ERROR ] [OCI Output Plugin] HTTP POST error Received reply: {
"errorCode": "NOT_AUTHORIZED", "errorMessage": "Integration agent not authorized to add
data to timeseries harvest_hostadapter"}
Action: Use the netapp-oci-setup utility to add the timeseries name. See Updating a NetApp OnCommand
Insight agent for more information.
If you see this error, make sure the OCI server is available at the specified URL and accessible from the
Harvest machine.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or
recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or
observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this
information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the
customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information
contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2017 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of
NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are trademarks or registered
trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are
trademarks or registered trademarks of their respective holders and should be treated as such.