ITM Integration with OMNIbus Multi-Tier Architecture
V1.0
Stefania Oliverio & Randall Allen
Introduction
Integrating ITM and OMNIbus with a multi-tier architecture presents the administrator with challenges that are
not encountered when integrating ITM with a simple, single-tier OMNIbus installation. The flow of events from
one layer to the next requires configuration steps specific to each layer and each OMNIbus system so that events
are properly synchronized (shared, updated, deleted, and so on) between the ITM and OMNIbus installations.
Additional steps are needed when OMNIbus failover pairs are in the mix as well.
This paper describes the main concepts and steps necessary to configure event sharing between IBM Tivoli
Monitoring (ITM) and a Tivoli Netcool/OMNIbus multi-tier architecture. These instructions assume that ITM
6.2.3 or later and OMNIbus 7.3.1 have been deployed in your environment and that the multi-tier
architecture has been configured with failover capabilities as described herein.
While this paper covers all of the steps needed to configure this integration, it is by no means a complete guide and
should be used in addition to the published OMNIbus and ITM documentation.
Environment Configuration
This white paper is based on a real deployment, which can be taken as an example, and consisted primarily of
virtual machines (VMs): the Tivoli Monitoring components were installed on VMs in the domain specified later as
domain1, and the OMNIbus multi-tier architecture was installed on VMs in the Integration Test Enablement
(ITE) environment specified later as domain2. The Tivoli Monitoring agents used to generate situation events were
installed on a mix of physical machines and VMs in the domain1 domain.
Hostname: colp1 (SUSE Linux Enterprise Server 10)
Installed components:
- Primary Collection ObjectServer: COL_P_1
- Uni-directional ObjectServer Gateway that connects the Primary Collection server to the Aggregation layer: C_TO_A_GATE_P_1
- Master probe: nco_p_tivoli_eif

Hostname: colb1 (SUSE Linux Enterprise Server 10)
Installed components:
- Backup Collection ObjectServer: COL_B_1
- Uni-directional ObjectServer Gateway that connects the Backup Collection server to the Aggregation layer: C_TO_A_GATE_B_1
- Slave probe: nco_p_tivoli_eif

Hostname: aggp (Red Hat Enterprise Linux Server release 5.7)
Installed components:
- Primary Aggregation ObjectServer: AGG_P

Hostname: aggb (Red Hat Enterprise Linux Server release 5.7)
Installed components:
- Backup Aggregation ObjectServer: AGG_B
- Bi-directional ObjectServer gateway: AGG_GATE

Hostname: dis1 (Windows Server 2008 R2)
Installed components:
- Display ObjectServer: DIS_1
- Uni-directional ObjectServer Gateway that connects the Display server to the Aggregation layer: A_TO_D_GATE_1

Hostname: dis2 (Windows Server 2008 R2)
Installed components:
- Backup Display ObjectServer: DIS_2
- Uni-directional ObjectServer Gateway that connects the Display server to the Aggregation layer: A_TO_D_GATE_2
The following diagram details the standard multi-tier architecture as found in the IBM Netcool/OMNIbus
InfoCenter:
[Diagram: the Display ObjectServers (DIS_1, DIS_2) are fed through A_TO_D_GATE_1 and A_TO_D_GATE_2 from the Aggregation pair (AGG_P, AGG_B), which in turn receives events through C_TO_A_GATE_P_1 and C_TO_A_GATE_B_1 from the Collection ObjectServers (COL_P_1, COL_B_1).]
All OMNIbus components were installed using the netcool user ID, which belongs to the ncoadmin group:
uid=500(netcool) gid=500(ncoadmin) groups=0(root),500(ncoadmin)
Components were installed under the default OMNIbus installation directory:
/opt/IBM/tivoli/netcool
For more information on the OMNIbus Multi-Tier Architecture consult the OMNIbus references or InfoCenter:
http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/index.jsp?topic=%2Fcom.ibm.netcool_OMNIbus.doc_7.3.1
%2FOMNIbus%2Fwip%2Finstall%2Fconcept%2Fomn_ins_multitieredhighavailability.html
[Diagram: numbered flow of situation events among the Hub TEMS, the Collection layer, the Aggregation layer (triggers and the Situation Update Forwarder), the Display layer, and the TEPS database. Steps 1 through 7 are shown in the diagram; the final steps are:]
8. Situation Update Forwarder sends changes to Hub TEMS using a SOAP request
9. Status changes are propagated through the Tivoli Enterprise Portal Server and shown in the Tivoli
Enterprise Portal
10. The complete list of open, acknowledged, and de-acknowledged events is shown in the Situation Event
Console workspace of the Tivoli Enterprise Portal browser or desktop client.
Configuring the omni.dat file (aggp, aggb, colp1, colb1, dis1, & dis2)
Each of the OMNIbus installations should use the same omni.dat file (detailed below). After configuring this file
on one system, copy it to each of the other five machines on which the different OMNIbus components have been
installed; a copy sketch follows the listing. (For more information about the omni.dat file or OMNIbus interfaces,
consult the Netcool/OMNIbus InfoCenter.)
The following is the omni.dat configuration file (/opt/IBM/tivoli/netcool/etc/omni.dat) used in this environment:
#
# omni.dat file as prototype for interfaces file
#
# Ident: $Id: omni.dat 1.5 1999/07/13 09:34:20 chris Development $
#
[AGG_P]
{
Primary: aggp 4100
}
[AGG_B]
{
Primary: aggb 4100
}
[AGG_V]
{
Primary: aggp 4100
Backup: aggb 4100
}
[AGG_GATE]
{
Primary: aggb 4300
}
[COL_P_1]
{
Primary: colp1 4100
}
[COL_B_1]
{
Primary: colb1 4100
}
[DIS_1]
{
Primary: dis1 4100
}
[DIS_2]
{
Primary: dis2 4100
}
[C_TO_A_GATE_P_1]
{
Primary: colp1 4300
}
[C_TO_A_GATE_B_1]
{
Primary: colb1 4300
}
[A_TO_D_GATE_1]
{
Primary: dis1 4300
}
[A_TO_D_GATE_2]
{
Primary: dis2 4300
}
[NCO_PA]
{
Primary: aggb 4200
}
[NCO_PROXY]
{
Primary: aggb 4400
}
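Once configured on aggp, the file can be pushed to the other UNIX hosts with scp; the following is a minimal sketch, assuming the netcool user has ssh access and the default installation path (for the Windows display hosts you may need to transfer the file by other means or recreate the entries with the Server Editor):

for host in aggb colp1 colb1; do
    scp /opt/IBM/tivoli/netcool/etc/omni.dat netcool@${host}:/opt/IBM/tivoli/netcool/etc/omni.dat
done
# On each UNIX host, regenerate the interfaces files from omni.dat:
/opt/IBM/tivoli/netcool/bin/nco_igen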
Please note the [AGG_V] definition in the file. It is the virtual Aggregation pair definition. All incoming
Collection Gateway connections and all outgoing Display Gateway connections connect to the virtual
Aggregation pair AGG_V so that they can fail over to the backup Aggregation ObjectServer and fail back
once the primary Aggregation ObjectServer becomes available again.
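For example, in this design the collection-to-aggregation gateways write to AGG_V rather than to AGG_P directly. A minimal excerpt of what C_TO_A_GATE_P_1.props might contain (property names should be verified against the multi-tier gateway templates shipped with OMNIbus):

Gate.Reader.Server : 'COL_P_1'
Gate.Writer.Server : 'AGG_V'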
Probes
Because the two domains in this environment do not share DNS resolution, the hosts files on the Hub TEMS and
Hot Standby Hub TEMS need to include entries for the systems where the probes are running:

9.x.x.1   colp1.domain2.ibm.com      colp1       # external IP
9.x.x.2   colb1.domain2.ibm.com      colb1       # external IP

and the hosts files on the Aggregation servers, where the ITM Event Synchronization components are running, need
entries for the two Hub TEMS systems:

9.x.x.3   nc049044.domain1.ibm.com   nc049044
9.x.x.4   nc049043.domain1.ibm.com   nc049043
Since no DNS resolution is in place, the hosts file on each OMNIbus system needs to list all of the machines;
this is also why, later in this document, you see commands run with short hostnames:

192.x.x.3   aggp.domain2.ibm.com    aggp
192.x.x.4   aggb.domain2.ibm.com    aggb
192.x.x.1   colp1.domain2.ibm.com   colp1
192.x.x.2   colb1.domain2.ibm.com   colb1
192.x.x.5   dis1.domain2.ibm.com    dis1
192.x.x.6   dis2.domain2.ibm.com    dis2
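With omni.dat and the hosts files in place, name resolution and ObjectServer reachability can be sanity-checked from any host with the nco_ping utility shipped with OMNIbus; a quick sketch (the ObjectServers must already be running):

/opt/IBM/tivoli/netcool/omnibus/bin/nco_ping AGG_P
/opt/IBM/tivoli/netcool/omnibus/bin/nco_ping AGG_V
# nco_ping reports whether the named interface from omni.dat can be reached.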
    ProcessType = PA_AWARE
}
nco_process 'bigate'
{
    Command '$OMNIHOME/bin/nco_g_objserv_bi -propsfile $OMNIHOME/etc/AGG_GATE.props' run as 500
    Host        = 'aggb'
    Managed     = True
    RestartMsg  = '${NAME} running as ${EUID} has been restored on ${HOST}.'
    AlertMsg    = '${NAME} running as ${EUID} has died on ${HOST}.'
    RetryCount  = 0
    ProcessType = PA_AWARE
}
#
# List of Services
#
# NOTE: To ensure that the service is started automatically, change the
#       "ServiceStart" attribute to "Auto".
#
nco_service 'Core'
{
    ServiceType  = Master
    ServiceStart = Auto
    process 'MasterObjectServer' NONE
    process 'bigate' NONE
}
#
# This service should be used to store processes that you want to temporarily
# disable. Do not change the ServiceType or ServiceStart settings of this
# process.
#
nco_service 'InactiveProcesses'
{
    ServiceType  = Non-Master
    ServiceStart = Non-Auto
}
#
# ROUTING TABLE
#
# 'user'     - (optional) only required for secure mode PAD on target host
#              'user' must be member of UNIX group 'ncoadmin'
# 'password' - (optional) only required for secure mode PAD on target host
#              use nco_pa_crypt to encrypt.
#
nco_routing
{
    host 'omnihost' 'NCO_PA' 'user' 'password'
}
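As the routing table comments note, nco_pa_crypt can be used to avoid storing a plain-text password when the process agent runs in secure mode; a sketch:

/opt/IBM/tivoli/netcool/omnibus/bin/nco_pa_crypt passw0rd
# Prints an encrypted string; paste it in place of 'password' in the
# nco_routing entry above.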
8. Next, edit the ObjectServer props files: $OMNIHOME/etc/AGG_P.props on aggp and $OMNIHOME/etc/AGG_B.props on
aggb. Edit the PA properties, specifying the operating system user and password of the user who will run
process control, and then save the file:
PA.Name:     'NCO_PA'
PA.Password: 'passw0rd'
PA.Username: 'netcool'
9. Execute the following command to start the process control daemon:
/opt/IBM/tivoli/netcool/omnibus/bin/nco_pad
The following is a sample of the normal nco_pad output displayed on the console:

[netcool@aggp /]$ /opt/IBM/tivoli/netcool/omnibus/bin/nco_pad
Netcool/OMNIbus Process Agent Daemon - Version 7.3.1
Netcool/OMNIbus PA API Library Version 7.3.1
Sybase Server-Library Release: 15.0
Server Settings :
Name of server               : NCO_PA
Path of used log file        : /opt/IBM/tivoli/netcool/omnibus/log/NCO_PA.log
Configuration File           : /opt/IBM/tivoli/netcool/omnibus/etc/nco_pa.conf
Child Output File            : /dev/null
Maximum logfile size         : 1024
Thread stack size            : 69632
Message Pool size            : 45568
PID Message Pool size        : 50
Rogue Process Timeout        : 30
Truncate Log                 : False
Instantiate server to daemon : True
Internal API Checking        : False
No Configuration File        : False
Start Auto-start services    : True
Authentication System        : UNIX
Trace Net library            : False
Trace message queues         : False
Trace event queues           : False
Trace TDS packets            : False
Trace mutex locks            : False
Host DNS name                : aggp
PID file (from $OMNIHOME)    : ./var/nco_pa.pid
Kill Process group           : False
Secure Mode                  : False
Administration Group Name    : ncoadmin
Forking to a Daemon Process.............
10. After the process control daemon has started, check that it is running correctly by running the following
command from the $OMNIHOME/bin directory:
/opt/IBM/tivoli/netcool/omnibus/bin/nco_pa_status -server NCO_PA -user netcool -password passw0rd
11. After you have confirmed that NCO_PA is running properly, you can stop and start the OMNIbus
processes using process control, as sketched below.
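For example, the bigate gateway defined in the sample nco_pa.conf above can be stopped and restarted with the standard nco_pa_stop and nco_pa_start utilities (a sketch; check the utilities' options in the OMNIbus documentation for your release):

/opt/IBM/tivoli/netcool/omnibus/bin/nco_pa_stop  -server NCO_PA -user netcool -password passw0rd -process bigate
/opt/IBM/tivoli/netcool/omnibus/bin/nco_pa_start -server NCO_PA -user netcool -password passw0rd -process bigate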
1. The ITM Event Synchronization component is included in the IBM Tivoli Monitoring Tools offering
available on Passport Advantage.
2. Untar the tools product package and copy the ESync2300Linux.bin file from the ITM Tools tec directory
to the desired software images directory on both the aggp and aggb systems.
NOTE: The ITM Event Synchronization component can be installed using the same user that OMNIbus was installed
with, or as root. Note, however, that if the executables (in step 11) have been run as root, complications may
arise if you subsequently try to run them as a non-root user, because certain files will then be owned by root.
5. The panel for License Agreement is displayed. Accept the License and click Next.
6. On the panel to select installation directory leave the default and click Next.
7. In the next 2 panels leave the defaults (or increase debugging level for logs if needed) and click Next.
8. Enter the Tivoli Monitoring TEMS values: the hostname (both short and long names) and the operating system
user ID and password used to log in to it. Click Add and then Next.
NOTE: You may need to use the fully qualified domain name for your Tivoli Monitoring TEMS server
as the Event Synchronization component may not be able to communicate with the TEMS system
otherwise. If you are unsure of which name to use you should add entries for both the fully qualified
domain name and simple hostname.
10. Once the install has completed, a panel with the installation result is displayed. Click Finish.
11. The ITM Event Synchronization component has now been installed. Use the following commands to
start, test, and stop the ITM Event Synchronization component when needed:
To Start the Event Sync component:
/opt/IBM/SitForwarder/bin/startSUF.sh
To test that the TEMS configuration has been done correctly, use the command:
/opt/IBM/SitForwarder/bin/test.sh
A successful test should output the following:
Successfully connected to Tivoli Enterprise Monitoring Server nc049043.ibm.com
To stop ITM Event Synchronization component:
/opt/IBM/SitForwarder/bin/stopSUF.sh
#######################################################################
#
#       CUSTOM alerts.status FIELD MAPPINGS GO HERE
#
#######################################################################
 'ITMStatus'      = '@ITMStatus',
 'ITMDisplayItem' = '@ITMDisplayItem',
 'ITMEventData'   = '@ITMEventData',
 'ITMTime'        = '@ITMTime',
 'ITMHostname'    = '@ITMHostname',
 'ITMPort'        = '@ITMPort',
 'ITMIntType'     = '@ITMIntType',
 'ITMResetFlag'   = '@ITMResetFlag',
 'ITMSitType'     = '@ITMSitType',
 'ITMThruNode'    = '@ITMThruNode',
 'ITMSitGroup'    = '@ITMSitGroup',
 'ITMSitFullName' = '@ITMSitFullName',
 'ITMApplLabel'   = '@ITMApplLabel',
 'ITMSitOrigin'   = '@ITMSitOrigin',
 'TECHostname'    = '@TECHostname',
 'TECFQHostname'  = '@TECFQHostname',
 'TECDate'        = '@TECDate',
 'TECRepeatCount' = '@TECRepeatCount',
 'ServerName'     = '@ServerName' ON INSERT ONLY,
 'ServerSerial'   = '@ServerSerial' ON INSERT ONLY
);
CREATE MAPPING JournalMap
(
 ...
);
CREATE MAPPING DetailsMap
(
 ...
);
CREATE MAPPING IducMap
(
 ...
);
#######################################################################
# NOTE: If replication of the user related system tables is required, uncomment
# the table mapping definitions below. The associated table replication
# definitions will also need to be uncommented.
#######################################################################
CREATE MAPPING SecurityUsersMap
(
 ...
);
CREATE MAPPING SecurityGroupsMap
(
 ...
);
CREATE MAPPING SecurityRolesMap
(
 ...
);
CREATE MAPPING SecurityRoleGrantsMap
(
 ...
);
CREATE MAPPING SecurityPermissionsMap
(
 ...
);
#######################################################################
# NOTE: If replication of desktop related system tables is required, uncomment
# the replication definitions below. The associated maps will also need to be
# uncommented.
#######################################################################
CREATE MAPPING ToolsMenusMap
(
 ...
);
CREATE MAPPING ToolsMenuItemsMap
(
 ...
);
CREATE MAPPING ToolsActionsMap
(
 ...
);
CREATE MAPPING ToolsActionAccessMap
(
 ...
);
CREATE MAPPING ToolsMenuDefsMap
(
 ...
);
CREATE MAPPING ToolsPromptDefsMap
(
 ...
);
CREATE MAPPING AlertsConversionsMap
(
 ...
);
CREATE MAPPING AlertsColVisualsMap
(
 ...
);
CREATE MAPPING AlertsColorsMap
(
 ...
);
#######################################################################
# NOTE: If replication of the master.servergroups table is required, uncomment
# the table mapping definitions below. The associated table replication
# definitions will also need to be uncommented.
#######################################################################
CREATE MAPPING MasterServergroupsMap
(
 ...
);
#######################################################################
#
#       CUSTOM table mappings
#
#######################################################################
CREATE MAPPING ItmLoopbackMap
(
 'Identifier' = '@Identifier' ON INSERT ONLY,
 'itmstatus'  = '@itmstatus'
);
CREATE MAPPING ItmHeartbeatMap
(
 'Identifier'        = '@Identifier' ON INSERT ONLY,
 'LastOccurrence'    = '@LastOccurrence',
 'Agent'             = '@Agent' ON INSERT ONLY,
 'AlertGroup'        = '@AlertGroup' ON INSERT ONLY,
 'Node'              = '@Node',
 'NodeAlias'         = '@NodeAlias',
 'ITMSitOrigin'      = '@ITMSitOrigin',
 'Manager'           = '@Manager',
 'Class'             = '@Class' ON INSERT ONLY,
 'HeartbeatInterval' = '@HeartbeatInterval',
 'ExpirationTime'    = '@ExpirationTime',
 'Type'              = '@Type'
);
### This is needed only if using cache for cleared sampled events
CREATE MAPPING ItmEventCacheMap
(
 'Identifier'     = '@Identifier' ON INSERT ONLY,
 'Node'           = '@Node',
 'NodeAlias'      = '@NodeAlias',
 'AlertGroup'     = '@AlertGroup' ON INSERT ONLY,
 'AlertKey'       = '@AlertKey',
 'Summary'        = '@Summary',
 'ExtendedAttr'   = '@ExtendedAttr',
 'ITMEventData'   = '@ITMEventData',
 'ITMTime'        = '@ITMTime',
 'ITMHostname'    = '@ITMHostname',
 'ITMThruNode'    = '@ITMThruNode',
 'ITMDisplayItem' = '@ITMDisplayItem',
 'ITMSitOrigin'   = '@ITMSitOrigin',
 'InsertTime'     = '@InsertTime'
);
7. Run the nco_sql command to create the objects defined in the itm_sync.sql file:
[netcool@aggp /]$ /opt/IBM/tivoli/netcool/omnibus/bin/nco_sql -user root -server AGG_P < /opt/IBM/SitForwarder/omnibus/itm_sync.sql
Password:
The default password for the internal OMNIbus root user is blank.
On the aggb system, run the same command, changing the OMNIbus ObjectServer name from AGG_P to AGG_B.
8. Run the nco_sql command to create the objects defined in the itm_event_cache.sql file:
[netcool@aggp /]$ /opt/IBM/tivoli/netcool/omnibus/bin/nco_sql -user root -server AGG_P < /opt/IBM/SitForwarder/omnibus/itm_event_cache.sql
Password:
The default password for the internal OMNIbus root user is blank.
On the aggb system, run the same command, changing the OMNIbus ObjectServer name from AGG_P to AGG_B.
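To confirm that the SQL applied cleanly, you can list the ObjectServer catalog tables with nco_sql; a minimal sketch, assuming the default blank root password (the catalog.tables system table lists the tables defined in the ObjectServer):

printf "select TableName from catalog.tables;\ngo\n" | /opt/IBM/tivoli/netcool/omnibus/bin/nco_sql -user root -password '' -server AGG_P

The tables created by the ITM scripts (for example, the event cache table created by itm_event_cache.sql) should appear in the output.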
1. On each Windows machine where the Display ObjectServers have been installed, run the Netcool
Administrator.
2. When the Administrator console opens, expand the Navigator tree on the left of the console, select the
ObjectServer you need to work with (AGG_P in this case), right-click, and select the
Connect as option.
3. Enter root for the Username, leave the Password field blank (or enter your ObjectServer root
password if one has been configured), and click OK.
6. Select the agg_deduplication trigger, right-click, and select Edit Trigger:
2. Copy the SQL collection file from the aggp machine, where the ITM Event Synchronization component has been
installed, to the colp1 machine using the scp command:
scp /opt/IBM/SitForwarder/omnibus/multitier/collection_itm.sql netcool@colp1:/tmp
3. Run the nco_sql command to create the objects defined in the collection_itm.sql file:
[netcool@colp1]$ /opt/IBM/tivoli/netcool/omnibus/bin/nco_sql -user root -server COL_P_1 < /tmp/collection_itm.sql
Password:
The default password for the internal OMNIbus root user is blank.
On the colb1 system, run the same commands, changing the OMNIbus ObjectServer name from
COL_P_1 to COL_B_1.
The steps below must be executed for both probes, installed on the colp1 and colb1 systems.
1. Log in to the system as the netcool user (if not already done).
2. Copy the rules file from the aggp machine, where the ITM Event Synchronization component has been installed,
to the colp1 machine under the probes directory using the scp command:
scp /opt/IBM/SitForwarder/omnibus/itm_event.rules netcool@colp1:/opt/IBM/tivoli/netcool/omnibus/probes/linux2x86
3. Uncomment the include statement for itm_event.rules in the tivoli_eif.rules file under the
/opt/IBM/tivoli/netcool/omnibus/probes/linux2x86 directory (see the sketch below).
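Once uncommented, the include statement in tivoli_eif.rules should look similar to the line below (the exact path in the shipped file may differ), and the edited rules file can then be checked with the nco_p_syntax utility:

include "/opt/IBM/tivoli/netcool/omnibus/probes/linux2x86/itm_event.rules"

/opt/IBM/tivoli/netcool/omnibus/probes/nco_p_syntax -rulesfile /opt/IBM/tivoli/netcool/omnibus/probes/linux2x86/tivoli_eif.rules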
To stop the processes here, run ps -ef | grep omnibus, find the process IDs for the gateways
and probes, and kill those processes.
To restart the ObjectServer uni-directional Gateway (on the backup server colb1, change the gateway property file
name from C_TO_A_GATE_P_1.props to C_TO_A_GATE_B_1.props), execute the following command:
/opt/IBM/tivoli/netcool/omnibus/bin/nco_g_objserv_uni -propsfile /opt/IBM/tivoli/netcool/omnibus/etc/C_TO_A_GATE_P_1.props &
4. On the dis2 system, run the same commands, changing the OMNIbus ObjectServer name from DIS_1 to
DIS_2.
Install and configure OMNIbus Web GUI (TIP) (dis1 & dis2)
1. Click on OK when the Netcool/OMNIbus splash screen appears.
2. Click on Next at the Introduction window.
8. Enter and confirm a password for the tipadmin user and then click on Next.
10. Enter the root password, ObjectServer name, hostname, and port for the aggp server and then click Next.
11. Enter the ObjectServer name, hostname, and port for the aggb server and then click Next.
13. When the Install Complete window appears, observe and note the listed URL for accessing the Web GUI,
and then click on Done.
The data source definition file for the Web GUI must include the following elements:
- ncwDataSourceCredentials
- ncwFailOverPairDefinition with the Primary and Backup Aggregation ObjectServers
- ncwReadCloudDefinition with the Primary and Backup Display ObjectServers
A file defining these elements for the described environment was configured and used in place of the default file.
(You may wish to build your version in an XML editor.)
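As an illustration of the overall shape only, a skeleton using the three elements named above might look like the following; the child elements and attribute names here are assumptions for illustration and must be checked against the default file shipped with the Web GUI:

<ncwDataSourceDefinitions>
    <!-- Credentials the Web GUI uses to log in to the ObjectServers
         (attribute names are illustrative) -->
    <ncwDataSourceCredentials userName="root" password=""/>
    <!-- Writes fail over between the Aggregation pair -->
    <ncwFailOverPairDefinition>
        <primary name="AGG_P" host="aggp" port="4100"/>
        <backup  name="AGG_B" host="aggb" port="4100"/>
    </ncwFailOverPairDefinition>
    <!-- Reads are spread across the Display ObjectServers -->
    <ncwReadCloudDefinition>
        <member name="DIS_1" host="dis1" port="4100"/>
        <member name="DIS_2" host="dis2" port="4100"/>
    </ncwReadCloudDefinition>
</ncwDataSourceDefinitions>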
Configure ITM TEMS to share situational events with the EIF probes
Configure colp1 as the default EIF receiver on both the Hub TEMS and the Hot Standby Hub TEMS.
In this integration scenario, both TEMSs were Linux systems, with hostnames nc049043 and nc049044 respectively,
and the ITM home directory on both was /data/IBM/itm.
To configure the TEMS for EIF probe failover, follow the steps below on the primary Hub TEMS (nc049043) first:
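The heart of this configuration is the EIF event destination on each TEMS. As a sketch, assuming the standard om_tec.config location under the ITM home directory given above (verify the path and keywords against the ITM documentation for your release), the primary and backup EIF probes are listed as comma-separated server locations, with one port per location:

# /data/IBM/itm/tables/<tems_name>/TECLIB/om_tec.config (illustrative excerpt;
# <tems_name> stands for your TEMS name, and 9998 for the probes' listening port)
ServerLocation=colp1.domain2.ibm.com,colb1.domain2.ibm.com
ServerPort=9998,9998

With both locations listed, the TEMS forwards situation events to the probe on colp1 and fails over to the probe on colb1 when the primary is unreachable.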
Note: On Windows platforms the OMNIbus ObjectServer and Gateways can be configured as Windows
Services and controlled via the Windows Services control panel. The OMNIbus InfoCenter should be
referenced if you wish to configure them in this manner.