Administration Guide
r11.2 SP2
This documentation and any related computer software help programs (hereinafter referred to as the "Documentation") are for your informational purposes only and are subject to change or withdrawal by CA at any time. This Documentation may not be copied, transferred, reproduced, disclosed, modified or duplicated, in whole or in part, without the prior written consent of CA. This Documentation is confidential and proprietary information of CA and may not be used or disclosed by you except as may be permitted in a separate confidentiality agreement between you and CA. Notwithstanding the foregoing, if you are a licensed user of the software product(s) addressed in the Documentation, you may print a reasonable number of copies of the Documentation for internal use by you and your employees in connection with that software, provided that all CA copyright notices and legends are affixed to each reproduced copy. The right to print copies of the Documentation is limited to the period during which the applicable license for such software remains in full force and effect. Should the license terminate for any reason, it is your responsibility to certify in writing to CA that all copies and partial copies of the Documentation have been returned to CA or destroyed. TO THE EXTENT PERMITTED BY APPLICABLE LAW, CA PROVIDES THIS DOCUMENTATION "AS IS" WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT. IN NO EVENT WILL CA BE LIABLE TO THE END USER OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS DOCUMENTATION, INCLUDING WITHOUT LIMITATION, LOST PROFITS, LOST INVESTMENT, BUSINESS INTERRUPTION, GOODWILL, OR LOST DATA, EVEN IF CA IS EXPRESSLY ADVISED IN ADVANCE OF THE POSSIBILITY OF SUCH LOSS OR DAMAGE. 
The use of any software product referenced in the Documentation is governed by the applicable license agreement and is not modified in any way by the terms of this notice. The manufacturer of this Documentation is CA. Provided with "Restricted Rights." Use, duplication or disclosure by the United States Government is subject to the restrictions set forth in FAR Sections 12.212, 52.227-14, and 52.227-19(c)(1) - (2) and DFARS Section 252.227-7014(b)(3), as applicable, or their successors. Copyright 2010 CA. All rights reserved. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.
CA Product References
This document references the following CA components and products:

- CA 7 Workload Automation
- CA Access Control
- CA ADS (CA ADS)
- CA Advanced Systems Management (CA ASM)
- CA Cohesion Application Configuration Manager (ACM)
- CA ASM2 Backup and Restore
- CA eHealth Performance Manager
- CA Jobtrac Job Management (CA Jobtrac JM Workstation)
- CA NSM
- CA NSM Job Management Option (CA NSM JMO)
- CA San Manager
- CA Scheduler Job Management (CA Scheduler JM)
- CA Security Command Center (CA SCC)
- CA Service Desk
- CA Service Desk Knowledge Tools
- CA Software Delivery
- CA Spectrum Infrastructure Manager
- CA Virtual Performance Management (CA VPM)
Contact CA
Contact Technical Support

For your convenience, CA provides one site where you can access the information you need for your Home Office, Small Business, and Enterprise CA products. At http://ca.com/support, you can access the following:

- Online and telephone contact information for technical assistance and customer services
- Information about user communities and forums
- Product and documentation downloads
- CA Support policies and guidelines
- Other helpful resources appropriate for your product
Provide Feedback

If you have comments or questions about CA product documentation, you can send a message to techpubs@ca.com. If you would like to provide feedback about CA product documentation, complete our short customer survey, which is also available on the CA Support website, found at http://ca.com/docs.
Contents
Chapter 1: Introduction

About CA NSM
About This Guide
UNIX and Linux Support
CA NSM Databases
Management Data Base
Distributed Intelligence Architecture
Discovery
Visualizing Your Enterprise
Management Command Center
Other CA NSM User Interfaces
Discovery Classic
WorldView
Business Process View Management
Smart BPV
Customizing Your Business Views Using Unicenter Management Portal
Monitoring Your Enterprise
Unicenter Configuration Manager
Unicenter Remote Monitoring
Administering Critical Events
Event Management
Alert Management System
Analyzing Systems Performance
Correlating Important Events
Advanced Event Correlation
Unicenter Notification Services
Creating Customized Reports
Trap Manager
System Monitoring for z/OS
Integration with Other Products
Unicenter Service Desk
Unicenter Management for MOM
Unicenter Cisco Integration
Intel Active Management Technology
eHealth Integration with the Management Command Center
SPECTRUM Integration
Related Publications
Role-based Security
Securing the MDB
MDB Users (Microsoft SQL Server Databases)
MDB User Groups (Ingres Databases)
MDB Users (Ingres Databases)
Operating System Users
Ingres Virtual Node Names (VNODES)
Component-Level Security
What is Security Management
Administrators or Power Users Group
How You Change the CA NSM Administrator Password On Windows
Change the Password for the Severity Propagation Engine User Accounts (Windows)
How You Change File Privileges on Microsoft Windows 2003 Server
Run Utilities Requiring Administrator Privileges on Windows Vista
Create Additional Users with Administrator Privileges to Run Discovery (Microsoft SQL Server Databases)
How You Create Additional Users Without Administrator Privileges (SQL Server Databases)
Create Additional Users with Administrator Privileges to Run Discovery (Ingres Databases)
How You Create Additional Users Without Administrator Privileges (Ingres Databases)
WorldView Security
Management Command Center Security
Integrating with eTrust Access Control
How Integration and Migration Works
Rules and Statistics Not Migrated
Attributes Not Migrated
Protecting and Filtering MDB Data Using Data Scoping
Data Scoping Rules
How Data Scoping Rules are Inherited
Rule Performance Issues
Data Scoping Security on Windows
Data Scoping Security on UNIX/Linux
User IDs Required for Data Scoping Rule Evaluations
Data Scoping Limitations When the MDB Resides on UNIX/Linux
Data Scoping Limitations on UNIX/Linux When the MDB Resides on Windows
Data Scoping in the 2D Map (Windows)
Activate Data Scoping on Windows
Deactivate Data Scoping on Windows
Activate Data Scoping on UNIX/Linux
Deactivate Data Scoping on UNIX or Linux
DataScope Rule Editor
Implement End-to-End Data Scoping
Communication Protocol Security
Encryption Levels
Agent to Manager Communication Security
Common Communications Interface (CAICCI)
Discovery
How You Can Combine Running Classic and Continuous Discovery
Classic Discovery Multi-Homed Device Support
Discovery Classification Engine
Discovery Timestamp
How Subnet Filters Work
How Timeout Values Affect Discovery
Discovery Object Creation Rules
Types of Discovery Methods
How You Modify or Write Classification Rules
How to Enable Classification of New Classes
methods.xml file--Configure Classification Methods
classifyrule.xml--Configure Classification Rules
Device Not Discovered
Discovering Your Network Devices Continuously in Real-Time Mode
Continuous Discovery Architecture
How Continuous Discovery Monitors Your Network
How Continuous Discovery Discovers and Monitors Subnets
Continuous Discovery Default Configuration
DHCP Engine Configuration
Set the Admin Status Property for an Object Using Continuous Discovery
Exclude Classes from Discovery
How You Set Up SNMP Community Strings for Continuous Discovery
Discovery Managers
Discovery Events Reported to the Event Console
Discovery Agents
Discovery and Firewalls
Continuous Discovery Rapidly Consumes Memory
Discovering Your Network Devices on Demand Using Classic Discovery
Discovery Methods
How Agent Discovery Works
How IPX Discovery Works
How SAN Discovery Works
How Discovery Uses Subnets
How You Prepare to Run Discovery
How You Discover a Single Network
How You Determine the Time Required to Ping a Class B Network
How Names of Discovered Devices are Determined
Discovery Creates Incorrect Subnets
Discovering IPv6 Network Devices using Common Discovery
Common Discovery
Using Common Discovery GUI
Understanding IPv6 Discovery
WorldView Components
Managed Objects
Viewing Your Network Topology Using the 2D Map
Business Process Views
Determining the Relative Importance of an Object in Your Network
Set Policies for a Managed Object's Severity Using Alarmsets
Severity Propagation Service
How You Correctly Stop and Restart the Microsoft SQL Server Database
Viewing Object Details and Properties
Modifying Class Properties with the Class Editor
Viewing MIBs and WBEM Data with ObjectView
Viewing Relationships Among Objects Using the Association Browser
Viewing Links Between Objects
Viewing Historical Information about Your Network
Importing and Exporting Objects to and from WorldView
Understanding IPv6 Discovery
Registering and Updating Unicenter Components Using Unicenter Registration Services
Configuring Business Process Objects Using Business Process Views
Business Process Objects
Rules
Integration with Event Management
Creating Business Process Views Using SmartBPV
Business Process Views
Benefits of SmartBPV
How SmartBPV Works
SmartBPV Examples
How Optimizing SmartBPV Enhances Implementation
Why You Need Unicenter Management Portal
CleverPath Portal Technology
Users, Workgroups, and Security Profiles
Scoreboards and Dashboards
Scoreboards and Dashboards Distributed with Unicenter MP
Unicenter MP Administration
Administration Wizard
Task 1: Manage Components
Workplace Templates
Create Workplaces from Templates
Working with Components
Working with Unicenter WorldView
Working with Agent Management
Working with Unicenter Event Management
Working with Unicenter Alert Management
Working with Unicenter MP Notification
Working with Unicenter MP Reports
Working with Unicenter Service Metric Analysis
Working with Unicenter Service Desk
eHealth Integration with Unicenter MP
Working with SPECTRUM
Additional Component Integrations
Using Agent Technology to Monitor Resources
Understanding Unicenter Remote Monitoring
Remote Monitoring Architecture
Resource Types You Can Monitor
Securing Access to Remote Monitoring
Understanding Resource Monitoring
Basic Concepts
General Functions
Monitoring System Resources
Understanding Systems Management
Understanding the Architecture
Tools to Configure Managed Resources
Configuring Managed Nodes
Configuring a DSM Environment
Monitoring the Health of your DSM
Understanding Configuration Manager
Resource Model Groups
Base Profiles
Differential Profiles
File Packages
Delivery Schedules
Configuration Bundles
Reporting Feature
Event Management
Events
Event Management Policies
Event Agent
Dates and Times for Automated Event Processing
Automatic Responses to Event Messages
Event Console
SNMP Traps
Event Policy Packs
Wireless Message Delivery
Alert Management System
What Are Alerts?
How Alert Management Works
Viewing and Responding to Alerts in the Management Command Center
Integrating with Unicenter Service Desk
Unicenter Notification Services
How Unicenter Notification Services Works
Features of Unicenter Notification Services
Configuration and Diagnostics
Advanced Event Correlation
Why Use AEC?
How AEC Works
Alert Management Integration
Event Definitions
Configure AEC
Impact Analysis
Implement AEC
Understanding the AEC Components
Analyzing Systems Performance
Performance Scope Usage
Working with Performance Trend
Effective Reporting with Performance Reporting
Charging for Resource Usage with Performance Chargeback
Data Fundamentals
Real-time Data Gathering
Historical Data Gathering
Performance Architecture
Data Accessibility and Management by the Performance Data Grid
Configuration Services
Main Performance Architecture Components
Administrative Tools
Secure, Centralized Configuration with Performance Configuration
Command-Line Utilities
What is Security Management
How Security Management Works
Security Policies
How the Commit Process Works
How Security Management Is Implemented
Phase 1: Customize Security Management Options
How You Modify Windows Security Management Option Settings
How You Modify UNIX/Linux Security Management Option Settings
Options to Consider for Your Operations
Additional Options for UNIX/Linux Platforms
Set Certain Options to Absolute Values
Phase 2: Start Security in QUIET Mode
Phase 3: Create Rules for Production in WARN Mode
Defining User Groups
Defining Asset Groups
Asset Permissions
Defining Access Permissions
How CAISSF Scoping Options Work
Phase 4: Set Options for Production, FAIL Mode
How You Commit Rules in Fail Mode
How You Deactivate Security Management
Security Management Reports
Access Violations Written to the Event Console Log
UNIX/Linux Reports
UNIX and Linux Support
Supported Components
UNIX and Linux Support Quick Reference
CA NSM FIPS 140-2 Compliance
Compliant Components
Systems Performance
Active Directory Management
Agent Technology
Common Communications Interface
Management Command Center
Unicenter Management Portal
Web Reporting Server
Trap Daemon
Trap Filters
Local Versus Remote Installation
Analyzing CISCO Integration
Cisco Device Recognition
Analyzing Repository Bridge
How Repository Bridge Works
Repository Bridge Architectures
Fanout Architecture
Aggregation Architecture
How to Determine Which Architecture to Use
Repository Bridge Components
Bridge Configuration
Bridge Control
Bridge Instances
Repository Bridge Supported Platforms
Repository Bridge in a Distributed Organization
Repository Bridge for a Restricted View of Resources
Repository Bridge for Problem Notification
Troubleshooting
View Repository Bridge Log Files
How to Create a Bridge Configuration File (Windows Only)
Bridging Rules (Windows)
Bridging Objects to A Repository Where a DSM is Running
Start the Bridge Configuration GUI (Windows Only)
Manage Repository Bridge Instances Using a Windows Service (Windows Only)
Create a Configuration File (UNIX/Linux)
Rule File Parameters for UNIX/Linux
Desktop Management Interface (DMI) .......... 453
DMI Service Provider .......... 454
Unicenter Support for Desktop Management Interface (DMI) .......... 455
Install the DMI Manager and DMI Agent .......... 455
Set SNMP Destinations in the CA DMI Agent .......... 456
Unicenter Management for Microsoft Operations Manager .......... 457
MOM Terminology .......... 457
How MOM Management Works .......... 458
MOM Alerts as Event Messages .......... 459
Status of MOM Entities in WorldView .......... 460
Using MOM Management .......... 461
Integration with Microsoft System Center Operations Manager (SCOM) .......... 461
Minimum Software Requirements .......... 462
SCOM Terminology .......... 463
How the SCOM Integration Works .......... 464
SCOM Alerts as Event Messages .......... 465
Status of SCOM Entities in WorldView .......... 466
SCOMMsgconfig Utility .......... 466
Virus Scan .......... 469
Downloading Virus Signature Updates .......... 469
Deleting Old Scan Logs .......... 470
Required Open Ports .......... 472
Optional Ports .......... 473
Configure the DIA Communications Port .......... 474
CA Message Queuing Service (CAM) .......... 476
Supported Transport Layer Protocols .......... 476
Components That Use CAM/CAFT .......... 477
CAM/CAFT Configuration Files .......... 478
CAM/CAFT Binaries .......... 478
How to Encrypt the MCC Data Transport (CAM) for AIS Providers .......... 479
CA Spectrum-NSM Integration Kit .......... 485
CA Spectrum Infrastructure Manager and CA NSM Integration Guide .......... 485
Introduction to CA Virtual Performance Management .......... 487
CA SystemEDGE Agent .......... 488
Logical Partition (LPAR) AIM .......... 488
Service Response Monitor (SRM) AIM .......... 488
VMware vCenter (VC) AIM .......... 489
Xen AIM .......... 489
Zones AIM .......... 490
Integration with CA Virtual Performance Management .......... 490
Discover VPM Resources .......... 491
IBM LPAR Object Discovered .......... 491
Start the LPAR AIM Agent View .......... 492
Sun Zones Objects Discovered .......... 492
Start the Zones AIM Agent View .......... 493
Citrix XenServer Objects Discovered .......... 494
Start the Citrix XenServer AIM View .......... 495
VMware Objects Discovered .......... 495
Start the VC AIM Agent View .......... 496
Enable AIMs in VPM Integration .......... 496
Integrating with VMware Virtual Center 2.5 and 4.0 .......... 497
VMware Virtual Center Credentials .......... 498
VMware Virtual Center Password Utility .......... 498
How CA NSM Job Management Option Works .......... 499
CA NSM Job Management Option Job Server .......... 500
Unicenter Universal Job Management Agent .......... 500
CA NSM JM Option Profiles .......... 501
CA NSM JM Option Variables .......... 502
Types of Job Scheduling .......... 502
How to Specify Where to Perform Work .......... 502
How to Identify Resource Requirements for Workload Balancing .......... 503
How to Schedule Work by Dates .......... 504
Expanded Calendar Processing .......... 505
How to Form Groups of Related Tasks (Jobsets) .......... 506
Jobset Resources .......... 506
Jobset Predecessors .......... 507
How to Identify Work to Perform .......... 509
Jobset Membership .......... 509
How to Schedule Work by Special Events .......... 515
Use caevent .......... 516
Run a Job on Demand .......... 518
How to Test Your CA NSM JM Option Policy Definitions .......... 519
How to Run Additional CA NSM JM Option Reports .......... 520
Autoscan .......... 520
How a Job or Jobset Qualifies for Selection During Autoscan .......... 521
Cleanup and Backlogging .......... 521
Workload Processing .......... 522
Maintenance Considerations .......... 523
Job Management Logs (UNIX/Linux) .......... 523
Tracking File .......... 524
Undefined Calendars During Autoscan .......... 525
Purge Old History Records (UNIX/Linux) .......... 525
Unload the CA NSM JM Option Database Definitions to a Text File .......... 525
How to Submit Jobs on Behalf of Another User .......... 526
Agent/Server Configurations .......... 526
Single Server .......... 527
Cross-Platform Scheduling .......... 528
Job Management Managers and Agents .......... 529
Implementation .......... 530
Windows Configuration Environment Variables .......... 532
UNIX/Linux Configuration Environment Variables .......... 534
Environment Variables for Jobs and Actions .......... 535
Monitor Workload Status .......... 536
Jobflow Tracking on Windows .......... 537
Index .......... 545
Chapter 1: Introduction
This section contains the following topics:
About CA NSM (see page 17)
About This Guide (see page 18)
UNIX and Linux Support (see page 18)
CA NSM Databases (see page 19)
Management Data Base (see page 19)
Distributed Intelligence Architecture (see page 20)
Discovery (see page 20)
Visualizing Your Enterprise (see page 21)
Customizing Your Business Views Using Unicenter Management Portal (see page 26)
Monitoring Your Enterprise (see page 26)
Administering Critical Events (see page 28)
Analyzing Systems Performance (see page 29)
Correlating Important Events (see page 29)
Creating Customized Reports (see page 30)
Trap Manager (see page 31)
System Monitoring for z/OS (see page 32)
Integration with Other Products (see page 32)
Related Publications (see page 36)
About CA NSM
CA NSM delivers innovative, secure, and platform-independent management to let you deploy single platform or heterogeneous business applications. CA NSM solutions help you sustain an optimized, on-demand infrastructure, maximizing your IT investment by continuously assessing and self-managing network and systems elements. CA NSM lets organizations deploy and maintain a complex, secure, and reliable infrastructure that supports business objectives. It helps ensure the continuous health and performance of your critical infrastructure through innovative and intelligent techniques to help you control costs while maintaining or increasing responsiveness to changing business priorities. Its ability to integrate with other solutions in the CA portfolio and share information using a common database provides unparalleled intelligence for CA's EITM strategy.
CA NSM Databases
In CA NSM r11, the database tool used for the MDB is Ingres for both Windows and UNIX/Linux. In CA NSM r11.1 and r11.2, however, the database tool used for the MDB on Windows platforms is Microsoft SQL Server. The documentation for CA NSM r11.2 covers both Ingres databases and Microsoft SQL Server databases, so be aware that some of it may not apply, depending on the CA NSM version you are running. On UNIX and Linux platforms, Unicenter NSM r11.2 does not use Ingres for the MDB; it supports PostgreSQL, a free embedded database. Therefore, Ingres and Microsoft SQL Server information does not apply to Unicenter NSM r11.2 for UNIX and Linux. For more information about the PostgreSQL database, see the MDB Overview. CA NSM r11 users can migrate the Ingres database to the r11.2 PostgreSQL database. For more information, see the Migration Guide.
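The version-and-platform matrix above can be summarized as a small lookup. The helper below is purely illustrative (not part of the product) and covers only the combinations the text states:

```python
def mdb_provider(version: str, platform: str) -> str:
    """Return the database that backs the MDB for a given CA NSM
    version and platform, per the support matrix described above."""
    platform = platform.lower()
    if version == "r11":
        return "Ingres"                      # r11: Ingres on all platforms
    if version in ("r11.1", "r11.2"):
        if platform == "windows":
            return "Microsoft SQL Server"    # r11.1/r11.2 on Windows
        if version == "r11.2":
            return "PostgreSQL"              # r11.2 on UNIX/Linux
    # Combinations not covered by the documentation are rejected rather
    # than guessed at.
    raise ValueError(f"unsupported combination: {version}/{platform}")
```

For example, `mdb_provider("r11.2", "Linux")` returns `"PostgreSQL"`, while an undocumented combination raises an error instead of silently guessing.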
Discovery
Discovery discovers and classifies devices on IP and IPX networks. It provides both an ad hoc (on demand) and a continuous (real-time) mode. It provides discovery services to other CA Common Services components and updates the MDB with newly discovered and classified network objects. When you install your product, you can use any of the following types of Discovery:

Classic Discovery
Provides on demand discovery that lets you decide which subnets you want to discover and when. You can also configure Classic Discovery to run at regular intervals, which can be used as an alternative to Continuous Discovery and ensures that your discovered environment in the MDB is always current. You can start a Classic Discovery from the Discovery Classic GUI, the Management Command Center, the Unicenter Browser Interface, or the command line.

Continuous Discovery
Provides event-driven and ongoing discovery. Continuous Discovery employs a manager and agents that continuously scan your network in real-time mode for new devices or changes in IP addressing of existing IP devices. You can configure Continuous Discovery for optimal load balancing between the Discovery Agents and the Discovery Manager. If you choose this method of discovery, you must install the Discovery Agents and the Discovery Manager.

Common Discovery
Discovers IPv6 networks. The Common Discovery Import utility discovers IPv6 networks using Common Discovery technology and imports IPv6 addresses into WorldView, where they are integrated with existing networks.
Note: For more information about Discovery, see the "Discovering Your Enterprise" chapter in this guide. For more information about Common Discovery and the Common Discovery Import utility, see the chapters "Discovering Your Enterprise" and "Visualizing Your Enterprise."
To start the Management Command Center on UNIX or Linux, run the camcc command from the $JI_SYSTEM/bin directory. You can route the camcc display by setting the DISPLAY environment variable to the proper hostname or IP address. By default, only one Management Command Center instance is permitted per UNIX/Linux server, but you can edit the $JI_SYSTEM/.max_ue file to change this limit to reflect the number of instances of Unicenter MCC that you want to run simultaneously on the server. The new limit takes effect when all instances of the Unicenter MCC are restarted. Note: For more information about the tndbrowser.bat, showinmcc, and camcc commands, see the online CA Reference.
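The UNIX/Linux startup steps above can be summarized in a short shell transcript. The display host name is a placeholder, and $JI_SYSTEM is assumed to be set by the CA NSM environment scripts:

```shell
# Route the Unicenter MCC display to another workstation (placeholder host).
export DISPLAY=mydesktop.example.com:0.0

# Start the Management Command Center from the NSM install tree.
$JI_SYSTEM/bin/camcc &

# Raise the per-server instance limit; follow the format of your existing
# .max_ue file. The new limit takes effect only after all running
# Unicenter MCC instances are restarted.
vi $JI_SYSTEM/.max_ue
```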
Unicenter Classic
Unicenter Classic refers to the traditional Windows-based user interface delivered with previous versions of CA NSM. Unicenter Classic includes the WorldView, Enterprise Management, and Discovery program groups accessed through Start, Programs, CA, Unicenter, NSM. Procedures based on the Unicenter Classic GUI are contained in the online CA Procedures. Unicenter Classic also includes the cautil command line interface.
WorldView Classic
WorldView Classic refers to the traditional Windows-based user interface. WorldView Classic includes the WorldView program group accessed through Start, Programs, CA, Unicenter, NSM, WorldView. Procedures based on the WorldView Classic GUI are contained in CA Procedures located in the Online Books program group. WorldView Classic also includes the cautil command line interface. Note: For more information about the cautil command line interface, see CA Reference in the Online Books program group.
Agent Dashboards
Dashboards display real-time information from CA NSM agents. A dashboard lets you combine on one screen multiple metrics from one or many agents and one or many hosts. Each metric is presented in an individual tile. Dashboards poll the data from the agents and show the metrics "as is." The Management Command Center supports two types of dashboards:

Agent dashboards
Display information about a single agent and consist of a number of chart tiles, each of which reflects the state of a particular variable or group monitored by the agent on a host.

Server dashboards
Display information about each agent installed on the host.
To display dashboards, a CA NSM Web Reports and Dashboards server must be installed and running on a host that the Management Command Center can access. If a dashboard server is found, a Dashboard viewer option becomes available for agent objects in the Topology and DSM view trees. After selecting an agent object, you can open the Dashboard viewer using the Add or Open Viewer context menu (available by right-clicking the object). You can also click the right pane drop-down list and choose Dashboards.
The first time you request a dashboard a connection dialog appears, which allows you to select the dashboard server you want to use. The connection dialog also contains user name and password fields for specifying the credentials to use when the server is accessed. The information you enter is saved and used for subsequent access to the same server for the remainder of your session.
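The session-scoped credential behavior described above can be sketched as follows. The class and names are hypothetical illustrations, not part of the product:

```python
class DashboardSession:
    """Remembers dashboard-server credentials for the life of a session,
    mirroring the MCC behavior described above: prompt on first access to
    a server, then reuse the saved credentials for that server."""

    def __init__(self):
        self._credentials = {}   # server name -> (user, password)

    def connect(self, server, prompt):
        # Show the connection dialog only the first time this server is
        # accessed; afterward, reuse what the user entered.
        if server not in self._credentials:
            self._credentials[server] = prompt(server)
        return self._credentials[server]
```

A second request to the same server reuses the saved user name and password without prompting again.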
Discovery Classic
Discovery Classic refers to the traditional Windows-based user interface. Discovery Classic includes the Discovery program group accessed through Start, Programs, CA, Unicenter, NSM, Discovery. Procedures based on the Discovery Classic GUI are contained in the CA Procedures located in the Online Books program group.
WorldView
Unicenter WorldView offers a highly visual and intuitive approach to enterprise management with the 2D Map, available through the Management Command Center, the Unicenter Browser Interface, and the WorldView Classic GUI. The 2D Map works as an infrastructure navigator, letting you view any part of your enterprise with the click of a button. For example, you can view all assets in your network, from the global network to the local subnets, hubs, bridges, and links, the servers and workstations connected to them, and their processors and drives, all the way down to the databases and applications. WorldView provides support for the Desktop Management Interface (DMI) specification. This feature lets you manage the installed hardware and software on your PCs, both locally and remotely across your network. DMI is available on Windows only.
Smart BPV
Smart Business Process View Management (SmartBPV) lets you automatically create and dynamically update Business Process Views. Through analysis of network activity, SmartBPV identifies infrastructure elements that support a specific application and automatically builds and continuously updates a focused view for management. SmartBPV is available on Windows only.
To access the Unicenter Configuration Manager using a web browser, enter the following URL:

http://UCMServerName:port/wiser

UCMServerName
Specifies the name of the computer on which Unicenter Configuration Manager is installed.

port
Specifies the port for the Unicenter Configuration Manager server.

To access the Unicenter Configuration Manager agent configuration tool from the Management Command Center, a Unicenter Configuration Manager server must be installed and running on a host that the Management Command Center can access.
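As a minimal illustration, the URL above can be assembled from the two values. The helper, server name, and port below are hypothetical:

```python
def ucm_url(server: str, port: int) -> str:
    """Build the Unicenter Configuration Manager browser URL described
    above from the server name and port."""
    return f"http://{server}:{port}/wiser"
```

For example, a UCM server named `ucmserver01` listening on port 8080 would be reached at `http://ucmserver01:8080/wiser`.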
Unicenter Remote Monitoring can monitor the following resource types:
Windows
UNIX
Linux
Mac OS X
IP
Event Management
Event Management, the focal point for integrated message management throughout your network, can monitor and consolidate message activity from a variety of sources. It lets you identify event messages that require special handling and initiate a list of actions for handling an event. Through support of industry-standard facilities, you can channel event messages from any node in your network to one or more monitoring nodes. You can centralize management of many servers and ensure the detection and appropriate routing of important events. For example, you may want to route message traffic to different event managers:
Event and workload messages to the production control event manager
Security messages to the security administrator's event manager
Problem messages to the help desk administrator's event manager
By filtering messages that appear on each console, you can retrieve specific information about a particular node, user, or workstation. Wireless Messaging provides alternate channels for operator input in situations where the operator cannot access a CA Event Console. The supported messaging protocols are email and pager. Using the SMTP/POP3 mail messaging protocol, you can send and receive pager messages from two-way pager devices. An incoming message can trigger any series of actions you define for Event Console to perform in response to it. You can install the Event Manager on Windows and UNIX or Linux platforms. For more information about installation options, see the Implementation Guide.
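The routing examples described above amount to a lookup from message category to monitoring node. The sketch below is purely illustrative; the category keys and node names are invented for the example:

```python
# Hypothetical routing table: each message category is forwarded to a
# different Event Management node, as in the examples above.
ROUTES = {
    "workload": "prodctrl-evtmgr",   # production control event manager
    "event":    "prodctrl-evtmgr",
    "security": "secadmin-evtmgr",   # security administrator's event manager
    "problem":  "helpdesk-evtmgr",   # help desk administrator's event manager
}

def route(category: str, default: str = "local-evtmgr") -> str:
    """Return the monitoring node that should receive a message of the
    given category; unknown categories stay on the local node."""
    return ROUTES.get(category, default)
```

A security message would thus be forwarded to `secadmin-evtmgr`, while an uncategorized message remains on the local event manager.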
Creating Customized Reports
To create reports in CA NSM, you use the Report Viewer. The Report Viewer is a reporting feature of the Unicenter MCC that displays canned reports for the following types of information:
Administration Documentation
Agent Technology
WorldView
Unicenter Scoreboards
Canned reports are visible when you select Reports from the left pane drop-down list. When you select a report, WRS opens in the right pane viewer. Reports are viewed as HTML in the right pane using a web browser window. Note: For more information about using WRS, see the WRS online help. For those customers who choose not to install Unicenter MCC, CA NSM provides the Report Explorer. You can use the Report Explorer to create customized reports just as you can using WRS. The Report Explorer uses the Windows Explorer interface to view, print, edit, and create reports. To open the Report Explorer, choose Start, Programs, CA, Unicenter, NSM, Utilities, Report Explorer. Note: For more information about using the Report Explorer, see the Report Explorer online help.
Trap Manager
The Trap Manager is a component of CA NSM that lets you perform sophisticated trap database and trap filter file management. You can use the Trap Manager to manage trap information and translation messages stored in the Management Database (MDB) and trap filters stored in the trap filter file. To sign on to the Trap Database, go to Start, Programs, CA, Unicenter, Trap Manager, Unicenter Trap Manager. The Enter window appears. For more information, see the online help.
Alarm Detail Reports
The Unicenter MCC provides access to eHealth Alarm Detail reports for AMS alerts that were created from eHealth alarms. eHealth Alarm Detail reports show the availability and performance history over time of an eHealth object that caused an alarm to be generated.

Trend Reports
The Unicenter MCC provides access to eHealth Trend reports for eHealth objects in the WorldView Topology and DSM Views. eHealth Trend reports are charts that plot a variable for an object over a period of time. Trend reports can also show variables for groups of objects. The reports can reveal patterns over time and relationships between objects and between variables. The available Trend reports are Availability, Bandwidth, and Error, depending on the type of managed object.

eHealth Alarms and netHealth Exceptions Create AMS Alerts
Based on policy that you deploy, eHealth alarms and netHealth exceptions create alerts automatically. When alarms are closed, the associated alerts are closed. Likewise, if an alert associated with an eHealth alarm is closed through AMS, the alarm is also closed.

Note: If you receive a security error when closing an alert associated with an eHealth alarm or netHealth exception, see Authorize Users to Run Commands.
SPECTRUM Integration
CA NSM provides integration with CA Spectrum Infrastructure Manager, which is a network fault management tool that provides proactive management of your network infrastructure through root cause analysis, impact analysis, event correlation, and service level management. CA NSM integrates with CA Spectrum through an Integration Kit that you can install from the Unicenter Product Explorer. After you install the kit, you can view CA Spectrum device model alarms from the MCC, 2D Map, and the Event Console. You can also launch the CA Spectrum OneClick interface from the MCC, 2D Map, or Management Portal. For more information about integrating with CA Spectrum, see the CA Spectrum Infrastructure Manager and CA NSM Integration Guide (5147), which is included with CA NSM and CA Spectrum.
Related Publications
The following guides provide information that you will find useful. Most are available on the CA NSM installation media.

Administration Guide
Is intended for use by system administrators and contains general information and procedures about how to secure, customize, configure, and maintain CA NSM after installation and implementation. Individual chapters describe the components that are included with or that can be integrated with your CA NSM installation.

Agent Technology Support for SNMPv3
Provides information about how Agent Technology can take advantage of the SNMPv3 protocol. Documents how the security information is handled on the manager and agent side, as well as how it is applied to the managed systems. SNMPv3 configuration and usage details are provided in this guide.

CA Procedures
Contains procedures and processes for all components of CA NSM, including WorldView, Agent Technology, Enterprise Management, Event Management, CAICCI, Data Scoping, Discovery, Notification Services, Wireless Messaging, Security Management, and CA NSM Job Management Option.

CA Reference
Contains commands, parameters, and environment variables for all components of CA NSM, including Advanced Event Correlation, Agent Technology, Enterprise Management, Event Management, CAICCI, Data Scoping, Discovery, Notification Services, Wireless Messaging, Security Management, CA NSM Job Management Option, and WorldView.

Implementation Guide
Contains architecture considerations, pre-installation tasks, installation instructions, post-installation configuration information, and implementation scenarios. Appendixes include in-depth information about Distributed Intelligence Architecture (DIA), the MDB, and the CA High Availability Service (HAS) for cluster-aware environments. This guide is intended for users who are implementing CA NSM on a new system.

Inside Active Directory Management
Provides general information, installation scenarios, and configuration procedures for Active Directory Management.

Inside Event Management and Alert Management
Provides detailed information about Event Management (message records and actions), Advanced Event Correlation, and Alert Management.
Inside the Performance Agent
Contains detailed information about the configuration and use of the Performance Agent.

Inside Systems Management
Describes systems management from the CA NSM architecture perspective. The guide describes the different layers (WorldView, Management Layer, Monitoring Layer) and associated components, for example: Distributed State Machine (DSM), Unicenter Configuration Manager, dashboards, and so on.

Inside Systems Monitoring
Explores how to use and configure the system agents of CA NSM to monitor the system resources in your environment. The chapters guide you through the process of configuring and optimizing the agent for your special requirements.

Inside Systems Performance
Contains detailed information about the three architectural layers of Systems Performance, and provides guidance in the deployment, configuration, use, and best practices of the Systems Performance components.

MDB Overview
Provides a generic overview of the Management Database (MDB), a common enterprise data repository that integrates CA product suites. The MDB provides a unified database schema for the management data stored by all CA products (mainframe and distributed). The MDB integrates management data from all IT disciplines and CA products. The guide includes implementation considerations for the database systems that support the MDB and information specific to the CA NSM implementation of the MDB.

MIB Reference Guide
Provides detailed information about each MIB attribute of the CA NSM system agents.

Migration Guide
Provides detailed upgrade and migration instructions. This guide is only available on the CA Support website: http://ca.com/support

Programming Guide
Provides details for constructing applications by CA development teams and by third parties and their clients. The guide is intended for developers who use one or more of the application programming interfaces (APIs) in the SDK to develop applications for use with CA NSM. Key among these APIs are the WorldView API, the Agent Technology API, and the Enterprise Management API.
Readme Files
Provides information about known issues and information discovered after CA NSM publication. The following readme files are available:
The CA NSM r11.2 SP2 for UNIX and Linux readme
The CA NSM r11.2 SP2 Windows readme
The Unicenter Management Portal readme

Release Notes
Provides information about operating system support, system requirements, new and changed features, published fixes, international support, and the documentation roadmap. The following release notes are available:
The CA NSM r11.2 SP2 for UNIX and Linux release notes
The CA NSM r11.2 SP2 release notes
The Unicenter Management Portal release notes

Unicenter Management Portal Implementation Guide
Provides installation, deployment, and basic administrative instructions for Unicenter Management Portal.

CA Green Book, Systems Management
Identifies the CA solution for managing challenges involved in maintaining the performance and availability of complex server infrastructures. The CA solution provides proactive management of servers as part of an overall effort to improve service levels, and minimize the costs of managing the computing infrastructure through increased automation. It provides a view of the requirements for systems management and best practices for deployment. This guide is available online at: https://support.ca.com/irj/portal/anonymous/phpdocs?filePath=0/common/greenbooks.html

CA Green Book, Service Availability Management
Describes how to deliver integrated end-to-end performance and event management that is centered on services. The CA Service Availability Management solution leverages the Manager of Managers integration capabilities of CA NSM and eHealth and explains how to take advantage of those capabilities. It includes details on how to install and configure a variety of management solutions to provide simpler and more comprehensive management and monitoring of IT services. This guide is available online at: https://support.ca.com/irj/portal/anonymous/phpdocs?filePath=0/common/greenbooks.html
Role-based Security
CA NSM was developed with detailed security and now uses a role-based approach so that the management station is not a point of concern for today's security-conscious IT environments. CA NSM and its options are unique in providing a security methodology that protects corporate assets and also makes the system easier to manage, because CA NSM security lets you segregate security according to a user's role within the organization. CA NSM can be secured at the following levels:
Database security (MDB)
CA NSM component-level security, such as securing WorldView tables, Agent Technology security, Enterprise Management security
General product security for the primary communications protocols
Note: For information about using the Security Management component to secure CA NSM objects such as calendar and event, see the chapter "Securing CA NSM Objects."
The Management Database (MDB) creates a hierarchy that you must understand so that you can access it correctly from CA NSM and its components. The CA NSM database security model uses one of the following ways to connect to the MDB, depending on which database you are using:

For Ingres databases, private VNODEs instead of an installation password. All connections to the MDB require a valid operating system user ID and password. That user ID must also be defined to Ingres.

For Microsoft SQL Server databases, Microsoft SQL Server authentication or Windows authentication. All connections to the MDB require either a valid operating system ID or a Microsoft SQL Server user ID. Different applications require each method of authentication.

This section defines the preferred way of connecting to and accessing database objects in the MDB. Topics covered include the following:
MDB User Groups (Ingres)
MDB User Roles (Microsoft SQL Server)
MDB Users
Operating System Users
Virtual Node Names (VNODEs) (Ingres)
A Microsoft SQL Server user can be set up with a password, or a Windows-authenticated user can be set up by a Microsoft SQL Server user who has the system administrator database role.
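The two authentication paths can be illustrated with ODBC-style connection strings. The following sketch is illustrative only; the server name, driver, and helper function are hypothetical and are not part of CA NSM:

```python
def mdb_connection_string(server: str, database: str = "mdb",
                          user: str = None, password: str = None) -> str:
    """Build an ODBC-style connection string for the MDB (illustrative only).

    With no user/password, Windows (trusted) authentication is requested;
    otherwise SQL Server authentication with the given credentials is used.
    """
    base = (f"Driver={{ODBC Driver 17 for SQL Server}};"
            f"Server={server};Database={database};")
    if user is None:
        return base + "Trusted_Connection=yes;"
    return base + f"UID={user};PWD={password};"

# Windows authentication (hypothetical server name)
win_auth = mdb_connection_string("mdbserver01")
# SQL Server authentication (hypothetical credentials)
sql_auth = mdb_connection_string("mdbserver01", user="nsmadmin", password="secret")
```

The choice between the two strings mirrors the rule above: Windows-authenticated users need no stored password, while SQL Server authentication carries explicit credentials.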
As part of the MDB definition, the following user groups are defined for CA NSM:
- An administrator group, called uniadmin at the product level and wvadmin and emadmin at the component level.
  Table privileges: Insert, Update, Delete, Select
  Users assigned to these groups have the Security privilege, which allows uniadmin to "impersonate" user mdbadmin. This allows WorldView and Enterprise Management tables to be owned by user mdbadmin and to grant access to other users accordingly. This security privilege is required for creating and updating WorldView classes and Enterprise Management data.
- A read-only user group, called uniuser at the product level and wvuser and emuser at the component level.
  Table privileges: Select
For each CA NSM component, a user group can be created with grants for the tables within that subcomponent:
- An administrator group
- A read-only user group
By default, no users are defined for these groups. These groups are available if you want to protect tables at the component level. Additional groups are defined for other CA NSM components.
Note: For detailed information about administering Enterprise Management Database privileges in Ingres on UNIX and Linux operating systems, see the Inside Event Management and Alert Management guide.
Ingres users are defined without Ingres passwords. Ingres verifies the operating system user before checking whether the user is defined to Ingres.
An Ingres user can be set up with a password, either at the time the user is created or later by an Ingres user who has the maintain_users privilege. This password has no connection with the operating system user's password. Ingres users with the appropriate privileges can change their own passwords using the ALTER USER SQL statement by specifying the old and new passwords. Only an Ingres user with the correct privileges can change another user's password. Ingres user passwords are not currently used to connect to the MDB.

Important! CA NSM components may not be able to connect if the Ingres user has been assigned a password.

An Ingres user can also be set up with an expiration date. Once that date has passed, the Ingres user cannot connect to Ingres until the expiration date is reset. Only an Ingres user with the correct privileges can reset the expiration date.

Note: Ingres user expiration dates are not currently used to connect to the MDB.

For security reasons, the Ingres user mdbadmin owns all database objects, does not have a corresponding operating system user ID, and should not be used by any application.
How You Create Additional CA NSM Administrators (Microsoft SQL Server Databases)
When you install the CA NSM Server component, you are prompted to create a CA NSM Microsoft SQL Server account with a password. You can create another user who will have CA NSM administrator privileges for the MDB.
1. Create a Microsoft SQL Server user with a password.
2. Assign the user to a default user role for the tablespace to which that user needs access.
For example:
sp_adduser 'nsmadmin', 'uniadmin'
For example:
CREATE USER nsm_admin_user WITH group = uniadmin
Note: You can create an operating system user with a password expiration date, which may be a requirement for your organization. The Ingres VNODE entry on the client will not be able to connect to the server until the password entry for the VNODE is reset.

Important! For security reasons, do not create an operating system user called mdbadmin.
Component-Level Security
The WorldView registry is scanned to look for a valid VNODE for the server and user combination. If one is found, WorldView connects with that VNODE. If the connection fails, the connection dialog prompts for a user ID and password, the VNODE is updated from this information, and the connection is attempted again. If the connection fails again, this cycle is repeated until the connection succeeds or the user clicks Cancel. If the connection succeeds, the VNODE is saved for subsequent connections to WorldView. When you are using the WorldView Classic GUI (Windows), the user ID and password you provide on the Repository Sign On dialog are saved and stored in the VNODE. When you start any additional WorldView component, such as Object Browser or Severity Browser, you are not prompted again for MDB credentials because the credentials saved in the VNODE are used. For WorldView on UNIX and Linux, most WorldView components have input parameters that let you specify the server to connect to and the user name and password to use. Components that are run without specifying the server use the DefaultRepository registry entry, which is set at installation, to determine the server.
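The connection flow described above can be sketched as a retry loop. The helper callables (lookup_vnode, try_connect, prompt_credentials, save_vnode) are hypothetical stand-ins, not the real WorldView API:

```python
def connect_worldview(server, lookup_vnode, try_connect, prompt_credentials,
                      save_vnode):
    """Sketch of the WorldView connection cycle (illustrative only).

    1. Try a cached VNODE for the server/user combination.
    2. On failure, prompt for credentials, update the VNODE, and retry
       until the connection succeeds or the user cancels.
    """
    vnode = lookup_vnode(server)
    while True:
        if vnode is not None and try_connect(vnode):
            save_vnode(server, vnode)   # reused by subsequent components
            return vnode
        creds = prompt_credentials(server)
        if creds is None:               # user clicked Cancel
            return None
        vnode = creds                   # VNODE updated from user ID/password
```

Because the winning VNODE is saved, later components (Object Browser, Severity Browser) can reuse it without prompting, matching the behavior described above.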
Security at the product level helps keep unauthorized users from causing problems with key infrastructure components. Component-level security focuses on the following two aspects:
- Reducing unintentional problems caused by users having more access or authority than they require to do their jobs
- Gaining efficiency by presenting users with only the information they require to do their jobs properly
CA NSM security provides about 100 rules, 9 roles (also known as user groups), and about 100 asset types. CA NSM provides embedded security, which is a DENY-mode security engine that is turned on by default. The following components use CA NSM embedded security:
- Calendar Management
- Embedded security (protects itself)
- Job Management Option
- Alert Management
- Notification Services
- Agent dashboards
- Web Reporting Service
- Unicenter Configuration Manager
- Event Management
- Management interfaces, which include Management Command Center, Unicenter Management Portal, Unicenter Browser Interface, and Unicenter for Pocket PC (logon only)

Note: Security is not installed by default, nor is it a selectable option. If you select any of the components that use security, the installation asks whether you would like to enable security, and installs security if you answer "yes." On Windows, the question appears only when you are installing CA NSM in non-Express mode.

Without specific "permit" security rules for a given role or user, access to a component is denied. Default permit rules are created and activated for each of the components that use embedded security, for the following roles and types of access:
- Systems administrators (SYSADMIN) have full access to most components. By default, these users include only "administrator," "root," and the installing user.
- Network administrators (NETADMIN) have full access to most components. By default, these users include only "administrator" and "root."
- Operators (OPERATOR) have read access to most components. By default, this role includes a "dummy" user for place-marker purposes.
- Application administrators (APPADMIN), database administrators (DBADMIN), mail administrators (MAILADMIN), web administrators (WEBADMIN), and business users (USER) have no users assigned and no access to most components. By default, these roles include a "dummy" user for place-marker purposes.
- General users (PUBLIC) have no users assigned.
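The default-deny behavior described above (access is granted only when an explicit permit rule matches) can be sketched as follows. The rule shape and sample role names are illustrative, not the embedded security engine's actual data model:

```python
def access_allowed(user_roles, component, access, permit_rules):
    """Return True only if some permit rule covers (role, component, access).

    Illustrative DENY-mode check: with no matching permit rule,
    the answer is always deny.
    """
    for role, rule_component, allowed_access in permit_rules:
        if role in user_roles and rule_component == component \
                and access in allowed_access:
            return True
    return False  # no permit rule matched: deny by default

# Hypothetical default permits mirroring the role descriptions above
permits = [
    ("SYSADMIN", "Event Management", {"read", "write"}),
    ("OPERATOR", "Event Management", {"read"}),
]
```

For example, an OPERATOR can read Event Management data but cannot write it, and a role with no permit rules at all (such as PUBLIC) is denied everything.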
Exceptions to the above rules are as follows:
- The embedded security component allows full access only for systems administrators.
- Unicenter Management Portal access varies by role. For example, application administrators may have access that systems administrators do not.
- Logon access for most user interfaces, such as Unicenter Management Portal, Unicenter Configuration Manager, Unicenter Browser Interface, Management Command Center, and Unicenter for Pocket PC, is available for all roles. By default, these roles include "administrator," "root," and the installing user.
- Windows embedded security does not provide granularity for some components, such as Calendar Management, Event Management, Security Management, and the Job Management Option. Access to these components is all or nothing; therefore, the systems administrator, network administrator, and operator roles have identical access.
Enhanced and simplified Security Management means reduced errors, greater responsiveness, and increased flexibility in meeting the needs of your organization. Most importantly, it means you can implement thorough and effective security policies without disrupting your work environment. Note: The CA NSM Security Management components no longer provide file access authorization. If you need this type of additional security, you may want to evaluate eTrust Access Control. For more information, see Integration with eTrust Access Control.
Note: For more information about running modp, see the online CA Reference.
Change the Password for the Severity Propagation Engine User Accounts (Windows)
When CA NSM is installed, the Severity Propagation Service is registered and a SeverityPropagation user account with a strong password is automatically created. A RunAs user account with the same password is also added to the dcomcfg utility. These user IDs are created so that the Severity Propagation Engine can stay connected when the user logs off the computer. You may want to change the password for these user accounts for security reasons. To do so, you must deregister the Severity Propagation Service and re-register it with a new password.
Important! Failure to deregister and re-register the Severity Propagation Service correctly will result in a catastrophic failure of many CA NSM subsystems. Any errors that occur during registration and deregistration are written to the application event log in the operating system's event viewer.

To change the password for the SeverityPropagation and RunAs user accounts

1. Stop the Severity Propagation Service (sevprop.exe) using the Windows Service Manager.
2. Run the following command from a command line:
sevpropcom /unregister
The Severity Propagation Service is removed from the dcomcfg utility and the SeverityPropagation user account is removed.
3. Run the following command from a command line:
sevpropcom /regserver
4. The Severity Propagation Service is re-registered and the SeverityPropagation user account is created with an automatically generated password. The password conforms to Microsoft's most rigorous password complexity methodology, using Microsoft's LSA policy to ensure the security of the password.
Note: You can use the sevpropcom /regserver /password command to register the DCOM server with a user-defined password. You must ensure that all password requirements are met if you enter your own password.
5. Start the Severity Propagation Service (sevprop.exe) using the Windows Service Manager.
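The generated password must satisfy Windows password complexity policy. A typical complexity check, shown here only as an illustration and not as Microsoft's actual LSA implementation, looks like this:

```python
import string

def meets_complexity(password: str, min_length: int = 12) -> bool:
    """Illustrative password-complexity check (not the actual LSA policy):
    require a minimum length plus at least three of four character classes
    (lowercase, uppercase, digit, punctuation)."""
    if len(password) < min_length:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```

If you supply your own password with sevpropcom /regserver /password, a check in this spirit is what it must pass.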
5. Click Add, enter TNDUsers, and click OK.
6. In the Permissions field at the bottom of the Properties dialog, select the Allow box for the Write permission, and click OK.
7. Log off as administrator and log back on as a user that is a member of the TNDUsers group.
Create Additional Users with Administrator Privileges to Run Discovery (Microsoft SQL Server Databases)
Only nsmadmin, or the CA NSM administrator that was used to install the Microsoft SQL Server database on the MDB server, can run Discovery after CA NSM is installed. You may want to give other administrative users authority to run Discovery.
To create a user with administrator privileges

1. On the MDB server, create a Microsoft SQL Server user by running the following command:
addntgroup -a "TNDUsers" -s repository_name -u "userid" -p "password" -b mdb -g uniadmin
The Microsoft SQL Server user is created as a member of the uniadmin role.
2. Run the following command using the nsmadmin user and password:
modp -r repository_name -u nsmadmin -n nsmadmin_password
The user ID you created now has the authority to run Discovery.
Note: You need to run the modp command only if Discovery is run on a new remote MDB, that is, a different MDB than the one used during installation.
How You Create Additional Users Without Administrator Privileges (SQL Server Databases)
Only nsmadmin and the install user (usually Administrator) can run Discovery after CA NSM is installed. You may want to give other users authority to run Discovery without giving them administrator privileges. To create a user without administrator privileges, follow these steps:
1. Manually create a Windows user and add it to the TNDUsers group.
2. Manually create a Microsoft SQL Server user with SQL Enterprise Manager, with uniadmin as its default role.
3. Modify the security permissions of the Program Files\CA\SharedComponents\CCS\Discovery folder to allow users of the TNDUsers group the modify, read and execute, list folder contents, read, and write permissions.
4. Run the modp command using the nsmadmin user and password.
Note: For more information about the modp command, see the online CA Reference.
Create Additional Users with Administrator Privileges to Run Discovery (Ingres Databases)
Only nsmadmin, or the CA NSM administrator that was used to install the Ingres server on the MDB server, can run Discovery after CA NSM is installed. You may want to give other administrative users authority to run Discovery.
To create a user with administrator privileges

1. On the MDB server, create a Windows and Ingres user by running the following command:
addntgroup -a "TNDUsers" -s repository_name -u "userid" -p "password" -b TNGDB -g uniadmin
The Windows user is created as a member of the TNDUsers group and the Ingres user is created as a member of the uniadmin group.
2. Run the following command using the nsmadmin user and password:
modp -r repository_name -u nsmadmin -n nsmadmin_password
The user ID you created now has the authority to run Discovery.
Note: You need to run the modp command only if Discovery is run on a new remote MDB, that is, a different MDB than the one used during installation.
Note: For more information about the addntgroup and modp commands, see the online CA Reference.
How You Create Additional Users Without Administrator Privileges (Ingres Databases)
Only nsmadmin and the install user (usually Administrator) can run Discovery after CA NSM is installed. You may want to give other users authority to run Discovery without giving them administrator privileges. To create a user without administrator privileges, follow these steps:
1. Manually create a Windows user and add it to the TNDUsers group.
2. Manually create an Ingres user with VisualDBA, with uniadmin as its default group.
3. Modify the security permissions of the Program Files\CA\SharedComponents\CCS\Discovery folder to allow users of the TNDUsers group the modify, read and execute, list folder contents, read, and write permissions.
4. Run the modp command using the nsmadmin user and password.
Note: For more information about the modp command, see the online CA Reference.
WorldView Security
The topics that follow explain WorldView security considerations.
2. Select the name of the logical repository you want to connect to.
Note: If the name does not appear in the drop-down list, click Find and select the name, or type the name of the logical repository.
You are connected to the logical repository, and the WorldView Classic GUI component opens.
Note: When you start the 2D Map using the catng2d command, use the /R parameter to specify the logical repository. Do not use the /U and /P parameters if you are using an Ingres database.
Example: Connect to a Remote Repository

In this example, unixp is a CA NSM client computer, and uswv01 is the name of the WorldView server where the MDB resides. On unixp, define a logical repository named uswv01a to associate with the MDB on uswv01, using the nsmadmin user ID and password for uswv01. If unixp contains CA NSM management components, run modp to define the nsmadmin user ID and password for uswv01. You can now connect to uswv01a and run WorldView and Discovery applications (Discovery is a management component) from unixp, and the data is stored in the MDB on uswv01.
Different domains with the same user ID are considered the same user by Ingres (Ingres databases only). For example, if a user logs into the client computer as DomainA\joe, the user is authenticated to Ingres as joe, not DomainA\joe, because Ingres does not support domain accounts.
You must always use a local operating system user ID to authenticate to an Ingres database. This account can be defined to an Ingres server using the accessdb utility; you can do this on the server where the Ingres database resides. This operating system ID must also have access to the MDB. Each Ingres user must be defined to a default user group:
- For WorldView access with full authorization, assign the default group wvadmin (or uniadmin for all CA NSM tables).
- For users that should have only read authorization, assign the default group wvuser (or uniuser for all product tables).
You must always use an SQL Server user ID or Windows-authenticated user ID to authenticate to an SQL Server database. This account can be defined to SQL Server using the SQL Enterprise Manager utility; you can do this on the server where the SQL Server database resides. This user ID must also have access to the MDB. Each SQL Server user must be defined to a default user role:
- For WorldView access with full authorization, assign the default role wvadmin (or uniadmin for all CA NSM tables).
- For users that should have only read authorization, assign the default role wvuser (or uniuser for all product tables).
All non-root users have access to the Management Command Center.
2. (Optional) To allow non-root users to access WorldView data stored in the MDB, enter the following commands:
chmod 777 $CAIGLBL0000/wv/config
$CAIGLBL0000/wv/scripts/add_ingres_user non_root_user_name
All non-root users have access to WorldView data in the Management Command Center.
CA NSM 3.x asset types are pre-loaded during eTrust AC r8 installation. After the migration, CA NSM 3.x assets are protected by eTrust AC. eTrust AC provides programs that extract data from the CA NSM 3.x Security database and translate it into eTrust AC commands that populate the eTrust AC database. The following data is migrated:
- CA NSM Security users
- CA NSM Security user groups
- CA NSM Security rules
- CA NSM Security asset groups
Rules that apply to any of these asset types, or any of their derivatives, are ignored during the migration process. Creation and modification statistics for all CA NSM objects are lost in the migration process.
For CA NSM Security rules, the following attributes cannot be migrated:
- EXPIRES: Rule expiration date is not supported by eTrust AC.
This same concept applies to class generation level. Class generation level 0 implies that the rule is tied to the current class and has precedence over all other class generation levels. Class generation level 1 implies that the rule is tied to a class that is a direct superclass of the current class. Rules for a class always take precedence over superclass rules regardless of whether the rule is a user rule or group rule.
An Allow object-level rule takes precedence over any other object-level rule, regardless of whether that other rule is a user rule or a group rule. An Allow class-level rule takes precedence over any other class-level rule, whether that other rule is a user rule or a group rule. An Allow class-level rule never takes precedence over any object-level rule.

If there is no specific class-level or object-level rule for a class or any of its superclasses, the inclusion hierarchy is used for Data Scoping evaluation. That is, the topology of the network is used for evaluation. Rules are evaluated for the parent folder of an object. If the rule applies to the parent, it applies to all children within the folder. If the parent folder has no rule that applies, its grandparent is searched, then its great-grandparent, and so forth. For objects that are in multiple folders, where one folder has a Deny rule and another has an Allow rule, the Allow rule takes precedence. User rules and group rules are treated equally; any Allow rule in either category takes precedence.
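The precedence order described above can be sketched as a ranking function. This is an illustrative model, not the actual Data Scoping engine; each rule here carries a scope ("object" or "class"), a class generation level (0 = current class, 1 = direct superclass, and so on), and an effect ("allow" or "deny"):

```python
def effective_access(rules):
    """Pick the winning rule: object-level beats class-level, a lower
    class generation beats a higher one, and Allow beats Deny at the
    same level. User and group rules rank equally. Returns "allow" or
    "deny" ("deny" when no rule applies)."""
    if not rules:
        return "deny"

    def rank(rule):
        scope, generation, effect = rule
        return (0 if scope == "object" else 1,   # object-level first
                generation,                       # nearer class first
                0 if effect == "allow" else 1)    # Allow beats Deny
    winner = min(rules, key=rank)
    return winner[2]
```

For instance, an Allow group rule and a Deny user rule at the same class level resolve to allow, while an object-level Deny still overrides a class-level Allow, matching the examples that follow.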
Examples: Data Scoping Order of Rule Precedence

For the following two rules, Rule 2 takes precedence:
Rule 1: Deny Delete on Windows for User1.
Rule 2: Allow Delete on Windows for Group1, where User1 is a member of Group1.
For the following two rules, Rule 1 takes precedence:
Rule 1: Deny Delete on Windows where the label equals Computer1 for User1.
Rule 2: Allow Delete on Windows for User1.

For the following two rules, Rule 2 takes precedence:
Rule 1: Deny Delete on Windows for Group1 where User1 is a member of Group1.
Rule 2: Allow Delete on Windows for User1.

For the following two rules, Rule 1 takes precedence:
Rule 1: Deny Delete on Windows for User1.
Rule 2: Allow Delete on ManagedObject for User1.
For the following rule, Computer1 in subnet 192.168.255.0 is denied access for Delete:
Rule: Deny Delete on IP_Subnet where Name=192.168.255.0.

For the following rules, if Computer1 is in subnet 192.168.255.0 and in class Windows, Rule 2 takes precedence:
Rule 1: Allow Delete on IP_Subnet where Name=192.168.255.0.
Rule 2: Deny Delete on Windows where Name=Computer1.

For the following two rules, if Computer1 is on subnet 192.168.255.0, Rule 2 takes precedence:
Rule 1: Deny Delete on IP_Subnet where Name=192.168.255.0.
Rule 2: Allow Delete on Business Process View BPV1 where Computer1 is in BPV1.
For the following three rules, Rule1 takes precedence over Rule3 for Computer1 in BPV Atlas:
Since Rule 1 is a class inheritance rule, it takes precedence for objects within the BPV named Atlas over the inclusion inheritance rule, Rule 3. To allow update access for all objects within the BPV Atlas, Rule 1 should be changed to the following:
Rule: Class=ManagedObjectRoot/Deny/Create+Update+Delete/Name=ManagedObjectRoot/User=user1
If we assume that the user ID is the same user ID that is used to connect to the MDB, the impact of Data Scoping rules on MDB performance can be summarized as follows:
- When Data Scoping rules are not used, there is no performance impact on MDB functionality.
- When a user ID has no Data Scoping rules applied to it, there is no performance degradation for any MDB requests after the first data access.
- When a user ID has Data Scoping rules that apply to it at a class level, there is no performance degradation for any MDB requests to those classes with no rules applied. Any performance degradation of access is limited to those classes with rules applied and is negligible.
- When a user ID has Data Scoping rules that define object-level overrides (thus requiring property-by-property analysis), there is no performance degradation of MDB requests to those classes for which there are no rules. The impact on performance is limited to those requests that target a class for which there is a rule.
User IDs for Data Scoping Evaluation on Windows Platforms (Microsoft SQL Server Databases)
On a local MDB, the user ID that is used to connect to Microsoft SQL Server is used for Data Scoping evaluation. For a remote MDB, a Microsoft SQL Server user ID or Windows user ID defined on the remote computer is used for Data Scoping evaluation. You can enter the user ID in the Login Security dialog that is accessible from the Management Command Center. When the Logon Security dialog for remote connections appears, use one of the following user IDs:
- Microsoft SQL Server user ID
- Windows user ID. Use the User Manager to add this user ID to the Windows group TNDUsers. TNDUsers is created during CA NSM installation and has all the necessary Windows privileges. You need only add a Data Scoping user to this group before that user can sign on to CA NSM using the Windows user ID.
User IDs for Data Scoping Evaluation on Windows Platforms (Ingres Databases)
On a local MDB, the currently logged on Windows user ID is used for Data Scoping evaluation. For a remote MDB, a Windows user ID defined on the remote computer is used for Data Scoping evaluation. You can enter the Windows user ID in the Login Security dialog that is accessible from the Management Command Center. When the Logon Security dialog for remote connections appears, use one of the following user IDs:
- Ingres user ID
- Windows user ID. Use the User Manager to add this user ID to the Windows group TNDUsers. TNDUsers is created during CA NSM installation and has all the necessary Windows privileges. You need only add a Data Scoping user to this group before that user can sign on to CA NSM using the Windows user ID.
Data Scoping Rule Evaluation Using Windows Domain Groups (Microsoft SQL Server Databases)
Microsoft SQL Server supports Windows domain accounts for authentication. Data Scoping rules are enforced for domain groups in which the particular user is a member. You can create rules for multiple domains on one MDB using the DataScope Rule Editor. You can create rules when logged into different domains by using the DataScope Rule Editor locally or remotely. Only the rules created for the domain that is used to authenticate Windows to the MDB are applied. You can then create Data Scoping rules for domain group accounts defined on the domain that is currently logged in. Rules are applied in the following ways:
- If a rule exists for a domain group account and the domain user who is authenticated is a member of that domain group, the rule applies to that user.
- If rules are defined for multiple domain groups and the domain user who is authenticated is a member of those domain groups, then all rules apply.
- If the domain user is a member of a domain group or local group for which a rule exists, or if the domain user is a member of a domain group that is a member of a local group and a rule for the local group exists, the rule applies.
Data Scoping rule evaluation takes place as described for a local computer.
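The group-membership matching described above can be sketched as follows. This is an illustrative model, not the actual DataScope engine; a rule applies when the authenticated domain user belongs to the rule's group directly, or through a domain group nested in a local group:

```python
def applicable_rules(user_groups, nested_in_local, rules):
    """Return all rules that apply to a domain user (illustrative only).

    user_groups: groups the domain user belongs to directly.
    nested_in_local: maps a domain group to local groups that contain it.
    rules: maps a group name to its rule.
    """
    applied = []
    for group, rule in rules.items():
        if group in user_groups:
            applied.append(rule)            # direct membership
        elif any(group in nested_in_local.get(g, ()) for g in user_groups):
            applied.append(rule)            # via a nested local group
    return applied
```

When the user belongs to several matching groups, all of the corresponding rules apply, which mirrors the second bullet above.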
Data Scoping Rule Evaluation Using Windows Domain Groups (Ingres Databases)
Ingres does not support Windows domain accounts for authentication. However, domain support for Data Scoping exists over Ingres in that the domain into which the user is logged on at the client computer is used for Data Scoping rule evaluation. For example, if a user is logged into a domain on a client computer, that domain user is used for Data Scoping rule evaluation. If a user logged into the client computer as DomainA\joe, the user is authenticated to Ingres as joe (not DomainA\joe, because Ingres does not support domain accounts), but DomainA\joe is used for Data Scoping rule evaluation, regardless of whether the Ingres database is local or remote. If the user is logged into a different client computer, such as DomainB\joe, the user is still authenticated to Ingres as joe, but DomainB\joe is used for Data Scoping rule evaluation. Thus, two different client computers connected to the same server are authenticated to Ingres using the same user ID (joe), but two different domain user accounts are used for Data Scoping rule evaluation.

Data Scoping rules are enforced for domain groups in which the particular user is a member. You can create rules for multiple domains on one MDB using the DataScope Rule Editor. You can create rules when logged into different domains by using the DataScope Rule Editor locally or remotely. Only the rules created for the domain that is used to Windows-authenticate to the MDB are applied. You can then create Data Scoping rules for domain group accounts defined on the domain that is currently logged in. Rules are applied in the following ways:
- If a rule exists for a domain group account and the domain user who is authenticated is a member of that domain group, the rule applies to that user.
- If rules are defined for multiple domain groups and the domain user who is authenticated is a member of those domain groups, then all rules apply.
- If the domain user is a member of a domain group or local group for which a rule exists, or if the domain user is a member of a domain group that is a member of a local group and a rule for the local group exists, the rule applies.
Data Scoping rule evaluation takes place as described for a local computer.
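The identity split described above (authenticate to Ingres with the bare user name, evaluate Data Scoping rules with the domain-qualified name) can be sketched as:

```python
def ingres_identities(domain_user: str):
    """Illustrative split of the behavior described above: Ingres
    authenticates the bare user name, while the full domain-qualified
    name is used for Data Scoping rule evaluation."""
    auth_id = domain_user.split("\\")[-1]   # DomainA\joe -> joe
    scoping_id = domain_user                # DomainA\joe stays intact
    return auth_id, scoping_id
```

Two users, DomainA\joe and DomainB\joe, therefore share one Ingres authentication identity but receive different Data Scoping evaluations.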
Data Scoping is activated and classes are created. Note: Only administrators can run this command. If you run this command after you have connected to the MDB, applications already connected will not have Data Scoping rules enforced until the applications are restarted. You also will not be able to create new Data Scoping rules until the applications are restarted.
Data Scoping is deactivated. Note: Only administrators can run this command.
where repository-name is the name of the MDB on which you want to activate Data Scoping. Remote MDBs are not supported; therefore, the database name should be that of the local MDB. Note: Only the root user can run this command. If you run this command after you have connected to the MDB, applications already connected will not have Data Scoping rules enforced until the applications are restarted. You also will not be able to create new Data Scoping rules until the applications are restarted.
where repository-name is the name of the MDB on which you want to deactivate Data Scoping. Data Scoping is deactivated.
2. Recycle WorldView.
Changes made to the database tables are recorded.
Note: Only the root user can run the wvwscpdel command.
The user ID then cannot write any dynamic Ingres or Microsoft SQL Server applications that access the MDB and cannot access the MDB through external tools such as Ingres SQL or Microsoft SQL Query Analyzer. The only way to write an application that accesses the MDB is by using the WorldView Application Programming Interface (API). Because all CA NSM applications access the MDB through the WorldView API, they are assured complete access to the MDB. If Data Scoping is deactivated, this user ID should have full database privileges restored.
2. Create Data Scoping rules to restrict access to users who write applications using the WorldView API, or who use CA NSM tools.
When the rules are saved in the MDB for which access needs to be restricted, Data Scoping enforcement is complete for all CA NSM applications. Applications that are not CA NSM applications are denied access to the MDB because of the precautions you set up when you created the database user ID.
3. Maintain Data Scoping security.
Data Scoping security is an ongoing process. You can update and delete rules. You can remove DataScope classes to completely deactivate Data Scoping. On Windows platforms, when you update Data Scoping rules, they are enforced immediately, except for the conditions noted in Data Scoping in the 2D Map (Windows). On UNIX/Linux platforms, when you update Data Scoping rules, they are enforced immediately.
Each of these methods has different security considerations, such as which ports are required to be open, whether or how they encrypt data, and so forth. The following sections clarify the usage and considerations for each of these methodologies. For additional information regarding port configuration, see "Utilizing and Configuring Ports."
Encryption Levels
As data traverses the network, it is important to understand the encryption methodologies in place for security compliance so that you can be assured that data is protected appropriately.
Encryption Level          Comments
SSL 80-bit (See Note 1)   OpenSSL
SSL (See Note 1)          OpenSSL
48-bit                    DES (See Note 2)
48-bit                    DES (See Note 2)
SSL                       SSA 2.0 (See Note 3)
SSL                       OpenSSL
Note 1: CAICCI comes preconfigured to let you use the strongest encryption possible by downloading the algorithms from the external OpenSSL library. OpenSSL uses two forms of encryption: an asymmetric algorithm to establish a connection, and a symmetric algorithm for the duration of a connection after it has been established. The strongest asymmetric algorithm we recommend using is RSA with a 2048-bit key. The strongest symmetric algorithm we recommend using is AES with a 256-bit key. Note 2: DES encryption is built into the code or product module. Note 3: For more information about configuring CAM to use SSL encryption, see the CAM section in the chapter "Using Ports to Transfer Data."
The cipher suite, which declares the algorithms used for each of these areas, is fully configurable to use any of the combinations available through OpenSSL. In general, we use the strongest ciphers that also provide acceptable performance. The default cipher suites, as delivered, are as follows:
Protocol: SSLv3 or TLSv1
Key exchange: RSA
Authorization: RSA using a 1024-bit key
Encryption: AES with a 256-bit key
MAC algorithm: SHA1
If configured to run anonymously (peers are not authenticated), the defaults are as follows:
Protocol: SSLv3 or TLSv1
Key exchange: ADH
Authorization: NONE
Encryption: AES with a 256-bit key
MAC algorithm: SHA1
OpenSSL
CCISSF uses OpenSSL, an open source facility. For more information, see http://www.openssl.org. Use of OpenSSL provides standards-based encryption and authentication for the sender and receiver. In OpenSSL, authentication is achieved through certificates. Certificates contain data about the local host that the remote host can use to determine whether the local host is authentic. This mechanism ensures that the communicating partner is who the partner claims to be. Secure Sockets Layer functionality is provided by dynamically loading available OpenSSL libraries. These libraries must be available on all machines where CAICCI is installed. The minimum OpenSSL version for use with CCISSF is Version 0.9.7. It is your responsibility to obtain a version of OpenSSL that is consistent with your needs and planned deployment. For your convenience, a version of OpenSSL is installed with CAICCI.
Enable CCISSF
CCISSF is disabled by default to provide out-of-the-box compatibility with previous versions of CAICCI. Also, not all users require this enhanced level of security, which comes with some performance cost. To enable CCISSF, do one (or both) of the following: Set the following environment variable in the system environment:
CAI_CCI_SECURE=[YES|NO]
YES
Specifies that all connections, unless otherwise specified in the remote daemon configuration file, will have CCISSF enabled.
NO
Specifies that the remote daemon will not request SSL for any connections unless they are overridden in the configuration file. However, all incoming secure connection requests will be accepted using a secure connection. The default is NO.
Note: Regardless of the environment variable setting, communications to a remote CAICCI will not use CCISSF unless an entry for that remote node is present in the remote daemon configuration file on the local system.
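For example, to request CCISSF for all outgoing connections from this host (the variable name is the one described above; remote nodes still need entries in the remote daemon configuration file):

```shell
# Request SSL for all outgoing CAICCI connections from this system.
export CAI_CCI_SECURE=YES
echo "CAI_CCI_SECURE=$CAI_CCI_SECURE"
```

On Windows, the equivalent is a system environment variable set through the System Control Panel or the setx command.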
Override the environment variable setting in the ccirmtd.rc file by setting the following parameter to override the default behavior:
SECURE=[YES|NO]
YES
Specifies that a connection attempt to the corresponding remote CAICCI must be made in a secure manner.
NO
Specifies that an outgoing connection attempt will be made in a non-secure manner unless the corresponding remote CAICCI requires it.
CCISSF will always connect with the highest possible level of security when communicating with another CCISSF-capable CAICCI. The following table describes the behavior:
Source CAICCI           Target CAICCI           Connection Status
Effective SECURE Value  Effective SECURE Value
Yes                     Yes                     Secure
Yes                     No                      Secure
No                      Yes                     Secure
No                      No                      Non-secure
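The behavior described above reduces to a simple rule: a connection is made securely whenever either side's effective SECURE value is Yes. A minimal sketch (the helper name is hypothetical; the real negotiation happens inside CAICCI):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper modeling how CCISSF resolves the two peers'
 * effective SECURE values. The connection is secure whenever either
 * side's effective value is Yes; only No/No yields a non-secure
 * connection. */
const char *ccissf_connection_mode(bool source_secure, bool target_secure)
{
    return (source_secure || target_secure) ? "Secure" : "Non-secure";
}
```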
Configuring CCISSF
CCISSF depends on OpenSSL for effective communication. To use CCISSF, you must configure it to use OpenSSL. If OpenSSL is available on the system when CAICCI initializes, CAICCI uses the OpenSSL libraries to provide service for any secure connections. If OpenSSL is not available, CAICCI follows the behavior defined in the following table:
Effective SECURE Values of All Connections: No (default is No and no remote configuration file entries with SECURE=YES)
CAICCI Behavior if OpenSSL Is Not Available: A warning message is written to the Event Log or syslog indicating that OpenSSL is not present at the time of initialization. All inbound connections are denied if a secure connection request is made. All outbound connections are made as non-secure requests.

Effective SECURE Values of All Connections: Yes (default is Yes or at least one remote configuration file entry with SECURE=YES)
CAICCI Behavior if OpenSSL Is Not Available: An error message is issued to the Event Log or syslog indicating that the required OpenSSL component is not present and that only non-secure connections will be made. CAICCI initializes, but only connections that are requested to be non-secure are made. Any connection for which the effective value of SECURE is Yes is disabled.

Note: SSL connections are currently supported only between CAICCI remote daemons. Communication between hosts that use the QUES layer (transport daemon) cannot use SSL. The QUES implementation is typically used in Windows environments. Users who want to use CCISSF must migrate to the remote daemon implementation.
Default Certificate
To facilitate installation, CCISSF supplies a default certificate to allow out-of-the-box operation. However, use of the default certificate cannot ensure any significant level of authentication, since all default certificates are identical. For true authentication, we strongly recommend that you use customized PEM-format certificates in accordance with site standards, and replace the default certificate in the location discussed in this topic. The default certificate has the following properties:
Common Name is Default CCI Certificate.
Issuer Common Name is Default Root Certificate.
Serial number is 0xC.
The certificate becomes invalid after January 2, 2033.
The default certificate's private key is encrypted with the passphrase CACCI. When CCISSF is installed, a default certificate called cert.pem is installed on the system. Unless the default configuration is altered (see ccisslcfg Utility), CCISSF uses this default certificate. The default certificate can be replaced with a user-provided certificate of the same name, or the ccisslcfg utility can be used to configure CCISSF to use a user-provided certificate with a different name.
Whether to use OpenSSL's default root certificate authority locations. CCISSF uses these locations in addition to any locations you specify in the following items:
Any number of root certificates
Any number of directories containing root certificates. When specifying a directory, it is assumed that the files inside are all named using the hash value of the issuer's subject name. OpenSSL cannot correctly look up these files if they are not named with this convention. See the OpenSSL documentation for more information.
The location of any certificate revocation lists (CRLs), which can be any number of files or directories. As stated before, when specifying a directory, we assume the files inside are all named using the hash value of the issuer's subject name.
After ccisslcfg prompts you for all these settings, it writes them in encrypted form to the file %CAILOCL0000%\ccissl.cfg, overwriting any settings you set previously. Because this configuration file is encrypted, only the ccisslcfg utility can change these settings. Although the contents of this file are encrypted, we recommend setting the permissions so that only administrators and CCISSF have access to it. The presence of this file overrides CCISSF's default behavior with respect to where it looks for certificates. You do not need to use this configuration utility if you plan to use the default CCISSF certificate locations and provide the password_cb() callback in the cauccissl\libccissl library.
-in
Specifies a file to read input from. If omitted, ccicrypt uses standard input.
-out
Specifies a file to direct output to. If omitted, ccicrypt uses standard output.
-p
Specifies a password for ccicrypt to use. If omitted, ccicrypt uses a default internal password.
-cipher
Specifies the type of encryption algorithm to use. Enter ccicrypt -help for a list of choices, and refer to http://www.openssl.org for descriptions. If an algorithm is not specified, DES in cipher feedback mode (des-cfb) is used.
-dv
Specifies a data vector to ccicrypt. In addition to a password, some encryption algorithm modes (like cipher feedback mode) require additional random data to further randomize the output. The data vector is any string of characters.
-encrypt or -decrypt
Specify whether ccicrypt should encrypt or decrypt data. There is no default; one of these options must always be specified.
-help
Lists the available types of encryption algorithms.
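Putting the options together, a typical pair of invocations might look like the following sketch (the file names, password, and data vector are illustrative; the flags are the ones described above):

```shell
# Encrypt a file with the default des-cfb cipher, supplying a password
# and a data vector (extra randomizing data); all names are illustrative.
ccicrypt -encrypt -in secrets.txt -out secrets.enc -p MyPassword -dv randomseed

# Decrypt it again with the same password and data vector.
ccicrypt -decrypt -in secrets.enc -out secrets.txt -p MyPassword -dv randomseed
```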
Using a Passphrase
A passphrase is used to protect elements of the certificate while it resides on the local file system. CAICCI requires access to your system passphrase to function properly. By default, CCISSF provides SSL with the passphrase used to encrypt the default certificate. To use a different passphrase, several options exist. First, the password_cb function can be provided in a shared library (see Programming a Customized SSL Environment (see page 88)). Additionally, you can use the ccisslcfg utility to provide the passphrase itself or the absolute path of a file that contains the desired passphrase.
Email Address
Not Valid Before Date
Not Valid After Date
Issuer Common Name
Issuer Locality
Issuer State or Province
Issuer Country
Issuer Organization
Issuer Organizational Name
Issuer Email Address

Additionally, the following popular certificate extensions can be retrieved:
Basic Constraints
Key Usage
Subject Alternate Name
Issuer Alternate Name
Additional fields can be defined in the provided CAICCI header file but are not supported by CAICCI at this time.
Default Functions
Along with the default certificates, CCISSF also provides two default functions, password_cb and verifyCert, which supply the private key's password and authenticate the remote host, respectively. To facilitate a customized environment, we provide an API. This interface acts as a convenient way to access the underlying OpenSSL environment.
buf
Specifies the SSL-provided buffer of length num that points to the null-terminated password string upon exit of the function.
num
Specifies the length of the buffer pointed to by buf (includes space for the terminating character).
rwflag
Specifies the flag that indicates whether the function was called for encryption (rwflag is nonzero) or decryption (rwflag = 0).
userdata
Reserved for future use (always NULL).
This function returns the length of the password string pointed to by buf.
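A minimal custom password_cb might look like the following sketch. The signature follows the parameter descriptions above; the passphrase value and the way it is obtained are illustrative (a real implementation would fetch it from a protected source):

```c
#include <assert.h>
#include <string.h>

/* Sketch of a custom password_cb. SSL calls it with a buffer of size num;
 * on exit, buf holds the null-terminated passphrase and the function
 * returns its length (0 signals failure). */
int password_cb(char *buf, int num, int rwflag, void *userdata)
{
    const char *passphrase = "MySitePassphrase"; /* illustrative value */
    int len = (int)strlen(passphrase);

    (void)rwflag;   /* nonzero for encryption, 0 for decryption */
    (void)userdata; /* reserved for future use; always NULL */

    if (len + 1 > num)  /* buffer too small, counting the terminator */
        return 0;
    strcpy(buf, passphrase);
    return len;
}
```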
User-Exported Symbols
password_cb
Function to supply the private key passphrase.
verifyCert
Function to check the authenticity of a remote certificate.
psCACE
Specifies a pointer to the user structure (see CCISSL_CLIENT_Auth_Callback_Env following).
CCISSL_CERT_ID
Specifies an enumerated type indicating which certificate (local or remote) to look at (see the header file at the end).
CCISSL_CERT_DATA_ID
Specifies an enumerated type indicating which field of the certificate to return (see the header file at the end).
cert_data
Specifies a pointer to a char**. This holds the requested data upon successful return. The user need not malloc or free this space; CAICCI handles all memory management.
cert_data_len
Specifies a pointer to an int*. This contains the length of cert_data upon successful return.
This function returns -1 on error, or if the requested data does not exist. On success it returns the index into the array of data where the requested information exists.
psCACE
Specifies a pointer to the user structure (see CCISSL_CLIENT_Auth_Callback_Env following).
CCISSL_CERT_ID
Specifies an enumerated type indicating which certificate (local or remote) to look at (see the header file at the end).
CCISSL_CERT_DATA_ID*
Specifies a pointer to a CCISSL_CERT_DATA_ID that contains the data ID of the data pointed to by cert_data upon return of the function.
cert_data
Specifies a pointer to a char**. This holds the next piece of data in the array. The user need not malloc or free this space; CAICCI handles all memory management.
cert_data_len
Specifies a pointer to an int*. This contains the length of cert_data upon successful return.
This function returns -1 on error, and 0 if the data returned is the last in the array. Otherwise, the index of the next piece of data is returned.
psCACE
Specifies a pointer to the user structure (see CCISSL_CLIENT_Auth_Callback_Env).
CCISSL_OUTPUT_CERT_INFO_TYPE
Specifies an enumerated type indicating the destination of the output (see the header file described in the next section). CCISSL_OUTPUT_TO_LOG specifies the event log.
format
Specifies a string to be output.
...
Specifies variables to be substituted into format.
CCISSL_CLIENT_Auth_Callback_Env
The CAICCI structure (CCISSL_CLIENT_Auth_Callback_Env) is specified at the end of this document. This structure stores values that are taken from the certificates. The fields of the structure are as follows:
client_callback_handle
Specifies a value reserved for future use and always set to zero.
local_hostname
Specifies a pointer to a string representing the local host name.
local_ipaddr
Specifies a character array representation of the local host's IP address associated with the remote daemon.
local_portno
Specifies the port number of the local side of the SSL connection.
local_CCI_sysid
Specifies a pointer to a string representing the system ID CAICCI has assigned to the local host.
remote_hostname
Specifies a pointer to a string representing the remote host name.
remote_ipaddr
Specifies a character array representation of the remote host's IP address associated with its remote daemon.
remote_portno
Specifies the port number of the remote side of the SSL connection.
remote_CCI_sysid
Specifies a pointer to a string representing the system ID CAICCI has assigned to the remote host.
remote_appname
Specifies a pointer reserved for future use and set to NULL.
remote_taskid
Specifies a pointer reserved for future use and set to NULL.
remote_userid
Specifies a pointer reserved for future use and set to NULL.
local_cert
Specifies a pointer to an array holding information from the local machine's certificate.
local_cert_elem_ct
Specifies the count of how many elements are in the array pointed to by local_cert.
remote_cert
Specifies a pointer to an array holding information from the remote machine's certificate.
remote_cert_elem_ct
Specifies the count of how many elements are in the array pointed to by remote_cert.
local_cert_elem_loc
Specifies the current index into the array pointed to by local_cert.
remote_cert_elem_loc
Specifies the current index into the array pointed to by remote_cert.
get_cert_info
Specifies the pointer to the function that returns data from a certificate based on ID (see Header Information for User Customization).
enum_cert_info
Specifies the pointer to the function that sequentially returns data from a certificate (see Header Information for User Customization).
output_cert_info
Specifies the pointer to the function that allows the user to output a data string (see Header Information for User Customization).
    CCISSL_CERT_ISSUER_COMMON_NAME,
    CCISSL_CERT_ISSUER_LOCALITY,
    CCISSL_CERT_ISSUER_STATE_OR_PROVINCE,
    CCISSL_CERT_ISSUER_COUNTRY,
    CCISSL_CERT_ISSUER_ORG,
    CCISSL_CERT_ISSUER_ORG_UNIT,
    CCISSL_CERT_ISSUER_DN_PRINTABLE,
    CCISSL_CERT_ISSUER_DN_DER,
    CCISSL_CERT_ISSUER_POSTAL_CODE,
    CCISSL_CERT_ISSUER_EMAIL
} CCISSL_CERT_DATA_ID;

typedef enum CCISSL_OUTPUT_CERT_INFO_TYPE_T {
    CCISSL_OUTPUT_TO_STDOUT,
    CCISSL_OUTPUT_TO_LOG
} CCISSL_OUTPUT_CERT_INFO_TYPE;

typedef struct Certificate_Element {
    char *data;
    int length;
    CCISSL_CERT_DATA_ID id;
} certElem;
typedef struct CCISSL_Client_Auth_Callback_Env sCACE;
typedef sCACE* psCACE;

struct CCISSL_Client_Auth_Callback_Env {
    int       client_callback_handle;
    char     *local_hostname;
    int       local_ipaddr;
    int       local_portno;
    char     *local_CCI_sysid;
    char     *remote_hostname;
    int       remote_ipaddr;
    int       remote_portno;
    char     *remote_CCI_sysid;
    char     *remote_appname;
    char     *remote_taskid;
    char     *remote_userid;
    certElem *local_cert;
    int       local_cert_elem_ct;
    certElem *remote_cert;
    int       remote_cert_elem_ct;
    int       local_cert_elem_loc;
    int       remote_cert_elem_loc;
    int       (*get_cert_info)(psCACE, CCISSL_CERT_ID, CCISSL_CERT_DATA_ID,
                               char** cert_data, int* cert_data_len);
    int       (*enum_cert_info)(psCACE, CCISSL_CERT_ID, CCISSL_CERT_DATA_ID*,
                                char** cert_data, int* cert_data_len);
    void      (*output_cert_info)(psCACE, CCISSL_OUTPUT_CERT_INFO_TYPE,
                                  char* format, ...);
};
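The certificate-element lookup that get_cert_info performs can be exercised with a trimmed, self-contained mock. The types below are simplified stand-ins for the real header, and the lookup logic is illustrative, not the CAICCI implementation:

```c
#include <assert.h>
#include <string.h>

/* Trimmed, illustrative stand-ins for the CCISSF header types. */
typedef enum {
    MOCK_CERT_COMMON_NAME,
    MOCK_CERT_ISSUER_COMMON_NAME
} mock_cert_data_id;

typedef struct {
    char *data;
    int length;
    mock_cert_data_id id;
} mock_cert_elem;

/* Mock of get_cert_info's lookup: scan a certificate's element array for
 * the field with the requested data ID. Returns the index on success, or
 * -1 if the field is absent; *cert_data and *cert_data_len receive the
 * field's data and length, as the real function's contract describes. */
int get_cert_info_mock(mock_cert_elem *cert, int elem_ct,
                       mock_cert_data_id id,
                       char **cert_data, int *cert_data_len)
{
    for (int i = 0; i < elem_ct; i++) {
        if (cert[i].id == id) {
            *cert_data = cert[i].data;
            *cert_data_len = cert[i].length;
            return i;
        }
    }
    return -1;
}
```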
Discovery
Discovery is the process by which devices on the network are found, classified, and then placed in the MDB as managed objects. Discovery discovers and classifies devices on Internet Protocol (IP) networks. Classic Discovery also supports two fully integrated non-IP plugins:
Storage Area Network (SAN)--SAN Discovery can be started using the command line or the SAN pages in the Discovery Classic GUI or the Unicenter MCC. It discovers devices that are on a SAN and IP-enabled, as well as access points in the SAN. It automatically creates SAN Business Process Views in the 2D Map.
Internetwork Packet Exchange Protocol (IPX)--IPX Discovery can be started using its own command line (ipxbe) or by using the Discovery Classic GUI. It discovers devices located on an IPX network.
Discovery also determines whether a device provides Web-Based Enterprise Management (WBEM) data, and if so, creates a WBEM object in the device's Unispace. The Agent Technology WorldView Gateway service locates agents running on the network objects. Note: An MDB must exist before you can run Discovery to discover your network devices and populate the MDB.
Discovery
Once defined, you can view, monitor, and manage these objects and their Management Information Base (MIB) through the 2D Map, ObjectView, and the Topology Browser. You can manage the entities they represent using Event Management, Agent Technology, and third-party manager applications. When you install your product, you can decide which type of Discovery you want to use:
Classic Discovery is an on-demand process that lets you decide which subnets you want to discover and when. You can also configure Classic Discovery to run at regular intervals, which can be used as an alternative to Continuous Discovery and ensures that your discovered environment in the MDB is always current. You can start a Classic Discovery from the Discovery Classic GUI, the Unicenter MCC, the Unicenter Browser Interface, or the command line.
Continuous Discovery is event-driven and ongoing. It employs a manager and agents that continuously scan your network in real-time mode for new devices or changes in the IP addressing of existing IP devices. You can configure Continuous Discovery for optimal load balancing between the Discovery agents and the Discovery Manager. If you choose this method of discovery, you must install the Discovery agents and the Discovery Manager.
Common Discovery is a new tool used for discovery across multiple CA products. In CA NSM r11.2, you can use Common Discovery to discover devices on IPv6 networks. WorldView uses the Common Discovery Import service to poll Common Discovery and populate the MDB with IPv6 network entities. If you choose this method of discovery, you must install the WorldView Manager, MCC, and WorldView Provider. You must also install the Common Discovery component on all servers on which you want to discover IPv6 entities.
Note: We do not recommend running Continuous Discovery and Classic Discovery concurrently on your network. Timing issues could result in the duplication of objects and an unnecessarily heavy load on the network and the MDB. You can, however, run a combination of Classic and Continuous Discovery. For more information, see How You Can Combine Running Classic and Continuous Discovery (see page 101).
To avoid discovering duplicate devices in Classic Discovery, set the dscvrbe -j option to IP to use the IP address if the DNS name cannot be found. Using IP addresses to name discovered devices ensures that objects are named using the same method and that no duplicates result. Set this option only if DNS is not enabled in your environment. Note: If you are using the Discovery Classic GUI to run Discovery, select "Use IP address instead of Sysname." When you run a full subnet discovery using Classic Discovery, stop the Continuous Discovery services. Continuous Discovery discovers only subnets on which an agent is deployed. To discover non-agent subnets so that you can automatically monitor them using Continuous Discovery, run the Classic Discovery dscvrbe command to discover a router and all of the subnets it supports, or write a script using the dscvrbe -7 option to discover all of the gateways on the desired subnets.
Discovery Timestamp
Discovery maintains a "last seen on the network" timestamp for each device in your network. This timestamp can help you determine if a device should not be monitored anymore and if it can be removed from the system. Using the information in this timestamp, you can define usage policies. For example, you may conclude that a device that has not been seen by the Discovery process in more than 45 days is no longer valid. This information is stored in the MDB so that you can run queries against this value for inventory reports. The "last seen on the network" timestamp updates a property that contains the length of time that a device was not seen by the Discovery process. This time can be determined by when a device was last successfully accessed or when network traffic was last seen from the device.
Whenever Discovery runs a ping sweep and successfully addresses an object, it saves this information. The Classic Discovery process updates the MDB immediately, and the Continuous Discovery process holds this information in the Discovery agent caches until the Discovery Manager requests this information. You can configure the Discovery Manager to poll the agents for this information on a regular basis.
You can also use the dscvrbe command to limit Discovery to specific subnets, a range of IP addresses, or a range of subnets, or to exclude subnets from the Discovery process. You define the subnets in a text file and then specify this text file using the -8 parameter of the dscvrbe command. This functionality is available only when you use the dscvrbe command.
If you are discovering routers, we recommend that you use higher SNMP timeout values. You can specify the timeout value in any of the following places, depending on how you are running Discovery:
The command line, by specifying the -W parameter on the dscvrbe command
The Management Command Center, on the Discovery or Advanced Discovery Wizard Timeouts page
The Discovery Classic GUI, in the Timeouts box on the Discovery page of the Discovery Setup dialog
SNMP
Uses a certain port and community string for classification. You can customize your rule files by adding a new general method to the methods.xml file or by changing the existing SNMPGeneric string. All pattern matches for the results of SNMP queries are specified in the classifyrule.xml file. Review the classifyrule.xml file for more information about how to classify by evaluating SNMP query results.
Telnet reply pattern match
Attempts to establish a Telnet session, and returns the Telnet login screen if successful. In the classification methods, this screen can then be matched with a default pattern. The Telnet method could also be described as screen scraping of the Telnet login screen. Default classification rules are supplied for all major operating system vendors. In many environments, these login screens are standardized. You can modify the Telnet classification rules by entering your own pattern matches if you have specialized login screens. Telnet methods specify a state machine that usually consists of establishing the connection and then waiting for the time specified in the timeout parameter. After the timeout is reached, the connection can be closed.
UDP/TCP port scanning (socket)
Socket type methods scan ports of a computer to retrieve a port map that can be used to identify what type of device was discovered on the network. The desired port combination can be defined in the classifyrule.xml file (see this file for examples). In the port combination, you can specify whether a port should be found at all. For example, the absence of a Telnet port may signify that the device could be a Windows computer. You can combine this rule with the NetBios port scan (SocketWindows_NetBios method) to describe the port layout of the computer so that the computer can be classified as correctly as possible. You can configure port scans for TCP/IP or UDP. You can specify pattern matches in the classification rule in the classifyrule.xml file if you know the byte pattern.
MAC address patterns
Specifies the first six bytes of a MAC address in the filter of a classification rule.
HTTP response pattern match
Queries a computer using the HTTP protocol and returns the response. The response is matched with a byte pattern in the classifyrule.xml file. Default methods are provided by Discovery.
SMTP
Attempts to establish an SMTP session with a mail server. The SMTP method is very similar to the Telnet and FTP methods. You can customize this method to fit different types of mail servers. The default method supplied by the default Discovery configuration files works for Microsoft Exchange mail servers.
FTP
Attempts to establish an FTP session with the computer and returns the FTP login screen. The FTP method is very similar to the Telnet method.
SNMPGeneric
Uses the common MIB-II sysobjid entries to classify a computer. SNMP must be installed on the computer that is to be discovered. Contains the following parameters:
Port (default: 161)
Community (default: public)
Timeout in milliseconds (default: 2000)
SNMP_AgentOID
Finds Agent Technology common services (if aws_sadmin was configured to respond to SNMP requests). Contains the following parameters:
Port (default: 6665)
Community (default: admin)
Timeout in milliseconds (default: 4000)
SNMP_SysEdgeAgentOID
Finds active CA SystemEDGE agents and evaluates their operating system information. Contains the following parameters:
Port (default: 1691)
Community (default: public)
Timeout in milliseconds (default: 4000)
SNMPSuspect_AP
Finds special wireless access points in an environment. If there are none, remove this method from the configuration file for better performance. All references to a removed method must be deleted from the classifyrule.xml file. Contact Technical Support for help with this type of rule modification. Contains the following parameters:
Port (default: 161)
Community (default: public)
Timeout in milliseconds (default: 2000)
SocketWindows_DS
Scans for the Windows domain server port. Contains the following parameters:
Port (default: 445)
InitDataLength (default: 100)
Timeout in milliseconds (default: 2000)
TCP (default: True)
SocketWindows_NetBios
Scans for the Windows NetBios port. Contains the following parameters:
Port (default: 139)
InitDataLength (default: 100)
Timeout in milliseconds (default: 2000)
TCP (default: True)
SocketUnix_RPC
Scans for the RPC port, which is a common port on Sun Solaris computers. Contains the following parameters:
Port (default: 111)
InitDataLength (default: 100)
Timeout in milliseconds (default: 2000)
TCP (default: True)
SocketSuspect_AP
Scans ports for suspect access points. Contains the following parameters:
Port (default: 192)
InitDataLength (default: 116)
Timeout in milliseconds (default: 2000)
TCP (default: False)
HTTPGeneric
Sends a generic HTTP request to a device. User ID and password are not specified. If the device has a web service that returns a login screen or an error screen, a pattern in that response can be matched with classification rules in classifyrule.xml. Contains the following parameters:
Port (default: 80)
Timeout (default: 1000)
UserID (default: none)
Password (default: none)
HTTPAuthenticate
Sends an HTTP authentication request to a device. Contains the following parameters:
Port (default: none)
Timeout (default: 1000)
TelnetWithSend
Attempts to establish a Telnet connection and sends bogus data. Some specialized devices will not acknowledge Telnet commands without a subsequent send. Contains the following parameter:
Timeout (default: 1000)
TelnetGeneric
Attempts to establish a Telnet connection and returns the Telnet login screen if successful. Contains the following parameters:
Port (default: 23)
Timeout (default: 5000)
FTPGeneric
Attempts to establish an FTP session with the computer and returns the FTP login screen. This method is very similar to the Telnet method. Contains the following parameters:
Port (default: 21)
Timeout (default: 1000)
SMTPGeneric
Attempts to establish an SMTP session with a mail server. This method is very similar to the Telnet and FTP methods. You can customize this method to fit different types of mail servers. The default method supplied by the default Discovery configuration files works for Microsoft Exchange mail servers. Contains the following parameters:
Port (default: 25)
Timeout (default: 1000)
ClassHint
(Used only for Continuous Discovery; should not be modified.) Reuses previously discovered data, and uses some limited SNMP queries in the first discovery phase to find system information such as host names, router flags, and multiple IP addresses. If the ClassHint method is specified, it reuses previously gathered information for classification purposes. Contains the following parameters:
Port (default: none)
Timeout (default: none)
The classifyrule.xml file is located in the discovery_installdir\config folder on the Discovery agent computer. You can define as many classification rules as needed in the classifyrule.xml file.
Examples: Simple SNMP and Subclasses
Here is an example of a simple SNMP rule:
<Device Class="WindowsNT" ClassScheme="Operating System">
  <ClassificationRule Enabled="1" Priority="1">
    <Method Name="SNMPGeneric">
      <Filter Type="RegExp">((SysOID REGEX "1.3.6.1.4.1.311.1.1.3.1")||
        (SysOID REGEX "1.3.6.1.4.1.311.1.1.3.1.1"))&amp;&amp;(SysDescr REGEX "Windows NT Version 4.0")
      </Filter>
    </Method>
  </ClassificationRule>
</Device>
In the previous example, an object is classified as a Windows NT computer if the method SNMPGeneric (which is defined in the methods.xml file) returns with the value 1.3.6.1.4.1.311.1.1.3.1 in the sysobjid field of MIB-2 and the system description field in MIB-2 returns a string that contains "Windows NT Version 4.0". The ClassScheme is a reference for the classification hierarchies available in MDB. For more information, see the MDB schema description for the table ca_class_hierarchy. Here is an example of a rule that contains subclasses:
<Device Class="Unix" ClassScheme="Operating System"> <Relation DeviceName="RISC6000" Type="child"/> <Relation DeviceName="Solaris" Type="child"/> <Relation DeviceName="HPUnix" Type="child"/> <Relation DeviceName="DG_UX" Type="child"/> <Relation DeviceName="Linux" Type="child"/> <Relation DeviceName="NCRUnix" Type="child"/> <Relation DeviceName="UnixWare" Type="child"/> <Relation DeviceName="SCOUnix" Type="child"/> <Relation DeviceName="Silicon" Type="child"/> <Relation DeviceName="SiemenUX" Type="child"/> <Relation DeviceName="FUJIUxp" Type="child"/> <Relation DeviceName="Sequent_Server" Type="child"/> <Relation DeviceName="OpenVMS" Type="child"/> <Relation DeviceName="ICLUnix" Type="child"/> <ClassificationRule Enabled="1" Priority="3"> <Method Name="SocketUnix_RPC"> <Filter Type="RegExp">00</Filter> </Method> </ClassificationRule> </Device>
In the previous example, all child classes that are allowed for a parent class are listed in the classification rule. Listing child classes tells Discovery to execute additional rules and enables Discovery to determine when a final, best-fit rule has been found. This type of rule applies mostly to Continuous Discovery because classification is an ongoing effort. In the MDB, the class hierarchy works in the same way: the rules of many products are evaluated to find the best classification for any common object in the MDB. For more information, see the MDB schema description.
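The filter expressions in these rules combine regular-expression matches with && and || operators. The following Python sketch (an illustration only, not CA's rule engine) shows how the Windows NT filter above could be evaluated against the values the SNMPGeneric method returns:

```python
import re

def matches_rule(sys_oid, sys_descr):
    """Illustrative evaluation of the Windows NT rule above: either
    sysObjectID pattern must match (the ||), and the sysDescr pattern
    must also match (the &&)."""
    oid_ok = (re.search(r"1\.3\.6\.1\.4\.1\.311\.1\.1\.3\.1", sys_oid)
              or re.search(r"1\.3\.6\.1\.4\.1\.311\.1\.1\.3\.1\.1", sys_oid))
    descr_ok = re.search(r"Windows NT Version 4\.0", sys_descr)
    return bool(oid_ok and descr_ok)

# A device reporting this OID and description classifies as WindowsNT.
print(matches_rule("1.3.6.1.4.1.311.1.1.3.1",
                   "Hardware: x86 - Software: Windows NT Version 4.0"))
```

Note that in the REGEX syntax a dot matches any character, so the patterns are escaped here for precision; the actual filter grammar is defined by the classification engine.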
If you are running Discovery from a Windows XP system, the port scan methods (SocketWindows_DS, SocketWindows_NetBios, SocketUnix_RPC, SocketSuspect_AP) take much longer than on other operating systems. You may need to increase the timeout values for all the socket-type methods in the methods.xml file until the device is correctly classified as UNIX or Windows.
The Discovery Manager controls multiple Agents. The Discovery agents discover and classify devices, while the Discovery Manager consolidates the discovered information from the agents and interacts with the MDB through WorldView. The Discovery Manager also distributes the workload of Discovery agents. A Discovery agent's workload is simply the list of subnets the agent monitors. Discovery and subsequent classification is restricted to this list of subnets.
After a device is discovered by a Discovery agent, it is classified. Classification is rule-driven, which facilitates quick additions and updates of classification rules without needing new code modules, for example, libraries. Classified devices are sent to the Discovery Manager, which updates the WorldView repository in the MDB. The Discovery Manager updates the WorldView repository with any information received from Discovery agents, and also registers with the WorldView repository for any notifications regarding the network entities. The Discovery Manager and Discovery agents "handshake" with each other at the start of Continuous Discovery. During this "handshake," the Discovery Manager distributes subnets and their constituent devices already available in the WorldView repository to the agents. If multiple agents exist, the Manager distributes subnets to the agents, that is, performs a "load balance" to ensure that all agents have an optimal load. The Discovery Manager and Discovery agents maintain their device information in caches. They communicate with each other using a messaging service. Continuous Discovery is implemented as services or daemons, which enables real-time discovery. The discovery mechanism employs various methods, such as DHCP, ICMP, SNMP, router ARP cache scans, and network packet sniffing using CTA. Additional discovery methods can be accommodated using the plugin interface, where components can be "plugged" into the framework.
Common Traffic Analyzer (CTA) network sniffing engine--CTA is a shared component of CA's sonar technology and is installed by Continuous Discovery if it is not already available on the system. Every Discovery agent is installed with the CTA plugin enabled. CTA lets the agent "sniff" traffic from devices on the network to determine MAC address/IP address changes, discover new devices, and attempt to classify the devices. You can disable the CTA plugin.
Dynamic Host Configuration Protocol (DHCP) traffic monitoring (either agent or manager based)--By default, the Continuous Discovery Manager listens to DHCP traffic on the network for discovery of new devices or changes in IP address/MAC address pairs. To fully utilize this method, configure the local router to redirect DHCP requests to the Discovery Manager host. This configuration lets the Discovery Manager discover devices using DHCP other than on the local subnet. You can integrate Continuous Discovery with third-party DHCP servers.
Set the Admin Status Property for an Object Using Continuous Discovery
Using Continuous Discovery, you may want to discover devices but set their administrative status to Unmanaged. You do this by setting the DeviceDefaultAdminStatus property for a Discovery Manager. The default setting for this property is Managed.

Note: In Classic Discovery, you can also set this flag using the -24 parameter on the dscvrbe command.

To discover all devices in a subnet and set their administrative status to Unmanaged

1. Open the Management Command Center, choose Class Specification from the left pane drop-down menu, and expand the tree until you see ManagedObject. Expand ManagedObject until you see ContinuousDiscoveryManager.
The ContinuousDiscoveryManager object appears.
2. Right-click ContinuousDiscoveryManager and choose Add Viewer, Instances.
All instances of Discovery Managers appear in the left pane.
3. Right-click a Discovery Manager instance and choose Open Viewer, Properties.
The Properties notebook for the Discovery Manager instance appears.
4. Click the RunTime tab, double-click the DeviceDefaultAdminStatus property, and set the property to 1 (Unmanaged).
The DeviceDefaultAdminStatus property for the Discovery Manager is set to 1 (Unmanaged), and the Admin Status property for all devices that are discovered by agents that this manager monitors is set to 1 (Unmanaged).
Note: You do not need to restart the Discovery Manager for the property change to take effect.
6. If you create a new method, you must also update classifyrule.xml to use the new method in its classification rules for the corresponding classes.

Optionally, you can also edit classifyrule.xml to add the community name as a parameter within a method specification under the classification rules for a particular device. You may want to do this if you want all devices of a particular class to be queried through SNMP using a specific community string, but the remaining classes to be queried using the default set of community strings. To do this, in classifyrule.xml, add the community string as a parameter under the <Method> XML element as follows:
<Params Community="admin"/>
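Because classifyrule.xml is plain XML, this kind of edit can also be scripted. The following sketch uses Python's standard xml.etree library against a minimal rule fragment modeled on the examples in this chapter; the full classifyrule.xml schema may contain additional elements not shown here:

```python
import xml.etree.ElementTree as ET

# Minimal rule fragment modeled on the examples in this chapter
# (hypothetical input; a real classifyrule.xml would be read from disk).
doc = ET.fromstring(
    '<Device Class="WindowsNT" ClassScheme="Operating System">'
    '<ClassificationRule Enabled="1" Priority="1">'
    '<Method Name="SNMPGeneric"/>'
    '</ClassificationRule></Device>'
)

# Insert <Params Community="admin"/> under every <Method> element,
# mirroring the manual edit described in the text.
for method in doc.iter("Method"):
    ET.SubElement(method, "Params", Community="admin")

print(ET.tostring(doc, encoding="unicode"))
```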
Discovery Managers
A Discovery Manager provides the necessary communication between WorldView, the MDB, and the Discovery agents. It performs the following functions:

Cache Management
The Discovery Manager stores in memory a view of all discovered objects. The information in the cache is updated as messages and events from the MDB or Discovery agents are received.

Discovery Agent Management
The Discovery Manager is responsible for discovering the Discovery agents and coordinating the functions that an agent performs.

Central DHCP Discovery Engine
A DHCP Discovery Engine is built into the Discovery Manager because in certain DHCP scenarios, distributed monitoring of DHCP is not feasible. The DHCP Discovery Engine listens for DHCP requests to discover new devices or to reclassify them dynamically.
5. Click the RunTime tab and set the following properties:
DeviceDefaultAdminStatus
Specifies the default value of the admin_status property for newly discovered devices.
SubnetFilter
Specifies a string expression that defines the list of subnets to be discovered.
Note: For more information about subnet filters, see Configure a Discovery Agent to Manage Additional Subnets (see page 123).
6. Click the EventMgmt tab and set the following properties:
Enable_Discovery_Events
Specifies whether all Discovery events (new device events, new subnet events, address change events, and handshake events) are reported to the Event Console. The default is true. If set to false, no Discovery events are sent to the Event Console.
Enable_New_Device_Events
Specifies whether new device events are reported to the Event Console. The default is true.
Enable_New_Subnets_Events
Specifies whether new subnet events are reported to the Event Console. The default is true.
Enable_Address_Change_Events
Specifies whether address change events are reported to the Event Console. The default is true.
Enable_Handshake_Events
Specifies whether handshaking events between the Continuous Discovery Manager and the Continuous Discovery agents that report to it are reported to the Event Console. The default is true.
7. Close the Properties notebook.
The properties are saved for the Discovery Manager.
You can disable sending messages to the Event Console by setting Event Management properties for the Continuous Discovery Manager. For more information, see Set Properties for Continuous Discovery Managers (see page 121).
Discovery Agents
A Discovery agent consists of the following components:
Network sniffing technology (enabled by default)
DHCP request listener (disabled by default)
Ping discovery engine (always enabled)
Classification engine (always enabled)
Note: You can also discover only a subnet using the dscvrbe command and workload balancing. After the subnet is added to the MDB, workload balancing assigns the subnet to an available agent, and the subnet is then monitored by the agent. For more information about the dscvrbe command, see the online CA Reference.

To manually add subnets to a Discovery agent

1. In the Management Command Center, right-click the agent in the left pane tree, and choose Add Viewer, Properties.
The Properties notebook for the agent appears.
2. Click the RunTime tab.
The RunTime page appears.
3. In the SubnetToManage field, add the additional subnets you want the agent to monitor. Separate subnets with commas. You can use wildcards to specify subnets. You can also define a range of subnets by separating the range with a hyphen (-). Only one range of IP addresses per subnet is permitted.
Additional subnets to manage are defined.
4. Close the Properties notebook for the Discovery agent, right-click the Discovery Manager in the left pane, and choose Add Viewer, Properties from the context menu.
The Properties notebook appears in the right pane.
5. Click the WLB tab, set the ENABLE_WLB property to true, and close the Properties notebook for the Discovery Manager.
Workload balancing is enabled.
6. Stop and restart the Discovery agent and the Discovery Manager.
The additional subnets are now discovered and monitored.
Example: Valid Subnet Filters

Valid subnet filters are as follows:

xxx.xxx.xxx.xxx, where xxx is a valid number between 1 and 254.

*.*.*.* specifies that any subnet should be monitored by the agent.

xxx.xxx.xxx.* specifies that all subnets of xxx.xxx.xxx should be monitored by the agent.

xxx.xxx.xxx.xxx - xxx.xxx.xxx.yyy specifies a range of IP addresses in a subnet that the agent should monitor.

To specify that a range of IP addresses for three subnets should be monitored, use an entry similar to the following:

172.16.133.0 - 172.16.133.128, 172.16.134.1 - 172.16.134.128, 172.16.135.1 - 172.16.135.128
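The filter grammar above (exact addresses, per-octet * wildcards, and one comma-separated range per subnet) can be sketched in Python. This is an illustration of the matching semantics as described, not the agent's actual parser, and it uses valid example octets:

```python
import ipaddress

def ip_in_filter(ip, filt):
    """Illustrative subnet-filter match: comma-separated entries, each
    either an 'a.b.c.x - a.b.c.y' range or a dotted pattern in which
    '*' matches any value for that octet."""
    for entry in filt.split(","):
        entry = entry.strip()
        if "-" in entry:
            lo, hi = (ipaddress.ip_address(p.strip())
                      for p in entry.split("-"))
            if lo <= ipaddress.ip_address(ip) <= hi:
                return True
        else:
            octets = entry.split(".")
            if all(pat == "*" or pat == val
                   for pat, val in zip(octets, ip.split("."))):
                return True
    return False

print(ip_in_filter("172.16.133.7", "172.16.133.0 - 172.16.133.128"))  # True
print(ip_in_filter("10.0.0.5", "10.0.0.*"))                           # True
```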
4. Click the DHCP tab and set the following properties:
DHCP_ENABLE
Specifies whether DHCP Discovery is enabled. The default value is false.
DHCP_POLL_INTERVAL
Specifies the interval in milliseconds at which the agent polls the DHCP Discovery component for newly discovered devices.
5. Click the ICMPBroadC tab and set the following property:
ICMP_ENABLED
Specifies whether ICMP Discovery is enabled. The default value is true.
6. Click the SNMPBroadC tab and set the following property:
SNMP_ENABLED
Specifies whether SNMP broadcast Discovery is enabled. The default value is true.
7. Click the STANDARD tab and set the following properties:
ICMPTIMEOUT
Specifies the timeout value in milliseconds when pinging a device and waiting for a response.
ICMPRETRY
Specifies the number of pings sent to each device during Discovery.
SNMPTIMEOUT
Specifies the timeout value in milliseconds when querying an SNMP device and waiting for a response.
SNMPRETRY
Specifies the number of SNMP queries sent to each device during Discovery.
DISCOVERY_INTERVAL
Specifies the interval in hours at which the Discovery engine is polled to receive and update newly discovered devices.
8. Click the RunTime tab and set the following properties:
SubNetToManage, SubNetToManage1, SubNetToManage2, SubNetToManage3, SubNetToManage4
Specifies a list of subnets the agent is to discover. The syntax is the same as for subnet filters, except for IP address ranges.
9. Close the Properties notebook.
The properties are saved for the Discovery agent.
Discovery also uses the default ICMP ports on the agent computer to find devices. Other discovery mechanisms such as HTTP and port scans also require that ports be open if run from behind a firewall, although we do not recommend this approach.
Object descriptions and relationships based on the information in the device's SNMP Management Information Base (MIB) are then used by IP Discovery to create a managed object for this network device in the MDB. SNMP MIB agents typically reside in network device firmware and are provided by each device's vendor. Discovery also determines if a device provides Web-Based Enterprise Management (WBEM) data and, if so, creates a WBEM object in the device's Unispace. The Agent Technology WorldView Gateway service locates agents running on the network objects.

Note: An MDB must exist before you can run Discovery to discover your network devices and populate the MDB.
Once these objects are defined, you can view, monitor, and manage them and their Management Information Base (MIB) through the 2D Map, ObjectView, and the Topology Browser. You can manage the entities they represent using Event Management, Manager/Agent Technology, and third-party manager applications.
Discovery Methods
You can use any of the following Classic Discovery methods to discover your network:

ARP Cache
Starts at the gateway address (the address of the nearest router to the computer running Discovery) for the current subnet and uses the ARP (Address Resolution Protocol) cache of that device to determine information about the devices. The ARP cache contains the IP-to-MAC (physical network) address mappings. Discovery retrieves the gateway address from the computer on which it is running and gets the IP list from the ARP cache on that router. It then discovers the subnets nearest that router and, for each subnet it discovers, queries its gateway, repeating the process. For each device found in the ARP cache, an SNMP request is initiated. If the device does not respond, it is assumed to be a non-SNMP device, only the IP address is retrieved, and the object is created as an Unclassified_TCP object.

Ping Sweep
Pings all of the devices on the network based on the subnet mask, finds IP devices, and then retrieves SNMP information. If no SNMP information is retrieved, only the IP address is retrieved, and the object is created as an Unclassified_TCP device. This is the slowest but most thorough method.

Fast ARP
Similar to ARP Cache, Fast ARP saves time by checking only the ARP cache of routers. Fast ARP is the best method for updating the MDB when you do not want to use the more intensive searches provided by Ping Sweep and ARP Cache. This is the fastest way to discover your network.

DNS Search
Limits the discovery of devices to those that are defined in the domain name server (DNS). The IP address of each of these devices is combined with the defined subnet mask to determine whether to discover the device. (In contrast, the Ping Sweep option tries to discover all active devices numerically, without regard to their definition in the DNS.)
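The address enumeration behind a Ping Sweep can be illustrated with Python's standard ipaddress module: every usable host address implied by the subnet and mask is a ping candidate. This is a sketch of the enumeration only; the actual engine also applies per-address timeouts and retries (the ICMPTIMEOUT and ICMPRETRY properties):

```python
import ipaddress

# Illustrative enumeration of ping candidates for a small example subnet
# (192.168.1.0/28 is a hypothetical value, not from the guide).
subnet = ipaddress.ip_network("192.168.1.0/28")
candidates = [str(h) for h in subnet.hosts()]  # excludes network/broadcast

print(len(candidates))   # 14 usable host addresses in a /28
print(candidates[0])     # 192.168.1.1
```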
Each Discovery method has advantages and disadvantages. The Ping Sweep method provides more comprehensive quantitative information, in the form of the number of devices, because every device on the network is pinged. Even devices not recognized by the router, which may not be discovered through the ARP Cache method, can be discovered using Ping Sweep. On the other hand, ARP Cache provides the MAC and IP address information on all the devices that are found in the ARP cache of the router. Ping Sweep, however, generates additional network traffic and is thus more time consuming than ARP Cache and Fast ARP. Sometimes, to discover every device in the network, a combination of Ping Sweep and ARP Cache is required. We recommend that when you first install your product, you run a Ping Sweep Discovery so that a comprehensive search of your network is done. Periodically, it is a good idea to run an ARP Cache Discovery to check your network for devices added after the initial Discovery was done.
If IPX Discovery finds a NetWare server already in the CA MDB, it ignores it and moves on to the next server with no interruption. If an existing server is found with new interfaces installed that were not previously discovered, the new interfaces are added to the CA MDB. IPX Discovery can run concurrently with Auto Discovery, which uses SNMP and TCP/IP protocols. When Auto Discovery and IPX Discovery find two or more objects with matching MAC addresses and different interface types, a Multi_Protocol_Host object can be created from them using the utility multi_if. Use multi_if after running Discovery and IPX Discovery to create relationships between two servers with different protocols that share the same MAC address.
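The matching that multi_if relies on, grouping discovered objects that share a MAC address but use different interface types, can be sketched as follows. This is an illustration of the matching criterion only, not the multi_if utility itself, and the sample objects are hypothetical:

```python
from collections import defaultdict

# Hypothetical discovered objects: one server seen over both TCP/IP
# (Auto Discovery) and IPX (IPX Discovery), plus an ordinary workstation.
discovered = [
    {"name": "srv01",     "mac": "00:11:22:33:44:55", "iftype": "TCP/IP"},
    {"name": "srv01_ipx", "mac": "00:11:22:33:44:55", "iftype": "IPX"},
    {"name": "ws02",      "mac": "66:77:88:99:aa:bb", "iftype": "TCP/IP"},
]

# Group by MAC address; a MAC seen with more than one interface type is
# a candidate for a Multi_Protocol_Host object.
by_mac = defaultdict(list)
for obj in discovered:
    by_mac[obj["mac"]].append(obj)

candidates = [mac for mac, objs in by_mac.items()
              if len({o["iftype"] for o in objs}) > 1]
print(candidates)   # ['00:11:22:33:44:55']
```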
After the initial SAN Discovery is run, you can rerun SAN Discovery manually using any of the following methods:

Discover SAN Devices Only
Executes an IP Discovery on the subnets you specify; only SAN devices are added to the MDB. Once the device discovery is complete, SAN links are determined. SAN Discovery uses the newly discovered SAN objects, as well as any already existing in the MDB, to determine the SAN configurations within the subnets that were searched. The SAN configurations can be composed of IP and non-IP (SCSI) enabled devices.

Typical IP Discovery
Executes an IP Discovery on the subnets you specify. The SAN devices are discovered and identified during the Discovery process.

No IP Discovery - Refresh SAN Links Only
Re-determines the links of previously discovered SAN components in the subnets you specify. IP Discovery is bypassed, and SAN Discovery uses only those objects already present in the MDB to determine the SAN configurations in the specified subnets.
For example, 400 times 3 times 255 times 255 equals 78,030,000 milliseconds.
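The arithmetic above can be checked directly. The interpretation, a 400 ms timeout, 3 retries, and 255 × 255 candidate addresses, matching the ICMPTIMEOUT and ICMPRETRY properties described in this chapter, is an assumption about what the factors represent:

```python
# Worst case: every address times out on every retry.
icmp_timeout_ms = 400      # ICMPTIMEOUT (assumed meaning of "400")
icmp_retry = 3             # ICMPRETRY (assumed meaning of "3")
addresses = 255 * 255      # candidate addresses in a Class B-sized range

total_ms = icmp_timeout_ms * icmp_retry * addresses
print(total_ms)              # 78030000 ms, matching the example
print(total_ms / 3_600_000)  # roughly 21.7 hours
```

The size of this number is why the guide recommends tuning timeouts and retries before sweeping large address ranges.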
To use the IP address, set one of the following options:
On the dscvrbe command, use the -J parameter, which uses the Internet Protocol (IP) address as the object's name instead of the MIB-II sysName.
On the Options page in the Advanced Discovery tool in the Management Command Center, or on the Discovery page in the Classic Discovery GUI, select the Use IP Address Instead of sysName option.
If no parameter or option is set, the sysName (from SNMP) is used. If the sysName is not available, the IP address is used as the object's name. The names of devices discovered are obtained using the Use Domain Name Server/Host File option, which is enabled by default. This option lets Discovery call the socket function gethostbyaddr() to resolve the device name. On Windows, this function checks the local host file and then queries the DNS server, WINS, NetBIOS, and so on (depending on Windows network properties). If the device's IP address has a DNS name, the DNS name becomes the object's name. If the device's IP address does not have a DNS name, if the Use Domain Name Server/Host File option is disabled, or if the -J flag specifies IP, then the IP address is used to name the object. Otherwise, Discovery checks to see if the device is SNMP agent-enabled. If the device is SNMP agent-enabled, the MIB-II SYSNAME value is used for the object's name. If the device is not SNMP agent-enabled, the IP address is used for the object's name. Note: If you are combining Classic and Continuous Discovery, see How You Can Combine Running Classic and Continuous Discovery (see page 101).
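The naming precedence described above can be sketched as a decision function. The exact fallback order when DNS resolution fails is stated ambiguously in the text; this sketch assumes the SNMP sysName fallback described in the opening paragraph, and the `resolve` and `snmp_sysname` callables stand in for the gethostbyaddr() lookup and the MIB-II sysName query:

```python
def object_name(ip, use_ip_flag, use_dns_option, resolve, snmp_sysname):
    """Illustrative naming order (assumed reading of the text):
    -J flag wins; then DNS name if the option is enabled and the
    address resolves; then MIB-II sysName if SNMP-enabled; else IP."""
    if use_ip_flag:                 # -J parameter / UI option set
        return ip
    if use_dns_option:
        dns_name = resolve(ip)      # gethostbyaddr() stand-in
        if dns_name:
            return dns_name
    sysname = snmp_sysname(ip)      # MIB-II sysName, if SNMP-enabled
    return sysname if sysname else ip

# Hypothetical device that resolves in DNS:
name = object_name("10.1.2.3", False, True,
                   resolve=lambda ip: "host01.example.com",
                   snmp_sysname=lambda ip: None)
print(name)   # host01.example.com
```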
Common Discovery
CA Common Discovery is a subsystem that discovers and classifies all entities in your Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) network. It discovers the relationships between these entities and effectively records the network's topology. CA Common Discovery includes the following components:

Discovery User Interface
Provides the services to support an administration user interface thin client. The Apache Tomcat web service must already be installed on the computer. The installation prompts for the Tomcat installation path, discovery server name, and port number. You can install multiple Discovery UI components that point to a single Common Discovery server component.

Discovery Server
Provides a central point for storing and querying Discovery data, options, policies, and log data. The server includes a single instance of the discovery agent. You must install at least one discovery server in a CA Common Discovery installation. Large environments can install multiple discovery servers as needed, but multiple discovery servers are not required. Several CA products can share the same discovery server. The installation prompts for the discovery server port number. Installation includes the following discovery server subcomponents:
Request Manager
Database (DB) Manager
Log Manager
Enterprise Common Services Installation
Note: It is not necessary to have multiple Discovery Servers. You can share a single Discovery Server across multiple CA products.
Discovery Agent
Provides discovery data gathering. CA Common Discovery installs at least one discovery agent with a discovery server in a CA Common Discovery installation. Multiple agents can be installed at strategic locations on the network to gather data and communicate it back to the discovery server. After installation, the agent service starts automatically and registers itself with the discovery server. The agent registration process adds the agent to the discovery server's agent list, and the server returns a set of default agent options. The installation prompts you for the discovery server name. Installation includes the following discovery agent subcomponents:
Request Agent
Discovery Engines
Enterprise Common Services Installation
Note: For any Common Discovery deployment, there must be at least two installed Discovery Request Clients - one that is specific to the consumer application, and the other that consists of the Common Discovery Java Servlets for supporting its web clients.
Note: If you route the log data to a remote machine, ensure that the remote machine has the Enterprise Common Services (ECS) installed and running.

Trusted Servers
Configures a list of CA Common Discovery servers. You must configure the discovery server options so that both servers are listed in each other's Trusted Servers list to share information.

Subnet Hierarchy
Configures the subnet hierarchy. The IPv6 protocol gives network administrators the ability to define a subnet hierarchy in their IPv6 networks. The following parameters can be used to define a subnet hierarchy:

Global Routing Prefix Length
Determines the number of bits in the IPv6 address that precede the subnetID.

Bits Per Level
Associates bits in the subnetID with subnet levels in an IPv6 subnet hierarchy.

Note: The bits are left justified. The first subnet level comprises the leftmost n bits of the subnetID. There is a corresponding filter in scan policies that lets you use these bits to filter scan requests. The subnet hierarchy configuration can be adjusted only if scan policies do not use it. The policies that leverage the current subnet hierarchy configuration are listed on the right-hand side. You can use the Scan tab to review and edit the subnet hierarchy filters individually. You can click Disable Filters to remove the subnet hierarchy filters for all listed policies.

To set global scan options, the following information is required:
Global SNMP Community Names
Controls the SNMP community names list globally for your discovery server. You can avoid adding community names when you define a scan policy by adding enterprise-wide SNMP community names to the list. For better performance, keep the community names list small; otherwise, the discovery agent can incur overhead when attempting to connect to SNMP-enabled devices.
Note: Global exclusion criteria and SNMP community name values are appended to the scan policy at the time it is run.
Global Exclude IP Addresses
Sets global exclusion criteria for all scans.
More information: Set Discovery Server Options (see page 143)
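The subnet hierarchy parameters described above (Global Routing Prefix Length and left-justified Bits Per Level within the subnetID) can be sketched as follows. This assumes the standard IPv6 layout in which the subnetID occupies the bits between the global routing prefix and the 64-bit interface ID; the sample address and prefix length are hypothetical:

```python
import ipaddress

def subnet_levels(address, routing_prefix_len, bits_per_level):
    """Illustrative extraction of subnet hierarchy levels: the subnetID
    is the (64 - routing_prefix_len) bits above the interface ID, and
    levels are carved off it left to right, bits_per_level at a time."""
    addr_int = int(ipaddress.IPv6Address(address))
    subnet_id_len = 64 - routing_prefix_len
    subnet_id = (addr_int >> 64) & ((1 << subnet_id_len) - 1)
    levels = []
    remaining = subnet_id_len
    while remaining >= bits_per_level:
        remaining -= bits_per_level
        levels.append((subnet_id >> remaining) & ((1 << bits_per_level) - 1))
    return levels

# /48 routing prefix leaves a 16-bit subnetID (0xab40 here); 4 bits per
# level yields four hierarchy levels, leftmost first.
print(subnet_levels("2001:db8:0:ab40::1", 48, 4))   # [10, 11, 4, 0]
```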
WorldView Components
WorldView provides an extensive set of tools that let you customize any view of your enterprise according to its logical structure, from 2D maps for iconic views to a selection of browsers for textual views. Using WorldView, you can perform the following tasks:
See your entire network graphically represented on the 2D Map, grouped into networks, subnets, and segments based on their logical relationships.
Define new classes using the Class Editor (Management Command Center) or Class Wizard (WorldView Classic), allowing for characterization and modeling of practically anything.
Create customized Business Process Views (static and dynamic) of specific processes based on different business needs, resource features, roles, geographical locations, organizational structures, and applications.
Import and export objects to and from WorldView using the Repository Import Export utility (trix).
Set policies for a managed object's severity using alarmsets.
View relationships among objects using the Association Browser.
View links between objects using the Link Browser.
Travel back in time to view historical information about your network using the Management Command Center Historian view.
Determine the relative importance of an object in your network by associating a weight with each object.
View MIB data using ObjectView.
Import IPv6 network devices discovered by Common Discovery using the Common Discovery Import service.
Discover, view, and manage certain Unicenter components, such as the WorldView Severity Propagation Service, the Distributed State Machine (DSM), and Enterprise Management, without installing CA NSM on the computer where these components are installed.
Managed Objects
Managed objects represent entities stored in the MDB that can be monitored and controlled using the Enterprise Management applications. These may include hardware, software applications, databases, and various business processes. Managed objects have two distinct characteristics:
They represent entities that have vendor-supplied agents or a product agent.
They are derived from an existing class in the MDB that is itself derived from the ManagedObject class.

Note: Only objects instantiated from the ManagedObject class with the class property map_visible set to True are visible in the 2D Map.

System managed objects are different from ordinary objects. You can use a managed object to monitor and control the IT infrastructure entity that it represents. A managed object can represent anything: hardware, software, a Business Process View, and so forth. Managed objects have the following characteristics:

Object Properties
Relate to the state of an object. An object usually has a set of possible states determined by its properties. A property, in turn, may be either an attribute or a relationship. The terms property and method take on special meaning when used in the context of either a class or an instance. Therefore, we qualify the discussion when necessary by referring to class-level and instance-level properties and methods.

Object Methods
Determine the kind of behavior exhibited by the object. For example, an object modeling a satellite might have a method that calculates the satellite's position for display on a monitor.

Object Attributes
Specifies a type of property containing literal values. An object representing a PC would probably have a value for the number of printer ports available.
Object Relationships
Specifies a type of property denoting that a class or object relates in some way to another class or object. For example, a model object may have vendor information, which forms a relationship to a vendor object.

Topology
Presents a set of relationships among managed objects. Using a broad definition of topology simplifies the task of modeling object topology. Topology represents the set of relationships between objects. The simplicity of this definition allows for more flexible interpretation, wider functionality, and more powerful applications.
2D Map
The 2D Map is a two-dimensional, geographical representation of the logical structure of your enterprise. The map simplifies managing your eBusiness resources with tools you can use to customize visual representations of your business enterprise.

To access the 2D Map, select Topology from the list box of the left pane of the Unicenter MCC. Expand the Managed Object Root tree and select an object from the list. Select 2D Map from the list box on the right pane. The 2D Map appears in the right pane.

The 2D Map has a Toolbox that lets you create new objects, copy objects, move objects, delete objects, add links to other objects, and design custom views.

To view various aspects of your network topology, select the managed objects you would like to view from the Topology pane, and then select 2D Map from the right pane of the Unicenter MCC. To view various business processes, select the Business Process Views you would like to display from the Business Process Views pane, and then select 2D Map from the right pane of the Unicenter MCC.

The following map features expand your 2D Map views and display properties of managed objects:
To open an object for a view of its children, double-click the object. The 2D Map flies into the object, expanding as it navigates.
To display all the instance-level properties of an object, right-click the object, select Open Viewer from the context menu to open a submenu, and click Properties. The Property Notebook opens, displaying properties for the selected object.
To display a cross-hair containing the name, label, IP address, and severity status, hold the cursor over the object.
Billboards
Billboard objects let you keep up-to-date information about critical (not propagated) objects in view at all times. You can take a quick look at the billboard to see if any of the children of this container are critical. Billboard objects are real objects, so you can enter the billboard to solve the problem; to get a closer look at a critical object, double-click it. Once you create a billboard object, all of the critical children of the billboard's siblings are shown in the billboard. If the critical status of an object changes to normal, that object is removed from the billboard. A status that changes from normal to critical causes the affected object to appear in the billboard automatically.

To create a billboard, click the Toolbox icon, choose Billboard from the Managed Object tree, and then drag and drop it into the 2D Map.

You can see the status of any object that appears in the 2D Map at a glance, because devices with problems, along with their segments, subnets, and networks, appear in colors reflecting the severity of their problems. Alarmsets defined in the MDB and represented on the 2D Map determine the relative importance of different classes of faults by assigning each one a severity status. CA NSM provides default alarmsets that you can assign to any object, customize, and extend.

Note: Do not place billboard objects on the topmost parent.
Background Maps
You can add background images to your 2D Map by choosing a background image or geomap from the Toolbox tree list. Drag the image onto the 2D Map by holding down the left mouse button and dropping it in place. The background map appears underneath the objects in the 2D Map. The Unicenter MCC supports any BMP graphic you want to use as a background. Use the context menu to remove the background image.

Note: The object you select in the Unicenter MCC left pane determines the contents of the Toolbox images, that is, the images available to add as a background. The Toolbox is always populated with the classes and images from the current provider of the object you select. For example, if you select a host name in the left panel and then navigate, or drill down, to the agent level, the Toolbox provides images and geomaps you can set as backgrounds. However, if you select an agent from an expanded tree view in the left pane, no classes or images are available in the Toolbox because the DSM provider does not expose any images for use.
WorldView Components
You can create additional custom maps to use as background geomaps with Vector Globe, a product licensed by CA from Cartografx Corporation. You can create maps for anywhere in the world with a configurable level of detail, place these maps as backgrounds for your topology, and arrange devices by geographical latitude and longitude. Vector Globe is provided as a separate CD for you to install after CA NSM is installed.
Custom Views
Custom Views implement functionality that lets you display 2D Maps with custom rendering such as colors, link tariff information, link bends, and so forth. A Custom View allows for the MDI-style layout of multiple maps or plugins across multiple Unicenter MCC frames. You create your custom objects in the 2D Map using the Toolbox after first activating Custom Views from the Unicenter MCC view toolbar drop-down menu.
Custom Views provide the following features:
Text boxes
Multiple bitmaps
Shapes, such as Circle, Diamond (Variable), Diamond (Fixed), Ellipse, Hexagon, Pentagon (Right), Rectangle, Rhombus, Square, Trapezoid (Up), Trapezoid (Right), Triangle (Up), Triangle (Left), and None
Lines
Polygons
Bendable links
Layering
Note: To convert existing .gbf files into custom views, open the .gbf file in the WorldView Classic 2D Map and resave it. The .gbf file is converted and appears in the Custom View left pane of the Unicenter MCC under the private node. The custom view is a named, publishable object that contains custom rendering and layout.
Favorite Views
Favorite Views let you create placeholders for specific objects for quick and easy viewing.
The following Business Process Views are created automatically:
The Domain Business Process View contains Domain objects. Each Domain object represents the Agent Technologies DSM component.
The WBEM Business Process View is created by the Discovery process and contains all of the WBEM objects (devices that provide WBEM data) found in your network.
The Deployed Objects Business Process View contains the state of all CA NSM components that are installed in the same DIA zone.
The DCS Engine implements the policy defined for each Dynamic Container object in the MDB. The Engine detects property changes, and objects are added or removed as children of a Dynamic Container object as they conform to, or no longer conform to, the inclusion policy. The DCS Policy Wizard quickly configures the engine to maintain your designated Dynamic Container objects. You can access the DCS Policy Wizard from the Start, Programs, CA, Unicenter, NSM, WorldView group. Using the Policy Wizard, you can perform the following tasks:
Select the repository you want the engine to run against, and configure sign-on details so that it can run unassisted as a Windows service.
Specify the location and granularity of the log file that the engine generates.
Specify the Event Management node to which the engine forwards events. You must provide a host name or an IP address; either an IPv4 or IPv6 address is acceptable. Note: If you enter a valid compressed IPv6 address, the address is expanded to its maximal extended form. If you enter an invalid IPv6 address, an error message appears.
Configure the inclusion policy for any number of Dynamic Container objects you want the engine to maintain dynamically.
Note: Configuration changes do not take effect until you stop and restart the DCS Service.
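The IPv6 expansion behavior described above can be illustrated with Python's standard ipaddress module; the sample addresses below are illustrative and do not come from this guide.

```python
import ipaddress

# Illustrates the documented behavior: a valid compressed IPv6 address is
# expanded to its maximal extended form, while an invalid address is
# rejected with an error. Sample addresses are illustrative only.

def expand_ipv6(text):
    """Return the fully expanded form of an IPv6 address, or raise ValueError."""
    return ipaddress.IPv6Address(text).exploded

print(expand_ipv6("2001:db8::1"))
# 2001:0db8:0000:0000:0000:0000:0000:0001

try:
    expand_ipv6("2001:db8::zz")
except ValueError as e:
    print("invalid address:", e)
```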
Severity Levels
The Management Command Center displays the current, real-time status of any managed object. If a managed object experiences a problem, its visual appearance in the Management Command Center changes. CA NSM uses the severity level value to change the state of an object. The severity value is a number from 0 to 9, indicating one of the ten predefined severities provided with WorldView. The severity level value of an object at any given time is assigned using policy that you define. The severity of a managed object determines its appearance in the Management Command Center. Using the Status Color Schemes Editor, you can override the default colors that are used to show an object's severity. This feature lets you customize the appearance of Management Command Center objects and lets you better visualize your network.
Weighted Severity
WorldView calculates the weighted severity of an object by multiplying the object's numeric severity by the object's weight.
Note: The weighted severity component of CA NSM uses only the following severity-to-status mappings:
0=Normal
1=Unknown
2=Warning
3=Minor
4=Major
5=Critical
Example: Calculate Weighted Severity
The default weight assigned to the router class is 60. When you discover all of the routers in your network, each router inherits a weight of 60. However, one router (Router A) is more valuable to your network, so you change its weight to 80. When an object below Router A changes state (for example, a server that is low on disk space goes critical), that critical severity is propagated to Router A, so the propagated severity of Router A is critical. Because you assigned a weight of 80 to the router, the propagated weighted severity is 400 (80 × 5). If an object below another router (for example, Router B, which has the inherited weight of 60) also changes to a critical state, the propagated weighted severity of Router B is 300. These values are used in the algorithm that derives importance.
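The Router A/Router B example can be sketched in a few lines; this is a minimal illustration of the documented rule (weighted severity = numeric severity × weight), not a real WorldView API.

```python
# Hedged sketch of WorldView's weighted-severity calculation, based on the
# documented rule: weighted severity = numeric severity x object weight.
# The function name and the weights used below are illustrative.

SEVERITY = {"Normal": 0, "Unknown": 1, "Warning": 2,
            "Minor": 3, "Major": 4, "Critical": 5}

def weighted_severity(status, weight):
    """Multiply the numeric severity of a propagated status by the weight."""
    return SEVERITY[status] * weight

# Router A was re-weighted to 80; Router B keeps the class default of 60.
print(weighted_severity("Critical", 80))  # Router A -> 400
print(weighted_severity("Critical", 60))  # Router B -> 300
```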
Object Importance
Using a sophisticated algorithm, WorldView determines the importance of each managed object in your network. Letting you view the importance of each managed object in your network helps you better analyze your network by giving you a better view of the health of your IT infrastructure. For example, importance lets you quickly and easily distinguish between a printer that has a critical severity because it is out of toner and a critical server that processes your payroll. Importance is calculated using the weight and severity levels of child and parent objects in your network. The importance of an object increases when the propagated weighted severity of one of its child objects increases. You can set the weight of each object in your network, or you can use the default weight that is preset for each managed object class.
After WorldView calculates the importance of an object, the following thresholds determine what color is used to display the object in Maximum Importance view:
Insignificant--0-15
Of Interest--16-40
Minor Concern--41-60
Major Concern--61-80
Severe--81-100
Ultra--101-500
You can change the default thresholds by editing the wvapiwrap.cfg file. On Windows, this file is located in the install_path\CA\SharedComponents\CCS\WVEM\CONFIG directory. On UNIX/Linux, this file is located in the $CASHCOMP/ccs/wv/config directory. These thresholds map to the severity levels for the purposes of displaying the colors that represent the different levels of importance in the Management Command Center. The same colors associated with the six levels of severity are used for importance:
Ultra appears as the same color that is defined for a severity of Critical.
Severe appears as the same color that is defined for a severity of Major.
Major Concern appears as the same color that is defined for a severity of Minor.
Minor Concern appears as the same color that is defined for a severity of Warning.
Of Interest appears as the same color that is defined for a severity of Unknown.
Insignificant appears as the same color that is defined for a severity of Normal.
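The threshold-to-color mapping above can be sketched as a simple lookup; the thresholds are the documented defaults from wvapiwrap.cfg, while the function itself is a hypothetical illustration, not a WorldView API.

```python
# Hedged sketch of how an importance value maps to a band and then to the
# color defined for a severity level, per the documented default thresholds.

BANDS = [  # (upper bound inclusive, band name, severity whose color is reused)
    (15, "Insignificant", "Normal"),
    (40, "Of Interest", "Unknown"),
    (60, "Minor Concern", "Warning"),
    (80, "Major Concern", "Minor"),
    (100, "Severe", "Major"),
    (500, "Ultra", "Critical"),
]

def importance_band(value):
    """Return (band, severity whose color is used) for an importance value."""
    for upper, band, severity in BANDS:
        if value <= upper:
            return band, severity
    raise ValueError("importance out of range")

print(importance_band(55))   # ('Minor Concern', 'Warning')
print(importance_band(400))  # ('Ultra', 'Critical')
```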
These user IDs are created so that the Severity Propagation Engine can stay connected when the user logs off the computer. You may want to change the password for this user for security reasons. To do so, you must deregister the Severity Propagation Engine, which removes the user accounts, and re-register it with a new password. Important! All WorldView connections to this MDB must be closed before you stop the Severity Propagation Service, which is a prerequisite for deregistering and re-registering the Severity Propagation Engine; otherwise, the severity for this WorldView repository will be incorrect. Note: For information about changing the password for the Severity Propagation Engine user accounts, see Change the Password for the Severity Propagation Engine User Accounts (Windows). Important! When starting or restarting CA NSM services manually on a computer that contains the WorldView Manager, you must start the Severity Propagation Service first. This action ensures that all services that use the Severity Propagation Engine initialize correctly. In particular, you must start the Severity Propagation Service before you start Agent Technology Services and DIA/DNA Services. Also, when you shut down the MDB on a computer running a WorldView Manager, you must recycle current persistent services and any instances of Unicenter MCC (both local and remote) after you restart the MDB. Current persistent services include, but are not limited to, the Severity Propagation Service and Agent Technology Services.
How You Correctly Stop and Restart the Microsoft SQL Server Database
You may need to stop and restart the Microsoft SQL Server database. To do this safely, you must follow a specific sequence. Following this sequence ensures that you can see all objects in your network and that the Severity Propagation Service correctly reports the severity of all objects in your network.
1. Run the following command to stop the RMI Server:
RMI_MONITOR -k LOCALHOST
2. Stop the following services using the Windows Service Control Manager:
CA Agent Technology services
CA-Continuous Discovery Manager
CA DIA DNA (there may be a version associated with this service)
3. Stop the CA WorldView Severity Propagation Service, and then stop the Microsoft SQL Server service.
4. The Microsoft SQL Server database starts by itself at the next request.
5. After Microsoft SQL Server starts, restart the following services in this sequence:
a. CA WorldView Severity Propagation Service
b. CA DIA DNA (there may be a version associated with this service)
c. CA Agent Technology Services
d. CA-Continuous Discovery Manager
Note: When you stop the Microsoft SQL Server database on a computer running a WorldView Manager, you must also stop and restart any instances of Unicenter MCC (both local and remote) after you restart Microsoft SQL Server.
Using ObjectView, you can obtain attribute information such as Object ID, Value Type, or information for an attribute group such as an attribute count and, if applicable, table information. You may also set attribute values at the device level. For example, if an interface is having problems, you can change its adminStatus from up to down. ObjectView also provides the DashBoard Monitor to let you graph selected MIB attributes in real time.
DashBoard Monitor
The DashBoard Monitor lets you graph selected MIB attributes in real time. It provides access to the Graph Wizard, a tool for creating dynamic graphs of device performance for further analysis. WorldView Classic's ObjectView supports Microsoft Excel, so you can display the collected data in Excel spreadsheets. Using the DashBoard Monitor and the Unicenter MCC Customize Chart, or the WorldView Classic Graph Wizard, you can create formulas for the attributes you select for graphing, including formulas that use polling interval information.
In addition to these options, you can set alarms in any graph type except LED to change the display of an attribute value when it reaches a definable threshold. You can customize alarm notification by setting colors, text, and severity.
Context Menu
The Association Browser displays a context menu when you right-click an object. Each menu is sensitive to the class from which the selected object is created; the class information determines which context menu appears. When you right-click the whitespace (background) in the Association Browser, a context menu with the following options appears:
Expand
Expands the root node object to reveal any associations. If there are associations to other objects, you see these relationships in the form of the hyperbolic tree.
Collapse
Collapses the hyperbolic tree.
Show Implied Links
Toggles the display of implied links on and off. By default, implied links are displayed in the Association Browser. Implied links appear for parent objects because, somewhere deep in the hierarchy, two or more of their child objects are linked together. Certain SAN objects have many implied links.
Show Reverse Inclusions
Toggles the display of reverse inclusions on and off. Reverse inclusions show you the parent objects associated with each node in the Association Browser. Displaying reverse inclusions is useful when administering SAN objects. By default, reverse inclusions do not appear in the Association Browser.
Legend
Displays the Association Legend.
You can also right-click an association link object to display a context menu, which varies depending on the type of association link connecting the two objects.
Populate WorldView with existing objects from another WorldView.
Populate WorldView with objects that are similar to existing objects already stored in that WorldView or a different one. You may want to add a few objects that are similar to existing objects and avoid running Discovery, which saves network resources. For example, if you have two subnets, each with 10 computers that are identical except for their names, you can use trix to export the definitions of the existing computers into a trix file, modify the name property as necessary for each applicable object, and import the objects with modified name properties into WorldView.
Trix exports (copies) the object definitions (properties) from WorldView into a trix script file. You can import or export all of WorldView or any part of it. Trix is available using the WorldView Classic or Unicenter Browser Interface on Windows platforms. You can also use the trix command from a command prompt on Windows platforms.
Export Methods
You can export objects from WorldView in either of the following ways:
By object
By class
When exporting by object, you can export child objects, inclusions, and outgoing and internal links. Incoming links are not available for exporting. When exporting by class, all objects that match the class definition you specify are exported. You can then import (copy) these object definitions in the trix script into another logical repository. You can import the trix script as is or you can modify the object definitions in the trix script and import the modified object definitions into the same logical repository or a different one. You may want to do this when you have new objects on your network that are similar to existing objects. Performing this procedure makes it unnecessary to run Discovery for a small number of new objects, thus saving network resources. You can also create your own trix scripts using the trix script syntax.
IPv6 Import is one of the Unicenter MCC tools that you can access by selecting Tools from the drop-down list above the left pane. This tool has the following fields and buttons:
Discovery Servers area
Lets you define one or more Discovery servers that provide information about IPv6 devices, include or exclude the defined servers from an import, and configure what those servers should discover. The columns are:
Server
Lists defined servers. When you double-click a server name, a connection settings dialog opens so that you can change the user name or password used to access the web-based Common Discovery configuration interface.
Status
Shows whether a server is accessed during an import. The status is Included or Excluded.
Default: Included
Configuration
Contains hyperlinks that open the CA Common Discovery configuration tool. This tool lets you configure, start, and maintain Discovery scans and purges. It also provides access to administrative information such as scan history and logs.
The buttons are:
Add
Opens a dialog that lets you define a discovery server by entering the server name, protocol (http or https), port (8081 by default), and user name and password for accessing the server.
Delete
Removes the server from the list in the IPv6 Import tool. This server can no longer participate in the import.
Exclude
Prevents the server from participating in the import. The status changes to Excluded and the icon to the left of the server name becomes red.
Include
Lets an excluded server participate again in the import. The status changes to Included and the icon to the left of the server name becomes green.
Select All
Selects all discovery servers in the list.
Service Statistics area
Shows how many hosts, routers, and other devices were added to the MDB as managed objects during the last data collection by the Common Discovery Import service.
Service Configuration area
Lets you start or stop an import, specify the polling interval, and indicate whether the service imports all discovery data or only changed information.
Start Service/Stop Service
Lets you start or stop an import. The label is Start Service when no import is taking place, and Stop Service when the Common Discovery Import service is running.
Collection Interval
Opens a dialog that lets you specify the number of minutes the import service should wait between each poll to discovery servers.
Default: 60 minutes
Reset Import
Controls whether the import service requests all discovered objects from Discovery servers or only objects that have been added or updated since the last collection. A dialog asks you to confirm that you want to import all objects in the discovery server database. A full import can be time- and resource-intensive.
Default: new information only
Note: For more information about Common Discovery, see the Common Discovery Help. For more information about the Common Discovery Import service and the IPv6 Import tool, see the MCC Help.
This functionality discovers both the component and the device on which the component resides. The Unicenter Registration Service Server contains a Business Process View under ManagedObjectRoot called Deployed Objects, which contains the state of all CA NSM components that are installed in the same DIA zone. When you install a CA NSM component, the component is "registered" in the MDB on the Unicenter Registration Service Server, and a proxy object is created to represent the component. Each component sets the state of its proxy object, and this state is recorded in the MDB. Components that create proxy objects are WorldView, Enterprise Management, and Agent Technology. Agent Technology groups the DSM object into a Business Process View called Domain, which also appears directly under ManagedObjectRoot. After the component is registered in WorldView, you can use all of the WorldView functionality to view and manage the component, such as defining status propagation rules and find algorithms, and accessing the component using the Management Command Center, WorldView Classic GUI, and the Unicenter Browser Interface. Possible states for each component include the following:
Running
Started
Stopped
Removed
You may want to use alarmsets to map the specific status text to a severity. The status texts are product-specific and help you diagnose the problem with a particular proxy object. In addition to the state, other properties that are important to the health of the component are recorded in the MDB. When the state of an object changes, the change is reported to the MDB, and any properties that may have changed are also reported.
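An alarmset-style mapping from status text to severity can be sketched as a simple lookup; the mapping below is a hypothetical illustration (real alarmsets are defined in the MDB, and status texts are product-specific).

```python
# Hedged sketch of mapping a proxy object's product-specific status text
# to a severity, in the spirit of an alarmset. The choice of severities
# here is illustrative, not a documented default.

STATUS_TO_SEVERITY = {
    "Running": "Normal",
    "Started": "Normal",
    "Stopped": "Critical",
    "Removed": "Unknown",
}

def severity_for(status_text):
    """Look up a severity for a proxy object's status text."""
    return STATUS_TO_SEVERITY.get(status_text, "Unknown")

print(severity_for("Stopped"))  # Critical
```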
The following picture shows the Topology View for a computer called Server10, which is the designated Unicenter Registration Services server. Server10 also has WorldView installed. The Unicenter object is created under Server10 in the TCP/IP network. The proxy object, called WorldView-server10.ps.cpm, is created as a child object of the Unicenter object. The Deployed Objects Business Process View mirrors this topology. Any status changes to the WorldView object are reflected in both places.
Rules
Rules determine the Business Process object state. You can configure any number of rules for each Business Process, and you can combine rules to provide sophisticated forms of correlation for the contents (children) of the target object. Each rule is represented by an object in the MDB, under the associated Business Process, and thus independently influences the propagated process state. Rules report all threshold breaches to a designated Event Console to support automated actions.
BPVM provides several kinds of rules to influence the Business Process' state from different aspects of the target object and its children, as follows:
Child Count Rule
State Count Rule
Propagation Thresholds Rule
Boolean Logic Rule
Child Update Rule
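As an illustration, a Child Count Rule with thresholds written in the ChildCount[W=3/C=4] notation (warning at 3 children, critical at 4) might be sketched as follows; the function and its threshold semantics are a hypothetical reading of that notation, not BPVM's actual implementation.

```python
# Hypothetical sketch of a BPVM Child Count Rule: the rule drives the
# Business Process state from the number of children of the target object.
# The W=3/C=4 thresholds mirror the ChildCount[W=3/C=4] notation used in
# notification events; the exceed-means-greater-than reading is assumed.

def child_count_state(child_count, warning=3, critical=4):
    """Return the state implied by a child count and W/C thresholds."""
    if child_count > critical:
        return "Critical"
    if child_count > warning:
        return "Warning"
    return "Normal"

print(child_count_state(9))  # Critical: a count of 9 exceeds the threshold of 4
```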
Notification Events
Notification events indicate that a threshold has been breached for a rule object and provide details about why the breach occurred. Notification events are reported to the Event Console. Examples of notification events follow. When the target object changes to a critical condition, as in the Child Count Rule, the event provides details about the reasons for the state change, as follows.
%UNI_BPVM, Target Object [ Europe Mail BusinessView ]: Rule ' ChildCount[W=3/C=4] ' for Business Process ' Business Process for Europe Mail ' is changing state to Critical because the child count of 9 exceeds the Threshold of 4
When multiple conditions exist, as in the Boolean Logic Rule, the event provides details about which conditions were met as follows:
%UNI_BPVM, Target Object [ Europe Mail BusinessView ]: Rule ' Email Server Rule ' for Business Process ' Business Process for Europe Mail ' is changing state to Critical because { [ EWB-NTS-01.schaf.com WindowsNT_Server >= Warning ] [ EWB-NTS-03.schaf.com WindowsNT_Server >= Warning ] }
When the target object returns to an acceptable condition, the reset event provides details about the state change as follows:
%UNI_BPVM, Target Object [ Europe Mail BusinessView ]: Rule ' Email Server Rule ' for Business Process ' Business Process for Europe Mail ' is changing state to Normal because no Thresholds have been breached
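Because these notification messages follow a regular pattern, an event-handling script could extract the target, rule, and new state; a hedged sketch follows (the regular expression is inferred from the example messages above and is not a documented, guaranteed message format).

```python
import re

# Hedged sketch: extract fields from a %UNI_BPVM notification event line.
# The pattern is inferred from the example messages in this section.

EVENT_RE = re.compile(
    r"%UNI_BPVM, Target Object \[ (?P<target>.+?) \]: "
    r"Rule ' (?P<rule>.+?) ' for Business Process ' (?P<bp>.+?) ' "
    r"is changing state to (?P<state>\w+)"
)

def parse_event(line):
    """Return a dict of event fields, or None if the line does not match."""
    m = EVENT_RE.search(line)
    return m.groupdict() if m else None

msg = ("%UNI_BPVM, Target Object [ Europe Mail BusinessView ]: "
       "Rule ' ChildCount[W=3/C=4] ' for Business Process "
       "' Business Process for Europe Mail ' is changing state to Critical "
       "because the child count of 9 exceeds the Threshold of 4")
print(parse_event(msg)["state"])  # Critical
```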
Impact Events
Impact events signal a state change for a target object and provide details about the child object that caused the change, including whether the overall change is positive or negative. When event logging is set to Notification or above, impact events are reported to the Event Console. Examples of impact events follow.
%UNI_BPVM, Target Object [ Web Servers BusinessView ] for Proxy 'Proxy for Web Servers' is currently Normal
%UNI_BPVM, Target Object [ Web Servers BusinessView ] for Proxy 'Proxy for Web Servers' has WORSENED from Normal to Critical because child object [ TestA Windows2000_Server ] WORSENED from Normal to Critical
%UNI_BPVM, Target Object [ Web Servers BusinessView ] for Proxy 'Proxy for Web Servers' has IMPROVED from Critical to Warning because child object [ TestA Windows2000_Server ] IMPROVED from Critical to Warning
Benefits of SmartBPV
SmartBPV benefits your enterprise by:
Automatically building views that focus only on related elements within the infrastructure, such as those for a specific application, business process, server type, or location.
Showing which infrastructure components communicate directly with each other, for faster problem resolution.
Minimizing application downtime, freeing valuable IT resources to focus on new initiatives and strategic planning.
Easily creating an automatically and dynamically updated management view for your domain of responsibility.
SmartBPV Examples
Examples of how you can use SmartBPV to benefit your enterprise include:
Validating your infrastructure and determining where to place new software or apply maintenance (for example, how many Exchange monitoring agents are required and where).
Collecting all instances of Windows servers and determining where activity is occurring between them, for diagnostic and network planning purposes.
Monitoring new or emerging protocols within your network (for example, where all Voice over IP elements reside and how they interact).
If it is not practical to run a Discovery of your network before running SmartBPV, we recommend that you separate the running of SmartBPV and Discovery using one of the following methods:
Run SmartBPV with the option to postpone discovery of unknown nodes. SmartBPV then runs without trying to discover unknown objects and instead creates a list of objects to be discovered later. When SmartBPV starts, respond No when prompted about the deletion of SmartBPV PLOG files so that the files can be used again. Once SmartBPV completes, run Discovery to find these unknown objects and then start SmartBPV again. To configure SmartBPV to postpone discovery of unknown nodes, modify smartbpv.properties to set these values:
DISCOVER_MISSING_OBJECTS = False
DISCOVERY_SCRIPT = ./temp/SmartBPR_Discovery.script
CREATE_MISSING_OBJECTS = No
Run SmartBPV to skip discovery of all unknown nodes and treat them as unclassified objects. Objects not already discovered remain unclassified in the repository until they are later discovered and classified. To configure SmartBPV to skip discovery of unknown nodes, modify smartbpv.properties to set these values:
DISCOVER_MISSING_OBJECTS = False
CREATE_MISSING_OBJECTS = Yes
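The smartbpv.properties edits above could be scripted; this is a hedged sketch that assumes the simple "KEY = value" line format shown in the fragments in this guide, and the helper function is ours, not a SmartBPV utility.

```python
# Hedged sketch: rewrite KEY = value lines in a properties file, appending
# any keys that are not already present. The "KEY = value" format is
# assumed from the smartbpv.properties fragments shown in this guide.

def set_properties(path, updates):
    """Rewrite matching KEY = value lines; append keys that are missing."""
    lines, seen = [], set()
    try:
        with open(path) as f:
            for line in f:
                key = line.split("=", 1)[0].strip()
                if key in updates:
                    lines.append(f"{key} = {updates[key]}\n")
                    seen.add(key)
                else:
                    lines.append(line)
    except FileNotFoundError:
        pass
    for key, value in updates.items():
        if key not in seen:
            lines.append(f"{key} = {value}\n")
    with open(path, "w") as f:
        f.writelines(lines)

# Configure SmartBPV to postpone discovery of unknown nodes:
set_properties("smartbpv.properties", {
    "DISCOVER_MISSING_OBJECTS": "False",
    "DISCOVERY_SCRIPT": "./temp/SmartBPR_Discovery.script",
    "CREATE_MISSING_OBJECTS": "No",
})
```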
If it is not practical for you to run a Discovery of your network before starting SmartBPV and you require that SmartBPV be fully initialized and all objects discovered in a single step, run SmartBPV as a batch process when the load on your system and network is low. The syntax for this mode of operation is as follows:
smartbpv -nogui
Unicenter MP exposes enterprise management information over the Web to a very broad audience, including network and systems administrators, performance groups, customers, business partners, and internal users. The portal's lightweight HTML-based interfaces provide fast and secure access to the information within and outside of the enterprise. A new generation of Internet-based technologies introduces new challenges and opportunities. With Unicenter MP, you can tailor enterprise information to address the business needs of specific users. For example, Unicenter MP lets you provide the following users with access to information that is relevant to their business needs: Business customers, who routinely use the Web for other purposes, expect service providers to use the same technology to inform them of service level agreements, critical problems, status of specific resources, and simple tests they can use to verify service availability.
Business partners need to be able to exchange enterprise management information between enterprises. Traditional CA NSM users (for example, enterprise management groups or network administrators) need to share information with other groups within the enterprise (for example, performance groups, application groups, IT managers, Business Executives, users, and so on).
Unicenter MP provides a clear, real-time view of information that makes it easy to understand how IT affects a particular business unit or application. It eliminates the need for users to view lengthy status reports that do not apply directly to them in order to find information.
Note: The same restrictions should be honored if Unicenter MP is installed on top of CleverPath Portal. For additional information about CleverPath Portal, see the CleverPath Portal Administrator Help and User Help. Access these help systems within Unicenter MP by clicking the Help button on the top right of the interface (User Help) or clicking the link for either help system under the Help sub-tree on the Portal Administration page.
Unicenter MP Administration
Spectrum Portlets
Lets you publish portlets for any CA Spectrum server that is connected to Unicenter MP.
Unicenter Service Desk Portlets
Lets you publish the Service Desk portlets from a specific Service Desk server to Unicenter MP.
Unicenter SLA (Service Level Agreement) Scoreboard
Displays a summary view of the SLAs you select.
WV-Agent Status Scoreboard
Displays a summary status of selected Unicenter agents and organizes them by agent type.
WV-Business Process Views Scoreboard
Displays a summary view of the Business Process Views objects you select.
WV-System Status Scoreboard
Displays a summary status of selected Unicenter systems and organizes them by host name.
Unicenter MP Administration is a set of tools for setting up, configuring, monitoring, and tuning the Unicenter MP server. Only members of the Admin workgroup have access to Unicenter MP Administration. The Admin workgroup is a default workgroup defined by Unicenter MP. Unicenter MP provides an Administration Wizard that guides you through the most commonly used administration tasks. Administering Unicenter MP includes two categories of tasks:
General tasks required to administer the Unicenter MP server, including user and workgroup administration, starting and stopping the server, and changing the Admin user password.
Tasks required to administer Unicenter NSM components, including setting up connections with the servers running Unicenter NSM and other products, enabling new components, tuning role-based security settings, and monitoring the Unicenter MP infrastructure.
Administration Wizard
Unicenter MP provides an Administration Wizard to guide you through the most commonly used administration tasks. The UMP Administration Wizard is the default workplace for users with admin privileges. Also, as a member of the Admin workgroup, you can access the UMP Administration Wizard from the Knowledge Library under Administration by clicking _UMP Administration Wizard. Note: Only members of the Admin workgroup have access to the UMP Administration Wizard. The Admin workgroup is a default workgroup defined by Unicenter MP. From the wizard, you can launch the following tasks:
Task 1: Manage Components
Establishes a connection to hosts running other CA components, such as WorldView, Agent Technology, and Event Management. Defining a host as a data source lets Unicenter MP obtain and display data from that host. Complete Task 1 before moving on to other tasks.
Task 2: Create or Modify Unicenter MP Portlets
Lets you define scoreboards or dashboards, which are real-time, query-based summary views of your data. Scoreboards and dashboards appear in the Unicenter MP Library, making them available to your Unicenter MP users.
Task 3: Manage Scheduled Tasks
Lets you select a scheduled task and change the status, suspend execution, reset the next execution, resume a suspended execution, or delete the task.
Task 4: Portal Administration
Launches CleverPath Portal Administration, letting you perform administrative tasks such as creating users, creating workgroups, assigning users to workgroups, and more. Unicenter MP is based on CleverPath Portal technology.
Task 5: Manage Users
Manages user profiles by letting you add, edit, or remove users and assign them to workgroups.
Task 6: Manage Workgroups
Lets you define, edit, or remove user workgroups. Workgroups help organize users into logical business groups, such as systems administrators, business users, and mail administrators. As members of a workgroup, users inherit permissions assigned to the entire group.
Task 7: Manage Unicenter Management Portal Properties
Configures properties for Unicenter Configuration Management, Service Level Agreements (SLAs), Business Process Views (BPVs), scoreboards, reports, the knowledge base, IPv6 addresses, and security.
Task 8: Manage Security Profiles
Lets you define, edit, or delete security profiles. Security profiles control access to specific data and controls for all actions you can perform. Assigning a security profile to each workgroup further defines the security permissions for that workgroup.
Task 9: Manage Global User Preferences
Lets you define or edit user preferences. Although users can specify personal display and data handling preferences, you can override individual preference settings for all users, if needed.
Task 10: Manage Web Reporting Servers
Establishes connections to hosts running Web Reporting Server (WRS), making the reports running on these servers available to Unicenter MP users.
Task 11: Manage Unicenter Management Portal Security Reports
Lets you view the Unicenter MP reports related to Documents, Channels, Workplace Templates, Menu Actions, BPVs, SLAs, and Event Filters.
Note: For detailed information about the rest of the tasks you can perform from the Unicenter MP Administration Wizard, see the Unicenter Management Portal Help.
Workplace Templates
Workplaces are custom pages that display only the Unicenter MP data you want. You can fully customize the content and layout to suit your needs. Unicenter MP provides the following templates for creating workplaces:

Empty Workplace
Contains no preconfigured attributes. If another template does not fit your needs, use this one.

Application Servers Status
Includes only application server data, such as the Application Agents Status or Application Events Summary scoreboards.

Database Servers Status
Includes only database server data, such as the Database Agents Status or Database Events Summary scoreboards.

Mail Servers Status
Includes only mail server data, such as the Mail Agents Status Breakdown or Mail Events Summary scoreboards.

Messaging and Transaction Servers Status
Includes only messaging and transaction server data, such as the WorldView Transaction/Messaging scoreboard.

My Unicenter Workplace
Includes the basic data to get you started with Unicenter MP.

Network Status
Includes only network data, such as the Network Agents Status Breakdown or Network Events Summary scoreboards.

Systems Status
Includes only systems data, such as the System Agents Status Breakdown or System Events Summary scoreboards.

UNIX Systems Status
Includes only UNIX systems data, such as the UNIX System Agents Status Breakdown or UNIX System Events Summary scoreboards.
Web Servers Status
Includes only web server data, such as the Web Agents Status Breakdown or Web Server Events Summary scoreboards.

Windows Systems Status
Includes only Windows systems data, such as the Windows System Agents Status Breakdown or Windows Agent Events Summary scoreboards.
You can establish connections to the following Unicenter NSM components and other CA products to gain access to data provided by these components and launch component and product interfaces from the Portal Explorer:

Active Directory Explorer
Wily Introscope
Unicenter Configuration Management
Unicenter NSM Knowledge Base
Unicenter Systems Performance
Monitoring always starts with a scoreboard, which is a high-level status summary of your IT resources. During installation, Unicenter MP creates many scoreboards for both Business Process Views and resource groups, so you can start using them immediately. Unicenter MP also provides tools and facilities that let you customize existing scoreboards or create new ones. Note: By default, Unicenter MP counts only managed WorldView objects in Business Process Views, system status, and resource status scoreboards. However, you can change the settings to count unmanaged objects for Business Process Views scoreboards. The Portal Explorer, the Severity Browser, and the Severity Tree let you view detailed information about your IT resources. You can launch these detailed interfaces from the status scoreboards. You can also access the Business Process View Explorer, which lets you view Business Process Views in the context of the Portal Explorer. You can create reports for the IT resources in your WorldView repository to monitor resource status or to find topology or containment information. You can create a sophisticated schedule in Unicenter MP to run these reports at certain times and intervals and notify the appropriate people when they are published.
The Unicenter MP administrator uses the Business Process Views Scoreboard Publishing Wizard to create and publish different, appropriate scoreboards for each group of users. For example, if the workgroup LondonUsers exists for London users, NYUsers for New York users, LNAdmins for Lotus Notes administrators, and Unicenter MPAdmins for Unicenter MP administrators, then the Unicenter MP administrator can create and publish four different Business Process Views scoreboards, each containing the appropriate Business Process Views for that workgroup. The administrator can also configure the scoreboards so that when they are published, automatic email notification is sent to the appropriate users. Users connect to Unicenter MP and see the Business Process View scoreboard that is published for the workgroups to which they belong.
Note: A Business Process View scoreboard can show Business Process Views information from multiple WorldView repositories.
Resource Scoreboards
In addition to using Business Process View scoreboards to track the status of the enterprise resources important to your business processes, you can use resource scoreboards to horizontally track the status of your enterprise by resource groups (classes) such as network, system, and application. Unicenter MP creates a set of resource scoreboards during installation. You can customize these scoreboards or create your own to accurately monitor the resources important to your environment. Note: Unicenter MP counts only managed WorldView objects in resource status scoreboards.
Portal Explorer
The Unicenter MP Portal Explorer lets you view the relationships between BPVs or managed objects in tree form and view the details of a selected object in the object tree. You can launch the Portal Explorer from either Business Process View scoreboards or resource scoreboards. The Portal Explorer consists of an interactive tree in the left pane and the corresponding views and tasks in the right pane. The left pane includes a tree-structured list of objects for navigation purposes. For example, when the Portal Explorer launches from a Business Process View scoreboard, the left pane displays Business Process View objects. When the explorer launches from a managed resource scoreboard, the left pane tree shows the resource group (class) as the root and, underneath the root, lists the objects that belong to the class. When you select an object in the left pane, the right pane displays various tabbed views about that object. The displayed views depend on the type of selected object. The views include the following:

Object Severity Browser
Appears for WorldView objects and DSM objects.

Notebook
Appears for WorldView objects and DSM objects. The view shows the property and value pairs of the selected object. For a WorldView object, the properties are grouped and shown in different sub-tabbed pages.

Explorer
Appears if you select a virtual root object, such as a resource group. The view displays the names of the objects that belong to the selected resource group (class).

Event
Appears for DSM objects.

Event Console
Appears for Node (Hosts and Workstations) WorldView objects. The view is the event console that shows all events sent from the selected node.

Alert View
Appears for alert-aware WorldView objects, including BPV, Node, Network Segment, Router, and so on. The view shows in the alert console all the alerts associated with the selected object.
When you right-click a WorldView object in the tree, a pop-up menu may appear, and you can act on the object using the available menu items. Similarly, when you select an object in the right pane, menu actions may appear depending on the type of object. For example, you can launch Performance Reports, agent dashboards, and other interfaces such as Unicenter Configuration Manager and Active Directory Explorer. As a Unicenter MP administrator, you can set up a security policy that restricts access to some views for certain roles of users in your organization.
Severity Browser
The Severity Browser shows status details of the objects contained in a selected Business Process View or managed object. The two types of severity browsers are:

BPV severity browser
Object severity browser
The status details in these views include the object name, label, severity, propagated severity, maximum severity, class name, and IP address. In the browser, you can configure the starting object status and object level. You can launch the severity tree in the severity browser to quickly identify the root cause of an abnormal Business Process View object or managed object.
Each row in the scoreboard represents a single agent, and you can select each row to make available any actions that you can perform on the agent in the Select Link drop-down list, such as launching the Portal Explorer and launching agent and server dashboards. You can also click the icon next to the agent name to quickly perform the default menu action for the agent (which is often launching the Portal Explorer). This type of scoreboard is particularly useful for network administrators, systems administrators, and database administrators who need to monitor specific components in the systems for which they are responsible. Note: When creating a scoreboard, only select the exception level that is meaningful. That is, do not select Normal status when all you really need to see are agents in a Warning state and worse. Also, only select agents of the same type (for example, mibmuxed or non-mibmuxed).
Based on these views, Unicenter MP provides the following two modes of viewing the dashboards:

Normal mode dashboards
Provide a high-level view of the agents by displaying all of the agents' tiles. For a given agent, the agent dashboards display one tile for each monitored group. By clicking the link within a tile, you can drill down into a specific monitored group to get more information.

Exception mode dashboards
Provide a diagnostic view of the agent by displaying only the tiles that have a specific abnormal status. For a given agent, you can specify the exception level, and only the tiles with that status or worse display.

Based upon the content or tile definition, the agent view dashboards can be one of the following:

Default dashboards
Contain all available tiles for a given agent. They are defined in the tile definition called General. The default dashboard is available out-of-the-box.

User-defined dashboards
Contain the tiles that you define in a specified tile definition. In certain cases, you may only be interested in monitoring a small set of resources. In such cases, you can define a tile definition containing only those resource groups. The dashboard only displays resources specified in that tile set. Note: The dashboard can still be shown in either Normal or Exception mode.
Server Exception dashboards
Combine information from all of the agents running on a server and present it in one dashboard. A server exception dashboard includes all of the tiles that are in exception mode for all of the agents.

You can create agent view or server view dashboards using the Administration Wizard, or you can launch the dashboards in context from scoreboards, reports, and the Portal Explorer. You can also publish dashboards to the Unicenter MP Knowledge Tree. Note: Status information for the agents is obtained from the DSM. This means that Unicenter MP must be configured to obtain the DSM information through a Component Manager connection.
Events scoreboards in Unicenter MP provide the following four types of statistics for event messages:

Summary scoreboards
Provide the total number of messages.

Breakdown scoreboards
Provide a status bar for all messages of a group or filter and separate counts for each status type found.

Last N
Provides the last N messages that match the scoreboard filter criteria.

Dynamic chart
Provides a dynamic chart of the message severity breakdown. The chart is updated periodically.

The Event Console shows the details of the messages. You can launch the Event Console from event scoreboards or the Knowledge Tree. In the Event Console, you can act on events whenever necessary, and you can search for events, filter events, and publish events to an HTML, CSV, or PDF report. You can set up security policies to allow or disallow a user to take certain actions. The Event Console is presented in the Portal Explorer for a Node (Host and Workstation) object and is accessible from the Knowledge Library. In Unicenter MP, you can specify users from certain workgroups as event administrators. Event administrators can change the data configuration of event consoles and scoreboards. Non-event administrators can change the presentation configuration of event consoles, but need permission to change the scoreboard configurations.
Event Scoreboard
Event scoreboards provide summarized views of messages regarding events that may occur within your IT infrastructure. Predefined event scoreboards reflect various aspects of your infrastructure including applications, databases, mail servers, networks, systems, transactions, and messaging. You may also create customized scoreboards to track events from specific sources that are significant to you. Individual scoreboards for each resource type let you see only event messages for resources that are of interest to you. For example, if you want to know how many critical messages exist for your database server only, you can select the appropriate scoreboard and drill down into the Event Scoreboard Detail to view specific events. You can also assign specific scoreboards to personnel who are responsible for that aspect of your infrastructure. For example, you can create a scoreboard that captures events coming from only your SNA Manager Agents, Switch Status, and Chassis Monitoring Agents and assign it to your network administrator. Consolidation of these scoreboards centralizes management of different aspects of your infrastructure, ensuring that important events are acted upon in a timely manner.
Event Console
Using filters, you can create an Event Console to scope the events from a specific area of your enterprise infrastructure. The Event Console lets you monitor the status of the events, respond to abnormal events as they occur, and rectify critical situations immediately. The Event Console organizes the events in pages and shows several specified display attributes. Using the filtering and searching facility provided in a console, you can narrow your search further and view the events that you want. Due to the sequential nature of the event log, event messages in a console are sorted by the creation time of the messages in descending order. The most recent event appears at the top of the first page. By default, the console is automatically refreshed within one minute. You can change your preferences to disable the auto-refresh or increase the refresh interval. You can take numerous actions on events from the Event Console, including acknowledging events, viewing event annotations and details, and publishing held or log messages to a report. A predefined Event Console is published at Library, Enterprise Management, Unicenter Event Management. You can use publishing tools to publish a customized Event Console at the same location in the library.
Event Actions
Once you have selected an event message in the Event Console, you may take several actions, as follows:

Acknowledge
Reply
View Detail
View Annotations
Export to an HTML, CSV, or PDF report
Note: Acknowledge is available only on messages that are waiting to be acknowledged, while Reply is available only for WTOR messages. The type of message is represented by different icons in the Attribute column in the Event Console. If you are a Unicenter MP administrator, you can set up the security policies on the action a user can take on events.
From Unicenter MP, you can act on alerts whenever necessary in the following ways:

Create requests, incidents, and problems from an alert
View details
Acknowledge selected alerts simultaneously
Transfer selected alerts simultaneously
Raise an alarm
Consolidate or unconsolidate alerts
Add, view, modify, and delete annotations
Close alerts
Change urgency
Change display attributes
Export up to the last 5000 alerts or selected alerts into HTML, CSV, and PDF reports
Search for specific alerts
Access URLs associated with alerts
View alert audit trails
The alert view is presented in the Portal Explorer for Business Process Views and some managed objects, such as nodes (Host and Workstation), network segments, and routers. Unicenter MP provides tools for you to manage alert scoreboards and the Alert Console. While all users can change the scoreboard presentation configuration, only alert administrators can change the data configuration.
Alert Scoreboard
Alert scoreboards display a graphical representation of your alert breakdown in different priority ranges. Alert scoreboard data is obtained from the AMS alert queue. Alert scoreboards let you perform the following actions:

Change the graphical presentation
View alerts from specific queues
Configure the scoreboard if you are an alert administrator
By default, the scoreboard is set to refresh automatically within one minute. Through My Preferences, you can disable the auto-refresh function or change the refresh interval. Two predefined alert scoreboards are published at Library, Enterprise Management, Unicenter Alert Management, Configured Scoreboards. You can use the publishing tool to create customized scoreboards at the same location in the knowledge library.
Alert Console
The Alert Console organizes the alerts in pages and shows the alerts with specified display attributes. You can create an Alert Console to view the alerts from a specific area of your enterprise infrastructure. The console also lets you do the following activities:

Navigate the alerts page by page
Sort alerts by all displayable properties
Change the display attributes of an alert
Take a number of actions on alerts
Use the filtering and searching facility to further specify and get the alerts in which you are really interested
Export alerts to HTML, CSV, and PDF reports
View URLs associated with alerts
By default, the Alert Console is automatically refreshed within one minute. You can change your preferences to disable the auto-refresh or increase the refresh interval. A predefined Alert Console, which shows all alerts, is published at Library, Enterprise Management, Unicenter Alert Management. You can use publishing tools to publish a customized Alert Console at the same location in the library.
Alert Actions
Once you have selected an alert, you can select one of the following actions from the 'Select and' drop-down list and click Go:

Create Request
View Requests
Search Knowledge Tool
eHealth Report Server
Business Service Console
At-a-Glance Report
Alarm Detail Report
View Detail
View Annotations
Acknowledge
Alarm
Transfer
Unconsolidate
Consolidation Detail
Close
View Audit Trail
Unicenter Service Desk-related actions are available only when a connection to a Service Desk server is established in Unicenter MP. The following actions are available only when the Service Desk server connection is running in ITIL mode:

Create Incident
Create Problem
View Requests only
View Incidents only
View Problems only
eHealth-related actions (eHealth Report Server, Business Service Console, and At-a-Glance Report) apply to eHealth alarms, alerts, and exception alerts. The Alarm Detail Report action applies to eHealth alarms only. eHealth actions do not appear for non-eHealth alerts.
eHealth alerts are created with corresponding eHealth server information. Even if there is no eHealth server registered with Unicenter MP, the eHealth actions still appear for eHealth alerts. The first time you access an eHealth server in a Unicenter MP session, a login window appears (unless you have enabled EEM security in Unicenter MP). After successful authentication, access to the server remains valid for the rest of the session. Note: If you are a Unicenter MP administrator, you can create security policies on the actions a user can take on alerts.
To send a notification to Unicenter MP, an integrated application must invoke the Unicenter MP notification utility Java class. With the automatic responding facility, you can set up the Unicenter Event Manager to send the notification to Unicenter MP when it receives certain types of event messages. Notifications are saved in the MDB. By default, the notifications never expire. However, by using Task 7: Manage Unicenter Management Portal Properties in the Administration Wizard, you can set the expiration time in hours. Unicenter MP automatically removes a notification once it expires. A predefined notification (My Notifications) is published at Library, Enterprise Management, Notifications. The My Notifications link is also in the My Unicenter workplace in the My Unicenter menu. This link shows all notifications sent to the workgroups to which the current user belongs. Another predefined notification (UM Portal Notifications) is published at Library, Enterprise Management, Notifications. The link is accessible to admin users only. This notification shows the notifications sent from various Unicenter MP components that report their status. Through the predefined UM Portal Notifications notification, you can create and publish a notification with a filter on the notification properties.
Web Reports provide several report templates and configured reports across supported products that let you see your data the way you want to see it. For example, if you are concerned about x factor within a supported product, you can execute the x factor configured report, which provides summarized information on x factor activity for that product. If you want to view more specific information on y within the x factor, you can fill in the y within the x factor template provided by that product to define your own configured report, or publish the report into the product tree so you can retrieve it later. Published reports are the resulting reports that display the actual report data.
Report data represents only a snapshot in time, and the data quickly becomes dated after the report is generated. However, you can schedule your reports to run at specific intervals, ensuring that your report data is continually updated. When you schedule a report, you can specify the date and time on which the report runs; whether it runs on certain days of the week, a certain day of the month, or certain months of the year; what date it starts running; whether it runs multiple times in one day; and on what date it stops running. You can also schedule automatic email notifications to any number of users every time a report is published. You can also launch other interfaces or perform actions on selected objects in reports. When you select an object, the 'Select and' drop-down list is populated with the interfaces you can launch for the object and the actions you can perform on it. For example, you can launch the Portal Explorer for an object, or you can open Unicenter Configuration Manager for an agent with the Policy Configuration option.
For more information about how to publish Unicenter Service Metric Analysis reports under Unicenter MP, see the Unicenter Service Metric Analysis User Guide. Important! The SLA component of Unicenter MP requires Unicenter Service Metric Analysis Version 2.0 or newer to be installed and running in your enterprise.
When establishing a connection to Service Desk, you can specify whether Service Desk is configured to run in ITIL mode. The ITIL Service Desk interface supports additional data objects, and additional alert actions are available in Unicenter MP when you connect to an ITIL Service Desk server. You can specify how to log in to Service Desk from Unicenter MP: using the Service Desk login screen, Unicenter MP login credentials, or a specific user name and password that you specify. You can also create a Service Desk portlet, which lets you open the Service Desk screen within Unicenter MP. If you also added a connection to the Unicenter Alert Manager, you can take the following actions on alerts in the Alert Console related to Service Desk:

Create Alert requests
Create Alert incidents
Create Alert problems
View Alert incidents
View Alert node requests
View Alert node incidents
View Alert node problems
Search the knowledge tool
Note: For more information about how to define Service Desk connections and publish Service Desk portlets in Unicenter MP, see the Unicenter Management Portal Help.
Trend Reports
Plot a variable for an object over a period of time in chart form. Trend reports can also show variables for groups of objects. The reports can reveal patterns over time and relationships between objects and between variables. The available Trend reports are Availability, Bandwidth, and Error, depending on the type of managed object. You can access Trend Reports in Unicenter MP for WorldView Topology and DSM objects from the Portal Explorer.

eHealth Alarms and netHealth Exceptions
Create Alert Management System alerts in Unicenter MP automatically, based on policy that you deploy. When alarms are closed in eHealth, the associated alerts are closed. Likewise, if an alert associated with an eHealth alarm is closed through AMS, the alarm is also closed. Access alerts representing eHealth alarms and netHealth exceptions in Unicenter MP from the Alert Console.

Unicenter MP also offers single sign-on capability with Embedded Entitlements Manager (EEM) security. When you enable EEM security in Unicenter MP, you do not have to enter credentials to access eHealth features.
The Alert Console provides access to the Business Service Console and the Report List, Alarm Detail reports, and At-a-Glance Reports for alarms appearing as alerts. The following new scoreboards are available for eHealth objects in the WorldView Topology and alerts associated with eHealth alarms:

eHealth Trap Status Breakdown Scoreboard (EM)
eHealth Interface Status Breakdown Scoreboard (WV)
eHealth Status Breakdown Scoreboard (WV)
All eHealth Alerts Breakdown (Alert)
When you try to access any of these features for the first time in a Unicenter MP session, you are prompted for login credentials. However, Unicenter MP offers single sign-on capability with EEM security. When you enable EEM security in Unicenter MP, you do not have to enter eHealth login credentials to access eHealth features.
To work with SPECTRUM, you must define a connection to the WorldView server that has been integrated with SPECTRUM and contains SPECTRUM objects.
The system agents support the following platforms:

Windows (System Metrics, Detailed Metrics, Registry Keys):
2003 Standard Server, Datacenter, Enterprise Server, Small Business Server (Intel x86, AMD-64, EM64-T, IA-64)
2003 R2 Standard, Enterprise, Datacenter (Intel x86, AMD-64, EM64-T, IA-64)
XP Professional (Intel x86, AMD-64, EM64-T)
Windows Vista Business, Enterprise, Ultimate (Intel x86, AMD-64, EM64-T, IA-64)
Windows Server 2008 (Intel x86, AMD-64, EM64-T, IA-64)

AIX (System Metrics, Detailed Metrics):
5.2 (POWER)
5.3 (POWER)

FreeBSD (System Metrics, Detailed Metrics):
6.2 (Intel x86)

HP-UX:
11iv1 (PA-Risc-64)
11.23 (PA-Risc-64, IA-64)
11.31 (PA-Risc-64, IA-64)

Linux:
Red Hat 4.0 (Intel x86, AMD-64, EM64-T, IA-64, S/390)
Red Hat 5.0 (Intel x86, AMD-64, EM64-T, IA-64, S/390)
SLES 9 (Intel x86, AMD-64, EM64-T, IA-64, S/390)

Mac OS X:
10.2 (PPC)
10.3 (PPC)
10.4 (Intel, PPC)
10.5 (Intel, PPC)

Solaris:
8 (UltraSPARC)
9 (UltraSPARC)
10 (UltraSPARC, Intel x86, AMD-64, EM64-T)

Tru64:
5.1b (Alpha)
In addition to monitoring these platforms, Remote Monitoring provides IP resource monitoring. This type of monitoring lets you gather the following information:

State
Indicates whether the system is responding.

Response time
Determines whether the response time is reasonable.

State of selected ports
Issues an alarm based on a state change, such as a port that is responding when it should be turned off (not responding).
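The port-state check above can be pictured as a simple TCP probe plus a comparison of the actual state against the expected one. The following Python sketch is illustrative only; the function names and the expected/actual comparison are assumptions, not the Remote Monitoring implementation.

```python
import socket
import time

def check_port(host, port, timeout=2.0):
    """Probe a TCP port; return (state, response_time).

    state is "responding" or "not responding"; response_time is None
    when the port does not respond within the timeout.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "responding", time.monotonic() - start
    except OSError:
        return "not responding", None

def port_alarm(expected_state, actual_state):
    """Alarm on a state change, e.g. a port responding when it
    should be turned off (not responding)."""
    return actual_state != expected_state
```

For example, a port that should be off but answers a connection (`port_alarm("not responding", "responding")`) would raise an alarm.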
Basic Concepts
After startup, the system agents immediately start monitoring the system resources based on a predefined configuration. Lists of available (auto-discovered) system resources let you easily customize your agents at runtime to meet the specific requirements of your system. An agent monitors system resources on the basis of watchers. A watcher is any instance of a monitored resource that has been added to the agent's configuration. The agent evaluates the status of a specific resource according to the assigned watcher configuration.
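Conceptually, a watcher pairs a resource instance with the thresholds used to evaluate its status. The following Python sketch is an illustrative model only; the field names and the percent-used thresholds are assumptions, not the agent's actual MIB attributes.

```python
from dataclasses import dataclass

@dataclass
class Watcher:
    """One monitored-resource instance in an agent's configuration
    (illustrative model; names are not the agent's actual fields)."""
    resource: str     # e.g. "FSys:/var"
    warning: float    # warning threshold (assumed: percent used)
    critical: float   # critical threshold (assumed: percent used)

    def evaluate(self, value):
        """Map a polled value to a status per the watcher's thresholds."""
        if value >= self.critical:
            return "Critical"
        if value >= self.warning:
            return "Warning"
        return "OK"
```

A watcher configured with warning=80 and critical=95 would report "Warning" for a polled value of 85 and "Critical" for 97.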
To avoid losing configuration changes, for example, as a result of a power failure, the agent periodically writes back its configuration data. The duration of this period can be specified with the agent's start command. Some of the system agents support Auto Discovery: for some specific resource groups, the corresponding agent adds watchers to its configuration automatically by applying filter conditions to the available lists. The agent uses the default values from the MIB to specify the properties of these watchers.
General Functions
Most of the system agents support the general functions listed in the following sections. The descriptions in this section provide a brief overview. For further details, procedures, and examples, see the corresponding references.
For example, you can specify a filter condition for the process path to monitor all processes that belong to c:\Windows\system32 with a single watcher. In the case of a Down status, the agent creates a list of items (process-ID:utilization value) that identifies the processes that caused this status. The sort order and length of this list depend on the severity of the violation, for example: 408:222|409:333|475:444
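A script that consumes such a culprit list only needs to split on the two delimiters shown above. This is a minimal Python sketch, assuming both fields are integers as in the example:

```python
def parse_culprit_list(culprits):
    """Parse an agent culprit list such as '408:222|409:333|475:444'
    into a list of (process_id, utilization) pairs."""
    pairs = []
    for item in culprits.split("|"):
        pid, value = item.split(":")
        pairs.append((int(pid), int(value)))
    return pairs
```

For instance, `parse_culprit_list("408:222|409:333|475:444")` yields the three (process-ID, utilization) pairs from the example above.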
Call-Back Mechanism
The call-back mechanism of system agents enables you to assign an automated task or action to a particular event within the agent layer of the CA NSM architecture. This assignment is accomplished by means of a call-back reference, which can be set up for each functional area of the agent, such as one call-back reference for CPU, one for logical volumes, one for files, and so on.

These call-back references can only be defined in an agent's call-back configuration file (for example, caiUxsA2.cbc), which can be secured by access rights. This configuration file is stored in the Install_Path/SharedComponents/ccs/atech/agents/config/cbc directory. It contains an entry for each call-back reference and associates with this reference the full path and name of the script or application to run. Additionally, parameter information can be passed to the script or application, as well as a user ID under which to execute it.

The advantage of this additional level of indirection is that the name of the call-back reference can be safely shown in the MIB without causing any security exposure, because the actual path and name of the call-back script or application is hidden within a secured file. The reference also enables you to check remotely, in a secure way, whether a call-back reference has been configured for the respective monitored area.

Note: In the MIB, the call-back reference name is defined as read-only. Therefore, it cannot be set or modified by Agent View or the MIB Browser. The reference name can only be configured through a definition in a configuration set.
To provide improved functionality, you can specify that the agent pass a set of predefined or user-defined parameters to the call-back script or application upon invocation. The predefined parameters contain the following information:

New watcher state (for example: OK, Warning, Critical)
Type of element being watched (for example: FSys)
Instance name of element being watched (for example: /var)
Name of the monitored resource property that caused this status change (for example: Space, Inodes, Mount)
Other miscellaneous var-bind information sent with the trap (for example: file system space and warning/critical thresholds)
Passing these parameters to the call-back script or application enables you to build powerful scripts that perform different actions depending on the state of the monitored resource.
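A call-back script receiving these parameters might look like the following Python sketch. The positional argument order is an assumption for illustration, not the agent's documented interface; a real script would branch into whatever recovery action fits each state.

```python
#!/usr/bin/env python
# Hypothetical call-back script: the agent is assumed to pass
# state, watched type, instance, property, then var-bind values.
import sys

def handle_callback(argv):
    state, rtype, instance, prop, *varbinds = argv
    if state in ("Warning", "Critical"):
        # e.g. trigger cleanup, restart a service, or page an operator
        return f"{rtype} {instance}: {prop} is {state}"
    return f"{rtype} {instance}: OK"

if __name__ == "__main__":
    print(handle_callback(sys.argv[1:]))
```

Invoked with, say, `Critical FSys /var Space 92%`, the sketch would report that /var's Space property went Critical.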
Cluster Awareness
Support for monitoring clusters with CA NSM system agents is based on the CA High Availability Service (HAS). HAS is a set of extensions to Unicenter that enables Unicenter components to operate within a cluster environment, to function as highly available components, and to fail over between cluster nodes gracefully. The system agents (caiUxsA2, caiWinA3, caiLogA2) use CA HAS and are cluster aware. This means that even though these agents run multiple times within the cluster (on each physical cluster node), only one agent monitors a shared cluster resource such as a shared disk. No specific configuration is required for using these agents in a cluster, except for monitoring processes: the appropriate name of the cluster resource group (cluster service) must be specified when creating a process watcher. Note: For more information, see the section Cluster Awareness and the appendix "High Availability Service" in the Inside Systems Monitoring guide, and the appendix "Making Components Cluster Aware and Highly Available" in the Implementation Guide.
Editing Watchers
All watchers of the system agents are editable; no watcher has to be removed and then re-added. If attributes of a watcher (for example, thresholds) are modified, the status of the watcher is re-evaluated based on the current poll values. Modifying a watcher therefore does not trigger a new poll.
Evaluation Policy
For analog metrics of one-to-many watchers, the metric value can be calculated in several ways. An evaluation policy makes this calculation watcher-specific. If the result violates the monitoring conditions, a culprit list is determined. The form of the culprit list depends on the evaluation policy setting and the kind of thresholds used (rising/declining thresholds or minimum/maximum ranges). The supported evaluation policies are: sum, max, min, average, and individual.
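The following sketch illustrates how the five evaluation policies could be applied to the metric values of a one-to-many watcher. The function and the culprit-list rule are illustrative assumptions, shown here for a rising warning threshold only; the agent's actual calculation is defined by its MIB.

```python
# Illustrative evaluation of a one-to-many watcher's metric values
# under the policies sum, max, min, average, and individual.

def evaluate(policy, values, warn):
    """Return (evaluated_value, culprits) for a rising warn threshold.

    values: dict mapping instance name -> metric value.
    """
    if policy == "individual":
        # Each instance is checked on its own; culprits are all violators.
        culprits = [name for name, v in values.items() if v >= warn]
        return max(values.values()), culprits
    if policy == "sum":
        total = sum(values.values())
    elif policy == "average":
        total = sum(values.values()) / len(values)
    elif policy == "max":
        total = max(values.values())
    elif policy == "min":
        total = min(values.values())
    else:
        raise ValueError(f"unknown policy: {policy}")
    # For aggregate policies, the culprit list is only produced when the
    # aggregate value itself violates the threshold.
    culprits = sorted(values, key=values.get, reverse=True) if total >= warn else []
    return total, culprits
```

For example, two processes using 40 and 50 units breach a warning threshold of 80 under the sum policy, but not under the average policy.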
History Group
The History Table lists the last n enterprise-specific status traps the agent raised. The value of n is a configurable attribute in the history group (<xyz>HistoryMaxEntries). Setting this value to 0 causes the agent not to store any trap history. Trap history collection can be switched on and off on a per-resource-group basis. This feature is especially useful if toggling watchers cause the trap history table to fill again and again.
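The bounded trap history described above can be modelled as a ring buffer: once the maximum number of entries is reached, the oldest trap is discarded, and a maximum of 0 stores nothing. This class is an illustrative sketch, not the agent's implementation.

```python
# Illustrative model of a bounded trap history ("HistoryMaxEntries").
from collections import deque

class TrapHistory:
    def __init__(self, max_entries):
        # A maximum of 0 disables history collection entirely.
        self._traps = deque(maxlen=max_entries) if max_entries > 0 else None

    def record(self, trap):
        # deque(maxlen=n) silently drops the oldest entry when full.
        if self._traps is not None:
            self._traps.append(trap)

    def entries(self):
        return list(self._traps) if self._traps is not None else []
```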
The logic of the metric can be changed by using additional policies, for example, the evaluation policy.
Modification Policy
Files and directories can be monitored for being modified or unmodified. In both cases the modification dates of the corresponding files are used; that is, the file or files addressed by a file watcher, or the entries in a directory, including the directory itself (.) and all subentries if the recursive option is set.
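A minimal sketch of such a modification check, assuming a simple comparison of modification times against a reference time (the agent's actual policy evaluation is more involved). The recursive option mirrors the description above: the directory itself counts as an entry, as do all subentries.

```python
# Illustrative modification check based on file modification times.
import os

def modified_since(path, ref_time, recursive=False):
    """Return the paths under 'path' modified after 'ref_time'."""
    hits = []
    # The watched path itself (file, or the directory ".") is checked first.
    if os.path.getmtime(path) > ref_time:
        hits.append(path)
    if recursive and os.path.isdir(path):
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                entry = os.path.join(root, name)
                if os.path.getmtime(entry) > ref_time:
                    hits.append(entry)
    return hits
```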
Overloading Thresholds
In most cases, you define thresholds as percentages, but sometimes it is useful to define absolute values instead. Percentages are suitable where a high degree of resolution is not required; additionally, they can provide generic values across many machines. Absolute values enable a far higher resolution. The overloaded thresholds concept lets you configure thresholds with the following scales:
- Absolute used values: For example, the absolute number of MB that can be used on a logical volume before a state change occurs.
- Percentage used values: Indicated by appending a percent sign (%) to the threshold value. For example, the percentage of total logical volume space that can be used before the state change occurs.
- Absolute free values: Indicated by appending an F symbol to the threshold value. For example, the absolute number of bytes that should be left unused on a logical volume.
The agent always converts the overloaded value entered by the client into an absolute used value and stores this value in the MIB. This value is used for validation and status checks. The overloading must be the same for warning and critical thresholds. Not all kinds of overloading are possible for all thresholds; for details, see the MIB description. In MIB Browser, the client indicates the type of overload by appending the percent (%) sign or F symbol to the value. In Agent View, this translation is performed dynamically, using slider widgets and graphical controls.
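The conversion the agent performs can be sketched as follows. The suffix rules follow the description above; the function itself is illustrative and not the agent's code.

```python
# Illustrative conversion of an overloaded threshold into the absolute
# used value that the agent stores in the MIB.

def to_absolute_used(threshold, total):
    """Convert a threshold string to an absolute used value.

    '80%'  -> 80 percent of total is used
    '200F' -> 200 units must stay free, i.e. total - 200 used
    '500'  -> 500 units used (already absolute)
    """
    if threshold.endswith("%"):
        return total * float(threshold[:-1]) / 100.0
    if threshold.endswith(("F", "f")):
        return total - float(threshold[:-1])
    return float(threshold)
```

On a 1000 MB volume, the thresholds 80% and 200F both translate to the same stored value of 800 MB used.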
Poll Method
For each resource group, the agent provides a method attribute that lets you disable the polling of any metric for that group completely, allow polling only when triggered by the poll interval, or also allow polling triggered by a query. This property can be used to reduce the agent's performance overhead.
Status Deltas
For resources whose growth can consume finite resources on the machine (such as data files), the concept of delta monitoring has been employed where feasible. This allows the agent to record the difference between the size of the resource at the last polling interval and the size returned by the current poll. If this difference exceeds a client-defined threshold, an alert is issued. Because a monitored object such as a file can contract as well as expand, the calculated delta can also be negative. The delta reported by the agent is always a positive or negative integer that reflects the growth or contraction of the resource. When overloading is used, the delta value may appear as a decimal value, for example: 99.86%.
To allow you greater flexibility when configuring the delta watchers, a type of overloading is implemented. This allows you to specify a threshold for growth, shrinkage, or change in both directions. In addition, it is possible to use the percentage type of overloading as well. You can define thresholds in the following formats:
n-   absolute shrinkage
n+   absolute growth
n    absolute change in both directions
n%-  percentage shrinkage
n%+  percentage growth
n%   percentage change in both directions
The threshold will always be entered as a positive value even if it is used to threshold against shrinkage. The actual delta value stored in the MIB is a positive or negative value to indicate the change as growth or shrinkage.
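A sketch of how such a delta threshold could be checked. It assumes the format reconstruction above (a trailing + or - selecting the direction, an optional % making the limit relative to the previous value); the agent's real parsing may differ in detail.

```python
# Illustrative check of a (possibly negative) delta against the
# threshold formats n-, n+, n, n%-, n%+, n%.

def delta_breached(previous, current, threshold):
    delta = current - previous
    spec = threshold.strip()
    direction = "both"
    if spec.endswith(("+", "-")):
        direction, spec = spec[-1], spec[:-1]
    if spec.endswith("%"):
        # Percentage overloading: limit is relative to the previous size.
        limit = abs(previous) * float(spec[:-1]) / 100.0
    else:
        limit = float(spec)
    if direction == "+":
        return delta >= limit          # growth only
    if direction == "-":
        return -delta >= limit         # shrinkage only
    return abs(delta) >= limit         # change in either direction
```

Note that the threshold is entered as a positive number even for shrinkage, matching the rule stated above.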
Status Lags
To provide meaningful monitoring for resources that can peak for a very short period without an actual problem occurring, the agent can be configured to check for several threshold breaches before the state changes. This is configured by lag attributes. The lag specifies the number of consecutive threshold breaches after which the state changes. If the lag is set to one, the status behaves as if there is no lag. If the lag is set to two, the threshold needs to be breached twice in a row to change the state. The agent offers an aggregate lag attribute for all resources having an aggregate status. This lag defines the number of consecutive poll intervals during which any status of the monitored resource is not in the OK or Up state before the aggregate status changes.
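The lag behavior can be sketched as a simple counter of consecutive breaches. The class and state names are illustrative; only the counting rule follows the description above.

```python
# Illustrative lag logic: a state change is reported only after 'lag'
# consecutive threshold breaches.

class LaggedStatus:
    def __init__(self, lag):
        self.lag = lag
        self.breaches = 0
        self.state = "OK"

    def poll(self, breached):
        """Feed one poll result; return the (possibly unchanged) state."""
        if breached:
            self.breaches += 1
            if self.breaches >= self.lag:
                self.state = "Warning"
        else:
            self.breaches = 0          # breaches must be consecutive
            self.state = "OK"
        return self.state
```

With a lag of two, a single short peak leaves the status at OK; only two breaches in a row change it.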
SNMPv3 Support
SNMPv3 support is encapsulated in aws_sadmin. CA NSM r11, r11.1, and r11.2 system agents support SNMPv1 or SNMPv3, depending on an aws_sadmin configuration option.
Watcher
An agent monitors IT resources based on watchers. A watcher is any instance of a monitored resource that has been added to the agent's configuration. The agent evaluates the status of a specific resource according to the assigned watcher configuration. Usually a watcher consists of a set of metrics that compare the detected values of monitored resources with monitoring conditions, taking settable monitoring levels into account. The result of this comparison is the status of the monitored resource according to the metric settings. The status of the watcher is the worst-case aggregate of all associated resource statuses. If the aggregate status of a watcher changes, an info-trap can be sent to the manager. The info-trap contains information about the monitored resource that caused the status change. Two basic watcher types can be distinguished:
- One-to-one watcher: A watcher is mapped to a single resource to be monitored. Characteristics of the monitored resource are evaluated by appropriate metrics. For example, a file system is monitored by a single watcher, and different metrics are used to detect the status of file system characteristics such as size.
- One-to-many watcher: A watcher is mapped to a set of resources (instances) to be monitored. Common characteristics of these instances are evaluated by appropriate metrics. Unlike the one-to-one watcher, a culprit list is provided to identify those instances that cause a status change of the watcher. Additionally, an evaluation policy defines how metric values, statuses, total values, and culprit lists of monitored instances are calculated. For example, processes or files can be monitored by one-to-many watchers.
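The worst-case aggregation mentioned above can be sketched in a few lines. The severity ordering used here is an assumption for illustration; the product defines its own state model.

```python
# Illustrative worst-case aggregation of resource statuses into one
# watcher status. The severity ranking is an assumed example.
SEVERITY = {"OK": 0, "Warning": 1, "Critical": 2, "Down": 3}

def aggregate_status(resource_statuses):
    """Return the most severe status among a watcher's resources."""
    return max(resource_statuses, key=lambda s: SEVERITY[s])
```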
The Active Directory Enterprise Manager queries the Active Directory for information about these resources. Additionally, it polls the Active Directory Agents on all monitored domain controllers in all forests for domain controller-specific metrics and statuses. The Active Directory Enterprise Manager analyzes the information it gathers from enterprise-wide Active Directory resources and displays it through Active Directory Explorer. Based on this information it provides an enterprise-wide view of your Active Directory resources.
Note: When you install the agent on a member server, only the subset of the previously listed resources pertinent to all servers is available for monitoring.
CICS Resources
The CICS Agent provides status, event, and configuration information about a CICS region and the transactions that are executed within it. The agent enables you to monitor the key resources, such as DSA and memory, of your CICS regions. The agent can monitor individual resources as well as the "health" of an entire region, allowing you to quickly determine the cause of a problem. The CICS Agent puts you in control by allowing you to determine the warning and critical thresholds for each monitored resource. The agent monitors these resources and, whenever a user-defined threshold is exceeded, sends an SNMP trap. The CICS Agent runs in IPv6 environments.
You can view the Host Resources MIB on the Management Command Center, Agent View (abrowser), Node View, and MIB Browser.
The DSM policy discovers the agent and script instances and uses traps or polls to determine the current state of each instance. The .wvc file adds the scriptAgt class to WorldView. Because the scripts often represent elements of key business logic that are monitored for health and availability, you can include the class in Business Process Views. Windows, Linux, and most current UNIX platforms support the Script Agent.
SystemEDGE Agent
CA SystemEDGE is a lightweight SNMP (Simple Network Management Protocol) agent capable of retrieving, monitoring, and publishing operating system metrics on a wide variety of platforms. It lets remote management systems access important information about the system's configuration, status, performance, users, processes, file systems, and much more. The agent includes intelligent self-monitoring capabilities that enable reporting and management of exceptions and that eliminate the need for excessive polling. CA NSM r11.2 supports CA SystemEDGE 4.3. Starting with r11.2 Service Pack 1, CA NSM supports CA SystemEDGE 4.3 and 5.0. Note: For more information, see the Inside Systems Monitoring guide.
- Processes
- Print Queues
- Network Interfaces
- Shared Memory
- Semaphores
- Message Queues
- Hardware/Programmable Watcher
- Dfs Links
- Quotas
- Directories
- Files
- Processes
- Services
- Jobs
- Sessions
- Printers
- Network Interfaces
- Registry Entries
- Hardware/Programmable Watcher
z/OS Resources
The z/OS system agent enables you to monitor key resources of your z/OS system and provides status, event, and configuration information. The agent can monitor individual resources as well as the health of an entire system, allowing you to quickly determine the cause of a problem. The z/OS system agent also monitors UNIX System Services (USS) resources. The z/OS system agent puts you in control by allowing you to determine the warning and critical thresholds for each monitored resource. The agent monitors these resources and, whenever a user-defined threshold is exceeded, sends an SNMP trap. The z/OS Agent runs in IPv6 environments.
This layered architecture delivers a powerful, distributed, and versatile management solution to accommodate large-scale, complex, and dynamic environments. Data collected on each node in the enterprise passes from the monitoring layer to the management layer to the WorldView layer, as shown in the following illustration.
This information is interpreted according to a management protocol that is understood by both managers and agents. CA NSM agents use the following protocols:
- Communications Protocol: User datagram protocol (UDP) of the transmission control protocol/Internet protocol (TCP/IP) suite.
- Network Management Protocol: Simple network management protocol (SNMP), designed to run on top of TCP/IP, and Distributed Intelligent Architecture (DIA), designed to run on top of TCP/IP.
Both agent and management applications can view the collection of data items for the managed resource. This collection is defined by the Management Information Base (MIB). Each MIB describes attributes that represent aspects of a managed resource. The network management platform accesses MIB data using SNMP. You can view the current statistics about monitored resources from various interfaces, such as MIB Browser, Agent View browser, and the Management Command Center. Every agent must be associated with at least one DSM. Through configuration, you can determine which machines in your enterprise report to a DSM. Each DSM can communicate with only one MDB, but a single MDB can accept information from multiple DSMs.
Managed Objects
Each resource that an agent monitors is called a managed object, and every managed object has a state. A managed object can represent a physical device, such as a printer or a router, or it can represent an abstraction, such as the combination of hardware and software components that constitute a network connection between two nodes. A managed object can be monitored and, in some cases, controlled with the use of one or more management applications. CA NSM groups managed objects into classes. A class is a group of managed objects that share a common definition, and therefore, share common structure and behavior. By changing the behavior of a class, you can change the behavior of the managed objects that belong to that class. The definitions for each agent class are in their individual policy files. For more information about policy files and their syntax, see the guide Inside Systems Management. For more information about a specific class of managed object, see the individual guide, such as Inside Systems Monitoring.
States
A state is one of a set of predefined possibilities for the condition (for example, up, down, or unknown) of the managed object. A change in state appears on the WorldView 2D Map or in Management Command Center, as a change in icon color. You can drill down through the network topology, to the machine name or IP address, to Unispace, and to the agent that communicated the state change. From the pop-up menu associated with that agent, you can view the current state of the agent according to the DSM (from Node View) or according to the agent (from Agent View). Each layer of Unicenter interprets the state of a managed object differently, placing an emphasis on different aspects of the object. For example, a Windows 2003 server may be taken off-line for repairs. The Windows 2003 system agent stops sending information to its DSM. The state of the agent according to the DSM is Absent, indicating that no information is being relayed. However, the state of the agent according to WorldView is Down, indicating that the server is inaccessible.
These components run as processes that can be started and stopped independently, or as a group, by the Agent Technology Service Control Manager (awservices). To view the Service Control Manager, see the section Tools to Configure Managed Resources. A brief explanation of each component follows. For more information about any of these components, see the guide Inside Systems Management.
Trap MUX
The Trap multiplexer (MUX) allows multiple management applications to listen for traps on the same trap port. For example, Enterprise Management and the DSM both listen to port 162.
Object Store
Agent Technology provides a mechanism for the persistent storage of objects, called the object store. The Object Store stores class data on managed objects. Managed object class definitions are loaded into Object Store from DAT files. The DSM uses the class definitions in Object Store to discover agents and managed objects on the appropriate nodes.
DSM Store
The DSM Store contains DSM managed objects that represent network nodes, agents, and the resources that are being monitored. These managed objects are created each time the DSM starts up, during the DSM discovery process. The DSM uses the managed objects to maintain the current status of monitored resources in its domain. Each DSM managed object has associated property values assigned based on the class definition that is present for that type of object in the Object Store. An object's current state is one property value maintained by the DSM.
DSM Monitor
The DSM is self-managing to ensure that your resources are under constant surveillance and that the health and load of each DSM can easily be determined. DSM Monitor uses various data collection methods to effectively monitor the DSM process, its impact on CA NSM, and its impact on the performance of the server on which it runs. You can use the historical data collected to fine tune the DSM-managed enterprise by balancing the number of managed objects and classes across multiple DSMs.
WorldView Gateway
The WorldView Gateway communicates with MDB through the WorldView Application Programming Interface (API) to get information about any nodes that WorldView has discovered. The WorldView Gateway then filters the list of discovered nodes based on the contents of the DSM Configuration IP Scoping Table, and forwards the appropriate list of managed objects to each DSM. For example, if DSM1 is configured to manage only devices with IP addresses 172.16.0.0 to 172.52.255.255, then the filtered list provided to DSM1 by WorldView Gateway would include only the addresses of any devices discovered within that range. The WorldView Gateway also filters the list of nodes based on the node class. This information comes from the Class Scoping Table, which contains a list of the node classes that the DSM should monitor. Another task of the WorldView Gateway is to pass state change information from the DSM to the MDB.
During its operation, the DSM moves through the following steps:
1. The DSM obtains the list of discovered nodes from WorldView.
2. The list of discovered nodes is filtered to create the domain list.
3. The DSM discovers managed objects within its domain.
4. The DSM creates a managed object for each running agent and child object, stores this information in DSM Store, and registers with its agents.
5. The DSM determines the current state of each managed object in its domain and loads it into DSM Store.
For more information about WorldView, see the appropriate chapters in this guide.
Agent View
Agent View provides an interface for configuring an agent. Agent View contains several windows that reflect the different sets of monitored resources. Within each window, you can set configurable attributes by adding and deleting resources to monitor, setting warning and critical threshold values, and setting resource polling intervals. You can perform similar tasks in agent dashboards. For more information about dashboards, see the chapter "Managing On-Demand."
To access the Agent View window from Node View, right-click the bar containing its name, then choose View Agent from the pop-up menu.
DSM View
DSM View displays the managed objects for an individual DSM. DSM View lets you find objects and manage properties associated with managed object classes in a management domain. You can create new properties for a managed object, as well as modify the values of existing properties. Note: You can also use the DSM Wizard to modify a selected subset of property values for all discovered instances of specific agent object classes. To access the DSM Wizard from a command prompt, enter: dsmwiz. You can access DSM View using any of the following methods:
- From the Node View menu bar, choose Edit, Find Object.
- From Node View, click the Find Object button (microscope icon).
- From a command prompt, enter the command: obrowser. (For more information on the syntax of the obrowser command, see the online CA Reference.)
- From Management Command Center, select DSM View from the left pane drop-down list.
Event Browser
The Event Browser provides detailed information about the events that have affected the status of an object and includes the following information:
- State changes
- Reason for state changes
- Warning messages sent
- Creation of an object
- Deletion of an object
With this information, you can determine patterns of activity. For example, the Event Browser shows that an object moves from a NORMAL to a CRITICAL state once every hour. You can use this pattern to determine the cause of the problem with that object.
Because the Event Browser lists events in real time, the display window changes continuously. You can freeze the display of the event log list temporarily to examine the details of a particular event. You can also sort and filter Event Browser information. Access the Event Browser using any of the following methods:
- Right-click an object in the WorldView Classic 2D Map and choose Event Browser from the pop-up menu.
- Right-click an object in Node View and choose Event Browser from the pop-up menu.
- Right-click an object in Management Command Center and choose Viewers, Event View from the pop-up menu.
- From a command prompt, enter the command: ebrowser
MIB Browser
MIB Browser lets you view the agent MIB in a tree-like hierarchy. If you are familiar with a particular agent, you may prefer to use MIB Browser to set configurable attributes, such as threshold values. MIB Browser shows an agent's entire MIB as it appears in the file, whereas Agent View provides a graphical representation of the MIB. You can access the MIB Browser by using one of the following methods:
- Right-click an object in the WorldView Classic 2D Map and choose MIB Browser from the pop-up menu.
- Right-click an object in Node View and choose MIB Browser from the pop-up menu.
- From a command prompt, enter the command: mibbrowse
After the MIB Browser appears, you can log into any MIB by using the Open Connection menu (or click the telephone icon). If you are using SNMPv3, you can perform a secure login from the Open Connection dialog.
Node View
Node View builds an object tree from the information provided by the DSM. Node View reflects status changes from the DSM by changing icon colors. Status propagates up through the Node View tree; that is, the most severe status reported by a child object propagates horizontally to the parent object. From the Guidance Window at the bottom of Node View, you can see the real-time recording of session activity, initial statuses of managed objects in the tree, status changes, acknowledgements of status changes, the syntax of commands triggered when using the Node View menu, error information, and so forth. Note: You can change DSM policy to affect the way DSM states are displayed in Node View. You can access Node View by using one of the following methods:
- From the WorldView 2D Map, right-click an agent object and select Node View from the pop-up menu.
- From Management Command Center, right-click an agent object and select Action, View Node from the pop-up menu.
- From a command prompt, enter the command: nodeview. (For more information about the syntax of this command, see the online CA Reference.)
Remote Ping
CA NSM lets you poll a remote host from another host. The Remote Ping interface lets you indicate the IP addresses of the source and destination machines and to establish retries and timeouts for the poll. In addition, you can view the activity of the Distributed Services Bus where the poll originates. You can request a Remote Ping from the Event Console, the 2D Map, or the command line. For more information on polling a remote host, see the Remote Ping online help. You can access Remote Ping using one of the following methods:
- Click Start, Programs, CA, Unicenter, NSM, Agent Technology, Remote Ping.
- From a command prompt, enter the command: rping
The Remote Ping dialog appears.
Repository Monitor
Repository Monitor lets you monitor the various agent classes that are listed in the MDB. You can view and delete a complete list of objects for a specified agent class. Use this tool if you have discovered classes in your enterprise that you know you will not want to monitor. Note: Advanced users can also delete the class name from the central list of classes being monitored. You can access the Repository Monitor using one of the following methods:
- Click Start, Programs, CA, Unicenter, NSM, Agent Technology, Repository Monitor.
- From a command prompt, enter the command: agtrmon.
The Repository Monitor appears.
SNMP Administrator
The SNMP Administrator checks the community string and Internet Protocol (IP) address of get, get-next, and set requests to ensure that these requests come from authenticated management applications. This component forwards trap messages to the appropriate destinations. The SNMP Administrator also stores configuration sets and MIB definitions for each agent on the node in memory.
While the DSM is discovering the monitored systems in its domain, the DSM registers its own IP address with each system's SNMP Administrator. At that point, each monitored system knows which DSM to send traps to. Consequently, those remote monitored systems do not require individual configuration for trap destinations. Note: For fail-over purposes, a monitored system should have more than one trap destination. Access the SNMP Administrator by right-clicking the AWsadmin object in Node View and selecting View Agent from the pop-up menu. The SNMP Administrator View - Summary dialog appears.
After startup, the Adaptive Configuration service provides a predefined configuration. If the predefined configuration does not match your specific applications, you can customize the service to meet your needs. You can influence the Adaptive Configuration service, for example, by specifying threshold policies or by including or excluding specific resources.
By default, the Adaptive Configuration service is installed along with the Active Directory Services Agent, the UNIX System Agent, and the Windows System Agent. The Log Agent partially supports the Adaptive Configuration service. The Adaptive Configuration service moves through the following modes:
- Self-Configuration Mode: Rapid and automatic configuration of an agent when it is first deployed to its target environment with no other form of a predefined configuration. Duration: about 3 minutes.
- Initial Self-Adaptation Mode
- Self-Adaptation Mode: Ongoing refinement and adjustment of an agent's existing configuration. In this mode of operation, the Adaptive Configuration process provides an ongoing learning and training exercise conducted over a number of weeks or months.
You can access Adaptive Configuration through the Unicenter Configuration Manager, which is described in a subsequent section. See the Inside Systems Monitoring guide for more information about running the Adaptive Configuration service on a specific host.
Distributing Configurations
You can use the ldconfig -h <host> parameter to distribute agent configsets, either individually or in batch mode. A better approach for applying a configset to many similar servers is to use a software delivery-based solution. Configuration files are usually distributed to the folder \ccs\at\agents\config of the specified managed nodes.
Central configuration is provided by the Unicenter Configuration Manager. From Unicenter Configuration Manager you can create, modify, and distribute agent configurations in your enterprise. Within Unicenter Configuration Manager, agent configsets and Adaptive Configuration profiles become agent profiles and are deployed to remote hosts within configuration bundles. You should also use Unicenter Configuration Manager to centrally distribute other files that configure your environment, such as atservices.ini, atmanager.ini, aws_sadmin.cfg, or aws_sadminV3.cfg. For more information about Unicenter Configuration Manager, see the section Understanding Configuration Manager.
IP Address Scoping
Each DSM has a list of nodes that it manages; this is referred to as the DSM Domain. In the past, CA NSM controlled the DSM Domain at the DSM layer using a file called gwipflt.dat, and later at the repository layer using the gwipfltii.dat file. You may want to modify the list of IP addresses reporting to a DSM when the historical data of the DSM Monitor suggests that the DSM server is having recurring problems. By assigning your DSMs to manage specific IP addresses, you can distribute the load of monitoring your enterprise among the DSMs you deploy. You can also ensure that each DSM manages only those nodes that are relatively close to it within the network, to reduce the amount of network traffic.
- Exclude specific IP address ranges from being monitored: -172.28.192.*
- Specify a range of addresses within a subnet: +172.28.192.2-8
- Add another subnet range for monitoring: 172.30.4.*
The above entries define the scope for a DSM to manage all discovered nodes in the 172.28 subnets, except for the 172.28.192 subnet, but also to manage all hosts with IP addresses in the 172.28.192.2 to 172.28.192.8 range, as well as all nodes in the 172.30.4 subnet. The DSM IPScope table, in conjunction with the setdsmname service, notifies each DSM which nodes to manage without having to restart the DSM process.
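As an illustration only, the include/exclude semantics of such scoping entries can be sketched as follows. The assumptions here are that entries are processed in order, that a later include (+ or bare) can re-admit part of an earlier excluded range, and that the last matching entry wins; the product's actual matching rules are defined by the DSM IPScope table.

```python
# Illustrative evaluation of DSM scoping entries such as
# ["172.28.*.*", "-172.28.192.*", "+172.28.192.2-8", "172.30.4.*"].
import fnmatch

def in_scope(ip, entries):
    managed = False
    for entry in entries:
        op, pattern = ("-", entry[1:]) if entry.startswith("-") else \
                      ("+", entry.lstrip("+"))
        if "-" in pattern.rsplit(".", 1)[-1]:       # e.g. 172.28.192.2-8
            prefix, span = pattern.rsplit(".", 1)
            lo, hi = (int(x) for x in span.split("-"))
            ip_prefix, last = ip.rsplit(".", 1)
            matched = ip_prefix == prefix and lo <= int(last) <= hi
        else:
            matched = fnmatch.fnmatch(ip, pattern)
        if matched:
            managed = (op == "+")                   # last match wins
    return managed
```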
To ensure that each DSM is running efficiently, the DSM can now manage itself and report its own status. The DSM Monitor provides real-time monitoring of the following resources:
- Connectivity status of DSM to MDB
- Number of nodes, objects, and classes that a DSM is monitoring
- System performance impact by a DSM and by other services on which the DSM depends
- Message loads coming in to and leaving the DSM
- Test of system path to WorldView and the Event Console
- Historical data collection
For information about configuring each of your DSMs from a central location, see the section Configure DSM Environments.
To access the Summary window and view the overall status of the DSM, access Node View, right-click DSM Monitor, and select Agent View from the list. If you are connecting to a DSM server running SNMPv3, click File, Connect, which allows you to provide the SNMP Connection Parameters.
Node View displays a horizontal tree structure that lays out the hierarchy of these three monitored groups, with all of their states being propagated to the dsmMonitor, then to the host. The most severe state overrides any less severe state.
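The propagation rule described above can be sketched as a recursive walk over the monitored tree, with each parent taking the most severe state among itself and its children. The tree representation and severity ordering are illustrative assumptions.

```python
# Illustrative status propagation through a Node View-style tree:
# the most severe state overrides any less severe state.
SEVERITY = ["OK", "Warning", "Critical"]

def propagate(tree):
    """tree: {'state': str, 'children': [subtrees]}; returns root state."""
    states = [propagate(child) for child in tree.get("children", [])]
    states.append(tree["state"])
    return max(states, key=SEVERITY.index)
```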
Create a Group
Groups are logical groupings of managed objects that contain managed resources. The group is created as a Business Process View in the Common Database (MDB). You can create a group that contains hosts with managed resources that have similar configuration requirements, business process views, or dynamic business process views. You can then apply one profile to the group instead of applying the profile to each individual host or managed resource. Using groups ensures consistent management and maintenance across your environment.
To create a group
1. Select the Groups tab from the navigation bar. The group hierarchy tree appears.
2. Select the model type from the drop-down above the hierarchy tree. The hierarchy tree for the model you selected appears.
3. Select the Management Model or the Resources Model from the hierarchy tree. The Model pane appears.
4. Click New Group from the right pane menu. The New Group pane appears.
5. Complete the following fields for the new group.
   Group Name
      Defines the name of the new group.
      Limits: up to 200 characters
   Description
      Defines the description for the object.
   Active
      Specifies that the configuration bundle is active on the group or managed host when selected. You can temporarily suspend delivery of the configuration bundle to the group or managed host by clearing the check box.
6. Complete the object filter criteria to add a host, and click Go.
   The results of the search appear in the Available Objects list.
7. Select the host that you want to add from the Available Objects list, and click Add.
   The host is moved to the Selected Objects list.
8. Click Save as Child or Save as Sibling.
   The new group is saved and appears in the hierarchy tree.
Base Profiles
A profile contains a set of configuration data for a managed resource. You can apply different base profiles to different groups, managed hosts, or managed resources in the hierarchy tree. Lower-level profiles override higher-level profiles in the hierarchy tree. Typically, a base profile contains configuration data that is common to a number of managed hosts. A differential profile can be applied to the base profile to make minor changes to the configuration data.
Get Config from Host
   Obtains the configuration data currently loaded in the managed resource or host and uses it to create the initial profile.
Register Only
   Registers (saves) an existing XML profile with Unicenter Configuration Manager so you can use the profile to create other profiles.
   Note: You must supply the exact URI location for the profile in the URI Location field to use the register option.
6. Click Next.
   The New Profile - Get Configuration from Host pane appears.
7. Select the appropriate host class and search criteria, and click Go.
   The search is processed and the results are displayed in the Available Hosts list.
8. Select the host you want from the list and click Finish.
   The new profile is created and appears in the profile hierarchy tree.
Differential Profiles
A differential profile can modify a base profile by overriding (adding, deleting, updating) configuration data. Differential profiles are applied to a base profile in the following order:
1. Inherited Differential Profiles, in the order they were applied.
2. Locally applied Differential Profiles, in the order they were applied.
Note: A base profile must be applied to the group or host in the hierarchy tree before you can use a differential profile. For example, you can create a differential profile that contains a threshold value of 5 that overrides the threshold value contained in the base profile for a specific group, host, or managed resource in the hierarchy tree.
File Packages
A file package is a collection of files associated with a managed resource that is delivered to a target host through DIA file transfer mechanisms. A file package delivers one or more files from a location on the Unicenter Configuration Manager server to a target destination on the host.
Delivery Schedules
A delivery schedule is a Cron expression or calendar-based schedule that, when combined in a configuration bundle with a profile or file package, facilitates the audit and delivery of the bundled profile or file package.
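As a sketch only (the exact schedule syntax that Unicenter Configuration Manager accepts may differ; consult the product's scheduling reference), a conventional five-field Cron expression reads minute, hour, day of month, month, and day of week:

```
# Illustrative Cron expression: run at 02:30 every Sunday
# fields: minute  hour  day-of-month  month  day-of-week
30 2 * * 0
```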
Configuration Bundles
Configuration bundles are logical groupings of one Base Profile or one or more File Packages together with a delivery schedule. You can also add Differential Profiles to a configuration bundle. Base Profiles and Differential Profiles contain agent configuration data that is automatically loaded into the sadmin store after delivery to the target server. File Packages contain other files (scripts or configuration files) that are not loaded into the sadmin store and must be copied to specific locations on target servers.
Adaptive Configuration Profiles are contained in File Packages because they must be copied into specific directories on the target servers. Adaptive Configuration Profiles contain specific instructions for the Adaptive Configuration Service, which automatically creates agent configuration data on the target servers according to these instructions. Note: You should not deliver a Base Profile and an Adaptive Configuration Profile for the same agent to the same target server. The Base Profile can overwrite the configuration data that was created by the Adaptive Configuration Service.
The file package is moved to the Selected File Packages list. Note: You are not required to add a file package to the configuration bundle before moving to the next step.
9. Click Next to add a base profile.
   Note: You can also click Finish to complete and save the new configuration bundle.
   The New Configuration Bundle - Select Base Profile pane appears.
10. Click Go from the base profile search.
    The results of the search appear in the Select Base Profile list.
11. Select the base profile that you want to assign to the configuration bundle by selecting the Select to Add column, and click Next.
    Note: You can also click Finish to complete and save the new configuration bundle.
    The New Configuration Bundle - Add Differential Profiles pane appears.
12. Click Go from the differential profile search.
    The results of the search appear in the Available Differentials list.
13. Select the differential profile that you want to assign to the configuration bundle from the Available Differentials list, and click Add.
    The differential profile is moved to the Selected Differentials list.
14. Click Finish.
    The configuration bundle is created and appears in the applied configuration bundles list.
Reporting Feature
The reporting feature of Unicenter Configuration Manager lets you configure and generate reports. The reports provide an audit trail for Unicenter Configuration Manager and include the user ID and the date and time the object was last configured. The following reports are available:
Configuration Bundles Audit Report
   Displays a list of configuration bundles that were created, updated, or deleted during the specified time range.
Configuration Objects Audit Report
   Displays a list of base profiles, differential profiles, and file packages that were created, updated, or deleted during the specified time range.
Resources Model Audit Report
   Displays a list of resource models that were created, updated, or deleted during the specified time range.
Delivery Schedules Audit Report
   Displays a list of the delivery schedules that were created, updated, or deleted during the specified time range.
Delivery Forecast Report
   Displays a list of deliveries scheduled in the future within a specified time and date range.
Delivery Status Report
   Displays the status of deliveries during the selected time range and whether each delivery succeeded or failed.
Event Management
Event Management, the focal point for integrated message management throughout your network, can monitor and consolidate message activity from a variety of sources. It lets you identify event messages that require special handling and initiate a list of actions for handling an event. Through support of industry-standard facilities, you can channel event messages from any node in your network to one or more monitoring nodes. You can centralize management of many servers and ensure the detection and appropriate routing of important events. For example, you may want to route message traffic to different event managers:
- Event and workload messages to the production control event manager
- Security messages to the security administrator's event manager
- Problem messages to the help desk administrator's event manager
By filtering messages that appear on each console, you can retrieve specific information about a particular node, user, or workstation. Wireless Messaging provides alternate channels for operator input in situations where the operator cannot access a CA Event Console. The supported messaging protocols are email and pager. Using the SMTP/POP3 mail messaging protocol, you can send and receive pager messages from two-way pager devices. An incoming message can trigger any series of actions you define for Event Console to perform in response to it.
Successfully implementing Event Management involves the following activities:
- Establishing date and time controls for automated event processing
- Trapping important event messages and assigning actions
- Putting Event Management policies into effect
- Monitoring message traffic
- Controlling access to messages
- Providing Wireless Message Delivery
- Using SNMP to monitor activity
- Implementing maintenance considerations
Note: For more information about Event Management, see the guide Inside Event Management and Alert Management.
Events
An event is a significant situation that indicates a change in the enterprise. It can be positive, negative, or just informative. It can indicate a significant problem or just describe a situation. It can be a warning of conditions that indicate a possible future problem, or it can tell of the success or failure of certain things. When an event occurs, a message is usually sent. Event Management processes it using its Event Management policy.
Event Agent
The Event Agent is a lightweight solution that provides all Event Management functions with very little overhead. Servers running the Event Agent do not have a Management Database or the administrative GUI. The agent gets Event policy from an Event Manager server or a local DSB file. When you install the Event Agent, you indicate whether to load the policies from a local DSB file or from a specific remote Event Manager. When you start the Event Agent or run opreload, message records and actions are copied from the local DSB file or the specified Event Manager, and an in-memory version is created on the agent computer.
Configure sudo
The sudoers file lets you configure sudo. (A sample file is available at www.gratisoft.us/sudo.) To modify the sudoers file, you must use the visudo command. Add users and permissions to the file. Add authenticate parameters for each user so that sudo does not prompt for a password:

Defaults:user !authenticate
The following excerpt from the sudoers file (sudo configuration file) gives the user unimgr permission to execute the /usr/bin/touch file as root or opsuser on server1:
unimgr server1 =(opsuser) /usr/bin/touch, (root) /usr/bin/touch
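Putting the two settings together, a minimal sudoers fragment might look like the following sketch (edit it with visudo; the user, host, and command are taken from the example above):

```
# Do not prompt the unimgr user for a password
Defaults:unimgr !authenticate

# Allow unimgr to run /usr/bin/touch as opsuser or root on server1
unimgr server1 = (opsuser) /usr/bin/touch, (root) /usr/bin/touch
```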
Windows
Run the cautenv utility from the command line of the Event Manager or Event Agent. On the Event Manager:
oprcmd -n agent-name cautenv setlocal envname value
On Event Agents:
oprcmd cautenv setlocal envname value
Note: The user running the commands must be listed in CA_OPR_AUTH_LIST (Users Authorized To Run Commands) on the agent computers. The following are examples of settings that can be changed. Substitute envname with one of the following environment variables.
CA_OPR_USEDB (Load from Management Database?)
   Specifies whether the Event daemon should use the Management Database. Set this to N because Event Agent installations have no database.
CA_OPR_PROXY (Event Agent Proxy Node)
   Indicates the name of the Event Manager server that provides policy to the Event Agent. If no value is specified, policies are loaded from the local DSB file.
CA_OPERA_NODE (Console Daemon Node)
   Specifies the name of the server where event messages are forwarded. You may want to set CA_OPERA_NODE to the local agent computer so that it processes its own events. You may need to use Event policies to forward some events to the manager for processing.
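For example, the following commands, run from the Event Manager, configure a hypothetical agent named agent01 to load policy from the manager emserver01 and to process its own events (both host names are illustrative):

```
REM Load Event policy from the manager emserver01 instead of the local DSB file
oprcmd -n agent01 cautenv setlocal CA_OPR_PROXY emserver01

REM Agent installations have no Management Database
oprcmd -n agent01 cautenv setlocal CA_OPR_USEDB N

REM Process events locally on the agent computer
oprcmd -n agent01 cautenv setlocal CA_OPERA_NODE agent01
```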
UNIX/Linux
To change Event Management settings on UNIX/Linux, edit the configuration files $CAIGLBL0000/opr/scripts/envset and $CAIGLBL0000/opr/scripts/envusr. For example, the following variable is set in the envset file based on the response to an installation question.
CAI_OPR_REMOTEDB (Event Agent Proxy Node)
   Indicates the name of the Event Manager server that provides policy to the Event Agent.
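For example, the relevant lines in the envset file might read as follows (the manager name is illustrative, and the exact form written by the installation may differ):

```
# Event Agent Proxy Node: load Event policy from this Event Manager
CAI_OPR_REMOTEDB=emserver01
export CAI_OPR_REMOTEDB
```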
Event Sources
Event Management receives events from a variety of sources:
- The cawto command, which sends an event to the Event Console.
- The cawtor command, which sends an event to the Event Console and waits for a reply. The event appears in the held messages pane and is not deleted until the operator replies.
- The oprcmd command, which sends a request to execute a command to the designated target machines.
- The careply command, which lets you use any terminal to reply to an event being held by the Event Console.
- Enterprise Management components, which generate events directly to the Event Console.
- SNMP traps that are generated by various devices, such as switches or printers, and other software components. catrapd (an Event Management component) collects, formats, and routes these traps to the Event Management daemon on the local or remote node.
- The Windows Event Logs, which store events generated by the Windows operating system, device drivers, or other products. The Event Management log reader collects these events and forwards them to the Event Management daemon.
- The syslog daemon on UNIX/Linux platforms, where messages are routed through the syslog daemon to the Event Console. Events issued through the logger utility are included because they also use the syslog daemon. These events may have originated on a platform not running CA NSM.
- Agent Technology agents, policies, and DSM.
- Any CA or client programs that use the CA NSM SDK. API functions, such as EmEvt_wto, issue events to Event Management.
For additional information about the cawto, cawtor, oprcmd, careply, and catrapd administrator commands, see the online CA Reference and the CA SDK Reference.
Message Records
You identify events that require special handling by creating message record objects. You then specify the special handling requirements by creating message action objects that are associated with a particular message record object. Once defined, message records and message actions become an event handling policy that identifies events with special handling requirements and the tasks to perform when they are detected. Event Management provides two categories of message records to identify important events:
- Message - Represents the output text string received by Event Management and displayed on the Event Console.
- Command - Represents the text string input by someone operating the Event Console. (You can enter commands at the command field of the console, use customized buttons to automatically issue commands, or enter them as command line arguments provided to the oprcmd utility.)
Command output can be a source of text to substitute into the message text in Management Database message records during the message matching process. For example, the string `pwd` in the MDB record message text field causes the current directory to be inserted into the message text.
Message Actions
Message actions specify what Event Management should do when it detects a match between an input event message and a message record. Possible actions range from simply highlighting messages on the console display to replying to messages, opening problems, or executing commands or other programs. For example, to ensure that a message catches the attention of the person responsible for monitoring the console, you can use either or both of these methods:
- Route the message to a held area of the console GUI, where it remains until acknowledged by the console operator.
- Assign an attribute, such as highlighting or blinking, to make a message more noticeable on the Event Console.
You can use several types of actions in any sequence or combination to thoroughly automate processing of an input or output message. For explanations of these action keywords, see the cautil DEFINE MSGACTION control statement in the online CA Reference.
Whenever Event policy is loaded from the Management Database, the Event daemon checks the EVALNODE of every message record against its own node name. If its node name matches the EVALNODE of the message record, the record and all associated message actions are read into memory. If there is no match, the message record is ignored. The set of message records and message actions read into memory constitutes the Event policy for the current execution of the Event daemon until the policy is reloaded by a restart or the opreload command.
To instruct the syslog daemon to route all messages to a remote machine, edit the syslog daemon's configuration file and insert the remote host name in the action part of the line, prefixing the host name with a single at sign (@). Note: If you use both the Berkeley syslog daemon and message action policies to reroute the same messages to the same remote machines, those messages appear twice on those remote machines because they were sent there twice: once by the Berkeley syslog daemon and again by Event Management.
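For example, the following sketch of a line in a Berkeley-style /etc/syslog.conf forwards messages from all facilities and priorities to the host central01 (the host name is illustrative, and on many platforms the selector and action fields must be separated by tabs):

```
# selector<TAB>action: forward everything to the remote host central01
*.*	@central01
```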
When setup detects that you are installing Event Management for the first time on the node, a message appears informing you of the new message action restriction feature and the default setting, which leaves message action restriction disabled. You are given the opportunity to override the default and enable message action restriction at that time. If you accept the default response n to the prompt for message action restriction, setup creates the actnode.prf configuration file for you with a single entry of -n=*,*,E, enabling message action submission for all RUNIDs from all nodes. If you instead respond y to the prompt, setup creates the actnode.prf configuration file with a single entry of -n=*,*,D, disabling message action submission by all RUNIDs from all nodes. You can change this rule at any time after installation by executing the caevtsec utility located in the $CAIGLBL0000/bin directory. The utility allows only the uid 0 user to maintain the file, and it preserves the file permissions. The file may also be maintained using a UNIX/Linux text editor. For more information about using the caevtsec utility, see the online CA Reference. The actnode.prf configuration file is located in the $CAIGLBL0000/opr/config/hostname directory. You can use this file to maintain policies that specify how message action restriction is enforced based on the submitting node and RUNID. The file must be owned by root, and only a uid of 0 may have write access to it. An individual entry in the file has the following format:
-n=nodename,runid,flag
nodename
   Specifies the node from which the COMMAND, UNIXCMD, or UNIXSH message action is initiated; it may contain a trailing generic mask character.
runid
   Specifies the RUNID (user ID) under which the COMMAND, UNIXCMD, or UNIXSH message action is submitted; it may contain a trailing generic mask character.
flag
   Specifies D for disable (feature is active; disallow the message action submitted by RUNID from nodename), E for enable (allow the RUNID from nodename to submit the message action), or W for warn (check the rule but allow the message action submission to occur).
For example:
-n=*,*,E
is the default rule in effect if, during installation, you elected not to activate message action restriction. The rule states that for all nodes and all RUNIDs, COMMAND, UNIXCMD and UNIXSH message action submission is allowed.
-n=*,*,D
is the default rule in effect if, during installation, you elected to activate message action restriction. The rule states that for all nodes and all RUNIDs, COMMAND, UNIXCMD and UNIXSH message action submission is disallowed.
-n=*,*,E -n=*,root,D
enforces a message action restriction on RUNID root and allows all other RUNIDs to submit the message actions.
-n=*,*,E -n=mars,*,D -n=*,root,W
allows all RUNIDs to bypass message action restriction unless the request comes from the node mars. In that case, message action restriction is enforced for all RUNIDs. The last entry sets a warning type restriction rule for RUNID root if it comes from a node other than mars. Event Management scans the entire configuration file for a best match and uses that rule. It uses the node field as a high-level qualifier when searching for a best match. For example, if the following are the only two entries in the file, any request coming from the node mars uses the disallow rule. The user root uses the warning rule only if the request comes from a node other than mars.
-n=mars,*,D -n=*,root,W
Note: On Windows, to execute a command a user must be defined in the Users Authorized to Issue Commands configuration setting.
EVENT_TIME8 - Time (hh:mm:ss) command was invoked
EVENT_TOKEN - Token number of message record that matched this action
EVENT_TYPE - Type of event: MSG/CMD/REPLY/WTOR
EVENT_UDATA - User data (value of the CA_UDATA environment variable when the event was generated)
EVENT_USERID - User origin associated with the event
EVENT_YYYYMMDD - Date the action was invoked
Message Enhancement
Event Management enhances messages by automatically providing the source or origin of each message along with the message text. You can customize the message text to meet the specific characteristics of your enterprise. Use the following action keywords to control the message text that appears on the Event Console:
- EVALUATE
- FORWARD
- HILITE
- SENDKEEP
- SENDOPER
- WAITOPER
For more information, see the cautil DEFINE MSGACTION control statement in the online CA Reference.
Event Correlation
Often a single event coming across the Event Console is not important unless seen in context with other events. By constructing a series of message records and actions, you can be notified and take action if two or more events occur that together have more significance to your enterprise than any one of the events may have when it occurs in isolation. For example, assume you have two PCs in your accounting department. If one goes down, it is a problem and you probably have established policies to deal with such an occurrence. However, should the second also go down, the problem suddenly becomes critical. The action you want to take in this situation may be quite different. A solution is to define message records to trap events coming to the Event Console informing you that Accounting PC #1 and Accounting PC #2 are coming down. Then, for each message record, define message actions that test for the occurrence of the other event. As a result, you will be automatically notified of the critical situation that exists in the Accounting department.
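The correlation scheme described above can be sketched as cautil policy. The statements below are an illustration only; the keyword spellings and parameters are assumptions, so see the cautil DEFINE MSGRECORD and DEFINE MSGACTION control statements in the online CA Reference for the exact syntax:

```
* Sketch only -- keywords are approximations, not verified cautil syntax.
* Trap the event announcing that Accounting PC #1 is going down.
DEFINE MSGRECORD MSGID='*Accounting PC #1*down*' TYPE=MSG
* Action: run a site-written script (hypothetical) that checks whether
* PC #2 is already down and, if so, escalates the event.
DEFINE MSGACTION NAME=(acct1) ACTION=COMMAND TEXT='check_other_pc.sh PC2'
```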
Event Console
Event Management gives you a visual window into event activity that lets you view and immediately respond to events as they occur. The Event Console provides two areas to view event messages written to the console log:
- The held messages area displays messages that require a response. These messages are often critical and require immediate attention or an operator reply. Held messages that require an operator reply are WTOR (Write To Operator with Reply) messages. When no held or WTOR messages exist, the area reserved for messages of that type disappears. If a reply pending message (WTOR) has been sent, and either the Event Manager or the entire system goes down while the message is still pending, the message is queued and activated automatically (appears to be still active) when the Event Manager is brought back up.
- The log messages area displays all logged messages (including held messages). Through message records and actions, you can further highlight these messages with a variety of colors and attributes to make them more noticeable on the console display.
You can narrow the focus of the display so you can concentrate on events pertinent to your immediate situation. For example, you can apply filters that limit the number of messages displayed. You can also append comments to messages that come across the console.
For detailed descriptions of these asset types, see the Asset Types Table in the online CA Reference. Important! To enforce your access rules, you must define users in FAIL mode. The only Enforcement mode that results in access being denied is FAIL mode, whether set explicitly in the user profile or implicitly by referring to a System Violation Mode of FAIL in the user profile. After defining console view access rules, you can execute the commit process to put them into effect. Users accessing the console log can choose from a list of console views associated with their user IDs. If no console view access rules exist for a user, the entire console log appears. When a user is removed from the console view definition, that view is no longer available to the user.
For more information about multiple log file support, see the topic Event Management Console Table (console.tab) File in the online CA Reference.
SNMP Traps
Simple Network Management Protocol (SNMP), a widely used standard in network Event Management, identifies objects on a network and provides a method to monitor and report on their status and activities. An SNMP trap is usually an unsolicited message that reports on one of two types of events:
- Extraordinary events indicate something is wrong or an error has occurred.
- Confirmed events provide status information, such as a process ending normally or a printer coming online.
Many SNMP agents are available, including those provided through Unicenter Agent Technology. Although they vary in purpose, complexity, and implementation, all SNMP agents can:
- Respond to SNMP queries
- Issue an SNMP trap
- Accept instructions for routing an SNMP trap (accept a setting for a trap destination)
Note: On some UNIX/Linux computers, the snmptrapd daemon might be running, occupying port 162. If so, catrapmuxd stops snmptrapd and restarts it on port 6164, which frees port 162 for catrapmuxd. When catrapmuxd is shut down, it stops snmptrapd and restarts it listening on port 162.
The CA Trap Multiplexer also supports IPv6. Therefore, make sure that you use catrapmuxd instead of the Windows SNMP service if using IPv6 on Windows versions earlier than Windows Vista, because the Windows SNMP service does not support IPv6 on these versions. For more information about catrapmuxd, see the online CA Reference.

TRAPMUX requires port 162 to be available. On Windows, if port 162 is in use, catrapmuxd issues an error message. You must free port 162 before attempting to run catrapmuxd again. On a Windows server system with the Windows SNMP service installed, the Windows SNMP service is probably using port 162. You can configure the Windows SNMP service to run on a different port. TRAPMUX can forward traps to that port if the Windows SNMP service is still required.

Note: When TRAPMUX forwards a trap to the Windows SNMP service, the trap loses its original embedded address. From the perspective of the Windows SNMP service, the trap originated on the local node with catrapmuxd. This also applies to any third-party SNMP service manager configured to receive traps from TRAPMUX.

To support SNMP version 3 traps
1. Shut down the snmp and snmp-trap services.
2. Open the %system%/drivers/etc/services file.
3. Change snmptrap 162/udp to snmptrap xxxx/udp, where xxxx is a port not currently in use, for example: snmptrap 5162/udp.
4. Save and close the services file.
After freeing port 162, if the Windows SNMP service is still required, follow these steps:
1. Restart the snmp and snmp-trap services.
2. To enable the Windows SNMP service to receive traps from TRAPMUX, enter the following command:
catrapmuxd add snmptrap:xxxx
where xxxx is the port to which snmptrap was moved in the services file, for example: catrapmuxd add snmptrap:5162.
where:
h
   Specifies the host subnet or range for every authorization. This is a required field.
c
   Specifies the agent classname.
cn
   Specifies the contextname, or instance.
u
   Specifies the username. This is a required field.
sl
   Specifies the snmpSecurityLevel. This is a required field. Values: noAuthNoPriv, AuthNoPriv, AuthPriv
ap
   Specifies the authProtocol. This is a required field if sl is not set to noAuthNoPriv. Values: MD5, SHA
a
   Specifies the authentication password. This is a required field if sl is not set to noAuthNoPriv.
pp
   Specifies the privProtocol. This is a required field if sl is AuthPriv. Value: DES
p
   Specifies the privacy password. This is a required field if sl is AuthPriv.
Examples
Set all hosts in the range to have minimum SNMP version 3 security (no authentication required) if the user is evans:
172.24.111.5-15:*:* evans:noAuthNoPriv
Set all hosts in the range to have AuthNoPriv security using MD5 protocol and an authentication password of evansa if the user is evans33:
172.24.111.5-15:*:* evans33:AuthNoPriv:MD5:evansa
Set all hosts in the range to have AuthPriv security using SHA protocol, an authentication password of AJHa0123 and a privacy password of AJHp0123, if the user is AJH3:
172.24.111.5-15:*:* AJH3:AuthPriv:SHA:AJHa0123:DES:AJHp0123
Remove the node from SNMP version 3 security, and let it default to SNMP version 1/version 2 security:
-172.24.111.11
Note: You must recycle catrapd for the updated authorized user information to take effect.
For more information about authorization of SNMP version 3 agents, see Agent Technology Support for SNMPv3.
Trap Destinations
Traps should be routed to a destination where an action can be taken. Many vendors provide facilities for setting a system-wide default trap destination through an SNMP configuration file. For example, some UNIX/Linux platforms set their trap destination in the /etc/snmpd.conf file. This path and file name may be different for your system. After a trap destination setting is accepted, there must be something at that destination to receive and process the trap. An Event Management agent, CA trap daemon (catrapd), automatically receives and processes traps directed to the destination (machine) on which it is executing. catrapd receives an SNMP trap, unpacks (decodes) it, and sends it to other Event Management components for processing. As part of this decoding, character representations, or strings, can be assigned to substitute names for the Enterprise IDs that are part of the SNMP trap. CA NSM provides the following translation files for that purpose:
- %CAIGLBL0000%\WVEM\DB\enterprise.dat on Windows platforms
- $CAIGLBL0000/snmp/dat/enterprise.dat on UNIX/Linux platforms
On Windows, to enable enterprise name translation, go to EM Settings and change the "Enterprise OID displayed as" setting to NAME. Recycle catrapd so that the change takes effect. Note: On Windows you can update the enterprise.dat file with the command catrapd update. For more information about catrapd, see the online CA Reference.
Where AlarmName is from the TRAP record and unique variable-text is created by substituting selected VarBinds into the Format column of the corresponding record. catrapd sends the formatted traps to the Event Management daemon for further processing.
5. Set the Format traps using provided tables option to YES, and click Yes on the confirmation message.
6. From the Settings menu, click Exit.
7. Restart catrapd.
Your settings are in effect.
Note: This command makes it simple for user applications, shell scripts that are part of production jobs, or Event Management policies to issue their own SNMP traps, simply by executing the command and passing it the appropriate arguments. Unlike some other SNMP trap commands, catrap does not restrict itself to any particular set of ISO or Enterprise MIBs and is totally open for use with any MIB or pseudo-MIB with no dependencies on any third-party network management components. For more information on catrap, see the online CA Reference.
If either of the preceding cases is true, the octet string is displayed in hex. Octet string varbinds containing binary or hex string data in traps when using catrapmuxd with v1, v2c, and v3 SNMP support are converted to printable strings with a potential for truncation of data in the Console when the octet string contains non-printable data. If this occurs, you can modify the aws_snmp.cfg file to specify that certain varbind OIDs are always displayed in hex. This ensures that the octet string data is displayed fully, in hex, on the Console.
For example, the following command disables the automatic formatting of the Link Layer Operational trap:
TRPCNTRL disable trap e=1.3.6.1.2.1.10.16 g=6 s=1
To enable the TRAP record shown previously, enter the following command:
TRPCNTRL enable trap e=1.3.6.1.2.1.10.16 g=6 s=1
Enable or disable multiple TRAP records by using wildcards in the keywords. For example, the following command disables all TRAP records that have an Eid column beginning with 1.3.6.1.4.1.199.1:
TRPCNTRL disable trap e=1.3.6.1.4.1.199.1.*
Notify CATRAPD of Changes

After modifying TRAP or MIB records, issue the TRPCNTRL refresh command to notify CATRAPD of the changes.

List MIB or TRAP Records

To list the MIB or TRAP records of a specific MIB or group of MIBs, use the following syntax:
TRPCNTRL list mib m=<mib-name/mib-mask>
TRPCNTRL list trap m=<mib-name/mib-mask>
For example:
TRPCNTRL list mib m=RFC1157
TRPCNTRL list trap m=RFC*
TRPCNTRL list trap enabled=N
MIBs
A MIB (Management Information Base) is the numeric code that identifies an event and includes other data as necessary to describe the object affected by the event. Because no two vendors may use the same MIB number to describe different events, standards exist to organize MIBs into one of three broad categories:

Industry Standard MIBs are sanctioned and published by the International Organization for Standardization (ISO).

Enterprise MIBs are assigned by the Internet Assigned Numbers Authority (IANA) to a given organization and are reserved for the exclusive use of that organization.

Pseudo-MIBs are not sanctioned or assigned by the IANA but can be just as meaningful and useful as an ISO or Enterprise MIB. Pseudo-MIBs often piggyback on an Enterprise MIB of another organization and take advantage of many of the defaults available on a given platform.
Sample Pseudo-MIB
The following sample pseudo-MIB describes an event tree. Each element represents information that can be sent when specified as a variable on the catrap command.
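A pseudo-MIB event tree of this kind can be sketched as follows. The numbered leaves match the traps discussed in the text (999.1.1.1 through 999.1.1.3, 999.2.1.5, and 999.2.1.7); the branch labels are illustrative assumptions.

```
999                              (pseudo-MIB root for this example)
├── 1  Database servers
│   └── 1  General Ledger database
│       ├── 1  Server shut down
│       ├── 2  Server started
│       └── 3  Journal full
└── 2  Applications
    └── 1  General Ledger financial application
        ├── 5  Warm start (resumed after temporary outage)
        └── 7  Application error
```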
Sending a trap of 999.1.1.2 is equivalent to sending the message "The Enterprise Database server that handles the General Ledger database has been started." A trap of 999.1.1.3 indicates that the General Ledger database has encountered a journal full condition. A trap of 999.2.1.5 indicates that the General Ledger financial application has resumed processing after a temporary outage (warm start).

Taking the example further, assume CA NSM is executing on several nodes, but you want to direct all SNMP trap traffic to a single monitoring machine, the server Earth. The server Earth receives the SNMP traps, and Event Management records and acts on them. The server Mars runs production financial applications, and the General Ledger production application running on Mars terminates with an error. A shell script tests the return code issued by the General Ledger production executable, detects an exit code indicating a problem, and issues an SNMP trap to alert the server Earth by executing the following command:
catrap earth "" "" 6 0 22 999.2.1.7 integer 128

where:

earth
    Sends the identified trap information to the server Earth.

"" ""
    Instruct catrap to take the default Enterprise code and the default agent address, respectively, for this node.

6
    Indicates that this command is sending a specific trap.

0
    Identifies the specific trap number for this example.

22
    Specifies an arbitrary number selected as a timestamp indicator.
Note: The following operands identify the variable binding (varbind) information for the trap.

999.2.1.7
    Identifies the object about which information is being sent. In the event tree illustrated earlier, this object refers to an error in the Enterprise financial application, General Ledger.

integer 128
    Provides additional information about the event. In this example, it could mean send an integer value of 128 to node Earth, assuming 128 is an error code that has meaning to the General Ledger application; or it could be the exit code that the shell script detected as indicating an error.

When received at the trap target server Earth, catrapd decodes the event and performs automatic actions in response. The event tree shows other types of events that could be sent, such as 999.1.1.1, indicating that the database of the Enterprise data server for the General Ledger system has shut down.

When combined with other CA NSM capabilities, the possibilities expand. For example, you can use Event Management to intercept error messages from any application and automatically execute customized catrap commands in response. The detection of key events can result in traps being sent when files become available for processing or when applications complete their processing. Security violation attempts can result in other SNMP traps being sent.

On the receiving side of an SNMP trap, you can use Event Management message handling policies to:

Send warning messages in human readable form to other consoles or terminals

Issue additional traps to one or more other nodes
For more information on catrap, including an example of how to use it to issue an SNMP trap, see the online CA Reference.
These policy packs are on the installation DVD:

DVD\Windows\NT\Policy Packs for Windows

DVD/policypacks for UNIX/Linux
caiWinA3 (Windows 2003 System Agent) policy has these rules:

File system failure rule suppresses symptomatic quota, directory, and file events on the same file system.

CPU rule shows a process-specific trap/poll as the root cause and suppresses general CPU traps/polls.

Memory rule shows a process-specific trap/poll as the root cause and suppresses general memory traps/polls.

CPU spike rule detects five critical events within a time period.

caiW2kOs (Windows 2000 System Agent) policy has these rules:

File system failure rule suppresses symptomatic quota, directory, and file events on the same file system.

CPU rule shows a process-specific trap/poll as the root cause and suppresses general CPU traps/polls.

Memory rule shows a process-specific trap/poll as the root cause and suppresses general memory traps/polls.

CPU spike rule detects five critical events within a time period.
caWmiAgent (Windows Management Instrumentation agent) policy has example rules to show what is possible with Advanced Event Correlation and the caWmiAgent:

Terminal services rule correlates the number of sessions and users to virtual memory.

Locked-out user rule correlates locked-out users to application failures. Applications may fail due to incorrect or obsolete credentials.

Device failure rule shows a fan failure as the root cause of other device failures.

Ora2agent (Oracle database agent) policy has these rules:

Catastrophic failure rule generates an alert to the CA-Database queue.

Memory rule correlates Oracle agent memory monitoring to Windows/UNIX/Linux system agent memory monitoring and shows the more specific Oracle trap/poll as the root cause.

Disk space rule correlates Oracle agent tablespace events to Windows/UNIX/Linux system agent disk space and shows the more specific Oracle trap/poll as the root cause.

Multiple database failure rule looks for three or more failures of any kind on a particular database instance.

Sqla2agent (Microsoft SQL Server database agent) policy has these rules:

Catastrophic failure rule generates an alert to the CA-Database queue.

Memory rule correlates Microsoft SQL Server agent memory monitoring to Windows/UNIX/Linux system agent memory monitoring and shows the more specific Microsoft SQL Server trap/poll as the root cause.

Disk space rule correlates Microsoft SQL Server agent tablespace events to Windows/UNIX/Linux system agent disk space and shows the more specific Microsoft SQL Server trap/poll as the root cause.

Multiple database failure rule looks for three or more failures of any kind on a particular database instance.

Db2agent (DB2-UDB database agent) policy has these rules:

Catastrophic failure rule generates an alert to the CA-Database queue.

Memory rule correlates DB2 agent memory monitoring to Windows/UNIX/Linux system agent memory monitoring and shows the more specific DB2 trap/poll as the root cause.

Disk space rule correlates DB2 agent tablespace events to Windows/UNIX/Linux system agent disk space and shows the more specific DB2 trap/poll as the root cause.

Multiple database failure rule looks for three or more failures of any kind on a particular database instance.
syba2agent (Sybase database agent) policy has these rules:

Catastrophic failure rule generates an alert to the CA-Database queue.

CPU rule correlates Sybase agent memory monitoring to Windows/UNIX/Linux system agent memory monitoring and shows the more specific Sybase trap/poll as the root cause.

Disk space rule correlates Sybase agent tablespace events to Windows/UNIX/Linux system agent disk space and shows the more specific Sybase trap/poll as the root cause.

Multiple database failure rule looks for three or more failures of any kind on a particular database instance.

Job Management Option policy looks for correlations within the Job Management Option with these rules:

Job submission problems rule generates an alert to the CA-Scheduling queue for jobs submitted but not started and for jobs started but not completed within a certain interval.

Autoscan problems rule detects an autoscan (or pre-scan) started but never completed.

Predecessor warnings rule highlights warnings that are uncorrected after a time period.

SQL errors rule generates an alert to the CA-Scheduling queue for critical SQL errors.

Multiple failures rule detects multiple failures on a given Job Management Option.
You can define Event Management policies for sending and receiving pager and email messages by using the Wireless Messaging Policy Writer GUI on Windows. The Policy Writer lets you do the following:

Specify the text of the incoming message that triggers the pager or email response. The message can include environment variables like &NODEID and substitution variables like &1, &source, and &severity.

Define up to three pager or email messages to be sent during the Event Console action sequence.

Define the text of the pager or email message and up to six possible replies to that message.

Administer the Wireless Messaging address database and set the path used for storage and workspace.
To secure operations, warning messages from the Event Console are assigned a unique identifier and must have a correctly formatted reply before any action is taken. When a response is received or the page has timed out, the identifier is expired and cannot be reused.
The Wireless Messaging client performs some additional formatting and administrative tasks. These are set by entries on the command line or by directives in the message file. For detailed descriptions of command line options for capagecl, see the online CA Reference. For instructions about sending one-way and two-way messages from the command line, see the online help.
Message File
Wireless Messaging creates messages from information in the message file. Messages are composed of fixed text, environment variables like &NODEID, and substitution variables like &1, &source, and &severity. The message file may include a list of pre-formatted replies expected from the message recipient. The processing of replies, however, is independent of the message file contents: the recipient may send additional or alternative replies, and these replies are resolved into actions if they are included in the configuration file and if policy actions specify how to handle the additional return codes.

Apart from directives to the Wireless Messaging client (set xxx=yyy), text written to the message file is appended to the message specified on the command line and included in the sent message. The format of the reply text depends on the device to which the message is sent. The Wireless Messaging client recognizes replies delimited by three underscore characters, as in ___Reply___, though this formatting may be transparent on remote devices. For information about formatting the commands embedded in the message file, see the online CA Reference.
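A minimal message file might look like the following sketch. The set directive form, the substitution variables, and the ___Reply___ delimiters come from the description above; the directive name subject and the reply texts are assumptions for illustration only.

```
# Illustrative message file sketch -- directive name and reply texts
# are examples only, not taken from the product documentation.
set subject=Event Console warning
Server &NODEID reported: &1 (severity &severity)
___Acknowledge___
___Page Someone Else___
```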
Configuration Files
Configuration files store addresses, audience information, message groups, and a list of replies with their associated return codes. When a message arrives at the CA NSM mailbox, the message server opens it and searches for an ID code. If this code matches the code expected by the Wireless Messaging client, the server passes the message to that client. The client processes the text of the message and looks for a reply. If a reply is found, the client checks the appropriate configuration file to find the code it should return to the calling application. The Reply Information Configuration file then maps responses to return codes. In the following sample configuration file, Acknowledge is mapped to return code 97.
# Send
Keep Text= 30
Page Someone Else= 31
Banner Message= 37
Acknowledge= 97
Note: You can define message actions that send responses based on the return code received in the message. The Reply Information Configuration file may include any or all responses sent with a message, and can include additional responses that were not sent as suggestions but may be useful to the remote user. Valid return codes range from 6 through 95 and can be unique to each client instance.

Note: Return codes 6 through 95 can be assigned to user replies. All 90 return codes can appear in any configuration file, but configuration files and standard policy generated by the Wireless Messaging Policy Writer recognize only replies that you list when defining the policy. Wireless Messaging can interpret other replies if they are added manually to the configuration file or a default file is used. Otherwise, return code definitions can be arbitrary (as long as they are unique) without affecting the behavior of the policy.

The reserved return codes (0-5 and 96-99) are used by the system. The following codes are significant.
Code  Description
03    Message not sent (rejected by calendar check)
96    Abort code (triggered by server termination or the capagecl -I issueid -X command)
97    Acknowledge
98    Reply not found in the given configuration file
99    Wireless Messaging client timed out without receiving a reply
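An action script that branches on these return codes might look like the following sketch. The reserved codes 97-99 are the ones listed above; capagecl is stubbed to return 97 (Acknowledge) so the sketch runs without the Wireless Messaging client installed, and its real command line options are documented in the online CA Reference.

```shell
# Sketch: branch on the Wireless Messaging client's return code.
# capagecl is stubbed here so the sketch runs without the product;
# a real policy action would pass the client its message arguments.
if ! command -v capagecl >/dev/null 2>&1; then
    capagecl() { return 97; }
fi

capagecl
rc=$?
case "$rc" in
    97) echo "page acknowledged" ;;
    98) echo "reply not found in the configuration file" ;;
    99) echo "client timed out without receiving a reply" ;;
    *)  echo "user-defined reply, return code $rc" ;;
esac
```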
For information about the format of configuration files, see the online CA Reference.
For information about setting environment variables, see the online CA Reference. Note: For security reasons, the password is contained in an encrypted portion of the command message. With the PageNet Pagewriter application, this process is transparent, although the password must be entered when the page is generated.
The Wireless Messaging Policy Writer provides persistent storage, so that you need not define email addresses, message layouts, default timeouts, and standard groups of messages for each message. Note: You can send messages to up to three recipients. For more information about the Wireless Messaging Policy Writer, see the online Help.
Template Files
Template files, which are modified Event Management script files, provide the basis for Wireless Messaging policy. An example of a template file is one named Single Notify. These files contain command actions for starting the pager client and sequences of conditional GOTOs for trapping return codes. Many of the details, such as the capagecl command lines and condrc= numbers, are supplied to template files by your entries in the Wireless Messaging Policy Writer. Entries in the file that are specific to each policy are replaced with flag entries beginning with [DUMMY...]. For a full list of these substitution flags, see the information on template files in the online CA Reference. Some policy templates are supplied with Wireless Messaging, and you can create new template files by copying from those files.
Note: For more information about AMS, see the guide Inside Event Management and Alert Management.
Alerts are automatically escalated based on several of their properties to ensure that they receive attention quickly. Escalation can increase alert urgency, transfer alerts to another queue, set alarms, send notifications, and more. Alerts have a detailed audit trail that includes information about automated and manual actions that affect them or are carried out on their behalf so that you always know the actions taken to resolve them.
Event Management message policy determines the conditions that prompt alert creation. Message records and actions for alerts use the ALERT message action. You assign initial properties for alerts by indicating the alert class in the message action.
Note: Alerts should be a small subset of the events that occur. They should represent only events that require human intervention or provide information critical to continued normal operations. We recommend no more than 1,000 alerts per day, for two reasons. First, if few alerts are generated, the operations staff can focus more easily on what is important. Second, AMS is a complex system, and each alert consumes more computing resources than other events. By carefully designing your AMS configuration and policy, you can help ensure that you get the most benefit from AMS.

Advanced Event Correlation can generate correlation alerts directly into the Management Command Center. The alert class is specified at the engine level of the policy, and each rule can enable or disable alert generation to that class.

Alerts are shown by alert queue or managed object in the Management Command Center:

The queues you have defined are listed in the left pane when Alerts is chosen from the drop-down list above that pane. When you select a queue in the left pane, the alerts in that queue are shown in the right pane. You can open multiple queues in the right pane, and lock them in place as you move to other areas of the Unicenter MCC.

A bar chart of alert statistics is displayed when you right-click a node in the left pane and choose Viewers, Status. The chart shows the total number of alerts for the node broken down by queue and priority.

Alerts for a managed object in the Topology view are displayed when you right-click the object and choose Alert Viewer from the context menu. Periodically, an association daemon polls the alert table for unassociated alerts and links them to their origin node.

The context menu that opens when you right-click individual alerts in the right pane lets you acknowledge alerts, view their properties, transfer them to another queue, and more.
AMS provides a connection to Unicenter Service Desk, which is a customer support application that manages calls, tracks problem resolution, shares corporate knowledge, and manages IT assets. You can open and resolve Service Desk trouble tickets without leaving the Unicenter MCC. Besides viewing trouble tickets from AMS, you can also view them for managed objects in the Topology view. AMS also integrates with eHealth Suite, which delivers fault, availability, and performance management across heterogeneous systems and application environments. Based on policy that you deploy, eHealth alarms and netHealth exceptions create alerts automatically. When an alert or alarm is closed by either AMS or eHealth, the corresponding alert or alarm is also closed. You can display eHealth At-a-Glance and Alarm Detail reports for a selected alert from the context menu or the My Actions menu in the Unicenter MCC.
Note: When you define profiles for alerts, start with display attributes and move up the list until you reach alert classes.
Alert Classes
Alert classes organize alerts and specify their initial properties. Classes are groups of alert profiles like queue, escalation policy, and display attributes. Classes make it easy to define alerts because properties are automatically given to alerts in each class; you do not have to specify all alert properties manually. Besides linking alerts to other objects like the queue and escalation policy, classes also specify properties not defined elsewhere. These include:

Urgency
    Indicates how soon a technician or operator should try to resolve the situation that caused an alert. Because a situation can become more urgent or less urgent as time passes, you can change the urgency manually or with escalation policies after an alert is generated.

Impact
    Indicates how much an event affects your business. A consideration in determining the level of impact is how many users are inconvenienced by a situation.

Priority
    A value calculated by multiplying urgency and impact. Priority provides an additional way, besides urgency and impact, to evaluate the severity of an alert.

Consolidation
    Groups alerts that have the same alert class, alert queue, node of origin, and alert text. Consolidated alerts appear in the Management Command Center as one alert with a number indicating how many similar alerts are grouped together. When alerts are consolidated, fewer messages appear on the Management Command Center. Also, if the alert creates a Service Desk request, only one request is opened.

Alarm
    Indicates an alert should be acted on promptly. Alarmed alerts attract more attention than other alerts because an Alarm dialog is displayed on your desktop.

Calendar (Optional)
    Indicates the dates and times when alerts in a particular class can be created.

Expiration Date (Optional)
    A date when alerts in a particular class will no longer be created. On this date, the class is deactivated.
Alert Queues
Alert queues are groups of similar alerts. Queues are usually based on role, function, department, geographical location, or another category that is meaningful to your enterprise. For example, your queues may be for departments like Accounting, Finance, and Research and Development. AMS organizes alerts by the queues that you define, and the Management Command Center displays alerts for each queue separately in the right pane. Alert queues are like console logs because they group messages. The difference is that console logs group messages by day, whereas queues group alerts by whatever criteria you specify.
An Alert text field becomes available on the Override page. This field is necessary if you want to consolidate alerts. Consolidation groups alerts that have the same alert class, alert queue, node of origin, and alert text. Consolidated alerts appear in the Unicenter MCC as one alert with a number indicating how many similar alerts are grouped together. You enable consolidation for a class on the Alert Class - Detail, Limits page.

Alert text can be up to 80 characters of text, variables, tokens, or any combination of these. The variables and tokens are the same ones used for Event Management. See Using Variables to Enhance the Current Action. Examples of alert text are:

Workstation Agent or Eastern Region

"&nodeid" to consolidate all alerts for a node. All events for that asset are assigned to the same Service Desk request.

"Issues for &9 assigned to &10" where the ninth word in the alert message is an office name and the tenth is the person assigned to resolve the critical situation. All alerts for that office and that person are consolidated.
The Workstation field on the Override page provides a way to run a user action (command) that you defined in Alert Management. Specify the name of an Event Manager node that forwarded the alert. This lets you take corrective actions directly on the node that manages the node where the alert originated. For more information, see User Actions. The Unicenter MCC shows the manager node in the Route Node column.
Note: Alert Management provides a way to define message records and actions quickly when you define alert classes. For more information, see Define Alert Classes.
By Alert Queue
Alert queues can be for departments like Accounting, Research and Development, and Marketing. Or, they can be for geographical areas like Eastern and Western regions. You define the queues that are meaningful to your business. Alerts in each queue are shown separately from alerts in all other queues.

When Alerts is selected from the drop-down list above the left pane of the Management Command Center, a list of alert managers appears. You can expand each alert manager to display alert queues. The left-pane list lets you display alerts in the right pane for each queue individually, or a status bar chart for a selected manager.

Alerts are displayed in the right pane when you select an alert queue in the left pane. Alerts in that queue are shown in the Alert Queue Records right-pane view. A context menu lets you perform many actions involved in resolving alerts. By right-clicking an alert you can acknowledge it, transfer it to another queue, open a Service Desk request, and more.

A bar chart of alert statistics for all queues appears in the right pane when you right-click a server in the left pane and choose Viewers, Summary. The chart contains bars that represent each queue. The length of the bars shows the relative number of alerts in each queue, and the color represents the highest priority for alerts in that queue, as indicated by the legend.
Note: AMS creates requests only for Service Desk resources that are active. This helps avoid flooding the Service Desk.

CA NSM comes with EMS, AEC, and AMS policy that can automatically create and close Service Desk requests. You can also write your own policy using message records and actions, correlation rules, and alert classes and escalation policies. This is how CA NSM interacts with Unicenter Service Desk. Alert policy definitions specify that Service Desk requests be opened and closed during the life cycle of an alert:

Open a Service Desk request when an alert is created. Indicate this using the Alert Class window. Note: AMS does not open a request if an existing request has identical summary, description, and asset properties. This prevents multiple trouble tickets describing the same root problem.

Open a Service Desk request when an alert is escalated. Use the Escalation Policy Editor.

Close a request when the alert that opened it is closed or made inactive. Use the context menu in the Unicenter MCC to close an alert; use the Alert Class window Main page or the Alert Properties dialog Status page to make an alert inactive.
Alerts that are associated with Service Desk requests include the request reference number. Likewise, Service Desk requests created by alerts indicate that an outside application opened the request. The activity log of a Service Desk request is updated automatically with additional information from AMS when duplicate alerts are created. The context menu in the Unicenter MCC lets you interact manually with the Service Desk. You can view requests, open a request, and search the Service Desk Knowledge Tools. For example, when you right-click an alert, you can see requests associated with that alert. When you right-click a managed object in the 2D Map or Topology view, you can see requests for the selected node.
Note: When Service Desk requests are opened and closed, a message is sent to the Event Console.
Scenarios
This section contains examples of situations that could trigger the creation of an alert and a Service Desk request.
Voice - TAPI
    Telephony Application Programming Interface (TAPI) is used on Windows to send one-way voice messages that are synthesized from text using the Microsoft Speech Application Programming Interface (SAPI) text-to-speech (TTS) engine. The default speech is set in the Windows Control Panel. The messages travel by telephone line using a TAPI-compliant telephony device to a human recipient.

Script
    Third-party or customer programs or scripts can be used to send one-way messages. Scripts and command definitions are stored in the file UNSConnections.ini in the install_path/config directory.
Based on the recipient, provider, or protocol information in the request, the Notification Services daemon (unotifyd) selects a protocol-specific driver to send the notification. Note: The daemon runs as a service on Windows and as a background process on UNIX/Linux.
3. The daemon assigns a tracking ID, which it returns to the command or program that sent the notification. Note: If the daemon stops and then restarts, it also restarts the outstanding notifications stored on disk.
4. If a response was requested, the daemon checks for it periodically from the service provider.
5. The daemon stores information about the notification on disk, and updates that information throughout the life cycle of the notification. This is called checkpointing. Updates include:

The request is created.
The service provider received the notification.
The provider delivered it.
The recipient read it.
The recipient sent a reply.
Recipient and Provider Registry

The Notification Services recipient and provider registry lets you enter information about recipients, recipient groups, and service providers. It contains recipient addresses, protocols, and connection information so that you need not enter everything manually with each notification. You just enter the name (alias) for the information you want to use. This saves time and may hide sensitive information.

The registry has files that you can edit with a text editor, the Windows GUI, or the unsutil command-line utility described later in this section. Each file contains an explanation of the file's contents and includes sample templates to help you define your own recipients and providers. The files are:

uns_provider.ini
    Defines provider aliases with connection information for the protocols that service providers support. See the topic Connection Information for details about what is required for each protocol.

uns_recipient.ini
    Defines recipient aliases and their default providers. Each recipient has one default provider.

uns_recipient_group.ini
    Defines recipient groups.

uns_recipient_address_link.ini
    Defines recipient addresses for each provider.

Recipient Plug-in

External recipient registries can be queried by the provided LDAP recipient plug-in or user-developed plug-ins. The file uns_source.ini defines the recipient plug-ins available to the Notification Services daemon for recipient resolution. When a recipient alias cannot be found in the recipient registry (uns_recipient.ini), and a recipient plug-in is configured, active, and successfully loaded, the daemon tries to resolve the recipient alias in the plug-in.
The file uns_source.ini provides samples for the default installed LDAP servers:

uns_rcp_ldap_eTrust.ini -- CA eTrust Directory
uns_rcp_ldap_sun.ini -- Sun Java Directory
uns_rcp_ldap_domino.ini -- IBM Domino LDAP Server
uns_rcp_ldap_novell.ini -- Novell eDirectory
uns_rcp_ldap_msad.ini -- MS Active Directory on Windows 2003 Server
Before you activate a source, customize the corresponding configuration file according to your environment. The files have comments that explain the values you can change.

unsutil

The unsutil command-line utility lets you define, alter, delete, and list recipients, groups, providers, and addresses. This utility provides facilities similar to the user interface, and its syntax is similar to cautil. For more information, see the online CA Reference.

Reports

Statistical reports from the unsutil command-line utility display the following types of information: summary, provider, protocol, recipient, sender, and error.

Notification Services Daemon Configuration

Some features of the Notification Services daemon can be configured to reflect the way you use Notification Services in your enterprise. For example, you can specify whether the daemon should create a transaction log and what that log contains, indicate whether to store information about notifications on disk (checkpointing), and enter a default sender name for several protocols. To customize the daemon, update the file UNSDaemon.ini. This file has comments that explain the values you can change. Also, see the procedure Configure the Notification Services Daemon in the online help and CA Procedures.
Commands

The Unicenter Notification Services commands are:

unotify
Sends a one-way or two-way notification message using the Notification Services daemon. If a reply or other status is requested, the command waits for the status and displays it when received. If the wait times out, you can use the uquery command for future queries of that notification.

unotifys
Sends a one-way or two-way notification message on the local node without using the Notification Services daemon. This command lets you send notifications when the daemon is not running. The unotifys command does not store notification information on disk because the daemon is not running.

uquery
Requests the status of one notification or all notifications from the Notification Services daemon. For a single notification, you can display the current status immediately, or wait until a requested status, such as a reply, is received.

uquerys
Requests the status of one notification sent by unotifys on the local node. It does not use the Notification Services daemon, so you can use this command when the daemon is not running.

unscntrl
Starts, stops, and queries the status of the Notification Services daemon.

unsconfig
Encrypts and decrypts a configuration file. Some connection information in the file uns_provider.ini requires a user name and password that you may want to protect. Only Notification Services applications can read an encrypted file.

Note: Before changing data in an encrypted file, decrypt it. After changing the file, encrypt it again.
Note: IDs are small integers and different daemons could possibly assign the same number to different messages. Therefore, each daemon should monitor a dedicated mailbox to avoid mismatched replies.
Troubleshooting
You can diagnose SMTP/POP3 errors using the smtp_session.log and pop3_session.log files in the Notification Services log directory. Use TELNET host PORT# to verify whether port 25 (SMTP) or port 110 (POP3) is blocked.
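As a scripted alternative to an interactive TELNET session, a small socket test can confirm whether the SMTP (25) or POP3 (110) port accepts connections. The host name below is a placeholder for your own mail server.

```python
# Minimal TCP reachability check (a scripted alternative to
# "TELNET host 25"): returns True when the port accepts a connection,
# False when it is blocked, closed, or times out.
import socket

def port_open(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host): port_open("mailhost", 25) for SMTP,
# port_open("mailhost", 110) for POP3.
```

A False result means the connection attempt was refused or timed out, which usually indicates a firewall block or a stopped service.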
The following modems have been tested with Notification Services TAP:

Note: ATI3 is the modem driver version and ATI6 is the chipset type. Modems listed with an init string require that the string be set for a successful connection.

Boca Modem 33.6
ATI3: V2.05C-V34_ACF_DS1
ATI6: RC336DPFSP Rev. 44BC
Init string: AT&F&C1&D2&G0-C0%C0%E2\N391=13

Boca Modem v.34 28.8
ATI3: V1.000-V34_DS
ATI6: RC288DPi Rev 04BC
Init string: ATQ0E1F1N1W1\K5S37=11S82=128S95=47X4

Hayes Accura 56K + FAX
ATI3: V1.120HY-K56-DLS
ATI6: RC56DPF L8570A Rev 35.0/34.0
LASAT SAFIRE 288
ATI3: LASAT Safire 288 V1.43C
ATI6: RCV288DPi Rev 05BA
Lucent LT Winmodem
ATI3: LT V.90 Data+Fax Modem Version 6.00
ATI6: 6.00,0,19,11C1,0448,1668,2400
MICA (V.90/K56FLEX/FAX/V.110) Hex Modem Module (installed in a Cisco AS5300 series router)
ATI3: Cisco MICA Hex Modem Module Product Information Country Code 001 V.90, K56FLEX 1.1, V.34+, V.32terbo, V.22bis, V.42, MNP2-4, V.42bis, MNP5, Fax, V.110, SS7_COT, TRACE, VOICE

HEX modem index: 00
CP code revision: 2.7.2.0 (revision date May 30 2000)
SP code revision: 2.7.2.0 (revision date 05/30/2000, MM/DD/YYYY)
ATI6: Returns an error; ATI4 was used instead.
ATI4: Cisco Mica V.90/K56FLEX/FAX/V.110

Unknown-name internal modem
ATI3: V1.301-V34_DP
ATI6: RC288Dpi Rev 05BA
Note: The modem works fine on UNIX/Linux. On Windows it works fine when the TAP protocol uses TAPI, but has problems when TAP uses the COM port directly. Direct COM port access works only after the modem has previously been used through TAPI, and stops working when the modem is reset.

ZOOM V.92 External Modem (3049)
ATI3: Zoom ACF3_V1.801A-V92 -C Z201
ATI6: RCV56DPF-PLL L8571A Rev 50.02/34.00
Init string: ATZ&F~AT+IBC=0,0,0,,,,,0;+PCW=2;+PMH=1;+PIG=1~ATE0V1S0=0&C1&D2+MR=2;+DR=1;+ER=1;W2~ATS7=60S30=0L1M0+ES=3,0,2;+DS=3;+IFC=2,2;BX4
Note: The init string was taken from the modem's Windows driver init string. Both this init string and no init string produced successful connections most of the time, but not always. The ~ character represents a carriage return.
Troubleshooting
If the protocol driver reports a timeout error while dialing or after connection, specify a modem initialization string. If the initialization string does not resolve the problem, or if no string is available, lower the baud rate. A higher baud rate may cause handshake problems and greater sensitivity to line noise.

If an error occurs during modem initialization, make sure the modem is connected properly and detected by the operating system:

On Windows, choose Control Panel, Phone and Modem Options. On the Modems tab, highlight the modem being used and click Properties. On the Diagnostics tab, click the Query Modem button. The modem should respond with response codes and not return an error. If it returns an error, the modem must be configured properly on Windows before it can be used with the protocol driver.

On UNIX/Linux, open a terminal program such as minicom. When the program has access to the modem device being used, enter the command ATQ0 and press Enter. The modem should respond with OK. If it does not respond, the modem must be configured properly on UNIX/Linux before it can be used with the protocol driver.
If your phone network requires a prefix number to dial out, the phone number used in the connection information must begin with this number and a comma. For example, if the dial-out prefix is 9, the phone number would be: 9,18005555555.
Troubleshooting
If there is an error trying to connect to a service provider and you are required to go through a proxy server, make sure the proxy server information and credentials (if required) are set properly. If you are behind a firewall and do not use a proxy server, the port number (usually 80) required by the service provider's website must be opened.
There is a workaround, but it is not straightforward, and the interaction between UNS_VOICES and the recipient is not seamless. When TAPI makes a call, it initializes the voice modem with settings that are defined by the modem's INF registry setting. Most modems use the voice command settings +VRN and +VRA (although some use #VRN and #VRA):

+VRN is the Ringback Never Appeared Timer. Its value is the time the modem waits for the first ring tone to occur before assuming the call has been answered. Voice modem manufacturers usually set +VRN to 0, which means the modem does not wait for a ring tone. This value needs to be greater than 0. A recommended starting value is 10, but finding the right value requires trial and error depending on the phone system.

+VRA is the Ringback Goes Away Timer. Its value is the time the modem waits for silence between one ring tone and the next before assuming the call has been answered. We recommend that you increase this value if it is 0.
You can find descriptions of +VRN and +VRA at http://www.cisco.com/en/US/products/sw/accesssw/ps275/prod_command_reference09186a00801f6327.html.

To change these settings:

1. Launch regedit and expand the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318} Key enumerations for each modem on your system appear, starting with 0000.

2. Expand the key for the voice modem you are going to specify to UNS_VOICES. Note: The key may be hard to find. Try reading the FriendlyName key value for each enumeration.

3. Select the VoiceDialNumberSetup key.

4. Find AT+VRN=0 and change it to 10. If AT+VRA=0, change that value, too.
Voice modems are not designed to detect a human voice, so it is not possible to wait until the recipient answers the phone and says something. With this workaround, when a recipient answers the phone, the message may not be spoken immediately, because the voice modem uses timers to determine when a ring tone has not occurred. TAPI reports a connected state only after the voice modem determines there are no further ring tones. A program cannot set these values itself: after control is returned, TAPI reads the settings in the registry and therefore overrides any changes a program may have made.
The following device is not supported:

US Robotics Sportster Voice 33.6 Faxmodem with Personal Mail
FCC ID: CJE-0375
FCC REG#: CJEUSA 20778-MM-E
ATI1: D869
ATI3: USRobotics Sportster Voice 33600 Fax RS Rev. 2.0
ATI9: (1.0USR0007\\Modem\Sportster 33600 FAX/VOICE EXT)FF
Troubleshooting
If an error occurs while using a voice modem or telephony device, make sure the device is connected properly and detected by Windows.

For a voice modem, choose Control Panel, Phone and Modem Options. On the Modems tab, highlight the modem being used and click Properties. On the Diagnostics tab, click the Query Modem button. The modem should respond with response codes and not return an error. If it returns an error, the modem must be configured properly on Windows before it can be used with the protocol driver.

For another telephony device, check the device's diagnostic or sample program (if any) to determine whether the device is working properly. Resolve any problems before using the device with the protocol driver.

For an Intel Dialogic D/4PCI telephony card, run the sample program talker32.exe installed with the Intel Dialogic System Release software (Program Files\Intel Dialogic System Software\Sample Programs\TAPI). This program must work before you can use the device with the protocol driver.
Root cause analysis lets you clearly differentiate the root cause event associated with an event stream from the non-root cause or symptomatic events that may not require a direct response. Root cause analysis helps you to reduce the number and frequency of events seen by console operators, eliminate message flooding, and reduce false notifications. Symptomatic events can provide valuable information about the impact of the root cause problem on the overall system, and, therefore, should not be discarded in all cases. The impact analysis function helps you alert users to an impending problem, thus reducing the load on your help desk. It also helps you to initiate failover or recovery procedures for the dependent systems, or alert operations staff that they need not address a particular problem. Note: On non-Windows platforms, AEC is installed with the Event Manager and Event Agent. On Windows, AEC is a separate component.
These false failure messages cause problems because message records and actions erroneously generate notifications and trouble tickets, and important messages may therefore be lost among the erroneous, secondary, false messages. Using AEC, you can do the following:

Distinguish between primary and secondary failure messages
Determine the root cause of the failure
Provide an impact analysis of a failure
Diagnose and filter unwanted messages
Respond to dynamically changing environments
AEC uses correlation rules to analyze the input messages in relation to each other and to identify the root cause messages among those incoming messages. A correlation rule performs the following functions:

Describes patterns that recognize related incoming messages
Defines timings for when to report root cause messages to the Event Console
Captures the logic of cause-and-effect relationships of all related messages
Describes the formatting of the root cause messages reported to the Event Console
AEC processes events as follows:

1. Listens to all incoming messages.
2. Uses patterns in the rule to identify the incoming messages that match.
3. Triggers the correlation rule when a matched message is detected.
4. Listens to incoming messages to see if any more messages match the patterns in the rule.
5. Uses timing, specified in the rule, to determine continuation of monitoring.
6. Stores the logic of cause-and-effect relationships of different messages.
7. Identifies which incoming messages are root causes, based on the cause-and-effect logic.
8. Applies the formatting specified in the correlation rule.
9. Reports the resulting message to the Event Console.
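The processing steps above can be sketched as a single pass over an event stream. This is an illustrative model only, not AEC's actual engine; the rule structure, patterns, and message texts are hypothetical.

```python
# Illustrative model of the AEC processing steps: match incoming
# messages against the rule's patterns, remember every hit, and at
# maturity report the root cause (the highest-priority matched
# pattern) using the rule's output format.
import re

def correlate(rule, events):
    matched = []                                  # (priority, event) pairs
    for event in events:                          # listen to all messages
        for prio, pattern in enumerate(rule["patterns"]):
            if re.search(pattern, event):         # pattern match triggers the rule
                matched.append((prio, event))
    if not matched:                               # maturity reached, nothing matched
        return None
    prio, root = min(matched)                     # lowest index = highest pipeline item
    return rule["format"].format(root=root)       # format and report

rule = {"patterns": [r"Ping Failure", r"Service Critical"],
        "format": "Root cause: {root}"}
events = ["Service Critical on Server A", "Ping Failure on Server A"]
print(correlate(rule, events))   # → Root cause: Ping Failure on Server A
```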
Event Definitions
Understanding event definitions is critical to understanding AEC, configuring it, and using it correctly. You can define two types of events in AEC: input events and output events. Input events define the patterns that AEC uses to match messages coming in to the Event Console. Output events are events generated by AEC and sent to the Event Console.
You define these events in the correlation rules when you configure AEC. Each event that you define has a key field, called the message string, which describes the event. The message string can contain regular expressions and tokens.
Configure AEC
Configuring AEC consists of defining correlation rules and saving them in the Management Database (MDB). You can create correlation rules using either the Integrated Development Environment (IDE), which is a Windows application, or the browser-based Policy Editor. Before you use AEC in a production environment, you typically work through the following procedures:

1. Define the correlation policy.
2. Deploy the correlation policy.
3. Test the correlation policy.
4. Save the correlation policy in the MDB.
Note: You can import CA NSM 3.x rca files into the Policy Editors and then save them to the MDB. After you define your correlation policies, you can deploy them to a test Event Agent (preferably a non-production machine) using deployment dialogs within the editors. The Windows IDE editor also provides a real-time testing environment by reading the Event Console messages and applying the rules you have defined. Note: You need only use the Policy Editors when you are defining, deploying, and testing rules. After you are satisfied that your new AEC policy is working properly, you can use the Deploy Policy dialog to deploy it into a production environment. See Implement AEC.
The policy editors have a real-time status capability that lets you see the following:

What rules were triggered
How many instances are running, and the values of tokens in each instance (for more information, see Tokens)
When rule processing started
How much time is left before maturity and reset of the rule (for more information, see Timing Parameters)
Note: If Security Management is running, and AEC policy is intended to create alerts in the Alert Management System (AMS), the user defining the policy must have permission to the CA-AMS-POLICY asset type. Without this permission, an access violation message appears. For more information about CA-AMS-POLICY, see the online CA Reference topic Asset Types for Windows and UNIX/Linux. It is under Security Management, Executables: Security Management, cautil Security Management Control Statements, Control Statements for ASSETTYPE.
Launching the Web Policy Editor independently, outside of Management Command Center, requires some basic configuration. The Web Editor automatically displays a Configuration tab that prompts for the name of the Distributed Intelligence Architecture (DIA) Knowledge Base and the Event Manager host. Note: Launching from within Management Command Center does not require this configuration because Management Command Center already understands these values and passes them automatically to the web policy editor within the right pane of Management Command Center.
The window contains the following Policy Wizards and descriptions:

Missing Event Wizard
Detects the absence of an important event. Example: When a Database Backup Started event is detected but the Database Backup Completed event is not detected within a specified time range, an alert is sent indicating that the backup failed.

Down for Maintenance Wizard
Suppresses messages from systems that are down for scheduled maintenance. Example: If software patches that require a reboot are scheduled for a particular machine, you can select an event that indicates that machine is down for maintenance. All messages coming from that machine during the specified time are suppressed.

Transient Event Wizard
Eliminates spike alarms when a resource has acceptable periods of peak activity. Example: A web server is known to have surges in activity every time new content is posted. The Transient Event rule suppresses alerts caused by these surges.

Suppression of Duplicates Wizard
Suppresses repeated similar events. Example: A host IP device failure causes DSM to repeatedly generate ping failure events. This type of rule suppresses the redundant events, allowing only the initial failure to trigger a trouble ticket.

Dependency Event Wizard
Raises an alert for a component based on events raised by other components. Example: A web application requires both a database and a file server. An alert for the web application is sent if either resource reports a problem.

Dual Dependency Event Wizard
Raises an alert for a component based on two events raised by other components. Example: A web application runs on a cluster consisting of two cluster nodes. An alert is sent if both cluster nodes report a problem within the specified time period.
Event Threshold Wizard
Detects the number of times a specific event occurs within a time range. Example: When CPU usage exceeds its threshold five times in two minutes, an alert is raised.

Root Cause Wizard
Raises an alert for a component based on events raised by other components and provides information about the event that initiated the issues. Example: A switch failure causes a ping failure, which causes an agent failure.

Missing Heartbeat Wizard
Detects the absence of a heartbeat message within a specified time range. Example: A heartbeat message is sent from a server to communicate that it is online. The absence of the heartbeat event from the server indicates that the server is offline.

User Defined
An empty rule list; it lets you manually create a customized rule.
Note: For more information about Advanced Event Correlation, see the guide Inside Event Management and Alert Management and the AEC online help.
Impact Analysis
You can configure AEC rules to generate, in addition to root cause messages, messages associated with the events impacted by the root causes. AEC analyzes input messages to determine the impact a failure has on a component of a system. AEC responds by sending out impact analysis messages based on its rules. These messages can contain specified substrings from both the root cause message and the impacted message. In addition, these impact messages can be sent to the Event Console in the form of an aggregate report, one for each non-root cause. AEC recognizes a dependency of event A on event B (defined in the correlation rules), so you can use it to report impact messages like the following:

A is impacted because B went down
B has impacted A
B has impacted [A1, A2, A3, A4]
For example, an operator shutdown on US-NY-01 has caused a ping failure and an agent failure.
You can use impact analysis to do the following:

Provide operators with complete and intelligent information, enabling them to understand and provide notification of the failures in the enterprise. Use impact messages to notify repair personnel to fix the real failures. These messages can also be used to notify users that a "false" failure of components has impacted their hardware or software.

Make infrastructure changes that affect the impacted components after receiving impact messages. For example, suppose a router failure has caused a group of applications to fail because they have been disconnected from the database server. After receiving the impact messages, you can provide an alternate route or use a failover router from the applications to the database server that bypasses the failed router, thereby reducing the downtime of these applications.

Provide system administrators with a way to measure the impact of failures of hardware and software throughout the enterprise, measuring not only the downtime of the failed component, but the impact of the failure on all affected components. For example, a failure on a router that is connected to two less critical workstations may not necessitate a repair until hours later. However, a failure on a router that supports hundreds of servers housing an enterprise's main applications, which are accessed in real time by its clients, requires an immediate fix.
Implement AEC
After you have created, deployed, and tested rules, you can put them into production. To run the correlation process unattended, deploy policy to the AEC Engine, which is installed with every Event Agent and Event Manager. The AEC Engine runs in the background and processes the events received at the Event Console. By default, Event Management passes all events to this Engine before doing any of its own processing (that is, message actions, writing to the log, and so forth). You can also configure AEC to process events sent directly to the console log, such as by the message action SENDOPER. Note: The Windows IDE Policy Editor processes events after they are sent to the Event Console, whereas the Engine processes them beforehand. So, when AEC is configured with reformatting or suppression features, these features work only when using the Engine.
Deploy Policy
You can deploy policy using either the Deploy Policy dialog or the ace_reload command-line utility. Both facilities let you select any combination of policies from the MDB and deploy them to any combination of Event Agents. Note: Do not use the Windows IDE Policy Testing function and deploy policy to the AEC Engine at the same time to process rules. If both the IDE Engine and the AEC Engine are running at the same time on the same machine, duplicate messages appear in the Event Console.
Pipeline
The pipeline is where most of the logic of cause-and-effect relationships of messages is defined. Each correlation rule has one pipeline listing the pipeline items, each of which contains descriptions of similar messages. Each pipeline item deals with only one message type. You group pipeline items to form a pipeline that has a cause-and-effect relationship among the items. The order of the items in a pipeline is important: any item is considered to be the root cause of all the items below it. When AEC receives messages that are matched by different pipeline items, it chooses the highest matched item and determines that message to be the root cause. For example:
Pipeline Item # 1: Ping Failure on Server Pipeline Item # 2: Service Critical on Server
The Promote/Demote feature lets you modify the order. The main components of pipeline items are as follows:

Match Event
This component indicates the conditions (message string and node name) under which the item triggers the rule.
Local Correlation Event and Local Reset Event
The Local Correlation and Local Reset Events describe the message strings that are sent to the Event Console at maturity (reset) of the correlation rule. In this way they are similar to the Root Correlation and Root Reset Events. However, you can configure AEC to use either the Local or the Root Correlation (Reset) message by setting one of two flags: Use Root Correlation Event and Use Root Reset Event let the Root Correlation and Root Reset messages override the Local Event messages. The advantage of setting the Root Correlation (Reset) message is that you configure the message in only one place. The disadvantage is that, regardless of the root cause, AEC generates the same formatted message (although, using tokens, it can be specialized to reflect the root cause event in each case). The disadvantage of setting the Local Correlation (Reset) message is that you must configure these messages for each individual pipeline item. This, however, lets you configure different messages to be sent to the Event Console for different root causes.

Exclusion Event
You can use the Exclusion Event with the Match Event to restrict the events that match the matching element. For example, you could define a Match Event to match any events containing the text ABC but exclude any events also containing the text DEF. If you defined the following events in a rule, all Application Failure events would be matched except those that refer to the lab applications:

Match Event: ^Application .* has failed$
Exclusion Event: ^Application (LAB_APP1|LAB_APP2) has failed$

You can also use the Exclusion Event to restrict the matching of any element of an event. For example, you could use it with the Match Event to match a given event from all servers except those specified in the Node field of the Exclusion Event.
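The match-plus-exclusion logic in this example can be demonstrated with ordinary regular expressions. Note the grouping `(LAB_APP1|LAB_APP2)` around the alternation; without it, the `|` would split the whole pattern rather than just the two application names. The helper function is illustrative, not part of AEC.

```python
# Match any "Application ... has failed" event, but exclude the two
# lab applications, as in the Exclusion Event example above.
import re

MATCH = re.compile(r"^Application .* has failed$")
EXCLUDE = re.compile(r"^Application (LAB_APP1|LAB_APP2) has failed$")

def matches(event):
    # An event qualifies only when the Match Event pattern hits and
    # the Exclusion Event pattern does not.
    return bool(MATCH.match(event)) and not EXCLUDE.match(event)

print(matches("Application PAYROLL has failed"))   # True
print(matches("Application LAB_APP1 has failed"))  # False (excluded)
```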
Reset Request Event
The Reset Request Event lets you reset an individual pipeline item; set the Enable Local Reset Request Event flag to enable it. This flag also lets you decrement the counter for the number of matching events associated with a pipeline item when a Reset Request Event is received. When you set this flag, a pipeline item is reset only when the counter is decremented to zero.
For example, suppose that you have five automated procedures that generate consecutive events to indicate that they have started and completed successfully. Using this flag, you can match the five start events and decrement the counter by assigning the Reset Request Event to the completion event. If the matching element has not reset at the end of the maturity period, one or more of the automated procedures must have failed to complete, and the rule can generate a Root Correlation Event to indicate that.

Local Reformat Event
Configured at the rule or matching element level, the Reformat Event lets you change the format of a matched event. The reformatted event can consist of the following:

All, or any element, of the original event (using &TEXT or &1 - &n, respectively)
Any global or user-defined token value
Static text
For example, suppose that you want to prefix any event that matches Pipeline Item # 1 with the string %AEC_HOLD. This prefix could then be identified by a standard Event Management message record/action, resulting in the event being placed in the Held Messages queue.

Local Revised Correlation Event
A higher pipeline item can be matched after a correlation event has been generated (for example, where the rule matures before the highest pipeline item is matched). In that case, you may want to generate an event indicating that the previous correlation event has been superseded. A Revised Correlation Event can consist of the following:

All, or certain elements, of the original root cause event (using &TEXT or &1 - &n, respectively)
All, or certain elements, of the new root cause event (using &RCTEXT or &RC1 - &RCn, respectively)
Any global or user-defined token value
Static text
For example, if Event B was initially determined to be the root cause but was subsequently replaced by Event A, you could generate the Revised Correlation Event "Event B has been replaced by Event A as the root cause for Problem X" using the template "&TEXT has been replaced by &RCTEXT as the root cause for Problem X".

Reset Request Acknowledge Event
The Reset Request Acknowledge Event can be generated whenever a rule or pipeline item resets in response to a Reset Request Event.
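The Revised Correlation Event template above expands by simple token substitution. Assuming, per the definitions of &TEXT and &RCTEXT, that &TEXT refers to the superseded root cause event and &RCTEXT to the new one, the expansion can be sketched like this; the helper function is illustrative, not the AEC token engine.

```python
# Expand &TEXT (the previous root cause event) and &RCTEXT (the new
# root cause event) in a Revised Correlation Event template.
def expand(template, text, rctext):
    return template.replace("&TEXT", text).replace("&RCTEXT", rctext)

template = "&TEXT has been replaced by &RCTEXT as the root cause for Problem X"
print(expand(template, "Event B", "Event A"))
# → Event B has been replaced by Event A as the root cause for Problem X
```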
Local Impact Event
If the rule is configured to generate impact events, the pipeline item's Use Root Impact Event flag is set to false, and the item is not the root cause item, this output event is generated to the Event Console after maturity to report the events impacted by the root cause.
Root Events
You can define root events to override pipeline events. The individual root events are defined as follows:

Reset Request Event
You can configure this input event to match incoming events that trigger the rule to reset immediately, rather than waiting for the duration of the reset period.

Root Reformat Event
You can configure this output event to reformat events that matched an item when the pipeline item's Reformat Matched Event flag is set to TRUE and its Use Root Reformat Event flag is set to TRUE.

Root Correlation Event
This component describes the message that identifies the root cause event to be sent to the Event Console at maturity of the correlation rule.

Root Revised Correlation Event
This output event is generated to indicate that a new root cause has been identified, in the following circumstances:

The rule-level Enable Revised Root Cause Event flag is set to TRUE.
The new root cause pipeline item's Use Root Revised Correlation Event flag is set to TRUE.
The pipeline item is matched after maturity and is higher than the current root cause item.
Root Impact Event
This component describes the message to be sent to the Event Console for each of the impacted messages. This message can contain components of the root cause message as well as the impacted messages. You can use event-by-event impact messages, or aggregate impact messaging. Note: For more information, see Impact Analysis.

Root Reset Event
This component describes the message to be sent to the Event Console when the correlation rule is reset. Note: For more information about resetting a rule, see Timing Parameters.
Root Request Acknowledge Event
This output event is generated to the Event Console to acknowledge receipt of the request when the rule has been reset by a Reset Request Event.
Boolean Operators
Each Boolean rule can have one or more nested Boolean operators, each with one or more pipeline items. These Boolean operators let you establish complex relationships among messages. When AEC receives many messages that are matched by different pipeline items, it performs the logical Boolean operations to determine if all conditions have been met. For example, assume you define a rule that contains the following components:
Boolean Operator AND has been selected. Pipeline Item # 1: Disk I/O Usage Critical Pipeline Item # 2: Database Backup Starting
When AEC detects both events (Item # 1 AND Item # 2) occurring within the maturity period, it generates a correlation event. In this example, you may want to stop the database backup to save disk I/O. The following components of Boolean operator pipeline items are the same as in a pipeline rule:

Match Event
Exclusion Event
Reset Request Event
Local Reformat Event
Reset Request Acknowledge Event
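The AND evaluation above can be sketched as follows. The matching is simplified to substring tests and the structures are illustrative; real AEC matching uses the Match Event patterns and maturity timing described earlier.

```python
# Illustrative Boolean AND rule: the correlation event fires only when
# every pipeline item under the operator is matched by some event
# within the maturity window.
def and_rule_fires(item_patterns, events):
    return all(any(p in e for e in events) for p in item_patterns)

items = ["Disk I/O Usage Critical", "Database Backup Starting"]
print(and_rule_fires(items, ["Disk I/O Usage Critical on NY01",
                             "Database Backup Starting on NY01"]))  # True
print(and_rule_fires(items, ["Disk I/O Usage Critical on NY01"]))   # False
```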
Note: The Local Correlation Event, Local Reset Event, Local Impact Event, and Local Revised Correlation Events are not available in a Boolean pipeline item, because all pipeline items must be considered together in a Boolean rule. Use the root versions to generate these output events.
Timing Parameters
Each correlation rule has time settings that specify when to report a correlation message to the Event Console once the rule is triggered, and when to stop processing the rule after it is triggered.
Tokens
You can use tokens in correlation rules. A token is similar to a substitution parameter and is recognized by the preceding ampersand character (&). For each event field, any tokens are replaced by their actual correlation rule values (if they exist); otherwise they are replaced by a word wildcard, that is, any value will match. AEC supports the following types of tokens:

Internal tokens
User-defined tokens
Internal Tokens
Internal tokens already exist in AEC and can be referenced without the need to define them. Internal tokens are also referred to as built-in tokens. AEC internal tokens closely match the tokens available in Event Management message records and actions, and can be used to identify predefined event variables. In addition, there are tokens specific to AEC. See the online help for descriptions of AEC internal tokens.
User-Defined Tokens
User-defined tokens can be included in any field of an incoming matching message. They are written as &(..), with the name of the user-defined token in the parentheses. User-defined tokens can be used to establish a relationship among messages that should be correlated. For example, if you want to relate a ping failure on one server to a service failure on that particular server, you can define a token such as &(NODENAME) in the matching string of the two messages.

An assigned token, such as &(NODENAME), parsed from an incoming message, can be reused in an output message. For example, if you enter &(NODENAME) in the node field of a pipeline item match message, it is assigned from the first matching message and may be reused as part of an output message, such as a local correlation message. The user-defined token value is assigned by the first matching message, and it does not change until the rule has been reset.

User-defined tokens can also facilitate template rules; that is, a new rule instance is triggered for each unique user-defined token assignment. For more information, see Template Rules.
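The &(NODENAME) behavior, captured from the first matching message and then frozen until the rule resets, can be sketched like this. The class and patterns are illustrative only, not the AEC implementation.

```python
# A user-defined token such as &(NODENAME) is captured from the first
# matching message and reused, unchanged, in later output messages
# until the rule resets.
import re

class TokenRule:
    def __init__(self, pattern):
        self.pattern = pattern   # e.g. r"Ping failure on (?P<NODENAME>\S+)"
        self.tokens = {}

    def match(self, message):
        m = re.search(self.pattern, message)
        if m:
            for name, value in m.groupdict().items():
                self.tokens.setdefault(name, value)   # first assignment wins
        return bool(m)

    def output(self, template):
        result = template
        for name, value in self.tokens.items():
            result = result.replace("&(%s)" % name, value)
        return result

rule = TokenRule(r"Ping failure on (?P<NODENAME>\S+)")
rule.match("Ping failure on server42")
print(rule.output("Root cause: &(NODENAME) is unreachable"))
# → Root cause: server42 is unreachable
```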
Global Constants
A global constant is a constant that you define once manually or by calling an external script, Dynamic Link Library (DLL), or executable and then use throughout AEC policy. Global constants apply to all rules in a policy. These constants can be used to implement a static text substring in multiple rules. The substring can be changed globally, making it unnecessary to modify many rules manually.
Global constants can be either static or dynamic. The value of static constants can be determined using the fields of an event. A dynamic constant can be configured to use an external script, DLL, or executable to return the constant value. The DLL is loaded periodically, and the specified DLL function is called to retrieve the constant value. In this way, constants that reflect the current state of the dynamically changing enterprise can be assigned. As with user-defined dynamic tokens, the script or executable invoked must return a string in the format:
[input-string]\n[output-string]
where input-string is the string substituted in input events, and output-string is the string substituted in output events. If you want to write a DLL function, it must be written using Microsoft MFC/ATL. The function declaration is as follows:
bool DllFunc(CStringList *lpParams, int *nBufSize, CString *lpReturnString);
where the parameters are as follows:
CStringList *lpParams
In parameter. A CStringList of all parameters specified during the creation of the global dynamic constant.
int *nBufSize
Out parameter. Return the length of lpReturnString here.
CString *lpReturnString
Out parameter. Return the string here. The string should be in the format Input\nOutput, as with executables and scripts.
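For the script or executable form of a dynamic constant, the contract is simply to print the two strings separated by a newline. A minimal sketch follows; the language (Python) and the standby-node lookup are illustrative assumptions, not part of the product:

```python
#!/usr/bin/env python
# Hypothetical dynamic-constant script: returns the current standby node
# name for substitution into AEC rules. The lookup is invented for
# illustration; a real script might query a CMDB or cluster manager.
def current_standby():
    return "server02"  # placeholder for a real lookup

input_string = current_standby()   # substituted in input (match) events
output_string = current_standby()  # substituted in output events
print("%s\n%s" % (input_string, output_string))
```

Because the script is re-invoked periodically, the constant can track a value that changes as the enterprise changes.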
Credentials
Dynamic constants and tokens can sometimes contain sensitive data in their command-line parameters, such as user names and passwords. To prevent clear-text passwords from being stored in the MDB or seen by a passerby, the policy editor provides a way to hide and encrypt passwords. You can add a Credential item containing the user name and password, and then reference it in the command-line arguments of the script, instead of the user name and password. Note: The password is always stored and displayed in its encrypted form. The reference to the Credential item has the following format:
&(CREDUSER:CRED1) and &(CREDPASSWORD:CRED1).
You must create a Credential item before you can reference it in this way.
Calendar Support
As with correlation rules, dynamic global constants support the use of CA NSM calendars. If configured, the value of the dynamic global constant is only refreshed when the calendar is active.
Template Rules
A template rule is a rule that acts as a generic template and lets multiple instances run. The rule should contain user-defined tokens that enable AEC to identify similar (but unrelated) events. Tokens are set when events are compared against the rule; the tokens take the values of the corresponding items in the event. When an event occurs that matches the match string in a rule but does not match the current user-defined token values, the new event invokes another instance of the rule. This new instance processes with its own token values and, at the time of its maturity and reset, creates its own correlation and reset messages.
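The instancing behavior can be pictured as a table of rule instances keyed by token value. This is an illustrative model only; the instance state and names are invented:

```python
# Sketch: each unique NODENAME value gets its own rule instance with
# independent state (event counts, timers, and so on).
instances = {}

def on_event(nodename):
    # An event whose token value has no instance yet spawns a new instance
    inst = instances.setdefault(nodename, {"events": 0})
    inst["events"] += 1

for node in ["server01", "server02", "server01"]:
    on_event(node)

print(len(instances), instances["server01"]["events"])  # -> 2 2
```

Two distinct node names yield two independent instances; the repeated server01 event is counted against the existing server01 instance rather than spawning a third.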
Regular Expressions
AEC allows the matching of events based on patterns instead of fixed strings. These text patterns are known as regular expressions, and they are stored in the match event fields of AEC. AEC uses a Perl-compatible regular expression library to evaluate regular expressions.
Regular expressions evaluate text data and return an answer of true or false; that is, either the input string matches or it does not. Regular expressions consist of characters to be matched as well as a series of special characters (also called metacharacters) that further describe the data, in the form of position, range, repetition, and placeholders. Within AEC, rules can be specified for the set of possible events that you want to match. Regular expressions can also be used to split the event apart in various ways, and to extract parts of strings and store the parts as tokens. All fields of the Match Event accept regular expressions, including the following:
Message Number
Message String
Node
User
Station
Severity
Device
Job Management
Process
User Data
Category
Tag
Source
Hour
Day of Month
Month
Year
Day of Week
Day of Year
AEC correlates only those messages whose node, user, station, message string, and so on, match what is specified in the match event of a rule, and then triggers that rule. For a list of regular expressions and their meanings, see the online help.
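The idea of matching an event field and extracting part of it as a token can be sketched with Python's re module standing in for the Perl-compatible library; the message format shown is invented:

```python
import re

# Match any "service stopped" message and capture the service name.
pattern = re.compile(r"^Service (\w+) stopped unexpectedly$")

m = pattern.match("Service spooler stopped unexpectedly")
if m:
    service = m.group(1)  # extracted part, analogous to a token value
    print(service)        # -> spooler
```

A message that does not fit the pattern, such as "Disk full on server01", simply fails the match, so the rule is not considered for it.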
Note: For more information about Advanced Event Correlation, see the guide Inside Event Management and Alert Management and the online help for any of the AEC help systems.
With comprehensive systems platform coverage and support for industry standards, Systems Performance provides a flexible and extensible architecture that simplifies the management of the numerous systems and devices that make up today's complex infrastructures. Its facilities for collecting, analyzing, and reporting performance information simplify performance and capacity trend analysis, and increase IT responsiveness to unexpected problems, ensuring higher service levels. Prepackaged management policies and secure, centralized configuration further simplify administration and increase IT efficiency, resulting in faster ROI.
Data Fundamentals
Systems Performance uses Performance Agents running on each monitored machine to collect data on a wide range of system and database resources, SAP resources, and SNMP-based resources. There are two types of agents:
Real-Time Performance Agent (prfAgent)
This agent is responsible for the real-time, transient collection of performance data, which it supplies to client applications such as Performance Scope.
Historical Performance Agent (hpaAgent)
This agent provides facilities for collecting, storing, and managing historical, time-banded data. Where necessary, it can act as a proxy to enable the monitoring of resources from SNMP-enabled hosts or devices that cannot support a Performance Agent directly, such as a network router.
For information on how to install and customize the Performance Agents, see the Inside the Performance Agent manual.
The three axes in this data model are as follows:
Different resource metrics (such as Disk Reads/sec or % Processor Time) are represented on the Y axis.
Time-bands across the day are represented on the X axis.
Different days (Monday, Tuesday, Wednesday, and so on), periods (average day within March, April, June), or machines (machine 1, machine 2, machine 3) are represented on the Z axis.
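The three axes can be pictured schematically as nested lookups. This is a sketch only; real cubes are binary stores managed by the Performance Agents, and the values shown are invented:

```python
# cube[z][x][y]: Z = day (or machine), X = time-band, Y = resource metric.
cube = {
    "Monday":  {"09:00": {"% Processor Time": 42.0, "Disk Reads/sec": 12.5}},
    "Tuesday": {"09:00": {"% Processor Time": 38.0, "Disk Reads/sec": 11.0}},
}

# Read one cell: Tuesday, the 09:00 time-band, processor utilization.
value = cube["Tuesday"]["09:00"]["% Processor Time"]
print(value)  # -> 38.0
```

Swapping what the Z axis represents (days, periods, or machines) gives the daily, period, and enterprise cube types described below.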
There are three types of performance cube:
Daily
These are a two-dimensional matrix of the Y axis (resource metric) and X axis (up to 24-hour timeband). You primarily use this cube to view how a resource is performing on a given day. Using this daily data lets you closely monitor resources on a real-time basis.
Period
These are similar to daily cubes except that they include the Z axis to track same-machine performance over multiple days. For example, you might use a period cube to monitor how a machine has performed over the course of a month.
Enterprise
These are just like period cubes except that the Z axis represents different machines during a single day rather than the same machine during different days. For example, you might use an enterprise cube to monitor the performance of a related set of servers.
You can use the Performance Configuration application to set up your cube requirements.
Performance Architecture
The computing facilities in many companies typically comprise a large and diverse collection of machines spread over a wide geographic area. The Systems Performance architecture supports such a distributed environment by providing high levels of scalability and enabling easy configuration across many thousands of machines. Furthermore, because it is often desirable to define logical groupings within the enterprise and manage each of these groups independently, the architecture implements the concept of multiple configuration domains.
The following figure shows the main components in the Systems Performance architecture. See the following pages for further details of these components.
The architecture of Systems Performance has two main functions:
To provide access to and management of the performance data gathered by the Performance Agents.
To enable the configuration of the Performance Agents.
The Performance Data Grid (PDG) in CA NSM Systems Performance (r11 and above) provides all these capabilities and more. In essence, the PDG lets you obtain performance information that covers any time period and at virtually any degree of granularity for any managed element in your enterprise (device, server, application, and more).
The PDG is formed from a network of orthogonal, distributed Performance Distribution Servers that form a grid to service data requests. This grid creates a single image of the performance data for the entire enterprise and grants you seamless access to the data. A notable feature of this design is that you do not need to know where the data you are requesting is physically stored, or which end-point is servicing the request; you simply place a query on the grid and obtain a response.
Configuration Services
Performance Agents (running on the managed nodes) report to the Performance Distribution Servers, which in turn report to a Performance Domain Server. These Domain and Distribution Servers run as persistent services/daemons, so they can react immediately to registration requests from agents and service instructions from the Performance Configuration application.
To achieve this, each Distribution Server examines its local cube store and builds up an index of the machines for which it has cubes, the cubes that exist for each machine, and the resource metrics for which there is data in the cubes. The Distribution Server then passes its index to the other servers with which it is registered so that they all have up-to-date information on each other. When you use an application like Performance Scope to request performance data, the application submits the request to any available Distribution Server. The server examines its local cube store and then either returns the requested data or, if it is not the most appropriate server to handle the request, forwards the query to a more suitable server.
Once replication has been configured, the agent automatically delivers the cubes to both the primary and backup Distribution Servers. Additionally, the primary and backup Distribution Servers exchange cube lists, allowing them to pull cubes from each other. This ensures that both the primary and backup Distribution Servers contain a full superset of performance cubes for machines managed by the primary Distribution Server.
When configuring Distribution Server replication, ensure that you use the correct server name in the cfgutil command. Failure to specify the full server name causes the cfgutil command to fail, and replication will not occur. The following procedure is considered best practice.
To configure Distribution Server replication
1. Issue the following command on the machine you want to use as the backup PDE:
camstat -n
This command returns the correct server name to specify in the cfgutil command.
2. Issue the following command on a Performance Domain Server to configure the backup PDE:
cfgutil -P <primary Distribution Server> -b <backup Distribution Server>
Replication is configured.
3. Issue the following command to verify that configuration was successful:
pdectl -h <backup Distribution Server> status
Status details for the backup Distribution Server are returned. The "Backup for" value should be the primary Distribution Server.
Summary Cubes
A Performance Distribution Server also builds a summary cube for each machine for which it is the primary source of performance data. This type of cube is designed to provide fast access to data that spans several days, weeks, or months. Each summary cube contains one year's worth of data for a single machine. However, the data is averaged to a granularity of 24 hours for each of the monitored resources, so a typical cube provides 365 samples per resource. All requests for performance data with a granularity of 24 hours or more are directed to the summary cube. Note that summary data occupies approximately 10% of all storage space.
Access to Metadata
As well as providing access to performance data, each Performance Distribution Server also provides access to metadata: information about the performance data itself. Examples of this metadata include lists of:
System types
All the machines known to the entire PDG
Machines filtered by system type
Resources monitored on a particular machine
Using facilities such as Performance Reporting, you can submit a request for this metadata. The Performance Distribution Server that receives the request examines its local cube store and then either returns the required metadata or forwards the request to a Distribution Server that is better able to handle the query.
You can use the Systems Performance tools to publish performance, configuration, and asset data to the MDB.
The Performance Domain Server automatically publishes historical performance data that it has obtained from the PDG to the MDB. Systems Performance tables exist in the MDB schema for this information. Although the data is published to the MDB automatically, the content and granularity are configurable. In addition, the Domain Server enables the publishing of asset and configuration information to the MDB. You can also use command-line utilities to retrieve performance data from either the PDG or one or more cubes and publish it to the MDB. For more information, see the Inside Systems Performance manual.
Administrative Tools
Systems Performance provides a number of tools for easily and effectively performing configuration operations.
Command-Line Utilities
Systems Performance also provides a number of configuration commands that complement the graphical Performance Configuration application. These commands include the following:
cfgutil
configserver
Starts, stops, and displays the status of the Performance Distribution Server on the local machine.
cubespider
Fetches missing remote cubes.
hpaagent
Controls the Historical Performance Agent.
pcmtocsv
Converts performance cubes to CSV format.
pdectl
Controls a Performance Distribution Server.
pdtodb_m and pdtodb_u
Publishes performance data to a relational database.
pdtoxml
Converts performance data to XML format.
prfagent
Controls the Real-Time Performance Agent.
profileserver
Starts, stops, and displays the status of the Performance Domain Server on the local machine.
rtpmon
Displays real-time performance data.
Types of Reports
Web Reports let you view all of your historical performance data through an Internet browser, in a way that is meaningful to you. You can view Web Reports from one of three sources:
WRS
Unicenter MP
Unicenter MCC
You will encounter three report types in Unicenter MP:
Configured Reports
Report Templates
Published Reports
Configured Reports
Configured Reports are out-of-the-box report templates that have not yet been executed. You can immediately use these to access meaningful information from supported products and view that information in a report. You can create and save these reports using the provided templates, or use the reports provided by the product. Configured Reports are listed in the tree according to product-specific classification.
Report Templates
Report Templates provide a way to customize reports, using option fields to fill in the criteria that you want to use to generate your report. They provide all of the possibilities of what you can define. Once you fill in a template, you can either publish the information into the tree as a published report, or simply add it to the list of configured reports in the tree.
For example, if you are concerned about x factor within a supported product, you can execute the x factor Configured Report, which provides summarized information on x factor activity for that product. But if you want to view more specific information on y within the x factor, you can fill in the y within the x factor template provided by that product to define your own configured report, or publish the report into the product tree so you can retrieve it. Web Reports provide several Report Templates and Configured Reports across supported products that let you see your data the way you want to see it. Report templates are listed in the tree according to product-specific classification.
Published Reports
Published Reports are the static contents of the results of executing reports in Unicenter MP. After you publish a report, you must reload the Knowledge Tree for the report link to become available. You may choose to delete items that are published in the tree after you have viewed them.
The key to Web Reporting is establishing a connection to the Web Reporting Servers using the WRS Catalog page in Unicenter MP. Establishing connections makes it possible for users to view reports running on these servers. The WRS Catalog lists all defined Web Reporting Services (WRS) connections and allows you to add, delete, or manage them.
Report Templates
Report templates provide a blank slate of fields in which you can fill in the criteria that you want to view in your report. They provide all of the possibilities of what you can define. Once you fill in a template, you can either publish the information into the Knowledge tree as a published report or simply add it to the list of configured reports in the tree. Report templates are listed in the tree according to product-specific classification. To get you started, Unicenter MP ships with several predefined reports. If needed, you can use the Report Configuration pages to edit these predefined reports to suit your needs.
Enhanced and simplified Security Management means reduced errors, greater responsiveness, and increased flexibility in meeting the needs of your organization. Most importantly, it means you can implement thorough and effective security policies without disrupting your work environment. Note: The CA NSM Security Management components no longer provide file access authorization. If you need this type of additional security, you may want to evaluate eTrust Access Control. For more information, see Integration with eTrust Access Control.
One of the most important advantages of a policy-based security system over access control lists (ACLs) is that systems are protected, not by their physical attributes and ACLs, but rather by security policies you define. These security policies are stored in the MDB. When you configure a default DENY security policy, newly created management policy is protected automatically. This set-and-forget nature of policy-based security is the key to managing hundreds of users on a system as easily as you can manage a dozen.
Security Policies
All of the asset access controls provided in Security Management are maintained through policies you define in the MDB. Once these policies are set, they are enforced until the security administrator changes them. Additionally, all access violation attempts are routed to the Event Console Log, providing a real-time window into security activity. The primary policy definitions used in managing security policies are as follows:
User Groups
User groups logically group users and access permissions together, providing a role-based security policy. Defining user groups is optional and is not tied to the native OS user groups.
Assets
Assets describe specific occurrences of a protected entity, such as an Enterprise Management calendar object. Users can be given access to an asset directly by granting permission to the user, or indirectly by granting permission to a user group of which the user is a member.
Asset Groups
Asset groups describe multiple assets with similar attributes; for example, all the Enterprise Management components to which a user group has CONTROL access. As with assets, users may be given access to an asset group directly by granting permission to the user, or indirectly by granting permission to a user group of which the user is a member.
The commit process, when executed, performs the following tasks:
Security Database Access
The first phase extracts the policies that you have defined in the Security Management Database. Therefore, that database must be online and active.
Security Administration (or SDM daemon) Active
Another phase takes the policies extracted from the database and places them into effect on the designated server, where they are processed by the Security Management functions, which must already be running. If Security Management is not running, the commit process detects this and issues an error message indicating that the attempt to place the new security policies into effect has failed.
To perform a commit process, issue the secadmin command with option c from the cautil command line, or select File, Commit from the menu bar in the Security Management GUI. Note: For command syntax, examples, and additional information about warnings and commit customization, see the secadmin command in the online CA Reference.
This chapter provides detailed information and points you to procedures that allow you to accomplish each of the phases of implementation.
You can associate the Default Permission option with the System Violation Mode option to produce a specific level of asset protection. For example, a Default Permission of DENY combined with a System Violation Mode of WARN causes Security Management to log all unauthorized asset access, but not deny access. (Assets specifically permitted to a user will not generate an Event Console Log entry.) The combination of DENY and WARN is especially useful while you are in the process of implementing your security policies, because it produces an audit trail of all asset access through the Event Management component. Alternatively, when the security option USE_PAT is set to YES, a Default Permission option setting of ALLOW combined with a System Violation Mode of FAIL enables Security Management to protect only those assets specifically defined as protected, that is, everything except the protected assets will be accessible. Many security administrators prefer this approach because it quickly protects a set of defined assets without affecting the rest of the system.
Asset Definitions
Asset definitions can contain an associated node value; similarly, within any definition that accepts an asset specification, a node specification can be associated with that asset. Upon completion of the initial Security Management installation, all node values are blank, which means that all definitions are global. Asset definitions are associated with user, user group, and asset group definitions. When you include a node value in an asset definition, Security Management applies the policy only to that specific node. In the absence of a node value, the policy is global and applies to all nodes supported by the MDB.
When you commit Security Management policies to your system, Security Management looks for rules that are applicable to your system. It retrieves all rules that have a node value equal to the system identification on which you execute the commit process; it then retrieves rules that are global (have no associated node value). The Security Management evaluators use this set of rules to enforce the Security policy.
Supply an Administrator ID for the Authorized User List option (SSF_AUTH on UNIX/Linux platforms): the user ID of the person who is authorized to modify your Security Management policies.
Note: For instructions about setting options, see Customizing Your Security Management Options in the online CA Procedures.
After starting the Security Management daemons in QUIET mode, you can test security policies without risk of adversely affecting your user community. Although you are in QUIET mode, an administrator ID is required; ensure that you have supplied an administrator ID for the Windows option, Authorized User List (SSF_AUTH option for UNIX/Linux platforms), before starting the daemons. Note: See the instructions for starting the Security Management daemons in the online CA Procedures.
User Group: Description
LAB: Laboratory Team
PLANNING: Planning Analyst
HRSAPPL: Human Resource Systems
CLSIFIED: Clearance is Classified
USER01 has access to any CA NSM management component permitted to any of these user groups. This example illustrates that permissions can be logically assigned, based on USER01's department (PRJTX), the user's area within the department (LAB), or specific title (PLANNING). You can also base access on the application on which the user works (HRSAPPL), or even the information clearance level assigned to the user (CLSIFIED). User groups exemplify the power of the Security Management architecture. If USER01 is promoted, changes jobs, or changes departments, permissions can be adjusted automatically by changing the user groups of which the user is a member. Note: For instructions about defining your users, see Defining User Groups in the online CA Procedures.
You can apply as many levels of nesting as you want. If, in addition to the corporate and department groups, you also have a company group, you can make the departments a member of corporate, and corporate a member of the company. Permissions apply as described in the following list:
All members of departments can access their department's assets, the corporate assets, and the company assets.
All members at the corporate level can access the corporate assets and the company assets.
All members at the company level can access only the company assets.
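The nesting behavior amounts to walking the membership chain upward. A sketch follows; the group names and the single-parent representation are illustrative assumptions:

```python
# parent[g] is the group that g is nested in (a member of), if any.
parent = {"SALESDEPT": "CORPORATE", "CORPORATE": "COMPANY"}

def accessible_asset_groups(group):
    """A member of a group can reach its own assets plus every ancestor's."""
    chain = [group]
    while group in parent:
        group = parent[group]
        chain.append(group)
    return chain

print(accessible_asset_groups("SALESDEPT"))  # -> ['SALESDEPT', 'CORPORATE', 'COMPANY']
print(accessible_asset_groups("COMPANY"))    # -> ['COMPANY']
```

Walking up from a department reaches the corporate and company assets, while the company level reaches only its own assets, matching the list above.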
The asset type is the category into which the asset logically falls. You can define your own asset types; you also have available over 60 predefined asset types identified by the prefix CA-. For a list of supported asset types, see the asset type tables in the online CA Reference under the Security Management cautil ASSETTYPE command. For procedures to define your own asset types, see Defining Dynamic Asset Types in the online CA Procedures. The asset name, or asset ID, is used primarily to identify a particular instance of an asset type specifically and individually.
For example, imagine a corporate asset group that contains various files and other general-purpose assets, one of which is the telephone directory. You want to associate these general-purpose assets with the more specific assets in three other asset groups that comprise payroll, administrative, and mailroom assets. You can accomplish the appropriate sharing of assets and avoid a duplication of asset groups by nesting the three more specific asset groups within the general-purpose asset group.
Asset Permissions
Asset permissions govern which protected assets a user can access and how they can be used after being accessed. Permissions are created through the Security Management GUI or the cautil command line interface by specifying the name of the asset or asset group to which you want to give a user or user group access. You can, conversely, provide the name of the user or user group to be permitted access to an asset or asset group. With this level of flexibility, you can manage security in the manner that is most comfortable for you. Important! If a user is denied access based on asset permissions (or the lack of an asset permission to a protected asset), a violation results. It is important to remember that the presence of a violation does not necessarily mean that the user will be stopped from accessing the asset. The System Violation mode (QUIET, WARN, FAIL) and access type (for example, LOG) control whether the user will actually reach the asset.
Access Types
The access type specified in a definition determines whether the user will be allowed to access an asset. When defining access permissions, you can use the following access types to meet your specific control or audit needs:
PERMIT
Allows access. The standard control is "Allow this user to access this asset."
LOG
Allows access and logs the event. LOG is used in those cases where you want to maintain an audit trail of access to a critical asset. For example, you may want to record all updates to the CA NSM calendar BASE. Do this by using two access types: a PERMIT for READ and a LOG for WRITE. READ authority is allowed normally, while a WRITE request generates a record of the access in the Event Console Log. The end user is not notified that this access has been logged.
DENY
Denies access and logs the event. DENY is useful for creating exceptions to normal permission sets.
Whenever an asset is referenced (either explicitly or generically) as the subject of a PERMIT or DENY rule, it becomes protected. This protection means that when you permit a user to access an asset, such as CA-CALENDAR, any other users who have not been granted permission to access this asset (those in FAIL mode) will be denied access when the security option USE_PAT is set to YES. On Windows, such protection is referred to as implicit DENY. USE_PAT and implicit DENY are disabled by default.
Important! Security Management considers the access mode when evaluating a rule. For example, if the access mode (READ, WRITE, and so on) of a permission does not match the requested access type, the permission is not used. For more information, see Access Modes.
Understanding how access modes and calendars affect security evaluation makes it possible to construct sophisticated security rules. For example, assume the staff in the systems management department has read and write access to the CA NSM calendar BASE, and you want to deny write access to members of that group on weekends. You could create a weekdays-only calendar and associate it with Payroll as a PERMIT access type for access mode UPDATE.
Access Modes
The access policies of Security Management support several types of authority, or access modes. Think of access modes as ways a user may try to access an asset. An access policy can specify one, several, or all of the applicable access modes for an asset. At least one access mode must be specified. These access modes include the following:
READ
Controls READ access to the asset.
WRITE
Controls WRITE access to the asset.
UPDATE
Controls UPDATE access to an asset. UPDATE is only valid for CA NSM asset types. UPDATE access implies READ and WRITE authority.
CREATE
Controls creating a new asset, such as a new calendar.
DELETE
Controls deleting or removing the asset. For example, this mode would prevent a calendar erase or delete operation. DELETE is useful for preventing accidental deletions.
CONTROL
Controls access to CA NSM administration objects, such as STATION, CALENDAR, and so forth.
To grant a user access to the Event Console Log, define a PERMIT rule with the following information:
User ID: USER01
Asset Type: CA-CONLOG
Asset Name: *
Access Modes: CONTROL
Access Type: PERMIT
Access Determination
When a user attempts to access an asset, Security Management looks at all rules associated with the user to find the ones that apply to the access in question. Security Management only considers rules that match the asset type, asset name, asset node, access mode, and any conditions determined by a calendar or criteria profile. If several permission rules match an asset name, the rule with the best fit (closest match to the asset name) is used, and an access type of DENY overrides LOG, which overrides PERMIT. For step-by-step procedures to define your access permissions, see the following topics in the online CA Procedures:
Permitting Users to Assets
Permitting Assets to Users
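The determination described above can be sketched roughly as follows. This is a simplified model: the rule representation is invented, glob matching stands in for the evaluator's name matching, and node, calendar, and criteria-profile checks are omitted:

```python
import fnmatch

# Each rule: (asset-name pattern, access type). Best fit is taken as the
# most specific pattern (longest literal part); among equally specific
# rules, DENY overrides LOG, which overrides PERMIT.
PRECEDENCE = {"DENY": 3, "LOG": 2, "PERMIT": 1}

def evaluate(rules, asset_name):
    applicable = [r for r in rules if fnmatch.fnmatch(asset_name, r[0])]
    if not applicable:
        return None  # no matching rule applies to this asset
    best = max(applicable,
               key=lambda r: (len(r[0].replace("*", "")), PRECEDENCE[r[1]]))
    return best[1]

rules = [("*", "PERMIT"), ("PAYROLL*", "LOG"), ("PAYROLL.MASTER", "DENY")]
print(evaluate(rules, "PAYROLL.MASTER"))  # -> DENY (closest match wins)
print(evaluate(rules, "PAYROLL.Q1"))      # -> LOG
print(evaluate(rules, "CALENDAR.BASE"))   # -> PERMIT
```

The exact-name DENY rule beats the broader LOG and PERMIT rules for PAYROLL.MASTER, while assets covered only by the generic rule fall through to PERMIT.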
Rule Evaluation
Two decisions are made during the security evaluation process. The first decision is whether a specific access attempt is considered authorized. If the access is not authorized, a violation results, and the second decision (what to do about the unauthorized access attempt) depends on the Enforcement mode in effect. The only Enforcement mode that results in access being denied is FAIL mode, which is set by the System Violation Mode. For additional information about Enforcement modes, see Access Violations and Enforcements.
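The two-stage decision can be sketched as follows. This is a hedged Python illustration; the return values are invented, and only the FAIL/WARN/MONITOR distinction is taken from the text.

```python
def enforce(authorized: bool, mode: str):
    """Sketch of the two-stage evaluation: first decide whether the
    access is authorized; if not, the outcome depends on the
    Enforcement mode.  Only FAIL mode actually denies access; WARN and
    MONITOR record the violation but let the access proceed."""
    if authorized:
        return ("allow", None)
    if mode == "FAIL":
        return ("deny", "violation logged")
    return ("allow", "violation logged")  # WARN / MONITOR

print(enforce(False, "FAIL"))  # ('deny', 'violation logged')
print(enforce(False, "WARN"))  # ('allow', 'violation logged')
```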
Scoping rules can be written only for CA asset types (those having the prefix CA-), but not for user-defined asset types. Scoping can narrow an access permission to a keyword object, a data object, or a command object. Each of these has a specific CA asset type and an associated Security Management option (which must be set before scoping can be applied).
Keyword object
Security Management Option - Windows: SSF Keyword Scope; UNIX/Linux platforms: SSF_SCOPE_KEYWORD

Data object
Security Management Option - Windows: SSF Data Scope; UNIX/Linux platforms: SSF_SCOPE_DATA

Command object
CM
When specifying the asset ID (type) for a CA data object (suffix DT), you must supply a setup character immediately preceding the operand (as the underscore is used with the node= operand in the previous example). The rule evaluator uses this character to edit the definition; it is required to indicate to the evaluator that the next specification is a new operand. You can use one of the following characters: tilde (~), slash (/), pipe (|), or underscore (_).
Note: Scoping on data objects is not supported through the EmSec APIs.
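The setup-character convention can be illustrated with a small sketch. The operand format here is assumed for illustration; this is not the actual rule evaluator's parser.

```python
SETUP_CHARS = "~/|_"

def split_operands(definition: str):
    """Treat any of the documented setup characters (~, /, |, _) as
    marking the start of a new operand, as the rule evaluator is
    described as doing.  Illustrative only."""
    operands, current = [], ""
    for ch in definition:
        if ch in SETUP_CHARS and current:
            operands.append(current)  # setup character starts a new operand
            current = ""
        else:
            current += ch
    if current:
        operands.append(current)
    return operands

# The underscore before node= marks it as a new operand:
print(split_operands("name=CAL01_node=HQ"))  # ['name=CAL01', 'node=HQ']
```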
CASF_E_465
Specifies the general message number used for all DENY violations.

userid
Specifies the ID of the user who caused the violation.

mode
Specifies the user's violation mode: W=Warn, M=Monitor, F=Fail.

assetname
Specifies the name of the asset involved in the violation. For WNT-FILE, UNIX-FILE, and UNIX-SETID, the asset name is a fully qualified path name.

terminal_device
Specifies the device the user was logged into at the time of the violation.

source_node
Specifies the node from which the user logged into the system.

access_type
Specifies the access mode, abbreviated as follows: Rd=read, Wr=write, Up=update, Sc=scratch, Fe=fetch, Cr=create, Co=control, Se=search, Ex=execute.

context
Specifies the context of the violation. For Windows intercepted events, specifies the access type (read, write, and so on). For UNIX/Linux platforms, specifies the system call name. For CAISSF resource checks through components, the context specifies the resource.
UNIX/Linux Reports
You can generate two reports on UNIX/Linux platforms that let you review Security Management policies.
Whohas Report
You can use the whohas report to look at the policies that have been set for a particular asset type and asset name. To create this report, run the following command from the command line prompt:
whohas [asset_type] [asset_value] {user_name} {node_name}
The following command was used to generate the sample report that follows:
whohas CA-CALENDAR BASE
The sample report shows two rules. Page 1 lists user audit (access modes rwdxc, criteria name allfcrit); page 2 lists user causer1 (access modes r---, criteria name rcrit). Each page shows the USER, NODE, and ASSETNODE values, the ACCESS MODES columns, and the RULE columns (NUM, FILE, SSF, NAME, ORIGIN, PERMISS, and PERMIT TEXT). The report ends with "Total rules: 2" and a footer line: Whohas run by root on Tue Jul 10, 2001 at 11:49:46.
The USER, NODE, and ASSETNODE values identify the user, the node associated with the user, and the targeted asset node, respectively. The whohas report groups the assets by the USER, USERNODE, and ASSETNODE values. The ACCESS MODES indicate the CAISSF access modes. The access modes are abbreviated as follows: R=READ, W=WRITE, D=DELETE, X=EXECUTE, U=UPDATE, C=CREATE, S=SEARCH, N=CONTROL. In addition to the FILE and CAISSF access type flags, the NAME field lists the internal criteria name (for diagnostic purposes) or the name of a custom jll criteria profile.
The RULE is a combination of the following:

ORIGIN: User or user group source of this policy
PERMISS: Permission granted by this policy
TEXT: Asset name for this policy
What-Has Report
You can use the what-has report to look at the policies that have been set for a particular user ID. To create this report, run the following command from the command line prompt:
whathas [userid] [node]
The following command was used to generate the sample report that follows:
whathas audit
The sample output begins with the user ID (Userid: audit) and lists each policy's Type (for example, PERMIT), expiration (Expires), Calendar, and asset Path.
Supported Components
Unicenter NSM r11.2 supports only a subset of the UNIX and Linux components that are included in the base Unicenter NSM product. The supported components provide upgrades for UNIX and Linux users in Event Management, Agent Technology, and other vital areas of the product.

Note: CA NSM r11.2 for UNIX and Linux does not support Ingres. Any UNIX and Linux information in the CA NSM documentation set pertaining to Ingres databases does not apply to CA NSM r11.2 users.

The following components are supported on UNIX and Linux in Unicenter NSM r11.2:

Event Management
Unicenter NSM r11.2 includes an upgrade of the Event Manager component on UNIX and Linux. The Event Manager supports the Calendar and uses the free embedded PostgreSQL database. For more information in this guide about Event Management, see the section Event Management in the chapter "Administering Critical Events."

Distributed State Machine
Unicenter NSM r11.2 includes the remote DSM manager component on UNIX and Linux. This component is the manager supporting Agent Technology, and it lets you remotely communicate with local DSM and WorldView Manager components in CA NSM r11.2 environments. For more information, see the section Understanding Systems Management in the chapter "Monitoring Your Enterprise."

Alert Management enablement
Provides the ability to create alerts for the Alert Management subsystem. The full AMS manager is not supported on UNIX and Linux. Alert enablement lets you forward alerts to a remote Alert Management Server. For more information about creating alerts for Alert Management, see the section Alert Management System in the chapter "Administering Critical Events."

Advanced Event Correlation Agent
Provides the ability to implement Advanced Event Correlation policy. The Advanced Event Correlation user interface is not supported. For more information about the Advanced Event Correlation Agent, see the chapter "Correlating Important Events."
Event Trap Processing Provides processing of external traps. This capability requires that you add the CCS Trap Daemon and CCS Trap Multiplexer.
CCI
Enables communication of certain CA NSM components with other components. For more information, see the section Common Communications Interface in the chapter "Securing CA NSM."

DIA
Provides cross-component communication for certain CA NSM components.

SDK
CA NSM r11.2 includes UNIX and Linux support for the Event Management and Agent Technologies SDKs. Utilities associated with the Event and Agent managers required for administration and maintenance are also supported.

Management Command Center
Provides a centralized web interface for viewing and managing data collected about your monitored resources. The following MCC dependencies are also supported: AIS, CAM (CA Messaging), and the EM Provider.
For more information about the MCC, see the chapter "Administering Critical Events" in this guide or the MCC Help. For more information about CA Messaging, see the appendix "Using Ports to Transfer Data."

High Availability Service
Enables high-availability ready components in clustered configurations. High-availability readiness is limited to Linux platforms only. Components that are high-availability ready in the base CA NSM r11.2 product are also high-availability ready on Linux, except for Event Management and the Calendar.

Job Management Option
Includes the manager component for the Job Management Option (JMO). For more information about JMO, see the appendix "Job Management Option."

CA NSM Security
Provides the built-in security solution for Unicenter NSM. CA NSM security can secure your environment at the object or access level. For more information about CA NSM security, see the chapters "Securing CA NSM" and "Securing CA NSM Objects."

Trap Manager
Lets you manage trap databases and trap filter files. For more information about Trap Manager, see the appendix "Managing Traps Using the Trap Manager."
The following list shows, for each supported component, where to find more information in this guide and in other guides:

Event Management: See the section Event Management in the chapter "Administering Critical Events" (this guide); Inside Event Management; MCC Help

Calendar: See the section Event Management in the chapter "Administering Critical Events" (this guide); Inside Event Management; MCC Help

CCI: See the section Common Communications Interface in the chapter "Securing CA NSM" (this guide); CA Reference; Implementation Guide; Programming Guide

Management Command Center: See the chapter "Administering Critical Events" (this guide); MCC Help

Distributed State Machine: See the section Understanding Systems Management in the chapter "Monitoring Your Enterprise" (this guide); Inside Systems Monitoring; Implementation Guide; MDB Overview

Job Management Option: See the appendix "Unicenter Job Management Option" (this guide); MCC Help

CA NSM Security: See the chapters "Securing CA NSM" and "Securing CA NSM Objects" (this guide); CA Procedures

Trap Manager: See the appendix "Managing Traps Using the Trap Manager" (this guide); Trap Manager Help

Alert Management: See the section Alert Management System in the chapter "Administering Critical Events" (this guide); MCC Help; Inside Event Management

Advanced Event Correlation: See the chapter "Correlating Important Events" (this guide); Inside Event Management
For information about installation and migration, see the Implementation Guide and Migration Guide. For information about the database abstraction, see the MDB Overview.
Compliant Components
The following CA NSM components provide FIPS 140-2 compliant encryption:

Systems Performance
Active Directory Management
Agent Technology
Common Communications Interface (CCI)
The following components provide FIPS 140-2 encryption support in certain situations:

Management Command Center
Unicenter Management Portal
Web Reporting Server
Systems Performance
Systems Performance provides FIPS 140-2 encryption support using the ETPKI for all sensitive data. The ETPKI wraps the FIPS 140-2 validated RSA BSAFE Crypto-C Micro Edition cryptographic module. Systems Performance uses the AES encryption algorithm with a 256-bit key to encrypt keys and data. It also uses SHA-1 (Secure Hash Algorithm) to hash the keys so that tampering can be detected. Note: When FIPS mode is not enabled, Systems Performance uses the PKCS #5 v2.0 algorithm to generate a password-based key. By default, FIPS encryption is disabled. You must enable the encryption, at which time any sensitive data is re-encrypted. Data is encrypted through a Systems Performance-specific library using a Data Encryption Key (DEK), which must be distributed to all Systems Performance servers where encryption and decryption are required.
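The SHA-1 integrity idea can be illustrated with Python's standard hashlib. This sketch shows only the general technique of storing a digest alongside a key; it is not Systems Performance's actual key handling or storage format.

```python
import hashlib

def key_digest(dek: bytes) -> str:
    """Compute a SHA-1 digest of a Data Encryption Key (DEK) so that
    tampering with the stored key can be detected later.
    Illustrative only."""
    return hashlib.sha1(dek).hexdigest()

dek = bytes(32)            # a 256-bit key of zero bytes, for illustration
stored = key_digest(dek)   # digest saved alongside the key

# Later, verify the key has not been altered:
assert key_digest(dek) == stored
assert key_digest(b"tampered" + dek[8:]) != stored
```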
Data Encrypted
The following Systems Performance data is encrypted when using FIPS mode:

SNMPv3 credentials
Includes the SNMPv3 security name, authentication password and protocol, and privacy password and protocol used to access SNMPv3 devices and their MIBs. This information is created by Performance Configuration and decrypted by Performance Scope for real-time monitoring and the Performance Agent for historical monitoring.
Files: Stored by the Performance Domain Server in files named resource.ext and <machine>.cmp. Copied by the Performance Distribution Server to the Performance Agent's configuration file Hpaagent.cfg if the agent is configured to monitor an SNMP device.

SNMPv1/v2 credentials
Includes SNMPv1/v2 community string information. This information is encrypted by Performance Configuration when accessing a MIB or device, and it is decrypted by Performance Scope for real-time monitoring and the Performance Agent for historical monitoring.
Files: Stored by the Performance Domain Server in files named resource.ext and <machine>.cmp. Copied by the Performance Distribution Server to the Performance Agent's configuration file Hpaagent.cfg if the agent is configured to monitor an SNMP device.
Batch Reporting credentials
Includes computer access credentials used by Performance Trend Batch Reporting to successfully generate and output reports. Batch Reporting must supply credentials to Unicenter Workload that let it interact with the desktop. These credentials are encrypted by Performance Trend Batch Reporting after being entered into the Performance Trend Batch Reporting Wizard, and they are decrypted by the Performance Trend Batch Reporting Generator when executing the Batch Reporting Profile.
Location: Stored by Performance Trend in the Performance Trend Batch Reporting Profile files (*.tbp) held within the Performance Trend area of the <Logged in User> or <All Users> application data area of the computer.

MDB credentials
Includes MDB connection credentials used by the Performance Domain Server to publish summary performance data to the MDB. These credentials are created through the Systems Performance Installer or the Performance Domain Server configuration utility (cfgutil), and they are decrypted by the Performance Domain Server when accessing the MDB to publish summary data.
File: dbcred.dat in the Performance Domain Server

Installer response file credentials
Includes MDB connection credentials that you enter before a response file-generated installation so that the Performance Domain Server can publish performance data to the MDB, and Performance Reporting can use the WorldView section of the MDB for reporting. The response file is created by the Systems Performance Installer, and the MDB credentials are decrypted by the Systems Performance Installer when running an installation in response file mode.
Location: Stored in a user-specified response file for later use in response file-driven installations.

Performance Data Grid access information
Includes the user names and domain names used to gain access to the Performance Domain Server and Performance Distribution Server data and operations.
The user names and domains are created by the user on the Performance Domain Server, and they are decrypted by the Performance Domain Server and Performance Distribution Server hosts. File: Maintained by the Performance Domain Server in a file named users.dat and distributed to the Performance Distribution Server so that it can also validate data requests.
Unicenter Management Portal and Web Reporting Service credentials Includes Unicenter Management Portal and Web Reporting Service connection details that are required for Performance Trend to publish reports to either of these tools. These credentials are encrypted by Performance Trend Batch Reporting after being entered into the Performance Trend Batch Reporting Wizard, and they are decrypted by the Performance Trend Batch Reporting Generator when publishing reports to Unicenter Management Portal or Web Reporting Service. Location: Maintained by Performance Trend in Portal Profiles (*.pop) that are held within the Performance Trend area of the <Logged in User> or <All Users> application data area of the computer.
We recommend granting only administrators permission to access the directory and key files on Performance Manager and Performance Agent servers, and granting non-admin users read access on servers running UI components.
Installation Considerations
You can perform an initial manager installation of Systems Performance in any of the following ways:

FIPS mode turned off
FIPS mode on with the default key
FIPS mode on with a custom key
We recommend that you perform an initial installation with FIPS mode turned off (the default setting), in which the installer continues to encrypt all data using password-based encryption. If earlier versions of Performance Agents exist in your enterprise, the manager may not be able to configure these agents with FIPS mode enabled. Once all Performance Agents are upgraded to r11.2 levels, you can enable FIPS encryption. Note: FIPS mode cannot be enabled if earlier releases of the Performance Agents are to be installed on platforms not currently supported by CA NSM r11.2. These earlier agents will be unable to decrypt the encrypted configuration information. No special steps are required when reinstalling manager components, UI components, or Performance Agents with FIPS mode turned on or off. For more detailed information about how the FIPS encryption option affects your installation and several deployment scenarios, see the Systems Performance documentation. For a description of the recommended installation scenario, see How to Install Systems Performance with FIPS Mode Off.
1. (Optional) Create a custom key if necessary by running setup.exe with the following parameter:

setup.exe /genkey <File path>

<File path>
Specifies the full path of an existing directory, including the key file name, where you want to store the key file.

Note: By default, Systems Performance uses an embedded key to perform the FIPS-based encryption and decryption and does not require the creation of a custom key.

2. (Optional) Copy the generated custom key to the CA NSM installation image by copying the DVD image to a writeable drive and copying the generated key file to the following location:

Windows

Windows\NT\SystemsPerformance\data

UNIX/Linux

/data/sysperf_key

When you place the key file on the image, the Systems Performance installer automatically copies it to the installed system.

3. Install the Manager components.

4. Install the UI components.

5. Install the Performance Agents.

Note: You can deploy additional Performance Agents after the initial setup process. For more information, see the Systems Performance documentation.

6. Stop all client applications and the Performance Domain Server, and turn on FIPS mode (see page 412) on the Domain Server.

7. Restart the Domain Server and any client applications.

8. Reencrypt all existing Domain Server-based data using the CASPEncrypt utility (see page 421).

9. Stop Performance Trend, and turn on FIPS mode (see page 412) on all Performance Trend servers.

10. Reencrypt existing Batch Reporting profiles on all Performance Trend servers using the CASPEncrypt utility (see page 421).

11. Redeliver all profiles to Performance Agent servers. Performance Agents continue to run with the configurations encrypted using non-FIPS based encryption until you redeliver all profiles.
The process is similar to installing Systems Performance with FIPS mode off. For details, see How to Install Systems Performance with FIPS Mode Off. For more information about how to upgrade, see the Systems Performance documentation.
<FILE>
Specifies the name of the file to export the active key to.

3. Copy the DVD image to a writeable drive.

4. Copy the file to which you exported the key to the following location:

Windows
Windows\NT\SystemsPerformance\data
UNIX or Linux
/data/sysperf_key or /<Platform>/data/sysperf_key
5. Install the Performance Agent. The current key is automatically deployed to the Performance Agent server.
An alternative method is to copy the key to the target system after the Performance Agent has been installed. Complete the following process to copy the appropriate encryption key to the Performance Agent server after installation:

1. Ensure that the Performance Agent is installed, and install it if necessary.

2. Log onto the Performance Domain Server.

3. Extract the key file by running the following command:
CASPKeyUtil -e <FILE>
<FILE>
Specifies the name of the file to export the active key to.

4. Log onto the Performance Agent server.

5. Install the key file by running the following command:
CASPKeyUtil -i <FILE>
<FILE> Specifies the name of the key file exported from the Performance Domain Server.
Note: You can only run this command on the Performance Domain Server.

3. Make the required updates to the users.dat file. Find the file at the following location:
%CASP_PATH%\DomainServer\data\users.dat
Note: For more information about how to update the users.dat file, see the Inside Systems Performance guide.

4. Re-encrypt the file after completing the updates by running the following command:
CASPEncrypt -y
-i, --install [FILE]
Generates and installs a new key into the key store. If you specify a file, this key is installed into the key store.

-c, --create <FILE>
Creates a new key in the specified file, but does not install it into the key store.

-e, --export <FILE>
Exports the active key to the specified file name.

-p, --purge
Purges the old key in the key store.

-p, --purgeall
Purges all of the keys in the key store.

-f, --info [FILE]
Displays the properties for either the active or specified key.

-?, --help
Displays the utility's help.

Important! The CASPKeyUtil utility maintains only two keys at a time (key and key.old). Therefore, you must ensure that all sensitive data has been reencrypted using the CASPEncrypt utility before you create a second custom key. When you create a second custom key, the original key is permanently deleted, and you will not be able to recover the data encrypted by that key if you have not reencrypted it.
-d, --domain
Reencrypts only Domain Server files.

-t, --trend
Reencrypts only Performance Trend files.

-a, --all
Reencrypts Domain Server and Performance Trend files.

-h, --help
Displays usage information.

-i, --info
Displays whether FIPS is enabled. Use this parameter at any time to confirm whether a switch to FIPS mode or non-FIPS mode was successful.

-x, --usersdec
Decrypts the user access control file (users.dat) for editing.

-y, --usersenc
Reencrypts the user access control file (users.dat) after editing.
Data Encrypted
The following Active Directory Management data is encrypted: Active Directory credentials Includes the password required to access Active Directory forest and domain information. This information is created by the ADEM Connection Settings tool (ADEMConfig) and decrypted by the ADEM Service. File: Stored by ADEMConfig in a file named forest.py and read from the file by the ADEM Service.
We recommend securing the file using operating system file security and giving read permissions only to ADM administrators and the local system account.
Agent Technology
The Agent Technology component of CA NSM provides FIPS 140-2 encryption support using the RSA Crypto-C ME encryption library through the ETPKI. Agent Technology uses the AES CFB algorithm with a 128-bit strength encryption key, the 3DES CBC algorithm with a 64-bit strength encryption key, and the SHA-1 algorithm to encrypt configuration files, communications with other CA NSM components, and SNMPv3 communications.
Data Encrypted
The following Agent Technology data is encrypted in a FIPS 140-2 compliant manner: Configuration files Includes files containing sensitive data such as passwords. SNMPv3 communications Includes encrypted content transported over a network using SNMPv3. This encryption is done using the AES CFB algorithm with a 128-bit key or the 3DES CBC algorithm with a 64-bit key. Component communications Includes all data exchanges with other CA NSM components.
Installation Considerations
You must turn on 'FIPS-only' mode for Agent Technology to use FIPS 140-2 compliant libraries and algorithms to protect sensitive data and communications.
Migration Considerations
After an upgrade, all new files are encrypted using FIPS algorithms. Encrypted Agent Technology data from previous versions works with CA NSM r11.2 after an upgrade. Some of the existing data is reencrypted automatically during the initial run-time, while other data you must reencrypt manually if you want to make it FIPS 140-2 compliant.
Data Encrypted
Data exchanged between CA NSM components using the Common Communications Interface is encrypted using FIPS compliant libraries and algorithms. Everything sent from the local to remote hosts, including the data itself, CCI headers, and user data, is encrypted.
Installation Considerations
You must do the following to enable FIPS compliant encryption after installing CCI:

1. Turn on FIPS mode.
2. Set the CAI_CCI_SECURE environment variable's value to YES to enable SSL support.
3. Restart the remote CCI daemon.
For more information about enabling SSL support, see the chapter "Securing CA NSM".
Data Encrypted
The following MCC data is encrypted in a FIPS 140-2 compliant manner: Web application login credentials Includes the user names and passwords entered to access the following web-based applications: AEC Web Editor, Configuration Management (UCM), Adaptive Dashboard Services (ADS), Discovery Configuration, eHealth Report Server, Web Reporting Service (WRS), and Unicenter Service Desk. These credentials are encrypted and stored in the MCC WebApplications directory located under the user's home directory whenever the user indicates that the credentials should be remembered. File: savedsettings.xml Communications with web applications Includes data sent between the MCC and other web applications that are hosted by Tomcat. These applications include AEC Web Editor, UCM, ADS, Discovery Configuration, eHealth Report Server, WRS, and Unicenter Service Desk.
Communications with AIS providers Includes data sent between the MCC and WorldView and Event Management using CA Messaging (CAM). Communications with DIA providers (Alerts and Console logs) Includes data sent between the MCC and Alert and Event Management using DIA.
Installation Considerations
During installation of MCC, a private Java Runtime Engine (JRE) is installed for use by MCC and other CA NSM components. The jsafeJCEFIPS.jar file containing the RSA BSAFE Crypto-J JCE Provider Module is installed in the extensions directory of this private JRE. At run-time, the MCC triggers the JRE to load the BSAFE JCE provider module. Therefore, FIPS compliant encryption of login credentials for web-based applications requires no post-installation steps. Encryption of communications between the MCC and web applications must be enabled manually after installation by configuring Apache Tomcat to use TLS encryption. For detailed step-by-step instructions for configuring Apache for TLS, see Configure Tomcat with SSL in the Implementation Guide. When configuring Tomcat to use SSL, you can also specify to use only FIPS 140-2 compliant algorithms by setting the "ciphers" attribute of the Connector element and listing the allowed algorithms. Specify the ciphers using the JSSE cipher naming convention. For more information about configuring Tomcat to use specific ciphers, see the Apache Tomcat and Sun JSSE web sites for details.
For encryption of communications between MCC and the WorldView and Event Management components, you must configure CA Messaging (CAM) to use TLS encryption. For more information, see Configure CAM to Use TLS Encryption. You must enable encryption of DIA-based communications between MCC and Event and Alert Managers by configuring DIA to use FIPS 140-2 compliant encryption. For detailed steps on how to configure DIA encryption, see Configure Communications for Encryption in the Implementation Guide.
Migration Considerations
Login credentials encrypted using a prior release of MCC are decrypted using the previous algorithm upon the first launch of MCC and automatically reencrypted when the MCC session ends.
Turn Off Password Caching for Event Management and WorldView Credentials
Login credentials for the WorldView and Event Management components are remembered through CA Messaging (CAM) using password caching, which is not FIPS compliant. You may want to turn off this password caching if you are concerned about the level of security it provides. Password caching is turned on by default.

To turn off password caching for CAM login credentials:

1. Access the ji.cfg file.

2. Set the following parameter to a non-zero value:
default.SEC.bypass_cache
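For example, the resulting ji.cfg entry might look like the following. The key=value form and the value 1 are assumptions for illustration; any non-zero value disables the caching.

```
default.SEC.bypass_cache=1
```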
Data Encrypted
The following Unicenter MP data is encrypted using FIPS 140-2 compliant libraries: Login credentials Includes persistent user names and passwords. File: database.properties Database credentials Includes database user names, passwords, and JDBC URLs entered in Unicenter MP to access MDB data. File: database.properties CA NSM credentials Includes Unicenter NSM login names and passwords entered in Unicenter MP to establish connections with CA NSM components. Communications with web applications Includes data sent between Unicenter MP and other web applications that are hosted by Tomcat. These applications are AEC Web Editor, Configuration Management (UCM), Adaptive Dashboard Services (ADS), Discovery Configuration, eHealth Report Server, Web Reporting Service (WRS), and Unicenter Service Desk.
Installation Considerations
Encryption of communications between Unicenter MP and web applications must be enabled manually after installation by configuring Apache Tomcat to use TLS encryption. For detailed step-by-step instructions for configuring Apache for TLS, see Configure Tomcat with SSL in the Implementation Guide. When configuring Tomcat to use SSL, you can also specify to use only FIPS 140-2 compliant algorithms by setting the "ciphers" attribute of the Connector element and listing the allowed algorithms. Specify the ciphers using the JSSE cipher naming convention. For more information about configuring Tomcat to use specific ciphers, see the Apache Tomcat and Sun JSSE web sites for details.
Data Encrypted
The following WRS data is encrypted using FIPS 140-2 compliant libraries: Login credentials Includes persistent user names and passwords for components that use WRS.
Installation Considerations
WRS requires a post-installation configuration of Apache Tomcat to use TLS to encrypt Tomcat-enabled communications using FIPS-compliant libraries. For detailed instructions for configuring Tomcat to use TLS, see Configure Tomcat with SSL in the Implementation Guide.
Trap Daemon
The Trap Manager lets you perform sophisticated trap database and trap filter file management. You can use the Trap Manager to manage trap information and translation messages stored in the Trap Database and trap filters stored in the trap filter file.

The CA Trap Daemon (CATRAPD) receives Simple Network Management Protocol (SNMP) traps on UDP port 162. These SNMP traps contain critical information about the latest status of your network environment, including the network itself and devices on that network. Because this information is received in the form of Management Information Base (MIB) variables and their numeric values, it is difficult to understand at a glance. The Trap Daemon reads the MDB trap tables, which contain all trap information and translation messages, and translates SNMP traps into meaningful, easy-to-understand messages. These translated traps appear on the CA NSM Event Console.

For every incoming trap, the Trap Daemon also searches the trap filters file for any filters that apply. If the specified filter criteria are satisfied, the trap is dropped from further processing and does not appear on the Event Console. This can be very helpful if you are only interested in certain traps.

Note: By default, CATRAPD does not translate the trap information it receives. You must configure CATRAPD to use the Trap Translation Database. You can also enable or disable translation for specific traps.
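The translate-or-filter flow can be sketched as follows. This is illustrative Python, not catrapd code; the trap table contents and return conventions are invented, although the linkDown/linkUp OIDs shown are the standard SNMP generic trap OIDs.

```python
def translate_trap(oid, table, filters):
    """Sketch of the catrapd flow described above: drop traps that
    match a filter, translate the rest via a lookup table, and fall
    back to the raw OID when no translation exists."""
    if oid in filters:
        return None                      # filtered: never reaches the console
    return table.get(oid, "Untranslated trap: " + oid)

trap_table = {"1.3.6.1.6.3.1.1.5.3": "linkDown: an interface has gone down"}
filters = {"1.3.6.1.6.3.1.1.5.4"}        # e.g. suppress linkUp traps

print(translate_trap("1.3.6.1.6.3.1.1.5.3", trap_table, filters))
print(translate_trap("1.3.6.1.6.3.1.1.5.4", trap_table, filters))  # None
```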
Trap Filters
The Trap Manager lets you easily manage trap filters stored in the trap filter file. You can use trap filters to filter out traps from appearing on the Event Console. For every incoming trap, the Trap Daemon searches the trap filters file for any filters that apply. If the specified filter criteria are satisfied, the trap is dropped from further processing and does not appear on the Event Console. This can be very helpful if you are only interested in certain traps. You can use the Trap Manager to view, add, edit, or delete trap filters.
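Conceptually, this filtering pass behaves like a match-and-drop list: if any filter's criteria are all satisfied by an incoming trap, the trap is discarded. The following Python sketch models the idea; the trap and filter field names are illustrative assumptions, not the actual trap filter file format.

```python
# Illustrative model of trap filtering: if any filter matches an incoming
# trap, the trap is dropped and never reaches the Event Console.
# Field names here are assumptions for illustration only.

def trap_is_dropped(trap, filters):
    """Return True if any filter matches the trap."""
    for flt in filters:
        # A filter matches when every criterion it specifies is satisfied.
        if all(trap.get(field) == value for field, value in flt.items()):
            return True
    return False

filters = [
    {"enterprise": "1.3.6.1.4.1.791", "generic": 6, "specific": 42},
]

trap_a = {"enterprise": "1.3.6.1.4.1.791", "generic": 6, "specific": 42}
trap_b = {"enterprise": "1.3.6.1.4.1.791", "generic": 6, "specific": 7}
```

In this model, trap_a matches the filter and would be dropped, while trap_b would continue on to the Event Console.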
The following list describes which Trap Manager features remain functional in a remote installation (Trap Daemon and Trap Manager on different computers):
File menu (Add Vendor, Add MIB File, Add Trap, Rename, Exit): Functional
View menu (MIBs, Refresh): Functional
Tools menu (Import, Find, Backup, Restore): Not functional, because the commands required to perform these tasks are executed only locally
Trap Daemon menu (Refresh Cache, Shutdown, Start): Functional
Help: Functional
Trap tab (right pane): Functional
Filter tab (right pane): Not functional, because the filter definitions are stored locally in a flat file, not in the database
By using a number of bridges in parallel, a single source repository can be bridged to many destination repositories or many source repositories can be bridged to a single destination repository. A many-to-one bridge configuration enables central monitoring of significant objects in a distributed enterprise environment. Classes must exist on a destination repository before bridging is done. Note: To use Repository Bridging on UNIX and Linux, the destination MDB must be hosted in an Ingres database because Repository Bridge uses an Ingres client to connect to the destination MDB on the remote server.
After these startup procedures are complete, the Repository Bridge is driven by notifications from the source repository. When a notification is received, the object to which it relates is analyzed to determine which of the following applies:
The object is bridged, so the notification is bridged.
The object should be bridged as a consequence of the notification, so the object is replicated in the destination repository.
The object is bridged, but no longer conforms to bridging policy as a consequence of the notification, so the replicated object is deleted.
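The three outcomes above can be sketched as a simple decision routine. The function and method names below are illustrative assumptions; the actual logic is internal to Repository Bridge.

```python
# Sketch of how a bridge instance might react to a source-repository
# notification. Names are illustrative assumptions, not the real API.

def handle_notification(obj, is_bridged, conforms_to_policy, destination):
    """Decide what to do with a notification for one object."""
    if is_bridged and conforms_to_policy:
        destination.apply_update(obj)    # object already bridged: forward it
        return "updated"
    if not is_bridged and conforms_to_policy:
        destination.replicate(obj)       # object newly qualifies for bridging
        return "replicated"
    if is_bridged and not conforms_to_policy:
        destination.delete(obj)          # object no longer conforms to policy
        return "deleted"
    return "ignored"                     # object is not bridged and stays so
```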
Fanout Architecture
The fanout architecture consists of one source repository and one or more destination repositories. Bridge instances run between the source repository and each of the destination repositories.
[Diagram: fanout architecture. A single source repository (Source A, with its DSM) connects through three bridge instances to Destination A, Destination B, and Destination C.]
Use the following guidelines when you are considering using a fanout architecture:
The number of bridge instances running on a host affects CPU utilization on that host. Each bridge instance independently processes notifications from the source repository. Therefore, the more activity in the repository, the more objects being bridged, and the greater the load on the host running the instances.
The number of bridge instances associated with a source repository increases the load on the source database server. Each destination repository in this architecture requires a separate bridge instance, which runs independently of the other instances associated with the source repository. This situation causes an increased load on the source database server as the server processes requests and queries from those instances.
Aggregation Architecture
The aggregation architecture consists of several source repositories and one destination repository. Bridge instances run between each of the source repositories and the destination repository.
[Diagram: aggregation architecture. Three source repositories (Source A, Source B, and Source C, each with its DSM) connect through bridge instances to a single destination repository, Destination A.]
Use the following guidelines when you are considering using an aggregation architecture:
Carefully monitor the cumulative number of objects bridged from the source repositories to the destination repository. If several source repositories exist, the number of objects in the destination repository can quickly exceed the recommended limits. The same guidelines provided for a standard repository should be followed for a bridged repository. To avoid problems, obtain estimates for the number of bridged objects from each source repository before implementation.
Bridging duplicate objects to a repository causes errors. If objects are duplicated across source repositories (that is, objects have the same name), and those objects are bridged to the same destination repository, errors can occur.
Bridge Configuration
Bridge configuration lets you develop and edit bridging policy, which is maintained in a .tbc file. Although it is possible to write or edit .tbc files manually (they are flat ASCII text files), we recommend that you use the interfaces provided to ensure the accuracy and consistency of the policy. On Windows, the Repository Bridge Configuration GUI lets you define bridging policy for a bridge instance. This interface generates the .tbc files, which are stored in the Bridge Config. Directory specified during installation. On UNIX/Linux, use the bridgecfg command to create bridging policy. This interface generates the .tbc files and saves them in $CAIGLBL0000/wv/config/bridge.
Bridge Control
The Bridge Control provides an operator with a means of starting, stopping, and displaying the status of bridge instances available on the local host. On Windows, the Bridge Control also lets you start the Configuration GUI where you can edit or delete existing configurations. The Bridge Control can be accessed through the Repository Bridge Control GUI or the command line. The Repository Bridge Control GUI displays information about the way the bridge was configured at installation, including the installation path and the path under which configurations are stored. On Windows, start the Repository Bridge Control GUI by selecting Start, Programs, CA, Unicenter, NSM, WorldView, Bridge, Bridge Control.
The Repository Bridge Control also has a command line interface, bridgecntrl, through which you can start and stop any number of configured instances, letting you write scripts or batch files to control instances on a particular host without user intervention. On UNIX/Linux, use the bridgecntrl command from the UNIX/Linux command line to start, stop, and display bridge instances on the local host. The bridgecntrl command creates a file, $CAIGLBL0000/wv/config/bridge/.bridgePid, that stores the process IDs of each running bridge instance.
Bridge Instances
A bridge instance implements the bridging policy defined in a bridge configuration file. A bridge instance is a generic shell that derives its configuration from a .tbc file. Only one instantiation of a given bridge configuration can be running at any time, which means that you cannot run several instances for the same source and destination repository. However, you can run any number of different configuration instances on a particular host. Depending on the logging level set in the configuration, you can monitor the status and activity of a bridge instance by inspecting the log file generated locally. In addition, you can monitor startup, shutdown, and critical error messages remotely from a designated Event Manager Message Console.
Troubleshooting
Check the Repository Bridge log files for errors. If the log file does not contain any reported errors, the problem may lie in the way the Repository Bridge has been configured. If errors are present, they may indicate a problem with the source or destination repository, or with the Repository Bridge itself.
5. Define the bridging policy by creating bridge configuration rules. Bridging policy is defined as a set of rules. Bridge rules consist of property and value pairs. The value can be specified as an explicit alphanumeric value, a mix of alphanumeric and wildcard characters, a numeric range set, or a combination. Rules are specific to a class of object. Identifying the class to which the rule relates determines the properties that can then be specified as rule criteria. Examples of each property/value format are as follows:
Example:
class_name: WindowsNT_Server, severity: 5
class_name: WindowsNT_Server, address: 172.24?.*.*
class_name: WindowsNT_Server, severity: {0,2,4-6}
class_name: WindowsNT_Server, address: 172.{24-28,35}.*.*
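The following Python sketch shows one way values in these formats could be evaluated. The matching semantics are assumptions inferred from the examples above, not the Repository Bridge implementation.

```python
# Illustrative evaluation of bridge rule values: explicit values,
# wildcards (* and ?), and numeric range sets such as {0,2,4-6}.
import fnmatch

def match_range_set(value, spec):
    """Match an integer against a range set such as '{0,2,4-6}'."""
    for part in spec.strip("{}").split(","):
        if "-" in part:
            low, high = part.split("-")
            if int(low) <= int(value) <= int(high):
                return True
        elif str(value) == part.strip():
            return True
    return False

def match_value(value, spec):
    """Match a property value against an explicit value, a wildcard
    pattern, or a numeric range set."""
    if spec.startswith("{") and spec.endswith("}"):
        return match_range_set(value, spec)
    return fnmatch.fnmatch(str(value), spec)
```

For example, severity 5 matches the range set {0,2,4-6}, and the address 172.248.1.9 matches the wildcard pattern 172.24?.*.*.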
6. Configure the logging facility, which lets you determine where logs are written and the level of logging you want. Each bridge instance has its own log file that can contain a large amount of information about the operational state of the Repository Bridge. If a bridge instance fails to start, shuts down unexpectedly, or displays any unusual behavior, you can inspect the log file to help determine the problem.
7. Configure Event Management integration. You can send Repository Bridge events to the Event Management Console, specify the Event Management node where startup, shutdown, and other messages are sent, and specify a message prefix to enable easy identification of the instance from which the event message originated.
8. Configure startup options.
9. Save the configuration file. If the configuration definition is successful, the Bridge Configuration GUI closes and the Bridge Control interface updates its list of available configurations.
This command also stops all instances of the Repository Bridge currently running on the hosts that are under the control of the service. You can use the bridge command from a command line to control bridge instances.
For more information about specifying parameters, see the online CA Reference. You are prompted to choose rule options if you do not specify an existing rule file with the -f parameter. The names of the source repository and the destination repository are saved in the configuration file. The bridge configuration file is saved as destination-name.tbc. Configuration files are written to $CAIGLBL0000/wv/config/bridge. Note: A field called vspactive is automatically created in your configuration file. It is not supported on UNIX/Linux, but is present for compatibility with Repository Bridge on Windows.
Component Interface
Allows a Component Instrumentation (CI) program that may come with a component to provide real-time component information. The CI handles communications between components and the Service Provider. The CI communicates with components that supply Component Instrumentation programs, which provide real-time access to the values for component attributes in the MIF database.
MIF Database
Contains information about hardware and software installed on a system. Each product that adheres to the DMI specification is shipped with a MIF. Upon installation of the component, information in the MIF file is stored in the MIF database. A MIF file may contain static information about a component, or it may describe how component data can be obtained through the component instrumentation.
MOM Terminology
The following terms define industry standards related to Microsoft Operations Manager (MOM). Also included are definitions that are specific to MOM.
Web-Based Enterprise Management (WBEM)
Web-Based Enterprise Management (WBEM) is a standard set of management tools and Internet technologies. The Distributed Management Task Force (DMTF) has developed standards for WBEM that include the Common Information Model (CIM) for databases, xml/CIM for coding, and CIM Operations over HTTP for transporting information.
Windows Management Instrumentation (WMI)
Windows Management Instrumentation (WMI) is a Microsoft infrastructure that supports the CIM model and Microsoft-specific extensions of CIM. It offers query-based information retrieval and event notification.
MOM Entity
A MOM entity is a MOM Server or a MOM Managed PC.
MOM Server
A MOM Server is a computer that has access to the MOM database.
MOM Managed PC
A MOM Managed PC is a computer with a MOM agent running on it. The MOM agent monitors the computer and reports problems to a MOM Server.
MOM Administrator Console
The MOM Administrator Console is the GUI where MOM is configured. It also provides the central monitoring point in MOM.
MOM Management updates MOM alerts. Use the MOM Management GUI or the momalertalter command in CA NSM to acknowledge an alert, assign someone to fix the situation that caused an alert, and indicate the progress toward resolving the situation. Note: On Windows, the node running the CA NSM integration with MOM must be in the local Administrators group on the node where MOM is running.
The error that caused the MOM alert is corrected. MOM Management notifies MOM that the alert is resolved. Use the MOM Management GUI or the momalertalter command.
Event Field: MOM Alert Content
Node: MOM server that generated the alert
User: Process that gathers MOM alerts
Station: MOM Managed PC where the event that caused the MOM alert occurred
Message ID: CAMM prefix
Message Text: MOM alert description field; the MOM alert URL is appended to the message, if possible
User Data: GUID (Globally Unique Identifier)
Category: MOM alert source field
The following table shows how MOM alert severity is converted to WorldView status:
MOM Alert Severity: WorldView Status
Success: Normal
20 (Information): Normal
30 (Warning): Warning
40 (Error): Critical
50 (Critical Error): Critical
60 (Security Breach): Critical
70 (Unavailable): Down
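The severity conversion above can be modeled as a simple lookup table, sketched here in Python (the fallback value for unknown severities is an assumption):

```python
# MOM alert severity -> WorldView status, per the conversion table above.
MOM_TO_WORLDVIEW = {
    "Success": "Normal",
    "Information": "Normal",        # MOM severity 20
    "Warning": "Warning",           # MOM severity 30
    "Error": "Critical",            # MOM severity 40
    "Critical Error": "Critical",   # MOM severity 50
    "Security Breach": "Critical",  # MOM severity 60
    "Unavailable": "Down",          # MOM severity 70
}

def worldview_status(mom_severity):
    # Fallback to "Normal" for unrecognized severities (an assumption).
    return MOM_TO_WORLDVIEW.get(mom_severity, "Normal")
```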
MOM Resolution State
85 (Acknowledged)
170 (Level 1: Assigned to help desk or local support)
180 (Level 2: Assigned to subject matter expert)
190 (Level 3: Requires scheduled maintenance)
200 (Level 4: Assigned to external group or vendor)
255 (Resolved)
Note: CA NSM provides integration kits to both of Microsoft's management applications, Microsoft Operations Manager (MOM) and System Center Operations Manager 2007 (SCOM). Although the integrations to MOM and SCOM can coexist on the same management server, each one integrates only with its Microsoft counterpart.
Elsewhere in the Domain
The domain where the SCOM Integration is installed must meet the following minimum requirements:
An instance of System Center Operations Manager must be installed and running.
A CA NSM Event Manager must be present because the SCOM Integration creates CA NSM events from SCOM alerts. The Event Manager may be on the same computer as the SCOM Integration.
A CA NSM WorldView Manager must be present because the SCOM Integration creates objects in the WorldView Repository. The Integration reflects the status of those objects, and SCOM alerts are created based on that status. The WorldView Manager may be on the same computer as the SCOM Integration.
An instance of the Management Command Center must be present because it is the user interface that shows SCOM objects in the WorldView Topology and alerts in the SCOM Alert Viewer. The Unicenter MCC may be on the same computer as the SCOM Integration.
SCOM Terminology
The following terms define industry standards related to Microsoft System Center Operations Manager (SCOM). Also included are definitions that are specific to SCOM.
Windows Management Instrumentation (WMI)
Windows Management Instrumentation (WMI) is a Microsoft infrastructure that supports the CIM model and Microsoft-specific extensions of CIM. It offers query-based information retrieval and event notification.
SCOM Entity
A SCOM entity is any device that SCOM manages, regardless of the method used to manage it.
SCOM Management Server
A SCOM Management Server is a computer that has access to the SCOM database.
SCOM RMS
A SCOM Root Management Server (RMS) is a computer that runs the SDK service needed for the integration to communicate. It is also a SCOM Management Server.
SCOM Agent Managed PC
A SCOM Agent Managed PC is a computer with a SCOM agent running on it. The SCOM agent monitors the computer and reports problems to a SCOM Server.
SCOM Agentless Managed PC
A SCOM Agentless Managed PC is a remotely managed computer that has no health service installed.
SCOM Gateway Server
A SCOM Gateway Server is a computer that provides a trust relationship between two managed domains.
SCOM Operations Console
The SCOM Operations Console is the GUI where SCOM is configured. It also provides the central monitoring point in SCOM.
Event Field: SCOM Alert Content
Node: SCOM server that generated the alert
User: Process that gathers SCOM alerts
Station: SCOM object where the event that caused the alert occurred
Message ID: CAOPS prefix
Message Text: SCOM alert description field
User Data: GUID (Globally Unique Identifier)
Category: SCOM object display name
The following table shows how SCOM alert severity is converted to Event severity:
SCOMMsgconfig Utility
The SCOMMsgconfig utility lets you select the SCOM alert fields that the CA Unicenter System Center Operations Manager integration includes in the corresponding CA NSM EM message. You can include any SCOM alert fields in the EM message, in any order.
4. 5. 6.
In CA NSM r11.2 and CA NSM r11.2 SP1, the Node field contained the SCOM server name and the Workstation field contained the SCOM alert originator name. In CA NSM r11.2 SP2, however, both the Node and Workstation fields contain the SCOM alert originator name.
Virus Scan
The Event Management suite of functions includes Unicenter Virus Scan. This utility, available only on Windows, automatically scans the local drive daily at midnight. Parameters can be set to customize the type of scan, the drive to be scanned, and the actions to be taken upon virus detection. CA NSM provides predefined message policies that automatically launch the Virus Scan utility at midnight. The installation procedure defines a Midnight* message record and an associated action to run Virus Scan every day at midnight. Note: To prevent Virus Scan from running automatically, delete the Midnight Virus Scan message record (Message ID: Midnight*) from the Message Records container, or delete only the message record's associated message action from the Message Action Summary container. Deleting either record prevents the automatic execution of Virus Scan. You can also run Virus Scan on demand by launching the inocmd32.exe command with your desired parameters. Parameters can specify the type of scan performed and the action to be taken upon virus detection. Upon virus detection, Virus Scan sends messages to the CA NSM Console Log and the Event Log. For more information about the inocmd32.exe command, including a list of valid parameters, see the online CA Reference.
Other legacy communications mechanisms may also be supported by CA NSM for the purposes of backwards compatibility. To support these communication mechanisms, certain ports in a firewall must be open. Note: CA is committed to reducing the number of ports that are required to use CA NSM. For this reason, we have identified the ports that are required and the ports that are optional. CA Technology Services can help you design the best port solution for your enterprise.
The following required ports are grouped by component:
CA Common Communications Interface (CAICCI): Used for System Performance and Continuous Discovery. Required for NSM communications that use CCI between Windows servers.
CAM: Used for MCC to Manager communication. Only requires the firewall to open outbound.
DIA: Used for MCC to Manager communication and for manager-to-manager communication. Only requires the firewall to open outbound.
Apache Tomcat (port 9090, TCP): Services incoming requests by component applications that expose functionality through Tomcat, such as Unicenter Web Reporting Server. Also used when Tomcat waits for a shutdown command.
Ingres: Used to communicate with the MDB server if you are using an Ingres database. The port number is bound to the Ingres instance name; the default code is EI, but it can be changed at installation.
Optional Ports
This table lists the optional ports that must be open in a firewall to support certain low-level features or compatibility with a previous version. These ports are grouped by component in the Ports by Component section.
SNMP trap reception: port 162, UDP (install does not check whether the default port is in use). Native SNMP traps sent to DSM policy. Required to receive traps from non-CA agents. Additionally, Enterprise Management requires this port if Trapmux is used to support SNMP V3.
CAM: port 4105, TCP. Used for System Performance, Continuous Discovery, and CAM. This port needs to be opened in a firewall only if the CAM communications method is set to TCP; the default mode of operation among computers is UDP.
catrapd (Trapmux): ports 6161 and 6163, UDP. When Trapmux is active, used as the catrapd command port. If Trapmux is being used (SNMP v3 support is activated), catrapd opens this port on which to listen.
Port 6665, UDP: Used only if DIA is not installed.
Mobile Services: port 8888, TCP (install does not check whether the default port is in use). Used for communications between a CA NSM manager and Pocket PC devices. Disabled by default. If Pocket PC connectivity is required, Mobile Services must be activated and this port opened.
Port 7000, TCP: Default for the AP/OPS interface component. Configurable at site.
Port 7774, TCP: In the field since 2000. Not required if all nodes are at r11 or higher. Required when using Agent Technology processes with the -@ option.
2.
5.
6. Set RMI_REGISTRY_PORT and RMI_DATA_PORT to the port numbers you want. The two port numbers must be different from each other and should be different from those you set for RMI_REGISTRY_PORT and RMI_DATA_PORT in the dna.cfg file.
7. Save the ukb.cfg file.
8. Modify the SRV record and set the port number field to the same value as you have for RMI_REGISTRY_PORT in the ukb.cfg file. See the topic Configure Unicenter Domain Name Services earlier in this appendix. If you do not have an SRV record in the domain, skip this step.
9. Stop and restart both of the following services or daemons to apply the changes:
CA DIA 1.3 DNA
CA DIA 1.3 Knowledge Base
The following example assumes you want to set customized ports to 16001, 16002, 16003, and 16004:
1. In the dna.cfg file, set the following:
   RMI_REGISTRY_PORT = 16001
   RMI_DATA_PORT = 16002
2. In the ukb.cfg file, set the following:
   RMI_DNA_REGISTRY_PORT = 16001
   RMI_REGISTRY_PORT = 16003
   RMI_DATA_PORT = 16004
3. In the SRV record of DNS, set the port number field to 16003.
CAM combines the lightweight benefits of UDP with the reliable delivery of TCP. A CAM server process runs on each host supporting CAM. CA's applications that use CAM communicate with the CAM local server, which then forwards messages to other CAM servers or to other CAM client applications on the same computer. For more information about CAM, see the CAM product documentation. CAFT is a simple file transfer protocol (similar to FTP) that uses CAM for its data transport.
Windows Executable / UNIX/Linux Executable:
dscvmgrservice.exe / CaDiscMgrService
dscvagtservice.exe / CaDiscAgentService
perfscope.exe / N/A
perftrend.exe / N/A
java.exe / java
egc30n.exe / N/A
discover.exe / N/A
hpaprofile.exe / N/A

Systems Performance:
configserver.exe / configserver
Performance Distribution Server: profileserver.exe / profileserver
Performance Agent: prfagent.exe / prfAgent, hpaagent.exe / hpaAgent, hpacbman.exe / hpacbman, hpacbcol.exe / hpacbcol
Performance Utilities: cubespider.exe / cubespider, rtpmon.exe / rtpmon, cfgutil.exe / cfgutil, pdtodm_m.exe / pdtodm_m, pdtodb_u.exe / pdtodb_u
UNIX/Linux only: pdectl, pdgstat, capmpde, pdesumgen
CAM/CAFT Binaries
The following list of CAM binaries includes the principal CAM components, as well as utilities and configuration tools.
Windows / UNIX/Linux: Description
cam.exe / cam: CAM server
camabort.exe / camabort: Stops the CAM server (forcefully)
camben.exe / camben: Benchmarks a communications link
camclose.exe / camclose: Stops the CAM server cleanly (informs clients first)
camconfig.exe / camconfig: Changes the CAM configuration and routing
camping.exe / camping: Similar to ICMP echo request (ping), but can check availability of client applications as well as hosts
camq.exe / camq: Lists and manipulates queues in the CAM server
camsave.exe / camsave: Saves the CAM server's configuration in the same format as cam.cfg
camstat.exe / camstat: Displays detailed status information for a CAM server
camswitch.exe / camswitch: Forces a log file switch
How to Encrypt the MCC Data Transport (CAM) for AIS Providers
For security reasons, you may want to encrypt the information going over the network between the MCC and the AIS Providers (WorldView, DSM, and so on). These providers use the AIS subsystem, which in turn uses CAM. Complete the following process to encrypt CAM for AIS providers:
1. Install the CA Secure Socket Adapter (SSA).
2. Reconfigure CAM to use the newly installed SSA component.
3.
Note: SSA 2.0 is not currently supported on Solaris on Intel. All other manager/client configurations are supported.
On UNIX platforms, the CAM server detects SSA's presence at startup and makes use of the SSA library to interface with the underlying communications layer. On Windows, we provide a version of the CAM server code that has been adapted to use SSA. In both cases, the adapted CAM performs no differently unless SSA is configured to adapt the port that CAM uses for TCP.
Alternatively, you can configure specific paths that you want to adapt by defining them in the *PATHS section. However, this approach could be cumbersome on large networks and difficult to maintain when machine addresses are determined by DHCP. A more usable option is to select the machines on which you always want to use SSL for CAM communications and configure them as follows in cam.cfg:
*CONFIG udp_port=-1
With this setting, all remote paths created by CAM are TCP paths (and can then be adapted, using SSA to use SSL). Also, if other machines attempt to establish a UDP path, they are rejected and switched to TCP. However, the part of the network where security is not required can continue to use UDP. Note: With this configuration, one unencrypted UDP message is sent and rejected for connections that are switched to TCP.
The command associates the SSA-enabled version of the CAM server with the CAM service.
3. Restart the CAM service.
CAM is now SSA-enabled. You can reverse this process by running the following command and restarting the CAM service:
cam install
This command requests SSL/TLS encryption on the CAM TCP port and enables use of the SSA connection broker (port multiplexer) for connections using that port. The port multiplexer must be used to allow support for non-encrypted TCP connections, as it enables the SSA software to differentiate between the two. Legacy connections are also allowed on the port but are restricted to within-machine connections by binding to 127.0.0.1, the IPv4 localhost address. In an IPv6 environment, you may need to replace the final parameter value of 127.0.0.1 with localhost (or 127.0.0.1;::1 if localhost cannot be resolved to one or both of these addresses). Note: On some machines, localhost may not resolve to any addresses.
On AIX, in rare circumstances, CAM may not be able to accept local connections when using CAM 1.12 or later if you set a bind address. If you experience this issue (one symptom is that the camf process is running but utilities such as camstat claim that it is not), remove the PmuxLegacyPortBindAddress parameter from the initial definition or set it back to its default value. If you want to accept unadapted connections from remote machines, you can omit the PmuxLegacyPortBindAddress parameter, but you must also define an appropriate OutboundHostList to ensure that outward connections to these machines are not adapted. This operation may prove complex in practice, and the most viable policy is to encrypt all connections. You could use UDP for non-encrypted connections, but this would require you to explicitly configure (in CAM) all encrypted connections. SSA 2.1 will improve flexibility in this area.
Alarm types: Sends all WorldView status updates as state change alarms. WorldView has no true alarms, only object severity and status text properties. The connector creates an alarm for any object with non-normal severity. Because each object can have only one severity, only one alarm can exist for an object at any time. If a state changes, a new alarm replaces the previous one, or clears the alarm if the state changes to Normal.
DSM Import
Connection to Silo: Uses the DSM ObjectFactory to connect to the ORB.
Automatic CI and service synchronization: Provides this support for DSM objects. Services are not imported from DSM.
Object updates: Registers for DSM callbacks for additions, deletions, and updates to DSM objects.
Events and Alarms: Uses the DSM ObjectFactory to call the connector event handler. Text provided as the reason in the ObjectFactory callback is imported as alarm text. Starting with CA NSM r11.2 SP2, the DSM connector imports more detailed information for each event.
Object types and classes: Imports DSM objects of the following types as CIs (all standard classes for these objects are supported): host-level objects, agent objects, and leaf node objects (metrics).
Note: No DSM objects are imported as services.
Alarm types: Supports all standard DSM alarm types.
Note: The metrics and resources being monitored by DSM agents that appear in the WorldView topology under the agent objects (DSM granular objects) do not exist in the WorldView repository by default. The WorldView connector imports only objects that exist in the WorldView repository, so these DSM granular objects are not automatically included in any imported BPV service definition. However, you can insert the DSM granular objects into the WorldView repository using SAMP. With this feature enabled, the WorldView connector can automatically import services containing DSM objects.
The CA Spectrum-NSM Integration Kit is included on the product media for both the CA NSM and CA Spectrum applications.
CA SystemEDGE Agent
CA Virtual Performance Management uses the CA SystemEDGE agent and its application insight module (AIM) plugins for monitoring and managing a broad range of physical systems and virtual resources. CA NSM lets you manage the CA SystemEDGE agent and the AIMs that are distributed with CA Virtual Performance Management. For more information, see the guide Inside Systems Monitoring in the CA NSM documentation set and the CA Virtual Performance Management documentation. Some CA VPM features are not available with version 4.3 of the CA SystemEDGE agent, and with version 5.0 running in legacy mode. Use version 5.0 of the agent in its regular operating mode to enable full CA VPM functionality. Note: For more information about the CA SystemEDGE agent, see the CA SystemEDGE User Guide.
The CA SRM AIM Agent View (abrowser) lets you view AIM summary information and view, create, and manage response time tests. The Agent View supports CA SRM r3.0, which must be running under CA SystemEDGE r5.0; CA SRM cannot run without CA SystemEDGE. Note: For more information about how to use the CA SRM AIM, see the Service Response Monitor User Guide and SRM AIM Agent View Help.
Xen AIM
XenServer is a server virtualization platform that offers near bare-metal virtualization performance for virtualized server and client operating systems. XenServer uses the Xen Hypervisor to virtualize each server on which it is installed, enabling each to host multiple virtual machines simultaneously with guaranteed performance. XenServer allows you to combine multiple Xen-enabled servers into a powerful Resource Pool, using industry-standard shared storage architectures and leveraging resource clustering technology created by XenSource. In doing so, XenServer extends the basic single-server notion of virtualization to enable seamless virtualization of multiple servers as a Resource Pool, whose storage, memory, CPU, and networking resources can be dynamically controlled to deliver optimal performance, increased resiliency and availability, and maximum utilization of data center resources. The Xen AIM monitors and configures the resources of Citrix XenServer machines. The Xen AIM resides on a Windows machine and gathers information from Xen hosts using the XML-RPC protocol.
Zones AIM
A Sun Solaris Zone defines a virtualized operating system platform (called a zone) that provides an isolated, secure environment in which to run applications. This allows allocation of resources among applications and services, and helps ensure that processes in one zone do not affect other zones. Solaris manages each zone as one entity. A container is a zone that also uses the operating system's resource management. The Solaris Zones PMM provides health monitoring, management, and provisioning of Solaris Zones environments. The Zone application insight module (AIM) is a plugin to the CA SystemEDGE agent that lets you manage the infrastructure of your Sun Solaris systems environment. When you integrate the Sun Solaris Zone AIM with CA NSM, you can discover the Sun Solaris Zone AIM, view its monitored data, and configure its monitoring. Note: For more information about enabling and configuring the Sun Solaris Zone AIM in CA Virtual Performance Management (CA VPM), see the CA VPM documentation.
dscvrbe -7 hostname -v 9
hostname Specifies the host name of the VPM server that the VPM AIM is managing. Objects discovered as a part of the VPM Integration with CA NSM are represented by the Business Process View icon labeled VPM. When you drill down, icons for Solaris Zones, Citrix XenServer, VMware vCenter, and IBM LPAR environments appear, depending on the type of environments that are discovered. Note: For more information about the CA NSM discovery process, see the Administration Guide. For more information about discovery command options, see the CA Reference Guide. Both documents are available with the CA NSM documentation set.
lparaimhost Specifies the host name of the server on which the LPAR AIM is installed.
To start the Zones AIM Agent View from the command line, open a command prompt and enter the following command:
zoneaimhost Specifies the host name of the server on which the Zones AIM is installed.
Notes: If you use Citrix XenServer resource pools, CA NSM can only discover the pool master. Since a XenServer resource pool is represented by the pool master only, the other pool members are not visible to the network. For more information about resource pools, see the Citrix XenServer documentation. For more information about the CA NSM discovery process, see the Administration Guide. For more information about discovery command options, see the CA Reference Guide. Both documents are available with the CA NSM documentation set. For more information about Citrix XenServer Management, see the CA Virtual Performance Management Implementation Guide.
xenaimhost Specifies the host name of the server on which the XenServer AIM is installed.
vcaimhost Specifies the host name of the server on which the VC AIM is installed.
To change the port, cancel the installation and start it again to enter a new port. Note: The default port is 42511. If this port is in use, enter any free port from 1024 to 65535.
The following section describes how the CA NSM JM Option processes act together to automatically schedule, submit, and track a unit of work. The job server and job agent together accomplish the tasks of scheduling, submitting, and tracking your jobs as follows:
1. The Monitor determines when it is time to submit a job and sends a submission request to the appropriate agent.
2. The remote submission agent receives the submission request, which includes data items such as the submit file name, user ID information, and parameters required by the submit file. The remote submission agent performs the submission.
3. When the job starts, the submission agent records the event in the CA NSM JM Option checkpoint file.
4. Shortly thereafter, the tracking agent reads and forwards the event data to the job tracker.
5. The job tracker marks the job as started.
The same flow of events occurs when the job terminates.
A typical production job involves more than just executing a program. Often, job setup and post-processing requirements must also be performed to ensure the correct processing of jobs. Recognizing this, the CA NSM JM Option provides three categories of Job Management stations:
PRECPU
Specifies the location where a manual task, such as loading a printer with special forms, must be performed prior to running jobs on the CPU.
CPU
Specifies the computer where a job actually runs.
POSTCPU
Specifies the location where a manual task, such as verifying or distributing printed reports, must be performed after the CPU completes its work.
By providing the same level of attention to critical non-CPU tasks that you do to CPU-based processes, the CA NSM JM Option helps you ensure that jobs are set up correctly and have appropriate manual checks and balances. For procedures to specify where to perform work, see the following topics in the online CA Procedures:
Defining Station Profiles
Defining Station Group Profiles
To derive maximum benefit from workload balancing, you must identify those resources that have special usage or access coordination requirements and define them to the CA NSM JM Option as resource profiles. Changes made to resource profiles affect all jobsets and jobs that reference the resources, including work currently scheduled for processing (entries in the tracking file). Work currently scheduled for processing can be monitored or updated using Jobset Status and Job Status. For procedures to identify resource requirements, see Defining Resource Profiles in the online CA Procedures.
Jobset Resources
Use of jobset resources is optional. If you do not want to use workload balancing with jobsets, go to Jobset Predecessors. Jobset resource profiles specify the minimum resource requirements for each job in the jobset. Because jobsets do not allocate or require resources, a resource identified for the jobset does not have to be available for the jobset to be declared eligible for processing and selection to the tracking file. (Jobsets never have a status level of WRSRC (Waiting for Resources) and are never included in workload balancing.) Resource requirements specified at the jobset level define the minimum resource requirements that must be available before any individual jobs of the jobset are considered eligible for submission.
An example of resource allocation follows. Assume that an existing resource profile named tapedevs specifies that four tape drives exist on station europa. The following Jobset - Detail Resources window indicates that each job in the jobset requires one of the four tape drives defined as available resources by the previously defined resource profile named tapedevs, and each job needs exclusive access to the tape drives.
Note: The CA NSM JM Option does not verify the physical existence or actual availability of tape drives or any other device. These resource definitions are logical rather than physical.
Jobset Predecessors
Use of jobset predecessors is optional. If you do not want to use predecessor relationships at the jobset level, go to How to Identify Work to Perform. Jobset predecessors are used to specify the list of jobs, jobsets, or trigger profiles that must complete successfully before this jobset can be started. A predecessor requirement will be marked satisfied if any of the following conditions are true: The predecessor completed successfully (COMPL). The predecessor aborted and shows a status code of ABORT but has an Abend action of CONTINUE in its profile. The predecessor is specified as dynamic and is missing from (that is, does not exist in) the current workload tracking file.
Canceled Predecessors
The only predecessor requirements honored are those that represent predecessor jobs that are scheduled to run in the current workload (also referred to as being in the tracking file). Because of this rule, the cancellation of any predecessor job (which removes that job from the tracking file) results in that predecessor requirement being ignored. The cancellation of a predecessor job that was referenced as dynamic effectively satisfies the predecessor requirement, allowing any successor jobs to run. If the predecessor requirement was referenced as static, any successor jobs remain in a "wait on predecessor" state (WPRED). While many enterprises find this behavior intuitive and useful for dynamic predecessor requirements, others do not. To support each enterprise's preference, the CA NSM JM Option components provide an option that lets you change the default effect of the cancel command so that successors (those jobs that define the canceled job as a predecessor) are not automatically posted. For information about specifying this option using the Post on Cancel and CAISCHD0014 environment variables, see Configuration Environment Variables. For procedures to form groups of related tasks, see Defining Jobsets in the online CA Procedures.
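The satisfaction rules above can be condensed into a small sketch. The following Python fragment is illustrative only; the field names and data model are hypothetical, not the product's internals:

```python
def predecessor_satisfied(pred, tracking_file):
    """Return True if a predecessor requirement is satisfied.

    pred: dict with keys 'name' and 'dynamic' (bool).
    tracking_file: dict mapping job name -> dict with
        'status' ('COMPL', 'ABORT', ...) and optional 'abend_action'.
    Hypothetical field names chosen for illustration.
    """
    entry = tracking_file.get(pred["name"])
    if entry is None:
        # A dynamic predecessor missing from the current workload
        # is treated as satisfied; a static one is not.
        return pred["dynamic"]
    if entry["status"] == "COMPL":
        return True
    if entry["status"] == "ABORT" and entry.get("abend_action") == "CONTINUE":
        return True
    return False
```

Canceling a predecessor removes it from the tracking file, so in this model a canceled dynamic predecessor satisfies its successors while a canceled static one leaves them waiting, which matches the behavior described above.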
Before reading about how to define jobs, review the following sections that describe how the CA NSM JM Option evaluates the policies you define to determine when jobs are eligible to run.
Jobset Membership
Every job must belong to a jobset, and only after a jobset is marked as started will Job Management evaluate the jobs in the jobset. Note: DYNAMIC jobs are a special exception to this rule and are discussed separately in Demand a DYNAMIC Job.
External Predecessors
The EXTERNAL job type setting in a job profile lets you define a job to the CA NSM JM Option that will be submitted by an external job manager. The job is actually submitted by another job management system such as Unicenter CA-7 Job Management (Unicenter CA-7), Unicenter CA-Scheduler Job Management (Unicenter CA-Scheduler), Unicenter CA-Jobtrac Job Management (Unicenter CA-Jobtrac), and so forth. The CA NSM JM Option treats this job as a non-CPU job with a class of EXTERNAL (similar to PRECPU and POSTCPU). The job's presence on the tracking file causes JOBINIT and JOBTERM triggers to be generated internally and the job is automatically tracked by the client where the job actually runs. The job server tracks the job, but does not submit it. Since this is a JOB base entry, predecessor definitions and calendar selection can reference it. You specify the job type at the Main - Info notebook page of the Job - Detail window.
Job Resources
Job resource requirements do not override jobset resource requirements. Rather, resource requirements defined at the job level are added to any resource requirements that may already be defined at the jobset level. Jobset and job resource requirements are therefore cumulative.
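Because the requirements are cumulative, the effective requirement set for a job can be thought of as a merge of the two levels. A minimal Python sketch follows (hypothetical data model, assuming per-resource amounts simply add up):

```python
def effective_requirements(jobset_reqs, job_reqs):
    """Combine jobset-level and job-level resource requirements.

    Requirements are cumulative: job-level requirements are added to
    any jobset-level requirements, so amounts for the same resource
    accumulate. Dicts map resource name -> units required
    (an illustrative model, not the product's data structures).
    """
    combined = dict(jobset_reqs)
    for resource, amount in job_reqs.items():
        combined[resource] = combined.get(resource, 0) + amount
    return combined
```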
Job Submission
The CA NSM JM Option submits CPU jobs based on user-defined processing requirements. You define what to submit and any special considerations through a submission profile that assigns these submission characteristics: User ID (the user for whom the job is submitted) Name of the file or program to submit Optional parameter values to pass to the submitted file or program Password of the user for whom the job is submitted Domain of the user for whom the job is submitted (Windows only)
Job Predecessors
Jobs, jobsets, and triggers can all be defined as predecessors for a job. All predecessors defined for the job must be satisfied before a job can become a candidate for submission. Important! The jobset starts only after all of the jobset's predecessor requirements have been met, and only after the jobset starts will the CA NSM JM Option evaluate the predecessor requirements for the individual jobs that are members of that jobset. On UNIX/Linux, you can also specify advanced system conditions that must be met before a job can run on a specific node. For example, if a job must not run when a particular user is logged on, you can define a system condition criterion that specifies this. The CA NSM JM Option holds the job until that user is no longer on the system and then releases it. SYSCON objects for administering system condition requirements are available using cautil command syntax and are described in JOBSYSCON, JOBSETSYSCON, and TJOBSYSCON in the online CA Reference.
The ExtNodeL.sch configuration file is located in the $CAIGLBL0000/sche/config directory. You can use this file to maintain policies that specify how password validation is to be performed based on the submitting node and user ID. The file must be owned by root, and only a uid of 0 may have write access to it. An individual entry in the file has the following format:
-n=nodename,user-id,flag
where:
nodename
Specifies the node from which the job is initiated; it can contain a trailing generic mask character.
user-id
Specifies a user ID to whom the rule applies; it can contain a trailing generic mask character.
flag
Specifies D for disable (perform no password authorizations), E for enable (unless the proper password is supplied, the job will not run), or W for warn (check the password; if invalid, run the job but issue a warning message).
Examples
The following rule is the default rule in effect if you elected to enable password checking during installation. The rule states that for all nodes and all users password validation is to occur.
-n=*,*,E
The following rule is the default rule in effect if you elected to disable password checking during installation. The rule states that for all nodes and all users password validation is bypassed.
-n=*,*,D
The following combination of rules only enforces a password validation on user root and allows all other users to bypass password validation.
-n=*,*,D -n=*,root,E
The following combination of rules allows all users to bypass password validation unless the request comes from the node mars. In that case, password validation is enforced for all users. The last entry sets a warning type password validation for user root if it comes from a node other than mars.
-n=*,*,D -n=mars,*,E -n=*,root,W
Job Management scans the entire configuration file for a best match and uses that rule. It uses the node field as a high level qualifier when searching for a best match. For example, if the following entries are the only two entries in the file, any request coming from the node mars uses the enforce rule. The user root only uses the warning rule if the request comes from a node other than mars.
-n=mars,*,E -n=*,root,W
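The best-match behavior can be modeled in a few lines. The sketch below is a hypothetical Python rendering, assuming specificity is judged by the length of the unmasked prefix with the node field weighted as the high-level qualifier; the product's actual matching internals are not documented here:

```python
def matches(pattern, value):
    """A trailing '*' is a generic mask; otherwise match exactly."""
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    return pattern == value

def specificity(pattern):
    """Exact patterns beat masked ones; longer prefixes beat shorter."""
    if pattern.endswith("*"):
        return len(pattern) - 1
    return 1000 + len(pattern)

def best_rule(rules, node, user):
    """Scan all (nodename, user-id, flag) rules and pick the best match.

    The node field is treated as the high-level qualifier, so a more
    specific node match wins over a more specific user match.
    Returns None when no rule matches (assumption for this sketch).
    """
    candidates = [r for r in rules if matches(r[0], node) and matches(r[1], user)]
    if not candidates:
        return None
    return max(candidates, key=lambda r: (specificity(r[0]), specificity(r[1])))
```

With the two-rule example above, a request from node mars selects the enforce rule for any user, and root selects the warn rule only from other nodes, as described.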
For example, if the job has an early start time of 1:00 p.m. and a frequency of 60 minutes, and the job is demanded at 3:15 p.m., the CA NSM JM Option generates entry one to start at 1:00 p.m. When the job enters the current workload (by entering the tracking file), the CA NSM JM Option recognizes that it is late (it should have started at 1:00 p.m.) and starts it immediately. The next submitted instance of this job has a start time of 4:00 p.m. with subsequent entries to start at 5:00 p.m., 6:00 p.m., and so forth.
By default, when a cyclic job is brought in late (demanded), the CA NSM JM Option skips the jobs that are too late to run and only runs the number of iterations left, based on the calculation of when the jobs should have run. If you set the Windows configuration setting "cycle count precedence" or the CAISCHD0540 environment variable on UNIX/Linux to Y, the cycle count takes precedence over the remaining count based on current time. Thus, the defined number of cyclic jobs is scheduled up to the time of the new-day autoscan.
Using the default, consider, for example, a cyclic job that has a frequency of 30, a cycle count of 16, and an early start time that defaults to the new-day autoscan time of 1:00 a.m. The job is typically brought in at 1:00 a.m. and runs 16 times: at 1:00 a.m., 1:30 a.m., 2:00 a.m., and so forth until the last (sixteenth) run at 8:30 a.m. However, if the job is demanded in at 4:05 a.m., the occurrences up to that time are skipped and the job is run at 4:05 a.m., 4:30 a.m., 5:00 a.m., 5:30 a.m., and so forth. At 8:30 a.m., the job runs for the last time, for a total of 10 runs. The occurrences that would have run (at 1:00 a.m., 1:30 a.m., 2:00 a.m., 2:30 a.m., 3:00 a.m., 3:30 a.m.) are skipped. The CA NSM JM Option does not attempt to catch up if it means running the jobs after the last calculated start time of 8:30 a.m.
If you enable cycle count precedence using the above example, the jobs would run at 4:05 a.m., 5:00 a.m., 5:30 a.m., and so forth, until the last (16th) run at 12:30 p.m. All counts up to the time of the new-day autoscan are scheduled. If you elect to run with the autoscan process disabled, cyclic jobs are submitted only when you execute a manual autoscan. The jobs are not automatically submitted each day. For an explanation of the autoscan process, see Autoscan. You cannot define a cyclic job as a predecessor to itself. In other words, when a cyclic job is submitted, the job begins execution when its early start time is reached. It does not wait for any preceding instances of itself to complete. You can define a cyclic job as a predecessor to another job or jobset. All occurrences of the cyclic job are treated as predecessors and must complete before the successor job or jobset is eligible to run.
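The default skip arithmetic can be sketched as follows. Times are minutes since midnight; this is a hypothetical model of the documented behavior, not product code:

```python
def cyclic_runs(early_start, frequency, count, demand_time=None):
    """Compute run times for a cyclic job under the default behavior.

    Occurrences that are too late to run are skipped, the most recently
    missed occurrence runs immediately at demand time, and no catch-up
    happens past the last calculated start time.
    All times are minutes since midnight (illustrative convention).
    """
    scheduled = [early_start + i * frequency for i in range(count)]
    if demand_time is None or demand_time <= scheduled[0]:
        # Brought in on time: every occurrence runs as calculated.
        return scheduled
    # Brought in late: run immediately once, then only future occurrences.
    future = [t for t in scheduled if t > demand_time]
    return [demand_time] + future
```

Running the documented example (frequency 30, count 16, early start 1:00 a.m., demanded at 4:05 a.m.) yields 10 runs, the first at 4:05 a.m. and the last at 8:30 a.m.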
The predecessor criteria for cyclic jobs include a qualifier option that allows cyclic jobs to run in one of two ways. For example, assume you have two cyclic jobs, CycJobA and CycJobB, where both have an iteration number of 3, frequency is set to 60, and CycJobA is the predecessor of CycJobB. When both jobs are demanded into the tracking file, the file appears as follows:
CycJobA QUAL=xx01
CycJobA QUAL=xx02
CycJobA QUAL=xx03
CycJobB QUAL=xx01
CycJobB QUAL=xx02
CycJobB QUAL=xx03
If you set the qualifier Use Qualifier Predecessor Criteria for cyclic job? to N, CycJobB runs after all CycJobA iterations have run. If you set the qualifier to Y, CycJobB QUAL=xx01 runs after CycJobA QUAL=xx01 completes, CycJobB QUAL=xx02 runs after CycJobA QUAL=xx02 completes, and so forth. If you plan to use cyclic job submission for Windows platforms, review the environment settings for the CA NSM JM Option in the Configuration Settings window (Options tab) for Max active jobs, Max resources, Max predecessors, and Use Qualifier Predecessor Criteria for cyclic job? and set the appropriate values to accommodate the additional jobs in the daily workload. If you plan to use cyclic job submission for UNIX/Linux platforms, review the values set for the environment variables CAISCHD0025, CAISCHD0026, CAISCHD0027, and CAISCHD0040 and set appropriate values to accommodate the additional jobs in the daily workload. For procedures to identify work to perform, see Defining Jobs in the online CA Procedures.
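The two qualifier modes described above can be sketched as follows (a hypothetical Python model; the xxNN qualifier pattern follows the example above):

```python
def iteration_predecessors(pred_job, qualifier, iterations, use_qualifier):
    """List the predecessor iterations a successor iteration must wait on.

    use_qualifier=False: every iteration of the predecessor must complete
    before any iteration of the successor runs.
    use_qualifier=True: only the predecessor iteration with the matching
    qualifier must complete.
    Entries are (job name, qualifier) pairs; the xxNN naming is taken
    from the example and is illustrative.
    """
    all_iters = [(pred_job, "xx%02d" % i) for i in range(1, iterations + 1)]
    if use_qualifier:
        return [(job, q) for job, q in all_iters if q == qualifier]
    return all_iters
```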
For example, assume you download an updated database to your system on a weekly basis and you want a number of reports to run as soon as the file transfer is complete. To automate the sequence, you can define a File Close (DCLOSEU) trigger profile and a message action profile. When the file closes, the trigger is detected, the appropriate message is issued, and the message action profile demands a jobset containing all of the necessary report jobs. The CA NSM JM Option lets you define triggers for the following events:
Job initiation
Job termination
The caevent command
File close (when a file, opened for write or newly created, is closed, a File close event is created)
File unlink (deletion)
IPL event
Once a trigger trips, its associated message action profile executes. For a description of message action processing, see Trap Important Event Messages and Assign Actions in the Administer Critical Events chapter. Triggers can be defined to be associated with a calendar so that although a triggering event may occur at any time, the trigger is only tripped when its associated calendar determines it is appropriate. When a trigger profile has no associated calendar, it is always in effect; it is scheduled every day and never marked complete.
Use caevent
For most situations you can choose a trigger type of File close, Job termination, or Job initiation. However, there may be times when the triggering event you want to define is not a system-initiated event and a logical event may be more suitable. The trigger type of CA Event lets you create these user-initiated logical events. Using the executable caevent (provided with CA NSM), you can generate a logical event by executing the caevent command and supplying two command line arguments: the first is the logical event name, and the second, optional, parameter is an event status code. By running the caevent executable in scripts or using it interactively, the system alerts the CA NSM JM Option to events as they take place.
In addition, since the event name and status code are user-specified, they can describe any trigger event. For example, the following caevent command sends an event of type caevent, with an event name of bkupnow (a previously defined trigger profile), and a status code of 10 to the CA NSM JM Option:
caevent bkupnow 10
If a defined trigger profile matches these criteria, the trigger trips and the associated message action profile is processed. The message action profile may demand a jobset into the workload automatically or execute a cautil backup command. Message action profiles are flexible enough to meet your needs. For additional information about the caevent command, see the online CA Reference.
Both of these conditions must be satisfied to bring jobs into the tracking file when you demand the jobset of which they are members. Otherwise, you must specifically demand the job into the tracking file using the Job Demand option on the Jobs list container window.
The next autoscan removes all references to the job. For procedures to run a job on demand, see Demanding a Job into the Workload in the online CA Procedures.
The information obtained from the Simulation Report helps you evaluate your implementation and more readily identify any adjustments that may be required to the workload policies you have defined. Run the following command from a command line prompt to create a simulation report:
wmmodel
The wmmodel executable lets you play "what if" scenarios, override job dependencies, and change the duration of a job execution. Note: To run a CA NSM JM Option report on UNIX/Linux platforms, you must be logged in as an authorized user such as root. The following command also runs the Simulation report.
schdsimu BOTH
Report output goes to stdout, which can be directed to any printer or file, using standard redirection operators. For additional information about CA NSM JM Option reports, see the online CA Reference.
Autoscan
The selection, submission, tracking, and cleanup of jobs begin with the autoscan procedure. The CA NSM JM Option treats a workday as a 24-hour time period that begins at new-day time. The default setting for the new-day autoscan is midnight for Windows platforms and 1:00 a.m. for UNIX/Linux platforms. At this time, all the jobsets and jobs that qualify are added to the new day's workload (added to the tracking file). A periodic autoscan occurs at specific intervals to determine if any changes or additions should be made to the current day's workload. New-day autoscan performs the following tasks:
1. Cleans up the tracking file by purging finished work from the previous day's workload. Any unfinished work that is backlog eligible is carried over to the new day's workload.
2. Scans the Job Management Database to select the new day's work.
3. Selects all jobsets that qualify for processing the current day (based on their calendar, expiration, and effective date criteria). From these jobsets, autoscan similarly selects the individual jobs that qualify.
4. Places these jobsets and their jobs in the current day's workload (in the tracking file), where they are ready for processing, review, and any last-minute changes.
Workload Processing
Jobs that are running when the new day's autoscan occurs and have Backlog=RUNNING in their profile are backlogged. When a running job is backlogged, it remains in the tracking file as part of the workload for the following day and is automatically rescheduled for processing. If the job is not running during the autoscan, the job is removed from the tracking file. Note: The time of the new-day autoscan should occur after the expected completion of all scheduled jobs. This makes the Backlog option more meaningful. If the new-day autoscan runs before the day's work is completed, all unfinished jobs are purged or backlogged.
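The backlog decision reduces to a simple rule, sketched here in Python (hypothetical field names, illustrating only the documented behavior):

```python
def newday_disposition(job):
    """Decide what new-day autoscan does with an unfinished tracking entry.

    A job that is still running and has Backlog=RUNNING in its profile is
    carried into the new day's workload; any other unfinished job is
    purged from the tracking file.
    """
    if job.get("running") and job.get("backlog") == "RUNNING":
        return "backlog"
    return "purge"
```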
Workload Processing
The processing of the current day's workload (work in the tracking file) is status-driven. The CA NSM JM Option constantly monitors and tracks jobs, moving jobs from one status level to the next only after specific conditions or requirements have been satisfied. Jobsets and jobs advance through various status levels during workload processing. The autoscan process brings qualifying jobsets and jobs into the tracking file with an initial status of LOADNG. After all the new work is loaded, the CA NSM JM Option marks it WSTART (Waiting to Start). Autoscan is then complete. After bringing all new qualifying jobsets and jobs into the current day's workload (into the tracking file), the CA NSM JM Option processes the workload by reviewing:
1. Early start time
2. Predecessors
3. Priority
4. Resource usage and availability
Jobsets are processed first. Jobset criteria are evaluated prior to criteria for jobs in the jobset. First, the CA NSM JM Option checks that a jobset's early start time has arrived; second, it checks to see if all predecessor requirements for the jobset have been satisfied. When the jobset is in START status, the CA NSM JM Option looks at similar criteria for jobs in the jobset. A jobset must be marked as started (START) before its jobs can be evaluated. For example, when a job is in WSTART (Waiting to Start) status, the CA NSM JM Option evaluates its early start time. If the early start time has been reached, the job is placed in WPRED (Waiting for Predecessors) status. Otherwise, the job stays in WSTART until the specified early start time is reached.
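The first evaluation steps can be sketched as a small state machine. This is a simplified, hypothetical model; the product also evaluates priority, resources, and must-complete time:

```python
def next_status(entry, now):
    """Advance a tracking-file entry one status level.

    entry: dict with 'status', 'early_start' (minutes since midnight),
    and 'preds_satisfied' (bool). Field names are illustrative.
    WSTART -> WPRED once the early start time arrives;
    WPRED -> START once all predecessor requirements are satisfied.
    """
    if entry["status"] == "WSTART":
        return "WPRED" if now >= entry["early_start"] else "WSTART"
    if entry["status"] == "WPRED":
        return "START" if entry["preds_satisfied"] else "WPRED"
    # Later transitions (resources, submission, completion) are omitted.
    return entry["status"]
```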
The CA NSM JM Option prioritizes jobs based on resource requirements (if any), early start time, and priority. Although you can fine-tune the sequencing of your workload using early start time and priority, you typically use predecessors as the primary means to establish the sequence of workload jobset and job processing. When all the jobs in the jobset are complete, the jobset is marked complete (COMPL). A jobset is also marked complete if none of its jobs meet the criteria for selection, or if there are no jobs in the jobset. To summarize, the CA NSM JM Option first looks at the jobset to determine that early start time has arrived and predecessor requirements are satisfied. The CA NSM JM Option then looks at the jobs in the jobset to determine that early start time has arrived, predecessor requirements are satisfied, and that resources, must complete time, and priority can be satisfied.
Maintenance Considerations
The following topics describe how to maintain the CA NSM JM Option:
Job Management Logs
Tracking File
Undefined Calendars During Autoscan
Unload Job Management Database Definitions to a Text File
Purge Old History Records (UNIX/Linux)
How to Submit Jobs on Behalf of Another User
The $CAIGLBL0000/sche/log/$host directory contains the following job management log files:
mtrsuf
Identifies the current suffix number of the schdxxx.nnnn files; descriptions of these files follow. The version number can be a maximum of 9999.
ca7xxx.log and ca7xxx.out
Represent the stdout and stderr for each of the CA NSM JM Option daemons. There is only one version of each of these files; ca7xxx.out provides an audit trail of all display command processing that flows through the CA NSM JM Option. The files are rebuilt each time the CA NSM JM Option is stopped and restarted.
schdlcs.nnnn
Provides an audit trail of all update command processing that flows through the CA NSM JM Option, including trigger commands. A new version of this file is created each time the CA NSM JM Option is cycled and at midnight every day; nnnn is identified by the mtrsuf file.
schdtrk.nnnn
Provides an audit trail of all tracking activity by the CA NSM JM Option. A new version of this file is created each time the CA NSM JM Option is cycled and at midnight every day; nnnn is identified by the mtrsuf file.
schdmtr.nnnn
Provides an audit trail of all submission and selection activity by the CA NSM JM Option. A new version of this file is created each time the CA NSM JM Option is cycled and at midnight every day; nnnn is identified by the mtrsuf file.
Tracking File
Work remains in the tracking file until it is cleaned out during the new-day autoscan, or until you manually cancel and purge it from the current day's workload. Because any failed jobs (and their jobsets) that are eligible to be backlogged would remain in the tracking file after the new-day autoscan, you may want to check the tracking file on a periodic basis and clean out (cancel and purge) any obsolete entries that were not automatically removed. These backlogged jobs were not eligible to be purged as part of the new-day autoscan.
Note: You must specify the CLEAN parameter in uppercase characters. The CLEAN parameter deletes the associated history entries for any job or jobset record that no longer exists in the Job Management Database (has been deleted). The History Report lists the history entries that have been deleted from the database.
Agent/Server Configurations
The CA NSM JM Option provides a flexible architecture that lets you distribute work to multiple computers where the necessary resources are available for a job to be processed. The CA NSM JM Option has two primary components that are used in combination to provide both local and remote scheduling. These two components are as follows: The CA NSM JM Option server, which provides support for the database and has the capacity to build schedules for execution. The Unicenter Universal Job Management Agent, which processes and tracks the execution of work on behalf of the job server.
By combining these two elements, the CA NSM JM Option can provide different combinations of local and remote scheduling. Distributed remote scheduling can be performed between a single full-function job management server and a single agent, or many agents, or among multiple job management servers installed in the same network. Each job management server can have many clients, and each job management client can be designated as a client of many servers. The CA NSM JM Option is not restricted to having an agent service a single job management server. Regardless of platform--UNIX/Linux, Windows, or z/OS or OS/390--a server can have its work tracked for it by the Unicenter Universal Job Management Agent. The following sections describe how you can apply this architecture to defining CA NSM JM Option policies across the following: A single server computer Multiple servers in an agent/server domain
Single Server
The CA NSM JM Option on a single server uses a local server and a local agent to define, service, and track all scheduling activity. The following diagram shows you this configuration for a single node called Mars:
Cross-Platform Scheduling
Scheduling, as defined in the past, was the ability to schedule units of work (jobs) within a particular host (for example, z/OS or OS/390, Windows, UNIX/Linux, AS/400). This scheduling activity started with simple definitions:
Time dependencies. (Start Job A at 10:00 AM.)
Unit-of-work dependencies. (Only start Job A after Job B has completed successfully.)
Today many data centers are working with more complex production workloads that cover a wider variety of processing environments, including OS/390, UNIX/Linux, Windows, OpenVMS, and other platforms communicating with each other. Such an environment requires cross-platform scheduling.
Cross-platform scheduling provides integrated enterprise control. It defines and sets dependencies. It submits and tracks the status of units of work (jobs or events) not only on the traditional platforms (z/OS or OS/390, UNIX/Linux, Windows, AS/400) but on a variety of other platforms. Actions result in the immediate and automatic notification of status changes to the unit of work (executing, completed successfully, error) and the ability to perform additional automatic actions. These status changes can trigger event-dependent processing not only on the local platform but on other systems and resources throughout the environment to maximize enterprise processing efficiency. CA provides distributed scheduling capabilities that integrate its z/OS or OS/390 (formerly MVS) products (Unicenter CA-7, Unicenter CA-Jobtrac, and Unicenter CA-Scheduler) with the CA NSM JM Option.
In this architecture, the manager performs the following functions:

Maintains job definitions and relationships

Evaluates job execution and job completion information

Uses a database for workload definitions

Interfaces with agents to initiate jobs and collect status information
The Unicenter Universal Job Management Agent is a small set of programs that execute on each target computer where jobs will be processed. The agent performs the following functions:

Receives job requests from one or more managers and initiates the requested program, script, JCL, or other unit of work

Collects status information about job execution and file creations

Sends the status information to the requesting workload manager for evaluation
Many environments choose to use a centralized repository for defining jobs and monitoring their workload. The CA NSM JM Option provides a Unicenter Universal Job Management Agent so you can initiate and track jobs on a computer without maintaining a workload database on that computer. The Unicenter Universal Job Management Agents can process a request from a job management manager (the CA NSM JM Option on another computer or one of our OS/390 scheduling products) by initiating the requested process and returning tracking information about that process to the requesting job management manager. Any job management manager can submit job requests to any Unicenter Universal Job Management Agent.
Implementation
Cross-platform scheduling can include several different platforms and several different products, as indicated in the following examples:

CA NSM JM Option servers on Windows can submit work to Unicenter Universal Job Management Agents on UNIX/Linux.

CA NSM JM Option servers on UNIX/Linux can submit work to Unicenter Universal Job Management Agents on Windows.

Unicenter CA-7, Unicenter CA-Scheduler, and Unicenter CA-Jobtrac can submit work to Unicenter Universal Job Management Agents on Windows and UNIX/Linux.

CA NSM JM Option servers on Windows and UNIX/Linux can submit work to Unicenter CA-7, Unicenter CA-Scheduler, and Unicenter CA-Jobtrac.
The implementation of cross-platform scheduling can be tailored to the needs of each site. Before installing the software components, you must determine where you want to implement managers and agents.
Centralized Implementation
You can implement the CA NSM JM Option in a centralized fashion so that there is a central repository on one server. You can use this central repository to schedule and monitor work on any other server. In the following scenario, the CA NSM JM Option server is installed on one server, and the Unicenter Universal Job Management Agents are installed on the other systems.
Decentralized Implementation
You can implement the CA NSM JM Option in a decentralized fashion so that there are multiple job management managers. Each of these managers can manage jobs on its own server and request processing on other servers running Unicenter Universal Job Management Agents (and, optionally, the CA NSM JM Option server).
WorldView Implementation
Regardless of whether you use a centralized or decentralized method of implementation, an important element of any implementation is the ability to access and monitor all of the enterprise's workload from a single point. To accomplish this, use Unicenter WorldView and the scheduling WorkStations for the z/OS or OS/390 scheduling products:

Unicenter CA-Scheduler WorkStation

Unicenter CA-Jobtrac WorkStation

Unicenter CA-7 WorkStation
Recommended Approach
For procedures to implement cross-platform scheduling, see Implementing Cross-Platform Scheduling in the online CA Procedures.
Autoscan hour and Interval between Autoscans
Sets the time of day for the new-day autoscan and the interval between autoscans for the current workday. Set to different values to change the default time of the new-day autoscan (default is 0, for 00:00) or the default autoscan interval (default is every 3 hours). Set in the Configuration Settings dialog by clicking the Options tab and then the Job Management Options tab.

Post on Cancel?
Controls the effect of the cancel command on successor jobs. The default setting is Y. When set to Y (yes), the CA NSM JM Option posts on cancel, which allows successor jobs to run because the predecessor requirement is satisfied by the predecessor having been removed (by the cancel command) from the current day's workload. If the variable is set to N (no), successor jobs dependent on the prior completion of a canceled job are not posted. Because the predecessor requirement is not satisfied, successor jobs remain in a WPRED (waiting for predecessors) status. If you change this variable, you must stop and restart the CA NSM JM Option using the following commands:
unicntrl stop sch
unicntrl start sch
Note: Running the unicntrl stop sch command stops job submission and disables display of the status tracking information for jobsets and jobs. These processes resume when you run the unicntrl start sch command. There is no loss of tracking data for jobs that complete or abnormally terminate while the CA NSM JM Option is stopped. The Job Event Logger continues to run and track completion of scheduled processes even while the CA NSM JM Option is stopped.

Cycle count precedence?
Controls how the number of cyclic job iterations is calculated when cyclic jobs are demanded after their early start time.

Authorized remote manager nodes
Specifies a comma-delimited list of authorized remote manager nodes allowed to submit jobs. The default (blank) allows all job managers to submit jobs to the job agent.
Then shut down and restart the CA NSM JM Option using these commands:
unishutdown sche
unistart sche
Note: Running the unicntrl stop sche or unishutdown sche command stops job submission and disables display of the status tracking information for jobsets and jobs. These processes resume when you run the unicntrl start sche or unistart sche command. There is no loss of tracking data for jobs that complete or abnormally terminate while the CA NSM JM Option is stopped. The Job Event Logger continues to run and track completion of scheduled processes, even while the CA NSM JM Option is stopped.
CAISCHD0530
Specifies a comma-delimited list of authorized remote manager nodes allowed to submit jobs. The default (blank) allows all job managers to submit jobs to the job agent.

CAISCHD0540
Controls how the number of cyclic job iterations is calculated when cyclic jobs are demanded after their early start time.
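As a minimal sketch, a variable such as CAISCHD0530 can be set in the environment of the user that starts the scheduler before it is restarted. The node names below are illustrative examples only, not defaults:

```shell
# Illustrative only: "mars" and "jupiter" are hypothetical manager node names.
# CAISCHD0530 restricts which remote manager nodes may submit jobs to this agent.
CAISCHD0530="mars,jupiter"
export CAISCHD0530

echo "$CAISCHD0530"
# After exporting the variable, restart the CA NSM JM Option so the
# change takes effect, for example:
#   unishutdown sche
#   unistart sche
```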
The following sample BAT file references settings within the following submitted job definition:
define jobset id=Trees
define job id=(Trees,Oak,01) station=AMERICA autosel=yes
define jobparm id=(Trees,Oak,01) subfile=Oak.bat
If job Oak is selected or demanded, the following messages are sent to the Event Console Log:
AMERICA: Job(Trees,Oak,01) Qual(3001) Phase 1 has ended
AMERICA: Job(Trees,Oak,01) Qual(3001) Phase 2 has ended
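The sample BAT file itself is not reproduced in this excerpt. As a hedged sketch only, Oak.bat might emit the phase-end messages with the cawto command (which writes messages to the Event Console); the file contents and the exact cawto arguments shown here are assumptions, not the shipped sample:

```
rem Oak.bat -- illustrative sketch only; assumed contents, not the
rem actual sample file. Assumes cawto accepts the message text as its
rem arguments; the AMERICA: node prefix is added by the console.
rem ... phase 1 processing ...
cawto Job(Trees,Oak,01) Qual(3001) Phase 1 has ended
rem ... phase 2 processing ...
cawto Job(Trees,Oak,01) Qual(3001) Phase 2 has ended
```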
If you discover that you need to make changes that affect CA NSM JM Option policies, as opposed to minor changes in the current day's workload, see the procedures in Modifying Job Management Policies in the online CA Procedures. For procedures to monitor workload status, see the following topics in the online CA Procedures:

Displaying Jobset Status

Changing the Status of Non-CPU Jobs

Tracking the Time Spent on Manual Tasks

Changing the Current Workload
Jobflow provides an online, graphical representation of an enterprise's workload. You can view relationships between jobs that are defined in the scheduling database and monitor the status of jobs in real time. The graphical interface lets you tailor your view of the workload. You can limit the number of jobs to view by specifying job names, a time span, and a number of successor levels. You can display the results as either a Gantt or a Pert chart. You can zoom in on a single job's successors and triggers, or zoom out for a larger perspective.
Jobflow provides two types of views:

Jobflow forecast views

Jobflow status views
Jobflow forecast views display job relationships (that is, predecessor-successor or job-trigger) for selected jobs within a given time frame. You can view dependencies (requirements) for any job in the flowchart. Jobflow status views display real-time status information for selected jobs within a given time frame. Color coding lets you identify job status and pinpoint trouble spots. For example, abended jobs are red and late jobs are yellow. Status information is updated at regular intervals, so your view of the workload remains current. Once you display a jobflow view, you can expand the scope of the display to include additional levels of successors (or triggered jobs) and to include job dependencies. For procedures to view jobflows, see the following topics in the online CA Procedures:

Starting the Jobflow Forecast Function

Starting the Jobflow Status Function
The following sections present an overview of the jobflow forecast view and jobflow status view.
In a single view you can see all of the jobs that affect the target job's execution, as well as all of the jobs that depend on the target job for their execution. Further, you can display multiple dependency views. For procedures to view dependencies, see Viewing Dependencies in the online CA Procedures.
The default color codes for job status are as follows:

Light blue
Identifies jobs that are supposed to be processed in the near future based on what is defined in the database.

Dark green
Identifies active jobs.

Medium green
Identifies ready queue jobs.

Light green
Identifies request queue jobs.

Dark blue
Identifies completed jobs.

Yellow
Identifies late jobs.

Red
Identifies abnormal jobs.

Pink
Identifies canceled jobs.

Note: To change the default color scheme, see Customize the Environment.
Operational Modes
Jobflow has two operational modes:

Run mode

Design mode
Run mode is the basic operating mode. When the system is in run mode, you can perform all of the Jobflow functions except changing the appearance of objects in the jobflow and saving jobflow forecast and status files. Design mode lets you open the Tools window, where you can change the appearance of the Jobflow environment and save jobflow forecast and status files. The default mode at startup is run mode. For procedures to customize your Jobflow environment, see the following topics in the online CA Procedures:

Adjusting Job Name Labels

Changing the Appearance of Objects

Changing the Chart Type: Gantt or Pert

Changing the Display Color

Changing the Display Fonts

Changing the Display Size

Choosing Run Mode or Design Mode

Expanding and Collapsing Jobflow Levels
The sections in the remainder of this chapter provide more detail about the following topics:

Customizing the output, specifying margins, titles, and reference information using the Page Setup dialog.

Previewing the output before printing it.

Adjusting the number of pages and the scale.
When you open a flowchart forecast, the displayed workload is not connected to the underlying scheduling database; thus, you cannot expand and contract trigger levels or display dependencies. The advantage of working with a jobflow forecast file is that you can open the file quickly, because Jobflow does not have to build a selection list of jobs in the database. Working with a file, rather than a dynamic flowchart, is useful if your schedule does not change frequently. For procedures to open and save a jobflow forecast file, see the following topics in the online CA Procedures:

Opening a Jobflow Forecast or Status File

Saving a Jobflow Forecast or Status
Index
2
2D Map background maps 153 billboards 153 custom views 154 favorite views 155 how navigation works 156 overview 151, 152 starting the web policy editor 344 template rules 359 timing parameters 356 user-defined tokens 356, 357 using tokens 356, 357 Advanced Event Correlation (AEC) 304 agent discovery 131 Agent Technology FIPS encryption about 423 data encrypted 423 data encryption key 423 installation considerations 424 migration considerations 424 Agent Technology overview 221 agents Agent View 250 auto watchers and available lists 228 call-back mechanism 229 cluster awareness 230 Distributed Services Bus 247 Distributed State Machine 248 generic resource monitoring 231 in Unicenter MP 203 maximum and minimum metrics 232 overloading thresholds 233 periodic configuration 234 Poll method 234 remote monitoring 222 resource monitoring 227 scoreboards and dashboards 203, 204, 205 SNMPv3 support 235 status deltas 234 watchers 236 Alert Management System 29 about alerts 312 action menus 317 AEC policies for alerts 319 alert console 211 alert scoreboard 211 classes 315 consolidate alerts 317 display attributes 317 escalate alerts 317 how AMS works 313 impact 317 in Unicenter MP 209
A
access permissions asset perspective 394 user perspective 394 action menus in Alert Management System 317 active directory agent 238 Active Directory Enterprise Manager (ADEM) 237 Active Directory Management FIPS encryption about 422 converting password file to FIPS encryption 423 data encrypted 422 data encryption key 422 installation and migration 422 administrator ID for Unicenter NSM 42 administrator password, changing 47 Advanced Event Correlation about 339, 340 Boolean logic rules 356 Boolean rule pipeline items 355 correlation rules 340, 351 creating rules 342 credentials 359 deploying policy 350 event definitions 340, 341 global constants 357 impact analysis 348 implementing 349 individual root events 354, 356 introduction 30 pipeline items 351 regular expressions 359 starting the Integrated Development Environment(IDE) 342, 343
Index 545
priority 317 queues 316 Service Desk integration 321 urgency 317 user actions 212, 317 user data 317 viewing alerts in MCC 320 architecture remote monitoring 223 systems management 244 asset types 290 association browser 169
types 158
C
CA File Transfer (CAFT) binaries 478 components using CAFT 477 configuration files 478 definition 476 environmental variables 478 CA Message Queuing Service (CAM) binaries 478 components using CAM 477 configuration files 478 configuring to use TLS encryption 428 optional ports 473 overview 476 required ports 472 transport layer protocols 476 CA Secure Sockets Facility (CCISSF) about 81 configuring 84 enabling 82 OpenSSL 82 CA Spectrum in Unicenter MP 219 integrating with 35, 485 CA Spectrum integration kit about 485 Ca7xxx log 523 CA-CONVIEW asset type 290 CA-CONVIEW-ACCESS asset type 290 caevent command 515, 516 CAICCI about 81 data encrypted 424 FIPS encryption 424 functions 89, 90, 91 installing with FIPS 425 required ports 472 turning on FIPS mode 425 user customization 95 CAISCHD0009 environment variable 504, 534 CAISCHD0011 environment variable 534 CAISCHD0012 environment variable 534 CAISCHD0014 environment variable 508 calendars default 504, 534 expanded processing 505 profiles 501
B
backlog scheduling 521 Base calendar 504, 534 billboards 153 Boolean logic using in AEC rules 356 bridge configuration 442 bridge configuration files, creating 446, 450 bridge control 442, 443 bridge instance 442, 444, 449 bridgecfg command 442 bridgecntrl command 443 bridging policy creating 442 implementing 442 overview 437 browsers Agent View 250 DSM View 251 Event Browser 251 MIB Browser 252 Node View 253 Remote Ping 253 Repository Monitor 254 Service Control Manager 254 SNMP Administrator 254 Business Process View Management about 25, 180 business process objects 180 integration with Event Management 183 rules 180, 181, 182 Business Process Views dynamic 159 Dynamic Containment Service 160 overview 157 scoreboards 199
scheduling by 502 undefined 525 cancel command 534 capagecl command 308 catrap command 298 catrapd daemon 292 cauexpr command 525 cawrksec utility 511 ccicrypt utility 87 child update rule 182 CICS agent 238 Cisco device recognition 435 Cisco Integration 435 class editor 166 classes in Alert Management System 315 Classic Discovery determine device names 137 discover a single network 136 effect of timeout values 103 how subnets are used 134 methods 130 multi-homed device support 102 preparing for 135 classifyrule.xml file 106, 112 CLEAN parameter 525 cleanlog shell script 523 CleverPath Portal 190 command messaging, implementing 310 Commands bridgecfg 442 bridgecntrl 443 caevent 515, 516 cancel 534 cauexpr 525 cautil 392 cawto 283 dscvrbe 101 modp 50, 51 schdchk 520 schdfore 520 schdhist 520 schdsimu 520 secadmin 381 unicntrl 532 unishutdown 534 unistart 532, 534 whathas 402 whohas 400 wmmodel 520
Common Discovery common discovery gui 141 configuring discovery agent 145, 146 configuring discovery server 142, 143, 144 discovery agent 139 discovery request client 140 discovery server 139 discovery web client 140 import service 147 IPv6 import tool 175 overview 138, 139 communication protocol security about 79 encryption levels 79 configuration and diagnostics 329 Configuration Manager about 26, 264 base profile 265, 266 configuration bundle 270, 271 delivery schedule 269, 270 differential profile 267, 268 file package 268, 269 reports 273 resource model groups 264 configuration, agent/server configuration, agent/server, multiple node 527 configuration, agent/server, single node 527 connect remotely to another MDB using WorldView Classic GUI 53 console log 292 console views 290 Continuous Discovery and CAFT 477 and CAM 477 behind firewalls 127 create unmanaged devices 118 default configuration 117 Discovery agents 115, 123 Discovery Manager 115, 121 exclude classes 119 how it works 116, 117 overview 115 set up SNMP community strings 120 Correlation creating rules 342 correlation rules, Advanced Event Correlation 340
CPU station 502 cross-platform scheduling description 528 implementation 530 manager/agent model 529 manager/agent model configuration 530 cross-platform scheduling implementation centralized 531 decentralized 531 overview 530 worldview 532 cyclic job submission 513
D
daily cubes 367 dashboards 191 Data Scoping about 62 activating 75 deactivating 76 implementing 78 in the 2D map 75 rule editor 77 rule evaluation 71, 73 rule inheritance 65 rule performance issues 68 rules 63, 64 security 69 user IDs for evaluation on Windows (Ingres) 70 user IDs for evaluation on Windows (Microsoft SQL Server) 70 data transport mechanisms 471 Define Logical Repository utility and VNODEs 43 connect remotely to another MDB 53 using 55 demand scheduling 518 description of 22 Desktop Management Interface (DMI) components 453 DMI agent 455 DMI browser 454 DMI manager 455 overview 453 service provider 453, 454 set trap destinations in the DMI agent 456 Unicenter support for DMI 455 destination repositories, multiple 439
destination repository, single 440 DIA communications port, configuring 474 DIA, required ports 472 Discovery about 99 classic interface 25 classification engine 102 classification rules 106 combining continuous and classic 101 Common 138, 139 configuration files 104, 106, 112 Continuous 115 create users with administrator privileges (Ingres) 50 create users with administrator privileges (SQL Server) 49 create users without administrator privileges (Ingres) 51 create users without administrator privileges (SQL Server) 50 events reported to the Event Console 123 IPv6 devices 138, 147 IPv6 import tool 175 object creation rules 104 subnet filters 103 timestamp 102 types of methods 104 display attributes in Alert Management System 317 Distributed Intelligence Architecture (DIA) 20, 80 DSM (Distributed State Machine) configuration 258 discover resources 249 interfaces 262 manage object properties 251 monitoring resources 221 overview 248 Dynamic Business Process Views 159 Dynamic COntainment Service (DCS) 160
E
early start time 502, 509, 522 eHealth integration about 34, 217 how it works in Unicenter MP 218 encrypting 81 encryption levels 79 encryption See also CAICCI 81
encryption utility for use with CCISSF 87 encryption, FIPS compliance 409 enterprise cubes 367 enum_cert_info function 92 environment variables 287, 310 ETPKI 409 eTrust Access Control integrating with 59, 60 not migrated 61, 62 event agent configuring 279 implementing 278 Event agent 277 event console 208, 289 Event correlation 289, 304 Event Management about 28, 275 Discovery events sent to the Event Console 123 filters 209 in Unicenter MP 206 integration with Business Process View Management 183 maintenance considerations 469 scoreboard 207 testing policy 283 virus scan utility 469 events 276 actions 208 impact 184 notification 183
Systems Performance 410 Unicenter Management Portal 429 Web Reporting Server 430
H
Historical Performance Agent 366, 367 hpaAgent 366, 367
I
impact events 184 impact in Alert Management System 317 Industry Standard MIBs 300 Ingres remote connections to the MDB 43 user groups 40 users 41 Intel Active Management 34 International Standards Organization (ISO) 300 Internet Assigned Numbers Authority (IANA) 300 IPv6 discovery 138, 139, 147, 175 IPX Discovery 132
J
Job Management Agent agent/server configuration 526 configuration 530 functions 500, 529 remote submission agent 500 remote tracking agent 500 job management autoscan cleanup and backlogging 521 definition 520 new-day 520, 524, 534 qualifications for 521 undefined calendars during 525 workload processing 522 Job Management calendar profile calendar processing 505 definition 501 event-based scheduling 502 job scheduling 502 profile notebook page 505 undefined calendars 525 Job Management environment variables authorized remote manager nodes 532 autoscan hour 532 CAISCHD0009 504, 534
F
fanout repository architecture 439 file close event 515 file privileges, changing 48 FIPS 140-2 compliance about 409 Active Directory Management 422 Agent Technology 423 CCI 424 compliant components 409 data encrypted 410, 422, 423, 424, 426, 429, 430 data encryption key 412, 422, 423, 427, 430, 431 installation considerations 413, 422, 424, 425, 427, 430, 431 Management Command Center 426
CAISCHD0011 534 CAISCHD0012 534 CAISCHD0014 534 cycle count precedence? 532 default calendar 532 for jobs and actions 535 interval between autoscans 532 post on cancel? 532 Job Management maintenance CLEAN parameter 525 database definitions, change to text file 525 log files 523 purge old history records 525 submit jobs for another user 526 tracking file 524 undefined calendars 525 Job Management Option agent/server configurations 526 agents 499, 500 autoscan 520 CPU station 502 event-based scheduling 502 job demand option 518 Job Management manager 529 job server 499, 500 jobflow 499 jobsets 506 log files 523 maintenance 523 multiple hosts 527 overview 499 POSTCPU station 502 PRECPU station 502 predictive scheduling 502 profiles 501 simulation report 519 single server 527 trigger profiles 515 variables 502 workload balancing 503 Job Management Option tasks clear undefined calendars 525 form groups of related tasks (jobsets) 506 identify resource requirements 503 identify work to perform 509 maintenance 524 monitor workload status 536 run a job on demand 518 run simulation reports 519
schedule work by dates 504 schedule work by special events 515 specify where to perform work 502 Job Management option variables early start time 502 maximum time 502 must complete time 502 must start time 502 Job Management predecessor profiles 501 Job Management resource profiles 501, 503 Job Management shell scripts cleanlog 523 schdcheck 520 schdfore 520 schdhist 520, 525 schdpxr 520 schdsimu 520 wmmodel 520 Job Management station group profiles 501, 502 Job Management station profiles 501, 502 job management triggers as predecessors 517 caevent 516 profile 515 job scheduling by dates 504 by special events 515 cross-platform scheduling 528 cross-platform scheduling implementation 530 expanded calendar processing 505 scheduling workstation usage 532 types 504 job server components 500 functions 500 jobflow .GBF files 543 customizing 540 design mode 541 forecast files 543 forecast views 537 jobflow GUI 537 levels, expanding/collapsing 541 multiple views 540 navigating 541 page setup dialog 543 refreshing 542
run mode 541 status color codes 539 status views 537 view status 539 jobflow forecast views definition 537 example 538 expanding/collapsing 541 mutliple views 540 opening 543 overview 538 printing 542 saving 543 viewing dependencies 539 jobflow printing adjusting pages/scale 543 customizing output 543 forecast views 542 page setup for 543 jobflow status view mutliple views 540 jobs autoscan 520 cyclic 513 DYNAMIC type 518 early start time 509 EXTERNAL setting 510 in jobsets 509 job profiles 501 on demand 518 overview 509 password validation 511 predecessors 511 resource requirements 503 resources 510 scheduling types 502 submission characteristics 510 workload processing 522 jobset predecessors canceled 508 nonexistent predecessor evaluation 508 jobset resources resource amount 506 resource usage 506 resource weight 506 jobsets definition 506 jobset profile 506 jobsets, on demand 518
L
link browser 170 log agent 240
M
maintenance Event Management 469 Management Command Center (Unicenter MCC) 21 Management Database (MDB) also see MDB 19 Management Information Base (MIB) 299, 300 manager layer browsers Agent View 250 DSM View 251 Event Browser 251 MIB Browser 252 Node View 253 overview 250 Remote Ping 253 Repository Monitor 254 Service Control Manager 254 SNMP Administrator 254 master catalog description of 22 Maximum time 502 MDB about 19 database support 19 managed objects 150 securing 39 users 40, 42 MDB server operating system user 42 remote connections 43, 53 mdbadmin Ingres user group 40 message actions 282, 283, 284, 303 scheduling by 502, 515 message records 282, 303 messages 280, 281, 284, 288 metadata 374 methods.xml file 106 modp command 50, 51 MOM Management about 33
alert severities 459 how it works 458 resolution states 460 status in WorldView 460 Monitor in Job server 500 monitoring enterprise 221 remote 222 resources 227 mtrsuf log 523 multi-homed devices 102 multiple node configuration in Job Management 527 Must complete time 502 Must start time 502
N
network topology 151 New-day autoscan 536 Nodes multiple 527 single 527 UNIX/Linux node support 387 Non-CPU tasks 502 non-root Event Agent 277 notification events 183 notifications 213
O
objects Discovery creation rules for 104 importance 161, 162, 163 managed 150 setting policy using alarmsets 164 severity levels 161 viewing properties 166 ObjectView customize chart 168 dashboard monitor 168 graph wizard 168 overview 167 on demand scheduling 518 OpenSSL 82
Performance Chargeback 366 Performance Configuration 375 Performance Data Grid (PDG) 370 Performance Distribution Servers 370, 372 Performance Domain Servers 370, 371 Performance Reporting 365 Performance Trend 365 period cubes 367 policies 276, 302 policy packs 302, 303, 304 ports about 471 optional 473 required 472 POSTCPU station 502 PRECPU station 502 Predecessors evaluated 522 for jobs 511 for jobsets 507 profiles 501 triggers as 517 Predictive scheduling 502 prfAgent 366 priority in Alert Management System 317 priority of jobs 522 Profile Editor 375 Profiles 515 propagation thresholds rule 182 protected asset table (PAT) 385 pseudo-MIBs 300 Purging history records 525
Q
queues in Alert Management System 316
R
Real-Time Performance Agent 366 Remote Monitoring about 27 advantage and disadvantage 222 architecture 223 components 223 resource types 225 role-based security 227 remote repository, connecting 54 Reports Configuration Manager 273 creating 30
P
Page Writer 310 PageNet 310 PAT (protected asset table) 385 performance architecture 368
in Unicenter MP 214 Job Management 520 report types 377 Security Management 399 Simulation 519 templates 378 Repository Bridge and notifications 438 architecture types 439 bridge configuration GUI 448 bridged and non-bridged objects 437 bridging rules 448 components 442 initialization 438 on UNIX/Linux 437 overview 437 rule file parameters 451 supported platforms 444 troubleshooting 445 view log files 446 Repository Bridge architecture aggregation architecture 440 duplicate objects 440 fanout architecture 439 which to use 441 Repository Bridge components bridge configuration 442 bridge control 443 bridge instances 444 creating a bridge configuration file 446, 450 managing bridge instances 449 Repository Bridge uses for problem notification 445 in a distributed organization 444 restricted view of resources 445 repository import export utility 173, 174 resource monitoring call-back mechanism 229 configuring auto discovery 230 evaluation policy 231 object instances 234 overview 227 polling method 234 selection list 234 support for SNMPv3 235 using Agent Technology 221 using metrics 232 watchers 235 write-back periodic configuration 234
resource monitoring functions 228 resource monitoring system agents Active Directory Services agent 238 CICS agent 238 Log agent 240 Script agent 240 SystemEDGE agent 241 UNIX/Linux agent 241 Windows agent 242 Windows Management Instrumentation agent 242 z/OS agent 243 Resources for Job Management evaluated 522 for jobs 510 for jobsets 506 profiles 501, 503 requirements 503 rules 180
S
SAN Discovery 133 schdcheck shell script 520 schdchk command 520 schdfore command 520 schdfore shell script 520 schdhist command 520 schdhist shell script 520, 525 schdlcs log 523 schdpxr shell script 520 schdsimu command 520 schdsimu shell script 520 Scheduling work across platforms 528 SCOM management how the integration works 464 NSM integration with 461 SCOM alerts 465 SCOM entities status in WorldView 466 software requirements 462 terminology 463 scoreboards and dashboards 191, 192 script agent 240 Secure Sockets Facility (SSF) ccicrypt utility 87 compability with previous CAICCI versions 83 encryption utility 87 enum_cert_info function 92 Security Management
   creating rules in WARN mode 389
   daemons, starting 388
   deactivate 399
   FAIL mode 384
   file access authorization 46
   functions 380
   implementation phases 382
   MONITOR mode 384
   node support implementation 387
   QUIET mode 384, 386
   remote CAISSF return codes 386
   setting options in FAIL mode 398
   WARN mode 384, 389
Security Management access
   access determination 395
   access modes 393
   access permissions 394
   access rule example 394
   access types 392
   asset permissions 392
   CAISSF scoping 396
   CONTROL mode 393
   CREATE mode 393
   date and time controls 393
   DELETE mode 393
   DENY type 392
   LOG type 392
   PERMIT type 392
   READ mode 393
   rule evaluation 395
   UPDATE mode 393
   violations 399
   WRITE mode 393
Security Management asset groups
   adding new 392
   defining 391
   description 381
   groups within groups 391
   nested 391
Security Management assets
   asset definitions 387
   asset names 391
   asset permissions 392
   asset types 391
   description 381
Security Management commit process
   description 381
   executing for production, FAIL mode 398
   in FAIL mode 398
   in WARN mode 398
   using the GUI 398
Security Management options
   authorized user list 386, 388
   automatic startup 387
   CAISSF command scope (Windows) 397
   CAISSF data scope 397
   CAISSF keyword scope 396
   CAISSF scoping 396
   command scoping options 396
   customizing 383
   default permission 384
   DEFAULT_PERMISSION 384
   implicit DENY permission for rules 392
   rule server support 386
   security management options, UNIX/Linux location 383
   setting to absolute values 388
   SSF_AUTH 386, 388
   system violation mode 384, 395
   SYSTEM_MODE 384
   USE_PAT 385
   user group server report 386
Security Management reports
   access violations 399
   overview 399
   what-has reports 402
   whohas reports 400
Security Management user groups 381
   defining 389
   description 381
   nested 390
   server 386
security policies 381
   testing during implementation 388
Service Desk integration in Alert Management System 321
   about the integration 321, 322
   integrating with Unicenter MP 216
set up read-only users for WorldView 53
severity level of an object 161, 164
severity propagation service 164, 165
Simulation Report for Job Management 519
single node configuration for Job Management 527
SmartBPV
   about 26, 184
   benefits 185
   create Business Process Views 184
   examples 186
   how it works 185
   integration with Event Management 183
SNMP traps 292, 293, 296, 297, 298, 299
source repositories, multiple 440
source repositories, single 439
special event scheduling in Job Management 515
SPO-Performance Scope 364
SSF support return codes 386
state count rule 181
station groups profile 501
stations
   profiles 501
   types 502
submit process as another user 526
Submitting jobs 510
summary cubes 373
support for version 3 traps 293
system monitoring for z/OS 32
SystemEDGE agent 241
systems management
   architecture 244
   DSM discovery process 249
   managed objects 245
   manager layer 247
   monitoring layer 244
   overview 243
   states 246
   threshold breach process 246
   WorldView layer 250
systems management configuration sets
   adaptive configuration 256
   benefits 255
   configuration file location 256
   distribute configset 257
   load configset 257
   write configset using mkconfig utility 256
systems management DSM configuration
   Agent Class Scoping 258
   Discovery Community Strings 259
   Discovery Pollset Values 259
   DSM Wizard 261
   IP Address Scoping 259
   Managed Object Scoping 260
   overview 258
systems management DSM Monitor
   DSM Monitor Dashboard 263
   DSM Monitor View 262
   DSM Node View 263
   using DSM 261
systems management, manager layer components
   Distributed Services Bus 247
   DSM 248
   DSM Monitor 249
   DSM Store 249
   Object Store 248
   Service Control Manager 247
   SNMP/DIA Gateways 248
   Trap multiplexer 248
   WorldView Gateway 249
Systems Performance FIPS encryption
   about 410
   adding the Performance Agent 418
   changing encryption key 416
   data encrypted 410
   data encryption key 412
   generate a new encryption key 420
   installation considerations 413
   migration considerations 418
   reencrypting data 421
   switching off FIPS mode 417
   switching to FIPS mode 415
   turning on 412
   updating user domain access file 419
Systems Performance, introduction to 29, 363
T
Technical Support iv
time controls
   user access 393
   workload 504, 509, 513
timestamp for discovered devices 102
tracking file in Job Management 520, 524
Trap daemon 292
Trap Daemon 433
trap destinations 296
trap filters 434
trap formatting 297
Trap Manager
   about 31
   local versus remote installation 434
TRAP tables 299
trigger profiles
   as predecessors 517
   scheduling by 502, 515
Index 555
U
uniadmin Ingres user group 40
Unicenter classic interface 24
Unicenter MCC
   about 21
   controlling access to 56
   deactivating password caching 59
   FIPS encryption 426
   overriding user ID 58
   starting 22
Unicenter MCC FIPS encryption
   about 426
   configuring CAM to use TLS encryption 428
   data encrypted 426
   data encryption key 427
   installation considerations 427
   migration considerations 428
   turning off password caching 428
Unicenter MP
   about 189
   administration 193, 194
   components 198
   filters 209
   FIPS encryption 429
   managing component connections 195
   portal explorer 201
   reports 214
   scoreboards and dashboards 191, 192
   severity browser 202
   users 191
   workplaces 196
Unicenter MP components
   about 198
   agent dashboards 205
   agent management 203
   agent scoreboards 203, 204
   alert actions 212
   Alert Management 209, 211
   alert scoreboard 211
   eHealth 217, 218
   event actions 208
   Event Management 206, 208
   event scoreboard 207
   notifications 213
   other components 220
   SPECTRUM 219
   Unicenter Service Desk 216
   Unicenter Service Metric Analysis 215
   WorldView 198
   WorldView scoreboards 199, 200
Unicenter MP FIPS encryption
   about 429
   data encrypted 429
   data encryption key 430
   installation considerations 430
Unicenter Notification Services
   about 323
   configuration and diagnostics 329
   email protocol 330
   features 325
   how it works 324
   instant message protocol 331
   page snpp protocol 331
   page tap protocol 332
   protocols 329
   script protocol 338
   short message protocol 335
   voice protocol 336
   wireless protocol 331
Unicenter NSM
   about 17
   database support 19
   for UNIX and Linux 18, 403
Unicenter NSM security
   administrator ID 42
   changing administrator password 47
   changing file privileges 48
   changing severity propagation password 47
   create Unicenter NSM administrators (Ingres) 43
   create Unicenter NSM administrators (Microsoft SQL Server) 42
   embedded 44
   MDB security 39
   Microsoft SQL Server roles 40
   role-based 39
   roles (user groups) 44
   security rules 44
   Windows Administrators and Power Users groups 47
Unicenter Registration Services 177
Unicenter Service Desk. See Service Desk integration in Alert Management System 321
Unicenter Service Metric Analysis, working with 215
unicntrl command 532
unishutdown command 534
unistart command 534
uniuser Ingres user group 40
UNIX system agent 241
UNIX/Linux support
   about 18, 403
   database support 19
   supported components 404, 406
unmanaged devices, create with Continuous Discovery 118
urgency in Alert Management System 317
user actions in Alert Management System 317
user data in Alert Management System 317
user groups
   defining 389
   groups within groups 390
   nested 390
user interfaces 23
using Agent Technology 221
V
Virtual Node Names (VNODEs) 43
virus scanning utility 469
Vista, running utilities on 49
VNODEs, Ingres 43
W
Web Reporting Server FIPS encryption
   about 430
   data encrypted 430
   data encryption key 431
   installation considerations 431
weighted severity 162
whathas command 402
whohas command 400
Windows Administrators and Power Users groups 47
Windows system agent 242
Wireless Messaging 306
   capagecl 308
   command messaging 310
   configuration files 309
   environment variables 310
   message files 308
   Policy Writer GUI 311
   Reply Information Configuration file 309
   reply text format 308
   template files 311
WMI agent 242
wmmodel command 519, 520
wmmodel shell script 520
workload balancing in Job Management 503
WorldView
   2D Map 151, 152
   about 25
   alarmsets 164
   association browser 169
   billboards 153
   business process views 157
   class editor 166
   Common Discovery Import service 147
   components 149
   custom views 154
   importance of an object 161, 162
   importing and exporting objects 171, 172, 173, 174
   in Unicenter MP 198
   IPv6 import tool 175
   IPv6 topology 147
   link browser 170
   managed objects 150
   ObjectView 167
   scoreboards 199, 200
   set up read-only users (Ingres) 53
   set up read-only users (Microsoft SQL Servers) 52
   set up read-only windows-authenticated users 52
   severity levels 161
   severity propagation service 164
WorldView classic 24
wvadmin Ingres user group 40
wvuser Ingres user group 40
Z
z/OS monitoring 32
z/OS system agent 32, 243