Task Configuration Reference Guide
Version 8.0
Disclaimer of Liability
The information contained in this document (and other media provided
herewith) constitutes confidential information of Siemens AG and is
protected by copyright laws and international copyright treaties, as well
as other intellectual property laws and treaties. Such information is not to
be disclosed, used or copied by, or transferred to, any individual,
corporation, company or other entity, in any form, by any means or for
any purpose, without the express written permission of Siemens AG.
The information contained in this document and related media
constitutes documentation relating to a software product and is being
provided solely for use with such software product. The software product
was provided pursuant to a separate license or other agreement and
such information is subject to the restrictions and other terms and
conditions of such license or other agreement.
The information contained in this document and related media is subject
to change without notice and does not represent a commitment and does
not constitute any warranty on the part of Siemens AG. Except for
warranties, if any, set forth in the separate license or other agreement
relating to the applicable software product, Siemens AG makes no
warranty, express or implied, with respect to such information or such
software product.
Trademarks
Siemens AG and FactoryLink are trademarks or registered trademarks of
Siemens AG in the United States and/or other countries. All other brand
or product names are trademarks or registered trademarks of their
respective holders.
Chapter 1  Introduction
    Tasks by Function
    Using this Guide
        General
        Configuration Tables
        Program Arguments
        Error Messages
    Technical Support

Chapter 2  Alarms
    Operating Principles
        Alarm Logging Methodology
        Establishing the Alarm Criteria
        Alarm Status
        Alarm Categories
        Parent/Child Relationship
        Hide Alarms
        Locally Redefined Unique Alarm IDs
        Alarm Persistence
        Alarm Distribution
        Alarm Logging
        Logbook
    Configuring Alarms
        Set Up Alarm Groups
        Define Alarms
        Define Parent-Child Relationships
        Set Up Database Archive Requirements
        Set Up General Alarm Counters
        Set Up Remote Alarm Groups Control
        Set Up Alarm Local Area Network (LAN) Control
    Using Alarm E-mail Notifications
        E-mail Notification Messages

Index
Introduction
This guide provides detailed technical information about how to configure FactoryLink tasks.
This guide is intended primarily for FactoryLink users who need to build a FactoryLink
application.
It is recommended that you use this guide as a reference while you are developing your
FactoryLink application.
TASKS BY FUNCTION
The tasks in this guide are arranged in alphabetical order. The following table provides a
functional listing of the tasks.
Function              Task
Basic Functionality   Alarms, Batch Recipe, File Manager, Math and Logic,
                      Persistence, Print Spooler, Programmable Counters,
                      Report Generator, Run-Time Manager, Scaling and
                      Deadbanding, Tag Server
General
Most of the tasks discussed in this guide use the Configuration Explorer. The information in
this guide identifies the location of the tasks, defines the fields and parameters, and explains
the usage of the tasks. For detailed information about the Configuration Explorer, see the
Configuration Explorer Help.
Procedures that can be done using the Client Builder are mentioned, along with references to
the Client Builder Help for detailed information.
Configuration Tables
Accessing
In the Configuration Explorer, you can work with configuration tables in the Grid Editor or in
the Form Editor. The principal method of showing the configuration tables in this guide is in
the Grid Editor, but occasionally the Form Editor is used when it is easier to explain a function.
Which editor to use is a user preference.
The Accessing section identifies the path to open the configuration tables in your server
application. Many of the tables have a parent/child relationship. After a parent (control) table is
set up, the child table becomes accessible. You can open the child table using either of these
methods:
• Expanding the folders in the configuration tree and opening the appropriate table
• Using the Drill Down or Drill Up buttons in the toolbar.
For detailed information about using the Configuration Explorer and understanding the user
interface, see the Configuration Explorer Help.
Field Descriptions
The Field Descriptions section provides a definition for each field that appears in the
configuration table. With a configuration table open, you can obtain field-level help by
clicking in a field and then clicking the Help button.
The field descriptions may show the valid entries, valid data types, and default values for the
fields. Fields that do not show a default value are usually blank. An asterisk (*) before a
field name denotes a field that accepts a tag name or a constant value as a valid entry. If a
constant is specified, it must be prefixed with a single quotation mark (').
Tip: When working in the configuration tables, you can specify the value for
one field and click the Save button to have the default values automatically
appear in the other fields. If a required field is not specified, a message
identifying the required field appears.
In the field descriptions section, an X in the Req. (required) column indicates that an entry is
required for that field.
The configuration tables may require entry of a valid tag name and data type. The
FactoryLink tasks use tag names to reference the tags in the real-time database. After a tag is
defined, unlimited references can be made to it. A data type identifies the type of data that will
be stored in the tag.
The Fundamentals Guide provides recommendations and guidelines for naming tags and a
description about the data types.
Program Arguments
Program arguments are valid for the current revision of FactoryLink at the time of publishing.
Not all listed arguments and their parameters may be implemented in earlier versions of
FactoryLink. The program arguments can be configured with the Configuration Explorer in the
Program Arguments column of the System Configuration table.
A program argument is marked by a hyphen (-) followed by an argument name and, if required,
a value. Program arguments are not case-sensitive and must be separated by at least one
space. An argument without a hyphen is interpreted as the name of a file from which the
program arguments are read.
Error Messages
The Error Messages section identifies the messages that may display on the Run-Time
Manager screen if an error occurs with the task during run time. In some cases, error messages
are also written to a log file. The location of this file is identified in the appropriate tasks.
In some error messages, references to tags and elements are synonymous and are used
interchangeably.
TECHNICAL SUPPORT
If you experience problems or have questions regarding the use of this product, contact your
authorized Siemens reseller or representative.
Alarms
The alarms task is used to define alarms and monitor them throughout an alarm cycle until the
tag value no longer meets the alarm criteria.
Alarming interacts with the historian task to write alarm records to a database. The alarm data
is logged to the relational database and/or to a file in a table or text format. The FactoryLink
Distributed Alarm Logger performs logging as the status of the alarm changes: when the alarm
occurs, when the alarm is acknowledged, or after an alarm has returned to the normal status.
At run time, the alarm task provides the operator the ability to view and manage the alarms
which have met the established alarm criteria in the real-time database.
The alarm criteria can be configured to require an acknowledgment from the operator. The
acknowledgment ensures the operator knows the alarm has been generated because the alarm
does not clear from the viewer until it is acknowledged. If you want to preserve the times and
occurrences of alarms, configure the Distributed Alarm Logger task to send the alarm data to a
disk-based relational database using an historian task.
You can configure the Distributed Alarm Logger task to distribute the alarm messages across a
network if you want the alarms to be viewed on more than one workstation. If the alarms are
being logged and acknowledged, the node names where they were acknowledged are included
in the alarm data sent to the relational database.
1. The real-time database receives and stores tag values from various sources, such as a remote
   device, user input, or computation results from FactoryLink tasks.

2. The Distributed Alarm Logger task reads and compares the tag values stored in the real-time
   database with criteria defined in tables. These tables contain the configuration information for
   the Distributed Alarm Logger task.

3. When the value of the tag meets the criteria for an alarm, the Distributed Alarm Logger task
   sends the alarm to the alarm server for display on the Alarm Viewer.

4. Each time the tag value changes, the Distributed Alarm Logger task evaluates the tag. If the
   status has changed, the Alarm Viewer is updated.

5. When the value of the tag no longer meets the criteria for an alarm, the Distributed Alarm
   Logger task removes the alarm from the active alarm list. The alarm is cleared from the Alarm
   Viewer. However, if the alarm has been configured to require an acknowledgment from the
   operator, a status change to the alarm message occurs instead. The alarm is cleared from the
   list when it is acknowledged.

6. If the alarms are being logged to a relational database, the Distributed Alarm Logger task sends
   the alarm data to the relational database using a historian task each time a change occurs in the
   status of the alarm.
For example, suppose two alarms are configured on the same pressure tag with a limit of 900
and a deadband of 100. In both cases, the tag condition is greater than (>), but each alarm is
different. As the pressure changes, the display is updated to reflect the new readings and
messages. When the pressure drops to 800, the danger passes and the alarms are no longer active.
The tag value must be checked against three components to establish this alarm:
• Limit – The limit is the value the condition is checked against. The example establishes the
limit as 900.
• Condition – The condition that triggers the alarm. In the example, the condition is greater
than.
• Deadband – The deadband is a range above or below the limit. The alarm stays active in this
range. The example uses a deadband of 100 (900-100 = 800).
The Limit and the Deadband can both be set with a constant value or the value from another
tag. The following valid condition settings generate alarms:
ON                  An alarm is triggered when the value of the tag referenced is ON (1).

OFF                 An alarm is triggered when the value of the tag referenced is OFF (0).

TGL                 An alarm is triggered when the value of the tag changes, such as a change
                    from ON (1) to OFF (0), from OFF (0) to ON (1), or if the change-status
                    bits of the tag are set by a forced write.

HI, GT, HIHI, or >  An alarm is triggered when the value of an analog or float tag is greater
                    than the value specified by the Limit.

LO, LT, LOLO, or <  An alarm is triggered when the value of an analog or float tag is less
                    than the value specified by the Limit.

GE or >=            An alarm is triggered when the value of an analog or float tag is greater
                    than or equal to the value specified by the Limit.
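To make the interplay of limit, condition, and deadband concrete, here is a minimal sketch of how one update of a tag might be evaluated. It is illustrative only: the function name, signature, and state handling are invented and are not the FactoryLink implementation.

```python
def next_alarm_state(active, value, condition, limit, deadband=0):
    """Return True (active) or False (normal) for one tag update (sketch)."""
    cond = condition.upper()
    if cond == "ON":
        return value == 1
    if cond == "OFF":
        return value == 0
    if cond in ("HI", "GT", "HIHI", ">"):
        if not active:
            return value > limit
        # once active, the deadband below the limit keeps the alarm active
        return value > limit - deadband
    if cond in ("LO", "LT", "LOLO", "<"):
        if not active:
            return value < limit
        return value < limit + deadband
    if cond in ("GE", ">="):
        if not active:
            return value >= limit
        return value >= limit - deadband
    raise ValueError("unsupported condition: " + condition)
```

With the pressure example above (Limit = 900, Deadband = 100, condition GT), an alarm triggered at 950 remains active at 850 and returns to normal once the value drops to 800.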
Digital Tags
Figure 2-1 shows the behavior of a digital alarm with specified limits by tag type. The diagram
represents an alarm status of active and normal based on a value, a limit, and a deadband range.

Figure 2-1 Digital Alarm Cycle
[Diagram: three panels (ON, OFF, and TGL) plotting a digital tag value (0 or 1) against time,
marking when the alarm status is Active and when it is Normal.]
The principles of operations are identical when operating on analog, longana, or float tag
types. The smallest unit detected is dependent on the type.
Figure 2-2 shows the behavior of analog, longana, or float tag types with specified limits.
The diagram represents an alarm status of active and normal based on the value, limit, and
deadband range. All examples assume the Limit = 5 and Deadband = 2.
Figure 2-2 Analog and Float Alarm Cycles
[Diagram: four panels (GT with >=, LT with <=, EQ with =, and NE with <>) plotting a tag value
against time around the Limit of 5, with the deadband span from 3 to 7 and the Active and
Normal alarm states marked.]
Message Tags
When the value of a Message tag changes, the value is checked for equality or inequality
against the entire message defined as part of the alarm criteria.
Alarm Status

The Distributed Alarm Logger task maintains running counts of the number of alarms in the
active queue at run time.
Alarm Categories
Categorizing alarms facilitates administration and analysis. Three methods are provided to
show related alarms:
• Group Name – The group name is assigned to a class of alarms. Group names can be
identifiers of the severity of the alarm, represent similar types such as pressure gauges, or
indicate a combination of any other characteristics.
• Area – The area is assigned to each alarm individually. More than one alarm can reside in an
area and alarms from different groups can also reside together. An area can reflect a physical
location such as the boiler room or an area of responsibility such as maintenance.
• Priority – The priority is a numerical hierarchy assigned to each individual alarm. Use a
number between 1 (lowest) and 9999 (highest) to set priority. Multiple alarms can be
assigned the same priority number and multiple groups and areas can have common priority
numbers within them.
At least one Group Name must be established to define any individual alarms. All alarms must
belong to a group. The use of areas and priorities is optional. Categories enable filtering and
sorting of alarms on the Alarm Viewer.
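The three category methods can be pictured as fields on each alarm record that a viewer filters and sorts on. The records and field names below are hypothetical, for illustration only.

```python
# Hypothetical alarm records: every alarm has a group, and optionally an
# area and a priority from 1 (lowest) to 9999 (highest).
alarms = [
    {"tag": "BoilerTemp",  "group": "CRITICAL", "area": "BoilerRoom",  "priority": 9000},
    {"tag": "ValveStatus", "group": "WARNING",  "area": "Maintenance", "priority": 100},
    {"tag": "PumpFlow",    "group": "CRITICAL", "area": "BoilerRoom",  "priority": 500},
]

def view(alarms, area=None):
    """Filter by area, then sort highest priority first (sketch only)."""
    selected = [a for a in alarms if area is None or a["area"] == area]
    return sorted(selected, key=lambda a: a["priority"], reverse=True)
```

For example, filtering on the BoilerRoom area returns BoilerTemp before PumpFlow, because its priority number is higher.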
Parent/Child Relationship
The conditions which generate one alarm may also cause another related alarm to be
generated. When these relationships exist, you generally do not want to display the additional
alarms. For example, if the closing of a valve that feeds four different pipelines generates an
alarm, it is a reasonable assumption that the lack of flow in each pipe would generate an alarm
based on the value of the flowmeter tag as shown in Figure 2-3. These resulting alarms would
not be important because you already know the flow has been cut off and why. This
relationship between the alarms is identified as a parent/child relationship.
Figure 2-3 Parent/Child Alarm Relationship
[Diagram: one parent alarm (the main valve) feeding four child alarms, one for each pipeline
flowmeter.]
In the example, the main valve is the parent alarm of each of the flow alarms. The resulting
child alarms are not displayed or counted as active alarms because they are a result of the
parent alarm. However, if the main valve is open and one of the individual pipeline flowmeters
registers an alarm, you would want to be advised. In this case the child is not dependent on the
parent because the child alarm was initiated on its own. This alarm is displayed and counts as
an active alarm.
Hide Alarms
Alarm hiding (sometimes referred to as masking) is done when you do not need to manage a
particular set of alarms. Alarm hiding is used in the following common situations:
• Equipment maintenance
• Redundant systems
• Station functionality
• Bad sensor
Alarm hiding should not be confused with filters used with the Alarm Viewer. Alarm hiding
can be configured to disregard a particular set of alarms for viewing and/or logging purposes.
Alarm filtering selects specified alarms for viewing and suppresses other alarms from the
Alarm Viewer; however, the alarms are still being logged and tracked.
Filtering is more common on multiuser or distributed systems. In these architectures, all users
have the ability to monitor all alarms. However, certain operators may be responsible for a
subset of these alarms. Filters enable operators to view only alarms they are responsible for on
the Alarm Viewer.
The Global Hide tag is used most frequently in redundant systems. In redundant systems, one
node is the master and all alarms are active for this node (Global Hide tag = 0). The slave node
or standby node has the Global Hide tag = 1.
The Group Hide tag is used to hide equipment maintenance alarms. The developer must ensure
that alarms are grouped by machine, so when a maintenance cycle begins, those alarms can be
hidden.
The Group Hide tags are also used to define station functionality. This is a special case because
a node may have multiple functional requirements. For example, a node may function as a
simple operator station for only one piece of equipment one day. The next day the same node
may be the supervisor's station for all of the equipment. Groups are hidden based on the node
functionality.
In some systems, individual alarms may need to be hidden to silence an alarm because of a
malfunctioning sensor. When the sensor is repaired, the alarm needs to be monitored again.
Remote Group
Alarms received from remote groups do not have a hiding function. Alarms should be hidden
at the server node. If you do not want to view the alarms, create a filter in the Alarm Viewer so
the alarms do not show.
Event Alarms
Event alarms are any alarms that are logged to a database but are not processed for viewing and
acknowledgment. This provides archival of the alarm condition without requiring operator
processing. To configure an event alarm, use the Group Hide tag or the Alarm Hide tag.
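Assuming the hide tags combine as independent switches, the effective visibility check might look like this sketch; the exact precedence FactoryLink applies is not documented here.

```python
def alarm_hidden(global_hide=0, group_hide=0, alarm_hide=0):
    """An alarm is suppressed if it is hidden at any level (assumption):
    globally (e.g. a redundant standby node), by its group (e.g. a machine
    under maintenance), or individually (e.g. a bad sensor)."""
    return bool(global_hide or group_hide or alarm_hide)
```

In the redundancy example above, the master node runs with Global Hide tag = 0 (alarms visible) while the standby node runs with Global Hide tag = 1 (all alarms hidden).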
Alarm Persistence
Alarm persistence is the storing of current information about the status of active alarms and the
child alarms at user-defined intervals. At startup the information is read preserving important
information, for example, initial time and acknowledgment information.
If the *.prs file cannot be read at startup, the *.bak file is used instead.
The al_log.prs file is updated at the time the Distributed Alarm Logger task is shut down and
on a Persistence Timed Trigger change. The al_log.bak file is updated on a Persistence Backup
Trigger change. For more information about the persistence function, see “Persistence” on
page 389.
Upon restart of the Distributed Alarm Logger task, the al_log.prs or al_log.bak file is read into
memory, and all alarms are checked for validity.
The active alarms are stored using their Unique Alarm ID number. If you have not defined a
Unique Alarm ID in the alarm definition, one is defined at startup. If the configuration does not
change, each alarm receives the same Unique Alarm ID as at the previous startup. If the
configuration changes, however, each Unique Alarm ID could be altered, and the Distributed
Alarm Logger task could load persistence information for the wrong alarms or fail to load it
at all.
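The startup sequence described above (read al_log.prs, fall back to al_log.bak, then key the restored state by Unique Alarm ID) can be sketched as follows. The on-disk format is not documented here, so JSON keyed by Unique Alarm ID is assumed purely for illustration.

```python
import json

def load_persistence(prs_path="al_log.prs", bak_path="al_log.bak"):
    """Read persisted alarm state, falling back to the backup file (sketch).

    Assumed format: a JSON object keyed by Unique Alarm ID, e.g.
    {"99201": {"status": "ACTIVE"}}.
    """
    for path in (prs_path, bak_path):
        try:
            with open(path) as fh:
                data = json.load(fh)
            return {int(uid): state for uid, state in data.items()}
        except (OSError, ValueError):
            continue  # file missing or unreadable: try the backup
    return {}         # neither file readable: start with no persisted alarms
```

Keying by Unique Alarm ID mirrors the caveat above: if the configuration changes and the IDs shift, the restored state no longer lines up with the right alarms.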
Alarm Logging
If you want to preserve the time of alarm, alarm data, and the node that acknowledged the
alarm, you can configure the Distributed Alarm Logger task to read data from the tags in the
real-time database and send the data to a disk-based relational database or to a text file. Data
logged to a relational database is then available for browsing through the FactoryLink
Database Browser or other browser program.
The Distributed Alarm Logger task logs data to a relational database using the same
methodology as the FactoryLink Database Logger. The data is logged in a table format using a
historian task. Alarm instances are logged at a status change: as the alarm occurs, when the
alarm is acknowledged, or an alarm returns to the normal status. The tables for alarm logging
output and their associated schemas are already defined for the Distributed Alarm Logger task.
If a remote group has logging turned on but no database information is defined on the client
node, no information is logged. This condition does not result in the display of an error
message.
When a remote node shuts down and restarts or reconnects after a communication failure with
the same alarm still active, the logger tries to insert the alarm into the database twice. This
condition results in generating a Duplicate Entry error.
The record length is determined by the size specified in the Message Size field of the Alarm
Archive Control Information table in the Distributed Alarm Logger Setup table.
Table 2-1 and Table 2-2 describe the schema layout used to build the alarm entry table.
Logbook
Entries to the logbook are indicated by an asterisk in the Logbook field on the Alarm Control
Viewer. The logbook data is viewable using the Database Browser. See Client Builder Help for
more information on the Alarm Logbook.
The examples in this section are from the starter applications, supplied with the software.
These applications provide tables with preconfigured data to illustrate proper configuration of
the fields. It is recommended that you use a starter application as the basis for your application.
This will make configuration faster and easier.
Color and sound information in the Alarm Group Control table does not transfer to the Client
Builder Alarm Viewer. These features are individually configured in the Client Builder
application. If you are viewing alarms using ECS Graphics, colors and sounds can be
configured in the Alarm Group Control table.
Three groups are preconfigured for default purposes: WARNING, CRITICAL, and SYSTEM.
These can be used or deleted as required.
Accessing
Field Descriptions
Note: The fields in bold are not passed to the Client Builder Alarm Viewer. They are
only recognized by the alarm task in ECS Graphics.
To define a basic alarm, enter a tag name for the alarm identity and establish the
conditions that generate the alarm.
Note: Setup of the alarm group controls is essential before alarm records can be
defined. All alarms must be defined within a group.
Accessing
Alarms > Distributed Alarm Definitions > Alarm Group Control > “group name” > Alarm Definition
Information
Field Descriptions
Parent-child alarm relationships are based on the parent alarm status. When a child alarm is
initiated within the defined child alarm delay, it is hidden if the parent alarm is in the ACTIVE
status. The child alarm is activated when the parent alarm returns to NORMAL. If the parent
alarm is already in the NORMAL status, the child alarm is activated immediately.
Each alarm can have multiple parent/child relationships. Alarms defined in a remote group can
never act as a child alarm. A parent alarm must have a defined Unique Alarm ID to create the
child alarms on the local node.
Each alarm is evaluated by the Distributed Alarm Logger task and compared to its parent/child
relationship prior to displaying.
• If the alarm is a parent, it is displayed.
• If the alarm is a child and the parent status is not active, the child is displayed.
• If the alarm is a child and the parent status is active, the child alarm is disregarded or
displayed based on delay criteria you establish in the relationship.
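The three display rules above reduce to a small decision function; the names are invented for illustration and are not part of the FactoryLink API.

```python
def child_display(is_child, parent_active, within_child_delay):
    """Decide whether an alarm is shown, per the rules above (sketch)."""
    if not is_child:
        return True             # a parent alarm is always displayed
    if not parent_active:
        return True             # independent child: displayed
    # parent is active: hide the child only within the child alarm delay
    return not within_child_delay
```

In the valve example, a pipeline flow alarm that fires while the main valve alarm is active (and within the delay) is suppressed; the same alarm firing on its own is displayed and counted as active.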
In the parent/child relationship, two kinds of delays can be specified: child alarm delay and
child recovery delay. These delays specify the time allowed between the generation or clearing
of a parent alarm and the activation of a child alarm unrelated to the parent.
The child alarm delay is the length of time a child alarm is suppressed after the parent
alarm is triggered.
The conditions that generate both the parent and child alarms must return to normal to allow
the alarm statuses to return to normal. When both have returned to normal, the parent/child
relationship is reestablished. At the next invocation of the parent, the timer is started again to
inhibit the display of the child alarm for the child alarm delay period. Figure 2-5 illustrates
these concepts.
The child recovery delay is the length of time a child alarm is given to return to normal
status after the parent alarm has returned to normal status.
In the previous example, the main valve causing the generation of the parent alarm was shut
off. This generated the four pipeline alarms but they are disregarded because they are
redundant. If the main valve is now turned on, the flow should return to all four pipelines. The
child recovery delay provides sufficient time for a child alarm status to return to normal. If the
child status cannot return to normal in this time period then the child alarm generates an alarm.
After the child status has returned to normal, and the parent has a normal status, the
parent/child relationship is reestablished. Figure 2-6 illustrates these concepts.
Figure 2-6 Child Recovery Delay
[Diagram: two timelines, each with a parent alarm at 10:00 and a child recovery delay of :05.
In the first, the suppressed child alarm returns to normal within the recovery delay after the
parent returns to normal, so no alarms are displayed. In the second, the child does not recover
in time and is alarmed at 10:06, so Child 1 is displayed.]
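A simplified model of the two delay windows follows, using invented names and second-based timestamps; it is a sketch of the behavior described above, not the FactoryLink implementation.

```python
def child_visible(now, parent_active, parent_trigger_time, parent_normal_time,
                  child_alarm_delay, child_recovery_delay):
    """Decide whether an active child alarm is displayed (sketch only).

    - While the parent is active, the child is suppressed for
      child_alarm_delay seconds after the parent triggered.
    - After the parent returns to normal, the child has
      child_recovery_delay seconds to recover before it is displayed.
    """
    if parent_active:
        return (now - parent_trigger_time) > child_alarm_delay
    return (now - parent_normal_time) > child_recovery_delay
```

A child that fires well after the child alarm delay is treated as unrelated to the parent and shown; a child still active once the recovery delay has elapsed generates its own alarm.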
TGL type alarms should not be configured as parent alarms. When a TGL alarm is generated it
becomes ACTIVE and immediately returns to NORMAL. A TGL alarm never remains in the
ACTIVE status. Using a TGL alarm as a parent would result in the child alarm never being
hidden. An alarm can be a child to more than one parent alarm.
Alarms are logged as soon as they are generated. The Logger task (AL_LOG) performs and
controls all alarm logging. The internal database creates the files using the configured structure
depending on the selection made at FactoryLink Installation.
To configure logging to a text file and archiving of the text files, configure the Log File
Trigger and Log File Directory fields in this table and either the Log field or the Log Method
Tag field
in the Alarm Group Control table. Also, the Database Alias Name from the Alarm Archival
Control table must be entered in the Database Alias Name field on the appropriate Historian
Information table if alarm data is logged to a relational database.
Accessing
Alarms > Distributed Alarm Logger Setup > Alarm Archive Control
Accessing
Alarms > Distributed Alarm Logger Setup > General Alarm Setup Control
Field Descriptions
Accessing
Alarms > Distributed Alarm Logger Setup > Remote Alarm Groups Control
Field Descriptions
Accessing
Alarms > Distributed Alarm Logger Setup > Alarm Local Area Network (LAN) Control
Field Descriptions
If a client acknowledges an alarm in Client Builder, the acknowledge event is sent to the
E-mail Agent, and the agent notifies all necessary contacts. Any pending outgoing e-mail
pertaining to the original alarm is not processed.
Figure 2-7 illustrates how an alarm notification is processed. At 1, a tag configured for
alarms enters the active state, and the alarm logger sends the alarm ID, sequence ID, and
notification group information. At 2, the E-mail Agent sends the alarm information to all
contacts in the notification group. At 3, a recipient responds to the e-mail and acknowledges
the alarm. At 4, the E-mail Agent verifies the contact is authorized to acknowledge the alarm
and then formally notifies the alarm logger task that the alarm has been acknowledged. At 5,
the alarm logger sends an acknowledge event to the E-mail Agent with the alarm ID, sequence
ID, and notification group. (The alarm logger task performs the actual acknowledgment.) At
6, the E-mail Agent sends an acknowledgment e-mail to all contacts in the notification group.
Note: The E-mail Agent is a self-contained e-mail client that supports POP3 and
SMTP. No additional software is required.
Figure 2-7 Alarm Notification and Acknowledgment
[Diagram: the FactoryLink alarm logger task, the E-mail Notification Agent, the contacts, and
the Client Builder/Alarm Viewer Control, with numbered arrows 1 through 6 showing the alarm
notification, e-mail notification, acknowledgment reply, and acknowledgment e-mail flow
described above.]
The notification group determines which contact groups receive the e-mail message. The
Alarm ID and Alarm Sequence ID make up part of the outgoing e-mail Subject field, which
contains the Subject Text + Alarm ID + Sequence ID.
The E-mail Subject text is defined in the alarm logger task. The Subject Text can contain any
custom message. It is recommended that the Subject Text either contain the alarm tag name or
descriptive text about the alarm; this information helps the recipient to identify the alarm.
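Following the sample message later in this section, the outgoing Subject field could be composed as below; the exact separator format (AID=/SEQ=) is taken from that sample and may differ in other configurations.

```python
def build_email_subject(subject_text, alarm_id, sequence_id):
    """Compose Subject Text + Alarm ID + Sequence ID (format assumed)."""
    return "{}, AID={}, SEQ={}".format(subject_text, alarm_id, sequence_id)
```

Using the alarm tag name as the Subject Text, as recommended above, lets the recipient identify the alarm at a glance.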
An individual contact can be configured to receive the alarm message text as part of the e-mail
body. Because some contacts may have restrictions on the size of e-mail they can receive
(such as mobile phones), including the alarm message text is optional. The reply instructions
are added to the message body of the outgoing e-mail only if the contact is configured to
include them. Figure 2-8 shows an outgoing e-mail message that requires an
acknowledgment by the recipient and one that does not (intended simply to inform the
recipient).
The reply instructions are contained in a multilingual file named emreply.txt, located in the
FLBIN\MSG\[language] directory, where [language] is EN, FR, or DE. If a language is not
supported, the desired language can be substituted in the text file for the currently defined
language set using FLLANG. For example, if reply instructions must be sent only to Chinese
recipients and the current FLLANG setting is EN (English), the EN text entry in the
emreply.txt file can be changed to the Chinese text.
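The language lookup described above might be sketched as follows. Only the FLBIN\MSG\[language]\emreply.txt layout comes from the text; the fallback-to-EN behavior of this helper is an assumption for illustration:

```python
import os

def reply_instructions_path(flbin, fllang):
    """Locate emreply.txt for the configured language set (EN, FR, or DE).
    Falling back to EN for an unsupported FLLANG value is an assumption;
    the real lookup is internal to the E-mail Agent."""
    lang = fllang.upper() if fllang.upper() in ("EN", "FR", "DE") else "EN"
    return os.path.join(flbin, "MSG", lang, "emreply.txt")

print(reply_instructions_path(r"C:\FLBIN", "fr"))
```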
From: fluser@sqa.sfd
To: jack@sqa.sfd
Sent: Tuesday, August 24, 2004 11:06 AM
Subject: Tank1Level_Alarm, AID=99201, SEQ=1923467
Digital is ON!
Reply instructions: This e-mail informs you of a FactoryLink alarm status. This e-mail is
sent for information purpose only. DO NOT REPLY TO THIS E-MAIL!
The E-mail Agent uses the From and Subject fields to match outgoing e-mail messages with the
response messages. The Alarm ID and Sequence ID are used to determine which alarm gets
acknowledged in the alarm logger task. Because this information appears in the Subject field, it
is recommended that the contact not modify the Subject field when replying. Figure 2-9 shows
examples of alarm acknowledged e-mail messages.
If a contact does not acknowledge an alarm within a specified time delay, the e-mail is
escalated to another contact. More contacts are notified as time progresses. Escalation only
applies to alarms requiring an acknowledgment. If multiple contacts have the same delay time,
the e-mail is sent to these contacts at the same time.
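The escalation behavior above amounts to grouping contacts into notification waves by their delay time. A minimal sketch, with hypothetical contact names and a `delay` field standing in for the "Delay Before Notification (mins)" setting:

```python
from itertools import groupby

def escalation_waves(contacts):
    """Group contacts into notification waves by escalation delay
    (minutes); contacts with equal delays are e-mailed at the same
    time.  Sketch only -- the real scheduling is internal to the
    E-mail Agent."""
    ordered = sorted(contacts, key=lambda c: c["delay"])
    return [(delay, [c["name"] for c in wave])
            for delay, wave in groupby(ordered, key=lambda c: c["delay"])]

waves = escalation_waves([
    {"name": "jack", "delay": 0},
    {"name": "jane", "delay": 15},
    {"name": "ops", "delay": 15},
    {"name": "supervisor", "delay": 60},
])
print(waves)  # [(0, ['jack']), (15, ['jane', 'ops']), (60, ['supervisor'])]
```

The first wave goes out immediately (delay 0); more contacts are notified as time progresses, matching the escalation rule in the text.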
When e-mail clients are set to automatically reply to messages, the Subject field usually gets
altered to include an automatic reply message. This type of response message is ignored
because the response message fails to match the outgoing e-mail message Subject requirement.
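The Subject-matching rule can be illustrated as follows. The exact matching logic is internal to the E-mail Agent, so this sketch makes one assumption: a plain "Re:"/"Fw:" prefix is tolerated, while any other alteration (such as an auto-reply banner) fails the match, as the paragraph above describes:

```python
import re

def matches_outgoing(reply_subject, outgoing_subject):
    """Accept a reply only if its Subject equals the outgoing Subject,
    optionally prefixed with 'Re:', 'Fw:', or 'Fwd:'.  Auto-reply
    clients that rewrite the Subject therefore fail the match.
    (Sketch only; the agent's real rules are not documented here.)"""
    stripped = re.sub(r"^\s*(re|fw|fwd):\s*", "", reply_subject, flags=re.I)
    return stripped == outgoing_subject

out = "Tank1Level_Alarm, AID=99201, SEQ=1923467"
print(matches_outgoing("Re: " + out, out))               # True
print(matches_outgoing("Automatic reply: " + out, out))  # False
```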
Figure 2-9 Alarm Acknowledged E-mail Messages
Contact notified when alarm is acknowledged
From: fluser@sqa.sfd
To: jane@sqa.sfd
Sent: Tuesday, August 24, 2004 11:08 AM
Subject: Tank1Level_Alarm has been acknowledged, AID=99201, SEQ=1923467
Digital is ON!
This e-mail informs you of a FactoryLink alarm status. This e-mail is sent for
information purpose only. DO NOT REPLY TO THIS E-MAIL!
Contact notified when alarm has changed but does not require acknowledgment
From: fluser@sqa.sfd
To: jane@sqa.sfd
Sent: Tuesday, August 24, 2004 11:16 AM
Subject: Tank1Level_Alarm, AID=99201, SEQ=1923467
Returned to normal!
This e-mail informs you of a FactoryLink alarm status. This e-mail is sent for
information purpose only. DO NOT REPLY TO THIS E-MAIL!
Contact notified when event has changed but does not require acknowledgment
From: fluser@sqa.sfd
To: jane@sqa.sfd
Sent: Tuesday, August 24, 2004 11:45 AM
Subject: Egress Door Open, AID=1000, SEQ=1923500
Door is open!
This e-mail informs you of a FactoryLink alarm status. This e-mail is sent for
information purpose only. DO NOT REPLY TO THIS E-MAIL!
Accessing
Field Descriptions
Sender's E-mail Address
    Defines the e-mail address of the sender. The sender is usually an account set up to send
    e-mail from FactoryLink. Use of a user's personal account is not recommended because the
    sender's user name and password are visible in Configuration Explorer.
    Valid entry: 1 to 128 characters

SMTP Server Address (Outgoing Mail)
    Defines the address name (such as "mymailserver") or IP address (in the "xxx.xxx.xxx.xxx"
    format) of the SMTP server for outgoing e-mail.
    Valid entry: 1 to 80 characters

SMTP Port
    The port number that supports the SMTP server. Obtain this information from your IT
    department or e-mail provider if a problem occurs using the default value.
    Valid entry: 25 (default)

SMTP Logon Requires Secure Password Authentication?
    Indicates whether your SMTP server (outgoing mail) requires authentication to log in, which
    means the user name and password are encoded before being passed to the mail server.
    Most secure servers use authentication. Contact the entity that hosts your e-mail server to
    determine whether secure password authentication is required.
    Note: If YES is selected and your system does not require authentication, your login is
    rejected.
    Valid entry: NO (default), YES, N, Y

SMTP User Name
    Defines the user name account required by the SMTP server to log in.
    Valid entry: 1 to 255 characters

SMTP Password
    Defines the password required by the SMTP server to log in.
    Valid entry: 1 to 255 characters (case-sensitive)

POP3 Server Address (Incoming Mail)
    Defines the address name (such as "mymailserver") or IP address (in the "xxx.xxx.xxx.xxx"
    format) of the POP3 server for incoming e-mail.
    Valid entry: 1 to 80 characters

POP3 Port
    The port number that supports the POP3 server. Obtain this information from your IT
    department or e-mail provider if a problem occurs using the default value.
    Valid entry: 110 (default)

POP3 Logon Requires Secure Password Authentication?
    Indicates whether your POP3 server (incoming mail) requires authentication to log in, which
    means the user name and password are encoded before being passed to the mail server.
    Most secure servers use authentication. Contact the entity that hosts your e-mail server to
    determine whether secure password authentication is required.
    Note: If YES is selected and your system does not require authentication, your login is
    rejected.

POP3 User Name
    Defines the user name account to log into the POP3 server. If not specified, the SMTP User
    Name field is used. In most mail servers, the SMTP and POP3 login (user name and
    password) parameters are the same.
    Valid entry: 1 to 255 characters

POP3 Password
    Defines the password to log into the POP3 server. If not specified, the SMTP Password
    field is used. In most mail servers, the SMTP and POP3 login (user name and password)
    parameters are the same.
    Valid entry: 1 to 255 characters (case-sensitive)

Delete Mail From Server After Processing?
    Indicates whether to delete the e-mail from the POP3 server (inbox) after an acknowledged
    alarm is successfully processed. A processed e-mail is one that has been validated as an
    acknowledgment to an active alarm. Deleting processed e-mail messages frees storage
    space and reduces the possibility that an old e-mail is mistaken for a response to a current
    alarm.
    Valid entry: NO, N, YES (default), Y
If a notification group is used on an alarm group level, e-mail is generated for all alarms in the
group. If a notification group is defined on an individual alarm tag level, e-mail is generated
only for that alarm or event tag.
Accessing
Field Descriptions

Notification Group (required)
    Name assigned to a group of alarm tags or an individual alarm tag that determines which
    contact groups receive the e-mail message. This group name must match the notification
    group name used in the Distributed Alarms Definitions tables.
    Valid entry: 1 to 80 alphanumeric characters (case-sensitive)
Accessing
Alarms > E-mail Notification Agent > Notification Groups > “your notification group name” >
Contact Groups
Field Descriptions
Contact Group (required)
    Defines the contact group name.
    Valid entry: 1 to 80 alphanumeric characters

Schedule Start (24hr Format)
    Defines the start time (in 24-hour format) at which the contacts in a group can receive
    e-mail notifications. This time can coincide with the work hours of the contacts that belong
    to the contact group.
    Note: It is possible to use a start time that is greater than the end time; for example, the
    group's availability to receive e-mail spans two days, such as an 8-hour shift that starts at
    2200 (10:00 p.m.) and ends at 0600 (6:00 a.m.).
    Valid entry: 0000 (midnight) to 2359 (11:59 p.m.); default = 0000

Schedule End (24hr Format)
    Defines the end time (in 24-hour format) at which the contacts in a group stop receiving
    e-mail notifications. This time can coincide with the work hours of the contacts that belong
    to the contact group.
    Note: It is possible to use an end time that is less than the start time; for example, the
    group's availability to receive e-mail spans two days, such as an 8-hour shift that starts at
    2200 (10:00 p.m.) and ends at 0600 (6:00 a.m.).
    Valid entry: 0000 (midnight) to 2359 (11:59 p.m.); default = 2359

SUN, MON, TUE, WED, THU, FRI, SAT
    Indicates the days of the week on which the contacts in a group can receive e-mail. The
    contacts can receive e-mail only on those days marked with YES or Y.
    Valid entry: NO, YES, N, Y; Sunday and Saturday default = NO, all other days default = YES
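The overnight-shift note in the Schedule Start/End fields can be captured in a small helper. Times are HHMM integers as in the table; the function itself is only a sketch of the stated rule:

```python
def in_schedule(now, start, end):
    """Return True if time 'now' (HHMM, 24-hour) falls inside the
    contact group's schedule window.  When start > end the window
    wraps past midnight, e.g. 2200-0600 covers 10 p.m. to 6 a.m.,
    as described in the Schedule Start/End notes."""
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

print(in_schedule(2330, 2200, 600))  # True: inside the night shift
print(in_schedule(1200, 2200, 600))  # False: outside the window
```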
Accessing
Alarms > E-mail Notification Agent > Notification Groups > “your notification group name” > Contact
Groups > “your contact group name” > Contact Definition Information
Field Descriptions
Display Name (required)
    Defines the display name associated with an e-mail address. A display name is an alias (an
    easily identifiable name) for an e-mail address.
    Valid entry: 1 to 80 alphanumeric characters

E-mail Address (required)
    Defines the e-mail address for the contact.
    Valid entry: 1 to 128 characters

Delay Before Notification (mins)
    Defines the delay time (also known as escalation time) in minutes to wait before an e-mail
    is sent to a contact. A time of 0 indicates no delay; the e-mail is sent immediately.
    Valid entry: 0 (default) to 9999

Log Escalation
    Indicates whether the alarm logger task logs the event if an escalation occurs.
    Valid entry: NO (default), YES, N, Y

Role
    Defines the role of a contact when processing an e-mail response. If the contact responds to
    an e-mail that indicates an active alarm, its role is checked first. If the role is set to ACK,
    the alarm logger is notified that the alarm was acknowledged.
    A contact may have dual roles, where the contact is informed of all alarm notifications and
    also has the capability to acknowledge an alarm. In this case, the contact's name must
    appear twice: once with the ACK role and again with the INFORM role.
    Valid entry:
    ACK: the contact can acknowledge an alarm by responding to the e-mail.
    INFORM: the contact cannot acknowledge an alarm by e-mail response, and no response is
    anticipated.

Include Alarm Message?
    Indicates whether to include the alarm message in the e-mail message body.
    Note: If the contact's e-mail provider has a message size limitation, the alarm message may
    exceed the limit. In this case, NO is recommended.
    Valid entry: NO (default), YES, N, Y

Include Reply Instructions?
    Indicates whether to include the special instructions about how to reply to an alarm in the
    e-mail message body.
    Note: If the contact's e-mail provider has a message size limitation, the reply instructions
    may exceed the limit. In this case, NO is recommended.
    Valid entry: NO (default), YES, N, Y
Use of this utility requires that the notification groups are created and associated with alarms in
Configuration Explorer. Any configurations made with this utility are visible immediately in
the Configuration Explorer tables. For information about this utility, see the Utilities Guide.
The E-mail Agent shares the use of these parameters for debugging. The “n” signifies a level:
0 – Off
1 – Error
2 – Information
3 – Configuration
4 – EmailAgent (Mail Server network communication)
5 – Heap validation
9 – EmailAgent (Alarm Logger pipe communication)
{FLAPP}/{FLNAME}/{FLDOMAIN}/{FLUSER}/log/EmailAgent.log
DebugView can also capture output to a file. DebugView is a freeware product from
SysInternals (www.sysinternals.com).
PRINTING ALARMS
Printing alarms to a printer and directing output to files can be accomplished using several
different methods. For an explanation of these methods, see Table 2-5. The formats used for
printing can be modified.
Open the file {FLINK}\msg\{language}\al_fmt.txt with a text editor. Edit this text as indicated
in the file to configure the print formats and tokens. The guidelines for configuring each
option are included in each section of the file.
Table 2-5 Printing and Directing Output to Files Using the al_fmt.txt File

1. Print to a file using the print spooler
   In the Alarm Group Control table, set the Alarm Stat Print Dev field equal to a printer line
   number in the Device field of the Printer Spooler table that contains the address of a file;
   for example, C:\msg\filename.txt

2. Print to a printer
   In the Alarm Group Control table, set the Alarm Stat Print Dev field equal to a printer line
   number in the Device field of the Printer Spooler table that contains the device port; for
   example, COM1: or LPT2:

5. Print all active alarms to a file
   The Print Active Alarms Tag field is set to ON. The Active List Print device tag field is set
   to a printer line number in the Device field of the Printer Spooler table that must contain
   the address of a file; for example, C:\msg\filename.txt

6. Print all active alarms to a printer
   The Print Active Alarms Tag field is set to ON. The Active List Print device tag field is set
   to a printer line number in the Device field of the Printer Spooler table that must contain
   the device port; for example, COM1: or LPT2:
Client Builder provides an integrated design and run-time environment. Accessed in Client
Builder, the Alarm Viewer configuration can be modified in design mode and the changes
observed immediately in the run-time mode, so the designer can make adjustments as needed
to complete the design. Certain features or options can be locked to prevent operator changes
at run time.
In addition to the Alarm Viewer, an Alarm Banner Viewer is configured from the same
ActiveX control. The Alarm Banner Viewer, which displays up to three alarms, provides a
subset of the Alarm Viewer features. Because of its smaller size, it is easily positioned on
various Client Builder mimics. Depending on the design, this viewer shows the operator the
most critical or newest alarms. See the Client Builder Help for instructions to configure the
alarm viewers.
RUN-TIME ALARMING
As alarms are generated, the information is displayed on the run-time Alarm Viewer or Alarm
Banner Viewer. These alarms remain on the display until the alarm criteria no longer meet the
defined alarm conditions. If an alarm is defined as one that must be acknowledged, the alarm
remains listed after the alarm condition is removed until the operator manually acknowledges
it.
The size of the viewers is determined at design time, but fields can be resized at run time. The
horizontal scroll bar (when enabled in design mode) allows operators to view the columns
(during run time) when they are no longer visible due to resizing. Other run-time features are
sort, filter, acknowledgment of alarms, and printing. Figure 2-10 shows the Alarm Viewer with
all of the basic features selected, and Figure 2-11 shows the Alarm Banner Viewer.
Figure 2-10 Parts of the Alarm Viewer at Run Time
[Screen capture labeling the viewer's parts: toolbar, group fields, header bar, message list,
scroll bars, group browser, logbook entry, and status bar.]
For detailed instructions on using the alarm viewers at run time, see the Client Builder Help.
When all selections are complete, click Apply and then OK.
Accessing
Alarms > Distributed Alarm Viewer Setup > Alarm View Control
Field Descriptions
Field Descriptions
Accessing
Alarms > Distributed Alarm Viewer Setup > Alarm View Control> “your view name” > Alarm View
Output Information
Field Descriptions
TROUBLESHOOTING
If the Alarm task is not working, check the following steps. If you used one of the
starter applications as a basis for your application, these steps are preconfigured for you.
If you used one of the starter applications as a basis for your application, this information is
already completed for you. For most applications, you should not change any of the default
information.
Accessing
System > System Configuration > System Configuration Information > Alarm Server
Field Descriptions
Accessing
System > System Configuration > System Configuration Information > Distributed Alarm Logger
(open in form view)
Field Descriptions
If you used one of the example applications as a basis for your application, this information is
already completed for you. For most applications, you do not need to change the default
information.
Accessing
Alarms > Distributed Alarm Server > Distributed Alarm Server (open in form view)
Field Descriptions
The time interval for the poll trigger tag can be modified using the Interval Timer table.
PROGRAM ARGUMENTS
Argument Description
–A Disables the “Return-to-Normal” message for digital
alarms.
–D<#> Sets the debug log level for the Run-Time Manager
output window. (# = 1 to 9)
–F Freezes the initial text display of alarms configured
with %s (C-style) variables.
–G Ignores remote log settings.
–H<#> Sets the historian time-out parameter. (# = 5 to 30 seconds)
–I Leaves the Node ID embedded in the sequence for logging.
–L Enables logging of debug information to a log file.
–M<#> Sets the maximum number of records in the alarm log
text file. (# = 1 to 1000)
–O Sets “log once” mode.
–Q<#> Sets the warning limit for the historian’s maximum
number of outstanding responses.
-S or –s Sleeps before re-entering the DTP wait. Use this when
many frequently changing alarm tags are configured but
result in few actual alarms.
-V# Sets the verbose level. (# = 1 to 9)
–W Warm start; uses and maintains a persistence file of alarms.
{FLAPP}/{FLNAME}/{FLDOMAIN}/log
•
•
•
•
Batch Recipe
The Batch Recipe task transfers sets of predefined values, sometimes called recipes, between
binary disk files and selected tags in the real-time database. In the real-time database, a batch
recipe is a collection of tags grouped together for some purpose. These tags can contain
internally-generated or operator-entered values.
Depending upon the type of recipes that you need, you might find it preferable to use one of
the relational databases to create and store your recipes.
OPERATING PRINCIPLES
You can perform the following functions with Batch Recipe:
• Define up to 8,000 different recipe templates, each associated with a virtually unlimited
number of files
• Store batch recipes in disk files so the total number of different recipes stored on a system is
limited only by available disk space
• Store each batch recipe file under a standard file name
• Specify up to 8,000 tags for one batch recipe template
• Use with any of these data types: digital, analog, longana, float, and message
You can configure Batch Recipe for use in many diverse applications. For example, a program
can use a graphic display for the entry of application values and write these values to an
external device using an external device interface task. Batch Recipe can save these tag values
in a recipe so the program can then read the values from the batch recipe file.
You can use batch recipes in conjunction with any FactoryLink task because each FactoryLink
task communicates with other tasks through the real-time database. Batch Recipe executes as a
You can configure Batch Recipe to be triggered by events, timers, or operator commands, such
as:
• An external device read operation
• A Math & Logic calculation
• An activity from another station on a network
• Input from the operator using a keyboard or pointing device
1. If the recipes are to be edited and/or viewed, create and animate a mimic with fields to
enter or display tag values in the recipe.
3. Complete the protocol module Read/Write table for an external device if you want to
connect the tags in the recipe with addresses in the device.
Monitor the Run-Time Manager screen to determine the status of Batch Recipe at run time.
Note: When performing a platform-dependent FLSAVE, FactoryLink saves recipe
files; however, when performing a platform-independent or multiplatform FLSAVE,
FactoryLink does not save recipe files.
Accessing
Recipe > Recipe > Recipe Control
Field Descriptions
Accessing
Recipe > Recipe > Recipe Control > “your recipe name” > Recipe Information
Field Description
3 Draw and animate a mimic for the operator to use to create and edit recipes at run time.
1 In your server application, open Recipe > Recipe > Recipe Control.
2 Enter the tags as shown in the tables above and save the information when the Recipe Control
table is complete.
3 Open Recipe > Recipe > Recipe Control > CC_RECIPE > Recipe Information and enter the names
of the tags to be used in the recipe template as shown below. Then, save the information.
The entries in this table specify the tags (cc_temp, cc_cook_time, cc_flour, cc_water, and
cc_sugar) whose values you enter on the RECIPE display at run time.
[RECIPE mimic: fields for Recipe, Temperature, Cook Time, Flour, Water, and Sugar, with
Save Recipe, Open Recipe, and Main Menu buttons.]
Link the tags you created in the recipe table to the animated fields in the mimic.
At run time, you create a recipe the first time you open this mimic by entering a Recipe name
and values for Temperature, Cook Time, Flour, Water, and Sugar. To save the recipe, the
operator clicks Save Recipe, and then Batch Recipe:
• Writes the values you just entered to the tags cc_temp, cc_cook_time, cc_flour, cc_water, and
cc_sugar you defined in the Recipe Information table when you configured the task.
• Creates a recipe file (with a .RCP extension) and writes the values of these tags in binary
form to this file.
When you wish to retrieve the recipe later for display on the screen, open the RECIPE mimic,
type the recipe name, and click Open Recipe. Batch Recipe collects the binary values from the
disk file, writes them to the tags, and the Graphics task displays them on the mimic.
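The save/open flow just described can be sketched schematically. The actual .RCP binary layout is not documented here, so the fixed little-endian doubles below are purely illustrative; the tag names and file naming come from the example:

```python
import struct

# Schematic stand-in for a .RCP file: five values written in binary in
# a fixed tag order.  The real .RCP layout is internal to the Batch
# Recipe task; this merely mirrors the save/open round trip.
TAGS = ["cc_temp", "cc_cook_time", "cc_flour", "cc_water", "cc_sugar"]

def save_recipe(path, values):
    with open(path, "wb") as f:
        f.write(struct.pack("<5d", *(values[t] for t in TAGS)))

def open_recipe(path):
    with open(path, "rb") as f:
        return dict(zip(TAGS, struct.unpack("<5d", f.read())))

save_recipe("COOKIES.RCP", {"cc_temp": 350, "cc_cook_time": 12,
                            "cc_flour": 2, "cc_water": 1, "cc_sugar": 0.5})
print(open_recipe("COOKIES.RCP")["cc_temp"])  # 350.0
```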
1 In the System Configuration table, be sure the recipe task is configured and has the R flag to
start. (See page 505 for information to configure a task in the System Configuration table.)
2 To create a recipe for Cookies, enter the values shown in the following figure and click Save
Recipe to write these values to the tags specified and store the recipe under the file name
Cookies.rcp.
Recipe Cookies
3 To create a recipe for making cereal, type Cereal in the Recipe Name field and enter the
following values for each variable.
Recipe Cereal
4 Click Save Recipe. Batch Recipe writes the values for the cereal recipe to the specified tag in
the real-time database and stores them on disk in the binary file named CEREAL.RCP.
5 To open the recipe for Cookies, type Cookies in the Recipe Name field and click Open Recipe to
recall the recipe for cookies. Batch Recipe reads the values for each of the variables from the
binary disk file, deposits them in the real-time database, and displays them on the screen.
PROGRAM ARGUMENTS
Argument Description
–L or –l Enables logging of debug information to a log file.
–V Same as –L.
ERROR MESSAGES
•
•
•
•
Client Builder
A FactoryLink application consists of a client project that is configured in the Client Builder
and a server application that is configured in the Configuration Explorer. In the Client Builder
environment, you create and configure the graphical user interfaces for your FactoryLink
application to graphically represent your industrial processes. Client Builder also provides the
run-time environment for interacting with those interfaces.
Although most of the procedures in this guide are done using the Configuration Explorer, some
procedures reference the Client Builder. You can access the Client Builder by double-clicking
the Client Builder icon on your desktop. For detailed information, procedures, and program
arguments to use the Client Builder, see the Client Builder Help.
•
•
•
•
Database Browser
The Database Browser task works in conjunction with the historian tasks to allow a server
application to access data in a relational database through a browse window. This method of
browsing is more flexible and powerful than using the Database Browser Control, but requires
more configuration effort.
OPERATING PRINCIPLES
Database Browser is a historian-client task that communicates with a historian through
mailbox tags to send and receive historical information stored in an external database.
Database Browser accesses data in a relational database by selecting the data specified in a
configuration table and placing it in a temporary table called a result table. The task views and
modifies the data in the result table through a browse window. A browse window is a sliding
window that maps data between the relational database and the real-time database. The browse
window views selected portions of the result table.
For example, if a mimic is used to display the browse window, it can display as many rows of
data from the result table as there are tags in the two-dimensional tag array. If there are more
rows in the result table than in the browse window, the operator can scroll through the result
table and see each row of it in the browse window.
Database Browser can read from and write to an entire array of tags in one operation.
An internal buffer stores the rows of the result table in RAM. An external buffer stores the
overflow of rows from the internal buffer on disk. This allows the operator to scroll back up
through the result table. Figure 5-1 shows the buffers.
In this example, as the operator scrolls through the result table, the rows of the result table flow
into the internal buffer to be stored in memory. Because, in this case, the result table consists of
25 rows and the internal buffer can store only 20 rows, when the internal buffer is full, the
excess rows in the internal buffer flow into the external buffer to be stored on disk.
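The two-tier buffering in this example can be mimicked with a small class. The 20-row internal capacity and 25-row result table come from the text; the class itself is a sketch, with a plain list standing in for the disk-backed external buffer:

```python
class ResultBuffer:
    """Sketch of the Database Browser's two-tier row storage: recently
    fetched rows stay in a fixed-capacity internal (RAM) buffer; older
    rows overflow to an external buffer (simulating disk)."""
    def __init__(self, internal_capacity=20):
        self.capacity = internal_capacity
        self.internal = []   # rows held in RAM
        self.external = []   # overflow rows ("on disk")

    def add(self, row):
        self.internal.append(row)
        if len(self.internal) > self.capacity:
            self.external.append(self.internal.pop(0))  # oldest row spills

buf = ResultBuffer(internal_capacity=20)
for row in range(1, 26):         # a 25-row result table, as in the example
    buf.add(row)
print(len(buf.internal), len(buf.external))  # 20 5
```

After 25 rows, 20 remain in memory and the 5 oldest have spilled to the external buffer, which is what lets the operator scroll back up through the result table.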
USE OF LOGICAL EXPRESSIONS
You use logical expressions to specify the data in a relational database to view or modify. For
the purposes of the Database Browser task, a logical expression is a command containing a
standard Structured Query Language (SQL) WHERE clause.
To select data from a database table, a logical expression works in conjunction with the table’s
column name and logical operators to form an SQL WHERE clause. The WHERE clause
specifies which rows in a database table to place in the result table.
Note: You must know how to write a standard SQL statement to configure the
Database Browser task. For additional information, see any SQL guide or the
user manual for the relational database.
To make a logical expression flexible at run time, use the tag whose value is a WHERE clause.
If viewing all data from a column in a relational database table, you do not need to specify a
logical expression.
For example, the following WHERE clause selects which colors cars 15 through 20 on
conveyor 1 were painted between 8:00 a.m. and 5:00 p.m. on January 26, 1991:
TRANDATE > ‘19910126075959’ AND TRANDATE < ‘19910126170001’ AND CONVEYOR = 1 AND
CARNUM > 14 AND CARNUM < 21
From this WHERE clause, the relational database places the following values in a result table.
19910126110000 1 15 black
19910126113000 1 16 black
19910126120000 1 17 white
19910126123000 1 18 white
19910126130000 1 19 blue
19910126133000 1 20 blue
If the view size of the browse window is 2, the browse window writes the values of the tags in
two rows to the real-time database, where other FactoryLink tasks can read and write them,
and an operator can view the data on a mimic.
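The painted-cars query above can be exercised against an in-memory SQLite table standing in for whatever RDBMS the historian targets. The table name PAINT and the COLOR column are assumptions for illustration; the WHERE clause is taken from the text unchanged:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PAINT (TRANDATE TEXT, CONVEYOR INT, CARNUM INT, COLOR TEXT)")
rows = [("19910126110000", 1, 15, "black"),
        ("19910126113000", 1, 16, "black"),
        ("19910126120000", 1, 17, "white"),
        ("19910126123000", 1, 18, "white"),
        ("19910126130000", 1, 19, "blue"),
        ("19910126133000", 1, 20, "blue")]
con.executemany("INSERT INTO PAINT VALUES (?, ?, ?, ?)", rows)

# The WHERE clause from the example; fixed-width timestamp strings
# compare correctly with > and < as text.
result = con.execute(
    "SELECT COLOR FROM PAINT WHERE "
    "TRANDATE > '19910126075959' AND TRANDATE < '19910126170001' "
    "AND CONVEYOR = 1 AND CARNUM > 14 AND CARNUM < 21").fetchall()
print([c for (c,) in result])  # ['black', 'black', 'white', 'white', 'blue', 'blue']
```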
Accessing
Data Logging > Database Browser > Database Browser Control
Note: If you are using Client Builder, it is recommended that you use the Shared
domain. For ECS Graphics, use the User domain.
Field Descriptions
Browse Name (required)
    Specifies the developer-assigned name of the browse window being defined or modified.
    Valid entry: 1 to 15 alphanumeric characters

Select Trigger
    Tag that triggers a select operation. A select operation selects specific data from a relational
    database table, based upon information specified in the Database Browser Information
    table, and places it in a result table for you to view or manipulate.
    Valid entry: tag name
    Valid data type: digital, analog, longana, float, message
Accessing
Data Logging > Database Browser > Database Browser Control > “your browser table” >
Database Browser Information
Field Descriptions
=                is equal to
<                is less than
>                is greater than
<>               is not equal to
<=               is less than or equal to
>=               is greater than or equal to
is not null      is not a null value (for the dBASE IV historian, TRUE when the database
                 column is not all spaces)
between X and Y  defines a range of values where X is the lower limit and Y is the upper
                 limit; equivalent to COLNAME >= X AND COLNAME <= Y
If you are not using the dBASE IV historian, refer to the RDBMS SQL Language user’s
manual for more information.
The WHERE clause is generated by appending the Local Operator, Column Name, and
Logical Expression fields in the order displayed in the Database Browser Information table.
Punctuation is supplied by the Database Browser to ensure correct SQL syntax. Any
embedded variable found in the Logical Expression field is replaced by a ?, which SQL
defines as a substitution marker for a value to be supplied at execute time. The value
supplied is the tag’s value defined by the embedded variable.
The string generated by this process is a WHERE condition. If the first word in this string is
not an SQL reserved word such as ORDER BY, the reserved word WHERE is prepended to
the string. Ensure that any SQL clauses such as ORDER BY and GROUP BY are ordered as
defined by the SQL language for the targeted database server.
The ORDER BY clause is supported in the dBASE IV historian but only to the extent that
the columns listed in the ORDER BY clause must match an index that was created for the
database table. The dBASE IV historian does not build any temporary tables to reorder the
rows, so be sure the ORDER BY matches an index for the dBASE IV database table. If an
ORDER BY clause does not match an index, the dBASE IV historian returns an error.
If you define a Select Trigger in the Database Browser Control table, the WHERE clause is
used for the select statement. If a Select Trigger is not defined, the WHERE clause is used for
the update operation, the delete operation, or both.
A Logical Expression can contain one of the following:
(1) Character string of 1 to 79 characters containing an SQL expression or an SQL clause.
For example, an SQL expression:
OUTLETVAL = 30 and TANKID = ‘BLUE001’
For example, an SQL clause:
ORDER BY TANKID
(2) Character string of 1 to 79 characters representing an SQL expression that contains
embedded variables. If the tag is a message tag, the character data in the message tag should
not be enclosed in single quotes.
For example:
=:tagTANKID
WHERE tagTANKID is a message tag of value: BLUE001
(3) An embedded message variable only. This variable must be a message tag. The message
tag contains an SQL clause or SQL expression. The SQL expression cannot contain an
embedded variable, and any string constants in the SQL expression must be enclosed in single
quotes.
For example:
:tagSQLExpression
WHERE tagSQLExpression is a message tag
OUTLETVAL = 30 and
TANKID = ‘BLUE001’
Because the Select Trigger tag SELTAG1 (defined in the Control table) is digital in this
example, the historian returns the two following values to the Database Browser task when
the change-status flag for SELTAG1 is set:
• Values where the column named TANKID equals BLUE001
• The column named OUTLET is greater than or equal to the value of the tag
OUTLETVAL.
The Database Browser task writes these values to the tags contained in the tag arrays
TANKID[3] and OUTLET[3]. These values are then displayed in a browse window.
PROGRAM ARGUMENTS
Argument Description
–L or –l Enables logging of debug information to a log file. By default,
the Database Browser does not log errors.
–N or -n Notifies on the completion of a SELECT trigger that the query
resulted in an EOF (End of Fetch) condition if the rows returned
from the query do not equal the rows defined in the View Size.
By default, the Database Browser task does not report an End of
Fetch condition for a SELECT until a move operation advances
the current row past the last row of the query.
–S# or –s# Sets the maximum number (# = 4 to 160) of open SQL statements
that the Database Browser will have active at one time. The
default is 160. For very large applications, this program switch
may have to be adjusted if the database server is unable to
allocate a resource to open a new SQL cursor.
–V# or –v# Sets the verbose level (# = 0 to 1). At level 1, the SQL statements
generated by the Database Browser are written to the log file. The
Database Browser must have logging enabled for this program
switch to work. The default is to not write the SQL statements to
the log file.
–W# or –w# Historian time-out parameter. (# = 5 to 30 seconds).
Sets the maximum timeout in seconds for the Browser to wait
for a response from the historian. The default is 30 seconds.
For values less than 30 seconds, this switch will only work
correctly when the historian initially achieves a successful
connection with the database server. If the historian fails to
successfully connect with the database server, Database
Browser will time out in 30 seconds regardless of this switch
setting.
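The effective time-out rule described for the –W switch can be sketched as follows (illustrative Python under the stated assumptions; the function name is invented and this is not FactoryLink code):

```python
# Sketch of the -W rule: the requested value (clamped to the documented
# 5-30 s range) only applies once the historian has connected; otherwise
# the Browser falls back to the 30 s default.
def effective_timeout(requested, historian_connected):
    requested = max(5, min(requested, 30))   # valid range is 5 to 30 s
    return requested if historian_connected else 30

print(effective_timeout(10, historian_connected=False))   # 30
```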
Database Logger
The Database Logger task (Logger) writes blocks of data to a historical database to preserve
data for historical purposes. Each time a new value for a tag is collected or computed, the
current value of the tag in the real-time database is overwritten by the new data. To preserve
this data, the Database Logger task reads the data from the real-time database and sends it to a
disk-based relational database through a historian.
The historian used for this transfer depends on the relational database receiving the data. The
database can be either the SQL Server database or a third-party database (such as Oracle).
With the Logger, you can create a table and specify which tags to capture in that table. When
the value of any tag changes, the values of all tags in the table are logged. Database Logging
provides the ability to group tags in a database table, and event-based data can be logged using
a sequence key rather than a time key.
Data is logged using logging operations. Each logging operation defines which data to log
when the operation executes.
1. The real-time database receives and stores data in tags from various sources, such as a
remote device, user input, or computation results from a FactoryLink task. When data is
collected and stored in this database, other tasks can access and manipulate it.
2. The Logger reads the values of tags in the real-time database and maps the tags to columns
in a disk-based relational database table.
3. The Logger sends the data from the real-time database to a historian mailbox in the form of
an SQL INSERT statement. The request remains in the historian mailbox until the historian
processes the request.
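The request placed in the historian mailbox in step 3 is an ordinary SQL INSERT. The following fragment is only a sketch of that tag-to-column mapping (illustrative Python; the table and column names are invented for illustration, and the real Logger builds its statements internally):

```python
# Illustrative sketch: turn one row of real-time tag values into the kind
# of SQL INSERT statement the Logger hands to the historian mailbox.
def build_insert(table, columns):
    """columns maps relational column names to current tag values."""
    names = ", ".join(columns)
    values = ", ".join(
        f"'{v}'" if isinstance(v, str) else str(v) for v in columns.values()
    )
    return f"INSERT INTO {table} ({names}) VALUES ({values})"

print(build_insert("TANKDATA", {"TANKID": "BLUE001", "LEVEL": 74.2}))
# INSERT INTO TANKDATA (TANKID, LEVEL) VALUES ('BLUE001', 74.2)
```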
Accessing
Data Logging > Database Logging > Database Logging Control
Field Descriptions
The sample application for nongrouped/nonsequenced data is a gasoline station that logs data
(tank level, pressure, and temperature) for an unleaded fuel storage tank.
This example defines one logging operation named NONGROUP that logs nongrouped tank
data for the unleaded fuel storage tank. The NONGROUP operation executes when the
hour_trig tag is set.
The connection for the Historian Mailbox must be defined in the historian table. The data is
logged to the USCO_LOG database, which is an alias for referencing the USCO database.
The sample application for nongrouped/sequenced data is a gasoline station that logs the total
gallons of unleaded and diesel gas pumped each hour of the day for 24 hours. The hourly total
is an accumulated value stored in a real-time database tag. Each time gas is pumped, the
number of gallons sold is added to the accumulated value. The total for each type of gas
(unleaded and diesel) is logged to different columns in the same table.
This example defines one logging operation named SEQUENCE that logs the total gallons of
unleaded and diesel gas sold each hour. The SEQUENCE operation executes when the
hour_trig tag is set.
The connection for the Historian Mailbox must be defined in the historian table. The data is
logged to the USCO_LOG database, which is an alias for referencing the USCO database.
The sample application for grouped/subgrouped data is a gasoline station that logs the total
gallons of unleaded and diesel gas sold each hour of the day for each day of the week for a
week. The hourly total is an accumulated value stored in a real-time database tag. Each time
gas is pumped, the number of gallons sold is added to the accumulated value.
The total for each type of gas (unleaded and diesel) is logged to the same column in the same
table but distinguished by a groupname_subgroupnum in the group column. Each day of the
week is represented by the subgroupnum that increments at the end of each day. The table size
is controlled by subgroup rollover that occurs after seven days.
This example defines two logging operations: one named UNLEAD_GRP that logs total gas
pumped each hour for unleaded gas and DIESEL_GRP that logs total gas pumped each hour for
diesel gas. Both operations execute when the hour_trig tag is set.
The connection for the Historian Mailbox must be defined in the historian table. The data is
logged to the USCO_LOG database, which is an alias for referencing the USCO database.
The following table shows the fields in the Database Logging Control table displayed after the
Group Delete Trigger field. Leave the fields blank if you are not using subgrouping.
Accessing
Data Logging > Database Logging > Database Logging Control > “your trend” > Database
Logging Information
Field Descriptions
The Column Name and Column Usage fields defined in the Schema Information table are
related to the fields in the Database Logging Information table.
This example shows four columns: order_col, time_col, ugal_sold, and dgal_sold.
1. Using the Run-Time Monitor (RTMON) or DBX/DBT, add the following items to a watch
list for the tags you are logging and triggers you are using for a single logging operation.
2. Using the Tag Input feature, enter sample data into each real-time database tag you are
logging and trigger the tag that executes the logging operation you are testing.
3. Check the relational database tables to see if the sample data gets logged.
4. Trigger subgroup rollover and add more sample data to check that subgroup rollover occurs
properly.
5. Trigger subgroup rollover using a number that exceeds the maximum number of subgroups
allowed to check that subgroup rollover returns to one at the proper time.
6. Trigger group delete to check that group data is deleted at the proper time.
In the Run-Time Manager window, does the Logger task indicate “Running” with no
errors? If not, check the Error Messages section and take the suggested action.
PROGRAM ARGUMENTS
Argument Description
-D or -d Enables debug information to be sent to the shared window.
-E or -e Causes Database Logging to set the completion trigger when the
historian task processes the logging operation. By default the
completion is set when Database Logging sends the request to the
historian mailbox. With this switch, the completion trigger for all log
operations means the historian task has processed the logging
transaction.
Setting the completion trigger does not guarantee the log transaction
is successful; it only means the log transaction has completed.
-L or -l Enables error logging to the log file. By default Database Logging
does not log errors.
-Q# or -q# Sets the maximum number of outstanding asynchronous logging
transactions (SQL statements) for the historian task to complete.
Once this limit is reached, Logger operates synchronously until the
number of uncompleted transactions is reduced. By default Logger
allows for up to 100 outstanding logging transactions before
operating in a synchronous mode.
(# = 100 to 2,000,000,000)
-S# or -s# Sets the maximum number of concurrently prepared SQL statements
active at one time. The default is 30.
(# = 1 to 30)
-V1 or -v1 Causes Logger to write the SQL statements it generates to the log
file. Logger must have logging enabled for this program switch to
work. The default is to not write the SQL statements to the log
file.
-W# or -w# Sets the maximum time-out in seconds for Database Logging to wait
for a response from the historian task. The default is 30 seconds.
(# = 5 to 30)
For values less than 30 seconds, this switch will only work correctly
when the historian initially achieved a successful connection with the
database server. If the historian has never successfully connected with
the database server, Logger will time out in 30 seconds regardless of
this switch setting. The -w switch always works for time-outs set at
more than 30 seconds, whether the historian initially achieved a
successful connection with the database server or not.
Note: Do not set arbitrarily high values because it could delay
the detection of an actual network or server malfunction.
Database Schemas
The FactoryLink relational databases are configured in a table format consisting of rows and
columns. The schema of the table defines the number, size, and content of the rows and
columns.
Schema definitions are created in the Database Schema Creation folder for the Database
Logging tables. This folder contains four tables:
• Schema Control table – Assigns unique names to table structures to log data.
• Schema Information table – Defines the columns and table structure attributes.
• Index Information table – Defines which columns the table structure uses as the index. Do
not use this table if you are not indexing the table.
• Security Event Logging Schema table – Defines the table columns included in the Security
Event Logging table.
Accessing
Data Logging > Database Schema Creation > Schema Control
Field Descriptions
• Due to a problem in the routines used to build this functionality, the integer value “0” for a
unique index integer-type column is not handled correctly. If you use “0” in your unique
index column, designate this column as type “char” instead of “integer” or “smallint.” It is
not necessary to change the type of the tag that logs to or reads from this column, because
FactoryLink performs data conversion automatically. Use a unique index column of type
integer only if you can guarantee that a value of “0” will never be logged to it. If a value of
“0” is logged, the index file will become corrupt at record rollover time and the database
will be useless.
Caution: This applies even if you do not use the Maximum Records feature. Having a “0”
in a unique index integer column in your table will make your index file useless if you
reindex using BH_SQL, DBCHK, or any other means in the future.
• If the records for your different groups are logged nonsequentially over time, and you want
to delete records based on the age of the groups, do not use the Maximum Records feature.
After record rollover, DB4_HIST overwrites the records starting from the oldest without
considering their group IDs. Use the Group Delete feature provided for this purpose
instead. The Group Delete and Maximum Records features are incompatible.
Accessing
Data Logging > Database Schema Creation > Schema Control > “my schema” > Schema
Information
Field Descriptions
With the dBASE historian, for a float data type, the maximum precision you can save is a
five-digit integer with a precision of five (11,5). A float data type with a value greater than
99,999 is not logged correctly, and is displayed as eleven asterisks. To circumvent this
constraint, dBASE users can specify number as the Column Type. This allows larger numbers;
for example, by using a precision of (13,3), you can log the number 123456789.123.
In the Schema Index Information table, specify the following information for each index key
you want associated with the table structure. You can specify up to 99 different index keys for
each schema, although a practical limit is between 6 and 9. Each index key is a separate line
item on this table.
Accessing
Data Logging > Database Schema Creation > Schema Control > “my schema” > Index Information
Field Descriptions
If a new FactoryLink application is created using the Application Setup Wizard or the Create
New Application (FLNEW) utility, the client projects have examples for viewing the Operator
Event Log found on the RUNMGRS mimic. The simple browser control example shows the
database table OPERLOG displayed in reverse TMSTAMP order (latest first).
Accessing
Data Logging > Database Schema Creation > Security Event Logging
Field Description
You can change the column names and length in the Security Event Logging Schema table, but
the column alias must remain the same. The column order can also be altered from the standard
found in the Examples Application and the FLNEW templates.
1 Open the Graphics > Tag Server > Tag Server Options table and set values for the following
fields:
2 Add the mailbox for your historian of choice to the mailbox list. For more information, see the
“Historian Mailbox Information Table” on page 280.
3 Set up the database alias in the historian. For more information, see the “Historian Information
Table” on page 274. (You can use an alias for a previously defined database connection.)
Note: This database table will grow quickly if you have many clients attached
to the server. It is recommended that you monitor the size of the table and then
archive and purge the table regularly.
4 If you do not plan to use operator event logging, change the Log Actions field to NO in the Tag
Server Options table to disable operator event logging.
Data Point Logger
The Data Point Logger task logs one data point at a time to a historical database to preserve
data for historical purposes through a Historian. The Historian used for this transfer depends
on the relational database receiving the data, such as SQL Server, Oracle, or Sybase.
The Data Point Logger simplifies the task of logging individual data points by providing
preconfigured tables. It allows you to add or remove tags from the list of tags being logged
during run time. If desired, you can define your own Data Point Logging tables.
Data Point Logging is best for situations when you want to:
• Log a tag only when its value changes
• Use preconfigured tables and eliminate the time spent setting up tables
• Be able to index on log time or tag name or both
• Sort all logs of a tag in order of occurrence
• Configure a tag to be a dynamic pen on a trend chart
• Dynamically change the list of tags being logged during run time
Because the table structures are preconfigured, the Data Point Logging task can only be used to
log shared, numeric value tags. The tags to be logged can be specified in the Configuration
Explorer using the Data Point Logging Information table or the Tag Editor.
Each preconfigured Data Point Logging table uses the Database Alias Name MYDPLOG, which
references the relational database where the Historian sends the data from Data Point Logging.
In addition, each default table refers to the Historian Mailbox tag entry.
The maximum number of records allowed in a database table is governed by the relational
database being used. For example, the maximum number allowed in a dBASE IV database
table governed by any of the four default Data Point Logging table schemas is 1,000,000. Each
default schema specifies a maximum tagname column width of 48.
Data Logged
Data is logged to a static schema. For each event logged, the database row reflects the
following entries:
LOGTIME Stores the time the tag was logged
TAGNAME Tag that was logged at LOGTIME
TAGVALUE Value of the tag logged at LOGTIME
Only the log time, tag name, and the tag value are recorded in each row. This means new tags
can be added to the logging without an impact (manual addition) to the database columns. If a
tag is logged more than once during a given second, any values requested to be logged after the
first occurrence within that second are ignored.
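The once-per-second rule can be sketched as follows (illustrative Python under the stated behavior; this is not the task's actual code):

```python
# Sketch of the once-per-second rule: for a given tag, only the first
# value that arrives within any one-second slot is logged.
def filter_logs(events):
    """events: (logtime_seconds, tagname, value) tuples in arrival order."""
    seen, logged = set(), []
    for logtime, tag, value in events:
        key = (int(logtime), tag)
        if key not in seen:
            seen.add(key)
            logged.append((int(logtime), tag, value))
    return logged

rows = filter_logs([(10.1, "temp", 5), (10.7, "temp", 6), (11.0, "temp", 7)])
# Only the 10.1 s and 11.0 s values survive; the 10.7 s update is ignored.
```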
LOGGING METHODS
With Data Point Logging, you can specify when a tag (data point) is to be logged based on one
or more of the following:
• A change in the tag (exception logging)
• A fixed-time interval
• A change in a trigger tag
At task startup, all exception and fixed-time interval tags are logged to create a default
beginning reference point. Triggered logging tags are logged at the event of the trigger only.
If a given tag changes frequently but not all changes are significant, you can configure
deadbanding on the tag so only significant changes are logged. This reduces the amount of data
logged and decreases system processing time. Deadbanding allows you to specify a band
around a tag to determine when the change is significant enough to record the changed value to
the system. This band can be an integer or a percentage of the value.
Fixed-time interval-based logging is tied to time maintained by an internal clock based on the
SECTIME global tag. This tag tracks time from the starting point of midnight, January 1, 1980,
in intervals of one second.
When configuring data for compressed logging, you use the Log On Change, Deadband Value,
and Maximum Log Rate fields in the Data Point Logger Information table. The Log On
Change field must be set to YES to use compressed logging. If this field is set to NO, the
values specified in the deadband and maximum log rate fields are ignored.
The following configuration table compares four ways (Options 1 through 4) of using
compressed logging.
Option 1 uses compressed logging without setting a deadband. This configuration would yield
more data than is necessary.
Option 2 uses compressed logging with a deadband. A point is logged any time a change
occurs outside the deadband value. The deadband is an absolute value that defines what the
precision (number of points in time) should be. Setting the deadband too large may lead to
missing variations in the data.
Option 3 uses compressed logging with a deadband and maximum log rate defined. This
configuration says to not log more than 1 point every 2 seconds, even if changes occur outside
the deadband value. The maximum log rate is the fastest rate at which to log data in the
database. If data is changing abruptly, you will still only see a point in the database every 2
seconds.
Option 4 uses compressed logging with a deadband and maximum and minimum log rates
defined. This configuration is similar to Option 3 with the exception that a point gets logged
every 10 seconds to the database regardless if changes occur. The minimum log rate (Log Rate
field) sets the time interval to log a point always to the database even if the data has not
changed. If a trend does not find a point within the duration time, it goes back in time in the
database to get a point (as a reference) for the line in the chart.
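The decision logic behind these four options can be sketched as follows (illustrative Python, assuming the semantics described above; the function and parameter names mirror the table fields but are not actual task code):

```python
# Sketch of the compressed-logging decision for one new sample.
# log_rate is the minimum-rate interval (force a point), max_log_rate is
# the fastest allowed logging interval (throttle), deadband is absolute.
def should_log(new_value, now, last_value, last_time,
               deadband=0.0, max_log_rate=None, log_rate=None):
    if log_rate is not None and now - last_time >= log_rate:
        return True                                  # minimum rate: force a point
    if max_log_rate is not None and now - last_time < max_log_rate:
        return False                                 # maximum rate: throttle
    return abs(new_value - last_value) > deadband    # exception + deadband

# Option 3 behavior: deadband 5, no more than 1 point every 2 seconds.
print(should_log(28, now=11, last_value=20, last_time=10,
                 deadband=5, max_log_rate=2))
# False: the change exceeds the deadband but arrives inside the 2 s throttle.
```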
This illustration shows the four options defined in the configuration table. The structure of the
pens appears the same, but the amount of data logged to the database is significantly different.
The markers represent the physical logs in the database.
To ensure sufficient data is logged to retain the shape of a time-based curve, the data
compression uses algorithms to log the data that is changing. If a point is not already logged
when a transition occurs, the previous point gets logged before the point outside the deadband
is logged.
If you define your own Data Point Logging table, you must specify a schema for the table in
the Data Point Schema Control table.
Accessing
Data Logging > Data Point Logging > Data Point Logger Control
Field Descriptions
Ensure the name of the table you want to log data to is displayed in the Table Name field at the
bottom of the table.
Accessing
Data Logging > Data Point Logging > Data Point Logger Control > “your log tag name” > Data
Point Logger Information
Accessing
Data Logging > Data Point Logging > Data Point Schema Control
Field Descriptions
When the Data Point Logger starts, it looks at the following files to determine the newer file to
use to build the log:
• Data Point Logger configuration table file, {FLAPP}\shared\ct\dplogger.ct
• Data Point Save file, {FLAPP}\log\dplogger.dyn
• Data Point Save file specified in the Command File Tag field of the Dynamic Logging
Control table.
The newer file becomes the list of all tags to be configured for logging. If the Command File
Tag is not configured or is blank, the default Data Point Save file is used.
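The newest-file selection at startup can be sketched as follows (illustrative Python; the paths are the documented ones, but the function itself is an assumption, not the task's code):

```python
import os

# Sketch of the startup rule: the newest existing candidate file becomes
# the active Data Point Logging tag list.
def pick_tag_list(candidates):
    existing = [p for p in candidates if os.path.exists(p)]
    return max(existing, key=os.path.getmtime) if existing else None
```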
Also at task startup, all exception and fixed-time interval tags are logged to create a default
beginning reference point. Triggered Data Point Logging tags are not logged at startup
because logging control is external to Data Point Logging through another FactoryLink task.
The Data Point Save file {FLAPP}\log\dplogger.dyn contains a list of all tags currently
configured for logging. You can create this file by letting the Data Point Logger generate it or
using the manual method. This file contains one or more LOG commands.
When the Data Point Logger generates a Save file, you can save a snapshot list of all tags being
logged. This list is written to the Save file whenever the associated Write Trigger is set.
You can create a Save file that gets loaded whenever its associated Read Trigger is set. The
load process causes the list of tags currently being logged to be overwritten by the list of tags
in the specified Save file. The Data Point Logger creates a list of all tags currently configured
for Data Point Logging each time a Save file is loaded.
Data Point Logging may miss an exception log, a fixed-time interval log, or a triggered log
during generation or loading of the Save file.
Data Point Logging allows you to enter a single logging request using the Command Tag
defined in the Dynamic Logging Control table. This type of dynamic logging request either
adds tags to or removes tags from the list of tags currently configured for logging. Optionally,
the logging request can have a tag associated with it that describes the logging request status.
This type of dynamic addition and removal of tags is temporary. Every time Data Point
Logging is restarted, the new tag list generated from the Save file or the configuration table file
supersedes the existing list.
Command syntax is not case-sensitive unless stated. “\” is used as a line continuation character.
LOG (tag_list) [time_clause] [trigger_clause] [log_table_clause]
where:
LOG adds tag_list to the current Data Point Logging list
(tag_list) = tag_name_1 [,tag_name_2,tag_name_n]
tag_name a case-sensitive, valid shared tag (digital, analog, longana, float)
[time_clause] = EVERY numconst {S[econds]|M[inutes]|H[ours]|D[ays]}
numconst a numeric integer constant between 1 and 86400
[trigger_clause] = ON tag_name
[log_table_clause] = TO table_name
Omitting the log_table_clause variable causes Data Point Logging to go to
the table specified in the first row of the Data Point Logging Control table.
table_name a case-sensitive table name already configured in the Data Point Logging
Control table
If time_clause or trigger_clause is not specified with the LOG command, Data Point Logging
bases the listed tags on exception.
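As an illustration of the clauses above, a request might look like the following (the exact command line is a sketch assembled from the clause definitions; the tag and table names reuse ones from the examples later in this section):

```
LOG (meter8reading,meter10reading) EVERY 1 Hours TO METERDATA
```

This logs both tags to the METERDATA table once per hour; omitting the EVERY and ON clauses would instead log them on exception.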
REMOVE (tag_list) [time_clause] [trigger_clause] [logging_method_clause] [remove_table_clause]
where:
REMOVE removes tag_list from the current Data Point Logging list
(tag_list) = tag_name_1 [,tag_name_2,tag_name_n]
tag_name a case-sensitive, valid shared tag (digital, analog, longana, float)
[time_clause] = EVERY numconst {S[econds]|M[inutes]|H[ours]|D[ays]}
numconst a numeric integer constant of 1 to 86400
[trigger_clause] = ON tag_name
[logging_method_clause] = { ALL | EXCEPTION | INTERVAL | TRIGGER }
ALL removes every listed tag from each defined Data Point Logging method
EXCEPTION (default) removes every listed tag from exception logging
INTERVAL removes every listed tag from the fixed-time interval logging method
regardless of the fixed-time interval specified in the LOG command
TRIGGER removes every listed tag from triggered logging. The name of the trigger is
not required with this keyword.
[remove_table_clause] = FROM {table_name | *}
Using the asterisk in the remove_table_clause removes Data Point Logging
of the listed tags from all affected relational database tables for each Data
Point Logging method specified.
If none of the following clauses or keywords are used with the REMOVE command, the listed
tags are only removed from exception Data Point Logging: time_clause, trigger_clause, ALL,
EXCEPTION, and TRIGGER.
1. To remove the meter11reading tag from any table currently being logged for all defined
Data Point Logging methods:
REMOVE (meter11reading) ALL FROM *
2. To remove the tags meter8reading and meter10reading configured for the fixed-time
interval logging method from the relational database table METERDATA:
REMOVE (meter8reading,meter10reading) INTERVAL FROM METERDATA
This table is preconfigured with one row (entry). Data Point Logging uses information only in
the first row. If other rows exist, an error message displays even though Data Point Logging
continues to run.
Accessing
Data Logging > Data Point Logging > Dynamic Logging Control
Field Descriptions
Specify the following information in the Database Browser Control table for each Data Point
Logging table to be maintained.
Specify the following information in the Database Browser Information table for the Browse
Name specified in the Database Browser Control table.
Making the Database Browser task delete rows in a Data Point Logging table also requires a
Math & Logic procedure that supplies a value in the tag DPCUTOFF, triggers the delete, and
notifies you when the completion trigger is set. Enter the procedure in the Shared domain, and
make all tags referenced in the procedure Shared domain tags. See “Database Browser” on
page 81 for more information.
For dBASE IV databases only, you can use the Maximum Records field in the Data Point
Schema Control table to facilitate table maintenance.
Argument Description
–I Disable logging tag values at initialization.
–L Enable logging of SQL statements to a file.
–R<#> Set maximum number of rows.
–S Generate Data Point save file after successful dynamic log request.
–T Generate Data Point save file at task termination.
–V Enable logging of SQL statements. Statements logged (sent) to
Run-Time Manager output window, but not saved.
–W<#> Set historian time-out parameter. (# = 5 to 300 seconds; default =
30 seconds)
ERROR MESSAGES
Event and Interval Timer
Event and Interval Timer allows you to define timed events and time intervals that initiate and
control any system function in run-time mode. This task links timed events and intervals to
tags used as triggers whenever the event or interval occurs. Timer tags can be referenced by
other FactoryLink tasks to trigger some action, such as:
• Read values from a PLC
• Update a report
• Log data to a relational database
• Perform a mathematical procedure
Use this task to signal the occurrence of specified events or intervals by writing to digital tags
in the FactoryLink real-time database.
• Timed events occur at a specific time not more than once every 24 hours (for example,
Monday at 8:00 A.M.). They are configured in the Event Timer Table.
• Time intervals occur at least once every twenty-four hours at regular intervals of the system
clock (for example, every 60 seconds). They are configured in the Interval Timer Table.
OPERATING PRINCIPLES
The Event and Interval Timer task operates in synchronization with the system clock. For each
defined interval or event, you must create a digital tag in the real-time database. When the
system clock matches the specified event or interval, the task forces the value of this digital tag
to 1.
There is no limit, except the amount of available memory, to the number of event and interval
timers that can be defined.
The Event and Interval Timer task also updates global information used by FactoryLink such
as the current time, the day of the week, and the month. Such global information is stored in
predefined FactoryLink tags, known as reserved tags, each of which is an analog, longana, or
message data type. The reserved tags are available for standard and UTC format (Coordinated
Universal Time).
While the Timer task is running, these reserved tags are constantly updated. In order for the
Timer task to run, you must have entered an R flag for the Timer task in the System
Configuration Table, as explained on page 505.
Each reserved tag is listed below with its description and data type, in both Standard and
UTC forms.
Accessing
Timers > Event Timer > Event Timer Information
Field Descriptions
Note: Between midnight (00:00:00) and the time indicated in the Hours, Mins.,
and Secs. fields, the value of the tag an event is linked to is 0. The tag value
changes to 1 when the timed event occurs and stays that way until midnight,
when it changes back to 0. For this reason, always set a time other than
00:00:00, so the event does not coincide with the reset to 0 at midnight.
The Event Timer Information table resembles this example when all information is specified.
In this example, the startday tag has a value of 0 between midnight and 8:00 A.M. and a value
of 1 between 8:00 A.M. and 11:59:59 P.M. (23:59:59) each day of the year.
Similarly, the endday tag has a value of 0 between midnight and 5:00 P.M. and a value of 1
between 5:00 P.M. and 11:59:59 P.M.
The newyear tag value has a value of 1 on January 1 of each year and 0 on all other days.
Similarly, the lastday tag value has a value of 1 on December 31 of each year and 0 on all other
days.
The fri5pm tag has a value of 1 each Friday between 5:00 P.M. and 11:59:59 P.M.
Accessing
Timers > Interval Timer > Interval Timer Information
Field Descriptions
Note: The interval timer assumes a default value of 0 for any of these fields left
blank. At least one of these fields must be filled in with a valid entry; zero is
not considered a valid entry. If the interval divides evenly into 24 hours
(86400 seconds or 1440 minutes), the timer runs as if it started at midnight. If
the interval does not divide evenly into 24 hours, the timer starts at system
startup.
In this example, the sec5 tag’s change-status flags are set to 1 every 5 seconds; that is, when
the reserved analog tag A_SEC = 0, 5, 10, 15, ... 55. This timer runs as if it started at midnight;
therefore, if system startup time is 9:39:18, the sec5 tag’s change-status flags are first set 2
seconds later, at 9:39:20, and every 5 seconds thereafter.
The sec30 tag’s change-status flags are set to 1 every 30 seconds, when A_SEC = 0 and 30.
This timer runs as if it started at midnight.
The min7 tag’s change-status flags are set to 1 every 7 minutes after system startup, because
1440 is not evenly divisible by 7.
The min20 tag’s change-status flags are set to 1 on the hour, at 20 minutes after the hour, and
at 40 minutes after the hour.
The report1 tag’s change-status flags are set to 1 every hour and 17 minutes, after system
startup.
The hour8 tag’s change-status flags are set to 1 three times a day: at 8:00 A.M., 4:00 P.M., and
midnight, regardless of system startup time.
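The alignment rule behind these examples can be sketched as follows (illustrative Python, assuming the behavior stated in the note above; not FactoryLink code):

```python
# Sketch of the alignment rule: intervals that divide 24 hours evenly run
# as if started at midnight; others run from system startup.
DAY = 86400

def first_fire(interval_s, startup_s):
    """Seconds-since-midnight of the first trigger at or after startup."""
    if DAY % interval_s == 0:
        # aligned to midnight: next multiple of the interval
        return ((startup_s + interval_s - 1) // interval_s) * interval_s
    return startup_s + interval_s

# Startup at 9:39:18 (34758 s): a 5 s timer first fires at 9:39:20.
print(first_fire(5, 34758))   # 34760
```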
When interval timers are used as triggers for other tasks, such as PLC read triggers or Report
Generator triggers, these tasks automatically use the change-status flags associated with these
timers.
ERROR MESSAGES
Reserved Timer tags not defined
Cause: Some or all of the reserved timer tags are not defined; the GLOBAL.CDB and/or
GLOBAL.MDX files may be damaged.
Action: If the files are present and the problem still exists, delete FLAPP/TIMER.CT and
restart the application to rebuild the TIMER.CT file.
Event Time Manager
The Event Time Manager (ETM) task allows a user to configure objects, functions, and
parameters, and to control them based on an Event List that is related to the configuration. An
optional user interface can be used to build the Event List independent of the FactoryLink
system.
The Event Time Manager was originally available as a third-party option. It is mainly provided
now to allow customers who used it in the past to upgrade their systems to the latest version of
FactoryLink.
The location for ASCII Event Lists is %flapp%\ETM; for historian files it is freely selectable.
An event is defined by fields Fix Date, Event Time, Weekday, Special Event, Valid
from..through.
• The date/time format is ISO 8601 and starts with the FactoryLink time calculation
(1980-01-01 00:00:00).
• The date (YYYY-MM-DD) is defined by year (4 digits), month (2 digits) and day (2 digits)
separated by a hyphen or minus sign {-}.
• The time (hh:mm:ss) is defined by hours, minutes and seconds (each of 2 digits) separated
by a colon {:}; the time resolution is one second.
• The day begins at 00:00:00 and ends at 23:59:59.
• The fields Valid from...through require the date format and are used to limit the span of a
repetition.
Weekday and Special Event provide further ways to describe an event. You can specify the
available entries in the ETM Runtime Parameter table. Every entry can be negated by a
preceding hyphen {-}.
An event is defined either for exactly one time (explicit) or as a repetition that is processed
more than once. In a repeated event, at least one field in the date and time string is empty;
preceding delimiters must still be declared. An empty field generally means always, and
Weekday and Special Event are considered one field. An empty time field means at 00:00:00.
As a logical rule, consider an event to occur at the time: [Weekday OR Special Event] AND
[Fix Date] AND [Valid from..through] AND [Event Time].
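The logical rule above can be sketched as a small predicate. The helper names, the dictionary layout, and the omission of Special Events (which combine with Weekday by OR) are assumptions of this illustration, not ETM's actual implementation.

```python
import datetime

def field_matches(spec, value):
    """An empty field means 'always'; a leading hyphen {-} negates the entry."""
    if not spec:
        return True
    if spec.startswith("-"):
        return value != spec[1:]
    return value == spec

def event_occurs(now, ev):
    """Evaluate [Weekday] AND [Fix Date] AND [Valid from..through] AND
    [Event Time] for one event; ev holds empty strings for unset fields."""
    date_s = now.strftime("%Y-%m-%d")
    time_s = now.strftime("%H:%M:%S")
    in_span = (not ev["valid_from"] or date_s >= ev["valid_from"]) and \
              (not ev["valid_through"] or date_s <= ev["valid_through"])
    return (field_matches(ev["weekday"], now.strftime("%A"))
            and field_matches(ev["fix_date"], date_s)
            and in_span
            and time_s == (ev["event_time"] or "00:00:00"))  # empty time = 00:00:00

# Every Monday at 06:00:00 during 2024:
ev = {"fix_date": "", "event_time": "06:00:00", "weekday": "Monday",
      "valid_from": "2024-01-01", "valid_through": "2024-12-31"}
print(event_occurs(datetime.datetime(2024, 3, 4, 6, 0, 0), ev))  # True (a Monday)
```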
The examples in the following tables illustrate explicit events and repetitions.
[Diagram: switching between Real-Time and a stopped ETM clock during a time adjustment
such as Daylight Savings Time]
Caution: This may cause unpredictable reactions. For example, before using ETM to stop
a heating system, it is best to cut off communication to the PLC first.
Mode Description
The operation mode is available as an input/output tag configured by Parameter tag OpMode
in the ETM Runtime Parameter table. The tag is subdivided like a bit field into command and
information modes. The user can set ETM to a certain mode by forcing the tag with a
command mode. ETM always shows its current state with information modes. The following
modes are available:
After each SleepTime (Program Argument), ETM checks if the system clock has changed and
if the internal clock is late. If so, ETM processes the events according to the faster internal
clock. Then ETM suspends again for the duration of SleepTime.
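The Auto-mode cycle just described can be sketched as follows, with the default intervals from the Program Arguments table. The function signature and the one-second catch-up granularity are assumptions of this sketch, not ETM's actual code.

```python
SLEEP_TIME = 0.9        # -SleepTime default (900 ms)
SHORT_SLEEP_TIME = 0.3  # -ShortSleepTime default (300 ms)

def auto_cycle(internal_clock, process_events, external_now, max_steps=10):
    """One Auto-mode pass: advance the internal clock toward the external
    clock one second at a time, processing due events, then pick the next
    sleep interval (short when still more than 1 s behind)."""
    steps = 0
    while internal_clock[0] < external_now and steps < max_steps:
        internal_clock[0] += 1
        process_events(internal_clock[0])
        steps += 1
    behind = external_now - internal_clock[0]
    return SLEEP_TIME if behind <= 1 else SHORT_SLEEP_TIME

fired = []
clock = [0]
print(auto_cycle(clock, fired.append, external_now=3))  # 0.9 (caught up)
print(fired)  # [1, 2, 3]
```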
If the ReadPeriod (Program Argument) has expired, ETM goes to ReadDB to update the Event
List and returns to its previous state.
If Parameter <OpMode> is set to 2, ETM goes to Off and stops processing. The
Parameter <ExternalTime> is initialized with the actual value of the system clock.
If the <OpMode> is set to 1, ETM goes Test/Init. The <ExternalTime> is initialized with the
actual value of the system clock; the Event List is read and an initialization on that
<ExternalTime> is started.
ETM reads the Event List database. If the initialization is completely processed, ETM goes to
Auto.
If the <OpMode> is set to 2, ETM goes to Off mode and stops processing. The
<ExternalTime> is initialized with the actual value of the system clock.
Test/Init
ETM reads the Event List database. If the initialization is completely processed, ETM goes to
Test mode.
If the <OpMode> is set to 2, ETM goes to Off mode and stops processing.
Off
If the <OpMode> is set to 0, ETM goes to Auto/Init mode. The Event List is read and an
initialization on the system clock is started.
If the <OpMode> is set to 1, ETM goes to Test/Init mode. The Event List is read and an
initialization on the actual value of <ExternalTime> is started.
Test
After each SleepTime, ETM checks if the Parameter <ExternalTime> tag's value has changed
and the internal clock is late. If so, ETM processes the events for the increased internal clock.
Then ETM suspends again for the duration of SleepTime.
If the ReadPeriod has expired, ETM reads the Event List database again.
If the <OpMode> is set to 0, ETM goes to Auto/Init. The Event List is read and an
initialization on the system clock is started.
[State diagram: ETM mode transitions. Auto/Init (256, 768) initializes on the system clock and
goes to Auto when initialization is completed. Test/Init (257, 769) sets the system clock to
<ExternalTime>, initializes on <ExternalTime>, and goes to Test (1) when initialization is
completed. ReadDB (512, 513, 768, 769) reads the Event List after each <ReadPeriod> and
returns to the previous state. Off (2) is entered by setting <OpMode> to 2.]
Accessing
Other Tasks > ETM Event Time Manager > ETM Object Information
Field Descriptions
Accessing
Other Tasks > ETM Event Time Manager > ETM Object Information > “my ETM” > ETM Function
Information
Field Descriptions
Command
Description: Individual description of the command. The descriptions are used in the Event
List and displayed in the ETM Input Masks. By entering a command in the Event List or in
the ETM Input Masks, ETM executes the appropriate action and processes the event with the
*Standard Value or with the higher-ranking *Preset Value. On the special command "?", the
user can enter a value that even supersedes the *Preset Value.
Required: Yes
Valid Entry: any name (1 to 23 case-sensitive characters) or "?" for a user-specified input value

*Preset Value
Description: Tag or character constant representing the value of the command if no other
value supersedes it. If specified, this field supersedes the *Standard Value.
Valid Entry: tag name or character constant (1 to 48 characters, case-sensitive)
Valid Data Type: digital, analog, longana, float, message
The following graphic illustrates the behavior of Startup and Enable mode.
Accessing
Other Tasks > ETM Event Time Manager > ETM Runtime Parameter
Field Descriptions
Table 10-3 lists valid parameters for the Parameter Argument field.
Table 10-3 Valid Parameters for ETM Runtime Parameters Screen (continued)
All functions can be accessed by mouse, Tab and Enter key, or by selecting an item from the
menu. To copy, modify or delete a record, the desired object must be selected prior to releasing
the function. For copy and modify, open the Event Configuration Mask to define the events for
the selected object.
The screen below shows the input mask for a weekly program with the function Step:
The screen below shows the input mask for a weekly program with the function Temperature.
The cursor can be set into a field by mouse click or by stepping through with the Tab key. Input
fields are displayed with a white background.
The special ? command allows you to enter a user-specified value. It is indicated by the prompt
?= in the list of commands and accepts any useful value. Enter the value after the prompt,
for example ?=23 or ?=Alarm.
You can limit an event to a Day of Week by simply checking the appropriate box and/or you
can limit it to a Fix Date and/or Valid from..Through. Note that periods for Day of Week and
Special Events can exclude each other and thus prevent an action. As a logical rule, consider an
Event to be valid at the time given by: [Day of Week OR Special Event] AND [Fix Date] AND
[Valid from..Through] AND [Event Time].
An empty field generally means always. An empty time field means at 00:00:00.
Example: To see only the objects whose name begins with MB01, type *^MB01 in the object
name field.
PROGRAM ARGUMENTS
You can control the behavior of ETM by program arguments. A program argument is marked
by a hyphen {-} followed by an argument name and a value if required. Program arguments are
not case-sensitive and must be separated by at least one space. An argument without a hyphen
is interpreted as the name of a file from which the program arguments are read.
The ETM task writes information about startup, shutdown, version, actual program argument
values and log output into file {flapp}\{FLNAME}\{FLDOMAIN}\{FLUSER}\log\etm.log. As an
example, a log file name can be c:\flapp\flapp1\shared\shareusr\log\etm.log.
Argument, Description, Default
(also see sample file ETM_para.run on the installation media)
file Program Argument File None
The file must be specified by full path and file name in the System
Configuration table. Environment variables can be used; they must be
set in braces { }, such as {flapp}\etm_para.run.
-OpMode# Startup Command Operation Mode #=0..2 0
ETM will start up in the specified mode.
-SleepTime# Cyclic Sleep Time #=100..1000 [ms] (Normal Interval) 900 ms
After each process cycle, ETM is suspended for the specified time if
the internal clock is not more than 1 second behind the external clock
(system clock or test clock).
See Parameters <ClockTick>, <InternalTime> and <ExternalTime>.
-ShortSleepTime# Cyclic Short Sleep Time #=50..500 [ms] (Catch-up Interval) 300 ms
After each process cycle, ETM is suspended for the specified time if
the internal clock is more than 1 second behind the external clock
(system clock or test clock).
See Parameters <ClockTick>, <InternalTime> and <ExternalTime>.
-StartupTime# Span Considering Startup Events #=1..100 [hours] 6 hours
At startup, ETM considers events in this span if the appropriate
command uses Mode S to initialize the control tags value,
see Function Table Mode column.
-BufferTime# Additional Span for Events #=1..10 [days] 1 day
When scanning the Event List, ETM stores every event for the
specified day(s). This span is used to prevent the loss of events in case
the Event List is not available for any reason.
-ReadPeriod# Rate of Reading the Event List #=600..100000 [sec] 900 sec
This is the period at which the Event List is scanned for events valid for the
current day and the time buffer. Use Parameter Tag <ReadDB> to force
reading the Event List.
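The argument conventions above can be sketched as a small parser. The token format `-Name<value>` (value appended directly, as in `-OpMode1`), the helper name, and the skipping of the program-argument file are assumptions of this sketch.

```python
import re

DEFAULTS = {"opmode": 0, "sleeptime": 900, "shortsleeptime": 300,
            "startuptime": 6, "buffertime": 1, "readperiod": 900}

def parse_etm_args(cmdline):
    """Parse space-separated '-Name<value>' tokens, case-insensitively.
    A token without a leading hyphen names a program-argument file and is
    skipped in this sketch."""
    values = dict(DEFAULTS)
    for token in cmdline.split():
        if not token.startswith("-"):
            continue  # would be read as a program-argument file
        m = re.fullmatch(r"-([A-Za-z]+)(\d+)", token)
        if m and m.group(1).lower() in values:
            values[m.group(1).lower()] = int(m.group(2))
    return values

args = parse_etm_args("-OpMode1 -sleeptime500")
print(args["opmode"], args["sleeptime"], args["readperiod"])  # 1 500 900
```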
When starting the ETM Input Masks, you can preselect the list of objects to display by the
program argument OBJECTNAME=<expr>, where expr is a regular expression defining the
filter.
Do not use an asterisk at the beginning of expr, although it is required in the selection mask.
This can be useful in conjunction with MMI in order to display the Input Mask for a currently
selected object; for example, to see the events of the object named LK06MM01_AS, enter:
nova -w etm OBJECTNAME=LK06MM01_AS
or, for events of the first object beginning with LK06, enter:
nova -w etm OBJECTNAME=^LK06
Verify the option key is installed and the license is enabled. Use the
License Wizard to see the purchased options.
ACE_ERR_IN_INSTALLATION #101 Authorization executable file may be corrupted
ACE_ERR_IN_KEYFILE #102 Authorization key file may be corrupted
PROG_INIT_FAIL #103 Error registering ETM to kernel, code=%d
E_CT_GET_HDRLEN #110 Error header length %s in ct file %s len=%d
E_CT_GET_NCTS #111 Error empty ct file %s
E_CT_GET_NRECS #112 Error no records in ct file %s
E_CT_GET_RECLEN #113 Error record length %s in ct file %s len=%d
E_CT_OPEN #114 Error opening ct file %s
E_CT_READ_HDR #115 Error reading header %s in ct file %s
E_CT_READ_INDEX #116 Error reading index in ct file %s
E_CT_READ_RECS #117 Error reading %s records in ct file %s
E_CT_TYPE #118 Error unknown ct type %d in ct file %s
SYS_NO_MEMORY #130 Error getting memory
SYS_FOPEN_ERR #131 Error opening file %s
PROG_THREAD_START_ERR #132 Error starting thread %s : %s
E_GET_GLOBAL_TAG #133 Error global tag with id=%d not found
E_DCREATE #134 Error creating directory %s
E_MKDIR #135 Error creating directory %s
E_NPATH #136 Error getting memory for NPATH
File Manager
File Manager allows you to perform basic operating system file management operations
initiated by a FactoryLink application at run time. This task works in conjunction with the
FLLAN option to initiate operations within other FactoryLink stations on the network.
OPERATING PRINCIPLES
The File Manager initiates the operations using commands.
These commands perform the same functions as their operating system counterparts. The
File Manager controls all file operations through the real-time database. You can configure
other FactoryLink tasks to initiate File Manager operations. For example:
• You can configure input functions in Graphics so an operator can use them to initiate
file-management operations at run time, such as to display a list of recipes or reports.
• The Timer task can trigger File Manager to automatically back up files to a networked
server at certain intervals, such as each day at midnight.
• The Timer task can trigger File Manager to delete log files automatically at certain intervals
(like once every four hours) or after certain events (when log files reach a specified size).
• Alarm Supervisor can trigger File Manager to print alarm files.
2 Locate the row containing the entry FLFM_SERVER in the Task field.
3 In the Flags field for that row, enter an R. This configures the FLFM_SERVER task on the
remote node to start up automatically whenever FactoryLink is started. (See page 505 for
information to configure a task in the System Configuration table.)
Accessing
Other Tasks > File Manager > File Manager Control
Field Descriptions
For examples of File Manager operations, see “Sample File Manager Operations” on page 219.
Accessing
Other Tasks > File Manager > File Manager Control > “your tag name” > File Manager Information
Field Descriptions
The first four examples do not require you to complete an associated File Manager Information
table. The last two examples do require you to complete a File Manager Information table.
Example 1: COPY
Example 1 demonstrates how to configure a COPY operation using Windows file syntax. You
can configure the Math & Logic task or an analog counter in the Counters task to use this
operation to increment the alarm history file number. This results in a rolling count of the
history file being transferred: Hist.001, Hist.002, and so on. Complete the control table to
configure a COPY operation.
Example 2: PRINT
Example 2 demonstrates how to configure a PRINT operation. Complete the control table to
configure a PRINT operation. PRINT command file syntax is the same for all operating
systems.
Example 4: TYPE
Example 4 demonstrates the TYPE command. TYPE command file syntax is the same for all
operating systems.
You can include up to four variable specifiers (each one designated by a leading percent sign
%) in the path or file name. These variable specifiers indicate a portion of the path or file name
that is variable (replaced with data from tags when the file operation is performed). The
variables can be digital, analog, longana, float, or message tags. Multiple variables can be used
together, as in a file name and extension (for example, %8s.%3s).
If you want to vary the actual path/files used in either the source or destination paths, use one
or more of the four variables and %xx type specifiers to dynamically build these at run time
from tags; otherwise, hardcode the exact path/file names desired and leave the four tag variable
fields blank.
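The substitution described above can be sketched as follows. The helper name is hypothetical, and this sketch treats the width digits in a specifier such as %8s as documentation only; File Manager's exact specifier semantics may differ.

```python
import re

def build_path(template, tag_values):
    """Substitute each %-specifier (e.g. %8s, %3s) in order with the next
    tag value, up to the limit of four specifiers noted above."""
    values = iter(tag_values)
    return re.sub(r"%\d*[sd]", lambda m: str(next(values)), template, count=4)

# A file name and extension built from two tag values:
print(build_path(r"C:\HIST\%8s.%3s", ["Hist", "001"]))  # C:\HIST\Hist.001
```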
The data type of the tag must match the variable-specifier type as follows.
Path names with wildcard characters in the file specifications might resemble this example:
source /DEVICE/FLINK/SAMPLE/SAMPLE.*
destination /DEVICE/FLINK/EXE.
Example of a File Manager operation using wildcard characters (Windows file syntax):
Do not specify a file name for the destination path as File Manager will do it for you.
Pathnames with wildcard characters in the file specifications might resemble this example:
source C:\FLINK\SAMPLE\SAMPLE.*
destination C:\FLINK\EXE
File-management functions, such as copying, deleting, printing, and renaming files, can be
performed between the local FactoryLink system and any remote computer running File
Manager as long as the FactoryLink system contains the FactoryLink Local Area Networking
(FLLAN) option.
If using FLLAN, create the LOCAL file before filling in the configuration tables. Define the
local station name in the ASCII file LOCAL in the FLAPP/NET directory. Remember:
standalone systems require the LOCAL file.
Either the source or destination path name can refer to a file on a remote station. The format for
a remote file path is
\\(station)\(path)
where
station Is the name of the remote station, 1 to 256 characters.
path Is the full path name of the file on the remote station.
The source and destination are interchangeable as long as one of them is the local FactoryLink
station. The only difference between local file operations and remote file operations is remote
file names must include the disk/drive specification if required by the operating system and
must conform to the file name syntax for the remote computer’s operating system.
Only one file can be remote in a copy operation. Both files must be on the same station in a
rename operation.
For example, to copy a file from a local FactoryLink station to a remote FactoryLink station,
use the following format for the remote path name:
\\STATION_NAME\DEVICE_NAME/DIR_NAME/FILE_NAME
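The remote path format above can be sketched as a small builder. The function name and the length check are illustrative; the 1-to-256-character limit comes from the station-name description above.

```python
def remote_path(station, path):
    r"""Format \\(station)\(path) as described above; station names are
    1 to 256 characters."""
    if not 1 <= len(station) <= 256:
        raise ValueError("station name must be 1 to 256 characters")
    return "\\\\" + station + "\\" + path

print(remote_path("STATION_NAME", "DEVICE_NAME/DIR_NAME/FILE_NAME"))
# \\STATION_NAME\DEVICE_NAME/DIR_NAME/FILE_NAME
```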
Other file-management operations can be performed with File Manager using the same format.
Do not use the remote file name (\\(STATION)\) when performing File Manager operations on
networks unless you installed FLLAN on the local and remote computers. Using the FLLAN
FactoryLink station name instructs FLLAN rather than the network to perform the operation.
At run time, ensure the FLFM_SERVER task is running on the remote node before invoking
file management operations between local and remote nodes.
Different operating systems reference network devices in different ways. Consult the user’s
manual for the appropriate operating system to find the proper syntax for referencing these
devices.
PROGRAM ARGUMENTS
ERROR MESSAGES
FLLAN
The FactoryLink Local Area Networking (FLLAN) module transmits FactoryLink data
between computers (called stations) across a network. A network is a combination of hardware
and software that lets multiple computers share resources, such as files, printers, or data. A
network consists of the following parts:
• A Network Operating System (NOS)—Software that transports data between software
applications on different computers.
• A network application—Software that sends data to a similar application on another
computer via the Network Operating System.
• The network hardware—Network interface cards installed on each computer on the network
and cables that link them all together.
Note: FLLAN was the first FactoryLink task for sharing data between nodes
on a network. In a later version of FactoryLink, the Virtual Real-Time Network
and Redundancy (VRN/VRR) task was introduced. VRN/VRR has all of the
functionality of FLLAN and is more flexible. FLLAN is still supported, but if
you are starting a new application, it is recommended that you use VRN/VRR
instead. For more information, see “Virtual Real-Time Network and
Redundancy” on page 547.
O PERATING P RINCIPLES
Tags are sent between one station and another using send and receive operations. The tags and
operations are defined in the Local Area Network Send and Receive tables. These tables define
the conditions under which the send operations are initiated and whether or not the remote
station is willing to receive the data.
During a send operation, the FLLAN on the local station sends tag values from the
FactoryLink real-time database across the network to the FLLAN on the remote station. The
FLLAN on the remote station writes these values to the real-time database on the receiving
station.
During a receive operation, FLLAN receives values from a remote station and stores them in
the FactoryLink real-time database as tags. You do not need the module FLLAN on two or
more FactoryLink stations in order to share and store files on a network server or use network
printers. External networking software allowing peer services is sufficient to achieve this goal.
Network Groups
You can combine one or more stations into groups. Grouping permits you to transmit the same
data to multiple stations with a single operation. A single station can belong to more than one
group. You can use the same group name on more than one remote station; however, these
groups are independent and do not correspond to each other.
Because the Network Operating System is transparent to FactoryLink, you can use a different
Network Operating System program on each station on a network. This lets you use
FactoryLink for different platforms within the same network. You must use the same protocol
on all stations in the network.
You can monitor the status of remote stations on the network, such as the number of
transmissions the remote station has sent and received and whether these transmissions were
successful. You can view the status at run time and other FactoryLink modules can use this
information for other activities.
The local station name and the default values FLLAN uses to transmit data are stored in the
local name file FLAPP/net/local on each FactoryLink station. We recommend you consult your
network administrator if you need to change the default values.
When sending tag values, FLLAN groups the tags into packets by data type and sends them in
the following order: digital, analog, float, message, longana, and mailbox. To maximize
efficiency, place tags of like data-types in the same order in the LAN Send Information table.
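The send order described above can be sketched as follows. The function name and the tag representation are illustrative, not part of the FLLAN interface.

```python
SEND_ORDER = ["digital", "analog", "float", "message", "longana", "mailbox"]

def group_into_packets(tags):
    """tags: list of (name, data_type) pairs. Returns tag names grouped by
    data type, in the order FLLAN sends them."""
    packets = {t: [] for t in SEND_ORDER}
    for name, dtype in tags:
        packets[dtype].append(name)
    return [(t, packets[t]) for t in SEND_ORDER if packets[t]]

print(group_into_packets([("msg1", "message"), ("d1", "digital"),
                          ("a1", "analog"), ("d2", "digital")]))
# [('digital', ['d1', 'd2']), ('analog', ['a1']), ('message', ['msg1'])]
```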
1 Define the TCP/IP Internet addresses for all stations in the hosts file if you are not using a
name server or if you are using a name server but the local station name is not in it.
Refer to the vendor’s documentation for details on how to modify these files. Contact your
system administrator if you do not know your TCP/IP addresses. The syntax for defining the
TCP/IP address is
where
address Is the TCP/IP internet address.
sta_name Is the unique name assigned to the station.
STA_ALIAS Is the alias used to reference the station. This must be in all uppercase. For
example,
FLLAN restricts you to 1024 sessions. For each read-only entry in the external domain table, a
session is needed for the client and the server. If the entry is a read-write connection, two
sessions are created on both the server and client. The 1024 session limit is for each FLLAN
application. This means a client can have 1024 sessions and the server can also have 1024
sessions. See the -n option for changing the session limit.
Enter the following lines in the file defined for your operating system to define the service
ports. Use all uppercase letters for the service names.
Use the service port numbers unless another service name in the services file is already using
one of these numbers. If you use different service port numbers, make them consistent for all
stations on the network. See the vendor’s documentation for details about service port
numbers.
FLLANSIG is a number that is less than or equal to the number of seconds in either the TX or
CALL parameter, depending on which is less. If you did not change the local station default
TX or CALL values, this is a number less than or equal to 10. If you changed the local station
default TX or CALL values, this is a number less than or equal to the lesser of the two.
FLLAN does not wake up when the TX or CALL intervals have passed. When the value of
FLLANSIG changes, FLLAN wakes up to check whether either of these two intervals have
passed.
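The FLLANSIG constraint above reduces to taking the lesser of the two parameters; a minimal sketch, with the defaults TX=20 and CALL=10 taken from the parameter descriptions in this chapter:

```python
def max_fllansig(tx=20, call=10):
    """FLLANSIG must be a number of seconds less than or equal to the
    lesser of the TX and CALL parameters."""
    return min(tx, call)

print(max_fllansig())        # 10, matching the unchanged-defaults case above
print(max_fllansig(30, 25))  # 25
```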
These test programs send and receive data using the same format as FLLAN. Run NR on one
station (the local station), and run NS on the remote station to test the communications
between two stations on the network. Then, reverse the process for the same two stations. Test
every station on the network and test every station as both a local station and a remote station.
1 Start NR on the local station. Use the following syntax for this command:
where
local_name Is the name of the computer that receives the data.
remote_name Is the name of the remote computer that sends the data.
verbose_level Controls how much information NR displays about each packet it receives.
This can be one of the following:
0 Displays the sequence number of messages in multiples of
10 when every 10th message is received. The message is
displayed on the same line as the sequence number; the
message does not scroll. This is the default.
1 Displays the sequence number of the current message. The
message is displayed on the same line as the sequence
number; the message does not scroll.
2 Displays the sequence number of the current message. The
message is displayed on different lines and scrolls.
>3 In addition to level 2 output, the message is displayed in
hexadecimal format. Any value greater than 3 displays the
same information as 3.
debug_level Is a number >0 that indicates how much information the network debug
layer displays about each packet. The higher the value, the more
information NR displays. The default is 0.
-l Writes debug information to a log file named nr.log in the current directory.
bufsize Is a number from 128 to 2048 that specifies the number of bytes in a buffer
(message). The default is 512.
-a Acknowledges all received messages. If you include -a with this command,
you must include it with the NS command on the remote station.
nr STATION1 STATION2 -a
2 Start NS on the remote station when NR is in the listening mode. Use the following syntax for
this command:
where
local_name Is the name of the computer that sends the data.
remote_name Is the name of the computer that receives the data.
verbose_level Controls how much information NS displays about each packet it sends.
This can be one of the following:
0 Displays the sequence number of messages in multiples of
10 when every 10th message is sent. The message is
displayed on the same line as the sequence number; the
message does not scroll. This is the default.
1 Displays the sequence number of the current message. The
message is displayed on the same line as the sequence
number; the message does not scroll.
2 Displays the sequence number of the current message. The
message is displayed on different lines and scrolls.
>3 In addition to level 2 output, the message is displayed in
hexadecimal format. Any value greater than 3 displays the
same information as 3.
debug_level Is a number >0 that indicates how much information the network layer
displays about each packet. The default is 0. The higher the value, the more
information NS displays.
-l Writes debug information to a log file named ns.log in the current directory.
bufsize Is a number from 128 to 2,048 that specifies the number of bytes in a buffer
(message). The default is 512.
secs Is a number from 1 to 59 that specifies the number of seconds between
packet sends.
In the following example, STATION2 is running NS. STATION1 is running NR. STATION2
acknowledges all transmissions from STATION1:
ns STATION2 STATION1 -a
After you start NR and NS, they display the following message on the computers they are
running on:
The programs then display the following message until the two computers establish a
connection:
open remote_station_name
You may experience a delay of several seconds between the two messages. Then the computers
display the following message:
wait on call
3 Verify the computers establish a connection. After the computers establish a connection, NR
and NS automatically begin transmitting messages. The computer running NR displays
data-transfer information on its screen each time it receives data. The computer running NS
displays data-transfer information on its screen each time it sends data.
5 Run NR and NS again at a higher debug level if the computers do not connect. Note any errors
that display.
7 Repeat this procedure again, but run NR on the station you first ran NS and run NS on the
station you first ran NR.
You can fill out as many Send tables as the RAM on your system allows. You can enter as
many tags as the available RAM allows. The Local Area Network Send table is filled out in the
Shared domain.
Perform the following steps to define the station name for the local computer. You must repeat
this procedure for each computer in the network running FactoryLink.
1 In your server application, open Networking > Local Area Network Groups > local.
2 Enter the computer name of the local station as defined in the network operating system. (To
find out what your computer name is, open the control table and click the Network icon.)
Computer names are case-sensitive. Enter the computer name in the LAN Local Names table
exactly as it is spelled in the control table.
3 (Optional) To change any of the transmit parameters from their default values, enter the
parameters and their new values beneath the station name.
In the example, TX=30 changes the maximum time between data transmissions to 30 seconds,
RX=120 changes the maximum time between receipts of data to 120 seconds. The possible
transmit parameters and their default values are given below.
4 Press Enter at the end of the last line to enter a hard return. If only a station name is entered,
then press Enter after the station name. This hard return is required.
5 Complete the LAN Remote Names table to define the network groups. (See page 251.)
TX (Transmit Time-out)
A number between 0 and 65,527 that sets the maximum time, in seconds, between
transmissions. The default is 20. If the local station does not send any data to a given remote
station after the indicated time, the local station sends an “I am still here” packet to the remote
station.
RX (Receive Time-out)
A number between 0 and 65,527 that sets the maximum time, in seconds, between receptions.
The default is 60. Make sure this value is at least three times greater than the TX value. If
the local station does not receive any data from a remote station after the indicated time, the
local station disconnects from the remote station and attempts to reconnect.
If you specify an RX value greater than 60, modify the -t program argument in the System
Configuration table; otherwise, FLLANRCV may not shut down properly. To do this, complete
the following steps:
1. Open the System Configuration table in the Shared domain. The System Configuration
editor appears.
2. Click the right arrow at the bottom of the editor to select the FLLANRCV task.
3. In the Program Arguments field, enter the -t argument with the required RX value. For
example, if the RX value in the Local Names table = 90, then enter -t90.
4. Click Apply to save the change and then close the System Configuration editor.
INIT
A value of 0 or 1 that specifies whether the local station sends all data when it first connects
with another station. The default is 0. The local station uses this value only when a remote
station starts up.
• If INIT = 0, when the local station first connects to another station, it does not send values
until one of the values has changed.
• If INIT = 1, when the local station first connects to another station, it sends all values during
the first real-time database scan. This can be useful when you start a remote station after the
local station has been running. The new station has no values when it starts, so the local
station sends the values it has at that time. After that, the local station values are updated
normally.
Because startup data can contain uninitialized values, it is recommended that you leave INIT at
0.
CALL
A number between 0 and 65,527 that defines the minimum amount of time, in seconds, the
local station waits for a call to a remote station to connect. The default is 10. If the remote
station does not connect to the local station, the local station waits at least CALL seconds
before attempting to reconnect. The remote station may still connect to the local station in the
interim.
MAXLEN
Only FLLAN uses the MAXLEN parameter. The largest number of bytes a station can send or
receive in a single data packet. The minimum is 512; the maximum is 65,536. The default is
512. The tag data is truncated if a message or mailbox tag is sent that is larger than MAXLEN.
Make sure this number is the same on all stations.
• If you enter a value less than the minimum of 512, FLLAN uses 512.
• If you enter a value greater than the maximum of 65,536, FLLAN uses 65,536.
Each tag uses a specific number of bytes, depending on its data type. Every tag uses 4 bytes
to store its tag name plus additional bytes to store the value, as shown in the table below:

Tag type   Tag name bytes               Value bytes                          Total
Digital    4                            2                                    6
Analog     4                            2                                    6
Longana    4                            4                                    8
Float      4                            8                                    12
Message    6 (4 + 2 for the length)     number of characters in the string   varies
Mailbox    30 (4 + 26 for the header)   number of characters in the string   varies
The MAXLEN parameter must be configured to specify the maximum number of bytes each
node requires to send or receive a single data packet.
To distribute alarms and logbook entries along the network, use the following formula to
calculate the number of bytes required at each node:
((84 x number of active alarms) + 38) + (number of logbook entries x (24 + msg space)) = bytes
where
number of active alarms     Is the maximum number of alarms defined for display in the
                            Active Alarms field in the General Alarm Setup Control table.
number of logbook entries   Is the maximum number of logbook entries expected to be
                            generated for the alarms defined. This number can be smaller
                            than or equal to the number of active alarms. A practical
                            estimate of the normal volume of logbook entries is 20-30% of
                            the total alarms.
msg space                   Is smaller than or equal to the number of input lines.
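The per-tag byte counts and the alarm-distribution formula above can be combined into a quick sizing check. The sketch below is illustrative only; the function and dictionary names are not part of FactoryLink, but the constants come directly from the tables and formula in this section:

```python
# Sizing sketch for the MAXLEN parameter. Helper names are illustrative;
# the byte counts and formula are taken from the tables above.

TAG_BYTES = {
    "digital": 6,   # 4-byte tag name + 2-byte value
    "analog": 6,    # 4-byte tag name + 2-byte value
    "longana": 8,   # 4-byte tag name + 4-byte value
    "float": 12,    # 4-byte tag name + 8-byte value
}

def message_tag_bytes(string_length):
    # 4 bytes for the tag name + 2 bytes for the length + the string itself
    return 6 + string_length

def mailbox_tag_bytes(string_length):
    # 4 bytes for the tag name + 26 bytes for the header + the string itself
    return 30 + string_length

def alarm_distribution_bytes(active_alarms, logbook_entries, msg_space):
    # ((84 x number of active alarms) + 38)
    #   + (number of logbook entries x (24 + msg space))
    return (84 * active_alarms + 38) + logbook_entries * (24 + msg_space)

def clamp_maxlen(value):
    # FLLAN substitutes the limits for out-of-range values (512..65,536)
    return max(512, min(value, 65536))

print(alarm_distribution_bytes(100, 25, 80))  # 11038
```

With 100 active alarms, 25 expected logbook entries, and 80 bytes of message space, the node would need a MAXLEN of at least 11,038 bytes, which is within the 512 to 65,536 range.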
BUFSIZE
Only File Manager uses the BUFSIZE parameter. A number between 128 and 2,048 that sets
the size of each buffer in bytes. The default is 512 bytes. The size of the buffer determines the
amount of data File Manager can transmit across the network in a single message.
MAXSESS
The maximum number of stations to which the local station can connect at the same time.
These are called connections. The default is 32. The maximum number of connections varies
by network protocol:
• For NetBIOS, any number from 1 to x where x is the maximum allowed by NetBIOS. See
the NetBIOS documentation.
• For TCP/IP and DECnet, any number from 1 to 64.
ACK
A number from 0 to 1,024 that specifies the number of seconds the local station will wait for a
remote station to send a data packet acknowledgment before disconnecting from that station.
The default is 0, which indicates the local station does not require an acknowledgment from a
remote station.
ST (Send Time-out)
A number from 0 to 1,024 that specifies the number of seconds the local station will keep
trying to send its data if the remote station cannot accept it because it cannot process data fast
enough. The default is 10 seconds. When the time-out expires, the local station generates an
error.
SD (Send Delay)
A number from 0 to 1,024 that specifies the number of seconds the local station waits between
tries to send its data if the remote station cannot accept it because it cannot process data fast
enough. The default is 10 seconds. If you increase this number, you will reduce CPU
consumption but you may cause the overall performance to drop.
Perform the following steps to define network groups for the local station. You must repeat this
procedure for each computer in the network running FactoryLink.
1 In your server application, open Networking > Local Area Network Groups > Groups.
2 Complete the LAN Remote Names table. Enter each group on a separate line using the
following format.
In this example, the ALARM group consists of STATION2, STATION3, STATION4, and
STATION5. The REPORT group consists of STATION3. Note that STATION3 belongs to both
groups and that each line ends in a semicolon.
3 Press Enter at the end of the last line to enter a hard return. This hard return is required.
You can complete as many Send tables as the RAM on your system allows. You can enter as
many tags as the available RAM allows.
Accessing
Networking > Local Area Network Send > LAN Send Control
Field Descriptions
Specify the following information for this table. Add an entry for each send operation you
want FLLAN to transmit across the network.
Accessing
Position the cursor on the line entry on the LAN Send Control table representing the send
operation you are configuring. In your server application, open Networking > Local Area
Network Send > LAN Send Control > “your table name” > LAN Send Information.
Field Descriptions
For example, the local station sends the value of the tag regular_tank_level to a corresponding
tag on some remote station. The tag on the remote station may or may not have the same name.
If it does not, you can specify an alias to link the “sending” tag on the local station to the
“receiving” tag on the remote station.
Using r87_tank_level as the name of the receiving tag on the remote station, the local station
sends the value of regular_tank_level across the network under the alias tank_level. The alias
tank_level is also used on the remote station, where it maps to the tag r87_tank_level. Because
the alias is the same on both stations, the local station can send the value of
regular_tank_level to the remote station tag r87_tank_level.
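Conceptually, the alias acts as a shared key between a send-side lookup and a receive-side lookup. The following sketch uses the tag names from the example above; the dictionaries are illustrative, not a FactoryLink data structure:

```python
# Conceptual sketch of FLLAN alias resolution (illustrative only).

# Local station: maps the sending tag to the alias used on the network.
local_send_alias = {"regular_tank_level": "tank_level"}

# Remote station: maps the alias back to its receiving tag.
remote_recv_alias = {"tank_level": "r87_tank_level"}

def deliver(sending_tag, value):
    """Resolve the alias on each side to find the receiving tag."""
    network_name = local_send_alias[sending_tag]
    receiving_tag = remote_recv_alias[network_name]
    return receiving_tag, value

print(deliver("regular_tank_level", 42.5))  # ('r87_tank_level', 42.5)
```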
You can complete as many Receive tables as the RAM on your system allows. You can enter as
many tags as the available RAM allows.
Accessing
Networking > Local Area Network Receive > LAN Receive Control
Field Descriptions
In this example, the local station receives data from the remote stations belonging to the
network group REPORT.
Accessing
Networking > Local Area Network Receive > LAN Receive Control > “your table name” > LAN
Receive Information
Field Descriptions
The name of the receive operation you are configuring is displayed in the Table Name field at
the bottom of the table. Specify the following information for this table. Add an entry for each
tag received from any remote station in the network group.
You can monitor any or all FactoryLink stations on the network as long as they are running
FLLAN. The example table defines tags to contain the status of the STATION3 remote station.
To view the status on screen at run time, design and configure a graphics screen to display the
information.
If you want other FactoryLink modules to view and use this information for other activities,
configure those modules’ tables. For example, you can configure Math & Logic and Alarm
Supervisor to monitor these tags and trigger an alarm whenever a remote station disconnects.
Accessing
Networking > Network Monitoring > Network Monitor Information
Field Descriptions
PROGRAM ARGUMENTS
Argument Description
–D<#> Sets the verbose level (# = 0 to 22).
–L Enables logging of debug information to a log file.
–R (LAN Send only) Prevents setting the LAN Send Enable/Disable tag to 1.
–T Inserts a timestamp at the beginning of each debug statement.
–S<#> Closes and reopens the log file every # messages.
–W<#> Wraps the log file every # messages.
–X Logs the underlying network software’s error messages to a log file.
Historians
The Historian task is the interface between FactoryLink and a relational database. It processes
data requests from other FactoryLink tasks and sends them to the relational database. Data
requests from Database Logger or Data Point Logger tasks can store data in the relational
database. Data requests from Trending or Database Browser tasks can retrieve data from the
relational database.
OPERATING PRINCIPLES
The following steps describe how a historian processes data requests for a relational database:
1. A FactoryLink task sends a data request to a mailbox historian service. This can be a
request from Database or Data Point Logging to store data in the relational database or
from a task like Trending to retrieve data from the relational database. FactoryLink tasks
submit their requests for data in the form of Structured Query Language (SQL) statements.
Generally, mailboxes are unidirectional: a task requesting data from the historian makes the
request through a different mailbox than the mailbox historian uses to return data.
2. Historian reads this mailbox and processes any queued data requests. It transmits the data
request to the relational database server.
3. The relational database returns the requested information to the historian if the request was
to retrieve data.
1 In your server application, open System > System Configuration > System Configuration
Information in the form view.
• For Oracle and Sybase, create a new task and perform these steps:
2 In the Task Description box, type a description for the respective database: Historian for
Oracle or Historian for Sybase.
3 In the Program Arguments box, type the desired arguments. See the list of program
arguments on page 292.
4 Under Task Flags, select the Run At Startup check box. Click Apply and exit.
ODBC HISTORIAN
The ODBC (Microsoft Open DataBase Connectivity) historian allows FactoryLink to access
data from multiple, diverse Relational DataBase Management Systems (RDBMS). ODBC
focuses on the Application Programming Interface (API) that supports connections to various
database systems. The standard used for connecting and accessing a database is the Structured
Query Language (SQL). Software components (drivers) link a FactoryLink application to an
RDBMS. The ODBC historian allows multiple instances of the task to run in a single
application.
The ODBC historian enables FactoryLink to access data from several diverse database systems
through this single interface while the historian remains independent of any RDBMS from
which it accesses data. The ODBC interface defines the following:
• Dynamic-link libraries (DLL) of ODBC function calls to connect to an RDBMS, execute
statements, and retrieve results
• SQL syntax of function calls
• Standard representation for data types and the mapping between FactoryLink and RDBMS
data types
• Error codes
The following components work together to make ODBC and FactoryLink communicate:
• FactoryLink – Performs processing and calls to third-party ODBC drivers to provide data to
or request data from a data source.
• Driver Manager – Loads ODBC drivers for the needed data source.
• Driver – Processes ODBC function calls, submits SQL requests to a specific data source,
and returns results to FactoryLink.
• Data Source – Contains the data the driver accesses. Connection strings link a data source to
a driver.
Considerations
Supported Drivers
The ODBC historian supports drivers for Windows platforms. These drivers handle the
connections to the various platforms on which relational databases run.
The following table specifies the required additional connectivity software for each driver. For
specific information regarding the additional software requirements, refer to the document on
the specific driver and the network protocol that connects to your server. For specific software
version numbers for the various products listed in the table below, see the Installation Guide.
Conformance Levels
Drivers and their associated RDBMS provide a varying range of functionality. The ODBC
historian requires that drivers conform to the Level 1 API conformance, which determines the
ODBC procedures and SQL statements the driver supports. Use of the Level 2 API function
SQLExtendedFetch is based on whether the driver and its data source support it.
SQL Statements
The ODBC historian does not totally depend on the SQL conformance levels, but rather it tries
to map the FactoryLink data types to the best match provided by each data source. When a data
type maps, its SQL statement is accepted as long as the driver and data source can perform that
operation.
Data stored on an RDBMS has an SQL data type, which may be specific to that data source. A
driver maps data source-specific SQL data types to ODBC SQL data types and driver-specific
SQL data types.
Setting Up ODBC
The general steps for setting up ODBC with FactoryLink are:
4 Complete the necessary FactoryLink Historian Mailbox tables. (The detailed instructions for
setting up these steps are described in the following sections.)
Use the ODBC Administrator to add and delete drivers, and to add, configure, and delete data
sources. Perform the following steps to complete information for drivers and data sources:
1 In Configuration Explorer, open the Historians folder and double-click ODBC Data Source
Administrator.
After you install an ODBC driver, define one or more data sources for it. A data source name
provides a unique pointer to the name and location of the RDB associated with the driver. The
data sources defined for the currently installed drivers appear in the User DSN box in the
ODBC Data Source Administrator dialog box.
2 Select the driver you want to define as part of the data source definition.
3 Click Add to display the setup dialog for the selected driver.
5 For the setup instructions on each supported driver, see “Defining Drivers” on page 270.
Defining Drivers
Within this section are subsections for each driver you can define and the syntax for the data
source you must enter in that screen and on the ODBC Historian Information table.
The SQL Server driver supports the SQL Server database system available from Microsoft and
Sybase. Perform the following steps to complete setting up the SQL Server:
1 Enter the server name (where Microsoft SQL server database is located).
2 Enter the Database Name and then select Two Phase Commit when prompted.
3 Click OK to add the Data Source Name. Then, click OK to close the ODBC Administrator.
For detailed information, refer to the Microsoft ODBC Desktop Database Drivers Getting
Started guide. Perform this procedure to set up the Microsoft Access Driver and Data Source:
1 To create a new database, click Create. Choose the path drive and directory, such as
d:\fl660acc97, and database name, such as plant1.mdb. Click OK; a popup message indicates
that the database is created.
2 To connect to an existing database, click Select and then OK for the path and database file.
3 Click OK to accept this setup. Then, click OK to close the ODBC Administrator.
The Sybase System driver supports the SQL Server 10 database system available from Sybase.
For information on the setup information you must enter, refer to the MERANT DataDirect
ODBC Drivers Reference guide. Perform this procedure to complete setting up the Sybase
System 10 driver and Data Source:
1 Type the Server Name (from the Sybase client software that is already installed on the
FactoryLink Client computer).
2 Type the Database Name, then click OK to add the Data Source Name. Then, click OK to close
the ODBC Administrator.
The ODBC historian supports the configuration and execution of up to 10 instances of the task
in a single application. This allows developers to selectively distribute the various database
queries required by the application across different running instances of the task. The
developer can route the more critical and high-speed queries to one historian instance and the
slower and less critical requests to another and thereby alleviate the performance issues
associated with a single historian servicing all client queries.
For example, the execution of stored procedures through PowerSQL, large SELECT,
UPDATE, or DELETE queries from DBBROWSE and PowerSQL, and historical data requests
by Trending have the potential to be time-consuming queries. However, the logging of records
by the Database Logger or Data Point Logging tasks is generally a faster and more
time-critical operation. Therefore, one instance of the ODBC historian could be configured
and run to service all Database Logger queries and another to handle the PowerSQL and
DBBROWSE task queries.
The configuration for a trend chart using the Real-Time and Historical Trend Control requires
that the logging be routed to the same historian used by the trending task. The Multi-instance
ODBC historian still has potential for performance relief in this situation but would require a
slightly more complex configuration. One possibility is to use the Real-Time Trend Control if
you do not need historical data. Another possibility is to configure one chart for real-time only
that uses logging and trending through one historian instance, and another chart just for
historical viewing through another historian instance. The distribution can also be set up so
that some of the queries from a specific client go to one historian instance while others are
routed to another instance.
Database queries are routed to a specific historian by defining a unique mailbox tag (or set of
mailbox tags) and database data source names for each instance and referencing these mailbox
tags and data source names in the ODBC task configuration tables.
The following rules apply to the configuration requirements across the historian instances:
1. Each instance of the ODBC historian must use a unique set of mailbox tags.
2. Each instance of the ODBC historian must use a unique set of Disable/Enable Connection,
Connection Status, and Database Error tags. If tags are used for the Connection String, they
must be unique for each historian instance.
Accessing
Historians > Historian for ODBC > Historian Instance Information for ODBC
Field Description
Accessing
Historians > Historian for ODBC > Historian Instance Information for ODBC > ”your instance ID
name” > Historian Mailbox for ODBC
Field Description
Historian Mailbox (valid entry: tag name; valid data type: mailbox)
Mailbox this Historian services. This name must match the name defined in the task using
Historian to process data requests.
Create a separate mailbox or set of mailboxes for each instance of the Historian that is to be
configured. Different Historian instances may not reference the same mailboxes.
Accessing
Historians > Historian for ODBC > Historian Instance Information > “your instance ID name” >
Historian Information for ODBC
Field Descriptions
1 An entry must be added to the System Configuration table for each instance of the Historian to
be executed.
2 The first instance to run is always considered instance zero. Additional instances are 1 through
9, for a total maximum of 10 instances.
3 The Task Name field for instance zero (the first instance) must be ODBCHIST. (Existing
applications do not require any modification to the System Configuration table; the
FLCONV function makes all necessary modifications.) The Task Name for each additional
instance to be added is ODBCHISTn, where n is the instance number (1-9).
4 The Program Arguments field in the System Configuration table should include a new
argument -Un, where n is the instance number (0-9). The argument is not required for the first
instance; if omitted, -U0 is assumed. The argument is required for all other instances.
5 The entry in the Executable File field of the System Configuration table is bin/odbchist for all
instances.
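The instance-naming rules above can be summarized in a small helper. This is an illustrative sketch, not a FactoryLink utility; it simply reproduces the Task Name, Program Arguments, and Executable File conventions listed in steps 3 through 5:

```python
def odbc_instance_config(n):
    """System Configuration entries for ODBC historian instance n (0-9)."""
    if not 0 <= n <= 9:
        raise ValueError("instance number must be 0-9")
    # Instance zero is named ODBCHIST; additional instances are ODBCHISTn.
    task_name = "ODBCHIST" if n == 0 else f"ODBCHIST{n}"
    # -Un is optional for instance zero (-U0 is assumed when omitted),
    # required for all other instances.
    program_args = f"-U{n}"
    executable = "bin/odbchist"  # the same for all instances
    return task_name, program_args, executable

print(odbc_instance_config(0))  # ('ODBCHIST', '-U0', 'bin/odbchist')
print(odbc_instance_config(3))  # ('ODBCHIST3', '-U3', 'bin/odbchist')
```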
Note: The FLCONV utility, which converts the old ODBC Historian tables to the new
multiple-instance ODBCHIST tables, operates correctly only when run on the earlier
version of the application, before it has been restored with the current FLREST option. To
transfer an earlier version application from another computer, you need to have it in the
form of that version’s platform-specific save file, or a .zip file or similar format.
The Microsoft ODBC Desktop Database Drivers diskettes and documentation are included
with your ODBC Historian. Refer to the ODBC Getting Started manual regarding Access
Drivers or the MERANT DataDirect ODBC Drivers Reference book regarding Oracle, the SQL
Servers, and Sybase System drivers to set up your ODBC drivers.
The ODBC Driver Conformance Test utility validates that the level of conformance provided
by an unsupported driver meets the requirements of a supported data source. This utility is installed
by default in the FLINK/BIN directory during installation. This directory contains all the
FactoryLink program files. The executable program file for this utility is FLHSTDRV.EXE.
Before you start, a driver must be connected to a data source. Perform the following steps to
use the ODBC Driver Conformance Test utility:
1 Run the utility executable: FLHSTDRV.EXE to display the Data Sources dialog listing the SQL
data sources already set up through the ODBC Administrator.
2 Choose a data source from the displayed list, then click OK. A message notifies you of a
successful connection to the data source.
3 The FactoryLink Driver Conformance Test window is displayed behind the message. From this
window, the File menu lists the options:
• Connect
• Disconnect
• FactoryLink Driver Conformance Test
4 Choose the Driver Conformance Test option to run the test and display the test results:
• Successful – A FactoryLink Auto-Test message confirms that the driver passed the
minimum FactoryLink conformance requirements.
• Unsuccessful – The driver is not supported.
If the test is unsuccessful, disconnect the current data source from the File menu and connect it
again. Or, you may want to connect to a different data source for another test.
Caution: Passing the FactoryLink Driver Conformance Test means that an ODBC
driver passed only the minimal FactoryLink conformance requirements
and it may work with the ODBC Historian. However, a driver could pass
the test and still be incompatible. There is no brief test available to certify
that a driver is supported completely. Testing of ODBC drivers is
performed with each release and a list of the drivers that were tested and
certified for use with the release is provided in the Installation Guide.
ORACLE HISTORIAN
This section provides information needed to configure the Oracle Historian.
Considerations
This section explains how to set your FactoryLink application to work with the Oracle
historian. Read this section before you configure your historian.
If you want to use the Oracle historian, refer to the release notes for the Oracle-specific
software to use.
Oracle Licenses
Oracle requires you to purchase licenses for the number of FactoryLink processes using an
Oracle database. Connection strings often use platform-defined aliases to reference Oracle
servers.
The minimum user requirement to connect FactoryLink to one Oracle server is two user
licenses. Calculate the number of Oracle licenses required for each Oracle database:
2. One license for each unique Oracle User Name and connection string pair
Note: Each historian running in the application creates a FactoryLink process. For
example, four historians (ODBCHIST, ODBCHIST1, ODBCHIST2, ODBCHIST3)
are on a FactoryLink client computer 1, all talking with the Oracle server computer.
Client computer 2 also has four historians talking with the same server computer. Even
if all eight historians use the same user name and password (flink/flink), the server
considers these as eight different processes. For license information, check with Oracle.
The OPEN_CURSORS parameter determines the maximum number of cursors per user. Before
you start the Oracle historian the first time, increase the value of the OPEN_CURSORS
parameter to 200 or above. This setting is in the INIT.ORA file and has a valid range of 5 to 255.
For instructions for increasing the value of OPEN_CURSORS, refer to the Oracle RDBMS
Database Administrator Guide.
A setting of 200 cursors may not be high enough for extremely large applications. When this
setting is not high enough, the following message is written to the log file ohmmddyy.log in the
directory defined by the environment variables FLAPP/FLNAME/FLDOMAIN/FLUSER/log,
where oh is the identifier for the Oracle historian and mmddyy represents the date:
ORA-01000: maximum open cursors exceeded
If this message appears, increase the value of the OPEN_CURSORS parameter to 255.
Accessing
Historians > Historian for Oracle(R) > Historian Mailbox Information for Oracle(R)
Field Description
Historian Mailbox (valid entry: tag name; valid data type: mailbox)
Mailbox the historian services. This name must match the name defined in the task using the
historian to process data requests.
Create a separate mailbox for each task that submits data requests, except for Database
Logging and Trending, which can share a mailbox.
Accessing
Historians > Historian for Oracle(R) > Historian Information for Oracle(R)
Field Descriptions
Database Alias Name (valid entry: database connection name)
Unique name to represent a database connection. This must match the database name defined
in the task using the historian to process data requests.

Disable/Enable Connection (valid entry: tag name; valid data type: digital)
Tag that enables or disables the connection. When this tag is set to 1, the connection to the
relational database defined in this entry is closed; when set to 0, the connection opens.
Note: Database aliases should not share connection tags. Sharing connection tags
between database aliases can result in errors.

*Oracle User Name (required; valid entry: tag name or constant; valid data type: message of
1 to 32 characters)
Login name required to connect to the database. This name must be a valid Oracle account
with connect, read/write, and create access to database tables. This name can be either a
constant or a tag name.
If you enter a constant, precede the user name with a single quote.
If you enter a tag name, the specified tag must be a message tag type. You must specify a
login name in the tag Default Value field and a maximum length of 32 in the Length field.
For the historian to exchange data with an Oracle database, you must grant access to the user
account, that is, the user name and password specified in the Historian Information table.
For the instructions on how to create an Oracle user account, refer to the Oracle System
Administration Guide.
The Oracle user account must have system privileges to connect to a database and delete,
update, insert, and select rows from a database table. Additionally, if the FactoryLink
application requires, this account may also need to create table and index privileges.
You must set the connection strings. This section provides the syntax to connect to SQL*Net
V1 and V2 clients. Refer to the SQL*Net documentation set for your Oracle server running on
your server host before you define a connection string to any platform.
SQL*Net V1 Syntax
@prefix:host_name:system_ID
where
@ Marks the start of the connection string
: Is a field delimiter
prefix Represents the network transport
host_name Is the server host
system_ID Is the Oracle system ID
This is an example connection string for the SQL*Net TCP/IP network protocol to a UNIX
server:
@T:FLORASRV:B
where
@ Marks the start of the connection string
T Represents the TCP/IP network transport
FLORASRV Is the server host
B Is the Oracle system ID
SQL*Net V2 Syntax
@alias
where
@ Marks the start of the connection string.
alias Is an alias name defined in an SQL*Net V2 configuration file
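Both syntaxes are simple enough to capture in helper functions. This is an illustrative sketch; the prefix letter and alias must match your SQL*Net installation:

```python
def sqlnet_v1(prefix, host_name, system_id):
    """Build a SQL*Net V1 connection string: @prefix:host_name:system_ID."""
    return f"@{prefix}:{host_name}:{system_id}"

def sqlnet_v2(alias):
    """Build a SQL*Net V2 connection string from a configured alias."""
    return f"@{alias}"

# Reproduces the TCP/IP example from this section.
print(sqlnet_v1("T", "FLORASRV", "B"))  # @T:FLORASRV:B
# The alias name here is hypothetical; it must exist in your
# SQL*Net V2 configuration file.
print(sqlnet_v2("flora"))               # @flora
```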
SYBASE HISTORIAN
This section provides information needed to configure the Sybase Historian.
Considerations
By default, 25 is the maximum number of Sybase connections allowed per process; however,
you can increase this maximum, which is limited only by system resources, by changing the
value set by the environment variable MAXDBPROCS. The Historian checks
MAXDBPROCS against the actual number of Sybase connections per process; therefore, if
you want to use more than 25 Sybase connections per process, set the environment variable
MAXDBPROCS to that value or greater.
Accessing
Historians > Historian for Sybase(R) > Historian Mailbox Information for Sybase(R)
Field Description
Accessing
Historians > Historian for Sybase(R) > Historian Information for Sybase(R)
3 Create Sybase databases. Create all Sybase databases for FactoryLink to use before you start
up the Historian.
6 Grant permission to the FactoryLink account to use CREATE PROC and CREATE TABLE
commands.
For instructions on how you complete these steps, refer to the Sybase System Administration
Guide and Sybase Commands Reference.
2 Add one entry for each SQL server to the interfaces file when using more than one Sybase
SQL server.
use database
go
where
database Is the name of the Sybase database FactoryLink uses.
go
where
username Is the name of the user accessing the Sybase SQL server.
DBASE IV HISTORIAN
The dBASE IV Historian is file-based. For large applications, it is recommended that you use a
standard multi-tier database, such as SQL Server, Oracle, or Sybase. For smaller applications
or applications with minimal logging requirements, the dBASE IV Historian may be adequate.
This section describes how to configure connection information for dBASE IV Historian,
which includes defining mailboxes and connection information.
Accessing
Historians > Historian for dBASE IV(R) > Historian Information for dBASE IV
Field Description
Accessing
Historians > Historian for dBASE IV(R) > Historian Information for dBASE IV
Field Descriptions
Reserved Words
The Historian uses the following reserved words with dBASE IV. Do not use these keywords
when defining table or column names.
ALL DECIMAL INTEGER SET
ALTER DELETE INTERSECT SMALLINT
AND DESC INTO SOME
ANY DESCENDING IS SUM
ASC DISTINCT LIKE SYNONYM
ASCENDING DROP MAX TABLE
AVG EXISTS MIN UNION
BETWEEN FLOAT MINUS UNIQUE
BY FOR NOT UPDATE
CHAR FROM NULL VALUES
CHARACTER GROUP NUMBER VARCHAR
COUNT HAVING NUMERIC VIEW
CREATE IN ON WHERE
CURRENT INDEX OR
DATE INSERT ORDER
DEC INT SELECT
Disconnects from a relational database can occur for either of the following reasons:
• They are scheduled to occur at predefined times.
• A fatal error forces an unscheduled disconnect.
Scheduled Disconnects
To initiate a disconnect, configure a FactoryLink task to set the disable/enable connection tag
defined in the Historian Configuration table to 1. Once a connection is disabled, the
historian returns a HSDISABLED error code to the requesting tasks. All data is lost during the
period of disconnect.
You must configure a FactoryLink task to write the connection strings required to connect to
the new database to the tags that define the connection you want to change. These tags are
specified in the Historian Configuration tables. Then write 0 to the disable/enable connection
tag defined on the Historian Configuration tables.
Unscheduled disconnects can occur because of fatal errors. Historians detect fatal error
conditions returned either by the RDBMS server or the network client software. The historian
tasks consider an error condition to be fatal when an error code generated by a database server
is found in the Fatal Error Codes list you defined in the FLINK/bin/flhst.ini file. For more
information on how to define these codes, see “Setting Run-Time Fatal Error Code Values” on
page 293.
Database Reconnect
Database reconnect provides the ability to reconnect to a database when the connection has
been lost. Historian reconnect is only valid when the task is running. The historian information
may not get updated after the reconnect if a screen is open when the database is disconnected
and reconnected. If this occurs, exit and reenter the screen to refresh historian updating.
Reconnect does not work if the historian is brought down and then brought back up.
Fatal run-time error codes and ODBC support information are configured in the
FLINK/bin/flhst.ini file. This file contains a section for each historian and one or more ODBC
database server names.
Any ODBC alphanumeric error code must be surrounded by quotation marks. The
alphanumeric error code consists of two parts. The first part is the ODBC “state” string. The
second part (enclosed in parentheses) is the native error produced by the database.
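The two-part code can be split apart as follows. This is an illustrative sketch; the state string and native error number used in the example are made-up values, and the exact quoting should be verified against your flhst.ini file:

```python
import re

def parse_odbc_error(code):
    """Split an ODBC error code like 'S1000 (1017)' into the ODBC state
    string and the native error number produced by the database."""
    m = re.fullmatch(r"(\S+)\s*\((\d+)\)", code.strip())
    if not m:
        raise ValueError(f"unrecognized ODBC error code: {code!r}")
    return m.group(1), int(m.group(2))

# Example values only; consult your RDBMS manual for real codes.
print(parse_odbc_error("S1000 (1017)"))  # ('S1000', 1017)
```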
The FLINK/bin/flhst.ini configuration file is divided into sections. Each section represents a
different ODBC data source name for ODBC support information or a different historian for
definition of fatal error codes.
The following example shows a section of the flhst.ini file for the Oracle Version 7 ODBC
driver. The section name is the historian task name, and the FatalErrorCodes entry holds the
list of fatal codes:

# Oracle Version 7 ODBC driver
[Oracle7]
IllegalDuplicateKey=1
FatalErrorCodes=6000 to 6429, 6600, 6610, 7000 to 7100
Note: The error code values listed in the Fatal Error Codes example are not actual
error codes. For the actual codes, refer to the RDBMS user’s manual.
If the error tag data type is message, the error message is written in the following format:
taskname:err_msg
where
taskname Is the historian task name that initiated the error condition.
err_msg Is the text from the relational database server.
If the error tag data type is a longana, the tag contains the database-dependent error code
number.
Every time the relational database server returns an error code to the historian, the historian
tests this code against the range defined in the flhst.ini file.
When a historian determines an error is fatal, it sets the connection status tag to 110. What
happens next depends on how the FactoryLink application is configured to handle fatal errors.
Your FactoryLink application can reconnect to the database after an error has been resolved.
One approach is to have the FactoryLink application set the disable/enable connection tag to 1
to disable the connection to the database causing the error, then attempt to reconnect by setting
the disable/enable connection tag to 0.
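This recovery sequence can be sketched in Math and Logic. The tag names hist_status and
hist_disable below are hypothetical stand-ins for the connection status tag and the
disable/enable connection tag defined in your Historian Configuration tables:

```
PROC hist_reconnect
BEGIN
   IF hist_status = 110 THEN    # historian flagged a fatal error
      hist_disable = 1          # disable the failing connection
      hist_disable = 0          # re-enable to attempt a reconnect
   ENDIF
END
```

In practice the application may need to wait for the disconnect to complete (for example, on a
separate trigger) before re-enabling the connection; this sketch compresses the two steps for
brevity.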
PROGRAM ARGUMENTS
Historian log files are accessible on disk for seven days; after seven days, old log files are
deleted. At the start of each new day, the previous day's log file closes and a new one opens.
Log files reside in the FLAPP/FLNAME/SHARED/FLUSER/log directory. The name of a log file
follows the format of
PRMMDDYY.log
where
PR Identifies the Historian name.
MM, DD, YY Are two-digit numerals for the month, day, and year the log file was created.
The following table lists the prefix and sample log file names for each Historian.
Entries continue to append to each .log file; consequently, these files can grow and take up
large amounts of disk space. To reclaim space, delete the file contents or the file itself. To stop
logging for a Historian, remove the Program Arguments on the System Configuration
Information table.
Note: Be aware that, when using multiple historians, some or all transactions for the
various clients are synchronous while others are asynchronous. If a client executes a
synchronous transaction with one historian and it does not respond for whatever reason,
the client must wait until the timeout period for that transaction to elapse before it can
process any other triggered transactions for any of the historians.
Use the following flowchart for help in troubleshooting the logging configuration.
(Flowchart not reproduced. Its yes/no decision path ends either in correcting identifiable
errors or rerunning the test.)
ODBC Driver
Display the Data Source Setup dialog and perform the following steps to troubleshoot the
ODBC driver:
By default, the Stop Tracing Automatically check box is enabled, which sets tracing to stop
automatically upon a disconnect from the data source. Make note of the trace file location
(SQL.log); you can change this location.
HISTORIAN MESSAGES
Messages may come from FactoryLink, a database driver, or data source. Messages
communicate a status or a condition that may or may not require an action from you.
Run-Time Messages
FactoryLink Historians generally do not write error messages generated from a data source to
the Task Status tag. The Historian reports all database errors to its log file and returns a status
code to the Historian client tasks, such as Database Browser, for every Historian operation.
Errors and messages may display as your FactoryLink application runs. FactoryLink sends a
code or message to the Run-Time Manager screen for display whenever an error occurs in a
Historian or a Historian-client task. You can also define an output text object to display codes
and messages on a graphics screen.
FactoryLink also sends a longer, more descriptive message to the log file when the log file
Program Argument is set. The Task Status tag is located on the System Configuration
Information table. The data type you assign to this tag for a Historian and any FactoryLink task
determines the type of codes written to this tag. You can assign these data types:
• Digital data type reports these two codes:
0 – indicates the requested operation completed successfully
1 – indicates an error occurred
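For example, a Math and Logic procedure could react to a digital Task Status tag as follows;
histstat and alarm_flag are hypothetical tag names invented for this sketch:

```
PROC check_status
BEGIN
   IF histstat = 1 THEN    # 1 indicates an error occurred
      alarm_flag = 1       # raise a hypothetical alarm tag
   ELSE
      alarm_flag = 0       # 0 indicates the operation succeeded
   ENDIF
END
```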
Startup Messages
The following messages may display on the Run-Time Manager screen if an error occurs with
Historian at startup. See the Historian’s .log file for the complete message.
Math and Logic
The Math and Logic task performs mathematical and logical operations on tags in the
Real-Time Database. The results are stored in tags for use by other tasks. Two modes
(interpreted and compiled) are available, permitting users to optimize applications for
maximum performance. The Math and Logic functions include the following types of
operations:
• Arithmetic
• Relational
• Logical
• Trigonometric
• Exponential
• Logarithmic
• String Manipulation
• If-Then-Else or While Functions
• User-defined C Routines (compiled mode)
MODES
Math and Logic runs in one of two modes: Interpreted Math and Logic (IML) and Compiled
Math and Logic (CML). It is possible to run both modes at the same time, but the application
designer must be sure any procedures called from a compiled procedure are also configured as
compiled.
The FactoryLink designer must determine the type of mode to use. A comparison of the two
modes is in the following table. Most applications written in the Interpreted Mode function can
be used with limited or no modifications under the Compiled Mode. If an application running
in Interpreted Mode uses any reserved words as variables or procedure names, these must be
modified before they can be used in CML. These words include any reserved by the compiler
you are using and those reserved by FactoryLink.
IML is preconfigured in FactoryLink. If you are using CML, you must add the CML task to the
System Configuration Table. Removing the IML task is not required.
For a complete explanation and attributes of each method, see the Configuration Explorer
Help.
Accessing
Math and Logic Triggers > Math and Logic Triggers Information
Field Descriptions
The procedures within a program file can be totally unrelated in functionality as they are
individually invoked by the predefined trigger or a function call embedded in another
procedure. All procedures in a program must be defined as either an IML or CML procedure.
Accessing
1 To create a new program file, expand the Math and Logic Procedures folder, right-click Math
and Logic Procedure - Shared and select New Prg file. The New Math and Logic Program File
dialog box appears.
2 Type a name for the program file, and select either Interpreted or Compiled mode. Enter the
name of the tag to trigger the program.
A new program, when saved, is automatically stored at: c:\{FLAPP}\{FL DOMAIN}\procs. It
is essential that the file remains in this directory.
3 Expand the Math and Logic Procedure - Shared folder and then expand the program name. (A
new procedure without the extension appears under the program name.) Open the program.
The program file displays with the procedure definition statements (PROC, BEGIN, and END)
inserted into the program file.
1 Open the program file, position the cursor at the beginning of the line where you want to add
the new procedure, and then click Insert Procedure.
2 In the Insert Procedure dialog box, type the Procedure Name, the Trigger tag name (if
applicable), select the mode (either Interpreted or Compiled), and click OK. A template is
inserted in the file to assist you with writing the procedure.
Each procedure definition statement or proc statement starts with the word PROC, followed by
the unique name of the procedure, followed by any arguments (parameters) the procedure
requires. Any procedure, except the main procedure for the file, can have arguments. Place the
keyword BEGIN on the next line.
For example:
PROC name (type name1 [,type name2])
BEGIN
.
.
END
where
type Is SHORT, LONG, FLOAT, or STRING.
name1 Is the name of a variable, constant, or tag.
name2 Is the name of a variable, constant, or tag other than name1.
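As an illustrative sketch, a procedure taking two arguments might look like the following; the
procedure and tag names are invented for the example:

```
PROC scale_value (FLOAT _raw, FLOAT _gain)
BEGIN
   scaled = _raw * _gain    # write the result to a (hypothetical) tag named scaled
END
```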
Coding Guidelines
• Always start the procedure with a BEGIN statement and conclude it with an END statement.
• The maximum line length is 1023 characters. Running a procedure with lines longer than
1023 characters can cause unpredictable validation results; the procedure may pass validation
even though it contains errors. Math and Logic will not function properly while running such a
procedure.
• For each IF statement, enter a matching ENDIF, properly nested.
• Show all keywords, such as IF, THEN, ELSE, and ENDIF, in uppercase characters to
distinguish them from tag names. Keywords are not case-sensitive, but tag names are.
• A local variable (tag) can be declared in the program. If it is added to the top of the file
above the initial BEGIN statement, it is available to all procedures. If the local variables are
added at the procedure level, then they are only available to the procedure in which they
were declared. To differentiate local variables from tag variables, begin the local variable
names with “_”.
• Global variables (tags) are added to a procedure by typing the tag name in the procedure.
Highlight and right-click the tag name. Select Add to Tag List. The FactoryLink Tag Editor
dialog box appears providing definition of the tag. For more information, see the
Configuration Explorer Help. The tag color changes to blue when the definition is
completed.
• Once defined, the tag name appears in the Xref Table, the Tag Browser, and the Object
Table in addition to the Math and Logic Variables table. Global variables (tags) can be
added at any time to the Math and Logic Variables Information table and then typed into the
procedure. Type the variable (tag) name, and the variable text color changes from black to
blue indicating it is already defined.
• Math and Logic can operate in either the Shared or the User domain. Use the Shared domain
when all tasks or users must share the same Math and Logic data. If a Shared tag is used in
both Shared and User procedures, it must be referenced in both the Shared and User Math
and Logic Variables Information tables. By default Configuration Explorer displays only the
Shared domain. To view both domains, right-click the application name and select
Shared+User.
• User-inserted markers and error markers have the same appearance. Markers can be toggled
with the shortcut <Ctrl + F2> on a specific line, or added to many lines using the find
function. All previously set markers are erased when the validate function is performed.
Save the procedure after you finish making changes.
• To avoid confusion and possible errors, do not give any two procedures, tag names,
variables, or constants the same name, even if the case is different. Local variable names
translate directly into C code when compiled. Even if you are using IML, it is important to
understand this so that you develop procedures that can be compiled later if needed. Because
periods, dollar signs, and at-signs are converted to underscores in compiled mode, all of the
following declaration statements become declare short lu_lu, with potentially confusing
results such as duplicate definition errors or changes in one variable being reflected in
another:
declare short lu$lu
declare short lu@lu
declare short lu_lu
• The number of tags, triggers, and programs you can define is limited only by the amount of
available memory, the operating system, and an optional compiler (compiled mode only).
Using CML requires a compiler program in addition to the FactoryLink software. Using
IML does not have compiler requirements.
• The Microsoft® Visual C++ .NET compiler is the compiler to use with the FactoryLink
CML processing. See the “Supported Layered Products Information” section in the
Installation Guide. Refer to the documentation supplied with the compiler for details on the
compiler limitations for your system.
• Math and Logic does not provide return codes for developer-defined procedures; therefore,
the task cannot set a variable’s value to the return code from a procedure call.
• After you finish typing the procedure, you must validate it to check for syntax errors. Click
Validate to verify the syntax, such as matching braces, parentheses, and brackets, and the
correct use of operators. Validation also checks that local and global variables (tags) are
correctly defined and that the essential keywords (BEGIN, END, PROC) are present. If no
errors exist, the system reports nothing. If errors exist, red triangle markers appear in the
left-hand margin for each line with an error. Correct the errors and revalidate the program.
• Math and Logic reserves a set of keywords for use in procedures. Because these keywords
have predetermined meanings, they cannot be used as procedure names, local or global
variable names, constant names, or tag names. The keywords are not case-sensitive. Do not
write procedures that use forms of reserved keywords as names because they may cause
unpredictable system behavior during execution.
• Because of the way float values are rounded and stored, you should not compare float values
for equality in Math and Logic (or any other programming language).
* The reserved keywords in boldface-italic type are C keywords reserved by the C compiler. Program files cannot
use these C keywords. Other keywords may exist; refer to the user manual supplied with the C compiler in use.
** The keyword begin is interchangeable with the opening brace ({), and the keyword end is interchangeable with
the closing brace (}) inside Math and Logic programs.
Constants are especially useful in applications when the boundary value of a loop or array must
be modified. When the constant is modified, its value only has to be changed in one place
within the application rather than many different places.
For example, a factory upgrades from three drying beds to five and the constant BED_MAX is
used as:
• A loop index—to index through the operations on the groups of beds
• An array index—for the array containing information on each bed
• As a limiting factor on the number of beds polled
The value of BED_MAX can be modified from 3 to 5, thus preventing the need for massive
search-and-replace operations on hard-coded values.
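The drying-bed example can be sketched as follows. The tags bed_temp and total are
hypothetical, and the WHILE ... ENDWHILE form is assumed here from the If-Then-Else/While
support listed earlier:

```
CONST BED_MAX 5          # was 3 before the upgrade; change it in one place only

PROC poll_beds
BEGIN
DECLARE SHORT _i         # loop index
   _i = 1
   WHILE _i <= BED_MAX
      total = total + bed_temp[_i]   # hypothetical array of bed readings
      _i = _i + 1
   ENDWHILE
END
```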
Numeric Constants
Numeric constants can be assigned to digital, analog, longana, or float tags as well as to
numeric local variables. Constants can be used in expressions wherever a numeric operand or
argument is valid, provided they are not the objects of an assignment operator.
Because constants cannot take on new values, they must never be placed on the left-hand side
of an assignment operator.
• Integer constants—You can assign integer constants to tags and local variables.
For a tag, its data type must be one of the FactoryLink data types digital, analog, or longana,
and its value must be an integer.
For a local variable, its data type must be one of the local variable types short or long, and
its value must be an integer.
Integer constants can be represented in binary, decimal, octal, or hexadecimal notation:
Binary Strings of 0s and 1s in which the first two characters are either 0b or 0B (to
indicate base-two representation).
Decimal Strings of any digits 0 through 9 in which the first digit is nonzero, or
which begin with 0d or 0D (to indicate base-10 representation).
Octal Strings of any digits 0 through 7 with the first digit a 0.
Hexadecimal Strings containing any combinations of the digits 0 through 9 and/or the
characters A through F or a through f, in which the first characters are 0x or
0X (to indicate base-16 representation).
For example, to define the local variable _length as 28, use any of the following definitions:
Notation Definition
Binary _length = 0b11100
Decimal _length = 28
Octal _length = 034
Hexadecimal _length = 0x1C
Furthermore, some values are too large to be represented as short ANALOG values and
must be represented as LONGANA values. Any integer constant to be represented as a
LONGANA (long integer) data type must be followed by a trailing L.
The following value ranges must be represented as LONGANA values:
For example, if a constant is to be larger than 65535, place a trailing L after the number (for
example, 70000L) to indicate longana representation.
Minimum and maximum longana values can range between -2,147,483,647 and
2,147,483,647.
• Floating-point constants— Use standard floating-point notation or exponential notation to
represent floating-point constants. Floating-point constants are strings of any digits, 0
through 9, that either contain or end in a decimal point.
• Exponential constants—Exponential constants are strings of any digits, 0 through 9, with
an E, E-, e, or e- preceding the exponential portion of the value.
String Constants
A string is a sequence of ASCII characters enclosed in double quotation marks (“ ”). String
constants can be from 0 to 79 characters long, and the closing quotation mark must always
follow the final character in the string. For example, the string “ABC” consists of the
characters A, B, C, in that order. An empty string has no characters and is represented as a
space enclosed in double quotation marks. If an operator enters more than 79 characters as the
value of a message, the task truncates the string to include only the first 79 characters.
You can assign string constants to message-type tags or string-type local variables. Math and
Logic supports operator input in both IML and CML.
In string constants, the single backslash (\) character introduces print-formatting characters.
The Math and Logic parser recognizes the single backslash as a signal that a print-format
character (an escape code) follows. Therefore, the string “\” causes a parsing error during Math
and Logic processing because nothing follows the backslash. If a backslash is required within
the string itself, use a double backslash (\\). The following table lists the meanings of the
print-formatting characters in Math and Logic.
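For example, assuming \n is among the supported print-formatting characters:

```
msg = "Line one\nLine two"     # \n inserts a newline between the two lines
path = "C:\\FLAPP\\procs"      # a double backslash yields a single literal \
```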
Other special ASCII characters, such as nonprinting control characters (for example, the
escape character), are sometimes needed as constants. Use the chr function to refer to these
characters.
To store ASCII data, including nonprinting ASCII characters, as string constants, enter the
ASCII code in a call to the built-in Math and Logic function chr, which uses this format:
chr(xx)
For example:
x = chr(27)  # sets the string variable x to the escape character
x = chr(124) # assigns to x the “vertical bar” symbol (|)
Refer to any table of standard ASCII character codes to determine the proper ASCII value of
any character. The following examples illustrate the use of string constants.
Refer to the system software documentation supplied with the operating system for specialized
information about ASCII characters and the details of string handling, such as values of the
machine’s character set.
Symbolic Constants
A symbolic constant is a name you define to represent a single, known numeric value. You can
define a symbolic constant using either of two formats—with an equal sign or with a space to
separate the name and value. In the example, a symbolic constant PI represents the value
3.14159; thereafter, the constant PI can be used wherever needed in place of the value 3.14159.
Format Example
CONST name value CONST PI 3.14159
CONST name=value CONST PI=3.14159
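Once declared, the constant can be used wherever the value is needed; the tag names in this
sketch are illustrative only:

```
CONST PI 3.14159

PROC tank_area
BEGIN
   area = PI * radius * radius   # cross-sectional area of a circular tank
END
```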
Procedure Declarations
A procedure declaration identifies a procedure that is either defined later in the current
program file or referenced (called) by a procedure in the current program file. Use one of the
following forms to declare a procedure, depending on whether or not the procedure accepts
arguments:
DECLARE PROC name
or
DECLARE PROC name (type [, type ...])
If a procedure is to take arguments, use the second form given above. Only the data type of
each argument is given in a procedure declaration. The data type of each argument is the same
as the original local variable (SHORT, LONG, FLOAT, STRING).
The number of arguments in the declaration, the order the arguments are entered in, and their
data types must match the procedure definition. Procedure declarations are convenient when a
custom-written procedure must refer to another custom-written procedure that has not yet been
encountered, because it is contained within another program file or occurs later in the same
program file. Procedure declarations are not required when the called procedure is defined in
the same file and appears before the current procedure.
The following example shows how procedure declarations affect procedure calls:
PROC A
BEGIN
   .
   CALL B      # Not allowed: Procedure B has not been
   .           # declared and does not appear before
   .           # Procedure A. Procedure B must be
END            # declared first.
PROC B
BEGIN
   .
   CALL A      # Allowed: Procedure A appears
   .           # before Procedure B.
END
Using the same example, if PROC B is declared above the definition of PROC A, then PROC
B can be called from PROC A.
You can also call a procedure or function defined in another program file. If no triggered
procedures exist in the referenced program file, then the Math and Logic Triggers table must
contain an entry for that file.
PROC1.PRG:

DECLARE PROC func1
PROC PROC1
BEGIN
   .
   CALL func1
   .
END

PROC2.PRG:

PROC PROC2
BEGIN
   .
END

PROC func1
BEGIN
   .
END
Constant Declarations
Constants are shared by all procedures and must be declared before any procedure in which
they are used; therefore, place constant declarations above the procedure statement of the first
procedure within the program file the constant is referenced in. Only one constant can be
declared on each line.
Variable Declarations
Variables can be declared in a Math and Logic program as procedure variables or as tags.
Variables declared as procedure variables are used to store values used only by Math and Logic
to perform operations. These values cannot be used by other FactoryLink tasks because they
are not tags in the real-time database.
Although procedure variables are not tags in the real-time database, they are still represented in
system memory and can be saved and opened repeatedly or printed during the running of those
procedures that can open them.
Use the following guidelines to determine whether to declare a variable as a procedure variable
or as a tag:
• If the variable is opened from an external source, declare it as a tag.
• If the variable is a trigger for any procedure, it must be declared as a tag defined as a trigger
tag with an associated trigger tag name.
• If the variable is used only by Math and Logic and must be accessible by all of the
procedures within a program file, declare the variable as a procedure variable with a global
scope by declaring it outside the first procedure in the program file.
• If the variable is used only by Math and Logic and is used only within a particular
procedure, declare the variable as a procedure variable with a local scope by declaring it
inside that procedure.
Variables declared inside a procedure must have different names from variables declared
outside of a procedure. The case of a variable name is significant.
• A variable name cannot begin with a digit (0-9).
• Variables cannot be initialized at declaration.
• Arrays cannot be passed as arguments to a procedure, but individual array tags can.
Local Procedure Variables—Declare local procedure variables immediately after the BEGIN
statement. A local variable declaration must precede all other instructions in a procedure.
Local variables are declared one data type to a line, in statements similar to
DECLARE type name1 [, name2 ...]
Initialized Value—Each time a procedure is called in the interpreted mode, a new instance of
each local variable is created and the value of each variable is initialized to 0. Each time the
executable is run in the compiled mode, the value of each local variable is initialized to 0,
which redefines the variable. When a procedure is completed, variables defined inside the
procedure are destroyed.
When large numbers of local variables are declared in a program file and are meant to be
accessible to all the procedures in that file, performance can be improved by placing the
declarations at the top of the file in which the procedures are stored before the start of the first
procedure. This makes the declarations global to the program file.
A local procedure variable may be declared as a scalar local variable or as a local array. To
declare a scalar local variable, use the form
DECLARE type name
where
type Is one of the following:
SHORT signed short integer
LONG signed long integer
FLOAT double-precision floating-point number
STRING ASCII character string of up to 1023 characters in CML and
1024 in IML.
name Contains only alphabetic characters (A-Z, a-z), digits (0-9), periods (.),
dollar signs ($), at-signs (@), and underscores (_).
Has a maximum length of 30 characters.
Does not have a period as its first or last character.
Does not have a digit as its first character.
In the compiled mode, the periods (.), dollar signs ($), and at-signs (@) are all converted to
underscores (_) in the resulting C source file.
.temp
$temp
@temp
all equal _temp when they are translated into C source code; therefore, avoid variable names
with periods, dollar signs, and at-signs, in case you need to convert to CML in the future.
Separate the names with commas, as shown in the following example, to declare more than one
variable of the same type on the same line:
DECLARE SHORT _s1,_s2,_s3 # loop & array indices/3D array
However, we recommend variables be declared one to a line with comments on the same line
after each declaration briefly describing the use of the variable.
Local Array—A local variable can also be declared as a local array. A local array represents a
set of values of the same type. An array is declared by specifying the size or dimension of the
array after the array type. An array can have a maximum of 16 dimensions. An array with more
than one dimension may be thought of as an array whose tags are also arrays rather than scalar
variables; each additional dimension gives the array another set of indices. Each dimension,
which must be a constant, is enclosed in brackets.
Use one of the following forms to declare a local variable as a local array:
DECLARE SHORT _week[7] # days of the week
or
DECLARE SHORT _cal[12][31][10] # ten-year calendar array
The second form defines a three-dimensional array. The total number of tags in array _cal is
the product of the sizes of its dimensions (12 × 31 × 10 = 3,720).
Local variable arrays function like scalar local variables in many ways except neither an array
nor an array tag can be passed as an argument to a procedure.
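A brief sketch of local array indexing follows; the index origin is assumed here, as it is not
stated in this section:

```
DECLARE SHORT _week[7]    # one entry per day of the week
_week[1] = 1              # mark one day (index origin assumed)
_week[2] = 0              # clear another
```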
Global Procedure Variables—You must declare global variables outside of any procedure
that references them. For Interpreted Math and Logic, declare global procedure variables
before the first procedure definition in a program file. For purposes of validation, declare
global variables in each program file they are used in. After the first invocation, they retain
their values across procedure calls.
Generally, use a variable, constant, or procedure only after its declaration point in a program;
therefore, where variables, constants, and procedures are declared in a Math and Logic
program depends on their intended scope.
# comments
DECLARE . . .          # global variables
DECLARE . . .

# comments
PROC name
BEGIN
DECLARE . . .          # local variables
DECLARE . . .
A = A + 1
A = B + 1
END
Limitations
The 64K barrier under segmented architectures, such as Microsoft Windows, presents a
limitation on the size of some variable data in Math and Logic. Neither global nor local
variable arrays or data items, such as string arrays or message/buffer data, both of which tend
to become large, may exceed 64K. Items declared larger than 64K will, nevertheless, be
allocated only 64K under Microsoft Windows; no compile-time or table-entry checking is
planned to limit the size of declarations because of the multi-platform nature of the current
FactoryLink software system. Note also that the index (sizing) value for a variable array is
limited to 32K (32767); array dimensions must be declared so as not to exceed this limit.
Note these limitations when designing your application. Any global or local variable that must
be larger than 64K should be partitioned logically during design so that no data item as
declared exceeds 64K. If large buffers are needed in an application, declare several linked data
items.
EXPRESSIONS
An expression is a set of operands and operators that resolves to exactly one value. An
expression consists of some combination of the following elements:
• Operators (symbols or keywords that specify the operation to be performed)
• Variables (tag names and procedure variables)
• Constants (symbolic, numeric, and string constants)
• Functions (user-defined and library)
In an expression, parentheses and brackets are balanced and all operators have the correct
number and types of operands. The following examples illustrate well-formed expressions
(assuming the data types of each operand are valid with the operators):
5
X + 3.5
temp < 0 OR temp >= 100
outrange AND (valve1 = 1 OR valve2 = 1)
100*sin(voltage1 - voltage2)
“This is a message to the operator!”
OPERATORS
Operators are symbols or keywords that are used in expressions to specify the type of
operation to be performed. Operators can be either unary or binary. Unary operators operate on
only one operand at a time while binary operators operate on two operands at a time.
Math and Logic employs the following operator groups, arranged in alphabetical order:
• Arithmetic
• Bitwise
• Change-Status
• Grouping
• Logical
• Relational
These operators must be used in a particular sequence to get the desired results from a
calculation. For information about the order in which the operations are performed in an
expression, refer to “Calling Procedures and Functions” on page 350.
Place spaces before and after the keyword MOD to avoid confusion with variable names when
the program parses the formula.
All arithmetic operators except modulo operate on any type of numeric operands, including
floating-point; MOD works only with integers. In the case of tag names, this means any
combination of analog or longana data types. The MOD operation returns the remainder after
dividing x by y. The following examples illustrate arithmetic operations.
Operation Results
17/5 = 3 Returns quotient of 3; remainder is lost
17 MOD 5 = 2 Returns remainder of 2; quotient is ignored
17.0/5 = 3.4 Result is converted to floating-point; returns quotient and remainder
Bitwise Operators
Bitwise operators compare and manipulate the individual bits of their operands. Math and
Logic resolves all operands to integers. The following table illustrates bitwise operators.
Do not enclose the tag name (operand) in parentheses when checking change status. The
construct ?(x) is misinterpreted by Math and Logic in this context and does not produce the
desired result. Always use the construct ? x or (changed x).
Operation                        Results
y = y + ?x                       Increments the value of y by 1 whenever the value of x
                                 changes.
IF (changed my_tag) THEN         Initiates the procedure my_proc whenever the value of
   CALL PROC my_proc             my_tag changes.
ENDIF
Do not perform change-status operations on tags being used as procedure triggers (trigger
tags). This may prevent the corresponding procedure(s) from being triggered at the proper
time. This is because checking the change status of the tag resets the change bit for that tag.
Grouping Operators
The following table illustrates the special grouping operators.
Operator Name Use
() Parentheses Use these to group sub-expressions. Their main purpose is to
override the precedence of operations by forcing the evaluation
of other operations first. Also, use parentheses to enclose
arguments being passed to a function or procedure.
[] Brackets Use these to enclose array indices. Use multiple pairs of brackets
for double- or triple-indexed arrays.
, Commas Use these to separate the arguments (if more than one) being
passed to a function. Also, use commas between types in
procedure declarations and between type-argument name pairs
in procedure definition header statements (proc statements).
Logical Operators
Logical operators test operands for TRUE (nonzero) or FALSE (zero) values and return a
result of 1 (TRUE) or 0 (FALSE). Math and Logic resolves all operands to numeric form. The
following table illustrates logical operators.
Operator  Type    Usage    Operation Name  Definition
NOT       unary   NOT x    logical NOT     If x is zero, the result is 1.
                                           If x is nonzero, the result is 0.
AND       binary  x AND y  logical AND     If x != 0 and y != 0, the result is 1.
                                           If x or y (or both) is 0, the result is 0.
OR        binary  x OR y   logical OR      If x or y (or both) is nonzero, the result is 1.
                                           If both x and y are zero, the result is 0.
Place spaces before and after the keywords AND, NOT, and OR to avoid confusion with
variable names when the program parses the formula. The following table shows the
results of sample logical operations.
Operation  Return Value    Operation  Return Value
NOT 3      0               0 AND 0    0
NOT 0      1               1 AND 2    1
NOT -1     0               0 OR 2     1
0 AND 1    0               0 OR 0     0
Given the short analog variable x = 3, the results of various relational operations done using x
as an operand are shown in the following table.
STATEMENTS
A statement is an instruction that describes mathematical and/or logical operations to be
performed in a specified order. Statements can be one of three types: assignment, control,
procedure call.
Assignment Statements
Assignment statements assign values to Math and Logic procedure variables or tags and can
have either of the following forms, where = and == are the assignment operators. Whether in a
formula or within a procedure, assignment statements are written with the variable to be
changed on the left-hand side of the assignment operator and the term or expression whose
value is to be taken on the right-hand side. Math and Logic computes the expression expr
and assigns the result to the procedure variable or tag.
The following examples use the tags fptemp and itemp to demonstrate the difference:
x = expr    Conditional write. Valid for procedure variables and tags. Does not
            change the value of x unless the value of expr differs from the
            current value of x.
x == expr   Forced write, regardless of the tag's present value. Turns on the
            change-status flags for x whether or not its value actually changed.
            Valid only for tags.
Note: You can end an assignment statement with a semi-colon (;), if desired.
Control Statements
Control statements include instructions that determine when a block of code is to be executed.
End a control statement line only with an end-of-line character, never with a semicolon.
If the test expression of the statement is true, the THEN block is executed. If the test
expression is not true and an optional ELSE clause exists, the ELSE block is executed.
The IF...THEN block is not optional and the THEN verb must immediately follow the test
expression on the same line as IF. Each IF statement must be ended with an ENDIF statement
on a line by itself. The following example illustrates the use of IF...ENDIF control statements:
If the expression test_expr is false, the block is not executed. The block is executed while the
expression test_expr is true. If the expression never becomes false, the loop does not terminate
until the operator or another run-time process forces the procedure to stop running. Ensure the
value of test_expr can become false at some point in the loop’s execution to prevent the
program from hanging.
The keyword ENDWHILE can be substituted for WEND. The following examples illustrate
the use of WHILE...WEND control statements:
# Example 1:
n = 0
WHILE n < 10
a[n] = -1
n = n + 1
WEND
# Example 2:
fib[0] = x
fib[1] = y
n = 2
WHILE n < 100 AND fib[n-1] < 10000
fib[n] = fib[n-2] + fib[n-1]
n = n + 1
ENDWHILE
Indent conditionally executed blocks for readability; program execution is not affected.
Syntax
See “Calling Procedures and Functions” on page 350 for more information on calling
procedures.
Excessive nesting of blocks or procedure calls can cause the operating system to halt the
procedure and return a Stack overflow error. If this occurs, either restructure the procedures to
reduce the number of nesting levels or increase the stack size for Math and Logic.
Directives
Directives are symbols used in statements. Math and Logic recognizes the directives in the
following table.
OPERATOR PRECEDENCE
Most high-level languages use relative operator precedence and associativity to determine the
order in which operations are performed. If one operator has higher precedence than another, the
procedure executes it first. If two operators have the same precedence, the procedure evaluates
them according to their associativity, which is either left to right or right to left and is always
the same for operators of equal precedence.
Because parentheses are operators with very high precedence, they can be used to alter the
evaluation order of other operators in an expression.
The Math and Logic operators are divided into 10 categories in the following table of operator
precedence. The operators within each category have equal precedence.
Include the new variable in a configuration table or in a program file depending on whether the
original variable is a tag or is local to program operations. This will greatly simplify the
debugging process should a problem occur during startup.
Create a new variable of a particular data type for accuracy in computation, such as
floating-point, and initialize the new item to the current value of another variable of a different
data type, such as a longana. This conversion prevents a possible loss of accuracy in upcoming
calculations. Use the new variable to do operations with other variables of the same type as the
new variable.
Data type conversions are not often needed, but they can be useful in particular situations.
Convert variables whenever the result requires the accuracy of the most precise data type
involved or when incompatible operations are taking place between digital and analog values.
Data type conversion can ensure the accuracy of the results of certain calculations with a few
exceptions. The following guidelines indicate when and why data types should be converted.
Data type precision When numeric data types are used in arithmetic operations
(+, -, *, /), the result has the precision of the most accurate data type. If one
of the data types is floating-point, the result is floating-point; otherwise, the
result is analog. Digital and analog data types are internally represented as
signed integers.
Overflow Execution of arithmetic operations can result in an out-of-range value being
placed into an analog or float variable. This results in a condition known as
overflow [loss of most significant bit(s)] in that variable.
To avoid causing overflow, do not use calculations in your application that
divide very small numbers by very large numbers, those that divide very
large numbers by very small numbers, or those that divide a number by
zero.
Before performing computations, ensure the results will be within the stated maximum and
minimum ranges of the system. If you need to use larger values than the analog range can
hold, use floating-point variables as a workaround; situations requiring numbers larger than
the float representations possible on most systems almost never arise.
It is recommended that you test for integer overflow (analog values)
conditions and for floating-point overflow (float values) conditions.
Let message1 be a message tag set to 1e308 (the number 10 raised to the power 308, a large
floating-point constant stated in string form). Assume you set message2, defined the same
way as message1, to a value of -1e-308 (the negative of the very small floating-point
constant 10 raised to the power -308).
Let float1 be a floating-point tag that receives the total of these two message tags. The
statement float1 = message1 + message2, which should add the two values, instead results in
float1 receiving an undefined value represented in the system as 1.#INF (infinity) or
something similar, leading to an unpredictable result. This happens because the system
performs string concatenation (the + operator acts as a concatenation operator on
string operands), which yields 1e3081e-308. The system stops converting at the second
occurrence of e (discarding the -308 portion) and attempts to place into the variable float1 the
out-of-range value 10 raised to the power 3081, which is too large to fit into a
floating-point variable and is not the desired value.
Convert each of these values before adding them to prevent this type of error and avoid
unpredictable system behavior. Create two conversion variables, float1 and float2, and replace
the statement above with the following statements:
Arguments
Arguments are values passed to a procedure for it to use in its computations. Arguments are
input-only parameters.
Declare arguments by placing their types and names in the procedure definition statement, as
shown in the example above. Local and global variable names and tag names can be used. The
data type of the argument is the same as that of the original variable or tag (SHORT, LONG,
FLOAT, STRING).
Math and Logic copies the values used as arguments so the procedure modifies the copies, not
the original values of the variables or tags. For example, if the tag is used as an argument, the
task copies the value of that tag and sends it to the procedure as the argument. The original
value of the tag is not affected. Values modified as arguments cannot be passed back to the
calling procedure.
The declaration section of a procedure definition is optional. Any of the declarations can be
made in this section. Remember the two previously stated rules:
• Any variables declared within the procedure are by definition local variables and cannot be
referenced outside of the procedure.
• Declarations must come before any statements.
Calling Sequence—You must specify a procedure call using one of the following
interchangeable forms:
{CALL} proc_name
{CALL} proc_name(arg1 [, arg2 ...])
Procedure names can be 1 to 16 characters, must conform to the naming rules for variables,
and can be followed by a set of parentheses containing the function’s input parameters
(arguments), if any are required.
Library Functions
Math and Logic has several predefined, specialized procedures, known as library functions.
Expressions can include calls to library functions, which are grouped into five categories:
• Directory/Path Control
• Mathematical
• String Manipulation
• Programming Routines
• Miscellaneous Routines
The functions within each category are described in the following sections. Included in each
function’s description is a sample format of the function and an example of its use. Functions
can vary among different operating systems. Refer to your operating system documentation for
information about specific functions for a particular operating system.
Directory and path control functions are unique to each operating system.
Mathematical
Function  Sample Format  Description                               Example         Result
abs       x = abs(y)     Returns the absolute value of y           x = abs(-5)     x = 5
cos       x = cos(y)     Returns the cosine of y (in radians)      x = cos(.4)     x = 0.921061
exp       x = exp(y)     Returns e raised to the power y           x = exp(4)      x = 54.59815
log       x = log(y)     Returns the log base 10 of y              x = log(100)    x = 2
loge      x = loge(y)    Returns the natural log of y              x = loge(1)     x = 0
pow       x = y pow (z)  Returns y to the zth power                x = 2 pow (3)   x = 8
rnd       x = rnd        Returns a pseudo-random positive          x = rnd         x = 32750
                         integer in the range 0 to 32767                           (one possible result)
sin       x = sin(y)     Returns the sine of y (in radians)        x = sin(1.5)    x = 0.9974951
sqr       x = sqr(y)     Returns the square root of y              x = sqr(144)    x = 12
tan       x = tan(y)     Returns the tangent of y (in radians)     x = tan(.785)   x = 1
String Manipulation
Function  Sample Format              Description                                 Example                      Result
alltrim   string = alltrim(string)   Returns string with leading and             msgvar = alltrim("SMITH")    msgvar = "SMITH"
                                     trailing blanks trimmed
asc       x = asc(string)            Returns the ASCII code for the first        x = asc("TEN")               x = 84 (84 is the
                                     character in string                                                      ASCII code for T)
chr       string = chr(var)          Returns the character equivalent of         msgvar = chr(66)             msgvar = "B" (66 is
                                     an ASCII code                                                            the ASCII code for B)
instr     x = instr(str1, str2)      Returns the offset into str1 of the         x = instr("ABCDE","B")       x = 2
                                     first occurrence of str2
len       x = len(string)            Returns the length of string, not           x = len("MIAMI,FLORIDA")     x = 13
                                     including the terminator
lower     string = lower(string)     Returns string converted to                 msgvar = lower("NOT")        msgvar = "not"
                                     lowercase
ltrim     string = ltrim(string)     Returns string with leading blanks          msgvar = ltrim("SMITH")      msgvar = "SMITH"
                                     trimmed
substr    string = substr(string,    Returns a string of len characters or       msgvar =                     msgvar = "CD"
          offset, len)               fewer, beginning at the offset              substr("ABCDE", 3, 2)
                                     character of string. The offset of the
                                     first character in string is 1.
trim      string = trim(string)      Returns string with trailing blanks         msgvar = trim("SMITH")       msgvar = "SMITH"
                                     trimmed
upper     string = upper(string)     Returns string converted to                 msgvar = upper("not")        msgvar = "NOT"
                                     uppercase
Programming Routines
Syntax Description
EXIT(status) Exits the program and sets the program return status
CALL procname([p1...]) Calls a procedure. The keyword CALL is not required.
See “Procedure Call Statements” on page 341.
INPUT string_prompt, var1, var2... Accepts input from the keyboard. The first field entered
is placed in var1. The first comma entered begins the
second field, which is placed in var2, and so on.
LOCK Locks the database. No other task can access the
database while it is locked. A LOCK statement delimits
a block of code to execute in critical mode, without
interference from other FactoryLink tasks running on
the system. Each LOCK statement must have an
UNLOCK statement.
UNLOCK Unlocks the database, allowing other tasks to access it.
Must be issued for every LOCK. If time-consuming
code is included between LOCK and UNLOCK
statements, performance may be affected, because no
other tasks can access the database while it remains
locked.
PRINT “Row and line:”, row1, line Sends each listed print parameter (variable) to the
display, converting to ASCII, if necessary.
TRACE expr While expr remains true, each line assignment and the
procedure exit point print as they run.
Note: TRACE is not supported in CML.
Each time an interpreted program is executed, Math and Logic first reads, or interprets, the
instructions within the program to determine the actions to perform, and then it executes
those actions.
CML PROCESS
CML contains utilities and libraries that are used along with a third-party ANSI C/C++
compiler to generate ANSI C code from the *.prg files you created. When you have completed
configuring the Variables Table, Triggers Table, and Procedures Table, you have created the
processing procedures for running programs in either IML or CML. The following discusses
the process involved in producing an executable file for the given domain from the .PRG files.
The compile process begins at run time on a development system, when CML:
4 Links the object files to the appropriate libraries to create binary executable (.exe) files
5 Runs the executable file as each program’s associated trigger(s) are set.
Note: After the CML files have been tested and approved for use, the executable
files can be copied to a run-time system that has the CML option enabled. A
compiler is not needed on the run-time system.
Because FactoryLink applications can be configured in both Shared and User domains, CML
creates one executable file for each domain that contains the .PRG files. The file name of each
executable is unique. The filename begins with a C and is followed by the domain name:
• {FLAPP}/SHARED/CML/CSHARED.EXE for the Shared domain
• {FLAPP}/USER/CML/CUSER.EXE for the User domain
Each utility performs a specific role in the compile process as shown in the call sequence in
Figure 14-2. Utilities are started in a specific order:
1 FLRUN calls the MKCML utility. The FLRUN command sets the FactoryLink path, the
application directory path, the user name, and the domain name to the environment variables
and turns off the verbose-level and clean-build parameters.
Note: CTGEN (and GENDEF) run normally as part of FLRUN. If you are
debugging and need to run the items separately, always run CTGEN and
GENDEF before running MKCML. MKCML calls CTGEN, which ensures the
Math and Logic .CT file is up to date.
2 MKCML calls PARSECML to produce .C files from the program (.PRG) files.
3 MKCML then calls CCCML to compile the .C files into object files using an external
compiler. Using an object linker, the object files are linked with library files into binary
executable files.
Figure 14-2 CML Utilities Call Hierarchy
[Diagram: FLRUN invokes MKCML, which drives the compiler and linker]
MKCML
The MKCML utility is a shell that calls the PARSECML and CCCML utilities as needed for
the current application. For each domain, MKCML checks the dependencies between the
configuration tables (named IML.CT for both IML and CML) and the program files. MKCML
performs these tasks:
• Calls CTGEN which compares IML.CT against the database files. If the database files have
a later time/date stamp than IML.CT, CTGEN rebuilds IML.CT to bring it up to date.
• Determines whether the time/date of IML.CT has changed. If so, MKCML reproduces and
recompiles all of the .C files by calling PARSECML and CCCML.
When you redirect the output of MKCML to a file, the messages displayed in the output
appear out of order because of the method used by the operating system to buffer and output
messages. If you do not redirect the output of MKCML, the messages are reported to the
standard output in the correct order.
PARSECML
The PARSECML utility parses the application program files and produces .C files for each
domain. It produces a .C file for each program file if the program Mode field is set to
COMPILED in the Math and Logic Triggers Information table.
This utility also checks the dependencies between the program files and the .C files to
determine if any procedures were updated since the .C files were last produced.
PARSECML has various levels of debugging via the -Vx parameter that can generate more
detailed output or even add debugging statements to the C code.
CCCML
The CCCML utility compiles each .C file produced by PARSECML into an object file using an
external compiler. It then links the object files with the FactoryLink and developer-supplied
libraries into a binary executable. To determine the name of the compiler to use for a specific
operating system, CCCML uses a special file called a makefile named:
{FLINK}/CML/CML.MAK.
Its debugging levels provide minimal information; for example, the exact command line used
to compile and link the code. The CML variables in Table 14-3 provide manipulation of the
CML environment.
The cml.mak file, located in the {FLINK}/CML directory, typically contains the following
information to create the final executable file:
• Name of the C compiler to use for a given operating system
• Command-line switches to be used when compiling
• Name of the operating system’s object linker
• Linker command-line switches
• References to the FactoryLink libraries to be linked
• References to the developer-supplied libraries to be linked
As an aid for advanced users, CML provides a method for editing the cml.mak file. You can
change the compiler and linker options, specify command-line switches, and specify which
object files and libraries to link, providing the flexibility to create a makefile unique to an
application for a given domain.
CML provides two file options: System Makefile and Domain Makefile. Both files for these
options must retain the same name: cml.mak.
The cml.mak file in the System Makefile folder sets the defaults that control the compile
instructions for CML procedures. Any changes made to this file are global; they apply to all
applications on the system, so editing it is not recommended. Any definitions in the
domain-specific makefile in the application directory override the definitions in the master
makefile in the {FLINK}/CML directory.
If the cml.mak file requires editing, expand the Math and Logic System Makefile folder, open
cml.mak (the same file from the {FLINK}/CML directory), edit the file as required, and then
save the changes.
A domain-specific makefile does not exist until you create one. Once created, this makefile is
used for the domain instead of the system makefile.
To create a domain makefile, either copy the cml.mak file from the {FLINK}/CML directory
to the {FLAPP}/{FL DOMAIN}/CML directory for the Shared domain, or right-click the Math
and Logic Domain Makefile folder and click New. A new file that is an exact copy of the system
makefile is created. Edit the file as required, and save the changes. Any definitions in the
domain-specific makefile in the application directory override the definitions in the master
system makefile in the {FLINK}/CML directory.
For example, using any text editor, create and edit an include file with an .INC extension,
containing the following text:
Include files must have an .INC extension so the system can open and save them during
an FLSAVE. Include files are located in the PROCS directory of the current domain and
current application.
For example, the previous include file is saved to path:
FLAPP\FLDOMAIN\PROCS\MYPROG.INC
where
MYPROG.INC
Is a developer-defined file name.
Use the keyword include to declare the include file with any program file to be run in the
compiled mode. The syntax is
include “MYPROG.INC”
The keyword include instructs Math and Logic to read the contents of the include file and
include it as part of the current program file.
Note: Include causes a validation error even though it is evaluated properly at
compile time. An alternative is to use the C include within a cbegin cend block.
For example,
cbegin
#include <time.h>
cend
The following example shows how to use an include file (procedures p1 and testproc with
an include file):
RUNNING CML
CML compiles and runs on both development systems and run-time systems.
Before starting the Run-Time Manager, FLRUN invokes several utilities to compile programs
into a single executable file. The compiled programs will have COMPILED entered in the Mode
field of the Math and Logic Triggers Information table.
The CML development system executables must be transferred from the development system
to the run-time system to run CML on a run-time-only system. Perform the following steps to
run CML on a run-time-only system:
1 Use either of the following methods to transfer the CML executables to the run-time system:
• Use the FLSAVE and FLREST utilities to perform a save and restore of the application
from the development system to the run-time system. This saves and restores the compiled
CML task along with the rest of the application.
• Copy the executables from {FLAPP}/USER/CML or {FLAPP}/SHARED/CML on the
development system to the same path on the run-time system.
2 Start CML. Depending on whether the R flag was set in the System Configuration Information
table (as explained on page 505), do one of the following:
• If the R flag was set, right-click the application name and select Start.
• If the R flag was not set, start CML from the Run-Time Manager (RTMON).
The compile process begins and CML creates the executables. Because the development and
run-time operating systems are the same, CML runs as is.
CML is designed so each of the CML utilities can be started from the command prompt
window. This is useful when only a portion of the compile process needs to be processed.
Table 14-4 identifies the command line parameters used by all CML utilities.
ADVANCED TECHNIQUES
The MLProcHeader.txt file can be edited in any standard text editor, such as Notepad. The
MLProcHeader.txt is created after a user creates the first .PRG file on the server. The edits
appear in all new .PRG files. As additional edits are made to MLProcHeader.txt, the new edits
appear only in .PRG files created after the edit is made. Table 14-5 shows the tokens and
values provided for the edit customizing.
Calling C Code
The Math and Logic program uses three CML-specific keywords to call C code: cfunc, cbegin,
and cend. This functionality is very powerful and flexible, but should be used sparingly
because it makes your system harder to maintain in the future.
Using cfunc
Use the keyword cfunc to declare standard C functions and user-defined C functions as
callable in-line functions within a CML program. In-line C functions allow a CML program to
call a C function directly without opening a C code block. The function must be declared
before it is called.
The C code generated by CML provides prototypes for standard library functions; however, it
does not include prototypes for user-defined C functions. You must provide function
prototypes for all user-defined functions. Calling a function without a prototype may result
in compiler warnings about the missing prototype.
Use only C functions that use the Math and Logic data types of SHORT, LONG, FLOAT, and
STRING with cfunc. Although a C function may use any data type internally, its interface to
Math and Logic must use only these types.
In the following example, testfunc is declared to use four arguments whose values are SHORT,
LONG, FLOAT, and STRING data types and to return a value with a SHORT data type:
The VOID data type is unique to CML. Use VOID when declaring a function not required to
return a value. Do not use VOID in programs designed to run in interpreted mode.
Example 1—uses cfunc to declare the standard C function strcmp( ) for use within a CML
program:
The function strcmp( ) compares two strings and returns a value that indicates their
relationship. In this program, strcmp compares the input string s1 to the string QUIT and is
declared to have a return value of the data type SHORT.
• If the return value equals 0, then s1 is identical to QUIT and the program prints the message
QUITTING.
• If the return value is less than or greater than 0, the program prints nothing.
C functions declared using cfunc have full data conversion wrapped around them, meaning any
data type can be passed to and returned from them.
In this program, strcmp converts the FLOAT value f and the LONG value k to strings,
compares the two strings, and then returns a number (buff) that indicates whether the
comparison was less than, greater than, or equal to zero. This comparison is:
• If f < k, then buff is a number less than 0.
• If f = k, then buff is equal to 0.
• If f > k, then buff is a number greater than 0.
Example 2—uses cfunc to declare the function testfunc which has a return data type of VOID:
In this program, the declared floating-point variable flp is set to 100.0 and this value is passed
to the function testfunc. Note that VOID is entered in place of the data type for the function’s
return value. This is because the program is only passing a value to testfunc and the function is
not required to return a value.
You can use the keywords cbegin and cend to embed C code directly into a CML procedure.
Between these keywords, you can call external library functions and manipulate structures and
pointers Math and Logic does not support. However, you cannot declare C variables inside a
cbegin/cend block that falls within the scope of a procedure. When you declare a C variable,
the declaration block from cbegin to cend must appear outside the procedure, above the
PROC statement. See the declaration of static FILE *Fp=stderr in Example 2.
The cbegin and cend statement must each be on a line by itself with no preceding tabs or
spaces. All lines between these two keywords (the C code block) are passed directly to the .C
file that PARSECML produces for this program.
The following examples show how to use the cbegin and cend keywords.
# Example 1:
PROC TEST(STRING message)
BEGIN
DECLARE STRING buff
IF message="QUIT" THEN
PRINT “FINISHED.\n”
ENDIF
cbegin
sprintf(buff,"The message was %s\n",message);
fprintf(stderr,buff);
cend
END
In this program, the sprintf and fprintf functions, called between cbegin and cend, are passed
directly to the .C file that PARSECML generates for TEST. Note that local variables are within
the scope of the C code block and can be accessed during calls to external functions.
Any C code blocks outside the body of a CML program are collected and moved to the top of
the generated .C file, as shown in Example 2. In this program file, the statement static FILE
*Fp=stderr; is moved to the top of the generated file, just after the line #include "mylib.h".
# Example 2:
cbegin
#include "mylib.h"
cend
PROC TEST(STRING s1)
BEGIN
PRINT “The message is ”,s1
END
cbegin
static FILE *Fp=stderr;
cend
PROC SOMETHING (FLOAT f1)
BEGIN
cbegin
fprintf(Fp,"%6.2g\n",f1);
cend
END
The following example shows how to access tags from within embedded C code blocks. It
increments the values of two analog tags, TAG1 and TAG2[5], by 10. Notice that the variable
Task_id is a predefined global CML variable and does not need to be declared.
PROC example
BEGIN
cbegin
{
TAG tag[2];
ANA value[2];
fl_tagname_to_id(tag,2,"TAG1","TAG2[5]");
fl_read(Task_id,tag,2,value);
value[0] += 10;
value[1] += 10;
fl_write(Task_id,tag,2,value);
}
cend
END
The following example shows how to manipulate message tags within embedded C code
(cbegin/cend code blocks). This example reads from TAG1, adds X to the string, then writes
the result to TAG2.
PROC ADD_X
BEGIN
cbegin
{
#define MAX_LEN 80 /* default maximum message length */
TAG tags[2];
FLMSG tag1, tag2;
char string_buf[MAX_LEN+1]; /* max length plus terminating 0 */
tag1.m_ptr=tag2.m_ptr=string_buf;
tag1.m_max=tag2.m_max=MAX_LEN;
fl_tagname_to_id(tags,2,"TAG1","TAG2");
fl_read(Task_id,&tags[0],1,&tag1);
strcat(string_buf,"X");
tag2.m_len=strlen(string_buf);
fl_write(Task_id,&tags[1],1,&tag2);
}
cend
END
When values are assigned to and read from MESSAGE tags using the normal procedure-file
syntax, the message length is limited to 1023 characters; all message values are truncated at
1023 characters. The function fl_write( ) must be called directly to store values longer than
1023 characters into a MESSAGE tag. The following example shows how to use a C macro in
the procedure msgtest to store a 90-character constant into the MESSAGE tag msgtag:
MSGTEST.PRG
cbegin
#define assign_msg(tagname, value) {\
TAG tag; \
FLMSG msg; \
char buf[] = value; \
fl_tagname_to_id(&tag,1,tagname); \
msg.m_ptr = buf; \
msg.m_len = strlen(buf); \
msg.m_max = strlen(buf)+100; /* leave plenty of room */ \
fl_write(Task_id,&tag,1,&msg); \
}
cend
PROC msgtest
BEGIN
cbegin
assign_msg("msgtag","123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890")
cend
END
where
TAG *tp   Is a pointer to a developer-supplied tag array to be filled in with
          tag IDs.
int num   Is the number of tag names to look up.
char *    Is one or more character pointers to valid tag names.
By using fl_tagname_to_id( ) inside CML C code blocks, developers can look up one or more
tag names and fill in a developer-supplied tag array with the tag ID for each tag name.
Developers can then use these Tag IDs with the FactoryLink PAK functions, and any other
function that operates on the tag ID instead of the tag name, just as the Math and Logic
grammar does.
fl_tagname_to_id( ) is a variable-argument function, like printf. The developer can retrieve as
many valid tag IDs as the tag array has room for.
cbegin
void myfunc()
{
TAG list[2];
fl_tagname_to_id(list, 2, "TIME", "DATE");
.
}
cend
In this example, the function retrieves the tag IDs for the two tags TIME and DATE and places
their IDs into the tag array named list.
[Figure: editor window layout — horizontal and vertical splitters, horizontal and vertical
scrollbars, edit buffer view, and bookmark]
For information about these functions, see the Configuration Explorer Help.
To enable CML processing, the CML task must be added anywhere in the task list. The IML
task does not need to be removed. Both tasks can be enabled and run at the same time. Adding
a task requires displaying an existing task to use the dialog box as a template. The new task is
added to the list below the displayed task. The position of the task in the list does not determine
its rank in the run-time process; the Start Order field determines the run-time rank.
To add a task, double-click an existing task in the list, such as the Interpreted Math and Logic
task. In the System Configuration Task dialog box, click the arrow-asterisk button at the
bottom of the dialog box. Complete all the fields using the information in Table 14-7. Click
Apply to complete the task. Refresh the application tree to display the new task in the list.
For more information about adding and modifying task parameters, see the Configuration
Explorer Help.
Verbose-Level Parameters
When you use a verbose-level parameter, the utility displays messages about its progress as it
performs its part of the compile process. This serves as a debugging aid. Table 14-8 shows the
messages produced by each utility at the verbose level indicated.
The following Math and Logic error messages can display on the Run-Time Manager screen,
depending on the mode (IML or CML). Math and Logic configuration table files are named
IML.CT regardless of the mode used (IML or CML).
Persistence
The Persistence task saves values from an active FactoryLink application at predetermined
times to prevent loss of useful data if FactoryLink shuts down unexpectedly. These saved
values are written to disk and are not affected when FactoryLink shuts down. Then, when you
restart FactoryLink with the warm start command-line option, the Run-Time Manager restores
the real-time database from the values in the disk file.
The memory-resident real-time database is a collection of tag values and it represents the
current state of the application. The values of the tags are lost when the application is shut
down because the real-time database is removed from RAM. When the application is started
again, the real-time database is recreated and its tags are initialized to zero or their default
values, if defined. This can be a problem if FactoryLink unexpectedly shuts down because of
an event, such as a power loss or a faulty process. Useful information can be lost if it has not
been saved. Persistence provides a way of saving the state of an active FactoryLink
application.
Persistence is the ability of a tag to maintain its value over an indefinite period of time.
Non-persistent tags lose their value when the Run-Time Manager exits and shuts down the
real-time database. The Persistence task writes tag values to disk, making these tags persistent.
The file the task creates is called a persistence save file.
At run time, the Persistence task saves the values of the persistent tags to its own internal disk
cache and then writes the data to disk from there. Saving the persistent values to memory first
increases processing speed and ensures all values meant to be saved are saved within the
allotted time.
The RESOLVE program, executed by the FLRUN command, creates a blank persistence save
file the first time it is executed. At startup, the Persistence task loads the persistence save file to
determine which tags in the application are persistent and when the values of those tags are to
be saved. It also loads the PERSIST.CT file to get specific information about the configuration
of the Persistence task itself.
The -w command is already set for the Examples Application and Starter Templates. To add the
-w command to another FactoryLink application, follow these steps:
2 Click the field next to FLRunArgs and add -w. Be sure a space is between the last character in
the command line and the dash in -w. Click OK.
The RESOLVE.EXE program automatically resolves any configuration changes. The FLRUN
command automatically executes this program before it starts the Run-Time Manager.
1. Creates the blank Persistence save file the first time it is run
2. Manages the changes between the Persistence save file and the FactoryLink configuration
files
3. Determines if the Persistence save file is usable and, if not, the program looks for and uses
the Persistence backup file
CONFIGURING PERSISTENCE
Before configuring Persistence, you must first consider which tags in the application are
critical to application startup and must be saved. This subset of tags from your application will
be the ones you mark as persistent. It is not feasible for Persistence to save every tag in an
application, so make sure that Persistence saves only those values that need to be maintained
after the application shuts down. To make use of this save file after FactoryLink has shut down,
you must restart FactoryLink with the -w argument.
2 Configure the Persistence task itself by completing the Persistence Save Information table.
3 Add the R flag to the Persistence task in the System Configuration table.
Marking the tags tells the Persistence task which tags to save, but the task does not run until
you configure its table and set the R flag. (See page 505 for information to configure a task in
the System Configuration table.)
Configure Persistence for individual tags using the Tag Editor, which appears when you:
• Define a new tag in Configuration Explorer, or
• Press Ctrl+T in a Tag field for a previously defined tag.
Use Domain Settings – Saves the value of this persistent tag according to the option chosen
in the Domain List. The Saving and Restoring options are disabled when this option is
chosen. Clear Use Domain Settings to enable the Saving and Restoring options for this tag
specifically.
Save – Indicates when the value of this persistent tag is saved. Click one, or both, of the
following:
On Time—Saves the value of the tag on a timed trigger.
On Exception—Saves the value of the tag whenever its value changes.
When Restoring – Indicates how to set this tag’s change-status bits when its value is restored
in the real-time database. Click one of the following:
Set Change Status ON—Restores the tag with its change-status bits set to 1 after a warm
start.
Set Change Status OFF—Restores the tag with its change-status bits set to 0 after a warm
start. This is the default.
For example, you may have several Math & Logic procedures triggered by digital tags, but
the application controls when these tags are force-written to a 1 (value = 1; change-status
bits = 1). If you perform a warm start with Change Bits ON, all of the digital tags’
change-status bits are written to 1 and all of your IML procedures run at once.
No Options Selected – This tag is not marked as persistent.
2 Choose when you want to save this tag’s value by clicking On Time, On Exception, or both.
3 Choose how you want to restore this tag’s value from the persistence save file to the real-time
database at application startup by clicking Set Change Status ON or Set Change Status OFF.
4 Click OK.
Domain persistence means that all persistent tags in a domain are saved the same way and
restored the same way. This is in contrast to the individual method just described where each
tag can be marked differently for saving and restoring. Configure persistence for a domain
using both the Tag Editor and the Domain List.
Note: The options selected in the Persistence and Change Bits fields apply only
to those tags that have Use Domain Settings selected in their tag definition. These
tags follow the domain configuration in the Domain List.
1 Right-click your application and click View > View Domain List.
2 In the row containing the domain to be made persistent, click the Persistence arrow and select
the method to save the tags’ values:
None – The tags are not persistent.
Timed – Saves the values of the tags on a timed trigger.
Except – Saves the values of the tags whenever their values change.
Both – Saves the values of the tags on a timed trigger and whenever their values change.
3 For the same domain, click the Change Bits arrow and select how to set the tags’ change-status
bits when their values are restored to the real-time database:
ON – Restores the tags with their change-status bits set to 1 after a warm start.
OFF – Restores the tags with their change-status bits set to 0 after a warm start.
4 For the tags you want to mark as persistent, open the Tag Editor for that tag and select Use
Domain Settings in the Persistence section. The tags will be saved in the Persistence save file
and restored to the real-time database per the selections in the Domain List.
Note: If no tags are marked as Use Domain Settings, the selections in the
Domain List are ignored.
Accessing
Field Descriptions
In this example, when the value of persist_trig changes to 1, it triggers the Persistence task to
save the values of all tags in the application configured as persistent by time. The number of
buffers set aside for the internal cache is 16 with 512 bytes per buffer. A disk cache is a way to
compensate for the slowness of the disk drive in comparison to RAM (memory). The
Persistence task’s cache process speeds up computer operations by keeping data in memory.
Rather than writing each piece of data to be saved to the hard disk, the task writes the data to its
internal disk cache (reserved memory area). When the cache process has time, it writes the
saved data to the hard disk.
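The write-behind behavior described above can be sketched as a small cache model. This is an illustrative model only, not the Persistence task's actual implementation; the buffer count (16) and buffer size (512 bytes) follow the example configuration, and all function names are ours:

```c
#include <string.h>

#define NUM_BUFFERS 16
#define BUF_SIZE    512

/* One cache slot: holds data waiting to be flushed to disk. */
struct buffer {
    char data[BUF_SIZE];
    int  len;               /* bytes used; 0 = free */
};

static struct buffer cache[NUM_BUFFERS];
static char disk[NUM_BUFFERS * BUF_SIZE];   /* stand-in for the save file */
static int  disk_len = 0;

/* Store a value in the first free cache buffer.
 * Returns 1 on success, 0 if no free buffer (caller must flush first). */
int cache_write(const char *data, int len)
{
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (cache[i].len == 0 && len <= BUF_SIZE) {
            memcpy(cache[i].data, data, len);
            cache[i].len = len;
            return 1;
        }
    }
    return 0;
}

/* Flush all used buffers to "disk" and mark them free again.
 * Returns the number of buffers flushed. */
int cache_flush(void)
{
    int flushed = 0;
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (cache[i].len > 0) {
            memcpy(disk + disk_len, cache[i].data, cache[i].len);
            disk_len += cache[i].len;
            cache[i].len = 0;
            flushed++;
        }
    }
    return flushed;
}
```

Writes land in fast memory first and reach the disk only when the flush runs, which is why saving to the cache "increases processing speed" as the text describes.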
The maximum length for message tags during persistent saves is 2048 bytes. When
persist_backup is triggered, Persistence copies the current Persistence save file to a backup
file.
After the application starts, the values of the TASKSTART_? tags are 1, so Persistence saves a
1 as their last known value. At shutdown, because Persistence stops first, Persistence does not
see the change in value of the TASKSTART_? tags from 1 to 0 (zero), so the saved values
remain as 1. On a warm start of the application, the TASKSTART_? tags for all tasks running
at shutdown are restored to 1 and, therefore, their tasks will start. It is important to note that
these same tasks will be started regardless of their “R” flag settings in the SYS.CT file and
that there are no manual starts or terminations.
Because Persistence starts first, it sees the application starting and, therefore, sees the values of
the TASKSTART_? tags at 0. Because Persistence stops last, it saves a 0 as the last known
value of the TASKSTART_? tags if a termination happens during the startup process. On a
warm start of the application, none of the tasks start because all of the TASKSTART_? tags
have a last known value of 0.
The shutdown order is more significant than the startup order if the tags are saved on change.
In general, specify the Persistence task to shutdown first (and therefore, start last) so the saved
values in the Persistence save file reflect the last known running state of the application at
shutdown. Then, the warm start restores it to that state, which is the purpose of the Persistence
task.
However, the digital tags RTMCMD and RTMCMD_U cannot be made persistent: when the
value of either tag is set to 1, the FactoryLink system shuts down, so a warm start that
restored a saved value of 1 would immediately shut the system down again.
Note that the R (Run) flag for each task in the System Configuration Information table
supersedes the value of the digital start trigger associated with a task.
The following examples show the relationship between the R flag in the System Configuration
Information table and the restored value of a digital tag.
Example 1
The R flag is NOT set for task A, and the digital start trigger associated with task A is defined
as persistent by Exception (always updated) with Force Change Status ON if:
• Task A is running when the system is shut down, then the value of the task’s digital start
trigger is 1. When a warm start is performed, the system restarts task A because the value of
the digital start trigger is restored to 1.
Example 2
The R flag IS set for task A and the digital start trigger associated with task A is defined to be
persistent by Exception (always updated) with Force Change Status ON if:
• Task A is running when the system is shut down, then the value of the task’s digital start
trigger is 1. When a warm start is performed, the system restarts task A because the task’s
Run flag is set.
• Task A is not running when the system is shut down, then the value of the task's digital start
trigger is 0. When a warm start is performed, the system still restarts task A because, even
though the value of the digital start trigger is restored to 0, the task’s Run flag is set and the
Run flag supersedes the restored value of the digital start trigger.
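Both examples reduce to a single predicate: on a warm start, a task starts if its Run flag is set or if its restored digital start trigger is 1. A minimal sketch (the function name is ours, not a FactoryLink API):

```c
/* Returns 1 if the task should start on a warm start.
 * r_flag: the R (Run) flag from the System Configuration Information table.
 * restored_trigger: the persisted value of the task's digital start trigger.
 * The R flag supersedes the restored trigger value. */
int task_starts_on_warm_start(int r_flag, int restored_trigger)
{
    return r_flag || restored_trigger;
}
```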
where
FLAPP Translated application environment variable.
FLNAME Translated application environment variable.
FLDOMAIN Translated domain environment variable.
The name of each Persistence save file is {FLUSER}.PRS where FLUSER is the translated
environment variable for the domain user name. The Persistence save file contains the saved
values for that domain user.
For example, in Windows, where the FLRUN.BAT file sets the Shared FLUSER environment
variable to SHAREUSR, but the User domain FLUSER environment variable remains at the
default setup in the AUTOEXEC.BAT file, the Shared persist file is named SHAREUSR.PRS
and the User persist file is named FLUSER1.PRS.
The Persistence backup files are in the same place and have the same name, except they have
the extension .BAK.
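The naming rule can be sketched as a small helper that derives both file names from the FLUSER environment variable. The helper is illustrative only and is not part of FactoryLink:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the Persistence save file name "<FLUSER>.PRS" (or ".BAK" for
 * the backup file) into out. Returns 1 on success, 0 if FLUSER is
 * not set in the environment. */
int persist_file_name(char *out, size_t outsz, int backup)
{
    const char *user = getenv("FLUSER");
    if (user == NULL)
        return 0;
    snprintf(out, outsz, "%s.%s", user, backup ? "BAK" : "PRS");
    return 1;
}
```

With FLUSER set to SHAREUSR, as in the Windows example above, the helper yields SHAREUSR.PRS and SHAREUSR.BAK.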
TAGPERWHEN (meaning Tag is saved when) is the text equivalent to the buttons on the Tag
Editor when defining a tag or using CTRL+T to view the tag definition. The possible values are:
• NONE – tag is not persistent
• Left blank – same as NONE
• DOMAIN – save based on domain Persistence definition as configured in the Domain
configuration table.
• TIMED – save on timed trigger
• EXCEPT – save on change
• BOTH – save on timed trigger or change
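The text values above map mechanically onto a setting; a sketch of that mapping, with names of our own choosing:

```c
#include <string.h>

enum persist_when { P_NONE, P_DOMAIN, P_TIMED, P_EXCEPT, P_BOTH };

/* Map a TAGPERWHEN field value to a persistence setting.
 * A blank field is treated the same as NONE, per the list above. */
enum persist_when parse_tagperwhen(const char *s)
{
    if (s == NULL || s[0] == '\0')   return P_NONE;
    if (strcmp(s, "NONE")   == 0)    return P_NONE;
    if (strcmp(s, "DOMAIN") == 0)    return P_DOMAIN;
    if (strcmp(s, "TIMED")  == 0)    return P_TIMED;
    if (strcmp(s, "EXCEPT") == 0)    return P_EXCEPT;
    if (strcmp(s, "BOTH")   == 0)    return P_BOTH;
    return P_NONE;   /* unknown values fall back to NONE */
}
```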
The procedure updates the table changing all instances of a specific entry in the TAGPERWHEN
field at one time to a new value.
Prior to executing the instructions below, we recommend you make a backup of the application
using the FLSAVE utility or some other backup utility. At least make a backup copy of the
OBJECT.CDB and OBJECT.MDX files so if anything goes wrong during the procedure, the
backup can be restored with no damage done to the application. The general syntax can be
modified to update the Persistence setting for any group of tags from the current settings to any
valid new setting as a group, by varying the literal values in the first and second instance of
tagperwhen = '????'.
1 Type the program name at a prompt for all systems except MS Windows. For MS Windows,
run the program from Start > Run.
2 At the BH_SQL prompt, type SQL > connect flapp and press Enter.
“flapp” is the actual path to the FLAPP directory as defined in the environment variable.
3 Type SQL > update object set tagperwhen = 'NONE' where tagperwhen = '' and press Enter.
Use the following command if you have a large number of tags configured to be saved as
defined for the domain configuration and you want to change the setting for all of these tags to
be saved individually when they change value or on exception.
SQL > update object set tagperwhen = 'EXCEPT' where tagperwhen = 'DOMAIN'
4 After all desired changes are made, type QUIT.
ERROR MESSAGES
PowerNet
PowerNet allows you to share Real-time Database tags among FactoryLink applications
running on the same or different workstations or nodes.
One FactoryLink application can act as a client and/or server. This application can serve other
FactoryLink applications by providing needed information. As a client, the application can use
information provided by other FactoryLink applications.
The configuration tables used to configure the PowerNet task are completed in the Shared
domain. Currently, the only network protocol supported by PowerNet is TCP/IP.
Note: PowerNet is a legacy FactoryLink task for sharing data between nodes
on a network. The current method to use is the Virtual Real-Time Network and
Redundancy (VRN/VRR) task. VRN/VRR has all of the functionality of
PowerNet and is more flexible. PowerNet is still supported, but if you are starting
a new application, it is recommended that you use VRN/VRR instead. For more
information, see “Virtual Real-Time Network and Redundancy” on page 547.
OPERATING PRINCIPLES
This section describes what initiates data transfer between a server application and a client
application.
As each client attaches to a server application, all of the data shared between the server and
client applications is transmitted from the server to the client. This ensures the client contains
up-to-date data immediately upon starting up. This also occurs at reconnection in the event a
connection is lost between the client and server.
Data transfer from the server to the client is configured by one of the following two methods:
• Exception Data – Transmits data to the client only when data has changed in the server
application.
• Polled Data – Transmits data to the client on a fixed interval, a dynamic interval, or at any
event the client application generates.
Domains that include mailbox tags should be READONLY, not READWRITE. If a domain
containing a mailbox tag is READWRITE, PowerNet will ignore the mailbox tag in the
write-back connection. The task will still run, but PowerNet will print a warning message.
exdomain:tagname{[sub1] {,[sub2],...}}
Valid tag names conform to the following syntax:
[<node>:]<name>[<dims>]
<node> Optional node name used by PowerNet only. The name can be a variable;
maximum 8 characters.
<name> Name of tag. Maximum 32 characters, including node, dimensions,
extensions, and separators (: and .).
If using Scaling and Deadbanding, the character count is reduced to 25.
If using PowerNet, the character count is reduced to 23.
If using Scaling and Deadbanding with PowerNet, the character count is reduced
to 16.
If unsure whether PowerNet or Scaling and Deadbanding will be used, it is
recommended you define tag names using only 16 characters.
<dims> Array dimensions. Maximum 16 characters.
<ext> FactoryLink-created extension added when Scaling and Deadbanding is applied
to a tag. Maximum 6 characters. Additional characters reduce the maximum
length of the tag name by up to 7, because of the period delimiter.
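The length budget above can be captured in a small helper; the function is ours, for illustration only:

```c
/* Maximum tag-name length given which features are in use.
 * The base limit is 32 characters; Scaling and Deadbanding and
 * PowerNet each reserve part of that budget, per the table above. */
int max_tag_name_len(int powernet, int scaling)
{
    if (powernet && scaling) return 16;
    if (powernet)            return 23;
    if (scaling)             return 25;
    return 32;
}
```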
NETWORK SOFTWARE
Perform the following steps to configure network software:
1 Design your network topology. Include the following information for each node (or a TCP/IP
host).
• Node name
• IP address
• Client/server connections
For example, the following drawing shows a network with three nodes: nodea, nodeb and
nodec.
[Figure: network with three nodes — nodea running App1, nodeb running App2, and nodec]
2 Add the names of all nodes in the network that share FactoryLink data in the TCP/IP hosts file.
Do this for each client and server node running FactoryLink.
For example, the following host file identifies the nodes in the example.
192.195.178.1 nodea NODEA
192.195.179.1 nodeb NODEB
192.195.183.1 nodec NODEC
Two server nodes are named nodea and nodeb and one client node named nodec.
[Figure: nodea, nodeb, and nodec, each with a copy of the host file above; server
applications FL1 and FL2 run on the server nodes]
3 Define the environment variable FLHOST for your operating system that corresponds to the
local host name. This environment variable must be set for each application instance.
Alternatively, the application can be passed a program argument at run time to define the local
host name. You define this argument in the System Configuration Information table, discussed
on page 412.
4 Add the name(s) assigned to each PowerNet service running on the node in the TCP/IP
services file. Do this for each client and server node running FactoryLink. Refer to the
appropriate vendor documentation for more information on configuring services.
One services file is associated with each node. The services file should contain the names of
PowerNet services for each FactoryLink application running on the node. The default service
name is POWERNET, which can be used if only one application on a node is running
PowerNet.
Each instance of PowerNet must use a unique service name if more than one application
running PowerNet exists on a node. The service name the local PowerNet uses, specified using
the -s command line parameter, must match the service name (*Remote Service Name or
TAG) specified in the External Domain Definition table for that domain on the remote node.
[Figure: two PowerNet applications running on one node, each started with its own -s
service name; the node's services file lists both POWERNET 5096/tcp powernet and
POWERNT2 5097/tcp powernt2]
where
SERVICE Is the uppercase specification of the name assigned to the service running
on the node. This name can be from 1 to 8 characters and must be unique
for each service defined for a single node. The default is POWERNET.
port_num Is a unique number assigned to reference the port number to TCP/IP. This
number must be unique for each service defined for a single node. The
recommended port number is 5096; however, any number can be used as
long as it is consistent across all services files.
alias Is the lowercase specification of the name assigned to the service running
on the node.
For example, the following services file identifies the services for the nodes in the previous
diagram:
POWERNET 5096/tcp powernet
POWERNT2 5097/tcp powernt2
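Each services-file entry has the shape NAME port/protocol alias. A minimal parser for that line format — an illustration of the entry layout, not FactoryLink code; buffer sizes are assumptions of this sketch:

```c
#include <stdio.h>
#include <string.h>

/* Parse one services-file line of the form "POWERNET 5096/tcp powernet".
 * name and alias must each hold at least 16 characters.
 * Returns 1 on success, 0 if the line does not match the format. */
int parse_service_line(const char *line, char *name, int *port, char *alias)
{
    char proto[16];
    if (sscanf(line, "%15s %d/%15s %15s", name, port, proto, alias) != 4)
        return 0;
    return 1;
}
```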
Accessing
Networking > External Domain Definition > External Domain Definition
Field Definitions
Accessing
System > System Configuration > System Configuration Information
Copy and paste the last row of the System Configuration Information table into the empty row
just below it if a row for PowerNet does not exist.
Field Definitions
PROGRAM ARGUMENTS
All of the arguments are optional, but if -h is not used to set the host name, the FLHOST
environment variable must be used instead to specify the local host name. Arguments are case
sensitive.
Argument Description
–b<#> Set transfer buffer size, where # is the buffer size in kilobytes
(Default = 512)
The buffer size is the maximum packet size PowerNet sends across the
network. PowerNet only sends the data that it has to up to the buffer size.
If all the data cannot fit in one packet, PowerNet breaks the data into
several packets. If you increase the buffer size on the Windows platform,
remember to increase the TCP/IP buffer size also.
–c<#> Set connect time, where # is the number of seconds between
reconnect attempts (Default = 10)
The -c option is the number of seconds between connect tries. PowerNet
continuously tries to connect to the server. This option could be useful if
a server will not be running for a long period of time and you do not want
PowerNet to continue trying to connect.
–d<#> Set debug verbose level. (# = 0 to 4)
–h <H> Set host name. (H = Host name) The local host name may be specified
either with the -h parameter or in the environment variable FLHOST. It
must be specified in one of the two places, or PowerNet will not run. If
both are specified, the command line (-h) overrides the FLHOST
variable.
–i<#> Set timeout for bind wait. (Time to wait for network connection.) (# =
number of milliseconds to sleep after binding every 20 tags)
This parameter is important in applications where PowerNet is binding a
large amount of tags and is using too much CPU time. This parameter
slows the process down but allows other tasks to function normally
during PowerNet binding.
–l Enables logging of debug information to a log file.
–m<#> Set timeout for message transfer, where # is the amount of time in
seconds allowed for a data transfer (Default = 1)
The -m option is the amount of time allowed for a transfer of data to
occur. When PowerNet is used with a modem, this option can be set to
allow for the data to be completely transferred.
TROUBLESHOOTING
The PowerNet task can display and log information during run time. Customer Support uses
this information to determine and resolve the user’s problems. The amount and the content of
the information being logged is controlled by the command line options. PowerNet was
implemented in three layers, PowerNet, NSI class, and NSI. NSI stands for network services
interface and is the TCP/IP specific layer. The NSI class layer is an intermediate layer between
NSI and PowerNet. Each layer has its own topics and levels (NSI class does not have topics). If
you have a PowerNet problem and are working with Customer Support, they will tell you
which categories and levels to use to produce the most helpful log file.
The following are examples of the command line debug options for PowerNet:
-Bn Display the messages related to the topic B up to level n
-Cn Display the messages related to the topic C up to level n
-BNn Display the NSI messages related to the topic B up to level n
-CNn Display the NSI messages related to the topic C up to level n
-dn Display the messages from all topics up to the level n
-on Display the messages from NSI class up to the level n
-xn Display the messages from NSI up to the level n
-l Log the displayed messages to the file
-v Insert a timestamp in the beginning of each message
-wm Wrap the log file every m messages (see more detailed descriptions below)
-yp Perform closing and reopening of the log file once per p messages
Note: The options are case-sensitive; -D3 is not the same as -d3. The use of -dx
supports the old-style logging messages, where all categories are displayed at
level x.
The following is a description of the topics and levels that are currently in use:
B - Binding
1 - errors
2 - warnings
3 - bind request/response was sent/received from NODE
4 - bind logic
5 - contents of bind request/response
6 - bind logic (more detail)
D - Data/tags
1 - errors
2 - warnings
3 - type and count tags in packet
4 - value of tag
5 - data conversion
6 - more details of data conversion
R - Receiving
1 - errors
2 - warnings
3 - a packet is received from NODE
4 - packet header
5 - processing the received packet, calling receive
S - Sending
1 - errors
2 - warnings
3 - a packet is sent to NODE
4 - packet header
5 - checking for time-outs, waiting for pkts
6 - send logic
7 - send logic
8 - more details
9 - detailed information about room left in the packet
10 - mailboxes
M - Miscellaneous
1 - errors
2 - warnings
In addition to the topics and levels, the messages conform to the following format:
• All error messages begin with the word ERROR.
• All warning messages begin with the word WARNING.
• Information and debug messages do not have a specific format.
-wm Wrap the log file every m messages
When this command line argument is specified along with -l argument, the logging mechanism
keeps the size of the log file under m messages. The <name>.log file always contains no more
than the m most recent messages; when message (m + 1) arrives, the <name>.log file is
renamed to <name>.111 and a new <name>.log file is created. In addition, the very first m
messages are stored forever in file <name>.000. So, in common cases, three files are always on
a disk: <name>.000, <name>.111, and <name>.log.
The default is to let the log file grow indefinitely with the extension .log except on the
Windows platform where the default is a maximum number of messages equal to 65535, and
the maximum number of messages may not be set higher than 65535. On other platforms, the
maximum number of messages may be set higher than 65535.
The -w option is particularly useful when tracking a PowerNet problem that takes a long time
to reproduce. This option prevents the log file from consuming all available disk space.
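The rotation just described can be modeled in memory. This is a sketch of the bookkeeping only (the real mechanism renames files on disk); messages are represented by sequence numbers, and the small wrap value M is a hypothetical stand-in for the -w argument:

```c
#define M 3   /* wrap every M messages (a hypothetical -w value) */

/* In-memory stand-ins for the three files on disk. */
static int file000[M], n000 = 0;   /* first M messages, kept forever */
static int file111[M], n111 = 0;   /* previous full .log, renamed    */
static int filelog[M], nlog = 0;   /* current .log                   */

/* Log one message (represented by an int sequence number). */
void log_message(int msg)
{
    if (n000 < M)
        file000[n000++] = msg;       /* the very first M messages */
    if (nlog == M) {                 /* .log is full: rotate */
        for (int i = 0; i < M; i++)
            file111[i] = filelog[i]; /* "rename" .log -> .111 */
        n111 = M;
        nlog = 0;                    /* start a fresh .log */
    }
    filelog[nlog++] = msg;
}
```

After seven messages with M = 3, file000 holds messages 1-3, file111 holds 4-6, and filelog holds message 7 — mirroring the three files described above.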
Example
-hFLHP2 -b1024
PowerSQL
The PowerSQL (Structured Query Language) task works in conjunction with the historian task
to allow an application to access data in an external relational database through a result
window. PowerSQL offers the following features:
• Allows data in an external relational database to be manipulated from within FactoryLink
• Allows an application to send and retrieve data to and from external database tables,
including those created outside FactoryLink
• Allows you to define tags referenced by PowerSQL in arrays as well as individually
• Allows you to execute SQL statements generated in Math & Logic
• Allows you to execute database-stored procedures for database servers that support them
• Allows you to process SQL statements that are entered in a FactoryLink message tag
OPERATING PRINCIPLES
PowerSQL is a historian-client task that communicates with historian through mailbox tags to
send and receive historical information stored in an external database using SQL.
PowerSQL retrieves data in a relational database by generating an SQL SELECT from the data
specified in a FactoryLink configuration table and placing it in a temporary table called a result
table. The FactoryLink application can view and modify the retrieved data in the result table
through a result window. A result window is a sliding window that maps data columns in a
relational database table to FactoryLink tags. The result window views selected portions of the
result table.
For example, if a graphic screen is used to display the result window, it can display as many
rows of data from the result table as there are tags in the two-dimensional tag array. If there are
more rows in the result table than in the result window, the operator can scroll through the
result table and see each row of the table in the result window.
PowerSQL can read from and write to an entire array of tags in one operation. The
relationships among the external database, the result table, the result window, the real-time
database, and the graphic display are displayed in Figure 17-1.
[Figure 17-1: Relationships among the external database table "car", the result table, the
FactoryLink real-time database, the result window, and the graphic display. The logical
expression Col1 > 19910126075959 AND Col1 < 19910126170001 AND Col2 = 1 AND
Col3 > 14 AND Col3 < 22 selects rows from the external table into the result table; the
result window maps the selected rows (car number and color) to tags shown on the graphic
display.]
An internal buffer stores the rows of the result table in RAM. An external buffer stores the
overflow of rows from the internal buffer on disk. This allows the operator to scroll back up
through the result table. The buffers are shown in Figure 17-2.
Figure 17-2 Buffers Used in PowerSQL
In this example, as the operator scrolls through the result table, the rows of the result table flow
into the internal buffer to be stored in memory. Because, in this case, the result table consists of
25 rows and the internal buffer can store only 20 rows, when the internal buffer is full, the
excess rows in the internal buffer flow into the external buffer to be stored on disk.
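The internal/external buffer relationship amounts to a spill-over: rows fill the in-memory buffer first, and the excess goes to disk. A sketch with the example sizes (20-row internal buffer, 25-row result table); illustrative only, not PowerSQL code:

```c
#define INTERNAL_ROWS 20

static int internal_buf[INTERNAL_ROWS], n_internal = 0;
static int external_buf[100],           n_external = 0;  /* disk overflow */

/* Store one result-table row (represented by its row number).
 * Rows beyond the internal buffer's capacity spill to the external
 * (disk) buffer so the operator can still scroll back to them. */
void store_row(int row)
{
    if (n_internal < INTERNAL_ROWS)
        internal_buf[n_internal++] = row;
    else
        external_buf[n_external++] = row;
}
```

With a 25-row result table, the first 20 rows stay in memory and the last 5 overflow to disk, matching the figure.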
LOGICAL EXPRESSIONS
You use logical expressions to specify the data in a relational database to view or modify. For
the purposes of PowerSQL, a logical expression is a command containing a standard SQL
WHERE clause. To make a logical expression flexible at run time, use the name of a message
tag whose value is a WHERE clause. If viewing all data from a column in a relational database
table, you do not need to specify a logical expression.
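As a sketch of this idea, the following hypothetical helper composes a SELECT statement from a WHERE clause read out of a message tag. The function, table, and column names are invented for the illustration and are not part of PowerSQL's API:

```python
# Sketch: composing a SELECT statement whose WHERE clause comes from the
# value of a message tag, as described above. All names here are invented
# for the illustration; this is not FactoryLink's actual API.

def build_select(table, columns, where_tag_value=None):
    """Return a SELECT statement; an empty tag value means view all rows."""
    stmt = "SELECT {} FROM {}".format(", ".join(columns), table)
    if where_tag_value:                      # no WHERE clause -> all data
        stmt += " WHERE " + where_tag_value
    return stmt

# The operator (or another task) writes a WHERE clause into the message tag:
where_clause = "TRANDATE > '20040126075959' AND CONVEYOR = 1"
print(build_select("CAR", ["TRANDATE", "CONVEYOR", "CARNUM", "COLOR"],
                   where_clause))
```

Because the tag value is substituted at run time, the same control record can serve many different queries.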
You must know how to write a standard SQL statement to configure PowerSQL. For
information about writing SQL statements, refer to any quick reference SQL guide or the user
manual for the relational database in use.
To select data from a database table, a logical expression works in conjunction with the table’s
column name and logical operators to form an SQL WHERE clause. The WHERE clause
specifies which rows in a database table to place in the result table.
For example, suppose you want to answer the question: What were the colors of cars 15 through 18 on conveyor 1 painted between 8:00 a.m. and 5:00 p.m. on January 26, 2004? The following WHERE clause selects those rows:

TRANDATE > '20040126075959' AND TRANDATE < '20040126170001' AND CONVEYOR = 1 AND
CARNUM > 14 AND CARNUM < 19

From this WHERE clause, the relational database places the following values in a result table.

20040126110000 1 15 black
20040126113000 1 16 black
20040126120000 1 17 white
20040126123000 1 18 white
If the view size of the result window is 2, the result window writes the values of the tags in two
rows to the real-time database. When the data reaches the real-time database, other
FactoryLink tasks can read it and write to it, and an operator can view the data on a graphics
screen.
Accessing
Field Descriptions
Select Trigger
Tag that triggers a select operation. A select operation selects specific data from a relational database table based on information specified in the PowerSQL Information table and places it in a result table for you to view or manipulate. (Valid entry: tag name; valid data types: digital, analog, longana, float, message)

Update Trigger
Tag that triggers an update operation. PowerSQL performs a positional update if you defined a Select Trigger. When the value of this tag changes during a positional update, PowerSQL reads the values in the active row (the value of the Current Row tag) and updates the values in that row of the result table and external database.
For a positional update to work, the database table must have a unique index. This can be configured in Database Schema Creation or executed externally to FactoryLink when the database table is created.

Completion Trigger
Tag whose change-status flag is set by PowerSQL when any operation undertaken by this control record is completed. (Valid entry: tag name; valid data types: digital, analog, longana, float, message)
This example assumes the following information is specified in the PowerSQL Control table.

Move Trigger: MVRTAG1
Position Trigger: MVATAG1
Historian Mailbox: Histmbx
Database.Table Name: REFINERY.TANK
Current Row Tag: CROWTAG1
Data Array Size (Rows): 12
Internal Cache Size (Rows): 100
Completion Trigger: COMTRIG1
Completion Status: STATTAG1

(The remaining Control table fields, such as Control Name, Select Trigger, Update Trigger, Delete Trigger, Insert Trigger/Auto Create, and Dynamic SQL Tag, are not shown in this example.)
In this example, PowerSQL sends a request for select, update, delete, move, and position
operations to the historian through the historian mailbox tag HISTMBX. PowerSQL asks for
data from the table TANK in the relational database REFINERY.
PowerSQL updates the value of the current row tag CROWTAG1 when PowerSQL performs a
select, move, or position operation. The Completion Status tag STATTAG1 contains status
information about the operation just completed. The change-status flag for the digital tag
COMTRIG1 is set when an operation for this result window is complete.
Because the Insert Trigger/Auto Create field indicates NO, if PowerSQL does not find a row
matching the update operation, it does not create a new row and the update is not performed.
Because the Data Array Size is 12, the result window can display 12 rows of data from the
result table at a time. The internal cache can hold 100 rows of data from the result table.
Accessing
Data Logging > Power SQL > “your control name” > Power SQL Information
Field Descriptions
This example uses the following information in the PowerSQL Information table.

Tag Name: TANKID[3]; Logical Operator: (none); Column: TANK.TANKID; Maximum Character Size: 33; Data Logical Expression: ='BLUE001'
Tag Name: OUTLET[3]; Logical Operator: AND; Column: TANK.OUTLET; Maximum Character Size: 0; Data Logical Expression: >=:OUTLETVAL
Because the Select Trigger tag SELTAG1 (defined in the Control table) is digital in this
example, the historian returns the following values to PowerSQL when the change-status
flag for SELTAG1 is set:
• Values where the column named TANKID equals BLUE001
• Values where the column named OUTLET is greater than or equal to the value of the tag OUTLETVAL
PowerSQL writes these values to the tags contained in the tag arrays TANKID[3] and
OUTLET[3]. These values are then displayed in a result window.
Each Tag Name tag displays one column of values in a result window. Because an array has
been defined for TANKID and OUTLET, the values in the columns for which the logical
expression is true are displayed in the result window.
END PKGCSP03;
/
CREATE OR REPLACE PACKAGE BODY PKGCSP03 AS
  CURSOR c1 (key IN INTEGER) IS
    SELECT fltime, flsec FROM trendtbl WHERE trendkey >= key;
  PROCEDURE updsel_trendtbl (
    inrecs  IN     INTEGER,
    key     IN     INTEGER,
    newtime IN     STRING,
    addsec  IN     INTEGER,
    outtime OUT    char_array,
    outsec  OUT    int_array,
    outrecs IN OUT INTEGER) IS
  BEGIN
Note: Siemens is not responsible for any changes in Oracle. Refer to the Oracle manual
for any changes.
Argument Description
-C<#> or -c<#> In earlier versions of PowerSQL, a COMMIT
statement was performed after every database access
(except SELECT statements) that was executed as a
nondynamic SQL statement. The execution of
dynamic SQL statements, especially for stored
procedures, can result in complex database operations
that include many steps. In such cases, PowerSQL
cannot determine whether a COMMIT or a ROLLBACK is
more appropriate, which has the potential to COMMIT
unwanted database updates when an execution fails.
Proper procedure dictates that COMMIT/ROLLBACK
logic be programmed into the stored procedures
themselves. However, because changing this behavior
might affect existing applications, the task has been
modified to accept a program argument that controls
the COMMIT logic. (# = 0, 1, or 2)
-c1 results in no COMMITs for dynamic SQL
statements. The nondynamic SQL operations
(traditional insert, delete, and update statements) are
followed by a COMMIT.
-c2 (the default action) is to COMMIT logic exactly as
in the earlier version, so no modifications are required
to existing applications. However, it is strongly
recommended that the applications be modified to use
the -c1 argument and that all stored procedures be
updated to include all necessary and appropriate
COMMIT/ROLLBACK logic.
-c0 results in no COMMITs for any statements
executed, except for a final COMMIT upon task
shutdown. Use of -c0 is not recommended, since
failure to COMMIT nondynamic SQL statements could
have an adverse effect on the database server, but the
configuration is included for completeness. Because a
COMMIT can easily be executed through the SQL tag,
this option lets users take responsibility for COMMIT
logic away from PowerSQL and make it part of the
application design and control.
-L or –l Enables logging of errors to the log file. By default,
PowerSQL does not log errors.
-N or -n Notifies on the completion of a SELECT trigger that
the query resulted in an End of Fetch condition.
Notification will only occur if the rows returned from
the query do not equal the rows defined in the Data
Array Size field. By default, PowerSQL does not report
an End of Fetch condition for a SELECT until a move
operation advances the current row past the last row of
the query.
-S<#> or –s<#> Sets the maximum number of SQL statements that
PowerSQL will have active at one time. The default is
160. For very large applications, this program switch
may have to be adjusted if the database server is unable
to allocate a resource to open a new SQL cursor. (# = 4
to 60)
-W<#> or -w<#> Sets the maximum timeout in seconds for PowerSQL to
wait for a response from the historian task. The default
is 30 seconds. (# = 5 to 36000)
-V1 or -v1 Writes the SQL statements generated by PowerSQL to
the log file. PowerSQL must have logging enabled for
this program switch to work. The default is to not write
the SQL statements to the log file.
Print Spooler
The FactoryLink Print Spooler allows you to direct data to printers or other devices with
parallel interfaces and also to disk files. The Print Spooler task also provides other features:
• File name spooling (loads file when print device is available, minimizing required memory)
• Management of printing and scheduling functions
Print Spooler receives output from other FactoryLink tasks, such as Alarm Supervisor or File
Manager, and sends this output to a printer or disk file.
With Print Spooler, you can define up to five devices to receive output from other FactoryLink
tasks. To send files to one of these devices, FactoryLink tasks reference the corresponding
device number in a configuration table.
Accessing
Reports > Print Spooler > Print Spooler Information
Field Descriptions
1 Verify printer is configured in printer manager with a capture port assignment even if it is
hooked up to LPT1.
If the printer is connected directly to the computer, define the printer as generic text only by
using the Add Printer wizard to create a new printer and assign it to an open LPT port. When
the wizard asks for the manufacturer, select Generic and then select Generic / Text Only. Once
the printer is set up, configure the Print Spooler. Under the Device column, enter the LPT port
you configured the printer on, such as LPT1. Under the Use OS Print Services column, select NO.
Then the page break will not be added between alarms. If you want to restore the printing of
one alarm per page you can add a line feed (\0A) to the File Separator Sequence column in the
spooler task.
If the printer is a network printer, then you have to map the network printer to an unused local
printer port like lpt2. From a command prompt type:
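The command itself is not reproduced in this guide. On Windows, a network printer is typically mapped to a port with net use; the server and printer names below are placeholders:

```shell
net use lpt2 \\printserver\printername /persistent:yes
```

The /persistent:yes switch is what restores the mapping each time the user connects.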
This maps the printer to lpt2, and the yes at the end sets the printer to be restored every time the
user connects. In Print Spooler, set up lpt2 and set Use OS print services to NO. Then the
printer will not print out a page until it is filled with alarms.
If the printer driver does not have the functionality to control whether or not a form feed is
done, removing the printer from the print manager should make Spooler print directly to the
port. There should not be a form feed, in this case.
Note that for page printers such as HP LaserJets, a whole page has to be filled, or a form feed
must be encountered, before a page comes out of the printer. The printer's form-feed indicator
lights, but nothing prints until enough alarms are generated to force it to start a second page.
PROGRAM ARGUMENTS
Argument Description
–D<#> Sets debug log level for Run-Time Manager output
window. (# = 1 to 9)
–L Enables logging of debug information to a log file.
–M Use OS print services; send print requests (except for
alarm logs and binary files) to the system print queue
instead of directly to the printer.
Programmable Counters
The Programmable Counters task provides totalizers and event delays, such as defining a
trigger to unlock a door and then specifying a delay before the door locks again. A
programmable counter is similar to a counter in programmable controllers.
OPERATING PRINCIPLES
A programmable counter is a group of tags with values that work together to perform a count.
Outputs from programmable counters can be used to provide input to or trigger Math & Logic
programs or other FactoryLink tasks. There is no limit, except the amount of memory, to the
number of programmable counters that can be defined.
Each programmable counter is made up of some or all of the following tags and analog and
digital values.
Tags
• Enable – triggers counting activity.
• Up Clock – initiates the count upward.
• Down Clock – initiates the count downward.
• Clear – resets the counted value to the starting point.
• Positive Output – contains the value 1 (on) when the counting limit has been reached.
• Negative Output – contains the value 0 (off) when the counting limit has been reached.
• Current Value – indicates the current value of the count.
Example One
In this example, counting is configured to count bottles (20 per case). The Preset Value (start
count) is 0 and the Terminal Value (count limit) for the number of bottles per case is 20. The
Increment Value of 1 represents one bottle. When counting is triggered, each bottle counted
increases the current count of bottles (starting with 0 in the case) by 1 until the case contains 20
bottles (until the Current Value reaches the Terminal Value of 20).
When the case contains 20 bottles (when the Current Value reaches the Terminal Value), the
Counter task indicates the case is full by force-writing a 1 to the Positive Output tag and
force-writing a 0 to the Negative Output tag. At this point, if AutoClear = YES, the Current
Value tag is reset to 0 (the Preset Value) and the count can begin again. If AutoClear = NO, the
Current Value tag remains at 20 (the Terminal Value) until another task writes a 1 to the Clear
tag, indicating the count can begin again. The count does not continue past 20 (the Terminal
Value). Each time the bottle count reaches 20 (the Terminal Value), the Counter task again
force-writes a 1 and a 0 to the Positive and Negative Output tags. When AutoClear = YES or
when the Clear tag is triggered, the bottle count is reset to 0 (the Preset Value), ready for a
repeat of the counting process.
Example Two
You can set up another task, such as EDI or Math & Logic, to react to a deviation, such as a
defective bottle, during the count by adjusting the count. To adjust the count, that task writes a
1 to the Down Clock tag to cause the value of the Current Value tag to move toward the Preset
Value by the Increment Value.
For example, during counting, if a defective bottle is counted but not packed in the case, the
EDI or Math & Logic task subtracts that bottle from the total count by writing a 1 to the Down
Clock tag to cause the Current Value to move toward the Preset Value (0 in this example) by
the Increment Value (1 in this example).
After six bottles have been counted and packed in the case, the Counter task counts the seventh
bottle. But the seventh bottle is defective, so it is not packed in the case. Therefore, the EDI or
Math & Logic task subtracts that bottle from the total count by writing a 1 to the Down Clock
tag. This causes the Current Value to move from 7 down to 6.
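The counting behavior described in these two examples can be sketched as a small class. This is an illustrative model of the tag interactions (Preset, Terminal, Increment, AutoClear, Up/Down Clock, Clear, and the output tags), not the Counters task itself:

```python
# Sketch of one programmable counter, modeling the tag behavior described
# above. The class and method names are invented for this illustration.

class ProgrammableCounter:
    def __init__(self, preset=0, terminal=20, increment=1, autoclear=True):
        self.preset, self.terminal, self.increment = preset, terminal, increment
        self.autoclear = autoclear
        self.current = preset          # Current Value tag
        self.positive = 0              # Positive Output tag
        self.negative = 1              # Negative Output tag

    def up_clock(self):
        """A bottle is counted: move Current Value toward the Terminal Value."""
        if self.current < self.terminal:
            self.current += self.increment
        if self.current >= self.terminal:
            self.positive, self.negative = 1, 0   # case is full
            if self.autoclear:
                self.current = self.preset        # restart the count

    def down_clock(self):
        """A defective bottle: move Current Value back toward the Preset Value."""
        if self.current > self.preset:
            self.current -= self.increment

    def clear(self):
        """Another task writes 1 to the Clear tag: reset to the Preset Value."""
        self.current = self.preset

counter = ProgrammableCounter(autoclear=False)
for _ in range(7):
    counter.up_clock()        # seven bottles counted
counter.down_clock()          # the seventh is defective
print(counter.current)        # 6
```

With AutoClear enabled, reaching the Terminal Value force-writes the output tags and resets the count automatically, matching Example One.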
Accessing
Timers > Programmable Counters > Programmable Counters Information
Field Descriptions
Example
The counter in the first line on the table, along with a Math & Logic procedure that saves the
count and resets the counter, counts the number of bottles packed per minute. Since the Enable
field is left blank, counting is always enabled. Each time a bottle is packed, a 1 is written to the
Up Clock tag btl_upclock. This triggers the Counters task to increase the Current Value by the
Increment Value.
In the second line on the table, the counter is used to create a one-minute delay of an event,
such as bottle capping. Since the Enable field is left blank, counting is always enabled. When
the value of sec1 becomes 1, the Counters task increases the Current Value min_delay by 1.
The task continues to increase this value once each second until the Current Value matches the
Terminal Value of 60. At this time, counting stops and the Counters task writes a 1 to the
Positive Output tag min_end, indicating the end of the one-minute delay. Other FactoryLink
tasks can monitor the min_end tag to trigger another operation and then write a 1 to the Clear
tag min_start to reset the count.
PROGRAM ARGUMENTS
Argument Description
–t The Programmable Counters task establishes
parameters for the initiation, performance, and
conclusion of counting activity. With the -t program
argument in the System Configuration for the Counters
task, negative output, positive output, and current value
are initialized. Positive output is set to 0. Negative
output is set to 1. With no program argument, those
tags remain at their default/persistent values.
ERROR MESSAGES
Report Generator
If you want to report on this real-time data, you can write the data to a report file as it is
received using Report Generator. The Report Generator is a flexible reporting tool that lets you
define simple custom reports. The data included on the report can be generated as a disk file, a
printed report, or exchanged with other programs that accept ASCII files.
Some typical uses for generating report data include the following:
• Predicting potential problems based on data patterns
• Reporting on productivity of shifts
• Generating hardcopy reports for management or specific agencies
Note: Depending upon the types of reports you need, you might also want to
consider using the predefined Historical Reports that are available with
FactoryLink or use a reporting tool to create reports from data stored in a
relational database.
Reporting Methodology
Memory-resident real-time data is logged to a report file for generating a report. This task
completes the following steps to generate a report:
1 The real-time database receives data from various sources, such as a remote device, user input,
or computation results from a FactoryLink task.
2 When a report is triggered, Report Generator reads the current values of the tags included on
the report and maps them to object names. Object names are used in defining the report format
or template file.
3 Report Generator checks the report format file to determine placement of text and objects in
the report file. The format file contains keywords that trigger when the report starts, ends, and
writes data. Each keyword represents a section. When the trigger executes, the associated
section of the format file is processed and written to a temporary working file.
4 The temporary file resides on disk, not in memory, to protect against loss of data. For example,
if FactoryLink shuts down before Report Generator has created the report archive file, the
temporary report file still exists on disk.
5 When the report is completed, the information in the temporary disk-based working file is sent
to either a permanent file on disk, a printer, or a communication port.
[Figure: reporting data flow. Triggered events defined in Configuration Explorer and tags in the real-time database feed a format file generated by the user (.BEGIN "Log pump temperature", .REPEAT "Temperature = (temp)", .END "all done reporting"). Processed sections are written to a temporary working file, which produces the hardcopy or ASCII report file.]
Keywords
Keywords are used in the format file to trigger an action. The associated section of the format
file is processed and written to a temporary working file when the trigger executes. Three
keywords are used in format files:
• .BEGIN
• .REPEAT
• .END
Keyword lines begin with a period (.), followed by a keyword and a line terminator such as LF
(Line Feed) or CR, LF (Carriage Return, Line Feed) sequence.
Comment lines can also be included in the format file by starting the line with a period and
following it with any text that does not represent a keyword. Text displayed in a comment line
is not included in the report.
The following sample format file illustrates the three sections:

.BEGIN
Get pump temperature
.REPEAT
Temperature [temp]
.END
all done reporting
Each format file consists of one or more of these sections. The Begin, Repeat, and End sections
can include object names that are substituted with tag values when the report is generated. The
only required section is the End section, which generates a snapshot report if used alone.
Data specified in the format file is collected from the real-time database and placed into the
report. Placement of real-time database values is determined by the following:
• Location of its object name in the format file
• Format specifiers
Object names act as a placeholder for data and are linked to tags in the real-time database. The
value of the tag replaces the object name during report generation. Object names are enclosed
in braces {} or brackets [ ] within the format file.
• Use braces { } for data that may vary in length. This places the data relative to other text in
that line because the position may change based on the tag value. A typical use may be to
locate data within a sentence.
• Use brackets [ ] for fixed position data. The value of the tag associated with the object name
is displayed in the report exactly where the object name is displayed. The starting bracket,
which is the anchor for the data, is typically used to format data in columns.
The identifier (braces or brackets) is not displayed in the generated report file. Use an escape
sequence identified in “Escape Sequences” on page 490 if you want a brace or bracket to be
displayed in the report.
You can use object names in the begin, repeat, and end sections.
Format Specifiers
Format specifiers allow you to define a variable where a literal is expected. Format specifiers
consist of two types of objects:
• Ordinary characters, which are copied literally to the output stream
• Variable specifiers, which indicate the format in which variable information is displayed
The following table provides a list of the specifiers typically used with Report Generator.
Trigger Actions
When a .BEGIN, .REPEAT, or .END trigger executes, the associated section of the format file
is processed and written to a temporary working file.
The following figure illustrates what occurs when each keyword is triggered. This sample
report format is used to generate an historical data log. A temporary working file is opened
when the report is triggered. This file remains open until the end section is triggered. The
report header is written to the file when the begin section is triggered. In this example, Get
pump temperature is written at the top of the report.
When the repeat section is triggered, the values of the tags mapped to the object names
included in this section are read from the real-time database and written to the file. In this
example, the value of the tag containing the pump temperature is mapped to the object name
temp and is written to the report.
The generated report in this example reads:

Get pump temperature
Temperature = 10
Temperature = 14
all done reporting
Any literal text included in this section is also written to the file. In this example, the literal text
Temperature = is written to the report in front of the tag value. The event that triggers the repeat
can be a periodic sampling, a specific time, or an event driven trigger like a part meeting a
photo-eye in a conveyor system.
You can trigger the repeat section any number of times before ending the report. In this
example, the pump temperature is written to the report twice. The first time its value is 10; the
next time its value is 14.
The literal text in this section is written to the temporary working file when the end section is
triggered; then, the entire report is sent to its configured destination. This can be a disk-based
file, a printer, or across the network to another node. The temporary working file is deleted.
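The begin/repeat/end flow above can be sketched as follows. The format-file text and tag values are taken from this example, but the substitution logic is a simplified illustration, not the Report Generator's implementation:

```python
# Simplified sketch of the begin/repeat/end report flow described above.
# Object names in brackets are replaced with current tag values; the tag
# values and format-file text come from the pump-temperature example.
import re

format_file = {
    "begin":  "Get pump temperature",
    "repeat": "Temperature = [temp]",
    "end":    "all done reporting",
}

def substitute(line, tags):
    """Replace [object] placeholders with the current tag values."""
    return re.sub(r"\[(\w+)\]", lambda m: str(tags[m.group(1)]), line)

report = [format_file["begin"]]            # .BEGIN trigger: write the header
for temp in (10, 14):                      # .REPEAT trigger fires twice
    report.append(substitute(format_file["repeat"], {"temp": temp}))
report.append(format_file["end"])          # .END trigger: finish the report
print("\n".join(report))
```

In the real task the assembled lines accumulate in the temporary working file on disk and are sent to the configured destination when the end section is triggered.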
Another common format for reports is a snapshot report, as shown in the following figure. The
purpose of this type of report is to gather information and to generate a printed report by
triggering a single event. This is done by specifying only an end trigger. The end event causes
all information in the format file to be sent instantly to the printer.
.END
Recording end of shift
If the data reported on in the repeat section is generated from an external device connected via
a device interface task, in order to maintain data integrity, it may be necessary to coordinate
operations between these two tasks. You do not want to log data to the report unless you are
certain the data has been returned successfully from the device interface task. Likewise, you do
not want to sample more data from the external device before the previous data is logged
through the report generator.
Another application that may require coordination is if you want to read data from a relational
database using Browser and write it to the printer. You can do this by using the complete
trigger on the Browser to trigger the Report Generator repeat trigger and then have the Report
Generator complete trigger generate a move to the next database row in the Browser task. This
coordination takes place until all rows are fetched. Use Math & Logic to verify not only
completion but successful completion of both tasks.
Escape Sequences
Escape sequences send instructions to the printer, such as form and line feeds. These sequences
can also be used to change operating modes of printers to compressed versus standard print.
The following table lists and explains commonly used escape sequences.
Escape Sequence Description
\b Send backspace (0x08).
\f Send form feed (0x0C).
\n Send line feed (0x0A).
\r Send carriage return (0x0D).
\t Send horizontal tab (0x09).
\XX Send 0xXX, where XX is any two uppercase hex digits (for example, \9F).
\Z Send Z, where Z is any character not previously listed.
\. Send a period (.). (Necessary to start a Report File line with a period.)
\[ Send [.
\{ Send {.
\\ Send a single \.
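A sketch of how such escape sequences might be expanded before being sent to the printer follows; this is an illustration only, not the task's actual parser:

```python
# Sketch: expanding the escape sequences from the table above into the
# raw bytes sent to the printer. Illustration only, not the task's parser.

def expand_escapes(text):
    out, i = [], 0
    simple = {"b": "\x08", "f": "\x0C", "n": "\x0A", "r": "\x0D", "t": "\x09"}
    while i < len(text):
        ch = text[i]
        if ch == "\\" and i + 1 < len(text):
            nxt = text[i + 1]
            pair = text[i + 1:i + 3]
            if nxt in simple:                       # \b \f \n \r \t
                out.append(simple[nxt]); i += 2
            elif len(pair) == 2 and pair.isupper() and all(
                    c in "0123456789ABCDEF" for c in pair):
                out.append(chr(int(pair, 16))); i += 3   # \XX hex sequence
            else:
                out.append(nxt); i += 2             # \Z \. \[ \{ \\ sent literally
        else:
            out.append(ch); i += 1
    return "".join(out)

print(repr(expand_escapes(r"Line1\n\f")))
```

A line such as `\f` at the end of a format file would thus expand to a single form-feed byte, as in the Figure 20-1 example.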
You must define a unique format file for every report. Format files are stored by default in the
FLAPP/rpt directory as filename.fmt where filename is the name you assign to the format file.
Figure 20-1 Sample Report Format File
Comment
Section
Begin
Section
Repeat
Section
End
Section
Escape
sequence
2 Right-click Report Generator Formats and select New Report Format File.
3 Type a unique report format file name and click Enter. FactoryLink automatically adds the .fmt
extension.
4 (Optional) Enter a comment in the comment section starting with the first line of the format file
table. The comment section extends to the first line starting with a keyword. Each line in the
comment section cannot exceed 512 characters. It is not necessary to precede comments in this
section with a period. A comment can reference the format file or the report you are
configuring. In Figure 20-1, the sample report contains one line of comments.
5 (Optional) Define a begin section by entering the keyword .BEGIN followed by text you want
as the header for the report. Enter the name of the report, column names, and any other fixed
data in this section. You can also include object names, such as date and time. It is not required
to include a begin section.
6 (Optional) Define a repeat section by entering the keyword .REPEAT followed by any text and
names of objects you want included in the report. You can include both text and object names
in this section. The contents of this section can be repeated in the report any number of times.
This section is repeated each time the repeat trigger is activated.
In Figure 20-1, the Repeat section includes data to be read from tags and inserted into a fixed
location in the report. The Repeat section is completed when the last object name is read and
sent to the temporary file at the end of the first shift.
7 Define an End section by entering the keyword .END followed by text and object names you
want included at the end of the report.
At a minimum, a report format file must include an End section. In Figure 20-1, the example
format file has a one-line End section that includes literal text and an object name that inserts
the date the report is completed.
8 (Optional) Enter an escape sequence to specify instructions to the printer. If you do not enter an
escape sequence, the report prints exactly as defined in this format file. In Figure 20-1, the
example contains an escape sequence instructing the printer to form feed the paper when
printing completes.
9 After you finish formatting the report, save the file and then close it.
Accessing
Field Descriptions
Accessing
Reports > Report Generator > Report Generator Control > "your report name" > Report Generator
Information
Field Descriptions
Add an entry for each object you defined in the format file. You can configure up to 2,048
entries in this table. Each row in this table represents an entry. If you do not have any object
names defined in the format file, define a placeholder record using any valid FactoryLink tag
and any object name as the placeholder. You are limited to 256 characters when formatting a
line in the report.
If the tag is digital, you can specify a character string to be displayed depending on the digital
value. To do this, enter the desired character strings for 0 and 1 in the following format:
open|closed
where:
open specifies the message open to print when the tag is 1
closed specifies the message closed to print when the tag is 0
In the example, the date and time objects do not have associated formats specified, so the data
displays as defined in the tag definition. The object pressure is formatted to 10 total characters
with 4 significant digits after the decimal point. The object pump_stat indicates the current
status of the pump draining the tank (open or closed).
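A sketch of the two formatting behaviors described above, with invented tag values, follows. The %10.4f specifier produces 10 total characters with 4 digits after the decimal point, and the open|closed pair resolves a digital value to a string:

```python
# Sketch of the two formatting behaviors described above: a printf-style
# specifier for an analog object and an open|closed string pair for a
# digital tag. The tag values here are invented for the illustration.

def format_digital(value, spec):
    """spec is 'one|zero': the first string prints when the tag is 1."""
    when_one, when_zero = spec.split("|")
    return when_one if value else when_zero

pressure = 73.25
line = "Pressure: %10.4f  Pump: %s" % (pressure, format_digital(1, "open|closed"))
print(line)   # Pressure:    73.2500  Pump: open
```

Objects without a format specified simply display as defined in the tag definition, as the date and time objects do in the example.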
ERROR MESSAGES
Run-Time Manager
The Run-Time Manager (also known as Run Manager) allows you to start, monitor, and stop
individual FactoryLink server tasks. This chapter describes how to configure and use the
Run-Time Manager.
O PERATING P RINCIPLES
The Run-Time Manager task starts, stops, and monitors all other FactoryLink tasks according
to the settings configured in the System Configuration table.
At system startup, Run-Time Manager reads the System Configuration table to determine
which tasks to start, their start order, priority, debug status, and program arguments. There are
several different ways to invoke Run-Time Manager, which will be discussed later in this
chapter.
During run time, Run-Time Manager monitors the status of the other tasks in the system and
updates system tags in the real-time database with that status information. A set of standard
Run-Time Manager mimics is included in the FactoryLink Examples Application and Starter
Application templates. These mimics may be edited, replaced, or used as-is. They are intended
for administrators and maintenance people. The developer should consider limiting access to
these screens for security purposes, since the screens can be used to shut down individual tasks
or the entire application.
At system shutdown, Run-Time Manager reads the System Configuration table to determine
the order in which to stop tasks and performs an orderly shutdown. It is important that you
perform an orderly shutdown rather than just turning off the computer where FactoryLink is
running.
FactoryLink comes with a pre-configured Run-Time Manager mimic as shown in Figure 21-1.
Using Client Builder, you can customize or replace the Run-Time Manager mimic according to
your needs.
There are Run-Time Manager mimics for both Shared and User Domains. The mimics have the
following major components, not all of which appear on the first mimic:
• The task buttons on the left start and stop the tasks as well as indicate one of the following
states:
Gray: Inactive
Blue: Starting
Green: Running
Yellow: Stopping
Red: Error has occurred
• The Last Message displays the most recent error or system message.
• Program Directory is the Server’s FLINK environment variable.
• Application Directory is the Server's FLAPP environment variable.
• Application Name is the Shared or User FLNAME environment variable.
• Application User is the Shared or User FLUSER environment variable.
• The Application Name button on the second Shared Run-Time Manager mimic controls
the Application Start/Stop.
The default values establish the following parameters for the run-time FactoryLink system:
• Tasks that start up when the application is running
• Tasks allowed to run as foreground tasks
• Order in which tasks start up and shut down
• Priority of each task
• Domain associated with each task
Accessing
System > System Configuration > System Configuration Information (open in form view)
Note: Even though you can open the System Configuration Information table
in the Grid view, it is recommended that you open this table in the Form view.
Field Descriptions
Table 21-1 Task Name, Description, and Executable File Location Default Values
You may edit this table to identify an external program to the system. Although you can make
changes in some fields, it is better not to change any fields except Flags and Display Status.
ADDING NEW TASKS
The Run-Time Manager uses pre-defined tags and array dimensions to automatically display
items on screen. These tag names and array dimensions appear in the System Configuration
Information table for each task displayed on the Run-Time Manager screen. If you add another
task to the Run-Time Manager screen, use the next available array dimension.
The next available array dimension is determined by a task’s position in the display sequence.
If you view the System Configuration table associated with the Shared domain, you will find
all tag names associated with Run-Time Manager end with a dimension of [0], Persistence tag
names end with a dimension of [1], Scale tags end in [2], and so on. For example, the complete
entry in the Task Status field for the Run-Time Manager task is TASKSTATUS_S[0]. The
bracketed number represents an array dimension.
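The dimensioning scheme can be illustrated with a short sketch. It assigns each task the next available dimension based on its position in the display sequence, following the Shared-domain example above (the function name and task list are illustrative, not part of FactoryLink):

```python
def task_status_tags(tasks):
    """Assign each task's Task Status tag the next available array
    dimension, based on its position in the display sequence
    (Shared domain tags end with the _S suffix)."""
    return {task: f"TASKSTATUS_S[{i}]" for i, task in enumerate(tasks)}

tags = task_status_tags(["Run-Time Manager", "Persistence", "Scale"])
print(tags["Run-Time Manager"])  # TASKSTATUS_S[0]
print(tags["Scale"])             # TASKSTATUS_S[2]
```

A task added at the end of this list would simply receive the next dimension, which is what step 3 of the procedure below asks you to do by hand.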
If the information in a field is longer than the number of characters that fit in the allotted space
on the screen, part of the entry will scroll out of sight. To see this information, press the → and
← keys. The field scrolls to display the text.
Complete the following steps to add a task to the Run-Time Manager screen:
1 Choose the appropriate domain for the task. Then, open the System Configuration Information
table.
2 Starting under the last row of information in the System Configuration Information table, add
the required information about the new task to each field. Use the Copy and Paste functions to
copy duplicate information, such as tag names, from the previous row.
3 Review the previous task in the task list to determine its dimension. Assign the new task’s tag
names the next available array dimension. If you used the Copy and Paste functions to copy
information from an existing row, modify each array dimension to be the correct value.
The next time you run the application, the new task and its related information is displayed at
the bottom of the specified domain’s Run-Time Manager screen.
You can configure FactoryLink to automatically create an error message .LOG file at startup.
The log file path is built from the following environment variables:
FLAPP is the environment variable for the application directory.
FLNAME is the environment variable for the application name.
FLDOMAIN is the environment variable for the domain.
FLUSER is the environment variable for the user name.
FactoryLink creates the log file name using the following format:
XXMMDDYY.LOG
where
XX indicates the FactoryLink task.
MM is the month of the year (1-12).
DD is the day of the month (1-31).
YY is the year since 1900 (00-99).
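The naming convention can be sketched in Python. The two-letter task code and the zero-padding of each field are assumptions based on the XXMMDDYY pattern above:

```python
from datetime import date

def log_file_name(task_code: str, d: date) -> str:
    """Build a FactoryLink-style log file name: XXMMDDYY.LOG, where XX
    identifies the task, MM/DD are the month and day, and YY is the
    two-digit year."""
    return f"{task_code.upper()}{d.month:02d}{d.day:02d}{d.year % 100:02d}.LOG"

# Hypothetical task code "RM" for Run-Time Manager:
print(log_file_name("rm", date(2004, 11, 17)))  # RM111704.LOG
```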
If you specified during installation you wanted to install the Old version of FLLAN, FLLAN’s
.LOG files will have the following path and file names: FLAPP\NET\FLLANSND.LOG and
FLAPP\NET\FLLANRCV.LOG
If you configure FactoryLink to create a log file for a task, FactoryLink logs a message in its
log file whenever that task generates an error. The messages in the log file are more descriptive
than those that appear on the Run-Time Manager screen.
For debugging purposes, configure FactoryLink to create log files automatically at startup.
Complete the following steps to configure FactoryLink to do this:
2 Ensure the correct domain is selected. Then locate the corresponding entry for the task in the
Task Name field.
5 Enter -L, -V# (not case-sensitive) where # is 2, 3, or 4 in the Program Arguments field. The
greater the number, the more information you receive. (Enter -L, -D# where # is any number
from 2 to 22 for the File Manager and FLLAN tasks.)
7 Repeat steps 2 through 6 for each task that needs a log file.
The log files continue to grow at run time as messages are logged to them until the operator
shuts down and restarts each task. Then, FactoryLink creates new log files. However,
FactoryLink creates only one log file per task per day no matter how many times each task is
shut down and re-started in one day.
Delete old log files periodically to prevent log files from using too much disk space. You can
configure the File Manager task to delete files for you. For example, File Manager can delete
them each day at midnight or when the specified files reach a specified size.
Caution: To avoid errors, do not delete the current log file if the task is still running.
When you are finished debugging your application, you can remove the Program Arguments
from the System Configuration Information table to eliminate the creation of extra files.
Using the FactoryLink Application Manager: On the server, open Start > Program Files >
FactoryLink > FactoryLink Application Manager. This method would typically be used by
developers or administrators.
Using the FactoryLink Configuration Explorer: Right-click on any server application and click
Start/Stop > Start. This method would typically be used by developers or administrators.
Using the Autostart feature: Right-click on any server application and click Start/Stop >
Autostart. FactoryLink starts the selected application during the boot of a FactoryLink server
machine, before the user logs in. The application is started as an NT Service. This method
would typically be used by operators on run-time only systems.
Once a mailbox has been stuffed with orphaned messages, no other mailbox writes can be
performed, even if these writes are to a different mailbox.
The system must be tunable to handle large quantities of mailbox messages without allowing a
mismatched mailbox producer/consumer task combination to exhaust the kernel of all its
resources.
To make this configuration tunable, a switch to the Shared domain instance of the Run-Time
Manager is used. This switch is configured in the System Configuration Information table. The
syntax of this switch is
-m<max_seg>[:<max_mbxsegs>[:<max_onembx>]]
where
<max_seg> Maximum number of kernel segments. (default = 1000 of 64K bytes each)
<max_mbxsegs> Maximum segments used for mailbox messages. (default = 250)
<max_onembx> Maximum number of K bytes of message space held by one mailbox. The
default sets no per-mailbox ceiling.
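As an illustration, the colon-separated switch could be interpreted as in this sketch. The defaults come from the table above; the parser itself is an assumption for illustration only, since FactoryLink handles this switch internally:

```python
def parse_mbx_switch(arg: str):
    """Interpret -m<max_seg>[:<max_mbxsegs>[:<max_onembx>]].
    Defaults: 1000 kernel segments, 250 mailbox segments, and no
    per-mailbox ceiling (represented here as None)."""
    if not arg.startswith("-m"):
        raise ValueError("expected a -m switch")
    parts = arg[2:].split(":")
    max_seg = int(parts[0]) if parts[0] else 1000
    max_mbxsegs = int(parts[1]) if len(parts) > 1 else 250
    max_onembx = int(parts[2]) if len(parts) > 2 else None
    return max_seg, max_mbxsegs, max_onembx

print(parse_mbx_switch("-m2000:500:64"))  # (2000, 500, 64)
print(parse_mbx_switch("-m1500"))         # (1500, 250, None)
```

For example, -m2000:500:64 would raise the kernel segment count to 2000, allow 500 of them for mailbox messages, and cap any single mailbox at 64K bytes of message space.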
In addition to system-wide limits, a memory usage ceiling can be set per mailbox tag. The
message length field of the Object table, currently supported for message tags, can be set with
a maximum memory usage for its associated mailbox tag. This is specified in K bytes. The
per-mailbox limit supersedes the system-wide mailbox limit.
Once a mailbox tag ceiling is reached, all subsequent writes to that tag are dropped. A new
error code, FLE_MBXLIM_EXCEED, is returned for this case.
The per-mailbox K byte limit can also be set or obtained through the Programmer’s Access Kit
(PAK).
Where you define the default run-time options depends on whether you are starting the
Run-Time Manager from a FactoryLink icon or from an operating system command line.
Argument Description
-d Turns on debug mode. Any errors encountered are logged to the
log file. If you specify this option, you can use Ctrl+C to
shut down Run-Time Manager.
-a<flapp_dir> Defines the full path of the directory containing the application
files. This path overrides any path set by the FLAPP
environment variable.
-p<flink_dir> Defines the full path of the directory containing the
FactoryLink programs. This path overrides any path set by the
FLINK environment variable.
-f1 PID check. If you are experiencing kernel lock-up problems, this
switch adds extra checking to prevent rogue tasks from corrupting
the kernel; however, there is a performance penalty.
-L Logs errors and other data to a log file.
-t<timeout> Defines the start/stop time-out, in seconds, for the Run-Time
Manager error report process. The default time-out is 60
seconds.
-s Starts only the shared domain on a PC platform. The user
domain is not started.
-n<fldomain> Defines the domain name, where domain can either be shared
or user. If you specify shared, only the shared domain is started.
This overrides the FLDOMAIN environment variable.
-i<flname> Defines the name of the application to start. This overrides the
FLNAME environment variable.
-u<fluser> Defines the user name. This overrides the FLUSER
environment variable.
-w Turns on the warm start mode. If you specify this option,
FactoryLink loads persistent tags with the last value saved for
them.
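Several of these switches carry their value inline (for example -t60 or -aC:\FLAPP). As an illustration, they could be collected as in this sketch; this is not the actual Run-Time Manager parser, and the example paths are invented:

```python
def parse_rtm_args(argv):
    """Collect Run-Time Manager style switches: bare flags such as -d,
    -s, -w, and switches carrying an inline value, such as -a<flapp_dir>
    or -t<timeout>. A flag with no inline value maps to True."""
    opts = {}
    for arg in argv:
        if arg.startswith("-") and len(arg) >= 2:
            opts[arg[1].lower()] = arg[2:] or True
    return opts

opts = parse_rtm_args(["-d", "-aC:\\FLAPP", "-t120"])
print(opts["a"])  # C:\FLAPP
print(opts["t"])  # 120
```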
ERROR MESSAGES
Text Messages
Each error number has a corresponding error keyword, which does not display in the error
message. Knowing the error keyword helps Customer Support engineers identify the cause of
the problem.
1 Try to determine which task is sending the error by shutting down FactoryLink, restarting it,
and starting each task, one at a time.
2 Write down any error messages displayed on the Run-Time Manager screen and their
corresponding tasks. (The task having the problem may generate a seemingly unrelated error
message.)
3 Contact the supplier of the task if the task in error is an external task.
4 Contact your technical support representative if the task in error is a FactoryLink task.
•
•
•
•
Scaling and Deadbanding
The Scaling and Deadbanding task converts or scales incoming raw data to a value in a more
useful format using a linear relationship. Scaling is often referred to as engineering units
conversion. The task can also indicate a deadband or area around a scaled value that is small
enough to be considered insignificant and is ignored.
Many values read from various types of control equipment are in units other than those the user
wishes to display, manipulate and/or archive. The Scaling and Deadbanding task eliminates the
need to process data through an intermediate routing mechanism and the need to write code to
perform the scaling function when the scaling is linear. If given ranges for the incoming and
desired data values, it can derive the necessary conversion factor and/or offset and perform the
linear scaling calculations automatically using the formula:
y = mx + b
where x is the raw value, m is the multiplier, b is a constant, and y is the result.
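Given the raw and scaled (engineering-unit) ranges, the multiplier and offset can be derived as in this sketch. The function and variable names are illustrative; the arithmetic follows the y = mx + b formula above:

```python
def linear_scale(raw_min, raw_max, eu_min, eu_max):
    """Derive m and b so that y = m*x + b maps the raw range
    [raw_min, raw_max] onto the engineering-unit range [eu_min, eu_max]."""
    m = (eu_max - eu_min) / (raw_max - raw_min)
    b = eu_min - m * raw_min
    return m, b

# Hypothetical example: 12-bit ADC counts (0-4095) scaled to 0-100 percent.
m, b = linear_scale(0, 4095, 0.0, 100.0)
print(round(m * 2048 + b, 1))  # 50.0
```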
If you indicate a deadband around a value, the new value is stored and a new deadband
recalculated, but the new value is not written to the real-time database. Since FactoryLink tasks
process values upon every change, deadbanding provides a means of saving processing time
and improving system efficiency.
Note: The deadbanding portion of the function cannot be implemented without
configuring the scaling portion of the function.
OPERATING PRINCIPLES
The scaling function only applies for tags with an analog, longana, or float data type.
Scaling is configured using a pair of ranges for raw values and a pair for scaled values. These
ranges can be specified as constants or tags. The scaling formula is adjusted accordingly if one
or more of the range tags changes.
When a value is written to a raw value tag, its related scaled value tag is updated accordingly.
This is a raw-to-scaled conversion.
When a value is written to a scaled value tag, its raw value tag is updated accordingly. This is a
scaled-to-raw conversion.
Prior to changing a range tag, raw value tag, or scaled value tag, the function should be
disabled using the Scaling Lock Tag. When the Scaling Lock Tag has a nonzero value, changes
to these tags do not trigger scaling calculations.
During raw-to-scaled conversion, a newly calculated scaled value that does not exceed the
deadband is not written to the database. If deadbanding is being applied to a tag associated
with scaling rather than a specific alpha-numeric range, deadbanding is specified by a
percentage of a range rather than as an absolute value. If the deadband variance for a scaled tag
is specified as an absolute value, then no deadbanding is applied to the associated raw tag.
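The deadband test during raw-to-scaled conversion can be sketched as follows, assuming the percent-of-range form described above (the function name and sample values are illustrative):

```python
def deadband_exceeded(new_scaled, last_written, eu_min, eu_max, percent):
    """Return True if the newly scaled value moved outside the deadband,
    defined as a percentage of the engineering-unit range, meaning the
    value should be written to the real-time database."""
    band = (eu_max - eu_min) * percent / 100.0
    return abs(new_scaled - last_written) > band

# With a 2% deadband over a 0-100 range (band = 2.0 units):
print(deadband_exceeded(51.5, 50.0, 0.0, 100.0, 2.0))  # False: suppressed
print(deadband_exceeded(53.0, 50.0, 0.0, 100.0, 2.0))  # True: written
```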
Accessing
Scaling and Deadbanding > Scaling and Deadbanding > Scaling and Deadbanding Information
Field Descriptions
1 Create a tag in the Math and Logic Variables table named scale_test, then Save.
2 With scale_test still selected, open the Tag Editor and select the Scaling/Deadbanding tab.
4 Enter a Disable Tag and Deadbanding Value if desired, then press OK.
5 Open the Scaling and Deadbanding Information table in grid view. You will see that new tags
have been added. The tag names all have the “root” of scale_test. The scaling task appends the
suffixes .raw, .rawmin, .rawmax, .eumin, .eumax, .dead, and .lock to create seven unique tag
names for each value.
6 Examine these tags in the Tag Editor and you will see that the default values are the values you
entered in the Scaling/Deadbanding tab.
If you enter the scaling data manually into the table, you need to manually add persistence to
the .raw tag.
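The suffix scheme from step 5 can be sketched as a short helper (the function name is illustrative; the suffixes themselves are the ones the task appends):

```python
SCALE_SUFFIXES = (".raw", ".rawmin", ".rawmax",
                  ".eumin", ".eumax", ".dead", ".lock")

def scaling_tag_names(root: str):
    """Expand a root tag (e.g. scale_test) into the seven tag names the
    Scaling and Deadbanding task creates from it."""
    return [root + s for s in SCALE_SUFFIXES]

print(scaling_tag_names("scale_test")[0])    # scale_test.raw
print(len(scaling_tag_names("scale_test")))  # 7
```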
ERROR MESSAGES
•
•
•
•
Tag Server
The primary purpose of the Tag Server is to provide tags to the Client Builder. It includes a
structure of properties that allows the tag to be treated as an object on the client. This object
includes information like the tag value, tag description, security level, and other properties
unique to the data type of the tag.
The Tag Server is the only way to get redundant connections to a pair of redundant servers.
Using the FLCONV utility to convert an application adds this task to your system and creates a
default configuration for this task.
Tags can be referenced in Client Builder only if they have a valid Xref entry in the FactoryLink
Server. Rather than using the Math and Logic Variables table, use the Tag Properties table (the
preferred place) to define a tag that is used only in Client Builder and not referenced elsewhere
in the server application.
Accessing
Graphics > Tag Properties > Tag Properties
Field Descriptions
Accessing
Field Descriptions
•
•
•
•
Trending
Using the Trend module, you can create animated graphs called trend charts that show numeric
data graphically. Trend charts are capable of plotting a single value in a chart or multiple data
points concurrently. The lines or bars on the chart are referred to as pens as shown in this
sample chart.
[Sample trend chart: Pens 1, 2, and 3 plotted on a 0-20 scale against time, spanning
11-17-04 10:43:00 to 11-17-04 12:43:00]
Note: For information about the ECS trend, see the “Animating a Chart” chapter
in the ECS Graphics and Web Client Reference Guide.
OPERATING PRINCIPLES
The Trending module consists of a Trend Server, a relational database, and two trend controls.
The components work together with the logger and historian tasks to format real-time or
historical information into a Trend chart that can be viewed at run time.
The logger reads the value of specific tags in the real-time database and maps the tags to
columns in a relational database table. The logger sends the data from the real-time database to
the database via a historian mailbox. The historian inserts the data into the relational database.
Once in this database, the data can be used by other applications.
The trend controls are used to view the data at run time. Depending upon your application
needs, you should select one of the two controls:
• The Real-time Trend Control provides a quick and easy way to insert a real-time trend chart
into a mimic. Use the Real-time Trend Control if you want to trend only real-time OPC data.
• The Historical and Real-time Trend Control lets you configure trend charts to display
historical and real-time data or data from non-FactoryLink database tables.
Appearance
Most aspects of the appearance of a trend chart can be configured, such as size, background
color, captions, text color, font, legends, line styles, and others.
Run-time Permissions
You can design your trend charts so that operators may have a large range of permissions for
online changes at run time or none, depending upon your application. A few of the functions
that can be permitted at run time are adding tags, viewing pen statistics, and changing pen
appearance.
Multiple Pens
Using FactoryLink, you can create different types and numbers of pens. You can configure
fixed pens at design-time to allow you to permanently assign a database table column to a
particular pen. You can assign the column to a pen at run time, and you can assign multiple
pens to a Trend chart at both design and run time.
Note: It is recommended you configure no more than eight pens to a chart for
good readability.
Multiple Axes
You can configure multiple pens in a trend chart. FactoryLink creates an X and Y axis to
correspond to each pen as each pen is created.
Because all historical Trend data is written to a relational database, you can arrange it to show
different data ranges. These ranges can be expanded or collapsed as needed.
Panning allows you to select the time span of data to display in a historical trend chart. Using
this feature, you can move forward or backward through historical data and you can move to a
specific time or sample.
Zooming is the ability to look at small or large chunks of data by changing the chart duration.
Zooming either increases or decreases the amount of data displayed.
Tooltip Information
When you hold your cursor over a point in the Trend chart, information about that value
appears in a text box over the point. Tooltip information includes the name of the pen, the
value of the X-field, the value of the Y-field, and ID. The ID field shows information about the
point in the ID/Key field of the database, if the database contains this field.
Value Cursor
A value cursor allows you to display the value associated with a point on a Trend chart. When
you click anywhere in the chart at run time, the value cursor, which looks like a vertical bar, is
displayed. You can write custom programs for pen cursor values. Figure 24-1 illustrates an
example of a value cursor.
Figure 24-1 Value Cursor
Delta T
Delta T refers to an offset in time. In FactoryLink, for example, you can have two pens
showing data simultaneously. FactoryLink’s Delta T feature allows you to associate an offset
in time for one of the pens. You could find this feature useful in conducting a comparison
between spans of time on a pen. This feature allows you to shift and line up one span of time
over another to conduct such a comparison.
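A minimal sketch of the Delta T idea: applying a fixed time offset to one pen's samples so two spans of time can be overlaid for comparison (the sample data and function name are invented for illustration):

```python
from datetime import datetime, timedelta

def shift_pen(samples, delta_t: timedelta):
    """Offset every (timestamp, value) sample of a pen by delta_t so it
    can be lined up over another span of time for comparison."""
    return [(t + delta_t, v) for t, v in samples]

# Shift yesterday's data forward one day to overlay it on today's pen.
yesterday = [(datetime(2004, 11, 16, 10, 0), 72.5)]
overlaid = shift_pen(yesterday, timedelta(days=1))
print(overlaid[0][0])  # 2004-11-17 10:00:00
```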
FactoryLink provides everything that you need to construct trend charts that suit most
applications. For additional flexibility and customization, FactoryLink also allows custom
programming capabilities for the Historical and Real-time Trend Control. You can write a
custom program to access the properties, methods, and events included with FactoryLink. For
more information about customizing the trend controls, see the Client Builder Help.
Trend Server
[Diagram: the Trend Server connects FactoryLink Server 1 (Application Server), other input
sources, and Relational Database 1]
Trend Server
Trend Server is a program that provides a service to client programs, such as the Historical and
Real-time Trend Control. The Trend Server can query any relational database, or many
databases, simultaneously.
Relational Database
All trend data configured in the Historical and Real-time Trend Control is stored in a relational
database. Data can also come from sources other than FactoryLink’s Real-Time Data Base
(RTDB).
Trend Controls
The Historical and Real-time Trend Control requests data from the Trend Server. The Trend
Server sends the requested data back to the control, which displays the information on the
trend chart.
The Historical and Real-time Trend Control is an Active X Control that is a client of the Trend
Server. Its container is Client Builder.
The Real-time Trend Control only accesses real-time data from the FactoryLink real-time
database and does not interact with the Trend Server or relational database.
A Trend Server can establish multiple database connections, as shown in Figure 24-3, because
the pens that appear on the Trend chart may come from more than one data source. A Trend
Server establishes as many connections as needed to retrieve the data required by the pens.
Figure 24-3 illustrates the Trend Server query. Trend Control contacts Trend Server and passes
the data source information to the Trend Server. Trend Control passes this data to the Trend
Server as pens are added to the chart. Trend Server returns the data. Trend Control takes that
data and associates it with a pen. This interaction allows a pen to be modified at run time as
well as design time. Trend Server gets the data from the relational database (RDB), and sends
it back to the Trend Control. This data appears on the Trend chart.
[Figure 24-3: the Trend Control querying the Trend Server, which retrieves data alongside the
FactoryLink Server (Application Server)]
FactoryLink provides flexibility in choosing where to run Trend Server. The recommended
place to run Trend Server is on the same node as the FactoryLink Server application, but it can
be run on any node. You can choose to put a Trend Server on each client node, or on the node
where the FactoryLink Server resides. If you choose to put Trend Server where the
FactoryLink Server resides, point the client nodes to this location.
CHART TYPES
Trend charts can be based on time or events.
Time-Based Charts
Time-based charts are best suited for continuous types of data. Figure 24-4 shows a
representation of a time-based chart showing a boiler temperature over time. For a time-based
chart, the key column in the database is set up as a time field.
Figure 24-4 Time-Based Trend Chart Showing Boiler Temperature
[Chart: boiler temperature plotted on a 0-1000 scale, set at ten-minute intervals back through
time]
Event-Based Charts
Event-based charts are well-suited for per piece or batch data. For an event-based chart, the
key column is set up as a sequence or an ID field.
Per-piece data is data collected for every item in a process. For example, a manufacturer of car
windshields inspects the thickness of every completed windshield as it comes off the assembly
line. Using an event-based chart, this manufacturer can graphically represent the thicknesses of
the windshields produced, regardless of time. Figure 24-5 illustrates an example of an
event-based chart.
Figure 24-5 Event-Based Trend Chart Showing Per Piece Type of Data
[Chart: windshield thickness in inches (0-2.0) plotted per piece]
Batch (group)
Group data is data that logically belongs together and is categorized or grouped in that manner.
For example, a soup manufacturer that makes two flavors of soup may want to track different
batches of both flavors. Using an event-based chart with groups (soup flavors), this
manufacturer can graphically represent the differences in sodium content for each group by
batch. At the end of the batch cycle, a trigger initiates the sampling of the sodium content for
the batch. This sample is written to the database and the sequence number increments to
prepare for the sampling of the next batch. Figure 24-6 illustrates an example of an
event-based chart that shows batch type of data.
[Figure 24-6: sodium content in grams (0-150) plotted by soup batch, batches 1 through 5]
CONFIGURING TRENDING
All configuration for the various trending components occurs in Client Builder. Trend consists
of three phases: predesign, design, and run time.
During the predesign phase, you set up a data source name (DSN) so that the database table
can be linked to a pen in your Trend chart. In addition, you set up a Trend Server and add it to
the configuration. At the end of the predesign phase, you set up a Trend cluster. You work
through a series of wizards and dialog boxes to prompt you through the predesign phase.
At design-time, you work through the Trend Control property screens to design your Trend
chart. The Trend Control property screen contains four tabs: Aspect, Graph, Pens, and Fonts.
All of the properties available on these property pages are accessible in the custom
programming environment. The Graph and Pen Tabs contain dialog boxes that are invoked by
command buttons. You define the pens of your Trend chart from the Pens tab. In the process of
defining these pens, you use the Pens Configuration screen to associate each pen with a data
source that you configure during the predesign phase of the process.
At run time, you can use all of the functionality of the Trend task that you can use in design
time. Additionally, you can perform the panning and zooming functions during run time in the
offline mode only. For more information, see the Client Builder Help.
PROGRAM ARGUMENTS
Argument Description
-V# or -v# Writes trend chart events to a log file.
-W# or -w# Sets the maximum time-out in seconds to wait for a
response from the historian. The default is 30 seconds.
•
•
•
•
Virtual Real-Time Network
and Redundancy
This chapter contains detailed information for configuring the Virtual Real-time Network and
Redundancy (VRN) task for real-time database redundancy. Included are possible solutions for
configuring historical database redundancy using VRN and other components. The VRN task
communicates tag data across the FactoryLink network. This is the mechanism through which
the real-time databases of a redundant system are kept in sync. VRN also manages the
redundant system’s master/slave negotiation and execution. VRN has the capabilities of
FLLAN and PowerNet, but also supports the DBX Data Base (X) Terminal, a powerful tool for
online testing and debugging with local or remote access through a network.
Note: VRN is supported only between the same version of FactoryLink server
applications. For example, you can set up redundancy between two FactoryLink
8.0 applications, but not between a FactoryLink 7.5 server application and a
FactoryLink 8.0 server application.
[Diagram: graph clients and remote operator stations connected through VRN to redundant
FactoryLink databases, with remote database access]
VRN runs the VRN_INIT program at startup to prepare all new or changed configuration data
prior to running. If required, you can start VRN_INIT with arguments, as described on page 600.
VRN_INIT uses Microsoft software that is installed with Internet Explorer 5.5 or higher. If this
software is not found, VRN_INIT will not run.
Configuring VRN is easy. A server simply requires the appropriate information in the
FactoryLink System Configuration and a single line in the Connect Control table. Clients may
require one more line entry in the Client Object Information table. This is because lists of tag
names can be specified by wildcards. VRN can be tuned to optimize its update rate to match
the operating environment of the application by specifying parameters for RdUpdWr in the
Connect Control or Client Object Information tables. See the “Configuration Tables” on page
559 for detailed information about how to set these parameters. Existing FLLAN and
PowerNet tables may be easily translated to become VRN configuration tables.
From a technical viewpoint, there is a wide range of server redundancy: from simply having a
second server in stock, through an installed (but stopped) cold-standby server, up to a
fault-tolerant system that has either a ready-to-run or a fully operative hot-standby server,
such as FactoryLink with VRN.
From a user viewpoint, redundancy should minimize downtime and loss of data due to a
system failure. While a cold-standby system may in many cases be reasonable, it may require
trained personnel to reinstate it after a failure. This, together with a likely prolongation of the
downtime, may cost more than a hot-standby server, including the required software.
Therefore, when talking about redundancy for FactoryLink, a hot-standby solution is normally
assumed.
Loss of data normally refers to both real-time and historical information. While historical
information can be safeguarded on hard disks using standard node and data management
software such as Microsoft Cluster Service (MSCS), real-time data requires special treatment,
since it cannot be managed by the operating system. VRN mirrors the FactoryLink real-time
database rather than historical files that can be synchronized by standard software, such as
SQL Server or Oracle. For historical data, you can run the historians of a redundant system in
parallel.
VRN redundancy is not based on just waiting for an auto takeover at failure. At any time
all servers involved can be used to their fullest extent. The VRN redundancy cluster supports
multiple servers, and VRN clients can automatically reconnect to “1 of x” servers according to
a priority level; that is, if a server becomes active, the client automatically reconnects to it.
A VRN redundant system quickly recovers from failures. Except for the Alarm Logger,
which is automatically restarted as a server on the actual VRN master, clients typically do
not notice a changeover. Combining VRN redundancy with a redundant historian database
provides both high availability and reliability at moderate cost.
[Diagram: redundant FactoryLink stations with VRN, including the Distributed Alarm Logger,
a DBX Terminal for remote access, a router, and a redundant database serving remote stations
over the Virtual Real-time Network]
For quick configuration, tag selection is done from a simple list that allows wildcards. VRN
automatically controls the Distributed Alarm Logger to run as a client or server on the two
redundant FactoryLink stations. Thus, the applications can be kept 100 percent identical.
A typical setup for a redundant partner station is shown in the following graphic. The
configuration at the redundant partner station is identical, so you can specify an application,
save it, and then restore it on the second computer.
[Graphic: setup for a redundant system including the Distributed Alarm Logger]
Recommendations
Driver • Network drivers work best (such as Ethernet, KT, Modbus Plus).
• Serial drivers are very difficult to make redundant without special
hardware to manage the serial port communications.
• You need to build the application to disable communications on the backup server.
Historian • The dBASE IV shipped with FactoryLink is not a good choice for
redundancy for trending.
• SQL Server is a much better choice.
• The SQL Server computer should not be one of the FactoryLink
servers for a redundant system. Only the primary server should log
data.
• The SQL Server computer should be running a server-grade operating
system (Windows 2000 Server or Windows 2003 Server) so that the
Standard Edition of SQL Server 2005 can be installed. The Standard
Edition can use the replication features to provide data redundancy.
Network • Ethernet networks are preferred with at least 100 MBit network cards.
• Switches, not Hubs, are preferred for networks.
• Dual network cards are preferred for FactoryLink servers with
CrossOver Ethernet cable between them (no network switch to fail).
Term Definition
Client: An application that references input/output (I/O) data from a server (calls for
service). It may also send data to a server. Multiple clients of the same kind may
appear in a network. A client may be linked to several servers whose I/O data
may either be different or multiplexed for redundancy purposes.
Server: An application that sends I/O data to one or more clients (provides service). It
may also receive data from a client. A particular server must be unique in a
network.
Read / Write: Data from Server to Client is called Read, and data from Client to Server is
called Write.
I/O Data: A database containing a process input/output data image. Normally, this is the
FactoryLink Shared database. However, it may be another data image, such as a
driver. Data from the server is mirrored in the client. The system may be
compared to Dual Ported Random Access Memory (RAM) as it mirrors data,
which may be changed on either the client or server side. Similarly, data may be
changed at several locations in a complex network.
VRN Client Interface
The Client interface (I/F) supports a local cache for each individual I/O data tag.
Individual I/O data is entered to and displayed from the very same database tag,
providing instantaneous updates on the local screen while data is transferred to
and from the server in the background. The method allows for proper
bidirectional data exchange without the need for a complex database-locking
mechanism at the server side or the threat of data consistency problems.
[Diagram: clients A through D, each holding a local I/O Data cache behind its Client I/F, connected over a TCP/IP network to Server X and Server Y through their Server I/Fs; one client combines I/O data from both servers (A+B).]
Action=Reaction
Read and Write I/O addresses may or may not be identical. The fact that individual Read/Write
data can be combined in a single tag at the client side provides powerful methods for
visualization as shown in these examples.
Visualizing a Pump
A motor command sent through the Write channel may be
interlocked by hardware and software before it is returned as a
contactor feedback signal through the corresponding Read
channel. However, the tag in the Client Object Information table
may be identical for both command and feedback. Consider a tag
that is used to animate a pump with these values:
0 = OFF, 1 = RUNNING, 2 = STOPPING, 3 = STARTING
To start the pump, set the tag’s value to 3 to indicate STARTING; this sends the value as a command to the external device. If the feedback signal from the external device indicates a starting pump with a value of 3 within the Update Delay time, the animation remains on STARTING and you have achieved Action=Reaction. The feedback signal will indicate RUNNING only after a while. To stop the pump, enter the value 2 to indicate STOPPING. In turn, the feedback will indicate a value of 2 for STOPPING and, after a while, return to zero to indicate OFF. Note that all of this is done with a single tag on the client side, while control of the pump is possible on either the client or server side.
CONFIGURATION TABLES
The Connect Control Table identifies connections, each having its own TCP/IP socket
interface. A connection is specified by a local mode for data processing, the partner’s host
name, and the common services used for the particular link. A local system may play client
and/or server on the same machine. For a service, which acts as a listener for possible
incoming calls, object information is configured in the partner system(s).
The Client Object Information Tables identify individual object I/Os, which are linked by
one or more connects to be read from and written to the server. Read data may or may not be
identical to write data. The fact that individual read/write data can be combined in a single
object at the client side provides powerful methods for visualization.
• For example, a start/stop signal WrCommand sent through the write channel by the server to
an external device may be acknowledged by a contactor signal RdFeedback received
through the corresponding read channel. However, the object I/O Tag/Item IO_Animation in
the Client Information table may be identical for both start/stop command and contactor
feedback.
• On the other hand, an IO_Setpoint value may be transmitted to and from the very same
address in the external device to emulate Dual Ported RAM through communication. In this
case, the object in the Client is linked to the same tag for both read setpoint and write
setpoint from/to the server side.
Configuration A Layout
Configuration B Layout
Legend
Accessing
Redundancy > VRN Virtual Realtime Network > VRN Connect Control
Field Descriptions
[Diagram: Client/Server and Read/Update/Write data flow between the two REDUNDANT nodes; which node’s I/O data is master is determined at startup.]
Status Tag values in Redundant Mode:
Redundant Mode                    Analog  Digital
Master (Slave)                    785     ON
Slave (Client)                    786     OFF
Tries to reconnect / Stand-alone  801     ON
Slave initializing                816     OFF
Accessing
Redundancy > VRN Virtual Realtime Network > VRN Connect Control > “your table name” > VRN
Client Object Information
When using the Exclude, Include, ExclGlobal, and InclGlobal arguments, it is important to
order your entries in such a way that no entry will result in negating a previously defined entry.
The following guideline describes the best practice rules:
1. Define wildcard entries that you want included first. If the entry is to include everything (for example, the Tag/Item is '*'), it is considered best to use the ExclGlobal argument to exclude the system and global tags from the VRN connection.
2. Define specific items/tags that you want to Include. The Include argument is not required
because it is the default.
3. Define specific items/tags that you want to Exclude. The Exclude argument is required on
the first item to be excluded, but is optional on subsequent entries because the first
excluded item is now the new default mode.
The following table example shows all entries accepted for Read, Write and I/O Tag/Item
when using the Sync and Async function.
Note: Use explicit tags (no wildcards) in the Sync line (apply dummy entries if a tag is not used). Except for the Sync line itself, synchronized data is sent at triggering regardless of data changes. When entering mailboxes in a Sync table, only one message is transmitted at a time.
If you want to poll or fetch data from a server, specify a Sync Read, then write to the ReadTag in the server by a separate Async line entry. Note that the I/OTag of the Sync Read is the Read Complete trigger. If you want a Write Complete trigger, simply read the WriteTag to an I/OTag by a separate Async line entry.
[State event diagram: CLIENT and Publ-Client connections. After a host is found and the VRN task is active, the connection initializes (Ana=562 / 306 for CLIENT / Publ-Client) and then becomes Ready to synchronize (Ana=529 / 273, Dig ON). A Configuration Error, a Disconnect or Change Host, a VRN task stop, or a lost connection puts the connection into the Inactive state from any state (Ana=514 / 258, Dig OFF); when the task is stopped, the state is Task Terminated. Legend: Digital Tag ON/OFF.]
[State event diagram: REDUNDANT status. At VRN Start, a Configuration Error sets the digital tag OFF. Otherwise the first connect decides the role: Master Connect Slave (Ana=800). The slave initializes (Ana=816, Dig ON once the partner is connected), negotiates with the master, and becomes ready to synchronize after the –SlaveSyncDelay program argument elapses. While the VRN task is active, a change from Slave to Master (or Master to Slave) occurs at changeover; when the task is stopped, the state is Task Terminated. Legend: Digital Tag ON/OFF.]
If you use ODX with ECI or RAPD with IOXlator, the VRN task uses a feature called mailbox
redundancy. In a perfect redundant application, the only data that needs to be synchronized
between the servers is the I/O data. This means that you should put every driver tag in the VRN
tables or use a simple naming convention so that you transmit all PLC tags using the VRN
wildcard function. This technique can be problematic because it is too easy to forget a tag. If
you are using an ECI or IOXlator supported driver, there is an easier solution.
Mailbox redundancy uses VRN to route mailbox tags locally and/or across the network to the redundant server. This is the ideal solution because you do not have to set up any tags other than the four standard ECI or IOXlator mailbox tags.
In a redundant system the VRN task manages the mailbox tags so that the slave can disable the
driver functions but still have the IO synchronized with the master system.
Master: PLC <-protocol-> Driver <-mailbox-> VRN <-mailbox-> IOXlator <-> tags
The application object uses the tag VRN_CONTROL to control the mailbox redundancy.
Master: VRN_CONTROL=0
Slave: VRN_CONTROL=1
[Diagram: a single database server (Computer 3, DB) on RAID storage.]
PRO: This configuration is the simplest and easiest to implement and is very reliable.
CON: This configuration has a single point of failure, but it can be improved by using
failure-tolerant RAID drives and redundant power supplies.
[Diagram: FactoryLink servers FL1 (Computer 1) and FL2 (Computer 2) linked by VRN, each logging to its own database (DB1, DB2).]
PRO: This configuration is for when you want to capture only historical data, and it is
acceptable for the data to reside in multiple databases. The database can be replicated using
snapshot replication with no loss of logged data.
CON: You would need to stop data logging operations while the database is being replicated.
Note: Refer to Microsoft’s SQL Server documentation and technical support
for details on implementing a snapshot historical database replication solution.
[Diagram: FactoryLink servers FL1 (Computer 1) and FL2 (Computer 2) linked by VRN.]
CON: Data must be noncritical, since during a failover there is a time window in which a small amount of data might not be captured. After a failback, the saved data must be restored into the primary database. This configuration requires SQL Server Standard Edition and must be implemented by personnel qualified in SQL replication technology.
Note: Refer to Microsoft’s SQL Server documentation and technical support
for details on implementing a transactional historical database replication
solution.
[Diagram: databases DB1 (Computer 3) and DB2 (Computer 4) kept synchronized by SQL replication technology.]
PRO: This configuration uses four computers and has no single point of failure. You do not
have to stop operations to resynchronize data. Even if an application computer is down,
constant database synchronization continues. If the primary database is down, logging
continues on the second database.
CON: Because this configuration requires more CPU time, there is a higher hardware requirement of four computers. It must be implemented by personnel qualified in SQL replication technology.
Note: Refer to Microsoft’s SQL Server documentation and technical support
for details on implementing a merge historical database replication solution.
For a clustered historical database solution, no special work is required in your FactoryLink
application because with a clustered database, FactoryLink sees only a single virtual database,
and the redundancy is handled transparently by the clustering.
[Diagram: FL1 (Computer 1) and FL2 (Computer 2) linked by VRN, logging to a clustered database (DB1/DB2) on shared RAID storage.]
PRO: This configuration provides a fully redundant solution and is the surest database redundancy method for both small and large databases. Clustering is straightforward to implement because of the single virtual database. This is the most fault-tolerant method for replicating data.
CON: This configuration is the most expensive because of additional hardware, software, and
implementation costs.
Note: Refer to Microsoft’s SQL Server documentation and technical support
for details on implementing a clustered historical database replication solution.
TCP/IP Network
Server Connect Configuration at NodeA
Note that the default transmission control for Read Cycle, Update Delay and Write Interval
may be replaced by a parameter entry in the Function/Arguments column in either table.
Wildcard entries in the information table are possible and provide great flexibility: you may specify I/O Tag/Items that are read-only or write-only, you may rename (alias) tags, and you may even enter different tags for read and write on the server side. In addition, the system accepts mailboxes, and an existing PowerNet table may be easily translated into a VRN configuration table.
TCP/IP Network
Server Connect Configuration at NodeA
Note that network alias tag names are not required due to automatic renaming at the client and
server side. The default transmission control for Read Cycle and Write Interval may be
modified by a parameter in the Function/Arguments column. You can use wildcard entries to
make configuration easier. As shown, an existing FLLAN table may be easily converted to
become a VRN configuration table. The Sync Function also allows for configuring complete
triggers.
TCP/IP Network
The Redundant Mode specifies a connection, which either runs as a master or a slave.
Configuration may be identical for either node. Thus, you may save an application, restore it
on another computer and run it as a redundant system. The only difference is the system's host
file containing the partner's node name. Wildcard entries make configuration much easier and
are used to specify Tag data exchange. For example, the simple configuration example can be
set up as a redundant system setup.
[Diagram: Individual Tag Data Exchange. Each node runs remote graphics, ECI, FLDB, and VRN; the two VRN tasks exchange tag data and mailbox data over the Partner TCP/IP connection (Tandem Connect), and TASKSTART_S[x] on each node is driven by the VRN Tandem Status.]
In this example, all Shared tags named XYZ and ABC are automatically exchanged to keep data consistent between the two systems. The VRN_CONTROL and VRN_STATUS tags can be used to control the driver read/write tables and the ECI task (enable/disable). You can also use this tag to force the system to become master if set to 1 (odd) or to become slave if set to 0 (even).
Note that for ECI with RAPD or OPC Data eXchange (ODX), it is recommended to exchange
Mailbox data directly as shown on the next page. This is more powerful as it exchanges binary
data between the driver(s).
VRN Client Object and Connect Configuration for NodeA and NodeB
[Diagram: on each node, the ECI mailboxes (EciMbx, EciWmbx) connect through VRN over the TCP/IP network; the driver’s TASKSTART_S[x] is controlled by VRN.]
Note that the VRN Object Information entries are identical for both Partner/Redundant and
Local/CLIENT connection. If you wish to exchange additional data through the Redundant
link, simply add the information to the Partner table, but not to the Local table.
Note: When you declare an OPC connection in ODX, you configure the OPC
Server Name parameters. If you have several networks on your computer, you
must specify the node before the server name, such as Node:Server_Name.
The diagram below shows NodeA as the master running the driver while NodeB’s driver is
passive. The VRN_CONTROL and VRN_STATUS tags may be used to enable or disable the
read and write tables or the ODX Link Control. The status tag is set to OFF only at startup or if
the system is running as a slave. For any other case, the tag is set to ON to start the driver and
connect it through the Local link (Mux=1). The master’s Status Tag can also be used as an
update trigger for a driver that reads unsolicited mailbox data. At normal operation, datasets
are sent to the master’s ECI through its Local link and to the slave’s ECI through the Partner
link. If the master fails, the slave takes over and activates its local link and driver.
[Diagram: IMX Mailbox Dataset Exchange. NodeA (Master) and NodeB (Slave) each run a device driver (IMX / RAPD or OPC) and remote graphics; tag data and mailbox data flow between the nodes, and TASKSTART_S[x] on each node is driven by the VRN Tandem Status.]
Note that Mailboxes for ECI (EciRmbx/EciWmbx) and driver (DrvRmbx/DrvWmbx) must be
different while IOX cannot be used because of IMX queries. For ECI-based RAPD or ODX,
use “Rd/Wr Ds Idx” and then duplicate and rename the ECI Control table entries to be
referenced by the driver. For non-ECI based RAPD drivers, make sure the two applications are
identical (same dataset tag index) using FLSAVE > FLRESTORE.
[Flow diagram: on each node, a “Master?” decision between IOXlator and VRN determines whether mailbox traffic is routed through VRN.]
The following sections show the configuration of a redundant FactoryLink application using
mailbox redundancy. Assume that the redundant application resides on nodes Redundant
NodeA and Redundant NodeB.
The logical station ID should be the same for all tasks on system A, but different from the
logical station ID on system B of the redundant pair.
This example shows both IOXlator and the Modbus RAPD Ethernet driver set to logical
station ID 1. On its redundant pair system, both tasks should be set to a logical station ID other
than 1, such as 2.
The second entry, RedundLocal, establishes a client connection to itself, ‘localhost’. This connection is active only when the system is not the SLAVE. The Local Control Tag has a value of 786 when the local system is a slave, hence the “Mux<>786” function to disable the connection when the REDUNDANT connection is in SLAVE mode. When the system is the slave, no messages can be exchanged between the local instances of IOXlator and the drivers.
The third entry, table RedundServer, establishes a redundant connection to the node set in the {RedundServer} environment variable.
The VRN Client Object Information table is configured for the CLIENT and REDUNDANT
modes.
This configuration injects VRN in between IOXlator and its driver on the REDUNDANT
communication link or on the local, loopback communication link. Any mailbox messages
written by IOXlator are read by VRN and then, if the communication link is active, written
back to the driver receive mailbox.
Since the applications are identical on both nodes, the VRN Connect Control table for NodeB
is configured exactly the same as the table for NodeA. The only difference is that each
system’s host file contains the partner’s node name or an environment variable is set with the
partner’s computer name. The FLVRNSetup application object in the Examples Application or
the FLNEW template creates the {RedundServer} environment variable.
To control your process only from the Master application (the recommended choice),
configure the VRN Local Control tag (for example, VRN_CONTROL) as your disable tags for
the Read Disable, Write Disable, and the Unsolicited Rcv Disable fields in the I/O Translator
Data Definition table. In the following configuration example, reads and writes can be
triggered only from the Master system.
If you do not enter a tag in the Read Disable, Write Disable, and the Unsolicited Rcv Disable
fields of the I/O Translator Data Definition table, both the Master and Slave systems can
invoke control operations.
[Diagram: mailbox routing between IOXlator (incoming/outgoing) and the driver (incoming/outgoing) through VRN.]
Remote Groups, LAN Control, and VRN Connect Configuration at NodeA and NodeB
TCP/IP Network
In the System Configuration table, remove the Alarm Logger’s Run Flag since its run status
will be controlled by VRN. Also, set the -w program argument to warmstart the AL_LOG task
in the event of a restart. (See page 505 for information to configure a task in the System
Configuration table.)
[Diagram: Distributed Alarm Logger redundancy. On each node (Distributed Alarm Logger Server or Client), AlServer, FLDB, and VRN with Tandem Function AlogX exchange alarm information and mailbox data over the Partner TCP/IP connection (Tandem Connect).]
The following diagram shows a detailed data flow of all mailboxes involved. The systems may
be set up for Shared and/or User tasks. The Alarm historian is shown for information only.
Data exchange between Alarm Logger and Historian is standard.
[Diagram: detailed mailbox data flow. On the VRN client application, the ALView and DBLog mailboxes (BROWSEHISTMBX(_U), DALOGVIEWMBX(_U), DBLOGHISTMBX(_U), DALOGACKMBX — Shared or User) are translated into VRN mailboxes (BrowseVRNMbx, DBLogVRNMbx — Shared) and copied across the TCP/IP network to the VRN server application (VRNMbx(X), BrowseHistMbx, DBLogHistMbx, DALogHistMbx — Shared, some reserved/hidden); DBHist then logs to a local or remote database.]
Feedback mailboxes:
ALViewer: DALOGVIEWMBX(_U)
DALogger: DALOGRCVMBX(_U)
DBLogger: DBLOGHISTMBX(_U)
DPLogger: DPLOGHISTMBX
Browser: BROWSEHISTMBX(_U)
Trending: TRENDHISTMBX(_U)
Argument Description
-L=<path\logfile> Log job information.
-V<#> Set the verbose level for logging. (# = 1 to 4)
-C Force VRN_init to create all data at startup.
–DefaultMsgLength=<#> If VRN is the first task that writes to a message tag
without a configured length, it sets the max length to #
characters. (default=80)
-TagMatchCompatibility Causes the Wildcard tag matching to use the previous
(flawed) algorithm. Tests show that the flaw in the
algorithm causes tags to be shared that do not match the
wildcard string. The switch is provided to assure that
existing applications continue to work, even though
they may be sharing more tags than expected.
If you use this flag in conjunction with the verbose flag,
you will get notified of tags that are being incorrectly
accepted by the wildcard comparison algorithm.
Example using -TagMatchCompatibility -V2:
Output: ** Note ** Due to -TagMatchCompatibility
switch, ‘_1_1_1_1’ is being incorrectly accepted by the
wildcard comparison for ‘_*_’.
Using these together will likely slow performance, but it is worth it to find the places where tags were being incorrectly included in the VRN transfer. It is recommended that you manually run VRN_INIT as follows:
Vrn_init -C -V2 -TagMatchCompatibility
Study the output and decide whether the tags listed are needed. If they are, you can change the wildcard to include them and then run the application without the -TagMatchCompatibility switch.
Argument Description
Arguments for VRN Tuning and
Performance
–ThreadPriority=<#> Caution: Some of these arguments, if not adjusted
correctly, might cause unpredictable results. You may
set the VRN Thread Priority and Task Priority Class as
described in more detail in the sample file
VRN_para.run. However, this should only be adjusted
by experts who understand the possible impact to the
system. (# =0 to 3, 0 being normal and 3 being the
highest)
–SleepTime=<#> Adjust process speed/CPU load by suspending the program every scan, in tenths of a second. (default = 100 ms)
–Alive=<#> Adjust global alive check time-out, minimum = twice
the SleepTime. (default = 60 [s])
–FirstConnect=<#> Time allowed in seconds for first connection when
starting in Redundant Mode. (default = 30 [s])
–ConnDelay=<#> Multiple simultaneous connections can be staggered at
the rate given by ConnDelay. (default = 3 [s])
–Throttle=<#> Throttle data transmission generally if the internal
transmission buffer of 64 kB is full. (# default = 300
[ms])
–SlaveSyncDelay=<#> Data synchronization on the slave of a REDUNDANT system can be delayed to prevent possible overwriting of synchronized data due to a still-active but stopped driver at Master/Slave changeover. (default = 3 [s])
–AlogClientDelay=<#> Distributed Alarm Logger start delay for Function Alog[..] in a redundant system; this may be useful to unburden the system at REDUNDANT changeover. (default = 5 [s] for client and 1 [s] for server)
TE_ACCESS #0x0080 Access: %s Code=%d (Code=0: configuration error; Code=1..x: database access error)
A database tag is not configured or an invalid access occurred → note the message details clearly.
TE_VERSION #0x0090 Version conflict (VRN_INIT:%s VRN:%s) → the versions must be the same; re-install VRN.
Status Tag[x] — Label in VRN.TXT: text displayed for the Service Message TagArray[x] (Analog, Bit1/Bit0)
Status Tag — Label in VRN.TXT: text displayed for all other Message Tags (Analog, Bit1/Bit0)
The Ctrl Flag Bit0 of the message number is applied when specifying a digital status tag. This can be used to control a task by linking its TASKSTART_S[x] tag to the Status Tag (see “Client, Publ-Clnt, and Redundant State Event Diagram” on page 577). Note that data synchronization at a REDUNDANT slave can be delayed by the program argument –SlaveSyncDelay to allow for a proper shutdown of drivers and thus prevent possible overwriting of synchronized data.
For a REDUNDANT, CLIENT or PUBL-CLNT connection, the Status Tag can further be used
to force the system to a dedicated state, and it can be used as an update trigger for a driver in a
redundant system that reads unsolicited mailbox data to master and slave.
Waveform Generator and
Sequencer
The Waveform Generator and Sequencer (FLWAVE) task provides features for simulating
real-world data for the purpose of testing, training, and commissioning of FactoryLink
applications and operator stations. The task is divided into three functional areas:
• continuous waveform generation
• event-driven output curve
• event sequencing
The Waveform tables provide the ability to output various continuous waveforms. The
waveforms can be used to test or simulate minimum and maximum conditions managed by the
HMI/SCADA system.
The Action tables provide an input event-driven output curve. This curve can be delayed to mimic real-world propagation of the data to the I/O devices and the corresponding output value changes.
The Sequencer tables provide time- or event-driven sequences of digital events. The tables can be chained together to provide hundreds of steps driving digital tags.
OPERATING PRINCIPLES
This task uses function generators to simulate factory floor data. The function generators
include ramp (saw tooth), triangle, sine, square, and random signals that are scaled over a
user-defined range and duration. The functions simulate continuous output devices.
Configuration tables are used to establish the simulation. The Waveform tables are used to
assign a waveform to a tag, which helps to test boundary conditions in the application and
animations. A trigger, which provides the common interface with PLC drivers, is used to
activate the waveform.
In Client Builder, an operator can see a snapshot of the waveform as if it were generated from
a real device connected to a PLC. Activating the trigger (for example, clicking the Start
Sequence button as shown in Figure 26-1) starts the simulation. A sample waveform mimic is
available in the Examples Application.
[Figure 26-1 callouts: one button displays a trend of the sine waveform; another button starts the example sequence.]
The waveform generator is driven by the sequence: the sequence tells the Action Control to begin. Comparing the sample waveform mimic with the configuration tables, Figure 26-1 shows that a trigger named flwave_pump_start starts the pump after 5 seconds (State 1 in the Sequence Output Information table), causing the curve to go upward. After 15 seconds, the pump turns off (State 3), causing the curve to go downward. The Tank Level is the ramp.
WAVEFORM TABLES
Accessing
Other Tasks > Waveform Generator > Waveform Control
Field Descriptions
Accessing
Field Descriptions
When the input tag is set off, the value of the output tag varies from maximum to minimum in
the time specified by the delay time value. The shape of the output is determined by the
function generator field.
Accessing
Other Tasks > Waveform Generator > Action Control > “your tag name” > Action Information
Field Descriptions
SEQUENCER TABLES
Accessing
Other Tasks > Waveform Task Sequencer > Sequencer Control Information
Field Descriptions
The first row of the table defines the count for each state, that is, the count of triggers in the sequence. For this row, the Output Tag Name and the Off State fields must be blank. Each sequence can have up to 30 steps. The value entered for each step is the number of step triggers that are counted for that step.
All rows after the first row define an output tag and the action to take for each state. A 0 or 1
value specifies what output is written at the beginning of the step. Leaving a field blank
indicates no action to take for the state.
Accessing
Other Tasks > Waveform Task Sequencer > Sequencer Control Information > “your tag name” >
Sequence Output Information
Field Descriptions
State (1 to 30) — The action to take for State x of the sequence. In the first row of this column, the value defines the count for the state, that is, the count of triggers in the sequence. In all other rows, the value specifies the output that is written at the beginning of the step. Valid entries: ON/OFF, 1/0, +/–, or blank (no action).
Format Specifiers
Format specifiers allow you to define the format for all or part of an output string. The
following FactoryLink tasks support the use of format specifiers:
• Alarm Supervisor
• Batch Recipe
• File Manager
• Report Generator
Format specifiers permit you to define a variable when a literal is expected. Format strings can consist of two types of objects:
• Ordinary characters, which are copied literally to the output stream
• Format specifiers, which indicate the format in which variable information will display
SYNTAX
% [flags][width][.prec]type
where
% Always precedes a format specifier.
flags Controls the format of the output. This can be one of the following.
- Left-justified within the field. If you do not specify this
flag, the field is right-justified.
0 Fills the spaces to the left of the value with zeros until it
reaches the specified width.
width Specifies minimum field width. For floating point fields, width specifies a
minimum total field width that includes the decimal point and the number of
digits beyond the decimal point specified with the “.prec” parameter.
.prec Controls the precision of the numeric field. What precision defines depends
on the format type specified by the type variable.
For exponential (type e) and floating point (type f or g) notations, specify the
number of digits to be printed after the decimal point.
EXAMPLES
The following table shows examples of valid format specifiers for each FactoryLink data type.
For more information about format specifiers, see any ANSI-C reference manual.
alarm persistence 15
at startup 15
locally redefined 15
parent/child relationships 28
update operation 423, 441
logical 430, 436, 437, 439, 446
update trigger 434, 447
V
value cursor 541
variable
embedded 446
FLHOST environment 413
input 434
Variables
declaration 328
size in message 28
specifiers (in Alarms) 28
specifiers (in File Manager) 225
verbose-level parameters 373
viewing
domain associations 511
W
wildcard characters 227