jBASE
Release R15.000
June 2015
Warning: This document is protected by copyright law and international treaties. Unauthorised reproduction of this document, or any portion of it, may result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under law.
Table of Contents
Introduction
Purpose of this Guide
Intended Audience
Overview
Dataguard
Components
Databases
Transaction Journaling
TRANSACTIONS
Transaction Boundaries and Locking
Transaction Processing within Transaction Journaling
DATABASES
Concept
Configuration
DATABASE CONTROL COMMANDS
DB-START
Syntax
DB-PAUSE
Syntax
DB-SHUTDOWN
Syntax
DB-RESUME
Syntax
DB-REMOVE
Syntax
DB-STATUS
Syntax
TJ Configuration
jediLoggerConfig
jediLoggerAdminLog
jediLoggerTransLock
Configuring Transaction Journaling
Monitoring TJ
jlogstatus
JLOGSYNC
SYNTAX
SYNTAX ELEMENTS
JLOGMONITOR
SYNTAX
SYNTAX ELEMENTS
JLOGDUP
SYNTAX
SYNTAX ELEMENTS
INPUT_spec/OUTPUT_spec
timespec
Examples of use
Verifying a logging tape
Terminating jlogdup
Resilient Files
Resilience
Intended Audience
This User Guide is intended for the use of Internal Temenos users and Clients.
- DataGuard
- File Handling
- jBASE Data Provider
- jBASE Indexes
- jBASE Internationalisation
- jBASE ODBC
- jBASE Remote File Service
- jBASE Triggers
- Environment Variables
- JDBC Driver
- jDLS Distributed Lock Service
- jQL
- SQL Engine
- Transaction Journaling
- Overview
- Transactions
- Databases
- Database Control Commands
- TJ Configuration
- Monitoring TJ
- jlogsync
- jlogmonitor
- jlogdup
- Resilient Files
- Recovery
- Warmstart Recovery
- Media/Computer Failure and Recovery
- Sample System Configurations
- Resilient T24 Configurations
- Run Time
Traditional jBASE systems essentially comprise three parts: user- and system-related files ("the database"); an application suite of programs to manipulate the data in the database ("the application"); and a DBMS system comprising jBASE programs and user-developed programs to service database requests made by the application. The database is the only component which requires special attention with regard to resilience; the others can merely be reloaded from an archive image. The database is the only fluid component: it changes from day to day and probably from second to second. This document describes the features of jBASE which exist to protect the database from potential problems, as well as the methods to use when confronted by each such circumstance.
Databases
The database is the collection of data which supports any business. This valuable commodity must be protected as much as possible and be restored to a known, stable state when the computer facilities fail to perform normally. The database comprises not only application data, but also configuration data pertaining to the users of the computer (along with their access rights and restrictions) and the peripherals connected to the computer. The configuration data is not part of the data resilience referred to in this document. Any changes to such data should be archived (normally during the O/S archiving procedures).
Transaction Journaling
Transaction Journaling provides the capability to prevent permanent data loss following a media or system failure. The Transaction Journal is a copy of database updates in chronological order. In the event of such a failure, the Transaction Journal may be replayed, thus restoring the database to a usable state. Transaction Journaling preserves the integrity of the jBASE database by ensuring that logically related updates are applied to the database in their entirety or not at all.
These are the main transaction journaling administration utilities provided within jBASE:
jlogstatus - This command allows the administrator to monitor the activity of transaction journaling.
Selective Journaling
The jBASE journal does not record every update that occurs on the system. It is important to understand what is and is not automatically recorded in the transaction log.
What is journaled? Unless a file is designated unloggable (i.e. by use of the jchmod -L filename command), everything updated through the jEDI interface is journaled. This includes non-jBASE hash files such as directories.
What is not journaled?
- Operations using non-jBASE commands, such as the 'rm' and 'cp' commands or the 'vi' editor.
- The UNIX spooler files.
- Index definitions.
- Trigger definitions.
- Remote files using jRFS via remote Q-pointers or stub files.
- When a SUBROUTINE is cataloged, the resulting shared library is not logged.
- When a PROGRAM is cataloged, the resulting binary executable file is not logged.
It is recommended that most application files be enabled for transaction journaling. Exceptions to this may include temporary scratch files and
work files used by an application. Files can be disabled from journaling by specifying LOG=FALSE with the CREATE-FILE command or by
using the -L option with the jchmod command. Journaling on a directory can also be disabled with the jchmod command. When this is done, a
file called .jbase_header is created in the directory to hold the information.
Remote files are disabled for journaling by default. Individual remote files can be enabled for journaling by using QL instead of Q in attribute 1 of the Q pointer.
Example
<1>QL
<2>REMOTEDATA
<3>CUSTOMERS
In general, journaling on specific files should not be disabled for "efficiency" reasons as such measures will backfire when you can least afford it.
Selective Restores
There may be times when a selective restore is preferable to a full restore. This cannot be automated and must be judged on its merits.
For example, assume you accidentally deleted a file called CUSTOMERS. In this case you would probably want to log users off while it is restored, while certain other files may not require this measure. The mechanism to restore the CUSTOMERS file would be to selectively restore the image taken by a jbackup and then restore the updates to the file from the logger journal. For example:
If required, use the jlogdup rename and renamefile options to restore the data to another file.
Note: In order to preserve the chronological ordering of the records, do not use an SSELECT command on the time field. This may not produce the correct ordering (multiple entries can occur during the same time period, the granularity being one second).
Resilient Files
Resilience is the ability of a file to remain uncorrupted in adverse conditions such as system failure. The implementation of resilient files is essential for warmstart recovery, which guarantees recovery from failure by rolling forward from the transaction journal, with or without a system restore.
If a file is structurally corrupt, database-level updates cannot be applied to it; this prevents the possibility of a roll forward and hence invalidates the warmstart recovery. Logical database corruption will be resolved by the roll forward.
A resilient file must have a singularity of update where one disk operation cannot rely on another in a change of file structure. For this reason
new substructures are built within the file before a single disk operation redirects the file to the new structure and the old structure is released.
The functionality of the restore process, jrestore, has been extended to allow for the automatic roll-forward of logsets after a database restore
has completed. This extension uses the Transaction Journal configuration (JediLoggerConfig) which was active at the time of the last backup
along with the corresponding Transaction Journal Logfiles.
Warmstart
This facility is designed to enable the databases defined by the administrator to be brought back to a stable, working position, following a
power failure. Without this it is not clear whether all transactions have been committed to the database following such events. Databases
which have been shutdown prior to the power outage will not require recovery, so recovery is not attempted on them. Those databases which
were active at the time the computer lost power will be recovered. This recovery will take the form of a database roll-forward of all complete
transactions. A complete transaction is deemed to be one which has entered the commit phase of processing. Those transactions which were
incomplete will not be recovered at all. The databases will be left in a consistent state following recovery. It is the database administrator’s
responsibility to determine which transactions require re-entry.
Database transactions are a group of logically related file updates. These updates are intended to be processed as a whole. In order to maintain
database consistency, all of the updates within a transaction must occur or none of them.
For instance, assume there is a standing instruction wherein the bank needs to debit customer A's account with USD 500 and credit customer B's account, on account of house rent payable by A to B. This has to happen on the 1st day of each month. In the above-mentioned transaction, two important tasks need to be carried out.
It is vital that either both of the above-mentioned tasks happen or neither of them happens. If only one of them happens, it would lead to database inconsistency.
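A minimal jBC sketch of such a transaction follows; the file name, record ids and field positions are illustrative assumptions, not part of any real schema:
OPEN "ACCOUNTS" TO ACCOUNTS ELSE STOP 201, "ACCOUNTS"
TRANSTART ELSE CRT "Transaction failed to start" ; STOP
READU REC.A FROM ACCOUNTS, "CustA" ELSE TRANSABORT ; STOP
REC.A<1> = REC.A<1> - 500 ;* debit customer A by USD 500
WRITE REC.A ON ACCOUNTS, "CustA"
READU REC.B FROM ACCOUNTS, "CustB" ELSE TRANSABORT ; STOP
REC.B<1> = REC.B<1> + 500 ;* credit customer B by USD 500
WRITE REC.B ON ACCOUNTS, "CustB"
TRANSEND THEN CRT "Transfer committed" ELSE CRT "Transfer aborted"
Until TRANSEND enters the commit phase, both WRITEs are held in the transactional cache, so either both updates reach the database or neither does.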
(i) The jBC command "TRANSTART ...", when executed by the jBC runtime system, causes a "TRANSTART" record to be placed in the transactional cache for this user.
(ii) & (iii) "WRITE" records, containing not only the record data but details about the origin of the updates (see Viewing the Transaction Journal in a later section), are cached following the "TRANSTART" record.
(iv) The jBC command "TRANSEND ..." causes the process to enter the "COMMIT" phase of execution. Up to this point no data has been written either to the Transaction Journal or to the database. The following procedure is then followed:
Concept
Until recently there has been no way to manage and control subdivisions of the application (departmental control) or to duplicate the application/DBMS to support more than one instance of the database (multi-customer hosting). The database grouping is achieved by the use of the JBASE_DATABASE environment variable; not specifying this will result in the user being assigned to the "default" database group. This allows the system administrator to control access to various populations of users/applications, without affecting the other users/applications.
Departmental Control
Users may now be assigned a target grouping or "database" when they access applications. This "database" enables finer control over which groups of users may access the database. This grouping is likely to be on a departmental or functional basis, e.g. users may be assigned to the "Sales" or "Accounts" database or even the "Administrators" database. The physical database may thus be physically or logically split by functional areas. Control of each of these areas is by the assigned "database" name. Thus it is possible to restrict access to the database to only, say, those users who are in the "Administrators" database group. The database could be designed such that each functional area contains files pertinent to that area, and files which are shared between functional groups are stored in a central repository, with access available to all.
It should be stated that this "database" grouping is not intended to replace the file ownership and access permissions which are normally in existence.
Multi-customer hosting
An application could be replicated such that provision is made to support multiple customers, each running the same application but with each having their own copy of the database files. In this instance the "database" grouping could be by customer, thus allowing control over each distinct customer database.
Configuration
The default configuration of databases is as follows:
There will always be a "default" database file and a "databases_defined" file defined within the system. In order for the system to run, the environment variable "JBCRELEASEDIR" must exist in order to find where jBASE resides. This will be the default entry within the "databases_defined" file.
Environment Variables
Two environment variables may be used to assign a user or application to a particular database and to use a particular set of Transaction Journal files:
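A hedged sketch of setting these in a UNIX shell; the database name and configuration path are illustrative:
export JBASE_DATABASE=Sales
export JBCLOGCONFDIR=/jbase/config/sales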
Syntax
DB-START {-nt}
Where:
This command is used to start a database. Upon completion, users/applications which have been configured to use this database may then do so. Prior to this point the following message will be displayed to the user/application:
The use of a particular database is trapped very early on in the creation of an application process. If the expectation is that the database should be available for use, then the system administrator should be contacted for resolution.
The DB-START command not only controls access to a particular database, but is also used to define the location of the configuration files for Transaction Journaling operations. If the "-t" switch is not used, then the default location ($JBCRELEASEDIR, or %JBCRELEASEDIR% on Windows platforms) will be used to record the location of the "config" directory/folder. This information is used by the "Warmstart" facility in order to provide recovery in case of power failure.
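For example, a plausible invocation (the database name is illustrative) would be:
DB-START -nSales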
The DB-START command will write two entries into the “databases” directory/folder:
The first file will be named after the database name as specified by the "-n" switch. If no database name is specified, then this will default to the creation of a file called "default". This file is used to hold the status of the database and will contain the following identifier:
JBC__DB
The remainder of the file contains information about the database itself, notably the state of the database.
The second file involved in recovery is the “databases_defined” file within the “databases” directory/folder. If this file does not exist, then it will
be created during the execution of the DB-START command. Each entry within the “databases_defined” file will take the following form:
or
%JBCRELEASEDIR% (Windows)
Note: Each field is separated by a space character. The above example shows a sample configuration. The databases “default” and “HR”
will both use the default configuration for Transaction Journaling, whereas “Sales” and “Accounts” will each have their own set of Trans-
action Journal log files. All databases will use the same set of jBASE executables.
Syntax
DB-PAUSE {-anrt}
Where:
- -a - Administrators are still allowed access to the database; as such this option must be used with care.
- -n - Database to pause.
- -r - Read-type operations are still allowed on the database. Write operations, including DELETE-FILE, FILELOCK, CLEARFILE, WRITE and DELETE record, will be paused.
- -t - Transactions are allowed to complete.
The DB-PAUSE command is used when the administrator wishes to selectively pause the named database. The pause effected by this command prohibits all access to the database from this time, dependent on the options chosen. Processes will wait until this condition is cleared from the database, with no application programming required to effect this wait.
- TRANSTART
- READs
- WRITEs
- DELETEs
- TRANSEND
When the TRANSTART instruction is executed, the process is deemed to be "in a transaction" for database purposes. No database updates have occurred up to this time, but they are cached. The "-t" option refers to those processes which have entered this state. Once a transaction has been processed fully, this state is exited. The process will then be paused, depending on the other options chosen.
Syntax
DB-SHUTDOWN {-ant}
Where:
This command will allow the system administrator to shut down databases in an orderly manner. This allows for a clean system shutdown, ensuring database integrity. The effect on processes is the same as for DB-PAUSE.
Syntax
DB-RESUME {-n}
Where:
- -n - Database name
This command will set the specified database to active – no restrictions on update will be in effect.
Syntax
DB-REMOVE {-n}
Where :
This command will remove the specified database from the databases directory/folder. If the defined database is the "default" database, then the command is ignored; otherwise the database definition is removed from the "databases_defined" file. The "databases_defined" file is used by the WARMSTART utility when recovering a database following a power failure.
Syntax
DB-STATUS {-antvwV}
Where:
- -a - All databases
- -n - Database name
- -t - Display users inside a transaction.
- -v - Verbose mode.
- -w - Display users currently waiting for DB-RESUME.
- -V - Very verbose mode.
This command allows the system administrator to determine the state of each defined database. The following examples show various states of the defined databases.
Example 1
:DB-STATUS
Example 2
Defined databases set to different states; the description is self-evident as to the state of each.
jediLoggerConfig
This file is the repository for all configuration and operational details concerning Transaction Journaling. The default location of this file is $JBCRELEASEDIR (or %JBCRELEASEDIR% for Windows computers). For a system with one active Journal, this will be the location of its configuration. If other system topologies require separate Journaling facilities (for separate databases, say), then the environment variable JBCLOGCONFDIR is used to identify the location of such configurations.
jediLoggerAdminLog
This file contains logged data regarding the running of Transaction Journaling. The details in this file refer to changes to the Journaling configuration as well as error/warning messages generated by the Journaling system.
jediLoggerTransLock
This file is used by the Journaling system to act as a lock table during checkpointing. No user information is contained therein.
jlogadmin
The jlogadmin command allows for the administration of the jBASE Transaction Journal. The jlogadmin command is enabled for interactive usage when invoked by the super-user/Administrator; execution by other users is restricted to read-only. All administration tasks contained within the jlogadmin utility can also be invoked from the command line, using jlogadmin with optional parameters.
When the jlogadmin command is executed interactively, navigation to the next field is by using the tab key or cursor-down key, and to the previous field by the cursor-up key. Each field can be modified using the same editor-type commands as are available in jsh. Changes to a particular field are effected by the <Enter> key, and CTRL-X is used to exit from interactive mode.
Interactive Display
The first execution of jlogadmin will display the following screen:
Status:
Specifies the current transaction journal status, which can be On/Active, Off/Inactive or Susp/Suspended. Note: When the status is changed to Suspended, all transactions which would be updated in the transaction log file will also suspend, awaiting a change of status.
Current log set:
Specifies the current log set in use. There are four possible log sets, numbered 1 to 4. An entry of 0 indicates that no log set has been chosen at this time.
Extended records:
Specifies whether additional information (the application id, the tty name and the login name) will be added to the jBASE transaction journal for each update.
Sync time:
Specifies the number of seconds between each synchronization of the log set with the disk. All memory used by the log set is force-flushed to disk. Should the system crash, the maximum amount of possible data loss is limited to the updates which occurred since the last log set synchronization.
Checkpoint time:
Specifies the number of minutes between system checkpoints. After a transaction has completed, this time is checked. If it has expired, then a system checkpoint is performed.
Log notify program:
This specifies the program to execute when the warning threshold of the log set is reached. The log notify program is called every time a message is written to jediLoggerAdminLog. The text of the message can be captured by adding arguments to the command line, which the notify program can examine using SENTENCE(). For example, possibly define the program as:
An example of a log notify program, "switchlogs", may be designed to allow automatic switching of the logset when the warning threshold is reached:
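The following is a hedged jBC sketch of such a program; the message test and the suggested action are assumptions, not a prescribed implementation:
* switchlogs - sketch of a log notify program
MSG = SENTENCE() ;* the logged message text passed on the command line
IF INDEX(MSG, "threshold", 1) THEN
CRT "Journal warning threshold reached: " : MSG
* a real program might switch logsets here, e.g. by EXECUTEing jlogadmin
END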
The program identified by the "log notify program" is called each time that a message is entered into jediLoggerAdminLog. It is the responsibility of the called program to deal with the reason for the message being entered. The function SENTENCE() returns information from jediLoggerAdminLog about the latest entry.
NOTE: The message is designated INFORMATION, WARNING or FATAL ERROR. This designation can be used by the log notify program to
decide on a course of action. The messages that can be logged are:
Log file warning threshold set to p initial percentage thereafter every additional q percent or n seconds    Yes
Kill initiated on jlogdup process id pid : Process id pid from port n    Yes
Warning threshold:
If the amount of space consumed in the file system on which the active logset resides exceeds the specified threshold, the log notify program is run. Individual files in a logset have a capacity of 2GB. If the logsets are not switched, files in a logset can grow to the 2GB limit without the file system reaching the threshold capacity. If this happens, journaling will cease to function predictably and normal database updates may fail.
Sync Transactions
An option, "SYNC", exists for the TRANSTART command which will force-flush the database and journal following a transaction commit. The option in jlogadmin allows for this behaviour to be invoked globally. If "Sync Transactions" is set to "on", then all committed transactions will cause the force-flush. If set to "off", then committed transactions will not automatically force-flush the database and journal unless the "SYNC" option is present in individual TRANSTART commands.
Encryption:
The transaction journal is not normally encrypted. This option will allow the data content of each record to be encrypted on disk. The data content of each record will be encrypted with an (internally-specified) industry-standard encryption scheme, using an internal key. The record headers remain unencrypted so that all utilities accessing the journal will be unaffected.
File definitions:
As indicated above, the maximum size of an individual file is 2GB. It is clear that if a single file were used for the log file, this would likely be insufficient for most realistic application environments. Therefore the administrator is able to set up a log set consisting of a maximum of sixteen files, thus enabling a maximum log set of 32GB. The configuration will allow for a maximum of four log sets. Usage and switching of the four log sets is described in the appropriate sections. If a file specified by the administrator does not already exist, then it will be created automatically.
Command-Line Syntax
In addition to the interactive screen setup facility, there are options which can be added to the jlogadmin command execution. This allows the
administrator to create scripts which can be run either at pre-defined times or intervals; or in response to transaction journal events (usually
error handling events).
jlogadmin -{options}
Where {options} are identified below:
SYNTAX ELEMENTS
Option Description
-h    Display help
-i[1-4],filename{,filename...}    Import a log set to override one of the 4 standard log sets. The -o argument is optional. If used it suppresses the warning and confirmation message. You can specify up to 16 filenames to define the imported log set.
-o    Perform the operation without checking if the specified log set is empty. Used with -f and -t.
-t    Truncates log set n. The log set may not be the current switched set. This option ensures that disk space will be freed and is sometimes preferable to "rm", which may not free the disk space if any process still has the log files open.
Defining Logsets
The following diagram illustrates the constituent parts of a Transaction Journal installation
Each logset should, ideally, be defined within a separate filesystem/partition. The definition of the logset can either be the root of such a filesystem/partition or some sub-directory therein. Each logfile within such logsets is a special file; the implication of this is that they should not be created/restored without using the jlogadmin utility. N.B. This will not only create the files where specified but will also enter such configuration in the jediLoggerConfig file.
Note: Because the logfile does not exist, the operator is asked to create it. If e:\logset 1 does not exist, then a message will be displayed:
Following the completion of the creation of this logfile, the operator moves to the next file definition (2) by tabbing to the next field. When all three files have been created in this way, the log set number is changed from "1" to "2" by pressing the cursor key several times. The same procedure may now be followed to create the logfiles for logset 2.
The following command lines may be used to create both logsets thus:
If the "-c" option is omitted, then the files will be created automatically without prompting the operator. The same caveat still applies: if the logset directories do not exist, then the commands will fail.
(i) Access to the logger is restricted to this process. Other processes will wait until it is unlocked.
(ii) Writes to a logfile are contained in 4k blocks. If the record (and associated update information) can fit into the current block, then it is allocated as much space as it needs in that block.
(iii) If the current block is too small, then the remainder of the block is allocated, the next logfile is selected and the test for fit is repeated.
(iv) Once all of the requested update size has been allocated, the configuration file is updated with details of which logset to update next, which file in that logset, and the offset in that file at which to write the next data.
(v) The logger is now unlocked; the next process may now allocate space in the logger.
(vi) The space having been allocated in the logger, this process may now write the updated record data to the assigned space. This allows for a rapid throughput of logger space allocation, while allowing asynchronous writes to the logger.
(vii) The process can now write to the logger asynchronously, knowing that the allocated space cannot be written to by other processes. The use of asynchronous writes is vital during writes of large updates. Note that utilities which access the logfiles directly are aware of this situation and will retry the reads of the logger until a complete record is read.
(viii) The next process requiring writing to the logger may now do so.
Observing the use of the buffers in step (iii), the writes to the logfiles contained in a logset are made in a "striping" manner. The file space initially used when creating a logfile is approximately 4k. As allocated buffers in a logfile are used, the logfile grows accordingly. So if only one logfile were allocated to a logset, then once the 2GB limit was reached, Transaction Journaling would be suspended. Now if (say) 16 logfiles are allocated in a logset and the "striping" of each file is used to contain data on a round-robin basis, it can be seen that by the time the first logfile allocated exceeds the 2GB limit, each of the other logfiles in the logset would be almost at that limit. So Transaction Journaling would be suspended only after almost 32GB of journal information has been stored.
Logset Switching
As stated above, there can be up to four logsets defined. The number of logsets which need to be defined is dependent on particular system
operation requirements.
Single Logset
This command switches to logset 1 (make current) and then sets Transaction Journaling active.
These commands take advantage of the fact that when a logset is re-used, it is automatically truncated to an empty state.
Note: The “Log notify program” may be used to automate this switching as previously described.
Multiple Logsets
The normal configuration for Transaction Journaling is to use at least two logsets. If the maximum logset usage between backups is greater than 32Gb (?), then multiple logsets will have to be defined to increase this capacity. This is not the normal case. Multiple logsets are normally used so that the updates since the penultimate backup are preserved. This has two benefits: if there is a problem with the last backup, the administrator has the option of recovery to the previous backup, followed by a roll-forward of the transactions since that backup. The operation is then:
Notes:
- This is done to ensure, positively, that no more updates are added to the Journal.
- This logset will be set to an empty state automatically.
- This archive would be used if the backup had failed and the previous backup were used instead for recovery.
TJ1
JBC__SOB jBASE_TJ_INIT SET: set=current terminate=eos
When a file of type TJLOG is created, it generates a set of dictionary records in the dictionary section of the TJLOG file, which is a normal j4
hash file. The data section of a TJLOG file is handled by a special JEDI driver which accesses the current log set. The log set can be changed by
additional parameters when creating the TJLOG file after the TYPE specification.
Example
CREATE-FILE TJ2 TYPE=TJLOG set=eldest
7 TRANS Trans
Type Description
Selective Restores
The jlogdup command enables selective restores to be performed by preceding the jlogdup command with a select list. The select list can be
generated from the log set by generating a special file type, which uses the current log set as the data file.
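A hedged sketch of such a selective restore follows; the TJLOG file TJ2 is as created earlier, while the dictionary attribute names used in the selection are assumptions:
SELECT TJ2 WITH FILENAME EQ "CUSTOMER" AND WITH TYPE NE "CLEARFILE"
jlogdup input set=current output set=database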
In this example, all updates to the CUSTOMER file which have been logged, except for any CLEARFILEs, are re-applied to the CUSTOMER file.
Note: This type of operation must be used with great care! It is highly possible that the database may be left in an inconsistent state if an individual file is rolled forward. If transactions contain updates to more than one file (the normal case), then regard must be paid to the other file updates occurring within those transactions in order to maintain database integrity.
jlogstatus
The jlogstatus command displays the status of the jBASE Transaction Journal. In its simplest form, the jlogstatus command shows a summary of the current Transaction Journal activities. Additional command line options are available for output that is more verbose. The jlogstatus command can also be used to present a rolling status screen, using the '-r n' option, which will update the display every 'n' seconds.
SYNTAX
jlogstatus -options
SYNTAX ELEMENTS
Option Description
-h display help
-v verbose mode
Example
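A plausible invocation, built from the options described above, is:
jlogstatus -v -r5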
This will display all information and will refresh every 5 seconds.
When a jBASE application performs a database update, it writes to the transaction log file (if active). It does this to a memory image, and normally it is up to the platform file system to flush the memory image to disk every so often; by default on most platforms this is usually every minute.
You can use options in jlogadmin so that the jBASE processes themselves do this file synchronization more often. The default is every 10 seconds. This means that in the event of a system failure, you will lose at most 10 seconds' worth of updates.
The use of the jlogsync program means that the jlogsync process, instead of individual jBASE processes, performs file synchronization, thereby alleviating the overhead of the synchronization from the update processes. The jlogsync process is not mandatory; however, in a large installation it may provide beneficial performance gains.
SYNTAX
jlogsync -options
SYNTAX ELEMENTS
Option Description
-v verbose mode
The most common way of starting jlogsync is by using the "-i" and "-b" options. This will start the process in the background. The command will typically be used in a machine startup script, prior to allowing multi-user mode.
jlogsync -i
jlogsync: Started on pid 1640
The daemon may be killed by the administrator by use of the "-k" option. No message is displayed unless the kill fails, in which case "kill" will be displayed.
When jlogsync is initialized, a default inactivity timeout of 5 minutes is set up to determine whether the daemon is still working correctly. If this time expires and the daemon has not done anything in the meantime, it is deemed at this point that the daemon has died prematurely. The "-tnn" option allows for an inactivity timeout period of "nn" seconds. This value can be any value greater than 60 seconds (despite the "nn" description).
When this option is used, details of the last sync events are displayed along with details of the inactivity timeout and logset warning values.
The jlogmonitor command can be used to monitor potential problems with the jlogdup process. It will report errors when specific trigger events occur. jlogmonitor can be run in the foreground but will usually be run as a background process (using the standard -Jb option).
SYNTAX
jlogmonitor {-h|?} {-ccmd} {-Cnn} {-Dnn} {-E} {-Inn} {-Snn}
SYNTAX ELEMENTS
Option Description
-Cnn    If the file system utilization of the journal log exceeds nn% full, then an error message is displayed. The error message is repeated for every 1% increase in file system utilization.
-Dnn    If the jlogdup process processes no records (or if there is no jlogdup process active), then after nn minutes of inactivity it displays an error message. It repeats the error message every nn minutes while the jlogdup process(es) remains inactive.
-E    If the jlogdup program reports an error, this option causes jlogmonitor to also display an error. You can view the actual nature of the error by either looking at the screen where the jlogdup process is active, or by listing the jlogdup error message file (assuming the -eERRFILE option was used).
-h    display help
-Inn    The status of the Journaler can be ACTIVE, INACTIVE or SUSPENDED. If the status of the journaler is either INACTIVE or SUSPENDED (with jlogadmin) for more than nn minutes, it displays an error message. The error message will be repeated every nn minutes that the journaler is not active.
-Snn    Use this option to determine if any updates are being applied to the journal logs. If no updates are applied to the current journal log set for nn minutes, it displays an error message. It repeats the error message for every nn minutes of system inactivity.
Note: You must specify at least one of the options, -C, -D, -E, -I or -S.
Examples:
- -Cnn
A monitor may be set up which will display a message once the warning threshold (as defined in jlogadmin) has been reached. The monitor will then wait until the percentage full has increased by 1%, at which point a new message indicating this is displayed. This will continue indefinitely (or until aborted).
jlogmonitor -C10
09:43:30 14 DEC 2006 Journal File System capacity exceeds 10% , actual 89%
09:46:30 14 DEC 2006 Journal File System capacity exceeds 10% , actual 90%
- -ccmd
- -Dnn
This option allows the operator to monitor any jlogdup processes which may be running. If there is no activity for the specified time, then an error message is displayed. Note that this command will report inactivity for all running jlogdup processes. It is not possible to specify one of many jlogdup processes to monitor.
jlogmonitor -D1
- -E
If one or more jlogdup processes are reporting errors, jlogmonitor may be used to display this condition. The process will interrogate all running jlogdup processes for errors which have been encountered. If any are reporting errors, a message similar to the following will be displayed:
jlogmonitor -E
Further information about any such errors can be found on those screens running the jlogdup processes which are reporting errors.
- -Inn
If journaling is suspended or stopped for any period, jlogmonitor may be used to trap such occasions. The "nn" parameter is in minutes, so if journaling is stopped/suspended for more than this time, a message to that effect will be displayed.
jlogmonitor -I1
- -Snn
This option is similar to the "-I" option, but will display a message if no updates have been made to the journal for "nn" minutes.
jlogmonitor -S1
16:13:07 18 DEC 2006 No reported activity being applied to the journal log sets
16:14:07 18 DEC 2006 No reported activity being applied to the journal log sets
The options may be combined on the command line to trap any or all of the possible conditions described above.
So, combining the options described above:
16:15:25 18 DEC 2006 No reported activity being applied to the journal log sets
16:16:25 18 DEC 2006 No reported activity being applied to the journal log sets
This indicates the reason for there being no updates to the logger.
The jlogdup command provides the capability to duplicate transaction log set data from the jBASE Transaction Journal. The transfer may, in the simplest case, be an archive of the Transaction Journal to an external device, or it may be used in a combination of transfers to produce a "hot standby" machine. The whole or part of a transaction logset may be transferred, either following a jBASE SELECT statement or by specification on the jlogdup command line. The transfer process(es) may be monitored utilising a comprehensive range of dynamic statistics.
SYNTAX
jlogdup -Options INPUT input_spec OUTPUT output_spec
An “input specification” consists of a source device for the transfer with optional run-time parameters and an “output specification” consists
of an output device and associated optional run-time parameters. The “Options” parameters are generally used to display/record information
about the transfer overall.
SYNTAX ELEMENTS
Options
Option Description
-f used with the -v or -V option; shows information for the next (future) update; by default
-h display help
INPUT_spec/OUTPUT_spec
The input/output specification can specify one or more of the following parameters
Parameter Description
device=file%dev (S) the file name for SERIAL device. Can be more than one
renamefile=file (O) use rename file list of format ‘from,to’ to rename files
set=current (IL) begin restore/duplication using the current log set as input
set=stdin (IT) the input data comes from the terminal stdin
set=logset (OL) the output is directed to the current log set as an update
terminate=wait (I) switch to elder log sets as required and wait for new updates
terminate=waiteos (I) switch to elder log sets as required and wait for new updates until the logset is switched, then terminate
Indicator Meaning
(I) valid in an input specification
(O) valid in an output specification
(L) refers to a log set
(S) refers to a SERIAL device
(T) refers to the terminal
(These meanings are inferred from the parameter annotations above.)
timespec
The time specification, used in the ‘start=’ and ‘end=’ specification can be one of the following formats:
Timespec meaning
filename regular file, use the time the file was last modified
Examples of use
In order to expand on the description of each of the many specifications and options, a series of example usages will be used for illustration.
If the Journal, depicted above, contains 4 logsets: logset1-4 and logset2 is the active logset, then a snapshot of this logset may be made to
either a real tape drive e.g. /dev/rmt0 on AIX or a tape image file E:\jrnl_save on Windows, so:
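Hedged sketches using the device paths mentioned above; "set=serial" for the output specification is assumed here from the (S) indicator in the parameter table:
jlogdup input set=current output set=serial device=/dev/rmt0
or, on Windows:
jlogdup input set=current output set=serial device=E:\jrnl_save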
current or 0-4
The current logset refers to the logset which is selected for use at this time within transaction journaling. This logset may be active or inactive. Logset 0 is a special case and means that there is no logset currently being used at this time. It is possible to define up to 4 logsets, and the numbers 1-4 refer to a specific logset.
It can be seen that the input set, whether specified as current or 2, refers to the same logfile data and that this is the source of the transfer.
eldest
Logsets may have been switched since the last backup, so the updates made to the journal may exist in more than one logset. Considering that the data in logset1 contains the oldest data and logset2 contains the more recent, a command such as the following may be used:
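A hedged sketch (the device path is illustrative):
jlogdup input set=eldest terminate=eof output set=serial device=/dev/rmt0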
This will take all the data in logset1 and all in logset2 (to this point) and output to the destination as specified by the “output spec”.
blocksize
The output specification indicates where to put the logfile data. The size of the blocks written to a tape device can be specified using the blocksize parameter, thus:
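For example (a sketch; the block size value is illustrative):
jlogdup input set=current output set=serial device=/dev/rmt0 blocksize=16384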
blockmax
In the likelihood that the tape capacity is less than the journal size, another parameter, "blockmax", may be used to specify how many blocks (as specified by "blocksize") may be written before the media is required to be changed.
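For example (a sketch; the values are illustrative):
jlogdup input set=current output set=serial device=/dev/rmt0 blocksize=16384 blockmax=6000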
Multiple Devices
When using tape devices it is possible to specify multiple devices so that, in the event of media overrun, the jlogdup operations may continue without intervention:
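A hedged sketch, using the "%" separator implied by the device=file%dev form above:
jlogdup input set=current terminate=wait output set=serial device=/dev/rmt0%/dev/rmt1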
When the end of the tape on /dev/rmt0 is reached (or blockmax is reached), operations will automatically continue on /dev/rmt1, and so on. A
check is made that the media being used is not reused from the same jlogdup operation (by timestamps). If there is a conflict, user intervention
is required.
prompt=true
If no automatic cascading of tapes is desired, the use of "prompt=true" on the command line will force operator intervention:
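For example (a sketch):
jlogdup input set=current output set=serial device=/dev/rmt0%/dev/rmt1 prompt=true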
null specification
The first snapshot command above has no "terminate=" specification; it will terminate when the end of the current logset (i.e. 2) is reached (unless terminated externally). This will only give a partial snapshot of the journal.
terminate=eos, terminate=eof
These specifications are normally used when more than one logset is defined and more than one logset contains valid logfile data; as above.
The command:
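(a hedged sketch; the device path is illustrative)
jlogdup input set=eldest terminate=eos output set=serial device=/dev/rmt0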
will back up all entries in the journal from the start of logset1 (the eldest) up to the last update in logset1; no further updates will be saved. Again, this is only part of the information required to recover all of the data.
or
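(again a hedged sketch:)
jlogdup input set=eldest terminate=eof output set=serial device=/dev/rmt0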
will take all updates from the start of the journal and output to the tape all records up to and including the last update on set 2, the current logset.
Note: Omitting "terminate=" will default to "eof", the end of all logsets containing valid data.
terminate=wait
What is more normal is that we want to transfer from the beginning of all logsets, transfer all of the logfile data and then wait for new updates, transferring them (e.g. to tape) as they arrive. The following will achieve this:
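A hedged sketch:
jlogdup input set=eldest terminate=wait output set=serial device=/dev/rmt0
The same transfer with "terminate=waiteos" instead:
jlogdup input set=eldest terminate=waiteos output set=serial device=/dev/rmt0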
This will perform as the previous example, with the exception that while waiting for new updates, if the logsets are switched then the jlogdup process will terminate. This may be used to trigger some batch operation, ensuring that all updates from that point will reside in another logset.
timeout
When using "terminate=wait" or "terminate=waiteos" it is possible to set a limit on the amount of time the process will wait for new updates into the journal. If the "timeout" option is missing, the process will wait indefinitely; otherwise it will wait for the number of seconds defined in the "timeout" option.
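For example (a sketch; 300 seconds = 5 minutes):
jlogdup input set=current terminate=waiteos timeout=300 output set=serial device=/dev/rmt0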
The jlogdup process will wait for 5 minutes or the switching of the logset before terminating.
retry
The "retry" option is used when attempting to re-read the journal for a complete record, and refers to the time delay between re-reads. A complete record may not yet exist in the journal when the update to the journal is from a slow device (e.g. a tape device) or is a large record, or a combination of both: the start of the record may have been written but the rest of the write may not yet have completed. This option allows the operator to change the default delay time of 5 seconds to another value in seconds. The re-read is attempted 10 times; this is normal operation.
The "retry" time allows the operator to override the default wait time of 5 seconds with some other value. The following, therefore, will wait for 3 seconds between retries:
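A sketch (placing "retry" on the input specification is an assumption):
jlogdup input set=current retry=3 output set=database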
verbose=true
This option confirms the input and responds with details about when and how the process will be terminated.
Start and end times are chosen by the operator. At any specified time there are likely to be one or more transactions open (i.e. records are being updated between transaction boundaries). During normal operation, when the destination is "database", jlogdup will alert the operator if a record to be transferred is part of a transaction whose transaction start record has not been detected. This is not a fatal situation, but it alerts the operator to those records so found. These records will not cause the database to be updated with their contents. These records will cause a message like the following:
is issued, then the fact that the updates were part of a transaction is ignored and the database will be updated. This may cause the database to
enter an inconsistent state.
Note: It is advisable that this option is not used without careful analysis of the outcome.
If a backup is now performed with the -sfilename option (create statistics file), this file may then be used as the start specification for jlogdup (after adding some more updates to the journal):
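A hedged sketch, assuming the statistics file was written to /tmp/jbkstats; per the timespec table, a file name used as a time specification means the time the file was last modified:
jlogdup input set=current start=/tmp/jbkstats terminate=eos output set=database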
Journal data may be transferred to a different computer by one of two techniques: using "stdin" and "stdout" via "rsh", or using sockets. Though the first method does work for Unix/Linux-based computers, because of its lack of security it is no longer the recommended method of transfer. A "socket" interface exists which allows the operator to manage the transfers more robustly.
stdout
To specify the output destination of a jlogdup transfer, a command like the following may be issued:
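A hedged sketch ("set=stdout" is assumed by symmetry with the documented "set=stdin"):
jlogdup input set=current terminate=eos output set=stdout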
The output from this command is omitted as it will contain non-printable characters.
stdin, database
rsh
To tie these two commands together it is usual to use rsh, the remote shell daemon. All activity is controlled from the local host (Nodej, here), which will execute a command on the remote host to run a jlogdup process on that computer (Nodek).
This script will set up the environment to run jBASE commands and then run a jlogdup process to update the database on Nodej.
/GLOBALS/JSCRIPTS/logrestore Script
tty
For the specifications “stdin” and “stdout”, the specification “tty” may be used instead.
Sockets
Note: The receiving jlogdup must be set up before the sending jlogdup; failure to do this will cause the sending jlogdup process to fail with
an error message:
Note: As the output device is “logset”, transaction journaling must be set up on Nodek and active. If there is a “current” logset defined but
not active, then a message similar to this will be displayed:
If logging has not been set up at all, the transfer will stop immediately and a message similar to this is displayed:
This message will be displayed periodically until logging is set active. If logging is subsequently made active then the transfer will complete as
normal.
Once Nodek has been set up, Nodej can be set up thus:
This will connect to the jlogdup process running on Nodek, transfer all the journal data in the current logset and then terminate. The ter-
mination of the jlogdup process on Nodej will cause the jlogdup process on Nodek also to terminate.
The command:
Will now connect, transfer all the journal data from the current logset and then wait for new updates, transferring the updates as they arrive.
This process will not terminate and will thus keep the socket open for transfers to Nodek.
This command will listen for a connection, then receive journal updates and output to the current logset. If the jlogdup process on Nodej ter-
minates, then this process will also terminate that connection and will return to listening for a new connection, and so on.
If “terminate=wait” is present on both ends of the socket then this will form a continuous client-server mechanism.
Note:
- If the "timeout" option is used on either end, then the operation will perform as expected, except in one instance: if the receiving end of the socket (on Nodek) is terminated by the operator, the sending jlogdup process on Nodej may be sending journal data or be waiting for new updates.
- If sending data, then the forced closure of the socket will force the termination of the sending jlogdup process and display an error message of the form:
rename
The rename option is used to change the location of files used within journal updates to other locations. It is typically used when transferring data between machines where the directory structure of the two machines differs. As each update is read, if the rename option is in effect, the destination location is changed on-the-fly:
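A hedged sketch; the "from,to" form is assumed to match the renamefile format described below, and the paths are illustrative:
jlogdup input set=current output set=database rename=/data/live/CUSTOMERS,/data/standby/CUSTOMERS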
renamefile
Using the following command will then produce the same result as the rename example above:
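A hedged sketch, assuming a rename file named RENAMES containing the line /data/live/CUSTOMERS,/data/standby/CUSTOMERS:
jlogdup input set=current output set=database renamefile=RENAMES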
Note: The rename file may contain many entries, one per line of the form “from,to” to effect many automatic redirections. Note also that
the content of the “from” field must be exactly as it appears in the journal and is case sensitive.
Journal transfers via jlogdup are normally made over an unencrypted stream, leaving the data unprotected during the session. The operator is able to specify on the jlogdup command line the form of encryption required for the session.
The first thing to note is that encryption is specified on the sending jlogdup process only; embedded information in the stream will identify that this stream of data is encrypted, the encryption scheme used and the key to use to encrypt/decrypt the stream. In order that the key (especially) and the scheme are not sent in clear-text format, the blocks sent between the two jlogdup processes undergo a further encryption using an internally-specified encryption scheme and key. Note that the encryption options are only allowed on output specifications.
Using the examples above and extending for encryption usage, the following will illustrate the use of this facility.
scheme
This is the encryption scheme to use for the transfer of journal entries. This mechanism utilizes OpenSSL high-level cryptographic functions. The valid specifications for encryption include:
- rc2
- blowfish
If key is omitted from the command line then a default internal value will be used.
key
This is the string to be used as the encryption key for the transfer of journal entries. If scheme is omitted on the command line, then a default internal value will be used.
encrypt=true
If either scheme or key is omitted, its value will be the internal default. If either key or scheme or both are set, then they will override the default internal values.
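A hedged sketch; the key value is illustrative, and encryption options appear on the output specification only, as noted above:
jlogdup input set=current terminate=eos output set=stdout encrypt=true scheme=blowfish key=MySecretKey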
Notes:
1. If the logset is encrypted, then this encryption is in addition to any transient encryption during jlogdup transfers.
2. If the logfile is encrypted on the source machine then:
- If the output set is "logset", then the resulting destination logset will also contain the encrypted records.
- If the output set is "database", then the encrypted records are decrypted prior to storage on the database.
- If the output set is anything else, then the encrypted records remain encrypted.
-e file
This option will produce an error log containing any update errors encountered during the jlogdup session. The file specified must exist as a hash file.
If an attempt is made to roll the database forward, there will be an error as NEW-FILE already exists. This will be reported to the specified error file.
-f future update
Used with the -v or -V option, this shows information for the next (future) update.
-h help
This option is used to display a help screen. It contains an overview of the command and all the reporting options.
This option is used to assign a file to which all status and error information may be stored. This is not the same as the "-e" option, in that this file will record not only the final status of the operation, but also a high-level description of any errors which may have occurred during the session.
Verbose options
Two options exist which allow the operator to view the records being worked on by jlogdup:
-v verbose
This option shows the journal update details ("*"-separated fields showing where, precisely, in the journal the record exists; the type of journal entry; the file being updated; and finally the record being updated).
-V very verbose
In addition, the very verbose option also shows the user name, the port number, and the time and date of the update:
The speed of database recovery may be improved by the “-x” option. This option must be used with care. No group locks will be taken when
the output set is to database. This is for recovery only, when there should be no processes updating the database.
This option will display all options for input/output specifications and timespec details, plus the output of the "-v" option.
Resilient files have the following characteristics: they are resistant to corruption in adverse conditions, and they have the ability to auto-resize themselves as the population of such files increases.
Resilience
For standard jBASE hashed files, the writing of an item may cause one or many physical disk writes, depending on the size of the item being written. If the series of writes is interrupted (by, say, a power failure), then the structure of the file may be compromised as the item may be partially written to disk.
The resilience (for Resilient files) is provided by running in SECURE mode, where any update resolves down to a single disk write, any dependent writes having been flushed to disk beforehand. Fundamentally, the body of the item is written and then flushed to disk. If a power failure occurs at this time, the "before image" of the item is still in existence on disk, with the integrity of the file being maintained. The intended update is abandoned (because of the power failure). Upon power being restored to the system, the database may not be in a consistent state if the failed update was part of a transaction. This does not present a problem, as the entire transaction will have been written to the Transaction Journal prior to attempting any database disk writes of the transactional data. The transaction will thus be replayed in its entirety, maintaining database consistency (via a roll-forward; this will be described later in the document).
In the normal course of events the final write of the item pointer on disk will not be interrupted; the pointer will be switched to the new version of the item, thus completing the item write.
Autosizing
With the increase in 24-hour operation there has been a corresponding decrease in the time available for system maintenance of hashed files. Standard hashed files become less efficient as the data population exceeds the original creation sizing, resulting in slower retrievals and updates, so an expanding hashed file requires regular resizing.
Resilient files need no resizing as there is no concept of overflow. When the data within a frame exceeds the available disk space it is split into a
pointer frame pointing to child data frames. The individual items within the frame are rehashed according to the split level and reallocated to
the appropriate child frame. The hashing algorithm base changes according to the split level to avoid common hashing paths.
Where standard hashed files have a linear expansion of search path (the number of data frames read according to population), resilient files have a logarithmic expansion of the order of the Modulo, so where an undersized hashed file may require 5 disk reads, a resilient file may require 3. A properly sized hashed file may require only one disk read, but that assumes regular system maintenance.
The logarithmic search path may imply an exponential file size expansion, but this does not happen in practice, as data frames which are not required are not allocated.
SYNTAX
CREATE-FILE TYPE=JR [EXTMODS=Modulo] [INTMODS=x[,y[,z]]] [SECURE=YES] [MINSPLIT=m] [HASHMETHOD=h] [SECSIZE=n]
[DEALLOCATE=P|D]
SYNTAX ELEMENTS
Parameter Description
EXTMODS    A comma-separated list of the moduli of split frames, default 31. When a data frame overfills it will change to a pointer frame of the order Modulo[level], with a maximum of Modulo[level] child frames, where items are rehashed according to the split level and hashing algorithm. There is a maximum of 32 moduli and each must be a prime between 3 and 509.
DEALLOCATE    The default behaviour is never to deallocate empty frames to the free list. This can be changed by setting D (deallocate data frames for reuse) or P (deallocate data and pointer frames). This can be changed on existing files using jrchmod.
INTMODS    Up to 3 prime numbers defining the internal hash table moduli, default 3, 7, 19. The cumulative product, i.e. x + x*y + x*y*z, cannot exceed 485.
MINSPLIT    Minimum split level of the file. The file will be preallocated to a minimum level of split frames from the Modulo list. This can have extremely adverse effects on performance and can produce excessive file size, so its use is not recommended.
SECSIZE Secondary record size, default 2048. Items exceeding this size are stored out of group, i.e. the
item retains its own data frame(s), referenced by a pointer.
SECURE The file is flushed at critical junctures such that any file update will rely only on a single disk
write. This maintains the file structure in the event of system failure.
EXTMODS
Up to 32 comma-separated prime numbers specifying the external modulo for this file. Only one modulo is usually provided, default 31.
DEALLOCATE
Determines whether a frame which becomes empty through deletion is deallocated from the main structure onto the free list. Can be D to deallocate data frames or P to deallocate data and pointer frames. The default behaviour is not to deallocate any frames.
HASHMETHOD
The hash method as used with all hashed files. The default method of 5 is recommended.
INTMODS
A newly created resilient file consists of a single 4096 byte header containing, amongst other things, an internal hash table up to three levels
deep.
The size and depth of the internal hash table is specified by the INTMODS parameter and by default take the values 3, 7 & 19. The INTMODS
values must be prime and ascending, and the table must fit in the available space in the file header.
MINSPLIT
The MINSPLIT value forces a table to be created with a minimum split level & would normally be used only where the future data population is
known to be large and will remain large throughout the lifetime of the file. In general resilient files control their own sizing and MINSPLIT is
not required.
MINSPLIT can create extremely large files as it is an exponential sizing parameter. Assuming default parameters (3, 7, 19 & 31) this table shows the resultant file size:
MINSPLIT File size (bytes)
0 4096
1 1,638,400
2 50,667,520
3 1,570,570,240
4 48,687,554,560
If the current data profile is not known or the future profile not predictable then the use of MINSPLIT is not recommended.
SECSIZE
As with all hashed files, if an item size exceeds SECSIZE then the record data is given its own linked chain of data frames and only the record
key and a pointer to the data are stored inline. This is known as out of group (OOG) storage. Storing data OOG saves resources when searching
or updating a group.
SECURE
When SECURE=YES is specified updates are flushed to disk where necessary to maintain the structure of the file in the event of a system fail-
ure. This will affect file performance.
x + x*y + x*y*z
By default this is 3 + 3*7 + 3*7*19, or 423; this is the figure that cannot exceed 485, above. The deepest level of the internal table holds 3*7*19, or 399, entries.
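Drawing these parameters together, a resilient file might, for illustration, be created as follows (the file name and values are hypothetical; the syntax is as given above):
CREATE-FILE CUSTOMERS TYPE=JR EXTMODS=31 INTMODS=3,7,19 SECURE=YES SECSIZE=4096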
Hashing
Hashing is simply a method of deriving a seemingly random number from the record key and applying a modulo to the result. A given key will
always produce the same hash value for a given hash method.
To hash key FRED into a modulo 3 table, suppose the hash value is 11; the remainder when divided by 3 is 2, so the key FRED hashes to the last group (groups are numbered 0-2). In reality the hash value is a very large number.
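As a minimal jBC sketch of the idea (the real hash functions are internal to jBASE; the hash value 11 is assumed purely for this example):
hash = 11 ;* assume the internal hash value of key "FRED" is 11
group = MOD(hash, 3) ;* 11 divided by 3 leaves remainder 2
CRT "FRED hashes to group ":group ;* group 2, the last of groups 0-2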
File Size
A newly-created, empty file will only contain 4096 bytes. A populated and subsequently empty file may contain much more as the following
data frames are not released until a resize:
l Overflow frames.
l External level 0 frames.
l Internal data frames.
Writing Data
When the first item is added to the file it is hashed on the first internal modulo (default 3); a data frame is added to the file to contain the new item, making a minimum file size of 8192 bytes.
The internal hash table consists of up to three ascending prime numbers, default 3, 7 & 19, that configure the initial search path for the record
id. The key is hashed on the first modulo (3); the corresponding table entry contains one of:
0 The group is empty, nothing has ever been written to it.
frame reference The entry references the data or pointer frame for the group.
In the case of an empty file the value will be 0, so for a write a data frame is allocated and the pointer changed to reference the frame – on the first record this will always reference 4096, as this was the end of file at file creation.
When an internally referenced data frame overflows, all items within it are rehashed on the next modulo and reallocated to their respective newly allocated data frames. The original data frame is then released to the free list, unless:
1. It is pointed to by the file header (i.e. external level zero or an internal data frame).
2. It is less than the MINSPLIT value for the file.
As records are deleted, a pointer frame may come to point to only a few frames whose combined data would fit in the pointer frame itself were it changed back to a data frame. No check is made for this eventuality, as checking all of a parent pointer's children is too expensive; instead a pointer frame is only released when its last pointer is zeroed.
Internal Pointers
Given a set of internal moduli the internal pointer values are known at file creation, i.e. a given pointer can have only one internal value, rather than zero or a data frame reference: the level 0 pointers can only point to their respective level 1 tables, hence the internal pointer values are predictable.
Once an internal pointer has been set to an internal table position the pointer will never lose this value throughout the lifetime of the file. When
the top level internal pointer (default modulo 19) is allocated a data frame, this will never change throughout the lifetime of the file. If the data
frame overflows it becomes a pointer frame, but this doesn’t require a change in the internal hash table.
When the internal hash tables are full, no values will ever change, so processes opening the file do not require a re-read of the file header. This
avoids expensive locking. Sparsely populated files may require much more locking as pointers to data frames are liable to become internal point-
ers.
External Frames
Data or pointer frames referenced at the highest level of the internal hash table or beyond are referred to as external frames.
Internal data frames (they cannot be pointer frames) are relatively few, by default a maximum of 24 can ever exist. Once a frame is allocated to
the highest level internal reference, it will never be released, even on a CLEAR-FILE.
When an external data frame overflows it is rehashed on the modulo appropriate to the external level, in the same way as internal modulo but
the data frame itself becomes a pointer frame.
In internal or external hashing, if no item hashes to a pointer then no data frame is allocated.
Given the set of parameters at file creation it is always possible to predict the path a given item id will take, what cannot be predicted is the
level within the path (internal and external) at which the item will exist.
jrscan
As the internal structure of Resilient files differs so much from that of hashed files, a new utility, jrscan, has been written to complement the functionality that jcheck provides for other hashed files, although without the destructive recovery.
Options:
-b Uses a bitmap to map the file structure ensuring all frames are referenced once and once only.
-h Help text.
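For example (the file name is hypothetical), to map the file structure and confirm that every frame is referenced once and once only:
jrscan -b CUSTOMERS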
Database recovery can take several forms depending on the state of the system.
For a simple power failure or an O/S reboot while the database is being updated, the database should be recoverable by a system warmstart pro-
cedure. This warmstart will use Transaction Journal(s) which are being used by the database(s) to roll forward all complete database trans-
actions from the last checkpoint to the point of failure.
For a media failure whereby the database itself has been lost then this data must be restored from the last backup taken. If the Online Backup
facility has been used, then the restore process can restore the system to a consistent state. Providing the Transaction Journal has not been
lost during this media failure (journal is held on other media), then it should be possible to recover the system to a position just prior to the
media failure. Again the system will be recovered to a consistent state.
For disaster recovery situations where there is likely to be some lengthy/permanent disruption to the live site, it may be possible to continue
operations at a site which has been functioning as a hot-standby site.
DB-WARMSTART
This command is restricted to administrative use only and there are no optional parameters.
This command will inspect the “databases-defined” file and determine whether each of the databases defined therein needs to be recovered following a power failure. Any defined database which has a status of “active” will cause a recovery process to begin – all databases which have been stopped will be in a consistent state. The recovery process takes the form of a roll-forward of the database from the Transaction Journal logfiles defined for that database. The format of the recovery command is shown under Syntax below.
As this command suggests, checkpointing must be configured for this to be effective. A checkpoint is defined as a point in time when all transactions have completed in their entirety – no partial transactional updates are pending. When checkpointing is used, the database is deemed to
be in a “consistent” state at the point at which the “checkpoint” record appears in the Transaction Journal. This being the case recovery is only
required from the last checkpoint time. No user intervention is required in determining this time. At the completion of the recovery all trans-
actions which were completed in their entirety will be applied to the database and transactions which are incomplete (i.e. no TRANSEND or
TRANSABORT entry found in the log files for this transaction), will be discarded.
Syntax
DB-WARMSTART
For all computer types, the “WARMSTART” utility should be run with JBASE_DATABASE set to “warmstart”. It is not possible to predict which databases are active (all may not be, including “default”). As access to jBASE databases is determined very early in the life of a process, the “databases_defined” file cannot be interrogated to find a usable database. The database “warmstart” also must be started. This will not be added to “databases_defined” and as such is a special case. This ensures that recovery is not attempted for this dummy database. Once the recovery of all required databases has completed, the dummy database entry is deleted. Note: As DB-START is only possible by a system administrator, misuse of the dummy database is prevented.
This recovery mechanism relies on three components within jBASE: transaction boundaries; Transaction Journaling and jBackup. Firstly, data-
base integrity cannot be guaranteed unless transaction boundaries are utilised. By encapsulating related database updates within transaction
boundaries, jBASE will either perform all of the related updates or none. Transaction boundaries are identified by the TRANSTART,
TRANSEND and TRANSABORT instructions. Transaction Journaling is required in order to provide a chronologically-ordered record of database updates; jbackup provides the database image from which those updates can be rolled forward.
You can use existing UNIX commands, such as tar or cpio (or Windows Backup), which work well, but should not be run while a jBASE application is updating files. tar, cpio and Backup perform a binary dump of the file data, and do not obey any locks which may have been set to indicate that an update is in progress. In addition, these saves can be limited because they cannot be restored correctly on a system which has a different architecture to the original system.
The preferred mechanism is to use the jbackup and jrestore jBASE utilities. The jbackup program will back up normal UNIX/Windows data
files and directories as well as jBASE data files, and will respect any locks set by jBASE applications. Bear in mind though that if you choose to
run jbackup concurrently with other active online jBASE applications, your saved files will not be corrupt, but the continuity of any data saved
from an active system cannot be guaranteed. (We shall see later that an Online Backup facility is available which overcomes this caveat.)
Option Explanation
-v Verbose mode
-F Use fixed block device. Use for QIC tapes (Windows only)
-S Statfile Save statistics of all saved objects in the jBASE file Statfile. The dictionary for this file is
$JBCRELEASEDIR/jbackup]D.
Examples:
find /home -print | jbackup -P
Reads all records, files and directories under the /home directory provided by the find selection and displays each file or directory name as it is encountered. This option can be used to verify the integrity of the selected files and directories.
Reads all files and directories listed in the UNIX file FILELIST and writes the formatted data blocks to the floppy disk device, displaying each
file or directory name as it is encountered. The jbackup utility will prompt for the next disk if the amount of data produced exceeds the spe-
cified media size of one Mbyte.
Reads all files and directories in home directory of user-id “jBASE” Generates statistics information and outputs blocks to stdout, which is
redirected to /dev/null. The statistics information is then listed using the jbackup dictionary definitions to calculate the file space used.
jfind C:\users\home -print | jbackup -P
Reads all records, files and directories under the C:\users\home directory provided by the jfind selection and displays each file or directory name as it is encountered. This option can be used to verify the integrity of the selected files and directories. This command should be run with jshell type sh rather than jsh.
jrestore provides a powerful selective restore capability. Records, files and directories can be selectively restored by specifying relational expres-
sions with one or more of the available options.
jrestore is capable of resynchronisation so that the restore procedure can begin from any position on the restore media. However, note that
this capability can be limited by a lack of positioning options available with the specific restore device. For example, a streaming cartridge tape
cannot be backspaced.
jrestore will continue to restore from the specified device until the end of volume label is detected. You will then be prompted to mount the
next device or you can select an alternative device if required.
jrestore command
jrestore <options>
options are:
Option Explanation
-c"o n" Restore old directory path (o) as new directory path (n).
-i"key" Restore record keys matching a regular expression. Usually used with the 'h' option.
-o"o" Restore other UNIX files matching a regular expression, e.g. named pipes.
-v Verbose Mode. Display files and directories before they are restored. Output is dir-
ected to stderr.
-W Roll forward the database following the restore using the saved logfile data and con-
figuration
-G Roll forward the database using the logfile data and configuration which are already in
use. This will follow the data restore and roll forward specified by the -W option.
jrestore Examples
jrestore -f /dev/rmt/ctape -P
Reads formatted files and directories from a streaming cartridge device, displaying each file or directory as it is encountered. This option can be used to verify that the tape does not contain any parity or formatting errors and so can be restored at a later date.
jrestore -f /dev/rmt/floppy -v
Reads and restores formatted files and directories from a floppy disk device, displaying each file or directory as it is encountered.
find /home/old -print | jbackup | jrestore -c"/home/old /home/new" -v
Reads formatted files and directories from stdin, which is being supplied by jbackup, modifies all occurrences of path string /home/old to /home/new and then restores files and directories using the modified path string.
Reads formatted files and directories from UNIX file BACKUP, limits restore to any directories whose path name ends in PAYROLL.
Reads formatted files and directories from UNIX file BACKUP, limits restore to any hash files whose path name ends in CUSTOMERS, and only restores record keys containing the string SMITH.
Each configuration described here adheres to the goals identified in the Temenos Technology and Research White Paper “T24 Resilience High Availability in Failover Scenarios” and the proposed new Close of Business Procedures as described in the Functional Specification “Changes to Batch Operation for Global Processing Environments”.
This should be the minimum standard configuration utilizing Transaction Journaling. The assumptions made here are as follows.
Transaction handling will be achieved by the use of TRANSTART, TRANSEND and TRANSABORT programming commands. Transactions which are not completed in their entirety will be completely “rolled back” by jBASE when commanded to do so by the TRANSABORT command. Upon execution of the TRANSEND command all or none of the constituent database updates will be actioned, ensuring database consistency. Any transactional recovery will be achieved through the use of jBASE facilities.
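As a minimal jBC sketch of such a transaction boundary (the file and record names are hypothetical):
OPEN "ACCOUNTS" TO Accounts ELSE STOP 201, "ACCOUNTS"
DebitRec = "-100" ; CreditRec = "100" ;* illustrative record contents
TRANSTART ELSE
CRT "Unable to start the transaction"
STOP
END
WRITE DebitRec ON Accounts, "ACC1" ;* both updates commit or roll back together
WRITE CreditRec ON Accounts, "ACC2"
TRANSEND THEN
CRT "Transaction committed"
END ELSE
CRT "Transaction aborted and rolled back"
END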
Transaction Journaling has been configured, for example, with two logsets:
l /bnk/bnk.jnl/logset1
l /bnk/bnk.jnl/logset2
where: logset1 and logset2 are links to two mounted filesystems each containing the corresponding transaction log file definitions.
TJ is then activated by a script similar to start_tj, which activates transaction logging and also the backup of the transaction logs to tape
(/dev/rmt/0 in this case).
The Transaction journal is copied to tape (or other external medium) on a continuous basis by means of the jlogdup facility.
A backup of the database (using the backup_jbase script) is initiated prior to the execution of Close of Business procedures. Logsets are
“switched” following the successful completion of backups.
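The logset switch might be performed with jlogadmin (a sketch assuming, as in the start_tj script later in this document, that the -l option selects the active logset):
jlogadmin -l 2 -a Active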
When a backup is required, a script, based on “backup_jbase” is run. Actions performed by this script are:
The backup command will dump all data to tape below /bnk. As all the transaction log data (bnk.jnl) has already been dumped to tape prior to the backup, the exclusion of this directory from the backup would seem appropriate.
Note: The use of the “-c” option will allow for the dumping of index files to avoid having to rebuild indexes on a restore process.
Once the backup has completed and verified, a new tape for tape logging replaces the last backup tape.
A skeleton system comprises the operating system and configuration (device assignments, user login information, etc.).
This skeleton system must be kept up to date. Any changes to the operating system or jBASE configurations must be reflected in this skeleton
system as a standard procedure; any such changes triggering the production of a new skeleton system.
Once the system has been brought to an operational state, the database needs to be brought back to a known state. The last backup set pro-
duced is recovered by the recover_jbase script. This not only restores the jBASE database including saved indexes, but also replays all com-
pleted transactions which have been transferred to tape and initiates transaction logging to tape.
If there has been an application/database error which has resulted in the decision to perform a complete restore of the system, it is clear that if
the error can be identified to have taken place at a particular time, (whether precisely or approximately), then the whole of the transaction log
should not be replayed. Using the “end=timespec” option of jlogdup will cause the transaction log replay to terminate at the specified time
rather than the end of the logset. (See the jlogdup section for the valid format of timespec.) The recover_jbase script will prompt for a time or assume EOS (i.e. all the transaction log is to be replayed).
Warning: If an “end=timespec” parameter has been specified, then the time chosen may cause transactions which began before this time not to
be completed (i.e. rolled back). Additional database updates pertaining to such transactions and bounded by the corresponding TRANSEND
commands may exist on the transaction log file, but will not be executed.
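For illustration, a bounded replay from the logging tape might then take the form (the device specification is written as elsewhere in this document; timespec formats are described in the jlogdup section):
jlogdup input set=serial device=[Device Spec] backup terminate=EOS end=timespec output set=database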
jBASE will handle any supported relational database connectivity (such as Oracle/DB2 etc.) through the appropriate jEDI driver. Data mapping
will be achieved through the corresponding RDBMS stub file definitions. The jBASE/RDBMS stub file definitions can exist in one of several
locations:
On the Application Servers – this could (would) potentially create a locking minefield – how to communicate between the Application Servers
the locked state of the database entities.
On the Database Server (1) – Application Servers communicate over NFS mounts to RDBMS stub files defined on the Database Server. The
downside of this approach is that RDBMS client components (at least) have to exist on each of the Application Servers. Also there is a problem
with managing database locks. This can be achieved by inefficient application-level lock mechanisms whereby the locks are held within a central filesystem and are accessed by all Application Servers, utilizing OS locks to manage access to the lock table.
On the Database Server (2) – Application servers communicate using a jRFS driver to jRFS servers on the Database Server. The Database Server
contains the RDBMS stub file mappings to the RDBMS, also residing on the Database server. As jRFS operates in a client-server relationship,
there are no locks taken directly by any process within the Application Servers, but are taken by the jRFS server processes, on their behalf, run-
ning on the Database Server. As all the jRFS server processes run under control (for locking purposes) of a single jBASE server, there is no
issue with locking between these processes. There is also likely to be a cost advantage over Database Server (1) approach, because no RDBMS
components need to exist on the Application Servers.
Transaction management (i.e. the use of TRANSTART, TRANSEND and TRANSABORT programming commands) within the Application Serv-
ers is handled within jBASE as for the Stand-Alone system.
The Hot Standby configuration using jBASE as the database server has the same attributes as previously described in the Cluster Systems with
the exception that all database updates to jBASE are duplicated to a separate server (or remote in the case of disaster recovery). The database
duplication process, achieved by the jlogdup facility, would normally be an operation in addition to dumping the transaction log data to a local
tape device.
The Transaction journal is copied to tape (or other external medium) on a continuous basis by means of the jlogdup facility.
A backup of the database (using jbackup) is initiated each night at 12:01 am (for example) to the tape deck /dev/rmt/0 (for example).
A jlogdup process will be initiated on the database server which will, in tandem with a corresponding jlogdup server process on the standby
server, transfer all transaction updates from the transaction log on the live cluster to the transaction log on the standby server.
Another jlogdup process on the standby server will take the updates from the previously transferred log files and update the database on the
standby server.
Transaction handling will be achieved by the use of TRANSTART, TRANSEND and TRANSABORT programming commands. The updates con-
tained within a transaction are cached until a TRANSABORT or TRANSEND command is executed for that transaction. No RDBMS activity
takes place when the TRANSABORT command is executed, whereas the TRANSEND can result in many RDBMS interactions before success or
failure is detected. The application code within T24 is unaware of the underlying backend database.
Scripts/Commands
Note 1: For Windows, each of these names should have a file type of “.cmd”
Note 2: On some platforms the variable LD_LIBRARY_PATH is replaced by an equivalent (for example, LIBPATH on AIX or SHLIB_PATH on HP-UX).
warmstart
The content of the script/command for a Linux computer is:
export JBCRELEASEDIR=/usr/jbc
export JBCGLOBALDIR=/usr/jbc
export PATH=$PATH:$JBCRELEASEDIR/bin
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib
DB-START -nwarmstart
DB-WARMSTART
DB-REMOVE -nwarmstart
For Windows:
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=%JBCDATADIR%
l SET JBCOBJECTLIST=%JBCRELEASEDIR%\lib
l SET JEDIFILEPATH=%HOME%;.
setup_tj
For Unix/Linux:
l #! /bin/ksh
l export JBCRELEASEDIR=/data/reldir/jbcdevelopment
l export JBCGLOBALDIR=/data/reldir/jbcdevelopment
l export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
l jlogadmin -cf1,1,[logset1 directory]/logfile1
l jlogadmin -cf1,2,[logset1 directory]/logfile2
l jlogadmin -cf2,1,[logset2 directory]/logfile1
l jlogadmin -cf2,2,[logset2 directory]/logfile2
l jlogadmin -cf3,1,[logset3 directory]/logfile1
l jlogadmin -cf3,2,[logset3 directory]/logfile2
For Windows:
l @ECHO OFF
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=c:\jbase4.1
l set PATH=%JBCRELEASEDIR%\bin;%PATH%
l jlogadmin -cf1,1,[logset1 directory]\logfile1
l jlogadmin -cf1,2,[logset1 directory]\logfile2
l jlogadmin -cf2,1,[logset2 directory]\logfile1
l jlogadmin -cf2,2,[logset2 directory]\logfile2
l jlogadmin -cf3,1,[logset3 directory]\logfile1
l jlogadmin -cf3,2,[logset3 directory]\logfile2
For example, jlogadmin -c -f1,1,E:\logset1\logfile1 will create a logfile called logfile1 in directory E:\logset1. Note: The folder logset1 must exist.
start_tj
For Unix/Linux:
l #! /bin/ksh
l export JBCRELEASEDIR=/data/reldir/jbcdevelopment
l export JBCGLOBALDIR=/data/reldir/jbcdevelopment
l export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
l jlogadmin -l 1 -a Active
For Windows:
l @ECHO OFF
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=c:\jbase4.1
l set PATH=%JBCRELEASEDIR%\bin;%PATH%
l jlogadmin -l 1 -a Active
l echo %date% > %JBCRELEASEDIR%\logs\jlogdup_to_tape_start
l jlogdup input set=current terminate=wait output set=serial device=[Device Spec]
stop_tj
For Unix/Linux:
l #! /bin/bash
l export JBCRELEASEDIR=/data/reldir/jbcdevelopment
l export JBCGLOBALDIR=/data/reldir/jbcdevelopment
l export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
l jlogadmin -a Off
For Windows:
l @ECHO OFF
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=c:\jbase4.1
l set PATH=%JBCRELEASEDIR%\bin;%PATH%
l jlogadmin -a Off
start_jlogdup
For Unix/Linux:
l #! /bin/ksh
l export JBCRELEASEDIR=/data/reldir/jbcdevelopment
l export JBCGLOBALDIR=/data/reldir/jbcdevelopment
l export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
l echo `date` > $JBCRELEASEDIR/logs/jlogdup_to_tape_start
l jlogdup input set=current terminate=wait output set=serial device=[Device Spec]&
For Windows:
l @ECHO OFF
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=c:\jbase4.1
l set PATH=%JBCRELEASEDIR%\bin;%PATH%
l date /t > %JBCRELEASEDIR%\config\jlogdup_to_tape_start
l jlogdup input set=current terminate=wait output set=serial device=[Device Spec]&
stop_jlogdup
For Unix/Linux:
l #! /bin/ksh
l export JBCRELEASEDIR=/data/reldir/jbcdevelopment
l export JBCGLOBALDIR=/data/reldir/jbcdevelopment
l export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
l jlogadmin -k* > discard
For Windows:
l @ECHO OFF
l set JBCRELEASEDIR=c:\jbase4.1
l set JBCGLOBALDIR=c:\jbase4.1
l set PATH=%JBCRELEASEDIR%\bin;%PATH%
l jlogadmin -k* > discard
backup_jbase
For Unix/Linux:
#! /bin/ksh
export JBCRELEASEDIR=/data/reldir/jbcdevelopment
export JBCGLOBALDIR=/data/reldir/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
typeset -u TAPEOUT
typeset -u REPLY
typeset -u BACKUPOK
# (prompt wording below is illustrative)
TAPEOUT=""
print -n "Is the transaction log being duplicated to tape? (Y/N) "
while [ "$TAPEOUT" != Y ] && [ "$TAPEOUT" != N ]
do
read TAPEOUT
done
if [ "$TAPEOUT" != N ]
then
print -n "Has all logging to tape finished - press any key when it has "
read REPLY
fi
if [ "$TAPEOUT" = Y ]
then
print "Please remove the tape for logging and replace with the backup tape"
print -n "Enter Y when the backup tape has been mounted "
REPLY=""
while [ "$REPLY" != Y ]
do
read REPLY
done
fi
# the logset switch and the jbackup run itself take place at this point
BACKUPOK=""
print -n "Enter Y when the backup has completed successfully "
while [ "$BACKUPOK" != Y ]
do
sleep 5
read BACKUPOK
done
if [ "$TAPEOUT" = Y ]
then
print "Remove the backup tape and mount a new tape for logging, then press RETURN"
read INPUT
fi
For Windows:
@ECHO OFF
set JBCRELEASEDIR=c:\jbase4.1
set JBCGLOBALDIR=c:\jbase4.1
set PATH=%JBCRELEASEDIR%\bin;%PATH%
The batch file then runs a jBC program which mirrors the Unix script; in skeleton form (prompt handling is illustrative):
PRINT "Has all logging to tape finished - press any key when it has":
INPUT REPLY
CRT "Please remove the tape for logging and replace with the backup tape"
* the backup itself runs here; loop until it is confirmed as successful
LOOP
SLEEP 5
CLEARDATA
INPUT BACKUPOK
UNTIL BACKUPOK EQ "Y" DO
REPEAT
* finally, logging to tape is restarted
* EXECUTE "jlogdup input set=current terminate=wait output set=serial device=[Device Spec]"
END
recover_jbase
For Unix/Linux:
#!/bin/ksh
# (menu and prompt wording below is illustrative)
if [ -z "$1" ]
then
PS3="Option :"
select Choice in "Full restore and roll-forward" "Restart of logging to tape"
do break; done
if [ -z "$REPLY" ]
then
exit
fi
else
REPLY=$1
fi
if [ $REPLY = 1 ]
then
echo -n "Load the last backup tape and press RETURN when ready "
read DONE
jrestore -f [Device Spec] -v
echo -n "Roll forward from the logging tape? (Y/N) "
read REPLY
if [ "$REPLY" = Y ]
then
echo -n "Load the logging tape and press RETURN when ready "
read DONE
echo -n "Enter a time to terminate the duplication process (or RETURN for all logs) "
read ENDTIME
if [ -z "$ENDTIME" ]
then
jlogdup input set=serial device=[Device Spec] backup terminate=EOS output set=database
else
jlogdup input set=serial device=[Device Spec] backup terminate=EOS end=$ENDTIME output set=database
fi
fi
else
echo -n "Mount a new tape for logging and press RETURN when ready "
read DONE
jlogdup input set=current start=$JBCRELEASEDIR/logs/jlogdup_to_tape_start terminate=wait output set=serial device=[Device Spec] &
fi
For Windows:
* skeleton jBC program; prompt handling is illustrative
REPLY = ""
LOOP
UNTIL REPLY EQ "F" OR REPLY EQ "T" DO
CRT "What is the nature of the recovery? F=Full recovery required, T=Tape logging failure":
INPUT REPLY
REPEAT
IF REPLY EQ "F" THEN
CRT "Load the last backup tape and press RETURN when ready":
INPUT DONE
EXECUTE "jrestore -f [Device Spec] -v"
CRT "Load the logging tape and press RETURN when ready":
INPUT DONE
CRT "Enter a time to terminate the duplication process (or RETURN for all logs) ":
INPUT ENDTIME
IF ENDTIME EQ "" THEN
EXECUTE "jlogdup input set=serial device=[Device Spec] backup terminate=EOS output set=database"
END ELSE
EXECUTE "jlogdup input set=serial device=[Device Spec] backup terminate=EOS end=":ENDTIME:" output set=database"
END
END ELSE
CRT "Mount a new tape for logging and press RETURN when ready":
INPUT DONE
;* RELDIR and JBUILD_DELIM_CH are assumed to have been set earlier in the program
EXECUTE "jlogdup input set=current start=":RELDIR:JBUILD_DELIM_CH:"config":JBUILD_DELIM_CH:"jlogdup_to_tape_start terminate=wait output set=serial device=[Device Spec]"
END
The jBASE run time provides support for the application during execution, supplying functions such as lock arbitration, update journaling and spooling.
l Overview
l CLEAR-FILE
l COPY
l CREATE-FILE
l DELETE
l DELETE-FILE
l JBACKUP
l JRESTORE
l JGREP
l JRCHMOD
l JRF
l JRSCAN
l JSTAT
l SEL-RESTORE
l DISTRIBUTED FILES
l PART FILES
l Creating DISTRIBUTED FILES
l Attaching and Detaching PART FILES
l Partitioning Algorithm
l CREATE-DISTRIB
l DELETE-DISTRIB
l LIST-DISTRIB
l VERIFY-DISTRIB
l Distributed File Considerations
l Distributed Files Example
jBASE can handle data in a variety of forms. Support is built in for access to
l Distributed files
l Resilient files
l Directories
l Sequential files
Data held in other forms can be accessed through the use of an appropriate driver. Regardless of where the data is stored, the access mech-
anisms are the same.
The CLEAR-FILE command allows the user to clear all records from the dictionary file or data section files.
COMMAND SYNTAX
CLEAR-FILE {DICT|DATA} filename{,section}
SYNTAX ELEMENTS
filename is the name of the file to be cleared. The file type must be one of the supported jBASE file types. If the file type supports separate dictionary and data files, the DICT or DATA keywords may be used to clear either the dictionary file or the data file. DATA is assumed by default.
EXAMPLES
CLEAR-FILE File1
or
CLEAR-FILE DICT File1
The COPY command copies specific or selected records from a specified file to the terminal, printer or another file.
COMMAND SYNTAX
COPY {DICT} filename{,section} {recordlist} {(options}
PROMPT
TO: {({DICT} filename{,section}} {targetrecordlist}
SYNTAX ELEMENTS
filename is the name of a valid file. The file type must be one of the supported jBASE file types. The DICT keyword can be used to specify that
the record or records should be copied from or to a dictionary file.
recordlist is the list of records (keys) to be copied. If recordlist is omitted, the active SELECT list is used, if present.
Options
A Force ASCII mode. Newline becomes field mark and vice versa.
targetrecordlist is a list of record keys to copy the records to - effectively renaming the copied records. Each key in the targetrecordlist is
applied in sequence to the copied records. If targetrecordlist is not specified or contains less keys than there are copied records, the original
record keys will be used.
NOTES
If you enter <RETURN> at the TO: prompt, the records will be copied to the terminal screen or spooler, depending on the options chosen. In
this case, the D(elete) option will have no effect.
EXAMPLE
COPY File1 Record1 (T
Copies Record1 from File1 to the terminal.
COPY File1 Record1 (D
TO: (DICT File2 Record2
Copies Record1 from File1 to dictionary file File2]D, overwriting Record2. Once copied, the original Record1 is deleted from File1.
The CREATE-FILE command allows the user to create a new file for use within the jBASE system. The command will allow creation of any file
type known to the jEDI libraries - unless the concept of a file is redundant to that target system.
COMMAND SYNTAX
CREATE-FILE {DICT|DATA} filename{,section} {HASHMETHOD=nnn} {TYPE=tname} {PERM=nnn} {LOG=YES|NO} {TRANS=YES|NO}
{BACKUP=YES|NO} {NumBuckets{, BucketMult{, SecSize}}} {NumBuckets{, BucketMult{, SecSize}}}
SYNTAX ELEMENTS
DICT An optional keyword used to specify that the command should create a dictionary file only for
the filename.
DATA An optional keyword used to specify that the command should create a data section file only
for the filename.
HASHMETHOD Used when a hash file is to be created. The numeric parameter nnn specifies the hashing
method to be used when accessing the file. The default method of 2 works very well with all
sorts of key types. However if the record keys will be perfectly uniform numeric keys then
there may be a slight advantage to using method 1 on the file.
PERM Used to set the permissions of the file in exactly the same manner as the UNIX chmod command. nnn is an octal number that will be masked by the current umask setting. The default value of nnn is 666.
LOG LOG=YES|NO allows the file to be included in or excluded from, the record or transaction log-
ging mechanism, if licensed on your system. The value is set to YES by default.
TRANS TRANS=YES|NO allows the file to be included in or excluded from any transaction bound-
aries that are defined by an executing program. The value is set to YES by default.
BACKUP BACKUP=YES|NO allows the file to be included automatically by the jBASE jbackup utility. The value is set to YES by default.
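For illustration (the file name, type and sizing are hypothetical), a hashed file which is journaled but excluded from transaction boundaries might be created with:
CREATE-FILE SALES TYPE=J4 LOG=YES TRANS=NO 101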
The DELETE command deletes specific or selected records from a specified file.
COMMAND SYNTAX
DELETE {DICT} filename{,section} {recordlist}
SYNTAX ELEMENTS
Filename The name of a valid file. The file type must be one of the supported jBASE file types. If the file type
supports separate dictionary and data files
Recordlist The list of record keys to be deleted. If the recordlist is omitted the active SELECT list will be used if
present
EXAMPLE
DELETE File1 Record1
Deletes Record1 from File1.
GET-LIST DeleteList
DELETE File1
Deletes all records from File1 that match the record keys selected by the active select list.
The DELETE-FILE command allows the user to delete complete file sets, the dictionary, or the data section of a file.
COMMAND SYNTAX
DELETE-FILE {DICT|DATA} filename{,section}
SYNTAX ELEMENTS
filename The name of the file to be deleted. The file type must be one of the supported jBASE file types. If the file type
supports separate dictionary and data files, the DICT or DATA keywords may be used to delete either
the dictionary file or the data file.
NOTES
The command will detect inconsistencies in its use and issue suitable error messages.
Note: Beware of creating a file and then immediately deleting it using the DELETE-FILE command. The DELETE-FILE command will
respect the JEDIFILEPATH variable and if it finds a file of the same name in a directory earlier in the path than the current working dir-
ectory it will delete that file. For this reason it is best to define the JEDIFILEPATH variable as '.' (the current working directory):
EXAMPLE
DELETE-FILE File1
Deletes the complete file set of File1, comprising the dictionary file, default data section and any multiple data sections.
DELETE-FILE File1,Section
Deletes only the data section Section of File1.
The jbackup utility provides fast on-line backup facilities and can also be used to check file integrity.
COMMAND SYNTAX
jbackup -Option {Inputlist}
OPTIONS
-bn set number of write buffers to n (default is 8, minimum is 1)
-v verbose mode
-F use fixed block device. Use for QIC tapes (Windows only)
-S Statfile Save statistics of all saved objects in the jBASE file Statfile. The dictionary for this file is $JBCRELEASEDIR/jbackup]D.
-E1 will make jbackup pause and give the user an option to quit if it encounters corrupt files
NOTES
This command will set the JEDIFILEPATH environment variable to “.” to prevent the backing up of incorrect files.
jchmod -B filename
will cause jbackup to skip 'filename'. Other options of interest are +B, -O and +O.
jbackup creates a file named jbk*PID as a work file when executed; jbackup must therefore be run from a directory which has write privileges. If the file system or directory is not write enabled, you will receive the error message: ERROR! Cannot open temporary file jbk*PID.tmp, error 2.
EXAMPLES
Unix
find /home -print | jbackup -P
Reads all records, files and directories under the /home directory provided by the find selection and displays each file or directory name as it is
encountered. This option can be used to verify the integrity of the selected files and directories.
Reads all files and directories listed in the UNIX file FILELIST and writes the formatted data blocks to the floppy disk device, displaying each
file or directory name as it is encountered. The jbackup utility will prompt for the next disk if the amount of data produced exceeds the spe-
cified media size of 1 Mbyte.
Reads all files and directories in home directory of user-id "jbase". Generates statistics information and outputs blocks to stdout, which is redir-
ected to /dev/null. The statistics information is then listed using the jbackup dictionary definitions to calculate the file space used.
Windows
jfind C:\users\vanessa -print | jbackup -P
Reads all records, files and directories under the C:\users\vanessa directory provided by the jfind selection and displays each file or directory
name as it is encountered. The -P option means that the files are not actually backed up (print and scan only). It is useful to verify the integrity of the selected files and directories. This command should be run with jshell type sh rather than jsh.
jfind D:\data -print | jbackup -f C:\temp\save20030325 -m10000 -S stats -v
The jfind command outputs the names of all the files and directories under the D:\data directory. This output is passed to the jbackup command, causing it to back up every file that jfind locates. Rather than save to tape, this jbackup command creates a backup file: C:\temp\save20030325. Note that jbackup creates the save20030325 file, but the directory c:\temp must exist before running the command. The -m10000 option specifies that the maximum amount of data to back up is 10,000MB (or 10GB) rather than the default 100MB. The -S option causes file statistics to be written to the hashed file stats. This file should exist and be empty prior to commencing the backup. The -v option causes the name of each file to be displayed as it is backed up. Because of the pipe character used to direct the output of jfind to jbackup, this command should be run with jshell type sh rather than jsh.
The jrestore utility provides fast on-line restores from the saves produced by the jbackup utility. The jrestore can be controlled to restore from
any file type on the backup, from single records to multiple directories. The jrestore utility can also be used to verify jbackup saves.
COMMAND SYNTAX
jrestore -Options
OPTIONS
-a restore from current media position
-H FileList restore files using only file names from FileList file
-I ItemList restore items using only item ids from ItemList file
-v verbose mode
-T type restore hash files as specified file type; the original modulo and separation will be retained rather than use the 'resize' parameters.
NOTES
When using jrestore ensure that you are executing it from the standard shell, not from jsh, otherwise the double quotes and other metacharacters will lose their meaning. On Windows, backslashes within quoted path strings should be doubled, e.g. C:\\MyApp\\new.
EXAMPLES
jrestore -f /dev/rmt/ctape -P
Reads formatted files and directories from a streaming cartridge device, displaying each file or directory as it is encountered. This option can be
used to verify that the tape does not contain any parity or formatting errors and so can be restored at a later date.
jrestore -f /dev/rmt/floppy -v
Reads and restores formatted files and directories from a floppy disk device, displaying each file or directory as it is encountered.
find /home/old -print | jbackup | jrestore -c"/home/old /home/new" -v
Reads formatted files and directories from stdin, which is being supplied by jbackup, modifies all occurrences of path string /home/old to /home/new and then restores files and directories using the modified path string.
Reads formatted files and directories from UNIX file BACKUP, limits restore to any directories whose path name ends in PAYROLL.
jrestore -f BACKUP -h"*CUSTOMERS" -i"SMITH"
Reads formatted files and directories from UNIX file BACKUP, limits restore to any hash files whose path name ends in CUSTOMERS, and only restores record ids containing the string SMITH.
JCHMOD
The jchmod command enables you to modify jBASE-specific file attributes such as the resize parameters of a hashed file.
COMMAND SYNTAX
jchmod {+options} {-options} file{ file...}
SYNTAX ELEMENTS
+options
+R{parms} Set the resize parameters. The syntax is the same as for the CREATE-FILE command and includes spaces. The spaces should be retained by quoting the string from the shell.
-options
-t Tabulate and display statistics of the files. Note this option is exclusive and will cause all other options to be ignored.
file{ file…} A list of all the ‘real’ file names to be processed by the command.
The ‘real’ file name should be specified to jchmod for dictionary and data sections of a file. The formats DICT file and file,section are not sup-
ported by this command.
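For example (the file name is hypothetical), to tabulate the statistics of a file, or to exclude it from jbackup as mentioned in the jbackup section:
jchmod -t CUSTOMERS
jchmod -B CUSTOMERS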
The jgrep utility enables pattern matching of records in one or more jBASE files.
COMMAND SYNTAX
jgrep {options} searchstring file {recordlist...}
SYNTAX ELEMENTS
l searchstring is the string to search for in the file(s). If the string contains spaces, surround it with quotes or use the -I option
described below.
l file is the name of any valid file.
l recordlist is a list of record keys.
Options Explanation
-C Make the search case insensitive. This means that a search string of ‘ABC’ would
match the string ‘abc’.
-I The search string(s) has been omitted from the command line and will be prompted
for before searching the files in the list.
-L List only the record keys that the search string(s) were found in.
-S Search all subdirectories. If the file specified is a UNIX directory, all the files in the dir-
ectory and its sub-directories will be searched for the searchstring(s).
-r Raw mode. Display record key, line number and occurrences field separated, one line
at a time. Note that this option skips jBC object records located in hash files.
NOTES
Options may be specified after the file or recordlist by preceding the options with a left parenthesis.
The -S option will cause all records in a hashed file, or all files in a UNIX directory, to be searched.
EXAMPLE
jgrep "ABC DE" SALES CUST0001
Searches the record CUST0001 in file SALES for the string ‘ABC DE’.
Standard UNIX commands can be used to provide arguments to the jgrep command.
jgrep -ILSN .
Prompts for search strings and then searches the records in all files in the current directory, and all files in any subdirectories; the output does not pause at the end of each page.
The jrchmod utility can be used to change the deallocation strategy of a JR file.
COMMAND SYNTAX
jrchmod {options} filename
SYNTAX ELEMENTS
l filename is the JR file to be changed.
Option Explanation
-h Help text.
NOTES
The default behaviour of JR files is never to deallocate frames on the assumption they will be reused.
The jrf utility resizes jBASE hashed files.
COMMAND SYNTAX
jrf {options} {filename{,section}}
SYNTAX ELEMENTS
l filename is the base name of the file to be resized. The file type must be one of the supported jBASE hash file types.
l section is the name of the data section to be resized.
Option Explanation
-I Ignore empty files. By default the user will be prompted to resize empty files.
-V Verbose mode. Information is printed about each file size as the command progresses.
-V1 Very verbose mode. A jstat is performed on each file and the results printed.
NOTES
Badly sized files can cause severe performance problems. Often, over time internal file sizes become too small for the number of records they
contain.
The jrf utility will resize files listed on the command line, selected via an active SELECT list or all hash files in the current directory. By default
the jrf command will resize all valid hash files in the current directory.
JR files will by default take default internal parameters of INTMODS=3,7,19 and EXTMODS=31, and a JR to JR resize will retain whatever parameters the original file used. If there is a particularly unusual hashing pattern, jrf can be forced into a ‘sizing’ data scan with the -F option, where various parameters will be tested to attempt to achieve the best data capacity. The -F option would normally only be used on fairly static files.
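For example (the file name is hypothetical), to resize a single file in verbose mode:
jrf -V CUSTOMERS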
The jrscan utility can be used to verify or display the internal structure of Resilient files.
COMMAND SYNTAX
jrscan {options} filename
SYNTAX ELEMENTS
l filename is the base name of the JR file to be scanned.
Option Explanation
-v Verbose output.
NOTES
This is generally only used for diagnostic purposes.
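For example (the file name is hypothetical), to scan a resilient file and display its internal structure:
jrscan -v CUSTOMERS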
The jstat command analyses a hashed file to provide statistics on the distribution of records and use of the data space.
COMMAND SYNTAX
jstat {options} {filename{,section}}
SYNTAX ELEMENTS
l filename is the base name of the file to be analysed. The file type must be one of the supported jBASE hash file types.
l section is the name of the data section to be analysed.
Option Explanation
-v Displays additional verbose information. When used with the r option displays each record key in the
bucket.
-dchar Specifies delimiter char to use as a field separator, for machine readable information supplied by the m
option.
The -f option displays additional information relating to the organisation of free space within the file. The information allows a judgement to be made as to how fragmented the file has become.
As records are added to the file, buckets are allocated for secondary space or to extend large groups. As records are deleted, empty buckets are
added to the free space chain associated with the file. When new buckets are required to extend data space for new records, the buckets avail-
able on the free space chain will be used wherever possible. This fills up any ‘holes’ that may have been created in the file. If the file is very
dynamic this process can create fragmentation of the free space and therefore the file. If this becomes excessive, the file can become very large
in relation to the data it contains.
If there are a large number of buckets in the free space chain, in relation to the number of buckets allocated to the file, this may indicate that
the file has become fragmented. Use the jrf utility to remap the file and remove fragmentation.
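For example (the file name is hypothetical), to examine the free space organisation of a file suspected of fragmentation:
jstat -f CUSTOMERS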
Two figures regarding freespace are given in the output (see notes). The Total buckets used for free space chain shows how many buckets have
been used within the file to record the free space chain. If this is large, it is a good indication that the file has become fragmented. The Total
unused (freed) buckets shows how many buckets have been created for extra storage space and then given back to the free space chain. If this
is large in relation to the number of buckets in the file, this is another indication of file fragmentation.
Bucket Information
The r option displays additional information regarding the efficiency of each bucket in the file. A table will be output showing the number of bytes allocated, and the number of bytes used, for each group in the file. This can then be used to judge the distribution of the data within the file.
You could also extract this information from the output stream using an awk script (or similar) and then perform a statistical analysis of the fig-
ures.
Additional Information
The v or verbose option displays the following additional information:
Restore re-size parameters Any resize parameters that have been set against the file using the jchmod command.
Last Modified Latest time and date that the file was modified in any way.
Log File Whether file updates are logged by record or transaction logger.
Trans Rollback Whether file updates should be rolled back if a transaction fails or is aborted.
When the v option is combined with the r option, the key and size of each record in the buckets is also displayed. The #Ublocks column shows the size of each bucket in UNIX file space blocks.
NOTES
In its simplest form (jstat filename) the command output will be as follows:
File filename
Type = HASH1 , Hash method = 2 , Created Tue Jul 14 02:58 1992
Buckets = 97 , BucketSize = 512 , SecondaryBucketSize = 512
Record Count = 5 , Record Bytes = 832
Bytes/Record = 166 , Bytes/Bucket = 8
Most of the fields are self-explanatory, those that are not are explained below:
SecondaryBucketSize is the secondary bucket size of a hashed file as calculated at file creation time by the CREATE-FILE command. It spe-
cifies the record size beyond which space will be allocated outside the hash bucket itself. The hash bucket will instead contain a pointer to the
record.
Primary file space is the primary file space allocation. This shows how the file relates to the originally allocated number of buckets and allows
the effectiveness of the file size to be judged. The total number of buckets should match or be close to the number originally allocated to the
file.
When a record is to be stored in a bucket that does not have enough free space, the bucket is allocated an additional number of primary buck-
ets to accommodate the new record. The number by which the total buckets shown here exceed the original number allocated is the number of
times that this has occurred. If this number is large, you should consider resizing the file. This can be done in two ways:
1. If you have allocated enough buckets to cater for the number of records in the file, increasing the size of each individual bucket should
allow the records to fit into the file better. The new bucket size should be large enough to cater for the average record size shown in
the jstat output.
2. If there are many more records than the number of buckets, you should increase the number of buckets to reflect this. The new num-
ber of buckets should be equal to the number of records divided by the number of average size records that will fit into a single bucket.
If the file is growing in size over time, you should allow for some additional expansion in your calculations. You may find that a combination of
both the above techniques is necessary if the average record size is much greater than the current bucket size and there are many more records
than allocated buckets.
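As a worked example (the figures are hypothetical): with 100,000 records averaging 200 bytes each and a bucket size of 2048 bytes, roughly 10 average records fit in each bucket, so a target of around 10,000 buckets would be reasonable, plus an allowance for expected growth.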
If the records in your file are very large, it is usually more efficient to force all records into secondary file space rather than create a very large
bucket size. In this case, recreate the file with a secondary bucket size of 0.
Secondary file space indicates how much of the file was allocated as pointers to secondary space in the original group. Where possible, this
should only form a small proportion of the file.
In general, the file should be sized slightly larger than its ‘perfect’ size. Unless files are very badly sized, overall performance will only be affected
marginally.
The SEL-RESTORE command restores all or specific records into a jBASE file from an ACCOUNT-SAVE or FILE-SAVE.
COMMAND SYNTAX
SEL-RESTORE targetfilename {recordlist} {(options}
PROMPT
Name of Account on media : sourceaccountname
Name of file on media : sourcefilename
or, when the N option is used:
File number on media : sourcefilenumber
SYNTAX ELEMENTS
sourceaccountname sourceaccountname is the name of the account on the media where the source file
resides.
sourcefilename sourcefilename is the name of the file on the media in which the source records reside.
sourcefilenumber sourcefilenumber is the number of the file on the media in which the source records
reside. Used with the N option.
targetfilename targetfilename is the name of the file to which the records are to be restored.
Option Explanation
-A Media is already positioned in the section containing the account where the source file is located.
-F Display file names as the media is searched for the source file.
NOTES
The command prompts for the name of source account and file held on the media, unless the N option (restore by file number) has been used,
in which case you will be prompted for the number of the source file on the media.
Before execution, any tape device should have been opened with the T-ATT command.
Restores all records from the file TAXCODES in account PAYROLL into the file NEWCODES. The record key will be displayed as each record is restored.
Restores record SINGLE from file number 22 into the file NEWCODES.
A Distributed file is a collection of existing files used primarily for the purpose of organizing data into functional groups. Each file within the
collection is called a part file. A distributed file can contain up to 254 part files. The method for determining in which part file a record belongs
is called the partition algorithm.
As a simple example, suppose your database consists of records which span 42 regions and you elect to distribute your data so that each part
file contains all records for a specific region. With distributed files you would be able to process any one of the region part files independently
of the others, or you would be able to process all 42 region part files collectively (i.e. as one database containing the records from all 42
regions).
Distributed files can also be used when the size of a file exceeds the size limit for the operating system (typically 2 gigabytes). This effectively
permits file sizes to reach 254 times the maximum file size your operating system allows.
Part files can have any name and can be any file type except a distributed file.
Part files can exist anywhere on the network, accessible via the JEDIFILEPATH environment variable, Q-pointers or F-pointers.
Each part file is assigned a part number when it is attached to a distributed file. The part number must be a positive integer in the range of 1
through 254 inclusive. This part number is an integral element, as it is used by the partition algorithm to determine the part file to which the record belongs.
Part numbers do not have to be sequential nor do they have to be continuous. It is quite valid, for example, to have 4 part files numbered 52,
66, 149 and 242.
A part file can belong to more than one distributed file although this imposes two restrictions:
1. The part file must always have the same part number for each distributed file to which it belongs.
2. All distributed files to which a part file belongs must use the same partition logic. In other words, when a record is written to the
common part file, the partition algorithm for each distributed file must resolve the record's location in the same manner. This is only
applicable when the distributed file uses the user-defined partition method.
The number of part files and the partition algorithm can be varied at any time throughout the life of the distributed file. Be aware that if the partition algorithm changes, records that were written to one part file using the original partition algorithm might be written to another part file using the new partition algorithm. This could lead to unwanted duplication.
Another problem that can occur is that the wrong file is accessed through the distributed file stub (i.e. the file to which the part files are attached to create the distributed file set; see Creating Distributed Files). Be aware that part files are resolved in the same manner as any other file in jBASE. For example, suppose two files exist with the same filename, where one is resolved via an F-pointer (in $JEDIFILENAME_MD) and the other is resolved via $JEDIFILEPATH, and that the one in $JEDIFILEPATH is our actual part file. The actual part file will never be found, because the file pointed to by the F-pointer will be found first, as indicated by the jshow -f command. To alleviate this problem, it is best to attach the files using a full explicit filepath (see Attaching and Detaching Part Files for further details).
A distributed file is created using the CREATE-FILE command with the qualifier TYPE=DISTRIB. This will create two files, a dictionary which
is a Hash4 (currently fixed at mod3) and the distributed file stub. If desired, the dictionary can be resized using the jrf utility. For example, the following command creates a distributed file called DISTREGION:
CREATE-FILE DISTREGION TYPE=DISTRIB
The file partition table is empty at this point, and the partition algorithm is set to the default system partition method with a delimiter of ‘-‘ (that is, all record IDs must be of the form "PartNumber-recordID"). These aspects of the distributed file can be changed with the create-distrib command.
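For example, with the default system method and delimiter, a record written with the (hypothetical) ID 4-CUST0001 would be placed in the part file attached as part number 4 and hashed within that part file on the ID CUST0001.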
Files are attached to a distributed file using the create-distrib command with the -a option. A file must already exist before it can be attached
to a distributed file.
In the following example an existing file, DISTCUST.SOUTH, is attached to the distributed file DISTCUST as part number 4:
create-distrib -a DISTCUST 4 DISTCUST.SOUTH
Note: We can also attach a file using a full explicit filepath as in "create-distrib -a c:\home\myaccount\DISTCUST 4 DISTCUST.SOUTH"
This method of attaching a distributed file is preferred to ensure the proper part file is resolved through the partition algorithm. See Part Files
for further details.
A part file can be detached from a distributed file using the create-distrib command with the -d option. The synonym DELETE-DISTRIB can also be used for this purpose, for example to detach the DISTCUST.SOUTH part file from DISTCUST.
Each distributed file uses a partition algorithm to determine in which part file a record belongs. The partition algorithm is specified by using
the create-distrib command. All part files belonging to a distributed file use the same partition algorithm.
There are two methods for defining the partition algorithm, the system defined method and the user-defined method. The partition algorithm
uses the record ID (or part of the record ID) to distribute the record to the appropriate part file.
With the system-defined method, each record ID takes the form PartNumber<Delimiter>RecordID, where:
PartNumber is an integer which determines the part file to which the record is written.
Delimiter can be any character except a system delimiter (AM, VM, SVM). The default delimiter is a dash (-).
RecordID is the actual item-ID of the record. In a ‘hashed’ file type, this determines the group to which the record is written.
The following example sets (or changes) the distributed file DISTREGION to use the system partition algorithm. A dash (-) will be used as the
delimiter between the part number and the record ID:
SUBROUTINE DistCustSub(Reserved, Key, PartNo)
* Select a part number from the first character of the record key.
* The CASE conditions shown are illustrative; the original tests were lost.
EQU Otherwise TO 1
FirstChar = Key[1,1]
BEGIN CASE
   CASE FirstChar = "A" ; PartNo = 1
   CASE FirstChar = "B" ; PartNo = 2
   CASE FirstChar = "C" ; PartNo = 3
   CASE FirstChar = "D" ; PartNo = 4
   CASE Otherwise ; PartNo = 99
END CASE
RETURN
Compile and catalog the subroutine. Ensure that the subroutine is accessible via the JBCOBJECTLIST environment variable.
The subroutine is called each time a record is read from or written to the DISTCUST distributed file. The subroutine must support 3 arguments:
Argument  Description
Reserved  This parameter is reserved for future enhancements and should not be altered within the context of the subroutine.
Key       This is the record ID. It must be constructed in the application program prior to READing or WRITEing the record from/to the distributed file. Do not alter this argument; use it only as a source.
PartNo    This must be assigned by the subroutine and must return a valid part number.
You will notice that the part numbers consist of 1, 2, 3, 4 and 99. This illustrates an important feature. It is not a requirement that the part
numbers be sequential or continuous. This could be used to allow additional part files to be added to the distributed file collection without the
necessity of renumbering.
Take special care when writing this subroutine to account for all possibilities. If for any reason the PartNo cannot be determined, you will receive either a READ_ERROR or WRITE_ERROR at the point of failure. Here is one such example where there are 11 part files. The part number is determined by the last character of the key; the last character is assumed to be numeric but, if it is not, the record is placed in the 11th part file:
lastchar = key[-1,1]
IF NUM(lastchar) THEN
   partno = lastchar
END ELSE
   partno = 11
END
RETURN
The fatal flaw is that the subroutine may encounter an item-ID of null. A null item-ID is considered numeric by NUM(), hence partno would be set to null. A better way to code this would be:
lastchar = key[-1,1]
* Test explicitly for a single numeric character; NUM() treats a null string as numeric
IF lastchar MATCHES "1N" THEN
   partno = lastchar
END ELSE
   partno = 11
END
RETURN
This subroutine takes the 'explicit' approach and does not make assumptions about what form the data will be in.
To set (or change) the distributed file to use the user-defined partition algorithm, use the create-distrib command. For example, to set the
DISTCUST distributed file to use the DistCustSub subroutine:
When compared to the system partition algorithm, the user-defined partition method incurs a small performance penalty when calling the jBC
subroutine. The exact cost of this is highly dependent on how easily the part number is resolved within the subroutine.
The CREATE-DISTRIB command accepts a variety of options which determine its function, as the following table illustrates:
Function                                      Command                                          Synonym
Attach a Part File                            CREATE-DISTRIB -a FileName PartNo PartFileName
Detach a Part File                            CREATE-DISTRIB -d FileName PartNo                DELETE-DISTRIB FileName PartNo
List the Part Files and partition algorithm   CREATE-DISTRIB -l FileName                       LIST-DISTRIB FileName
Verify the existence of the Part Files        CREATE-DISTRIB -v FileName                       VERIFY-DISTRIB FileName
Syntax Elements
Element       Description
FileName      The name of the Distributed File
PartNo        An integer from 1 through 254 inclusive which associates the Part file to the Distributed File
PartFileName  The name of the existing file to attach as a Part File
Delim         A single character used to separate the Part Number from the record ID
-V            Verbose
Examples
CREATE-DISTRIB -a INVOICES 24 INVOICES.MAR1999
Attaches the file INVOICES.MAR1999 as the 24th part file to the INVOICES distributed file.
CREATE-DISTRIB INVOICES 24 INVOICES.MAR1999
Same as the previous example. Note that -a is assumed in the absence of any options.
Sets (or changes) the DISTCUST distributed file to use the user-defined subroutine, DistSub, as the partition algorithm.
CREATE-DISTRIB -d MEDICAL.CLAIMS 149
Detaches (disassociates) the 149th part file from the MEDICAL.CLAIMS distributed file.
CREATE-DISTRIB -l CONVENTIONS
Lists the component part files of the CONVENTIONS distributed file. Also lists the partition algorithm.
CREATE-DISTRIB -v CONVENTIONS
Verifies the existence of the component part files belonging to the CONVENTIONS distributed file. Also confirms the partition method. If the
distributed file uses the user-defined partition method this also verifies that the subroutine can be executed.
The DELETE-DISTRIB command detaches (de-references) a component part file from a distributed file.
Syntax
DELETE-DISTRIB FileName PartNumber
Syntax Elements
Element     Description
FileName    The name of the Distributed File
PartNumber  An integer from 1 through 254 inclusive which was used to associate the Part File to the Distributed File
Notes
If the user-defined partition method is used, you should ensure that the subroutine used for the partition algorithm does not access the de-referenced file.
If the system partition method is used, you should ensure that no keys are created which can read from or write to the de-referenced file.
Example
DELETE-DISTRIB INVENTORY 42
Detaches (de-references) the 42nd part file from the distributed file INVENTORY.
The LIST-DISTRIB command displays all partition information pertaining to a distributed file.
Syntax
LIST-DISTRIB FileName
Syntax Elements
FileName is the name of a Distributed File.
Notes
The VERIFY-DISTRIB command is much more useful as this not only displays the same information as LIST-DISTRIB, it also verifies the exist-
ence of the component part files. If the distributed file uses the user-defined partition method, VERIFY-DISTRIB also verifies that the sub-
routine is executable.
Example
LIST-DISTRIB INVENTORY
The VERIFY-DISTRIB command verifies the existence of the component part files of a distributed file. If the distributed file uses the user-
defined partition method, VERIFY-DISTRIB also verifies that the subroutine is executable.
Syntax
VERIFY-DISTRIB FileName
Syntax Elements
FileName is the name of a Distributed File.
Example
VERIFY-DISTRIB INVENTORY
Although jBASE does not restrict you from directly populating part files, records should always be written through the distributed file stub. Be aware that if a record is placed in the wrong part file, and that record is subsequently handled through the partition algorithm, it will be placed in the part file dictated by the partition algorithm's own logic. This can result in the same record appearing in two part files.
Once part files are populated, changing the logic of the partition algorithm (or changing the partition method), could have disastrous results. If
it is necessary to do this you must pass each record through the new partition algorithm so that it is placed in the proper part file. You must
also remember to delete each record from its original location.
A distributed file is opened in the usual way. For example, the following statement opens a distributed file called DISTCUST:
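A typical form of such a statement (the variable name DistCust is illustrative):
OPEN "DISTCUST" TO DistCust ELSE STOP 201, "DISTCUST"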
By default, when a distributed file is opened, all component part files are opened at the same time. You can defer the opening of all part files by
setting the JEDI_DISTRIB_DEFOPEN environment variable.
On versions of jBASE prior to 3.3.9, if a record ID resolved to a partition (part file) that did not exist, the process would be trapped to the
jBASE debugger with an "Error 22" error message. This behavior has been changed (see patch number PN3_30268) such that a READ from a
non-existent partition will take the ELSE clause and a WRITE will be trapped with an 'Error 22' unless the WRITE is supplied with the ON
ERROR clause.
If you delete a part file then you must also run DELETE-DISTRIB to remove the reference from the distributed file stub. You must also modify any user-defined partitioning algorithm accordingly. This is detailed in the distributed file example.
Distributed files support secondary indexes and triggers at both the distributed file level and the part file level.
In this comprehensive example, we create a distributed file called DISTCUST using a user-defined partition algorithm and attach five part files.
l Create the distributed file stub. This is the file to which all part files will be attached:
CREATE-FILE DISTCUST TYPE=DISTRIB
l Define the partition algorithm. If the distributed file uses the default system partition method, this step would not be necessary unless
you wanted to change the delimiter separating the part number from the record ID. For this example we will use the user-defined
method by assigning the subroutine DistCustSub as the partition algorithm:
l Create the five files to be attached as part files. If the files already exist then this step can be omitted:
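(A sketch of this step and the attach step; all part-file names except DISTCUST.SOUTH are illustrative.)
CREATE-FILE DISTCUST.NORTH
CREATE-FILE DISTCUST.EAST
CREATE-FILE DISTCUST.WEST
CREATE-FILE DISTCUST.SOUTH
CREATE-FILE DISTCUST.OTHER
l Attach the five files as part files:
create-distrib -a DISTCUST 1 DISTCUST.NORTH
create-distrib -a DISTCUST 2 DISTCUST.EAST
create-distrib -a DISTCUST 3 DISTCUST.WEST
create-distrib -a DISTCUST 4 DISTCUST.SOUTH
create-distrib -a DISTCUST 99 DISTCUST.OTHER
l When part file 99 is no longer required, detach it from the distributed file: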
DELETE-DISTRIB DISTCUST 99
-or-
create-distrib -d DISTCUST 99
l Modify the user-defined partition subroutine DistCustSub by removing the lines which allocate records to part number 99. Recompile
and catalog.
l What is jDP?
l NAV UTIL
l jBASE Dictionary Configurator
l jDP and Subvalues
jDP is the jBASE driver for the Attunity Connect software; it provides an interface for the Attunity software to access jBASE data files. Using Attunity, any product adhering to the ODBC, OLE DB, JDBC or ADO standards can retrieve data through the jEDI interface (directly from the jBASE database).
l Allows connectivity to jBASE files via ODBC, ADO, OLE DB and XML type technologies
l Allows front-end applications to use SQL (Structured Query Language) syntax
l Standard jBASE files (back end) need no modification
l Multi-values and sub-values are supported, and can be grouped together as associated sets.
l Attunity is available on all supported jBASE platforms except iSeries and zSeries.
Mode 1
This is the default mode and allows access to all files: a user running the jDP driver can open any file that has a filename which is a valid SQL table name and has a dictionary.
This mode is useful for ad hoc queries, or for users using shrink-wrapped client software.
Mode 2
This mode restricts the view of database files to only those specified in a 'TABLEFILE' (a TABLEFILE is a hash file which contains an item for each data file you want to make 'visible'). This allows access to be restricted to a nominated set of files.
Mode 3
A 'Catalogue' can be described simply as a 'Database': a collection of tables that hold data and belong to the database, the database acting as a logical boundary. Unlike the other two methods of accessing the data through jDP, this method allows a good security and permissions strategy to be set up on all the individual files in the database, thus allowing various users access to only specified data.
For the catalogue to maintain a list of users, schemas, tables and permissions, a catalogue directory is set up to hold 'system files'. These system files hold all the information needed to administer the catalogue.
Within a catalogue, it is possible to create 'Schemas', described as workspaces where users can place or attach their tables. If no schemas are used, the catalogue will use the public schema (the default schema). Schemas are used to hold files for individual users (e.g. Bob) or a logical group of users (e.g. SalesTeam). Each individual schema must have its own working directory where it stores its tables.
When referencing a table in a catalogue, the full explicit reference takes the form Catalogue:Schema.Table.
Example Catalogue:
NutsAndBoltsPlc:SalesTeam
NutsAndBoltsPlc:AdminStaff
NutsAndBoltsPlc:TheMD
Within the SalesTeam schema you may find the following tables:
NutsAndBoltsPlc:SalesTeam.SalesPersonnel
NutsAndBoltsPlc:SalesTeam.Customers
NutsAndBoltsPlc:SalesTeam.SalesOutstanding
NutsAndBoltsPlc:SalesTeam.SalesMade
Getting Started
jDP is installed under:
$JBASERELEASEDIR/jdp (UNIX)
%JBASERELEASEDIR%\jdp (WINDOWS)
Within the jdp directory there should be all that is needed to run the jDP demo. However, if you have not installed jBASE into its default location, C:\jbase4\4.1 (Windows) or /opt/jbase4/4.1 (UNIX), the environment variables described below will need to be adjusted accordingly.
On UNIX systems
$NAVROOT
Should point to the location where jDP has been installed (default /opt/jbase4/4.1/jdp).
$PATH
Should contain $NAVROOT/lib, as well as the location of any additional subroutines that are needed by your dictionaries.
On WINDOWS systems
%NAVROOT%
Should point to the location where jDP has been installed.
%PATH%
Should contain %NAVROOT%\lib.
%JBCOBJECTLIST%
Should contain the location of any additional subroutines that are needed by your dictionaries.
Generally, most common errors are down to the environment not being set correctly. A good test is to open a jSHELL using the same environment that you are using for jDP and see if you can list your files.
To start the jDP daemon on Windows systems, use "Services" (Start >> Settings >> Control Panel >> Administrative Tools >> Services). On UNIX systems, run the following as root from $NAVROOT/bin:
./irpcd.ctl start
UNIX
nav_server.ksh  Sets up the environment when the server is started.
WINDOWS
nav_login.bat   Is run for each client session.
These scripts are needed to set up the environment for jDP when it first launches a client or server; don't forget that jDP will need to see the same sort of things that jBASE does. For example, you may use subroutines from within your dictionaries, or a different MD for jDP users; in both of these examples jDP will fail if you have not set up your jBASE environment variables in the relevant script. These scripts can be found in $NAVROOT/bin, and should contain a working example that points to the default jBASE/jDP locations.
addon.def    This file is needed to tell the Attunity driver how to connect to jDP.
license.txt  The Attunity Connect licence file.
The upgrade utility transfers existing elements of your jDP configuration into object store, jDP's new internal storage mechanism. This utility transfers the following elements of your jDP system into object store:
l jDP metadata
l User profile information specified in the jDP security file
The upgrade applies, for example:
l When connecting to a data source using the ODBC driver on a non-Windows platform.
l When using the data connector and user-defined data types SDK. For details see jDP Open Data Connectivity and the Developer SDK.
What is NAV_UTIL?
NAV_UTIL is a collection of Attunity utilities, including troubleshooting utilities and metadata utilities. All of the utilities run from NAV_UTIL. NAV_UTIL can be used for anything from creating a datasource or running an SQL query to checking the status of any running daemons.
Examples of commands
The following are examples of useful commands; for a more detailed description please read the Attunity documentation.
Generating metadata:
NAV_UTIL GEN_ARRAY_TABLES DEMO * (for all tables)
NAV_UTIL GEN_ARRAY_TABLES DEMO TableName (for a single table)
On-the-fly: Write an SQL statement and end it with a semi-colon. Press Enter to execute the statement.
If the SQL references data from more than one data source, use a colon (:) to qualify the data source (that is, datasource_name:Table_name).
From a file: Enter the full name of a file that contains SQL, prefixed by @. Press Enter to execute the SQL contained in the file. For example:
NavSQL> @C:\sql\sql-query.sql;
You can access the NavSQL environment and run a file immediately with a single command, where data_source is the name of the data source as defined in the binding file and file is the name of the SQL file.
If you want to run all the queries in the file without the overhead of displaying query information on the screen for each query, a quiet form of the command can be used.
In this case, only queries that fail cause information to be displayed to the screen during the run. A message is displayed after all the queries
have been run, stating the number of queries that succeeded and the number that failed.
From within a transaction: Enter the command begin-transaction (optionally with either read-only or write permission) to start a transaction in which you can commit a number of SQL statements together. Use commit to update the data sources with any changes, or rollback if you decide that you do not want to accept the changes.
Standard datasource
This example shows how to use the standard jBASE environment with jDP. Our sample account contains three files (ORDERS, ITEMS and CLIENTS) and is located at:
WINDOWS: C:\Sample
UNIX: /home/Sample
It also has its own MD, simply to demonstrate that an MD can be used; in the MD, create a file pointer called CUSTOMERS.
l Create the sample tables and dictionaries, and then populate them with some meaningful data.
l Adding a datasource.
You will need to let both the client and server know what type of datasource you are creating, which can be done by using one of the following:
l "Attunity Configuration Manager": a GUI interface installed with the Windows version of Attunity, which also allows you to configure remote servers (UNIX and WINDOWS).
l nav_util: a multi-functional utility supplied by Attunity, which allows jDP to set up and maintain datasources and can also execute queries.
To add a datasource via nav_util, execute the relevant command from the server. When a client connects, the Attunity driver needs to be able to see where the jBASE account is. This is done by using the following script:
WINDOWS: nav_login.bat
UNIX: site_nav_login.sh
WINDOWS
Using jed or notepad, edit %NAVROOT%\bin\nav_login.bat and add the Windows equivalents of the settings shown below.
UNIX
Using jed, edit $NAVROOT/bin/site_nav_login.sh. The default script looks similar to this:
001 #
003 #…
035 # Make sure Attunity can see our demo data files
036 JEDIFILEPATH=/usr/jbc/jdp/demo
038
039 #
041 ######################################################################
Add the following lines to let jBASE know where things are:
036 JEDIFILEPATH=/home/Sample
038 JEDIFILENAME_MD=/home/Sample
040 HOME=/home
On the client machines you also need to add a reference to the remote datasource; this can be done via the GUI or by executing the following command:
EXAMPLE
nav_util UPD_DS REMOTE SAMPLE
Creating a link between the Attunity driver and the jBASE files.
If you are using a file that contains multivalues, you will need to configure jDP so that the jDP driver treats the multivalued columns as tables. Physically they are still stored as multivalues in the jBASE files; jDP simply creates an internal virtual table that the jDP driver can use.
EXAMPLE
nav_util gen_array_table SAMPLE CLIENT
You can list all data sources with the following command:
The file viewer will vary depending on whether you are running on Windows or UNIX. In this example we are not using multi-values, so all tables should be usable via ODBC without any problems.
We can use the files that were created for the previous example:
WINDOWS: C:\Sample
UNIX: /home/Sample
We also need to create a jBASE H4 file called TABLEFILE; this file will hold references to the above files. Each entry in a TABLEFILE is very similar to an F pointer in your MD:
WINDOWS: C:\TSample
UNIX: /home/TSample
If you are using a UNIX system, <<ACCOUNT>> will be /home/Sample; on WINDOWS, C:\Sample.
001: <<ACCOUNT>>\ORDERS
002: <<ACCOUNT>>\ORDERS]D
001: <<ACCOUNT>>\ITEMS
002: <<ACCOUNT>>\ITEMS]D
001: <<ACCOUNT>>\CLIENTS
002: <<ACCOUNT>>\CLIENTS]D
l Adding a datasource.
You will need to let both the client and server know what type of datasource you are creating; this can be done by using one of the following:
l "Attunity Configuration Manager": a GUI interface installed with the Windows version of Attunity, which also allows you to configure remote servers (UNIX and WINDOWS).
l nav_util: a multi-functional utility supplied by Attunity, which allows jDP to set up and maintain datasources and can also execute queries.
To add a datasource via nav_util, execute the relevant command from the server. When a client connects, the Attunity driver needs to be able to see where the jBASE account is. This is done by using the following script:
WINDOWS: nav_login.bat
UNIX: site_nav_login.sh
WINDOWS
Using jed or notepad, edit %NAVROOT%\bin\nav_login.bat and add the Windows equivalents of the settings shown below.
UNIX
jed $NAVROOT/bin/site_nav_login.sh
001 #
003 #
035 # Make sure Attunity can see our demo data files
036 JEDIFILEPATH=/usr/jbc/jdp/demo
038
039 #
Add the following lines to let jBASE know where things are:
036 JEDIFILEPATH=/home/Sample
038 JEDIFILENAME_MD=/home/Sample
040 HOME=/home
On the client machines you also need to add a reference to the remote datasource; this can be done via the GUI or by executing the following command:
EXAMPLE
nav_util UPD_DS REMOTE TSAMPLE
l Creating a link between the Attunity driver and the jBASE files.
If you are using a file that contains multivalues, you will need to configure jDP so that the jDP driver treats the multivalued columns as tables. Physically they are still stored as multivalues in the jBASE files; jDP simply creates an internal virtual table that the jDP driver can use.
EXAMPLE
nav_util gen_array_table TSAMPLE TCLIENT
You can list all data sources with the following command:
The file viewer will vary depending on whether you are running on Windows or UNIX. In this example we are not using multi-values, so all tables should be usable via jDP without any problems.
Catalogue datasource
This example shows how to use a Catalogue datasource with jDP.
To create a catalogue, you must set up a catalogue directory by using the 'CreateJDPCatalog' program, normally found in the jBASE bin directory. Run the program on the server (the machine that will hold the data); it will prompt the user for three inputs: 'DSN Name' (the name to be entered into the nav.bnd file), 'Catalogue Directory' and 'Public Tables Path'.
DSN Name
This is the datasource name that the program will write into the binding file; the jDP software uses this file to connect to the catalogue.
Catalogue Directory
This is the directory in which the catalogue can find all its required system tables.
Public Tables Path
This is the directory in which tables for the Public schema are created.
EXAMPLE
jsh ~ -->CreateJDPCatalog
Enter absolute directory pathname in which to locate the Catalog or <Q>uit: c:\cat
Enter absolute directory pathname in which to create tables for the Public schema or <Q>uit:
c:\cat\public
jsh ~ -->
After the program has run, you will have to let jDP know about the new datasource.
If a client machine accesses the data (i.e. the server and the client are separate machines), then the client will also need to set up a datasource entry.
The jBASE Dictionary Configurator (or jDC) is designed to amend standard jBASE dictionaries to store the extra data needed by jDP in order to carry out SQL tasks, such as joins, which require more information. The jDC also allows configuration of which files are visible and updateable. By default, all files are read-only through jDP.
The jDC only operates on locally accessible files. To configure server-based files, a network share mechanism needs to be set up.
Normal jBASE files contain data in one file in multi-valued format which would normally be spread amongst a number of hierarchical tables in a
relational database. The jDC allows the configuration of a number of logical sub tables to allow SQL constructs to work against jBASE multi-
value data in the normal manner.
To configure the settings for a file, select it and click on the Properties button. A dialog similar to the following should appear, which allows configuration of the visibility and read-only attributes of the file:
Property               Description
Description            Some tools can be supplied with the string you enter here as a TABLE description.
Visible to jDP         Specifies whether the columns are hidden from jDP or not. The default, provided by the driver, is that the table is visible. Override the default here. You can override the file defaults on an individual column (dictionary entry).
Allow update           Specifies whether dictionary elements in this file are updatable by default.
Allow SQL NULL Update  If a dictionary element is updateable, this property allows or denies its update using the SQL value NULL. Note that SQL NULL is not the same as a null string ""; it is roughly equivalent to an "Unassigned Var" in the jBC language.
Process Attribute 7    Dictionary elements that are A or S types may have conversions/correlatives on both attribute 7 and attribute 8 of their definition. By default, anything specified on attribute 7 is ignored. You can choose whether or not to process this attribute by default.
File visible to jDP    By default, the driver advertises all files that it can see. However, you can choose to remove this file from the list of advertised TABLES.
Add New Items          Inserts, deletes and updates are forbidden by default, for what should be obvious security concerns. If you want users to be able to add new rows (items etc.) to the file, then specify this here.
Delete Items           Specify whether users are allowed to delete rows (items etc.) from this file or not.
jDP can be used to query subvalues provided the system has been set up correctly. In order to use jDP to query subvalues, the environment variable JDP_AUTO_EXPAND must be set to instruct the query engine to take account of them. In addition, all subvalues must be properly configured using the jDC tool such that there is a controlling multivalue for each subvalue set.
l Overview
l CREATE-INDEX
l DELETE-INDEX
l KEY-SELECT / QUERY-INDEX
l LIST-INDEX
l REBUILD-INDEX
l VERIFY-INDEX
l Using JQL Commands
l SUBROUTINES
l Related BASIC Statements
l REGULAR EXPRESSIONS
l Backup and Restore
l Appendices
In order to speed up retrieval of records by attributes rather than by key, jBASE provides the facility to build secondary indexes on these fields.
These indexes are stored as binary trees (see Appendix A for details) which are compiled and stored in a file with the same name as the data file
with “]I” appended onto the end. For example, an index created on JCUSTOMERS will be stored in a file called JCUSTOMERS]I.
Indexes will have a positive effect on SELECTs against the indexed fields themselves, but index maintenance adds a slight overhead to record updates. Thus a balance must be sought between improvements in query performance and the overhead on other file access.
The CREATE-INDEX command will build a secondary index on an attribute or combination of attributes in the specified file.
COMMAND SYNTAX
create-index -Options filename indexname index-definition
SYNTAX ELEMENTS
Option Description
This command can be used to create a new index definition. By default the index is then rebuilt to include any existing records.
Option -c means the indexes are created in a case-insensitive fashion; for example, "Fred" and "FRED" will produce the same index key. This is used automatically by the key-select or query-index command. However, if a jQL command such as SORT or SELECT wants to use the index, then the command must be constructed in such a way that the jQL command is also case insensitive (for example, attribute 7 of the DICT item is MCU and the selection criteria are all upper case).
Option -d means the pseudo-code created to build the index key can be debugged. This assumes that the debugger is enabled for the rest of the
jBC code anyway.
Options -k and -w are advanced tuning options; see Appendix A for a description of their use.
Option -l is the lookup code. It is used with key-select and query-index: the selection criteria will be converted using an ICONV call before being used. For example, suppose you create a right-justified (numeric) index on, say, attribute 6 of all items; this could be a date field in internal format. If you want to look at a range of dates, then instead of doing this:
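One plausible form of the command (file and index names are illustrative):
query-index CUSTOMERS WITH DATE.IDX GT "10638"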
where 10638 is a date in internal format, then by using the option "-lD" we perform an ICONV on the selection criteria with format "D", thus translating the date from external format to internal format, and so your command line would be:
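With the same illustrative names (10638 is the internal equivalent of 14 FEB 1997):
query-index CUSTOMERS WITH DATE.IDX GT "14 FEB 1997"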
This also applies to selection criteria passed with a jQL command such as LIST or SELECT.
Option -n means that any index keys that are created as zero-length strings will not be entered into the index. This is useful for suppressing unwanted index keys. It is especially useful in conjunction with the CALL statement in the index definition (see Appendix A): if the subroutine decides an entry should not be stored, it can create a null index key.
Option -o will overwrite any existing index definition. Without this, if you attempt to re-define an index definition you will get an error. If the -o
option is used to re-define an existing index definition, then all the index data for the definition previously associated with the index will be
deleted.
Option -a means the index data will not be built once the index definition is created. The default action is to build the index; depending upon the size of your file data, this can be a lengthy operation. Once the index data is built, it becomes in sync with the file data and is available to the index commands such as key-select, and to jQL commands such as SELECT to improve their performance. With this option you need to execute a rebuild-index command to build the index data.
Option -s causes some pseudo source code to be created. This is used with option -d so that you can debug complex index definitions.
Option -v is the verbose mode. It will display a period character for every 1000 records in the file that are rebuilt.
EXAMPLES
Example: Create an index based on attribute 1, concatenated with attribute 2.
Example: Create an index on the attribute given in the DICTionary definition NAME and a second attribute, number 3, but in descending order.
Example: Create an index on attribute 4, which is normally an internal date. You want to be able to specify dates in external format when doing selections against it; additionally, if the field is a null string, don't store any index information.
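Plausible commands for these three examples, following the index-definition grammar described in Appendix A (file and index names are illustrative):
create-index CUSTOMERS IDX1 by 1 : 2
create-index CUSTOMERS IDX2 by NAME by-dl 3
create-index -n -lD CUSTOMERS IDX3 by-ar 4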
Where an index key is built out of three attributes which all have differing numbers of multi-values, it is difficult to create an index key that is logically consistent; an index definition of that form would therefore fail to be created.
Option -N is synonymous with the -n option on create-index. When used, any index keys that equate to a null string will not be stored. This is a compatibility option.
Option -S is a compatibility option which provides for silent operation when an index is created.
Option -M (or -m) suppresses the creation of individual index keys for each multivalue; in other words, all multivalues are used together to create a single index key.
For example, if a record contains:
PIPE
001 123]456]789
then by default three index values, based on "123", "456" and "789", will be created. With the -m (or M) option on create-index, a single index value based on "123]456]789" is built instead.
Option -Vn. This option provides compatibility and is used to limit the number of multivalues used to generate an index key. Without this option, ALL multi-values will each generate an index entry. This option restricts it to the first n values. A special case of (V0) exists: where the multi-value count is set to 0, we assume no multi-values are required and so we don't split the attribute into multi-values but treat the entire attribute as a single entity; in effect the (V0) option is identical to the (M) option.
Remember that the jBASE syntax already allows an individual value to be used instead. For example:
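A sketch using the Numeric.Numeric form described in Appendix A (file and index names are illustrative); this indexes only the first multi-value of attribute 4:
create-index CUSTOMERS IDX4 by 4.1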
Option -X. This option on CREATE-INDEX will set up the index but not run the existing file through it; in other words, it doesn't make any attempt to index what is already in the file. The file will still be marked as "in-sync". The net result is that you get an index containing only newly-written or modified records; very useful when you're dealing with huge files and you only want to process what has changed or been created since the index was set up.
In addition to the above syntax a compatible form of the CREATE-INDEX command can also be used.
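A plausible form of the compatible command (the file name is illustrative):
CREATE-INDEX CUSTOMERS ITEM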
This creates an index called ITEM and the index definition is based on the dictionary item ITEM.
jBASE supports this by converting on-line the syntax to the jBASE syntax and notifying the user of the equivalent converted command (unless
the (S) option is used).
jBASE allows indexes created in this manner to be used with some jQL commands like SELECT or SORT. An index which is not created via a dictionary item must be queried with KEY-SELECT or QUERY-INDEX.
If a complex definition exists in attribute 8, then the conversion will fail and the user will have to use the jBASE syntax.
This example shows a DICT item in jBASE and how, if you run the create-index command against it, it will be converted to the jBASE syntax and run.
INDEX1
001 A
002 3
003 Description
004
005
006
007 D2
008 MCU
009 R
010 10
For example
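A sketch of the converted command for the INDEX1 item above (the file name is illustrative; the MCU correlative implies the -c option and the R justification implies a right-justified sort):
create-index -c SALES INDEX1 by-ar 3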
Note: You now need to rebuild the index data for file "filename" using rebuild-index.
This command is called to delete one or more index definitions that are associated with a file. All the space taken by the index is released to the
file overflow space (but not necessarily the operating system file free space).
COMMAND SYNTAX
delete-index -Options filename { {indexname {indexname ...}} | *}
SYNTAX ELEMENTS
Option Description
Option -a causes all index definitions to be deleted. Note: For J4 files this does not physically delete the index file. If you have no further need
for indexes then the filename]I can safely be deleted.
Without option -a you need to specify one or more index names on the command line.
EXAMPLES
Example: Delete ALL the index definitions for file PRODUCTS:
delete-index -a PRODUCTS
This command allows you to select or count a list of record keys. Note that file updates which cause a change to an index will wait for a query-index to complete. The first form of the query-index command allows you to select all record keys sorted by the index definition. For example, to select all customers sorted by their last name:
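A plausible command, assuming the lastname index built later in this chapter:
query-index CUSTOMERS lastname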
COMMAND SYNTAX
query-index -Options filename index_name
query-index -Options filename {IF/WITH} iname {Op} "startpos" {AND} {Op} "endpos"
SYNTAX ELEMENTS
Where Op can be one of: LT or < (less than), LE or <= (less than or equal to), GT or > (greater than), GE or >= (greater than or equal to).
Option Description
The second form of the query-index command allows you to specify a single conditional parameter. You can make this query less than, greater than, etc., relative to the parameter. If you don't specify LT, GT, etc. then it defaults to equals.
EXAMPLES
Select all customers whose name begins with "KOOP"
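One plausible form, assuming the lastname index (the trailing ] is the usual jQL "starts with" convention):
query-index CUSTOMERS IF lastname "KOOP]"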
Note that in this case the double quotes will be ignored, as would single quotes. The IF token is a throwaway token, used simply for clarity; WITH can also be used to the same effect.
Another example is to select all customers whose date of birth is before 25 July 1956
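A sketch, assuming a date-of-birth index created with the -lD option (the index name DOB is illustrative):
query-index CUSTOMERS WITH DOB LT "25 JUL 1956"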
The third form of the query-index command allows you to specify a range of values. This means the operators must be either GT or GE followed by LT or LE. If the operators are not specified, the command defaults to GE and LE.
Example: Count all the customers whose last order was placed between 19-DEC-1996 and 23-DEC-1996 inclusively.
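A sketch, assuming an index named LAST.ORDER created with the -lD option and using the -c (count) option described below:
query-index -c CUSTOMERS WITH LAST.ORDER GE "19 DEC 1996" AND LE "23 DEC 1996"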
Option -c means a count of record keys is done instead of producing a select list.
Similarly, you can use the -mREGEXP option to use a pattern-matching algorithm called "regular expressions", which allows complicated patterns to be searched for; see the "Regular Expressions" chapter in this document. As an example, the following command will select all products whose description begins with the letter A followed by any number of characters before the sequence PIPE is found:
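A plausible form (the index name DESC is illustrative):
query-index -mREGEXP PRODUCTS WITH DESC "^A.*PIPE"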
Option -i can be used to restrict the number of indexes used to create the list of record keys. This can be useful to restrict a search to a smaller
subset.
NOTES
QUERY-INDEX should only be used to generate select-lists for READNEXT KEY and READPREV KEY statements. It should not be used to
generate select-lists for a subsequent SELECT, the READNEXT statement or for use with other jQL commands, e.g. LIST; the jQL SELECT
command should be used for this purpose.
This command is used to display to the screen details of all the current index definitions. A format similar to jQL output is produced.
COMMAND SYNTAX
list-index -Options filename {Same as jQL options}
SYNTAX ELEMENTS
Option Description
Option -f allows you to specify your own file name instead of list-index creating a temporary file. This way you can find out what DICTionary
items are created by list-index, and if you want to you can modify them and pass them on the command line. Using this option therefore allows
you to define your own output format for the command.
Option -m produces "machine readable" displays; in other words, the detail is displayed simply as a series of lines, one line per index definition, with a tab character (CHAR(9)) delimiting the fields. This makes the output easily parsed by another program or UNIX script.
Option -a is verbose mode: all details will be printed instead of a smaller selection.
EXAMPLES
Example: Display all the index definitions, in full, for file CUSTOMERS, and send the output to the printer.
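A plausible form, using the standard jQL printer option:
list-index -a CUSTOMERS (P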
NOTES
The machine readable format is broken down as follows:
Field 2        Not used
Field 5        Always 1
Fields 16 - 19 Not used
This command rebuilds the index data for existing index definitions. It can be used, for example, after creating an index definition with the -a option of create-index, or after restoring a database with jrestore. By default create-index builds the index, and a rebuild-index command is not required.
COMMAND SYNTAX
rebuild-index -Options filename { {indexname {indexname ...}} | *}
SYNTAX ELEMENTS
Option Description
Option -a means you want to rebuild all the indexes defined for the file. This can also be achieved by specifying * as the index name. Otherwise you must specify one or more index names on the command line.
Option -r rebuilds the indexes for all files in the directory name specified. This is useful after, for example, using jrestore to restore your database: option -r then rebuilds the indexes for all files in a given directory.
This command verifies the integrity of an index definition, in so far as it looks for internal corruption. It does not verify that the index data correctly cross-references the data file records.
COMMAND SYNTAX
verify-index -Options filename { {indexname {indexname ...}} | *}
SYNTAX ELEMENTS
Option Description
-v Verbose mode
Option -a means all indexes will be verified and this can also be achieved by using * on the command line for the index name. Without the -a
option (or * as index name) you must specify on the command line one or more indexes to verify.
Option -r causes all the record information to be displayed. This is the index key followed by all the record keys that share the same index
value.
NOTES
CAUTION: While this command is active, a lock on an entire index is taken. Should an application try to update that index, the application will wait until the lock is released, which is not until the verify-index command has completed for that particular index. This means scenarios such as the one below should be used only with caution, as piping the output into "more" means the lock is retained until the display has completed.
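A sketch of such a scenario:
verify-index -r CUSTOMERS * | more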
jBASE supports a limited mechanism whereby jQL commands such as SORT or SELECT can automatically use any valid secondary index to
reduce the search time. This does not involve creating a specific DICTionary item. If for any reason the index cannot be found, or is not up to
date (e.g. awaiting a rebuild-index command) then the jQL command will ignore the secondary index and retrieve the information in the usual
manner.
At present only a single search pattern can be used in a jQL command. As an example, a file has an index built on attribute 23, the customer last name, like this:
create-index CUSTOMERS lastname by 23
Let us assume there exists a DICTionary definition called LASTNAME that looks like this:
LASTNAME
001 A
002 23
003 Customer Lastname
004
005
006
007
008
009 T
010 20
Now let us assume we try to select all customers in that file whose last name equals "COOPER]". The jQL statement would look like this:
SELECT CUSTOMERS WITH LASTNAME EQ "COOPER]"
In this example the index definition is "out of sync", awaiting a rebuild-index command to be performed; therefore the SELECT would achieve the result by scanning the entire file. Now let us run the rebuild-index command as:
rebuild-index CUSTOMERS lastname
If we now re-execute the SELECT command, then instead of scanning through the entire CUSTOMERS file, it will look through the index defin-
ition "lastname" instead and will therefore execute considerably quicker.
It is possible to call a jBC subroutine from an index definition. The subroutine should have five parameters, as follows:
result  The index key component built and returned by the subroutine.
file    File variable of the file for which the update is being processed.
record  The record (item) being updated.
key     The record key (item-ID) of the record being updated.
field   The attribute extracted according to the CALL() definition.
When an update occurs the index key is calculated by taking attribute 1 and concatenating it with the output from a call to a subroutine called
INDEX-DEF. The source code for this may look something like this:
INDEX-DEF
001 SUBROUTINE INDEX-DEF(result , file , record , key , field )
002 IF NUM(field) THEN result = "*1" ELSE result = "*0"
003 result := record<3>
004 RETURN
In the above example the result is created in the first parameter, the "result" variable. This is calculated by taking the string "*1" or "*0" and con-
catenating it with attribute 3 from the record being updated. The choice of "*1" or "*0" depends upon whether the extracted attribute, passed in
the fifth parameter as variable "field" , is numeric or not. The index definition was "CALL(2,"INDEX-DEF")" so this extracted attribute will be
attribute 2.
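For reference, a definition of this shape could be created with a command like the following (file and index names are illustrative):
create-index SALES IDX5 by 1 : CALL(2,"INDEX-DEF")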
Any normal jBC code will execute in these subroutines, but you should be aware of the following pitfalls.
The code should always create exactly the same result given the same record. This means you should avoid functionality that creates a variable value, such as the RND() function, the TIME() or DATE() functions, the user's port number and so on. If this rule is broken, there will be no way jBASE can delete a changed index value, and so the index file will continually grow with invalid data even if the number of records remains constant.
These subroutines will be called implicitly from other running jBC code which to its knowledge has merely executed a DELETE or WRITE state-
ment. You should therefore avoid any code that changes the nature of the environment such as using the default select list, turning echo on or
off, turning the break key on or off. There are ways around many of these, for example you can turn the echo on and off so long as your code
remembers in all cases to restore it to its original status. Similarly you can do a SELECT so long as it is to a local variable rather than the
default select list.
Depending upon your application, these subroutines may be accessed by users other than the account in which the files exist. Therefore all per-
sons who have access to OPEN and update the file must also have access to be able to CALL your subroutine. This can be done in a number of
ways.
All users who want to update the file may have the environment variable JBCOBJECTLIST set to include the library where these subroutines were cataloged. For example, if the subroutines have been cataloged from account greg, then you can set up JBCOBJECTLIST as follows so that we look in the user's current lib directory and, failing that, in the lib directory for greg (this from the Korn shell):
export JBCOBJECTLIST=$HOME/lib:~greg/lib
Alternatively, you can CATALOG into a directory that is common to all users anyway. One such directory is the lib directory where jBASE is installed. In this case you don't need to set up JBCOBJECTLIST. You must, however, remember to re-catalog all these subroutines when a new version of jBASE is loaded. You change the output directory for CATALOG with the JBCDEV_LIB environment variable. For example, from the Unix Korn shell you would do this:
export JBCDEV_LIB=$JBCRELEASEDIR/lib
CATALOG BP INDEX-DEF
Regular expressions are the name given to a set of pattern-matching characters; the term derives from the Unix environment. Regular expressions can be used to great effect with the query-index command to decide which records to select. A full description of regular expressions can be obtained on Unix systems by entering the command:
% man 5 regexp
For Windows systems only a limited subset of regular expressions is available. The following characters inside a regular expression have special meaning:
^   Matches the start of the string.
$   Matches the end of the string.
.   Matches any single character.
*   Matches zero or more occurrences of the preceding expression.
\x  This escapes the character x, meaning it simply evaluates to x. This is useful if you want to include, say, the ^ character as part of your text string rather than as a character with special meaning.
For example, on either a Unix or Windows/NT system you could use key-select to find a product description that has the text SLIPPER at the start of the description. This can be done using the jQL format, as you might be familiar with from, say, the SELECT command, or by using regular expressions. The two methods are therefore:
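Plausible forms of the two commands (the index name DESC is illustrative):
key-select PRODUCTS WITH DESC "SLIPPER]"
key-select -mREGEXP PRODUCTS WITH DESC "^SLIPPER"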
As a more complicated regular expression, the following example looks for a product that begins with the string BIG, has the word RED somewhere in the text, and then must end with the word ENGINE:
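A plausible form:
key-select -mREGEXP PRODUCTS WITH DESC "^BIG.*RED.*ENGINE$"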
The Unix implementation uses the operating system's supplied version of regular expressions, and these are far more powerful than the jBASE-supplied version on Windows systems. As already mentioned, use man 5 regexp for more details. The following example looks for a product description that begins with the word PIPE, then any number of spaces, then one or more numeric characters (optionally including a decimal point), then any number of spaces, and finally the characters "mm", which are case insensitive:
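One plausible expression (exact syntax depends on the platform's regexp support):
key-select -mREGEXP PRODUCTS WITH DESC "^PIPE *[0-9][0-9]*(\.[0-9][0-9]*)? *[Mm][Mm]$"
All of the following descriptions would match: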
PIPE 5 mm
PIPE15MM
PIPE 33.3 mm
ACCOUNT-SAVE
This should only be used to transfer your data to a non-jBASE system; the indexing information will be lost.
By default, when you use jbackup to save a database, any indexing information is saved as just the index definition, not the actual index data. Conversely, during a restore using jrestore, the index definition will be restored, but not the index data, whether it exists on tape or not.
l Back up the database:
cd $HOME
find . -print | jbackup -v -f /dev/rmt/0
l Restore the database:
cd $HOME
rm -rf *
jrestore -v -f /dev/rmt/0
l Rebuild the indexes. The restore stage will have restored the database and the index definitions, but not the index data. To rebuild the index data for all the files you have just restored:
rebuild-index -r $HOME
If you have restored files into sub-directories, you could use the following Unix commands to rebuild the indexes for all files in all sub-directories:
cd $HOME
find . -print | xargs rebuild-index
When you back up a database with jbackup, you can use the -c option and this will back up the actual index data as well as the data files. The index data is dumped as a normal Unix binary file, and so during the restore phase it will be restored exactly as-is.
When you restore a database using jrestore, by default it will restore the index information, but will NOT rebuild the index data. This is quite
time-consuming and so by default the jrestore works in the quickest mode. The index will need to be re-built at a later stage using rebuild-
index. In the meantime, any attempts to use query-index or key-select will fail, and the jQL programs such as COUNT and SELECT will not use
the index in order to satisfy the request. During the restore a warning message will be shown against each file that needs re-building. Once the
rebuild-index has completed, the index will become available again to the jQL programs and query-index.
You can optionally use the -N option to jrestore and this will result in the indexes being built as the data is restored. This will slow down the
restore time, but means the index is immediately available for use without the need to re-build the indexes with rebuild-index.
Within the filename]I file, each index is stored as a b-tree. By default each node of the tree relates to one 'index value' and contains a list of keys for the records that contain that value. This is efficient for read operations because, for any requested index value, the required node can rapidly be found and the record list returned.
For write operations this is however inefficient. If a given value points to a large list of records then to insert a new record the correct place in
the list must be found, the remainder of the list shifted up and the new ‘record key’ inserted. If a file containing a large number of records is
indexed this can be a very slow operation.
To solve this problem, 'write mode' was introduced. In this mode, instead of storing record keys in simple lists, each list is itself stored as a b-tree rooted in a node of the existing index value tree. These 'record b-trees' are searched by record key instead of index value. The result is that once the correct index value has been found, a new record key can quickly be inserted. However, when a read is required the record b-tree must be coalesced back into a list, which is time consuming, so read performance is reduced.
By default, when create-index (or rebuild-index) is invoked, the system is switched into write mode. All the records are written into the index, the system is switched back into 'read mode' and the record b-trees are coalesced back into lists ready to be read. Create-index has some options to fine-tune this behaviour:
Option -k means 'slow coalesce' mode. In this mode the re-coalescing of b-trees is deferred until each index value is read for the first time. This gives a slightly faster rebuild, but at the cost of a delay on the first read of each value.
Option -w means 'write only' mode. In this mode b-trees are never coalesced into lists. This reduces the performance of read operations but increases the performance of write operations. If an indexed file is frequently modified, but only occasionally searched, then this mode should be considered.
create-index CUSTOMERS CUST-NAME BY 4
In the above example, CUSTOMERS is the name of the file and CUST-NAME is the name of the index definition. The full details of create-index have already been described earlier in this document.
This appendix describes how to build the index key from the record data, which is "BY 4" in the above example. The syntax of the index definition can be summarised as:
by-expr sort-def { by-expr sort-def ... }
where by-expr is one of BY, BY-AL, BY-DL, BY-AR or BY-DR, and sort-def will be described later. Each by-expr introduces another sort sequence in the index key. Consider the following example:
BY 3 BY 2
This means the index key will be made up from attribute 3 of the record, a 0xff delimiter, and finally attribute 2 of the record. The index data will be sorted by attribute 3; where attribute 3 is the same as in previous index keys, it will be further sorted by attribute 2.
The keyword BY-AL is the same as BY and means the sort sequence is ascending left justified.
The keyword BY-DL means the sort sequence is descending left justified.
The keyword BY-AR means the sort sequence is ascending right justified.
The keyword BY-DR means the sort sequence is descending right justified.
When a sort sequence is described as right justified, this takes on the same meaning as it does in jQL commands such as LIST and SELECT, i.e. it assumes numeric data. Therefore right-justified fields will be sorted numerically and left-justified fields sorted textually. In the event that a right-justified field contains a null string or non-numeric data, the sort sequence will follow the same precedence as the equivalent jQL command.
The "sort-def" definition mentioned earlier is the description of how an individual sort sequence is defined. The "sort-def" can be one or more
extract definitions , each definition being delimited by a colon. Each sort-def definition is concatenated to each other. For example:
by 3 : 4 : 5 by-dr DATE
l attribute 3 from the record concatenated with attribute 4 from the record concatenated with attribute 5 from the record concatenated
with a 0xff delimiter between the sort sequence
l attribute NN from the record, where NN is described the in DICTionary item DATE.
When the index key is added to the index data, the key will be sorted, in ascending left-justified sequence, by the first sort sequence, attributes 3, 4 and 5. If there are duplicate keys, it is further sorted in descending right-justified sequence by attribute NN from the record, where NN is described in the DICTionary item.
Note that when using DICTionary items to describe which attribute number to extract, the index definition simply extracts the attribute number and forgets the DICTionary name. This way the index remains logically consistent even if the DICT record is later amended. The "sort-def" definition can be more than a simple attribute number; it can be any of the following types.
Numeric  This means simply extract attribute "Numeric" from the record. If the numeric value is 0 then this means the record key.
Numeric.Numeric  This means extract a specific multi-value from a specific attribute. For example, 4.3 means extract multi-value 3 from attribute 4.
OCONV(field, "code")  This causes an output conversion to be applied to the defined "field". The conversion to apply is any that can be applied with the OCONV function in jBC code. The "field" is defined using one of the above-mentioned "Numeric" or "Numeric.Numeric" definitions.
CALL(field,"subname") This allows a normal jBC subroutine to be called. The mechanism for doing this is defined later.
The above creates an index key with two sort sequences. The first sort sequence is ascending left justified, with multivalue 1 of attribute 3 concatenated with an asterisk and then concatenated with multivalue 1 of attribute 2. The second sort sequence is an ascending right (i.e. numeric) sequence on the fourth attribute. This index could be, for example, a customer's surname, an asterisk and a customer's first name. In the event that you have two customers with the same name, it will be further sorted by their date of birth.
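A plausible form of the definition described next (file and index names are illustrative):
create-index -n -lD ORDERS IDX6 by-ar CALL(3,"TESTDATE")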
In the above example, attribute 3 will be extracted from the record and passed to a user-written subroutine called TESTDATE. This subroutine can amend the index key from what it was passed (e.g. attribute 3) to whatever it likes. We will assume the purpose of this subroutine is to validate the extracted value and to return a null index key when the record should not be indexed.
The use of the -n option means any null index keys will not be stored in the index data. Hence, when used in conjunction with the subroutine, it is a way of using a user-written subroutine to decide which records to put in the index data.
The use of by-ar further means that all index keys (in this case non-null keys) will be stored in ascending right-justified fashion, i.e. as a numeric sort.
The use of the -lD option shows that when an enquiry is made of the index via query-index , key-select or the jQL commands such as LIST or
SELECT, then the string to search for will first of all be converted using an input conversion of type "D". This is primarily used so that these
commands can use date (or times) in external format (e.g. 23-jul-1970) but will be compared numerically as internal values against the index
data.
Within a jBASE application the normal convention adopted by jBASE , for the purposes of legacy code compatibility, is that times are stored as
seconds past 00:00:00 and dates stored as the number of days since 31st December 1967.
The way to convert from UTC to jBASE time and dates and vice-versa can be demonstrated by the following code segment
*
* Convert a UTC value to a displayable time and date
*
UTC = 866558733
internal.date = INT(UTC/86400)+732
internal.time = MOD(UTC,86400)
CRT "Date = ":OCONV(internal.date,"D")
CRT "Time = ":OCONV(internal.time,"MTS")
*
* Convert internal time and date to UTC
*
UTC2 = (DATE()-732)*86400 + TIME()
CRT "UTC2 = ":UTC2
One important aspect to remember is that the UTC is often in the base time of the operating system without any time zone applied. For
example on Unix systems you set the time and date of your system in UTC but then individual accounts may have different time zones applied.
Thus if you create an index at what appears to be a time of say 10:40:29 then this could actually be a time of 11:40:29 but with a time zone of
minus one hour applied.
l Introduction
l Overview
l The Development Challenges
l The Development Approaches
l Internationalization and Localization
l The Process of Internationalizing
l The Process of Localizing
l Code Pages
l Locales
l Unicode
l International Components for Unicode (ICU)
l jBASE, Internationalization and UTF-8
l jBASE Internationalization Configuration
l jBASE Code Page Configuration
l jBASE Locale Configuration
l jBASE Timezone Configuration
l jBASE Function Changes for International Mode
l jBASE JQL Changes for International Mode
l jBASE File Conversion
l jBASE Error Message Files
l jBASE Spooling and Printing
l Potential Performance Issues
l The Future and UTF-8
l Summary
l Appendices
The release of jBASE 4.1 incorporates modifications to the underlying jBASE library functions and components to provide the tools and mechanisms whereby global communities can internationalize and localize applications. The internationalization functionality provides applications with the capability for correct handling of locale-dependent collation sequencing, along with processing of unique character properties, code page data import/export and terminal/printer data input and output. The jBASE library functions, when used in International Mode, process internally using character rather than byte orientated values and properties, such that applications can be coded or converted with minimal change for the international market.
More applications are crossing international and cultural boundaries, which require localization to provide support for the local language of the
user. Internationalization is the development of software for localized user communities without the need to change the executable code.
Fundamentally, computers deal with numbers. They store letters and other characters by assigning a number to each one. Before the invention of Unicode, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters:
for example, the European Union alone requires several different encodings to cover all languages. Even for a single language like English, no
single encoding was adequate for all the letters, punctuation, and technical symbols in common use.
These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use dif-
ferent numbers for the same character. Any given computer may need to support many different encodings; yet, whenever passing data between
different encodings or platforms, that data always runs the risk of character corruption from incorrect or incompatible conversion.
As different countries use different characters to represent words, some basic problems arise in the context of software development when con-
sidering the global market. Here follows some common problems developers need to consider:
l The basic concern is that there are more than 256 characters in the world. Languages such as Cyrillic, Hebrew, Arabic, Chinese, Japan-
ese, Korean, and Thai use characters that are not included in the 256-character ASCII set; but somehow, these characters must be
made available.
l It is impossible to store text from different character sets in the same document/record: if each document has its own character set,
manual intervention for conversion becomes inevitable.
l The introduction of new characters, such as the Euro symbol, must be accounted for. The Euro replaced old European currency symbols, and documents containing those symbols changed.
How can applications interchange data that may include one or more character sets? The solution is to adopt a worldwide usable character set
and use a well-conceived strategy that incorporates that character set to produce software products.
Companies often develop a first version of a program or system to just deal with English. Later, when it becomes necessary to produce the first
international version, the product is ‘internationalized’ by going through all the lines of code, translating the literal strings.
The process is time consuming and is not a good long-term strategy. Each new version is expensive, as the painstaking process to identify all
the strings, which require translation is repetitive. Because there are multiple versions of the source code, maintenance and support becomes
very expensive. Moreover, there is a high probability that a translator may introduce bugs by mistakenly modifying code.
The fundamental advantage of internationalizing software is that the developer is not limited in the number of different languages or countries that can be supported. If, after developing the internationalized product, you need to provide the software in another language, it is simply a matter of translating resource files into that language.
For applications that will only use the standard ASCII characters, internationalization is not a vital concern. However, for those applications
that need to handle characters outside the ASCII range, internationalization is the best medium- and long-range development solution.
Internationalization is the process of producing a globalized product, in terms of both the design and the code, which is independent of the lan-
guage, script, and culture.
The design of Internationalized products is to support any language, script, culture, and code pages, through the localization process, with min-
imal expense and effort. The localization of an internationalized product is possible without any code change, merely by translating text and per-
haps supplying new graphics.
When internationalizing a software product, the goal is to develop software independently of the countries or languages of its intended users, so that the translated software is ready for multiple countries or regions. Internationalization reduces localization time: there is no need to examine the source code in order to translate the displayed user strings; only the resource files containing the display strings are translated.
The overriding goal of internationalizing programs is to prepare and ensure the code never needs modification; separate files contain the trans-
latable information. This process involves a number of modifications to the code:
l Move all translatable strings into separate files called resource files, and make the code access those strings when needed. These
resource files are completely separate from the main code, and contain nothing but the translatable data.
l Change variable formatting to be language-independent.
l Change sorting, searching, and other types of processing to be language-independent. This means that dates, times, numbers, cur-
rencies, and messages call functions to format according to local language and country requirements.
Localization is the process of adapting an internationalized offering to a specific language, script, and culture. A localized product is one that is
fully adapted to a country's language and cultural conventions, such as currency, number formats, sorting sequence, and so on.
Localizing an internationalized program involves no changes to the source. Instead, contractors and/or translation agencies modify the files.
The initial cost of producing internationalized code is somewhat higher than localizing to a single market, but the advantage is that you only
pay that cost once. The cost of doing an additional localization, once the code is internationalized, is a fraction of the previous cost, and avoids
the considerable cost of maintenance and source code control for multiple code versions.
The term ‘code page’ refers to any of the many different schemas and standards used to represent character sets for various languages. Unicode
has combined the various code pages into the Unicode Standard (which is equivalent to ISO 10646).
Examples: English uses ASCII; Chinese, Japanese, Korean, and Vietnamese use CJKV, Win950, Win932, Win949, and Win1258.
From a geographical perspective, a locale is a place. From a software perspective, a locale is a set of information associated with a place. The loc-
ale information includes the name and identifier of the spoken language, sorting and collating requirements, currency usage, numeric display
preferences, and text directionality (left-to-right or right-to-left, horizontal or vertical).
A locale identifier comprises:
l Language code
l Country Code
l Variant Code
For example: fr_FR_EURO
The ‘fr’ is the French language code, the ‘FR’ is the country code; the EURO signifies the use of euro currency.
Unicode is a single coded character set providing a repertoire for all the languages of the world. Its first version used 16-bit numbers, which allowed encoding for 65,536 characters; further development allowed a repertoire of more than one million characters, requiring 21 bits. Code points above this limit have been declared unusable to ensure interoperability between UTF encoding schemes, since UTF-16 cannot encode them. Values above 16 bits are handled as a pair of 16-bit codes (a surrogate pair).
l The first Unicode version used 16 bits, which allowed for encoding 65,536 characters.
l Further extended to 32 bits, although restricted to 21 bits to ensure interoperability between UTF encoding schemes.
l Unicode provides a repertoire of more than one million characters.
The 16-bit subset of UCS (Universal Character Set) is known as the Basic Multilingual Plane (BMP) or Plane 0.
Unicode provides a unique number for every character, on every platform, for every program, no matter what the language. Standards such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0 and WML require Unicode, which is the official way to implement ISO/IEC 10646; it is supported in many operating systems, all modern browsers, and many other products.
Incorporating Unicode into client-server or multi-tiered applications and websites can offer significant cost savings over the use of legacy char-
acter sets. Unicode enables a single software product or a single website to be targeted across multiple platforms, languages and countries
without re-engineering, and allows data to be transported through many different systems without corruption.
Contracting: In Spanish sort order, 'ch' is considered a single letter; all words that begin with 'ch' sort after all other words beginning with 'c'.
Expanding: In German, ä is equivalent to 'ae', such that words beginning with ä sort between words starting with 'ad' and 'af'.
Unicode Normalization
Normalization is the removal of ambiguities caused by precomposed and compatibility characters. There are four different forms of normalization:
Form D (NFD): splits up (decomposes) precomposed characters into combining sequences where possible; for example, precomposed ü (U+00FC) decomposes into u (U+0075) followed by COMBINING DIAERESIS (U+0308).
Form C (NFC): composes combining sequences into precomposed characters where possible.
Form KD (NFKD): like Form D, but avoids the use of compatibility characters (e.g., uses 'fi' instead of U+FB01 LATIN SMALL LIGATURE FI).
Form KC (NFKC): like Form C, but avoids compatibility characters.
Note that UTF-8 encoding requires the use of precomposed characters wherever possible.
UTF-8 represents each Unicode code point as a sequence of one or more bytes; the sequence used for any given character depends on the Unicode number that represents that particular character. The UTF-8 encoding has the following properties:
UTF-8 encoding is a Unicode Transformation Format. Before UTF-8 emerged, users all over the world had to use various language-specific extensions of ASCII. This made the exchange of files difficult, and application software had to consider small differences between these
encodings. Support for these encodings was usually incomplete and unsatisfactory, because the application developers rarely used all these
encodings themselves.
l Files and strings that contain only 7-bit ASCII characters have identical encoding under ASCII and UTF-8.
l ASCII bytes 0x00-0x7F cannot appear as part of any other character.
l Allows easy resynchronization, makes the encoding stateless, and guards against the possibility of missing bytes.
l Can encode all possible 2^31 UCS codes.
l UTF-8 encoded characters may theoretically be up to six bytes long; however, 16-bit BMP characters are only up to three bytes long.
l The sorting order of big-endian UCS-4 byte strings is preserved.
l The bytes 0xFE and 0xFF are never used in the UTF-8 encoding.
l UTF-8 is also much more compact than other encoding options, because characters in the range 0x00-0x7f still only use one byte.
l Only the shortest possible multi-byte sequence that can represent the code point of the character is used.
l In multi-byte sequences, the number of leading one bits in the first byte is identical to the number of bytes in the entire sequence.
l Unicode represents a unique character by a unique 32-bit integer. UTF-8 encoding avoids the problems that would arise with 16- or 32-bit character streams, where embedded zero bytes would be treated as the normal C string terminator and the stream could become prematurely truncated.
International Components for Unicode (ICU) is IBM’s open source package for cross-platform Unicode library enablement for C/C++ products.
ICU provides functions for formatting numbers, dates, times, currencies according to locale conventions, and similarly, ICU provides code and
data to handle the complexities of native language collation, searching, and other processes. It also provides a mechanism for accessing strings
from resource files, thereby sharing common strings across countries that have the same language.
Perhaps the chief benefit of ICU is that it provides fully portable cross-platform libraries. Since the code is portable to a wide variety of plat-
forms, it is possible to share data formats that drive the code, at runtime, across different platforms.
jBASE has implemented additional functions that interface with the ICU APIs; for example, instead of using the standard jBASE date con-
versions, it invokes the ICU date conversion procedures, thereby providing fully internationalized date formats.
The code of jBASE 4.1 releases provides internationalization functionality, sometimes referred to as 'i18n' (an 'i', followed by the 18 letters of 'nternationalizatio', followed by an 'n'). This enables applications to take advantage of the internationalization functionality and hence provide for the global market, i.e. a fully internationalized application.
When application accounts are configured for international mode, all program variables and data records are handled internally as UTF-8 encoded byte sequences. The UTF-8 encoding scheme represents characters other than those in the standard 7-bit ASCII range (0x00-0x7f) as two-byte or three-byte sequences, rather than the normal single 8-bit byte.
The number of bytes in the UTF-8 sequence depends on the original code page. For example, characters in the range 0x80-0xff, representing the single byte character set ISO-8859-1, are encoded as two bytes. However, characters imported from a Double Byte Character Set (DBCS), such as Kanji characters from the Japanese code page "shift_jis", are represented as three bytes when encoded as a UTF-8 byte sequence.
When executing in international mode, all terminal input can be configured to be converted from the configured input code page to a UTF-8 byte sequence. Similarly, for terminal output, the UTF-8 byte sequences are converted to one of the output code pages dependent upon the code page of the terminal device. Normally the input and output code pages would be the same. There is also an obvious advantage to skipping the conversion step and using UTF-8 directly where possible to communicate with terminal devices, as this helps reduce the conversion overhead from UTF-8 to the configured code page. Several telnet emulators now support a UTF-8 mode for telnet communication.
UTF-8 is an ASCII preserving encoding of the ISO/IEC Unicode Standard 10646. Therefore, all ASCII characters remain the same and still use
a single byte of storage. This provides UTF-8 encoding with an advantage over other forms of Unicode encoding or translation formats. Some
forms would require either a doubling (UTF-16, UCS2) or quadrupling (UTF-32, UCS4) of byte space required to represent the Unicode Code
Point; however with UTF-8 encoding, only the characters over and above the ASCII 7 bit range need to be stored as multi bytes.
jBASE 4.1 provides internationalization support for code page conversion, collation sequences, and international dates and times, along with number and currency formatting. The internationalization configuration is based on the user id and/or the following jBASE environment variables:
l JBASE_CODEPAGE,
l JBASE_LOCALE,
l JBASE_TIMEZONE.
NOTE: The user id configuration or environment variables have no effect if the account in which the application executes is not configured for international mode or the environment variable JBASE_I18N is not set.
Application providers are responsible for the handling of all directionality issues, such as left-to-right, top-to-bottom orientation. The jBASE
library functions such as length (LEN); string comparisons (LT, LE, GT, GE) and collation order statements like (LOCATE/SORT) have all
been modified to operate on a character basis in international mode rather than bytes, along with the currently configured user locale.
Configure the Code pages for the user id using the JBASE_CODEPAGE environment variable. Display a full list of available code pages using
the “jcodepages” command.
All input and output conversion can be undertaken, however it is more efficient to use UTF-8 for input and output if possible, as no code page
conversion is then necessary, reducing system resource requirements. There are several commercially available telnet clients that can com-
municate using UTF-8, in these cases the telnet client performs the conversion from the configured code page to UTF-8, hence it is important
to ensure that the client is configured correctly such that the input and output code page is the correct one for the keyboard mapping required.
Code page conversion is only applicable when the JBASE_I18N environment variable is set. If the JBASE_I18N environment variable is not set, then code page conversion will not occur, and all variables will be handled as bytes rather than as characters. As configuration of international mode is on an account basis, the state of international mode can change on execution of a LOGTO.
Locales can be configured for the user id via the JBASE_LOCALE environment variable. Display a full list of available locales from the com-
mand line by the “jlocales” command.
Configured locales are only applicable when executing an application in international mode or when the JBASE_I18N environment variable is configured. Otherwise, the locale is based on the underlying OS locale configuration and the configured locale for the user id has no effect.
As configuration of the international mode is on an account basis, the state of international mode can change on execution of a LOGTO. If con-
figuring an account with international mode ‘false’ then the JBASE_I18N environment variable will be unset as the result of the LOGTO.
Timezones can be configured for the user id via the JBASE_TIMEZONE environment variable. Display a full list of available Timezones from
the command line by the “jtimezones” command.
Configured Timezones are only applicable when executing an application in international mode or when the JBASE_I18N environment variable
is configured. If the JBASE_I18N environment variable is not set, the timezone is based on the underlying OS timezone configuration and the configured timezone for the user id has no effect. As configuration of international mode is on an account basis, the state of international mode can change on execution of a LOGTO. If an account is configured with international mode 'false', then the JBASE_I18N environment variable will be unset as the result of the LOGTO.
Internally, very few of the jBASE library functions need changing to process data as UTF-8 encoded multi byte sequences rather than single
bytes; resultant values are based on characters rather than bytes. Some functions change their internal workings depending upon the state of
international mode or JBASE_I18N setting.
In international mode, length and substring extraction work in 'characters', not 'bytes'. Resultant positions are character positions, not byte offsets.
BYTELEN
A new function has been provided to obtain the actual number of bytes rather than characters. For example:
The following source code example contains UTF-8 encoded characters representing the German ‘u’ umlaut (0xC3 0xBC) and the double ‘s’
(0xC3 0x9F).
X = "Füßball"
CRT X
CRT "Character Length of X is ":LEN(X)
CRT "Byte Length of X is ":BYTELEN(X)
CRT "Substring[1,3] of X is ":X[1,3]
If executed in international mode and with the Input/Output Code Page configured to ISO-8859-1 (Latin1) this code will produce the following
output.
Note: The length returned by the LEN function is the number of characters in variable X, whereas the length returned by the BYTELEN
function is always the number of bytes in variable X.
Füßball
Character Length of X is 7
Byte Length of X is 9
Substring[1,3] of X is Füß
Character properties
UPCASE, DOWNCASE, ALPHA, MATCHES, MATCHFIELD
In International mode, functions use the configured locale to convert and/or test character properties.
For example:
The following source code example contains a UTF-8 encoded byte sequence representing the German ‘u’ umlaut (0xC3 0xBC).
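As a minimal sketch (assuming a variable X holding the umlaut character), the statements described might be written:
X = "ü"                                   ;* UTF-8 bytes 0xC3 0xBC
CRT UPCASE(X)                             ;* Ü, i.e. 0xC3 0x9C
IF ALPHA(X) THEN CRT X:" is alphabetic"
IF X MATCHES "1A" THEN CRT X:" is alphabetic"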
If executing in international mode and with the Input/Output Code Page configured to ISO-8859-1 and with the locale configured for German
(de_DE) the code produces the following output.
ü is alphabetic
ü is alphabetic
The UPCASE function converts the lower case u umlaut to the upper case equivalent. In other words, the UTF-8 byte sequence 0xC3 0xBC
becomes 0xC3 0x9C.
The ALPHA function tests the lower case u umlaut as an alphabetic character according to the configured locale, de_DE.
The MATCHES statement tests the lower case u umlaut against the single alphabetic character pattern according to the configured locale, de_DE.
Collation properties
SORT, LOCATE, COMPARE, LE, LT, GE, GT
In international mode, statements use the configured locale to determine sort order. For example:
A sort of the following UTF-8 encoded byte sequences using the SORT function will generate a different sort order depending on the configured
locale.
cote
côte
coté
côté
Note that the word côte sorts BEFORE the word coté for the configured locale fr_FR.
The following statement, when executed in International mode with the locale configured for French, fr_FR, produces the ordering shown above.
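As a minimal sketch (variable names are illustrative), such a sort might be written:
* Sort four UTF-8 encoded words; the resulting order depends on the
* configured locale
Words = "coté" : @AM : "côte" : @AM : "cote" : @AM : "côté"
Sorted = SORT(Words)
CRT CHANGE(Sorted, @AM, " ")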
Conversion properties
ICONV, OCONV, FMT
The implementation of conversions is by a set of jBASE library functions, which in turn invoke functions in the IBM Public License package,
ICU. This package provides cross-platform open source libraries compliant with Unicode Standard 3.0 and currently supports over 170 locales
independently of the system locales. Several input and output conversions become dependent upon the configured locale.
For example, the following source code example will output a different date format dependent upon the configured locale when executing in
international mode.
CRT OCONV(0,"D2/")
CRT OCONV(0,"D")
This code will produce the following if executed in international mode with a configured German locale of ‘de_DE’.
31 DEZ 1967
However, some conversions can be used to ‘force’ an expected format regardless of locale, for instance the DE date format will always produce a
European date format. The DG format is a new Global date format for YYYYMMDD.
Character functions
CHAR, SEQ
The CHAR and SEQ functions behave differently in international mode.
In international mode the CHAR function now provides for an extended numeric range to support 32 bit Unicode code point values. The CHAR
function will return a UTF-8 encoded byte sequence for the numeric range 128-247 (0x80-0xf7) and the range 256 and beyond, however
numeric values in the system delimiter range 248-255 (0xf8-0xff) will continue to return the normal single byte system delimiters characters.
The resultant characters for numeric values in the ASCII range 0-127 (0x00-0x7f) are unchanged.
In international mode, the SEQ function now accepts UTF-8 encoded byte sequences such that UTF-8 byte sequences representing characters
in the range 0-127 (0x00-0x7f), i.e. single byte characters return the normal ASCII numeric values. UTF-8 encoded byte sequences representing
characters in the range 128-255 (0x80-0xff) will return the ISO-8859-1 equivalent numeric value. System delimiter characters will return
numeric values in the range 248-255 (0xf8-0xff). Other UTF-8 encoded byte sequences will return the equivalent numeric value as specified by
the Unicode code point.
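As a brief illustrative sketch of the ranges described above (not taken from the original examples):
CRT SEQ(CHAR(65))    ;* 65: the ASCII range 0-127 is unchanged
X = CHAR(228)        ;* 228 (0xE4, ä) returns the UTF-8 sequence 0xC3 0xA4
CRT LEN(X)           ;* 1 character
CRT BYTELEN(X)       ;* 2 bytes
CRT SEQ(X)           ;* 228, the ISO-8859-1 equivalent value
CRT SEQ(CHAR(250))   ;* 250: system delimiters remain single bytes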
Additional Functions
BYTELEN, LATIN1, LENDP, UTF8
Additional functions are provided to help programs that need to know the actual byte length of a variable, as well as conversion functions for handling binary values. The conversion functions should only be required when dealing with binary data, for example when handling data to/from tape devices.
The BYTELEN function returns the number of actual bytes used for the string variable. Use this function whether executing in international
mode or not.
The LATIN1 function will convert a string variable from a UTF-8 encoded byte sequence to its ISO-8859-1 (binary) equivalent. Use this function whether executing in international mode or not.
The LENDP function will return the number of character display positions required in order to display the string variable. Use this function to determine the display width of characters; for instance, the null character has a display width of zero, whereas some Japanese Kanji characters require more than one display position. The LENDP function will change behaviour if used without international mode set true.
The UTF8 function will convert a string variable from ISO-8859-1 (binary) to the UTF-8 encoded equivalent. Use this function whether executing in international mode or not.
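As a sketch of how these functions might be combined when handling binary data from a tape device (the tape statements and variable names are illustrative):
READT BinaryRec ELSE STOP            ;* tape data arrives as binary bytes
Rec = UTF8(BinaryRec)                ;* ISO-8859-1 (binary) to UTF-8
CRT "Characters = ":LEN(Rec):", bytes = ":BYTELEN(Rec)
CRT "Display positions = ":LENDP(Rec)
Out = LATIN1(Rec)                    ;* back to binary before writing
WRITET Out ELSE CRT "Tape write failed"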
Timestamp Functions
TIMESTAMP, TIMEDIFF, CHANGETIMESTAMP, MAKETIMESTAMP
LOCALDATE, LOCALTIME
These additional functions assist with date and time internationalisation; they enable applications to obtain, convert and process a 'timestamp'. These functions are available regardless of the current state of international mode.
The TIMESTAMP function returns a timestamp of Coordinated Universal Time (UTC) as decimal seconds.
The CHANGETIMESTAMP function generates a new timestamp by adjusting the supplied timestamp by a dynamic array, which specifies the
adjustment values.
The LOCALTIME function generates an internal time value using a supplied timestamp and time zone.
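As a sketch (the time zone name and the exact argument forms are assumptions based on the descriptions above):
TS = TIMESTAMP()              ;* UTC as decimal seconds
CRT "UTC timestamp = ":TS
D = LOCALDATE(TS, "EST")      ;* internal date in the given time zone
T = LOCALTIME(TS, "EST")      ;* internal time in the given time zone
CRT OCONV(D, "D"):" ":OCONV(T, "MTS")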
The READBLK and WRITEBLK statements are aimed primarily at device access and hence use a block size or byte count. It is normal for device formats to use binary values to describe the contents of the data blocks regardless of the underlying structure. As such, these statements continue to work on a byte rather than character basis, whether used with international mode set true or not.
If the requirement is to read/write large files, use the READSEQ/WRITESEQ statements instead. In the default configuration the READSEQ and WRITESEQ statements read/write a line at a time, such that once a line from the file has been read into a variable, that variable can be used on a character basis rather than bytes. This assumes that the data in the file is UTF-8 encoded. If the data in the file is not UTF-8 encoded, but ISO-8859-1 (binary), then convert the data to UTF-8 using the UTF8 function.
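For instance (file and variable names are illustrative), a non-UTF-8 file might be processed a line at a time as follows:
OPENSEQ "IMPORTS", "latin1.txt" TO SeqFile ELSE STOP
LOOP
   READSEQ Line FROM SeqFile ELSE EXIT
   Line = UTF8(Line)          ;* convert ISO-8859-1 (binary) to UTF-8
   CRT "Characters on line = ":LEN(Line)
REPEAT
CLOSESEQ SeqFile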
Note: If IOCTL commands are used to suppress one-line-at-a-time mode for READSEQ or WRITESEQ, these statements operate only in byte mode, not character mode.
Modification of the jBASE jQL Processor in several areas provides full Internationalization capabilities.
TimeStamp "W{Dx}{Tx}"
In addition, a suite of conversions, including A, F and I-types, is provided for timestamp functionality, such that a generated timestamp can be displayed as a date and/or time in short, long, and full formats. These conversions also support non-Gregorian locales. The meaning of the components of the conversion is as follows:
D - Date
T - Time
When international mode is not enabled, the keys are sorted by the binary value of the individual characters as in prior releases.
When international mode is enabled, the keys are first passed to a lookup algorithm that converts the key into a collation key, which is tailored
specifically for the user’s language. Using the collation key, the sort processor is able to produce output in the order expected in the user’s loc-
ale.
The use of right justified fields with completely non-numeric data does not affect sort order, just the display.
As part of the internationalization of jBASE, jQL uses a new algorithm for right justified fields designed to provide optimal sorting of mixed
alpha and numeric fields as well as numeric fields. The field width specified in the attribute definition no longer affects the behaviour of the
sort.
A single leading minus or plus sign may be present. Leading zeros before a decimal point and trailing zeros after a decimal point are ignored for sorting purposes. Nulls will sort either before all numeric keys or as zero, depending on the emulation option. If international mode is true, the characters defined in the Unicode 3.0 specification (section 4.6) to be decimal digits are sorted as numbers.
Where the format and content of the field is unknown, it can be expected to contain alpha, alphanumeric, and pure numeric values. Each candidate key is split into parts, alternating between numeric and non-numeric parts. Sign characters are only valid as the first character of the key; elsewhere they are treated as non-numeric. If a part is numeric, then that part is processed in the same manner as a pure numeric key above. If international mode is true, non-numeric parts are passed through the collation algorithm to produce collation key parts. If international mode is false, the non-numeric parts are sorted left to right.
Data Conversion
When executing programs in international mode, all variable contents are processed as UTF-8 encoded sequences. As such all data must be held
as UTF-8 encoded byte sequences. This means that data imported into an Account configured to operate in international mode must be con-
verted from the data’s current code page to UTF-8. Normally if ALL the data are 8 bit bytes in the range 0x00-0x7f (ASCII) then no conversion
is necessary as these values are effectively already UTF-8 encoded. However values outside of the 0x00-0x7f range must be converted into
UTF-8 proper such that there can be no ambiguity between character set code page values.
For instance, the character represented by the hex value 0xE0 in the Latin2 code page, (ISO-8859-2), is described as "LATIN SMALL LETTER
R WITH ACUTE". However the same hex value in the Latin1 code page, (ISO-8859-1), is used to represent the character "LATIN SMALL
LETTER A WITH GRAVE".
To avoid this clash of code pages the Unicode specification provides unique hex value representations for both of these characters within the
specifications 32-bit value sequence.
EXAMPLE
l Unicode value 0x00E0 used to represent LATIN SMALL LETTER A WITH GRAVE
l Unicode value 0x0155 used to represent LATIN SMALL LETTER R WITH ACUTE
Note : UTF-8 is an encoding of 32 bit Unicode values, which also has ‘special’ properties (as described earlier), which Unix and Windows
platforms can use effectively.
Another good reason for complete conversion from the original code page to UTF-8 is that doing so removes the requirement for 'on the fly' conversions when reading/writing to files, which would add massive and unnecessary overhead to ALL application processing, whereas the conversion from the original code page to UTF-8 is a one-off cost.
The first requirement before configuring an account and application for international mode is to convert the file data from the original code page
into UTF-8 encoded byte sequences.
Compiler
You must convert all source files containing characters in the range 0x80 through 0xFF so that these characters are represented in UTF-8 before compiling.
Conversion Utility
A conversion tool, 'jutf8', has been provided to help with the file conversion. The first step is to restore the data in the normal way using a restore process working in binary mode. Once the files have been restored, use the utility with the imported data files to convert the data. The options of the conversion utility include:
-d Process directories
-v Verbose mode
The conversion utility, by default, will attempt to confirm that the data is not already converted into UTF-8. Directories are skipped by default unless the -d option is explicitly specified.
Note: The conversion of file contents containing binary data such as compiled programs may render the compiled object no longer usable.
It is recommended that the files be cleared of program object files before use of the utility on source files.
Conversion Map
Use the MapFilePath option to specify a file that describes the mapping of certain characters, e.g., system delimiters, from and to the required
hex value.
The map file describes how characters in the original file should be ‘mapped’ from their current hex value to the required hex value BEFORE con-
version to UTF-8 proper. The example below maps any characters in the range 0x01-0x08 into what would normally be system delimiters
before conversion to UTF-8. Therefore, character 0x04 is mapped to 0xFC and then converted to the two-byte UTF-8 encoded sequence 0xC3 0xBC, which does not clash with the system delimiter and which in turn represents the 32-bit Unicode value of 0x00FC.
MyMapFile
#From To
0x01 0xFF
0x02 0xFE
0x04 0xFC
0x05 0xFB
0x06 0xFA
0x07 0xF9
0x08 0xF8
Note: If the map file is specified along with the 'u' option, then the from/to mapping is reversed.
Below is an example of using the IOCTL to convert data in a UNIX directory file from ‘shift_jis’, Japanese, to UTF-8 while reading the record
from the native file. The record is then written out (without conversion) to a jBASE Hash File. This IOCTL command will also return the pre-
viously configured Code Page for the File Descriptor. Note: hash files do not support this additional IOCTL command.
* Convert directory record from CodePage shift-jis to UTF-8 and place into Hash file
* (the OPEN and READ/WRITE statements are reconstructed; file and
* record names are illustrative only)
INCLUDE JBC.h
OPEN "SJIS.DIR" TO FILE ELSE STOP 201, "SJIS.DIR"
OPEN "HASH.FILE" TO HASHFILE ELSE STOP 201, "HASH.FILE"
CodePage = "shift-jis"
IF IOCTL(FILE, JIOCTL_COMMAND_SETCODEPAGE, CodePage) ELSE
   CRT "Unable to set code page ":CodePage ; STOP
END
READ Rec FROM FILE, "RECORD1" THEN
   WRITE Rec ON HASHFILE, "RECORD1"   ;* written without further conversion
END
When executing in international mode, error message files use the configured locale, such that nationalized error message files are used in place of the default error message file.
The detection of the correct error message file for the locale works on the basis that if the error message file for the full locale specification, LanguageCode_CountryCode_Variant, cannot be opened, the process defaults to using LanguageCode_CountryCode. If this still fails, it uses LanguageCode only, until ultimately no part of the locale is used to locate the error message file.
For instance, if configuring a user for the locale ‘fr_FR_EURO’ then any error messages for processing are initially searched for in the "jbcmes-
sages_fr_FR_EURO" file.
If this file cannot be opened, the process will attempt to open "jbcmessages_fr_FR". Similarly, if this file is not available, the process will then
attempt to open "jbcmessages_fr". If the open still fails, it uses the default error message file "jbcmessages".
Spooling
The jBASE spooler files will hold the created spooler jobs as UTF-8 encoded byte sequences only if generated by a program executing in international mode, i.e., as per the Account definition. Otherwise, spooler jobs are created in the normal Latin1 (ISO-8859-1) code page as previously.
Printing
You can configure a new parameter, CODEPAGE, in the FORM TYPE configuration file in the jBASE release sub directory ‘config’, (see jsp-
form_deflt), to specify a code page to use for conversion when despooling the print job. The syntax of the parameter is as follows:
CODEPAGE codepage
Where “codepage” is the name of the code page to use, such that the print job is converted from the internal format of UTF-8 encoded byte
sequences to the required code page for the printer device. For example:
CODEPAGE shift-jis
This code page parameter will convert the UTF-8 byte sequence in the print job to shift-jis for Japanese.
Note: the internal format MUST always be UTF-8 if using CODEPAGE parameters; otherwise, fatal conversion errors can occur. If the CODEPAGE parameter is not specified, output will not be converted; hence if the spool job was generated by a process executing in international mode, the output will be in UTF-8, otherwise, if the job was generated by a process executing in normal mode, the output will be in ISO-8859-1 (Latin1).
Whenever possible, printers should be configured to support UTF-8, so that code page conversion is not necessary, thereby further reducing
unnecessary conversion overheads on the system.
By operating in international mode, it is inevitable that certain functions will suffer in terms of application performance:
l LEN: must now scan variables counting characters, not simply return the number of bytes
l LOCATE: must use the locale for the sort order
l SORT/COMPARE: must use the locale for the sort/compare order
l MATCHES/MATCHFIELD: must determine if characters are numeric, alpha, etc via locale
l ICONV/OCONV: date, time and currency conversions all use the locale
l ALPHA, ISPRINT: properties must be based on the locale
l INPUT/PRINT: code page conversion to and from UTF-8
Normally, the LEN function returns the current byte length of the array, which is always kept up-to-date as the array increases or decreases in
size. In international mode, the LEN function must return the number of characters rather than the number of bytes in the array. As a result,
the array must be traversed in order to count the characters, causing a decrease in performance.
LOCATE usually compares strings directly, without regard for locale. In international mode, however, the locale is used during comparison.
The same holds true for MATCHES, MATCHFIELD, SORT, COMPARE and property tests, since variables must first be converted to Unicode.
If international mode is enabled, conversion between code pages is required for Terminal I/O; however, this is a relatively slow operation.
Whenever possible, it is ideal to use terminal emulators etc., which are capable of sending and receiving UTF-8, such that no code page con-
version is necessary, thereby reducing the CPU overhead of conversion.
As all strings must be converted to UTF-8 encoding before compile time, and all read/write data is presumed to be UTF-8 encoded, there should be no overhead for other functions, except as mentioned above or when functions are working on a character basis, e.g., substring extraction.
If an account is not configured for international mode, the overhead is a simple bit test in a few functions.
Desktop Applications
Desktop applications vary in their Unicode support, and as a result, have limited internationalization support.
The industry is converging on UTF-8 and Unicode for all internationalization. Microsoft NT is built on a base of Unicode; AIX, Sun, HP/UX all
offer Unicode support. All the new web standards, such as HTML, XML, etc., support Unicode. The latest versions of Netscape Navigator and
Internet Explorer both support Unicode. UNIX support for Unicode for directory names is provided via UTF-8.
Because of these difficulties, the major Unix distributors and developers foresee Unicode eventually replacing older legacy encodings, primarily
in the UTF-8 form.
If it is certain that an application will only ever use ASCII characters, internationalization may not be necessary. However, with UTF-8 all
ASCII characters stay the same. On the other hand, if providing an application to any additional markets is a possibility, you should carefully
consider internationalization as a development process.
It is best to consider internationalization impacts early in the development of a software product, preferably before it is fully developed, as significant application changes may otherwise be necessary, particularly if the product will be made available to the Asian market. Far more than simply translating literal strings, internationalization is a process that can positively affect the quality and cost of development.
It is true that internationalization can lessen the performance of some important functions in the finished software product. However, if providing your application to a global marketplace is an important business priority, carefully considering and understanding the process of internationalization will yield gains in the development lifecycle and improved product quality.
JBASE_I18N
When the JBASE_I18N environment variable is set, the application is expecting to execute in International mode.
Note that the value of this environment variable can be modified by a LOGTO command. The value of the JBASE_I18N variable will then be set according to the international mode (true or false) of the target account.
JBASE_CODEPAGE
You can only set the JBASE_CODEPAGE environment variable to a valid code page available with the ICU package. Use the jcodepages com-
mand for a list of currently available code pages. Conversion for input and output will only take place if configuring the account for international
mode or the JBASE_I18N variable is set.
JBASE_LOCALE
You can only set the JBASE_LOCALE environment variable to a valid locale available with the ICU package. Use the jlocales command for a
list of currently available locales. Only use the configured locale if the account is configured for international mode or the JBASE_I18N variable
is set.
JBASE_TIMEZONE
You can only set the JBASE_TIMEZONE environment variable to a valid time zone available with the ICU package. Use the jtimezones com-
mand for a list of currently available time zones. Only use the configured time zone if the account is configured for international mode or the
JBASE_I18N variable is set.
For example, the following environment variable configuration would configure a user for the French and the country locale specific for France
and the code page set for latin1, iso-8859-1.
JBASE_I18N=1
JBASE_CODEPAGE=iso-8859-1
JBASE_LOCALE=fr_FR
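A program can inspect this configuration at run time; the following is a minimal sketch using the GETENV function (variable names are illustrative):
IF GETENV("JBASE_I18N", I18N) THEN
   CRT "International mode requested"
   IF GETENV("JBASE_LOCALE", Locale) THEN CRT "Locale = ":Locale
END ELSE
   CRT "JBASE_I18N not set: executing in byte mode"
END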
http://www.unicode.org
http://oss.software.ibm.com/icu
Locale Definition
A Locale represents a specific geographical, political, or cultural region.
An operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to tailor information for the user. For
example, displaying a date is a locale-sensitive operation: the date is formatted according to the customs/conventions of the user's native country,
region, or culture.
The first component of a locale identifier is a valid ISO Language Code: one of the lower-case two-letter codes defined by ISO-639. The second component is a valid ISO Country Code as defined by ISO-3166.
http://www.ics.uci.edu/pub/ietf/http...ted/iso639.txt
http://www.chemie.fu-berlin.de/diver.../ISO_3166.html
The third component is the Variant. Use variant codes to further define the locale; e.g., European countries now using the Euro use the variant code EURO.
Configured Locales
LanguageCode_CountryCode_Variant (Available with ICU )
be, be_BY, bg, bg_BG, eu, eu_ES, fr_LU_EURO, ga, ga_IE, gl, gl_ES, gv, gv_GB, he, he_IL, hi, hi_IN, hr, hr_HR, hu, hu_HU, id, id_ID, is, is_IS, it, ja, ja_JP, kok_IN, kw, kw_GB, lt, lt_LT, lv, lv_LV, mk, mk_MK, mr, mr_IN, mt, mt_MT, pt_PT_EURO, sh, sh_YU, sk, sk_SK, sl, sw_KE, sw_TZ, ta, ta_IN, te, te_IN, th, th_TH, tr, tr_TR, uk, uk_UA, vi, vi_VN, wo (Wolof), xh (Xhosa), yi (Yiddish, formerly ji), yo (Yoruba)
The ICU library package provides a comprehensive character set conversion framework, mapping tables, and implementations for many encod-
ings, including Unicode encodings. These mapping tables have mostly originated from the IBM code page repository, and for non-IBM code pages there is usually an equivalent code configured. The textual data format is generic, and data for other code page mapping
tables can be added. There is no single, authoritative source of precise definitions of many of the encodings and their names. However, IANA is
the best source for names, and the Character Set repository provided by ICU is a good source of encoding definitions for each platform.
ASCII
Aliases: us-ascii, iso-ir-6, 646, csASCII, us, iso646-us, ISO_646.irv:1991, ANSI_X3.4-1986, ANSI_X3.4-1968, US-ASCII, ascii-7, ascii, ibm-367
ISO-8859-1 (Latin-1)
Aliases: iso-8859-1, ANSI_X3.110-1983, l1, ISO_8859-1:1987, cp367, iso-ir-100, csisolatin1, 8859-1, latin1, cp819, ibm-819, LATIN_1
ISO-8859-3 (Latin-3)
Aliases: iso-8859-3, l3, ISO_8859-3:1988, iso-ir-109, csisolatin3, 8859-3, cp913, latin3, ibm-913
ISO-8859-4 (Latin-4)
Aliases: iso-8859-4, l4, ISO_8859-4:1988, iso-ir-110, csisolatin4, 8859-4, cp914, latin4, ibm-914
ISO-8859-5 (Cyrillic)
Aliases: iso-8859-5, ISO_8859-5:1988, iso-ir-144, csisolatincyrillic, 8859-5, cp915, cyrillic, ibm-915
ISO-8859-6 (Arabic)
Aliases: iso-8859-6, asmo-708, ecma-114, ISO_8859-6:1987, iso-ir-127, csisolatinarabic, 8859-6, cp1089, arabic, ibm-1089
ISO-8859-7 (Greek)
Aliases: iso-8859-7, ISO_8859-7:1987, iso-ir-126, csisolatingreek, 8859-7, ecma-118, elot_928, greek8, greek, cp813, ibm-4909
ISO-8859-8 (Hebrew)
Aliases: iso-8859-8, ISO_8859-8:1988, iso-ir-138, csisolatinhebrew, 8859-8, cp916, hebrew, ibm-916
ISO-8859-9 (Latin-5)
Aliases: iso-8859-9, l5, ISO_8859-9:1989, iso-ir-148, csisolatin5, 8859-9, cp920, latin5, ECMA-128, ibm-920
Shift-JIS (Japanese)
Aliases: x-sjis, windows-31j, csshiftjis, ms_kanji, cp932, cp943, sjis, csWindows31J, Shift_JIS, ibm-943
GBK (Simplified Chinese)
Aliases: GB2312, zh_cn, cp936, gb2312-1980, gb, chinese, gbk, csISO58GB231280, iso-ir-58, GB_2312-80, ibm-1386
l Introduction
l Configuration
l Developing Client Applications
l Enabling TABLEFILE functionality for jODBC
The jBASE ODBC Connector is an ODBC driver implementing the Open Database Connectivity (ODBC) 3.0 API. This driver release supports a driver-manager based interface, featuring support for transactions and calling stored procedures. The ODBC Connector is only available on Windows platforms, but SQL requests may be issued against a remote jBASE instance running on other platforms.
jAgent is a jBASE component responsible for accepting and processing incoming client requests; it must be running to accept and dispatch SQL requests to the jBASE Server. jAgent and the ODBC driver communicate with each other over TCP socket connections and therefore need to be configured to use the same TCP port. More details about jAgent may be found in the jAgent user
guide.
The ODBC Driver Manager is a system component which on Windows is part of the MDAC (Microsoft Data Access Components) package
and automatically included with the latest Windows operating systems. Odbcad32.exe is the ODBC Data Source Administrator and odbc32.lib/
odbccp32.lib are import libraries to be used by client applications.
Assumptions
For more information about the ODBC API and how to use it, refer to the resources listed at the end of this section.
If the ODBC driver is to be used to develop client applications accessing a jBASE instance, the following prerequisite knowledge is required:
l C
l General DBMS knowledge
l jBASE and concepts of Multivalue databases
l Secure Sockets Layer (SSL) protocol
Environment
The ODBC Connector is available on the following platforms:
l 32-bit Windows XP
l 32-bit Windows 2003 Server
l 32-bit Windows 2000
ODBC CLI is an API written in C, but other frameworks, such as .NET, provide ODBC wrapper classes. The following Visual Basic .NET examples use .NET's Microsoft.Data.Odbc module.
Example 1:
Imports System
Imports Microsoft.Data.Odbc

Module Module1
    Sub Main()
        ' The connection string and query are assumed here: a DSN named
        ' T24 (as in Example 2) and the table created by Example 2
        Dim conn As New OdbcConnection("DSN=T24")
        Dim cmd As New OdbcCommand("SELECT ID, NAME, AGE FROM MY_TEST_TABLE")
        cmd.Connection = conn
        conn.Open()
        Dim reader As OdbcDataReader = cmd.ExecuteReader()
        While reader.Read()
            Console.Write(("ID:" + reader.GetString(0).ToString()))
            Console.Write(" ,")
            Console.Write(("NAME:" + reader.GetString(1).ToString()))
            Console.Write(" ,")
            Console.WriteLine(("AGE:" + reader.GetInt32(2).ToString()))
        End While
        reader.Close()
        conn.Close()
    End Sub
End Module
Example 2:
Creates a table with 100 records, followed by a SELECT. A DSN named T24 is required.
Imports System
Imports Microsoft.Data.Odbc

Module Module1
    Sub Main()
        ' The connection, timer and reader declarations are assumed here;
        ' a DSN named T24 is required, as stated above
        Dim TimeStart As Date
        Dim TimeEnd As Date
        Dim conn As New OdbcConnection("DSN=T24")
        TimeStart = Date.Now
        conn.Open()
        Dim createCmd As New OdbcCommand("CREATE TABLE MY_TEST_TABLE(ID INTEGER, NAME VARCHAR(255), AGE SMALLINT, CREDIT_SCORE INTEGER, BALANCE DOUBLE, PRIMARY KEY(ID))", conn)
        Try
            createCmd.ExecuteNonQuery()
        Catch e As Exception
            ' The table may already exist: drop it and try again
            Dim dropCmd As New OdbcCommand("DROP TABLE MY_TEST_TABLE", conn)
            dropCmd.ExecuteNonQuery()
            createCmd.ExecuteNonQuery()
        End Try
        Dim insertCmd As New OdbcCommand("INSERT INTO MY_TEST_TABLE(ID, NAME, AGE, BALANCE, CREDIT_SCORE) VALUES (?, ?, 30, 345, 876.67)", conn)
        insertCmd.Prepare()
        insertCmd.Parameters.Add("@ID", OdbcType.Int)
        ' The @NAME parameter and the loop bounds are reconstructed to match
        ' the two parameter markers and the stated 100 records
        insertCmd.Parameters.Add("@NAME", OdbcType.VarChar)
        Dim i As Integer
        For i = 1 To 100
            insertCmd.Parameters("@ID").Value = i
            insertCmd.Parameters("@NAME").Value = "NAME" & i.ToString()
            insertCmd.ExecuteNonQuery()
        Next
        Dim selectCmd As New OdbcCommand("SELECT ID, NAME, AGE, BALANCE, CREDIT_SCORE FROM MY_TEST_TABLE ORDER BY ID", conn)
        Dim reader As OdbcDataReader = selectCmd.ExecuteReader()
        While reader.Read()
            Console.Write(reader.GetValue(0).ToString())
            Console.Write(", ")
            Console.Write(reader.GetValue(1).ToString())
            Console.Write(", ")
            Console.Write(reader.GetValue(2).ToString())
            Console.Write(", ")
            Console.Write(reader.GetValue(3).ToString())
            Console.Write(", ")
            Console.WriteLine(reader.GetValue(4).ToString())
        End While
        reader.Close()
        TimeEnd = Date.Now
        Console.WriteLine("Elapsed: " & (TimeEnd - TimeStart).ToString())
        conn.Close()
    End Sub
End Module
Stored procedures are supported via the ODBC CALL statement and provide a way of calling jBASE subroutines.
Resources
ODBC API http://support.microsoft.com/kb/110093
The Open Group X/Open SQL Call Level Interface (CLI) http://www.opengroup.org/
The previous jDP functionality allowed a list of files accessible to the current datasource to be specified in the connection string.
Similar functionality is now provided via 'TABLEFILE' functionality.
To use:
l Create and populate a catalog file, on the jAgent server, in the normal way.
l When creating a new data source, specify the full path of the catalog file in either the advanced options dialog of odbcad32 or the 'USER_CATALOG' parameter of the connection string, e.g.
jODBCManager -add="DSN=MyTestjODBC;SERVER=localhost;UID=test;USER_CATALOG=c:\data\myCatalog"
Introduction
This user guide provides insight on configuring the jRFS Driver on Application Server(s) to achieve remote access to files present on the Database Server, forming a Multiple Application Server (MAS) architecture.
The main benefit of the jBASE Remote File Service (hereafter referred to as 'jRFS') is this remote file access, allowing multiple Application Servers to share a single Database Server.
Assumptions
Familiarity with the following is assumed:
l T24 Architecture
l Usage of jAgent
l Secure Sockets Layer (SSL) protocol
l Overview
l Installation
l Example Deployment
l jRFS and Multiple Application Server (MAS) Architecture
With the concept of jRFS, a T24 Application could be run on and/or data kept on a separate server, and thus the architecture could be split.
This split in architecture helps in achieving load sharing on and high availability of the Application server.
With the jRFS Driver concept, requests, either in the form of file access or query execution, are sent from Application Server and are resolved at
the Database Server. The resulting data is then sent to the Application Server for further processing which could involve any T24 business
logic.
Configuration of the jRFS Driver is achieved via the "tafc.ini" configuration file, which defines where the Database Server jRFS Driver resides on
the Application Server. The "tafc.ini" file is located under the "$TAFC_HOME/config/$TAFC_CONTEXT" directory.
At the Database Server the "jAgent" daemon/service is started, which services requests sent from Application Server.
When a request is made from Application Server(s), once the connection has been established with jAgent, the driver checks the stub file as
pointed to by the "JEDIFILENAME_MD" environment variable. This stub file contains file name of the VOC file present on the Database
server.
Limitations
l jRFS does not support access to UD file types from the Application server.
In such cases those directories on the Database server need to be NFS mounted, so that they can be accessible from the Application
server.
Resources
l jAgent User Guide
l jBASE Dataguard
l Open SSL Website (http://www.openssl.org/)
l jRemote User Guide
l Before starting jAgent, the environmental variable "JEDIFILENAME_MD" should be set to point to the location of the VOC file.
l Create a VOC entry for "VOC" under the "VOC" file itself. This entry is required by jRFS on the Application server.
For example:
If the "VOC" file is present under "/glodev2/Pareas/TestBase/TestBase.run/VOC", then create a record with name "VOC" as below spe-
cifying the absolute path:
Note: For complete list of options related to configuring and starting "jAgent", refer to the jAgent User Guide.
Environmental Variables
On the Application server the following environmental variables should be set:
JEDIFILENAME_MD: Export/set this environmental variable to the location of the local stub file. For further information related to the local stub file, see the section Stub entry of the remote VOC Filename.
TAFC_CONTEXT: In the "$TAFC_HOME/config" directory there are 3 directories present, namely "default", "multiapp" and "minimal". As the "tafc.ini" configuration file is located under the "default" directory, this environmental variable should be set to "default".
JDIAG: Set this variable to enable jRFS tracing (see the logging step in the example deployment below). For example:
export JDIAG=jRFS=TRACE:filename=first.log
Configuration file
"tafc.ini" is the configuration file required by jRFS, which holds information related to remote Database server to which Application Server
should communicate.
Add an entry with "[jrfs]" as the section name under the "tafc.ini" file, located under "$TAFC_HOME/config/$TAFC_CONTEXT".
Format of the [jrfs] section:
[jrfs]
IPAddress=<IP Address on which jAgent is started>
PortNumber=<Port number on which jAgent is listening>
Username=<name of user>
Password=<encrypted password>
SSL={ON|OFF}
l Create a normal file with contents as shown here:
JBC__SOB JediInitjRFS <Name of the remote VOC file present on the Database Server>
In this example, a Windows machine is used as the Application server and an AIX machine with IP address 10.92.5.15 will be configured to
work as Database server.
We will create a J4 file on the Database server from the Application server.
With jRFS the architecture is split between the Application server and the Database server.
The application server only maintains a pointer to the Database VOC file. This makes it possible to have multiple Application servers pointing to
the same instance of the Database Server.
The implication of this is that each application server has no direct, individual database storage, but shares access to a central database.
High availability is often associated with fault-tolerant systems. This means during any system fault it is expected that the faulty component
can either be quickly replaced or corrected with minimal interruption of service. Availability has long been a critical component for online sys-
tems because business processes can quickly come to a halt when a computer system is down.
This type of architecture assists in quick replacement of the Application Server, as it only requires configuring the Database server information on the Application server and a reference to the Database VOC file. It simplifies replacement of the Application server during a failure/crash scenario (as data is always stored separately from the Application server).
On the Database Server(s), Transaction Journaling facilities are available for recovery of any data lost due to a failure at Database level.
Adding to our previous example we will use a Linux machine as another Application server connecting to the same Database server.
1. Update "tafc.ini" file located under "$TAFC_ HOME\config\default" with Database server details.
In this example "tafc.ini" file is update as below:
2. Export "TAFC_ CONTEXT" to the name of the folder in which "tafc.ini" is present.
In this example we have "tafc.ini" file located under the "default" directory of "$TAFC_ HOME/config" directory.
export TAFC_CONTEXT=default
3. Create a normal file anywhere with any name. This will act as a stub file holding the name of the remote VOC file.
export JEDIFILENAME_MD=$TAFC_HOME/localVOC
4. To turn on logging on the Application server, set the JDIAG environmental variable:
export JDIAG=TRACE=jRFS
or, to log all messages to a file:
export JDIAG=TRACE=jRFS:filename=newfile.log
(This will create "newfile.log" and log all messages in this file.)
The additional Application server is now ready to communicate with Database server.
l Overview
l Assignment of Trigger Subroutine Arguments
l CREATE-TRIGGER
The mechanism provided to define the action that takes place when a database trigger event occurs is a jBC subroutine. The name of the sub-
routine is specified in the CREATE-TRIGGER command. A different subroutine can be defined for each of the nine database trigger events; however, it is usually more convenient to use one subroutine for each file that has a trigger defined, distinguishing between the different events within the subroutine.
The subroutine can be used to define ancillary updates that need to occur as a result of the primary update. The seven parameters passed to the
subroutine allow interrogation and (where applicable) manipulation of the record being updated.
filevar
The file variable associated with the update. For example, you can do:
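* Illustrative only; the record and key names are hypothetical.
* Note that a WRITE through filevar will itself fire this trigger again.
READ ctrl FROM filevar, "CONTROL" ELSE ctrl = ""
ctrl<1> = ctrl<1> + 1
WRITE ctrl ON filevar, "CONTROL"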
However, you must then be very careful of calling this subroutine recursively.
Event
One of the TRIGGER_TYPE_xxx values to show which of the 9 events is currently about to take place. The TRIGGER_TYPE_xxx values are defined in $JBCRELEASEDIR/include/JBC.h (Unix) and %JBCRELEASEDIR%\include\JBC.h (Windows).
prerc
The current return code (i.e. status) of the action.
For all the TRIGGER_TYPE_POSTxxx events, it will show the current status of the action, with 0 meaning that the action was performed suc-
cessfully and any other value showing that the update failed. For example, if a WRITE fails because the lock table is full, the value in prerc is 1.
flags
Flags to show whether a WRITE or WRITEV was requested. Currently not implemented.
RecordKey
The record key (or item-id) of the WRITE or DELETE being performed.
userrc
This variable can be set to a non-zero value for the TRIGGER_TYPE_PRExxx actions in order to abort the current action. However, unless the
-t option was used with the create-trigger command, it will be meaningless.
Any negative value will cause the action to be terminated. However, nothing will be flagged to the application, and it will appear to all intents
and purposes that the action was performed.
Any positive value is taken to be the return code for the action.
For example, when a WRITE completes it will normally give a return code of 0.
If this variable is then set to say 13 (which is the Unix error number for "Permission denied") then the application will fall into the jBASE debug-
ger with error code 13.
NOTES
Processing carried out inside a trigger should be as economical as possible; triggers are not designed to carry copious application logic. Any files
opened inside a trigger should be opened once and the file descriptor stored in named common, so that files are not opened on every invocation.
Care should be taken that logic inside a trigger does not result in excessive overhead or an infinite loop of trigger calls.
The arguments of a trigger subroutine are generally assigned by the database management system at the time the subroutine is invoked, but
there are exceptions. The subroutine can in turn assign or reassign argument values if the trigger was created with the -a option.
The table below summarizes the state of each argument at the time the subroutine is invoked, according to each trigger type. Note that there
are three cases where record is null even though the record key is assigned, i.e., pre- and post-delete and pre-read. This is so for the read event
because there is no need to read a record before reading a record, and in the case of the delete events, because the attempt to delete a non-exist-
ent record warrants no further action.
If an application requires a record to be verified prior to deleting it, then that operation should be performed at a higher level.
YES means that the variable is assigned in this trigger. UD means that it is user definable, N/A means that the variable is not used. Null means
that the variable is assigned a null value.
Note that filevar is not the name of the file, but rather the system-level file unit, i.e. the 'OPEN' file descriptor. It can be treated as such for file
operations within the subroutine, but cannot be treated as a typical variable, e.g., it cannot be used with a PRINT or CRT statement.
The CREATE-TRIGGER command is used to specify the database events for which the trigger subroutine is called.
COMMAND SYNTAX
CREATE-TRIGGER -Options FileName {triggername|*} subroutine
SYNTAX ELEMENTS
FileName can reference either a jBASE hashed file or a directory.
triggername must be * or one of the nine database events: POSTOPEN, PREREAD, POSTREAD, PREWRITE, POSTWRITE, PREDELETE,
POSTDELETE, PRECLEAR, POSTCLEAR. If * is specified then the trigger subroutine will be called for each of the nine database events.
NOTES
CREATE-TRIGGER can be run multiple times for the same file. If a trigger has already been defined for the specified event then the overwrite
flag must be used to effect the change.
EXAMPLES
CREATE-TRIGGER BP POSTOPEN SUBBPOPEN
The subroutine SUBBPOPEN will be called immediately after the BP file is successfully opened by any jBASE process.
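The command for the next example is not shown in this extract; a plausible form, assuming -o is the overwrite option referred to in the notes above, would be:

CREATE-TRIGGER -o PAYROLL * SUBBP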
The subroutine SUBBP will be called for every database event to the PAYROLL file. Existing trigger definitions will be overwritten.
jBASE uses a number of environment variables to modify jBASE behaviour. Suitable defaults apply to most of them. Although most envir-
onment variables can be set at any time, the best place to do so is in the .profile script.
UNIX: Variables are usually configured in the .profile of the user login directory, although global variables can be added to the /etc/profile
script. This works for all shells, although one can do "export variable=value" in ksh, etc., and inspect a value with "echo $variable".
Windows: Variables can be configured in the System environment for all users, and/or on a per user basis via the user environment. Additional
variables for jBASE can also be added to the current user configuration registry.
Win9x: Variables are usually configured in AutoExec.bat. Care should be taken that the environment area does not become overwritten on
Win9x, as it is initially quite small (approximately 512 bytes); subsequent .bat commands should increase the required environment space.
Setting it in the config.sys file can explicitly increase the environment space:
shell=c:\command.com /e:2048 /p
jBASE PROGRAMS
The jBASE BASIC functions PUTENV and GETENV can be used to manipulate environment variables. For example:
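A brief sketch using these documented functions; the variable names chosen are illustrative:

IF PUTENV("JBASE_ERRMSG_ZERO_USED=3") ELSE CRT "PUTENV failed"
IF GETENV("JBCRELEASEDIR", jbcRel) THEN CRT "Release directory is ":jbcRel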
jBASE Initialization
Some environment variables can only be set before jBASE initialization. jBASE initialization occurs when the first jBASE program is executed
for a particular “PORT”.
The jBASE initialization process reads the environment entries looking for possible variables required by jBASE. These environment variables
continue to be valid as long as the “PORT” is still active. Some environment variables can be changed by subsequent program execution. The
state of these variables is imported back into the local environment after program execution.
For instance:
T-ATT requires a "PORT" against which it saves the tape device assignment.
SP-ASSIGN requires a "PORT" with which to save assignment status for print jobs.
With jBASE 5.2, all programs execute in the same process unless explicitly executed via the CHAR(255):'k' construct.
UNIX: jBASE initialization on UNIX is usually performed in the .profile.
Windows: jBASE initialization on Windows usually occurs when the first jBASE program executes.
JEDI_SECURE_LEVEL Set security level for flushable files (such as J3 or jPLUS files)
JEDI_INDEX_MMAP_ON Set to force use of memory mapping on indexes when updating memory mapped files
JEDI_AIX_FILE_MMAP_ON Set to force use of memory mapping of J4 files on AIX multi-processor machines
JEDI_AIX_OBJECT_MMAP_ON Set to force use of memory mapping of .el files on AIX multi-processor machines
JBASE_WIN_TERM_SVR This should be set on servers running Windows Terminal Server before starting the License Server, and for all
sessions wishing to access jBASE licences.
LIB Specify additional paths for linking with libraries. (NT only)
JBC_DESPOOLSLEEP Specify the interval for despoolers to check for queued jobs
JBC_STDERR Set to 1 to redirect standard error to standard out. Useful for capturing output that would normally be sent to the
screen.
Setting these environment variables overrides the jcompile built-ins when processing files containing Embedded SQL using the -Jq<flavour>
option.
JBC_SQLCOPTS Set alternate SQL options for C compiler (also passed via nsqlprep for MSSQL)
EXAMPLE
For Oracle Pro*C Embedded SQL pre-compiler, on Windows the following environment variables can be set (assuming e.g. ORACLE_HOME-
=C:\Oracle\product\9.2.0.1.0\Client_1):
Assuming that the PATH environment is also configured for Embedded SQL, the command jcompile -Jqo SqlDemo.b compiles the jBASE
BASIC program, including passing it through the Oracle Pro*C pre-processor.
EXAMPLE
To convert all files on a "jbackup" tape to J4 files, set the following environment variable when using jrestore:
export JEDI_PREFILEOP=TYPE=J4 (UNIX; quotes can be used to surround multiple parameters)
set JEDI_PREFILEOP=TYPE=J4 (NT)
COMMAND SYNTAX
PUTENV (expression)
SYNTAX ELEMENTS
Expression should evaluate to a string of the form:
EnvVarName=value
Where EnvVarName is the name of a valid environment variable and value is any string that makes sense to the variable being set.
If the PUTENV function succeeds, it returns a Boolean TRUE value; if it fails, it returns a Boolean FALSE value.
NOTES
PUTENV only sets environment variables for the current process and processes spawned (say by EXECUTE) by this process. These variables
are known as export only variables.
EXAMPLE
IF PUTENV("JBCLOGNAME=":UserName) THEN
CRT "Environment configured"
END
All processes have an environment associated with them that contains a number of variables indicating the state of various parameters. The
GETENV function allows a jBASE BASIC program to determine the value of any of the environment variables associated with it.
COMMAND SYNTAX
GETENV (expression, variable)
SYNTAX ELEMENTS
The expression should evaluate to the name of the environment variable whose value is to be returned. The function will then assign the value
of the environment variable to variable. The function itself returns a Boolean TRUE or FALSE value indicating the success or failure of the func-
tion.
NOTES
See the UNIX documentation for the Bourne shell (sh) or the Windows on-line help for general information on environment variables.
Environment variables unique to the jBASE system are described below.
EXAMPLE
IF GETENV("PATH", ExecPath) THEN
CRT "Execution path is ":ExecPath
END ELSE
CRT "Execution path is not set up"
END
PATH
DESCRIPTION
The PATH variable contains a list of all directories that contain executable programs. As a minimum, this should contain the shell default value
plus the path /usr/jbc/bin, so that the shell sees the jBASE commands. You will also wish to add the path of your application executable dir-
ectory (such as ${HOME}/bin).
VALUES
Any directory paths for which the user has privileges
DEFAULT
The default depends entirely upon your UNIX system and how it has been set up.
SETTINGS
Normal UNIX environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
This is an SVR4 UNIX only variable and should be set to /usr/jbc/lib.
VALUES
Colon separated library file paths.
DEFAULT
None
SETTINGS
Normal UNIX environment variable, so it can be set at any time by the commands:
TERM
DESCRIPTION
On UNIX, this variable should be set to your terminal type as defined by the UNIX terminfo database
VALUES
On UNIX, any valid terminfo database entry
On Windows, any file name (up to the underscore) in the directories under %JBCRELEASEDIR%\misc. Additional terminal definitions can be
created using the jtic command.
DEFAULT
On UNIX, the default depends upon your system and how it has been set up.
SETTING
Normal environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
The TERMINFO environment variable is used for terminal handling. The environment variable is supported only on platforms that provide full
support for the terminfo libraries that System V and Solaris UNIX systems provide.
VALUES
The TERMINFO environment variable defines a directory where the terminal settings are read from.
DEFAULT
On UNIX, the default depends upon your system and how it has been set up.
SETTING
Normal environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
This defines your current port number and is useful when a particular user wishes to keep the same port number whenever they log on. On
UNIX, it takes a sensible default, but this default may change if the UNIX configuration is changed. On Windows, port numbers are allocated
sequentially from zero.
VALUES
Decimal port number
DEFAULT
None
SETTING
UNIX WINDOWS
Set in the .profile prior to execution of initial jBASE program set before invoking the jSHELL
NOTES
UNIX: On UNIX platforms, jBASE will assign the lowest available port number from the list or range specified. If all port numbers specified by
JBCPORTNO are already in use then the user is denied access.
Windows: On Windows, if the specified port number is in use then the connecting process is given the next highest port number available.
jBASE OBjEX processes are automatically assigned port numbers from 5,000. Processes run in the background (see jstart -b) are assigned port
numbers from 10,000, but a GETENV() on JBCPORTNO will always return -1.
Port number is already logged on and in use
DESCRIPTION
The account name as perceived by commands such as "WHO" or conversions such as U50BB will normally be returned as the login name of the
UNIX user (LOGNAME variable). However, if you wish your users to login with personal ids but execute as if they were all on the same account,
you may set this variable to override the default. The account name returned will be whatever this environment variable is set to.
VALUES
any character string
DEFAULT
None
SETTING
As per normal environment variable
UNIX WINDOWS
NOTES
UNIX Windows
DESCRIPTION
Defines the directory for jBASE global files
VALUES
Valid file path
DEFAULT
The default value is the same as JBCRELEASEDIR
UNIX Windows
/usr/jbc C:\JBASE5\5.2
SETTING
UNIX: As per normal environment variable; should be set up in the .profile:
JBCGLOBALDIR=/usr/jbc
export JBCGLOBALDIR
Windows: This is set in the registry when jBASE is installed. See HKEY_LOCAL_MACHINE\SOFTWARE\JAC\jBASE\3.0\CURRENT_CONFIG.
This value can be overridden by setting JBCGLOBALDIR as an environment variable.
DESCRIPTION
Defines the release directory for the jBASE system executables and libraries
VALUES
Valid file path
UNIX Windows
SETTING
UNIX: As per normal environment variable; should be set in the .profile prior to execution of the initial jBASE program:
JBCRELEASEDIR=/usr/jbc3.1
export JBCRELEASEDIR
Windows: SET JBCRELEASEDIR=c:\jbase5\5.2
DESCRIPTION
Defines the location for jBASE to determine any configured databases. Overrides the default setting for the spooler directory.
NOTES
When JBCSPOOLERDIR is not defined, the default setting for the jBASE spooler directory is $JBCDATADIR/jbase_data. When
JBCDATADIR is not set, the default setting is $JBCGLOBALDIR/jbase_data.
VALUES
Valid file path
UNIX Windows
/opt/jbase4/4/1/jbase_data C:\JBASE5\5.2\jbase_data
SETTING
UNIX: As per normal environment variable; should be set in the .profile prior to execution of the initial jBASE program:
JBCDATADIR=/usr/jbc/data
Windows: SET JBCDATADIR=c:\mydata
DESCRIPTION
Specifies one or more files that are used to hold dictionary items for use by jQL. When JBCDEFDICTS is set, jQL will scan each specified file
for dictionary items that cannot be located in the dictionary of the queried file. When JBCDEFDICTS is not set, jQL will scan just the dic-
tionary of the queried file and then the MD / VOC.
VALUES
Colon separated file paths (Unix)
Semicolon separated file paths (Windows)
DEFAULT
None
SETTING
UNIX Windows
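A hedged example of the UNIX form, with illustrative file paths:

export JBCDEFDICTS=/home/app/DICT.COMMON:/home/app/DICT.SHARED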
DESCRIPTION
When importing legacy applications, this variable tells jBASE which system they originally ran on. Note that programs and subroutines impor-
ted from different systems may be freely mixed.
VALUES
jBASE, adds, ape, fuj, prime, ros, r83, r91, ultimate, universe.
DEFAULT
The default is jBASE, which will suit most imported applications.
SETTING
Normal environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
This environment variable provides a variable amount of jBASE trace information depending on which options are specified.
VALUES
Colon separated name and value pairs from the following options;
profile= {off|short|long|user|jcover|all}
filename= {stdout|stderr|tmp|pathname,refresh_mins} %p can be used for process ID
memory= {off|on|verify}
branch= {off|on|verbose}
trace=env_name{,env_name …}
DEFAULT
Not set.
SETTING
UNIX Windows
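For illustration, combining two of the documented options (the values chosen are examples only):

export JDIAG=profile=short:filename=stdout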
DESCRIPTION
This environment variable provides a list of directories in which to search for jBASE data files. If an MD or VOC file is configured with F / Q
pointers, these take precedence over the directories in the JEDIFILEPATH.
VALUES
Colon separated file paths (UNIX)
DEFAULT
The current directory
SETTING
As per normal environment variable, so it can be set at any time. The use of relative file paths (such as “.”) should be avoided as it can result in
unintended file access.
UNIX Windows
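A hedged UNIX example, with illustrative directory paths:

export JEDIFILEPATH=/home/app/data:/home/app/shared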
DESCRIPTION
This variable should be used if you require the use of the MD/VOC file to hold Q pointers, jCL programs, paragraphs or entries for the jQL lan-
guage. If you have loaded an account-save into your home directory then you might wish to set this variable. This will then allow you to:
l Execute jCL programs and paragraphs directly from the MD/VOC (using jsh or EXECUTE/CHAIN etc.)
l On systems with 14 character filename limits, create cross-reference items for executables from the original name to the new name.
F pointers and Q pointers in an MD / VOC take precedence over paths in the JEDIFILEPATH.
VALUES
Valid file path; while it is not required, it is strongly advised that this value be set to the complete path of the MD and not a relative path (as an
example, “/home/bob/MD]D” should be used instead of “./MD]D”).
DEFAULT
None
SETTING
As per normal environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
If you are using Q pointers in a defined MD/VOC file then you may well need to create a SYSTEM file to define the home directories of other
accounts. By default Q-pointers are resolved by using the $JBCRELEASEDIR/src/SYSTEM file. Setting the JEDIFILENAME_ SYSTEM vari-
able to an alternative hash file or directory can change this.
While it is not required, it is strongly advised that this value be set to the complete path of the system file and not a relative path (as an
example, “/home/islandjim/SYSTEM]D” should be used instead of “./SYSTEM]D”).
VALUES
Valid file path
DEFAULT
None
SETTING
As per normal environment variable, so it can be set at any time by the commands:
UNIX Windows
DESCRIPTION
Setting this environment variable enables the resolution of Q-pointer-to-Q-pointer chains. The maximum chain length allowed is 11. Note that
this environment variable enables Q-pointer-to-Q-pointer resolution only; Q-pointer to F-pointer resolution is not supported.
VALUES
1
DEFAULT
Not set
SETTING
As per normal environment variable
UNIX Windows
DESCRIPTION
Setting this environment variable will defer the OPEN of component or part files in a distributed file set until the component file is required
by the application program.
VALUES
1
DEFAULT
Not set.
SETTING
As per normal environment variable
UNIX Windows
DESCRIPTION
Defines the security level for files which support configurable flushing.
VALUES
1 Switches off secure mode.
2 When certain changes occur that could corrupt the file in the event of a failure, the file data is flushed from memory to disk. Normal
updates will not be flushed.
3 All file updates will cause the file data to be flushed from memory to the disk.
DEFAULT
3
SETTING
As per normal environment variable
UNIX Windows
Performance Implications
There is a performance penalty to pay for running in secure mode levels 2 and 3.
Level 2 will protect against file corruption by flushing the file from memory to disk when certain operations occur. However, it will not protect
against loss of data. Most operating systems will periodically flush this data; the period is usually a tuneable system parameter, often with a
default of every 60 seconds. Therefore, if you can withstand a loss of up to 60 seconds of data, and your primary concern is that the files are
not corrupted in the event of a system failure, then this is the level for you. Minimal impact on performance is seen so long as your files are
properly sized. Even if they go out of the main group, performance is only impacted if the extended group size keeps varying considerably.
Level 3 will protect against almost everything including loss of data. This impacts most on the system. The actual level of performance impact
depends greatly on your application. For example, most of your updates may be to very large files in a pseudo-random manner (e.g. updating
stock records, customer details etc.). In this situation, all this does is move the overhead from the operating system flush daemon that runs
about every 60 seconds (see Level 2 above) to the process doing the update. Therefore, it may be a case of “What you lose on the roundabouts
you gain on the swings!” On the other hand, you may have a small file regularly being updated with things like current days orders. In this case
the impact will be substantial as you will be causing a disk update for each application WRITE, whereas without this you might do many of
these WRITEs before the operating system daemon does a single write.
Another way to control the flushing of data to disk is to use transaction boundaries. For example, the following jBASE BASIC code will cause
all data to be flushed to disk for all files, regardless of the file type or file status:
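The code itself is not preserved in this extract; a minimal sketch using the jBC transaction statements, with illustrative file and record names (the record variables are assumed to have been assigned earlier):

OPEN "ORDERS" TO orders ELSE STOP 201,"ORDERS"
OPEN "STOCK" TO stock ELSE STOP 201,"STOCK"
TRANSTART ELSE CRT "Transaction could not start" ; STOP
WRITE orderRec ON orders, orderId
WRITE stockRec ON stock, partNo
TRANSEND THEN CRT "All updates flushed to disk"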
This mechanism guards against data loss but is less effective in protecting against file corruption should the server fail while the TRANSEND is
actually in progress.
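The COPY example referred to in the next paragraph is not reproduced in this extract; a plausible reconstruction using the documented PUTENV function, with an illustrative command and file names:

* Hedged sketch: drop to insecure mode around a bulk COPY, then restore level 3
IF PUTENV("JEDI_SECURE_LEVEL=1") THEN
    EXECUTE "COPY FROM SALES TO SALES.BACKUP ALL" ;* illustrative jQL COPY
END
IF PUTENV("JEDI_SECURE_LEVEL=3") THEN NULL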
In the above example the secure mode is disabled during the COPY command, and so it will perform much quicker. When the COPY has com-
pleted, it is normal operating system practice to flush all modified file data to disk anyway.
DESCRIPTION
Enables the Command Level Restart feature
VALUES
Restart Program
DEFAULT
Command Level Restart feature disabled
SETTING
UNIX: Create the JBC_TCLRESTART environment variable in the .profile prior to execution of the initial jBASE program.
Windows: Set before any jBASE program is invoked.
The environment variable should contain the command string to execute when the user would otherwise go to a shell prompt.
To later enable the feature, use BITSET(-2); to later disable the feature, use BITRESET(-2).
DESCRIPTION
Enables the Break/End Restart feature
VALUES
Restart program name
DEFAULT
Break/End Restart feature disabled
SETTING
UNIX: Create the JBC_ENDRESTART environment variable in the .profile prior to execution of the initial jBASE program.
Windows: Set before any jBASE program is run.
The environment variable should contain the command string to execute when the debugger is entered/exited.
To later enable the feature, use BITSET(-3); to later disable the feature, use BITRESET(-3).
DESCRIPTION
Defines the directories to find user shared object libraries where user subroutines are located.
VALUES
File paths Colon separated on UNIX. Semi-colon separated on Windows.
DEFAULT
%HOME%\lib (Windows)
$HOME/lib (UNIX)
Note: It is good practice to set JBCOBJECTLIST explicitly rather than relying on the default setting. This is because the value of the
HOME environment variable may change (for example after a LOGTO).
SETTING
UNIX Windows
Set in the .profile before execution of the initial jBASE Program. Set before the jSHELL is invoked.
DESCRIPTION
When this environment variable is set, each call to SYSTEM(14) incurs a 100-millisecond delay.
VALUES
1
DEFAULT
Not set
SETTING
As per normal environment variable, the environment variable can be set dynamically with PUTENV
UNIX Windows
SET JBC_BLOCK_SYSTEM14=1
Note: Looking for type ahead data using SYSTEM(14) in a tight loop can have a detrimental impact on system performance because left
unchecked, such loops can consume all available system resources. With JBC_BLOCK_SYSTEM14 set, each call to SYSTEM(14) incurs a
100-millisecond delay, so a loop with SYSTEM(14) doesn’t waste system resources by looping too quickly.
It should be noted that the accuracy of the pause is dependent on the granularity of the system clock and the load on the system. Most oper-
ating systems and hardware will provide a granularity of 10 milliseconds.
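For illustration, a typical type-ahead polling loop; with JBC_BLOCK_SYSTEM14 set, each SYSTEM(14) call in this loop sleeps for around 100 milliseconds instead of spinning (the subroutine label is hypothetical):

LOOP
WHILE SYSTEM(14) EQ 0 DO
    GOSUB BackgroundWork ;* hypothetical housekeeping subroutine
REPEAT
INPUT reply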
DESCRIPTION
When this environment variable is set to a directory, jBASE dynamically creates and deletes work files named jBASEWORK_nn, where nn is the
port number. This can be used in place of JBASETMP.
VALUES
Any valid directory.
DEFAULT
Not set
SETTING
UNIX Windows
Setting this environment variable is recommended in a high user environment as a single workfile for all ports can result in a bottleneck.
DESCRIPTION
Defines the behaviour when a BASIC program attempts a numeric operation on a non-numeric value. The default behaviour is to
raise an error and drop into the debugger.
VALUES
1 Don’t display an error message
128 Caller to place source variable in the target variable after operation
DEFAULT
0 - Raise an error and drop into the debugger.
SETTING
The value is stored as a bit mask, so different behaviours can be combined by adding them together. For example, to suppress the error message
and avoid going into the debugger, set the variable to 3. As per normal environment variable, it can be set dynamically
with PUTENV.
UNIX Windows
DESCRIPTION
Defines behaviour when a BASIC program encounters an error
VALUES
1 – Log an error message to $JBCRELEASEDIR/tmp/jbase_error_trace
DEFAULT
0 – Do not log the error.
SETTING
The only valid values for this variable are 1 or 0. Setting this variable will not interfere with the behaviour set by the other JBASE_ERRMSG
environment variables. As per normal environment variables, it can be set dynamically using PUTENV.
UNIX Windows
DESCRIPTION
Defines behaviour when a BASIC program encounters a null variable. The default behaviour is to raise an error and drop into the debugger.
VALUES
1 Don’t display an error message
128 Caller to place source variable in the target variable after operation
DEFAULT
0 - Raise an error and drop into the debugger.
SETTING
The value is stored as a bit mask, so different behaviours can be combined by adding them together. For example, to suppress the error message
and avoid going into the debugger, set the variable to 3. As per normal environment variable, it can be set dynamically
with PUTENV.
Windows: SET JBASE_ERRMSG_ZERO_USED=3
UNIX: JBASE_ERRMSG_ZERO_USED=3
export JBASE_ERRMSG_ZERO_USED
DESCRIPTION
This should be set on servers running Windows Terminal Server before starting the License Server, and for all sessions wishing to access jBASE
licences. It enables global access to shared memory to enable MTS sessions to obtain a jBASE licence.
VALUES
Set or unset.
DEFAULT
Unset.
SETTING
UNIX Windows
DESCRIPTION
On a machine with mixed enterprise and server licenses available, indicates that a server license is required.
VALUES
Set or unset.
DEFAULT
Unset.
SETTING
On sites with both server and enterprise licenses installed, an enterprise license will be assumed unless JBASE_SVR_SESSION is set to 1.
With server only licenses installed, JBASE_SVR_SESSION must be set in order to obtain a license. Failure to do so will result in a licensing
error. With enterprise only licenses installed, setting this environment variable will not allow a license to be allocated and a license error will be
produced.
UNIX Windows
DESCRIPTION
Provided to emulate input on UniVerse systems. If this environment variable is set, all INPUT, KEYIN() and IN statements will receive input
values in the opposite case. In other words, lower case characters will become upper case and vice-versa. Characters within cursor control
sequences are also inverted, consequently up, down, left and right arrows will no longer work as required with this variable set.
VALUES
Set or unset.
DEFAULT
Unset.
UNIX Windows
JBASE_I18N
DESCRIPTION
Setting this environment variable switches on UTF8 processing in jBASE.
VALUES
Set or unset.
DEFAULT
Not set.
SETTING
UNIX Windows
JBASE_CODEPAGE
DESCRIPTION
Setting this environment variable sets the codepage to use for UTF8 conversion. This will have no effect unless internationalisation is switched
on using JBASE_I18N.
VALUES
Any valid code page. Use the jcodepages utility for a list of supported code pages.
DEFAULT
Not set.
SETTING
UNIX Windows
JBASE_LOCALE
DESCRIPTION
Setting this environment variable sets the locale to use for UTF8 collation, sorting and date settings. This will have no effect unless inter-
nationalisation is switched on using JBASE_I18N.
DEFAULT
Not set.
SETTING
UNIX Windows
JBASE_TIMEZONE
DESCRIPTION
Setting this environment variable sets the timezone to use for UTF8 timestamp conversion into local time for display. This will have no effect
unless internationalisation is switched on using JBASE_I18N.
VALUES
Any valid timezone.
DEFAULT
Not set.
SETTING
UNIX Windows
JBCDEV_BIN
DESCRIPTION
Defines the directory where user executables will be built when programs are CATALOGed.
VALUES
Valid file path
DEFAULT
%HOME%\bin (Windows)
$HOME/bin (UNIX)
Note: It is good practice to set JBCDEV_BIN explicitly rather than relying on the default setting. This is because the value of the HOME
environment variable may change (for example after a LOGTO).
SETTING
As per normal environment variable
UNIX Windows
JBCDEV_LIB
DESCRIPTION
Defines the directory where user shared object libraries will be built when subroutines are CATALOGed.
VALUES
Valid file path
DEFAULT
%HOME%\lib (Windows)
$HOME/lib (UNIX)
Note: It is good practice to set JBCDEV_LIB explicitly rather than relying on the default setting. This is because the value of the HOME
environment variable may change (for example after a LOGTO).
UNIX Windows
JBCTTYNAME
DESCRIPTION
This variable defines your UNIX tty name. If this variable is not defined then jBASE must use the UNIX system call ttyname() to locate it. On
some systems this function call is very slow, but the use of this variable will greatly improve execution start-up times.
VALUES
Any character string
DEFAULT
None
SETTING
As per normal UNIX environment variable; should be set up in the .profile before executing the initial jBASE program.
JBCTTYNAME=Jterm
export JBCTTYNAME
JBCERRFILE
DESCRIPTION
Sets the location of the jBASE error message file
VALUES
Valid path to a hashed file
DEFAULT
$JBCRELEASEDIR/jbcmessages (UNIX)
%JBCRELEASEDIR%\jbcmessages (Windows)
SETTING
As per normal environment variable; must be set before jBASE is invoked.
UNIX Windows
JBCSPOOLERDIR
DESCRIPTION
This environment variable defines the directory where the jBASE spooler entries are located.
VALUES
Valid file path
DEFAULT
/usr/jspooler (UNIX)
C:\JBASE30\jspooler (Windows)
SETTING
As per normal environment variable
UNIX: set up in the .profile before executing the initial jBASE program.
Windows: set before the jSHELL is invoked. If using telnet it should be set before the first executable in REMOTE.cmd.
JBC_DESPOOLSLEEP
DESCRIPTION
By default, the jBASE despooler processes on Windows check for queued jobs every 30 seconds. This environment variable can be used to
decrease or increase that interval. The environment variable is not required on UNIX because the despooler processes are sent a signal when a
new job has been generated.
VALUES
Number of seconds
DEFAULT
30
SETTING
Windows only: As per normal environment variable; it should be set before form queues are started.
SET JBC_DESPOOLSLEEP=12
JBC_CRREQ
DESCRIPTION
Controls whether line feeds and form feeds are followed by a carriage return when printing to the spooler.
VALUES
1 Specifies that a carriage return is required after each line feed when printing to the spooler.
2 Specifies that a carriage return is required after each form feed when printing to the spooler.
3 Specifies that a carriage return is required after each line feed and form feed when printing to the spooler.
DEFAULT
zero
Note: When printing to a Printronix printer on UNIX (which converts 'line feeds' to 'line feed + carriage return' but does not append 'car-
riage return' to 'form feeds') you should set JBC_CRREQ=2.
When printing binary data to a laser (or similar printer) on Windows you should set JBC_CRREQ=3
In addition, the device definition for the appropriate form queue should specify the -l and -n options to 'jlp', e.g. fqfred PROG jlp -d
\\printername -l -n
SETTING
As per normal environment variable; it must be set up before connecting to jBASE.
UNIX Windows
JBCLISTFILE
DESCRIPTION
This environment variable specifies the file where stored lists are kept.
VALUES
Any valid path to a directory or hashed file
SETTING
As per normal environment variable. See also List Storage.
UNIX Windows
JBCSCREEN_WIDTH
DESCRIPTION
Specifies the page width for paged terminal output, and overrides the value specified by the TERM type.
VALUES
Decimal number
DEFAULT
None
SETTING
As per normal environment variable; it should be set up before the jSHELL is invoked.
UNIX Windows
JBCPRINTER_DEPTH
DESCRIPTION
This environment variable specifies the page depth for paged spooler output, and overrides the value specified by the TERM type.
VALUES
Decimal number
DEFAULT
None
SETTING
As per normal environment variable
UNIX: set up in the .profile before executing the initial jBASE program.
Windows: SET JBCPRINTER_DEPTH=112
JBCPRINTER_WIDTH
DESCRIPTION
Specifies the page width for paged spooler output, and overrides the value specified by the TERM type.
VALUES
Decimal number
DEFAULT
None
SETTING
As per normal environment variable
UNIX: set up in the .profile before the jbcconnect command.
Windows: set before any jBASE program is invoked.
JBCNETACCESS
DESCRIPTION
VALUES
DEFAULT
/usr/jbc/config (UNIX)
%JBCRELEASEDIR%\config (Windows)
SETTING
UNIX Windows
JRFS_REMOTE_JQL
DESCRIPTION
Specifies that the jQL command will run on the remote server and send the data back, rather than querying line by line over the network.
VALUES
DEFAULT
Not set
SETTINGS
UNIX Windows
Specifies that the jRFS Server process will use the file name as 'opened' on the remote system
rather than using the file name specified in the original select statement.
VALUES
DEFAULT
Not set
SETTINGS
UNIX Windows
JRFS_SERVERNAME
DESCRIPTION
VALUES
DEFAULT
Not set
SETTINGS
UNIX Windows
JBASE_GROUP_LOCK
DESCRIPTION
Allows POSIX semaphore locking to override the default locking on UNIX. This setting addresses a jBASE scalability problem on UNIX
based systems.
VALUES
Not set
SETTINGS
UNIX Windows
To use the JDBC Driver from a non-managed client application, it is necessary to place this archive inside your CLASSPATH.
To deploy this archive in a managed environment, it is necessary to configure a deployment descriptor specific to the application server.
jbasejdbc-ds.xml
<!--========================================================================== -->
<!-- -->
<!-- JBoss deployment descriptor for jBASE JDBC Data Sources -->
<!-- -->
<datasources>
<local-tx-datasource>
<connection-url>
jdbc:jbase:thin:@127.0.0.1:20002/mytestaccount
</connection-url>
<driver-class>
com.jbase.jdbc.driver.JBaseJDBCDriver
</driver-class>
</local-tx-datasource>
</datasources>
After configuring the deployment descriptor, follow the steps below to deploy the jBASE JDBC driver:
l Copy the jBASE JDBC Driver (jbasejdbc.jar) archive to the lib directory of the JBoss default configuration.
l Copy the JBoss deployment descriptor to the deploy directory of the JBoss default configuration.
The following section provides a detailed guide on how to connect and access the jBASE server.
l DriverManager: This class requires an application to load the specific JDBC driver which in our case would be the jBASE JDBC
Driver. This interface is typically used on a non-managed two-tier deployment scenario where a java naming service is not available.
l DataSource: This interface is preferred in managed scenarios because JNDI is typically used to look up a data source. The advant-
ages of having a data source managed by the application server include connection pooling, security and distributed transaction processing for
XA-compliant JDBC drivers, such as the jBASE JDBC driver.
DataSource and DriverManager provide the following methods to create a new connection.
For more information on these methods, please read the JDBC API documentation.
l getConnection(String url): Obtain a new connection for the specified connection string. Connection properties must be specified
inside the connection string.
l getConnection(String url, String user, String password): Obtain a new connection for the specified connection string. Connection
properties, except user and password must be specified inside the connection string.
l getConnection(String url, Properties info): Obtain a new connection for the specified connection string. The second parameter spe-
cifies the connection properties.
Authentication
jAgent will attempt to authenticate a user given the user credentials provided to the getConnection() method.
The jBASE JDBC Driver implements the following connection properties to provide user credentials:
Encryption
jAgent can be configured to use SSL encrypted connections for deployment scenarios which require enhanced security.
l SSL [Default value: false] : Specifies whether the connection should use SSL encryption. SSL should only be
used if the jAgent instance running on the jBASE server has also been configured to accept SSL connections.
l enableNaiveTrustManager [Default value: false] : This property forces the JDBC Driver to trust all server certificates.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import javax.naming.NamingException;

// Managed environment: cxf is assumed to be a DataSource obtained via a JNDI lookup
Connection cx = null;
try {
    cx = (Connection) cxf.getConnection();
} catch (NamingException e) {
    // handle the JNDI lookup/connection error
}

// Non-managed client: load the driver and connect via DriverManager
// (url is assumed to be a jdbc:jbase:thin connection string as shown above)
Class.forName("com.jbase.jdbc.driver.JBaseJDBCDriver");
Properties cxProps = new Properties();
cxProps.setProperty("user", "test");
cxProps.setProperty("password", "newpassword");
cxProps.setProperty("NaiveTrustManager", "true");
Connection cx2 = DriverManager.getConnection(url, cxProps);
cx2.close();
The jBASE JDBC 2.0 Driver implements a subset of the JDBC 2.0 API.
Please read the JDBC specification documentation or refer to the JDBC API javadoc documentation for further information.
The following example shows how a client application executes an SQL SELECT query and displays the obtained result set:
Statement stat = null;
try {
    stat = cx.createStatement();
    ResultSet rs = stat.executeQuery("SELECT * FROM MYFILE"); // query shown is illustrative
    // Obtain the meta data associated with the result set to print the no. of columns
    ResultSetMetaData rsmd = rs.getMetaData();
    System.out.println("Columns: " + rsmd.getColumnCount());
    while (rs.next()) {
        System.out.println(rs.getString(1));
    }
} catch (SQLException e) {
    throw e;
} finally {
    closeDB(stat); // helper assumed to close the statement quietly
}
Contents:
l Overview
l Deployment
l Developers Guide
l JDBC API reference
l Resources
This user guide provides detailed instructions for the configuration and deployment of the jBASE JDBC 2.0 Driver.
The jBASE JDBC 2.0 Driver is a jBASE component implementing the JDBC API. The JDBC API is part of the Java 2 Platform Standard
Edition 5.0 (J2SE).
The following diagrams show two of the most common deployment scenarios. In both cases, the JDBC API implemented by the jBASE JDBC
2.0 Driver provides client applications with the ability to perform SQL queries against a jBASE server.
Two-tier model:
This model represents deployment scenarios where a Java client application has the responsibility to create and manage connection, trans-
action and security resources.
Three-tier model:
A three-tier deployment scenario would often involve an application server hosting different application components such as EJBs, servlets, etc.
Deploying the jBASE JDBC Driver on a J2EE application server not only allows those applications to perform SQL queries against jBASE,
but also allows the application server to manage its connections, transactions and security aspects.
JBoss - http://www.jboss.org/
l Overview
l Distributed Lock Service Deployment
l Client Distributed Lock Deployment
l jBASE Distributed Lock Service process flow
l Lock Mechanisms
l Distributed Lock Information
l Utilities
l Resilience
l Recovery
Although T24 has been successfully deployed over Multiple Application Servers, certain idiosyncrasies related to operating systems, per-
formance and recovery of file locks over networked file systems have prompted the requirement to provide an alternative lock strategy for Mul-
tiple Application Server deployment. As such the jBASE Distributed Lock Service is now offered as an alternative mechanism, to the networked
file system, by which locks can be propagated between servers.
The Distributed lock service simply extends the existing lock mechanisms already available within jBASE by the provision of a distributed lock
interface.
The Distributed lock service can be deployed as a service executing on a dedicated server or, as is more likely, deployed on one or two of the
Application Servers in the Multiple Application Server configuration.
The Distributed lock service is provided via a daemon process executing on Linux/Unix systems or as an installable service for the Windows
platform. The Distributed lock service can be initiated using either the lower case form of the executable, ‘jdls’, or the jBASE capitalized con-
vention for process daemons, jDLS, (jBASE Distributed Lock Service).
The Distributed lock service also supersedes the jRLA, (jBASE Record Lock Arbiter), which provided the shared memory lock mechanism for
record locks. Linux/Unix system boot scripts should be modified to initialise the lock mechanism using the jDLS rather than the jRLA execut-
able.
Client processes on the Application Servers are configured to connect to the distributed lock service in order to request a lock be taken,
request the status of a lock or request locks be released. Once initialised the client remains connected to the distributed lock service for the
remainder of the life of the client process, with all lock requests issued and acknowledged through the same connection.
The connection comprises a TCP/IP socket, which uses a network byte oriented structure to pass lock-specific information only: for example,
lock type, client port number, client process id, client thread id, etc. No application data is passed over the network and hence there is no
requirement for message encryption. The use of the TCP protocol is deliberate, such that any break in network service can be detected quickly
and, if enabled, resilience activated.
Once initialised the jBASE Distributed Lock Service process undertakes two distinct operations:
The first operation is to initialise the Shared memory lock table and then continuously monitor the lock table for orphaned locks and tidy up as
required. This operation was formerly undertaken by the jRLA daemon process and effectively remains exactly the same.
The second operation is to initialise a socket listener for connecting clients who wish to take locks on the local system. This operation is star-
ted as a second process on Linux and Unix systems and as an additional thread on Windows. The distributed lock listener continuously listens
for client connections; when a client connection is detected, a separate independent process is started that will handle all the lock requirements
for the connecting client process.
Each lock server/service process once started runs as a single thread within its own process space and is completely independent from any
other lock server/service process as well as the daemon listener process. This enables the lock listener service to be stopped without impacting
existing lock server processes and also ensures any potential lock server process failure does not impact other already executing client pro-
cesses.
This approach provides for a robust, scalable and easily extendable implementation, which avoids many of the complications and restrictions
involved with multi-threaded servers, such as forked threads, signal handling and error recovery.
The one-to-one process relationship ensures that locks can be handled by either of the currently available locking strategies (i.e. shared memory
locks or OS file locks), that locks can be easily traced to their point of origin, and that locks will be released automatically in the event of a
problem with the client process or a client Application Server failure.
Although the Distributed Lock Service can be deployed on dedicated servers the more usual configuration is to deploy the Lock Service on one
or two of the Application Servers in the Multiple Application Server environment. Quite often a mix of Application Servers is used, whereby the
most powerful Application Server is used for core processes and other Application Servers are used for online processing such as enquiries, etc. In
this case it is more efficient to configure the main Application Server with the Distributed lock service for the other Application Servers and use
local lock processing rather than Distributed locking for the core processes. The local processes will automatically detect this configuration,
such that all local lock requests will automatically use the same lock mechanism as the Distributed Lock service, whereby local and distributed
locks become seamlessly integrated.
Once the servers that will provide the distributed lock service have been determined the service should be configured and deployed as follows:
The jBASE software should be installed local to the distributed lock server, and the JBCRELEASEDIR and JBCGLOBALDIR environment variables
configured for the user id under which the lock service will execute. In the case of a standalone lock server no license installation is required, as the
Distributed lock service is unlicensed.
Once the jBASE release is installed and configured then the Distributed Lock Service can be initialised and started in background.
The full set of options for the Distributed lock service can be displayed using the -h option on the jDLS command line. However, specific
option combinations for the Distributed Lock Service daemon are as follows:
jdls -k {-AD}
jdls -K {-o}
Where:
-snn,mm Set Shared memory lock table size to 'nn' record locks over 'mm' groups
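The initialisation command described next is not itself shown here; a plausible form, assuming the -i (initialise) and -b (background) options used in the later examples together with the -s option above, would be:

jdls -ib -s13000,260

(13000 record locks over 260 groups, i.e. 50 locks per group.)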
This command initialises jDLS to start both the jDLS Shared memory lock monitor service and the jDLS Distributed lock listener service in
background. The shared memory will be configured to provide 13000 locks with 50 locks per group. The lock table algorithm will actually use
the next prime number for the number of groups to provide a better spread of locks over the groups, and so in this case configures the shared
lock table with 13150 locks in 263 groups with 50 locks per group.
jDLS -ibD
This command instructs jDLS to initialise ONLY the jDLS Distributed lock listener service. Unless the jDLS Shared memory lock mon-
itor/arbiter service has been previously started, OS file locks will be used as the default lock mechanism for both distributed locks and
local process locks.
jDLS -kD
This command will stop ONLY the jDLS Distributed lock listener service. If active, the jDLS Shared memory lock monitor/arbiter service will
be unaffected.
jDLS -k
This command will stop both the jDLS Distributed lock listener and the jDLS Shared memory lock monitor/arbiter service.
This command will remove the IPCS resource. Note that all processes must be disconnected, and the effective user id for the command must
have adequate permissions to remove the resource.
Note that some command line options are not applicable to Windows, as the jDLS executable runs in the background as a Windows Service. See
the jdls -h command display on Windows for a list of available options.
Along with other jBASE services, the jBASE Distributed Lock Service can be installed using the jBASE jServControl command from the Win-
dows console command line.
e.g.
Once installed, the service can then be stopped or started using the Windows Services panel.
Alternatively, the service can also be stopped or started from the command line via the jBASE jServControl command.
All jBASE processes must be disconnected before stopping and removing the jDLS service.
To configure the client system to use the Distributed Lock Service, each client user must be configured with the JDLS environment variable.
This variable should be set in the user's profile prior to program execution.
To be properly effective ALL users of the same database must be configured with exactly the same Distributed Lock Service parameters, oth-
erwise locks will NOT be correctly respected and data inconsistencies may occur.
The following basic options can be specified in the JDLS environment variable:
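The syntax line itself is not preserved in this extract; based on the options described below and the JDLS=SERVER=10.44.1.56 example shown later, the variable plausibly takes a form such as the following (the ';' separator is an assumption):

JDLS=SERVER=Hostname{,Port}{;SERVER2=Hostname{,Port}}{;WAIT}{;TIMEOUT=Seconds}{;BINARY}{;OSLOCKS}{;LOCK=EXTERNAL|INTERNAL|ALL}{;VERBOSE|TRACE=Tracefile}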
Where:
Hostname is either the DNS hostname or the dotted IP address of the system where the jBASE Distributed Lock Service is executing.
Port is the socket port number (default 50002) on which the jBASE Distributed Lock Service is listening.
Note: The above specifications are completely optional, as denoted by the braces, although the SERVER2 specification has no meaning without
the specification of SERVER. If SERVER is not specified but the JDLS environment variable is set, the configuration will default to 'localhost'
and port 50002.
WAIT
The WAIT option can be used to control the action of distributed lock retries. If the WAIT option is configured then the client will wait for
acknowledgement that the lock request has been completed, without any interim communication (see "distributed lock retries"). With this
option set, the default acknowledgement timeout period is not used. This option is not recommended, as processes may wait a considerable
time in the case of lock contention without any update to the lock status of the process.
TIMEOUT=Seconds
The TIMEOUT option can be used to override the timeout period within which any distributed lock request must be acknowledged. The
default timeout period is thirty seconds. This period allows for multiple retries for the lock on the server system, and as such should only be
adjusted upward.
BINARY
The BINARY option can be used to intercept all binary type locks, i.e. locks other than record locks, and redirect them to the Distributed Lock
Service. Locks that would normally be taken on the local system are thus propagated and hence respected across multiple systems. This
option should not be required except when using jBASE hash files over NFS and/or File and Execution locks.
OSLOCKS
The OSLOCKS option can be used to force all record locks to be routed via the OS file lock path such that OS file locks are taken on the Dis-
tributed Lock Service Server by default rather than using the configured lock mechanism on the lock server.
LOCK=EXTERNAL|INTERNAL|ALL
VERBOSE | TRACE=Tracefile
The TRACE option will override the VERBOSE option, such that Distributed Lock trace information is redirected from standard error to
the specified trace file. These options are intended for debugging and problem analysis only.
The JDLS client connection configuration can be tested using the -C option to the jDLS executable at the command line.
If the Distributed Lock Service cannot be reached, the process will time out and display an error message.
If the Distributed Lock Service cannot be reached for a lock request, the process will time out and exit with the following message:
Client: connection failed for host ‘10.44.1.56’, service '50002', error Connection refused
Locks are taken either in the shared memory lock table or via local OS file locks.
There are two types of lock mechanism that can be implemented on the lock server used for jDLS, the jBASE Distributed Lock Service: one
being shared memory locking, the other being OS file locks.
When starting the jDLS service the default is to use the shared memory lock mechanism. The jDLS service initialisation, by default, will start
both the jDLS Shared memory lock monitor service and the jDLS Distributed lock listener service.
The jDLS Shared memory lock monitor will create a shared memory structure for the locks, which is then used for inter-process com-
munication of lock information both by jDLS lock server processes, which acquire locks on behalf of remote jBASE client processes, and by
local jBASE processes executing on the same system as the jDLS Lock Service. If the shared memory lock area is already allocated then the
shared memory lock group structure cannot be changed, and any lock configuration options specified on the jDLS command line are ignored.
The default lock mechanism can be forced to be OS file locks by inhibiting the initialisation of the jDLS Shared memory lock monitor process. If
the jDLS Shared memory lock monitor service is not initialised then the default lock mechanism will be OS file locks.
OS Lock Mechanism
To initialise the lock service without the jDLS Shared memory lock monitor, start the lock service using the -D option. This option causes
only the jDLS Distributed lock listener service to be initialised and as such the default lock mechanism will be OS file locks.
e.g.
jDLS -ibD
When the jDLS Distributed lock service is active but using the OS file lock mechanism, all record locks are taken on lock files created in the
/tmp/jbase subdirectory on Unix systems or the %SYSTEMROOT%\jbase subdirectory on Windows. The file name of the lock file represents
the 8 digit hexadecimal values of the inode and device numbers of the original file or pseudo file.
e.g.
/tmp/jbase/jlock_xxxxxxxx_yyyyyyyy
Where ‘xxxxxxxx’ is the hexadecimal value of the inode and ‘yyyyyyyy’ is the hexadecimal value of the device.
Note: When the jDLS Distributed lock service is active and a physical file is used, rather than pseudo MD/VOC entries, then the device number
is defaulted to a value of –1. The reason for defaulting the device number to a value of –1 is because the device number allocated to the moun-
ted NFS partition can vary on each of the Multiple Application Servers.
To initialise the lock service to use jDLS Shared memory locking, simply use the -ib options (initialise services and run in background) on the
jDLS command line. This is the default configuration for jDLS when run as a Windows Service.
e.g.
jDLS -ib
The Shared Memory Lock Monitor will scan all the locks in the shared memory lock table during a tidy-up. When a suspected orphaned lock is
discovered, the lock is checked against the process identifier to ensure that the associated process is no longer in existence; if that is the case
then the lock is released. As such, locks left behind by processes will automatically be cleaned up every five minutes.
The shared memory lock table is currently only used for record locks, as group locks and other binary locks are taken as OS file locks on the
physical files or pseudo files in the /tmp/jbase_lock or %SYSTEMROOT%\jbase_lock directories.
The Shared Memory Lock Monitor can be stopped and restarted (using the -k and -ib options together with the -A option on the jDLS com-
mand line) without interfering with the Distributed Lock Listener service or the currently active lock server processes working on behalf of
remote clients. However, this action is not recommended and should only be used in extreme circumstances, as the preferred locking mech-
anism should be chosen prior to any local or remote client connection and the Distributed Lock Service initialised accordingly.
Although the lock mechanisms keep track of outstanding locks taken by processes, or even threads in the case of the shared memory lock
table, this information is relatively limited and usually insufficient to easily determine lock ownership.
In the case of OS file locks, all that is retrievable is the process id of the process which has taken the lock, and possibly the device and inode of
the file or stub file. The shared memory mechanism provides additional independent information in the lock table, such as port number, record
key and thread id. Unfortunately the record key alone is not usually enough for lock diagnosis, and additional information such as the file/table
information is also usually required. Finding the filename associated with an inode may not be simple on some platforms, and in later Multiple
Application Server implementations, which will not use stub files, this information will be unavailable.
The only process that knows the application file/table name and other associated information is usually the client process, as this was the pro-
cess that opened or referenced the file/table and obtained the variable on which to take or release locks.
As such, when using distributed locking, the internal client lock and file/table information is periodically written out to a native file named using the
port number for the process. This port information is written to the JBCRELEASEDIR/proc/info directory of the lock server system.
In the case of remote clients the information is sent to the associated lock server processes, which in turn write the data, with appended stat-
istics, into the info directory on behalf of the client process.
In the case of local processes executing on the same system as the jDLS lock service, these processes write their lock and file information dir-
ectly into the JBCRELEASEDIR/proc/info directory.
This information can then be retrieved, interpreted and displayed by the lock and file utilities, such as SHOW-ITEM-LOCKS and LIST-OPEN-
FILES.
This procedure replaces the ‘polling’ procedure that the utilities would otherwise perform when jDLS is not in use, as it is not pos-
sible to use shared memory between Multiple Application Servers.
The client or local processes only write out lock information when the lock information has changed and the process is either initialised, block-
ing on a lock, no longer waiting on a blocked lock, about to get input or about to sleep. As such, the lock information can only be used to
provide a snapshot of the lock or open file status for the Multiple Application Server processes at any one point in time. The lock information
is deleted on process exit; however, should a process exit abnormally, this lock information may persist.
There is essentially one utility program, although referenced by other program names, which is used to display record lock information, namely
SHOW-ITEM-LOCKS.
The jDLS executable can also be used directly to display the shared memory lock table information, but it is unable to display the associated
file/table name information.
SHOW-ITEM-LOCKS
The SHOW-ITEM-LOCKS utility has been modified to include the IP address of the port when displaying information. The utility has also been
modified to obtain the lock information for each port from the JBCRELEASEDIR/proc/info directory when either the jDLS Lock Service is
deemed active on the system or the client JDLS environment variable is configured, thus enabling the utility to be executed from a remote cli-
ent system.
e.g. (note: the filename and other fields have been modified/truncated to fit this document)
show-item-locks information retrieved from process information entries generated in JBCRELEASEDIR/proc/info directory on host 10.44.1.56
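Schematically the display shows one line per port; the layout and values here are illustrative only, reconstructed from the notes below:
Port  Pid    Ipaddr       Key/Item-id
2     15071  0.0.0.0      …
1000  15031  10.44.1.55   33537
4     …      127.0.0.1    …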
Where:
Port 2 is a local process on the same system as the jDLS server and hence has a dotted address of 0. This pro-
cess (pid 15071) takes locks directly on the same server.
Port 1000 is a remote client (on system 10.44.1.55) with the JDLS environment variable configured as
JDLS=SERVER=10.44.1.56. The client process is communicating with the lock service (jDLS) on the lock server system
(10.44.1.56) and has an associated lock server process (15031). The dotted host address of the client system,
(10.44.1.55), is displayed by show-item-locks. The jDLS lock server process (pid 15031) has taken the lock on
record key 33537 on behalf of the remote client on port 1000 executing on the remote system, (10.44.1.55).
Port 4 is another local process on the same system as the active jDLS server but has also been configured with a
JDLS environment variable. This time the variable is set to use the default configuration and as such this pro-
cess is using the reserved loop back local host address of 127.0.0.1.
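For reference, a remote client would typically configure the JDLS environment variable before connecting; a minimal sketch for a Unix shell,
using the server address from this example:
export JDLS=SERVER=10.44.1.56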
jDLS –dvL
In the above case the shared memory lock mechanism was deployed as the default locking mechanism. As such the shared memory lock table
information can also be displayed using the –dvL command line options to the jDLS executable.
e.g. Execution of the jDLS command with the –dvL option on system 10.44.1.56.
jDLS –dvL
Lock retries: 0
Tidy-up operations: 0
Group value pid type port i-node dev queued ipaddr key
This display shows similar information to the SHOW-ITEM-LOCKS utility; however, the file name information is not held within the shared memory
lock table, only the associated inode. Also note that the device information is defaulted to minus one, as the values of the devices for the mounted
file partitions can vary between application servers and hence cannot be used.
Additional information regarding the process and status of the Distributed Lock Monitor and the Distributed Lock Listener processes is dis-
played, along with the time the Lock service was started. On the Windows platform the process numbers for the two functions would be the
same, as the Lock Monitor and the Lock Listener run as different threads within the same process.
As can be seen from the utility displays, Port 2 is executing on the same local system as the distributed lock service, (10.44.1.56), on process
15071 and taking locks directly in the shared memory lock table.
This example demonstrates the integration capabilities of the Distributed Lock Service, whereby the distributed lock configuration can be adjus-
ted to take best advantage of the system topology. For example, a 32 CPU system mixed with a couple of 4 CPU systems could be allocated to form
the Application Server tier. The lock service can be configured such that the Distributed Lock Service executes on the 32 CPU system, so that
lock-intensive processing jobs like the COB, which would probably be best scheduled to run on the larger server, can use the standard dir-
ect lock mechanisms and avoid network traffic for lock requests, whereas other online or interface processes could run
on the smaller application servers and hence be configured to communicate with the Distributed Lock Service for their lock requirements. The Dis-
tributed Lock Service is fully compatible with the existing lock mechanisms and hence the disparate systems can be fully integrated to coordin-
ate lock activity.
In resilience mode the client process will issue a second lock request, to the secondary Distributed Lock server, only once the original lock
request has been acknowledged as successful by the primary Distributed Lock server. The responses of the two lock requests are compared and, if
the secondary response differs, an error is logged so that the problem can be investigated. As such, the performance cost of resilience is one
additional socket send and receive message per lock request. While both primary and secondary Distributed Lock Servers continue to respond,
the process is executing in resilient mode.
If communication to the primary Distributed Lock server should be interrupted or lost, an error is logged and then the client process will auto-
matically promote the secondary Distributed Lock server to take over from the original primary server and become the new primary. At this
point the duplication of lock requests will cease and the process will continue to communicate only with the new primary lock server. At this
point the process is also no longer resilient and any subsequent communication failure with the Distributed Lock server will result in the client
process wrapping up and exiting the client system.
If communication to the secondary Distributed Lock server should fail while in resilient mode, an error message is logged and the process con-
tinues to communicate only with the primary Distributed Lock server; it is therefore no longer resilient.
Once communication fails to one or other of the Distributed Lock servers, further communication with the failed server is never attempted for
the remainder of the lifetime of the process, as attempting to do so could cause lock confusion and undermine the lock mechanism. All com-
munication errors are logged to the jbase_error_trace.
The resilient mode should not be used with a configuration that integrates with direct local locking processes, as the local processes do not
even communicate with the primary lock server let alone a secondary lock server. If resilience is required then all processes both local and
remote must be configured to communicate to the same primary and secondary Distributed Lock servers via the JDLS environment variable.
If communication with a client process fails then the distributed lock server process handling lock requests on behalf of that client will release
any outstanding locks and then exit.
This procedure ensures that absent or misbehaving clients cannot continue to hold locks, irrespective of the state of the client system. The
release of locks in this scenario, by the distributed lock server process, only affects the local system and has no effect on any other distributed
lock server executing on either another primary or secondary lock server system.
The Distributed Lock Listener service can be stopped or restarted, (using the –k and –ib options together with the –D option on the jDLS com-
mand line), without interfering with the communications of the currently connected remote clients. Obviously new clients will be unable to
connect while the Distributed Lock Listener service is not currently listening, hence these options should be used with great care.
Currently there is no automated procedure to recover the Distributed Lock Service on systems that have failed when using resilient mode, as
the lock table or OS file locks cannot be easily resynchronized with the current primary lock server and guarantee that all locks will be valid.
Once the failed system is recovered and the Distributed Lock service restarted, all remote client processes will need to exit and then reconnect
using the JDLS environment variable configuration in order to use the original Distributed Lock Service configuration and renew com-
munication with the restarted Distributed Lock Service.
This procedure ensures that all locks are released and retaken such that the lock table and/or OS file locks on both Distributed Lock Service
systems are correctly synchronized.
jQL (or jBASE Query Language) is the data retrieval language for jBASE data. This query language uses English-like constructs to selectively
retrieve, sort and display data held in jBASE files (or in other databases through the relevant Direct Connect driver). jQL commands can be
entered directly at the shell prompt or embedded in jBC programs so that the data can be processed programmatically.
Overview
jQL Sentences
jQL Verbs
BSELECT
COUNT
EDELETE
ESEARCH
I-DUMP / S-DUMP
LIST
LIST-LABEL
LISTDICT
REFORMAT
SELECT
SORT
SORT - LABEL
SREFORMAT
SSELECT
File Modifiers
Value Strings
Between Connective
Relational Operators
Logical Connectives
Synonyms
CNV Connective
COL.HDG Connective
FMT Connective
Total Connectives
BREAK-ON Connective
GRAND-TOTAL
Throwaway Connectives
Field Qualifiers
Using Clause
Command Options
Macros
JQLCOMPILE
JQLEXECUTE
JQLFETCH
JQLGETPROPERTY
JQLPUTPROPERTY
Conversion Processing
TimeStamp “W{Dx}{Tx}”
Data Conversion
A Conversion
A: Expression Format
An:expression Format
AE:expression Format
Format Codes
Summary of Operands
jQL Operands
Remainder Function
Substring Function
Arithmetic Operators
Relational Operators
Concatenation Operator
IF STATEMENT
B Conversion
C Conversion
D Conversion
D1 D2 Conversion
F Conversion
The Stack
Order of Operation
Push Operator
Miscellaneous Operators
Relational Operators
Logical Operators
Repeat Operators
Format Codes
G Conversion
L Conversion
MC Conversion
Changing Case
Extracting Characters
Replacing Characters
Converting Characters
MD Conversion
MK Conversion
Ml / MR Conversion
MP Conversion
MS Conversion
MT Conversion
Output Conversion
P Conversion
R Conversion
S Conversion
T Conversion
T File Conversion
Record Structure
Sublist – V Code
I-TYPES
User Subroutines
ICOMP
EXPLAIN
The jBASE Query Language (jQL) is a powerful and easy to use facility, which allows you to retrieve data from the database in a structured
order and to present the data in a flexible and easily understood format. You can enter jQL Commands from your terminal or embed jQL Com-
mands in applications programs, procs and paragraphs to access data in Direct Connect files. The language is characterized by the use of intu-
itive Commands that resemble everyday English language Commands.
You might for instance manage a retail department and need to review a particular set of figures, which requires the phrase: “Show me the sales
figures for January sorted in date order.”
By using the jQL Command LIST with a file named SALES and your predefined data definition records such as MONTH and DATE, you can
construct complex ad-hoc reports directly from the Command line interface (>). You can also choose how you want the information presented;
displayed directly to your printer or to your screen; listed in date order, or in descending or ascending order. The choice is yours as jQL con-
tains a rich range of commands for listing, sorting, selecting and controlling the presentation of your details and is a safe language for end users.
With the exception of the “EDELETE” Command, jQL will not alter the contents of the source data files.
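As noted above, jQL sentences can also be executed from jBC programs. A minimal sketch follows; the file, field and variable names are
hypothetical, and EXECUTE clause support may vary by release:
* Run a jQL selection from jBC and walk the returned select list
EXECUTE 'SELECT SALES WITH MONTH = "JAN"' RTNLIST JanList
Done = 0
LOOP UNTIL Done DO
   READNEXT SalesId FROM JanList THEN
      * ... process each selected record key here ...
      CRT SalesId
   END ELSE Done = 1
REPEAT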
All jQL Command sentences begin with a verb-like Command such as LIST or SELECT followed by a file name such as SALES or PERSONNEL,
and then a series of qualifiers and modifiers with which you control elements such as eligible data, report formatting, any totals that you want
to appear and so on.
Most data files on the system will have two assigned storage areas: a dictionary section and a data section.
Some files might be single level and others might have multiple data sections. (See the File Management chapter of the System Administrators
Guide for more details.)
Data definition records kept in the dictionary portion of the file define all the data fields in a file. These data definition records do not have to
exist (you can use defaults provided in the environment variables or even the dictionaries of other files). However, where you need to manip-
ulate, say, dates (which are held in internal format), or to join data held in different files, you will find that one or more definition records will
be required for each data field.
EXAMPLE
Data definition records (or DICT records) allow you to specify the position of the data in a record (its field number); a narrative to be used as a
column heading; any input or output conversions required (such as for dates); the data type (left or right justified, or text that will break on
word boundaries); and a column width, used in reports.
Input and output conversion codes can also be used to manipulate the data by performing mathematical functions, concatenating fields, or by
extracting specific data from the field.
Multivalued Files
jBASE uses a three-dimensional file structure, called a non-first normal form data model, to store multiple values for a field in a single record;
these are known as multivalued fields. A multivalued field holds data that would otherwise be scattered among several interrelated files. Two or more
multivalued fields can be associated with each other when defined in the file dictionary. Such associations are useful in situations where a
group of multivalued fields forms an array or a nested table within a file. You can define multivalued fields as belonging to associations in
which the first value in one multivalued field relates to the first value in each of the other multivalued fields in the association, the second value
relates to the other second values, and so on. Each multivalued field can be further divided into subvalues, again obeying any relationships between
fields.
A jQL Command sentence is entered at the shell in response to a Command prompt (:) or a select prompt (>). If a Command such as SELECT
or GET-LIST creates an implicit list whilst in jSHELL, it displays the select prompt. Each sentence must start with a jQL Command and can be
of any length. Press <ENTER> to submit the constructed sentence. If you enter an invalid Command, the system will reject it and display an
appropriate error message.
EXAMPLE
jsh ~ -->SORT jcustomers FIRSTNAME LASTNAME CITY STATE NUMUSERS WITH FIRSTNAME = "TED" AND NUMUSERS > "10" BY
CITY DBL-SPC HDR-SUPP (P
The verb in this case is SORT. The file specifier is jcustomers. The fields specified in the output specification are FIRSTNAME, LASTNAME,
CITY, STATE & NUMUSERS. The selection criteria specify that only those records with a FIRSTNAME of TED and with more than 10 users
should be returned. The sort criterion says to order the results by the CITY field. The format specifier sets the output to be double spaced
with no header. The (P option sends all output to the printer rather than the screen.
Line Continuation
When you are typing words in response to the TCL prompt the system allows you to enter up to 240 characters before it performs an auto-
matic linefeed. You can extend a line by entering the line continuation characters. To enter the continuation sequence hold down the CTRL key
and press the underscore key (_), which may require you to hold down the shift key. Follow this combination immediately with the RETURN
key.
Use the following words and symbols only as described in this manual, as all have special significance within a jQL sentence. These words are
defined in each Master Dictionary (MD) and their definitions should not be changed in any way.
! # &
< <= =
>=
A AFTER AN
AND ARE
BSELECT BY BY-DSND
BY-EXP BY-EXP-DSND
COL-SPACES COUNT
DICT
EACH EDELETE EQ
ESEARCH EVERY
GE GRAND-TOTAL GT
IF IN ISTAT
ITEMS LE LIST
LT
NE NO NOPAGE
NOT
OF ONLY OR
PAGE PG REFORMAT
A jQL Command sentence must contain at least a verb and a File name. The verb specifies which process to perform and the filename indicates
the initial data source.
You can add optional clauses to refine the basic Command. You can use clauses to control the range of eligible record keys, define selection
and sorting criteria, or to specify the format of the output, and so on.
REMEMBER: only a verb and filename are required. The following list summarizes each element in the Syntax.
COMMAND SYNTAX
jQL-verb {DICT} file-specifier {field-list} {record-list} {selection-criteria} {FROM #} {sort-criteria} {USING file-specifier} {macro-call} {output-specification} {format-specification} {output-limiter} {(options}
SYNTAX ELEMENTS
Element Description
Verb One of the verb-like Commands detailed later. Most Commands will accept any or all of the optional clauses.
file modifier The file modifiers DICT, ONLY=, WITHIN and TAPE modify the use of the file, and how it is accessed.
file specifier Identifies the main data file to be processed. Usually the data section of a file, but could be a dictionary or a secondary data area.
record-list Defines which records will be eligible for processing. Comprises an explicit list of record keys or record selection clauses. An explicit list comprises one or more record keys enclosed in single or double quotes. A selection clause uses value strings enclosed in single or double quotes and has at least one relational operator. If no record list is supplied, all records in the file will be eligible for processing unless an “implicit” record list is provided by preceding the Command with a selection Command such as GET-LIST or SELECT.
FROM list# A number from 0 through 10 of an active select list that contains record IDs. The query operates on records whose IDs are in the select list.
selection-criteria Qualify the records to be processed. Comprises a selection connective (WITH or IF) followed by a field name. Field names can be followed by relational operators and value strings enclosed in double quotes. Logical connectives AND/OR can also be used. Expressions can be parenthesized to specify precedence.
sort-criteria Specify the order in which the data is returned. Comprises a sort modifier, such as BY or BY-DSND, followed by a field name. Used also to “explode” a report by sorting lines corresponding to multivalued fields by value, and to limit the output of values (see output specification).
macro call jQL allows the use of macros to predefine parts of a sentence. The macro definition contains one or more optional sentence elements. You invoke the macro by including its name in a sentence. The jQL processor looks for the macro in the currently active dictionary and includes all of its text parts in the sentence.
format specification Comprises modifiers, such as HEADING, ID-SUPP, and DBL-SPC, which define the overall format of the report.
output-limiter The WHEN clause, used to limit the output of multivalued fields.
"A" codes provide many powerful features, which include arithmetic, relational, logical, and concatenation operators, the ability to reference
fields by name or FMC, the capability to use other data definition records as functions that return a value, and the ability to modify report data
by using format codes.
The A code also allows you to handle the data recursively, or “nest” one A code expression inside another.
SYNTAX SUMMARY
The A code function uses an algebraic format. There are two forms of the A code:
l A uses only the integer parts of stored numbers unless a scaling factor is included.
l AE handles extended numbers. Uses both integer and fractional parts of stored numbers.
COMMAND SYNTAX
A {n} {;expression}
AE;expression
SYNTAX ELEMENTS
n is a number from 1 to 6 that specifies the required scaling factor.
Comments: The A code replaces and enhances the functionality of the F code.
The AE format uses both the integer and fractional parts of stored numbers. Use format codes to scale the output.
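For instance, assuming hypothetical fields ORD.QTY in field 2 and ORD.PRICE in field 3, an A code placed in field 7 or 8 of a data definition
record could derive a line total:
A;2 * 3
This multiplies the value of field 2 by the value of field 3, using only the integer parts of the stored numbers; with a scaling factor (for
example A2;2 * 3) the stored values are treated as scaled by two decimal places.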
The "An;expression" format performs the functions specified in expression on values stored with an embedded decimal point. It then converts
the resulting value to a scaled integer.
The "An" format converts a value stored with an embedded decimal point to a scaled integer. The stored value’s explicit or implied decimal
point is moved n digits to the right with zeros added if necessary. Returns only the integer portion
Field 2 of the data definition record must contain the FMC of the field that contains the data to be processed.
The following sentence lists information about ORDER numbered 200 to 399.
The following sentences do not list information regarding 117 and 119 because they would not be on the implicit list. Although this sentence
seems to have an explicit item-id list and an item-id selection clause, the whole series is treated as a selection clause, because there is a
relational operator somewhere in the list.
42 RECORDS SELECTED
The arithmetic F code operators work on just the top stack entry or the top two stack entries. They are:
+ Add the top two stack entries together and push result into entry 1.
*{n} Multiply the top two stack entries and push result into entry 1. If n is specified, the result is divided by 10
raised to the power of n.
R Compute the remainder from the top two stack entries and push the result into entry 1.
S Replace the multivalued entry 1 with the sum of its multivalues and subvalues.
- Compute the difference of the top two stack entries and push the result into entry 1.
Performs the functions specified in expression on values stored without an embedded decimal point.
Provides an interface for jBASIC subroutines or C functions to manipulate data during jQL processing. Synonymous with the CALL code.
The connective BETWEEN followed by two value strings is a shorthand way of saying ‘all values greater than the first value string and less than
the second’. The value of the second value string must be greater than the value of the first to select items. Value strings including special char-
acters ^, [ and ] are not valid.
The BREAK-ON connective causes the fields that follow it to be monitored for a change in value, permitting up to fifteen breaks within one
sentence, treated in hierarchical left-to-right order. The first BREAK-ON in the sentence is the highest level.
When a change in the value of the field is detected, a blank line is output, followed by a line with three asterisks, and then another blank line. If
the BREAK-ON clause specifies text, it outputs the text in place of the asterisks. If the text is wider than the column width, the processor applies
the same justification as the named field.
You can suppress the BREAK-ON output by setting the column width of the field to zero.
You can use BREAK-ON in conjunction with the TOTAL connective to generate subtotals. If using the modifier DET-SUPP with TOTAL and
BREAK-ON, it displays only the subtotal and grand total lines.
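For instance, a subtotal-only report could combine these modifiers (the file and field names here are hypothetical):
SORT SALES BY REGION BREAK-ON REGION TOTAL VALUE DET-SUPP
This sorts by REGION, triggers a break each time REGION changes, accumulates VALUE, and, because of DET-SUPP, prints only the subtotal
and grand total lines.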
BREAK-ON Options
B Break. Works in conjunction with the B option of the heading and footing modifiers to put the break
values in the heading or footing
D Data. Suppresses the break line if there is only one detail since the last BREAK. This is the line with
the asterisks, any text that is specified, or totals
L Line. Suppresses the blank line preceding the break data line. Overrides the U option if both are spe-
cified
R Rollover. Inhibits a page break until all the data associated with the current break is output
U Underlines. Places underlines on the line above the accumulated totals if the TOTAL modifier was spe-
cified. Ignored if used with the ‘L’ option
V Value. Causes the values of the control Break attribute to be inserted at this point in the BREAK-ON
label
A controlling field is one which has the code D1 in field 8 of its data definition record and points to its dependent fields; each dependent field
has a D2 code in field 8 of its data definition record, specifying the controlling field. When outputting a controlling field, the processor will:
1. Look for the first field specified in the output specification clause that matches each FMC (Field Mark Count) of its dependent fields.
2. Position the found fields, in the order found, to the immediate right of the controlling field for display.
3. Display an asterisk (*) under the column heading of each found field.
4. Output dependent fields immediately to the right of their controlling field, regardless of the order in which you specify them.
5. Move an independent field found between the controlling and dependent fields “logically” to the right of the controlling and depend-
ent fields.
6. Ignore dependent fields unless you specify the controlling field.
Selects all the records in the ORDER file and outputs the ORD.ID data. The ORD.QTY data will only be included if it matches 5 - any other
value will be shown as blank.
EXAMPLE 2
SORT ORDER BY ORD.QTY BREAK-ON ORD.QTY ORD.ID
Selects all the records in the ORDER file in ORD.QTY order and outputs a line for each record until the ORD.QTY changes. At this
point, a control break triggers and outputs the running total of ORD.QTY. At the end of the report, it displays a cumulative total for ORD.ID.
Retrieves selected records and generates a list composed of data fields from those records as specified by any explicit or default output spe-
cifications. Each subvalue within a field becomes a separate entry within the list.
COMMAND SYNTAX
BSELECT file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier}{output-specification} {(options}
Comments: When the Command terminates, it displays the total number of entries in the generated list and makes the list available as if gen-
erated by a SELECT, GET-LIST or other list-providing Command.
If you do not specify a sort-criteria clause, the record list will be unsorted.
If you do not specify an output-specification, it uses the default data definitions “1”, “2”, etc.
EXAMPLE
BSELECT ORDER WITH ORD.QTY = “500]” ORD.AMT
Creates a list containing the ORD.AMT values from all the records in the ORDER file which have an ORD.QTY value beginning with 500
(the ] character is the right-ignore wildcard).
COMMAND SYNTAX
C{;}n{xn}...
SYNTAX ELEMENTS
; Optional in that it has no function other than to provide compatibility.
n The FMC of a field whose contents are to be concatenated, or a literal string enclosed in quotes.
x The character for insertion between the concatenated elements. If you specify a semicolon (;), no separator will be used. Any non-numeric
character except system delimiters (value, subvalue, field, start buffer, and segment marks) is valid.
Comments: See the descriptions of the function codes (A, F, FS and their variants) for other concatenation methods.
Input Conversion: does not invert; applies the concatenation to the input data.
EXAMPLE 1
C1;2
Concatenates the contents of field 1 with field 2, with no intervening separator character
EXAMPLE 2
C1*2
Concatenates the contents of field 1 with an asterisk (*) and then the content of field 2
EXAMPLE 3
C1*”ABC” 2/3
Concatenates the contents of field 1 with an asterisk (*), the string ABC, a space, field 2, a forward slash (/) and then field 3.
B; {filename} subname
Or
CALL {filename} subname
SYNTAX ELEMENTS
filename is ignored but provided for compatibility with older systems
subname is the name of the called subroutine (or function). This subroutine must reside in one of the libraries defined by the user.
The subroutine can be called as a conversion (attribute 7 of the dictionary item) or as a correlative (attribute 8 of the dictionary item). Data is
passed to and from the subroutine with named COMMON elements. In each subroutine the following line must be included:
OR
For ex-Sequoia users, you may INCLUDE the file qbasiccommonseq, which provides compatibility with that platform.
The INCLUDE file defines the named common that is used by jQL. The named common consists of 2 arrays: access and newpick.
USAGE
access
access(1) Data file open variable
access(5) Attribute being processed. This is the value in attribute 2 of the calling dictionary item.
access(8) reserved
access(9) reserved
access(10) Item id
access(12) reserved
access(13) reserved
access(15) reserved
access(17) reserved
By default, jBASE will only call a subroutine once per item. This is normally desirable, since value and subvalue manipulation can be done
within the subroutine. In addition, it is clearly more efficient to only call the subroutine once per item. However, for backward compatibility,
jBASE can be configured to call the subroutine for every value and subvalue processed. If this is required then set jql_mv_subcall = true in
usr/jbc/Config_EMULATE. If this setting is in place, access(6) and access(7) are incremented appropriately as each value and subvalue is pro-
cessed. Otherwise the values in access(6) and access(7) have no meaning.
newpick
newpick(1) through newpick(11) – reserved
newpick(12) - On entry to the subroutine this will contain the value of the data passed from jQL to the subroutine. By default, this will be all
the data defined by the calling dictionary item (i.e. all values and subvalues). However if "jql_mv_subcall = true" is set, then the subroutine is
called for every value/subvalue and newpick(12) contains just each value or subvalue as it is processed.
It is worth noting that a subroutine can be called as part of a multi-valued correlative. For example, the calling dictionary item could look like:
<1>S
<2>17
<8>F;"ABCD"]CALL SUB1
In this instance, the data defined by the calling dictionary item is "ABCD". But if the calling dictionary item is:
<1>S
<2>17
<8>CALL SUB1
Then the data passed to the subroutine in newpick(12) is simply the contents of attribute 17 of the current item, which may be multi/sub val-
ued.
EXAMPLE
COMMENTS (in DICT of SALES file)
001 A
002 3
003 Comments
004
005
006
007 B;comments
008
009 T
010 25
SALES........ Comments.................
ABC Grade 1
DEF Grade 2
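As an illustration, the subroutine named in attribute 7 above might look like the following minimal sketch. In real code the named common is
declared by the INCLUDE file supplied with jBASE; it is declared directly here, with an assumed block name and array size, purely for
illustration, and the source attribute values "1" and "2" are hypothetical:
SUBROUTINE comments
* Illustrative declaration only - use the supplied INCLUDE file in real code
COMMON /JQLCOMMON/ access(20), newpick(20)
* newpick(12) arrives holding the data defined by the calling dictionary
* item (attribute 3 of each SALES record); replace it with the display value
BEGIN CASE
CASE newpick(12) EQ "1"
   newpick(12) = "Grade 1"
CASE newpick(12) EQ "2"
   newpick(12) = "Grade 2"
CASE 1
   newpick(12) = ""
END CASE
RETURN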
PERSISTENT VARIABLES
When calling subroutines from dictionary items it is sometimes advantageous for the values of variables to persist between CALLs, for the dur-
ation of the jQL execution. An example of how persistent variables can be employed is when it is necessary to READ from a file in the sub-
routine. Rather than open the file every time the subroutine is called (i.e. for each record processed by jQL), it is more efficient to open the file
when the first record is processed and keep the file open variable available for subsequent records. This can be achieved with the following code
in the subroutine:
...
IF UNASSIGNED(CustFileVar) THEN
   * Open the file on the first call only; "CUSTOMERS" is an illustrative name
   OPEN "CUSTOMERS" TO CustFileVar ELSE
      GOSUB FatalError
      ABORT
   END
END
...
In order that the variables are persistent, a compiler directive must be supplied:
Persistent variables should be treated as COMMON variables. The one exception is that they are initialized for each jQL command. If a sub-
routine is called from two or more dictionary items in the same jQL command then the variables will be shared in the same way that COMMON
variables are. If the subroutine is called recursively, then the variables will be shared between each level of recursion, in the same way that
COMMON variables are.
Use the following MC codes to transform text from upper to lower case and vice versa:
MCL Convert all upper case letters (A-Z) to lower case.
MCT Convert all upper case letters (A-Z) in the text to lower case, starting with the second character in each
word. Change the first character of each word to upper case.
MCU Convert all lower case letters (a-z) to upper case.
Input conversion does not invert. The conversion code will be applied to the input data.
EXAMPLE 1
MCL
Assuming a source value of AbC dEf, MCL will return abc def.
EXAMPLE 2
MCT
Assuming a source value of AbC dEf “ghi, MCT will return Abc Def “ghi.
EXAMPLE 3
MCU
Assuming a source value of AbC dEf, MCU will return ABC DEF.
The CNV connective allows the query to override the default conversion as supplied in the dictionary with another conversion.
EXAMPLE
LIST CUSTOMER *A1
CUST..... *A1......
1 FRED BLOGGS
2 TOM JONES
LIST CUSTOMER *A1 CNV “MCT”
CUST..... *A1......
1 Fred Bloggs
2 Tom Jones
The COL.HDG connective allows the query to override the default column header as supplied in the dictionary with another textual descrip-
tion.
EXAMPLE
LIST CUSTOMER *A1
CUST..... *A1......
1 FRED BLOGGS
2 TOM JONES
Command options are letters enclosed in parentheses, which modify the action of the jQL command sentence. The options described here are
common to most commands. Where the options are command-specific, they are described with the command.
Do not confuse options for commands with options for modifiers and connectives such as HEADING and BREAK-ON. Commas or spaces can
separate options; when the options are at the end of the sentence (as is recommended) you may omit the closing parenthesis. jQL ignores any
option not used by a particular command.
Options
C Suppresses column headings, the page and date line at the start, and the summary line at the end of a report. Equi-
valent to the COL-HDR-SUPP modifier
H Suppresses the page and date line at the start and the summary line at the end of the report. Equivalent to the HDR-
SUPP modifier
EXAMPLE
LIST CUSTOMER (HIP
Lists the CUSTOMER file (using the default data definition records) but suppresses the output of a header and the record keys. Sends the
output to the assigned printer.
For Example: the following expression concatenates the character “Z” with the result of adding together fields 2 and 3:
A;”Z”:2 + 3
A Algebraic functions.
B Subroutine call.
C Concatenation.
F Mathematical functions.
G Group extract.
L Length.
MC Mask character.
MD Mask decimal.
MK Mask metric.
MS Mask Sequence.
MT Mask time.
P Pattern match.
R Range check.
S Substitution.
T Text extraction.
U User exit.
W Timestamps.
The MC codes that convert ASCII character codes to their binary or hexadecimal representations or vice versa are MCAX, MCXA, MCAB{S}
and MCBA, as shown in the examples below (the S suffix suppresses spaces).
Comments: The MCAB and MCABS codes convert each ASCII character to its binary equivalent as an eight-digit number. If there is more than
one character, MCAB puts a blank space between each pair of eight-digit numbers. MCABS suppresses the spaces.
When converting from binary to ASCII characters, MCBA uses blank spaces as dividers, if they are present. MCBA scans from the right-hand
end of the data searching for elements of “eight-bit” binary strings. If it encounters a space and the element is not eight binary digits long, it pre-
pends zeros to the front of the number until it contains eight digits, and continues until reaching the leftmost digit, prepending zeros if neces-
sary. It then converts each eight-digit element to its ASCII character equivalent.
Input conversion does not invert. The original code will be applied to input data.
EXAMPLE 1
MCAX
EXAMPLE 2
MCXA
EXAMPLE 3
MCAB
EXAMPLE 4
MCABS
EXAMPLE 5
MCBA
EXAMPLE 6
MCBA
The MC codes that convert numeric values (as opposed to characters), to equivalent values in other number schemes are:
MCBX{S} Convert a binary value to its hexadecimal equivalent. Use S to suppress spaces.
MCDR Convert a decimal value to its equivalent Roman numerals. Input conversion is effective.
MCDX or MCD Convert a decimal value to its hexadecimal equivalent. Input conversion is effective.
MCRD or MCR Convert Roman numerals to the decimal equivalent. Input conversion is effective.
MCXB{S} Convert a hexadecimal value to its binary equivalent. Use S to suppress spaces.
MCXD or MCX Convert a hexadecimal value to its decimal equivalent. Input conversion is effective.
Comments: These codes convert numeric values rather than individual characters. For Example, conversion of the decimal value of 60 is to
X”3C” in hexadecimal, or LX in Roman numerals. The value 60 is converted, not the characters “6” and “0”.
With the exception of MCBX {S} that handles spaces, all conversion of these codes will stop if they encounter an invalid character that is not a
digit of the source number system.
With the exception of MCDR, if the conversion fails to find any valid digits it will return zero; MCDR will return null.
If you submit an odd number of hexadecimal digits to the MCXB code, it will add a leading zero (to arrive at an even number of characters)
before converting the value.
The MCXB and MCXBS codes convert each pair of hexadecimal digits to its binary equivalent as an eight-digit number. If there is more than
one pair of hexadecimal digit, MCXB puts a blank space between each pair of eight-digit numbers. MCXBS suppresses the spaces.
When converting from binary to hexadecimal digits, MCBX uses blank spaces as dividers if they are present. MCBX effectively scans from the
right-hand end of the data searching for Elements of eight-bit binary digits. If it encounters a space and the element is not a multiple of eight
binary digits, it prepends zeros to the front of the number until it contains eight digits. This continues until it reaches the leftmost digit pre-
pending zeros if necessary. Each eight-digit element is converted to a hexadecimal character pair.
Input conversion is effective for MCDR, MCDX, MCRD and MCXD. Input conversion is not inverted for the other codes. The original code will
be applied to input data.
EXAMPLE 1
MCBX
Assuming a source value of 01000001 1000010, MCBX will return 4142. It would return the same value if there were no space between the
binary source elements.
EXAMPLE 2
MCRD
Assuming a source value of LX, MCRD will return 60.
EXAMPLE 3
MCDX
Assuming a source value of 60, MCDX will return 3C.
Reports the total number of records found in a file that match the specified selection criteria.
COMMAND SYNTAX
COUNT file-specifier {record-list} {selection-criteria} {USING file- specifier} {(options}
SYNTAX ELEMENTS
Options can be one or more of the following:
C{n} Display running counters of the number of records selected and records processed. Unless modified by n,
the counter increments after every 500 records processed, or at the total number of records if less than 500.
The n specifies a number other than 500 by which to increment. For Example, (C25) increments the
counter after every 25 records processed.
P Send the report to the printer.
EXAMPLE
COUNT ORDER WITH ORD.AMT > “1000”
91 Records counted
Counts the number of records in the ORDER file which have an ORD.AMT greater than 1000.
COUNT ORDER WITH ORD.AMT > “1000” (C50
91 Records counted
Counts the number of records in the ORDER file which have an ORD.AMT greater than 1000, and displays a running total of selected and pro-
cessed records after each group of 50 records is processed.
COMMAND SYNTAX
D{p}{n}{s}
SYNTAX ELEMENTS
p The special processing operator, which can be any one of the following:
I Converts dates in the external format to internal format. You can use this in field 7 or 8.
J Returns the Julian day (1 - 365, or 1 - 366 for a leap year).
W Returns the day of the week as a numeric value (Monday is 1).
WA Returns the day of the week in uppercase letters (MONDAY - SUNDAY).
n is a number from 0 to 4 that specifies how many digits to use for the year field. If omitted, the
year will have four digits; a value of 0 suppresses the year.
s is a non-numeric character used as a separator between month, date, and year. It must not be one of
the special processing operators.
Comments: Dates are stored internally as integers, which represent the number of days (plus or minus) from the base date of December 31,
1967.
EXAMPLE
30 December 1967 -1
31 December 1967 0
01 January 1968 1
If you do not specify a special processing operator (see later) or an output separator, the default output format is two-digit day, a space, a
three-character month, a space, and a four-digit year. If you specify just an output separator, the date format defaults either to the US numeric
format “mm/dd/yyyy” or to the international numeric format “dd/mm/yyyy” (where / is the separator). You can change the numeric format for
the duration of a logon session with the DATE-FORMAT Command.
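EXAMPLE
D2/
Using the internal values above, this would output internal day 0 as 12/31/67 (or 31/12/67 in the international format): a two-digit year
with / as the separator.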
COMMAND SYNTAX
D1;fmcd {;fmcd}...
D2;fmcc
SYNTAX ELEMENTS
fmcd is the field number (FMC) of an associated dependent field.
fmcc is the field number (FMC) of the controlling field.
Comments: You can logically group multivalued fields in a record by using a controlling multivalued field and associating other fields with it.
For example, you could group the component parts of an assembly on an invoice.
The D1 code in field 8 defines the controlling field and nominates the associated dependent fields. Each dependent field will have a D2 code in
field 8.
Important: The D1 and D2 codes must be in field 8 of the data definition record and be the first code specified; other codes can follow (sep-
arated by a value mark), but it must be the first code.
Outputs the values in the dependent associative fields in the order specified in field 8 of the controlling field; the order of the dependent
fields in the output specification clause is irrelevant.
EXAMPLE
LIST CUSTOMER “ABC” CUS.ID QTY PRICE
The records in data file CUSTOMER have three associated, multivalued fields, named CUS.ID, QTY and PRICE, and numbered seven, two and
five respectively.
CUS.ID is the controlling field because, for each multivalue in this field, there will be a corresponding value in the other fields, and also because
CUS.ID should appear first on the report. The data definition record for CUS.ID will have D1;2;5 in field 8.
The data definition records for QTY and PRICE will both have D2;7 in field eight.
The report generated by the Command will look something like this:
BBB 11 4.00
CCC 2 3.30
When executing programs in international mode, jBASE processes all variable contents as UTF-8 encoded sequences. As such, all data must be
held as UTF-8 encoded byte sequences. This means that data imported into an account configured to operate in international mode must be con-
verted from the current code page to UTF-8. Normally, if ALL the data are eight bit bytes in the range 0x00-0x7f (ASCII) then no
conversion is necessary, as these values are effectively already UTF-8 encoded. However, values outside of the 0x00-0x7f range must be con-
verted into UTF-8 proper, such that there can be no ambiguity between character set code page values.
For instance, the character represented by the hex value 0xE0 in the Latin2 code page, (ISO-8859-2), is described as “LATIN SMALL LETTER
R WITH ACUTE”. However the same hex value in the Latin1 code page, (ISO-8859-1), is used to represent the character “LATIN SMALL
LETTER A WITH GRAVE”.
To avoid this clash of code pages the Unicode specification provides unique hex value representations for both of these characters within the
specifications 32-bit value sequence.
EXAMPLE
Unicode value 0x00E0 used to represent LATIN SMALL LETTER A WITH GRAVE
Unicode value 0x0155 used to represent LATIN SMALL LETTER R WITH ACUTE
NOTE: UTF-8 is an encoding of 32 bit Unicode values, which also has special properties (as described earlier) that can be used effect-
ively on Unix and Windows platforms.
Another good reason for complete conversion from the original code page to UTF-8 is that doing so also removes the requirement for con-
versions when reading/writing to files, as this would add massive and unnecessary overhead to ALL application processing, whereas the con-
version from original code page to UTF-8 is a one off cost.
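For the one-off conversion itself, on Unix systems the standard iconv utility is one common approach; a sketch with hypothetical file names,
for data exported in the Latin2 code page:
iconv -f ISO-8859-2 -t UTF-8 exported.latin2 > exported.utf8
The converted file can then be loaded into the account configured for international mode.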
Data definition records (sometimes known as field definition records) define the characteristics of each field in a data file. They specify the out-
put format and the type of processing required to generate each column of a jQL report.
Although normally used to define a single physical field in a file, data definition records can also be used for more complex operations.
EXAMPLE
To “join” or derive data from other fields or files
To format their output in the most easily understood manner (to convert numeric 0 and 1 flags to “Yes” or “No”, for Example, or to output text
like “Overdue” if one date field is older than another).
The data definition records are usually located in the dictionary of the data file (but not always - see the USING Clause and the Default Output
Specification topics). You can set up any number of data definition records. Often, there are several definitions for each field, each one used by
a different set of reports which have different output requirements.
You associate the data definition record with a particular field in the data file by specifying the target field’s FMC (field-mark count) in field 2 of
the data definition record. The FMC refers to (points to) the field number (also known as the line number) of the data within the records of the
data file.
When you issue a jQL Command that does not contain specific references to data definition records, and you do not suppress the output of the
report detail, the system will attempt to locate any default data definition records which may be set up.
For Example: if you issue the Command “LIST SALES”, the system will look in the dictionary of the SALES file for a data definition record
named “1”. If it finds “1”, this will become the default output for column two. The system will then look for a data definition record named “2”,
and so on until the next data definition record is not found. If “1” is not found in the file dictionary, the system will search the default dictionaries
for the same sequence of data definition records.
When you issue a jQL Command, which does not contain specific references to data definition records, the system will first attempt to locate
each data definition record in the dictionary of the file (or in the file specified in a USING clause). If no data definition is found in the dictionary
(or the file specified in a USING clause), the system will look for the data definition in the file defined by the JEDIFILENAME_
MD environment variable.
For Example: if you issue the Command “LIST SALES VALUE”, the system will look in the dictionary of the SALES file for a data definition
record named “VALUE”. If it cannot find “VALUE” in the file dictionary, the system will look in the file specified by the JEDIFILENAME_MD
environment variable. In this way, you can set up data-specific, file-specific or account-specific defaults for use with any jQL Command.
Field|Description
1. D/CODE|Defines the record as a data definition record. Must be one of the following codes:
A The standard data definition type.
S Obsolete but still supported. Was like the A type, but suppressed default column headings when field 3 was
blank. Replaced by the A type with a backslash in field 3 to defeat heading.
X Forces the definition to be ignored if selected as part of a default set of data definitions. Use only when expli-
citly named. See Default Output Specification later.
2. FMC (field-mark count)|A field number or special FMC (see Special Field-mark Counts for more details). A field number refers to the cor-
responding field (or line) in a record.
3. Column heading|The text to be used as the column heading when the field is output in a report. A backslash (\) suppresses the heading.
4 - 6|Not used.
7. Input/Output conversion codes|Used for processing the data after sort and selection but before output. See Conversion Codes. Multiple con-
version codes, separated by value marks, will be processed from left to right.
8. Pre-process conversion codes|Used for processing the data before sort and selection and before field 7 codes. See Conversion Codes later.
Multiple conversion codes, separated by value marks, will be processed from left to right.
9. Format|Specifies the layout of the data within the column. Can be any of the following:
L Left justified. If the data exceeds the column width specified in field 10, the data is broken at the column
width without respect to blank spaces.
R Right justified. If the data exceeds the column width specified in field 10, it truncates the data.
T Text (left justified). Word wrap - like L but, where possible, lines will be broken at the blank space
between words.
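Putting these fields together, a minimal data definition record might look like this (a hypothetical AMOUNT field held in field 3 of the data
record, output right justified in 10 columns through an MD2 mask):
AMOUNT
001 A
002 3
003 Amount
007 MD2
009 R
010 10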
Setting field 2 of the data definition record to 0 (zero) causes the system to work with the record key. In this way, you could set up a data defin-
ition record which would allow the record keys to be output in a column other than the first, and to use any column heading.
Typically, you would also use the ID-SUPP modifier or the “I” Command option to suppress output of the record key in the first column.
Setting field 2 of the data definition record to 9998 causes the system to return a record (or line) count equal to the number of records output
so far in the report.
You could also use function operators within an A or F conversion code in field 7 or 8 of the data definition record to achieve the same result.
Function code operand NI yields the same value as an FMC of 9998.
Setting field 2 of the data definition record to 9999 causes the system to return the record size in bytes. The size does not include the key but
does include all field marks within the record.
You could also use function operators within an A or F conversion code in field 7 or 8 of the data definition record to achieve the same result.
Function code operand NL yields the same value as an FMC of 9999.
You can therefore set up a series of data definition records, which the system will use if a jQL Command sentence does not include any explicit
output field Ids.
You must name these “default” records in a numeric sequence starting at 1 (1, 2, 3, and so on). The fields which these records define will be
output in the same sequence as the keys, but they do not need to follow the same sequence as the fields in the file.
When a jQL Command sentence with no explicit output fields is issued, the system first looks in the dictionary for a data definition record
named 1, then for a record named 2, then 3, and so on until it fails to find a record with the next number. It will use a record if it has a
D/CODE of A; it ignores a record with a D/CODE of X, but such a record does not break the sequence.
A record with a D/CODE of X is skipped only if it was found as the result of a search for defaults; under normal circumstances it can be used
in the same way as any other data definition record.
This means that when you first set up a series of “default” data definition records, you should put an A in the D/CODE field of each. If you sub-
sequently need to remove one from the sequence, you can simply change the D/CODE field to an X. This way you do not break the sequence or
have to copy the remaining “default” records to new names in order to fill the gap.
You can still use a data definition record with a number for a key in the same way as any other data definition record.
The predefined data definition records are named *A0 to *Annn. The numeric portion of the key corresponds to the position of the field they
report on and the column heading will be the same as the DDR name.
Deletes selected records from a file according to record list or selection criteria clauses.
COMMAND SYNTAX
EDELETE file-specifier [record-list | selection-criteria]
Comments: EDELETE requires an implicit or explicit record list, or selection criteria. Preceding the Command with a SELECT, GET-LIST or
other list-providing Command can provide an implicit list. EDELETE will immediately delete the specified records. To clear all the records in a
file, use the CLEAR-FILE Command.
EXAMPLES
EDELETE ORDER “ABC” “DEF”
2 Records deleted
Delete the records ABC and DEF based on the explicit list of records.
EDELETE ORDER WITH ORD.AMT LT “500”
n Records deleted
Deletes all records in the ORDER file in which the ORD.AMT field is less than 500.
SELECT ORDER WITH ORD.AMT = “500”
n Records selected
EDELETE ORDER
n Records deleted
Selects all records in the ORDER file in which the ORD.AMT field = 500, and deletes them.
Generates an implicit list of records in a file if they contain (or do not contain) one or more occurrences of specified character strings
COMMAND SYNTAX
ESEARCH file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier} {(options}
SYNTAX ELEMENTS
Options can be one or more of the following:
Option Description
A ANDs prompted strings together. Records must contain all specified strings
L Saves the field numbers in which it found the specified strings. The resulting list contains the record
keys followed by multivalued line numbers. Ignores the A and N options if either or both are specified.
N Selects only those records that do not contain the specified string(s).
S Suppresses the list but displays the record keys that would have been selected.
String: Enter the required character string and press <ENTER>. This prompt is repeated until only <ENTER> is pressed. You can enter unlim-
ited characters.
Do not enter double quotes unless they are part of the string to search.
Comments: When the Command terminates (unless the “S” option is used), it displays the total number of entries in the generated list. The list
is then available as if generated by a SELECT, GET-LIST or other list-providing Command. If you do not specify a sort criteria clause, the
record list will be unsorted.
EXAMPLE
ESEARCH ORDER (I
STRING: ABC
STRING: DEF
KEY1
KEY2
18 Records selected
>
Generates a list of all records in the ORDER file, which contain the strings ABC or DEF
EXPLAIN displays information about how the statement will be processed. This lets you decide whether to rewrite the query more effi-
ciently.
EXPLAIN lists the files, indexes, sorts, etc. included in the command, along with access timings, in a report format, making it easier to
understand what is going on.
COMMAND SYNTAX
EXPLAIN <<Command>>
e.g.
EXPLAIN LIST JCUSTOMERS FIRSTNAME LASTNAME BY LASTNAME
EXPLAIN SSELECT JCUSTOMERS SAVING FIRSTNAME LASTNAME
The aim of the report is to provide a simple representation of what has just happened, along with some information on how long things have
taken and what objects have been used.
This should allow the user to alter their commands to make them more efficient.
Internally, each operation is grouped and given a description to try to make what the statement does a little easier to understand. It should
give you a basic understanding of why something is taking so long and where an index may help.
l File Details of the file used in your query, file type and access methods used.
l Indexing Details on what indexes have been used, and any limiting statements that would affect them.
l Sorting Details of what columns are used for sorting and information on what attributes associated with the column will
affect the output
l Aggregate List of things that the break processor will process. These can be operations like totals, BREAK-ON, etc.
Sort Processor
Total Taken is split into total time for all records, followed by minimum and maximum access times in mil-
liseconds.
A good example for access timings is that we can use multiple methods to access data,
1) Simple dictionary
001: D
002: 1
3) ITYPE Dictionary
001: I
002: THEFIRSTDICTIONARY;@1
4) EVAL Statement
EVAL “A+B”
5) Subroutine call
001: I
002: SUBR(XXXX)
All of the above would take a different amount of time to return a result; the timings are not necessarily how long
it took to read the file, but how long it took to get a single row using the current statement.
So a combination of index access timings, full file source, etc. are used to give an overview of what all of the com-
ponents used in the current statement have done.
An explicit item-id list lists items for processing, with each item-id enclosed in single or double quotes. Spaces between item-ids are optional. An
item-id list cannot include a relational operator and ignores any included logical connectives.
jQL treats the values you place between quotes as item-ids, not as value strings. This means that the left ignore, right ignore and wild card
characters are treated as ordinary characters and not as special characters.
SYNTAX
‘item-id’ {‘item-id’}.. .
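EXAMPLE
LIST ORDER ‘100’ ‘200’ ‘300’
Lists only the records with keys 100, 200 and 300 (hypothetical keys) in the ORDER file.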
MCAB{S} Convert ASCII character codes to binary representation. Use S to suppress spaces.
MC/B Extract only special characters that are neither alphabetic nor numeric.
MCDR Convert a decimal value to its equivalent Roman numerals. Input conversion is effective.
MCDX or Convert a decimal value to its hexadecimal equivalent. Input conversion is effective.
MCD
MCNP{c} Convert paired hexadecimal digits preceded by a period or character c to ASCII code.
MCP{c} Convert each non-printable character (X”00” - X”1F”, X”80” - X”FE”) to a period (.) or to character c.
MCPN{c} Same as MCP but insert the two-character hexadecimal representation of the character immediately
after the period or character c.
MCRD or Convert Roman numerals to the decimal equivalent. Input conversion is effective.
MCR
MCT Convert all upper case letters (A-Z) in the text to lower case, starting with the second character in
each word. Change the first character of each word to upper case if it is a letter.
MCU Convert all lower case letters (a-z) to upper case.
MCXB{S} Convert a hexadecimal value to its binary equivalent. Use S to suppress spaces between each block of
8 bytes.
MCXD or Convert a hexadecimal value to its decimal equivalent. Input conversion is effective.
MCX
EXAMPLE 1
MCA
EXAMPLE 2
MC/A
EXAMPLE 3
MCB
EXAMPLE 4
MC/B
EXAMPLE 5
MCN
EXAMPLE 6
MC/N
F codes provide many facilities for arithmetic, relational, logical, and concatenation operations. The expression of all operations is in Reverse
Polish notation and involves the use of a “stack” to manipulate the data.
SYNTAX SUMMARY
There are three forms of the F code:
F Uses only the integer parts of stored numbers unless a scaling factor is included. If the JBCEMULATE environment variable is set to
“ROS” the operands for “-”, “/” and concatenate are used in the reverse order.
FS Uses only the integer parts of stored numbers (use SMA standard stack operations for all emulations)
FE Uses both the integer and fraction parts of stored numbers.
COMMAND SYNTAX
F{n};elem {;elem}...
FS;elem {;elem}...
FE;elem {;elem}...
SYNTAX ELEMENTS
n A number from 1 to 9 used to convert a stored value to a scaled integer. The stored value's explicit or implied decimal point is moved n digits to the right, with zeros added if necessary. Returns only the integer portion of this operation.
Comments: F codes use the Reverse Polish notation system. Reverse Polish is a postfix notation system where the operator follows the oper-
ands. The expression for adding two Elements is “a b + “. (The usual algebraic system is an infix notation where the operator is placed between
the operands, for Example, “a + b”).
The F code has operators to push operands on the stack. Other operators perform arithmetic, relational, and logical operations on stack Ele-
ments. There are also concatenation and string operators.
Operands pushed on the stack may be constants, field values, system parameters (such as date and time), or counters (such as record coun-
ters).
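As a simple illustration (the field numbers are arbitrary), a dictionary correlative of:
F;2;3;+
pushes the value of field 2 and then the value of field 3 onto the stack; the + operator then replaces the two entries with their sum, the Reverse Polish equivalent of field 2 + field 3.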
By default, formatting and conversion are defined by a field's DICTionary entry. If the behaviour defined in the DICTionary needs to be overridden for any reason, there are a number of different qualifiers that may be used in jQL statements.
SYNTAX
MULTIVALUED
SINGLEVALUED
EXAMPLE
>LIST HAT.TYPE HAT.SIZE DISPLAY.LIKE PRICE COL.HDG “Hat sizes available”
Trilby 10
Top Hat 13
As described below, the file modifiers DICT, ONLY, WITHIN and TAPE modify the use of the file and how it is accessed.
SYNTAX ELEMENTS
{DICT} {ONLY} {WITHIN} {TAPE} filename{,data-section-name}
DICT Specifies the dictionary section of the file, which contains the data definitions used for referencing. You must type the modifier DICT before the filename. When a filename is modified by DICT, the processor looks in the MD for attribute and macro definition items.
ONLY Specifies that only item-ids are to be output, and suppresses any default output contents. You can type the modifier ONLY before filename or following all clauses that contain attribute names.
WITHIN Specifies a sublist, such as bill-of-material items. Use WITHIN only with the LIST and COUNT verbs; it must precede filename. Specify one item-id only; if you enter more than one item-id, an error message is displayed.
TAPE Tells the processor to retrieve data from a magnetic tape, which must have been written in T-DUMP format. This modifier cannot be used with the sorting verbs, such as SORT and ST-DUMP, nor with tape output verbs, such as T-DUMP, nor with the updating verb EDELETE.
data-section-name Specifies a data section other than the data section called filename. It must follow filename, separated by a comma with no spaces.
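For example (the CUSTOMER file is illustrative), to report on the dictionary section of a file rather than its data section:
LIST DICT CUSTOMER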
The FMT connective allows the query to override the formatting used to display the corresponding data with a different format mask.
EXAMPLE
LIST CUSTOMER *A1
CUST..... *A1...
1 FRED B
2 TOM JO
1 FRED BLOGGS
2 TOM JONES
You can format the result of any "A" code operation by following the expression with a value mark, and then the required format code:
An;expression]format
Format codes can also be included within the expression. For more information, see Format codes.
Format codes can be applied in two ways: one form processes the top entry on the stack; another transforms the content of a field before it is pushed on the stack.
COMMAND SYNTAX
f-code {]format-code...}
field-number (format-code {]format-code}...)
(format-code {]format-code}...)
SYNTAX ELEMENTS
F code A complete F Code expression.
Field number The field number in the record from which to retrieve the data.
] Represents a value mark (ctrl ]), which must be used to separate each format code.
Comments: To process a field before it is pushed on the stack, follow the FMC with the format codes enclosed in parentheses. To process the
top entry on the stack, specify the format codes within parentheses as an operation by itself. To specify more than one format code in one oper-
ation, separate the codes with the value mark, (ctrl]). All format codes will convert values from an internal format to an output format.
EXAMPLE
F;2(MD2]G0.1);100;-
Obtain the value of field 2. Apply an MD2 format code. Then apply a group extract to acquire the integer portion of the formatted value, and push the result onto the stack. Subtract 100 from the formatted, group-extracted value of field 2 and return this value. Note that under ROS emulation, the value returned would instead be the result of subtracting the formatted, group-extracted value from 100.
The following sentence lists ORDER information with numbers that are both greater than or equal to 200 and less than 700:
The following sentence displays information about orders with numbers less than 200 and with available dates after May 17 2002.
The following sentence displays CUSTOMER information with numbers less than 500 and greater than 199, and with the CUSTOMER ADDRESS. The second AND arises because the sentence includes both item selection and data selection criteria: these operations are performed one after the other, giving an effective AND function. The OR between “ST” and “D” is implicit.
The following sentence lists rooms with numbers less than 200 or greater than 399.
G codes extract one or more contiguous strings (separated by a specified character), from a field value.
COMMAND SYNTAX
G{m}xn
SYNTAX ELEMENTS
m the number of strings to skip. If omitted or zero, extraction begins with the first string.
x the single character that separates the strings. Can be any non-numeric character, except a system delimiter.
n the number of contiguous strings to extract.
Comments: The field value can consist of any number of strings, each separated by the specified character. The separator can be any non-
numeric character, except a system delimiter.
If m is zero or null and the separator x is not found, the whole field will be returned. If m is not zero or null and the separator x is not found,
null will be returned.
Input Conversion: does not invert. It simply applies the group extraction to the input data.
EXAMPLE 1
G0.1
If the field contains “123.45”, 123 will be returned. You could also use “G.1” to achieve the same effect.
EXAMPLE 2
G2/1
Skips the first two strings delimited by “/” and returns the third.
EXAMPLE 3
G0,3
If the field contains “ABC,DEF,GHI,JKL”, returns ABC,DEF,GHI. Note that the field separators are included in the returned string.
Specifies the text to replace the default asterisks in the cumulative total line at the end of the report; CAPTION is a synonym for GRAND-
TOTAL.
L Line: suppresses the blank line preceding the GRAND-TOTAL line. Overrides the U option if both are specified.
U Underline: places underlines on the line above the accumulated totals. Ignored if used with the ‘L’
option.
LPTR Specifies that a report go to the printer queue (spooler) instead of displaying at the terminal. You could
use the ‘P’ option at the end of the sentence in place of this modifier.
Comments: Enter heading or footing options, which output their values in the order in which they appear.
Spaces are not normally required between option codes in the text. However, if options that represent values, such as page numbers or dates, are entered without spaces, the values will run together. For example, ’”PD”’ will print on the first page as:
111/11/00
In this case, enter the options with a space between them, like this: “‘P’ ‘D’”.
EXAMPLE
SORT ORDER BY ORD.ID BREAK-ON ORD.ID “‘BL’” TOTAL ORD.QTY GRAND-TOTAL “Total” HEADING “ORD.QTY : ‘B’ ‘DL’” FOOTING “PAGE ‘CPP’” LPTR
Control break on a change in ORD.ID and suppress the line feed before the break. Reserve the break value for use in the heading (‘B’). Maintain a running total of the ORD.QTY field and output it at each control break. Put the word Total on the GRAND-TOTAL line.
Set up a heading for each page, which comprises the words ‘ORD.QTY :’, the ORDER code (from the break), a date and a line feed. Set up a footing, which contains the text ‘PAGE’ and a page number, centered on the line.
Displays the entire contents of items in a file, including the system delimiters
COMMAND SYNTAX
I-DUMP file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier} {(options}
Attribute mark ^
Value mark ]
EXAMPLE 1
I-DUMP CUSTOMER WITH CUS.CITY = “BEAVERTON”
13 Records Listed
6^^^
EXAMPLE
jsh machinename ~ -->S-DUMP CUSTOMER BY CUS.ADDR WITH CUS.NAME "A..."
958^^
8^^^30058493^
^^^
The jBASE jQL processor supports I-TYPES as imported from PRIME or Universe.
The jBASE query language, jQL, has been enhanced to support D and I type attribute definition records.
Formats
I-TYPE                  D-TYPE
001 I                   001 D
002 Expression          002 AttributeNo
003 Conversion          003 Conversion
004 Header              004 Header
005 Format              005 Format
006 - 016 Reserved      006 - 016 Reserved
Expression
This can be one or more of the following types, for example: Expression ; Expression
Expressions can be parenthesized and can contain numeric constants, string literals enclosed in single or double quotes, and extended operators such as EQ, NE, LE, GT, CAT, AND, OR, MATCHES.
When an I-TYPE is used for the first time in a query (i.e. a jQL command), the expression attribute is “compiled” to produce internal op codes and parameter definitions. This mechanism provides greater efficiency at run time. However, to ensure that all I-TYPE definitions are compiled in advance, rather than on an ad hoc basis, a utility, ICOMP, has been provided.
Called as:
ICOMP {DICT} FileName {RecordList | * }
Where:
FileName is the name of the file whose I-TYPE definitions are to be compiled.
RecordList is an optional list of definition record keys; * specifies all records in the dictionary.
NOTE: ICOMP will always attempt to convert the dictionary section of a file. If RecordList is omitted, it compiles all I-TYPE definitions. ICOMP will also respect a preceding SELECT list.
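For example (the CUSTOMER file name is illustrative), to compile every I-TYPE definition in a file's dictionary in advance:
ICOMP CUSTOMER *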
COMMAND SYNTAX
IF expression THEN statement ELSE statement
SYNTAX ELEMENTS
expression must evaluate to true or false. If true, executes the THEN statement. If false, executes the ELSE statement.
Comments: Each IF statement must have a THEN clause and a corresponding ELSE clause. You can nest statements but the result of the state-
ment must evaluate to a single value. The words IF, THEN and ELSE must be followed by at least one space.
EXAMPLE 1
A;IF N(QTY) < 100 THEN N(QTY) ELSE ERROR!
Tests the QTY value to see if it is less than 100. If it is, output the QTY field. Otherwise, output the text “ERROR!”.
EXAMPLE 2
A;IF N(QTY) < 100 AND N(COST) < 1000 THEN N(QTY) ELSE ERROR!
Same as Example 1 except that QTY will only be output if it is less than 100 and the cost value is less than 1000.
EXAMPLE 3
A;IF 1 THEN IF 2 THEN 3 ELSE 4 ELSE 5
If field 1 is zero or null, follow else and use field 5. Else test field 2; if field 2 is zero or null, follow else and use field 4. Else, use field 3. Use
Field 3 only if both fields 1 and 2 contain a value.
To provide an implicit item-id list, execute a verb such as SELECT or GET-LIST immediately before executing a jQL command. If you also specify item-id selection, the jQL processor effectively ANDs its result with the implicit item-id list to limit further the items selected.
If you specify an explicit item-id list, the processor ignores any implicit list.
EXAMPLE
The following sentences will not list anything because the value strings cannot match any item-id in the implicit list.
The following sentences list information about CUSTOMER 40823 and 40825 because the processor ignores an implicit item-id list when an explicit item-id list is in the sentence.
If a record list of any type is outstanding when processing reaches the selection criteria, only those in the list will be submitted to the selection
process; if there are no record lists outstanding the selection process considers all records in the file.
Each selection criterion specifies a field (data or key) to be tested to determine selection of a record. The selection criterion begins with the connective WITH (or IF) and must also include a field name. The field name can be followed by a value selection clause; otherwise it defaults to NE “” (not equal to null).
SYNTAX
WITH | IF {NOT} {EACH} field {value-selection clause} {{AND | OR} {NOT} {EACH} field {value-selection clause}}...
SYNTAX ELEMENTS
WITH or IF is the selection connective. It must be the first word of a selection criterion. WITH and IF are synonymous. WITHOUT is a syn-
onym for WITH NOT.
The following statements enable jBASIC programmers to deal directly with jQL statements, thereby eliminating the need to parse the output of commands such as EXECUTE.
NOTE: Properties are valid after the compile; this is the main reason for separating the compile and execute into two functions. After compiling, it is possible to examine the properties and set properties before executing.
jBASE jQL enables users to call Basic subroutines from within correlatives and conversions. There are two flavors of subroutine and each
requires a different include file. For Advanced Pick subroutines, the developer must include the following header file from the “include” sub-
directory in the jBASE release directory.
qbasiccommonpick
For Sequoia subroutines, the developer must include the following header file from the “include” subdirectory in the jBASE release directory.
qbasiccommonseq
For dates and times, simple date format functions have been applied to use the configured locale to support the standard conversions D and MTS. Formatting numbers via MR/ML/MD uses the locale for the thousands separator, decimal point and currency notation.
The first R specifies that any non-existent multivalues should use the previous non-null multivalue. When the second R is specified, any non-
existent subvalues should use the previous non-null subvalue.
N(field-name){R{R}}
“literal”
NB Returns the current break level counter. 1 is the lowest break level, 255 is the GRAND TOTAL line.
ND Returns the number of records (detail lines) since the last control break.
string[start-char-no, len] Returns the substring starting at character start-char-no for length len.
COMMAND SYNTAX
field-number{R{R}}
SYNTAX ELEMENTS
field-number the number of the field (FMC) which contains the required value.
R specifies that the value obtained from this field be applied for each multivalue not present in a cor-
responding part of the calculation.
RR Specifies that the value obtained from this field be applied for each subvalue not present in a cor-
responding part of the calculation.
0 Record key
EXAMPLE 1
A;2
Returns the value of field 2.
EXAMPLE 2
A;9999
EXAMPLE 3
A;2 + 3R
For each multivalue in field 2, the system also obtains the (first) value in field 3 and adds it. If field 2 contains 1]7 and field 3 contains 5, the result would be two values of 6 and 12 respectively. Where field 3 does not have a corresponding multivalue, the last non-null multivalue in field 3 is used.
EXAMPLE 4
A;2 + 3RR
For each subvalue in field 2, the system also obtains the corresponding value in field 3 and adds it, with the last non-null subvalue repeated where field 3 has no corresponding subvalue.
COMMAND SYNTAX
N(field-name){R{R}}
SYNTAX ELEMENTS
field-name is the name of another field defined in the same dictionary or found in the list of default dictionaries
R Specifies that the value obtained from this field be applied for each multivalue not present in a corresponding part of the
calculation.
RR Specifies that the value obtained from this field be applied for each subvalue not present in a corresponding part of the cal-
culation.
Comments: If the data definition record of the specified field contains pre-process conversion codes in field 8, these are applied before the value(s) are returned.
Any pre-process conversion codes in the specified field-name, including any further N(field-name) constructs, are processed as part of the conversion code.
N(field-name) constructs can be nested up to 30 levels. The number of levels is restricted to prevent infinite processing loops. For Example:
TEST1
008 A;N(TEST2)
TEST2
008 A;N(TEST1)
EXAMPLE 1
A;N(S.CODE)
Returns the value of the field defined by the data definition record S.CODE.
EXAMPLE 2
A;N(A.VALUE) + N(B.VALUE)R
For each multivalue in the field defined by A.VALUE, the system also obtains the corresponding value in B.VALUE and adds it. If A.VALUE
returns 1]7 and B.VALUE returns 5, the result would be two values of 6 and 12 respectively.
EXAMPLE 3
A;N(A.VALUE) + N(B.VALUE)RR
For each subvalue in the field defined by A.VALUE, the system also obtains the corresponding value in B.VALUE and adds it. If A.VALUE
returns 1\2\3]7 and B.VALUE returns 5 the result would be four values of 6, 7, 8 and 12 respectively.
COMMAND SYNTAX
"literal"
SYNTAX ELEMENTS
literal is a text string or a numeric constant.
NOTES
Assumes a number not enclosed in double quotes to be a field number (FMC).
EXAMPLE 1
A;N(S.CODE) + "100"
Adds the constant 100 to the value returned by S.CODE.
EXAMPLE 2
A;N(S.CODE):"SUFFIX"
Concatenates the literal string SUFFIX to the value returned by S.CODE.
EXAMPLE
AE;I(N(COST) * N(QTY))
By default, displays output from a jQL Command on your terminal, in columnar format, with a pause at the end of each page (Full screen).
OUTPUT DEVICE
You can redirect the output to a printer (or the currently-assigned Spooler device) by using the LPTR format specifier or the P option.
REPORT LAYOUT
If the columnar report will not fit in the current page width of the output device, it will be output in “non-columnar” format where each field of
each record occupies one row on the page.
PAGING
If the displayed report extends over more than one screen, press <ENTER> to view the next screen. To exit the report without displaying any
remaining screens, press <Control X> or “q”
Verb Description
BSELECT Retrieves selected records and generates a list composed of data fields from those records as specified by any explicit or
default output specifications. Each subvalue within a field becomes a separate entry within the list.
LIST-LABEL Displays records in a format suitable for mailing labels and other block listings
SEARCH Creates a select list of records that contain an occurrence of one or more specified strings
SORT-LABEL Displays items in a format suitable for mailing labels and other block listings
SREFORMAT Redirects jQL output to a file or to a tape with records sorted by sort expression
SSELECT Creates a sorted list of records that meet specified selection criteria
SUM Adds numeric values in fields of records that meet specified selection criteria
COMMAND SYNTAX
JQLCOMPILE(Statement, Command, Options, Messages)
SYNTAX ELEMENTS
Statement is the variable which will receive the compiled statement (if it compiles); most other functions use this to execute the statement and work on the result set.
Command is the actual jQL query that you want to compile (such as a SELECT or similar). Use RETRIEVE as the verb, rather than an existing jQL verb, to obtain fetchable data records; this ensures that the right options are set internally. In addition, any word that is not a jQL reserved word can be used as the verb and will work in the same way as RETRIEVE: you could implement a PLOT command by passing the entire command line into JQLCOMPILE, and the results would be the same as if the first word were replaced with RETRIEVE.
Options: To supply a select list to the JQLEXECUTE function, specify JQLOPT_USE_SELECT; the compile builds a different execution plan if select lists are used.
Messages: If the statement fails to compile, this dynamic array is in the STOP format, and therefore you can program and print STOP messages, which provides a very useful history of compilation for troubleshooting purposes. The function returns -1 if a problem was found in the statement and zero if there was not.
COMMAND SYNTAX
JQLEXECUTE(Statement, SelectVar)
SYNTAX ELEMENTS
Statement is the valid result of a call to JQLCOMPILE(Statement, …).
SelectVar is a valid select list that is used to limit the statement to a predefined set of items. For example:
1 Item Selected
PROGRAMMERS... NAME
0123 COOPER, F B
This function returns -1 in the event of a problem, such as the statement variable not being correct. It causes the statement to run against the database and produces a result set for use with JQLFETCH().
COMMAND SYNTAX
JQLFETCH(Statement, ControlVar, DataVar)
SYNTAX ELEMENTS
Statement is the result of a valid call to JQLCOMPILE(), followed by a valid call to JQLEXECUTE().
ControlVar will receive the ‘control break’ elements of any query. For example, if there are BREAK values in the statement, and you want the
totals, they will be described here.
The format of ControlVar is:
1 - 255 for the control breaks, the same as the A correlative NB.
DataVar will receive the actual data sent to the screen on a LIST statement for instance. The format is one attribute per column.
If the property STMT_PROPERTY_FORMAT is set, then each attribute is also formatted according to the width and justification of the attribute definition and any override caused by the use of FMT or DISPLAY.LIKE on the command line.
NOTE: Column headers may also affect the formatting for that column.
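As an illustration, a minimal jBC sketch of the compile/execute/fetch cycle might look like the following. The RETRIEVE verb usage follows the description above, but the CUSTOMER file and CUS.CITY field are assumptions for the example, the Options argument is passed as 0 (no select list), and the loop condition assumes JQLFETCH returns a positive value while rows remain; check the return conventions for your release.
INCLUDE JQLINTERFACE.h
* Compile; on failure, Messages holds STOP-format diagnostics
Cmd = 'RETRIEVE CUSTOMER WITH CUS.CITY EQ "BEAVERTON"'
IF JQLCOMPILE(Statement, Cmd, 0, Messages) < 0 THEN
   CRT "Compile failed"
   STOP
END
* Execute with no pre-existing select list
SelectVar = ''
IF JQLEXECUTE(Statement, SelectVar) < 0 THEN STOP
* Fetch each result row; DataVar holds one attribute per column
LOOP WHILE JQLFETCH(Statement, ControlVar, DataVar) DO
   CRT DataVar
REPEAT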
COMMAND SYNTAX
JQLGETPROPERTY(PropertyValue, Statement, Column, PropertyName)
SYNTAX ELEMENTS
Option Description
PropertyValue Receives the requested property value from the system or “” if the property is not set
Column Specifies that you want the value of the property for a specific column (otherwise 0 for the whole
statement).
PropertyName These are EQUATED values defined by INCLUDE’ing the file JQLINTERFACE.h.
This function returns -1 if there is a problem with the parameters or the programmer. The use of
these properties is to answer questions such as “Was LPTR mode asked for,” and “How many
columns are there?”
Note: Properties are valid after the compile; this is the main reason for separating the compile and execute into two functions. After compiling, it is possible to examine the properties and set properties before executing.
COMMAND SYNTAX
JQLPUTPROPERTY(PropertyValue, Statement, Column, PropertyName)
SYNTAX ELEMENTS
PropertyValue is the value to which you want to set the specified property, such as 1 or “BLAH”.
Column Holds 0 for a general property of the statement, or a column number if it is something that can be set for a specific column.
PropertyName – These are EQUATED values defined by INCLUDE’ing the file JQLINTERFACE.h. There are lots of these and someone is going
to have to document each one.
This function returns -1 if a problem was found in the statement and 0 if there was not.
NOTE: Properties are valid after the compile; this is the main reason for separating the compile and execute into two functions. After compiling, it is possible to examine the properties and set properties before executing.
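For instance, the STMT_PROPERTY_FORMAT property mentioned above could be set and read back like this (a sketch; the equated names come from JQLINTERFACE.h and the Statement variable is assumed to hold a compiled statement):
* Ask JQLFETCH to format each attribute per the dictionary
rc = JQLPUTPROPERTY(1, Statement, 0, STMT_PROPERTY_FORMAT)
* Read the property back; PropertyValue receives "" if unset
rc = JQLGETPROPERTY(PropertyValue, Statement, 0, STMT_PROPERTY_FORMAT)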
L codes return the length of a value, or the value if it is within specified criteria.
COMMAND SYNTAX
L{{min,}max}
SYNTAX ELEMENTS
min Specifies that the process is to return an element if its length is greater than or equal to the number min.
max Specifies that the process is to return an element if its length is less than or equal to the number max.
Comments: The L code by itself returns the length of an element. When used with max or min and max the L code returns the element if it is
within the length specified by min and/or max.
EXAMPLE 1
L - Assuming a value of ABCDEF, returns the value 6.
EXAMPLE 2
L4
If JBCEMULATE is set to ROS, L4 is translated as return the value if its length is less than or equal to 4 - the equivalent of L0,4. Assuming a
value of ABCDEF, L4 will return null - the value is longer than 4 characters.
If JBCEMULATE is not set to ROS, L4 is translated as return the value if its length is exactly equal to 4 - the equivalent of L4,4. Assuming a
value of ABCDEF, L4 will return null - the value is longer than 4 characters.
EXAMPLE 3
L4,7
L4,7 is translated as return the value if its length is greater than or equal to 4 and less than or equal to 7. Assuming a value of ABCDEF, L4,7
will return ABCDEF.
The following sentence lists information about all the CUSTOMER code numbers ending in 00.
The following sentence does not list any rooms because there is no relational operator; the value [23 is treated as an item-id.
COMMAND SYNTAX
LIST-LABEL file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier}{output-specification} {format-specification}
{(options}
PROMPTS
At the prompt, supply formatting criteria as follows:
COL The number of columns required to list the data across the page.
ROW Number of lines for each record. Each element of the output specification appears on a separate line; if more elements exist in the output specification than rows specified, the extra elements are ignored. If you specify more rows than elements, the output specification for these rows will be blank.
SKIP Number of blank lines between each record.
INDENT Number of spaces for the left margin.
SIZE Number of spaces required for the data under each column.
SPACE Number of horizontal spaces to skip between columns.
C Optional. Suppresses null or missing data; if absent, null or missing values are output as blanks. If present, the C must be upper case and not in quotes.
Comments: The total number of columns specified must not exceed the page width.
ROW must be a minimum of one for each field, plus one for the record key (if not suppressed). If the record keys are not suppressed, the first
row of each label will contain the record key.
If INDENT is not zero, at the prompt supply a series of HEADERs that will appear in the left margin for each field. If a heading is not required
for a particular line, press <ENTER>.
If COL-HDR-SUPP or HDR-SUPP, or the C or H options, are specified, the page number, date, and time are not output and the report is generated without page breaks. You must specify a sort criteria clause to sort the records.
EXAMPLE
LIST-LABEL ORDER ORD.ID ORD.CUS.REF ID-SUPP (C
COL,ROW,SKIP,INDENT,SIZE,SPACE(,C): 2,5,2,0,25,4,C
Customer Ref
Customer Ref
Customer Ref
COMMAND SYNTAX
LIST file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier} {output-specification} {format-specification}
{(options}
Comments: If no output specification clause is provided, the system searches for default data definition records (named 1, 2 and so on) in the file dictionary and then in the file specified in the JEDIFILENAME_MD environment variable. If no default data definition records are found, it lists only the record keys. You must specify a sort criteria clause to sort the records.
EXAMPLE 1
LIST ORDER
List all the records in the ORDER file and use the default data definition records (if found) to format the output.
EXAMPLE 2
LIST ORDER “ABC” “DEF” “GHI”
List the records from the ORDER file with key values of ABC, DEF or GHI. Use the default data definition records (if found) to format the out-
put.
EXAMPLE 3
GET-LIST ORDER.Q4
Get the previously saved list called ORDER.Q4 and, using the list, report on the records in the ORDER file which have a key greater than DEF.
Use the default data definition records (if found) to format the output.
EXAMPLE 4
LIST ORDER WITH ORD.ID = “ABC]” OR “[DEF”
List the records in the ORDER file in which the ORD.ID field contains values which start with ABC or end with DEF. Use the default data
definition records (if found) to format the output.
EXAMPLE 5
LIST ORDER WITH NO ORD.ID = “ABC]” OR “[DEF” (P
List the records in the ORDER file in which the ORD.ID field does not contain values which start with ABC or end with DEF. Output the
report to the printer. Use the default data definition records (if found) to format the output.
Sort the ORDER file by ORD.AMT. Output the ORD.AMT, ORD.ID and ORD.COST fields.
Control break on a change in ORD.AMT and suppress the line feed before the break. Reserve the break value for use in the heading (“B”). Maintain a running total of the ORD.COST field and output it at each control break.
Set up a heading for each page which comprises the words “Sales Code: “, the sales code (from the break), a date and a line feed. Set up a footing, which contains the text “Page” and a page number, centered on the line.
Generates a report of all data definition records in the first MD file found, or the specified file
COMMAND SYNTAX
LISTDICT {file-specifier}
SYNTAX ELEMENTS
file specifier - specifies a dictionary file other than a file named MD in the JEDIFILEPATH.
Comments: If you do not specify a file-name, LISTDICT will work with the first file named MD that it finds in your JEDIFILEPATH.
The logical connective AND or OR joins two relational expressions. The default connective is OR. If two relational expressions are given without a logical operator between them, items satisfying either expression are selected (as if the OR connective had been used).
The connective AND yields a truth-value of true if all the truth values it is combining are true. If any truth-value is false, the result of the AND
connective is false. The OR connective yields a truth value of true if at least one of the truth values it is combining is true.
The logical operators test two expressions for true (1) or false (0) and return a value of true or false. Logical operators are:
The words AND and OR must be followed by at least one space. The AND operator takes precedence over the OR unless you specify a different
order by means of parentheses. OR is the default operation.
Logical operators include a logical AND test and a logical inclusive-OR test.
& ANDs stack entries 1 and 2. If both entries contain non-zero, pushes a 1 onto stack entry 1; otherwise, pushes a 0.
! ORs stack entries 1 and 2. If either of the entries contains non-zero, pushes a 1 onto stack entry 1; otherwise, pushes a 0.
Macros contain predefined or often-used elements of a jQL sentence. They are stored on the system like data definition records and are specified in the command sentence in a similar way. When a command containing one or more macros is submitted for execution, the macro references are expanded and included in the sentence. You can substitute macros for any element of the command sentence except the command itself and the filename.
Macro definition records are searched for in the same way as data definition records. Do not use a jQL keyword as the name of a definition record.
The first field of a macro definition must contain the letter M. The remaining fields are either command elements or comment lines (indicated
by a leading asterisk ‘*’ and a space).
You can nest macros - a macro can refer to another macro - but the resulting command sentence must still follow the same rules as a normal
jQL sentence. When nesting macros, beware of infinite loops where for example, macro A calls macro B that calls macro A that calls macro B.
EXAMPLE
SORT ORDER BY ORD.COST STD.HEADING
STD.HEADING
001 M
004 LPTR
One source of confusion when using MC codes is that input conversion does not always invert the code. If most MC codes are used in field 7 of the data definition record, the code is applied in its original (un-inverted) form to the input data. Therefore, you should always try to place MC codes into field 8 of the data definition record. The exceptions to this, where input conversion is effective, are clearly indicated in the following sections.
SUMMARY
MC codes are:
MCAB{S} Convert ASCII character codes to binary representation. Use S to suppress spaces.
MC/B Extract only special characters that are neither alphabetic nor numeric.
MCDR Convert a decimal value to its equivalent Roman numerals. Input conversion is effective.
MCDX or Convert a decimal value to its hexadecimal equivalent. Input conversion is effective.
MCD
MCNP{c} Convert paired hexadecimal digits preceded by a period or character c to ASCII code.
MCP{c} Convert each non-printable character (X”00” - X”1F”, X”80” - X”FE”) to a period (.) or to character c.
MCPN{c} Same as MCP but insert the two-character hexadecimal representation of the character immediately
after the period or character c.
MCT Convert all upper case letters (A-Z) in the text to lower case, starting with the second character in each
word. Change the first character of each word to upper case if it is a letter.
MCU Convert all lower case letters (a-z) to upper case.
MCXB{S} Convert a hexadecimal value to its binary equivalent. Use S to suppress spaces between each block of 8
bytes.
MCXD or Convert a hexadecimal value to its decimal equivalent. Input conversion is effective.
MCX
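As a quick illustration of the case conversions (the input strings are arbitrary):
OCONV "hELLO wORLD" "MCT"
yields Hello World
OCONV "Hello World" "MCU"
yields HELLO WORLD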
The MD code transforms integers by scaling them and inserting symbols, such as a currency sign, thousands separators, and a decimal point.
The ML and MR codes are similar to MD but have greater functionality.
COMMAND SYNTAX
MDn{m}{Z}{,}{$}{ix}{c}
SYNTAX ELEMENTS
n a number from 0 to 9 that specifies how many digits are to be output after the decimal point; inserts trailing zeros as necessary. If n is omitted or 0, the decimal point is not output.
m a number from 0 to 9, which represents the number of digits that the source value contains to the right of the implied decimal point. m is used as a scaling factor, and the source value is descaled (divided) by that power of 10. For Example, if m=1, the value is divided by 10; if m=2, the value is divided by 100, and so on. If m is omitted, it is assumed equal to n (the decimal precision). If m is greater than n, the source value is rounded up or down to n digits. The m option must be present if the ix option is used and both the Z and $ options are omitted; this is to remove ambiguity with the ix option.
Z suppresses leading zeros. Note that fractional values, which have no integer part, will have a zero before the decimal point. If the value is zero, a null is output.
, specifies insertion of the thousands separator symbol every three digits to the left of the decimal point. The
type of separator (comma or period) is specified through the SET THOU Command. (Use the SET DEC
Command to specify the decimal separator.)
$ appends an appropriate currency symbol to the number. The currency symbol is specified through the SET
MONEY Command.
ix aligns the currency symbol by creating a blank field of “i” number of columns. The value to be output over-
writes the blanks. The “x” parameter specifies a filler character that can be any non-numeric character,
including a space.
c appends a credit character or encloses the value in angle brackets (< >). Can be any one of the following:
- Appends a minus sign to negative values; a blank follows positive or zero values.
C Appends the characters CR to negative values. Two blanks follow positive or zero values.
Input Conversion: works with a number that has only thousands separators and a decimal point.
EXAMPLES
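For illustration (the stored value is arbitrary, and the default SET THOU, SET DEC and SET MONEY settings are assumed):
OCONV 123456 "MD2"
yields 1234.56
OCONV 123456 "MD2,"
yields 1,234.56
OCONV 123456 "MD2,$"
yields $1,234.56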
Miscellaneous operators control formatting, exchanging stack entries, popping the top entry, concatenation, and string extraction. They are:
^ pop last entry from the stack and discard. Pushes all other entries up.
Format Code Perform the specified format code on the last entry and replace the last entry with the result.
[ ] Extract a substring from stack entry 3. The starting column is specified in stack entry 2 and the num-
ber of characters is specified in entry 1
The MK code allows you to display large numbers in a minimum of columns by automatically descaling the numbers and appending a letter to
represent the power of 10 used. The letters and their meanings are:
K 10^3 (Kilo)
M 10^6 (Mega)
G 10^9 (Giga)
COMMAND SYNTAX
MKn
SYNTAX ELEMENTS
n represents the field width and if present will include the letter and a minus sign.
Comments: A number that already fits into the specified field width is not changed.
If the number is too long but includes a decimal fraction, the MK code first attempts to round the fractional part so that the number will fit the
field. If the number is still too long, the code rounds off the three low-order integer digits, replacing them with a K. If the number is still too
long, the code rounds off the next three digits, replacing them with an M. If that is still too long, the code rounds off three more digits, repla-
cing them with a G. If the number still does not fit the specified field, the code displays an asterisk. If the field size is not specified or is zero,
the code outputs null.
Input Conversion: does not invert. It simply applies the metric processing to the input data.
EXAMPLES
For example, 123456789012345 output through an MK code with a field width of 7 yields 123457G.
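Working through the descaling rules with arbitrary values:
OCONV 123 "MK5"
yields 123 (the number already fits the field)
OCONV 1234567 "MK5"
yields 1235K (the three low-order digits are rounded off and replaced with K)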
ML and MR codes format numbers and justify the result to the left or right respectively. The codes provide the following capabilities:
COMMAND SYNTAX
ML{n{m}}{Z}{,}{c}{$}{fm}
MR{n{m}}{Z}{,}{c}{$}{fm}
SYNTAX ELEMENTS
m a number that defines the scaling factor. The source value is descaled
(divided) by that power of 10. For Example, if m=1, the value is divided by
10; if m=2, the value is divided by 100, and so on. If m is omitted, it is
assumed equal to n (the decimal precision).
fm Specifies a format mask. A format mask can include literal characters as well as format codes. The format codes are as follows:
CODE FORMAT
#{n} Spaces. Repeat space n times. Overlays the output value on the spaces created.
*{n} Asterisk. Repeat asterisk n times. Overlays the output value on the asterisks created.
%{n} Zero. Repeat zeros n times. Overlays the output value on the zeros created.
&x Format. x can be any of the above format codes, a currency symbol, a space, or literal text. The first character following ‘&’ is used as the default fill character to replace #n fields without data. You may enclose format strings in parentheses “( )”.
Comments: The justification specified by the ML or MR code applies at different stages from that specified in field 9 of the data definition
record. The sequence of events begins with the formatting of the data with the symbols, filler characters and justification (left or right) specified
by the ML or MR code. The formatted data is justified according to field 9 of the definition record and overlaid on the output field, which ini-
tially comprises the number of spaces specified in field 10 of the data definition record.
Input Conversion: works with a number that has only thousands separators and a decimal point.
EXAMPLES
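For illustration (the stored values are arbitrary, and the default separator and currency settings are assumed):
OCONV 123456 "MR2,$"
yields $1,234.56
OCONV 1234 "MR%6"
yields 001234 (the value is overlaid, right justified, on a mask of six zeros)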
MP codes convert packed decimals to unpacked decimal representation for output or decimal values to packed decimals for input.
COMMAND SYNTAX
MP
Comments: The MP code is most often used as an output conversion; on input, the MP processor combines pairs of 8-bit ASCII digits into single 8-bit digits as follows:
l Strips off the high order four bits of each ASCII digit.
l Moves the low order four bits into successive halves of the stored byte
l Adds a leading zero (after the minus sign if present) if the result would otherwise yield an uneven number of halves.
l Ignores leading plus signs (+)
l Stores leading minus (-) signs as a four-bit code (D) in the upper half of the first internal digit.
When displaying packed decimal data, you should always use an MP or MX code. Raw packed data is almost certain to contain control codes
that will upset the operation of most terminals and printers.
Input Conversion: is valid. Generally, for selection processing you should specify MP codes in field 7 of the data definition record.
EXAMPLES
ICONV -1234 “MP”
yields 0xD01234
OCONV 0xD01234 “MP”
yields -01234
The MS code allows an alternate defined sort sequence for sort fields.
COMMAND SYNTAX
MS
Comments: Use of the MS code is only relevant when it is applied in field 8 as a pre-process code for a specified field in a sort clause. In all other cases, it is ignored.
Use the sort sequence defined in a special record named SEQ that you must create in the ERRMSG file. Field 1 of this record contains a
sequence of ASCII characters that define the order for sorting.
EXAMPLE
SEQ (defined in ERRMSG file)
001 aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyY
zZ9876543210 ,.?!””;:+-*/^=()[]{}<>@#$%&”~\|
SALES....
AbC789
ABC789
ABC788
dEF123
Use the MT code to convert time notations such as 01:40:30 or 1:30 AM between internal and external format.
COMMAND SYNTAX
MT{H}{S}
SYNTAX ELEMENTS
H specifies 12-hour format. Uses 24-hour format if omitted.
S specifies that seconds are included.
Comments: Time is stored internally as the number of seconds since midnight. Outputs the stored value in 12 hour or 24 hour (international)
format
Input Conversion: is valid. Generally, for selection processing you should specify MT codes in field 7 of the data definition record.
AM and PM designators are taken into account; the result of the input conversion for certain values is affected by the time_is_hours emulation setting.
EXAMPLES
Input Conversion
MT 00:00 0
MTH 12:00AM 0
MT 01:00AM 3600
MT 01:00 3600
MT 01:00PM 46800
Operators used in A code expressions include arithmetic, relational and logical operators, the concatenation operator, and the IF statement.
F code operations are typically expressed as “F;stack2;stack1;operation” and evaluated under most emulation, as “stack2 operation stack1”.
If JBCEMULATE is set to “ROS”, this example is evaluated as “stack1 operation stack2”, effectively reversing the order of operations.
NOTE: that the FE and FS codes are evaluated in the same way for all emulations.
EXAMPLE 1
F;C3;C5;-
Push a value of three onto the stack. Push a value of five onto the stack.
Take entry 1 from entry 2 (3 - 5) and push the result (-2) back onto the stack as entry 1. ROS emulations will subtract 3 from 5 and return a
result of two.
EXAMPLE 2
FS;C3;C5;-
Push a value of three onto the stack. Push a value of five onto the stack. Take entry 1 from entry 2 (3 - 5) and push the result (-2) back onto the stack. This works in the same way for all emulations.
EXAMPLE 3
F;C2;C11;C3;-;/
Push a value of two onto the stack. Push a value of 11 onto the stack. Push a value of three onto the stack. Subtract entry 1 from entry 2 (11 -
3) and push the result (8) back onto the stack. Now divide entry 2 by entry 1 (2 divided by 8) and push the result (0) back onto the stack.
Output Conversion
MTS 0 00:00:00
MTHS 0 12:00:00AM
MT 3600 01:00
MT 46800 13:00
The output specification clause names the fields that are to be included in the report.
SYNTAX
field {print limiter}
{NOT} {relational operator} “value string” {{logical-connective} {NOT} {relational-operator} “value string”}...
SYNTAX ELEMENTS
TOTAL specifies that a running total of a numeric field be maintained
Print limiter: suppresses output of values (down to subvalue level) that do not match the clause; suppressed values are replaced with blanks. Any detail lines that would, as a result, be blank are suppressed. Any totals produced include just the values that match the limiting clause.
BREAK-ON specifies that a control break be performed, and a break line displayed, each time the value of a field changes.
Text comprises any printable characters except RETURN, LINE FEED, double quotes, single quotes or system delimiters.
B Break: works in conjunction with the B option of the Heading and FOOTING modifiers to insert the break
value in the heading or footing.
D Data: suppresses the break if only one detail line has been output since the last break.
L Line: suppresses the blank line preceding the break data line. Overrides the U option if both are specified.
P Page: throws a new page after each new break value until all the data associated with the current break has
been output.
R Rollover: Inhibits a page break until all the data associated with the current break has been output.
U Underlines: if specified places underlines on the line above the accumulated totals. Ignored if used with the
L option.
V Value: inserts the value of the control break field at this point in the BREAK-ON option.
Comments: If the sentence contains an output specification clause, it ignores any default definition records in the dictionary.
The P code returns a value if it matches one of the specified patterns, which can be combinations of numeric and alphabetic characters and lit-
eral strings.
COMMAND SYNTAX
P{#}(element){;(element)}...
SYNTAX ELEMENTS
element a pattern to match, built from combinations of nN (n numeric characters), nA (n alphabetic characters) and literal strings enclosed in double quotes.
Comments: Returns a null value if the value does not match any of the patterns.
Input Conversion: does not invert. It simply applies the pattern matching to the input data.
EXAMPLE 1
P(2A”*”3N”/”2A)
Will match and return AA*123/BB or xy*999/zz. Will fail to match AAA*123/BB or A1*123/BB, and will return null.
EXAMPLE 2
P(2A”*”3N”/”2A);(2N”-“2A)
Will match and return AA*123/BB, xy*999/zz, 99-AA or 10-xx. Will fail to match AA&123/BB, A1*123/BB, 9A-AA or 101-xx, and will return
null.
Field 8 codes are valid but, generally, it is easier to specify the D code in field 7 for input conversion. Dates in output format are difficult to use
in selection processing.
If you are going to use selection processing and you want to use a code which reduces the date to one of its parts, such as DD (day of month),
the D code must be specified in field 8.
Generally, for selection processing, you should specify D codes in field 7 except when you use a formatting code, such as DM, that reduces the
date to one of its parts. If you specify no year in the sentence, the system assumes the current year on input conversion. If only the last two digits of the year are specified, the system assumes the following:
00-29 2000-2029
30-99 1930-1999
EXAMPLES
D- 9904 11-02-1995
D0 9904 11 FEB
DD 9904 11
DJ 9904 42
DM 9904 2
DQ 9904 1
DW 9904 6
DY 9904 1995
DY2 9904 95
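Input conversion with the D code reverses the table above; for instance:
ICONV "11 FEB 1995" "D"
yields 9904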
A push operator always pushes a single entry onto the stack. Existing entries are moved one position down. Push operators are:
“literal” Literal. Any text string enclosed in double or single quotes.
field-number{R{R}}{(format-code)}
R Specifies that the last non-null value obtained from this field be applied for each multivalue that does
not exist in a corresponding part of the calculation.
RR Specifies that the last non-null value obtained from this field be applied for each subvalue that does
not exist in a corresponding part of the calculation.
(format code). One or more format codes (separated by value marks) enclosed in
parentheses and applied to the value before it is pushed onto the stack.
Cn Constant - where n is a constant (text or number) of any length up to the next semicolon or system
delimiter.
ND Number of records since the last BREAK on a BREAK data line. Equal to the record counter on a
GRAND-TOTAL line. Used to compute averages.
NI Record counter. The ordinal position of the current record in the report.
NL Length of the record, in bytes. Includes all field marks but not the key.
NS Subvalue counter. The ordinal position of the current subvalue within the field.
NV Value Counter. The ordinal position of the current multivalue within the field.
V or LPV Previous Value. Use the value from the previous format code.
The R code returns a value that falls within one or more specified ranges.
COMMAND SYNTAX
Rn,m{;n,m}...
SYNTAX ELEMENTS
n the starting integer of the range. Can be positive or negative.
m the ending integer of the range. Can be positive or negative, but must be equal to or greater than n.
Comments: Returns a null value if the value does not fall within the range(s).
Input Conversion: does not invert. It simply applies the range check to the input data.
EXAMPLE 1
R1,10
Will return any value that is greater than or equal to one and less than or equal to 10
EXAMPLE 2
R-10,10
Will return any value that is greater than or equal to -10 and less than or equal to 10
EXAMPLE 3
R-100,-10
Will return any value that is greater than or equal to -100 and less than or equal to -10
The fields of a file definition record that affect jQL reports are:
Field 7 Conversion code for the key, if required (for date, time, etc.).
Field 9 Justification for key. Can be one of the following (see data definition records)
L Left justified
R Right justified
T Text
U Unlimited
REFORMAT is similar to the LIST Command in that it generates a formatted list of fields, but its output is directed to another file or to magnetic tape rather than to the terminal or printer.
COMMAND SYNTAX
REFORMAT file-specifier {record-list} {selection-criteria} {USING file-specifier} {output-specification} {format-specification} {(options}
PROMPT
At the prompt, supply the destination file:
File: Enter a file name, or the word “TAPE” for output to a magnetic tape.
Comments: Overwrites records that already exist in the destination file; when you reformat one file into another, each selected record becomes
a record in the new file. It uses the first value specified in the output specification clause as the key for the new records. The remaining values
in the output specification clause become fields in the new records.
When you reformat a file to tape, it concatenates the values specified in the output specification clause to form one tape record for each selec-
ted record. The record output is truncated or padded at the end with nulls (X”00”) to obtain a record the same length as specified when the
tape was assigned by the T-ATT Command.
Unless you specify HDR-SUPP or COL-HDR-SUPP, or a C or H option, a tape label containing the file name, tape record length (in hexadecimal), time, and date is written to the tape. If a HEADING clause is specified, this forms the data for the tape label.
Unless the ID-SUPP modifier or the 'I' option is specified, record keys are displayed as the records are written to tape.
EXAMPLE
REFORMAT ORDER ORD.ADDR
FILE: ADDRESS
Creates new records in the ADDRESS file. Each selected record in the ORDER file becomes a record in the ADDRESS file, keyed on the value of ORD.ADDR.
Relational operators specify relational operations so that any two expressions can be treated as operands and evaluated as returning true (1) or false (0).
= or EQ Equal to
Relational operators compare stack entries and push the result onto stack entry 1; the result is either 1 (true) or 0 (false). Relational operators include = (equal to) and # (not equal); for the F code the operands are compared as entry 2 against entry 1, while for FS and FE the standard stack order applies.
The Remainder Function R(exp1, exp2) takes two expressions as operands and returns the remainder when dividing the first expression by the
second.
Summation Function: S(expression) evaluates an expression and then adds together all the values.
EXAMPLE
A;S(N(HOURS) * N(RATE)R)
Multiplies each value in the HOURS field by the value of RATE; the multivalued list of results is then totalled.
To repeat a value for combination with multivalues, follow the field number with the R operator. To repeat a value for combination with multiple subvalues, follow the FMC with the RR operator.
Some MC codes replace one set of characters with other characters. These codes can:
MCP{c} Convert each non-printable character (X”00” - X”1F”, X”80” - X”FE”) to character c, or period (.) if c is not specified.
MCPN Same as MCP but insert the two-character hexadecimal representation of the character immediately
{c} after character c, or tilde (~) if c is not specified.
MCNP Convert paired hexadecimal digits preceded by a tilde or character c to ASCII code. The opposite of the
{c} MCPN code.
Input conversion does not invert. The original code will be applied to input data.
EXAMPLE 1
MCC;X5X;YYY
EXAMPLE 2
MCPN
Assuming a source value of ABC]]DEF, where ] represents a value mark, MCPN will return ABC.FD.FDDEF.
COMMAND SYNTAX
S;Var1;Var2
SYNTAX ELEMENTS
Var1 specifies the value to be substituted if the referenced value is not null or zero. Can be a quoted string, an FMC (field number), or an asterisk. An asterisk indicates that the value of the referenced field should be used.
Var2 specifies the value for substitution if the referenced value is null or zero. Can be a quoted string, an FMC (field number), or an asterisk.
EXAMPLE 1
S;*;”NULL VALUE!”
If the referenced field is null, this Example will return the string “NULL VALUE!”. Else, it will return the referenced value.
EXAMPLE 2
S;*;3
If the referenced field is null, this Example will return the content of field 3 of the data record. Else, it will return the referenced value.
EXAMPLE 3
S;4;5
If the referenced field is null, this Example will return the content of field 5 of the data record. Else, it will return the content of field 4.
Generates an implicit list of record keys or specified fields based on the specified selection criteria
COMMAND SYNTAX
SELECT file-specifier {record-list} {selection-criteria} {sort-criteria} {output-criteria} {USING file-specifier} {(options}
SYNTAX ELEMENTS
The options are:
C{n} Display running counters of the number of records selected and records processed. Unless modified by n, the counter increments after every 500 records processed, or at the total number of records if fewer than 500.
n Specifies a number other than 500 by which to increment. For Example, C25 increments the counter after every 25 records processed.
If you specify an output-criteria clause, the generated list will comprise the data (field) values defined by the clause, rather than the selected
record keys.
If you are in jSHELL when the Command terminates, it displays the total number of entries in the generated list and the list is made available
to the next Command, as indicated by the > prompt.
If you use the BY-EXP or BY-EXP-DSND connectives on a multivalued field, the list will have the format:
record-key]multivalue#
where multivalue# is the position of the multivalue within the field specified by BY-EXP or BY-EXP-DSND. multivalue# can be accessed by a
READNEXT Var,n statement in a jBC program.
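A minimal jBC sketch of picking up the multivalue position (the ORDER file and ORD.ID field come from the surrounding examples; exact READNEXT behaviour may vary by release):
EXECUTE 'SELECT ORDER BY-EXP ORD.ID'
* Key receives the record key; Vpos the multivalue position
LOOP
   READNEXT Key, Vpos ELSE EXIT
   CRT Key : " value #" : Vpos
REPEAT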
EXAMPLE 1
SELECT ORDER WITH ORD.AMT = “ABC]”
23 Records selected
Select all the records in ORDER file with an ORD.AMT value that starts with ABC. Then, using the list, report on the records in the ORDER
file which have a VALUE field greater than 1000.
EXAMPLE 2
SELECT ORDER WITH ORD.AMT = “ABC]”
23 Records selected
>SAVE-LIST ORDER.ABC
Select all the records in ORDER file with an ORD.AMT value that starts with ABC. Then save the list as ORDER.ABC.
Value string delimiters are the single quote (‘) and double quote (“). You can enclose an item-id value string in double quotes, but only if it is entered immediately after the file name. Use single quotes within item-id selection clauses and double quotes within ordinary selection criteria, except when you are searching for an item-id that includes single quotes.
COMMAND SYNTAX
SORT-LABEL file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier}{output-specification} {format-specification}
{(options}
PROMPTS
At the prompt, supply formatting criteria as follows:
COL Number of columns required to list the data across the page.
ROW Number of lines for each record. The output of each element of the output specification is on a separate line; if more elements exist in the output specification than there are rows specified, the extra elements are ignored. If more rows than elements are specified, the output specification for these rows will be blank.
SIZE Number of spaces required for the data under each column
C Optional. Suppresses null or missing data. If absent, null or missing values are output as blanks. If
present, the C must be upper case and not in quotes.
Comments: The total number of columns specified must not exceed the page width.
If INDENT is not zero, at the prompt supply a series of HEADERs that will appear in the left margin for each field. If a heading is not required
for a particular line, press RETURN.
If COL-HDR-SUPP or HDR-SUPP, or the C or H options, are specified, the page number, date, and time are not output and the generated report will be without page breaks.
The sort criteria clause allows you to specify the presentation order of the records in the report.
SYNTAX
BY field
BY-DSND field
{relational operator} “value string” {{logical connective} {relational operator} “value string”}...
BY Specifies a single-value sort that will order the records in ascending sequence.
BY-DSND Specifies a single-value sort that will order the records in descending sequence.
BY-EXP Specifies a multivalue sort that will order the multivalues of the specified multivalued field in ascending sequence.
BY-EXP-DSND Specifies a multivalue sort that will order the multivalues of the specified multivalued field in descending sequence.
Comments: Each sort clause comprises a sort connective followed by a field name. The sort connective can specify an ascending or descending sort sequence of single or multivalued fields. If you include more than one sort criteria clause, the processor ranks the clauses in a left-to-right, most-to-least-important hierarchical sequence. The record key is always used as the least important sort value, unless explicitly included in the sort criteria.
The precise sorting sequence depends on whether a field is left- or right-justified.
The sort connectives for single valued fields sort the record orders according to the value of a field.
If using a single value sort connective with a field that contains multivalues or subvalues, it only uses the first value in the field as the sort key.
If using a multiple value sort connective with a field which contains subvalues, it only uses the first subvalue in each multivalue as the sort key.
The treatment of each value is as if it were the only value so that each value occupies a line of output in the report. This effectively “explodes” a
record into multiple records. You can limit the values for sorting and output by including a print limiter with the multivalue sort connectives.
When using a SELECT-type command with BY-EXP, the records list has the format:
record-key]multivalue#
where multivalue# is the position of the multivalue within the field. The READNEXT statement in a jBASIC program can use this value.
EXAMPLE 1
SORT SALESORDER WITH S.CODE = “ABC]” ORD.COST >= ‘500’ BY S.CODE ORD.COST
Select the records in the SALESORDER file in which the S.CODE field starts with ABC and the ORD.COST field is greater than or equal to 500. Sort the report into S.CODE order and output the ORD.COST field.
EXAMPLE 2
SORT ORDER WITH ORD.COST = ‘500’ BY ORD.COST BY-DSND ORD.ID
Select the records in the ORDER file in which the ORD.COST field equals 500. Sort the report into ascending ORD.COST order and, within that, descending ORD.ID order.
EXAMPLE 3
SORT ORDER BY-EXP ORD.ID
Select all the records in the ORDER file and output a detail line for each multivalue of the ORD.ID field, in record key order.
Generates a sorted and formatted report of records and fields from a specified file
COMMAND SYNTAX
SORT file-specifier {record-list} {selection-criteria} {sort-criteria} {USING file-specifier} {output-specification} {format-specification}
{(options}
Comments: Unless a different sort order is specified in the sort criteria, the presentation of the records will be in an ascending order based on
the record key.
The data definition records (or the file definition records in the case of keys) determine whether to apply a left or right sort to the data.
If the field is left justified, it compares the data on a character-by-character basis from left to right, using ASCII values.
EXAMPLE:
01
100
21
A
ABC
BA
If the field is right justified and the data is numeric, it performs a numeric comparison and the values ordered by magnitude.
If the field is right justified and the data is alphanumeric, it collates the data into an alphanumeric sequence.
EXAMPLE:
A
01
123
ABCD
If a descending sequence is required, use the BY-DSND modifier in the sort criteria. Use the BY-DSND modifier with a data definition record to
obtain a descending sequence of record keys, which points to field 0 (the key). See “Sort Criteria Clause” earlier for a full explanation of the sort-
ing process.
EXAMPLE 1
SORT ORDER
Sort all the records in the ORDER file into key order and use the default data definition records (if found) to format the output.
EXAMPLE 2
SORT ORDER WITH ORD.AMT = “ABC” “DEF” “GHI”
Select the records in the ORDER file in which the ORD.AMT field contains the values ABC, DEF or GHI. Sort the records into key order.
EXAMPLE 3
GET-LIST SALES.Q4
SORT SALES > ‘DEF’ BY S.CODE
Get the implicit list called SALES.Q4 and, using the list, report on the records in the SALES file which have a key greater than DEF. The report is sorted by S.CODE.
EXAMPLE 4
SORT ORDER WITH ORD.AMT = “ABC]” OR “[DEF” BY-DSND S.KEY LPTR
Select the records in the ORDER file in which the ORD.AMT field contains values which start with ABC or end with DEF. Sort the report in descending order of S.KEY (a data definition record which points to field 0 - the key) and output the report to the printer.
EXAMPLE 5
SORT ORDER BY ORD.ID BREAK-ON ORD.ID “‘BL’” ORD.AMT TOTAL ORD.COST GRAND-TOTAL “Total” HEADING “Sales Code: ‘B’ ‘DL’” FOOTING “Page ‘CPP’” LPTR
Sort the ORDER file by ORD.ID. Output the ORD.ID, ORD.AMT and ORD.COST fields.
Control break on a change in ORD.ID and suppress the LINE FEED before the break. Reserve the break value for use in the heading (‘B’). Maintain a running total of the ORD.COST field and output it at each control break. Put the word “Total” on the grand-total line.
Set up a heading for each page which comprises the words “Sales Code: ”, the sales code (from the break), a date and a LINE FEED. Set up a footing which contains the text “Page ” and a page number, centred on the line.
SREFORMAT is similar to the SORT Command in that it generates a formatted list of fields, but directs its output to another file or the mag-
netic tape rather than to the terminal or printer.
COMMAND SYNTAX
SREFORMAT file-specifier {record-list} {selection-criteria} {USING file-specifier} {output-specification} {format-specification} {(options}
File: Enter a file name, or the word “TAPE” for output to a magnetic tape.
COMMENTS:
Overwrites records that already exist in the destination file; when you reformat one file into another, each record selected becomes a record in
the new file. It uses the first value specified in the output specification clause as the key for the new records. The remaining values in the out-
put specification clause become fields in the new records.
When you reformat a file to tape, it concatenates the values specified in the output specification clause to form one tape record for each selec-
ted record. The record output is either truncated or padded at the end with nulls (X”00”) to obtain a record the same length as specified when
the tape was assigned by the T-ATT Command.
Unless you specify HDR-SUPP or COL-HDR-SUPP, or a C or H option, it first writes a tape label to the tape containing the file name, tape record length (in hexadecimal), time, and date. If a HEADING clause is specified, this will form the data for the tape label.
Record keys are displayed as the records are written to tape unless the ID-SUPP modifier or the “I” option is specified.
Generates an implicit list of record keys or specified fields, based on the selection criteria specified
COMMAND SYNTAX
SSELECT file-specifier {record-list} {selection-criteria} {sort-criteria} {output-criteria}
SYNTAX ELEMENTS
Options are:
C{n} Display running counters of the number of records selected and records processed. Unless modified by n, the counter increments after every 500 records processed or the total number of records if less than 500.
n Specifies a number other than 500 by which to increment. For Example, C25 increments the counter after every 25 records processed.
Comments: Unless you specify a sort criteria clause it sorts the records in key order.
If you specify an output-criteria clause, the generated list will comprise the data (field) values defined by the clause, rather than the selected
record keys.
When the Command terminates, it displays the total number of entries in the generated list; the list is available to the next Command. This is
indicated by the “>” prompt if you are in jSHELL.
If you use the BY-EXP or BY-EXP-DSND connectives on a multivalued field, the list will have the format:
record-key]multivalue#
where multivalue# is the position of the multivalue within the field specified by BY-EXP or BY-EXP-DSND. multivalue# can be accessed by a
READNEXT Var,n statement in a jBASIC program.
EXAMPLE 1
SSELECT ORDER WITH ORD.AMT = ‘100’
23 Records selected
Select all the records in the ORDER file with an ORD.AMT value of 100 and sort the list into key order. Then, using the list, report on the records in the ORDER file which have a VALUE field greater than 1000.
EXAMPLE 2
SSELECT ORDER WITH ORD.AMT = “ABC]” BY P.CODE
23 Records selected
>SAVE-LIST SALES.ABC
Select all the records in the ORDER file with an ORD.AMT value that starts with ABC, sorted into P.CODE order. Then save the list as SALES.ABC.
The WITHIN modifier enables access to file records which contain sublists, with the COUNT and LIST commands. For the commands and the modifier to function correctly, you must include the V processing code in field 8 of the file definition record. See the File Specifiers topic in the jQL Sentence Construction chapter for more details.
COMMAND SYNTAX
V;field-no
SYNTAX ELEMENTS
Field No. The number of the field, which contains the sublist
EXAMPLE
Consider the stock file used by a camera factory where each data record can represent either an assembly or a component part. Take as an
Example the record set that defines a simple camera assembly. The data records contain the following data.
(Each of the records A1, A21, A22, A23, A210, A211 and A230 holds a description in field 1, a multivalued sublist of component record keys in field 2, and a quantity in field 3.)
Record A1 represents assembled cameras. It points to the used sub-assemblies (A21, A22 and A23) to make each camera. The sub-assemblies
in turn point to their component parts; A21 points to A210 and A211, A22 does not have any components, and A23 points to A230.
Having established the logical data relationships, we now need to ensure that the system understands that field 2 is a multivalued sublist. We
do this by updating field 8 in the file definition record to read “V;;2”,
like this:
STOCK
001 D
002
003
004
005
006
007
008 V;;2
009 L
010 10
To create three data definition records in the dictionary of STOCK - one for each field, use the following titles DESC, COMPONENTS, and
QTY.
The final step is to issue a COUNT or LIST Command which uses the WITHIN modifier:
8 RECORDS LISTED
The substring function [start-char-no, len] extracts the specified number of characters from a string, starting at a specified character.
SYNTAX ELEMENTS
start-char-no An expression that evaluates to the position of the first character of the substring.
len An expression that evaluates to the number of characters required in the substring.
Use -len (minus prefix) to specify the end of the substring. For Example, [1, -2] will return all but the last character and [-3, 3] will return the last three characters.
EXAMPLE 1
A;N(S.CODE)[2, 3]
Extracts a sub-string from the S.CODE field, starting at character position 2 and continuing for 3 characters
EXAMPLE 2
A;N(S.CODE)[2, N(SUB.CODE.LEN)]
Extracts a sub-string from the S.CODE field, starting at the character position defined by field 2 and continuing for the number of characters
defined by SUB.CODE.LEN
Format Codes: Specifies a format code to be applied to the result of the A code or an operand.
COMMAND SYNTAX
a-code{]format-code...}
a-operand(format-code{]format-code}...)
SYNTAX ELEMENTS
a-code A complete A code expression.
format-code One of the codes described later: G(roup), D(ate) or M(ask).
Comments: You can format the result of the complete "A" code operation by following the expression with a value mark and then the required
format code(s). (This is a standard feature of the data definition records.)
Format codes can also be included within "A" code expressions; enclosed in parentheses, using a value mark for separation if using more than
one format code. All format codes will convert values from an internal format to an output format.
EXAMPLE 1
A;N(COST)(MD2]G0.1) * ...
A;I(N(COST)(MD2)) * ...
Shows format codes applied within the A code expression: the value of COST is converted by the format code(s) in parentheses before being used in the multiplication.
EXAMPLE 2
A;N(COST) * N(QTY)]MD2
Shows the MD2 format code applied outside the A code expression. Multiplies COST by QTY and the result formatted by the MD2 format
code.
Special Functions
You can format any operand by following it with one or more format codes enclosed in parentheses, and separated by value marks, (ctrl ]):
operand(format-code{]format-code}...)
Text Extraction
COMMAND SYNTAX
T{m,}n
SYNTAX ELEMENTS
m specifies the starting column number.
n specifies the number of characters to extract.
Comments: If specifying m, the content of field 9 of the data definition record has no effect - it counts and extracts characters from left to right,
for n characters.
If m is not specified, the content of field 9 of the data definition record will control whether n characters are extracted from the left or the right-
hand end of the value. If field 9 does not contain an R, extracts the first n characters from the value. If field 9 does contain an R (right justify),
extracts the last n characters from the value.
Input Conversion: does not invert. It simply applies the text extraction to the input data.
EXAMPLES
Code Value Field 9 Result
T2 ABCDEFG L AB
T3 ABCDEFG R EFG
T3 ABCDEFG T ABC
Tfile codes provide a method for retrieving data fields from any other file to which the user has access.
COMMAND SYNTAX
T[*|DICT]file-specifier;c{n};{i-fmc};{o-fmc}
SYNTAX ELEMENTS
* or DICT Indicates the use of the dictionary of the specified file, rather than the data section.
file-specifier Identifies the reference file by name in the format file-name{,data-section-name}.
C If the reference record does not exist or the specified FMC is null, output the value unchanged.
I Input verify: functions as a C code for output and as a V code for input.
O Output verify: functions as a C code for input and as a V code for output.
V The reference record must exist and the specified FMC must contain a translatable value. If the record does not exist or the FMC contains a null, an error message will be output.
X If the reference record does not exist or the specified FMC is null, return a null.
n Specifies a value mark count to return one specific value from a multivalued field.
i-fmc The field number for input translation. If omitted or null, no input translation takes place.
o-fmc The field number for output translation. If omitted or null, no output translation takes place.
Comments: Uses the current data value as the record key for searching the specified reference file.
Returns a data field or a single value from a data field, from the record
Use Tfile codes in fields 7 or 8 of the data definition record. Use field 8 if translation of a multivalued field or comparisons and sorts are
required.
If you apply selection criteria, you can either use field 8, or field 7 and set up special records in the reference file to perform any input trans-
lation you require.
The special records in the reference file have as record keys values that the field subject to translation may be compared with in a jQL sentence.
Field i-fmc within these records contains the translate value that will be compared to values on file. Typically, values in a jQL sentence are out-
put values, so that the special input translation records are effectively the inverse of the output translation records.
Tfile codes can be “embedded” in other conversion codes but you must still follow the syntactical conventions of the “host” code. For Example,
if you include a Tfile code in an F code conversion, enclose the Tfile code in parentheses.
Output conversion is valid. The Tfile code has a parameter (o-fmc) that specifies the field in the translation record to use for output con-
version.
EXAMPLE 1
TSALES;X;;2
Using this Tfile code in field 8 of a data definition record, which also has a 0 in field 2, will cause the key of the current record to be used as the
key when accessing the reference file SALES; returns null if the record cannot be found; returns the value of field 2 if the record is found.
EXAMPLE 2
TSALES;C;;2
Using this Tfile code in field 8 of a data definition record, which also has a 6 in field 2, will cause the content of field 6 from the current record to be used as the key when accessing the reference file SALES. If the record cannot be found, or if it is found but field 2 is null, it returns the content of field 6 of the current record. If the record is found and field 2 contains a value, it returns that value.
EXAMPLE 3
A;3(TSALES;X;;2)
Using this embedded Tfile code in field 8 of a data definition record will cause the use of field 3 of the current record as the key when accessing
field 2 of the reference file SALES. Returns null if the record cannot be found; returns the value of field 2 if the record is found.
NOTE: All F correlative operators push values onto the stack, perform arithmetic and other operations on the stack entries, and pop
values off the stack.
The term “push” is used to indicate the placing of an entry (a value) onto the top of the stack so that existing entries are pushed down one
level. “Pop” means to remove an entry from the top of the stack so that existing entries pop up by one level. Arithmetic functions typically
begin by pushing two or more entries onto the stack. Each operation then pops the top two entries, and pushes the result back onto the top of
the stack. After any operation is complete, the result will always be contained in entry 1.
Throwaway connectives are keywords which make queries more readable. You can use them in any query to make the sentence read more like English; they can appear anywhere in a sentence, since throwaway connectives do not affect the query. The throwaway connectives include A, ARE, FILE, THE and FOR; for example, the words THE and FILE can appear in a sentence such as LIST THE ORDER FILE without affecting the meaning of the command.
In addition, to provide for timestamp functionality, a suite of conversions including A, F and I types is included. These generate a timestamp, displayed as a date and/or time in short, long, and full formats. These conversions also support non-Gregorian locales. The meaning of the components of the conversion is as follows:
D Date
T Time
The TOTAL connective specifies that a running total of the field be maintained and to output the total at each control break and at the end of
the report. Also, use TOTAL in conjunction with the BREAK-ON connective to display intermediate totals at the control breaks.
Use the GRAND-TOTAL modifier in the format specification clause to display any specified text on the last total line.
You can combine Item-id selection with implicit but not with explicit item-id lists. You can combine every type of list with selection criteria
based on attribute values.
COMMAND SYNTAX
Uxxxx
SYNTAX ELEMENTS
xxxx The hexadecimal identity of the routine
Comments: jBASE user exits are customized routines specially produced to perform specialized processing.
Unicode is a single coded character set providing a repertoire for all the languages of the world. Its first version used 16-bit numbers, which allowed encoding for 65,536 characters; further development allowed a repertoire of more than one million characters, requiring 21 bits. Higher bits have been declared unusable to ensure interoperability between UTF encoding schemes; UTF-16 cannot encode any code points above this limit. Values above 16 bits are handled as pairs of 16-bit codes (surrogate pairs).
l The first Unicode version used 16 bits, which allowed for encoding 65,536 characters.
l Further extended to 32 bits, although restricted to 21 bits to ensure interoperability between UTF encoding schemes.
l Unicode provides a repertoire of more than one million characters.
The 16-bit subset of UCS (Universal Character Set) is known as the Basic Multilingual Plane (BMP) or Plane 0.
Unicode provides a unique number for every character, on every platform, for every program, no matter what the language. Standards such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0 and WML require Unicode, which is the official way to implement ISO/IEC 10646; it is supported in many operating systems, all modern browsers, and many other products.
Incorporating Unicode into client-server or multi-tiered applications and websites can offer significant cost savings over the use of legacy char-
acter sets. Unicode enables a single software product or a single website to be targeted across multiple platforms, languages and countries
without re-engineering, and allows data to be transported through many different systems without corruption.
Contracting In Spanish sort order, ‘ch’ is considered a single letter. All words that begin with ‘ch’ sort after all other words beginning with ‘c’
Expanding In German, ä is equivalent to ‘ae,’ such that words beginning with ä sort between words starting with ‘ad’ and ‘af’.
Unicode Normalization
Normalization is the removal of ambiguities caused by precomposed and compatibility characters. There are four different forms of nor-
malization.
Form D Splits up (decomposes) precomposed characters into combining sequences where possible.
Form NFKD Like D, but avoids use of compatibility characters (e.g., uses ‘fi’ instead of U+FB01 LATIN SMALL LIGATURE FI).
For example, precomposed ü = U+00FC; decomposed, it is u followed by U+0308 COMBINING DIAERESIS.
Note that UTF-8 encoding requires the use of precomposed characters wherever possible.
UTF-8 encoding is a Unicode Translation Format of Unicode. Before UTF-8 emerged, users all over the world had to use various language-specific extensions of ASCII. This made the exchange of files difficult, and application software had to consider small differences between these encodings. Support for these encodings was usually incomplete and unsatisfactory, because the application developers rarely used all these encodings themselves.
UTF-8 can represent every Unicode code point as a sequence of bytes; the sequence used for any given character depends on the Unicode number which represents that particular character. UTF-8 encoding has the following properties:
l Files and strings that contain only 7-bit ASCII characters have identical encoding under ASCII and UTF-8.
l ASCII bytes 0x00-0x7F cannot appear as part of any other character.
l Allows easy resynchronization, makes the encoding stateless, and guards against the possibility of missing bytes.
l Can encode all possible 2^31 UCS codes.
l UTF-8 encoded characters may theoretically be up to six bytes long; however, 16-bit BMP characters are only up to three bytes long.
l The sorting order of big-endian UCS-4 byte strings is preserved.
l The bytes 0xFE and 0xFF are never used in the UTF-8 encoding.
l UTF-8 is also much more compact than other encoding options, because characters in the range 0x00-0x7F still only use one byte.
l Only the shortest possible multi-byte sequence that can represent the code point of the character is used.
l In multi-byte sequences, the number of leading one bits in the first byte is identical to the number of bytes in the entire sequence (see the worked example after this list).
l Unicode represents each character by a unique integer. Using UTF-8 encoding avoids the problems which would arise with 16- or 32-bit character byte streams, where the normal C string termination byte (zero) could occur within a character, so byte streams could become prematurely truncated.
You can add additional functionality by calling user-written BASIC subroutines, which you should compile and catalog, adding the library location to the library path in the JBCOBJECTLIST environment variable.
The first parameter of the called routine is the result parameter; it is used as the evaluated value of the subroutine, e.g.
FRED
001 SUBROUTINE FRED (Result, Param1)
002 IF Param1 > 100 THEN Result = 1 ELSE Result = 0
003 RETURN
One or other of the following formats can be used to call subroutines from an I-type.
Conversion
The Conversion attribute provides support for the normal query output conversions, e.g. D2, MT, F;, TFile etc.
Header
This attribute specifies the column heading text for display.
Format
The format attribute specifies the display format of the column, typically a justification and width such as 10L or 8R.
The using clause specifies a dictionary file, which is the source of data definition records.
SYNTAX
USING {DICT} filename {,data-section-name}
SYNTAX ELEMENTS
USING specifies the use of the named file as the dictionary for the data file.
filename names a file. If the DICT modifier is not specified, it will use the data section of the file.
data-section-name specifies a secondary data section of the file with a name different from the dictionary; it must follow filename, separated by
a comma but no space.
One main advantage of the using clause is that you can share a dictionary between several files where for example there are common data defin-
ition records.
EXAMPLE
SORT ORDER USING DICT ORDER
The data definition records in the dictionary of the file ORDER (DICT ORDER) are used to access the file ORDER.
Report qualifiers provide a variety of ways to control and refine the overall format of a report. COL-HDG, ID-SUPP, DET-SUPP, LPTR, SAMPLE, and SAMPLED are report qualifiers you saw in previous examples. The following list summarizes the most commonly used report qualifiers:
HEADING HEADER Uses the report header you specify in the query rather than the
default heading.
VERTICALLY VERT Displays the report in vertical format with one field on each line.
B Functions only if a BREAK-ON modifier with a B option is also included in the sentence. You can use the B option in either the heading or the footing. When the B option is in the HEADING, the value of the first BREAK-ON field on the page replaces the B in the heading. When the B is in the FOOTING, the last BREAK-ON value on the page replaces the B in the footing.
C{n} Centres the heading or footing text according to the predefined number of columns specified for the printer or terminal. To change the centring of the text, specify the number of columns (n) for the heading line on which to base the centring; for example, ‘C80’ positions the text centred at character position 40. Normally you should allow the printer or terminal set-up to determine the centring.
I Inserts the current record key. The last record key listed on the page is inserted in the footing; the first
key on the page is inserted in the heading
P Inserts the current page number right justified, expanding to the right as the number increases.
PP Inserts the current page number right justified in a field of four spaces
T Inserts the current system time and date in the format: hh:mm:ss dd mmm yyyy
Value strings are character strings enclosed in delimiters (usually single quotes within item-id-selection criteria and double quotes within ordin-
ary selection criteria); also used to compare against character strings in the file. The value string cannot contain the character by which it is
delimited. For example: if the value string is enclosed in single quotes, it may contain double quotes, but not single quotes. Otherwise, the
value string can contain any printable character, excluding RETURN, LINE FEED, and system delimiters. The simplest value string is a character string that has precisely those characters for testing (for example, ‘Johansen’); however, a value string can also include the following special characters:
Left ignore ([) at the beginning of the string to indicate that the item-id may start with any characters (for example, ‘[son’)
Right ignore (]) at the end to indicate that the item-id may end with any characters (for example, ‘Johan]’)
Wild cards (^) anywhere within the string, each wild card matching one character of any value (for example, ‘Joh^ns^n’).
EXAMPLE
The following sentence lists CUSTOMER information with CUSTOMER numbers “40823” or “40825”. Note: the equal sign makes these value strings rather than item-ids. Hence, without an implicit item list, the processor must search the entire file, comparing all item-ids against these two value strings; it would be better to omit the equal sign, as shown in the previous example, to avoid this.
The following sentence lists information about all the rooms with numbers that begin with three, end with five, and have an intervening character of any value.
The following sentence does not list any CUSTOMER records because, without a relational operator, the string 3^5 is treated as an item-id.
JQLOPT_FETCH_ALL_VALUES 2 No
JQLOPT_FORCE_SELECT 512 Yes // switch on triggers if no active select list (if file has triggers)
JQLOPT_USE_SQLDELETE 1024 Yes // Delete; supports clear file and where, but no subqueries
A common question is how data is associated if one column or more columns are multi-valued and the rest are not. Take this example where
both NUMBEERSPERBRAND and NUMCALSPERBRAND are multi-valued:
jsh -->SQLSELECT a.LASTNAME, a.NUMBEERSPERBRAND, a.NUMCALSPERBRAND FROM CUSTOMERS a WHERE a.FIRSTNAME = 'JIM'
STALLED 10 105
STALLED 12 100
JAMES 6 150
JAMES 12 100
SUE 4 200
SUE 12 100
Selected 6 rows.
The data on disk for JIM STALLED is shown below. Attribute 5 (NUMBEERSPERBRAND) and Attribute 6 (NUMCALSPERBRAND) are both
multi-valued. Yet, only two rows are returned from the SQL query above. Why? Attribute 5 and Attribute 6 are associated in the dictionary.
0001 JIM
0002 STALLED
0003 41
0004 2
0005 10]12
0006 105]100
0007 OLY]BUD
0009 PORTLAND
0010 97210
0011 US
0012 FIDO\JACK
Attribute 7 (BRANDS) in the dictionary (CUSTOMERS]D) is the controlling attribute for NUMBEERSPERBRAND (Attribute 5) and NUMCALSPERBRAND (Attribute 6). This is defined in attribute 4 below (C;5;6).
BRANDS
001 A
002 7
003 BRANDS
004 C;5;6
005
006
007
008
009 L
010 30
NUMBEERSPERBRAND
001 A
002 5
003 NUMBEERSPERBRAND
004 D;7
005
006
007
008
009 R
010 30
NUMCALSPERBRAND
001 A
002 6
003 NUMCALSPERBRAND
004 D;7
005
006
007
008
009 L
010 30
The dependent attributes (NUMBEERSPERBRAND and NUMCALSPERBRAND) define their controlling attribute in attribute 4 as well.
Without this relationship defined, the same query would yield vastly different results.
jsh -->SQLSELECT a.LASTNAME, a.NUMBEERSPERBRAND, a.NUMCALSPERBRAND FROM CUSTOMERS a WHERE a.FIRSTNAME = 'JIM'
STALLED^10^105
STALLED^12^105
STALLED^12^100
JAMES^6^150
JAMES^6^100
JAMES^12^150
JAMES^12^100
SUE^4^200
SUE^4^100
SUE^12^200
SUE^12^100
Selected 12 rows.
Now we see that there is a JOIN taking place, so please take note that multi-valued attributes (tables within a table) need to be related to one
another in the dictionary, otherwise a JOIN will occur.
While it is not strictly necessary that the reader understands jQL syntax, it is assumed that the reader is familiar with SQL syntax. It is also assumed that the user understands that jBASE is a hierarchical database and not a relational database (meaning that data is not necessarily normalized to 3rd normal form as in relational databases). Other assumptions follow:
$JEDIFILEPATH refers to the search path jBASE uses to find files or tables.
TRANSLATE
STDDEV
VARIANCE
CONVERT
CHARTOROWID
HEXTORAW
RAWTOHEX
ROWIDTOCHAR
DUMP
GREATEST
LEAST
UID
USERENV
INITCAP
LPAD
Selected 1 rows.
However, if you query the file CUSTOMERS in the samples directory (which does contain tables within tables) with the exact same query, you retrieve 3888 rows (because there are multiple tables within tables in this file)! You can, however, retrieve the tables within tables in one column, which can then be parsed programmatically. To do this, set the environment variable JQL_DONT_MAKE_ROWS as shown below:
Selected 1 rows.
As well, the * syntax is currently supported for one and only one table in the FROM clause. Queries such as SELECT a.*, b.* FROM table1 a, table2 b WHERE a.id = b.id will not work. This is a limitation that will be fixed in a future release.
jBASE has different mechanisms to represent dictionary items or meta data. These are documented in the jQL documentation and the following
section assumes a cursory knowledge of dictionary definitions. The main thing to keep in mind is that there are Dictionary files which hold the
meta-data, and data files which hold the application data.
FIRSTNAME
001 A
002 1
003 FIRSTNAME
004
005
006
007
008
009 L
010 24
LASTNAME
001 A
002 2
003 LASTNAME
004
005
006
007
008
009 L
010 20
etc.
FIRSTNAME maps to Attribute 1 in the datafile and LASTNAME maps to Attribute 2 in the data file. Now let’s look at the raw data for an
item (jed is a jBASE editor similar to ED. 0000011 below is the record key of the shown record.)
001 JIM
002 HARRISON
004
006 IN
007 09324
010 JIMH@compe.com
012 HPUX]SOLARIS]DGUX]TRU64]DGUX]TRU64]SOLARIS
014 1980]1315]1475]1016]843]1436]879
One can see the FIRSTNAME “JIM” in Attribute 1 and the LASTNAME “HARRISON” in Attribute 2. Attribute 13 is SYSTEMTYPE and is multi-valued.
Dictionary Considerations
1) For numeric comparisons, the item must be defined as right justified as shown below in attribute 9.
NUMBEERSPERBRAND
001 A
002 5
003 NUMBEERSPERBRAND
004 D;7
005
006
007
008
009 R
010 30
jsh-->SQLSELECT LASTNAME, NUMBEERSPERBRAND FROM CUSTOMERS2 WHERE FIRSTNAME = 'JIM' AND LASTNAME = 'JAMES' AND
NUMBEERSPERBRAND > 5 AND NUMBEERSPERBRAND < 10
2) Columns that belong to the same relation have to be associated; otherwise a JOIN will occur if they appear in the SELECT clause (see the chapter on Associations).
Dictionary Descriptors
(not inclusive)
A-descriptor
The 10-line attribute definition which is used on generic MultiValue systems including jBASE and on UniVerse. Two example records are shown side by side, with the meaning of each attribute on the right:
001 A A D/CODE
002 1 0 A/AMC
004 V/STRUCT
005
006
009 R R V/TYPE
010 11 3 V/MAX
D-descriptor
001 D TYPE
002 7 LOC
005 8R FORMAT
006 M SM
I-descriptor
001 I TYPE
006 S SM
jsh -->SQLSELECT a.FIRSTNAME, a.SYSTEMTYPE, b.AGE, b.LASTNAME FROM MYCUSTS a, CUSTOMERS b WHERE a.FIRSTNAME =
b.FIRSTNAME AND a.FIRSTNAME = 'JIM' AND a.LASTNAME = 'FLETCHER'
Selected 6 rows.
3) Example of a subquery
jsh-->SQLSELECT DISTINCT a.FIRSTNAME, a.LASTNAME FROM MYCUSTS a WHERE a.FIRSTNAME IN ( SELECT b.FIRSTNAME FROM
CUSTOMERS b WHERE b.AGE = 50)
FIRSTNAME LASTNAME
JIM FLETCHER
CLIVE PIPENSLIPPERS
JIM FREEMAN
CLIVE DELL
CLIVE COOPER
CLIVE JACKSON
JIM HARRISON
JIM SUE
JIM LAMBERT
JIM COOPER
JIM FENCES
CLIVE GATES
CLIVE FLETCHER
CLIVE WALKER
CLIVE BOYCOTT
Selected 15 rows.
jsh-->SQLSELECT a.FIRSTNAME, a.LASTNAME FROM CUSTOMERS a WHERE a.FIRSTNAME BETWEEN 'JIM' AND 'JOHNO'
jsh-->SQLSELECT a.FIRSTNAME, a.LASTNAME FROM CUSTOMERS a WHERE EXISTS ( SELECT FIRSTNAME FROM MYCUSTS WHERE
FIRSTNAME = 'DONNA' )
The following program can be found in $JBCRELEASEDIR/samples/SQL/MYSQLLIST.b.
It is meant to show how rows are returned with user-given selection criteria. Example output follows the code. (To compile, the commands are (1) BASIC . MYSQLLIST.b (2) CATALOG . MYSQLLIST.b)
PROGRAM MYSQLLIST
INCLUDE JQLINTERFACE.h
ResultCode = 0
INPUT SelCriteria
Options = JQLOPT_USE_SQLSELECT
* Compile the statement before executing it
Status = JQLCOMPILE(Statement, SelCriteria, Options, Messages)
* Start execution
sel = ""
Status = JQLEXECUTE(Statement,sel)
ProcessedItems = 0
LOOP
   Status = JQLFETCH(Statement,Control,Data)
WHILE Status > 0 DO
   * A positive status is assumed to mean another row was fetched
   CRT "Data :":Data
   ProcessedItems++
REPEAT
CRT "Processed ":ProcessedItems
Example run
jsh -->MYSQLLIST
Data :JIM
Data :JIM
Data :DONNAYA
Data :JOHNO
Data :^
Data :CLIVE
Data :JIM
Processed 7
When a multi-valued attribute is presented in the SQLSELECT clause, and that same attribute is also present in the WHERE clause, a question
arises as to how the data is to be displayed. Let’s take an easy example. Below is the data as it is stored on disk. Attribute 1 is the FIRSTNAME column, Attribute 2 is the LASTNAME column and Attribute 13 is the SYSTEMTYPE multi-valued column (different values are separated with a ] character...)
MYCUSTS2.. 0000162
FIRSTNAME. JIM
SYSTEMTYPE... Another Pick ] Boo! Not jBASE ] jBASE ] ROS ] UNI* ] Another Pick
First, let’s look at a jQL listing of the file. (Note that the words at the end of the query, FIRSTNAME LASTNAME SYSTEMTYPE, form the output specification.)
jsh-->LIST MYCUSTS2 WITH FIRSTNAME = "JIM" AND LASTNAME = "FREEMAN" AND SYSTEMTYPE >= 'ROS' AND SYSTEMTYPE !=
'Boo! Not jBASE' FIRSTNAME LASTNAME SYSTEMTYPE
MYCUSTS2.. 0000162
FIRSTNAME. JIM
SYSTEMTYPE... Another Pick Boo! Not jBASE jBASE ROS UNI* Another Pick
1 Records Listed
You can see that the whole item is returned and every attribute in SYSTEMTYPE is returned even though we’ve attempted to narrow
SYSTEMTYPE in the query with two conditions. Why does this happen? Because we are selecting on the item and not the multi-
values in the jQL language. In other words, each ITEM meets the criteria of SYSTEMTYPE >= 'ROS' AND SYSTEMTYPE != 'Boo! Not
jBASE', not each multi-value. (There is an ITEM that has at least one multi-value that meets the condition, hence the AND clauses can be
thought of as OR clauses).
In jQL there is a way to “limit” the display of multi-values. This is shown below, where the output specification of SYSTEMTYPE is followed by added conditions.
jsh-->LIST MYCUSTS2 WITH FIRSTNAME = "JIM" AND LASTNAME = "FREEMAN" AND SYSTEMTYPE >= 'ROS' AND SYSTEMTYPE !=
'Boo! Not jBASE' FIRSTNAME LASTNAME SYSTEMTYPE GE "ROS" AND NE "Boo! Not jBASE"
MYCUSTS2.. 0000162
FIRSTNAME. JIM
SYSTEMTYPE... jBASE ROS UNI*
The effect is that there are only 3 values now displayed for SYSTEMTYPE. Now let’s look at a query that is returning results for SQL. This is
what jBASE will return by default:
jsh -->SQLSELECT FIRSTNAME, LASTNAME, SYSTEMTYPE FROM MYCUSTS2 WHERE FIRSTNAME = 'JIM' AND LASTNAME = 'FREEMAN'
AND SYSTEMTYPE >= 'ROS' AND SYSTEMTYPE != 'Boo! Not jBASE'
Selected 3 rows.
So here is the dichotomy of limiting: are we selecting on the ITEM, or are we selecting on the MULTI-VALUES being displayed on the item? Which one does the WHERE clause refer to? By default, the SQL engine selects on the multi-values, and the AND clauses are treated as AND clauses when limiting the display. Thus the following query, which displays the item under the LIST command, produces no rows in SQL:
jsh-->SQLSELECT FIRSTNAME, LASTNAME, SYSTEMTYPE FROM MYCUSTS2 WHERE FIRSTNAME = 'JIM' AND LASTNAME = 'FREEMAN'
AND SYSTEMTYPE = 'UNI*' AND SYSTEMTYPE = 'jBASE'
Selected 0 rows.
But what if you really want to select on the ITEM in SQL, and in effect have the AND clauses treated as OR clauses? This behaviour can be changed by setting the environment variable JQL_LIMIT_WHERE to any value, or by setting the option programmatically as shown below.
Options = JQLOPT_LIMIT_WHERE
With this variable set, the following query would produce the results shown below.
jsh-->SQLSELECT FIRSTNAME, LASTNAME,SYSTEMTYPE FROM MYCUSTS2 WHERE FIRSTNAME = 'JIM' AND LASTNAME = 'FREEMAN'
AND SYSTEMTYPE = 'UNI*' AND SYSTEMTYPE = 'jBASE'
Selected 2 rows.
In addition, the user can choose to ignore limiting the display altogether by setting the environment variable JQL_DONT_LIMIT or setting the option JQLOPT_DONT_LIMIT.
One of the main benefits of providing a SQL engine for jBASE is that the database can be used with external tools and APIs. This document is meant to be used with the jDBC Driver manual, which gives a description of how the JAVA API for jDBC can be used with jBASE. In addition, there is an API for jBASE BASIC that is covered in this manual.
SQL has many benefits that can be applied to the multi-valued, hierarchical, database jBASE. In particular with jBASE, SQL allows users to
query data where there might be tables within tables and no primary-key/foreign key relationship (these relationships are defined in the dic-
tionary). This is an extreme advantage not available in most other RDBMS systems. Some of the advantages of using SQL over the traditional
query language of jQL are discussed below:
1. SQL allows sub-queries, UNION/INTERSECT/MINUS statements, and allows joins. jQL does not. jQL might take 2 or 3 queries to
do the work of one SQL statement. jQL may programmatically require more lines of code to accomplish the same task.
2. To call user defined functions in jQL, there needs to be a dictionary item representing this, usually expressed as an I-type. This clutters the dictionary. SQL allows use of functions directly in the language (e.g. SELECT MYFUNC(a.FIRSTNAME,a.AGE) FROM MYCUSTS a). One can build complex virtual columns without having to modify the dictionary to do it.
3. SQL has support for grouping records with GROUP BY and further selecting on those grouped records with the HAVING keyword.
While jQL has group functionality with some verbs (grouping is not supported with the most commonly used jQL verbs that return
select lists), it doesn’t have the HAVING functionality.
4. SQL is a more structured language, with fewer implicit constructs, and is therefore more readable and more easily understood. Several differently-worded jQL statements can return exactly the same results; in addition, jQL allows one to put the ordering clause before the selection criteria clause and vice versa.
Contents:
l Overview
l Assumptions
l SQLSELECT PROGRAM
l Examples of SQL
l Limiting multi-values in display
l Associations
l Dictionaries
l Current limitations
l SQL Programmatic Options
l Appendix
Some options need to be known such that the statement can be compiled. These options are passed to JQLCOMPILE as shown below.
Option Description
JQLOPT_LIMIT_WHERE Treat ANDs like ORs when limiting (see section on limiting)
JQLOPT_DONT_MAKE_ROWS Keep multi-values and subvalues as is, without splitting them up into rows (most useful for PICK developers who want to handle processing multi-values and sub-values themselves)
The SQLSELECT program is the program that runs SQL statements on the jsh. It displays headers that are supplied by the dictionary. Some
important notes:
1. By default, data is truncated according to the size of the display length. For example, if the dictionary item looks like this,
0001 A
0002 3
0003 ADDR1
0004
0005
0006
0007
0008
0009 L
0010 6
ADDR1
------
1 SUN
1 SUN
64 HAD
121 EL
1 SUN
10260
10260
Selected 7 rows.
jBASE is different from Oracle and other relational databases in that the size of a column is not declared (it can be any size up to the maximum size of the file). If the user wishes to display all data, one can do so by setting the environment variable JSQLSHOWRAWDATA as shown below.
FENCES^10260 SW GREENBURG RD
FREEMAN^10260 SW GREENBURG RD
Selected 7 rows.
Running in the mode JSQLSHOWRAWDATA will ignore header processing. You will also find that the addr1 field above is no longer truncated.
Note as well, that attribute marks are displayed as the ‘^’ character in this reporting mode.
2) Headers can be turned off as well by setting the environment variable JSQLHEADER=OFF. Formatting will be preserved in this mode, but
JSQLSHOWRAWDATA overrides any setting of JSQLHEADER.
HARRISON 1 SUN
SUE 1 SUN
LAMBERT 64 HAD
FLETCHER 121 EL
COOPER 1 SUN
FENCES 10260
FREEMAN 10260
Selected 7 rows.
Transaction Journaling
Transaction journaling provides the capability to log updates which are made to a jBASE database. The order in which the updates are logged
will reflect the actual updates made by a particular user/program facility in a sequential manner. Updates made system-wide will be appended
to the log as and when they occur on the database; i.e. the transaction log will reflect all updates in sequential order over time. The intention of the transaction log is that it provides a log of updates available for database recovery in the event that the system fails uncontrollably.
These are the main transaction journaling administration utilities provided within jBASE:
jlogadmin This command allows the administrator to configure and maintain the transaction log file definitions.
jlogstatus This command allows the administrator to monitor the activity of transaction journaling.
jlogdup This command duplicates transaction log data from one location to another (for example, to tape, to a standby machine, or back to the database).
jlogmonitor This command monitors potential problems with the jlogdup process and reports errors when specific trigger events occur.
There will be 2 sets of transaction log files on each machine, logset1 and logset2. Logset1 will contain all the updates applied on Monday or
Wednesday or Friday and logset2 will contain all the updates applied on Tuesday, Thursday or Saturday/Sunday.
The definitions of these files are maintained by the ‘jlogadmin’ command. The transaction log files should be switched by use of cron at midnight (or 1 minute past midnight) using the ‘jlogadmin –l N’ command, where N is 1 for Monday, Wednesday or Friday and 2 for Tuesday, Thursday and Saturday.
The administrator must ensure that all users are logged off the system prior to a system backup.
The backup script ‘backup_jbase’ should run to backup the system. This scenario allows for the backup failing and being restarted. Note the
creation of a statistics file. This is used effectively to timestamp the transaction log with the start time of the backup. Thus if the save was
restarted then the creation time of the statistics file will reflect the start of the last good backup. The operation is thus:
Stop the jlogdup process which dumps the transaction log file to tape: database updates for the duration of the backup will be prevented by the administrator.
Remove and label the tape – this contains all database updates since just prior to the last backup.
Once this has been done, the operator responds to the prompt and the backup commences.
Upon completion of the backup and verify, the tape is removed and labeled appropriately.
A new tape to hold the transaction log file is then mounted in the tape deck.
The operator now responds to the prompt and the jlogdup process, dumping updates from the disk-based transaction log file to tape re-com-
mences.
There is no need to switch the transaction log files after the completion of the backup, as this is performed automatically.
Transaction Log
Access to the transaction log is via a special file. Use the CREATE-FILE command to create the file stub:
CREATE-FILE TJ1 TYPE=TJLOG
When a file of type TJLOG is created, it generates a set of dictionary records in the dictionary section of the TJLOG file, which is a normal j4
hash file. The data section of a TJLOG file is handled by a special JEDI driver which accesses the current log set. The log set can be changed by
additional parameters when creating the TJLOG file after the TYPE specification.
A number of record types are used in the transaction log; see dictionary item TYPE.
The jlogdup command enables selective restores to be performed by preceding the jlogdup command with a select list. The select list can be
generated from the log set by generating a special file type, which uses the current log set as the data file.
EXAMPLE
In this example, all updates to the CUSTOMER file, which have been logged, except for any CLEARFILEs, are re-applied to the CUSTOMER
file.
The jlogmonitor command can be used to monitor potential problems with the jlogdup process. It will report errors when specific trigger
events occur. jlogmonitor can be run in the foreground but will usually be run as a background process (using the standard –Jb option).
Switching Logsets
A logset consists of 1 to 16 files. Each file has a capacity of 2GB, so the maximum capacity of a logset is 32GB. Before the logset reaches its
capacity, a switch must be made to another logset using the jlogadmin command. Failure to do so will render journaling inoperable and may res-
ult in database updates from jBASE programs failing.
Using 16 files in a logset does not consume more space than using just 1 file. This is because updates to the logset are striped across all the
files in the logset. When journaling is active on a live system, the recommendation is to define 16 files for each logset.
At least 2 logsets must be configured (with jlogadmin) so that when the active logset nears capacity, a switch can be made to another logset.
Switching to a logset causes that logset to be initialized, i.e. all files in that logset are cleared. The logset that is switched from remains intact.
The usual command to switch logsets is jlogadmin -l next. If there are 4 logsets defined, this command works as follows:
Current logset New logset
1 2
2 3
3 4
4 1
If a jlogdup process is running in real time to replicate to another machine, it should automatically start reading the next logset when it reaches
the end of the current logset. To effect this behavior, use the parameter terminate=wait in the input specification of the jlogdup command.
EXAMPLE
jlogadmin –XON
The larger WRITE’s which are captured in TJ can be compressed and stored to reduce the size of log sets. The jlogadmin utility allows the user
to switch ON/OFF compression, and also to set the threshold for larger TJ WRITE’s.
EXAMPLE
jlogadmin –Z500
Activates TJ log compression, setting the TJ WRITE threshold to be 500 bytes. Any TJ WRITE above 500 bytes will be compressed and
stored.
jlogadmin –Z0
Setting the threshold to 0 deactivates TJ log compression.
What is journaled? Unless a file is designated unloggable, everything updated through the jEDI interface is journaled (a file is designated unloggable by use of the jchmod –L filename command); this includes non-jBASE hash files such as directories. What is not journaled:
• Operations using non-jBASE commands such as the ‘rm’ and ‘cp’ commands, the ‘vi’ editor.
• Index definitions.
• Trigger definitions.
• When a PROGRAM is cataloged the resulting binary executable file is not logged.
• Internal files used by jBASE such as jPMLWorkFile, jBASEWORK and jutil_ctrl will be set to non-logged only when they are automatically created by jBASE. If you create any of these files yourself, you should specify that they are not logged (see note on CREATE-FILE below). You may also choose not to log specific application files.
It is recommended that most application files be enabled for transaction journaling. Exceptions to this may include temporary scratch files and
work files used by an application. Files can be disabled from journaling by specifying LOG=FALSE with the CREATE-FILE command or by
using the -L option with the jchmod command. Journaling on a directory can also be disabled with the jchmod command. When this is done, a
file called .jbase_header is created in the directory to hold the information.
Remote files are disabled for journaling by default. Individual remote files can be enabled for journaling by using QL instead of Q in attribute 1
of the Q pointer, e.g.
<1>QL
<2>REMOTEDATA
<3>CUSTOMERS
EXAMPLE
JBC_SOB JediInitREMOTE CUSTOMERS darthv.jbaseintl.com
In general, journaling on specific files should not be disabled for “efficiency” reasons as such measures will backfire when you can least afford it. For example, assume you accidentally deleted a file called CUSTOMERS. In this case you would probably want to log users off while it is restored, whereas certain other files may not require this measure. The mechanism to restore the CUSTOMERS file would be to selectively restore the image taken by a jbackup and then restore the updates to the file from the logger journal.
If required, use the jlogdup rename and renamefile options to restore the data to another file.
NOTE: In order to preserve the chronological ordering of the records, do not use an SSELECT command on the time field. This may not produce the correct ordering (multiple entries can occur during the same time period, the granularity being one second).
• Tape device/tape failure. In the event of a tape device failure, the device has to be repaired or replaced; in the event of a tape failure, the tape should be replaced. In either case the disk-based transaction log file is still valid. The start time of the last execution of the jlogdup to tape operation was saved automatically by either the start_tj or backup_jbase script.
• Problem with the tape during the dump of transaction log file information created during the backup/verify: run recover_jbase after replacing the tape.
System/disk problem
The backup verified, so this is the backup set to be used for recovery by the recover_jbase script.
NOTE that the jlogdup process to tape is still valid: those transactions which have been dumped to tape can still be recovered.
The diagram above represents the use of Transaction Journaling on a stand-alone system. Why would we use this kind of setup? In the event of
system failure, the vast majority of processing which has taken place since the last system backup can be recovered and replayed. This is vital
for those situations where the transaction cannot physically be repeated. The majority of these transactions can be replayed during system
recovery when TJ is utilized.
Journal Configuration
The Transaction Journal will be configured with two logsets; logset1 and logset2. Each of these logsets will occupy a separate volume partition
on disk; this will allow for correct size monitoring of the logsets. The statistics of the logset usage indicated by the jlogstatus command are not obvious at first glance. What is displayed is the proportion of the volume that has been used. Naturally, if the volume is shared by one or more logsets and/or other data files, then the percentage full will not necessarily reflect the percentage of the volume used by the transaction log. If the logset is contained within its own volume, then the figures are a fairer reflection of the TJ logset usage (albeit with a certain storage overhead being present). Correct automatic invocation of the Log Notify program relies on the accuracy of the percentages obtained.
Also if the logsets share a volume with other data, there is the possibility that the writing to the transaction log file may abort due to lack of
space within the volume. The logset volumes should be created large enough for the expected population of updates between logset switches:
i.e. if the logsets are switched every night at midnight, then the logset volume should be large enough to contain all the updates for a whole day
(plus contingency).
The schematic shows the same system, except this time equipped with two tape decks. The advantages of this configuration over the previous
are as follows:
• For the majority of the time during the day, there is a tape deck free for other uses; either for system or development usage.
• This configuration allows for tape deck redundancy. In the event of a deck failure, the previous scenario can still be maintained while the tape
deck is repaired or replaced.
• The jlogdup process can be left running during the backup/verify. This is the most important advantage over the previous scenario. Any database updates which are performed during the backup/verify are likely not only to be logged to the disk-based transaction log file, but also to the tape. This eliminates the lag between backing up the system and ensuring that database updates are logged to an external medium.
• The disadvantage of employing this configuration is that in the event of a system (or disk) failure, the machine has to be taken offline for the
duration of the system restore process as well as the Transaction Log restore from tape. As time is money, this approach may be prohibitively
costly to the organisation.
The architecture depicted above shows the use of a Failsafe or Hot Standby machine. This allows for a failure of the live main machine (Nodej
in this case). Unlike the previous configuration where the disk-based transaction logs are written to an external medium (tape), this con-
figuration will enable database updates to be replayed to a standby machine, and indeed to the database on that standby machine, shortly after
the update has been made (and logged) to the live machine.
It is assumed that for the case of a full system reload, there is some external medium available for the operating system reload and con-
figuration. This could also be contained on the standby machine as a system image. In the latter case, enough disk space should be available to
hold this system image.
The processor/disk configuration on the standby machine should be fast enough so as not to lag too far behind database updates on the live machine. If the standby machine is unable to cope with the database update rate, the live and standby machines’ databases may become unacceptably out of sync: there may be too many disk-based transaction log entries on the live machine which have not been transferred via jlogdup to the standby machine.
If the Hot Standby machine is to be used within a fast recovery mechanism, then the following is required:
The network between the two machines should be fast and reliable.
The database on the standby machine must be sufficiently up-to-date, with reference to the live machine, as to be acceptable.
During the period when the live machine is unavailable, then the standby machine should be able to handle failures. A minimum configuration
should be that Transaction Journaling should be initiated on the standby machine and the transaction log file produced should be backed up to
an external medium (tape?).
HOT STANDBY MACHINE TO BE USED AS A LIVE MACHINE REPLACEMENT DURING SYSTEM RECOVERY
If the intent is that the standby machine becomes a temporary replacement for the live machine, then ideally the standby machine should be of
similar configuration to the live machine.
Transaction Journaling is started on Nodej; thus producing a transaction log file of updated records.
A jbackup/jrestore sequence is initiated from Nodej by means of the script Backup_Restore. This will take a snapshot of the database on Nodej and reproduce it on Nodek. The jbackup option ‘-s /tmp/jbstart’ is used to create a time-stamp file for later use.
find /JBASE_APPS /JBASE_SYSFILES /JBASE_ACCOUNTS -print | jbackup -s /tmp/jbstart -v -c | rsh Nodek -l jbasesu jrestore -N
Once this sequence completes, the updates which have occurred on Nodej since the start of the sequence need to be applied to the data-
base on Nodek. This could be achieved with:
jlogdup -u10 input set=eldest start=/tmp/backup.logset0 terminate=wait output set=stdout | rsh Nodek /GLOBALS/JSCRIPTS/logrestore
This will start the updates from the oldest logset (set=eldest); the first database update will be at the time the backup stats file was written
(start=/tmp/backup.logset0), i.e. the start of the backup; the transfer procedure will wait for further updates (terminate=wait). The method
employed to effect the transfer is by means of a pipe. Data from the transaction log is put into the pipe (output set=stdout); this data is taken
from the pipe by means of the logrestore command, initiated on Nodek by means of the rsh command (remote shell). The logrestore command
sets up a jBASE environment and then initiates a jlogdup command, taking its input from the pipe (input set=stdin) and updating the database
on Nodek (output set=database).
/GLOBALS/JSCRIPTS/logrestore Script
JBCRELEASEDIR=/usr/jbc
JBCGLOBALDIR=/usr/jbc
PATH=$PATH:$JBCRELEASEDIR/bin
LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:/usr/ccs/lib
JBCOBJECTLIST=/JBASE_APPS/lib:   # (or whatever it is for your usual users)
export JBCRELEASEDIR JBCGLOBALDIR JBCOBJECTLIST PATH LD_LIBRARY_PATH
jlogdup input set=stdin output set=database
The status of the jlogdup process can be monitored by running jlogstatus from a dedicated window:
jlogstatus -r5 -a
It is usual to configure more than one set of transaction log files. Initially logging will start to, say, set 1; and at some point logging to logset 2
will be initiated. This will usually be done daily just before each jbackup to tape. Then, typically, on the next day, logging will be switched back
to logset 1 (and overwriting the previous transaction log) and the daily jbackup started.
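A minimal sketch of this daily cycle (a sketch only: it assumes jlogadmin's log set switch option, -l, as used in the start_tj script later in this document, and that jbackup's archive stream on stdout is redirected to the tape deck):
jlogadmin -l 2                                 # day 1: switch logging to log set 2
find /bnk -print | jbackup -v -c > /dev/rmt/0  # daily backup to tape
jlogadmin -l 1                                 # day 2: switch back, overwriting log set 1
find /bnk -print | jbackup -v -c > /dev/rmt/0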
Database update metrics should be established to determine the correct size of the logsets. The jlogstatus display should be monitored to
ensure that the logsets don't fill the disk! Transaction Journaling can be configured to perform certain actions when the transaction log disks
begin to fill past a configurable watershed.
There is a configurable flush rate parameter which may be adjusted for Transaction Journaling. This parameter governs how often transaction
log file updates, held in memory, are flushed to disk. The minimum period between transaction log file flushes is 5 seconds. This limits lost
transaction log file updates to, at most, the last 5 seconds.
In the event of failure of the disk holding the transaction log file as well as the disk holding the database, the lost data is limited to those trans-
actions which have been logged to disk but not transferred to the standby machine, plus the logging of those transactions which have still to
be flushed to disk. This situation is less quantifiable, but as the transaction log file reflects a sequential record of database updates over time,
manual investigation would be required to determine the latest updates which were actually applied to the standby machine. Obviously, the
database update transaction rate on the live machine governs the magnitude of this investigation.
Although the majority of database updates can be preserved after a system failure, what is not necessarily preserved is database integrity. The
use and management of transaction boundaries within application code ensures that only complete (multi-file) updates make it to the data-
base. During system recovery (rebuild) only complete database transactions are rolled forward; those transactions which were not complete at
the time of system failure are not written to disk. When initiating a transaction through the jBC command TRANSTART, the use of the option
SYNC ensures that a memory flush will take place upon a transaction end or abort, and that the transaction log file is flushed
to disk, thus eliminating any delay in writing to disk. Subsequent to system failure, manual investigation is now targeted at complete applic-
ation transactions rather than individual database updates, albeit at the possible expense of system performance.
RECOVERY PROCEDURE
Wait for the TJ restore process on the standby system (Nodek) to finish. This will be known when the statistics on the number of records read
remains constant.
Establish the validity of the database on Nodek and the transactions to be re-entered (if necessary).
Restart Nodek at run level 1, before the communications daemons are started. Create scripts to switch the IP addresses of the network and
VTC cards to those formerly assigned to Nodej. Continue the booting of Nodek.
Re-start the logger to a fresh set of transaction log files using the jlogadmin command.
Reload the operating system and jBASE on Nodek (if necessary). This can be contained in a system backup tape, held securely. This system
backup should contain a skeleton system, including a valid jBASE installation, C++ compiler, peripheral and user logon definitions. Any
upgrades to the system should be reflected in this system backup.
find /JBASE_APPS /JBASE_SYSFILES /JBASE_ACCOUNTS -print | jbackup -s /tmp/jbstart -v -c | rsh Nodej -l jbasesu jrestore -N
where:
The filesystems /JBASE_APPS etc. identified are examples for a jBASE system
-l jbasesu identifies a jBASE user to be used for restores. This is important if indexes are to be rebuilt on restore (the user should have access
to files and subroutines).
Enable jBASE logons. At this point it is safe for users to start using the system. Any updates since the start of the backup will be logged in the
TJ log.
Once the backup/restore process has fully completed, the updates which have accrued on Nodek since its start can now be replayed thus:
jlogdup input set=current output set=stdout terminate=wait start=/tmp/jbstart | rsh Nodej -l jbasesu /JBASE_SYSFILES/logrestore
Once the two machines are in sync again both machines can be brought down, the network and VTC card addresses swapped, and users can be
allowed to re-logon to the Nodej machine.
Once /JBASE_APPS holds the developer sources in normal Unix files, the use of a nightly backup and a RAID configuration will be suf-
ficient.
When developers BASIC and CATALOG their programs, they will go into their own directories rather than into /JBASE_APPS. At certain
points in time, when no users are active, the programs and subroutine libraries will be copied en-bloc to both the Nodej and Nodek machines
in /JBASE_APPS. This is the correct way to release new software and it needs to be done on both machines to ensure consistency of applic-
ations in the event of failure.
When an application developer changes an index or trigger definition, it should be done on files in their own environment. At some point you
will want to release them into the live community. This again is best done when no users are active. To do this you will need to release the
changed application and subroutine libraries (as shown above) and then release the new trigger and/or index definitions and apply the same
changes to both the Nodej and Nodek machines. The indexes will need to be rebuilt on both machines.
All changes to jBASE scripts kept in the /JBASE_SYSFILES will need to be manually duplicated.
Many of the synchronization requirements should be checked nightly in a cron script and errors reported. Such a script could verify
the password file, the jBASE spooler configuration, the Unix spooler configuration and the scripts in the /JBASE_SYSFILES file system; check
that the programs and subroutine libraries are identical on both Nodek and Nodej; check that the index and trigger definitions are
identical on both Nodek and Nodej; and check that the cron jobs, and the scripts they invoke, are the same.
This verification of the two machines could also be run following a rebuild.
The configuration above shows a small, but significant refinement to the previous configuration. Essentially, the transaction log file is being
replicated to Nodek, with the logrestore script showing the following change:
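(The amended line is elided in the source; a sketch, assuming the documented 'set=logset' output specification, which directs output to the current log set:)
jlogdup input set=stdin output set=logset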
Thus, all updates transferred from the transaction log file on Nodej are updated to the transaction log file on Nodek. Another jlogdup process is
initiated thus:
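(This second command is elided in the source; a sketch, assuming the standby's current log set is replayed on to its own database:)
jlogdup input set=current terminate=wait output set=database &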
This takes these updates and applies them to the database. The reason for this becomes apparent when a recovery is required: because there is
now a copy of the transaction log file on the standby machine, interrogation of that log file shows clearly which updates have been
transferred from the live machine. If the jlogdup process is allowed to continue until all updates in the transaction log file have been applied to the data-
base, then the recovery position can be established far more easily than by interrogating the database.
This should be the minimum standard configuration utilizing Transaction Journaling. The assumptions made here are that jBASE will be the
database (native) server.
Transaction boundaries are defined within application code by three jBC commands:
• TRANSTART
• TRANSEND
• TRANSABORT
Transactions which are not completed in their entirety will be completely “rolled back” by jBASE, when commanded to do so by the
TRANSABORT command. Upon execution of the TRANSEND command, all or none of the constituent database updates will be actioned, ensur-
ing database consistency. Any transactional recovery will be achieved through the use of jBASE facilities.
Transaction Journaling has been configured, for example, with two logsets:
1. /bnk/bnk.jnl/logset1
2. /bnk/bnk.jnl/logset2
where: logset1 and logset2 are links to two mounted filesystems each containing the corresponding transaction log file definitions.
TJ is then activated by a script similar to start_tj, which activates transaction logging and also the backup of the transaction logs to tape
(/dev/rmt/0 in this case).
A backup of the database (using the backup_jbase script) is initiated prior to the execution of Close of Business procedures. Logsets are
“switched” following the successful completion of backups.
• Disk-based transaction log file entries are still allowed to be dumped to tape. When there is no Transaction Logging activity, all out-
standing transactions have either been logged to tape or rolled back. NOTE: The time allowed for transactions to complete is dependent on
application design. The end of dump activity can be checked by use of the jlogstatus command.
The command:
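(The command itself is elided in the source; a sketch, assuming the whole /bnk directory is dumped via jbackup with its archive stream redirected to the backup deck:)
find /bnk -print | jbackup -v -c > /dev/rmt/0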
will dump all data to tape below /bnk. As all the transaction log data (bnk.jnl) has already been dumped to tape prior to the backup, this
directory can reasonably be excluded from the backup configuration.
NOTE: The use of the “-c” option will allow index files to be dumped, avoiding the need to rebuild indexes during a restore.
NOTE 2: Once the backup has completed and been verified, a new tape for tape logging replaces the last backup tape.
The use of Transaction Journaling in this configuration allows for the recovery of transactions up to the point of failure. This configuration
provides assistance to the administrator in identifying those transactions which have not been written to tape prior to system failure. The tape
(set) contains a sequential history of database updates since the last backup.
• The operating system and configuration (device assignments, user login information, etc.).
• This skeleton system must be kept up to date. Any changes to the operating system or jBASE configuration must be reflected in this skel-
eton system as a standard procedure; any such change should trigger the production of a new skeleton system.
If the operating system and/or jBASE is deemed corrupt or there has been a catastrophic disk failure, resulting in the loss of a disk, then the
system should be reconstructed as a skeleton system as discussed above. The details of this recovery are out of the scope of this document.
Once the system has been brought to an operational state, the database needs to be brought back to a known state. The last backup set pro-
duced is recovered by the recover_jbase script. This not only restores the jBASE database including saved indexes, but also replays all com-
pleted transactions which have been transferred to tape and initiates transaction logging to tape.
If there has been an application/database error which has resulted in the decision to perform a complete restore of the system, and the
error can be identified as having taken place at a particular time (whether precisely or approximately), then the whole of the transaction log
should not be replayed. Using the “end=timespec” option of jlogdup will cause the transaction log replay to terminate at the specified time
rather than at the end of the logset (see the jlogdup section for the valid format of timespec). The recover_jbase script will prompt for a time or assume
EOS (i.e. all the transaction log is to be replayed).
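For example (a sketch; the hh:mm timestamp is illustrative, and the valid timespec formats are listed in the jlogdup section):
jlogdup input set=current start=/tmp/jbstart end=20:45 output set=database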
Warning: If an “end=timespec” parameter has been specified, then the time chosen may cause transactions which began before this time not to
be rolled back: additional database updates pertaining to such transactions, bounded by the corresponding TRANSEND commands, may
exist on the transaction log file but will not be executed.
This configuration, being a jBASE-only solution will allow for on-line backups to be taken prior to Close of Business procedures.
With this configuration, jBASE will be the sole database server. Communication between the application server(s) and the database server will
be by using jRFS within jBASE. This allows multiple application servers to have pointers/stubs as file definitions. These pointers/stubs ref-
erence files which exist on the database server. jRFS mechanisms allow for the updating of the database through jRFS server processes, from requests
made on the application servers. The implication of this is that each application server has no direct, individual database storage but shares
access to a central (jBASE) database. As there is only one database server, Transaction Journaling facilities will be available, using the same
mechanisms as the Stand-Alone system above.
This configuration uses jBASE as a gateway to another DBMS (such as Oracle or DB2).
jBASE will handle any supported relational database connectivity (such as Oracle/DB2 etc.) through the appropriate jEDI driver. Data mapping
will be achieved through the corresponding RDBMS stub file definitions. The jBASE/RDBMS stub file definitions can exist on one of various
locations:
• On the Application Servers: this would potentially create a locking minefield, as the locked state of database entities would have to be
communicated between the Application Servers.
• On the Database Server (1): Application Servers communicate over NFS mounts to RDBMS stub files defined on the Database Server. The
downside of this approach is that RDBMS client components (at least) have to exist on each of the Application Servers. There is also a problem
with managing database locks; this can only be achieved by inefficient application-level lock mechanisms, whereby the locks are held within a central
filesystem and are accessed by all Application Servers, utilizing OS locks to manage access to the lock table.
• Transaction management (i.e. the use of TRANSTART, TRANSEND and TRANSABORT programming commands) within the Application Serv-
ers is handled within jBASE as for the Stand-Alone system.
The Hot Standby configuration using jBASE as the database server has the same attributes as previously described in the Cluster Systems with
the exception that all database updates to jBASE are duplicated to a separate server (or remote in the case of disaster recovery). The database
duplication process, achieved by the jlogdup facility, would normally be an operation in addition to dumping the transaction log data to a local
tape device.
• The Transaction journal is copied to tape (or other external medium) on a continuous basis by means of the jlogdup facility.
• A backup of the database (using jbackup) is initiated each night at 12:01 am (for example) to the tape deck /dev/rmt/0 (for example).
• A jlogdup process will be initiated on the database server which will, in tandem with a corresponding jlogdup server process on the standby
server, transfer all transaction updates from the transaction log on the live cluster to the transaction log on the standby server.
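A sketch of such a pairing, reusing the rsh/pipe pattern from the Hot Standby examples earlier in this document (the host name is illustrative):
jlogdup input set=current terminate=wait output set=stdout | rsh standbyhost -l jbasesu /GLOBALS/JSCRIPTS/logrestore &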
If a backend RDBMS is configured then Hot Standby/disaster recovery is handled by the RDBMS; jBASE Transaction Logging is not used as
the recovery mechanisms are handled by the RDBMS. The RDBMS recovery mechanisms are outside of the scope of this document.
The updates contained within a transaction are cached until a TRANSABORT or TRANSEND command is executed for that transaction. No
RDBMS activity takes place when the TRANSABORT command is executed, whereas the TRANSEND can result in many RDBMS interactions
before success or failure is detected. The application code within T24 is unaware of the underlying backend database.
setup_tj
#! /bin/ksh
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
start_tj
#! /bin/ksh
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
jlogadmin -l 1 -a Active
stop_tj
#! /bin/bash
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
jlogadmin -a Off
start_jlogdup
#! /bin/ksh
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
# the transfer command itself is elided in the source; assumed, following the
# examples earlier in this section:
jlogdup input set=current terminate=wait output set=stdout | rsh Nodek -l jbasesu /GLOBALS/JSCRIPTS/logrestore &
stop_jlogdup
#! /bin/ksh
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
# the kill command is elided in the source; jlogadmin provides a kill facility for
# jlogdup processes (see the Admin Log messages), assumed here:
jlogadmin -k [jlogdup process id]
backup_jbase
#! /bin/ksh
export JBCRELEASEDIR=/data/colins/5.0_rels/jbcdevelopment
export JBCGLOBALDIR=/data/colins/5.0_rels/jbcdevelopment
export LD_LIBRARY_PATH=$JBCRELEASEDIR/lib:$LD_LIBRARY_PATH
typeset -u TAPEOUT
typeset -u REPLY
typeset -u BACKUPOK
TAPEOUT=
# prompt elided in the source; assumed: ask whether the transaction logs are being dumped to tape
print -n "Is transaction logging to tape in use (Y/N)? "
while [ "$TAPEOUT" != Y ] && [ "$TAPEOUT" != N ]
do
read TAPEOUT
done
if [ "$TAPEOUT" != N ]
then
print -n "Has all logging to tape finished - press any key when it has"
read REPLY
fi
if [ "$TAPEOUT" = Y ]
then
print "Please remove the tape for logging and replace with the backup tape"
REPLY=N
while [ "$REPLY" != Y ]
do
read REPLY
done
fi
# the jbackup/verify commands are elided in the source; they run here, after which
# the operator confirms that the backup completed successfully
BACKUPOK=N
while [ "$BACKUPOK" != Y ]
do
sleep 5
read BACKUPOK
done
if [ "$TAPEOUT" = Y ]
then
# assumed: prompt to remount the logging tape before logging to tape resumes
read INPUT
fi
recover_jbase
#!/bin/ksh
if [ -z "$1" ]
then
PS3="Option :"
# the menu items are elided in the source; assumed: option 1 restores the last
# backup and replays the transaction logs, option 2 restarts logging to tape
select OPT in "Restore backup and replay transaction logs" "Restart transaction logging to tape"
do break; done
if [ -z "$REPLY" ]
then
exit
fi
else
REPLY=$1
fi
if [ $REPLY = 1 ]
then
# prompts and the jrestore of the last backup tape are elided in the source
read DONE
read REPLY
if [ $REPLY = "y" ]
then
read DONE
echo -n "Enter a time to terminate the duplication process (or RETURN for all logs)"
read ENDTIME
if [ -z "$ENDTIME" ]
then
: # replay command (whole transaction log) elided in the source
else
: # replay command with end=$ENDTIME elided in the source
fi
fi
else
read DONE
jlogdup input set=current start=$JBCRELEASEDIR/logs/jlogdup_to_tape_start terminate=wait output set=serial device=[Device Spec] &
fi
jlogadmin
The jlogadmin command allows for the administration of the jBASE Transaction Journal. The jlogadmin command will enabled for interactive
usage when invoked by the super-user/Administrator; execution by other users being restricted to read-only. All administration tasks con-
tained within the jlogadmin utility can also be invoked from the command line, using jlogadmin, with optional parameters.
When the jlogadmin command is executed interactively, navigation to the next field is by using the tab key or cursor-down key and to the pre-
vious field by the cursor-up key. Each field can be modified using the same editor type commands as available in jsh. Changes to a particular
field are effected by the <Enter> key and CTRL-X is used to exit from interactive mode.
Interactive Configuration
INTERACTIVE DISPLAY
Description of Fields
STATUS
Specifies the current transaction journal status, which can be On/Active, Off/Inactive or Susp/Suspended. Note: when the status is changed to
Suspended, all transactions which would update the transaction log file are also suspended, awaiting a change of status.
CURRENT LOG SET
Specifies the current log set in use. There are four possible log sets, numbered 1 to 4. An entry of 0 indicates that no log set has been chosen
at this time.
EXTENDED RECORDS
Specifies whether extended journal records are written; these carry additional information with each update, such as:
• the application id
SYNC TIME
Specifies the number of seconds between each synchronization of the log set with the disk; all memory used by the log set is force-flushed to
disk. Should the system crash, the maximum amount of possible data loss is limited to the updates which occurred since the last log set syn-
chronization.
%1 == {INFORMATION: | WARNING: | FATAL ERROR:} From user root at Wed Sep 04 12:38:23 2002
%3 == Depends upon the actual error message e.g. "Error number nnn while reading from file /dev/xxxxx"
NOTE: The message is designated INFORMATION, WARNING or FATAL ERROR. This designation can be used by the log notify program to
decide on a course of action. The messages that can be logged are:
Log file warning threshold set to p initial percentage thereafter every additional q percent or n seconds  Yes
Kill initiated on jlogdup process id pid : Process id pid from port n  Yes
Termination Statistics: usr x, sys y, elapsed z, r records read from current log set number n : r records, b blocks, rb record bytes, e errors in file  Yes
WARNING THRESHOLD
If the amount of space consumed in the file system upon which the active logset resides exceeds the specified threshold, the log notify
program is run. Individual files in a logset have a capacity of 2GB. If the logsets are not switched, files in a logset can grow to the 2GB limit without
the file system reaching the threshold capacity. If this happens, journaling will cease to function predictably and normal database updates may
fail.
File definitions:
As indicated above, the maximum size of an individual file is 2GB. It is clear that if a single file were used for the log file, then this would likely
be insufficient for most realistic application environments. Therefore the administrator is able to set up a log set consisting of a maximum of
sixteen files, thus enabling a maximum log set of 32GB. The configuration will allow for a maximum of four log sets. Usage and switching of the
four log sets will be described in appropriate sections. If the file specified by the administrator does not already exist, then it will be created
automatically.
COMMAND-LINE SYNTAX
In addition to the interactive screen setup facility, there are options which can be added to the jlogadmin command execution. This allows the
administrator to create scripts which can be run either at pre-defined times or intervals; or in response to transaction journal events (usually
error handling events).
jlogadmin -{options}
SYNTAX ELEMENTS
Option Description
-a status Set the journal status: On/Active, Off/InActive or Susp/Suspend
-d n Delete files from log set n. Note: This option may not be combined with others
-h Display help
-i [1-4] filename{,filename...} {-o} Define the file(s) comprising the specified log set (files which do not already exist are created
automatically)
-o Perform operation without checking if the specified log set is empty. Used with -f and -t.
-tn Truncates log set n. The log set may not be the current switched set. This option ensures that disk space will be freed and is
sometimes preferable to "rm", which may not free the disk space if any process still has log files open.
-Znn Enables TJ log compression, where ‘nn’ stands for the threshold of TJ WRITES in bytes
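For example (a sketch using the options above, together with the -l switch option seen in the start_tj script):
jlogadmin -l 2 -a Active   # switch logging to log set 2
jlogadmin -t1              # truncate the now-inactive log set 1, freeing its disk space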
Prior to archiving a logset, jlogstatus -av will display “Not Archived” for each logset which has not been archived.
For example, the commands below archive log set 1 to the directory /home/tjarch, and subsequently import an archived set from its first file:
jlogadmin -e 1,/home/tjarch
jlogadmin -I /home/tjarch/logdev1
Files:
/home/tjarch/logdev1
/home/tjarch/logdev2
During recovery procedures this command may be used to determine the correct logset to be restored. The embedded statistical information
contained within the first file of the import specification will be used to update information in the jediLoggerConfig file, so that subsequent log-
set manipulation may proceed accurately.
jlogstatus
The jlogstatus command displays the status of the jBASE Transaction Journal. In its simplest form, the jlogstatus command shows a summary
of the current Transaction Journal activities. Additional command line options are available for more verbose output. The jlogstatus
command can also be used to present a rolling status screen, using the ‘-r n’ option, which will update the display every ‘n’ seconds.
SYNTAX
jlogstatus -options
SYNTAX ELEMENTS
Option Description
-h display help
-v verbose mode
jlogstatus -a -v -r 5
This will display all information and will refresh every 5 seconds.
Journal file sets switched: 10:41:21 08 APR 1998 , by root from port 9
Full log warning threshold: 70 percent , thereafter every 1 percent or 300 secs
Journal files synced every: 10 seconds , last sync 10:49:59 08 APR 1998
Current log file set: 1, date range 10:41:21 08 APR 1998 to 10:49:59 08 APR 1998
Status log set 1 (current): 2 files, 100000 records, 20415568 bytes used
Date range 10:41:21 08 APR 1998 to 10:49:59 08 APR 1998
Status log set 2: 2 files, 100000 records, 20415568 bytes used
Date range 10:41:21 08 APR 1998 to 10:49:59 08 APR 1998
JLOGSYNC
You can use options in jlogadmin so that the jBASE processes themselves perform this file synchronization more often. The default is every 10
seconds, meaning that in the event of a system failure you will lose at most 10 seconds' worth of updates.
The use of the jlogsync program means that file synchronization is performed by the jlogsync process instead of by individual jBASE processes,
thereby alleviating the synchronization overhead from the update processes. The jlogsync process is therefore not mandatory; however, in a large install-
ation it may provide beneficial performance gains.
SYNTAX
jlogsync -options
SYNTAX ELEMENTS
Option Description
-v verbose mode
JLOGDUP
SYNTAX
jlogdup {options} input-specification output-specification
SYNTAX ELEMENTS
Option Description
-f used with the -v or -V option; shows information for the next (future) update; by default information for past updates is displayed
-h display help
INPUT_SPEC/OUTPUT_SPEC
The input/output specification can specify one or more of the following parameters
Parameter Description
device=file%dev (S) the file name for SERIAL device. Can be more than one
renamefile=file (O) use rename file list of format ‘from,to’ to rename files
set=current (IL) begin restore/duplication using the current log set as input
set=stdin (IT) the input data comes from the terminal stdin
set=logset (OL) the output is directed to the current log set as an update
terminate=wait (I) switch to elder log sets as required and wait for new updates
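As an illustration of these specifications (a sketch; the tape device and rename-file path are illustrative), a restore from tape which renames files as it replays might look like:
# /tmp/renames contains one 'from,to' pair per line
jlogdup input set=serial device=/dev/rmt/0 output set=database renamefile=/tmp/renames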
The time specification, used in the ‘start=’ and ‘end=’ specification can be one of the following formats:
timespec meaning
filename regular file, use the time the file was last modified
HOST
The IP address or the DNS name of the host to use for socket transfers
PORTNUM
The TCP port number to use for socket transfers.
KEY
The string to be used as the encryption key for the transfer of journal entries.
METHOD
The encryption scheme to use for the transfer of journal entries. This mechanism utilizes OpenSSL high-level cryptographic functions. The valid
specifications for encryption are:
• RC2
• BASE64
• DES
• 3DES
• BLOWFISH
• RC2_BASE64
• DES_BASE64
• 3DES_BASE64
• BLOWFISH_BASE64
JLOGMONITOR
SYNTAX
jlogmonitor -options
SYNTAX ELEMENTS
Option Description
-Cnn If the file system utilization of the journal log exceeds nn% full then an error message is displayed. The error message is repeated
for every 1% increase in file system utilization.
-Dnn If the jlogdup process processes no records (or if there is no jlogdup process active), then after nn minutes of inactivity it displays
an error message. It repeats the error message every nn minutes while the jlogdup process(es) is inactive.
-E If the jlogdup program reports an error, this option causes jlogmonitor to also display an error. You can view the actual nature of
the error by either looking at the screen where the jlogdup process is active, or by listing the jlogdup error message file (assuming
the –eERRFILE option was used).
-h display help
-Inn The status of the Journaler can be ACTIVE, INACTIVE or SUSPENDED. If the status of the journaler is either INACTIVE or
SUSPENDED (with jlogadmin) for more than nn minutes, it displays an error message. The error message will be repeated every
nn minutes that the journaler is not active
-Snn Use this option to determine if any updates are being applied to the journal logs. If no updates are applied to the current journal
log set for nn minutes it displays an error message. It repeats the error message for every nn minutes of system inactivity.
NOTES
You must specify at least one of the options, -C, -D, -E, -I or -S.
EXAMPLE
The command "MESSAGE * %" is executed for every message sent to the screen by jlogdup. The jlogmonitor specially interprets the use of the
% by the program and will be replaced with the error message.
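A sketch of a typical invocation combining several of the options above (the thresholds are illustrative, and the option which attaches the MESSAGE command is elided in the source):
# warn at 70% log file system utilization, flag jlogdup inactivity after 10 minutes,
# and report jlogdup errors; run in the background
jlogmonitor -C70 -D10 -E &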
What is ODBC?
Open Database Connectivity (ODBC) is an open standard Application Programming Interface (API) for accessing a database. By using ODBC
statements in a program, you can access files in a number of different common databases. In addition to the ODBC software, a separate module
or driver is needed for each database to be accessed.
The ODBC Driver Manager loads and unloads ODBC drivers on behalf of an application. It is a system component which on Windows is part of
the MDAC (Microsoft Data Access Components) package and automatically included with the latest Windows operating systems.
Odbcad32.exe is the ODBC Data Source Administrator and odbc32.lib/odbccp32.lib are import libraries to be used by client applications.
ODBC Driver
The ODBC driver processes ODBC function calls, submits SQL requests to a specific data source and returns results to the application. The
ODBC driver may also modify an application’s request so that the request conforms to syntax supported by the associated database.
In the case of jBASE, the driver is the jBASE ODBC Connector, an ODBC 3.0 compliant driver that works with most
existing ODBC-compliant applications such as MS Excel, MS Access, Crystal Reports, etc. The ODBC Connector is only available on Windows
platforms, but SQL requests may be issued against a remote jBASE instance running on other platforms.
Data Source
The data source consists of the data the user wants to access and its database management system. This can be relational databases like
Oracle, DB2 or non-relational databases like jBASE.
The following system components must be installed prior to installing the ODBC Connector:
These runtime libraries can be downloaded from Microsoft and are supplied with the following package:
Note: These runtime components are already included in Microsoft .NET Framework 2.0 SP1 or higher
Start the installation by executing jodbc32.msi. Follow the instructions on the installation wizard and complete the installation.
If the installation is successful, a new entry, ‘jBASE ODBC Driver’, will appear in the drivers list.
Use the following commands to install jBASE ODBC driver through jBASE ODBC Manager
Configuring DSN
ODBC applications usually obtain the connection details from DSNs, which may be configured via Microsoft’s ODBC Data Source Administrator
(also known as ODBC Manager / odbcad32.exe, or Control Panel → Administrative Tools → Data Sources (ODBC)). Use the following steps to
add a DSN for jBASE connectivity:
Step 1:
Click ‘Add’ in the ODBC Data Source Administrator, select ‘jBASE ODBC Driver’ and click ‘Finish’
Step 2:
The jBASE ODBC data source window will pop up; specify the DSN name and connection details.
Step 3:
Start the jbase_agent/tafc_agent on the jBASE Server machine to listen on the port defined in the last step. Use the ‘test’ button to see if the
connection can be established from the client machine to the jBASE Server.
Step 1:
In MS Excel, navigate to Data->From Other Sources and select ‘From Microsoft Query’,
Step 2:
Select the DSN that was created earlier, and click OK.
You will see all the jBASE tables available in the current directory where jAgent is running. Select the file / table from which the data
needs to be imported, and also select the fields of the table that are to be imported.
Identify a field that should be used for sorting, and specify the same in the ‘Sort by’ option. Select ‘Next’ and click on ‘Finish’
Select ‘OK’ on the ‘Import Data’ dialog box to produce the output in Table format. You will see the jBASE table data exported and displayed in
Excel.
The following additional connection properties are available:
• Timeout
• Env.Variables
Timeout: To set the connection timeout for jAgent running on jBASE Server. If this property is not set, the default connection timeout of
jAgent will be used.
Env.Variables: To set the JEDIFILEPATH on the jBASE Server. This allows any table available in the specified path to be queried. By default, the
query looks for tables / files in the default JEDIFILEPATH, or the tables / files in the current directory where jAgent is running.
These properties can be defined using the DSN configuration options in ODBC Data Source Admin.
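For example, on the server side (a sketch; the directories and the manual agent start are illustrative):
# widen the file search path before starting the agent, so DSN queries can reach
# tables outside the agent's current directory
export JEDIFILEPATH=/bnk:/bnk/data
jbase_agent &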