5 TESTING-v2
INTRODUCTION
Testing is a systematic process performed to determine whether controls are effective in design and whether the implemented controls are effective in operation. It involves understanding a process and its expected results. Testing the entire population of transactions or data is usually not possible due to time and cost constraints. Hence, a sample is drawn from the population (system resource) of sufficient quality and quantity that the results of the testing can be extrapolated into a reliable conclusion about the entire population (system resource).
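The sampling-and-extrapolation idea can be sketched in a few lines of Python (an illustrative sketch only; the population, the seeded exception pattern, and the exception rule are all hypothetical):

```python
import random

def sample_and_extrapolate(population, sample_size, is_exception, seed=42):
    """Draw a random sample and project the exception rate onto the population."""
    rng = random.Random(seed)  # fixed seed so the exercise is repeatable
    sample = rng.sample(population, sample_size)
    exceptions = sum(1 for item in sample if is_exception(item))
    rate = exceptions / sample_size
    return rate, rate * len(population)  # projected exceptions in the population

# Hypothetical population: 10,000 transactions, some missing an approval flag.
population = [{"id": i, "approved": (i % 50 != 0)} for i in range(10_000)]
rate, projected = sample_and_extrapolate(
    population, 200, lambda t: not t["approved"]
)
```

The projection is only as good as the sampling method; statistical sampling techniques would normally be used to set the sample size and confidence level.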
Testing of controls involves obtaining the population and conducting compliance tests either on the entire population and/or on samples selected from it. It may also be conducted using the utilities of audit tools. The design of controls and the reliability of their results are tested by one of the following methods:
(i) Substantive Testing: This type of testing is used to substantiate the integrity of the actual processing. It provides assurance that processing itself, rather than the controls over it, is producing valid and reliable results.
(ii) Compliance Testing: A compliance test determines whether controls are operating as designed and in accordance with policies and procedures; in other words, it tests adherence to management directives.
AUDIT TESTING
The auditor must address many considerations that cover the nature, timing, and extent of
testing. The auditor must devise an auditing testing plan and a testing methodology to determine
whether the previously identified controls are effective. The auditor also tests whether the end-
user applications are producing valid and accurate information. For microcomputers, several
manual and automated methods are available to test for erroneous data. An initial step is to
browse the directories of the PCs in which the end-user-developed application resides. Any
irregularities in files should be investigated. Depending on the nature of the audit, computer-
assisted techniques could also be used to audit the application.
One should test the critical controls, processes, and apparent exposures. The auditor
performs the necessary testing by using documentary evidence, corroborating interviews, and
personal observation. Validation of the information obtained is prescribed by the auditor's work
program. Again, this work program is the organized, written, and pre-planned approach to the
study of the IT department. It calls for validation in several ways, for example:
• Asking different personnel the same question and comparing the answers
Such an intensive program allows an auditor to become informed about the operation in a short
time. Programs are run on the computer to test and authenticate application programs that are run
in normal processing. The audit team selects one of the many Generalized Audit Software (GAS)
packages such as Microsoft Access or Excel, IDEA, or ACL and determines what changes are
necessary to run the software at the installation. The auditor uses one of these software packages to perform sampling, data extraction, exception reporting, summarization and footing of totals, and other tasks that provide in-depth analysis and reporting capability.
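Typical GAS-style tasks such as data extraction, exception reporting, and footing of totals can be sketched as follows (the transaction listing, materiality threshold, and approval field are hypothetical; a real engagement would use a package such as IDEA or ACL):

```python
# A minimal sketch of typical GAS tasks on a hypothetical transaction listing.
transactions = [
    {"id": 1, "amount": 1200.00, "approved_by": "mgr01"},
    {"id": 2, "amount": 75.50,   "approved_by": ""},
    {"id": 3, "amount": 9800.00, "approved_by": "mgr02"},
]

# Extraction: pull transactions above an assumed materiality threshold.
material = [t for t in transactions if t["amount"] >= 1000]

# Exception reporting: flag records lacking an approver.
exceptions = [t for t in transactions if not t["approved_by"]]

# Footing: recompute the control total to compare with the ledger balance.
footed_total = round(sum(t["amount"] for t in transactions), 2)
```

The auditor would compare `footed_total` against the entity's reported control total and follow up every record on the exception listing.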
(i) Entity-wide or Component Level (General controls): Controls at the entity or component level consist of the entity-wide or component-wide (IS system module) processes designed to achieve the control activities. They focus on how the entity or component manages IS in relation to each control activity.
For example, the entity or component may have an entity-wide process for configuration
management, including establishment of accountability and responsibility for configuration
management, broad policies and procedures, development and implementation of monitoring
programs, and possibly centralized configuration management tools. The absence of entity-wide processes may be a root cause of weak or inconsistent controls, since it increases the risk that IS controls are not applied consistently across the organization.
(ii) System level (General controls): Controls at the system level consist of processes for
managing specific system resources related to either a general support system or major
application.
These controls are more specific than those at the entity or component level and generally relate
to a single type of technology. Within the system level are three further levels that the auditor
should assess: network, operating system, and infrastructure application.
The auditor who is evaluating configuration management at the system level should determine
whether the entity has applied appropriate configuration management practices for each
significant type of technology (e.g., firewalls, routers) in each of the three sublevels (e.g.,
specific infrastructure applications). Such configuration management practices typically include standard configuration guidelines for the technology, together with tools to determine whether those guidelines are effectively implemented.
(iii) Business process application level: Controls at the business process application level
consist of policies and procedures for controlling specific business processes. For example, the
entity's configuration management should reasonably ensure that all changes to application
systems are fully tested and authorized.
The auditor should design and conduct tests of relevant control techniques that are effective in
design to determine their effectiveness in operation. It is generally more efficient for the auditor
to test IS controls on a tiered basis, starting with the general controls at the entity wide and
system levels, followed by the general controls at the business process application level, and
concluding with tests of business process application, interface, and data management system
controls at the business process application level. Such a testing strategy may be used because
ineffective IS controls at each tier generally preclude effective controls at the subsequent tier.
If the auditor identifies IS controls for testing, the auditor should evaluate the effectiveness of
• general controls at the entity wide and system level;
• general controls at the business process application level; and
• specific business process application controls (business process controls, interface controls,
data management system controls), and/or user controls, unless the IS controls that achieve
the control objectives are general controls.
The auditor should determine whether entitywide and system-level general controls are effectively designed, implemented, and operating effectively by determining how those controls function and whether they have been placed in operation.
The auditor generally should use knowledge obtained in the planning phase. The auditor should
document the understanding of general controls and should conclude whether such controls are
effectively designed, placed in operation, and, for those controls tested, operating as intended.
If general controls at the entitywide and system levels are not effectively designed and operating
as intended, the auditor will generally be unable to obtain satisfaction that business process
application-level controls are effective. In such instances, the auditor should
(i) determine and document the nature and extent of risks resulting from ineffective general
controls and
(ii) identify and test any manual controls that achieve the control objectives that the IS controls
were to achieve.
However, if manual controls do not achieve the control objectives, the auditor should determine
whether any specific IS controls are designed to achieve the objectives. If not, the auditor should
develop appropriate findings principally to provide recommendations to improve internal
control. If specific IS controls are designed to achieve the objectives, but are in fact ineffective
because of poor general controls, testing would typically not be necessary, except to support
findings.
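The tiered decision flow described above can be sketched as a small function (a simplification of the guidance, with hypothetical step names; real audits involve more nuanced judgments at each tier):

```python
def plan_control_tests(general_entity_ok, general_app_ok, manual_controls_ok=None):
    """Sketch of the tiered testing decision: general controls at the
    entitywide/system level gate everything below them."""
    if not general_entity_ok:
        # Ineffective entitywide/system general controls generally preclude
        # reliance on lower tiers: look for manual controls, else report.
        if manual_controls_ok:
            return "rely on manual controls"
        return "report findings and recommendations"
    if not general_app_ok:
        return "test manual/compensating controls at application level"
    return "test business process application, interface and data controls"
```

For example, `plan_control_tests(False, True)` reflects the case where weak entitywide controls make application-level testing unnecessary except to support findings.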
To assess the operating effectiveness of IS controls, auditors should perform an appropriate mix
of audit procedures to obtain sufficient, appropriate evidence to support their conclusions. Such
procedures could include the following:
• Inquiries of IT and management personnel can enable the auditor to gather a wide variety of
information about the operating effectiveness of control techniques. The auditor should
corroborate responses to inquiries with other techniques.
• Questionnaires can be used to obtain information on controls and how they are designed.
• Observation of the operation of controls can be a reliable source of evidence. For example, the auditor may observe the verification of edit checks and password controls. However, observation provides evidence only about the operation of the control at the time it is observed.
• The auditor may review documentation of control policies and procedures. For example, the
entity may have written policies regarding confidentiality or logical access. Review of
documents will allow the auditors to understand and assess the design of controls.
• Analysis of system information (e.g., configuration settings, access control lists, etc.)
obtained through system or specialized software provides the auditor with evidence about
actual system configuration.
• Data review and analysis of the output of the application processing may provide evidence
about the accuracy of processing. For example, a detailed review of the data elements or
analytical procedures of the data as a whole may reveal the existence of errors. Computer-
assisted audit techniques (CAAT) may be used to test data files to determine whether invalid
transactions were identified and corrected by programmed controls. However, the absence of
invalid transactions alone is insufficient evidence that the controls effectively operated.
• Reperformance of the control could be used to test the effectiveness of some programmed
controls by reapplying the control through the use of test data. For example, the auditor
could prepare a file of transactions that contains known errors and determine if the
application successfully captures and reports the known errors.
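The last two procedures can be sketched together: reperforming a programmed edit check against a test deck seeded with known errors, and comparing the invalid transactions a data review identifies with those the application's exception report claims to have rejected (the edit rules, records, and report contents are all hypothetical):

```python
import datetime

def application_edit_check(txn):
    """Stand-in for the programmed control under audit (illustrative)."""
    errors = []
    if txn["qty"] <= 0:
        errors.append("non-positive quantity")
    try:
        datetime.date.fromisoformat(txn["date"])
    except ValueError:
        errors.append("invalid date")
    return errors

# Reperformance: a test deck with known, seeded errors.
test_deck = [
    {"id": "E1", "qty": -3, "date": "2023-04-02", "expect": {"non-positive quantity"}},
    {"id": "E2", "qty": 2,  "date": "2023-13-40", "expect": {"invalid date"}},
    {"id": "OK", "qty": 5,  "date": "2023-04-01", "expect": set()},
]
reperformance_ok = all(
    set(application_edit_check(t)) == t["expect"] for t in test_deck
)

# Data review: compare what should have been flagged against the IDs the
# application's exception report actually rejected (hypothetical report).
data_file = [t for t in test_deck if t["id"] != "OK"]
app_exception_report = {"E1"}
missed = {t["id"] for t in data_file} - app_exception_report
```

A non-empty `missed` set would indicate a programmed control that failed to catch a known-invalid transaction; as the text notes, an empty exception report alone is not sufficient evidence that the control operated.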
Based on the results of the IS controls audit tests, the auditor should determine whether the
control techniques are operating effectively to achieve the control activities. Controls that are not
properly designed to achieve the control activities or that are not operating effectively are
potential IS control weaknesses.
For each potential weakness, the auditor should determine whether there are specific
compensating controls or other factors that could mitigate the potential weakness. If the auditor
believes that the compensating controls or other factors could adequately mitigate the potential
weakness and achieve the control activity, the auditor should obtain evidence that the compensating controls or other factors are in place and operating effectively.
In circumstances where the auditor regularly performs IS controls audits of the entity (as is done,
for example, for annual financial audits), the auditor may determine that a multiyear plan for
performing IS controls audits is appropriate. Such a plan will cover relevant key agency
applications, systems, and processing centres.
These strategic plans should cover not more than a three-year period and include the schedule
and scope of assessments to be performed during the period and the rationale for the planned
approach. The auditor typically evaluates these plans annually and adjusts them for the results of
prior and current audits and significant changes in the IT environment, such as implementation
of new systems.
Multiyear testing plans can help to ensure that all agency systems and locations are considered in the IS control evaluation process, that relative audit risk and the prioritization of systems are taken into account, and that sufficient evidence is obtained to support an assessment of IS control effectiveness, while helping to reduce annual audit resources under certain conditions. When
appropriate, this concept allows the auditor to test computer-related general and business process
application controls on a risk basis rather than testing every control every year. Under a
multiyear testing plan, different controls are comprehensively tested each year, so that each
significant general and business process control is selected for testing at least once during the
multiyear period, which should not be more than 3 years.
For example, a multiyear testing plan for an entity with five significant business process
applications might include comprehensive tests of two or three applications annually, covering
all applications in a two or three year period. For systems with high IS risk, the auditor generally
should perform annual testing.
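A rotation of this kind can be sketched as follows (the application names, the three-year cycle, and the high-risk set are hypothetical):

```python
from itertools import cycle

def multiyear_plan(applications, high_risk, years=3, start_year=2024):
    """Rotate comprehensive tests so every application is covered at least
    once per cycle, while high-risk applications are tested every year."""
    rotating = [a for a in applications if a not in high_risk]
    plan = {start_year + y: sorted(high_risk) for y in range(years)}
    slots = cycle(range(years))
    for app in rotating:
        # Spread the remaining applications evenly across the cycle.
        plan[start_year + next(slots)].append(app)
    return plan

apps = ["payroll", "billing", "inventory", "GL", "procurement"]
plan = multiyear_plan(apps, high_risk={"payroll"})
```

Here the high-risk "payroll" application appears in every year's scope, and the other four applications are each comprehensively tested once within the three-year cycle.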
Information developed in the testing phase that the auditor should document includes the
following:
• An understanding of the information systems that are relevant to the audit objectives
• By level (e.g., entitywide, system, business process application) and system sublevel (e.g.,
network, operating system, infrastructure applications), a description of control techniques
used by the entity to achieve the relevant IS control objectives and activities
• By level and sublevel, specific tests performed, including related documentation that describes the nature, timing, and extent of the tests;
• Evidence of the effective operation of the control techniques or lack thereof (e.g., memos describing procedures and results, output of tools and related analysis);
• If a control is not achieved, any compensating controls or other factors and the basis for determining whether they are effective;
• The auditor's conclusions about the effectiveness of the entity's IS controls in achieving the control objective; and
• For each weakness, whether the weakness is a material weakness, a significant deficiency, or merely a deficiency, as well as the criteria, condition, cause, and effect if necessary to achieve the audit objectives.
After completing the testing phase, the auditor summarizes the results of the audit, draws conclusions on the individual and aggregate effect of the identified IS control weaknesses on audit risk and the audit objectives, and reports the results of the audit.
The auditor evaluates the effect of any weaknesses on the entity's ability to achieve each of the
critical elements and on the risk of unauthorized access to key systems or files. Also, the auditor
evaluates potential control dependencies. For each critical element, the auditor should make a
summary determination as to the effectiveness of the entity's related controls, considering
entitywide, system, and business process application levels collectively.
The auditor should evaluate the effect of related underlying control activities that are not
achieved.
Errors in a computerized system are generated at high speed, and the cost of correcting and rerunning programs is high. If errors can be detected and corrected at, or as close as possible to, the point of their occurrence, their impact is minimized. Continuous auditing techniques use two bases for collecting audit evidence: one is the use of embedded modules in the system to collect, process, and print audit evidence; the other is special audit records used to store the audit evidence collected.
Different types of continuous audit techniques may be used. Modules for obtaining data, audit trails, and evidence may be built into the programs, and audit software is available for selecting and testing data. Some of the many available audit tools are described below:
(i) Snapshots: Tracing a transaction through a computerized system can be performed with the help of snapshots or extended records. The snapshot software is built into the system at the points where material processing occurs and takes images of the transaction as it flows through the application. These images can be used to assess the authenticity, accuracy, and completeness of the processing carried out on the transaction. The main areas to consider are where to locate the snapshot points, when the images are to be captured, and how the captured data is to be reported.
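A minimal sketch of the snapshot idea, using a hypothetical payment routine with embedded capture points (the processing points and the 1% fee are invented for illustration):

```python
import copy

snapshot_log = []  # stands in for the auditor's snapshot file

def snapshot(point, transaction):
    """Record an image of the transaction as it passes a processing point."""
    snapshot_log.append({
        "point": point,
        "image": copy.deepcopy(transaction),  # freeze the state at this point
    })

def process_payment(txn):
    snapshot("input", txn)       # image before any processing
    txn["fee"] = round(txn["amount"] * 0.01, 2)
    snapshot("after-fee", txn)   # image after a material processing step
    txn["net"] = txn["amount"] - txn["fee"]
    snapshot("output", txn)
    return txn

process_payment({"id": "P1", "amount": 200.0})
```

Comparing the successive images lets the auditor verify that each intermediate step transformed the transaction as expected.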
(ii) Integrated Test Facility (ITF): The ITF technique involves the creation of a dummy entity in
the application system files and the processing of audit test data against the entity as a means of
verifying processing authenticity, accuracy, and completeness. This test data would be included
with the normal production data used as input to the application system. In such cases, the auditor has to decide how the test data will be entered and how the effects of the ITF transactions will subsequently be removed.
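The ITF mechanics can be sketched as follows (the dummy customer ID, the ledger, and the batch are hypothetical; note how the test transaction's effects are reversed after verification):

```python
DUMMY_CUSTOMER = "ITF-9999"   # fictitious entity created to carry audit test data

ledger = []

def post(txn):
    """Stand-in for the application's normal posting routine."""
    ledger.append(txn)

# Production input with an ITF test transaction mixed in.
batch = [
    {"customer": "C-1001", "amount": 150.0},
    {"customer": DUMMY_CUSTOMER, "amount": 42.0},   # auditor's test transaction
    {"customer": "C-1002", "amount": 300.0},
]
for txn in batch:
    post(txn)

# Verify the test transaction was processed, then reverse its effects so the
# dummy entity does not distort production totals.
itf_results = [t for t in ledger if t["customer"] == DUMMY_CUSTOMER]
ledger = [t for t in ledger if t["customer"] != DUMMY_CUSTOMER]
production_total = sum(t["amount"] for t in ledger)
```

In practice the removal step is the delicate part: reversing journal entries or filtering by the dummy entity must not disturb genuine production records.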
(iii) System Control Audit Review File (SCARF): The system control audit review file
(SCARF) technique involves embedding audit software modules within a host application
system to provide continuous monitoring of the system's transactions. The information collected
is written onto a special audit file, the SCARF master file. Auditors then examine the information contained in this file to see whether some aspect of the application system needs follow-up. In many ways, the SCARF technique is like the snapshot technique combined with other data-collection capabilities. Auditors might use SCARF to collect the following types of information:
• Application system errors - SCARF audit routines provide an independent check on the
quality of system processing, whether there are any design and programming errors as well
as errors that could creep into the system when it is modified and maintained.
• Policy and procedural variances - Organizations have to adhere to the policies, procedures
and standards of the organization and the industry to which they belong. SCARF audit
routines can be used to check when variations from these policies, procedures and standards
have occurred.
• System exception - SCARF can be used to monitor different types of application system
exceptions. For example, salespersons might be given some leeway in the prices they charge
to customers. SCARF can be used to see how frequently salespersons override the standard
price.
• Snapshots and extended records - Snapshots and extended records can be written into
the SCARF file and printed when required.
• Profiling Data - Auditors can use embedded audit routines to collect data to build
profiles of system users. Deviations from these profiles indicate that there may be some
errors or irregularities.
• Performance Measurement - Auditors can use embedded routines to collect data that is
useful for measuring or improving the performance of an application system.
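The price-override example above can be sketched as an embedded SCARF routine (the standard price, the 10% leeway, and the sales records are assumptions made for illustration):

```python
scarf_file = []   # the special audit file the auditor inspects later

STANDARD_PRICE = {"WIDGET": 10.0}
OVERRIDE_TOLERANCE = 0.10   # assumed 10% leeway allowed to salespersons

def record_sale(sale):
    """Host application routine with an embedded SCARF audit module."""
    std = STANDARD_PRICE[sale["item"]]
    if abs(sale["price"] - std) / std > OVERRIDE_TOLERANCE:
        # Embedded audit module: log the policy variance for follow-up.
        scarf_file.append({"type": "price-override", **sale})
    return sale  # normal processing continues regardless

record_sale({"item": "WIDGET", "price": 9.50, "rep": "S1"})   # within leeway
record_sale({"item": "WIDGET", "price": 7.00, "rep": "S2"})   # override logged
```

The key property is that the audit module runs continuously inside the live application; the auditor periodically reviews `scarf_file` rather than re-extracting data.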
(iv) Continuous and Intermittent Simulation (CIS): This is a variation of the SCARF
continuous audit technique. This technique can be used to trap exceptions whenever the
application system uses a database management system. During application system processing, CIS checks every update to the database that arises from processing a selected transaction to determine whether discrepancies exist between the results it produces and those the application system produces. The advantage of CIS is that it does not require modification of the application system, yet it provides an online auditing capability.
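The CIS comparison can be sketched as follows (the fee rule, the seeded defect, and the trapped-update mechanism are all hypothetical; a real CIS would hook into the DBMS itself):

```python
discrepancy_log = []

def application_update(balance_cents, txn):
    # Production logic; a seeded defect drops the fee on large amounts.
    fee = 0 if txn["cents"] > 100_000 else txn["cents"] // 100
    return balance_cents + txn["cents"] - fee

def cis_expected(balance_cents, txn):
    # CIS independently replicates the intended update: a 1% fee always applies.
    return balance_cents + txn["cents"] - txn["cents"] // 100

def dbms_write(balance_cents, txn):
    """Every update arising from a selected transaction is trapped and
    compared with the result CIS computes before it is accepted."""
    got = application_update(balance_cents, txn)
    want = cis_expected(balance_cents, txn)
    if got != want:
        discrepancy_log.append({"txn": txn["id"], "got": got, "want": want})
    return got

balance = dbms_write(0, {"id": "T1", "cents": 5_000})          # matches
balance = dbms_write(balance, {"id": "T2", "cents": 200_000})  # discrepancy
```

The second update exposes the seeded defect: the application skipped the fee on the large transaction, and CIS logs the discrepancy for follow-up.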
Continuous auditing enables auditors to shift their focus from the traditional "transaction" audit
to the "system and operations" audit. Continuous auditing has a number of potential benefits
including:
1. reducing the cost of the basic audit assignment by enabling auditors to test a larger sample (up to 100 percent) of the client's transactions and to examine data faster and more efficiently than the manual testing required when auditing around the computer;
2. reducing the amount of time and cost auditors traditionally spend on manual examination of transactions; and
3. specifying transaction selection criteria to choose transactions and perform both tests of controls and substantive tests throughout the year on an ongoing basis.
The following are some of the limitations of, and preconditions for, the use of continuous audit techniques:
1. Auditors should be able to obtain resources required from the organisation to support
development, implementation, operation, and maintenance of continuous audit
techniques.
2. Continuous audit techniques are more likely to be used if auditors are involved in the
development work associated with a new application system.
3. Auditors need the knowledge and experience of working with computer systems to be
able to use continuous audit techniques effectively and efficiently.
4. Continuous auditing techniques are more likely to be used where the audit trail is less
visible and the costs of errors and irregularities are high.
Hardware testing may be performed on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS). The focus is to adopt an almost destructive attitude and to test not only the design but also the behaviour and even the assumed expectations. Testing is also intended to go up to and beyond the bounds defined in the software/hardware requirements specification(s).
Types of Hardware Testing
• Functional testing
• User Interface testing
• Usability testing
• Compatibility testing
• Model Based testing
• Error exit testing
• User help testing
• Security testing
• Capacity testing
• Performance testing
• Reliability testing
• Recovery testing
• Installation testing
• Maintenance testing
• Accessibility testing
Review of Hardware
1. Review the capacity management procedures for hardware and the performance evaluation procedures to determine:
Whether historical data and analysis obtained from the Information System (IS) trouble logs, processing schedules, job accounting system reports, and preventive maintenance schedules and reports are used to monitor performance and plan capacity.
Whether the IS management has issued written policy statements regarding the
acquisition of hardware.
Whether the criteria for the acquisition of hardware are laid out and procedures and forms
established to facilitate the acquisition approval process.
Whether the hardware acquisition plan is in concurrence with the strategic business plan
of management.
Whether the requests for the acquisition of hardware are supported by cost benefit
analysis.
Whether all hardware is purchased through the IS purchasing department to take advantage of volume discounts or other quality benefits.
Whether the environment is conducive and space is adequate to accommodate the current
and new hardware.
Determine whether the change schedules allow time for adequate installation and testing
of new hardware.
Select samples of hardware changes that have affected the scheduling of IS processing
and determine if the plans for changes are being addressed in a timely manner.
Ensure there is a cross-reference between the change and its cause, i.e. the problem.
Ascertain whether the system programmers, application programmers and the IS staff
have been informed of all hardware changes to ensure that changes are coordinated
properly.
4. Review the preventive maintenance practices to evaluate the adequacy and the timeliness of
preventive maintenance as under:
Ascertain whether scheduled maintenance has had any adverse effect on the production
schedule during peak season.
Determine whether preventive maintenance logs are retained. Identify any abnormal
hardware or software problems.
Ensure that the hardware maintenance period commences on the day the warranty or guarantee expires. This prevents paying additional maintenance charges during the warranty period and also eliminates any gap between the expiry of the warranty and the commencement of maintenance.
Review of System Software
1. Interview the technical service manager, the system programming manager, and other personnel regarding:
Review and approval process of option selection
Test procedures for software implementation
Review and approval procedures for test results
Implementation procedures
Documentation requirements
2. Review the feasibility study and selection process to determine the following:
Proposed system objectives and purposes are consistent with the request/proposal
Same selection criteria are applied to all proposals
3. Review cost/benefit analysis of system software procedures to determine they have addressed
the following areas:
Direct financial costs associated with the product
Cost of product maintenance
Hardware requirements and capacity of the product
Training and technical support requirements
Impact of the product on processing reliability
Impact on data security
Financial stability of the vendor's operations
4. Review controls over the installation of changed system software to determine the following:
That all appropriate levels of software have been implemented and that predecessor
updates have taken place
System software changes are scheduled when they least impact IS processing.
Problems encountered during testing were resolved and the changes were re-tested.
Test procedures are adequate to provide reasonable assurance that changes applied to the system correct known problems and do not create new ones.
Software will be identified before it is placed into the production environment.
Fallback or restoration procedures are in place in case of production failure.
5. Review system software maintenance activities to determine the following:
Changes made to the system software are documented.
The vendor supports current versions of the software.
6. Review system software change controls to determine the following:
Access to the libraries containing the system software is limited to individual(s) needing
to have such access.
Software must be properly authorized prior to moving from the test environment to the
production environment.
7. Review systems documentation specifically in the areas of:
Installation control statements
Parameter tables
Exit definitions
Activity logs/reports
8. Review and test systems software implementation to determine adequacy of controls in:
Change procedures
Authorization procedures
Access security features
Documentation requirements
Documentation of system testing
Audit trails
Access controls over the software
9. Review system software security to determine whether:
Procedures have been established to restrict the ability to circumvent logical security access controls.
Procedures have been established to limit access to the system interrupt capability.
Existing physical and logical security provisions are adequate to restrict access to the
master consoles.
Data redundancy is minimized by the database management system; where redundant data exists, appropriate cross-referencing is maintained within the system's data dictionary or other documentation.
The review of controls over LANs is done to ensure that standards are in place for designing and
selecting a LAN architecture and for ensuring that the costs of procuring and operating the LAN
do not exceed the benefits. The unique nature of each LAN makes it difficult to define standard
testing procedures to effectively perform a review. The reviewer should identify the following:
• The company's division or department procedures and standards relating to network design
support, naming conventions and data security.
• LAN transmission media and techniques, including bridges, routers and gateways.
Understanding the above information should enable the reviewer to make an assessment of the
significant threats to the LAN, together with the potential impact and probability of occurrence
of each threat. Having assessed the risks to the LAN, the reviewer should evaluate the controls used to minimize those risks.
Physical controls should protect LAN hardware and access points to the LAN by limiting access
to those individuals authorized by management. Unlike most mainframes, the computers in a
LAN are usually decentralized. Company data stored on a file server is easier to damage or steal
than when on a mainframe and should be physically protected. The reviewer should review the
following:
• LAN hardware devices, particularly the file server and its documentation, should be located in a secure facility with access restricted to the LAN administrator. The wiring closet and cabling should be secure.
• Keys to the LAN file server facility should be controlled to prevent or minimize the risk of unauthorized access.
• The LAN file server housing should be locked or otherwise secured to prevent removal of boards, chips, and the computer itself.
To test physical security, a reviewer should perform the following:
• Inspect the LAN wiring closet and transmission wiring and verify they are physically
secured.
• Obtain a copy of the key logs for the file server room and the wiring closet, match the key logs to the actual keys that have been issued, and determine that all keys held are assigned to the appropriate people, for example, the LAN administrator and support staff.
• Select a sample of keys held by people without authorised access to the LAN file server
facility and wiring closet and determine that these keys do not permit access to these
facilities.
• Look for LAN operating manuals and documentation not properly secured.
Environmental controls for LANs are similar to those considered in the mainframe environment. However, the equipment may not require atmospheric controls as extensive as a mainframe's. The following should be considered:
• LAN file server equipment should be protected from the effects of static electricity (e.g., an anti-static mat) and electrical surges (e.g., a surge protector).
• Air conditioning and humidity control systems should be adequate to maintain temperatures
within manufacturers' specifications.
• The LAN should be equipped with an uninterruptible power supply (UPS) that allows the LAN to ride through minor power fluctuations and to shut down in an orderly manner during a prolonged power outage.
• The LAN file server facility should be kept free of dust, smoke, and other foreign matter, particularly food.
• Backup diskettes and tapes should be protected from environmental damage and the effects
of magnetic fields.
To test environmental controls, a reviewer should visit the LAN file server facility and verify:
• Temperature and humidity are adequate.
• Static electricity guards are in place.
• Electric surge protectors are in place.
• Fire extinguishers are nearby.
• Observe the storage methods and media for backup and verify they are protected from
environmental damage.
LAN logical security controls should be in place to restrict, identify, and report authorized and unauthorized users of the LAN:
• Users should be required to have unique passwords and be required to change them
periodically. Passwords should be encrypted and not displayed on the computer screen when
entered.
• Remote access to the system supervisor should be prohibited. For maximum security an
individual should only be able to logon to the supervisor account on the console terminal.
This combination of physical security over consoles and logical security over the supervisor
account provides for maximum protection against unauthorized access.
• All logon attempts to the supervisor account should be logged by the computer system.
• The LAN supervisor should maintain up-to-date information about all communication lines
connected to the outside.
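Some of the password controls listed above can be sketched in code (an illustrative sketch only; the PBKDF2 parameters and the 90-day change interval are assumptions, not requirements from the text):

```python
import hashlib
import os
import time

# Assumed policy: passwords stored only as salted hashes, changed every 90 days.
MAX_AGE_SECONDS = 90 * 24 * 3600
accounts = {}

def set_password(user, password):
    """Store a salted hash; the cleartext password is never retained."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    accounts[user] = {"salt": salt, "hash": digest, "set_at": time.time()}

def verify(user, password):
    rec = accounts[user]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), rec["salt"], 100_000)
    return digest == rec["hash"]

def must_change(user, now=None):
    """Enforce the periodic-change requirement."""
    now = time.time() if now is None else now
    return now - accounts[user]["set_at"] > MAX_AGE_SECONDS

set_password("alice", "s3cret!")
```

The reviewer's concern is that the system stores only the salted hash and forces rotation; masking the password on entry is a user-interface control outside this sketch.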
To test logical security, a reviewer should interview the person responsible for maintaining LAN
security to ensure that person is:
• Aware of the risks associated with physical and logical access that must be minimized.
• Aware of the need to actively monitor logons and to account for employee changes.
• Knowledgeable in how to maintain and monitor access.
The reviewer should also perform the following:
• Interview users to assess their awareness of management policies regarding LAN security and confidentiality.
• Evaluate a sample of LAN users' access /security profiles to ensure access is appropriate and
authorized based on the individual's responsibilities.
Notes Prepared by Miss. Peninah J. Limo Page 21
• Review a sample of the security reports to verify that timely and effective review of these reports is occurring and that there is evidence of the review, and to look for unauthorized users; if any are found, determine the adequacy and timeliness of the follow-up procedures.
• Visually search for written passwords in the general areas of the computer that utilize the
LAN.
• If the LAN is connected to an outside source through a modem or dial-up network, attempt to gain access to the LAN through these telecommunications media using both authorized and unauthorized means.
Review a sample of LAN access change requests and determine whether the appropriate management authorizes them and whether the standard form has been utilized.