Background Material on Information Systems Audit
Volume I
New Delhi
Revised Edition: February, 2010
Committee/Department: Committee on Information Technology
E-mail: cit@icai.org
Website: www.icai.org, http://www.cit.icai.org
ISBN: 978-81-8841-335-9
FOREWORD
Information Technology is revolutionizing the way businesses operate and offer goods and services. Business Process Outsourcing (BPO) is fast becoming the order of the day and is now migrating into Knowledge Process Offshoring. The Internet has expanded our horizons with the free flow of vast amounts of information. Networks are increasingly connecting offices and diverse businesses. The world is truly transforming into a Global Village. All these developments are Information Technology driven.
The increasing use of Information Technology is not without attached risks and threats. Hacking is the order of the day, viruses and worms are commonplace, and denial of service attacks have happened. Ever increasing globalization is shrinking barriers amongst nations across the world, and developments in outsourcing and offshoring are based on sophisticated and complex Information System infrastructures. All these have resulted in a growing need for assurance services on Information Systems in India.
The Committee on Information Technology (CIT) of the Institute has been
established to identify the emerging professional opportunities in the Information
Technology sector for members and prepare them to face the threats and
challenges ahead. Since its inception, the Committee has proactively considered
the modern day requirements and initiated steps to suitably equip the members
in terms of knowledge and skills to face the challenges ahead.
The Post Qualification Course on Information Systems Audit is one of the first initiatives of the Committee to enable members to offer the value added services of IS Audit, which are in increasing demand.
It gives me immense pleasure to see this revised ISA Background Material for the Post Qualification Course on IS Audit, which enables members to develop an understanding of the intricacies of Information Systems Audit in a simple and lucid manner.
I appreciate the efforts put in by CA. K. Raghu, Chairman, IT Committee and
other members of the Committee and also the faculty members for bringing out
the revised background material.
I am sure that this course will equip you to practice in this emerging field and
enhance the range of services that can be provided by you. I wish you all the
very best in pursuing the ISA Course.
President
PREFACE
Today there is seamless integration of business processes, internal controls,
accounting systems and Information Technology. Our members need to provide
increased assurance and other value added services to clients in this scenario.
The Committee on Information Technology (CIT) of ICAI was established in the year 2000 to identify the emerging professional opportunities in the Information Technology sector for members and prepare them to face the challenges ahead. The Committee started the course on Information Systems Audit (ISA) to suitably equip members to provide assurance related services in the field of Systems & Process Assurance, a course on Computer Accounting & Auditing Techniques (CAAT) to provide hands-on training in the use of computers, and a CAAT Resources CD to provide exposure to the use of General Audit Software and Computer Assisted Audit Techniques. Additionally, the Committee has published the Technical Guide on Information Systems Audit to provide guidance on the conduct of IS Audit, and also organizes practical workshops, conferences and seminars to provide technical updates to members, apart from other developments.
The Committee on Information Technology is pleased to present this thoroughly
revised, upgraded and enhanced Background Materials for the ISA Course, keeping in view the required revisions to the course and modules in tune with developments in
the field.
I am very thankful to CA. Uttam Prakash Agarwal, President, and CA. Amarjit Chopra, Vice President, for their guidance and support in bringing out this revised material.
I would like to record my deep appreciation for the guidance and support of the Members of the Committee on Information Technology in bringing out the revised materials. I am also very thankful to all the team members involved in conceptualizing the revision, content/material development, content review and content editing/consolidation for this commendable job. I also acknowledge the significant contribution made by Ms. Indu Arora, Additional Director of Studies.
I am confident that the revised Background Materials for the ISA Course will be of significant assistance to members providing Information Assurance Services, which are currently in increasing demand.
CA K. Raghu
Chairman
Committee on Information Technology
ISA SYLLABUS
Module 1: Information Technology Infrastructure and
Communication/ Networking Technologies
Chapter 1: Introduction to Computer Hardware and Software
Types of computers - Hardware architecture of the computer - Various Input/Output (I/O) devices - ASCII and EBCDIC codes - Hardware monitoring procedures - Data and capacity management - Hardware acquisition plan - Definition of systems and application software - Various types of systems software and their brief description - Operating systems and their functions
Introduction to Database Management Systems - Introduction - Database and Database Management Systems (DBMS) - DBMS architecture - DBMS models - Database Languages - SQL - Roles and duties of a Database Administrator (DBA) and Data Administrator (DA)
Chapter 2: Introduction to Computer Networks
Basics of communication - Simplex, Half-Duplex, and Full-Duplex Communications - Asynchronous & Synchronous Communication - Multiplexing - Switching techniques - Modem - Network Categories: LAN, WAN & MAN - Network Topology - Media used in communication - Factors that influence the use of media - Factors that degrade a signal
Chapter 3: Introduction to OSI model
Various layers of the OSI model - Application layer, Presentation layer, Session layer, Transport layer, Network layer, Datalink layer, Physical layer - Networking devices - Introduction to network management - IEEE LAN standards
Chapter 4: TCP/IP and Internet
A brief history of Internet & TCP/IP - Internet Administration - Generic Top-Level Domains (gTLDs) - TCP/IP Protocol Architecture - The architecture of the TCP/IP suite - IP Addressing Scheme - The Domain Name System - Ports - Comparison between the OSI model and the TCP/IP protocol suite - Internet Services - Client/Server (C/S) Software Architectures: An Overview - Intrusion Detection Systems (IDS)
EDI, E-Commerce and ERP Systems - Role - Electronic Data Interchange (EDI Systems) - How the EDI system functions - Communication Software - Translation Software - EDI standard - Communication handler - EDI Interface - EDI Translator - Applications Interface - Application System - EDI standards - Features of ANSI ASC X12 - Features of UN/EDIFACT - UN/XML - Web Based EDI - EDI Risks and Controls - Auditor's Role in Auditing EDI - Electronic Commerce (E-Commerce) - The Advantages of E-Commerce - Types of E-Commerce Models - Enterprise Resource Planning Systems (ERP Systems) - Auditor's Role
Chapter 6: Auditing the System Development Process
IS Auditor's Role in Systems Development, Acquisition and Maintenance - IS Auditor's Role in Reviewing Developmental Phases of SDLC - Feasibility study - Requirement definition - Software acquisition process - Detailed design and programming phases - Testing phase - Implementation phase - Post-implementation review - System change procedures and program migration process - IS Auditor's Role in Project Management - Systems Development Project: Audit Checklist - Corporate Policies and Practices - User Requirements - Feasibility Analysis - Systems Design - Systems Specifications - Systems Development - Implementation - Post-Implementation
TABLE OF CONTENTS
Volume I
MODULE 1: Information Technology Infrastructure and Communication/
Networking Technologies
Chapter 1: Introduction to Computer Hardware and Software ............................. 1 - 56
Chapter 2: Introduction to Computer Networks ................................................ 57 - 104
Chapter 3: Introduction to OSI model............................................................. 105 - 134
Chapter 4: TCP/IP and Internet...................................................................... 135 - 180
Chapter 5: Introduction to Firewalls................................................. 181 - 201
Chapter 6: Cryptography................................................................................ 203 - 232
TABLE OF CONTENTS
Volume II
MODULE 4: Business Continuity Planning
Chapter 1 : Business Continuity & Disaster Recovery Plan ................................... 1 - 8
Chapter 2 : Documenting a Business Continuity Plan.......................................... 9 - 62
Chapter 3 : Business Continuity Plan Audit........................................................ 63 - 68
Module I
Information Technology
Infrastructure and
Communication/
Networking Technologies
1. Introduction to Computer Hardware and Software
Learning Objectives
To understand
Introduction
One of the key competence requirements for the Information Systems Auditor is the
detailed understanding and knowledge of how computers process information and how
their various components perform critical roles in input, processing, storage and output
of information. This basic understanding is essential to appreciate the role each
component plays in the computing architecture and the potential vulnerabilities thereof.
A computer is an electronic device that performs high speed computation of data. Collins English Dictionary describes it as "a device, usually electronic, that processes data according to a set of instructions". The actions carried out by the computer are either arithmetic or logical in nature. Its principal components are:
o Input devices
o Central Processing Unit (CPU)
o Memory/storage devices
o Output devices
In addition to these components, many others like bus, registers, accumulators etc.
make it possible for the basic components to work together efficiently. For example,
every computer requires a bus that transmits data from one part to another.
Classification of Computers
Computers are generally classified on the basis of various factors:
1. The operational principle of computers.
2. Purpose for which they are built (General or Special).
3. Their size and data processing power.
Fig. 1.1: Basis of Computer Classification - by operational principle (Analog, Digital, Hybrid) and by size and data processing power (Personal Computer, Workstation, Minicomputer, Mainframe, Supercomputer)
On the basis of their size and data processing power, computers are classified
as:
1. Personal computer:
A small, single-user computer based on a microprocessor. It can be defined as a
small, relatively inexpensive computer designed for an individual user. Its price
ranges anywhere from a few hundred pounds to over five thousand pounds. Because
these are based on the microprocessor technology, manufacturers can put an entire
CPU on one chip. Businesses use personal computers for word processing,
accounting, desktop publishing, and running spreadsheet and database management
applications. At home, personal computers are used mostly for playing games and
surfing the Internet.
Nowadays, the world of personal computers is divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although these are designed as single-user systems, it is common to link them together to form a network.
Personal computers come in two basic styles: desktop models and tower models, and many variations on these two basic types. Then there
are portable computers, which are small enough for people to carry. Portable
computers include notebook and subnotebook computers, hand-held computers,
palmtops, and PDAs.
Tower model: The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked on top of each other in a cabinet. This is in contrast to desktop models, in which these are housed in a more compact box. The main advantage of tower models is that there are fewer space limitations, which makes installation of additional storage devices easier.
Desktop model: The term refers to a computer designed to fit conveniently on top of a desk, typically with the monitor sitting on top of the computer. Desktop model
computers are broad and low, whereas tower model computers are narrow and
tall. Because of their shape, desktop model computers are generally limited to
three internal mass storage devices. Desktop models that are designed to be
very small are sometimes called Slimline Models.
Notebook computer: An extremely lightweight portable computer, typically weighing less than 6 pounds and small enough to fit
easily in a briefcase. Aside from size, the principal difference between a
notebook and a personal computer is the display screen. Notebook computers
use a variety of techniques, known as flat-panel technologies, to produce a
lightweight and non-bulky display screen of different qualities. Their computing
power is nearly equivalent to that of personal computers. They have the same
CPUs, memory capacity, and disk drives. However, all this power in a small
package is expensive. Notebook computers cost about twice as much as
equivalent regular-sized computers. They also come with battery packs, which enable them to run without being plugged in. However, the batteries need to
be recharged every few hours.
Palmtop: A small computer that literally fits in the palm. Compared to full-size computers, palmtops are severely limited, but they are good enough as phone books and calendars. Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs. Because of their small size, most palmtops do not include disk drives.
2. Workstation:
A workstation is like a single-user personal computer, but with a more powerful microprocessor and, in general, a higher-quality monitor. In networking, the term workstation refers to any computer connected to a local-area network; it could be a workstation or a personal computer.
This computer is used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, built-in network support, and a graphical user interface. Most workstations also have a mass storage device like a disk drive, but a special type of workstation, called a Diskless Workstation, comes without a disk drive.

Fig. 1.2: Workstation

The most common operating
systems for workstations are UNIX and Windows NT. Like personal computers, most
workstations are single-user computers. However, workstations are typically linked
together to form a local-area network, although they can also be used as stand-alone
systems.
3. Minicomputer:
It is a midsize, multi-user computer capable of supporting up to hundreds of users
simultaneously. In the past decade, the distinction
between large minicomputers and small
mainframes has blurred, almost like the distinction
between small minicomputers and workstations.
But in general, a minicomputer is a multiprocessing
system capable of supporting up to 200 users
simultaneously.
Fig. 1.3: Minicomputer
4. Mainframe:
A mainframe is a very large and expensive multi-user computer capable of supporting hundreds, or even thousands, of users simultaneously; IBM's S/390 is a typical example.
5. Supercomputer:
Supercomputer is an extremely fast computer that can perform hundreds of millions of instructions per second. Supercomputers are very expensive and used only for specialized applications that require immense mathematical calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers are in the field of scientific simulations.

Fig. 1.5: Supercomputer
Comparison of the five classes of computers:

Number of processors. PC: uni-processor; Workstation: uni-processor; Minicomputer: uni-processor; Mainframe: multi-processor; Supercomputer: multi-processor.
Computing power. PC: normal; Workstation: moderate; Minicomputer: middle range; Mainframe: very high; Supercomputer: extremely high.
Quantity of RAM. PC: normal; Workstation: high; Minicomputer: high; Mainframe: very high; Supercomputer: very high.
Number of users. PC: single user; Workstation: single user; Minicomputer: multi-user; Mainframe: multi-user; Supercomputer: multi-user.
Cost. PC: affordable; Workstation: affordable; Minicomputer: high; Mainframe: very high; Supercomputer: very high.
Operating system. PC: open source (Linux); Workstation: proprietary, but source code available for a price; Minicomputer: proprietary, but source code available for a price; Mainframe: proprietary; Supercomputer: proprietary.
Used for. PC and Workstation: general purpose applications; Supercomputer: applications requiring very large programs and data that must be processed quickly.
Speed measured in. PC, Workstation and Minicomputer: Million Instructions Per Second (MIPS); Mainframe and Supercomputer: Floating Point Operations per Second (FLOPS).
Examples. PC: Acer, Lenovo; Minicomputer: PDP-8, DEC VAX machine; Mainframe: IBM S/390; Supercomputer: Cray Y-MP/C90.
Components of a Computer
Computers perform the following four main functions:
o Input
o Processing
o Output
o Storage

Fig. 1.6: Block Diagram of a Computer
RAM or Random Access Memory: RAM is volatile, which means that when power is removed, its contents are lost.
The two main types of RAM are SRAM (Static RAM) and DRAM (Dynamic RAM). SRAM retains its contents for as long as power is supplied to it, whereas DRAM retains data for only a few milliseconds even under power, and so must be refreshed continually. Newer types of RAM are SDRAM (Synchronous DRAM) and DDR (Double Data Rate) SDRAM; DDR2 (Double Data Rate Two) SDRAM runs twice as fast as DDR and generally needs less power. The speed and capacity of the RAM are important factors that determine the performance of a computer. For example, a personal computer running Windows XP should have a minimum of 1GB of RAM; Vista requires a minimum of 2GB.
ROM or Read Only Memory: It is used to store set-up data which is executed at initial startup, when the power is switched on. Data in ROM is permanent and non-volatile: it is retained even after the power is switched off.
Flash memory is non-volatile: its contents can be changed, and are remembered even after switching off. It is popular in modems, cameras and, of course, USB flash drives. There are also flash-based drives known as SSDs (Solid State Drives/Disks), which use this type of memory rather than a mechanical disk drive. Accessing data from a flash drive is faster than from a hard disk.
Hard Drive: A collection of hard platters coated with magnetic material to which data can be written and read using a series of read/write heads. A drive has up to about eight platters which rotate at speeds of 5400 or 7200 rpm. The whole unit is sealed inside a case which protects it from dust. The read/write heads float above the platters at a distance of 10 to 25 millionths of an inch. The storage capacity of older hard drives ranged from 10GB through 80GB to 120GB; modern drives have a storage capacity of about 250GB to 500GB.
Input Devices
An input device is any peripheral piece of computer hardware equipment that is used
to provide data and control signals to an information processing system, such as a
computer. An input device converts input data and instructions into a suitable binary form which can be accepted by the computer.
Commonly used input devices are Keyboard, Mouse, Trackball, Game Controllers,
Scanners, Barcode Readers, Optical Character Readers, Digitizer, and Multi-media
input devices.
1. Keyboard
A keyboard is an input device, partially modeled after the typewriter keyboard, which
uses an arrangement of buttons or keys. A keyboard typically has characters
engraved or printed on the keys and each press of a key typically corresponds to a
single written symbol. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands. When a key is pressed, it produces an electrical signal which is detected by an electric circuit called the keyboard encoder, which identifies the key that has been pressed and sends a binary code to the computer. Normally, the keyboard is used to type text and numbers into a word processor, text editor or any other program. Nowadays most users use QWERTY-style keyboards, as shown in Fig. 1.16.

Fig. 1.16: QWERTY Keyboard
Keyboards are also used for computer gaming, either with regular keyboards or
keyboards with special gaming features, which can expedite frequently-used
keystroke combinations.
2. Mouse
A mouse is a pointing device that functions by detecting
two-dimensional motion relative to its supporting surface.
Fig.1.17: Mouse
Physically, a mouse consists of an object held under one of the user's hands, with
one or more buttons.
It sometimes features other elements, such as "wheels", which allow the user to
perform various system-dependent operations. An extra button or feature can add
more control or dimensional input. Each wheel of the mouse is connected to a shaft
encoder which emits an electrical pulse for every incremental rotation of the wheel.
When a user moves the mouse across a flat surface, the cursor on the CRT screen
also moves in the direction of the mouse movement. By moving the mouse, the user
can point to the menu on the screen, and the user can communicate his choice by
clicking the mouse button on that position. The mouse's motion typically translates
into the motion of a pointer on a display, which allows for the fine control of a
Graphical User Interface.
3. Joystick
A joystick is a pointing device consisting of a stick that pivots on a base and reports its angle or direction to the computer; it is commonly used for playing video games.
4. Scanner
A scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it into a digital image. Common examples are variations of the desktop (or flatbed) scanner, where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners, which are used for industrial design, reverse engineering, test measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

Fig. 1.19: Scanner
5. Barcode Reader
A barcode reader is an electronic device for reading printed barcodes. It consists of a light source, a lens and a light sensor that translates optical impulses into electrical ones. Nearly all barcode readers contain decoder circuitry, which analyzes the barcode's image data provided by the sensor and sends the barcode's content to the scanner's output port.
6. Webcam
Webcams are video capture devices connected to computers or computer networks, often using a USB port or, if connected to networks, via Ethernet or Wi-Fi. Their most popular use is for video telephony, permitting a computer to act as a videophone or video conferencing station. Other popular uses, which include the recording of video files or even still images, are accessible via numerous software programs, applications, and devices. Webcams are known for their low manufacturing cost and flexibility.

Fig. 1.21: Webcam
Output Devices
An output device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world. Output devices receive information from the computer and provide it to the user. The computer sends information to the output devices in binary coded form; the output devices convert it into a form which can be used by users, such as a printed page or a display on a screen.
Commonly used output devices are monitors (both CRT and LCD), printers, plotters,
speakers and multi-media output devices like LCD Projectors.
CRT: A VDT which contains a CRT for visual display is also called a CRT Terminal. It contains a small memory known as a buffer. Each character entered through the keyboard is stored in the buffer and displayed on the terminal screen.
LCD: In LCDs, a liquid crystalline material is sandwiched between two glass or
plastic plates. The front plate is transparent and conductive. A voltage is applied
between a segment and back plate to create an electric field in the region under
the segment. LCD simply changes the reflection of available light. As LCDs are
lightweight, they are used mainly in portable computers.
2. Printers
A printer is a peripheral device which produces a hard copy (permanent human-readable text and/or graphics) of documents stored in electronic form, usually on paper or transparencies. Based on their technology, printers are classified as:
o Impact Printers
o Non-Impact Printers

i. Impact Printers
Impact printers form characters by physically striking an inked ribbon against the paper; dot matrix and daisy wheel printers are typical examples.

ii. Non-Impact Printers
Non-impact printers form characters without striking the paper. The common types are:
o Thermal Printers
o Laser Printers
o Ink Jet Printers
b. Laser Printers: Laser printers are page printers that rapidly produce high quality
text and graphics on plain paper. An entire page is processed at a time. These printers use a laser beam to produce an image of the page containing text/graphics on a photosensitive drum which is coated with negatively charged photo-conductive material. Negatively charged ink powder called toner is used in a laser printer for printing. These printers produce low noise, work at high speed and the printing quality is high, but they are more expensive than ink jet or dot matrix printers and also larger in size.

Fig. 1.26: Laser Printer
c. Ink-Jet Printers: In Ink-Jet printers, characters are formed when an electrically
charged or heated ink is sprayed in fine jets onto the paper. When voltage is applied to the element, it bends and creates a pressure wave that forces out a drop of ink. Individual nozzles in the printing head produce high resolution (up to 400 dots per inch, or 400 dpi) dot matrix characters. Ink jet printers have a low cost, are compact in size, and produce low noise. Their colour printing is affordable but requires special paper. Usually the speed of ink jet printers is slower than that of laser printers.

Fig. 1.27: Ink Jet Printer
3. Plotters
A plotter is a vector graphics printing device that prints graphical plots, and is
connected to a computer. Vector graphics include geometrical primitives, such as
points, lines, curves, and shapes or polygon(s), which are all based on mathematical
equations, to represent images in computer graphics. There are two types of plotters:
o Pen plotters
o Electrostatic plotters
Character Encoding
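ASCII and EBCDIC, listed in the syllabus, are the two classic character codes. As a hedged illustration (using Python's standard codecs; cp500 is one common EBCDIC code page, chosen here for demonstration), the same text maps to entirely different byte values under the two schemes:

# Compare ASCII and EBCDIC encodings of the same text
text = "AUDIT"
ascii_bytes = text.encode("ascii")    # one byte per character, ASCII values
ebcdic_bytes = text.encode("cp500")   # same characters, EBCDIC byte values
print(ascii_bytes.hex())    # 4155444954
print(ebcdic_bytes.hex())   # c1e4c4c9e3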
Data and Capacity Management
Data is one of the most critical resources of an organization and is central to the achievement of its objectives. In the present age, one of the main issues an
organization has to face is the constant and never-ending growth of data and
requirement for greater storage capacity along with the problem of data safety,
security and integrity.
In modern day enterprises, which employ large scale database applications,
multimedia applications, the requirements for disk storage run from gigabytes to
terabytes. If a proper data and storage management mechanism is in place, problems
of downtime, business loss on account of lost data and insufficient storage space can
be avoided.
The key issues in data and capacity management include the following:
i. Is the hardware to be procured compatible with the existing one and does it take care of future applications?
ii. Have the workload and performance requirements been calculated and is the
hardware suggested capable of fulfilling them?
iii. Are there any industry standards for the same, and do the hardware components
comply with them?
Other criteria to be considered include:
o Easy operations
o Support
o Cost
4. Hardware Maintenance
It is not uncommon for organizations to outsource the maintenance of computer
hardware, which includes any or all of desktops, servers, networks, cabling, etc. One
of the important criteria that need to be considered before finalizing the vendor is the
issue of maintenance. The organization has to have a hardware maintenance
program that takes into consideration the following:
i. Which company takes care of what IT resource? For example, computers may be serviced by one company and printers by another.
ii. How many times during a year does the vendor provide preventive maintenance, and when?
iii. Was any problem reported in the past, and what corrective steps were suggested? This has to be documented.
iv. What is the cost of maintenance? Has, at any time during the year, the amount spent on maintenance exceeded the budgeted amount? If yes, the details have to be documented.
v. Apart from the preventive maintenance schedule, how many times during the year did the vendor come to service the equipment because of some snag or failure?
vi. What are the MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair) values? Typically, the MTBF value must be high and the MTTR value must be low.
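Taken together, these two values give a quick availability estimate. As a hedged illustration (the formula is standard reliability arithmetic, and the figures below are invented, not taken from this material), in Python:

# Availability from MTBF and MTTR (illustrative values)
mtbf_hours = 2000.0   # mean time between failures
mttr_hours = 4.0      # mean time to repair
availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"Availability: {availability:.4%}")   # ~99.80%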
Software
Computer hardware performs various computations based on the instructions given by users and other programs; such a set of instructions is, in effect, its software. The term software in its broad sense refers to the programs that the machine executes.
More precisely, Computer Software (or simply software) is the collection of
computer programs, procedures and documentation that perform different tasks on a
computer system. Program software performs the function of the program it
implements, either by directly providing instructions to the computer hardware or by
serving as input to another piece of software. A system is a set of interacting or
interdependent entities, real or abstract, forming an integrated whole.
Classification of Software
o System Software: Operating Systems, Servers, Windowing Systems, Device Drivers, Utilities
o Programming Software: Compilers, Debuggers, Interpreters, Linkers, Loaders, Editors
o Applications Software
1. System Software
It is the low-level software required to manage computer resources and support the
production or execution of application programs but is not specific to any particular
application. These refer to the set of programs that -
Typical examples of system software utilities include:
o Cryptographic utilities
o Network managers
o Registry cleaners
o Launcher applications
2. Programming Software
Programming software usually provides tools to assist a programmer in writing
computer programs and software by using different programming languages in a
convenient way. The tools include:
a. Compiler: A compiler is a computer program (or a set of programs) that
transforms the source code written in a computer language (the source
language) into another computer language (the target language, often having a
binary form known as the object code). The most common reason for
transforming the source code is to create an executable program.
b. Debugger: A debugger is a utility that helps in identifying any problem occurring
during execution. By using the debugger utility, one can pinpoint when and
where a program terminates abnormally, indicate the values of the variables used, and in general provide information so that the programmer can locate bugs/errors.
c. Interpreter: An interpreter is a computer program that executes instructions line
by line, i.e., it directly performs instructions written in a programming language.
d. Linker: Linking is the process of combining various pieces of code and data
together to form a single executable unit that can be loaded in memory. The
process of linking is generally done during compilation of the program. After
linking, the loading process takes place by using a loader software.
e. Loader: A loader is a program which loads the code and data of the executable
object file into the main memory, goes to the very first instruction and then
executes the program.
f. Editor: An editor program is a system software utility that allows the user to
create and edit files. During the editing process, no special characters are
added. On account of this, the editor utility is used to create the source code.
There are two kinds of editors: Line editors and Screen editors.
A Line editor edits files line by line, whereas a Screen editor helps to load files and edit them by using the cursor movement keys. Nowadays, users use only screen editors. The Notepad utility in Windows, the vi editor in UNIX and the Edit utility in MS-DOS are typical examples of screen editors.
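To make the compile-then-execute flow described above concrete, here is a minimal sketch in Python; Python itself compiles source text to a code object before executing it, so the two built-ins below loosely mirror the compiler and loader roles (the source string is invented for illustration):

# "Compilation": translate source text into an executable code object
source = "x = 6 * 7\nprint('answer:', x)"
code_object = compile(source, "<example>", "exec")

# "Loading/execution": bring the compiled code into memory and run it
exec(code_object)   # prints: answer: 42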
3. Applications Software
It refers to the set of software that performs a specific function directly for the end-user. There is a variety of application software for the different needs of users.
a. General Business Productivity Applications
These refer to the software used for general business purposes, such as improving productivity. It includes office suite applications such as word processors, spreadsheets, simple databases, graphics applications, project management software, computer-based training software, etc.
b. Home Use Applications
These refer to the software used in the home for entertainment, reference or educational purposes, including games, reference and home education titles.
c. Cross-Industry Application Software
These refer to the software designed to perform and/or manage a specific
business function or process that is not unique to a particular industry. It includes
professional accounting software, human resource management, customer
relations management, Geographic Information system software, Web page /site
design software. etc.
d. Vertical Market Application Software
These refer to the software that performs a wide range of business functions for a specific industry, such as manufacturing, retail, healthcare, engineering and restaurants.
e. Utilities Software
These refer to small computer programs that perform specific tasks. Utilities differ from other application software in terms of size, cost, and complexity. Examples include compression programs, antivirus software, search engines, fonts, file viewers and voice recognition software.
Proprietary Software
Proprietary Software (also called non-free software) is computer software that is the legal property of one party. The terms of use for other parties are defined by contracts or licensing agreements. These terms may include various privileges to share, alter, disassemble, and use the software and its code. There are restrictions on
using, copying and modifying this software imposed by its proprietor. Restrictions on
use, modification and copying are achieved either by legal or technical means and
sometimes by both. Technical means include releasing machine-readable binaries to
users and withholding the human-readable source code. Legal means can involve
software licensing, copyright, and patent law.
Shareware
The term Shareware refers to the proprietary software that is provided to users
without payment on a trial basis and is often limited by any combination of
functionality, availability or convenience. It is often offered as a download from an
Internet website or as a compact disc included with a periodical such as a newspaper
or magazine. The aim is to give buyers the opportunity to use the program and judge
its usefulness before purchasing a license for the full version of the software.
Shareware is a method of marketing software based on the philosophy of Try before You Buy, and is usually offered as a trial version with only some features or as a full
version, but only for the trial period. Once the trial period is over, the program may
stop until a license is purchased. Shareware is often offered without support,
updates, or help menus, which become available only with the purchase of a license.
The words "free trial" or "trial version" are indicative of shareware.
Open Source
Open Source is an approach to the design, development, and distribution of
software, offering practical accessibility to software's source code. In open source
any software along with its source code is easily accessible, and one can customize it
to specific requirements or add additional features to it, and make it available for
distribution and further improvement. One of the best examples is Linux.
In practice, open source usually means that the application is free to users as well as
developers. Furthermore, most open source software have communities that support
each other and collaborate on development. Therefore, unlike freeware, there are
future enhancements, and, unlike shareware, users are not dependent on a single
organization.
Freeware
Software which grants users the following freedoms is termed Free Software or Freeware:
A series of acquisition steps follow the finalization of software and hardware criteria.
The acquisition steps are very similar to the procurement of any capital equipment.
The Invitation to Tender (ITT) should be sent to vendors and after bids are received, they are to be
analyzed under the two major headings: Technical and Commercial. After analysis,
successful bidders are called for negotiations, during which all aspects including cost,
delivery and installation timeframe, maintenance issues, training issues, assistance in
changeover, and upgrading issues are discussed, and then a final choice made. All
contract terms, including the right to audit clauses, have to be finalized before a final
formal report is prepared. The report has to specify the reasons for the choice made
and justify it on the basis of costs and benefits.
A computer system can broadly be divided into four components:
o The Hardware
o The Operating System
o The Application Programs
o The Users
Fig.: Abstract view of the components of a computer system - users (User 1, User 2, ... User n) work through system and application programs (compiler, assembler, text editor, database system), which run on the operating system (O/S); the operating system in turn controls the hardware (disk, tape, memory) through the hardware interface and privileged instructions.
i. Batch Systems: In these systems, data or programs are collected, grouped and
processed at a later date. To speed up the programming, operators batched
together jobs with similar needs and executed them through the computer as a
group. Thus, the programmers would leave their programs with the operator who
would sort programs into batches with similar needs and, as the computer
became available, would run each batch. The output from each job would be
sent back to the appropriate programmer. For example, Payroll, stock control
and billing systems use the concept of batch systems.
ii. Multi-programmed Systems: It is defined as the ability to execute many
programs apparently at the same time so that the CPU always has one to execute. All the jobs that enter the system are kept in the job pool, which consists
of all the processes residing on disk awaiting allocation of main memory. The
O/S picks some jobs from the pool and keeps them in memory simultaneously.
The O/S picks and begins to execute one of the jobs in memory. Eventually, the
job may have to wait for some task, such as an I/O operation, to complete. In a
non-multi - programmed system, the CPU would sit idle. In a multiprogramming
system, the O/S switches to, and executes another job. When that job needs to
wait, the CPU is switched to another job, and so on. As long as at least one job
needs to be executed, the CPU is never idle. If several jobs are ready to run at the same time, the system must choose among them; making this decision is CPU scheduling.
2. Desktop Systems
PC operating systems were initially neither multi-user nor multi-tasking. Instead of maximizing
CPU and peripheral utilization, their goal was to maximize user convenience and
responsiveness.
3. Multiprocessor Systems
Most systems that are currently in use are single-processor systems, that is, they
have only one main CPU. Multiprocessor systems, also known as Parallel systems
or Tightly-coupled systems have more than one processor in close communication,
sharing the computer bus, the clock, and sometimes memory and peripheral devices.
The main advantages of multiprocessor systems are:
i. Increased throughput: By increasing the number of processors, more work can be done in less time. However, the speed-up ratio with N processors is not N, but less than N: when multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly (see the sketch after this list).
ii. Economies of scale: Multiprocessor systems can save money compared with multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.
iii. Increased reliability: If functions can be distributed properly among several
processors, then the failure of one processor will not halt the system, only slow it
down. For example, if there are 10 processors and one fails, then each of the
remaining 9 processors picks up a share of the work of the failed processor.
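The sub-linear speed-up mentioned in point (i) is commonly estimated with Amdahl's law; the law is named here by us (the text itself does not name it), and the figures in this Python sketch are purely illustrative:

# Amdahl's law: speed-up with n processors when a fraction p of the
# work can run in parallel (illustrative values, not from the text)
def speedup(n_processors: int, parallel_fraction: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

print(speedup(10, 0.90))  # ~5.26x, well below the ideal 10x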
4. Distributed Systems
A network is a communication path between two or more systems. Distributed
systems are able to share computational tasks, and provide a rich set of features to
users, depending on networking for their functionality.
Networks are categorized based on the distance between their nodes. A LAN (Local Area Network) exists within a room, a floor, or a building. A WAN (Wide Area
Network), usually exists between buildings, cities, or countries. These networks can
run one protocol or several protocols and may include media like copper wires, fiber
strands, and wireless transmissions between satellites, microwave dishes, and
radios.
5. Clustered Systems
Clustered Systems bring together multiple CPUs to accomplish a computational task.
Clustered systems consist of two or more individual systems coupled together. They
share storage and are closely linked via LAN networking and provide high availability.
A layer of cluster software runs on the cluster nodes. Each node can monitor one or
more of the others. If the monitored machine fails, the monitoring machine can take
ownership of its storage, and restart the application(s) that were running on the failed
machine. The failed machine can remain down, but users and clients of the
application would only see a brief interruption of service.
6. Real-Time Systems
A Hard Real Time System guarantees that critical tasks will be completed on
time. This goal requires that all delays in the system be bounded, from the
retrieval of stored data to the time that it takes the operating system to finish any
request made to it.
In Soft Real Time System a critical task gets priority over other tasks, and
retains that priority until it is completed.
7. Handheld Systems
Handheld systems include personal digital assistants (PDAs), such as cellular telephones
with connectivity to a network like the Internet. Developers of handheld systems and
applications face many challenges. These include their limited size and the speed of the
processor used in the device. Some handheld devices may use wireless technology, such
as Bluetooth, to access email and for web browsing.
The Operating System generally operates in two modes:
Supervisory Mode / Monitor Mode / System Mode / Privileged Mode: In this privileged mode, the executing code has full and complete access to all the resources of the computer, bypassing all security features. Hence, this state has to be operated in with extreme caution.
Restricted or General User mode: In this mode, a user can access only those
resources for which rights have been granted. Normally, users are allowed to operate
only in the general user mode. However, if a user wishes to perform some privileged
functions, s/he has to communicate with the code running in the privileged OS area.
2. Main Memory Management: The operating system is responsible for the
following activities in connection with the memory management :
Keeping track of the parts of memory that are currently being used and by
whom.
Deciding the processes that are to be loaded into memory when memory
space becomes available.
Allocating and de-allocating memory space as per need.
3. File Management: The operating system is responsible for the following
activities in connection with the file management :
Creating and deleting files.
Creating and deleting directories.
Supporting primitives for manipulating files and directories.
Mapping files onto secondary storage.
Backing up files on stable storage media.
4. I/O System Management: One of the purposes of an operating system is to hide
the peculiarities of specific hardware devices from users, which is done by I/O
subsystem itself. The I/O subsystem consists of :
A memory management component that includes buffering, caching and
spooling.
A general device-driver interface.
Drivers for specific hardware devices.
5. Secondary Storage Management : The operating system is responsible for the
following activities in connection with the disk management:
Free space management.
Storage allocation.
Disk scheduling.
6. Networking: The processors in the system are connected through a
communication network, configured in a number of different ways. A
distributed system is a collection of processors that do not share memory,
peripheral devices, or a clock. Instead, each processor has its own local memory
and clocks and the processors communicate with one another through various
communication lines, such as high-speed buses or networks. Protocols such as FTP (File Transfer Protocol), NFS (Network File System) and HTTP (Hypertext Transfer Protocol) are used in communication between different computers.
7. Protection System: Protection is a mechanism for controlling the access of
programs, processes or users to the resources defined by a computer system.
i. Program Execution: The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating error).
ii. I/O Operations: A running program may involve a file or an I/O device for specific devices, or special functions may be desired (such as a tape drive or CRT screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore, an operating system must provide a means to do I/O.
iii. File-System Manipulation: Programs need to read and write files. They also need to create and delete files by name.
iv. Communications: In many circumstances, one process needs to exchange information with another process. Such communications can occur either between two processes that are executing on the same computer or between processes executing on different computers tied together by a computer network.
v. Error Detection: Errors may occur in the CPU and memory hardware (such as a power failure), in I/O devices (a lack of paper in the printer), and in the user program (such as an arithmetic overflow). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing.
vi. Resource Allocation: When multiple users are logged in to the system or multiple jobs are running at the same time, resources must be allocated to each of them. For instance, the operating system provides CPU scheduling routines that take into account the speed of the CPU, the jobs that must be executed, the number of registers available, and other factors.
vii. Accounting: Statistics related to the number of users, the nature of their needs and the kinds of computer resources used are a valuable tool for researchers who wish to reconfigure the system to improve computing services.
viii. Protection: When several separate processes execute concurrently, it should not be possible for one process to interfere with the others, or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with each user having to authenticate himself or herself to the system, usually by means of a password, to gain access to the resources.
Traditional flat-file data management suffers from the following problems:
Very high degree of data redundancy: Since data is required by more than
one application, it is recorded in multiple files, thus causing data redundancy.
High data redundancy leads to multiple updating, problems of maintaining
integrity and several others. Moreover, data redundancy may result in
inconsistency, i.e., one might update data in one file and forget to update the
same data stored in another file.
Difficulty in accessing data: Programmers may have to write a new application
program to satisfy an unusual request.
Limited Data Sharing: In the flat file method, data sharing is very limited.
Security and Integrity problems: Conventional file systems offer only limited
facilities with regard to maintaining security and integrity of data.
High degree of dependency between data and programs: Data in flat files are
accessed using application programs. If any change in the data structure or
format is made in the data file, a corresponding change has to be made in the
application program and vice versa. These problems led to the development of
databases and DBMS software.
Definition of a Database
A database is defined as a centralized repository of all the inter-related information of
an organization stored with the objectives of :
Example of Database
Consider a UNIVERSITY database in which information related to students, courses
and grades is to be stored. To construct the UNIVERSITY database, the data may be
organized in five files, each of which stores data of the same type.
These databases have many types of records and have many relationships among
the records.
Definition of a DBMS
A Database Management System (DBMS) is software that manages a database
and provides facilities for database creation, populating data, data storage, data
retrieval and data access control. It is a collection of programs that enables users to
create and maintain a database. The DBMS is thus a general-purpose software
system that facilitates the processes of defining, constructing, manipulating, and sharing databases among various users and applications.
Database Systems
The database and the DBMS software together are called a Database System. A database system
has four major components: Data, Hardware, Software and Users, which coordinate
with each other to form an effective database system.
Fig.: Components of a Database System - users and application programs interact through the Database Management System (DBMS), which manages the data, programs and procedures stored in the database.
Software: The software part of a DBMS acts as a bridge between the users and the database. In other words, the software interacts with users, application programs, and the database and file system of a particular storage medium (hard disk, magnetic tapes etc.) to insert, update, delete and retrieve data. For performing operations such as insertion, deletion and updating, we can use either query languages like SQL and QUEL or application software like Visual Basic.
Users: The broad classes of users are:
Data Abstraction
The characteristic that allows program-data independence and program-operation
independence is called Data Abstraction. Abstraction means that the system hides the details of how data is stored and maintained, revealing to users only what they need to see.
Based on the levels of abstraction, in a DBMS, we have three levels as shown in Fig.
1.33.
1. Level 1: External level or View Level or User Level: This is the highest level
that describes portions of the database for users. Every user has a limited view
of the entire database. To illustrate, an inventory employee may view the
inventory data but not payroll data.
Fig. 1.33: The three levels of data abstraction - several external views (View 1, View 2, ... View n) at the external level, a single conceptual schema at the conceptual level, and the internal schema at the internal (physical) level, beneath which lie the stored databases, the operating system and the hardware.
i. The external schema provides the necessary interface for users to perform various activities on the database and provides their view of the database.
ii. It is user-friendly and focused on users who use the system and hides the rest of
the database from that user group.
iii. The design of the external schema is guided by end user requirements.
The conceptual schema describes the structure of the whole database for a
community of users.
The conceptual schema hides the details of physical storage structures and
concentrates on describing entities, data types, relationships, user operations,
and constraints. In other words, it provides an abstract way of representing the
database.
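As a concrete illustration of the external level, most relational DBMSs implement user views with CREATE VIEW. A minimal sketch using Python's built-in sqlite3 module follows; the table, column and view names are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)")
conn.execute("INSERT INTO employee VALUES (1, 'Asha', 'Inventory', 52000)")
# External view for inventory staff: exposes inventory data, hides payroll (salary)
conn.execute("CREATE VIEW inventory_view AS SELECT emp_id, name, dept FROM employee WHERE dept = 'Inventory'")
print(conn.execute("SELECT * FROM inventory_view").fetchall())  # [(1, 'Asha', 'Inventory')]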
DATA INDEPENDENCE
It is defined as the capacity to change the schema at one level of a database system
without having to change the schema at the next higher level. Two types of data
independence are:
a. Logical Data Independence
It is the capacity to change the conceptual schema without having to change
external schemas or application programs. We may change conceptual schema
to expand the database (by adding a record type or data item), to change
constraints, or to reduce the database (by removing a record type or data item).
b. Physical Data Independence
It is the capacity to change the internal schema without having to change the
conceptual schema. Hence, the external schema need not be changed at all.
Changes to the internal schema may be needed because some physical files have to be reorganized, for example, by creating additional access structures to improve the performance of retrieval or update.
DATA MODELS
One fundamental characteristic of the database approach is that it provides some
level of data abstraction by hiding details of data storage that are not needed by most database users.
I. Database Model
A database model is a theory or specification describing how a database is structured
and used. Some of the common models are:
a. Hierarchical Model:
In this model, data is organized into a tree-like structure. Its basis is the hierarchical
tree structure, which is made up of nodes and branches. A node is a collection of
data attributes describing an entity. The following are the characteristics of a tree
structure.
Fig. 1.35: Hierarchical Model (a tree of nodes and branches, with leaf nodes at the bottom)
A hierarchical model supports only one-to-one and one-to-many relationships. The
main problem in this model is that unduly complex operations are involved in the
insertion of nodes in a tree. Moreover, there is also the problem of triggered deletion.
When a node is deleted from a tree, its children get deleted automatically. This
cascading operation is termed triggered deletion.
b. Network or Plex Model
This model has three basic components: Record type, Data items and Links.
This model, unlike the hierarchical model, supports many-to-many relationships. For example, many customers have many accounts. The main disadvantage of this
model is its complexity. Also, when the database is re-organized, there is every
possibility that data independence might be lost.
c. Relational Model
Relational model is the most common model used these days in a majority of
commercial DBMS packages. This model has a very strong mathematical backing
through relational algebra and relational calculus. This model represents the
database as a collection of Relations and is governed by the following rules:
A relation is a table of tuples (rows), for example:

Roll No.    Name    DOB         Phone No.
999-90      Julie   17.07.82
999-93      Doug    12.09.82

Each row of the table is a tuple.
Domain constraint: This means that all the values in the column of a table must
be from the same domain.
Entity integrity: This rule is to ensure and assure that the data values for the
primary key are valid and not null.
Referential Integrity: In a relational data model, associations between tables
are defined by using foreign keys. The referential integrity rule states that if there
is a foreign key in one table, either the foreign key must match the primary key of
the other table or else the foreign key value must be null.
Operational Constraints: These constraints are enforced on the database by
the business rules or by the environment where the table operates. The
database must not violate these constraints.
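Referential integrity can be enforced directly in SQL through foreign keys. A hedged sketch using Python's built-in sqlite3 module (the schema is invented; note that SQLite enforces foreign keys only when the pragma is switched on):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE emp (
    emp_id  INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(dept_id))""")
conn.execute("INSERT INTO dept VALUES (10, 'Audit')")
conn.execute("INSERT INTO emp VALUES (1, 10)")      # valid: dept 10 exists
try:
    conn.execute("INSERT INTO emp VALUES (2, 99)")  # violates referential integrity
except sqlite3.IntegrityError as e:
    print("Rejected:", e)   # Rejected: FOREIGN KEY constraint failed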
viii. The result is two levels of data abstraction. Each object has its own unique identity, independent of the values it contains: two objects containing the same values are still distinct. The distinction is created and maintained at the physical level by assigning distinct object identifiers.
Fig. 1.37: The Star Schema - a central fact table surrounded by dimension tables.
For example, a fact table in a sales database, implemented with a star schema, might
contain the sales revenue for the products of the company from each customer in
each geographic market over a period of time. The dimension tables in this database
define the customers, products, markets, and time periods used in the fact table.
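A minimal star-schema sketch of such a sales database, written in SQL and run through Python's sqlite3 (all table and column names are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_time     (time_id     INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
-- Central fact table: one row per (customer, product, period) with the measure
CREATE TABLE fact_sales (
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    product_id  INTEGER REFERENCES dim_product(product_id),
    time_id     INTEGER REFERENCES dim_time(time_id),
    revenue     REAL);
""")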
Fig.: An example object - a Book with attributes such as Cover (Hardback/Softback), Catalogue Number, Author and Page.

III. Entity-Relationship Model
One of the most commonly used data models is the Entity-Relationship Model, also referred to as the E-R Diagram.
E-R Diagram
An Entity-relationship (ER) diagram is a specialized graphic that illustrates the
interrelationships between entities in a database. An entity is defined as a
distinguishable object that exists in isolation and is described by a set of attributes.
For example, color and fuel are attributes for the entity car. A relationship is an
association among several entities. Again, taking the same example of a car, the
registration number of a car associates it with the owner. The set of all entities or
relationships of the same type is called the entity set or relationship set.
E-R diagrams have yet another element, cardinality, which expresses the number of entities to which an entity can be associated via a relationship set. Using the entity's attributes and cardinalities, the overall logical structure of a database can be expressed graphically by an E-R diagram.
ER diagrams often use symbols to represent three different types of information.
i. Boxes represent entities.
ii. Diamonds represent relationships and,
iii. Ovals represent attributes.
Types of relationships
Mapping Cardinalities: These express the number of entities to which an entity can
be associated via a relationship. For binary relationship sets between entity sets A
and B, the mapping cardinality must be one of the following:
i. One-to-one relationship: Every student is assigned one Roll Number, and every
Roll Number belongs to only one student (entities: Student, Roll Number).
ii. One-to-many relationship: A department can have more than one employee, but
an employee is attached to only one department (entities: Department, Employee).
iii. Many-to-many relationship: An inventory item can be procured from many
vendors, and a vendor may sell more than one inventory item (entities: Inventory
Item, Vendor).
Data Dictionary
Data Dictionary is a tool that enables one to control and manage information in a
database. It is actually documentation of the database that provides a detailed
description of every data item in it. The dictionary provides information on the
following:
i. The way data is defined.
ii. Types of data present.
iii. Relationship among various data entities and their representation formats.
iv. Keys for the database.
v. People who access data and access rules for every user.
The dictionary also generates reports on:
Various data elements, their characteristics, entities and relationships.
Frequency of the use of data.
Responsibilities of various users.
Access control information.
A data dictionary is used as an effective tool by the administration in the design,
implementation, and operation phases of the database.
Database Manager
A database manager is a program module of the DBMS which provides the interface
between the data stored in a database and the application programs and queries
submitted to the system.
In large applications, the size of databases may extend to gigabytes; whenever
any operation involves data, the data is moved from the hard disk to RAM and
processed. The Database Manager simplifies and facilitates this process.
Hence the database manager module in a DBMS is responsible for:
Interacting with the Operating System for storing data on the disk.
Translating DML statements into low-level file system commands for storing,
retrieving and updating data in the database.
Enforcing integrity and ensuring that there is no violation of consistency
constraints.
Enforcing security.
Backing up and recovery of the database.
Providing concurrency control in large systems when there are concurrent users.

Did you know? SQL Injection is a technique that exploits the vulnerabilities of a
database, wherein specially crafted SQL statements are used by the attacker to
confuse the DBMS into unexpected execution and privileged access.
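The "Did you know?" box above can be made concrete with a small, self-contained Python sketch using the sqlite3 module: the first query concatenates untrusted input into the SQL text and is therefore injectable, while the parameterized version treats the same input purely as data. The table and payload are invented for the demonstration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "clerk")])

user_input = "x' OR '1'='1"   # a classic injection payload

# Vulnerable: the input is concatenated straight into the SQL text,
# so the attacker's quote characters rewrite the query logic.
unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())    # returns every row!

# Safe: a parameterized query treats the input purely as data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns no rows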
Database Languages
A DBMS provides a comprehensive set of facilities for defining data, manipulating
data and controlling access to it. These facilities are grouped under database
languages. The three important classifications of database languages are:
1. Data Definition/Description Language (DDL);
2. Data Manipulation Language (DML); and
3. Data Control Language (DCL).
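A brief, hedged sketch of the three language classes follows, again using Python's sqlite3 module. SQLite executes the DDL and DML statements as shown; it has no user accounts, so the DCL statement is only displayed as the kind of command a server DBMS (such as Oracle or MySQL) would accept.

import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: defines the structure (schema) of the database.
conn.execute("CREATE TABLE account (acc_no TEXT PRIMARY KEY, balance REAL)")

# DML: manipulates the data held in that structure.
conn.execute("INSERT INTO account VALUES ('SB-001', 5000.0)")
conn.execute("UPDATE account SET balance = balance + 250 WHERE acc_no = 'SB-001'")
print(conn.execute("SELECT * FROM account").fetchall())

# DCL: controls access rights. SQLite has no user accounts, so the
# statement below is only illustrative of what a server DBMS would run.
dcl_example = "GRANT SELECT ON account TO auditor"
print(dcl_example)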
(Figure: Overall structure of a DBMS: normal users and application programmers submit DML queries and run pre-developed application programs through the query processor, DML compiler and application program code generator; the database administrator defines the database schema through the DDL compiler; the database manager, together with the file manager, mediates all access to the data files and the data dictionary on the hard disk.)
The tasks of a Database Administrator (DBA) fall into four broad groups:
1. Management Tasks
2. Security Tasks
3. Technical Tasks
4. General Tasks
As mentioned in the management tasks, a DBA has to liaise with the Data
Administrator (DA). A DA is a person who identifies the data requirements of
business users by creating an enterprise data model. He identifies the data owners
and sets standards for control and usage of data.
Summary
This chapter introduces the basic concept of computers and their types. It explains
the detailed architecture of a computer and various Input and output devices used in
a computer system. The reader is introduced to various hardware monitoring
procedures, and their acquisition plans. The theory on operating system and its
functions, Database management Systems, data models, data independence and
role of Database manager are highlighted in detail.
Questions
1.
2.
3.
4.
___ provide a file-system interface where clients can create, update, read and
delete files.
a.
Compute Server Systems
b.
Peer-to-peer Systems
c.
File Server Systems
d.
None of these
5.
6.
7.
8.
9.
A ___ is a software that manages a database and provides facilities for database
creation, populating data, data storage, data retrieval and data access control.
a.
Mainframe Systems
b.
Peer-to-peer systems
c.
Distributed Systems
d.
Database Management System (DBMS)
10.
11.
12.
14.
15.
16.
17.
18.
_____ is a part of the system software which informs the computer how to use
a particular peripheral device.
a.
Device Driver
b.
Linker
c.
Loader
d.
Compiler
19.
20.
21.
22.
Answers: 1 d, 2 b, 3 a, 4 c, 5 b, 6 a, 7 c, 8 d, 9 d, 10 b, 11 a, 12 c, 13 d, 14 b, 15 b, 16 c, 17 d, 18 a, 19 b, 20 d, 21 a, 22 b.
2 Introduction to Computer
Networks
Learning Objectives
To understand
Introduction
By themselves, computers are powerful tools. When they are connected to each
other in an appropriate fashion, they become more powerful because the resources
that each computer provides can be shared with other computers. The purpose of the
interconnection is to create opportunities for sharing resources and information.
Computer Network
The term "Computer Network" means a collection of autonomous computers
interconnected by some means that enables them to exchange information and share
resources.
In the simplest sense, networking means connecting computers so that they can
share files, printers, applications and other computer-related resources. The
advantages of networking computers are:
Users can save shared files and documents on a file server rather than storing
them on their individual computers.
Users can share resources like a network printer, which costs much less than
having a locally attached printer for each user's computer.
One of the basic traits of social beings is their need to network and share
information: when one desires information available with another, there is a need
to connect. One of the most powerful capabilities of computers is their ability to
interconnect and network, which helps in sharing information resources across
geographic locations.
To understand computer networks and the modalities by which they are connected
and how they share information, it is necessary to know clearly some of the
fundamentals of communication technology.
Network Characteristics
Resource Sharing
High Reliability
Low Cost
Scalability
Communication Medium
Fundamentals of communication
Any communication system should have:
1. A message: the information or data to be communicated.
2. A sender: the device that sends the message.
3. A receiver: the device that receives the message.
4. A transmission medium: the physical path over which the message travels.
5. A protocol: the set of rules that governs the communication.
Data Transmission
Data transmission or Digital communications is the physical transfer of data (a
digital bit stream) over a point-to-point or point-to-multipoint communication channel.
Examples of such channels are copper wires, optical fibers and wireless
communication channels.
Transmission Modes
A given transmission on a communications channel between two machines can occur
in several ways. The transmission is characterized by:
a. The direction of the exchanges.
b. The number of bits sent simultaneously.
c. Synchronization between the transmitter and receiver.
i. On the basis of the direction of exchanges, data is transmitted over a channel
in three different modes, as shown in Fig. 2.1: Simplex, Half-Duplex and Full-Duplex.
(Fig. 2.1: Transmission modes: Simplex, Half-Duplex and Full-Duplex.)
1. Simplex: In Simplex mode, signals flow in only one direction, from X to Y; a
separate channel has to be used to transmit signals from Y to X. It is like a one-way
street where traffic moves only in one direction, as shown in Fig. 2.2.
(Fig. 2.2: Simplex transmission: one-way flow from Station 1 to Station 2.)
Examples
Advantages
This mode of channel is simple (including software), inexpensive and easy to install.
Disadvantages
Simplex mode has restricted applications, for it is only a one-way communication.
There is no possibility of sending back error or control signals to the transmitter.
2. Half-Duplex : In Half-Duplex communication, there are facilities to send and
receive, but only one activity can be performed at a time: either send or receive.
When the sender transmits, the receiver has to wait, signals flow from X to Y in
one direction at a time. After Y has received the signals, it is enabled to send
signals back to X at another time by switching from receiving to transmitting
when X is not sending to Y. Thus, there is only one transmission medium
operational at any time, as shown in Fig. 2.3. Half - Duplex mode is sometimes
also known as Two-way-alternate (TWA) mode of transmission. This type of
connection makes it possible to have bidirectional communications, by using the
full capacity of the line.
Examples
Typical examples include walkie-talkies, internet surfing, etc.
(Fig. 2.3: Half-duplex transmission between Station 1 and Station 2: one direction at a time.)
Advantages
This mode of channel helps to detect errors and lets the receiver request the sender
to retransmit information in case of corruption, and it is less costly than full duplex.
Disadvantages
Within a computer, processors never process a single bit at a time; generally they
are able to process several, and for this reason the basic connections on a computer
are parallel connections.
(Figure: Modes of data transmission: Parallel and Serial; serial transmission is further classified into Synchronous and Asynchronous.)
1. Parallel Transmission
In parallel transmission, several bits are sent simultaneously, over either:
i. N physical lines, in which case each bit is sent on its own physical line (which is
why parallel cables are made up of several wires in a ribbon cable); or
ii. One physical line divided into several sub-channels by dividing up the
bandwidth, in which case each bit is sent at a different frequency.
(Figure: Parallel transmission: the eight bits of a byte are sent together, which requires eight lines between sender and receiver.)
Advantages
Speed : Parallel devices have a wider data bus than serial devices and can therefore
transfer data in words of one or more bytes at a time. As a result, there is a speedup
in parallel transmission bit rate over serial transmission bit rate.
Disadvantages
Examples
Examples of parallel mode transmission include connections between a computer
and a printer (parallel printer port and cable). Most printers are within 6 meters or 20
feet of the transmitting computer and the slight cost for extra wires is offset by the
added speed gained through parallel transmission of data.
2. Serial Transmission
In serial transmission, bits are sent sequentially on the same channel, so there is a
need for only one communication channel rather than n channels to transmit data
between two communication devices. Fig. 2.7 illustrates Serial Transmission.
(Fig. 2.7: Serial transmission: the bits travel one after another over a single channel from sender to receiver.)
Disadvantages of Serial Transmission
For serial transmission, some overhead time is needed since bits must be assembled
and sent as a unit and then disassembled at the receiver.
iii. On the basis of the synchronization between the receiver and sender, the serial
transmission can be either Synchronous or Asynchronous.
Given the problems that arise with a parallel-type connection, serial connections are
normally used for data transmission. However, since a single wire transports the
information, the problem is how to synchronize the sender and receiver: the receiver
cannot necessarily distinguish the characters (or, more generally, the bit sequences)
because the bits are sent one after the other. Asynchronous and synchronous
transmission address this problem.
1. Asynchronous Transmission
Also termed Start-Stop communication, asynchronous communication is a technique
in which the timing of the signal is unimportant. It is most widely used by computers
to provide connectivity to printers, modems, fax machines, etc. In other words, any
communication between devices of dissimilar speeds will be asynchronous; for
example, the communication between a computer and a printer is in asynchronous
mode.
The basic characteristic of an Asynchronous Communication System is that each
data byte is framed by a start bit and a stop bit, and the gaps between bytes may be
uneven.
(Figure: Asynchronous transmission: each data byte travels from sender to receiver framed by a start bit and a stop bit.)
Advantages
It is simple and does not require synchronization of the two communicating sides.
Since timing is not critical, the hardware can be cheaper.
Its set-up is fast, and it is well suited to applications where messages are generated
at irregular intervals, like data entry from a keyboard.
Disadvantages
Because of the insertion of start and stop bits into the bit stream, asynchronous
transmission is slower than other forms of transmission that operate without the
addition of control information.
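The framing overhead described above can be illustrated with a small Python sketch. This is a toy model, not a real UART driver: it simply wraps each byte with a start bit (0) and a stop bit (1) and measures the resulting overhead.

# Frame each byte with a start bit (0) and a stop bit (1),
# as asynchronous transmission does.
def frame_byte(byte: int) -> str:
    data_bits = format(byte, "08b")
    return "0" + data_bits + "1"    # start bit + 8 data bits + stop bit

message = b"Hi"
stream = "".join(frame_byte(b) for b in message)
print(stream)   # 20 bits carry 16 data bits: 20% framing overhead

overhead = (len(stream) - 8 * len(message)) / len(stream)
print(f"Overhead: {overhead:.0%}")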
2. Synchronous Transmission
In synchronous transmission, groups of bits are combined into longer frames which
may contain multiple bytes, and the frames are sent continuously, with or without
data to be transmitted. Unlike asynchronous transmission, the bytes are not framed
with start/stop bits and there are no gaps between them: each byte is introduced onto
the transmission link immediately after the previous one, and it is left to the receiver
to separate the bit stream into bytes for decoding.
In synchronous communication, the clock of the receiver is synchronised with the clock
of the transmitter. On account of this, higher data transmission rates are possible with
no start-stop bits. The characteristics of synchronous communication are:
a. There is synchronisation between the clocks of the transmitter & receiver.
b. It supports high data rates.
c. It is used in communication between computer and telephony networks.
Multiplexing
Multiplexing is a set of techniques that permit the simultaneous transmission of
multiple signals on a single carrier.
With the increase in data and telecommunications usage, traffic increases, and so
does the need to accommodate individual users. To achieve this, we have either to
add individual lines each time a new channel is needed, or to install higher-capacity
links and use each to carry multiple signals. If the transmission capacity of a link is
greater than the transmission needs of the devices connected to it, the excess
capacity is wasted. An efficient system maximizes the utilization of all facilities.
A device that performs multiplexing is called a multiplexer (MUX), and a device that
performs the reverse process is called a demultiplexer (DEMUX), as shown in Fig.
2.10.
(Fig. 2.10: Multiplexing: a MUX combines n inputs onto one link carrying n channels, and a DEMUX separates them into n outputs.)
Fig. 2.11 compares a system with no multiplexing with a multiplexed system.
(Fig. 2.11: (a) a system with no multiplexing versus (b) a multiplexed system using a MUX and a DEMUX.)
(Figure: Categories of multiplexing: Frequency Division Multiplexing (FDM), Wave Division Multiplexing (WDM) and Time Division Multiplexing (TDM); TDM may be Synchronous or Asynchronous.)
i. Frequency Division Multiplexing (FDM)
Fig. 2.13 shows the working of an FDM system. The transmission path is divided into
three parts, each representing a channel that carries one transmission. It can be
imagined as a point where three narrow streets merge to form a three-lane highway:
each car merging onto the highway from one of the streets has its own lane and can
travel without interfering with cars in the other lanes.
(Fig. 2.13: FDM: a MUX combines three channels onto one path and a DEMUX separates them again.)
Characteristics of FDM
a. All signals are transmitted at the same time.
b. A multiplexer accepts inputs, assigns frequencies to each device, and is attached
to a high-speed communications line.
c. A corresponding demultiplexer is at the end of the high-speed line and separates
the multiplexed signals into their constituent component signals.
d. The channel bandwidth is allocated even when no data is to be transmitted.
ii. Time Division Multiplexing (TDM)
(Figure: In TDM the link is shared by dividing time, rather than frequency, among the signals: a MUX interleaves the inputs into time slots and a DEMUX separates them.)
1. Synchronous TDM
In this scheme, the multiplexer allocates exactly the same time slot to each
device at all times, whether or not a device has anything to transmit.
For example, time slot X is assigned to device X alone and cannot be used by
any other device. Each time the device's allocated time slot comes up, that
particular device has the option to send a segment of its data. If a device is
unable to transmit or does not have any data to transmit, its time slot remains
vacant.
(Figure: Synchronous TDM: the multiplexer builds successive frames (Frame 1, Frame 2, ..., Frame n) of time slots from the inputs.)
Time slots are grouped into frames wherein a frame consists of one complete cycle of
time slots dedicated to each sender. Thus the number of slots n in a frame is equal
to the number of input lines n carrying data through n sending devices.
In Fig. 2.17, four input devices A, B, C and D are sending signals. These signals are
multiplexed onto a single path using synchronous TDM. In this example, all the inputs
have the same data rate, so the number of time slots in each frame is equal to the
number of input lines.
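A toy Python sketch of synchronous TDM follows; the inputs and frame layout are invented for illustration. Note how the slot reserved for an idle line (shown as '-') travels empty, which is exactly the waste that asynchronous TDM avoids.

# Each input line owns a fixed slot in every frame, so an idle
# line still consumes its slot (shown as '-').
inputs = {"A": list("aaa"), "B": list("b"), "C": list("cc"), "D": []}

def next_unit(line):
    return inputs[line].pop(0) if inputs[line] else "-"

frames = []
while any(inputs.values()):
    frame = [next_unit(line) for line in "ABCD"]   # one slot per input line
    frames.append("".join(frame))

print(frames)   # ['abc-', 'a-c-', 'a---']: empty slots are wasted capacity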
(Fig. 2.18: Asynchronous TDM: four input lines (A to D) share frames that contain only three slots each.)
Asynchronous TDM avoids this waste and is more flexible. Under this scheme, the
length of time allocated is not fixed for each device, and time is given to only those
devices that have data to transmit.
Unlike a synchronous system, in which n input lines require a frame with a
fixed number of at least n time slots, an asynchronous system having n input lines
has a frame containing no more than m slots, with m less than n. The number of time
slots in this scheme is based on a statistical analysis of the number of input lines that
are likely to transmit at any given time. Also, more than one slot in a frame can be
allocated for an input device. Fig. 2.18 is a schematic representation of
Asynchronous TDM.
iii. Wave Division Multiplexing (WDM)
(Figure: a multiplexer combining, and a demultiplexer separating, multiple signals on a single link.)
Switching Techniques
Switching Techniques refer to the manner in which a communication path is
established between the sender and the receiver. To connect multiple devices, one
way is to install a point-to-point connection between each pair of devices (a mesh
topology) or between a central device and every other device (a star topology). These
methods are impractical and wasteful when applied to very large networks. A better
solution to this problem is switching in which a series of interlinked nodes, called
switches, are used. Switches are hardware and/or software devices capable of
creating temporary connections between two or more devices linked to the switch but
not to each other.
Fig. 2.20 shows the various switching techniques that are available these days:
(Fig. 2.20: Switching techniques: Circuit Switching, Packet Switching (including the Datagram approach) and Message Switching.)
Circuit Switching
In circuit switching, a dedicated connection is established between two devices, such as computers or phones, for the
entire duration of transmission. In circuit switching, the entire bandwidth of the circuit
is available to the communicating parties. The cost of the circuit is calculated on the
basis of the time used. Circuit switching networks are ideal for real-time data
transmissions, and are sometimes called Connection-oriented networks.
(Fig. 2.21: A circuit-switched network: stations A to G communicate through interlinked switches I to IV.)
One typical illustration of circuit switching is the telephone system shown in Fig. 2.21.
When we dial a number, our call lands at the telephone exchange where our phone is
attached. The telephone exchange acts like a big switch connecting the telephone
exchange of the called number. From that exchange, a connection to the called
number is established. The connection remains established until the communicating
parties finish the communication.
Limitations of Circuit Switching
1. Circuit switching was designed for voice communication. So it is less suited for
data and non-voice transmissions.
2. In circuit switching, the link creates the equivalent of a single cable between two
devices and thereby assumes a single data rate for both devices, which limits the
usefulness of the connection in networks interconnecting a variety of digital devices.
Header: It holds control information about the data contained in the packet.
Body: It is the actual data contained in the packet and delivered to the
destination.
Footer: It is a portion of the packet that holds control information about the
packet for error checking. The source computes a checksum of the body of the
packet and appends the same in the footer. The receiving device again
computes the checksum. If the values match, the content of the packet has not been
tampered with; if they do not match, the receiver determines that there has been an
integrity violation and sends a request to the originating device to resend the packet.
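The footer mechanism can be sketched in a few lines of Python. Here CRC-32 from the standard zlib module stands in for whatever error-checking code a real protocol would use; the packet layout (body plus a 4-byte footer) is an assumption made for the example.

import zlib

def make_packet(body: bytes) -> bytes:
    # Footer: a 4-byte CRC-32 checksum of the body, appended by the source.
    footer = zlib.crc32(body).to_bytes(4, "big")
    return body + footer

def verify(packet: bytes) -> bool:
    body, footer = packet[:-4], packet[-4:]
    # The receiver recomputes the checksum and compares it with the footer.
    return zlib.crc32(body).to_bytes(4, "big") == footer

pkt = make_packet(b"account balance: 5000")
print(verify(pkt))                        # True: content intact
tampered = b"account balance: 9000" + pkt[-4:]
print(verify(tampered))                   # False: receiver requests a resend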
Table 2.1 compares a Circuit-switched Network with a Packet-switched Network:

ISSUE                            CIRCUIT-SWITCHED   PACKET-SWITCHED
Dedicated physical path          Yes                No
Bandwidth available              Fixed              Dynamic
Bandwidth potentially wasted     Yes                No
Store-and-forward transmission   No                 Yes
Call setup                       Required           Not required
When congestion can occur        At setup time      On every packet
Costing                          Per minute         Per packet
(Figure: Datagram packet switching: packets 1, 2 and 3 from the sender may take different routes through the network nodes to reach the receiver.)
The following table compares the Datagram approach with the Virtual-Circuit approach:

Issue                       Datagram Subnet                           Virtual-Circuit Subnet
Circuit setup               Not required                              Required
Addressing                  Each packet contains the full             Each packet contains a short
                            source and destination address            VC number
State information           Subnet does not hold state                Each VC requires subnet table
                            information                               space
Routing                     Each packet is routed                     Route chosen at VC setup; all
                            independently                             packets follow it
Impact of router failures   None, except for packets lost             All VCs that passed through the
                            during the crash                          failed router are terminated
Congestion control          Difficult                                 Easier if enough buffers can be
                                                                      allocated in advance for each VC
(Figure: Message switching: complete messages M1 and M2 are stored and forwarded through the switching points SP1 to SP4 on their way from sender to receiver.)
MODEM
MODEM (Modulator-Demodulator) is a device that modulates an analog carrier signal
to encode digital information, and also demodulates such a carrier signal to decode
the transmitted information. The goal is to produce a signal that can be transmitted
easily and decoded to reproduce the original digital data. A modem connects a
computer to a standard telephone line so that data can be transmitted and received
electronically.
Types of Modems
1. External Modem: This is the most commonly used modem, for it is simple to
install and operate. These modems have their own power supply and are
connected to the serial port of the computer. External modems have indicators
for the following:
i. Transmit Data (Tx)
ii. Receive Data (Rx)
iii. Acknowledge (ACK)
iv. Signal Ground
v. Data Terminal Ready (DTR)
vi. Carrier Detect (CD)
vii. Request to Send (RTS)
viii. Clear to Send (CTS)

Did you know? The modem that we use for accessing broadband from home is
generally the ADSL (Asymmetric Digital Subscriber Line) modem. The ADSL modem
splits the traffic between channels over regular telephone lines and can carry signals
for up to 5000 meters.
2. Internal Modem: This is directly plugged into the motherboard of the computer.
It takes power from the computer's power-supply unit and operates like the
external modem.
3. PC Card Modem: This modem, designed for portable computers, is of the size
of a credit card and fits into the PC Card slot on a notebook.
4. Wireless Modem: Some Internet Service Providers support wireless internet
services, for which wireless modems are used. These modems work like the
traditional wired modems, but differ from them in their structure.
5. Cable Modem: The cable modem uses coaxial cable television lines to provide a
greater bandwidth than the dial-up computer modem. But this transmission rate
fluctuates with the number of users because of the shared bandwidth on which
the cable technology is based.
6. DSL Modem: A DSL (Digital Subscriber Line) modem is used exclusively for
connections from a telephone switching office to the user. It is of two categories:
ADSL (Asymmetric DSL) and SDSL (Symmetric DSL).
Network Categories
A computer network consists of autonomous machines, called nodes, connected in
some configuration. The purpose of networking is to communicate information and
share resources (both hardware and software). Table 2.3 presents the broad
categorization of computer networks on different lines.
(Table 2.3: categories of computer networks: LANs, MANs, WANs, Wireless LANs, Home Networks and Internetworks, each described by its message capacity and range.)
Technologies used in Metropolitan Area Networks (MANs) include:
FDDI (Fiber Distributed Data Interface), which uses optical fiber as the basic
transmission medium and supports transmission speeds of 100-plus Mbps.
DQDB (Distributed Queue Dual Bus), a two-bus topology specified in IEEE 802.6.
DQDB supports transmission speeds ranging from 50 to 600 Mbps over distances
as large as 50 kilometers.
WANs are connected in three ways: Circuit Switching (ISDN, Switched 56 and
Switched T1), Packet Switching (ATM (Asynchronous Transfer Mode), Frame Relay
and X.25) and leased lines.
Wireless LANs
Networks that are established by using digital wireless communications are generally
called Wireless LANs. In these, short-range radio is used as the medium of
communication. One such short-range wireless network is the Bluetooth technology
that connects keyboards, printers, digital cameras, headsets, scanners, and other
devices to a computer.
In wireless LANs, every computer has a radio modem and antenna with which it can
communicate with other systems. Wireless LANs are also implemented in wide area
systems. The first generation wireless network was analog and used for voice only.
The second generation was digital but used for voice only. The 3G wireless networks
cater to both voice and data.
Home Networks
Home Networks, a recent development, are based on the assumption that in the
future almost all devices in every home will be networked. Every device will be capable
of talking to every other device, and all of them will be accessible over the Internet.
Some of the devices that are likely to be networked and accessible by Internet
include desktop PCs, notebook PCs, PDAs, shared peripherals, TVs, DVDs,
camcorders, cameras, MP3 players, telephones (both landline and mobile), intercom
devices, fax machines, microwave cooking ranges, refrigerators, clocks, lighting
devices, smoke/burglar alarm, etc.
Internet works
An internet (lowercase "i") is a collection of separate physical networks,
interconnected by a common protocol, to form a single logical network. The Internet
(uppercase "I") is the worldwide collection of interconnected networks that use
Internet Protocol to link the various physical networks into a single network.
Peer-to-Peer Architecture: In this, every node can act as both a client and a
server; that is, all nodes are treated on par.
Server-Based Architecture: In this, a dedicated file server runs the network,
granting other nodes access to resources. Novell's NetWare is a classic example
of server-based architecture.
Fig. 2.25 displays the hierarchical classification of Network Topologies, which are:
Physical Topologies and Logical Topologies.
(Fig. 2.25: Hierarchical classification of network topologies. Physical topologies: Mesh, Star, Tree, Bus and Ring. Logical topologies: Daisy Chains, Centralization (Bus, Ring, Star, Tree) and Decentralization (Mesh), together with Hybrids such as Star-Ring and Star-Bus.)
i.
Physical Topology
Physical topology of a network is the actual geometric representation of the
relationship of all the links and linking devices. There are several common
physical topologies, described below and also shown in the illustration.
a. Mesh Topology
In this topology, every node is physically connected to every other node.
This is generally used in systems which require a high degree of fault
tolerance, such as the backbone of a telecommunications company or an
ISP. Its primary advantage is that it is highly fault tolerant: when one node
fails, traffic can easily be diverted to other nodes. It is also not vulnerable to
bottlenecks. Fig. 2.26 shows mesh topology.
Disadvantages
It requires more cables for connecting devices than any other topology.
It is complex and difficult to set up and maintain. If there are n nodes in the
system, (n-1) connections emanate from every node, and the total number of
connections in the system is n(n-1)/2, as the sketch after this list illustrates.
It is difficult to introduce or remove nodes from the system, as this
necessitates rewiring.
Its maintenance is expensive.
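The growth of the cabling requirement follows directly from the formula above, as this small Python computation shows:

# Number of links in a full mesh: each of the n nodes connects to the
# other (n - 1) nodes, and each link is shared by two nodes.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(n, "nodes ->", mesh_links(n), "links")
# 5 nodes -> 10 links; 10 nodes -> 45; 50 nodes -> 1225: cabling grows fast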
b. Star Topology
Used in early networks such as ARCNET (Attached Resource Computer Network),
star topology contains a central hub to which each and every node is connected.
This necessitates drawing a separate cable from each node to the central hub, and
all inter-node data transmission has to pass through the hub. Fig. 2.27 shows the
star topology.
(Fig. 2.27: Star topology: six nodes and a file server connected to a central hub.)
Advantages
Easy to troubleshoot.
Allows mixing of cabling types.
Easy to install and wire.
No disruptions to the network when nodes are down.
No disruptions to the network while connecting or removing devices.
Disadvantages
Hubs become a single point of failure.
Cabling is expensive because individual cables have to be drawn from
nodes to hubs.
More expensive than bus topology because of the high cost of hubs.
The capacity of the hub determines the number of nodes that can be connected.
c. Tree Topology
A tree topology, also called Expandable Star Topology, consists of groups
of star-configured machines connected to one another by hubs. These hubs
are of two types: Active and Passive.
Active Hubs need electric power and have the ability to drive other hubs and
nodes. Passive hubs cannot drive hubs and are used to connect machines.
Connections between two Active hubs and between an Active and a Passive hub
are permitted, whereas a connection between two Passive hubs is not.
Fig. 2.28 shows a tree topology with Active and Passive hubs.
(Fig. 2.28: Tree topology: active hubs, which need a power supply, drive nodes and other hubs; a passive hub, which needs no power supply, merely connects nodes.)
Advantages
Disadvantages
d. Bus topology
In Bus topology, a single cable also called the backbone, runs through the entire
network connecting all the workstations, servers, printers and other devices on
the network. The cable runs from device to device by using tee connectors that
plug into the network adapter cards. A device wishing to communicate with
another device on the network sends a broadcast message onto the wire that all
other devices see, but only the intended recipient actually accepts and
processes the message.
Fig. 2.29 shows the picture of a bus topology.
(Fig. 2.29: Bus topology: nodes attached along a single backbone cable, with a terminator at each end and a file server on the bus.)
Advantages
Less expensive when compared to star topology, due to less cabling and
no network hubs.
Disadvantages
e. Ring Topology
In a ring network, every device has exactly two neighbours for
communication purposes. All messages travel through a ring in the same
direction (effectively either "clockwise" or "anti-clockwise"). A token, or small
data packet, is continuously passed around the network. Whenever a device
needs to transmit, it holds the token. Whoever holds the token has the right
to communicate.
Token ring networks have the physical cabling of a star topology and the
logical function of a ring through the use of multi access units (MAU). In a
ring topology, the network signal is passed through each network card of
each device and passed on to the next device. All devices have individual
cables to the MAU. The MAU makes a logical ring connection between the
devices internally. Fig. 2.30 shows the ring topology.
(Fig. 2.30: Ring topology: each node is connected to exactly two neighbours.)
Advantages
Disadvantages
ii. Logical Topology
a. Daisy Chains
Linear Topology: This puts a two-way link between two computers. This
was expensive in the early days of computing, since each computer (except
for the ones at each end, which need only one of each) required two
receivers and two transmitters.
Ring Topology: This connects the computers at each end, and being
unidirectional, reduces the number of transmitters and receivers. When a
node sends a message, the message is processed by each computer in the
ring. If a computer is not the destination node, it passes the message to the
next node, until the message arrives at its destination.
b. Centralization
Star topology: The star topology reduces the probability of a network failure
by connecting all the peripheral nodes (computers, etc.) to a central node.
When the physical star topology is applied to a logical bus network such as
Ethernet, this central node (traditionally a hub) rebroadcasts all
transmissions received from any peripheral node to all peripheral nodes on
the network, sometimes including the originating node. All the peripheral
nodes may thus communicate with all others by transmitting to, and
92
receiving from, the central node only. The failure of a transmission line
linking any peripheral node to the central node will result in the isolation of
that node from all others, but the remaining peripheral nodes will be
unaffected. However, the disadvantage is that the failure of the central node
will cause the failure of all the peripheral nodes too.
Tree Topology: A tree topology can be viewed as a collection of star
networks arranged in a hierarchy. This tree has individual peripheral nodes
which transmit to and receive from one other node only and are not required
to act as repeaters or regenerators. Unlike the star network, the functionality
of the central node may be distributed.
c. Decentralization
Mesh Topology: In a mesh topology, there are at least two nodes with two
or more paths between them to provide redundant paths to be used in case
the link providing one of the paths fails. This decentralization is often used to
advantage to compensate for the single-point-failure disadvantage that is
present when using a single device as a central node (e.g., in star and tree
networks). The number of arbitrary forks in mesh networks makes them
more difficult to design and implement, but their decentralized nature makes
them very useful.
d. Hybrids
Hybrid networks use a combination of any two or more topologies in such a way
that the resulting network does not exhibit any one of the standard topologies
(e.g., bus, star, ring, etc.). For example, a tree network connected to a tree
network is still a tree network, but two star networks connected together create a
hybrid network topology. A hybrid topology is always produced when two
different basic network topologies are connected. Two common examples for
Hybrid network are: Star Ring network and Star Bus network.
Star Ring Network: A Star Ring network consists of two or more star
topologies connected by using a multi-station access unit (MAU) as a
centralized hub.
Star Bus Network: A Star Bus network consists of two or more star
topologies connected by using a bus trunk (the bus trunk serves as the
network's backbone).
Transmission Media
Transmission media are broadly classified into two: Guided and Unguided, which are
further categorized as shown in Fig. 2.31.
i. Guided Transmission Media
Guided Transmission Media, also known as Bound Media, use a "cabling" system
that guides the data signals along a specific path; the data signals are bound by the
"cabling" system.
1. Unshielded Twisted Pair (UTP)
This is the most commonly used medium in networks. It consists of two
wires twisted over one another, which reduces interference in the signals.
Twisting the wires together results in a characteristic impedance for the cable;
a typical impedance for UTP is 100 ohms for Ethernet 10BaseT cable. Unshielded
Twisted Pair cable is used on Ethernet 10BaseT and can also be used with
Token Ring.
The EIA/TIA (Electronic Industry Association/Telecommunication Industry
Association) has established standards for UTP and rated the wire into categories,
as shown in Table 2.4.
(Table 2.4: UTP wire types, Category 1 through Category 6, and their uses.)
(Fig. 2.31: Transmission media. Guided: Twisted Pair, Co-axial Cables (Thick and Thin) and Optical Fibers (Single Mode, and Multi-Mode with Step-Index or Graded-Index). Unguided: Radio waves, Microwaves, Infra-red waves, Satellite and Cellular Phones.)
3. Coaxial Cable
Co-axial cable consists of a central core conductor of solid or stranded wires.
The central core is held inside an insulator with the other conductor either in the
form of a metal shield or a braid woven around it providing a shield. An insulating
protective coating called a jacket covers the outer conductor.
Typical impedances for coaxial cables are 75 ohms for Cable TV, 50 ohms for
Ethernet Thinnet and Thicknet. The good impedance characteristics of the coaxial
cable allow higher data rates to be transferred than with twisted pair cable.
Disadvantages

Table 2.5 compares the various types of guided media:

Factor        UTP                 STP                           Co-axial                      Optical Fibre
Cost          Lowest              Moderate                      Moderate                      Highest
Installation  Easy                Fairly easy                   Fairly easy                   Difficult
Bandwidth     Typically 10 Mbps   Typically 16 Mbps             Typically 16 Mbps             Typically 100 Mbps
Attenuation   High                High                          Lower                         Lowest
EMI           Most vulnerable     Less vulnerable than UTP      Less vulnerable than STP      Not affected
Security      Easy to eavesdrop   Vulnerable to eavesdropping   Vulnerable to eavesdropping   Extremely difficult to eavesdrop
1. Radio waves
There are three types of RF (radio frequency) propagation:
Ground Wave propagation
Ionospheric propagation
Line of Sight (LOS) propagation
Ground wave propagation: This propagation follows the curvature of the Earth.
Ionospheric propagation: The Earth is surrounded by three layers:
Troposphere, where there is air; Stratosphere, where jets fly; and Ionosphere
that contains charged particles. Ionosphere acts as a reflecting media that
reflects the signal back to the earth. The transmitting equipment beams the
signal towards the Ionosphere and the beam after hitting the Ionosphere layer
gets reflected back, and the receiving station picks it up. Factors such as changes in
the weather and the time of day have a significant influence on this propagation.
Line of Sight (LOS) propagation: In this, the transmitter and the receiver face
each other. This is sometimes called space waves or tropospheric propagation.
Curvature of the Earth for ground-based stations and reflected waves can cause
problems.
Fig. 2.34 shows various types of Radio-Frequency propagation.
2. Microwave
Microwave transmission is a typical example of the line of sight transmission.
The transmitting station must be in visible contact with the receiving station. This
sets a limit on the distance between stations. If the distance is more, there is a
need for repeaters.
Infrared Transmission: Infrared transmissions use infra-red radiation, a
frequency range just below the visible light spectrum. This transmission can be
used only over short distances.
3. Satellite
Satellites act as transponders and play a crucial role in modern communication.
Today, geostationary satellites orbiting at 36,000 km from the Earth's surface
carry transponders which relay signals from source to destination.
Fig. 2.35 shows satellite propagation.
4. Cellular Telephony
Modern day communications use cellular telephones in a big way. Computer
systems are connected to the handset and from there these are connected to the
Mobile Telephone Switching Office (MTSO). MTSO is a central switch that
controls the entire operation of a cellular system. It is a sophisticated computer
that monitors all cellular calls, tracks the location of all cellular-equipped vehicles
traveling in the system, arranges handoffs, and keeps track of billing information.
Summary
This chapter explains the concept of a network and its types. Data communication can
take place using various modes of data transmission, depending upon the direction
of data exchanges, the number of bits sent simultaneously and the synchronization
between senders and receivers. A MODEM uses the concepts of modulation and
demodulation to carry digital data over analog telephone lines.
Questions
1.
2.
3.
4.
5.
On the basis of the number of bits sent simultaneously, the transmission mode
is categorized into _____.
a. Simplex and Duplex
b. Client Server and Peer-to-Peer architecture
c. Simplex and Half-Duplex
d. Serial and Parallel Transmission
6.
7.
8.
9.
10.
11.
12.
13.
In _______ TDM, the multiplexer allocates exactly the same time slot to each
device at all times, whether or not a device has anything to transmit.
a. Synchronous
b. Asynchronous
14. MAU is_____.
a. Miscellaneous Access Units
b. Multi Access Unicode
c. Multi - station Access Units
d. Miscellaneous Access Unicode
15.
SDSL is_____.
a. Symmetric Digital Subscriber Line
b. Subscriber Digital Subscriber Line
c. Subscriber Digital Symmetric Line
d. Symmetric Digital Symmetric Line
16.
17.
18.
19.
Which of the following is not a factor that influences the use of appropriate
media?
a. Cost
b. Attenuation
c. Security
d. Noise
20.
Answers: 1 b, 2 a, 3 b, 4 a, 5 d, 6 c, 7 a, 8 b, 9 a, 10 c, 11 c, 12 a, 13 b, 14 c, 15 a, 16 b, 17 c, 18 b, 19 d, 20 c.
Introduction
A network may be described as a group or collection of computers connected
together for sharing resources and information. When individual networks are
connected by different types of network devices, such as routers and switches, the
result is called an internetwork. As already discussed in Chapter 2, networks can be
classified into Local Area Networks (LAN), Wide Area Networks (WAN), Metropolitan
Area Networks (MAN), Wireless LANs, Home Networks and Internetworks, depending
on their range or scale.
Enabling effective and secure communication among systems with disparate
technologies (within an internetwork) is a challenging task. The Open Systems
Interconnection (OSI) model enables easy integration of various technologies and
provides solutions for managing the internetworking environment.
Protocol
In computer networks, communication occurs between entities in different systems
that share application programs, file transfer packages, browsers and database
management systems. A networking protocol is defined as a set of rules that governs
data communication over a network: what is communicated, how it is communicated
and when it is communicated.
The key elements of a protocol are:
a. Syntax: the structure or format of the data, that is, the order in which they
are presented.
b. Semantics: the meaning of each section of bits; how a particular pattern is to be
interpreted and, based on that, what action is to be taken.
c. Timing: when the data is to be transmitted and how fast it can be sent.
Standards
Standards are documented agreements containing technical specifications or other
precise criteria to be used consistently as rules, guidelines, or definitions of
characteristics, to ensure that materials, products, processes and services are fit for
their purpose. These provide a model for development that makes it possible for a
product to work regardless of the individual manufacturer. These are essential for
creating and maintaining an open and competitive market for manufacturers and
provide guidelines to vendors, government agencies and other service providers to
ensure the kind of interconnectivity necessary in today's marketplace and in
international communications.
Benefits of standardization:
I. International
a. International Standards Organization (ISO):
ISO is a non-governmental worldwide federation of national standards bodies from
some 100 countries, one from each country, established in 1947. Its mission is to
promote the development of standardization and related activities in the world.
Module - I
g. The Institute of Electrical and Electronics Engineers (IEEE):
The Institute of Electrical and Electronics Engineers (IEEE) is the world's largest
technical professional society. Founded in 1884 by a handful of practitioners of the
new electrical engineering discipline, the Institute presently has more than 320,000
members who conduct and participate in its activities in 147 countries. IEEE sponsors
technical conferences, symposia, local meetings and educational programs to keep
its members' knowledge and expertise state-of-the-art. The purpose of all these
activities is two-fold: (1) to enhance the quality of life for all peoples through
improved public awareness of the influences and applications of its technologies; and
(2) to advance the standing of the engineering profession and its members.
h. The Internet Engineering Task Force (IETF):
The Internet Engineering Task Force (IETF) is the protocol engineering and
development arm of the Internet: a large open international community of network
designers, operators, vendors and researchers concerned with the evolution of the
Internet architecture and the smooth operation of the Internet. The technical work of
the IETF is done in its working groups, which are organized into several areas (e.g.,
routing, network management, security, etc.).
II. Regional
a. Comité Européen de Normalisation (CEN):
The European Committee for Standardization is responsible for European
standardization in all fields except electro-technical (covered by CENELEC) and
telecommunications (covered by ETSI). A related project of the CEN on the web is
the standardization of the European character set in the fields of identification,
coding, and others.
b. The European Telecommunications Standards Institute (ETSI)
The European Telecommunications Standards Institute produces standards for
information and communications technologies, including telecommunications, for
Europe.
c. The European Workshop on Open Systems (EWOS)
The European Workshop on Open Systems is the European forum for one-stop
development of technical guidance and pre-standards in the information and
communications technologies (ICT) field, working for the benefit of vendors, planners,
procurers, implementers and users.
III. National
a. Standards Australia (SAA)
Standards Australia is the Australian representative on the two major international
standardization bodies, ISO and the IEC (International Electrotechnical Commission).
The OSI Model
The Open Systems Interconnection (OSI) model is a layered reference model
for inter-computer communications. The model divides the tasks involved with moving
information between networked computers into seven smaller, more manageable task
groups. A task or group of tasks is then assigned to each of the seven OSI layers.
Each layer is reasonably self-contained so that the tasks assigned to it can be
implemented independently. This enables the solutions offered by one layer to be
updated without adversely affecting the other layers.
Fig. 3.1 shows the seven layers of the OSI Model. These layers belong to three
subgroups:
1. Layers 1, 2 and 3 - Physical, Data Link and Network, are the Network support
layers dealing with the physical aspect of transmitting data from one device to
another.
2. Layers 5, 6 and 7 - Session, Presentation and Application, can be considered as
the user support layers that allow interoperability among unrelated software
systems.
3. Layer 4 - the Transport Layer, ensures end-to-end reliable data transmission
while Layer 2 ensures reliable transmission on a single link.
(Fig. 3.1: The seven layers of the OSI Model: Application, Presentation, Session, Transport, Network, Data Link and Physical.)
The upper OSI layers (Layers 5, 6 and 7) are almost always implemented in software,
the lower layers (Layers 2, 3 and 4) are a combination of hardware and software,
and the physical layer (Layer 1) is almost entirely hardware. A handy way to remember
the seven layers is the sentence "All people seem to need data processing"; the
beginning letter of each word corresponds to a layer:
All: Application layer
People: Presentation layer
Seem: Session layer
To: Transport layer
Need: Network layer
Data: Data link layer
Processing: Physical layer
(Fig. 3.2: Peer-to-peer communication between Host A and Host B: data travels as segments at the Transport layer, packets at the Network layer, frames at the Data Link layer and bits at the Physical layer.)
When information or data, such as an email, is sent from one machine to another on
the Internet, the message is broken down into chunks of a certain size in bytes,
called packets; the message has to be segmented into smaller units so that it is easy
to transport over the networks. In Fig. 3.3, L7 data refers to the data unit at layer 7,
L6 data to the data unit at layer 6, and so forth. The process starts at layer 7 of the
sender's machine and descends sequentially from layer to layer. At each layer, a
header is added to the data unit; at layer 2, a trailer is also added. When the
formatted data passes through the physical layer, it is transformed into an
electromagnetic signal and transported along a physical link.
Upon reaching its destination, the process is reversed: the data units move
sequentially upward from layer 1 to layer 7. As each data unit reaches a layer, the
header (and, at layer 2, the trailer) attached to it by the corresponding sending layer
is removed, and actions appropriate to that layer are taken.
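The descend-and-wrap, ascend-and-unwrap process can be mimicked with a short Python sketch. The header strings (L2H, L7H, etc.) are invented placeholders; real headers carry addresses, sequence numbers and other control fields.

# Toy encapsulation: each sending layer wraps the data unit in its own
# header (layer 2 also adds a trailer); the receiver peels them off in
# reverse order.
LAYERS = ["L7", "L6", "L5", "L4", "L3", "L2"]

def send(data: str) -> str:
    for layer in LAYERS:                      # descend from layer 7 to 2
        data = f"{layer}H|{data}"
        if layer == "L2":
            data = f"{data}|L2T"              # trailer added at layer 2 only
    return data                               # layer 1 transmits as a bit stream

def receive(unit: str) -> str:
    for layer in reversed(LAYERS):            # ascend from layer 2 to 7
        if layer == "L2":
            unit = unit[: -len("|L2T")]       # strip the trailer first
        unit = unit[len(f"{layer}H|"):]       # then strip this layer's header
    return unit

frame = send("email body")
print(frame)            # L2H|L3H|L4H|L5H|L6H|L7H|email body|L2T
print(receive(frame))   # email body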
Application Layer
The application layer is the closest layer to the end user, which means that both the
OSI application layer and the user interact directly with the software application.
Responsibilities of the Application Layer include the following:
i.
This layer provides a means for the user to access information on the network
through an application, but does not include the application itself. It provides an
interface to the user to interact with the application and therefore the network.
ii. This layer interacts with software applications that implement a communicating
component. Such application programs fall outside the scope of the OSI model.
For example, the Internet browser is an example of the communication service
application that functions at this layer. Some other interfaces and support for
services like e-mail, remote file access and transfer, shared database
management, and other types of information services are provided at application
layer level only.
iii. Application layer functions include identifying communication partners,
determining resource availability, and synchronizing communication. When
identifying communication partners, the application layer determines the identity
and availability of communication partners for an application with data to
transmit.
Presentation Layer
Responsibilities of the Presentation Layer include the following:
i. Translation: This layer receives information from the application layer and
converts it into an ordered and meaningful format to ensure that it is
understandable to all the systems following the OSI model. Since different
computers use different encoding systems, the presentation layer is responsible
for interoperability between these different encoding methods. This layer does
the important job of converting the various services from their proprietary
application formats to universally accepted formats such as ASCII and EBCDIC.
ii. Encryption/Decryption : A system must be able to conserve privacy of the data
sent from one system to another. Encryption is a technique provided at this level
wherein the sender transforms the original information to another form and sends
the resulting message out over the network. Decryption reverses the original
process to transform the message back to its original form.
iii. Compression : Data compression reduces the number of bits to be transmitted
and thus is important in the transmission of multimedia, such as text, audio, and
video. Among the well-known graphic image formats are Graphics Interchange
Format (GIF), Joint Photographic Experts Group (JPEG), and Tagged Image File
Format (TIFF).
Common data representation formats, or the use of standard image, sound, and
video formats, enable the interchange of application data between different types of
computer systems. Standard data compression schemes enable data that is
compressed or encrypted at the source device to be properly decompressed, or
deciphered at the destination.
Transport Layer
Responsibilities of the Transport Layer include the following:
i. End-to-End Delivery: This layer is responsible for source-to-destination
(end-to-end) delivery of the entire message, whereas the network layer oversees the
end-to-end delivery of individual packets; it makes sure that the data is delivered
error-free and in the proper sequence.
ii. Connection Control: This layer can be either connectionless or connection-oriented.
A connectionless transport layer treats each segment as an independent packet
and delivers it to the transport layer at the destination machine. A connection-oriented
transport layer makes a connection between the two end ports for more
security. A connection is defined as a single logical path between the source and
destination associated with all the packets in a message.
iii. Segmentation and Reassembly: The transport layer divides the data into
transmittable segments, each segment containing a sequence number, which
enables the layer to reassemble the message correctly upon arrival at the
destination and to identify and replace packets lost in transmission.
iv. Error & Flow Control: The error and flow control at this layer is performed end
to end rather than across a single link. The sender's transport layer ensures that
the entire message arrives at the receiving transport layer without any error,
damage, loss, or duplication.
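Segmentation and reassembly with sequence numbers can be sketched in Python as follows; the message, segment size and error handling are invented for the example and do not represent any specific transport protocol.

# Toy segmentation: the sending side numbers each segment so the
# receiver can reorder them and spot gaps (lost segments).
def segment(message: str, size: int):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments, total: int) -> str:
    received = dict(segments)
    missing = [s for s in range(total) if s not in received]
    if missing:
        raise ValueError(f"request retransmission of segments {missing}")
    return "".join(received[s] for s in range(total))

segs = segment("end-to-end delivery of the entire message", 10)
out_of_order = [segs[2], segs[0], segs[4], segs[1], segs[3]]
print(reassemble(out_of_order, len(segs)))   # original message restored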
(Fig. 3.3: Data encapsulation: at the sender, each layer adds its header (H2 to H6, with trailer T2 at layer 2) around the data unit from the layer above; the result is transmitted as a bit stream, and the receiver strips the headers and trailer off in reverse order.)
A connection-oriented service involves three phases:
Connection Establishment,
Data Transfer, and
Connection Release.
(Fig. 3.4: Connection establishment: the sender and receiver exchange synchronize, negotiate-connection and acknowledge messages; once the connection is established, data transfer begins.)
In connection - oriented communication, the chain of nodes forms a kind of logical
pathway. The nodes forwarding the data packet provide information about which
packet is part of which connection. This enables the nodes to provide flow control as
the data moves along the path. For example, if a node determines that a link is
malfunctioning, it sends a notification message backward, through the path to the
source computer. Moreover, in a connection-oriented communication, nodes have the
facility to provide error correction at each link in the chain. If a node detects any error,
it asks the preceding node to retransmit.
Connection - less communication: Connection - less Service is modeled after the
postal system. In this, each packet carries the full destination address and each one
is routed through the system independent of all others. All activities regarding error
correction and retransmission are carried out by the source and destination nodes.
The nodes merely acknowledge the receipt of packets and, in case of errors,
retransmit. Internal nodes do not participate in flow control either. The main
advantage of the connectionless mode is that communication is faster, though it
has some downsides.
The main disadvantages are:
V. Network Layer
The main tasks of the network layer include routing, congestion control, logical
addressing, address transformation and the delivery of data from source to
destination. These ensure that the network layer transports traffic between devices
that are not locally connected.
Responsibilities of the Network Layer include the following:
i. Routing: If two systems are connected to the same link, there is usually no need
for a network layer. But when independent networks or links are connected together
to form an internetwork, routing is needed to deliver packets across the connecting
devices to their final destination.
VI. Data Link Layer
Responsibilities of the Data Link Layer (DLL) include the following:
i. Data Framing: The Data Link Layer divides the stream of bits received from the
network layer into manageable data units called frames.
ii. Physical Addressing: The DLL adds a header to the frame to define the
physical address of the sender and/or receiver of the frame.
iii. Flow Control: The DLL imposes a data flow control mechanism in case the rate of production of data frames does not match the rate of absorption at the receiver's end.
iv. Error Control: The DLL adds reliability to the physical layer by adding
mechanisms such as acknowledgement frames, detection and retransmission of
damaged or lost frames, etc.
v. Access Control: When two or more devices are connected to the same link,
DLL protocols are necessary to determine which device has control over the link
at any given time.
Protocols for the Data Link Layer include Ethernet, ATM, LocalTalk, Token Ring, SLIP, CSLIP, PPP and Fiber Distributed Data Interface (FDDI).
The IEEE standard divides DLL into two sub layers : Logical Link Control (LLC) and
Medium Access Control (MAC) as shown in Fig. 3.5.
[Fig. 3.5: The Data Link Layer divided into the LLC (upper) and MAC (lower) sub-layers.]
One of the key advantages of the model is that it supports inter-operability and portability:
Changes in component technologies and product innovations do not affect the components' capability to inter-connect.
It allows modularity, because of which products performing only a part of the communication activities can interface easily with other components.
It promotes standardisation of interface design.
The following summarises the function of each OSI layer, together with representative protocols and related standards:

Application : Provides network services such as file transfer, mail, remote login and network management to user applications. Protocols: DNS, FTP, TFTP, BOOTP, SNMP, RLOGIN, SMTP, MIME, NFS, FINGER, TELNET, NCP, APPC, AFP, SMB.

Presentation : Translation of data formats.

Session : Establishes, manages and synchronises sessions. Protocols: NetBIOS, Named Pipes, Mail Slots, RPC.

Transport : Divides streams of data into chunks or packets; manages the flow control of data between parties across the network; handles error control; provides an additional connection below the session layer.

Network : Addressing; routing.

Data Link : Turns packets into raw bits of 0 and 1 and, at the receiving end, turns bits back into packets (data frames to bits); handles data frames between the Network and Physical layers; the receiving end packages raw data from the Physical layer into data frames for delivery to the Network layer; responsible for error-free transfer of frames to other computers via the Physical Layer. Its two sub-layers are:
Logical Link Control (LLC) : Provides error correction and flow control; manages link control and defines Service Access Points (SAPs). Related standards: 802.1 (OSI Model), 802.2 (Logical Link Control).
Media Access Control (MAC) : Communicates with the adapter card and controls the type of media being used. This sub-layer defines the methods used to transmit and receive data on the network: the wiring, the devices used to connect the NIC to the wiring, the signalling involved to transmit/receive data, and the ability to detect signalling errors on the network media. Related standards: 802.3 CSMA/CD (Ethernet), 802.4 Token Bus (ARCnet), 802.5 Token Ring, 802.12 Demand Priority.

Physical : Transmits the raw bit stream over the physical cable. Defines cables, cards and other physical aspects; defines NIC attachments to hardware and how the cable is attached to the NIC; defines techniques to transfer the bit stream to the cable. Related standards: IEEE 802, IEEE 802.2, ISO 2110, ISDN.
Networking Devices
Computer networking devices are units that mediate data in a computer network.
Computer networking devices are also called network equipment, Intermediate
Systems (IS) or Inter-Working Units (IWU). Units that are the final receivers of data, or that generate data, are called hosts or data terminal equipment. Table 3.2 displays the various networking devices used in internetwork communication.
Table 3.2 : Networking Devices

Repeater (OSI Layer 1) : Regenerates and retransmits signals to extend the reach of a network segment.
Hub (Layer 1) : A multiport repeater that connects several devices and forwards signals out of every port.
Modem (Layer 1) : Modulates and demodulates signals so that digital data can travel over analog lines.
Line Driver (Layer 1) : Amplifies a signal to extend the distance over which it can be transmitted.
Multiplexer (Layer 1) : Combines several signals for transmission over a single shared medium.
Network Interface Card (Layers 1-2) : Connects a host to the transmission medium and frames data for it.
Bridge (Layer 2) : Connects two network segments and forwards frames based on physical (MAC) addresses.
Switch (Layer 2) : A multiport bridge that forwards frames to the specific port of the destination device.
Router (Layer 3) : Forwards packets between networks based on logical (IP) addresses.
Bridge Router, or Brouter (Layers 2-3) : Combines the functions of a bridge and a router.
Network Address Translator (Layer 3) : Rewrites source or destination IP addresses, typically to share one public address among many private hosts.
Firewall (Layers 3-7) : Controls traffic between networks according to an access policy.
Proxy (Layers 1-7) : Makes requests on behalf of clients, hiding them from the external network.
Gateway (Layers 4-7) : Translates between networks that use different protocols or architectures.
Network Management
Network management refers to the activities, methods, procedures, and tools that
pertain to the operation, administration, maintenance, and provisioning of networked
systems.
Operation deals with keeping the network (and the services that the network
provides) running smoothly. It includes monitoring the network to spot problems
as soon as possible, ideally before users are affected.
Administration deals with keeping track of resources in the network and how they
are assigned. It includes all the "housekeeping" that is necessary to keep the
network under control.
Maintenance is concerned with performing repairs and upgrades: for example, when equipment must be replaced, when a router needs a patch for an operating system image, or when a new switch is added to a network. Maintenance also
involves corrective and preventive measures to make the managed network run
"better", such as adjusting device configuration parameters.
Provisioning is concerned with configuring resources in the network to support a
given service. For example, this might include setting up the network so that a
new customer can receive voice service.
The ISO network management model groups network management into five major functional areas:
Performance management,
Accounting management,
Configuration management,
Fault management, and
Security management.
Performance Management
Performance management is monitoring, assessing, and adjusting the available
bandwidth and network resource usage to make a network run more efficiently. It is particularly important to businesses and organisations that want to streamline their network's performance. SolarWinds is a widely used tool for performance management.
Examples of performance variables that might be provided include network
throughput, user response times, and line utilization.
Performance management involves three main steps:
a. Firstly, gathering of performance data for analysis.
b. Analysing the data to determine the baseline levels.
c. Finally, determining appropriate performance thresholds for every important
parameter.
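As a rough sketch of steps b and c, a baseline and an alert threshold can be derived statistically from the gathered data. The response-time samples below are hypothetical, and the three-standard-deviation rule is only one possible choice of threshold:

    import statistics

    def performance_threshold(samples, k=3):
        """Derive a baseline and an alert threshold from response times (ms)."""
        baseline = statistics.mean(samples)
        spread = statistics.stdev(samples)
        return baseline, baseline + k * spread   # alert at k std-devs above baseline

    times = [110, 95, 102, 98, 130, 105, 99, 101]    # hypothetical measurements
    base, limit = performance_threshold(times)
    print(f"baseline {base:.1f} ms, alert above {limit:.1f} ms")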
Accounting Management
Accounting management monitors and assesses the usage of data and/or resources
for the purpose of billing. The goal is to measure network utilisation parameters so that the usage of the network by an individual or a group can be regulated appropriately for optimal resource utilisation. This aspect of network management is used by Internet Service Providers to bill customers for the resources they use.
Configuration Management
The goal of configuration management is to monitor network and system
configuration information so that the effects on network operation of various versions
of hardware and software elements can be tracked and managed. An example of this
is Microsoft's Systems Management Server (SMS), which has the capability to monitor,
manage and track every piece of software and hardware on a given network.
Fault Management
The goal of Fault Management is to detect, log and alert the system administrators of problems that might affect the system's operations.
Fault management is a three - step process:
a. Firstly, it determines symptoms and isolates the problem.
b. Then it fixes the problem and tests the solution on all important subsystems.
c. Finally, the detection and resolution of the problem are recorded.
Security Management
Security Management deals with controlling access to resources and even alerting
the proper authorities when certain resources are accessed so that the network is not
sabotaged intentionally or unintentionally and sensitive information is not accessed
by those without appropriate authorisation. Intrusion detection systems such as
Symantec's Intruder Alert have this security management capability.
The IEEE 802 family of standards is maintained by the IEEE 802 LAN/MAN
Standards Committee (LMSC). The most widely used standards are for the Ethernet
family, Token Ring, Wireless LAN, Bridging and Virtual Bridged LANs. Table 3.3 lists
IEEE LAN standards.
Name : Description
IEEE 802.1 : Bridging (Networking) and Network Management.
IEEE 802.2 : Logical Link Control.
IEEE 802.3 : Ethernet.
IEEE 802.4 : Token Bus.
IEEE 802.5 : Defines the MAC sub-layer for a Token Ring.
IEEE 802.6 : Metropolitan Area Networks.
IEEE 802.7 : Broadband LAN using Coaxial Cable.
IEEE 802.9 : Integrated Services LAN.
IEEE 802.10 : Interoperable LAN Security.
IEEE 802.11 a/b/g/n : Wireless LAN & Mesh (Wi-Fi Certification).
IEEE 802.14 : Cable Modems.
IEEE 802.15.1 : Bluetooth Certification.
IEEE 802.16 : Broadband Wireless Access (WiMAX Certification).
IEEE 802.20 : Mobile Broadband Wireless Access.
IEEE 802.22 : Wireless Regional Area Network.
Summary
We have understood the basic concept of protocols, their usage and characteristics. The chapter dealt with the seven-layered OSI Model and its importance in carrying out data transmission over a network, the IEEE standards that define the rules of data transmission, and the various networking devices used at each end of the communication.
Questions
1. Which of the following is not an element of Protocol?
a. Syntax
b. Semantics
c. Format
d. Timing
2. Which of the following is not an OSI Layer?
a. Application Layer
b. Circuit Layer
c. Presentation Layer
d. Transport Layer
3. OSI Stands for _________.
a. Open Systems Interconnection
c. WiMax
d. Token Bus
11. OSPF stands for __________.
a. Open Shortest Path First
b. Open Shortest Pattern First
c. Outsource Shortest Path First
d. None of these
12. Trailer in a packet is added only at the _________.
a. Session Layer
b. Data Link Layer
c. Network Layer
d. Physical Layer
13. CMIP stands for _____
a. Customer Management Information Protocol
b. Customer Management Input Process
c. Common Management Information Protocol
d. Common Management Input Protocol
14. A specialized network device that determines the next network point to which a
data packet is forwarded toward its destination is called ________.
a. Gateway
b. Router
c. Firewall
d. Hub
15. TIFF stands for _________.
a. Tagged Image File Force
b. Tagged Image File Format
c. Tagged International File Force
d. None of these
16. Which one of the layers handles the task of data compression?
a. Transport Layer
b. Data Link Layer
c. Presentation Layer
d. Application Layer
17. The sequence of layers in the OSI model in a descending order is _____.
a. Network, Data Link, Physical, Application, Presentation, Session, Transport
b. Session, Transport, Presentation, Application, Network, Data Link, Physical
24. Which area of the ISO Network management Model is responsible for identifying
problems, logging reports and notifying the users, so that the network runs
effectively?
a. Performance Management
b. Accounting Management
c. Fault Management
d. Configuration Management
Answers :
1c
2b
3a
4d
5a
6c
7b
8d
9a
10 d
11 a
12 b
13 c
14 b
15 b
16 c
17 d
18 a
19 b
20 b
21 b
22 c
23 a
24 c
Introduction
In the years to come, information, and especially the Internet, will become the basis for
personal, economic, and political advancement. A popular name for the Internet is the
information superhighway. Whether we want to find the latest financial news, browse
through library catalogs, exchange information with colleagues, or join in a lively
political debate, catch up with our mails, go shopping, banking or do business, the
Internet is the tool that takes us beyond telephones, faxes, and isolated computers to
a burgeoning networked information frontier.
The Internet supplements the traditional tools of gathering information: data, graphics, news and correspondence with other people. Used skillfully, the Internet
brings information, expertise, and knowledge on nearly every subject imaginable
straight to our computer.
The conventions developed by ARPA to specify how individual computers could
communicate across that network became TCP/IP.
As networking possibilities grew to include other types of links and devices, ARPA adapted TCP/IP to meet the demands of the new technology. As involvement in
TCP/IP grew, the scope of ARPANET expanded to become what is now known as
the Internet.
The Internet is a combination of several technologies and an electronic version of
newspapers, magazines, books, catalogs, bulletin boards, and much more. Today,
the Internet connects millions of computers around the world in a nonhierarchical
manner unprecedented in the history of communications. It is a product of the
convergence of media, computers, and telecommunications. It is not merely a
technological development but the product of social and political processes. From its
origins in a non-industrial, non-corporate environment and in a purely scientific
culture, it has quickly diffused into the world of commerce.
An Internet under TCP/IP operates like a single network connecting many computers
of any size, type and shape. Internally, an Internet is an interconnection of
independent physical networks linked together by internetworking devices.
Fig. 4.1: An Actual Internet
Fig. 4.1 shows the topology of a possible internet with A, B, C, D, E, F and G as the hosts. The solid circles in the figure, numbered 1, 2, 3 and so on, are internetworking
devices like routers or gateways. The larger ovals containing roman numerals (I, II, III
etc.) represent separate physical networks.
Fig. 4.2: An Internet seen by TCP/IP
Internet Administration
The specialty of the Internet is that it has no single owner and no central operator: everyone owns and operates a portion of it. There is no central control and regulation; everyone regulates only his or her own portion of the Internet.
However, some bodies help in managing the Internet, such as the Internet Society (ISOC), the Internet Architecture Board (IAB), the Internet Engineering Task Force (IETF) and the Internet Research Task Force (IRTF).

Among the basic services available on the Internet are the following:
File Transfer Protocol (FTP): The file transfer protocol (FTP) allows a user
on any computer to receive files from or send files to another computer.
Security is handled by requiring the user to specify a user name and password
for the other computer.
Remote Login: The terminal network protocol (TELNET) allows a user to
log in on any other computer on the network. The user starts a remote session
by specifying a computer to which it wants to connect. From that time until the
end of the session, anything that the user types is sent to the other computer.
Mail Transfer: This allows a user to send messages to users on other
computers. Originally, people used only one or two specific computers. They
would maintain mail files on those machines. The computer mail system is
simply a method for a user to add a message to another user's mail file.
I.
Application Layer
The Application layer is the topmost layer of the TCP/IP protocol suite. It runs the various applications, gives them the ability to access the services of the other layers, and defines the protocols that applications use to exchange data. There are many Application layer protocols, and new ones are always being developed.
Protocols used in Application Layer
a. SMTP : It stands for Simple Mail Transfer Protocol which provides a
mechanism for sending messages to other computer users based on e-mail
addresses. SMTP provides for e-mail exchanges between users on the same or
different computers and supports:
sending a single message to one or more recipients; and
sending messages that include text, voice, video or graphics.

[Fig.: The TCP/IP protocol suite. Application-layer protocols such as DNS, TFTP and SNMP exchange messages over TCP or UDP segments/datagrams at the transport layer; the network layer carries IP together with ARP, RARP, ICMP and IGMP; below it, data travels as frames and, finally, as bits.]
SNMP : The Simple Network Management Protocol is based on the concept of manager and agent. A manager, usually a host, controls and
monitors a set of agents, usually routers.
It is an application-level protocol in which a few manager stations control a set of agents. SNMP frees management tasks from both the physical
characteristics of the managed devices and the underlying networking
technology. It can be used in a heterogeneous internet made of different LANs
and WANs connected by routers or gateways made by different manufacturers.
that he got it. If the sender does not receive the postcard, he or she assumes the
letter has been lost and sends another one.
a. IP : The Internet Protocol transports data in packets called Datagrams, each of which is transported
separately. Datagrams may take different routes and may arrive out of sequence
or get duplicated, but IP does not keep track of the routes and has no facility for
reordering datagrams once they arrive. As it is a connectionless service, IP does
not create virtual circuits for delivery. There is no call set up to alert the receiver
about an incoming transmission.
b. ARP: The Address Resolution Protocol associates an IP address with the physical address that identifies each device on a LAN and is usually imprinted on the Network Interface Card (NIC). If the NIC is changed, for example because the card fails, the physical address of the machine changes. IP addresses, on the other hand, have universal jurisdiction and cannot be changed. ARP is used to locate the physical address of a device when its Internet address is known. Anytime a host, or a router, needs to find the physical address of another host on its network, it formats an ARP query packet that includes the IP address of the target and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its Internet address and replies with its physical address. The host holding the datagram adds the address of the target host both to its cache memory and to the datagram header, and then sends the datagram on its way.
c. RARP: The Reverse Address Resolution Protocol allows a host to discover its Internet address when it knows only its physical address. Usually a host has its Internet address stored on its hard disk, but a diskless computer does not. Such a host broadcasts an RARP query packet containing its physical address to every host on its physical network; a server on the network recognizes the RARP packet and returns the host's Internet address to it.
d. ICMP : The Internet Control Message Protocol is a mechanism used by hosts
and routers to send notification of datagram problems back to the sender. ICMP
allows IP to inform a sender that a datagram is undeliverable. A datagram travels
from router to router until it reaches one that can deliver it to the final destination.
If a router is unable to route or deliver the datagram because of unusual
conditions or because of network congestion, ICMP allows it to inform the
original source.
e. IGMP : The Internet Group Message Protocol is a companion to the IP Protocol. IP involves two types of communication : Unicasting, in which there is one-to-one communication between a single sender and a single receiver, and Multicasting, in which there is one-to-many communication with a group of recipients. IGMP helps multicast routers keep track of the group memberships of the hosts on their networks.

IV. Data Link Layer
Responsibilities of the Data Link Layer include the following:
i. Framing : The DLL divides the stream of bits received from the network layer
into manageable data units called frames.
ii. Physical Addressing : If frames are to be distributed to different systems on the
network, the DLL adds a header to the frame to define the physical address of
the sender (source address) and/or receiver (destination address) of the frame. If
the frame is intended for a system outside the sender's network, the receiver
address is the address of the device that connects one network to the next.
iii. Flow Control : If the rate at which the data are absorbed by the receiver is less
than the rate produced in the sender, the DLL imposes a flow control mechanism
to prevent overwhelming the receiver.
iv. Error control : The DLL adds reliability to the physical layer by adding
mechanisms to detect and retransmit damaged or lost frames. It also uses a
mechanism to prevent duplication of frames. Error control is normally achieved
through a trailer added to the end of the frame.
v. Access Control : When two or more devices are connected to the same link,
data link layer protocols are necessary to determine which device has control
over the link at any given time.
V. Physical Layer
The physical layer coordinates the functions that are required to transmit a bit stream
over a physical medium. It deals with the mechanical and electrical specifications of
the interface and transmission medium. It also defines the procedures and functions
that physical devices and interfaces have to perform for the transmission. The
physical layer is concerned with the following:
i. Physical characteristics of interfaces and media,
ii. Representation of bits,
iii. Data rate (transmission rate),
iv. Synchronisation of bits,
v. Line configuration,
vi. Physical topology, and
vii. Transmission mode.
PROTOCOL STACK
Protocol stack is defined as the set of protocols used in a communications network. A
protocol stack is a prescribed hierarchy of software layers, starting from the
application layer at the top (the source of the data being sent) to the data link layer at
the bottom (transmitting the bits on the wire). The stack resides in each client and server.
[Fig.: The protocol stack at the sender and receiver. Application-layer data passes through TCP/UDP at the transport layer and IP at the network layer, and is carried as Ethernet frames over the LAN; the receiver's stack reverses the process.]

The TCP/IP suite is organised into layers: physical, data link, network, transport and application. Fig. 4.5 shows the pictorial comparison of the OSI and TCP/IP.
a. The application layer in TCP/IP can be equated with the combination of
session, presentation, and application layers of the OSI model.
b. At the transport layer, TCP/IP defines two protocols: TCP and User Datagram
Protocol (UDP).
c. At the network layer, the main protocol defined by TCP/IP is Internetworking
Protocol (IP), although there are some other protocols that support data
movement in this layer.
d. At the physical and data link layers, TCP/IP does not define any specific
protocol. It supports all the standard and proprietary protocols. A network in a
TCP/IP internetwork can be a local area network (LAN), a metropolitan area
network (MAN) or a wide area network (WAN).
[Fig. 4.5: OSI vs TCP/IP. The OSI Application, Presentation and Session Layers correspond to the TCP/IP Application Layer; the Transport, Network and Physical Layers correspond directly.]
IP Addressing scheme
Internet Protocol Addressing Scheme gives the address of a device attached to an
IP network (a TCP/IP network). Every client, server and network device is assigned an IP address, and every IP packet traversing an IP network contains a source IP address and a destination IP address.
Every IP address that is exposed to the public Internet is unique. In contrast, IP
addresses within a local network use the same private addresses. Thus a user's
computer in company A can have the same address as a user in company B and
thousands of other companies. However, private IP addresses are not reachable from
the outside world.
Logical vs Physical : An IP address is a logical address that is assigned by software
residing in a server or router. In order to locate a device in the network, the logical IP
address is converted to a physical address by a function within the TCP/IP protocol
software. The physical address is actually built into the hardware.
Static and Dynamic IP : Network infrastructure devices such as servers, routers and
firewalls are typically assigned permanent "static" IP addresses. The client machines
can also be assigned static IPs by a network administrator, but most often are
automatically assigned temporary "dynamic" IP addresses via software that uses the
"dynamic host configuration protocol". Cable and DSL modems typically use dynamic
IP with a new IP address assigned to the modem each time it is rebooted.
IPv4 Networks
In the early stages of development of the Internet protocol, network administrators
interpreted an IP address as a structure of network number and host number. The
highest order octet (most significant eight bits) was designated the network number
and the rest of the bits were called the rest field or host identifier and were used for
host numbering within a network. This method soon proved inadequate as additional
networks developed that were independent of the existing networks that had already
a designated network number. In 1981, the Internet addressing specification was
revised with the introduction of classful network architecture.
Classes
Classful network design allowed for a larger number of individual network
assignments. The first three bits of the most significant octet of an IP address were
defined as the class of the address. Three classes (A, B, and C) were defined for
universal unicast addressing. Depending on the class derived, the network
identification was based on octet boundary segments of the entire address. Each
class used successively additional octets in the network identifier, thus reducing the
possible number of hosts in the higher order classes (B and C). The following table
gives an overview of this system. There are currently five different field-length
patterns in use, each defining a class of address. The different classes are designed
to cover the needs of different types of organizations.
Classes A, B, C, D and E
Based on the split of the 32 bits, an IP address is Class A, B or C, the most common
of which is Class C.
Class A addresses are numerically the lowest and provide only one byte to identify class type and netid, leaving three bytes available for hostid numbers.
Class B provides two bytes to identify class types and leaves remaining two
bytes available for hostid numbers.
Class C provides three bytes to identify class types and leaves remaining one
byte available for hostid numbers.
Class D is reserved for multicast addressing. Multicasting allows copies of a
datagram to be passed to a select group of hosts rather than to an individual
host. It is similar to broadcasting, but broadcasting requires that a packet be
passed to all possible destinations, and multicasting allows transmission to a
selected subset.
Class E addresses are reserved for future use.
[Fig.: IPv4 class formats. Class A begins with leading bit 0 (one-byte netid, three-byte hostid); Class B with 10 (two-byte netid, two-byte hostid); Class C with 110 (three-byte netid, one-byte hostid); Class D with 1110 (multicast address); Class E with 1111. The corresponding default masks are 255.0.0.0 (A), 255.255.0.0 (B) and 255.255.255.0 (C).]
Although people identify the class by the first number in the IP address (as shown in
Table 4.1), a computer identifies class by the first three bits of the IP address (A=0;
B=10; C=110). This class system (a.b.c.d) has also been greatly expanded,
eliminating the huge disparity in the number of hosts that each class can
accommodate.
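The same test can be written down directly; the Python sketch below classifies an address by the value of its first octet, which is equivalent to examining its leading bits:

    def ipv4_class(address: str) -> str:
        """Classify an IPv4 address by the leading bits of its first octet."""
        first = int(address.split(".")[0])
        if first < 128: return "A"    # leading bit  0
        if first < 192: return "B"    # leading bits 10
        if first < 224: return "C"    # leading bits 110
        if first < 240: return "D"    # leading bits 1110 (multicast)
        return "E"                    # leading bits 1111 (reserved)

    print(ipv4_class("66.218.71.90"))    # A
    print(ipv4_class("192.168.0.1"))     # C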
Private Addresses
There are certain blocks of IP addresses that are set aside for internal, private use by computers not directly connected to the Internet. InterNIC has reserved these
IP addresses as private addresses for use with internal web sites or intranets. Most
ISPs will block the attempt to address these IP addresses. These IP addresses are
used for internal use by companies that need to use TCP/IP but do not want to be
directly visible on the Internet. These IP ranges are given in Table 4.1.
Table 4.1 : Private IP address ranges

Class : Range of first octet : Private Start Address : Private End Address : Number of IP addresses
A : 0 - 127 : 10.0.0.0 : 10.255.255.255 : 1,67,77,216
B : 128 - 191 : 172.16.0.0 : 172.31.255.255 : 10,48,576
C : 192 - 223 : 192.168.0.0 : 192.168.255.255 : 65,536
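For instance, Python's standard ipaddress module already knows these ranges; a minimal sketch:

    import ipaddress

    for addr in ("10.1.2.3", "172.20.0.5", "192.168.1.10", "66.218.71.90"):
        ip = ipaddress.ip_address(addr)
        print(addr, "private" if ip.is_private else "public")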
Reserved addresses
Although the previous table presents the IP address classes, still some addresses
are missing. For example, the range of IP address from 127.0.0.0 to 127.255.255.255
is not there. This is because several address values are reserved and have a special
meaning. The following are reserved addresses:
IP address 0.0.0.0 refers to the default network and is generally used for routing.
IP address 255.255.255.255 is called the Broadcast address.
IP address 127.0.0.1 is called the Loopback address.
Even with Classless IP, not every IP address is usable. Addresses with a first byte
greater than 223 are reserved for Multi-cast and other special applications. There are
also two Class A networks that are reserved for special purposes. The network
address 0.0.0.0 designates a default gateway. This is used in routing tables to
represent "All Other Network Addresses".
Another special Class A network is 127.0.0.0, the loopback address, which is used to simplify programming, testing and troubleshooting. It allows applications to address the local machine as though it were a remote host, without any packet leaving the system.
In every network the host address which is all Zeros identifies the network itself.
This is called the Network Address and is used in routing tables to refer to the
whole network.
A host address which is all Ones is called the Broadcast Address or Announce
Address for that network. For example, on the Class C network 205.217.146.0,
the address 205.217.146.255 is the broadcast address.
Packets addressed to the broadcast address will be received by all hosts on that
network. The network address and the broadcast address are not used as actual host
addresses. These are invalid addresses on the internet. Routers don't route them.
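Using the same Class C example, a short sketch with Python's ipaddress module shows how the network and broadcast addresses fall out of the host bits:

    import ipaddress

    net = ipaddress.ip_network("205.217.146.0/24")
    print(net.network_address)      # 205.217.146.0   (host bits all zeros)
    print(net.broadcast_address)    # 205.217.146.255 (host bits all ones)
    print(net.num_addresses - 2)    # 254 usable host addresses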
All hosts in the Internet are addressed using the dotted representation. Since addresses are 32 bits in length, almost all users find it difficult to memorize these numeric addresses. For example, it is easier to remember www.yahoo.com than 66.218.71.90.
The Domain Name System (DNS) was created to overcome this problem. It is a
distributed database that has the host name and IP address information for all
domains on the Internet.
For example, the hypothetical name www.caindia.com would be converted into the IP address 204.0.8.51. Without DNS, we would have to type the four numbers and dots into our browser to retrieve the Website.
When we want to obtain a host's IP address based upon the host's name, a DNS
request is made by the initial host to a local name server. If the information is
available in the local name server, it is returned; otherwise, the local name server forwards the request to one of the root servers. The root server then returns the IP
address. This is depicted in Fig. 4.8.
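In a program, this resolution is a single library call; a minimal Python sketch (the results naturally depend on the DNS records and the resolver available when it is run):

    import socket

    # Forward lookup: host name -> IP address.
    print(socket.gethostbyname("www.yahoo.com"))

    # Reverse lookup: IP address -> host name, where a PTR record exists.
    print(socket.gethostbyaddr("8.8.8.8")[0])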
Ports
A port is a 16-bit number which, along with an IP address, forms a socket. Since port numbers are 16 bits, the total number of ports is 2^16 = 65,536 (numbered 0 to 65535). Port numbers in the range 0-1023
are known as Well Known Ports. Port numbers in the range 1024-49151 are called
Registered Ports, and these have been publicly defined as a convenience for the
Internet community. The remaining numbers, in the range 49152-65535, are called
Dynamic and/or Private Ports and can be used freely by any client or server.
Table 4.2 lists some of the common port numbers.
Port # : Protocol : Service
7 : TCP : echo
9 : TCP : discard
20 : TCP : ftp-data
21 : TCP : ftp (control)
23 : TCP : telnet
25 : TCP : SMTP
43 : TCP : whois
53 : TCP/UDP : DNS
70 : TCP : gopher
79 : TCP : finger
80 : TCP : http
110 : TCP : pop3
123 : UDP : ntp
161 : UDP : SNMP
179 : TCP : bgp
443 : TCP : https
520 : UDP : rip
1080 : TCP : socks
33434 : UDP : traceroute
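The mapping from well-known service names to port numbers, and the pairing of an IP address with a port to form a socket, can be seen in a short Python sketch; example.com is used purely as a reachable illustration:

    import socket

    # getservbyname maps well-known service names to their port numbers.
    for service in ("ftp", "telnet", "smtp", "http", "https"):
        print(service, socket.getservbyname(service, "tcp"))

    # An IP address plus a port forms a socket; here we reach the http port (80).
    conn = socket.create_connection(("example.com", 80), timeout=5)
    print(conn.getpeername())       # (address, port) of the remote end
    conn.close()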
The core group of generic top-level domains consists of the .com, .info, .net, and .org domains. In addition, the domains .biz, .name, and .pro are also considered
157
Module - I
generic; however, these are designated as generic-restricted, and registrations within
them are supposed to require proof of eligibility within the guidelines set for each.
Historically, the group of generic top-level domains included domains that were
created in the early development of the domain name system, notably .edu, .gov, .int,
.mil. However, these domains now have all been sponsored by appropriate agencies
or organizations and are now considered sponsored top-level domains, much like the
many newly created "themed" domain names. This entire group of non-country-code
top-level domains, domains that do not have a geographic or country designation, are
still often referred to by the term generic.
A generic top-level domain (gTLD) is a top-level domain used by a particular class
of organizations. These are three or more letters long, and are named after the type
of organization they represent (for example, .com for commercial organizations).
Table 4.3 shows the current gTLDs.
Table 4.3 : Generic Top-Level Domains

CURRENT
Generic : .com, .info, .net, .org (with the generic-restricted .biz, .name, .pro)
Sponsored : .edu, .gov, .int, .mil, among others
Infrastructure : .arpa
Deleted/retired : .nato
Reserved and Pseudo domains

PROPOSED
Location and Technical : .geo, .mail
Others
Internet Services
The Internet provides a variety of services; Fig. 4.8 shows a partial list. We can subscribe to these services (some are free, some are not) and use them.
Common means of connecting to an ISP include DSL, SHDSL, broadband wireless access, cable modem, FTTH (fibre to the home), ISDN and Ethernet technologies.
When a dial-up or ISDN connection is used, the ISP cannot determine the caller's physical location in any more detail than by using the number transmitted through an appropriate form of Caller ID. Other means of getting connected, such as cable or DSL, require a fixed registered connection node, usually associated by the ISP with a physical address.
ISP Interconnection
ISPs may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXs), allowing routing of data between their networks without charging one another for the data transmitted; such data would otherwise have passed through a third, upstream ISP, incurring charges from it.
Network hardware, software and specifications, as well as the expertise of network
management personnel, are important for ensuring that data follows the most efficient
route, and upstream connections work reliably. A tradeoff between cost and efficiency
is possible.
What makes the World Wide Web appealing and innovative is its use of
hypertext as a way of linking documents to each other. A highlighted word or
phrase in one document acts as a pointer to another document that amplifies or
relates to the first document. In this way, the user tailors the experience to suit
his or her needs or interests.
The other very appealing aspect of the World Wide Web is the use of graphics
and sound capabilities. Documents on the WWW include text, still images, video and audio. People who create WWW documents often include a
photograph of themselves along with detailed professional information and
personal interests. (This is often called a person's home page.)
[Fig. 4.8: Internet services: Web, FTP, Chat, Instant Messaging, Blog, Discussion, RSS and Usenet.]
The right-most segment of the domain name usually adheres to the naming conventions listed below: .com for commercial organisations, .edu for educational institutions, .gov for government bodies, .mil for the military, .net for network providers and .org for (typically non-profit) organisations.
3. Chatting
Internet Relay Chat (IRC), the other method for Internet conversation, is less
common than talk because someone must set up the Chat before others can join in.
Chat sessions allow many users to join in the same free-form conversation, usually
163
Module - I
centered on a discussion topic. When users see a topic that interests them, they type
a command to join and then type another command to choose a nickname.
Nicknames allow people in the session to find others on IRC Networks or Channels.
5. Usenet
Usenet is a world-wide distributed discussion system that consists of a set of
newsgroups with names that are classified hierarchically by subject. Articles or messages are posted to these newsgroups by people on computers with the
appropriate software, which are then broadcast to other interconnected computer
systems via a wide variety of networks. Some newsgroups are moderated, wherein
the articles are first sent to a moderator for approval before appearing in the
newsgroup. Usenet is available on a wide variety of computer systems and networks,
but the bulk of modern Usenet traffic is transported over either the Internet or UUCP.
6. Blog
A Blog is a type of website, usually maintained by an individual with regular entries of
commentary, descriptions of events, or other material such as graphics or video. A
typical blog combines text, images, and links to other blogs, Web pages, and other
media related to its topic. The possibility of readers leaving comments in an
interactive format is an important part of many blogs.
7. Instant Messaging
Instant messaging (IM) is a form of real-time communication between two or more people based on typed text. The text is conveyed via devices connected over a network such as the Internet. IM is a collection of technologies that create the possibility of real-time, text-based communication between two or more participants over the Internet or some form of internal network/intranet. What distinguishes instant messaging from e-mail is the perceived synchronicity of the communication by the user. Some systems allow the sending of messages to people not currently logged on (offline messages), thus reducing the differences between instant messaging and e-mail.
Servers can serve clients anywhere in the world. However, in some cases a given device can function both as a
client and a server for the same application. Likewise, a device that is a server for
one application can simultaneously act as a client to others for different applications.
Some of the most popular applications on the Internet follow the client-server model
including email, FTP and Web services. Each of these clients features a user
interface (either graphic- or text-based) and a client application that allows the user to
connect to servers. In the case of email and FTP, users enter a computer name (or
sometimes an IP address) into the interface to set up connections to the server.
All the basic Internet tools - including Telnet, FTP, Gopher, and the World Wide Web
-are based upon the cooperation of a client and one or more servers. In each case,
we interact with the client program and it manages the details of how data is
presented to us or the way in which we can look for resources. In turn, the client
interacts with one or more servers where the information resides. The server receives
a request, processes it, and sends a result, without having to know the details of our
computer system, because the client software on our computer system handles those
details.
The advantage of the client/server model lies in distributing work so that each tool
can focus or specialize on particular tasks : the server serves information to many
users while the client software for each user handles the individual user's interface
and other details of the requests and results.
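A minimal Python sketch of this division of work: the server below accepts a request and returns a result, while the client sets up the connection, sends the request and presents the reply. Port 5050 is an arbitrary choice for illustration:

    import socket, threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5050))           # an arbitrary local port
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))       # server: process request, send result
        conn.close()

    threading.Thread(target=serve_once, daemon=True).start()

    cli = socket.create_connection(("127.0.0.1", 5050))   # client: connect
    cli.sendall(b"hello")                                 # client: send request
    print(cli.recv(1024))                                 # client: present result
    cli.close(); srv.close()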
Client/server software architecture is a robust, versatile and modular infrastructure
that is intended to improve usability, flexibility, interoperability, and scalability.
Before explaining what a Client/Server Software Architecture is, it is essential to know
its predecessors.
Mainframes / Machines operating using time-shared concept: With mainframe/
machines with time-shared concept, the entire architecture is dependent on the
monolithic central computer. Users interact with the host through a terminal that captures keystrokes and sends them to the central machine, which does all the processing and displays the results. Modern day mainframes use both dumb
terminals as well as PCs. The main limitation of mainframes includes high cost (for
both installation and maintenance) and poor support for graphical user interfaces.
Network based architectures / File sharing architecture: The concept of networks
followed the mainframes / machines that adopted time-shared concepts. There is a
file server whose main job is to share information with the connected machines. It
operates in dedicated or non-dedicated mode. In the file sharing mode, users
request files from the server, and the server provides the required information.
A client/server environment typically requires a robust and good communication system, a GUI-based operating system on the client, and Open Database Connectivity (ODBC) drivers and Application Programming Interfaces (APIs) to connect applications to the database.

The essential characteristics of client/server computing include:
1. The client initiates requests and the server responds to them.
2. The job is shared between the client and the server.
3. The network has to be robust and must have zero failures.
4. A client can attach to any number of servers and a server can support multiple
clients.
5. In case of any arbitration, the server's decision is final.
[Fig.: Two-tier client/server interaction: the client sends a request (usually SQL) to the server, which returns the result (data).]
Two-tier architecture
A two-tier architecture is one where a client talks directly to a server without any
intervening server. It is usually used in small environments (usually less than 50
users). In a two-tier client/server architecture, the user interface (GUI) is in the user's
machine and the database management services are usually in the server.
Three-tier architecture
Three-tier architecture, also called n-tier architecture, overcomes the problems of the two-tier architecture. In the three-tier architecture, a middle tier (also called middleware) is added between the client and the server. The middle tier performs the job of queuing, application execution and database staging. Middleware has been implemented in a variety of ways; the important ones include transaction processing monitors and message servers.
3 - Tier architecture with Transaction Processing (TP) monitor as the middleware
This is the most basic type of three-tier architecture. The Transaction Processing
monitor provides the services of message queuing, transaction scheduling, and
prioritisation. Here the client first connects to the TP monitor instead of the database
server. The transaction is accepted by the monitor, which queues it and then takes
responsibility for managing it to completion, thus freeing the client.
Types of Servers
In Client/Server environments, we have the following types of servers which do
specific functions. They are:
a. Application
b. File
c. Database
d. Print/fax
e. Communications
f. Security
g. Web
Classification of intruders
Intruders are broadly classified as:
i. Outsiders: These refer to people who do not belong to the organization but intrude into the system. Typical activities of these people include defacing web pages, forwarding spam mails and making the system unavailable for use. Outsiders generally come in via the Internet, dial-up lines or physical break-ins, or through the networks of vendors, customers and resellers that are linked to the organization's network.
ii. Insiders: These are people who belong to the organization and have legitimate rights to use the resources of the system. They misuse the privileges granted to them, impersonate higher-privileged users, or create pathways for external intruders to hack into the system. Research has shown that 80% of security breaches are committed by insiders.
Once an intruder has penetrated the system, his most likely activities include gaining higher privileges, installing backdoors for later re-entry, stealing or modifying sensitive information, erasing logs to cover his tracks, and using the compromised system as a launch pad for attacks on other systems.

Some common terms associated with intrusion detection systems are:
Alert/Alarm: A signal suggesting that the system has been or is being attacked.
True attack stimulus: An event that triggers an IDS to produce an alarm and
react as though a real attack were in progress.
False attack stimulus: The event signaling an IDS to produce an alarm when
no attack has taken place.
False alarm (False Positive): An alert or alarm that is triggered when no actual attack has taken place.
False negative: The failure of an IDS to detect an actual attack.
Noise: Data or interference that can trigger a false positive.
Site policy: Guidelines within an organization that control the rules and
configurations of an IDS.
Site policy awareness: The ability an IDS has to dynamically change its rules
and configurations in response to changing environmental activity.
Confidence value: A value an organization places on an IDS based on past
performance and analysis to help determine its ability to effectively identify an
attack.
Alarm filtering: The process of categorizing attack alerts produced from an IDS
in order to distinguish false positives from actual attacks.
5. Misuse Detection IDS: Monitored activity is compared against the attack signature database to see if there is a match for a known attack signature. This kind of IDS is also known as the Signature-based IDS.
6. Statistical Anomaly IDS: Some smart IDS can also detect unknown attack
patterns using heuristic technology; these are called Statistical Anomaly IDS and
use simulation techniques to predict possible attacks.
7. A Hybrid Intrusion Detection System combines two or more approaches. Host
agent data is combined with network information to form a comprehensive view
of the network. An example of a Hybrid IDS is Prelude.
Approaches to dealing with intrusions include: Detection, Deflection, Deterrence and Counter-measures.
Intrusion detection systems serve three essential security functions: they monitor,
detect, and respond to unauthorized activity by company insiders and outside intruders, as depicted in Fig. 4.11. Intrusion detection systems use policies to define certain events that, if detected, will cause an alert to be issued. In other words, if a particular event is considered to constitute a security incident, an alert will be issued when that event is detected. Certain intrusion detection systems can send out alerts, so that the administrator of the IDS receives notification of a possible security incident in the form of a page, an email or an SNMP trap. Many intrusion detection systems not only recognize a particular incident and issue an appropriate alert, they also respond automatically to the event. Such a response might include logging off a user, disabling a user account or launching a script.
[Fig. 4.11: Security Functions of IDS: prevention (simulation), intrusion monitoring (analysis), intrusion detection (notification) and response.]
IDS Services
1. Misuse Detection or Signature Detection
Commonly called Signature Detection, this method uses specifically known patterns
of unauthorized behavior to predict and detect subsequent similar attempts. These
specific patterns are called signatures. For host-based intrusion detection, one
example of a signature is "three failed logins." For network intrusion detection, a
signature can be as simple as a specific pattern that matches a portion of a network packet. For instance, packet content signatures and/or header content signatures can indicate unauthorized actions, such as improper FTP initiation. The occurrence of
a signature might not signify an actual attempted unauthorized access (for example, it
can be an honest mistake), but it is a good idea to take each alert seriously.
Depending on the robustness and seriousness of a signature that is triggered, some
alarm, response, or notification is sent to the proper authorities.
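As a hedged sketch of the "three failed logins" signature mentioned above, the fragment below scans log lines for a simple pattern and raises an alert at the threshold; the log format and regular expression are hypothetical:

    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for (\w+)")

    def scan(log_lines, threshold=3):
        """Alert when a user accumulates `threshold` failed logins."""
        failures = Counter()
        for line in log_lines:
            match = FAILED.search(line)
            if match:
                user = match.group(1)
                failures[user] += 1
                if failures[user] == threshold:
                    yield f"ALERT: {threshold} failed logins for {user}"

    print(list(scan(["Failed password for root"] * 3)))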
Summary
This chapter deals with the historical view of the Internet and TCP/IP. It gives the
reader a detailed understanding on the generic top-level domains and discusses the
TCP/IP suite in detail. A comparative analysis of the OSI Model and the TCP/IP suite provides insight into the functioning of the protocols over the network. Intrusion
detection systems play a vital role in the process of protecting the systems over
Internet.
Questions
1.
2.
3.
4.
5.
6.
7.
8.
9.
_________provides three bytes to identify class types and leaves the remaining
one byte available for hostid numbers.
a. Class A
b. Class B
c. Class D
d. Class C
10. _________provides one byte to identify class types and leaves the remaining three bytes available for hostid numbers.
a. Class B
b. Class A
c. Class D
d. Class C
11.
Internet Protocol Version 4 (IPv4) uses a 32-bit number that is split into
three fields: _________.
a. class type, netid, and hostid
b. class type, version id and hostid
c. class id, netid, and version id
d. None of these
12.
13.
14.
15.
16.
c. Solution
d. Counter Measures
17.
18.
19.
20.
Answers:
1a
2b
3d
4c
5c
6a
7b
8c
9d
10 b
11 a
12 c
13 b
14 d
15 b
16 c
17 d
18 b
19 d
20 a
5 Introduction to Firewalls
Learning Objectives
To understand the need for firewalls, their characteristics, the types of firewalls, and common firewall implementations.
Introduction
All of us are concerned about the security of data, because there are too many peeping Toms on unsecured networks. This chapter explains the technology that protects us against such intruders. The two major aspects of protection are (a) protecting information on various types of equipment and devices, and (b) protecting information that is travelling on networks.

Because tools for penetrating networks are easily available, network security has become a challenging task for network administrators. Coupled with the need to protect sensitive information from external and internal personnel, there is a need to enhance network security; that is, to protect the internal organisational network, and the information assets located therein, from external threats such as unauthorised access, by protecting the paths of access from the external network into the internal or secure network.
The technology to protect the perimeter from unauthorised access is the firewall.
Firewalls protect the internal network from intentional access that could compromise the confidentiality, availability and integrity of the information present in the organisation's internal network, including its information assets and R&D network. A firewall can be hardware, software or a combination of both, often referred to as an appliance. Whatever the case, it is an access control mechanism that maintains selective access to the secure or internal network based on rules that are built into it.
The firewall sits at the conjunction of two networks : the Secure Network that is
required to be protected and the Insecure Network, such as the Internet, as shown
in Fig. 5.1.
[Fig. 5.1: The firewall sits between the insecure public network (Internet) and the secure private local area network; a hardware firewall is usually part of a TCP/IP router.]
Configuration of the firewall determines how secure or otherwise the network can be.
A firewall may be configured in either of two ways: to permit by default all traffic that is not expressly prohibited or, more restrictively, to deny by default all traffic that is not expressly permitted.
Characteristics of a Firewall
For a firewall to be effective, it is desirable that it has the following characteristics:
1. All traffic from inside to outside and from outside to inside shall pass only through
the firewall. There should not be any other alternate route. This requires that
183
Module - I
before installing a firewall, the perimeter of the internal network is defined. For
example, the internal network could consist of a certain number of users within a
department, some users working from other unconnected departments and
possibly from different locations, and may also include mobile users. Hence perimeter definition is a challenge in itself. It involves ensuring that the only path for packets out of and into the network is through a pre-determined
network computer or networking device such as a router.
2. The overall security policy of the organisation shall determine what traffic must be permitted to pass through the firewall. All other traffic must be blocked, and this shall be appropriately configured in the firewall's rule base or ACL.
3. The firewall must be resilient and immune to attacks and penetration.
4. The firewall should maintain a log of traffic that it negotiated and the action taken
on it.
The four general techniques that a firewall uses to control access and enforce the site's security policy are:
a. Service Control : It determines the types of Internet services that can be
accessed, inbound or outbound. It may filter traffic on the basis of IP
address and TCP port number, may provide proxy software that receives
and interprets each service request before passing it on, or may host the
server software itself, such as the Web or mail service.
b. Direction Control : It determines the direction in which particular service
requests may be initiated and allowed to pass.
c. User Control : It controls access to a service according to the user who is
attempting to access it. This feature may be applied to users inside the
firewall perimeter or to the incoming traffic from external users.
d. Behaviour Control: It controls the use of a particular service. For example,
filtering e-mail to eliminate spam, etc.
Types of Firewalls
There are three popular types of firewalls:
1. Packet Filtering Router.
2. Circuit-level gateway.
3. Application Level gateway or proxy server.
I.
Packet-Filtering Router
i. A packet filtering firewall is the first generation firewall, and works at the network layer of the OSI model or the IP layer of TCP/IP.
ii. It is a fast and cost-effective firewall configuration.
iii. It is usually part of a device called a router, whose function is to forward packets from one network to another depending on the destination address of the packet, which is embedded in the header of the packet. The header
of a packet usually contains the following information: IP source address, the
IP destination address, the encapsulated protocol (TCP, UDP, ICMP, or IP
Tunnel), the TCP/UDP source port, the TCP/UDP destination port, the ICMP
message type, the incoming interface of the packet, and the outgoing
interface of the packet. Fig. 5.3 depicts the rules of a packet filtering router.
Example Filters:

Action : Protocol : Source : Destination
Permit : TCP : 128.18.30.2 : Port 23
Discard : TCP : All : Port 23

[Fig. 5.3: A packet-filtering firewall: the filter engine passes packets from 128.18.30.2 destined for port 23 to the server, and rejects port-23 packets arriving from any other client on the Internet.]

Each arriving packet is compared against the filter rules in order; if a match is found and the rule allows it, the packet is passed.
If a match is found but the rule disallows it, the packet is rejected.
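The first-match evaluation implied by such a rule base can be sketched as follows; the two rules mirror the example filters above, and the default-deny fallback is an assumption for illustration:

    # Rules are checked in order; the first match decides the packet's fate.
    RULES = [
        {"action": "permit",  "proto": "tcp", "src": "128.18.30.2", "dst_port": 23},
        {"action": "discard", "proto": "tcp", "src": "any",         "dst_port": 23},
    ]

    def filter_packet(proto, src, dst_port):
        for rule in RULES:
            if (rule["proto"] == proto
                    and rule["src"] in ("any", src)
                    and rule["dst_port"] == dst_port):
                return rule["action"]
        return "discard"                 # assumed default-deny if nothing matches

    print(filter_packet("tcp", "128.18.30.2", 23))   # permit
    print(filter_packet("tcp", "10.0.0.9", 23))      # discard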
The router maintains a state table for all the connections passing through
the firewall.
So the state of a connection becomes one of the criteria to specify filtering
rules.
If a packet matches an existing connection listed on the table, it will be
permitted to go without further checking.
Otherwise, it is to start a new connection and will be evaluated according to
the filtering rules.
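A hedged sketch of that state table logic follows; rules_allow is a hypothetical stand-in for the static rule base (compare the previous sketch), and the addresses are illustrative:

    # Open connections, keyed by (src_ip, src_port, dst_ip, dst_port).
    state_table = set()

    def check(src, sport, dst, dport, rules_allow):
        key = (src, sport, dst, dport)
        if key in state_table:
            return "permit"              # existing connection: no further checks
        if rules_allow(src, sport, dst, dport):
            state_table.add(key)         # new connection: record its state
            return "permit"
        return "discard"

    print(check("128.18.30.2", 40000, "10.0.0.5", 23, lambda *p: True))   # permit
    print(check("128.18.30.2", 40000, "10.0.0.5", 23, lambda *p: False))  # permit, from the state table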
Advantages:
i. Packet filtering is simple and fast, since only the packet headers are examined.
ii. It is largely transparent to users and applications, and requires no client configuration.

Disadvantages:
i. Defining and verifying the filtering rules correctly can be a complex, error-prone task.
ii. Because they do not examine upper-layer data, packet filters cannot prevent attacks that exploit application-specific functions or vulnerabilities.
iii. Also, packet filtering depends on IP port numbers, which isn't always a
reliable indicator of the application in use; protocols like Network File System
(NFS) use varying port numbers, making it difficult to create static filtering
rules to handle their traffic.
iv. Many packet filtering routers lack robust logging capabilities.
Packet Filtering Routers are susceptible to some of the following attacks:
They work on a small set of data present on the Packet header. Since they have
such meager information, the firewall is limited in its decision making. Because of
this, these firewalls are susceptible to the following attacks:
a. Source IP Address Spoofing Attacks
In this type of attack, an intruder transmits packets that falsely contain the
source IP address of a permitted system. This is done with an idea that the
use of a spoofed source IP address will allow penetration of systems.
b. Source Routing Attacks
In a source routing attack, the source station specifies the route that a packet
should take as it crosses the Internet. It is designed to bypass security measures
and cause the packet to follow an unexpected path to its destination.
c. Tiny Fragment Attacks
The Tiny Fragment Attack is a class of attack on Internet firewalls that takes advantage of the fact that it is possible to impose an unusually small fragment size on outgoing packets. The fragment size is made small enough to force some of a TCP packet's header information into the second fragment. If the filtering implementation does not enforce a minimum fragment size, a disallowed packet might be passed.
II. Application-Level Gateway
i. An application-level gateway, also called a proxy server, acts as a relay of application-level traffic across the firewall.
ii. The gateway can be configured to support only specific features of an application that the network administrator considers acceptable, while denying all other features.
iii. In general terms, an application level gateway that is configured to be a web
proxy will not allow any ftp, gopher, telnet or other traffic to go through.
Because they examine packets at the application layer, they can filter application-specific commands, such as HTTP GET and POST. This cannot be accomplished with either packet filtering firewalls or circuit-level gateways, neither of which knows anything about the application-level information.
Fig. 5.4 explains the working of an Application-Level Gateway.
[Fig. 5.4: Application-Level Gateway: a client on the Internet connects to a proxy server running an FTP proxy agent, which relays the traffic to the FTP server.]
iii. Application-level gateways have the ability to support strong user authentication. They are secure and application-specific, and since they provide detailed logging information, they are of immense use as an audit trail.
Disadvantages of Application-Level Gateways
i. The main disadvantage is the additional processing overhead on each connection: the gateway sits in the middle of every exchange and must examine and forward all traffic in both directions.
ii. A separate proxy must be installed and configured for each application to be supported.
III. Circuit-Level Gateway
i. Circuit-level filtering takes control a step further than a packet filter. This is a firewall approach which validates connections before allowing data to be exchanged.
ii. Circuit level gateways work at the session layer of the OSI model, or the
TCP layer of TCP/IP. They can be a stand-alone system or can be a
specialized function performed by an application-level gateway for certain
applications.
iii. A circuit-level gateway does not permit an end-to-end connection; rather, the
gateway sets up two TCP connections: one between itself and a TCP user
on an inner host, and another between itself and a TCP user on an
outside host. This firewall does not merely allow or disallow packets but also
determines whether or not the connection between both ends is valid
according to configurable rules. Then it opens a session and permits traffic
only from the allowed source and possibly only for a limited period of time.
Once the two connections are established, the gateway relays TCP
segments from one connection to the other without examining the contents.
Whether a connection is valid or not may be based upon the destination IP address and/or port, the source IP address and/or port, the time of day, the protocol, the user or a password.
Every session of data exchange is validated and monitored and all traffic is
disallowed unless a session is open. Fig. 5.5 explains Circuit level gateway.
[Fig. 5.5: Circuit-Level Gateway: the client's connection from the Internet terminates at a proxy server, whose IP proxy agent maintains the connection state and relays traffic to the server.]
Advantages of Circuit-Level Gateways
i. Circuit-level gateways are often used for outgoing connections only when the
system administrator trusts the internal users. Their chief advantage is that the
firewall can be configured as a hybrid gateway supporting application-level or
proxy services for inbound connections and circuit-level functions for outbound
connections. This makes the firewall system easier to use for internal users who
want direct access to Internet services, while still providing the firewall functions
needed to protect the organization from external attack.
ii. Circuit level gateways are relatively inexpensive.
iii. They have an advantage of hiding information about the private network they
protect. On the other hand, they do not filter individual packets.
Disadvantages of Circuit-Level Gateways
i. They cannot restrict access to protocol subsets other than TCP.
ii. Testing the rules applied can be difficult, and may leave the network vulnerable.
Firewall Implementation
In addition to the use of a simple configuration consisting of a single system, such as
a single packet filtering router or a single gateway, more complex configurations are
possible and indeed quite common. The following are some of the common
configurations which can be implemented:
1. Single-Homed Firewall
This is a combination of two systems: a Packet-filtering router and a Bastion host.
This is more secure compared to the packet-filtering router.
A Bastion Host is a special-purpose computer on a network, specifically designed and
configured to withstand attacks and perform authentication and proxy functions. The
computer generally hosts a single application, for example a proxy server, and all
other services are removed or limited to reduce the threat to the computer. It is
hardened in this manner primarily because of its location and purpose: it sits on
the outside of the firewall and is exposed to access from untrusted networks or
computers.
Here, the router is configured in such a fashion that only packets from the unsecure
environment addressed to the bastion host are allowed. The bastion host then
takes care of application-layer security. Fig. 5.6 shows a single-homed firewall.
Advantages
This configuration has greater security than simply a packet - filtering router or an
application - level gateway alone:
i.
3. "Demilitarized Zone" or Screened-Subnet Firewall
This firewall system is the most secure for it employs two packet-filtering routers or
other firewall types and a bastion host. It supports both network and application layer
security while defining a Demilitarized Zone (DMZ) network.
For all incoming traffic, the outside router (the router facing the Internet) directs all
packets to the bastion host. The bastion host, together with the firewall, does
preliminary filtering and, if the packet passes the test, directs the packet to the less
secure network, or DMZ. This usually contains the IT components that require public
access, such as a mail server, web server, etc. However, where the packet needs to
travel into the secure network, which is configured as a separate segment, the inside
router along with the second firewall provides a second line of defence, managing DMZ
access to the private network by accepting only traffic originating from the bastion
host, as shown in Fig. 5.8.
For outgoing traffic, the inside router manages private network access to the DMZ
network. It permits internal systems to access only the bastion host. The filtering rules
on the outside router require use of the proxy services by accepting outgoing traffic
only from the bastion host.
An intruder must penetrate three separate devices, the outside router, the
bastion host, and the inside router, to infiltrate the private network.
Since the outside router advertises the DMZ network only to the Internet,
systems on the Internet do not have routes to the protected private network. This
ensures that the private network is "invisible".
Since the inside router advertises the DMZ network only to the private network,
its systems do not have direct routes to the Internet. This ensures that inside
users access the Internet via the proxy services residing on the bastion host.
Packet-filtering routers direct traffic to specific systems on the DMZ network,
eliminating the need for the bastion host to be dual-homed.
Since the DMZ network is a different network from the private network, a Network
Address Translator (NAT) can be installed on the bastion host to eliminate the
need to renumber or re-subnet the private network.
Limitations of a Firewall
The general limitation of a firewall is that it provides a false sense of security. Most
managements tend to think that simply installing a firewall will provide them with the
highest level of security, which is not true. Given below are some of the most common
reasons for the failure of firewall technology.
A software-based firewall installed on an improperly secured computer or network
device, such as a router, can lead to the traffic either bypassing the firewall or the
firewall itself being attacked or rendered ineffective in other ways. Hence there is a
need to ensure that the firewall is installed on a secure host called the Bastion Host.
1. The firewall filters the traffic passing through it using the firewall rule
base, which is configured with the organization's perimeter access policy. Hence,
if the configuration of the firewall rule base is incorrect or ineffective, the
firewall will naturally fail to protect the network.
2. The network perimeter may not be properly defined; because of this, any traffic
from the internal network that does not pass through the firewall and connects to
the external unprotected network can endanger the security of the internal
network by opening up an unprotected logical access path. Hence the
architecture of the network, ensuring its appropriate segmentation and
closing all other possible paths of logical access, is critical to the success of
firewalling.
3. The objective of network perimeter protection can also fail if an inappropriate
type of firewall is used for protecting highly sensitive assets: for example, using
only a packet filter to protect a bank's Internet banking data server.
4. Some other firewall limitations include:
Viruses : Not all firewalls offer full protection against computer viruses as
there are many ways to encode files and transfer them over the Internet.
Attacks : Firewalls cannot protect against attacks that do not pass through them. For
example, a firewall may restrict access from the Internet, but may not protect
the equipment from dial-in access to the computer systems.
1.
2.
3.
4.
5.
6.
and detailed design. Even after the firewall is in use, periodic review and testing
during the system's lifetime may result in an earlier phase being revisited (indicated
by the upward-pointing blue arrows), as when a new, improved firewall component
becomes available or when defects in an earlier phase are discovered.
[Figure: the Firewall Life Cycle. A prerequisite phase of component evaluation, certification and comparison feeds a database of firewall components; design tools then drive the generation of configurations.]
The Firewall Life Cycle is summarised in Table 5.1
PHASES | DELIVERABLES | METHODS
High-Level Design | |
Selection of Components | A selection of components | Component Evaluation, Component Certification and Component Comparison
Detailed Design | | Design Tool
Implementation and Configuration | Implementation of system; Generation of configuration | Design-oriented Testing, Operational Testing
Summary
This chapter provides a detailed account of the concept of the firewall and its types.
Firewalls are used to protect a system against any kind of intrusions and have
different areas of implementation. General controls are associated with firewalls and
at the same time they have certain limitations. The chapter also provides information
about life-cycle of a firewall.
Questions
1. Which of the following is not a type of Firewall Implementation Scheme?
a. Single - Homed Firewall System
b. Dual - Homed Firewall System
c. Screened - subnet Firewall System
d. None of these
2. Which of the following is not a general technique that firewalls use to control
access and enforce a site's security policy?
a. Service Control
b. Direction Control
c. Dual Control
d. User Control
3. DMZ stands for ________.
a. De-military Zone
b. Demilitarized Zone
c. De-military Zone
d. None of these
4. Which of the following is not a type of firewall?
a. Application- level gateway
b. Dual-level gateway
c. Circuit-level Gateway
d. Packet Filtering Router
5. Which of the following is not a step in the Firewall Life Cycle?
a. Configuration of firewall
b. Review and Testing
c. Detailed Design and Verification
d. High Level Design
6. ________ determines the types of Internet services that can be accessed,
inbound or outbound.
a. Dual Control
b. User Control
c. Direction Control
d. Service Control
7. In _________, the intruder transmits packets that falsely contain the source IP
address of a permitted system.
a. Source IP Address Spoofing Attacks
b. Source Routing Attacks
c. Tiny Fragment Attacks
d. None of these
9. Which phase follows the phase Detailed Design and Verification in the
Firewall Life Cycle?
a. Review and Testing
b. Configuration and Verification
c. High Level Design
d. Implementation and Configuration
10. NAT stands for________.
a. Network Address Translator
b. Network Address Transistor
c. Network address Testing
d. None of these
11. ___________ controls how to use particular services.
a. Service Control
b. User Control
c. Behavior Control
d. Direction Control
12. __________ determines the direction in which particular service requests may
be initiated and allowed to pass.
a. Service Control
b. User Control
c. Behavior Control
d. Direction Control
13. __________ is also known as a Proxy Server.
a. Dual Level Gateway
b. Application- Level Gateway
c. Circuit-Level Gateway
d. None of these
14. __________are operational at the Application Layer of the OSI Model.
a. Application-Level Gateways
b. Dual Level Gateways
c. Circuit-Level Gateways
d. None of these
15. __________ are operational at the Session Layer of the OSI Model.
a. Application-Level Gateways
b. Dual Level Gateways
c. Circuit-Level Gateways
d. None of these
Answers:
1. d   2. c   3. c   4. b   5. a   6. d   7. a   8. b   9. d   10. a   11. c   12. d   13. b   14. a   15. c
6 Cryptography
Learning Objectives
To understand
Introduction
This chapter deals with the mechanism with which a piece of information is hidden
during its transmission and unfolded at the desired destination end only. The concept
of hiding the information is known as Cryptography and there are several
cryptographic algorithms by which information can be secured.
Cryptography
Cryptography means the practice and study of hiding information. It deals with
difficult problems. A problem may be difficult because:
(i) the solution requires some secret knowledge, such as decrypting an encrypted
message or signing some digital document; or
(ii) it is intrinsically difficult, such as finding a message which produces a given hash
value.
In technical terms, Cryptography is the study of techniques and applications to solve
difficult problems.
Many schemes used for encryption constitute the area of Cryptography. These are
called Cryptographic Systems or Ciphers. Techniques used for deciphering a
message without any knowledge of enciphering details constitute cryptanalysis.
Cryptography and cryptanalysis form Cryptology.
A cryptanalyst is one who analyses cryptographic mechanisms and decodes
messages for military, political, or law enforcement agencies or organizations. He
helps to provide privacy to people and corporations, and keeps hackers out of
important data systems, as much as he possibly can.
BRIEF HISTORY OF CRYPTOGRAPHY
The following is a brief summary of the history of cryptography.
i. Julius Caesar is credited with creating one of the earliest cryptographic systems
to send military messages to his generals. This technique is called the Caesar
cipher. It is actually a shift-by-3 rule, wherein the letter A is replaced by D, B is
replaced by E, and so on (a short implementation of this cipher appears after this list).
ii. After this, various cryptographic systems were invented. These encryption
systems were known as symmetric cryptographic techniques.
iii. One of the most influential cryptanalytic papers of the 20th century, William F.
Friedman's monograph The Index of Coincidence and Its Applications in
Cryptography, appeared as a research report of the private Riverbank
Laboratories in 1918.
iv. In 1970, Horst Feistel of IBM Watson Laboratory began the preliminary work for
a project which later became the US Data Encryption Standard (DES).
v. Whitfield Diffie and Martin Hellman proposed the idea of public-key cryptography
in 1976.
vi. In 1977, DES was adopted by the National Bureau of Standards as a FIPS
standard.
vii. The RSA public-key algorithm was developed in 1978.
viii. Triple DES came into existence in 1985.
ix. IDEA (International Data Encryption Algorithm), a symmetric block cipher, was
developed in 1991.
x. Another symmetric block cipher, Blowfish, was developed in 1993 by Bruce
Schneier.
xi. RC5, another symmetric block cipher, was developed by Ron Rivest in 1994.
xii. In October 2000, the Rijndael algorithm was declared the Advanced Encryption
Standard by the U.S. government.
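The promised sketch of the Caesar cipher, in Python (illustrative only; the cipher is of historical interest and offers no real security):

    import string

    def caesar(text: str, shift: int = 3) -> str:
        # Shift each letter by the given amount; leave other characters alone.
        result = []
        for ch in text.upper():
            if ch in string.ascii_uppercase:
                result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            else:
                result.append(ch)
        return "".join(result)

    cipher = caesar("ATTACK AT DAWN", 3)     # 'DWWDFN DW GDZQ'
    plain  = caesar(cipher, -3)              # 'ATTACK AT DAWN'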
The three types of cryptographic algorithms are:
i. Secret Key or Symmetric Cryptography (SKC) : Uses a single key for both
encryption and decryption.
ii. Public Key or Asymmetric Cryptography (PKC) : Uses one key for encryption
and another for decryption.
iii. Hash Functions : Use a mathematical transformation to irreversibly "encrypt"
information.
I. Secret Key Cryptography
2. The key must be known to both the sender and the receiver; that, in fact, is
the secret, but the difficulty with this system is the distribution of the key.
Symmetric key algorithms can be further divided into two categories.
a. Stream algorithms or Stream Ciphers : These operate on a single bit
(byte or computer word) at a time and implement some form of
feedback mechanism so that the key is constantly changing.
b. Block Algorithms or Block Ciphers : A block cipher is so called
because the scheme encrypts one block of data at a time, using the
same key on each group of bits, called a block.
In general, the same plaintext block will always encrypt to the same ciphertext when
using the same key in a block cipher whereas the same plaintext will encrypt to
different ciphertext in a stream cipher.
[Figure: conventional (symmetric) encryption. Plaintext input is transformed by the encryption algorithm into ciphertext for transmission; the decryption algorithm (the reverse of the encryption algorithm) recovers the plaintext at the output.]
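The behavioural difference can be demonstrated with a toy Python sketch; XOR with a fixed key stands in for a block cipher, and XOR with a changing keystream for a stream cipher (neither is secure, the point is only the repetition pattern):

    import random

    def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
        # Same key applied to every block: equal plaintext blocks yield
        # equal ciphertext blocks (the block-cipher pattern).
        return bytes(b ^ k for b, k in zip(block, key))

    def toy_stream_encrypt(data: bytes, seed: int) -> bytes:
        # The keystream changes for every byte, so repeated plaintext
        # does not repeat in the ciphertext (the stream-cipher pattern).
        rng = random.Random(seed)
        return bytes(b ^ rng.randrange(256) for b in data)

    key = b"\x5a\x5a\x5a\x5a"
    print(toy_block_encrypt(b"ABCD", key) == toy_block_encrypt(b"ABCD", key))  # True
    ct = toy_stream_encrypt(b"ABCDABCD", seed=7)
    print(ct[:4] == ct[4:])   # False: same plaintext, different ciphertext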
4. Transposition Ciphers
The sender and receiver agree on a keyword in which no letters are
repeated. Then, using the keyword, the message is transposed in either row
fashion or columnar fashion to get the ciphertext. The keyword is used in the
reverse process for decryption.
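A columnar transposition along these lines might look as follows in Python (the keyword and message are invented for the illustration):

    def transpose_encrypt(message: str, keyword: str) -> str:
        # Write the message row by row under the keyword, then read the
        # columns off in the alphabetical order of the keyword letters.
        message = message.replace(" ", "")
        cols = len(keyword)
        rows = -(-len(message) // cols)            # ceiling division
        padded = message.ljust(rows * cols, "X")   # pad the final row
        grid = [padded[r * cols:(r + 1) * cols] for r in range(rows)]
        order = sorted(range(cols), key=lambda i: keyword[i])
        return "".join("".join(row[c] for row in grid) for c in order)

    print(transpose_encrypt("MEET ME AT NOON", "ZEBRA"))   # MOXETXEANTNXMEO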
5. Data Encryption Standard (DES)
The most common secret-key cryptography scheme used today is the Data
Encryption Standard (DES). Originally designed by IBM in the 1970s, it was
adopted by the National Bureau of Standards (NBS), now the National Institute
of Standards and Technology (NIST), in 1977 for commercial and unclassified
government applications. DES was adopted as Federal Information Processing
Standard 46 (FIPS 46-2) and by the American National Standards Institute as
X3.92. DES is a block cipher employing a 56-bit key that operates on 64-bit
blocks. DES has a complex set of rules and transformations.
In order to increase the power of DES, there are two important variants of DES:
i.
ii.
6. CAST-128
It is named after Carlisle Adams and Stafford Tavares of Nortel. Similar to DES,
this is a 64-bit block cipher using 128-bit keys. A 256-bit key version is called
CAST-256.
7. International Data Encryption Algorithm (IDEA)
It is another DES-like substitution-permutation crypto algorithm, employing a
128-bit key and operating on 64-bit blocks.
8. Rivest Ciphers
Named after their inventor Ron Rivest, the Rivest Ciphers are a series of SKC
algorithms.
RC1 : Designed on paper but never implemented.
RC2 : A 64-bit block cipher using variable-sized keys, designed to replace
DES. Its code has not been made public, although many companies have
licensed RC2 for use in their products.
9. Blowfish
It is a symmetric 64-bit block cipher invented by Bruce Schneier. In this, the key
length can vary from 32 to 448 bits. Blowfish is freely available to all users.
10. Twofish
It is a 128-bit block cipher that uses 128-, 192-, or 256-bit keys. This was also
invented by Schneier.
11. Advanced Encryption Standard (AES)
It is a significant advancement over DES: it uses stronger key sizes of
128, 192 and 256 bits, and is much faster than 3DES. In January 1997, NIST
initiated a process to develop a new secure cryptosystem for U.S.
government applications. The result, the Advanced Encryption Standard (AES),
came into being as the "official" successor to DES. In October 2000, NIST
announced the selection of the Rijndael algorithm (pronounced as in "rain doll" or
"rhine dahl") as the algorithm.
II. Public Key Cryptography
It uses one key for encryption and another for decryption, such that the two keys
have a unique relationship: one cannot decrypt with the key that was used for
encryption. In this concept, each person gets a pair of keys, one called the public
key and the other called the private key. The public key of the individual is
published widely, while the private key is never revealed.
A simple analogy is the operation of a locker in a bank. As seen in Fig. 6.2, the keys
for encryption and decryption are different.
[Fig. 6.2 : Public-key encryption. Plaintext input is transformed by the encryption algorithm into ciphertext for transmission; the decryption algorithm recovers the plaintext at the output, using a different key for each operation.]
Working of PKC
i.
Generic PKC employs two keys that are mathematically related although
knowledge of one key does not allow someone to easily determine the other key.
ii. One key is used to encrypt the plaintext and the other key is used to decrypt the
ciphertext. The important point here is that it does not matter which key is applied
first, but that both keys are required for the process to work.
iii. Because two keys are required for the job, this approach is also called
Asymmetric Cryptography.
iv. In PKC, one of the keys is designated the public key and may be advertised as
widely as the owner wants. The other key is designated the private key and is
never revealed to another party.
Example: Suppose there are two users, say Aditya and Bhaskar, who want to exchange
some confidential information. Both have their own public and private keys. Since the
public keys of individuals are made public and the private keys are not disclosed, all
that Aditya does is as follows:
a. He obtains Bhaskar's public key.
b. He encrypts the confidential message using Bhaskar's public key.
c. He sends the encrypted message to Bhaskar.
d. Bhaskar decrypts the message using his own private key, which only he possesses.
5. Public-Key Cryptography Standards (PKCS) : A set of interoperable
standards and guidelines for public-key cryptography, designed by RSA Data
Security Inc.
How does the public-key encryption method work?
Fig. 6.3 shows the working of a public-key encryption system.
RSA Example
Step 1 : Choose two primes: p = 7, q = 19.
Step 2 : Calculate n = pq = 7 * 19 = 133.
Step 3 : Compute m = (p - 1)(q - 1) = 6 * 18 = 108.
Step 4 : Choose e = 5 (coprime to m).
Step 5 : Find d such that d * e = 1 (mod m), i.e. d = (108k + 1) / 5 for some integer k:
k = 0  =>  d = 1 / 5 (no)
k = 1  =>  d = 109 / 5 (no)
k = 2  =>  d = 217 / 5 (no)
k = 3  =>  d = 325 / 5 = 65 (yes)
Public key : n = 133, e = 5
Private key : n = 133, d = 65
The public key consists of the modulus n and the public (or encryption) exponent e.
The private key consists of the modulus n and the private (or decryption) exponent d,
which must be kept secret.
Encryption of the plaintext P = 6:
C = P^e mod n
  = 6^5 mod 133
  = 7776 mod 133
  = 62
Decryption of the ciphertext C = 62:
P = C^d mod n
  = 62^65 mod 133
  = 62 * 62^64 mod 133
  = 62 * (62^2)^32 mod 133
  = 62 * 3844^32 mod 133
  = 62 * (3844 mod 133)^32 mod 133
  = 62 * 120^32 mod 133
We now repeat the sequence of operations that reduced 62^65 to 62 * 120^32 to bring
the exponent down to 1:
  = 62 * 36^16 mod 133
  = 62 * 99^8 mod 133
  = 62 * 92^4 mod 133
  = 62 * 85^2 mod 133
  = 62 * 43 mod 133
  = 2666 mod 133
  = 6
And that matches the plaintext put in at the beginning, so the algorithm worked!
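The entire worked example can be checked in a few lines of Python (these numbers are far too small for real use; they only mirror the arithmetic above):

    p, q = 7, 19
    n = p * q                    # 133
    m = (p - 1) * (q - 1)        # 108
    e = 5                        # chosen coprime to m
    d = next(k for k in range(1, m) if (k * e) % m == 1)   # 65

    P = 6                        # the plaintext used above
    C = pow(P, e, n)             # 6^5 mod 133 = 62
    assert C == 62
    assert pow(C, d, n) == P     # decryption recovers the plaintext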
Constraints on RSA
Some of the constraints suggested by researchers on RSA are:
a. p and q should differ in length by only a few digits.
b. Both (p-1) and (q-1) should contain a large prime factor.
c. gcd (p-1, q-1) should be small.
RSA Security
Four possible approaches to attacking the RSA Algorithm are:
i.
ii.
Private (Secret) Key Encryption | Public Key Encryption
Key size is generally small as compared to public key encryption | Key size is generally large when compared to private key encryption
Works fast | Used for authentication
altered by an intruder or virus. Hash functions are commonly employed by many
operating systems to encrypt passwords. Hash functions then provide a measure of
the integrity of a file.
Some of the commonly used hash algorithms are:
1. Message Digest (MD) Algorithms : A series of byte-oriented algorithms that
produce a 128-bit hash value from an arbitrary-length message.
2. Secure Hash Algorithm (SHA) : The algorithm family of NIST's Secure Hash
Standard (SHS). SHA-1 produces a 160-bit hash value; SHA-224, SHA-256,
SHA-384 and SHA-512 produce hash values of 224, 256, 384 and 512 bits
respectively.
3. RIPEMD : A series of message digests, optimized for 32-bit processors, designed
to replace the then-current 128-bit hash functions.
4. HAVAL : Creates hash values that are 128, 160, 192, 224 or 256 bits in length.
5. Whirlpool : Operates on a message less than 2^256 bits in length and produces a
message digest of 512 bits.
6. Tiger : Runs efficiently on 64-bit processors and produces 192-bit output.
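Several of these digests are available in Python's standard hashlib module, which makes the one-way property easy to observe:

    import hashlib

    print(hashlib.md5(b"message").hexdigest())      # 128-bit digest
    print(hashlib.sha1(b"message").hexdigest())     # 160-bit digest
    print(hashlib.sha256(b"message").hexdigest())   # 256-bit digest
    print(hashlib.sha256(b"messagE").hexdigest())   # one changed byte: a completely different digest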
Why do we use three Cryptographic Techniques?
Each of the cryptographic schemes is optimized for some specific applications.
1. Secret Key cryptography is ideally suited to encrypting messages, thus
providing privacy and confidentiality. The sender can generate a session key on
a per-message basis to encrypt the message; the receiver needs the same
session key to decrypt the message.
2. Public key cryptography is mainly used in Key exchange and Digital
Signatures.
3. Hash Functions are ideally suited for ensuring data integrity because any
change made in the contents of a message will result in the receiver calculating a
different hash value than the one placed in the transmission by the sender. Since
it is highly unlikely that two different messages will yield the same hash value,
data integrity is ensured to a high degree.
If Aditya sends an authenticated message to Bhaskar, the following disputes may
occur between the two:
a. Bhaskar may forge a different message and claim that it came from Aditya.
b. Aditya may deny sending the message.
The signature basically serves three purposes.
1. Authentication (establishing the identity of the person who has signed it);
2. Integrity (that the document that has been signed is unchanged); and
3. Non-repudiation ( that the person who has signed it can't deny it later).
On similar lines, a digital signature too serves the above-mentioned three purposes.
A digital signature may be defined as a data string dependent on some secret
known only to the signer and, additionally, on the content of the message. A digital
signature is not simply a typed name or an image of a handwritten signature; it is
based on public-key encryption and is associated with a digital document. Digital
signatures must be verifiable, i.e., if a dispute arises, an unbiased third party should
be able to settle the dispute fairly without accessing the signer's secret.
How do digital signatures help in Authentication?
Let us take the earlier example of Aditya trying to send a document to Bhaskar.
Aditya does the following:
1. Aditya encrypts the document using his private key (digitally signs the document).
2. He sends the encrypted (signed) message to Bhaskar.
3. Bhaskar decrypts the message using Aditya's public key.
4. The result is that Bhaskar gets the confidence that the document has been sent
by Aditya.
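In practice, signing and verification are done by a library rather than by hand. A hedged sketch using the third-party Python package "cryptography" (one possible implementation, not the only one):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"Payment of Rs. 10,000 authorised."

    # Sign with the private key (in real use a digest of the document is
    # what actually gets signed; the library handles the hashing here).
    signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

    # Verify with the public key; raises InvalidSignature on any tampering.
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())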
In the case of asymmetric encryption systems, there are two keys, one public and the
other private. The private key is kept secret and not revealed under any
circumstances. If any message is encrypted using the private key of the sender, it
can be decrypted only by the public key of the sender. The receiver, by his ability
to decrypt the message using the sender's public key, knows beyond any doubt that
the message could have been signed only by the sender. This is equivalent to affixing
a signature on a document.
ENCRYPTING ANY DOCUMENT USING THE PRIVATE KEY OF AN
INDIVIDUAL (TERMED AS DIGITAL SIGNATURE) IS TREATED
EQUIVALENT TO THE INDIVIDUAL SIGNING THE DOCUMENT
5. Bhaskar subjects the message to the hash function and obtains a message
digest.
6. He then compares the computed message digest with the message digest he
has received from Aditya. If both of them are the same, the integrity of the
message is established. If the integrity is suspect, then Bhaskar can ask Aditya
to resend the message.
Fig. 6.4 shows how the system works.
[Fig. 6.4 : Digital signature. Sender's side: the plain text is hashed and the resulting message digest is encrypted (digitally signed) with the sender's private key, producing the digitally signed document. Receiver's side: the received digest is decrypted with the sender's public key and compared with a freshly computed digest of the plain text (same: accept, else reject).]
On the basis of above properties, we can formulate the following requirements for a
digital signature.
The signature must be a bit pattern that depends on the message being signed.
The signature must use some information unique to the sender to prevent both
forgery and denial.
It must be easy to produce the digital signature.
It must be easy to recognize and verify the digital signature.
It must be computationally infeasible to forge a digital signature, either by
constructing a new message from an existing digital signature or by
constructing a fraudulent digital signature for a given message.
It must be practical to retain a copy of the digital signature in storage.
Aditya subjects the message to a hashing function. The result is the message
digest.
The message digest is then encrypted (signed) using the private key of Aditya
(the result is the digital signature).
The plain-text message is encrypted using a randomly chosen symmetric key
(encryption of the message). This key is independent of the private and public
keys.
The symmetric key used for encryption is encrypted using a public key algorithm
(asymmetric algorithm) and the public key of Bhaskar (encryption of the key).
All three, the encrypted message, the encrypted key and the encrypted message
digest, are sent to the receiver (Bhaskar).
On receipt of the above, Bhaskar first decrypts the encrypted key using his
private key (decryption of the key) and obtains the symmetric key.
He uses the symmetric key so obtained to decrypt the message (decryption of
the message).
He uses Aditya's public key to decrypt the message digest (decryption of the
message digest).
The plain text so obtained is subjected to the hash function and the message
digests are compared. If they are the same, the message is accepted.
Hence he achieves message integrity, identity authentication, non-repudiation
and confidentiality.
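A miniature digital envelope, again using the third-party "cryptography" package, might look as follows (the signature step shown earlier would ride alongside; names here are illustrative):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender's side: encrypt the message with a random symmetric key,
    # then wrap only that small key with the receiver's public key.
    session_key = Fernet.generate_key()
    encrypted_message = Fernet(session_key).encrypt(b"Confidential report")
    encrypted_key = receiver_key.public_key().encrypt(session_key, oaep)

    # Receiver's side: unwrap the symmetric key, then decrypt the message.
    recovered_key = receiver_key.decrypt(encrypted_key, oaep)
    assert Fernet(recovered_key).decrypt(encrypted_message) == b"Confidential report"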
[Fig. 6.5 : Digital envelope. Sender's side: the plain text is hashed and the message digest encrypted (digitally signed) with the sender's private key; the message itself is encrypted with a symmetric key, and the symmetric key is encrypted with the receiver's public key; the encrypted key, encrypted message and encrypted message digest are transmitted together. Receiver's side: the symmetric key is recovered with the receiver's private key and used to decrypt the message; the message digest is decrypted with the sender's public key and compared with a freshly computed digest of the plain text (same: accept, if not reject).]
documents, provide bona fides for the signer. The specific functions of a digital
certificate include:
Typically, a digital certificate contains a public key, a name, an expiration date, the
name of the authority that issued the certificate (and, therefore, is vouching for the
identity of the user), a serial number, any pertinent policies describing how the
certificate was issued and/or how the certificate may be used, the digital signature of
the certificate issuer, and any other information.
The most widely accepted certificate format is the one defined in International
Telecommunication Union Telecommunication Standardisation Sector (ITU-T)
Recommendation X.509, a specification used around the world. The contents
of an X.509 certificate are listed below.
1. version number
2. certificate serial number
3. signature algorithm identifier
4. issuer's name and unique identifier
5. validity (or operational) period
6. subject's name and unique identifier
7. subject public key information
8. standard extensions : certificate appropriate use definition, key usage limitation
definition, certificate policy information
9. other extensions : application-specific, CA-specific
Table 6.1: Contents of X.509 certificate
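These fields can also be read programmatically, for example with the third-party Python package "cryptography" ("cert.pem" is a placeholder path for any PEM-encoded certificate at hand):

    from cryptography import x509

    with open("cert.pem", "rb") as f:                  # placeholder path
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.version)                    # version number
    print(cert.serial_number)              # certificate serial number
    print(cert.signature_algorithm_oid)    # signature algorithm identifier
    print(cert.issuer)                     # issuer's name
    print(cert.not_valid_before, cert.not_valid_after)   # validity period
    print(cert.subject)                    # subject's name
    print(cert.public_key())               # subject public key information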
If we are using the Internet Explorer browser, we can view digital certificates by
executing the following steps.
a. Launch Internet Explorer.
b. Go to Tools > Internet Options > Content > Certificates. Fig. 6.6 can be viewed.
c. Click on Certificates > Intermediate Certification Authorities.
d. Choose any one Intermediate Certification Authority. To illustrate, if we choose
Thawte Premium CA (as shown in Fig. 6.7) and click View Details, we can view
the digital certificate.
[Fig. 6.6 : Viewing digital certificates using Internet Explorer]
iii. The CA, after satisfying itself, encrypts the public key of Aditya using its
private key and issues a digital certificate.
iv. Aditya uses the digital certificate for all his commercial transactions.
v. When any person wants to verify the digital signature, he needs to click on the
digital certificate, which provides verification of the identity of the individual as
issued by CA.
6.9 Cryptanalysis
Cryptanalysis mainly deals with methods of recovering the plaintext from ciphertext
without using the key. In other words, it is defined as the study of methods for
obtaining the meaning of encrypted information, without access to the secret
information which is normally required to do so. It also deals with identifying
weakness in a cryptosystem. Cryptanalysis is also used to refer to any attempt to
circumvent the security of other types of cryptographic algorithms and protocols, and
not just encryption. There are many types of cryptanalytic attacks. The basic
assumption is that the cryptanalyst has complete knowledge of the encryption
algorithm used.
1. Ciphertext-only attack: The cryptanalyst has the ciphertext of several
messages, all of which have been encrypted using the same encryption
algorithm. Using the ciphertext alone, he deduces the encryption key (a toy
illustration of such an attack appears after this list).
2. Known-plaintext attack: Here, the cryptanalyst has access not only to the
ciphertext of several messages, but also to the plaintext of those messages.
Using the ciphertext and its corresponding plaintext, he deduces the key used to
encrypt the messages.
3. Chosen-plaintext attack: This is a modified form of plaintext attack. Here, the
cryptanalyst not only has access to the ciphertext and associated plaintext for
several messages, but he also chooses the plaintext that gets encrypted.
Because of this, the cryptanalyst chooses a specific plaintext block to encrypt
which in all probability yields more information about the key.
4. Adaptive-chosen-plaintext attack: This is a special case of a chosen-plaintext
attack. Not only can the cryptanalyst choose the plaintext that is
encrypted, but he can also modify his choice based on the results of previous
encryption. In a chosen-plaintext attack, a cryptanalyst might just be able to
choose one large block of plaintext to be encrypted; in an adaptive-chosen-plaintext
attack he can choose a smaller block of plaintext and then choose
another based on the results of the first, and so forth.
5. Chosen-ciphertext attack: The cryptanalyst can choose different ciphertexts to
be decrypted and has access to the decrypted plaintext. For example, the
cryptanalyst has access to a tamperproof box that does automatic decryption.
His job is to deduce the key. This attack is primarily applicable to public-key
algorithms and is sometimes effective against symmetric algorithms as well.
6. Chosen-key attack: This attack doesn't mean that the cryptanalyst can choose
the key; it means that he has some knowledge about the relationship between
different keys.
7. Rubber-hose cryptanalysis: The cryptanalyst threatens, blackmails, or tortures
someone until they give him the key. Bribery is also adopted sometimes. This is
called a Purchase-key Attack. These are all very powerful attacks and often the
best way to break an algorithm.
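The toy ciphertext-only attack promised under item 1 above: because the Caesar cipher has only 25 usable keys, an attacker simply tries them all and keeps the shift that yields readable text.

    ciphertext = "DWWDFN DW GDZQ"

    for shift in range(1, 26):
        candidate = "".join(
            chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
            for c in ciphertext
        )
        print(shift, candidate)    # shift 3 prints ATTACK AT DAWN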
Summary
This chapter gives detailed knowledge about the concept of Cryptography, its need
and the goals achieved by any cryptographic system during data transmission over
the network. The chapter discusses various cryptographic algorithms and their
advantages and disadvantages with examples. The reader gets a brief idea about the
Digital signatures, digital envelopes and digital certificates.
Questions
1. The coded message is known as ________.
a. Plaintext
b. Ciphertext
c. Encryption
d. Decryption
2. With _________, a single key is used for both encryption and decryption.
a. Secret key cryptography
b. Public key cryptography
c. Asymmetric key cryptography
d. None of these
3. DES stands for_________.
a. Digital Encryption Standard
b. Digital Encryption Symmetry
c. Data Encryption Standard
d. Digital Encryption Symmetry
4. _______ is the mathematical function used for encryption and decryption.
a. Bits
b. Stream
c. Block
d. Cipher
5. PKCS stands for ________.
a. Public-Key Cryptography Standards
b. Private-Key Cryptography Standards
c. Public-Key Cipher Standards
d. Private-Key Cipher Standards
6. _______ is a mechanism to prove that it is actually the sender who has sent the
message.
a. Integrity
b. Privacy
c. Non-repudiation
d. Authentication
7. _____ mainly deals with methods of recovering the plaintext from ciphertext
without using the required key.
a. Cryptography
b. Cryptanalysis
c. Encryption
d. Decryption
8. IDEA stands for __________.
a. International Digital Encryption Algorithm
b. International Data Encryption Algorithm
c. International Data Encoded Algorithm
d. International Digital Encoded Algorithm
9. Hash functions are also called __________.
a. One-way Encryption
b. Public Key Encryption
c. Symmetric Key Encryption
d. Asymmetric Key Encryption
10. ____ is a study of how differences in input can affect the resultant difference in
output.
a. One-way Encryption
b. Public Key Encryption
c. Differential Cryptanalysis
d. Linear Cryptanalysis
11. SHA stands for _____.
a. Secure Hash Algorithm
b. Simple Hash Algorithm
c. Stream Hash Algorithm
d. None of these
12. ____ is a property that does not permit any person who signed any document to
deny it later.
a. Integrity
b. Validation
c. Maintenance
d. Non-Repudiation
13. Caesar Cipher is also known as _____.
a. Shift by 2 method
b. Shift by 3 method
c. Shift by 4 method
d. Polyalphabetic Substitution
14. Which of the following is not a hash function?
a. MD Algorithm
b. Secure Hash Algorithm
c. Quantum Cryptography
d. HAVAL
15. Which one of the following purposes is not served by Digital Certificates?
a. Authentication
b. Integrity
c. Non-Repudiation
d. Selection
16. _____ is a general form of cryptanalysis based on finding affine approximations
to the action of a cipher.
a. One-way Encryption
b. Linear Cryptanalysis
c. Differential Cryptanalysis
d. Public Key Encryption
17. FIPS stands for ________.
a. Federal Information Processing Standard
b. Federal Information Processing Symmetry
c. Foundation of Information Processing Standard
d. None of these
18. ____ ensures that no one except the intended receiver can read a message.
a. Authentication
b. Integrity
c. Privacy
d. Selection
19. A ____ is a person who analyses cryptographic mechanisms.
a. System analyst
b. Cryptanalyst
c. Accountant
d. Mechanic
20. Adaptive-chosen-plaintext attack is a special case of _____.
a. Ciphertext-only attack
b. Known-plaintext attack
c. Rubber-hose cryptanalysis
d. Chosen-plaintext attack
Answers:
1. b   2. a   3. c   4. d   5. a   6. c   7. b   8. b   9. a   10. c   11. a   12. d   13. b   14. c   15. d   16. b   17. a   18. c   19. b   20. d
Module II
Protection of
Information assets
Factors influencing an organization's control and audit of computers, and the impact
of the information systems audit function on it, are depicted in Fig. 1.1.
[Fig. 1.1 : Factors influencing control and audit of computers: organizational costs of data loss, costs of incorrect decision making, the value of hardware, software and personnel, costs of computer abuse, high costs of computer error, maintenance of privacy and the controlled evolution of computer use; these lead to improved safeguarding of assets, improved data integrity, improved system effectiveness and improved system efficiency.]
Objective of Control
A control objective is defined as "a statement of the desired result or purpose to be
achieved by implementing control procedures in a particular IT process or activity". It
describes what is sought to be accomplished by implementing control, and serves
two main purposes:
i. It outlines the policies of the organization as laid down by the management; and
ii. It provides a benchmark for evaluating whether the control objectives are met.
The objective of controls is to reduce or, if possible, eradicate the causes of the
exposure to probable loss. All exposures have causes and are potential losses due to
threats. Some categories of exposures are:
Internal Controls
The basic purpose of internal control in an organization is to ensure that business
objectives are achieved and undesired risk events are prevented or detected and
corrected. This is achieved by designing an effective internal control framework,
which comprises policies, procedures, practices, and organizational structure.
Eventually, all these policies, procedures etc. are broken into discrete activities and
supporting processes, which are managed manually or automatically. Control is not
solely a policy or a procedure which is performed at a certain point of time; rather it is
an ongoing activity, based on the risk assessment of the organization.
[Figure: the internal control framework. The control environment, risk assessment, control activities, information and communication, and monitoring together support management's objective; controls are implemented by methods that are preventive, detective or corrective, and administrative, technical or physical.]
i. Preventive Controls
These controls are designed to protect the organization from unauthorized
activities. They attempt to predict potential problems before they occur and
to make the necessary adjustments. The broad classification of preventive
controls is illustrated below:
Objective | Manual Control | Computerized Control
Restrict unauthorized entry into the premises | Build a gate and post a security guard. | Use access control software, smartcard, biometrics, etc.
Methods of Control Implementation | Detective, with corresponding corrective | Preventive | Detective, without corrective
Manual Control | Blank or Low. Least effective, generally manual controls; probably least efficient. | Moderate. Moderately effective manual controls applied at the front end of processing; moderately efficient. | Blank. Least effective and possibly dangerous, since users rely on them improperly; very inefficient.
Computerized Control | Low or Moderate. Moderately effective, generally application controls applied at the front end of processing; probably most efficient. | High. Most effective, generally controls that are computerized and applied before processing can take place; moderately efficient. | Blank.
IS Assets
A typical computing environment consists of computers, numerous supporting
computing equipment, communications equipment, and the like; the facilities which
house this computing equipment and infrastructure include computer rooms, power
sources and offsite storage. Further, one would find storage media, documents,
computer supplies and documentation related to Information Systems resources.
Each of these information system resources may need a differing approach to
security, both in terms of the techniques of securing them and the appropriate
investment to secure them; hence the need to categorize such assets. From the
perspective of physical access and control, Information System resources may be
categorized as follows:
Table 3 lists some Information Systems assets commonly found in organizations of
various types.
Asset Class | Asset Name
Tangible | Overall IT environment; physical infrastructure
Tangible | Intranet data; extranet data; Internet data
Intangible | Reputation; goodwill; employee morale; employee productivity
IT Services | Messaging (core infrastructure, instant messaging, email/scheduling); Domain Name System (DNS); Dynamic Host Configuration Protocol (DHCP); enterprise management tools; file sharing, storage and dial-up remote access; telephony, Virtual Private Network (VPN) access
controlling access itself) is a system of checking for authorized presence; for
example, a ticket controller in transportation.
Physical access control can be achieved by a human (a guard, bouncer, or
receptionist), or through mechanical means such as locks and keys, or through
technological means such as access control systems like the Access control
vestibule. Within these environments, physical key management may also be
employed as a means of further managing and monitoring access to mechanically
keyed areas or access to certain small assets.
Access control is the ability to permit or deny the use of a particular resource by a
particular entity. Access control mechanisms can be used for managing physical
resources (such as a movie theater, to which only ticketholders are admitted), logical
resources (a bank account, with a limited number of people authorized to make a
withdrawal), or digital resources (for example, a private text document on a computer,
which only certain users are able to read).
ii.
iii. Physical access controls include manual door or cipher key locks, photo IDs and
security guards, entry logs, perimeter intrusion locks, etc. The controls are meant
to:
grant/discontinue access authorizations.
control passkeys and entry during and after normal business hours.
handle emergencies.
control the deposit and withdrawal of tapes and other storage media to and
from the library.
[Figure: the security objectives served by physical access controls: confidentiality, integrity, availability and safety.]
voltage supply. Electrical threats also come from the noise of unconditioned
power or from total power loss.
ii. Environmental: These include natural disasters such as fires, hurricanes,
tornadoes and flooding. Extreme temperature and humidity are also
environmental threats.
iii. Hardware: This is the threat of physical damage to corporate hardware, or of its
theft.
iv. Maintenance: These threats arise from the poor handling of electronic
components, which causes ESD (electrostatic discharge), or from the lack
of spare parts, poor cabling, poor device labeling, etc.
Administrative Controls
i. In the choice of the location during initial planning for a facility, the following
concerns are to be addressed.
Local considerations: What is the local rate of crime (such as forced entry
and burglary)?
With respect to designing the site the following considerations apply:
Technical Controls
These controls are technical solutions, which have administrative aspects. Given
below are various tools and techniques to achieve physical security.
i.
ii.
iii.
iv.
v.
vi.
vii.
where a large number of entrances and exits are used frequently. Such locks (both
cipher and combination) enable resetting of the unlocking sequence periodically.
viii. Electronic Door Locks: Such locks may use electronic card readers, smart
card readers or optical scanners. The readers or scanners read the cards and,
if the information stored on the card matches the information pre-stored
internally in the reader device, the device disengages the levers securing the
door, thus enabling physical access. The advantages of such locks are:
x.
xi.
xii.
xiii.
xiv.
xv.
xvi.
xvii.
beyond restricted hours, and violation of direction of movement, e.g. where
entry-only/exit-only doors are used. Motion detectors are used to sense unusual
movement within a predefined interior security area and thus detect physical
breaches of perimeter security, and may sound an alarm.
xviii. Secured Distribution Carts: One of the issues in batch output control is to
get printed hardcopy reports (which may include confidential materials)
securely across to the intended recipients. In such cases distribution trolleys
with fixed containers secured by locks are used, and the keys to the relevant
container are held by the respective user team.
xix. Cable locks: A cable lock consists of a plastic-covered steel cable that
chains a PC, laptop or peripherals to the desk or other immovable objects.
xx. Port controls: Port controls are devices that secure data ports (such as a
floppy drive or a serial or parallel port) and prevent their misuse.
xxi. Switch controls: A switch control is a cover for the on/off switch, which
prevents a user from switching off the file server's power.
xxii. Peripheral switch controls: These types of controls are lockable switches
that prevent the use of a keyboard.
xxiii. Biometric Mouse: The input to the system uses a specially designed
mouse, which is usable only by a pre-determined/pre-registered person,
based on the fingerprint of the user.
xxiv. Laptop Security: Securing laptops and portables represents a significant
challenge, especially since the loss of a laptop creates loss of confidentiality,
integrity and availability. Cable locks, biometric mice/fingerprint/iris recognition
and encryption of the file system are some of the means available to protect
laptops and their data.
i. Risk Assessment
The auditor should satisfy himself that the risk assessment procedure adequately
covers periodic and timely assessment of all assets, physical access threats,
vulnerabilities of safeguards and exposures.
Review of physical access procedures includes user registration and authorization,
special access authorization, logging, periodic review, supervision, etc. Employee
termination procedures should provide for withdrawal of rights, such as retrieval of
physical devices (smart cards, access tokens), deactivation of access rights and
appropriate communication of the termination to relevant constituents in the
organization. Examination of physical access logs and reports includes examination
of incident-reporting logs and problem resolution reports.
Perimeter Security
Security control policies and procedures are documented, approved and implemented
by management, are periodically reviewed and updated, and hold personnel
accountable for their actions.
ii.
i. Natural Threats
iii. Exposures
Some examples of exposures from violation of environmental controls:
A fire could destroy valuable computer equipment and supporting infrastructure
and invaluable organizational data. Usually, the use or storage of thermocole or
Styrofoam (technically, expanded polystyrene) and other inflammable material in
the construction of the server cabin and false ceiling aggravates the probability
of fire and the loss due to fire.
Administrative Controls
i.
Walls: Entire walls, from the floor to the ceiling, must have an acceptable
fire rating. Closets or rooms that store media must have a high fire rating.
Floors: If the floor is a concrete slab, the concerns are the physical weight
it can bear and its fire rating. If it is raised flooring, the fire rating, its
electrical conductivity (grounding against static build-up) and the use of a
non-conducting surface material are major concerns. Electrical cables
must be enclosed in metal conduits, and data cables must be enclosed in
raceways, with all abandoned cables removed. Openings in the raised floor
must be smooth and nonabrasive, and they should be protected to minimize
the entrance of debris or other combustibles. Ideally, an IPF should not be
located at or near the ground floor, nor at or near the top floor.
Windows: Windows are normally not acceptable in a data centre. But if
they are there, they must be translucent and shatterproof.
Doors: Doors in the computer centre must resist forcible entry and have a
fire rating equal to that of the walls. Emergency exits must be clearly marked
and monitored or alarmed. Electric door locks on emergency exits should
revert to a disabled state if power outages occur to enable safe evacuation.
While this may be considered a security issue, personnel safety always
takes precedence, and these doors should be manned or manually
operational in case of an emergency.
Media Protection: Media libraries, fireproof cabinets and the different kinds
of media used are to be protected in a fungus-resistant and heat-resistant
environment.
Sprinkler system and fire resistance: The fire-resistance rating of
construction material is a major factor in determining the fire safety of a
computer operations room. Generally, the computer room must be
separated from other occupancy areas by a basic constructional plan with a
fire-resistant rating of not less than two hours.
Water or gas lines: Water drains should be positive; that is, they should
flow outward, away from the building, so they do not carry contaminants into
the facility.
Air conditioning: AC units should have dedicated power circuits. Similar
to water drains, the AC system should provide outward, positive air pressure
and have protected intake vents to prevent air carrying toxins from entering
the facility.
Electrical requirements: The facility should have established backup
and alternate power sources.
iii. Documentation
The documentation of the physical and geographical location and arrangement of
computing facilities, and of environmental security procedures, should be modified
promptly for any changes. Access to such documentation should be strictly
restricted.
v. Emergency Plan
Disasters result in increased environmental threats; e.g., smoke from a fire in the
neighborhood or in some other facility of the organization would require appropriate
control action. An evacuation plan should be in place, and evacuation paths should be
prominently displayed at strategic places in the organization.
Periodic inspections, tests and drills should form a part of the administrative
procedures. The results of such inspections, tests and drills should be escalated to
appropriate levels in the organization. Documented and tested emergency evacuation
plans should consider the physical outlay of the premises and orderly evacuation of
people, shutting down of power and computer equipment and activation of fire
suppression systems. Administrative procedures should also provide for incident-handling
procedures and protocols for environmental exposures.
should include detailed analysis of considerations such as whether to outsource,
the choice of such an agency, background verification, security bonding, controlled
access of maintenance staff and performance appraisal.
Technical Controls
Some of the techniques for implementing controls to protect against environmental
risks are:
i.
ii. Power drawn from external sources such as the grid and generators is subject
to many quality problems, such as spikes, surges, sags, brownouts, noise, etc.
Surge protectors, spike busters and line conditioners cleanse the incoming power
supply and deliver clean power fit for the equipment.
iii. Power leads from two sub-stations: Failure of continued power supply to
high-consumption continuous-processing facilities, such as refineries, nuclear
reactors and hospitals, could even raise concerns regarding public safety. Electric
power lines may be exposed to many environmental and physical threats such
as floods, fire, lightning, careless digging, etc. To protect against such exposures,
redundant power lines from a different grid supply should be provided for.
Interruption of one power supply should result in the system immediately
switching over to the stand-by line.
iv. Smoke Detectors and Fire Detectors: Smoke and fire detectors activate
audible alarms or fire suppression systems on sensing a particular degree of
smoke or fire. Such detectors should be placed at appropriate places, above and
below the false ceiling and in ventilation and cabling ducts. In the case of critical
facilities, such devices must be linked to a monitoring station (such as a fire
station). Smoke detectors should supplement, and not replace, fire suppression
systems.
v. Fire Alarms: Manually activated fire alarm switches should be located at
appropriate locations that are prominently visible and easily accessible in case of
fire (but should not be easily capable of misuse at other times). By manual
operation of switches or levers, these devices activate an audible alarm and may
be linked to monitoring stations both within and/or outside the organization.
vi. Emergency Power Off: To take care of the necessity of an immediate
power shutdown during situations such as a computer facility fire or an emergency
evacuation, emergency power-off switches should be provided. There should
be one within the computer facility and another just outside it. Such switches
should be easily accessible and also properly shielded to prevent accidental
use.
vii. Water Detectors: Risks to IPF equipment from flooding and water logging can
be controlled by the use of water detectors placed under the false flooring or near
drain holes. Water detectors should be placed in all unattended or unmanned
facilities. On detecting water, they activate an audible alarm.
viii. Centralized Disaster Monitoring and Control Systems: Such systems
provide for an organization-wide network wherein all detection devices,
alarms and corrective/suppression devices are controlled from a central
monitoring command and control facility. It is necessary that such systems are
powered by a secure and reliable/uninterrupted power supply. Such systems
should be failure-tolerant and should involve low maintenance.
ix. Fire Suppression Systems: Combustibles are rated as Class A, B or C
based upon their material composition, which determines the type of
extinguishing system or agent to be used. Fires caused by common combustibles
(like wood, cloth, paper, rubber and most plastics) are classed as Class A and are
suppressed by water or soda acid (sodium bicarbonate). Fires caused by
flammable liquids and gases are classed as Class B and are suppressed by
carbon dioxide (CO2), soda acid or Halon. Electrical fires are classified as Class
C fires and are suppressed by carbon dioxide (CO2) or Halon.
264
the ceiling or on the walls and water is charged in the pipes. As generally
implemented a fusible link in the nozzle melts in the event of a heat rise, causing a
valve to open and allowing water to flow. These are considered the most reliable.
However, they suffer from the disadvantage of leakage, breakage of pipes
exposing the IPF to the risks of dampness and equipment to water damage.
Dry-Pipe Sprinklers: These are similar to the wet-pipe sprinklers, except that the water is not kept charged in the pipes; the pipes remain dry and, upon detection of heat by a sensor, water is pumped into them. This overcomes the wet-pipe system's disadvantage of water leakage.
Pre-action: At present, this is the most recommended water-based fire suppression system for a computer room. It combines the dry and wet pipe systems by first releasing water into the pipes when heat is detected (dry pipe) and then releasing the water flow when the link in the nozzle melts (wet pipe). This feature enables manual intervention before a full discharge of water on the equipment occurs.
Carbon Dioxide (CO2) Systems: These suppress fire by cutting off the oxygen supply from the air, which is a critical component for combustion. However, CO2 being potentially lethal to human life, such systems are recommended only for unmanned computer facilities or in portable or hand-held fire extinguishers. Portable fire extinguishers commonly contain CO2 or soda acid; they should be located at exits, clearly marked with the fire classes they are suited for, and checked regularly by licensed personnel.
Halon: Halon was once considered the most suitable agent for fire suppression. It is an inert gas, does not damage equipment as water systems do, and does not leave any liquid or solid residue. However, Halon is not considered safe for humans beyond certain levels of concentration and, being an ozone-depleting agent, it is environmentally unfriendly. Under an international agreement, the Montreal Protocol, the production of Halon was suspended in 1994.
As part of risk assessment, the risk profile should include all kinds of environmental risks that the organization is exposed to; this includes taking stock of both natural and man-made threats.
The profile should be periodically reviewed to ensure updating the profile with
new risks that may have arisen.
The controls assessment should include examining that controls are in place to safeguard the organization against all identified risks, including newer risks.
The Security Policy of the organization should be reviewed to assess that policy
and procedures for safeguarding the organization against environmental risks
are adequately covered.
The building plans, wiring plans, surroundings, power and cable wiring, etc. should be reviewed to determine the appropriateness of the location of the IPF.
The IS auditor should interview relevant personnel to satisfy himself regarding
employee awareness of environmental threats and controls, role of the
interviewee in environmental control procedures such as prohibited activities in
IPF, incident handling, evacuation procedures, and to assess if adequate
incident reporting procedures exist.
Review of administrative procedures, such as preventive maintenance plans and their implementation, incident reporting and handling procedures, and inspection and testing records, should also be carried out.
[Figure: Areas of management's environmental responsibility (plan, organize, direct, control, authorize, segregate duties, communicate, monitor compliance) mapped against environmental exposures such as erroneous recordkeeping, unacceptable accounting, loss or destruction of assets, business interruptions, erroneous management decisions, excess costs or deficient revenues, standards non-compliance, fraud and misuse, and unachieved process objectives.]
Module - II
Low (L): Controls are useful but not fully effective in preventing exposure.
Inspect the IPF and examine the construction with regard to the type of materials
used for construction by referring to appropriate documentation.
Visually examine the presence of water and smoke detectors, examine power
supply arrangements to such devices, testing logs, etc.
Examine location of fire extinguishers, fire fighting equipment and refilling date of
fire extinguishers and ensure their adequate and appropriate maintenance.
Examine emergency procedures, evacuation plan and marking of fire exits. If
considered necessary, the IS Auditor can also require a mock drill to test the
preparedness with respect to disaster.
Examine documents for compliance with legal and regulatory requirements as
regards fire safety equipment, external inspection certificate, and shortcomings
pointed out by other inspectors/auditors.
Examine power sources and conduct tests to assure quality of power,
effectiveness of power conditioning equipment, generators, simulate power
supply interruptions to test effectiveness of back-up power.
Examine environmental control equipment such as air-conditioners,
dehumidifiers, heaters, ionizers, etc.
Examine complaint logs and maintenance logs to assess whether MTBF (mean time between failures) and MTTR (mean time to repair) are within acceptable levels (see the illustration after this list).
Observe activities in the IPF for any undesirable activities such as smoking, consumption of eatables, etc.
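As a worked illustration of the MTBF/MTTR assessment referred to above, the following minimal Python sketch computes both figures from an assumed sample maintenance log (the log entries and thresholds are hypothetical, for illustration only):

    # Minimal sketch (assumed sample data): computing MTBF and MTTR from a
    # maintenance log of (hours_run_before_failure, hours_to_repair) records.
    failures = [(720, 4), (1100, 2), (950, 6)]   # three recorded incidents

    total_uptime = sum(up for up, _ in failures)
    total_repair = sum(rep for _, rep in failures)

    mtbf = total_uptime / len(failures)   # Mean Time Between Failures
    mttr = total_repair / len(failures)   # Mean Time To Repair

    print(f"MTBF = {mtbf:.1f} hours, MTTR = {mttr:.1f} hours")
    # The auditor would compare these values against the thresholds agreed
    # in the maintenance contract or SLA.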
A. Documentation of findings
As part of the audit procedures, the IS auditor should also document all findings as
part of the working papers. The working papers could include the audit assessment, audit plan, audit procedures, questionnaires, interview sheets, inspection charts, etc.
Control Techniques and Audit Procedures

Control: Adequate environmental controls have been implemented.
Audit procedure: Identify systems that provide constant temperature and humidity levels within the organization. Review the heating, ventilation and air-conditioning (HVAC) design to verify proper functioning within acceptable ranges.

Control: Establish adequate interior security, based on risk.
Audit procedure: Verify that critical systems have adequate emergency power supplies for alarm systems, monitoring devices, exit lighting and communication systems.

Control: Adequately protect against emerging threats, based on risk.
Audit procedure: Verify that appropriate plans and controls exist, such as shelter-in-place arrangements for a potential CBR (chemical, biological and radioactive) attack.
Summary
The chapter deals with the physical and environmental threats to information system assets and the related control and audit procedures. The first step in providing a secure physical environment for information system assets is listing the various assets in the computing environment. These assets could range from hardware and software to facilities and people. The next step is to identify the various threats and exposures the assets are subject to. These could include unauthorized access to the resources, vandalism, public disclosure of confidential information and the like. The main sources of threat are outsiders and the employees of the organization. However, the information assets are also exposed to natural damage from environmental factors like flood, earthquake, fire, rain, etc.
Sr. No. Check points
10. Whether all access routes are identified and controls are in place
11. Whether security awareness is created not only in the IS function but also across the organization
13. Whether the usage of any equipment outside the business premises for information processing is authorized by the management
24. Verify if the security controls in place are appropriate to prevent intrusion into sensitive IS facilities: data centre, communication hubs, emergency power services facilities.

Sr. No. Check points
5. Whether fire instructions are clearly posted and fire alarm buttons are clearly visible
6. Whether emergency power-off procedures are laid down and an evacuation plan with clear responsibilities is in place
13. Evaluate the data center's use of electronic shielding to verify that radio emissions do not affect computer systems and that system emissions cannot be used to gain unauthorized access to sensitive information.
15. Ensure that a fire alarm protects critical IS facilities like the data center from the risk of fire, that a water detection system is configured to detect water in high-risk areas of the data center, and that a humidity alarm is configured to notify data center personnel of either high or low humidity conditions.
16. Check logs and reports on the alarm monitoring console(s) and alarm systems, which are to be monitored continually by data center/IS facility personnel.
17. Verify that fire extinguishers are placed every 50 ft within data center aisles and are maintained properly, and that fire suppression systems protect the data center from fire.
18. Whether there are emergency plans that address various disaster scenarios, such as recovering backup data promptly from off-site storage facilities.
19. Ensure that there exists a comprehensive disaster recovery plan, that key employees are aware of their roles in the event of a disaster, and that the plan is updated and tested regularly.
20. Ensure that detailed parts inventories and vendor agreements are accurate, current and maintained as critical assets.
Questions
1. Which of the following is not a type of Internal Control?
a. Preventive
b. Additive
c. Detective
d. Corrective
9. UPS stands for _______
a. Uninterrupted Power Supply.
b. Uninterrupted Power Supplier.
c. Uniform Power Supply.
d. None of these.
10. Deadman doors are also called ________.
a. Biometric door locks.
b. Mantrap systems.
c. Bolting door locks.
d. None of these.
11. ______ controls encompass securing physical access to computing equipment as well as to facilities housing the IS computing equipment and supplies.
a. Environmental access
b. Logical access
c. Physical access
d. Computer system
12. IPF stands for _________.
a. Information Product Facility.
b. Information Processing Feature.
c. Input Processing Facility.
d. Information Processing Facility.
13. War and bomb threats are an example of _____.
a. Environmental Threat.
b. Man Made Threat.
c. Physical Threat.
d. Metaphysical Threat.
14. Humidity, vapors, smoke and suspended particles are an example of ______.
a. Natural Threat.
b. Man Made Threat.
c. Physical Threat.
d. None of these.
15. Data___________ prevents modification of data by unauthorized personnel.
a. Integrity
b. Confidentiality
c. Availability
d. Marketability.
Answers:
1b
2a
3d
4c
5a
6c
7b
8d
9a
10 b
11 c
12 d
13 b
14 a
15 a
16 b
17 c
18 a
19 d
20 a
Introduction
Today IT systems store and process a wide variety of data centrally and provide
access to a large number of users. Storing data centrally on a system is cost effective
and contributes to efficient information sharing and processing. In such an environment, information on a system that is accessed by many users carries the associated risk of
unauthorized access. Therefore, a significant concern is to ensure that users have
access to information they need but do not have inappropriate access to data that
may be sensitive and not required by them. It is also important to ensure that certain
items, though readable by many users, are changed only by a few.
Logical access controls are a means of addressing these concerns. These are
protection mechanisms that limit users' access to data to what is appropriate for
them. Such controls are often built into the operating system, or form part of the
"logic" of applications programs or major utilities, such as Database Management
Systems. They may also be implemented in add-on security packages that are
installed into the operating system.
In this chapter, we look at the ways in which data is accessed and how logical
access controls help to ensure that only the right persons access the right data.
Objectives of Logical Access Controls
Information is the primary commodity in the world of E-Commerce. As technology
advances and access to markets expands, the need to protect information to ensure
confidentiality, integrity, and its availability to those who need it for making critical
personal, business, or government decisions becomes very important.
Logical access controls are the means of information security. Their purpose is to restrict access to information assets/resources. They are expected to provide access to information resources on a need-to-know and need-to-have basis, using the principle of least privilege. This means that access should not be so restrictive that it makes the performance of business functions difficult but, at the same time, should not be so liberal that it can be misused. The data, an information asset, can be stored, processed and transmitted across many locations and forms. Logical access control is all about the protection of these assets wherever they reside.
A network device that is part of the network and has a free port to which a personal computer can be attached: hub, switch, bridge, L3 switch, router.
[Fig. 2.2: Masquerade — a user passes through successive access points/controls to the application software and the database; Mr. B communicates with Mr. A over the Internet/communication facility.]
[Fig. 2.3: Piggybacking — a hacker captures a message from Mr. B, modifies it or adds contents to it, and passes it on to Mr. A over the Internet/communication facility.]
[Figure: A hacker observes and reads the contents of messages from Mr. B to Mr. A over the Internet/communication facility.]
[Figure: Denial of service — a hacker disrupts the service provided by the server to Mr. B over the Internet/communication facility.]
Wire Tapping:

Scavenging or Dumpster Diving: Discarded listings, tapes or other information storage media retrieved from trash are filtered to determine useful information, such as access codes, passwords or sensitive data. The items may have been discarded by their owners, but become useful to the dumpster diver, who can use them to put together critical information about an organization or to subject it to attack.

Emanation Interception:

Data Diddling:

Piggybacking:

Masquerading: An attacker pretends to be an authorized user in order to get greater privileges than he is authorized for. Masquerading may be attempted by using stolen logon IDs and passwords, through security gaps in programs, or by bypassing the authentication mechanism. The attempt may come from within an organization, say from an employee, or from an outsider through some connection to the public network. Weak authentication provides one of the easiest points of entry for a masquerader. Once attackers have been authorized for entry, they may get full access to the organization's critical data and (depending on the privilege level they pretend to have) may be able to modify and delete software and data or make other changes.
Spoofing:

Asynchronous Attacks:

Keystroke Monitoring (also called Key Logging):

Rounding Down:

Salami Technique:

Trap Doors:

Remote Shut Down: Shutting down a system remotely causes denial of service and can cause loss of integrity, as the data may be corrupted during the unplanned shutdown.
Denial of Service (DoS):

Social Engineering:
Phishing Attacks: The most popular attacks on banking systems in recent times have targeted gullible victims, using a combination of social engineering, e-mail and fake websites to con the victim into clicking on a link embedded in an apparently authentic mail from a reputed bank. The link takes the victim (generally a customer of the bank) to a look-alike bank website that captures the personal details of the victim, including details such as the PIN and internet banking password, which are then exploited by the attacker.
Worms:
Trojan Horses: These are malicious codes which hide inside a host program that does something useful. Once these programs are executed, the hidden malicious code is released to attack the workstation, server or network, or to allow unauthorized access to those devices. Some Trojans are programmed to open specific ports to allow access for exploitation; the open Trojan port can then be scanned and located, enabling an attacker to compromise the system. Trojans are also used as tools to create backdoors into the network for later exploitation by crackers.
Logic Bombs:

Macro Viruses:
287
Module - II
Polymorphic Viruses: Polymorphic viruses are difficult to detect because they hide themselves from antivirus software by altering their appearance after each infection. Some polymorphic viruses can assume over two billion different identities.
Stealth Viruses:
Adware and Spyware: These are software that track the Internet activities of a user, usually for the purpose of sending targeted advertisements. Besides the loss of privacy and waste of bandwidth (loss of availability), they do not pose other security-related risks. However, it is quite likely that Trojans could be embedded in such software. Adware and spyware often come with commercial software, both packaged and shareware; there is often a reference to them in the license agreement.
User registration
Information about every user is documented. The following questions are to be
answered:
Privilege management
Access privileges are to be aligned with job requirements and responsibilities. For example, an operator at the order counter shall have direct access to the order processing activity of the application system, and will be provided higher privileges only to the extent required for that role.
Password use
Mandatory use of strong passwords to maintain confidentiality.
Enforced path
Based on risk assessment, it is necessary to specify the exact path or route connecting the networks; for example, internet access by employees may be routed through a firewall. It is also necessary to maintain hierarchical access levels for both internal and external user logging.
iii. The sensitive internal network is to be isolated from the basic internet usage service made available to employees.
iv. Network connection and routing control
The traffic between networks should be restricted, based on identification of the source and authentication, with access policies implemented across the enterprise network facility.
v. Security of network services
These are implemented with the help of authentication and authorization techniques, under the policy implemented across the organization's network.
Operating system access control
Operating system provides the platform for an application to use various IS resources
and perform a specific business function. If an intruder is able to bypass the network
perimeter security controls, the operating system is the last barrier to be conquered
for unlimited access to all the resources. Hence, protecting the operating system
access is extremely crucial.
For example, for reliable audit trails and correlation of events, synchronizing clock time across the enterprise/organization network is mandatory.
Mobile computing
In today's organizations, the computing facility is not restricted to a particular data centre alone. Ease of access on the move provides efficiency, but it places additional responsibility on users, and on management, to maintain information security. Theft of data carried on the disk drives of portable computers is a high risk factor, so both physical and logical access controls for these systems are critical. Information should be encrypted, and access identification mechanisms such as fingerprints, iris scans and smart cards are necessary security features.
The primary function of access control is to allow authorized access and prevent unauthorized access to information resources in an organization. Therefore, it may become necessary to apply access control at each security architectural layer of an organization's information system architecture, to control and monitor access in and around the controlled area. This includes the operating system, network, database and application systems. At each of these layers, the attributes may include some form of identification, authentication and authorization, and logging and reporting of user activities.

Interfaces exist between operating system access control software and other system software access control programs, such as those of routers and firewalls, that manage and control access from outside or within an organization's networks. On the other side, operating system access control software may interface with database and/or application system access controls to protect application data.
Identification Techniques
Authentication is the process of verifying that the identity claimed by the user is valid. Users are authenticated by using any one of three classes of personal authentication techniques: remembered information (what you know), possessed tokens (what you have) and physical characteristics (who you are).
Authentication Techniques
As stated above, authentication may be through remembered information, possessed
tokens, or physical characteristics. We shall examine each class of authentication
techniques below.
Fig. 2.7: What you have (token), what you know (password/PIN) and who you are (biometric).

i. Remembered Information (Logon IDs and Passwords)
If a password is too short or too easy, the chances of it being guessed are quite
high.
If a password is too long or too complex, the user may forget or may write it
down.
If many applications are to be accessed by one user, many passwords have to
be remembered.
Passwords can be shared, guessed, spoofed or captured.
i. Brute force: In this crude form of attack, the attacker tries out every possible combination to hit on a successful match. The attacker may also use password-cracking software tools that assist in this effort.
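The scale of a brute-force attack follows from simple arithmetic: the number of combinations is the character-set size raised to the power of the password length. The sketch below, with an assumed guessing rate, shows why password length and character-set size matter:

    # Estimating brute-force effort (the guess rate is an assumption).
    charset = 26 + 26 + 10 + 32          # lower, upper, digits, symbols
    guesses_per_second = 1_000_000_000   # assumed attacker capability

    for length in (6, 8, 10, 12):
        combinations = charset ** length
        years = combinations / guesses_per_second / (3600 * 24 * 365)
        print(f"length {length}: {combinations:.2e} combinations, "
              f"about {years:.2e} years at 1e9 guesses/second")

Each extra character multiplies the attacker's work by the size of the character set, which is why minimum-length policies are an effective control.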
Plastic Cards: Plastic cards contain information about the user and primarily
provide a means of identification of the user and enable authentication. Plastic
cards are of the following types:
Memory tokens: In its most common form, the cards contain visible
information such as name, identification number, photograph and such other
information about the user and also a magnetic strip. This strip stores static
information about the user. In order to gain access to a system, the user is
296
required to swipe his card through a card reader, which reads the information on the magnetic strip and passes it on to the computer for verification. Where two-factor authentication is adopted, the user is not only required to have his card read by a card-reading device but is also required to key in remembered information (a password or PIN).
Smart Tokens: In this case, the card or device contains a small processor
chip which enables storing dynamic information on the card. Besides static
information about the user, the smart tokens can store dynamic information
such as bank balance, credit limits etc. However, the loss of smart cards can
have serious implications.
ii. Proximity Readers: In this case, when a person in possession of the card reaches the restricted access area, the card data is read by the proximity readers (or sensors) and transmitted to the authentication computer, which enables access to the restricted area or system. Because the user is not required
to insert the card into the device, access is faster. Proximity tokens can be either
static or processor based. In static tokens, the card contains a bar code, which
has to be brought in proximity of the reader device. In case of processor based
tokens, the token device, once in the range of the reader, senses the reader and
transmits the password to authenticate the user. Other token based systems
include challenge response systems and one time passwords.
iii. Single Sign-on: In many situations, a user, because of his job responsibilities in
an organization is often required to log into more than one application. The user
has to remember multiple logons and passwords. This can be solved by a single
sign-on, which provides user access to various applications. It is a session/user
authentication process that permits a user to enter one name and password in
order to access multiple applications. The single sign-on, which is requested at
the beginning of the session, authenticates the user to access all the applications
they have been given the rights to on the server, and eliminates future
authentication prompts when the user switches applications during that particular
session. A concern in a decentralized processing or database environment is that the passwords travel over communication lines. Also, if the single username and password used for single sign-on are compromised, unauthorized access to all related applications is possible.
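The essence of single sign-on can be shown with a small sketch. The token service below is hypothetical (a bare in-memory model, not any particular SSO product): the user authenticates once, obtains a session token, and every application validates that token instead of prompting again.

    import secrets

    # Hypothetical in-memory SSO service (illustrative names throughout).
    class SSOService:
        def __init__(self, credentials):
            self._credentials = credentials   # {username: password} store
            self._sessions = {}               # token -> username

        def sign_on(self, user, password):
            if self._credentials.get(user) != password:
                raise PermissionError("authentication failed")
            token = secrets.token_hex(16)     # unguessable session token
            self._sessions[token] = user
            return token

        def validate(self, token):
            # Each application calls this instead of re-prompting the user.
            return self._sessions.get(token)

    sso = SSOService({"alice": "s3cret"})
    token = sso.sign_on("alice", "s3cret")
    print(sso.validate(token))   # 'alice' -- accepted by every application

The sketch also makes the risk stated above concrete: whoever holds the token, or the single password behind it, holds access to every participating application.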
Biometric Security
Compared to log on and token based authentication, Biometrics offers a very high
level of authentication based on what the user is. Biometrics, as the name suggests,
is based on certain physical characteristics or behavioral patterns identified with the
individual, which are measurable. The International Biometric Group defines
biometrics as an automated mechanism that uses physiological and behavioral characteristics to determine or verify identity. Behavioral biometrics is based on measurements and data derived from an action, and indirectly measures characteristics of the human body. Based on some feature unique to every user, biometrics seeks to minimize the weaknesses of other mechanisms of authentication.
Some biometric characteristics are:
Fingerprints
Facial Scans
Hand Geometry
Signatures
Voice
Keystroke Dynamics
Iris Scanners
Retina Scanners
[Figure: Pluggable Authentication Modules (PAM) — applications such as login, ftp and telnet call the PAM library which, driven by its configuration data, invokes the authentication service, account management, session management and password management modules.]
[Fig. 2.11: Read and Write Access Policy — User A (Unclassified) and User B (Secret) with read and write access paths to an unclassified database and Database Y (Secret).]
301
Module - II
Discretionary access privileges to users pose problems within the organization, giving
rise to the need for mandatory and well documented access control policies. These
policies use attributes to determine which user can access which resource. For
example, users A and B may read from the unclassified database, but the secret
database can be read only by the secret user B. If both A and B can write to the unclassified and secret databases, then the unclassified user can read the secret information written by the secret user B; here user B is responsible for downgrading the information.
User        DataBase X    Database Y
User A      Read          -
User B      Read          Read
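The mandatory policy just described reduces to a comparison of security labels. A minimal sketch, assuming only the two levels of the example (unclassified below secret):

    # Mandatory access control check for the two-level example above.
    LEVELS = {"unclassified": 0, "secret": 1}

    def may_read(subject_level, object_level):
        # "No read up": a subject may read only objects at or below its level.
        return LEVELS[subject_level] >= LEVELS[object_level]

    print(may_read("unclassified", "secret"))   # User A on Database Y -> False
    print(may_read("secret", "unclassified"))   # User B on Database X -> True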
Control Techniques and Audit Procedures

Control: Segregation of duties, to ensure that users do not have access to incompatible functions.
Audit procedure: Review policies and procedures which spell out access authorization, documentation, and user rights and privileges in the information system.

Control: Security administration parameters are set for access to data files, software code libraries, security files and important operating system files.
Audit procedure: Determine directory names for sensitive directories and files, their access levels and the types of access.

Control: Naming conventions are established for controlling access to resources.
Control: Redundant accounts, such as default and guest accounts, are removed, disabled or secured.
Audit procedure: Verify logs of redundant accounts. Review access to shared files and emergency or temporary access to files and hosts; these are to be controlled, documented, approved by managers and logged.

Control: Processes and services are adequately controlled.
Audit procedure: Verify that only the processes and services required for business functionality are installed and enabled, based on the principle of least functionality.
Control: Access to sensitive system resources is restricted and monitored.
Audit procedure: Verify that access to and use of sensitive/privileged accounts has a justified need aligned with a valid business purpose. Review the policies and procedures used for sensitive/privileged accounts, and interview management personnel on access restrictions, testing the need for and reasons behind access.

Control: Logical access to the following is to be adequately controlled: remote maintenance; system libraries; password/authentication services and directories (controlled and encrypted); access restriction based on time/location; segregation between user interface services and system management functionality.
Audit procedure: Review the system activity logs maintained for personnel accessing system software and the controls used to gain access, and review the access controls implemented in the operating system software, system libraries, etc. Interview officials, review related system documentation and coordinate the vulnerability analysis.

Control: Appropriate and adequate media controls are to be implemented.
Database Controls
The current trends in application software design include the frequent use of a
Database Management System (DBMS) to actually handle data manipulation inside
its tables, rather than let it be done by the Operating System (OS) software itself in
flat files. The DBMS acts as a layer between the application software and the OS.
The application passes on the instructions for manipulating data, which are executed
by the DBMS, following the integrity rules and constraints built into the database
definitions.
However, using a utility such as a text editor in the OS, the data in the DBMS can be
manipulated directly, without the application. This can be done by using DBMS
utilities and features, such as SQL (Structured Query Language), if the user possesses the privileges to gain access to the DBMS.
Object granularity:

Semantic correlation among data: Relations between data pose a threat of security violations through inference.

Meta-data:

Logical and physical objects: The OS deals only with physical objects (files, devices); the DBMS deals with logical objects (relations, views) that are independent of OS objects.

Multiple data types:

Static and dynamic objects:

Multilevel transactions: In an OS, an object can hold data of only one security level, so there is no need for polyinstantiation, unlike in databases.

Data life cycle:
ii. Privileges: Access control is based on the notion of privileges, that is, the authorization to perform a particular operation or to gain access to information in the database. Privileges enable restriction of the types of operations a user can perform in the database.
Roles: These address the complexity of privilege management by providing user-defined collections of privileges that can be granted to (and revoked from) users and other roles. One could create the role of a MARKETING MANAGER, grant it all the privileges needed to perform that job, and then grant this role to all marketing managers. A role can also be the foundation for other roles: for example, a VP MARKETING role could be granted the basic MARKETING MANAGER role together with additional privileges.
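The way one role can serve as the foundation for another can be pictured with a small model. The sketch below is illustrative only; the role and privilege names are assumed, and an actual DBMS would express the same idea through its own GRANT/REVOKE syntax:

    # Illustrative model of role-based privilege management (names assumed).
    class Role:
        def __init__(self, name, privileges=(), parents=()):
            self.name = name
            self._privileges = set(privileges)
            self._parents = list(parents)

        def privileges(self):
            # A role inherits every privilege of the roles it is built on.
            combined = set(self._privileges)
            for parent in self._parents:
                combined |= parent.privileges()
            return combined

    marketing_manager = Role("MARKETING MANAGER",
                             {"SELECT ON orders", "UPDATE ON campaigns"})
    vp_marketing = Role("VP MARKETING", {"SELECT ON budgets"},
                        parents=[marketing_manager])

    print(sorted(vp_marketing.privileges()))
    # A grant or revoke on MARKETING MANAGER automatically flows to VP MARKETING.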
Perhaps the most commonly used method of controlling data access is views. Views
are generally defined simply as the result of a (SQL) query. This query, in turn, can pull
information from many different tables and can also perform common calculations on
the data. Although views provide many advantages to database developers, they can
also be very valuable from the security standpoint. Views provide database
administrators with a method to define granular permission settings that would not
otherwise be possible.
The access rights with respect to a view are:
The owner of a view has the same rights as on the base tables, plus the drop
right.
The owner of a view (on tables for which he has rights with the grant option) can
grant others access rights on the view, even if they do not have access rights on
the base tables.
Access rights on base tables, given to the owner of a view after the creation of
the view are not added to the view.
Access rights on base tables, revoked from the owner of a view, are also
removed from the view.
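The security use of views can be demonstrated with a short, runnable example. The sketch below uses SQLite through Python's standard sqlite3 module; the table, column and territory names are assumed for illustration:

    import sqlite3

    # A view that limits a sales manager to customers in one territory.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, territory TEXT, credit_limit REAL)")
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                     [("Acme", "North", 50000), ("Zenith", "South", 20000)])

    # The view exposes only the permitted rows (and only selected columns).
    conn.execute("""CREATE VIEW north_customers AS
                    SELECT name, credit_limit FROM customers
                    WHERE territory = 'North'""")

    print(conn.execute("SELECT * FROM north_customers").fetchall())
    # [('Acme', 50000.0)] -- the South territory rows are never visible.

In a full DBMS, SELECT would be granted on the view while direct access to the base table is withheld, which is exactly the granular permission setting described above.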
Stored Procedures
A stored procedure is a function / subroutine (group of SQL statements) that form a
logical unit and performs a particular task. It is available to applications accessing a
database system and is actually stored in the database. Large or complex data
processing that might require the execution of several SQL statements is moved into
stored procedures that are resident on the server and all applications can issue a
remote procedure call (RPC) to these procedures. Therefore, stored procedures are
used to consolidate and centralize logic that was otherwise implemented in
applications which required movement of raw data for calculations.
The database server compiles each stored procedure once and then reuses the
execution plan which results in tremendous performance boosts. Stored procedures
reduce the long SQL queries to a single line that is transmitted over the wire and,
therefore, reduce the network traffic tremendously.
Typical uses for stored procedures include data validation (integrated into the
database) or access control mechanisms. Carefully written stored procedures may
allow for fine grained security permissions to be applied to a database. For example,
client programs might be restricted from accessing the database via any means
except those that are provided by the available stored procedures.
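The access pattern described above, where clients reach data only through sanctioned routines, can be sketched as follows. SQLite has no true stored procedures, so the Python function below is only an analogy of the idea; the table and routine names are assumed:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES (1, 1000.0)")

    def sp_get_balance(conn, account_id):
        # The single sanctioned routine: it validates its input and uses a
        # parameterized query, so callers never touch the table directly.
        if not isinstance(account_id, int):
            raise ValueError("account_id must be an integer")
        row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                           (account_id,)).fetchone()
        return row[0] if row else None

    print(sp_get_balance(conn, 1))   # 1000.0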
Discretionary: Users can specify who can access the data they own and the action privileges those users have.

Name-Dependent: A user's access to a data resource is restricted on the basis of his action privileges with respect to that named resource. For example, a payroll clerk is allowed to view all the data fields except those related to, say, an employee's medical history.

The Access Matrix Model: The rows represent subjects (users, accounts, programs) and the columns represent objects (relations, views, columns, etc.); a cell M(i, j) in the matrix holds the privileges subject i has on object j.
Objects      Table 1      Table 2      View 1      View 2
User 1       OWN,R,W,X    R,W,X        -           -
User 2       -            -            R,W         -

Table: Access Matrix
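An access matrix maps naturally onto a nested lookup with a default-deny rule; a minimal sketch based on the (partially reconstructed) matrix above:

    # The access matrix as a nested mapping: matrix[subject][object] -> privileges.
    matrix = {
        "User 1": {"Table 1": {"OWN", "R", "W", "X"}, "Table 2": {"R", "W", "X"}},
        "User 2": {"View 1": {"R", "W"}},
    }

    def is_allowed(subject, obj, privilege):
        # Default deny: the absence of an entry means no access.
        return privilege in matrix.get(subject, {}).get(obj, set())

    print(is_allowed("User 2", "View 1", "R"))    # True
    print(is_allowed("User 2", "Table 1", "R"))   # False -- no entry, so denied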
Update Protocols:
Cryptographic Controls
Data storage integrity is protected using the block encryption method; the stream encryption method requires extra data to be accessed and slows down retrieval. Data stored on portable media is encrypted by a secure encryption device that forms part of the device's controller. Cryptographic keys are used to protect the privacy of a user's data even when the media is stolen. To facilitate ease of sharing data, schemes based on a file key, a secondary key and a master key are used, and access to the secondary key is protected by a password or an authorization mechanism implemented through hardware, software and manual methods.
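The file-key/master-key scheme described above is a form of key wrapping: data is encrypted under a per-file key, and the file key itself is stored only in encrypted (wrapped) form under the master key. The sketch below illustrates the scheme with the Fernet symmetric construction from the third-party Python cryptography package; the scheme, rather than the particular library, is the point:

    from cryptography.fernet import Fernet   # pip install cryptography

    master_key = Fernet.generate_key()
    file_key = Fernet.generate_key()

    ciphertext = Fernet(file_key).encrypt(b"payroll data")    # encrypt the data
    wrapped_file_key = Fernet(master_key).encrypt(file_key)   # wrap the file key

    # Recovery: unwrap the file key with the master key, then decrypt the data.
    recovered_key = Fernet(master_key).decrypt(wrapped_file_key)
    print(Fernet(recovered_key).decrypt(ciphertext))          # b'payroll data'

Only the wrapped file key and the ciphertext need to travel with the media; without the master key, a thief recovers neither.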
File Handling Controls
To prevent accidental destruction of data contained on a storage medium, controls are implemented using hardware, software and manual methods. These include:
Accounting Audit Trail: This maintains the chronology of events that occur in the database definition or in the database itself, through the following two operations:

Implosion operation: data can be traced from its source to the items it affects.

Explosion operation: the sequence of events that have occurred against a data item in the database definition or the database can be reconstructed.
A unique time stamp can be put on all transactions.

Before and after images of the data item against which a transaction is applied can be attached.

Facilities should exist to define, create, modify, delete and retrieve data in the audit trail, with the retention time for the audit trail set in line with business policy.
Recovery Strategies
Existence Control Strategies (Backup and Recovery)
i. Grandfather-father-son backups:
[Figure: The input master file (the father, kept for two further cycles) is processed by the update program to produce a new master file (the son) and update reports; the previous master file (the grandfather) is kept for two further cycles. A rotation sketch follows this list.]
ii. Duplicate processing:
[Figure: Transaction input is routed through a front-end processor to a primary process with a primary database and to a remotely located duplicate process with a duplicate database, with manual input on failure of either.]
[Figure: Transaction input passes through the database management system to the database, with unsuccessful and successful input transactions logged separately.]
[Figure: Transaction input passes through the database management system to the primary file, with logs written over separate channels to separate devices.]
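The grandfather-father-son rotation in figure (i) above can be expressed as a simple retention rule: keep the three most recent generations of the master file and release the oldest for reuse. A minimal sketch with assumed cycle names:

    # Illustrative grandfather-father-son rotation (cycle names assumed).
    generations = ["master_cycle_1"]            # oldest first

    def run_update_cycle(cycle_no):
        generations.append(f"master_cycle_{cycle_no}")   # the new son
        while len(generations) > 3:                      # keep son, father, grandfather
            retired = generations.pop(0)
            print(f"release {retired} for reuse")

    for cycle in range(2, 6):
        run_update_cycle(cycle)
    print("retained:", generations)   # always the three newest generations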
Audit Procedures
Assess and test the general controls related to the DBMS (data design, storage, exchange formats, etc.), covering areas such as:
- Cryptography
- Data warehouse
- Data reporting
- Data extraction

Ensure the database schema is consistent with the organization of data and functions, and aligns with the access limitations imposed on different groups of users.

Accurately identify historical system activity and data access through adequate logging and monitoring controls. Identify the critical security events that are logged, and determine the adequacy of controls to monitor the audit logs and detect abnormal activity.

Ensure that effective data accuracy and completeness controls are in place for the correction and/or detection of data anomalies.

Understand the exchange of data and the interconnectivity between different DBMSs.
Audit Trail
An Audit Trail enables the reconstruction and examination of the sequence of events leading to a transaction, from its inception to its final results, or backwards from the output to the initial trigger of the events resulting in the transaction. Audit trails maintain a record of system, application and user activities.
For a complete reconstruction of the scenario the application level audit trail
augments the system level log by logging user activities including specific actions at
the data level.
The auditor's opinion depends on his understanding of the general and IS controls and the audit procedures used to test the effectiveness and efficiency of access to organizational information, in respect of which the auditor should exercise due care and diligence.
Audit Procedures: Special Considerations
The objective and scope of audit would determine the audit procedures and IS
resources to be covered. Often evaluation of logical access controls forms a part of a
generic IS audit, covering various other controls. However, the auditor may be
required to evaluate the logical access security of a system, sub- system, application,
operating software, database management software and the like.
Identification of logical access paths
An auditor has to identify the possible access paths permitting access to information
resources. He must document the logical paths and prescribe appropriate audit
319
Module - II
procedures to evaluate every component in the information systems infrastructure to
enable identification of logical access paths. This is often a challenging and complex
task, when it comes to auditing in networking computing environments. Identification
and documentation of access paths involves testing security at various layers:
i. Logon IDs and passwords: These are the most commonly used mechanisms to secure logical access to information resources. The auditor should:

Review audit trails and access violation reports in respect of all privileged logons and special user accounts.

Assess the strength and adequacy of monitoring and incident handling procedures.
iii. Access to file directories, application logic and system instruction sets: The auditor should evaluate the protection afforded to these resources.
Summary
Logical access controls involve securing both stored and transmission data. In order
to secure this data, one of the key steps is authentication. Authentication involves
ensuring the genuineness of the identity claimed by the user. The chapter has
demonstrated various techniques to provide internal and external access controls.
They are systems with varying degrees of reliability, precision, sophistication and
cost.
The most common access control technique is logon IDs and passwords. In token-based authentication, the user possesses an identification token to enable authentication. Biometrics offers authentication based on what the user is, i.e. on characteristics of the human body, e.g. fingerprints, facial scans, etc.
Any compromise of operating system processing can lead to improper access to system resources. To safeguard against improper access, the concept of a reference monitor implements logical control over access to objects. A reference monitor is an abstract mechanism that enforces the security policy.
Relational database security works on the principles of tables and relations and
allows rules of integrity and access to be specified. The principle of least privilege on data items can be enforced by granting access through views rather than directly on the base tables.
The audit steps involved are identification of logical access paths at all levels like
hardware, system software, and database management and application control. The
various components to be evaluated during such audit are:
Checkpoints
User Access Management Policy and Procedure
1.
Check if the user access management policy and procedure have been
documented.
2.
Whether the user access management policy and procedure are approved by
the management.
3.
Whether the user access management policy and procedure document includes:
- Scope and objective.
- Procedure for user ID creation, approval, review, suspension, and deletion.
- Granting access to third parties.
- Password management.
- User access rights assignment & modifications.
- Emergency access Granting.
- Monitoring access violations.
- Review and update of document.
User Access Management
1.
Check whether User ID & access rights have been granted with approval from
appropriate level of IS and functional heads
(Verify the user ID creation, granting of access right and approval process)
Checkpoints

4.
Check whether invalid log in attempts are monitored and User IDs are
suspended after specific number of attempts?
(Verify the parameters set for unsuccessful log in attempt)
6.
Check whether granting access to the third parties is according to the User
Access Management policy and procedure
(The organization should specify and implement a process for granting access
to third parties like contractors, suppliers, auditors, consultants, etc.)
7.
Check that users are forced to change password on first log-on and
periodically. Verify password parameters for first log on and password aging).
8.
Check if the organisation has implemented the clear screen and clear desk
policies
(Terminals should be automatically logged off if remaining idle for specific
time.)
13. Are User IDs and Passwords communicated to the users in a secure manner?
(Verify the procedure for communicating user ID and password for the first time
and after suspension).
14. Check if the organisation reviews user IDs and access rights periodically.
15. Does the organisation monitor logs for the user access?
16. Are policies and procedure documents reviewed and updated at regular
intervals?
17. Is access to scheduled jobs restricted to authorised personnel?
18. Is the emergency user creation made according to the policy and procedures
for User Access Management?
(Verify the emergency access granting procedure, including approvals and
monitoring).
19. Whether periodic review process ensures user accounts align with business
needs and removal on termination/transfer.
(Review and evaluate procedures for creating user accounts and ensure that accounts are created only when there is a legitimate business need, and that accounts are removed or disabled in a timely fashion in the event of termination or job change.)
20. Check if passwords are shadowed and use strong hash functions (Ensure the
strength of passwords and access permission to password files. Review and
evaluate the strength of system passwords and the use of password controls
such as aging.)
21. Review the process for setting initial passwords for new users and their mode
of communication and evaluate the tracking of each account to a specific
employee.
22. Does the use of groups and access levels set for a specific group determine
the restrictiveness of their use?
(Evaluate the use of passwords, access rights at the group level)
23. Ensure that the facility to logon as super/root user is restricted to system
console for security reasons.
24. Check whether the parameters to control the maximum number of invalid logon
attempts has been specified properly in the system according to the security
policy.
326
Checkpoints
25. Check whether password history maintenance has been enabled in the system
to disallow same passwords from being used again and again on a rotation
basis.
26. Verify the parameters in the system that control automatic log-on from a remote system, the concurrent connections a user can have, and users logged on to the system at odd times (midnight, holidays, etc.), and ensure that they are set according to the security policy.
Maintenance of sensitive user accounts
2. From the log file, identify the instances of use of sensitive passwords such as
super user and verify if records have been maintained with reason for the
same. Ensure that such instances have been approved/ authorized by the
management.
3.
From the log file, identify the instances of unsuccessful logon attempts to super
user account and check the terminal ID / IP address from which it is happening.
Check if appropriate reporting and escalation procedures are in place for such
violations.
Database controls
1.
Check if the policy and procedure documented and approved for database
activities is being followed.
4.
Are the policy and procedure documents reviewed and updated at regular
intervals?
8.
Check if the design or schema of tables/files in the database contains fields for recording makers, checkers and time stamps.
9.
Have standards been set for database control reports to ensure accuracy and
integrity of the databases
(Verify the control total / reports like Total of transactions and balances, record
counts and hash totals).
10. Check the reconciliation between the source and receiving system for critical
information transferred through interface system
11. Verify that database permissions are granted or revoked appropriately for the
required level of authorization.
(Review database permissions granted to individuals instead of groups or roles
and are not implicitly granted incorrectly.)
12. Review the execution of dynamic SQL in stored procedures and ensure that
row-level access to table data is implemented properly.
13. Check if PUBLIC permissions are revoked when not needed.
(Restrict access to the operating system, the directory to which the database is
installed and the registry keys used by the database.)
14. Verify that encryption of data-at-rest is implemented appropriately. (Ensure that
encryption key management is part of the disaster-recovery plan.)
15. Verify that the database is running a current version that the vendor continues
to support.
(Ensure there are procedures to maintain database integrity and to protect against rootkits, viruses, backdoors and Trojan horses.)
16. Does the IT Department identify and segregate the hardware hosting these databases?
17. Check if there is a clear partition between application area and data areas
within the system.
18. Does the IT Department have laid down standards / conventions for database
creation, storage, naming and archiving?
19. Are users denied access to the database except through the application?
20. Are direct query / access to database restricted to the concerned database
administrators?
21. Check if controls, such as monitoring of triggers and large queries, are in place to prevent overloading of the database and consequent degradation of database performance.
22. Have all vendor-supplied and default passwords been changed? Have all demo users and demo databases been removed?
23. Are there controls on sessions per user, number of concurrent users, etc? Is
creation of users restricted and need based? Are the rights granted to users
reasonable and based on requirement? Is the database configured to ensure
audit trails, logging of user sessions and session auditing?
24. Does the administrator maintain a list of batch jobs executed on each
database, severity of access of each batch job and timing of execution?
Are Batch Error Logs reviewed and is corrective action being taken by the
Administrator periodically?
25. Is there a separate area earmarked for temporary queries created by power
users or database administrator based on specific user request?
Are temporary sub databases created removed periodically or after the desired
purpose has been achieved?
26. Does the design or schema of all tables / files in database contain fields for
recording makers, checkers and time stamp? Are database administrators
rotated periodically? Does the organization have confidentiality undertakings
from external service providers?
Referential Integrity and Accuracy
2.
Are these reports run directly from the back-end database periodically and the
results both positive and negative are communicated by the administrators to
senior management?
3.
Are these reports run periodically and taken directly by the User Department to
ensure accuracy?
5.
In cases where data is migrated from one system to another has the user
department verified and satisfied itself about the accuracy of the information
migrated? Is there a formal data migration report?
6.
Are entries made directly to the back end databases under exceptional
circumstances? Is there a system of written authorization?
1.
Does the administrator periodically review the list of users to the database? Is
the review documented?
4.
Are databases periodically retrieved from the back up in test environment and
is accuracy being ensured?
5.
Are senior personnel from the user department involved in testing backup
retrieval?
1.
Check whether the data accessed on the least privilege basis is established by
the data owner
4.
Check if authorized access to sensitive data is logged and the logs are
regularly reviewed to assess whether the access and use of such data was
appropriate
1.
Evaluate the file permissions for a judgmental sample of critical files and their
related directories.
2.
Look for open/shared directories (directories with permission set to read, write,
execute) on the system.
4. Ensure that all files have a legal owner in the system access list, and that system-level commands cannot be used by users to compromise user accounts. Examine the system's default scheduled jobs, especially those of root/sysadmin, for unusual or suspicious entries.

1. Obtain the system information and service pack version, and compare with policy requirements.
3.
Ensure that all approved vendor-support patches are installed as per the
server management policy approved by the management.
5.
Determine what services are enabled on the system and validate their
necessity with the system administrator. For necessary services, review and
evaluate procedures for assessing vulnerabilities associated with those
services and keeping them patched.
6.
Ensure that only approved applications are installed on the system as per the
server management policy.
8.
Review and evaluate procedures for creating user accounts and ensure that
accounts are created only when there is a legitimate business need. Also
review and evaluate processes for ensuring that accounts are removed or
disabled in a timely fashion in the event of termination or job change.
9.
Ensure that all users are created at the domain level and clearly annotated in
the active directory. Each user should trace to a specific employee or team.
11.
Review and evaluate the use and need for remote access, including
connections like FTP, Telnet, SSH, VPN, and other methods and see to it
that a legal warning banner is displayed when connecting to the system.
13.
Ensure that the server has auditing enabled, aligned with the best practices of a standard security policy or the organization's practices.
Auditing Clients/Hosts on the Enterprise Network
3.
Review and evaluate the clients/hosts with a basic security analyzer and a
commercial-grade network scanner.
Questions:
1. Logical access controls within an organization's enterprise-wide information system are implemented to protect its information assets, which include
c. Equipment management
d. Network management
10. Maintenance of event logs across an enterprise network plays a significant role in correlating events and generating reports, using the .......... application control.
a. System monitoring
b. Clock synchronization
c. Flood synchronization
d. Network isolation
11. One of the weaknesses of the password logon mechanism is ..........
a. Repeated use of the same password
b. Periodic changing of password
c. Encrypted password
d. One user one password
12. Facial scan, iris and retina scanning are used in ..
a. Smart tokens
b. Biodirect security
c. Backup Security
d. Biometric Security
13. The .......... provides system administrators with the ability to incorporate multiple authentication mechanisms into an existing system through the use of pluggable modules.
a. Personal Authentication Module
b. Pluggable Authentication Module
c. Password Processing Module
d. Login identification Module
14. The access privileges of a user for two entities, say A and B, for read and write are maintained in the .......... within an application.
a. Actual access control list
b. Acquired control entry
c. Access control list
d. Secret policy entry
15. A .......... can help a sales manager read only the information in a customer table that is relevant to customers in his own territory.
a. View
b. Procedure
c. Trigger
d. Table
c. File control
d. Update Control
23. The differential backup recovery strategy backs up ..........
a. Previous state of the data files
b. Pages or files modified after a full backup
c. Roll-back transaction log backup
d. Pages or files updated before a full backup
24. In the case of an automated interface between database systems, "To check if there is a system of reconciliation between the source and receiving systems for critical information" is a checklist entry to audit the .......... of a DBMS.
a. User management
b. Recovery and Backup
c. Referential Integrity
d. Roll-forward Backup
25. "Whether invalid log-in attempts are monitored and User IDs are suspended after a specific number of attempts" is a checklist question under the .......... control mechanism.
a. User Access management
b. Operating system management
c. Transaction log management
d. Database retrieving management.
Answers:
1. a
2. c
3. d
4. c
5. b
6. d
7. c
8. c
9. a
10. b
11. a
12. d
13. b
14. c
15. a
16.c
17.b
18.c
19.c
20. d
21.c
22. b
23. b
24. c
25. a
Introduction
In this section, we examine the risks and controls that are specific to networked
computers. It is rare these days to find a standalone computer in any commercial
environment, as networks offer tremendous advantages that far outweigh the cost of
creating them. However, networks are also far more vulnerable to external and
internal threats than standalone systems. The internet, while offering tremendous
advantages, also poses several security challenges to organizations. In this section,
we shall look at the threats and risks that arise in a networked environment and the
controls and countermeasures that prevent or mitigate such risks.
Network Characteristics
The characteristics of a network are:
The threats that arise in a networked environment include:
Information Gathering
Communication Subsystem Vulnerabilities
Protocol Flaws
Impersonation
Message Confidentiality Threats
Message Integrity Threats
Web Site Defacement
Denial of Service
Information Gathering
i. Port Scan: An easy way to gather network information is to use a port scanner, a program that, for a particular IP address, reports which ports respond to messages and which of several known vulnerabilities are present (a minimal sketch follows this list).
ii. Social Engineering: Social engineering involves using social skills and personal
interaction to get someone to reveal security-relevant information and even
actions that can lead to an attack. The point of social engineering is to persuade
the victim to be helpful. The attacker often impersonates someone occupying a
senior position inside the organization that is in some difficulty. The victim
provides the necessary assistance without verifying the identity of the caller, thus
compromising security.
iii. Reconnaissance: Reconnaissance is a generally used term for collecting
information. In security, it often refers to gathering discrete bits of information
from various sources and then putting them together to make a coherent whole.
One commonly used reconnaissance technique is dumpster diving. It involves
looking through items that have been discarded in garbage bins or waste paper
baskets. One might find network diagrams, printouts of security device configurations,
system designs and source code, telephone and employee lists, and more. Even
outdated printouts may be useful. Reconnaissance may also involve eavesdropping.
The attacker or his accomplice may follow employees to lunch and try to listen in as co-workers discuss security matters.
iv. Operating System and Application Fingerprinting: Here the attacker wants to know which commercial server application is running, what version, and what the underlying operating system and version are. While the network protocols are standard and vendor independent, each vendor has implemented the standard independently, so there may be minor variations in interpretation and behaviour. The variations do not make the software noncompliant with the standard, but they are different enough to make each version distinctive. How a system responds to a prompt (for instance, by acknowledging it, requesting retransmission, or ignoring it) can also reveal the system and version. New features offer a further clue: a new version will implement a new feature, but an old version will reject the request. All these peculiarities, collectively called the operating system or application fingerprint, can identify the manufacturer and version.
v. Bulletin Boards and Chats: Underground bulletin boards and chat rooms support the exchange of information among hackers. Attackers can post their latest exploits and techniques, read what others have done, and search for additional information on systems, applications, or sites.
vi. Documentation: The vendors themselves sometimes distribute information that is useful to an attacker. For example, resource kits distributed by application vendors to other developers can also give attackers tools to use in investigating a product that can subsequently be the target of an attack.
Did you know?
According to the statistics released by the Federal Bureau of Investigation (FBI):
a. 90% of companies admitted to a security breach in the last 12 months;
b. 80% of companies admitted a loss, with financial loss and loss of intellectual property being the largest categories;
c. 78% of companies reported abuse of Internet access by insiders.
Communication Subsystem Vulnerabilities
iii. Authentication Foiled by Avoidance: A flawed operating system may be such that the buffer for typed characters in a password is of fixed size, counting all characters typed, including backspaces for correction. If a user types more characters than the buffer can hold, the overflow causes the operating system to bypass password comparison and act as if a correct authentication has been supplied. Such flaws or weaknesses can be exploited by anyone seeking unauthorized access.
iv. Nonexistent Authentication: The attacker can circumvent or disable the authentication mechanism at the target computer. If two computers trust each other's authentication, an attacker may obtain access to one system through an authentication weakness (such as a guessed password) and then transfer to another system that accepts the authenticity of a user who comes from a system on its trusted list. The attacker may also use a system that has some identities requiring no authentication. For example, some systems have guest or anonymous accounts to allow outsiders to access things the systems want to release to the public. These accounts allow access to unauthenticated users.
v. Well-Known Authentication: Many vendors sell computers with one system administration account installed, having a default password; or the systems come with a demonstration or test account with no required password. Some administrators fail to change the passwords or delete these accounts, creating a vulnerability.
vi. Spoofing and Masquerading: Both of them are forms of impersonation. (Refer
to chapter on logical access controls for details.)
vii. Session Hijacking: Session hijacking is intercepting and carrying on a session begun by another entity. The attacker intercepts the session of one of the two entities that have entered into a session and carries it on in the name of that entity. For example, in an e-commerce transaction, just before a user places his order and gives his address, credit card number, etc., the session could be hijacked by an attacker.
viii. Man-in-the-Middle Attack: A man-in-the-middle attack is similar to session hijacking, in which one entity intrudes between two others. The difference is that a man-in-the-middle usually participates from the start of the session, whereas session hijacking occurs after the session has been established. The difference is largely semantic and not particularly significant.
Message Confidentiality Threats
An attacker can easily violate message confidentiality (and perhaps integrity) because of the public nature of networks. Eavesdropping and impersonation attacks in this category include:
Active wiretap
Trojan horse
Impersonation
Compromised host or workstation
Denial of Service
i. Connection Flooding: This is the oldest type of attack, in which an attacker sends more data than the communication system can handle, thereby preventing the system from receiving any legitimate data. Even if an occasional legitimate packet reaches the system, communication is seriously degraded.
ii. Ping of Death: It is possible to crash, reboot or kill a large number of systems by sending a ping of a certain size from a remote machine. This is a serious problem, mainly because it can be reproduced very easily, and from a remote machine. Ping is an ICMP echo request that asks a destination to return a reply, intended to show that the destination system is reachable and functioning. Since ping requires the recipient to respond to the request, all that the attacker needs to do is send a flood of pings to the intended victim.
iii. Traffic Redirection: A router is a device that forwards traffic on its way through intermediate networks between a source host's network and a destination's. If an attacker can corrupt the routing, traffic can be misdirected and effectively disappear.
iv. DNS Attacks: DNS attacks are based on the domain name server (DNS), which maps domain names like www.icai.org to network addresses like 202.54.74.130, a process called resolving the domain name, or name resolution. By corrupting a name server or causing it to cache spurious entries, an attacker can redirect the routing of any traffic, or ensure that packets intended for a particular host never reach their destination.
i. Cookies: Cookies are data files created by a server, stored on the client machine, and fetched back by the remote server; they usually contain information about the user on the client machine. Anyone intercepting or retrieving a cookie can impersonate the cookie's legitimate owner.
ii. Scripts: Clients invoke services by executing scripts on servers. A malicious user can monitor the communication between a browser and a server to see how changing a web page entry affects what the browser sends and then how the server reacts. With this knowledge, the malicious user can manipulate the server's actions. The common scripting facilities for web servers, CGI (Common Gateway Interface) and Microsoft's Active Server Pages (ASP), have vulnerabilities that can be exploited by an attacker.
iii. Active Code: Active code, or mobile code, is a general name for code that is downloaded from the server by the client and executed on the client machine. The popular types of active code languages are Java, JavaScript, VBScript and ActiveX controls. Such executable code is also called an applet. A hostile applet is downloadable code that can cause harm to the client's system. Because an applet is not screened for safety when it is downloaded, and because it typically runs with the privileges of the user who invoked it, a hostile applet can cause serious damage.
Did you know?
Computer pests can potentially stop an organization in its tracks. An infection may cause a loss of computing power. Servers and workstations either slow down or stop responding. In addition, network bandwidth and Internet connections (a primary means of communication with other organizations) may slow so much that essential performance is affected.
Network Security Controls
This section examines the controls available to ensure network security from the
various threats identified earlier. The controls are listed under the following broad
heads:
Architecture
Cryptography/Encryption
Content Integrity
Strong Authentication
Remote Access Security
Firewalls
Intrusion Detection Systems
i. Architecture
The architecture or design of a network has a significant effect on its security. Some
of the major considerations are:
Link encryption protects the message in transit between two computers, but the message is in plaintext inside the hosts themselves (above the data link layer). Headers added by the network layer (which include addresses, routing information and protocol) and above are encrypted along with the message/data while on the link. The message is, however, exposed at every intermediate node through which it passes, because all routing and addressing is done at the network layer and each node must recover the headers to route the message. Link encryption is invisible to users and appropriate when the transmission line is the point of greatest vulnerability in the network. Because the headers are encrypted on the wire, link encryption also provides protection against vulnerabilities that depend on network traffic analysis.
End-to-End Encryption
(Fragment of a comparison table of link versus end-to-end encryption, covering visibility to the user, implementation concerns, and the requirement of one key per host pair.)
iv. PKI and Certificates
A public key infrastructure (PKI) is a process that enables users to implement public key (asymmetric) cryptography, usually in a large and distributed setting. It offers each user a set of services related to identification and access control.
PKI is a set of policies, procedures and products, not a standard. The policies define the rules under which the cryptographic systems operate; in particular, they specify how to handle keys and valuable information and how to match the level of control to the level of risk. The procedures dictate how the keys should be generated, managed and used. Finally, the products actually implement the policies, and they generate, store and manage the keys. Entities called certificate authorities implement the PKI policy on certificates. The functions of the authority can be performed in-house, by a commercial service or by a trusted third party. PKI may also involve a registration authority that acts as an interface between a user and a certificate authority. The registration authority captures and authenticates the identity of a user and then submits a certificate request to the appropriate certificate authority.
v. SSL Encryption
The SSL (Secure Sockets Layer) protocol was originally designed by Netscape to protect communication between a web browser and server. Its standardized successor is known as TLS (Transport Layer Security). SSL interfaces between applications (such as browsers) and the TCP/IP protocols to provide server authentication, optional client authentication, and an encrypted communications channel between clients and servers.
To create the SSL connection, the client requests an SSL session. The server responds with its public key certificate so that the client can determine its authenticity. The client returns a symmetric session key encrypted under the server's public key. The server decrypts the session key, and the two sides then switch to encrypted communication using the shared session key.
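The handshake just described can be observed with a short sketch using Python's standard ssl module; the host name is illustrative. The library performs the certificate check and session-key exchange internally.

import socket
import ssl

context = ssl.create_default_context()   # loads the trusted CA certificates
with socket.create_connection(("www.icai.org", 443)) as raw_sock:
    # wrap_socket performs the handshake: server authentication via its
    # certificate, session-key agreement, then an encrypted channel.
    with context.wrap_socket(raw_sock, server_hostname="www.icai.org") as tls:
        print(tls.version())                   # negotiated protocol version
        print(tls.getpeercert()["subject"])    # identity asserted by the server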
vi. IPSec
IETF (Internet Engineering Task Force) has adopted IPSec, the IP Security Protocol Suite. IPSec is designed to address spoofing, eavesdropping, and session hijacking, and operates at the IP (network) layer.
(Figure: packet layout before and after applying IPSec, showing the IP header, TCP header, data and physical trailer.)
Content Integrity
Content integrity is largely implied when cryptographic systems are used, and most kinds of malicious threat are addressed very effectively by them. For non-malicious threats to integrity, the controls are error correcting codes and message digests (cryptographic checksums).
i. Error Detecting and Correcting Codes
Error detection codes detect an error when it has occurred, and error correction
codes can actually correct errors without requiring retransmission of the original
message. The error code is transmitted along with the original data, so the recipient
can re-compute the error code and check whether the received result matches the
expected value.
Parity Check: The simplest error detection code is a parity check. An extra bit
(the parity bit) is added to an existing group of data bits depending on their sum.
With even parity the extra bit is 0 if the sum of the data bits is even and 1 if the
sum is odd; that is, the parity bit is set so that the sum of all data bits plus the
parity bit is even. Odd parity is the same except that the sum is odd. Parity bits
are useful only when the error is in a single bit (called single bit error).
Checksum and CRCs: A checksum is a form of redundancy check that, at its simplest, works by adding up the basic components of a message, usually the bits or bytes, and storing the resulting value. Later, anyone who has the authentic checksum can verify that the message was not corrupted by doing the same operation on the data and checking the sum. A more sophisticated type of redundancy check is the cyclic redundancy check (CRC), which considers not only the value of each bit/byte but also the order of the values. A CRC uses a hash function to produce a checksum, a small integer, from a large block of data, such as network traffic or computer files, in order to detect errors in transmission or duplication. CRCs are calculated before and after transmission or duplication and compared to confirm that they are the same. (A short sketch of parity and CRC checks appears after this list.)
Other Codes: Other kinds of error detection codes, such as hash codes and Hamming codes, are used to detect burst errors (several errors occurring contiguously) and multiple bit errors (multiple errors among non-adjacent bits).
Some of the more complex codes (like Hamming codes) can detect multiple-bit
errors and may be able to pinpoint which bits have been changed, thus allowing
the data to be corrected.
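As a sketch of the difference between these codes, the following Python fragment computes an even-parity bit and a CRC-32 over the same bytes. Note that the CRC detects a reordering of bytes that a simple sum or parity count would miss.

import zlib

def even_parity_bit(data):
    # Parity bit chosen so that the total count of 1-bits (data + parity) is even.
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

message = b"IS audit"
print(even_parity_bit(message))          # 0 or 1

# CRC-32 depends on both the values and the order of the bytes.
sent = zlib.crc32(message)
received = zlib.crc32(b"SI audit")       # same bytes, different order
print(sent == received)                  # False: the transposition is detected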
One Time Passwords: A one-time password can guard against wiretapping and spoofing of a remote host. In the simplest case, the user and host both have access to identical lists of passwords. The user enters the first password for the first login, the next one for the next login, and so forth. As long as the password lists remain secret and no one can guess one password from another, a password obtained through wiretapping is useless. A more practical implementation uses a password token, a device that generates a password that is unpredictable but that can be validated on the receiving end. The simplest form of password token is a synchronous one. This device displays a random number, generating a new number every minute. Each user is issued a different device (that generates a different key sequence). The user reads the number from the device's display and types it in as a one-time password. The computer on the receiving end executes the same algorithm to generate the password appropriate for the current minute; if the user's password matches the one computed remotely, the user is authenticated.
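A time-synchronized token of this kind can be sketched in a few lines of Python. The sketch below is purely illustrative; it is not the algorithm of any particular commercial token (production systems use published standards such as TOTP, RFC 6238), and the shared secret shown is hypothetical.

import hashlib
import hmac
import time

SECRET = b"per-device-secret-key"   # hypothetical key shared by device and host

def one_time_password(secret, at):
    minute = int(at // 60)          # a new password every minute
    digest = hmac.new(secret, str(minute).encode(), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1000000)

# The token and the host compute the same value for the current minute.
print(one_time_password(SECRET, time.time()))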
and RADIUS (Remote Authentication Dial in User Service). Some of the features of
such systems are:
Firewalls
The technical details of firewalls, their types and configurations have been dealt with
in the first module. Only specialized applications of firewalls for network security are
dealt with here.
i. Virtual Private Networks
Firewalls and firewall environments are used to construct Virtual Private Networks
(VPNs). (See the earlier module for more details.)
ii. Intranet
An intranet is a network that employs the same types of services, applications, and protocols present in an Internet implementation, without involving external connectivity. For example, an enterprise network employing the TCP/IP protocol suite, along with HTTP for information dissemination, would be considered an intranet. Most organizations currently employ some type of intranet, although they may not refer to the network as such. Within the internal network (intranet), many smaller intranets can be created by using internal firewalls. For example, an organization may protect its personnel network with an internal firewall, and the resultant protected network may be referred to as the personnel intranet.
Since intranets utilize the same protocols and application services that are present on the Internet, many of the security issues inherent in Internet implementations are also present in intranet implementations. Therefore, intranets are typically implemented behind firewall environments.
iii. Extranets
An extranet is usually a business-to-business intranet; that is, two intranets are joined via the Internet. The extranet allows limited, controlled access to remote users via some form of authentication and encryption, such as that provided by a VPN. Extranets share nearly all of the characteristics of intranets, except that extranets are designed to exist outside a firewall environment. By definition, the purpose of an extranet is to provide access to potentially sensitive information to specific remote users or organizations, while denying access to general external users.
Any unused networking protocols should be removed from the firewall operating
system build. Unused networking protocols can potentially be used to bypass or
damage the firewall environment. Finally, disabling unused protocols ensures
that attacks on the firewall utilizing protocol encapsulation techniques will not be
effective.
Any unused network services or applications should be removed or disabled.
Unused applications are often used to attack firewalls because many
administrators neglect to implement default-restrictive firewall access controls. In
addition, these services and applications are likely to run by using default
configurations, which are usually much less secure than production-ready
application or service configurations.
Any unused user or system accounts should be removed or disabled. This
however is operating system specific, since all operating systems vary in terms
of which accounts are present by default as well as how accounts can be
removed or disabled.
Applying all relevant operating system patches is also critical. Since patches and hot fixes are normally released to address security-related issues, they should be integrated into the firewall build process. Patches should always be tested on a non-production system prior to rollout to any production systems.
Unused physical network interfaces should be disabled or removed from the
server chassis.
systems infrastructure. For example, a customer can now access his bank account
from anywhere in the world. This means that logical paths open up enabling access
through insecure networks and diverse computing infrastructures.
Audit of network security requires the auditor to take special considerations into
account and plan accordingly to achieve his audit objectives. These are:
(Table: monitoring and assessment controls. Application-based and network-based monitoring controls, host- and network-based intrusion detection systems, host-, network- and target-based vulnerability assessment systems, and password assessment systems are mapped against the security objectives they address (confidentiality, integrity including modification to files, availability, and other), with each type of system classified as D (Detective), P (Preventive), C (Corrective) or S (Support).)
Penetration Testing
Adequately protecting an organization's information assets is a business imperative, one that requires a comprehensive, structured approach. The purpose of this section is to explore an ethical hacking technique referred to in the IT community as penetration testing, which is being used increasingly by organizations to evaluate the effectiveness of information security measures.
As its name implies, penetration testing includes a series of activities undertaken to identify and exploit security vulnerabilities. The idea is to find out how easy or difficult it might be for someone to penetrate an organization's security controls or to gain unauthorized access to its information systems.
A penetration test typically involves a small team of people sponsored by the organization asking for the test. The team attempts to exploit vulnerabilities in the organization's information security by simulating an unauthorized user (or hacker) attacking the system using similar tools and techniques. Penetration testing teams typically comprise people from an organization's Internal Audit or IT department, or from consulting firms specializing in these services. Their goal is to identify security vulnerabilities under controlled circumstances, so that they can be eliminated before unauthorized users can exploit them. Because penetration testing is an authorized attempt to simulate hacker activities, it is often referred to as ethical hacking.
It is important to point out that a penetration test cannot be expected to identify all possible security vulnerabilities, nor does it offer any guarantee that an organization's information is secure. Penetration testing is typically conducted at a point in time. New technology, new hacker tools and changes in an organization's information system can create exposures not anticipated during the penetration testing. In addition, penetration testing is normally completed with finite resources, focused on a particular area, over a finite period of time. Hackers determined to break into an organization's information systems are often not bound by similar constraints. Penetration testing is also typically focused on the security vulnerabilities that can cause unauthorized access; it is not necessarily focused on vulnerabilities that could result in the accidental loss or disclosure of the organization's information and information systems.
Many organizations have deployed sophisticated security mechanisms, such as firewalls or intrusion detection systems (IDS), to help protect their information assets and to quickly identify potential attacks. While these mechanisms are important, they are not foolproof. A firewall cannot protect against what it is configured to allow through, such as online applications and permitted services. While an IDS can detect potential attacks, it cannot show how far an attacker could actually get; this is the gap that penetration testing addresses.
ii. Blind testing: The testing team relies on publicly available information about the organization (such as its domain name registry entries and Internet discussion boards) to gather information about the target and conduct its penetration tests. Blind testing can provide information about the organization that may otherwise have been unknown, but it can also be more time consuming and expensive than other types of penetration testing (such as targeted testing) because of the effort required to research the target.
iii. Double-blind testing: This extends the blind testing strategy in that the IT and security staff of the organization are not informed beforehand about the planned testing activities, and are thus blind to them. Double-blind testing can test the organization's security monitoring and incident identification, escalation and response procedures. It requires careful monitoring by the project sponsor to ensure that the testing procedures and the organization's incident response procedures are terminated when the objectives of the test have been achieved.
iv. Targeted testing: Often referred to as the lights-turned-on approach, targeted testing involves both the organization's IT team and the penetration testing team, who are aware of the testing activities and are provided with information concerning the target and the network design. This approach is more efficient and cost-effective when the objective of the test is focused more on the technical setting, or on the design of the network, than on the organization's incident response and other operational procedures. A targeted test takes less time and effort to complete than blind testing, but may not provide as complete a picture of an organization's security vulnerabilities and response capabilities.
Types of Penetration Testing
In addition to the penetration testing strategies, consideration should be given to the
types of testing the testing team is to carry out. These could include:
i.
ii.
Social engineering activities can test a less technical, but equally important, security
component: the ability of the organization to contribute to, or prevent, unauthorized
access to information and information systems.
Risks associated with Penetration Testing
Though management sponsors testing activities for security reasons, such activities,
in themselves, carry some element of risk. Some of the key risks are:
365
Module - II
Testing activities may inadvertently trigger events or responses that may not have been anticipated or planned, such as notifying law enforcement authorities;
Sensitive security information may be disclosed, increasing the chances of external attacks on the organization.
During the course of penetration testing, significant security vulnerabilities can, and are likely to be, identified. Such information must be adequately protected, so that it does not fall into the wrong hands.
Some questions to consider include:
Will activities be conducted over the Internet or any other public network? If so,
how is information protected while in transmission over such networks?
How and where will the collected information, including working paper files, be
stored? In electronic form? In physical form? Who has, or will have, custody of
this information, including summaries of findings and observations?
How much information will the final reports and executive summaries contain?
How will the content and distribution of findings, observations and reports be
controlled?
How will notes, working papers and other forms of information be retained or
destroyed?
Do the terms of engagement include appropriate provisions to protect the
confidentiality of the information collected, as well as the findings, observations
and recommendations?
The activities or events that will trigger the conclusion of the penetration testing activities should be clearly described. These would, of course, depend on the specific objectives of the test, but could, for example, include collecting proof of the team's ability to exploit security vulnerabilities. This proof could take many forms, such as copying a target file, creating a file on a target server, adding a new user to a target system or capturing screen shots of a target application system. In some instances, it may be appropriate to define a time period within which the testing is to be completed.
Table: Network Vulnerabilities and Controls

Information gathering:
Port scan: Firewall; intrusion detection system; running as few services as possible; services that reply with only what is necessary.
Reconnaissance: Firewall; "hardened" (self-defensive) operating system and applications; intrusion detection system.
OS and application fingerprinting: Firewall; "hardened" (self-defensive) applications; programs that reply with only what is necessary; intrusion detection system.

Authentication failures:
Impersonation, guessing, eavesdropping, spoofing, session hijacking, man-in-the-middle attack: Encrypted channel; strong, one-time authentication.

Programming flaws:
Addressing errors: Programming controls; intrusion detection system; controlled execution environment; personal firewall.
Parameter modification, time-of-check to time-of-use errors: Programming controls; intrusion detection system; controlled execution environment; personal firewall; two-way authentication.
Server-side include: Programming controls; personal firewall; controlled execution environment; intrusion detection system.
Cookie: Firewall; intrusion detection system; controlled execution environment; personal firewall.
Malicious active code (JavaScript, ActiveX) and malicious typed code: Signed code; intrusion detection system; controlled execution environment.

Confidentiality:
Protocol flaw: Programming controls; controlled execution environment.
Eavesdropping: Encryption.
Passive wiretap: Encryption.
Misdelivery: Encryption; end-to-end encryption.
Traffic analysis: Encryption; traffic padding; onion routing.
Cookie: Firewall; intrusion detection system; controlled execution environment.

Integrity:
Protocol flaw: Firewall; controlled execution environment; intrusion detection system; protocol analysis; audit.
Active wiretap: Encryption; error detection code; audit.
Impersonation: Firewall; strong, one-time authentication; encryption; error detection code; audit.
Falsification of message: Firewall; encryption; strong authentication; error detection code; audit.
Noise: Firewall; intrusion detection system.
DNS attack: Strong authentication for DNS changes; audit.

Availability:
Protocol flaw: Firewall; redundant architecture.
Transmission or component failure: Redundant architecture.
Denial of service: Firewall; intrusion detection system; ACL on border router; honey pot.
Traffic redirection: Encryption; audit.
Distributed denial of service: Firewall; intrusion detection system; ACL on border router; honey pot.
The layers of security controls on the network are depicted in the following table.

Security Level: Controls
Perimeter: Firewall; network-based anti-virus; VPN encryption
Network and Host: Network access control; host IDS; host vulnerability assessment (VA)
Application: Application shield; access control/user authentication; input validation
Data: Encryption; access control/user authentication

Table: Security Layers
Obtain or prepare logical and physical diagrams of the network and attached
local and wide area networks, including the systems vendor and model
description, physical location, and applications and data residing and processing
on the servers and workstations.
Using the information obtained in the prior steps, document the server and
directory location of the significant application programs and data within the
network; document the flow of transactions between systems and nodes in the
network.
Assess whether the trusted domains are under the same physical and
administrative control and are logically located within the same sub-network.
Determine that router filtering is being used to prevent external network nodes
from spoofing the IP address of a trusted domain.
Determine that the Administrator/super-user and Guest accounts have passwords assigned to them (by attempting to log on without providing a password). Also ascertain that the Administrator account password is well controlled and used/known by only the system administrator and one backup person.
Review the account properties settings active in each users individual profile,
which may override the global account policy.
List the security permissions for all system directories and significant application
programs and directories and ensure that they are consistent with security policy.
Review and assess permissions assigned to groups and individual accounts, noting that Full Control (all permissions) and Change (Read, Write, Execute, and Delete) permissions are restricted to authorized users.
Review the audit log for suspicious events and follow up on these events with the
security administrator.
Router
Determine the types of accounts that were used to access the routers.
Determine what users had access to these accounts.
Were access attempts to the routers logged?
Determine if all accounts had passwords and also determine the strength of the
passwords.
Was simple network management protocol (SNMP) used to configure the
network?
Determine the version of SNMP employed by the Company. (Version one stores
passwords in clear-text format. Version two adds encryption of passwords.)
Determine if open shortest path first (OSPF) was defined on the router.
Determine the authentication mechanism that was employed in the Company's
implementation of OSPF.
Determine whether directed broadcast functionality was enabled on the router.
This setting, if enabled, could allow a denial-of-service (DoS) attack of the
network (Smurf attack).
Obtain population of routers with modems and obtain the telephone numbers of
the routers.
Determine if users were properly authenticated when remotely accessing the
routers.
Determine how changes to the router environment were made.
Were there procedures for changing router configurations? If so, were these
procedures well-documented and consistent with security policy?
Determine if changes to the router configuration were documented.
Was there a separation of duties within the change control of the router
environment?
Firewalls
372
Ensure that a lockdown rule has been placed at the beginning of the rule base. The lockdown rule protects the firewall, ensuring that whatever other rules are put in later will not inadvertently compromise the firewall.
Obtain and review the connections table for time out limits and number of
connections.
Attempt to test the rule base by scanning secured network segments from other
network segments.
Identify accessible resources behind the firewall that are to be encrypted and
determine that the connections are encrypted.
Determine if there is a change control process in place for the rule base.
Determine whether the firewall's automatic notification/alerting features are used and whether detailed intruder information is archived to a database for future analysis.
Summary
An Information Systems Auditor's understanding of network security helps in checking the adequacy of controls implemented in a distributed enterprise environment. A thorough understanding of control techniques, such as cryptography, security protocols, firewalls and intrusion detection systems, helps an auditor question and recommend control features. The penetration testing technique is an important tool for substantiating audit evidence.
Some examples of Application Control Techniques and their Suggested Audit Procedures:

Control Activity: Connectivity to system resources
Control Technique: Security policies and controls are to be implemented to authenticate access through specific network devices.
Audit Procedure: Interview the network administrator and inspect compliance with the standard security mechanisms followed for security on the enterprise network.
Checklist
Process
1.
3.
For all items supported by external vendors, does the vendor or the manufacturer verify that all cryptographic functions in use by the product/service, such as encryption, message authentication or digital signatures, use approved cryptographic algorithms and key lengths?
4.
5.
This includes job applicants who have accepted a job offer, temporaries,
consultants, full time staff as well as the outsourced vendor who is involved
in product/service management and operations.
Authentication
6.
7.
Does the organization verify that the initial authentication has used a
mechanism that is acceptable for the application? Has the approach been
approved by IT Department and have the compensating controls been
implemented?
8.
9.
10.
11.
For products/services that use PKI, private keys stored in hardware or software must be protected via an approved mechanism, including user authentication to enable access to the private key. Are these protection mechanisms adequate?
12.
For products/services that use PKI, an approved process for verifying the
binding of a user identity to the public key (e.g., digital certificate) is
required for any server relying on public key authentication. Is such a
process in place?
Access Control
13.
14.
15.
16.
17.
18.
Does the product/service display (a) the date and time of the last successful login and (b) the number of unsuccessful login attempts since the last successful login?
19.
Does the product/service support a periodic process to ensure that all user IDs for employees, consultants, agents, auditors, or vendors are disabled after X days and deleted after Y days of non-use, unless explicitly approved by the concerned business manager?
Cryptography
20.
21.
Is the approved Legal Affairs banner being displayed at every entry point where an internal user logs into the product/service? Is an automated pause or slow scroll rate in place to ensure that the banner is read? The Legal Affairs banner usually carries the following kind of text:
"You are authorized to use this system for approved business purposes only. Use for any other purpose is prohibited. All transactional records, reports, e-mail, software and other data generated on or residing upon this system are the property of the Company and may be used by the Company for any purpose. Authorized and unauthorized activities may be monitored."
NOTE: This is required for all mainframe, mid-range, workstation, personal computer, and network systems.
22.
23.
24.
25.
26.
27.
Does the Security Administrator function review all security audit logs,
incident reports, and on-line reports at least once per business day?
28.
In case of Wide Area Networks (WAN), are the router tables maintained
securely in Routers?
29.
Are router login IDs and passwords treated as sensitive information and managed by authorised administrators? Are all changes to router table entries logged and reviewed independently?
30.
Are access violations taken note of, escalated to a higher authority and
acted upon in a timely manner?
31.
32.
33.
Have all the security-related administrative procedures under the control of the Security Administrator been documented and approved by management (an annual exercise)? The minimum procedures should include:
Information Ownership
Data Classification
User registration/Maintenance
Audit Trail review
Violation logging and reporting
Sensitive activity reporting
Semi-Annual Entitlement Reviews
Password resets
Escalation reporting
Microcomputer / PC Security
34.
35.
36.
37.
38.
Does the audit trail associated with the product/service support the ability to log and review all actions performed by systems operators, systems managers, system engineers, system administrators, highly privileged accounts and emergency IDs?
39.
40.
Does the audit trail for product/service record all identification and
authentication processes? Also, is there a retention period for the audit
trails? Is it adequate?
41.
Does the audit trail associated with the product/service log all actions by the Security Administrator?
42.
43.
44.
45.
46.
Has all the media (files, floppies, disks, tapes, etc.) under the control of the product/service owner been marked with its classification and securely stored, with access restricted to authorized personnel only?
47.
Is there a process in place to ensure that all media under the control of the
product/service owner containing critical information is destroyed in a
manner that renders the data unusable and unrecoverable?
48.
program, which secures all critical information from unauthorized access?
Penetration Testing
49.
50.
51.
Questions
1.
2.
3.
The ping of death, connection flooding and traffic redirection are network vulnerabilities called …… attacks that result in loss of network availability.
a. Dumpster of Data
b. Denial of signal
c. Dumping of service
d. Denial of service
4.
6.
7.
8.
9.
11. The activity of testing undertaken by an organization with the help of teams to exploit the security vulnerabilities of its enterprise network is called …… .
a. Intrusion Detection Testing
b. Preventing Detection Testing
c. Post-Implementation Testing
d. Penetration Testing
12. A testing team member posing as a representative of the IT department's help desk and asking users to divulge their user account and password information is performing the …… type of penetration testing.
a. Social engineering
b. Team engineering
c. User testing
d. User engineering
13. Identify the control implemented to monitor for and detect the network vulnerability of OS and application fingerprinting.
a. Penetration Testing
b. Cryptography methods
c. Intrusion Detection Systems
d. Immediate version System
14. The …… vulnerability is an authentication failure which can be controlled by using an encrypted channel and one-time authentication.
a. Salami Technique
b. Spoofing
c. Buffer overflowing
d. Brute-force Method
15. Intrusion detection/prevention systems (IDS/IPS) are network vulnerability management systems implemented at the …… level.
a. Application
b. Data
c. Perimeter
d. Network
16. The …… rule protects the firewall, ensuring that any change in the rules at a later time will not inadvertently compromise it.
a. Lockdown
b. Listdown
c. OS banner
d. Segmentation
22. Which of the following is an advantage of using link encryption?
a. Individual nodes in the network do not have to be protected.
b. The exposure that results from compromise of an encryption key is
restricted to a single user who owns the key
c. It protects messages against traffic analysis attacks
d. The users of the network can bear the cost of link encryption.
23. …… is malicious code that can be used by a user to invoke services on the server.
a. Cookies
b. Scripts
c. Active Code
d. Viruses
24. A message authentication code is used to protect against:
a. Changes to the content of a message
b. Traffic Analysis
c. Release of message contents
d. Password being transmitted.
25. …… is an attack that adds spurious entries to a table in the server that converts domain names like www.icai.org into network addresses like 202.54.74.130.
a. Host Name Redirection
b. Traffic Name Server
c. Data Name Server Attacks
d. Domain Name Server Attacks
Answers:
1. b
2. c
3. d
4. a
5. b
6. b
7. a
8. b
9. d
10. c
11. d
12. a
13. c
14. b
15. d
16. a
17. b
18. c
19. d
20. c
21. b
22. c
23. b
24. a
25. d
4 Application Controls
Learning Objectives
Introduction
Over the last several years, organizations around the world have spent billions of
dollars upgrading or installing new business application systems for reasons ranging
from tactical goals, such as year 2000 compliance, to strategic activities, such as
using technology to establish company differentiation in the marketplace. An
application or application system is software that enables users to perform tasks by employing a computer's capabilities directly. These applications represent the interface between the user and business functions. For example, a counter clerk at a bank is required to perform various business activities as part of his job and assigned responsibilities. From the
business logic.
Application controls pertain to individual business processes or application systems, including data edits, separation of business functions, balancing of processing totals, transaction logging, and error reporting. The objectives of application controls are to:
Safeguard assets
Maintain data integrity
Application Exposures
i.
ii.
iii.
iv.
v.
vi.
vii.
viii.
ix.
x.
Application Controls
i. Boundary Controls: Controls to prevent unauthorized access to the application and its data.
ii. Input Controls: Controls to ensure the accuracy and completeness of data and instructions entered into the application.
iii. Processing Controls: Controls to ensure that only authorised processing occurs and that the integrity of processes and data is maintained.
iv. Datafile Controls: Controls to ensure that data resident in the files are maintained consistently, with assurance of the integrity and confidentiality of the stored data.
v. Output Controls: Controls to ensure that outputs are delivered to the users in a consistent and timely manner, in the format prescribed/required by the user.
Application Boundary Controls
The objective of boundary controls is to prevent unauthorized access to applications and their data. Such data may be at any stage: input, processing, transit or output. The controls restrict user access in accordance with the business policy and structure of an organization, and protect other associated applications, systems software, databases and utilities from unauthorized access.
Access controls may be implemented by using any of the logical security techniques
embedded in the application software. Besides access security implemented at the
operating system and/or database management systems level, a separate access
control mechanism is required for controlling access to application. The application is
to have boundary controls to ensure adequate access security to prevent any
unauthorized access to:
Applications themselves
Application data during communication or transit
Stored application data
Resources shared with other processes
The above objectives can be achieved by adopting logical security techniques like:
Using logon ids and passwords
Providing access to application from specified terminals only
Using Cryptographic Controls, that is,
o Encrypting all data leaving and entering the application process.
o Encrypting intermediary data, whether at the input, processing or output stage, stored in the database.
o Ensuring confidentiality, integrity and availability through encryption and authentication of all exchanges of data/processes between applications. The authentication of users and processes usually occurs at the network level, since it is cumbersome to authenticate each and every application. See the next chapter for authentication of users at the network layer.
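As a minimal sketch of such a boundary control, the following Python fragment shows a logon routine that suspends a user ID after a fixed number of invalid attempts. The user name, password and the threshold of three are all illustrative, and a real application would store salted password hashes in a protected database rather than in memory.

import hashlib

MAX_ATTEMPTS = 3
users = {"clerk01": hashlib.sha256(b"s3cret").hexdigest()}  # user ID -> password hash
failed_attempts = {}   # user ID -> consecutive invalid log-ins
suspended = set()

def log_on(user_id, password):
    if user_id in suspended or user_id not in users:
        return False
    if hashlib.sha256(password.encode()).hexdigest() == users[user_id]:
        failed_attempts[user_id] = 0
        return True
    failed_attempts[user_id] = failed_attempts.get(user_id, 0) + 1
    if failed_attempts[user_id] >= MAX_ATTEMPTS:
        suspended.add(user_id)      # suspend the ID after repeated failures
    return False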
Using audit trails, i.e., logging all significant events that occur at the boundary of the application. These events include all or any of the following:
o
o
o
o
o
o
Input Controls
Input controls are responsible for ensuring the accuracy and completeness of data and instructions input into an application system. These controls are important since substantial time is spent on the input of data, which involves human intervention and is therefore prone to error and fraud. Input controls address the following:
a.
b.
c.
d.
e.
f.
g.
Source document design begins with an analysis of the need and usage of the
source document. Source document design includes the following:
For designing the layout of the source document, the following should be kept in
mind:
Make field captions consistent; always employ the same caption to indicate
the same kind of data entry.
Ensure that captions are sufficiently close to be associated with their proper
data fields, but are separated from data fields by at least one space.
o
o
o
Data entry field design: Data entry fields should either be to the right of the
caption or exactly below it. Provide underscores in fields to indicate a fixed or
maximum length specified for a data entry. Where source documents are used,
the pattern should be based on the source document. Option buttons and check
boxes should be used when the user has to choose from a small list of options.
In case users have to choose from a long list of options, list boxes can be used.
Tabbing and skipping: During data entry, the user moves from field to field by
pressing the Tab key. It is important to ensure that the Tab order is consistent,
so that the insertion point moves from the first field on the screen to the last
without skipping fields. Incorrect tab order will not only frustrate users, but may
cause them to enter data in the wrong field. Automatic tabbing should be avoided
since fields may be skipped for data entry.
o Make captions for data fields distinctive, so that they will not be readily confused with data entries, labelled control options, guidance messages, or other displayed material.
o The caption for each entry field should end with a special symbol (:), signifying the start of the entry area.
o Captions should employ descriptive wording, or else standard, predefined terms, codes and/or abbreviations; avoid arbitrary codes.
Colour: Colours help reduce search time and make the screen interesting. However, bad usage of colour may distract or confuse the users. The following should be kept in mind when deciding the colours:
o Bright colours should be avoided. Only soft or pastel colours, or those that provide good contrast for the fields and captions, should be used.
o Uppercase and lowercase may be used to differentiate captions if the display is monochrome or if the users have difficulty in distinguishing colours due to eye defects, poor or excessive lighting, etc.
Display rate: Display rate is the rate at which characters or images are displayed.
Data entry screens should have a fast and consistent display rate. If the rate is
slow or inconsistent, the data entry is prone to errors.
Prompting and help facilities: Descriptive help should be added wherever
possible. Data entry forms should be as self-explanatory as possible but should
also include help for each field. Prompting of actions by the user can be provided
by using pop-up messages that appear on placing the cursor on a field. More
prompting and help is required in the case of direct data entry where no source
document is involved.
Data code controls
Data codes are used to uniquely identify an entity or identify an entity as a member of
a group or set.
Types of data coding errors:
Length of the code: Long codes are naturally prone to more errors. Long codes
should be broken using hyphens, slashes or spaces to reduce coding errors.
Alphabetic numeric mix: The code should provide for grouping of alphabets
and numerals separately. Inter-mixing the two can result in errors.
Choice of characters: Certain characters are easily confused with others: B, I, O, S, V and Z may be read as 8, 1, 0, 5, U and 2 when written on a source document and entered into the system. Such characters should be avoided.
Mixing uppercase/lowercase fonts: Uppercase and lowercase should not be mixed when using codes, since mixing delays keying due to use of the Shift key. Such codes are also prone to errors.
Sequence of characters: Character sequence should be maintained as much as possible; for example, use ABC rather than ACB.
Check digits are redundant digits that help verify the accuracy of other characters in
the code that is being checked. The program recalculates the check digits and
compares with the check digit in the code when it is entered to verify if it is correct.
Check digits may be prefixes or suffixes to the actual data. Since these take time to
calculate, they should be used only on critical fields.
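A widely known example of a suffix check digit is the Luhn algorithm used on credit card numbers; the sketch below recomputes the digit so that it can be compared with the digit actually keyed in. The algorithm shown is the standard Luhn scheme, offered here only as one common illustration of check digits.

def luhn_check_digit(code):
    # Walk the digits right to left, doubling every second digit.
    total = 0
    for i, ch in enumerate(reversed(code)):
        digit = int(ch)
        if i % 2 == 0:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return (10 - total % 10) % 10

code = "7992739871"
print(luhn_check_digit(code))   # 3, so the full code with its check digit is 79927398713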
Batch Controls
Batch controls group input transactions into logical or physical batches. Physical
batches are groups of transactions that constitute a physical unit such as a set of
invoices pertaining to a branch. Logical batches are groups of transactions that are
divided on a logical parameter, such as cut off date, or documents pertaining to
division or branch.
Control over physical batches is ensured through batch header forms, which are data
preparation sheets containing control information about the batch.
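Typical figures carried on a batch header are a record count, a value total of an amount field and a hash total (a sum, such as of document numbers, that has no business meaning of its own). The sketch below, using hypothetical invoice data, recomputes these figures from the keyed transactions and compares them with the header.

invoices = [
    {"invoice_no": 101, "amount": 1500.00},
    {"invoice_no": 102, "amount": 250.50},
    {"invoice_no": 103, "amount": 990.00},
]

# Control figures recorded on the batch header form (illustrative values).
header = {"record_count": 3, "amount_total": 2740.50, "hash_total": 306}

recomputed = {
    "record_count": len(invoices),
    "amount_total": round(sum(inv["amount"] for inv in invoices), 2),
    "hash_total": sum(inv["invoice_no"] for inv in invoices),
}

for name, value in recomputed.items():
    print(name, "OK" if value == header[name] else "MISMATCH")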
Types of batch controls are as follows:
Edit Controls: Edit controls are the principal data validation controls and are used to validate data. They include the following:
o Sequence checks: Controls that verify if the data maintains a proper
sequence or order. They are usually used to check serial numbers of
documents or the date sequence of transactions.
o Range and Limit check: A range check fixes both upper and lower limits for data values to ensure that entries do not fall outside them; a limit check applies only an upper or a lower limit, not both. For example, if the edit check requires that gross salary should not exceed Rs.100000/- per month, it is a limit check. If the check requires that the percentage of tax should be between 10% and 30% of the gross total income, it is a range check.
Missing data check: Ensures that certain key fields are not left blank during data
entry.
Duplicate check: Duplicate check ensures that the same data is not keyed twice.
For example when entering invoices, the same invoice number is not repeated
twice.
Programmed Validity Check: Logical validations may be built in an application to
check invalid input. E.g., there can be a validation to prevent a user from entering
a non- existent account code.
Dependency Match: Where certain fields depend on input values in other fields,
programmed checks can be used for internally validating data before accepting
input. For example, Date of joining may be compared to the date of birth to
ensure that the latter is earlier than the former.
Completeness check: Completeness of data may be checked before accepting
data entered. For example, while creating a new e-mail user id, the input screen
will not be accepted without date of birth, name, address etc. (marked as
required fields).
Reasonableness check: Reasonableness of value entered may be compared
with an acceptable range within the system before accepting input, e.g. age
entered as 140 may be rejected as unreasonable.
Table lookups: Input entered is matched with a range of values in a table before acceptance; e.g., a customer code entered by the operator is internally matched against a table of valid customers, and the transaction is rejected if there is no match.
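A few of the edit controls above can be sketched as a single validation routine. The field names, the valid-customer table and the cut-off values (the Rs.100000 limit, the 10-30% range and the age test echo the examples in the text) are all illustrative.

VALID_CUSTOMERS = {"C001", "C002", "C003"}   # table look-up source (illustrative)

def edit_checks(record):
    errors = []
    for field in ("customer", "age", "gross_salary", "tax_percent"):
        if record.get(field) in (None, ""):                 # missing data check
            errors.append(field + ": missing")
    if record.get("customer") not in VALID_CUSTOMERS:       # table look-up
        errors.append("customer: not on the valid customer table")
    if not 10 <= record.get("tax_percent", 0) <= 30:        # range check
        errors.append("tax_percent: outside the 10-30% range")
    if record.get("gross_salary", 0) > 100000:              # limit check
        errors.append("gross_salary: exceeds the Rs.100000 limit")
    if record.get("age", 0) > 120:                          # reasonableness check
        errors.append("age: unreasonable value")
    return errors

print(edit_checks({"customer": "C009", "age": 140,
                   "gross_salary": 150000, "tax_percent": 35}))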
Rejecting only transactions with errors: Only those transactions containing errors are rejected, and the rest of the batch is processed.
Reject the whole batch of transactions: The whole batch of transactions is
rejected if even a single error is found in it. This is usually done in cases where
transactions have to be in a sequential order and data entry has not been made
in that order.
Accepting batch in suspense: The batch would be held in suspense till the
corrections in transactions are made. However, the batch or the transactions are
not rejected.
Accepting the batch and marking error transactions: Erroneous transactions
are separately marked for identifying later and the whole batch is processed.
This is usually done in an online system.
Reporting Instruction Input Errors
Error messages and procedural instructions need to be communicated to users at the moment a possible error occurs. The error message must be complete and meaningful and help the user correct the error immediately. Different error messages may be given based on the expertise of the user.
Processing Controls
Data processing controls perform validation checks to identify errors during the
processing of data. They are required to ensure both the completeness and accuracy
of the data being processed. Normally the processing controls are enforced through
the database management system. However, adequate controls should be enforced
through the front end application system also to ensure consistency in the control
process.
i.
Datafile Controls
Version usage: The proper version of a file should be used for processing the data correctly. It should be ensured that only the most current file is processed.
Internal and external labelling: Labelling of storage media is important to
ensure that the proper files are loaded for process. Where there is a manual
process for loading files, external labelling is important to ensure that the correct
file is being processed. Where there is an automated tape loader system, internal
labelling is more important.
Data file security: Unauthorized access to data files should be prevented to ensure their confidentiality, integrity and availability. (The version and labelling controls above ensure that the correct file is used for processing; they are not concerned with the data's validity.)
Before and after image and logging: The application may provide for reporting of before and after images of transactions. These images, combined with the logging of events, enable reconstructing the datafile back to its last state of integrity, after which the incremental transactions/events can be rolled back or forward (a small sketch appears after this list).
File updating and maintenance authorization: Sufficient controls should exist
for file updating and maintenance to ensure that stored data are protected. The
access restrictions may either be part of the application program or of the overall
system access restrictions.
Parity Checking: When programs or data are transmitted, additional controls are
needed. Transmission errors are controlled primarily by detecting errors or
correcting codes.
Output Controls
Output controls ensure that data delivered to users is presented, formatted and delivered in a consistent and secure manner. Output can be in any form: a printed data report, or a database file on removable media such as a floppy disk, CD-ROM or removable hard disk. Whatever the type of output, its confidentiality, integrity and consistency are to be maintained.
The following form part of the output controls:
failure without having to repeat the entire process from the beginning. Existence
controls should also be exercised over output to prevent loss of output of any form
i.e., paper, spool files, output files, etc. Finally, recovering the application system
accurately, completely and promptly can be critical to many organizations, especially
if they are in an EDI environment.
Audit of Application Controls
The audit of application controls requires the IS auditor to ensure that the application
system under audit
When reviewing the application controls, the IS auditor should ensure that the audit objectives with regard to the confidentiality, integrity and availability of organisational information are addressed in an orderly and complete manner. The process of application control review will include:
i.
398
Application Controls
Apply defense-in-depth.
Use a positive security model.
Fail safely.
Run with the least privilege.
Avoid security by obscurity.
Keep the security simple.
Detect intrusion and keep logs.
Never trust infrastructure and services.
Establish secure defaults.
Use open standards.
Summary
Application controls are specific control procedures over the applications which can
provide assurance that all transactions are authorized and recorded, processed
completely, accurately and on a timely basis. Application controls may consist of
manual procedures carried out by users (user controls) and automated procedures or
controls performed by the computer software.
Testing application controls is achieved by gaining evidence that a control has
operated. For example, if a computer enforces segregation of duties between various finance functions, the auditor may check the access control lists and the user
permissions on those lists. Alternatively, if the computer has automated controls to
ensure that purchase order clerks can only order goods from a predefined list of
approved products from approved suppliers (regularity assurance), the auditor may
check the access controls to the approved products list, together with a sample of
new additions to the list.
Evidence collected about the operation of computer controls is obtained from a
combination of observation, enquiry, examination and sampling. The auditor may also
be able to use computer assisted audit techniques to assist in the examination of
controls. For example, the auditor could download the audit log file and write a
routine to extract unauthorized access attempts.
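A minimal sketch of such a CAAT routine appears below. It assumes the audit log has been downloaded as a CSV file with hypothetical columns (timestamp, user_id, event, status); a real engagement would adapt the parsing to the application's actual log layout.

import csv
from collections import Counter

def failed_logins(log_path: str, threshold: int = 3) -> Counter:
    """Count failed login attempts per user ID and flag repeat offenders."""
    failures = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):   # assumed columns: timestamp,user_id,event,status
            if row["event"] == "LOGIN" and row["status"] == "FAILED":
                failures[row["user_id"]] += 1
    # Keep only user IDs at or above the follow-up threshold.
    return Counter({u: n for u, n in failures.items() if n >= threshold})

# Example: print(failed_logins("audit_log.csv")) might show {'U017': 5},
# a user ID the auditor would follow up with the security administrator.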
Some examples of Application Control Techniques and their suggested Audit Procedures follow. Audit procedures for application-level controls are segregated into areas of critical evaluation, presented below as control activities with their associated control techniques and audit procedures.
Control Activity: Application security management plan.
Control Techniques:
- Procedures for emergency access to the production system (including program or database updates or modifications) and security parameter settings compliant with the organization's policies.
- Identification of critical user IDs and business processes or sub-processes with their appropriate access privileges.
- User least privilege, preventing execution of incompatible transactions from the application menu interface.

Control Activity: Application risk assessment and periodic monitoring.
Control Techniques:
- Periodic assessment of the application and supporting systems.
- Documentation of risk assessment, validation and approvals as part of the security plan.
- A statement of the frequency and scope of testing the security policies is in place.
- Weaknesses are identified, and corrective action plans, milestones and resolution plans are initiated and tested under a periodic monitoring plan.
Audit Procedures:
- Based on the application test plan, assess whether the frequency and scope of testing are aligned with the risk and criticality of the application.

Control Activity: Policies and procedures to assess access to applications periodically; management responsibility for security policies, procedures and compliance.

Control Activity: Controlled access to the application, including parameters used within applications and supporting applications.

Control Activity: Policies and procedures for change management of application functionality.
Audit Procedures:
- Inspect documents identifying key transactions that provide user access to application functionality changes.
- Review the system documentation of the SDLC methodology defined against the methodology actually followed or implemented.
- Examine and inspect recent software modification request forms and the procedures followed.

Control Activity: Access to program libraries is restricted.
Control Techniques:
- Segregation of libraries of programs containing production code, source code and support programs, and control over their maintenance.

Control Activity: Access to application activities, processes, transactions, programs, tables and parameters is controlled.

Control Activity: Steps to prevent potential damage and interruption, supported by a business impact analysis plan.
Checklist
Each transaction is authorized, complete, accurate, and timely, and input only once.
5. Obtain from Payroll / Personnel a list of staff within the section. Request from the Systems Administrator a list of all users of the system. Ensure that all system users are valid employees and users.
8. See to it that input of parameters for processing and other standing data is strictly controlled. (What controls exist to prevent accidental / malicious changes to fixed data parameters, i.e., tax calculations, pay rises, etc.?)
Check the correctness of key values and data within the system. (E.g., if a VAT calculation is required, is the standing VAT rate set to 17.5%? Does the system record a history of standing data changes?)
10. Ensure that there are clear procedures for data items rejected on input. (Ascertain how rejected inputs are treated and reported. From a sample of rejected records, ensure that they are amended and successfully re-input.)
11. Make sure that clear timetables exist for input and are adhered to. (Ascertain who is responsible for authorizing the processing of jobs and what procedures are in place. Are they reviewed on a regular basis?)
15. If the input of data is through batch upload, check if the software has controls to ensure that all the entries in the batch have been uploaded without any omission/commission (e.g., reconciliation of control totals, etc.; see the sketch after this checklist).
16. Review and evaluate the controls in place over data feeds to and from interfacing systems.
17. Check whether the application prevents the same user from performing both the functions of entering a transaction and verifying the same.
18. Find out if the application has adequate controls to ensure that all transactions input have updated the files.
19. In cases where the same data are kept in multiple databases and/or systems, check if periodic sync processes are executed to detect any inconsistencies in the data.
An appropriate level of control is maintained during processing to ensure completeness and accuracy of data.
20. (Ask who is responsible for job scheduling. What are the procedures for scheduling jobs, and are they up to date and reviewed on a regular basis?)
22. Data is processed by the correct programs and written to the correct files. (Confirm that logs / documentation exist, recording any systems updates that may affect the validity of processes, and any legislative changes.)
27. Controls are adequate for the programmed procedure that generates the data. (Verify the controls implemented, like recalculations (manual), editing, run-to-run totals, limit checks, etc.)
(Review any records that are maintained to control output distribution and check that they are adequate and sufficient to identify who has received the authorized outputs. What procedures exist for controlling the use of stationery? Observe storage arrangements for output. Is physical access controlled? Test a sample of output reports.)
(Review application output and check whether listings are produced or can be generated which substantiate reported control totals.)
38. When records are posted from one financial system to another, those input to the second should agree with those output from the first. (Follow up on, and reconcile, data between financial systems.)
39. Determine the need for error/exception reports related to data integrity, and evaluate whether this need has been fulfilled.
43. Review and evaluate the audit trails present in the system and the controls over those audit trails.
Arrangements exist for creating back-up copies of data and programs, storing and retaining them securely, and recovering applications in the event of failure.
44. Any data and programs that have to be held on PCs / standalone systems are backed up regularly. (Determine the backup arrangements for data processed. What short-term data recovery / disaster recovery procedures exist? Is the tape/disc backup media safely stored in a suitable fireproof container? Do procedures require media to be taken off site as an additional safety measure? Are responsibilities clearly defined for the backing up of important data?)
45. Database integrity checks are run periodically, and back-up copies of the database are retained from one check to the next. (Establish that database integrity checks are carried out, and how they are documented.)
46. A general procedures or training manual, for use by all staff involved with processing, exists to follow in the event of an application failing during processing. (Ensure that a manual exists that details actions to be taken in the event of an application failing.)
47. Check if appropriate controls are established on data files, such as Prior and Later Image, Labeling, Version and Check Digit/Parity Check.
55. Ensure that a mechanism or process has been put in place that suspends user access on termination from the company or on a change of jobs within the company.
57. Review and evaluate processes for granting access to users. Ensure that access is granted only when there is a legitimate business need.
58. Ensure that users are automatically logged off from the application after a certain period of inactivity.
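As referenced in item 15 above, the following is a minimal sketch of reconciling batch control totals after an upload; the record layout and header totals are hypothetical.

def reconcile_batch(uploaded_rows, expected_count, expected_amount):
    """Compare record count and amount totals against the batch header."""
    actual_count = len(uploaded_rows)
    actual_amount = sum(r["amount"] for r in uploaded_rows)
    ok = (actual_count == expected_count) and (actual_amount == expected_amount)
    return ok, actual_count, actual_amount

batch = [{"invoice": "I-101", "amount": 1200}, {"invoice": "I-102", "amount": 800}]
print(reconcile_batch(batch, expected_count=3, expected_amount=2500))
# (False, 2, 2000) -- one record for 500 was dropped during upload

Any mismatch between the computed totals and the batch header indicates an omission or an unauthorized addition that must be investigated before processing continues.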
Questions
1. Transaction or files not processed to completion due to an error during an online
or an offline processing or failure of a trigger is an .. application
exposure.
a. Incomplete processing
b. Inaccurate Information
c. Changes to Data or Programs
d. Unauthorized remote access
2. . to ensure that only complete, accurate and valid data and instructions
are input to the application.
a. Boundary Controls
b. Input Controls
c. Output Controls
d. Processing Controls
3. The logging of events like user requesting access, incorrect login attempts and
terminal-id at the boundary of an application are done by using ..
a. Cryptographic controls
b. Transaction Trails
c. Audit Trails
d. Backup Trails
4. Which of the following is an input control?
a. Login
b. Range check
c. Report distribution
d. Digital signature
5. Which of the following best describes output control?
a. Protects the confidentiality of output.
b. Prevents data loss
c. Ensures availability of data
d. Ensures integrity of data while in transit
6. Which of the following is not an application Control?
a. Input Controls
b. Processing Controls
c. Recovery Controls
d. Output Controls
7. In the layout of a source document:
a. To avoid confusion for users, keying information should not be present in the form.
b. The form need not contain instructions.
c. Data that is to be calculated by the system should be captured for verification.
d. Data entry is to be sequenced from left to right and top to bottom.
8. .. is the main objective in designing the data-entry screens.
a. The number of data to be collected.
b. The approved source document of the screen
c. The experience of the data-entry operator with the screen
d. The periodic frequency of the screen being used.
9. If the product number B8597 is coded as B5987, it is an example of a
a. Truncation error
b. Double Transposition error
c. Random error
d. Transcription error
10. A data/field check edit control is
a. A check for missing blanks during data entry.
b. A check for the data upper and lower limit.
c. A check for proper sequence of data.
d. A check for data dependency.
11. Identify the best validating instruction input interface method that can avoid
errors and can be controlled well:
a. Processing language
b. Manual entry Screen
c. Menu Driven
d. Command language
12. While processing a journal entry, if only the debit entry was updated and the credit entry was not updated due to the absence of an important field, the .. report gives details of the respective transaction code.
a. Edit Report
b. Exception Report
c. Run-to-run Total Report
d. Verification Report
13. The existence control over processing of data should include a
control to recover a process from a failure without having to repeat the entire
process from the beginning.
a. Checkpoint
b. Backup
c. Monitoring
d. Validation
14. Storage of critical forms, logging programs executed for report generation and
print job monitoring are ..controls implemented in an application.
a. Boundary
b. Output
c. Input
d. Data file
15. Identify the controls that can ensure loading and execution of the most current
data file or program.
a. Version and internal labeling
b. Verification and validation
c. Value and internal blocks
d. Internal segmentation labels
16. Which of the following is not a design guideline for using color on a data-entry
screen?
a. Use colors sparingly
b. Use bright colors so differences are highlighted
c. Use similar colors
d. Do not use red for error messages
17. Identify the audit procedure that assesses the access to program libraries within
an application.
a. Verify source code compile dates; compare module size to production load
module size.
b. Observe and inspect the user responsibilities within the organization.
c. Examine the adequacy of the backup strategies adopted.
d. Review the system documentation of SDLC methodology followed.
18. .. is a logical batch control based on the cut-off date parameter.
a. A group of invoices for a table
b. A set of invoices that have a similar error
c. A set of invoices that constitute a division/branch
d. A set of invoices made by a group of users
19. control ensures that data resident in the files are maintained consistently
with the assurance of integrity and confidentiality of the stored data.
a. Output
b. Input
c. Boundary
d. Data file
20. Identify the control that is not a factor of data-entry screen design.
a. Caption design
b. Colour design
c. Tabbing design
d. Check digit design
21. If data entry requires that gross salary should not exceed Rs.100000/- per month, this can be done by using a .. control.
a. Sequence check
b. Range check
c. Total check
d. Field check
22. The ..control check is used to prevent unauthorized access to the
intermediate storage of output before printing.
a. Ranging
b. Table lookup
c. Spooling
d. Segmenting
23. If data entry is done directly on screen, then in designing the screen organization:
a. Tabbing and skipping design is a critical factor
b. Use asymmetry to reduce the number of screens required for data input
c. Screen organization should be synchronized to data capture method
d. Screen organization should be in line with the preferences of the data entry operator.
24. Identify the audit procedure to verify that transactions are from recognized sources:
a. To compare the application with a valid source document timesheet.
b. To check the password of the data-entry operators
c. To interview the personnel maintaining rejected records
d. To check logging of data exchange methodology
25. Check if appropriate controls are established on data files such as Prior and
Later Image, Labeling, Version and Check Digit/Parity Check. This checklist
question can be used by an auditor to check..controls.
a. Boundary
b. Data process
c. Data Backup
d. Data storage
Answers :
1. a
2. b
3. c
4. b
5. a
6. c
7. d
8. b
9. b
10. a
11. c
12. b
13. a
14. b
15. a
16. b
17. a
18. c
19. d
20. d
21. b
22. c
23. c
24. a
25. c
Introduction
The need to protect information assets is a tightrope walk for every enterprise as it sets its business objectives and strives to achieve them, while managing the risks arising from the external and internal environment. The successful performance of business processes depends significantly on information technology, which helps in handling them effectively and efficiently. Increasingly, information is being recognized as a key business asset that is critical to the success and survival of today's enterprise. But this technology, including the Internet, is a double-edged sword, for the benefits come along with risks. Hence it is all the more necessary that business information resources and the information technology that processes them are protected. While the former, i.e. business information, is generally referred to as Information Resources, the latter, i.e. the business processes, the technology, the IT processes and people, are referred to as Information Systems Resources.
The first major process undertaken in protecting information assets is information
recording and information classification.
Information recording takes stock of the kinds of information used for business. This
includes various kinds of information used in application processing on handheld and
portable devices, etc.
All information in the company is classified according to its intended audience, and
handled accordingly. This includes paper documents, computer data, faxes and
letters, audio recordings, and any other type of information. Classifying this
information and labeling it clearly helps employees understand how management expects them to handle it, and to whom they may disclose it.
The next process is the classification of users. It is based on the roles played by
users in the organization.
Lastly, access control models describe what kind of users can have access to what
kind of data.
Information Classification
The information classification process focuses on business risks and data valuation. Not all data has the same value for an organization. Some data is valuable to the people who have to make strategic decisions, because it aids them in making long-range or short-range business direction decisions. Some data, such as trade secrets, formulas, and new product information, is valuable because its loss can create a big problem for the enterprise in the marketplace. Apart from creating public embarrassment, it can also affect the enterprise's credibility.
For these reasons, information classification has an enterprise-level benefit. A typical scheme classifies information into the following levels:
Top secret: This indicates information of the highest degree of importance; any
compromise of its confidentiality, integrity and availability can endanger the
existence of the organization. Access to such information may be restricted to
either a few named individuals in the organization or to a set of identified
individuals.
Secret: Information in this category is strategic to the survival of the
organization. Unauthorized disclosure can cause severe damage to the
organization and its stakeholders.
Confidential: Information in this category also needs high levels of protection,
because its unauthorized disclosure can cause significant loss or damage. Such
information is highly sensitive and is to be well protected.
Unclassified: It is information that does not fall in any of the above categories. Its unauthorized disclosure will not cause any adverse impact on the organization. Such information may also be made freely available to the public.
The data protection Act now applies to all personal information held in relevant filing systems, which may be in any medium (paper, database, spreadsheet, word-processing folder, etc.). The criteria for labeling something as a filing system in terms of the Act relate to whether the information is held in a structured way, and indexed by individual identifiers.
There is an emphasis on giving data subjects advance notification about
collecting data and what will be done with it (how it is to be 'processed'). In this
context, data subjects must have the opportunity to give consent to the collection
and processing of their data.
That consent should be given freely, and (wherever possible) be obtained
explicitly and in advance.
There are Fair Processing principles. The personal data that is collected and processed must be for specified, explicit and legal purposes; it must be accurate, relevant and must not exceed those purposes. Personal data must be kept secure and up-to-date, and retained for no longer than is actually necessary.
There are strict controls on the processing of 'sensitive personal data' (i.e. race,
ethnicity, gender, health), even where it is processed only for research purposes.
The Act also prescribes compliance audits.
The aims of data protection compliance audits go beyond the basic requirements of
Data Security and address wider aspects of data protection including:
Classification of Users
Entitlement of access to an information resource and the degree of access are determined according to the job description, functions and role of the user in the organization. The rights of access are designed to ensure that the user does not gain access to undesired information resources or an undesired mode of access to information. This is governed by the need-to-know and need-to-do basis, or the principle of least privilege. The principle of least privilege (also known as the principle of least authority) is an important concept in information security. It advocates minimal user profile privileges on computers, based on users' job necessities. It can also be applied to processes on the computer; each system component or process should have the least authority necessary to perform its duties. This is also called the default deny principle. All these terms imply that access is available to users only after it has been specifically granted to them, and only to perform the job function for which it has been granted.
These principles help reduce the attack surface of the computer by eliminating
unnecessary privileges that can result in network exploits and computer
compromises. At the same time, the authority for access should be adequate and not
adversely affect the functioning of the user.
i. Data Owners: The data owner is the official responsible for an information resource and is accountable for any loss or damage to it. For example, the Sales Director may
be the designated owner of the sales transaction database. The data owner has
the rights of determining the classification of the data resource or changes
thereto and rights of delegation of responsibilities to users and custodians.
However, custodians may be responsible for day to day protection of the data
resource, under delegated authority from owners.
ii. Data Users: Users require access to the data for their day to day functioning.
For example, employees other than designated data owners, data entry
operators, customers etc. Data users derive their rights of access from the data
owners. Users are governed by the security policy and procedures of the
organization and have the responsibility to use their authority for organizational
purposes and protect the resources.
iii. Data Custodians: The custodians like the IT Department are delegated with
the responsibility of administering and protecting data resources. The custodians
also derive their rights from the owners and their actions are governed by the
security policy framework, policies and procedures.
[Figure: An access control mechanism, guided by the security policy, mediates access by a subject (e.g. a user, program or device) to an object (e.g. a program, data or device).]
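A minimal sketch of the model in the figure, assuming a simple in-memory policy table, is shown below: the access control mechanism permits a subject's request on an object only if that (subject, object, mode) combination has been explicitly granted, i.e., default deny and least privilege. All names are hypothetical.

# Explicit grants only; anything not listed is denied by default.
ACCESS_POLICY = {
    ("clerk_ravi", "sales_db"): {"read"},
    ("sales_director", "sales_db"): {"read", "write"},
}

def access_allowed(subject: str, obj: str, mode: str) -> bool:
    """Default deny: permit only an explicitly granted (subject, object, mode)."""
    return mode in ACCESS_POLICY.get((subject, obj), set())

print(access_allowed("clerk_ravi", "sales_db", "write"))      # False: never granted
print(access_allowed("sales_director", "sales_db", "write"))  # True: owner's grant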
A security policy typically informs employees of their obligations with respect to the employer's assets and information, and gives instructions regarding acceptable (and unacceptable) practices and behavior.
Definition
Security policy is defined broadly as the documentation of computer security decisions. In making these decisions, managers face hard choices involving resource allocation, competing objectives, and organizational strategy related to protecting both technical and information resources as well as guiding employee behavior. Managers at all levels make choices that can result in policy, with the scope of the policy's applicability varying according to the scope of the manager's authority. In this chapter we use the term policy in a broad manner to encompass all these types of policy, regardless of the level of manager who sets the particular policy. Managerial decisions on computer security issues vary greatly.
To differentiate among various kinds of policy, this chapter categorizes them into three basic types: program policy, issue-specific policy and system-specific policy.
Familiarity with various types and components of policy will aid managers in
addressing computer security issues of the organization. Effective policies result in
the development and implementation of an efficient computer security program and a
proper protection of systems and information.
It is important not only to categorize organizational policies into these three categories, but also to focus on their functions.
Procedures, standards, and guidelines are used to describe how these policies will be
implemented within an organization. Further, while policies are approved at the
topmost level, procedures, standards and guidelines are approved at lower levels of
management. What is included in a policy may vary from organization to organization.
It may also be that an item covered in a policy in one organization may be
documented at a procedure or standard level in another organization.
Introduction
Purpose of Security Policy
Security Policy Scope
Security Policy Exemptions
General Security Policy
Maintenance of Policies, Standards, Guidelines and Recommendations
Security Policy: Review, Schedule and Updates
Security Officers: Role and Responsibilities
Auditing and Compliance; State Auditors Role
Program Policy
The senior management of an organization issues program policy to establish (or restructure) its computer security program and its basic structure. This high-level policy defines the purpose and scope of the program, assigns responsibilities (to the computer security organization) for direct program implementation, as well as other responsibilities to related offices, and addresses compliance issues.
Program policy sets organizational strategic directions for security and assigns
resources for its implementation.
Components of Program Policy
The following is a list of the components found in a typical Information security
program policy.
iv. Compliance:
Without a formal, documented information security policy it is not possible for the
management to proceed with the development of enforcement standards and
mechanisms. Program-level policy serves as the basis for enforcement by describing
penalties and disciplinary actions that can result from failure to comply with the
organization's IT security requirements. Discipline commensurate with levels and
types of security infractions should be discussed.
Program policy typically will address two compliance issues:
General compliance to ensure meeting the requirements to establish a program and
the responsibilities assigned therein to various organizational components. Often an
oversight office is assigned responsibility for monitoring compliance, including how
Issue-Specific Policy
Whereas program policy is intended to address the broad organization-wide
computer security program, issue-specific policies focus on areas of current
relevance and concern to an organization.
Management may find it appropriate, for example, to issue a policy on how the
organization will approach contingency planning (centralized vs. decentralized) or the
use of a particular methodology for managing risk to systems. A policy could also be
issued, for example, on the appropriate use of a leading-edge technology, whose
security vulnerabilities are still largely unknown within the organization. Issue-specific policies may also be appropriate when new issues arise, such as implementing a recently passed law requiring additional protection of particular information. Program policy is usually broad enough not to require much modification over time, whereas issue-specific policies are likely to require more frequent revision with changes in technology and related factors.
To continue the previous example, this would mean stating whether the use of unofficial software is prohibited in all or some cases, whether there are further guidelines for approval and use, or whether case-by-case exceptions will be granted, by whom and on what basis.
Applicability: Issue-specific policies also need to include statements of
applicability. This means clarifying where, how, when, to whom, and to what a
particular policy applies. For example, it could be that the hypothetical policy on unofficial software is intended to apply only to the organization's own on-site resources and employees, and not to contractors with offices at other locations. Additionally, the policy's applicability to employees travelling among different sites and/or working at home, who need to transport and use disks at multiple
sites might need to be clarified.
Roles and Responsibilities: The assignment of roles and responsibilities is
also usually included in issue-specific policies. For example, if the policy permits
unofficial software privately owned by employees to be used at work with the
appropriate approvals, then the approval authority granting such permission
would need to be stated. The policy would lay down who, by position, has such
authority. Likewise, it would need to be clarified who would be responsible for
ensuring that only approved software is used on organizational computer
resources and, perhaps, for monitoring users in regard to the use of unofficial
software.
Compliance: For some types of policy, it may be appropriate to describe, in
some detail, the infractions that are unacceptable, and the consequences of such
behavior. Penalties should be explicitly stated and be consistent with
organizational personnel policies and practices. When used, they should be
coordinated with appropriate officials and offices and, perhaps, employee unions.
It may also be desirable to task a specific office within the organization to monitor
compliance.
Points of Contact: For any issue-specific policy, the appropriate individuals in
the organization to contact for further information, guidance, and compliance
should be indicated. Since positions tend to change less often than the people
occupying them, specific positions may be preferable as the point of contact. For
example, for some issues the point of contact might be a line manager; for other
issues it might be a facility manager, technical support person, system
administrator, or security program representative. Using the above example once
more, employees would need to know whether the point of contact for questions
and procedural information would be their immediate superior, a system
administrator, or a computer security official.
vi. Contingency Planning: Contingency Planning means planning for the
emergency actions required in the event of damage, failure, and/or other
disabling events that could occur to systems. Issues that need to be addressed
by policies include determining which systems are most critical and therefore of
highest priority in contingency planning; how the plans will be tested, how often,
and by whom; and who will be responsible for approving plans.
Network Policies
iv. Network Security Monitoring: All internal and external networks must be
constantly monitored, 24x7, by trained security analysts. This monitoring
must involve at least the following activities:
Usage Policies
i. Room Access Based on Job Function: Room access must be restricted based
on employee job function.
ii. Position of Computer Monitors: Computer monitors must be faced away from
windows to discourage eavesdropping.
iii. Badges on Company Premises: All corporate employees on the controlled
premises must display badges with picture identification in plain view.
System-Specific Policy
Program policy and issue-specific policy both address policy from a broad
perspective, usually encompassing the entire organization. However, they do not
provide sufficient information or direction that can be used, for example, for
establishing an access control list or training users on what actions are permitted.
System specific policy fills this need. It is very focused, since it addresses only one
system.
Many security policy decisions may apply only at the system level and vary from
system to system within the same organization. While these decisions may appear to
be too detailed to be policy, they are extremely important, for their significant impact
on system usage and security.
These types of decisions can be made by a management official and not by a technical
system administrator. The impact of these decisions, however, is often analyzed by
technical system administrators. To develop a cohesive and comprehensive set of
security policies, officials may use a management process that derives security rules
from security goals. It is helpful to consider a two-level model for system security policy:
security objectives and operational security rules, which together comprise the system
specific policy. Closely linked and often difficult to distinguish, however, is the
implementation of the policy in technology.
Security Objectives
The first step in the management process is to define security objectives for the
specific system. Although this process may start with an analysis of the need for
integrity, availability, and confidentiality, it should not stop there. A security objective
needs to be specific, concrete and well defined. It also should be stated so that it is
clear that the objective is achievable. This process will also draw upon other
applicable organization policies. Security objectives consist of a series of statements
that describe meaningful actions about explicit resources. These objectives should be
based on system functional or business requirements, but should state the security
actions that support the organizational requirements.
Policy Implementation
Policy implementation is a process. It cannot merely be pronounced by the upper
management as a one-time statement or a directive with high expectations of it being
readily accepted and acted upon. The implementation process actually begins with
the formal issuance of the policy.
Policy Documentation
Once a security policy has been approved and issued, it may be initially publicized
through memorandums, presentations, staff meetings, or any other means, which
may be incorporated into the formal policy documentation. The policy documentation
needs to be updated with feedback from the organization's stakeholders.
This is often true of large organizations performing different activities and having
many levels of management. In such environments, different functional elements may
have widely differing IT systems and needs to accommodate. It is therefore generally
more practical to tailor the policy to meet their needs. This can be accomplished
through the development of documents containing detailed procedures and practices
for specific kinds of systems and activities within the functional elements.
For example, organizations will want to issue policies to decrease the likelihood of
data loss due to technology failures and/or operator errors. A program-level policy will
state something like: "It is the policy of the organization to ensure against data loss due to accidents or mishaps." In an area where extensive writing and editing of
lengthy documents is performed, such as a word processing or technical publications
unit, security documentation might be developed on saving work in-progress much
Policy Visibility
Policies are generally public documents. However, in many cases, parts of the
policies may be kept private. High visibility should be given to the formal issuance of
the information security policy. This is due to a combination of factors, including the
following:
Many new terms, procedures, and activities are introduced. Information security
policy should be provided visibility through management presentations, discussions,
question/answer forums, and newsletters. Including information security as a regular
topic at staff meetings at all levels of the organization can also be a helpful tactic. As
an aspect of providing visibility for security policies, information should also be
included regarding the applicable high level directives and requirements to which the
organization is responding. Educating employees about requirements specified by
various laws and regulations will help emphasize the significance and timeliness of
computer security, and it will help provide a rational basis for the introduction of
information security policies.
Information security policy should be given visibility by using all applicable
documentation. A security policy that is integral to the organization will cover all aspects of daily routines with the associated actions and practices, and will become a natural part of doing business. Ultimately, among the goals of policy are the assimilation of a common body of knowledge and values and the demonstration of appropriate corresponding behavior.
Those goals will be expedited by making the information security policy integral to the
organization through all avenues.
Frequently used technical methods to implement system-security policy are likely to
include the use of logical access controls. But there are other automated means of
enforcing or supporting security policy that typically supplement logical access
controls. For example, technology can be used to block telephone users from calling
certain numbers.
Intrusion detection software can alert system administrators to suspicious activity or take action to stop the activity. Personal computers can be configured to prevent
booting from a floppy disk. Technology based enforcement of system security policy
has both advantages and disadvantages. A computer system, properly designed,
programmed, installed, configured, and maintained, consistently enforces policy
within the computer system, although no computer can force users to follow all
procedures. Management controls also play an important role and should not be
neglected. In addition, deviations from the policy may sometimes be necessary and
appropriate; such deviations may be difficult to implement easily with technical
controls. This situation crops up if the security policy is implemented too rigidly.
Interdependencies
Information Security Policy is related to many other areas:
Program Management: Policy is used to establish an organization's computer security program, and is therefore closely tied to program management and administration. Both program and system-specific policy may be established in any area. For example, an organization may wish to have a consistent approach to incident handling for all its systems and issue appropriate program policy to do so. On the other hand, it may decide that its applications are sufficiently independent of each other and that application managers should deal with incidents on an individual basis.
Access Controls: System-specific policy is often implemented through the use of access controls. For example, it may be a policy decision that only two individuals in an organization are authorized to run a check-printing program. Access controls are used by the system to implement this policy.
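As a toy illustration of that policy decision (the user IDs and function are hypothetical), an access check of this kind might look like:

# Only the two individuals named in the policy may run check printing.
AUTHORIZED_CHECK_PRINTERS = {"treasury_clerk_1", "treasury_officer"}

def run_check_printing(user_id: str) -> None:
    if user_id not in AUTHORIZED_CHECK_PRINTERS:
        raise PermissionError(f"{user_id} is not authorized to print checks")
    print(f"{user_id}: check print run started")

run_check_printing("treasury_officer")   # permitted by the policy
# run_check_printing("intern_01")        # would raise PermissionError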
Other Areas: Information security policy can be related to nearly every topic covered in this course material. This is because all the topics discussed in the material have associated issues that organizations may need to address through policies. The topics most directly related, however, are: security program management and administration; personnel; security training and awareness; contingency planning; and physical and environmental security.
Cost Considerations
A number of potential costs are associated with developing and implementing
computer security policies. Overall, the major cost of policy is the cost of
implementing the policy and its impact upon the organization. Establishing an
information security program, through a properly framed policy, does not come at a
negligible cost.
Other costs may be those incurred through the policy development process.
Numerous administrative and management activities may be required for drafting,
reviewing, coordinating, clearing, disseminating, and publicizing policies. In many
organizations, successful policy implementation may require additional staffing and
training and can take time. In general, the cost to an organization for computer security policy development and its implementation will depend upon how extensive a change is needed to achieve a level of risk acceptable to the management.
Summary
The information asset is the organizational data, while the media and devices for storage, processing and communications constitute the information system assets. Inadequate protection of either the information assets or the information system assets may result in computer crimes.
Information systems resources are required to be classified or categorized according
to their sensitivity. This depends upon the risks affecting such resources and their
impact resulting from the exposure. The classifications help the security
administration to determine which class of information can be accessed by which
class of users and to what degree of access. The scheme of classification may vary
depending on the type of organization. In commercial organizations, data may be
classified as public, sensitive, private or confidential. Once users and resources are
classified, it then becomes possible to specify which users can access what data. The
two broad frameworks available are the mandatory access control and discretionary
access control.
The term information security policy generally refers to important information asset
and computer security related decisions taken by the management. There are three
basic types of policies: Program policy, Issue-specific policies, and system-specific
policies. A computer security program policy is a strategic document used to create
an organization's computer security program. Issue-specific policies, as the name implies, address specific issues of concern to the organization. System-specific
policies focus on decisions taken by the management to protect a particular system.
Since the policy is written at a high level, most organizations also develop standards,
guidelines and procedures that offer users, managers, and others the details
necessary to implement the policy. Policies and other supporting documents are
highly structured documents that have a format that is usually consistent
organization-wide. Policies are ineffective unless implemented. Apart from
documenting the policy, management must undertake a training exercise in order to
inculcate desirable security procedures and practices. Technology plays a significant
role in enforcing policies, but non-technical methods are also effective in many
situations.
Control Techniques:
- A hierarchical security plan covering all levels of access to IS assets.
- Security awareness training.
- Management-level testing and evaluation.
- Corrective actions, incident procedures and emergency plans.
Audit Procedures:
- Review the documentation of the enterprise-wide security policy and discuss the key security management issues with the management and staff.
Control Activity: Awareness of security policies.
Control Techniques:
- Continuous security awareness briefings and training, monitored for all stakeholders of the system.
- Assessment of the impact that a loss of confidentiality, integrity or availability would have on operations involving these assets.
Audit Procedures:
- Observe security briefings and interview data owners, system administrators and users.
- Examine a selected number of users with respect to their security awareness, for example by posing as network staff and attempting to make them reveal their passwords.

Control Activity: Employee hiring, transfer, termination and performance policies addressing security.
Control Techniques:
- Periodic investigation of the regulations implemented within the purview of authorization to information assets within the organization.
- Appropriate transfer and termination procedures, including the return of property and keys.
Audit Procedures:
- Inspect investigation policies with respect to sensitive positions and confidentiality or security agreements.

Control Activity: Information security weaknesses are identified and corrective action is taken.
Control Techniques:
- Management initiates prompt action to correct weaknesses and documents action plans and milestones.
Check points
5. Whether the policy takes into account the business strategy for the next 3 - 5 years.
Questions
1. The information resources like paper documents, computer data, faxes and
letters are business information assets which are to be protected. The first major
step/process in protecting these assets is ..
a. information policy
b. information stocking
c. information controlling
d. information classification
2. Standard scheme of classification of data in a commercial organization is..
a. Private, Secret, Confidential and Unclassified
b. Public, Sensitive, Private and Confidential
c. Top Secret, Secret, Public and Unclassified
d. Public, Secret, Internet and Intranet
3. When data is stored/displayed/exchanged through any medium in a structured manner and indexed by individual identifiers, it is called a .. system.
a. Relevant
b. Filed
c. Filing
d. Structured
4. Quality Assurance, Retention, Documentation, individual rights and fair process
of data along with data security come within the scope of data ..
a. Process Audit
b. Compliance Audit
c. System Audit
d. Tax Audit
5. The rights of access to information authorized to users on the need to know and
need to do basis is called the principle of ..
a. Need privileges
b. Full privileges
c. Least process
d. Least privileges
6. The personnel within the organization delegated with the responsibility of
administration and protection of data resources are called ..
a. Data protectors
b. Data custodians
c. Data committers
d. Data owners
13. Copyright notice, encryption of data backups, customer information sharing and
e-mail monitoring are components of the issue specific .policy.
a. Data integrity
b. Data storage
c. Data privacy
d. Data exchange
14. ..is a component of the identity management security policy that
sets a maximum limit of 24 months validity for a user account.
a. Login account lifetime
b. Employee access account
c. Access Account lifetime
d. Employee Account lifetime
15. Extranet connection access control, restricted communication ports and blocking
of unauthorized internet access are components of the ... security policy.
a. Personnel policy
b. Logic policy
c. Network policy
d. System policy
16. Prevention of unauthorized access on a network, Intrusion response, penetration
blocking are activities of .
a. Intrusion-Detection Monitoring
b. Firewall Monitoring
c. Signal Monitoring
d. Port Monitoring
17. The process of documenting policies usually requires updating the and
also creating ..
a. Existing documentation, New documentation
b. Existing programs, New programs
c. Existing personnel, Old personnel
d. Networks documentation, Communication documentation
18. The control activity of observing security briefings along with interview of data
owners, system administrators and users is an audit procedure to check
the... of the security policy.
a. Documentation
b. Risk Assessment
c. Awareness
d. Responsibilities
Answers :
1. D
2. B
3. C
4. B
5. D
6. B
7. C
8. B
9. C
10. D
11. A
12. B
13. C
14. D
15. C
16. B
17. A
18. C
19. A
20. D
Module III
System Development
Life Cycle & Application
Systems
1 Business Application
Development Framework
Learning Goals
A clear understanding of:
System
A system is an interrelated set of elements that function as an integrated whole. The concept of an integrated whole can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not part of the relational regime. Thus, a system is composed of parts and subparts in an orderly arrangement according to some scheme or plan. For example, the human body is a system, consisting of various parts such as the head, heart, hands, legs and so on. The various body parts are related by means of connecting networks of blood vessels and nerves, and the system has a main goal of living.
A business is also a system where economic resources such as people, money, material, machines, etc. are transformed by various organizational processes (such as production, marketing, finance, etc.) into goods and services. A computer-based
information system is also a system which is a collection of people, hardware,
software, data and procedures that interact to provide timely information to authorized
people who need it. Each of these can be further divided into its sub-systems and this
process can go on till we decide to stop with respect to our study context. Just like
living animals, business systems also have a life span. After this life span is over, the
systems will have to be retired and be replaced with new systems. Some of the
reasons for systems having a life span are listed below:
Technology may become outdated e.g. writing instruments evolved from ink pen
to ball point pen to gel pen and so on.
People using the system may change e.g. new generation people may not be
exposed to old technology and therefore systems will have to undergo change.
Government or other regulatory change may render the systems obsolete.
Business needs are expanded due to expansion of business, mergers, takeovers etc.
Characteristics of System
A system has mainly nine characteristics (as shown in Fig. 1.1), which are given as
follows:
1. Components: A system consists of several components. A component is either
an irreducible part or an aggregate of parts, also called a subsystem. For
example: in any automobile system, we can repair or upgrade the system by
changing individual components without changing the entire system.
Systems can be classified according to their: elements; interactive behavior; degree of human intervention; and working/output.
Examples of physical systems include the circulatory system, transportation systems, weapons systems, school systems and computer systems.
i. Open System: A system that interacts freely with its environment by taking input and returning output is termed an open system. With a change of environment, an open system also changes to match itself with the environment. For example, the education system or any business process system will quickly change when the environment changes. To do this, an open system will interact with elements that exist and exert influence from outside the boundary of the system. Information systems are open systems because they accept inputs from the environment and send outputs to the environment. Also, with changes in environmental conditions, they adapt themselves to match the changes.
ii. Closed System: A system that neither interacts with the environment nor changes with a change in the environment is termed a closed system. Such systems are insulated from the environment and are not affected by changes in it. Closed systems are rare in the business area but are often found in the physical systems that we use in our day-to-day work. For example, consider a throw-away type sealed digital watch, which is a system composed of a number of components that work in a cooperative fashion to perform some specific task. This watch is a closed system, as it is completely isolated from its environment for its operation. Such closed systems will finally run down or become disorganized. This movement to disorder is termed an increase in entropy.
Organizations are considered to be relatively open systems. They continuously interact with the external environment through the processing or transformation of inputs into useful outputs. However, an organization behaves as a relatively closed system in certain respects, so as to preserve its identity and autonomy. It may ignore many opportunities so as to maintain its core competence. Organizations are open systems because they are input-output systems. The input consists of finance, physical and mental labor, and raw material.
Organizations perform several operations on these inputs and process out
products or services. The process of exchange generates some surplus, in the
form of profit, goodwill experience and so on, which can be retained in the
organization and can be used for further input output process. Organizations are
dependent upon their external environment for the inputs required by them and
for disposing of their outputs in a mutually beneficial manner.
According to Degree of Human Intervention
Computers made it possible to carry out processing which would have been either too difficult or too time-consuming, or even impossible, to do manually. A system may even be 100% manual. In earlier days, all accounting procedures and transactions, production details and sales data used to be maintained in different ledgers created by human effort alone, and all these were manual systems. With the introduction of computers and growing complexity in business procedures, these manual jobs were passed to computers in a major way, and a business system now inherently involves close man-machine interaction. The reasons for using computers in the business area are as follows:
According to Working/Output
Project Initiation
Whenever a business entity (i.e. the stakeholders in the business or senior management) decides to undertake computerization, a project will have to be initiated. This process is called Project Initiation. Some examples of a formal Project Initiation are given as follows:
During project initiation, the project manager performs several activities that assess
the size, scope, and complexity of the project, and establishes procedures to support
subsequent activities. Depending on the project, some initiation activities may be
unnecessary and some may be very useful. The major activities to be performed in
the project initiation are as under:
The main outcome of Project Initiation is a formal Project Initiation Report, which is presented to senior management or the Board of Directors. This will be accepted with or without modifications, and then the next phases of the SDLC will be rolled out. In the case of SMEs or very small organizations, a formal Project Initiation Report may not be prepared; instead, a one- or two-page Concept Note or an e-mail circular may be issued by the stakeholders of the business. Having a Project Initiation (formal or informal) helps to identify the organization's objective in undertaking computerization and forms an initial document. If this report is circulated to all concerned personnel, it will also help in making people aware of the project.
Need for Structured Systems Development Methodology
The following are the basic reasons for the need of Structured Systems Development
Methodology:
This gives rise to the application of Project Management techniques and tools to the entire software development process, which is discussed in Chapter 4 of this module.
The Systems/Software Development Life Cycle (SDLC) is a common methodology for systems development in many organizations. This life cycle approach involves defined phases and is an incremental process of moving to the next phase in building and operating business application systems. The phases may be undertaken in a serial manner (i.e. one after the other) or in a parallel manner. This constitutes a model of the SDLC.
Approaches to Systems Development
Since organizations vary significantly in the way they automate their business
procedures, and since each new type of system usually differs from any other,
several different system development approaches are often used within an organization. These approaches are not mutually exclusive, which means that it is possible to perform some prototyping while applying the traditional approach.
These Systems Development Models are discussed in chapter 3 of the same module.
Risks associated with SDLC
The SDLC framework provides system designers and developers with a sequence of activities to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one. The SDLC is document driven. This means that at crucial stages during the process, documentation is produced. A phase of the SDLC is not complete until the appropriate documentation or artifact is produced. These are sometimes referred to as deliverables. A deliverable may be a substantial written document, a software artifact, a system test plan or even a physical object such as a new piece of technology that has been ordered and delivered. This feature of the SDLC is critical to the successful management of an IS project.
The SDLC can also be viewed from a more process-oriented perspective. This emphasizes the parallel nature of some of the activities and presents activities such as system maintenance as an alternative to a complete re-design of an existing system. Managing the risks involved requires:
Identifying risks
Discovering methods to eliminate or mitigate them
Accepting the residual risk and going ahead with the project
463
Module - III
Some of the SDLC risks are enlisted below:
A well-structured methodology for the development of software has the following distinct processes:
Systems Design: The analyst designs a system that will meet the requirements identified above. The analyst designs various reports/outputs, data entry procedures, inputs, files and database. He also selects file structures and data storage devices. These detailed design specifications are then passed on to the programming staff so that software development can begin.
Programming/Development/Construction: After the system design details are resolved, resource needs such as specific types of hardware, software and services are determined. Subsequently, choices are made regarding which products to buy or lease from which vendors. Software developers may install (or modify and then install) purchased software, or they may write new, custom-designed programs. The choice depends on many factors such as time, cost and availability of programmers. The analyst works closely with the programmers if the software is to be developed in-house. During this phase, the analyst also works with users to develop worthwhile documentation for the software, including various procedure manuals.
Systems Testing: Before the information system can be used, it must be tested. Systems testing is done experimentally to ensure that the software does not fail, i.e. that it will run according to its specifications and in the way users expect. Special test data are input for processing and the results are examined. If found satisfactory, the software is eventually tested with actual data from the current system.
Implementation: After the system is found fit, it is implemented with actual data. Hardware is installed, users are trained on the new system, and eventually work on it is carried out independently. The results of the development effort are reviewed to ensure that the new system satisfies user requirements.
Post-Implementation Maintenance: After implementation, the system is maintained; it is modified to adapt to changing user and business needs so that the system remains useful to the organization for as long as possible.
Phase Name, Deliverable/s and Activities undertaken:
1. Preliminary Investigation / Systems Planning / Feasibility Study
Deliverable/s: Feasibility Study Report
Activities: Determining the strategic benefits of the system; preparing a priority list of systems to be taken up for computerization (unless it is ERP); cost-benefit analysis of the proposed system.
2. Requirements Analysis
Deliverable/s: Requirements Analysis Report
3. Systems Design
Deliverable/s: Systems Design Report
Activities: Designing the user interface (screens and dialogue boxes), database and table designs; depiction of business processes and business rules (validations etc.) through various graphical representations such as system flow charts, data flow diagrams, screen and report layouts etc.
4. Programming / Development / Construction
Deliverable/s: Source programs and executable programs
Activities: Sometimes programming is called Construction, as the programmer is constructing the system by collecting and joining various pre-developed components.
5. Testing
6. Implementation
7. Post-Implementation Maintenance
Deliverable/s: Evaluation and improvement of the system through users' feed-back; help desk support for users
Activities: Solving day-to-day problems of users through hand-holding and help-desk support; improving operational system performance; modifications and changes to the system as per feed-back from users.
Benefits of a proposed system typically include:
Productivity gains,
Future cost avoidance,
Cost savings, and
Intangible benefits like improvement in the morale of employees.
Broadly, the first phase of SDLC involves the following steps, each of which is discussed below:
Identification of Problem
Identification of Objective
Delineation of Scope
Feasibility Study
Identification of Problem
Changes in business environments and evolving information technology may render systems ineffective or inefficient. Reviews can be conducted to determine whether the organization should adopt new information technology.
Whatever the reason, managers and users may feel compelled to submit a request for a new system to the IS department. If the need seems genuine, a systems analyst is assigned to make a preliminary investigation. It is advisable for all proposals to be submitted to the steering committee for evaluation, to identify those projects that are most beneficial to the organization. A preliminary investigation is then carried out by a systems analyst working under the direction of the steering committee.
Thus it can be concluded that the purpose of the preliminary investigation is to evaluate the project request. It is neither a design study, nor does it include the collection of details to completely describe the business system. Rather, it involves the collection of information that permits committee members to evaluate the merits of the project request and make an informed judgement about the feasibility of the proposed project. The preliminary investigation should accomplish the following objectives:
Clarify and understand the project request: What is presently being done? What is required and why? Is there an underlying reason different from the one the user has identified?
Determine the size of the project: Does the request call for new development or for modification of the existing system? The investigation to answer this question will also gather details useful in estimating the amount of time and number of people required to develop the project.
Determine the technical and operational feasibility of alternative approaches.
Assess costs and benefits of alternative approaches: What is the estimated cost of developing a particular system? Will the proposed system reduce operating costs? Will the proposed system provide better services to customers?
Report findings to management, with a recommendation outlining the acceptance or rejection of the proposal.
Identification of Objective
After the problem has been identified, it is easy to work out the objectives of the proposed solution. For instance, the inability to provide a convenient reservation system for a large number of intending passengers was the problem of the Railways; so its objective was to introduce a system wherein intending passengers could book a ticket from source to destination, in real time.
Delineation of Scope
The scope of a solution defines its boundaries. It should be clear and comprehensible to the user management, stating what will be addressed by the solution and what will not. Often the scope becomes a contentious issue between development and user organizations; hence, outlining the scope at the beginning is essential. Among the questions that should be answered while stating the scope are the following:
Interfaces: Is there any special hardware/software that the application has to interface with? For example, a payroll application may have to capture data from the attendance monitoring system that the company has already installed. The solution developer then has to understand the format of the data, the frequency and mode of data transfer, and other aspects of that software.
Reliability requirements: Reliability of an application is measured by its ability to remain uncorrupted in the face of inadvertent/deliberate misuse. The reliability required for an application depends on its criticality and the user profile. For instance, an ATM application should protect the dataset against any misuse.
While eliciting information to delineate the scope, a few aspects need to be kept in mind:
Different users will represent the problem and the required solution in different ways. The system developer should elicit the need from the initiator of the project (alternatively called the champion or executive sponsor of the project); addressing his/her concerns should be the basis of the scope.
While the initiator of the project may be a member of the senior management, the actual users may be from the operating levels of the organization. An understanding of their profile helps in designing appropriate user interface features.
While presenting the proposed solution for a problem, the development organization has to clearly quantify the economic benefits to the user organization. The information required has to be gathered at this stage. For example, when proposing a system for road tax collection, data on the extent of collection and defaults is required to quantify the benefits that will accrue to the Transport Department.
It is also necessary to understand the impact of the solution on the organization: its structure, roles and responsibilities. Solutions that have a wide impact are likely to meet with greater resistance. ERP implementation in organizations is a classic example of the need for change management; organizations that have not been able to handle this have had a very poor ERP implementation record, with disastrous consequences.
While economic benefit is a critical consideration when deciding on a solution, several other factors have to be given weightage too. These factors have to be considered from the perspective of the user management and resolved. For example, in a security system, how foolproof it is may be as critical a factor as the economic benefits that it entails.
Feasibility Study
After possible solution options are identified, project feasibility (the likelihood that these systems will be useful to the organization) is determined. A feasibility study is carried out by the systems analysts for this purpose. A feasibility study refers to a process of evaluating alternative systems through cost/benefit analysis so that the most feasible and desirable system can be selected for development. The feasibility of a system is assessed from three major angles: technical, economic and operational. The proposed system is evaluated from a technical viewpoint first; if technically feasible, its impact on the organization and staff is assessed. If a compatible technical and social system can be devised, it is then tested for economic feasibility.
Dimensions of Feasibility
For computerization projects, a Feasibility Study will cover several aspects of the project, including the technical, financial, economic, operational, behavioural and legal dimensions discussed below.
Technical Feasibility: This is concerned with hardware and software. Essentially, the analyst ascertains whether the proposed system is feasible with existing or expected computer hardware and software technology. Some of the technical issues usually raised during the feasibility stage of the investigation are given in Table 1.3:
Design Consideration: Design Alternatives
Communications channel configuration: Point-to-point, multidrop, or line sharing
Communications channels: Telephone lines, coaxial cable, fiber optics, microwave, or satellite
Communications network: Centralized, decentralized, distributed, or local area
Computer programs: Independent vendor or in-house
Data storage medium: Tape, floppy disk, hard disk, or hard copy
Data storage structure: Files or database
File organization and access: Direct access or sequential files
Input medium: Keying, OCR, MICR, POS, EDI, or voice recognition
Operations: In-house or outsourcing
Output frequency: Instantaneous, hourly, daily, weekly, or monthly
Output medium: CRT, hard copy, voice, or turnaround document
Output scheduling: Predetermined times or on demand
Output forms: Preprinted forms or system-generated forms
Processor: Micro, mini, or mainframe
Processing mode: Batch or online
Update frequency: Instantaneous, hourly, daily, weekly, or monthly
Table 1.3: Technical Issues
Financial Feasibility: The solution proposed may be prohibitively costly for the user organization. For example, monitoring stock through a VSAT network connecting multiple locations may be acceptable for an organization with a high turnover, but may not be a viable solution for smaller ones.
Economic Feasibility: Also known as Cost-Benefit Analysis, this includes an evaluation of all the incremental costs and benefits expected if the proposed system is implemented. After problems or opportunities are identified, the analysts must determine the scale of response needed to meet the user's requests for a new system, as well as the approximate amount of time and money that will be required in the effort. The analysts then determine how much management is willing to spend or change. Possible solutions are then examined in the light of the findings. Because of the myriad possibilities involved in most business situations, every problem is different and may require a solution different from that used in the past. Thus common sense and intuition are key ingredients in the solution development process, and this is considered the most difficult aspect of the study. The financial and economic questions raised by analysts during the preliminary investigation are for the purpose of estimating costs and benefits.
Estimating costs and benefits: After possible solution options are identified, the analyst should make a primary estimate of each solution's costs and benefits.
Cost: System costs can be subdivided into development, operational and intangible costs.
Development costs for a computer-based information system include the costs of the system development process, such as:
i. Salaries of the systems analysts and computer programmers who design and program the system,
ii. Cost of converting and preparing data files and preparing the systems manual and other supportive documents,
iii. Cost of preparing new or expanded computer facilities,
iv. Cost of testing and documenting the system, training employees, and other start-up costs.
Operating costs of a computer-based information system include:
i. Hardware/software rental or depreciation charges,
ii. Salaries of computer operators and other data processing personnel who will operate the new system,
iii. Salaries of systems analysts and computer programmers who perform the system maintenance function,
iv. Cost of input data preparation and control,
v. Cost of data processing supplies,
vi. Cost of maintaining proper physical facilities (including power, light, heat, air conditioning, building rental or other facility charges, and equipment and building maintenance charges) and overhead charges of the business firm.
Intangible costs are costs that cannot be easily measured. For example, the development of a new system may disrupt the activities of an organization and cause a loss of employee productivity or morale. Customer sales and goodwill may be lost due to errors made during the installation of a new system. Such costs are difficult to measure in rupees but are directly related to the introduction and operation of the information system.
Benefits: The benefits which result from developing new or improved information
systems that utilize EDP can be subdivided into tangible and intangible benefits.
Tangible benefits are those that can be accurately measured and are directly related
to the introduction of a new system, such as decrease in data processing cost.
Intangible benefits such as improved business image are harder to measure and
define. Benefits that can result from the development of a computerized system are
summarized below:
1. Increase in sales or profits (improvement in product or service quality).
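To make the cost-benefit arithmetic concrete, the following is a minimal sketch that computes the net annual benefit and a simple payback period; all figures and names are illustrative assumptions, not estimates from this material:

# A minimal cost-benefit sketch for economic feasibility (illustrative figures only).

def payback_period(development_cost, annual_operating_cost, annual_tangible_benefit):
    """Return the simple payback period in years: the time taken for net
    annual benefits to recover the one-time development cost."""
    net_annual_benefit = annual_tangible_benefit - annual_operating_cost
    if net_annual_benefit <= 0:
        raise ValueError("The system never recovers its cost on these estimates")
    return development_cost / net_annual_benefit

# Hypothetical estimates (in rupees) for a proposed system:
dev_cost = 2_500_000   # analysts' and programmers' salaries, conversion, testing
op_cost = 400_000      # operators, supplies, facilities, per year
benefit = 1_150_000    # cost savings and productivity gains, per year

print(f"Payback period: {payback_period(dev_cost, op_cost, benefit):.1f} years")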
Operational Feasibility: This is concerned with ascertaining the views of workers, employees, customers and suppliers about the use of the computer facility. The support, or lack of support, that the firm's employees are likely to give the system is a critical aspect of feasibility. A system can be highly feasible in all other respects and yet fail miserably because of human problems. Some of the questions that help in testing the operational feasibility of a project are stated below:
Is there sufficient support for the system from management and from users? If the current system is well liked and used to the extent that people will not see reasons for a change, there may be resistance.
Are current business methods acceptable to users? If they are not, users may welcome a change that will bring about a more operational and useful system.
Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system and to change in general, and increases the likelihood of a successful project.
Will the proposed system cause harm? Will it produce poorer results in any respect or area? Will loss of control result in any area? Will accessibility of information be lost? Will individual performance be poorer after implementation than before? Will performance be affected in an undesirable way? Will the system slow performance in any areas? Will it work when installed?
This analysis may involve a subjective assessment of the political and managerial environment in which the system will be implemented. In general, the greater the requirement for change in the user environment in which the system will be installed, the greater the risk of implementation failure.
Behavioural Feasibility: Systems are designed to process data and produce the desired outputs. However, if the data input for the system is not readily available or collectable, the system may not be successful. This factor too must be considered.
Legal Feasibility: Legal feasibility is largely concerned with whether there will be any conflict between a newly proposed system and the organization's legal obligations. Any system that violates the local legal requirements should be rejected. For example, a revised system should comply with all applicable federal and state statutes about financial reporting requirements, as well as with the company's contractual obligations.
The above technique can be used both for eliciting user requirements and for giving the design of the system. This is somewhat similar to a building architect who prepares a blueprint to show to and obtain approval from users (the builder, government authorities etc.), as well as using it as a design to explain to the people who are going to build the structure.
A DFD is composed of four basic elements: process, source or sink of data, data flow (an arrow indicating the direction of flow) and data store. Each is represented on a DFD by one of the symbols shown in Table 1.4.
Symbol Name: Explanation
Data Sources and Destinations: The people and organizations that send data to and receive data from the system are represented by square boxes. Data destinations are also referred to as data sinks.
Data Flows: The flow of data into or out of a process is represented by arrows.
Transformation Process: The processes that transform data from inputs into outputs are represented by circles (bubbles).
Data Stores: The storage of data is represented by two horizontal parallel lines.
Table 1.4: DFD symbols
Figure: A sample DFD for recruitment. The process "Comparing application with vacant posts" receives the data flow Application (DF3) from the source Applicant, reads the data store Vacant posts (DF1), writes accepted applications to a data store, sends the data flow Interview notification (DF2) to the Deptt. and the Applicant, and returns rejected applications to the Applicant. The legend distinguishes Process, Source/Sink, Flow and Data store (file).
An E-R diagram gives the structure of, and relationships between, entities, as shown in Fig. 1.4. For example:
Employee --(Is Assigned)-- Parking Place (one-to-one)
Fig. 1.4: ER Diagram
In the above diagram, employee (a person) is represented as one entity and parking place (a place) as another. They are related to each other in a one-to-one relationship (the diamond indicates the relationship). Since there are two entities, this is called a binary one-to-one relationship. Similarly, we can have unary and ternary relationships between entities. For the entities described above, their structure is also given; e.g. in the above case, employee will have a structure (also called attributes, fields or columns) such as Employee Identification, Employee Name, Designation and so on. Parking place can have attributes such as Parking Place ID, description, area, status indicator, type of vehicle to be parked etc. Any further discussion is beyond the scope of this material.
3. Data Dictionaries
In an organization, there are many hundreds or thousands of data items which are used across various systems. It becomes humanly impossible to keep track of the different data items, their structures and relationships. Data dictionaries are nothing but repositories of the data items handled in an organization or in a system. Through these data dictionaries, developers can find out what data is being used in which system and whether it has already been defined in some other system. If so, the developer can simply use that data structure instead of defining it afresh.
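As a minimal illustration (the structure and field names below are assumptions, not a prescribed format), a data dictionary can be thought of as a lookup table keyed by data item name:

# A toy data dictionary: each entry records where a data item is defined
# and its structure, so developers can reuse rather than redefine it.
data_dictionary = {
    "employee_id": {"type": "Numeric", "length": 6, "defined_in": "HR system"},
    "parking_place_id": {"type": "Text", "length": 8, "defined_in": "Facilities system"},
}

def lookup(item_name):
    """Return an existing definition, if any, before creating a new data item."""
    return data_dictionary.get(item_name)

print(lookup("employee_id"))  # reuse the HR system's definition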
So now we have the flow of data and the structure and relationships of data. But how do we process the data? What is the logic of processing? E.g., in the above example, how is a parking place allotted to a new employee? Or what is to be done if all parking places are full and persons are still awaiting parking? For this missing logic description, we resort to one of the following: a decision table, a decision tree, or structured English (pseudocode).
Conditions / Courses of Action (Rules 1 to 4):
Employee Type: 1) Salaried, 2) Hourly, 3) Hourly, 4) Hourly
Hours Worked: 1) any, 2) < 40, 3) = 40, 4) > 40
Pay base salary: Rule 1
Calculate hourly wage: Rules 2, 3 and 4
Calculate overtime: Rule 4
Produce absence report: Rule 2
Table 1.5: Decision Table
The same business rule can be depicted in the form of a Decision Tree instead of a Decision Table, as given in Fig. 1.5:
Decision Tree
A decision tree (or tree diagram) is a support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs and utility.
Fig. 1.5: Decision Tree. Node 1 (Type of employee) branches into Salaried and Hourly; for Hourly employees, node 2 (Hours worked) branches into < 40, = 40 and > 40, each leading to the corresponding course of action. Legend: 1 = Type of employee, 2 = Hours worked.
Structured English (pseudocode) expresses processing logic using a restricted set of keywords. Examples of keywords that may be used:
START, BEGIN, END, STOP, DO, WHILE, DO WHILE, FOR, UNTIL, DO UNTIL,
REPEAT, END WHILE, END UNTIL, END REPEAT, IF, IF THEN, ELSE, IF
ELSE, END IF, THEN, ELSE THEN, ELSE IF, SO, CASE, EQUAL, LT, LE, GT,
GE, NOT, TRUE, FALSE, AND, OR, XOR, GET, WRITE, PUT, UPDATE, CLOSE,
OPEN, CREATE, DELETE, EXIT, FILE, READ, EOF, EOT
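To tie these tools together, here is a minimal sketch (an illustrative assumption, not part of the original material) implementing the payroll rule of Table 1.5/Fig. 1.5; the IF/ELSE structure mirrors how the same logic reads in structured English. The 1.5x overtime multiplier and the absence report for hourly staff under 40 hours are assumptions:

# Payroll rule from the decision table: salaried staff get base salary;
# hourly staff get an hourly wage, overtime above 40 hours, and an
# absence report if they worked under 40 hours.
def process_pay(employee_type, hours_worked, base_salary=0, hourly_rate=0):
    actions = []
    if employee_type == "Salaried":
        actions.append(("pay_base_salary", base_salary))
    else:  # Hourly
        actions.append(("pay_hourly_wage", hourly_rate * min(hours_worked, 40)))
        if hours_worked > 40:
            actions.append(("pay_overtime", hourly_rate * 1.5 * (hours_worked - 40)))
        elif hours_worked < 40:
            actions.append(("produce_absence_report", None))
    return actions

print(process_pay("Hourly", 45, hourly_rate=100))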
State-transition Diagrams: These depict the various states an entity passes through and the transitions between them. For example, an ATM machine passes through the following states:
Idling state: The ATM machine is idling, as no user is using it; the program residing in the ATM machine's memory is waiting for a user to come and insert an ATM card.
Card acceptance state: As soon as a user inserts a card, a program module is activated and the machine draws the card in.
Card validation state: The ATM machine now transits to card validation, wherein the program reads the card number and validates it against the database for existence.
Denying state: If the card is found invalid, the machine goes into the denying state.
And so on.
A correct state-transition diagram will link program modules to the states of an entity and the actions taken in each.
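A state machine like this can be sketched as a transition table; the following minimal example (event and state names are illustrative assumptions) walks the ATM through the states described above:

# ATM states and the events that move between them, as a transition table.
transitions = {
    ("idling", "card_inserted"): "card_acceptance",
    ("card_acceptance", "card_read"): "card_validation",
    ("card_validation", "card_valid"): "pin_entry",    # assumed next state
    ("card_validation", "card_invalid"): "denying",
}

state = "idling"
for event in ["card_inserted", "card_read", "card_invalid"]:
    state = transitions[(state, event)]
    print(f"after {event}: {state}")   # ends in the denying state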
7. Screen layouts
Developers design screen and dialogue box layouts with the help of computer software. These can be shown to users for their approval, modification and endorsement. Generally, developers set design standards for these, such as background and foreground colour schemes for screens, positions on the screens, colours for data items, graphics etc.
8. Report layouts
Report layouts give the output of processed data, i.e. information. This is the element of the system that has a direct impact on users, and the developer's success largely depends upon the way in which he presents the reports (either on-screen or printed). Reports can be final reports for user consumption or intermediate reports; they can be for internal use or for circulation to outsiders. Several important considerations should be taken into account for the layout and contents of reports.
Software Acquisition
Software Acquisition is a very important activity, even though it is not considered a phase of SDLC. The following steps are relevant in the acquisition of software:
1. The feasibility study in relation to the software acquisition should contain documentation that supports the decision to acquire the software.
2. The decision is based upon various factors like the cost difference between development and acquisition, the ready availability of the required software in the market, the time saved by acquiring rather than developing, etc.
3. A project team comprising technical support staff and key users should be created to prepare a Request for Proposal (RFP).
4. The project team should carefully compare and examine the various responses of the vendors to the RFP, prepare a gap analysis, and shortlist the vendors on the basis of such evaluation.
5. The project team should invite the shortlisted vendors to give a presentation of the product and the processes. The users' participation and feedback must be ensured in such presentations. The project team may also visit some of the existing customers to see a live demonstration of the software. The discussion with the customers should focus on aspects such as the vendor's commitment to user training and system documentation.
6. Once the final vendor is identified, the vendor should be invited to present a pilot session, so as to enable the project team to understand the system, gain hands-on experience and suggest areas of customization.
7. On the basis of the above activities, the vendors' presentations and the final evaluation, the project team can make the final selection of the product.
8. The last step in the acquisition process is the negotiation and signing of a legal contract with the vendor, which should clearly enumerate all the agreed deliverables, terms and conditions.
9. Management and control of the implementation of the system is required, with regular status reports.
10. The IS auditor's job in the software acquisition process would be to determine whether an adequate level of security controls has been considered prior to making an agreement. This is required to ensure the data integrity of the information processed and controls like audit trails, password controls and the overall security of the application. The above procedures should be more elaborate and systematic where the business is implementing ERP systems that give fully integrated corporate solutions, like SAP, Oracle Financials, BAAN, PeopleSoft, JD Edwards etc.
The following Table 1.6 lists certain parameters for preparing an RFP:
Product v/s system requirements: (1) Comparison between the required system and the vendor-developed product, including the volume of transactions that the software can handle and the database size; (2) acceptance or otherwise from the users of the product.
Vendor support
Response time
Source code availability
Vendor's experience
A list of recent or planned enhancements to the product, with dates: This will evidence the vendor's effort to keep the product current.
List of current customers: The more the number of customers, the greater the acceptability of the product.
Acceptance testing of the product: The vendor should allow acceptance testing by the users, to ensure that the product satisfies the system requirements of the business. This should be done before the purchase commitment.
Table 1.6: Parameters for preparing RFP
Roles involved in SDLC
There are certain standard functions (not designations) during the development process. These roles may be combined, especially in small organizations, and may be performed by the same individual. In such cases, the IS Auditor has to remember to evaluate conflicts between the roles (segregation of duties), as detailed in Module II. The roles are enlisted here in brief:
Steering Committee
Wherever large-scale application development is undertaken, a steering committee is set up to supervise and direct the projects. This committee provides funding and overall direction to the projects.
Project Manager
A project manager is normally responsible for more than one project and liaises with the client or the affected functions. This is a middle management function, and he is responsible for delivery of the project within time and budget.
Systems Analyst
The systems analyst is also referred to as a business analyst. His main responsibility is to conduct interviews with users and understand their requirements. He is a link between the users and the programmers, and converts the users' requirements into system requirements. He plays a pivotal role in the Requirements Analysis and Design phases.
Module Leader/Team Leader
A project is divided into several manageable modules, and the development responsibility for each module is assigned to a module leader.
Programmers
Programmers are the masons of the software industry. They convert the design into working programs by writing code.
IS Auditor should consider the following development approaches influencing SDLC:
i. In-house design and development of the system (using internal resources)
ii. Design and development of the system using fully or partly outsourced resources, located onsite or offsite
iii. Off-the-shelf packages implemented as-is, without any customization
iv. Off-the-shelf packages implemented with some customization
At times, large complex applications may involve a combination of the above.
IS Auditor should be aware of the implications of the following risks while auditing SDLC:
i. Adoption of an inappropriate SDLC for the application system
ii. Inadequate controls in the SDLC process
iii. User requirements and objectives not being met by the application system
iv. Lack of involvement of all the stakeholders
v. Lack of management support
vi. Inadequate project management
vii. Inappropriate technology and architecture
viii. Change in scope
ix. Time over-runs
x. Budget over-runs
xi. Insufficient attention to security and controls
xii. Performance criteria not being met
xiii. Inappropriate resourcing / staffing model
xiv. Incomplete documentation
xv. Inadequate contractual protection
xvi. Inadequate adherence to the development methodologies
xvii. Insufficient attention to interdependencies on other applications and processes
xviii. Inadequate configuration management
xix. Insufficient planning for data conversion/migration and cutover
SDLC Audit Scope
The IS auditor should consider the relevant scenarios while finalizing the SDLC audit scope, and review the relevant SDLC stages.
Auditing SDLC
The IS auditor should consider the following aspects while auditing and evaluating SDLC phases:
i. Project charter
ii. Roles and responsibilities of different groups / committees (e.g. Project Steering Committee)
iii. Adopted project management methodology
iv. Application development methodology / model
v. Contractual terms with the vendors for purchased applications (e.g. Service Level Agreements, SLAs)
vi. Contractual terms with the vendors for outsourced services (e.g. Service Level Agreements, SLAs)
vii. Approvals and sign-offs by the Project Steering Committee for each SDLC stage
viii. Deliverables of each SDLC stage
ix. Minutes of relevant meetings
x. Project tracking and reporting documentation
xi. Resource management
xii. Ongoing risk management
xiii. Quality control / assurance
xiv. Change management
xv. Data conversion/migration
xvi. Application testing documentation
xvii. Relevant legal, regulatory and policy aspects to be complied with, if any
Master Checklist
Checklist for Auditing Entity-Level Controls
S. No. / Checkpoint / Status
1. Whether a review is done of the overall IT organization structure, to ensure that it provides for clear assignment of authority and responsibility over IT operations, and that it provides for adequate segregation of duties?
2. Is there any review and evaluation of the process for ensuring that IT employees at the company have the skills and knowledge necessary for performing their jobs?
3. Is there any review and evaluation of the process for ensuring that end users of the IT environment have the ability to report problems, have appropriate involvement in IT decisions, and are satisfied with the services provided by IT?
Summary
In this chapter we have learned the following major aspects:
The meaning of a system,
The characteristics of a system, and how each system has a life cycle after which it needs to be replaced,
That a project should be initiated through a Project Initiation Report whenever a major business change in respect of IT is undertaken,
The need for adopting a Structured Methodology for IT systems development projects, due to the people-oriented nature of these projects; Structured Methodology involves planning and organizational control over the entire project life cycle,
The risks associated with adopting a structured methodology, and the importance, from an IS audit perspective, of adopting it,
The distinct activities undertaken in the structured methodology, which is nothing but SDLC,
All the SDLC phases and the activities undertaken in each phase,
A description of two phases, viz. Feasibility Study and Requirements Analysis; the Feasibility Study covers various feasibility aspects of the project such as economic, technical, legal, time etc.,
That requirements analysis is carried out to know the requirements of various users at all levels; business systems can be looked upon as a collection of processes and sub-processes, and in requirements analysis developers collect data about these processes,
The methods of putting down the collected requirements in a descriptive as well as graphical manner for ease of understanding, through several diagrams such as context diagrams, data flow diagrams and so on,
A digression from the second phase, viz. Requirements Analysis, to Software Acquisition activities, applicable if it has been decided during the Feasibility Study to buy a readymade solution; Software Acquisition is not a standard phase of SDLC but is an important aspect, as a majority of companies now buy readymade solutions. This also makes the important point that, even for a readymade solution, Requirements Analysis must be carried out so that an appropriate readymade solution is selected,
Finally, some common roles found in today's industry (some roles are typical of the IT industry, e.g. programmers).
5. A cost/benefit analysis should be done in the following study:
a. Economic
b. Technical
c. Legal
d. All of the above
6. UML stands for:
a. Unique Modeling Language
b. Unique Modeling Limit
c. Unified Modeling Language
d. Unique Mathematical Limit
7. DFD stands for:
a. Data Flow Diagram
b. Duplicate Functional Diagram
c. Duplicate Flow Diagram
d. None of these
8. DBA stands for:
a. Data Base Administrator
b. Data Business Administrator
c. Duplicate Business Administrator
d. None of these
9. Examples of ERP solutions may include:
a. SAP
b. BAAN
c. PeopleSoft
d. All of the above
10. BRS stands for:
a. Business Requirements Specification
b. Basic Requirements Specification
c. Business Requirements System
d. Basic Requirements System
11. OCR stands for:
a. Original Character Recognition
b. Optical Character Recognition
c. Optical Character Record
d. Original Character Record
19. ____________ design and implement database structures.
a. Programmers
b. Project Managers
c. Technical Writers
d. Database Administrators
20. ____________ manage the system development, assign staff, manage the
budget and reporting, and ensure that deadlines are met.
a. Project Managers
b. Network Engineers
c. Graphic Designers
d. Systems Analysts
Answers:
1. (a).
2. (b).
3. (a).
4. (d).
5. (a).
6. (c).
7. (a).
8. (a).
9. (d).
10. (a).
11. (b).
12. (a).
13. (d).
14. (d).
15. (d).
16. (a).
17. (d).
18. (a).
19. (d).
20. (a).
In this chapter, we shall gain:
A clear understanding of all the phases of SDLC, except the phases involving Feasibility Study and System Requirements Analysis, which we have already discussed in Chapter 1, and
A brief discussion of the phases of Programming, Testing, Implementation and Post-Implementation.
Users' involvement in this phase will be minimal, as the project has now moved from business aspects (feasibility and requirements) to technical aspects (hardware, programming etc.). However, approval of the Systems Design Document by the organization may still be necessary; this may be given by third-party consultants or by technically competent in-house people. Key design phase activities include:
Developing system flowcharts to illustrate how information will flow through the system. (We have already seen DFDs.)
Defining the applications through a series of data or process flow diagrams, showing the various relationships from the top level down to the detail. (We have seen E-R diagrams, data dictionaries etc.)
Describing inputs and outputs, such as screen designs and reports. (We shall describe this aspect later in this chapter.)
Determining the processing steps and computation rules for the new solution. (We have seen decision tables/trees and structured English, through which logic and computational rules can be described.)
Determining the data file or database system file design. (The E-R diagram and data dictionaries lead to the design of the tables, as they describe entities, their structure and the relationships among them. We need not go into the details of designing tables, as that is beyond our scope.)
Preparing program specifications for the various types of requirements or information criteria defined. (This topic is also beyond our current scope.)
Developing test plans for the various levels of testing, such as unit, integration, system and acceptance testing.
Developing data conversion plans to convert data and manual procedures from the old system to the new system.
Architectural Design
Designing of Data / Information Flow
Designing of Database
Designing of User Interface
Physical Design
Selection of Appropriate Hardware and Software
Architectural Design
Architectural design deals with the organization of the application in terms of a hierarchy of modules and sub-modules. At this stage, we identify the major modules and their interrelationships. The architectural design is made with the help of a technique called functional decomposition, wherein top-level functions are decomposed (i.e. broken down) and inner-level functions are discovered. This process is continued till the required level of detail is reached.
Designing of Database
We have seen what entities and E-R diagrams are, in the last chapter. In designing the database, entities are described in detail, along with their structure. For example, for an Employee entity, obvious structure elements (also called attributes, fields or columns) would be Employee ID, Name, Address, Date of Birth etc. Only those attributes which are of interest with respect to the current system (or system module) are considered. For example, in a project allocation system, an employee's spouse's name may not be relevant, but in an HR system it may be relevant and hence included in the entity structure. When the design of all entities is over, they can be put in a repository to form a Data Dictionary, so that common entities across the system can be used by other development team members. The design of a database consists of four major activities, the first of which, conceptual modeling, is discussed below.
1. Conceptual Modeling
The entity structure and relationships lead to the design of database tables. Let us consider a simple example. Suppose that a Student (recollect that this is a People-type entity) is able to take one Course (a Concept entity). For simplicity, let us assume that one student can take only one course (one course may have many subjects). Thus there are two entities, Student and Course, and they are related in a one-to-one manner, as shown in Fig. 2.1. The Student attributes of importance are Student ID, Name and Address, and those of Course are Course ID and Course Name. For simplicity, only a small number of attributes is considered here; in practice, each entity may have hundreds of attributes and therefore hundreds of table columns. Entities are shown as rectangles, attributes as ovals and relationships as diamonds.
Fig. 2.1: E-R diagram showing Student (ID, Name, Address) "Enrolled in" Course (Course ID, Course Name).
Student Table (Attribute Name / Attribute Type / Attribute Length):
Student ID / Numeric / 4
Name / Text / 30
Address / Text / 100
Course Table (Attribute Name / Attribute Type / Attribute Length):
Course ID / Numeric
Course Name / Text / 30
Student-Course Table (Attribute Name / Attribute Type):
Student ID / Numeric
Course ID / Numeric
Note:
1. Student ID is numeric with a length of 4 digits in this example. If the number of students runs into lakhs, a longer ID would be needed; the same applies to the Course ID, Student Name and Course Name attributes. Address is assumed to be 100 characters long and, for simplicity, is not divided into first line of address, second line of address, etc.
2. The first two tables are master tables, whereas the third table, viz. the Student-Course Table, indicates the relationship between the two entities. A developer may add Course ID to the Student master table and discard the third table.
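As a minimal sketch of how this conceptual model might be realized physically (the SQL types and lengths are assumptions based on the tables above), the three tables can be created and queried as follows:

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for illustration
cur = conn.cursor()

# Master tables for the two entities, and a third table for the relationship.
cur.execute("CREATE TABLE student (student_id INTEGER PRIMARY KEY, name VARCHAR(30), address VARCHAR(100))")
cur.execute("CREATE TABLE course (course_id INTEGER PRIMARY KEY, course_name VARCHAR(30))")
cur.execute("""CREATE TABLE student_course (
    student_id INTEGER REFERENCES student(student_id),
    course_id INTEGER REFERENCES course(course_id))""")

cur.execute("INSERT INTO student VALUES (1, 'A. Kumar', 'New Delhi')")
cur.execute("INSERT INTO course VALUES (101, 'ISA')")
cur.execute("INSERT INTO student_course VALUES (1, 101)")
print(cur.execute("SELECT name, course_name FROM student "
                  "JOIN student_course USING (student_id) "
                  "JOIN course USING (course_id)").fetchall())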
A record design technique called normalization is used to avoid update anomalies. It is called a record design technique here because a table contains a set of records, and we are designing the layout of these records rather than of the table; however, table design and record design mean the same thing.
Database normalization is a technique for designing relational database tables to minimize duplication of information and to eliminate data anomalies. A table that is sufficiently normalized is less vulnerable to problems of duplication and data anomalies: instead of multiple instances of the same information, a single instance is designed to hold the data in the database tables.
Higher degrees of normalization typically involve more tables and create the need for
a larger number of joins, which can reduce performance. Accordingly, more highly
normalized tables are typically used in database applications involving many isolated
transactions, while less normalized tables tend to be used in database applications
that do not need to map complex relationships between data entities and data
attributes (e.g. a reporting application, data warehousing application).
Database theory describes a table's degree of normalization in terms of normal forms
of successively higher degrees of strictness. A table in third normal form (3NF) is also
in second normal form (2NF); but the reverse is not always the case.
The following are the three most common normal forms:
In the First Normal Form, the repeating groups in a record are removed and made into a separate record. For example, in a purchase order record, the item no., item description, unit of measure, quantity, rate and amount will repeat as many times as the number of items on the purchase order. This obviously creates inefficiency: not only is space wasted because of the duplication of the order-level fields in each record, but it also takes a lot of work to maintain data consistency. For example, if there are 20 items on the purchase order and one of the common fields, say the shipping address, changes, the change will have to be made in each of the 20 records. If the common fields (of which the shipping address is one) are made into one table, and the order details kept in another, there will be only one record in the first table for the order, and space will be used only once for storing this information. Also, a change in the shipping address will have to be made only once, in this record, since the shipping address will no longer be in the second table.
In the Second Normal Form, a check is done on all non-key elements to ensure that they are dependent on the key. Continuing with the same purchase order example, in the second normal form all supplier-related details will be moved to a separate supplier table, because supplier details depend on the supplier code and not on the key of the purchase order record, viz. the purchase order no.
In the Third Normal Form, a check is done to ensure that no non-key element is transitively dependent on another non-key element. For example, discount may be a data element in the purchase order. If the discount the supplier offers depends on the amount of the purchase order, it is better to keep the discount in a separate table relating supplier and purchase order amount.
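A minimal sketch of this progression (the table and column names are assumptions for illustration): the flattened purchase order is split so that order-level data, item lines, supplier details and discount slabs each live in exactly one place:

# Normalizing the purchase order example step by step.
# 1NF: repeating item lines move out of the order record into po_item.
# 2NF: supplier details, dependent on supplier_code alone, move to supplier.
# 3NF: discount, transitively dependent on order amount, moves to discount_slab.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE purchase_order (po_no INTEGER PRIMARY KEY, supplier_code TEXT, shipping_address TEXT)")
cur.execute("CREATE TABLE po_item (po_no INTEGER, item_no TEXT, quantity REAL, rate REAL)")
cur.execute("CREATE TABLE supplier (supplier_code TEXT PRIMARY KEY, supplier_name TEXT)")
cur.execute("CREATE TABLE discount_slab (min_amount REAL, discount_pct REAL)")
# The shipping address is now stored once per order, however many items the order has.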
Parameter File: This file contains data which is used during the processing of transaction data. In the early days, this type of data was classified as master data only. Sometimes this data is also referred to as lookup file data, e.g. rates of interest for different banking products, rates of depreciation, or units of measure of materials in an inventory system. The data in this file can change more frequently than master data.
Transaction File: This file contains data captured about real-life events happening in day-to-day business, e.g. cash withdrawn from a bank, delivery of material received, or a bill submitted to a client. All of these events have data associated with them which changes with every transaction. This data is captured and kept in the transaction file to be processed. Depending upon the system, this file can hold data for a very short period or for a longer duration; e.g. there may be a transaction file holding only a day's data.
Work File or Temporary File: This file contains data which has been retrieved from transaction and other files (such as master or parameter files) for further processing. The program holds data temporarily in this type of file, and after the processing is over, the data is written to final output files or printed. Good programs generally remove data from the work files when the desired processing is over.
Protection File or Audit Trail: This type of file contains before and/or after images of the data as the processing cycle runs. The purpose of this file is to provide data through the different stages of data processing, and also to store the chronological events of data processing for future reference or analysis. IS auditors need to ensure that every system has this type of file and that it stores data at the correct level of detail.
History File: This type of file contains archival data which is useful for performing trend analysis or for data warehousing kinds of applications. Organizations may opt to keep various types of history files, such as yearly, location-wise, customer-wise and so on.
Report File: This file can be of two types: one which contains the exact print image (i.e. along with the cosmetic formatting for the report to be printed) of the final results of data processing, and another which can be uploaded further into some other application, like a spreadsheet.
File Organization Methods
All the files described above can be stored on a variety of secondary storage media, such as hard disks, floppies, CDs, tapes and so on. The following briefly describes how these files organize the data inside them.
Sequential File Organization: This is also known as text file or flat file organization. The data records in this file are stored in sequential order as they are written to the storage medium.
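A minimal sketch of sequential organization (the file name and record layout are assumptions): records are appended one after another and must be read back in the same order:

# Write records one after another, then scan them back in the same order.
with open("transactions.txt", "w") as f:
    for rec in ["1001,withdrawal,5000", "1002,deposit,12000"]:
        f.write(rec + "\n")          # each record occupies one line, in arrival order

with open("transactions.txt") as f:
    for line in f:                    # sequential access: no jumping to a record
        txn_id, txn_type, amount = line.strip().split(",")
        print(txn_id, txn_type, amount)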
3. The placement of company logos, dates etc. should be uniform throughout the screens.
4. For a multi-page screen layout, it is better to have tabs with page numbers indicating which page the user is on.
5. Mandatory fields should be indicated explicitly, e.g. in red colour or blinking.
6. If the system's response time after a user action is long, this should be clearly indicated on screen in the interim.
7. Developers should design screens keeping in mind the computer awareness level of users. E.g., help desk staff may need to work on more than one software application (and therefore more screens) simultaneously, so several page tabs on one screen would be more helpful than going through several screens.
Generally, developers design prototypes (dummy screens) and take the approval of users prior to finalizing the interface. In some good readymade software packages (e.g. SAP), users are allowed to modify some features of the user interface (e.g. foreground and background colours) by changing the configuration settings and storing them as favourites.
A good program should have the following characteristics:
Accuracy: Accuracy does not only mean that the program should do what it is supposed to do; it should also not do what it is not supposed to do. In fact, the second part is the more challenging one for quality control personnel and auditors.
Reliability: The program continues to deliver its functionality, so long as no changes are required in that functionality. Since programs do not wear and tear, reliability is taken for granted. However, poor setting of parameters and hard-coding of data that is of temporary value could result in the failure of a program after some time.
Robustness: In normal circumstances, most programs work; it is the extraordinary circumstances that really differentiate the good programs from the bad ones. Robustness means that the programmer has anticipated even the least likely situations and provided safeguards for them, so that mishaps can be avoided. This process of taking into consideration all possible inputs and outcomes of a program is also known as error/exception handling.
Efficiency: Performance should not be unduly affected by an increase in input volumes.
Usability: A user-friendly interface and easy-to-understand user documentation are necessary for any program.
Readability: The program should be easy to maintain, even in the absence of the programmer who developed it. The inclusion of comment lines is extremely useful here.
Maintainability: If a program has to be maintained by a programmer other than the one who developed it, it should be developed in a modular way.
Coding standards are essential for reading and understanding program code in a simple and clear manner. Coding standards may include source-code-level documentation, methods for data declaration, and techniques for input/output. Coding standards also serve as a method of communication between teams, amongst the members of a team, and with the users, thereby working as a good control.
Online programming facilities allow programmers more flexibility in coding and compiling programs using a remote computer. By the use of this facility, programmers can enter, modify and delete program code, as well as compile and store programs, on the computer. This facility, in general, allows faster development of programs, can lower development costs, maintain a rapid response time, and effectively expand the programming resources available.
Programming Languages
A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behaviour of a machine, to express algorithms precisely, or as a mode of human communication. As many software experts point out, the complexity of software is an essential property, not an accidental one, deriving from elements inherent in the problem domain and the development process.
The sweeping trend in the evolution of high-level programming languages, and the shift of focus from programming-in-the-small to programming-in-the-large, has simplified the task of the software development team and enables it to engineer the illusion of simplicity. The shift in programming paradigm is categorized into the following:
Monolithic Programming
Procedural Programming
Structured Programming
Object Oriented Programming
Like computer hardware, programming languages have been passing through evolutionary phases or generations. It is generally observed that most programmers work in one language and use only one programming style: they program in a paradigm enforced by the language they use. Frequently, they may not have been exposed to alternative ways of solving a problem and hence have difficulty exploiting the advantages of choosing a style more appropriate to the problem at hand. Programming style is defined as a way of organizing ideas on the basis of some conceptual model of programming, and using an appropriate language to write efficient programs. Five main kinds of programming styles are listed in Table 2.1, with the different types of abstraction they employ.
Programming Style / Abstraction Employed:
Procedure-oriented / Algorithms
Object-oriented / Classes and objects
Logic-oriented / Goals, expressed in predicate calculus
Rule-oriented / If-then-else rules
Constraint-oriented / Invariant relationships
Table 2.1: Programming styles and abstractions
Monolithic Programming
Programs written in monolithic languages exhibit a relatively flat physical structure, as shown in Fig. 2.2. They consist of only global data and sequential code. Program flow control is achieved through the use of jumps, and program code is duplicated each time it is to be used, since there is no support for the subroutine concept; hence this style is suitable only for developing small and simple applications. There is practically no support for data abstraction, and it is difficult to maintain or enhance the program code. Examples: Assembly language and BASIC.
Fig. 2.2: A monolithic program: global data and sequential code, with control transferred through jumps (e.g. goto 100, goto 55, goto 3).
Procedural Programming
Programs are organized in the form of subroutines, and all data items are global,
Program control is through jumps (gotos) and calls to subroutines, and
Subroutines are abstracted to avoid repetition.
Fig. 2.3: Procedural programming: global data shared by subprograms.
Structured Programming
In structured programming, programs are organized into modules (Module 1, Module 2, Module 3, ...), each with its own subprograms operating on global data.
Fig. 2.4: Structured programming: modules and subprograms with global data.
Fig. 2.5: Object Oriented Programming: a program as a set of interacting objects (Object-A, Object-B, Object-C).
Object-oriented programming is a methodology that allows the association of data structures with operations, similar to the way this is perceived in the human mind: a specific set of actions is associated with a given type of object, and actions are based on these associations. Important features of object-oriented programming include encapsulation, inheritance and polymorphism.
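A minimal sketch of these features (the class names and interest rule are invented for illustration): data and the operations on it are bundled together, a subclass inherits and overrides behaviour, and the same call behaves differently per object (polymorphism):

class Account:
    def __init__(self, balance):
        self._balance = balance          # encapsulated state

    def interest(self):
        return self._balance * 0.04      # default interest rule

class SavingsAccount(Account):           # inheritance
    def interest(self):                  # polymorphism: overrides the rule
        return self._balance * 0.06

for acct in [Account(1000), SavingsAccount(1000)]:
    print(type(acct).__name__, acct.interest())   # same message, different behaviour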
Choice of Programming Language
With several programming languages available today, the choice of language is the first issue that needs to be addressed. The following are among the most important criteria:
Application area
Algorithmic complexity
Environment in which software has to be executed
Performance consideration
Data structure complexity
Knowledge of software development staff
Capability of in-house staff for maintenance
In the current information technology context, the following types of technologies are being implemented in Indian industry. A detailed discussion is beyond the scope of this module; kindly refer to other chapters for more elaboration.
Client Server and distributed data
Distributing data over networks and client/server systems was very common during
the 1990s, and even today several systems use this type of technology. The client in
this context is the machine (usually a PC) on which the user does his/her day-to-day
work. The server is the machine on which the data used by several users is stored.
The server also stores all the computer programs; users run them by connecting to
the server and calling the required program, generally through a menu-driven system.
In this type of system, data can be stored on client machines as well as on server
machines. Novell Netware and Unix systems were very popular client-server
technologies. Data and software instructions travel from client to server and back.
The data on the server is usually stored in database systems, and the user
communicates with the server through forms or dialogue boxes to supply and
retrieve data. In order to reduce the travel time between server and client, the
designer may opt to store some data on user machines. What results is a distributed
data and distributed processing environment (data processing takes place on the
server as well as on the client).
Integrating software programs for all business functions through Enterprise-wide
Resource Planning led to ERP systems, which are based on client-server technology.
The choice of programming languages and databases in client-server technology
depends on how efficiently the connectivity between client and server can be used
for the instructions and data flowing between them.
Today, it is common to have a hybrid of a web-based multi-tier client-server system supported by a back-end, non-web-based client-server architecture.
Coding Style
Programming can be improved by following the guidelines given below:
Consistent and meaningful names for programs, variables and the like to
improve understandability and maintainability. A trial balance program can be
called trialbal rather than Fin32. Similarly, employee number can be referred to
as emp_no and supplier number can be referred to as supp_no. This ensures
consistency in naming conventions.
Modular programming makes maintainability easy
Languages like C permit more than one instruction to be written in a line, but to
improve readability a 'one instruction per line' standard should be maintained.
Comment lines improve the understandability of code by indicating the function
of each logical block; adequate comment lines should be included in the program.
Indentation, parentheses and the like improve the readability of the code. If the
logic is complicated, clear formatting will make it easier to understand.
Program Debugging
Debugging is the most primitive form of testing activity. Programmers usually debug
their programs while developing their source codes by activating the compiler and
searching for implementation defects at the source code level. Bugs (defects) are
usually the result of faulty code structure, misunderstanding of programming
language features, incoherent naming, or wrongly spelt programming statements.
The need for extensive debugging is often an indication of poor workmanship.
Debugging activities should be kept at a minimum and the programmer should not
entirely be dependent on the compilers but should also take help of the debugging
software. Debugging software tools assist the programmer in fine tuning, fixing and
debugging the program under development. These tools fall in the following three
categories.
Logic Path Monitor: reports on the sequence of events performed by the program,
thereby providing the programmer with clues on logic errors.
Memory Dump: provides a picture of the internal memory's contents at a point in
time (often at the moment the program aborts), providing clues on inconsistencies
in data or parameter values.
Output Analyzer: checks the accuracy of the output, which is the result of
processing the input through the program, by comparing actual results with
expected results.
Baselines
Before starting the discussion of next phase in SDLC, it is worthwhile to discuss
about baselining. In the context of software development, baselining every work
products at every stage of the SDLC phase is very crucial.
A baseline is a software configuration management concept that helps control
change without seriously impeding justifiable change. The IEEE standard defines
a baseline as 'a specification or product that has been formally reviewed and agreed
upon, that thereafter serves as the basis for further development, and that can be
changed only through formal change control procedures'.
It is important to note from this definition that a baselined work product can be
changed only through a formal change management procedure. Baselining a work
product is also known as freezing the work product: the product is frozen and its
status cannot be changed without formal change management. The concept of
baselining is thus generic and can be applied in other areas also.
Baselining can be done throughout the SDLC life cycle stages. If it is used in the
program development phase, then the programs (source and executable) will be the
work products. Once baselined, these programs are frozen in a software library
(either a manually controlled library or, more commonly, one controlled automatically
through version control software). If any change is to be made to these baselined
programs, it should only be done through a formal procedure. Since a set of programs
becomes a software module, baselining can be applied to completed modules as well;
the work product in this case is the module, i.e., the set of programs, which is likewise
controlled. Version numbers are assigned to a completed system when it is released
to the user.
Some of the software configuration items (not in any particular order) which are
baselined are given below:
1. Software Requirements Specification (SRS)
2. Design specifications
3. Source code and executable programs
4. Test plans and test cases
5. User and operations documentation
Scope creep is a risk in most projects and needs to be identified and managed. It will
often result in cost overrun and/or time overrun.
The next phase in SDLC is Software Testing.
Phase V- System Testing
Software Testing is an empirical investigation conducted to provide stakeholders with
information about the quality of the product or service under test, with respect to the
context in which it is intended to operate. Testing is typically carried out at several
levels:
Unit Testing
Integration / Interface Testing
System Testing
Final Acceptance Testing
Unit Testing
In computer programming, Unit Testing is a software verification and validation
method where the programmer gains confidence that individual units (i.e. individual
programs or functions or objects) of source code are fit for use. A unit is the smallest
testable part of an application. In procedural programming a unit may be an individual
program, function, procedure, etc., while in object-oriented programming, the smallest
unit is a method, which may belong to a base/super class, abstract class or
derived/child class. Unit testing can be done by something as simple as stepping
through code in a debugger; modern applications include the use of a test framework
such as xUnit. These tests ensure that the internal operation of the program performs
as per the specification. There are five categories of tests that a programmer typically
performs on a program unit (a short worked example follows the list):
Functional Tests: These tests check whether programs do what they are
supposed to do or not. The test plan specifies operating conditions, input
values, and expected results, and as per this plan programmer checks by
inputting the values to see whether the actual result and expected result match.
Performance Tests: These should be designed to verify the response time, the
execution time, the throughput, primary and secondary memory utilization and
the traffic rates on data channels and communication links.
Stress Tests: Stress testing is a form of testing that is used to determine the
stability of a given system or entity. It involves testing beyond normal
operational capacity, often to a breaking point, in order to observe the results.
Stress testing may have a more specific meaning in certain industries. These
tests are designed to overload a program in various ways; the purpose of a
stress test is to determine the limitations of the program. For example, during a
sort operation, the available memory can be reduced to find out whether the
program is able to handle the situation.
Structural Tests: These are concerned with examining the internal processing
logic of a software system. For example, if a function is responsible for tax
calculation, the verification of the logic is a structural test.
Parallel Tests: By using the same test data in the new and old system, the
output results are compared.
Gray Box Testing
In recent years, the term gray box testing has come into common usage. Gray box
testing is a software testing technique that uses a combination of black box testing
and white box testing. Gray box testing is not black box testing, because the tester
does know some of the internal workings of the software under test. In gray box
testing, the tester applies a limited number of test cases to the internal workings of
the software under test. In the remaining part of the gray box testing, one takes a
black box approach in applying inputs to the software under test and observing the
outputs.
Gray box testing is a powerful idea. The concept is simple; if one knows something
about how the product works on the inside, one can test it better, even from the
outside. Gray box testing is not to be confused with white box testing; i.e. a testing
approach that attempts to cover the internals of the product in detail. Gray box testing
is a test strategy based partly on internals. The testing approach is known as gray
box testing, when one does have some knowledge, but not the full knowledge of the
internals of the product one is testing.
In gray box testing, just as in black box testing, one tests from the outside of the
product, but one makes better-informed testing choices because one knows how the
underlying software components operate and interact.
Integration / Interface Testing
The objective is to evaluate the connection of two or more components that pass
information from one area to another, including the ability of the system to recover
after a failure of hardware or software. This may be carried out in the following manner:
While the application is running, suddenly restart the computer and then check
the validity of the application's data integrity;
While the application is receiving data from the network, unplug the cable and
plug it in again after some time, then analyze the application's ability to continue
receiving data from the point at which the network connection disappeared;
Restart the system while the browser has a definite number of sessions open
and, after rebooting, check that it is able to recover all of them.
Quality Assurance Testing: It ensures that the new system satisfies the
prescribed quality standards and that the development process follows the
organization's quality assurance methodology.
Alpha Testing: This is the first stage, often performed by the users within
the organization.
Beta Testing: This is the second stage, generally performed by the
external users. This is the last stage of testing, and normally involves
sending the product outside the development environment for real world
exposure.
Pilot Testing
This kind of testing involves the users just before actual release to ensure that
users become familiar with the release contents and ultimately accept it. This is
often suggested before Go Live in a typical ERP system going to production. It
may involve many users and is generally conducted over a short period of time
and is tightly controlled.
Sociability Testing
In this type of testing, an application is tested in its normal environment, along with
other standard applications, to make sure they all get along together; that is, they
do not corrupt each other's files, do not crash, do not hog system resources, do
not lock up the system, and can share the printer and other shared resources.
This is especially useful for web-based applications.
Auditor's Role in Testing
The auditor can play a very crucial role in the testing phase of SDLC. Typically,
technical people fail to test boundary or exception conditions in their testing
activities. The auditor must carefully review the various test plans and test cases,
along with the corresponding test data and results. Programmers tend to prepare
irrelevant, fictitious test data which does not reflect real-life data values, e.g.,
names entered simply as 'XYZ'.
Throughout the testing phase, auditors need to identify whether written test plans and
scripts are available and adequate; e.g., test plans should include the test's set-up
and scope, testing procedures and data, expected results, and sign-off from the
appropriate staff. It is also necessary to check whether testing is performed in an
environment that is separate from the production environment. Logging of test results
should be ensured for post-testing analysis or review. If tests produce unexpected
results, the adequacy and appropriateness of the reasons must be checked and
reviewed. Auditors may have to ensure that nothing is installed in the production
environment until it has been successfully examined in a test environment and
formally approved by the business user.
The auditor can conduct an audit of the testing phase to verify compliance of the
testing carried out by various teams. The prime reason may be to judge whether the
process complies with a standard: an auditor may compare the actual testing process
with the documented process, e.g., ISO standards require the organization to define
its software testing process, and the audit will try to verify whether testing was
actually conducted as documented.
Sometimes the audit objective could be to improve the process or to address
problems in the testing phase. Since the auditor can add value as a third-party
independent reviewer, the business may wish to seek the auditor's opinion on process
improvement aspects. The auditor's involvement may also be necessary to find the
root cause of a problem. Auditing the test process helps management understand
whether the process is being followed as specified. Typically, a testing audit may be
done for one or more of these factors: standards compliance, process improvement,
or root cause analysis.
The auditor may have to conduct testing by preparing test strategies, designing test
plans and preparing test cases. The auditor will need a testing environment which is
similar to the production environment, and will carry out testing by running the various
software processes module by module and reviewing the results.
After the testing of software is over, the SDLC project moves into the next phase,
viz. implementation of the software.
Installation of new hardware / software: If the new system interfaces with other
systems or is distributed across multiple software platforms, some final
commissioning tests of the production environment may be desirable to prove
end-to-end connectivity.
Data conversion: The following steps are necessary for this activity:
o Determining what data can be converted through software and what data must
be converted manually
o Performing data cleansing before data conversion
o Identifying the methods to assess the accuracy of conversion, like record
counts and control totals (a short sketch follows this list)
o Designing exception reports showing the data which could not be converted
through software
o Establishing responsibility for verifying, signing off and accepting the overall
conversion by the system owner
o Actual conversion
User Final Acceptance testing: Ideally, the user acceptance test should be
performed in a secured testing environment where both source and executable
codes are protected. This helps to ensure that unauthorized or last minute
change to the system does not take place without going through the standard
system maintenance process.
User training: Various types of training may be given to the users, depending on
their roles in the new system.
The post-implementation review evaluates the system against measures, such as
response time, CPU usage and random access space availability, that the auditor
has used as assessment criteria.
An auditor may adopt a rating system, such as a scale of 1 to 10, to rate the various
phases of SDLC. E.g., in rating a feasibility study, the auditor can review the
Feasibility Study Report and the different work products of this phase, and interview
the personnel who conducted the study. Depending on the content and quality of the
Feasibility Study Report and the interviews, the auditor can arrive at a rating between
1 and 10 (10 being best). After deriving such a rating for all the phases, the auditor
can form his/her overall opinion about the SDLC phases.
In order to audit technical work products (such as database design or physical
design), the auditor may opt to include a technical expert and seek his/her opinion on
the technical aspects of SDLC. However, the auditor will have to provide the control
objectives and directives, and in general validate the opinion expressed by the
technical expert against the relevant control considerations.
A checklist with columns for serial number, checkpoint and status may be used by
IS auditors for this purpose.
Summary
In this chapter, we have learned the following major aspects of SDLC:
Unit testing
Integration / Interface testing
System testing
Final acceptance testing
After the information system is through with the testing process, it moves on to
the implementation phase. This involves installation of the new hardware and
software, data conversion, user acceptance of the new system, user training and
test runs.
The final stage of SDLC is the post implementation review phase, which involves
assessment, evaluation, deficiencies, recommendations and corrective, adaptive,
perfective and preventive maintenance.
In addition to the activities associated with each phase, there are others which
are undertaken throughout the life cycle; these are referred to as umbrella
activities. They include activities such as software project management, quality
assurance, configuration management, documentation and measurement.
Sources:
Ron Weber: Information Systems Control and Audit, Pearson Education, India,
Third Impression, 2009.
Valacich, George and Hoffer: Essentials of Systems Analysis & Design, PHI Pvt. Ltd.,
N. Delhi, India, 2004.
Muneesh Kumar: Business Information Systems, Vikas Publishing House Pvt.
Ltd., N. Delhi, India, 2001.
Charles Parker, Thomas Case: Management Information Systems, Mitchell
McGraw Hill, India, 1993.
M. M. Pant: System Analysis, Data Processing and Quantitative Techniques,
Pitambar Publishing Co. Pvt. Ltd., N. Delhi, India, 1999.
Gordon B. Davis, Margrethe H. Olson, Management Information Systems,
McGraw-Hill International Editions, 1984.
Sujata Garg: Professional Approach to Management Information & Control
Systems, Bharat Law House Pvt. Ltd., N. Delhi, India, 2005.
Pankaj Jalote: An Integrated Approach to Software Engineering, Narosa
Publishing House, N. Delhi, India, Third Edition, 2005.
Roger S. Pressman: Software Engineering - A Practitioner's Approach, McGraw-Hill, Sixth Edition, 2005.
2. The problem statement includes the ____________, which lists specific input
values a program would typically expect the user to enter, and the precise output
values that a perfect program would return for those input values.
a. testing plan
b. error handler
c. IPO cycle
d. input-output specification
3. The design of a database consists of following major activities:
a. Conceptual Modeling
b. Data Modeling
c. Storage Structure Design
d. All of the above
4. ____________ is a technique for designing relational database tables to minimize
duplication of information and to eliminate data anomalies.
a. Database Normalization
b. Data Modeling
c. Storage Structure Design
d. None of these
5. Physical design includes the following step/s:
a. Designing physical files and databases
b. Designing system and program structure
c. Designing distributed processing strategies
d. All of the above
6. ____________ stores standing or fixed data about an entity.
a. Master file
b. Parameter file
c. Transaction file
d. None of these
7. The data in ____________ files changes less frequently or may not change at all during
the life span of the entity.
a. Master file
b. Parameter file
c. Transaction file
d. None of these
14. ____________ is the process of testing individual units (i.e. individual programs or
functions or objects) of software in isolation.
a. Unit Testing
b. System Testing
c. Penetration Testing
d. All of the above
15. In ____________, testing is done by using the same test data in the new and old
system, and the output results are compared.
a. Unit Testing
b. Parallel Testing
c. Penetration Testing
d. All of the above
16. In Black Box Testing, the focus is on ____________.
a. Functional Correctness
b. Structural Correctness
c. Both a. and b.
d. None of these
17. In White Box Testing, the focus is on ____________.
a. Functional Correctness
b. Structural Correctness
c. Both a) and b.
d. None of these
18. ____________ is conducted when the system is just ready for implementation.
a. Unit Testing
b. Parallel Testing
c. Penetration Testing
d. Final Acceptance Testing
19. ____________ is the first stage, often performed by the users within the organization.
a. Alpha Testing
b. Beta Testing
c. Both a. and b.
d. None of these
Answers:
1. b.
2. a.
3. d.
4. a.
5. d.
6. a.
7. a.
8. b.
9. a.
10. c.
11. d.
12. d.
13. d.
14. a.
15. b.
16. a.
17. b.
18. d.
19. a.
20. a.
3 Alternative Methodologies of
Software Development
Learning Objectives
To provide an understanding of alternative methodologies and models of software
development, and of information systems maintenance practices.
In system development, an analyst looks for processes that are reusable and
predictable. The main aim is to improve the productivity and quality of the system,
while also focusing on delivering the system on time by adhering to a systematic
schedule and keeping a check on the budget. A systems analyst can achieve this by
formalizing the process and applying project management techniques. These
techniques help meet the desired expectations in terms of functionality, cost, delivery
schedule, productivity and quality of the system. A life cycle called SDLC (System
Development Life Cycle) is followed to achieve a system with the above-mentioned
features. SDLC is implemented in the form of various methodologies and their
respective models.
Methodology: Methodology is a comprehensive guideline that is followed to
successfully complete every SDLC phase. It is a collection of models, tools and
techniques. Methodologies are used to enhance performance in the system. They are
comprehensive and multi-step approaches to systems development.
Model: Model describes the system in both formal and informal ways. For example,
one model can show the various components of the system and how they are related
to each other. Another model can show the order of information flow among the
components of the system. Some of the models used in system development are:
Flowchart
DFD (Data Flow Diagram)
ERD (Entity Relationship Diagram)
Structure Chart
Use Case Diagram
Class Diagram
Sequence Diagram
PERT charts
Gantt chart
Hierarchy chart
Financial analysis models like NPV, ROI, etc.
Tools: A tool is a software support that helps create models or other components
required in the project. Some examples of tools used in system development are
smart editors, debugging tools and CASE (Computer-Aided System Engineering)
tools. Tools help the analyst create the important system models.
Techniques: A technique is a collection of guidelines that helps an analyst complete a
system development activity. Some of the techniques used are OO (Object
Oriented) analysis, data modeling, relational database design, structured analysis
and design, and software testing.
Researchers all over the world have done a lot of experimentation and have adopted
various methodologies and models for conducting the SDLC phases. Software
Development Methodology is a framework which is adapted to structure, plan, and
control the process of developing an information system. The framework of a
software development methodology consists of a development philosophy together
with the models, tools and techniques that support it.
Over the past few years a wide variety of frameworks have evolved. Each framework
has its own accepted strengths and weaknesses. Choice of system development
methodology is dependent on the nature of the project under development. The
system development methodology is based on various technical, organizational,
project and team considerations. The methodology is often documented formally.
There are nearly 40 different models; this chapter focuses on some of those that
are generally used in system development.
The present trend of using OOP and web-based systems demands that instead of
using traditional methods, it is better to adopt alternative development methodologies.
This chapter focuses on the models available for software development.
Feasibility Study
Requirements Analysis
Systems Design
Programming
Testing
Implementation
Post-Implementation Support
The Waterfall Model is a sequential software development process, in which progress
is seen as flowing steadily downwards like a waterfall, as shown in Fig 3.2.
Fig. 3.2: Waterfall Model (Analysis, Requirement Specification, Design, Implementation, Testing and Integration, Operation and Maintenance, each phase flowing into the next)
The Prototyping Model is useful in the following situations:
When a customer defines a general objective for software, but does not identify
detailed input processing or output requirements.
When the developer is unsure of one or all of the following:
The efficiency of an algorithm
The adaptability of an operating system
The form that human-machine interaction should take
The prototyping software development process begins with requirements collection,
followed by prototyping and user evaluation. Often the end users may not be able to
provide a complete set of application objectives, detailed input, processing, or output
requirements in the initial stage. After the user evaluation, another prototype will be built
based on feedback from users, and again the cycle will return to customer evaluation.
Requirements gathering: The developer gets initial requirements from the users.
Quick design: The emphasis is on visible aspects such as input screens and
output reports.
Construction of the prototype: done by the developer on the basis of the inputs
from the users.
Users' evaluation of the prototype: the users accept, or suggest changes to, the
screens and options shown to them.
Refinement of the prototype: it is refined by fine-tuning to the users' requirements.
The last two steps are iterated till the user is fully satisfied with the prototype.
The user sees the 'working' version of the software, without realizing that the
processing logic is still not ready. So, the user starts making unreasonable
delivery date demands without realizing that prototype has to be expanded to
handle transaction volume, client server network connectivity, backup and
recovery procedures and control features.
As the development has to be carried out very fast, the developer uses 4GL
(Fourth Generation Language) tools. But in larger developments, the design strategy
has to be clearly laid down; otherwise the effort will result in poor quality, poor
maintainability and low user acceptance, resulting in the failure of the effort.
The prototype is only for eliciting user requirements. Even the input and output
programs may have to be rewritten taking into account the target environment
and efficiency considerations. Having worked hard at developing a prototype, the
developer tries to work around it to develop the application, thereby leading to
sub-optimality.
The capability of the prototype to accommodate changes could also lead to
problems. The user may at times add changes at the cost of strategic objectives
of the application.
In prototyping, the software that is being developed is modified and coded, as
and when the user feels the need for change. It becomes difficult to keep track of
such changes in the controls.
Changes in design and development keep happening so quickly that formal
change control procedures may be violated.
Though the IS auditor is aware of the risks associated with prototyping, the IS
auditor also knows that this method of system development can provide the
organization with substantial savings in time and cost. Moreover, since users give
approval to data entry screens and report layouts early in the SDLC, the chances of
meeting user requirements are very high in this model.
RAD: RAD, also known as Rapid Application Development, is an incremental model
which has a short development cycle in which the requirements have to be clearly
understood and the scope well defined. It is a high speed adaptation of the waterfall
model, in which rapid development is achieved by using a component based
construction approach.
RAD leverages the following techniques to keep the development cycle short:
Combining the best available techniques and specifying the sequence of tasks
that will make those techniques most effective.
Using evolutionary prototypes that are eventually transformed into the final
product.
Using workshops, instead of interviews, to gather requirements and review designs.
Selecting a set of CASE tools to support modeling, prototyping and code
reusability, as well as automating many of these techniques.
Implementing timeboxed development that allows the development team to
quickly build the core of the system and implement refinements in subsequent
releases.
Providing guidelines for success and describing pitfalls to be avoided.
The structure of the RAD life cycle is thus designed to ensure that developers build
systems that users really need. This life cycle, which goes through four stages,
includes all the activities and tasks required to define business requirements and
design, develop, and implement the application system that supports these
requirements.
Requirements Planning: Also known as the concept definition stage; it defines the
business functions and data subject areas that the system will support, and thus
also determines the system's scope.
User Design: Also known as the functional design stage; it uses workshops to
model the system's data and processes and to build a working prototype of critical
system components.
Construction: Also known as the development stage; it completes the
construction of the physical application system, builds the conversion system, and
develops user aids and implementation work plans.
Implementation: Also known as the deployment stage; it includes final user
testing and training, data conversion, and the implementation of the application
system.
People: The success of RAD is contingent upon the involvement of people with right
skills and talent. Excellent tools are essential to fast application development, but
they do not, by themselves, guarantee success. Fast development relies equally
heavily on the people involved. They must therefore be carefully selected, highly
trained, and highly motivated. They must also be able to use the tools and work
together in close-knit teams. Rapid development usually allows each person involved
to play several different roles, so a RAD project mandates a great degree of
cooperative effort among a relatively small group of people.
The key players in a RAD project include the following:
Key Player: Characterized By
Sponsor: A high-level user executive who funds the system and is dedicated to its value.
User Coordinator: A user appointed to oversee the project from the users' perspective.
Requirements Planning Team: A team of high-level users who participate in the requirements planning workshop.
User Review Board: A team of users who review the system after construction and decide whether modifications are needed before cutover.
Training Manager: The person responsible for training users to work with the new system.
Project Manager: The person who oversees the development effort.
Construction (SWAT) Team: A small team of highly trained developers ('Skilled With Advanced Tools') who work together at high speed.
Workshop Leader: The specialist who organizes and conducts the workshops used in requirements planning and user design.
Advantages of RAD include the following:
Sometimes it is a better idea to buy software, because buying may save money
compared to building the system.
RAD deliverables are easier to port because they make greater use of high-level
abstractions, scripts and intermediate code.
Development is conducted at a higher level of abstraction, because RAD tools
operate at that level.
Early visibility, because of prototyping.
Greater flexibility, because developers can redesign almost at will.
Greatly reduced manual coding, because of wizards, code generators and code
reuse.
Increased user involvement, because users are represented on the team at all
times.
Possibly fewer defects, because CASE tools may generate much of the code.
Possibly reduced cost, because time is money and also because of reuse.
A shorter development cycle, because development tilts toward schedule and
away from economy and quality.
The RAD model is not a good choice for system development when the system
cannot be properly modularized, when high performance is required, or when
technical risks are high.
Spiral Model
In the spiral model, after initial requirements definition, preliminary design and
construction of a first prototype, each subsequent prototype is evolved by a fourfold
procedure:
evaluating the first prototype in terms of its strengths, weaknesses, and risks;
defining the requirements of the second prototype;
planning and designing the second prototype; and
constructing and testing the second prototype.
The spiral model has the following advantages:
Its primary advantage is that its range of options accommodates the good
features of existing software process models.
In appropriate situations, the spiral model becomes equivalent to one of the
existing process models. In other situations, it provides guidance on the best
mix of existing approaches to a given project.
It focuses early attention on options involving the reuse of existing software.
It accommodates preparation for lifecycle evolution, growth, and changes of
the software product.
Areas of Concern
The three primary challenges of the spiral model involve the following:
Matching contract software: The spiral model works well on internal software
developments but needs more work to match it to the world of contract
software acquisition.
Relying on risk-assessment expertise: The model relies heavily on the ability
of software developers to identify and manage sources of project risk.
The need for further elaboration of spiral model steps: The model process
steps need further elaboration to ensure that all software development
participants operate in a consistent manner.
The following diagrams illustrate the above-mentioned aspect.
[Figure: A pay-roll system using tax data and personnel data, and a project management system using personnel data and projects data, shown first with duplicated data and then sharing common tax, personnel and projects data.]
[Figure: The re-engineering process. From the original program, source code translation produces a structured program; reverse engineering produces program documentation; program modularisation produces a modularised program; and data re-engineering converts the original data into re-engineered data.]
Forward engineering is the process of moving from high-level abstractions and
logical, implementation-independent designs to the physical implementation of a
system. Forward engineering differs from conventional software engineering in
that it uses the output of re-engineering rather than following the sequence of
events of a normal development life cycle. For example, the most common forward
engineering activity involves the generation of source code from design information
which was captured by a previous reverse engineering activity.
Some factors that affect re-engineering costs are:
The quality of the software to be re-engineered: the lower the quality of the
software and its associated documentation (if any), the higher the
re-engineering cost.
The tool support available for re-engineering: it is not normally cost-effective
to re-engineer a software system unless CASE tools are used to automate
program changes.
The extent of data conversion required: if re-engineering requires the
conversion of large volumes of data, the process cost increases.
The availability of expert staff: if the staff responsible for maintaining the
system cannot be involved in the re-engineering process, the cost increases,
since the re-engineers will have to spend a great deal of time understanding
the system.
There are practical limits to the extent to which a system can be improved by
re-engineering. For example, it is not possible to convert a system with a
functional approach to an object-oriented system.
Since major architectural changes in the system of data management cannot
be carried out automatically, it involves high additional costs. Major
architectural changes of the system of data management have to be done
manually.
Although re-engineering can improve maintainability, the reengineered
system will not be as maintainable as a new system developed with modern
software engineering methods.
[Figure: The reverse engineering process. The system to be re-engineered undergoes automated analysis and manual annotation; the results populate a system information store, from which document generation produces data structure diagrams and traceability matrices.]
However, reverse engineering need not always be followed by re-engineering.
The design and specification of a system may be reverse engineered so that they
can be an input to the requirements specification process for the system's
replacement, or to support program maintenance.
This process can be carried out by:
i. Decomposing the object or executable code into source code and using it to
analyze the program.
ii. Utilizing the reverse engineering application as a black box test and unveiling
its functionality by using test data.
Other common applications of reverse engineering include:
Security auditing
Removal of copy protection (also called cracking)
Customization of embedded systems
Enabling of additional features on low-cost hardware
The principles of agile software development include the following:
Agile development does not emphasize defined and repeatable processes;
instead, it performs and adapts its development based on frequent inspections.
So far, we have dealt with the phases in SDLC, and how these have been
undertaken in traditional and alternative development models.
Object-Oriented Development
Object-oriented development typically proceeds through three phases:
Object-oriented analysis
Object-oriented design
Object-oriented implementation
[Figure: The object-oriented development process. Analysis: build a use-cases model and perform object analysis, then validate/test. Design: design classes, define attributes and methods, build the object and dynamic model, and build the user interface and prototype. Implementation: use CASE tools and/or OO programming languages, followed by user satisfaction, usability and quality assurance tests.]
Major advantages of this approach are:
The model allows full exploitation of the power of object-based and
object-oriented programming languages.
Object oriented development model can manage a variety of data types.
It has the ability to manage complex relationships.
It has the capacity to meet demands of a changing environment.
Since object-based models appeal to the workings of human cognition,
human input into the development of a software system is likely to be more
natural and less prone to error.
It encourages the re-use of modules and also of entire designs, which leads
to reduced development and maintenance costs.
Object-oriented systems are based on stable forms (i.e. objects and classes)
which are resilient to change.
Delaying decisions about representation of objects and hiding as much
information as possible within an object leads to strongly cohesive and
weakly coupled software, which is easier to modify.
Data Security aspect is also taken care of by implementing object oriented
technology.
Component-based software engineering involves the following activities:
Component qualification,
Component adaptation,
Component assembly, and
System evolution and maintenance.
For many components in the marketplace, prediction of how they will behave in a
system is difficult because of a lack of information about the capabilities of a
component and a lack of trust in this information. Conventional software doctrine
states that component specifications should be sufficient and complete, static
(written once and frozen), and homogeneous. However, full specifications may be
impractical: some components may exhibit (non-functional) properties which are
infeasible to document. One method for addressing this issue is to use credentials,
i.e., knowledge-based specifications that evolve as more is discovered about a
component.
The phases of SDLC, such as feasibility study and requirements analysis, may have
to be revisited to suit component-based development. For example, in a
component-based approach, it is necessary to analyze whether the requirements
can be fulfilled by available components, which means that the analysts have to be
aware of the components that can possibly be used. Since appropriate components
may not be available, some components may have to be implemented afresh, which
can be risky. IS auditors should be aware of this and carefully study whether a
compromise has been made with the correct functioning of the software. IS auditors
should also ensure that unit testing (which now applies to individual components)
and integration testing (integration of components to build system modules) have a
significant role in the entire testing phase.
For example, in Microsoft's Transaction Server, one component is 'Move Money'. It
moves an amount from a source account to a destination account within a given
transaction. A programmer who wishes to move an amount from one account to
another can use this component to build his module.
3. Web-Based Application Development
The World Wide Web and the Internet have added to the value of computing.
With the help of web based applications, we can purchase shares, download
music, view movies, get medical advice, book hotel rooms, schedule airline
flights, do banking, take college courses, etc.
Web-based systems and applications have become integrated into the business
strategies of small and large companies alike. The following are the attributes of
web based applications:
1. Network Intensive: By its nature, a web based application is network
intensive. It resides on a network and serves the needs of diverse community
of clients. It may reside on the internet (thereby enabling open worldwide
communication) or intranet (implementing communication across the
organization) or extranet (making available intranet for external users with
proper access controls.)
2. Content Driven: In many cases, the primary function of a web based
application is to use hypermedia to present text, graphics, audio, and video
contents to the end user.
3. Continuous evolution: Unlike conventional application software that
evolves over a series of planned, chronologically spaced releases, web
based applications evolve continuously.
A wide range of application categories, from informational sites and portals to
transaction-oriented systems and web services, is encountered in web based
applications.
The web based application development process begins with formulation, an
activity that identifies the goals and objectives of the development. Planning
estimates the overall project cost, evaluates the risks associated with the
development effort, and defines a development schedule. Analysis establishes the
technical requirements for the application and identifies the content items that are
to be incorporated in it. The engineering activity incorporates two parallel tasks:
content design and technical design. Page generation is a construction activity
that makes heavy use of automated tools, and testing exercises web application
navigation, attempting to uncover errors in function and content while ensuring
that the web based application operates correctly in different environments. Web
engineering makes use of an iterative, incremental process model because the
development timetable for web based applications is very short.
In the case of web based applications, the client's database resides on a back-end
processor, while the software and the data related to frequently asked information
may reside on a front end (unlike client-server based applications, where both may
reside on the same processor) to save the user's time.
Risks associated with Web Based Applications:
1. As web based applications are available via network access, it is difficult
to limit the possible end users who may access the application. In order to
protect sensitive content and provide secure modes of data transmission,
strong security measures must be implemented throughout the infrastructure
that supports a web based application and within the application itself.
2. In the absence of disciplined process for developing web based systems,
there is an increasing concern that we may face serious problems in the
successful development, deployment and maintenance of these systems.
Poorly developed web based applications have too high a probability of
failure. As web based systems grow more complex, a failure in one can
propagate broad based problems across many. In order to avoid this, there is
a pressing need for disciplined web engineering approaches and new
methods for development, deployment and evaluation of web based systems
and applications.
4. Extreme Programming
Extreme Programming (or XP) is a set of values, principles and practices for
rapidly developing high-quality software that provides the highest value for the
customer in the fastest way possible. XP is extreme in the sense that it takes 12
well-known software development "best practices" to their logical extremes.
The 12 core practices of XP are:
1. The Planning Game: Business and development team cooperate to produce
the maximum business value as rapidly as possible.
2. Small Releases: XP starts with the smallest useful feature set, which is
released early. With every fresh release, new features are added to the
system.
3. System Metaphor: Each project has an organizing metaphor, which provides
an easy to remember naming convention.
4. Simple Design: It is always better to use the simplest possible design for
successful completion of job. The requirements may change in future, so it is
better to concentrate on today's requirements for development of the system.
5. Continuous Testing: Before programmers add a feature, they write a test
for it. They test the feature using that test for successful execution.
6. Refactoring: Refactor out any duplicate code generated in a coding session.
7. Pair Programming: All production codes are written by two programmers
sitting at one machine. Essentially, all code is reviewed as it is written.
8. Collective Code Ownership: No single person "owns" a module. Any
developer is expected to be able to work on any part of the code base at a
particular time.
Web services are based on open, XML-based standards and therefore work
across all computer systems, irrespective of their hardware and software.
Web Services Description Language (WSDL), is an XML format for describing
network services as a set of endpoints operating on messages containing either
document-oriented or procedure-oriented information. It defines services as
collections of network endpoints or ports. WSDL specification provides an XML
format of documents for this purpose.
Information Systems Maintenance Practices
After a system moves into production, it seldom remains static. Change is a
reality, and systems undergo changes right through their life cycle. These
changes often create problems in the functionality and other characteristics of a
system. So it is necessary that a procedure for change is formalized.
Change Control
Any request for change by the user has to be submitted to the EDP department,
along with the reasons for it. (In case the developer himself wants to change
the program to overcome a processing problem or to improve performance, he
has to prepare a written change request document.)
The user request is then assessed by the relevant application developer. He
evaluates the impact of the modifications on other programs. The number of days
required for making all the necessary changes, and time for testing and changes
in documentation is also estimated. A report is then prepared by the developer on
the basis of time and cost of change.
Every organization has a defined CCA (Change Control Authority). CCA is a
person or a committee who is the final authority that approves changes. The CCA
reviews the report and approves / rejects the change request. An Engineering
Change Order (ECO) is then generated for the changes approved.
The ECO describes the change that should be made; constraints within which
change should be made; and criteria for review and audit. The program(s) to be
changed is / are then copied to the test directory from the production directory
with access control for the designated programmer.
The programmer then makes the approved changes, and the programs go
through all the tests that they had gone through, when they were initially
developed. If a program change warrants a change in the database, then it is first
made in the test data base, and all related documents are changed.
System documentation that may need to be updated after a change includes:
System manual
Input screens
Output reports
Program specifications
Flow charts, Decision tables, Decisions trees
Narrative of program logic
Data dictionary
Entity relationship diagram
Object diagrams
Data flow diagrams
User manual
Data entry procedures
Batch operation procedures
The current version of the documentation should also be available on the back-up site.
In literate programming, a computer program is written like literature, with human
readability as its primary goal. Programmers aim at a literate style in their
programming just as writers aim at an intelligible and articulate style in their writing.
This is in contrast to the view that the programmer's primary or sole objective is to
create source code, with documentation a secondary objective.
In practice, literate programming is achieved by combining human-readable
documentation and machine-readable source code into a single source file, to
maintain close correspondence between documentation and source code. The
order and structure of this source file are specifically designed to aid human
comprehension: code and documentation together are organized in logical and/or
hierarchical order (typically according to a scheme that accommodates detailed
explanations and commentary as necessary). At the same time, the structure and
format of the source file accommodate external utilities that generate program
documentation and/or extract the machine-readable code from the same source
file(s).
If the auditor so desires, he can take a change request and trace all activities
from the documentation to assure himself that all the processes are being
followed.
The time stamp on the object code should always be later than that of the
corresponding source code.
Users and application programmers should not have access to the
production source code.
After going through the source code comparison, the auditor compiles a copy
of the source code in the production directory and generates an object code.
The test plan for the program is then applied on the object code and the
results must be documented.
The same test is then applied on the object code available in the production
directory. The results of both these tests should match.
Emergency Changes
Sometimes emergency changes may have to be made. If the management
recognizes that there is not much time available to create a new version because
of an extraordinary situation, they can relax the change control procedure.
Normally, this is operated through a special log-in ID, which is known only to the
systems administrator and is kept in a sealed envelope with the EDP management.
With their approval, the programmer may access the production directory using the
special log-in ID, make the necessary changes in the program, and run the new
version to create outputs. These actions must be diligently logged on the machine,
and the follow-up procedure must include all necessary documentation and written
approvals post facto.
Configuration Management
Configuration management involves various procedures throughout the life cycle
of the software to identify, define and baseline software items in the system thus
providing a basis for problem management, change management and release
management.
Configuration management process involves identification of items like programs,
documentation and data. Configuration management team takes care of
programs, documentation and data for safekeeping. Each is assigned a reference
number for a quick retrieval. Once it goes to the team, the item cannot be
changed without a formal change control process which is approved by a change
control group.
CI, Configuration Identification, is selection, identification and labeling of the
configuration structures and configuration items, including their respective 'owner'
and the relationships between them. CIs may be hardware, software or
documentation. These CIs will be stored in CMDB (Configuration Management
Database) and will be used in configuration management and other services,
such as incident handling, problem solving and change management. Any
changes done to CIs will be updated in CMDB and CMDB will be kept up-to-date.
The goals of Configuration Management are:
to account for all the IT assets and configurations within the organisation and
its services.
Summary
A few of the alternate approaches to system development are: Data-oriented
approach, object oriented design, prototyping, RAD (Rapid Application Development),
Reengineering and Structure Analysis. In the object-oriented analysis, the system is
analyzed in terms of objects and classes and the relationship between objects and
their interaction. Object Oriented Technology is widely used in Computer Aided
Engineering (CAE) and systems software. In prototyping, a set of general objectives
for the software is defined, instead of listing detail input/output and processing
requirements. RAD is an incremental model that supports a short development cycle.
In this approach, the requirements must be clear and the scope must be well-defined.
Reverse engineering or re-engineering involves separating the components of a
system and observing its working, with the intention of creating a replica of the
system or improving upon the original product.
upon the original product. Structured analysis is a framework for the physical
components (data and process) of an application. It is done by using data flow
diagrams. Web based applications development deals with application systems that
work on the network to make information available to the users. Agile
development involves lightweight, iterative development processes that differ from
the traditional way of developing a complex system.
Once a system moves into production, it seldom remains static. Any changes in it
often create problems in its functionality. So it is essential that a systematic
approach for maintaining the information system is formulated and implemented.
These processes are collectively called information systems maintenance
practices. They include change control, continuous updating of systems
documentation, program migration process and testing.
Multiple Choice Questions:
1. In the ___________ model, all the SDLC phases run serially and you are not
allowed to go back and forth in these phases; it is most suited for
traditional programming languages such as COBOL.
a. Spiral Model
b. Iterative Enhancement Model
c. RAD Model
d. Waterfall Model
2. ______________ is a method for representing software requirements by
focusing on data structure and not data flow while processing.
a. Information Oriented System Development
b. Data Oriented System Development
c. Process Oriented System Development
d. Method Oriented System Development
3. In ______________ the developer gets initial requirements from the users.
a. Quick Design
b. Requirement Gathering
c. Construction of Prototype
d. Refinement of Prototype
4. RAD stands for _________________________.
a. Reverse Application Development
b. Requirement Application Development
c. Rapid Application Development
d. Reengineering Application Development
11. Auditors use _____________ to ensure that the source code provided to them
for review is the same as the one that has been compiled to produce the current
version of object code.
a. Program Code Comparison
b. Program Code Compare
c. Program Code Evaluation
d. Program Code Matching
12. During change control, the process of taking out one or more copies of
program(s) from the production directory and moving them back to the
production directory are important steps that require ________.
a. Individual Controls
b. Additional Controls
c. Added Controls
d. Standard Controls
13. CAE stands for _____________.
a. Component Aided Engineering
b. Code Aided Engineering
c. Computer Aided Engineering
d. Control Aided Engineering
14. In the ______________, it is easier to establish relationships between any
two entities of a program, whether it is a database or an application program,
where each entity is considered an object.
a. Object Related Approach
b. Object Oriented Approach
c. Object Referenced Approach
d. Class Oriented Approach
15. IDL stands for ________________.
a. Interface Definition Language
b. Integrated Definition Language
c. Interactive Definition Language
d. Identity Definition Language
16. The ______ model aims at putting together a working model to test various aspects
of a design, illustrate ideas or features and gather early user feedback.
a. Waterfall
b. Spiral
c. Prototype
d. RAD
2. (b)
3. (b)
4. (c)
5. (a)
6. (a)
7. (c)
8. (a)
9. (D)
10. (a)
11. (a)
12. (b)
13. (c)
14. (b)
15. (a)
16. (c)
17. (d)
18. (b)
19. (d)
20. (d)
21. (a)
Introduction
By now, we know that software is developed with a Structured Methodology and
consists of various phases. We also know that these phases can be undertaken by
using various models. Moreover, software is designed, programmed, used and
managed with the help of various kinds of hardware and software. Software
development is a complex process of managing resources like people, machines,
etc. Researchers hold that engineering principles and practices can be applied to
the software development process. Therefore, software development is treated as a
Software Development Project, and all project management tools and techniques
are applicable to it.
These tools and techniques draw on knowledge and practices, inherent skills,
training and experience, and include risk-based management, quantitative and
qualitative control, size estimation, scheduling, resource allocation and
productivity tools. They are applied across five process groups:
Project Initiation
Project Planning & Design
Project Execution
Project Monitoring & Control
Project Closing
Initiation
Executing
Planning and
Design
Monitoring
and
Closing
Project Monitoring & Control continuously compares where the project is (measurement) with where it was planned to be (evaluation). Project Closing covers procurement and communication wind-up through two activities: Close Project and Contract Closure.
Close Project: finalizes all activities across all of the process groups to formally close the project or a project phase.
Contract Closure: completes and settles each contract, including the resolution of any open items, and closes each contract applicable to the project or a project phase.
A program module, once completed, can be taken up for testing while other programming is going on.
While scheduling, the project manager has to answer questions such as:
How will the total person-month effort be distributed over these activities, i.e., how much time will each activity take and how many persons with which types of skills will be required for each activity? Some of the persons required for a software development project could be Systems Analysts, Programmers, DBAs, Testers, Documentation people, etc.
When will each activity start and finish?
What additional resources are required to complete the activity? Does the activity require any special software or equipment?
What will indicate the completion of an activity? E.g., completion of hardware installation will be indicated by the client's sign-off on the installation report given by the hardware vendor at various locations.
At what points will the management review the project? (In project management terminology, these are called milestones.) For example, unless hardware installation and operating system installation are over, application software cannot be installed. So, hardware and OS installation will be treated as a milestone.
Gantt Charts
Gantt Charts are prepared to schedule the tasks involved in the software
development. They show when tasks begin and end, what tasks are undertaken
concurrently, and what tasks have to be done serially. They help to identify the
consequences of early and late completion of tasks. The following is a sample
Gantt Chart:
ID  Task Name                Duration  Start         Finish
1   Project Initiation       7 days    Mon 7/23/07   Tue 7/31/07
2   Finalisation of Project  2 days    Wed 8/1/07    Thu 8/2/07
3   Feasibility Study        15 days   Fri 8/3/07    Thu 8/23/07
4   Acceptance of FS         7 days    Fri 8/24/07   Mon 9/3/07
5   SRS                      30 days   Tue 9/4/07    Mon 10/15/07
6   Acceptance of SRS        7 days    Tue 10/16/07  Wed 10/24/07
7   Programming              60 days   Thu 10/25/07  Wed 1/16/08
8   Testing                  70 days   Mon 11/5/07   Fri 2/8/08
9   Implementation           7 days    Mon 2/4/08    Tue 2/12/08
10  Go Live                  2 days    Wed 2/13/08   Thu 2/14/08

Fig 4.5 Schedule

[Fig 4.6: Gantt chart plotting the above activities against the months July 2007 to February 2008.]
In the above diagram, activities like project initiation, finalization of project,
feasibility study are serial activities. Activities like programming and testing are
parallel activities.
PERT: Program Evaluation and Review Technique
PERT represents activities in a project as a network. It indicates the sequential and
parallel relationship between activities.
PERT Terminology
Activity
An activity is a portion of the project that requires resources and time to complete.
The activity is represented by an arrow. Fig 4.7 shows activities A to H.
Event
An event is the starting or end point of an activity. It does not consume resources
or time. It is represented by the starting circle in fig 4.7.
Predecessor activity
Activities that must be completed before another activity can begin are called
predecessor activities for that activity. In fig 4.7, Activity A is the Predecessor
Activity for Activity E.
Successor activity
Activities that are carried out after an activity is completed are known as successor
activities. In fig 4.7, Activity E is the successor activity of Activity A while Activity G
is the successor activity for Activities F, C and D.
Critical Path
The critical path is the longest path through the network; it determines the shortest time in which the project can be completed. Maximum control is required over the completion of any activity on the critical path, because if any activity on this path gets delayed, the whole project will be delayed.
Activities on the critical path have zero slack.
The critical path is found by working forward through the network, computing the
earliest possible completion time for each activity, and thus earliest possible
completion time for the project. Taking this as the completion time of the project,
working backwards the latest completion time of each activity is found. The path
on which activities have the same earliest and latest completion time is the critical
path, where the slack is zero.
For effective monitoring of the project, activities in the critical path have to be
closely monitored. When there is contention for a resource between activities,
critical path activities should be given preference. If the duration of the project has
to be reduced, then it should be seen how activities in the critical path can be
crashed, that is, how their duration can be reduced.
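The forward and backward passes described above are easy to express in code. The following Python sketch computes the earliest and latest times and lists the zero-slack activities that form the critical path; the four-activity network, durations and dependencies are purely illustrative.

# Hypothetical network: durations in days, with predecessor relationships.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for act in ["A", "B", "C", "D"]:          # activities in topological order
    ES[act] = max((EF[p] for p in predecessors[act]), default=0)
    EF[act] = ES[act] + durations[act]

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
successors = {a: [b for b in durations if a in predecessors[b]] for a in durations}
LF, LS = {}, {}
for act in reversed(["A", "B", "C", "D"]):
    LF[act] = min((LS[s] for s in successors[act]), default=project_duration)
    LS[act] = LF[act] - durations[act]

# Activities whose slack (LS - ES) is zero form the critical path.
critical = [a for a in durations if LS[a] - ES[a] == 0]
print(project_duration, critical)         # 12 days, ['A', 'B', 'D']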
Time Box Management
A time box is a limited time period within which a well-defined deliverable must be
produced with given resources. The deadline to produce the deliverable is fixed
and cannot be changed. The difference between a time box and classical progress
control is that when a time box is used, the scope of the deliverable is one of the
variables of project management. The quality, however, is never a variable.
The project manager must continuously weigh the trade-off between the scope and
quality of the deliverable and the time limit for accomplishing the work. If the scope
of the deliverable cannot be further reduced and/or its quality is not high enough,
the time box cannot be met.
Time Box Management Requirements
The following requirements must be met to conduct a project within a time box:
CASE: Computer Aided Software Engineering
CASE tools support the software development life cycle and fall into three categories:
Upper CASE: These tools are useful in the early stages of the life cycle. Tools
that help in defining application requirements fall in this category.
Middle CASE: These tools address design needs in the middle stages of the SDLC. Tools that help in designing screen and report layouts, and data and process design, fall in this category.
Lower CASE: The later parts of the life cycle make use of these tools. These
use design information to generate program codes.
Advantages of CASE
Since CASE strictly follows SDLC, its use enforces discipline in the steps of
SDLC.
The standardization / uniformity of processes can be achieved.
Since CASE tools generate the inputs of each stage from the outputs of the previous stage, consistency of application quality can be ensured.
Tasks such as diagramming, which are monotonous, need not be done by the programmer and can be left to the CASE tool.
This frees the programmer for more productive tasks; thus development time can be shortened and cost economy achieved.
Stage outputs and related documentation are created by the tool.
Disadvantages of CASE
CASE tools are costly, particularly the ones that address the early stages of
the life cycle.
Use of CASE tools requires extensive training.
Auditor's Role in Project Management
The auditor will evaluate the project and carry out audit activities to get answers on
the following questions:
1. Are project risks identified in the project and are they appropriate? Is there a
risk mitigating plan? Are Project risks documented in Project Charter or Project
initiation document?
2. Does the project provide for a sufficient budget, and does it have a project
sponsor?
3. Does the project have a plan that is divided into PBS/WBS? Are the roles and
responsibilities adequately defined and allocated / communicated to project
personnel?
4. Does the project have a quality management activity as an important
milestone?
5. Are the stakeholders and customers (internal and external) identified and is the
customer voice planned in the project?
6. Does the project have a project office, organizational structure, and tools to
monitor and manage it? Has the project manager been appointed?
7. Does the project require interaction with other projects running in a company?
How has this interaction / intercommunication within projects been planned?
8. Does the project have a change management process in place for project WIP and deliverables?
9. Does the project have defined templates / project directory structure on
computers and other necessary paraphernalia?
10. How is the project being monitored for time and cost adherence? What
corrective actions are being taken for lapses?
Based on such questions, the IS auditor will have to develop and conduct tests,
interviews, and other verification mechanisms to form his opinion about the project
under consideration. If the auditor has an independent role, he will submit his
opinion formally to senior management and / or to the project manager. He may opt to rank the project based on its performance; e.g., the ranking could be excellent, good, satisfactory or poor. The auditor may choose to use a scale of, say, 1 to 10 to
rate the project. Evidence collection in the audit of a project management activity
could be a tedious task as many times the evidence would be of a subjective kind.
Summary
The two important issues to be addressed by a project manager in the process of
software development are time and cost overruns. As estimates are often inaccurate, projects tend to need more time and money than planned.
A continuous effort to refine the effort estimation procedures has resulted in the
development of many system development and project management tools. These
enable the developer to represent the size and complexity of the application and
evaluate the project cost / time estimate on the basis of number of inputs, outputs,
files, queries and interfaces that the application is likely to have.
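As an illustration of this estimation approach, the Python sketch below computes an unadjusted function point count from such counts, using the commonly cited average complexity weights. The counts and the productivity figure are hypothetical, and a real count would also apply low/average/high weights per component and a value adjustment factor.

# Illustrative counts for a proposed application.
counts  = {"inputs": 20, "outputs": 15, "queries": 10, "files": 6, "interfaces": 4}
# Commonly cited average complexity weights per component type.
weights = {"inputs": 4, "outputs": 5, "queries": 4, "files": 10, "interfaces": 7}

unadjusted_fp = sum(counts[k] * weights[k] for k in counts)
print(unadjusted_fp)                    # 20*4 + 15*5 + 10*4 + 6*10 + 4*7 = 283

# With a hypothetical productivity of 10 function points per person-month:
print(unadjusted_fp / 10, "person-months")   # 28.3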
Some widely used approaches are Gantt Chart, PERT (Program Evaluation Review Technique) and CPM (Critical Path Method).
Gantt Chart shows when the tasks begin and end, what tasks can be undertaken
concurrently, and what tasks must proceed serially.
Questions:
1. Project Management cycle involves:
a. Project Initiation -> Project Planning -> Project Execution -> Project Control -> Project Closing
b. Project Initiation -> Project Execution -> Project Planning -> Project Control -> Project Closing
c. Project Planning -> Project Initiation -> Project Execution -> Project Control -> Project Closing
d. Project Initiation -> Project Planning -> Project Control -> Project Execution -> Project Closing
4. CPM stands for ___________.
a. Common Path Method
b. Common Path Measure
c. Critical Path Method
d. Critical Path Measure
5. The deadline to produce the deliverable is ________ and ________ be changed.
a. Variable, Can
b. Variable, Cannot
c. Fixed, Can
d. Fixed, Cannot
6. Time box Management is useful for _____ type of SDLC model where a time box
can be used to limit the time available for producing a working system.
a. Prototype
b. Spiral
c. RAD
d. Waterfall
7. ____________ is the process where decisions are made on how to approach,
plan, and execute risk management activities.
a. Risk Identification
b. Risk Management Planning
c. Risk Response Planning
d. Risk Monitoring and Control
8. ____________ prioritizes risk for future analysis by analyzing the probability of
occurrence and impact. Qualitative Risk Analysis is commonly first engaged
within the planning process group.
a. Qualitative Risk Analysis
b. Quantitative Risk Analysis
c. Cumulative Risk Analysis
d. Calculative Risk Analysis
9. Code generators generate program codes on the basis of parameters defined by
__________ or data flow diagrams which aid in improving programmer
efficiency.
a. System Programmer
b. System Developer
d. MS-PowerPoint
16. ______________ ascertains the options and action plans to enhance
opportunities and mitigate threats.
a. Risk Identification
b. Risk Monitoring
c. Risk Response Planning
d. Risk Control
17. ___________ involves overseeing the effectiveness of risk responses,
monitoring residual risks, identifying and documenting new risks, and assuring
that risk management processes are followed.
a. Risk Identification
b. Risk Monitoring & Control
c. Risk Response Planning
d. Risk Management Planning
18. The ________ is a UK based project management method, which mandates
the use of product based planning.
a. Prince2
b. Pride
c. LogFRAME
d. None of the above
19. MITP (Managing the Implementation of the Total Project) is a project management methodology developed by _____________.
a. Metasoft
b. Infotronix
c. Microsoft
d. IBM
20. _________ Methodology was originally developed by the United States
Department of Defense and is an analytical tool that is used to plan, monitor
and evaluate projects.
a. Logical Framework
b. Physical Framework
c. Network Framework
d. System Framework
Answers:
1. (a)
2. (d)
3. (d)
4. (c)
5. (d)
6. (c)
7. (b)
8. (a)
9. (d)
10. (a)
11. (b)
12. (c)
13. (b)
14. (a)
15. (a)
16. (c)
17. (b)
18. (a)
19. (d)
20. (a)
21. (b)
5 Specialised Systems
Learning Objectives
1. An understanding of AI (Artificial Intelligence) that includes:
DSS frameworks
Design, development and implementation issues in DSS
DSS trends
Humans can, by improvising, think of things which they may not have experienced (e.g., story writing)
They can use reasoning to solve problems
They can learn from experience: That is, if a person burns his hands on a small
fire, he will be careful next time.
They can be creative and use their imagination to make paintings, songs, stories,
films, etc.
They can handle ambiguous or incomplete information. For example, in the word 'informtion', the letter 'a' is missing, but human beings can still understand it. AI tries to achieve the same through a computer.
AI Applications
The applications of AI can be classified into three major categories: Cognitive
Science, Robotics and Natural Languages.
Cognitive Science: This is an area based on research in disciplines such as biology, neurology, psychology, mathematics and allied disciplines. It focuses on how the human brain works and how humans think and learn. Applications of AI in cognitive science are:
Intelligent Agents: An intelligent agent is software that uses a built-in and learned knowledge base about a person or process to make decisions and accomplish tasks in a way that fulfils the intentions of the user. Wizards found in MS Office are intelligent agents. Wizards are built-in capabilities that can analyze how an end user uses a software package and offer suggestions on how to complete various tasks.
[Figure: AI application areas - Cognitive Science (Expert Systems, Fuzzy Logic, Learning Systems, Neural Networks, Intelligent Agents), Natural Language, Robotics and Virtual Reality.]
Auditor's Role
The IS auditor needs to be conversant with the controls relevant to these systems when they are used as an integral part of the organization's business processes or critical functions, and with the level of experience or intelligence used as a basis for developing the software. The errors produced by such systems would be more critical than the errors produced by traditional systems.
Expert Systems
One of the most practical and widely implemented applications of artificial intelligence
in business is the development of the ES (Expert System). An expert system is a knowledge-based information system that uses its knowledge about a specific, complex application area to act as an expert consultant. It not only provides answers in the specific application area, but also explains its reasoning process and conclusions.
User Interface: This allows the user to interact with the system. The user
presents the problem for which solutions are delivered to him. There are different
ways in which a dialogue takes place between the user and the system. The
most common instance is the one in which the system runs a session where a
set of questions are fired to the user. The user in turn has to reply with relevant
information. Here all the initiative lies with the system because it runs the whole
show. The ultimate interface is the one that allows the user to take the initiative. For example, consider the following dialogue to determine what a patient is suffering from:
System: What is the patient's name?
User: Mithil Joshi
System: What is the patient's age?
User: 35 years
System: Is the patient male or female?
User: Male
System: Does the patient have fever?
User: Yes.
System: Does he get fever with a lot of shivering?
User: Yes.
System: It seems that the patient Mithil Joshi is suffering from Malaria. He should be
given Malaria treatment of chloroquine.
Inference Engine: The inference engine is that part of the program which reasons and determines the application of knowledge in the knowledge base to the facts presented in the user interface. The inference engine is the active component of an expert system, and its main job is to mimic human reasoning so that the user can understand why the system has chosen a particular step. In the above example, the reasoning that selects the questions to ask and draws the conclusion is the work of the inference engine.
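The division of labour between the user interface (which gathers facts) and the inference engine (which applies rules from the knowledge base) can be sketched in a few lines of Python. The rule, questions and conclusion below are illustrative only, not medical advice.

# Knowledge base: (required facts, conclusion) pairs.
knowledge_base = [
    ({"fever": "yes", "shivering": "yes"},
     "The patient may be suffering from Malaria."),
]

# User interface: questions the system fires at the user.
questions = {
    "fever": "Does the patient have fever? ",
    "shivering": "Does he get fever with a lot of shivering? ",
}

facts = {}
for key, question in questions.items():       # gather facts from the user
    facts[key] = input(question).strip().lower()

for required, conclusion in knowledge_base:   # inference: match facts to rules
    if all(facts.get(k) == v for k, v in required.items()):
        print(conclusion)
        # Explanation facility: show why the conclusion was reached.
        print("Because:", ", ".join(f"{k}={v}" for k, v in required.items()))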
Benefits of Expert Systems
The knowledge and experience of the expert is captured before he leaves the
organization.
The codified knowledge in a central repository makes it easy to share with less experienced personnel in the application area.
This ensures consistent and quality decisions.
It also enhances personnel productivity.
Expert systems find application in areas such as:
Portfolio analysis
Insurance
Demographic forecasts
Help Desk operations
Medical diagnostics
Maintenance scheduling
Communication network planning
Material selection
Data Warehouse
A Data Warehouse, as defined by W. H. Inmon, is 'a subject-oriented, integrated, time-variant, non-volatile collection of data in support of management's decision-making process'.
Another definition, given by Wayne Eckerson, is that it is 'a central repository of clean, consistent, integrated and summarized information, extracted from multiple operational systems, for on-line query processing'.
The following diagram explains this concept: operational databases are application-oriented (e.g., Loans, Credit Card, Trust, Savings), whereas the data warehouse is subject-oriented (e.g., Customer, Vendor, Product, Activity). A subject-oriented store can answer questions such as: what kinds of items in other departments does a shoe purchaser buy on the same day?
A data warehouse has the following characteristics:
It is a stand-alone application.
It has a repository of information which may be integrated from several heterogeneous operational databases.
It stores large volumes of data which are frequently used for DSS.
It is physically stored separately from the organization's operational databases.
It is relatively static, with infrequent updates.
It is a read-only application.
The data warehousing process involves the following steps:
Prepare data
Transform data
Load data
Model data
Establish access to data warehouse
Retrieve data
Analyse data
Archive data
Destroy data from data warehouse
2. Non-standard data formats, e.g., phone numbers stored as 91-22-4308630 or 24308630 or 9819375598 or 98193-75598
3. Non-atomic data fields, e.g., a name stored as 'Sachin Ramesh Tendulkar' instead of Sachin in the first name, Ramesh in the middle name and Tendulkar in the surname
So, all such data items need cleaning and transformation before they are loaded into the warehouse.
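A minimal Python sketch of such cleaning and transformation, covering the two cases above; the field layouts and function names are assumed for illustration.

import re

def clean_phone(raw):
    """Reduce a phone number to digits only, e.g. '98193-75598' -> '9819375598'."""
    return re.sub(r"\D", "", raw)

def split_name(full_name):
    """Split 'Sachin Ramesh Tendulkar' into first name, middle name and surname."""
    parts = full_name.split()
    first = parts[0]
    surname = parts[-1] if len(parts) > 1 else ""
    middle = " ".join(parts[1:-1])
    return first, middle, surname

print(clean_phone("91-22-4308630"))             # 91224308630
print(split_name("Sachin Ramesh Tendulkar"))    # ('Sachin', 'Ramesh', 'Tendulkar')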
In contrast to the OLTP (Online Transaction Processing) applications of the operational database, the data warehouse is subject to OLAP (Online Analytical Processing). OLAP permits many operations, such as drill-down, consolidation (roll-up), slicing and dicing.
All these capabilities, coupled with the graphical output capability, support the top
management in their strategic decision making.
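For illustration, the pandas sketch below mimics three of these OLAP operations (consolidation, drill-down and slicing) on a small hypothetical sales table; the data and column names are invented.

import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "product": ["Shoes", "Socks", "Shoes", "Socks"],
    "month":   ["Jan", "Jan", "Jan", "Feb"],
    "amount":  [100, 40, 80, 30],
})

# Consolidation (roll-up): total sales per region.
print(sales.groupby("region")["amount"].sum())

# Drill-down: from the region summary into product-level detail.
print(sales.groupby(["region", "product"])["amount"].sum())

# Slicing: only the January data.
print(sales[sales["month"] == "Jan"])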
Data warehouses find application in areas such as market analysis. The management
attempts to identify patterns from the warehouse and make decisions on that basis.
The process of identifying patterns in a data warehouse is called data mining. When a data warehouse is created not for the entire enterprise but only for a part of it, for a specific function, it is called a data mart.
Auditor's Role:
IS Auditor should keep in mind the following while auditing data warehouse:
1.
2.
3.
4.
5.
6.
Data Mining
Data Mining is the process of recognizing patterns among the data in the data warehouse. IS Auditors are likely to place more reliance on data mining techniques to assess audit risk and to collect and evaluate audit evidence by:
1. Detecting errors and irregularities
2. Knowledge discovery by better assessing safeguarding of assets, data integrity
and effective and efficient operation of the system
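Two simple examples of such tests, sketched in Python with pandas on a hypothetical transaction table (the column names and figures are assumptions), are duplicate-payment detection and sequence-gap detection.

import pandas as pd

txns = pd.DataFrame({
    "voucher": [101, 102, 104, 105, 105],
    "vendor":  ["A", "B", "A", "C", "C"],
    "amount":  [500, 750, 500, 120, 120],
})

# Possible duplicate payments: same vendor and amount appearing more than once.
dupes = txns[txns.duplicated(subset=["vendor", "amount"], keep=False)]
print(dupes)

# Gaps in the voucher-number sequence: a possible sign of missing transactions.
expected = set(range(txns["voucher"].min(), txns["voucher"].max() + 1))
print(sorted(expected - set(txns["voucher"])))   # [103]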
DSS Frameworks
Frameworks are generalizations about a field that help to put specific cases and
ideas into perspective.
A DSS framework can be defined along two dimensions: the level of management activity (1. Operational Control, 2. Management Control, 3. Strategic Planning) and the degree of problem structure (1. Structured, 2. Semi-structured, 3. Unstructured).
As a problem moves up the management decision-making hierarchy, the complexity of the model increases.
model increases.
DSS Trends
With more sophisticated capabilities built into simple packages such as MS-Excel, the use of DSS is now within the reach of small and medium sized organizations.
Such easy-to-use packages have increased the availability of design talent.
The capability of PC-based packages for graphical outputs has made the output formats user-friendly.
With capabilities to incorporate expert systems in DSS, the quality of the models
is significantly better.
POS (Point of Sale) Systems
POS terminals support electronic payment and electronic sales. They provide significant cost and time savings as compared to manual methods. They also eliminate errors that are inherent in a manual system (where a user is liable to make transcription errors while entering data from a document into the system). POS may involve batch processing or online processing. POS terminals are generally observed in big shopping malls and departmental stores.
In the case of batch processing, the IS auditor should evaluate the batch controls implemented by the organization, check whether they are in operation, and review exception transaction logs. The internal control system should ensure the accuracy and completeness of the transaction batch before it updates the corporate database. In the case of an online updating system, the IS auditor will have to evaluate the controls for accuracy and completeness of transactions; these controls should be more effective and exhaustive than the controls in batch transfer.
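A minimal sketch of a batch control check of the kind described above: the record count and value total of the received batch are compared with the control figures sent with it before the corporate database is updated. The header fields, records and tolerance are illustrative assumptions.

batch_header = {"record_count": 3, "value_total": 1450.00}
records = [
    {"txn_id": 1, "amount": 500.00},
    {"txn_id": 2, "amount": 750.00},
    {"txn_id": 3, "amount": 200.00},
]

count_ok = len(records) == batch_header["record_count"]
value_ok = abs(sum(r["amount"] for r in records) - batch_header["value_total"]) < 0.01

if count_ok and value_ok:
    print("Batch accepted for update.")
else:
    print("Batch rejected: log exception for management review.")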
Auditor's Role
The following are the guidelines for internal controls of ATM system which the auditor
shall have to evaluate and report:
a. Only authorized individuals have been granted access to the system.
b. The exception reports show all attempts to exceed the limits and reports are
reviewed by the management.
c. The bank has ATM liability coverage for onsite and offsite machines
d. Controls on proper storage of unused ATM cards; controls on their issue only against a valid application form from a customer; control over custody of unissued ATM cards; return of old / unclaimed ATM cards; control over activation of PINs.
e. Controls on unused PINs; procedure for issue of PINs; return of PINs of returned ATM cards.
f. Controls to ensure that PINs do not appear in printed form with the customer's account number.
g. Access control over retrieval or display of PINs via terminals.
h. Mail cards to customers in envelopes with a return address that does not identify the bank. Mail cards and PINs separately, with a sufficient period of time (usually three days) between mailings.
EDI (Electronic Data Interchange) offers the following benefits:
Reduction in paperwork
Improved flow of information
Fewer errors while transmitting / exchanging information
Speedy communication due to electronic transmission
Improvement in carrying out a business process
EDI Translator: This device translates the data between the standard
format and the trading partner's proprietary format.
Applications Interface: This interface moves the electronic transactions to
or from the applications systems and performs data mapping.
3. Application System: The programs that process the data sent to, or received from, the trading partner, e.g., purchase orders from a purchasing system.
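The data-mapping idea can be illustrated with a small Python sketch that flattens a purchase order from the application system into delimited segments before translation; the segment layout is invented for illustration and is not a real X.12 or EDIFACT message.

# A purchase order as the application system might hand it over.
purchase_order = {
    "po_number": "PO-1042",
    "vendor": "ACME",
    "lines": [("WIDGET", 10, 4.50), ("BOLT", 100, 0.10)],
}

# Map the order onto simple delimited segments (illustrative layout only).
segments = [f"PO*{purchase_order['po_number']}*{purchase_order['vendor']}"]
for item, qty, price in purchase_order["lines"]:
    segments.append(f"LIN*{item}*{qty}*{price}")

message = "~".join(segments)
print(message)   # PO*PO-1042*ACME~LIN*WIDGET*10*4.5~LIN*BOLT*100*0.1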
[Figure: EDI link between Company A and Company B, each with its own EDI translator. The physical-layer constituents of the link include electronic mail (X.435, MIME), point-to-point connections (FTP, TELNET) and HTTP.]
EDI Standards
There are two competing and mutually incompatible standards for EDI: ANSI X.12 and UN/EDIFACT.
Features of UN/ EDIFACT:
1. This standard was originally developed in Europe and was adopted by the United Nations.
2. It is more flexible than X.12.
3. Flexibility leads to frequent versions. Different companies may have different versions, leading to conflicts.
4. It has been adopted in areas where X.12 was not adopted.
Both the above standards are relatively expensive and have found acceptance in
large organizations, but do not address the needs of small and medium size
enterprises.
UN/XML
Now an effort is on to replace both X.12 and EDIFACT with XML (eXtensible Markup Language). XML messages can be read in any browser (like Microsoft Internet Explorer). This reduces the cost of EDI and benefits SMEs. These initiatives go by various names, such as UN/XML and ebXML (electronic business XML). The following are the features of this emerging world standard:
1.
2.
3.
4.
5.
6.
EDI Risks and Controls:
Risk of EDI                           EDI Controls
Loss or duplication of transactions   Encryption
Loss of confidentiality               Encryption
Non-repudiation                       Digital signature
g. Ensure that all inbound transactions are logged. The total of the inbound transactions received by the EDI system must equal the number and value of transactions forwarded and processed. Discrepancies, if any, must be accounted for, satisfactorily explained and reconciled.
h. Ensure the continuity of message serial numbers.
i. Review the functioning of the EDI system to ensure that functional
acknowledgements are properly sent. The sender should match the functional
acknowledgements with the log of the EDI messages sent.
1. Information Sharing:
The WWW provides an effective medium for trading. An organization can devise a Web site with product catalogs that prospective customers can search electronically, and can obtain data on which products were requested in searches and how often these searches were made. Visitors to the Web site can also be requested to provide information about themselves, which helps the site present information to the user based on demographic data related to product searches.
2. Payment:
Payments can be made through the internet by using:
a. a credit card,
b. an electronic cheque, which has all the features of a paper cheque. It functions as a message to the sender's bank to transfer funds; like a paper cheque, the message is given initially to the receiver, who in turn endorses the cheque and presents it to the bank to obtain funds.
c. digital cash, a system in which the currency is nothing more than a string of digits. Banks issue these strings of digits and debit the customer's account by the value of the currency (tokens) issued. They validate each token with their digital stamp before transmitting it to the customer. When the customer has to spend some e-cash, he transmits the proper amount of tokens to the merchant, who then relays them to the bank for verification and redemption. To ensure that each token is used only once, the bank records the serial number of each token as it is spent. Small-denomination digital tokens used for payment are called microcash. Microcash can be used for items such as a stock quote, a weather report, an image or even a chapter from an electronic book.
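The token mechanics described here can be sketched in Python as follows. Real digital cash schemes use blind signatures; this simplified sketch uses a keyed hash as the bank's 'digital stamp' and a set of spent serial numbers to block double spending. The key and serial numbers are illustrative.

import hmac, hashlib

BANK_KEY = b"bank-secret-key"   # assumed secret held by the bank
spent_serials = set()           # serial numbers of tokens already redeemed

def issue_token(serial, value):
    # The bank stamps the token with a keyed digest over its serial and value.
    stamp = hmac.new(BANK_KEY, f"{serial}:{value}".encode(), hashlib.sha256).hexdigest()
    return {"serial": serial, "value": value, "stamp": stamp}

def redeem_token(token):
    expected = hmac.new(BANK_KEY, f"{token['serial']}:{token['value']}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["stamp"]):
        return "rejected: invalid stamp"
    if token["serial"] in spent_serials:
        return "rejected: token already spent"
    spent_serials.add(token["serial"])   # record the serial to prevent reuse
    return "redeemed"

t = issue_token(serial=1001, value=5)
print(redeem_token(t))   # redeemed
print(redeem_token(t))   # rejected: token already spent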
3. Fulfillment:
Many companies make money by generating, transferring or analyzing information.
Documentation, program patches and upgrades are also well suited to Internet based
distribution.
E-commerce offers the following benefits:
Saving in cost
Saving in transaction time
No limitations of geographical boundaries.
Large customer base for suppliers and large choices for customers
No restriction of timings
Reduction in Storage or holding cost
Different roles for the intermediaries
E-commerce risks are those connected with the transfer of information through the Internet:
Risk                            Control
1. Confidentiality of message   Encryption with receiver's public key
2. Identity of the sender       Digital signature (encryption with sender's private key)
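For illustration, the Python sketch below (assuming the pyca/cryptography library is available) shows both controls: the sender signs with his own private key to establish identity, and encrypts with the receiver's public key for confidentiality. Keys and the message are generated for the example.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Order 100 units of item X"

# Sender signs with his own private key (identity), then encrypts with the
# receiver's public key (confidentiality).
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
ciphertext = receiver_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Receiver decrypts with his own private key and verifies the signature with
# the sender's public key; verify() raises InvalidSignature on tampering.
plaintext = receiver_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print(plaintext)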
ERP (Enterprise Resource Planning)
ERP systems integrate the business processes of an enterprise, covering functions such as production, purchase, sales, finance, human resources, warehousing and logistics. Presently, there are many ERPs available in the market, like SAP, Oracle Applications, BAAN, PeopleSoft, etc.
ERP systems save a lot of time by recording each business transaction only once. For example, raw materials received are recorded only by the receiving department, while dispatches are recorded by the warehouse on the basis of the physical outward movement of finished goods. Collections from customers, and debit or credit notes relating to sales, are entered by the sales department. The job of the accounting and finance department then becomes one of reconciliation and control. The information is accessible to anybody who has authorized access under the rules.
Auditor's Role
1. The IS Auditor can refer to the implementation or customization procedure to ensure that the appropriate controls available in the ERP are enabled during customization.
2. As the information is recorded only through authorized persons, responsibility
can be fixed for the transactions recorded and cancelled.
3. The Auditor is required to evaluate internal controls in the operation of the ERP
as designed by the management and available in the respective documentation.
With the help of interviews and observation he ensures that these are actually
followed at the departmental level.
4. Conducting an IS audit through an ERP system requires considerable learning of the ERP under consideration, especially its terminology and how to navigate through the system.
Summary
Specialized computer-based application systems or intelligent systems refer to a
framework that integrates both hardware and software infrastructure in such a way
that it serves as a complete solution to problems that entail human intelligence. In
other words, these systems simulate the human brain in problem-solving
methodologies. Continuing research and study in the field of specialized computer-based systems has resulted in the evolution of AI (Artificial Intelligence), ES (Expert
System), and DSS (Decision Support System). AI is an attempt to simulate intelligent
behaviour in computer systems that makes a computer think and reason. Further AI
will make use of this reasoning for solving problems.
An ES is a knowledge-based information system that uses its knowledge about a
specific, complex application area to act as an expert consultant. It not only provides
answers in the specific application area, but also explains the reasoning process and
conclusions.
Typically, an expert system comprises a user interface, an inference engine and a knowledge base.
Expert systems are employed in portfolio analysis, insurance, demographic forecasts,
help desk operations, medical diagnostics, maintenance scheduling, communication
network planning and material selection.
DSS are information systems that provide interactive information support to
managers with the use of analytical models. DSS are designed as ad hoc systems
and modeled for specific decisions of individual managers. For example, a
spreadsheet package can be used for creating DSS models.
Establishing and ensuring proper functioning of intelligent systems involves efficient
data object management. This calls for a data management design that is capable of
linking different interfaces (both software and hardware), and managing historical and
statistical data. Such data management systems are called data warehouses.
Questions:
1. _________ is an attempt to duplicate intelligent behaviour of the human beings
in computer system on the basis of predetermined set of rules.
a. Artificial Intelligence
b. Expert System
c. Fuzzy Logic
d. Intelligent System
2. _____ are the systems that can modify their behaviour based on information they
acquire as they operate. Chess playing system is one such popular application.
a. Expert System
b. Fuzzy Logic
c. Intelligent System
d. Learning System
3. ________ are systems that can process data that are ambiguous and
incomplete. This permits them to solve unstructured problems.
a. Learning System
b. Fuzzy Logic
c. Neural Network
d. Intelligent Agents
4. Credit risk determination could be a good application for _________ systems.
a. Fuzzy Logic
b. Expert System
c. Neural Network
d. Learning System
5. _______ is software that uses a built-in and learned knowledge base about a person or process to make decisions and accomplish tasks in a way that fulfils the intentions of the user.
a. Intelligent Agent
b. Fuzzy Logic
c. Expert System
d. Artificial Intelligence
6. Interactive voice response is an application of ___________.
a. Fuzzy Logic
b. Expert System
c. Natural Language
d. Robotics
7. _______ involves using multi sensory human-computer interfaces that enable
humans to experience computer simulated objects, space and activities, as they
actually exist.
a. Interactive voice response
b. Expert System
c. Virtual Reality
d. Fuzzy Logic
8. In an expert system, ________ is that part of the program which reasons and determines the application of knowledge in the knowledge base to the facts presented in the user interface.
a. User Interface
b. Inference Engine
c. Knowledge Base
d. None of the above
9. _________ application of expert system will recommend a set of audit
procedures to be conducted on the basis of subject characteristics and may help
to evaluate the internal control system of an organization.
a. Risk Analysis
b. Evaluation of Internal Control
c. Audit Program Planning
d. Technical Advice
10. _________ is a Subject - oriented, integrated, time-variant, non-volatile,
collection of data in support of managements decision making process.
a. Data Warehouse
b. Data Mining
c. Both (a) and (b)
d. None of the above
11. ________ is the OLAP capability to move from summary information into progressively deeper layers of detail, depending on the need.
a. Slicing
b. Dicing
c. Drill-down
d. Consolidation
12. DSS stands for ___________.
a. Decision Support System
b. Decision Service Support
c. Decision Support Service
d. Decision Service System
13. _________ is a specialized form of the point of sale terminal.
a. PIN
b. PAN
c. ATM
d. None of the above
14. EDI stands for ___________.
a. Electronic Digital Data
b. Electronic Data Definition
c. Electronic Data Interface
d. Electronic Data Interchange
15. ___________ converts data from a business application into a standard format for transmission over the communication network, and converts received data from the EDI format back into the proprietary format of the receiver organization.
a. Communication Software
b. Translation Software
c. EDI Standards
d. None of the above
16. Payments can be made through Internet by using
a. Electronic Cheque
b. Digital Cash
c. Credit Card
d. All of the above
17. In the _________ form of e-commerce, transactions take place between industrial manufacturers, wholesalers or retailers.
a. Business to Consumer
b. Consumer to Business
c. Business to Business
d. Business to Government
18. E-commerce risk includes _________.
a. Identity of the sender
b. Integrity of the message
c. Non Acceptance of confidentiality by receiver
d. All of the above
19. In ________, currency is no more than a string of digits.
a. Digital Cheque
b. Credit Card
c. E-Wallet
d. Digital Cash
20. In ____________, an organization can devise a WebSite to include the product
catalogs that can be searched electronically by prospective customers and it can
obtain data on products requested in searches.
a. Information Transfer
b. Information Sharing
c. Information Processing
d. None of the above
21. The errors produced by AI systems would be ________ as compared to the errors produced by traditional systems.
a. Critical
b. Negligible
c. Proportionate
d. None of the above
Answers:
1. (a)
2. (d)
3. (b)
4. (c)
5. (a)
6. (c)
7. (c)
8. (b)
9. (c)
10. (a)
11. (c)
12. (a)
13. (c)
14. (d)
15. (b)
16. (d)
17. (c)
18. (d)
19. (d)
20. (b)
21. (a)
The IS auditor's tasks in reviewing system development include:
Understanding correctly what the organization's objectives are. This can be achieved by discussing with senior management before starting the audit.
Reviewing the system development methodology to ensure the quality of the deliverables. He has to check documentation and other deliverables for adherence to the stated methodology.
Reviewing the change management process and effectiveness of its
implementation.
In the post implementation phase, the delivery and project management team
may cease to exist, so the auditor will have a greater role to play in assessing the
effectiveness of the system.
Reviewing the maintenance procedure and ensuring that adequate
documentation has been maintained for related activities. Lax controls in this
phase can also have an adverse effect on the systems, so the auditor should
continue to monitor the maintenance procedures.
Ensuring production source integrity during the maintenance phase.
The phase-wise review covers:
Feasibility study
System requirement definition
Software acquisition
Detailed design and programming
Testing
Implementation
Post-implementation and maintenance
System change procedures and program migration process
Feasibility study
While reviewing the feasibility study phase, the IS auditor has to consider the
following questions:
Has there been an agreement in the definition of the problem among all
stakeholders?
Was there a genuine need for solution established?
Were alternate solutions considered? Or was the feasibility assessed on the
basis of a single solution?
What was the basis for choosing the solution?
What is the extent of the problem perceived and how extensive is the impact of
the solution likely to be? These give valuable inputs for evaluating feasibility.
Requirements definition
Problem definition
Information flows
The auditor can also evaluate the methodology employed and the compliance
level.
The auditor should also check whether CASE (Computer Aided Software
Engineering) tools were used, because the quality of work is likely to be better in
CASE environments.
The decision to acquire the software should flow from the feasibility study. The
auditor should ensure that it is so.
The auditor should also ensure that the software acquired would meet the overall
design goals of the proposed system, identified during requirement analysis
phase.
The RFP (Request for Proposal) should be checked for adequacy. Details such as transaction volume, database size, turnaround-time and response-time requirements, and vendor responsibilities should be clearly specified in the RFP.
The auditor should also check the criteria for pre-qualification of vendors.
Similarly, there should be sufficient documentation available to justify the
selection of the final vendor / product.
The auditor may also collect information through his own sources on vendor
viability, support infrastructure, service record and the like.
The auditor should thoroughly review the contract signed with the vendor for
adequacy of safeguards and completeness. The contract should address the
contingency plan in case of vendor failures such as, source code availability and
third party maintenance support. He should also ensure that the contract went
through legal scrutiny before it was signed.
CASE environments simplify the tasks that the IS auditor has to perform, because CASE tools ensure quality and consistency of design, a major concern of auditors. In non-CASE environments, the auditor may have to undertake a detailed design review:
The design diagrams should be checked for compliance with standards.
Any change that has been incorporated in the design stage should have
appropriate approvals and this should be checked.
The auditor should check the design for modularity.
The auditor should review the input, processing and output controls of systems.
Design of input screens and output reports is an area, in which auditors can offer
valuable suggestions. The auditor should check the user interface design for
usability, appropriateness, compliance with standards and acceptance by users.
Audit trails are important; the auditor should ensure their availability and adequacy in the design.
In the recommendation of hardware and software choices, the auditor should
look for compatibility, interoperability and scalability conditions.
Flow charts and other such tools should be checked for key calculations. Their
implementation in programs also should be checked with the help of a
programmer who is knowledgeable about the programming language.
Exception data handling is an area that the auditor has to focus on. He has to
test the design and program for such data.
Unit test results should be reviewed. The auditor has to ensure that the 'bugs'
have been fixed.
Testing phase
Implementation phase
It should be ensured that data conversion has been completed and all past data
are available in a format readable by the new software.
Post-implementation review
This review mainly addresses the system's ability to fulfil the objectives that were specified initially. Apart from this, the auditor has to check the following aspects:
Collect documentation of each phase and check for adequacy and completion.
Attend project meetings to check the compliance of the development process.
Advise the team on adequate and cost effective control measures.
Represent the management interest in the team by continuously assessing the
ability of the team to meet targets that have been set.
User Requirements
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
Feasibility Analysis
1. Is the feasibility analysis documented well and clearly?
2. Have departments which will be involved in systems development and operation
been consulted during the feasibility analysis and have recommendations been
included?
3. Does the feasibility analysis reflect any significant differences from original
objectives, boundaries, and interfaces?
4. Is the preliminary design in sufficient detail to support adequately the time and
cost estimates, cost/benefit analysis, and impact study?
5. Does the preliminary design meet user requirements?
6. Does the preliminary design reflect corporate standards?
7. Has a project plan been prepared?
8. Have preliminary time and cost estimates been prepared?
9. Has a preliminary impact study been prepared?
10. Are the conclusions and recommendations supported by the feasibility analysis?
11. Do the recommendations conform with corporate policies and practices?
12. Has the feasibility analysis report been submitted to the management steering
committee for action?
13. Have responsible departments signed off for the feasibility phase?
14. Has the internal auditor prepared a milestone report with opinions and
recommendations for the feasibility analysis phase?
Systems Design
1. Is the systems design documented well and clearly?
2. Have significant changes to the preliminary design been controlled and approved by cognizant authority?
3. Has a detailed work plan been prepared for the design phase?
4. Has the systems development methodology been used effectively?
5. Has the project management and control system been used effectively?
6. Has actual accomplishment been reasonably close to estimates?
7. Are systems development team resources adequate to accomplish objectives?
8. Have time and cost estimates, cost/benefit analysis, and impact study been
updated?
9. Have significant changes to project scope been approved by the management
steering committee?
10. Do detailed functional design features reflect accurately approved detailed user
requirements?
11. Is it reasonable to expect the designed system to be implemented satisfactorily
within the user and data processing environments?
12. Does the design provide adequately for internal control and data security?
13. Does the design provide adequately for requested audit features?
14. Have the requirements for hardware and systems software been developed and
can they be met satisfactorily with resources available or approved for installation?
15. Does the design provide adequately for corporate standards and practices?
16. Have systems design acceptance criteria been prepared?
17. Has a preliminary systems test plan been prepared?
18. Does the design provide adequately for offsite backup and recovery measures?
Systems Specifications
1. Are systems specifications documented well and clearly?
2. Have significant changes to systems design been controlled and approved by
cognizant authority?
3. Has a detailed work plan been prepared for the systems specifications phase?
4. Has the systems development methodology been used effectively during
development of systems specifications?
5. Has the project management and control system been used effectively?
6. Has actual accomplishment during development of systems specifications been
reasonably close to estimates?
7. Are systems development team resources adequate to accomplish objectives?
8. Have time and cost estimates, cost/benefit analysis, and impact study been
updated?
9. Have significant changes to project scope been approved by the management
steering committee?
10. Do systems specifications reflect accurately approved functional design features
and user requirements?
11. Is it reasonable to expect the systems specifications to be implemented
satisfactorily within user and data processing environments?
12. Do the systems specifications provide adequately for internal control and data
security?
13. Do the systems specifications provide adequately for requested audit features?
14. Has an appropriate configuration for hardware and software been selected for
implementation of the systems design and specifications?
15. Have the hardware and software selected been reviewed for adequacy of
internal control, data security, integrity, and dependability?
16. Do systems specifications provide adequately for corporate standards and
practices?
17. Have systems acceptance criteria been updated?
18. Has the systems test plan been updated?
19. Has data administration reviewed systems specifications?
20. Has data security reviewed systems specifications?
21. Has quality assurance reviewed systems specifications?
22. Has data processing operations reviewed systems specifications?
23. Have cognizant user departments reviewed systems specifications?
24. Has the risk analysis been updated?
25. Have systems specifications been submitted to the management steering
committee for action?
26. Have responsible departments signed off for systems specifications?
27. Has the internal auditor prepared a milestone report with opinions and
recommendations for the systems specifications phase?
Systems Development
1. Has a detailed work plan been prepared for the systems development phase?
2. Has the systems development methodology been used effectively during the
systems development phase?
3. Has the project management and control system been used effectively during the
systems development phase?
4. Has actual accomplishment during systems development been reasonably close
to estimates?
5. Have significant changes to systems specifications been controlled and
approved by cognizant authority?
6. Are systems development team resources adequate to accomplish objectives of
systems development phase?
7. Have time and cost estimates, cost/benefit analysis, impact study, and risk
analysis been updated?
8. Have significant changes to project scope been approved by the management
steering committee?
9. Do program specifications and user procedures reflect accurately approved
systems specifications?
10. Do program specifications and user procedures provide adequately for internal
control and data security?
11. Do program specifications and user procedures provide adequately for requested
audit features?
Implementation
1. Has a detailed work plan been prepared for the systems implementation phase?
2. Are the results of the pilot test and acceptance test satisfactory?
3. Has data processing operations conducted a systems turnover evaluation and is
the result satisfactory?
4. Is the system documented adequately?
5. Has internal control review been made?
6. Is the level of internal control satisfactory?
7. Are the results of the parallel test satisfactory?
8. Are the results of the backup and recovery tests satisfactory?
9. Have responsible departments approved the system for implementation?
10. Has the management steering committee approved the system for
implementation?
11. Has the internal auditor prepared a milestone report with opinions and
recommendations for systems implementation?
Post-Implementation
1. Has the internal auditor conducted a detailed review of the system and its
environment at about six months after initial implementation?
2. Is the level of internal control and security adequate?
3. Does the system meet original objectives satisfactorily?
4. Has documentation been maintained current?
5. Has change control been maintained?
6. Has systems logic been evaluated using statistical sampling techniques?
7. Has the internal auditor prepared a report with opinions and recommendations?
Summary
An IS auditor gathers insights from the methodology used for development and
assesses its suitability for the system. This can be also done by interviewing the
project management team. Once the auditor is convinced about its suitability, he can
check compliance with the stated process.
The reviewing process generally entails the phase-wise checks described above.
Questions:
7. On a periodic basis, the auditor should check the following:
a. Procedures for authorizing, prioritizing and tracking system changes
b. Appropriateness of authorizations for selected change requests
c. Existence of program change history
d. All of these
8. The auditor does not undertake this task of identification himself. He goes
through the _______ findings and crosschecks them with _____.
a. analyst's, users
b. users, analyst's
c. programmers, users
d. users, programmers
9. In the post implementation phase, the delivery and project management team
may cease to exist, so the auditor will have a greater role to play in assessing the
_________ of the system.
a. Correctiveness
b. Usefulness
c. Effectiveness
d. None of these
10. ________ controls can also have an adverse effect on the systems, so the
auditor should continue to monitor the maintenance procedures.
a. Lux
b. Lax
c. Pux
d. Pax
11. In _________ analysis, there could be a tendency to understate costs and
overstate benefits.
a. cost-benefit
b. financial
c. cost
d. benefit
12. In the recommendation of ______ and ______ choices, the auditor should look
for compatibility, interoperability and scalability conditions.
a. Input, Output
b. Hardware, Software
c. Data, Information
d. Design, Review
Answers:
1. (c)
2. (a)
3. (d)
4. (d)
5. (d)
6. (c)
7. (d)
8. (a)
9. (c)
10. (b)
11. (a)
12. (b)
13. (c)
14. (a)
15. (a)