1. INTRODUCTION
1.1 Project Overview
The scope of this project is to avoid manual errors and to generate
reports at any point of time so that management can make the right decisions.
The system is a fully GUI-oriented application that takes full advantage of the
Windows environment; users are more comfortable with a GUI interface and
event-driven programming.
Many validation checks are carried out on the various types of
transactions. The reports produced by the package include day-to-day
verification of the details maintained.
Because records are stored electronically rather than on paper, there is
no possibility of a record being damaged by pests or climate, and manpower is saved.
The cost of developing the system is reasonable for the organization,
since there is no longer any need to purchase and maintain paper registers,
which consumes a lot of manpower.

2. SYSTEM ENVIRONMENT
2.1 HARDWARE SPECIFICATION
CPU              : DUAL CORE
PROCESSOR SPEED  : 2.5 GHz
RAM              : 2 GB
HARD DISK        : 250 GB
KEYBOARD         : 105 KEYS
MOUSE            : LOGITECH MOUSE
DISPLAY          : LED COLOR MONITOR
PRINTER          : HP DESK JET

2.2 SOFTWARE SPECIFICATION


FRONT END        : VB.NET
BACK END         : MS ACCESS
OPERATING SYSTEM : WINDOWS XP / WINDOWS 7

2.2.1 Front End VB .Net


Microsoft .NET Framework:
The .NET Framework is a new computing platform that simplifies
application development in the highly distributed environment of the
Internet. The .NET Framework is designed to fulfill the following
objectives:
To provide a consistent object-oriented programming environment
whether object code is stored and executed locally, executed locally
but Internet-distributed, or executed remotely.
To provide a code-execution environment that minimizes software
deployment and versioning conflicts.
To provide a code-execution environment that guarantees safe
execution of code, including code created by an unknown or semi-trusted third party.
To provide a code-execution environment that eliminates the
performance problems of scripted or interpreted environments.

To make the developer experience consistent across widely varying
types of applications, such as Windows-based applications and Web-based applications.
To build all communication on industry standards to ensure that code
based on the .NET Framework can integrate with any other code.
The .NET Framework has two main components: the common
language runtime and the .NET Framework class library. The common
language runtime is the foundation of the .NET Framework. You can think
of the runtime as an agent that manages code at execution time, providing
core services such as memory management, thread management, and
remoting, while also enforcing strict type safety and other forms of code
accuracy that ensure security and robustness. In fact, the concept of code
management is a fundamental principle of the runtime. Code that targets the
runtime is known as managed code, while code that does not target the
runtime is known as unmanaged code. The class library, the other main
component of the .NET Framework, is a comprehensive, object-oriented
collection of reusable types that you can use to develop applications ranging
from traditional command-line or graphical user interface (GUI) applications
to applications based on the latest innovations provided by ASP.NET, such
as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that
load the common language runtime into their processes and initiate the
execution of managed code, thereby creating a software environment that
can exploit both managed and unmanaged features. The .NET Framework
not only provides several runtime hosts, but also supports the development
of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the
runtime to enable Web Forms applications and XML Web services, both of
which are discussed later in this topic.
Internet Explorer is an example of an unmanaged application that
hosts the runtime (in the form of a MIME type extension). Using Internet
Explorer to host the runtime enables you to embed managed components or
Windows Forms controls in HTML documents. Hosting the runtime in this
way makes managed mobile code (similar to Microsoft ActiveX
controls) possible, but with significant improvements that only managed code
can offer, such as semi-trusted execution and secure isolated file storage.
The following illustration shows the relationship of the common
language runtime and the class library to your applications and to the overall
system. The illustration also shows how managed code operates within a
larger architecture.
Features of the Common Language Runtime:
The common language runtime manages memory, thread execution,
code execution, code safety verification, compilation, and other system
services. These features are intrinsic to the managed code that runs on the
common language runtime.
With regard to security, managed components are awarded varying
degrees of trust, depending on a number of factors that include their origin
(such as the Internet, enterprise network, or local computer). This means that
a managed component might or might not be able to perform file-access
operations, registry-access operations, or other sensitive functions, even if it
is being used in the same active application.

The runtime enforces code access security. For example, users can
trust that an executable embedded in a Web page can play an animation on
screen or sing a song, but cannot access their personal data, file system, or
network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict
type- and code-verification infrastructure called the common type system
(CTS). The CTS ensures that all managed code is self-describing. The
various Microsoft and third-party language compilers generate managed
code that conforms to the CTS. This means that
managed code can consume other managed types and instances, while
strictly enforcing type fidelity and type safety.
In addition, the managed environment of the runtime eliminates many
common software issues. For example, the runtime automatically handles
object layout and manages references to objects, releasing them when they
are no longer being used. This automatic memory management resolves the
two most common application errors, memory leaks and invalid memory
references.

The runtime also accelerates developer productivity. For example,
programmers can write applications in their development language of
choice, yet take full advantage of the runtime, the class library, and
components written in other languages by other developers. Any compiler
vendor who chooses to target the runtime can do so. Language compilers
that target the .NET Framework make the features of the .NET Framework
available to existing code written in that language, greatly easing the
migration process for existing applications.
While the runtime is designed for the software of the future, it also
supports software of today and yesterday. Interoperability between managed
and unmanaged code enables developers to continue to use necessary COM
components and DLLs.
The runtime is designed to enhance performance. Although the
common language runtime provides many standard runtime services,
managed code is never interpreted. A feature called just-in-time (JIT)
compiling enables all managed code to run in the native machine language
of the system on which it is executing. Meanwhile, the memory manager
removes the possibilities of fragmented memory and increases memory
locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance, server-side
applications, such as Microsoft SQL Server and Internet Information
Services (IIS). This infrastructure enables you to use managed code to write
your business logic, while still enjoying the superior performance of the
industry's best enterprise servers that support runtime hosting.
.NET Framework Class Library:
The .NET Framework class library is a collection of reusable types
that tightly integrate with the common language runtime. The class library is
object oriented, providing types from which your own managed code can
derive functionality. This not only makes the .NET Framework types easy to
use, but also reduces the time associated with learning new features of the
.NET Framework. In addition, third-party components can integrate
seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set
of interfaces that you can use to develop your own collection classes. Your
collection classes will blend seamlessly with the classes in the .NET
Framework.
As you would expect from an object-oriented class library, the .NET
Framework types enable you to accomplish a range of common
programming tasks, including tasks such as string management, data
collection, database connectivity, and file access. In addition to these
common tasks, the class library includes types that support a variety of
specialized development scenarios. For example, you can use the .NET
Framework to develop the following types of applications and services:
Console applications.
Scripted or hosted applications.
Windows GUI applications (Windows Forms).
ASP.NET applications.
XML Web services.
Windows services.
For example, the Windows Forms classes are a comprehensive set of
reusable types that vastly simplify Windows GUI development. If you write
an ASP.NET Web Form application, you can use the Web Forms classes.
ACTIVEX DATA OBJECTS .NET:
ADO.NET Overview:

ADO.NET is an evolution of the ADO data access model that directly
addresses user requirements for developing scalable applications. It was
designed specifically for the web with scalability, statelessness, and XML in
mind.
ADO.NET uses some ADO objects, such as the Connection and
Command objects, and also introduces new objects. Key new ADO.NET
objects include the DataSet, DataReader, and DataAdapter.
The important distinction between this evolved stage of ADO.NET
and previous data architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the
DataSet functions as a standalone entity. You can think of the DataSet as an
always disconnected recordset that knows nothing about the source or
destination of the data it contains. Inside a DataSet, much like in a database,
there are tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the
DataSet. Then, it connects back to the database to update the data there,
based on operations performed while the DataSet held the data. In the past,
data processing has been primarily connection-based. Now, in an effort to
make multi-tiered apps more efficient, data processing is turning to a
message-based approach that revolves around chunks of information. At the
center of this approach is the DataAdapter, which provides a bridge to
retrieve and save data between a DataSet and its source data store. It
accomplishes this by means of requests to the appropriate SQL commands
made against the data store.
The XML-based DataSet object provides a consistent programming
model that works with all models of data storage: flat, relational, and
hierarchical. It does this by having no 'knowledge' of the source of its data,
and by representing the data that it holds as collections and data types. No
matter what the source of the data within the DataSet is, it is manipulated
through the same set of standard APIs exposed through the DataSet and its
subordinate objects.
While the DataSet has no knowledge of the source of its data, the
managed provider has detailed and specific information. The role of the
managed provider is to connect, fill, and persist the DataSet to and from data
stores.

The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb
and System.Data.SqlClient) that are part of the .NET
Framework provide four basic objects: the Command, Connection,
DataReader and DataAdapter. In the remaining sections of this document,
we'll walk through each part of the DataSet and the OLE DB/SQL Server
.NET Data Providers explaining what they are, and how to program against
them.
The following sections will introduce you to some objects that have
evolved, and some that are new. These objects are:
Connections. For connection to and managing transactions against
a database.
Commands. For issuing SQL commands against a database.
DataReaders. For reading a forward-only stream of data records
from a SQL Server data source.
DataSets. For storing, remoting and programming against flat data,
XML data and relational data.
DataAdapters. For pushing data into a DataSet, and reconciling
data against a database.
When dealing with connections to a database, there are two different
options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE
DB .NET Data Provider (System.Data.OleDb). In these samples we will use
the SQL Server .NET Data Provider, which is written to talk directly to
Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to
any OLE DB provider (as it uses OLE DB underneath).
Connections:
Connections are used to 'talk to' databases, and are represented by
provider-specific classes such as SQLConnection. Commands travel over
connections and result sets are returned in the form of streams which can be
read by a DataReader object, or pushed into a DataSet object.
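The sketch below shows a connection being opened and closed with the SQL Server .NET Data Provider; the connection string (a local server with the Northwind sample database) is an assumption and must be adjusted to the actual environment.

Imports System.Data.SqlClient

Module ConnectionSketch
    Sub Main()
        ' Assumed connection string for a local SQL Server holding the Northwind sample database.
        Dim connStr As String = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"
        Using conn As New SqlConnection(connStr)
            conn.Open()                                  ' open the connection to the database
            Console.WriteLine("State: " & conn.State.ToString())
        End Using                                        ' End Using closes and disposes the connection
    End Sub
End Module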

Commands:
Commands contain the information that is submitted to a database,
and are represented by provider-specific classes such as SQLCommand. A
command can be a stored procedure call, an UPDATE statement, or a
statement that returns results. You can also use input and output parameters,
and return values as part of your command syntax. The example below
shows how to issue an INSERT statement against the Northwind database.
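A minimal sketch of such an INSERT, using parameters with SqlCommand, is given below; the connection string and the inserted values are assumptions for illustration.

Imports System.Data.SqlClient

Module CommandSketch
    Sub Main()
        Dim connStr As String = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"
        Dim sql As String = "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)"
        Using conn As New SqlConnection(connStr)
            Using cmd As New SqlCommand(sql, conn)
                ' Parameters keep the statement safe from malformed input.
                cmd.Parameters.AddWithValue("@id", "DEMO1")
                cmd.Parameters.AddWithValue("@name", "Demo Company")
                conn.Open()
                Dim rows As Integer = cmd.ExecuteNonQuery()   ' returns the number of rows affected
                Console.WriteLine(rows & " row(s) inserted")
            End Using
        End Using
    End Sub
End Module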
DataReaders:

The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as
well as hierarchical data. A DataReader object is returned after executing a
command against a database. The format of the returned DataReader object
is different from a recordset. For example, you might use the DataReader to
show the results of a search list in a web page.
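The sketch below shows the typical read loop over a SqlDataReader; the connection string and query are assumptions, and the same pattern applies to the OleDbDataReader.

Imports System.Data.SqlClient

Module ReaderSketch
    Sub Main()
        Dim connStr As String = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"
        Using conn As New SqlConnection(connStr)
            Using cmd As New SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn)
                conn.Open()
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()                  ' advances to the next record; returns False at the end
                        Console.WriteLine(reader("CustomerID").ToString() & " - " & _
                                          reader("CompanyName").ToString())
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module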
DataSets and DataAdapters:
DataSets:
The DataSet object is similar to the ADO Recordset object, but more
powerful, and with one other important distinction: the DataSet is always
disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints.
However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with
databases, or other source data. This allows the developer to work with a
programming model that is always consistent, regardless of where the source
data resides. Data coming from a database, an XML file, from code, or user
input can all be placed into DataSet objects. Then, as changes are made to
the DataSet they can be tracked and verified before updating the source data.
The GetChanges method of the DataSet object actually creates a second
DataSet that contains only the changes to the data. This DataSet is then used
by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to
produce and consume XML data and XML schemas. XML schemas can be
used to describe schemas interchanged via WebServices. In fact, a DataSet
with a schema can actually be compiled for type safety and statement
completion.
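The sketch below illustrates this disconnected behaviour entirely in memory, with no database involved; the table and column names are illustrative only.

Imports System.Data

Module DataSetSketch
    Sub Main()
        Dim ds As New DataSet("Census")
        Dim areas As DataTable = ds.Tables.Add("Areas")
        areas.Columns.Add("AreaId", GetType(Integer))
        areas.Columns.Add("AreaName", GetType(String))

        areas.Rows.Add(1, "Karaikudi")
        areas.Rows.Add(2, "Devakottai")
        ds.AcceptChanges()                               ' mark the current rows as unchanged

        areas.Rows(0)("AreaName") = "Karaikudi East"     ' this row is now in the Modified state

        Dim changes As DataSet = ds.GetChanges()         ' a second DataSet holding only the changed row
        Console.WriteLine(changes.Tables("Areas").Rows.Count & " changed row(s)")
    End Sub
End Module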
DataAdapters (OLEDB/SQL):
The DataAdapter object works as a bridge between the DataSet and the
source data. Using the provider-specific SqlDataAdapter (along with its
associated SqlCommand and SqlConnection) can increase overall
performance when working with a Microsoft SQL Server database. For
other OLE DB-supported databases, you would use the OleDbDataAdapter
object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source
after changes have been made to the DataSet. Using the Fill method of the
DataAdapter calls the SELECT command; using the Update method calls
the INSERT, UPDATE or DELETE command for each changed row. You
can explicitly set these commands in order to control the statements used at
runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run time
based upon a select statement. However, this run-time generation requires an
extra round-trip to the server in order to gather required metadata, so
explicitly providing the INSERT, UPDATE, and DELETE commands at
design time will result in better run-time performance.
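The sketch below shows one Fill/Update cycle; here a CommandBuilder generates the update commands at run time, and the connection string, table, and column names are assumptions.

Imports System.Data
Imports System.Data.SqlClient

Module AdapterSketch
    Sub Main()
        Dim connStr As String = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"
        Using conn As New SqlConnection(connStr)
            Dim adapter As New SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn)
            Dim builder As New SqlCommandBuilder(adapter)  ' derives INSERT/UPDATE/DELETE from the SELECT

            Dim ds As New DataSet()
            adapter.Fill(ds, "Customers")                  ' issues the SELECT command and fills the DataSet

            ' Edit one row while disconnected from the database.
            ds.Tables("Customers").Rows(0)("CompanyName") = "Renamed Company"

            adapter.Update(ds, "Customers")                ' issues an UPDATE for each changed row
        End Using
    End Sub
End Module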
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the
forefront. Two new objects, the DataSet and DataAdapter, are
provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in
a cache for updates.
4. There is a lot more information about ADO.NET in the
documentation.

5. Remember, you can execute a command directly against the
database in order to do inserts, updates, and deletes. You don't need
to first put data into a DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the
data, and navigate data relationships.
Server Application Development:
Server-side applications in the managed world are implemented
through runtime hosts. Unmanaged applications host the common language
runtime, which allows your custom managed code to control the behavior of
the server. This model provides you with all the features of the common
language runtime and class library while gaining the performance and
scalability of the host server.
The following illustration shows a basic network schema with
managed code running in different server environments. Servers such as IIS
and SQL Server can perform standard operations while your application
logic executes through the managed code.
VB.NET Introduction:

VB.NET, the next generation of Visual Basic, is designed to be the
easiest and most productive tool for creating .NET applications, including
Windows applications, Web Services and Web applications. Visual Basic
.NET is a major component of Microsoft Visual Studio .NET suite. The
.NET version of Visual Basic is a new improved version with more features
and additions. After these new additions, VB qualifies as a full object-oriented language such as C++. VB.NET is the successor to VB
6.0. Microsoft .NET is a new programming and operating framework
introduced by Microsoft. All .NET supported languages access a common
.NET library to develop applications and share common tools to execute
applications. Programming with Visual Basic using .NET is called VB.NET.
While providing the traditional ease-of-use of Visual Basic
development, Visual Basic .NET also allows optional use of new language
features. Inheritance, method overloading, structured exception handling,
and free threading all make Visual Basic a powerful object-oriented
programming language. Visual Basic .NET fully integrates with the .NET
Framework and the Common Language Runtime, which together provide
language interoperability, simplified deployment, enhanced security, and
improved versioning support.

2.2.2 Back End MS Access


MS Access is a powerful multi-user DBMS developed by Microsoft
Corporation. It can be used to store and manipulate large amounts of
information and to automate repetitive tasks such as maintaining an inventory
and generating invoices. Using Access, easy-to-use data input forms
can be developed, data can be processed, and meaningful reports can be created.
Data in Access is organized in the form of tables. Within a table,
records are arranged according to a common reference value known as the
primary key or the key field. The value in the key field is different
for every record and thus helps in uniquely identifying records. A
combination of two or more fields can also be used as the primary key. Such
a combination is called a composite key. Since a value in one table can be
replicated across other tables, there should be a way to maintain a relation
between the two tables. This relation is implemented through the concept of
a foreign key. The foreign key in a table is a field, which links that table to
another table. Databases in Access have an extension of .mdb.
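As a minimal sketch, such an .mdb file can be reached from VB.NET through the OLE DB .NET Data Provider; the Jet 4.0 provider string and the file path below are assumptions that depend on the deployment machine.

Imports System.Data.OleDb

Module AccessSketch
    Sub Main()
        ' Assumed path to the project's Access database file.
        Dim connStr As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Census\census.mdb"
        Using conn As New OleDbConnection(connStr)
            conn.Open()
            Console.WriteLine("Connected to: " & conn.DataSource)
        End Using
    End Sub
End Module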
Access also maintains index files for tables. An index is an internal
table of values that Access maintains to store the order of records. Index
objects thus provide efficient access to data. In a table, indices control the
way data is accessed. However, an index does not replicate the data itself,
nor does it change the sequence in which data is stored in the table.

Features of Access:
MS Access is a Windows-based application and therefore has an
interface similar to other Windows applications. You can cut, copy, and paste
data to and from Access and any other Windows application. Since Microsoft
Corporation has developed both Windows and Access, the two products work
well together. You need to have either Windows 98 or Windows NT on your
machine before you can install Access.

3. ANALYSIS OF CENSUS INFORMATION MANAGEMENT SYSTEM

3.1 EXISTING SYSTEM

The characteristics of the existing system have been observed. They
are summarized as follows.
The existing system is a manual system.
It is tedious to keep track of transactions efficiently.
It needs more manpower to record and retrieve information.
There is no security of information.
There is a delay in information search and retrieval.
There is redundancy of data.
Reliability and maintainability of data are difficult.
It is mainly paper-oriented and labour intensive.

3.2 PROPOSED SYSTEM


The proposed system is being developed in a Windows environment
using Visual Basic .NET. It provides the following features:

It is user-friendly.
Accuracy of data is provided by proper validation of input.
Retrieval of necessary information is very easy and fast.
Manual operations are reduced to the maximum possible extent.
Maintenance of data is effective.
Backups of data are taken periodically.
The latest tools and techniques are used.
Errors and exceptional situations are handled properly.

3.3 FEASIBILITY STUDY


The feasibility of the project is analyzed in this phase and a business
proposal is put forth with a very general plan for the project and some cost
estimates. During system analysis the feasibility study of the proposed
system is to be carried out. This is to ensure that the proposed system is not a
burden to the company. For feasibility analysis, some understanding of the
major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:

Economical Feasibility
Technical Feasibility
Operational Feasibility

3.3.1 Economical Feasibility


This study is carried out to check the economic impact that the system
will have on the organization. The amount of funds that the company can
pour into the research and development of the system is limited. The
expenditures must be justified. Thus the developed system is well within the
budget, and this was achieved because most of the technologies used are
freely available. Only the customized products had to be purchased.

3.3.2 Technical Feasibility


This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not have a
high demand on the available technical resources, as this would lead to high
demands being placed on the client. The developed system must have
modest requirements, as only minimal or null changes are required for
implementing this system.
3.3.3 Operational Feasibility

Operational feasibility is a measure of how well a proposed system
solves the problems, and takes advantage of the opportunities identified
during scope definition and how it satisfies the requirements identified in the
requirements analysis phase of system development. Operational feasibility
reviews the willingness of the organization to support the proposed system.
This is probably the most difficult of the feasibilities to gauge. In order to
determine this feasibility, it is important to understand the management
commitment to the proposed project. If the request was initiated by
management, it is likely that there is management support and the system
will be accepted and used. However, it is also important that the employee
base will be accepting of the change.

3.4 INPUT DESIGN


3.4.1 ARCHITECTURAL DESIGN


[Architectural design diagram: the Login form leads to the Main form, which branches to the City Details modules (Karaikudi, Devakottai, Thirupattur, Sivagangai), Caste Information, Religion Status, and Report.]

3.4.2 DATABASE DESIGN


LOGIN

This table is mainly used for security purposes. It uses two fields, the
username and the password; when these are entered correctly, the user is
taken to the main form.
SNO | FIELDNAME | DATATYPE | SIZE | DESCRIPTION
1   | UserName  | Text     | 10   | Enter the username
2   | PassWord  | Text     | 10   | Enter the password
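A login check against this table could be written as in the sketch below; the .mdb path is an assumption, and PASSWORD is bracketed because it is a reserved word in Access.

Imports System.Data.OleDb

Module LoginSketch
    ' Returns True when the username/password pair exists in the Login table.
    Function IsValidUser(ByVal userName As String, ByVal passWord As String) As Boolean
        Dim connStr As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Census\census.mdb"
        Dim sql As String = "SELECT COUNT(*) FROM [Login] WHERE UserName = ? AND [PassWord] = ?"
        Using conn As New OleDbConnection(connStr)
            Using cmd As New OleDbCommand(sql, conn)
                ' OleDb uses positional (?) parameters, added in the order they appear.
                cmd.Parameters.AddWithValue("@user", userName)
                cmd.Parameters.AddWithValue("@pass", passWord)
                conn.Open()
                Return CInt(cmd.ExecuteScalar()) > 0
            End Using
        End Using
    End Function
End Module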

KARAIKUDI
This table is used to update the city's information. It holds the city
area ID number, area name, gender data, and religion status; these fields
provide an easy way to identify and apply updates for each area.
SNO | FIELDNAME   | DATATYPE | SIZE | DESCRIPTION
1   | Area id     | Number   | 10   | Enter the area id number
2   | Area name   | Text     | 30   | Enter the area name
3   | Ward number | Number   | 20   | Enter the ward number
4   | Males       | Text     | 30   | Enter the total of males
5   | Females     | Text     | 30   | Enter the total of females
6   | Total       | Text     | 30   | Enter the total
7   | School      | Number   | 20   | Enter the total number of school
8   | Graduate    | Number   | 20   | Enter the total number of graduate

DEVAKOTTAI
This table is used to update the city's information. It holds the city
area ID number, area name, gender data, and religion status; these fields
provide an easy way to identify and apply updates for each area.
SNO | FIELDNAME   | DATATYPE | SIZE | DESCRIPTION
1   | Area id     | Number   | 10   | Enter the area id number
2   | Area name   | Text     | 30   | Enter the area name
3   | Ward number | Number   | 20   | Enter the ward number
4   | Males       | Text     | 30   | Enter the total of males
5   | Females     | Text     | 30   | Enter the total of females
6   | Total       | Text     | 30   | Enter the total
7   | School      | Number   | 20   | Enter the total number of school
8   | Graduate    | Number   | 20   | Enter the total number of graduate

THIRUPATTHUR
This table is used to update the city's information. It holds the city
area ID number, area name, gender data, and religion status; these fields
provide an easy way to identify and apply updates for each area.
SNO | FIELDNAME   | DATATYPE | SIZE | DESCRIPTION
1   | Area id     | Number   | 10   | Enter the area id number
2   | Area name   | Text     | 30   | Enter the area name
3   | Ward number | Number   | 20   | Enter the ward number
4   | Males       | Text     | 30   | Enter the total of males
5   | Females     | Text     | 30   | Enter the total of females
6   | Total       | Text     | 30   | Enter the total
7   | School      | Number   | 20   | Enter the total number of school
8   | Graduate    | Number   | 20   | Enter the total number of graduate

SIVAGANGAI
This table is used to update the city's information. It holds the city
area ID number, area name, gender data, and religion status; these fields
provide an easy way to identify and apply updates for each area.

SNO | FIELDNAME   | DATATYPE | SIZE | DESCRIPTION
1   | Area id     | Number   | 10   | Enter the area id number
2   | Area name   | Text     | 30   | Enter the area name
3   | Ward number | Number   | 20   | Enter the ward number
4   | Males       | Text     | 30   | Enter the total of males
5   | Females     | Text     | 30   | Enter the total of females
6   | Total       | Text     | 30   | Enter the total
7   | School      | Number   | 20   | Enter the total number of school
8   | Graduate    | Number   | 20   | Enter the total number of graduate

RELIGION STATUS
This table is used to update the religion status. It holds the area ID
number, area name, religion types, and caste types; these fields provide
an easy way to update the data.

SNO | FIELDNAME | DATATYPE | SIZE | DESCRIPTION
1   | Area id   | Number   | 10   | Enter the area id number
2   | Area name | Text     | 20   | Enter the area name
3   | Christian | Number   | 30   | Enter the number of Christians
4   | Hindu     | Number   | 30   | Enter the number of Hindus
5   | Muslim    | Number   | 30   | Enter the number of Muslims
6   | MBC       | Number   | 30   | Enter the number of MBC
7   | BC        | Number   | 30   | Enter the number of BC
8   | OC        | Number   | 30   | Enter the number of OC
9   | SC/ST     | Number   | 30   | Enter the number of SC/ST

CASTE INFORMATION
This table is used to update the caste status. It holds the area ID
number, religion types, and caste types; these fields provide an easy way
to update the data.

SNO | FIELDNAME | DATATYPE | SIZE | DESCRIPTION
1   | Area id   | Number   | 10   | Enter the area id number
2   | Christian | Number   | 30   | Enter the number of Christians
3   | Hindu     | Number   | 30   | Enter the number of Hindus
4   | Muslim    | Number   | 30   | Enter the number of Muslims
5   | MBC       | Number   | 30   | Enter the number of MBC
6   | BC        | Number   | 30   | Enter the number of BC
7   | OC        | Number   | 30   | Enter the number of OC
8   | SC/ST     | Number   | 30   | Enter the number of SC/ST

3.5 OUTPUT DESIGN


During output design developers determine the type of outputs
needed, consider necessary output controls, and prototype report layouts.
Required reports are identified during analysis (e.g., DFD data flows that
cross the boundary of the automated system). Output forms or reports are a
reflection of data flows produced by a process on a DFD. Users will decide
on the mix of reporting formats: printed versus screen versions. Flexible
reporting tools allow users to create their own queries and reports, thus
reducing the need to identify every possible report and every specific report
format that may be needed by users as they implement the new system.

4. DESIGN OF CENSUS INFORMATION MANAGEMENT SYSTEM

4.1 Modules
This project contains the following modules:
Login Form
City Details
Caste Information
Religion Status
Report

4.1.1 Login Page


Once the user's details have been registered successfully, the user is
redirected to the login page, where a correct username and password must
be provided to continue to the main module.

4.1.2 City Details


This module is used to update the city information. It records the area
ID number, area name, gender data, and religion status; these fields provide
an easy way to identify and apply updates for each area.

4.1.3 Caste Information


This module is used to update the caste status. It records the area ID
number, religion types, and caste types; these fields provide an easy way to
update the data.
4.1.4 Religion Status

This module is used to update the religion status. It records the area ID
number, area name, religion types, and caste types; these fields provide an
easy way to update the data.
4.1.5 Report
This module is used to view the updates made to the data. It provides a
view option that makes it easy to see the number of updates across the
entire data set.
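As a sketch of this idea, the report can fill a DataSet from one of the city tables and list its rows; the .mdb path is an assumption, and the bracketed column names follow the database design given earlier.

Imports System.Data
Imports System.Data.OleDb

Module ReportSketch
    Sub Main()
        Dim connStr As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Census\census.mdb"
        Using conn As New OleDbConnection(connStr)
            Dim adapter As New OleDbDataAdapter( _
                "SELECT [Area id], [Area name], [Total] FROM KARAIKUDI", conn)
            Dim ds As New DataSet()
            adapter.Fill(ds, "Karaikudi")                ' the adapter opens and closes the connection itself

            For Each row As DataRow In ds.Tables("Karaikudi").Rows
                Console.WriteLine(row("Area id").ToString() & "  " & _
                                  row("Area name").ToString() & "  " & _
                                  row("Total").ToString())
            Next
        End Using
    End Sub
End Module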

5. TESTING AND IMPLEMENTATION OF CENSUS INFORMATION MANAGEMENT SYSTEM
5.1 TESTING METHODS
Testing is the phase where the bugs in the program are found and
corrected. One of the goals during dynamic testing is to produce a test suite.
This is applied to ensure that modifications to the program do not have
any side effects. This type of testing is called regression testing. Software
testing can be looked upon as one of the many processes a software
development organization performs, and it provides the last opportunity to
correct any flaws in the developed system. System testing is a series of different
tests whose primary purpose is to fully exercise the computer-based system.
System testing constitutes the largest percentage of technical effort in
the software development process. Testing begins at the module level and
works towards the integration of the entire system. No testing process can be
completed without the verification and validation part, which helps to
assess and improve the quality of the work products generated during the
development and modification of the software.
The system has been thoroughly checked in the local intranet
environment. Testing is done to remove the residual bugs and improve the
reliability of the program. All the basic levels of testing were carried out.

5.1.1 UNIT TESTING


Unit testing focuses on verification efforts on the smallest unit of the
software design, the module. This is also known as module testing. The
modules are tested separately. This is carried out during the programming
stage. In this stage each module is found to be working satisfactorily with
regard to the expected output from the module.

5.1.2 INTEGRATION TESTING


Integration testing is a systematic method for conducting tests to
uncover errors associated with the interfaces. In this phase all the modules
are combined and then the project is run as a whole. Thus in integration
testing step, all the errors uncovered are corrected for the next testing steps.

5.1.3 ACCEPTANCE TESTING


User acceptance testing of the system is the key factor for the success
of any system. The system under consideration is tested for the user
acceptance by constantly keeping in touch with prospective system users at
the time of developing. The testing of the software begins along with the
coding.

Unit testing was done for each module in the software, with various
inputs chosen so that each line of code was executed at least once.
After all the modules were coded, integration testing was carried out. Some
minor errors were found in the earlier stage and each of them was corrected.
In the implementation of user interface phase no major errors were noted.
After the software was completely developed, the testing was done. The
outputs were correct at the time of documentation. After that no errors were
reported.

5.1.4 VALIDATION TESTING


Validation testing is where the requirements established as part of
software requirements analysis are validated against the software that has been
developed. This test provides the final assurance that the software meets all
the functional, behavioral and performance requirements. Thus in validation
testing step, all the errors uncovered during Integration testing are corrected.

5.1.5 SYSTEM TESTING


The system has to undergo robust testing at each and every stage: design-time
data tests, run-time checks at module level, tests after completion using
dummy data, crash tests for data safety and data-holding capacity after bulk
updates, and a demonstration run with data to test end-user performance at
the peak hour of work.

5.2 IMPLEMENTATION
Once the system has been designed, the next step is to convert the
design into actual code, so as to satisfy the user requirements as
expected. If the system is proved to be error-free, it can be implemented.
When the initial design was done for the system, the department was
consulted for acceptance of the design so that further proceedings of the
system development can be carried on. After the development of the system
a demonstration was given to them about working of the system. The aim of
the system illustration was to identify any malfunctioning of the system.
Implementation includes proper training to end-users. The implemented
software should be maintained for prolonged running of the software.
Initially the system was run in parallel with the manual system. The system
has been tested with data and has proved to be error-free and user-friendly.
Training was given to end-user about the software and its features.

6. CONCLUSION
The development process of this project has been split into estimation,
system study, planning, and design. The system uses a standard software
development methodology.
Standardization plays a vital role in the life cycle of the development.
This software involves the usage of similar, meaningful function names,
variable names, and table names.

The system's performance is measured by giving sample data. All the
modules function without any bugs or problems. It satisfies the user
requirements and gives accurate results.
