Employee Salary Management System: 1.1 Problem Definition
Dept. of Computer Science, Gulbarga University, Kalaburagi
1 INTRODUCTION
The “Salary Management System” is based on the salary activity for each company staff member, depending on their attendance. The first activity is saving the employee details, where each employee is given a unique Employee ID. Only the Admin has the authority to enter the number of leaves available for each leave type and for each employee. Whenever an employee wishes to take a leave, he enters his Employee ID and then selects the leave type, such as casual leave, providential leave, etc. Depending on the leave type, the number of leaves available is shown. The employee enters the number of leaves he needs, and this is saved in the database; meanwhile, the available leave balance is reduced by the number of days he wished to take. Based on the number of days the employee attended, the salary is calculated and a separate salary slip is provided for reference.
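The leave-application flow described above can be sketched in a few lines. This is a minimal illustration only; the class name, leave types, and function signature are assumptions, not the project's actual schema.

```python
# Illustrative sketch of the leave workflow: look up the balance for the
# chosen leave type, reject overdrafts, and deduct the days taken.

class Employee:
    def __init__(self, emp_id, leave_balances):
        self.emp_id = emp_id
        # e.g. {"casual": 12, "providential": 10} -- hypothetical leave types
        self.leave_balances = dict(leave_balances)

def apply_for_leave(employee, leave_type, days):
    """Deduct `days` from the chosen leave type and return the new balance."""
    available = employee.leave_balances.get(leave_type, 0)
    if days > available:
        raise ValueError(f"only {available} {leave_type} leaves available")
    employee.leave_balances[leave_type] = available - days
    return employee.leave_balances[leave_type]

emp = Employee("E001", {"casual": 12, "providential": 10})
remaining = apply_for_leave(emp, "casual", 3)   # balance drops from 12 to 9
```

In the real system the balance would live in the database rather than in memory, but the deduct-on-apply rule is the same.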
The Admin has the authority to add employee details, and also the right to edit or delete employee information from the list. The Admin provides a unique username and password for each employee, through which the employee can log in and apply for leave. All the information is saved in the database.
Existing software has problems in calculating the salary for each employee. The proposed system also tracks each employee’s attendance and the number of leaves taken per month/year, and the salary report can be checked at any time so that miscalculations are avoided.
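The attendance-based salary rule above can be made concrete with a small sketch. The proration formula and the slip fields are assumptions for illustration; the report does not specify the actual pay rules.

```python
# Hedged sketch: pay is prorated by days attended in the month.

def monthly_salary(base_salary, working_days, days_attended):
    """Prorate the base salary by attendance (assumed rule)."""
    per_day = base_salary / working_days
    return round(per_day * days_attended, 2)

def salary_slip(emp_id, base_salary, working_days, days_attended):
    """Return a minimal salary-slip record (fields are illustrative)."""
    net = monthly_salary(base_salary, working_days, days_attended)
    return {"employee": emp_id, "days_attended": days_attended, "net_pay": net}

slip = salary_slip("E001", 30000, 30, 28)   # net_pay = 28000.0
```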
2 SYSTEM ANALYSIS
2.1 Introduction
During this phase, the analysis has several sub-phases. The first is requirements determination. In this sub-phase, analysts work with users to determine what the users expect from the proposed system. This sub-phase usually involves a careful study of current systems, manual or computerized, that might be replaced or enhanced as part of this project. Next, the requirements are studied and structured in accordance with their inter-relationships, eliminating any redundancies. Third, alternative initial designs are generated to match the requirements. These alternatives are then compared to determine which best meets the requirements in terms of the cost and labor committed to the development process.
The system we are going to develop will follow the user requirements: it will perform salary calculation as well as tax calculation, and handle files such as salary slips along with bank invoices. The system performs all these operations automatically. Moreover, it is user friendly, fast, highly secure, and flexible enough to be enhanced according to the future needs of its users.
SDLC stands for Software Development Life Cycle. It is the standard process used by the software industry to develop good software.
Stages in SDLC:
Requirement Gathering
Analysis
Designing
Coding
Testing
Maintenance
1. Requirements Gathering Stage
The requirements gathering process takes as its input the goals identified in the high-level requirements section of the project plan. Each goal is refined into a set of one or more requirements. These requirements define the major functions of the intended application, define operational data areas and reference data areas, and define the initial data entities. Major functions include critical processes to be managed, as well as mission-critical inputs, outputs and reports. A user class hierarchy is developed and associated with these major functions, data areas, and data entities. Each of these definitions is termed a requirement. Requirements are identified by unique requirement identifiers and, at minimum, contain a requirement title and textual description.
[Diagram: high-level requirements from the project plan feed the requirements stage]
These requirements are fully described in the primary deliverables for this stage: the Requirements Document and the Requirements Traceability Matrix (RTM). The requirements document contains a complete description of each requirement, including diagrams and references to external documents as necessary. Note that detailed listings of database tables and fields are not included in the requirements document.
The title of each requirement is also placed into the first version of the RTM, along with the title
of each goal from the project plan. The purpose of the RTM is to show that the product
components developed during each stage of the software development lifecycle are formally
connected to the components developed in prior stages.
In the requirements stage, the RTM consists of a list of high-level requirements, or goals, by title, with a listing of the associated requirements for each goal, listed by requirement title. In this hierarchical listing, the RTM shows that each requirement developed during this stage is formally linked to a specific product goal; hence the term requirements traceability.
The outputs of the requirements definition stage include the requirements document, the RTM,
and an updated project plan.
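One way to picture the RTM described above is as a mapping from each goal to the requirements derived from it, so that every requirement traces back to a goal. The goal and requirement identifiers below are invented for illustration; they are not from the project's actual RTM.

```python
# Illustrative RTM: goals map to their derived requirements.

rtm = {
    "G1: Manage employee records": ["R1.1: Add employee", "R1.2: Edit/delete employee"],
    "G2: Handle leave requests":   ["R2.1: Select leave type", "R2.2: Deduct leave balance"],
    "G3: Compute salaries":        ["R3.1: Prorate by attendance", "R3.2: Generate salary slip"],
}

def trace(requirement):
    """Return the goal a requirement is formally linked to, or None."""
    for goal, reqs in rtm.items():
        if requirement in reqs:
            return goal
    return None
```

Because every requirement appears under exactly one goal, `trace` always recovers the origin of a requirement, which is precisely the traceability property the RTM is meant to demonstrate.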
2. Planning Stage
The planning stage establishes a bird’s-eye view of the intended software product, and uses this to establish the basic project structure, evaluate feasibility and risks associated with the project, and describe appropriate management and technical approaches.
[Diagram: planning stage outputs — software configuration management plan, software quality assurance plan, and project plan & schedule]
The most critical section of the project plan is a listing of high-level product requirements, also referred to as goals. All of the software product requirements to be developed during the requirements definition stage flow from one or more of these goals. The minimum information for each goal consists of a title and textual description, although additional information and references to external documents may be included. The outputs of the project planning stage are the configuration management plan, the quality assurance plan, and the project plan and schedule, with a detailed listing of scheduled activities for the upcoming requirements stage and high-level estimates of effort for the later stages.
3. Designing Stage
The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will be
produced as a result of interviews, workshops, and/or prototype efforts. Design elements
describe the desired software features in details, and generally include functional hierarchy
diagrams, pseudo code, and a complete entity-relationship diagram with a full data dictionary.
These design elements are intended to describe the software in sufficient detail that skilled
programmers may develop the software with minimal additional input.
[Diagram: the approved requirements documents feed the design stage]
When the design document is finalized and accepted, the RTM is updated to show that each
design element is formally associated with a specific requirement. The outputs of the design
stage are the design document, an updated RTM, and an updated project plan.
4. Development Stage
The development stage takes as its primary input the design elements described in the approved
design document. For each design element, a set of one or more software artifacts will be
produced. Software artifacts include, but are not limited to, menus, dialogs, data management forms, data reporting formats, and specialized procedures and functions. Appropriate test cases
will be developed for each set of functionally related software artifacts, and an online help
system will be developed to guide users in their interactions with the software.
[Diagram: the approved design documents feed the development stage, which outputs a test plan, an implementation map, and an updated requirements traceability matrix]
The RTM will be updated to show that each developed artifact is linked to a specific design element, and that each developed artifact has one or more corresponding test case items. At this point, the RTM is in its final configuration. The outputs of the development stage include a fully functional set of software that satisfies the requirements and design elements previously
documented, an online help system that describes the operation of the software, an
implementation map that identifies the primary code entry points for all major system functions,
a test plan that describes the test cases to be used to validate the correctness and completeness of
the software, an updated RTM, and an updated project plan.
2.4 Feasibility Study
The feasibility study is an important phase in the software development process. It enables the developer to assess the product being developed. It refers to the feasibility of the product in terms of its outcomes, its operational use, and the technical support required for implementing it. The feasibility study should be performed on the basis of various criteria and parameters. The various feasibility studies are:
Economic Feasibility
Operational Feasibility
Technical Feasibility
1. Economic Feasibility
This refers to the benefits or outcomes derived from the product as compared to the total cost spent on developing it. If the product is more or less the same as the older system, then it is not economically feasible to develop it.
2. Operational Feasibility
This refers to the feasibility of the product being operational. Some products may work very well at design and implementation time but may fail in the real-time environment. It includes the study of the additional human resources required and their technical expertise.
3. Technical Feasibility
It refers to whether the software that is available in the market fully supports the present
application. It studies the pros and cons of using particular software for the development and its
feasibility. It also studies the additional training needed to be given to the people to make the
application work.
3.2 Introduction
Based on the given requirements, conceptualize the solution architecture. Choose the domain of your interest; otherwise, develop the application for ultimatedotnet.com. Depict the various architectural components, show their interactions and connectedness, and show internal and external elements. Design the web services, web methods and database infrastructure needed on both the client and the server. Provide an environment for upgrading the application to newer versions that become available in the same domain as the web service target.
ABOUT .NET
.NET technology is integrated throughout Microsoft products, providing the capability to quickly
build, deploy, manage, and use connected, security-enhanced solutions through the use of Web
services.
The .NET platform supports several other languages and allows us to switch among them. There are somewhere around 109 languages supported by .NET, ranging from popular ones such as C++ and VB onward.
Language Interoperability
An object created in an application written in one language can be used in an application written in another language, i.e., multiple languages work on one platform.
Versioning
Traditionally, any time we deploy a new version of a component, the old version is replaced; we cannot run two versions of the same software at a time. But in .NET, when a new component is deployed, a new thread is spawned when the page using the new component is first accessed by a user. The old component coexists with the new component in memory because they are on separate threads. When the old component is no longer used, the ASP.NET worker process releases the memory dedicated to the old component.
Microsoft.NET Framework
The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the
following objectives:
To make the developer experience consistent across widely varying types of applications,
such as Windows-based applications and Web-based applications.
To build all communication on industry standards to ensure that code based on the .NET
Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime.
Code that targets the runtime is known as managed code, while code that does not target the
runtime is known as unmanaged code. The class library, the other main component of the .NET
Framework, is a comprehensive, object-oriented collection of reusable types that you can use to
develop applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such as Web
Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the common language
runtime into their processes and initiate the execution of managed code, thereby creating a
software environment that can exploit both managed and unmanaged features. The .NET
Framework not only provides several runtime hosts, but also supports the development of third-
party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for
managed code. ASP.NET works directly with the runtime to enable Web Forms applications and
XML Web services, both of which are discussed later in this topic.
Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form
of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed
managed components or Windows Forms controls in HTML documents. Hosting the runtime in
this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but
with significant improvements that only managed code can offer, such as semi-trusted execution
and secure isolated file storage.
The following illustration shows the relationship of the common language runtime and the class
library to your applications and to the overall system. The illustration also shows how managed
code operates within a larger architecture.
[Illustration: the common language runtime and class library in relation to applications and the overall system; the JIT compiler translates MSIL into machine language]
Metadata: CLR-enabled compilers provide information that describes the types, references and members in .NET code. This is called metadata.
The metadata is stored along with the code; every CLR application contains metadata. The CLR uses this metadata to load classes, execute methods, handle memory, generate native code, and manage security.
When we create a .NET application, the code is compiled into the CLR’s intermediate language (like byte code in Java). When we run the application, that MSIL code is translated into binary code by special compilers built into the CLR: the code is translated into machine language (understandable by the current machine) by the CLR’s JIT compiler.
The common language runtime manages memory, thread execution, code execution, code safety
verification, compilation, and other system services. These features are intrinsic to the managed
code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of trust, depending
on a number of factors that include their origin (such as the Internet, enterprise network, or local
computer). This means that a managed component might or might not be able to perform file-
access operations, registry-access operations, or other sensitive functions, even if it is being used
in the same active application.
The runtime enforces code access security. For example, users can trust that an executable
embedded in a Web page can play an animation on screen or sing a song, but cannot access their
personal data, file system, or network. The security features of the runtime thus enable legitimate
Internet-deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type- and code-verification
infrastructure called the common type system (CTS). The CTS ensures that all managed code is
self-describing. The various Microsoft and third-party language compilers generate managed
code that conforms to the CTS. This means that managed code can consume other managed
types and instances, while strictly enforcing type fidelity and type safety.
In addition, the managed environment of the runtime eliminates many common software issues.
For example, the runtime automatically handles object layout and manages references to objects,
releasing them when they are no longer being used. This automatic memory management
resolves the two most common application errors, memory leaks and invalid memory references.
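The automatic release of unreferenced objects described above can be observed directly in any garbage-collected runtime. The sketch below uses Python's `weakref` as an analogy for the CLR's behavior, not as .NET code: a weak reference watches an object without keeping it alive, so it clears once the runtime reclaims the object.

```python
# Analogy for managed memory: once nothing references an object,
# the runtime reclaims it automatically (no manual free / no leak).
import weakref

class Component:
    pass

obj = Component()
alive = weakref.ref(obj)     # weak reference: does not keep obj alive
assert alive() is not None   # still strongly referenced, still in memory
del obj                      # drop the last strong reference
# With no strong references left, the object is collected and the
# weak reference clears (CPython's reference counting frees it at once).
assert alive() is None
```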
The runtime also accelerates developer productivity. For example, programmers can write
applications in their development language of choice, yet take full advantage of the runtime, the
class library, and components written in other languages by other developers. Any compiler
vendor who chooses to target the runtime can do so. Language compilers that target the .NET
Framework make the features of the .NET Framework available to existing code written in that
language, greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports software of today
and yesterday. Interoperability between managed and unmanaged code enables developers to
continue to use necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common language runtime
provides many standard runtime services, managed code is never interpreted. A feature called
just-in-time (JIT) compiling enables all managed code to run in the native machine language of
the system on which it is executing. Meanwhile, the memory manager removes the possibilities
of fragmented memory and increases memory locality-of-reference to further increase
performance.
The .NET Framework class library is a collection of reusable types that tightly integrate with the
common language runtime. The class library is object oriented, providing types from which your
own managed code can derive functionality. This not only makes the .NET Framework types
easy to use, but also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly with classes in the
.NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces that you can
use to develop your own collection classes. Your collection classes will blend seamlessly with
the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework types enable
you to accomplish a range of common programming tasks, including tasks such as string
management, data collection, database connectivity, and file access. In addition to these common
tasks, the class library includes types that support a variety of specialized development scenarios.
For example, you can use the .NET Framework to develop the following types of applications
and services:
Console applications.
Windows GUI applications (Windows Forms).
ASP.NET applications.
Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that vastly
simplify Windows GUI development. If you write an ASP.NET Web Form application, you can
use the Web Forms classes.
Another kind of client application is the traditional ActiveX control (now replaced by the
managed Windows Forms control) deployed over the Internet as a Web page. This application is
much like other client applications: it is executed natively, has access to local resources, and
includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with the Microsoft
Foundation Classes (MFC) or with a rapid application development (RAD) environment such as
Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products
into a single, consistent development environment that drastically simplifies the development of
client applications.
The Windows Forms classes contained in the .NET Framework are designed to be used for GUI
development. You can easily create command windows, buttons, menus, toolbars, and other
screen elements with the flexibility necessary to accommodate shifting business needs.
For example, the .NET Framework provides simple properties to adjust visual attributes
associated with forms. In some cases the underlying operating system does not support changing
these attributes directly, and in these cases the .NET Framework automatically recreates the
forms. This is one of many ways in which the .NET Framework integrates the developer
interface, making coding simpler and more consistent.
Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's
computer. This means that binary or natively executing code can access some of the resources on
the user's system (such as GUI elements and limited file access) without being able to access or
compromise other resources. Because of code access security, many applications that once
needed to be installed on a user's system can now be safely deployed through the Web. Your
applications can implement the features of a local application while being deployed like a Web
page.
ASP
ASP is a server-side scripting technology that enables scripts (embedded in web pages) to be executed by an Internet server.
Drawbacks of ASP
1. ASP does not have versioning of components: when a new version of a component is introduced, the old version is simply replaced.
2. ASP supports limited scalability (as use of the application increases, the number of users, the amount of data in the database, and other criteria increase).
3. ASP is not known for its crash protection, because ASP assumes that all code has been tested, so memory leaks and infinite loops go unchecked.
4. ASP has no compiled environment, i.e., errors are not reported until the code runs.
5. No separation of design and code.
6. It is not object oriented.
7. No graphical user interface (GUI).
8. Bulky code for small programs.
ASP.NET
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.
The following illustration shows a basic network schema with managed code running in different
server environments. Servers such as IIS and SQL Server can perform standard operations while
your application logic executes through the managed code.
ASP.NET is the hosting environment that enables developers to use the .NET Framework to
target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.
XML Web services, an important evolution in Web-based technology, are distributed, server-
side application components similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not targeted for browsers such
as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable
software components designed to be consumed by other applications, such as traditional client
applications, Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment into the highly
distributed environment of the Internet.
If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offers. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs
to share the same file with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other managed application,
they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to develop than
unmanaged ASP pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in development and
consumption of XML Web services applications. XML Web services are built on standards such
as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL ( the
Web Services Description Language). The .NET Framework is built on these standards to
promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET Framework
SDK can query an XML Web service published on the Web, parse its WSDL description, and
produce C# or Visual Basic source code that your application can use to become a client of the
XML Web service. The source code can create classes derived from classes in the class library
that handle all the underlying communication using SOAP and XML parsing. Although you can
use the class library to consume XML Web services directly, the Web Services Description
Language tool and the other tools contained in the SDK facilitate your development efforts with
the .NET Framework.
If you develop and publish your own XML Web service, the .NET Framework provides a set of
classes that conform to all the underlying communication standards, such as SOAP, WSDL, and
XML. Using those classes enables you to focus on the logic of your service, without concerning
yourself with the communications infrastructure required by distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run
with the speed of native machine language using the scalable communication of IIS.
The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages.
Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing
pages), the ASP.NET Web Forms framework has been specifically designed to address a number
of key deficiencies in the previous model. In particular, it provides:
The ability to create and use reusable UI controls that can encapsulate common
functionality and thus reduce the amount of code that a page developer has to write.
The ability for developers to cleanly structure their page logic in an orderly fashion (not
"spaghetti code").
The ability for development tools to provide strong WYSIWYG design support for pages
(existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx
resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework
class. This class can then be used to dynamically process incoming requests. (Note that the .aspx
file is compiled only the first time it is accessed; the compiled type instance is then reused across
multiple requests).
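The compile-once behavior just described can be mimicked in miniature: the first request for a page triggers compilation, and later requests reuse the cached compiled form. This is a plain-Python analogy for the ASP.NET mechanism, with made-up paths and page source; it is not how ASP.NET is implemented internally.

```python
# Compile-once cache: compile the "page" on first access, reuse thereafter.

_compiled_cache = {}

def get_page(path, source):
    """Return the compiled form of a page, compiling only on first access."""
    if path not in _compiled_cache:
        # ASP.NET similarly compiles an .aspx file into a class on first request.
        _compiled_cache[path] = compile(source, path, "exec")
    return _compiled_cache[path]

code1 = get_page("/default.aspx", "output = 'Hello'")
code2 = get_page("/default.aspx", "output = 'Hello'")
assert code1 is code2   # the same compiled object serves both requests
```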
An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx (no modification of code is required). For example, a simple HTML page might collect a user's name and category preference and then perform a form post back to the originating page when a button is clicked.
ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <%
%> code render blocks that can be intermixed with HTML content within an .aspx file. These
code blocks execute in a top-down manner at page render time.
ASP.NET supports two methods of authoring dynamic pages. The first is the method shown in
the preceding samples, where the page code is physically declared within the originating .aspx
file. An alternative approach--known as the code-behind method--enables the page code to be
more cleanly separated from the HTML content into an entirely separate file.
In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET
page developers can use ASP.NET server controls to program Web pages. Server controls are
declared within an .aspx file using custom tags or intrinsic HTML tags that contain a
runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the
controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to the
server. This control state is not stored on the server (it is instead stored within an <input
type="hidden"> form field that is round-tripped between requests). Note also that no client-side
script is required.
In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
ASP.NET Web Forms pages can target any browser client (there are no script library or
cookie requirements).
ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
ASP.NET ships with 45 built-in server controls. Developers can also use controls built by
third parties.
ASP.NET server controls can automatically project both uplevel and downlevel HTML.
ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
ASP.NET validation controls provide an easy way to do declarative client or server data
validation.
Features of ASP.NET:
1. Simplicity
2. Performance
ASP.NET uses compiled CLR code on the server; ASP.NET is not interpreted. ASP.NET is a modern environment and takes advantage of just-in-time compiling and early binding of objects to provide the best performance possible.
3. Tool Support
4. Customizability
The ASP.NET architecture is designed to allow developers to plug in their code at any level.
5. Scalability
ASP.NET monitors process memory and processor usage, and it automatically regulates itself when things get out of hand.
6. Manageability
All configuration of ASP.NET applications is performed in XML-based text files. Any time you make a change to a configuration file and save it, the setting is applied instantly to the application without a need for restarting services or disconnecting users.
1) Client perspective
2) Server perspective
1) Client perspective
From the client’s perspective, the ASP.NET interaction process is straightforward. The user browses to an ASP.NET page by the standard means of browsing web pages. The page loads in the browser. The user interacts with the page through form elements such as text boxes and buttons, and submits the request via the page’s user interface; the results are returned to the browser.
It does not get any more straightforward than that. All the user has to do is click around on the page and use the interface elements defined on that page. The server, however, has to do a lot more to make things happen.
2) Server perspective
From the server perspective, the process begins when a user requests a page and seemingly ends
when the output is returned to the browser.
f) The ASP.NET code outputs plain HTML and pipes it through a chain of defined modules
down to IIS, which in turn delivers it to the browser.
g) If the user interacts with an HTML element that has a server-side event handler, a post-back
to the server occurs.
h) The ASP.NET page is loaded into memory in the same manner as before.
i) The resulting HTML is sent to the browser, and the process continues in a post-back
cycle.
This post-back cycle maintains the dynamic nature of an ASP.NET page.
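A minimal sketch of how a code-behind page can distinguish the first request from a subsequent post-back, using the page's IsPostBack property:

```csharp
// Minimal Web Forms code-behind sketch (illustrative, not the project's code).
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // First request: populate controls with their initial data here.
    }
    // On a post-back, view state restores control values before
    // server-side event handlers (e.g. a Button's Click) run.
}
```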
1. Database
A database management system, or DBMS, gives users access to their data and helps them
transform the data into information. Such database management systems include dBase, Paradox,
IMS and SQL Server. These systems allow users to create, update and extract information from
their databases.
SQL Server stores records relating to each other in a table. Different tables are created for the
various groups of information. Related tables are grouped together to form a database.
3. Primary Key
Every table in SQL Server has a field or a combination of fields that uniquely identifies each
record in the table. This unique identifier is called the primary key, or simply the key. The
primary key provides the means to distinguish one record from all others in a table. It allows the
user and the database system to identify, locate and refer to one particular record in the database.
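As an illustrative sketch (the table and column names are assumptions, not the project's actual schema), such a key can be declared in SQL Server as:

```sql
-- EmployeeId uniquely identifies each record; SQL Server rejects
-- a second row with the same EmployeeId.
CREATE TABLE Employee (
    EmployeeId INT         NOT NULL PRIMARY KEY,
    FullName   VARCHAR(50) NOT NULL,
    Department VARCHAR(30)
);
```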
4. Relational Database
Sometimes all the information of interest to a business operation can be stored in one table, but
SQL Server also makes it very easy to link the data in multiple tables. Matching an employee to
the department in which they work is one example. This is what makes SQL Server a relational
database management system, or RDBMS: it stores data in two or more tables and enables you
to define relationships between the tables.
5. Foreign Key
When a field in one table matches the primary key of another table, that field is referred to as a
foreign key. A foreign key is a field or a group of fields in one table whose values match those of
the primary key of another table.
6. Referential Integrity
Not only does SQL Server allow you to link multiple tables, it also maintains consistency
between them. Ensuring that the data among related tables is correctly matched is referred to as
maintaining referential integrity.
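A sketch of how a foreign key declared with REFERENCES lets SQL Server enforce this matching, assuming a hypothetical Employee table keyed by EmployeeId (all names here are illustrative assumptions, not the project's actual schema):

```sql
-- EmployeeId in LeaveRecord is a foreign key into Employee: SQL Server
-- rejects a leave row whose EmployeeId has no matching Employee row,
-- and blocks deleting an Employee row that is still referenced.
CREATE TABLE LeaveRecord (
    LeaveId    INT NOT NULL PRIMARY KEY,
    EmployeeId INT NOT NULL REFERENCES Employee (EmployeeId),
    LeaveType  VARCHAR(20),
    NoOfDays   INT
);
```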
7. Data Abstraction
A major purpose of a database system is to provide users with an abstract view of the data. This
system hides certain details of how the data is stored and maintained. Data abstraction is divided
into three levels.
8. Physical level
This is the lowest level of abstraction at which one describes how the data are actually stored.
9. Conceptual Level
At this level of database abstraction, what data are actually stored is described, along with the
entities and the relationships among them.
10. View Level
This is the highest level of abstraction, at which one describes only part of the database.
Advantages of RDBMS:
SQL SERVER is one of the leading database management systems (DBMS) because it is the
only database that meets the uncompromising requirements of today’s most demanding
information systems. From complex decision support systems (DSS) to the most rigorous online
transaction processing (OLTP) applications, even applications that require simultaneous DSS and
OLTP access to the same critical data, SQL Server leads the industry in both performance and
capability. SQL SERVER is a truly portable, distributed, and open DBMS that delivers
unmatched performance, continuous operation and support for every database.
SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially designed
for online transaction processing and for handling large database applications.
SQL SERVER with the transaction processing option offers two features which contribute to a
very high level of transaction processing throughput:
The unrivaled portability and connectivity of the SQL SERVER DBMS enable all the systems
in the organization to be linked into a single, integrated computing resource.
2. Portability
SQL SERVER is fully portable to more than 80 distinct hardware and operating system
platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms.
This portability gives complete freedom to choose the database server platform that meets the
system requirements.
3. Open Systems
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server’s open
architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry’s most
comprehensive collection of tools, applications, and third-party software products. SQL Server’s
open architecture provides transparent access to data from other relational databases and even
non-relational databases.
SQL Server’s networking and distributed database capabilities allow you to access data stored on
a remote server with the same ease as if the information were stored on a single local computer. A
single SQL statement can access data at multiple sites. You can store data where system
requirements such as performance, security or availability dictate.
5. Unmatched Performance
The most advanced architecture in the industry allows the SQL SERVER DBMS to deliver
unmatched performance.
Real-world applications demand access to critical data. With most database systems, applications
become “contention bound”, in which performance is limited not by CPU power or by disk
I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted
row-level locking and contention-free queries to minimize, and in many cases entirely eliminate,
contention wait times.
7. No I/O Bottlenecks
SQL Server’s fast commit, group commit and deferred write technologies dramatically reduce
disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL
Server commits transactions with at most a single sequential write to the log file. On high-
throughput systems, one sequential write typically group-commits multiple transactions. Data
read by a transaction remains in shared memory so that other transactions may access that data
without reading it again from disk. Since fast commits write all data necessary for recovery to
the log file, modified blocks are written back to the database independently of transaction
commit.
Implementation is the process of converting a new system design into operation. It is the phase
that focuses on user training, site preparation, and file conversion for installing the system under
consideration. The important factor to be considered here is that the conversion should
not disrupt the functioning of the organization.
The objective is to put the tested system into operation while holding costs, risks, and personnel
irritation to a minimum.
In our project the conversion involves following steps:
1. Conversion begins with a review of the project plan, the system test documentation, and the
implementation plan. The parties involved are the user, the project team, programmers, and
operators.
2. The conversion portions of the implementation plan are finalized and approved.
3. Files are converted.
4. Parallel processing between the existing and the new systems is initiated.
5. Results of the computer runs and operations for the new system are logged on a special form.
6. Assuming no problems, parallel processing is continued. Implementation details are
documented for reference.
The prime concern during the conversion process is copying the old files into the new
system. Once a particular file is selected, the next step is to specify the data to be converted. A
file comparison program is best used for verifying the accuracy of the copying process.
Well-planned test files are important for successful conversion. An audit trail was maintained on
the system, since it is the key to detecting errors and fraud in the new system.
During implementation, user training is most important. In our project no heavy training is
required: only training in how to design and post the files, how to use the administration tools,
and how to retrieve files.
A post-implementation review is an evaluation of a system in terms of the extent to which the
system accomplishes its stated objectives and whether actual project costs exceed initial
estimates. It is usually a review of major problems that need correcting, including those that
surfaced during the implementation phase. The team prepares a review plan around the type of
evaluation to be done and the time frame for its completion. The plan considers administrative
and personnel issues, system performance, and the changes that are likely to take place through
maintenance.
8. Maintenance
Maintenance is the enigma of system development. It holds the software industry captive,
tying up programming resources. Maintenance is actually the implementation of the post-
implementation review plan.
4 SYSTEM DESIGN
4.1 Module and Module Description
1. Login
2. Employee
3. Administrator
4. Project Status
1. Login
In this module the Employee or Administrator enters the system using different user names.
After logging in, the user does the related work and then logs out.
2. Employee
Under this we have five sub-titles:
If the Employee wants to apply for a leave, he applies in the Apply Leave form. It contains the
leave date, apply date, number of days and reason fields.
If the Employee wants to see his salary report, he can see it in the View Salary form.
In Check Leave, the Employee can apply for leave and see the status of the applied leave.
In Project Status, the Employee can see the status of the project and manage the project
details.
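The salary report the Employee views here is, in essence, pay prorated by days attended after leave deductions. A minimal sketch of that calculation (the method and parameter names are assumptions, not the project's actual code):

```csharp
// Hypothetical helper: prorate monthly basic pay by days attended.
public static decimal CalculateSalary(decimal monthlyBasic,
                                      int workingDays,
                                      int daysAttended)
{
    if (workingDays <= 0 || daysAttended < 0 || daysAttended > workingDays)
        throw new ArgumentOutOfRangeException();
    // e.g. basic 30000, 30 working days, 27 attended -> 27000
    return Math.Round(monthlyBasic * daysAttended / workingDays, 2);
}
```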
3. Administrator
Under this we have five sub-titles:
If the Employee is new to the organization, he/she has to register in the New Employee
Registration form.
Here the administrator sees the leaves applied for by employees and manages those leaves.
The Entity Relationship Diagram (ERD) depicts the relationships between the data objects. The
ERD is the notation that is used to conduct the data modeling activity; the attributes of each data
object noted in the ERD can be described using a data object description.
The set of primary components that are identified by the ERD are
Data object
Relationships
Attributes
Various types of indicators
The primary purpose of the ERD is to represent data objects and their relationships.
Scenario – an example of what happens when someone interacts with the system.
Actor – A user or another system that interacts with the modeled system.
A use case diagram describes the relationships between actors and scenarios.
Extend - defines that instances of a use case may be augmented with some additional
behavior defined in an extending use case.
Uses - defines that a use case uses behavior defined in another use case.
5 SYSTEM IMPLEMENTATION
Login form:
Admin form:
Employee form:
Manager form:
6 SAMPLE CODE
6.1 Login.aspx:
</td>
</tr>
</table>
</asp:Content>
6.2 Login.aspx.cs:
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.Drawing;
public partial class bascilogin : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
txtLoginId.Focus();
}
protected void btnsignin_Click(object sender, EventArgs e)
{
try
{
if (txtLoginId.Text == "")
{
lblmsg.Text = "Enter Username";
txtLoginId.Focus();
}
else if (txtPassword.Text == "")
{
txtPassword.Focus();
lblmsg.Text = "Enter Password";
}
else
{
SqlConnection cn = new SqlConnection(Class1.con);
string query = "";
if (txtLoginId.Text == "admin")
{
// remainder of the admin/employee branching and query logic is
// omitted in the original listing
}
}
}
catch (Exception exe)
{
lblmsg.Text = exe.Message; // report the failure instead of silently swallowing it
}
}
}
7 SYSTEM TESTING
The testing phase involves testing the developed system using various data. Preparation of
the test data plays a vital role in system testing. After preparing the test data, the system under
study was tested using those data. While testing the system with the test data, errors were
found and corrected using the following testing steps, and the corrections were noted for
future use. Thus, a series of tests is performed on the proposed system before the system is
ready for implementation.
1. Software Testing
As the coding is completed according to the requirements, we have to test the quality of the
software. Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design and coding. Testing is conducted to uncover errors
and to verify that the software functions appear to be working as per the specification and that
the performance requirements appear to have been met. In addition, data collected as testing is
conducted provide a good indication of software reliability and some indication of software
quality as a whole. To assure software quality we conduct both white box testing and black box
testing.
2. White Box Testing
White box testing is a test case design method that uses the control structure of the procedural
design to derive test cases. As we are using a non-procedural language, there is very little scope
for white box testing. Wherever it was necessary, the control structures were tested, and all of
them passed with very few errors.
3. Black Box Testing
Black box testing focuses on the functional requirements of the software. It enables us to derive
sets of input conditions that will fully exercise all functional requirements of a program. Black
box testing finds almost all errors: it finds interface errors, errors in accessing the database
and some performance errors. In black box testing we use two techniques, equivalence
partitioning and boundary value analysis.
4. System Testing
System testing is designed to uncover weaknesses that were not detected in the earlier tests. The
total system is tested for recovery and fallback after various major failures to ensure that no data
are lost. An acceptance test is done to verify the validity and reliability of the system. The
philosophy behind the testing is to find errors in the project, and many test cases are designed
with this in mind. The flow of testing is as follows.
5. Code Testing
This strategy examines the logic of the program: here only the syntax of the code is tested. In
code testing, syntax errors are corrected to ensure that the code is correct.
Unit Testing
The first level of testing is called unit testing. Here different modules are tested against
the specifications produced during the design of the modules. Unit testing is done to test the
working of individual modules with test oracles. Unit testing comprises a set of tests performed
by an individual programmer prior to integration of the units into a larger system. A program unit
is usually small enough that the programmer who developed it can test it in great detail. Unit
testing focuses first on the modules to locate errors. These errors are verified and corrected so
that the unit fits perfectly into the project.
System Testing
The next level of testing is system testing and acceptance testing. This testing is done to
check if the system has met its requirements and to find the external behavior of the system.
System testing involves two kinds of activities.
Integration Testing
The next level of testing is called integration testing. In this, many tested modules are
combined into subsystems, which are then tested. Test case data is prepared to check the
control flow of all the modules and to exhaust all possible inputs to the program. Situations such
as a module being exercised with no data entered in a text box are also tested.
This testing strategy dictates the order in which modules must be available, and exerts a strong
influence on the order in which the modules must be written, debugged and unit tested. In
integration testing, all modules on which unit testing has been performed are integrated together
and tested.
6. Acceptance Testing
This testing is performed finally by user to demonstrate that the implemented system
satisfies its requirements. The user gives various inputs to get required outputs.
7. Specification Testing
This is done to check whether the program does what it should do and how it behaves
under various conditions or combinations of inputs; data are submitted for processing in the
system, and it is checked whether any overlaps occur during the processing.
This is done to determine how long the system takes to accept and respond, i.e., the total time for
processing when it has to handle quite a large number of records. It is essential to check the
execution speed of the system, which runs well with only a handful of test transactions; such
systems might be slow when fully loaded, so testing is done by providing a large number of
records for processing. A system test is designed to uncover weaknesses that were not detected
in the earlier tests. The total system is tested for recovery and fallback after various major
failures to ensure that no data are lost during an emergency, and an acceptance test is done to
ensure the validity and reliability of the system.
2. Compilation Test
It was a good idea to do our stress testing early on, because it gave us time to fix some of the
unexpected deadlocks and stability problems that only occurred when components were exposed
to very high end tasks.
3. Execution Test
This program was successfully loaded and executed. Because of good programming there was no
execution error.
4. Output Test
The successful output screens are placed in the output screens section.
8 RESULT/REPORT
Pay Slip Report
9 SYSTEM MAINTENANCE
The results obtained from the evaluation process help the organization to determine whether its
information systems are effective and efficient or otherwise. The process of monitoring,
evaluating, and modifying of existing information systems to make required or desirable
improvements may be termed as System Maintenance.
For the purpose of convenience, maintenance may be categorized into three classes, namely:
Corrective maintenance implies removing errors in a program, which might have crept into the
system due to faulty design or wrong assumptions. Thus, in corrective maintenance, processing
or performance failures are repaired.
In adaptive maintenance, program functions are changed to enable the information system to
satisfy the information needs of the user. This type of maintenance may become necessary
because of organizational changes, which may include:
10 CONCLUSION
The Employee Payroll System helps the organization store the details of the employees and
their salaries. Once the salary is stored, it can be used for various calculations, such as the tour
allowance, house rent allowance etc. It also maintains the leaves of the employees.
It provides a web application and an automated procedure to calculate the salary according to the
leaves and deductions, which reduces most of the workload that was previously done manually.
The proposed system eliminates the drawbacks of the existing system and provides great
efficiency for the organization, with an effective implementation using the latest technologies.
11 FUTURE ENHANCEMENT
The application Employee Evaluation System developed by us has made the best possible effort
to satisfy the needs of the Employee and the Administrator. The details can be accessed and the
salary calculation is done according to the rules and regulations of the particular company in the
shortest time frame, keeping employers up to date on their statutory obligations.
It efficiently manages employee information with an easy-to-use interface and processes the
monthly evaluation along with satisfying features like leave reports, time sheets and work
sheets, so that it promotes clear, transparent, accountable and user-friendly administration.