
Software Testing
Important Questions For Semester
Introduction to Software Testing

Software Testing is a procedure to verify whether the actual results are the same as the expected results. It is
performed to provide assurance that the software system does not contain any defects.

Advantages Of Software Testing

• It reduces the possibility of software failure by removing errors.

• Testing process removes the maximum possible error from the software and helps us to deliver quality
software to the customer.

• Testing ensures the correctness and completeness of the software along with its quality.

• Testing allows us to verify and validate the software and ensures that software will satisfy all of the user
requirements.
Objectives of Software Testing

• Finding defects which may be introduced by the programmer/developer while coding the software.

• Report the detected defects to the developer for correction and after correction retest
the software product to give quality assurance.

• To make sure that the end result meets the business and user requirements.

• To prevent Defects.

• To give assurance that we deliver quality software to our end user.


Testing Terminology

Failure: When the software is not capable of performing a required function and produces unexpected
results, it is termed a failure in software testing.

Error: A human action that produces an incorrect result.

Fault: An incorrect step, process, or data definition in a computer program.

Defect: It is an error found after the application goes into production.

Bug: A colloquial term for a software fault or defect causing problems during program
execution.
Methods of Testing

Methods of testing define how a software product is tested. There are two
methods of testing a software product:

Static Testing: In static testing, the code is not executed. Instead, the code, requirement documents, and
design documents are manually checked to find errors. The main objective of this testing is to improve the
quality of software products by finding errors in the early stages of the development cycle.

Dynamic Testing: Dynamic testing is done by executing the program. The main objective
of this testing is to confirm that the software product works in conformance with the
business requirements.
Positive Testing & Negative Testing

Positive Testing: It is performed on the system by providing valid data as input.

Example: A field with the prompt “Enter Only Numbers” is given the input 123456789. The user
entered numbers as defined by the prompt, so this is Positive Testing.

Negative Testing: It is performed on the system by providing invalid data as input.

Example: A field with the prompt “Enter Only Numbers” is given the input abcde. The user
entered alphabets instead of numbers as defined by the prompt, so this is Negative Testing.
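The “Enter Only Numbers” example above can be expressed as one positive and one negative test against a hypothetical input validator (the function name and rule here are assumptions for illustration, not part of any real system):

```python
import re

def accepts_only_numbers(value: str) -> bool:
    """Hypothetical validator for the 'Enter Only Numbers' field:
    accepts the input only if it consists entirely of digits."""
    return re.fullmatch(r"[0-9]+", value) is not None

# Positive test: valid input should be accepted
assert accepts_only_numbers("123456789") is True

# Negative test: invalid input should be rejected
assert accepts_only_numbers("abcde") is False
```

Note that the negative test passes when the system correctly *rejects* the bad input, not when the input is accepted.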
Boundary Value Analysis (BVA)

Boundary Value: In software testing, BVA is a black box test case design technique. It is applied to check
whether there are any bugs at the boundaries of the input domain. It tests values on the boundary
between valid and invalid conditions. This technique is an easy, quick and effective way to catch input
errors. There are three guidelines of Boundary Value Analysis.

Three Guidelines are as follows:

1) One test case for the exact boundary value of the input domain.
2) One test case just below the boundary value of the input domain.
3) One test case just above the boundary value of the input domain.

Example: An exam has its pass boundary at 40%, merit at 75% and distinction at 85%. The pass boundary test values will be:

• 40% (exact pass)
• 39% (just below pass)
• 41% (just above pass)
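The three-guideline rule can be sketched as a small helper that generates the BVA values for any boundary, checked here against a hypothetical grading function built from the pass/merit/distinction thresholds in the example (the function names are assumptions for illustration):

```python
def bva_values(boundary: int) -> list:
    """Return the three BVA test values for a boundary:
    just below, exactly on, and just above the boundary."""
    return [boundary - 1, boundary, boundary + 1]

def grade(marks: int) -> str:
    # Hypothetical grading rule from the example:
    # pass at 40%, merit at 75%, distinction at 85%
    if marks >= 85:
        return "Distinction"
    if marks >= 75:
        return "Merit"
    if marks >= 40:
        return "Pass"
    return "Fail"

# Test cases at the pass boundary (40%): 39, 40, 41
assert [grade(m) for m in bva_values(40)] == ["Fail", "Pass", "Pass"]
```

The same helper applied at 75 and 85 would exercise the merit and distinction boundaries.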
Equivalence Class Partition.

Equivalence Class Partition: It is a black box test design technique that can be used at every level of testing. In this
method we separate the set of test conditions into different classes (partitions). We can use this method where the
range of input data for a specific input field is available. It is an easy method.

Equivalence Partitioning is used to separate the input data of the software product into different classes / partitions.

Example: An insurance company has the following premium rates based on age group.

Age Group | Additional Premium
Under 35  | $1.50
35 – 59   | $2.50
60+       | $4.00

Equivalence classes for the Age input:
• Below 35 Years (Valid Input)
• Between 35-59 Years (Valid Input)
• Above 60 Years (Valid Input)
• Negative Age (Invalid Input)
• 0 Age (Invalid Input)
• Three digit Age (Valid Input)
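A minimal sketch of the premium rule above, tested with one representative value from each equivalence class (the function name and the choice to raise an error for the invalid classes are assumptions for illustration):

```python
def additional_premium(age: int) -> float:
    """Hypothetical premium rule from the insurance example.
    Raises ValueError for the invalid partitions (negative or zero age)."""
    if age <= 0:
        raise ValueError("invalid age")
    if age < 35:
        return 1.50
    if age <= 59:
        return 2.50
    return 4.00

# One representative value per equivalence class is enough:
assert additional_premium(20) == 1.50    # below 35 (valid)
assert additional_premium(45) == 2.50    # 35-59 (valid)
assert additional_premium(70) == 4.00    # above 60 (valid)
assert additional_premium(100) == 4.00   # three-digit age (valid)
for bad in (-5, 0):                      # invalid classes
    try:
        additional_premium(bad)
        assert False, "should have raised ValueError"
    except ValueError:
        pass
```

Testing one value per class gives the same coverage as testing every age, which is the point of the technique.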
Testing & Its Types.
Testing in software means software testing. Testing simply means checking whether all the functions in the software are
working properly or not. There are various types of software testing:

A) Black Box Testing: The Code or Structure is unknown / hidden.

B) White Box Testing: The Code or Structure is known/ unhidden.

C) Unit Testing: This method involves testing every module present in the software.

D) Integration Testing: This method checks the compatibility of one module to other.

E) System Testing: This method involves checking the software in different environments to verify whether it is
compatible or not.

F) Stress Testing: This method involves checking whether the software can handle the load, and how much load it
can handle.

G) Performance Testing: This testing involves checking of the speed and effectiveness of the software.
External Standard & Its Types

External standards are standards made by an entity external to an organization. These are standards that a
product must comply with; they are externally visible and are usually stipulated by external parties.

Three Types of External Standards:

A) Customer Standard: It refers to something defined by the customer as per his/her business
requirements for the given product.

B) National Standard: It refers to standards defined by national-level entities in the country where the
supplier/customer resides.

C) International Standard: It refers to standards that are defined at the international level and are
applicable to all customers across the world.
Black Box Testing Vs White Box Testing

Black Box Testing | White Box Testing
It is carried out by the tester. | It is carried out by the developer.
It aims at what functionality the system is performing. | It aims at how the system is performing its functionality.
It is less expensive compared to WBT. | It is more expensive compared to BBT.
In BBT, we do not focus on the code. | In WBT, we focus on the code.
Programming and implementation knowledge is not required. | Programming and implementation knowledge is required.
Inspection Testing Vs Walkthrough Testing

Inspection Testing | Walkthrough Testing
It is a formal type of review. | It is an informal type of review.
It is initiated by the project team. | It is initiated by the author of the code or document.
Inspection includes a planned meeting with all the members of the project. | Walkthrough does not include a planned meeting.
In inspection, a recorder records the defects. | In walkthrough, the author makes notes of the defects and suggestions given by team members.
Verification Vs Validation

Verification | Validation
Are you building it right? | Have you built the right thing?
It is conducted by the Quality Assurance team. | It is conducted by the Quality Control team.
Methods involve reviews, unit testing, inspection, and integration testing. | Methods involve system testing such as Black Box Testing, White Box Testing, Grey Box Testing, etc.
Plans, requirement specifications, design specifications, code, test cases, etc. are evaluated in verification. | The actual software is tested in validation.
Static and dynamic activities. | Only dynamic activities.
Software Quality Assurance Vs Software Quality Control

Software Quality Assurance | Software Quality Control
Quality Assurance is an activity for assuring quality in the process by which the product is developed. | Quality Control is an activity which focuses on identifying defects after the product is developed.
Verification is an example of SQA. | Validation is an example of SQC.
The main goal is to find defects and errors while the product is under development. | The main goal is to find defects and errors after the product is developed.
When statistical tools and techniques are applied to the process, this is known as Statistical Process Control (SPC). | When statistical tools and techniques are applied to the finished product, this is known as Statistical Quality Control (SQC).
Levels Of Testing

• Unit Testing: This is the first level of testing. In this testing, the units or
components of a module are tested. This includes Black Box (hidden)
testing and White Box (transparent) testing.

• Integration Testing: This includes combining the modules that were tested
in unit testing and then checking whether all the parts are compatible with
each other. There are three types of integration testing: A) Top-Down
Approach B) Bi-Directional / Mixed Approach C) Bottom-Up Approach.

• System Testing: This testing takes place after integration testing. It
verifies the entire system's functionality and performance.

• Acceptance Testing: It ensures the software meets business requirements
and is ready for deployment to customers.

(Diagram: Levels of Testing, from bottom to top: Unit Testing → Integration Testing → System Testing → Acceptance Testing)
Examples of Integration Testing

• Verifying the interface link between the login page and the home page, i.e. when a
user enters the credentials and logs in, they should be directed to the homepage.

• Check the interface link between the Login and Mailbox module.

• Check the interface link between the Mailbox and Delete Mails Module.

• Verifying the interface link between the home page and the profile page i.e.
profile page should open up.
Types of Testing

Performance Testing: It is a type of software testing that is carried out to test the speed, scalability,
stability, and reliability of software product or application.

Load Testing: It is a type of software testing that is carried out to check the behavior of the system or
software product under extreme load. For example, if a site can hold only 10 concurrent users, the
tester will try to run more than 10 concurrent users and record the behavior of the system.

Stress Testing: It is a software testing that is carried out to determine the behavior of the software
under abnormal conditions.

Security Testing: It is a software testing technique that is used to determine if an information system
protects data and maintains functionality as intended.
Testing on MSBTE Website

1) Stress Testing: Stress testing of the MSBTE result display page can be done by degrading the system, for
example by providing less RAM for the server, decreasing the bandwidth of the internet connection, or
allowing fewer hits per page.

2) Load Testing: It is the simplest form of testing, conducted to understand the behavior of the system under a specific
load. It will show how the software responds if the number of students checking the result at the same time is more
than specified.

3) Security Testing: It is a testing technique to determine whether an information system protects data and maintains
functionality. In this scenario it secures the MSBTE website by verifying principles such as:

 Confidentiality of MSBTE online result display system

 Integrity of the Software.

 Authentication of the User who logged in to check the result with valid seat no. or enrolment no.
Testing on Web Application
A web-based application is an application which can be accessed and used over a network such as the Internet or an intranet. Web testing is testing that
focuses on web applications.
Testing Approaches under Web Application are as follows

• Compatibility with different browsers: The tester should test the web-based application on different browsers to make sure the
application behaviour is the same on all browsers.

• Functional Testing: Tester should test the different functionality of the application like calculation, business logic validation,
navigation should be properly done.

• Usability Testing: Tester should also focus on ease to use the web based application. The appearance of the web page and
navigation should be proper and user friendly.

• Integration: The integration between the browser and the server hardware and software should be validated by the tester.

• Security Testing: Security of any web based application is the most important factor while testing. Security must be validated by the
tester.

• Performance Testing: The performance of the web based application should be validated properly in performance testing by using
load testing and stress testing by the tester.
Client Server & Testing on Client Server

A client-server system consists of a client, a server and a network. The client and server are connected to a single network
which helps them communicate with each other. The client sends a request to the server and the server sends a response to
the client’s request.
Testing Approaches for Client Server are as follows:

• Component Testing: Testing of the client and the server must be done individually. For testing the server a client
simulator is needed, for testing the client a server simulator is needed, and for testing the network both
simulators (server and client) are needed.

• Integration Testing: After successful testing of the server, client and network, they are brought together to
perform integration testing of the whole system.

• Performance Testing: System performance is tested with a number of clients communicating with the server at the
same time. Volume and stress testing may also be useful to test under maximum and minimum load.

• Compatibility Testing: It is necessary to test the compatibility of different software, hardware used by the server
for communicating with the client. This includes testing compatibility with various operating systems, browsers,
devices, and network environments to ensure a seamless user experience across different platforms.
Driver Vs Stub

Driver | Stub
Drivers are dummy modules used to simulate high-level modules. | Stubs are dummy modules used to simulate low-level modules.
Drivers are the calling programs. | Stubs are the called programs.
Drivers are used when the main programs are under construction. | Stubs are used when the sub-programs are under construction.
Drivers are used in the bottom-up approach. | Stubs are used in the top-down approach.
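A minimal sketch of both ideas, using hypothetical module names: the module under test depends on an unfinished low-level tax module (replaced by a stub), and is exercised by a driver standing in for the unfinished high-level caller:

```python
# Module under test: depends on a low-level tax module that is
# still under construction (all names here are hypothetical).
def net_price(gross: float, tax_calculator) -> float:
    return gross + tax_calculator(gross)

# Stub: dummy stand-in for the unfinished low-level tax module.
# It returns a hard-coded canned answer instead of real logic.
def tax_stub(gross: float) -> float:
    return 0.0

# Driver: dummy calling program that exercises net_price,
# standing in for the unfinished high-level checkout module.
def test_driver():
    result = net_price(100.0, tax_stub)
    assert result == 100.0
    return result

test_driver()
```

The stub is *called by* the code under test (top-down), while the driver *calls* the code under test (bottom-up), which mirrors the table above.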
Alpha Testing Vs Beta Testing

Alpha Testing | Beta Testing (Field Testing)
Performed at the developer’s site. | Performed at the end user’s site.
Less time consuming. | More time consuming.
It is performed in a virtual environment. | It is performed in a real-time environment.
It comes under both White Box Testing and Black Box Testing. | It comes only under Black Box Testing.
It is done during the implementation phase of the software. | It is done as a pre-release of the software.
Test Planning & Its Structure
A test plan is a strategic document which contains information that describes how to perform testing on an
application in an effective, efficient and optimized way.
Structure Of Test Plan:

1) Testing Process: It includes a description of the major phases of the system testing process.

2) Requirements Traceability: Testing should be planned so that all requirements are individually tested.

3) Tested Items: The products that need to be tested must be specified.

4) Testing Schedule: An overall testing schedule and resource allocation must be specified.

5) Test Recording Procedures: The results of the test must be systematically recorded.

6) Hardware and Software Requirements: It should specify the software tools required and estimate hardware
utilization.

7) Constraints: Constraints affecting the testing process such as staff shortages should be anticipated in this section.
Test Deliverable

The deliverables include the following:

 The test plan: helpful for the tester.
 Test case specifications: details needed for testing.
 Test design specification documents: helpful in designing tests.
 Testing strategy: the approach to follow during testing.
 Testing scripts / procedures: need to be followed.
 Test data: data useful during testing.
 Test incident report: details of situations observed while testing was performed.
 Test traceability matrix: a matrix to trace requirements to tests.
 Test results / reports: the complete report of testing.
 Install / configuration guides: provide guidelines before testing.
 Test logs produced: useful for future testing.
Deciding Test Approach

The test approach / strategy should result in identifying the right type of test for each of the features. Once we have
a prioritised feature list, the next step is to decide what needs to be tested, to enable estimation of size, effort and schedule.

This includes identifying:

• Are any special tools to be used, and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Hardware requirements.
• Software requirements.

Mean Time Between Failures (MTBF): It is the average time between system failures.

Software Reliability Engineering (SRE): It focuses on maintaining software reliability through automation.
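The MTBF definition above reduces to a one-line formula: total operating time divided by the number of failures. A minimal sketch, with the 1,000-hour figure chosen purely as an illustrative example:

```python
def mtbf(total_operating_hours: float, number_of_failures: int) -> float:
    """MTBF = total operating time / number of failures."""
    return total_operating_hours / number_of_failures

# E.g. a system that ran for 1,000 hours and failed 4 times
# averages 250 hours between failures.
assert mtbf(1000, 4) == 250.0
```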
Setting up criteria for testing

Setting up criteria for testing is important and it should be followed accordingly

Some of the typical suspension criteria include

A) Encountering more than a certain number of defects, causing frequent stoppage of the testing activity.

B) Hitting show stoppers that prevent further progress of testing. For example, if a database does not start,
further tests of queries, data manipulation, and so on simply cannot be executed.

C) Developers releasing a new version which they advise should be used in lieu of the product under test
(because of some critical defect fixes).
Test Infrastructure Management

Test requires Infrastructure to be planned up front. This Infrastructure is made up of Three Essential
Elements those are:

• A Test Case Database (TCDB): It contains the relevant information about the test cases in an organization. It is
used for software testing and quality assurance purposes. A TCDB contains: test cases, test case-product cross
reference, test case run history, and test case-defect cross reference.

• Defect Repository: It captures all the relevant defect information for a product. The information that a defect
repository includes: defect details, defect test details, fix details, and communication.

• Configuration Management Repository and Tool: It is defined as a process to systematically organize,
manage and control the changes in the documents, code, and other entities during the software development
cycle. It keeps track of change control and version control of the files / entities that make up a software product.
Test Process

A) Baselining A Test Plan: It means creating a solid, unchanging version of the plan. It's like setting the plan in
stone, so everyone knows what's expected in the testing process.

B) Test Case Specification: It is a document outlining specific conditions, steps, and data needed to test a particular
aspect of a software or system, ensuring it works as expected. A test case specification should clearly identify

1. The Purpose of the Test: The reason why the product is being tested should be mentioned here.

2. Items being tested: The items that are being tested should be mentioned here along with their version release
numbers.

3. Environment needs for running the Test Case: This can include the hardware environment setup, supporting
software environment setup for example setup of operating system, database, etc. should be mentioned here.

4. Steps followed to execute the Test: If the testing is manual, the steps are written as detailed instructions; if
automation testing is used, the steps are translated into the scripting language of the tool.
Test Report

A test report is a document that explains what happened during a software test. It tells if the test was
successful or if there were problems.

There are Three Types of Reports or Communication that are required:

A) Test incident Report: It is a note that tells everyone about a problem encountered during testing,
explaining what went wrong and what was done to fix it, helping teams keep track of issues and
their solutions.

B) Test Cycle Report: It is a document that details the tests being executed in each testing
phase, which builds will be tested during every cycle, and which issues are severe enough to block a cycle.

C) Test Summary Report: It is a document that gives a brief overview of the testing process,
highlighting what was tested, what problems were found, and how well the software performed,
helping everyone understand the overall quality and status of the software after testing.
Decision Table

Decision Table testing is a black box test design technique used to determine test scenarios for complex business logic.
Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for testers.
Essentially it is a structured exercise to formulate requirements when dealing with complex business rules.

Example:

Conditions                | TC-001 | TC-002 | TC-003 | TC-004
Request Login             |   0    |   1    |   1    |   1
Valid Username            |   X    |   0    |   1    |   1
Valid Password            |   X    |   1    |   1    |   0
Offer Recover Credentials |   0    |   1    |   0    |   1

Where, 0 = False
       1 = True
       X = No Action (Don’t Care)
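A decision table can be read mechanically by a program. The sketch below assumes one plausible reading of the login rules, namely that credential recovery is offered whenever a login is requested with an invalid username or password; each rule is a tuple of condition values, with None standing for a "don't care" (X) entry:

```python
# Each rule maps (request_login, valid_username, valid_password)
# to the expected action; None marks a "don't care" condition.
RULES = [
    ((False, None,  None),  False),  # no login requested: no action
    ((True,  False, None),  True),   # bad username: offer recovery
    ((True,  True,  True),  False),  # successful login: no recovery
    ((True,  True,  False), True),   # bad password: offer recovery
]

def offer_recovery(request, valid_user, valid_pass):
    """Return the action for the first rule matching the conditions."""
    for (r, u, p), action in RULES:
        if ((r is None or r == request) and
            (u is None or u == valid_user) and
            (p is None or p == valid_pass)):
            return action
    raise ValueError("no matching rule")

assert offer_recovery(True, False, True) is True    # invalid username
assert offer_recovery(True, True, True) is False    # valid login
```

Encoding the table this way also makes it easy to check that every combination of conditions is covered by exactly one rule.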
Example of Edit function in Notepad Test Case

Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Select All Option | Menu Option | Click on Select All | All text should be selected. | All text is selected. | Pass
TC-002 | Copy Option | Menu Option | Select the text and click on Copy | It should copy the selected text. | It copied the selected text. | Pass
TC-003 | Cut Option | Menu Option | Select the text and click on Cut | Selected text should be cut. | Selected text is cut. | Pass
TC-004 | Paste Option | Menu Option | Click on Paste | Contents should be pasted. | Contents are pasted. | Pass
TC-005 | Delete Option | Menu Option | Select text and click on Delete | Contents should be deleted. | Contents are deleted. | Pass
TC-006 | Save Option | Menu Option | Click on Save | It should save the content of Notepad. | It saved the content of Notepad. | Pass
Example of Library Management System Test Case

Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Username | Textbox | Any valid login name | It should accept a valid username. | It accepted a valid username. | Pass
TC-002 | Password | Textbox | Any valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Book Name | List box | Name of the book | A valid book title must be selected. | Valid book title selected. | Pass
TC-004 | Book Issued | Calendar | Date of issue | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Date of Return | Calendar | Date of return | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-006 | Save | Button | Click | It should save all the details. | It saved all the details. | Pass
Example of Railway Reservation System Test Case

Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Login Field | Textbox | Any valid login name | It should accept the user login. | It accepted the user login. | Pass
TC-002 | Password Field | Textbox | Valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Login | Button | Click | It should redirect the user to the home page. | It redirected the user to the home page. | Pass
TC-004 | Date of Departure | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Date of Arrival | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-006 | Station | List box | Station name | A station must be selected. | Station was selected. | Pass
Example of Simple Calculator Test Case

Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Addition Operation | Button | 5 + 5 | The result should be 10. | The result is 10. | Pass
TC-002 | Subtraction Operation | Button | 20 - 10 | The result should be 10. | The result is 10. | Pass
TC-003 | Multiplication Operation | Button | 1 * 10 | The result should be 10. | The result is 10. | Pass
TC-004 | Division Operation | Button | 10 / 1 | The result should be 10. | The result is 10. | Pass
TC-005 | Divide By Zero | Button | 10 / 0 | It should display an error or the infinity symbol (∞). | It displayed the infinity symbol (∞). | Pass
TC-006 | Clear | Button | Click | It should clear the input screen. | It cleared the input screen. | Pass
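The calculator test cases above translate directly into an automated unit test. This sketch assumes a hypothetical calculate() function under test (including the assumption, taken from TC-005, that dividing by zero yields infinity rather than an error) and mirrors TC-001 through TC-005:

```python
import unittest

def calculate(a: float, op: str, b: float) -> float:
    """Hypothetical calculator under test."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            return float("inf")  # TC-005: display the infinity symbol
        return a / b
    raise ValueError("unknown operator")

class CalculatorTests(unittest.TestCase):
    def test_addition(self):        # TC-001
        self.assertEqual(calculate(5, "+", 5), 10)
    def test_subtraction(self):     # TC-002
        self.assertEqual(calculate(20, "-", 10), 10)
    def test_multiplication(self):  # TC-003
        self.assertEqual(calculate(1, "*", 10), 10)
    def test_division(self):        # TC-004
        self.assertEqual(calculate(10, "/", 1), 10)
    def test_divide_by_zero(self):  # TC-005
        self.assertEqual(calculate(10, "/", 0), float("inf"))

# Run the suite in-process
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CalculatorTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

TC-006 (Clear) is a UI action rather than a computation, so it is omitted from this unit-level sketch.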
Example of Amazon Register Page Test Case

Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Login Field | Textbox | Any valid login name | It should accept the user login. | It accepted the user login. | Pass
TC-002 | Password Field | Textbox | Valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Gender | Radio button | Male or Female | Either Male or Female must be selected. | Male or Female is selected. | Pass
TC-004 | Birth Date | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Address | Textarea | Address | It should accept any address. | It accepted the user’s address. | Pass
TC-006 | Submit | Button | Submit the data | The user must be redirected to the home page. | The user was redirected to the home page. | Pass
Defect Classification

• Requirement Defects: functional defects, interface defects.
• Designing Defects: system interface defects, user interface defects, module defects, algorithmic defects.
• Testing Defects: test environment defects, test tool defects, test design defects.
• Coding Defects: initialisation defects, database-related defects.
Root causes of Defects

1) Human Factor: Due to human error the software cannot be made perfect, i.e. without any defect.

2) Communication Failure: Failures at different levels, such as miscommunication, lack of
communication or incorrect communication, can give rise to a defect.

3) Buggy Third Party Tools: The development process requires a lot of third party tools which may contain
many defects in the content and may cause defect in the current software as well.

4) Lack of Design / Coding Experience: There is a possibility of hiring a developer or tester who lacks design or
coding experience, and that may lead to defects arriving in the software.

5) Unrealistic Development Timeframe: When the team does not have enough information and the
development schedule is limited by the deadline, the situation may give rise to defects and cause a poor-quality product.
Techniques for finding defects

1) Static Technique: This quality control technique involves checking the software
product without executing it. It is also termed verification testing, as it
comes under verification.

2) Dynamic Technique: This technique involves checking the software product by
executing it. It is also termed validation testing, as it comes under validation.

3) Operational Technique: This technique involves auditing work products and
projects to understand whether the processes are being followed correctly
and whether they are effective.
Defect Management Process

1) Defect Prevention: Implementation of techniques, methodology and standard processes to reduce the risk of defects.

2) Deliverable Baseline: Establishment of milestones where deliverables will be considered complete and ready for further
development work.

3) Defect Discovery: Identification and reporting of defects for development team acknowledgement. A defect is only termed
discovered when it has been documented and acknowledged.

4) Defect Resolutions: Work by the development team to prioritize schedule and fix a defect, and document the resolution.

5) Process Improvements: Process improvements refer to the continuous efforts to enhance and optimize the development
process based on feedback and lessons learned from previous projects.

(Flow: Defect Prevention → Deliverable Baseline → Defect Discovery → Defect Resolution → Process Improvement, with Management Reporting across all stages)
Defect Life Cycle

1) New: When a defect is logged for the first time it is in the new state.

2) Assigned: Once the bug is posted by the tester the leader of the tester approves the bug and assigns the bug to the
developer team.

3) Open: It is the state when the developer starts analysing and working on the defect fixing.

(Diagram: New → Assigned → Open → Fixed → Retest → Close; from Open a defect may also move to Rejected, Duplicate or Deferred; from Retest it may move to Reopen and back to Assigned)
Defect Life Cycle Pt.2

4) Fixed: When the developer makes necessary code changes and verifies it, then the developer makes the bug status
as fixed.

5) Retest: At this stage, the tester retests the changed code which the developer has marked as fixed.

6) Close: Once the bug is fixed and has been tested by the tester, the tester changes the status to closed.

7) Reopen: If the bug still exists even after the developer's fix, the tester changes the status to reopen.

8) Rejected: If the developer feels that the bug is not genuine, then the developer rejects the bug.

9) Duplicate: If the bug is reported twice or more, then the tester marks the bug as duplicate.

10) Deferred: It means the bug fix is expected in the next release of the software product.
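The defect life cycle is essentially a state machine. A simplified sketch of the allowed transitions (real defect trackers add more states and role-based rules; the exact transition set here is an assumption based on the stages described above):

```python
# Allowed status transitions in the defect life cycle
# (simplified sketch; the transition set is an assumption).
TRANSITIONS = {
    "New":      {"Assigned", "Deferred"},
    "Assigned": {"Open", "Rejected"},
    "Open":     {"Fixed", "Rejected", "Duplicate", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Close", "Reopen"},
    "Reopen":   {"Assigned"},
    "Close": set(), "Rejected": set(), "Duplicate": set(), "Deferred": set(),
}

def move(status: str, new_status: str) -> str:
    """Move a defect to a new status, enforcing the life cycle."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# Happy path: New -> Assigned -> Open -> Fixed -> Retest -> Close
s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Retest", "Close"]:
    s = move(s, nxt)
assert s == "Close"
```

Encoding the transitions explicitly makes illegal moves (e.g. New directly to Close) fail fast instead of silently corrupting the defect's history.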
Defect Template Report
A defect report is a document that identifies and describes a defect detected by a tester. A defect report template can
consist of the following elements:

1. ID: Unique identity given to the defect.
2. Project Name: Name of the project.
3. Product: Name of the product.
4. Release Version: Release version of the product.
5. Module: The module of the product where the defect was detected.
6. Summary: Summary of the defect.
7. Description: Detailed description of the defect.
8. Expected Result: Expected result of the product.
9. Actual Result: Actual result of the product.
10. Defect Severity: Severity of the defect.
11. Defect Priority: Priority of the defect.
12. Reported By: Name of the person who wrote the report.
13. Assigned To: Name of the person who is assigned to analyse or fix the defect.
14. Status: Status of the defect.
Example of ATM Defect Report

ID: R1
Project: Cash Simulator (ATM)
Product: ATM
Release Version: v1.0
Module: Home Page
Summary: No option of withdrawing money in cash withdrawal.
Description: The Bank’s interaction screen contains only “Deposit” and “Recent Transactions” options and is missing the “Withdraw” option.
Expected Result: Withdraw option must be visible in the Bank’s interaction screen.
Actual Result: Withdraw option is not visible in the Bank’s interaction screen.
Defect Severity: Medium / High
Defect Priority: Medium / High
Reported By: Rajesh
Assigned To: Pranav
Status: Fixed
Selecting Testing Tool

 Selecting the testing tool is an important aspect of test automation for several reasons:

1) Free tools are often not well supported and may be phased out soon.
2) Test tools sold by vendors are expensive.
3) Not all test tools run on all platforms.

 Criteria For Selecting Test Tools (Automation):

1) Meeting Requirements: The tester must ensure that the tool satisfies all the requirements and is efficient.

2) Technology Expectations: Test tools are not 100% cross-platform. They are supported only on some operating
systems, hence the tester must check whether the tool supports the target operating system.

3) Training Skills: Some test tools require plenty of training, and very few vendors provide training to the required level.

4) Management Aspects: A test tool increases the system requirements and may require the hardware and software to be
upgraded. This increases the cost of the already-expensive test tool, so managing the budget is necessary.
When To Use Automated Tools & Its Use.
 Automated Test tools are used for increasing the effectiveness, efficiency and coverage of software testing.
 When to use Automated Tools?
1) Complexity of Test Cases: Simple and repetitive test cases may be suitable for automation, while complex
test cases where testers have to take a lot of decisions are less suitable for automation tools.

2) Test Case Dependency: When the success or failure of one test has a direct effect on the next test case, automation
should be avoided.

3) Number Of Iterations Of Testing: If the same test is executed many times (repetitive) automation is a better
option than doing it manually.
 Testing Using Automated Tools

1) Partial Automation: In software testing there are many processes that do not fit the need of complete automation.
Partial Automation allows the organizations to observe the benefits of automation without the involvement of time
and cost.

2) Synchronization: It ensures that all tests happen on all of the systems in the correct order.
Limitations of Manual Testing

1) Manual testing is slow and costly.

2) Manual testing takes more time, manpower and cost.

3) Manual testing is not recommended for repetitive iterations of the same tests.

4) Manual testing can sometimes be boring for the tester and hence may be error prone.

5) Lack of training is a common problem.

6) Manual testing cannot always be accurate.


Manual Testing Vs Automation Testing

Manual Testing | Automation Testing
To perform manual testing, human interaction is required for test execution. | To perform automation testing, no human interaction is required, as tools run the test execution.
To perform manual testing we require skilled employees. | To perform automation testing we require automation tools.
More time, cost and manpower are required. | Less time, cost and manpower are required.
Any application can first be tested manually. | It is used to test applications whose requirements are static.
Repetitive execution of test cases becomes boring and error prone. | Repetitive execution of test cases is neither boring nor error prone.
Metrics And Its Types:
 Metrics are simply standards of measurement. There are two main types of software metrics:

A) Product Metrics: (explained on the next page) These include:

i) Project Metrics: A set of metrics that indicates how the project is planned and executed.

ii) Progress Metrics: A set of metrics that tracks how the different activities of the project are progressing.

iii) Productivity Metrics: These metrics help in planning and estimating testing activities.

B) Process Metrics: (explained on the next page)

(Diagram: Types of Metrics — Product Metrics (Project, Progress, Productivity) and Process Metrics)

Product Metrics, Process Metrics, Object Oriented Metrics

1) Product Metrics: These are metrics which have more meaning in the perspective of the
software product being developed. One example is the quality of the developed product.

2) Process Metrics: These are measurements used to understand how well the processes by which the
software is developed and tested are working, for example the effectiveness of defect removal
during development.

3) Object Oriented Metrics: These metrics measure characteristics of object-oriented designs,
such as the number of classes, depth of inheritance, and coupling between objects.
