Software Testing Important Semester Questions
Software Testing is a procedure to verify whether the actual results are the same as the expected results. It is
performed to provide assurance that the software system does not contain any defect.
• The testing process removes as many errors as possible from the software and helps us deliver quality
software to the customer.
• Testing ensures the correctness and completeness of the software along with its quality.
• Testing allows us to verify and validate the software and ensures that the software will satisfy all of the user
requirements.
Objectives of Software Testing
• Report the detected defects to the developers for correction, and retest the software product after
correction to give quality assurance.
• To make sure that the end result meets the business and user requirements.
• To prevent Defects.
Failure: When the software is not capable of performing the required function and is
producing results that are not expected, it is termed a failure in Software Testing.
Defect: It is an error found after the application goes into production.
Bug: A colloquial term for a software fault or defect causing problems during program
execution.
Methods of Testing
Methods of Testing define how a software product is tested. There are two
methods of testing a software product:
Static Testing: In static testing the code is not executed. Instead, the code, requirement
documents, and design documents are checked manually to find errors. The main objective of
this testing is to improve the quality of software products by finding errors in the early
stages of the development cycle.
Dynamic Testing: Dynamic testing is done by executing the program. The main objective
of this testing is to confirm that the software product works in conformance with the
business requirements.
Positive Testing & Negative Testing
Positive Testing: It is performed on the system by providing valid data as input. For example, a field has the
prompt “Enter only Numbers” and the user enters 123456789, i.e. numbers as defined by the prompt. This is
positive testing.
Negative Testing: It is performed on the system by providing invalid data as input. For example, entering
letters into the same “Enter only Numbers” field is negative testing.
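A minimal sketch of how these two cases could be automated with Python's unittest, assuming a hypothetical
accepts_only_numbers validator standing in for the “Enter only Numbers” field:

import unittest

def accepts_only_numbers(value: str) -> bool:
    """Hypothetical validator for an 'Enter only Numbers' field."""
    return value.isdigit()

class TestNumberField(unittest.TestCase):
    def test_positive_valid_digits(self):
        # Positive testing: valid input, the field should accept it.
        self.assertTrue(accepts_only_numbers("123456789"))

    def test_negative_letters_rejected(self):
        # Negative testing: invalid input, the field should reject it.
        self.assertFalse(accepts_only_numbers("12ab56"))

if __name__ == "__main__":
    unittest.main()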
Boundary Value Analysis: In software testing, BVA is a black box test case design technique. This technique is
applied to check whether there are any bugs at the boundaries of the input domain. It tests values on the boundary
between valid and invalid conditions. This technique is an easy, quick and effective way to catch input
errors. There are three guidelines of Boundary Value Analysis.
Example: If an exam has the pass boundary at 40%, merit at 75% and distinction at 85%, then the pass boundary
values will be 39, 40 and 41 (just below, on and just above the boundary).
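A small illustrative sketch in Python of where the boundary test values fall for the percentages above; the
grade function is a hypothetical stand-in for the system under test:

# Hypothetical grading function used only to show where BVA test values fall.
def grade(percentage: int) -> str:
    if percentage >= 85:
        return "Distinction"
    if percentage >= 75:
        return "Merit"
    if percentage >= 40:
        return "Pass"
    return "Fail"

# Boundary value analysis: test just below, on, and just above each boundary.
boundary_values = {
    "pass boundary (40%)": [39, 40, 41],
    "merit boundary (75%)": [74, 75, 76],
    "distinction boundary (85%)": [84, 85, 86],
}

for name, values in boundary_values.items():
    for value in values:
        print(f"{name}: input={value} -> {grade(value)}")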
Equivalence Class Partitioning: It is a black box technique that can be used at every level of testing. In this method we
separate the set of test conditions into different classes (partitions). We can use this method where the range of input
data for a specific input field is available. It is an easy method.
Equivalence Partitioning is used to separate the data of the software product into different classes / partitions.
Example: An insurance company that has the following premium rates based on the age group.
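Since the premium table itself is not reproduced here, the sketch below assumes hypothetical age partitions
purely to illustrate how one representative value per equivalence class is chosen:

# Hypothetical age partitions (the actual premium table is not shown above):
# invalid: below 18, valid class A: 18-35, valid class B: 36-60, invalid: above 60.
partitions = {
    "invalid: below 18": 10,     # one representative value per equivalence class
    "valid: 18 to 35": 25,
    "valid: 36 to 60": 50,
    "invalid: above 60": 70,
}

def premium_for(age: int) -> str:
    """Placeholder for the insurance premium lookup under test."""
    if age < 18 or age > 60:
        return "rejected"
    return "class A rate" if age <= 35 else "class B rate"

# One test value per partition is enough to cover the whole class.
for label, representative_age in partitions.items():
    print(f"{label}: age={representative_age} -> {premium_for(representative_age)}")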
C) Unit Testing: This method involves testing every module present in the software.
D) Integration Testing: This method checks the compatibility of one module with another.
E) System Testing: This method involves checking the software in different environments to verify whether the software
is compatible or not.
F) Stress Testing: This method involves checking whether the software can handle all the load, or how much load the
software can handle.
G) Performance Testing: This testing involves checking the speed and effectiveness of the software.
External Standard & Its Types
External Standards are standards made by an entity external to an organization. These are standards that a
product must comply with; they are externally visible and are usually stipulated by external parties.
A) Customer Standards: These refer to something defined by the customer as per his / her business
requirement for the given product.
B) National Standards: These refer to standards defined at the national level of the country where the supplier /
customer resides.
C) International Standards: These refer to standards that are defined at the international level and are
applicable to all customers across the world.
Black Box Testing Vs White Box Testing
Black Box Testing: It aims at what functionality the system is performing. In BBT, we do not focus on the developed code.
White Box Testing: It aims at how the system is performing its functionality. In WBT, we focus on the developed code.
Inspection Vs Walkthrough
Inspection: It includes a planned meeting with all the members of the project. In an inspection, a recorder records the defects.
Walkthrough: It does not include a planned meeting. In a walkthrough, the author makes notes of the defects
and suggestions given by the team members.
Verification Vs Validation
Verification: Are you building it right? It is conducted by the Quality Assurance team. Methods involve Reviews, Unit
Testing, Inspection and Integration Testing. Plans, Requirement Specifications, Design Specifications, Code, Test Cases,
etc. are evaluated in verification.
Validation: Have you built the right thing? It is conducted by the Quality Control team. Methods involve System Testing
such as Black Box Testing, White Box Testing, Grey Box Testing, etc. Testing of the actual software is done in validation.
Quality Assurance Vs Quality Control
Quality Assurance: It is an activity for assuring quality in the process by which the product is developed. The main goal
is to find defects and errors while the product is under development. When statistical tools and techniques are applied
to the process that builds the product, this is known as Statistical Process Control (SPC).
Quality Control: It is an activity which focuses on identifying defects after the product is developed. The main goal is
to find defects and errors after the product is developed. When statistical tools and techniques are applied to the
product after the process, this is known as Statistical Quality Control (SQC).
Levels Of Testing
• Unit Testing: This is the first level of testing. In this testing, the units or
components of a module are tested. It includes Black Box (hidden) testing
and White Box (transparent) testing.
• Integration Testing: In this testing, the interfaces between the modules are checked, for
example (a small sketch in Python follows this list):
• Verifying the interface link between the login page and the home page, i.e. when a
user enters the credentials and logs in, they should be directed to the home page.
• Checking the interface link between the Login and Mailbox modules.
• Checking the interface link between the Mailbox and Delete Mails modules.
• Verifying the interface link between the home page and the profile page, i.e. the
profile page should open up.
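As an illustration of such interface-link checks, here is a minimal sketch using Python's unittest; LoginModule,
MailboxModule and login_and_open_mailbox are hypothetical names used only to show two modules joined together:

import unittest

# Two hypothetical modules combined for an integration check.
class LoginModule:
    def authenticate(self, user: str, password: str) -> bool:
        return user == "student" and password == "pass123"

class MailboxModule:
    def open_for(self, user: str) -> str:
        return f"mailbox of {user}"

def login_and_open_mailbox(user, password):
    """Interface link under test: the login result drives the mailbox module."""
    if LoginModule().authenticate(user, password):
        return MailboxModule().open_for(user)
    return "login page"

class TestLoginMailboxIntegration(unittest.TestCase):
    def test_valid_login_reaches_mailbox(self):
        self.assertEqual(login_and_open_mailbox("student", "pass123"),
                         "mailbox of student")

    def test_invalid_login_stays_on_login_page(self):
        self.assertEqual(login_and_open_mailbox("student", "wrong"), "login page")

if __name__ == "__main__":
    unittest.main()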
Types of Testing
Performance Testing: It is a type of software testing that is carried out to test the speed, scalability,
stability and reliability of a software product or application.
Load Testing: It is a type of software testing that is carried out to check the behavior of the system or
software product under a specific load. For example, if a site can hold only 10 concurrent users, the
tester will try to run more than 10 concurrent users and record the behavior of the system (a rough
sketch appears at the end of this section).
Stress Testing: It is a software testing that is carried out to determine the behavior of the software
under abnormal conditions.
Security Testing: It is a software testing technique that is used to determine if an information system
protects data and maintains functionality as intended.
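A rough load-testing sketch in Python; the target URL, the number of concurrent users and the use of
ThreadPoolExecutor are illustrative assumptions, not a production load tool:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"   # placeholder target
CONCURRENT_USERS = 15         # more than the 10 users the system claims to handle

def one_user(user_id: int):
    # Each simulated user requests the page once and records status and response time.
    start = time.time()
    try:
        with urlopen(URL, timeout=10) as response:
            status = response.status
    except Exception as exc:
        status = f"error: {exc}"
    return user_id, status, round(time.time() - start, 2)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        for user_id, status, seconds in pool.map(one_user, range(CONCURRENT_USERS)):
            print(f"user {user_id}: status={status}, response time={seconds}s")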
Testing on MSBTE Website
1) Stress Testing: Stress testing for the MSBTE result display page can be done by degrading the system, for
example providing less RAM for the server, decreasing the bandwidth of the internet connection, or allowing
fewer hits to the page.
2) Load Testing: It is the simplest form of testing, conducted to understand the behaviour of the system under a specific
load. It will show how the software responds if the number of students checking the result at the same time is more
than specified.
3) Security Testing: It is a testing technique to determine whether an information system protects the data and maintains
the functionality. In this scenario it secures the MSBTE website by verifying principles such as:
Authentication of the user who logs in to check the result with a valid seat no. or enrolment no.
Testing on Web Application
A web based application is an application which can be accessed and used over a network such as the internet or an intranet.
Web testing is testing that focuses on web applications.
Testing approaches for a web application are as follows:
• Compatibility with different browsers: The tester should test the web based application on different browsers to make sure the
application behaviour is the same on all browsers (a small sketch follows this list).
• Functional Testing: The tester should test the different functionalities of the application, such as calculations, business logic
validation and navigation, and verify that they work properly.
• Usability Testing: The tester should also focus on the ease of use of the web based application. The appearance of the web pages and
the navigation should be proper and user friendly.
• Integration: The integration between the browser and the server hardware / software should be validated by the tester.
• Security Testing: Security of any web based application is the most important factor while testing. Security must be validated by the
tester.
• Performance Testing: The performance of the web based application should be validated properly in performance testing by using
load testing and stress testing by the tester.
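A small browser-compatibility sketch, assuming the Selenium package and the matching browser drivers are
installed; the URL and the title check are placeholders:

from selenium import webdriver

def check_title_on(browser_factory, name, url="http://example.com/"):
    # Open the same page in the given browser and report its title.
    driver = browser_factory()
    try:
        driver.get(url)
        # The same check is repeated on every browser to confirm identical behaviour.
        print(f"{name}: page title is '{driver.title}'")
    finally:
        driver.quit()

if __name__ == "__main__":
    check_title_on(webdriver.Chrome, "Chrome")
    check_title_on(webdriver.Firefox, "Firefox")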
Client Server & Testing on Client Server
A client-server system consists of a client, a server and a network. The client and server are connected to a single network
which helps them communicate with each other. The client sends a request to the server and the server returns a response to
the client’s request.
Testing Approaches for Client Server are as follows:
• Component Testing: Testing of the client and the server must be done individually. For testing the server a client
simulator is needed, for testing the client a server simulator is needed, and for testing the network both
simulators (server and client) are needed.
• Integration Testing: After successful testing of the server, client and network, they are brought together to perform
system testing.
• Performance Testing: System performance is tested when a number of clients are communicating with the server at the
same time. Volume and stress testing may also be useful to test under maximum and minimum load.
• Compatibility Testing: It is necessary to test the compatibility of different software, hardware used by the server
for communicating with the client. This includes testing compatibility with various operating systems, browsers,
devices, and network environments to ensure a seamless user experience across different platforms.
Driver Vs Stub
Driver: Drivers are dummy modules that are always used to simulate the high level modules. Drivers are the calling
programs. Drivers are only used when the main programs are under construction. Drivers are used in the bottom-up approach.
Stub: Stubs are dummy modules that are always used to simulate the low level modules. Stubs are the called
programs. Stubs are used when the sub programs are under construction. Stubs are used in the top-down approach.
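A tiny Python sketch of both ideas together; process_loan, interest_calculator_stub and driver are hypothetical
names used only to show where the driver and the stub sit:

# The high-level module is real, the low-level module is replaced by a stub;
# a driver plays the role of the missing main program that would normally call it.

def interest_calculator_stub(amount):       # stub: stands in for the unfinished low-level module
    return 100                              # returns a fixed, predictable value

def process_loan(amount, interest_fn):      # high-level module under test
    return amount + interest_fn(amount)

def driver():                               # driver: stands in for the unfinished main program
    # Calls the module under test the way the real main program eventually will.
    result = process_loan(1000, interest_calculator_stub)
    print("process_loan(1000) ->", result)  # expected 1100 with the stubbed interest

if __name__ == "__main__":
    driver()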
Alpha Testing Vs Beta Testing
Alpha Testing: It comes under the category of both White Box Testing and Black Box Testing.
Beta Testing: It comes only under Black Box Testing.
Components of a Test Plan
1) Testing Process: It includes a description of the major phases of the system testing process.
2) Requirements Traceability: Testing should be planned so that all requirements are individually tested.
4) Testing Schedule: An overall testing schedule and resource allocation must be specified.
5) Test Recording Procedures: The results of the test must be systematically recorded.
6) Hardware and Software Requirements: It should specify the software tools required and estimate hardware
utilization.
7) Constraints: Constraints affecting the testing process such as staff shortages should be anticipated in this section.
Test Deliverable
The test approach / strategy should result in identifying the right type of tests for each of the features. Once we have
the prioritised feature lists, the next step is to decide what needs to be tested, to enable estimation of size, effort and schedule.
• Software.
Setting up criteria for testing
A) Encountering more than a certain number of defects causing frequent stoppage of testing activity
B) Hitting show stoppers that prevent further progress of testing; for example, if a database does not start,
further tests of queries, data manipulation, and so on simply cannot be executed.
C) Developers releasing a new version which they advise should be used in lieu of the product under test,
(because of some critical defect fixes)
Test Infrastructure Management
Testing requires infrastructure to be planned up front. This infrastructure is made up of three essential
elements:
• A Test Case Database (TCDB): It contains the relevant information about the test cases in an organization. It
is used for software testing and quality assurance purposes. A TCDB contains: Test Case, Test Case Product Cross
Reference, Test Case Run History, Test Case Defect Cross Reference.
• Defect Repository: It captures all the relevant information about defects for a product. The information
that a defect repository includes is: Defect Details, Defect Test Details, Fix Details, Communication.
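A minimal sketch of the kind of record a defect repository might hold, using a Python dataclass; the field
names loosely follow the elements listed above and are illustrative only:

from dataclasses import dataclass, field

@dataclass
class DefectRecord:
    defect_id: str
    summary: str                                         # defect details
    fix_details: str = ""                                # filled in once the defect is fixed
    communication: list = field(default_factory=list)   # discussion between tester and developer

repository = []
repository.append(DefectRecord("D-001", "Withdraw option missing on home page"))
repository[0].communication.append("Assigned to developer for analysis")
print(repository)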
A) Baselining A Test Plan: It means creating a solid, unchanging version of the plan. It's like setting the plan in
stone, so everyone knows what's expected in the testing process.
B) Test Case Specification: It is a document outlining specific conditions, steps, and data needed to test a particular
aspect of a software or system, ensuring it works as expected. A test case specification should clearly identify
1. The Purpose of the Test: The reason why the product is being tested should be mentioned here.
2. Items being tested: The items that are being tested should be mentioned here along with their version release
numbers.
3. Environment needs for running the Test Case: This can include the hardware environment setup, supporting
software environment setup for example setup of operating system, database, etc. should be mentioned here.
4. Steps followed to execute the Test: If the testing is manual, the steps are detailed instructions; if
automation testing is used, then the steps are translated into the scripting language of that tool.
Test Report
A test report is a document that explains what happened during a software test. It tells if the test was
successful or if there were problems.
A) Test incident Report: It is a note that tells everyone about a problem encountered during testing,
explaining what went wrong and what was done to fix it, helping teams keep track of issues and
their solutions.
B) Test Cycle Report: It is a document that details the tests that are being executed in each testing
phase, which builds will be tested during every cycle and what issues are severe enough for them.
C) Test Summary Report: It is a document that gives a brief overview of the testing process,
highlighting what was tested, what problems were found, and how well the software performed,
helping everyone understand the overall quality and status of the software after testing.
Decision Table
Decision Table testing is a black box test design technique used to determine the test scenarios for complex business logic.
Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for testers.
Essentially, it is a structured exercise to formulate requirements when dealing with complex business rules.
Example:
Where, 0 = False
1 = True
X = No Action (Don’t Care)
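An illustrative decision table for a hypothetical login rule, written as Python data so that each rule (column)
becomes one test scenario; the rule values are assumptions, not taken from the missing example table:

#                    Rule 1         Rule 2       Rule 3       Rule 4
# Valid username:       1              1            0            0
# Valid password:       1              0            1            0
# Action:          show homepage   show error   show error   show error
decision_table = [
    # (valid_username, valid_password, expected_action)
    (1, 1, "show homepage"),
    (1, 0, "show error"),
    (0, 1, "show error"),
    (0, 0, "show error"),
]

def login_action(valid_username, valid_password):
    """Hypothetical business rule under test."""
    return "show homepage" if valid_username and valid_password else "show error"

# Each column (rule) of the decision table becomes one test scenario.
for valid_user, valid_pwd, expected in decision_table:
    actual = login_action(valid_user, valid_pwd)
    print(f"user={valid_user} pwd={valid_pwd}: expected={expected}, actual={actual}")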
Example of Edit function in Notepad Test Case
Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Select All Option | Menu Option | Click on Select All | All text should be selected. | All text is selected. | Pass
TC-002 | Copy Option | Menu Option | Select the text and click on copy. | It should copy the selected text. | It copied the selected text. | Pass
TC-003 | Cut Option | Menu Option | Select the text and click on cut. | Selected text should be cut. | Selected text is cut. | Pass
TC-004 | Paste Option | Menu Option | Click on Paste. | Contents should be pasted. | Contents are pasted. | Pass
TC-005 | Delete Option | Menu Option | Select text and click on delete. | Contents should be deleted. | Contents are deleted. | Pass
TC-006 | Save Option | Menu Option | Click on Save. | It should save the content of notepad. | It saved the content of notepad. | Pass
Example of Library Management System Test Case
Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Username | Textbox | Any valid login name | It should accept the user login. | It accepted a valid username. | Pass
TC-002 | Password | Textbox | Any valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Book Name | List box | Enter the name of the book | Any valid book title must be selected. | Valid book title selected. | Pass
TC-004 | Book Issued | Calendar | Date of Issue | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Date of Return | Calendar | Date of Return | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-006 | Save | Button | Click | It should save all the details. | It saved all the details. | Pass
Example of Railway Reservation System Test Case
Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Login Field | Textbox | Any valid login name | It should accept the user login. | It accepted the user login. | Pass
TC-002 | Password Field | Textbox | Valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Login | Button | Click | It should redirect the user to the home page. | It redirected the user to the home page. | Pass
TC-004 | Date of Departure | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Date of Arrival | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-006 | Station | List box | Station Name | Station must be selected. | Station was selected. | Pass
Example of Simple Calculator Test Case
Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Addition Operation | Button | 5 + 5 | The result should be 10. | The result is 10. | Pass
TC-002 | Subtraction Operation | Button | 20 - 10 | The result should be 10. | The result is 10. | Pass
TC-003 | Multiplication Operation | Button | 1 * 10 | The result should be 10. | The result is 10. | Pass
TC-004 | Division Operation | Button | 10 / 1 | The result should be 10. | The result is 10. | Pass
TC-005 | Divide By Zero | Button | 10 / 0 | It should display an error or the infinity symbol (∞). | It displayed the infinity symbol (∞). | Pass
TC-006 | Clear | Button | Click | It should clear the input screen. | It cleared the input screen. | Pass
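The same calculator test cases could also be written as an automated unit test; the add, subtract, multiply and
divide functions below are hypothetical stand-ins for the application under test:

import unittest

# Hypothetical calculator functions standing in for the application under test.
def add(a, b): return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b): return float("inf") if b == 0 else a / b   # mirrors the ∞ behaviour in TC-005

class TestSimpleCalculator(unittest.TestCase):
    def test_tc001_addition(self):
        self.assertEqual(add(5, 5), 10)

    def test_tc002_subtraction(self):
        self.assertEqual(subtract(20, 10), 10)

    def test_tc003_multiplication(self):
        self.assertEqual(multiply(1, 10), 10)

    def test_tc004_division(self):
        self.assertEqual(divide(10, 1), 10)

    def test_tc005_divide_by_zero(self):
        self.assertEqual(divide(10, 0), float("inf"))

if __name__ == "__main__":
    unittest.main()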
Example of Amazon Register Page Test Case
Test Case ID | Test Case Objective | Input Type | Input Data | Expected Result | Actual Result | Status
TC-001 | Login Field | Textbox | Any valid login name | It should accept the user login. | It accepted the user login. | Pass
TC-002 | Password Field | Textbox | Valid password | It should accept a valid password. | It accepted a valid password. | Pass
TC-003 | Gender | Radio Button | Male or Female | Either Male or Female must be selected. | Male or Female is selected. | Pass
TC-004 | Birth Date | Calendar | Date | It should accept the date selected by the user. | It accepted the date selected by the user. | Pass
TC-005 | Address | Textarea | Address | It should accept any address. | It accepted the user’s address. | Pass
TC-006 | Submit | Button | Submit the Data | User must be redirected to the home page. | User was redirected to the home page. | Pass
Defect Classification
1) Human Factor: Due to human error the software cannot be made perfect, i.e. without any defect.
3) Buggy Third Party Tools: The development process requires a lot of third party tools which may themselves contain
defects and may cause defects in the current software as well.
4) Lack of Design / Coding Experience: There is a possibility of hiring a developer or tester with a lack of design or coding
experience, and that may lead to defects arriving in the software.
5) Unreal Development Timeframe: When the team does not have enough information and the development schedule is
limited by the deadline, the situation may give rise to defects and cause bad quality software.
Techniques for finding defects
1) Static Technique: This technique of quality control involves checking the software
products without executing them. It is also termed Verification Testing, as it
comes under verification.
Defect Management Process
1) Defect Prevention: Implementation of techniques, methodology and standard processes to reduce the risk of defects.
2) Deliverable Baseline: Establishment of milestones where deliverables will be considered complete and ready for further
development work.
3) Defect Discovery: Identification and reporting of defects for development team acknowledgement. A defect is only termed
discovered when it has been documented and acknowledged.
4) Defect Resolution: Work by the development team to prioritize, schedule and fix a defect, and document the resolution.
5) Process Improvements: Process improvements refer to the continuous efforts to enhance and optimize the development
process based on feedback and lessons learned from previous projects.
Management Reporting
Defect Life Cycle
1) New: When a defect is logged for the first time it is in the new state.
2) Assigned: Once the bug is posted by the tester, the lead of the testing team approves the bug and assigns it to the
developer team.
3) Open: It is the state when the developer starts analysing and working on the defect fixing.
[Defect life cycle diagram showing the states New, Open, Fixed, Retest and Close, with Deferred and Duplicate as alternative states.]
Defect Life Cycle (continued)
4) Fixed: When the developer makes the necessary code changes and verifies them, the developer marks the bug status
as fixed.
5) Retest: At this stage, the tester retests the changed code which the developer has marked as fixed.
6) Close: Once the bug is fixed and it is tested by the tester, the tester changes the status to close.
7) Reopen: If the bug still exists even after the developer fixes it, the tester changes the status to reopen.
8) Rejected: If the developer feels that the bug is not genuine, then the developer rejects the bug.
9) Duplicate: If the bug is repeated twice or more, then the tester moves the bug to the duplicate state.
10) Deferred: It means the bug fix is expected in the next release of the software product.
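A small sketch of the defect life cycle as a state machine in Python; the allowed transitions follow the states
described above, and the exact transition set is an illustrative assumption:

from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    RETEST = "Retest"
    CLOSED = "Close"
    REOPENED = "Reopen"
    REJECTED = "Rejected"
    DUPLICATE = "Duplicate"
    DEFERRED = "Deferred"

# Which states a defect is allowed to move to from each state (assumed transitions).
ALLOWED = {
    DefectState.NEW:      {DefectState.ASSIGNED, DefectState.REJECTED, DefectState.DUPLICATE, DefectState.DEFERRED},
    DefectState.ASSIGNED: {DefectState.OPEN},
    DefectState.OPEN:     {DefectState.FIXED, DefectState.DEFERRED},
    DefectState.FIXED:    {DefectState.RETEST},
    DefectState.RETEST:   {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.OPEN},
}

def move(current: DefectState, target: DefectState) -> DefectState:
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"Cannot move defect from {current.value} to {target.value}")
    return target

# Example: a defect travelling the happy path New -> Assigned -> Open -> Fixed -> Retest -> Close.
state = DefectState.NEW
for nxt in [DefectState.ASSIGNED, DefectState.OPEN, DefectState.FIXED, DefectState.RETEST, DefectState.CLOSED]:
    state = move(state, nxt)
    print("defect is now:", state.value)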
Defect Template Report
A defect report is a document that identifies and describes a defect detected by a tester. A defect report template can
consist of the following elements:
1. Id: Unique identity given to the defect.
2. Project Name: Name of the project.
3. Product: Name of the product.
4. Release Version: Release version of the product.
5. Module: Specify the module of the product where the defect was detected.
6. Summary: Summary of the defect.
7. Description: Detailed description of the defect.
8. Expected Result: Expected result of the product.
9. Actual Result: Actual result of the product.
10. Defect Severity: Severity of the defect.
11. Defect Priority: Priority of the defect.
12. Reported By: Name of the person who wrote the report.
13. Assigned To: Name of the person who is assigned to analyse or fix the defect.
14. Status: Status of the defect.
Example of ATM Defect Report
ID: R1
Expected Result: Withdraw option must be visible in
Bank’s Interaction Screen.
Project: Cash Simulator (ATM)
Actual Result: Withdraw option is visible in Bank’s
Product: ATM
Interaction Screen.
Release Version: v1.0
Defect Severity: Medium / High
Module: Home Page
Defect Priority: Medium / High
Summary: No option of withdrawing money in cash
Report Bar: Rajesh
withdrawal.
Assigned To: Pranav
Description: The Bank’s interaction screen contains
only “Deposit” and “Recent Transactions” Options and
Status: Fixed
is missing “Withdraw Option”
Selecting Testing Tool
Selecting the testing tool is an important aspect of test automation for several reasons:
1) Free tools are often not well supported and may be phased out soon.
2) Test tools sold by vendors are expensive.
3) Not all test tools run on all platforms.
Criteria for selecting a test tool:
1) Meeting Requirements: The tester must ensure that the tool satisfies all the requirements and is efficient.
2) Technology Expectations: Test tools are not 100% cross platform. They are supported only on some operating
systems, hence the tester must check whether the tool supports the operating system in use.
3) Training Skills: A few test tools require plenty of training, and very few vendors provide training to the required level.
4) Management Aspects: A test tool increases the system requirements and may require the hardware and software to be
upgraded. This increases the cost of the already-expensive test tool, so managing the budget becomes necessary.
When To Use Automated Tools & Its Use.
Automated Test tools are used for increasing the effectiveness, efficiency and coverage of software testing.
When to use Automated Tools?
1) Complexity of Test Case: Simple, repetitive test cases may be suitable for an automation tool, while complex
test cases where testers have to take a lot of decisions are less suitable for automation.
2) Test Case Dependency: When the success or failure of one test has a direct effect on the next test case, automation
should be avoided.
3) Number Of Iterations Of Testing: If the same test is executed many times (repetitive) automation is a better
option than doing it manually.
Testing Using Automated Tools
1) Partial Automation: In software testing there are many processes that do not fit the need of complete automation.
Partial automation allows organizations to observe the benefits of automation without investing the full time and cost
of complete automation.
2) Synchronization: It ensures that all tests happen on all of the systems in the correct order.
Limitations of Manual Testing
3) Manual testing is not recommended for repetitive iterations of the same tests.
5) Manual testing can sometimes be boring for the tester and hence may become error prone.
i) Project Metrics: It is a set of metrics that indicates how the project is planned and executed.
ii) Progress Metrics: It contains a set of metrics that tracks how the different activities of the project are progressing.
iii) Productivity Metrics: These metrics help in planning and estimating of testing activities.
1) Product Metrics: These are the metrics which have more meaning from the perspective of
the software product being developed. One example is the quality of the developed product.
2) Process Metrics: They are measurements, used to understand how well the computer system is
running. They help to track things like how fast programs are running, how much memory they're
using, and how much the CPU is working.
3) Object Oriented Metrics: Object-oriented metrics in operating systems measures the performance
and efficiency of how the system handles tasks using the object-oriented programming approach.