
STE - 22518


SOFTWARE TESTING

1. Define Static And Dynamic Testing.


Ans. Static testing:
In static testing, the code is not executed. Instead, the code, requirement documents, and design documents are manually reviewed to find errors. The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle.

Dynamic testing:
Dynamic testing is done by executing the program. The main objective of this testing is to confirm that the software product works in conformance with the business requirements.
2. Define The Terms:-
i) Error
ii) Fault
iii) Defect
iv) Bug.
Ans. i) Error: An error is a human action that produces an incorrect result.

ii) Fault: A fault is the state of the software caused by an error.

iii) Defect: A defect is an error or a bug in the application which is created. A programmer can make mistakes or errors while designing and building the software. These mistakes mean that there are flaws in the software; these are called defects.

iv) Bug: A bug is the presence of an error at the time of execution of the software.
3. Compare Verification And Validation.
Ans.
1. Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. Validation is the process of evaluating software at the end of the development process to determine whether it meets customer expectations and requirements.
2. Verification asks: are we building the system right? Validation asks: are we building the right system?
3. Execution of code does not come under verification. Execution of code comes under validation.
4. Verification is carried out before validation. Validation is carried out just after verification.
5. The cost of errors caught in verification is less than that of errors found in validation. The cost of errors caught in validation is more than that of errors found in verification.
6. Verification is basically manual checking of documents and files such as requirement specifications. Validation is basically checking of the developed program based on the requirement specification documents and files.
4. Defect Classification And Its Meaning.
Ans. Requirement/Specification Defects:

Requirement-related defects arise in a product when one fails to understand what the customer requires. These defects may be due to a customer gap, where the customer is unable to define his requirements, or a producer gap, where the developing team is not able to make the product as per the requirements.

Design Defects:

Design defects occur when system components, interactions between system components, or interactions with outside software/hardware or users are incorrectly designed. Design defects generally refer to the way a design is created or used while creating a product.

Coding Defects:

This defect arises when variables are not initialized properly or variables are not
declared correctly or database is not created properly. Coding also needs
adequate commenting to make it readable and maintainable in future.

Testing Defects:

These would encompass incorrect, incomplete, missing, or inappropriate test cases and test procedures.
5. What Is GUI Testing? Give One Example.
Ans. GUI Testing:
GUI stands for Graphical User Interface, where you interact with the computer using images rather than text.

GUI testing is the process of testing the system's Graphical User Interface of the Application Under Test. GUI testing involves checking the screens with controls like menus, buttons, icons, and all types of bars (toolbar, menu bar), dialog boxes, windows, etc.

The GUI is what the user sees. A user does not see the source code; only the interface is visible to the user. The focus is especially on the design structure and on whether the images are working properly or not.

Examples of GUI testing:


1. Check screen validations
2. Verify all navigations
3. Check usability conditions
4. Verify data integrity
5. Verify the object states
6. Verify the date field and numeric field formats
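
As an illustration of checks 1 and 2 above, here is a minimal sketch using Selenium WebDriver (pip install selenium). The URL and element IDs are hypothetical stand-ins for a real application under test.

```python
# Minimal GUI-check sketch with Selenium; URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical application under test

    # Screen validation: the expected controls must be present.
    password = driver.find_element(By.ID, "password")  # hypothetical element ID
    login_btn = driver.find_element(By.ID, "login")    # hypothetical element ID

    # The password field should mask its input.
    assert password.get_attribute("type") == "password"

    # Navigation check: clicking login with empty fields should keep us on the page.
    login_btn.click()
    assert "login" in driver.current_url
finally:
    driver.quit()
```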

6. State Entry Criteria And Exit Criteria.


Ans. Entry criteria:-
Entry criteria are the condition or the set of conditions which should exist or be met to start a process. Some of the conditions or situations which may be seen as entry criteria for the initiation of testing activities are:

• Requirements should be clearly defined and approved.


• Test Design and documentation plan is ready.
• Testers are trained, and necessary resources are available.
• Availability of proper and adequate test data (like test cases).
• Availability of the test environment supporting necessary hardware, software,
network configuration, settings, and tools for the purpose of test execution.

Exit criteria:-
Exit criteria are often viewed as a single document concluding the end of a life cycle phase. Some of the conditions or situations which may be seen as exit criteria for testing activities are:

• Testing Deadline
• Completion of test case execution.
• Completion of Functional and code coverage to a certain point.
• Bug rates fall below a certain level and no high priority bugs are identified.
• Management decision.
7. Describe Test Infrastructure Management.
Ans. Test infrastructure management
Testing requires a robust infrastructure to be planned upfront. This
infrastructure is made up of three essential elements.

1. A test case database (TCDB):


A test case database captures all the relevant information about the test cases in an organization. Some of the entities and attributes are listed below.
Entities in such a TCDB are:

1. Test case
2. Test case-product cross reference
3. Test case run history
4. Test case- defect cross reference

2. Defect repository
A defect repository captures all the relevant information about defects for a product. The information that a defect repository includes:

1. Defect details
2. Defect test detail
3. Fix details
4. Communication

3. Configuration Management (CM) repository and tool:
Software Configuration Management is defined as a process to systematically manage, organize, and control the changes in the documents, code, and other entities during the Software Development Life Cycle. It keeps track of change control and version control of all the files/entities that make up a software product. Change control ensures that:

1. Changes to test files are made in a controlled fashion and only with proper approvals.
2. Changes made by one test engineer are not accidentally lost or overwritten by other changes.
3. Each change produces a distinct version of the file that is re-creatable at any point of time.
4. Everyone gets access to only the most recent version of the test files.
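
To make the TCDB entities above concrete, here is a minimal sketch modelling them as Python dataclasses. The field names are illustrative assumptions, not a standard schema.

```python
# Illustrative TCDB entities as dataclasses; field names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    tc_id: str
    objective: str
    steps: List[str] = field(default_factory=list)

@dataclass
class TestCaseRun:                 # test case run history
    tc_id: str                     # cross-reference to TestCase
    product_version: str           # test case-product cross reference
    result: str                    # e.g. "Pass" / "Fail"

@dataclass
class Defect:                      # lives in the defect repository
    defect_id: str
    tc_id: str                     # test case-defect cross reference
    details: str
    fix_details: str = ""
```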


8. Difference Between White Box Testing And Black Box Testing.


Ans.
1. In white box testing, the tester needs to have knowledge of the internal code or program. Black box testing is used to test the software without knowledge of the internal code or program.
2. White box testing aims at testing the structure of the item being tested. Black box testing aims at testing the functionality of the software.
3. White box testing is also called structural testing, clear box testing, code-based testing, or glass box testing. Black box testing is also known as data-driven, closed box, or functional testing.
4. White box testing is best suited for lower levels of testing like unit testing or integration testing. Black box testing is ideal for higher levels of testing like system testing and acceptance testing.
5. Statement coverage, branch coverage, and path coverage are white box testing techniques. Equivalence partitioning and boundary value analysis are black box testing techniques.
6. White box testing can be based on detailed design documents. Black box testing can be based on the requirement specification document.
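
To illustrate the white box techniques named in point 5, here is a sketch with a hypothetical function: one test achieves full statement coverage, but branch coverage still requires a second test.

```python
# Statement vs. branch coverage on a hypothetical function.
def grade(score: int) -> str:
    result = "fail"
    if score >= 40:          # branch point
        result = "pass"
    return result

# This single test executes every statement (100% statement coverage)...
assert grade(50) == "pass"

# ...but branch coverage also requires exercising the false outcome:
assert grade(30) == "fail"
```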

9. Describe Load Testing And Stress Testing.
Ans. Load Testing

1. Load testing is a type of performance testing that checks the system by constantly increasing the load on it until the load reaches its threshold value.

2. Here, increasing the load means increasing the number of concurrent users and transactions while checking the behavior of the application under test.

3. It is normally carried out in a controlled environment to distinguish between two different systems.

4. The main purpose of load testing is to monitor the response time and staying power of the application when the system is performing under heavy load.

5. Load testing is executed successfully only if the specified test cases are executed without any error in the allocated time.

6. Load testing is testing the software under the customer's expected load.

Stress Testing

1. Stress testing is a performance testing type that checks the stability of software when hardware resources such as CPU, memory, and disk space are not sufficient.

2. It is performed to find the upper limit capacity of the system and also to determine how the system performs if the current load goes well above the expected maximum.

3. The main parameters to focus on during stress testing are response time and throughput.

4. Stress testing is negative testing, where we load the software with a large number of concurrent users/processes that cannot be handled by the system's hardware resources. This testing is also known as fatigue testing.

5. Stress testing is testing the software under less-than-ideal conditions. So, subject your software to low memory, low disk space, a slow CPU, slow modems, and so on.

6. A word processor running on your computer with all available memory and disk space works fine. But if the system runs low on resources, you have a greater potential to expect a bug.
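
A minimal load-test sketch of the "constantly increasing load" idea from point 1: ramp up concurrent simulated users against a target function and record the elapsed time. The target here is a stand-in stub; a real test would call the application under test.

```python
# Ramp up concurrent "users" and measure wall time; target is a stand-in.
import threading
import time

def target_operation():
    time.sleep(0.01)  # stand-in for a real request to the system under test

def run_load(concurrent_users: int) -> float:
    """Fire one request per simulated user and return the elapsed wall time."""
    threads = [threading.Thread(target=target_operation) for _ in range(concurrent_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Constantly increase the load and watch how response time grows.
for users in (10, 50, 100, 200):
    print(f"{users:4d} concurrent users -> {run_load(users):.3f}s")
```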

10. Describe Test Deliverables In Detail.
Ans. 1) Test Deliverables are the artifacts which are given to the stakeholders of a
software project during the software development lifecycle. There are different
test deliverables at every phase of the software development lifecycle. Some test
deliverables are provided before the testing phase, some are provided during the
testing phase and some after the testing cycle is over.

The different types of Test deliverables are:

 Test Cases Documents
 Test Plan
 Testing Strategy
 Test Scripts
 Test Data
 Test Traceability Matrix
 Test Results/Reports
 Test Summary Report
 Install/Config Guides
 Defect Reports

2) The test plan describes the overall method to be used to verify that the
software meets the product specification and the customer's needs. It includes the
quality objectives, resource needs, schedules, assignments, methods, and so forth.

3) Test cases list the specific items that will be tested and describe the detailed
steps that will be followed to verify the software.

4) Bug reports describe the problems found as the test cases are followed. These could be done on paper but are often tracked in a database.

5) Test tools and automation: the tools used to test the software are listed and described. If the team is using automated methods to test the software, the tools used, whether purchased or written in-house, must be documented.

6) Metrics, statistics, and summaries convey the progress being made as the test
work progresses. They take the form of graphs, charts, and written reports.

11. Define The Terms Metrics And Measurement.
Ans. Metrics and measurement:
A metric is a measurement of the degree to which any attribute belongs to a system, product, or process. For example, the number of errors per person-hour would be a metric. Thus, software measurement gives rise to software metrics.

A measurement is an indication of the size, quantity, amount, or dimension of a particular attribute of a product or process. For example, the number of errors in a system is a measurement.

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. Metrics can be defined as "standards of measurement". Software metrics are used to measure the quality of the project. Simply put, a metric is a unit used for describing an attribute; a metric is a scale for measurement.

Need of Software measurement:

1. Establish the quality of the current product or process.


2. To predict future qualities of the product or process.
3. To improve the quality of a product or process.
4. To determine the state of the project in relation to budget and schedule.
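
As a worked example of a metric, defect density is commonly computed as defects found per thousand lines of code (KLOC). The numbers below are made up for illustration.

```python
# Defect density = defects found per KLOC; figures are illustrative only.
defects_found = 30
lines_of_code = 15_000

defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 2.0 defects/KLOC
```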

12. Design Test Cases For Hostel Admission Form.
Ans.
TC1: Student name field
Input data: any valid alphabetical characters (John)
Expected result: it should accept the name
Actual result: student's name is accepted
Status: Pass

TC2: Date of birth field
Input data: date in a valid format, before the current date
Expected result: it should accept a date less than the current date
Actual result: it accepted the valid date
Status: Pass

TC3: Gender field
Input data: radio button selected (F or M)
Expected result: it should select the proper radio button
Actual result: proper radio button is selected
Status: Pass

TC4: Date of admission
Input data: date in a valid format, not before the current date
Expected result: it should accept the date
Actual result: accepted the date
Status: Pass

TC5: Age field
Input data: any numerical data greater than or equal to 16
Expected result: it should accept a number greater than or equal to 16
Actual result: accepted the age
Status: Pass

TC6: Address field
Input data: valid alphanumeric characters
Expected result: it should accept the address
Actual result: accepted the address
Status: Pass

TC7: Pin code
Input data: valid 6-digit numeric format
Expected result: it should accept the valid pin code
Actual result: pin code accepted
Status: Pass
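
A sketch of how TC5 and TC7 above could be automated with pytest (pip install pytest). The validator functions are hypothetical stand-ins for the real form logic.

```python
# Automating TC5 (age >= 16) and TC7 (6-digit pin) with pytest; validators are stand-ins.
import re
import pytest

def is_valid_age(age: int) -> bool:
    return age >= 16                                   # hostel form accepts ages >= 16

def is_valid_pincode(pin: str) -> bool:
    return re.fullmatch(r"\d{6}", pin) is not None     # exactly 6 digits

@pytest.mark.parametrize("age,expected", [(16, True), (17, True), (15, False)])
def test_age_field(age, expected):
    assert is_valid_age(age) == expected

@pytest.mark.parametrize("pin,expected", [("416001", True), ("4160", False), ("41600a", False)])
def test_pincode_field(pin, expected):
    assert is_valid_pincode(pin) == expected
```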

13. Describe Test Cases For Railway Reservation System.
Ans.
TC1: Login field
Input data: any valid login name (abcxyz)
Expected result: it should accept the login name
Actual result: it accepted the login name
Status: Pass

TC2: Password field
Input data: valid password
Expected result: it should accept the valid password
Actual result: it accepted the valid password; successful login message
Status: Pass

TC3: Password field
Input data: invalid password
Expected result: it should not accept the invalid password
Actual result: message displayed as invalid login or wrong password
Status: Pass

TC4: Date of journey
Input data: date in a valid format, not before the current date
Expected result: it should accept the date
Actual result: accepted the date
Status: Pass

TC5: Date of return journey
Input data: date in a valid format, greater than the date of journey
Expected result: it should accept the date
Actual result: accepted the date
Status: Pass

TC6: Boarding station
Input data: valid boarding station
Expected result: it should accept the boarding station
Actual result: accepted the boarding station
Status: Pass

TC7: Train number
Input data: valid train number
Expected result: it should accept the valid train number
Actual result: train number accepted
Status: Pass
14. Prepare Report For Withdrawal Of Amount From ATM Machine.
Ans.


15. List any four skills of a software tester.


Ans. 1) Analytical skills
2) Communication skills
3) Knowledge of test management tools
4) Negotiation skills
16. Compare Alpha testing and Beta testing (Any 2 differences)
Ans.
1. Alpha testing is done by end users at the developer's site. Beta testing is done by end users at the end user's site.
2. Alpha testing is done in a controlled environment. In beta testing the developer is not present, so it is not done in a controlled environment.
3. Alpha testing is less effective as it is done in a controlled environment. Beta testing is more effective as end users do not have any restriction on testing the software.
4. In alpha testing the end user may have to test the software under the influence of the developer. Beta testing is live application testing, not done under the influence of the developer.
17. Explain stub and driver with example.
Ans. Drivers:
The module where the required inputs for the module under test are simulated for the purpose of module or unit testing is known as a driver module. The driver module may print or interpret the results produced by the module under test.

Stubs:
The module under test may also call some other module which is not ready at the time of testing. Dummy modules are then required to simulate those missing modules for testing, instead of the actual modules. These are called stubs.

Importance:
Stubs and drivers work as substitutes for missing or unavailable modules. They are specifically developed for each module, having different functionalities. Generally, developers and unit testers are involved in the development of stubs and drivers. Their most common use is in incremental integration testing, where stubs are used in the top-down approach and drivers in the bottom-up approach.
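
A minimal sketch of a stub and a driver around a module under test. The names are illustrative; fetch_tax_rate_stub stands in for a lower-level module that is not ready yet.

```python
# Stub and driver around a hypothetical module under test.
def fetch_tax_rate_stub(region: str) -> float:
    """Stub: dummy replacement for the unfinished tax-rate module."""
    return 0.10  # fixed canned value instead of a real lookup

def compute_total(amount: float, region: str, fetch_tax_rate) -> float:
    """Module under test: calls a lower-level module (stubbed here)."""
    return amount * (1 + fetch_tax_rate(region))

def driver():
    """Driver: simulates the caller, feeds inputs, and reports the result."""
    result = compute_total(100.0, "IN", fetch_tax_rate_stub)
    print("compute_total(100.0) ->", result)  # expected 110.0

driver()
```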


18. Prepare steps for test plan.


Ans. • Test plan gives insight into testing activity completely.
• Test Plan Ensures all Functional and Design Requirements are implemented as
specified in the documentation.
• The test plan addresses various levels of testing for Unit testing, Integration
testing, System Testing, and acceptance testing.
• It explains who does the testing, why tests are performed, how tests are conducted, and when tests are scheduled.
19. Explain v model in detail.
Ans. • The V-model is a type of SDLC model where the process executes in a sequential manner in a V-shape.
• It is also known as the Verification and Validation model.
• It is based on the association of a testing phase with each corresponding development stage.
• Development of each step is directly associated with a testing phase.
• The next phase starts only after completion of the previous phase, i.e., for each development activity there is a corresponding testing activity.
• The V-model contains verification phases on one side and validation phases on the other side.
• The verification and validation phases are joined by the coding phase, forming the V shape.

Verification Phase (Design Phase):


1. Requirement Analysis: This phase contains detailed communication with the
customer to understand their requirements and expectations. This stage is
known as Requirement Gathering.
2. System Design: This phase contains the system design and the complete hardware and communication setup for developing the product.

3. Architectural Design: System design is broken down further into modules
taking up different functionalities. The data transfer and communication
between the internal modules and with the outside world (other systems) is
clearly understood.

Validation (Testing Phases) :


1. Unit Testing: Unit Test Plans are developed during module design phase.
These Unit Test Plans are executed to eliminate bugs at code or unit level.
2. Integration Testing: After completion of unit testing, integration testing is performed. In integration testing, the modules are integrated and the system is tested. Integration testing corresponds to the architectural design phase. This test verifies the communication of modules among themselves.
3. System Testing: System testing tests the complete application with its functionality, interdependency, and communication. It tests the functional and non-functional requirements of the developed application.

20. Advantages and disadvantages of automation.
Ans. ADVANTAGES:-

1. Accuracy: Automated tests are more reliable and precise than manual tests,
which can be prone to human error.

2. Scalability: Automation allows you to test more code and features in less time,
without having to add more people or pay overtime.

3. Immediate feedback: Automated testing provides immediate feedback to the development team.

4. Consistency: Automation ensures that test cases are executed the same way
every time.

DISADVANTAGES:-

1. Cost: Automated testing tools and their initial setup can be expensive.

2. Implementation: It can take a lot of effort to implement automated testing for the first time.

3. Maintenance: Automated testing requires a lot of maintenance.

4. Difficult to debug: Automated tests can be difficult to debug.

21. Describe the need of measurement and metrics.


Ans. Metrics and measurement:
A metric is a measurement of the degree to which any attribute belongs to a system, product, or process. For example, the number of errors per person-hour would be a metric. Thus, software measurement gives rise to software metrics.

A measurement is an indication of the size, quantity, amount, or dimension of a particular attribute of a product or process. For example, the number of errors in a system is a measurement.

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. Metrics can be defined as "standards of measurement". Software metrics are used to measure the quality of the project. Simply put, a metric is a unit used for describing an attribute; a metric is a scale for measurement.

Need of Software measurement:


1. Establish the quality of the current product or process.
2. To predict future qualities of the product or process.
3. To improve the quality of a product or process.
4. To determine the state of the project in relation to budget and schedule.
22. Explain Boundary Value analysis.
Ans. 1. Most of the defects in software products hover around conditions and boundaries.
2. Boundary value analysis is a black box test design technique used to find errors at the boundaries of the input domain rather than in the center of the input.
3. Each boundary has a valid boundary value and an invalid boundary value.
4. Test cases are designed based on both valid and invalid boundary values. Typically, we choose one test case from each boundary.
5. This is one of the software testing techniques in which the test cases are designed to include values at the boundary.
6. Boundary value analysis helps identify the test cases that are most likely to uncover defects.

The basic idea in boundary value testing is to select input variable values at their:
1. Minimum
2. Just below the minimum
3. Just above the minimum
4. Just below the maximum
5. Maximum
6. Just above the maximum

EXAMPLE (for an input field that accepts values from 1 to 10):

Boundary Value = 0: system should NOT accept
Boundary Value = 1: system should accept
Boundary Value = 2: system should accept
Boundary Value = 9: system should accept
Boundary Value = 10: system should accept
Boundary Value = 11: system should NOT accept
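
A minimal sketch of this example in Python, assuming a hypothetical accepts() validator for the 1..10 range.

```python
# Boundary value analysis for an input range of 1..10; accepts() is a stand-in.
MIN, MAX = 1, 10

def accepts(value: int) -> bool:
    return MIN <= value <= MAX

# Boundary values: min-1, min, min+1, max-1, max, max+1
cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for value, expected in cases.items():
    assert accepts(value) == expected, f"boundary case {value} failed"
print("all boundary cases behave as expected")
```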

23. Explain Defect Management process with diagram.


Ans.

i. Defect Prevention -- Implementation of techniques, methodologies, and standard processes to reduce the risk of defects.

ii. Deliverable Baseline -- Establishment of milestones where deliverables will be considered complete and ready for further development work. When a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable is baselined.

iii. Defect Discovery -- Identification and reporting of defects for development team acknowledgment. A defect is only termed discovered when it has been documented and acknowledged as a valid defect by the development team member(s) responsible for the component(s) in error.

iv. Defect Resolution-- Work by the development team to prioritize, schedule and
fix a defect, and document the resolution. This also includes notification back to
the tester to ensure that the resolution is verified.

v. Process Improvement -- All problems are due to failures in the processes involved in creating software. Defects give an opportunity to identify problems with the processes used and update them. Better processes mean a better product with fewer defects.

vi. Management Reporting -- Analysis and reporting of defect information to assist management with risk management, process improvement, and project management.

24. Describe Limitations of Manual Testing.


Ans. 1. Time-consuming: Manual testing requires time since test cases must be executed manually. Testing a complex software program can take a long time, and testing teams might not have enough time to cover all test cases because of delays in software development.

2. Human Error: Human errors can happen when testing is done manually. Testers may produce erroneous results by failing to test particular scenarios or by making mistakes when executing test cases. Such mistakes can cause flaws to go undetected, which affects the quality of the software.

3. Not reusable: The test cases created for manual testing apply to a specific version of the software; when updates are made, these test cases become unusable and need to be rewritten. This is not an effective use of resources and can increase the time required, thus slowing the development process.

4. Difficult to Measure: It is challenging to quantify the manual testing process since manual testing provides no objective metrics to assess the software's quality. It is difficult to assess the efficiency of the testing process since it is hard to keep track of the number of test cases executed, errors discovered, and test coverage attained.
25. Explain manual testing with advantages, disadvantages, and examples.
Ans. Manual Testing:-

Manual testing is a software testing process in which test cases are executed manually, without using any automated tool. All test cases are executed by the tester manually, from the end user's perspective. It ensures whether the application is working as mentioned in the requirement document or not. Test cases are planned and implemented to cover almost 100 percent of the software application. Test case reports are also generated manually.

Manual testing is one of the most fundamental testing processes, as it can find both visible and hidden defects of the software. The difference between the expected output and the output given by the software is defined as a defect. The developer fixes the defects and hands the software back to the tester for retesting.

Advantages Of Manual Testing:-

 No heavy initial investment is needed: manual testing does not require costly automation tools or framework setup.
 Human observation can catch visual, layout, and usability problems that automated scripts miss.
 It is flexible: test cases can be adapted on the fly, which suits ad-hoc and exploratory testing.
 It works well for applications whose user interface changes frequently, where automated scripts would need constant rework.

Disadvantages Of Manual Testing:-

 Manual tests can be very time consuming.
 It is prone to human error, so results are less reliable than automated runs.
 Test execution is not reusable: regression tests must be repeated by hand for every release.
 It provides little support for load and performance testing, which need many concurrent simulated users.

EXAMPLES:-

1. Unit testing.
2. Performance testing.
3. Security testing

26. Prepare a test plan with a test summary report for the Gmail login page.
Ans. The Gmail login page is the first point of user interaction and needs to be secure,
responsive, and user-friendly. Test scenarios for the Gmail login page focus on
verifying various aspects of the login process, including validation of credentials,
error messages, security features, and overall user experience across devices and
browsers. Here are some key test scenarios to consider:

1. Valid Login Credentials:- Verify that a user can successfully log in with a valid
email address and correct password.

2. Invalid Email Format:- Test if the system shows an appropriate error message
when the user enters an incorrectly formatted email (for example, missing “@”
or domain extension).

3. Invalid Password:- Verify that an error message appears when a valid email
address is entered with an incorrect password.

4. Empty Fields:- Check if the login button is disabled or if the user receives an
error message when trying to log in without entering an email or password.

5. Forgot Password Flow:- Validate the "Forgot password?" link functionality, ensuring it directs the user to a password recovery process and that the process works as expected.

6. Email or Password Case Sensitivity:- Verify that the login page enforces case
sensitivity, especially for passwords, as Gmail passwords are case-sensitive.

7. Two-Step Verification:- For accounts with two-factor authentication enabled, check if the second verification step (for example, SMS code, Google Authenticator prompt) triggers after entering the correct login credentials.

8. "Stay Signed In" Option:- Verify that the "Stay signed in" checkbox retains the user's session on the device, allowing re-entry without requiring a password.

9. Cross-Browser and Cross-Device Compatibility:- Test the login functionality on various browsers (for example, Chrome, Safari, Firefox, Edge) and devices (desktop, mobile, tablet) to ensure consistent behavior.

10. Password Masking:- Confirm that the password field masks the input
characters for security, preventing visibility to onlookers.
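
A sketch turning scenarios 1 to 4 (and the case-sensitivity check in scenario 6) into automated assertions. The validate_login function is a hypothetical stand-in for the real authentication service, not Gmail's actual API.

```python
# Hypothetical login validator modelling scenarios 1-4 and 6.
import re

REGISTERED = {"user@gmail.com": "S3cret!pass"}  # made-up test account

def validate_login(email: str, password: str) -> str:
    if not email or not password:
        return "EMPTY_FIELDS"                        # scenario 4
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "INVALID_EMAIL_FORMAT"                # scenario 2
    if REGISTERED.get(email) != password:            # case-sensitive (scenario 6)
        return "WRONG_PASSWORD"                      # scenario 3
    return "OK"                                      # scenario 1

assert validate_login("user@gmail.com", "S3cret!pass") == "OK"
assert validate_login("usergmail.com", "S3cret!pass") == "INVALID_EMAIL_FORMAT"
assert validate_login("user@gmail.com", "s3cret!pass") == "WRONG_PASSWORD"
assert validate_login("", "") == "EMPTY_FIELDS"
```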


27. Explain Defect Life Cycle in brief.


Ans. 1. New: When a defect is logged and posted for the first time, its state is given as "New".

2. Assigned: After the tester has posted the bug, the test lead approves that the bug is genuine and assigns the bug to the corresponding developer and developer team. Its state is given as "Assigned".

3. Open: At this state the developer has started analysing and working on the
defect fix.

4. Fixed: When the developer makes the necessary code changes and verifies the changes, he/she can mark the bug status as "Fixed", and the bug is passed to the testing team.

5. Pending Retest: After fixing the defect, the developer gives that particular code to the tester for retesting. Here the testing is pending on the tester's end, hence the status is "Pending Retest".

6. Retest: At this stage the tester retests the changed code which the developer has given to him, to check whether the defect is fixed or not.

7. Verified: The tester retests the bug after it has been fixed by the developer. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "Verified".

8. Reopen: If the bug still exists even after the bug is fixed by the developer, the
tester changes the status to “reopened”. The bug goes through the life cycle once
again.

9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“closed”. This state means that the bug is fixed, tested and approved.

10. Duplicate: If the bug is repeated twice, or two bugs describe the same concept, then one bug's status is changed to "Duplicate".

11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug.
Then the state of the bug is changed to “rejected”.
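
A minimal sketch of the life cycle above as a state-transition table. The transition set is derived from the states just described; the helper is illustrative.

```python
# Defect life cycle as a state-transition table, per the states above.
ALLOWED_TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},   # the bug goes through the cycle again
    "Verified":       {"Closed"},
    "Closed":         set(),
    "Rejected":       set(),
    "Duplicate":      set(),
}

def move(state: str, new_state: str) -> str:
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "New"
for nxt in ("Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Verified", "Closed"):
    state = move(state, nxt)
print("defect reached:", state)
```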
