SOFTWARE TESTING (STE - 22518)
Dynamic testing:
Dynamic testing is done by executing the program. The main objective of this testing
is to confirm that the software product works in conformance with the business
requirements.
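As a rough illustration, the sketch below executes a trivial function and compares the observed behaviour against expected results; the `add` function and the expected values are assumed purely for demonstration:

```python
# A trivial function under test (assumed purely for illustration).
def add(a, b):
    return a + b

# Dynamic testing: the program is actually executed and the observed
# behaviour is compared against the expected result.
assert add(2, 3) == 5
assert add(-1, 1) == 0
print("Dynamic checks passed.")
```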
2. Define The Term:-
i) Error
ii) Fault
iii) Defect
iv) Bug.
Ans. i) Error: An error is a human action that produces an incorrect result.
ii) Fault: A fault is a manifestation of an error in the software; it may cause the software to fail to perform its required function.
iii) Defect: A defect is a deviation of the software from its expected requirements or specifications.
iv) Bug: The presence of an error at the time of execution of the software.
3. Compare Verification And Validation.
Ans.
Verification | Validation
5. Cost of errors caught in Verification is less than that of errors found in Validation. | 5. Cost of errors caught in Validation is more than that of errors found in Verification.
6. It is basically manual checking of documents and files like requirement specifications etc. | 6. It is basically checking of the developed program based on the requirement specification documents and files.
4. Defect Classification And Its Meaning.
Ans. Requirement/Specification Defects:
Design Defects:
Coding Defects:
These defects arise when variables are not initialized properly, variables are not
declared correctly, or the database is not created properly. Code also needs
adequate commenting to make it readable and maintainable in the future.
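A small illustration of such a coding defect (the function and variable names below are hypothetical):

```python
# Defective version: 'total' is used before it is initialized, so the
# function raises UnboundLocalError when it is executed.
def sum_marks_defective(marks):
    for m in marks:
        total = total + m   # defect: accumulator never initialized
    return total

# Corrected version: the accumulator is initialized before use, and the
# comment documents the intent for future maintainability.
def sum_marks(marks):
    total = 0               # initialize the accumulator
    for m in marks:
        total += m
    return total
```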
Testing Defects:
GUI testing is the process of testing the system's Graphical User Interface of the
Application Under Test. GUI testing involves checking the screens with controls
like menus, buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes
and windows, etc.
GUI is what the user sees. A user does not see the source code; only the interface is
visible to the user. The focus is especially on the design structure and on whether
images and controls are working properly or not.
Exit criteria:-
Exit criteria are often viewed as a single document concluding the end of a life
cycle phase. Some of the conditions or situations that may be seen as exit
criteria for testing activities are:
• Testing Deadline
• Completion of test case execution.
• Completion of Functional and code coverage to a certain point.
• Bug rates fall below a certain level and no high priority bugs are identified.
• Management decision.
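As a rough illustration only, these conditions can be pictured as a checklist evaluated at the end of a test cycle; the function name and the 80% coverage threshold below are assumptions, not prescribed values:

```python
def exit_criteria_met(cases_executed, total_cases,
                      code_coverage, open_high_priority_bugs,
                      deadline_reached):
    """Illustrative check of whether testing can be concluded."""
    all_cases_run = cases_executed == total_cases
    coverage_ok = code_coverage >= 0.80       # assumed coverage target
    no_blockers = open_high_priority_bugs == 0
    return deadline_reached or (all_cases_run and coverage_ok and no_blockers)

# Example: all cases run, 85% coverage, no open high-priority bugs.
print(exit_criteria_met(120, 120, 0.85, 0, False))   # True
```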
7. Describe Test Infrastructure Management.
Ans. Test infrastructure management
Testing requires a robust infrastructure to be planned upfront. This
infrastructure is made up of three essential elements:
1. Test case database (TCDB)
2. Defect repository
3. Configuration management repository and tool
1. Test case database (TCDB)
It captures all the relevant information about the test cases for a product. The
information it includes:
1. Test case
2. Test case-product cross reference
3. Test case run history
4. Test case-defect cross reference
2. Defect repository
It captures all the relevant information about defects for a product. The
information that a defect repository includes:
1. Defect details
2. Defect test detail
3. Fix details
4. Communication
3. Configuration management repository and tool
The software configuration management (SCM) repository keeps track of change
control and version control of all test files. It ensures that:
1. Changes to test files are made in a controlled fashion and only with proper
approvals
2. Changes made by one test engineer are not accidentally lost or overwritten by
other changes
3. Each change produces a distinct version of the file that is re-creatable at any
point of time
4. Everyone gets access to only the most recent version of the test files.
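A minimal sketch of how the test case database and defect repository described above might be modelled; the dataclass field names are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    tc_id: str
    objective: str
    product_modules: List[str] = field(default_factory=list)  # test case-product cross reference
    run_history: List[str] = field(default_factory=list)      # results of past runs
    defect_ids: List[str] = field(default_factory=list)       # test case-defect cross reference

@dataclass
class Defect:
    defect_id: str
    details: str              # defect details (description, severity, ...)
    test_details: str         # which test case / run exposed it
    fix_details: str = ""     # how and where it was fixed
    communication: List[str] = field(default_factory=list)    # tester/developer notes

# Example: a test case linked to a defect it uncovered.
tc = TestCase("TC7", "Pin code field", ["registration"], ["run-1: fail"], ["D-101"])
d = Defect("D-101", "Pin code field accepts 5 digits", "TC7, run-1")
print(tc, d, sep="\n")
```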
8. Compare White Box Testing And Black Box Testing.
Ans.
White Box Testing | Black Box Testing
1. The tester needs to have knowledge of the internal code or program. | 1. This technique is used to test the software without knowledge of the internal code or program.
2. It aims at testing the structure of the item being tested. | 2. It aims at testing the functionality of the software.
3. It is also called structural testing, clear box testing, code-based testing, or glass box testing. | 3. It is also known as data-driven, closed box, or functional testing.
4. Testing is best suited for a lower level of testing like Unit Testing or Integration Testing. | 4. This type of testing is ideal for higher levels of testing like System Testing and Acceptance Testing.
5. Statement Coverage, Branch Coverage, and Path Coverage are White Box testing techniques. | 5. Equivalence partitioning and Boundary Value Analysis are Black Box testing techniques.
6. Can be based on detailed design documents. | 6. Can be based on the requirement specification document.
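A small sketch contrasting the two approaches on the same hypothetical function (the leap-year checker is chosen only for illustration): the white-box checks are derived from the code's branches, while the black-box checks are derived from the specification's equivalence classes.

```python
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# White-box view: tests derived from the code's structure, aiming to
# exercise every branch (statement/branch coverage).
assert is_leap_year(2024) is True    # divisible by 4, not by 100
assert is_leap_year(1900) is False   # divisible by 100, not by 400
assert is_leap_year(2000) is True    # divisible by 400
assert is_leap_year(2023) is False   # not divisible by 4

# Black-box view: tests derived only from the specification
# (equivalence classes: leap years vs. non-leap years), without
# looking at how is_leap_year() is written.
assert is_leap_year(2016) is True
assert is_leap_year(2019) is False
print("White-box and black-box checks passed.")
```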
9. Describe Load Testing And Stress Testing.
Ans. Load Testing
4. The main purpose of load testing is to monitor the response time and staying
power of the application when the system is performing well under heavy load.
5. Load testing is successfully executed only if the specified test cases are
executed without any error in the allocated time.
Stress Testing
2. It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the
expected maximum.
3. Main parameters to focus during Stress testing are "Response Time" and
"Throughput".
4. Stress testing is negative testing where we load the software with a large number
of concurrent users/processes which cannot be handled by the system's hardware
resources. This testing is also known as Fatigue testing.
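A minimal sketch of the idea behind load testing: fire many concurrent requests at an operation and observe response time and throughput. The `process_request` function and the user counts are assumptions for illustration only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_request(req_id):
    """Stand-in for the operation under load (assumed for illustration)."""
    time.sleep(0.01)          # simulate some work per request
    return req_id

def load_test(concurrent_users=100):
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(process_request, range(concurrent_users)))
    elapsed = time.time() - start
    # Response time and throughput are the key parameters being observed.
    print(f"{len(results)} requests served in {elapsed:.2f}s "
          f"({len(results) / elapsed:.1f} requests/sec)")

load_test(100)    # load test at the expected level
# For a stress test, the user count would be pushed well beyond the
# expected maximum (e.g. load_test(10000)) to find the breaking point.
```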
10. Describe Test Deliverables In Detail.
Ans. 1) Test Deliverables are the artifacts which are given to the stakeholders of a
software project during the software development lifecycle. There are different
test deliverables at every phase of the software development lifecycle. Some test
deliverables are provided before the testing phase, some are provided during the
testing phase and some after the testing cycle is over.
2) The test plan describes the overall method to be used to verify that the
software meets the product specification and the customer's needs. It includes the
quality objectives, resource needs, schedules, assignments, methods, and so forth.
3) Test cases list the specific items that will be tested and describe the detailed
steps that will be followed to verify the software.
4) Bug reports describe the problems found as the test cases are followed. These
could be done on paper but are often tracked in a database.
5) Test tools and automation are listed and described which are used to test the
software. If the team is using automated methods to test software, the tools used,
either purchased or written in-house, must be documented.
6) Metrics, statistics, and summaries convey the progress being made as the test
work progresses. They take the form of graphs, charts, and written reports.
11. Define The Terms Metrics And Measurement.
Ans. Metrics and measurement:
A metric is a measurement of the degree to which any attribute belongs to a system,
product or process. For example, the number of errors per person-hour would be a
metric. Thus, software measurement gives rise to software metrics.
A measurement is an indication of the size, quantity, amount or dimension of a
particular attribute of a product or process. For example, the number of errors in a
system is a measurement.
A metric is a quantitative measure of the degree to which a system, system component,
or process possesses a given attribute. Metrics can be defined as “STANDARDS OF
MEASUREMENT”. Software metrics are used to measure the quality of the project.
Simply, a metric is a unit used for describing an attribute; a metric is a scale
for measurement.
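To make the distinction concrete, the counts below are measurements and the derived value is a metric; the numbers are purely illustrative:

```python
# Measurements: raw counts/dimensions observed on the project.
errors_found = 45          # measurement: number of errors in the system
effort_person_hours = 300  # measurement: effort spent

# Metric: a derived, quantitative indicator built from measurements.
errors_per_person_hour = errors_found / effort_person_hours
print(f"Errors per person-hour: {errors_per_person_hour:.2f}")   # 0.15
```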
12. Design Test Cases For Hostel Admission Form.
Ans.
Test case ID | Test case objective | Input data | Expected result | Actual result | Status
TC1 | Student name field | Any valid alphabetical characters (John) | It should accept the name | Student's name is accepted | Pass
TC2 | Date of birth field | Date format before the current date | It should accept a date less than the current date | It accepted the valid date | Pass
TC3 | Gender field | Radio button should be selected: F or M | It should select the proper radio button | Proper radio button is selected | Pass
TC4 | Date of admission | Date format not before the current date | It should accept the date | Accepted the date | Pass
TC5 | Age field | Any numerical data greater than or equal to 16 | It should accept a number greater than or equal to 16 | Accepted the age | Pass
TC6 | Address field | Valid alphanumeric characters | It should accept the address | Accepted the address | Pass
TC7 | Pin code | Valid 6-digit numeric format | It should accept the valid pin code | Pin code accepted | Pass
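If such a form were automated, one of these test cases (TC5, the age field) could look roughly like the sketch below; the `validate_age` function is a hypothetical stand-in for the form's validation logic:

```python
def validate_age(age):
    """Hypothetical validation rule for the age field of the admission form."""
    return isinstance(age, int) and age >= 16

# TC5: any numerical data greater than or equal to 16 should be accepted.
assert validate_age(16) is True
assert validate_age(21) is True
# Values below the limit should be rejected.
assert validate_age(15) is False
print("TC5 (age field) checks passed.")
```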
13. Describe Test Cases For Railway Reservation System.
Ans.
Test case ID | Test case objective | Input data | Expected result | Actual result | Status
TC1 | Login field | Any valid login name (abcxyz) | It should accept the login name | It accepted the login name | Pass
TC2 | Password field | Valid password | It should accept the valid password | It accepted the valid password; successful login message | Pass
TC3 | Password field | Invalid password | It should not accept the invalid password | Message displayed as invalid login or wrong password | Pass
TC4 | Date of journey | Date format not before the current date | It should accept the date | Accepted the date | Pass
TC5 | Date of return journey | Date format, date greater than the date of journey | It should accept the date | Accepted the date | Pass
14. Prepare Report For Withdrawal Of Amount From ATM Machine.
Ans.
Stubs:
The module under test may call some other module which is not ready at the time of
testing. Dummy modules are required to simulate the missing modules during testing,
instead of the actual modules. These dummy modules are called stubs.
Importance:
Stubs and drivers work as substitutes for the missing or unavailable modules.
They are specifically developed for each module, having different functionalities.
Generally, developers and unit testers are involved in the development of stubs
and drivers. Their most common use is seen in incremental integration testing,
where stubs are used in the top-down approach and drivers in the bottom-up
approach.
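A minimal illustration of a stub: the module under test calls a payment module that is not yet available, so a dummy stands in for it during testing. All names here are assumed for the example.

```python
# Real payment module (not yet implemented / unavailable at test time):
# def charge_card(card_no, amount): ...

class PaymentStub:
    """Stub that simulates the missing payment module."""
    def charge_card(self, card_no, amount):
        # Always report success so the calling module can be tested.
        return {"status": "approved", "amount": amount}

def book_ticket(payment_service, card_no, fare):
    """Module under test: depends on the payment module."""
    result = payment_service.charge_card(card_no, fare)
    return result["status"] == "approved"

# Unit test of book_ticket() using the stub instead of the real module.
assert book_ticket(PaymentStub(), "1234-5678", 250.0) is True
print("book_ticket tested successfully against the stub.")
```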
3. Architectural Design: System design is broken down further into modules
taking up different functionalities. The data transfer and communication
between the internal modules and with the outside world (other systems) is
clearly understood.
20. Advantages And Disadvantages Of Automation.
Ans. ADVANTAGES:-
1. Accuracy: Automated tests are more reliable and precise than manual tests,
which can be prone to human error.
2. Scalability: Automation allows you to test more code and features in less time,
without having to add more people or pay overtime.
4. Consistency: Automation ensures that test cases are executed the same way
every time.
DISADVANTAGES:-
The basic idea in boundary value testing is to select input variable values at their:
1. Minimum
2. Just below the minimum
3. Just above the minimum
4. Just below the maximum
5. Maximum
6. Just above the maximum
EXAMPLE:-
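One possible illustration (the 18-60 input range below is only an assumption for this sketch): for a field that accepts values from 18 to 60, boundary value analysis selects 17, 18, 19, 59, 60 and 61 as test inputs.

```python
def boundary_values(minimum, maximum):
    """Return the six standard boundary-value test inputs for a range."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# Assumed range for illustration: a field accepting values 18..60.
print(boundary_values(18, 60))   # [17, 18, 19, 59, 60, 61]
```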
i. Defect Prevention-- Implementation of techniques, methodology and standard
processes to reduce the risk of defects.
iv. Defect Resolution-- Work by the development team to prioritize, schedule and
fix a defect, and document the resolution. This also includes notification back to
the tester to ensure that the resolution is verified.
v. Process Improvement -- All problems are due to failures in the processes involved
in creating the software. Defects give an opportunity to identify problems with the
processes used and update them. Better processes mean a better product with fewer
defects.
2. Human Error: Human errors can happen when testing is done manually. By
failing to test particular scenarios or by making mistakes when executing test
cases, testers may come up with erroneous results. These mistakes can make it
impossible to find flaws, which would affect the caliber of the software.
3. Not reusable- The test cases created for manual testing apply to a specific
version of the software, but when updates are made these test cases become
unusable and need to be rewritten. This is not an effective use of resources and
can increase the time required, thus slowing the development process.
4. Difficult to Measure: It is challenging to quantify the manual testing process
since there are no objective metrics provided by manual testing to assess the
software's quality. It is difficult to assess the efficiency of the testing process since
it is difficult to keep track of the quantity of test cases executed, errors
discovered, and test coverage attained.
25. Explain the manual testing with advantage and disadvantages and example.
Ans. Manual Testing:-
Manual testing is a software testing process in which test cases are executed
manually without using any automated tool. All test cases are executed by the tester
manually according to the end user's perspective. It ensures whether the
application is working as mentioned in the requirement document or not. Test
cases are planned and implemented to cover almost 100 percent of the
software application.
Test case reports are also generated manually. Manual testing is one of the most
fundamental testing processes as it can find both visible and hidden defects of the
software. The difference between the expected output and the output given by the
software is defined as a defect. The developer fixes the defects and hands the
software back to the tester for retesting.
It's more expensive to automate; initial investments are bigger than for manual
testing.
Manual tests can be very time consuming.
You cannot automate everything; some tests still have to be done manually.
You cannot rely on testing tools always.
EXAMPLES:-
1. Unit testing.
2. Performance testing.
3. Security testing
26. Prepare Test Plan With Test Summary Report For Gmail Login Page.
Ans. The Gmail login page is the first point of user interaction and needs to be secure,
responsive, and user-friendly. Test scenarios for the Gmail login page focus on
verifying various aspects of the login process, including validation of credentials,
error messages, security features, and overall user experience across devices and
browsers. Here are some key test scenarios to consider:
1. Valid Login Credentials:- Verify that a user can successfully log in with a valid
email address and correct password.
2. Invalid Email Format:- Test if the system shows an appropriate error message
when the user enters an incorrectly formatted email (for example, missing “@”
or domain extension).
3. Invalid Password:- Verify that an error message appears when a valid email
address is entered with an incorrect password.
4. Empty Fields:- Check if the login button is disabled or if the user receives an
error message when trying to log in without entering an email or password.
6. Email or Password Case Sensitivity:- Verify that the login page enforces case
sensitivity, especially for passwords, as Gmail passwords are case-sensitive.
8. “Stay Signed In” Option:- Verify that the “Stay signed in” checkbox retains
the user's session on the device, allowing re-entry without requiring a password.
10. Password Masking:- Confirm that the password field masks the input
characters for security, preventing visibility to onlookers.
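A rough sketch of how some of these scenarios could be automated; `attempt_login` is a hypothetical stand-in for the login page's behaviour (with an assumed stored credential), not Gmail's actual API:

```python
import re

def attempt_login(email, password):
    """Hypothetical login check used only to illustrate the scenarios."""
    if not email or not password:
        return "empty fields"
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        return "invalid email format"
    if (email, password) != ("user@gmail.com", "S3cret!"):   # assumed credential
        return "wrong password"
    return "success"

# Scenario 1: valid credentials.
assert attempt_login("user@gmail.com", "S3cret!") == "success"
# Scenario 2: invalid email format (missing domain extension).
assert attempt_login("user@gmail", "S3cret!") == "invalid email format"
# Scenario 3: valid email, wrong password (also checks case sensitivity).
assert attempt_login("user@gmail.com", "s3cret!") == "wrong password"
# Scenario 4: empty fields.
assert attempt_login("", "") == "empty fields"
print("Login scenario checks passed.")
```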
2. Assigned: After the tester has posted the bug, the test lead approves that the
bug is genuine and assigns the bug to the corresponding developer and the developer
team. Its state is given as assigned.
3. Open: At this state the developer has started analysing and working on the
defect fix.
4. Fixed: When the developer makes the necessary code changes and verifies the
changes, he/she can mark the bug status as “Fixed” and the bug is passed to the
testing team.
5. Pending retest: After fixing the defect, the developer gives that particular
code for retesting to the tester. Here the testing is pending on the tester's end,
hence its status is pending retest.
6. Retest: At this stage the tester does the retesting of the changed code which the
developer has given to him, to check whether the defect got fixed or not.
7. Verified: The tester tests the bug again after it got fixed by the developer. If
the bug is not present in the software, he approves that the bug is fixed and
changes the status to “verified”.
8. Reopen: If the bug still exists even after the bug is fixed by the developer, the
tester changes the status to “reopened”. The bug goes through the life cycle once
again.
9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“closed”. This state means that the bug is fixed, tested and approved.
10. Duplicate: If the bug is repeated twice or two bugs describe the same issue,
then one bug's status is changed to “duplicate”.
11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug.
Then the state of the bug is changed to “rejected”.
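These states can be pictured as a small state machine; the transition map below is a simplified sketch of the cycle described (the initial “New” state, implied by the posting of the bug, is included), not an exhaustive model:

```python
# Simplified bug life cycle: each state maps to the states it can move to.
BUG_LIFE_CYCLE = {
    "New":            ["Assigned", "Rejected", "Duplicate"],
    "Assigned":       ["Open"],
    "Open":           ["Fixed", "Rejected", "Duplicate"],
    "Fixed":          ["Pending retest"],
    "Pending retest": ["Retest"],
    "Retest":         ["Verified", "Reopen"],
    "Reopen":         ["Assigned"],          # the bug goes through the cycle again
    "Verified":       ["Closed"],
    "Closed":         [],
    "Duplicate":      [],
    "Rejected":       [],
}

def can_transition(current, nxt):
    """Return True if the bug may move from 'current' to 'nxt'."""
    return nxt in BUG_LIFE_CYCLE.get(current, [])

print(can_transition("Retest", "Reopen"))   # True
print(can_transition("Closed", "Open"))     # False
```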