
Question Bank for FC5463

Unit 1.
2 Marks Questions

1. Define software testing. (R)


Software testing is a process of identifying the correctness of software by considering all its attributes (reliability, scalability, portability, usability, re-usability) and evaluating the execution of software components to find software bugs, errors, or defects.
It provides an objective view of the software and gives assurance of its fitness for use.

2. Write any four objectives of software test. (R)


Validation
Defects
Providing Information
Quality Analysis
Compatibility

3. Define following terms. (R)


i. Defect
A defect is the difference between the actual output and the expected output.
ii. Fault
A fault is a state that causes the software to fail to accomplish its essential function.
iii. Error
An error is a mistake made in the code; because of it, the code cannot be compiled or executed correctly.
iv. Failure
If the software has many defects, it leads to failure or causes failure.

4. Define entry criteria and exit criteria. (R)


Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before
testing can begin.
Exit Criteria: Exit criteria defines the items that must be completed before testing can be concluded.

5. Define test case and test suite. (R)


Test case: A test case consists of inputs given to the program and its expected outputs. Every test case will have a unique identification number.
Test suite: The set of test cases is called a test suite. All test suites should be preserved, just as we preserve source code and other documents.
6. Define Verification and validation. (R)
Verification: It involves static analysis methods (reviews) done without executing code.
It is the process of evaluating the product development process to find whether the specified requirements are met.
Validation: It involves dynamic analysis methods (functional and non-functional); testing is done by executing code.
Validation is the process of determining whether the software meets the customer's expectations and requirements.

7. Define Quality control (R)


Quality control activities operate to verify that the application meets the defined quality standards.

8. Define Quality assurance. (R)


Quality Assurance is defined as an activity that ensures the approaches, techniques,
methods and processes designed for the projects are implemented correctly.

10. Define Static testing and Dynamic testing. (R)


Static: Static testing checks the application without executing the code. It is a verification process. Some essential activities done under static testing are business requirement reviews, design reviews, code walkthroughs, and test documentation reviews.
 Dynamic: Dynamic testing is done when the code is executed in the run-time environment. It is a validation process where functional testing [unit, integration, and system testing] and non-functional testing [user acceptance testing] are performed.

14. Define Code Coverage Testing. (R)


Code Coverage analysis eliminates gaps in a Test Case suite.
It identifies areas of a program that are not exercised by a set of test cases.

15. Define Cyclomatic complexity. (R)


It is a measure of the logical complexity of the software and is used to define the number
of independent paths.
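It is commonly computed from the control flow graph as V(G) = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components (for a single program, V(G) = E - N + 2). Equivalently, it equals the number of decision points plus one; for example, a method containing a single if/else statement has a cyclomatic complexity of 2.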

16. Define Black Box Testing. (R)


Black box testing is a method in which the functionalities of software applications are tested without knowledge of the internal code structure, implementation details, or internal paths.
17. Explain Boundary Value Analysis with suitable example. (U)
Boundary value analysis is the process of testing between extreme ends or boundaries between partitions of the input values.
These extreme ends, such as Start-End, Lower-Upper, Maximum-Minimum, and Just Inside-Just Outside values, are called boundary values, and the testing is called "boundary testing".
 Let's consider the behavior of an "Order Pizza" text box.
 Pizza values 1 to 10 are considered valid; a success message is shown.
 Values 11 to 99 are considered invalid for ordering, and an error message appears: "Only 10 Pizza can be ordered".
 Here are the test conditions:
 Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
 Any number less than 1, that is 0 or below, is considered invalid.
 Numbers 1 to 10 are considered valid.
 Any 3-digit number, say -100, is invalid.
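A minimal Java sketch of these boundary tests, assuming a hypothetical canOrder validator that accepts 1 to 10 pizzas:

public class PizzaBoundaryTest {
    // Hypothetical validator used only for illustration: a valid order is 1 to 10 pizzas.
    static boolean canOrder(int pizzas) {
        return pizzas >= 1 && pizzas <= 10;
    }

    public static void main(String[] args) {
        // Values at and just around the boundaries.
        System.out.println(canOrder(0));   // just below lower boundary -> false
        System.out.println(canOrder(1));   // lower boundary            -> true
        System.out.println(canOrder(10));  // upper boundary            -> true
        System.out.println(canOrder(11));  // just above upper boundary -> false
    }
}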

18. Explain Equivalence Partitioning with suitable example. (U)


Equivalence Class Partitioning is a type of black box testing technique which can be applied to all levels of software testing, such as unit, integration, and system testing.
In this technique, input data units are divided into equivalent partitions that can be used to derive test cases, which reduces the time required for testing because of the smaller number of test cases.

19.Write advantages of Boundary Value analysis and Equivalence Partitioning. (U)


20. Explain White Box Testing with suitable example. (U)
White box testing is a software testing technique in which the internal structure, design, and coding of the software are tested to verify the flow of input and output and to improve design, usability, and security.
In white box testing, the code is visible to testers, so it is also called clear box testing, open box testing, transparent box testing, code-based testing, and glass box testing.

// Example method for white box testing: test cases must exercise
// both the "Positive" and the "Negative" branch.
static void printMe(int a, int b) {
    int result = a + b;
    if (result > 0) {
        System.out.println("Positive " + result);
    } else {
        System.out.println("Negative " + result);
    }
}
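For example, two calls are enough to achieve branch coverage of this method (the input values are chosen only for illustration):

printMe(1, 2);   // result = 3, exercises the "Positive" branch
printMe(-5, 2);  // result = -3, exercises the "Negative" branch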

4 Marks Questions

1. Write objectives of software testing. (R)


 Verification: - allows testers to confirm that the software meets the various business
and technical requirements stated by the client before the inception of the whole
project.
 Validation: -Confirms that the software performs as expected and as per the
requirements of the clients.
 Defects: - to find different defects in the software to prevent its failure or crash during
implementation or go live of the project.
 Providing Information: - during the process of software testing, testers can accumulate
a variety of information related to the software and the steps taken to prevent its failure.
 Quality Analysis:- Testing helps improve the quality of the software by constantly
measuring and verifying its design and coding.
 Compatibility:- It helps validate application’s compatibility with the implementation
environment, various devices, Operating Systems, user requirements, among other things.
 Verifying Performance & Functionality:- It ensures that the software has
superior performance and functionality.
2. Explain software testing life cycle. (U)

 Requirement Analysis: The tester analyses the requirement document of the SDLC to examine the requirements stated by the client. After examining the requirements, the tester makes a test plan to check whether the software meets the requirements or not.
 Test Plan Creation: The tester determines the estimated effort and cost of the entire project. Test case execution can be started after the successful completion of test plan creation.
 Environment Setup: Setup of the testing environment is an independent activity and can be started along with test case development.
 Test Case Execution: The testing team starts test case development and execution. The team writes down the detailed test cases and also prepares the test data if required.
 Defect Logging:- Testers and developers evaluate the completion criteria of the software
based on test coverage, quality, time consumption, cost, and critical business objectives.
This phase determines the characteristics and drawbacks of the software. Test cases and
bug reports are analyzed in depth to detect the type of defect and its severity.
 Test Cycle Closure:- The test cycle closure report includes all the documentation related
to software design, development, testing results, and defect reports.

3. Describe V-model of software testing. (A) [can be asked as 6 Marks questions]


 The V-Model is also referred to as the Verification and Validation Model.
 In this model, each phase of the SDLC must be completed before the next phase starts.
 It follows a sequential design process, the same as the waterfall model.
 Testing is planned in parallel with the corresponding stage of development.
Advantages
 Easy to understand.
 Testing activities like planning and test designing happen well before coding.
 This saves a lot of time; hence there is a higher chance of success than with the waterfall model.
 Avoids the downward flow of defects.
 Works well for small projects where requirements are easily understood.

Disadvantages
 Very rigid and least flexible.
 Not good for complex projects.
 Software is developed during the implementation stage, so no early prototypes of the software are produced.
 If any changes happen midway, then the test documents, along with the required documents, have to be updated.

4. Describe phases of verification Phase of V-model. (A)


 The various phases of the verification phase of the V-model are:
 Business Requirement Analysis: This is the first step, where the product requirements are understood from the customer's side.
 System Design: In this stage, system engineers analyze and interpret the business of the proposed system by studying the user requirements document.
 Architecture Design: The baseline in selecting the architecture is that it should realize all the requirements. It typically consists of the list of modules, a brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. Integration test planning is carried out in this phase.
 Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified, which is known as Low-Level Design.
 Coding Phase: After designing, the coding phase starts. Based on the requirements, a suitable programming language is decided.
5. Write advantages and disadvantages of V-model. (U)
Advantages
 Easy to understand.
 Testing activities like planning and test designing happen well before coding.
 This saves a lot of time; hence there is a higher chance of success than with the waterfall model.
 Avoids the downward flow of defects.
 Works well for small projects where requirements are easily understood.

Disadvantages
 Very rigid and least flexible.
 Not good for complex projects.
 Software is developed during the implementation stage, so no early prototypes of the software are produced.
 If any changes happen midway, then the test documents, along with the required documents, have to be updated.

6. Differentiate between Static and Dynamic Testing. (A) [any 4 for four marks/ any 6 for 6
marks]
Static Testing vs Dynamic Testing:
1. Static testing is performed in the early stage of software development; dynamic testing is performed at a later stage of software development.
2. In static testing the whole code is not executed; in dynamic testing the whole code is executed.
3. Static testing prevents defects; dynamic testing finds and fixes defects.
4. Static testing is performed before code deployment; dynamic testing is performed after code deployment.
5. Static testing is less costly; dynamic testing is highly costly.
6. Static testing involves checklists for the testing process; dynamic testing involves test cases for the testing process.
7. Static testing includes walkthroughs, code reviews, inspections, etc.; dynamic testing involves functional and non-functional testing.
8. Static testing generally takes a shorter time; dynamic testing usually takes longer as it involves running several test cases.
7. Explain Boundary value analysis with suitable example. (U)
Boundary Value Analysis:
 Boundary value analysis is the process of testing between extreme ends or boundaries between partitions of the input values.
 These extreme ends, such as Start-End, Lower-Upper, Maximum-Minimum, and Just Inside-Just Outside values, are called boundary values, and the testing is called "boundary testing".
 The basic idea in normal boundary value testing is to select input variable values at their:
 Minimum
 Just above the minimum
 A nominal value
 Just below the maximum
 Maximum
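As an illustration, these five normal boundary values for an integer input can be listed in code; the range 1 to 100 below is only an assumed example:

public class BoundaryValues {
    public static void main(String[] args) {
        int min = 1, max = 100;            // assumed valid input range
        int nominal = (min + max) / 2;     // any typical mid-range value
        // Minimum, just above minimum, nominal, just below maximum, maximum.
        int[] testValues = { min, min + 1, nominal, max - 1, max };
        for (int v : testValues) {
            System.out.println("Test input: " + v);
        }
    }
}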

8. Explain Equivalence Partitioning with suitable example. (U)


 Equivalence Class Partitioning is a type of black box testing technique which can be applied to all levels of software testing, such as unit, integration, and system testing.
 In this technique, input data units are divided into equivalent partitions that can be used to derive test cases, which reduces the time required for testing because of the smaller number of test cases.
 It divides the input data of software into different equivalence data classes.
 You can apply this technique where there is a range in the input field.
 For example, suppose a password field accepts a minimum of 6 characters and a maximum of 10 characters.
 That means all values within each of the partitions 0-5, 6-10, and 11-14 should produce equivalent results.
Test Scenario 1: Enter 0 to 5 characters in the password field. Expected outcome: the system should not accept.
Test Scenario 2: Enter 6 to 10 characters in the password field. Expected outcome: the system should accept.
Test Scenario 3: Enter 11 to 14 characters in the password field. Expected outcome: the system should not accept.
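A minimal Java sketch of one test per partition, assuming a hypothetical isValidPassword helper that accepts 6 to 10 characters:

public class PasswordPartitionTest {
    // Hypothetical validator used only for illustration.
    static boolean isValidPassword(String p) {
        return p.length() >= 6 && p.length() <= 10;
    }

    public static void main(String[] args) {
        // One representative value from each equivalence partition.
        System.out.println(isValidPassword("abc"));            // 0-5 characters   -> false (reject)
        System.out.println(isValidPassword("abcdefgh"));       // 6-10 characters  -> true  (accept)
        System.out.println(isValidPassword("abcdefghijkl"));   // 11-14 characters -> false (reject)
    }
}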
9. Differentiate between quality control and quality assurance. (U)
Quality Assurance (QA) vs Quality Control (QC):
1. QA focuses on providing assurance that the quality requested will be achieved; QC focuses on fulfilling the quality requested.
2. QA is the technique of managing quality; QC is the technique of verifying quality.
3. QA is involved during the development phase; QC is involved during the testing phase.
4. QA does not include the execution of the program; QC always includes the execution of the program.
5. The aim of QA is to prevent defects; the aim of QC is to identify and fix defects.
6. QA is a preventive technique; QC is a corrective technique.
7. In QA, all team members of the project are involved; in QC, generally the testing team of the project is involved.
8. Example of QA: verification; example of QC: validation.

Unit 2
2 Marks Question

1. Define Unit testing and Integration Testing. (R)


Unit Testing is a type of software testing where individual units or components of the software are tested. The purpose is to validate that each unit of the software code performs as expected.
Integration Testing is a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers.
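For example, a unit test exercises one method in isolation and compares its actual output with the expected output. A minimal JUnit sketch, where the add method is only a hypothetical unit under test:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // Hypothetical unit under test.
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    public void addReturnsTheSumOfTwoNumbers() {
        assertEquals(5, add(2, 3));  // expected output vs. actual output
    }
}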

2. Define System testing and Acceptance Testing. (R)


System Testing is a level of testing that validates the complete and fully integrated software
product. The purpose of a system test is to evaluate the end-to-end system specifications.
User Acceptance Testing (UAT) is a type of testing performed by the end user or the client
to verify/accept the software system before moving the software application to the
production environment.
3. Define Driver and stubs. (R)
Stubs and drivers are dummy programs used in integration testing to facilitate the software testing activity. These programs act as substitutes for the missing modules in the testing.
Stub: Is called by the Module under Test.
Driver: Calls the Module to be tested.
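A minimal Java sketch of the idea (all module names are hypothetical): the stub stands in for a lower-level module that is not ready yet, and the driver is a dummy caller for the module under test.

// Stub: dummy replacement for a missing lower-level module; it is called BY the module under test.
class PaymentServiceStub {
    boolean pay(double amount) {
        return true;  // always succeeds; the real payment logic is not implemented yet
    }
}

// Module under test: depends on the stubbed lower-level module.
class OrderModule {
    private final PaymentServiceStub payment = new PaymentServiceStub();
    boolean placeOrder(double amount) {
        return payment.pay(amount);
    }
}

// Driver: dummy program that calls the module under test.
public class OrderDriver {
    public static void main(String[] args) {
        OrderModule module = new OrderModule();
        System.out.println("Order placed: " + module.placeOrder(250.0));
    }
}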

4. Explain Bottom-Up approach of Integration Testing. (U)


Bottom-up integration testing is a strategy in which the lower-level modules are tested first.
These tested modules are then further used to facilitate the testing of higher level modules.
The process continues until all modules at top level are tested.
Once the lower level modules are tested and integrated, then the next level of modules are
formed.

5. Explain Top-down approach of Integration Testing. (U)


Top-down integration testing is a method in which integration testing takes place from top to bottom, following the control flow of the software system.
The higher level modules are tested first and then lower level modules are tested and
integrated in order to check the software functionality.
6. Define regression testing. (R)
REGRESSION TESTING is defined as a type of software testing to confirm that a recent
program or code change has not adversely affected existing features
Or
Regression Testing is nothing but a full or partial selection of already executed test cases
which are re-executed to ensure existing functionalities work fine

7. Define following terms: Alpha Testing and Beta Testing. (R)


Alpha: This testing is referred to as an alpha testing only because it is done early on, near
the end of the development of the software, and before Beta Testing.
Beta Testing is performed by “real users” of the software application in “real
environment” and it can be considered as a form of external User Acceptance Testing. It
is the final test before shipping a product to the customers. Direct feedback from
customers is a major advantage of Beta Testing. This testing helps to test products in
customer’s environment.

4 Marks Question

1. Explain Sandwich Approach of Integration Testing. (U)


Sandwich integration testing is a strategy in which top-level modules are tested with lower-level modules, while at the same time lower-level modules are integrated with top-level modules and tested as a system.
It is a combination of Top-down and Bottom-up approaches therefore it is called Hybrid
Integration Testing.
It makes use of both stubs as well as drivers.
2. Explain Performance Testing of Web Application. (U)
Performance Testing is a software testing process used for testing the speed, response time,
stability, reliability, scalability, and resource usage of a software application under a
particular workload. The main purpose of performance testing is to identify and eliminate
the performance bottlenecks in the software application. It is a subset of performance
engineering and is also known as “Perf Testing”.
Speed – Determines whether the application responds quickly
Scalability – Determines the maximum user load the software application can
handle.
Stability – Determines if the application is stable under varying loads
Load testing – checks the application’s ability to perform under anticipated user loads. The
objective is to identify performance bottlenecks before the software application goes live.
Stress testing – involves testing an application under extreme workloads to see how it
handles high traffic or data processing. The objective is to identify the breaking point of an
application.
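A very small sketch of a single response-time measurement (only the "speed" aspect); full load and stress testing is normally done with dedicated tools, and the URL below is just a placeholder:

import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");             // placeholder URL
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();                    // blocks until the server responds
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("HTTP " + status + " in " + elapsedMs + " ms");
        conn.disconnect();
    }
}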

3. Explain Functionality Testing of Web Application. (U)


Functionality Testing of a Website is a process that includes several testing parameters like
user interface, APIs, database testing, security testing, client and server testing and basic
website functionalities. Functional testing is very convenient and it allows users to perform
both manual and automated testing. It is performed to test the functionalities of each feature
on the website.

Test that all links in your webpages are working correctly and make sure there are no broken links. Links to be checked include:

 Outgoing links
 Internal links
 Anchor Links
 Mail To Links
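A minimal Selenium WebDriver sketch that collects the links on a page so each href can then be checked; the URL is a placeholder and a matching browser driver is assumed to be installed:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LinkCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/");                     // placeholder URL
        List<WebElement> links = driver.findElements(By.tagName("a"));
        for (WebElement link : links) {
            // Each href can then be requested to verify that the link is not broken.
            System.out.println(link.getText() + " -> " + link.getAttribute("href"));
        }
        driver.quit();
    }
}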

4. Describe Security Testing of Web application. (A)


Security Testing is vital for e-commerce websites that store sensitive customer information like credit cards. Testing activities will include:

Test that unauthorized access to secure pages is not permitted.
Restricted files should not be downloadable without appropriate access.
Check that sessions are automatically killed after prolonged user inactivity.
On use of SSL certificates, the website should redirect to encrypted SSL pages.
5. Differentiate between Alpha Testing and Beta Testing. (A)

Alpha Testing vs Beta Testing:
1. Alpha testing is done by testers who are internal employees of the company or organization; beta testing is done by users and clients who are not employees of any company or organization.
2. Alpha testing is carried out at the developer's site; beta testing is carried out at the end user's location.
3. In-depth reliability and security testing of the product are not done in alpha testing; in beta testing, in-depth reliability and security are checked, and patches are released.
4. Alpha testing uses both white box and black box testing; beta testing uses black box testing only.
5. Execution of the alpha test cycle might take a longer time; execution of the beta test requires only a few months (at maximum).

6. Explain need of Regression Testing. (U)


REGRESSION TESTING is defined as a type of software testing to confirm that a
recent program or code change has not adversely affected existing features.
The need for regression testing mainly arises whenever there is a requirement to change the code and we need to test whether the modified code affects other parts of the software application.
Moreover, regression testing is needed, when a new feature is added to the software
application and for defect fixing as well as performance issue fixing.

7. Explain GUI Testing.(U)


GUI Testing is a software testing type that checks the Graphical User Interface of the software.
The purpose of Graphical User Interface (GUI) Testing is to ensure the functionalities of a software application work as per specifications by checking screens and controls like menus, buttons, icons, etc.
GUI is what the user sees. If you visit any website, the homepage you see is the GUI (graphical user interface) of the site.
A user does not see the source code; only the interface is visible to the user.
The focus is especially on the design structure and on whether images and controls are working properly.

Unit 3

2 Marks Question

1. Define Test Deliverable.(R)


Test Deliverables are the test artifacts which are given to the stakeholders of a software
project during the SDLC (Software Development Life Cycle). A software project which follows
SDLC undergoes the different phases before delivering to the customer. In this process, there
will be some deliverables in every phase.

2. Write entities in Test Case Database. (R)


3. Write Entities of Defect Repository. (R)
4. Recall the term Test Plan. (R)

5. Write Test Deliverables before and after testing phase. (U)

6. Write process of preparing Test Summary Report. (U)

4 Marks Question

1. Describe Scope Management in Test Plan (U).

2. Illustrate Resource Requirement in Test Plan. (A)

3. Describe process of Test Planning. (A)

4. Write Setting up criteria of Testing. (U)

5. Describe Test Infrastructure Management. (A/U)

6. Describe SCM. (U/A)

8. Explain Base Lining Testing. (U)

9. Describe Test Case Specification.

10. Describe Test Summary Report. (A)

Unit 4

1. Classify the defects on the basis of Priority. (2 Marks Level A)

Low: The Defect is an irritant but repair can be done once the more serious Defect has
been fixed

Medium: During the normal course of the development activities defect should be
resolved. It can wait until a new version is created

High: The defect must be resolved as soon as possible as it affects the system severely
and cannot be used until it is fixed
2. Classify the defects on the basis of Severity. (4 Marks Level A)

Critical: This defect indicates complete shut-down of the process, nothing can proceed
further

Major: It is a highly severe defect and collapses the system. However, certain parts of
the system remain functional

Medium: It causes some undesirable behavior, but the system is still functional

Low: It won’t cause any major break-down of the system

3. Recall following Terms. i. Priority ii. Severity (2 Marks R)


1) Severity : Defect Severity in testing is a degree of impact a bug or a Defect has on
the software application under test. A higher effect of bug/defect on system
functionality will lead to a higher severity level. A Quality Assurance engineer
usually determines the severity level of a bug/defect.

2) Priority : Priority is defined as the order in which a defect should be fixed. Higher
the priority the sooner the defect should be resolved.

4. Recall Bug Report Template. (2 Marks R)

A Bug Report in Software Testing is a detailed document about bugs found in the software application. A bug report contains each detail about a bug, such as the description, the date when the bug was found, the name of the tester who found it, and the name of the developer who fixed it. Bug reports help to identify similar bugs in the future so they can be avoided.

Defect_ID – Unique identification number for the defect.

Defect Description – Detailed description of the Defect including information about the
module in which Defect was found.
Version – Version of the application in which defect was found.

Steps – Detailed steps along with screenshots with which the developer can reproduce the
defects.

Date Raised – Date when the defect is raised

Reference – provides references to documents such as requirements, design, architecture, or even screenshots of the error, to help understand the defect.

Detected By – Name/ID of the tester who raised the defect

Status – Current status of the defect (see the defect life cycle states).

Fixed by – Name/ID of the developer who fixed it

Date Closed – Date when the defect is closed

Severity – describes the impact of the defect on the application.

Priority – related to the urgency of fixing the defect. Both severity and priority can be High, Medium, or Low, based respectively on the impact of the defect and the urgency with which it should be fixed.

5. Explain Categorization in Defect Management Process with examples. ( 4 Marks U)

A Defect in Software Testing is a variation or deviation of the software application from the end user's requirements or the original business requirements. A software defect is an error in coding which causes incorrect or unexpected results from a software program and does not meet the actual requirements. Testers might come across such defects while executing the test cases.

6. Categorize following defects with explanation. (4 Marks A)

i. The login function of the website does not work properly.

ii. The GUI of the website does not display correctly on mobile devices.

iii. The website could not remember the user login session.

iv. The website performance is too slow.


Categorization of the defects with priority and explanation:
i. The login function of the website does not work properly. Priority: Critical. Explanation: Login is one of the main functions of the website; if this feature does not work, it is a serious bug.
ii. The GUI of the website does not display correctly on mobile devices. Priority: Medium. Explanation: The defect affects users who view the website on a smartphone.
iii. The website could not remember the user login session. Priority: High. Explanation: This is a serious issue, since the user will be able to log in but not be able to perform any further transactions.
iv. The website performance is too slow. Priority: High. Explanation: The performance bug can cause huge inconvenience to the user.
v. Some links do not work. Priority: Low. Explanation: This is an easy fix for the developers, and the user can still access the site without these links.

7. Describe Defect Management Process with neat labeled Diagram.(4 Marks A)

Defect Management is a systematic process to identify and fix bugs. A defect management cycle contains the following stages: 1) Discovery of defect, 2) Defect categorization, 3) Fixing of defect by developers, 4) Verification by testers, 5) Defect closure, 6) Defect reports at the end of the project.

8. Explain Defect life cycle with neat labeled diagram. (4 Marks U)

New - Potential defect that is raised and yet to be validated.

Assigned - Assigned against a development team to address it but not yet resolved.

Active - The Defect is being addressed by the developer and investigation is under progress. At
this stage there are two possible outcomes; viz - Deferred or Rejected.
Test - The Defect is fixed and ready for testing.

Verified - The Defect that is retested and the test has been verified by QA.

Closed - The final state of the defect that can be closed after the QA retesting or can be closed if
the defect is duplicate or considered as NOT a defect.

Reopened - When the defect is NOT fixed, QA reopens/reactivates the defect.

Deferred - When a defect cannot be addressed in that particular cycle it is deferred to future
release.

Rejected - A defect can be rejected for any of the 3 reasons; viz - duplicate defect, NOT a
Defect, Non-Reproducible.
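These states form a simple state machine; as a sketch, they could be represented in code as an enumeration (transitions omitted):

public enum DefectState {
    NEW, ASSIGNED, ACTIVE, TEST, VERIFIED, CLOSED, REOPENED, DEFERRED, REJECTED
}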

9. Explain Estimation of expected impact of defect. (4 Marks U)

Once the critical risks are identified, the financial impact of each risk should be
estimated.

This can be done by assessing the impact, in dollars, if the risk does become a problem combined
with the probability that the risk will become a problem.
The product of these two numbers is the expected impact of the risk.

The expected impact of a risk (E) is calculated as

E = P * I,

where: P= probability of the risk becoming a problem

and I= Impact in dollars if the risk becomes a problem.
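For example, if a risk has a 20% probability of becoming a problem (P = 0.2) and would cost $50,000 if it did (I = $50,000), then the expected impact is E = 0.2 * 50,000 = $10,000.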

10. Illustrate any four techniques for finding defects. (4 Marks U)

a) Quick Attacks:

The quick-attacks technique allows you to perform a cursory analysis of a system in a very compressed
timeframe.

Even without a specification, you know a little bit about the software, so the time spent is also time invested in
developing expertise.

b) Equivalence and Boundary Conditions:

Boundaries and equivalence classes give us a technique to reduce an infinite test set into something manageable.

They also provide a mechanism for us to show that the requirements are "covered".

c) Common Failure Modes:

The heart of this method is to figure out what failures are common for the platform, the project, or the team;
then try that test again on this build.

If your team is new, or you haven't previously tracked bugs, you can still write down defects that "feel"
recurring as they occur—and start checking for them.

The more your team stretches itself (using a new database, new programming language, new team members, etc.), the riskier the project will be, and, at the same time, the less valuable this technique will be.

Customer-level coverage tools are expensive; programmer-level tools tend to assume the team is doing automated unit testing, has a continuous-integration server, and has a fair bit of discipline.

After installing the tool, most people tend to focus on statement coverage—the least powerful of the measures.

d) Use Cases:

Use cases and scenarios focus on software in its role to enable a human being to do something.

Use cases and scenarios tend to resonate with business customers, and if done as part of the requirement
process, they sort of magically generate test cases from the requirements.

e) Code-Based Coverage Models:


Imagine that you have a black-box recorder that writes down every single line of code as it executes.
Programmers prefer code coverage.

It allows them to attach a number (an actual, hard, real number, such as 75%) to the performance of their unit tests, and they can challenge themselves to improve the score.

f) Regression and High-Volume Test Techniques:

People spend a lot of money on regression testing, taking the old test ideas described above and rerunning them
over and over.

This is generally done with either expensive users or very expensive programmers spending a lot of time
writing and later maintaining those automated tests.

11. Draw defect management process. State the working of each phase. (4 Marks A)

Defect Management Process (DMP) is basically defined as a process where identification and
resolution of defect take place. Software development is not an easy process. It is very complex
process so constant occurring of defects is normal. DMP usually takes place at stage of testing
products. It is also not possible to remove all defects from software. We can only minimize
number of defects and their impact on software development. DMP mainly focuses on
preventing defects, identifying defects as soon as possible, and reducing impact of defects.

Stages of DMP :

There are different stages of DMP that take place, as given below:

Defect Prevention :

Defect elimination at an early stage is one of the best ways to reduce its impact. At an early stage, fixing or resolving defects requires less cost, and the impact can also be minimized. At a later stage, finding defects and then fixing them requires very high cost, and the impact of the defects can also be increased. It is not possible to remove all defects, but at least we can try to reduce their effects and the cost required to fix them. This process improves the quality of software by removing defects at an early stage and also increases productivity by preventing the injection of defects into the software product.
Deliverable Baseline :

When a deliverable such as a product or document reaches its pre-defined milestone, the deliverable is considered a baseline. A pre-defined milestone generally defines what the project or software is supposed to achieve. Any failure to reach or meet a pre-defined milestone simply means that the project is not proceeding according to plan and generally triggers corrective action by management. Once a deliverable is baselined, further changes are controlled.

Defect Discovery :

Defect discovery at an early stage is very important; otherwise, a defect might cause greater damage later. A defect is only considered "discovered" if developers have acknowledged it to be a valid one.

Defect Resolution :

The defect is resolved and fixed by the developers and then sent back to where the defect was initially identified, so that the fix can be verified.

Process Improvement :

All identified defects are critical in the sense that they cause some impact on the system; it does not mean that defects with a low impact on the system are not critical. For process improvement, each and every identified defect needs to be fixed. The process in which the defect occurred should be identified and analyzed so that we can determine ways to improve the process and prevent any future occurrence of similar defects.

12. Prepare Defect report Template for Student information system which contains
modules Student registration ,Course Registration, Exam ,Result. (4 Marks A)
Unit 5

1. Define Following Terms. i. Manual Testing ii. Automation Testing (2 Marks R)

Automation Testing: Automation testing uses automation tools to execute test cases.

Manual Testing: In manual testing, test cases are executed manually by a human tester, without the support of automation tools.

2. Write any four advantages of Manual Testing. (2 Marks R)

1. You get fast and accurate visual feedback.
2. It is less expensive, as you do not need to spend your budget on automation tools and processes.
3. Human judgment and intuition always benefit the manual element.
4. While testing a small change, an automated test would require coding, which could be time-consuming, whereas you can test it manually on the fly.

3. Write any four disadvantages of Manual Testing. (2 Marks R)

1. It is a less reliable testing method because it is conducted by a human; therefore, it is always prone to mistakes and errors.
2. The manual testing process cannot be recorded, so it is not possible to reuse the manual test.
3. In this testing method, certain tasks are difficult to perform manually, which may require additional time in the software testing phase.

4. Write any four advantages of Automated Testing. (2 Marks R)

1. Automated testing helps you to find more bugs compared to a human tester.
2. As most of the testing process is automated, you can have a speedy and efficient process.
3. The automation process can be recorded. This allows you to reuse and execute the same kind of testing operations.
4. Automated testing is conducted using software tools, so it works without tiring and fatigue, unlike humans in manual testing.
5. It can easily increase productivity because it provides fast and accurate testing results.
6. Automated testing supports various applications.
7. Testing coverage can be increased, because an automation testing tool never forgets to check even the smallest unit.

5. Write any four disadvantages of Automation testing. (2 Marks R)

1. Without the human element, it is difficult to get insight into visual aspects of your UI such as colors, fonts, sizes, contrast, or button sizes.
2. The tools to run automation testing can be expensive, which may increase the cost of the testing project.
3. Automation testing tools are not yet foolproof. Every automation tool has its limitations, which reduces the scope of automation.
4. Debugging the test script is another major issue in automated testing. Test maintenance is costly.

6. Illustrate need of Automation Testing. (4 Marks U)

7. Recall the term Software Test Metrics. (2 Marks R)

8. Describe Product metrics with suitable example. (4 Marks U)

9. Explain any four product metrics. (4 Marks U)

10. Illustrate Product metrics with suitable example. (4 Marks U)

11. Describe Product and Process metrics with suitable example. (4 Marks U)

12. Describe a procedure for selecting testing tool when you are asked to test college
Website? (4 Marks U)
13. Describe factors considered for selecting a testing tool for test automation. (4 Marks U)

14. Illustrate process metrics examples. (any 4) (4 Marks U)

15. Compare Manual Testing with Automation Testing. (4 Marks A)

Unit 6
1. Recall the term Selenium Webdriver. (2 Marks R)

2. Recall the term Selenium Grid. (2 Marks R)

3. Write any four advantages of using Webdriver. (2 Marks R)

4. Write any four disadvantages of using Webdriver. (2 Marks R)

5. Design Selenium script for login form of gpamravati.ac.in/gpa/ (4 Marks A)

6. Design Selenium script for signup form of gmail.com (4 Marks A)

7. Design Selenium script for login form of gpamravati.ac.in/gpa/ using Java. (4 Marks A)

8. Explain any two features of Mantis Bug Tracker. (2 Marks U)

9. Define Mantis Bug Tracker. (2 Marks R)

10. Explain any four features of Mantis Bug Tracker. (4 Marks U)

11. Recall the Term Test Link. (2 Marks R)

12. Explain any four features of Test Link. (4 Marks U)

13. Identify the process of reporting a bug issue with the help of mantis bug tracker.
(4 Marks A)
14. Create a Test Plan for a library management application with the help of Test Link. (4 Marks
A)
15. Create a Test Plan for a banking application with the help of Test Link. (4 Marks A)
