Manual Testing Interview Questions 1704361238


Ques.1. What is Software Testing?

Ans. Software testing is the process of evaluating a system to check whether it satisfies its business
requirements. It measures the overall quality of the system in terms of attributes like correctness,
completeness, usability, performance etc. Basically, it assures the quality of the software to the
stakeholders of the application.

Ques.2. Why is testing required?


Ans. We need software testing for the following reasons-
1. Testing assures the stakeholders that the product works as intended.
2. Avoidable defects leaked to the end user/customer without proper testing damage the
reputation of the development company.
3. Defects detected in an earlier phase of the SDLC cost less and require fewer resources to
correct.
4. It saves development time by detecting issues in an earlier phase of development.
5. The testing team adds another dimension to software development by providing a different
viewpoint on the product development process.

Ques.3. When should we stop testing?


Ans. Testing can be stopped when one or more of the following conditions are met-
1. After test case execution - The testing phase can be stopped when one complete cycle of test
cases has been executed after the last known bug fix, with the agreed-upon pass percentage achieved.
2. Once the testing deadline is met - Testing can be stopped once the deadline is reached,
provided no high-priority issues are left in the system.
3. Based on Mean Time Between Failures (MTBF) - MTBF is the time interval between two
inherent failures. Based on the stakeholders' decision, if the MTBF is sufficiently large one can stop
the testing phase.
4. Based on code coverage value - The testing phase can be stopped when automated code
coverage reaches a specific threshold value with a sufficient pass percentage and no critical
bugs.

Ques.4. What is Quality Assurance?


Ans. Quality assurance is a process-driven approach which checks whether the process of developing the
product is correct and conforms to all the standards. It is considered a preventive measure, as it
identifies weaknesses in the process used to build the software. It involves activities like document
review, test case review, walkthroughs, inspections etc.

Ques.5. What is Quality Control?


Ans. Quality control is a product-driven approach which checks that the developed product conforms
to all the specified requirements. It is considered a corrective measure, as it tests the built product
to find defects. It involves different types of testing like functional testing, performance testing,
usability testing etc.

Ques.6. What is the difference between Verification and Validation?


Ans. Following are the major differences between verification and validation-

1. Verification is the process of evaluating the artifacts as well as the process of software
development, in order to ensure that the product being developed will comply with the standards.
Validation is the process of validating that the developed software product conforms to the
specified business requirements.
2. Verification is a static process of analysing the documents and not the actual end product.
Validation involves dynamic testing of the software product by running it.
3. Verification is a process-oriented approach. Validation is a product-oriented approach.
4. Verification answers the question - "Are we building the product right?". Validation answers
the question - "Are we building the right product?".
5. Errors found during verification require less cost and fewer resources to fix than errors found
during validation - the later an error is discovered, the higher the cost to fix it.

Ques.7. What is SDLC?


Ans. Software Development Life Cycle refers to all the activities that are performed during
software development, including - requirement analysis, designing, implementation, testing,
deployment and maintenance phases.

Ques.8. Explain STLC - Software Testing life cycle.


Software testing life cycle refers to all the activities performed during testing of a software product.
The phases include-

 Requirement analysis and validation - In this phase the requirement documents are
analysed and validated, and the scope of testing is defined.
 Test planning - In this phase the test plan strategy is defined, the test effort is estimated,
and the automation strategy and tool selection are decided.
 Test design and analysis - In this phase test cases are designed, test data is prepared and
automation scripts are implemented.
 Test environment setup - A test environment closely simulating the real-world environment
is prepared.
 Test execution - The test cases are executed, bugs are reported and retested once resolved.
 Test closure and reporting - A test closure report is prepared containing the final test results
summary, learnings and test metrics.

Ques.9. What are the different types of testing?


Testing can broadly be defined into two types-

 Functional testing - Functional testing involves validating the functional specifications of
the system.
 Non-functional testing - Non-functional testing includes testing the non-functional
requirements of the system like performance, security, scalability, portability, endurance etc.
Going by the way the testing is done, it can be categorized as-

 Black box testing - In black box testing, the tester need not have any knowledge of the
internal architecture or implementation of the system. The tester interacts with the system
through the interface, providing input and validating the received output.
 White box testing - In white box testing, the tester analyses the internal architecture of the
system as well as the quality of the source code on different parameters like code optimization,
code coverage, code reusability etc.
 Gray box testing - In gray box testing, the tester has partial access to the internal
architecture of the system, e.g. the tester may have access to the design documents or
database structure. This information helps the tester test the application better.

Ques.10. What is a test bed?


Ans. A test bed is a test environment used for testing an application. A test bed configuration can
consist of the hardware and software requirements of the application under test, including - operating
system, hardware configurations, software configurations, application server (e.g. Tomcat), database etc.

Ques.11. What is a test plan?


Ans. A test plan is a formal document describing the scope of testing, the approach to be used, the
resources required and a time estimate for carrying out the testing process. It is derived from the
requirement documents (Software Requirement Specification).

Ques.12. What is a test scenario?


Ans. A test scenario is derived from a use case. It is used for end-to-end testing of a feature of
an application. A single test scenario can cover multiple test cases. Scenario testing is
particularly useful when there is a time constraint on testing.

Ques.13. What is a test case?


Ans. A test case is used to test the conformance of an application with its requirement
specifications. It is a set of conditions with pre-requisites, input values and expected results in a
documented form.

Ques.14. What are some attributes of a test case?


Ans. A test case can have the following attributes-
1. TestCaseId - A unique identifier of the test case.
2. Test Summary - A one-line summary of the test case.
3. Description - Detailed description of the test case.
4. Prerequisite or pre-condition - A set of prerequisites that must be satisfied before executing
the test steps.
5. Test Steps - Detailed steps for performing the test case.
6. Expected result - The expected result in order to pass the test.
7. Actual result - The actual result after executing the test steps.
8. Test Result - Pass/Fail status of the test execution.
9. Automation Status - Identifier of automation - whether the test case is automated or not.
10. Date - The test execution date.
11. Executed by - Name of the person executing the test case.
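
These attributes can be captured as a simple record. A minimal sketch in Python - the TestCase class and its field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative test-case record mirroring the attributes listed above."""
    test_case_id: str
    summary: str
    description: str = ""
    precondition: str = ""
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    result: str = "Not Run"   # Pass / Fail / Not Run
    automated: bool = False
    executed_on: str = ""     # execution date
    executed_by: str = ""

tc = TestCase(
    test_case_id="TC-001",
    summary="Verify login with valid credentials",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="User lands on the dashboard",
)
# After execution, the actual result is recorded and compared to the expected one.
tc.actual_result = "User lands on the dashboard"
tc.result = "Pass" if tc.actual_result == tc.expected_result else "Fail"
print(tc.test_case_id, tc.result)  # TC-001 Pass
```
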
Ques.15. What is a test script?
Ans. A test script is an automated test case written in a programming or scripting language. It is
basically a set of instructions to evaluate the functioning of an application.

Ques.16. What is a bug?


Ans. A bug is a fault in a software product detected at the time of testing, causing it to function in
an unanticipated manner.

Ques.17. What is a defect?


Ans. A defect is non-conformance with the requirement of the product detected in production
(after the product goes live).
Difference between defect, error, bug, failure and fault: A mistake in coding is called an
error; an error found by a tester is called a defect; a defect accepted by the development team is
then called a bug.

Ques.18. What are some defect reporting attributes?


Ans. Some of the attributes of a defect report are-

 DefectId - A unique identifier of the defect.
 Defect Summary - A one-line summary of the defect, more like a defect title.
 Defect Description - A detailed description of the defect.
 Steps to reproduce - The steps to reproduce the defect.
 Expected Result - The expected behavior from which the application is deviating because of
the defect.
 Actual Result- The current erroneous state of the application w.r.t. the defect.
 Defect Severity - Based on the criticality of the defect, this field can be set to minor,
medium, major or show stopper.
 Priority - Based on the urgency of the defect, this field can be set on a scale of P0 to P3.

Ques.19. What are some of the bug or defect management tools?


Ans. Some of the most widely used Defect Management tools are - Jira, Bugzilla, Redmine, Mantis,
Quality Center etc.

Ques.20. What is defect density?


Ans. Defect density is the measure of the density of defects in the system. It can be calculated by
dividing the number of defects identified by the total number of lines of code (or methods or classes)
in the application or program.
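
The calculation can be sketched as follows; reporting per KLOC (per 1000 lines of code) is a common convention:

```python
def defect_density(defects_found, lines_of_code, per=1000):
    """Defect density: defects divided by code size, scaled per KLOC by default."""
    return defects_found / lines_of_code * per

# e.g. 30 defects found in a 15,000-line application -> 2.0 defects per KLOC
print(defect_density(30, 15000))  # 2.0
```
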
Ques.21. What is defect priority?
Ans. Defect priority is the urgency of fixing the defect. Normally the defect priority is set on a
scale of P0 to P3, with P0 defects having the highest urgency to fix.

Ques.22. What is defect severity?


Ans. Defect severity is the degree of impact a defect has on the functionality. Depending on the
organisation, there can be different levels of defect severity, ranging from minor to critical or show
stopper.

Ques.23. Give an example of Low priority-Low severity, Low priority-High severity, High
priority-Low severity, High priority-High severity defects.
Ans.
1. Low priority-Low severity - A spelling mistake in a page not frequently navigated by users.
2. Low priority-High severity - Application crashing in some very corner case.
3. High priority-Low severity - Slight change in logo color or spelling mistake in company
name.
4. High priority-High severity - Issue with login functionality.

Ques.24. What is a blocker?


Ans. A blocker is a bug of high priority and high severity. It prevents or blocks testing of some other
major portion of the application as well.

Ques.25. What is a critical bug?


Ans. A critical bug is a bug that impacts a major functionality of the application, such that the
application cannot be delivered without fixing it. It differs from a blocker bug in that it doesn't
block the testing of other parts of the application.
Ques.26. Explain bug lifecycle or the different states of a bug.
Ans. A bug goes through the following phases in software development-

 New - A bug or defect when detected is in New state
 Assigned - The newly detected bug when assigned to the corresponding developer is in
Assigned state
 Open - When the developer works on the bug, the bug lies in Open state
 Rejected/Not a bug - A bug lies in rejected state in case the developer feels the bug is not
genuine
 Deferred - A deferred bug is one, fix of which is deferred for some time(for the next
releases) based on urgency and criticality of the bug
 Fixed - When a bug is resolved by the developer it is marked as fixed
 Test - When fixed the bug is assigned to the tester and during this time the bug is marked as
in Test
 Reopened - If the tester is not satisfied with issue resolution the bug is moved to Reopened
state
 Verified - After the Test phase if the tester feels bug is resolved, it is marked as verified
 Closed - After the bug is verified, it is moved to Closed status.

Ques.27. What are the different test design techniques?


Ans. Test design techniques are standard approaches to test design which yield systematic and
widely accepted test cases. The different test design techniques can be categorized as static test
design techniques and dynamic test design techniques.
1. Static Test Design Techniques - Test design techniques which involve testing without
executing the code. The various static test design techniques can be further divided into two
parts, manual and tool-based-
 Manual static design techniques-
 Walk through
 Informal reviews
 Technical reviews
 Audit
 Inspection
 Management review
 Static design techniques using tool-
 Static analysis of code - It includes analysis of the different paths and flows in
the application and different states of the test data.
 Compliance to coding standard - This evaluates the compliance of the code
with the different coding standards.
 Analysis of code metrics - The tool used for static analysis is required to
evaluate the different metrics like lines of code, complexity, code coverage
etc.
2. Dynamic Test Design Techniques - Dynamic test design techniques involve testing by
running the system under test.
 Specification based - Specification based test design techniques are also referred to
as black box testing. These involve testing based on the specification of the system
under test without knowing its internal architecture.
 Structure based - Structure based test design techniques are also referred to as white
box testing. In these techniques, knowledge of the code or internal architecture of the
system is required to carry out the testing.
 Experience based - Experience based techniques rely entirely on the experience or
intuition of the tester. The two most common forms of experience based testing are
adhoc testing and exploratory testing.
Ques.28. Explain the different types of specification based test design technique?
Ans. Specification based test design techniques are also referred to as black box testing. They involve
testing based on the specification of the system under test without knowing its internal architecture.
The different types of specification based test design or black box testing techniques are-

 Equivalence partitioning - Grouping test data into logical groups or equivalence classes with
the assumption that all the data items lying in a class have the same effect on the
application.
 Boundary value analysis - Testing using the boundary values of the equivalence classes as
the test input.
 Decision tables - Testing using decision tables showing the application's behaviour based on
different combinations of input values.
 Cause-effect graph - Testing using a graphical representation of input (cause) and output
(effect) for test designing.
 State transition testing - Testing based on state machine model.
 Use case testing - Testing carried out using use cases.

Ques.29. Explain equivalence class partitioning.


Ans. Equivalence class partitioning is a specification based black box testing technique. In
equivalence class partitioning, the set of input data that defines different test conditions is partitioned
into logically similar groups, such that using even a single test data item from a group can be
considered equivalent to using all the other data in that group. E.g. for testing a Square
program (a program that prints the square of a number), the equivalence classes can be-
the set of negative numbers, whole numbers, decimal numbers, the set of large numbers etc.
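
A minimal sketch of these classes in Python - the partition names and the 10**6 cutoff for "large numbers" are illustrative assumptions:

```python
def partition(n):
    """Assign a test input for the hypothetical Square program to an equivalence class."""
    if n < 0:
        return "negative"
    if isinstance(n, float) and not n.is_integer():
        return "decimal"
    if n > 10**6:          # illustrative cutoff for the "large numbers" class
        return "large"
    return "whole"

# One representative per class is assumed to behave like every other member of that class.
representatives = [-5, 0.5, 10**7, 42]
print([partition(n) for n in representatives])
# ['negative', 'decimal', 'large', 'whole']
```
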

Ques.30. What is boundary value analysis?


Ans. Boundary value analysis is a software testing technique for designing test cases wherein the
boundary values of the equivalence classes are taken as input to the test cases, e.g. if the test data
lies in the range 0-100, the boundary value analysis will include the test data 0, 1, 99 and 100.
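
A small helper can generate these values for any inclusive range; including the invalid values just outside the boundaries (-1 and 101 here) is a common extension of the technique:

```python
def boundary_values(low, high):
    """Boundary value analysis for an inclusive range [low, high]:
    the valid edges plus the invalid values just outside them."""
    return {
        "valid": [low, low + 1, high - 1, high],
        "invalid": [low - 1, high + 1],
    }

print(boundary_values(0, 100))
# {'valid': [0, 1, 99, 100], 'invalid': [-1, 101]}
```
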

Ques.31. What is decision table testing?


Ans. Decision table testing is a type of specification based test design technique or black box testing
technique in which testing is carried out using decision tables showing application's behaviour
based on different combination of input values. Decision tables are particularly helpful in designing
test cases for complex business scenarios involving verification of application with multiple
combinations of input.
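
As a sketch, a decision table for a hypothetical login form can be expressed as a mapping from condition combinations to expected outcomes (all names and messages here are illustrative):

```python
# Each rule maps a combination of conditions to the expected action.
decision_table = {
    # (valid_username, valid_password): expected outcome
    (True,  True):  "login succeeds",
    (True,  False): "error: wrong password",
    (False, True):  "error: unknown user",
    (False, False): "error: unknown user",
}

def expected_outcome(valid_username, valid_password):
    """Look up the expected behaviour for one combination of inputs."""
    return decision_table[(valid_username, valid_password)]

# Each rule in the table becomes one test case.
for rule, outcome in decision_table.items():
    print(rule, "->", outcome)
```
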

Ques.32. What is a cause effect graph?


Ans. Cause effect graph testing is a black box test design technique in which a graphical
representation of input (cause) and output (effect) is used for test designing. This technique uses
different notations representing AND, OR, NOT etc. relations between the input conditions leading
to the output.

Ques.33. What is state transition testing?


Ans. State transition testing is a black box test design technique based on state machine model.
State transition testing is based on the concept that a system can be defined as a collection of
multiple states and the transition from one state to other happens because of some event.
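
A minimal sketch of this model in Python, using a hypothetical ATM card/PIN flow - the states and events are illustrative:

```python
# Transitions of a simple ATM card/PIN state machine: (state, event) -> next state.
TRANSITIONS = {
    ("idle",         "insert_card"): "awaiting_pin",
    ("awaiting_pin", "correct_pin"): "authenticated",
    ("awaiting_pin", "wrong_pin"):   "awaiting_pin",
    ("authenticated","eject_card"):  "idle",
}

def run(events, state="idle"):
    """Drive the machine through a sequence of events; a KeyError means
    the test exercised an invalid transition."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# A state-transition test case is a path through the machine plus the expected final state.
print(run(["insert_card", "wrong_pin", "correct_pin", "eject_card"]))  # idle
```
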

Ques.34. What is use case testing?


Ans. Use case testing is a black box testing approach in which testing is carried out using use
cases. A use case scenario is seen as an interaction between the application and the actors (users).
These use cases are used for depicting requirements and hence can also serve as a basis for
acceptance testing.

Ques.35. What is structure based testing?


Ans. Structure based test design techniques are also referred to as white box testing. In these
techniques, knowledge of the code or internal architecture of the system is required to carry out the
testing. The various kinds of structure based or white box testing techniques are-

 Statement testing - Test scripts are designed to execute code statements, and coverage is the
measure of the lines of code or statements executed by the test scripts.
 Decision testing/branch testing - Measure of the percentage of decision points (e.g. if-else
conditions) executed out of the total decision points in the application.
 Condition testing - Testing the condition outcomes (TRUE or FALSE). Getting 100%
condition coverage requires exercising each condition for both TRUE and FALSE results,
so for n conditions we will have at most 2n test scripts.
 Multiple condition testing - Testing the different combinations of condition outcomes. Hence
for 100% coverage we will have 2^n test scripts. This is very exhaustive, and 100% coverage
is very difficult to achieve.
 Condition determination testing - An optimized form of multiple condition testing in
which the combinations that do not affect the outcomes are discarded.
 Path testing - Testing the independent paths in the system (paths are executable statements
from entry to exit points).

Ques.36. What is Statement testing and statement coverage in white box testing?
Ans. Statement testing is a white box testing approach in which test scripts are designed to execute
code statements.
Statement coverage is the measure of the percentage of statements of code executed by the test
scripts out of the total code statements in the application. The statement coverage is the least
preferred metric for checking test coverage.
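
The idea can be sketched by instrumenting a small function by hand - the statement numbering and the grade function are illustrative:

```python
executed = set()  # records which numbered statements have run

def grade(score):
    executed.add(1)        # statement 1: the branch check
    if score >= 50:
        executed.add(2)    # statement 2: the pass branch
        return "pass"
    executed.add(3)        # statement 3: the fail branch
    return "fail"

TOTAL_STATEMENTS = 3
grade(80)                  # exercises statements 1 and 2 only
print(f"{len(executed) / TOTAL_STATEMENTS:.0%}")   # 67%
grade(10)                  # now statement 3 runs as well
print(f"{len(executed) / TOTAL_STATEMENTS:.0%}")   # 100%
```

Real projects use coverage tools (e.g. coverage.py for Python) rather than hand instrumentation; the principle is the same.
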

Ques.37. What is decision testing or branch testing?


Ans. Decision testing or branch testing is a white box testing approach in which test coverage is
measured by the percentage of decision points(e.g. if-else conditions) executed out of the total
decision points in the application.

Ques.38. What are the different levels of the testing?


Ans. Testing can be performed at different levels during the development process. Performing
testing activities at multiple levels helps in early identification of bugs. The different levels of testing
are -
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

Ques.39. What is unit testing?


Ans. Unit testing is the first level of testing and it involves testing of individual modules of the
software. It is usually performed by developers.

Ques.40. What is integration testing?


Ans. Integration testing is performed after unit testing. In integration testing we test the group of
related modules. It aims at finding interfacing issues between the modules.

Ques.41. What are the different types of integration testing?


Ans. The different types of integration testing are-
1. Big bang Integration Testing - In big bang integration testing, testing starts only after all the
modules are integrated.
2. Top-down Integration Testing - In top-down integration, testing/integration starts from the top
modules and moves to the lower-level modules.
3. Bottom-up Integration Testing - In bottom-up integration, testing starts from the lower-level
modules and moves to the higher-level modules up the hierarchy.
4. Hybrid Integration Testing - Hybrid integration testing is the combination of both top-down
and bottom-up integration testing. In this approach the integration starts from the middle layer
and testing is carried out in both directions.
Ques.42. What is stub?
Ans. In top-down integration, the lower-level modules are often not yet developed when
testing/integration begins with the top-level modules. In those cases stubs, or dummy modules, are
used to simulate the working of the missing modules by providing hardcoded or expected output
based on the input values.
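
For example, a stub for a hypothetical, not-yet-built payment module might look like this (all names are illustrative):

```python
def payment_service_stub(amount):
    """Stub standing in for the unbuilt lower-level payment module:
    it returns a hardcoded happy-path response."""
    return {"status": "approved", "amount": amount}

def checkout(cart_total, payment_service):
    """Top-level module under test; it depends on the payment module."""
    receipt = payment_service(cart_total)
    return receipt["status"] == "approved"

# The top-level module can be tested now; the stub is swapped for the
# real payment module once that module exists.
print(checkout(99.0, payment_service_stub))  # True
```
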

Ques.43. What is driver?


Ans. In bottom-up integration, drivers are used to simulate the working of the top-level modules
in order to test the related modules lower in the hierarchy.
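
A minimal sketch of a driver exercising a hypothetical lower-level tax module - the names and the 18% rate are illustrative:

```python
# Low-level module under test (already built).
def calculate_tax(amount, rate=0.18):
    return round(amount * rate, 2)

# Driver simulating the not-yet-built top-level billing module:
# it feeds inputs to the lower module and checks the outputs.
def tax_driver():
    cases = [(100, 18.0), (250, 45.0), (0, 0.0)]
    return all(calculate_tax(amount) == expected for amount, expected in cases)

print(tax_driver())  # True
```
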

Ques.44. What is a test harness? Why do we need a test harness?


Ans. A test harness is a collection of test scripts and test data usually associated with unit and
integration testing. It involves stubs and drivers that are required for testing software modules and
integrated components.

Ques.45. What is system testing?


Ans. System testing is the level of testing where the complete software is tested as a whole. The
conformance of the application with its business requirements is checked in system testing.

Ques.46. What is acceptance testing?


Ans. Acceptance testing is a testing performed by the potential end user or customers to check if the
software conforms to the business requirements and can be accepted for use.

Ques.47. What is alpha testing?


Ans. Alpha testing is a type of acceptance testing that is performed by end users at the developer's site.

Ques.48. What is beta testing?


Ans. Beta testing is the testing done by end users at end user's site. It allows users to provide direct
input about the software to the development company.

Ques.49. What is adhoc testing?


Ans. Adhoc testing is an unstructured way of testing that is performed without any formal
documentation or proper planning.

Ques.50. What is monkey testing?


Ans. Monkey testing is a type of testing that is performed randomly without any predefined test
cases or test inputs.
Ques.51. How is monkey testing different from adhoc testing?
Ans. In adhoc testing, although there are no predefined or documented test cases, the testers still
have an understanding of the application. In monkey testing, the testers do not have any
understanding of the application.

Ques.52. What is exploratory testing?


Ans. Exploratory testing is a type of testing in which new test cases are added and updated while
exploring the system or executing the existing test cases. Unlike scripted testing, test design and
execution proceed in parallel in exploratory testing.

Ques.53. What is performance testing?


Ans. Performance testing is a type of non-functional testing in which the performance of the system
is evaluated under expected or higher load. The various performance parameters evaluated during
performance testing are - response time, reliability, resource usage, scalability etc.

Ques.54. What is load testing?


Ans. Load testing is a type of performance testing which aims at finding the application's performance
under the expected workload. During load testing we evaluate parameters of the application such as
response time, throughput and error rate.

Ques.55. What is stress testing?


Ans. Stress testing is a type of performance testing in which the application's behaviour is monitored
under a workload higher than expected. Stress testing is done to find memory leaks and to check the
robustness of the application as it is subjected to a high workload.

Ques.56. What is volume testing?


Ans. Volume testing is a type of performance testing in which the performance of the application is
evaluated with a large amount of data. It checks the scalability of the application and helps in the
identification of bottlenecks with high volumes of data.

Ques.57. What is endurance testing or Soak testing?


Ans. Endurance testing is a type of performance testing which aims at finding issues like memory
leaks when an application is subjected to load test for a long period of time.

Ques.58. What is spike testing?


Ans. Spike testing is a type of performance testing in which the application's performance is
measured while suddenly increasing the number of active users during the load test.
Ques.59. What is usability testing?
Ans. Usability testing is the type of testing that aims at determining the extent to which the
application is easy to understand and use.

Ques.60. What is Accessibility testing?


Ans. Accessibility testing is the type of testing which aims at determining the ease of use or operation
of the application, specifically by people with disabilities.

Ques.61. What is compatibility testing?


Ans. Testing software to see how compatible the software is with a particular environment -
Operating system, platform or hardware.

Ques.62. What is configuration testing?


Ans. Configuration testing is the type of testing used to evaluate the configurational requirements of
the software along with effect of changing the required configuration.

Ques.63. What is localisation testing?


Ans. Localisation testing is a type of testing in which we evaluate the application's
customization (the localized version of the application) for a particular culture or locale. Generally,
the content of the application is checked for appropriate adaptation (e.g. the content language).

Ques.64. What is globalisation testing?


Ans. Globalisation testing is a type of testing in which application is evaluated for its functioning
across the world.

Ques.65. What is negative testing?


Ans. Negative testing is a type of testing in which the application's robustness(graceful exiting or
error reporting) is evaluated when provided with invalid input or test data.

Ques.66. What is security testing?


Ans. Security testing is a type of testing which aims at evaluating the integrity, authentication,
authorization, availability, confidentiality and non-repudiation of the application under test.

Ques.67. What is penetration testing?


Ans. Penetration testing or pen testing is a type of security testing in which the application is
evaluated (safely exploited) for the different kinds of vulnerabilities that a hacker could exploit.

Ques.68. What is robustness testing?


Ans. Robustness testing is a type of testing that is performed to find the robustness of the
application i.e. the ability of the system to behave gracefully in case of erroneous test steps and test
input.

Ques.69. What is A/B testing?


A/B testing is a type of testing in which two variants of the software product are exposed to the
end users; by analysing the user behaviour on each variant, the better variant is chosen and used
thereafter.

Ques.70. What is concurrency testing?


Ans. Concurrency testing is multi-user testing in which an application is evaluated by analyzing its
behaviour with concurrent users accessing the same functionality.

Ques.71. What is all pair testing?


Ans. All-pairs testing is a type of testing in which the application is tested with all possible discrete
combinations of each pair of input parameters, rather than every combination of all parameters.

Ques.72. What is failover testing?


Ans. Failover testing is a type of testing that is used to verify the application's ability to allocate more
resources (more servers) in case of failure and to transfer processing to a back-up system.

Ques.73. What is fuzz testing?


Ans. Fuzz testing is a type of testing in which a large amount of random data is provided as input to
the application in order to find security loopholes and other issues in the application.
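
A toy sketch of fuzzing a single function - parse_age and its validation rules are illustrative assumptions, not from the text:

```python
import random
import string

def parse_age(text):
    """Function under test: should reject bad input gracefully, never crash."""
    if not text.isdigit():
        raise ValueError("not a number")
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError("out of range")
    return age

random.seed(0)  # reproducible fuzz run
crashes = 0
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_age(junk)
    except ValueError:
        pass            # graceful rejection is the expected behaviour
    except Exception:
        crashes += 1    # anything else is a robustness bug
print("unexpected crashes:", crashes)
```

Dedicated fuzzers (e.g. AFL, libFuzzer) use coverage feedback to steer the random inputs, but the principle is the same.
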

Ques.74. What is UI testing?


Ans. UI or user interface testing is a type of testing that aims at finding Graphical User Interface
defects in the application and checks that the GUI conforms to the specifications.

Ques.75. What is risk analysis?


Ans. Risk analysis is the analysis of the identified risks and the assignment of an appropriate risk
level to each, based on its impact on the application.
Ques.76. What is the difference between regression and retesting?
Ans. Regression testing is testing the application to verify that a new code change doesn't affect
other parts of the application. In retesting, we verify whether the fixed issue is resolved or not.

Ques.77. What is the difference between blackbox and whitebox testing?


Ans. Blackbox testing is a type of testing in which knowledge of the internal architecture of the code
is not required. It is usually applicable to system and acceptance testing.
Whitebox testing requires knowledge of the internal design and implementation of the
application being tested. It is usually applicable to unit and integration testing.

Ques.78. What is the difference between smoke and sanity testing?


Ans. The difference between smoke and sanity testing is-

 Smoke testing is a type of testing in which all the major functionalities of the application are
tested before carrying out exhaustive testing. Sanity testing is a subset of regression
testing which is carried out when there is a minor fix in the application in a new build.
 In smoke testing, shallow-wide testing is carried out, while in sanity testing, narrow-deep
testing (of a particular functionality) is done.
 Smoke tests are usually documented or automated, whereas sanity tests are
generally undocumented or unscripted.

Ques.79. What is code coverage?


Ans. Code coverage is the measure of the amount of code covered by the test scripts. It gives the
idea of the part of the application covered by the test suite.

Ques.80. What is cyclomatic complexity?


Ans. Cyclomatic complexity is the measure of the number of independent paths in an application or
program. This metric provides an indication of the amount of effort required to test the complete
functionality. It is defined by the expression -
L – N + 2P, where:
L is the number of edges in the graph
N is the number of nodes
P is the number of disconnected parts
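
Worked example: a single if-else produces a control-flow graph with 4 edges, 4 nodes and 1 connected part, giving a complexity of 2 (the if path and the else path):

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    """McCabe's metric: L - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * parts

# if-else CFG: nodes {decision, then-block, else-block, join},
# edges {decision->then, decision->else, then->join, else->join}
print(cyclomatic_complexity(edges=4, nodes=4, parts=1))  # 2
```
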

Ques.81. What is dynamic testing?


Ans. Testing performed by executing or running the application under test either manually or using
automation.

Ques.82. What is an exit criteria?


Ans. An exit criteria is a formal set of conditions that specify the agreed upon features or state of
application in order to mark the completion of the process or product.

Ques.83. What is traceability matrix?


Ans. In software testing a traceability matrix is a table that relates the high level requirements with
detailed requirements, test plans or test cases in order to determine the completeness of the
relationship.

Ques.84. What is pilot testing?


Ans. Pilot testing is testing carried out as a trial by a limited number of users, who evaluate the
system and provide their feedback before the complete deployment is carried out.

Ques.85. What is backend testing?


Ans. Backend testing is a type of testing that involves testing the backend of the system, which
comprises testing the databases and the APIs in the application.

Ques.86. What are some advantages of automation testing?


Ans. Some advantages of automation testing are-
1. Test execution using automation is fast and saves a considerable amount of time.
2. Carefully written test scripts remove the chance of human error during testing.
3. Test execution can be scheduled for nightly runs using CI tools like Jenkins, which can also
be configured to provide daily test results to relevant stakeholders.
4. Automation testing is much less resource intensive. Once the tests are automated, test
execution requires almost no QA time, saving QA bandwidth for other exploratory tasks.

Ques.87. What are some disadvantages of automation testing?


Ans. Some disadvantages of automation testing are-
1. It requires skilled automation testing experts to write test scripts.
2. Additional effort to write scripts is required upfront.
3. Automation scripts are limited to verification of the tests that are coded. These tests may
miss some error that is very glaring and easily identifiable to a human (manual QA).
4. Even with a minor change in the application, script updates and maintenance are required.

Ques.88. What is mutation testing?


Ans. Mutation testing is a type of white box testing in which the source code of the application is
mutated to cause some defect in its working. After that, the test scripts are executed to check
their correctness by verifying the failures caused by the mutant code.
Ques.89. Write test cases for Pen.
Ans. Test cases of Pen

Ques.90. Write test cases for ATM Machine.


Ans. Test cases of ATM Machine

Ques.91. Write test cases for Login.


Ans. Test cases of Login Page

Ques.92. Write test cases for Lift.


Ans. Test cases of Lift

Ques.93. Write test cases for an E-commerce application.


Ans. Test cases of E-commerce application
Ques.94. What should be the psychology of testing?
Ans. The two main stakeholders in the software development life cycle - testers and developers - have
different mindsets while approaching an application. Testers tend to have a more stringent approach
of examining the software. Most of the time they are looking to "break the application", whereas
developers have the mindset to "make the application work".
ISTQB has defined certain psychological factors that influence the success of testing-

- Independence - Testers should enjoy a certain degree of independence while testing the
application rather than following a straight path. This can be achieved by different
approaches like off-shoring the QA process, or getting test strategies and other test plans from
someone not carrying any sort of bias.
- Testing is often looked upon as a destructive activity as it aims at finding flaws in the system. But
QA personnel should present testing as a required and integral part, presenting it as a
constructive activity in the overall software development lifecycle that mitigates risks at early
stages.
- More often than not, testers and developers are at opposite ends of the spectrum. Testers need
to have good interpersonal skills to communicate their findings without indulging in any sort
of tussle with the developers.
- All communication and discussions should be focused on facts and figures (risk
analysis, priority setting etc.), emphasizing that the collaborative work of developers and
testers will lead to better software.
- Empathy is one characteristic that definitely helps in testing. Testers empathizing with
developers and other project stakeholders will lead to a better relationship between the two. Also,
empathizing with end users will lead to software with better usability.
Ques.95. What is the difference between Testing and debugging?
Ans. Testing is primarily performed by the testing team in order to find defects in the system.
Whereas, debugging is an activity performed by the development team. In debugging, the cause of
a defect is located and fixed, thus removing the defect and preventing any future occurrence of the
defect as well.
Another difference between the two is that testing can be done without any internal knowledge of the
software architecture, whereas debugging requires knowledge of the software architecture and
coding.

Ques.96. Explain Agile methodology?


Ans. Agile methodology of software development is based on an iterative and incremental approach.
In this model the application is broken down into smaller builds on which different cross-functional
teams work together, providing rapid delivery along with adapting to changing needs at the same
time.

Ques.97. What is scrum?


Ans. Scrum is a process for implementing the Agile methodology. In scrum, time is divided into
sprints, and on completion of a sprint a deliverable is shipped.

Ques.98. What are the different roles in scrum?


Ans. The different roles in scrum are -
1. Product Owner - The product owner owns the whole development of the product, assigns
tasks to the team and acts as an interface between the scrum team (development team) and the
stakeholders.
2. Scrum Master - The scrum master ensures that the scrum rules are followed in the team and
conducts the scrum meetings.
3. Scrum Team - The scrum team participates in the scrum meetings and performs the tasks
assigned.

Ques.99. What is a scrum meeting?


Ans. A scrum meeting is a daily meeting in the scrum process. It is conducted by the scrum
master, and in it each team member gives an update on the previous day's work along with the
tasks and context for the next day.

Ques.100. Explain TDD (Test Driven Development).


Ans. Test Driven Development is a software development methodology in which the development
of the software is driven by test cases created for the functionality to be implemented. In TDD first
the test cases are created and then code to pass the tests is written. Later the code is refactored as
per the standards.
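A minimal sketch of the red-green cycle described above (the slugify function is a hypothetical example, not from the source):

```python
# Red: the test is written first, describing the behavior we want.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me ") == "trim-me"

# Green: just enough implementation to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify()  # passes; the refactor step would now clean up the code
print("tests passed")  # with the tests acting as a safety net
```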
1. What is Software Testing?
According to ANSI/IEEE 1059 standard – A process of analyzing a software item to detect the
differences between existing and required conditions (i.e., defects) and to evaluate the features of
the software item.
2. What are Quality Assurance and Quality Control?
Quality Assurance: Quality Assurance involves process-oriented activities. It ensures the
prevention of defects in the process used to make the Software Application, so that defects do not
arise when the Software Application is being developed.
Quality Control: Quality Control involves product-oriented activities. It executes the program or
code to identify the defects in the Software Application.
3. What is Verification in software testing?
Verification is the process of ensuring that we are building the product right, i.e., verifying the
requirements which we have and checking whether we are developing the product accordingly or
not. Activities involved here are Inspections, Reviews and Walk-throughs.
4. What is Validation in software testing?
Validation is the process of checking whether we are building the right product, i.e., validating
that the product which we have developed is right or not. The activity involved here is testing the
software application.
5. What is Static Testing?
Static Testing involves reviewing the documents to identify defects in the early stages of the
SDLC.
6. What is Dynamic Testing?
Dynamic testing involves the execution of code. It validates the output against the expected outcome.
7. What is White Box Testing?
White Box Testing is also called Glass Box, Clear Box, and Structural Testing. It is based on the
application's internal code structure. In white-box testing, an internal perspective of the system, as
well as programming skills, are used to design test cases. This testing is usually done at the unit
level.
8. What is Black Box Testing?
Black Box Testing is a software testing method in which testers evaluate the functionality of the
software under test without looking at the internal code structure. This can be applied to every level
of software testing such as Unit, Integration, System and Acceptance Testing.
9. What is Grey Box Testing?
Grey box is the combination of both White Box and Black Box Testing. The tester who works on
this type of testing needs to have access to design documents. This helps to create better test cases
in this process.
10. What is Positive and Negative Testing?
Positive Testing: It is to determine what the system is supposed to do. It helps to check whether the
application is justifying the requirements or not.
Negative Testing: It is to determine what the system is not supposed to do. It helps to find defects
in the software.
11. What is Test Strategy?
Test Strategy is a high-level document (static document) usually developed by the project manager.
It is a document which captures the approach on how we go about testing the product and achieving
the goals. It is normally derived from the Business Requirement Specification (BRS). Documents
like the Test Plan are prepared by keeping this document as a base.
12. What is Test Plan and contents available in a Test Plan?
Test plan document is a document which contains the plan for all the testing activities to be done to
deliver a quality product. Test Plan document is derived from the Product Description, SRS, or Use
Case documents for all future activities of the project. It is usually prepared by the Test Lead or Test
Manager.
1. Test plan identifier
2. References
3. Introduction
4. Test items (functions)
5. Software risk issues
6. Features to be tested
7. Features not to be tested
8. Approach
9. Items pass/fail criteria
10. Suspension criteria and resolution requirements
11. Test deliverables
12. Remaining test tasks
13. Environmental needs
14. Staff and training needs
15. Responsibility
16. Schedule
17. Plan risks and contingencies
18. Approvals
19. Glossaries
13. What is Test Suite?
Test Suite is a collection of test cases that are intended to test an application.
14. What is Test Scenario?
Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level test case.
15. What is Test Case?
Test cases are the set of positive and negative executable steps of a test scenario which has a set of
pre-conditions, test data, expected result, post-conditions and actual results.
16. What is Test Bed?
An environment configured for testing. A test bed consists of hardware, software, network
configuration, the application under test, and other related software.
17. What is Test Environment?
Test Environment is the combination of hardware and software on which Test Team performs
testing.
Example:
- Application Type: Web Application
- OS: Windows
- Web Server: IIS
- Web Page Design: Dot Net
- Client Side Validation: JavaScript
- Server Side Scripting: ASP Dot Net
- Database: MS SQL Server
- Browser: IE/Firefox/Chrome

18. What is Test Data?


Test data is the data that is used by the testers to run the test cases. Whilst running the test cases,
testers need to enter some input data. To do so, testers prepare test data. It can be prepared manually
and also by using tools.
For example, To test a basic login functionality having user id, password fields. We need to enter
some data in the user id and password fields. So we need to collect some test data.
19. What is Test Harness?
Test harness is the collection of software and test data configured to test a program unit by running
it under varying conditions, which involves comparing the actual output with the expected output.
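A minimal sketch using Python's unittest as the harness (the divide unit is a hypothetical example): the framework supplies the runner, while the test class supplies the test data and expected outputs:

```python
import unittest

def divide(a, b):
    # The program unit under test.
    return a / b

class DivideTests(unittest.TestCase):
    # Varying conditions: a normal case and an error case.
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# The harness collects the tests, runs the unit under varying conditions,
# and compares actual output against the expected output.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DivideTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```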
20. What is Test Closure?
Test Closure is the note prepared before the test team formally completes the testing process. This
note contains the total no. of test cases, total no. of test cases executed, total no. of defects found,
total no. of defects fixed, total no. of bugs not fixed, total no. of bugs rejected etc.
21. List out Test Deliverables?
1. Test Strategy
2. Test Plan
3. Effort Estimation Report
4. Test Scenarios
5. Test Cases/Scripts
6. Test Data
7. Requirement Traceability Matrix (RTM)
8. Defect Report/Bug Report
9. Test Execution Report
10. Graphs and Metrics
11. Test summary report
12. Test incident report
13. Test closure report
14. Release Note
15. Installation/configuration guide
16. User guide
17. Test status report
18. Weekly status report (Project manager to client)
22. What is Unit Testing?
Unit Testing is also called as Module Testing or Component Testing. It is done to check whether the
individual unit or module of the source code is working properly. It is done by the developers in
developer’s environment.
23. What is Integration Testing?
Integration Testing is the process of testing the interface between two software units. Integration
testing is done in three ways: Big Bang approach, Top-Down approach and Bottom-Up approach.
24. What is System Testing?
Testing the fully integrated application to evaluate the system’s compliance with its specified
requirements is called System Testing AKA End to End testing. Verifying the completed system
to ensure that the application works as intended or not.
25. What is Big Bang Approach?
Combining all the modules once and verifying the functionality after completion of individual
module testing.
Top down and bottom up are carried out by using dummy modules known as Stubs and Drivers.
These Stubs and Drivers are used to stand-in for missing components to simulate data
communication between modules.
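As a small sketch of a stub (the module names are hypothetical): a not-yet-ready lower-level module is replaced by a stand-in so the higher-level module can still be integration-tested:

```python
def payment_gateway_stub(amount):
    # Stub standing in for the real, not-yet-ready payment module.
    # It simulates the data the real module would return.
    return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    # Higher-level module under test, integrated against the stub.
    response = gateway(cart_total)
    return response["status"] == "approved"

print(checkout(250, payment_gateway_stub))  # -> True
```

A driver works in the mirror-image direction: it is a stand-in caller used to exercise a lower-level module when the modules above it are not ready.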
26. What is Top-Down Approach?
Testing takes place from top to bottom. High-level modules are tested first and then low-level
modules and finally integrating the low-level modules to a high level to ensure the system is
working as intended. Stubs are used as a temporary module if a module is not ready for integration
testing.
27. What is Bottom-Up Approach?
It is the reciprocal of the Top-Down Approach. Testing takes place from the bottom up. The lowest level
modules are tested first and then high-level modules and finally integrating the high-level modules
to a low level to ensure the system is working as intended. Drivers are used as a temporary module
for integration testing.
28. What is End-To-End Testing?
Refer System Testing.
29. What is Functional Testing?
In simple words, what the system actually does is functional testing. To verify that each function of
the software application behaves as specified in the requirement document. Testing all the
functionalities by providing appropriate input to verify whether the actual output is matching the
expected output or not. It falls within the scope of black box testing and the testers need not be
concerned with the source code of the application.
30. What is Non-Functional Testing?
In simple words, how well the system performs is non-functional testing. Non-functional testing
refers to various aspects of the software such as performance, load, stress, scalability, security,
compatibility etc., Main focus is to improve the user experience on how fast the system responds to
a request.
31. What is Acceptance Testing?
It is also known as pre-production testing. It is done by the end users along with the testers to
validate the functionality of the application. It is formal testing conducted to determine whether
the application is developed as per the requirement. It allows the customer to accept or reject the
application. Types of acceptance testing are Alpha, Beta & Gamma.
32. What is Alpha Testing?
Alpha testing is done by the in-house developers (who developed the software) and testers.
Sometimes alpha testing is done by the client or outsourcing team with the presence of developers
or testers.
33. What is Beta Testing?
Beta testing is done by a limited number of end users before delivery. Usually, it is done in the
client place.
34. What is Gamma Testing?
Gamma testing is done when the software is ready for release with specified requirements. It is
done at the client place. It is done directly by skipping all the in-house testing activities.
35. What is Smoke Testing?
Smoke Testing is done to make sure if the build we received from the development team is testable
or not. It is also called a "Day 0" check. It is done at the "build level". It helps avoid wasting
testing time on the whole application when the key features don't work or the key bugs have not
been fixed yet.
36. What is Sanity Testing?
Sanity Testing is done during the release phase to check for the main functionalities of the
application without going deeper. It is also called as a subset of Regression testing. It is done at the
“release level”. At times due to release time constraints rigorous regression testing can’t be done to
the build, sanity testing does that part by checking main functionalities.
37. What is Retesting?
To ensure that the defects which were found and posted in the earlier build were fixed or not in the
current build. Say, Build 1.0 was released. Test team found some defects (Defect Id 1.0.1, 1.0.2) and
posted. Build 1.1 was released, now testing the defects 1.0.1 and 1.0.2 in this build is retesting.
38. What is Regression Testing?
Repeated testing of an already tested program, after modification, to discover any defects
introduced or uncovered as a result of the changes in the software being tested or in another related
or unrelated software components.
Usually, we do regression testing in the following cases:
1. New functionalities are added to the application
2. Change Requirement (In organizations, we call it as CR)
3. Defect Fixing
4. Performance Issue Fix
5. Environment change (E.g., Updating the DB from MySQL to Oracle)
39. What is GUI Testing?
Graphical User Interface Testing is to test the interface between the application and the end user.
40. What is Recovery Testing?
Recovery testing is performed in order to determine how quickly the system can recover after the
system crash or hardware failure. It comes under the type of non-functional testing.
41. What is Globalization Testing?
Globalization is a process of designing a software application so that it can be adapted to various
languages and regions without any changes.
42. What is Internationalization Testing (I18N Testing)?
Refer Globalization Testing.
43. What is Localization Testing (L10N Testing)?
Localization is a process of adapting globalization software for a specific region or language by
adding local specific components.
44. What is Installation Testing?
It is to check whether the application is successfully installed and it is working as expected after
installation.
45. What is Formal Testing?
It is a process where the testers test the application by having pre-planned procedures and proper
documentation.
46. What is Risk Based Testing?
Identify the modules or functionalities which are most likely to cause failures and then test those
functionalities.
47. What is Compatibility Testing?
It is to deploy and check whether the application is working as expected in different combination of
environmental components.
48. What is Exploratory Testing?
Usually this process will be carried out by domain experts. They perform testing just by exploring
the functionalities of the application without having the knowledge of the requirements.
49. What is Monkey Testing?
Perform abnormal action on the application deliberately in order to verify the stability of the
application.
50. What is Usability Testing?
To verify whether the application is user-friendly and can be comfortably used by the end
user. The main focus in this testing is to check whether the end user can understand and operate the
application easily or not. The application should be self-explanatory and must not require training to
operate it.

Top 100 Manual Testing Interview Questions – 51-100:


51. What is Security Testing?
Security testing is a process to determine whether the system protects data and maintains
functionality as intended.
52. What is Soak Testing?
Running a system at high load for a prolonged period of time to identify the performance problems
is called Soak Testing.
53. What is Performance Testing?
This type of testing determines or validates the speed, scalability, and/or stability characteristics of
the system or application under test. Performance is concerned with achieving response times,
throughput, and resource-utilization levels that meet the performance objectives for the project or
product.
54. What is Load Testing?
It is to verify that the system/application can handle the expected number of transactions and to
verify the system/application behavior under both normal and peak load conditions.
55. What is Volume Testing?
It is to verify that the system/application can handle a large amount of data.
56. What is Stress Testing?
It is to verify the behavior of the system once the load increases more than its design expectations.
57. What is Scalability Testing?
Scalability testing is a type of non-functional testing. It is to determine how the application under
test scales with increasing workload.
58. What is Concurrency Testing?
Concurrency testing means accessing the application at the same time by multiple users to ensure
the stability of the system. This is mainly used to identify deadlock issues.
59. What is Fuzz Testing?
Fuzz testing is used to identify coding errors and security loopholes in an application. It works by
inputting a massive amount of random data into the system in an attempt to make it crash and
identify if anything breaks in the application.
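A toy sketch of the idea (the parser under test is hypothetical): throw lots of random input at a unit and count uncaught exceptions:

```python
import random
import string

def parse_key_value(line):
    # Hypothetical unit under test: parses "key=value" strings.
    if "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key, value

# Feed many random strings; any uncaught exception is a fuzzing find.
random.seed(42)
crashes = 0
for _ in range(1000):
    data = "".join(random.choice(string.printable) for _ in range(30))
    try:
        parse_key_value(data)
    except Exception:
        crashes += 1
print(crashes)  # -> 0, the parser survived this fuzzing run
```

Real fuzzers (e.g. coverage-guided ones) mutate inputs far more cleverly, but the crash-hunting loop is the core of the technique.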
60. What is Adhoc Testing?
Ad-hoc testing is quite opposite to the formal testing. It is an informal testing type. In Adhoc
testing, testers randomly test the application without following any documents and test design
techniques. This testing is primarily performed if the knowledge of testers in the application under
test is very high. Testers randomly test the application without any test cases or any business
requirement document.
61. What is Interface Testing?
Interface testing is performed to evaluate whether two intended modules pass data and
communicate correctly to one another.
62. What is Reliability Testing?
Perform testing on the application continuously for a long period of time in order to verify the
stability of the application.
63. What is Bucket Testing?
Bucket testing is a method to compare two versions of an application against each other to
determine which one performs better.
64. What is A/B Testing?
Refer Bucket Testing.
65. What is Split Testing?
Refer Bucket Testing.
66. What are the principles of Software Testing?
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence of error fallacy
67. What is Exhaustive Testing?
Testing all the functionalities using all valid and invalid inputs and preconditions is known as
Exhaustive testing.
68. What is Early Testing?
Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing
reduces the cost of fixing defects.
69. What is Defect clustering?
Defect clustering in software testing means that a small module or functionality contains most of
the bugs or it has the most operational failures.
70. What is Pesticide Paradox?
Pesticide Paradox in software testing is the process of repeating the same test cases again and again,
eventually the same test cases will no longer find new bugs. So to overcome this Pesticide Paradox,
it is necessary to review the test cases regularly and add or update them to find more defects.
71. What is Walk Through?
A walkthrough is an informal meeting conducted to learn, gain understanding, and find defects. The
author leads the meeting and clarifies the queries raised by the peers in the meeting.
72. What is Inspection?
Inspection is a formal meeting led by a trained moderator, certainly not by the author. The
document under inspection is prepared and checked thoroughly by the reviewers before the
meeting. In the inspection meeting the defects found are logged and shared with the author for
appropriate actions. Post inspection, a formal follow-up process is used to ensure a timely and
corrective action.
73. Who is involved in an inspection meeting?
Author, Moderator, Reviewer(s), Scribe/Recorder and Manager.
74. What is a Defect?
The variation between the actual results and expected results is known as defect. If a developer
finds an issue and corrects it by himself in the development phase then it's called a defect.
75. What is a Bug?
If testers find any mismatch in the application/system in the testing phase then they call it a bug.
76. What is an Error?
An error is a coding mistake due to which we can’t compile or run a program. If a developer is
unable to successfully compile or run a program then they call it an error.
77. What is a Failure?
Once the product is deployed and customers find any issues then they call the product a failure
product. After release, if an end user finds an issue then that particular issue is called a failure.
78. What is Bug Severity?
Bug/Defect severity can be defined as the impact of the bug on the customer’s business. It can be
Critical, Major or Minor. In simple words, how much effect there will be on the system because of a
particular defect.
79. What is Bug Priority?
Defect priority can be defined as how soon the defect should be fixed. It gives the order in which a
defect should be resolved. Developers decide which defect they should take up next based on the
priority. It can be High, Medium or Low. Most of the time the priority status is set based on the
customer requirement.
80. Tell some examples of Bug Severity and Bug Priority?
High Priority & High Severity: Submit button is not working on a login page and customers are
unable to login to the application
Low Priority & High Severity: Crash in some functionality which is going to be delivered after a
couple of releases
High Priority & Low Severity: Spelling mistake of a company name on the home page
Low Priority & Low Severity: FAQ page takes a long time to load
81. What is the difference between Standalone application, Client-Server application and Web
application?
Standalone application:
Standalone applications follow one-tier architecture. Presentation, Business and Database layer are
in one system for single user.
Client Server Application:
Client server applications follow two-tier architecture. The Presentation and Business layers are in the
client system and the Database layer is on another server. It works mainly on an intranet.
Web Application:
Web server applications follow three-tier or n-tier architecture. Presentation layer is in client
system, Business layer is in application server and Database layer is in Database server. It works
both in Intranet and Internet.
82. What is Bug Life Cycle?
Bug life cycle is also known as Defect life cycle. In the software development process, the bug has a
life cycle. The bug should go through the life cycle to be closed. The bug life cycle varies depending
upon the tools (QC, JIRA etc.) used and the process followed in the organization.
83. What is Bug Leakage?
A bug which is actually missed by the testing team while testing and the build was released to the
Production. If now that bug (which was missed by the testing team) was found by the end user or
customer then we call it as Bug Leakage.
84. What is Bug Release?
Releasing the software to production with known bugs is called a Bug Release. These known bugs
should be included in the release note.
85. What is Defect Age?
Defect age can be defined as the time interval between date of defect detection and date of defect
closure.
Defect Age = Date of defect closure – Date of defect detection
Assume a tester found a bug and reported it on 1 Jan 2016 and it was successfully fixed on 5 Jan
2016. By the formula above, the defect age is 4 days.
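The closure-minus-detection subtraction can be checked directly (a short sketch using Python's datetime):

```python
from datetime import date

detected = date(2016, 1, 1)   # date of defect detection
closed = date(2016, 1, 5)     # date of defect closure

defect_age = (closed - detected).days
print(defect_age)  # -> 4
```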
86. What is Error Seeding?
Error seeding is a process of intentionally adding known errors to a program to identify the rate of
error detection. It helps in estimating the testers’ skill at finding bugs and also in knowing the
ability of the application (how well the application works when it has errors).
87. What is Showstopper Defect?
A show stopper defect is a defect which won’t allow user to move further in the application. It’s
almost like a crash.
Assume that login button is not working. Even though you have a valid user name and valid
password, you could not move further because the login button is not functioning.
88. What is Hot Fix?
A bug which needs to be handled as a high priority bug and fixed immediately.
89. What is Boundary Value Analysis?
Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid
partitions. The Behavior at the edge of each equivalence partition is more likely to be incorrect than
the behavior within the partition, so boundaries are an area where testing is likely to yield
defects. Every partition has its maximum and minimum values and these maximum and minimum
values are the boundary values of a partition. A boundary value for a valid partition is a valid
boundary value. Similarly, a boundary value for an invalid partition is an invalid boundary
value.
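As a small sketch, the classic two-value BVA picks each boundary of a valid partition plus the invalid value just outside it (the helper name and the 1..100 field are illustrative):

```python
def boundary_values(lo, hi):
    # Boundary values for a valid partition [lo, hi]: each edge value
    # plus the invalid value just outside it.
    return [lo - 1, lo, hi, hi + 1]

# An input field that accepts values from 1 to 100:
print(boundary_values(1, 100))  # -> [0, 1, 100, 101]
```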
90. What is Equivalence Class Partition?
Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence
partitioning, inputs to the software or system are divided into groups that are expected to exhibit
similar behavior, so they are likely to be processed in the same way. Hence we select one input from
each group to design the test cases.
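As a sketch (the age field accepting 18..60 is a hypothetical example), three equivalence classes emerge, and one representative value per class stands in for the whole group:

```python
def classify_age(age):
    # Hypothetical field that accepts ages 18..60: three equivalence classes.
    if age < 18:
        return "invalid-low"
    if age <= 60:
        return "valid"
    return "invalid-high"

# One representative input per class covers each partition.
for representative in (10, 35, 70):
    print(representative, "->", classify_age(representative))
```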
91. What is Decision Table testing?
Decision Table is aka Cause-Effect Table. This test technique is appropriate for functionalities
which has logical relationships between inputs (if-else logic). In Decision table technique, we deal
with combinations of inputs. To identify the test cases with decision table, we consider conditions
and actions. We take conditions as inputs and actions as outputs.
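A tiny sketch of a decision table for a login form (the conditions and actions are illustrative): each combination of condition values maps to one action, and each row becomes a test case:

```python
# Conditions (valid user, valid password) as inputs, actions as outputs.
decision_table = {
    (True,  True):  "show home page",
    (True,  False): "show error message",
    (False, True):  "show error message",
    (False, False): "show error message",
}

def login_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

print(login_action(True, True))   # -> show home page
print(login_action(True, False))  # -> show error message
```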
92. What is State Transition?
Using state transition testing, we pick test cases from an application where we need to test different
system transitions. We can apply this when an application gives a different output for the same
input, depending on what has happened in the earlier state.
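A small sketch of the idea (the PIN-attempt states are a common illustrative example): the same input ("wrong") produces a different transition depending on the current state:

```python
# PIN entry on a card: the outcome of entering a wrong PIN depends on
# how many attempts have already been used.
transitions = {
    ("attempt1", "wrong"): "attempt2",
    ("attempt2", "wrong"): "attempt3",
    ("attempt3", "wrong"): "card_blocked",
    ("attempt1", "right"): "access_granted",
    ("attempt2", "right"): "access_granted",
    ("attempt3", "right"): "access_granted",
}

state = "attempt1"
for entry in ("wrong", "wrong", "wrong"):
    state = transitions[(state, entry)]
print(state)  # -> card_blocked
```

Test cases are then derived by covering each valid transition, and optionally the invalid ones.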
93. What is an entry criteria?
The prerequisites that must be achieved before commencing the testing process.
94. What is an exit criteria?
The conditions that must be met before testing should be concluded.
95. What is SDLC?
Software Development Life Cycle (SDLC) aims to produce high quality system that meets or
exceeds customer expectations, works effectively and efficiently in the current and planned
information technology infrastructure, and is inexpensive to maintain and cost effective to enhance.
96. What are the different available models of SDLC?
1. Waterfall
2. Spiral
3. V Model
4. Prototype
5. Agile
97. What is STLC?
STLC (Software Testing Life Cycle) identifies what test activities to carry out and when to
accomplish those test activities. Even though testing differs between Organizations, there is a
testing life cycle.
98. What is RTM?
Requirements Traceability Matrix (RTM) is used to trace the requirements to the tests that are
needed to verify whether the requirements are fulfilled. Requirement Traceability Matrix is also
known as Traceability Matrix or Cross Reference Matrix.
99. What is Test Metrics?
Software test metrics are used to monitor and control the testing process and the product. They
help drive the project towards its planned goals without deviation. Metrics answer different
questions; it is important to decide which questions you want answered.
100. When to stop testing? (Or) How do you decide when you have tested enough?
There are many factors involved in real-world projects that decide when to stop testing.
1. Testing deadlines or release deadlines
2. By reaching the decided pass percentage of test cases
3. The risk in the project is under acceptable limit
4. All the high priority bugs, blockers are fixed
5. When the acceptance criteria are met

A list of frequently asked software testing and QTP interview questions and answers is given
below.

1) What is PDCA cycle and where testing fits in?


There are four steps in a normal software development process. In short, these steps are referred
to as PDCA.
PDCA stands for Plan, Do, Check, Act.

 Plan: It defines the goal and the plan for achieving that goal.
 Do/Execute: Execution is carried out according to the strategy decided during the planning
stage.
 Check: This is the testing part of the software development phase. It is used to ensure that
we are moving according to plan and getting the desired results.
 Act: If any issue occurs during the check stage, appropriate action is taken and the plan is
revised accordingly.
The developers do the "planning and building" of the project while testers do the "check" part of the
project.

2) What is the difference among white box, black box and gray box testing?
Black box Testing: The black box testing strategy is based on requirements and specifications. It
requires no knowledge of the internal paths, structure, or implementation of the software being
tested.
White box Testing: White box testing is based on the internal paths, code structure, and
implementation of the software being tested. It requires detailed programming skills.
Gray box Testing: In gray box testing we look inside the box being tested, but only to understand
how it has been implemented. After that we close the box and use black box testing.

3) What are the advantages of designing tests early in the life cycle?
Designing tests early in the life cycle prevents defects from being introduced into the main code.

4) What are the types of defects?


There are three types of defects: wrong, missing and extra.
Wrong: These defects occur when a requirement has been implemented incorrectly.
Missing: A specification was not implemented, or a requirement of the customer was not noted
properly.
Extra: A facility incorporated into the product that was not requested by the end customer. It is
always a variance from the specification, though it may be an attribute that was desired by the
customer. It is still considered a defect because of the variance from the user requirements.

5) What is exploratory testing?


Simultaneous test design and execution against an application is called exploratory testing. In this
testing, the tester uses their domain knowledge and testing experience to predict where and under
what conditions the system might behave unexpectedly.

6) When should exploratory testing be performed?


Exploratory testing is performed as a final check before the software is released. It is a
complementary activity to automated regression testing.
7) What are the advantages of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.

8) Tell me about the risk based testing.


Risk based testing is a testing strategy that is based on prioritizing tests by risks. It is based on a
detailed risk analysis approach which categorizes the risks by their priority. Highest priority risks
are resolved first.

9) What is acceptance testing?


Acceptance testing is done to enable a user/customer to determine whether to accept a software
product. It also validates whether the software follows a set of agreed acceptance criteria.

10) What is accessibility testing?


Accessibility testing is used to verify whether a software product is accessible to the people having
disabilities (deaf, blind, mentally disabled etc.).

11) What is Adhoc testing?


Adhoc testing is a testing phase where the tester tries to 'break' the system by randomly trying the
system's functionality.

12) What is Agile testing?


Agile testing is a testing practice that follows agile methodologies, i.e., a test-first design
paradigm.
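A test-first flow can be sketched with Python's xUnit-style framework, unittest: the test class below would be written first, fail, and then drive the implementation that follows. All names here are illustrative:

```python
import unittest

class FizzBuzzTest(unittest.TestCase):
    """In a test-first flow these tests exist before fizzbuzz does;
    they fail first, then shape the implementation below."""

    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

# Implementation written after (and shaped by) the tests above.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Running the module with the unittest runner (`python -m unittest`) executes the four tests.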

13) What is API (Application Programming Interface)?


Application Programming Interface is a formalized set of software calls and routines that can be
referenced by an application program in order to access supporting system or network services.

14) What do you mean by automated testing?


Testing using software tools which execute tests without manual intervention is known as
automated testing. Automated testing can be used for GUI, performance, API testing, etc.
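A minimal sketch of the idea: a table of cases is executed by a loop and results are reported with no manual steps. The function under test (a slugify helper) is hypothetical:

```python
def slugify(title):
    """Hypothetical function under test: build a URL slug from a title."""
    return title.strip().lower().replace(" ", "-")

# Test cases as data: (input, expected output).
CASES = [
    ("Hello World", "hello-world"),
    ("  Trimmed  ", "trimmed"),
    ("lower", "lower"),
]

def run_suite(cases):
    """Execute every case unattended; return (passed count, failures)."""
    failures = [(inp, exp, slugify(inp))
                for inp, exp in cases if slugify(inp) != exp]
    return len(cases) - len(failures), failures

passed, failures = run_suite(CASES)
```

Real automation tools add environment setup, scheduling and reporting around this same execute-and-compare core.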
15) What is Bottom up testing?
Bottom-up testing is an integration testing approach in which the lowest-level components are
tested first, and then the higher-level components. The process is repeated until the top-level
component has been tested.

16) What is Baseline Testing?


In baseline testing, a set of tests is run to capture performance information. Baseline testing
helps improve the performance and capabilities of the application by using the information
collected to make changes to the application. A baseline compares the present performance of the
application with its own previous performance.

17) What is Benchmark Testing?


Benchmark testing is the process of comparing application performance with respect to an
industry standard set by some other organization.
It is a standard test which specifies where our application stands with respect to others.

18) Which types of testing are important for web testing?
There are two types of testing which are very important for web testing:

 Performance testing
 Security testing

19) What is the difference between web application and desktop application in
the scenario of testing?
The difference between a web application and a desktop application is that a web application is
open to the world, with potentially many users accessing it simultaneously at various times, so
load testing and stress testing are important. Web applications are also prone to all forms of
attacks, most notably DDoS, so security testing is also very important for web applications.

20) What is the difference between verification and validation?


Difference between verification and validation:

 Verification is static testing; validation is dynamic testing.
 Verification occurs before validation; validation occurs after verification.
 Verification evaluates plans, documents, requirements and specifications; validation
evaluates the actual product.
 The inputs to verification are checklists, issue lists, walkthroughs and inspections; in
validation, the actual product is tested.
 The output of verification is a set of documents, plans, specifications and requirement
documents; the output of validation is the actual product.

21) What is the difference between Retesting and Regression Testing?


A list of differences between Retesting and Regression Testing:

 Retesting is done to verify that a previously found defect has been fixed and now works
correctly; regression testing is performed to check that the defect fix has not impacted other
functionality that was working fine before the change.
 Retesting is specific: it is performed on the bug that was fixed. Regression testing is not
specific to any defect fix; it is performed whenever any bug is fixed.
 Retesting is concerned with executing those test cases that failed earlier, whereas regression
testing is concerned with executing test cases that passed in earlier builds.
 Retesting has higher priority than regression testing.

22) What is the difference between preventative and reactive approaches to testing?
Preventative tests are designed earlier and reactive tests are designed after the software has been
produced.

23) What is the purpose of exit criteria?


The exit criteria is used to define the completion of the test level.

24) Why the decision table testing is used?


A decision table consists of inputs in one column with the outputs in the same column but below
the inputs.
Decision table testing is used for testing systems whose specification takes the form of rules or
cause-effect combinations. The remainder of the table explores combinations of inputs to define
the outputs produced.

25) What is alpha and beta testing?


These are the key differences between alpha and beta testing:

 Alpha testing is always done by developers at the software development site, and may also
be performed by an independent testing team; beta testing is always performed by customers
at their own site and not by an independent testing team.
 Alpha testing is not open to the market and public; beta testing is.
 Alpha testing is performed in a virtual environment; beta testing is performed in a real-time
environment.
 Alpha testing is used for software applications and projects; beta testing is used for software
products.
 Alpha testing involves both white box and black box testing; beta testing is only black box
testing.
 Alpha testing is not known by any other name; beta testing is also known as field testing.

26) What is Random/Monkey Testing?


Random testing is also known as monkey testing. In this testing, data is generated randomly,
often using a tool or some automated mechanism.
Random testing has some limitations:

 Most of the random tests are redundant and unrealistic.


 It needs more time to analyze results.
 It is not possible to recreate the test if you do not record what data was used for testing.
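The last limitation can be mitigated by recording the seed of the random generator, so any failing run can be replayed exactly. A toy sketch, with a hypothetical normalize function as the system under test:

```python
import random
import string

def normalize(text):
    """Hypothetical function under test: collapse runs of whitespace."""
    return " ".join(text.split())

def random_test(iterations=200, seed=42):
    """Feed random data to the function under test. The fixed seed
    makes every run reproducible, so a failure can be recreated."""
    rng = random.Random(seed)
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 30)))
        result = normalize(data)       # must not raise on any input
        assert "  " not in result      # property: no double spaces remain
    return iterations
```

Because the inputs are random, the check is a general property of the output rather than a fixed expected value.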

27) What is the negative and positive testing?


Negative Testing: When you provide invalid input and expect an error, it is known as negative
testing.
Positive Testing: When you provide valid input and expect the actions to be completed
according to the specification, it is known as positive testing.
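Both directions can be shown against a single hypothetical validator: the positive test feeds valid input and expects the specified result, while the negative tests feed invalid input and expect rejection:

```python
def parse_quantity(text):
    """Hypothetical validator: accepts a positive integer string,
    raises ValueError otherwise."""
    value = int(text)            # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Positive test: valid input, behaviour per the specification.
assert parse_quantity("5") == 5

# Negative tests: invalid input must be rejected with an error.
for bad in ("abc", "0", "-3"):
    try:
        parse_quantity(bad)
    except ValueError:
        pass                     # rejection is the expected behaviour
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```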

28) What is the benefit of test independence?


Test independence is very useful because it avoids author bias in defining effective tests.

29) What is boundary value analysis / testing?


In boundary value analysis/testing, we test at the exact boundaries rather than in the middle of
the input range. For example, if a bank application allows a withdrawal of a maximum of 25000
and a minimum of 100, boundary value testing checks the values at and just beyond these limits:
99, 100, 25000 and 25001.
Test cases at these boundaries are sufficient to cover the bank's conditions; additional test cases
in the middle of the range are duplicates which do not add any value to the testing. So by
applying proper boundary value fundamentals we can avoid redundant test cases that add no
value.
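The bank example above can be sketched as a handful of boundary test cases, assuming both limits are inclusive:

```python
def can_withdraw(amount):
    """Hypothetical rule under test: withdrawals of 100-25000 inclusive."""
    return 100 <= amount <= 25000

# Test at and just beyond each boundary; mid-range values add nothing.
boundary_cases = [
    (99,    False),  # just below the minimum
    (100,   True),   # exactly the minimum
    (25000, True),   # exactly the maximum
    (25001, False),  # just above the maximum
]

for amount, expected in boundary_cases:
    assert can_withdraw(amount) == expected
```

If the real specification made a limit exclusive, the same four values would immediately expose the difference.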

30) How would you test the login feature of a web application?
There are many ways to test the login feature of a web application:

 Sign in with a valid login, close the browser, reopen it, and see whether you are still logged
in.
 Sign in, then log out, and then go back to the login page to see if you are truly logged out.
 Log in, then go back to the same page: do you see the login screen again?
 Session management is important: focus on how logged-in users are tracked, whether via
cookies or web sessions.
 Sign in from one browser, then open another browser to see if you need to sign in again.
 Log in, change the password, log out, and then see if you can log in again with the old
password.
---------------------------------------------------------------------------------------------------------------

Types of Testing:-
A list of software testing types along with definitions. A must-read for any QA professional.
1. Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria and to enable the customer to determine whether or not to
accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product to the
people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is
conducted by persons having disabilities.
3. Active Testing: Type of testing consisting of introducing test data and analyzing the
execution results. It is usually conducted by the testing team.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto,
emphasizing testing from the perspective of customers who will utilize the system. It is
usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The
evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to
'break' the system by randomly trying the system's functionality. It is performed by the
testing team.
7. Alpha Testing: Type of testing a software product or system conducted at the developer's
site. Usually it is performed by the end users.
8. Assertion Testing: Type of testing consisting of verifying that conditions conform to the
product requirements. It is performed by the testing team.
9. API Testing: Testing technique similar to Unit Testing in that it targets the code level. API
testing differs from unit testing in that it is typically a QA task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible discrete
combinations of input parameters. It is performed by the testing teams.
11. Automated Testing: Testing technique that uses Automation Testing tools to control
the environment set-up, test execution and results reporting. It is performed by a computer
and is used inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity
measure of a procedural design and use this as a guide for defining a basic set of execution
paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of
the developed software with older versions of the test environment. It is performed by
testing team.
14. Beta Testing: Final testing before releasing application for commercial purpose. It is
typically done by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and
data designed to evaluate the performance of computer hardware and software in a given
configuration. It is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual
program modules only when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for
portability across system platforms and environments, usually for conformation to an ABI
specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to
include representatives of boundary values. It is performed by the QA testing teams.
19. Bottom Up Integration Testing: In bottom-up Integration Testing, module at the
lowest level are developed first and other modules which go towards the 'main' program are
integrated and tested one at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code
are tested at least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a product but
does not test features in detail. It is performed by testing teams.
22. Black box Testing: A method of software testing that verifies the functionality of an
application without having specific knowledge of the application's code/internal structure.
Tests are based on requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as
xUnit) that allow the execution of unit tests to determine whether various sections of the
code are acting as expected under various circumstances. It is performed by the development
teams.
24. Compatibility Testing: Testing technique that validates how well a software
performs in a particular hardware/software/operating system/network environment. It is
performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product strengths and
weaknesses with previous versions or other similar products. Can be performed by tester,
developers, product managers or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level
of integration - testing is done in the context of the application instead of just directly testing
a specific method. Can be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines minimal and optimal
configuration of hardware and software, and the effect of adding or modifying resources
such as memory, disk drives and CPU. Usually it is performed by the Performance Testing
engineers.
28. Condition Coverage Testing: Type of software testing where each condition is
exercised both as true and as false, at least once each. It is typically done by the
automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was
developed in accordance with standards, procedures and guidelines. It is usually performed
by external companies which offer "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of
accessing the same application code, module or database records. It is usually done by
performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to
the specification on which it is based. It is usually performed by testing teams.
32. Context Driven Testing: An Agile Testing technique that advocates continuous and
creative evaluation of testing opportunities in light of the potential information revealed and
the value of that information to the organization at a specific moment. It is usually
performed by Agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from
existing systems for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each decision is
evaluated as both true and false. It is typically done by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the
specimen's failure, in order to understand a specimen's structural performance or material
behavior under different loads. It is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for
pre-existing software, initial states and configuration in order to maintain proper
functionality. It is usually performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the
dynamic behavior of code. It is typically performed by testing teams.
38. Domain Testing: White box testing technique which checks that the program accepts
only valid input. It is usually done by software development teams and occasionally by
automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the
system to properly process erroneous transactions. It is usually performed by the testing
teams.
40. End-to-end Testing: Similar to system testing, involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications,
or systems if appropriate. It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other
problems that may occur with prolonged execution. It is usually performed by performance
engineers.
42. Exploratory Testing: Black box testing technique performed without planning and
documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input
data of a software unit into partitions of data from which test cases can be derived. it is
usually performed by the QA teams.
44. Fault injection Testing: Element of a comprehensive test strategy that enables the
tester to concentrate on the manner in which the application under test is able to handle
exceptions. It is performed by QA teams.
45. Formal verification Testing: The act of proving or disproving the correctness of
intended algorithms underlying a system with respect to a certain formal specification or
property, using formal methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the
specifications of the software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or
random data to the inputs of a program - a special area of mutation testing. Fuzz testing is
performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavily testing of one
particular module. It is performed by quality assurance teams, usually when running full
testing.
49. Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but using some
knowledge of its internal workings. It can be performed by either development or testing
teams.
50. Glass box Testing: Similar to white box testing, based on knowledge of the internal
logic of an application's code. It is performed by development teams.
51. GUI software Testing: The process of testing a product that uses a graphical user
interface, to ensure it meets its written specifications. This is normally done by the testing
teams.
52. Globalization Testing: Testing method that checks proper functionality of the
product with any of the culture/locale settings using every type of international input
possible. It is performed by the testing team.
53. Hybrid Integration Testing: Testing technique which combines top-down and
bottom-up integration techniques in order to leverage the benefits of both kinds of testing.
It is usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual software
modules are combined and tested as a group. It is usually conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or components
pass data and control correctly to one another. It is usually performed by both testing and
development teams.
56. Install/uninstall Testing: Quality assurance work that focuses on what customers
will need to do to install and set up the new software successfully. It may involve full, partial
or upgrades install/uninstall processes and is typically done by the software testing engineer
in conjunction with the configuration manager.
57. Internationalization Testing: The process which ensures that product's functionality
is not broken and all the messages are properly externalized when used in different
languages and locale. It is usually performed by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the application to
ensure that interconnection between application functions correctly. It is usually done by the
testing teams.
59. Keyword-driven Testing: Also known as table-driven testing or action-word testing,
is a software testing methodology for automated testing that separates the test creation
process into two distinct stages: a Planning Stage and an Implementation Stage. It can be
used by either manual or automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device and
measures its response. It is usually conducted by the performance engineers.
61. Localization Testing: Part of software testing process focused on adapting a
globalized application to a particular culture/locale. It is normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops. It is
performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are designed and
reviewed by the team before executing it. It is done by manual testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the functions
performed by the people while preparing the data and using these data from automated
system. it is conducted by testing teams.
65. Model-Based Testing: The application of Model based design for designing and
executing the necessary artifacts to perform software testing. It is usually performed by
testing teams.
66. Mutation Testing: Method of software testing which involves modifying programs'
source code or byte code in small ways in order to test sections of the code that are seldom
or never accessed during normal tests execution. It is normally conducted by testers.
67. Modularity-driven Testing: Software testing technique which requires the creation
of small, independent scripts that represent modules, sections, and functions of the
application under test. It is usually performed by the testing team.
68. Non-functional Testing: Testing technique which focuses on testing of a software
application for its non-functional requirements. Can be conducted by the performance
engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail" - testing method where the tests' aim
is showing that a component or system does not work. It is performed by manual or
automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or
component in its operational environment. Usually it is performed by testing teams.
71. Orthogonal array Testing: Systematic, statistical way of testing which can be
applied in user interface testing, system testing, Regression Testing, configuration testing
and performance testing. It is performed by the testing team.
72. Pair Testing: Software development technique in which two team members work
together at one keyboard to test the software application. One does the testing and the other
analyzes or reviews the testing. This can be done between one Tester and Developer or
Business Analyst or between two testers with both participants taking turns at driving the
keyboard.
73. Passive Testing: Testing technique consisting of monitoring the results of a running
system without introducing any special test data. It is performed by the testing team.
74. Parallel Testing: Testing technique which has the purpose to ensure that a new
application which has replaced its older version has been installed and is running correctly.
It is conducted by the testing team.
75. Path Testing: Typical white box testing which has the goal to satisfy coverage
criteria for each logical path through the program. It is usually performed by the
development team.
76. Penetration Testing: Testing method which evaluates the security of a computer
system or network by simulating an attack from a malicious source. Usually they are
conducted by specialized penetration testing companies.
77. Performance Testing: Testing conducted to evaluate the compliance of a system or
component with specified performance requirements. It is usually conducted by the
performance engineer.
78. Qualification Testing: Testing against the specifications of the previous release,
usually conducted by the developer for the consumer, to demonstrate that the software meets
its specified requirements.
79. Ramp Testing: Type of testing consisting of raising an input signal continuously
until the system breaks down. It may be conducted by the testing team or the performance
engineer.
80. Regression Testing: Type of software testing that seeks to uncover software errors
after changes to the program (e.g. bug fixes or new functionality) have been made, by
retesting the program. It is performed by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system recovers
from crashes, hardware failures, or other catastrophic problems. It is performed by the
testing teams.
82. Requirements Testing: Testing technique which validates that the requirements are
correct, complete, unambiguous, and logically consistent and allows designing a necessary
and sufficient set of test cases from those requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system protects data
and maintains functionality as intended. It can be performed by testing teams or by
specialized security-testing companies.
84. Sanity Testing: Testing technique which determines if a new software version is
performing well enough to accept it for a major testing effort. It is performed by the testing
teams.
85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story
to help a person think through a complex problem or system for a testing environment. It is
performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests which tests a software
application for measuring its capability to scale up - be it the user load supported, the
number of transactions, the data volume etc. It is conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each
statement in a program is executed at least once during program testing. It is usually
performed by the development team.
88. Static Testing: A form of software testing where the software isn't actually executed;
it checks mainly the sanity of the code, algorithm, or documents. It is used by the
developer who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an application
will crash. It is usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic components of a
software system to ensure that they work properly. Typically, smoke testing is conducted by
the testing team, immediately after a software build is made.
91. Storage Testing: Testing type that verifies the program under test stores data files in
the correct directories and that it reserves sufficient space to prevent unexpected termination
resulting from lack of space. It is usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component at or
beyond the limits of its specified requirements. It is usually conducted by the performance
engineer.
93. Structural Testing: White box testing technique which takes into account the
internal structure of a system or component and ensures that each program statement
performs its intended function. It is usually performed by the software developers.
94. System Testing: The process of testing an integrated hardware and software system
to verify that the system meets its specified requirements. It is conducted by the testing
teams in both development and target environment.
95. System integration Testing: Testing process that exercises a software system's
coexistence with others. It is usually performed by the testing teams.
96. Top Down Integration Testing: Testing technique that involves starting at the top
of the system hierarchy at the user interface and using stubs to test from the top down until the
entire system has been implemented. It is conducted by the testing teams.
97. Thread Testing: A variation of top-down testing technique where the progressive
integration of components follows the implementation of subsets of the requirements. It is
usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies if assets created with older versions
can be used properly and that user's learning is not challenged. It is performed by the testing
teams.
99. Unit Testing: Software verification and validation method in which a programmer
tests if individual units of source code are fit for use. It is usually conducted by the
development team.
100. User Interface Testing: Type of testing which is performed to check how user-
friendly the application is. It is performed by testing teams.
101. Usability Testing: Testing technique which verifies the ease with which a user can
learn to operate, prepare inputs for, and interpret outputs of a system or component. It is
usually performed by end users.
102. Volume Testing: Testing which confirms that any values that may become large over
time (such as accumulated counts, logs, and data files), can be accommodated by the
program and will not cause the program to stop working or degrade its operation in any
manner. It is usually conducted by the performance engineer.
103. Vulnerability Testing: Type of testing which regards application security and has the
purpose to prevent problems which may affect the application integrity and stability. It can
be performed by the internal testing teams or outsourced to specialized companies.
104. White box Testing: Testing technique based on knowledge of the internal logic of an
application's code and includes tests like coverage of code statements, branches, paths,
conditions. It is performed by software developers.
105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific
workflows which are expected to be utilized by the end-user. It is usually conducted by
testing teams.
------------------------------------------------------------------------------------------------------------------------

Database Interview Questions


Ques.1. What is database testing?
Ans. Database testing is checking the integrity of the actual data in the
front end with the data present in the database. It involves validating
the data in the database, checking that there are no orphan records
(records with a foreign key to a parent record that has been deleted),
checking that no junk records are present, updating records in the database
and verifying the values in the front end.

Ques.2. What is RDBMS?


Ans. An RDBMS or Relational Database Management System is a type of DBMS having
relationships between the tables using indexes and different constraints like primary key, foreign
key etc. The use of indexes and constraints helps in faster retrieval and better management of data
within the databases.

Ques.3. What is the difference between DBMS and RDBMS?


Ans. The primary difference between DBMS and RDBMS is, in RDBMS we have relations
between the tables of the database. Whereas in DBMS there is no relation between the tables(data
may even be stored in files).
RDBMS has primary keys and data is stored in tables. DBMS has no concept of primary keys with
data stored in navigational or hierarchical form.
RDBMS defines integrity constraints in order to follow ACID properties. While DBMS doesn't
follow ACID properties.
Ques.4. What is a database?
Ans. A database is a structured collection of data for faster and better access, storage and
manipulation of data.
A database can also be defined as collection of tables, schema, views and other database objects.

Ques.5. What is a table?


Ans. Tables are the database object that are used for storing related records in the form of rows and
columns.

Ques.6. What is field in a table?


Ans. A field is an entity used for storing a particular type of data within a table like numbers,
characters, dates etc.

Ques.7. What is a tuple, record or row in a table?


Ans. A tuple or record is an ordered set of related data item in a table.

Ques.8. What is SQL?


Ans. SQL stands for Structured Query Language. It is a language used for creating, storing,
fetching and updating data and database objects in an RDBMS.

Ques.9. What are the different types of SQL commands?


Ans. SQL commands are the set of commands used to communicate and manage the data present in
the database. The different type of SQL commands are-
1. DDL - Data Definition Language
2. DML - Data Manipulation Language
3. DCL - Data Control Language
4. TCL - Transactional Control Language

Ques.10. Explain DDL commands. What are the different DDL commands in SQL?
Ans. DDL refers to Data Definition Language, it is used to define or alter the structure of the
database. The different DDL commands are-

 CREATE - Used to create table in the database


 DROP - Drops the table from the database
 ALTER - Alters the structure of the database
 TRUNCATE - Deletes all the records from the table but not the table structure
 RENAME - Renames a database object
Ques.11. Explain DML commands. What are the different DML commands in SQL?
Ans. DML refers to Data Manipulation Language, it is used for managing data present in the
database. Some of the DML commands are-select, insert, update, delete etc.

Ques.12. Explain DCL commands. What are the different DCL commands in SQL?
Ans. DCL refers to Data Control Language, these commands are used to create roles, grant
permission and control access to the database objects. The three DCL commands are-

 GRANT - Grants permission to a database user


 REVOKE - Removes access privileges from a user granted with the GRANT command
 DENY - Explicitly prevents a user from receiving a particular permission (e.g. preventing a
particular user belonging to a group from receiving the access rights of the group)

Ques.13. Explain TCL commands. What are the different TCL commands in SQL?
Ans. TCL refers to Transaction Control Language, it is used to manage the changes made by DML
statements. These are used to process a group of SQL statements comprising a logical unit. The
three TCL commands are-

 COMMIT - Commits and writes the changes to the database
 SAVEPOINT - Savepoints are the breakpoints that divide the transaction into smaller
logical units which can later be rolled back.
 ROLLBACK - Rollbacks are used to restore the database to the state of the last commit.

Ques.14. What are SQL constraints?


Ans. SQL constraints are the set of rules that impose restrictions on the insertion, deletion or
updating of data in the databases. In SQL we have both column level as well as table level
constraints, which are applied at columns and tables respectively. Some of the constraints in SQL are -
Primary Key, Foreign Key, Unique Key, Not NULL, DEFAULT, CHECK and Index constraints.

Ques.15. What is a Unique constraint?


Ans. A unique constraint is used to ensure that the field/column will have only unique value(no
duplication).

Ques.16. What is a Primary Key?


Ans. A primary key is a column or a combination of columns which uniquely identifies a record in
the database. A primary key can only have unique and not NULL values and there can be only one
primary key in a table.

Ques.17. What is the difference between unique key and primary key?
Ans. A unique key allows a null value (although only one) but a primary key doesn't allow null values.
A table can have more than one unique key column while there can be only one primary key. A
unique key column creates a non-clustered index whereas a primary key creates a clustered index on
the column.

Ques.18. What is a composite key?


Ans. A composite key is a primary key with multiple columns as in case of some tables a single
field might not guarantee unique and not null values, so a combination of multiple fields is taken as
primary key.

Ques.19. What is a NULL value?


Ans. A NULL value in SQL is an unknown or blank value. Since NULL is an unknown value, a
NULL value cannot be compared with another NULL value. Hence we cannot use the '=' operator in
a where condition with NULL. For this, we have the IS NULL clause that checks if the value in a
field is NULL or not.
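As a quick illustration, this behaviour can be seen with Python's built-in sqlite3 module; the Employee table and its data below are hypothetical:

```python
import sqlite3

# NULL cannot be matched with '='; IS NULL must be used instead. The
# Employee table and its data are made up for this sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (Name TEXT, Phone TEXT)")
con.executemany("INSERT INTO Employee VALUES (?, ?)",
                [("Amit", "12345"), ("Bina", None)])

rows_eq = con.execute(
    "SELECT Name FROM Employee WHERE Phone = NULL").fetchall()
rows_is = con.execute(
    "SELECT Name FROM Employee WHERE Phone IS NULL").fetchall()
print(rows_eq)  # [] - '= NULL' never matches anything
print(rows_is)  # [('Bina',)]
```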

Ques.20. What is a Not Null constraint?


Ans. A Not NULL constraint is used for ensuring that the value in the field cannot be NULL.

Ques.21. What is a Foreign Key?


Ans. A foreign key is used for enforcing referential integrity, in which a field marked as foreign key
in one table is linked with the primary key of another table. With this referential integrity we can have
only the data in the foreign key column which matches the data in the primary key of the other table.

Ques.22. What is a Check constraint?


Ans. A check constraint is used to limit the value entered in a field. E.g. we can ensure that the field
'Salary' can only have a value greater than 1000 using a check constraint-
CREATE TABLE EMP_SALARY(EmpID int NOT NULL, NAME VARCHAR (30) NOT NULL, Salary
INT CHECK (Salary > 1000), PRIMARY KEY (EmpID));
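A minimal sketch of such a check constraint in action, using Python's built-in sqlite3 module (the data is illustrative):

```python
import sqlite3

# A CHECK constraint rejects inserts that violate the condition.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE EMP_SALARY (
    EmpID  INTEGER PRIMARY KEY,
    Name   TEXT NOT NULL,
    Salary INTEGER CHECK (Salary > 1000))""")

con.execute("INSERT INTO EMP_SALARY VALUES (1, 'Asha', 5000)")   # accepted
try:
    con.execute("INSERT INTO EMP_SALARY VALUES (2, 'Ravi', 500)")  # violates CHECK
except sqlite3.IntegrityError as err:
    print("rejected:", err)

count = con.execute("SELECT COUNT(*) FROM EMP_SALARY").fetchone()[0]
print(count)  # 1 - only the valid row was stored
```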

Ques.23. What is a Default constraint?


Ans. A Default constraint is used for providing a default value to a column when no value is
supplied at the time of insertion of record in the database.

Ques.24. What is a clustered index?


Ans. Clustered indexes physically sort the rows in the table based on the clustering key(by default
primary key). Clustered index helps in fast retrieval of data from the databases. There can be only
one clustered index in a table.

Ques.25. What is a non-clustered index?


Ans. Non-clustered indexes have a jump table containing key-values pointing to the rows in the table
corresponding to the keys. There can be multiple non-clustered indexes in a table.

Ques.26. What is the difference between delete, truncate and drop command?
Ans. The difference between the Delete, Truncate and Drop commands is -
 Delete command is a DML command, it removes rows from table based on the condition
specified in the where clause, being a DML statement we can rollback changes made by
delete command.
 Truncate is a DDL command, it removes all the rows from table and also frees the space
held unlike delete command. It takes lock on the table while delete command takes lock on
rows of table.
 Drop is a DDL command, it removes the complete data along with the table structure(unlike
truncate command that removes only the rows).

Ques.27. What are the different types of joins in SQL?


Ans. Joins are used to combine records from multiple tables. The different types of joins in SQL
are-
1. Inner Join - To fetch rows from two tables having matching data in the specified columns of
both the tables.
SELECT * FROM TABLE1 INNER JOIN TABLE2 ON TABLE1.columnA = TABLE2.columnA;

2. Left Join - To fetch all rows from left table and matching rows of the right table
SELECT * FROM TABLE1 LEFT JOIN TABLE2 ON TABLE1.columnA = TABLE2.columnA;

3. Right Join - To fetch all rows from right table and matching rows of the left table
SELECT * FROM TABLE1 RIGHT JOIN TABLE2 ON TABLE1.columnA = TABLE2.columnA;

4. Full Outer Join - To fetch all rows of left table and all rows of right table
SELECT * FROM TABLE1 FULL OUTER JOIN TABLE2 ON TABLE1.columnA =
TABLE2.columnA;
5. Self Join - Joining a table to itself, for referencing its own data
SELECT * FROM TABLE1 T1, TABLE1 T2 WHERE T1.columnA = T2.columnB;
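The inner and left joins above can be sketched with Python's built-in sqlite3 module; the Dept/Emp tables and their data are hypothetical:

```python
import sqlite3

# Illustrative demo of INNER vs LEFT JOIN on two tiny made-up tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Dept (DeptID INTEGER, DeptName TEXT)")
con.execute("CREATE TABLE Emp (Name TEXT, DeptID INTEGER)")
con.executemany("INSERT INTO Dept VALUES (?, ?)", [(1, "HR"), (2, "IT")])
con.executemany("INSERT INTO Emp VALUES (?, ?)",
                [("Asha", 1), ("Ravi", 2), ("Neha", None)])

# INNER JOIN keeps only rows with matching DeptID in both tables.
inner = sorted(con.execute(
    "SELECT e.Name, d.DeptName FROM Emp e "
    "INNER JOIN Dept d ON e.DeptID = d.DeptID").fetchall())
# LEFT JOIN keeps every Emp row; unmatched rows get NULL (None) for Dept.
left = sorted(con.execute(
    "SELECT e.Name, d.DeptName FROM Emp e "
    "LEFT JOIN Dept d ON e.DeptID = d.DeptID").fetchall())
print(inner)  # [('Asha', 'HR'), ('Ravi', 'IT')]
print(left)   # [('Asha', 'HR'), ('Neha', None), ('Ravi', 'IT')]
```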

Ques.28. What is the difference between cross join and full outer join?
Ans. A cross join returns the cartesian product of the two tables, so there is no condition or on clause, as
each row of tableA is joined with each row of tableB. Whereas a full outer join joins the two
tables on the basis of the condition specified in the on clause, and for the records not satisfying the
condition a null value is placed in the join result.

Ques.29. What are difference between having and where clause?


Ans. A 'where' clause is used to fetch data from the database that meets a particular criterion (specified
after the where clause). Whereas a 'having' clause is used along with 'group by' to fetch data that
meets a particular criterion specified by an aggregate function. For example-
Emp_Project
Employee Project
A P1
B P2
C P3
B P3
In the above table if we want to fetch Employees working in project P2, we will use the 'where' clause-
SELECT Employee FROM Emp_Project WHERE Project = 'P2';

Now if we want to fetch Employees who are working on more than one project, we will first
have to group the Employee column along with the count of projects, and then the 'having' clause can
be used to fetch the relevant records-
SELECT Employee FROM Emp_Project GROUP BY Employee HAVING COUNT(Project) > 1;
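The Emp_Project example can be run against SQLite via Python's built-in sqlite3 module; a minimal sketch with the same data:

```python
import sqlite3

# WHERE filters rows before grouping; HAVING filters groups after aggregation.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Emp_Project (Employee TEXT, Project TEXT)")
con.executemany("INSERT INTO Emp_Project VALUES (?, ?)",
                [("A", "P1"), ("B", "P2"), ("C", "P3"), ("B", "P3")])

on_p2 = con.execute(
    "SELECT Employee FROM Emp_Project WHERE Project = 'P2'").fetchall()
multi = con.execute(
    "SELECT Employee FROM Emp_Project "
    "GROUP BY Employee HAVING COUNT(Project) > 1").fetchall()
print(on_p2)  # [('B',)]
print(multi)  # [('B',)]
```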

Ques.30. What is the difference between Union and Union All command?
Ans. The fundamental difference between Union and Union All command is, Union is by default
distinct i.e. it combines the distinct result set of two or more select statements. Whereas, Union All
combines all the rows including duplicates in the result set of different select statements.
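A small sketch of this difference with Python's built-in sqlite3 module; the two single-column tables are hypothetical:

```python
import sqlite3

# UNION removes duplicate rows; UNION ALL keeps them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T1 (City TEXT)")
con.execute("CREATE TABLE T2 (City TEXT)")
con.executemany("INSERT INTO T1 VALUES (?)", [("Delhi",), ("Pune",)])
con.executemany("INSERT INTO T2 VALUES (?)", [("Pune",), ("Agra",)])

union = con.execute(
    "SELECT City FROM T1 UNION SELECT City FROM T2").fetchall()
union_all = con.execute(
    "SELECT City FROM T1 UNION ALL SELECT City FROM T2").fetchall()
print(len(union))      # 3 - 'Pune' appears only once
print(len(union_all))  # 4 - 'Pune' appears twice
```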

Ques.31. Define the select into statement.


Ans. A select into statement is used to directly select data from one table and insert it into another; the
new table gets created with the same column names and types as the old table-
SELECT * INTO newTable FROM oldTable;

Ques.32. What is a View in SQL?


Ans. A view is virtual table, it is a named set of SQL statements which can be later referenced and
used as a table.
CREATE VIEW VIEW_NAME AS
SELECT COLUMN1, COLUMN2
FROM TABLE_NAME WHERE CONDITION;

Ques.33. Can we use 'where' clause with 'groupby'?


Ans. Yes, we can use a 'where' clause with 'group by'. The rows that don't meet the where condition
are removed first and then the grouping is done based on the group by column.
SELECT Employee, COUNT(Project)
FROM Emp_Project
WHERE Employee != 'A'
GROUP BY Employee;

Ques.34. What is Database Normalisation?


Ans. Database normalisation is the process of organisation of data in order to reduce the redundancy
and anomalies in the database. We have different normalisation forms in SQL like - First Normal
Form, Second Normal Form, Third Normal Form and BCNF.

Ques.35. Explain First Normal Form(1NF).


Ans. According to First Normal Form a column cannot have multiple values, each value in the
columns must be atomic.

Ques.36. Explain Second Normal Form(2NF).


Ans. For a table to be considered in Second Normal Form, it must follow 1NF and no non-prime
attribute should be dependent on a proper subset of any candidate key (i.e. no partial dependency).

Ques.37. Explain Third Normal Form(3NF).


Ans. For a table to be in Third Normal Form, it must follow 2NF and no non-prime attribute should be
transitively dependent on the primary key of the table.
For each functional dependency X -> Y either-
X should be a super key or Y should be a prime attribute (part of one of the candidate keys) of the
table
Ques.38. Explain Boyce and Codd Normal Form(BCNF).
Ans. BCNF is the advanced or stricter version of 3NF.
For each functional dependency X -> Y-
X should be the super key

Ques.39. What are transactions in SQL?


Ans. Transaction is a set of operations performed in a logical sequence. It is executed as a whole, if
any statement in the transaction fails, the whole transaction is marked as failed and not committed
to the database.

Ques.40. What are ACID properties?


Ans. ACID properties refers to the four properties of transactions in SQL-
1. Atomicity - All the operations in the transaction are performed as a whole or not performed
at all.
2. Consistency - The state of the database changes only on a successfully committed transaction.
3. Isolation - Even with concurrent execution of multiple transactions, the final state of the
DB would be the same as if the transactions were executed sequentially. In other words each
transaction is isolated from the others.
4. Durability - Even in the case of a crash or power loss the state of a committed transaction
remains persistent.
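Atomicity and rollback can be sketched with Python's built-in sqlite3 module; the Account table and the failing statement are hypothetical:

```python
import sqlite3

# When one statement in a transaction fails, rolling back undoes the
# earlier statements in that transaction too.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Account (ID INTEGER PRIMARY KEY, Balance INTEGER)")
con.execute("INSERT INTO Account VALUES (1, 100)")
con.commit()

try:
    con.execute("UPDATE Account SET Balance = Balance - 40 WHERE ID = 1")
    con.execute("INSERT INTO Account VALUES (1, 0)")  # duplicate key -> fails
    con.commit()
except sqlite3.IntegrityError:
    con.rollback()  # the UPDATE above is undone as well

balance = con.execute("SELECT Balance FROM Account WHERE ID = 1").fetchone()[0]
print(balance)  # 100 - the partial update did not persist
```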

Ques.41. What are locks in SQL?


Ans. Locks in SQL are used for maintaining database integrity in case of concurrent access to the
same piece of data.

Ques.42. What are the different types of locks in database?


Ans. The different types of locks in database are-
1. Shared locks - Allows data to be read-only(Select operations), prevents the data to be
updated when in shared lock.
2. Update locks - Applied to resources that can be updated. There can be only one update lock
on a data at a time.
3. Exclusive locks - Used to lock data being modified(INSERT, UPDATE, or DELETE) by one
transaction thus ensuring that multiple updates cannot be made to the same resource at the
same time.
4. Intent locks - A notification mechanism using which a transaction conveys that it intends to
acquire a lock on data.
5. Schema locks- Used for operations when schema or structure of the database is required to
be updated.
6. Bulk Update locks - Used in case of bulk operations when the TABLOCK hint is used.

Ques.43. What are aggregate functions in SQL?


Ans. Aggregate functions are the SQL functions which return a single value calculated from
multiple values of columns. Some of the aggregate functions in SQL are-

 Count() - Returns the count of the number of rows returned by the SQL expression
 Max() - Returns the max value out of the total values
 Min() - Returns the min value out of the total values
 Avg() - Returns the average of the total values
 Sum() - Returns the sum of the values returned by the SQL expression
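The aggregate functions above can be demonstrated with Python's built-in sqlite3 module; the Sales table is hypothetical:

```python
import sqlite3

# COUNT, MAX, MIN, AVG and SUM each collapse a column into a single value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (Amount INTEGER)")
con.executemany("INSERT INTO Sales VALUES (?)", [(10,), (20,), (30,)])

row = con.execute(
    "SELECT COUNT(*), MAX(Amount), MIN(Amount), AVG(Amount), SUM(Amount) "
    "FROM Sales").fetchone()
print(row)  # (3, 30, 10, 20.0, 60)
```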

Ques.44. What are scalar functions in SQL?


Ans. Scalar functions are the functions that return a single value by processing a single value in
SQL. Some of the widely used SQL scalar functions are-

 UCASE() - Used to convert a string to upper case


 LCASE() - Used to convert a string to lower case
 ROUND() - Used to round a number to the decimal places specified
 NOW() - Used to fetch current system date and time
 LEN() - Used to find length of a string
 SUBSTRING() or MID() - MID and SUBSTRING are synonyms in SQL. They are used to
extract a substring from a string by specifying the start position and length. Syntax -
SUBSTRING(ColumnName, startIndex, length).
 LOCATE() - Used to find the index of a character in a string. Syntax -
LOCATE(character, ColumnName)
 LTRIM() - Used to trim spaces from left
 RTRIM() - Used to trim spaces from right

Ques.45. What is a coalesce function?


Ans. The coalesce function is used to return the first not NULL value out of the multiple values or
expressions passed to the coalesce function as parameters. Example-
COALESCE(NULL, NULL, 5, 'ArtOfTesting') will return the value 5.
COALESCE(NULL, NULL, NULL) will return NULL value as no not NULL value is encountered
in the parameters list.
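The two COALESCE examples from the answer above can be evaluated directly in SQLite through Python's built-in sqlite3 module:

```python
import sqlite3

# COALESCE returns the first non-NULL argument it is given.
con = sqlite3.connect(":memory:")
first = con.execute(
    "SELECT COALESCE(NULL, NULL, 5, 'ArtOfTesting')").fetchone()[0]
second = con.execute("SELECT COALESCE(NULL, NULL, NULL)").fetchone()[0]
print(first)   # 5
print(second)  # None (SQL NULL maps to Python None)
```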

Ques.46. What are cursors in SQL?


Ans. Cursors are objects in SQL that are used to traverse the result set of a SQL query one by one.
Ques.47. What are stored procedures? Explain their advantages.
Ans. Stored procedures are SQL procedures(bunch of SQL statements) that are stored in the
database and can be called by other procedures, triggers and other applications.
CREATE PROCEDURE
procedureName
AS
Begin
Set of SQL statements
End

The advantages of stored procedure are-


1. Stored procedures improve performance as the procedures are pre-compiled as well as
cached.
2. Make queries easily maintainable and reusable, as any change is required to be made at a single
location.
3. Reduce network usage and traffic.
4. Improve security as stored procedures restrict direct access to the database.

Ques.48. What are triggers in SQL?


Ans. Triggers are special type of stored procedures that get executed when a specified event occurs.
Syntax-
CREATE TRIGGER
triggerName
triggerTime{Before or After}
triggerEvent{Insert, Update or Delete}
ON tableName
FOR EACH ROW
triggerBody
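A hypothetical AFTER INSERT trigger, runnable with Python's built-in sqlite3 module. Note that SQLite's trigger syntax differs slightly from the generic form above: the body is wrapped in BEGIN ... END:

```python
import sqlite3

# The trigger writes an audit row whenever an Employee row is inserted.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (ID INTEGER, Name TEXT);
CREATE TABLE AuditLog (Message TEXT);
CREATE TRIGGER trg_emp_insert AFTER INSERT ON Employee
FOR EACH ROW
BEGIN
    INSERT INTO AuditLog VALUES ('added ' || NEW.Name);
END;
""")
con.execute("INSERT INTO Employee VALUES (1, 'Asha')")
log = con.execute("SELECT Message FROM AuditLog").fetchall()
print(log)  # [('added Asha',)]
```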

Ques.49. What are orphan records?


Ans. Orphan records are the records having foreign key to a parent record which doesn't exist or got
deleted.

Ques.50. How can we remove orphan records from a table?


Ans. In order to remove orphan records from the database we need to create a left join from the child
table to the parent table and then remove the rows from the child table where the parent id IS NULL.
DELETE CT
FROM ChildTable CT
LEFT JOIN ParentTable PT
ON CT.ParentID = PT.ID
WHERE PT.ID IS NULL

*Remember: Delete with joins requires name/alias before from clause in order to specify the table
of which data is to be deleted.
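SQLite's DELETE statement does not accept a JOIN clause, so an equivalent sketch there uses a NOT IN subquery instead (the tables and data below are hypothetical), runnable via Python's built-in sqlite3 module:

```python
import sqlite3

# Remove child rows whose parent no longer exists (orphan records).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Parent (ID INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE Child (ID INTEGER, ParentID INTEGER)")
con.execute("INSERT INTO Parent VALUES (1)")
con.executemany("INSERT INTO Child VALUES (?, ?)", [(10, 1), (11, 2)])

# Child 11 references Parent 2, which does not exist -> orphan record.
con.execute("DELETE FROM Child WHERE ParentID NOT IN (SELECT ID FROM Parent)")
remaining = con.execute("SELECT ID FROM Child").fetchall()
print(remaining)  # [(10,)]
```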
General SQL Interview Questions:
1. What is a Database?
A database is a collection of information in organized form for faster and better access, storage and
manipulation. It can also be defined as collection of tables, schema, views and other database
objects.
2. What is Database Testing?
It is also known as back-end testing or data testing.
Database testing involves in verifying the integrity of data in the front end with the data present in
the back end. It validates the schema, database tables, columns, indexes, stored
procedures, triggers, data duplication, orphan records, junk records. It involves in updating records
in database and verifying the same in the front end.
3. What is the difference between GUI Testing and Database Testing?

 GUI Testing is AKA User Interface Testing or Front end testing


Database Testing is AKA back-end testing or data testing.
 GUI Testing deals with all the testable items that are open to the user to interaction such as
Menus, Forms etc.
Database Testing deals with all the testable items that are generally hidden from the user.
 The tester who is performing GUI Testing doesn’t need to know Structured Query Language
The tester who is performing Database Testing needs to know Structured Query Language
 GUI Testing includes in validating the text boxes, check boxes, buttons, drop-downs, forms
etc., majorly the look and feel of the overall application
Database Testing involves in verifying the integrity of data in the front end with the data
present in the back end. It validates the schema, database tables, columns, indexes, stored
procedures, triggers, data duplication, orphan records, junk records. It involves in updating
records in database and verifying the same in the front end.
4. What is a Table in a Database?
A table is a database object used to store records in the form of rows and columns that hold
data.
5. What is a Field in a Database?
A field in a database table is a space allocated to store a particular data item within a table.
6. What is a Record in a Database?
A record (also called a row of data) is an ordered set of related data in a table.
7. What is a column in a Table?
A column is a vertical entity in a table that contains all information associated with a specific field
in a table.
8. What is DBMS?
Database Management System is a collection of programs that enables users to store, retrieve, update
and delete information from a database.
9. What is RDBMS?
RDBMS stands for Relational Database Management System. RDBMS is a database management
system (DBMS) that is based on the relational model. Data from relational database can be accessed
using Structured Query Language (SQL)
10. What are the popular Database Management Systems in the IT Industry?
Oracle, MySQL, Microsoft SQL Server, PostgreSQL, SyBase, MongoDB, DB2, and Microsoft
Access etc.,
11. What is SQL?
SQL Overview: SQL stands for Structured Query Language. It is an American National Standard
Institute (ANSI) standard. It is a standard language for accessing and manipulating databases. Using
SQL, some of the action we could do are to create databases, tables, stored procedures (SP’s),
execute queries, retrieve, insert, update, delete data against a database.
12. What are the different types of SQL commands?
SQL commands are segregated into following types:

 DDL – Data Definition Language


 DML – Data Manipulation Language
 DQL – Data Query Language
 DCL – Data Control Language
 TCL – Transaction Control Language

13. What are the different DDL commands in SQL?
DDL commands are used to define or alter the structure of the database.

 CREATE: To create databases and database objects


 ALTER: To alter existing database objects
 DROP: To drop databases and databases objects
 TRUNCATE: To remove all records from a table but not its database structure
 RENAME: To rename database objects

14. What are the different DML commands in SQL?


DML commands are used for managing data present in the database.

 SELECT: To select specific data from a database


 INSERT: To insert new records in a table
 UPDATE: To update existing records
 DELETE: To delete existing records from a table

15. What are the different DCL commands in SQL?


DCL commands are used to create roles, grant permission and control access to the database
objects.

 GRANT: To provide user access


 DENY: To deny permissions to users
 REVOKE: To remove user access

16. What are the different TCL commands in SQL?


TCL commands are used to manage the changes made by DML statements.

 COMMIT: To write and store the changes to the database


 ROLLBACK: To restore the database since a last commit

17. What is an Index?


An index is used to speed up the performance of queries. It makes retrieval of data from the
table faster. An index can be created on one column or a group of columns.
18. What is a View?
View is like a subset of table which are stored logically in a database. A view is a virtual table. It
contains rows and columns similar to a real table. The fields in the view are fields from one or more
real tables. Views do not contain data of their own. They are used to restrict access to the database
or to hide data complexity.
CREATE VIEW view_name AS SELECT column_name1, column_name2 FROM table_name WHERE condition;

19. What are the advantages of Views?


Some of the advantages of Views are
1. Views occupy no space
2. Views are used to simplify retrieving the results of complicated queries that need to be executed
often.
3. Views are used to restrict access to the database or to hide data complexity.
20. What is a Subquery ?
A Subquery is a SQL query within another query. It is a subset of a Select statement whose return
values are used in filtering the conditions of the main query.
21. What is a temp table?
A temp table is a temporary storage structure used to store data temporarily.
22. How to avoid duplicating records in a query?
The SQL SELECT DISTINCT query is used to return only unique values. It eliminates all the duplicated
values.
23. What is the difference between Rename and Alias?
‘Rename’ is a permanent name given to a table or column
‘Alias’ is a temporary name given to a table or column.
24. What is a Join?
Join is a query, which retrieves related columns or rows from multiple tables.
25. What are the different types of joins?
Types of Joins are as follows:
 INNER JOIN
 LEFT JOIN
 RIGHT JOIN
 OUTER JOIN

26. What is the difference between an inner and outer join?
Inner Join returns rows when there is at least some matching data between two (or more) tables that
are being compared.
Outer join returns rows from both tables that include the records that are unmatched from one or
both the tables.
27. What are SQL constraints?
SQL constraints are the set of rules that enforce some restrictions while inserting, deleting or
updating data in the databases.
28. What are the constraints available in SQL?
Some of constraints in SQL are – Primary Key, Foreign Key, Unique Key, Not Null, Default, Check
and Index constraint.
29. What is an Unique constraint?
A unique constraint is used to ensure that there are no duplicate values in the field/column.
30. What is a Primary Key?
A PRIMARY KEY constraint uniquely identifies each record in a database table. All columns
participating in a primary key constraint must not contain NULL values.
31. Can a table contain multiple PRIMARY KEYs?
The short answer is no, a table is not allowed to contain multiple primary keys, but it can have
one composite primary key consisting of two or more columns.
32. What is a Composite PRIMARY KEY?
Composite PRIMARY KEY is a primary key created on more than one column (combination of
multiple fields) in a table.
33. What is a FOREIGN KEY?
A FOREIGN KEY is a key used to link two tables together. A FOREIGN KEY in a table is linked
with the PRIMARY KEY of another table.
34. Can a table contain multiple FOREIGN KEYs?
A table can have many FOREIGN KEYs.
35. What is the difference between UNIQUE and PRIMARY KEY constraints?
There should be only one PRIMARY KEY in a table where as there can be any number of UNIQUE
Keys.
PRIMARY KEY doesn’t allow NULL values whereas Unique key allows NULL values.
36. What is a NULL value?
A field with a NULL value is a field with no value. A NULL value is different from a zero value or a
field that contains spaces. A field with a NULL value is one that has been left blank during record
creation. Assume a field in a table is optional; it is then possible to insert a record without
adding a value to that field, and the field will be saved with a NULL value.
37. How to Test for NULL Values?
A field with a NULL value is a field with no value. NULL value cannot be compared with another
NULL values. Hence, It is not possible to test for NULL values with comparison operators, such as
=, <, or <>. For this, we have to use the IS NULL and IS NOT NULL operators.

SELECT column_names FROM table_name WHERE column_name IS NULL;

SELECT column_names FROM table_name WHERE column_name IS NOT NULL;

38. What is a NOT NULL constraint?


A NOT NULL constraint is used to ensure that the value in the field cannot be NULL.
39. What is a CHECK constraint?
A CHECK constraint is used to limit the values that are accepted by one or more columns.
E.g. an 'Age' field should contain only values greater than 18.

CREATE TABLE EMP_DETAILS(EmpID int NOT NULL, NAME VARCHAR (30) NOT NULL, Age INT CHECK (AGE > 18), PRIMARY KEY (EmpID));

40. What is a DEFAULT constraint?


DEFAULT constraint is used to include a default value in a column when no value is supplied at the
time of inserting a record.
41. What is Normalization?
Normalization is the process of table design to minimize the data redundancy. There are different
types of Normalization forms in SQL.

 First Normal Form


 Second Normal Form
 Third Normal Form
 Boyce and Codd Normal Form

42. What is Stored procedure?


A Stored Procedure is a collection of SQL statements that has been created and stored in the
database to perform a particular task. A stored procedure accepts input parameters, processes
them and returns a single value such as a number or text value, or a result set (set of rows).
43. What is a Trigger?
A Trigger is a SQL procedure that initiates an action in response to an event (Insert, Delete or
Update). For example, when a new Employee is added in an Employee_Details table, new records will
be created in the relevant tables such as Employee_Payroll, Employee_Time_Sheet etc.
44. Explain SQL Data Types?
In SQL Server, each column in a database table has a name and a data type. We need to decide what
type of data to store inside each and every column of a table while creating a SQL table.
45. What are the possible values that can be stored in a BOOLEAN data field.
TRUE and FALSE
46. What is the largest value that can be stored in a BYTE data field?
The largest number that can be represented in a single byte is 11111111 or 255. The number of
possible values is 256 (i.e. 255 (the largest possible value) plus 1 (zero), or 2^8).
47. What are Operators available in SQL?
This is one of the most important SQL Interview Questions. SQL Operator is a reserved word used
primarily in an SQL statement’s WHERE clause to perform operations, such as arithmetic
operations and comparisons. These are used to specify conditions in an SQL statement.
There are three types of Operators.
1. Arithmetic Operators
2. Comparison Operators
3. Logical Operators
48. Which TCP/IP port does SQL Server run?
By default, it is port 1433.
49. Describe SQL comments?
Single Line Comments: Single line comments start with two consecutive hyphens (--) and end at
the end of the line.
Multi Line Comments: Multi-line comments start with /* and end with */. Any text between /* and
*/ will be ignored.
50. List out the ACID properties and explain?
Following are the four ACID properties. These guarantee that database transactions are
processed reliably.

 Atomicity
 Consistency
 Isolation
 Durability
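Atomicity, the first of these properties, can be sketched with sqlite3: if any statement in a transaction fails, rolling back undoes every statement in it (the Accounts table and the balances are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Accounts (Name TEXT PRIMARY KEY, Balance INTEGER)")
conn.execute("INSERT INTO Accounts VALUES ('A', 100), ('B', 100)")
conn.commit()

try:
    conn.execute("UPDATE Accounts SET Balance = Balance - 50 WHERE Name = 'A'")
    # Duplicate primary key forces an error partway through the transaction.
    conn.execute("INSERT INTO Accounts VALUES ('A', 0)")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the debit above is undone as well

balances = dict(conn.execute("SELECT Name, Balance FROM Accounts"))
print(balances)  # -> {'A': 100, 'B': 100}
```

Neither the failed insert nor the earlier debit survives: the transaction applied as a whole or not at all.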

51. Define the SELECT INTO statement.
The SELECT INTO statement copies data from one table into a new table. The new table is
created with the column names and types as defined in the old table. You can create new column
names using the AS clause.

SELECT * INTO newtable FROM oldtable WHERE condition;

52. What is the difference between Delete, Truncate and Drop command?
The differences between the Delete, Truncate and Drop commands are:

 Delete is a DML command; it is used to delete rows from a table. It can be rolled
back.
 Truncate is a DDL command; it is used to delete all the rows from the table and free the
space containing the table. It cannot be rolled back.
 Drop is a DDL command; it removes the complete data along with the table structure (unlike
the truncate command, which removes only the rows). All the table's rows, indexes and privileges
are also removed.
53. What is the difference between Delete and Truncate?
This is one of the most important SQL Interview Questions. The differences between the Delete
and Truncate commands are:

 Delete statement is used to delete rows from a table. It can be rolled back.
Truncate statement is used to delete all the rows from the table and free the space containing
the table. It cannot be rolled back.
 We can use a WHERE condition in a DELETE statement and delete only the required rows.
We cannot use a WHERE condition in a TRUNCATE statement, so we cannot delete selected rows
alone.
 We can delete specific rows using DELETE.
We can only delete all the rows at a time using TRUNCATE.
 Delete is a DML command.
Truncate is a DDL command.
 Delete maintains a log, so its performance is slower than Truncate.
Truncate maintains a minimal log and is faster performance-wise.
 We need DELETE permission on a table to use the DELETE command.
We need at least ALTER permission on the table to use the TRUNCATE command.
54. What is the difference between Union and Union All command?
This is one of the most important SQL Interview Questions.
Union: It omits duplicate records and returns only distinct result set of two or more select
statements.
Union All: It returns all the rows including duplicates in the result set of different select statements.
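The difference is easy to see with a small sqlite3 sketch (tables T1 and T2 and their values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (id INTEGER);
    CREATE TABLE T2 (id INTEGER);
    INSERT INTO T1 VALUES (1), (2);
    INSERT INTO T2 VALUES (2), (3);
""")
# UNION removes the duplicate 2; UNION ALL keeps both copies.
union = conn.execute(
    "SELECT id FROM T1 UNION SELECT id FROM T2 ORDER BY id").fetchall()
union_all = conn.execute(
    "SELECT id FROM T1 UNION ALL SELECT id FROM T2 ORDER BY id").fetchall()
print(union)      # -> [(1,), (2,), (3,)]
print(union_all)  # -> [(1,), (2,), (2,), (3,)]
```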
55. What is the difference between Having and Where clause?
The Where clause is used to fetch rows from the database that match a particular criteria, whereas the
Having clause is used along with GROUP BY to fetch groups that meet a particular criteria
specified using aggregate functions. The Where clause cannot be used with aggregate functions, but
the Having clause can.
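A small sqlite3 sketch of the difference (the Orders table and its values are illustrative): WHERE filters individual rows before grouping, while HAVING filters the groups produced by GROUP BY and may reference aggregates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (Customer TEXT, Amount INTEGER);
    INSERT INTO Orders VALUES ('John', 100), ('John', 300), ('Emma', 50);
""")
# WHERE drops no rows here except those <= 40; HAVING then keeps only
# customers whose summed total exceeds 200.
rows = conn.execute("""
    SELECT Customer, SUM(Amount) AS Total
    FROM Orders
    WHERE Amount > 40
    GROUP BY Customer
    HAVING SUM(Amount) > 200
""").fetchall()
print(rows)  # -> [('John', 400)]
```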
56. What are aggregate functions in SQL?
SQL aggregate functions return a single value, calculated from values in a column. Some of the
aggregate functions in SQL are as follows:

 AVG() – returns the average value
 COUNT() – returns the number of rows
 MAX() – returns the largest value
 MIN() – returns the smallest value
 ROUND() – rounds a numeric field to the number of decimals specified
 SUM() – returns the sum
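Several of these can be seen in one query via sqlite3 (the Employee_Details data is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employee_Details (Name TEXT, Salary INTEGER);
    INSERT INTO Employee_Details VALUES ('John', 2500), ('Emma', 3500), ('Mark', 5500);
""")
# Each aggregate collapses the Salary column into a single value.
count, lo, hi, avg, total = conn.execute("""
    SELECT COUNT(*), MIN(Salary), MAX(Salary), AVG(Salary), SUM(Salary)
    FROM Employee_Details
""").fetchone()
print(count, lo, hi, round(avg, 2), total)  # -> 3 2500 5500 3833.33 11500
```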

57. What are string functions in SQL?
SQL string functions are used primarily for string manipulation. Some of the widely used SQL
string functions are:

 LEN() – returns the length of the value in a text field
 LOWER() – converts character data to lower case
 UPPER() – converts character data to upper case
 SUBSTRING() – extracts characters from a text field
 LTRIM() – removes all white space from the beginning of a string
 RTRIM() – removes all white space from the end of a string
 CONCAT() – combines multiple character strings together
 REPLACE() – updates the content of a string
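A quick sqlite3 sketch of a few of these (note that names vary by database: SQLite uses LENGTH() and SUBSTR() where SQL Server uses LEN() and SUBSTRING()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
name = "John Smith"
# Each function transforms the same input string a different way.
upper, lower, length, sub = conn.execute(
    "SELECT UPPER(?), LOWER(?), LENGTH(?), SUBSTR(?, 1, 4)",
    (name, name, name, name),
).fetchone()
print(upper, lower, length, sub)  # -> JOHN SMITH john smith 10 John
```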

Next, let us look at practical SQL interview questions.
Practical SQL Interview Questions:
58. How to add a new Employee's details in an Employee_Details table with the following
details:
Employee_Name: John, Salary: 5500, Age: 29?

INSERT INTO Employee_Details (Employee_Name, Salary, Age) VALUES ('John', 5500, 29);

59. How to add a column ‘Salary’ to a table Employee_Details?

ALTER TABLE Employee_Details ADD Salary INT;
60. How to change value of the field ‘Salary’ as 7500 for an Employee_Name ‘John’ in a table
Employee_Details?

UPDATE Employee_Details SET Salary = 7500 WHERE Employee_Name = 'John';

61. How to select all records from the table?

SELECT * FROM table_name;
62. How to get a list of all tables from a database?
To view the tables available in a particular database:

USE TestDB
GO
SELECT * FROM sys.Tables
GO

63. Define SQL Delete statement.


The SQL Delete statement is used to delete records from a table.

DELETE FROM table_name WHERE some_column = some_value;

64. Write the command to remove all Players named Sachin from the Players table.

DELETE FROM Players WHERE Player_Name = 'Sachin';

65. How to fetch values from TestTable1 that are not in TestTable2 without using NOT
keyword?

--------------
| TestTable1 |
--------------
| 11 |
| 12 |
| 13 |
| 14 |
--------------

--------------
| TestTable2 |
--------------
| 11 |
| 12 |
--------------

By using the EXCEPT keyword:

select * from TestTable1
except
select * from TestTable2;

66. How to get each name only once from an employee table?
By using DISTINCT keyword, we could get each name only once.

SELECT DISTINCT employee_name FROM employee_table;

67. How to rename a column in the output of an SQL query?
By using the SQL AS keyword:

SELECT column_name AS new_name FROM table_name;

68. What is the order of clauses in an SQL SELECT statement?
The order of clauses in an SQL SELECT statement is as follows:
SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY.
69. How to display the current date in SQL?
By using the below query:

SELECT GETDATE();

70. Write an SQL Query to find an Employee_Name whose Salary is equal or greater than
5000 from the below table Employee_Details.

| Employee_Name | Salary|
-----------------------------
| John | 2500 |
| Emma | 3500 |
| Mark | 5500 |
| Anne | 6500 |
-----------------------------

Syntax:

SELECT Employee_Name FROM Employee_Details WHERE Salary >= 5000;

Output:

| Employee_Name | Salary|
-----------------------------
| Mark | 5500 |
| Anne | 6500 |
-----------------------------

71. Write an SQL Query to find the list of Employee_Names starting with ‘E’ from the below table

| Employee_Name | Salary|
-----------------------------
| John | 2500 |
| Emma | 3500 |
| Mark | 5500 |
| Anne | 6500 |
-----------------------------

Syntax:

SELECT * FROM Employee_Details WHERE Employee_Name LIKE 'E%';
Output:

| Employee_Name | Salary|
-----------------------------
| Emma | 3500 |
-----------------------------

72. Write SQL SELECT query that returns the FirstName and LastName from
Employee_Details table.

SELECT FirstName, LastName FROM Employee_Details;

73. How to rename a Table?

sp_rename 'SCOREBOARD', 'OVERALLSCORE'

To rename Table Name & Column Name

sp_rename OldTableName, NewTableName
sp_rename 'TableName.OldColumnName', 'NewColumnName'

74. How to select all the even number records from a table?
To select all the even number records from a table:

Select * from table where id % 2 = 0;

75. How to select all the odd number records from a table?
To select all the odd number records from a table:

Select * from table where id % 2 != 0;

76. What is the SQL CASE statement?
The SQL CASE statement allows you to embed if-else like logic in a SELECT statement.
77. Can you display the result from the below table TestTable based on the criteria: M, m as M;
F, f as F; NULL as N; and g, H, i as U?

SELECT Gender from TestTable

| Gender |
------------
| M |
| F |
| NULL |
| m |
| f |
| g |
| H |
| i |
------------

By using the below syntax we could achieve the required output. Note that a NULL value must be
tested with IS NULL rather than compared to the string 'NULL':

SELECT Gender,
       CASE
           WHEN Gender = 'i' THEN 'U'
           WHEN Gender = 'g' THEN 'U'
           WHEN Gender = 'H' THEN 'U'
           WHEN Gender IS NULL THEN 'N'
           ELSE UPPER(Gender)
       END AS newgender
FROM TestTable
GROUP BY Gender

78. What will be the result of the query below?

select case when null = null then 'SQLServer' else 'MySQL' end as Result;

This query returns “MySQL”, because null = null is not the proper way to compare a null value.
79. What will be the result of the query below?

select case when null is null then 'SQLServer' else 'MySQL' end as Result;

This query returns “SQLServer”.


80. Can you display F as M and M as F from the below table TestTable?

| Name | Gender |
------------------------
| John | M |
| Emma | F |
| Mark | M |
| Anne | F |
------------------------

By using the below syntax we could achieve the output as required.

UPDATE TestTable SET Gender = CASE Gender WHEN 'F' THEN 'M' ELSE 'F' END

1) Difference between Quality Assurance and Quality Control?

QA: It is process oriented.
It is involved in the entire process of software development.
Prevention oriented.
QC: It is product oriented.
It works to examine the quality of the product.
Detection oriented.

2) What is a bug?
A bug is a mismatch between the expected and actual behaviour of a software application.

3) What is a test case?
A test case is a set of input values, execution preconditions, expected results and execution
postconditions, developed for a particular objective or test condition, such as to exercise
a particular program path or to verify compliance with a specific requirement.

4) What is the purpose of test plan?

The test plan document is prepared by the test lead or team lead. It contains contents like
introduction, objectives, test strategy, scope, test items, program modules, user
procedures, features to be tested, features not to be tested, approach, pass or fail criteria,
testing process, test deliverables, testing tasks, responsibilities, resources, schedule,
environmental requirements, risks & contingencies, change management procedures, plan
approvals, etc. All these things help a test manager understand the testing he should do and
what he should follow for testing that particular project.
5) What is the basis for test case review?
The main bases for test case review are:
1. testing-techniques-oriented review
2. requirements-oriented review
3. defects-oriented review

6) What are the contents of an SRS document?
System requirements specifications and functional requirements specifications.

7) What is Regression testing?
After a bug is fixed, testing the application to check whether the fix has affected the remaining
functionality of the application or not. In regression testing, the bug-fixed module and its
connected modules are checked for their integrity after the bug fix.

8) What is build duration?
It is the time gap between an old version build and a new version build; in the new version build,
some new extra features are added.
9) What is test deliverables?

Test deliverables are the documents prepared during testing, like the test plan
document, test case docs, bug reports etc.

Test deliverables are delivered to the client not only for the completed activities, but
also for the activities which we are implementing for better productivity (as per the
company's standards).

1. QA Test Plan
2. Test case docs
3. Automation test plan, if we are using automation
4. Automation scripts
5. QA Coverage Matrix and defect matrix
6. Traceability Matrix
7. Test Results doc
8. QA Schedule doc (describes the deadlines)
9. Test Report or Project Closure Report (prepared once we have rolled out the project to the client)
10. Weekly status report (sent by the PM to the client)
11. Release Notes

10) If a project is long term project , requirements are also changes then test plan will
change or not? why?

Yes, definitely. If requirements change, the design documents and specifications (for the
particular module which implements the requirements) will also change. Hence the test plan
would also need to be updated.

1. What is Acceptance Testing?

Testing conducted to enable a user/customer to determine whether to accept a
software product. Normally performed to validate that the software meets a set of
agreed acceptance criteria.

2. What is Accessibility Testing?

Verifying a product is accessible to people having disabilities (deaf, blind,
mentally disabled etc.).

3. What is Ad-Hoc Testing?

A testing phase where the tester tries to 'break' the system by randomly trying
the system's functionality. Can include negative testing as well. See also
Monkey Testing.

4. What is Agile Testing?

Testing practice for projects using agile methodologies, treating development
as the customer of testing and emphasizing a test-first design paradigm. See
also Test Driven Development.

5. What is Application Binary Interface (ABI)?

A specification defining requirements for portability of applications in binary
forms across different system platforms and environments.

6. What is Application Programming Interface (API)?

A formalized set of software calls and routines that can be referenced by an
application program in order to access supporting system or network services.

7. What is Automated Software Quality (ASQ)?

The use of software tools, such as automated testing tools, to improve
software quality.

8. What is Automated Testing?

Testing employing software tools which execute tests without manual
intervention. Can be applied in GUI, performance, API, etc. testing. The use of
software to control the execution of tests, the comparison of actual outcomes
to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions.

9. What is Backus-Naur Form?


A metalanguage used to formally describe the syntax of a language.

10. What is Basic Block?

A sequence of one or more consecutive, executable statements containing no
branches.

11. What is Basis Path Testing?

A white box test case design technique that uses the algorithmic flow of the
program to design tests.

12. What is Basis Set?

The set of tests derived using basis path testing.

13. What is Baseline?

The point at which some deliverable produced during the software engineering
process is put under formal change control.

14. What will you do during the first day of the job?

What would you like to do five years from now?

15. What is Beta Testing?

Testing of a pre-release of a software product conducted by customers.

16. What is Binary Portability Testing?

Testing an executable application for portability across system platforms and
environments, usually for conformation to an ABI specification.

17. What is Black Box Testing?

Testing based on an analysis of the specification of a piece of software without
reference to its internal workings. The goal is to test how well the component
conforms to the published requirements for the component.

18. What is Bottom Up Testing?

An approach to integration testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The
process is repeated until the component at the top of the hierarchy is tested.

19. What is Boundary Testing?

Tests which focus on the boundary or limit conditions of the software being
tested. (Some of these tests are stress tests).

20. What is Boundary Value Analysis?

BVA is similar to Equivalence Partitioning but focuses on "corner cases" or
values that are usually just out of range as defined by the specification. This
means that if a function expects all values in the range of negative 100 to
positive 1000, test inputs would include negative 101 and positive 1001.
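The idea can be sketched in a few lines of Python (the range -100..1000 comes from the definition above; the helper names are made up for illustration):

```python
# For a range [low, high], boundary value analysis tests just below,
# at, and just above each boundary.
def boundary_values(low, high):
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Stand-in for the function under test: accepts values in [-100, 1000].
def accepts(value, low=-100, high=1000):
    return low <= value <= high

results = {v: accepts(v) for v in boundary_values(-100, 1000)}
print(results)
# -101 and 1001 fall outside the range; the other four values fall inside.
```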

21. What is Branch Testing?

Testing in which all branches in the program source code are tested at least
once.

22. What is Breadth Testing?

A test suite that exercises the full functionality of a product but does not test
features in detail.

23. What is CAST?

Computer Aided Software Testing.

24. What is Capture/Replay Tool?

A test tool that records test input as it is sent to the software under test. The
input cases stored can then be used to reproduce the test at a later time. Most
commonly applied to GUI test tools.

25. What is CMM?

The Capability Maturity Model for Software (CMM or SW-CMM) is a model for
judging the maturity of the software processes of an organization and for
identifying the key practices that are required to increase the maturity of these
processes.

27. What is Cause Effect Graph?

A graphical representation of inputs and the associated outputs effects which can be used to design
test cases.

28. What is Code Complete?

Phase of development where functionality is implemented in entirety; bug fixes are all that are left.
All functions found in the Functional Specifications have been implemented.

29. What is Code Coverage?

An analysis method that determines which parts of the software have been executed (covered) by
the test case suite and which parts have not been executed and therefore may require additional
attention.

30. What is Code Inspection?

A formal testing technique where the programmer reviews source code with a
group who ask questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding standards.

31. What is Code Walkthrough?

A formal testing technique where source code is traced by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyze the programmer's logic and
assumptions.

32. What is Coding?

The generation of source code.

33. What is Compatibility Testing?

Testing whether software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.

34. What is Component?

A minimal software item for which a separate specification is available.

35. What is Component Testing?

Testing of individual software components (Unit Testing).

36. What is Concurrency Testing?

Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.

37. What is Conformance Testing?

The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to testing conformance to a formal standard.

38. What is Context Driven Testing?

The context-driven school of software testing is a flavor of Agile Testing that advocates continuous
and creative evaluation of testing opportunities in light of the potential information revealed and the
value of that information to the organization right now.

39. What is Conversion Testing?


Testing of programs or procedures used to convert data from existing systems for use in
replacement systems.

40. What is Cyclomatic Complexity?

A measure of the logical complexity of an algorithm, used in white-box testing.

41. What is Data Dictionary?

A database that contains definitions of all data items defined during analysis.

42. What is Data Flow Diagram?

A modeling notation that represents a functional decomposition of a system.

43. What is Data Driven Testing?

Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.

44. What is Debugging?

The process of finding and removing the causes of software failures.

45. What is Defect?

Non-conformance to requirements or functional / program specification

46. What is Dependency Testing?

Examines an application's requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality.

47. What is Depth Testing?

A test that exercises a feature of a product in full detail.

48. What is Dynamic Testing?

Testing software through executing it. See also Static Testing.

49. What is Emulator?

A device, computer program, or system that accepts the same inputs and produces the same outputs
as a given system.

50. What is Endurance Testing?

Checks for memory leaks or other problems that may occur with prolonged execution.
51. What is End-to-End testing?

Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

52. What is Equivalence Class?

A portion of a component's input or output domains for which the component's behaviour is
assumed to be the same from the component's specification.

53. What is Equivalence Partitioning?

A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.

54. What is Exhaustive Testing?

Testing which covers all combinations of input values and preconditions for an element of the
software under test.

55. What is Functional Decomposition?

A technique used during planning, analysis and design; creates a functional hierarchy for the
software.
54. What is Functional Specification?

A document that describes in detail the characteristics of the product with regard to its intended
features.

55. What is Functional Testing?

Testing the features and operational behavior of a product to ensure they correspond to its
specifications. Testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions. or Black
Box Testing.

56. What is Glass Box Testing?

A synonym for White Box Testing.

57. What is Gorilla Testing?

Testing one particular module, functionality heavily.

58. What is Gray Box Testing?

A combination of Black Box and White Box testing methodologies: testing a piece of software
against its specification but using some knowledge of its internal workings.

59. What is High Order Tests?

Black-box tests conducted once the software has been integrated.

60. What is Independent Test Group (ITG)?

A group of people whose primary responsibility is software testing.

61. What is Inspection?

A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).

62. What is Integration Testing?

Testing of combined parts of an application to determine if they function together correctly. Usually
performed after unit and functional testing. This type of testing is especially relevant to client/server
and distributed systems.

63. What is Installation Testing?

Confirms that the application under test installs correctly and works as expected
after installation in the target environment.

64. What is Load Testing?


See Performance Testing.

65. What is Localization Testing?

This term refers to making software specifically designed for a specific locality.

66. What is Loop Testing?

A white box testing technique that exercises program loops.

67. What is Metric?

A standard of measurement. Software metrics are the statistics describing the structure or content of
a program. A metric should be a real objective measurement of something such as number of bugs
per lines of code.

68. What is Monkey Testing?

Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system
or an application does not crash.

69. What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.

70. What is Path Testing?

Testing in which all paths in the program source code are tested at least once.

71. What is Performance Testing?

Testing conducted to evaluate the compliance of a system or component with specified performance
requirements. Often this is performed using an automated test tool to simulate a large number of
users. Also known as "Load Testing".

72. What is Positive Testing?

Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

73. What is Quality Assurance?

All those planned or systematic actions necessary to provide adequate confidence that a product or
service is of the type and quality needed and expected by the customer.

74. What is Quality Audit?

A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.
75. What is Quality Circle?

A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.

76. What is Quality Control?

The operational techniques and the activities used to fulfill and verify requirements of quality.

77. What is Quality Management?

That aspect of the overall management function that determines and implements the quality policy.

78. What is Quality Policy?

The overall intentions and direction of an organization as regards quality as formally expressed by
top management.

79. What is Quality System?

The organizational structure, responsibilities, procedures, processes, and resources for
implementing quality management.

80. What is Race Condition?

A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a
write, with no mechanism used by either to moderate simultaneous access.

81. What is Ramp Testing?

Continuously raising an input signal until the system breaks down.

82. What is Recovery Testing?

Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions

83. What is Regression Testing?

Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.

84. What is Release Candidate?

A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).

85. What is Sanity Testing?

Brief test of major functional elements of a piece of software to determine if it is basically
operational.

86. What is Scalability Testing?

Performance testing focused on ensuring the application under test gracefully handles increases in
work load.

87. What is Security Testing?

Testing which confirms that the program can restrict access to authorized personnel and that the
authorized personnel can access the functions available to their security level.

88. What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and considering it
a success if it does not catch on fire.

89. What is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several times
more transactions in an entire day (or night) than would be expected in a busy day, to identify
any performance problems that appear after a large number of transactions have been executed.

90. What is Software Requirements Specification?

A deliverable that describes all data, functional and behavioral requirements, all constraints, and all
validation requirements for software.

91. What is Software Testing?

A set of activities conducted with the intent of finding errors in software.


92. What is Static Analysis?

Analysis of a program carried out without executing the program.

93. What is Static Analyzer?

A tool that carries out static analysis.

94. What is Static Testing?

Analysis of a program carried out without executing the program.

95. What is Storage Testing?

Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected
termination resulting from lack of space. This is external storage as opposed to
internal storage.

96. What is Stress Testing?

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

97. What is Structural Testing?

Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing.

98. What is System Testing?

Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components.

99. What is Testability?

The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

100. What is Testing?

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation? It is the same as Automated Testing.

101. What is Test Bed?

An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

102. What is Test Case?

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements to be tested, test steps, verification steps, prerequisites, outputs, test environment, etc. It is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

What is Test Driven Development? A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly an equal number of lines of test code to the size of the production code.
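
As a sketch of the TDD cycle described above, here is a minimal example using Python's built-in unittest module; the add function and its tests are invented purely for illustration.

```python
import unittest

# Production code under test. In TDD this function is written only after a
# failing test exists, and only enough of it to make that test pass.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # These tests come first in the red-green-refactor cycle: write a test,
    # watch it fail, write the smallest code that passes, then refactor.
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    # Run the suite without calling sys.exit, so this file can be imported too.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
    unittest.TextTestRunner().run(suite)
```

In practice the full suite is kept passing at all times, so any regression introduced while refactoring shows up immediately.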

103. What is Test Driver?

A program or test tool used to execute tests. Also known as a Test Harness.

104. What is Test Environment?

The hardware and software environment in which tests will be run, and any
other software with which the software under test interacts when under test
including stubs and test drivers.

105. What is Test First Design?

Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

106. What is Test Harness?

A program or test tool used to execute tests. Also known as a Test Driver.
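
The driver/harness idea can be sketched in a few lines of Python; the test functions and their checks below are hypothetical stand-ins, and real harnesses such as unittest or pytest add fixtures, discovery and reporting on top of this.

```python
# A minimal, hypothetical test driver/harness: it runs each test callable
# and records whether it passed or raised an AssertionError.

def run_tests(tests):
    """Execute each test callable and return a {name: 'PASS' or 'FAIL'} map."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

def test_login_accepts_valid_user():
    assert "admin" == "admin"   # stand-in for driving the real application

def test_login_rejects_blank_password():
    assert len("") == 0         # stand-in for driving the real application

results = run_tests([test_login_accepts_valid_user,
                     test_login_rejects_blank_password])
```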

107. What is Test Plan?

A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

108. What is Test Procedure?

A document providing detailed instructions for the execution of one or more test cases.

109. What is Test Script?

Commonly used to refer to the instructions for a particular test that will be
carried out by an automated test tool.

110. What is Test Specification?

A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

111. What is Test Suite?

A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

112. What are Test Tools?

Computer programs used in the testing of a system, a component of the system, or its documentation.

113. What is Thread Testing?

A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

114. What is Top Down Testing?

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

115. What is Total Quality Management?

A company commitment to develop a process that achieves high quality product and customer satisfaction.

116. What is Traceability Matrix?

A document showing the relationship between Test Requirements and Test Cases.
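
A traceability matrix can be represented as simply as a mapping from requirement IDs to the test cases that cover them; the IDs below are made up for illustration, and the mapping also exposes coverage gaps.

```python
# A toy traceability matrix: requirement IDs mapped to covering test cases.
# All identifiers here are invented for illustration.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no covering test yet -- a gap the matrix makes visible
}

# Requirements with no covering test case:
uncovered = [req for req, cases in traceability.items() if not cases]
```

Scanning for empty entries like this is exactly how the matrix is used in practice: it proves every requirement is tested, or shows where it is not.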

117. What is Usability Testing?

Testing the ease with which users can learn and use a product.

118. What is Use Case?

The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

119. What is Unit Testing?

Testing of individual software components.

120. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns the priority and severity of the defect?

To report bugs in an Excel sheet, use columns such as: S.No., Module, Screen/Section, Issue detail, Severity, Priority, Issue status. Filters can then be set on the column attributes.
But most companies use a shared defect-management process for reporting bugs. When the project comes in for testing, a module-wise detail of the project is entered into the defect management system they are using. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, not able to reproduce)
5. Assigned to (names of members allocated to the project)
6. Priority (high, medium, low)
7. Severity (major, medium, low)
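
Assuming the fields above, a single defect record could be captured as structured data and exported to a sheet-friendly CSV that opens directly in Excel; every value here is invented.

```python
import csv
import io

# One defect record using the fields listed above; all values are made up.
defect = {
    "Date": "2024-01-04",
    "Issue brief": "Login button unresponsive",
    "Issue description": "Click 'Login' with valid credentials; nothing happens",
    "Issue status": "Active",
    "Assigned to": "Dev A",
    "Priority": "High",
    "Severity": "Major",
}

# Write the record as CSV (Excel-compatible) with a header row.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(defect))
writer.writeheader()
writer.writerow(defect)
csv_text = buffer.getvalue()
```

In a real process, such rows would accumulate per build and feed the filters described above (by module, severity, status, and so on).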

121. How do you plan test automation?

1. Prepare the automation test plan
2. Identify the scenarios
3. Record the scenarios
4. Enhance the scripts by inserting checkpoints and conditional loops
5. Incorporate error handlers
6. Debug the script
7. Fix the issues
8. Rerun the script and report the results

122. Does automation replace manual testing?

There can be some functionality which cannot be tested with an automated tool, so we may have to test it manually; therefore manual testing can never be fully replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about the real environment, we do negative testing manually.

123. How will you choose a tool for test automation?

Choosing a tool depends on many things:
1. Application to be tested
2. Test environment
3. Scope and limitations of the tool
4. Features of the tool
5. Cost of the tool
6. Whether the tool is compatible with your application, i.e. the tool should be able to interact with your application
7. Ease of use

124. How will you evaluate the tool for test automation?

We need to concentrate on the features of the tool and how these could be beneficial for our project. Additional new features and enhancements of existing features will also help.

125. How will you describe testing activities?

Testing activities start from the elaboration phase. The various testing activities are preparing the test plan, preparing test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action for them, and automating the test cases.

126. What testing activities you may want to automate?

Automate all the high priority test cases which need to be executed as part of regression testing for each build cycle.

127. Describe common problems of test automation.

The common problems are:
1. Maintenance of old scripts when there is a feature change or enhancement
2. A change in the technology of the application will affect the old scripts

128. What types of scripting techniques for test automation do you know?

There are 5 types of scripting techniques:
1. Linear
2. Structured
3. Shared
4. Data-driven
5. Keyword-driven
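
As a minimal sketch of the data-driven technique, one generic script body is fed many data rows; the login_check function and its data are hypothetical stand-ins for driving a real application.

```python
# Data-driven scripting: the test logic is written once, the data varies.
# Each row is (username, password, expected_result); all values are invented.
test_data = [
    ("alice", "secret", True),
    ("alice", "wrong",  False),
    ("",      "secret", False),
]

def login_check(username, password):
    # Stand-in for actually driving the application under test.
    return username == "alice" and password == "secret"

# One generic loop executes every row and compares against the expectation.
results = [login_check(u, p) == expected for u, p, expected in test_data]
```

Keyword-driven scripting goes one step further: each data row also names the action to perform (e.g. "login", "logout"), and a dispatcher maps those keywords to functions, so non-programmers can author tests as tables.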

129. What are memory leaks and buffer overflows?

A memory leak means incomplete deallocation of memory; leaks are bugs that happen very often. A buffer overflow means data sent as input to the server overflows the boundaries of the input area, thus causing the server to misbehave. Buffer overflows can be exploited to attack a system.
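
Buffer overflows belong to languages with manual memory management such as C and cannot be shown directly in a managed language, but the leak pattern can: memory "leaks" whenever references are retained forever, e.g. in a never-cleared cache. This toy Python sketch is purely illustrative.

```python
# Even in a garbage-collected language, a "leak" occurs when references are
# retained for the lifetime of the process, e.g. an unbounded module-level
# cache that is appended to but never cleared.
_cache = []  # never cleared: grows on every call

def handle_request(payload):
    _cache.append(payload)  # the leak: every payload is retained forever
    return len(payload)

# Simulate 1000 requests; the retained data grows without bound.
for _ in range(1000):
    handle_request("x" * 100)

leaked_items = len(_cache)
```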

130. What are the major differences between stress testing, load testing and volume testing?

Stress testing means increasing the load and checking the performance at each level. Load testing means applying the expected load at one time and checking the performance at that level. Volume testing means checking the system's behaviour while it processes a large volume of data.

Q: How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-size organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication is essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q: What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of information. Use documentation change management, if possible.

Q: What makes a good test engineer?

A: Good test engineers have a "test to break" attitude. We, good test
engineers, take the point of view of the customer; have a strong desire for
quality and an attention to detail. Tact and diplomacy are useful in maintaining
a cooperative relationship with developers and an ability to communicate with
both technical and non-technical people. Previous software development
experience is also helpful as it provides a deeper understanding of the software
development process, gives the test engineer an appreciation for the
developers' point of view and reduces the learning curve in automated test tool
programming.

Q: What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?

A: A test case is a document that describes an input, action, or event and its
expected result, in order to determine if a feature of an application is working
correctly. A test case should contain particulars such as a...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely
think through the operation of the application. For this reason, it is useful to
prepare test cases early in the development cycle, if possible.
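
The particulars listed above map naturally onto a structured record; this dataclass and its field values are hypothetical, shown only to make the shape of a test case concrete.

```python
from dataclasses import dataclass, field

# A hypothetical record type holding the test-case particulars listed above.
@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'alice' exists in the system",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="Dashboard is displayed",
)
```

Keeping test cases in a structured form like this also makes it easy to generate traceability matrices and execution logs from the same data.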

Q: What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements,
software, hardware, safety impact, etc., for regression testing to check the
fixes didn't create other problems elsewhere. If a problem-tracking system is in
place, it should encapsulate these determinations. A variety of commercial,
problem-tracking/management software tools are available. These tools, with
the detailed input of software test engineers, will give the team complete
information so developers can understand the bug, get an idea of its severity,
reproduce it and fix it.

Q: What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, and changes made to them, and who makes the changes.

Q: What if the software is so buggy it can't be tested at all?

A: In this situation the best bet is to have test engineers go through the
process of reporting whatever bugs or problems initially show up, with the
focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper
problems in the software development process, such as insufficient unit
testing, insufficient integration testing, poor design, improper build or release
procedures, managers should be notified and provided with some
documentation as evidence of the problem.

Q: What if there isn't enough time for thorough testing?

A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires
judgment skills, common sense and experience. The checklist should include
answers to the following questions:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought
out?
• What do the developers think are the highest-risk aspects of the
application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service
complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

Q: What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the
considerations listed under "What if there isn't enough time for thorough
testing?" do apply. The test engineer then should do "ad hoc" testing, or write
up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
• Ensure the code is well commented and well documented; this makes changes easier for the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
• In the project's initial schedule, allow for some extra time commensurate with probable changes.
• Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project.
• Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing with the expected effort required to redo it to deal with changes.
• Design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to remain unchanged.
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
• Focus less on detailed test plans and test cases and more on ad-hoc testing, with an understanding of the added risk this entails.

Q: How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are...
• Deadlines, e.g. release deadlines, testing deadlines;
• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.

Q: What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing
needs or regression testing needs. Management should be made aware of any
significant added risks as a result of the unexpected functionality. If the
functionality only affects areas, such as minor improvements in the user
interface, it may not be a significant risk.

Q: What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.

Q: How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and
preparing logical groups of functions that can be further broken into test procedures. Test
procedures define test conditions, data to be used for testing and expected results, including
database updates, file outputs, report results. Generally speaking...
• Test cases and scenarios are designed to represent both typical and unusual situations that may
occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also execute unit
test cases.
• It is the test team that, with assistance of developers and clients, develops test cases and
scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the process or
transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to ensure
each test is within scope.
• Test data is captured and base lined, prior to testing. This data serves as the foundation for unit
and system testing and used to exercise system functionality in a controlled environment.
• Some output data is also base-lined for future comparison. Base-lined data is used to support
future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment and data
to be tested. A test readiness document is created to indicate the status of the entrance criteria of the
release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from general
and detailed design documents, e.g. software design document, source code, and software
complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions, and test data.
• Reports of software design issues, given to software developers for correction.

Q: How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each
test procedure is performed, an entry is recorded in a test execution log to note the execution of the
procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are
held throughout the execution phase. Checkpoint meetings are held daily, if required, to address and
discuss testing issues, status and activities.
• The output from the execution of test procedures is known as test results. Test results are
evaluated by test engineers to determine whether the expected results have been obtained. All
discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead,
programmers, software engineers and documented for further investigation and resolution. Every
company has a different process for logging and reporting bugs/defects uncovered during testing.
• A pass/fail criteria is used to determine the severity of a problem, and results are recorded in a
test summary report. The severity of a problem, found during system testing, is defined in
accordance to the customer's risk assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the problem.
Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion
of the test, members of the test team prepare a summary report. The summary report is reviewed by
the Project Manager, Software QA Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the Configuration
Manager to coordinate the migration of the release software components to the next test level, as
documented in the Configuration Management Plan. The software is only migrated to the
production environment after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and update documents
where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
• A software that has been migrated to the test environment, i.e. unit tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This needs to be
approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements document
and Design Document problems.
• Reports on software design issues, given to software developers for correction. Examples are
bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for migration to the next
level.

Q: How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy
is developed for all levels of testing, as required. The test team analyzes the requirements, writes the
test strategy and reviews the plan with the project team. The test plan may include test cases,
conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
• A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the project
management level.

Q: What is security clearance?

A: Security clearance is a process of determining your trustworthiness and reliability before granting you access to national security information.

Q: What are the levels of classified access?

A: The levels of classified access are confidential, secret, top secret, and sensitive compartmented
information, of which top secret is the highest.

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

* Title

* Identification of software including version/release numbers.

* Revision history of document including authors, dates, approvals.

* Table of Contents.

* Purpose of document, intended audience

* Objective of testing effort

* Software product overview

* Relevant related document list, such as requirements, design documents, other test plans, etc.

* Relevant standards or legal requirements

* Traceability requirements

* Relevant naming conventions and identifier conventions

* Overall software project organization and personnel/contact-info/responsibilities

* Test organization and personnel/contact-info/responsibilities

* Assumptions and dependencies

* Project risk analysis

* Testing priorities and focus

* Scope and limitations of testing

* Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable

* Outline of data input equivalence classes, boundary value analysis, error classes

* Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems

* Test environment validity analysis - differences between the test and production systems and their
impact on test validity.

* Test environment setup and configuration issues

* Software migration processes

* Software CM processes

* Test data setup requirements

* Database setup requirements

* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs

* Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs

* Test automation - justification and overview

* Test tools to be used, including versions, patches, etc.

* Test script/test code maintenance processes and version control

* Problem tracking and resolution - tools and processes

* Project test metrics to be used

* Reporting requirements and testing deliverables

* Software entrance and exit criteria

* Initial sanity testing period and criteria

* Test suspension and restart criteria

* Personnel allocation

* Personnel pre-training needs

* Test site/location

* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues.

* Relevant proprietary, classified, security, and licensing issues.

* Open issues

* Appendix - glossary, acronyms, etc.

What's a 'test case'?

* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools).
The following are items to consider in the tracking process:

* Complete information such that developers can understand the bug, get an idea of its severity, and
reproduce it if necessary.
* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed


* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know when
bugs are found and how to get the needed information, and reporting/summary capabilities are
needed for managers.
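
The tracking items above map naturally onto a structured bug record. The sketch below is a hypothetical schema (field and status names are illustrative, not any particular tool's) covering a few of the items:

```python
from dataclasses import dataclass

# A 5-level severity range, as mentioned in the list above (names are examples).
SEVERITIES = ("critical", "high", "medium", "low", "trivial")

@dataclass
class BugReport:
    bug_id: str            # bug identifier
    status: str            # e.g. 'New', 'Assigned', 'Released for Retest', 'Closed'
    application: str       # application name and version
    version: str
    summary: str           # one-line bug description
    severity: str
    reproducible: bool
    assigned_to: str = ""  # developer/group the problem is assigned to

    def assign(self, developer: str):
        """Hand the bug to a developer who can fix it."""
        self.assigned_to = developer
        self.status = "Assigned"

bug = BugReport("BUG-42", "New", "WebShop", "2.1",
                "Checkout crashes on empty cart",
                severity="critical", reproducible=True)
bug.assign("dev-team-a")
print(bug.status, bug.assigned_to)
```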

What if the software is so buggy it can't really be tested at all?

* The best bet in this situation is for the testers to go through the process of reporting whatever bugs
or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of
problem can severely affect schedules, and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient integration testing, poor design, improper
build or release procedures, etc.) managers should be notified, and provided with some
documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:

* Deadlines (release deadlines, testing deadlines, etc.)

* Test cases completed with certain percentage passed

* Test budget depleted

* Coverage of code/functionality/requirements reaches a specified point

* Bug rate falls below a certain level

* Beta or alpha testing period ends
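
These factors are often combined into explicit exit criteria. The sketch below shows one way to encode such a check; the threshold values are illustrative examples a team would agree on, not fixed rules:

```python
def can_stop_testing(pass_rate, coverage, open_critical_bugs,
                     min_pass_rate=0.95, min_coverage=0.80):
    """Return True when the agreed exit criteria are all met.

    pass_rate: fraction of executed test cases that passed
    coverage:  fraction of code/functionality/requirements covered
    open_critical_bugs: count of unresolved critical defects
    """
    return (pass_rate >= min_pass_rate
            and coverage >= min_coverage
            and open_critical_bugs == 0)

print(can_stop_testing(pass_rate=0.97, coverage=0.85, open_critical_bugs=0))  # True
print(can_stop_testing(pass_rate=0.97, coverage=0.85, open_critical_bugs=2))  # False
```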

What if there isn't enough time for thorough testing?

* Use risk analysis to determine where testing should be focused. Since it's rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency, or
everything that could go wrong, risk analysis is appropriate to most software development projects.
This requires judgement skills, common sense, and experience. (If warranted, formal methods are
also available.) Considerations can include:

* Which functionality is most important to the project's intended purpose?

* Which functionality is most visible to the user?

* Which functionality has the largest safety impact?

* Which functionality has the largest financial impact on users?

* Which aspects of the application are most important to the customer?

* Which aspects of the application can be tested early in the development cycle?

* Which parts of the code are most complex, and thus most subject to errors?

* Which parts of the application were developed in rush or panic mode?

* Which aspects of similar/related previous projects caused problems?

* Which aspects of similar/related previous projects had large maintenance expenses?

* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?

* What kinds of problems would cause the worst publicity?

* What kinds of problems would cause the most customer service complaints?

* What kinds of tests could easily cover multiple functionalities?

* Which tests will have the best high-risk-coverage to time-required ratio?
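
One informal way to act on these questions is to score each application area against a few of the risk factors and test the highest-scoring areas first. The areas, factors, and equal weighting below are illustrative assumptions, not a formal method:

```python
# Score candidate test areas on a few of the risk factors above (1 = low, 5 = high).
areas = {
    "checkout":  {"importance": 5, "visibility": 5, "complexity": 4, "rushed": 3},
    "reporting": {"importance": 3, "visibility": 2, "complexity": 5, "rushed": 1},
    "help_page": {"importance": 1, "visibility": 3, "complexity": 1, "rushed": 1},
}

def risk_score(factors):
    # Equal weights here; a real project would weight factors by judgement.
    return sum(factors.values())

# Test the highest-risk areas first.
priority = sorted(areas, key=lambda a: risk_score(areas[a]), reverse=True)
print(priority)  # checkout first
```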

What if the project isn't big enough to justify extensive testing?

* Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

This is a common problem and a major headache.

* Work with the project's stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.

* It's helpful if the application's initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.
* If the code is well-commented and well-documented this makes changes easier for the developers.

* Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.

* The project's initial schedule should allow for some extra time commensurate with the possibility
of changes.

* Try to move new requirements to a 'Phase 2' version of an application, while using the original
requirements for the 'Phase 1' version.

* Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.

* Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted - after all, that's their job.

* Balance the effort put into setting up automated testing with the expected effort required to re-do
them to deal with changes.

* Try to design some flexibility into automated test scripts.

* Focus initial automated testing on application aspects that are most likely to remain unchanged.

* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans).

* Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding
of the added risk that this entails).

What if the application has functionality that wasn't in the requirements?

* It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of the
unexpected functionality. If the functionality only affects areas such as minor improvements in the
user interface, for example, it may not be a significant risk.

How can QA processes be implemented without stifling productivity?

* By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity will
be improved instead of stifled. Problem prevention will lessen the need for problem detection,
panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the
same time, attempts should be made to keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and reporting, minimize time required
in meetings, and promote training as part of the QA process. However, no one - especially talented
technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A
typical scenario would be that more days of planning and development will be needed, but less time
will be required for late-night bug-fixing and calming of irate customers. (See the Books section's
'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with
more information.)

What if an organization is growing so fast that fixed QA processes are impossible?

* This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:

* Hire good people

* Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

* Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?

* Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for
web resources with listings that include these kinds of test tools.)

How can World Wide Web sites be tested?

* Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-
in applications), and applications that run on the server side (such as cgi scripts, database interfaces,
logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of
servers and browsers, various versions of each, small but sometimes significant differences between
them, variations in connection speeds, rapidly changing technologies, and multiple standards and
protocols. The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:

* What are the expected loads on the server (e.g., number of hits per unit time), and what kind of
performance is required under such loads (such as web server response time, database query
response times)? What kinds of tools will be needed for performance testing (such as web load
testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

* Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds and
similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser
types)?

* What kind of performance is expected on the client side (e.g., how fast should pages appear, how
fast should animations, applets, etc. load and run)?

* Will down time for server and content maintenance/upgrades be allowed? how much?


* How reliable are the site's Internet connections required to be? And how does that affect backup
system or redundant connection requirements and testing?

* What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

* Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?
* Will there be any standards or requirements for page appearance and/or graphics throughout a site
or parts of a site?
* How will internal and external links be validated and updated? how often?

* Can testing be done on the production system, or will a separate test system be required? How are
browser caching, variations in browser option settings, dial-up connection variabilities, and real-
world internet 'traffic congestion' problems to be accounted for in testing?
* How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?

* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?
* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.

* The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be included
on each page.
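
Some of the checks above, such as validating internal and external links, are easy to automate at a basic level. The sketch below collects link targets from a page using only the Python standard library; actually fetching each URL to verify it responds is omitted to keep the example self-contained:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags so they can be validated later."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny hypothetical page fragment for illustration:
page = '<a href="/about.html">About</a> <a href="https://example.com">Ext</a>'
collector = LinkCollector()
collector.feed(page)

# Split into internal and external links, which are typically checked differently.
internal = [l for l in collector.links if not l.startswith("http")]
external = [l for l in collector.links if l.startswith("http")]
print(internal, external)
```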
What is Extreme Programming and what's it got to do with testing?

* Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in his
book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.). Testing
('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write
unit and functional test code first - before the application is developed. Test code is under source
control along with the rest of the code. Customers are expected to be an integral part of the project
team and to help develop scenarios for acceptance/black box testing. Acceptance tests are
preferably automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team. Detailed
requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-
prioritizing is expected.
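
The 'test code first' practice can be illustrated with a minimal unit test written before the function it exercises (the function and test names here are made up for illustration):

```python
import unittest

# Step 1: write the test first -- it defines the expected behaviour
# of a function that does not exist yet.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

# Step 2: only then write the simplest code that makes the test pass.
def apply_discount(price, percent):
    return price * (100 - percent) / 100

# Run the test suite explicitly.
suite = unittest.TestLoader().loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("test passed:", result.wasSuccessful())
```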

Q: What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a
distinct level of testing. Performance testing verifies loads, volumes and response times, as defined
by requirements.

Q: What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site under a
range of loads to determine at what point the system response time will degrade or fail.
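
The idea of increasing load until response time degrades can be sketched with concurrent workers and a timed request. A real load test would use a dedicated tool against the actual web site; the simulated request below is just a placeholder:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    """Stand-in for an HTTP request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    """Issue requests from many workers at once; return the worst response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(fake_request,
                              range(concurrent_users * requests_per_user)))
    return max(times)

# Increase the load until response time degrades past an agreed limit.
for users in (1, 5, 20):
    print(users, "users -> worst response %.3fs" % run_load(users, 2))
```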

Q: What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation
test for a release is conducted with the objective of demonstrating production readiness.
This test includes the inventory of configuration items, performed by the application's System
Administration, the evaluation of data readiness, and dynamic tests focused on basic system
functionality. When necessary, a sanity test is performed, following installation testing.

Q: What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.

Q: What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software,
operating system, or network environment.

Q: What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of
competitors' products.

Q: What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system functionality and usability prior to the system being released to
production.

The acceptance test is the responsibility of the client/customer or project manager, however, it is
conducted with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design
changes can still be made as a result of alpha testing. Alpha testing is typically performed by a
group that is independent of the design team, but still within the company, e.g. in-house software
test engineers, or software QA engineers.

Q: What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed
and final bugs and problems need to be found before the final release. Beta testing is typically
performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

Q: What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator,
Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.

Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.


Q: What is a Test Engineer?

A: Test engineers are engineers who specialize in testing. We create test cases, procedures, scripts
and generate data. We execute test procedures and scripts, analyze standards of measurements, and
evaluate results of system/integration/regression testing. We also...
• Speed up the work of the development staff;
• Reduce your organization's risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws, before users
get discouraged, before shareholders lose their cool and before employees get bogged down;
• Help the work of your development staff, so the development team can devote its time to build
up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs and
design flaws damage the reputation of your company.

Q: What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating system,
set-up, maintain and back up test environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a Test Build Manager.

Q: What is a System Administrator?

A: Test Build Managers, System Administrators, Database Administrators deliver current software
versions to the test environment, install the application's software and apply software patches, to
both the application and the operating system, set-up, maintain and back up test environment
hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a System Administrator.

Q: What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Test Configuration Manager.

Q: What is a test schedule?

A: The test schedule is a schedule that identifies all tasks required for a successful testing effort, a
schedule of all test activities and resource requirements.

Q: What is software testing methodology?

A: One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. G C Reddy believes that
using this methodology is important in the development and ongoing maintenance of his clients'
applications.
1) What is Quality?
The degree to which a Component, system or process meets specified
requirements and/or customer needs and expectations.

2) What is Quality Attribute?


A feature or characteristic that affects an item's quality.

3) What is Software Quality?


The totality of functionality and features of a Software product that bear on its
ability to satisfy stated or implied needs.

4) What is Software Quality Assurance?


All the planned and systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed and
expected by the customer.

5) What is Software Quality Control?


The operational techniques and the activities used to fulfil and verify Quality
requirements of Software.

6) What is Quality Audit?


A systematic and independent examination to determine whether quality
activities and related results comply with planned arrangements and whether
these arrangements are implemented effectively and are suitable to achieve
objectives.

7) What is Quality Management?


Quality Management is that aspect of the overall management function that
determines and implements the quality policy.

8) What is Verification?
Verification is intended to check that a product or system meets a set of initial
design requirements, specifications.

9) What is Validation?
Validation is intended to check that development and verification procedures
for a product or system that meets initial requirements, specifications.

10) What is Software Testing?


A set of activities conducted with the intent of finding errors in software.
11) What is the relation between Quality and Testing?
Testing is an investigation conducted to provide stakeholders with information
about the quality of the system or application under test.

12) What is Static Testing?


Analyzing the program/code without executing it.

13) What is Dynamic Testing?


Testing Software through executing it.

14) What is Quality Gate?


A special milestone in a project. Quality gates are located between those
phases of a project strongly depending on the outcome of a previous phase. A
quality gate includes a formal check of the documents of the previous phase.

15) What is Software Usability Measurement Inventory?
A questionnaire-based usability test technique for measuring quality from the
end user's point of view.

16) What is Software Process Improvement?


A Program of activities designed to improve the performance and maturity of
the organization's software processes and the results of such a program.

17) What is Systematic Test and Evaluation Process?


Systematic Test and Evaluation Process(STEP) is a methodology to measure
the quality of a system. The process consists of a few activities, which includes
developing test plan and strategy, test design, test execution and evaluation of
test result.

18) What is ISO?


ISO means International organization for Standardization. It is a worldwide
federation of national standard bodies that creates standards and enforces
their application.
19) What is IEEE?
Institute of Electrical and Electronics Engineers is the world's largest technical
professional society. It provides standards for Power, Energy, Telecom,
Information Technology and Aviation etc.... Industries.

20) What is CMM/CMMI?


A five-level staged framework that describes the key elements of an effective
software process.

The Capability Maturity Model covers best practices for planning, engineering
and managing software development and maintenance.
Note: CMM has been retired and replaced by CMMI (Capability Maturity Model
Integration).

21) What is Test Plan?


A document describing the scope,approach,resources and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing
tasks, who will do each task, and any risks during contingency planning.

22) What is Test Control?


Test control describes any corrective actions taken as a result of information
and metrics gathered and reported. Actions may cover any test activity and
may affect any other software life cycle activity or task.

23) What is Test Monitoring?


Test Monitoring is done to know the Progress of the testing and Test Coverage.

24) What is Configuration Management?


A discipline applying technical and administrative direction and surveillance to
identify and document the functional and physical characteristics of a
configuration item, control changes to those characteristics, record and report
change processing and implementation status, and verify compliance with
specified requirements.

25) What is Configuration item?


An aggregation of hardware, software or both, that is designated for
configuration management and treated as a single entity in the configuration
management process.
26) What is Test Policy?
A high level document describing the principles, approach and major objectives
of the organization regarding testing.

27) What is Test Strategy?


Test Strategy is a formal description of how a software product will be tested. A
test strategy is developed for all levels of testing.

28) What is SDLC?


SDLC (Software Development Life Cycle) is the process used in a project to
develop a software product. It describes how the development activities will be
performed and how development phases follow each other. The main phases
are Requirements gathering, Analysis, Design, Code/Implementation and
Release & Maintenance.

29) What is STLC?


The process of testing software in a well planned and systematic way is known
as Software Testing Life Cycle(STLC).

30) What is Test Design Technique?


Test Design technique is a process of selecting few Test Cases out of many with
the likelihood of finding defects.

31) What is Test Execution?


Execution of tests is accomplished by following the test documents (like Test
Cases) in a methodical manner. As each test is executed, an entry is recorded
in the test execution log to note the execution of the procedure and whether or
not the test procedure uncovered any defects.

32) What is Test Closure?


Test Closure is part of Software Test process. During test closure, Exit criteria
is evaluated, all Quality work products are collected. And Test Summary report
is prepared, test deliverables are sent to the customer.

33) What is Software Maintenance?


Software is deployed and support is provided to the customer to correct issues
or to add enhancements.
34) What is Software Migration?
Migrating software from old technology to new technology is called Software
migration.

35) What is Software Retirement?


Retiring the Old system and developing new system is called Software
Retirement.

36) What is Test Documentation Standard?


Test documentation standard covers test documentation across the entire
software testing life cycle.

It covers Organizational Test process documentations, Test Management


process documentation, Dynamic Test Process documentation.

37) What is Base lined Document?


Document that have been formally reviewed and agreed upon. It serves as a
basis for further development.

38) What is Version Control?


Version control allows different projects to use the same source files at the
same time, while isolating work that is not ready to be shared by the rest of the
project. It isolates work that should never be shared. It allows software
engineers to continue development along a branch even when a line of
development is frozen.

39) How to prevent Defects?


Defects can be prevented through regular reviews and inspections before the
software is given for Functional Testing.

40) What are the Project Management Activities?


The following is a list of common project-management related activities:

- Setting goals
- Establishing timetables for the project.
- Monitoring the use of time for maximum efficiency.
- Estimating the resources, both material and human, required by the project
and ensuring that they are distributed and used properly.
- Setting a budget for the project and keeping it within that budget.
- Organizing relevant documents and records.
- Analyzing the current conditions of the project and predicting future trends so
as not to be caught off-guard by changes.
- Identifying potential risks to the project and developing a risk management
plan.
- Managing the quality and finding ways to improve quality.
- Monitoring the closing stages of a project that is near completion.
- Facilitating communication among project members and between the project
members and outside stakeholders.

41) What are the types of reviews in Software Testing?

1) Code review
2) Inspection
3) Walkthrough
4) Technical review

42) What is Approval?


Approving the process as complete and allowing the project to proceed to the
next level.

43) What is Software Bidding?


A proposal to develop new software is called Software Bidding.

44) What is a PIN Document?


Project Initiation Note (PIN) document is prepared to estimate required
technologies, required time and resources for developing new software.

45) What is Software Build?


Software build refers either to the process of converting source code files into
standalone software artifact(s) that can be run on a computer, or the result of
doing so. One of the most important steps of a software build is the
compilation process where source code files are converted into executable
code.

46) What is Software Version?


Software versioning is the process of assigning either unique version names or
unique version numbers to unique states of software.
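
As an illustration of version numbering, versions can be compared part by part; the `major.minor.patch` scheme below is an assumed convention, not something the answer above prescribes:

```python
# Illustrative sketch: comparing dotted version numbers such as "1.2.10" vs "1.2.9".
# The major.minor.patch layout is an assumption for illustration only.

def parse_version(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a tuple of integers for comparison."""
    return tuple(int(part) for part in version.split("."))

# Tuples compare element by element, so "1.2.10" correctly sorts after "1.2.9";
# a naive string comparison would get this wrong.
assert parse_version("1.2.10") > parse_version("1.2.9")
assert parse_version("2.0.0") > parse_version("1.9.9")
```
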
47) What is Build Version?

A build version (or build number) is a unique identifier assigned to a particular
build of the software, usually incremented with each new build, so that defects
can be reported against the exact build in which they were found.

48) What is Traceability?


A document showing the relationship between Test Requirements and Test
cases.

49) What are the different phases in Software Application Life Cycle Management?
i) Development Phase

ii) Testing Phase

iii) Production Phase

iv) Maintenance Phase

50) What is Live Testing?


Testing with Live Data (Real Data) in Testing Environment or Production
Environment.

51) What is Software Development Environment?


A Software Development Environment is the entire environment (applications,
servers, network) that provides comprehensive facilities to computer programmers
for software development.

52) What is Software Test Environment?


A testing environment is a setup of software and hardware on which the
testing team is going to perform the testing of the newly built software. This
setup consists of the physical setup which includes hardware, and logical setup
that includes Server Operating system, client operating system, database
server, front end running environment, browser (if web application), IIS
(version on server side) or any other software components required to run the
software.

53) What is Software Production Environment?


A Production Environment is everything that it takes for the software to go live.

54) What is Change Control?


Proposed changes to baselines must have some level of review. The impact of
proposed changes must be identified and understood. When appropriate, the
approval of the CCB, key managers and project members must be obtained.
Approved changes must be properly implemented; after changes are made, all
affected parties must be notified.

55) What is Test Estimation?


An estimate is an approximate calculation of the budget, time and quantity of
resources needed for testing.

Considerable factors in Test estimation are


a) Scope of the project
b) Budget
c) Time
d) Available Environment Resources
e) Available Skilled human resources
f) Organization experience

56) How to Estimate Software test efforts?


Software Test efforts can be estimated using the following techniques
a) Function Point Analysis (FPA)
b) 3-Point Software Test Estimation
c) Use-Case Point method
d) Experience-based Techniques
e) Expert based Techniques
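
As a sketch of technique (b), 3-point estimation commonly combines an optimistic, a most likely and a pessimistic figure using the PERT weighting; the effort values below are invented sample numbers:

```python
# Minimal sketch of 3-point test estimation with the common PERT weighting.
# The day counts are made-up sample values, not from any real project.

def three_point_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT formula: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# A test task estimated at 4 days (best case), 6 days (likely), 14 days (worst case)
estimate = three_point_estimate(4, 6, 14)
print(round(estimate, 1))  # 7.0
```
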

57) How Test metrics improve Software Quality?


- Test Metrics help gauge the progress, quality and health of a software
testing effort.
- Metrics can also be leveraged to evaluate past performance, current status
and envisage future trends. Effective software metrics are simple, objective,
measurable, meaningful and have easily accessible underlying data.
- Metrics can provide a quick insight into the status of software testing efforts,
hence resulting in better control through smart decision making.

1. What is the difference between Functional Requirement and Non-Functional Requirement?

A Functional Requirement specifies what the system or application SHOULD DO,
whereas a Non-Functional Requirement specifies how the system or application
SHOULD BE.

Some functional Requirements are

 Authentication
 Business rules
 Historical Data
 Legal and Regulatory Requirements
 External Interfaces

Some Non-Functional Requirements are

 Performance
 Reliability
 Security
 Recovery
 Data Integrity
 Usability

2. How Severity and Priority are related to each other?

 Severity- tells the seriousness/depth of the bug, whereas
 Priority- tells which bug should be rectified first.
 Severity- Application point of view
 Priority- User point of view

3. Explain the different types of Severity?

1. User Interface Defect-Low


2. Boundary Related Defects-Medium
3. Error Handling Defects-Medium
4. Calculation Defects-High
5. Interpreting Data Defects-High
6. Hardware Failures& Problems-High
7. Compatibility and Intersystem defects-High
8. Control Flow defects-High
9. Load conditions (Memory leakages under load testing)-High

4. What is the difference between Priority and Severity?

The terms Priority and Severity are used in Bug Tracking to share the importance of a bug
among the team and to fix it.
Severity: Is assessed from the application point of view.

Priority: Is assessed from the user point of view.


Severity- (tells the seriousness/depth of the bug)

1. The Severity status is used to explain how badly the deviation is affecting the build.
2. The severity type is defined by the tester based on the written test cases and
functionality.

Example

If an application or a web page crashes when a remote link is clicked: clicking
the remote link is rare for a user, but the impact of the application crashing is
severe, so the severity is high and the priority is low.

PRIORITY- (tells which bug should be rectified first)

1. The Priority status is set by the tester to the developer mentioning the time frame to fix
a defect. If High priority is mentioned then the developer has to fix it at the earliest.
2. The priority status is set based on the customer requirements.

Example

If the company name is misspelled in the home page of a website, then the priority is high and
the severity is low to fix it.

Severity: Describes the bug in terms of functionality.


Priority: Describes the bug in terms of customer.

Few examples:

High Severity and Low Priority -> Application doesn't allow customer expected
configuration.
High Severity and High Priority -> Application doesn't allow multiple users.
Low Severity and High Priority -> No error message to prevent wrong operation.
Low Severity and low Priority -> Error message is having complex meaning.

Or

Few examples:

High Severity -Low priority

Suppose you try the wildest or the weirdest of operations in a software (say, to
be released the next day) which a normal user would not do, and suppose this
renders a run-time error in the application: the severity would be high. The
priority would be low, as the operations or steps which rendered this error will
in all likelihood not be performed by a user.

Low Severity -High priority

An example would be: you find a spelling mistake in the name of the website which
you are testing. Say the name is supposed to be Google and it's spelled there as
'Gaogle'. Though it doesn't affect the basic functionality of the software, it
needs to be corrected before the release. Hence, the priority is high.

High severity- High Priority

A bug which is a show stopper, i.e. a bug due to which we are unable to proceed
with our testing. An example would be a run-time error during the normal
operation of the software, which would cause the application to quit abruptly.

Low severity - low priority

Cosmetic bugs

What is Defect Severity?

A defect is a product anomaly or flaw, which is a variance from the desired product
specification. The classification of a defect based on its impact on the operation
of the product is called Defect Severity.

5. What is Bucket Testing?

Bucket testing (also known as A/B Testing) is mostly used to study the impact of
different product designs on website metrics: two versions are run simultaneously
on a single web page or a set of web pages to measure the difference in click
rates, interface and traffic.
6. What is Entry and Exit Criteria in Software Testing?

Entry Criteria are the conditions or artifacts that must be in place before testing begins, like,

 SRS (Software Requirement Specification)


 FRS (Functional Requirement Specification)
 Usecase
 Test Case
 Test plan

Exit Criteria ensure that testing is complete and the application is ready for release, like,

 Test Summary Report


 Metrics
 Defect Analysis report

7. What is Concurrency Testing?

Concurrency Testing (also commonly known as Multi User Testing) is used to find
the effects of accessing the application, code module or database by different
users at the same time. It helps in identifying and measuring problems in
response time and in the levels of locking and deadlocking in the application.

Example

LoadRunner is widely used for this type of testing: VuGen (Virtual User
Generator) is used to add the number of concurrent users and to define how the
users are added, e.g. Gradual Ramp-up or Spike Stepped.
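
The idea of many simultaneous users can also be sketched without LoadRunner; the minimal Python example below (names and numbers are purely illustrative) simulates ten concurrent user sessions updating shared state, with a lock preventing the lost updates that concurrency testing is meant to expose:

```python
# Minimal concurrency sketch: several threads act as simultaneous users
# incrementing a shared balance. The lock protects the read-modify-write
# from interleaving; without it, increments could be lost.
import threading

balance = 0
lock = threading.Lock()

def user_session(deposits: int) -> None:
    global balance
    for _ in range(deposits):
        with lock:  # serialize access to the shared state
            balance += 1

threads = [threading.Thread(target=user_session, args=(1000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 10000; any other value would indicate a lost update
```
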

8. Explain Statement coverage/Code coverage/Line Coverage?

Statement Coverage or Code Coverage or Line Coverage is a metric used in White
Box Testing which identifies the statements that were executed and the code that
was not executed because of blockage. In this process each and every line of the
code needs to be checked and executed.

Some advantages of Statement Coverage / Code Coverage / Line Coverage are

 It verifies what the written code is expected to do and not to do.


 It measures the quality of code written.
 It checks the flow of different paths in the program and also ensures that
those paths are tested.

To Calculate Statement Coverage,

Statement Coverage = Statements Tested / Total No. of Statements.

9. Explain Branch Coverage/Decision Coverage?

The Branch Coverage or Decision Coverage metric is used to check the volume of
testing done in all components. This process ensures that all the code is
exercised by verifying that every branch or decision outcome (if and while
statements) is executed at least one time, so that no branch leads to a failure
of the application.

To Calculate Branch Coverage,


Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes.
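
Both formulas above reduce to the same ratio of covered items to total items; a small sketch with made-up counts:

```python
# Applying the two coverage formulas above. The counts are invented sample
# figures, not from any real test run.

def coverage(covered: int, total: int) -> float:
    """Generic coverage ratio, expressed as a percentage."""
    return 100.0 * covered / total

statement_coverage = coverage(covered=180, total=200)  # statements executed / total statements
branch_coverage = coverage(covered=45, total=60)       # decision outcomes tested / total outcomes

print(statement_coverage)  # 90.0
print(branch_coverage)     # 75.0
```
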

10. What is the difference between High level and Low Level test case?

High level Test cases are those which cover major functionality in the application (i.e.
retrieve, update display, cancel (functionality related test cases), database test cases).
Low level test cases are those related to User Interface (UI) in the application.

11. Explain Localization testing with example?

Localization is the process of changing or modifying an application for a
particular culture or locale. This includes changes in the user interface,
graphical designs or even the initial settings according to that culture and its
requirements.

Localization Testing verifies how correctly the application has been changed or
modified for the target culture and language.

If translation of the application into the local language is required, each field
should be tested to check the correctness of the translation. Other formats, like
date conversion, and hardware and software usage, like the operating system,
should also be considered in localization testing.

Examples for Localization Testing are

In Islamic Banking all the transactions and product features are based on Shariah Law, some
important points to be noted in Islamic Banking are

1. In Islamic Banking, the bank shares the profit and loss with the customer.
2. In Islamic Banking, the bank cannot charge interest on the customer; instead they
charge a nominal fee which is termed as "Profit".
3. In Islamic Banking, the bank will not deal or invest in business like Gambling, Alcohol,
Pork, etc.

In this case, we need to test whether these Islamic banking conditions were modified and
applied in the application or product.

In Islamic Lending, they follow both the Gregorian calendar and the Hijri
calendar for calculating the loan repayment schedule. The Hijri calendar,
commonly called the Islamic calendar, is followed in Muslim countries and is
based on the lunar cycle. The Hijri calendar has 12 months and 354 days, which is
11 days shorter than the Gregorian calendar. In this case, we need to test the
repayment schedule by comparing both the Gregorian and Hijri calendars.

12. Explain Risk Analysis in Software Testing?

In Software Testing, Risk Analysis is the process of identifying risks in applications and
prioritizing them to test.

In software testing some unavoidable risks might take place, like

 Change in requirements or Incomplete requirements


 Time allocation for testing.
 Developers delaying delivery of the build for testing.
 Urgency from client for delivery.
 Defect Leakage due to application size or complexity.

To overcome these risks, the following activities can be done

 Conducting Risk Assessment review meeting with the development team.


 Profile for Risk coverage is created by mentioning the importance of each area.
 Using maximum resources on high risk areas, e.g. allocating more testers to
high risk areas and fewer resources to medium and low risk areas.
 Creating a risk assessment database for future maintenance and management
review.

13. What is the difference between Two Tier Architecture and Three Tier
Architecture?

In Two Tier Architecture or Client/Server Architecture, two layers, Client and
Server, are involved. The Client sends a request to the Server and the Server
responds to the request by fetching the data. The problem with Two Tier
Architecture is that the server cannot respond to multiple requests at the same
time, which causes data integrity issues.
The Client/Server Testing involves testing the Two Tier Architecture of user interface in the
front end and database as backend with dependencies on Client, Hardware and Servers.

In Three Tier Architecture or Multi Tier Architecture, three layers, Client,
Server and Database, are involved. The Client sends a request to the Server, the
Server sends the request on to the Database, the Database returns the data to the
Server, and the Server forwards the data to the Client.

The Web Application Testing involves testing the Three Tier Architecture including the User
interface, Functionality, Performance, Compatibility, Security and Database testing.

14. What is the difference between Static testing and dynamic testing?

Static Testing (done in Verification stage)

Static Testing is a White Box testing technique where developers verify or test
their code with the help of checklists to find errors in it. This type of testing
is done without running the actually developed application or program. Code
reviews, inspections and walkthroughs are mostly done in this stage of testing.

Dynamic Testing (done in Validation stage)

Dynamic Testing is done by executing the actual application with valid inputs to check the
expected output. Examples of Dynamic Testing methodologies are Unit Testing, Integration
Testing, System Testing and Acceptance Testing.

Some differences between Static Testing and Dynamic Testing are,

 Static Testing is more cost effective than Dynamic Testing because Static Testing is done
in the initial stage.
 In terms of Statement Coverage, the Static Testing covers more areas than Dynamic
Testing in shorter time.
 Static Testing is done before the code deployment where the Dynamic Testing is done
after the code deployment.
 Static Testing is done in the Verification stage where the Dynamic Testing is done in the
Validation stage.

15. Explain Use case diagram. What are the attributes of use cases?

A Use Case Diagram is a graphical overview of the functionality of a system. It
is used in the analysis phase of a project to specify the system to be developed.
In Use Case Diagrams the whole system is defined in terms of ACTORS, USE CASES
and ASSOCIATIONS. The ACTORS are the external parts of the system, like users,
computer software & hardware; USE CASES are the behavior or functionality of the
system when these ACTORS perform an action; the ASSOCIATIONS are the lines drawn
to show the connection between ACTORS and USE CASES. One ACTOR can link to many
USE CASES and one USE CASE can link to many ACTORS.

16. What is Web Application testing? Explain the different phases in Web Application
testing?

Web Application testing is done on a website to check its load, performance,
security, functionality, interface, compatibility and other usability related
issues. In web application testing, three phases of testing are done, they are,

Web Tier Testing

In Web tier testing, the browser compatibility of the application will be tested
for IE, Firefox and other web browsers.

Middle Tier Testing

In Middle tier testing, the functionality and security issues are tested.

Database Tier Testing

In Database tier testing, the database integrity and the contents of the database
are tested and verified.

17. Explain Unit testing, Interface Testing and Integration testing. Also explain the
types of integration testing in brief?

Unit testing

Unit Testing is done to check whether the individual modules of the source code
work properly, i.e. testing each and every unit of the application separately, by
the developer, in the developer's environment.

Interface Testing

Interface Testing is done to check whether the individual modules communicate
properly with one another as per the specifications.
Interface testing is mostly used in testing the user interface of GUI
applications.

Integration testing

Integration Testing is done to check the connectivity by combining all the
individual modules together and testing the functionality.

The types of Integration Testing are

1. Big Bang Integration Testing

In Big Bang Integration Testing, the individual modules are not integrated until
all the modules are ready. Then they are all run together to check whether the
system performs well.

In this type of testing, some disadvantages might occur like,

Defects can be found only at a later stage. It would be difficult to find out
whether the defect arose in an interface or in a module.

2. Top Down Integration Testing

In Top Down Integration Testing, the high level modules are integrated and tested
first, i.e. testing goes from the main module to the sub modules. In this type of
testing, Stubs are used as temporary modules if a module is not ready for
integration testing.

3. Bottom Up Integration Testing

In Bottom Up Integration Testing, the low level modules are integrated and tested
first, i.e. testing goes from the sub modules to the main module. Just as Stubs
are used in top down testing, here Drivers are used as temporary modules for
integration testing.

18. Explain Alpha, Beta, Gamma Testing?

Alpha Testing:

Alpha Testing is mostly like performing usability testing which is done by the in-house
developers who developed the software or testers. Sometimes this Alpha Testing is done by
the client or an outsider with the presence of developer and tester. The version release after
alpha testing is called Alpha Release.

Beta Testing:

Beta Testing is done by a limited number of end users before delivery; change
requests are raised and defects fixed based on the users' feedback and defect
reports. The version released after beta testing is called the Beta Release.

Gamma Testing:

Gamma Testing is done when the software is ready for release with the specified
requirements; it is done directly, skipping all the in-house testing activities.

19. Explain the methods and techniques used for Security Testing?

Security testing can be performed in many ways like,

1. Black Box Testing


2. White Box Testing
3. Database Testing
1. Black Box Testing

a. Session Hijacking

Session Hijacking, commonly called "IP Spoofing", is where a user session is
attacked on a protected network.

b. Session Prediction

Session prediction is a method of obtaining the data or session ID of an
authorized user to get access to the application. In a web application the
session ID can be retrieved from cookies or the URL.
A session prediction attack can be suspected when a website is not responding
normally or stops responding for an unknown reason.

c. Email Spoofing

Email Spoofing is duplicating the email header ("From" address) to make the
message look like it originated from the actual source; if the email is replied
to, the reply lands in the spammer's inbox. By inserting commands in the header,
the message information can be altered, so it is possible to send a spoofed email
with information you didn't write.

d. Content Spoofing

Content spoofing is a technique of developing a fake website and making the user
believe that the information and the website are genuine. When the user enters a
credit card number, password, SSN or other important details, the hacker can
capture the data and use it for fraud.

e. Phishing

Phishing is similar to Email Spoofing where the hacker sends a genuine look like mail
attempting to get the personal and financial information of the user. The emails will appear to
have come from well known websites.

f. Password Cracking

Password Cracking is used to identify an unknown password or to recover a forgotten password.

Password cracking can be done through two ways,

1. Brute Force – The hacker tries combinations of characters within a given
length until one is accepted.
2. Password Dictionary – The hacker uses a password dictionary, which is
available on various topics.

2. White Box level

a. Malicious Code Injection

SQL Injection is the most popular code injection attack: the hacker attaches
malicious code to the good code through an input field in the application. The
motive behind the injection is to steal secured information which was intended to
be used by a restricted set of users.

Apart from SQL Injection, the other types of malicious code injection are XPath Injection, LDAP
Injection, and Command Execution Injection. Similar to SQL Injection the XPath Injection deals
with XML document.

b. Penetration Testing:

Penetration Testing is used to check the security of a computer or a network. The test process
explores all the security aspects of the system and tries to penetrate the system.

c. Input validation:

Input validation is used to defend applications from hackers. If input is not
validated, especially in web applications, it can lead to system crashes,
database manipulation and data corruption.

d. Variable Manipulation

Variable manipulation is used as a method for specifying or editing the variables in a program.
It is mostly used to alter the data sent to web server.

3. Database Level

a. SQL Injection

SQL Injection is used to hack websites by changing the backend SQL statements;
using this technique the hacker can steal data from the database and also delete
or modify it.
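
The standard defence against SQL injection is to pass user input as query parameters rather than concatenating it into the SQL string. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented):

```python
# Parameterized queries as a defence against SQL injection, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Attacker-controlled input that would dump every row if naively concatenated
# into the SQL string as: SELECT * FROM users WHERE name = '' OR '1'='1'
user_input = "' OR '1'='1"

# With a parameter placeholder, the driver treats user_input as data, never as
# SQL, so the injection attempt simply matches no rows.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] (the injection payload is harmless)
```
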

20. Explain IEEE 829 standards and other Software Testing standards?

The IEEE 829 standard is used for Software Test Documentation, where it specifies
the format for the set of documents to be used in the different stages of
software testing. The documents are,

Test Plan- Test Plan is a planning document which has information about the scope,
resources, duration, test coverage and other details.
Test Design- Test Design document has information of test pass criteria with test conditions
and expected results.
Test Case- Test case document has information about the test data to be used.
Test Procedure- Test Procedure has information about the test steps to be followed and how
to execute it.
Test Log- Test log has details about the executed test cases, their pass/fail
status, the order of execution, and who tested them.
Test Incident Report- Test Incident Report has information about the failed test comparing
the actual result with expected result.
Test Summary Report- Test Summary Report has information about the testing done and
quality of the software, it also analyses whether the software has met the requirements given
by customer.

The other standards related to software testing are,

IEEE 1008 is for Unit Testing


IEEE 1012 is for Software verification and validation
IEEE 1028 is for Software Inspections
IEEE 1061 is for Software metrics and methodology
IEEE 1233 is for guiding the SRS development
IEEE 12207 is for SLC process
21. What is Test Harness?

A Test Harness is a configured set of tools and test data for testing an
application in various conditions; it involves comparing the monitored output
with the expected output for correctness.

The benefits of Test Harness are,

 Productivity increase due to process automation.


 Quality in the application.
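
A test harness of this kind can be sketched in a few lines; the function under test and the test data below are stand-ins for a real application and its data:

```python
# Minimal test harness sketch: drive a function under test with a table of
# (input, expected) pairs, compare actual vs expected, and report results.

def function_under_test(x: int) -> int:
    """Stand-in for the real code being tested."""
    return x * x

test_data = [  # (input, expected output)
    (0, 0),
    (3, 9),
    (-2, 4),
]

def run_harness(fn, cases):
    """Run every case and record (input, expected, actual, passed)."""
    results = []
    for given, expected in cases:
        actual = fn(given)
        results.append((given, expected, actual, actual == expected))
    return results

for given, expected, actual, passed in run_harness(function_under_test, test_data):
    print(f"input={given} expected={expected} actual={actual} -> {'PASS' if passed else 'FAIL'}")
```
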

22. What is the difference between bug log and defect tracking?

Bug Log: A Bug Log is a document showing the number of defects (open, closed,
reopened or deferred) in a particular module.

Defect Tracking- The process of tracking a defect's details, such as its symptom,
whether it is reproducible or not, its priority, severity and status.

23. What are Integration Testing and Regression Testing?

Integration Testing:

 Combining the modules together & constructing the software architecture.
 To test the communication & data flow.
 White & black box testing techniques are used.
 It is done by developers & testers.

Regression Testing

 It is re-execution of our testing after the bug is fixed to ensure that the build is free
from bugs.
 Done after bug is fixed
 It is done by Tester

24. Explain Peer Review in Software Testing?

It is an alternative form of testing, where some colleagues are invited to
examine your work products for defects and improvement opportunities.
Some peer review approaches are,

Inspection

It is a more systematic and rigorous type of peer review. Inspections are more
effective at finding defects than informal reviews are.
Ex: In Motorola's Iridium project nearly 80% of the defects were detected through
inspections, whereas only 60% of the defects were detected through formal
reviews.

Team Reviews: It is a planned and structured approach but less formal and less rigorous
comparing to Inspections.
Walkthrough: It is an informal review because the work product's author describes it to some
colleagues and asks for suggestions. Walkthroughs are informal because they typically do not
follow a defined procedure, do not specify exit criteria, require no management reporting, and
generate no metrics.
Or

A 'walkthrough' is an informal meeting for evaluation or informational purposes.
Little or no preparation is usually required.
Pair Programming: In Pair Programming, two developers work together on the same
program at a single workstation, continuously reviewing their work.

Peer Desk check

In Peer Desk check only one person besides the author examines the work product. It is an
informal review, where the reviewer can use defect checklists and some analysis methods to
increase the effectiveness.
Passaround: It is a multiple, concurrent peer desk check where several people are invited to
provide comments on the product.

25. Explain Compatibility testing with an example?

Compatibility testing evaluates the application's compatibility with the
computing environment: Operating System, Database, Browsers, backwards
compatibility, computing capacity of the hardware platform, and compatibility of
peripherals.

Example

If compatibility testing is done on a game application, before installing the
game on a computer its compatibility is checked against the computer's
specification, to verify whether the game will run on a machine with that
specification.

26. What is Traceability Matrix?

A Traceability Matrix is a document used for tracking requirements, test cases
and defects. It is prepared to satisfy the client that the test coverage is
complete end to end. It contains the Requirement/Baseline doc reference number,
Test case/Condition and Defect/Bug id. Using this document a person can track the
requirement from a given defect id.
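
One way to picture a traceability matrix is as a mapping from requirements to test cases and defects; the IDs below are invented for illustration:

```python
# Sketch of a traceability matrix as a data structure: each requirement maps to
# its test cases and any defects raised against it. All IDs are invented.

traceability = {
    "REQ-001": {"test_cases": ["TC-101", "TC-102"], "defects": ["BUG-7"]},
    "REQ-002": {"test_cases": ["TC-103"], "defects": []},
    "REQ-003": {"test_cases": [], "defects": []},  # gap: requirement not yet covered
}

# Coverage check: every requirement should have at least one test case.
uncovered = [req for req, links in traceability.items() if not links["test_cases"]]
print(uncovered)  # ['REQ-003']

# Tracking back from a defect id to the requirement it affects:
requirement_for_bug7 = [req for req, links in traceability.items()
                        if "BUG-7" in links["defects"]]
print(requirement_for_bug7)  # ['REQ-001']
```
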

27. Explain Boundary value testing and Equivalence testing with some examples?

Boundary value testing is a technique to find out whether the application accepts
the expected range of values and rejects values which fall outside that range.

Example

A user ID text box has to accept alphabetic characters (a-z) with a length of 4
to 10 characters. BVA is done like this: max = 10, pass; max-1 = 9, pass;
max+1 = 11, fail; min = 4, pass; min+1 = 5, pass; min-1 = 3, fail. Likewise we
check the corner values and come to a conclusion on whether the application
accepts the correct range of values.

Equivalence testing is normally used to check the type of the object.

Example

A user ID text box has to accept alphabetic characters (a-z) with a length of 4
to 10 characters.
For the positive condition we test the object by giving alphabetic input only,
i.e. a-z characters, and check that the object accepts the value: it should pass.
For the negative condition we test by giving input other than lowercase alphabets
(a-z), i.e. A-Z, 0-9, blank etc: it should fail.
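
The two examples above can be sketched as executable checks; the validator here is a stand-in for the real application's user ID field:

```python
# Boundary value analysis and equivalence partitioning for the user-ID example
# above (lowercase a-z, length 4-10). The validator is a stand-in, not real code.

MIN_LEN, MAX_LEN = 4, 10

def is_valid_user_id(value: str) -> bool:
    """Accept only lowercase ASCII letters, length 4 to 10."""
    return (value.isascii() and value.islower() and value.isalpha()
            and MIN_LEN <= len(value) <= MAX_LEN)

# Boundary values: min-1, min, min+1, max-1, max, max+1
for length, should_pass in [(3, False), (4, True), (5, True),
                            (9, True), (10, True), (11, False)]:
    assert is_valid_user_id("a" * length) == should_pass, length

# Equivalence classes: one representative input per class instead of every value.
assert is_valid_user_id("abcdef")      # valid class: lowercase a-z
assert not is_valid_user_id("ABCDEF")  # invalid class: uppercase
assert not is_valid_user_id("12345")   # invalid class: digits
assert not is_valid_user_id("")        # invalid class: blank
print("all boundary and equivalence cases behaved as expected")
```
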

28. What is Security testing?

Security testing is the process that determines that confidential data stays
confidential.
Or
Testing how well the system protects against unauthorized internal or external
access, willful damage, etc.
This process involves functional testing, penetration testing and verification.

29. What is Installation testing?

Installation testing is done to verify whether the hardware and software are
installed and configured properly. This ensures that all the system components
are exercised during the testing process. Installation testing also covers
testing with high volumes of data, error messages, as well as security testing.

30. What is AUT?

AUT is nothing but "Application Under Test". After the design and coding phases
of the software development life cycle, when the application comes in for
testing, it is referred to as the Application Under Test.

31. What is Defect Leakage?

Defect leakage occurs at the customer or end user side after the application has
been delivered. If, after release of the application to the client, the end user
finds defects while using the application, it is called Defect Leakage. Defect
Leakage is also called Bug Leakage.

32. What are the contents in an effective Bug report?

1. Project
2. Subject
3. Description
4. Summary
5. Detected By (Name of the Tester)
6. Assigned To (Name of the Developer who is supposed to fix the Bug)
7. Test Lead (Name)
8. Detected in Version
9. Closed in Version
10. Date Detected
11. Expected Date of Closure
12. Actual Date of Closure
13. Priority (Medium, Low, High, Urgent)
14. Severity (Ranges from 1 to 5)
15. Status
16. Bug ID
17. Attachment
18. Test Case Failed (Test case that is failed for the Bug)

33. What is Error guessing and Error seeding?

Error Guessing is a test case design technique where the tester guesses what
faults might occur and designs tests to expose them.
Error Seeding is the process of intentionally adding known faults to a program in
order to monitor their rate of detection and removal, and to estimate the number
of faults remaining in the program.
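
The arithmetic behind error seeding can be sketched as follows: if testing finds a known fraction of the seeded faults, the same detection rate is assumed to apply to the real faults. All counts below are made-up sample figures:

```python
# Defect estimation via error seeding. All numbers are invented sample figures.

def estimate_remaining_defects(seeded_total: int, seeded_found: int,
                               real_found: int) -> float:
    """Estimate how many real defects are still in the program after testing."""
    detection_rate = seeded_found / seeded_total        # e.g. 20/25 = 0.8
    estimated_real_total = real_found / detection_rate  # e.g. 40/0.8 = 50
    return estimated_real_total - real_found            # e.g. 50 - 40 = 10

# 25 faults seeded, 20 of them found, 40 real faults found during the same testing
print(estimate_remaining_defects(seeded_total=25, seeded_found=20, real_found=40))  # 10.0
```
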

34. What is Ad-hoc testing?

Ad hoc testing is concerned with testing the application without following any
rules or test cases. For ad hoc testing one should have strong knowledge of the
application.

35. What are the basic solutions for the software development problems?

1. Basic requirements- A clear, detailed, complete, achievable, testable
requirement has to be developed. Use prototypes to help pin down requirements.
In nimble environments, continuous and close coordination with
customers/end-users is needed.
2. Realistic schedules- allow enough time to plan, design, test, fix bugs,
re-test, change and document within the given schedule.
3. Adequate testing- start testing early, re-test after bugs are fixed or
changes are made, and allow enough time for testing and bug-fixing.
4. Proper study of initial requirements- be ready to accommodate changes after
development has begun and be ready to explain the changes made to others. Work
closely with customers and end-users to manage expectations. This avoids
excessive changes in the later stages.
5. Communication- conduct frequent inspections and walkthroughs at appropriate
times; ensure that information and documentation are available and up-to-date,
preferably in electronic form. Emphasize promoting teamwork and cooperation
inside the team; use prototypes and proper communication with end-users to
clarify their doubts and expectations.

36. What are the common problems in the software development process?

Inadequate requirements from the client: if the requirements given by the client are unclear,
incomplete or not testable, problems will follow.
Unrealistic schedules: sometimes too much work is given to the developer with a demand to
complete it in a short duration; problems are then unavoidable.
Insufficient testing: problems arise when the developed software is not tested
properly.
Additional work under the existing process: a request from higher management to work
on another project or task in parallel brings problems while the project is being tested as a
team.
Miscommunication: in some cases the developer is not informed about the client's
requirements and expectations, so there can be deviations.

37. What is the difference between Software Testing and Quality Assurance (QA)?
Software Testing involves operation of a system or application under controlled conditions and
evaluating the result. It is oriented to 'detection'.

Quality Assurance (QA) involves the entire software development PROCESS- monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

38. How to Test the water bottle?

Note: Before going to generate some test idea on how to test a water bottle, I would like to
ask few questions like:

1. Is the bottle made of glass, plastic, rubber, some metal, some kind of disposable
material or anything else?
2. Is it meant only for water, or can we use it with other fluids like tea, coffee, soft
drinks, hot chocolate, soups, wine, cooking oil, vinegar, gasoline, acids, molten lava (!)
etc.?
3. Who is going to use this bottle? A school-going kid, a housewife, a beverage
manufacturing company, an office-goer, a sportsman, a mob protesting in a rally (going
to use bottles as missiles), an Eskimo living in an igloo or an astronaut in a spaceship?

These kinds of questions help a tester know the product (that he is going to test) in a
better way. In our case, I am assuming that the water bottle is a pet bottle, actually
made of either plastic or glass (there are 2 versions of the product), and is
intended to be used mainly with water. As for the targeted users, even the manufacturing
company is not sure about them! (Sounds familiar? It is just like a software company
developing a product without a clear idea about the users who are going to use the software!)

Test Ideas

1. Check the dimension of the bottle. See if it actually looks like a water bottle or a
cylinder, a bowl, a cup, a flower vase, a pen stand or a dustbin! [Build Verification
Testing!]
2. See if the cap fits well with the bottle.[Installability Testing!]
3. Test if the mouth of the bottle is not too small to pour water. [Usability Testing!]
4. Fill the bottle with water and keep it on a smooth dry surface. See if it leaks. [Usability
Testing!]
5. Fill the bottle with water, seal it with the cap and see if water leaks when the bottle is
tilted, inverted, squeezed (in case of plastic made bottle)! [Usability Testing!]
6. Take water in the bottle and keep it in the refrigerator for cooling. See what happens.
[Usability Testing!]
7. Keep a water-filled bottle in the refrigerator for a very long time (say a week). See what
happens to the water and/or bottle. [Stress Testing!]
8. Keep a water-filled bottle under freezing condition. See if the bottle expands (if plastic
made) or breaks (if glass made). [Stress Testing!]
9. Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress Testing!]
10. Pour some hot (boiling!) water into the bottle and see the effect. [Stress
Testing!]
11. Keep a dry bottle for a very long time. See what happens. See if any physical or
chemical deformation occurs to the bottle.
12. Test the water after keeping it in the bottle and see if there is any chemical
change. See if it is safe to be consumed as drinking water.
13. Keep water in the bottle for sometime. And see if the smell of water changes.
14. Try using the bottle with different types of water (like hard and soft water).
[Compatibility Testing!]
15. Try to drink water directly from the bottle and see if it is comfortable to use. Or
water gets spilled while doing so. [Usability Testing!]
16. Test if the bottle is ergonomically designed and if it is comfortable to hold. Also
see if the center of gravity of the bottle stays low (both when empty and when filled with
water) so that it does not topple over easily.
17. Drop the bottle from a reasonable height (may be height of a dining table) and
see if it breaks (both with plastic and glass model). If it is a glass bottle then in most
cases it may break. See if it breaks into tiny little pieces (which are often difficult to
clean) or breaks into nice large pieces (which could be cleaned without much difficulty).
[Stress Testing!] [Usability Testing!]
18. Test the above test idea with empty bottles and bottles filled with water. [Stress
Testing!]
19. Test if the bottle is made up of material, which is recyclable. In case of plastic
made bottle test if it is easily crushable.
20. Test if the bottle can also be used to hold other common household things like
honey, fruit juice, fuel, paint, turpentine, liquid wax etc. [Capability Testing!]

39. What is Portlet Testing ?

Following are the features that should be concentrated while testing a portlet

i. Test alignment/size display with multiple style sheets and portal configurations. When you
configure a portlet object in the portal, you must choose from the following alignments:

a. Narrow portlets are displayed in a narrow side column on the portal page. Narrow portlets
must fit in a column that is fewer than 255 pixels wide.
b. Wide portlets are displayed in the middle or widest side column on the portal page. Wide
portlets fit in a column fewer than 500 pixels wide.

ii. Test all links and buttons within the portlet display. (if there are errors, check that all forms
and functions are uniquely named, and that the preference and gateway settings are
configured correctly in the portlet web service editor.)

iii. Test setting and changing preferences. (if there are errors, check that the preferences are
uniquely named and that the preference and gateway settings are configured correctly in the
portlet web service editor.)

iv. Test communication with the backend application. Confirm that actions executed through
the portlet are completed correctly. (if there are errors, check the gateway configuration in the
portlet web service editor.)

v. Test localized portlets in all supported languages. (if there are errors, make sure that the
language files are installed correctly and are accessible to the portlet.)

vi. If the portlet displays secure information or uses a password, use a tunnel tool to confirm
that any secure information is not sent or stored in clear text.
vii. If backwards compatibility is supported, test portlets in multiple versions of the portal.

40. What is Equivalence Partitioning?

Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes
of input conditions called equivalence classes are identified such that each member of a class
causes the same kind of processing and output to occur. The tester identifies various
equivalence classes for partitioning. A class is a set of input conditions that is likely to be
handled the same way by the system. If the system were to handle one case in the class
erroneously, it would handle all cases in that class erroneously.

41. Why Learn Equivalence Partitioning?

Equivalence partitioning drastically cuts down the number of test cases required to test a
system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the
smallest number of test cases.

DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING

To use equivalence partitioning, you will need to perform two steps.

1. Identify the equivalence classes


2. Design test cases

STEP 1:

IDENTIFY EQUIVALENCE CLASSES: Take each input condition described in the specification and
derive at least two equivalence classes for it. One class represents the set of cases which
satisfy the condition (the valid class) and one represents cases which do not (the invalid class).

Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value input to the system must be within a range of
values, identify one valid class (inputs which are within the valid range) and two invalid
equivalence classes (inputs which are too low and inputs which are too high). For example, if an
item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. One valid class: QTY is greater than or equal to -9999 and less than or equal to
9999. This is written as (-9999 <= QTY <= 9999)
2. One invalid class: QTY is less than -9999, also written as (QTY < -9999)
3. One invalid class: QTY is greater than 9999, also written as (QTY > 9999)

b) If the requirements state that the number of items input to the system at some point must
lie within a certain range, specify one valid class where the number of inputs is within the
valid range, one invalid class where there are too few inputs and one invalid class where
there are too many inputs.
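The QTY example above can be sketched as executable checks. The `validate_qty` function below is a hypothetical system under test, invented here so that one representative value per equivalence class can be exercised:

```python
# Equivalence partitioning for the QTY range in the text:
# one valid class (-9999 <= QTY <= 9999) and two invalid classes.
def validate_qty(qty):
    """Hypothetical system under test: accepts QTY only within range."""
    return -9999 <= qty <= 9999

# One representative value per equivalence class is enough:
cases = [
    (0, True),        # valid class: within range
    (-10000, False),  # invalid class: too low
    (10000, False),   # invalid class: too high
]
for value, expected in cases:
    assert validate_qty(value) == expected
print("all equivalence-class representatives passed")
```

Three test cases cover the whole input space, which is the point of the technique: any other value in a class is assumed to be handled the same way as its representative.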

42. What are two types of Metrics?

1. Process metrics: primary metrics are also called process metrics. These are the metrics
that Six Sigma practitioners care about and can influence. A primary metric is almost
always a direct output characteristic of a process; it measures the process itself, not a
high-level business objective. Typical primary process metrics are process defects,
process cycle time and process consumption.
2. Product metrics: Product metrics quantitatively characterize some aspect of the
structure of a software product, such as a requirements specification, a design, or
source code.

43. What is the Outcome of Testing?

A stable application, performing its task as expected.

44. Why do you go for White box testing, when Black box testing is available?

Black box testing certifies the commercial (business) and functional (technical) aspects of
the application. Loops, structures, arrays, conditions, files, etc. are micro-level elements,
but they are the foundation of any application, so white box testing is needed to examine and
test these internals in detail.

45. What is Baseline document, Can you say any two?

A baseline document is one that builds the tester's understanding of the application before
actual testing starts. Two examples are the Functional Specification and the Business
Requirement Document.

46. Tell names of some testing type which you learnt or experienced?

Any 5 or 6 types related to the company's profile are good to mention in the interview:

1. Ad - Hoc testing
2. Cookie Testing
3. CET (Customer Experience Test)
4. Depth Test
5. Event-Driven
6. Performance Testing
7. Recovery testing
8. Sanity Test
9. Security Testing
10. Smoke testing
11. Web Testing

47. What exactly is Heuristic checklist approach for unit testing?

A heuristic is a method in which the most appropriate of several solutions found by alternative
methods is selected at successive stages of testing. The checklist prepared to guide this
selection is called a heuristic checklist.

48. What is a Data Guideline?

Data guidelines are used to specify the data required to populate the test bed and prepare test
scripts. They include all data parameters that are required to test the conditions derived from
the requirement/specification. The documents which support preparing test data are called
data guidelines.

49. Why do you go for Test Bed?


When a test condition is executed, its actual result should be compared with the expected
result. Test data is needed for this, and that is where the test bed comes in: it is where the
test data is made ready.

50. Why do we prepare test condition, test cases, test script (Before Starting
Testing)?

These are test design documents which are used to execute the actual testing; without them,
execution of testing is impossible. Ultimately this execution is going to find the bugs to be
fixed, so we must prepare these documents.

51. Is it not waste of time in preparing the test condition, test case & Test Script?

No document prepared in any process is a waste of time. Test design documents, which play a
vital role in test execution, can never be called a waste of time, as without them proper
testing cannot be done.

52. How do you go about testing of Web Application?

To approach web application testing, the first attack on the application should be on its
performance behavior, as that is very important for a web application, followed by the transfer
of data between the web server and front-end server, the security server and the back-end server.

53. What kind of Document you need for going for a Functional testing?

Functional specification is the ultimate document, which expresses all the functionalities of the
application and other documents like user manual and BRS are also need for functional testing.
Gap analysis document will add value to understand expected and existing system.

54. Can the System testing be done at any stage?

No. The system as a whole can be tested only if all modules are integrated and all modules
work correctly. System testing should be done after unit and integration testing and before
UAT (User Acceptance Testing).

55. What is Mutation testing & when can it be done?

Mutation testing is a powerful fault-based testing technique for unit-level testing. Since it is
a fault-based technique, it is aimed at uncovering specific kinds of faults, namely simple
syntactic changes to a program. Mutation testing is based on two assumptions: the competent
programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes
that competent programmers tend to write nearly "correct" programs. The coupling effect states
that a set of test data that can uncover all simple faults in a program is also capable of
detecting more complex faults. Mutation testing injects faults into code to determine optimal
test inputs.

56. Why it is impossible to test a program completely?

With any software other than the smallest and simplest program, there are too many inputs,
too many outputs, and too many path combinations to fully test. Also, software specifications
can be subjective and be interpreted in different ways.
57. How will you review the test case and how many types are there?

There are 2 types of review:

Informal Review: review by the technical lead.

Peer Review: review by a peer in the same organization (for example, a walkthrough or a
technical inspection).

Or, more broadly, reviews can be classified as:

1. Management Review
2. Technical Review
3. Code Review
4. Formal Review (Inspections and Audits)
5. Informal Review (Peer Review and Code Review)

The objectives of reviews (including walkthroughs) are:

1. To find defects in requirements.
2. To find defects in design.
3. To identify deviations in any process and provide valued suggestions to improve the
process.

58. What do you mean by Pilot Testing?

 Pilot testing involves having a group of end users try the system prior to its full
deployment in order to give feedback on its features and functions.
Or
Pilot testing is a testing activity which resembles the production environment.
 It is done between UAT and the production drop.
 A few users simulate the production environment, carrying out real business activity
with the system.
 They check the major functionality of the system before it goes into production. This
is basically done to avoid high-level disasters.
 The priority of pilot testing is high, and issues raised in pilot testing have to be fixed
as soon as possible.

59. What is SRS and BRS in manual testing?

BRS is the Business Requirement Specification: the client who wants the application built gives
this specification to the software development organization, and the organization then converts
it into the SRS (Software Requirement Specification) according to the needs of the
software.

60. What is Smoke Test and Sanity Testing? When will use the Above Tests?

Smoke Testing: done to make sure the build we received is testable, i.e. to check the
testability of the build; also called the "day 0" check. Done at the build level.
Sanity Testing: done during the release phase to check the main functionalities
without going deeper; sometimes called a subset of regression testing. When no rigorous
regression testing is done on the build, sanity testing covers that part by checking the major
functionalities. Done at the release level.

61. What is debugging?

Debugging is finding and removing "bugs" which cause the program to respond in a way that is
not intended.

62. What is determination?

Determination has different meanings in different situations. Determination means a strong


intention or a fixed intention to achieve a specific purpose. Determination, as a core value,
means to have strong will power in order to achieve a task in life. Determination means a
strong sense of self-devotion and self-commitment in order to achieve or perform a given task.
The people who are determined to achieve various objectives in life are known to succeed
highly in various walks of life.

Another way, it could also mean calculating, ascertaining or even realizing a specific amount,
limit, character, etc. It also refers to a certain result of such ascertaining or even defining a
certain concept.

It can also mean to reach at a particular decision and firmly achieve its purpose.

63. What is exact difference between Debugging & Testing?

Testing is finding an error/problem, and it is done by testers, whereas debugging is finding
the root cause of the error/problem, and that is taken care of by developers.
Or
Debugging- is removing the bug and is done by developer.

Testing - is identifying the bug and is done by tester.

64. What is fish model can you explain?

Fish model explains the mapping between different stages of development and testing.

Phase 1

Information gathering takes place and here the BRS document is prepared.

Phase 2

Analysis takes place

During this phase, development people prepare SRS document which is a combination of
functional requirement specification and system requirement specification. During this phase,
testing people are going for reviews.

Phase-3

Design phase
Here HLD and LLD high level design document and low level design documents are prepared
by development team. Here, the testing people are going for prototype reviews.

Phase-4

Coding phase

Developers write the code, and white box testing is conducted by the testing team.

Phase-5

Testing phase

Black box testing takes place, carried out by the black box test engineers.

Phase-6

release and maintenance.

65. What is Conformance Testing?

The process of testing that an implementation conforms to the specification on which it is


based. Usually applied to testing conformance to a formal standard.

66. What is Context Driven Testing?

The context-driven school of software testing is a flavor of Agile Testing that advocates
continuous and creative evaluation of testing opportunities in light of the potential information
revealed and the value of that information to the organization right now.

67. What is End-to-End testing?

Similar to system testing, the 'macro' end of the test scale involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications, or
systems if appropriate.

68. When the testing should be ended?

Testing is a never-ending process, but certain factors allow it to terminate. These factors
include: most of the tests have been executed, the project deadline is reached, the test budget
is depleted, or the bug rate falls below the agreed criteria.

69. What is Parallel/Audit Testing?

Testing where the user reconciles the output of the new system to the output of the current
system to verify the new system performs the operations correctly.

70. What are the roles of glass-box and black-box testing tools?

Black-box testing

It is not based on knowledge of internal design or code. Tests are based on requirements and
functionality. Black box testing is used to find the errors in the following.

1. Interface errors
2. Performance errors
3. Initialization errors
4. Incorrect or missing functionality
5. Errors while accessing external database

Glass-box testing

It is based on internal design of an application code. Tests are based on path coverage, branch
coverage, and statement coverage. It is also known as White Box testing.

White box test cases can check that:

1. All independent paths within a module are executed at least once
2. All loops are executed
3. All logical decisions are exercised
4. Internal data structures are exercised to ensure their validity

71. What is your experience with change control? Our development team has only 10
members. Do you think managing change is such a big deal for us?

Whenever modifications happen to the actual project, all corresponding documents are updated
with the new information, so as to keep the documents always in sync with the product at any
point of time. This matters regardless of team size.

72. What is GAP ANALYSIS?

Gap analysis can be done using a traceability matrix, which means tracing each individual
requirement in the SRS to the various work products.

73. How do you know when your code has met specifications?

With the help of a traceability matrix. All the requirements are traced to test cases. When
all the test cases have been executed and passed, it is an indication that the code has met
the requirements.
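The two answers above can be sketched as a small traceability check. All requirement IDs, test IDs and results below are made up for illustration; the idea is that every requirement must trace to at least one passing test case, and anything untraced is a gap:

```python
# Minimal traceability-matrix check (hypothetical IDs and results).
requirements = ["REQ-1", "REQ-2", "REQ-3"]
traceability = {            # requirement -> test cases covering it
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
}                           # REQ-3 has no entry: a coverage gap
results = {"TC-1": "pass", "TC-2": "pass", "TC-3": "fail"}

report = {}
for req in requirements:
    tests = traceability.get(req, [])
    if not tests:
        report[req] = "gap"          # requirement not traced to any test
    elif all(results[t] == "pass" for t in tests):
        report[req] = "passing"
    else:
        report[req] = "failing"

print(report)  # {'REQ-1': 'passing', 'REQ-2': 'failing', 'REQ-3': 'gap'}
```

The code meets the specification only when every requirement reports "passing"; gaps and failures point at exactly which requirement needs attention.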

74. At what stage of the life cycle does testing begin in your opinion?

Testing is a continuous process, and it starts as soon as the requirements for the project/
product begin to be framed.
Requirements phase: testing is done to check whether the project/product details reflect the
client's ideas, i.e. whether they give a view of the complete project from the client's
perspective (as he wished it to be).

75. What are the properties of a good requirement?

Requirement specifications are important, and working from them is one of the most reliable
methods of preventing problems in a complex software project. Requirements are the details
describing an application's externally perceived functionality and properties. Requirements
should be clear, complete, reasonably detailed, cohesive, attainable and testable.
76. How do you scope, organize, and execute a test project?

The scope can be defined from the BRS, SRS, FRS or from functional points; it may be anything
that is provided by the client. Regarding organizing, we need to analyze the functionality to
be covered, decide who will test which modules, and weigh the pros and cons of the application.
Identifying the number of test cases, resource allocation and the risks we need to mitigate all
come into the picture.
Once this is done, it is very easy to execute based on the plan we have chalked out.

77. How would you ensure 100% coverage of testing?

We cannot perform 100% testing on any application, but the criteria to ensure test completion
on a project are:

1. All the test cases are executed with a certain percentage passing.
2. The bug rate falls below a certain level
3. The test budget is depleted
4. Deadlines are reached (project or test)
5. All the functionalities are covered by test cases
6. All critical & high bugs have a status of CLOSED

78. How do you go about testing a web application?

Ideally, to test a web application, the components and functionality on both the client and
server side should be tested, but that is practically impossible.

The best approach is to examine the project's requirements, set priorities based on risk
analysis, and then determine where to focus testing efforts within budget and schedule
constraints. To test a web application we need to perform testing for both the GUI and the
client-server architecture.

Based on factors like project requirements, risk analysis, budget and schedule, we can
determine what kind of testing is appropriate for the project. We can perform unit and
integration testing, functionality testing, GUI testing, usability testing, compatibility
testing, security testing, performance testing, recovery testing and regression testing.

79. What are your strengths?

I'm well motivated, well organized, a good team player and dedicated to my work. I have a
strong desire to succeed, and I'm always ready and willing to learn new information and skills.

80. When should you begin testing?

For any project, testing activity is present from the start. After requirements gathering, the
design documents (high level and low level) are prepared and tested to verify that they conform
to the requirements. After design comes coding, where white box testing is done. Once the build
or system is ready, integration testing followed by functional testing is done, until the
product or project is stable. After the product or project is stable, testing is stopped.

81. When should you begin test planning?


Test planning is done by the test lead. Test planning begins when the TRM is finalized by the
project manager and handed over to the test lead. The test lead then has these
responsibilities:

1. Testing team formation
2. Identifying tactical risks
3. Preparing the test plan
4. Reviews of the test plan

82. Would you like to work in a team or alone, why?

I would like to work in a team, because the process of software development is like a relay
race where many runners have to contribute in their respective laps. Teamwork is important
because the complexity of the work and the degree of effort required are beyond the level of
an individual.

83. When should testing Start in a project? Why?

Testing is a continuous activity carried out at every stage of the project. You first test
everything that you get from the client. As a tester (technical tester), my work starts as soon
as the project starts.

84. Have you ever created a test plan?

This is just a sample answer: "I have never created a test plan myself. I developed and
executed test cases, but I was actively involved with my team leader while creating test
plans."

85. Define quality for me as you understand it

It is software that is reasonably bug-free and delivered on time and within the budget, meets
the requirements and expectations and is maintainable.

86. What is the role of QA in a development project?

For the Quality Assurance group to assure quality, it must monitor the whole development
process. QA concentrates on the prevention of bugs.

It must set standards, introduce review procedures, and educate people into better ways to
design and develop products.

87. How involved where you with your Team Lead in writing the Test Plan?

To my knowledge, team members are usually out of scope while preparing the test plan; the test
plan is a higher-level document for the testing team. A test plan includes purpose, scope,
customer/client scope, schedule, hardware, deliverables, test cases etc., and is derived from
the PMP (Project Management Plan). Team members just go through the test plan to learn their
responsibilities and the deliverables of their modules.
The test plan is an input document for the whole testing team as well as the test lead.

88. What processes/methodologies are you familiar with?

Methodologies:

1. Spiral methodology
2. Waterfall methodology (these two are the older methods)
3. Rational Unified Process (RUP), from IBM
4. Rapid Application Development (RAD), associated with Microsoft

89. What is globalization testing?

The goal of globalization testing is to detect potential problems in application design that could
inhibit globalization. It makes sure that the code can handle all international support without
breaking functionality that would cause either data loss or display problems.

90. What is base lining?

Base lining: Process by which the quality and cost effectiveness of a service is assessed,
usually in advance of a change to the service. Base lining usually includes comparison of the
service before and after the Change or analysis of trend information. The term Benchmarking
is normally used if the comparison is made against other enterprises.

For example:

If the company has different projects, there will be a separate test plan for each project.
These test plans should be accepted by peers in the organization after modifications. The
modified test plans become the baseline for the testers to use in the different projects.
Whenever further modifications are made to a test plan, the newly modified version becomes the
baseline, because the test plan is the basis for running the testing project.

91. Define each of the following and explain how each relates to the other: Unit,
System and Integration testing.

Unit testing

Testing of each individual unit (program).

Integration testing

The integration of some units is called a module; testing these modules together is called
integration testing (module testing).

System testing

This is the bottleneck stage of a project. This testing is done after integration of all
modules, to check whether the build meets all the requirements of the customer. Unit and
integration testing are white box testing, typically done by programmers. System testing is
black box testing, which can be done by people who do not know programming. The hierarchy of
this testing is: unit testing, then integration testing, then system testing.

92. Who should you hire in a testing group and why?

Testing is an interesting part of the software cycle, and it is responsible for providing a
quality product to the customer. It involves finding bugs, which is difficult and challenging,
and that is why I want to be part of a testing group.

93. What do you think the role of test-group manager should be? Relative to senior
management? Relative to other technical groups in the company? Relative to your
staff?

ROLES OF test-group manager INCLUDE

 Defect find and close rates by week, normalized against level of effort (are we finding
defects, and can developers keep up with the number found and the ones necessary to
fix?)
 Number of tests planned, run, passed by week (do we know what we have to test, and
are we able to do so?)
 Defects found per activity vs. total defects found (which activities find the most
defects?)
 Schedule estimates vs. actual (will we make the dates, and how well do we estimate?)
 People on the project, planned vs. actual by week or month (do we have the people we
need when we need them?)
 Major and minor requirements changes (do we know what we have to do, and does it
change?)

94. What criteria do you use when determining when to automate a test or leave it
manual?

Time and budget are both key factors in determining whether a test stays manual or can be
automated. Apart from that, automation is valuable in areas such as functional, regression,
load and user-interface testing, for accurate and repeatable results.

95. How do you analyze your test results? What metrics do you try to provide?

Test results are analyzed to identify the major causes of defects and the phase that introduced
most of the defects. This can be achieved through cause/effect analysis or Pareto analysis.
Analysis of test results can provide several test metrics, where metrics are measures used to
quantify the software, software development resources and the software development process. A
few metrics we can provide are:

Defect density = total number of defects reported during testing / size of the project

Test effectiveness = t / (t + UAT)
where t: total number of defects recorded during testing

and UAT: total number of defects recorded during user acceptance testing

Defect removal efficiency (DRE) = (total number of defects removed / total number of defects injected) * 100

96. How do you perform regression testing?

Regression testing is carried out both manually and with automation. Automated tools are
mainly used for regression testing because it is focused on repeatedly testing the same
application after changes: new functionality, fixes for previous bugs, design changes etc.
Regression testing involves re-executing the test cases that we ran earlier to find defects.
Whenever any change takes place in the application we should make sure the previous
functionality is still available without any break. For this reason one should do regression
testing on the application by re-running the previously written test cases.
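A toy sketch of that idea: keep the previously passing (input, expected) pairs and re-run them after every change. The `discount` function and its suite are invented examples, not from the original answer:

```python
# Function under test: 10% off orders of 100 or more (illustrative only).
def discount(price):
    return price * 0.9 if price >= 100 else price

# Previously written test cases, re-executed after every code change.
regression_suite = [(50, 50), (100, 90.0), (200, 180.0)]

def run_regression(fn, suite):
    """Return the list of (input, expected, actual) triples that regressed."""
    return [(inp, exp, fn(inp)) for inp, exp in suite if fn(inp) != exp]

print(run_regression(discount, regression_suite))  # [] -> no regressions
```

An empty result means the old functionality still works after the change; any entry pinpoints a regression.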

97. Describe to me when you would consider employing a failure mode and effect
analysis

FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique and quality method that
enables the identification and prevention of process or product errors before they occur. Failure
modes and effects analysis (FMEA) is a disciplined approach used to identify possible failures of
a product or service and then determine the frequency and impact of the failure.
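A common FMEA convention (not stated in the answer above) rates each failure mode's severity, occurrence and detection on 1-10 scales and prioritises by the Risk Priority Number, RPN = S * O * D. A minimal sketch, with invented failure modes:

```python
# Minimal FMEA sketch; the failure modes and ratings are purely illustrative.
failure_modes = [
    {"mode": "payment timeout",   "severity": 9,  "occurrence": 3, "detection": 4},
    {"mode": "typo on label",     "severity": 2,  "occurrence": 6, "detection": 2},
    {"mode": "data loss on save", "severity": 10, "occurrence": 2, "detection": 7},
]

# RPN = severity * occurrence * detection.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Address the highest-risk failure modes first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"{fm['mode']}: RPN={fm['rpn']}")
```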

98. What is UML and how to use it for testing?

The Unified Modeling Language is a third-generation method for specifying, visualizing, and
documenting the artifacts of an object-oriented system under development. From the inside,
the Unified Modeling Language consists of three things:

1. A formal metamodel
2. A graphical notation
3. A set of idioms of usage

99. What you will do during the first day of job?

In my present company, HR introduced me to my colleagues, and I learned the following things:

1. What is the organization structure?
2. What is the current project being developed, on what domain, etc.
3. To whom I have to report, and what my other responsibilities are.

100. What is IEEE? Why is it important?

The Institute of Electrical and Electronics Engineers: an organization of engineers, scientists
and students involved in electrical, electronics, and related fields. It is important because it
functions as a publishing house and standards-making body.

101. Define Verification and Validation. Explain the differences between the two.

Verification - Evaluation done at the end of a phase to determine that the requirements
established during the previous phase have been met. Generally, verification refers to the
overall software evaluation activity, including reviewing, inspecting, checking and auditing.

Validation: - The process of evaluating s/w at the end of the development process to ensure
compliance with requirements. Validation typically involves actual testing and takes place after
verification is complete.

Or

Verification: Whether we are building the product right?

Validation: Whether we are building the right product/System?


102. Describe a past experience with implementing a test harness in the
development Of software

Harness: an arrangement of straps for attaching a horse to a cart.


Test Harness: this class of tool supports the processing of tests by making it almost painless to

1. Install a candidate program in a test environment
2. Feed it input data
3. Simulate, via stubs, the behavior of subsidiary modules.

104. What would you like to do five years from now?

I would like to be in a managerial role, ideally working closely with external clients. I have
worked in client-facing roles for more than two years and I enjoy the challenge of keeping the
customer satisfied. I think it's something I'm good at. I would also like to take on additional
responsibility within this area, and possibly in other areas as well. Finally, I'd like to be on the
right career path towards eventually becoming a Senior Manager within the company. I'm very
aware that these are ambitious goals; however, I feel that through hard work
and dedication they are quite attainable.

105. Define each of the following and explain how each relates to the other: Unit,
System, and Integration testing

 Unit testing comes first. Performed by a developer.
 Integration testing comes next. Performed by a tester.
 System testing comes last. Performed by a tester.

106. What is IEEE? Why is it important?

"Institute of Electrical & Electronic Engineers": an organization of engineers, scientists and
students involved in electrical, electronics, and related fields. It also functions as a publishing
house and standards-making body.

107. What is the role of QA in a company that produces software?

The role of QA in the company is to ensure the quality of the software and that it meets
all the requirements of its customers before delivering the product.

108. How would you build a test team?

Building a test team depends on a number of factors. Firstly, you have to consider the
complexity of the application or project that is going to be tested; next, the testing time
allotted and the levels of testing to be performed. With all these parameters in mind you can
decide the skills and experience level of your testers, and how many testers you need.

109. In an application currently in production, one module of code is being modified.
Is it necessary to re-test the whole application or is it enough to just test
functionality associated with that module?

It depends on the functionality related to that module. We need to check whether that
module is inter-related with other modules. If it is related to other modules, we need to test
the related modules too. Otherwise, if it is an independent module, there is no need to test
other modules.

110. What are ISO standards? Why are they important?

ISO 9000 specifies requirements for a Quality Management System overseeing the production
of a product or service. It is not a standard for ensuring a product or service is of quality;
rather, it attests to the process of production, and how it will be managed and reviewed.

For example, a few:

ISO 9000:2000
Quality management systems. Fundamentals and vocabulary

ISO 9000-1:1994
Quality management and quality assurance standards. Guidelines for selection and use

ISO 9000-2:1997
Quality management and quality assurance standards. Generic guidelines for the application of
ISO 9001, ISO 9002 and ISO 9003

ISO 9000-3:1997
Quality management and quality assurance standards. Guidelines for the application of ISO
9001:1994 to the development, supply, installation and maintenance of computer software

ISO 9001:1994
Quality systems. Model for quality assurance in design, development, production, installation
and servicing

ISO 9001:2000

Quality management systems. Requirements

111. What is the Waterfall Development Method and do you agree with all the steps?

The waterfall approach is a traditional approach to software development. It works well if the
project is a small one (not complex); real-time projects need a spiral methodology as the SDLC.
Some product-based development can follow waterfall if it is not complex. Production cost is
less if we follow the waterfall method.

112. What is migration testing?

Changing an application, or changing its version, and then conducting testing is migration
testing: the testing of programs or procedures used to convert data from existing systems for
use in replacement systems.

113. Explain basic testing terminology. Why is testing necessary?

Testing Terminologies:

Error: a human action that produces an incorrect result.


Fault: a manifestation of an error in software.

Failure: a deviation of the software from its expected delivery or service.

Reliability: the probability that the software will not cause the failure of the system for a
specified time under specified conditions.

Why Testing is Necessary

Testing is necessary because software is likely to have faults in it and it is better (cheaper,
quicker and more expedient) to find and remove these faults before it is put into live
operation. Failures that occur during live operation are much more expensive to deal with than
failures that occur during testing prior to the release of the software. Of course, other
consequences of a system failing during live operation include the possibility of the software
supplier being sued by the customers!

Testing is also necessary so we can learn about the reliability of the software (that is, how
likely it is to fail within a specified time under specified conditions).

114. What is UAT testing? When it is to be done?

UAT stands for User Acceptance Testing. It is carried out from the user's perspective, by the
end users along with testers, to validate the functionality of the application before accepting
it. It is usually done before a release and is also called pre-production testing.

115. How to find that tools work well with your existing system?

We need to do market research on various tools depending on the type of application
we are testing. Say we are testing an application made in VB with an Oracle database;
then WinRunner is going to give good results. But in some cases it may not, say if your
application uses a lot of third-party grids and modules which have been integrated into the
application. So it depends on the type of application you are testing.

We also need to know what sort of testing will be performed. If you need to test
performance, you cannot use a record-and-playback tool; you need a performance testing
tool such as LoadRunner.

116. What is the difference between a test strategy and a test plan?

Test plan: it is the plan for testing. It defines the scope, approach and environment.

Test strategy: a test strategy is not a document; it is a framework for making
decisions about value.

117. What is Scenarios in term of testing?

We define a scenario by the following definition: a set of test cases
that ensures the business process flows are tested from end to end. It may be independent
tests or a series of tests that follow each other, each dependent on the output of the previous
one. The terms test scenario and test case are often used synonymously.

118. Explain the differences between White-box, Gray-box, and Black-box testing?
Black box testing Tests are based on requirements and functionality. Not based on any
knowledge of internal design or code.

White box testing Tests are based on coverage of code statements, branches, paths,
conditions. Based on knowledge of the internal logic of an application's code.

Gray Box Testing A Combination of Black and White Box testing methodologies, testing a
piece of software against its specification but using some knowledge of its internal workings.

119. What is structural and behavioral Testing?

Structural Testing

It is basically the testing of code which is called white box testing.

Behavioral Testing

It is also called functional testing: the functionality of the software is tested. This kind
of testing is called black box testing.

Structural Testing

It's a white box testing technique. Since the testing is based on the internal structure of the
program/code, it is called structural testing.

Behavioral Testing:

It's a black box testing technique. Since the testing is based on the external
behavior/functionality of the system/application, it is called behavioral testing.

120. How does unit testing play a role in the development / Software lifecycle?

We can catch simple bugs, like GUI and small functional bugs, during unit testing. This reduces
testing time and overall saves project time. If the developer doesn't catch this type of bug, it
will reach the integration testing phase, and if it is caught by a tester it has to go through a
full bug life cycle, which consumes a lot of time.
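For instance, a minimal unit test a developer might run before hand-off; the `add_to_cart` function is an invented example, not from the original answer:

```python
import unittest

def add_to_cart(cart, item):
    """Function under test (illustrative): returns a new cart with the item appended."""
    return cart + [item]

class TestAddToCart(unittest.TestCase):
    def test_item_is_appended(self):
        self.assertEqual(add_to_cart([], "book"), ["book"])

    def test_original_cart_unchanged(self):
        cart = ["pen"]
        add_to_cart(cart, "book")
        self.assertEqual(cart, ["pen"])  # no side effect on the caller's list

# Run the two unit tests programmatically and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddToCart)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```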

121. What made you pick testing over another career?

Testing is one aspect which is very important in the Software Development Life Cycle (SDLC). I
like to be part of the team which is responsible for the quality of the application being
delivered. Also, QA has broad opportunities and large scope for learning various technologies,
and of course it has a lot more opportunities than development.

Sample Test Case:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Sample Bug Report:

S. No | Links | Bug ID | Description | Initial Bug Status | Retesting Bug Status | Conf Bug Status
What is baseline testing?
Baseline testing is the process of running a set of tests to capture performance information. Baseline
testing uses the information collected to make changes in the application to improve its performance
and capabilities. A baseline compares the present performance of the application with its
own previous performance.

What is benchmark testing?


Benchmark testing is the process of comparing application performance against an industry
standard set by some other organization. A benchmark tells us where our application
stands with respect to others: it compares our application's performance with another
company's application's performance.

What is verification and validation?


Verification: the process of evaluating the work-products of a development phase to determine
whether they meet the specified requirements for that phase.

Validation: the process of evaluating software during or at the end of the development process to
determine whether it satisfies the specified requirements.

Difference between Verification and Validation:

- Verification is static testing, whereas validation is dynamic testing.
- Verification takes place before validation.
- Verification evaluates plans, documents, requirements and specifications, whereas validation
evaluates the product.
- Verification inputs are checklists, issue lists, walkthroughs and inspections, whereas validation
involves testing of the actual product.
- Verification output is a set of documents, plans, specifications and requirement documents, whereas
in validation the actual product is the output.

Explain Branch Coverage and Decision Coverage.


- Branch coverage is testing performed in order to ensure that every branch of the software is
executed at least once. To perform branch coverage testing we take the help of the control flow
graph.

- Decision coverage testing ensures that every decision-taking statement is executed at least once.

- Both decision and branch coverage testing are done to assure the tester that no branch or
decision-taking statement will lead to failure of the software.

- To Calculate Branch Coverage:


Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes.
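The formula can be illustrated by hand-instrumenting a tiny function; the `classify` function and the trace set stand in for what a real coverage tool records:

```python
# The single decision `age >= 18` has two outcomes, so total outcomes = 2.
outcomes_exercised = set()

def classify(age):
    if age >= 18:
        outcomes_exercised.add("age>=18 -> True")
        return "adult"
    outcomes_exercised.add("age>=18 -> False")
    return "minor"

assert classify(20) == "adult"   # exercises the True outcome
assert classify(10) == "minor"   # exercises the False outcome

# Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes
total_decision_outcomes = 2
branch_coverage = len(outcomes_exercised) / total_decision_outcomes
print(branch_coverage)  # 1.0 -> both outcomes of the decision were tested
```

With only one of the two test inputs, the coverage would drop to 0.5, showing an untested branch.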

What is difference between Retesting and Regression testing?


The differences between Retesting and Regression testing are below:

- Retesting is done to verify that a previous defect fix is now working correctly, whereas regression is
performed to check that the defect fix has not impacted other functionality that was working fine
before the changes were made in the code.

- Retesting is specific and is performed on the bug which is fixed, whereas regression is not
always specific to any defect fix; it is performed whenever any bug is fixed.

- Retesting is concerned with executing those test cases that failed earlier, whereas regression is
concerned with executing test cases that passed in earlier builds.

- Retesting has higher priority over regression.


What is Mutation testing & when can it be done?
Mutation testing is performed to find defects in the program, in a specific module or
component of the application. Mutation testing is based on two assumptions:

Competent programmer hypothesis: according to this hypothesis, we suppose that programmers
write nearly correct code.
Coupling effect: according to this effect, a collection of different sets of test data can also find
large and complex bugs.
In this testing we insert a few bugs into the program to examine the optimal test inputs.
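A toy sketch of the idea: seed one fault (a "mutant") and check whether the test data detects it. The functions and inputs below are illustrative, not from the original answer:

```python
# Original code and a mutant with one seeded fault ('+' mutated to '-').
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # the injected bug

test_inputs = [(2, 3), (0, 0), (5, 5)]

def kills_mutant(orig, mut, inputs):
    """A test set is adequate for this mutant if some input exposes it."""
    return any(orig(a, b) != mut(a, b) for a, b in inputs)

print(kills_mutant(original, mutant, test_inputs))  # True: (2, 3) gives 5 vs -1
```

If no input killed the mutant, the test set would be too weak and needs stronger inputs; a mutation tool repeats this over many generated mutants.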

What is severity and priority of bug? Give some example.


Priority: concerns the application from the business point of view.

It answers: how quickly do we need to fix the bug? Or how soon should the bug get fixed?

Severity: concerns the functionality of the application.

It answers: how much is the bug affecting the functionality of the application?

Ex.

1. High Priority and Low Severity:


If a company logo is not properly displayed on their website.

2. High Priority and High Severity:


Suppose you are doing online shopping and filled payment information, but after submitting the
form, you get a message like "Order has been cancelled."

3. Low Priority and High Severity:


If we have a scenario in which the application crashes, but that scenario occurs rarely.

4. Low Priority and Low Severity:


There is a spelling mistake, e.g. "You have registered success" where "successfully" is intended.

Explain bug leakage and bug release.


Bug leakage: when a customer or end user discovers a bug which should have been detected by the
testing team, or when a bug is detected which could have been detected in a previous build, this is
called bug leakage.

Bug release: when a build is handed over with the knowledge that a defect is present in the
release. The priority and severity of such a bug is low. It is done when the customer wants the
application on time and can tolerate the bug in the release rather than the delay in getting the
application and the cost involved in removing that bug. These bugs are mentioned in the release
notes handed to the client for future improvement.

What is alpha and beta testing?


Alpha testing: is performed by the in-house developers and the software QA team. After alpha
testing the software is handed over for additional testing in an environment that is similar to the
client environment.

Beta testing: after alpha testing, beta testing becomes active. It is performed by end users, so that
they can make sure that the product is bug free and working as per the requirements. A few select
prospective customers or the general public perform beta testing.
What is Monkey testing?
Monkey testing is a type of black box testing used mostly at the unit level. In it the tester enters
data in any format and checks that the software does not crash. In this testing we use smart
monkeys and dumb monkeys.

Smart monkeys are used for load and stress testing; they will help in finding bugs. They are very
expensive to develop.

Dumb monkeys are important for basic testing. They help in finding those bugs which have
high severity. Dumb monkeys are less expensive compared to smart monkeys.

Example: in a phone number field, symbols are entered.

What is test driver and test stub?


- The Stub is called from the software component to be tested. It is used in top down approach.
- The driver calls a component to be tested. It is used in bottom up approach.
- Both test stub and test driver are dummy software components.

We need test stub and test driver because of following reason:

- Suppose we want to test the interface between modules A and B and we have developed only
module A. We cannot test that interface with module A alone, but if a dummy module B (a stub) is
prepared, using that we can test module A.

- Conversely, if module B is ready but the module that calls it is not, module B cannot send or
receive data directly; in these cases we have to feed data to it through some external component.
This external component is called a driver.
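A minimal sketch of both dummies, assuming two invented modules A and B (the names and the order-status example are purely illustrative):

```python
# Stub: dummy replacement for the unfinished module B, returning canned data.
def module_b_stub(order_id):
    return {"order_id": order_id, "status": "SHIPPED"}

# Module under test; in production it would call the real module B.
def module_a(order_id, fetch_status=module_b_stub):
    info = fetch_status(order_id)
    return f'Order {info["order_id"]} is {info["status"]}'

# Driver: calls the component under test the way a real caller would.
def test_driver():
    assert module_a(42) == "Order 42 is SHIPPED"
    return "PASS"

print(test_driver())
```

In a top-down build the stub fills in for missing lower modules; in a bottom-up build the driver fills in for the missing upper caller.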

What is random testing?


When a tester performs testing of the application using random input from the input domain of the
system, this is random testing.

Random testing involves the following procedure:

- Selection of the input domain.
- Randomly selecting an input from the input domain.
- Performing testing of the application using these test inputs.
- The results are compared to the system specification. The test is a failure if any input leads to
incorrect results; otherwise it is a success.
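The procedure above can be sketched against a trivial specification; here the built-in `abs` serves as the oracle and the domain bounds are arbitrary choices:

```python
import random

# Implementation under test (illustrative).
def absolute(x):
    return x if x >= 0 else -x

random.seed(0)  # make the random run reproducible
for _ in range(100):
    x = random.randint(-1000, 1000)          # input domain: [-1000, 1000]
    assert absolute(x) == abs(x), f"failed for input {x}"  # compare to spec
print("100 random inputs matched the specification")
```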

What is Agile Testing?


Agile testing means quickly validating the client requirements and making the application of
good quality with respect to the user interface. When the build is released to the testing team,
testing of the application is started to find bugs. As testers, we need to focus on the customer or
end-user requirements. We put in the effort to deliver a quality product in spite of the short time
frame, which further helps in reducing the cost of development, and test feedback is implemented
in the code, which avoids defects coming from the end user.
Describe Use Case Testing.
Use case: a use case is a description of the process which is performed by the end user for a
particular task. A use case contains a sequence of steps which are performed by the end user to
complete a specific task, or a step-by-step process that describes how the application and end user
interact with each other. A use case is written from the user's point of view.

Use case testing: use case testing uses these use cases to evaluate the application, so that
the tester can examine all the functionalities of the application. Use case testing covers the whole
application.

What is the purpose of test strategy?


We need Test Strategy for the following reasons:

1. To have a signed, sealed, and delivered document, where the document contains details about the
testing methodology, test plan, and test cases.
2. Test strategy document tells us how the software product will be tested.
3. Test strategy document helps to review the test plan with the project team members.
4. It describes the roles, responsibilities and the resources required for the test and schedule.
5. When we create a test strategy document, we have to put into writing any testing issues requiring
resolution.

The test strategy is decided first, before lower level decisions are made on the test plan, test design,
and other testing issues.

Explain bug life cycle.


Bug Life Cycle:

- When a tester finds a bug, the bug is assigned the status NEW or OPEN.

- The bug is assigned to the development project manager, who will analyze the bug and check
whether it is a valid defect. If it is not valid, the bug is rejected and its status is REJECTED.

- If valid, the defect is next checked for scope. When the bug is not part of the current
release, such defects are POSTPONED.

- Now, the tester checks whether a similar defect was raised earlier. If yes, the defect is assigned
the status DUPLICATE.

- When the bug is assigned to a developer, the bug is given the status IN-PROGRESS.

- Once the code is fixed, the defect is assigned the status FIXED.

- Next the tester will re-test the code. In case the test case passes, the defect is CLOSED.

- If the test case fails again, the bug is RE-OPENED and assigned to the developer. That's all for
the bug life cycle.

What is Error guessing and Error seeding?


Error Guessing is a test case design technique where the tester has to guess what faults might occur
and to design the tests to represent them.

Error Seeding is the process of adding known faults intentionally in a program for the reason of
monitoring the rate of detection & removal and also to estimate the number of faults remaining in
the program.
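One common way to use seeded faults is a capture-ratio estimate of the faults remaining: if testing finds the same fraction of seeded and real faults, then estimated real faults ≈ seeded * real_found / seeded_found. A sketch with invented figures:

```python
# Error-seeding estimate of latent defects; all figures are hypothetical.
def estimate_remaining(seeded, seeded_found, real_found):
    """Estimate real faults still in the program from the seeded-fault find rate."""
    estimated_real_total = seeded * real_found / seeded_found
    return estimated_real_total - real_found

# 20 faults seeded, 16 of them found, 40 genuine faults found:
print(estimate_remaining(20, 16, 40))  # 10.0 faults estimated still latent
```

The estimate assumes seeded faults are found as easily as real ones, which is only roughly true in practice.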
Explain Compatibility testing with an example.
Compatibility testing evaluates the application's compatibility with the computing environment:
operating system, database, browser compatibility, backwards compatibility, computing
capacity of the hardware platform and compatibility of peripherals. For example, if compatibility
testing is done on a game application, before installing the game on a computer its compatibility is
checked against the computer's specification, i.e. whether it is compatible with a computer having
that specification or not.

What is Test Harness?


A test harness is a collection of software and test data required to test the application by running
it in different testing conditions (stress, load, data-driven) and monitoring its behavior and
outputs. A test harness contains two main parts:

- Test execution engine


- Test script repository

By contrast, automation testing is the use of a tool to control the execution of tests and compare
the actual results with the expected results. It also involves the setting up of test pre-conditions.
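The two harness parts named above, a script repository and an execution engine, can be sketched minimally; every name below is illustrative:

```python
# Test script repository: here simply a list of test functions.
def test_addition():
    assert 2 + 3 == 5

def test_upper():
    assert "qa".upper() == "QA"

script_repository = [test_addition, test_upper]

# Test execution engine: runs each script and records pass/fail.
def execution_engine(scripts):
    results = {}
    for script in scripts:
        try:
            script()
            results[script.__name__] = "PASS"
        except AssertionError:
            results[script.__name__] = "FAIL"
    return results

print(execution_engine(script_repository))
```

A real harness adds environment setup, test data loading and behavior monitoring around this same loop.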

Explain Statement coverage.


Statement coverage is a metric used in white box testing. Statement coverage is used to ensure
that every statement in the program code is executed at least once. The advantages of statement
coverage are:

- Verifies that the written code is correct.
- Measures the quality of the code written.
- Determines the control flow of the program.

To calculate statement coverage:

Statement Coverage = Statements Tested / Total No. of Statements.
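The formula can be illustrated by hand-instrumenting a tiny function; the trace set stands in for what a real coverage tool records, and `grade` is an invented example:

```python
executed = set()

def grade(score):
    executed.add("s1")      # s1: initialise result
    result = "fail"
    executed.add("s2")      # s2: the if-test itself
    if score >= 50:
        executed.add("s3")  # s3: only runs for passing scores
        result = "pass"
    executed.add("s4")      # s4: return
    return result

grade(30)                   # executes s1, s2, s4 -> 3 of 4 statements
print(len(executed) / 4)    # 0.75 statement coverage
grade(80)                   # now s3 also runs
print(len(executed) / 4)    # 1.0
```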

What are the types of testing?


There are two types of testing:

- Static testing: static testing is a technique used in the earlier phases of the development life
cycle. Code error detection and execution of the program are not a concern in this type of testing,
which is also known as a non-execution technique. Verification of the product is performed in this
testing technique: code reviews, inspections and walkthroughs are mostly done in this stage.

- Dynamic testing: dynamic testing is concerned with the execution of the software. This technique
is used to test the dynamic behavior of the code. Most of the bugs are identified using this
technique. These are the validation activities. It uses different methodologies to perform testing,
such as unit tests, integration tests, system tests and acceptance testing, etc.

Explain User acceptance testing.


User Acceptance Testing (UAT) is performed by the end users on the applications before accepting
the application.

Alpha testing: is performed by the in-house developers and software QA team. After alpha testing
the software is handed over for the beta testing phase, for additional testing in an environment
that is similar to the client environment.

Beta testing: is performed by the end users, so that they can make sure that the product is bug
free and working as per the requirements. A few select prospective customers or the general
public perform beta testing.

Gamma Testing: Gamma Testing is done when the software is ready for release with specified
requirements. This testing is done directly by skipping all the in-house testing activities.
What should be done after a bug is found?
After finding the bug, the first step is for the bug to be logged in a bug report. Then this bug needs
to be communicated and assigned to developers who can fix it. After the bug is fixed by the
developer, the fix should be re-tested, and determinations made regarding requirements for
regression testing, to check that the fix didn't create problems elsewhere.

What if the software is so buggy it can't really be tested at all?


In this situation the testers should go through the process of reporting the bugs, with the focus
being on critical bugs. Since this type of problem can severely affect schedules and indicates
deeper problems in the software development process, project managers should be notified and
provided with some documentation.

What are the types of maintenance?


There are four types of maintenance. They are:

- Corrective Maintenance
- Adaptive Maintenance
- Perfective Maintenance
- Preventive Maintenance

What are the advantages of waterfall model?


The advantages of the waterfall model are:

- Simple to implement and requires a smaller amount of resources.
- After every phase an output is generated.
- Helps in methods of analysis, design, coding, testing and maintenance.
- Preferred in projects where quality is more important than schedule and cost.
- Systematic and sequential model.
- Proper documentation of the project.

What is Rapid Application Development model (RAD)?


Rapid Application Development (RAD) is an incremental software development process model that
focuses on developing the project in a very short time. It is an enhanced version of the waterfall
model. It is proposed when requirements and solutions can be modularized as independent system
or software components, each of which is developed by a different team. After these smaller
system components are developed, they are integrated to produce the large software system
solution.

What are the advantages of black box testing?


The advantages of this type of testing include:

- Developer and tester are independent of each other.
- The tester does not need knowledge of any programming languages.
- The test is done from the point of view of the user.
- Test cases can be designed when specifications are complete.
- Testing helps to identify issues related to functional specifications.
What is software review?
A software review can be defined as a filter for the software engineering process. The purpose of
any review is to discover errors in the analysis, design, and coding, testing and implementation
phases of the software development cycle. The other purpose of a review is to see whether
procedures are applied uniformly and in a manageable manner. It is used to check the process
followed to develop the software is right.

What is reverse engineering?


By analyzing a final product the process of recreating a design is known as reverse engineering.
Reverse engineering is the process followed in order to find difficult, unknown, and hidden
information about a software system. It is important when software products lack proper
documentation, and are highly unstructured, or their structure has degraded through a series of
maintenance efforts. Maintenance activities cannot be performed without a complete understanding
of the software system.

What is data flow diagram?


The Data Flow Diagram gives us information about the flow of data within the application.

- The DFD can be used to analyze the design of the application.


- It is a graphical representation of the structure of the data.
- A developer draws context level DFD first showing interaction between the different components
of the application.
- DFD help in developing the software by clarifying the requirements and major functionalities.
- DFDs show the flow of data through a system.
- It is an important modeling tool that allows us to picture a system as a network of functional
processes.

What is exploratory testing?


Exploratory testing means testing an application without a test plan and test scripts. In
exploratory testing the tester explores the application on the basis of his knowledge; he has no
prior knowledge of the application. He explores the application like an end user and tries to use it.
While using the application his main motive is to find the bugs which are in the application.

What is compatibility testing?


Compatibility testing is a type of testing used to find out the compatibility between the application
and platform on which application works, web browsers, hardware, operating systems etc. Good
software must be compatible with different hardware, web browser and database.

What is SRS and BRS document?


A Software Requirements Specification (SRS) is the documented form of the customer's
requirements. It consists of all the requirements of the customer regarding the software to be
developed. The SRS document works as an agreement between the company and the customer,
consisting of all functional and non-functional requirements.

A Business Requirement Specification (BRS) contains the requirements as described by the
business people. The business tells "what" they want the application to do. In simple words, the
BRS contains the functional requirements of the application.
Can you explain V model in manual testing?
V model: it is an enhanced version of the waterfall model where each level of the development
lifecycle is verified before moving to the next level. In this model, testing starts at the very
beginning; by testing we mean verification by means of reviews and inspections, i.e. static testing.
Each level of the development life cycle has a corresponding test plan, developed to prepare for
the testing of the products of that phase. By developing the test plans, we can also define the
expected results for testing the products of that level, as well as the entry and exit criteria for each
level.

What is Concurrency Testing?


Concurrency Testing is used to know the effects of multiple users using the software at the
same time. In this type of testing we have multiple users performing the exact same requests at the
same time. It helps in identifying and measuring problems in response time and in levels of locking
and deadlocking in the application. For this, LoadRunner's VUGen (Virtual User Generator) can be
used to add concurrent users and perform operations on the application at the same time.

What is an inspection in software testing?


An inspection is more formalized than a walkthrough. The inspection technique involves 3 to 8 team
members, including a moderator, a reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements specification or a test plan, and the
purpose is to find problems and see what is missing; most problems are found during the
preparation. The result of the inspection meeting should be a written report. It is one of the most
cost-effective methods of ensuring quality.

A form has four mandatory fields to be entered before you submit. How many test cases are
required to verify this, and what are they?


Five test cases are required:

1. Enter data in all the mandatory fields and submit; it should not display an error message.
2. Enter data in only two mandatory fields and submit; it should issue an error message.
3. Do not enter data in any of the fields and submit; it should issue an error message.
4. If the fields accept only numbers, enter numbers in all the fields and submit; it should not issue
an error message. Then enter alphabets in two fields and numbers in the other two and submit; it
should issue an error message.
5. If the fields do not accept special characters, enter special characters and submit; it should issue
an error message.
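As a sketch, the first three of these test cases can be expressed as executable checks against a hypothetical form validator (the function and field names are made up for illustration, not from any real framework):

```python
# Hypothetical form validator used to illustrate the mandatory-field test cases.
def validate_form(fields):
    """Return an error message if any mandatory field is empty, else None."""
    mandatory = ["name", "email", "phone", "address"]  # assumed field names
    missing = [f for f in mandatory if not fields.get(f)]
    if missing:
        return "Error: missing " + ", ".join(missing)
    return None

# Test 1: all mandatory fields filled -> no error message
assert validate_form({"name": "A", "email": "a@b.c",
                      "phone": "123", "address": "X"}) is None
# Test 2: only two mandatory fields filled -> error message
assert validate_form({"name": "A", "email": "a@b.c"}) is not None
# Test 3: no fields filled -> error message
assert validate_form({}) is not None
```

The numeric and special-character cases (4 and 5) would follow the same pattern with a validator that also checks field content.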

What is Cyclomatic Complexity?


Cyclomatic complexity measures the complexity of software using the control flow graph of the
software. It is a graphical representation consisting of the following:

Nodes: the statements of the program are taken as the nodes of the graph.

Edges: the flow of control is denoted by edges. An edge connects two nodes, showing the flow of
control from one node to the other in the program.

Using these nodes and edges we calculate the complexity of the program.
This determines the minimum number of test cases needed to cover all independent paths through the program.
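As a sketch, cyclomatic complexity can be computed from a control flow graph with the standard formula V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components); the adjacency-list format below is an assumption for illustration:

```python
# Minimal sketch: cyclomatic complexity V(G) = E - N + 2P for a control
# flow graph given as an adjacency list (node -> list of successor nodes).
def cyclomatic_complexity(graph, components=1):
    nodes = len(graph)
    edges = sum(len(targets) for targets in graph.values())
    return edges - nodes + 2 * components

# Control flow graph of a single if/else: entry -> (then | else) -> exit.
cfg = {
    "entry": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # 4 edges - 4 nodes + 2 = 2
```

A value of 2 matches the intuition that a single if/else has two independent paths to test.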

What is Requirement Traceability Matrix?


The Requirements Traceability Matrix (RTM) is a tool to make sure that the project requirements
remain covered throughout the whole development process. The RTM is used in the development
process for the following reasons:

- To determine whether the developed project meets the requirements of the user.
- To make sure all the requirements given by the user are accounted for.
- To make sure the application requirements can be checked during the verification process.

What is difference between Pilot and Beta testing?


The differences between these two are listed below:

- A beta test happens when the product is about to be released to the end user, whereas pilot testing
takes place in an earlier phase of the development cycle.

- In beta testing the application is given to a few users to make sure that it meets the user
requirements and does not contain any showstopper, whereas in pilot testing a selected group of
team members and users give their feedback to improve the quality of the application.

Describe how to perform risk analysis during software testing?


Risk analysis is the process of identifying risks in the application and prioritizing them for testing.
Following are some of the risks:

1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize them into three categories:

- High magnitude: the bug impacts other functionality of the application.
- Medium: tolerable in the application, but not desirable.
- Low: tolerable; this type of risk has no impact on the company's business.

What is Silk Test?


Silk Test is a tool for performing regression and functional testing of an application. Silk Test is
used when testing applications based on Windows, Java, the web, or traditional client/server
architectures. Silk Test helps in preparing and managing test plans, provides direct access to the
database, and supports validation of fields.

What is difference between Master Test Plan and Test Plan.


The differences between a Master Test Plan and a test plan are given below:

- The Master Test Plan covers all the testing to be performed and the risk-involved areas of the
application, whereas a test case document contains test cases.

- The Master Test Plan contains the details of each and every individual test to be run during the
overall development of the application, whereas a test plan describes the scope, approach, resources
and schedule of one set of testing activities.

- The Master Test Plan contains a description of every test that is going to be performed on the
application during the testing cycle (unit test, system test, beta test etc.), whereas a test plan only
contains the description of a few test cases.

- A Master Test Plan is created for large projects; when the same document is created for a small
project we call it a test plan.

How to deal with not reproducible bug?


A bug cannot be reproduced for the following reasons:

1. Low memory.
2. Addressing a non-available memory location.
3. Things happening in a particular sequence.

A tester can do the following things to deal with a non-reproducible bug:

- Include steps that are close to the error statement.
- Evaluate the test environment.
- Examine and evaluate the test execution results.
- Keep resource and time constraints in mind.

What is the difference between coupling and cohesion?


The difference between coupling and cohesion is discussed below:

- Cohesion is the degree to which the elements of a module belong together; it combines related
functionality into a single unit. Coupling is the degree to which one module depends on other
modules.
- Cohesion deals with how closely the functions within a single module are related, whereas
coupling deals with how much one module is dependent on the other modules within the
application.

- It is good to increase cohesion within the software, whereas increasing coupling should be avoided.
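A small illustrative sketch of the distinction (the class and function here are made up): related calculations are kept together in one unit (high cohesion), while other code depends only on its public methods rather than its internal data layout (low coupling):

```python
# High cohesion: all order-total calculations live in one unit.
class OrderTotals:
    def __init__(self, items):
        self.items = items  # list of (price, qty) tuples

    def subtotal(self):
        return sum(price * qty for price, qty in self.items)

    def tax(self, rate=0.1):
        return self.subtotal() * rate  # assumed 10% rate for illustration

# Low coupling: the report depends only on the public methods above,
# not on how OrderTotals stores its data internally.
def print_report(totals):
    print(f"subtotal={totals.subtotal():.2f} tax={totals.tax():.2f}")

print_report(OrderTotals([(10.0, 2), (5.0, 1)]))  # subtotal=25.00 tax=2.50
```

If the internal representation of `items` changed, only `OrderTotals` would need editing; `print_report` would be untouched — which is exactly the benefit of low coupling.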

What is the role of QA in a project development?


The role of Quality Assurance is discussed below:

- The QA team is responsible for monitoring the process carried out for development.
- The QA team is responsible for planning the test execution process.
- The QA lead creates the timetables and agrees on a Quality Assurance plan for the product.
- The QA team communicates the QA process to the team members.
- The QA team ensures traceability of test cases to requirements.

When do you choose automated testing over manual testing?


The choice between automated testing and manual testing can be based upon the following factors:

1. Frequency of use of the test case
2. Time comparison (automated scripts run much faster than manual execution)
3. Reusability of the automation scripts
4. Adaptability of the test case for automation
5. Exploitation of the automation tool

What are the key challenges of software testing?


Following are some challenges of software testing:

1. The application should be stable enough to be tested.
2. Testing is always done under time constraints.
3. Understanding the requirements.
4. Domain knowledge and understanding of the business user's perspective.
5. Deciding which tests to execute first.
6. Testing the complete application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training.

What is difference between QA, QC and Software Testing?


Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of
process which is followed to produce a quality product. QA tracks the outcomes and adjusts the
process to meet the expectation.

Quality Control (QC): concerned with the quality of the product. QC finds defects and suggests
improvements. The process set by QA is implemented by QC. QC is the responsibility of the
tester.

Software Testing: the process of ensuring that the product developed by the developer meets the
user requirements. The motive of testing is to find bugs and make sure that they get fixed.

What is concurrent user hits in load testing?


When multiple users, without any time difference, hit the same event of the application under load
test, it is called a concurrent user hit. A concurrency point is added so that multiple virtual users
can act on a single event of the application. With a concurrency point, virtual users that reach the
point early will wait for the other virtual users still running the script. Only when all the users have
reached the concurrency point do they start hitting the requests.
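The rendezvous behaviour described above can be sketched with a thread barrier — a simplified stand-in for a load tool's concurrency point (the virtual-user function and names are illustrative):

```python
# Sketch of a concurrency point: every simulated virtual user waits at the
# barrier until all of them arrive, then they fire the request together.
import threading

NUM_USERS = 5
barrier = threading.Barrier(NUM_USERS)
hits = []
lock = threading.Lock()

def virtual_user(user_id):
    # ... per-user setup would happen here, at different speeds ...
    barrier.wait()            # rendezvous: wait for all virtual users
    with lock:
        hits.append(user_id)  # stand-in for hitting the same event at once

threads = [threading.Thread(target=virtual_user, args=(i,))
           for i in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(hits))  # 5 -- all users hit only after everyone arrived
```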

What is difference between Front End Testing and Back End Testing?


The differences between front end and back end testing are:

- Front End Testing is performed on the Graphical User Interface (GUI), whereas Back End Testing
involves database testing.

- The front end is the web site's look, where the user interacts, whereas the back end is the
database where the data is stored.

- When the end user enters data in the GUI of the front-end application, this entered data is stored
in the database. To save this data into the database we write SQL queries.

What is Automated Testing?


Automation testing is the process of performing testing automatically, which reduces human
intervention. It is carried out with the help of automation tools like QTP, Selenium, WinRunner etc.
In automation testing we use a tool that runs a test script to test the application; this test script can
be generated manually or automatically. When testing is completed, the tool automatically
generates the test report and logs.

What is Testware?
The testware is:

- The subset of software which helps in performing the testing of an application.


- Testware is required to plan, design, and execute tests. It contains documents, scripts, inputs,
expected results, set-up and additional software or utilities used in testing.
- Testware is the term given to the combination of all utilities and application software required for
testing a software package.

Testware is special because it has:

1. Different purpose
2. Different metrics for quality and
3. Different users

What is Exhaustive Testing?


Exhaustive testing, as the name suggests, means testing every component in the application with
every possible combination of inputs. According to the principles of testing, exhaustive testing is
impossible, because it requires prohibitive time and effort to test the application for all possible
inputs. This would lead to high cost and delay in the release of the application.

What is Gray Box Testing?


Gray box testing is a hybrid of black box and white box testing. In gray box testing, the test
engineer has knowledge of the internals of the component and designs test cases or test data based
on system knowledge. This knowledge of the code is less than that required for white box testing.
Based on this knowledge the test cases are designed, and the software application under test is
treated as a black box, with the tester testing the application from outside.

What is Integration Testing?


Integration testing is black box testing. Integration testing focuses on the interfaces between units,
to ensure that the units work together to complete a specified task. The purpose of integration
testing is to confirm that the different components of the application interact with each other. Test
cases are developed with the purpose of exercising the interfaces between the components.
Integration testing is considered complete when actual results and expected results are the same.
Integration testing is done after unit testing. There are mainly three approaches to integration
testing:

- Top-down approach: tests the components by integrating from top to bottom.
- Bottom-up approach: integration takes place from the bottom of the control flow up to the
higher-level components.
- Big bang approach: the different modules are joined together to form a complete system and
then testing is performed on it.

What is Scalability Testing?


Scalability testing is performed in order to evaluate and improve the functional and performance
capabilities of the application, so that it can meet the requirements of the end users as usage grows.
Scalability is measured by evaluating the application's performance under load and stress
conditions; depending on this evaluation we improve and enhance the capabilities of the
application.

What is Software Requirements Specification?


- A software requirements specification is a document which acts as a contract between the
customer and the supplier.

- The SRS contains all the requirements of the end user regarding the application. The SRS can be
used as a communication medium between the customer and the supplier.

- The developers and testers build and examine the application based on the requirements written
in the SRS document.

- The SRS document is prepared by the Business Analyst by gathering all the requirements from
the customer.

What is Storage Testing?


In storage testing we test those functionalities of the application which are responsible for storing
data into the database. The data entered by the end user in the GUI or front end is the same data
that gets stored in the database. Storage testing verifies that the data taken from the front end of
the application is stored in the correct place and in the correct manner in the database.

What is Stress Testing?


Stress testing tests the software to check that the application does not crash when we increase the
stress on it, for example by increasing the number of users working on the application. We can also
apply stress by firing lots of processes that the application cannot all handle. We perform stress
testing to evaluate the application's capabilities at or beyond the limits of its specified
requirements. Generally, this is a type of performance testing performed under a very high level of
load and stress.

What is Test Harness?


A test harness is a collection of software and test data required to test the application by running it
in different testing conditions like stress, load and data-driven testing, and monitoring its behavior
and outputs.
Test Harness contains two main parts:

- Test execution engine


- Test script repository

Automation testing is the use of a tool to control the execution of tests and compare the actual
results with the expected results. It also involves the setting up of test pre-conditions.

Can you define test driver and test stub?


- A stub is called from the software component to be tested. It is used in the top-down approach.
- A driver calls the component to be tested. It is used in the bottom-up approach.
- Both test stubs and test drivers are dummy software components.

We need test stubs and test drivers for the following reasons:

- Suppose we want to test the interface between modules A and B and we have developed only
module A. We cannot test the interface yet, but if a dummy module standing in for B is prepared,
we can use it to test module A. This dummy module is the stub.

- Conversely, if only module B is developed, there is no module A to call it, so we need an external
piece of code to invoke module B and pass it test data. This external feature is called a driver.
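A minimal sketch of a stub and a driver in Python (the module names and data are illustrative, not from any real codebase):

```python
# Stub: stands in for the not-yet-built module B, returning a canned response.
def module_b_stub(order_id):
    return {"order_id": order_id, "status": "shipped"}

# Module A, the unit under test: formats whatever module B reports.
def module_a(order_id, fetch_status=module_b_stub):
    record = fetch_status(order_id)
    return f"Order {record['order_id']} is {record['status']}"

# Driver: a small piece of code whose only job is to call the unit under
# test with test data and check the result (bottom-up style).
def driver():
    assert module_a(42) == "Order 42 is shipped"
    print("module A passed")

driver()
```

The stub lets module A be exercised before B exists; the driver plays the opposite role, invoking a completed unit that nothing else calls yet.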

What is good design?


Design refers to functional design or internal design. Good internal design is indicated by software
code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust
with sufficient error-handling and status logging capability, and works correctly when implemented.
Good functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements.

What makes a good QA or Test manager?


A good QA or Test manager should have following characteristics:

- Knowledge about Software development process


- Improve teamwork to increase productivity
- Improve cooperation between software, test, and QA engineers
- Improve the QA processes
- Communication skills
- Ability to conduct meetings and keep them focused

What is Manual scripted Testing and Manual Support testing?


Manual Scripted Testing: a testing method in which the test cases are designed and reviewed by
the team before executing them. It is done by manual testing teams.

Manual-Support Testing: a testing technique that involves testing all the functions performed by
people while preparing data and using data from an automated system. It is conducted by testing
teams.
What is Fuzz testing, backward compatibility testing and
assertion testing?
Fuzz Testing: testing an application by entering invalid, unexpected, or random data. This testing
is performed to ensure that the application does not crash when given incorrect or unformatted
data.

Backward Compatibility Testing: Testing method which examines performance of latest software
with older versions of the test environment.

Assertion Testing: Type of testing consisting in verifying if the conditions confirm the product
requirements.
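A minimal fuzz-testing sketch, using Python's own `int()` as a stand-in target (the harness and names are illustrative, not from any real fuzz tool):

```python
# Tiny fuzz harness: feed random strings to a target and check it never
# crashes the process with an uncaught error.
import random
import string

def fuzz_once(rng):
    data = "".join(rng.choice(string.printable)
                   for _ in range(rng.randint(0, 20)))
    try:
        int(data)  # target under test: may reject input, must not crash
    except ValueError:
        pass       # rejecting bad input gracefully is acceptable
    return True    # no uncaught crash for this input

rng = random.Random(0)  # fixed seed so the run is reproducible
assert all(fuzz_once(rng) for _ in range(1000))
print("1000 fuzz inputs survived")
```

Real fuzzers (mutation-based or coverage-guided) are far more sophisticated, but the idea is the same: hammer the input surface with data no specification anticipated.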

How does a client or server environment affect testing?


There are many environmental factors that affect testing, like the speed of data transfer, hardware,
servers etc. While working with client/server technologies, testing is extensive. When we have a
time limit, we do integration testing. In most cases we prefer load, stress and performance testing
to examine the capabilities of the application in the client/server environment.

What are the categories of defects?


There are three main categories of defects:

- Wrong: the requirements are implemented incorrectly in the application.

- Missing: a requirement was given by the customer but the application fails to meet it.

- Extra: a requirement incorporated into the product that was not given by the end customer. This
is always a variance from the specification, but may be an attribute desired by the user of the
product.

1. What is the MAIN benefit of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.
2. What is risk-based testing?
Risk-based Testing is the term used for an approach to creating a Test Strategy that is based on
prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of
risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20%
discount for orders of 100 or more printer cartridges. You have been asked to prepare test
cases using various values for the number of printer cartridges ordered. Which of the
following groups contain three test inputs that would be generated using Boundary Value
Analysis?
4, 5, 99
4. What is the KEY difference between preventative and reactive approaches to testing?
Preventative tests are designed early; reactive tests are designed after the software has been
produced.
5. What is the purpose of exit criteria?
The purpose of exit criteria is to define when a test level is completed.
6. What determines the level of risk?
The likelihood of an adverse event and the impact of the event determine the level of risk.
7. When is used Decision table testing?
Decision table testing is used for testing systems for which the specification takes the form of rules
or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs
in the same column but below the inputs. The remainder of the table explores combinations of
inputs to define the outputs produced.
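As a sketch, a decision table can be checked in code by asserting one expected outcome per rule (the loan-approval conditions here are made up for illustration):

```python
# System under test: a rule expressed as cause-effect combinations.
def approve_loan(has_income, good_credit):
    return has_income and good_credit

# Decision table: one row per rule, covering every combination of inputs.
rules = [
    # has_income, good_credit, expected outcome
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]
for has_income, good_credit, expected in rules:
    assert approve_loan(has_income, good_credit) == expected
print("all 4 rules pass")
```

Each row becomes one test case, which is why decision tables are a natural fit when the specification itself takes the form of rules.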
8. What is the MAIN objective when reviewing a software deliverable?
To identify defects in any software work product.
9. Which of the following defines the expected results of a test? Test case specification or test
design specification.
Test case specification defines the expected results of a test.
10. What is the benefit of test independence?
It avoids author bias in defining effective tests.
11. As part of which test process do you determine the exit criteria?
The exit criteria is determined on the bases of 'Test Planning'.
12. What is beta testing?
Testing performed by potential customers at their own locations.
13. Given the following fragment of code, how many tests are required for 100% decision
coverage?
if width > length
then biggest_dimension = width
     if height > width
     then biggest_dimension = height
     end_if
else biggest_dimension = length
     if height > length
     then biggest_dimension = height
     end_if
end_if
4
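The fragment above can be sketched in Python to see why four tests are needed — one for each reachable decision outcome (a hedged translation; variable names follow the pseudocode):

```python
# Python translation of the pseudocode fragment under test.
def biggest(width, length, height):
    if width > length:
        biggest_dimension = width
        if height > width:
            biggest_dimension = height
    else:
        biggest_dimension = length
        if height > length:
            biggest_dimension = height
    return biggest_dimension

# Four tests cover every decision outcome:
assert biggest(3, 2, 5) == 5  # width>length True,  height>width  True
assert biggest(3, 2, 1) == 3  # width>length True,  height>width  False
assert biggest(2, 3, 5) == 5  # width>length False, height>length True
assert biggest(2, 3, 1) == 3  # width>length False, height>length False
```

Each inner decision is only reachable on one side of the outer one, so the outer decision's two outcomes each need two tests, giving four in total.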
14. You have designed test cases to provide 100% statement and 100% decision coverage for
the following fragment of code:
if width > length
then biggest_dimension = width
else biggest_dimension = length
end_if
The following has been added to the bottom of the code fragment above:
print "Biggest dimension is " & biggest_dimension
print "Width: " & width
print "Length: " & length
How many more test cases are required?
None, existing test cases can be used.
15. Rapid Application Development?
Rapid Application Development (RAD) is formally a parallel development of functions and
subsequent integration. Components/functions are developed in parallel as if they were mini
projects, the developments are time-boxed, delivered, and then assembled into a working prototype.
This can very quickly give the customer something to see and use and to provide feedback
regarding the delivery and their requirements. Rapid change and development of the product is
possible using this methodology. However the product specification will need to be developed for
the product at some point, and the project will need to be placed under more formal controls prior to
going into production.
16. What is the difference between Testing Techniques and Testing Tools?
Testing technique: a process for ensuring that some aspect of the application system or unit
functions properly. There may be few techniques but many tools.
Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but by
itself is insufficient to conduct testing.
17. We use the output of the requirement analysis, the requirement specification as the input
for writing …
User Acceptance Test Cases
18. Repeated Testing of an already tested program, after modification, to discover any defects
introduced or uncovered as a result of the changes in the software being tested or in another
related or unrelated software component:
Regression Testing
19. What is component testing?
Component testing, also known as unit, module and program testing, searches for defects in, and
verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are
separately testable. Component testing may be done in isolation from the rest of the system
depending on the context of the development life cycle and the system. Most often stubs and drivers
are used to replace the missing software and simulate the interface between the software
components in a simple manner. A stub is called from the software component to be tested; a driver
calls a component to be tested.
20. What is functional system testing?
Testing the end to end functionality of the system as a whole is defined as a functional system
testing.
21. What are the benefits of Independent Testing?
Independent testers are unbiased and identify different defects at the same time.
22. In a REACTIVE approach to testing when would you expect the bulk of the test design
work to be begun?
The bulk of the test design work begun after the software or system has been produced.
23. What are the different Methodologies in Agile Development Model?
There are currently seven different agile methodologies that I am aware of:
1. Extreme Programming (XP)
2. Scrum
3. Lean Software Development
4. Feature-Driven Development
5. Agile Unified Process
6. Crystal
7. Dynamic Systems Development Model (DSDM)
24. Which activity in the fundamental test process includes evaluation of the testability of the
requirements and system?
A 'Test Analysis' and 'Design' includes evaluation of the testability of the requirements and system.
25. What is typically the MOST important reason to use risk to drive testing efforts?
Because testing everything is not feasible.
26. What is random/monkey testing? When it is used?
Random testing is often known as monkey testing. In this type of testing, data is generated
randomly, often using a tool or automated mechanism. The system is tested with this randomly
generated input and the results are analysed accordingly. Random testing is less reliable; hence it
is normally used by beginners and to see whether the system will hold up under adverse inputs.
27. Which of the following are valid objectives for incident reports?
1. Provide developers and other parties with feedback about the problem to enable
identification, isolation and correction as necessary.
2. Provide ideas for test process improvement.
3. Provide a vehicle for assessing tester competence.
4. Provide testers with a means of tracking the quality of the system under test.
28. Consider the following techniques. Which are static and which are dynamic techniques?
1. Equivalence Partitioning.
2. Use Case Testing.
3. Data Flow Analysis.
4. Exploratory Testing.
5. Decision Testing.
6. Inspections.
Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing,
Exploratory Testing and Decision Testing are dynamic.
29. Why are static testing and dynamic testing described as complementary?
Because they share the aim of identifying defects but differ in the types of defect they find.
30. What are the phases of a formal review?
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review
process consists of six main steps:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up.
31. What is the role of moderator in review process?
The moderator (or review leader) leads the review process. He or she determines, in co-operation
with the author, the type of review, approach and the composition of the review team. The
moderator performs the entry check and the follow-up on the rework, in order to control the quality
of the input and output of the review process. The moderator also schedules the meeting,
disseminates documents before the meeting, coaches other team members, paces the meeting, leads
possible discussions and stores the data that is collected.
32. What is an equivalence partition (also known as an equivalence class)?
An input or output range of values such that only one value in the range needs to become a test case.
33. When should configuration management procedures be implemented?
During test planning.
34. A Type of functional Testing, which investigates the functions relating to detection of
threats, such as virus from malicious outsiders?
Security Testing
35. Testing where in we subject the target of the test , to varying workloads to measure and
evaluate the performance behaviours and ability of the target and of the test to continue to
function properly under these different workloads?
Load Testing
36. Testing activity which is performed to expose defects in the interfaces and in the
interaction between integrated components is?
Integration Level Testing
37. What are the Structure-based (white-box) testing techniques?
Structure-based testing techniques (which are also dynamic rather than static) use the internal
structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box'
techniques (implying you can see into the system) since they require knowledge of how the
software is implemented, that is, how it works. For example, a structural technique may be
concerned with exercising loops in the software. Different test cases may be derived to exercise the
loop once, twice, and many times. This may be done regardless of the functionality of the software.
38. When "Regression Testing" should be performed?
After the software has changed or when the environment has changed Regression testing should be
performed.
39. What is negative and positive testing?
A negative test is when you put in an invalid input and expect to receive errors, while a positive
test is when you put in a valid input and expect some action to be completed in accordance with
the specification.
40. What is the purpose of a test completion criterion?
The purpose of test completion criterion is to determine when to stop testing
41. What can static analysis NOT find?
For example memory leaks.
42. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side
effects.
43. What are the Experience-based testing techniques?
In experience-based techniques, people's knowledge, skills and background are a prime contributor
to the test conditions and test cases. The experience of both technical and business people is
important, as they bring different perspectives to the test analysis and design process. Due to
previous experience with similar systems, they may have insights into what could go wrong, which
is very useful for testing.
44. What type of review requires formal entry and exit criteria, including metrics?
Inspection
45. Could reviews or inspections be considered part of testing?
Yes, because both help detect faults and improve quality.
46. An input field takes the year of birth between 1900 and 2004 what are the boundary values
for testing this field?
1899,1900,2004,2005
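A minimal sketch of those boundary values as executable checks (the validator function is hypothetical):

```python
# Hypothetical validator for a year-of-birth field accepting 1900..2004.
def is_valid_year(year):
    return 1900 <= year <= 2004

# Boundary value analysis: just below, the two limits, just above.
assert is_valid_year(1899) is False
assert is_valid_year(1900) is True
assert is_valid_year(2004) is True
assert is_valid_year(2005) is False
print("boundary tests pass")
```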
47. Which of the following tools would be involved in the automation of regression test? a.
Data tester b. Boundary tester c. Capture/Playback d. Output comparator.
d. Output comparator
48. To test a function, what has to write a programmer, which calls the function to be tested
and passes it test data.
Driver
49. What is the one Key reason why developers have difficulty testing their own work?
Lack of Objectivity
50."How much testing is enough?"
The answer depends on the risk for your industry, contract and special requirements.
51. When should testing be stopped?
It depends on the risks for the system being tested. There are some criteria based on which you
can stop testing:
1. Deadlines (Testing, Release)
2. Test budget has been depleted
3. Bug rate fall below certain level
4. Test cases completed with certain percentage passed
5. Alpha or beta periods for testing ends
6. Coverage of code, functionality or requirements are met to a specified point
52. Which of the following is the main purpose of the integration strategy for integration
testing in the small?
The main purpose of the integration strategy is to specify which modules to combine when and how
many at once.
53. What are semi-random test cases?
Semi-random test cases are generated randomly and then reduced by applying equivalence
partitioning to them, which removes the redundant cases and leaves a semi-random test set.
54. Given the following code, which statement is true about the minimum number of test cases
required for full statement and branch coverage?
Read p
Read q
IF p+q> 100
THEN Print "Large"
ENDIF
IF p > 50
THEN Print "p Large"
ENDIF
1 test for statement coverage, 2 for branch coverage
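The answer can be checked with a Python translation of the pseudocode above (the function name is illustrative):

```python
def classify(p, q):
    """Python translation of the pseudocode; returns the printed labels."""
    out = []
    if p + q > 100:
        out.append("Large")
    if p > 50:
        out.append("p Large")
    return out

# One test (p=60, q=50) executes every statement: both IF bodies run.
assert classify(60, 50) == ["Large", "p Large"]
# Branch coverage also needs the false outcome of each decision,
# so a second test (p=10, q=20) that skips both bodies is required.
assert classify(10, 20) == []
```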
55. What is black box testing? What are the different black box testing techniques?
Black box testing is the software testing method which is used to test the software without knowing
the internal structure of code or program. This testing is usually done to check the functionality of
an application. The different black box testing techniques are
1. Equivalence Partitioning
2. Boundary value analysis
3. Cause effect graphing
56. Which review is normally used to evaluate a product to determine its suitability for
intended use and to identify discrepancies?
Technical Review.
57. Why we use decision tables?
The techniques of equivalence partitioning and boundary value analysis are often applied to specific
situations or inputs. However, if different combinations of inputs result in different actions being
taken, this can be more difficult to show using equivalence partitioning and boundary value
analysis, which tend to be more focused on the user interface. The other two specification-based
techniques, decision tables and state transition testing are more focused on business logic or
business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This
technique is sometimes also referred to as a 'cause-effect' table. The reason for this is that there is an
associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to
help derive the decision table.
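As a minimal sketch (the rule set and all names here are hypothetical, not from the text), a decision table can be encoded as data and driven directly by tests, one test case per rule:

```python
# Hypothetical decision table: each entry maps a combination of conditions
# (has_account, good_credit) to an action. The rules are illustrative only.
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
    (False, False): "reject",
}

def decide(has_account, good_credit):
    """Look up the action for a given combination of conditions."""
    return DECISION_TABLE[(has_account, good_credit)]

# Full decision-table coverage: one test case per rule (column).
assert decide(True, True) == "approve"
assert decide(True, False) == "refer"
assert decide(False, True) == "refer"
assert decide(False, False) == "reject"
```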
58. Faults found should be originally documented by whom?
By testers.
59. Which is the current formal world-wide recognized documentation standard?
There isn't one.
60. Which of the following is the review participant who has created the item to be reviewed?
Author
61. A number of critical bugs are fixed in software. All the bugs are in one module, related to
reports. The test manager decides to do regression testing only on the reports module.
Regression testing should be done on other modules as well because fixing one module may affect
other modules.
62. Why does the boundary value analysis provide good test cases?
Because errors are frequently made during programming of the different cases near the 'edges' of
the range of values.
63. What makes an inspection different from other review types?
It is led by a trained leader, uses formal entry and exit criteria and checklists.
64. Why can the tester be dependent on configuration management?
Because configuration management assures that we know the exact version of the testware and the
test object.
65. What is a V-Model?
A software development model that illustrates how testing activities integrate with software
development phases
66. What is maintenance testing?
Triggered by modifications, migration or retirement of existing software
67. What is test coverage?
Test coverage measures in some specific way the amount of testing performed by a set of tests
(derived in some other way, e.g. using specification-based techniques). Wherever we can count
things and can tell whether or not each of those things has been tested by some test, then we can
measure coverage.
68. Why is incremental integration preferred over "big bang" integration?
Because incremental integration provides better early defect screening and isolation.
69. When do we prepare RTM (Requirement traceability matrix), is it before test case
designing or after test case designing?
It would be before test case designing. Requirements should already be traceable from review
activities, since you should have traceability in the test plan already. This also depends on the
organisation: if the organisation tests after development has started, then requirements must already
be traceable to their source. To make life simpler, use a tool to manage requirements.
70. What is called the process starting with the terminal modules?
Bottom-up integration
71. During which test activity could faults be found most cost effectively?
During test planning
72. The purpose of requirement phase is
To freeze requirements, to understand user needs, to define the scope of testing
73. Why we split testing into distinct stages?
We split testing into distinct stages because of following reasons,
1. Each test stage has a different purpose
2. It is easier to manage testing in stages
3. We can run different test into different environments
4. Performance and quality of the testing is improved using phased testing
74. What is DRE?
DRE (Defect Removal Efficiency) is a powerful metric used to measure test effectiveness. From this
metric we know what share of the total bugs was found by the set of test cases. The formula for
calculating DRE is
DRE = number of bugs found while testing / (number of bugs found while testing + number of bugs
found by the user)
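As a quick sanity check of the formula, a small sketch (the function name is illustrative):

```python
def dre(bugs_in_testing, bugs_found_by_user):
    """Defect Removal Efficiency: share of all defects caught before release."""
    return bugs_in_testing / (bugs_in_testing + bugs_found_by_user)

# Example: 90 bugs caught in testing, 10 escaped to users -> DRE = 0.9
assert dre(90, 10) == 0.9
```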
75. Which of the following is likely to benefit most from the use of test tools providing test
capture and replay facilities? a) Regression testing b) Integration testing c) System testing d)
User acceptance testing
Regression testing
76. How would you estimate the amount of re-testing likely to be required?
Metrics from previous similar projects and discussions with the development team
77. What studies data flow analysis?
The use of data on paths through the code.
78. What is Alpha testing?
Pre-release testing by end user representatives at the developer's site.
79. What is a failure?
Failure is a departure from specified behaviour.
80. What are Test comparators?
Is it really a test if you put some inputs into some software, but never look to see whether the
software produces the correct result? The essence of testing is to check whether the software
produces the correct result, and to do that, we must compare what the software produces to what it
should produce. A test comparator helps to automate aspects of that comparison.
81. Who is responsible for documenting all the issues, problems and open points that were
identified during the review meeting?
Scribe
82. What is the main purpose of an informal review?
An inexpensive way to get some benefit
83. What is the purpose of test design technique?
Identifying test conditions and Identifying test cases
84. When testing a grade calculation system, a tester determines that all scores from 90 to 100
will yield a grade of A, but scores below 90 will not. This analysis is known as:
Equivalence partitioning
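A minimal sketch of the partitioning above, assuming a hypothetical grade function:

```python
def grade(score):
    """Hypothetical grading rule from the question: 90-100 -> 'A'."""
    return "A" if 90 <= score <= 100 else "not A"

# Equivalence partitioning: one representative value per partition suffices.
assert grade(95) == "A"       # partition [90, 100]
assert grade(50) == "not A"   # partition below 90
```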
85. A test manager wants to use the resources available for the automated testing of a web
application. The best choice is Tester, test automater, web specialist, DBA
86. During the testing of a module, tester 'X' finds a bug and assigns it to a developer. But the
developer rejects it, saying that it's not a bug. What should 'X' do?
Send the detailed information about the bug encountered and check its reproducibility.
87. A type of integration testing in which software elements, hardware elements, or both are
combined all at once into a component or an overall system, rather than in stages.
Big-Bang Testing
88. In practice, which Life Cycle model may have more, fewer or different levels of
development and testing, depending on the project and the software product. For example,
there may be component integration testing after component testing, and system integration
testing after system testing.
V-Model
89. Which technique can be used to achieve input and output coverage? It can be applied to
human input, input via interfaces to a system, or interface parameters in integration testing.
Equivalence partitioning
90. "This life cycle model is basically driven by schedule and budget risks" This statement is
best suited for…
V-Model
91. In which order should tests be run?
The most important ones must be tested first
92. The later in the development life cycle a fault is discovered, the more expensive it is to fix.
Why?
The fault has been built into more documentation, code, tests, etc
93. What is Coverage measurement?
It is a partial measure of test thoroughness.
94. What is Boundary value testing?
Boundary value testing exercises the boundary conditions on, just below and just above the edges of
input and output equivalence classes. For instance, say a bank application allows a maximum
withdrawal of Rs.20,000 and a minimum of Rs.100; in boundary value testing we test only the values
at and around the exact boundaries, rather than values in the middle. That means we also test just
above the maximum limit and just below the minimum limit.
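The bank-withdrawal example can be sketched as follows (the function is a hypothetical stand-in for the real rule):

```python
def can_withdraw(amount):
    """Hypothetical rule from the example: Rs.100 minimum, Rs.20,000 maximum."""
    return 100 <= amount <= 20000

# Boundary value analysis: test on, just below and just above each edge.
boundary_values = [99, 100, 101, 19999, 20000, 20001]
assert [can_withdraw(v) for v in boundary_values] == [
    False, True, True, True, True, False
]
```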
95. What is Fault Masking?
Error condition hiding another error condition.
96. What does COTS represent?
Commercial off The Shelf.
97. What allows specific tests to be carried out on a system or network that resembles as
closely as possible the environment where the item under test will be used upon release?
Test Environment
98. What can be thought of as being based on the project plan, but with greater amounts of
detail?
Phase Test Plan
99. What is exploratory testing?
Exploratory testing is a hands-on approach in which testers are involved in minimum planning and
maximum test execution. The planning involves the creation of a test charter, a short declaration of
the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to
be used. The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean that other,
more formal testing techniques will not be used. For example, the tester may decide to use boundary
value analysis but will think through and test the most important boundary values without
necessarily writing them down. Some notes will be written during the exploratory-testing session,
so that a report can be produced afterwards.
100. What is "use case testing"?
In order to identify and execute the functional requirement of an application from start to finish "use
case" is used and the techniques used to do this is known as "Use Case Testing"
Bonus!
101. What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software
Development Life Cycle)?
SDLC deals with the development/coding of the software while STLC deals with the validation and
verification of the software
102. What is traceability matrix?
The relationship between test cases and requirements is shown with the help of a document. This
document is known as traceability matrix.
103. What is Equivalence partitioning testing?
Equivalence partitioning is a software testing technique which divides the application's input data
into partitions of equivalent data, from which test cases can be derived; each partition needs to be
covered at least once. This method reduces the time required for software testing.
104. What is white box testing and list the types of white box testing?
White box testing technique involves selection of test cases based on an analysis of the internal
structure (Code coverage, branches coverage, paths coverage, condition coverage etc.) of a
component or system. It is also known as Code-Based testing or Structural testing. Different types
of white box testing are
1. Statement Coverage
2. Decision Coverage
105. In white box testing what do you verify?
In white box testing following steps are verified.
1. Verify the security holes in the code
2. Verify the incomplete or broken paths in the code
3. Verify the flow of structure according to the document specification
4. Verify the expected outputs
5. Verify all conditional loops in the code to check the complete functionality of the application
6. Verify the code line by line, aiming for 100% coverage
106. What is the difference between static and dynamic testing?
Static testing: During Static testing method, the code is not executed and it is performed using the
software documentation.
Dynamic testing: To perform this testing the code is required to be in an executable form.
107. What is verification and validation?
Verification is a process of evaluating software at development phase and to decide whether the
product of a given application satisfies the specified requirements. Validation is the process of
evaluating software at the end of the development process and to check whether it meets the
customer requirements.
108. What are different test levels?
There are four test levels
1. Unit/component/program/module testing
2. Integration testing
3. System testing
4. Acceptance testing
109. What is Integration testing?
Integration testing is a level of software testing process, where individual units of an application are
combined and tested. It is usually performed after unit and functional testing.
110. What are the contents of a test plan?
A test plan document consists of details such as test design, scope, test strategy and approach:
1. Test case identifier
2. Scope
3. Features to be tested
4. Features not to be tested
5. Test strategy & Test approach
6. Test deliverables
7. Responsibilities
8. Staffing and training
9. Risk and Contingencies
111. What is the difference between UAT (User Acceptance Testing) and System testing?
System Testing: System testing is finding defects when the system undergoes testing as a whole; it
is also known as end-to-end testing. In this type of testing, the application is exercised from
beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests
which determines whether the product will meet the needs of its users.
112. Mention the difference between Data Driven Testing and Retesting?
Retesting: It is a process of checking bugs that are actioned by development team to verify that
they are actually fixed.
Data Driven Testing (DDT): In data driven testing process, application is tested with multiple test
data. Application is tested with different set of values.
113. What are the valuable steps to resolve issues while testing?
- Record: log and handle any problems that have happened
- Report: report the issues to a higher-level manager
- Control: define the issue management process
114. What is the difference between test scenarios, test cases and test script?
Difference between test scenarios and test cases is that
Test Scenarios: Test scenario is prepared before the actual testing starts, it includes plans for
testing product, number of team members, environmental condition, making test cases, making test
plans and all the features that are to be tested for the product.
Test Cases: A document that contains the steps that have to be executed; it has been planned
earlier.
Test Script: A short program, written in a programming language, used to test part of the
functionality of the software system; in other words, a written set of steps executed
automatically.
115. What is Latent defect?
Latent defect: This defect is an existing defect in the system which does not cause any failure as
the exact set of conditions has never been met
116. What are the two parameters which can be useful to know the quality of test execution?
To know the quality of test execution we can use two parameters:
- Defect reject ratio
- Defect leakage ratio
117. What is the function of the software testing tool "phantom"?
Phantom is a freeware Windows GUI automation scripting language. It allows you to take control of
windows and functions automatically. It can simulate any combination of key strokes and mouse
clicks as well as menus, lists and more.
118. Explain what Test Deliverables are.
Test Deliverables are the set of documents, tools and other components that have to be developed
and maintained in support of testing.
There are different test deliverables at every phase of the software development lifecycle:
- Before testing
- During testing
- After testing
119. What is mutation testing?
Mutation testing is a technique to identify whether a set of test data or test cases is useful, by
intentionally introducing various code changes (bugs) and re-running the original tests to determine
whether the bugs are detected.
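A minimal illustration of the idea, with a hand-made mutant rather than a real mutation tool (all names are hypothetical):

```python
def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # injected mutation: >= changed to >

# The existing test suite, as (input, expected) pairs.
tests = [(17, False), (18, True), (30, True)]

def survives(fn):
    """A mutant 'survives' if every test still passes against it."""
    return all(fn(age) == expected for age, expected in tests)

assert survives(is_adult)             # the original passes all tests
assert not survives(is_adult_mutant)  # the age=18 test 'kills' the mutant,
                                      # so the test data is useful
```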
120. What all things should you consider before selecting automation tools for the AUT?
- Technical feasibility
- Complexity level
- Application stability
- Test data
- Application size
- Re-usability of automated scripts
- Execution across environments
121. How will you conduct Risk Analysis?
For the risk analysis following steps need to be implemented
a) Finding the score of the risk
b) Making a profile for the risk
c) Changing the risk properties
d) Deploy the resources of that test risk
e) Making a database of risk
122. What are the categories of debugging?
Categories for debugging
a) Brute force debugging
b) Backtracking
c) Cause elimination
d) Program slicing
e) Fault tree analysis
123. What is fault masking explain with example?
When presence of one defect hides the presence of another defect in the system is known as fault
masking.
Example: If a "negative value" causes an unhandled system exception to fire, the developer may
prevent negative values from being input. This resolves the issue but hides the defect of the
unhandled exception firing.
124. Explain what a Test Plan is. What information should be covered in a Test Plan?
A test plan can be defined as a document describing the scope, approach, resources and schedule of
testing activities, and a test plan should cover the following details:
- Test Strategy
- Test Objective
- Exit/Suspension Criteria
- Resource Planning
- Test Deliverables
125. How can you eliminate the product risk in your project?
There are simple yet crucial steps that can reduce the product risk in your project:
- Investigate the specification documents
- Have discussions about the project with all stakeholders, including the developers
- Walk through the website as a real user
126. What are the common risks that lead to project failure?
The common risks that lead to a project failure are:
- Not having enough human resources
- The testing environment may not be set up properly
- Limited budget
- Time limitations
127. On what basis can you arrive at an estimation for your project?
To estimate your project, you have to consider the following points:
- Divide the whole project into the smallest tasks
- Allocate each task to team members
- Estimate the effort required to complete each task
- Validate the estimation
128. Explain how you would allocate tasks to team members.

Task                                         Member
Analyze software requirement specification   All the members
Create the test specification                Tester/Test Analyst
Build up the test environment                Test administrator
Execute the test cases                       Tester, Test administrator
Report defects                               Tester
129. Explain what a testing type is and what the commonly used testing types are.
A standard procedure followed to get an expected test outcome is referred to as a testing type.
Commonly used testing types are:
- Unit Testing: test the smallest piece of code of an application
- API Testing: test the APIs created for the application
- Integration Testing: individual software modules are combined and tested
- System Testing: complete testing of the system
- Install/Uninstall Testing: testing done from the client/customer's point of view
- Agile Testing: testing through Agile techniques
130. While monitoring your project, what all things do you have to consider?
The things that have to be taken into consideration are:
- Is your project on schedule?
- Are you over budget?
- Are you all working towards the same goal?
- Have you got enough resources?
- Are there any warning signs of impending problems?
- Is there any pressure from management to complete the project sooner?
131. What are the common mistakes which create issues?
- Matching resources to the wrong projects
- Test manager's lack of skills
- Not listening to others
- Poor scheduling
- Underestimating
- Ignoring small problems
- Not following the process
132. What does a typical test report contain? What are the benefits of test reports?
A test report contains the following:
- Project information
- Test objective
- Test summary
- Defects
The benefits of test reports are:
- The current status of the project and the quality of the product are communicated
- If required, stakeholders and customers can take corrective action
- A final document helps to decide whether the product is ready for release
133. What is test management review and why is it important?
Management review is also referred to as Software Quality Assurance or SQA. SQA focuses more on
the software process rather than the software work products. It is a set of activities designed to
make sure that the project manager follows the standard process. SQA helps the test manager to
benchmark the project against the set standards.
134. What are the best practices for software quality assurance?
The best practices for an effective SQA implementation are:
- Continuous improvement
- Documentation
- Tool usage
- Metrics
- Responsibility by team members
- Experienced SQA auditors
135. When is RTM (Requirement Traceability Matrix) prepared?
RTM is prepared before test case designing. Requirements should be traceable from review
activities.
136. What is difference between Test matrix and Traceability matrix?
Test Matrix: Test matrix is used to capture actual quality, effort, the plan, resources and time
required to capture all phases of software testing
Traceability Matrix: The mapping between test cases and customer requirements is known as the
Traceability Matrix
137. In manual testing, what are stubs and drivers?
Both stubs and drivers are part of incremental testing. In incremental testing there are two
approaches, namely the bottom-up and the top-down approach. Drivers are used in bottom-up testing
and stubs are used in the top-down approach. In order to test the main module, a stub is used, which
is a dummy code or program.
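A toy sketch of both concepts (all names are hypothetical; a real stub or driver would stand in for an actual module):

```python
# In top-down integration, a not-yet-integrated lower-level module is
# replaced by a stub; in bottom-up, a driver calls the module under test.

def tax_service_stub(amount):
    """Stub: dummy stand-in for a lower-level module, returns a canned value."""
    return 0.0

def compute_total(amount, tax_service):
    """Module under test: depends on a lower-level tax service."""
    return amount + tax_service(amount)

def driver():
    """Driver: throwaway code that invokes the module under test with test data."""
    return compute_total(100.0, tax_service_stub)

assert driver() == 100.0
```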
138. What are the steps you would follow once you find a defect?
Once a defect is found, you would follow these steps:
a) Recreate the defect
b) Attach the screen shot
c) Log the defect
139. Explain what is "Test Plan Driven" or "Key Word Driven" method of testing?
This technique uses the actual test case document developed by testers using a spread sheet
containing special "key Words". The key words control the processing.
140. What is DFD (Data Flow Diagram) ?
When a "flow of data" through an information system is graphically represented then it is known as
Data Flow Diagram. It is also used for the visualization of data processing.
141. Explain what is LCSAJ?
LCSAJ stands for 'linear code sequence and jump'. It consists of the following three items
a) Start of the linear sequence of executable statements
b) End of the linear sequence
c) The target line to which control flow is transferred at the end of the linear sequence
142. Explain what is N+1 testing?
The variation of regression testing is represented as N+1. In this technique the testing is performed
in multiple cycles in which errors found in test cycle 'N' are resolved and re-tested in test cycle
N+1. The cycle is repeated until no errors are found.
143. What is Fuzz testing and when is it used?
Fuzz testing is used to detect security loopholes and coding errors in software. In this technique
random data is fed to the system in an attempt to crash it. If a vulnerability is found, a tool called a
fuzz tester is used to determine the potential causes. This technique is more useful for bigger
projects but only detects major faults.
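A bare-bones illustration of the idea, using a toy function rather than a real fuzzer (all names are illustrative):

```python
import random

def parse_age(text):
    """Toy function under test; crashes on non-numeric input."""
    value = int(text)
    return max(0, value)

def fuzz(runs=200, seed=42):
    """Feed random printable strings and record the inputs that crash."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(chr(rng.randrange(32, 127)) for _ in range(5))
        try:
            parse_age(data)
        except Exception:
            crashes.append(data)
    return crashes

# Random printable strings are rarely valid integers, so crashes are found;
# each crashing input is a candidate bug report.
assert len(fuzz()) > 0
```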
144. Mention what are the main advantages of statement coverage metric of software testing?
The benefit of statement coverage metric is that
a) It does not require processing source code and can be applied directly to object code
b) Bugs are distributed evenly through code, due to which percentage of executable statements
covered reflects the percentage of faults discovered
145. How do you generate test cases for a replace-string method?
a) If the new string has more characters than the previous string, none of the characters should get
truncated
b) If the new string has fewer characters than the previous string, junk characters should not be
added
c) Spaces before and after the string should not be deleted
d) The string should be replaced only for its first occurrence
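The four cases above can be turned into concrete assertions, assuming a hypothetical replace_first implementation:

```python
def replace_first(text, old, new):
    """Hypothetical implementation matching rule (d): first occurrence only."""
    return text.replace(old, new, 1)

# (a) longer replacement: nothing is truncated
assert replace_first("a cat", "cat", "tiger") == "a tiger"
# (b) shorter replacement: no junk characters are added
assert replace_first("a tiger", "tiger", "cat") == "a cat"
# (c) surrounding spaces are preserved
assert replace_first("  hi  ", "hi", "yo") == "  yo  "
# (d) only the first occurrence is replaced
assert replace_first("cat cat", "cat", "dog") == "dog cat"
```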
146. How will you handle a conflict amongst your team members?
- I will talk individually to each person and note their concerns
- I will find solutions to the common problems raised by team members
- I will hold a team meeting, reveal the solution and ask people to co-operate
147. Mention what are the categories of defects?
Mainly there are three defect categories:
- Wrong: when a requirement is implemented incorrectly
- Missing: a variance from the specification, an indication that a specification was not
implemented or a requirement of the customer was not met
- Extra: a requirement incorporated into the product that was not given by the end customer.
It is considered a defect because it is a variance from the existing requirements
148. Explain how does a test coverage tool works?
The code coverage testing tool runs parallel while performing testing on the actual product. The
code coverage tool monitors the executed statements of the source code. When the final testing is
done we get a complete report of the pending statements and also get the coverage percentage.
149. Mention what is the difference between a "defect" and a "failure" in software testing?
In simple terms, when a defect reaches the end customer it is called a failure, while a defect
identified internally and resolved is referred to as a defect.
150. Explain how to test documents in a project that span across the software development
lifecycle?
The documents span across the software development lifecycle in the following manner:
- Central/Project test plan: the main test plan that outlines the complete test strategy of the
project. This plan is used till the end of the software development lifecycle
- Acceptance test plan: this document begins during the requirement phase and is completed
at final delivery
- System test plan: this plan starts during the design phase and proceeds until the end of the
project
- Integration and Unit test plan: both these test plans start during the execution phase and last
until the final delivery
151. Explain which test cases are written first: black box or white box?
Black box test cases are written first, because writing them only requires the project plan and
requirement documents, which are easily available at the beginning of the project. Writing white box
test cases requires more architectural understanding, which is not available at the start of the
project.
152. Explain what is the difference between latent and masked defects?
- Latent defect: an existing defect that has not caused a failure because the set of conditions
was never met
- Masked defect: an existing defect that has not caused a failure because another defect has
prevented that part of the code from being executed
153. Mention what is bottom up testing?
Bottom up testing is an approach to integration testing, where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is repeated
until the component at the top of the hierarchy is tested.
154. Mention what are the different types of test coverage techniques?
Different types of test coverage techniques include:
- Statement Coverage: verifies that each line of source code has been executed and tested
- Decision Coverage: ensures that every decision in the source code is executed and tested
- Path Coverage: ensures that every possible route through a given part of the code is executed
and tested
155. Mention what is the meaning of breadth testing?
Breadth testing is a test suite that exercises the full functionality of a product but does not test
features in detail
156. Mention what is the difference between Pilot and Beta testing?
The difference is that pilot testing is actually done by a group of users using the product before the
final deployment, while in beta testing we do not input real data: the product is installed at the end
customer's site to validate whether it can be used in production.
157. Explain what is the meaning of Code Walk Through?
Code Walk Through is the informal analysis of the program source code to find defects and verify
coding techniques
158. Mention what are the basic components of a defect report format?
The basic components of a defect report format include:
- Project name
- Module name
- Defect detected on
- Defect detected by
- Defect ID and name
- Snapshot of the defect
- Priority and severity status
- Defect resolved by
- Defect resolved on
159. Mention what is the purpose behind doing end-to-end testing?
End-to-end testing is done after functional testing. The purposes behind doing end-to-end testing
are:
- To validate the software requirements and integration with external interfaces
- Testing the application in real-world environment scenarios
- Testing the interaction between the application and the database
160. Explain what is meant by a test harness?
A test harness is a set of tools and test data configured to test an application in various conditions;
it involves comparing the actual output with the expected output for correctness.
161. Explain, in a testing project, what testing activities you would automate.
In a testing project, the testing activities you would automate are:
- Tests that need to be run for every build of the application
- Tests that use multiple data sets for the same set of actions
- Identical tests that need to be executed using different browsers
- Mission-critical pages
- Transactions with pages that do not change in a short time