
Manual Testing Interview Questions

V 1.0

1. What Does Software Testing Mean?


Ans: Software testing is a validation process which confirms that a system works as per the
business requirements. It qualifies a system on various aspects such as usability, accuracy,
completeness, and efficiency. ANSI/IEEE 1059 is the standard that defines the basic
principles of testing.

2. Why Is Software Testing Required?


It is a mandatory process which is essential to qualify software as usable for production. Here
are some compelling reasons why it is needed.

1. The testing process guarantees the software will work as per the expectations of the
customers.
2. It reduces the coding cycles by identifying issues at the initial stage of development.
3. Discovery of issues in the earlier SDLC phases ensures proper utilization of resources and
prevents cost escalations.
4. The testing team brings the customer's view into the process and finds use cases that a developer
may overlook.
5. Any failure, defect, or bug observed by the customer damages the firm's credibility; testing is
what prevents this from happening.

3. When Should You Start The Testing Process?


Testing should begin from the inception of the project. Once the requirements are baselined,
preparation of the system test plan and test cases should start. It also helps in exploring any gaps in
the functional requirements.

4. When Should You Stop The Testing Process?


The testing activity ends after the team completes the following milestones.

1. Test case execution: The successful completion of a full test cycle after the final bug fix
marks the end of the testing phase.
2. Testing deadline: The end date of the validation stage also declares the closure of the
validation if no critical or high priority defects remain in the system.
3. MTBF rate: It is the mean time between failures (MTBF) which reflects the reliability of the
components. If it is on the higher side, then PO and EM can decide to stop testing.
4. CC ratio: It is the amount of code covered via automated tests. If the team achieves the
desired level of code coverage (CC) ratio, then they can choose to end the validation.

5. What Does Quality Assurance Mean In Software Testing?


Quality assurance is a process-oriented approach which certifies that the software development (SDLC)
process is correct and follows the standard procedures. It may bring changes in the process
and replace weak practices if it identifies any. It includes review activities such as the
inspection of documents, test cases, source code, automation scripts, etc.

6. What Does Quality Control Mean In Software Testing?


Quality control is a product-oriented approach to verify that the product under development
meets the original software specifications. It may also result in changes to the product if there are bugs
in the system or deviations observed in the implementation. It includes different types of
testing, both functional (unit, usability, integration, etc.) and non-functional
(compatibility, security, performance, etc.).

7. What Does Verification Mean In Software Testing?


In software testing, verification is a means to confirm that product development is taking place
as per the specifications and using the standard development procedures. It comprises the
following activities.

1. Inspections
2. Reviews
3. Walk-throughs
4. Demos

8. What Does Validation Mean In Software Testing?


In software testing, validation is a means to confirm that the developed product is free of
bugs and works as expected. It comprises the following activities.

1. Functional testing
2. Non-functional testing

9. What Is Static Testing, When Does It Start, And What Does It Cover?
 It is a white box testing technique which directs developers to verify their code with the
help of a checklist to find errors in it.
 Developers can start it without actually finalizing the application or program.
 Static testing is more cost-effective than dynamic testing.
 It covers more areas than dynamic testing in a shorter time.

10. What Is Dynamic Testing, When Does It Start, And What Does It Cover?
 Dynamic testing involves the execution of an actual application with valid inputs and checking
of the expected output.
 Examples of Dynamic testing are Unit Testing, Integration Testing, System Testing and
Acceptance Testing.
 Dynamic testing happens after code deployment.
 It starts during the validation stage.

11. Define White Box Testing?


White Box Testing has many names such as Glass Box, Clear Box, or Structural Testing.
It requires testers to take a code-level perspective, design cases that exercise the code, and find
potential bugs.

12. Define Black Box Testing?


Black Box Testing is a standard software testing approach which requires testers to assess the
functionality of the software as per the business requirements. They treat the software as a black
box and validate it from the end user's point of view.
It applies to all levels of software testing such as Unit, Integration, System, and Acceptance Testing.

13. Explain Positive Testing Approach?


The purpose of this testing is to ensure that the system conforms to the requirements.

14. Explain Negative Testing Approach?


The purpose of this testing is to identify what the system should not do. It helps uncover potential
flaws in the software.

15. What Is Test Strategy And What Does It Cover?


Test strategy is an approach to carry out the testing activity.

It covers the following:


 Test team Roles and Responsibilities
 Testing scope
 Test tools

 Test environment
 Testing schedule
 Associated risks

17. What Is Test Plan And What Does It Include?


It is the responsibility of a Test Lead or Testing Manager to create the Test Plan document.

A test plan captures all possible testing activities to guarantee a quality product. It gathers data
from the product description, requirement, and use case documents.

The test plan document includes the following:


 Testing objectives
 Test scope
 Testing the frame
 The environment
 Reason for testing
 The criteria for entrance and exit
 Deliverables
 Risk factors

19. What Is The Difference Between Master Test Plan And Test Plan?
The difference between the Master Plan and Test Plan can be described using the following points.

1. The Master Test Plan contains all the test scenarios and risk-prone areas of the application,
whereas the Test Plan document contains the test cases corresponding to the test scenarios.
2. The Master Test Plan captures every test to be run during the overall development of the application,
whereas the Test Plan describes the scope, approach, resources, and schedule of the test
execution.
3. The MTP includes test scenarios for all the phases of the application development lifecycle, whereas
separate test plans exist for Unit, Functional, and System testing, each with cases specific to its
area.
4. A Master Test Plan suffices for big projects which require execution in all phases of testing.
However, preparing a basic Test Plan is enough for small projects.

20. What Are Test Cases?


A test case is a sequence of actions and observations that are used to verify the desired
functionality.

A good test case helps to identify problems in the requirements or design of an application.

21. What Is The Difference Between High Level And Low-Level Test Case?
 High-level test cases cover the core functionality of a product like standard business flows.
 Low-level test cases are those related to user interface (UI) in the application.

22. What Is A Test Scenario?


Test Scenario represents a condition or a possibility which defines what to test.

It can have multiple test cases to cover a scenario.

23. How Is A Test Case Different From A Test Scenario?


A test case is a testing artifact to verify a particular flow with defined input values, test
preconditions, expected output, and post conditions prepared to cover specific behaviour.

A test scenario can have one or many associations with a test case which means it can include
multiple test cases.

24. What Is A Test Suite?


A test suite is a group of test cases. Each test case intends to test the application functionality.

25. What Is Meant By Test Bed?


It refers to a test setup which includes the necessary hardware, software, network configuration,
the application under test (AUT), and any dependent software.

26. What Is Meant By Test Environment?


Test Environment mostly refers to the essential hardware and software required to perform testing.

27. What Is Meant By Test Data?


Test data is a set of input values required to execute the test cases. Testers define the test data
according to the test requirements. They can do it manually or use generation tools.

For example, if a tester is validating a graphics tool, then he would need to procure relevant data
for graph generation.

28. What Is Meant By Test Harness?


A test harness is a set of scripts and demo data which tests an application under variable
conditions and observes its behaviour and outputs.

It emphasizes running the test cases randomly rather than in a sequence.
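For illustration, here is a minimal Python sketch of the idea; the multiply() function and the demo data are assumptions, and a real harness would also handle setup, teardown, and reporting.

def multiply(a, b):
    # Hypothetical application function exercised by the harness.
    return a * b

# Demo data: (input_a, input_b, expected_output)
demo_data = [(2, 3, 6), (0, 7, 0), (-4, 5, -20)]

def run_harness(cases):
    # Drive the function under test with each data set and record the outcome.
    results = []
    for a, b, expected in cases:
        outcome = "PASS" if multiply(a, b) == expected else "FAIL"
        results.append((a, b, outcome))
    return results

for a, b, outcome in run_harness(demo_data):
    print(f"multiply({a}, {b}) -> {outcome}")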

29. What Is Meant By Test Coverage?


Test coverage is a quality metric to represent the amount (in %) of testing completed for a
software product.

It is applicable for both functional and non-functional testing activities. This metric helps testers to
add missing test cases.

30. What Is Meant By Code Coverage?


Code coverage is another software engineering metric which represents the amount (in %) of code
covered by unit tests.

It helps developers to add test cases for any missing functionality.
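For illustration (the figures are assumptions), if unit tests execute 450 of the 500 executable lines in a module, the line coverage is 450 / 500 = 90%, and the 50 unexecuted lines point to areas that still need test cases.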

31. Is It Possible To Achieve 100% Coverage Of Testing? How Would You Ensure It?
No, it’s not possible to perform 100% testing of any product. But you can follow the steps below to
come closer.

 Set a hard limit on the following factors:
  Percentage of test cases passed
  Number of bugs found
 Set a red flag if:
  The test budget is depleted
  Deadlines are breached
 Set a green flag if:
  The entire functionality gets covered in test cases
  All critical and high-severity bugs have a status of CLOSED

32. Beside Test Case & Test Plan, What Documents A Tester Should Produce?
Here are a few other documents to prepare.

 Testing metrics
 Test design specs
 End-to-end scenarios
 Test summary reports
 Bug reports

33. What Is A Business Requirements Document (BRD)?


BRD provides a detailed business solution for a project including the documentation of customer
needs and expectations.

BRD fulfils the following objectives.


 Gain agreement with stakeholders.
 Provide clarity on the business requirements.
 Describe the solution that meets the customer/business needs.
 Determine the input for the next phase of the project.

34. What Is Meant By Unit Testing?


Unit testing has many names such as Module Testing or Component Testing.

It is mostly the developers who test individual units or modules to check if they are working
correctly.
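For illustration, here is a minimal sketch of a unit test written in Python with the standard unittest module; the add() function is a hypothetical unit under test.

import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()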

35. What Is Meant By Integration Testing?


Integration Testing validates how well two or more units of software interact amongst each other.
There are three ways to validate integration.
 Big Bang Approach,
 Top-Down Approach,
 Bottom-Up Approach

36. What Is The Difference Between A Test Driver And Test Stub?
The test driver is a piece of code that calls a software component under test. It is useful in testing
that follows the bottom-up approach.
Test stub is a dummy program that integrates with an application to complete its functionality.
These are relevant for testing that uses the top-down approach.
Let’s take an example.

1. Let’s say there is a scenario to test the interface between modules A and B, and only module-A has
been developed. We can still test module-A if we have either the real module-B or a dummy module
standing in for it. That dummy module is the Test Stub.
2. Now suppose only module-B is available. Module-B cannot receive data on its own, so we need an
external piece of code that calls it and passes data to it. That external code is the Test Driver, as
sketched below.
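A minimal Python sketch of the two ideas (the module names and canned data are assumptions): the stub stands in for a missing lower-level module, and the driver is the scaffolding that calls the module under test.

# Test stub: a dummy stand-in for the not-yet-developed module-B,
# used when testing module-A top-down.
def module_b_stub(order_id):
    return {"order_id": order_id, "status": "shipped"}  # canned response

# Module-A (developed): calls module-B to fetch an order status.
def module_a(order_id, fetch_order=module_b_stub):
    order = fetch_order(order_id)
    return f"Order {order['order_id']} is {order['status']}"

# Test driver: a dummy caller that feeds data into the module under test,
# the kind of scaffolding used when testing bottom-up.
def test_driver():
    assert module_a(42) == "Order 42 is shipped"

test_driver()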

37. What Is Meant By Big Bang Approach?


It means to merge all the modules at once after testing the individual modules, and then verify the
combined functionality.

Unlike incremental integration, it does not normally rely on dummy modules such as stubs and drivers,
since all real components are available before integration begins.

38. What Is Meant By Top-Down Approach?


Testing goes from top to bottom. It first validates the High-level modules and then goes for low-
level modules.

Finally, it tests the integrated modules to ensure the system is working as expected.

39. What Is Meant By The Bottom-Up Approach?


It is a reverse of the Top-Down method. In this, testing occurs from bottom to up.

It first tests the lowest level modules and then goes for high-level modules. Finally, it verifies the
integrated modules to ensure the system is working as expected.

40. What Is Meant By System Testing?


It is the final testing activity which tests the fully integrated application and confirms its
compliance with business requirements.

Alternatively, we call it End-to-End testing. It verifies the whole system to ensure that the
application behaves as expected.

41. Can We Do System Testing At Any Stage?


No. System testing should start only if all modules are in place and work correctly. However, it
should happen before the UAT (User Acceptance testing).

42. What Are The Different Types Of Software Testing?


Following is the list of various testing types used by manual testers.
 Unit testing
 Integration testing
 Regression testing
 Shakeout testing
 Smoke testing
 Functional testing
 Performance testing
 Load testing
 Stress testing
 Endurance testing
 White box and Black box testing
 Alpha and Beta testing
 System testing

43. What Is Meant By Smoke Testing?


Smoke testing confirms the basic functionality works for a product. It requires you to identify the
most basic test cases for execution.

44. What Is Meant By Sanity Testing?


Sanity testing ensures that the product runs without any logical errors. For example, if we are
testing a calculator app; we may multiply a number by 3 and check whether the sum of the digits
of the answer is divisible by 3.

45. What Is Exploratory Testing?


Exploratory testing is a process which lets a tester concentrate more on execution and less on
planning.

 It requires formulating a test charter, a short declaration of the scope, a set of objectives and
possible approaches to be used.
 The test design and test execution activities may run in parallel without formally documenting
the test conditions, test cases or test scripts.
 Testers can use boundary value analysis to concentrate the testing effort on error-prone areas
by accurately pinpointing the boundaries.
 Notes should be recorded for the Exploratory Testing sessions as it would help to create a
final report of its execution.

46. What Is Ramp Testing?


It is a testing method which proposes to raise an input signal until the system breaks down.

47. What Is Recovery Testing?


It ensures that the program can recover from any expected or unexpected event without loss of
data or functionality.

Events could include a shortage of disk space, an unexpected loss of communication, or power outage
conditions.

48. What Is Reliability Testing?


Reliability testing is a testing strategy to measure the consistency of a Software in executing a
specific operation without throwing any error for a certain period in the given environment.

Example:
The probability that a server-class application hosted on the cloud is up and running for six
months without crashing is 99.99%. We refer to this type of testing as reliability testing.

49. What Is Globalization Testing?


Globalization testing concentrates on detecting the potential problems in the product design that
could spoil globalization. It certifies that the code can handle the desired international support
without breaking any functionality. And also, it ensures that there would be no data loss and
display problems.

50. What Is Agile Testing And Why Is It Important?


Agile testing is a software testing process which evaluates software from the customer point of
view. It is beneficial because this does not require Dev to complete coding for starting QA. Instead,
the coding and testing both go hand in hand. However, it may require continuous customer
interaction.

51. What Do You Know About Data Flow Testing?


It is one of the white-box testing techniques.

Data Flow Testing emphasizes designing test cases that cover control flow paths around
variable definitions and their uses in the modules. It expects them to have the following attributes:

1. The input to the module


2. The control flow path for testing
3. Pair of an appropriate variable definition and its use
4. The expected outcome of the test case

52. What Do You Know About API Testing?


API is an acronym for Application Programming Interface. It gives users access to public classes,
functions, and member variables so that external applications can call them. It lays down a model
for components to interact with each other.

API testing comprises three parts:

1. Data tier (database)


2. Business logic tier (PHP/J2EE)
3. Presentation tier (UI)

API testing is also a white box testing method. It makes use of the code and a programming tool to
call the API. It ignores the UI layer of the application and validates the path between the client and
the API. The client software forwards a call to the API to get the specified return value, and API testing
examines whether the system responds with the correct status or not.

Testers carry out API testing to confirm the system from end to end. They don't have
permission to access the source code but can use the API calls. This testing covers
authorization, usability, exploratory, automated, and documentation validation.
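For illustration, a minimal API check might look like the Python sketch below, using the requests library; the URL and the expected fields are assumptions, not a real service. A runner such as pytest would pick up and execute a test function written in this style.

import requests

def test_get_user():
    # Hypothetical endpoint; substitute the API under test.
    response = requests.get("https://api.example.com/users/1", timeout=5)

    # Validate the HTTP status and a couple of fields in the JSON payload.
    assert response.status_code == 200
    payload = response.json()
    assert payload["id"] == 1
    assert "name" in payload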

53. What Is Meant By Alpha Testing?


Alpha testing is an in-house, developer-driven testing approach. Sometimes the client also does
it, and in some cases, it may get outsourced.

54. What Is Meant By Beta Testing?


Beta testing happens at the end users' premises. It is done to gather customer feedback before the final
delivery.

55. What Is Meant By Gamma Testing?


Gamma testing happens after the software is ready for release with the specified requirements.

It skips the in-house testing activities and is executed at the client end.

56. Explain The Difference Between Pilot And Beta Testing?


Read the following points to know the difference between Pilot and Beta testing.

1. We do the beta test when the product is about to be released to the customer, whereas pilot testing
takes place in an earlier phase of the development cycle.
2. In the beta test, the application is given to a few users to make sure that it meets
the customer requirements and does not contain any showstopper bugs. Whereas, in the pilot test,
a few members of the testing team work at the customer site to set up the product. They also give their
feedback to improve the quality of the end product.

57. What Are Rainbow Or Colour Box Testing Types?


Most software testers know about grey, black, and white box testing. Let's see all the
colour box testing types that are available.

1. White box tests


2. Black box checking
3. Grey box testing
4. Glass box tests
5. Red box scripts
6. Yellow box testing
7. Green box checking

58. What Do You Know About The Structured Testing?


Structured testing is often called basis path testing. It was invented by Tom McCabe.

59. What Are The Steps Involved In Structured Testing?


1. Derive the control flow graph from the system component.
2. Compute the cyclomatic complexity of the graph.
3. Identify a set of C basis paths, where C is the cyclomatic complexity.
4. Define test cases for every basis path.
5. Execute all the established test cases.

60. What Does The High Availability Testing Mean For A Software Tester?
High availability shows the ability of a system or a component to operate continuously without
failure even at high loads for a long time.

Hence, the High availability testing confirms that a system or its sub-systems have gone through
thorough checks and, in many cases, simulates failures to validate whether components support
redundancy or not.

61. What Does It Mean By Baseline Testing?


A baseline is the indicator of a specific benchmark that serves as the foundation of a new creation.

In Baseline testing, the tests capture and preserve all the results produced by the source code, and
compare against a reference baseline. This reference baseline refers to the last accepted test
results. If there are new changes in the source code, then it requires re-execution of tests to form
the current baseline. If the new results get accepted, then the current baseline becomes the
reference.

62. What Is The Purpose Of A Failover Test?


Failover testing is a test strategy to evaluate how a software system allocates resources and switches
operations to backup systems to prevent operational failures.

63. What Does It Mean By Gorilla Testing?


Gorilla Testing is a testing strategy where testers collaborate with developers as a joint force for
validating a target module thoroughly from all ends.

64. What Is The Purpose Of End To End Testing?


End to End Testing is a testing strategy to execute tests which covers every possible flow of an
application from its start to the finish. The objective of performing end-to-end tests is to discover
software dependencies and to assert that the correct input is getting passed between various
software modules and sub-systems.

Manual Testing Interview Questions [Bugs, Defects, Errors]

65. What Is The Primary Difference Between Debugging & Testing?


 Testing is about finding defects while using a product, whereas debugging is about reaching the part of
the code that causes the failure.
 Debugging is isolating the problem area in the code and is done by a developer, whereas testing is
identifying the bug in an application and is done by a tester.

66. Why Does Software Have Bugs?


 Miscommunication.
 Programming errors.
 Timeline pressures.
 Change in requirements.
 Software complexity.

67. What Are Error Guessing And Error Seeding?


Error Guessing.

It is a test case design technique in which testers have to guess the defects that might occur and
write test cases to represent them.

Error Seeding.

It is the process of adding known bugs in a program for tracking the rate of detection & removal. It
also helps to estimate the number of faults remaining in the program.

68. What Will You Do When A Bug Turns Up During Testing?


When a bug shows up, we can follow the below steps.

 Run more tests to make sure that the problem has a clear description.
 Run a few more tests to ensure that the same problem doesn’t exist with different inputs.
 Once we are sure of the full scope of the bug, then we can add details and report it.

69. Why Is It Impossible To Test A Program Thoroughly?


Here are the two principal reasons that make it impossible to test a program entirely.

 Software specifications can be subjective and can lead to different interpretations.


 A software program may require too many inputs, too many outputs, and too many path
combinations to test.

70. How Do You Handle A Non-Reproducible Bug?


Following bugs lie under the non-reproducible category.

1. Defects observed due to low memory issues.


2. Issues raised due to address pointing to a memory location that does not exist.
3. The race condition is an error scenario which occurs when the timing of one event impacts
another executing in a sequence.
A tester can take the following actions to handle the non-reproducible bugs.

1. Execute test steps that are close to the error description.


2. Evaluate the test environment.
3. Examine and evaluate test execution results.
4. Keep the resources & time constraints under check.

71. How Do You Test A Product If The Requirements Are Yet To Freeze?
If the requirement spec is not available for a product, then a test plan can be created based on the
assumptions made about the product. But we should get all assumptions well documented in the
test plan.

72. How Will You Tell If Enough Test Cases Have Been Created To Test A Product?
First of all, we’ll check if every requirement has at least one test case covered. If yes, then we can
say that there are enough test cases to test the product.

73. If A Product Is In Production And One Of Its Modules Gets Updated, Then Is It Necessary To
Retest?
It is advisable to perform regression testing and run tests for all of the other modules as well.
Finally, the QA should carry out System testing.

74. How Do We Know The Code Has Met Specifications?


The traceability matrix is an intuitive tool which ensures that the requirements are mapped to the test cases.
When the execution of all test cases finishes with success, it indicates that the code has met
the requirements.

75. What Does Requirement Traceability Matrix Include?


Requirement Traceability Matrix (RTM) is a document which records the mapping between the
high-level requirements and the test cases in the form of a table.

That’s how it ensures that the Test Plan covers all the requirements and links to their latest
version.
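For illustration, a simplified RTM could look like the table below; the requirement IDs, descriptions, and test case IDs are assumptions.

Requirement ID | Requirement Description            | Test Case ID(s)   | Status
REQ-001        | User can log in                    | TC-101, TC-102    | Passed
REQ-002        | User can reset the password        | TC-110            | Failed
REQ-003        | Session expires after idle timeout | TC-115, TC-116    | Not Run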

76. What Is GAP Analysis?


Gap analysis reveals any deviation between the features available for testing and how the
customer perceives them to be.

Traceability matrix is a testing tool which testers can use to track down the gaps.

77. What Is Risk Analysis?


Risk analysis is a technique to identify the things that can go wrong in a software development
project. They can negatively impact the scope, quality, timeliness, and cost of a project.

However, everyone involved in the project has a part in minimizing the risk, and it's the leader who
ensures that the whole team understands their individual roles in managing it.

78. Describe How To Perform Risk Analysis During Software Testing?


Risk analysis is the process of identifying the hidden issues that may derail the successful delivery
of the application. It also prioritizes the sequence of resolving the identified risks for testing
purpose.

Following are some of the risks that are of concern to the QA.

1. New Hardware
2. New Technology
3. New Automation Tool

4. The sequence of code delivery


5. Availability of test resources for the application
We prioritize them into three categories which are as follows.

1. High magnitude: Impact of the bug on the other functionality of the application
2. Medium: It is somewhat bearable but not desirable.
3. Low: It is tolerable. This type of risk has no impact on the company business.

79. What Is The Difference Between Coupling And Cohesion?


The difference between coupling and cohesion is as follows.

 Cohesion is the degree to which the elements within a single module belong together, i.e., how
strongly related functionality is combined into one unit, whereas coupling is the degree of
interdependence between different modules.
 Cohesion deals with the functionality that relates to processes within a single module,
whereas coupling deals with how much one module depends on the other modules within
the product.
 It is good practice to aim for high cohesion within a module, whereas high coupling between
modules is discouraged.

80. What Is CMM?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for assessing the maturity
of the software processes of an organization.

It also identifies the key practices that increase the maturity of these processes.

81. What Is Cause Effect Graph?


It is a graphical representation of inputs and their associated output effects, which assists in
designing test cases.

82. What Do You Understand Of Inspection?


It’s a group-level review and quality improvement process for the product documents. It focuses on
the following two aspects.

 Product document improvement


 Process improvement (of both document production and inspection)
Manual Testing Interview Questions [Functional Testing]

83. What Does It Mean By Functional Testing?


Functional testing is a software testing methodology which ensures that the application under test
has all the functionality working as per the specifications provided.

84. What Are The Different Types Of Functional Testing?


Functional testing covers the following types of validation techniques.

1- Unit Testing
2- Smoke testing
3- Sanity testing
4- Integration Testing
5- Interface Testing
6- System Testing
7- Regression Testing
8- UAT

85. How Do You Conduct Functional Testing?


Functional testing sees the application under test as a black box. As testers, we first write
down the use cases for all the possible workflows of the given features.

After that, we verify the functionality by exercising all those features and their options and ensure
that they behave as expected.

86. What Tool Do You Use For Functional Testing?


We use the Unified Functional Testing (UFT) tool developed by HP (Hewlett Packard) for both the
functional and regression testing.

This tool makes use of VBScript to automate the functional workflows. It allows us
to integrate manual, automated, as well as framework-based test scripts in the same IDE.

87. What Kind Of Document Will You Need To Begin Functional Testing?
 It is none other than the Functional specification document. It defines the full functionality of a
product.
 Other documents are also useful in testing like user manual and BRS.
 Gap analysis is another document which can help in understanding the expected and existing
system.

88. What Are The Functional Test Cases?


Functional test cases are those that confirm that the application executes a specified business
function. Most of these tests take the form of user or business cases that resemble the standard
transactions.

89. What Are Non-functional Test Cases?


Non-functional testing is a type of testing for observing the non-functional aspects (such as
performance, usability, and reliability) of the application under test.

It validates the readiness of a system against the non-functional parameters given in the
non-functional specification, which are never addressed by functional testing.

90. What Are Functional Requirements?


The functional requirements specify the behaviour or function. Some of these are as follows:

 Authentication
 Business rules
 Historical Data
 Legal and Regulatory Requirements
 External Interfaces

91. What Are The Non-Functional Requirements?


The non-functional requirements specify how the system should behave. Some of these are as
follows:

 Performance
 Reliability
 Security
 Recovery
 Data Integrity
 Usability

92. What Is The Difference Between Functional And Non-functional Requirements?


In software engineering terms, a non-functional requirement (NFR) is one that lays down criteria
for judging the operation of a system, rather than specific behaviours. NFRs are often
called the "quality attributes" of a system.

On the other hand, functional requirements are those that define the specific behaviour or functions
of the system.

Manual Testing Interview Questions [SDLC And STLC]

93. What Do You Know About The SDLC?


Software development life cycle is SDLC which is a standard development framework and the
foundation for the processes like Waterfall, Spiral, V Model, and Agile.
It breaks down the entire development into multiple stages known as the SDLC phases like
requirement collection, analysis, design, implementation/coding, testing/validation, deployment,
and finally the maintenance.

94. What Does It Mean By STLC?


The STLC is an acronym for the Software Testing Life Cycle. It is a testing model which
proposes to carry out test execution in a systematic and planned way. In the STLC model, many
activities occur to improve the quality of the product.
The STLC model lays down the following steps:

1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Environment Setup
5. Test Execution
6. Test Cycle Closure

95. What Is The Requirement Analysis Phase In STLC?


The requirement analysis (RA) phase is the first stage of the STLC model. In this step, the testing team
learns what to test and determines the testable requirements. If some specifications are not precise or
there is disagreement, then stakeholders like the business analyst (BA), architects, and customers
provide clarity.

96. What Are The Tasks Performed In The Requirement Analysis Phase Of STLC?
Test team performs the following tasks during the RA phase.

1. Provide a questionnaire for the customer-facing people.


2. List down the priority areas for testing.
3. Gather test environment details for carrying out the testing tasks.
4. Evaluate the possibility of test automation and prepare a report.

97. What Is The Test Planning (TP) Phase In STLC?


The second phase of the STLC is test planning. It is a crucial step, as the team here defines the whole
testing strategy for the project. Hence, it is also known as the Test Strategy phase.

In this stage, the test lead has to get the effort and cost estimation done for the whole project. This
phase starts soon after the RA phase gets over. The outcomes of the TP phase include the Test Plan
and Effort Estimation documents. After the planning ends, the testing team can begin the test
case development tasks.

98. What Are The Tasks Performed In The Test Planning (TP) Phase Of STLC?
Test team performs the following tasks during the TP phase.

1. Provide a written objective & scope for the STLC cycle.


2. Mention the testing types to cover.
3. Estimate testing efforts and resource requirements.
4. Select the testing tools required.
5. Confirm the test infra requirements.
6. List down the testing schedule and set milestones.
7. Prepare the summary of the entire testing process.
8. Create a control policy.
9. Assign roles and responsibilities.
10. Outline the testing deliverables.
11. Conclude the entry criteria, suspension/resumption criteria, and exit conditions.
12. Mark the expected risks.

99. What Is The Test Case Development Phase In STLC?


The testing team picks up the test case development tasks once the test planning (TP) phase is
over. In this stage of the STLC, the principal activity is to write detailed test cases for the requirements.
While performing this task, the team also needs to prepare the input data required for testing. Once the
test cases are ready, they need to be reviewed by senior members or the lead.

One of the documents which the team has to produce is the Requirement Traceability Matrix (RTM). It
is an industry-wide standard for ensuring that each test case is correctly mapped to a requirement.
It helps in tracing requirements in both the forward and backward directions.

100. What Are The Tasks Performed In The Test Case Development Phase Of STLC?
Test team performs the following tasks during the TCD phase.

1. Write test cases.


2. Produce scripts for automation testing (Optional).
3. Gather test data required for test execution.

101. What Is The Test Environment Setup Phase In STLC?


It is a time-consuming yet crucial activity in the STLC to prepare the test environment. Only after
the test setup is available can the team determine the conditions under which the application will be
tested.

It is an independent task and can go in parallel with the test case writing stage. The test team or a
member outside the group can help in setting up the testing environment; in some
organizations, a developer or the customer can also create or provide the test beds.
Simultaneously, the test team starts writing the smoke tests to check the readiness of the test
infra.

102. What Are The Tasks Performed In The Test Environment Setup Phase Of STLC?
Test team performs the following tasks during the TES phase.

1. Assess the need for new software and hardware.


2. Prepare the test infra.
3. Execute smoke tests and confirm the readiness of the test infra.

103. What Is The Test Execution Phase In STLC?


After the testing infra is ready, the test execution phase can start. In this stage, the team runs the
test cases as per the test plan defined in the previous steps.

If a test case executes successfully, the team should mark it as Passed. If some tests fail,
then the defects should get logged, and the report should go to the developer for further analysis.
As per the books, every error or failure should have a corresponding defect. It helps in tracing back
to the test case or the bug later. As soon as the development team fixes a defect, the same test case
should be re-executed and the result reported back.

104. What Are The Tasks Performed In The Test Execution Phase Of STLC?
Test team performs the following tasks during the TE phase.

1. Execute tests as per the test planning.


2. Provide test execution status showing the passed, failed, skipped statistics.
3. Create defects for failed cases and assign to dev for resolution.
4. Perform re-execution of test cases which have got the defect fixes.
5. Make sure the defects get closed.

105. What Is The Test Cycle Closure Phase In STLC?


The testing team calls a meeting to evaluate the open defects, known issues, and code quality
issues, and accordingly decides on the closure of the test cycle.

They discuss what went well, what needs improvement, and note the pain points faced in
the current STLC. Such information is beneficial for future STLC cycles. Each member puts forward his/her
views on the test case and bug reports, and the team finalizes the defect distribution by type and severity.

106. What Are The Tasks Performed In The Test Cycle Closure Phase Of STLC?
Test team performs the following tasks during the TCC phase.

1. Define closure criteria by reviewing the test coverage, code quality, the status of the business
objectives, and the test metrics.
2. Produce a test closure report.
3. Publish best practices used in the current STLC.

Manual Testing Interview Questions [Defect Metrics]

107. What Does A Fault Mean In Software Testing?


A fault is a condition that causes the software to fail while executing the intended function.

108. What Does An Error Mean In Software Testing?


An error represents a problem in a program that arises unexpectedly and causes it not to function
correctly. An application can encounter either software errors or network errors.

109. What Does A Failure Mean In Software Testing?


A failure represents the inability of a system or component to execute the intended
function as per its specification.

It could also happen in a customer environment that a particular feature didn’t work after product
deployment. They would term it as a product failure.

110. What Is The Difference Between A Bug, Defect, And Error?


A bug is usually the same as a defect. Both of them represent an unexpected behaviour of the
software.

However, an error would also fall in the same category. But in some cases, errors are fixed values.

For example – 404/405 errors in HTML pages.

111. What Is The Difference Between Error And Bug?


A slip in the coding indicates an Error. An error discovered by a manual tester becomes a Defect.
A defect that the dev team accepts is known as a Bug. If a build misses the requirements, then
it is a functional failure.

112. What Is Defect Life Cycle In Software Testing?


Defect Life Cycle also has another name as Bug Life Cycle.

Defect Life Cycle is the representation of the different states that a defect attains at
different stages during its lifetime. It may vary from company to company and may even be
customized for some projects, as the software testing process in use drives it.

113. What Is Defect Leakage?


Defect leakage occurs at the customer or end-user side after the product delivery. If the end
user finds an issue in the application that testing missed, it leads to defect leakage. This process of
finding such bugs is also called Bug Leakage.

114. How Come The Severity And Priority Relate To Each Other?
 Severity – Represents the gravity/depth of the bug.
 Priority – Specifies which bug should get fixed first.
 Severity – Describes the application point of view.
 Priority – Defines the user’s point of view.

115. What Are The Different Types Of Severity?


The severity of a bug can be low, medium or high depending on the context.

 User Interface Defect – Low


 Boundary Related Defects – Medium
 Error Handling Defects – Medium
 Calculation Defects – High
 Misinterpreted Data – High
 Hardware Failures – High
 Compatibility Issues – High
 Control Flow Defects – High
 Load Conditions (Memory leakages under load testing) – High

116. What Is The Difference Between Priority And Severity In Software Testing?
Severity indicates the impact of a defect on the development or the operation of a component in
the application going under test. It usually is an indication of commercial loss or cost to the
environment or business reputation.

On the contrary, the Priority of a defect shows the urgency of a bug yet to get a fix. For example –
a bug in a server application blocked it from getting deployed on the live servers.

117. What Does Defect Density Mean In Software Testing?


Defect density is a way to measure the number of defects found in a product during a specific test
execution cycle. It is determined by dividing the defect count by the size of the software
or component.

KLOC (thousands of lines of code) is the unit commonly used to measure defect density.
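For illustration (the figures are assumptions), if a test cycle finds 30 defects in a component of 15 KLOC, the defect density is 30 / 15 = 2 defects per KLOC.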

118. What Does The Defect Detection Percentage Mean In Software Testing?
Defect Detection Percentage (DDP) is a type of testing metric. It indicates the effectiveness of a
testing process by measuring the ratio of defects discovered before the release to the total defects,
including those reported by customers after the release.

For example – let's say the QA logged 70 defects during the testing cycle and the customer
reported 20 more after the release. The DDP would be 70 / (70 + 20) = 77.8%.

119. What Does The Defect Removal Efficiency Mean In Software Testing?
Defect Removal Efficiency (DRE) is a type of testing metric. It is an indicator of the efficiency of the
development team in fixing issues before the release.

It gets measured as the ratio of defects fixed to the total number of issues discovered.

For example – let's say that there were 75 defects discovered during the test cycle, while 62 of
them had been fixed by the dev team at the time of measurement. The DRE would be
62 / 75 = 82.7%.

120. What Does The Test Case Efficiency Mean In Software Testing?
Test Case Efficiency (TCE) is a type of testing metric. It is a clear indicator of the efficiency of the
test cases getting executed in the test execution stage of the release. It helps in ensuring and
measuring the quality of the test cases.

Test Case Efficiency (TCE) => (No. of defects discovered / No. of test cases executed)* 100
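For illustration (the figures are assumptions), if 25 defects are discovered while executing 200 test cases, the TCE is (25 / 200) * 100 = 12.5%.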

121. What Is The Age Of Defect In Software Testing?


Defect age is the time elapsed between the day a defect is found and accepted and the day it gets
verified and closed.

While estimating the age of a defect, consider the following points.

1. The day of birth for a Defect is the day it got assigned and accepted by the dev.
2. The issues which got dropped are out of the scope.
3. The age can be both in hours or days.
4. The end time is the day the defect got verified and closed, not just fixed by the dev.
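For illustration, a defect accepted by the developer on the 3rd of a month and verified and closed on the 10th has an age of 7 days.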

122. What Is Defect Clustering In Software Testing?


Defect clustering is a situation in testing where most of the software bugs are discovered in only a
handful of modules, i.e., a small number of modules account for most of the failures.

123. What Is Pesticide Paradox In Software Testing?


The pesticide paradox is a situation in software testing when the same tests get repeated over and
over again until they are no longer able to find new bugs.

124. What Is The Pareto Principle In Software Testing?


In software testing, the Pareto Principle refers to the notion that 80% of all bugs happen to be in
the 20% of the program modules.

125. What Are The Different Ways To Apply The Pareto Principle In Software Testing?
Following is the list of ways to exercise Pareto Principle in software testing:

1. Arrange the defects based on their causes, not their consequences. Don't club bugs that yield the
same outcome; prefer to group issues by the module in which they occur.
2. Collaborate with the dev team to discover new ways to categorize the problems, e.g., components
that use the same static library and account for the most bugs.
3. Put more energy into locating the problem areas in the source code instead of doing a random
search.
4. Re-order the test cases and pick the critical ones to begin with.
5. Pay attention to the end-user response and assess the surrounding risk areas.

126. What Is Cyclomatic Complexity In Software Testing?


In software testing, cyclomatic complexity is a test metric that represents the complexity of a
program. The method was introduced by Thomas J. McCabe in 1976. It views a program
as a graph using its control flow representation.

The graph includes the following attributes:

1. Nodes – A node represents a processing task.

2. Edges – An edge shows the control flow between the nodes.
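As a rough illustration (the function and its threshold are assumptions), the Python snippet below has a cyclomatic complexity of 2, whether computed as decision points + 1 or via V(G) = E - N + 2P on its control flow graph.

def classify(amount):
    # One decision point (the if/else), so V(G) = 1 + 1 = 2:
    # there are two independent paths through this function.
    if amount >= 1000:
        return "high"
    return "low"

# A basis-path test set exercises each independent path once.
assert classify(1500) == "high"
assert classify(10) == "low"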

127. What Is The Usage Of Cyclomatic Complexity In Software Testing?


1. It helps to find the independent paths for test execution.
2. It proposes that the tests should cover every branch at least once.
3. It draws attention to the untested paths.
4. It guarantees better code coverage.
5. It determines the risks associated with a program.
6. It aims to minimize the probable risks.

Manual Testing Interview Questions [Miscellaneous]

128. What Is Silk Test And Why Should You Use It?
Here are some facts about the Silk tool.

1. It’s a tool developed for performing regression and functional testing of an application.
2. It is useful when testing Windows-based, Java, web, and traditional client/server
applications.
3. Silk Test helps in preparing and managing test plans, and provides direct access to the
database and validation of fields.

129. How Do You Perform Automated Testing In Your Environment?


Automation Testing is a process of executing tests automatically. It reduces the human
intervention to a great extent. We use different test automation tools like QTP, Selenium, and
WinRunner. These tools help in speeding up the testing tasks.

Using the above tools we can create test scripts to verify the application automatically. After
completing the test execution, these tools also generate the test reports.

130. What Are The Factors That You’ll Consider To Choose Automated Testing Over Manual
Testing?
The choice of automated testing over manual testing depends on the following factors.

1. Tests require periodic execution.


2. Tests include repetitive steps.
3. Tests execute in a standard runtime environment.
4. Automation is expected to take less time.
5. Automation increases reusability.
6. Automation reports are available for every execution.
7. Small releases like service packs which include a minor bug fix. In such cases, executing the
regression tests is sufficient for validation.

131. What Are The Essential Qualities Of An Experienced QA Or Test Lead?


Every QA or Test Lead should have the following qualities.

1. Well-versed in Software testing processes.


2. Ability to accelerate teamwork to increase productivity.
3. Improve coordination between QA and Dev engineers.
4. Provide ideas to refine the QA processes.
5. Ability to conduct RCA meetings and draw conclusions.
6. Excellent written and interpersonal communication skills.
7. Quick learner and able to groom the team members.

132. What Is Equivalence Partitioning? Explain By Example.


Answer.
It’s a well-known black box testing technique which improves the effectiveness of manual test cases. You
can apply it to a range of testing levels like unit testing, acceptance testing, integration testing, and
so on. It works by breaking the test data into different sets (partitions) and selecting one input value
from each range.

It's not possible to validate all data points from the domain, because any attempt to do so
would result in a huge number of test cases; hence, you should apply this technique in such situations. It
classifies the data into different classes, where each class defines the input criteria for one
equivalence class.

Example.

Assume an application which accepts dates only from the current calendar year. If we apply the above
technique, then we can break the inputs into classes: for example, one for valid dates and another
for invalid dates. We'll now create test cases from each class.

TC#1. The valid date class would cover dates from the current calendar year.
TC#2. Dates from years other than the current calendar year would belong to the invalid class.
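A minimal sketch of the idea in Python, assuming a hypothetical validator is_current_year_date() that should accept only dates from the current calendar year; one representative value per equivalence class is enough.

from datetime import date

def is_current_year_date(d):
    # Hypothetical function under test: accepts only dates in the current year.
    return d.year == date.today().year

# One representative value per equivalence class:
valid_representative = date(date.today().year, 6, 15)        # class: current-year dates
invalid_representative = date(date.today().year - 1, 6, 15)  # class: other-year dates

assert is_current_year_date(valid_representative) is True
assert is_current_year_date(invalid_representative) is False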

133. What Is Boundary Value Analysis? Explain By Example.


Answer.
Most of the errors emanate either from the head or the tail ends of the test data. The values at
these extreme ends are the boundary values, and the process of analysing them is Boundary Value
Analysis. We sometimes call it range checking.
It's one more black box testing approach which focuses on finding errors at the terminating ends of
the input domain. You can use this technique at all levels of testing.

While designing test cases using BVA, you should consider both valid and invalid boundary values
for the edges. Usually, we pick one test case from each boundary.

Let's see some examples to demonstrate the use of BVA.

TC#1. The first test case will analyse the exact boundary values of the input data. Considering
the previous example, these would be the first date of January and the last date of December of the
current year.
TC#2. For invalid values, you may consider dates just outside these boundaries, such as the last date
of the previous year or the first date of the next year.
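A minimal boundary-value sketch in Python, assuming a hypothetical is_valid_day() check for a field that accepts day-of-month values from 1 to 31.

def is_valid_day(day):
    # Hypothetical function under test: accepts integers 1..31.
    return 1 <= day <= 31

# Boundary value analysis picks values at and just beyond each edge.
boundary_cases = {0: False, 1: True, 2: True, 30: True, 31: True, 32: False}

for value, expected in boundary_cases.items():
    assert is_valid_day(value) is expected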

134. What Is State Transition Testing? Explain By Example.


Answer.
This approach is best suited where it is possible to view the whole system as a finite
state machine. It works on the notion that a system can be in a finite number of distinct states, and
it's the rules of the machine that drive the transitions from one state to another.
This model is the basis both for the system and for the test cases. Any system which produces
different output for the same input, depending on what has happened before, is a finite state
system.

Example.

For example, consider a bank account. If you use it regularly, it'll stay active. If you don't
make any transaction for a period of over 12 months, then your account will become inactive. You
can also choose to close your account. Once it's closed, you cannot use it until you reopen it. So, a
bank account can have the following states.

1. Open,
2. Inactive,
3. Close,
4. Reopen.
Hence, you can design cases to test all transitions around the states mentioned above. In this
model, you measure coverage in terms of switches, see below.

1. <0-switch> => You are testing every valid transition.


2. <1-switch> => You’ve covered the pair of two valid transitions.
3. <2-switch> => You need to test the sets of 3 transitions for this coverage.
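A minimal Python sketch of the bank-account state machine described above; the state names and transition rules are illustrative assumptions, and the checks at the end give 0-switch coverage (each valid single transition exercised once).

class BankAccount:
    def __init__(self):
        self.state = "open"

    def mark_dormant(self):
        # No transactions for 12 months moves an open account to inactive.
        if self.state == "open":
            self.state = "inactive"

    def close(self):
        if self.state in ("open", "inactive"):
            self.state = "closed"

    def reopen(self):
        if self.state == "closed":
            self.state = "open"

# 0-switch coverage: exercise each valid single transition once.
account = BankAccount()
account.mark_dormant()
assert account.state == "inactive"
account.close()
assert account.state == "closed"
account.reopen()
assert account.state == "open"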

135. What Is The Difference Between A White Box, Black Box, And Gray Box Testing?
Answer.
1. Black box testing is a testing approach which depends completely on the product
requirements and specifications. Knowledge of the internal paths, structures, or implementation of
the software isn't a prerequisite for it.


2. White box testing is a testing mechanism which covers the internal paths, code
structures, and implementation of the software under test. It generally demands that a tester
possess serious programming skills.
3. Last but not least is grey box testing. It allows looking into the “box” being tested to
understand the implementation. Finally, you close the box and apply that knowledge to choose more
effective black box tests.

136. What Are The Categories Of Defects?


Answer.
There are three main categories of defects:

1. Wrong: It indicates a mismatch between the requirement and the implementation; a
variance from the given specification.
2. Missing: A requirement was not fulfilled in the end product. It's a variance from the
specification, indicating either that a specification was not implemented or that a customer
requirement was not captured properly.
3. Extra: You added a feature which the customer didn't ask for. It's again a variance from
the specification. The users of the product may even like this feature, but it's still a defect because
it's not part of the specs.

137. What Is The Basis To Prepare For An Acceptance Plan?


Answer.
First of all, you need inputs from the following areas to prepare the acceptance document.
These may vary from company to company and from project to project.

1. Requirement document: This document specifies what exactly is needed in the project
from the customer's perspective.
2. Customer input: It could be discussions, informal chats, emails, etc.
3. Project plan: The project plan from the project manager (PM) also serves as a good input
to conclude your acceptance test.

138. What Is The Requirement Traceability Matrix?


Answer.
It’s a common software testing interview question which you might get asked during an interview.
The Requirements Traceability Matrix (RTM) is a requirement tracking tool that ensures the
requirements remain traceable and consistent throughout the development process. There are the
following reasons to use it:

1. To determine whether the developed project meets the requirements of the user.
2. To determine all the requirements given by the user.
3. To make sure the application requirements can be verified during the verification process.

139. Describe How To Perform Risk Analysis During Software Testing?


Answer.
Risk analysis is the process of identifying potential risks in an application and prioritizing them for
testing. Following are some of the risks:

139.1. New hardware.
139.2. New technology.
139.3. New automation tool.
139.4. The sequence of code delivery.
139.5. Availability of application and test resources.
You can prioritize them into three categories in the following manner:

i. High magnitude: The bug impacts the other functionality of the application.
ii. Medium: It is tolerable in the application but not desirable.
iii. Low: It is bearable. This type of risk has no impact on the company’s business.

140. How To Deal With A Bug Which Is Intermittent And Not Reproducible?
Answer.
Again, it’s one of the practical software testing interview questions. A bug might not be
reproducible for a variety of reasons. Some of these are as follows.

140.1. Low memory.
140.2. Addressing an unavailable memory location.
140.3. Things happening in a particular sequence.

141. When Do You Perform Decision Table Testing?


Answer.
We may use the <decision table testing> for testing systems for which the specification takes the
form of rules or cause-effect combinations. In a decision table, the inputs are listed in a column, with
the outputs in the same column but below the inputs. The remainder of the table explores the
different combinations of inputs to define the outputs produced.
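
As an illustration, the sketch below encodes a small decision table for a hypothetical login rule (the two conditions are a valid username and a valid password; the action is whether access is granted). The rule and the login function are made up for the example.

```python
# Decision table for a hypothetical login rule.
# Conditions: valid_user, valid_password -> Action: access granted?
import pytest

def login(valid_user: bool, valid_password: bool) -> bool:
    """Grant access only when both conditions are true (hypothetical rule)."""
    return valid_user and valid_password

# Each column of the decision table becomes one test case.
DECISION_TABLE = [
    # valid_user, valid_password, expected_access
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

@pytest.mark.parametrize("valid_user, valid_password, expected", DECISION_TABLE)
def test_login_decision_table(valid_user, valid_password, expected):
    assert login(valid_user, valid_password) == expected
```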

142. What Is Boundary Testing?


Answer.
Boundary testing proposes to focus on the limit or edge conditions of the software under test.

143. What Is Branch Testing?


Answer.
Branch testing requires testing of all the branches in the program source code at least once.
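
For instance, the hypothetical function below contains two branches, so branch testing needs at least one test that takes each of them.

```python
# Hypothetical function with two branches; branch testing requires
# at least one test that takes the 'if' path and one that takes the 'else' path.
def classify_age(age: int) -> str:
    if age >= 18:          # branch 1
        return "adult"
    else:                  # branch 2
        return "minor"

def test_adult_branch():
    assert classify_age(30) == "adult"

def test_minor_branch():
    assert classify_age(10) == "minor"
```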

144. What Is Breadth Testing?


Answer.
It is also known as overview testing. It covers the main functionality of the product but does not
test the features in detail.

145. What Is Alpha Testing?


Answer.
Alpha Testing takes place at the developer’s end but in a controlled environment. It is typically
performed by the end users of the software or by internal staff acting as end users.

146. What Is Beta Testing?


Answer.
The purpose of Beta testing is to gather feedback from the real-time usage of the product. It
happens after the installation at the client-end.

147. What Is Component Testing?


Answer.
This testing focuses on validating the individual components rather than the whole application.

148. What Is The End-To-End Testing?


Answer.
End-to-end testing verifies the complete flow of an application right from start to finish. Its purpose
is to identify the system dependencies and to ensure that data integrity remains intact.

149. What Is Monkey Testing?


Answer.
Monkey testing is a black-box testing technique. It proposes to enter data in any format and verify that
the software does not crash. It uses the concepts of a smart monkey and a dumb monkey.

 Smart Monkey – Used for load and stress testing; leads to a high development cost.
 Dumb Monkey – Used for basic testing; helps in locating quality bugs.
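
A dumb-monkey run can be as simple as feeding random strings into a function and checking only that nothing crashes unexpectedly. The parse_amount function below is a hypothetical stand-in for the code under test.

```python
# Dumb-monkey sketch: throw random strings at a hypothetical parser and
# only check that it never raises an unexpected exception.
import random
import string

def parse_amount(text: str) -> float:
    """Hypothetical function under test: parse a monetary amount."""
    return round(float(text.strip().replace(",", "")), 2)

def test_dumb_monkey():
    random.seed(42)  # reproducible randomness
    for _ in range(1000):
        junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            parse_amount(junk)
        except ValueError:
            pass  # rejecting junk input is acceptable behaviour
        # Any other exception type means the "monkey" found a crash.
```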

150. What Is The Difference Between Baseline And Benchmark Testing?


Answer.
Following are the primary differences between the baseline and benchmark testing.

 Baseline testing intends to collect performance data for an application, whereas benchmark
testing compares the app’s performance with an industry standard.
 Baseline testing utilizes the data collected to improve performance, whereas benchmark testing
returns information about the target application vis-a-vis other applications.
 Baseline testing compares the current performance with the application’s previous
performance, whereas benchmark testing compares our app’s performance with competitors’
performance.

151. What Is The Role Of QA In Project Development?


Answer.
 QA team is responsible for ensuring the quality of the software product.
 They are involved in planning, testing, and execution.
 QA manager prepares an estimate and agrees on a Quality Assurance plan for the product.
 He explains the QA process to the team members.
 The test engineers ensure the traceability of test cases to requirements.

152. What’s The Difference between Acceptance and Accessibility Testing?


Answer.
Acceptance Testing.

It enables a user as well as the customer to determine whether to accept a software product. It
ensures that the software meets a set of agreed acceptance guidelines.

Accessibility Testing.

It certifies that a product is accessible to people with disabilities, such as visual, hearing, or cognitive impairments.

153. What’s The Difference Between Application Programming And Application Binary Interface?
Answer.
Application Programming Interface.

It’s a set of software calls and routines for the client applications to access desired services.

Application Binary Interface.

It’s a specification which defines the portability requirements of applications in binary form across
different platforms.

154. What’s The Difference Between Verification And Validation?


Answer.
Verification.

It’s a review performed without actually executing the code, such as a code review or the review of test cases,
etc.

Validation.

It’s a process of verifying the product and its features by doing the actual execution.

155. What’s The Difference Between The Branch And Breadth Testing?
Answer.
 Branch testing is a technique which ensures that all the source code branches would go
through validation at least once.
 Breadth testing covers the entire product but tests the features with limited cases.

156. What’s The Difference between Alpha and Beta Testing?


Answer.
 Alpha testing happens in a developer managed environment by the target customers.
 Beta testing is the one which starts after installing the application at the client site.

157. What’s The Difference between Component and Compatibility Testing?


Answer.
 Component testing happens during the development and verifies the individual modules.
 Compatibility testing makes sure that the software can work with other parts of the system
like browsers, hardware, and the OS.

158. What Do You Think Portability Testing Is?


Answer.
Portability testing validates that an existing application can be moved to (or rebuilt for) new
platforms and environments and still behaves as expected on them.

159. What Do You Think Bottom-Up Testing Is?


Answer.
It’s a way to carry out the integration testing where you first test the lowest level components. And
later you cover the top level items. You need to keep moving until you get to the top of the
hierarchy.

160. What Are The Different Ways Of Doing Black Box Testing?
Answer.
Mostly, we use the following five black box test design techniques.

 Equivalence Partitioning
 Boundary Value Analysis
 Decision Table Testing
 State Transition Testing
 Error Guessing / Use Case Testing

161. What Are The Different Methods To Test Software (In Abnormal Conditions)?
Answer.
Usually, we can use any of the below techniques.

 Stress testing
 Security testing
 Recovery testing
 Beta testing

162. What Do You Think A Software Testing Scope Is?


Answer.
 The testing scope helps in sketching a well-defined boundary, which covers all the activities
to develop and deliver the software product.
 It clearly defines all the functionalities and artefacts to be delivered as a part of the product.

163. How To Estimate The Size Of A Software Product?


Answer.
There are following two ways to give the estimation of a product.

 Count the lines of delivered code.


 Calculate the delivered function points.

164. What’s The Difference Between A Functional And Non-Functional Requirement?


Answer.
A Functional Requirement.

It interprets what the system should do, i.e., the functions it must perform.


Examples.
 Authentication
 Business rules
 Historical Data
 Legal and Regulatory Requirements
 External Interfaces.
A Non-Functional Requirement.

It defines how the system or the application should be.

Examples.
 Performance
 Reliability
 Security
 Recovery
 Data Integrity
 Usability

165. What’s The Difference between the Defect Priority and Severity?
Answer.
Priority and Severity are popular defect management terms. They communicate the
importance of a bug to the team and how urgently it needs to be fixed.

The Priority.

Describes the bug from the customer’s point of view.

 The priority status is set by the tester to the developer mentioning the time frame to fix a
defect. If the high priority is mentioned then the developer has to fix it at the earliest.
 The priority status is set based on customer requirements.
The Severity.

Describes the bug in terms of functionality.

 The severity status is used to explain how badly the deviation is affecting the build.
 The severity type is defined by the tester based on the written test cases and functionality.

166. What Do You Think The Concurrency Testing Is?


Answer.
Concurrency Testing.
You can also call it multi-user testing. It observes the effects of accessing the application, code
module or database by different users at the same time. It helps in identifying and measuring the
problems in response time, levels of locking and deadlocking in the application.

Example.
LoadRunner is a tool which supports this type of testing. Its <Vugen> (Virtual User Generator) component
is used to add the number of concurrent users and to define how the users should be added, e.g.,
gradual ramp-up or spike stepped.
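
Outside of a dedicated tool, the same idea can be sketched with plain Python threads: several simulated users hit a shared resource at the same time and the test checks that the result stays consistent. The Counter class here is a made-up stand-in for real application code.

```python
# Minimal concurrency-test sketch: many threads update a shared counter at once.
# The Counter class is a stand-in for real application code or a database call.
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # remove the lock to observe a race condition
            self.value += 1

def test_concurrent_increments():
    counter = Counter()
    users, increments = 20, 1000

    def user():
        for _ in range(increments):
            counter.increment()

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With correct locking the final value must equal users * increments.
    assert counter.value == users * increments
```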

167. What’s The Difference Between A High-Level And Low-Level Test Case?
Answer.
 High-level test case covers the major functionality of the application e.g. functionality
related test cases, database test cases etc.
 Low-level test case tests the basic features like the User Interface (UI) in the application.

168. What’s The Difference Between A Two-Tier Architecture And The Three-Tier Architecture?
Answer.
 In a two-tier architecture, there happen to be two layers like Client and Server. The Client
sends a request to the Server and the server responds to the request by fetching the data
from it. The problem with the two-tier architecture is that the server can’t respond to multiple
requests at the same time, which may cause data integrity issues.
 In a three-tier architecture, there can be layers like Client, Server, and Database. Here, the
Client sends a request to Server, where the Server sends the request to the Database for
data. Based on that request the Database sends back the data to Server and from the Server,
the data is forwarded to the Client.

169. What’s The Difference Between Static Testing And Dynamic Testing?
Answer.
Static Testing (Done In Verification Stage).

Static Testing is a White Box testing technique where the developers verify or test their code with
the help of a checklist to find errors in it. This type of testing is done without running the actually
developed application or program. Code reviews, inspections, and walkthroughs are mostly done in
this stage of testing.

Dynamic Testing (Done In Validation Stage).

Dynamic Testing is done by executing the actual application with valid inputs to check the
expected output. Examples of Dynamic Testing methodologies are Unit Testing, Integration
Testing, System Testing and Acceptance Testing.
Some differences between Static Testing and Dynamic Testing are as follows.

 Static Testing is more cost-effective than Dynamic Testing because Static Testing is done in
the initial stage.
 In terms of Statement Coverage, Static Testing covers more areas than Dynamic Testing in
a shorter time.
 Static Testing is done before the code deployment where Dynamic Testing is done after the
code deployment.
 Static Testing is done in the Verification stage where Dynamic Testing is done in the
Validation stage.

170. What’s The Difference Between A Bug Log And Defect Tracking?
Answer.
Bug Log.

It’s a document for the number of defects such as open, closed, reopen or deferred of a particular
module.

Defect Tracking.

The process of tracking a defect such as symptoms, whether reproducible or not, priority, severity,
and status.

171. What Are The Contents Of An Effective Bug Report?


Answer.
 Project
 Subject
 Description
 Summary
 Detected By (Name of the Software Tester)
 Assigned To (Name of the Developer of the feature)
 Test Lead (Name)
 Detected in Version
 Closed in Version
 Date Detected
 Expected Date of Closure
 Actual Date of Closure
 Priority (Medium, Low, High, or Urgent)
 Severity (Range => 1 to 5)
 Status.
 Bug ID
 Attachment
 Test Case Failed (Total no. of test cases which are failing for a Bug)

172. How will you describe yourself as a QA Engineer?


Answer:
I began my career as a QA engineer in the year ____. Since then I have been working on a variety of
platforms and operating systems including Windows 7, Win 2K8, Win 2012 and different flavours of
Linux such as Ubuntu, RHEL, Suse, etc. During my stint as a test engineer, I have conducted
validation of different kinds of applications written in Java, Visual Basic, C/C++/C#, etc. I have
hands-on experience in testing client-server applications as well as web-based applications.

Being a QA engineer, I have experience in preparing test plans and writing test cases. I used to
attend several meetings with project managers, business analysts and sometimes with clients.

While thinking of different types of testing, I have explored many things such as Smoke Testing,
Integration Testing, Regression Testing, Black box or UAT Testing. Reporting defects clearly is also
an important area on which I put a lot of emphasis.

173. What do the following testing terms mean?


 QA, QC and Software Testing.
Answer:
1- Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of
the process which is followed to produce a quality product.
QA tracks the outcomes and adjusts the process to meet the expectation.

2- Quality Control (QC): QC is concerned with the quality of the product. QC finds the defects and suggests
improvements. QC implements the process set by QA. It is the responsibility of the tester.
3- Software Testing: It is the process of ensuring that the product developed by the developer
meets the user requirements. The motive behind testing is to find the bugs and make sure that
they get fixed.
Question Set-2.

174. What are the characteristics of a good test case?


Answer:
A good test case ensures that both positive and negative scenarios get covered. It is atomic, which
means it does only one thing at a time and doesn’t overlap with others.

Characteristics of a good test case.

1- Title: A clear and one-liner title to show the intent of the test case.
2- Purpose: A brief explanation of the reason the test case is getting created.
3- Description: A representation in words of the nature and characteristics of the test case.
4- Test objects: An unambiguous feature or module getting tested.
5- Preconditions: The conditions that must get satisfied during test execution.

175. What do you mean by a test plan?


Answer:
A test plan is a test life cycle document which captures the resource requirements, scope,
approach, and the schedule of several testing activities.

It also helps in identifying the risks that may arise during testing but guides to the solutions as
well.

A good test plan consists of history, contents, introduction, scope, overview, and approach. The
risks and assumptions are not left out either.

Question Set-3.

176. Assume you have a test plan with over 1000 test cases. How would you make sure what
should be automated and what to test manually?
Answer:
In such a situation, I will focus on test case priority and the feasibility of automation for the test
case in question.

There can be many other things that can make a difference.

1- The complicated scenarios which are tedious and take a lot of time in manual execution.
2- The test cases missed in the past.
3- The parts of the application that need regression testing.
4- The test cases which are hard to automate.
5- The features which are still under development. (If certain parts of the app are about to be
changed, I recommend not to start with automated testing for these cases.)
6- The test cases that are part of “explorative” testing and assessing the user experience.

177. How do you determine which devices and OS versions should we test?
Answer:
A good candidate would point to app analytics as the best measure, looking for the most used
devices for their particular app.

Another answer would be to check the app reviews where people might have complained about
specific issues happening on their devices.
Question Set-4.

178. Define a Test Case and a Use Case? What information would you include in their
descriptions?
Answer:
A test case is again a document which gives you a step-by-step, detailed idea of how you can test
an application. It usually comprises results (pass or fail), remarks, actions, outputs, and a
description.

A use case, on the other hand, is a document of another kind. It helps you understand the actions of the
user and the response of the system for a particular functionality. It comprises the cover
page, revision history, contents, exceptions, and pre- and post-conditions.

179. What is Testware?


Answer:
Testware is the subset of software, which helps in performing the testing of an application.

It is a term given to the combination of Software applications and utilities required for testing a
software package.

180. What is Test strategy?


Answer:
Test strategy helps to define the testing process that would take place in a software development
cycle. It consists of testing tasks, and lets managers and developers know about the issues as and
when they get discovered.

It includes the following information.

 Introduction,
 Resource,
 Scope and schedule for test activities,
 Required Test tools,
 Definition of Test priorities,
 Test planning and the types of test to run.
Question Set-5.

181. What are the main elements of a test plan and test cases?
Answer:
1- Testing objectives
2- Testing scope
3- Testing the frame


4- The environment
5- Reason for testing
6- The criteria for entrance and exit
7- Deliverables
8- Risk factors

182. What is the strategy for a successful Test automation plan?


Answer: A successful test automation plan should cover the following aspects.
 Preparation of Automation Test Plan
 Recording the scenario
 Error handler incorporation
 Script enhancement by inserting checkpoints and looping constructs
 Debugging the script and fixing the issues
 Re-running the script
 Reporting the result
Question Set-6.

183. What are the test stub and test driver? Why are they required?
Answer:
A. Stubs are dummy programs that imitate the actual software by giving the same output as
that of the real one.
B. Drivers are dummy programs that call a software component for testing.
We need them for testing the inter-related modules, say X and Y.

If we have developed only module X, then we cannot simply test it in isolation. But if there is a dummy
module, i.e., a stub, standing in for the module that X calls, then we can use it to test module X.

Next, module Y cannot receive or send data on its own because it has no caller yet. So in this case, we
have to transmit data to it through some external piece of code. This external piece of code is known as
the driver.
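
A small Python sketch of both ideas, using made-up modules X and Y: the stub stands in for the module that X calls, while the driver is the throwaway code that calls Y so it can be tested in isolation.

```python
# Hypothetical modules X and Y to illustrate stubs and drivers.

# --- Module under test: X depends on Y, which is not ready yet ---
def module_x(order_id, fetch_order=None):
    """X needs order data from Y; 'fetch_order' lets us inject a stub."""
    order = fetch_order(order_id)
    return f"Invoice for {order['customer']}: {order['total']}"

def stub_fetch_order(order_id):
    """Stub: imitates module Y by returning canned data."""
    return {"customer": "Alice", "total": 99.0}

def test_module_x_with_stub():
    assert module_x(1, fetch_order=stub_fetch_order) == "Invoice for Alice: 99.0"

# --- Module under test: Y has no caller yet, so a driver calls it ---
def module_y(total, tax_rate):
    """Y computes tax; it cannot run on its own without a caller."""
    return round(total * tax_rate, 2)

def test_driver_for_module_y():
    # The test itself acts as the driver that feeds data into Y.
    assert module_y(100.0, 0.2) == 20.0
```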

184. What are the roles and responsibilities of a software quality assurance engineer?
Answer:
A software quality assurance engineer has to perform the following tasks:

A. Understanding of software design


B. Knowledge of source code
C. Code review
D. Version control
E. Program testing
F. Integration testing
G. Release process
Question Set-7.

185. To what extent should developers do their testing or do you believe testing is the
responsibility of the QA team?
Answer:
The answer to this question depends on the business environment you work in. In today’s emerging
test scenario, it is also the developer’s responsibility to perform at least some testing of his own code.
It is not expected that he will have the capacity, or that his focus should be, to run through
large test plans or to test on a large stack of devices. However, without the responsibility to review
and test his code, a sense of ownership will not develop.

We believe that results will improve if all parties have access to test cases and can run and access
them regularly to verify if the latest changes brought any regression.

186. What’s your experience using Continuous Integration as part of the development process?
Answer:
If this applies to your company, it is a great thing to hear that a candidate has worked with Jenkins
or Bamboo CI. If he has set up these systems and can give recommendations to you on what
worked and did not work in his previous jobs, the candidate has earned himself not only bonus
points but a merit badge or two.

187. How do you define the bug life cycle?


Answer:
Bug life cycle comprises the numerous statuses of a defect during its life cycle. A few examples are
new, open, deferred, fixed, reopened, resolved, and closed. You may also speak about this process
and how you monitor and determine the status with the help of several points.

Question Set-8.

188. Do you know about bug leakage and bug release?


Answer:
Bug release is when software or an application is handed over to the testing team knowing that a
defect is present in the release. In this case, the priority and severity of the bug are low, as it can
be fixed before the final handover.

Bug leakage is when a bug is discovered by the end users or the customer after the testing team
missed detecting it while testing the software.

189. Tell us about the best bug of your test career?


Answer:
Well, since there are so many quality bugs I’ve discovered in my testing career, I can’t really
single out the best one I found. What always surprises me is that you find so many different kinds
of bugs so quickly. It proved that having multiple competencies in the team is a great asset while
testing. The latest bug hunt I did was conducted on a product application which had already been on
the market for some time. Still, we found 21 bugs in 7 minutes! And yes, even a crash! That is what
amazes me.

190. What is your view on acceptance testing, when it is done and who does it?
Answer:
Acceptance Testing is a software testing checkpoint where a system is tested for acceptability. The
purpose of this test is to evaluate the system’s compliance with the business requirements and to
assess whether it is acceptable for delivery. It is formal testing, conducted with respect to user needs,
requirements, and business processes, to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customer or other authorized entity to determine
whether or not to accept the system.

A. When is it performed?
Acceptance Testing is carried out after System Testing and before making the system available for
actual use.

B. Who performs it?


Internal Acceptance Testing (Aka Alpha Testing) is done by members of the organization that has
produced the software but who are not directly involved in the project (Development or Testing).
Commonly, it is the members of Product Management, Pre-Sales, and/or Tech Support.

External Acceptance Testing is performed by the product consumers who are not employees of the
organization that developed the software. They can be some technical people from the client-side
or the actual end users.

191. What is your experience in dealing with your team members, how do you plan it?
Answer:
When you work for an organization, be it medium or large, it is almost certain that you won’t be the
only one in the team. And there are times when you find it very difficult and frustrating to deal
with the team members. There could be arguments, differences, and misunderstandings, and some
will also try to ignore the others. But my purpose is always to look beyond all of this. I see it this
way: we are a team and we should work together to reach a common goal. I’ve learned to be
friendly with my teammates and sometimes invite them over for coffee. As humans, it is very
important to share feelings and have important discussions, and that is exactly what I intend to do.
This is something that not only I but everyone else in a working environment should apply.

192. What Are The Steps You Follow To Create A Test Script?
Ans# Creating a test script usually requires the below steps.
Step-1# The primary requirement is to get a thorough understanding of the Application Under
Test.
 To achieve this, we will read the requirements related documents very thoroughly.
 In case the requirements document is not available with us, then we will use other available
references like the previous version of the application or wire-frames or screenshots.
Step-2# After developing an understanding of the requirements, we will prepare an exhaustive list
of the areas to be tested for the AUT. The focus in this step is to identify “What” to test. Thus the
outcome of this step is a list of test scenarios.
Step-3# After we are ready with the test scenarios, our focus shifts on “How” to test them. This
phase involves writing detailed steps about how to test a particular feature, what data to enter
(test data) and what is the expected outcome.
With all this done we are ready for testing.

193. What Are The Key Elements In A Bug Report?


Ans# An ideal bug report should contain the following key points.
 A unique ID.
 Defect description – a short description of the bug.
 Steps to reproduce – include the detailed test steps to emulate the issue. We should also
provide the test data and the time of its occurrence.
 Environment – add any system settings that could help in reproducing the issue.
 Module/section of the application in which issue has occurred.
 Severity.
 Screenshots.
 Responsible QA – This person is a point of contact in case you want to follow-up regarding this
issue.

194. How Will You Overcome The Challenges Due To Unavailability Of Proper Documentation For
Testing?
Ans# If the standard documents like System Requirement Specification or Feature Description
Document are not available then QAs may have to rely on the following references if available.
 Screenshots.
 A previous version of the application.
 Wireframes.
Another reliable method is to have discussions with the developer and the business analyst. It helps in
clearing doubts and opens a channel for bringing clarity to the requirements. The e-mails
exchanged could also be useful as a testing reference.
SMOKE testing is another good option which will help to verify the main functionality of the
application. It also reveals some very basic bugs in the application.

If none of these work we can just test the application from our previous experiences.

195. Is There Any Difference Between Quality Assurance, Quality Control, And
Software Testing. What Is It?
Ans# Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the
quality of process which is followed to produce a quality product. QA tracks the outcomes and
adjusts the process to meet the expectation.
Quality Control (QC) is related to the quality of the product. QC not only finds the defects but also
suggests improvements. Thus, the process that is set by QA is implemented by QC. QC is the
responsibility of the testing team.

Software Testing is the process of ensuring that product which is developed by the developer
meets the user requirement. The motive to perform testing is to find the bugs and make sure that
they get fixed. Thus it helps to maintain the quality of the product to be delivered to the customer.

196. What Is The Best Approach To Start QA In A Project?


Ans# A good time to start QA is right at the beginning of the project. In this way, the QA
team gets enough time to properly plan the processes to be followed during the testing life
cycle.
It’ll also ensure that the product to be delivered to the customer satisfies the quality criteria.

QA also plays an important role in initiating the communication between the domain teams. The
testing phase starts after the test plans are written, reviewed and approved.

197. Explain The Difference Between Smoke Testing And Sanity Testing?
Ans# The main differences between smoke and sanity testing are as follows.
 Whenever a new build is delivered after bug fixing, it has to pass through sanity testing,
whereas smoke testing is done on early builds to check the major functionalities of the application.
 Sanity testing is done either by the tester or the developer, whereas smoke testing is not
necessarily done by a tester or a developer.
 Smoke tests precede sanity test execution.
 Smoke testing touches the critical areas of the product to ensure the basics are working fine,
whereas sanity testing includes a narrower set of checks focusing on a particular functionality or
bug fix.

198. Is There Any Difference Between Retesting And Regression Testing?


Ans# The possible differences between Retesting and Regression testing are as follows.
 We perform retesting to verify the defect fixes. But the regression testing assures that the
bug fix didn’t break other parts of the application.
 Also, regression test cases verify the functionality of some or all the modules.
 Retesting involves the execution of test cases that are in a failed state. But the regression
ensures the re-execution of passed test cases.
 Retesting has a higher priority over regression. But in some cases, both get executed in
parallel.

199. What Are The Severity And Priority Of A Defect? Explain Using An Example.
Ans# Priority reflects the urgency of the defect from the business point of view. It indicates – How
quickly we need to fix the bug?
Severity reflects the impact of the defect on the functionality of the application. Bugs having a
critical impact on the functionality require a quick fix.

Here are examples which show the bugs under different priority and severity combinations.

High Priority And Low Severity.

The display of the company logo is not proper on its website.

High Priority And High Severity.

While making an online purchase, if the user sees a message like “Error in order processing, please
try again.” at the time of submitting the payment details.

Low Priority And High Severity.

Suppose we have a typical scenario in which the application crashes, but such a scenario has a
rare occurrence.

Low Priority And Low Severity.

These are typo errors in the displayed content like “You have registered success”. Instead of
“successfully”, “success” is written.

200. What Is The Role Of QA In Project Development?


Ans# QA stands for QUALITY ASSURANCE. QA team assures the quality by monitoring the whole
development process. QA tracks the outcomes after adjusting the process to meet the
expectations.
Quality Assurance (QA) does many tasks like the following.
 Responsible for monitoring the process to be carried out during development.


 Plans the processes to follow for the test execution phase.
 Prepares the time-table and agrees on the Quality Assurance plan for the product with the
customer.
 Communicates the QA process to other teams and their members.
 Ensures traceability of test cases to requirements.

201. As Per Your Understanding, List Down The Key Challenges Of Software Testing?
Ans# Following are some of the key challenges of software testing.
 Availability of Standard documents to understand the application.
 Lack of skilled testers, tools, and training.
 Understanding requirements, Domain knowledge, and business user perspective
understanding.
 Agreeing on the Test Plan and the test cases with the customer.
 Re-execution efforts due to changing requirements.
 Ensuring the application is stable enough to be tested; otherwise, retesting efforts become high.
 Testers always work under stringent timelines.
 Deciding on to which tests to execute first.
 Testing the complete application using an optimized number of test cases.
 Planning test cases for other stages of testing like Regression, Release, and Performance
testing.

202. What is a Bug?


When the actual result deviates from the expected result while testing a software application or product,
it results in a defect. Hence, any deviation from the specification mentioned in the product’s
functional specification document is a defect. In different organizations, it’s called a bug.

203. What is a Defect?


If the software misses some feature or function that is present in the requirements, it is called a defect.

204. What is CMM?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of
the software processes of an organization and for identifying the key practices that are required to
increase the maturity of these processes.

205. What is Beta Testing?


Beta testing is testing of a release of a software product conducted by customers.

206. What is Black Box Testing?


Testing based on an analysis of the specification of a piece of software without reference to its
internal workings. The goal is to test how well the component conforms to the published
requirements for the component.

207. What is Bottom Up Testing?


An approach to integration testing where the lowest level components are tested first, then used to
facilitate the testing of higher level components. The process is repeated until the component at
the top of the hierarchy is tested.

208. What is Boundary Testing?


Tests which focus on the boundary or limit conditions of the software being tested. (Some of these
tests are stress tests.)

209. What is Boundary Value Analysis?


BVA is similar to Equivalence Partitioning but focuses on "corner cases", i.e. values at and just beyond
the edges of the ranges defined by the specification. This means that if a function accepts all values in
the range of negative 100 to positive 1000, test inputs would include negative 101, negative 100,
positive 1000 and positive 1001.
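
Using the same range, a boundary value analysis test would exercise the values at and just outside each edge. The accepts function below is a hypothetical validator written only for this sketch.

```python
# Boundary value analysis for a hypothetical validator that accepts -100..1000.
import pytest

def accepts(value: int) -> bool:
    """Hypothetical function under test."""
    return -100 <= value <= 1000

@pytest.mark.parametrize("value, expected", [
    (-101, False),  # just below the lower boundary
    (-100, True),   # lower boundary
    (-99,  True),   # just above the lower boundary
    (999,  True),   # just below the upper boundary
    (1000, True),   # upper boundary
    (1001, False),  # just above the upper boundary
])
def test_boundaries(value, expected):
    assert accepts(value) == expected
```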

210. What is Branch Testing?


Testing of all branches in the program source code is called Branch Testing.

211. What is Coding?


The generation of source code is called Coding.

212. What is Compatibility Testing?


Testing whether software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.

213. What is a Component?


A component is an identifiable part of a larger program or construction. Usually, a component
provides a particular function or group of related functions.

214. What is Component Testing?


Testing of individual software components is called Component testing.

215. What is Acceptance Testing?


Testing conducted to enable a user/customer to determine whether to accept a software product.

Normally performed to validate the software meets a set of agreed acceptance criteria.

216. What is Accessibility Testing?


Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive impairments,
etc.).

217. What is Ad Hoc Testing?


A testing phase where the tester tries to 'break' the system by randomly trying the system's
functionality is called Ad Hoc testing. This can include negative testing also.

218. What is Agile Testing?


Testing practice for projects using agile methodologies, treating development as the customer of
testing and emphasizing a test-first design paradigm. See also Test Driven Development.

219. What is Application Binary Interface (ABI)?


A specification defining requirements for portability of applications in binary forms across different
system platforms and environments is called Application Binary Interface (ABI).

220. What is Application Programming Interface (API)?


A formalized set of software calls and routines that can be referenced by an application program in
order to access supporting system or network services is called Application Programming Interface
(API).

221. What is Automated Software Quality (ASQ)?


The use of software tools, such as automated testing tools, to improve software quality is called
Automated Software Quality (ASQ).

222. What is Automated Testing?


Testing employing software tools which execute tests without manual intervention is called
Automated Testing. Can be applied in GUI, performance, API, etc. testing. The use of software to
control the execution of tests, the comparison of actual outcomes to predicted outcomes, the
setting up of test preconditions, and other test control and test reporting functions.

223. What is Backus-Naur Form?


It is a meta-language used to formally describe the syntax of a language.

224. What is Basic Block?


A sequence of one or more consecutive, executable statements containing no branches is called
Basic Block.

225. What is Basis Path Testing?


A white box test case design technique that uses the algorithmic flow of the program to design
tests.

226. What is Basis Set?


The set of tests derived using basis path testing.

227. What is Baseline?


The point at which some deliverable produced during the software engineering process is put
under formal change control.

228. What you will do during the first day of job?


Few things that you should be doing on the first day of your job are

o Reach office at least 15 minutes early


o Be calm and relaxed
o Have a smile on your face
o Don’t be shy
o Don’t try too hard to impress your colleagues
o If you're offered to go have lunch with your new boss and co-workers then you
should go. It's important to show that you're ready to mingle with your new team.
o Pay attention to how decisions are made.

229. What is Binary Portability Testing?


Testing an executable application for portability across system platforms and environments,
usually for conformation to an ABI specification is called Binary Portability Testing.

230. What is Breadth Testing?


A test suite that exercises the full functionality of a product but does not test features in detail is
called Breadth Testing.

231. What is CAST?


Computer Aided Software Testing refers to the computing-based processes, techniques and tools
for testing software applications or programs.

232. What is Capture/Replay Tool?


A test tool that records test input as it is sent to the software under test. The input cases stored
can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

233. What is Cause Effect Graph?


A graphical representation of inputs and the associated outputs effects which can be used to
design test cases.

234. What is Code Complete?


Phase of development where functionality is implemented in its entirety; bug fixes are all that are left.
All functions found in the Functional Specification have been implemented.

235. What is Code Coverage?


An analysis method that determines which parts of the software have been executed (covered) by
the test case suite and which parts have not been executed and therefore may require additional
attention.

236. What is Code Inspection?


A formal testing technique where the programmer reviews source code with a group who ask
questions analysing the program logic, analysing the code with respect to a checklist of historically
common programming errors, and analysing its compliance with coding standards. Know more
about the Inspection in software testing.

237. What is Code Walkthrough?


A formal testing technique where source code is traced by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyse the programmer's logic and
assumptions. Know more about Walkthrough in software testing.

238. What is Concurrency Testing?


Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.

239. What is Conformance Testing?


Conformance testing, also known as compliance testing, is a methodology used in engineering to
ensure that a product, process, computer program or system meets a defined set of standards.

240. What is Context Driven Testing?


The context-driven school of software testing is a flavour of Agile Testing that advocates continuous
and creative evaluation of testing opportunities in light of the potential information revealed and
the value of that information to the organization right now.

241. What is Conversion Testing?


Testing of programs or procedures used to convert data from existing systems for use in
replacement systems.

242. What is Cyclomatic Complexity?


A measure of the logical complexity of an algorithm, used in white-box testing is called Cyclomatic
Complexity.
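
One common way to compute it is V(G) = E - N + 2P over the control-flow graph, which for a single function reduces to the number of decision points plus one. The made-up function below has two decision points, so its cyclomatic complexity is 3 and at least three independent paths should be tested.

```python
# Hypothetical function with cyclomatic complexity 3 (two decision points + 1),
# so at least three linearly independent paths should be tested.
def shipping_cost(weight_kg: float, express: bool) -> float:
    cost = 5.0
    if weight_kg > 10:      # decision point 1
        cost += 2.0
    if express:             # decision point 2
        cost *= 1.5
    return cost

# One test per independent path.
def test_light_standard():
    assert shipping_cost(1, express=False) == 5.0

def test_heavy_standard():
    assert shipping_cost(20, express=False) == 7.0

def test_light_express():
    assert shipping_cost(1, express=True) == 7.5
```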

243. What is Data Dictionary?


A database that contains definitions of all data items defined during analysis.

244. What is Data Flow Diagram?


A data flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system, modelling its process aspects.

245. What is Data Driven Testing?


Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. This is a common technique used in Automated Testing.
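
A minimal sketch of the idea in Python: the test logic is written once and the data is kept in an external file, with each row driving one execution. The login function and the login_data.csv file are assumptions invented for the example.

```python
# Data-driven test sketch: one test, many externally defined data rows.
# Assumes a file "login_data.csv" with columns: username,password,expected
import csv
import pytest

def login(username: str, password: str) -> bool:
    """Hypothetical function under test."""
    return username == "admin" and password == "secret"

def load_rows(path="login_data.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["username"], row["password"], row["expected"] == "true"

@pytest.mark.parametrize("username, password, expected", list(load_rows()))
def test_login_data_driven(username, password, expected):
    assert login(username, password) == expected
```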

246. What is Debugging?


The process of finding and removing the causes of software failures is called Debugging.

247. What is Defect?

When actual result deviates from the expected result while testing a software application or
product then it results into a defect.

248. What is Dependency Testing?


Dependency Testing is a testing technique in which an application's requirements are examined against
pre-existing software and initial states in order to test the proper functionality. The impacted areas
of the application are also tested when testing new features or existing features.

249. What is Depth Testing?


A test that exercises a feature of a product in full detail is called Depth testing.

250. What is Dynamic Testing?


Testing software through executing it is called Dynamic testing. See also Static Testing.

251. What is Emulator?


A device, computer program, or system that accepts the same inputs and produces the same
outputs as a given system is called an Emulator.

252. What is Endurance Testing?


Checks for memory leaks or other problems that may occur with prolonged execution.

253. What is End-to-End testing?


Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate is called End-to-End testing.

254. What is Equivalence Class?


A portion of a component's input or output domains for which the component's behaviour is
assumed, based on the component's specification, to be the same.

255. What is Equivalence Partitioning?


A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.
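
For instance, if an input field accepts ages 18 to 60, there are three equivalence classes (below the range, inside it, and above it), and one representative value from each class is usually enough. The validator below is hypothetical.

```python
# Equivalence partitioning for a hypothetical age field that accepts 18..60.
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical function under test."""
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (10, False),  # representative of the "below range" class
    (35, True),   # representative of the "valid" class
    (70, False),  # representative of the "above range" class
])
def test_age_partitions(age, expected):
    assert is_valid_age(age) == expected
```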

256. What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

257. What is Functional Decomposition?

A technique used during planning, analysis and design; creates a functional hierarchy for the
software.

258. What is Functional Specification?


A document that describes in detail the characteristics of the product with regard to its intended
features is called Functional Specification.

259. What is Functional Testing?


Testing the features and operational behaviour of a product to ensure they correspond to its
specifications. It is testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions. It is also
known as Black Box Testing.

260. What is Glass Box Testing?


White Box Testing is also known as Glass Box testing.

261. What is Gorilla Testing?


Gorilla Testing is a testing technique in which sometimes developers also join hands with testers to
test a particular module thoroughly in all aspects.

262. What is Gray Box Testing?


A combination of Black Box and White Box testing methodologies is called Gray box testing.
Testing a piece of software against its specification but using some knowledge of its internal
workings.

263. What is High Order Tests?


Black-box tests conducted once the software has been integrated.

264. What is Independent Test Group (ITG)?


A group of people whose primary responsibility is software testing. Know more about Independent
testing

265. What is Inspection?


A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).

266. What is Integration Testing?

After integrating two or more components together, we do integration testing. Integration
testing is usually performed after unit and functional testing. This type of testing is especially
relevant to client/server and distributed systems.

267. What is Installation Testing?


Installation testing is a kind of quality assurance work in the software industry that focuses on what
customers will need to do to install and set up the new software successfully. The testing process
may involve full, partial or upgrades install/uninstall processes.

268. What is Load Testing?


A load testing is a type of software testing which is conducted to understand the behaviour of the
application under a specific expected load. Also see Performance Testing.

269. What is Localization Testing?


This term refers to adapting software for a specific locality, i.e., a particular language, region, and culture.

270. What is Loop Testing?


A white box testing technique that exercises program loops in order to validate them.
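
A typical set of loop tests exercises the loop zero times, once, and many times. The summing function below is a made-up example.

```python
# Loop testing sketch: exercise a simple loop 0, 1 and many times.
def total(values):
    result = 0
    for v in values:   # the loop under test
        result += v
    return result

def test_loop_zero_iterations():
    assert total([]) == 0

def test_loop_one_iteration():
    assert total([5]) == 5

def test_loop_many_iterations():
    assert total(range(1, 101)) == 5050
```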

271. What is Metric?


Metric is a standard of measurement. Software metrics are the statistics describing the structure or
content of a program. A metric should be a real objective measurement of something such as
number of bugs per lines of code.

272. What is Monkey Testing?


Testing a system or an application on the fly, i.e., just a few tests here and there, to ensure the system
or the application does not crash.

273. What is Negative Testing?


Testing aimed at showing software does not work. This is also known as "test to fail". See also
Positive Testing.

274. What is Path Testing?


Testing in which all paths in the program source code are tested at least once.

275. What is Performance Testing?


Testing conducted to evaluate the compliance of a system or component with specified
performance requirements. Often this is performed using an automated test tool to simulate a large
number of users.

276. What is Positive Testing?


Testing aimed at showing software works. This is also known as "test to pass".

277. What is Quality Assurance?


All those planned or systematic actions necessary to provide adequate confidence that a product
or service is of the type and quality needed and expected by the customer.

278. What is Quality Audit?


A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.

279. What is Quality Circle?


A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.

280. What is Quality Control?


The operational techniques and the activities used to fulfil and verify requirements of quality.

281. What is Quality Management?


That aspect of the overall management function that determines and implements the quality
policy.

282. What is Quality Policy?


In quality management system, a quality policy is a document jointly developed by management
and quality experts to express the quality objectives of the organization, the acceptable level of
quality and the duties of specific departments to ensure quality.

283. What is Quality System?


The organizational structure, responsibilities, procedures, processes, and resources for
implementing quality management is called Quality System.

284. What is Race Condition?


A race condition is an undesirable situation that occurs when a device or system attempts to
perform two or more operations at the same time, but because of the nature of the device or
system, the operations must be done in the proper sequence to be done correctly.

285. What is Ramp Testing?


Continuously raising an input signal until the system breaks down.

286. What is Recovery Testing?


Recovery testing confirms that the program recovers from expected or unexpected events without
loss of data or functionality. Events can include shortage of disk space, unexpected loss of
communication, or power out conditions.

287. What is Regression Testing?


Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made is called Regression testing.

288. What is Release Candidate?


A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).

289. What is Sanity Testing?


Brief test of major functional elements of a piece of software to determine if it is basically
operational.

See also Smoke Testing.

290. What is Scalability Testing?


Performance testing focused on ensuring the application under test gracefully handles increases in
work load.

291. What is Security Testing?


Testing which confirms that the program can restrict access to authorized personnel and that the
authorized personnel can access the functions available to their security level.

292. What is Smoke Testing?


In Software testing context, smoke testing refers to testing the basic functionality of the build.

293. What is Soak Testing?


Running a system at high load for a prolonged period of time is called Soak testing. For example;
run several times more transactions in an entire day (or night) than would be expected in a busy
day, to identify the performance problems that appear after a large number of transactions have
been executed.

294. What is Software Requirements Specification?


A deliverable that describes all data, functional and behavioural requirements, all constraints, and
all validation requirements for software.

295. What is Software Testing?


A set of activities conducted with the intent of finding errors in software.

296. What is Static Analysis?


Analysis of a program carried out without executing the program.

297. What is Static Analyzer?


A tool that carries out static analysis is called Static Analyzer.

298. What is Static Testing?


Analysis of a program carried out without executing the program.

299. What is Storage Testing?


Testing that verifies the program under test stores data files in the correct directories and that it
reserves sufficient space to prevent unexpected termination resulting from lack of space. This is
external storage as opposed to internal storage.

300. What is Stress Testing?


Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. Often this is performance testing
using a very high level of simulated load.

301. What is Structural Testing?


Testing based on an analysis of internal workings and structure of a piece of software. See also
White Box Testing.

302. What is System Testing?


Testing that attempts to discover defects that are properties of the entire system rather than of its
individual components is called System testing.

303. What is Testability?


The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.

304. What is Testing?

The process of exercising software to verify that it satisfies specified requirements and to detect
errors is called testing. The process of analysing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the features of the software item
(Ref. IEEE Std 829). The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect of the system or
component.

305. What is the difference between quality assurance and testing?


Quality assurance involves the entire software development process and testing involves operation
of a system or application to evaluate the results under certain conditions. QA is oriented to
prevention and Testing is oriented to detection.

306. Why does software have bugs?


Software has bugs because of the following reasons: 1) miscommunication, 2) programming errors,
3) time pressures, 4) changing requirements, and 5) software complexity.

307. How do you do usability testing, security testing, installation testing, ADHOC, safety and
smoke testing?
Usability testing: testing the user-friendliness and simplicity of the software. Security testing: testing
access rights, for example, the admin has all rights while user1 only has the right to read and open, not
to write. Installation testing: testing the software while installing it in different environments. Safety and
security go hand in hand; most of the time they rely on logins. Smoke testing: testing whether the basic
functionality of the software is working or not. Ad hoc testing: informally testing the basic functionalities
without a documented procedure; we go for it when we don’t have time to test procedure-wise.

308. What are memory leaks and buffer overflows?


A memory leak means incomplete de-allocation of memory; such bugs happen very often. A buffer
overflow means data sent as input to the server overflows the boundaries of the input area, thus
causing the server to misbehave.

309. What are the major differences between stress testing, load testing, Volume testing?
Stress testing means increasing the load beyond the expected limit and checking the performance at
each level. Load testing means applying the expected load at a time and checking the performance at
that level. Volume testing means processing a large volume of data (for example, a very large
database) and checking the application’s performance.

310. What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

311. What is Functional Decomposition?


A technique used during planning, analysis and design; creates a functional hierarchy for the
software.

312. What is Functional Specification?


A document that describes in detail the characteristics of the product with regard to its intended
features is called Functional Specification.

313. What is Test Bed?


An execution environment configured for testing. It may consist of specific hardware, OS, network
topology, configuration of the product under test, other application or system software, etc. The
Test Plan for a project should enumerate the test beds(s) to be used.

314. What is Test Case?


Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. In other words, it is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (Related term: Test Driven Development - a testing methodology associated with Agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.)
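A minimal sketch of what such a test case can look like in code (the apply_discount function and its values are assumed purely for illustration): the precondition goes in setUp, the steps in the test body, and the expected outcome is asserted at the end.

import unittest

def apply_discount(price, percent):
    # Production code under test (assumed for illustration).
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def setUp(self):
        # Precondition / test environment setup.
        self.price = 200.0

    def test_valid_discount(self):
        # Step: call the unit. Expected outcome: the discounted price.
        self.assertEqual(apply_discount(self.price, 10), 180.0)

    def test_invalid_discount_rejected(self):
        # Negative case: out-of-range input must be rejected.
        with self.assertRaises(ValueError):
            apply_discount(self.price, 150)

if __name__ == "__main__":
    unittest.main()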

315. What is Test Driver?


A program or test tool used to execute a test. Also known as a Test Harness.

316. What is Test Environment?


The hardware and software environment in which tests will be run, and any other software with
which the software under test interacts when under test including stubs and test drivers.

317. What is Test First Design?


Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

318. What is Test Harness?


A program or test tool used to execute tests. Also known as a Test Driver.
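As an illustrative sketch (real projects would normally rely on an established framework such as unittest or pytest), a home-grown test driver/harness is simply a small program that executes test functions and reports the results:

import traceback

def test_addition():
    assert 2 + 2 == 4

def test_string_upper():
    assert "qa".upper() == "QA"

def run_all(tests):
    # The harness: execute each test, catch failures, summarise results.
    passed = failed = 0
    for test in tests:
        try:
            test()
            passed += 1
            print(f"PASS  {test.__name__}")
        except Exception:
            failed += 1
            print(f"FAIL  {test.__name__}")
            traceback.print_exc()
    print(f"{passed} passed, {failed} failed")

if __name__ == "__main__":
    run_all([test_addition, test_string_upper])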

319. What is Test Plan?


A document describing the scope, approach, resources, and schedule of intended testing activities.
It identifies test items, the features to be tested, the testing tasks, who will do each task, and any
risks requiring contingency planning.

320. What is Test Procedure?


A document providing detailed instructions for the execution of one or more test cases is called
test Procedure.

321. What is Test Script?


Test script is commonly used to refer to the instructions for a particular test that will be carried out
by an automated test tool.

322. What is Test Specification?


Test specification is a document specifying the test approach for a software feature or combination
or features and the inputs, predicted results and execution conditions for the associated tests.

323. What is Test Suite?


A collection of tests used to validate the behaviour of a product. The scope of a Test Suite varies
from organization to organization. There may, for example, be several Test Suites for a particular product. In most cases, however, a Test Suite is a high level concept, grouping together hundreds
or thousands of tests related by what they are intended to test.

324. What is Test Tool?


Computer programs used for the testing of a system, a component of the system, or its
documentation is called a test tool.

325. What is Thread Testing?


A variation of top-down testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.

326. What is Top Down Testing?


Top down testing is an approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being simulated by stubs. Tested
components are then used to test lower level components. The process is repeated until the lowest
level components have been tested.
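A hedged sketch of the idea (PaymentGatewayStub and OrderService are invented names): the top-level component is tested first, with the lower-level component simulated by a stub that returns canned responses.

# Illustrative top-down integration test in Python.
class PaymentGatewayStub:
    """Stands in for the real, not-yet-integrated lower-level component."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}   # canned response

class OrderService:
    # The top-level component under test.
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["status"] == "approved" else "rejected"

# The high-level OrderService is exercised first, against the stub.
service = OrderService(PaymentGatewayStub())
assert service.place_order(99.0) == "confirmed"
print("top-level component works against the stubbed dependency")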

327. What is Total Quality Management?


Total Quality Management is a company commitment to develop a process that achieves a high quality product and customer satisfaction.

328. What is Traceability Matrix?


Traceability Matrix is a document showing the relationship between Test Requirements and Test
Cases.
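A tiny illustration (the requirement and test case IDs are made up) of how such a matrix can be kept as structured data and scanned for coverage gaps:

# Requirement ID -> test cases that cover it (hypothetical IDs).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # no coverage yet - a gap to flag
}

for req, cases in traceability.items():
    status = ", ".join(cases) if cases else "NOT COVERED"
    print(f"{req}: {status}")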

329. What is Usability Testing?


Testing the ease with which users can learn and use a product.

330. What is Use Case?


Use case is the specification of tests that are conducted from the end-user perspective. Use cases
tend to focus on operating software as an end-user would conduct their day-to-day activities.

331. What is Unit Testing?


Testing of individual software components is called Unit testing.

332. What is Validation?


The process of evaluating software at the end of the software development process to ensure
compliance with software requirements is called Validation.

333. What is Verification?


The process of determining whether or not the products of a given phase of the software
development cycle meet the implementation steps and can be traced to the incoming objectives
established during the previous phase.

334. What is White Box Testing?


Testing based on an analysis of the internal workings and structure of a piece of software. This includes techniques such as Branch Testing and Path Testing. It is also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. White box testing is used to test the internal logic of the code, for example checking whether each path has been executed at least once and whether each branch has been executed at least once. It is used to check the structure of the code.
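A minimal branch-testing sketch (the classify function is invented for illustration): the function has two branches, so a white-box test set should execute both; tools such as coverage.py can then report which branches were actually exercised.

# Illustrative branch testing in Python.
def classify(age):
    if age >= 18:            # branch 1
        return "adult"
    return "minor"           # branch 2

def test_adult_branch():
    assert classify(18) == "adult"

def test_minor_branch():
    assert classify(17) == "minor"

test_adult_branch()
test_minor_branch()
print("both branches executed at least once")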

335. What is Workflow Testing?


Workflow testing is a technique in which a record is routed through each possible path of a workflow process. These tests are performed to ensure that each workflow process accurately reflects the business process. This kind of testing holds good for workflow-based applications.

336. What's the difference between load and stress testing ?


One of the most common but unfortunate misuses of terminology is treating 'load testing' and 'stress testing' as synonymous. The consequence of this semantic confusion is usually that the system is neither properly 'load tested' nor subjected to a meaningful stress test. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion. Load testing is subjecting a system to a (usually) statistically representative load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term 'load testing' by itself is too vague and imprecise to warrant use: for example, do you mean a representative load, an overload, a high load, etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.
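A rough, self-contained sketch of the distinction (handle_request merely simulates the system under test; a real test would drive the actual application or use a dedicated tool): the same driver is run at a representative load and then at levels well past the expected load to see how response times degrade.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)          # simulated processing time of one transaction
    return True

def run_at_load(concurrent_users, requests_per_user):
    start = time.perf_counter()
    latencies = []
    def one_user(_):
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            handle_request(None)
            latencies.append(time.perf_counter() - t0)
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(one_user, range(concurrent_users)))
    elapsed = time.perf_counter() - start
    return elapsed, statistics.mean(latencies)

# Load test: a statistically representative load (e.g. 10 concurrent users).
# Stress test: keep raising the load well beyond the expected level and
# observe how (and how gracefully) response times and failures degrade.
for users in (10, 50, 200):
    elapsed, avg_latency = run_at_load(users, 5)
    print(f"{users:>4} users: total {elapsed:.2f}s, avg latency {avg_latency*1000:.1f} ms")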

337. What's the difference between QA and testing?


QA is more of a preventive activity: it ensures quality in the process, and therefore in the product, rather than just testing the product for software bugs. Testing falls under quality control. Quality control measures the quality of a product, whereas quality assurance measures the quality of the processes used to create a quality product.

338. What is the best tester to developer ratio?


Tester-to-developer ratios range from 10:1 to 1:10, and there is no simple answer. It depends on many things: the amount of reused code, the number and type of interfaces, the platform, quality goals, etc. It can also depend on the development model - the more specs, the fewer testers. The roles can play a big part as well: does QA own beta? Do you include process auditors or planning activities? These figures can all vary very widely depending on how you define 'tester' and 'developer'. In some organizations, a 'tester' is anyone who happens to be testing software at the time - such as their own. In other organizations, a 'tester' is only a member of an independent test group. It is better to ask about the test labour content than about the tester/developer ratio. The test labour content across most applications is generally accepted as 50%, when people do honest accounting. For life-critical software, this can go up to 80%.

339. How can new Software QA processes be introduced in an existing organization?


A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary. Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers. In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations.

340. What are 5 common problems in the software development process? The 5 common
problems in the software development process are as follows:
1) Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems. 2) Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable. 3) Inadequate testing - no one will know whether or not the program is any good until customers complain or systems crash. 4) Features - requests to pile on new features after development is underway (feature creep); extremely common. 5) Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

341. What are 5 common solutions to software development problems?


The 5 common solutions to software development problems as given below:

1) Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. 2) Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. 3) Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 4) Stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on. 5) Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

342. What is a 'good code'?


'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as the McCabe complexity metric. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
- minimize or eliminate use of global variables
- use descriptive function and method names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions
- use descriptive variable names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions
- function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable
- function descriptions should be clearly spelled out in comments preceding a function's code
- organize code for readability
- use whitespace generously - vertically and horizontally
- each line of code should contain 70 characters max
- one code statement per line
- coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming conventions, etc.)
- in adding comments, err on the side of too many rather than too few; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code
- no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow chart and detailed program documentation
- make extensive use of error handling procedures and status and error logging
- for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application); minimize use of multiple inheritance and of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading)
- for C++, keep class methods small; less than 50 lines of code per method is preferable
- for C++, make liberal use of exception handlers

343. What is a 'good design'?


'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good
internal design is indicated by software code whose overall structure is clear, understandable,
easily modifiable, and maintainable; is robust with sufficient error-handling and status logging
capability; and works correctly when implemented. Good functional design is indicated by an
application whose functionality can be traced back to customer and end-user requirements. For
programs that have a user interface, it's often a good idea to assume that the end user will have
little computer knowledge and may not read a user manual or even the on-line help; some
common rules-of-thumb include: - the program should act in a way that least surprises the user - it
should always be evident to the user what can be done next and how to exit - the program
shouldn't let the users do something stupid without warning them.

344. What makes a good test engineer?


A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgement skills are needed to
assess high-risk areas of an application on which to focus testing efforts when time is limited.

345. What makes a good Software QA engineer?


The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able
to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand various
sides of issues are important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's
missing' is important for inspections and reviews.

346. What makes a good QA or Test manager?


A good QA, test, or QA/Test(combined) manager should: - be familiar with the software
development process - be able to maintain enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing
problems) - be able to promote teamwork to increase productivity - be able to promote cooperation
between software, test, and QA engineers - have the diplomatic skills needed to promote
improvements in QA processes -have the ability to withstand pressures and say 'no' to other
managers when quality is insufficient or QA processes are not being adhered to - have people judgement skills for hiring and keeping skilled personnel - be able to communicate with technical and non-technical people, engineers, managers, and customers - be able to run meetings and keep them focused

347. What's the role of documentation in QA?


The role of documentation in QA is Critical. (Note that documentation can be electronic, not
necessarily paper.) QA practices should be documented such that they are repeatable.
Specifications, designs, business rules, inspection reports, configurations, code changes, test plans,
test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a
system for easily finding and obtaining documents and determining what documentation will have
a particular piece of information. Change management for documentation should be used if
possible.

348. What's the big deal about 'requirements'?


One of the most reliable methods of insuring problems, or failure, in a complex software project is
to have poorly documented requirements specifications. Requirements are the details describing
an application's externally-perceived functionality and properties. Requirements should be clear,
complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement
would be, for example, 'user-friendly' (too subjective). A testable requirement would be something
like 'the user must enter their previously-assigned password to access the application'.
Determining and organizing requirements details in a useful and efficient way can be a difficult
effort; different methods are available depending on the particular project. Many books are
available that describe various approaches to this task. Care should be taken to involve ALL of a
project's significant 'customers' in the requirements process. 'Customers' could be in-house
personnel or out, and could include end-users, customer acceptance testers, customer contract
officers, customer management, future software maintenance engineers, salespeople, etc. Anyone
who could later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as 'The product shall.....'.
'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements. In some organizations requirements may end up in high level project plans, functional specification
documents, in design documents, or in other documents at various levels of detail. No matter what
they are called, some type of documentation with detailed requirements will be needed by testers
in order to properly plan and execute tests. Without such documentation, there will be no clear-cut
way to determine if a software application is performing correctly.

349. What steps are needed to develop and run software tests?
The following are some of the steps to consider: - Obtain requirements, functional design, and
internal design specifications and other necessary documents - Obtain budget and schedule
requirements - Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes, change processes,
etc.) - Identify application's higher-risk aspects, set priorities, and determine scope and limitations
of tests - Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc. - Determine test environment requirements (hardware, software,
communications, etc.) -Determine testware requirements (record/playback tools, coverage
analyzers, test tracking, problem/bug tracking, etc.) - Determine test input data requirements -
Identify tasks, those responsible for tasks, and labour requirements - Set schedule estimates,
timelines, milestones - Determine input equivalence classes, boundary value analyses, error
classes - Prepare test plan document and have needed reviews/approvals - Write test cases - Have
needed reviews/inspections/approvals of test cases - Prepare test environment and test ware,
obtain needed user manuals/reference documents/configuration guides/installation guides, set up
test tracking processes, set up logging and archiving processes, set up or obtain test input data -
Obtain and install software releases - Perform tests - Evaluate and report results - Track problems/bugs and fixes - Retest as needed - Maintain and update test plans, test cases, test environment, and testware through the life cycle
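For the 'equivalence classes and boundary value analyses' step mentioned above, a small sketch (the age limits 18..60 are assumed purely for illustration) of how the derived values turn into concrete checks:

# Suppose a field accepts ages 18..60 inclusive. Equivalence classes:
# below range, in range, above range. Boundary values: 17, 18, 60, 61.
def is_valid_age(age):
    return 18 <= age <= 60

boundary_cases = {17: False, 18: True, 60: True, 61: False}
class_representatives = {5: False, 35: True, 90: False}

for value, expected in {**boundary_cases, **class_representatives}.items():
    assert is_valid_age(value) == expected, f"failed for {value}"
print("all boundary and equivalence-class cases passed")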

350. What is 'configuration management'?


Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.

351. What if the software is so buggy it can't really be tested at all?


The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided with
some documentation as evidence of the problem.

352. How can it be known when to stop testing?


This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are: - Deadlines (release deadlines, testing deadlines, etc.)- Test cases
completed with certain percentage passed - Test budget depleted - Coverage of
code/functionality/requirements reaches a specified point - Bug rate falls below a certain level -
Beta or alpha testing period ends

353. What if there isn't enough time for thorough testing?


Use risk analysis to determine where testing should be focused. Since it's rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency,
or everything that could go wrong, risk analysis is appropriate to most software development
projects. This requires judgement skills, common sense, and experience. (If warranted, formal
methods are also available.) Considerations can include: - Which functionality is most important to
the project's intended purpose? - Which functionality is most visible to the user? - Which
functionality has the largest safety impact? - Which functionality has the largest financial impact on
users? - Which aspects of the application are most important to the customer? - Which aspects of
the application can be tested early in the development cycle? - Which parts of the code are most
complex, and thus most subject to errors? - Which parts of the application were developed in rush
or panic mode? - Which aspects of similar/related previous projects caused problems? - Which
aspects of similar/related previous projects had large maintenance expenses? - Which parts of the
requirements and design are unclear or poorly thought out? - What do the developers think are the
highest-risk aspects of the application? - What kinds of problems would cause the worst publicity? -
What kinds of problems would cause the most customer service complaints?- What kinds of tests
could easily cover multiple functionalities? - Which tests will have the best high-risk-coverage to
time-required ratio?

354. What can be done if requirements are changing continuously?


This is a common problem and a major headache. - Work with the project's stakeholders early on
to understand how requirements might change so that alternate test plans and strategies can be
worked out in advance, if possible. - It's helpful if the application's initial design allows for some
adaptability so that later changes do not require redoing the application from scratch. - If the code
is well-commented and well-documented this makes changes easier for the developers.- Use rapid
prototyping whenever possible to help customers feel sure of their requirements and minimize
changes. - The project's initial schedule should allow for some extra time commensurate with the
possibility of changes.- Try to move new requirements to a 'Phase 2' version of an application,
while using the original requirements for the 'Phase 1' version. - Negotiate to allow only easily-
implemented new requirements into the project, while moving more difficult new requirements into
future versions of the application. - Be sure that customers and management understand the
scheduling impacts, inherent risks, and costs of significant requirements changes. Then let
management or the customers (not the developers or testers) decide if the changes are warranted
- after all, that's their job. - Balance the effort put into setting up automated testing with the
expected effort required to re-do them to deal with changes. - Try to design some flexibility into
automated test scripts. - Focus initial automated testing on application aspects that are most likely
to remain unchanged. - Devote appropriate effort to risk analysis of changes to minimize
regression testing needs. - Design some flexibility into test cases (this is not easily done; the best
bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test
plans) - Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).

355. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.

356. What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.

357. How can Software QA processes be implemented without stifling productivity?


By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem detection,
panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the
same time, attempts should be made to keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and reporting, minimize time required
in meetings, and promote training as part of the QA process. However, no one - especially talented
technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A
typical scenario would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers.

358. What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is no
easy solution in this situation, other than: - Hire good people - Management should 'ruthlessly
prioritize' quality issues and maintain focus on the customer - Everyone in the organization should
be clear on what 'quality' means to the customer

359. How does a client/server environment affect testing?


Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities. There are commercial tools to assist with such testing.

360. How can World Wide Web sites be tested?


Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript,
plug-in applications), and applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide
variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include: - What are the expected loads on the server
(e.g., number of hits per unit time?), and what kind of performance is required under such loads
(such as web server response time, database query response times). What kinds of tools will be
needed for performance testing (such as web testing tools, other tools already in house that can be
adapted, web robot downloading tools, etc.)? - Who is the target audience? What kind of browsers
will they be using? What kind of connection speed will they be using? Are they intra-organization
(thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide
variety of connection speeds and browser types)? - What kind of performance is expected on the
client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and
run)? - Will down time for server and content maintenance/upgrades be allowed? How much? -
What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it
expected to do? How can it be tested? - How reliable are the site's Internet connections required to
be? And how does that affect backup system or redundant connection requirements and testing? -
What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.? - Which
HTML specification will be adhered to? How strictly? What variations will be allowed for targeted
browsers? - Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site? - How will internal and external links be validated and updated? How often? - Can testing be done on the production system, or will a separate test
system be required? How are browser caching, variations in browser option settings, dial-up
connection variability, and real-world internet 'traffic congestion' problems to be accounted for in
testing?- How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?- How are cgi programs,
applets, java scripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested? -
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page. - The page layouts and design elements should be consistent
throughout a site, so that it's clear to the user that they're still within a site. - Pages should be as
browser-independent as possible, or pages should be provided or generated based on the browser-
type. - All pages should have links external to the page; there should be no dead-end pages. - The
page owner, revision date, and a link to a contact person or organization should be included on
each page.

361. How is testing affected by object-oriented designs?


Well-engineered object-oriented design can make it easier to trace from code to internal design to
functional design to requirements. While there will be little effect on black box testing (where an
understanding of the internal design of the application is unnecessary), white-box testing can be
oriented to the application's objects. If the application was well-designed this can simplify test
design.

362. What is Extreme Programming and what's it got to do with testing?


Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in
his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of Extreme
Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code.
Customers are expected to be an integral part of the project team and to help develop scenarios
for acceptance/black box testing. Acceptance tests are preferably automated, and are modified
and rerun for each of the frequent development iterations. QA and test personnel are also required
to be an integral part of the project team. Detailed requirements documentation is not used, and
frequent re-scheduling, re-estimating, and re-prioritizing is expected.

363. Will automated testing tools make testing easier?


- Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects they can be valuable. - A common type
of automated tool is the 'record/playback' type. For example, a tester could click through all
combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have
them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text
based on a scripting language that is interpretable by the testing tool. If new buttons are added, or
some underlying code in the application is changed, etc. the application can then be retested by
just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the
changes. The problem with such tools is that if there are continual changes to the system being
tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to
continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can
be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for
all types of platforms. - Other automated tools can include: code analyzers - monitor code
complexity, adherence to standards, etc. coverage analyzers - these tools check which parts of the
code have been exercised by a test, and may be oriented to code statement coverage, condition
coverage, path coverage, etc. memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server and web applications under various load
levels. web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, a web site's interactions are secure. other tools - for test case
management, documentation management, bug reporting, and configuration management.

364. What's the difference between black box and white box testing?
Black-box and white-box are test design methods. Black-box test design treats the system as a
“black-box”, so it doesn't explicitly use knowledge of the internal structure. Black-box test design is
usually described as focusing on testing functional requirements. Synonyms for black-box include:
behavioural, functional, opaque-box, and closed-box. White-box test design allows one to peek
inside the “box”, and it focuses specifically on using internal knowledge of the software to guide
the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms
'behavioural' and 'structural'. Behavioural test design is slightly different from black-box test
design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In
practice, it hasn't proven useful to use a single test design method. One has to use a mixture of
different methods so that they aren't hindered by the limitations of a particular one. Some call this
'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes
altogether. It is important to understand that these methods are used during the test design phase,
and their influence is hard to see in the tests once they're implemented. Note that any level of
testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually
associated with structural test design, but this is because testers usually don't have well-defined
requirements at the unit level to validate.

365. What kinds of testing should be considered?


Below are the kinds of testing which should be considered:

Black box testing can be considered which is not based on any knowledge of internal design or
code. Tests are based on requirements and functionality. White box testing can also be considered
which is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. Unit testing - the most 'micro' scale of testing; to
test particular functions or code modules. Unit testing is typically done by the programmer and not
by testers, as it requires detailed knowledge of the internal program design and code. Not always
easily done unless the application has a well-designed architecture with tight code; may require
developing test driver modules or test harnesses. incremental integration testing - continuous
testing of an application as new functionality is added; requires that various aspects of an
application's functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by programmers or by
testers. Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems. Functional testing - black-box type testing geared to functional requirements
of an application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course applies to
any stage of testing.) system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system. end-to-end testing - similar to system
testing; the 'macro' end of the test scale; involves testing of a complete application environment in
a situation that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate. sanity
testing or smoke testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the
software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its environment. It can
be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or based on
use by end-users/customers over some limited period of time. Load testing - testing an application
under heavy loads, such as testing of a web site under a range of loads to determine at what point
the system's response time degrades or fails. Stress testing - term often used interchangeably with
'load' and 'performance' testing. Also used to describe such tests as system functional testing while
under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical
values, large complex queries to a database system, etc. performance testing - term often used
interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type'
of testing) is defined in requirements documentation or QA or Test Plans. Usability testing - testing
for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or
customer. User interviews, surveys, video recording of user sessions, and other techniques can be
used. Programmers and testers are usually not appropriate as usability testers. Install /uninstall
testing - testing of full, partial, or upgrade install/uninstall processes. Recovery testing - testing
how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Failover testing - typically used interchangeably with 'recovery testing' security testing - testing
how well the system protects against unauthorized internal or external access, wilful damage, etc;
may require sophisticated testing techniques. Compatibility testing - testing how well software
performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it. Ad-hoc testing
- similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it. Context-driven testing - testing driven by an
understanding of the environment, culture, and intended use of software. For example, the testing
approach for life-critical medical equipment software would be completely different than that for a
low-cost computer game. User acceptance testing is done to determine whether the software is
satisfactory to an end-user or customer. Comparison testing is about comparing software
weaknesses and strengths to competing products. Alpha testing - testing of an application when
development is nearing completion; minor design changes may still be made as a result of such testing. Alpha testing is typically done by end-users or others, not by programmers
or testers. Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Beta testing is typically done by end-
users or others, not by programmers or testers. mutation testing - a method for determining if a
set of test data or test cases is useful, by deliberately introducing various code changes ('bugs')
and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.

366. Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated
by an old parable: In ancient China there was a family of healers, one of whom was known
throughout the land and employed as a physician to a great lord. The physician was asked which of
his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and
dramatic treatments, and on occasion someone is cured and my name gets out among the lords. My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbours. My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

367. Why does software have bugs?


1) Miscommunication or no communication - as to specifics of what an application should or
shouldn't do (the application's requirements). 2) Software complexity - the complexity of current
software applications can be difficult to comprehend for anyone without experience in modern-day
software development. Multi-tiered applications, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have all
contributed to the exponential growth in software/system complexity. 3) Programming errors - programmers, like anyone else, can make mistakes. 4) Changing requirements (whether
documented or undocumented) - the end-user may not understand the effects of changes, or may
understand and request them anyway - redesign, rescheduling of engineers, effects on other
projects, work already completed that may have to be redone or thrown out, hardware
requirements that may be affected, etc. If there are many minor changes or any major changes,
known and unknown dependencies among parts of the project are likely to interact and cause
problems, and the complexity of coordinating changes may result in errors. Enthusiasm of
engineering staff may be affected. In some fast-changing business environments, continuously
modified requirements may be a fact of life. In this case, management must understand the
resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to
keep the inevitable bugs from running out of control. 5) Poorly documented code - it's tough to
maintain and modify code that is badly written or poorly documented; the result is bugs. In many
organizations management provides no incentive for programmers to document their code or write
clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly
for quickly turning out code, and there's job security if nobody else can understand it ('if it was
hard to write, it should be hard to read'). 6) Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

368. How can new Software QA processes be introduced in an existing organization?


A lot depends on the size of the organization and the risks involved. For large organizations with
high-risk (in terms of lives or property) projects, serious management buy-in is required and a
formalized QA process is necessary. Where the risk is lower, management and organizational buy-
in and QA implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand. For small groups or
projects, a more ad-hoc process may be appropriate, depending on the type of customers and
projects. A lot will depend on team leads or managers, feedback to developers, and ensuring
adequate communications among customers, managers, developers, and testers. The most value
for effort will often be in

(a) Requirements management processes, with a goal of clear, complete, testable requirement
specifications embodied in requirements or design documentation, or in 'agile'-type environments
extensive continuous coordination with end-users,

(b) Design inspections and code inspections, and

(c) Post-mortems/retrospectives.

369. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields that are included? Who assigns the priority and severity of the defect?
To report bugs in an Excel sheet, use columns such as: S.No, Module, Screen/Section, Issue detail, Severity, Priority and Issue status, and set filters on the column attributes. However, most companies use a SharePoint-based or dedicated defect management system for reporting bugs. When a project comes in for testing, module-wise details of the project are inserted into the defect management system being used. It contains the following fields: 1) Date 2) Issue brief 3) Issue description (used by the developer to reproduce the issue) 4) Issue status (active, resolved, on hold, suspended, not able to reproduce) 5) Assigned to (names of members allocated to the project) 6) Priority (high, medium, low) 7) Severity (major, medium, low). Typically the tester assigns the severity, while the test lead or project/development manager assigns the priority.

370. What are the tables in test plans and test cases?
Test plan is a document that contains the scope, approach, test design and test strategies. It includes the following: 1) Test plan identifier 2) Scope 3) Features to be tested 4) Features not to be tested 5) Test strategy 6) Test approach 7) Test deliverables 8) Responsibilities 9) Staffing and training 10) Risk and contingencies 11) Approval. A test case, on the other hand, is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behaviour for a certain set of inputs.

371. What are the table contents in test plans and test cases?
Test Plan is a document which is prepared with the details of the testing priority. A test Plan
generally includes: 1) Objective of Testing 2) Scope of Testing 3) Reason for testing 4) Timeframe
5) Environment 6) Entry and exit criteria 7) Risk factors involved 8) Deliverables

372. What automating testing tools are you familiar with?


WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, QARun.

373. Why did you use automating testing tools in your job? Automating testing tools are used
because of the following reasons:
1) For regression testing 2) Criteria to decide the condition of a particular build

374. How do you plan test automation?


1) Prepare the automation test plan 2) Identify the scenarios 3) Record the scenarios 4) Enhance the scripts by inserting checkpoints and conditional loops 5) Incorporate error handlers 6) Debug the script 7) Fix the issues 8) Rerun the script and report the results.

375. Can test automation improve test effectiveness?


Yes, automating a test makes the test process: 1) Fast 2) Reliable 3) Repeatable 4) Programmable
5) Reusable 6) Comprehensive

376. What are the main attributes of test automation?


The main attributes of software test automation are: Maintainability - the effort needed to update the test automation suites for each new release; Reliability - the accuracy and repeatability of the test automation; Flexibility - the ease of working with all the different kinds of automation testware; Efficiency - the total cost related to the effort needed for the automation; Portability - the ability of the automated tests to run on different environments; Robustness - the effectiveness of automation on an unstable or rapidly changing system; Usability - the extent to which automation can be used by different types of users.

377. Does automation replace manual testing?


There can be some functionality which cannot be tested with an automated tool, so it may have to be tested manually. Therefore manual testing can never be fully replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about the real environment, we do negative testing manually.

378. How will you choose a tool for test automation?


Choosing of a tool depends on many things like 1) Application to be tested 2) Test environment 3)
Scope and limitation of the tool. 4) Feature of the tool 5) Cost of the tool 6) whether the tool is
compatible with your application which means tool should be able to interact with your application
7) Ease of use

379. How you will evaluate the tool for test automation?
We need to concentrate on the features of the tools and how this could be beneficial for our
project.

The additional new features and the enhancements of the features will also help.

380. What are main benefits of test automation?


FAST, RELIABLE, COMPREHENSIVE, REUSABLE

381. What could go wrong with test automation?


1. The choice of automation tool may not suit certain technologies 2. The wrong set of tests may be automated

382. How you will describe testing activities?


Testing activities start from the elaboration phase. The various testing activities are preparing the
test plan, Preparing test cases, Execute the test case, Log the bug, validate the bug & take
appropriate action for the bug, Automate the test cases.

383. What testing activities you may want to automate?


Automate all the high priority test cases which need to be executed as a part of regression testing
for each build cycle.

384. Describe common problems of test automation.


The common problems are: 1. Maintenance of the old script when there is a feature change or
enhancement 2. The change in technology of the application will affect the old scripts

385. What types of scripting techniques for test automation do you know?
There are 5 types of scripting techniques: Linear, Structured, Shared, Data-Driven and Keyword-Driven.
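As a rough illustration of the data-driven technique (the login rows and the attempt_login stand-in are invented for the example), the same test logic is repeated for each row of test data, which in practice usually comes from a CSV or Excel sheet:

# A minimal data-driven scripting sketch.
login_data = [
    # (username, password, expected_result) - hypothetical values
    ("admin", "secret123", True),
    ("admin", "wrongpass", False),
    ("",      "secret123", False),
]

def attempt_login(username, password):
    # Stand-in for driving the real application or UI.
    return username == "admin" and password == "secret123"

for username, password, expected in login_data:
    actual = attempt_login(username, password)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: login({username!r}, {password!r}) -> {actual}")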

386. What are principles of good testing scripts for automation?


1) Proper coding standards 2) A standard format for defining functions, exception handlers, etc. 3) Comments for functions 4) Proper error handling mechanisms 5) Appropriate synchronisation techniques.

387. Can the activities of test case design be automated?


As I understand it, test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of putting the test results into an Excel sheet can be automated.

388. What are the limitations of automating software testing?


Hard-to-create conditions like “out of memory”, “invalid input/reply”, and “corrupt registry entries” make applications behave poorly, and existing automated tools can't force these conditions - they simply test your application in a “normal” environment.

389. What skills needed to be a good test automator?


1) Good Logic for programming. 2) Strong analytical skills 3) Pessimistic in Nature.

390. How to find that tools work well with your existing system?
1. Discuss with the tool's support officials
2. Download the trial version of the tool and evaluate it
3. Get suggestions from people who are already working with the tool

391. Describe some problem that you had with automating testing tool
1) The inability of WinRunner to identify third-party controls such as Infragistics controls. 2) A change in the location of a table object causes an "object not found" error. 3) The inability of WinRunner to execute the same script against multiple languages.

392. What are the main attributes of test automation?


Maintainability, Reliability, Flexibility, Efficiency, Portability, Robustness, and Usability - these are
the main attributes in test automation.

393. What testing activities you may want to automate in a project?


Testing tools can be used for: sanity tests (which are repeated on every build), stress/load tests (where you simulate a large number of users, which is impossible to do manually), and regression tests (which are done after every code change).

394. How to find that tools work well with your existing system?
To find this, select the suite of tests which are most important for your application. First run them with the automated tool, then subject the same tests to careful manual testing. If the results coincide, you can say your testing tool is performing well.

395. How will you test the field that generates auto numbers of AUT when we click the button
'NEW" in the application?
One solution is to create a text file in a known location, update it with the auto-generated value each time we run the test, and compare the currently generated value with the previous one.
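A minimal sketch of that idea in Python, assuming a hypothetical get_new_auto_number() helper that stands in for reading the value the application generates when "NEW" is clicked:

import os

STATE_FILE = "last_auto_number.txt"      # text file that remembers the previous value

def get_new_auto_number():
    # Placeholder: in a real test this would read the auto-generated value from
    # the application under test (e.g. through a UI automation tool).
    return 1042

def check_auto_number():
    new_value = get_new_auto_number()
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = int(f.read().strip())
        # Typical expectation: each new number is greater than the previous one.
        assert new_value > previous, f"{new_value} is not greater than {previous}"
    with open(STATE_FILE, "w") as f:
        f.write(str(new_value))          # store the current value for the next run

check_auto_number()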

396. How will you evaluate the fields in the application under test using automation tool?
We can use verification points (e.g. in Rational Robot) to validate the fields. For example, using the Object Data and Object Properties verification points we can validate field values.

397. Can we perform the test of single application at the same time using different tools on the
same machine?
No. The testing tools would not be able to determine unambiguously which browser was opened by which tool.

398. What is 'configuration management'?

Configuration management is a process to control and document any changes made during the life
of a project. Revision control, Change Control, and Release Control are important aspects of
Configuration Management.

399. How to test the Web applications?


The basic difference in web testing is that here we have to test for URL coverage and link coverage. Using WinRunner we can conduct web testing, but we have to make sure that the Web Test add-in is selected in the Add-In Manager. Using WinRunner we cannot test XML objects.

400. What are the problems encountered during the testing the application compatibility on
different browsers and on different operating systems
Font issues, alignment issues

401. How exactly the testing the application compatibility on different browsers and on different
operating systems is done?
Compatibility testing is a type of software testing used to ensure compatibility of the system/application/website with various other environments, such as different web browsers, hardware platforms, operating systems, and user configurations.

402. How testing is proceeded when SRS or any other document is not given?
If SRS is not there we can perform Exploratory testing. In Exploratory testing the basic module is
executed and depending on its results, the next plan is executed.

403. How do we test for severe memory leakages?


Severe memory leakages can be tested by using Endurance Testing. Endurance Testing means checking for memory leaks or other problems that may occur with prolonged execution.

404. What is Acceptance Testing?


Testing conducted to enable a user/customer to determine whether to accept a software product.
Normally performed to validate the software meets a set of agreed acceptance criteria.

405. What is Accessibility Testing?


Verifying a product is accessible to the people having disabilities (deaf, blind, mentally disabled
etc.).

406. What is Ad-Hoc Testing?


A testing phase where the tester tries to 'break' the system by randomly trying the system's
functionality. Can include negative testing as well. See also Monkey Testing.

407. What is Agile Testing?


Testing practice for projects using agile methodologies, treating development as the customer of
testing and emphasizing a test-first design paradigm. See also Test Driven Development.

408. What is Application Binary Interface (ABI)?


A specification defining requirements for portability of applications in binary forms across different
system platforms and environments.

409. What is Application Programming Interface (API)?


A formalized set of software calls and routines that can be referenced by an application program in
order to access supporting system or network services.

410. What is Automated Software Quality (ASQ)?


The use of software tools, such as automated testing tools, to improve software quality.

411. What is Automated Testing?


Testing employing software tools which execute tests without manual intervention. Can be applied
in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and
other test control and test reporting functions.

412. What is Backus-Naur Form?


A metalanguage used to formally describe the syntax of a language.

413. What is Basic Block?


A sequence of one or more consecutive, executable statements containing no branches.

414. What is Basis Path Testing?


A white box test case design technique that uses the algorithmic flow of the program to design
tests.

415. What is Basis Set?


The set of tests derived using basis path testing.

416. What is Baseline?


The point at which some deliverable produced during the software engineering process is put
under formal change control.

417. What you will do during the first day of job?


What would you like to do five years from now?

418. What is Beta Testing?


Testing of a pre-release version of a software product conducted by customers.

419. What is Binary Portability Testing?


Testing an executable application for portability across system platforms and environments,
usually for conformation to an ABI specification.

420. What is Black Box Testing?


Testing based on an analysis of the specification of a piece of software without reference to its
internal workings. The goal is to test how well the component conforms to the published
requirements for the component.

421. What is Bottom Up Testing?


An approach to integration testing where the lowest level components are tested first, then used to
facilitate the testing of higher level components. The process is repeated until the component at
the top of the hierarchy is tested.

422. What is Boundary Testing?


Tests which focus on the boundary or limit conditions of the software being tested. (Some of these
tests are stress tests).

423. What is Boundary Value Analysis?


BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually
out of range as defined by the specification. This means that if a function expects all values in the range
of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
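For the range of -100 to +1000 described above, a boundary-value test might look like the following pytest sketch; accepts_value is a hypothetical function under test that returns True only for values inside the range.

import pytest

def accepts_value(n):
    # Hypothetical function under test: the valid range is -100 to +1000 inclusive.
    return -100 <= n <= 1000

# Boundary values: just inside and just outside each end of the range.
@pytest.mark.parametrize("value, expected", [
    (-101, False),   # just below the lower boundary
    (-100, True),    # lower boundary itself
    (1000, True),    # upper boundary itself
    (1001, False),   # just above the upper boundary
])
def test_boundaries(value, expected):
    assert accepts_value(value) == expected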

424. What is Branch Testing?


Testing in which all branches in the program source code are tested at least once.

425. What is Breadth Testing?


A test suite that exercises the full functionality of a product but does not test features in detail.

426. What is CAST?


Computer Aided Software Testing.

427. What is Capture/Replay Tool?


A test tool that records test input as it is sent to the software under test. The input cases stored
can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

428. What is CMM?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of
the software processes of an organization and for identifying the key practices that are required to
increase the maturity of these processes.

429. What is Cause Effect Graph?


A graphical representation of inputs and the associated outputs effects which can be used to
design test cases.

430. What is Code Complete?


Phase of development where functionality is implemented in entirety; bug fixes are all that are left.
All functions found in the Functional Specifications have been implemented.

431. What is Code Coverage?


An analysis method that determines which parts of the software have been executed (covered) by
the test case suite and which parts have not been executed and therefore may require additional
attention.

432. What is Code Inspection?


A formal testing technique where the programmer reviews source code with a
group who ask questions analysing the program logic, analysing the code with respect to a
checklist of historically common programming errors, and analysing its compliance with coding
standards.

433. What is Code Walkthrough?


A formal testing technique where source code is traced by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyse the programmer's logic and
assumptions.

434. What is Coding?


The generation of source code.

435. What is Compatibility Testing?


Testing whether software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.

436. What is Component?


A minimal software item for which a separate specification is available.

437. What is Component Testing?


Testing of individual software components (Unit Testing).

438. What is Concurrency Testing?


Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.

439. What is Conformance Testing?


The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to testing conformance to a formal standard.

440. What is Context Driven Testing?


The context-driven school of software testing is a flavour of Agile Testing that advocates continuous
and creative evaluation of testing opportunities in light of the potential information revealed and
the value of that information to the organization right now.

441. What is Conversion Testing?


Testing of programs or procedures used to convert data from existing systems for use in
replacement systems.

442. What is Cyclomatic Complexity?


A measure of the logical complexity of an algorithm, used in white-box testing.
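As a rough worked example, cyclomatic complexity can be taken as the number of independent decision points plus one. The toy Python function below has two decisions, so its complexity is 3 and a basis set needs three test cases.

def classify(score):
    if score < 0:          # decision 1
        return "invalid"
    if score >= 50:        # decision 2
        return "pass"
    return "fail"

# Cyclomatic complexity = decisions + 1 = 2 + 1 = 3, so three tests cover the
# three independent paths:
assert classify(-1) == "invalid"
assert classify(30) == "fail"
assert classify(75) == "pass"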

443. What is Data Dictionary?


A database that contains definitions of all data items defined during analysis.

444. What is Data Flow Diagram?


A modelling notation that represents a functional decomposition of a system.

445. What is Data Driven Testing?


Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.
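A minimal data-driven sketch in Python: the test logic is written once and is driven by rows of input and expected values kept in an external CSV file. The add function and the file name are illustrative assumptions; the script creates the data file itself so the example stays self-contained.

import csv

def add(a, b):
    # Hypothetical function under test.
    return a + b

# Test data kept outside the test logic; here a small CSV is created for the demo.
with open("add_cases.csv", "w", newline="") as f:
    csv.writer(f).writerows([(1, 2, 3), (-5, 5, 0), (10, 20, 30)])

# Each row drives one execution of the same test logic.
with open("add_cases.csv", newline="") as f:
    for row in csv.reader(f):
        a, b, expected = (int(x) for x in row)
        actual = add(a, b)
        status = "PASS" if actual == expected else "FAIL"
        print(f"add({a}, {b}) = {actual}, expected {expected} -> {status}")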

446. What is Debugging?


The process of finding and removing the causes of software failures.

447. What is Defect?


Non-conformance to requirements or functional / program specification

448. What is Dependency Testing?


Examines an application's requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality.

449. What is Depth Testing?


A test that exercises a feature of a product in full detail.

450. What is Dynamic Testing?


Testing software through executing it. See also Static Testing.

451. What is Emulator?


A device, computer program, or system that accepts the same inputs and produces the same
outputs as a given system.

452. What is Endurance Testing?


Checks for memory leaks or other problems that may occur with prolonged Execution.

453. What is End-to-End testing?


Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

454. What is Equivalence Class?


A portion of a component's input or output domains for which the component's behaviour is
assumed to be the same from the component's specification.

455. What is Equivalence Partitioning?


A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.
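For example, if a field accepts ages 18 to 60, the input domain splits into three equivalence classes (below the range, inside it, above it) and one representative value per class is tested. A small sketch, assuming a hypothetical is_eligible_age function under test:

def is_eligible_age(age):
    # Hypothetical function under test: valid ages are 18 to 60 inclusive.
    return 18 <= age <= 60

# One representative value per equivalence class instead of every possible value.
representatives = {
    "below valid range": (10, False),
    "inside valid range": (35, True),
    "above valid range": (75, False),
}

for partition, (age, expected) in representatives.items():
    assert is_eligible_age(age) == expected, partition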

456. What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

457. What is Functional Decomposition?


A technique used during planning, analysis and design; creates a functional hierarchy for the
software.

458. What is Functional Specification?

A document that describes in detail the characteristics of the product with regard to its intended
features.

459. What is Functional Testing?

Testing the features and operational behaviour of a product to ensure they correspond to its
specifications. Testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions. Or Black
Box Testing.

460. What is Glass Box Testing?

A synonym for White Box Testing.

461. What is Gorilla Testing?

Testing one particular module, functionality heavily.

462. What is Gray Box Testing?

A combination of Black Box and White Box testing methodologies: testing a piece of software
against its specification but using some knowledge of its internal workings.

463. What is High Order Tests?

Black-box tests conducted once the software has been integrated.

464. What is Independent Test Group (ITG)?

A group of people whose primary responsibility is software testing.

465. What is Inspection?

A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).

466. What is Integration Testing?



Testing of combined parts of an application to determine if they function together correctly. Usually
performed after unit and functional testing. This type of testing is especially relevant to
client/server and distributed systems.

467. What is Installation Testing?

Confirms that the application under test installs correctly on the target environments. This typically covers full, partial, and upgrade install/uninstall scenarios and verifies that the installed application works as expected.

468. What is Load Testing?

See Performance Testing.

469. What is Localization Testing?

This term refers to making software specifically designed for a specific locality.

470. What is Loop Testing?

A white box testing technique that exercises program loops.

471. What is Metric?

A standard of measurement. Software metrics are the statistics describing the structure or content
of a program. A metric should be a real objective measurement of something such as number of
bugs per lines of code.

472. What is Monkey Testing?

Testing a system or an Application on the fly, i.e just few tests here and there to ensure the system
or an application does not crash out.

473. What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.

474. What is Path Testing?

Testing in which all paths in the program source code are tested at least once.

475. What is Performance Testing?


Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate large number of users. Also known as "Load Testing".

476. What is Positive Testing?

Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

477. What is Quality Assurance?

All those planned or systematic actions necessary to provide adequate confidence that a product
or service is of the type and quality needed and expected by the customer.

478. What is Quality Audit?

A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.

479. What is Quality Circle?

A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.

480. What is Quality Control?

The operational techniques and the activities used to fulfil and verify requirements of quality.

481. What is Quality Management?

That aspect of the overall management function that determines and implements the quality
policy.

482. What is Quality Policy?

The overall intentions and direction of an organization as regards quality as formally expressed by
top management.

483. What is Quality System?

The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

484. What is Race Condition?

A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a
write, with no mechanism used by either to moderate simultaneous access.
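A small Python illustration of such a race: two threads perform a read-modify-write on a shared counter without holding the lock, so some updates can be lost. Guarding the two lines with "with lock:" would moderate the simultaneous access.

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(times):
    global counter
    for _ in range(times):
        current = counter        # read the shared resource
        counter = current + 1    # write it back; another thread may have updated
                                 # counter in between, and that update is lost

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 200000 is expected, but a smaller value is possible because the read and the
# write are not performed atomically.
print("counter =", counter)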

485. What is Ramp Testing?

Continuously raising an input signal until the system breaks down.



486. What is Recovery Testing?

Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions

487. What is Regression Testing?

Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.

488. What is Release Candidate?

A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).

489. What is Sanity Testing?

Brief test of major functional elements of a piece of software to determine if it’s basically
operational.

490. What is Scalability Testing?

Performance testing focused on ensuring the application under test gracefully handles
increases in work load.

491. What is Security Testing?

Testing which confirms that the program can restrict access to authorized personnel and that
the authorized personnel can access the functions available to their security level.

492. What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and
considering it a success if it does not catch on fire.

493. What is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several
times more transactions in an entire day (or night) than would be expected in a busy day, to
identify any performance problems that appear after a large number of transactions have been
executed.

494. What is Software Requirements Specification?

A deliverable that describes all data, functional and behavioural requirements, all constraints,
and all validation requirements for software.

495. What is Software Testing?

A set of activities conducted with the intent of finding errors in software.



496. What is Static Analysis?


Analysis of a program carried out without executing the program.

497. What is Static Analyzer?


A tool that carries out static analysis.

498. What is Static Testing?


Analysis of a program carried out without executing the program.

499. What is Storage Testing?


Testing that verifies the program under test stores data files in the correct directories and that it
reserves sufficient space to prevent unexpected termination resulting from lack of space. This is
external storage as opposed to internal storage.

500. What is Stress Testing?


Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. Often this is performance testing
using a very high level of simulated load.

501. What is Structural Testing?


Testing based on an analysis of internal workings and structure of a piece of software. See also
White Box Testing.

502. What is System Testing?


Testing that attempts to discover defects that are properties of the entire system rather than of its
individual components.

503. What is Testability?


The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.

504. What is Testing?


The process of exercising software to verify that it satisfies specified requirements and to detect
errors. The process of analysing a software item to detect the differences between existing and
required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std
829). The process of operating a system or component under specified conditions, observing or
recording the results, and making an evaluation of some aspect of the system or component. What
is Test Automation? It is the same as Automated Testing.

505. What is Test Bed?


An execution environment configured for testing. May consist of specific hardware, OS, network
topology, configuration of the product under test, other application or system software, etc. The
Test Plan for a project should enumerate the test bed(s) to be used.

506. What is Test Case?


Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A
Test Case will consist of information such as requirements testing, test steps, verification steps,
prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and
expected outcomes developed for a particular objective, such as to exercise a particular program
path or to verify compliance with a specific requirement.

What is Test Driven Development? Testing
methodology associated with Agile Programming in which every chunk of code is covered by unit
tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs
during development. Practitioners of TDD write a lot of tests, i.e. an equal number of lines of test
code to the size of the production code.
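As a toy illustration of the test-first idea described above, the unit test below is written before the slugify function it exercises, and the function is then implemented only until the test passes. The names are illustrative, not taken from any particular project.

import unittest

def slugify(title):
    # Implementation written only after the test below existed and failed.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Written first: it pins down the behaviour the production code must satisfy.
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("  Manual Testing Notes "), "manual-testing-notes")

if __name__ == "__main__":
    unittest.main()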

507. What is Test Driver?


A program or test tool used to execute tests. Also known as a Test Harness.

508. What is Test Environment?


The hardware and software environment in which tests will be run, and any other software with
which the software under test interacts when under test including stubs and test drivers.

509. What is Test First Design?


Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that
programmers do not write any production code until they have first written a unit test.

510. What is Test Harness?


A program or test tool used to execute tests. Also known as a Test Driver.

511. What is Test Plan?


A document describing the scope, approach, resources, and schedule of intended testing activities.
It identifies test items, the features to be tested, the testing tasks, who will do each task, and any
risks requiring contingency planning.

512. What is Test Procedure?


A document providing detailed instructions for the execution of one or more test cases.

513. What is Test Script?


Commonly used to refer to the instructions for a particular test that will be carried out by an
automated test tool.

514. What is Test Specification?


A document specifying the test approach for a software feature or combination or features and the
inputs, predicted results and execution conditions for the associated tests.

515. What is Test Suite?


A collection of tests used to validate the behaviour of a product. The scope of a Test Suite varies
from organization to organization. There may be several Test Suites for a particular product for
example. In most cases however a Test Suite is a high level concept, grouping together hundreds
or thousands of tests related by what they are intended to test.

516. What is Test Tools?


Computer programs used in the testing of a system, a component of the system, or its
documentation.

517. What is Thread Testing?


A variation of top-down testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.

518. What is Top Down Testing?


An approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs. Tested components are then
used to test lower level components. The process is repeated until the lowest level components
have been tested.

519. What is Total Quality Management?


A company commitment to develop a process that achieves high quality product and customer
satisfaction.

520. What is Traceability Matrix?


A document showing the relationship between Test Requirements and Test Cases.

521. What is Usability Testing?


Testing the ease with which users can learn and use a product.

522. What is Acceptance Testing?


Testing conducted to enable a user/customer to determine whether to accept a software product.
Normally performed to validate the software meets a set of agreed acceptance criteria.

523. What is Accessibility Testing?


Verifying a product is accessible to the people having disabilities (deaf, blind, mentally disabled
etc.).

524. What is Ad-Hoc Testing?


A testing phase where the tester tries to ‘break’ the system by randomly trying the system’s
functionality. Can include negative testing as well. See also Monkey Testing.

525. What is Agile Testing?


Testing practice for projects using agile methodologies, treating development as the customer of
testing and emphasizing a test-first design paradigm. See also Test Driven Development.

526. What is Application Binary Interface (ABI)?


A specification defining requirements for portability of applications in binary forms across different
system platforms and environments.

527. What is Application Programming Interface (API)?


A formalized set of software calls and routines that can be referenced by an application program in
order to access supporting system or network services.

528. What is Automated Software Quality (ASQ)?


The use of software tools, such as automated testing tools, to improve software quality.

529. What is Automated Testing?


Testing employing software tools which execute tests without manual intervention. Can be applied
in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and
other test control and test reporting functions.

530. What is Backus-Naur Form?


A metalanguage used to formally describe the syntax of a language.

531. What is Basic Block?


A sequence of one or more consecutive, executable statements containing no
branches.

532. What is Basis Path Testing?


A white box test case design technique that uses the algorithmic flow of the program to design
tests.

533. What is Basis Set?


The set of tests derived using basis path testing.

534. What is Baseline?


The point at which some deliverable produced during the software engineering process is put
under formal change control.

535. What you will do during the first day of job?


What would you like to do five years from now?

536. What is Beta Testing?


Testing of a pre-release version of a software product conducted by customers.

537. What is Binary Portability Testing?


Testing an executable application for portability across system platforms and environments,
usually for conformation to an ABI specification.

538. What is Black Box Testing?


Testing based on an analysis of the specification of a piece of software without reference to its
internal workings. The goal is to test how well the component conforms to the published
requirements for the component.

539. What is Bottom Up Testing?


An approach to integration testing where the lowest level components are tested first, then used to
facilitate the testing of higher level components. The process is repeated until the component at
the top of the hierarchy is tested.

540. What is Boundary Testing?


Tests which focus on the boundary or limit conditions of the software being tested. (Some of these
tests are stress tests).

541. What is Boundary Value Analysis?


BVA is similar to Equivalence Partitioning but focuses on “corner cases” or values that are usually
out of range as defined by the specification. This means that if a function expects all values in the range
of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.

542. What is Branch Testing?


Testing in which all branches in the program source code are tested at least once.

543. What is Breadth Testing?


A test suite that exercises the full functionality of a product but does not test features in detail.

544. What is CAST?


Computer Aided Software Testing.

545. What is Capture/Replay Tool?


A test tool that records test input as it is sent to the software under test. The input cases stored
can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

546. What is CMM?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of
the software processes of an organization and for identifying the key practices that are required to
increase the maturity of these processes.

547. What is Cause Effect Graph?


A graphical representation of inputs and the associated outputs effects which can be used to
design test cases.

548. What is Code Complete?


Phase of development where functionality is implemented in entirety; bug fixes are all that are left.
All functions found in the Functional Specifications have been implemented.

549. What is Code Coverage?


An analysis method that determines which parts of the software have been executed (covered) by
the test case suite and which parts have not been executed and therefore may require additional
attention.

550. What is Code Inspection?


A formal testing technique where the programmer reviews source code with a
group who ask questions analysing the program logic, analysing the code with respect to a
checklist of historically common programming errors, and analysing its compliance with coding
standards.

551. What is Code Walkthrough?


A formal testing technique where source code is traced by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyse the programmer’s logic and
assumptions.

552. What is Coding?


The generation of source code.

553. What is Compatibility Testing?


Testing whether software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.

554. What is Component?


A minimal software item for which a separate specification is available.

555. What is Component Testing?


Testing of individual software components (Unit Testing).

556. What is Concurrency Testing?


Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.

557. What is Conformance Testing?


The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to testing conformance to a formal standard.

558. What is Context Driven Testing?


The context-driven school of software testing is a flavour of Agile Testing that advocates continuous
and creative evaluation of testing opportunities in light of the potential information revealed and
the value of that information to the organization right now.

559. What is Conversion Testing?


Testing of programs or procedures used to convert data from existing systems for use in
replacement systems.

560. What is Cyclomatic Complexity?


A measure of the logical complexity of an algorithm, used in white-box testing.

561. What is Data Dictionary?


A database that contains definitions of all data items defined during analysis.

562. What is Data Flow Diagram?


A modelling notation that represents a functional decomposition of a system.

563. What is Data Driven Testing?


Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.

564. What is Debugging?


The process of finding and removing the causes of software failures.

565. What is Defect?


Non-conformance to requirements or functional / program specification

566. What is Dependency Testing?


Examines an application’s requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality.

567. What is Depth Testing?


A test that exercises a feature of a product in full detail.

568. What is Dynamic Testing?


Testing software through executing it. See also Static Testing.

569. What is Emulator?


A device, computer program, or system that accepts the same inputs and produces the same
outputs as a given system.

570. What is Endurance Testing?


Checks for memory leaks or other problems that may occur with prolonged execution.

571. What is End-to-End testing?


Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

572. What is Equivalence Class?


A portion of a component’s input or output domains for which the component’s behaviour is
assumed to be the same from the component’s specification.

573. What is Equivalence Partitioning?


A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.

574. What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

575. What is Functional Decomposition?


A technique used during planning, analysis and design; creates a functional hierarchy for the
software.

576. What is Functional Specification?


A document that describes in detail the characteristics of the product with regard to its intended
features.

577. What is Functional Testing?


Testing the features and operational behaviour of a product to ensure they correspond to its
specifications. Testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions. Or Black
Box Testing.

578. What is Glass Box Testing?


A synonym for White Box Testing.

579. What is Gorilla Testing?


Testing one particular module, functionality heavily.

580. What is Gray Box Testing?


A combination of Black Box and White Box testing methodologies: testing a piece of software
against its specification but using some knowledge of its internal workings.

581. What is High Order Tests?


Black-box tests conducted once the software has been integrated.

582. What is Independent Test Group (ITG)?


A group of people whose primary responsibility is software testing.

583. What is Inspection?


A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).

584. What is Integration Testing?


Testing of combined parts of an application to determine if they function together correctly. Usually
performed after unit and functional testing. This type of testing is especially relevant to
client/server and distributed systems.

585. What is Installation Testing?


Confirms that the application under test installs correctly on the target environments. This typically covers full, partial, and upgrade install/uninstall scenarios and verifies that the installed application works as expected.

586. What is Load Testing?


See Performance Testing.

587. What is Localization Testing?


This term refers to making software specifically designed for a specific locality.

588. What is Loop Testing?


A white box testing technique that exercises program loops.

589. What is Metric?


A standard of measurement. Software metrics are the statistics describing the structure or content
of a program. A metric should be a real objective measurement of something such as number of
bugs per lines of code.

590. What is Monkey Testing?


Testing a system or an Application on the fly, i.e. just few tests here and there to ensure the
system or an application does not crash out.

591. What is Negative Testing?


Testing aimed at showing software does not work. Also known as “test to fail”. See also Positive
Testing.

592. What is Path Testing?


Testing in which all paths in the program source code are tested at least once.

593. What is Performance Testing?


Testing conducted to evaluate the compliance of a system or component with specified
performance requirements. Often this is performed using an automated test tool to simulate large
number of users. Also known as “Load Testing”.

594. What is Positive Testing?


Testing aimed at showing software works. Also known as “test to pass”. See also Negative Testing.

595. What is Quality Assurance?


All those planned or systematic actions necessary to provide adequate confidence that a product
or service is of the type and quality needed and expected by the customer.

596. What is Quality Audit?


A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.

597. What is Quality Circle?


A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.

598. What is Quality Control?


The operational techniques and the activities used to fulfil and verify requirements of quality.

599. What is Quality Management?


That aspect of the overall management function that determines and implements the quality
policy.

600. What is Quality Policy?


The overall intentions and direction of an organization as regards quality as formally expressed by
top management.

601. What is Quality System?


The organizational structure, responsibilities, procedures, processes, and resources for
implementing quality management.

602. What is Race Condition?


A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a
write, with no mechanism used by either to moderate simultaneous access.

603. What is Ramp Testing?


Continuously raising an input signal until the system breaks down.

604. What is Recovery Testing?


Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions

605. What is Regression Testing?


Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.

606. What is Release Candidate?


A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).

607. What is Sanity Testing?


Brief test of major functional elements of a piece of software to determine if it’s basically
operational.

608. What is Scalability Testing?


Performance testing focused on ensuring the application under test gracefully handles increases in
work load.

609. What is Security Testing?


Testing which confirms that the program can restrict access to authorized personnel and that the
authorized personnel can access the functions available to their security level.

610. What is Smoke Testing?


A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and considering
it a success if it does not catch on fire.

611. What is Soak Testing?


Running a system at high load for a prolonged period of time. For example, running several times
more transactions in an entire day (or night) than would be expected in a busy day, to identify any
performance problems that appear after a large number of transactions have been executed.

612. What is Software Requirements Specification?


A deliverable that describes all data, functional and behavioural requirements, all constraints, and
all validation requirements for software.

613. What is Software Testing?


A set of activities conducted with the intent of finding errors in software

614. What is Static Analysis?


Analysis of a program carried out without executing the program.

615. What is Static Analyzer?


A tool that carries out static analysis.

616. What is Static Testing?


Analysis of a program carried out without executing the program.

617. What is Storage Testing?


Testing that verifies the program under test stores data files in the correct directories and that it
reserves sufficient space to prevent unexpected termination resulting from lack of space. This is
external storage as opposed to internal storage.

618. What is Stress Testing?


Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. Often this is performance testing
using a very high level of simulated load.

619. What is Structural Testing?


Testing based on an analysis of internal workings and structure of a piece of software. See also
White Box Testing.

620. What is System Testing?


Testing that attempts to discover defects that are properties of the entire system rather than of its
individual components.

621. What is Testability?


The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.

622. What is Testing?


The process of exercising software to verify that it satisfies specified requirements and to detect
errors. The process of analysing a software item to detect the differences between existing and
required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std
829). The process of operating a system or component under specified conditions, observing or
recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation? It is the same as Automated Testing.

623. What is Test Bed?


An execution environment configured for testing. May consist of specific hardware, OS, network
topology, configuration of the product under test, other application or system software, etc. The
Test Plan for a project should enumerate the test bed(s) to be used.

624. What is Test Case?


Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A
Test Case will consist of information such as requirements testing, test steps, verification steps,
prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and
expected outcomes developed for a particular objective, such as to exercise a particular program
path or to verify compliance with a specific requirement.

What is Test Driven Development? Testing
methodology associated with Agile Programming in which every chunk of code is covered by unit
tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs
during development. Practitioners of TDD write a lot of tests, i.e. an equal number of lines of test
code to the size of the production code.

625. What is Test Driver?


A program or test tool used to execute tests. Also known as a Test Harness.

626. What is Test Environment?


The hardware and software environment in which tests will be run, and any other software with
which the software under test interacts when under test including stubs and test drivers.

627. What is Test First Design?


Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that
programmers do not write any production code until they have first written a unit test.

628. What is Test Harness?


A program or test tool used to execute tests. Also known as a Test Driver.

629. What is Test Plan?


A document describing the scope, approach, resources, and schedule of intended testing activities.
It identifies test items, the features to be tested, the testing tasks, who will do each task, and any
risks requiring contingency planning.

630. What is Test Procedure?


A document providing detailed instructions for the execution of one or more test cases.

631. What is Test Script?


Commonly used to refer to the instructions for a particular test that will be carried out by an
automated test tool.

632. What is Test Specification?


A document specifying the test approach for a software feature or combination or features and the
inputs, predicted results and execution conditions for the associated tests.

633. What is Test Suite?


A collection of tests used to validate the behaviour of a product. The scope of a Test Suite varies
from organization to organization. There may be several Test Suites for a particular product for
example. In most cases however a Test Suite is a high level concept, grouping together hundreds
or thousands of tests related by what they are intended to test.

634. What is Test Tools?


Computer programs used in the testing of a system, a component of the system, or its
documentation.

635. What is Thread Testing?


A variation of top-down testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.

636. What is Top Down Testing?


An approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs. Tested components are then
used to test lower level components. The process is repeated until the lowest level components
have been tested.

637. What is Total Quality Management?


A company commitment to develop a process that achieves high quality product and customer
satisfaction.

638. What is Traceability Matrix?


A document showing the relationship between Test Requirements and Test Cases.

639. What is Usability Testing?


Testing the ease with which users can learn and use a product.

640. What is Use Case?


The specification of tests that are conducted from the end-user perspective. Use cases tend to
focus on operating software as an end-user would conduct their day-to-day activities.

641. What is Unit Testing?


Testing of individual software components.

642. How do the companies expect the defect reporting to be communicated by the tester to
the development team? Can the excel sheet template be used for defect reporting. If so what
are the common fields that are to be included? Who assigns the priority and severity of the
defect?
To report bugs in Excel, use columns such as:
S.No. | Module | Screen/Section | Issue detail | Severity | Priority | Issue status
This is how bugs are reported in an Excel sheet; filters can also be set on the column attributes.
But most companies use a SharePoint-based (or similar) defect management process for reporting bugs. When the project comes in for testing, module-wise details of the project are entered into the defect management system being used. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, not able to reproduce)
5. Assigned to (names of the members allocated to the project)
6. Priority (High, Medium, Low)
7. Severity (Major, Medium, Low)
Usually the tester assigns the severity of a defect, while the test lead or project manager assigns its priority.
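If the same template has to be produced programmatically, a minimal sketch using Python's csv module could look as follows; the column names mirror the fields listed above, while the two defect rows and the output file name are made-up examples.

import csv

columns = ["S.No", "Module", "Screen/Section", "Issue detail",
           "Severity", "Priority", "Issue status"]

defects = [
    [1, "Login", "Sign-in form", "No error message shown for blank password",
     "Medium", "High", "Active"],
    [2, "Reports", "Export", "Exported PDF is truncated after 100 rows",
     "Major", "Medium", "Resolved"],
]

# The resulting CSV opens directly in Excel, where filters can be set on each column.
with open("defect_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(defects)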

643. How do you plan test automation?

1. Prepare the automation Test plan


2. Identify the scenario
3. Record the scenario
4. Enhance the scripts by inserting check points and Conditional Loops
5. Incorporate an error handler
6. Debug the script
7. Fix the issue
8. Rerun the script and report the result (a minimal sketch of such a script is shown below)
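A tool-agnostic sketch of steps 4, 5 and 8 in Python: the recorded scenario is enhanced with a checkpoint (an assertion), a conditional retry loop, and an error handler that reports the result. The open_login_page and read_page_title helpers are hypothetical placeholders standing in for real automation-tool calls.

import time

def open_login_page():
    # Placeholder for a recorded step, e.g. a browser navigation call.
    print("Navigating to the login page")

def read_page_title():
    # Placeholder for reading an object property from the application under test.
    return "Login - Example App"

def run_scenario(max_attempts=3):
    for attempt in range(1, max_attempts + 1):      # conditional loop: retry on failure
        try:
            open_login_page()
            title = read_page_title()
            # Checkpoint: compare the actual value against the expected value.
            assert title == "Login - Example App", f"Unexpected title: {title!r}"
            print(f"Attempt {attempt}: PASS")
            return True
        except AssertionError as err:               # error handler: log and retry
            print(f"Attempt {attempt}: FAIL - {err}")
            time.sleep(1)
    return False

result = run_scenario()
print("Scenario result:", "PASS" if result else "FAIL")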

644. Does automation replace manual testing?


There can be some functionality which cannot be tested with an automated tool, so we may have to do it manually; therefore manual testing can never be fully replaced. (We can write scripts for negative testing as well, but it is a tedious task.) When we talk about a real environment, we do negative testing manually.

645. How will you choose a tool for test automation?


The choice of a tool depends on many things:
1. Application to be tested
2. Test environment
3. Scope and limitation of the tool.
4. Feature of the tool.
5. Cost of the tool.
6. Whether the tool is compatible with your application which means tool should be able to interact
with your application
7. Ease of use

646. How you will evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for our
project. The additional new features and the enhancements of the features will also help.

647. How you will describe testing activities?


Testing activities start from the elaboration phase. The main testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging bugs, validating bugs and taking appropriate action, and automating the test cases.

648. What testing activities you may want to automate?


Automate all the high priority test cases which need to be executed as a part of regression testing
for each build cycle.

649. Describe common problems of test automation.

The common problems are:


1. Maintenance of the old scripts when there is a feature change or enhancement
2. A change in the technology of the application will affect the old scripts

What types of scripting techniques for test automation do you know?
5 types of scripting techniques:
Linear
Structured
Shared
Data Driven
Keyword Driven

650. What is memory leaks and buffer overflows?


Memory leaks mean incomplete deallocation of memory; they are bugs that happen very often. A buffer overflow means data sent as input to the server overflows the boundaries of the input area, thus causing the server to misbehave. Buffer overflows can also be exploited to attack the system.

651. What are the major differences between stress testing, load testing, Volume
testing?
Stress testing means progressively increasing the load and checking the performance at each level. Load testing means applying the expected peak load at one time and checking the performance at that level. Volume testing means pushing a large volume of data through the application and checking that it still behaves correctly.

652. How do you introduce a new software QA process?


A: It depends on the size of the organization and the risks involved. For large organizations with
high-risk projects, a serious management buy-in is required and a formalized QA process is
necessary. For medium size organizations with lower risk projects, management and organizational
buy-in and a slower, step-by-step process is required. Generally speaking, QA processes should be
balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller
groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and
managers, feedback to developers and good communication is essential among customers,
managers, developers, test engineers and testers. Regardless the size of the company, the
greatest value for effort is in managing requirement processes, where the goal is requirements
that are clear, complete and
testable.

653. What is the role of documentation in QA?


A: Documentation plays a critical role in QA. QA practices should be documented, so that they are
repeatable. Specifications, designs, business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally, there
should be a system for easily finding and obtaining of documents and determining what document
will have a particular piece of information. Use documentation change management, if possible.

654. What makes a good test engineer?


A: Good test engineers have a “test to break” attitude. We, good test engineers, take the point of
view of the customer; have a strong desire for quality and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers and an ability to
communicate with both technical and non-technical people. Previous software development
experience is also helpful as it provides a deeper understanding of the software development
process, gives the test engineer an appreciation for the developers’ point of view and reduces the
learning curve in automated test tool programming.

G C Reddy is a good test engineer because he has a “test to break” attitude, takes the point of
view of the customer, has a strong desire for quality and an attention to detail. He is also tactful and diplomatic and has good communication skills, both oral and written. And he has previous
software development experience, too.

655. What is a test plan?


A: A software project test plan is a document that describes the objectives, scope, approach and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the why and how of product
validation. It should be thorough enough to be useful, but not so thorough that no one outside the
test group will be able to read it.

656. What is a test case?


A: A test case is a document that describes an input, action, or event and its expected result, in
order to determine if a feature of an application is working correctly. A test case should contain
particulars such as a…
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find problems in the requirements or
design of an application, since it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases early in the development cycle, if
possible.

657. What should be done after a bug is found?


A: When a bug is found, it needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested. Additionally, determinations should be
made regarding requirements, software, hardware, safety impact, etc., for regression testing to
check the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these determinations. A variety of commercial problem-tracking/management
software tools are available. These tools, with the detailed input of software test engineers, will give
the team complete information so developers can understand the bug, get an idea of its severity,
reproduce it and fix it.

658. What is configuration management?


A: Configuration management (CM) covers the tools and processes used to control, coordinate and
track code, requirements, documentation, problems, change requests, designs, tools, compilers,
libraries, patches, changes made to them and who makes the changes. Rob Davis has had
experience with a full range of CM tools and concepts, and can easily adapt to your software tool
and process needs.

659. What if the software is so buggy it can’t be tested at all?


A: In this situation the best bet is to have test engineers go through the process of reporting
whatever bugs or problems initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper problems in the
software development process, such as insufficient unit testing, insufficient integration testing,
poor design, improper build or release procedures, managers should be notified and provided with
some documentation as evidence of the problem.

660. What if there isn’t enough time for thorough testing?


A: Since it’s rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires judgment skills,
common sense and experience (a small prioritization sketch follows the checklist below). The
checklist should include answers to the following questions:
• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
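To make this concrete, here is a minimal prioritization sketch in Python (the feature names and the likelihood/impact scores are invented for illustration; a real project would derive them from answers to the questions above):

# Minimal risk-based prioritization sketch (illustrative values only).
# Each feature gets a likelihood-of-failure score and an impact score from 1 (low) to 5 (high).
features = {
    "payment processing": {"likelihood": 4, "impact": 5},
    "user registration":  {"likelihood": 2, "impact": 4},
    "report export":      {"likelihood": 3, "impact": 2},
}

# Risk priority = likelihood x impact; test the highest-risk features first.
ranked = sorted(features.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

for name, scores in ranked:
    print(name, "risk =", scores["likelihood"] * scores["impact"])

Run as written, this prints "payment processing" (risk 20) first, telling the team where to spend limited testing time.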

661. What if the project isn’t big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the considerations listed under “What if there
isn’t enough time for thorough testing?” do apply. The test engineer then should do “ad hoc”
testing, or write up a limited test plan based on the risk analysis.

662. What can be done if requirements are changing continuously?


A: Work with management early on to understand how requirements might change, so that
alternate test plans and strategies can be worked out in advance. It is helpful if the application’s
initial design allows for some adaptability, so that later changes do not require redoing the
application from scratch. Additionally, try to…
• Ensure the code is well commented and well documented; this makes changes easier for the
developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their requirements
and minimize changes.
• In the project’s initial schedule, allow for some extra time commensurate with probable
changes.
• Move new requirements to a ‘Phase 2’ version of an application and use the original requirements
for the ‘Phase 1’ version.
• Negotiate to allow only easily implemented new requirements into the project.
• Ensure customers and management understand scheduling impacts, inherent risks and costs of
significant requirements changes. Then let management or the customers decide if the changes
are warranted; after all, that’s their job.
• Balance the effort put into setting up automated testing with the expected effort required to redo
them to deal with changes.
• Design some flexibility into automated test scripts;
• Focus initial automated testing on application aspects that are most likely to remain unchanged;
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing
needs;
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize the
detail in the test cases, or set up only higher-level generic-type test plans;
• Focus less on detailed test plans and test cases and more on ad-hoc testing, with an understanding
of the added risk this entails.

663. How do you know when to stop testing?


A: This can be difficult to determine. Many modern software applications are so complex and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are…
• Deadlines, e.g. release deadlines, testing deadlines;
• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.

664. What if the application has functionality that wasn’t in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden
functionality, which would indicate deeper problems in the software development process. If the
functionality isn’t necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of the
unexpected functionality. If the functionality only affects areas, such as minor improvements in the
user interface, it may not be a significant risk.


666. How can QA processes be implemented without stifling productivity?


* By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem detection,
panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the
same time, attempts should be made to keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and reporting, minimize time required
in meetings, and promote training as part of the QA process. However, no one – especially talented
technical types – likes rules or bureaucracy, and in the short run things may slow down a bit. A
typical scenario would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers. (See the Books
section’s ‘Software QA’, ‘Software Engineering’, and ‘Project Management’ categories for useful
books with more information.)

667. What if an organization is growing so fast that fixed QA processes are impossible?
* This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:

* Hire good people

* Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the customer

* Everyone in the organization should be clear on what ‘quality’ means to the customer

668. How does a client/server environment affect testing?


* Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities. There are commercial tools to assist with such testing. (See the ‘Tools’ section for
web resources with listings that include these kinds of test tools.)

669. How can World Wide Web sites be tested?


* Web sites are essentially client/server applications – with web servers and ‘browser’ clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript,
plug-in applications), and applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide
variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include:

670. How is testing affected by object-oriented designs?

671. What are the expected loads on the server (e.g., number of hits per unit time?), and what
kind of performance is required under such loads (such as web server response time, database
query response times). What kinds of tools will be needed for performance testing (such as
web load testing tools, other tools already in house that can be adapted, web robot
downloading tools, etc.)?

672. Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?

673. What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?

674. Will down time for server and content maintenance/upgrades be allowed? how much?


676. How reliable are the site’s Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?

677. What processes will be required to manage updates to the web site’s content, and what are
the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

678. Which HTML specification will be adhered to? How strictly? What variations will be allowed
for targeted browsers?

679. Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?

680. How will internal and external links be validated and updated? how often?
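As one illustration of automated link validation, the small Python sketch below (standard library only; the start URL is a placeholder, and a real checker would also need recursion limits, throttling and authentication handling) fetches a page, extracts its links and reports the HTTP status of each:

# Minimal link-checker sketch: fetch one page, extract its links, report each link's status.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start_url = "http://example.com/"   # placeholder URL
page = urlopen(start_url, timeout=10).read().decode("utf-8", errors="ignore")
collector = LinkCollector()
collector.feed(page)

for link in collector.links:
    full_url = urljoin(start_url, link)
    try:
        status = urlopen(full_url, timeout=10).status
    except (HTTPError, URLError) as err:
        status = getattr(err, "code", None) or err.reason
    print(full_url, "->", status)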

681. Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection variabilities,
and real-world internet ‘traffic congestion’ problems to be accounted for in testing?

682. How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?

683. How are cgi programs, applets, java scripts, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?

* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.
* The page layouts and design elements should be consistent throughout a site, so that it’s clear to
the user that they’re still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be included
on each page.

684. What is Extreme Programming and what’s it got to do with testing?


* Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in
his book ‘Extreme Programming Explained’ (See the Softwareqatest.com Books page.). Testing
(‘extreme testing’) is a core aspect of Extreme Programming. Programmers are expected to write
unit and functional test code first – before the application is developed. Test code is under source
control along with the rest of the code. Customers are expected to be an integral part of the
project team and to help developed scenarios for acceptance/black box testing. Acceptance tests
are preferably automated, and are modified and rerun for each of the frequent development
iterations. QA and test personnel are also required to be an integral part of the project team.
Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and
re-prioritizing is expected.
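As a small illustration of the test-first idea (a hypothetical example; the function and its behaviour are invented, not taken from Beck’s book), the unit tests below would be written first, run and seen to fail, and only then would cart_total be implemented to make them pass:

# Test-first sketch: the tests exist before (and drive) the implementation.
import unittest

def cart_total(prices):
    # Written only after the tests below were in place and failing.
    return sum(prices)

class CartTotalTest(unittest.TestCase):
    def test_total_of_three_items(self):
        self.assertEqual(cart_total([10.0, 5.5, 4.5]), 20.0)

    def test_empty_cart_totals_zero(self):
        self.assertEqual(cart_total([]), 0)

if __name__ == "__main__":
    unittest.main()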

685. What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.

686. How do you create a test plan/design?



A: Test scenarios and/or cases are prepared by reviewing functional requirements of the
release and preparing logical groups of functions that can be further broken into test
procedures. Test procedures define test conditions, data to be used for testing and expected
results, including database updates, file outputs, report results. Generally speaking…
• Test cases and scenarios are designed to represent both typical and unusual situations that
may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also execute
unit test cases.
• It is the test team that, with assistance of developers and clients, develops test cases and
scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the process or
transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope (a small example matrix follows this answer).
• Test data is captured and base-lined prior to testing. This data serves as the foundation for
unit and system testing and is used to exercise system functionality in a controlled environment.
• Some output data is also base-lined for future comparison. Base-lined data is used to support
future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment and
data to be tested. A test readiness document is created to indicate the status of the entrance
criteria of the release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. software design document, source code, and
software complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions, and test data.
• Reports of software design issues, given to software developers for correction.
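To illustrate the traceability matrix mentioned above, a tiny hypothetical example (requirement and test case identifiers are invented) might look like this:

Requirement | Description | Test cases
REQ-001 | User can log in | TC-001, TC-002
REQ-002 | User can reset a password | TC-003
REQ-003 | Account locks after three failed logins | TC-004, TC-005

A requirement with no test case against it, or a test case that maps to no requirement, signals a coverage or scope problem.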

687. How do you execute tests?



A: Execution of tests is completed by following the test documents in a methodical manner. As
each test procedure is performed, an entry is recorded in a test execution log to note the execution
of the procedure and whether or not the test procedure uncovered any defects. Checkpoint
meetings are held throughout the execution phase. Checkpoint meetings are held daily, if required,
to address and discuss testing issues, status and activities.
• The output from the execution of test procedures is known as test results. Test results are
evaluated by test engineers to determine whether the expected results have been obtained. All
discrepancies/anomalies are logged and discussed with the software team lead, hardware test
lead, programmers, software engineers and documented for further investigation and resolution.
Every company has a different process for logging and reporting bugs/defects uncovered during
testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a
test summary report. The severity of a problem found during system testing is defined in
accordance with the customer’s risk assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the problem.
Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion
of the test, members of the test team prepare a summary report. The summary report is reviewed
by the Project Manager, Software QA Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the Configuration
Manager to coordinate the migration of the release software components to the next test level, as
documented in the Configuration Management Plan. The software is only migrated to the
production environment after the Project Manager’s formal acceptance.
• The test team reviews test document problems identified during testing, and updates documents
where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This needs to be
approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements document
and Design Document problems.
• Reports on software design issues, given to software developers for correction. Examples are bug
reports on code issues.


• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for migration to the next
level.
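As an illustration, a single hypothetical entry in the test execution log mentioned at the start of this answer might look like:

Date: 2024-01-15 | Test procedure: TP-ORD-007 | Executed by: A. Tester | Result: Fail | Defect logged: BUG-1042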

688. How do you create a test strategy?


A: The test strategy is a formal description of how a software product will be tested. A test strategy
is developed for all levels of testing, as required. The test team analyses the requirements, writes
the test strategy and reviews the plan with the project team. The test plan may include test cases,
conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
• A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the project
management level.

689. What is security clearance?


A: Security clearance is a process of determining your trustworthiness and reliability before
granting you access to national security information.

690. What are the levels of classified access?


A: The levels of classified access are confidential, secret, top secret, and sensitive compartmented
information, of which top secret is the highest.

691. What’s a ‘test plan’?


A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the ‘why’ and ‘how’ of product
validation. It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it. The following are some of the items that might be included in a test plan,
depending on the particular project:

* Title

* Identification of software including version/release numbers.

* Revision history of document including authors, dates, approvals.

* Table of Contents.

* Purpose of document, intended audience

* Objective of testing effort

* Software product overview

* Relevant related document list, such as requirements, design documents, other test plans, etc.

* Relevant standards or legal requirements

* Traceability requirements

* Relevant naming conventions and identifier conventions

* Overall software project organization and personnel/contact-info/responsibilities

* Test organization and personnel/contact-info/responsibilities

* Assumptions and dependencies

* Project risk analysis

* Testing priorities and focus

* Scope and limitations of testing

* Test outline – a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable

* Outline of data input equivalence classes, boundary value analysis, error classes

* Test environment – hardware, operating systems, other required software, data configurations,
interfaces to other systems

* Test environment validity analysis – differences between the test and production systems and
their impact on test validity.

* Test environment setup and configuration issues

* Software migration processes

* Software CM processes

* Test data setup requirements

* Database setup requirements

* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs

* Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs

* Test automation – justification and overview

* Test tools to be used, including versions, patches, etc.

* Test script/test code maintenance processes and version control

* Problem tracking and resolution – tools and processes

* Project test metrics to be used

* Reporting requirements and testing deliverables

* Software entrance and exit criteria

* Initial sanity testing period and criteria

* Test suspension and restart criteria

* Personnel allocation

* Personnel pre-training needs

* Test site/location

* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues.

* Relevant proprietary, classified, security, and licensing issues.

* Open issues

* Appendix – glossary, acronyms, etc.

692. What’s a ‘test case’?


* A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain particulars
such as test case identifier, test case name, objective, test conditions/setup, input data
requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it’s useful to prepare test cases early in the development cycle if
possible.

693. What should be done after a bug is found?


* The bug needs to be communicated and assigned to developers that can fix it. After the problem
is resolved, fixes should be re-tested, and determinations made regarding requirements for
regression testing to check that fixes didn’t create problems elsewhere. If a problem-tracking
system is in place, it should encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available (see the ‘Tools’ section for web resources with
listings of such tools). The following are items to consider in the tracking process (a filled-in
example follows the list):

* Complete information such that developers can understand the bug, get an idea of its severity,
and reproduce it if necessary.
* Bug identifier (number, ID, etc.)

* Current bug status (e.g., ‘Released for Retest’, ‘new’, etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn’t have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or ‘critical’-to-‘low’ is common)

* Was the bug reproducible?


* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know
when bugs are found and how to get the needed information, and reporting/summary capabilities
are needed for managers.
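Putting a few of the fields above together, a hypothetical bug report (all values are invented for illustration) might look like this:

Bug ID: BUG-1042
Status: New
Application/version: OnlineShop v2.3.1
Module/screen: Checkout – payment page
Environment: Windows 10, Chrome 120, test server TS-02
Summary: Payment fails with a valid credit card when the cart total exceeds 999.99
Steps to reproduce: 1) Add items totalling more than 999.99 2) Proceed to checkout 3) Enter a valid card and submit
Expected result: Payment is accepted and an order confirmation is shown
Actual result: An “Invalid amount” error is displayed and no order is created
Severity: High (2 on a 1–5 scale)
Reported by/date: A. Tester, 2024-01-15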

695. What if the software is so buggy it can’t really be tested at all?


* The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided with
some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:

* Deadlines (release deadlines, testing deadlines, etc.)



* Test cases completed with certain percentage passed

* Test budget depleted

* Coverage of code/functionality/requirements reaches a specified point

* Bug rate falls below a certain level

* Beta or alpha testing period ends

696. What if there isn’t enough time for thorough testing?


* Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency,
or everything that could go wrong, risk analysis is appropriate to most software development
projects. This requires judgement skills, common sense, and experience. (If warranted, formal
methods are also available.) Considerations can include:

* Which functionality is most important to the project’s intended purpose?

* Which functionality is most visible to the user?

* Which functionality has the largest safety impact?

* Which functionality has the largest financial impact on users?

* Which aspects of the application are most important to the customer?

* Which aspects of the application can be tested early in the development cycle?

* Which parts of the code are most complex, and thus most subject to errors?

* Which parts of the application were developed in rush or panic mode?

* Which aspects of similar/related previous projects caused problems?

* Which aspects of similar/related previous projects had large maintenance expenses?

* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?

* What kinds of problems would cause the worst publicity?

* What kinds of problems would cause the most customer service complaints?

* What kinds of tests could easily cover multiple functionalities?

* Which tests will have the best high-risk-coverage to time-required ratio?



697. What if the project isn’t big enough to justify extensive testing?
* Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in ‘What if there isn’t enough time for thorough testing?’ apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.

698. What can be done if requirements are changing continuously?


This is a common problem and a major headache.

* Work with the project’s stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.

* It’s helpful if the application’s initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.

* If the code is well-commented and well-documented this makes changes easier for the
developers.

* Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.

* The project’s initial schedule should allow for some extra time commensurate with the possibility
of changes.

* Try to move new requirements to a ‘Phase 2’ version of an application, while using the original
requirements for the ‘Phase 1’ version.

* Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.

* Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted – after all, that’s their job.

* Balance the effort put into setting up automated testing with the expected effort required to re-
do them to deal with changes.

* Try to design some flexibility into automated test scripts.

* Focus initial automated testing on application aspects that are most likely to remain unchanged.

* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans)

* Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).

699. What is performance testing?


A: Although performance testing is described as a part of system testing, it can be regarded as a
distinct level of testing. Performance testing verifies loads, volumes and response times, as defined
by requirements.

700. What is load testing?


A: Load testing is testing an application under heavy loads, such as the testing of a web site under
a range of loads to determine at what point the system response time will degrade or fail.
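As a minimal sketch of the idea (Python standard library only; the URL, user count and request count are placeholders, and real load testing is normally done with a dedicated tool), the script below simulates several concurrent users and reports response times:

# Minimal load-test sketch: N simulated users each make a few timed requests.
import threading
import time
from urllib.request import urlopen

TARGET_URL = "http://example.com/"   # placeholder URL
USERS = 10                           # simulated concurrent users
REQUESTS_PER_USER = 5

response_times = []
lock = threading.Lock()

def user_session():
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        urlopen(TARGET_URL, timeout=30).read()
        elapsed = time.time() - start
        with lock:
            response_times.append(elapsed)

threads = [threading.Thread(target=user_session) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("requests completed:", len(response_times))
print("average response time: %.3f s" % (sum(response_times) / len(response_times)))
print("worst response time:   %.3f s" % max(response_times))

Increasing USERS step by step and watching the average and worst times degrade is exactly the behaviour load testing is meant to expose.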

701. What is installation testing?


A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation
test for a release is conducted with the objective of demonstrating production readiness.
This test includes the inventory of configuration items, performed by the application’s System
Administration, the evaluation of data readiness, and dynamic tests focused on basic system
functionality. When necessary, a sanity test is performed, following installation testing.

702. What is security/penetration testing?


A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or wilful damage.

This type of testing usually requires sophisticated testing techniques.

703. What is recovery/error testing?


A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.

704. What is compatibility testing?


A: Compatibility testing is testing how well software performs in a particular hardware, software,
operating system, or network environment.


708. What is comparison testing?


A: Comparison testing is testing that compares software weaknesses and strengths to those of
competitors’ products.

709. What is acceptance testing?


A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system functionality and usability prior to the system being released to
production.

The acceptance test is the responsibility of the client/customer or project manager; however, it is
conducted with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.

710. What is alpha testing?


A: Alpha testing is testing of an application when development is nearing completion. Minor design
changes can still be made as a result of alpha testing. Alpha testing is typically performed by a
group that is independent of the design team, but still within the company, e.g. in-house software
test engineers, or software QA engineers.

711. What is beta testing?


A: Beta testing is testing an application when development and testing are essentially completed
and final bugs and problems need to be found before the final release. Beta testing is typically
performed by end-users or others, not programmers, software engineers, or test engineers.

712. What is a Test/QA Team Lead?


A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

713. What testing roles are standard on most testing projects?


A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator,
Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.

Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.


714. What is a Test Engineer?


A: We, test engineers, are engineers who specialize in testing. We, test engineers, create test
cases, procedures, scripts and generate data. We execute test procedures and scripts, analyse
standards of measurements, evaluate results of system/integration/regression testing. We also…


• Speed up the work of the development staff;
• Reduce your organization’s risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws, before users
get discouraged, before shareholders lose their cool and before employees get bogged down;
• Help the work of your development staff, so the development team can devote its time to build
up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
• Save money by discovering defects ‘early’ in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs and
design flaws damage the reputation of your company.

715. What is a Test Build Manager?


A: Test Build Managers deliver current software versions to the test environment, install the
application’s software and apply software patches, to both the application and the operating
system, set-up, maintain and back up test environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a Test Build Manager.

716. What is a System Administrator?


A: Test Build Managers, System Administrators, Database Administrators deliver current software
versions to the test environment, install the application’s software and apply software patches, to
both the application and the operating system, set-up, maintain and back up test environment
hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a System Administrator.

717. What is a Database Administrator?


A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application’s software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.

718. What is a Technical Analyst?


A: Technical Analysts perform test assessments and validate system/functional test requirements.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Technical Analyst.

719. What is a Test Configuration Manager?


A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Test Configuration Manager.

720. What is a test schedule?


A: The test schedule is a schedule that identifies all tasks required for a successful testing effort, a
schedule of all test activities and resource requirements.

721. What is software testing methodology?


A: One software testing methodology is the use of a three-step process of…
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and melded to your organization’s needs. G C Reddy believes that
using this methodology is important in the development and ongoing maintenance of his clients’
applications.

722. What is SDLC?


SDLC (Software Development Life Cycle) is the process of developing software through business
needs, analysis, design, implementation, release and maintenance.

723. What is STLC?


The process of testing software in a well-planned and systematic way is known as the software
testing life cycle (STLC).

Different organizations have different phases in their STLC; however, a generic Software Test Life
Cycle (STLC) consists of the following phases.

i) Test Planning

ii) Test Design

iii) Test Execution

iv) Evaluating the Exit criteria

v) Test Closure

724. What is the difference between SDLC and STLC?


Software Development Life Cycle involves the complete Verification and Validation of a Process or
a Project.

Whereas Software Testing Life Cycle involves only Validation.

Software Development Life Cycle involves business requirement specifications, Analysis, Design,
Software requirement specifications, Development Process (Coding and Application Development),
Testing Process (Preparation of Test Plan, Preparation of Test Cases, Testing, Bug Reporting, Test
Logs & Test Reports), Release and Maintenance.

Whereas Software Testing Life Cycle involves Preparation of Test Plan, Preparation of Test Cases,
Test Execution, Bug Reporting & Tracking, Re-testing & Regression Testing, and Test Closure.

STLC is a Part of SDLC.

725. What is Waterfall Model?


Waterfall Model is one of the most widely used software development processes. It is also called the
“Linear Sequential Model”.

It is widely used in commercial development projects. In this model, we move to the next phase
(step) only after completing the previous phase, just as in a waterfall water flows down from the
upper steps.

726. What are the advantages of Waterfall Model?


i) Simple and easy to use

ii) Easy to manage due to the rigidity of the model- each phase has specific deliverables and a
review process.

iii) Phases are processed and completed one at a time.

iv) Works well for smaller projects where requirements are very well understood.

727. What are the disadvantages of Waterfall Model?


i) No working software is produced until late during the life cycle

ii) High amount of risk and uncertainty

iii) Poor model for complex and object oriented projects.

iv) Poor model for Long and ongoing projects

v) Poor Model where requirements are at a moderate to high risk of changing.



728. What is V Model?


A framework to describe the software development life cycle activities from
requirements specification to maintenance.

The V-model illustrates how testing activities can be integrated into each phase of the software
development life cycle.


729. What are the advantages of V Model?


o The tester’s role begins in the requirements phase itself.

o Multiple stages of testing are available, so defect multiplication can be reduced.

o Can be used for any type of requirements.

o Due to multiple stages of testing and the involvement of multiple teams, quality can be improved.

o The V Model supports a wide range of development methodologies, such as structured and
object-oriented systems development.

o The V Model supports tailoring.

730. What are the disadvantages of V Model?


o It is a more expensive model than the Waterfall model; it needs a lot of resources, budget and time.

o Co-ordination and Maintenance are difficult.



o Adopting changes in requirements or adding new requirements in the middle of the process is
difficult.

o It needs an established process for proper implementation.

731. What is Prototype Model?


It begins with requirements gathering. Developers and Customers meet and define the overall
objectives of the software.

Developers prepare design documents using all available requirements and then build prototypes.
The prototypes are sent to the customer, who evaluates them and gives feedback. Developers then
refine the requirements, modify the software design and produce modified prototypes. The process
continues in this way; after getting the customer’s confirmation, developers start the regular
process: software design, coding (implementation), testing, and release & maintenance.

The objective of this approach is to get clear (complete and correct) requirements from customers
in order to avoid defect multiplication.

732. What are the advantages of Prototype Model?


o The customer doesn’t need to wait long as in the Linear Model or Waterfall Model.

o Feedback from customer is received periodically and the changes don’t come as a last minute
surprise.

o Customer interaction improves the quality as well as the success rate.

733. What are the disadvantages of Prototype Model?


o It is an expensive and time-consuming approach when compared to sequential models like the
Waterfall model.

o The customer could mistake the prototype for the working version.

o Developers could also make implementation compromises.

o Once requirements are finalized, adopting changes and adding new requirements is difficult.

734. What is Agile development methodology? What is Scrum Methodology in Agile Software
Development?
There are different methodologies which are part of the agile model; the most famous one is the
Scrum methodology. Like the other agile approaches, Scrum is an iterative and incremental
methodology. It differs from the other methodologies because it introduced the idea of empirical
process control. Scrum was originally introduced for software project management, but it was
eventually also used for software maintenance.

The best part of the Scrum methodology is that it makes use of the real-world progress of a project
for planning and scheduling releases. The entire software project is divided into small iterations
known as sprints, each typically lasting from one week to three weeks. At the end of each sprint, the
team members meet with the stakeholders to assess the progress of the project and chalk out the
further plan of action. This assessment helps in taking stock of the current state of affairs and
reworking the plan so that the project is completed on time, rather than merely speculating about
the outcome.

735. What is the Spiral Model?


The Spiral Life Cycle Model is a type of iterative software development model which is generally
implemented in high-risk projects.

It was first proposed by Boehm. In this system development method, we combine the features of
both the Waterfall model and the Prototype model. In the Spiral model, all the activities are
arranged in the form of a spiral.

736. What is Requirements Gathering phase in SDLC, explain it?


This is the initial stage of the SDLC; in this stage, business analysts gather requirements from the
customer and document them. The document is called the BRS (Business Requirements
Specification), CRS (Customer Requirements Specification), URS (User Requirements Specification),
PRD (Product Requirements Document) or BDD (Business Design Document).

NOTE:

. The document name may vary from one company to another, but the process is the same

. In software application development, BA category people gather requirements from a specific
customer

. In software product development, BA category people gather requirements from model customers
in the market

737. What is BRS in Software Development?


BRS is the Business Requirements Specification: the client who wants the application built gives the
specification to the software development organization, and the organization then converts it into
an SRS (Software Requirements Specification) as per the needs of the software.

738. What is SRS in Software Development?


Software Requirements Specification (SRS) is a detailed description of the behaviour of the
system to be developed.

SRS document is an agreement between the developer and the customer covering the functional
and non-functional requirements of the software to be developed.

SRS document consists of 2 sub documents



i) System Requirements Specification

ii) Functional Requirements Specification

The SRS is derived from the BRS document, and it is a technical document.

739. What is HLD or Global Design?


High Level Design gives the overall system design in terms of functional architecture and database
design. It lays out the overall architecture of the entire system, from the main module down to all
sub-modules.

740. What is LLD or Detailed Design?


In Low Level Design, the view of the application developed during high level design is broken
down into modules and programs. Logic design is done for every program and then documented as
program specifications. For every program, a unit test plan is created.

741. Explain about the Software Release process?


The PM (Project Manager) forms the Release Team; the release team consists of a few developers,
testers, a project management executive, and a system administrator.

The Release Team goes to the customer site and deploys the software.

The Release Team provides some training to the people at the customer site (if required).

742. Explain about the Software Maintenance Process?


The PM (Project Manager) forms the Maintenance Team; the maintenance team consists of a few
developers, testers, and a project management executive.

The CCB (Change Control Board) receives customer change requests and performs the required
changes.

Maintenance (3 types):

a) Modifications
i) Corrective changes and ii) Enhancements

b) Migration – migrating from old technology to new technology
(e.g. Windows 98 to Vista/Windows 7, MS Access to SQL Server/Oracle)

c) Retirement – retiring the old system and developing a new system
(e.g. two-tier application to web application)

743. What is the difference between Mobile device testing and mobile application testing?
Ans. Mobile device testing means testing the mobile device and mobile application testing means
testing of the mobile application on
a mobile device.

744. What are the types of mobile applications?


Ans. Mobile applications are of three types:
Native Application – a native app is installed from an application store such as Android’s Google Play
or Apple’s App Store. An application which can be installed onto your device and run there is known
as a native application, e.g. WhatsApp, Angry Birds, etc.
Web Application – web applications run in mobile web browsers like Chrome, Firefox, Opera, Safari,
etc. over a mobile network or WIFI. Examples of web applications are m.facebook.com, m.gmail.com,
m.yahoo.com, m.rediffmail.com, etc.
Hybrid Application – hybrid apps are combinations of native apps and web apps. They can run on
devices or offline and are written using web technologies like HTML5 and CSS, e.g. eBay, Flipkart,
etc.

745. How to test CPU usage on mobile devices?


Ans. There are various tools available from Google Play or the App Store; you can install apps like
CPU Monitor, Usemon, CPU Stats, CPU-Z, etc. These are advanced tools that record historical
information about the processes running on your device.
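On Android, CPU usage can also be sampled from a connected workstation through adb; the sketch below (Python calling the standard adb tool, assuming a device is attached with USB debugging enabled) shows the idea:

# Minimal sketch: sample CPU usage of a connected Android device via adb.
import subprocess

def adb(*args):
    # Run an adb command and return its text output.
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

# Per-process CPU statistics collected by the system.
print(adb("shell", "dumpsys", "cpuinfo"))

# One-shot snapshot of the busiest processes (similar to the Linux 'top' command).
print(adb("shell", "top", "-n", "1"))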

746. What are the defects tracking tools used for mobile testing?
Ans. You can use the same defect tracking tools that you use for web application testing, like QC,
Jira, Rally, Bugzilla, etc.

747. What major networks should be considered while performing application testing?
Ans. You should test the application on 4G, 3G, 2G, and WIFI. 2G is a slower network, so it is good to
verify your application on a slower network as well to track your application’s performance.

748. When performing a sanity test on a mobile application, what criteria should be taken into
consideration?
Ans.
Installation and uninstallation of the application
Verify the device in different available networks like 2G, 3G, 4G or WIFI
Functional testing
Interrupt testing – able to receive calls while running the application
Compatibility testing – able to attach a photo from the gallery to a message
Test application performance on different handsets
Perform some negative testing by entering invalid credentials and test the behaviour of the
application

749. Which things should be considered when testing a mobile application through the black box technique?
Ans.
By testing your application on multiple devices.
By changing the port and IP addresses to make sure the device is getting connected and
disconnected properly.
By making calls and sending messages to other devices.
By testing your web application on different mobile browsers like Chrome, Firefox, opera, dolphin
etc.

750. What is the latest version of iOS?


Ans. iOS 8. (This changes quite often, so please check the apple site for most recent info)

751. What is the latest version of Android?


Ans. Lollipop 5.0–5.0.2 (this also changes often, so check for the latest version).

752. What is the extension of Android files?


.apk (Android application package)

753. What is the extension of iOS files?


Ans. .ipa

754. What is the full form of MMS?


Ans. Multimedia Messaging Services

755. What are MT and MO in SMS?


Ans. Sending a message is known as MO (Mobile Originated) and receiving a message is known as
MT (Mobile Terminated).

756. What is WAP?


Ans. WAP is Wireless Application Protocol used in network apps.

757. What is GPRS and how it works?


Ans. GPRS is General Packet Radio Service which works on a mobile network with the help of IP
transmissions. GPRS provides the
transmission of IP packets over existing cellular networks. It provides you internet services on
mobile.

758. What is the latest version of Windows?


Ans. Windows 10 (this changes often, so check for the latest version).

759. What do you mean by Streaming media?


Ans. Streaming is the process of receiving data continuously from a server and playing it as it
arrives, rather than downloading it completely first. Streaming media is multimedia that is
transferred from a server or provider to the receiver in this way.

760. What are the automation tools available for mobile application testing?
Ans. There are many automation tools available in the market for mobile application testing; iPhone Tester is one of the tools to test applications on iPhones, and Screenplay for Android devices.

761. What is the best way to test different screen sizes of the devices?
Ans. Using an emulator.

762. What is the basic difference between Emulator and Simulator?


Ans. An emulator mimics both the hardware and the software of the target device, whereas a simulator mimics only the software. A simulation is a system that behaves similar to something else, while an emulation is a system that behaves exactly like something else.

763. What are the common challenges in mobile application testing?


Ans. Working with different operating systems, a variety of handsets, different networks, and a variety of screen sizes are the common challenges in mobile application testing.

764. What are the tools based on cloud-based mobile testing?


Ans. SeeTest, Perfecto Mobile, BlazeMeter, AppThwack, Manymo, DeviceAnywhere, etc.

765. What web services are used by a mobile app?


Ans. It depends upon the application; there can be many. Both SOAP and REST web services are used, but RESTful services are more common now.

766. What all devices have you worked till now?


Ans. Android, Symbian, Windows, iPhone etc.

767. How to create Emulator on Android?


Ans. Give a name in name field -> select target API from the list -> enter the size -> select the
required skin section -> click on create
AVD -> select the required AVD -> click on start button -> launch it

768. Does Selenium support mobile internet testing?


Ans. Yes, it does; the Opera browser is used for mobile internet testing.

769. Does Selenium support Google Android Operating System?


Ans. Yes, Selenium 2.0 supports Android Operating System.

770. Name debugging tools for mobile?


Ans. Errors can be verified from the generated logs. We can use the Configuration Utility on iOS and monitor.bat on Android. A few tools to name: Android DDMS, Remote Debugging on Android with Chrome, Debugging from Eclipse with ADT, Android Debug Bridge (ADB), iOS Simulator, etc.

771. Name mobile automation testing tools you know?


Ans. Paid tools:
Ranorex, Silk Mobile, SeeTest
Free tools:
Appium, Robotium, KIF, Calabash

772. What is the strategy used to test new mobile app?


Ans.
System integration testing
Functional testing
Installation and uninstallation of the app
Test HTML control
Performance
Check in multiple mobile OS
Cross browser and cross-device testing
Gateway testing
Network and Battery testing

773. What does a test plan for Mobile App contain?


Ans. A test plan for a mobile app is very similar to that of a software app and contains:
1. Objective
2. Automation tools required
3. Required features to be tested:
Network
Security
Performance
Size
Battery
Memory
4. Features not to be tested
Display size
Resolution
5. Test cases
6. Test Strategy
7. Tested by
8. Time required
9. No. of resources required

774. Why does a mobile phone number have 10 digits?


Ans. The number of digits in a mobile phone number decides the maximum number of mobile phones we can have without dialling the country code. For example, 10 digits allow up to 10^10 (10 billion) unique numbers.

775. Explain critical bugs that you come across while testing in mobile devices or application?
Ans. Explain the example as per your experience. Here are top 10 mobile app risks.

776. Name mobile application testing tools


Ans.
Android
Android Lint
Find Bugs
iPhone
Clang Static Analyzer
Analyze code from XCode

777. Full form of the various extensions


Ans.
apk – Android Application Package File
exe – Executable File
ipa – iOS App Store Package
prc – Palm Resource Compiler
jad – Java Application Descriptor
adb – Android Debug Bridge
aapt – Android Asset Packaging Tool

778. How to test different screen sizes of the devices


Ans. Using Emulators

779. What is web service?


Ans. A web service is a software component used to perform a task; it acts as an interface between one program and another over a network.

780. What are the roles and responsibilities on a current mobile application you are testing?
Ans. Answer based on your experience on the current project you are working on. Also, read mobile
testing career guide.

781. How to create the log file?


Ans. Using the logcat tool (adb logcat) on Android.

782. How can we install the build on iPhones and iPads?


Ans. Using iTunes.

783. Can we use QTP/UFT for mobile automation testing?


Ans. Yes, with the help of the SeeTest add-in.

784. Is cloud base mobile testing possible? Name any?


Ans. Yes, for example Perfecto Mobile and SeeTest.

785. Where to perform forward compatibility testing?


Ans. This can be done with new versions of the mobile application.

786. Explain Web Services?


Ans: A Web Service can be defined as an application component for communication or say
exchanging information between two applications over the network. Web services basically work
on client server model where web services are easily accessible to client applications over the
network.
To enable communication between various applications, web services take the help of open
standards like XML (for data tagging), SOAP (for message transferring) and WSDL (to denote
service availability).

787. What are the components of web service?


Ans: The different components of web services are
 SOAP- Simple Object Access Protocol
 UDDI- Universal Description, Discovery, and Integration
 WSDL- Web Service Description language
 RDF- Resource Description Framework
 XML- Extensible Markup Language

788. Explain the term Interoperability with respect of Web services?


Ans: The term ‘Interoperability’ is widely used in product marketing description which defines the
ability of different products or systems working together without any special effort from the
customer part.
This is applicable in the same way when we talk about ‘Interoperability’ in terms of web services.
Here it determines the communication between various applications, sharing of data as well as
services among themselves. There is no restriction on the type of application to be in
communication. If any code is written, it will be treated as generic code that will be understood by all applications. Thus, the cost of writing specific code for each application is reduced.


789. Define web service protocol stack and its layers?


Ans: Web service protocol stack consists of 4 layers. This can be described as follows

1) Service transport: This is the first layer which helps in transporting XML messages between
various client applications. This layer commonly uses the below-mentioned protocols:
 HTTP(Hypertext Transport Protocol)
 SMTP(Simple Mail Transport Protocol)
 FTP(File Transfer Protocol)
 BEEP(Block Extensible Exchange Protocol)
2) XML messaging: This layer is based on the XML model where messages are encoded in
common XML format which is easily understood by others. This layer includes
 XML-RPC
 SOAP(Simple Object Access Protocol)
3) Service description: This layer contains description like location, available functions, and data
types for XML messaging which describes the public interface to a specific web service. This layer
includes:
 WSDL(Web Service Description Language)
4) Service discovery: This layer is responsible for providing a way to publish and find web
services over the web. This layer includes:
 UDDI(Universal Description, Discovery, and Integration)

790. Explain web service architecture?


Ans: Web service framework consists of an architecture which consists of three different layers.
The roles of these layers are defined as below
 Service Provider: As the name denotes, service provider role is to create the web service
and makes it accessible to the client applications over the internet for their usage.
 Service Requestor: A service requestor is basically any consumer of the web service, such as a client application. Client applications, written in any language, contact a web service for any type of functionality by sending an XML request over the available network connection.
 Service Registry: Service registry is the centralized directory which helps locate web
services for client applications. Here we can find the existing web services, as well as
developers, can also create the new one.
The Service Provider uses the ‘Publish’ interface of Service Registry to make the existing web
services available to client applications. With all the information provided by the service registry,
service requestor is able to bind or invoke services.

791. What do you understand by XML-RPC?


Ans: RPC is Remote Procedure Call and as the name suggests, it is the method of calling a
procedure or function available on any remote computer.
XML stands for Extensible Markup Language. Thus XML-RPC represents a simple protocol that
performs RPCs by using XML messaging. This has been considered as an excellent tool for
connecting different environments and also establishing connections between wide varieties of
computers.
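As a rough illustration of this (the endpoint URL and method name below are hypothetical, not from any real service), Python's standard xmlrpc.client module shows how a remote call is wrapped in XML messaging over HTTP:

import xmlrpc.client

# Hypothetical XML-RPC endpoint, used only for illustration.
server = xmlrpc.client.ServerProxy("http://example.com/rpc")

# The call below is marshalled into an XML <methodCall> document, sent via
# HTTP POST, and the XML <methodResponse> is unmarshalled back into Python types.
result = server.math.add(2, 3)
print(result)  # expected: 5, if the server exposes a math.add method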

792. Explain features of XML-RPC?


Ans: The major features of XML-RPC are enlisted below

 RPCs are performed using simple XML language.


 XML encoded Requests are sent via HTTP POST.
 XML Response is embedded in HTTP response.
 It is considered as platform-independent.
 It allows communication between diverse applications.
 It uses HTTP protocol for passing information between client and server computers.
 It has small XML vocabulary for describing request and response’s nature.

793. Enlist few advantages of web services?


Ans: We have already discussed web services, their architecture and components. Now, let us see some of their advantages:
 Every application is now on the internet, and it is the web service which provides the required functionality to the client applications.
 Web services help in exposing the existing functionalities over the network to help other
applications to use in their programs.
 It has features like ‘Interoperability’ which determines the communication between various
applications, sharing of data as well as services among themselves.
 Web services use the standardized web service protocol stack for communication which
consists of 4 layers namely, Service Transport, XML messaging, Service description and
Service discovery.
 It has the feature of the low cost of communication because of the usage of SOAP (Simple
Object Access Protocol) over HTTP protocol.
 Easy to deploy, integrate and is reusable.
 Allows simple integration between different feature as a part of loose coupling feature.

794. Explain the term UDDI with its features?


Ans: UDDI is an XML-based standard in the service discovery layer of web service protocol stack. It
is used for publishing and finding web services over the web as it acts like a directory. Some of the
features of UDDI are explained below
 It is an open framework and is platform independent.
 SOAP, CORBA, and Java RMI protocols are used for communication.
 It helps businesses to discover each other and enable interaction between them over the
internet.
 It acts as a database containing all WSDL files.

795. Which language is used by UDDI?


Ans: UDDI uses the language known as WSDL (Web Service Description Language)

796. Explain BEEP?


Ans: BEEP stands for Blocks Extensible Exchange Protocol. BEEP is intended for building new protocols for a variety of applications such as instant messaging, network management, file transfer, etc. It is an Internet Engineering Task Force (IETF) framework which is layered directly over TCP. It has some built-in features like

 Authentication
 Security
 Error handling
 Handshake Protocol

797. Enlist few tools used to test web services?


Ans: To test Web services, below-mentioned tools are used
 SoapUI
 REST client
 JMeter

798. Do we require any special application to access web service?


Ans: The only requirement for accessing web services from any application is that it must support XML-based request and response. There is no need to install any special application for accessing web services.
RESTful Web Services Interview Questions

799. What do you know about RESTful Web Services?


Ans: REST stands for Representational State Transfer. REST is defined as a stateless client-server architectural style for developing applications accessed over the web. When web services use HTTP methods to implement the concept of the REST architecture, they are known as RESTful web services. In this architectural style, data and functionality are served as resources and are accessed by URIs (Uniform Resource Identifiers).
RESTful web services enable web services to work best by inducing properties like
 Performance
 Scalability
 Modifiability

800. Explain the advantages of RESTful web services?


Ans: Enlisted below are the advantages of RESTful web services
 They are considered as language and platform independent as these can be written in any
programming language and can be executed on any platform.
 REST is lightweight protocol and is considered as fast because of less consumption of
bandwidth and resources.
 It supports multiple technologies and different data formats like plain text, XML, JSON, etc.
 It has loosely coupled implementation and can be tested easily over browsers.

801. Differentiate SOAP and REST?


Ans: The difference between SOAP and REST can be easily understood from the pairs below.

SOAP: Simple Object Access Protocol (SOAP) serves as a standard protocol for web service creation.
REST: Representational State Transfer (REST) is an architectural style for web service creation.

SOAP: Web services and clients are tightly coupled and define some standards that are to be strictly followed.
REST: It does not follow too many standards and is loosely coupled.

SOAP: It requires more bandwidth and resources, and uses service interfaces for exposing business logic.
REST: It requires less bandwidth and resources, and uses URIs (Uniform Resource Identifiers) for exposing business logic.

SOAP: It is usually less preferred and permits the XML data format only.
REST: It is usually more preferred and permits data formats like plain text, HTML, JSON, etc.

SOAP: The Java API for SOAP web services is JAX-WS.
REST: The Java API for RESTful web services is JAX-RS.

SOAP: SoapUI can be used for testing SOAP web services.
REST: Browsers and extensions such as Chrome Postman are used for testing REST web services.

SOAP: It defines its own security and uses a WSDL contract for binding web services and client programs.
REST: It does not have any defined contract and does not define its own security methods.

802. Explain different HTTP methods supported by RESTful web services?


Ans: Enlisted below are some common HTTP methods, along with their functions, that are supported by RESTful web services (a minimal usage sketch follows the list):
 GET: Read-only access to a resource.
 PUT: Create or replace a resource at a known URI.
 DELETE: Removal of a resource.
 POST: Create a new resource or submit an update to an existing resource.
 OPTIONS: Get the operations supported on a resource.
 HEAD: Returns HTTP headers only, no body.
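A minimal sketch in Python using the requests library against a hypothetical API (the base URL, paths and payloads below are assumptions, not part of any real service):

import requests

BASE = "https://api.example.com/v1"  # hypothetical endpoint

# GET - read a resource
resp = requests.get(f"{BASE}/users/42")
print(resp.status_code, resp.json())

# POST - create a new resource under the collection URI
resp = requests.post(f"{BASE}/users", json={"name": "Asha"})

# PUT - create or replace the resource at a known URI
resp = requests.put(f"{BASE}/users/42", json={"name": "Asha", "role": "tester"})

# DELETE - remove the resource
resp = requests.delete(f"{BASE}/users/42")

# HEAD / OPTIONS - metadata only
print(requests.head(f"{BASE}/users/42").headers)
print(requests.options(f"{BASE}/users").headers.get("Allow"))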

803. What is a resource in RESTful web service and how it is represented?


Ans: Resource is said to be a fundamental concept having a type and relationship with other
resources. In REST architecture, each content is considered as the resource and they are identified
by their URIs.
Resources are represented with the help of XML, JSON, text etc. in RESTful architecture.

804. What are the core components of HTTP request and HTTP response?
Ans: An HTTP request has the following 5 major components (a short inspection sketch in Python follows the lists):

 Verb – indicates the HTTP method, like GET, PUT, POST, etc.
 URI – identifies the resource on the server.
 HTTP Version – indicates the HTTP version.
 Request Header – contains metadata like client type, cache settings, message body format, etc. for the HTTP request message.
 Request Body – represents the content of the message.

An HTTP response has the following 4 major components:

 Status/Response Code – indicates the status of the server for the requested resource.
 HTTP Version – represents the HTTP version.
 Response Header – consists of metadata like content length, content type, server length, etc. for the HTTP response message.
 Response Body – represents the response message content.
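A tiny sketch in Python showing these pieces on a live call (the URL and header values are placeholders):

import requests

resp = requests.get("https://example.com/api/items/1",            # URI
                    headers={"Accept": "application/json"})       # request header

req = resp.request                        # the prepared request that was sent
print(req.method)                         # verb, e.g. GET
print(req.url)                            # URI
print(req.headers)                        # request headers

print(resp.status_code)                   # status/response code
print(resp.headers.get("Content-Type"))   # response header
print(resp.text[:200])                    # response body (first 200 characters)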

805. What is the purpose and format of URI in REST architecture?


Ans: Purpose of URI is to locate resources on the server that are hosting web services.
Format of URI

<protocol>://<service-name>/<ResourceType>/<ResourceID>

806. Explain the term statelessness in terms of RESTful web services?


Ans: In the REST architecture, there is a restriction that a REST web service is not allowed to keep client state on the server. This condition is known as 'Statelessness'. In this situation, the client passes its context (state) to the server with each request, and the server uses that context only to process that particular request; it does not retain it between requests.

807. Enlist advantages and disadvantages of statelessness?


Ans: The advantages of statelessness include
 Each and every request is treated independently.
 Application design is simplified as the server does not maintain the client's previous interactions.
 It works well with the HTTP protocol, as HTTP itself is stateless.
The disadvantage of statelessness includes

 Every time client interaction takes place, web services are to be provided with extra
information about each request so that they can interpret the client’s state.

808. For designing a secure RESTful web service, what are the best factors that should be
followed?
Ans: As HTTP URL paths are used as a part of RESTful web service, so they need to be secured.
Some of the best practices include the following
 Perform validation of all inputs on the server from SQL injection attacks.
 Perform user’s session based authentication whenever a request is made.
 Never pass sensitive data like username, password, session token, etc. through the URL. These should be passed via the POST method (in the request body).
 Methods like GET, POST, PUT, DELETE, etc. should be executed with proper restrictions.
 HTTP generic error message should be invoked wherever required.
SOAPUI Web Services

809. Define SOAP web services?


Ans: Simple Object Access Protocol (SOAP) is defined as the XML based protocol which is known
for designing and developing web services as well as enabling communication between
applications developed on different platforms with different programming languages over the
internet. It is both platform and language independent.

810. What are the various approaches available for developing SOAP based web services?
Ans: There are basically 2 different approaches available for developing SOAP-based web services.
These are explained as follows
 Contract-first approach: In this approach, the contract is defined first by XML and WSDL
and then java classes are derived from the contract.
 Contract-last approach: In this approach, java classes are defined first and then the
contract is generated which is usually the WSDL file from the java class.
“Contract-first” method is the most preferred approach.

811. Explain the major obstacle faced by SOAP users?


Ans: One of the major hindrances observed by users of SOAP is the 'firewall security mechanism'. In this case, all the HTTP ports except those which bypass the firewall are locked. In some cases, a technical issue of mixing the specification of message transport with the message structure is also observed.

812. What are the advantages and disadvantages of SOAP?


Ans: Enlisted below are advantages of SOAP web services
 SOAP allows communications between various applications and it is both language and
platform independent.
 It is very simple as well as uses standard HTTP protocol and XML for sending and receiving
messages.
 It defines and uses its own security known as WS security.
 It decouples the encoding and communication protocol from the runtime environment.

 It eradicates firewall problems and is vendor neutral.


 It allows circulation of messages in distributed and decentralized environment.
Enlisted below are disadvantages of SOAP web services
 Lightweight formats other than XML are not supported.
 Not easily testable on browsers.
 Security facilities are not present.
 SOAP is slow and cannot be easily tested on the browser.
 Web services and clients are tightly coupled and define some standards that are to be
strictly followed.

813. What are the elements of a SOAP message?


Ans: A SOAP message is just like any other XML document and has the following elements (a minimal request sketch follows the list):
 Envelope: This element is defined as the mandatory root element. It identifies the XML document as a SOAP message and determines the start and end of the message.
 Header: This element contains the optional header attributes of the message that carry application-specific information. This element can occur multiple times and is intended to add new features and functionalities.
 Body: This element is mandatory and contains the call and response messages. It is also defined as the child element of the envelope containing all the application-derived XML data that is exchanged as part of the SOAP message.
 Fault element: Errors that occur during processing of the messages are handled by the fault element. If an error is present, this element appears as a child element of the body. However, there can only be one fault block.
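As an illustration only (the endpoint, namespace, operation and SOAPAction below are made up), a SOAP request can be sent from Python as a plain HTTP POST carrying the Envelope/Header/Body structure:

import requests

# Hypothetical service details, for illustration only.
url = "http://example.com/service"
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    url,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stock/GetPrice"},
)
print(resp.status_code)
print(resp.text)  # the response Body, or a Fault element on error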

814. What are the important characteristics of SOAP envelope element?


Ans: We have seen the basic work of a SOAP envelope element in the previous answer, now let us
see some of its characteristics
 SOAP envelope is a packaging mechanism.
 Every Soap message has a mandatory root envelope message.
 Only one body element is allowed for each envelope element.
 As the SOAP version changes, envelope changes.
 If the header element is present, it should appear as the first child.
 Prefix ENV and envelope element is used for specification.
 A namespace and an optional encoding style are used in case of optional SOAP encoding.

815. Enlist few syntax rules applicable for SOAP message?


Ans: Enlisted below are some important syntax rules that are applicable for SOAP message
A SOAP message

 Must be encoded using XML.


 Must use the SOAP envelope namespace.
 Must use the SOAP encoding namespace.
 Must not contain the DTD reference.
 Must not contain XML processing instructions.

816. Define SOA?


Ans: A Service Oriented Architecture (SOA) is basically defined as an architectural pattern
consisting of services. Here application components provide services to the other components
using communication protocol over the network. This communication involves data exchanging or
some coordination activity between services.
Some of the key principles on which SOA is based are mentioned below

 The service contract should be standardized containing all the description of the services.
 There is loose coupling defining the less dependency between the web services and the
client.
 It should follow Service Abstraction rule, which says the service should not expose the way
functionality has been executed to the client application.
 Services should be reusable in order to work with various application types.
 Services should be stateless having the feature of discoverability.
 Services break big problems into little problems and allow diverse subscribers to use the
services.

817. Explain the actions performed by SOAPUI?


Ans: SOAPUI is an open-source, free and cross-platform functional testing solution. Mentioned
below are some actions performed by SOAPUI
 It can help create functional, security and load testing test suites.
 Data driven testing and scenario based testing is also performed.
 It has the ability to impersonate web services as well as has got built-in reporting abilities.
Web Services Security

818. What are the primary security issues of web service?


Ans: To ensure reliable transactions and secure confidential information, web services requires
very high level of security which can be only achieved through Entrust Secure Transaction
Platform. Security issues for web services are broadly divided into three sections as described
below
1) Confidentiality: A single web service can have multiple applications and their service path
contains a potential weak link at its nodes. Whenever messages or say XML requests are sent by
the client along with the service path to the server, they must be encrypted. Thus, maintaining the
confidentiality of the communication is a must.
2) Authentication: Authentication is basically performed to verify the identity of the users as well as to ensure that the user using the web service has the right to use it. Authentication is also done to track the user's activity. There are several options that can be considered for this purpose
 Application level authentication
 HTTP digest and HTTP basic authentication
 Client certificates
3) Network Security: This is a serious issue which requires tools to filter web service traffic.

819. What do you know about foundation security services?


Ans: Foundation security services consist of the following

 Integration
 Authentication
 Authorization
 Digital Signatures
 Encryption processes

820. What is Entrust Identification Service?


Ans: Entrust Identification Service is categorized under Entrust Secure Transaction Platform which
provides essential security capabilities to ensure secure transactions. This usually allows
companies to fully control the identities that are trusted to perform web service transactions.

821. What is Entrust Entitlements Service?


Ans: The Entrust Entitlements Service is the one whose task is to verify the services (or users) that are attempting to access a web service. It basically ensures security in business operations and also provides some authentication services.

822. What is Entrust Privacy Service?


Ans: As the name suggests, Entrust Privacy Service perform encryption of the data so that only
concerned parties are able to access the data. It basically deals with two factors
 Confidentiality
 Security
WSDL Interview Questions

823. Explain WSDL?


Ans: WSDL stands for Web service Description Language. It is a simple XML document which
comes under the Service Description layer of Web Service Protocol Stock and describes the
technical details or locates the user interface to web service. Few of the important information
present in WSDL document are
 Method name
 Port types
 Service end point
 Method parameters
 Header information
 Origin, etc.

824. What are the different elements of WSDL documents?


Ans: The different elements of WSDL document along with brief description is enlisted below
 Types: This defines the message data types, which are in the form of XML schema, used
by the web services.
 Message: This defines the data elements for each operation where messages could be the
entire document or an argument that is to be mapped.
 Port Type: There are multiple services present in WSDL. Port type defines the collection of
operations that can be performed for binding.
 Binding: Determines and defines the protocol and data format for each port type.

 Operations: This defines the operations performed for a message to process the
message.

825. Explain the message element in WSDL?


Ans: Message element describes the data that has been exchanged between the consumer and
the web service providers. Every web service consists of two messages and each message has zero
or more <part> parameters. The two messages are
 Input: Describes the parameter for the web service
 Output: Describes the return data from the web service.

826. Enlist the operation types response used in WSDL?


Ans: WSDL basically defines 4 types of Operation type responses. These are enlisted below
 One-way: Receives a message but does not return response.
 Request-Response: Receives a request and return a response.
 Solicit-Response: Sends a request and wait for a response.
 Notification: Sends a message but does not wait for a response.
Among these, Request-Response is the most common operation type.

827. Is binding between SOAP and WSDL possible?


Ans: Yes, it is possible to bind WSDL to SOAP. The binding is possible by basically two attributes
 Name: Defines the name of the binding.
 Type: Defines the port for the binding.
For SOAP binding, two attributes need to be declared

 Transport: Defines the SOAP protocol to be used i.e. HTTP.


 Style: This attribute can be ‘rpc’ or ‘document’.

828. Explain <definition> element?


Ans: Definition element is described as the root of WSDL document which defines the name of the
web service as well as act as a container for all the other elements.

829. What are the two attributes of <Port> element in WSDL?


Ans: Every port element is related to a specific binding by defining an individual endpoint. The port
element has following two attributes
 Name: This attribute provides the unique name within the WSDL document.
 Binding: This attribute refers to the process of binding which has to be performed as per
the linking rules defined by WSDL.

830. What are the points that should be considered by ports while binding?
Ans: WSDL allows extensibility elements which are used to specify binding information. Below are
few important points that should be kept in consideration while binding.
A port must not

 Specify more than one address.



 Specify any binding information other than address information.



831. What is GUI?

There are two types of interfaces for a computer application. Command Line Interface is where you
type text and computer responds to that command. GUI stands for Graphical User Interface where
you interact with the computer using images rather than text.

Following are typical GUI elements which can be used for interaction between the user and the application: windows, menus, buttons, icons, checkboxes, radio buttons, text boxes, lists, toolbars and dialog boxes.

GUI Testing is a validation of these elements.

832. What is GUI Testing?

GUI testing is defined as the process of testing the system's Graphical User Interface of the
Application Under Test. GUI testing involves checking the screens with the controls like menus,
buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes, and windows, etc.

GUI is what the user sees. For example, if you visit guru99.com, the homepage you see is the GUI (graphical user interface) of the site. A user does not see the source code; only the interface is visible to the user. The focus is especially on the design structure and on whether the images and controls are working properly.

In the above example, if we have to do GUI testing, we first check that the images are completely visible in different browsers.

Also, the links should be available, and the buttons should work when clicked.

Also, if the user resizes the screen, neither images nor content should shrink, crop, or overlap.


833. What is the Need of GUI Testing?

Now the basic concept of GUI testing is clear. The few questions that will strike in your mind will be

 Why do GUI testing?


 Is it really needed?
 Is testing the functionality and logic of the application not enough? Why waste time on UI testing?

To get the answer, think as a user, not as a tester. A user doesn't have any knowledge about the XYZ software/application. It is the UI of the application which decides whether a user is going to use the application further or not.

A normal user first observes the design and look of the application/software and how easy it is to understand the UI. If a user is not comfortable with the interface or finds the application complex to understand, he is never going to use that application again. That's why the GUI is a matter of concern, and proper testing should be carried out to make sure the GUI is free of bugs.

834. What do you Check-in GUI Testing?

The following checklist will ensure detailed GUI Testing in Software Testing.

 Check all the GUI elements for size, position, width, length, and acceptance of characters
or numbers. For instance, you must be able to provide inputs to the input fields.
 Check you can execute the intended functionality of the application using the GUI
 Check Error Messages are displayed correctly
 Check for Clear demarcation of different sections on screen
 Check Font used in an application is readable
 Check the alignment of the text is proper
 Check the Colour of the font and warning messages is aesthetically pleasing
 Check that the images have good clarity
 Check that the images are properly aligned
 Check the positioning of GUI elements for different screen resolution.

835. GUI Testing Techniques / Methods?

GUI testing can be done in three ways:

Manual Based Testing

Under this approach, graphical screens are checked manually by testers in conformance with the
requirements stated in the business requirements document.

Record and Replay



GUI testing can be done using automation tools. This is done in 2 parts. During Record, test steps
are captured by the automation tool. During playback, the recorded test steps are executed on the
Application Under Test. Example of such tools - QTP.

Model Based Testing

A model is a graphical description of a system's behaviour. It helps us to understand and predict the system's behaviour. Models help in the generation of efficient test cases using the system requirements. The following needs to be considered for model-based testing:

 Build the model


 Determine Inputs for the model
 Calculate the expected output for the model
 Run the tests
 Compare the actual output with the expected output
 A decision on further action on the model

Some of the modelling techniques from which test cases can be derived:

 Charts - Depicts the state of a system and checks the state after some input.
 Decision Tables - Tables used to determine results for each input applied

Model-based testing is an evolving technique for generating test cases from the requirements. Its main advantage, compared to the above two methods, is that it can determine undesirable states that your GUI can attain (a small decision-table sketch follows below).
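A rough sketch of the decision-table idea in Python (the login rules below are invented purely for illustration): each row of the table pairs an input combination with its expected outcome, and the test simply walks the table.

# Decision table: (valid_username, valid_password) -> expected outcome.
# The rules are hypothetical; a real table would come from the requirements.
decision_table = [
    ((True,  True),  "login succeeds"),
    ((True,  False), "error: wrong password"),
    ((False, True),  "error: unknown user"),
    ((False, False), "error: unknown user"),
]

def system_under_test(valid_username, valid_password):
    # Stand-in for the real GUI/application behaviour.
    if not valid_username:
        return "error: unknown user"
    if not valid_password:
        return "error: wrong password"
    return "login succeeds"

# Walk the model: run every input combination and compare with the expectation.
for (inputs, expected) in decision_table:
    actual = system_under_test(*inputs)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{inputs} -> {actual!r} (expected {expected!r}) : {status}")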

Following are open-source tools available to conduct automated UI tests (product – licensed under):

AutoHotkey – GPL
Selenium – Apache
Sikuli – MIT
Robot Framework – Apache
Watir – BSD
Dojo Toolkit – BSD

836. Example GUI Testing Test Cases?

GUI Testing basically involves

1. Testing the size, position, width, height of the elements.


2. Testing of the error messages that are getting displayed.
3. Testing the different sections of the screen.
4. Testing of the font whether it is readable or not.
5. Testing of the screen in different resolutions with the help of zooming in and zooming out
like 640 x 480, 600x800, etc.
6. Testing the alignment of the texts and other elements like icons, buttons, etc. are in proper
place or not.
7. Testing the colours of the fonts.

8. Testing the colours of the error messages, warning messages.


9. Testing whether the image has good clarity or not.
10. Testing the alignment of the images.
11. Testing of the spelling.
12. The user must not get frustrated while using the system interface.
13. Testing whether the interface is attractive or not.
14. Testing of the scrollbars according to the size of the page if any.
15. Testing of the disabled fields if any.
16. Testing of the size of the images.
17. Testing of the headings whether it is properly aligned or not.
18. Testing of the colour of the hyperlink.

837. How to do GUI Test?

Here we will use some sample test cases for an example screen that contains fields such as Source Folder, Package, Name, Modifiers, and Superclass.

Following below is the example of the Test cases, which consists of UI and Usability test
scenarios.

TC 01- Verify that the text box with the label "Source Folder" is aligned properly.

TC 02 - Verify that the text box with the label "Package" is aligned properly.

TC 03 – Verify that label with the name "Browse" is a button which is located at the end of Textbox
with the name "Source Folder."

TC 04 – Verify that label with the name "Browse" is a button which is located at the end of Textbox
with the name "Package."

TC 05 – Verify that the text box with the label "Name" is aligned properly.

TC 06 – Verify that the label "Modifiers" consists of 4 radio buttons with the name public, default,
private, protected.

TC 07 – Verify that the label "Modifiers" consists of 4 radio buttons which are aligned properly in a
row.

TC 08 – Verify that the label "Superclass" under the label "Modifiers" consists of a dropdown
which must be properly aligned.

TC 09 – Verify that the label "Superclass" consists of a button with the label "Browse" on it which
must be properly aligned.

TC 10 – Verify that clicking on any radio button the default mouse pointer must be changed to the
hand mouse pointer.

TC 11 – Verify that user must not be able to type in the dropdown of "Superclass."

TC 12 – Verify that there must be a proper error generated if something has been mistakenly
chosen.

TC 13 - Verify that the error must be generated in the RED colour wherever it is necessary.

TC 14 – Verify that proper labels must be used in the error messages.

TC 15 – Verify that a single radio button is selected by default every time.

TC 16 – Verify that the TAB key works properly for jumping from one field to the next.

TC 17 – Verify that all the pages must contain the proper title.

TC 18 – Verify that the page text must be properly aligned.

TC 19 – Verify that after updating any field a proper confirmation message must be displayed.

TC 20 - Verify that only one radio button can be selected at a time, while more than one checkbox may be selected.

Challenges in GUI Testing

In Software Engineering, the most common problem while doing Regression Testing is that the
application GUI changes frequently. It is very difficult to test and identify whether it is an issue or
enhancement. The problem manifests when you don't have any documents regarding GUI changes.

GUI Testing Tools

 Ranorex
 Selenium
 QTP
 Cucumber
 SilkTest
 TestComplete
 Squish GUI Tester
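As a small illustration (the URL and element locator below are invented), an open-source tool like Selenium can automate simple GUI checks such as a control being visible, enabled, and correctly labelled:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                     # assumes a Chrome driver is available
driver.get("https://example.com/login")         # hypothetical page under test

# GUI checks: the element is displayed, enabled, and carries the expected label.
button = driver.find_element(By.ID, "submit")   # hypothetical locator
assert button.is_displayed(), "Submit button is not visible"
assert button.is_enabled(), "Submit button is disabled"
assert button.text.strip() == "Login", "Unexpected button label"

driver.quit()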
 Why does software have bugs?
Mis-communication or no communication as to the specifics of what an application should or shouldn't do (the application's requirements)
Software complexity
Programming errors
Changing requirements
Time pressure

838. When do we stop the testing ?


 When the time span is less, then we test the important features & we stop it
 Budget
 When the functionality of the application is stable
 When the basic feature itself is not working correctly
In the last 2 days before release, the code will be frozen. Once the code is frozen, any defects found will be fixed in later releases.

839. Why do we do testing ?


To ensure the quality
To verify all the requirements are implemented correctly

840. How will you receive the project requirements?

A. The finalized SRS will be placed in a project repository; we will access it from there

841. What will you do with SRS?

A. SRS stands for software requirement specification. SRS is used to understand the project
functionality from business and functional point of view.

842. What is FRS? How it different from SRS?


A. The SRS describes what the client is expecting from the system. For example, in the case of Gmail, the SRS contains details like: the first page should be the login page, and to access the mailbox the user should be authenticated.
The FRS describes how the above requirements will be developed. In the FRS, the functionality in the SRS is written down in more technical terms. For example, in the case of Gmail, the FRS contains details like which fields should be present for login and what the valid inputs are. This means the FRS has screen-level details of the application.

Note: In many projects SRS itself will be designed at screen level details of the application.

843. Is the testing team involved in SRS preparation?


A. The business analyst prepares the SRS document by interacting with the client. However, a senior testing team member can also be involved in requirements collection along with the development team and the business analyst team.

844. How does your requirements document look like?


A. It contains lots of use cases where each use case explains one or more functionalities

845. How will you understand the requirements?

A. If it is a known domain, I can understand the requirements by going through the use cases. If I have some queries, I will discuss them with the business analyst (BA) for clarification. If it is a new domain, first I will get domain training and then go through the use cases. If the project requirements are very confusing, the BA can also walk through each use case.

846. How do u understand functionality without screens?


A. We get wireframes in the use cases, which help a lot in understanding the functionality.

847. What is wireframe?


A. A diagram which simulates the feel of the actual screen.

848. What is usecase?


A. A use case explains the step-by-step procedure of how a particular functionality of the software is used by the end user. A use case contains sections such as:
. use case id
. use case name
. description
. flow of events
. alternative flow of events
. pre and post conditions

849. Were you involved in writing the use cases?


A. I am aware of what use cases look like and I can write them if required, but I have never got an opportunity to write use cases because these are prepared by the requirements gathering team. Anyhow, I have reviewed the use cases of certain functionalities and given my inputs for their betterment.

850. What are the different sections present in SRS?


A. overview
Scope
Features
User characteristics
Software requirements
Hardware requirements
Performance requirements
Use cases
Security and reliability requirements

851. How long do u spend on understanding SRS?


A. It depends on the familiarity of the domain and the complexity of the project. If it is a familiar domain, we can understand around 25 pages of documentation every day. For a new, complex domain, we manage around 15 pages per day.

852. After understanding the SRS what do you do?


A. My lead asks for a presentation of the functionalities I am assigned. If I am in a position to explain the functionalities clearly to the team, then I am considered comfortable with the functionalities.

853. Should you understand the whole project functionality or only the functionality assigned to
you?
A. I should have the big picture of the whole project. In other words, I should have an overview of the whole project and a detailed screen- and field-level understanding of the assigned functionalities.

854. What are the different models generally followed in documenting requirements?
A. Two models are followed in documenting requirements: the use case model and the paragraph model. In the paragraph model, business requirements are written as paragraphs, which is the older model. Nowadays almost all companies follow the use case model, where the requirements are written by stating their clear objectives and explained with the help of screenshots.

855. How big is your SRS?


A. You can answer anything like approximately 250 pages. This question is asked just to cross-check whether you have actually seen an SRS or not.

856. What will be the problem without SRS?


A. Without the SRS we will not be able to understand the project features correctly. Hence we will not be able to test the project in depth and deliver the best quality product.

857. What is BRS?


A. BRS is the Business Requirement Specification, which is usually prepared before the SRS. This document gives a high-level view of what is being required by the customer to meet the business needs.

858. What is technical requirements specification?


A. This is also called the high-level design, which describes the different modules present in the project.

859. What is user story?


A. A user story is the method of documenting requirements in the Agile model.

860. What is Review?


A. A review is a meeting in which a work product is verified by a set of members (stakeholders).

861. Explain the review process you follow in your organization?


A. The various phases of the review process followed in my organization are:
Planning:
>selecting the personnel for review

>allocating roles
>defining entry and exit criteria.
Kick-off:
>Distributing documents
>explaining the objectives
>checking entry criteria, etc.
Individual preparation:
> In this phase, each of the participants works on the item before the review meeting and comes ready with questions and comments.
Review Meeting:
> Discussion among the review members by going through each line of the work product.
> Logging comments
> Making decision about the defect.
Rework:
> Fixing defects found during the review, typically done by the author.
Follow-up:
> checking the defects that have been addressed.
>gathering metrics and checking the exit criteria.

862. What are the roles present in the review?


A. Manager:
>decides on execution of reviews.
> allocates time in project schedules.
> determines if the review objectives have met.
Moderator:
>leads the review, including planning and running the meeting
>follows-up after the meeting.
Author:
The author is the person who has created the item to be reviewed. the author may also be
asked questions within the review.
Reviewer:
The reviewers are the attendees of the review who attempt to find errors in the item under review. They should come from different perspectives in order to provide a well-balanced review of the item.
Scribe:
The scribe or recorder is the person who is responsible for documenting issues raised during the review meeting.

863. What is peer review?


A. It is a review of a software work product by colleagues (peers) of the author.

864. What is the difference between static and dynamic testing?


A. Static testing means testing the project without executing the software, and dynamic testing means testing the project by executing the software, i.e. by running the application and going through the screens. To conduct dynamic testing you must use the application screens, enter valid and invalid inputs and verify the application behaviour. For static testing, we do not use any screens of the application; instead we use static techniques like reviews. During a review, experts go through each line of the work products, like the requirement document and the design document, and identify mistakes in these documents. Any mistakes identified during this review are nothing but defects in the work product.

865. I want you to choose one among static and dynamic testing for your project. Which one will
you choose and why?
A. Static testing reduces the cost of a fix and dynamic testing gives the complete confidence to release the product. In my opinion both are equally important and both contribute equally to the project's success, so I prefer to have both. However, if I have to choose one, I choose dynamic testing, since I cannot let the project be released until I see with my own eyes that it is working.

866. Out of formal and informal review, which one do you prefer?
A. In my view, both are important: an informal review is fast and a formal review is effective. We have to use both, depending on the item we are reviewing. I prefer the formal and informal review techniques as follows.
Formal Review:
> Reviewing test case document created
> Reviewing test plan document created
> reviewing test scripts developed
Informal Review:
> reviewing tests used for retesting
> Reviewing minor changes in test case, test plan or test scripts.

867. How do you decide the review outcome?


A. The review outcome is decided by the moderator; I can share my views with him. For example, in a test case review (where observations could be that most of the critical test cases are missed, or that documentation standards are poor), the outcome decision is as follows:
Major changes are suggested …....... Accept after correction, with another round of review
Minor changes are suggested …....... Accept after correction, without another round of review
No changes suggested ………………… Accept as it is

868. Explain what do you document during the review process?


A. We document the page and line number of the defect, the origin of the defect, and the severity of the defect. We also document other information like the work product ID, reviewers, etc.

869. How much information you can review in one day?


A. Per hour, we review around 20 pages if it is documentation and around 200 lines if it is code.

870. How do you say review was successful?


A. If every reviewer prepares well before the review and provides good comments for improvement of the work product, we can say that the review was successful.

871. What is code review?


A. Code review is the process of reviewing the code written. Code reviews are conducted for the code developed by the developer and also for the automation scripts developed by the automation engineer.

872. What is desk check?


A. This is an informal review where a colleague comes to the desk/computer of the author, quickly goes through the work product along with the author, and shares comments while going through it.

873. What are the entry criteria for release?


A. > system testing results must show that all requirements are completed and project is stable.
> Alpha and beta testing must be completed.
> All medium and above severity bugs must be fixed.
>The release package is available.
> The release CD label is ready.

874. What is the release process you follow?


A. In our organization, the release process is coordinated by a person called the release manager. After successful beta testing, the release manager sends an email to all stakeholders (development manager, test manager, documentation manager) for their approval for the final release. The test manager further forwards the same mail to team members requesting their internal approval. Based on the internal approvals, the test manager sends approval to the release manager.

875. What is your involvement in the release process?


A. As a testing team member, I go through the defect tracking tool and check whether all the defects are fixed. In case any defects are not fixed, I communicate the same to my test lead and test manager, sharing my opinion on each bug as to whether it must be fixed before release or can be fixed after release. The test manager takes the final decision on whether to fix it or not after discussion with the development manager.
I am further involved in preparing release notes, where I document known issues in my module along with the issues resolved from the previous release.
Exit criteria for release are:
> All stake holders have approved for release.
> The new package is deployed in production and users are happy about the release.
> The code has been base lined in the configuration management.

876. What is a code Freeze?


A. Code freeze means the code has been locked against further modifications by developers. After the code freeze, the code should not be changed by any developer; if at all any changes are required, they should be only for very critical bugs, after taking permission from the top management of the project. Code freezes are often employed in the final stages of development.

877. What are the entry and exit criteria for test execution?

Entry criteria:
--------------------------------
> coding should be completed
> test cases should be ready and base lined
> RTM should be updated
> test data should be ready and baselined
> test environment/set up should be ready.
>s/w tools should be ready and approved.

Exit criteria:
-----------------------------
> All test cases must be executed and passed
> All defects identified must be fixed, retested and closed
> Test execution summary report must be prepared

878. Explain different test execution strategies?


A. There are 3 test execution strategies.
They are pass 1, pass 2 and pass 3.
Pass 1 test execution strategy:
------------------------------
In this execution model there is a single execution cycle. In this one execution cycle itself, testers log defects and retest those defects. This is useful for stable, small projects.
Pass 2:
----------------------------
The development team releases a new build claiming all the defects are fixed. The testing team retests all defects along with ad hoc regression. If new defects are found, the development team releases a new build and the cycle is repeated until no new defects are found.
Pass 3:
---------------------------
The testing team runs the full regression suite, and this phase completes only when the full regression is completed. This is a good model for large, complex and critical projects. In case a large number of defects is found under the pass-2 strategy, one may have to move to the pass-3 strategy.

879. How do you know you have a build ready for testing?
A. The frequency of build creation varies from project to project; however, below is a guideline to answer this question. In our project, automatic build creation and deployment happens every X days. We receive a confirmation mail every morning about the successful deployment, along with the URL for testing. (Please refer to the build process FAQs for exactly how build deployment and release work.)

880. How many test cases can you execute per day?

A. It depends on the size and complexity of the test cases. Approximately, I execute around 50 test cases per day, which comes to about 40 pages.

881. How do you run the test cases?


A. I will perform each step in the test case on the application and compare the application behaviour with the expected result of the step. If it is the same as the expected result, the step is passed; else the step is failed.
Note: if the password field shows * or some other special character while entering the password, it is called password masking, NOT password encryption. Encryption means converting the user-entered characters into different characters before sending them over the network. This can be checked with the help of network sniffers. Example software for this is Wireshark. Such software captures every data packet travelling over the network, including the IP addresses of the source and destination computers. By analysing these packets we can identify whether the password string is encrypted or not.

882. How do you check broken links?


A. Many tools are available for this; we use tools like Xenu Link Sleuth.
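A very small sketch of the idea in Python (the page URL is hypothetical): collect the links on a page and report any that do not return a successful HTTP status.

import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    # Collects href values from <a> tags.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = "https://example.com/"           # hypothetical page under test
html = requests.get(page, timeout=10).text

collector = LinkCollector()
collector.feed(html)

for href in collector.links:
    url = urljoin(page, href)            # resolve relative links
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        print("BROKEN:", url, status)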

883. What is test log?


A. It is a report of which tests have been executed and their status, like pass/fail. It is also known as the test execution report.

884. Did you observe any application logs during the test execution?
A. Yes. We observe the logs of the application server to check whether the server has thrown any runtime errors.

885. Do you run all regression tests for every bug fixed?
A. No, I don't run regression test cases for every bug fixed. I run regression tests once for every build.

886. Do you run all regression tests every time ?


A. It depends. If we are sure that the fix does not affect other modules, we run regression tests specific to the module of the fixed bugs; else we run them for the entire project.

887. In the modules you have worked on, are there any issues identified after release?
A. projects if i am supposed to write stubs or drivers in the current project i am confident that i can
handle it.

888. When you fill the data in the application form, how do you ensure that the data is stored in
the correct tables and columns?
A. We can write an SQL query to retrieve the data from the database and compare the query result with the data we have filled in the application forms.
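A minimal sketch of this back-end check in Python using sqlite3 (the table name, column names, and submitted values are assumptions for illustration; in a real project you would connect to the application's actual database):

import sqlite3

# Data that was entered into the application form (hypothetical values).
submitted = {"first_name": "Asha", "email": "asha@example.com"}

# Connect to the application's database (sqlite used here only for illustration).
conn = sqlite3.connect("app.db")
cur = conn.cursor()

# Fetch the row that the application should have inserted.
cur.execute(
    "SELECT first_name, email FROM customers WHERE email = ?",
    (submitted["email"],),
)
row = cur.fetchone()
conn.close()

# Compare each stored column with what was typed into the form.
assert row is not None, "Record was not stored at all"
stored = dict(zip(["first_name", "email"], row))
for field, expected in submitted.items():
    assert stored[field] == expected, f"{field}: stored {stored[field]!r}, expected {expected!r}"
print("Form data matches the database record.")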

889. What is test case?


A. A test case is a set of inputs, execution conditions and expected outcomes with which a tester can determine whether an application is working correctly or not.

890. What fields a test case will have?


A. The following are the fields that a test case will usually have.......
test case id, description, precondition, step name, expected results, actual results and status.

891. Where do you write test case?


A. Depending on the project, we write test cases in Excel or in QC (Quality Center).

892. How do you know for which functionalities you should write test case?
A. My lead writes top-level requirements in QC and assigns them to each team member. We divide test
requirements further into sub-requirements, then identify test conditions for each sub-requirement
and create test cases. The test cases are reviewed after that; reviewed and approved test cases go
to the Ready state.

893. What is test scenario?


A. A test scenario is a functional scenario for which testing is to be conducted. It is also called
a test condition.

894. What is the difference between test scenario and test case?
A. A test scenario is a high-level description of a business requirement, which is later decomposed
into a set of test cases. These test cases are reviewed and approved by peers; we follow a formal
review process for approving the test cases written for each functionality.

895. How do you know your test cases are completed?


A. We follow a two-step approach to ensure that the test cases are complete:
1. Reviews -- they ensure that the quality of the test cases is good.
2. Requirement traceability matrix -- it ensures that all requirements have been covered through
test cases.
896. How do you find whether a test case is a good test case or bad test case?
A. A good test case is one which finds a bug or has a high probability of finding a bug. A good
test case should be documented clearly, so that it can be executed by anyone without any difficulty
or confusion.

897. What is the percentage of positive and negative test cases that you write?
A. Approx 30% positive and 70% negative

898. Do you update the test cases after receiving build based on the application screen?
A. During execution, if we feel any test case requires an update, we do it with the approval of
the team lead, but such changes are very limited.

899. Explain one scenario where you were not able to write test cases for a given requirement?
A. For a requirement in a new domain, we could not write test cases straight away; it required extra
effort for a thorough understanding of the domain first.

900. What is the difference between a positive and negative test case?
A. A positive test case checks whether the system does what it is supposed to do, i.e. that we get
the desired result with a valid set of inputs.
Ex: the user should be able to log in to the system with a valid user name and password.
Negative test case: A negative test case checks whether the system refrains from doing what it is
not supposed to do, i.e. that the system generates the correct error or warning messages with an
invalid set of inputs.
Ex: if the user enters a wrong user name or password, the user should not be logged in to the
system and an appropriate error message should be shown.

901. What are the documents required for test analysis?


A. 1. SRS/FRS
2. Use case
3. Architecture document

902. What is an entry criterion for test closure?


A. Decision to stop testing

903. Who takes this decision?


A. The Test Manager

904. What parameters does the test manager consider when taking the decision to stop testing?
A. The important parameters a test manager looks into are:
> Whether all requirements have been developed or not.
> Whether all requirements have been covered through testing.
> Whether all defects have been handled, with a Fixed or Deferred status.
905. What are the exit criteria for test closure?


A. > Checking whether planned deliverables have been delivered.
> Finalizing and archiving testware.
> Handover of testware for maintenance.
> Analysing lessons learned for improvement of test maturity.
> Testing sign-off.

906. What is test ware?


A. Testware is the set of artifacts produced during the testing process. Testware includes test
cases, the test plan, automation scripts, test data, test environment set-up and clean-up
procedures, and any additional software or utilities used in testing.

907. What is lessons learnt document?


A. A lessons learnt document records what went well, what went wrong and what should be improved in
future test cycles. It is usually compiled from the regular status reports, which cover:
> No. of test cases/scenarios blocked
> No. of defects verified and their respective status
> Weekly status reporting:
> Test case summary
> Issues found
> Issues resolved
> Critical issues which are still open and which require immediate attention from the client
side
> The report should also contain a high-level plan for the next week.

908. What statuses can be given to a test case?


A. The statuses are Pass, Fail, Blocked and No Run.

909. What is web server log?


A. Every time a web page is requested, the web server automatically logs the following
information:
> The IP address of the visitor
> The date and time of the request
> The URL of the requested file
> The URL the visitor came from immediately before (the referrer)
> The visitor's web browser type and OS

910. What is SQL?


Structured Query Language (SQL) is a language used to create and access databases in order to
support software applications.

911. What are tables in SQL?


A table is a collection of records and their information, presented in a single view.
912. What are different types of statements supported by SQL?

There are 3 types of SQL statements


1) DDL (Data Definition Language): It is used to define the database structure such as tables. It
includes three statements such as Create, Alter, and Drop.
Some of the DDL Commands are listed below
CREATE: It is used for creating the table.
CREATE TABLE table_name
(
column_name1 data_type(size),
column_name2 data_type(size),
column_name3 data_type(size)
);

ALTER: The ALTER TABLE statement is used for modifying an existing table object in the database.
ALTER TABLE table_name
ADD column_name datatype;
OR
ALTER TABLE table_name
DROP COLUMN column_name;
2) DML (Data Manipulation Language): These statements are used to manipulate the data in
records. Commonly used DML statements are INSERT, UPDATE and DELETE.
The SELECT statement is considered a partial DML statement; it is used to select all or relevant
records from the table.

3) DCL (Data Control Language): These statements are used to set privileges, such as GRANT
and REVOKE, which give or remove database access permissions for specific users.
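For illustration, a minimal sketch of typical DML and DCL statements, assuming a hypothetical
Employee table and a user named test_user:

-- DML
INSERT INTO Employee (Emp_id, Emp_name) VALUES (101, 'John');
UPDATE Employee SET Emp_name = 'John Smith' WHERE Emp_id = 101;
DELETE FROM Employee WHERE Emp_id = 101;
SELECT * FROM Employee;

-- DCL
GRANT SELECT, INSERT ON Employee TO test_user;
REVOKE INSERT ON Employee FROM test_user;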

913. How do we use DISTINCT statement? What is its use?


The DISTINCT statement is used with the SELECT statement. If the records contain duplicate
values, DISTINCT is used to select only the distinct values among them.
Syntax: SELECT DISTINCT column_name(s)


FROM table_name;

914. What are different Clauses used in SQL?

WHERE Clause: This clause is used to define the condition, extract and display only those records
which fulfil the given condition
Syntax: SELECT column_name(s)
FROM table_name
WHERE condition;
GROUP BY Clause: It is used with SELECT statement to group the result of the executed query
using the value specified in it. It matches the value with the column name in tables and groups the
end result accordingly.
Syntax: SELECT column_name(s)
FROM table_name
GROUP BY column_name;
HAVING clause: This clause is used in association with the GROUP BY clause. It is applied to each
group of the result, or to the entire result as a single group, and works much like the WHERE
clause; the only difference is that it cannot be used without a GROUP BY clause.
Syntax: SELECT column_name(s)
FROM table_name
GROUP BY column_name
HAVING condition;
ORDER BY clause: This clause defines the order of the query output, either ascending (ASC)
or descending (DESC). Ascending (ASC) is the default; descending (DESC) must be set
explicitly.
Syntax: SELECT column_name(s)
FROM table_name
WHERE condition
ORDER BY column_name ASC|DESC;
USING clause: The USING clause is used when working with SQL joins. It checks equality based on the
named columns when tables are joined, and can be used instead of the ON clause in joins.
Syntax: SELECT column_name(s)
FROM table_name
JOIN table_name
USING (column_name);

915. Why do we use SQL constraints? Which constraints we can use while creating database in
SQL?
Constraints are used to set rules for all records in the table. If any constraint is violated, the
action that caused the violation is aborted.

Constraints can be defined while creating the table with the CREATE TABLE statement, or after the
table is created, with the ALTER TABLE statement.

There are 5 major constraints used in SQL, as illustrated in the sketch after the list:

 NOT NULL: Indicates that the column must have some value and cannot be left null
 UNIQUE: Ensures that the value in a column is unique, i.e. not repeated in any other row of
that column
 PRIMARY KEY: Used in association with the NOT NULL and UNIQUE constraints, on one column or a
combination of columns, to identify each record with a unique identity
 FOREIGN KEY: Used to ensure the referential integrity of data in the table; it matches the
value in one table with the Primary Key of another
 CHECK: Used to ensure that the value in a column fulfils the specified condition
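A minimal sketch showing all five constraints in a single CREATE TABLE statement (the Employee and
Department tables and their columns are only illustrative; Department would have to exist already):

CREATE TABLE Employee
(
Emp_id INT NOT NULL PRIMARY KEY,                        -- NOT NULL + PRIMARY KEY
Email VARCHAR(100) UNIQUE,                              -- UNIQUE
Age INT CHECK (Age >= 18),                              -- CHECK
Dept_id INT,
FOREIGN KEY (Dept_id) REFERENCES Department (Dept_id)   -- FOREIGN KEY
);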

916. What are different JOINS used in SQL?

There are 4 major types of joins used when working with multiple tables in SQL databases:

 INNER JOIN: Also known as a SIMPLE JOIN, it returns all rows from BOTH tables where there is
at least one matching column value.
Syntax:
SELECT column_name(s)
FROM table_name1
INNER JOIN table_name2
ON column_name1=column_name2;
Example
In this example, we have a table Employee with the following data

The second Table is joining

Enter the following SQL statement

SELECT Employee.Emp_id, Joining.Joining_Date
FROM Employee
INNER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;

There will be 4 records selected: the rows where there is a matching Emp_id value in both the
Employee and Joining tables.

LEFT JOIN (LEFT OUTER JOIN): This join returns all rows from the LEFT table and the matched
rows from the RIGHT table.
Syntax: SELECT column_name(s)


FROM table_name1
LEFT JOIN table_name2
ON column_name1=column_name2;
Example
In this example, we have a table Employee with the following data:

Second Table is joining

Enter the following SQL statement

SELECT Employee.Emp_id, Joining.Joining_Date
FROM Employee
LEFT OUTER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;

There will be 4 records selected. These are the results that you should see:
RIGHT JOIN (RIGHT OUTER JOIN): This join returns all rows from the RIGHT table and the
matched rows from the LEFT table.
Syntax: SELECT column_name(s)
FROM table_name1
RIGHT JOIN table_name2
ON column_name1=column_name2;
Example
In this example, we have a table Employee with the following data

The second Table is joining

Enter the following SQL statement

SELECT Employee.Emp_id, Joining.Joining_Date
FROM Employee
RIGHT OUTER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;

There will be 4 records selected. These are the results that you should see
FULL JOIN (FULL OUTER JOIN): This join returns all rows when there is a match in either the RIGHT
table or the LEFT table.
Syntax: SELECT column_name(s)
FROM table_name1
FULL OUTER JOIN table_name2
ON column_name1=column_name2;
Example
In this example, we have a table Employee with the following data:

Second Table is joining

Enter the following SQL statement:

SELECT Employee.Emp_id, Joining.Joining_Date
FROM Employee
FULL OUTER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;
There will be 8 records selected. These are the results that you should see

917. What are transaction and its controls?


A transaction can be defined as a sequence of tasks performed on a database in a logical manner to
obtain a certain result. Operations such as creating, updating and deleting records in the database
are performed within transactions.

In simple words, a transaction is a group of SQL queries executed on database records.

There are 4 transaction controls (illustrated in the sketch after the list):

 COMMIT: It is used to save all changes made through the transaction
 ROLLBACK: It is used to roll back the transaction; all changes made by the transaction are
reverted and the database remains as it was before
 SET TRANSACTION: Sets the name of the transaction
 SAVEPOINT: It is used to set a point to which the transaction can later be rolled back
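A minimal sketch of these controls in use (the Accounts table and values are hypothetical; SET
TRANSACTION NAME as shown is Oracle-style syntax, and the exact syntax varies between vendors):

SET TRANSACTION NAME 'fund_transfer';
UPDATE Accounts SET balance = balance - 500 WHERE acc_no = 1001;
SAVEPOINT after_debit;
UPDATE Accounts SET balance = balance + 500 WHERE acc_no = 1002;
-- If the second update fails, undo only the work done after the savepoint:
ROLLBACK TO after_debit;
-- Otherwise make all changes permanent:
COMMIT;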

918. What are properties of the transaction?


The properties of a transaction are known as the ACID properties:
 Atomicity: Ensures the completeness of every transaction performed. It checks whether each
transaction completed successfully; if not, the transaction is aborted at the failure point and
rolled back to its initial state, with all changes undone
 Consistency: Ensures that all changes made through a successful transaction are reflected
properly in the database
 Isolation: Ensures that all transactions are performed independently and that changes made
by one transaction are not visible to others
 Durability: Ensures that the changes made to the database by committed transactions persist
even after a system failure

919. How many Aggregate Functions are available there in SQL?


SQL aggregate functions calculate a value over multiple rows of a table and return a single
value.

There are 7 aggregate functions used in SQL (see the example after the list):

 AVG(): Returns the average value from specified columns


 COUNT(): Returns number of table rows
 MAX(): Returns largest value among the records
 MIN(): Returns smallest value among the records
 SUM(): Returns the sum of specified column values
 FIRST(): Returns the first value
 LAST(): Returns Last value
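For example (a sketch assuming a hypothetical Employee table with a Salary column):

SELECT COUNT(*) AS total_employees,
AVG(Salary) AS average_salary,
MIN(Salary) AS lowest_salary,
MAX(Salary) AS highest_salary,
SUM(Salary) AS salary_bill
FROM Employee;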

920. What are Scalar Functions in SQL?


Scalar functions are used to return a single value based on the input values. The common scalar
functions are listed below, followed by a short example:

 UCASE(): Converts the specified field to upper case
 LCASE(): Converts the specified field to lower case
 MID(): Extracts and returns characters from a text field
 FORMAT(): Specifies the display format
 LEN(): Returns the length of a text field
 ROUND(): Rounds a decimal field value to the specified number of digits
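For example (a sketch assuming a hypothetical Employee table; the function names above follow MS
Access / older SQL dialects, other databases use UPPER, LOWER, SUBSTRING and LENGTH instead):

SELECT UCASE(Emp_name) AS name_upper,
LCASE(Emp_name) AS name_lower,
MID(Emp_name, 1, 3) AS name_prefix,
LEN(Emp_name) AS name_length,
ROUND(Salary, 0) AS rounded_salary
FROM Employee;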

921. What are triggers?


Triggers in SQL are a kind of stored procedure used to create a response to a specific action
performed on a table, such as INSERT, UPDATE or DELETE. Triggers can also be invoked explicitly on a
table in the database.

Action and event are the two main components of SQL triggers; when a certain action is performed,
the event occurs in response to that action.

Syntax: CREATE TRIGGER name {BEFORE|AFTER} {event [OR ...]}
ON table_name [FOR [EACH] {ROW|STATEMENT}]
EXECUTE PROCEDURE function_name (arguments);
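A minimal sketch of a trigger using the PostgreSQL-style syntax above (the Employee table and the
audit_employee_change() function are hypothetical and would have to exist already):

CREATE TRIGGER trg_employee_audit
AFTER INSERT OR UPDATE ON Employee
FOR EACH ROW
EXECUTE PROCEDURE audit_employee_change();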
922. What is View in SQL?


A View can be defined as a virtual table that contains rows and columns with fields from one or
more tables.

Syntax: CREATE VIEW view_name AS


SELECT column_name(s)
FROM table_name
WHERE condition

923. How we can update the view?


The SQL CREATE OR REPLACE VIEW statement can be used to update a view.

The following query syntax is executed to update the created view:

Syntax: CREATE OR REPLACE VIEW view_name AS


SELECT column_name(s)
FROM table_name
WHERE condition

924. Explain the working of SQL Privileges?


The SQL GRANT and REVOKE commands are used to implement privileges in SQL multi-user
environments. The database administrator can grant or revoke privileges on database objects, such as
SELECT, INSERT, UPDATE, DELETE, ALL etc., to or from users of the database.

GRANT Command: This command is used to provide database access to a user other than the
administrator.
Syntax: GRANT privilege_name
ON object_name
TO {user_name|PUBLIC|role_name}
[WITH GRANT OPTION];
In the above syntax, WITH GRANT OPTION indicates that the user can grant the access to another user
too.

REVOKE Command: This command is used to deny or remove a user's access to database
objects.
Syntax: REVOKE privilege_name
ON object_name
FROM {user_name|PUBLIC|role_name};

925. How many types of Privileges are available in SQL?


There are two types of privileges used in SQL:
 System Privilege: System privileges deal with objects of a particular type and specify the
right to perform one or more actions on them. They include ADMIN (allowing a user to perform
administrative tasks), ALTER ANY INDEX, ALTER ANY CACHE GROUP, CREATE/ALTER/DELETE TABLE,
CREATE/ALTER/DELETE VIEW, etc.
 Object Privilege: This allows performing actions on an object of another user, e.g. a table,
view or index. Some of the object privileges are EXECUTE, INSERT, UPDATE, DELETE, SELECT,
FLUSH, LOAD, INDEX, REFERENCES, etc.

926. What is SQL Injection?


SQL Injection is a type of database attack technique in which malicious SQL statements are inserted
into an entry field of an application so that, once executed, the database is exposed to the
attacker. This technique is usually used to attack data-driven applications in order to gain access
to sensitive data and perform administrative tasks on the database.

For example, consider a login check whose SQL is built by concatenating user input directly into the
query string.
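A minimal sketch of how such an attack works (the Users table and the input values are purely
illustrative):

-- Query template built in application code:
-- SELECT * FROM Users WHERE username = '<input>' AND password = '<input>';
-- If the attacker enters  ' OR '1'='1  as the password, the executed query becomes:
SELECT * FROM Users
WHERE username = 'admin' AND password = '' OR '1'='1';
-- The condition '1'='1' is always true, so the login check is bypassed.
-- Parameterised queries / prepared statements prevent this kind of attack.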

927. What is SQL Sandbox in SQL Server?


SQL Sandbox is a safe place in the SQL Server environment where untrusted scripts are executed.
There are 3 types of SQL sandboxes:

 Safe Access Sandbox: Here a user can perform SQL operations such as creating stored
procedures, triggers etc. but cannot have access to the memory and cannot create files.
 External Access Sandbox: User can have access to files without having a right to
manipulate the memory allocation.
 Unsafe Access Sandbox: This contains untrusted codes where a user can have access to
memory.

928. What is the difference between SQL and PL/SQL?


SQL is a structured query language to create and access databases whereas PL/SQL comes with
procedural concepts of programming languages.

929. What is the difference between SQL and MySQL?


SQL is a structured query language that is used for manipulating and accessing the relational
database, on the other hand, MySQL itself is a relational database that uses SQL as the standard
database language.

930. What is the use of NVL function?


NVL function is used to convert the null value to its actual value.

931. What is the Cartesian product of table?


The output of a CROSS JOIN is called a Cartesian product. It returns rows combining each row from
the first table with each row of the second table. For example, if we cross join two tables having
15 and 20 rows respectively, the Cartesian product of the two tables will be 15 x 20 = 300 rows.
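For example, a CROSS JOIN producing a Cartesian product (Employee and Department are hypothetical
tables):

SELECT e.Emp_name, d.Dept_name
FROM Employee e
CROSS JOIN Department d;
-- With 15 rows in Employee and 20 rows in Department, this returns 15 x 20 = 300 rows.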
932. What do you mean by Subquery?


A query within another query is called a subquery. A subquery is also called an inner query; it
returns output that is used by the outer query.

933. How many row comparison operators are used while working with a subquery?
There are 3 row comparison operators used in subqueries: IN, ANY and ALL.

934. What is the difference between clustered and non-clustered indexes?


 One table can have only one clustered index but multiple non-clustered indexes.
 Clustered indexes can be read more rapidly than non-clustered indexes.
 Clustered indexes store the data physically in the table or view, whereas non-clustered
indexes do not store data in the table, as they have a structure separate from the data rows.

935. What is the difference between DELETE and TRUNCATE?


 The basic difference is that DELETE is a DML command whereas TRUNCATE is a DDL command
 DELETE is used to delete specific rows from the table whereas TRUNCATE is used to
remove all rows from the table
 We can use DELETE with a WHERE clause but cannot use TRUNCATE with it

936. What is the difference between DROP and TRUNCATE?


TRUNCATE removes all rows from the table, and they cannot be retrieved back; DROP removes the
entire table from the database, and it cannot be retrieved back either.

937. How to write a query to show the details of a student from Students table whose name
starts with K?
SELECT * FROM Student WHERE Student_Name like ‘K%’;
Here ‘like’ operator is used for pattern matching.

938. What is the difference between Nested Subquery and Correlated Subquery?
A subquery within another subquery is called a nested subquery. If the output of a subquery
depends on column values of the parent query's table, then the query is called a correlated
subquery.

Example of a correlated subquery:

SELECT adminid,
(SELECT Firstname + ' ' + Lastname FROM Employee WHERE empid = emp.adminid) AS EmpAdminName
FROM Employee emp;

This query gets the details of each employee's admin from the Employee table.
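For contrast, a minimal sketch of a nested subquery (one subquery inside another), using
hypothetical Employee, Department and Region tables:

SELECT Emp_name
FROM Employee
WHERE Dept_id IN (SELECT Dept_id
FROM Department
WHERE Region_id = (SELECT Region_id
FROM Region
WHERE Region_name = 'Europe'));
-- The inner queries run on their own and do not reference columns of the
-- outer query, unlike the correlated subquery shown above.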

939. What is Normalization? How many Normalization forms are there?


Normalization is used to organize the data in such a manner that data redundancy does not occur in
the database, avoiding insert, update and delete anomalies.

There are 5 forms of Normalization


 First Normal Form (1NF): Removes duplicate columns from the table, creates a table for related
data and identifies unique column values
 Second Normal Form (2NF): Follows 1NF, creates and places data subsets in individual tables and
defines the relationship between tables using the primary key
 Third Normal Form (3NF): Follows 2NF and removes columns which are not dependent on the
primary key
 Boyce-Codd Normal Form (BCNF): A stricter version of 3NF, sometimes referred to as 3.5NF
 Fourth Normal Form (4NF): Follows BCNF and removes multi-valued dependencies

940. What is Relationship? How many types of Relationship are there?


A relationship can be defined as the connection between two or more tables in the database.

There are 4 types of relationships


 One to One Relationship
 Many to One Relationship
 Many to Many Relationship
 One to Many Relationship

941. What do you mean by Stored Procedures? How do we use it?


A stored procedure is a collection of SQL statements which can be used as a function to access the
database. We create these stored procedures beforehand, execute them wherever required, and can
also apply conditional logic to them. Stored procedures are also used to reduce network traffic and
improve performance.

Syntax:
CREATE Procedure Procedure_Name
(
//Parameters
)
AS
BEGIN
SQL statements in stored procedures to update/retrieve records
END

942. State some properties of Relational databases?


 In relational databases, each column should have a unique name
 The sequence of rows and columns in a relational database is insignificant
 All values are atomic and each row is unique

943. What are Nested Triggers?


Triggers may implement data modification logic by using INSERT, UPDATE and DELETE statements.
Triggers that contain data modification logic and, in turn, fire other triggers for data
modification are called nested triggers.
944. What is a Cursor?


A cursor is a database object which is used to manipulate data in a row-to-row manner.

A cursor follows the steps given below (a code sketch follows the list):

 Declare Cursor
 Open Cursor
 Retrieve row from the Cursor
 Process the row
 Close Cursor
 Deallocate Cursor
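A minimal T-SQL-style sketch of these steps (the Employee table and Emp_name column are
hypothetical; exact cursor syntax differs between database vendors):

DECLARE @name VARCHAR(100);
DECLARE emp_cursor CURSOR FOR          -- declare cursor
SELECT Emp_name FROM Employee;
OPEN emp_cursor;                       -- open cursor
FETCH NEXT FROM emp_cursor INTO @name; -- retrieve first row
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @name;                           -- process the row
FETCH NEXT FROM emp_cursor INTO @name; -- retrieve next row
END;
CLOSE emp_cursor;                      -- close cursor
DEALLOCATE emp_cursor;                 -- deallocate cursor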

945. What is Collation?


Collation is a set of rules that determine how data is sorted and compared. For example, character
data is sorted using the correct character sequence, along with case sensitivity, character type
and accent.

946. What do we need to check in Database Testing?


Generally, in database testing the following things need to be tested:

 Database Connectivity
 Constraint Check
 Required Application Field and its size
 Data Retrieval and Processing With DML operations
 Stored Procedures
 Functional flow

947. What is Database White Box Testing?


Database White Box Testing involves
 Database Consistency and ACID properties
 Database triggers and logical views
 Decision Coverage, Condition Coverage, and Statement Coverage
 Database Tables, Data Model, and Database Schema
 Referential integrity rules

948. What is Database Black Box Testing?


Database Black Box Testing involves
 Data Mapping
 Data stored and retrieved
 Use of Black Box techniques such as Equivalence Partitioning and Boundary Value Analysis
(BVA)

949. What are Indexes in SQL?


The index can be defined as the way to retrieve the data more quickly. We can define indexes
using CREATE statements.
Syntax:
CREATE INDEX index_name
ON table_name (column_name)
Further, we can also create Unique Index using following syntax;

Syntax:
CREATE UNIQUE INDEX index_name
ON table_name (column_name)
******************

UPDATE: We have added a few more short questions for practice.

950. What does SQL stand for?


Ans. SQL stands for Structured Query Language.

951. Define join and name different types of joins?


Ans. The JOIN keyword is used to fetch data from two or more related tables. It returns rows where
there is at least one match in both of the joined tables.
Type of joins are:
1. Right Join
2. Outer Join
3. Full Join
4. Cross Join
5. Self Join.

952. What is the syntax to add a record to a table?


Ans. To add a record in a table INSERT syntax is used.
Ex: INSERT into table_name VALUES (value1, value2..);

953. How do you add a column to a table?


Ans. To add another column in the table following command has been used.
ALTER TABLE table_name ADD (column_name);

954. Define SQL Delete statement.


Ans. Delete is used to delete a row or rows from a table based on the specified condition. The
basic syntax is as follows:
DELETE FROM table_name

WHERE <Condition>
955. Define COMMIT?


Ans. COMMIT saves all changes made by DML statements.

956. What is the Primary key?


Ans. A Primary key is a column whose values uniquely identify every row in a table. Primary
key values can never be reused

957. What are Foreign keys?


Ans. When one table's primary key field is added to a related table in order to create a common
field which relates the two tables, it is called a foreign key in the other table. Foreign key
constraints enforce referential integrity.

958. What is CHECK Constraint?


Ans. A CHECK constraint is used to limit the values or type of data that can be stored in a
column. They are used to enforce domain integrity.

959. Is it possible for a table to have more than one foreign key?
Ans. Yes, a table can have many foreign keys and only one primary key.

960. What are the possible values for the BOOLEAN data field?
Ans. For a BOOLEAN data field, two values are possible: -1(true) and 0(false).

961. What is a stored procedure?


Ans. A stored procedure is a set of SQL queries which can take input and send back output.

962. What is identity in SQL?


Ans. An identity column in SQL automatically generates numeric values. We can define the
start and increment values of the identity column.
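For example (SQL Server-style syntax; the Orders table is hypothetical):

CREATE TABLE Orders
(
Order_id INT IDENTITY(1, 1) PRIMARY KEY, -- starts at 1, increments by 1
Item VARCHAR(50)
);
INSERT INTO Orders (Item) VALUES ('Keyboard'); -- Order_id is generated automatically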

963. What is Normalization?


Ans. The process of table design to minimize data redundancy is called normalization. We
need to divide a database into two or more tables and define relationships between them.

964. What is a Trigger?


Ans. The Trigger allows us to execute a batch of SQL code when a table event occurs (Insert,
update or delete command executed against a specific table)

965. How to select random rows from a table?


Ans. Using a SAMPLE clause we can select random rows.
Example:
SELECT * FROM table_name SAMPLE(10);
966. Which TCP/IP port does SQL Server run?


Ans. By default SQL Server runs on port 1433.

967. Write a SQL SELECT query that only returns each name only once from a table?
Ans. To get each name only once, we need to use the DISTINCT keyword.
SELECT DISTINCT name FROM table_name;

968. Explain DML and DDL?


Ans. DML stands for Data Manipulation Language. INSERT, UPDATE and DELETE are DML
statements.
DDL stands for Data Definition Language. CREATE, ALTER, DROP, RENAME are DDL statements.

969. Can we rename a column in the output of SQL query?


Ans. Yes using the following syntax we can do this.
SELECT column_name AS new_name FROM table_name;

970. Give the order of SQL SELECT?


Ans. The order of SQL SELECT clauses is: SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY.
Only the SELECT and FROM clauses are mandatory.

971. Suppose a Student table has two columns, Name and Marks. How do you get the names and
marks of the top three students?
Ans. SELECT Name, Marks FROM Student s1 WHERE 3 >= (SELECT COUNT(*) FROM Student s2
WHERE s2.Marks >= s1.Marks) ORDER BY s1.Marks DESC;

972. What is SQL comments?


Ans. SQL comments are introduced by two consecutive hyphens (--).

973. Difference between TRUNCATE, DELETE and DROP commands?


Ans. DELETE removes some or all rows from a table based on the condition. It can be rolled
back.
TRUNCATE removes ALL rows from a table by de-allocating the memory pages. The operation
cannot be rolled back

DROP command removes a table from the database completely.

974. What are the properties of a transaction?


Ans. Generally, these properties are referred to as ACID properties. They are:
1. Atomicity
2. Consistency
3. Isolation
4. Durability.

975. What do you mean by ROWID?


Ans. It’s an 18 character long pseudo column attached with each row of a table.

976. Define UNION, MINUS, UNION ALL, INTERSECT ?


Ans. MINUS – returns all distinct rows selected by the first query but not by the second.
UNION – returns all distinct rows selected by either query

UNION ALL – returns all rows selected by either query, including all duplicates.

INTERSECT – returns all distinct rows selected by both queries.

977. What is a transaction?


Ans. A transaction is a sequence of code that runs against a database. It takes the database
from one consistent state to another.

978. What is the difference between UNIQUE and PRIMARY KEY constraints?
Ans. A table can have only one PRIMARY KEY whereas there can be any number of UNIQUE
keys.
The primary key cannot contain Null values whereas Unique key can contain Null values.
979. What is a composite primary key?


Ans. Primary key created on more than one column is called composite primary key.

980. What is an Index?


Ans. An index is a special structure associated with a table that speeds up the performance of
queries. An index can be created on one or more columns of a table.

981. What is the Subquery?


Ans. A Subquery is a subset of select statements whose return values are used in filtering
conditions of the main query.

982. What do you mean by query optimization?


Ans. Query optimization is a process in which the database system compares different query
strategies and selects the one with the least cost.

983. What is Collation?


Ans. A set of rules that define how data is stored and compared, e.g. how case sensitivity and Kana
characters are treated.

984. What is Referential Integrity?


Ans. Set of rules that restrict the values of one or more columns of the tables based on the
values of the primary key or unique key of the referenced table.

985. What is Case Function?


Ans. Case facilitates if-then-else type of logic in SQL. It evaluates a list of conditions and
returns one of the multiple possible result expressions.
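For example (a sketch assuming a hypothetical Student table with Name and Marks columns):

SELECT Name,
CASE
WHEN Marks >= 60 THEN 'First class'
WHEN Marks >= 50 THEN 'Second class'
ELSE 'Fail'
END AS Result
FROM Student;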

986. Define a temp table?


Ans. A temp table is a temporary storage structure to store the data temporarily.

987. How can we avoid duplicating records in a query?


Ans. By using DISTINCT keyword duplicating records in a query can be avoided.

988. Explain the difference between Rename and Alias?


Ans. Rename is a permanent name given to a table or column whereas Alias is a temporary
name given to a table or column.

989. What is a View?


Ans. A view is a virtual table which contains data from one or more tables. Views restrict data
access of the table by selecting only required values and make complex queries easy.
990. What are the advantages of Views?


Ans. Advantages of Views:
1. Views restrict access to the data because the view can display selective columns from the
table.
2. Views can be used to make simple queries that retrieve the results of complicated queries.
For example, views can be used to query information from multiple tables without the user
needing to know how the underlying tables are joined.

991. List the various privileges that a user can grant to another user?
Ans. SELECT, CONNECT, RESOURCES.

992. What is schema?


Ans. A schema is a collection of database objects of a User.

993. What is a Table?


Ans. A table is the basic unit of data storage in the database management system. Table data
is stored in rows and columns.

994. Does View contain Data?


Ans. No, views are virtual structures; the data resides in the underlying tables.

995. Can a View based on another View?


Ans. Yes, a view can be based on another view.

996. What is the difference between Having clause and Where clause?
Ans. Both specify a search condition, but the HAVING clause is used only with the SELECT statement
and typically with the GROUP BY clause.
If the GROUP BY clause is not used, then HAVING behaves like the WHERE clause.

997. What is the difference between Local and Global temporary table?
Ans. A local temporary table, if defined inside a compound statement, exists only for the
duration of that statement, whereas a global temporary table exists permanently in the DB but its
rows disappear when the connection is closed.

998. What is CTE?


Ans. A CTE or common table expression is an expression which contains temporary result set
which is defined in a SQL statement.
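For example (a minimal sketch using a hypothetical Employee table):

WITH HighEarners AS
(
SELECT Emp_id, Emp_name, Salary
FROM Employee
WHERE Salary > 50000
)
SELECT Emp_name
FROM HighEarners
ORDER BY Salary DESC;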

999. How to select all records from the table?


Ans. To select all the records from the table we need to use the following syntax:
Select * from table_name;
1000. What is the Sanity Test (or) Build test?


Ans. Verifying the critical (important) functionality of the software on a new build to decide whether
to carry further testing or not is termed as Sanity Test.

1001. What is the difference between client-server testing and web-based testing?
Ans: Projects are broadly divided into two types:

2-tier applications

3-tier applications

CLIENT / SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN).

Here we will have a front-end and a back-end.

The application launched on the front-end will have forms and reports which monitor and
manipulate data.

WEB TESTING

This is done for 3-tier applications (developed for the Internet / intranet / extranet).

Here we will have a browser, a web server and a DB server.

The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript,
etc. (We can monitor these applications through the browser.)

1002. What is Black Box testing?


Ans: Software Testing method that analyses the functionality of a software/application without
knowing much about the internal structure/design of the item that is being tested and compares
the input value with the output value.

1003. What are the different Types of Software Testing?


Ans: Functional testing types include:

 Unit testing
 Integration testing
 System testing
 Sanity testing
 Smoke testing
 Interface testing
 Regression testing
 Beta/Acceptance testing

Non-functional testing types include:

 Performance Testing
 Load testing
 Stress testing
 Volume testing
 Security testing
 Compatibility testing
 Install testing
 Recovery testing
 Reliability testing
 Usability testing
 Compliance testing
 Localization testing

Additional:

 Browser Compatibility Testing


 Ad-hoc Testing
 Exploratory Testing
 Monkey Testing

1004. What is the difference between the STLC (Software Testing Life Cycle) and SDLC (Software
Development Life Cycle)?
Ans: SDLC deals with the development/coding of the software, while STLC deals with the validation
and verification of the software.

1005. What is Equivalence partitioning testing?


Ans: Equivalence partitioning is a software testing technique which divides the application input
data into partitions of equivalent data from which test cases can be derived, covering each
partition at least once. This method reduces the time required for software testing.
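For example, if a field accepts values from 1 to 100, three partitions can be formed: values below
1 (invalid), values from 1 to 100 (valid) and values above 100 (invalid). Picking one representative
value from each partition, say 0, 50 and 101, is enough to cover all three classes.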

1006. In white box testing, what do you verify?


Ans: In white box testing following steps are verified.
 Verify the security holes in the code


 Verify the incomplete or broken paths in the code
 Verify the flow of structure according to the document specification
 Verify the expected outputs
 Verify all conditional loops in the code to check the complete functionality of the
application
 Verify the code line by line and aim for complete (100%) coverage

1007. What is Latent defect?


Ans: Latent defect: This defect is an existing defect in the system which does not cause any failure
as the exact set of conditions has never been met

1008. Explain what Test Deliverables is?


Test Deliverables are a set of documents, tools and other components that have to be developed
and maintained in support of testing.

There are different test deliverables at every phase of the software development lifecycle

 Before Testing
 During Testing
 After the Testing

1009. How will you conduct Risk Analysis?


Ans: For risk analysis, the following steps need to be implemented:

 Finding the score of the risk
 Making a profile for the risk
 Changing the risk properties
 Deploying the resources to that risk
 Making a database of risks
