Manual Testing Interview Questions_V1_0
1. The testing process verifies that the software works as per the expectations of the
customers.
2. It reduces the coding cycles by identifying issues at the initial stage of the development.
3. Discovery of issues in the earlier SDLC phases ensures proper utilization of resources and
prevents any cost escalations.
4. The testing team brings customer view into the process and finds use cases that a developer
may overlook.
5. Any failure, defect, or bug observed by the customer damages a firm's credibility; thorough
testing helps ensure this does not happen.
1. Test case execution: The successful completion of a full test cycle after the final bug fix
marks the end of the testing phase.
2. Testing deadline: The end date of the validation stage also declares the closure of the
validation if no critical or high priority defects remain in the system.
3. MTBF rate: The mean time between failures (MTBF) reflects the reliability of the
components. If it is on the higher side, then the product owner (PO) and engineering manager (EM) can decide to stop testing.
4. CC ratio: It is the amount of code covered via automated tests. If the team achieves the
desired level of code coverage (CC) ratio, then they can choose to end the validation.
1. Inspections
2. Reviews
3. Walk-throughs
4. Demos
1. Functional testing
2. Non-functional testing
9. What Is Static Testing, When Does It Start, And What Does It Cover?
It is a white box testing technique which directs the developers to verify their code with the
help of a checklist to find errors in it.
Developers can start it without actually finalizing the application or program.
Static testing is more cost-effective than Dynamic testing.
It covers more areas than Dynamic testing in a shorter time.
10. What Is Dynamic Testing, When Does It Start, And What Does It Cover?
Dynamic testing involves the execution of an actual application with valid inputs and checking
of the expected output.
Examples of Dynamic testing are Unit Testing, Integration Testing, System Testing and
Acceptance Testing.
Dynamic testing happens after code deployment.
It starts during the validation stage.
Test environment
Testing schedule
Associated risks
A test plan captures all possible testing activities to guarantee a quality product. It gathers data
from the product description, requirement, and use case documents.
19. What Is The Difference Between Master Test Plan And Test Plan?
The difference between the Master Plan and Test Plan can be described using the following points.
1. Master Test Plan contains all the test scenarios and risks prone areas of the application.
Whereas, the Test Plan document contains test cases corresponding to test scenarios.
2. Master Test Plan captures every test to be run during the overall development of application
whereas test plan describes the scope, approach, resources, and schedule of performing the
execution.
3. MTP includes test scenarios for all the phases of the application development lifecycle. Whereas,
a separate test plan also exists for the Unit, Functional, and System testing which includes cases
specific to related areas.
4. A Master Test Plan suffices for big projects which require execution in all phases of testing.
However, preparing a basic Test Plan is enough for small projects.
A good test case helps to identify problems in the requirements or design of an application.
21. What Is The Difference Between High Level And Low-Level Test Case?
High-level test cases cover the core functionality of a product like standard business flows.
Low-level test cases are those related to user interface (UI) in the application.
A test scenario can have one or many associations with a test case which means it can include
multiple test cases.
For example, if a tester is validating a graphics tool, then he would need to procure relevant data
for graph generation.
It is applicable for both functional and non-functional testing activities. This metric helps testers to
add missing test cases.
31. Is It Possible To Achieve 100% Coverage Of Testing? How Would You Ensure It?
No, it’s not possible to perform 100% testing of any product. But you can follow the below steps to
come closer.
32. Beside Test Case & Test Plan, What Documents A Tester Should Produce?
Here are a few other documents to prepare.
Testing metrics
Test design specs
End-to-end scenarios
Test summary reports
Bug reports
It is mostly the developers who test individual units or modules to check if they are working
correctly.
36. What Is The Difference Between A Test Driver And Test Stub?
The test driver is a piece of code that calls a software component under test. It is useful in testing
that follows the bottom-up approach.
Test stub is a dummy program that integrates with an application to complete its functionality.
These are relevant for testing that uses the top-down approach.
Let’s take an example.
1. Let's say there is a scenario to test the interface between modules A and B, and only module A
has been developed. We can still test module A if we have either the real module B or a dummy
module in its place. In this case, we call the dummy module B a Test Stub.
2. Now, module B can't send or receive data to module A on its own. In such a scenario, we have to
move data from one module to the other using some external code, called a Test Driver.
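The stub-and-driver idea above can be sketched in a few lines of Python. The module names and behaviors here are hypothetical, chosen only to illustrate the roles, not taken from any real project:

```python
# Sketch of a test driver and a test stub (hypothetical module names).
# Module A (under test) depends on module B, which is not ready yet.

def module_b_stub(data):
    """Test stub: a dummy stand-in for the unfinished module B.
    It returns a canned response instead of doing real processing."""
    return {"status": "ok", "echo": data}

def module_a(payload, send_to_b):
    """The module under test; it forwards its payload to module B."""
    response = send_to_b(payload)
    return response["status"] == "ok"

def test_driver():
    """Test driver: the piece of code that calls the component under
    test and feeds it data, standing in for a missing caller."""
    assert module_a("hello", module_b_stub) is True
    print("module A passed with the module B stub")

test_driver()
```

In a bottom-up approach the driver plays the role of the missing upper module; in a top-down approach the stub plays the role of the missing lower module.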
It involves the use of dummy modules such as Stubs and Drivers. They make up for missing
components to simulate data exchange.
Finally, it tests the integrated modules to ensure the system is working as expected.
It first tests the lowest level modules and then goes for high-level modules. Finally, it verifies the
integrated modules to ensure the system is working as expected.
Alternatively, we call it End-to-End testing. It verifies the whole system to ensure that the
application behaves as expected.
It requires formulating a test charter, a short declaration of the scope, a set of objectives and
possible approaches to be used.
The test design and test execution activities may run in parallel without formally documenting
the test conditions, test cases or test scripts.
Testers can use boundary value analysis to concentrate the testing effort on error-prone areas
by accurately pinpointing the boundaries.
Notes should be recorded for the Exploratory Testing sessions as it would help to create a
final report of its execution.
Events could be like a shortage of disk space, unexpected loss of communication, or power out
conditions.
Example:
The probability that a server-class application hosted on the cloud stays up and running for six
months without crashing is 99.99%. We refer to this type of testing as reliability testing.
Data Flow Testing emphasizes designing test cases that cover control-flow paths around
variable definitions and their uses in the modules. It expects them to have the following attributes:
API testing is also a white box testing method. It makes use of the code and a programming tool to
call the API. It ignores the UI layer of the application and validates the path between the client and
the API. The client software forwards a call to the API to get the specified return value. API testing
examines whether the system is responding with the correct status or not.
Testers carry out API testing to confirm the system works end to end. They don't have
permission to access the source code but can use the API calls. This testing covers
authorization, usability, exploratory, automated, and documentation validation.
It skips the in-house testing activities and executes at the client end.
1. We do the beta test when the product is about to release to the customer whereas pilot testing
takes place in the earlier phase of the development cycle.
2. In the beta test, the application is given to a few real users to make sure that it meets
the customer requirements and does not contain any showstopper bug. Whereas, in the pilot test,
a few members of the testing team work at the customer site to set up the product. They also give
their feedback to improve the quality of the end product.
60. What Does The High Availability Testing Mean For A Software Tester?
High availability shows the ability of a system or a component to operate continuously without
failure even at high loads for a long time.
Hence, the High availability testing confirms that a system or its sub-systems have gone through
thorough checks and, in many cases, simulates failures to validate whether components support
redundancy or not.
In Baseline testing, the tests capture and preserve all the results produced by the source code, and
compare against a reference baseline. This reference baseline refers to the last accepted test
results. If there are new changes in the source code, then it requires re-execution of tests to form
the current baseline. If the new results get accepted, then the current baseline becomes the
reference.
It is a test case design technique in which testers have to guess the defects that might occur and
write test cases to represent them.
Error Seeding.
It is the process of adding known bugs to a program for tracking the rate of their detection and
removal. It also helps to estimate the number of faults remaining in the program.
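A common way to turn seeding into an estimate is to assume testers catch real bugs at the same rate they catch seeded ones. This is a rough sketch of that reasoning; the numbers are hypothetical:

```python
def estimate_remaining_faults(seeded, seeded_found, real_found):
    """Estimate faults left in the program via error seeding.
    If testers caught seeded_found of the seeded bugs, assume they
    caught genuine bugs at the same detection rate."""
    detection_rate = seeded_found / seeded
    estimated_total_real = real_found / detection_rate
    return round(estimated_total_real - real_found)

# Hypothetical numbers: 20 bugs seeded, 16 of them found,
# along with 40 genuine defects found in the same cycle.
print(estimate_remaining_faults(20, 16, 40))  # -> 10
```

With a detection rate of 16/20 = 80%, the 40 genuine defects found suggest about 50 in total, so roughly 10 remain.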
Run more tests to make sure that the problem has a clear description.
Run a few more tests to ensure that the same problem doesn’t exist with different inputs.
Once we are sure of the full scope of the bug, then we can add details and report it.
71. How Do You Test A Product If The Requirements Are Yet To Freeze?
If the requirement spec is not available for a product, then a test plan can be created based on the
assumptions made about the product. But we should get all assumptions well documented in the
test plan.
72. How Will You Tell If Enough Test Cases Have Been Created To Test A Product?
First of all, we’ll check if every requirement has at least one test case covered. If yes, then we can
say that there are enough test cases to test the product.
73. If A Product Is In Production And One Of Its Modules Gets Updated, Then Is It Necessary To
Retest?
It is advisable to perform regression testing and run tests for all of the other modules as well.
Finally, the QA should carry out System testing.
Traceability matrix is a testing tool which testers can use to track down the gaps.
It ensures that the Test Plan covers all the requirements and links to their latest
version.
Everyone involved in the project has a part in minimizing the risk, but it's the leader who
ensures that the whole team understands their individual roles in managing the risk.
Following are some of the risks that are of concern to the QA.
1. New Hardware
2. New Technology
3. New Automation Tool
1. High magnitude: Impact of the bug on the other functionality of the application
2. Medium: It is somewhat bearable but not desirable.
3. Low: It is tolerable. This type of risk has no impact on the company business.
Cohesion is the degree to which the elements of a module belong together; a highly cohesive
component combines related functionality into a single unit. Coupling, in contrast, is the
degree of interdependence between different modules.
Cohesion deals with the functionality that relates to different processes within a single module
whereas coupling deals with how much one module is dependent on the other modules within
the product.
It is a good practice to increase the cohesion within a software module, whereas tight coupling
between modules is discouraged.
It also identifies the key practices that increase the maturity of these processes.
1- Unit Testing
2- Smoke testing
3- Sanity testing
4- Integration Testing
5- Interface Testing
6- System Testing
7- Regression Testing
8- UAT
After that, we verify the functionality by exercising all the said features, their options and ensure
that they behave as expected.
This tool makes use of VBScript to automate the functional workflows. It allows us
to integrate manual, automated, as well as framework-based test scripts in the same IDE.
87. What Kind Of Document Will You Need To Begin Functional Testing?
It is none other than the Functional specification document. It defines the full functionality of a
product.
Other documents are also useful in testing like user manual and BRS.
Gap analysis is another document which can help in understanding the expected and existing
system.
It validates the readiness of a system against the non-functional specification, covering aspects
that functional testing never addresses.
Authentication
Business rules
Historical Data
Legal and Regulatory Requirements
External Interfaces
Performance
Reliability
Security
Recovery
Data Integrity
Usability
On the other hand, functional requirements are those that describe what the system should do,
i.e., its business functionality.
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Environment Setup
5. Test Execution
6. Test Cycle Closure
96. What Are The Tasks Performed In The Requirement Analysis Phase Of STLC?
Test team performs the following tasks during the RA phase.
In this stage, the test lead gets the effort and cost estimation done for the whole project. This
phase starts soon after the RA phase is over. The outcomes of the TP phase include the Test Planning
and Effort Estimation documents. After the planning ends, the testing team can begin the test
case development tasks.
98. What Are The Tasks Performed In The Test Planning (TP) Phase Of STLC?
Test team performs the following tasks during the TP phase.
One of the documents which the team has to produce is the Requirement Traceability Matrix (RTM). It
is an industry-wide standard for ensuring that test cases get correctly mapped to the requirements.
It helps in tracing in both the backward and forward directions.
100. What Are The Tasks Performed In The Test Case Development Phase Of STLC?
Test team performs the following tasks during the TCD phase.
It is an independent task and can go in parallel with test case writing stage. The team or any
member outside the group can also help in setting up the testing environment. In some
organizations, a developer or the customer can also create or provide the testing beds.
Simultaneously, the test team starts writing the smoke tests to ensure the readiness of the test
infra.
102. What Are The Tasks Performed In The Test Environment Setup Phase Of STLC?
Test team performs the following tasks during the TES phase.
If a test case executes successfully, the team should mark it as Passed. If some tests have failed,
then the defects should get logged, and the report should go to the developer for further analysis.
As per the books, every error or failure should have a corresponding defect. It helps in tracing back
to the test case or the bug later. As soon as the development team fixes it, the same test case should
be executed again and the result reported back.
104. What Are The Tasks Performed In The Test Execution Phase Of STLC?
Test team performs the following tasks during the TE phase.
They discuss what went well, where improvement is needed, and note the pain points faced in
the current STLC. Such information is beneficial for future STLC cycles. Each member puts forward
his/her views on the test case and bug reports, and the team finalizes the defect distribution by
type and severity.
106. What Are The Tasks Performed In The Test Cycle Closure Phase Of STLC?
Test team performs the following tasks during the TCC phase.
1. Define closure criteria by reviewing the test coverage, code quality, the status of the business
objectives, and test metrics.
2. Produce a test closure report.
3. Publish best practices used in the current STLC.
It could also happen in a customer environment that a particular feature doesn't work after product
deployment. They would term it as a product failure.
An error would also fall in the same category, although errors are usually caught and fixed before release.
Defect Life Cycle is the representation of the different states that a defect attains at
different levels during its lifetime. It may vary from company to company, and even gets
customized for some projects, as it is driven by the software testing process in use.
114. How Come The Severity And Priority Relate To Each Other?
Severity – Represents the gravity/depth of the bug.
Priority – Specifies which bug should get fixed first.
Severity – Reflects the application's point of view.
Priority – Reflects the user's point of view.
116. What Is The Difference Between Priority And Severity In Software Testing?
Severity indicates the impact of a defect on the development or operation of a component in
the application under test. It is usually an indication of commercial loss, or a cost to the
environment or business reputation.
On the contrary, the Priority of a defect shows the urgency of a bug yet to get a fix. For example –
a bug in a server application blocked it from getting deployed on the live servers.
KLOC (Thousands of lines of code) is the unit used to measure the defect density.
118. What Does The Defect Detection Percentage Mean In Software Testing?
Defect Detection Percentage (DDP) is a type of testing metric. It indicates the effectiveness of a
testing process by measuring the ratio of defects discovered before the release and reported after
release by customers.
For example – let's say the QA logged 70 defects during the testing cycle and the customer
reported 20 more after the release. The DDP would come to 77.8% after the calculation: 70/(70
+ 20) = 77.8%.
119. What Does The Defect Removal Efficiency Mean In Software Testing?
Defect Removal Efficiency (DRE) is a type of testing metric. It is an indicator of the efficiency of the
development team in fixing issues before the release.
It gets measured as the ratio of defects fixed to the total number of issues discovered.
For example – let's say that there were 75 defects discovered during the test cycle while 62 of
them got fixed by the dev team at the time of measurement. The DRE would come to 82.7% after
the calculation: 62/75 = 82.7%.
120. What Does The Test Case Efficiency Mean In Software Testing?
Test Case Efficiency (TCE) is a type of testing metric. It is a clear indicator of the efficiency of the
test cases getting executed in the test execution stage of the release. It helps in ensuring and
measuring the quality of the test cases.
Test Case Efficiency (TCE) => (No. of defects discovered / No. of test cases executed)* 100
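The three metrics above reduce to simple ratios, sketched below in Python. The DDP and DRE figures reuse the worked examples (note that 70/(70 + 20) comes to about 77.8%); the TCE numbers are hypothetical:

```python
def ddp(found_in_testing, found_after_release):
    """Defect Detection Percentage: share of all defects caught before release."""
    return 100 * found_in_testing / (found_in_testing + found_after_release)

def dre(fixed, discovered):
    """Defect Removal Efficiency: share of discovered defects already fixed."""
    return 100 * fixed / discovered

def tce(defects_found, tests_executed):
    """Test Case Efficiency: defects discovered per executed test case, in %."""
    return 100 * defects_found / tests_executed

print(round(ddp(70, 20), 1))   # -> 77.8  (70 in-house, 20 from customers)
print(round(dre(62, 75), 1))   # -> 82.7  (62 fixed out of 75 discovered)
print(round(tce(30, 200), 1))  # -> 15.0  (hypothetical: 30 bugs, 200 tests)
```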
1. The day of birth for a defect is the day it got assigned to and accepted by the dev.
2. The issues which got dropped are out of scope.
3. The age can be measured in either hours or days.
4. The end time is the day the defect got verified and closed, not just fixed by the dev.
125. What Are The Different Ways To Apply The Pareto Principle In Software Testing?
Following is the list of ways to exercise Pareto Principle in software testing:
1. Arrange the defects based on their causes, not their consequences. Don't club bugs that yield the
same outcome; prefer to group issues depending on the module in which they occur.
2. Collaborate with the dev team to discover new ways to categorize the problems. E.g., the
components which accounted for the most bugs may use the same static library.
3. Put more energy into locating the problem areas in the source code instead of doing a random
search.
4. Re-order the test cases and pick the critical ones first to begin.
5. Pay attention to the end-user response and assess the risk areas around.
128. What Is Silk Test And Why Should You Use It?
Here are some facts about the Silk tool.
1. It's a tool developed for performing regression and functionality testing of an application.
2. It is useful when testing Windows-based, Java, web, and traditional client/server
applications.
3. Silk Test helps in preparing test plans and managing them, and provides direct access to
the database and validation of the fields.
Using the above tools we can create test scripts to verify the application automatically. After
completing the test execution, these tools also generate the test reports.
130. What Are The Factors That You’ll Consider To Choose Automated Testing Over Manual
Testing?
The choice of automated testing over manual testing depends on the following factors.
It's not possible to validate all data points from the domain, because any attempt to do so
would result in a huge number of test cases. Hence, you should apply this technique in such situations. It
classifies the data into different classes, where each class represents the input criteria of an
equivalence class.
Example.
Assume an application which accepts dates from the current calendar year. If we apply the above
technique, then we can break the inputs into classes: for example, one for valid dates and another
for invalid dates. We'll then create test cases from each class.
TC#1. Valid date class would allow cases with dates from the current calendar year.
TC#2. Dates from years other than the current calendar year would belong to the invalid class.
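The two classes above can be sketched as one representative test case each. The date-validation function is a hypothetical system under test, and the year is assumed for illustration:

```python
from datetime import date

CURRENT_YEAR = 2024  # assumed calendar year for illustration

def accepts_date(d: date) -> bool:
    """Hypothetical system under test: accepts only current-year dates."""
    return d.year == CURRENT_YEAR

# One representative value per equivalence class is enough.
valid_class_rep = date(CURRENT_YEAR, 6, 15)        # TC#1: valid class
invalid_class_rep = date(CURRENT_YEAR - 1, 6, 15)  # TC#2: invalid class

assert accepts_date(valid_class_rep) is True
assert accepts_date(invalid_class_rep) is False
print("both equivalence classes covered")
```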
While designing test cases using BV, you should consider both valid and invalid boundary values
for the edges. Usually, we pick one test case from each boundary.
TC#1. The first test case will analyse exact boundary values from the input data. Considering
the example in Q:1, it would be the dates from January and the dates from December.
TC#2. For invalid values, you may consider using the dates from months other than the Jan or Dec.
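Continuing the calendar-year example, boundary value analysis picks the exact edges and their nearest invalid neighbours. The validator and the year are assumptions for illustration:

```python
from datetime import date

YEAR = 2024  # assumed calendar year for illustration

def in_calendar_year(d: date) -> bool:
    """Hypothetical system under test: valid range is Jan 1 .. Dec 31 of YEAR."""
    return date(YEAR, 1, 1) <= d <= date(YEAR, 12, 31)

# Exact boundaries (valid) and the values just outside them (invalid).
boundary_cases = {
    date(YEAR, 1, 1): True,         # lower edge
    date(YEAR, 12, 31): True,       # upper edge
    date(YEAR - 1, 12, 31): False,  # just below the lower edge
    date(YEAR + 1, 1, 1): False,    # just above the upper edge
}

for d, expected in boundary_cases.items():
    assert in_calendar_year(d) is expected
print("all boundary values behaved as expected")
```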
Example.
For example, you can consider a bank account. If you use it regularly, it’ll stay active. If you don’t
make any transaction for over a period of 12 months, then your account will become inactive. You
can though choose to close your account. Once it’s closed, you cannot use it until you reopen. So, a
bank account can have the following states.
1. Open,
2. Inactive,
3. Close,
4. Reopen.
Hence, you can design cases to test all transitions around the states mentioned above. In this
model, you measure coverage in terms of switches, see below.
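The bank-account model above can be written as a small transition table. The event names and allowed transitions are assumptions based on the description, not real banking rules:

```python
# State-transition table for the bank-account example (hypothetical rules).
TRANSITIONS = {
    "open":     {"deposit": "open", "dormancy": "inactive", "close": "closed"},
    "inactive": {"deposit": "open", "close": "closed"},
    "closed":   {"reopen": "open"},
}

def apply_event(state, event):
    """Return the next state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"invalid transition: {event} from {state}")

# Walk a chain of transitions, the kind of path switch coverage counts.
state = "open"
for event in ["dormancy", "deposit", "close", "reopen"]:
    state = apply_event(state, event)
print(state)  # -> open
```

Test cases then cover each allowed transition at least once, plus a few invalid ones (e.g., "reopen" from "open" should be rejected).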
135. What Is The Difference Between A White Box, Black Box, And Gray Box Testing?
Answer.
133.1. Black box testing is a testing approach which depends completely on the product
requirements and specifications. Knowledge of the internal paths, structures, or implementation of
the software under test is not required.
134.1. Wrong: It indicates a mismatch in the requirement and the implementation. It implies a
variance from the given specification.
134.2. Missing: The end product doesn't have a feature matching the requirement. It's a variance
from the specification; either a specified requirement was not implemented, or it was not documented properly.
134.3. Extra: You added a feature which the customer didn't ask for. It's again a variance from
the specification. The users of the product may even like this feature, but it's still a defect because
it's not part of the specs.
135.1. Requirement document: This document specifies what exactly is needed in the project
from the customer’s perspective.
135.2. Customer Input: It could be discussions, informal chats, email addresses, etc.
135.3. Project Plan: The project plan from the project manager (PM) also serves as a good input
to conclude your acceptance test.
136.1. To determine whether the developed project is meeting the requirements of the user.
136.2. To determine all the requirements given by the user.
136.3. To make sure the application requirement can be fulfilled in the verification process.
i. High magnitude: Impact of the bug on the other functionality of the application.
ii. Medium: It can be tolerable in the application but not desirable.
iii. Low: It can be bearable. This type of risk has no impact on the company business.
140. How To Deal With A Bug Which Is Intermittent And Not Reproducible?
Answer.
Again, it's one of the practical software testing interview questions. A bug might not be
reproducible for a variety of reasons. Some of these are as follows.
Boundary testing proposes to focus on the limit or edge conditions of the software under test.
Smart Monkey – Used for load and stress testing; it leads to a higher development cost.
Dumb Monkey – Used for basic testing and helps in locating quality bugs.
Following are the primary differences between the baseline and benchmark testing.
It enables a user as well as the customer to determine whether to accept a software product. It
ensures that the software meets a set of agreed acceptance guidelines.
Accessibility Testing.
It certifies that a product is accessible to people with disabilities such as blindness, deafness, or autism.
153. What’s The Difference Between Application Programming And Application Binary Interface?
Answer.
Application Programming Interface.
It’s a set of software calls and routines for the client applications to access desired services.
Application Binary Interface.
It's a specification which defines the portability requirements of applications in binary form across
different platforms.
Verification.
It's a review without actually executing the process, like a code review or the revision of test cases.
Validation.
It’s a process of verifying the product and its features by doing the actual execution.
155. What’s The Difference Between The Branch And Breadth Testing?
Answer.
Branch testing is a technique which ensures that all the source code branches would go
through validation at least once.
Breadth testing covers the entire product but tests the features with limited cases.
160. What Are The Different Ways Of Doing Black Box Testing?
Answer.
Mostly, we use the following five black box testing techniques.
161. What Are The Different Methods To Test Software (In Abnormal Conditions)?
Answer.
Usually, we can use any of the below techniques.
Stress testing
Security testing
Recovery testing
Beta testing
Examples.
Authentication
Business rules
Historical Data
Legal and Regulatory Requirements
External Interfaces.
A Non-Functional Requirement.
Examples.
Performance
Reliability
Security
Recovery
Data Integrity
Usability
165. What’s The Difference between the Defect Priority and Severity?
Answer.
The Priority and Severity are popular defect management terms. These two convey the
importance of a bug to the team and help decide how soon it should get fixed.
The Priority.
The tester sets the priority status for the developer, mentioning the time frame within which to fix a
defect. If high priority is mentioned, then the developer has to fix it at the earliest.
The priority status is set based on the customer requirements.
The Severity.
The severity status is used to explain how badly the deviation is affecting the build.
The severity type is defined by the tester based on the written test cases and functionality.
You can also call it multi-user testing. It observes the effects of accessing the application, code
module, or database by different users at the same time. It helps in identifying and measuring
problems in response time, and the levels of locking and deadlocking in the application.
Example.
LoadRunner is a tool which supports this type of testing. It has VuGen (Virtual User Generator),
which adds the number of concurrent users. It also defines how the users need to be added, like
Gradual Ramp-up or Spike Stepped.
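Outside of LoadRunner, the core idea of concurrency testing can be sketched with plain Python threads. This toy example hammers a shared counter from several simulated users; the lock is what keeps the result correct:

```python
import threading

counter = 0
lock = threading.Lock()

def user_session(updates):
    """Simulate one concurrent user repeatedly hitting a shared resource."""
    global counter
    for _ in range(updates):
        with lock:  # remove the lock to expose race conditions on the counter
            counter += 1

# Ramp up 10 concurrent "users", 1000 updates each.
threads = [threading.Thread(target=user_session, args=(1000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 10000 with the lock in place
```

A real concurrency test would additionally measure response times and watch for deadlocks, which is what dedicated tools automate.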
167. What’s The Difference Between A High-Level And Low-Level Test Case?
Answer.
High-level test case covers the major functionality of the application e.g. functionality
related test cases, database test cases etc.
Low-level test case tests the basic features like the User Interface (UI) in the application.
168. What’s The Difference Between A Two-Tier Architecture And The Three-Tier Architecture?
Answer.
In a two-tier architecture, there happen to be two layers like Client and Server. The Client
sends a request to the Server and the server responds to the request by fetching the data
from it. The problem with the two-tier architecture is the server can’t respond to multiple
requests at the same time which may cause data integrity issues.
In a three-tier architecture, there can be layers like Client, Server, and Database. Here, the
Client sends a request to Server, where the Server sends the request to the Database for
data. Based on that request the Database sends back the data to Server and from the Server,
the data is forwarded to the Client.
169. What’s The Difference Between Static Testing And Dynamic Testing?
Answer.
Static Testing (Done In Verification Stage).
Static Testing is a White Box testing technique where the developers verify or test their code with
the help of a checklist to find errors in it. This type of testing is done without running the actually
developed application or program. Code Reviews, Inspections, and Walkthroughs are mostly done in
this stage of testing.
Dynamic Testing is done by executing the actual application with valid inputs to check the
expected output. Examples of Dynamic Testing methodologies are Unit Testing, Integration
Testing, System Testing and Acceptance Testing.
Some differences between Static Testing and Dynamic Testing are as follows.
Static Testing is more cost-effective than Dynamic Testing because Static Testing is done in
the initial stage.
In terms of Statement Coverage, Static Testing covers more areas than Dynamic Testing in
a shorter time.
Static Testing is done before the code deployment where Dynamic Testing is done after the
code deployment.
Static Testing is done in the Verification stage where Dynamic Testing is done in the
Validation stage.
170. What’s The Difference Between A Bug Log And Defect Tracking?
Answer.
Bug Log.
It's a document that records the number of defects (open, closed, reopened, or deferred) of a particular
module.
Defect Tracking.
The process of tracking a defect such as symptoms, whether reproducible or not, priority, severity,
and status.
Test Case Failed (Total no. of test cases which are failing for a Bug)
Being a QA engineer, I have experience in preparing test plans and writing test cases. I used to
attend several meetings with project managers, business analysts, and sometimes with clients.
While working with different types of testing, I have explored many things such as Smoke Testing,
Integration Testing, Regression Testing, and Black box or UAT Testing. Writing up defects is also
an important area on which I put extra focus.
2- Quality Control (QC) concerns itself with the quality of the product. QC finds the defects and suggests
improvements. QC implements the process set by QA. It is the responsibility of the tester.
3- Software Testing is the process of ensuring that the product developed by the developers
meets the user requirements. The motive of testing is to find bugs and make sure that
they get fixed.
Question Set-2.
1- Title: A clear and one-liner title to show the intent of the test case.
2- Purpose: A brief explanation of the reason the test case is getting created.
3- Description: A representation in words of the nature and characteristics of the test case.
4- Test objects: The unambiguous feature or module getting tested.
5- Preconditions: The conditions that must be satisfied before the test execution.
It also helps in identifying the risks that may arise during testing and guides you to the solutions
as well.
A good test plan consists of history, contents, introduction, scope, overview, and approach. The
risks and assumptions are not left out either.
Question Set-3.
176. Assume you have a test plan with over 1000 test cases. How would you decide what
should be automated and what should be tested manually?
Answer:
In such a situation, I would focus on test case priority and the feasibility of automating each case.
Good candidates for automation:
1- The complicated scenarios which are tedious and take a lot of time in manual execution.
2- The test cases missed in the past.
3- The parts of the application that need regression testing.
Better kept manual:
4- The test cases which are hard to automate.
5- The features which are still under development. (If certain parts of the app are about to be
changed, I recommend not starting automated testing for these cases.)
6- The test cases that are part of exploratory testing and assessing the user experience.
177. How do you determine which devices and OS versions we should test?
Answer:
A good candidate would point to app analytics as the best measure, looking for the most used
devices for their particular app.
Another answer would be to check the app reviews where people might have complained about
specific issues happening on their devices.
Question Set-4.
178. Define a Test Case and a Use Case. What information would you include in their
descriptions?
Answer:
A test case is a document which gives you a step-by-step, detailed idea of how to test an
application. It usually comprises a description, actions, outputs, results (pass or fail), and
remarks.
A use case, on the other hand, is a document of a different kind. It helps you understand the
actions of the user and the response of the system for a particular functionality. It comprises the
cover page, revision history, contents, exceptions, and pre- and post-conditions.
It is a term given to the combination of software applications and utilities required for testing a
software package.
Introduction,
Resource,
Scope and schedule for test activities,
Required Test tools,
Definition of Test priorities,
Test planning and the types of test to run.
Question Set-5.
181. What are the main elements of a test plan and test cases?
Answer:
1- Testing objectives
2- Testing scope
183. What are the test stub and test driver? Why are they required?
Answer:
A. Stubs are dummy programs that imitate the actual software by giving the same output as
that of the real one.
B. Drivers are dummy programs that call a software component for testing.
We need them for testing the inter-related modules, say X and Y.
If we have developed only module X, then we cannot just test it alone. But if there is any dummy
module, i.e., a stub, then we can use it to test module X.
Next, module Y cannot receive or send data on its own. So in this case, we have to transmit data
from one module to another module by some external features. This external feature is known as
the driver.
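The stub-and-driver idea above can be sketched in a few lines of Python. The module names, the fixed tax value, and the price logic are all invented for illustration; real stubs and drivers stand in for whatever real components are missing.

```python
# Minimal sketch of a stub and a driver (hypothetical modules X and Y).

# Real module X is developed; it depends on module Y (a tax service),
# which is not ready yet.
def module_x_process(order_total, tax_calculator):
    """Module X: computes a final price using a tax service (module Y)."""
    return order_total + tax_calculator(order_total)

# Stub: a dummy stand-in for the unfinished module Y. It returns a
# fixed, predictable value instead of doing a real tax calculation.
def tax_calculator_stub(amount):
    return 10.0  # canned response, regardless of input

# Driver: a dummy caller that invokes module X with test data and
# checks the result, so X can be tested before any real caller exists.
def driver():
    result = module_x_process(100.0, tax_calculator_stub)
    assert result == 110.0, f"unexpected result: {result}"
    return result

print(driver())  # 110.0
```

When the real module Y is delivered, the stub is simply replaced by the real tax calculator and the same driver can be rerun.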
184. What are the roles and responsibilities of a software quality assurance engineer?
Answer:
A software quality assurance engineer has to perform the following tasks:
E. Program testing
F. Integration testing
G. Release process
Question Set-7.
185. To what extent should developers do their own testing, or do you believe testing is the
responsibility of the QA team?
Answer:
The answer to this question depends on the business environment you work in. In today's
development environment, it is also the developer's responsibility to perform at least some testing
of his own code. He is not expected to have the capacity, or to make it his focus, to run through
large test plans or to test on a large stack of devices. However, without the responsibility to review
and test his code, a sense of ownership will not develop.
We believe that results will improve if all parties have access to test cases and can run and access
them regularly to verify if the latest changes brought any regression.
186. What’s your experience using Continuous Integration as part of the development process?
Answer:
If this applies to your company, it is a great thing to hear that a candidate has worked with Jenkins
or Bamboo CI. If he has set up these systems and can give recommendations to you on what
worked and did not work in his previous jobs, the candidate has earned himself not only bonus
points but a merit badge or two.
Question Set-8.
Bug leakage occurs when a bug that the testing team missed while testing the software is later
discovered by the end users or the customer.
190. What is your view on acceptance testing? When is it done and who does it?
Answer:
Acceptance testing is a software testing checkpoint where a system is tested for acceptability. The
purpose of this test is to evaluate the system's compliance with the business requirements and
assess whether it is acceptable for delivery. It is formal testing, conducted with respect to user
needs, requirements, and business processes, to determine whether or not a system satisfies
the acceptance criteria and to enable the user, customers, or another authorized entity to decide
whether or not to accept the system.
Question Set-9.
A. When is it performed?
Acceptance Testing is carried out after System Testing and before making the system available for
actual use.
External Acceptance Testing is performed by the product consumers who are not employees of the
organization that developed the software. They can be some technical people from the client-side
or the actual end users.
191. What is your experience in dealing with your team members, how do you plan it?
Answer:
When you work for an organization, be it medium or large, it is almost certain that you won't be the
only one on the team. And there are times when you find it very difficult and frustrating to deal
with team members. There can be arguments, differences, and misunderstandings,
and some will also try to ignore others. But my purpose is always to look beyond all of this. I
see it this way: we are a team and we should work together to reach a common goal. I've learned
to be friendly with my teammates and sometimes invite them over for coffee. As humans, it is
very important to share feelings and have open discussions, and that is exactly what I intend
to do. This is something that not only I but everyone else in a working environment should apply.
192. What Are The Steps You Follow To Create A Test Script?
Ans# Creating a test script usually requires the below steps.
Step-1# The primary requirement is to get a thorough understanding of the Application Under
Test.
To achieve this, we will read the requirements related documents very thoroughly.
In case the requirements document is not available with us, then we will use other available
references like the previous version of the application or wire-frames or screenshots.
Step-2# After developing an understanding of the requirements, we will prepare an exhaustive list
of the areas to be tested for the AUT. The focus in this step is to identify “What” to test. Thus the
outcome of this step is a list of test scenarios.
Step-3# After we are ready with the test scenarios, our focus shifts to “How” to test them. This
phase involves writing detailed steps on how to test a particular feature, what data to enter
(test data), and what the expected outcome is.
With all this done we are ready for testing.
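The outcome of the steps above can be sketched as a tiny, self-contained test script. The login feature, its credentials, and the pass/fail rules here are invented purely for illustration; a real script would exercise the actual Application Under Test.

```python
# Sketch of a test script derived from the steps above, for a
# hypothetical login feature (names and rules are illustrative).

def login(username, password):
    """Stand-in for the Application Under Test's login behaviour."""
    return username == "qa_user" and password == "secret123"

# Step-2 outcome: test scenarios ("What" to test).
# Step-3 outcome: detailed steps with test data and expected results.
test_cases = [
    # (scenario,          test data,                expected outcome)
    ("valid credentials", ("qa_user", "secret123"), True),
    ("wrong password",    ("qa_user", "oops"),      False),
    ("unknown user",      ("nobody",  "secret123"), False),
]

for scenario, (user, pwd), expected in test_cases:
    actual = login(user, pwd)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{scenario}: {status}")
```

Each entry pairs a scenario with concrete test data and an expected outcome, which is exactly the structure Step-2 and Step-3 produce.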
194. How Will You Overcome The Challenges Due To Unavailability Of Proper Documentation For
Testing?
Ans# If the standard documents like System Requirement Specification or Feature Description
Document are not available then QAs may have to rely on the following references if available.
Screenshots.
A previous version of the application.
Wireframes.
Another reliable method is to have discussions with the developer and the business analyst. It helps in
clearing doubts and opens a channel for bringing clarity to the requirements. The e-mails
exchanged could also be useful as a testing reference.
SMOKE testing is another good option which will help to verify the main functionality of the
application. It also reveals some very basic bugs in the application.
If none of these work, we can test the application based on our previous experience.
195. Is There Any Difference Between Quality Assurance, Quality Control, And
Software Testing. What Is It?
Ans# Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the
quality of process which is followed to produce a quality product. QA tracks the outcomes and
adjusts the process to meet the expectation.
Quality Control (QC) is related to the quality of the product. QC not only finds the defects but also
suggests improvements. Thus the process that is set by QA is implemented by QC. QC is the
responsibility of the testing team.
Software Testing is the process of ensuring that product which is developed by the developer
meets the user requirement. The motive to perform testing is to find the bugs and make sure that
they get fixed. Thus it helps to maintain the quality of the product to be delivered to the customer.
QA also plays an important role in initiating communication between the domain teams. The
testing phase starts after the test plans are written, reviewed, and approved.
197. Explain The Difference Between Smoke Testing And Sanity Testing?
Ans# The main differences between smoke and sanity testing are as follows.
Whenever there is a new build delivered after bug fixing, it has to pass through sanity testing.
However, smoke testing is done to check the major functionalities of the application.
Sanity testing is done either by the tester or the developer. However, smoke testing is not
necessarily done by a tester or developer.
Smoke tests precede sanity test execution.
Smoke testing touches the critical areas of the product to ensure the basics are working fine.
However, sanity tests include a set of high-priority test cases focusing on a particular
functionality.
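The contrast can be made concrete with a small sketch: a broad, shallow smoke suite versus a narrow sanity suite around a recent fix. All the check functions and the bug number below are invented for illustration.

```python
# Illustrative sketch: smoke suite vs sanity suite for a hypothetical
# build (function names and bug number are assumptions).

def app_starts():        return True
def user_can_log_in():   return True
def search_returns():    return True
def fixed_bug_4711():    return True  # the area just changed by a bug fix

# Smoke suite: shallow checks across the major functionalities.
smoke_suite = [app_starts, user_can_log_in, search_returns]

# Sanity suite: narrow checks around the area affected by the latest fix.
sanity_suite = [fixed_bug_4711, user_can_log_in]

def run(suite, name):
    failed = [t.__name__ for t in suite if not t()]
    print(f"{name}: {'PASS' if not failed else 'FAIL ' + str(failed)}")

run(smoke_suite, "smoke")
run(sanity_suite, "sanity")
```

A failing smoke suite would reject the whole build; a failing sanity suite would send the specific fix back to development.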
We perform retesting to verify the defect fixes. But the regression testing assures that the
bug fix didn’t break other parts of the application.
Also, regression test cases verify the functionality of some or all the modules.
Retesting involves the execution of test cases that are in a failed state. But the regression
ensures the re-execution of passed test cases.
Retesting has a higher priority over regression. But in some cases, both get executed in
parallel.
199. What Are The Severity And Priority Of A Defect? Explain Using An Example.
Ans# Priority reflects the urgency of the defect from the business point of view. It indicates how
quickly we need to fix the bug.
Severity reflects the impact of the defect on the functionality of the application. Bugs having a
critical impact on the functionality require a quick fix.
Here are examples which show bugs under different priority and severity combinations.
High priority, high severity: while making an online purchase, the user sees a message like “Error
in order processing, please try again.” at the time of submitting the payment details.
High severity, low priority: a scenario in which the application crashes, but the scenario has a
rare occurrence.
Low priority, low severity: typo errors in the displayed content, such as “You have registered
success”, where “success” is written instead of “successfully”.
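The examples can be summarized as a small mapping. The classification of each bug is a judgment call, and the short labels below are this sketch's assumption, not a fixed rule.

```python
# Illustrative mapping of the example bugs to priority/severity
# combinations (classification is a judgment call, not a fixed rule).

examples = {
    "payment submission fails with an error": ("high", "high"),
    "rare crash in an uncommon scenario":     ("low",  "high"),
    "typo: 'registered success' in content":  ("low",  "low"),
}

for bug, (priority, severity) in examples.items():
    print(f"{bug} -> priority={priority}, severity={severity}")
```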
201. As Per Your Understanding, List Down The Key Challenges Of Software Testing?
Ans# Following are some of the key challenges of software testing.
Unavailability of standard documents to understand the application.
Lack of skilled testers, tools, and training.
Understanding the requirements, domain knowledge, and the business user's perspective.
Agreeing on the test plan and the test cases with the customer.
Re-execution efforts due to changing requirements.
Ensuring the application is stable enough to be tested; otherwise retesting efforts become high.
Testers always work under stringent timelines.
Deciding which tests to execute first.
Testing the complete application using an optimized number of test cases.
Planning test cases for other stages of testing like regression, release, and performance
testing.
Normally performed to validate the software meets a set of agreed acceptance criteria.
module or database records. Identifies and measures the level of locking, deadlocking and use of
When the actual result deviates from the expected result while testing a software application or
product, it results in a defect.
A technique used during planning, analysis and design; creates a functional hierarchy for the
software.
After integrating two different components together, we perform integration testing. Integration
testing is usually performed after unit and functional testing. This type of testing is especially
relevant to client/server and distributed systems.
The process of exercising software to verify that it satisfies specified requirements and to detect
errors is called testing. It is the process of analysing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the features of the software item
(Ref. IEEE Std 829). It is also the process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect of the system or
component.
307. How do you do usability testing, security testing, installation testing, ad-hoc, safety and
smoke testing?
Usability testing: testing the user-friendliness and simplicity of the software. Security testing: testing
access rights; for example, the admin has all rights, while user1 has only the right to read and open,
not to write. Installation testing: testing the software while installing it in different environments.
Safety and security go hand in hand; most of the time this involves logins. Smoke testing: testing
whether the basic functionality of the software is working or not. If you want to test the basic
functionalities without a formal procedure, we go for ad-hoc testing; it happens when we don't have
time to test procedure-wise.
309. What are the major differences between stress testing, load testing, and volume testing?
Stress testing means gradually increasing the load beyond the expected level and checking the
performance at each step. Load testing means applying the expected load at once and checking the
performance at that level. Volume testing means testing the application against large volumes of data.
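The load-versus-stress distinction can be illustrated with a small sketch. The "service" below is a fake stand-in, and the worker count and load levels are invented numbers; real load tools would measure response times against an actual system.

```python
# Illustrative sketch: load testing applies the expected number of
# concurrent requests; stress testing keeps increasing the load past
# the expected maximum (the "service" here is a fake stand-in).

from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    return sum(range(1000))  # stands in for real server-side work

def run_at_load(n_requests):
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fake_request, range(n_requests)))
    return len(results)

# Load test: the expected load, applied at once.
print("load test handled:", run_at_load(100))

# Stress test: step the load upward past the expected maximum.
for load in (100, 500, 1000):
    print(f"stress step {load} handled:", run_at_load(load))
```

In a real stress test, the interesting observation is *how* the system fails as the steps increase, not whether it keeps up.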
programmers do not write any production code until they have first written a unit test.
of the code, for example checking whether each path has been executed at least once, and whether
the branches have been executed at least once. This is used to check the structure of the code.
the system is neither properly “load tested” nor subjected to a meaningful stress test. Stress
testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM,
disc, mips, interrupts, etc.) needed to process that load. The idea is to stress a system to the
breaking point in order to find bugs that will make that break potentially harmful. The system is not
expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent
manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress
testing may or may not be repaired depending on the application, the failure mode, consequences,
etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to
force the system into resource depletion. Load testing is subjecting a system to a statistically
representative (usually) load. The two main reasons for using such loads is in support of software
reliability testing and in performance testing. The term 'load testing' by itself is too vague and
imprecise to warrant use. For example, do you mean 'representative load', 'overload', 'high load',
etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system
can sustain without running out of resources or having transactions suffer (application-specific)
excessive delay. A third use of the term is as a test whose objective is to determine the maximum
sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest
transaction arrival rate in performance testing.
formalized QA process is necessary.
- Where the risk is lower, management and organizational buy-in and QA implementation may be a
slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep
bureaucracy from getting out of hand.
- For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of
customers and projects. A lot will depend on team leads or managers, feedback to developers, and
ensuring adequate communications among customers, managers, developers, and testers.
- In all cases the most value for effort will be in requirements management processes, with a goal
of clear, complete, testable requirement specifications or expectations.
340. What are 5 common problems in the software development process?
The 5 common problems in the software development process are as follows:
1) Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there
will be problems.
2) Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
3) Inadequate testing - no one will know whether or not the program is any good until the
customer complains or the system crashes.
4) Features - requests to pile on new features after development is underway are extremely common.
5) Miscommunication - if developers don't know what's needed or customers have erroneous
expectations, problems are guaranteed.
- use descriptive variable names - use both upper and lower case, avoid abbreviations, and use as
many characters as necessary to be adequately descriptive (use of more than 20 characters is not
out of line); be consistent in naming conventions.
- function and method sizes should be minimized; less than 100 lines of code is good, less than 50
lines is preferable.
- function descriptions should be clearly spelled out in comments preceding a function's code.
- organize code for readability.
- use whitespace generously - vertically and horizontally.
- each line of code should contain 70 characters max.
- one code statement per line.
- coding style should be consistent throughout a program (e.g., use of brackets, indentations,
naming conventions, etc.).
- in adding comments, err on the side of too many rather than too few comments; a common rule
of thumb is that there should be at least as many lines of comments (including header blocks) as
lines of code.
- no matter how small, an application should include documentation of the overall program
function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow
chart and detailed program documentation.
- make extensive use of error handling procedures and status and error logging.
- for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance
in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple
inheritance, and minimize use of operator overloading (note that the Java programming language
eliminates multiple inheritance and operator overloading).
- for C++, keep class methods small; less than 50 lines of code per method is preferable.
- for C++, make liberal use of exception handlers.
easily modifiable, and maintainable; is robust with sufficient error-handling and status logging
capability; and works correctly when implemented. Good functional design is indicated by an
application whose functionality can be traced back to customer and end-user requirements. For
programs that have a user interface, it's often a good idea to assume that the end user will have
little computer knowledge and may not read a user manual or even the on-line help; some
common rules-of-thumb include: - the program should act in a way that least surprises the user - it
should always be evident to the user what can be done next and how to exit - the program
shouldn't let the users do something stupid without warning them.
personnel- be able to communicate with technical and non-technical people, engineers, managers,
and customers. - be able to run meetings and keep them focused
349. What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, internal design specifications, and other necessary documents.
- Obtain budget and schedule requirements.
- Determine project-related personnel and their responsibilities, reporting requirements, required
standards and processes (such as release processes, change processes, etc.).
- Identify the application's higher-risk aspects, set priorities, and determine the scope and
limitations of tests.
- Determine test approaches and methods - unit, integration, functional, system, load, usability
tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.).
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking,
problem/bug tracking, etc.).
- Determine test input data requirements.
- Identify tasks, those responsible for tasks, and labour requirements.
- Set schedule estimates, timelines, and milestones.
- Determine input equivalence classes, boundary value analyses, and error classes.
- Prepare the test plan document and have needed reviews/approvals.
- Write test cases.
- Have needed reviews/inspections/approvals of test cases.
- Prepare the test environment and testware; obtain needed user manuals/reference
documents/configuration guides/installation guides; set up test tracking processes; set up logging
and archiving processes; set up or obtain test input data.
- Obtain and install software releases.
- Perform tests.
- Evaluate and report results.
- Track problems/bugs and fixes.
- Retest as needed.
- Maintain and update test plans, test cases, test environment, and testware through the life cycle.
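One of the steps above, determining input equivalence classes and boundary values, can be sketched concretely. The input rule ("age must be between 18 and 60 inclusive") and the representative values are invented for this illustration.

```python
# Sketch of deriving equivalence classes and boundary values for a
# hypothetical input rule: "age must be between 18 and 60 inclusive".

LOW, HIGH = 18, 60

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence classes: one representative value per class is enough.
equivalence_classes = {
    "below range (invalid)": 10,
    "in range (valid)":      35,
    "above range (invalid)": 75,
}

# Boundary value analysis: test just below, on, and just above each edge.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for name, value in equivalence_classes.items():
    print(f"{name}: {value} -> {is_valid_age(value)}")
for value in boundary_values:
    print(f"boundary {value} -> {is_valid_age(value)}")
```

This is why the step matters: three class representatives plus six boundary values cover the rule far more economically than testing every possible age.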
or everything that could go wrong, risk analysis is appropriate to most software development
projects. This requires judgement skills, common sense, and experience. (If warranted, formal
methods are also available.) Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
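These considerations can be reduced to a rough, illustrative scoring exercise for deciding what to test first. The application areas and the likelihood/impact numbers below are entirely invented; in practice the scores come from the kind of questions listed above.

```python
# Hedged sketch: ranking test areas by a simple likelihood x impact
# score to decide what to test first (areas and scores are invented).

areas = {
    # area:              (likelihood of failure 1-5, impact 1-5)
    "payment flow":      (3, 5),
    "user registration": (2, 4),
    "report export":     (4, 2),
    "help pages":        (1, 1),
}

ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for area, (likelihood, impact) in ranked:
    print(f"{area}: risk score {likelihood * impact}")
```

A simple product of likelihood and impact is only one possible weighting; the point is that risk analysis turns the open-ended question list into an ordered testing priority.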
355. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.
356. What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.
358. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no
easy solution in this situation, other than:
- Hire good people.
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
- Everyone in the organization should be clear on what 'quality' means to the customer.
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities. There are commercial tools to assist with such testing.
provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included
on each page.
application is developed. Test code is under source control along with the rest of the code.
Customers are expected to be an integral part of the project team and to help develop scenarios
for acceptance/black box testing. Acceptance tests are preferably automated, and are modified
and rerun for each of the frequent development iterations. QA and test personnel are also required
to be an integral part of the project team. Detailed requirements documentation is not used, and
frequent re-scheduling, re-estimating, and re-prioritizing is expected.
all types of platforms. Other automated tools can include:
- code analyzers - monitor code complexity, adherence to standards, etc.
- coverage analyzers - these tools check which parts of the code have been exercised by a test,
and may be oriented to code statement coverage, condition coverage, path coverage, etc.
- memory analyzers - such as bounds-checkers and leak detectors.
- load/performance test tools - for testing client/server and web applications under various load
levels.
- web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site's interactions are secure.
- other tools - for test case management, documentation management, bug reporting, and
configuration management.
364. What's the difference between black box and white box testing?
Black-box and white-box are test design methods. Black-box test design treats the system as a
“black-box”, so it doesn't explicitly use knowledge of the internal structure. Black-box test design is
usually described as focusing on testing functional requirements. Synonyms for black-box include:
behavioural, functional, opaque-box, and closed-box. White-box test design allows one to peek
inside the “box”, and it focuses specifically on using internal knowledge of the software to guide
the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms
'behavioural' and 'structural'. Behavioural test design is slightly different from black-box test
design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In
practice, it hasn't proven useful to use a single test design method. One has to use a mixture of
different methods so that they aren't hindered by the limitations of a particular one. Some call this
'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes
altogether. It is important to understand that these methods are used during the test design phase,
and their influence is hard to see in the tests once they're implemented. Note that any level of
testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually
associated with structural test design, but this is because testers usually don't have well-defined
requirements at the unit level to validate.
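As an illustration of the two design methods, the same small function can be tested both ways. The `shipping_fee` function, its thresholds, and its "requirement" are all invented for this sketch.

```python
# Sketch: black-box vs white-box tests of the same small function
# (the function and its spec are invented for illustration).

def shipping_fee(order_total):
    if order_total >= 50:      # branch 1: free shipping
        return 0.0
    elif order_total >= 20:    # branch 2: discounted shipping
        return 2.5
    else:                      # branch 3: standard shipping
        return 5.0

# Black-box: derived only from the stated requirement
# ("orders of 50 or more ship free"), ignoring the code.
assert shipping_fee(80) == 0.0
assert shipping_fee(10) > 0.0

# White-box: derived from the internal structure, choosing inputs
# so that every branch is executed at least once.
assert shipping_fee(50) == 0.0     # branch 1 boundary
assert shipping_fee(20) == 2.5     # branch 2 boundary
assert shipping_fee(19.99) == 5.0  # branch 3

print("all black-box and white-box checks passed")
```

Note that the black-box tests would survive a rewrite of the function's internals, while the white-box tests are tied to its branch structure, which is exactly the trade-off the passage above describes.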
Black box testing can be considered which is not based on any knowledge of internal design or
code. Tests are based on requirements and functionality. White box testing can also be considered
which is based on knowledge of the internal logic of an application's code. Tests are based on
coverage of
Code statements, branches, paths, conditions. Unit testing - the most 'micro' scale of testing; to
test particular functions or code modules. Unit testing is typically done by the programmer and not
Manual Testing Interview Questions
by testers, as it requires detailed knowledge of the internal program design and code. Not always
easily done unless the application has a well-designed architecture with tight code; may require
developing test driver modules or test harnesses. incremental integration testing - continuous
testing of an application as new functionality is added; requires that various aspects of an
application's functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by programmers or by
testers.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Manual Testing Interview Questions
Failover testing - typically used interchangeably with 'recovery testing'.

Security testing - testing how well the system protects against unauthorized internal or external access, wilful damage, etc.; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.

User acceptance testing - done to determine whether the software is satisfactory to an end-user or customer.

Comparison testing - comparing software weaknesses and strengths to competing products.

Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Alpha testing is typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Beta testing is typically done by end-users or others, not by programmers or testers.

Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
366. Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated
by an old parable: In ancient China there was a family of healers, one of whom was known
throughout the land and employed as a physician to a great lord. The physician was asked which of
his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and
dramatic treatments, and on occasion someone is cured and my name gets out among the lords.
My elder brother cures sickness when it just begins to take root, and his skills are known among
the local peasants and neighbours. My eldest brother is able to sense the spirit of sickness and
eradicate it before it takes form. His name is unknown outside our home."
(a) Requirements management processes, with a goal of clear, complete, testable requirement
specifications embodied in requirements or design documentation, or in 'agile'-type environments
extensive continuous coordination with end-users,
(c) Post-mortems/retrospectives.
369. How do the companies expect the defect reporting to be communicated by the tester to
the development team? Can the excel sheet template be used for defect reporting? If so, what
are the common fields that are included? Who assigns the priority and severity of the defect?
To report bugs in Excel, use columns such as S.No, Module, Screen/Section, Issue detail, Severity,
Priority and Issue status, and set filters on the column attributes. However, most companies use a
shared (e.g., SharePoint-based) process for reporting bugs. In this, when a project comes in for
testing, module-wise details of the project are entered into the defect management system they
are using. It contains the following fields: 1) Date 2) Issue brief 3) Issue description (used by the
developer to reproduce the issue) 4) Issue status (active, resolved, on hold, suspended, not able to
reproduce) 5) Assigned to (names of members allocated to the project) 6) Priority (high, medium
and low) 7) Severity (major, medium and low). Typically the tester assigns the severity of a defect,
while the test lead or project manager assigns its priority.
370. What are the contents of test plans and test cases?
A test plan is a document that contains the scope, approach, test design and test strategies. It
includes the following: 1) Test plan identifier 2) Scope 3) Features to be tested 4) Features not to
be tested 5) Test strategy 6) Test approach 7) Test deliverables 8) Responsibilities 9) Staffing and
training 10) Risks and contingencies 11) Approvals.
A test case, on the other hand, is a documented set of steps/activities that are carried out or
executed on the software in order to confirm its functionality/behaviour against a certain set of
inputs.
371. What are the table contents in test plans and test cases?
A test plan is a document which is prepared with the details of the testing priorities. A test plan
generally includes: 1) Objective of testing 2) Scope of testing 3) Reason for testing 4) Timeframe
5) Environment 6) Entry and exit criteria 7) Risk factors involved 8) Deliverables
373. Why did you use automated testing tools in your job?
Automated testing tools are used for the following reasons:
1) For regression testing 2) As criteria to decide the condition of a particular build
379. How will you evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for our
project. Additional new features and enhancements of existing features will also help in the
evaluation.
385. What types of scripting techniques for test automation do you know?
There are 5 types of scripting techniques: Linear, Structured, Shared, Data-driven and
Keyword-driven.
390. How to find that tools work well with your existing system?
1. Discuss with the support officials 2. Download the trial version of the tool and evaluate it 3.
Get suggestions from people who are already working with the tool
391. Describe some problems that you had with automated testing tools
1) The inability of WinRunner to identify third-party controls such as Infragistics controls 2) A
change in the location of a table object causes an "object not found" error 3) The inability of
WinRunner to execute scripts against multiple languages
394. How to find that tools work well with your existing system?
To find this out, select the suite of tests which are most important for your application. First run
them with the automated tool. Next, subject the same tests to careful manual testing. If the
results coincide, you can say your testing tool is performing well.
395. How will you test a field of the AUT that generates auto-numbers when we click the button
"NEW" in the application?
One solution is to create a text file in a known location, update it with the auto-generated value
each time the test runs, and compare the currently generated value with the previous one.
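The approach above can be sketched as follows. This is only an illustration: `get_auto_number()` is a hypothetical stub standing in for reading the field from the application under test, and the state-file name is invented.

```python
# Persist the last auto-generated number in a text file and compare it with
# the value produced on the next run, as described above.
import tempfile
from pathlib import Path

STATE_FILE = Path(tempfile.gettempdir()) / "last_auto_number_demo.txt"
STATE_FILE.unlink(missing_ok=True)  # start the demo from a clean state

def get_auto_number(counter=[100]):
    # Hypothetical stub simulating the application's auto-number field.
    counter[0] += 1
    return counter[0]

def check_auto_number():
    current = get_auto_number()
    if STATE_FILE.exists():
        previous = int(STATE_FILE.read_text())
        assert current > previous, "auto-number did not increase"
    STATE_FILE.write_text(str(current))  # remember the value for the next run
    return current

first = check_auto_number()
second = check_auto_number()
assert second == first + 1
STATE_FILE.unlink()  # clean up after the demo
```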
396. How will you evaluate the fields in the application under test using automation tool?
We can use verification points (e.g., in Rational Robot) to validate the fields. For example, using
the Object Data and Object Data Properties verification points we can validate fields.
397. Can we perform the test of single application at the same time using different tools on the
same machine?
No. The testing tools would not be able to determine unambiguously which browser was opened by
which tool.
Configuration management is a process to control and document any changes made during the life
of a project. Revision control, Change Control, and Release Control are important aspects of
Configuration Management.
400. What problems are encountered while testing application compatibility on different
browsers and on different operating systems?
Font issues and alignment issues.
401. How exactly is application compatibility testing on different browsers and on different
operating systems done?
Compatibility testing is a type of software testing used to ensure compatibility of the
system/application/website with various other objects, such as other web browsers, hardware
platforms and users.
402. How does testing proceed when the SRS or any other document is not given?
If the SRS is not available, we can perform exploratory testing. In exploratory testing the basic
module is executed and, depending on its results, the next plan is executed.
checklist of historically common programming errors, and analysing its compliance with coding
standards.
A document that describes in detail the characteristics of the product with regard to its intended
features.
Testing the features and operational behaviour of a product to ensure they correspond to its
specifications. Testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions. Or Black
Box Testing.
A combination of Black Box and White Box testing methodologies: testing a piece of software
against its specification but using some knowledge of its internal workings.
A group review quality improvement process for written material. It consists of two aspects:
product (document itself) improvement and process improvement (of both document production
and inspection).
Testing of combined parts of an application to determine if they function together correctly. Usually
performed after unit and functional testing. This type of testing is especially relevant to
client/server and distributed systems.
Confirms that the application under test recovers from expected or unexpected events without loss
of data or functionality. Events can include shortage of disk space, unexpected loss of
communication, or power out conditions.
This term refers to adapting software for a specific locality.
A standard of measurement. Software metrics are the statistics describing the structure or content
of a program. A metric should be a real objective measurement of something such as number of
bugs per lines of code.
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the
system or application does not crash.
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.
Testing in which all paths in the program source code are tested at least once.
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
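The "test to pass" / "test to fail" distinction above can be sketched against one function. The function and its input rules are invented for the example.

```python
# Positive testing ("test to pass") and negative testing ("test to fail")
# applied to the same hypothetical input-parsing function.

def parse_age(text):
    value = int(text)            # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Positive test: valid input should be accepted.
assert parse_age("42") == 42

# Negative tests: invalid inputs should be rejected, not silently accepted.
for bad in ["abc", "-5", "999"]:
    try:
        parse_age(bad)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected, f"expected rejection of {bad!r}"
```

A suite with only positive tests would miss the case where bad input slips through, which is why both kinds are listed side by side in the glossary.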
All those planned or systematic actions necessary to provide adequate confidence that a product
or service is of the type and quality needed and expected by the customer.
A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.
A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.
The operational techniques and the activities used to fulfil and verify requirements of quality.
That aspect of the overall management function that determines and implements the quality
policy.
The overall intentions and direction of an organization as regards quality as formally expressed by
top management.
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a
write, with no mechanism used by either to moderate simultaneous access.
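The race condition defined above can be demonstrated with two threads doing an unsynchronized read-modify-write on a shared counter; adding a lock moderates the simultaneous access. This is a sketch: without the lock the final total may fall short, though how often depends on the interpreter.

```python
# Two threads increment a shared counter. The unsynchronized version
# performs a read-modify-write with no moderation mechanism (the race);
# the locked version is deterministic.
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # unsynchronized read-modify-write

def run(use_lock, times=100_000, workers=2):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(times, use_lock))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock the total is always exact.
assert run(use_lock=True) == 200_000
# Without it, increments may be lost (result can be below 200000).
print("unsynchronized result:", run(use_lock=False))
```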
Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions
Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.
A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).
Brief test of major functional elements of a piece of software to determine if it’s basically
operational.
Performance testing focused on ensuring the application under test gracefully handles
increases in work load.
Testing which confirms that the program can restrict access to authorized personnel and that
the authorized personnel can access the functions available to their security level.
A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and
considering it a success if it does not catch on fire.
Running a system at high load for a prolonged period of time. For example, running several
times more transactions in an entire day (or night) than would be expected in a busy day, to
identify any performance problems that appear after a large number of transactions have been
executed.
A deliverable that describes all data, functional and behavioural requirements, all constraints,
and all validation requirements for software.
A set of test inputs, execution conditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.
Test Driven Development - testing methodology associated with Agile Programming in which every
chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate
unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an
equal number of lines of test code to the size of the production code.
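The TDD style described above can be sketched with a tiny unit test and the implementation written to satisfy it. The function and its expected behaviour are invented for the example.

```python
# TDD-style sketch: the tests state the expected behaviour first; the
# implementation exists only to make them pass.
import unittest

def slugify(title):
    """Implementation written to satisfy the tests below."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Manual Testing Questions"),
                         "manual-testing-questions")

    def test_already_lowercase_single_word(self):
        self.assertEqual(slugify("testing"), "testing")

# Run the suite programmatically; in TDD all tests must pass all the time.
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```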
642. How do the companies expect the defect reporting to be communicated by the tester to
the development team? Can the excel sheet template be used for defect reporting? If so, what
are the common fields that are to be included? Who assigns the priority and severity of the
defect?
To report bugs in Excel, use columns such as:
S.No, Module, Screen/Section, Issue detail, Severity, Priority, Issue status
This is how to report bugs in an Excel sheet; filters can also be set on the column attributes.
But most companies use a shared (e.g., SharePoint-based) process of reporting bugs. In this,
when a project comes in for testing, module-wise details of the project are entered into the
defect management system they are using. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended and not able to reproduce)
5. Assigned to (names of members allocated to the project)
6. Priority (high, medium and low)
7. Severity (Major, medium and low)
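The Excel/CSV-style report with the columns listed above can be generated programmatically; the defect values below are illustrative only.

```python
# Build a defect report with the columns described above, as CSV text.
import csv
import io

FIELDS = ["S.No", "Module", "Screen/Section", "Issue detail",
          "Severity", "Priority", "Issue status"]

defects = [
    {"S.No": 1, "Module": "Login", "Screen/Section": "Sign-in form",
     "Issue detail": "Error message not shown for wrong password",
     "Severity": "Major", "Priority": "High", "Issue status": "Active"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()     # first row: the column names
writer.writerows(defects)
report = buffer.getvalue()
assert "Severity" in report and "Login" in report
```

The same rows open directly in Excel, where filters can be applied to the column headers as the answer suggests.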
646. How will you evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for our
project. Additional new features and enhancements of existing features will also help in the
evaluation.
651. What are the major differences between stress testing, load testing and volume
testing?
Stress testing means progressively increasing the load beyond the expected level and checking the
performance at each level, until the system degrades or fails. Load testing means applying the
expected load at one time and checking the performance at that level. Volume testing means
feeding the system large volumes of data and checking its behaviour as the data grows.
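The load-testing part of the distinction above can be sketched as a timing loop. `process_request()` is a hypothetical stand-in for a real transaction, and the response-time budget is invented.

```python
# Drive an operation at the expected load, record response times, and
# check they stay within a bound (the load-testing idea above).
import time

def process_request():
    # Hypothetical unit of work; a real test would call the system under test.
    return sum(i * i for i in range(1000))

def run_load(requests):
    timings = []
    for _ in range(requests):
        start = time.perf_counter()
        process_request()
        timings.append(time.perf_counter() - start)
    return timings

timings = run_load(200)     # the "expected" load
worst = max(timings)
assert worst < 1.0          # generous, illustrative response-time budget
# Stress testing would keep raising `requests` (or run clients in parallel)
# until response time degrades or the system fails, to find the breaking point.
```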
G C Reddy is a good test engineer because he has a "test to break" attitude, takes the point of
view of the customer, has a strong desire for quality and an attention to detail. He's also tactful
and diplomatic and has good communication skills, both oral and written. And he has previous
software development experience, too.
Give the team complete information so developers can understand the bug, get an idea of its
severity, reproduce it and fix it.
Since this type of problem can severely affect schedules and indicates deeper problems in the
software development process, such as insufficient unit testing, insufficient integration testing,
poor design, improper build or release procedures, managers should be notified and provided with
some documentation as evidence of the problem.
Use risk analysis to determine where testing should be focused. This requires judgment skills,
common sense and experience. The checklist should include answers to the following questions:
• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
661. What if the project isn’t big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the considerations listed under “What if there
isn’t enough time for thorough testing?” do apply. The test engineer then should do “ad hoc”
testing, or write up a limited test plan based on the risk analysis.
664. What if the application has functionality that wasn’t in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it may indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of the
unexpected functionality. If the functionality only affects minor areas, such as small improvements
in the user interface, it may not be a significant risk.
667. What if an organization is growing so fast that fixed QA processes are impossible?
* This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:
* Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the customer
* Everyone in the organization should be clear on what ‘quality’ means to the customer
671. What are the expected loads on the server (e.g., number of hits per unit time), and what
kind of performance is required under such loads (such as web server response time and
database query response times)? What kinds of tools will be needed for performance testing
(such as web load testing tools, other tools already in house that can be adapted, web robot
downloading tools, etc.)?
672. Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
673. What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
674. Will down time for server and content maintenance/upgrades be allowed? How much?
676. How reliable are the site’s Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?
677. What processes will be required to manage updates to the web site’s content, and what are
the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
678. Which HTML specification will be adhered to? How strictly? What variations will be allowed
for targeted browsers?
679. Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
680. How will internal and external links be validated and updated? How often?
681. Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection variabilities,
and real-world internet ‘traffic congestion’ problems to be accounted for in testing?
682. How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
683. How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.
* The page layouts and design elements should be consistent throughout a site, so that it’s clear to
the user that they’re still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be included
on each page.
A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the
release and preparing logical groups of functions that can be further broken into test
procedures. Test procedures define test conditions, data to be used for testing and expected
results, including database updates, file outputs, report results. Generally speaking…
• Test cases and scenarios are designed to represent both typical and unusual situations that
may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also execute
unit test cases.
• It is the test team that, with assistance of developers and clients, develops test cases and
scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the process or
transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope.
• Test data is captured and base lined, prior to testing. This data serves as the foundation for
unit and system testing and used to exercise system functionality in a controlled environment.
• Some output data is also base-lined for future comparison. Base-lined data is used to support
future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment and
data to be tested. A test readiness document is created to indicate the status of the entrance
criteria of the release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. software design document, source code, and
software complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions, and test data.
• Reports of software design issues, given to software developers for correction.
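The traceability matrices mentioned in the process above can be sketched as a simple mapping from requirement IDs to the test scripts that cover them, which makes out-of-scope gaps easy to spot. All IDs below are illustrative.

```python
# Requirement-to-test traceability sketch: each requirement maps to the
# test cases that cover it; requirements with no tests are flagged.
requirements = ["REQ-1", "REQ-2", "REQ-3"]

traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-201"],
    # REQ-3 has no tests yet
}

uncovered = [r for r in requirements if not traceability.get(r)]
assert uncovered == ["REQ-3"]   # the requirement with no test coverage
```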
* Title
* Table of Contents.
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Traceability requirements
* Test outline – a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment – hardware, operating systems, other required software, data configurations,
interfaces to other systems
* Test environment validity analysis – differences between the test and production systems and
their impact on test validity.
* Software CM processes
* Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs
* Personnel allocation
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues.
* Open issues
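The equivalence-class and boundary-value outline listed in the plan contents above can be sketched for a hypothetical input field that accepts integers 1..100 (the range is invented for the example).

```python
# Equivalence classes and boundary value analysis for a field valid in 1..100.
LOW, HIGH = 1, 100

def in_valid_class(value):
    return LOW <= value <= HIGH

# Equivalence classes: one representative per partition.
assert not in_valid_class(-5)    # invalid: below range
assert in_valid_class(50)        # valid: inside range
assert not in_valid_class(250)   # invalid: above range

# Boundary value analysis: values at and adjacent to each boundary.
boundary_cases = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
results = [in_valid_class(v) for v in boundary_cases]
assert results == [False, True, True, True, True, False]
```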
A test case typically includes fields such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
* Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it’s useful to prepare test cases early in the development cycle if
possible.
* Complete information such that developers can understand the bug, get an idea of its severity,
and reproduce it if necessary.
* Bug identifier (number, ID, etc.)
* The function, module, feature, object, screen, etc. where the bug occurred
* Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn’t have easy access to the test case/test script/test tool
* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem
* Test date
* Description of fix
* Date of fix
* Retest date
* Retest results
This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex, and thus most subject to errors?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the most customer service complaints?
697. What if the project isn’t big enough to justify extensive testing?
* Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in ‘What if there isn’t enough time for thorough testing?’ apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.
* Work with the project’s stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.
* It’s helpful if the application’s initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.
* If the code is well-commented and well-documented this makes changes easier for the
developers.
* Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.
* The project’s initial schedule should allow for some extra time commensurate with the possibility
of changes.
* Try to move new requirements to a ‘Phase 2’ version of an application, while using the original
requirements for the ‘Phase 1’ version.
* Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.
* Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted – after all, that’s their job.
* Balance the effort put into setting up automated testing with the expected effort required to re-
do them to deal with changes.
* Focus initial automated testing on application aspects that are most likely to remain unchanged.
* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans)
* Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).
The acceptance test is the responsibility of the client/customer or project manager, however, it is
conducted with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a Test Build Manager.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a System Administrator.
Different organizations have different phases in STLC; however, a generic Software Test Life Cycle
(STLC) consists of the following phases.
i) Test Planning
ii) Test Case Design
iii) Test Environment Setup
iv) Test Execution
v) Test Closure
The Software Development Life Cycle involves business requirement specification, analysis, design,
software requirement specification, the development process (coding and application development),
the testing process (preparation of the test plan, preparation of test cases, testing, bug reporting,
test logs & test reports), release, and maintenance.
Whereas the Software Testing Life Cycle involves preparation of the test plan, preparation of test cases,
test execution, bug reporting & tracking, re- and regression testing, and test closure.
It is widely used in commercial development projects. In this model, we move to the next
phase (step) only after completing the previous phase; as in a waterfall, the output of each phase
flows down into the next.
ii) Easy to manage due to the rigidity of the model- each phase has specific deliverables and a
review process.
iv) Works well for smaller projects where requirements are very well understood.
The V-model illustrates how testing activities can be integrated into each phase of the software
development life cycle.
o Due to multiple stages of testing and the involvement of multiple teams, quality can be improved.
o The V-model supports a wide range of development methodologies, such as structured and object-
oriented systems development.
o Adopting changes in requirements or adding new requirements in the middle of the process is
difficult.
Developers prepare design documents using all available requirements and then build prototypes.
The prototypes are sent to the customer, who evaluates them and gives feedback.
Developers then refine the requirements, modify the software design, and produce modified prototypes.
The process continues like this until the customer's confirmation is obtained, after which developers
start the regular process: software design, coding (implementation), testing, and release & maintenance.
The objective of this approach is to get clear (complete and correct) requirements from
customers in order to avoid the multiplication of defects.
o Feedback from the customer is received periodically, so changes don't come as a last-minute
surprise.
o Once requirements are finalized, adopting changes in requirements or adding new
requirements is difficult.
734. What is Agile development methodology? What is Scrum Methodology in Agile Software
Development?
There are different methodologies which are part of the agile model. The most famous one is the
scrum methodology. Like the other agile methodologies, scrum is iterative and incremental. It
differs from the others in that it introduced the idea of empirical process control. Scrum was
originally introduced for software project management; however, it was eventually also used for
software maintenance.
The best part of the scrum methodology is that it uses the real-world progress of a project
for planning and scheduling releases. The entire software project is divided
into small parts known as sprints. The duration of a sprint can range from one week to three weeks.
At the end of each sprint, the team members meet along with the stakeholders. This
meeting helps in assessing the progress of the project and chalking out the further plan of action.
Such an assessment helps in taking stock of the current state of affairs and reworking the line of
work to complete the project on time, rather than merely speculating about or predicting the outcome.
It was first proposed by Boehm. This system development method combines the features of
both the waterfall model and the prototype model. In the Spiral model, all the activities are arranged
in the form of a spiral.
NOTE:
. Document names may vary from one company to another, but the process is the same.
. In software product development, BA-category people gather requirements from model customers
in the market.
SRS document is an agreement between the developer and the customer covering the functional
and non-functional requirements of the software to be developed.
Release Team provides some training to the customer site people (if required)
Maintenance(3 types)
———————
a) Modifications
i) Corrective changes and ii) Enhancements
743. What is the difference between Mobile device testing and mobile application testing?
Ans. Mobile device testing means testing the mobile device itself, while mobile application testing
means testing a mobile application on a mobile device.
746. What are the defects tracking tools used for mobile testing?
Ans. You can use the same defect tracking tools you use for web application testing, such as QC,
Jira, Rally, and Bugzilla.
747. What all major networks to be considered while performing application testing?
Ans. You should test the application on 4G, 3G, 2G, and Wi-Fi. 2G is a slower network; it's good to
verify your application on a slower network as well, to track your application's performance.
748. When performing sanity test on the mobile application what all criteria should be taken into
consideration?
Ans.
Installation and uninstallation of the application
Verify the device in the different available networks: 2G, 3G, 4G, and Wi-Fi.
Functional testing
Interrupt testing – being able to receive calls while running the application.
Compatibility testing – e.g., being able to attach a photo from the gallery to a message.
Test application performance on different handsets.
Do some negative testing by entering invalid credentials and testing the behaviour of the
application.
749. Which things to consider testing a mobile application through black box technique?
Ans.
By testing your application on multiple devices.
By changing the port and IP addresses to make sure the device is getting connected and
disconnected properly.
By making calls and sending messages to other devices.
By testing your web application on different mobile browsers like Chrome, Firefox, Opera, Dolphin,
etc.
760. What are the automation tools available for mobile application testing?
Ans. There are many automation tools available in the market for mobile application testing;
iPhone Tester is one of the best tools for testing applications on iPhones, and screenplay for Android
devices.
761. What is the best way to test different screen sizes of the devices?
Ans. Using an emulator.
775. Explain critical bugs that you come across while testing in mobile devices or application?
Ans. Explain an example as per your experience.
780. What are the roles and responsibilities on a current mobile application you are testing?
Ans. Answer based on your experience of the current project you are working on.
1) Service transport: This is the first layer which helps in transporting XML messages between
various client applications. This layer commonly uses the below-mentioned protocols:
HTTP (Hypertext Transfer Protocol)
SMTP (Simple Mail Transfer Protocol)
FTP(File Transfer Protocol)
BEEP(Block Extensible Exchange Protocol)
2) XML messaging: This layer is based on the XML model where messages are encoded in
common XML format which is easily understood by others. This layer includes
XML-RPC
SOAP(Simple Object Access Protocol)
3) Service description: This layer contains description like location, available functions, and data
types for XML messaging which describes the public interface to a specific web service. This layer
includes:
WSDL(Web Service Description Language)
4) Service discovery: This layer is responsible for providing a way to publish and find web
services over the web. This layer includes:
UDDI(Universal Description, Discovery, and Integration)
Authentication
Security
Error handling
Handshake Protocol
SOAP vs REST:
* SOAP (Simple Object Access Protocol) serves as a standard protocol for web service creation,
whereas REST (Representational State Transfer) is an architectural style for web service creation.
* In SOAP, web services and clients are tightly coupled and define standards that are to be
strictly followed; REST does not follow too many standards and is loosely coupled.
* SOAP requires more bandwidth and resources and uses service interfaces for exposing business
logic; REST requires less bandwidth and resources and uses URIs (Uniform Resource Identifiers)
for exposing business logic.
* SOAP is usually less preferred and permits the XML data format only; REST is usually more
preferred and permits data formats like plain text, HTML, JSON, etc.
* The Java API for SOAP web services is JAX-WS; the Java API for RESTful web services is JAX-RS.
* SOAPUI can be used for testing SOAP web services; browsers and extensions such as Chrome
Postman are used for testing REST web services.
* SOAP defines its own security and uses a WSDL contract for binding web services and client
programs; REST does not have any defined contract and does not have its own security methods.
804. What are the core components of HTTP request and HTTP response?
Ans: An HTTP request has the following 5 major components: the verb (HTTP method), the URI,
the HTTP version, the request header, and the request body. An HTTP response similarly consists
of the status/response code, the HTTP version, the response header, and the response body.
Request Header: Contains metadata like client type, cache settings, message body format, etc. for
the HTTP request message.
Response Header: Consists of metadata like content length, content type, server type, etc. for the
HTTP response message.
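These components can be made concrete by assembling an HTTP request message by hand. The sketch below is illustrative only; the endpoint, host name, headers, and JSON body are invented for the example:

```python
# Sketch: composing an HTTP request message by hand, to make its
# components visible (verb, URI, HTTP version, request headers, body).
# All names here are illustrative, not from any real service.
body = '{"user": "alice"}'
request = (
    "POST /api/users HTTP/1.1\r\n"           # verb, URI, HTTP version
    "Host: example.com\r\n"                   # request headers (metadata)
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"                                    # blank line ends the headers
    f"{body}"                                 # request body
)

# The request line and headers can be split back out the same way a
# server would parse them:
head, _, parsed_body = request.partition("\r\n\r\n")
request_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(request_line)             # POST /api/users HTTP/1.1
print(headers["Content-Type"])  # application/json
print(parsed_body)              # {"user": "alice"}
```

An HTTP response message has the same overall shape, with a status line (e.g. `HTTP/1.1 200 OK`) in place of the request line.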
<protocol>://<service-name>/<ResourceType>/<ResourceID>
Every time client interaction takes place, web services are to be provided with extra
information about each request so that they can interpret the client’s state.
808. For designing a secure RESTful web service, what are the best factors that should be
followed?
Ans: As HTTP URL paths are used as part of a RESTful web service, they need to be secured.
Some of the best practices include the following:
Perform validation of all inputs on the server to guard against SQL injection attacks.
Perform session-based authentication of the user whenever a request is made.
Never pass sensitive data like username, password, session token, etc. through the URL; these
should be passed via the POST method.
Methods like GET, POST, PUT, DELETE, etc. should be executed with proper restrictions.
A generic HTTP error message should be invoked wherever required.
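The first practice above, validating inputs against SQL injection, is commonly implemented with parameterized queries. Below is a minimal sketch using Python's sqlite3 with an invented in-memory users table; the table and data are assumptions for illustration, not part of the original answer:

```python
import sqlite3

# Sketch: guarding against SQL injection with parameterized queries.
# The users table and its contents are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, token TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'abc123')")

malicious = "alice' OR '1'='1"

# Unsafe: string concatenation lets the attacker's quote rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver binds the value, so the input is treated as data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(unsafe))  # 1 -- the injected OR clause matched every row
print(len(safe))    # 0 -- no user is literally named "alice' OR '1'='1"
```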
SOAPUI Web Services
810. What are the various approaches available for developing SOAP based web services?
Ans: There are basically 2 different approaches available for developing SOAP-based web services.
These are explained as follows
Contract-first approach: In this approach, the contract is defined first by XML and WSDL
and then java classes are derived from the contract.
Contract-last approach: In this approach, java classes are defined first and then the
contract is generated which is usually the WSDL file from the java class.
“Contract-first” method is the most preferred approach.
The service contract should be standardized containing all the description of the services.
There is loose coupling defining the less dependency between the web services and the
client.
It should follow Service Abstraction rule, which says the service should not expose the way
functionality has been executed to the client application.
Services should be reusable in order to work with various application types.
Services should be stateless having the feature of discoverability.
Services break big problems into little problems and allow diverse subscribers to use the
services.
Integration
Authentication
Authorization
Digital Signatures
Encryption processes
Operations: This defines the operations performed on a message in order to process it.
830. What are the points that should be considered by ports while binding?
Ans: WSDL allows extensibility elements, which are used to specify binding information. Below are
a few important points that should be kept in consideration while binding.
A port must not specify more than one address, and it must not specify any binding information
other than the address information.
There are two types of interfaces for a computer application. Command Line Interface is where you
type text and computer responds to that command. GUI stands for Graphical User Interface where
you interact with the computer using images rather than text.
Following are the GUI elements which can be used for interaction between the user and
application:
GUI testing is defined as the process of testing the system's Graphical User Interface of the
Application Under Test. GUI testing involves checking the screens with the controls like menus,
buttons, icons, and all types of bars - toolbar, menu bar, dialog boxes, and windows, etc.
GUI is what the user sees. Say you visit guru99.com; the homepage you see is the GUI
(graphical user interface) of the site. A user does not see the source code; only the interface is
visible to the user. The focus is especially on the design structure and on whether the images are
working properly.
In the above example, if we have to do GUI testing, we first check that the images are completely
visible in different browsers.
Also, the links are available, and the button should work when clicked.
Also, if the user resizes the screen, neither images nor content should shrink or crop or overlap.
Now that the basic concept of GUI testing is clear, a few questions may strike your mind.
To get the answer, think as a user, not as a tester. A user doesn't have any knowledge of the XYZ
software/application. It is the UI of the application which decides whether a user is going to use the
application further or not.
A normal user first observes the design and look of the application/software and how easy it is
to understand the UI. If a user is not comfortable with the interface or finds the application complex
to understand, he will never use that application again. That's why the GUI is a matter of
concern, and proper testing should be carried out in order to make sure that the GUI is free of bugs.
The following checklist will ensure detailed GUI Testing in Software Testing.
Check all the GUI elements for size, position, width, length, and acceptance of characters
or numbers. For instance, you must be able to provide inputs to the input fields.
Check you can execute the intended functionality of the application using the GUI
Check Error Messages are displayed correctly
Check for Clear demarcation of different sections on screen
Check Font used in an application is readable
Check the alignment of the text is proper
Check the Colour of the font and warning messages is aesthetically pleasing
Check that the images have good clarity
Check that the images are properly aligned
Check the positioning of GUI elements for different screen resolution.
Under this approach, graphical screens are checked manually by testers in conformance with the
requirements stated in the business requirements document.
GUI testing can be done using automation tools. This is done in 2 parts. During Record, test steps
are captured by the automation tool. During playback, the recorded test steps are executed on the
Application Under Test. Example of such tools - QTP.
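The record/playback idea described above can be sketched in a few lines. This is a toy illustration, not QTP's actual API: the "application" is just a dictionary of fields, and the recorded steps are plain tuples invented for the example:

```python
# Minimal sketch of the record/playback idea behind GUI automation tools.
# The "application" here is a plain dict of fields, a stand-in for a real AUT.
recorded_steps = []

def record(action, target, value=None):
    """During recording, each user step is captured as data."""
    recorded_steps.append((action, target, value))

def playback(steps, app_state):
    """During playback, the captured steps are re-executed on the AUT."""
    for action, target, value in steps:
        if action == "type":
            app_state[target] = value          # simulate typing into a field
        elif action == "click" and target == "submit":
            app_state["submitted"] = True      # simulate a button click
    return app_state

# Record phase: the tool captures the tester's interactions.
record("type", "username", "alice")
record("click", "submit")

# Playback phase: the same steps run against a fresh application state.
result = playback(recorded_steps, {})
print(result)  # {'username': 'alice', 'submitted': True}
```

Real tools record object identifiers and events instead of tuples, but the two-phase structure is the same.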
Some of the modelling techniques from which test cases can be derived:
Charts - Depicts the state of a system and checks the state after some input.
Decision Tables - Tables used to determine results for each input applied
Model-based testing is an evolving technique for generating test cases from the requirements. Its
main advantage, compared to the above two methods, is that it can determine the undesirable
states that your GUI can attain.
AutoHotkey - GPL
Selenium - Apache
Sikuli - MIT
Watir - BSD
Here we will use some sample test cases for the following screen.
Following below is the example of the Test cases, which consists of UI and Usability test
scenarios.
TC 01- Verify that the text box with the label "Source Folder" is aligned properly.
TC 02 - Verify that the text box with the label "Package" is aligned properly.
TC 03 – Verify that label with the name "Browse" is a button which is located at the end of Textbox
with the name "Source Folder."
TC 04 – Verify that label with the name "Browse" is a button which is located at the end of Textbox
with the name "Package."
TC 05 – Verify that the text box with the label "Name" is aligned properly.
TC 06 – Verify that the label "Modifiers" consists of 4 radio buttons with the name public, default,
private, protected.
TC 07 – Verify that the label "Modifiers" consists of 4 radio buttons which are aligned properly in a
row.
TC 08 – Verify that the label "Superclass" under the label "Modifiers" consists of a dropdown
which must be properly aligned.
TC 09 – Verify that the label "Superclass" consists of a button with the label "Browse" on it which
must be properly aligned.
TC 10 – Verify that clicking on any radio button changes the default mouse pointer to the hand
mouse pointer.
TC 11 – Verify that user must not be able to type in the dropdown of "Superclass."
TC 12 – Verify that there must be a proper error generated if something has been mistakenly
chosen.
TC 13 - Verify that the error must be generated in the RED colour wherever it is necessary.
TC 15 – Verify that a single radio button is selected by default every time.
TC 16 – Verify that the TAB key works properly for jumping from one field to the next.
TC 17 – Verify that all the pages must contain the proper title.
TC 19 – Verify that after updating any field a proper confirmation message must be displayed.
TC 20 - Verify that only one radio button can be selected, while more than a single checkbox may
be selected.
In Software Engineering, the most common problem while doing Regression Testing is that the
application GUI changes frequently. It is very difficult to test and identify whether it is an issue or
enhancement. The problem manifests when you don't have any documents regarding GUI changes.
Ranorex
Selenium
QTP
Cucumber
SilkTest
TestComplete
Squish GUI Tester
Why does software have bugs?
Mis-communication or no communication - as to the specifics of what an application should or
shouldn't do (the application's requirements)
Software complexity
Programming errors
Changing requirements
Time pressure
A. The finalized SRS will be placed in a project repository; we will access it from there
A. SRS stands for software requirement specification. The SRS is used to understand the project
functionality from business and functional points of view.
Note: In many projects the SRS itself is designed down to screen-level details of the application.
A. If it is a known domain, I can understand the requirements by going through the use cases. If I
have some queries, I will discuss them with the business analyst (BA) for clarification. If it is a new
domain, I will first get domain training and then go through the use cases. If the project
requirements are very confusing, the BA can also walk through each use case.
853. Should you understand the whole project functionality or only the functionality assigned to
you?
A. I should have the big picture of the whole project. In other words, I should have an overview of
the whole project and a detailed screen- and field-level understanding of the assigned functionalities.
854. What are the different models generally followed in documenting requirements?
A. Two models are followed in documenting requirements: the use case model and the paragraph
model. In the paragraph model, business requirements are written as paragraphs, which is the old
model. Nowadays almost all companies follow the use case model, where the requirements are
written by stating their clear objectives and are explained with the help of screenshots.
>allocating roles
>defining entry and exit criteria.
Kick-off:
>Distributing documents
>explaining the objectives
>checking entry criteria, etc.
Individual preparation:
> In this phase, each of the participants works before the review meeting and comes ready
with questions and comments.
Review Meeting:
> Discussion among the review members by going through each line of the work product.
> Logging comments
> Making decision about the defect.
Rework:
> Fixing defects found during the review, typically done by the author.
Follow-up:
> checking the defects that have been addressed.
>gathering metrics and checking the exit criteria.
through screens. To conduct dynamic testing you must use application screens and enter valid
and invalid inputs and verify the application behaviour. For static testing, we do not use any
screens of application instead we use static techniques like review. During review, experts go
through each line of the work products like requirement document, design document and
identify mistakes in these documents. Any mistakes identified during this review are nothing but
defects in the work product.
865. I want you to choose one among static and dynamic testing for your project. Which one will
you choose and why?
A. Static testing reduces the cost of a fix and dynamic testing gives complete confidence to
release the product. In my opinion both are equally important and both contribute equally to the
project's success, so I prefer to have both. However, if I have to choose one, I choose dynamic
testing, since I cannot let the product be released until I see with my own eyes that it is working.
866. Out of formal and informal review, which one do you prefer?
A. In my view, both are important: informal review is fast and formal review is effective. We have
to use both, depending on the artefact we are reviewing. I apply formal and informal review
techniques as follows.
Formal Review:
> Reviewing test case document created
> Reviewing test plan document created
> reviewing test scripts developed
Informal Review:
> reviewing tests used for retesting
> Reviewing minor changes in test case, test plan or test scripts.
877. What are the entry and exit criteria for test execution?
Entry criteria:
--------------------------------
> Coding should be completed
> Test cases should be ready and baselined
> RTM should be updated
> Test data should be ready and baselined
> The test environment/setup should be ready
> S/W tools should be ready and approved
Exit criteria:
-----------------------------
> All test cases must be executed and passed
> All defects identified must be fixed, retested and closed
> Test execution summary report must be prepared
879. How do you know you have a build ready for testing?
A. The frequency of build creation varies from project to project; however, the following is a guideline
for answering this question: In our project, automatic build creation and deployment happens every
x days. We receive a confirmation mail every morning about the successful deployment, along
with the URL for testing. Please refer to our build process FAQs for exactly how build deployment
and release work.
880. How many test cases can you execute per day?
A. It depends on the size and complexity of the test cases. I execute approximately 50 test
cases per day, which comes to about 40 pages.
884. Did you observe any application logs during the test execution?
A. Yes. We observe the application server logs to check whether the server has thrown any
runtime errors.
885. Do you run all regression tests for every bug fixed?
A. No, I don't run regression test cases for every bug fixed. I run regression tests once for every
build.
887. In the modules you have worked on, are there any issues identified after release?
A. If I am supposed to write stubs or drivers in the current project, I am confident that I can
handle it.
888. When you fill the data in the application form, how do you ensure that the data is stored in
the correct tables and columns?
A. We can write an SQL query to retrieve the data from the database and compare the query result
with the data we have filled in on the application forms.
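A minimal sketch of this check, using Python's sqlite3 with an invented customers table (the table and column names are assumptions for illustration, not from the original answer):

```python
import sqlite3

# Sketch: verifying that data entered on an application form actually
# landed in the expected table and columns. Names are illustrative.
form_data = {"first_name": "Ravi", "email": "ravi@example.com"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (first_name TEXT, email TEXT)")
# Pretend the application under test stored the submitted form here:
conn.execute("INSERT INTO customers VALUES (?, ?)",
             (form_data["first_name"], form_data["email"]))

# The tester's verification query:
row = conn.execute(
    "SELECT first_name, email FROM customers WHERE email = ?",
    (form_data["email"],)
).fetchone()

stored = dict(zip(("first_name", "email"), row))
print(stored == form_data)  # True -- the stored row matches the form input
```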
892. How do you know for which functionalities you should write test case?
A. My lead writes top-level requirements in QC and assigns them to each team member. We divide
the test requirements further into sub-requirements. Then we identify test conditions for each
sub-requirement and create test cases. The test cases are reviewed after that; reviewed and
approved test cases go to the ready state.
894. What is the difference between test scenario and test case?
A. A test scenario is a high-level description of a business requirement, which is later decomposed
into a set of test cases. These test cases are then reviewed and approved by peers; we follow a
formal review process for approving the test cases written for each functionality.
896. How do you find whether a test case is a good test case or bad test case?
A. A good test case is one which finds the bug or one which has a high probability of finding the
bug. A good test case should be documented clearly, so that it can be executed by anyone without
any difficulties and confusion.
897. What is the percentage of positive and negative test cases that you write?
A. Approx 30% positive and 70% negative
898. Do you update the test cases after receiving build based on the application screen?
A. During execution, if we feel any test case requires an update, we do it with the approval of
the team lead, but such updates are very limited.
899. Explain one scenario where you were not able to write test cases for a given requirement?
A. For a new domain; this was handled by putting in extra effort for a thorough understanding of
the domain.
900. What is the difference between a positive and negative test case?
A. A positive test case checks whether the system does what it is supposed to do, i.e. it checks
that we get the desired result with a valid set of inputs.
Ex: the user should be able to log in to the system with a valid user name and password.
Negative test case: A negative test case checks whether the system refrains from doing what it is
not supposed to do, i.e. it checks that the system generates the correct error or warning messages
with an invalid set of inputs.
Ex: if the user enters a wrong user name or password, the user should not be able to log in to the
system and an appropriate error message should be shown.
904. What parameters does the test manager consider in the decision to stop testing?
A. The important parameters a test manager looks into are:
> Whether all requirements have been developed or not.
> Whether all requirements have been covered through testing.
> Whether all defects have been handled through a fixed or deferred status.
CREATE TABLE table_name (
column_name1 data_type(size),
column_name2 data_type(size),
column_name3 data_type(size),
...
);
ALTER: The ALTER TABLE statement is used for modifying an existing table object in the database.
ALTER TABLE table_name
ADD column_name datatype
OR
ALTER TABLE table_name
DROP COLUMN column_name
3) DCL (Data Control Language): These statements are used to set privileges, such as GRANT and
REVOKE of database access permissions, for a specific user.
WHERE Clause: This clause is used to define the condition, extract and display only those records
which fulfil the given condition
Syntax: SELECT column_name(s)
FROM table_name
WHERE condition;
GROUP BY Clause: It is used with SELECT statement to group the result of the executed query
using the value specified in it. It matches the value with the column name in tables and groups the
end result accordingly.
Syntax: SELECT column_name(s)
FROM table_name
GROUP BY column_name;
HAVING clause: This clause is used in association with the GROUP BY clause. It is applied to each
group of the result, or to the entire result as a single group, and is much like the WHERE clause;
the only difference is that it cannot be used without a GROUP BY clause.
Syntax: SELECT column_name(s)
FROM table_name
GROUP BY column_name
HAVING condition;
ORDER BY clause: This clause defines the order of the query output, either ascending (ASC) or
descending (DESC). Ascending (ASC) is the default; descending (DESC) must be set explicitly.
Syntax: SELECT column_name(s)
FROM table_name
WHERE condition
ORDER BY column_name ASC|DESC;
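The four clauses can be seen working together in one query. The sketch below uses Python's sqlite3 with an invented orders table; the data is made up for illustration:

```python
import sqlite3

# Sketch: WHERE, GROUP BY, HAVING, and ORDER BY in a single query,
# on an invented orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("alice", 50), ("alice", 70), ("bob", 20), ("bob", 10), ("carol", 90),
])

# WHERE filters rows, GROUP BY groups them, HAVING filters the groups,
# and ORDER BY sorts the final result.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    WHERE amount > 15
    GROUP BY customer
    HAVING SUM(amount) >= 50
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('alice', 120), ('carol', 90)]
```

Note that WHERE removed bob's 10 before grouping, and HAVING then removed bob's group entirely because its remaining total (20) was below 50.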
USING clause: The USING clause comes into use while working with SQL joins. It is used to check
equality based on columns when tables are joined. It can be used instead of the ON clause in joins.
Syntax: SELECT column_name(s)
FROM table_name
JOIN table_name
USING (column_name);
915. Why do we use SQL constraints? Which constraints we can use while creating database in
SQL?
Constraints are used to set the rules for all records in the table. If any constraint is violated, the
action that caused the violation can be aborted.
Constraints are defined while creating the database itself with CREATE TABLE statement or even
after the table is created once with ALTER TABLE statement.
There are 4 major types of joins used while working with multiple tables in SQL databases:
INNER JOIN: Also known as a SIMPLE JOIN, it returns all rows from BOTH tables when
there is at least one matching column.
Syntax:
SELECT column_name(s)
FROM table_name1
INNER JOIN table_name2
ON column_name1=column_name2;
Example
Suppose we have a table Employee and a table Joining, each with an Emp_id column:
SELECT column_name(s)
FROM Employee
INNER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;
This returns the records where there is a matching Emp_id value in both the Employee and
Joining tables.
LEFT JOIN (LEFT OUTER JOIN): This join returns all rows from the LEFT table and its matched
rows from the RIGHT table.
Syntax: SELECT column_name(s)
FROM table_name1
LEFT JOIN table_name2
ON column_name1=column_name2;
Example (using the same Employee and Joining tables):
SELECT column_name(s)
FROM Employee
LEFT JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;
RIGHT JOIN (RIGHT OUTER JOIN): This join returns all rows from the RIGHT table and its
matched rows from the LEFT table.
Syntax: SELECT column_name(s)
FROM table_name1
RIGHT JOIN table_name2
ON column_name1=column_name2;
Example (using the same Employee and Joining tables):
SELECT column_name(s)
FROM Employee
RIGHT JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;
FULL JOIN (FULL OUTER JOIN): This join returns all rows when there is a match in either the
RIGHT table or the LEFT table.
Syntax: SELECT column_name(s)
FROM table_name1
FULL OUTER JOIN table_name2
ON column_name1=column_name2;
Example (using the same Employee and Joining tables):
SELECT column_name(s)
FROM Employee
FULL OUTER JOIN Joining
ON Employee.Emp_id = Joining.Emp_id
ORDER BY Employee.Emp_id;
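The joins above can be tried out with Python's sqlite3. The sample data for the Employee and Joining tables is invented here, and SQLite is chosen only because it needs no setup; note that older SQLite versions support INNER and LEFT JOIN but not RIGHT or FULL OUTER JOIN, so only the first two are shown:

```python
import sqlite3

# Sketch: INNER vs LEFT JOIN on Employee/Joining tables with invented data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (Emp_id INTEGER, Name TEXT)")
conn.execute("CREATE TABLE Joining (Emp_id INTEGER, Joining_Date TEXT)")
conn.executemany("INSERT INTO Employee VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "C")])
conn.executemany("INSERT INTO Joining VALUES (?, ?)",
                 [(1, "2020-01-01"), (2, "2020-02-01")])

# INNER JOIN keeps only the rows matched in BOTH tables.
inner = conn.execute("""
    SELECT Employee.Emp_id, Joining.Joining_Date
    FROM Employee INNER JOIN Joining
    ON Employee.Emp_id = Joining.Emp_id
    ORDER BY Employee.Emp_id
""").fetchall()

# LEFT JOIN keeps every Employee row; unmatched rows get NULL (None).
left = conn.execute("""
    SELECT Employee.Emp_id, Joining.Joining_Date
    FROM Employee LEFT JOIN Joining
    ON Employee.Emp_id = Joining.Emp_id
    ORDER BY Employee.Emp_id
""").fetchall()

print(inner)  # [(1, '2020-01-01'), (2, '2020-02-01')]
print(left)   # [(1, '2020-01-01'), (2, '2020-02-01'), (3, None)]
```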
In simple words, we can say that a transaction means a group of SQL queries executed on database
records.
Isolation: Ensures that all transactions are performed independently and that changes made
by one transaction are not reflected in others.
Durability: Ensures that changes made in the database by committed transactions
persist even after a system failure.
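A rollback makes the "group of queries" behaviour visible. The sketch below uses Python's sqlite3 with an invented accounts table and a simulated mid-transaction failure:

```python
import sqlite3

# Sketch: a transaction commits or rolls back as a unit.
# The accounts table and its data are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

try:
    # This UPDATE opens a new transaction...
    conn.execute("UPDATE accounts SET balance = balance - 40 "
                 "WHERE name = 'alice'")
    # ...but something goes wrong before we reach conn.commit():
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()  # atomicity: the partial update is undone

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100 -- the failed transaction left no trace
```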
Action and Event are the two main components of SQL triggers: when certain actions are performed,
the event occurs in response to that action.
GRANT Command: This command is used to provide database access to users other than the
administrator.
Syntax: GRANT privilege_name
ON object_name
TO {user_name|PUBLIC|role_name}
[WITH GRANT OPTION];
In the above syntax, WITH GRANT OPTION indicates that the user can grant the access to another
user too.
REVOKE Command: This command is used to deny or remove a user's access to database
objects.
Syntax: REVOKE privilege_name
ON object_name
FROM {user_name|PUBLIC|role_name};
System Privilege: System privileges deal with objects of a particular type and specify
the right to perform one or more actions on them. These include ADMIN (allows a user to perform
administrative tasks), ALTER ANY INDEX, ALTER ANY CACHE GROUP, CREATE/ALTER/DELETE
TABLE, CREATE/ALTER/DELETE VIEW, etc.
Object Privilege: This allows performing actions on an object or the objects of another user,
viz. table, view, index, etc. Some of the object privileges are EXECUTE, INSERT, UPDATE,
DELETE, SELECT, FLUSH, LOAD, INDEX, REFERENCES, etc.
Safe Access Sandbox: Here a user can perform SQL operations such as creating stored
procedures, triggers etc. but cannot have access to the memory and cannot create files.
External Access Sandbox: User can have access to files without having a right to
manipulate the memory allocation.
Unsafe Access Sandbox: This contains untrusted code where a user can have access to
memory.
933. How many row comparison operators are used while working with a subquery?
There are three row comparison operators used in subqueries: IN, ANY and ALL.
937. How to write a query to show the details of a student from Students table whose name
starts with K?
SELECT * FROM Student WHERE Student_Name LIKE 'K%';
Here the LIKE operator is used for pattern matching: 'K%' matches any name that starts with K.
938. What is the difference between Nested Subquery and Correlated Subquery?
A subquery within another subquery is called a Nested Subquery. If the output of a subquery
depends on the column values of the parent query table, then it is called a Correlated
Subquery.
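The difference can be seen with a small SQLite sketch (the Student table and its rows are assumptions): the nested subquery below is evaluated once on its own, while the correlated subquery references the outer alias `s1` and is re-evaluated for every outer row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Student (Name TEXT, Marks INTEGER);
INSERT INTO Student VALUES ('Kiran', 90), ('Asha', 75), ('Ravi', 90), ('Meena', 60);
""")

# Nested subquery: a subquery inside a subquery -- finds the second-highest marks.
nested = conn.execute("""
    SELECT Name FROM Student
    WHERE Marks = (SELECT MAX(Marks) FROM Student
                   WHERE Marks < (SELECT MAX(Marks) FROM Student))
""").fetchall()

# Correlated subquery: the inner query uses s1, so it runs once per outer row --
# students who scored above the average of all *other* students.
correlated = conn.execute("""
    SELECT Name FROM Student s1
    WHERE Marks > (SELECT AVG(Marks) FROM Student s2 WHERE s2.Name <> s1.Name)
""").fetchall()
print(nested, correlated)
```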
First Normal Form (1NF): It removes all duplicate columns from the table, creates a table
for related data and identifies unique column values.
Second Normal Form (2NF): Follows 1NF, creates and places data subsets in individual
tables and defines the relationships between tables using the primary key.
Third Normal Form (3NF): Follows 2NF and removes those columns which are not related
through the primary key.
Fourth Normal Form (4NF): Follows 3NF and does not permit multi-valued dependencies.
(Note: BCNF, sometimes called 3.5NF, is a stricter version of 3NF and is not the same as 4NF.)
Syntax:
CREATE Procedure Procedure_Name
(
-- Parameters
)
AS
BEGIN
SQL statements in stored procedures to update/retrieve records
END
Declare Cursor
Open Cursor
Retrieve row from the Cursor
Process the row
Close Cursor
Deallocate Cursor
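The lifecycle above uses T-SQL-style cursor statements; as a rough analogy, the same declare/open/fetch/process/close loop can be sketched with Python's sqlite3 cursor (the table and data are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (Emp_id INTEGER, Emp_name TEXT);
INSERT INTO Employee VALUES (1, 'Asha'), (2, 'Ravi');
""")

cur = conn.cursor()                         # Declare the cursor
cur.execute("SELECT Emp_id, Emp_name FROM Employee ORDER BY Emp_id")  # Open it
names = []
row = cur.fetchone()                        # Retrieve a row from the cursor
while row is not None:
    names.append(row[1].upper())            # Process the row
    row = cur.fetchone()
cur.close()                                 # Close the cursor
conn.close()                                # sqlite3 frees it; no explicit DEALLOCATE
print(names)
```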
Database Connectivity
Constraint Check
Required Application Field and its size
Data Retrieval and Processing With DML operations
Stored Procedures
Functional flow
Syntax:
CREATE INDEX index_name
ON table_name (column_name)
Further, we can also create a Unique Index using the following syntax:
Syntax:
CREATE UNIQUE INDEX index_name
ON table_name (column_name)
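A quick SQLite check (the Employee table, Email column and index name are assumptions) shows what the UNIQUE variant adds: besides speeding up lookups on the indexed column, it rejects duplicate values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (Emp_id INTEGER, Email TEXT)")
# A plain index only speeds up lookups; a UNIQUE index also enforces uniqueness.
conn.execute("CREATE UNIQUE INDEX idx_emp_email ON Employee (Email)")
conn.execute("INSERT INTO Employee VALUES (1, 'a@example.com')")
try:
    conn.execute("INSERT INTO Employee VALUES (2, 'a@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # the unique index blocked the duplicate email
```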
959. Is it possible for a table to have more than one foreign key?
Ans. Yes, a table can have many foreign keys and only one primary key.
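This can be verified with a minimal SQLite sketch (the tables and ids are made up): one table carries a single primary key plus two foreign keys, and violating either foreign key is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE Department (Dept_id INTEGER PRIMARY KEY);
CREATE TABLE Manager (Mgr_id INTEGER PRIMARY KEY);
-- One primary key, two foreign keys on the same table:
CREATE TABLE Employee (
    Emp_id  INTEGER PRIMARY KEY,
    Dept_id INTEGER REFERENCES Department(Dept_id),
    Mgr_id  INTEGER REFERENCES Manager(Mgr_id)
);
INSERT INTO Department VALUES (10);
INSERT INTO Manager VALUES (7);
""")
conn.execute("INSERT INTO Employee VALUES (1, 10, 7)")      # both FKs satisfied
try:
    conn.execute("INSERT INTO Employee VALUES (2, 99, 7)")  # no such department
    fk_violation_caught = False
except sqlite3.IntegrityError:
    fk_violation_caught = True
print(fk_violation_caught)
```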
960. What are the possible values for the BOOLEAN data field?
Ans. For a BOOLEAN data field, two values are possible: true and false (commonly stored as 1 and 0, though some systems represent true as -1).
967. Write a SQL SELECT query that returns each name only once from a table?
Ans. To get each name only once, we need to use the DISTINCT keyword.
SELECT DISTINCT name FROM table_name;
971. Suppose a Student table has two columns, Name and Marks. How do you get the name and
marks of the top three students?
Ans. SELECT Name, Marks FROM Student s1 WHERE 3 > (SELECT COUNT(*) FROM Student s2
WHERE s2.Marks > s1.Marks) ORDER BY Marks DESC;
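Against assumed sample data, the standard correlated-subquery form of this query (counting, for each row, how many students scored strictly higher, and keeping rows where fewer than three did) can be checked with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Student (Name TEXT, Marks INTEGER);
INSERT INTO Student VALUES ('Asha', 95), ('Ravi', 88), ('Kiran', 72),
                           ('Meena', 91), ('John', 65);
""")
# A row is in the top three when fewer than 3 students scored strictly higher.
top3 = conn.execute("""
    SELECT Name, Marks FROM Student s1
    WHERE 3 > (SELECT COUNT(*) FROM Student s2 WHERE s2.Marks > s1.Marks)
    ORDER BY Marks DESC
""").fetchall()
print(top3)
```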
UNION ALL – returns all rows selected by either query, including all duplicates.
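The contrast with plain UNION, which removes duplicates, can be checked in SQLite (tables A and B and their rows are assumptions for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (name TEXT);
CREATE TABLE B (name TEXT);
INSERT INTO A VALUES ('Asha'), ('Ravi');
INSERT INTO B VALUES ('Ravi'), ('Meena');
""")
# UNION returns distinct rows; UNION ALL keeps the duplicate 'Ravi'.
union_rows = conn.execute(
    "SELECT name FROM A UNION SELECT name FROM B ORDER BY name").fetchall()
union_all_rows = conn.execute(
    "SELECT name FROM A UNION ALL SELECT name FROM B").fetchall()
print(len(union_rows), len(union_all_rows))
```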
978. What is the difference between UNIQUE and PRIMARY KEY constraints?
Ans. A table can have only one PRIMARY KEY whereas there can be any number of UNIQUE
keys.
The primary key cannot contain Null values whereas Unique key can contain Null values.
991. List the various privileges that a user can grant to another user?
Ans. SELECT, CONNECT, RESOURCES.
996. What is the difference between Having clause and Where clause?
Ans. Both specify a search condition, but the HAVING clause is used only with the SELECT statement
and is typically used with the GROUP BY clause: WHERE filters individual rows before grouping,
while HAVING filters groups after aggregation.
If the GROUP BY clause is not used, HAVING behaves like a WHERE clause.
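A small SQLite sketch (the Orders table is an assumption) shows the order of operations: WHERE prunes rows first, then GROUP BY aggregates, then HAVING prunes the resulting groups.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (customer TEXT, amount INTEGER);
INSERT INTO Orders VALUES ('Asha', 100), ('Asha', 50), ('Ravi', 30), ('Ravi', 20);
""")
# WHERE filters rows before grouping; HAVING filters the aggregated groups.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM Orders
    WHERE amount > 25          -- drops Ravi's 20 before grouping
    GROUP BY customer
    HAVING SUM(amount) > 40    -- drops groups whose filtered total is too small
""").fetchall()
print(rows)
```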
997. What is the difference between Local and Global temporary table?
Ans. If defined inside a compound statement, a local temporary table exists only for the
duration of that statement, whereas a global temporary table's definition exists permanently in
the DB but its rows disappear when the connection is closed.
1001. What is the difference between client-server testing and web-based testing?
Ans: Projects are broadly divided into two types:
2 tier applications
3 tier applications
CLIENT-SERVER TESTING
This type of testing is usually done for 2-tier applications (usually developed for LAN).
The application launched on the front-end will have forms and reports for monitoring
and manipulating data.
WEB TESTING
This is done for 3-tier applications (developed for Internet / intranet / extranet).
The applications accessible in browser would be developed in HTML, DHTML, XML, JavaScript etc.
(We can monitor through these applications)
Unit testing
Integration testing
System testing
Sanity testing
Smoke testing
Interface testing
Regression testing
Beta/Acceptance testing
Performance Testing
Load testing
Stress testing
Volume testing
Security testing
Compatibility testing
Install testing
Recovery testing
Reliability testing
Usability testing
Compliance testing
Localization testing
Additional:
1004. What is the difference between the STLC (Software Testing Life Cycle) and SDLC (Software
Development Life Cycle)?
Ans: SDLC deals with the development/coding of the software, while STLC deals with the validation
and verification of the software.
There are different test deliverables at every phase of the software development lifecycle:
Before Testing
During Testing
After the Testing