Manual Testing Interview Questions
Ans. Software testing is the process of evaluating a system to check if it satisfies its business
requirements. It measures the overall quality of the system in terms of attributes like correctness,
completeness, usability, performance, etc. Basically, it is used to assure the stakeholders of the
application about the quality of the software.
Verification vs Validation
1. Verification is the process of evaluating the artifacts as well as the process of software development in order to ensure that the product being developed will comply with the standards. Validation is the process of validating that the developed software product conforms to the specified business requirements.
2. Verification is a static process of analysing the documents and not the actual end product. Validation involves dynamic testing of the software product by running it.
3. Verification is a process oriented approach. Validation is a product oriented approach.
4. Verification answers the question - "Are we building the product right?" Validation answers the question - "Are we building the right product?"
5. Errors found during verification require lesser cost/resources to get fixed as compared to those found during the validation phase. Errors found during validation require more cost/resources; the later the error is discovered, the higher is the cost to fix it.
Requirement analysis and validation - In this phase the requirements documents are
analysed and validated, and the scope of testing is defined.
Test planning - In this phase the test plan strategy is defined, the test effort is estimated, and
the automation strategy and tool selection are done.
Test Design and analysis - In this phase test cases are designed, test data is prepared and
automation scripts are implemented.
Test environment setup - A test environment closely simulating the real world environment
is prepared.
Test execution - The test cases are executed, bugs are reported and retested once resolved.
Test closure and reporting - A test closure report is prepared having the final test results
summary, learnings and test metrics.
Black box testing - In black box testing, the tester need not have any knowledge of the
internal architecture or implementation of the system. The tester interacts with the system
through the interface, providing input and validating the received output.
White box testing - In white box testing, the tester analyses the internal architecture of the
system as well as the quality of source code on different parameters like code optimization,
code coverage, code reusability etc.
Gray box testing - In gray box testing, the tester has partial access to the internal
architecture of the system, e.g. the tester may have access to the design documents or the
database structure. This information helps the tester to test the application better.
Ques.23. Give an example of Low priority-Low severity, Low priority-High severity, High
priority-Low severity, High priority-High severity defects.
Ans.
1. Low priority-Low severity - A spelling mistake in a page not frequently navigated by users.
2. Low priority-High severity - Application crashing in some very corner case.
3. High priority-Low severity - Slight change in logo color or spelling mistake in company
name.
4. High priority-High severity - Issue with login functionality.
Equivalence partitioning - Grouping test data into logical groups or equivalence classes with
the assumption that all the data items lying in a class will have the same effect on the
application.
Boundary value analysis - Testing using the boundary values of the equivalence classes
taken as the test input.
Decision tables - Testing using decision tables showing application's behaviour based on
different combination of input values.
Cause-effect graph - A graphical representation of inputs (causes) and outputs (effects) is
used for designing the test cases.
State transition testing - Testing based on state machine model.
Use case testing - Testing carried out using use cases.
Statement testing - Test scripts are designed to execute code statements and coverage is the
measure of line of code or statements executed by test scripts.
Decision testing/branch testing - Measure of the percentage of decision points(e.g. if-else
conditions) executed out of the total decision points in the application.
Condition testing - Testing the condition outcomes (TRUE or FALSE). Getting 100%
condition coverage requires exercising each condition for both TRUE and FALSE results
using test scripts (for n conditions we will have 2n test scripts).
Multiple condition testing - Testing the different combinations of condition outcomes. Hence,
for 100% coverage we will have 2^n test scripts. This is very exhaustive, and achieving 100%
coverage is very difficult.
Condition determination testing - An optimized form of multiple condition testing in
which the combinations that don't affect the outcome are discarded.
Path testing - Testing the independent paths in the system(paths are executable statements
from entry to exit points).
Ques.36. What is Statement testing and statement coverage in white box testing?
Ans. Statement testing is a white box testing approach in which test scripts are designed to execute
code statements.
Statement coverage is the measure of the percentage of statements of code executed by the test
scripts out of the total code statements in the application. The statement coverage is the least
preferred metric for checking test coverage.
Smoke testing is a type of testing in which all the major functionalities of the application are
tested before carrying out exhaustive testing, whereas sanity testing is a subset of regression
testing which is carried out when there is some minor fix in the application in a new build.
In smoke testing, shallow-wide testing is carried out, while in sanity testing, narrow-deep
testing (for a particular functionality) is done.
Smoke tests are usually documented or automated, whereas sanity tests are generally
undocumented and unscripted.
Independence - Testers should enjoy a certain degree of independence while testing the
application rather than following a straight path. This can be achieved through different
approaches, like off-shoring the QA process or having the test strategies and other test plans
prepared by someone free of any sort of bias.
Testing is often looked upon as a destructive activity, as it aims at finding flaws in the system.
QA personnel should instead present testing as a required, integral and constructive part of
the overall software development lifecycle, mitigating the risks at early stages.
More often than not, testers and developers are at opposite ends of the spectrum. Testers need
to have good interpersonal skills to communicate their findings without indulging in any sort
of tussle with the developers.
All the communication and discussions should be focused on facts and figures (risk
analysis, priority setting, etc.), emphasizing that the collaborative work of developers and
testers will lead to better software.
Empathy is one characteristic that definitely helps in testing. Testers empathizing with
developers and other project stakeholders will lead to a better relationship between the two.
Also, empathizing with end users will lead to software with better usability.
Ques.95. What is the difference between Testing and debugging?
Ans. Testing is primarily performed by the testing team in order to find defects in the system,
whereas debugging is an activity performed by the development team. In debugging, the cause of a
defect is located and fixed, thus removing the defect and preventing any future occurrence of the
defect as well.
Another difference between the two is that testing can be done without any internal knowledge of the
software architecture, whereas debugging requires knowledge of the software architecture and
coding.
Plan: It defines the goal and the plan for achieving that goal.
Do/Execute: The work is carried out according to the strategy and plan decided during the
plan stage.
Check: This is the testing part of the software development phase. It is used to ensure that
we are moving according to plan and getting the desired result.
Act: If any issue occurs during the check phase, this step takes appropriate corrective action
and revises the plan accordingly.
The developers do the "planning and building" of the project while testers do the "check" part of the
project.
2) What is the difference among white box, black box and gray box testing?
Black box Testing: The strategy of black box testing is based on requirements and specifications. It
requires no knowledge of the internal paths, structure, or implementation of the software being
tested.
White box Testing: White box testing is based on the internal paths, code structure, and
implementation of the software being tested. It requires detailed programming skills.
Gray box Testing: This is another type of testing in which we look into the box being tested, but
only to understand how it has been implemented. After that, we close the box and use black box
testing.
3)What are the advantages of designing tests early in the life cycle?
Designing tests early in the life cycle prevents defects from being introduced into the main code.
18) Which types of testing are important for web testing?
There are two types of testing which are very important for web testing:
Performance testing
Security testing
19) What is the difference between web application and desktop application in
the scenario of testing?
The difference between a web application and a desktop application is that a web application is open
to the world, with potentially many users accessing the application simultaneously at various times,
so load testing and stress testing are important. Web applications are also prone to all forms of
attacks, mostly DDoS, so security testing is also very important in the case of web applications.
Verification vs Validation
Verification is Static Testing. Validation is Dynamic Testing.
Verification occurs before Validation. Validation occurs after Verification.
Verification evaluates plans, documents, requirements and specifications. Validation evaluates the actual product.
In verification, the inputs are checklists, issues lists, walkthroughs and inspections. In validation testing, the actual product is tested.
Verification output is a set of documents, plans, specifications and requirement documents. In validation, the actual product is the output.
Retesting is done to verify that defects found earlier have been fixed and now work correctly;
regression testing, on the other hand, is performed to check that the defect fix has not impacted
other functionality that was working fine before the changes were made in the code.
Retesting is specific and is performed on the bug which is fixed, whereas regression testing is
not always specific to any defect fix; it is performed whenever any bug is fixed.
Retesting is concerned with executing those test cases that failed earlier, whereas regression
testing is concerned with executing test cases that passed in earlier builds.
Retesting has higher priority over regression testing.
30) How would you test the login feature of a web application?
There are many ways to test the login feature of a web application:
Sign in with valid login, Close browser and reopen and see whether you are still logged in or
not.
Sign in, then logout and then go back to the login page to see if you are truly logged out.
Login, then go back to the same page, do you see the login screen again?
Session management is important. You must focus on how do we keep track of logged in
users, is it via cookies or web sessions?
Sign in from one browser, open another browser to see if you need to sign in again?
Login, change password, and then logout, then see if you can login again with the old
password.
---------------------------------------------------------------------------------------------------------------
Types of Testing:-
A list of more than 100 software testing types along with definitions. A must read for any QA
professional.
1. Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria and to enable the customer to determine whether or not to
accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product to the
people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is
conducted by persons having disabilities.
3. Active Testing: Type of testing consisting in introducing test data and analyzing the
execution results. It is usually conducted by the testing team.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto,
emphasizing testing from the perspective of customers who will utilize the system. It is
usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The
evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to
'break' the system by randomly trying the system's functionality. It is performed by the
testing team.
7. Alpha Testing: Type of testing a software product or system conducted at the developer's
site. Usually it is performed by the end users.
8. Assertion Testing: Type of testing consisting in verifying if the conditions confirm the
product requirements. It is performed by the testing team.
9. API Testing: Testing technique similar to Unit Testing in that it targets the code level. Api
Testing differs from unit testing in that it is typically a QA task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible discrete
combinations of input parameters. It is performed by the testing teams.
11. Automated Testing: Testing technique that uses Automation Testing tools to control
the environment set-up, test execution and results reporting. It is performed by a computer
and is used inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity
measure of a procedural design and uses this as a guide for defining a basic set of execution
paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of
the developed software with older versions of the test environment. It is performed by
testing team.
14. Beta Testing: Final testing before releasing application for commercial purpose. It is
typically done by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and
data designed to evaluate the performance of computer hardware and software in a given
configuration. It is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual
program modules only when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for
portability across system platforms and environments, usually for conformation to an ABI
specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to
include representatives of boundary values. It is performed by the QA testing teams.
19. Bottom Up Integration Testing: In bottom-up Integration Testing, modules at the
lowest level are developed first and other modules which go towards the 'main' program are
integrated and tested one at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code
are tested at least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a product but
does not test features in detail. It is performed by testing teams.
22. Black box Testing: A method of software testing that verifies the functionality of an
application without having specific knowledge of the application's code/internal structure.
Tests are based on requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as
xUnit) that allow the execution of unit tests to determine whether various sections of the
code are acting as expected under various circumstances. It is performed by the development
teams.
24. Compatibility Testing: Testing technique that validates how well a software
performs in a particular hardware/software/operating system/network environment. It is
performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product strengths and
weaknesses with previous versions or other similar products. Can be performed by tester,
developers, product managers or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level
of integration - testing is done in the context of the application instead of just directly testing
a specific method. Can be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines minimal and optimal
configuration of hardware and software, and the effect of adding or modifying resources
such as memory, disk drives and CPU. Usually it is performed by the Performance Testing
engineers.
28. Condition Coverage Testing: Type of software testing where each condition is
exercised by making it both true and false, each at least once. It is typically done
by the automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was
developed in accordance with standards, procedures and guidelines. It is usually performed
by external companies which offer "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of
accessing the same application code, module or database records. It is usually done by
performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to
the specification on which it is based. It is usually performed by testing teams.
32. Context Driven Testing: An Agile Testing technique that advocates continuous and
creative evaluation of testing opportunities in light of the potential information revealed and
the value of that information to the organization at a specific moment. It is usually
performed by Agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from
existing systems for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each condition/decision
is executed by setting it on true/false. It is typically made by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the
specimen's failure, in order to understand a specimen's structural performance or material
behavior under different loads. It is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for
pre-existing software, initial states and configuration in order to maintain proper
functionality. It is usually performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the
dynamic behavior of code. It is typically performed by testing teams.
38. Domain Testing: White box testing technique which checks that the
program accepts only valid input. It is usually done by software development teams and
occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the
system to properly process erroneous transactions. It is usually performed by the testing
teams.
40. End-to-end Testing: Similar to system testing, involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications,
or systems if appropriate. It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other
problems that may occur with prolonged execution. It is usually performed by performance
engineers.
42. Exploratory Testing: Black box testing technique performed without planning and
documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input
data of a software unit into partitions of data from which test cases can be derived. it is
usually performed by the QA teams.
44. Fault injection Testing: Element of a comprehensive test strategy that enables the
tester to concentrate on the manner in which the application under test is able to handle
exceptions. It is performed by QA teams.
45. Formal verification Testing: The act of proving or disproving the correctness of
intended algorithms underlying a system with respect to a certain formal specification or
property, using formal methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the
specifications of the software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or
random data to the inputs of a program - a special area of mutation testing. Fuzz testing is
performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavily testing of one
particular module. It is performed by quality assurance teams, usually when running full
testing.
49. Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but using some
knowledge of its internal workings. It can be performed by either development or testing
teams.
50. Glass box Testing: Similar to white box testing, based on knowledge of the internal
logic of an application's code. It is performed by development teams.
51. GUI software Testing: The process of testing a product that uses a graphical user
interface, to ensure it meets its written specifications. This is normally done by the testing
teams.
52. Globalization Testing: Testing method that checks proper functionality of the
product with any of the culture/locale settings using every type of international input
possible. It is performed by the testing team.
53. Hybrid Integration Testing: Testing technique which combines top-down and
bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is
usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual software
modules are combined and tested as a group. It is usually conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or components
pass data and control correctly to one another. It is usually performed by both testing and
development teams.
56. Install/uninstall Testing: Quality assurance work that focuses on what customers
will need to do to install and set up the new software successfully. It may involve full, partial
or upgrades install/uninstall processes and is typically done by the software testing engineer
in conjunction with the configuration manager.
57. Internationalization Testing: The process which ensures that product's functionality
is not broken and all the messages are properly externalized when used in different
languages and locale. It is usually performed by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the application to
ensure that the interconnection between applications functions correctly. It is usually done by the
testing teams.
59. Keyword-driven Testing: Also known as table-driven testing or action-word testing,
is a software testing methodology for automated testing that separates the test creation
process into two distinct stages: a Planning Stage and an Implementation Stage. It can be
used by either manual or automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device and
measures its response. It is usually conducted by the performance engineers.
61. Localization Testing: Part of software testing process focused on adapting a
globalized application to a particular culture/locale. It is normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops. It is
performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are designed and
reviewed by the team before executing it. It is done by manual testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the functions
performed by the people while preparing the data and using these data from the automated
system. It is conducted by testing teams.
65. Model-Based Testing: The application of Model based design for designing and
executing the necessary artifacts to perform software testing. It is usually performed by
testing teams.
66. Mutation Testing: Method of software testing which involves modifying programs'
source code or byte code in small ways in order to test sections of the code that are seldom
or never accessed during normal tests execution. It is normally conducted by testers.
67. Modularity-driven Testing: Software testing technique which requires the creation
of small, independent scripts that represent modules, sections, and functions of the
application under test. It is usually performed by the testing team.
68. Non-functional Testing: Testing technique which focuses on testing of a software
application for its non-functional requirements. Can be conducted by the performance
engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail" - testing method where the tests' aim
is showing that a component or system does not work. It is performed by manual or
automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or
component in its operational environment. Usually it is performed by testing teams.
71. Orthogonal array Testing: Systematic, statistical way of testing which can be
applied in user interface testing, system testing, Regression Testing, configuration testing
and performance testing. It is performed by the testing team.
72. Pair Testing: Software development technique in which two team members work
together at one keyboard to test the software application. One does the testing and the other
analyzes or reviews the testing. This can be done between one Tester and Developer or
Business Analyst or between two testers with both participants taking turns at driving the
keyboard.
73. Passive Testing: Testing technique consisting in monitoring the results of a running
system without introducing any special test data. It is performed by the testing team.
74. Parallel Testing: Testing technique which has the purpose to ensure that a new
application which has replaced its older version has been installed and is running correctly.
It is conducted by the testing team.
75. Path Testing: Typical white box testing which has the goal to satisfy coverage
criteria for each logical path through the program. It is usually performed by the
development team.
76. Penetration Testing: Testing method which evaluates the security of a computer
system or network by simulating an attack from a malicious source. Usually they are
conducted by specialized penetration testing companies.
77. Performance Testing: Testing conducted to evaluate the compliance of a
system or component with specified performance requirements. It is usually conducted by
the performance engineer.
78. Qualification Testing: Testing against the specifications of the previous release,
usually conducted by the developer for the consumer, to demonstrate that the software meets
its specified requirements.
79. Ramp Testing: Type of testing consisting in raising an input signal continuously
until the system breaks down. It may be conducted by the testing team or the performance
engineer.
80. Regression Testing: Type of software testing that seeks to uncover software errors
after changes to the program (e.g. bug fixes or new functionality) have been made, by
retesting the program. It is performed by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system recovers
from crashes, hardware failures, or other catastrophic problems. It is performed by the
testing teams.
82. Requirements Testing: Testing technique which validates that the requirements are
correct, complete, unambiguous, and logically consistent and allows designing a necessary
and sufficient set of test cases from those requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system protects data
and maintains functionality as intended. It can be performed by testing teams or by
specialized security-testing companies.
84. Sanity Testing: Testing technique which determines if a new software version is
performing well enough to accept it for a major testing effort. It is performed by the testing
teams.
85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story
to help a person think through a complex problem or system for a testing environment. It is
performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests which tests a software
application for measuring its capability to scale up - be it the user load supported, the
number of transactions, the data volume etc. It is conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each
statement in a program is executed at least once during program testing. It is usually
performed by the development team.
88. Static Testing: A form of software testing where the software isn't actually executed;
it checks mainly the sanity of the code, algorithm, or documents. It is used by the developer
who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an application
will crash. It is usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic components of a
software system to ensure that they work properly. Typically, smoke testing is conducted by
the testing team, immediately after a software build is made .
91. Storage Testing: Testing type that verifies the program under test stores data files in
the correct directories and that it reserves sufficient space to prevent unexpected termination
resulting from lack of space. It is usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component at or
beyond the limits of its specified requirements. It is usually conducted by the performance
engineer.
93. Structural Testing: White box testing technique which takes into account the
internal structure of a system or component and ensures that each program statement
performs its intended function. It is usually performed by the software developers.
94. System Testing: The process of testing an integrated hardware and software system
to verify that the system meets its specified requirements. It is conducted by the testing
teams in both development and target environment.
95. System integration Testing: Testing process that exercises a software system's
coexistence with others. It is usually performed by the testing teams.
96. Top Down Integration Testing: Testing technique that involves starting at the top
of a system hierarchy at the user interface and using stubs to test from the top down until the
entire system has been implemented. It is conducted by the testing teams.
97. Thread Testing: A variation of top-down testing technique where the progressive
integration of components follows the implementation of subsets of the requirements. It is
usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies if assets created with older versions
can be used properly and that user's learning is not challenged. It is performed by the testing
teams.
99. Unit Testing: Software verification and validation method in which a programmer
tests if individual units of source code are fit for use. It is usually conducted by the
development team.
100. User Interface Testing: Type of testing which is performed to check how user-
friendly the application is. It is performed by testing teams.
101. Usability Testing: Testing technique which verifies the ease with which a user can
learn to operate, prepare inputs for, and interpret outputs of a system or component. It is
usually performed by end users.
102. Volume Testing: Testing which confirms that any values that may become large over
time (such as accumulated counts, logs, and data files), can be accommodated by the
program and will not cause the program to stop working or degrade its operation in any
manner. It is usually conducted by the performance engineer.
103. Vulnerability Testing: Type of testing which regards application security and has the
purpose to prevent problems which may affect the application integrity and stability. It can
be performed by the internal testing teams or outsourced to specialized companies.
104. White box Testing: Testing technique based on knowledge of the internal logic of an
application's code and includes tests like coverage of code statements, branches, paths,
conditions. It is performed by software developers.
105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific
workflows which are expected to be utilized by the end-user. It is usually conducted by
testing teams.
------------------------------------------------------------------------------------------------------------------------
Ques.10. Explain DDL commands. What are the different DDL commands in SQL?
Ans. DDL refers to Data Definition Language; it is used to define or alter the structure of the
database. The different DDL commands are - CREATE, ALTER, DROP, TRUNCATE and RENAME.
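For illustration, a minimal sketch of these commands (the table and column names below are hypothetical):
CREATE TABLE Employee_Details (EmpID INT, Name VARCHAR(30));  -- CREATE: define a new table
ALTER TABLE Employee_Details ADD Salary INT;                  -- ALTER: change the structure of an existing table
TRUNCATE TABLE Employee_Details;                              -- TRUNCATE: remove all rows but keep the structure
DROP TABLE Employee_Details;                                  -- DROP: remove the table along with its structure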
Ques.12. Explain DCL commands. What are the different DCL commands in SQL?
Ans. DCL refers to Data Control Language; these commands are used to create roles, grant
permissions and control access to the database objects. The three DCL commands are - GRANT,
REVOKE and DENY.
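A small sketch of these commands (the user and table names are hypothetical; DENY is SQL Server specific):
GRANT SELECT, INSERT ON Employee_Details TO TestUser;   -- give permissions on an object
REVOKE INSERT ON Employee_Details FROM TestUser;        -- take back a previously granted permission
DENY DELETE ON Employee_Details TO TestUser;            -- explicitly forbid a permission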
Ques.13. Explain TCL commands. What are the different TCL commands in SQL?
Ans. TCL refers to Transaction Control Language; it is used to manage the changes made by DML
statements. These commands are used to process a group of SQL statements comprising a logical unit.
The three TCL commands are - COMMIT, ROLLBACK and SAVEPOINT.
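A sketch of a transaction using these commands (SQL Server syntax, where the savepoint is created with SAVE TRANSACTION; the table name is assumed):
BEGIN TRANSACTION;
UPDATE Employee_Details SET Salary = Salary * 1.10;   -- DML change inside the transaction
SAVE TRANSACTION BeforeDelete;                        -- savepoint
DELETE FROM Employee_Details WHERE Age > 60;
ROLLBACK TRANSACTION BeforeDelete;                    -- undo changes made after the savepoint
COMMIT;                                               -- make the remaining changes permanent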
Ques.17. What is the difference between unique key and primary key?
Ans. A unique key allows a null value (although only one), but a primary key doesn't allow null values.
A table can have more than one unique key column, while there can be only one primary key. A
unique key column creates a non-clustered index, whereas a primary key creates a clustered index on
the column.
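As an illustration (a sketch; the table and column names are assumed), both constraints on one table:
CREATE TABLE Employee_Details (
    EmpID INT PRIMARY KEY,        -- no NULLs allowed, only one primary key, clustered index by default
    Email VARCHAR(50) UNIQUE,     -- allows a single NULL, non-clustered index by default
    Phone VARCHAR(15) UNIQUE      -- a table can have several unique key columns
);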
2. Left Join - To fetch all rows from left table and matching rows of the right table
SELECT * FROM TABLE1 LEFT JOIN TABLE2 ON TABLE1.columnA = TABLE2.columnA;
3. Right Join - To fetch all rows from right table and matching rows of the left table
SELECT * FROM TABLE1 RIGHT JOIN TABLE2 ON TABLE1.columnA = TABLE2.columnA;
4. Full Outer Join - To fetch all rows of left table and all rows of right table
SELECT * FROM TABLE1 FULL OUTER JOIN TABLE2 ON TABLE1.columnA =
TABLE2.columnA;
5. Self Join - Joining a table to itself, for referencing its own data
SELECT * FROM TABLE1 T1, TABLE1 T2 WHERE T1.columnA = T2.columnB;
Ques.28. What is the difference between cross join and full outer join?
Ans. A cross join returns the Cartesian product of the two tables, so there is no condition or ON clause,
as each row of tableA is joined with each row of tableB, whereas a full outer join joins the two
tables on the basis of the condition specified in the ON clause, and for the records not satisfying the
condition, a null value is placed in the join result.
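For illustration, a sketch reusing the TABLE1/TABLE2 names from the join examples above:
SELECT * FROM TABLE1 CROSS JOIN TABLE2;          -- Cartesian product, no ON clause
SELECT * FROM TABLE1 FULL OUTER JOIN TABLE2
ON TABLE1.columnA = TABLE2.columnA;              -- matched rows joined, unmatched rows padded with NULLs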
Now if we want to fetch the Employees who are working on more than one project, we will first
have to group by the Employee column along with the count of projects, and then the 'having' clause
can be used to fetch the relevant records -
SELECT Employee FROM Emp_Project GROUP BY Employee HAVING COUNT(Project) > 1;
Ques.30. What is the difference between Union and Union All command?
Ans. The fundamental difference between Union and Union All command is, Union is by default
distinct i.e. it combines the distinct result set of two or more select statements. Whereas, Union All
combines all the rows including duplicates in the result set of different select statements.
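A quick sketch (the table and column names are assumed):
SELECT City FROM Customers
UNION
SELECT City FROM Suppliers;        -- distinct cities only
SELECT City FROM Customers
UNION ALL
SELECT City FROM Suppliers;        -- all cities, duplicates retained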
Count() - Returns the count of the number of rows returned by the SQL expression
Max() - Returns the max value out of the total values
Min() - Returns the min value out of the total values
Avg() - Returns the average of the total values
Sum() - Returns the sum of the values returned by the SQL expression
*Remember: Delete with joins requires the table name/alias before the FROM clause in order to specify
the table whose data is to be deleted.
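A sketch of such a delete (SQL Server style; the table and column names are assumed):
DELETE T1
FROM TABLE1 T1
JOIN TABLE2 T2 ON T1.columnA = T2.columnA
WHERE T2.columnB = 'obsolete';     -- the alias T1 before FROM specifies which table's rows are deleted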
General SQL Interview Questions:
1. What is a Database?
A database is a collection of information in organized form for faster and better access, storage and
manipulation. It can also be defined as collection of tables, schema, views and other database
objects.
2. What is Database Testing?
It is also known as back-end testing or data testing.
Database testing involves verifying the integrity of the data in the front end with the data present in
the back end. It validates the schema, database tables, columns, indexes, stored
procedures, triggers, data duplication, orphan records and junk records. It also involves updating
records in the database and verifying the same in the front end.
3. What is the difference between GUI Testing and Database Testing?
CREATE TABLE EMP_DETAILS(EmpID int NOT NULL, NAME VARCHAR (30) NOT NULL, Age INT CHECK (AGE > 18), PRIMARY KEY (EmpID));
Atomicity
Consistency
Isolation
Durability
52. What is the difference between Delete, Truncate and Drop command?
The difference between the Delete, Truncate and Drop command are
Delete command is a DML command; it is used to delete rows from a table. It can be rolled
back.
Truncate is a DDL command; it is used to delete all the rows from the table and free the
space containing the table. It cannot be rolled back.
Drop is a DDL command; it removes the complete data along with the table structure (unlike
the truncate command that removes only the rows). All the table's rows, indexes and privileges
will also be removed.
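For illustration (the table name is assumed):
DELETE FROM Employee_Details WHERE Age > 60;   -- DML, removes selected rows, can be rolled back
TRUNCATE TABLE Employee_Details;               -- DDL, removes all rows, table structure remains
DROP TABLE Employee_Details;                   -- DDL, removes the rows and the table structure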
53. What is the difference between Delete and Truncate?
This is one of the most important SQL Interview Questions. The differences between the Delete and
Truncate commands are:
Delete statement is used to delete rows from a table. It can be rolled back.
Truncate statement is used to delete all the rows from the table and free the space containing
the table. It cannot be rolled back.
We can use a WHERE condition in a DELETE statement and can delete only the required rows.
We cannot use a WHERE condition in a TRUNCATE statement, so we cannot delete the required
rows alone.
We can delete specific rows using DELETE.
We can only delete all the rows at a time using TRUNCATE.
Delete is a DML command.
Truncate is a DDL command.
Delete maintains a log, and its performance is slower than Truncate.
Truncate maintains a minimal log and is faster performance-wise.
We need DELETE permission on the table to use the DELETE command.
We need at least ALTER permission on the table to use the TRUNCATE command.
54. What is the difference between Union and Union All command?
This is one of the most important SQL Interview Questions
Union: It omits duplicate records and returns only distinct result set of two or more select
statements.
Union All: It returns all the rows including duplicates in the result set of different select statements.
55. What is the difference between Having and Where clause?
Where clause is used to fetch data from the database that meets a particular criterion, whereas the
Having clause is used along with 'GROUP BY' to fetch data that meets a particular criterion
specified by the aggregate functions. The Where clause cannot be used with aggregate functions, but
the Having clause can.
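A small sketch combining both clauses (the table and column names are assumed):
SELECT Department, AVG(Salary) AS AvgSalary
FROM Employee_Details
WHERE Age > 25                     -- WHERE filters individual rows before grouping
GROUP BY Department
HAVING AVG(Salary) > 5000;         -- HAVING filters the groups using an aggregate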
56. What are aggregate functions in SQL?
SQL aggregate functions return a single value, calculated from the values in a column. Some of the
aggregate functions in SQL are Count(), Sum(), Avg(), Min() and Max().
INSERT INTO Employee_Details (Employee_Name, Salary, Age) VALUES ('John', 5500, 29);
USE TestDB
GO
SELECT * FROM sys.Tables
GO
65. How to fetch values from TestTable1 that are not in TestTable2 without using NOT
keyword?
--------------
| TestTable1 |
--------------
| 11 |
| 12 |
| 13 |
| 14 |
--------------

--------------
| TestTable2 |
--------------
| 11 |
| 12 |
--------------
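One possible answer, assuming the single column is named ID (SQL Server supports EXCEPT; a LEFT JOIN with an IS NULL check would also avoid the NOT keyword):
SELECT ID FROM TestTable1
EXCEPT
SELECT ID FROM TestTable2;   -- returns 13 and 14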
66. How to get each name only once from a employee table?
By using DISTINCT keyword, we could get each name only once.
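For example (the table and column names are assumed):
SELECT DISTINCT Employee_Name FROM Employee_Details;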
SELECT GetDate();
70. Write an SQL Query to find an Employee_Name whose Salary is equal or greater than
5000 from the below table Employee_Details.
-----------------------------
| Employee_Name | Salary |
-----------------------------
| John          | 2500   |
| Emma          | 3500   |
| Mark          | 5500   |
| Anne          | 6500   |
-----------------------------
Syntax:
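A possible query, reconstructed from the question and the output shown below:
SELECT Employee_Name, Salary FROM Employee_Details WHERE Salary >= 5000;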
Output:
-----------------------------
| Employee_Name | Salary |
-----------------------------
| Mark          | 5500   |
| Anne          | 6500   |
-----------------------------
71. Write an SQL Query to find the list of Employee_Names starting with 'E' from the below table
-----------------------------
| Employee_Name | Salary |
-----------------------------
| John          | 2500   |
| Emma          | 3500   |
| Mark          | 5500   |
| Anne          | 6500   |
-----------------------------
Syntax:
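A possible query, reconstructed from the question and the output shown below:
SELECT Employee_Name, Salary FROM Employee_Details WHERE Employee_Name LIKE 'E%';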
-----------------------------
| Employee_Name | Salary |
-----------------------------
| Emma          | 3500   |
-----------------------------
72. Write SQL SELECT query that returns the FirstName and LastName from
Employee_Details table.
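A possible query, using the table and column names given in the question:
SELECT FirstName, LastName FROM Employee_Details;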
sp_rename 'OldTableName', 'NewTableName'
sp_rename 'TableName.OldColumnName', 'NewColumnName'
74. How to select all the even number records from a table?
To select all the even number records from a table:
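One common approach, assuming the table has a numeric ID column (SQL Server syntax):
SELECT * FROM Employee_Details WHERE EmpID % 2 = 0;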
75. How to select all the odd number records from a table?
To select all the odd number records from a table:
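Similarly, assuming a numeric ID column:
SELECT * FROM Employee_Details WHERE EmpID % 2 = 1;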
------------
| Gender |
------------
| M      |
| F      |
| NULL   |
| m      |
| f      |
| g      |
| H      |
| i      |
------------
SELECT Gender,
       CASE WHEN Gender = 'i' THEN 'U'
            WHEN Gender = 'g' THEN 'U'
            WHEN Gender = 'H' THEN 'U'
            WHEN Gender = 'NULL' THEN 'N'
            ELSE UPPER(Gender)
       END AS newgender
FROM TestTable
GROUP BY Gender
select case when null = null then 'SQLServer' else 'MySQL' end as Result;
This query will return "MySQL". In the above query we can see that null = null is not the proper
way to compare a null value.
79. What will be the result of the query below?
select case when null is null then 'SQLServer' else 'MySQL' end as Result;
This query will return "SQLServer", since "null is null" is the correct way to check for a null value
and evaluates to true.
------------------------
| Name | Gender |
------------------------
| John | M      |
| Emma | F      |
| Mark | M      |
| Anne | F      |
------------------------
UPDATE TestTable SET Gender = CASE Gender WHEN 'F' THEN 'M' ELSE 'F' END
2) What is a bug?
A bug is a mismatch between the expected and the actual behaviour of the software application.
Test plan document is prepared by the test lead or team lead. It contains contents like
introduction, objectives, test strategy, scope, test items, program modules, user
procedures, features to be tested, features not to be tested, approach, pass or fail criteria, testing
process, test deliverables, testing tasks, responsibilities, resources, schedule,
environmental requirements, risks & contingencies, change management procedures, plan
approvals, etc. All these things help a test manager understand the testing he should do and
what he should follow for testing that particular project.
5) What is the basis for test case review?
The main bases for the test case review are:
1. testing techniques oriented review
2. requirements oriented review
3. defects oriented review.
Connected modules are checked for their integrity after bug fixation.
It is the time gap between the old version build and the new version build; in the new version build,
some new extra features are added.
9) What are test deliverables?
Test deliverables are nothing but documents prepared during testing, like the test plan
document, test case documents, bug reports, etc.
Test deliverables will be delivered to the client not only for the completed activities, but
also for the activities which we are implementing for better productivity (as per the
company's standards).
10) If a project is a long term project and the requirements also keep changing, will the test plan
change or not? Why?
A testing phase where the tester tries to 'break' the system by randomly trying
the system's functionality. Can include negative testing as well. See also
Monkey Testing.
A white box test case design technique that uses the algorithmic flow of the
program to design tests.
The point at which some deliverable produced during the software engineering
process is put under formal change control.
Tests which focus on the boundary or limit conditions of the software being
tested. (Some of these tests are stress tests).
Testing in which all branches in the program source code are tested at least
once.
A test suite that exercises the full functionality of a product but does not test
features in detail.
A test tool that records test input as it is sent to the software under test. The
input cases stored can then be used to reproduce the test at a later time. Most
commonly applied to GUI test tools.
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for
judging the maturity of the software processes of an organization and for
identifying the key practices that are required to increase the maturity of these
processes.
A graphical representation of inputs and the associated outputs effects which can be used to design
test cases.
Phase of development where functionality is implemented in entirety; bug fixes are all that are left.
All functions found in the Functional Specifications have been implemented.
An analysis method that determines which parts of the software have been executed (covered) by
the test case suite and which parts have not been executed and therefore may require additional
attention.
A formal testing technique where the programmer reviews source code with a
group who ask questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding standards.
A formal testing technique where source code is traced by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyze the programmer's logic and
assumptions.
Testing whether software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.
Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.
The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to testing conformance to a formal standard.
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous
and creative evaluation of testing opportunities in light of the potential information revealed and the
value of that information to the organization right now.
A database that contains definitions of all data items defined during analysis.
Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.
Examines an application's requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality.
A device, computer program, or system that accepts the same inputs and produces the same outputs
as a given system.
Checks for memory leaks or other problems that may occur with prolonged
execution
51. What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
A portion of a component's input or output domains for which the component's behaviour is
assumed to be the same from the component's specification.
A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.
Testing which covers all combinations of input values and preconditions for an element of the
software under test.
A technique used during planning, analysis and design; creates a functional hierarchy for the
software.
54. What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended
features.
Testing the features and operational behavior of a product to ensure they correspond to its
specifications. Testing that ignores the internal mechanism of a system or component and focuses
solely on the outputs generated in response to selected inputs and execution conditions; also known
as Black Box Testing.
A combination of Black Box and White Box testing methodologies: testing a piece of software
against its specification but using some knowledge of its internal workings.
A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).
Testing of combined parts of an application to determine if they function together correctly. Usually
performed after unit and functional testing. This type of testing is especially relevant to client/server
and distributed systems.
Confirms that the application under test recovers from expected or unexpected events without loss
of data or functionality. Events can include shortage of disk space, unexpected loss of
communication, or power out conditions.
This term refers to making software specifically designed for a particular locality.
A standard of measurement. Software metrics are the statistics describing the structure or content of
a program. A metric should be a real objective measurement of something such as number of bugs
per lines of code.
Testing a system or an Application on the fly, i.e just few tests here and there to ensure the system
or an application does not crash out.
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.
Testing in which all paths in the program source code are tested at least once.
Testing conducted to evaluate the compliance of a system or component with specified performance
requirements. Often this is performed using an automated test tool to simulate a large number of
users. Also known as "Load Testing".
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
All those planned or systematic actions necessary to provide adequate confidence that a product or
service is of the type and quality needed and expected by the customer.
A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.
75. What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or
other matters related to the quality of outputs of a process and to the correction of problems or to
the improvement of quality.
The operational techniques and the activities used to fulfill and verify requirements of quality.
That aspect of the overall management function that determines and implements the quality policy.
The overall intentions and direction of an organization as regards quality as formally expressed by
top management.
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a
write, with no mechanism used by either to moderate simultaneous access.
Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions
Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.
A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released).
Brief test of major functional elements of a piece of software to determine if it is basically
operational.
Performance testing focused on ensuring the application under test gracefully handles increases in
work load.
Testing which confirms that the program can restrict access to authorized personnel and that the
authorized personnel can access the functions available to their security level.
A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and considering it
a success if it does not catch on fire.
Running a system at high load for a prolonged period of time. For example, running several times
more transactions in an entire day (or night) than would be expected in a busy day, to identify any
performance problems that appear after a large number of transactions have been executed.
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all
validation requirements for software.
Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected
termination resulting from lack of space. This is external storage as opposed to
internal storage.
Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components.
Test Case is a commonly used term for a specific test. This is usually the
smallest unit of testing. A Test Case will consist of information such as
requirements testing, test steps, verification steps, prerequisites, outputs, test
environment, etc. A set of inputs, execution preconditions, and expected
outcomes developed for a particular objective, such as to exercise a particular
program path or to verify compliance with a specific requirement.
Test Driven Development: Testing methodology associated with Agile Programming in
which every chunk of code is covered by unit tests, which must all pass all the
time, in an effort to eliminate unit-level and regression bugs during
development. Practitioners of TDD write a lot of tests, i.e. an equal number of
lines of test code to the size of the production code.
A program or test tool used to execute tests. Also known as a Test Harness.
The hardware and software environment in which tests will be run, and any
other software with which the software under test interacts when under test
including stubs and test drivers.
A program or test tool used to execute tests. Also known as a Test Driver.
Commonly used to refer to the instructions for a particular test that will be
carried out by an automated test tool.
Testing the ease with which users can learn and use a product.
118. What is Use Case?
The specification of tests that are conducted from the end-user perspective.
Use cases tend to focus on operating software as an end-user would conduct
their day-to-day activities.
119. What is Unit Testing?
Testing of individual software components.
124. How will you evaluate a tool for test automation?
We need to concentrate on the features of the tool and how they could be
beneficial for our project. Additional new features and enhancements of
existing features will also help.
125. How will you describe testing activities?
Testing activities start from the elaboration phase. The various testing activities
are preparing the test plan, preparing test cases, executing the test cases, logging
the bugs, validating the bugs and taking appropriate action for them, and automating the
test cases.
Automate all the high priority test cases which need to be executed as a part
of regression testing for each build cycle.
Memory leaks mean incomplete deallocation - they are bugs that happen very
often. Buffer overflow means data sent as input to the server that overflows
the boundaries of the input area, thus causing the server to misbehave. Buffer
overflows can be exploited by attackers.
Stress testing means increasing the load beyond the expected level and checking the performance at
each level. Load testing means applying the expected load at one time and
checking the performance at that level. Volume testing means applying a large
volume of data to the application and checking how it behaves.
Q: How do you introduce a new software QA process?
A: It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, a serious management buy-in is required
and a formalized QA process is necessary. For medium size organizations with
lower risk projects, management and organizational buy-in and a slower, step-
by-step process is required. Generally speaking, QA processes should be
balanced with productivity, in order to keep any bureaucracy from getting out
of hand. For smaller groups or projects, an ad-hoc process is more
appropriate. A lot depends on team leads and managers, feedback to
developers and good communication is essential among customers, managers,
developers, test engineers and testers. Regardless of the size of the company, the
greatest value for effort is in managing requirement processes, where the goal
is requirements that are clear, complete and testable.
A: Good test engineers have a "test to break" attitude. We, good test
engineers, take the point of view of the customer; have a strong desire for
quality and an attention to detail. Tact and diplomacy are useful in maintaining
a cooperative relationship with developers and an ability to communicate with
both technical and non-technical people. Previous software development
experience is also helpful as it provides a deeper understanding of the software
development process, gives the test engineer an appreciation for the
developers' point of view and reduces the learning curve in automated test tool
programming.
A: A test case is a document that describes an input, action, or event and its
expected result, in order to determine if a feature of an application is working
correctly. A test case should contain particulars such as a...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely
think through the operation of the application. For this reason, it is useful to
prepare test cases early in the development cycle, if possible.
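Purely as an illustration (the application, field values and IDs below are hypothetical), the same particulars can be captured as structured data; many teams keep exactly these fields in a spreadsheet or a test management tool. A minimal Python sketch:

    # A hypothetical login test case expressed as a simple dictionary.
    test_case = {
        "id": "TC-LOGIN-001",
        "name": "Valid login with a registered user",
        "objective": "Verify that a registered user can log in",
        "setup": ["User account 'demo_user' exists", "Application is reachable"],
        "steps": [
            "Open the login page",
            "Enter user name 'demo_user' and the valid password",
            "Click the 'Sign in' button",
        ],
        "input_data": {"username": "demo_user", "password": "<valid password>"},
        "expected_result": "The dashboard page is displayed for 'demo_user'",
    }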
A: In this situation the best bet is to have test engineers go through the
process of reporting whatever bugs or problems initially show up, with the
focus being on critical bugs.
Since this type of problem can severely affect schedules and indicates deeper
problems in the software development process, such as insufficient unit
testing, insufficient integration testing, poor design, improper build or release
procedures, managers should be notified and provided with some
documentation as evidence of the problem.
Use risk analysis to determine where testing should be focused. This requires
judgment skills, common sense and experience. The checklist should include
answers to the following questions:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex and thus most subject to errors?
A: Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the
considerations listed under "What if there isn't enough time for thorough
testing?" do apply. The test engineer then should do "ad hoc" testing, or write
up a limited test plan based on the risk analysis.
A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.
Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and
preparing logical groups of functions that can be further broken into test procedures. Test
procedures define test conditions, data to be used for testing and expected results, including
database updates, file outputs, report results. Generally speaking...
• Test cases and scenarios are designed to represent both typical and unusual situations that may
occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also execute unit
test cases.
• It is the test team that, with assistance of developers and clients, develops test cases and
scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the process or
transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to ensure
each test is within scope.
• Test data is captured and baselined prior to testing. This data serves as the foundation for unit
and system testing and is used to exercise system functionality in a controlled environment.
• Some output data is also base-lined for future comparison. Base-lined data is used to support
future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment and data
to be tested. A test readiness document is created to indicate the status of the entrance criteria of the
release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from general
and detailed design documents, e.g. software design document, source code, and software
complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions, and test data.
• Reports of software design issues, given to software developers for correction.
Q: How do you execute tests?
A: Execution of tests is completed by following the test documents in a methodical manner. As each
test procedure is performed, an entry is recorded in a test execution log to note the execution of the
procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are
held throughout the execution phase. Checkpoint meetings are held daily, if required, to address and
discuss testing issues, status and activities.
• The output from the execution of test procedures is known as test results. Test results are
evaluated by test engineers to determine whether the expected results have been obtained. All
discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead,
programmers, software engineers and documented for further investigation and resolution. Every
company has a different process for logging and reporting bugs/defects uncovered during testing.
• A pass/fail criteria is used to determine the severity of a problem, and results are recorded in a
test summary report. The severity of a problem, found during system testing, is defined in
accordance to the customer's risk assessment and recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the problem.
Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion
of the test, members of the test team prepare a summary report. The summary report is reviewed by
the Project Manager, Software QA Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the Configuration
Manager to coordinate the migration of the release software components to the next test level, as
documented in the Configuration Management Plan. The software is only migrated to the
production environment after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and update documents
where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
• Software that has been migrated to the test environment, i.e. unit tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This needs to be
approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements document
and Design Document problems.
• Reports on software design issues, given to software developers for correction. Examples are
bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for migration to the next
level.
Q: How do you create a test strategy?
A: The test strategy is a formal description of how a software product will be tested. A test strategy
is developed for all levels of testing, as required. The test team analyzes the requirements, writes the
test strategy and reviews the plan with the project team. The test plan may include test cases,
conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
• A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system can not provide, e.g. system limitations.
Outputs for this process:
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the project
management level.
Q: What is security clearance?
A: The levels of classified access are confidential, secret, top secret, and sensitive compartmented
information, of which top secret is the highest.
* Title
* Table of Contents.
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Traceability requirements
* Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their
impact on test validity.
* Software CM processes
* Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs
* Personnel allocation
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues.
* Open issues
* Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.
* Complete information such that developers can understand the bug, get an idea of its severity, and
reproduce it if necessary.
* Bug identifier (number, ID, etc.)
* The function, module, feature, object, screen, etc. where the bug occurred
* Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn't have easy access to the test case/test script/test tool
* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem
* Tester name
* Test date
* Description of fix
* Retest date
* Retest results
* The best bet in this situation is for the testers to go through the process of reporting whatever bugs
or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of
problem can severely affect schedules, and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient integration testing, poor design, improper
build or release procedures, etc.) managers should be notified, and provided with some
documentation as evidence of the problem.
This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors in
deciding when to stop are:
* Use risk analysis to determine where testing should be focused. Since it's rarely possible to test
every possible aspect of an application, every possible combination of events, every dependency, or
everything that could go wrong, risk analysis is appropriate to most software development projects.
This requires judgement skills, common sense, and experience. (If warranted, formal methods are
also available.) Considerations can include:
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex, and thus most subject to errors?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the most customer service complaints?
* Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described previously
in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc
testing, or write up a limited test plan based on the risk analysis.
* Work with the project's stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.
* It's helpful if the application's initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.
* If the code is well-commented and well-documented this makes changes easier for the developers.
* Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.
* The project's initial schedule should allow for some extra time commensurate with the possibility
of changes.
* Try to move new requirements to a 'Phase 2' version of an application, while using the original
requirements for the 'Phase 1' version.
* Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.
* Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted - after all, that's their job.
* Balance the effort put into setting up automated testing with the expected effort required to re-do
them to deal with changes.
* Focus initial automated testing on application aspects that are most likely to remain unchanged.
* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans)
* Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding
of the added risk that this entails).
* It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of the
unexpected functionality. If the functionality only affects areas such as minor improvements in the
user interface, for example, it may not be a significant risk.
* This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:
* Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
* Everyone in the organization should be clear on what 'quality' means to the customer
* Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for
web resources with listings that include these kinds of test tools.)
* Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-
in applications), and applications that run on the server side (such as cgi scripts, database interfaces,
logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of
servers and browsers, various versions of each, small but sometimes significant differences between
them, variations in connection speeds, rapidly changing technologies, and multiple standards and
protocols. The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:
* Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds and
similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser
types)?
* What kind of performance is expected on the client side (e.g., how fast should pages appear, how
fast should animations, applets, etc. load and run)?
* Will down time for server and content maintenance/upgrades be allowed? how much?
* How reliable are the site's Internet connections required to be? And how does that affect backup
system or redundant connection requirements and testing?
* What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
* Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?
* Will there be any standards or requirements for page appearance and/or graphics throughout a site
or parts of a site?
* How will internal and external links be validated and updated? how often?
* Can testing be done on the production system, or will a separate test system be required? How are
browser caching, variations in browser option settings, dial-up connection variabilities, and real-
world internet 'traffic congestion' problems to be accounted for in testing?
* How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?
* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.
* The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be included
on each page.
What is Extreme Programming and what's it got to do with testing?
* Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in his
book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.). Testing
('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write
unit and functional test code first - before the application is developed. Test code is under source
control along with the rest of the code. Customers are expected to be an integral part of the project
team and to help develop scenarios for acceptance/black box testing. Acceptance tests are
preferably automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team. Detailed
requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-
prioritizing is expected.
A: Load testing is testing an application under heavy loads, such as the testing of a web site under a
range of loads to determine at what point the system response time will degrade or fail.
A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation
test for a release is conducted with the objective of demonstrating production readiness.
This test includes the inventory of configuration items, performed by the application's System
Administration, the evaluation of data readiness, and dynamic tests focused on basic system
functionality. When necessary, a sanity test is performed, following installation testing.
A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually requires sophisticated
testing techniques.
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
A: Compatibility testing is testing how well software performs in a particular hardware, software,
operating system, or network environment.
A: Comparison testing is testing that compares software weaknesses and strengths to those of
competitors' products.
A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system functionality and usability prior to the system being released to
production.
The acceptance test is the responsibility of the client/customer or project manager, however, it is
conducted with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.
A: Alpha testing is testing of an application when development is nearing completion. Minor design
changes can still be made as a result of alpha testing. Alpha testing is typically performed by a
group that is independent of the design team, but still within the company, e.g. in-house software
test engineers, or software QA engineers.
A: Beta testing is testing an application when development and testing are essentially completed
and final bugs and problems need to be found before the final release. Beta testing is typically
performed by end-users or others, not programmers, software engineers, or test engineers.
A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.
A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator,
Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.
A: We, test engineers, are engineers who specialize in testing. We, test engineers, create test cases,
procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of
measurements, evaluate results of system/integration/regression testing. We also...
• Speed up the work of the development staff;
• Reduce your organization's risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws, before users
get discouraged, before shareholders lose their cool and before employees get bogged down;
• Help the work of your development staff, so the development team can devote its time to build
up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs and
design flaws damage the reputation of your company.
A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating system,
set-up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a Test Build Manager.
A: Test Build Managers, System Administrators, Database Administrators deliver current software
versions to the test environment, install the application's software and apply software patches, to
both the application and the operating system, set-up, maintain and back up test environment
hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer
may also wear the hat of a System Administrator.
A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.
A: Technical Analysts perform test assessments and validate system/functional test requirements.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Technical Analyst.
A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Test Configuration Manager.
A: The test schedule is a schedule that identifies all tasks required for a successful testing effort, a
schedule of all test activities and resource requirements.
A: One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. G C Reddy believes that
using this methodology is important in the development and ongoing maintenance of his clients'
applications.
1) What is Quality?
The degree to which a Component, system or process meets specified
requirements and/or customer needs and expectations.
8) What is Verification?
Verification is intended to check that a product or system meets a set of initial
design requirements, specifications.
9) What is Validation?
Validation is intended to check that the developed product or system meets the customer's needs
and the specified requirements, i.e. that the right product has been built.
The Capability Maturity Model covers best practices for planning, engineering
and managing software development and maintenance.
Note: CMM has been scrapped and CMMI (Capability Maturity Model Integration) has been
launched in its place.
- Setting goals
- Establishing timetables for the project.
- Monitoring the use of time for maximum efficiency.
- Estimating the resources, both material and human, required by the project
and ensuring that they are distributed and used properly.
- Setting a budget for the project and keeping it within that budget.
- Organizing relevant documents and records.
- Analyzing the current conditions of the project and predicting future trends so
as not to be caught off-guard by changes.
- Identifying potential risks to the project and developing a risk management
plan.
- Managing the quality and finding ways to improve quality.
- Monitoring the closing stages of a project that is near completion.
- Facilitating communication among project members and between the project
members and outside stakeholders.
A Functional Requirement specifies WHAT the system or application SHOULD DO, whereas a
Non-Functional Requirement specifies HOW the system or application SHOULD BE.
Authentication
Business rules
Historical Data
Legal and Regulatory Requirements
External Interfaces
Performance
Reliability
Security
Recovery
Data Integrity
Usability
The terms Priority and Severity are used in bug tracking to convey the importance of a bug
among the team and to decide how urgently it should be fixed.
Severity: Is defined from the application point of view.
1. The Severity status is used to explain how badly the deviation is affecting the build.
2. The severity type is defined by the tester based on the written test cases and
functionality.
Example
If an application or a web page crashes when a remote link is clicked: clicking the
remote link is something a user would rarely do, but the impact of the application crashing is severe,
so the severity is high and the priority is low.
Priority: Is defined from the customer/business point of view.
1. The Priority status is set by the tester for the developer, mentioning the time frame to fix
a defect. If High priority is mentioned then the developer has to fix it at the earliest.
2. The priority status is set based on the customer requirements.
Example
If the company name is misspelled on the home page of a website, then the priority to fix it is high
and the severity is low.
Few examples:
High Severity and Low Priority -> Application doesn't allow customer expected
configuration.
High Severity and High Priority -> Application doesn't allow multiple users.
Low Severity and High Priority -> No error message to prevent wrong operation.
Low Severity and low Priority -> Error message is having complex meaning.
Or
Few examples:
Supposing you try the wildest or the weirdest of operations in a software product (say, one to be released
the next day) which a normal user would not do, and supposing this renders a run-time error
in the application, the severity would be high. The priority would be low, as the operations or
the steps which rendered this error will in all likelihood not be performed by a user.
An example would be: you find a spelling mistake in the name of the website which you are
testing. Say, the name is supposed to be Google and it is spelled there as 'Gaogle'. Though it
doesn't affect the basic functionality of the software, it needs to be corrected before the
release. Hence, the priority is high.
A bug which is a show stopper, i.e. a bug due to which we are unable to proceed with our
testing. An example would be a run-time error during the normal operation of the
software, which would cause the application to quit abruptly.
Cosmetic bugs
A defect is a product anomaly or flaw, which is a variance from the desired product specification. The
classification of a defect based on its impact on the operation of the product is called Defect Severity.
Bucket testing (also known as A/B Testing) is mostly used to study the impact of various
product designs on website metrics; two simultaneous versions are run on a single web page or a set of
web pages to measure the difference in click rates, interface and traffic.
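A minimal sketch, assuming Python and a hypothetical user_id, of how visitors might be split deterministically between the two page versions; real bucket testing platforms add ramp-up percentages, tracking and metric collection on top of this.

    import hashlib

    def assign_bucket(user_id: str) -> str:
        """Deterministically assign a user to version 'A' or 'B' of a page."""
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # The same user always lands in the same bucket, so the click rates of
    # the two versions can be compared over time.
    print(assign_bucket("user-42"))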
6. What is Entry and Exit Criteria in Software Testing?
Entry Criteria defines the conditions that must be met before testing of a system can begin, e.g. the
test environment, test data and a testable build are available.
Exit Criteria ensures whether testing is completed and the application is ready for release, e.g. all
planned test cases have been executed and there are no open critical defects.
Concurrency Testing (also commonly known as Multi-User Testing) is used to know the effects
of accessing the Application, Code Module or Database by different users at the same time. It
helps in identifying and measuring problems in response time, levels of locking and
deadlocking in the application.
Example
LoadRunner is widely used for this type of testing; VuGen (Virtual User Generator) is used to
add the number of concurrent users and to define how the users are to be added, e.g. gradual ramp-up
or spike/stepped.
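A small sketch, assuming plain Python threads and an in-memory counter, of what a concurrency test tries to expose; tools such as LoadRunner do the same against a real application at much larger scale. The read and the write on the shared counter are deliberately separated so that lost updates (a race condition) become visible.

    import threading

    counter = 0  # shared resource with no locking

    def book_ticket(times):
        global counter
        for _ in range(times):
            current = counter        # read the shared value
            counter = current + 1    # write it back; not atomic with the read

    threads = [threading.Thread(target=book_ticket, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; a smaller number means updates were lost concurrently.
    print("final counter:", counter)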
Statement Coverage or Code Coverage or Line Coverage is a metric used in White Box Testing
where we can identify the statements executed and where the code is not executed because of
blockage. In this process each and every line of the code needs to be checked and executed.
Branch Coverage or Decision Coverage metric is used to check the volume of testing done in all
components. This process is used to ensure that all the code is executed by verifying every
branch or decision outcome (if and while statements) at least once, so that no
branch leads to a failure of the application.
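A small Python sketch of the difference (the classify() function is hypothetical): a single test with a = 5 executes every statement of the function, giving 100% statement coverage, but it never takes the False outcome of the if, so branch coverage stays incomplete until a second test such as a = 0 is added.

    def classify(a):
        label = "non-positive"
        if a > 0:                 # the decision/branch point
            label = "positive"
        return label

    # Statement coverage: every line above runs with this single input.
    assert classify(5) == "positive"

    # Branch coverage additionally requires the False outcome of the condition.
    assert classify(0) == "non-positive"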
10. What is the difference between High level and Low Level test case?
High level test cases are those which cover major functionality in the application (i.e.
retrieve, update, display, cancel (functionality related test cases), database test cases).
Low level test cases are those related to the User Interface (UI) of the application.
Localization Testing verifies how correctly the application has been changed or adapted
to the target culture and language.
Where translation of the application into the local language is required, testing should be done
on each field to check that the translation is correct. Other formats like date conversion, and hardware
and software usage such as the operating system, should also be considered in localization testing.
In Islamic Banking all the transactions and product features are based on Shariah Law, some
important points to be noted in Islamic Banking are
1. In Islamic Banking, the bank shares the profit and loss with the customer.
2. In Islamic Banking, the bank cannot charge interest on the customer; instead they
charge a nominal fee which is termed "Profit".
3. In Islamic Banking, the bank will not deal or invest in business like Gambling, Alcohol,
Pork, etc.
In this case, we need to test whether these Islamic banking conditions were modified and
applied in the application or product.
In Islamic Lending, they follow both the Gregorian calendar and the Hijri calendar for calculating
the loan repayment schedule. The Hijri calendar, commonly called the Islamic calendar, is
followed in Muslim countries and is based on the lunar cycle. The Hijri calendar has 12
months and 354 days, which is 11 days shorter than the Gregorian calendar. In this case, we need
to test the repayment schedule by comparing both the Gregorian and Hijri calendars.
In Software Testing, Risk Analysis is the process of identifying risks in applications and
prioritizing them to test.
13. What is the difference between Two Tier Architecture and Three Tier
Architecture?
In Two Tier Architecture or Client/Server Architecture, two layers, Client and Server,
are involved. The Client sends a request to the Server and the Server responds to the request by
fetching the data from itself. The problem with the Two Tier Architecture is that the server cannot
respond to multiple requests at the same time, which causes data integrity issues.
Client/Server Testing involves testing the Two Tier Architecture, with the user interface in the
front end and the database as the back end, along with the dependencies on the client, hardware and servers.
In Three Tier Architecture or Multi Tier Architecture, three layers, Client, Server and
Database, are involved. The Client sends a request to the Server, the Server forwards the
request to the Database, and based on that request the Database sends the data back to the
Server, from where it is forwarded to the Client.
The Web Application Testing involves testing the Three Tier Architecture including the User
interface, Functionality, Performance, Compatibility, Security and Database testing.
14. What is the difference between Static testing and dynamic testing?
Static Testing is a White Box testing technique where the developers verify or test their code
with the help of a checklist to find errors in it; this type of testing is done without running the
actual developed application or program. Code Reviews, Inspections and Walkthroughs are
mostly done in this stage of testing.
Dynamic Testing is done by executing the actual application with valid inputs to check the
expected output. Examples of Dynamic Testing methodologies are Unit Testing, Integration
Testing, System Testing and Acceptance Testing.
Static Testing is more cost effective than Dynamic Testing because Static Testing is done
in the initial stage.
In terms of Statement Coverage, the Static Testing covers more areas than Dynamic
Testing in shorter time.
Static Testing is done before code deployment whereas Dynamic Testing is done
after code deployment.
Static Testing is done in the Verification stage whereas Dynamic Testing is done in the
Validation stage.
15. Explain Use case diagram. What are the attributes of use cases?
16. What is Web Application testing? Explain the different phases in Web Application
testing?
Web Application testing is done on a website to check its load, performance, security,
functionality, interface, compatibility and other usability related issues. In Web application
testing, three phases of testing are done:
In Web tier testing, the browser compatibility of the application will be tested for IE, Firefox
and other web browsers.
In Middle tier testing, the functionality and security issues are tested.
In Database tier testing, the database integrity and the contents of the database are tested
and verified.
17. Explain Unit testing, Interface Testing and Integration testing. Also explain the
types of integration testing in brief?
Unit testing
Unit Testing is done to check whether the individual modules of the source code are working
properly, i.e. testing each and every unit of the application separately by the developer in
the developer's environment.
Interface Testing
Interface Testing is done to check whether the individual modules are communicating properly
with one another as per the specifications.
Interface testing is mostly used in testing the user interface of a GUI application.
Integration testing
Integration Testing is done to check the connectivity by combining all the individual modules
together and testing the functionality.
In Big Bang Integration Testing, the individual modules are not integrated until all the modules
are ready. Then they are run together to check whether the integrated system is performing well.
Defects can be found only at a later stage, and it would be difficult to find out whether a defect
arose in an interface or in a module.
In Top Down Integration Testing, the high level modules are integrated and tested first, i.e.
testing from the main module to the sub modules. In this type of testing, Stubs are used as
temporary modules if a module is not ready for integration testing.
In Bottom Up Integration Testing, the low level modules are integrated and tested first, i.e.
testing from the sub modules to the main module. Just as Stubs are used in top down testing,
Drivers are used here as temporary modules for integration testing.
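A minimal sketch, assuming Python and hypothetical module names, of the difference between a stub (stands in for a lower-level module that is not ready, as in top down integration) and a driver (a throw-away caller that exercises a lower-level module whose real caller is not ready, as in bottom up integration).

    # Top down: the order module is ready but the payment module is not,
    # so a stub returns a canned response in its place.
    def payment_gateway_stub(amount):
        return {"status": "approved", "amount": amount}

    def place_order(amount, charge=payment_gateway_stub):
        result = charge(amount)
        return "order confirmed" if result["status"] == "approved" else "order failed"

    assert place_order(250) == "order confirmed"

    # Bottom up: the tax module is ready but its caller is not,
    # so a small driver calls it directly with test inputs.
    def calculate_tax(amount):
        return round(amount * 0.18, 2)

    def tax_driver():
        assert calculate_tax(100) == 18.0
        assert calculate_tax(0) == 0.0

    tax_driver()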
Alpha Testing:
Alpha Testing is mostly like performing usability testing which is done by the in-house
developers who developed the software or testers. Sometimes this Alpha Testing is done by
the client or an outsider with the presence of developer and tester. The version release after
alpha testing is called Alpha Release.
Beta Testing:
Beta Testing is done by a limited number of end users before delivery; change requests and defects
are fixed based on the feedback and defect reports the users give. The version released after beta
testing is called the Beta Release.
Gamma Testing:
Gamma Testing is done when the software is ready for release with specified requirements,
this testing is done directly by skipping all the in-house testing activities.
19. Explain the methods and techniques used for Security Testing?
a. Session Hijacking
Session Hijacking, commonly called "IP Spoofing", is where a user session is attacked on a
protected network.
b. Session Prediction
Session prediction is a method of obtaining the data or the session ID of an authorized user and
gaining access to the application. In a web application the session ID can be retrieved from cookies
or the URL.
A session prediction attack can be suspected when a website is not responding normally
or stops responding for an unknown reason.
c. Email Spoofing
Email Spoofing is duplicating the email header ("From" address) to make the email look like it
originated from the actual source; if the email is replied to, the reply will land in the spammer's
inbox. By inserting commands in the header the message information can be altered, so it is
possible to send a spoofed email with information you didn't write.
d. Content Spoofing
Content spoofing is a technique to develop a fake website and make the user believe that the
information and website are genuine. When the user enters his Credit Card Number, Password,
SSN and other important details, the hacker can get the data and use it for fraud purposes.
e. Phishing
Phishing is similar to Email Spoofing; the hacker sends a genuine-looking mail
attempting to get the personal and financial information of the user. The emails appear to
have come from well known websites.
f. Password Cracking
1. Brute Force – The hacker tries combinations of characters within a given length
until one is accepted.
2. Password Dictionary – The hacker uses a password dictionary, where candidate
passwords are available on various topics.
SQL Injection is the most popular code injection attack; the hacker attaches malicious code
to the good code by inserting it into a field in the application. The motive behind the injection is
to steal secured information which was intended to be used only by a set of users.
Apart from SQL Injection, the other types of malicious code injection are XPath Injection, LDAP
Injection, and Command Execution Injection. Similar to SQL Injection, XPath Injection deals
with XML documents.
b. Penetration Testing:
Penetration Testing is used to check the security of a computer or a network. The test process
explores all the security aspects of the system and tries to penetrate the system.
c. Input validation:
Input validation is used to defend applications from hackers. If the input is not validated,
especially in web applications, it could lead to system crashes, database manipulation and
corruption.
d. Variable Manipulation
Variable manipulation is used as a method for specifying or editing the variables in a program.
It is mostly used to alter the data sent to web server.
3. Database Level
a. SQL Injection
SQL Injection is used to hack websites by changing the backend SQL statements; using
this technique the hacker can steal data from the database and also delete or modify it.
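A hedged sketch, using Python's built-in sqlite3 module and a made-up users table, of why the injection works and how a parameterised query defends against it; the inputs and schema are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    malicious = "' OR '1'='1"

    # Vulnerable: the attacker's value becomes part of the SQL text, the
    # WHERE clause is always true, and every row is returned.
    unsafe_sql = "SELECT * FROM users WHERE name = '" + malicious + "'"
    print(conn.execute(unsafe_sql).fetchall())     # leaks all rows

    # Safer: the value is passed as a bound parameter and never parsed as SQL.
    safe_sql = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe_sql, (malicious,)).fetchall())   # no rows returned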
20. Explain IEEE 829 standards and other Software Testing standards?
The IEEE 829 standard is used for Software Test Documentation; it specifies the format for
the set of documents to be used in the different stages of software testing. The documents are:
Test Plan- Test Plan is a planning document which has information about the scope,
resources, duration, test coverage and other details.
Test Design- Test Design document has information of test pass criteria with test conditions
and expected results.
Test Case- Test case document has information about the test data to be used.
Test Procedure- Test Procedure has information about the test steps to be followed and how
to execute it.
Test Log- The Test Log has details about the test cases that were run, their pass/fail status, the order
of execution, and the resource (person) who tested them.
Test Incident Report- Test Incident Report has information about the failed test comparing
the actual result with expected result.
Test Summary Report- Test Summary Report has information about the testing done and
quality of the software, it also analyses whether the software has met the requirements given
by customer.
Test Harness is configuring a set of tools and test data to test an application in various
conditions, which involves comparing the actual output with the expected output for correctness.
22. What is the difference between bug log and defect tracking?
Bug Log: A Bug Log is a document showing the number of defects (open, closed, reopened or
deferred) for a particular module.
Defect Tracking- The process of tracking a defect's details, such as its symptom, whether it is
reproducible or not, its priority, severity and status.
Integration Testing:
Regression Testing
It is re-execution of our testing after the bug is fixed to ensure that the build is free
from bugs.
Done after bug is fixed
It is done by Tester
It is an alternative form of testing, where some colleagues are invited to examine your work
products for defects and improvement opportunities.
Some Peer review approaches are,
Inspection
It is a more systematic and rigorous type of peer review. Inspections are more effective at
finding defects than are informal reviews.
Ex: In Motorola's Iridium project nearly 80% of the defects were detected through inspections,
whereas only 60% of the defects were detected through informal reviews.
Team Reviews: It is a planned and structured approach but less formal and less rigorous
compared to Inspections.
Walkthrough: It is an informal review because the work product's author describes it to some
colleagues and asks for suggestions. Walkthroughs are informal because they typically do not
follow a defined procedure, do not specify exit criteria, require no management reporting, and
generate no metrics.
Peer Desk Check: In a peer desk check only one person besides the author examines the work
product. It is an informal review, where the reviewer can use defect checklists and some analysis
methods to increase the effectiveness.
Passaround: It is a multiple, concurrent peer desk check where several people are invited to
provide comments on the product.
Traceability Matrix is a document used for tracking requirements, test cases and defects.
This document is prepared to satisfy the clients that the test coverage is complete
end to end; it consists of the Requirement/Baseline doc Ref No., Test Case/Condition, and
Defect/Bug id. Using this document a person can track the requirement based on the
defect id.
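Illustratively (the requirement, test case and bug IDs below are hypothetical), the matrix can be kept as a simple mapping so that coverage gaps and the defects raised against each requirement are easy to query.

    # Requirement -> test cases that cover it -> defects logged against them.
    traceability = {
        "REQ-001 Login":          {"test_cases": ["TC-01", "TC-02"], "defects": ["BUG-101"]},
        "REQ-002 Password reset": {"test_cases": ["TC-03"],          "defects": []},
        "REQ-003 Audit log":      {"test_cases": [],                 "defects": []},  # gap
    }

    uncovered = [req for req, row in traceability.items() if not row["test_cases"]]
    print("Requirements with no test coverage:", uncovered)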
27. Explain Boundary value testing and Equivalence testing with some examples?
Boundary value testing is a technique to find whether the application is accepting the expected
range of values and rejecting the values which fall outside that range.
Example
A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
BVA is done like this: max = 10, pass; max-1 = 9, pass; max+1 = 11, fail; min = 4, pass;
min+1 = 5, pass; min-1 = 3, fail.
Likewise we check the corner values and come to a conclusion on whether the application
is accepting the correct range of values.
Example
A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
For the positive (valid) condition we test the object by giving alphabetic input, i.e. a-z characters
only, and check whether the object accepts the value: it should pass.
For the negative (invalid) condition we test by giving input other than a-z alphabets, i.e. A-Z, 0-9,
blanks etc.: it should fail.
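The same user ID example can be turned into executable checks; the sketch below assumes Python and a hypothetical is_valid_user_id() stand-in for the field under test, with boundary values on length and one representative value per equivalence class on content.

    def is_valid_user_id(value):
        """Hypothetical rule: 4 to 10 lowercase alphabetic characters."""
        return value.isalpha() and value.islower() and 4 <= len(value) <= 10

    # Boundary value analysis on length: min-1, min, min+1, max-1, max, max+1.
    assert not is_valid_user_id("abc")        # length 3  -> reject
    assert is_valid_user_id("abcd")           # length 4  -> accept
    assert is_valid_user_id("abcde")          # length 5  -> accept
    assert is_valid_user_id("a" * 9)          # length 9  -> accept
    assert is_valid_user_id("a" * 10)         # length 10 -> accept
    assert not is_valid_user_id("a" * 11)     # length 11 -> reject

    # Equivalence partitioning on content: one value per class is enough.
    assert is_valid_user_id("hello")          # valid class: a-z only
    assert not is_valid_user_id("HELLO")      # invalid class: upper case
    assert not is_valid_user_id("user1")      # invalid class: digits
    assert not is_valid_user_id("      ")     # invalid class: blanks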
Security testing is the process that determines that confidential data stays confidential. In other
words, it is testing how well the system protects against unauthorized internal or external access,
willful damage, etc.
This process involves functional testing, penetration testing and verification.
Installation testing is done to verify whether the hardware and software are installed and
configured properly. This ensures that all the system components are exercised during the
testing process. Installation testing also covers testing with high volumes of data, error
messages, as well as security testing.
AUT is nothing but "Application Under Test". After the design and coding phases in the software
development life cycle, the application comes in for testing; at that point the application is
called the Application Under Test.
Defect leakage occurs at the customer or end user side after the application is delivered. If,
after the release of the application to the client, the end user finds any defects while using
that application, it is called defect leakage. Defect leakage is also called bug
leakage.
1. Project
2. Subject
3. Description
4. Summary
5. Detected By (Name of the Tester)
6. Assigned To (Name of the Developer who is supposed to fix the Bug)
7. Test Lead (Name)
8. Detected in Version
9. Closed in Version
10. Date Detected
11. Expected Date of Closure
12. Actual Date of Closure
13. Priority (Medium, Low, High, Urgent)
14. Severity (Ranges from 1 to 5)
15. Status
16. Bug ID
17. Attachment
18. Test Case Failed (Test case that is failed for the Bug)
Error Guessing is a test case design technique where the tester has to guess what faults might
occur and design tests to represent them.
Error Seeding is the process of adding known faults intentionally in a program for the reason of
monitoring the rate of detection & removal and also to estimate the number of faults remaining
in the program.
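A worked sketch of the usual error seeding estimate, with hypothetical numbers: if testing finds a certain fraction of the seeded faults, the same detection rate is assumed for the real faults to estimate how many remain.

    seeded_total = 20     # faults deliberately planted in the program
    seeded_found = 16     # planted faults that testing detected
    real_found = 40       # genuine (non-seeded) faults detected by the same testing

    detection_rate = seeded_found / seeded_total             # 0.8
    estimated_real_total = real_found / detection_rate       # 50.0
    estimated_remaining = estimated_real_total - real_found  # 10.0

    print("Estimated real faults still remaining:", round(estimated_remaining))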
Ad hoc testing is concerned with testing the application without following any rules or test cases.
For ad hoc testing one should have strong knowledge of the application.
35. What are the basic solutions for the software development problems?
36. What are the common problems in the software development process?
Inadequate requirements from the client: if the requirements given by the client are not clear,
are unfinished or are not testable, then problems may arise.
Unrealistic schedules: sometimes too much work is given to the developer with a request to
complete it in a short duration; then problems are unavoidable.
Insufficient testing: problems can arise when the developed software is not tested
properly.
Additional work given under the existing process: a request from higher management to work
on another project or task will bring problems when the project is being tested as a
team.
Miscommunication: in some cases the developer is not informed about the client's
requirements and expectations, so there can be deviations.
37. What is the difference between Software Testing and Quality Assurance (QA)?
Software Testing involves operation of a system or application under controlled conditions and
evaluating the result. It is oriented to 'detection'.
Quality Assurance (QA) involves the entire software development PROCESS- monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
Note: Before going to generate some test idea on how to test a water bottle, I would like to
ask few questions like:
1. Is it a bottle made of glass, plastic, rubber, some metal, some kind of disposable
material or anything else?
2. Is it meant only for hot water or can we use it with other fluids like tea, coffee, soft
drinks, hot chocolate, soups, wine, cooking oil, vinegar, gasoline, acids, molten lava (!)
etc.?
3. Who is going to use this bottle? A school going kid, a housewife, some beverage
manufacturing company, an office-goer, a sports man, a mob protesting in a rally (going
to use as missiles), an Eskimo living in an igloo or an astronaut in a space ship?
These kinds of questions may allow a tester to know the product (that he is going to test) in a
better way. In our case, I am assuming that the water bottle is in the form of a pet bottle and
is actually made of either plastic or glass (there are 2 versions of the product) and is
intended to be used mainly with water. About the targeted user, even the manufacturing
company is not sure about them! (Sounds familiar? When a software company develops a
product without a clear idea about the users who are going to use the software!)
Test Ideas
1. Check the dimension of the bottle. See if it actually looks like a water bottle or a
cylinder, a bowl, a cup, a flower vase, a pen stand or a dustbin! [Build Verification
Testing!]
2. See if the cap fits well with the bottle.[Installability Testing!]
3. Test if the mouth of the bottle is not too small to pour water. [Usability Testing!]
4. Fill the bottle with water and keep it on a smooth dry surface. See if it leaks. [Usability
Testing!]
5. Fill the bottle with water, seal it with the cap and see if water leaks when the bottle is
tilted, inverted, squeezed (in case of plastic made bottle)! [Usability Testing!]
6. Take water in the bottle and keep it in the refrigerator for cooling. See what happens.
[Usability Testing!]
7. Keep a water-filled bottle in the refrigerator for a very long time (say a week). See what
happens to the water and/or bottle. [Stress Testing!]
8. Keep a water-filled bottle under freezing condition. See if the bottle expands (if plastic
made) or breaks (if glass made). [Stress Testing!]
9. Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress Testing!]
10. Pour some hot (boiling!) water into the bottle and see the effect. [Stress
Testing!]
11. Keep a dry bottle for a very long time. See what happens. See if any physical or
chemical deformation occurs to the bottle.
12. Test the water after keeping it in the bottle and see if there is any chemical
change. See if it is safe to be consumed as drinking water.
13. Keep water in the bottle for sometime. And see if the smell of water changes.
14. Try using the bottle with different types of water (like hard and soft water).
[Compatibility Testing!]
15. Try to drink water directly from the bottle and see if it is comfortable to use. Or
water gets spilled while doing so. [Usability Testing!]
16. Test if the bottle is ergonomically designed and if it is comfortable to hold. Also
see if the center of gravity of the bottle stays low (both when empty and when filled with
water) and it does not topple over easily.
17. Drop the bottle from a reasonable height (may be height of a dining table) and
see if it breaks (both with plastic and glass model). If it is a glass bottle then in most
cases it may break. See if it breaks into tiny little pieces (which are often difficult to
clean) or breaks into nice large pieces (which could be cleaned without much difficulty).
[Stress Testing!] [Usability Testing!]
18. Test the above test idea with empty bottles and bottles filled with water. [Stress
Testing!]
19. Test if the bottle is made of recyclable material. In case of a plastic
bottle, test if it is easily crushable.
20. Test if the bottle can also be used to hold other common household things like
honey, fruit juice, fuel, paint, turpentine, liquid wax etc. [Capability Testing!]
Following are the features that should be concentrated while testing a portlet
i. Test alignment/size display with multiple style sheets and portal configurations. When you
configure a portlet object in the portal, you must choose from the following alignments:
a. Narrow portlets are displayed in a narrow side column on the portal page. Narrow portlets
must fit in a column that is fewer than 255 pixels wide.
b. Wide portlets are displayed in the middle or widest side column on the portal page. Wide
portlets fit in a column fewer than 500 pixels wide.
ii. Test all links and buttons within the portlet display. (if there are errors, check that all forms
and functions are uniquely named, and that the preference and gateway settings are
configured correctly in the portlet web service editor.)
iii. Test setting and changing preferences. (if there are errors, check that the preferences are
uniquely named and that the preference and gateway settings are configured correctly in the
portlet web service editor.)
iv. Test communication with the backend application. Confirm that actions executed through
the portlet are completed correctly. (if there are errors, check the gateway configuration in the
portlet web service editor.)
v. Test localized portlets in all supported languages. (if there are errors, make sure that the
language files are installed correctly and are accessible to the portlet.)
vi. If the portlet displays secure information or uses a password, use a tunnel tool to confirm
that any secure information is not sent or stored in clear text.
vii. If backwards compatibility is supported, test portlets in multiple versions of the portal.
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes
of input conditions called equivalence classes are
identified such that each member of the class causes the same kind of
processing and output to occur. In this method, the tester identifies various equivalence
classes for partitioning. A class is a set of input conditions that is likely to be handled the
same way by the system. If the system were to handle one case in the class erroneously, it
would handle all cases erroneously.
Equivalence partitioning drastically cuts down the number of test cases required to test a
system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the
smallest number of test cases.
STEP 1:
IDENTIFY EQUIVALENCE CLASSES Take each input condition described in the specification and
derive at least two equivalence classes for it. One class represents the set of cases which
satisfy the condition (the valid class) and one represents cases which do not (the invalid class)
Following are some general guidelines for identifying equivalence classes: a) If the
requirements state that a numeric value is input to the system and must be within a range of
values, identify one valid class (inputs which are within the valid range) and two invalid
equivalence classes (inputs which are too low and inputs which are too high). For example, if an
item in inventory can have a quantity of -9999 to +9999, identify the following classes:
1. One valid class: (QTY is greater than or equal to -9999 and is less than or equal to
9999). This is written as (-9999 <= QTY <= 9999)
2. The invalid class (QTY is less than -9999), also written as (QTY < -9999)
3. The invalid class (QTY is greater than 9999), also written as (QTY > 9999)
b) If the requirements state that the number of items input by the system at some point must lie
within a certain range, specify one valid class where the number of inputs is within the
valid range, one invalid class where there are too few inputs and one invalid class where
there are too many inputs.
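As a minimal sketch (in Python, with a hypothetical validate_qty function standing in for the system under test), one representative value per equivalence class is enough to exercise the three QTY classes above:

def validate_qty(qty):
    # hypothetical stand-in for the system under test:
    # accept only quantities within the valid range
    return -9999 <= qty <= 9999

# one representative value per equivalence class
valid_class = [0]          # any value in -9999..9999
invalid_low = [-10000]     # QTY < -9999
invalid_high = [10000]     # QTY > 9999

for qty in valid_class:
    assert validate_qty(qty) is True
for qty in invalid_low + invalid_high:
    assert validate_qty(qty) is False
print("equivalence class checks passed")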
1. Process metrics: Primary metrics are also called as Process metrics. This is the metric
the Six Sigma practitioners care about and can influence. Primary metrics are almost
the direct output characteristic of a process. It is a measure of a process and not a
measure of a high-level business objective. Primary Process metrics are usually Process
Defects, Process cycle time and Process consumption.
2. Product metrics: Product metrics quantitatively characterize some aspect of the
structure of a software product, such as a requirements specification, a design, or
source code.
44. Why do you go for White box testing, when Black box testing is available?
The objective of black box testing is to certify the commercial (business) and functional
(technical) aspects of the application. Loops, structures, arrays, conditions, files, etc. are very
micro-level elements, but they are the foundation of any application, so white box testing takes
up these micro-level elements and tests them, which black box testing cannot reach.
A baseline document is one from which the tester builds an understanding of the application
before starting actual testing, e.g. the Functional Specification and the Business Requirement Document.
46. Tell names of some testing type which you learnt or experienced?
Any 5 or 6 types related to the company's profile are good to mention in the interview:
1. Ad - Hoc testing
2. Cookie Testing
3. CET (Customer Experience Test)
4. Depth Test
5. Event-Driven
6. Performance Testing
7. Recovery testing
8. Sanity Test
9. Security Testing
10. Smoke testing
11. Web Testing
Data guidelines are used to specify the data required to populate the test bed and prepare test
scripts. They include all data parameters that are required to test the conditions derived from the
requirement/specification. The documents which support the preparation of test data are called
data guidelines.
50. Why do we prepare test condition, test cases, test script (Before Starting
Testing)?
These are test design documents which are used to execute the actual testing, without which
execution of testing is impossible. Finally, this execution is going to find the bugs to be fixed, so
we have to prepare these documents.
51. Is it not waste of time in preparing the test condition, test case & Test Script?
No document prepared in any process is a waste of time. Test design documents in particular
play a vital role in test execution and can never be called a waste of time, as without them
proper testing cannot be done.
To approach web application testing, the first attack on the application should be on its
performance behaviour, as that is very important for a web application, and then on the transfer
of data between the web server, front end server, security server and back end server.
53. What kind of Document you need for going for a Functional testing?
Functional specification is the ultimate document, which expresses all the functionalities of the
application and other documents like user manual and BRS are also need for functional testing.
Gap analysis document will add value to understand expected and existing system.
No. The system as a whole can be tested only if all modules are integrated and all modules
work correctly. System testing should be done before UAT (User Acceptance Testing) and after
unit and integration testing.
Mutation testing is a powerful fault-based testing technique for unit level testing. Since it is a
fault-based testing technique, it is aimed at testing and uncovering some specific kinds of
faults, namely simple syntactic changes to a program. Mutation testing is based on two
assumptions: the competent programmer hypothesis and the coupling effect. The competent
programmer hypothesis assumes that competent programmers tend to write nearly "correct"
programs. The coupling effect states that a set of test data that can uncover all simple faults in
a program is also capable of detecting more complex faults. Mutation testing injects faults into
code to determine optimal test inputs.
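As a minimal illustration (hypothetical Python code, no particular mutation tool assumed), a single syntactic mutant and the boundary input that kills it might look like this:

def is_adult(age):
    # original unit under test
    return age >= 18

def is_adult_mutant(age):
    # mutant: the relational operator >= has been changed to >
    return age > 18

# The boundary input 18 produces different results for the original and
# the mutant, so a test suite containing this input kills the mutant.
assert is_adult(18) is True
assert is_adult_mutant(18) is False
print("mutant killed by the boundary input 18")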
With any software other than the smallest and simplest program, there are too many inputs,
too many outputs, and too many path combinations to fully test. Also, software specifications
can be subjective and be interpreted in different ways.
57. How will you review the test case and how many types are there?
Or
Reviews:
1. Management Review
2. Technical Review
3. Code Review
4. Formal Review (Inspections and Audits)
5. Informal Review (Peer Review and Code Review)
objectives of Reviews:
Pilot testing involves having a group of end users try the system prior to its full
deployment in order to give feedback on IIS 5.0 features and functions.
Or
Pilot Testing is a Testing Activity which resembles the Production Environment.
It is Done Exactly between UAT and Production Drop.
A few users simulate the production environment and continue the business activity
with the system.
They will check the major functionality of the system before going into production. This
is basically done to avoid high-level disasters.
The priority of pilot testing is high, and issues raised in pilot testing have to be fixed as
soon as possible.
BRS is the Business Requirement Specification, which means the client who wants the
application built gives the specification to the software development organization, and then the
organization converts it into an SRS (Software Requirement Specification) as per the needs of
the software.
60. What is Smoke Test and Sanity Testing? When will use the Above Tests?
Smoke Testing: It is done to make sure the build we got is testable or not, i.e. to check the
testability of the build; also called the "day 0" check. Done at the 'build level'.
Sanity Testing: It is done during the release phase to check the main functionalities
without going deeper. Sometimes also called a subset of regression testing. When no rigorous
regression testing is done on the build, sanity testing does that part by checking the major
functionalities. Done at the 'release level'.
Debugging is finding and removing "bugs" which cause the program to respond in a way that is
not intended.
Testing is nothing but finding an error/problem and is done by testers, whereas debugging is
finding the root cause of the error/problem and is taken care of by developers.
Or
Debugging is removing the bug and is done by the developer.
Fish model explains the mapping between different stages of development and testing.
Phase 1
Information gathering takes place and here the BRS document is prepared.
Phase 2
During this phase, development people prepare SRS document which is a combination of
functional requirement specification and system requirement specification. During this phase,
testing people are going for reviews.
Phase-3
Design phase
Here HLD and LLD high level design document and low level design documents are prepared
by development team. Here, the testing people are going for prototype reviews.
Phase-4
coding phase
Developers start coding, and white box testing is conducted during this phase.
Phase-5
testing phase
Black box testing takes place, performed by the black box test engineers.
Phase-6
The context-driven school of software testing is flavor of Agile Testing that advocates
continuous and creative evaluation of testing opportunities in light of the potential information
revealed and the value of that information to the organization right now.
Similar to system testing, the 'macro' end of the test scale involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications, or
systems if appropriate.
Testing is a never-ending process, but because of certain factors testing may be terminated.
The factors may be: most of the tests have been executed, the project deadline is reached, the
test budget is depleted, or the bug rate falls below the defined criteria.
Testing where the user reconciles the output of the new system to the output of the current
system to verify the new system performs the operations correctly.
70. What are the roles of glass-box and black-box testing tools?
Black-box testing
It is not based on knowledge of internal design or code. Tests are based on requirements and
functionality. Black box testing is used to find the errors in the following.
1. Interface errors
2. Performance errors
3. Initialization errors
4. Incorrect or missing functionality
5. Errors while accessing external database
Glass-box testing
It is based on internal design of an application code. Tests are based on path coverage, branch
coverage, and statement coverage. It is also known as White Box testing.
71. What is your experience with change control? Our development team has only 10
members. Do you think managing change is such a big deal for us?
Whenever modifications happen to the actual project, all the corresponding documents are
updated with that information, so as to keep the documents always in sync with the product at
any point of time. Yes, even with a 10-member team, managing change matters for exactly this reason.
Gap analysis can be done using a traceability matrix, which means tracking each individual
requirement in the SRS to the various work products.
73. How do you know when your code has met specifications?
With the help of the traceability matrix: all the requirements are traced to test cases. When
all the test cases are executed and passed, it is an indication that the code has met the
requirements.
74. At what stage of the life cycle does testing begin in your opinion?
Testing is a continuous process and it starts as and when the requirement for the project
/product begins to be framed.
Requirements phase: testing is done to check whether the project/product details reflect the
client's ideas and give an idea of the complete project from the client's perspective (as he
wished it to be).
Requirement specifications are important and one of the most reliable methods of insuring
against problems in a complex software project. Requirements are the details describing an
application's externally perceived functionality and properties. Requirements should be clear,
complete, reasonably detailed, cohesive, attainable and testable.
76. How do you scope, organize, and execute a test project?
The scope can be defined from the BRS, SRS, FRS or from function points; it may be
anything that is provided by the client. Regarding organizing, we need to analyze the
functionality to be covered, who will test which modules, and the pros and cons of the
application. Identifying the number of test cases, resource allocation and the risks that we
need to mitigate all come into the picture.
Once this is done, it is very easy to execute based on the plan we have chalked out.
We cannot perform 100% testing on any application, but the criteria to ensure test completion
on a project are:
1. All the test cases are executed with a certain percentage of passes.
2. The bug rate falls below a certain level.
3. The test budget is depleted.
4. Deadlines are reached (project or test).
5. All the functionalities are covered by test cases.
6. All critical and high bugs have a status of CLOSED.
Ideally, to test a web application, the components and functionality on both the client and
server side should be tested, but this is practically impossible.
The best approach is to examine the project's requirements, set priorities based on risk analysis,
and then determine where to focus testing efforts within budget and schedule constraints.
To test a web application we need to perform testing for both GUI and client-server
architecture.
Based on many factors like project requirements, risk analysis, budget and schedule, we can
determine what kind of testing will be appropriate for the project. We can perform unit and
integration testing, functionality testing, GUI testing, usability testing, compatibility testing,
security testing, performance testing, recovery testing and regression testing.
I'm well motivated, well organized, a good team player, dedicated to my work, and I've got a
strong desire to succeed; I'm always ready and willing to learn new information and skills.
For any project, testing activity will be there from the start onwards. After requirements
gathering, the design documents (high level and low level) are prepared and reviewed to see
whether they conform to the requirements. After design comes coding, where white box testing
is done. After the build or system is ready, integration followed by functional testing is done,
till the product or project is stable. Once the product or project is stable, testing is
stopped.
Testing in a continuous activity carried out at every stage of the project. You first test
everything that you get from the client. As tester (technical tester), my work will start as soon
as the project starts.
This is just a sample answer - "I have never created any test plan. I developed and executed
testcase. But I was involved/ participated actively with my Team Leader while creating Test
Plans."
It is software that is reasonably bug-free and delivered on time and within the budget, meets
the requirements and expectations and is maintainable.
Quality Assurance
The QA group assures quality, so it must monitor the whole development process. Its main
concentration is on prevention of bugs.
It must set standards, introduce review procedures, and educate people into better ways to
design and develop products.
87. How involved where you with your Team Lead in writing the Test Plan?
As per my knowledge, test team members are usually not involved in preparing the test plan;
the test plan is a higher-level document for the testing team. The test plan includes purpose, scope,
customer/client scope, schedule, hardware, deliverables, test cases, etc. The test plan is derived
from the PMP (Project Management Plan). A team member's scope is just to go through the test plan
so that they come to know all their responsibilities and the deliverables of their modules.
The test plan is an input document for every testing team member as well as the test lead.
Methodology
1. Spiral methodology
2. Waterfall methodology (these two are older methods)
3. Rational Unified Process (RUP), from IBM
4. Rapid Application Development (RAD)
The goal of globalization testing is to detect potential problems in application design that could
inhibit globalization. It makes sure that the code can handle all international support without
breaking functionality that would cause either data loss or display problems.
Base lining: Process by which the quality and cost effectiveness of a service is assessed,
usually in advance of a change to the service. Base lining usually includes comparison of the
service before and after the Change or analysis of trend information. The term Benchmarking
is normally used if the comparison is made against other enterprises.
For example:
If the company has different projects, for each project there will be a separate test plan. The
test plan should be accepted by peers in the organization after modifications. That modified
test plan is the baseline for the testers to use in the project. If any further modifications
are done in the test plan, the present modified version becomes the baseline, because this test plan
becomes the basis for running the testing project.
91. Define each of the following and explain how each relates to the other: Unit,
System and Integration testing.
Unit testing
Testing of individual units or modules of the application in isolation; it is a white box testing
activity usually done by the programmers.
System testing
This is a bottleneck stage of our project. This testing is done after integration of all modules to
check whether our build meets all the requirements of the customer or not. Unit and integration
testing are white box testing which can be done by programmers; system testing is black
box testing which can be done by people who do not know programming. The hierarchy of this
testing is: unit testing, then integration testing, then system testing.
Integration testing: the integration of some units is called a module; the test on these modules is
called integration testing (module testing).
Testing is an interesting part of the software cycle, and it is responsible for providing a quality
product to the customer. It involves finding bugs, which is difficult and challenging. I want to
be part of the testing group because of this.
93. What do you think the role of test-group manager should be? Relative to senior
management? Relative to other technical groups in the company? Relative to your
staff?
Defect find and close rates by week, normalized against level of effort (are we finding
defects, and can developers keep up with the number found and the ones necessary to
fix?)
Number of tests planned, run, passed by week (do we know what we have to test, and
are we able to do so?)
Defects found per activity vs. total defects found (which activities find the most
defects?)
Schedule estimates vs. actual (will we make the dates, and how well do we estimate?)
People on the project, planned vs. actual by week or month (do we have the people we
need when we need them?)
Major and minor requirements changes (do we know what we have to do, and does it
change?)
94. What criteria do you use when determining when to automate a test or leave it
manual?
The Time and Budget both are the key factors in determining whether the test goes on Manual
or it can be automated. Apart from that the automation is required for areas such as
Functional, Regression, Load and User Interface for accurate results.
95. How do you analyze your test results? What metrics do you try to provide?
Test results are analyzed to identify the major causes of defects and which phase has
introduced most of the defects. This can be achieved through cause/effect analysis or Pareto
analysis. Analysis of test results can provide several test metrics, where metrics are measures
used to quantify the software, the software development resources and the software development
process. A few metrics which we can provide are:
Test effectiveness = t / (t + UAT)
where t is the total number of defects recorded during testing and UAT is the total number of
defects recorded during user acceptance testing.
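As a worked example with assumed numbers: if 40 defects were recorded during testing (t = 40) and 10 more surfaced during user acceptance testing (UAT = 10), then test effectiveness = 40 / (40 + 10) = 0.8, i.e. 80% of the known defects were caught before the software reached UAT.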
Regression testing is carried out both manually and with automation. Automated tools are
mainly used for regression testing, as this mainly involves repeatedly testing the same
application for the changes it has gone through for new functionality, after fixing
previous bugs, any new changes in the design, etc. Regression testing involves
executing the test cases which we ran earlier for finding defects. Whenever any change takes
place in the application we should make sure the previous functionality is still available
without any break. For this reason one should do regression testing on the application by
running/executing the previously written test cases.
97. Describe to me when you would consider employing a failure mode and effect
analysis
FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique and quality method that
enables the identification and prevention of process or product errors before they occur. Failure
modes and effects analysis (FMEA) is a disciplined approach used to identify possible failures of
a product or service and then determine the frequency and impact of the failure.
The Unified Modeling Language is a third-generation method for specifying, visualizing, and
documenting the artifacts of an object-oriented system under development. From the inside,
the Unified Modeling Language consists of three things:
1. A formal metamodel
2. A graphical notation
3. A set of idioms of usage
An organization of engineers, scientists and students involved in electrical, electronics, and related
fields. It is important because it functions as a publishing house and standards-making body.
101. Define Verification and Validation. Explain the differences between the two.
Verification - Evaluation done at the end of a phase to determine that the requirements
established during the previous phase have been met. Generally, verification refers to the
overall software evaluation activity, including reviewing, inspecting, checking and auditing.
Validation: - The process of evaluating s/w at the end of the development process to ensure
compliance with requirements. Validation typically involves actual testing and takes place after
verification is complete.
Or
103. What criteria do you use when determining when to automate a test or leave it
manual?
The Time and Budget both are the key factors in determining whether the test goes on Manual
or it can be automated. Apart from that the automation is required for areas such as
Functional, Regression, Load and User Interface for accurate results.
I would like to be in a managerial role, ideally working closely with external clients. I have
worked in client-facing roles for more than two years and I enjoy the challenge of keeping the
customer satisfied. I think it's something I'm good at. I would also like to take on additional
responsibility within this area, and possibly other areas. Finally, I'd like to be on the
right career path towards eventually becoming a Senior Manager within the company. I'm very
aware that these are ambitious goals; however, I feel that through hard work
and dedication they are quite attainable.
105. Define each of the following and explain how each relates to the other: Unit,
System, and Integration testing
The role of QA in the company is to produce quality software and to ensure that it meets
all the requirements of its customers before delivering the product.
Building a test team requires judging a number of factors. Firstly, you have to consider the
complexity of the application or project that is going to be tested. Next, consider the testing
time allotted and the levels of testing to be performed. With all these parameters in mind you
need to decide the skills and experience level of your testers and how many testers are needed.
It depends on the functionality related with that module. We need to check whether that
module is inter-related with other modules. If it is related with other modules, we need to test
related modules too. Otherwise, if it is an independent module, no need to test other modules.
ISO 9000 specifies requirements for a Quality Management System overseeing the production
of a product or service. It is not a standard for ensuring a product or service is of quality;
rather, it attests to the process of production, and how it will be managed and reviewed.
For ex a few:
ISO 9000:2000
Quality management systems. Fundamentals and vocabulary
ISO 9000-1:1994
Quality management and quality assurance standards. Guidelines for selection and use
ISO 9000-2:1997
Quality management and quality assurance standards. Generic guidelines for the application of
ISO 9001, ISO 9002 and ISO 9003
ISO 9000-3:1997
Quality management and quality assurance standards. Guidelines for the application of ISO
9001:1994 to the development, supply, installation and maintenance of computer software
ISO 9001:1994
Quality systems. Model for quality assurance in design, development, production, installation
and servicing
ISO 9001:2000
111. What is the Waterfall Development Method and do you agree with all the steps?
The waterfall approach is a traditional approach to software development. It will work out if the
project is a small one (not complex). Real-time projects need the spiral methodology as the SDLC.
Some product-based development can follow waterfall, if it is not complex. Production cost is
less if we follow the waterfall method.
Reliability: the probability that the software will not cause the failure of the system for a
specified time under specified conditions.
Testing is necessary because software is likely to have faults in it and it is better (cheaper,
quicker and more expedient) to find and remove these faults before it is put into live
operation. Failures that occur during live operation are much more expensive to deal with than
failures that occur during testing prior to the release of the software. Of course other
consequences of a system failing during live operation include the possibility of the software
supplier being sued by the customers!
Testing is also necessary so we can learn about the reliability of the software (that is, how
likely it is to fail within a specified time under specified conditions).
UAT stands for 'User Acceptance Testing'. This testing is carried out from the user's perspective
and it is usually done before a release.
UAT stands for User Acceptance Testing. It is done by the end users along with testers to
validate the functionality of the application. It is also called as Pre-Production testing.
115. How to find that tools work well with your existing system?
I think we need to do market research on various tools depending on the type of application
we are testing. Say we are testing an application made in VB with an Oracle database; then
WinRunner is going to give good results. But in some cases it may not, say when your
application uses a lot of 3rd party grids and modules which have been integrated into the
application. So it depends on the type of application you are testing.
Also we need to know what sort of testing will be performed. If you need to test
performance, you cannot use a record and playback tool; you need a performance testing tool
such as LoadRunner.
116. What is the difference between a test strategy and a test plan?
Test Plan: it is a plan for testing; it defines the scope, approach, and environment for a specific project. Test Strategy: it is a higher-level document describing the overall approach to testing, from which project test plans are derived.
We define a scenario by the following definition: a set of test cases
that ensures the business process flows are tested from end to end. It may be independent
tests or a series of tests that follow each other, each dependent on the output of the previous
one. The terms test scenario and test case are often used synonymously.
118. Explain the differences between White-box, Gray-box, and Black-box testing?
Black box testing Tests are based on requirements and functionality. Not based on any
knowledge of internal design or code.
White box testing Tests are based on coverage of code statements, branches, paths,
conditions. Based on knowledge of the internal logic of an application's code.
Gray Box Testing A Combination of Black and White Box testing methodologies, testing a
piece of software against its specification but using some knowledge of its internal workings.
Structural Testing
It is a white box testing technique. Since the testing is based on the internal structure of the
program/code, it is called structural testing.
Behavioral Testing
It is a black box testing technique. Since the testing is based on the external
behaviour/functionality of the system/application, it is called behavioral testing. It is also called
functional testing, where the functionality of the software is tested.
120. How does unit testing play a role in the development / Software lifecycle?
We can catch simple bugs, like GUI and small functional bugs, during unit testing. This reduces
testing time and overall saves project time. If the developer doesn't catch these types of bugs, they
will reach the integration testing phase, and if they are caught by a tester they need to go through a
bug life cycle, which consumes a lot of time.
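A minimal sketch of a developer-level unit test, using Python's standard unittest module (the add function is a hypothetical unit under test); catching a defect at this level avoids the whole bug life cycle described above:

import unittest

def add(a, b):
    # hypothetical unit under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()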
Testing is one aspect which is very important in the Software Development Life Cycle (SDLC). I
like to be part of the team which is responsible for the quality of the application being
delivered. Also, QA has broad opportunities and large scope for learning various technologies.
And of course it has lot more opportunities than the Development.
S. No | Links | Bug ID | Description | Initial Bug Status | Retesting Bug Status | Conf Bug Status
What is baseline testing?
Baseline testing is the process of running a set of tests to capture performance information. Baseline
testing uses the information collected to make changes in the application to improve the performance
and capabilities of the application. Baselining compares the present performance of the application with
its own previous performance.
Validation: the process of evaluating software during or at the end of the development process to
determine whether it satisfies the specified requirements.
- Decision coverage testing ensures that every decision-taking statement is executed at least once.
- Both decision and branch coverage testing are done to assure the tester that no branch or decision-
taking statement will lead to failure of the software.
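A minimal sketch of what decision coverage means in practice (hypothetical Python code): the single if below has two outcomes, so two inputs, one making the condition true and one making it false, already give 100% decision coverage for this function:

def classify(width, length):
    # one decision with two outcomes
    if width > length:
        return "landscape"
    return "portrait"

assert classify(4, 3) == "landscape"   # decision evaluates to true
assert classify(3, 4) == "portrait"    # decision evaluates to false
print("both decision outcomes exercised")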
- Retesting is done to verify that a previous defect fix is now working correctly, whereas regression
testing is performed to check that the defect fix has not impacted other functionality that was working
fine before the changes were made in the code.
- Retesting is specific and is performed on the bug which is fixed, whereas regression testing is not
always specific to any defect fix; it is performed whenever any bug is fixed.
- Retesting is concerned with executing those test cases that failed earlier, whereas regression testing
is concerned with executing test cases that passed in earlier builds.
Competent programmer hypothesis: according to this hypothesis, we suppose that programmers write
programs that are nearly correct.
Coupling effect: according to this effect, a collection of test data that can find simple bugs can also find
large and complex bugs.
In this testing we insert a few bugs into the program to examine the optimal test inputs.
It answers: how quickly do we need to fix the bug? Or how soon should the bug get fixed?
Severity: is concerned with the impact of the bug on the functionality of the application.
Bug release: is when a build is handed to the testing team with the knowledge that a defect is present in
the release. The priority and severity of the bug are low, and it is done when the customer wants the
application on time: the customer can tolerate the bug in the release rather than the delay in getting the
application and the cost involved in removing that bug. These bugs are mentioned in the Release Notes
handed to the client for future improvement.
Smart monkeys are used for load and stress testing; they will help in finding bugs. They are very
expensive to develop.
Dumb monkeys are important for basic testing. They help in finding those bugs which have high
severity. Dumb monkeys are less expensive compared to smart monkeys.
- Suppose we want to test the interface between modules A and B and we have developed only
module A. We cannot test module A completely on its own, but if a dummy module is prepared, then
using that we can test module A; this dummy module is called a stub.
- Now module B cannot send or receive data from module A directly, so in these cases we have to
transfer data from one module to the other by some external feature. This external feature used to call
the module under test is called a driver.
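A minimal sketch of both ideas (hypothetical module and function names): the stub stands in for the missing called module, and the driver stands in for the missing calling module:

# Module A (developed) depends on module B (not yet developed).
def module_a(order_total, tax_service):
    # unit under test: adds the tax supplied by module B
    return order_total + tax_service(order_total)

# Stub: a dummy replacement for module B that returns a fixed value.
def tax_service_stub(order_total):
    return 10.0

# Driver: a throwaway caller that feeds data to module A, because the
# real calling module does not exist yet.
def driver():
    result = module_a(100.0, tax_service_stub)
    assert result == 110.0
    print("module A tested using a stub for module B")

driver()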
Use case testing: use case testing uses use cases to evaluate the application, so that the tester can
examine all the functionalities of the application. Use case testing covers the whole
application.
1. To have a signed, sealed, and delivered document, where the document contains details about the
testing methodology, test plan, and test cases.
2. Test strategy document tells us how the software product will be tested.
3. Test strategy document helps to review the test plan with the project team members.
4. It describes the roles, responsibilities and the resources required for the test and schedule.
5. When we create a test strategy document, we have to put into writing any testing issues requiring
resolution.
The test strategy is decided first, before lower level decisions are made on the test plan, test design,
and other testing issues.
- When a tester finds a bug, the bug is assigned NEW or OPEN status.
- The bug is assigned to the development project manager, who will analyze the bug and check
whether it is a valid defect. If it is not a valid bug, it is rejected and the status is REJECTED.
- If it is valid, the defect is next checked for scope. When the bug is not part of the current
release, such defects are POSTPONED.
- Now the tester checks whether a similar defect was raised earlier. If yes, the defect is assigned the
status DUPLICATE.
- When the bug is assigned to a developer, during this stage it is given the status IN-PROGRESS;
once the code is corrected it is marked FIXED.
- Next the tester will re-test the code. In case the test case passes, the defect is CLOSED.
- If the test case fails again, the bug is RE-OPENED and assigned to the developer. That is the bug
life cycle.
Error Seeding is the process of adding known faults intentionally in a program for the reason of
monitoring the rate of detection & removal and also to estimate the number of faults remaining in
the program.
Explain Compatibility testing with an example.
Compatibility testing is to evaluate the application's compatibility with the computing environment,
e.g. operating system, database, browser compatibility, backwards compatibility, computing
capacity of the hardware platform and compatibility of the peripherals. For example, if compatibility
testing is done on a game application, before installing the game on a computer its compatibility is
checked against the computer's specification, i.e. whether the game will run on a computer with that
specification or not.
Automation testing is the use of a tool to control the execution of tests and compare the actual
results with the expected results. It also involves the setting up of test pre-conditions.
- Static testing: static testing is a technique used in the earlier phases of the development life cycle.
Code error detection and execution of the program are not a concern in this type of testing; it is also
known as a non-execution technique. Verification of the product is performed in this testing
technique: code reviews, inspections and walkthroughs are mostly done in this stage of testing.
- Dynamic testing: dynamic testing is concerned with the execution of the software. This technique
is used to test the dynamic behaviour of the code. Most of the bugs are identified using this
technique. These are the validation activities. It uses different methodologies to perform testing, like
unit tests, integration tests, system tests and acceptance testing, etc.
Alpha testing: is performed by the in-house developers and the software QA team. After alpha testing
the software is handed over for the beta testing phase, for additional testing in an environment that is
similar to the client environment.
Beta testing: is performed by the end users, so that they can make sure that the product is bug-free
and works as per the requirements. A few select prospective customers or the general public perform
beta testing.
Gamma testing: gamma testing is done when the software is ready for release with the specified
requirements. This testing is done directly, skipping all the in-house testing activities.
What should be done after a bug is found?
After finding the bug, the first step is that the bug is logged in a bug report. Then this bug needs to be
communicated and assigned to developers who can fix it. After the bug is fixed by the developer,
the fix should be re-tested, and determinations made regarding the requirements for regression testing,
to check that the fix didn't create problems elsewhere.
- Corrective Maintenance
- Adaptive Maintenance
- Perfective Maintenance
- Preventive Maintenance
The Business Requirement Specification (BRS) contains the requirements as described by the business
people. The business tells "what" they want the application to do. In simple words, the BRS contains
the functional requirements of the application.
Can you explain V model in manual testing?
V model: it is an enhanced version of the waterfall model where each level of the development life
cycle is verified before moving to the next level. In this model, testing starts at the very beginning. By
testing we mean verification by means of reviews and inspections, i.e. static testing. Each level of the
development life cycle has a corresponding test plan: a test plan is developed to prepare for the testing
of the products of that phase. By developing the test plans, we can also define the expected results for
testing of the products for that level, as well as defining the entry and exit criteria for each level.
1. Enter data in all the mandatory fields and submit; it should not display an error message.
2. Enter data in only two mandatory fields and submit; it should issue an error message.
3. Do not enter data in any of the fields and submit; it should issue an error message.
4. If the fields accept only numbers: enter numbers in all the fields and submit, it should not issue an
error message; enter data in only two fields, it should issue an error message; enter alphabets in two
fields and numbers in the other two fields, it should issue an error message.
5. If the fields do not accept special characters, then enter special characters and submit; it should
issue an error message.
Edges: the flow of control is denoted by edges. Edges are used to connect two nodes; they show the
flow of control from one node to another node in the program.
Using these nodes and edges we calculate the complexity of the program.
This determines the minimum number of test cases you need in order to exercise the program's independent paths.
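As a concrete example with assumed numbers: cyclomatic complexity is computed from the control flow graph as V(G) = E - N + 2P, where E is the number of edges, N is the number of nodes and P is the number of connected components (1 for a single program). A graph with 9 edges and 8 nodes therefore has complexity 9 - 8 + 2 = 3, so at least 3 test cases are needed to cover its independent paths.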
- To determine whether the developed project meets the requirements of the user.
- To determine that all the requirements given by the user are covered.
- To make sure the application requirements can be fulfilled in the verification process.
- A beta test takes place when the product is about to be released to the end user, whereas pilot testing
takes place in an earlier phase of the development cycle.
- In beta testing the application is given to a few users to make sure that the application meets the user
requirements and does not contain any showstoppers, whereas in pilot testing a few team members
give their feedback to improve the quality of the application.
1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.
- High magnitude: the bug impacts other functionality of the application.
- Medium: it can be tolerated in the application but is not desirable.
- Low: it can be tolerated. This type of risk has no impact on the company's business.
- The Master Test Plan contains all the testing and risk-involved areas of the application, whereas the
test case document contains test cases.
- The Master Test Plan contains the details of each and every individual test to be run during the
overall development of the application, whereas a test plan describes the scope, approach, resources
and schedule of performing tests.
- The Master Test Plan contains a description of every test that is going to be performed on the
application during the testing cycle, like unit test, system test, beta test etc., whereas a test plan only
contains the description of a few test cases.
- A Master Test Plan is created for all large projects; when it is created for a small project we call it a
test plan.
1. Low memory.
2. Addressing a non-available memory location.
3. Things happening in a particular sequence.
- Cohesion is the degree to which related functionality is combined into a single unit (module),
whereas coupling is the degree to which one module is bound to and dependent on other modules.
- Cohesion deals with how the functionality within a single module is related, whereas coupling deals
with how much one module depends on the other modules within the application.
- It is good to increase cohesion within the software, whereas increasing coupling should be avoided.
- The QA team is responsible for monitoring the process to be carried out for development.
- The responsibilities of the QA team include planning the testing execution process.
- The QA lead creates the timetables and agrees on a quality assurance plan for the product.
- The QA team communicates the QA process to the team members.
- The QA team ensures traceability of test cases to requirements.
Quality Control (QC): is concerned with the quality of the product. QC finds defects and suggests
improvements. The process set by QA is implemented by QC. QC is the responsibility of the
tester.
Software Testing: is the process of ensuring that the product developed by the developer
meets the user requirements. The motive for performing testing is to find bugs and make sure that
they get fixed.
- Front end testing is performed on the Graphical User Interface (GUI), whereas back end testing
involves testing the databases.
- The front end consists of the web site's look, where the user can interact, whereas the back end is
the database which is required to store the data.
- When the end user enters data in the GUI of the front end application, this entered data is stored in
the database; SQL queries are used to save and to verify this data in the database.
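A minimal back end check sketched in Python with sqlite3 (the users table and its columns are hypothetical; the insert simulates what the front end would store when the user submits the form):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (user_name TEXT, email TEXT)")

# step the front end would perform when the user submits the form
cur.execute("INSERT INTO users VALUES (?, ?)", ("test_user_01", "test@example.com"))

# back end verification: query the database directly with SQL
cur.execute("SELECT user_name, email FROM users WHERE user_name = ?", ("test_user_01",))
row = cur.fetchone()
assert row == ("test_user_01", "test@example.com"), "front end data was not stored correctly"
print("back end check passed:", row)
conn.close()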
What is Testware?
Testware is the set of artifacts produced during the testing process, such as test plans, test cases, test
scripts and test data. It differs from other software in that it has:
1. A different purpose
2. Different metrics for quality and
3. Different users
- The SRS contains all the requirements of the end user regarding the application. The SRS can be used
as a communication medium between the customer and the supplier.
- The developer and tester prepare and examine the application based on the requirements written in
the SRS document.
- The SRS document is prepared by the Business Analyst by taking all the requirements from the
customer.
Manual-Support Testing: a testing technique that involves testing all the functions performed by
people while preparing data for, and using data from, the automated system. It is conducted by
testing teams.
What is Fuzz testing, backward compatibility testing and assertion testing?
Fuzz Testing: testing the application by entering invalid, unexpected, or random data. This testing is
performed to ensure that the application does not crash when incorrect and unformatted data is
entered.
Backward Compatibility Testing: a testing method which examines the performance of the latest
software with older versions of the test environment.
Assertion Testing: a type of testing consisting of verifying whether the conditions confirm the product
requirements.
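A minimal fuzzing sketch in Python (parse_input is a hypothetical unit under test): random, malformed strings are fed to the function and the only expectation is that it never crashes:

import random
import string

def parse_input(text):
    # hypothetical unit under test: must never raise on bad input
    return text.strip().lower()

def random_junk(max_len=50):
    chars = string.printable
    return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

for _ in range(1000):
    data = random_junk()
    try:
        parse_input(data)
    except Exception as exc:   # any crash is a fuzzing finding
        print("fuzzer found a crash for input:", repr(data), exc)
        break
else:
    print("no crashes in 1000 random inputs")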
- Missing: a requirement given by the customer which the application is unable to meet.
- Extra: a requirement incorporated into the product that was not given by the end customer. This
is always a variance from the specification, but may be an attribute desired by the user of the
product.
1. What is the MAIN benefit of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.
2. What is risk-based testing?
Risk-based Testing is the term used for an approach to creating a Test Strategy that is based on
prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of
risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20%
discount for orders of 100 or more printer cartridges. You have been asked to prepare test
cases using various values for the number of printer cartridges ordered. Which of the
following groups contain three test inputs that would be generated using Boundary Value
Analysis?
4, 5, 99
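A minimal sketch of how those values arise (hypothetical helper): boundary value analysis takes each boundary in the specification (minimum order 5, discount threshold 100) together with the values immediately around it, which is where 4, 5 and 99 come from:

def boundary_values(boundary):
    # the boundary itself and its immediate neighbours
    return [boundary - 1, boundary, boundary + 1]

print(boundary_values(5))     # [4, 5, 6]
print(boundary_values(100))   # [99, 100, 101]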
4. What is the KEY difference between preventative and reactive approaches to testing?
Preventative tests are designed early; reactive tests are designed after the software has been
produced.
5. What is the purpose of exit criteria?
The purpose of exit criteria is to define when a test level is completed.
6. What determines the level of risk?
The likelihood of an adverse event and the impact of the event determine the level of risk.
7. When is used Decision table testing?
Decision table testing is used for testing systems for which the specification takes the form of rules
or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs
in the same column but below the inputs. The remainder of the table explores combinations of
inputs to define the outputs produced.
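As a small illustration (hypothetical login rules), a decision table lists the input conditions above the resulting actions and explores their combinations:

Conditions            R1   R2   R3   R4
Valid username?       T    T    F    F
Valid password?       T    F    T    F
Actions
Grant access          Y    N    N    N
Show error message    N    Y    Y    Y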
8. What is the MAIN objective when reviewing a software deliverable?
To identify defects in any software work product.
9. Which of the following defines the expected results of a test? Test case specification or test
design specification.
Test case specification defines the expected results of a test.
10. What is the benefit of test independence?
It avoids author bias in defining effective tests.
11. As part of which test process do you determine the exit criteria?
The exit criteria are determined on the basis of 'Test Planning'.
12. What is beta testing?
Testing performed by potential customers at their own locations.
13. Given the following fragment of code, how many tests are required for 100% decision
coverage?
if width > length
then biggest_dimension = width
    if height > width
    then biggest_dimension = height
    end_if
else biggest_dimension = length
    if height > length
    then biggest_dimension = height
    end_if
end_if
4
14. You have designed test cases to provide 100% statement and 100% decision coverage for
the following fragment of code. if width > length then biggest_dimension = width else
biggest_dimension = length end_if The following has been added to the bottom of the code
fragment above.
print "Biggest dimension is " & biggest_dimension
print "Width: " & width
print "Length: " & length
How many more test cases are required?
None, existing test cases can be used.
15. Rapid Application Development?
Rapid Application Development (RAD) is formally a parallel development of functions and
subsequent integration. Components/functions are developed in parallel as if they were mini
projects, the developments are time-boxed, delivered, and then assembled into a working prototype.
This can very quickly give the customer something to see and use and to provide feedback
regarding the delivery and their requirements. Rapid change and development of the product is
possible using this methodology. However the product specification will need to be developed for
the product at some point, and the project will need to be placed under more formal controls prior to
going into production.
16. What is the difference between Testing Techniques and Testing Tools?
Testing technique: – is a process for ensuring that some aspect of the application system or unit
functions properly. There may be few techniques but many tools.
Testing Tools: – is a vehicle for performing a test process. The tool is a resource to the tester, but by
itself is insufficient to conduct testing.
17. We use the output of the requirement analysis, the requirement specification as the input
for writing …
User Acceptance Test Cases
18. Repeated Testing of an already tested program, after modification, to discover any defects
introduced or uncovered as a result of the changes in the software being tested or in another
related or unrelated software component:
Regression Testing
19. What is component testing?
Component testing, also known as unit, module and program testing, searches for defects in, and
verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are
separately testable. Component testing may be done in isolation from the rest of the system
depending on the context of the development life cycle and the system. Most often stubs and drivers
are used to replace the missing software and simulate the interface between the software
components in a simple manner. A stub is called from the software component to be tested; a driver
calls a component to be tested.
20. What is functional system testing?
Testing the end-to-end functionality of the system as a whole is defined as functional system
testing.
21. What are the benefits of Independent Testing?
Independent testers are unbiased and identify different defects at the same time.
22. In a REACTIVE approach to testing when would you expect the bulk of the test design
work to be begun?
The bulk of the test design work is begun after the software or system has been produced.
23. What are the different Methodologies in Agile Development Model?
There are currently seven different agile methodologies that I am aware of:
1. Extreme Programming (XP)
2. Scrum
3. Lean Software Development
4. Feature-Driven Development
5. Agile Unified Process
6. Crystal
7. Dynamic Systems Development Model (DSDM)
24. Which activity in the fundamental test process includes evaluation of the testability of the
requirements and system?
A 'Test Analysis' and 'Design' includes evaluation of the testability of the requirements and system.
25. What is typically the MOST important reason to use risk to drive testing efforts?
Because testing everything is not feasible.
26. What is random/monkey testing? When it is used?
Random testing is often known as monkey testing. In such testing, data is generated randomly,
often using a tool or automated mechanism. With this randomly generated input the system is tested
and the results are analysed accordingly. This testing is less reliable; hence it is normally used by
beginners and to see whether the system will hold up under adverse effects.
27. Which of the following are valid objectives for incident reports?
1. Provide developers and other parties with feedback about the problem to enable
identification, isolation and correction as necessary.
2. Provide ideas for test process improvement.
3. Provide a vehicle for assessing tester competence.
4. Provide testers with a means of tracking the quality of the system under test.
28. Consider the following techniques. Which are static and which are dynamic techniques?
1. Equivalence Partitioning.
2. Use Case Testing.
3. Data Flow Analysis.
4. Exploratory Testing.
5. Decision Testing.
6. Inspections.
Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing,
Exploratory Testing and Decision Testing are dynamic.
29. Why are static testing and dynamic testing described as complementary?
Because they share the aim of identifying defects but differ in the types of defect they find.
30. What are the phases of a formal review?
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review
process consists of six main steps:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up.
31. What is the role of moderator in review process?
The moderator (or review leader) leads the review process. He or she determines, in co-operation
with the author, the type of review, approach and the composition of the review team. The
moderator performs the entry check and the follow-up on the rework, in order to control the quality
of the input and output of the review process. The moderator also schedules the meeting,
disseminates documents before the meeting, coaches other team members, paces the meeting, leads
possible discussions and stores the data that is collected.
32. What is an equivalence partition (also known as an equivalence class)?
An input or output range of values such that only one value in the range needs to become a test case, because the system is expected to treat every value in the range in the same way.
33. When should configuration management procedures be implemented?
During test planning.
34. A type of functional testing which investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders?
Security Testing
35. Testing wherein we subject the target of the test to varying workloads to measure and evaluate its performance behaviour and its ability to continue to function properly under these different workloads?
Load Testing
36. Testing activity which is performed to expose defects in the interfaces and in the
interaction between integrated components is?
Integration Level Testing
37. What are the Structure-based (white-box) testing techniques?
Structure-based testing techniques (which are also dynamic rather than static) use the internal
structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box'
techniques (implying you can see into the system) since they require knowledge of how the
software is implemented, that is, how it works. For example, a structural technique may be
concerned with exercising loops in the software. Different test cases may be derived to exercise the
loop once, twice, and many times. This may be done regardless of the functionality of the software.
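For instance, a minimal Python sketch of such loop-oriented structural tests (the sum_positive function is a hypothetical example):
def sum_positive(numbers):
    # Function under test, containing a loop.
    total = 0
    for n in numbers:
        if n > 0:
            total += n
    return total

# Structure-based tests exercise the loop zero, one and many times,
# regardless of any particular business requirement.
assert sum_positive([]) == 0               # loop body executed zero times
assert sum_positive([5]) == 5              # loop body executed once
assert sum_positive([1, -2, 3, 4]) == 8    # loop body executed many times
print("loop-coverage tests passed")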
38. When "Regression Testing" should be performed?
Regression testing should be performed after the software has changed or when the environment has changed.
39. What is negative and positive testing?
A negative test is when you put in an invalid input and expect to receive an error. A positive test is when you put in a valid input and expect some action to be completed in accordance with the specification.
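A small illustration in Python, assuming a hypothetical withdraw function that accepts only valid amounts:
def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

# Positive test: valid input, expect the specified behaviour.
assert withdraw(500, 200) == 300

# Negative test: invalid input, expect an error to be raised.
try:
    withdraw(500, -50)
except ValueError:
    print("negative test passed: invalid input was rejected")
else:
    raise AssertionError("negative test failed: invalid input was accepted")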
40. What is the purpose of a test completion criterion?
The purpose of test completion criterion is to determine when to stop testing
41. What can static analysis NOT find?
For example memory leaks.
42. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side
effects.
43. What are the Experience-based testing techniques?
In experience-based techniques, people's knowledge, skills and background are a prime contributor
to the test conditions and test cases. The experience of both technical and business people is
important, as they bring different perspectives to the test analysis and design process. Due to
previous experience with similar systems, they may have insights into what could go wrong, which
is very useful for testing.
44. What type of review requires formal entry and exit criteria, including metrics?
Inspection
45. Could reviews or inspections be considered part of testing?
Yes, because both help detect faults and improve quality.
46. An input field takes the year of birth between 1900 and 2004 what are the boundary values
for testing this field?
1899,1900,2004,2005
47. Which of the following tools would be involved in the automation of regression test? a.
Data tester b. Boundary tester c. Capture/Playback d. Output comparator.
d. Output comparator
48. To test a function, what does a programmer have to write that calls the function to be tested and passes it test data?
Driver
49. What is the one Key reason why developers have difficulty testing their own work?
Lack of Objectivity
50."How much testing is enough?"
The answer depends on the risk for your industry, contract and special requirements.
51. When should testing be stopped?
It depends on the risks for the system being tested. There are some criteria based on which you can stop testing:
1. Deadlines (testing, release)
2. Test budget has been depleted
3. Bug rate falls below a certain level
4. Test cases completed with a certain percentage passed
5. Alpha or beta periods for testing end
6. Coverage of code, functionality or requirements reaches a specified point
52. Which of the following is the main purpose of the integration strategy for integration
testing in the small?
The main purpose of the integration strategy is to specify which modules to combine when and how
many at once.
53.What are semi-random test cases?
Semi-random test cases are produced when we generate random test cases and then apply equivalence partitioning to those test cases; this removes redundant test cases, giving us semi-random test cases.
54. Given the following code, which statement is true about the minimum number of test cases
required for full statement and branch coverage?
Read p
Read q
IF p+q> 100
THEN Print "Large"
ENDIF
IF p > 50
THEN Print "p Large"
ENDIF
1 test for statement coverage, 2 for branch coverage
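Rendered as a Python sketch (the concrete input values chosen here are illustrative):
def classify(p, q):
    messages = []
    if p + q > 100:
        messages.append("Large")
    if p > 50:
        messages.append("p Large")
    return messages

# Statement coverage: one test that makes both conditions true
# executes every statement.
assert classify(60, 60) == ["Large", "p Large"]

# Branch coverage: a second test is needed so that both conditions
# are also evaluated as false.
assert classify(10, 10) == []
print("statement coverage with 1 test, branch coverage with 2 tests")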
55. What is black box testing? What are the different black box testing techniques?
Black box testing is the software testing method which is used to test the software without knowing
the internal structure of code or program. This testing is usually done to check the functionality of
an application. The different black box testing techniques are
1. Equivalence Partitioning
2. Boundary value analysis
3. Cause effect graphing
56. Which review is normally used to evaluate a product to determine its suitability for
intended use and to identify discrepancies?
Technical Review.
57. Why do we use decision tables?
The techniques of equivalence partitioning and boundary value analysis are often applied to specific
situations or inputs. However, if different combinations of inputs result in different actions being
taken, this can be more difficult to show using equivalence partitioning and boundary value
analysis, which tend to be more focused on the user interface. The other two specification-based
techniques, decision tables and state transition testing are more focused on business logic or
business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This
technique is sometimes also referred to as a 'cause-effect' table. The reason for this is that there is an
associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to
help derive the decision table.
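As an illustration, a minimal decision table sketch in Python for a hypothetical discount rule with two conditions (member status and order size):
# Decision table: each combination of conditions maps to one action.
# Conditions: (is_member, order_over_100) -> discount percentage
DECISION_TABLE = {
    (True, True): 20,    # rule 1
    (True, False): 10,   # rule 2
    (False, True): 5,    # rule 3
    (False, False): 0,   # rule 4
}

def discount(is_member, order_total):
    return DECISION_TABLE[(is_member, order_total > 100)]

# One test case per rule (column) of the decision table.
assert discount(True, 150) == 20
assert discount(True, 50) == 10
assert discount(False, 150) == 5
assert discount(False, 50) == 0
print("all decision-table rules covered")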
58. Faults found should be originally documented by whom?
By testers.
59. Which is the current formal world-wide recognized documentation standard?
There isn't one.
60. Which of the following is the review participant who has created the item to be reviewed?
Author
61. A number of critical bugs are fixed in software. All the bugs are in one module, related to
reports. The test manager decides to do regression testing only on the reports module.
Regression testing should be done on other modules as well because fixing one module may affect
other modules.
62. Why does the boundary value analysis provide good test cases?
Because errors are frequently made during programming of the different cases near the 'edges' of
the range of values.
63. What makes an inspection different from other review types?
It is led by a trained leader, uses formal entry and exit criteria and checklists.
64. Why can the tester be dependent on configuration management?
Because configuration management assures that we know the exact version of the testware and the
test object.
65. What is a V-Model?
A software development model that illustrates how testing activities integrate with software
development phases
66. What is maintenance testing?
Triggered by modifications, migration or retirement of existing software
67. What is test coverage?
Test coverage measures in some specific way the amount of testing performed by a set of tests
(derived in some other way, e.g. using specification-based techniques). Wherever we can count
things and can tell whether or not each of those things has been tested by some test, then we can
measure coverage.
68. Why is incremental integration preferred over "big bang" integration?
Because incremental integration has better early defect screening and isolation ability.
69. When do we prepare RTM (Requirement traceability matrix), is it before test case
designing or after test case designing?
It would be before test case designing. Requirements should already be traceable from review activities, since you should have traceability in the Test Plan already. This also depends on the organisation: if the organisation starts testing after development has begun, the requirements must already be traceable to their source. To make life simpler, use a tool to manage requirements.
70. What is called the process starting with the terminal modules?
Bottom-up integration
71. During which test activity could faults be found most cost effectively?
During test planning
72. The purpose of requirement phase is
To freeze requirements, to understand user needs, to define the scope of testing
73. Why do we split testing into distinct stages?
We split testing into distinct stages for the following reasons:
1. Each test stage has a different purpose
2. It is easier to manage testing in stages
3. We can run different tests in different environments
4. Performance and quality of the testing is improved using phased testing
74. What is DRE?
DRE (Defect Removal Efficiency) is a powerful metric used to measure test effectiveness. From this metric we would know how many bugs were found by our set of test cases compared to how many escaped to the user. The formula for calculating DRE is
DRE = number of bugs found while testing / (number of bugs found while testing + number of bugs found by the user)
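A quick worked example in Python (the counts are illustrative):
# Hypothetical counts: 90 bugs found by the test team, 10 found later by users.
bugs_found_in_testing = 90
bugs_found_by_users = 10

dre = bugs_found_in_testing / (bugs_found_in_testing + bugs_found_by_users)
print(f"DRE = {dre:.0%}")  # prints: DRE = 90%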
75. Which of the following is likely to benefit most from the use of test tools providing test
capture and replay facilities? a) Regression testing b) Integration testing c) System testing d)
User acceptance testing
Regression testing
76. How would you estimate the amount of re-testing likely to be required?
Metrics from previous similar projects and discussions with the development team
77. What does data flow analysis study?
The use of data on paths through the code.
78. What is Alpha testing?
Pre-release testing by end user representatives at the developer's site.
79. What is a failure?
Failure is a departure from specified behaviour.
80. What are Test comparators?
Is it really a test if you put some inputs into some software, but never look to see whether the
software produces the correct result? The essence of testing is to check whether the software
produces the correct result, and to do that, we must compare what the software produces to what it
should produce. A test comparator helps to automate aspects of that comparison.
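A minimal sketch of the kind of comparison such a tool automates (the expected and actual outputs below are hypothetical):
def compare_outputs(expected_lines, actual_lines):
    # A very small test comparator: reports the first mismatching line.
    if len(expected_lines) != len(actual_lines):
        return "MISMATCH: outputs have different lengths"
    for i, (expected, actual) in enumerate(zip(expected_lines, actual_lines), start=1):
        if expected != actual:
            return f"MISMATCH at line {i}: expected {expected!r}, got {actual!r}"
    return "PASS: actual output matches expected output"

print(compare_outputs(["total=100", "status=OK"], ["total=100", "status=OK"]))
print(compare_outputs(["total=100", "status=OK"], ["total=100", "status=FAILED"]))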
81. Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
Scribe
82. What is the main purpose of an informal review?
An inexpensive way to get some benefit.
83. What is the purpose of test design technique?
Identifying test conditions and Identifying test cases
84. When testing a grade calculation system, a tester determines that all scores from 90 to 100
will yield a grade of A, but scores below 90 will not. This analysis is known as:
Equivalence partitioning
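Sketched in Python, with the grading rule following the scenario in the question:
def grade(score):
    # Function under test: scores from 90 to 100 yield an 'A'.
    return "A" if 90 <= score <= 100 else "not A"

# Equivalence partitioning: one representative value per partition
# instead of testing every possible score.
assert grade(95) == "A"        # partition: 90-100
assert grade(75) == "not A"    # partition: below 90
print("one test per equivalence class is sufficient")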
85. A test manager wants to use the resources available for the automated testing of a web application. What is the best choice of team?
Tester, test automater, web specialist, DBA.
86. During the testing of a module, tester 'X' finds a bug and assigns it to the developer. But the developer rejects it, saying that it is not a bug. What should 'X' do?
Send the detailed information about the bug encountered to the developer and check its reproducibility.
87. A type of integration testing in which software elements, hardware elements, or both are
combined all at once into a component or an overall system, rather than in stages.
Big-Bang Testing
88. In practice, which Life Cycle model may have more, fewer or different levels of
development and testing, depending on the project and the software product. For example,
there may be component integration testing after component testing, and system integration
testing after system testing.
V-Model
89. Which technique can be used to achieve input and output coverage? It can be applied to
human input, input via interfaces to a system, or interface parameters in integration testing.
Equivalence partitioning
90. "This life cycle model is basically driven by schedule and budget risks" This statement is
best suited for…
V-Model
91. In which order should tests be run?
The most important ones must be tested first.
92. The later in the development life cycle a fault is discovered, the more expensive it is to fix.
Why?
The fault has been built into more documentation, code, tests, etc
93. What is Coverage measurement?
It is a partial measure of test thoroughness.
94. What is Boundary value testing?
Test boundary conditions on, below and above the edges of input and output equivalence classes.
For instance, let's say a bank application allows a maximum withdrawal of Rs.20,000 and a minimum of Rs.100. In boundary value testing we test only values at and around the exact boundaries, rather than values from the middle of the range. That means we test at, just below and just above both the minimum and the maximum limits.
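For the withdrawal example, a sketch of the boundary value tests (the is_valid_amount function is hypothetical):
def is_valid_amount(amount):
    # Hypothetical rule: withdrawals must be between Rs.100 and Rs.20,000.
    return 100 <= amount <= 20000

# Boundary value tests: on, just below and just above each edge.
assert is_valid_amount(99) is False      # just below the minimum
assert is_valid_amount(100) is True      # the minimum boundary
assert is_valid_amount(101) is True      # just above the minimum
assert is_valid_amount(19999) is True    # just below the maximum
assert is_valid_amount(20000) is True    # the maximum boundary
assert is_valid_amount(20001) is False   # just above the maximum
print("boundary value tests passed")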
95. What is Fault Masking?
Error condition hiding another error condition.
96. What does COTS represent?
Commercial off The Shelf.
97. The purpose of which is to allow specific tests to be carried out on a system or network that
resembles as closely as possible the environment where the item under test will be used upon
release?
Test Environment
98. What can be thought of as being based on the project plan, but with greater amounts of
detail?
Phase Test Plan
99. What is exploratory testing?
Exploratory testing is a hands-on approach in which testers are involved in minimum planning and
maximum test execution. The planning involves the creation of a test charter, a short declaration of
the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to
be used. The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean that other,
more formal testing techniques will not be used. For example, the tester may decide to use boundary
value analysis but will think through and test the most important boundary values without
necessarily writing them down. Some notes will be written during the exploratory-testing session,
so that a report can be produced afterwards.
100. What is "use case testing"?
In order to identify and execute the functional requirements of an application from start to finish, a "use case" is used, and the technique used to do this is known as "Use Case Testing".
Bonus!
101. What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software
Development Life Cycle) ?
SDLC deals with the development/coding of the software, while STLC deals with the validation and verification of the software.
102. What is traceability matrix?
The relationship between test cases and requirements is shown with the help of a document. This
document is known as traceability matrix.
103. What is Equivalence partitioning testing?
Equivalence partitioning testing is a software testing technique which divides the application's input test data into partitions of equivalent data, from which test cases can be derived so that each partition is covered at least once. This testing method reduces the time required for software testing.
104. What is white box testing and list the types of white box testing?
White box testing technique involves selection of test cases based on an analysis of the internal
structure (Code coverage, branches coverage, paths coverage, condition coverage etc.) of a
component or system. It is also known as Code-Based testing or Structural testing. Different types
of white box testing are
1. Statement Coverage
2. Decision Coverage
105. In white box testing what do you verify?
In white box testing, the following are verified:
1. Verify the security holes in the code
2. Verify the incomplete or broken paths in the code
3. Verify the flow of structure according to the document specification
4. Verify the expected outputs
5. Verify all conditional loops in the code to check the complete functionality of the application
6. Verify the code line by line and aim for 100% test coverage
106. What is the difference between static and dynamic testing?
Static testing: During Static testing method, the code is not executed and it is performed using the
software documentation.
Dynamic testing: To perform this testing the code is required to be in an executable form.
107. What is verification and validation?
Verification is a process of evaluating software at development phase and to decide whether the
product of a given application satisfies the specified requirements. Validation is the process of
evaluating software at the end of the development process and to check whether it meets the
customer requirements.
108. What are different test levels?
There are four test levels
1. Unit/component/program/module testing
2. Integration testing
3. System testing
4. Acceptance testing
109. What is Integration testing?
Integration testing is a level of software testing process, where individual units of an application are
combined and tested. It is usually performed after unit and functional testing.
110. What are the contents of a test plan?
Test design, scope, test strategies and approach are various details that the test plan document consists of:
1. Test case identifier
2. Scope
3. Features to be tested
4. Features not to be tested
5. Test strategy & Test approach
6. Test deliverables
7. Responsibilities
8. Staffing and training
9. Risk and Contingencies
111. What is the difference between UAT (User Acceptance Testing) and System testing?
System Testing: System testing is finding defects when the system undergoes testing as a whole; it is also known as end-to-end testing. In such type of testing, the application is tested from beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests
which determines whether the product will meet the needs of its users.
112. Mention the difference between Data Driven Testing and Retesting?
Retesting: It is a process of checking bugs that are actioned by development team to verify that
they are actually fixed.
Data Driven Testing (DDT): In data driven testing process, application is tested with multiple test
data. Application is tested with different set of values.
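A minimal data-driven testing sketch in Python, where the same test logic runs against multiple data sets (the login rule shown is hypothetical):
def login(username, password):
    # Hypothetical function under test.
    return username == "admin" and password == "secret"

# Data-driven testing: the test logic stays the same, only the data varies.
test_data = [
    ("admin", "secret", True),
    ("admin", "wrong", False),
    ("guest", "secret", False),
    ("", "", False),
]

for username, password, expected in test_data:
    assert login(username, password) == expected, (username, password)
print(f"{len(test_data)} data-driven cases passed")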
113. What are the valuable steps to resolve issues while testing?
114. What is the difference between test scenarios, test cases and test script?
The differences between test scenarios, test cases and test scripts are as follows:
Test Scenarios: A test scenario is prepared before the actual testing starts. It includes plans for testing the product, the number of team members, environmental conditions, making test cases, making test plans and all the features that are to be tested for the product.
Test Cases: It is a document that contains the steps that have to be executed; it is planned earlier.
Test Script: It is written in a programming language and it is a short program used to test part of the functionality of the software system. In other words, it is a written set of steps that is executed automatically rather than performed manually.
115. What is Latent defect?
Latent defect: This defect is an existing defect in the system which does not cause any failure as
the exact set of conditions has never been met
116. What are the two parameters which can be useful to know the quality of test execution?
To know the quality of test execution, we can use two parameters: the defect reject ratio and the defect leakage ratio.
121. How will you conduct Risk Analysis?
For risk analysis, the following steps need to be implemented:
a) Finding the score of the risk
b) Making a profile for the risk
c) Changing the risk properties
d) Deploying the resources for that risk
e) Making a database of risks
122. What are the categories of debugging?
Categories for debugging
a) Brute force debugging
b) Backtracking
c) Cause elimination
d) Program slicing
e) Fault tree analysis
123. What is fault masking explain with example?
When the presence of one defect hides the presence of another defect in the system, it is known as fault masking.
Example: If a negative value causes an unhandled system exception to be thrown, the developer may prevent negative values from being input. This resolves the visible issue but hides the defect of the unhandled exception.
124. Explain what is a Test Plan? What information should be covered in a Test Plan?
A test plan can be defined as a document describing the scope, approach, resources and schedule of
testing activities and a test plan should cover the following details.
Test Strategy
Test Objective
Exit/Suspension Criteria
Resource Planning
Test Deliverables
125. How you can eliminate the product risk in your project ?
To eliminate product risk in your project, there is a simple yet crucial step that can reduce the product risk in your project.
126. What are the common risk that leads to the project failure?
The common risks that lead to project failure are:
127. On what basis can you arrive at an estimation for your project?
To estimate your project, you have to consider the following points:
129. Explain what is a testing type and what are the commonly used testing types?
To get an expected test outcome, a standard procedure is followed which is referred to as a testing type. Commonly used testing types are:
130. While monitoring your project, what all things do you have to consider?
The things that have to be taken into consideration are:
132. What does a typical test report contain? What are the benefits of test reports?
A test report contains the following:
Project Information
Test Objective
Test Summary
Defect
The different types of test plan documents are:
Central/Project test plan: It is the main test plan that outlines the complete test strategy of the
project. This plan is used till the end of the software development lifecycle
Acceptance test plan: This document begins during the requirement phase and is completed
at final delivery
System test plan: This plan starts during the design plan and proceeds until the end of the
project
Integration and Unit test plan: Both these test plans start during the execution phase and last
until the final delivery
151. Explain which test cases are written first black boxes or white boxes?
Black box test cases are written first, because writing them only requires the project plan and requirement documents, and these documents are easily available at the beginning of the project. Writing white box test cases requires more architectural understanding, which is not available at the start of the project.
152. Explain what is the difference between latent and masked defects?
Latent defect: A latent defect is an existing defect that has not caused a failure because the
sets of conditions were never met
Masked defect: It is an existing defect that has not caused a failure because another defect
has prevented that part of the code from being executed
153. Mention what is bottom up testing?
Bottom up testing is an approach to integration testing, where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is repeated
until the component at the top of the hierarchy is tested.
154. Mention what are the different types of test coverage techniques?
Different types of test coverage techniques include
Statement Coverage: It verifies that each line of source code has been executed and tested
Decision Coverage: It ensures that every decision in the source code is executed and tested
Path Coverage: It ensures that every possible route through a given part of code is executed
and tested
155. Mention what is the meaning of breadth testing?
Breadth testing is a test suite that exercises the full functionality of a product but does not test
features in detail
156. Mention what is the difference between Pilot and Beta testing?
The difference between pilot and beta testing is that pilot testing is done by a group of users actually using the product before the final deployment, while in beta testing we do not input real data; instead, the product is installed at the end customer's site to validate whether it can be used in production.
157. Explain what is the meaning of Code Walk Through?
Code Walk Through is the informal analysis of the program source code to find defects and verify
coding techniques
158. Mention what are the basic components of defect report format?
The basic components of defect report format includes
Project Name
Module Name
Defect detected on
Defect detected by
Defect ID and Name
Snapshot of the defect
Priority and Severity status
Defect resolved by
Defect resolved on