
Quality Control:

Test Case Generation (Using Black Box) and
Test Data Generation (Using Equivalence Class Partitioning)
Test Case Design Strategies
 Software testing is the process of uncovering errors in requirements, design, and code.
 It is used to assess the correctness, completeness, security, and quality of software products against a specification.
 Two basic approaches of software testing are:
1. Black Box Testing
   1. Equivalence Partitioning
   2. Boundary Value Analysis
   3. Cause-Effect Graphing
   4. Error Guessing
   5. Decision Table
   6. State Transition
2. White Box Testing
   1. Statement Coverage
   2. Branch Coverage
   3. Condition Coverage
   4. Path Testing
Test Case Design Strategies
 Black Box Testing
 A method that examines the functionality of an application without looking at its internal structure.
 The tester never examines the program code and needs no knowledge of the program beyond the requirement specification (SRS).
 You can begin planning for black box testing as soon as the requirements and functional specifications are available.
Black Box Testing

 Advantages
 The designer and the tester are independent of each other.
 The testing is done from the point of view of the user.
 Test cases can be designed when requirements are clear.
Black Box Testing
 Disadvantages
 Test cases are difficult to design.
 Testing every output against every input would take a long time, and many program structures can go unchecked.
 Testing is inefficient because the tester has only limited knowledge of the application.
White Box Testing
 White box testing is the detailed examination of the internal structure and logic of the code.
 In white box testing, you create test cases by looking at the code to detect any potential failure scenarios.
 White box testing is also known as glass box testing or open box testing.
White Box Testing
 Advantages
 As the tester has knowledge of the source code, it is easier to find errors.
 Owing to the tester's knowledge of the code, maximum coverage can be attained when designing test scenarios.
 The tester can then see whether the program diverges from its intended goal.
White Box Testing

 Disadvantages
 As skilled testers are required to perform the tests, cost increases.
 As it is very difficult to look into every corner of the code, some code will go unchecked.
Test Case and Test Data Generation
Black Box Testing

 Equivalence Partitioning
 A good test case is one that has a reasonable probability of finding an error.
 Exhaustive input testing of a program is impossible,
 so we are limited to trying a small subset of all possible inputs.
 We therefore want to select the right subset: the subset with the highest probability of finding the most errors.
Equivalence Class Partitioning
 The input domain of a program is partitioned into a finite
number of equivalence classes such that we can reasonably
assume that a test of a representative value of each class
is equivalent to a test of any other value.
 That is, if one test case in an equivalence class detects an
error, all other test cases in the equivalence class would be
expected to find the same error.
 Conversely, if a test case did not detect an error, we would expect that no other test case in the equivalence class would detect one (unless a subset of the equivalence class falls within another equivalence class, since equivalence classes may overlap one another).
Equivalence Class Partitioning

 By identifying this as an equivalence class, we are stating that if no error is found by a test of one element of the set, it is unlikely that an error would be found by a test of another element of the set.
 Test-case design by equivalence partitioning proceeds in two
steps:
1. identifying the equivalence classes and
2. defining the test cases.
 The equivalence classes are identified by taking each input
condition (usually a sentence or phrase in the specification) and
partitioning it into two or more groups.
Equivalence Class Partitioning
 Following are the guidelines for identifying the equivalence
classes
1. If an input condition specifies a range of values (for example, “the item count can be from 1 to 999”), identify one valid equivalence class (1 ≤ item count ≤ 999) and two invalid equivalence classes (item count < 1 and item count > 999).
2. If an input condition specifies a number of values (for example, “one through six owners can be listed for the automobile”), identify one valid equivalence class (one through six owners) and two invalid equivalence classes (no owners and more than six owners).
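The first guideline can be sketched in code. This is a minimal illustration (function name and class labels are my own, not from the slides) of choosing one representative test value per equivalence class for the “1 to 999” item-count example:

```python
# Classify an item count into its equivalence class for the rule
# "the item count can be from 1 to 999".
def classify_item_count(count):
    if count < 1:
        return "invalid-low"    # invalid class: item count < 1
    if count > 999:
        return "invalid-high"   # invalid class: item count > 999
    return "valid"              # valid class: 1 <= item count <= 999

# One representative test value per equivalence class.
representatives = {0: "invalid-low", 500: "valid", 1000: "invalid-high"}
for value, expected in representatives.items():
    assert classify_item_count(value) == expected
```

Any other value from the same class (say, 7 instead of 500) would be expected to behave the same way, which is the core assumption of the technique.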
Boundary (Value) Analysis

 A greater number of errors tends to occur at the boundaries of the input domain than in the center.
 Boundary value analysis uses the same principle as equivalence partitioning:
 Inputs and outputs are grouped into classes.
 Elements are selected such that each edge of an equivalence class is the subject of a test (boundaries are always a good place to look for defects).
Boundary Value Analysis

 Boundaries mark the point or zone of transition from one equivalence class to another.
 The program is more likely to fail at a boundary,
so these are the best members of (simple,
numeric) equivalence classes to use.
 If software can operate on the edge of its
capabilities, it will almost certainly operate well
under normal conditions.
Boundary Value Analysis

 E.C.P. and B.V.A. can be used together.
 Input: a range of valid values
 Test cases (valid) for the ends of the range
 Test cases (invalid) for conditions just beyond the ends
 Input (real number, range): 0.0 - 90.0
 Test cases:
 0.0, 90.0, -0.001, 90.001
Boundary Value Analysis
 Input: number of valid values
 Test Cases (maximum and minimum number of
values)
 One beneath and beyond these values
 Input (file can contain 1-255 records)
 Test Cases
 0, 1, 255, and 256 records.
 Extremes:
 first/last, min/max, start/finish, over/under, empty/full, shortest/longest, slowest/fastest, largest/smallest, …
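The standard boundary selection for a simple range (minimum and maximum values, plus one beneath and one beyond) can be sketched as a small helper. This is an illustration only; the function name is assumed:

```python
# Generate the classic boundary-value test inputs for an integer range:
# one below the minimum, the minimum, the maximum, one above the maximum.
def boundary_values(minimum, maximum):
    return [minimum - 1, minimum, maximum, maximum + 1]

# For a file that can contain 1-255 records, this yields the
# test cases listed above: 0, 1, 255, and 256 records.
print(boundary_values(1, 255))  # [0, 1, 255, 256]
```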
Boundary Value Analysis
 Use these guidelines for each output condition
 Output: Monthly Deduction
 Minimum = 0.0, Maximum = 3500.50
 Test Cases to cause
 0.0 deduction and 3500.50 deduction
 If possible, design test cases that attempt a negative deduction and a deduction larger than 3500.50.
Example

 Employees of an organization are allowed to claim accommodation expenses while traveling on official tours.
 The program for validating accommodation expense claims has the following requirements:
 There is an upper limit of Rs. 3,000 for accommodation expense claims.
 Any claim above Rs. 3,000 should be rejected and cause an error message to be displayed.
 All expense amounts should be greater than zero, and an error message should be displayed if this is not the case.
Example

 Inputs: Accommodation Expense
 Boundaries of the input values
 It is better to show boundaries graphically
 Boundary: 0 < Expense ≤ 3,000

              0                    3,000
  <-----------|----------------------|----------->
           -1 | +1            2,999  | 3,001
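A minimal sketch of the boundary tests for this rule, assuming a simple predicate (the function name is mine, not from the example): we test on each boundary of 0 < Expense ≤ 3,000 and just beyond it.

```python
# Validity predicate for the expense-claim boundary 0 < Expense <= 3,000.
def is_valid_expense(expense):
    return 0 < expense <= 3000

# Boundary test inputs with expected verdicts:
# on the lower boundary, just inside it, on the upper boundary, just beyond it.
boundary_tests = [(0, False), (1, True), (3000, True), (3001, False)]
for expense, expected in boundary_tests:
    assert is_valid_expense(expense) == expected
```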
Boundary Analysis
 Rather than thinking about a single
variable with a single range of values, a
variable might have different ranges, such
as the day of the month, in a date:
 1-28
 1-29
 1-30
 1-31
Boundary Analysis
 We analyze the range of dates by partitioning
the month field for the date into different sets:
 {February}
 {April, June, September, November}
 {Jan, March, May, July, August, October, December}
 For testing, you want to pick one of each. There might or might not be a “boundary” on months. The boundaries on the days are sometimes 1-28, sometimes 1-29, etc.
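The month partitioning above can be sketched mechanically: group months by their maximum day count, then pick one representative from each group. This is an illustration only (leap-year handling for February is deliberately simplified away):

```python
# Maximum day count per month (February taken as 28; leap years ignored here).
DAYS_IN_MONTH = {
    "January": 31, "February": 28, "March": 31, "April": 30,
    "May": 31, "June": 30, "July": 31, "August": 31,
    "September": 30, "October": 31, "November": 30, "December": 31,
}

# Partition the month field by maximum day count.
partitions = {}
for month, days in DAYS_IN_MONTH.items():
    partitions.setdefault(days, []).append(month)

# One representative month per partition, e.g. February, April, January.
representatives = {days: months[0] for days, months in partitions.items()}
```

This reproduces the three sets on the slide: {February}, {April, June, September, November}, and the seven 31-day months.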
Error Guessing

 Just ‘guess’ where the errors are …
 Relies on the intuition and experience of the tester
 Ad hoc; not really a technique
 Strategy:
 Make a list of possible errors or error-prone situations
(often related to boundary conditions)
 Write test cases based on this list
Error Guessing

 Identify the most common error-prone situations (risk analysis).
 Try to identify critical parts of the program (high-risk sections):
 Parts with unclear specifications
 Parts developed by a junior programmer under difficult conditions …
 Parts with complex specifications and code
Error Guessing

 Defect histories are useful.
 Some items to try are:
 Empty or null lists/strings
 Zero instances/occurrences
 Blanks or null characters in strings
 Negative numbers
 “Probability of errors remaining in the program is
proportional to the number of errors that have
been found so far” [Myers 79]
Equivalence Class Partitioning
 Employees of an organization are allowed to claim accommodation expenses while traveling on official tours. The program for validating accommodation expense claims has the following requirements:
 There is an upper limit of Rs. 3,000 for accommodation expense claims.
 Any claim above Rs. 3,000 should be rejected and cause an error message to be displayed.
 All expense amounts should be greater than zero, and an error message should be displayed if this is not the case.
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)

Inputs: Accommodation Expense

Partition the input values:
 Expense ≤ 0
 0 < Expense ≤ 3,000
 Expense > 3,000
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)

Test Case        | 1             | 2             | 3
Expense          | 2000          | -10           | 3500
Partition tested | 0 < E ≤ 3,000 | E ≤ 0         | E > 3,000
Expected output  | OK            | error message | error message

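The three test cases can be run as table-driven data against a validation routine. The sketch below assumes a `validate_expense()` function matching the stated requirements (the name and exact message are mine, not from the example):

```python
# Expense-claim validator per the requirements:
# amounts must be > 0 and <= Rs. 3,000; anything else yields an error message.
def validate_expense(expense):
    if expense <= 0 or expense > 3000:
        return "error message"
    return "OK"

# (input, partition exercised, expected output) — one case per partition.
test_cases = [
    (2000, "0 < Expense <= 3000", "OK"),
    (-10,  "Expense <= 0",        "error message"),
    (3500, "Expense > 3000",      "error message"),
]
for expense, partition, expected in test_cases:
    assert validate_expense(expense) == expected, partition
```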

Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
 Consider a function, Grade(), with the following specification:
 The function is passed a coursework mark out of 100, from which it generates a grade for the course in the range ‘A’ to ‘D’.
 The grade is calculated from the mark as follows:
 greater than or equal to 70: ‘A’
 greater than or equal to 50, but less than 70: ‘B’
 greater than or equal to 30, but less than 50: ‘C’
 less than 30: ‘D’
 Where a mark is outside its expected range, a fault message (‘FM’) is generated.
 All inputs are passed as parameters.
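A minimal implementation of this specification makes the partitions concrete (this is a sketch of the specified behavior, not code from the slides; out-of-range marks return the fault message 'FM'):

```python
# Grade() per the specification: mark out of 100 -> grade 'A'..'D',
# with 'FM' (fault message) for marks outside the expected range.
def grade(mark):
    if not 0 <= mark <= 100:
        return "FM"          # fault message for out-of-range marks
    if mark >= 70:
        return "A"
    if mark >= 50:
        return "B"
    if mark >= 30:
        return "C"
    return "D"
```

Note how each `if` arm corresponds to exactly one equivalence partition of the input.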
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)

 Design test cases to exercise the partitions
 A test case comprises the following:
 the inputs to the component
 the partitions exercised
 the expected outcome of the test case
 Two Approaches to Design Test Cases
 Separate test cases are generated for each partition
 A minimal set of test cases is generated to cover all
partitions
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
 Input: Course Mark
 Equivalence Class Partitions
 Valid
 0 ≤ real number ≤ 100
 Invalid
 alphabetic
 Coursework < 0
 Coursework > 100
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)

Invalid E.C.(s)
Course Mark < 0 ‘FM’
Course Mark > 100 ‘FM’

Valid E.C.(s)
0 ≤ Course Mark < 30 ‘D’
30 ≤ Course Mark < 50 ‘C’
50 ≤ Course Mark < 70 ‘B’
70 ≤ Course Mark ≤ 100 ‘A’
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
Valid and Invalid Partitions

Test Case         | 1          | 2     | 3
Input Course Mark | 8          | -15   | 47
Partition tested  | 0 ≤ C < 30 | C < 0 | 30 ≤ C < 50
Expected output   | ‘D’        | ‘FM’  | ‘C’

Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
Invalid Partitions

Test Case         | 4
Input Course Mark | ‘A’ (alphabetic)
Partition tested  | non-numeric input
Expected output   | ‘FM’
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
 Equivalence partitions may also be considered for invalid outputs.
 It is difficult to identify unspecified outputs.
 If a test can cause an invalid output to occur, it has identified a defect in the component, its specification, or both.
Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)
Valid and Invalid Partitions

Test Case         | 5     | 6          | 7
Input Course Mark | -20   | 17         | 45
Partition tested  | C < 0 | 0 ≤ C < 30 | 30 ≤ C < 50
Expected output   | ‘FM’  | ‘D’        | ‘C’

Test Case Generation (Using Black Box) and Test
Data Generation (Equivalence Class Partitioning)

Valid and Invalid Partitions

Test Case         | 8           | 9            | 10
Input Course Mark | 66          | 80           | 110
Partition tested  | 50 ≤ C < 70 | 70 ≤ C ≤ 100 | C > 100
Expected output   | ‘B’         | ‘A’          | ‘FM’
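The test cases from these tables can be driven from a single list. The sketch below assumes a `grade_under_test()` routine implementing the Grade() specification (names are mine; the non-numeric check covers test case 4):

```python
# Grade() per the specification, with a type guard so that a non-numeric
# input such as 'A' also yields the fault message 'FM'.
def grade_under_test(mark):
    if not isinstance(mark, (int, float)) or not 0 <= mark <= 100:
        return "FM"
    return "A" if mark >= 70 else "B" if mark >= 50 else "C" if mark >= 30 else "D"

# (input, expected output) — one case per equivalence partition.
test_cases = [
    (8, "D"), (47, "C"), (66, "B"), (80, "A"),   # valid partitions
    (-15, "FM"), (110, "FM"), ("A", "FM"),       # invalid partitions
]
for mark, expected in test_cases:
    assert grade_under_test(mark) == expected
```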
Minimal Test Cases

 Many of the test cases are similar, but target different equivalence partitions.
 It is possible to develop single test cases that exercise multiple partitions at the same time.
 This reduces the number of test cases required to cover all equivalence partitions.
Test Cases for Multiple E.P.(s)
Cause effect graphing
 Cause-effect graphing is a black box testing technique that graphically illustrates the relationship between a given outcome (effect) and all the factors (causes) that influence it.
 The related cause-and-effect diagram used in root-cause analysis is known as the Ishikawa diagram, after its inventor Kaoru Ishikawa, or the fishbone diagram because of the way it looks.
 It helps us determine the root causes of a problem or quality characteristic using a structured approach.
Cause effect graphing
Decision table testing
 This is a systematic approach in which various input combinations and the corresponding system behavior are captured in tabular form.
 This is why it is also known as a cause-effect table.
 We use decision table testing in situations where
different combinations of test input conditions
result in different outputs. With this technique,
we identify the conditions and the resulting
actions of the testing object and present them in
a table.
Decision table testing
Both the email and the password are conditions, and the expected result is the action.
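The email/password decision table can be sketched as a lookup from condition combinations to actions. The condition names and action strings below are illustrative assumptions, not taken from the slides:

```python
# Decision table: each combination of the two conditions
# (email valid?, password valid?) maps to exactly one action.
DECISION_TABLE = {
    (True,  True):  "log in",
    (True,  False): "show error: wrong password",
    (False, True):  "show error: wrong email",
    (False, False): "show error: wrong email and password",
}

def login_action(email_valid, password_valid):
    """Look up the expected action for a given condition combination."""
    return DECISION_TABLE[(email_valid, password_valid)]
```

Each row of the table becomes one test case, so the technique guarantees that every combination of conditions is exercised.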
State transition testing
 State transition testing is used when a system responds differently to the same input depending on current conditions and previous history.
 The previous history of the system is called the
state, while an event is an occurrence that can
result in a transition.
 A state transition diagram or state transition table is a representation of the system's states and the events that cause transitions between them. One event can cause different transitions from the same state.
State transition testing
 This applies to applications that allow a specific number of attempts to access them, such as a login function that locks after a specified number of incorrect attempts.
 The login function takes an email and a password and allows a specific number of attempts; after the maximum number of attempts is exceeded, the account is locked and an error message is displayed.
State transition testing

STATE | LOGIN          | VALIDATION | REDIRECTED TO
S1    | First Attempt  | Invalid    | S2
S2    | Second Attempt | Invalid    | S3
S3    | Third Attempt  | Valid      | S4
S4    | Home Page      |            |
S5    | Error Page     |            |
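The table above can be sketched as a small state machine. The transition to the error page (S5) on a third invalid attempt is implied by the description rather than shown explicitly in the table, so it is an assumption here:

```python
# State transitions: (current state, validation result) -> next state.
TRANSITIONS = {
    ("S1", "invalid"): "S2",   # first failed attempt
    ("S2", "invalid"): "S3",   # second failed attempt
    ("S3", "invalid"): "S5",   # third failure -> error page (locked; assumed)
    ("S1", "valid"):   "S4",   # a valid login always redirects
    ("S2", "valid"):   "S4",   # to the home page S4
    ("S3", "valid"):   "S4",
}

def run_login(attempts, state="S1"):
    """Feed a sequence of validation results through the state machine."""
    for attempt in attempts:
        state = TRANSITIONS[(state, attempt)]
        if state in ("S4", "S5"):   # terminal states: home page / error page
            break
    return state

assert run_login(["invalid", "invalid", "valid"]) == "S4"    # matches the table
assert run_login(["invalid", "invalid", "invalid"]) == "S5"  # account locked
```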
