
Set A

1.

Criteria: Black Box Testing vs. White Box Testing

 Definition: Black Box Testing is a software testing method in which the
internal structure/design/implementation of the item being tested is NOT
known to the tester. White Box Testing is a software testing method in
which the internal structure/design/implementation of the item being
tested is known to the tester.

 Levels Applicable To: Black box testing is mainly applicable to higher
levels of testing (Acceptance Testing, System Testing), while white box
testing is mainly applicable to lower levels (Unit Testing, Integration
Testing).

 Responsibility: Black box testing is generally performed by independent
software testers, white box testing by software developers.

 Programming Knowledge: Not required for black box testing; required for
white box testing.

 Implementation Knowledge: Not required for black box testing; required
for white box testing.

 Basis for Test Cases: Requirement specifications for black box testing;
detailed design for white box testing.
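To illustrate the difference, here is a minimal Python sketch (the function
and tests are invented for illustration): the black-box tests are derived
purely from the specification, while the white-box tests are chosen by
inspecting the code to cover every branch.

def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', or 'scalene' for three side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: based only on the specified input/output behaviour,
# with no knowledge of how classify_triangle is implemented.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(5, 5, 8) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box tests: chosen by reading the code so that each branch of the
# if-chain is executed at least once (branch coverage).
assert classify_triangle(2, 2, 2) == "equilateral"  # first branch
assert classify_triangle(2, 3, 3) == "isosceles"    # second branch (b == c)
assert classify_triangle(2, 3, 4) == "scalene"      # fall-through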

2.

Alpha testing: Alpha testing takes place at the developer's site and is
performed by internal teams before release to external customers. This
testing is performed without the direct involvement of the development
teams.

[Diagram: placement of alpha testing in the software development life cycle]

In the first phase of alpha testing, the software is tested by in-house
developers; the goal during this phase is to catch bugs quickly.

In the second phase of alpha testing, the software is given to the software
QA team for additional testing. Alpha testing is often performed for
commercial off-the-shelf (COTS) software as a form of internal acceptance
testing, before beta testing is performed.

Beta Testing: Beta testing, also known as user testing, takes place at the
end users' site and is performed by the end users to validate the usability,
functionality, compatibility, and reliability of the software.

Beta testing adds value to the software development life cycle as it allows
the "real" customer an opportunity to provide inputs into the design,
functionality, and usability of a product. These inputs are not only critical to
the success of the product but also an investment into future products when
the gathered data is managed effectively.

[Diagram: placement of beta testing in the software development life cycle]

The success of beta testing depends on a number of factors:

 Test Cost

 Number of Test Participants

 Shipping

 Duration of Test

 Demographic coverage

 Structured English:
 It is used to provide a step-by-step specification of an algorithm. It can
be used at any desirable level of detail to describe procedures. It is a
modified form of English that is used to specify the contents of a process
box in a data flow diagram (DFD). It uses the English language instead of a
programming language.
 It uses three basic types of statements to describe a process:

 i. Sequence structure –
 It is a single step or action included in a process. It does not depend
on the existence of any condition.
 For example, to buy a computer science book we follow these steps:
 a. Pick out a desirable book.
 b. Take it to the sales counter.
 c. Pay cash for the book.
 d. Collect the cash receipt.
 e. Collect the book and leave the store.
 ii. Decision structure –
 It is used when two or more actions can be taken depending on the value
of a specific condition.
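 For example, a decision structure might be written in Structured English
as follows (the discount policy here is an invented illustration, not part
of the original answer):

 IF the customer pays within 30 days
     THEN apply a 5% discount
 ELSE
     charge the full invoice amount
 ENDIF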

3.

Two classes of heuristic estimation techniques:

- single variable models

- multi variable models

Single Variable Estimation Models:

A single variable model provides a means to estimate a desired
characteristic of a product from some previously estimated basic
(independent) characteristic of the software, such as its size.

A single variable estimation model takes the following form:

Estimated Parameter = c1 × e^d1

Here e is the characteristic that has already been estimated (the
independent variable). The estimated parameter is the dependent parameter
to be estimated; it could be effort, duration, staff size, etc. c1 and d1
are constants derived from data on past projects.

COCOMO is an example of this type of model.

Effort = 2.4 × (48)^1.05 ≈ 140 PM

Nominal development time = 2.5 × (140)^0.38 ≈ 16 months

Cost required to develop the product = 16 × 18,000 = Rs. 288,000/-
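A minimal Python sketch of the calculation above, assuming the standard
basic-COCOMO organic-mode coefficients (2.4, 1.05, 2.5, 0.38), a size of
48 KLOC, and a rate of Rs. 18,000 per month as in the worked numbers:

def basic_cocomo_organic(kloc: float, monthly_rate: float):
    """Basic COCOMO estimates for an organic-mode project."""
    effort = 2.4 * kloc ** 1.05      # effort in person-months (PM)
    tdev = 2.5 * effort ** 0.38      # nominal development time in months
    cost = tdev * monthly_rate       # cost, as computed in the example above
    return effort, tdev, cost

effort, tdev, cost = basic_cocomo_organic(48, 18_000)
print(f"Effort: {effort:.0f} PM")              # ~140 PM
print(f"Development time: {tdev:.0f} months")  # ~16 months
print(f"Cost: Rs. {cost:,.0f}")                # ~Rs. 294,000; the worked answer
                                               # rounds tdev to 16 months first,
                                               # giving Rs. 288,000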

4.
Cohesion and coupling deal with the quality of an OO design. Generally, a
good OO design should be loosely coupled and highly cohesive. Many of the
design principles and design patterns that have been created are based on
the idea of "loose coupling and high cohesion".

The aim of the design should be to make the application:

 easier to develop
 easier to maintain
 easier to add new features
 less fragile.

Coupling is the degree to which one class knows about another class.
Consider two classes, class A and class B. If class A knows class B through
its interface only, i.e., it interacts with class B through its API, then
class A and class B are said to be loosely coupled.
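A minimal Python sketch of this idea (the class names are invented for
illustration, not part of the original answer): OrderProcessor depends only
on the PaymentGateway interface, so any implementation can be swapped in
without changing it.

from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface (API) through which client classes interact."""
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class StripeGateway(PaymentGateway):
    def charge(self, amount: float) -> bool:
        return True  # a real integration would go here; stubbed for illustration

class OrderProcessor:
    """Knows PaymentGateway only through its interface: loose coupling."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def checkout(self, amount: float) -> bool:
        return self.gateway.charge(amount)

# Any PaymentGateway implementation can be substituted without touching
# OrderProcessor.
processor = OrderProcessor(StripeGateway())
assert processor.checkout(49.99)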

Cohesion indicates the degree to which a class has a single, well-focused
purpose. Coupling is about how classes interact with each other; cohesion,
on the other hand, is about how a single class is designed. The higher the
cohesion of a class, the better the OO design.

Benefits of Higher Cohesion:

 Highly cohesive classes are much easier to maintain and less frequently changed.
 Such classes are more reusable than others, as they are designed with a well-focused purpose.
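A small hypothetical contrast (both classes are invented for illustration):
the first has one well-focused purpose, while the second lumps unrelated
responsibilities together.

class InvoiceFormatter:
    """High cohesion: every method serves the single purpose of
    formatting an invoice."""
    def format_header(self, invoice): ...
    def format_line_items(self, invoice): ...
    def format_total(self, invoice): ...

class InvoiceManager:
    """Low cohesion: persistence, formatting, email, and tax logic
    are unrelated responsibilities mixed in one class."""
    def format_invoice(self, invoice): ...
    def save_to_database(self, invoice): ...
    def email_customer(self, invoice): ...
    def calculate_tax(self, invoice): ...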

A data dictionary is a file or a set of files that contains a database's metadata. The data
dictionary contains records about other objects in the database, such as data ownership, data
relationships to other objects, and other data.

The data dictionary is a crucial component of any relational database. Ironically, because of its
importance, it is invisible to most database users. Typically, only database administrators
interact with the data dictionary.

In a relational database, the metadata in the data dictionary includes the following:

 Names of all tables in the database and their owners


 Names of all indexes and the columns in the tables to which those indexes relate
 Constraints defined on tables, including primary keys, foreign-key relationships to other
tables, and not-null constraints
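As a concrete illustration, SQLite exposes its data dictionary through the
sqlite_master catalog table; a minimal Python sketch (the orders table is
an invented example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total REAL
    )
""")

# sqlite_master is SQLite's data dictionary: it records the name, type,
# and defining SQL of every table and index in the database.
for name, type_, sql in conn.execute(
        "SELECT name, type, sql FROM sqlite_master"):
    print(name, type_, sql)

# PRAGMA table_info exposes per-column metadata (name, declared type,
# not-null flag, default, primary-key flag) for a given table.
for row in conn.execute("PRAGMA table_info(orders)"):
    print(row)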

5.

Verification vs. Validation

 Verification asks: "Are we building the system right?" Validation asks:
"Are we building the right system?"

 Verification is the process of evaluating the products of a development
phase to find out whether they meet the specified requirements. Validation
is the process of evaluating software at the end of the development process
to determine whether it meets customer expectations and requirements.

 The objective of verification is to make sure that the product being
developed conforms to the requirements and design specifications. The
objective of validation is to make sure that the product actually meets the
user's requirements, and to check whether the specifications were correct
in the first place.

 The activities involved in verification are reviews, meetings, and
inspections. The activities involved in validation are testing techniques
such as black box testing, white box testing, gray box testing, etc.

 Verification is carried out by the QA team to check whether the
implemented software conforms to the specification document. Validation is
carried out by the testing team.

A bottom-up approach is the piecing together of systems to give rise to more complex systems,
thus making the original systems sub-systems of the emergent system. Bottom-up processing is a
type of information processing based on incoming data from the environment to form a perception.
From a cognitive psychology perspective, information enters the eyes in one direction (sensory
input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and
recognized as a perception (output that is "built up" from processing to final cognition). In a
bottom-up approach, the individual base elements of the system are first specified in great detail. These
elements are then linked together to form larger subsystems, which then in turn are linked,
sometimes in many levels, until a complete top-level system is formed. This strategy often
resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and
completeness. However, "organic strategies" may result in a tangle of elements and subsystems,
developed in isolation and subject to local optimization as opposed to meeting a global purpose.
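A tiny Python sketch of bottom-up composition (the word-counting pipeline
is an invented example): small, fully specified base elements are linked
into a subsystem, which is then linked into the top-level system.

# Base elements: small, fully specified building blocks.
def tokenize(text: str) -> list[str]:
    return text.lower().split()

def count_words(tokens: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for token in tokens:
        counts[token] = counts.get(token, 0) + 1
    return counts

# Subsystem: formed by linking the base elements together.
def word_frequencies(text: str) -> dict[str, int]:
    return count_words(tokenize(text))

# Top-level system: formed by linking subsystems.
def report(texts: list[str]) -> dict[str, int]:
    totals: dict[str, int] = {}
    for text in texts:
        for word, n in word_frequencies(text).items():
            totals[word] = totals.get(word, 0) + n
    return totals

print(report(["the cat sat", "the dog ran"]))  # {'the': 2, 'cat': 1, ...}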

6.

COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e.,
the number of lines of code. It is a procedural cost-estimation model for
software projects, often used to reliably predict the various parameters
associated with a project such as size, effort, cost, time, and quality. It
was proposed by Barry Boehm in 1981 and is based on a study of 63 projects,
which makes it one of the best-documented models.
The key parameters that define the quality of any software product, and
that are also outputs of COCOMO, are primarily effort and schedule:
 Effort: the amount of labor that will be required to complete a task,
measured in person-month units.
 Schedule: the amount of time required for the completion of the job,
which is, of course, proportional to the effort put in. It is measured in
units of time such as weeks or months.
