SOFTWARE TESTING METHODOLOGIES
Unit–III
Syllabus:
Static Testing: Inspections, Structured Walkthroughs, Technical Reviews. Validation activities: Unit testing, Integration testing, Function testing, System testing, Acceptance testing. Regression testing: Progressive vs regressive testing, Regression testability, Objectives of regression testing, Regression testing types, Regression testing techniques.
Static testing is a software testing technique used to check for defects in a software application without executing the code.
• Drawbacks of dynamic testing:
• Dynamic testing uncovers bugs at a later stage of the SDLC and is therefore costly to debug.
• Dynamic testing is expensive and time-consuming, as it requires creating, running, validating, and maintaining test cases.
• The efficiency of code coverage decreases as the size of the system increases.
• Dynamic testing provides information about bugs; however, debugging is not always easy. It is difficult and time-consuming to trace a failure from a test case back to its root cause.
• Dynamic testing cannot detect all the potential bugs.
Static testing techniques do not execute the software and do not require the bulk of test cases. This type of testing is also known as non-computer-based testing or human testing.
Since verification activities are required at every stage of the SDLC till coding, static testing can also be applied at all these phases.
• The entry and exit criteria are used to determine whether an item is ready to be inspected.
Entry criteria mean that the item to be inspected is mature enough to be used.
• For example, for code inspection, the entry criterion is that the code has been compiled successfully.
• Exit criterion is that once the item has been given for inspection, it should not be updated
Inspection meeting The inspection meeting starts with the author of the item being inspected.
• The author first discusses every issue raised by different members in the compiled log file.
• After the discussion, all the members arrive at a consensus on whether the issues pointed out are in fact errors and, if so, whether they should be admitted by the author.
• It may be possible that during the discussion on any issue, another error is found. Then, this new error is also discussed and recorded as
an error by the author.
• Rework The summary list of the bugs that arise during the inspection meeting needs to be reworked by the author. The author fixes all
these bugs and reports back to the moderator.
• Follow-up It is the responsibility of the moderator to check that all the bugs found in the last meeting have been addressed and fixed. He
prepares a report and ascertains that all issues have been resolved. The document is then approved for release.
Benefits of the inspection process:
• Bug reduction
• Bug prevention
• Productivity
• Real-time feedback to software engineers
• Reduction in development resource
• Quality improvement
• Project management
• Checking coupling and cohesion
• Learning through inspection
• Process improvement
• Finding most error-prone modules
• Distribution of error-types
• The inspection process alone found 52% of the errors. Of the remaining 48% of defects, on an average, only 17% are detected by structural testing.
• The effectiveness of the inspection process lies in its rate of inspection.
• The rate of inspection refers to how much of an item the team evaluates in a given time. If the rate is very fast, it means the coverage of the item being evaluated is high.
• A balanced rate of inspection should be followed so that the process concentrates on a reasonable number of defects. It may be calculated as the error detection efficiency of the inspection process, as given below:
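The formula itself is not reproduced in these notes; a commonly used form (stated here as an assumption, not necessarily the exact expression from the source) is:

$$\text{Error detection efficiency} = \frac{\text{Number of defects found by inspection}}{\text{Total number of defects in the item}} \times 100\%$$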
3.2.5 VARIANTS OF INSPECTION PROCESS
Active Design Reviews
• Overview A brief overview of the module being reviewed is presented. The overview explains the modular structure in case it is unfamiliar to reviewers, and shows them where the module belongs in the structure.
• Review Reviewers are assigned sections of the document to be reviewed and questionnaires based on the bug type. Reviewers answer the
questions in the questionnaires. They are also assigned a timeframe during which they may raise any questions they have, and a time to
meet with designers after the designers have read the completed questionnaires.
• Meeting The designers read the completed questionnaires and meet the reviewers to resolve any queries that the designers may have
about the reviewer’s answers to the questionnaires. The reviewers may be asked to defend their answers. This interaction continues until
both designers and reviewers understand the issues, although an agreement on these issues may not be reached.
Formal Technical Asynchronous Review Method (FTArm)
• In this process, the meeting phase of inspection is considered expensive and therefore, the idea is to eliminate this phase. The inspection process is carried out without a meeting of the members. This is a type of asynchronous inspection [88] in which the inspectors never have to meet simultaneously. For this process, an online version of the document is made available to every member, where they can add their comments and point out the bugs.
Setup It involves choosing the members and preparing the document for an asynchronous inspection process. The document is prepared
as a hypertext document.
Orientation It is the same as the overview step of the inspection process discussed earlier.
Private review It is the same as the preparation phase discussed in the inspection process. Each reviewer or inspector gives comments on the document individually and is unable to see the other inspectors' comments.
Public review In this step, all comments provided privately are made public, and all inspectors are able to see each other's comments and put forward their suggestions. Based on this feedback and in consultation with the author, it is decided whether there is a bug.
Consolidation In this step, the moderator analyses the results of the private and public reviews and lists the findings and unresolved issues, if any.
Group meeting If required, any unresolved issues are discussed in this step. However, the decision to conduct a group meeting is taken by the moderator in the previous step.
Conclusion The final report of the inspection process along with the analysis is produced by the moderator.
Gilb Inspection
• Gilb and Graham [89] defined this process. It differs from Fagan inspection in that the defect detection is carried out by individual
inspectors at their own level rather than in a group. Therefore, a checking phase has been introduced.
• Three different roles are defined in this type of inspection:
• Leader is responsible for planning and running the inspection.
• Author of the document
• Checker is responsible for finding and reporting the defects in the document.
• Entry The document must pass through an entry criterion so that inspection time is not wasted on a document which is fundamentally flawed.
• Planning The leader determines the inspection participants and schedules the meeting.
• Kick-off The relevant documents are distributed, participants are assigned roles and briefed about the agenda of the meeting.
• Checking Each checker works individually and finds defects.
• Logging Potential defects are collected and logged.
• Brainstorm In this stage, process improvement suggestions are recorded based on the reported bugs.
• Edit After all the defects have been reported, the author takes the list and works accordingly.
• Follow-up The leader ensures that the edit phase has been executed properly.
• Exit The inspection must pass the exit criteria as fixed for the completion of the inspection process
Humphrey's Inspection Process
• It was described by Watts Humphrey [90]. In this process, the preparation phase emphasizes finding and logging bugs, unlike Fagan inspections.
• It also includes an analysis phase between preparation and meeting. In the analysis phase, individual logs are analysed and combined into a single list.
• N-Fold Inspection
• It is based on the idea that the effectiveness of the inspection process can be increased by replicating it. If we increase the number of
teams inspecting the item, the percentage of defects found may increase.
• A proper evaluation of the situation is required. Originally, this process was used for inspecting requirement specifications, but it can also
be used for any phase.
• This process consists of the following stages
• Planning and overview This is the formal planning and overview stage. In addition, it also includes the planning of how many teams will
participate in the inspection process.
• Inspection stages Each team carries out its own inspection. It is not necessary that every team chooses the same inspection process; each team is free to adopt any process.
• Collation phase The results from each inspection process are gathered, collated, and a master list of all detected defects is prepared.
• Rework and follow-up This step is the same as in the traditional Fagan inspection process.
• Phased Inspection
• Phased inspections are designed to verify the product in a particular domain only; experts who have experience in that particular domain are called. Thus, this inspection process provides a chance to utilize human resources effectively.
• In this process, inspection is divided into more than one phase.
• There is an ordered set of phases, and each phase is designed to check a particular feature in the product. The next phase in the ordered set assumes that the particular feature has been verified in the previous phase.
• Structured Walkthrough
• The idea of structured walkthroughs was proposed by Yourdon
• This type of static testing is less formal and not so rigorous in nature.
• It has a simpler process compared to the inspection process, as no organized meetings are conducted.
• It does not use a checklist.
Though the structured walkthrough is a variant of the inspection process, it is categorized here as a separate static testing technique.
• READING TECHNIQUES
• A reading technique can be defined as a series of steps or procedures whose purpose is to guide an inspector to acquire a deep
understanding of the inspected software product. Thus, a reading technique can be regarded as a mechanism or strategy for the individual
inspector to detect defects in the inspected product.
• Checklists A checklist is a list of items that focus the inspector's attention on specific topics, such as common defects or organizational rules, while reviewing a software document. The purpose of checklists is to gather expertise concerning the most common defects and thereby support inspections.
• The checklists are in the form of questions which are very general and are used by the inspection team members to verify the item.
• checklists have some drawbacks too:
• The questions, being general in nature, are not sufficiently tailored to take into account a particular development environment.
• Instruction about using a checklist is often missing.
• There is a probability that some defects are not taken care of. It happens in the case of those type of defects which have not been
previously detected.
• Scenario-based reading Scenario-based reading [93] is another reading technique which stresses finding different kinds of defects. It is based on scenarios, wherein inspectors are provided with more specific instructions than typical checklists. Additionally, they are provided with different scenarios, each focusing on a different kind of defect.
• Basili et al. define scenario-based reading as a high-level term, which they break down into more specific techniques. The original method of Porter & Votta [93] is also known as defect-based reading. By the definition of Basili et al., most of the reading techniques are scenario-based techniques, wherein the inspector has to actively work with the inspected documents instead of merely reading them straightforwardly.
• The following methods have been developed based on the criteria given by scenario-based reading.
• Perspective-based reading The idea behind this method is that a software item should be inspected from the perspectives of different stakeholders. Inspectors of an inspection team have to check the software quality as well as the software quality factors of a software artifact from different perspectives. The perspectives mainly depend upon the roles people have within the software development or maintenance process. For each perspective, either one or multiple scenarios are defined, consisting of repeatable activities an inspector has to perform, and the questions an inspector has to answer.
• Usage-based reading This method, proposed by Thelin et al., is applied in design inspections. Design documentation is inspected based on use cases, which are documented in the requirements specification. Since use cases are the basis of inspection, the focus is on finding functional defects which are relevant from the users' point of view.
The steps involved in a structured walkthrough are as follows:
• Organization –
This step involves assigning roles and responsibilities to the team selected for the structured walkthrough.
• Preparation –
• In this step, the focus lies on preparing for the structured walkthrough, which could include thinking of basic test cases that would be required to test the product.
• Walkthrough –
• The walkthrough is performed by a tester who comes to the meeting with a small set of test cases.
• Test cases are executed by the tester mentally, i.e. the test data are walked through the logic of the program and the results are noted down on paper or a presentation medium.
• Rework and Follow – up –
• This step is similar to that of the last two phases of the inspection process.
• If no bugs are detected, then the product is approved for release. Otherwise, errors are resolved and again, a structured walkthrough is
conducted.
3.4 TECHNICAL REVIEWS
• A technical review is intended to evaluate the software in the light of development standards, guidelines, and specifications and to
provide the management with evidence that the development process is being carried out according to the stated objectives
• It is considered a higher-level technique as compared to inspection or walkthrough.
• A technical review team is generally comprised of management-level representatives and project management.
• The moderator should gather and distribute the documentation to all team members for examination before the review.
• He should also prepare a set of indicators to measure the following points:
• Appropriateness of the problem definition and requirements
• Adequacy of all underlying assumptions
• Adherence to standards
• Consistency
• Completeness
• Documentation
• The moderator may also prepare a checklist to help the team focus on the key points.
• The result of the review should be a document recording the events of the meeting, deficiencies identified, and review team
recommendations.
• Appropriate actions should then be taken to correct any deficiencies and address all recommendations.
• Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted and a test procedure defines specific test cases that will be used to demonstrate
conformity with requirements.
• Both the plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioural characteristics are achieved, all performance requirements are attained, documentation is correct and human-engineered, and other requirements are met.
3.5 UNIT VALIDATION TESTING
• Unit testing is normally considered an adjunct to the coding step.
• Testing performed on the code of a unit is called the verification step.
• Units must also be validated to ensure that every unit of software has been built in the right manner in conformance with user
requirements.
• Unit tests ensure that the software meets at least a baseline level of functionality prior to integration and system testing.
• The following types of interface modules must be simulated, if required, to test a module:
• Driver: Sometimes the module that passes inputs to the module under test is not ready or is still under development. In such a situation, the module that simulates the required inputs for the module under test is known as a driver module.
• Suppose module B is under test. In the hierarchy, module A is a superordinate of module B. Suppose module A is not ready and B has to be unit tested. In this case, module B needs inputs from module A. Therefore, a driver module is needed which simulates module A, in the sense that it passes the required inputs to module B and acts as the main program in which module B is called.
• For this purpose, a main program is prepared, wherein the required inputs are either hard-coded or entered by the user and passed on to
the module under test.
• The driver module may print or interpret the results produced by the module under test.
• A test driver may take inputs in the following forms and call the unit to be tested:
• It may hard-code the inputs as parameters of the calling unit.
• It may take the inputs from the user.
• It may read the inputs from a file.
• A driver can be defined as a software module which is used to invoke a module under test and provide test inputs, control and
monitor execution, and report test results or most simplistically a line of code that calls a method and passes a value to that
method.
• A test driver provides the following facilities to a unit to be tested:
• 1.Initializes the environment desired for testing.
2.Provides simulated inputs in the required format to the units to be tested.
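A minimal sketch of such a test driver in Python is given below. The unit under test (compute_discount), its behaviour, and the chosen inputs are hypothetical stand-ins, not taken from the text; a real driver would import and call the actual unit in the project's own input format.

```python
# Hypothetical unit under test. In practice this would be imported from the
# module being tested; it is defined inline here only to keep the sketch runnable.
def compute_discount(order_total):
    return 0.1 * order_total if order_total > 1000 else 0.0

def test_driver():
    # The driver acts as the main program: it hard-codes the inputs,
    # calls the unit under test, and reports the results.
    test_inputs = [500, 1500, 0]
    for order_total in test_inputs:
        result = compute_discount(order_total)
        print(f"input={order_total:5} -> discount={result}")

if __name__ == "__main__":
    test_driver()
```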
• Stubs The module under test may also call some other module which is not ready at the time of testing. Therefore, such modules need to be simulated for testing.
• Dummy modules are prepared for these subordinate modules in place of the actual modules, which are not ready. These dummy modules are called stubs.
• A stub can be defined as a piece of software that works similar to a unit which is referenced by the unit being tested, but it is much
simpler than the actual unit. A stub works as a ‘stand-in’ for the subordinate unit and provides the minimum required behaviour for that
unit.
• Stubs allow the programmer to call a method in the code being developed, even if the method does not have the desired behaviour yet.
• By using stubs and drivers effectively, we can cut down our total debugging and testing time by testing small parts of a program
individually,
helping us to narrow down problems before they expand.
• Stubs and drivers can also be an effective tool for demonstrating progress in a business environment.
For example, if you are to implement four specific methods in a class by the end of the week, you can insert
stubs for any method and write a short driver program so that you can demonstrate to your manager or client that the requirement has been met.
As you continue, you can replace the stubs and drivers with the real code to finish your program.
Stubs and drivers are generally prepared by the developer of the module under test. Developers use them at the time of unit verification.
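A minimal sketch of a stub, with hypothetical names: the unit under test (generate_invoice) calls a subordinate module (a tax-rate lookup) that is assumed not to be ready, so a simplified stand-in provides just enough behaviour for the test.

```python
# Stub for an unfinished subordinate module: it supplies the minimum
# behaviour that the unit under test needs.
def get_tax_rate_stub(region):
    # The real module might query a database or service; the stub
    # simply returns a fixed, plausible value.
    return 0.18

def generate_invoice(amount, region, get_tax_rate=get_tax_rate_stub):
    # Unit under test. The subordinate call is injected so the stub can
    # stand in for the module that is not ready yet.
    tax = amount * get_tax_rate(region)
    return amount + tax

if __name__ == "__main__":
    # A one-line driver exercising the unit with the stub in place.
    print(generate_invoice(1000, "IN"))
```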
3.6 INTEGRATION TESTING
Integration is the activity of combining the modules together when all the modules have been prepared.
Need of integration testing
Modules are not standalone entities; they are part of a software system that consists of many interfaces.
Even if a single interface is mismatched, many modules may be affected.
• Integration testing exposes inconsistency between the modules such as improper call or return sequences.
Suppose module A is linked to three subordinate modules B, C, and D; it means that module A calls modules B, C, and D.
All integration testing methods in the decomposition-based integration assume that all the modules have been unit tested in isolation
Integration methods in decomposition-based integration depend on the methods on which the activity of integration is based.
1)One method of integrating is to integrate all the modules together and then test it.
2)Another method is to integrate the modules one by one and test them incrementally
Based on these methods, integration testing methods are classified into two
categories: (a) non-incremental (b) incremental
Non-Incremental Integration Testing
In this type of testing, either all the untested modules or all the unit-tested modules are combined together and then tested as a whole. It is also known as Big-Bang integration testing.
The Big-Bang method cannot be adopted practically; it has been discarded due to the following reasons:
1) Big-Bang requires more work. For example, consider the following hierarchy of a software system.
According to the Big-Bang theory, if all unit-tested modules are integrated in this example, then for unit testing of all the modules independently, we require four drivers (1, 2, 3, 6) and seven stubs (2, 3, 4, 5, 6, 7, 8). This count grows with the size of the system.
2. Actual modules are not interfaced directly until the end of the software system.
3. It will be difficult to localize the errors since the exact location of bugs cannot be found easily.
Incremental Integration Testing
We start with one module and unit test it. Then we combine the next module to be merged with it and perform tests on both the modules.
• If we are testing module 6, then although module 1 and 2 have been tested previously, they will get the chance to be tested again and
again .
This gives the modules more exposure to the probable bugs by the time the last module in the system is tested.
• Incremental testing suffers from the problem of serially combining the methods according to the design
• One method that may work is to borrow the good features of both the approaches.
• The bottom-up strategy begins with the terminal modules, i.e. the modules at the lowest level in the software structure.
• After testing these modules, they are integrated and tested, moving from the bottom to the top level.
• Bottom-up integration can be considered as the opposite of top-down approach.
• Stubs are not required.
There are situations, depending on the project in hand, which force us to integrate the modules by combining the top-down and bottom-up techniques.
This combined approach is sometimes known as sandwich integration testing.
Sandwich testing is thus the practical approach to adopt in such situations.
If we can refine the functional decomposition tree into a form of module calling graph, then we are moving towards behavioural testing at the integration level. This can be done with call graphs.
A call graph is a directed graph wherein the nodes are either modules or units, and a directed edge from one node to another means that one module has called the other module.
The call graph can be captured in a matrix form which is known as the adjacency matrix.
The idea behind using a call graph for integration testing is to avoid the efforts made in developing the stubs and drivers
If we know the calling sequence and are prepared to wait for a called or calling function that is not ready, rather than simulating it, then call graph-based integration can be used.
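As an illustration, the sketch below shows how a call graph can be captured as an adjacency matrix; the module names and calling relationships are invented for the example and are not the ones used in the figures of the text.

```python
# A hypothetical call graph: each module maps to the modules it calls.
call_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

modules = sorted(call_graph)
index = {m: i for i, m in enumerate(modules)}

# Adjacency matrix: entry [i][j] is 1 if module i calls module j.
adjacency = [[0] * len(modules) for _ in modules]
for caller, callees in call_graph.items():
    for callee in callees:
        adjacency[index[caller]][index[callee]] = 1

for module, row in zip(modules, adjacency):
    print(module, row)
```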
Pair-wise Integration
• We consider only one pair of calling and called modules at a time.
• We can then make a set of such pairs for all the modules, e.g. pairs 1–10 and 1–11 in the example call graph.
• The resulting set gives the total number of test sessions, which is equal to the number of edges in the call graph.
• In the example, the number of test sessions is 19, which is equal to the number of edges in the call graph.
Neighbourhood Integration
There is not much reduction in the total number of test sessions in pair-wise integration as compared to decomposition-based integration.
If we consider the neighbourhoods of the nodes in the call graph, then the number of test sessions may reduce.
The neighbourhood of a node consists of its immediate predecessor and immediate successor nodes.
The neighbourhood of a node can thus be defined as the set of nodes that are one edge away from the given node.
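Reusing the hypothetical call graph from the previous sketch, the snippet below shows that pair-wise integration gives one test session per edge, while the neighbourhood of a node is the union of its immediate predecessors and successors:

```python
call_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

# Pair-wise integration: one test session per (caller, callee) edge.
pairs = [(caller, callee)
         for caller, callees in call_graph.items()
         for callee in callees]
print("pair-wise test sessions:", len(pairs), pairs)

# Neighbourhood integration: a node together with its immediate
# predecessors and immediate successors (nodes one edge away).
def neighbourhood(node):
    predecessors = {m for m, callees in call_graph.items() if node in callees}
    successors = set(call_graph[node])
    return predecessors | successors

for node in call_graph:
    print(node, "->", sorted(neighbourhood(node)))
```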
3.6.3 PATH-BASED INTEGRATION
• In a call graph, when a module or unit executes, some path of source instructions is executed.
• The control is transferred from the calling unit to the called unit. This passing of control from one unit to another unit is necessary for
integration testing.
• There should be information within the module regarding instructions that call the module or return to the module
• This can be done with the help of path-based integration, defined by Paul C. Jorgensen.
• Definitions for path-based integration:
• Source node It is an instruction in the module at which the execution starts or resumes. The nodes where the control is being transferred
after calling the module are also source nodes
• Sink node It is an instruction in a module at which the execution terminates. The nodes from which the control is transferred are also
sink nodes.
• Module execution path (MEP) It is a path consisting of a set of executable statements within a module, as in a flow graph. An MEP:
-- begins with a source node
-- ends with a sink node
-- has no intervening sink nodes
• Message Programming language mechanism by which one unit transfers control to another unit.
• MM-path It is a path consisting of MEPs and messages.
• MM-path graph It can be defined as an extended flow graph where nodes are MEPs and edges are messages.
• Solid lines indicate messages (calls); dashed lines indicate returns from calls.
(The accompanying figures, not reproduced here, show example MM-paths drawn as darkened lines on the MM-path graph.)
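As a hedged illustration of these definitions, the two invented units below are annotated with comments marking source nodes, sink nodes, and the messages that connect their module execution paths into an MM-path:

```python
def unit_b(x):
    y = x * 2          # source node: execution of B starts here when B receives the message (call)
    return y           # sink node: control is transferred back to the caller (return message)

def unit_a():
    total = 0          # source node: execution of A starts here
    total = unit_b(5)  # sink node of A's first MEP: the call is a message transferring control to B;
                       # execution of A resumes here (a new source node) when B returns
    print(total)
    return total       # sink node: execution of A terminates

if __name__ == "__main__":
    # The MM-path: an MEP in A, a message to B, the MEP in B,
    # the return message, and the closing MEP in A.
    unit_a()
```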
3.7 FUNCTION TESTING
Kit has defined function testing as the process of attempting to detect discrepancies between the functional specifications of a software
and its actual behaviour.
The objective of the function test is to measure the quality of the functional components of the system.
The function test must determine if each component or business event:
1. performs in accordance to the specifications,
2. responds correctly to all conditions that may present themselves by incoming events/data,
3. moves data correctly from one business event to the next (including data stores), and
4. is initiated in the order required to meet the business objectives of the system.
Function testing can be performed after unit and integration testing, or whenever the development team thinks that the system has
sufficient functionality to execute some tests.
An effective function test cycle must have a defined set of processes and deliverables.
Examples of functionalities:
Are you able to log in to a system after entering correct credentials?
Does your payment gateway prompt an error message when you enter an incorrect card number?
Does your 'add a customer' screen add a customer to your records successfully?
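A minimal sketch of a requirements-based function test using Python's unittest; the login function and its expected behaviour are hypothetical stand-ins for a real business event.

```python
import unittest

# Hypothetical unit implementing the 'login' business event.
def login(username, password):
    return username == "admin" and password == "secret"

class LoginFunctionTest(unittest.TestCase):
    def test_login_with_correct_credentials(self):
        # Requirement: a user with valid credentials must be able to log in.
        self.assertTrue(login("admin", "secret"))

    def test_login_with_incorrect_credentials(self):
        # Requirement: invalid credentials must be rejected.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()
```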
The primary processes/deliverables for requirements based function test are discussed below
Test planning During planning, the test leader, with assistance from the test team, defines the scope, schedule, and deliverables for the function test cycle.
Partitioning/functional decomposition Functional decomposition of a system (or partitioning) is the breakdown of a system into its functional components or functional areas.
Requirement definition: The testing organization needs specified requirements in the form of proper documents to proceed with the function test.
Test case design: A tester designs and implements a test case to validate that the product performs in accordance with the requirements.
Traceability matrix formation Test cases need to be traced/mapped back to the appropriate requirement.
• A function coverage matrix is prepared. It keeps track of those functions that exhibited the greatest number of errors.
Through the function coverage matrix, we can track which functions are being tested through which test cases; for example, test cases for function F2 must be executed first, as its priority is the highest.
Test case execution An appropriate set of test cases needs to be executed and the results of those test cases recorded.
SYSTEM TESTING
• When all the modules have been integrated into the software system, the integrated system needs to be validated. But the system is not validated against its functional specifications alone.
• system testing is not a process of testing the functions of a complete system or program.
• System testing is the process of attempting to demonstrate that a program or system does not meet its original requirements and
objectives, as stated in the requirement specification.
• It is a series of different tests (performance, security, maximum load, etc.). Only after passing through these tests is the system considered validated.
• Design the system test by analysing the objectives;
• formulate test cases by analysing the user documentation.
• Rather than developing a system test methodology, distinct categories of system test cases are considered:
1. Recovery Testing
• Recovery is just like the exception handling feature of a programming language.
• It is the ability of a system to restart operations after the integrity of the application has been lost
• Some software systems (e.g. operating systems, database management systems, etc.) must recover from programming errors, hardware failures, data errors, or any disaster in the system.
• Recovery testing determines how good the developed software is when it faces a disaster (power failure, network failure, database failure, or crashing of the software itself).
• Recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures, and other similar
problems.
• Examples of recovery testing:
• While the application is running, suddenly restart the computer and thereafter, check the validity of application’s data integrity.
• While the application receives data from the network, unplug the cable and plug it in after a while, and analyse the application's ability to continue receiving data from the point at which the network connection disappeared.
• Restart the system while the browser has a definite number of sessions and after rebooting, check that it is able to recover all of them.
• Recovery tests would determine if the system can return to a well-known state.
• A 'checkpoint' system can also be put in place that meticulously records transactions and system states periodically to preserve them in case of failure. This information allows the system to return to a known state after the failure.
• Beizer proposes that testers should work on the following areas during recovery testing:
• Restart: If there is a failure and we want to recover and start again, then first the current system state and transaction states are discarded. Using checkpoints, the system can be recovered and started again from a new state and can then continue with new transactions.
• Switchover: In case of failure of one component, the standby component takes over control. The ability of the system to switch to the new component must be tested.
2. Security Testing
• Security is a protection system that is needed to assure customers that their data will be protected.
• For example, if Internet users feel that their personal data/information is not secure, the system loses its accountability.
• Security may include controlling access to data, encrypting data in communication, ensuring secrecy of stored data, auditing security events, etc.
• Security vulnerabilities
• Vulnerability is an error that an attacker can exploit.
• Types of security vulnerabilities
• Bugs at the implementation level, such as local implementation errors or interprocedural interface errors
• Design-level mistakes.
• Examples of design-level security flaws include problems in error handling, unprotected data channels, and incorrect or missing access control mechanisms.
How to perform security testing
• Testers must use a risk-based approach, grounded in both the system’s architectural reality and the attacker’s mindset to gauge software
security adequately.
• By identifying risks and the potential losses associated with those risks in the system, and creating tests driven by those risks, the tester can properly focus on areas of code in which an attack is likely to succeed.
• Risk analysis at design level identifies potential security problems and their impacts.
• Once identified and ranked, software risks can help guide software security testing.
3. Performance Testing
• Performance specifications (requirements) are documented in a performance test plan.
• This is done during the requirements development phase of any system development project, prior to any design effort.
• Performance specifications should, at a minimum, address questions such as the expected load on the system and the acceptable response time.
A system, along with its functional requirements, must meet the quality requirements. One example of a quality requirement is the performance level.
Performance testing becomes important for real-time embedded systems, as they demand critical performance requirements
A requirement that the system should return a response to a query in a reasonable amount of time is not an acceptable requirement;
the time must be specified in a quantitative way.
In another example, for a Web application, you need to know at least two things:
(a) expected load in terms of concurrent users or HTTP connections and (b) acceptable response time.
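A hedged sketch of checking response time under a given number of concurrent users; the URL, the number of users, and the acceptable response time are assumptions made only for illustration.

```python
import time
import concurrent.futures
import urllib.request

URL = "http://localhost:8080/"   # hypothetical application under test
CONCURRENT_USERS = 20            # assumed expected load
ACCEPTABLE_SECONDS = 2.0         # assumed acceptable response time

def timed_request(_):
    # Issue one HTTP request and measure how long it takes.
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        times = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    worst = max(times)
    verdict = "OK" if worst <= ACCEPTABLE_SECONDS else "exceeds acceptable limit"
    print(f"worst response time: {worst:.2f}s ({verdict})")
```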
Performance testing usually concludes that it is the software (rather than the hardware) that contributes most to delays (bottlenecks) in data processing.
It therefore becomes necessary that the testing team uses production-size, realistic databases that include a wide range of possible data combinations.
Using realistic-size databases raises the following considerations:
Since a large dataset may require significant disk space and processor power, it leads to backup and recoverability concerns.
If data is transferred across a network, bandwidth may also be a consideration.
4. Load Testing
• Normally, we do not test the system with its full load, i.e. with the maximum values of all resources in the system; normal system testing takes into account only the nominal values of these resources. Load testing, in contrast, observes the system's behaviour when the load is increased towards its specified limits.
5. Stress Testing
• Stress testing is also a type of load testing, but the difference is that the system is put under loads beyond the limits so that the system
breaks.
• Stress testing tries to break the system under test by overwhelming its resources in order to find the circumstances under which it will
crash
The areas that may be stressed are:
• Input transactions
• Disk space
• Output
• Communications
• Interaction with users
• Stress testing is important for real-time systems, where unpredictable events occur, resulting in input loads that exceed the values described in the specification, and the system cannot afford to fail due to maximum load on resources.
• Therefore, in real-time systems, all the threshold values and system limits must be noted carefully. Then, the system must be stress-tested on each individual value.
Defense systems, which are real-time in nature, demand continuous performance in warfare conditions. Thus, for a real-time defense system, we must stress-test the system; otherwise, there may be loss of equipment as well as life.
6. Usability Testing
• Usability testing identifies discrepancies between the user interfaces of a product and the human engineering requirements of its
potential users
• Expectations of the user from the system (proposed by Justin) can be elicited in the following ways:
• Area experts The usability problems or expectations can be best understood by the subject or area experts who have worked for years in
the same area.
They analyse the system’s specific requirements from the user’s perspective and provide valuable suggestions.
• Group meetings A group meeting is an interesting way to elicit usability requirements from the user. These meetings result in potential customers' comments on what they would like to see in an interface.
• Surveys Surveys are another medium to interact with the user. They can also yield valuable information about how potential customers would use a software product to accomplish their tasks.
• Analyse similar products We can also analyse the experience gained from similar kinds of projects done previously and use it in the current project. This study will also give us clues regarding usability requirements. The usability characteristics against which testing is conducted are discussed below:
• Ease of use The users enter, navigate, and exit with relative ease. Each user interface must be tailored to the intelligence, educational
background, and
environmental pressures of the end-user.
• Interface steps User interface steps should not be misleading. The steps should also not be very complex to understand.
• Response time The time taken in responding to the user should not be so high that the user is frustrated or moves to some other option in the interface.
• Help system A good user interface provides help to the user at every step. The help documentation should not be redundant; it should be precise and easily understood by every user of the system.
• Error messages For every exception in the system, there must be an error message in text form so that users can understand what has happened in the system. Error messages should be clear and meaningful.
7. Compatibility/Conversion/Configuration Testing
• Compatibility testing checks the compatibility of the system being developed with the different operating systems, hardware, and software configurations available.
• Many software systems interact with multiple CPUs. In many cases, users require that the devices be interchangeable, removable, or re-configurable. Very often the software will have a set of commands or menus that allow users to make these configuration changes.
• Configuration testing allows developers/testers to evaluate system performance and availability when hardware exchanges and
reconfigurations occur. The number of possible combinations for checking the compatibilities of available hardware and software can be
too high, making this type of testing a complex job.
• Guidelines for compatibility testing
• Operating systems: The specifications must state all the targeted end-user operating systems on which the system being developed will be run.
• Software/Hardware: The product may need to operate with certain versions of Web browsers, with hardware devices such as printers, or with other software such as virus scanners or word processors.
• Conversion testing: Compatibility may also extend to upgrades from previous versions of the software. Therefore, in this case, the system must be upgraded properly and all the data and information from the previous version should also be considered. It should be specified whether the new system will be backward compatible with the previous version.
• Ranking of possible configurations: Since there will be a large set of possible configurations and compatibility concerns, the testers must rank the possible configurations.
• Identification of test cases: Testers must identify appropriate test cases and data for compatibility testing. It is usually not feasible to run the entire set of possible test cases on every possible configuration, because this would take too much time.
• Updating the compatibility test cases: The compatibility test cases must also be continually updated with the following:
(i) Tech-support calls provide a good source of information for updating the compatibility test suite.
(ii) A beta-test program can also provide a large set of data regarding real end-user configurations and compatibility issues prior to the final release of a system.
ACCEPTANCE TESTING
• The purpose of acceptance testing is to give the end user a chance to provide the development team with feedback as to whether or not
the software meets their needs.
• It is the user who needs to be satisfied with the application, not the testers, managers, or contract writers.
• Acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable
buyers to determine whether to accept the system or not.
• Acceptance testing must take place at the end of the development process to determine whether the developed system meets the predetermined functionality, performance, quality, and interface criteria acceptable to the user.
• User acceptance testing is different from system testing.
• System testing is invariably performed by the development team which includes developers and testers.
• User acceptance testing, on the other hand, should be carried out by end-users.
ALPHA TESTING
• Alpha is the test period during which the product is complete and usable in a test environment, but not necessarily bug-free.
• It is the final chance to get verification from the customers that the trade-offs made in the final development stage are coherent.
• Done for two reasons:
• (i) to give confidence that the software is in a suitable state to be seen by the customers (but not necessarily released).
• (ii) to find bugs that may only be found under operational conditions. Any other major defects or performance issues should be
discovered in this stage.
Since alpha testing is performed at the development site, testers and users together perform this testing
Entry Criteria to Alpha
All features are complete/testable (no urgent bugs).
High bugs on primary platforms are fixed/verified.
50% of medium bugs on primary platforms are fixed/verified.
All features have been tested on primary platforms.
Performance has been measured/compared with previous releases (user functions).
Usability testing and feedback (ongoing).
Alpha sites are ready for installation.
Exit Criteria from Alpha
After alpha testing, we must:
Get responses/feedbacks from customers.
Prepare a report of any serious bugs being noticed.
Notify bug-fixing issues to developers.
BETA TESTING
• Beta is the test period during which the product should be complete and usable in a production environment.
• The purpose of the beta ship and test period is to test the company’s ability to deliver and support the product.
• Beta also serves as a chance to get a final 'vote of confidence' from a few customers, to help validate our own belief that the product is now ready for volume shipment to all customers.
• Entry Criteria to Beta
• Positive responses from alpha sites.
• Customer bugs in alpha testing have been addressed.
• There are no fatal errors which can affect the functionality of the software.
• Secondary platform compatibility testing is complete.
• Regression testing corresponding to bug fixes has been done.
• Beta sites are ready for installation.
• Guidelines for Beta Testing
• Don’t expect to release new builds to beta testers more than once every two weeks.
• Don’t plan a beta with fewer than four releases.
• If you add a feature, even a small one, during the beta process, the clock goes back to the beginning of the eight-week period and you need another 3–4 releases.
• Exit Criteria from Beta
• After beta testing, we must:
• Get responses/feedbacks from the beta testers.
• Prepare a report of all serious bugs.
• Notify bug-fixing issues to developers