Context: This paper presents an approach for selecting regression test cases in the context of large-scale database applications. We focus on a black-box (specification-based) approach, relying on classification tree models to model the input domain of the system under test (SUT), in order to obtain a more practical and scalable solution. We perform an experiment in an industrial setting where the SUT is a large database application in Norway's tax department. Objective: We investigate the use of similarity-based test case selection for supporting black-box regression testing of database applications. We have developed a practical approach and tool (DART) for functional black-box regression testing of database applications. To make the regression test approach scalable for large database applications, we needed a test case selection strategy that reduces test execution costs and analysis effort. We used classification tree models to partition the input domain of the SUT and then selected test cases from those partitions. Rather than selecting test cases at random from each partition, we incorporated similarity-based test case selection, hypothesizing that it would yield a higher fault detection rate. Method: An experiment was conducted to determine which similarity-based selection algorithm was the most suitable for selecting test cases in large regression test suites, and whether similarity-based selection was a worthwhile and practical alternative to simpler solutions. Results: The results show that combining similarity measurement with partition-based test case selection, by applying similarity-based test case selection within each partition, can improve fault detection rates over simpler solutions when specific conditions regarding the partitions are met. Conclusions: Under the conditions present in the experiment, the improvements were marginal. However, a detailed analysis concludes that the similarity-based selection strategy should be applied when each partition contains a large number of test cases and there is significant variability within partitions. If these conditions are not present, incorporating similarity measures is not worthwhile, since the gain over random selection within each partition is negligible.
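The abstract does not say which similarity measure or selection algorithm the experiment found best, so the following is only a minimal sketch of the general idea of partition-based selection with within-partition diversity. It assumes test cases are dictionaries of categorical attribute values (as a classification tree partition might induce); the distance function, the greedy selection, and the `budget_per_partition` parameter are illustrative stand-ins, not the DART tool's actual algorithm.

```python
import random

def distance(tc_a, tc_b):
    """Illustrative dissimilarity: fraction of attributes on which two
    test cases (dicts of categorical attribute values) differ."""
    keys = set(tc_a) | set(tc_b)
    if not keys:
        return 0.0
    return sum(tc_a.get(k) != tc_b.get(k) for k in keys) / len(keys)

def select_within_partition(test_cases, budget):
    """Greedy diversity-based selection: repeatedly pick the test case
    farthest (by its minimum distance) from those already selected."""
    if not test_cases:
        return []
    if budget >= len(test_cases):
        return list(test_cases)
    selected = [random.choice(test_cases)]                 # arbitrary seed
    remaining = [tc for tc in test_cases if tc is not selected[0]]
    while len(selected) < budget:
        best = max(remaining,
                   key=lambda tc: min(distance(tc, s) for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected

def select_regression_suite(partitions, budget_per_partition):
    """partitions: dict mapping a classification-tree leaf (partition id)
    to the list of test cases falling into that partition."""
    suite = []
    for leaf, test_cases in partitions.items():
        suite.extend(select_within_partition(test_cases, budget_per_partition))
    return suite
```

For example, a partition map such as `{"leaf_1": [{"region": "north", "taxpayer": "company"}, ...]}` would yield, per leaf, the most mutually dissimilar test cases up to the budget; random selection within each partition corresponds to replacing the greedy step with `random.sample`.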
The use of Unified Modeling Language (UML) analysis/design models on large projects leads to a large number of interdependent UML diagrams. As software systems evolve, those diagrams undergo changes to, for instance, correct errors or address changes in the requirements. Those changes can in turn lead to subsequent changes to other elements in the UML diagrams. Impact analysis is then defined as the process of identifying the potential consequences (side-effects) of a change, and estimating what needs to be modified to accomplish a change. In this article, we propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. We first verify that the UML diagrams are consistent (consistency check). Then changes between two different versions of a UML model are identified according to a change taxonomy, and model elements that are directly or indirectly impacted by those changes (i.e., may undergo changes) are determined using formally defined impact analysis rules (written with the Object Constraint Language). A measure of distance between a changed element and potentially impacted elements is also proposed to prioritize the results of impact analysis according to their likelihood of occurrence. We also present a prototype tool that provides automated support for our impact analysis strategy, which we then apply to a case study to validate both the implementation and the methodology.
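The paper's impact analysis rules are defined formally in OCL per change type; the sketch below abstracts them into a generic `neighbours` relation and only illustrates how transitive impact and a distance-based prioritization could be computed over a model dependency graph. All names are illustrative, not the paper's notation.

```python
from collections import deque

def impacted_elements(changed, neighbours):
    """Breadth-first propagation of change impact.

    changed:    iterable of model element ids reported as changed
    neighbours: function mapping an element id to the ids it may impact
                (a stand-in for the OCL impact analysis rules)

    Returns a dict {element_id: distance}, where distance is the number
    of rule applications separating the element from a changed one.
    """
    dist = {}
    queue = deque()
    for elem in changed:
        dist[elem] = 0
        queue.append(elem)
    while queue:
        elem = queue.popleft()
        for nxt in neighbours(elem):
            if nxt not in dist:                    # not reached yet
                dist[nxt] = dist[elem] + 1
                queue.append(nxt)
    # Exclude the changed elements themselves from the impact set.
    return {e: d for e, d in dist.items() if d > 0}
```

Sorting the returned elements by ascending distance would give the prioritized impact list the abstract describes: elements closer to a change are ranked as more likely to be impacted.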
System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML-based development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly the Object Constraint Language used across all these artifacts. Our goal is to support the derivation of test requirements, which will be transformed into test cases, test oracles, and test drivers once detailed design information is available. Another important issue we address is that of testability. Testability requirements (or rules) need to be imposed on UML artifacts so as to support system testing efficiently. Those testability requirements result from a trade-off between analysis and design overhead and improved testability. The potential for automation is also an overriding concern throughout our work, as the ultimate goal is to fully support testing activities with high-capability tools.
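As a rough illustration of what a derived test requirement might contain, the sketch below assumes use case scenarios are available as message sequences guarded by OCL-like conditions; the `Scenario` and `TestRequirement` types and all field names are hypothetical and not the paper's notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    """One path through a use case's sequence diagram (hypothetical model)."""
    use_case: str
    guard: str                       # OCL-like condition selecting this path
    messages: List[str] = field(default_factory=list)

@dataclass
class TestRequirement:
    """A scenario to exercise; later refined into test cases and oracles
    once detailed design information is available."""
    use_case: str
    setup_condition: str
    expected_message_sequence: List[str]

def derive_test_requirements(scenarios: List[Scenario]) -> List[TestRequirement]:
    # One test requirement per distinct scenario/guard combination.
    return [TestRequirement(s.use_case, s.guard, list(s.messages))
            for s in scenarios]
```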
... Each team was asked to develop a management information system supporting the rental/return process of a hypothetical video rental business and the maintenance of customer and video databases. Such an application domain ...
... useful. A measure is useful if it is related to a measure of some external attribute [5], e.g., maintainability. Measures of internal product attributes in software engineering are artificial concepts and do not hold any meaning in themselves. For ...
... originate during the software-development process. In Section 11, we present an evolved version of the OSR algorithm (an earlier version of the OSR approach was applied to project cost estimation and published in [12]), which is intended to make OSR models more accurate ...