
Support for Understanding

in Software Maintenance

Eirik Tryggeseth

Department of Computer and Information Science


Faculty of Physics, Informatics and Mathematics
Norwegian University of Science and Technology

Trondheim
February 12, 1997

Abstract
Software systems are becoming increasingly complex. The functions they are supposed to support are growing both in number and in complexity, and the size of today's software systems is therefore constantly increasing.
Traditional software maintenance processes have often resulted in a degradation of the structure, documentation, and source code of the maintained software system. Additional problems of maintainer turnover and recruitment of experienced maintainers have made long-term maintenance a challenge for maintenance organizations. The software system becomes difficult to understand, particularly for new maintainers, and maintenance costs therefore increase.
The increased size and complexity of software systems means that redeveloping a system because of uncontrolled maintenance costs is very expensive. Organizations which produce software systems must therefore focus on how to evolve their current systems efficiently in order to stay competitive.
In this thesis, we analyse the costs and problems of software maintenance, and discuss why software systems evolve. We present a framework for how the evolution of software can be controlled, and how system knowledge can be efficiently reused during the maintenance process. The focus of the framework is to provide the user with easy access to system information from heterogeneous information sources, so that the time spent on trying to understand the system is reduced. The literature reports that as much as 50% of the time spent on maintenance can be attributed to trying to understand the system. Providing efficient mechanisms to reduce this time will therefore result in reduced maintenance costs.
The proposed framework consists of two major parts: (i) a language capable of specifying the logical architectural structure of the system, as well as how this logical structure is related to physical files on disk; and (ii) the specification of a set of relations among heterogeneous system components, together with a mechanism for extracting the interesting relationships dynamically from the different system components through an advanced querying mechanism.
These two parts are used in conjunction to provide recorded system knowledge to the maintainer efficiently, in a manner which makes our proposed framework scale to very large systems without significant reduction in response time.
The maintenance tasks of an organization are often related to maintaining a family of similar software systems rather than one individual system. Such a system family may, for example, represent a core system customized for different customers. The language which specifies the logical structure is capable of describing the evolution of the software family, and provides a mechanism for selecting a particular system configuration for maintenance. This facilitates a top-down exploration of the system which supports the maintainer in understanding the overall system structure.
The querying mechanism allows the user to extract related information from heterogeneous system components, including the requirements, design, user manual, and test documents, as well as the source code. This enables the maintainer to gain functional knowledge of those parts of the system which must be thoroughly understood, e.g. to comply with the requirements of a modification request. The relationships are extracted dynamically on request from the maintainer. Relationships among source code components are, however, extracted once per file version and recorded in a database.
Parts of the framework have been demonstrated by prototypes. The overall utility of the proposed framework cannot be validated by demonstration, as this would require a fully integrated system used in a real setting, which is beyond the scope of this doctoral work. The utility of documentation for providing more knowledge to the maintainer has, however, been demonstrated with statistically significant results in an experiment performed as part of this work.


Acknowledgements
I will use this opportunity to thank a number of people who have provided me with valuable support in different ways during the process of writing this thesis.
First, I would like to thank Professor Reidar Conradi, who invited me to start on my doctoral
study. Without his support and continuing encouragement, this thesis would never have been
initiated.
During the first half of my doctoral study I was funded by the ESPRIT Proteus project1, first as a full-time researcher, later as a part-time doctoral student. The international group of people who constituted the Proteus project team was a really fun bunch of people to work with. The constant input of ideas from the Proteus team was a real inspiration. In particular, I would like to thank the members of the Proteus PCL team, Ian Sommerville, Graham Dean, Gilbert Rondeau, Ariane Suisse, Bjørn Grønquist and Bjørn Gulla, for the cooperation on the development of the Proteus PCL technology; it was a hit.
The cooperation with the researchers at SINTEF Delab, both in the Proteus project and otherwise, is highly appreciated; particular thanks go to Joe Gorman, Jacqueline Floch, Rolf Bræk, Tor Stålhane and Richard Sanders.
The last half of my study has been financed by a grant from the Norwegian Research Council through the IDIS, and later the BEST programmes. I would like to thank the doctoral committee of those programmes for believing in my ideas and encouraging me to follow my research direction. Particular thanks go to Arne Sølvberg and Rudolf Andersen.
Thanks also go to the anonymous reviewers who have commented on earlier drafts of this thesis, and on the various papers and reports which have been produced during this exciting period.
Present and past colleagues in the software engineering (SU) group at the Department of Computer Systems and Information Sciences at the Norwegian University of Science and Technology are gratefully remembered for both professional discussions and social gatherings. A
cordial thanks to Even-André Karlsson, Guttorm Sindre, Erik Odberg, Svein Erik Bratsberg,
Marianne Hagaseth, Jens-Otto Larsen, Minh Nguyen, Bjarte Østvold, Terje Totland, Geir Høydalsvik, Sigurd Thunem, Sivert Sørumgård and Bjørn Gulla.
The inspiration from the students I have advised on different assignments is highly appreciated; special thanks go to Per Axel Aamot, Trygve Røste, Knut Langeggen, Per Arne Vollan, Arnvid Hellebust, and Tore Berg for testing out the implementability of some of my ideas.
The support of my advisor, Associate Professor Øystein Nytrø, has been decisive for the completion of this thesis. Thanks for believing in my vague ideas, which slowly but steadily have been formed into the results presented in this thesis.
Thanks also go to my parents, Bergljot and Oddmund, for supporting and believing in me throughout all these years.
A special thanks goes to my wife Janne, for supporting me in finalizing this work, and for putting up with all my long nights in front of the computer.
Finally, my two sons Andreas and Markus are particularly remembered for constantly reminding me that life is certainly more than work.
1. The Proteus project was project no. 8086 in the European ESPRIT III research programme.


TABLE OF CONTENTS
CHAPTER 1

Introduction .............................................................................. 1

1.1 The motivation for this thesis ....................................................................... 1


1.2 The problems addressed and solutions envisioned....................................... 3
1.2.1 The problems addressed in this thesis................................................. 3
1.2.2 The solutions envisioned in this thesis ................................................ 4
1.3 The process of conceiving this thesis ........................................................... 5
1.4 The claimed contributions of this thesis ....................................................... 6
1.5 The organization of this thesis...................................................................... 7
CHAPTER 2

Software Maintenance:
What - Why - The Costs and Problems....................................11

2.1 Introduction .................................................................................................11


2.2 Software Maintenance - What? .................................................................. 12
2.2.1 To maintain........................................................................................ 12
2.2.2 Maintenance...................................................................................... 13
2.2.3 Software maintenance ....................................................................... 13
2.2.4 Comparing maintenance and software maintenance................... 15
2.2.5 An explorative definition of software maintenance........................... 15
2.3 Software Maintenance - Why? ................................................................... 18
2.3.1 Introduction....................................................................................... 18
2.3.2 Models of software product evolution ............................................... 18
Belady and Lehman's SPE classification......................................................... 18
Parnas' Software Aging............................................................................... 20
Discussion of SPE and Software Aging ........................................................... 21

2.3.3 Models of software process evolution ............................................... 23


Madhavji's Prism model of changes ................................................................ 23
Nguyen and Conradi's evolution patterns........................................................ 24
An experimental view on process evolution - The QIP/GQM.......................... 24
Comparison of the three frameworks for process evolution ............................ 26

2.3.4 Summing up the evolution part ......................................................... 27


2.4 Software Maintenance - The Costs............................................................. 28
2.4.1 Introduction....................................................................................... 28
2.4.2 Costs reported in investigations........................................................ 28
2.4.3 Increased development productivity.................................................. 30
2.4.4 Increased product size....................................................................... 31
2.4.5 Contributions to maintenance effort ................................................. 31
2.4.6 Discussion on the direction of maintenance costs ............................ 32
2.5 Software Maintenance - The Problems ...................................................... 33
2.5.1 The Lientz and Swanson investigation .............................................. 33
2.5.2 Problem areas in Krogstie's investigation ........................................ 34
2.5.3 The Nosek and Palvia study .............................................................. 35

2.5.4 Foffani's study ................................................................................... 35
2.5.5 Chapin's open-ended-question...................................................... 36
2.5.6 Dekleva's Delphi study on maintenance problems ............................ 37
2.5.7 Discussion.......................................................................................... 40
2.6 Studies of real maintenance problems ..................................................... 43
2.6.1 Blum's maintenance paradox ............................................................ 43
2.6.2 Lakhotia's program reading theory ................................................... 43
2.6.3 The systematic and as-needed strategies of Littman et al. ................ 44
2.6.4 The concept assignment problem of Biggerstaff et al........................ 44
2.6.5 Von Mayrhauser and Vans' protocol analysis ................................... 44
2.6.6 Soloway and Ehrlich's programming plans....................................... 46
2.6.7 Further references on software maintenance .................................... 47
2.6.8 Discussion.......................................................................................... 48
2.7 Chapter summary and conclusion ............................................................... 49
2.7.1 Summary ............................................................................................ 49
2.7.2 Conclusion......................................................................................... 50
CHAPTER 3

Assessing the Role of Documentation


in Software Maintenance ........................................................ 51

3.1 Introduction ................................................................................................. 51


3.1.1 Motivation for experiment ................................................................. 51
3.1.2 Overview of experiment..................................................................... 52
3.1.3 Related work...................................................................................... 53
3.1.4 Chapter outline.................................................................................. 53
3.2 Experimental hypotheses ............................................................................ 54
3.3 Detailed experiment design......................................................................... 55
3.3.1 Introduction ....................................................................................... 55
3.3.2 Terminology for classification of experiment variables .................... 55
3.3.3 Definition of experiment variables .................................................... 57
Identification of variables ................................................................................ 57
Final classification of variables....................................................................... 58
Other factors in a real setting .......................................................................... 59

3.3.4 Identification of experiment subjects................................................. 59


3.3.5 Controlling the impact of the Skill variable ...................................... 60
The C++ pre-test.............................................................................................. 60
Partitioning of subjects into categories ........................................................... 61

3.4 Time schedule of experiment ...................................................................... 63


3.5 Measurement extraction from experiment data .......................................... 65
3.5.1 Evaluation criteria for analysis......................................................... 65
A note on measurement theory regarding our measure selection .................... 65
Measure alternatives ........................................................................................ 66
Selected measures............................................................................................. 67

3.5.2 The experiment baseline and modification requests.......................... 67


The experiment baseline................................................................................... 68
The first modification request, delta 1 ......................................................... 70

The second modification request, delta 2 .................................................... 70

3.5.3 Experiment data from category A ..................................................... 70


3.5.4 Experiment data from category B ..................................................... 71
3.6 Result analysis ............................................................................................ 72
3.6.1 Initial analysis................................................................................... 72
3.6.2 Testing of hypothesis H1 ................................................................... 74
3.6.3 Testing of hypothesis H2 ................................................................... 74
Mann-Whitney test on Mpc .............................................................................. 75
Mann-Whitney test on Mu ................................................................................ 76

3.6.4 Checking for correlation among Mpc and Mu.................................. 77


3.6.5 Testing of hypothesis H3 ................................................................... 77
3.6.6 Examining debriefing sheets ............................................................. 78
Response to the debriefing schema. ................................................................. 79

3.7 Chapter summary........................................................................................ 83


3.7.1 Experiences gained ........................................................................... 83
3.7.2 Summary............................................................................................ 83
Hypotheses ....................................................................................................... 83
Facts and analysis............................................................................................ 83

CHAPTER 4

Prerequisites for a Framework to


Understand Software Systems ................................................ 87

4.1 Introduction ................................................................................................ 87


4.2 Internal equilibrium: A result of balanced evolution.................................. 88
4.2.1 A philosophical reflection ................................................................. 88
4.2.2 Internal equilibrium .......................................................................... 90
4.3 Research for "quick fixes".......................................................................... 91
4.4 Economics in a software spectrum ............................................................. 92
4.4.1 The maintainer profile....................................................................... 92
4.4.2 A software spectrum.......................................................................... 93
4.4.3 A short course in SCM ...................................................................... 93
4.4.4 Characterizing maintenance profile ................................................. 95
4.4.5 Software costs in the short run.......................................................... 97
4.4.6 Software costs in the long run........................................................... 99
4.4.7 Factors which influence the maintainability................................... 100
4.4.8 Recapitulating the software degrading process .............................. 101
4.5 Chapter summary and the way ahead ....................................................... 102
4.5.1 Summary.......................................................................................... 102
4.5.2 The way ahead ................................................................................ 103
CHAPTER 5

New Future Technologies? ................................................... 107

5.1 Introduction .............................................................................................. 107


5.2 A new software production paradigm?..................................................... 107
5.2.1 Traditional software production model........................................... 107
5.2.2 Automatic program synthesis.......................................................... 108

5.2.3 Automatic code maintenance........................................................... 109
5.2.4 Relevant work .................................................................................. 110
5.3 A universal representation model for software engineering? ................... 110
5.3.1 Translation of IS data ...................................................................... 111
5.3.2 Translation of documents ................................................................ 111
5.3.3 Translation of languages ................................................................. 112
5.3.4 Translation of software models........................................................ 112
5.4 Conclusion ................................................................................................ 113
CHAPTER 6

Specifying Structural Evolution


and Understanding It Using
a Configuration Language ....................................................115

6.1 Introduction ............................................................................................... 115


6.2 State of the art ........................................................................................... 116
6.3 System modelling and variability ............................................................ 118
6.3.1 Composition structure ..................................................................... 119
Logical structure ............................................................................................ 120
Physical structure........................................................................................... 120

6.3.2 Entity attributes ............................................................................... 122


Entity information attributes .......................................................................... 122
Variability control attributes .......................................................................... 122

6.3.3 Expressing variability...................................................................... 123


Structural variability ...................................................................................... 123
Variability in mapping.................................................................................... 124
Attribute assignment variability (constraints) ............................................... 124

6.3.4 System instantiation......................................................................... 125


6.3.5 Entity classification ......................................................................... 127
Classification for relation definitions............................................................. 128
Classification for system manufacture ........................................................... 128

6.3.6 System manufacture (building)........................................................ 129


6.3.7 Repository management .................................................................. 131
6.4 Tool support .............................................................................................. 133
6.5 Experiences from PCL use........................................................................ 134
6.6 Summary ................................................................................................... 136
6.7 Chapter appendix: The calculator example............................................... 136
CHAPTER 7

Elaboration of a Framework for


Supporting System Understanding ....................................... 141

7.1 Introduction ............................................................................................... 141


7.2 Goals for a framework for system understanding ..................................... 142
7.2.1 Overall objectives............................................................................ 142
7.2.2 The process of understanding software systems for evolution ........ 143
7.2.3 The problem that will be attacked ................................................... 144
7.3 Requirements to the understanding support system.................................. 145

7.3.1 Introduction..................................................................................... 145
7.3.2 Continuous documentation updates ................................................ 146
7.3.3 Use experienced people .................................................................. 149
7.3.4 Textual description or report format............................................... 150
7.3.5 Automatic component identification. .............................................. 151
7.3.6 Support for modification planning.................................................. 152
7.3.7 Hypertext system ............................................................................. 153
7.3.8 Development = Maintenance.......................................................... 155
7.4 Problems of software component relations .............................................. 156
7.4.1 The proposed solution ..................................................................... 156
7.4.2 Problems of solution alternatives.................................................... 157
Static vs. dynamic documents......................................................................... 157
Data dictionary vs. literate programming ..................................................... 158

7.4.3 Different approaches for relating components................................ 159


Relations predefined by a schemata............................................................... 160
Manually inserted relations ........................................................................... 160
Automatically extracted relations .................................................................. 160
Summary ........................................................................................................ 161

7.4.4 Automatic updates of stored links ................................................... 161


7.4.5 Conclusion of section ...................................................................... 163
7.5 Relation types among system components ............................................... 164
7.6 Strategies for decomposing a system using PCL...................................... 165
7.7 Architectural relations .............................................................................. 166
7.7.1 Hierarchical composition................................................................ 166
7.7.2 Requires........................................................................................... 167
7.7.3 Documents & Is_documented_by.................................................... 167
7.8 Instance relations ...................................................................................... 170
7.9 Document element relations ..................................................................... 170
7.9.1 Introduction..................................................................................... 170
7.9.2 RD relationships.............................................................................. 172
7.9.3 RUD relationships........................................................................... 174
7.9.4 RT relationships .............................................................................. 175
7.9.5 DUD relationships .......................................................................... 175
7.9.6 DSC relationships ........................................................................... 175
7.9.7 SCT relationships ............................................................................ 178
7.9.8 Displaying the relations .................................................................. 180
7.9.9 Limiting the document search space ............................................... 182
7.10 Document type relations......................................................................... 184
7.10.1 DT-relations among document elements ....................................... 184
7.10.2 DT-relations among source code elements ................................... 185
7.11 A thesaurus and a query mechanism....................................................... 186
7.11.1 The thesaurus ................................................................................ 186
7.11.2 The query mechanism.................................................................... 187
7.12 Related work........................................................................................... 188
7.13 Chapter summary.................................................................................... 191

CHAPTER 8

Conclusion, Summary and Future Work............................... 193

8.1 Introduction ............................................................................................... 193


8.2 Research approach and main achievements of our work .......................... 193
8.3 Summary of the main achievements ......................................................... 194
8.3.1 Assessing the Role of Documentation in Software Maintenance .... 194
8.3.2 The PCL language for specifying system architecture .................... 195
8.3.3 The system understanding framework ............................................. 196
8.4 Final conclusion ........................................................................................ 197
8.5 Future work ............................................................................................... 197
8.5.1 Overview.......................................................................................... 197
8.5.2 Experiment extensions ..................................................................... 198
Maintenance circumstances ........................................................................... 198
Hypotheses - extended set .............................................................................. 199

APPENDIX A PCL Syntax ........................................................................... 201


A.1 Introduction .............................................................................................. 201
A.2 The concrete syntax of the PCL ............................................................... 201
A.2.1 Notation........................................................................................... 201
A.2.2 Lex definition................................................................................... 202
A.2.3 BNF definition of PCL .................................................................... 202
Comments ....................................................................................................... 202
PCL_library ................................................................................................... 202
PCL entity definitions..................................................................................... 202
PCL list definitions......................................................................................... 205
Conditionals ................................................................................................... 209
Standard library ............................................................................................. 209
Other distinguished names ............................................................................. 210

A.3 Version identification and selection ......................................................... 211


A.3.1 Version identification ...................................................................... 211
A.3.2 Version descriptors.......................................................................... 211
A.3.3 Syntax of version descriptors .......................................................... 212
A.3.4 Inheritance in version descriptors .................................................. 212
Examples of version descriptor inheritance................................................... 212

A.3.5 Attribute assignment ....................................................................... 213


Syntax ............................................................................................................. 213
Assigned values .............................................................................................. 215
Positional specification .................................................................................. 215
Named specification ....................................................................................... 215
Mixed specification ........................................................................................ 215

A.3.6 Decomposition of version descriptors............................................. 216


A.3.7 The bind transformation.................................................................. 217
A.3.8 The select transformation................................................................ 217
A.3.9 Tool processing ............................................................................... 218
A.4 Semantics of makefile generation ............................................................ 218
A.4.1 Definitions....................................................................................... 219

A.4.2 Function values............................................................................... 222
A.4.3 Multi input slots ......................................................................... 223
A.4.4 Possible tool entity instantiations................................................... 224
A.4.5 Utilizing the parts structure............................................................ 224
A.4.6 Pragmatic concerns........................................................................ 225
A.4.7 An example ..................................................................................... 225
APPENDIX B Analysis of Data from the

Programming Methodology Course ..................................... 229


B.1 Introduction.............................................................................................. 229
B.2 Description of the assignment.................................................................. 229
B.3 Partitioning of groups .............................................................................. 231
B.4 Total scores for the groups. ...................................................................... 232
B.5 Total scores for the groups (moderated) .................................................. 234
B.6 Discussion of test score analysis.............................................................. 236
B.7 Measures of code ..................................................................................... 236
B.7.1 LOC measures based on naive application closure........................ 237
B.7.2 Finding the actual application (file) closure .................................. 238
B.8 Comparing system size and test score ..................................................... 242
B.9 Documentation measures ......................................................................... 247
B.10 Time resources for two deltas ................................................................ 251
B.11 Test scores (moderated) for all phases ................................................... 252
B.12 Implementation metrics ......................................................................... 253
B.13 Grades .................................................................................................... 256
APPENDIX C C++ Pre-Test and Calibration............................................. 259
C.1 Introduction.............................................................................................. 259
C.2 C++ pre-test ............................................................................................. 259
C.2.1 Introduction .................................................................................... 259
C.2.2 Part I (Counts 15%) ....................................................................... 259
C.2.3 Part II (2%) .................................................................................... 260
C.2.4 Part III (9%)................................................................................... 261
C.2.5 Part IV (16%) ................................................................................. 261
C.2.6 Part V (26%) .................................................................................. 263
C.2.7 Part VI (32%) ................................................................................. 265
C.2.8 Answers to problems....................................................................... 267
C.3 Evaluation of pre-test calibration............................................................. 269
C.3.1 Introduction .................................................................................... 269
C.3.2 Results from the calibration test..................................................... 269
C.3.3 Discussion of calibration test results ............................................. 270
C.3.4 Calibrating the test problems ......................................................... 271

APPENDIX D Experiment Subject Responses and Reports......................... 273
D.1 Introduction .............................................................................................. 273
D.2 Responses to Q2 in debriefing schema .................................................... 273
D.3 Response to Q4 in debriefing schema...................................................... 274
D.4 Initial subject analysis .............................................................................. 276
D.4.1 Reports for category B (code only), preliminary............................ 276
D.4.2 Reports for category B (code + doc), preliminary .......................... 278
APPENDIX E Analysis of changes of group a03......................................... 281
E.1 Code changes ............................................................................................ 281
E.2 Documentation changes............................................................................ 287
APPENDIX F Statistical observations......................................................... 291
F.1 Introduction ............................................................................................... 291
F.2 Statistics of one variable ........................................................................... 291
F.2.1 Measures of central tendency .......................................................... 291
F.2.2 Measures of variation ...................................................................... 292
F.3 Statistics of several variables .................................................................... 292
F.3.1 Correlation among variables........................................................... 292
F.3.2 Pearson's product-moment correlation coefficient.......................... 293
F.3.3 Spearman's rank ρ ............................................................................ 294
F.3.4 Kendall's tau .................................................................................... 294
F.4 Testing and estimation............................................................................... 296
F.4.1 Estimation of confidence interval for population mean .................. 297
APPENDIX G References............................................................................. 299
G.1 Introduction .............................................................................................. 299
G.2 Cited references ........................................................................................ 299
G.3 Other references........................................................................................ 315


LIST OF FIGURES
CHAPTER 1   Introduction ................................................................ 1
FIGURE 1.   Outline of framework for system understanding .................................. 5

CHAPTER 2   Software Maintenance: What - Why - The Costs and Problems ........... 11
FIGURE 2.   An illustration of the verb maintain .................................................... 13
FIGURE 3.   Software maintenance? ....................................................................... 15
FIGURE 4.   A mental model of software maintenance ........................................... 17
FIGURE 5.   S-type programs .................................................................................. 19
FIGURE 6.   P-type programs .................................................................................. 19
FIGURE 7.   E-type programs .................................................................................. 20
FIGURE 8.   Maintenance costs reported in investigations ..................................... 29
FIGURE 9.   Lientz and Swanson's explorative model of maintenance effort ......... 32
FIGURE 10.  Dekleva's causal associations among maintenance problems ............ 40

CHAPTER 3   Assessing the Role of Documentation in Software Maintenance ........ 51
FIGURE 11.  Pre-test score distribution for all subjects ........................................... 61
FIGURE 12.  Distribution of test scores in categories .............................................. 63
FIGURE 13.  Effort distribution in categories (initial) .............................................. 73
FIGURE 14.  Effort distribution after removal of outliers ........................................ 73
FIGURE 15.  Frequencies of measures for experiment evaluation ........................... 75
FIGURE 16.  Scatterplot of Mpc and Mu for all subjects ......................................... 77
FIGURE 17.  Most true statement (distribution) in debriefing Q3 ............................ 81
FIGURE 18.  Least true statement (distribution) in debriefing Q3 ........................... 81

CHAPTER 4   Prerequisites for a Framework to Understand Software Systems ....... 87
FIGURE 19.  Maintainer profile & knowledge vs. experience ................................. 93
FIGURE 20.  An SCM example ................................................................................ 94
FIGURE 21.  Evolving system structure ................................................................... 96
FIGURE 22.  Costs of short term maintenance ......................................................... 98
FIGURE 23.  Costs of long term maintenance .......................................................... 99

CHAPTER 5   New Future Technologies? ................................................... 107
FIGURE 24.  Traditional software life-cycle .......................................................... 108
FIGURE 25.  Software life-cycle with automatic program synthesis ..................... 108
FIGURE 26.  Practical new software life-cycle ....................................................... 108
FIGURE 27.  Specification synthesis for automatic maintenance ........................... 109
FIGURE 28.  Extending the suitability of automatic program synthesis ................. 111
FIGURE 29.  A systematization for machine translation models ............................ 112

CHAPTER 6   Specifying Structural Evolution and Understanding It Using a Configuration Language ... 115
FIGURE 30.  Language-defined relations ............................................................... 119
FIGURE 31.  Logical structure of the calculator program ...................................... 120
FIGURE 32.  Two dimensions of variability and possible family members ........... 122
FIGURE 33.  Extending the structure of the calculator program ............................ 123
FIGURE 34.  Visual presentation of a composite version descriptor ...................... 127
FIGURE 35.  Extract from the PCL classification hierarchy .................................. 129
FIGURE 36.  Menu for customizing makefile generation ....................................... 131
FIGURE 37.  Derivation graph for the calc example .............................................. 132
FIGURE 38.  Tool overview .................................................................................... 134
FIGURE 39.  PCL compile main window and Repository browser ........................ 135

CHAPTER 7   Elaboration of a Framework for Supporting System Understanding ... 141
FIGURE 40.  Maintainer profile & knowledge vs. experience ............................... 147
FIGURE 41.  Component structures ........................................................................ 153
FIGURE 42.  Role division in software system production .................................... 156
FIGURE 43.  System element symbols ................................................................... 156
FIGURE 44.  A feature relation example ................................................................ 157
FIGURE 45.  Relation types .................................................................................... 165
FIGURE 46.  Different DE-relations ....................................................................... 171
FIGURE 47.  Possible user interface ....................................................................... 181
FIGURE 48.  ER model for information extracted from C++ file ........................... 185
FIGURE 49.  Small thesaurus example ................................................................... 187
FIGURE 50.  Example of a means/end network ...................................................... 190

CHAPTER 8   Conclusion, Summary and Future Work ............................... 193

APPENDIX A  PCL Syntax ........................................................................... 201
FIGURE 51.  A PCL fragment ................................................................................. 226
FIGURE 52.  Information emitted to makefile ........................................................ 226
FIGURE 53.  Resulting makefile ............................................................................. 227

APPENDIX B  Analysis of Data from the Programming Methodology Course ........ 229
FIGURE 54.  Total score for delivery 3 .................................................................. 233
FIGURE 55.  Total score for delivery 4 .................................................................. 233
FIGURE 56.  Total score for delivery 5 .................................................................. 234
FIGURE 57.  Total score for delivery 3 (moderated) .............................................. 234
FIGURE 58.  Total score for delivery 4 (moderated) .............................................. 235
FIGURE 59.  Total score for delivery 5 (moderated) .............................................. 235
FIGURE 60.  Averages for the data set ................................................................... 237
FIGURE 61.  LOC measure, for all three deliveries ............................................... 238
FIGURE 62.  Actual LOC measure, for all three deliveries .................................... 241
FIGURE 63.  Averages for LOC measures ............................................................. 242
FIGURE 64.  Scatter plots, delivery 3, LOC/moderated ......................................... 243
FIGURE 65.  Scatter plots, delivery 4, LOC/moderated ......................................... 243
FIGURE 66.  Scatter plots, delivery 5, LOC/moderated ......................................... 243
FIGURE 67.  Word count, design documents ......................................................... 248
FIGURE 68.  Word count, user manuals ................................................................. 248
FIGURE 69.  Word count, test documents .............................................................. 249
FIGURE 70.  Resources spent for 1st delta (delivery 4) (time in mins.) ................. 251
FIGURE 71.  Resources spent for 2nd delta (delivery 5) (time in mins.) ................ 251
FIGURE 72.  Test points (moderated) per group/phase .......................................... 252
FIGURE 73.  Point slopes for groups ...................................................................... 252
FIGURE 74.  Fulfillment of deltas .......................................................................... 253
FIGURE 75.  Number of classes and functions ....................................................... 254
FIGURE 76.  Weighted methods per class (WMC) ................................................ 254
FIGURE 77.  Depth of inheritance tree (DIT) ......................................................... 255
FIGURE 78.  Number of children (NOC) ............................................................... 256
FIGURE 79.  Frequency histogram for group grades ............................................. 256
FIGURE 80.  Frequency histogram for student grades ........................................... 257
FIGURE 81.  Frequency histogram for group test scores ....................................... 257

APPENDIX C  C++ Pre-Test and Calibration ............................................. 259

APPENDIX D  Experiment Subject Responses and Reports ......................... 273

APPENDIX E  Analysis of changes of group a03 ......................................... 281
FIGURE 82.  Variation in file size .......................................................................... 282

APPENDIX F  Statistical observations ......................................................... 291
FIGURE 83.  Scatter plots (examples) .................................................................... 292
FIGURE 84.  Sample scatter plot ............................................................................ 295

APPENDIX G  References ............................................................................. 299

LIST OF TABLES
CHAPTER 1
TABLE 1.

CHAPTER 2
TABLE 2.
TABLE 3.
TABLE 4.
TABLE 5.
TABLE 6.
TABLE 7.
TABLE 8.
TABLE 9.
TABLE 10.
TABLE 11.
TABLE 12.

CHAPTER 3
TABLE 13.
TABLE 14.
TABLE 15.
TABLE 16.
TABLE 17.
TABLE 18.
TABLE 19.
TABLE 20.
TABLE 21.
TABLE 22.
TABLE 23.
TABLE 24.
TABLE 25.
TABLE 26.
TABLE 27.
TABLE 28.
TABLE 29.
TABLE 30.
TABLE 31.
TABLE 32.
TABLE 33.

Introduction .............................................................................. 1
Overview of thesis content

.................................................................................... 7

Software Maintenance:
What - Why - The Costs and Problems...........................11
Comparing S, P, and E-type programs ................................................................ 22
Example properties of a change ([Madhavji, 1992b]) ........................................ 23
Categorization for process evolution ([Nguyen and Conradi, 1996]) ................. 25
Maintenance investigations (from [Krogstie, 1994b]) ....................................... 29
Project measures at NASA/SEL .......................................................................... 30
Software process improvement results (CMM-based, 1987-1993) .................... 30
Lientz and Swansons list of problems in maintenance ...................................... 33
Problem groups in Lientz and Swansons study ................................................ 34
Delphi study on maintenance problems (from [Dekleva, 1992]) ....................... 39
Important problems of maintenance. ................................................................... 41
Knowledge sought by maintainers, from [Von Mayrhauser and Vans, 1994]. .. 45

Assessing the Role of Documentation


in Software Maintenance ............................................... 51
Variables in the experiment ................................................................................. 58
Categorization of subjects ................................................................................... 62
C++ familiarity - self assessment ........................................................................ 62
Experiment time schedule ................................................................................... 63
Siegels summary of measurement scales and relevant statistics ........................ 65
Development plan for projects ............................................................................ 69
Source code measures of the experiment baseline .............................................. 69
Document measures of experiment baseline (words) ......................................... 69
Time usage report for delta1, category A ............................................................ 70
Time usage report for delta1, category B ............................................................ 71
Effort distribution on measured variables ........................................................... 72
Scores on the understanding measure (Mu) ........................................................ 74
Scores on the pseudo code measure (Mpc) ......................................................... 74
Scores when adding pseudo code and understanding measure ........................... 74
Rank computation for Mpc. ................................................................................ 76
Mann-Whitney results from SPSS 6.1 ................................................................ 76
Spearman ranks between test results and experiment measures ......................... 78
Priorities of question Q3 ..................................................................................... 80
Sum of statement priorities (smaller value means higher priority) ..................... 81
Scores on Mpc and Mu ....................................................................................... 84
Spearman rank correlation coefficients ............................................................... 84

xviii
CHAPTER 4

Prerequisites for a Framework to


Understand Software Systems ....................................... 87

TABLE 34. Characterization of software spectrum ................................................................ 95


TABLE 35. Characteristics of systems in software spectrum ............................................... 100
TABLE 36. Software system states ....................................................................................... 101

CHAPTER 5

New Future Technologies? ................................................... 107

CHAPTER 6

Specifying Structural Evolution


and Understanding It Using
a Configuration Language ...........................................115

TABLE 37. Support offered by MILs & SCM systems to PROTEUS requirements ........... 117
TABLE 38. PCL entity types and sections ............................................................................ 119

CHAPTER 7

Elaboration of a Framework for


Supporting System Understanding .............................. 141

TABLE 39. Overview of positions and arguments ............................................................... 146


TABLE 40. Comparison of relation capturing approaches ................................................... 161

CHAPTER 8

Conclusion, Summary and Future Work............................... 193

TABLE 41. Scores on Mpc and Mu ...................................................................................... 195


TABLE 42. Spearman rank correlation coefficients ............................................................. 195
TABLE 43. Possible maintenance situations ........................................................................ 199

APPENDIX A PCL Syntax ........................................................................... 201


TABLE 44. Operators in attribute assignments .................................................................... 214
TABLE 45. Built-in functions for use in simple expressions ............................................... 223
TABLE 46. Possible tool instantiations ................................................................................ 224

APPENDIX B Analysis of Data from the

Programming Methodology Course ............................ 229


TABLE 47. Development plan for projects .......................................................... 230
TABLE 48. Descriptive statistics for total scores ................................................. 232
TABLE 49. Descriptive statistics for moderated total scores ............................... 235
TABLE 50. Statistics for groups which delivered all times (N=23) ..................... 236
TABLE 51. Number of application files per group ............................................... 239
TABLE 52. Correlation measures for LOC & moderated score ........................... 244
TABLE 53. Spearman rank, 1-tailed significance ................................................ 244
TABLE 54. Kendall's tau-b, 1-tailed significance ................................................ 245
TABLE 55. Correlation measures for number of uncommented LOC & moderated score .. 245
TABLE 56. Corr meas for number of uncomm LOC & number of lines of comments ....... 246

TABLE 57. Number of code lines without comments (loc) and lines of comments (comm) . 246
TABLE 58. Content evaluation of documents (delivery5) ................................... 249
TABLE 59. Standard deviations and means for document measures ................... 250
TABLE 60. Mean group grade and standard deviation ........................................ 257

APPENDIX C C++ Pre-Test and Calibration............................................. 259


TABLE 61. Volunteers' answers to experience questions ..................................... 269
TABLE 62. Volunteers' answers to pre-test questions .......................................... 270
TABLE 63. Total score, time used, experience and test judgement ..................... 270
TABLE 64. Percent of time used on different problem groups ............................ 271
TABLE 65. Scores after conversion ..................................................................... 271

APPENDIX D Experiment Subject Responses and Reports......................... 273


TABLE 66. Time usage report for delta1, category A .......................................................... 277
TABLE 67. Time usage report for delta1, category B .......................................................... 279

APPENDIX E Analysis of changes of group a03......................................... 281


TABLE 68. Files and changes in a03 application ................................................. 281
TABLE 69. LOC count for a03 files ..................................................................... 281
TABLE 70. Changes from delivery 3 to delivery 4 .............................................. 282
TABLE 71. Changes from delivery 4 to delivery 5 .............................................. 286
TABLE 72. Files and their size ............................................................................. 287
TABLE 73. Changes in documents from delivery 3 to delivery 4 ....................... 287
TABLE 74. Changes in documents from delivery 4 to delivery 5 ....................... 289

APPENDIX F Statistical observations......................................................... 291


TABLE 75. Degree of relation indicated by the magnitude of r ........................................... 293
TABLE 76. Ranking ordinal variables ................................................................................. 294
TABLE 77. Computation of concordance, discordance and ties .......................................... 295

APPENDIX G References ............................................................................ 299


CHAPTER 1

Introduction

1.1 The motivation for this thesis


The software crisis.

"There is no silver bullet", claimed Brooks1, and buried the hopes of a free lunch for software developers around the world. "Yes, there is", Harel2 replied. Are there really any silver bullets for software development? Or is all work and no play the only solution to producing maintainable, quality software?
The 1968 Nato Software Conference [Naur and Randell, 1969] described the state of software development as a software crisis. The problems covered by this phrase have generated huge amounts of research funding, and led to a new engineering field: software engineering. The motivation for research in this field is the urge to find a solution to the underlying problems of this crisis. These are:
Software contains errors, and does not meet the requirements of the users,
Software projects are always late, and
Software projects always slip their budget.

This research has produced hundreds of proposals for new development notations, development processes, design paradigms, programming languages, and project support environments.
Still we hear about projects with the problems above. In Norway, several government projects
have been stopped or delayed for these reasons, e.g.:
The Police Operational System (PO): This system was developed for the Norwegian

police. The system should assist the police in its operative work, particularly before and
during the 1994 Olympic Winter Games in Lillehammer. After the games, the PO system
was to be installed at local police departments all over Norway. This would justify the
costs of the system, which totalled NOK 45 million. One small problem stopped the installation plan: the system was developed to be used on SUN workstations, while local police departments only had old PCs. An extra NOK 300 million had to be spent on
machine upgrades. Initial testing of the PO system shows that the capabilities of the system are much more advanced than the needs of an average Norwegian police department. Today, three years later, after taking into account the needs of the smaller police departments, the system is reported to be successfully in use in one police district1.
1. [Brooks Jr., 1987]
2. [Harel, 1992]
Tress-90: This project was initiated in 1990 with the intentions of becoming the common

system for management and control of social security matters in Norway. The project
was stopped in 1994 with a budget slip of several hundred million NOK. The total costs of the project were in 1991 estimated at NOK 1200 million.
These examples are not unique; another example is the infamous luggage control system at Denver airport: The delay of the control system for luggage transport delayed the opening of the new airport by several months, resulting in losses of billions of dollars.
Further evidence of the software crisis is given in a report2 surveying 500 organizations in 8
countries. The report reveals that two out of three corporations and public institutions have
experienced problems with controlling IT projects during the last two years. Lack of risk control procedures and poor project management are the most common reasons for the failure of
IT projects, according to the survey. 20% of the organizations in the survey have cancelled an
important project after heavy investments in it. The investigation also shows that organizations
with good risk control and quality assurance have encountered significantly lower losses and fewer operational problems than organizations with poor risk control.
As this is being written, a European rocket has just been launched in order to place four satellites into orbit.
Only minutes after the launch the rocket had to be destroyed, due to its failure to follow the
scheduled course. I am almost sure that software failure was the reason.
Evolve, not revolve.

Software engineering research is now in its golden years. Like the past forty years, the need for
software is continuously increasing, with an explosion in the last decade. The revolution of
computer systems has opened an enormous market and resulted in high expectations for new
software products3. This implies a race for featurization, and a struggle for reaching the market first. The focus of software development has accordingly been on the first release, often
neglecting the needs of later releases. We believe that such a focus will, and certainly must, change.
The costs related to a software system's4 life cycle are evenly divided among the phases prior
to the first release, and the phase after that milestone5. This even division has remained steady
during the last two decades, and it is my opinion that this situation will not change significantly
in the future: An organization's software systems are investments which influence its operation, behaviour, and output. When the organizational needs change, the software used in the
1. Source: Teknisk Ukeblad, Volume 144, no. 4, 30 January, 1997
2. The survey investigation is performed by Coopers & Lybrand. The results were presented in Kapital
Data ajour, 26 June, 1996 (http://www.sol.no/kdnett)
3. I will use the term software product for what the customer buys from a software producer. Alternative terms that I may use are product and application.
4. I will use the term software system, or system to mean everything which a software producer
generates during software development and maintenance. This means that a software system
includes the software product, and all documentation which the software developer/maintainer produces/needs during the software life cycle.
5. See Section 2.4 for a discussion of software maintenance costs.


organization must change as well.


Software systems have generally grown in size as the expectations have increased, and technology has permitted more advanced user interfaces and interconnection with other systems.
The size makes it difficult for potential competitors to establish alternative products with similar functionality, as it increases the potential of project failure. The dependency of organizations on particular software systems will therefore increase, and longer lifetime of existing
software can therefore be expected. Reuse and evolution of existing systems must be given priority in order to tackle this situation.
A large portion of software evolution costs could have been avoided if developers did not
focus blindly on the first release, but also planned for the evolution of the software systems.
This calls for special support to control the evolution of software systems. If such support were
reasonable, several problems of software maintenance1 could be mastered, and effort could be
spent on activities that really matter: evolving the systems to meet changing requirements. It will no longer be sufficient to be first to market; rather, the evolution of the software must be controlled.
The underlying motivation of this thesis is to investigate and understand the problems of software evolution, and propose support to handle the evolution of software systems.
Epilogue.

14 hours after the above was written, Reuter conveyed the following press release:
A senior spokesperson of the air space centre in French Guiana reported that an error in a
computer program was the possible cause for the failure of the Ariane-5 launch. The rocket
failed to follow its planned course, and had to be destroyed by ground personnel.
Daniel Mugnier, head of the launch, reported on a press conference that the computer program sent wrong signals to the rocket. This made the rocket deviate from its planned course.
Due to this, ground personnel had to destroy the rocket only 30 seconds after the launch. A
commission has been ordered to investigate the failure, and will present a final report on the
incident in the middle of July.

Reuter also reported that the development of the rocket had taken 10 years. The development
costs were estimated at 500 million. Indeed, we can still speak of a software crisis!

1.2 The problems addressed and solutions envisioned


1.2.1 The problems addressed in this thesis
All successful software products eventually evolve. However, if the successful providers do
not react to new competition and changing requirements from their users, their success will
cease. In order to be competitive, they must be able to effectively evolve their systems. There
are several problems related to software evolution; the main problems addressed by this thesis
are:
1. The lack of documentation inhibits effective evolution. There are two major reasons for

this problem: Software developers are too focused on the first release, and do not document all decisions and solutions adopted in the software. The software maintenance
activity is often initiated with people having experience from the development phase of
the product. They consider updating documentation an extra burden which decreases their productivity, and limits the number of changes which can be included in the next release.
2. Turnover of maintenance personnel reduces maintenance productivity. It is a problem that system knowledge is lost when experienced maintainers leave. New maintainers must learn the system before they become productive. How can this learning period be shortened?
3. The time used to understand the software in order to make the changes represents a significant part of the total software maintenance effort. This is a consequence of the first and second problem. When maintainers with experience from initial development are moved to new projects, the less experienced ones are left with a system where source code is the only reliable source of information.
4. The structure of software deteriorates over time. Since maintainers lack system knowledge, they make changes which were not foreseen by the original developers, and the software system is gradually becoming a patchwork of changes, resulting in deteriorated system structure. How can maintainers increase their system knowledge level?
5. Different releases of the software product must be continuously evolved. Maintainers may incorporate similar changes to different system releases, hence overlooking reuse opportunities. This implies extra work, and may increase maintenance costs significantly. The simultaneous evolution of several releases also involves problems of software configuration management, i.e. which parts belong to which release.

1. See Section 2.5 where problems of software maintenance are discussed.

Over time, these problems result in a software system with low maintainability. The software
producer may decide that new requirements cannot be accomplished at competitive costs.
Redevelopment of the system may be decided to increase the maintainability. This takes time
and is expensive - during this time competitors may have taken over the market.
This thesis investigates these problems and proposes solutions for how to handle this maintenance crisis.

1.2.2 The solutions envisioned in this thesis


It is my vision that when evolution of the software system is performed in a controlled manner,
the mentioned problems of maintenance can be tackled. This requires that all parts constituting
the software system are simultaneously evolved, resulting in a software system in internal
equilibrium. I discuss this term further in Chapter 4.
When the system is in internal equilibrium, a framework for systematic utilization of the information contained in the system will reduce the time needed for system understanding. The systematic use of the information will provide the maintainer with more thorough knowledge of
both the structure and the components of the system. This will reduce the system deterioration,
and hence the need for redevelopment is reduced.
When a new generation of the system must be developed, the fact that the system is in internal
equilibrium allows more of the original system to be reused. This is possible since the requirements which are valid for the system are explicitly defined, and the software components are
described in updated design documents. This information allows for more direct reuse of components in a new system generation.
I outline my envisioned framework for system understanding in Figure 1. It is the vision of this
thesis to define such a framework for reducing the costs related to system understanding.


[Figure 1 (original diagram, described in text): a change management system comprising configuration management, version management, an evolution description, a change planner and evolution knowledge; a thesaurus and a search engine; parsers for code and documents feeding a dependency database; and information presenters for code, documents and architecture, all operating on a system kept in internal equilibrium.]
FIGURE 1. Outline of framework for system understanding
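To give a flavour of what the dependency database in this outline could record, the following minimal sketch in Python is purely illustrative; the class name, relation names and item identifiers are invented here and are not the relations defined later in Chapter 7.

# Illustrative sketch: typed, bidirectional links between heterogeneous system
# items (requirements, design sections, source files, tests) that a search
# engine could traverse when answering a maintainer's question.

from collections import defaultdict

class DependencyDatabase:
    def __init__(self):
        # (item, relation) -> set of related items
        self._links = defaultdict(set)

    def add(self, source, relation, target):
        self._links[(source, relation)].add(target)
        self._links[(target, "inverse:" + relation)].add(source)

    def query(self, item, relation):
        return sorted(self._links[(item, relation)])

db = DependencyDatabase()
db.add("REQ-12 spell checking", "implemented_by", "spellcheck.c")
db.add("REQ-12 spell checking", "tested_by", "test_spell.c")
db.add("design.doc section 4.2", "describes", "spellcheck.c")

print(db.query("spellcheck.c", "inverse:implemented_by"))  # ['REQ-12 spell checking']
print(db.query("REQ-12 spell checking", "tested_by"))      # ['test_spell.c']

The point is only that recorded, typed relationships let a maintainer trace from a requirement to the code and tests realizing it, and back again, instead of reading every document.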

1.3 The process of conceiving this thesis


After graduating from the Norwegian Institute of Technology (NTH) in 1992, I was offered a
research position in a European ESPRIT project, the Proteus project. The project size was 50
person years over a three year period. The goal of Proteus was to provide methods and tools
for supporting the evolution of software. The project consisted of several work packages,
including the two I participated in during the first two years of my doctoral study:
The definition of a language for configuration specification, and a supporting tool set.
The specification of a general methodological framework for system evolution, based on

the results obtained in the project.


The results were the Proteus Configuration Language (PCL) and the PCL tool set, and a report
describing the Proteus Methodological Framework. The PCL and the supporting tool set were a
collaborative effort of the University of Lancaster, Cap Gemini Innovation, and NTH. The
framework activity was a collaborative effort of all project partners.
The originality of the PCL is its ability to describe a family of systems in a single model. This
means that a set of configurations are shown in one description, emphasizing the family structure, and the differences between family members. The logical configuration (of components)
is separated from the physical configuration (of files). A family member (a system configuration) is selected by assigning values to attributes characterizing it. The PCL provides a concise
overview of the structural evolution history of a system family.
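To make the selection mechanism concrete, the sketch below illustrates the idea in ordinary Python rather than in PCL; the component names, attributes and variants are invented for the example, and the actual PCL notation is presented in Chapter 6 and Appendix A.

# Illustrative only: a family maps each logical component to variants guarded
# by attribute conditions; binding attribute values selects one family member
# (a concrete configuration of physical files).

FAMILY = {
    "ui": [
        ({"platform": "x11"}, ["xui.c", "xui.h"]),
        ({"platform": "win"}, ["winui.c", "winui.h"]),
    ],
    "core": [
        ({"version": "1"}, ["core_v1.c"]),
        ({"version": "2"}, ["core_v2.c"]),
    ],
}

def select_configuration(attributes):
    """Return the physical files of the family member matching the attributes."""
    configuration = {}
    for component, variants in FAMILY.items():
        for guard, files in variants:
            if all(attributes.get(key) == value for key, value in guard.items()):
                configuration[component] = files
                break
        else:
            raise ValueError("no variant of " + component + " matches the attributes")
    return configuration

# The family member characterized by platform=x11 and version=2:
print(select_configuration({"platform": "x11", "version": "2"}))

In PCL the same idea is expressed declaratively, and the structural evolution history remains visible because all variants and versions are kept in the single family description.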
While the PCL provides an excellent overview of the software family at a system level, support is lacking for understanding the system at a detailed level. This has been the concern of
the latter half of my doctoral study. This latter part has been supported by a grant from the Norwegian Research Council.
I decided to investigate the possibilities for support for detailed system understanding. During
the autumn semester of 1994, a group of fourth grade students was assembled. Under my
supervision, this group implemented a first version of such a system, the HyperMaint system
([Aamot et. al., 1994]). While simple, the HyperMaint system generated several ideas for the
framework proposed in my thesis.


We wanted to validate the usefulness of this prototype. An opportunity arose when I
was allowed to design the schedule of an internal course on programming methodology.
Around 35 student groups had signed up for this course, and the course schedule was to split
the project into two parts. These were initial development and a maintenance phase, where the
project groups had to change their programs after change requests. The plan was that half of
the student groups should use the HyperMaint prototype during the maintenance phase.
For a number of reasons, the prototype was found unfit to be used actively in a project. To avoid user frustration, several enhancements had to be incorporated, but resources for this were unavailable. The original course schedule was carried through without the use of the prototype. A statistical analysis of the data collected from this course was however carried through, and the
analysis provided us with some interesting observations.
The definition of my proposed framework for providing information for detailed system understanding evolved as more insight into maintenance problems was gained. As the possibility of having the framework's usability validated by the experiment mentioned above was reduced, I
reduced the priority of extending the prototype with the new enhancements.
Instead, I focused on documenting the problems of software evolution to provide a solid foundation for the proposed solution. This foundation was assessed in an experiment where the subjects were asked to modify a software system, having different sources of information
available.
This latter experiment provided me with significant results which support the argumentation
for my proposed framework for system understanding during system evolution.
I learned from the design and analysis of the two experiments that experimentation is a labour intensive activity. However, I believe that such an approach for assessing the usefulness of a proposed solution is better than assessment by case study, an approach often seen in software engineering research. Case studies require that a stable prototype of the proposed solution is available. This requires resources usually beyond the limits of one doctoral student. Secondly, case studies are often toy examples: they are relatively small, and designed to emphasize the
strengths of the proposed solution. Finally, case studies cannot show the usefulness of the proposed solution, only its usability. Showing the usefulness requires an experiment. This in turn
requires an even more stable prototype, a number of impartial subjects, and instruction of these
in using the prototype of the proposed solution.

1.4 The claimed contributions of this thesis


The work performed during this doctoral study has led to several contributions to the field of
software maintenance. My approach to attacking the problems of software maintenance has
been a pragmatic one, where I have tried to analyse the existing problems of software maintenance and propose a framework for a solution. If used properly, it is my firm belief that the proposed solution will relieve many maintenance organizations of much of their existing
problems. Below, I have prepared a short list of claimed contributions which are detailed in the
remainder of this thesis.
An extensive overview and analysis of the problems related to software maintenance is pre-

sented. Software maintenance is analysed from several angles: what is it, why does it happen, what are its costs, and what are the problems experienced by software maintainers.


Based on the analysis of software maintenance and its problems, we concluded that the lack

of documentation was crucial for software maintenance problems for several reasons. We
designed and carried through an empirical experiment which confirmed several hypotheses
related to the impact of documentation on software maintenance productivity.
We presented a model for how the evolution of a software system should be organized in

order to minimize the risks of being exposed to the problems related to software maintenance. A particular requirement for the model is that software systems are kept in internal equilibrium - a notion which was coined to describe that all parts of the system should
be updated in accordance with each other.
In collaboration with other researchers of the Proteus consortium, the Proteus Configuration

Language and an associated tool set were developed. The PCL allows the architecture and
evolution of a system family to be described in a single formalism, and provides control of
the different configurations of the system family. The PCL makes visible the logical system
structure and high level dependencies, and is a tool for maintainers to understand the high
level organization of a software system.
A framework for being able to identify and extract information from the system (i.e. differ-

ent types of documentation and source code) to swiftly obtain the knowledge needed to understand the current functionality level of the system, and to understand the system functionality that must be changed to satisfy the requirements of a given modification request. A set of relations among different system components is defined, and extraction mechanisms for the individual relationships are presented. The PCL does not provide the maintainer with the needed detailed system information, but the PCL and the tool set are integrated in the framework to provide an approach which is scalable and which performs well even for very large systems.

1.5 The organization of this thesis


The thesis is organized in 8 chapters and 7 appendices which provide additional information supporting the different chapters. Table 1 gives an overview of the different chapters and appendices which constitute this thesis. The content of each chapter is outlined, and a reference to the chapter's starting page is given.
TABLE 1. Overview of thesis content

Chapter 1 ......... p. 1     Provides the motivation for the selected research area and gives an overview of the chosen approach and the thesis.
Chapter 2 ......... p. 11    Presents an overview and analysis of four essential issues of understanding software maintenance: what is software maintenance, why is it needed, how much does it cost, and what are the problems experienced by software professionals when performing software maintenance.
Chapter 3 ......... p. 51    Presents the design and analysis of an experiment carried out to assess the role of documentation in software maintenance. The experiment was carried out in a controlled setting using students as experiment subjects.
Chapter 4 ......... p. 87    Discusses different system types and how their evolution can best be managed to avoid the problems related to software maintenance. The chapter establishes and outlines the path we choose towards a framework for a solution to support understanding during software maintenance work.
Chapter 5 ......... p. 107   Presents a somewhat philosophical discussion of the possibility of future technologies which would revolutionize the way maintenance of software is performed.
Chapter 6 ......... p. 115   Presents the Proteus Configuration Language. We show how the language can be used to specify the different types of system variability which result from system evolution. A complete example is used throughout the chapter to show how PCL makes the system structure visible to the maintainer.
Chapter 7 ......... p. 141   Presents a framework for a solution to the problem of providing the maintainer with sufficient system knowledge to understand the different details and relationships which exist in a software system. An example is used throughout the chapter to show how different types of knowledge and relationships can be extracted from the software system if it follows a particular format.
Chapter 8 ......... p. 193   Concludes the work done and the solutions proposed in this thesis. A summary of the main results is given. Finally, a set of research steps for future work, building on the results of this thesis, is presented.
Appendix A ....... p. 201   Presents the syntax of the PCL in BNF format. For the convenience of the reader, the processes of selecting a particular configuration from a PCL family and of automatically building a selected system configuration are described. The appendix is a document produced in the context of the Proteus project.
Appendix B ....... p. 229   An in-depth analysis of the evolution of 30 different software systems produced by student groups in a semester assignment. Each of the 30 systems exists in up to 5 versions. We have analysed some of the changes made to the systems in the last three available versions.
Appendix C ....... p. 259   Includes the pre-test which was used to control the skill level when dividing the subjects of the experiment presented in Chapter 3 into similar categories.
Appendix D ....... p. 273   Describes some feedback from the experiment subjects on the experiment described in Chapter 3, and an initial analysis with comments on the work performed by each participating subject.
Appendix E ....... p. 281   Lists the changes made by the original student developers to the system used as a baseline in the experiment.
Appendix F ....... p. 291   An explanation of the portfolio of statistical methods used in the analysis of the experiment and the thirty systems produced by the students, included for the convenience of the reader.
Appendix G ....... p. 299   Contains references to all sources used and cited in this thesis, and several other publications which are interesting reading for anyone interested in the field of software maintenance. Around 500 references are listed in all; some of them are annotated with personal comments.

Note: For stylistic reasons, I choose to abandon using the term "I" and rather use the more neutral term "we" during the rest of this thesis.


CHAPTER 2

Software Maintenance:
What - Why - The Costs and Problems

2.1 Introduction
It is often difficult to understand all facets of software maintenance. This is particularly a problem for people who are not familiar with software maintenance. Even people who are trained
software maintainers disagree about the content of software maintenance and the difficulties
they experience in performing their job.
We make an attempt in this chapter to uncover the mysteries of software maintenance. In particular we focus on explaining why software maintenance is necessary, and what the difficulties of software maintenance can be. After reading this chapter it should be clear to the reader
why software maintenance is not easy, and why some researchers report that maintenance costs often exceed 50% of the total life-cycle costs1.
The rest of the chapter is organized as follows:
Section 2.2 discusses the concepts of maintenance and software maintenance. Based on our

discussion, we contribute with a new definition of software maintenance. We find our definition more explanatory and complete than others.
In addition to error corrections, why is software maintenance needed? The fundamental

concept of evolution is explained in Section 2.3. We present two models which try to
explain why software needs to evolve, and why software normally becomes worse to maintain as it evolves. Approaches which try to control the evolution of the software development process are presented; we find that the ideas put forward by these can be adopted to
controlling the software system evolution.
The costs of software maintenance have been a subject for many studies. In Section 2.4 we do

not attempt to scrutinize this area, but present some results and discuss the fact that software
maintenance costs have not been reduced during the last two decades.
In order to give an overview of the problems of software maintenance, we present and com-

pare several investigations on software maintenance problems in Section 2.5.


Section 2.6 also presents research related to identifying problems of software maintenance.

Unlike in Section 2.5, we here focus on studies which investigate hands-on problems, and
not surveys.
1. In Section 2.4 we present this research in more detail.


Section 2.7 summarizes the chapter and draws some conclusions.

We do not attempt to include a discussion of technologies and methodologies for assisting


maintainers in performing their work. Two good references which give an overview of this are
[Zvegintzov, 1994a] and [EPSOM, 1991].

2.2 Software Maintenance - What?


Traditionally, software maintenance has been regarded as second class work. Software maintenance has been characterized by negative adjectives, such as boring, routine,
stressing, etc. On the other hand, software development is typically described as stimulating,
rewarding, challenging, analytical, exciting, and a long list of other positive adjectives.
Software professionals have, due to this polarization of perceptions, often neglected the importance of software maintenance. How then, is software maintenance defined in literature, and
how does this compare with the traditional connotations of maintenance as routine work? We begin by consulting our dictionaries for the definitions of the terms that software maintenance is associated with, namely the verb "maintain" and its noun "maintenance".

2.2.1 To maintain
The verb "to maintain" is defined in Oxford Advanced Learner's Dictionary of Current English
[Cowie, 1989] as:
(1) maintain something - cause something to continue, keep something in existence at the
same level, standard, etc.: maintain friendly relations, contacts, etc. (with somebody);
enough food to maintain one's strength; maintain law and order; maintain prices; maintain one's rights; maintain your speed at 60 m.p.h.; the improvement in his health is being maintained; (2) support somebody financially; earn enough to maintain a family in comfort; this
school is maintained by a charity; she maintains two sons at university; (3) keep something in
good condition or working order: maintain the roads, a house, a car, etc.; engineers maintain the turbines; a well-maintained house, and finally (4) assert something as true: maintain one's innocence; maintain that one is innocent of a charge.

The Concise English Dictionary [Hayward and Sparkes, 1982] defines the verb as
To hold, preserve, or carry on in any state; to sustain, to keep up; to support; to provide with
the means of living; to keep in order, proper condition, or repair; to assert, to affirm, to support by reasoning, argument, etc.,.

while The Merriam-Webster Dictionary [Webster, 1986] finally tells us that to maintain is
(1) keep in an existing state (as of repair, efficiency, or validity): preserve from failure/
decline
(2) sustain against opposition or danger: uphold and defend <~ a position>
(3) continue or persevere in: carry on: keep up <couldn't ~ his composure>
(4a): support or provide for: bear the expense of <has a family to ~>
(4b): [SUSTAIN] <enough food to ~ life>
(5): to affirm in or as if in argument: [ASSERT] <~ [ed] that all men are not equal> main.tain.able aj SYN [MAINTAIN], [ASSERT], [DEFEND], [VINDICATE], [JUSTIFY]
...

We have emboldened the meanings which we feel most comfortable with when thinking about actions to stop degradation of some asset. We note that the underlying assumption of the verb "to maintain" is that forces, imposed on an asset in some way, degrade the usability of the asset,


and that work must be performed on the asset in order to keep the usability at a defined level.
This is illustrated in Figure 2. We also note that the quoted definitions do not use the verb "maintain" to include activities which improve the state of something. Rather, terms like "same level", "preserve", and "existing state" are used.

2.2.2 Maintenance
The noun maintenance is defined in [Cowie, 1989] as:
(1) Maintaining or being maintained; the maintenance of good relations between countries;
price maintenance; money for the maintenance of one's family; He's taking classes in car
maintenance, and (2) (law) money that one is legally required to pay to support sb: He has to
pay maintenance to his ex-wife

[Hayward and Sparkes, 1982] defines the term as:


The act of maintaining; maintenance man: Workman employed to keep machines, etc., in
working order,

while [Webster, 1986] gives yet another definition of the subject:


(1) the act of maintaining; the state of being maintained; (2) something that maintains; (3) the
upkeep of property or equipment; (4) an officious or unlawful intermeddling in a legal suit by
assisting either party with means to carry it on.

Summing up, the noun maintenance denotes the activity of maintaining something, i.e. the
activities of upholding the usability of an asset on a specified level. We now investigate how
the term maintenance is understood in the context of software engineering.

2.2.3 Software maintenance


Software maintenance is defined by Osborne [Osborne, 1985] as:
The performance of those activities required to keep the software product operational, from
the delivery time to the end of its life.

Another definition is given by Grek [Grek, 1991]


(Software) Maintenance is the work required on a system to ensure... [its] continued effective
operation at its current functional and agreed service level.

ANSI and IEEE [IEEE 610, 1990] defines the term as:
Software maintenance is the process of modifying a software system or component after
delivery to correct faults, to improve performance or other attributes, or to adapt the product
to a changed environment.
[Figure 2 (original diagram): usability plotted against time, with degrading forces pulling usability down and maintenance work keeping it at its defined level.]
FIGURE 2. An illustration of the verb maintain


The Eureka Software Factory project EPSOM [EPSOM, 1992] complements IEEE's style of
definition by
Software maintenance is the process performed on a software system after delivery to
answer a request for information, to correct faults, to improve performance or other nonfunctional attributes, to adapt the product to a changed environment, to add new functionality
or modify the existing ones, to improve the software maintainability, or to anticipate future
problems.

We note that Osborne's definition is directly conceived from the dictionary definitions of the noun maintenance, by prefixing the term with "software". Grek goes one step further by introducing the adjective "effective", unveiling some of the complexities underlying what we have come to learn about the software maintenance phrase. Still, however, the notion of a current and agreed service level implies that the traditional dictionary definition of the term is not violated. Thus the hidden complexities of software maintenance cannot be deduced from these general, yet narrow, definitions.
The ANSI/IEEE and EPSOM definitions make the subject matter (i.e. the task of maintaining
software) more concrete. They identify that software maintenance in addition to keeping the
software operational, also adds functionality or improves performance of the software system.
Several other definitions of software maintenance have been suggested. For a selection, see for
example [Riggs, 1969]1, [Ogden, 1972]2, [Boehm et al., 1976]3, [Liu, 1976]4, [Swanson,
1976]5, [Lyons, 1981]6, [Reutter, 1981]7, [Bush, 1988]8, [Scott and Farley, 1988]9, [Moad,
1990]10, [McClure, 1992]11. None of these definitions significantly extend those discussed
above, and most of them include all activities undertaken with a software system after delivery.
1. Maintenance is the activity associated with keeping operational computer systems continuously in
tune with the requirements of users, data processing operations, associated clerical functions, and
external demands from governmental and other agencies.
2. Maintenance is the continuing process of keeping the program running, or improving its characteristics. Program modification has as its objective the adaption to a changing environment.
3. Maintenance is the process of modifying existing operational software while leaving its summary
function intact.
4. Maintenance refers to modifying a program - updating an existing program's functions to reflect
new constraints or additional features.
5. Maintenance is performed in response to system failures, to changes in data and processing requirements, to eliminate processing inefficiencies, and to improve maintainability.
6. Maintenance is the mechanism for combating software deterioration, which over time tends to
become unstructured, unreliable, and resistant to change.
7. Maintenance is fixing software bugs.
8. Maintenance is adapting software to meet constantly changing business needs.
9. Maintenance consists of changes that need to be made to a computer program after the software has
been turned over to the customer or goes into production.
10.Maintenance includes updating as well as fixing bugs in existing applications.
11.Software maintenance is the process of keeping a production software system program (or system
of programs) operational or of improving the program.


2.2.4 Comparing maintenance and software maintenance


When a car is used, its components are worn out due to friction in the mechanical parts, unsuitable use, or by external conditions such as bumpy roads or acid rain. The car owner tries to
overcome this wear and tear by washing the car, changing simple components when they are
worn out, and using trained mechanics to handle the more complex faults during the car's lifetime. Occasionally the owner puts his car into a garage for overall service to prepare the car for
future wear and tear. These facts are all agreed upon when the subject of discussion is maintenance of a car.
In fact, all physical objects created by man deteriorate when used. Sometimes the physical object is cheap, so that it can easily be replaced at little cost. Sometimes it is expensive
so that fixing the defects is the most cost-effective thing to do.
A software system, on the other hand, is obviously not a physical object. Software is often
described as an intangible asset. The fact of being intangible implies that software is not worn
out through use. There are no outside forces acting on the software to, say, break the pattern of
bits and bytes into chaos. This means that people not knowledgeable about the way software is
constructed and works will have trouble understanding why someone has to work with maintaining software.
How can we then claim that software must be maintained? Using the same metaphor as in
Figure 2, Figure 3 illustrates that effort is put into maintaining software, although there are no
degrading forces acting on it! Still, literature (e.g. [Lientz and Swanson, 1980]) reports that as
much as 70% of all costs incurred on a software system are related to the activities of software
maintenance. How can software professionals explain this to a manager?!
[Figure 3 (original diagram): the specified software functionality level is shown as a flat dotted line over time, yet software maintenance effort is spent although no degrading forces are visible, hence the question mark.]
FIGURE 3. Software maintenance?

Of course this view of software maintenance is wrong. In the next section we will try to
explain the nature of software maintenance.

2.2.5 An explorative definition of software maintenance


The view of software maintenance as presented in Figure 3 is very natural for people not
trained in software engineering, or for those not familiar with the nature of software systems.
As can be seen from Figure 3, the vertical axis is unlabelled. However, the dotted horizontal
line is termed "specified software functionality level". The dimensions of maintaining software
are hidden by those four words:
After software is put into operation, users will realize that the current installation does not

perform at the specified software functionality level. In other words, the actual functionality
level is not what is specified. Changes must be made to correct this misbehaviour. This
includes identifying and correcting processing, performance and implementation failures in
the software system,


When users become acquainted with the software, they will request more support from it1.

The specified functionality level is no longer their requested functionality level. Thus the
specified functionality level is raised, and changes must be made in the software to comply
with this.
The software may fail to operate on/together with new technology when this is introduced

into the technical environment. Again, changes must be made to the software to fit into its
new environment.
A software system may provide the necessary functionality, but major enhancements are

expected in the future. Even without outside demands for enhancements, the maintainers
may find that the software system lacks structure, or that parts of it are severely complex.
Reorganization of these parts can then be initiated to make future maintenance more effective.
The first three dimensions of maintenance described above were termed corrective, perfective,
and adaptive maintenance in [Swanson, 1976]. The last, preventive maintenance, was coined in
[Pressmann, 1987]2.
To summarize the comparison, we argue that when discussing software maintenance, we must
take into account three different software functionality levels: the requested, the specified, and
the actual. It is the job of the software analyst and developer to minimize the distance between the requested and specified levels of functionality during initial software development. The tester is responsible for assuring a minimal difference between the specified and the actual functionality levels. Finally, it is the maintainer's responsibility to keep the customer happy throughout the system's life-time by minimizing the distance from the requested to the actual software functionality. This means that the software maintainer must have the skills of all the other three roles, not only particular skills as a software maintainer. A more correct picture of software maintenance can then be portrayed as in Figure 4. (Please note that the figure is highly conceptual, i.e. the slopes of the lines should not be deeply analysed.)
After this discussion we attempt another, more comprehensive definition of software maintenance, compared to those presented in Section 2.2.3. To be able to do this, we first define what we mean by software maintainability:
Software maintainability. The maintainability of a software system reflects how much effort is

needed to keep the actual functionality level in concert with the requested functionality
level. Software maintainability is high when the effort needed to respond to functionality
changes is small. If the effort needed is high, the maintainability of the software is low.
1. It may not be that the usability of the software itself has decreased, but as new similar systems enter
the market with higher functionality, the users perceive the current (old) system to have less functionality, and hence feel that the software is worn as compared to normal understanding of something being worn out.
2. Pressman reports that the preventive maintenance approach was first reported as structured retrofit
by Miller in 1981. (In Techniques of Program and System Maintenance, G. Parikh, ed., Winthrop
Publishers.) He defined the concept as the application of today's methodologies to yesterday's systems to support tomorrow's requirements. Another term used for this, particularly when the whole system is redesigned due to large maintenance costs, is software re-engineering. A good overview of this is given by Arnold ([Arnold, 1993a] and [Arnold, 1993b]).


[Figure 4 (original diagram): functionality level plotted against time, with user needs rising and maintenance effort tracking three curves: 1 = Requested functionality level, 2 = Specified functionality level, 3 = Actual functionality level.]
FIGURE 4. A mental model of software maintenance


Software maintenance. Software maintenance is the process performed on the software system,

from its conception to its end, to maximize software maintainability, to constantly minimize
the difference between the specified and the actual functionality level, and to act on requests
from the users of the system to adapt the specified functionality level to their requested
functionality level.
This definition covers all those described in Section 2.2.3, extending them with activities traditionally viewed as part of software development, and relating the maintenance activities to the
specifications of the software, not only to the code. A definition like this blurs the border
between what has traditionally been viewed as development and maintenance. It is our opinion
that blurring this border will make the needs of activities performed after the first delivery
more visible to the participants in the project's early phases.
With our definition of software maintenance, corrective maintenance is about filling in the
gaps between the actual and specified functionality level. Adaptive maintenance is the set of
changes needed to uphold the actual functionality level in a changing environment. Perfective
maintenance deals with changing the specified functionality level of the application to that
requested by the user and ensuring that the new actual functionality level is the same as the one
specified. Finally, preventive maintenance is to take remedial actions when software maintainability is low1.
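Read operationally, the paragraph above says that each maintenance category closes a particular gap. The toy sketch below is illustrative only; the numeric levels and flags are invented for the example and do not correspond to any measure defined in this thesis.

# Illustrative mapping from the three functionality levels (requested,
# specified, actual) to the maintenance categories discussed above.

def classify_change(requested, specified, actual,
                    environment_changed=False, maintainability_low=False):
    """Suggest a maintenance category from the level gaps."""
    if maintainability_low:
        return "preventive"   # remedial restructuring; no level is changed
    if environment_changed:
        return "adaptive"     # uphold the actual level in a changed environment
    if actual < specified:
        return "corrective"   # close the gap between actual and specified
    if specified < requested:
        return "perfective"   # raise the specified level towards the requested
    return "no maintenance needed"

print(classify_change(requested=10, specified=10, actual=7))   # corrective
print(classify_change(requested=10, specified=8, actual=8))    # perfective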
We started Section 2.2 by describing some traditional adjectives associated with software
maintenance. A recent investigation ([Layzell and Macaulay, 1994]) of the perceptions of software maintenance in five large organizations showed that this negative image is changing. For
the organizations participating in the investigation the image of software maintenance had
gradually changed from one which is dull, passive and isolated to one which is reactive,
responsive and vital to the business. Another interesting observation in the same investigation
was that this new image was particularly evident where development and support staff were
integrated in one business unit: The need for maintenance was apparent for all partners
involved (managers, clients, developers). We believe that our definition of maintenance would
be met with approval in such organizations.
1. The maintenance costs are (expected to be) high due to unnecessarily complex parts of the software, or
when documentation quality is low.


2.3 Software Maintenance - Why?


2.3.1 Introduction
The main observations of the previous section were that software maintenance was different
from maintenance in the dictionary meaning of the term, and that software maintenance was
performed for the user, and generally not for the software. Software maintenance is about preserving and extending, while the usual interpretation of maintenance is about preservation
only.
This section presents and discusses some models of software evolution which try to explain
why software change over time. The first two models focus on how the software product
evolves:
Belady and Lehman's SPE model [Lehman and Belady, 1985] describes how different pro-

gram types evolve differently according to the type of problem the program is designed to
solve.
Parnas [Parnas, 1994] argues that software aging is due to failure to meet the users' chang-

ing needs, and that changes generally deteriorate the software structure from a clean model
to a patchwork.
While not directly focusing on the evolution of the software product, the approaches focusing
on how the software process evolve include elements which are relevant to controlling the evolution of the software product.
Madhavji [Madhavji, 1992b] and Nguyen and Conradi [Nguyen and Conradi, 1996]

present two similar approaches to controlling the evolution of the software process by categorizing process changes by properties, and controlling the implications of these changes
based on a model of the items to be changed and their interdependencies.
Basili and Green [Basili and Green, 1994] present the NASA Software Engineering Labo-

ratory approach to controlling process evolution. Their approach is based on learning from
actual use of different processes, compared against a process baseline.
Controlling the software production process by using the formally defined processes as an executable process program is also an example of software application management technology of
the future.

2.3.2 Models of software product evolution


2.3.2.1 Belady and Lehmans SPE classification
Lehman and Belady [Lehman and Belady, 1985] observed that the changed content between
successive releases was statistically invariant1. They were intrigued by why some software seemed to change more frequently than other software. In their research, they formulated a
model which classifies software programs into three sets with respect to software evolution.
The model is known as the SPE program classification. We give an overview of S-, P-, and E-type programs below. Figure 5 to Figure 7 are borrowed from [Lehman and Belady, 1985].
1. This observation is formulated as one of five laws of program evolution; the law of conservation of
familiarity. The five laws are described in detail in [Lehman and Belady, 1985] on p. 381 and p. 412.


S-type programs are solutions to specific problems. The programs can be formally defined
and derived from a specification. Figure 5 depicts this type of program.
[Figure 5 (original diagram): a problem in a universe of discourse relates to a formal statement, the program specification, which controls the production of a program providing a solution.]
FIGURE 5. S-type programs

P-type programs are solutions to real-world problems. Compared to S-type programs, problems in this category are hard. Complete solutions are not practical due to the size of the problem, or its complexity. Only an approximation to the correct solution can be specified. Figure 6
depicts this type of program. While S-type programs are static, P-type programs are evaluated

[Figure 6 (original diagram): a problem in the real-world universe of discourse is abstracted into a view and requirements; a specification is derived and a program produced; comparison of the program's information with the real world drives change to both the abstraction and the specification.]
FIGURE 6. P-type programs

in their real-world context and will evolve as the context changes. [Lehman and Belady, 1985]
states that
Differences between data derived from observation and from computation may cause
changes in the world view, the problem perception, its formulation, the model, the program
specification and/or the program implementation. [...] But the world too changes and such
changes result in additional pressure for change. Thus P-type programs are very likely to
undergo never-ending change or to become steadily less and less effective and cost effective.

E-type programs provide automated assistance to a human or social activity, and are embedded in the operational environment. They implement an application in that environment1. E-type systems have no direct boundaries. The programs that implement them cannot have permanent and demonstrably satisfactory specifications since the variety of features that can be built into such systems is unlimited. When the developers are deciding what features to include in the specification, they must take into account the perceived needs of the users,
1. Note that most applications which are developed using the principles of software engineering are of
this type.


available technology at present, and non-functional properties such as performance and cost.


[Figure 7 (original diagram): the application in the real world gives rise to requirements and predictive views, forming a model and a specification from which a program is produced; comparison of the program's output with the application in the real world creates dissatisfaction and drives change to the specification and the program.]
FIGURE 7. E-type programs

2.3.2.2 Parnas' Software Aging


People deteriorate with age, and so do software systems. Parnas ([Parnas, 1994]) calls this software aging. Software aging leads to uncontrolled maintenance costs. To fight this, Parnas
argues that developers must shift their focus from preoccupation with the first release to the
concern of the long term health of the software.
Parnas focuses on two main causes for software aging:
1. The failure of the product owners to meet the users' changing needs.
2. The ignorant surgery performed on the software to try to meet the users' changing needs.

To quote Parnas:
The designer of a piece of software usually had a simple concept in mind when writing the
program. If the program is large, understanding that concept allows one to find those sections
of the program that must be altered when an update or correction is needed. [...] Changes
made by people who do not understand the original design concept almost always cause the
structure of the program to degrade. [...] After many such changes, the original designers no
longer understand the product, and those who made the changes never did.

This change-induced aging of the product is often made worse as the maintainers feel that they
do not have time to update the documentation. This makes future changes even more difficult.
In order to remedy the problems with software aging, Parnas proposes that three main principles
of software engineering must be adhered to both during the development and the maintenance
phases:
1. The principle of designing for change1. Components which are most likely to change should be encapsulated in separate parts of the code, so that expected changes can be confined to these parts. Parnas gives a thorough discussion of why this principle is often violated. The main points are that developers are eager to finish the first release, they do not have sufficient education and training, and cannot find good examples of programs utilizing these principles to mimic.
1. Designing for change = information hiding, encapsulation.
2. Documentation should be a primary concern. According to Parnas, documentation is the
most neglected aspect of software engineering both by academics and developers. Software system documentation is inadequate. Parnas reports that many software developing
organizations neglect the documentation needed during maintenance to prevent software
aging, and only produce the information needed by the customer or client. Parnas proposes
that a mathematically oriented form of documentation is needed, as a formally defined
notation is the only practical way to record the information needed for proper documentation.
3. Reviews should be used to confirm the work of software professionals. Every design should
be reviewed and approved by someone whose responsibility is the long-term future of
the product. Reviews by people concerned with maintenance should be carried out when the
design is first proposed and long before there is code. This means that the maintainers get to
know the programs before taking them over for maintenance, and that the requirements for
maintainability are imposed on the program at an early stage.
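As a reading aid for the first principle, the sketch below shows one way to confine an expected change behind a narrow interface. It is our own minimal illustration in Python, not an example from [Parnas, 1994], and all names in it are invented.

# Minimal illustration (not from Parnas) of designing for change through
# information hiding: the decision most likely to change -- how customer
# records are stored -- is confined to one class behind a stable interface.

import json
from abc import ABC, abstractmethod


class CustomerStore(ABC):
    """Stable interface; the rest of the system depends only on this."""

    @abstractmethod
    def save(self, customer_id: str, data: dict) -> None: ...

    @abstractmethod
    def load(self, customer_id: str) -> dict: ...


class JsonFileStore(CustomerStore):
    """The volatile decision (storage format) is hidden in this class."""

    def __init__(self, directory: str) -> None:
        self.directory = directory

    def save(self, customer_id: str, data: dict) -> None:
        with open(f"{self.directory}/{customer_id}.json", "w") as f:
            json.dump(data, f)

    def load(self, customer_id: str) -> dict:
        with open(f"{self.directory}/{customer_id}.json") as f:
            return json.load(f)


def register_customer(store: CustomerStore, customer_id: str, name: str) -> None:
    # Client code sees only the interface; switching to a database-backed
    # store later would not require any change here.
    store.save(customer_id, {"name": name})

If the storage format later changes, only JsonFileStore (or its replacement) is touched; the expected change is thus confined to a separate part of the code, as the principle prescribes.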

2.3.2.3 Discussion of SPE and Software Aging


Lehman and Belady describe the underlying reasons for software evolution. Parnas complements this with a model for avoiding software aging. The term software aging is closely related to Lehman and Belady's Law of Increased Entropy1:

As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.

1. The Law of Increased Entropy ([Lehman and Belady, 1985], p. 412) is one of Lehman and Belady's five laws of program evolution.

The continuous increase in user expectations and introduction of more powerful computing
systems make software systems increasingly more complex. The capital needed to build completely new systems will increase for every software generation. This makes software maintenance an even more important issue, as the most cost effective way of ensuring user
satisfaction will be through the evolution of old systems. Indeed, both [Lehman and Belady,
1985] and [Parnas, 1994] observe that software indeed changes gradually. Thus software
evolves, rather than revolves.
An important issue raised by Lehman and Belady's notion of E-type programs is that programs which affect the lives of users will also be heavily affected by those users. Lehman and Belady write:
As they become familiar with a system whose design and attributes depend, at least in part,
on user attitudes and practice before system installation, users will modify their behaviour to
minimize effort or maximize effectiveness. Inevitably this leads to pressure for system
change. In addition system exogeneous pressures will also cause changes in the application
environment within which the system operates and the program executes. New hardware will
be introduced, traffic patterns and demand change, technology advance and society itself
evolve. Moreover the nature and rate of this evolution will be markedly influenced by program characteristics with a new release at intervals ranging from one month to two years,
say.

Program evolution should be kept in mind when designing such programs. The three points suggested by Parnas could be a good place to start. We agree with Parnas' three issues to fight software aging, but not with his proposal that documentation must be mathematically specified. The reasons for this are:
1. Software engineers and maintainers are not sufficiently trained to use mathematically oriented software specifications.
2. Formal specifications are difficult to read and understand. In fact, source code is a formal documentation itself, and documentation should be specified at a higher level of abstraction, not at a similar level.
3. Formality implies completeness, which is not always needed in documentation. Completeness may unnecessarily increase the cost both for the initial specifications, and for maintaining them.
The notions of E-type systems and software aging are important to be aware of when planning
for software maintenance:
1. What is the nature of the system - is it an S, P, or E-type system? If the system is an E-type one, maintenance costs will be a significant part of the total system costs. Table 2 sums up the characteristics of S, P and E-type programs.
TABLE 2. Comparing S, P, and E-type programs

Aspect                             S-type             P-type                       E-type
Program complexity                 small-medium       medium-large                 small-large
Program size                       relatively small   relatively large             relatively large
Program focus                      problem solving    problem solving              task assisting
Program coverage                   full               one of many                  complies to customer's requests
Problem change                     none               none, but view may change    problem changes as real world changes
External pressure implies change   no                 yes                          yes
Program changes environment        no                 no                           yes

2. What is the expected life-time of the system? For the systems observed by Lehman and Belady, data showed that maintenance costs would remain more or less the same over the whole maintenance phase.
3. If not taken into account, the increased system complexity due to increased size and deteriorated system structure will at some point in time make the costs associated with system replacement less than the costs associated with continued maintenance. However, this may be avoided if actions are taken to reduce this inherently increased system complexity.
Many software developers and software owners believe that software aging would cause no problems if the product had been designed with state-of-the-art technology. However, all successful software products will age, and it should be a primary concern of the developers and owners of software to take precautions so that the software systems are kept at a controlled and acceptable level of maintainability.


2.3.3 Models of software process evolution


2.3.3.1 Madhavji's Prism model of changes
In [Madhavji, 1992b], Madhavji describes the Prism model of changes. In this model, the evolution of the whole software development environment is controlled. The approach is based on a theory that the quality of software is best controlled if the process producing it is formally defined, controlled, and executed by a machine1.

1. For a discussion of machine execution of software processes, see [Osterweil, 1987].

The items of change in a software development environment are identified to be at least people, policies, laws, resources, processes and results. The Prism model of changes controls these items by using a change process model and two change-related environment infrastructures:
- The Dependency Structure (DS) describes the items of change and their interdependencies in the project at different levels of abstraction.
- The Change Structure (CS) facilitates the classification, recording, and analysis of change-related data for the items of change. All properties of a change are explicitly captured in a CS-sheet for analysis and documentation. Table 3 lists the set of change properties identified in the Prism project.
TABLE 3. Example properties of a change ([Madhavji, 1992b])

- Criteria for deciding whether to make the change
- Source of change: local feedback in Dependency Structure (eager change); non-local feedback in Dependency Structure (demand change)
- Decision of whether to make the change
- Advantages of carrying out the decision taken
- Disadvantages of carrying out the decision taken
- Reason for the change or non-change
- Structure of change process used (meta life cycle): requirements, specification; design and code; testing and simulation; make changes; maintenance
- Method for carrying out the change: batch style, incremental style
- Resources needed to make the change
- What aspect of the item the decision is about
- Type of change: static, dynamic; corrective, adaptive, perfective
- Person responsible for the decision
- Reviewer of change: quality control person, process program
- Quantitative coupling with other changes: prevalence of this kind of change
- Qualitative coupling with other changes
- Relation with a previous change (to this item): how new is this change
- Explicit constraints acting on the change: policy, law; sequencing of changes of activities; maximum, minimum changes permitted; cost, method, etc.
- Compromises made in the change: includes agreements with other parties
- Conflict caused by the change: other people, functioning of other processes, ...
- Phase of the project in which the decision is made
- Schedule of the change
- Schedule of the review process for this change
- Anticipation of the change: anticipated, unanticipated
- Ease of change: trivial, ..., not possible
- Absorption of change: how much change can be sustained by the item
- Reversibility of the change
- Size of change (to this item)
- Impact of the change on the entire system: qualitative, quantitative, along with the degree of each; can involve cost analysis, etc.
- Cost of obeying the decision
- Cost of not obeying the decision
- Reliability of the change process
- Repeatability of the change process
- Extensibility of the change process (including contractibility)
- Re-usability of the change process: portability, generality supports reusability
- Efficiency of the change process

The change history recorded by the properties in the CS-sheet should be consulted prior to any
change decision. This may reveal whether the proposed change has been rejected before and
the reason for this, or if the change is similar to other changes. In this latter case, a specified
cost estimate for the proposed change can be given based on the experience data.
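To make the idea of consulting the change history concrete, here is a small, hypothetical sketch of a CS-sheet record and a lookup over it; the fields are a subset of the properties in Table 3, and neither the structure nor the names are prescribed by [Madhavji, 1992b].

# Hypothetical CS-sheet record (subset of the Table 3 properties) and a
# lookup used to consult the change history before a new change decision.

from dataclasses import dataclass


@dataclass
class CSSheet:
    item: str            # the item of change (person, policy, process, result, ...)
    source: str          # "eager change" (local feedback) or "demand change"
    change_type: str     # "corrective", "adaptive" or "perfective"
    decision: str        # "accepted" or "rejected"
    reason: str          # reason for the change or non-change
    anticipated: bool    # anticipation of the change
    cost: float          # resources needed to make the change


def similar_changes(history, item, change_type):
    """Earlier changes to the same item of the same type.

    These may reveal that a similar change was rejected before, or give a
    cost estimate for the proposed change based on experience data.
    """
    return [s for s in history if s.item == item and s.change_type == change_type]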
In [Madhavji, 1992a], Madhavji describes a framework for process maintenance, the process
cycle. This framework specifies a process for supporting and controlling process maintenance,
in effect a process model for using the Prism model of changes.

2.3.3.2 Nguyen and Conradi's evolution patterns


[Nguyen and Conradi, 1996] describes a framework for categorizing process and product evolution into recognizable parts. The process and product structures are described using a modelling language. The framework is therefore very similar to the one proposed by Madhavji.
All changes are recorded as an evolution pattern in a hierarchical categorization framework,
which separates the where, why, what, when, how, and by whom regarding the introduction of a
process change:
- Where identifies the sources of process change requests.
- Why represents the major causes driving such a process evolution.
- What describes what entities are subjected to change.
- When distinguishes between the time at which the change request is detected (CDT) and the time at which the proposed change is designed and implemented (CRT).
- How presents the corrective and preventive actions being conducted to handle a given process change.
- By whom identifies who is the approver and performer of the proposed change.
Table 4 shows the categorization framework. The categorization framework is complemented by a set of metrics for the production process and the product. Process changes are cost-monitored with respect to these metrics. The metrics also provide a pattern for reusing development process knowledge. When new projects are initiated, a pattern matching technique on project profiles (a set of metrics) is employed to detect similarities. Experiences from evolution patterns of these similar projects assist the planning and scheduling of the development activity for the new project.
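The sketch below is our own reading of how an evolution pattern and a project-profile match could be recorded; the field names and the naive distance measure are assumptions, not definitions taken from [Nguyen and Conradi, 1996].

# Assumed record for an evolution pattern (where / why / what / when / how /
# by whom), plus a naive matching of project profiles (sets of metrics).

from dataclasses import dataclass


@dataclass
class EvolutionPattern:
    where: str       # source of the change request
    why: str         # cause driving the evolution
    what: str        # entity subjected to change
    detected: str    # change detection time (CDT)
    realized: str    # change realization time (CRT)
    how: str         # corrective or preventive action taken
    by_whom: str     # approver and performer


def profile_distance(p1, p2):
    """Naive distance between two project profiles (metric name -> value)."""
    shared = p1.keys() & p2.keys()
    return sum(abs(p1[m] - p2[m]) for m in shared)


def most_similar_project(new_profile, old_profiles):
    """Pick the earlier project whose profile is closest to the new one."""
    return min(old_profiles, key=lambda name: profile_distance(new_profile, old_profiles[name]))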

2.3.3.3 An experimental view on process evolution - The QIP/GQM


In [Basili and Green, 1994], Basili and Green describe a practical approach to software process
evolution employed at the NASA Software Engineering Laboratory (SEL). The approach is to
make use of controlled experiments to find effective software development processes.
TABLE 4. Categorization for process evolution ([Nguyen and Conradi, 1996])

Where (source of change request): external agent (competitor, supplier); internal agent (senior, middle, process eng., engineer, customer); external factor.

Why (cause of evolution): improvement; correction (misunderstanding, ambiguity, omission, error, lack of competence); adjustment (revised requirements, better insight, delay/distortion, strategic decision); technological innovation (production, process).

What (entities subjected to change): process support; meta process (planning, tracking, packaging, evolving); production process (structure, activity, plan documents, product, resources, humans, budget, production tool (sw/hw)); development (specification, high-level design, detailed design, realization, testing, maintenance).

When: the time at which the change request is detected (CDT) versus the time at which the change is designed and implemented (CRT).

How (action taken): negotiation; plan adjustment (forward, backward); planning; refinement; organizational change (re-allocation, training); learning.

Who: approver; performer.
The quality improvement paradigm (QIP) is used as a framework for conducting the experiments.
The QIP is an experimental and evolutionary concept for learning and improvement, consisting of six steps:
1. Characterize the project and its environment.
2. Set quantifiable goals for successful project performance and improvement.
3. Choose appropriate process models, supporting methods, and tools for the project.
4. Execute the processes, construct the products, collect and validate the prescribed data, and analyse the data to provide real-time feedback for corrective action.
5. Analyse the data to evaluate the current practices, determine problems, record findings, and make recommendations for future process improvements.
6. Package the experience in the form of updated and refined models, and save the knowledge gained from this and earlier projects in an experience base for future projects.
After initial case studies, experiments are conducted on real projects with professional project
participants. The QIP uses the goal-question-metric (GQM) approach for defining and formulating a set of goals for the perspectives of interest in the experiment. The GQM paradigm provides a structured approach for decomposing the goals into questions which can extract information from the experiment models. The questions, in turn, define the set of metrics needed to define and interpret the goals.
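As an illustration of the GQM decomposition, the sketch below refines one invented goal into questions and metrics; the content is ours and is not taken from [Basili and Green, 1994].

# Invented GQM decomposition: a goal is refined into questions, and each
# question into the metrics needed to answer it.

gqm = {
    "goal": "Reduce the effort spent on understanding the system during maintenance",
    "questions": [
        {
            "question": "How much of a maintenance task is spent reading code and documents?",
            "metrics": ["hours per task phase", "number of documents consulted"],
        },
        {
            "question": "Is the documentation consistent with the source code?",
            "metrics": ["inconsistencies detected per module",
                        "age of documentation relative to the last code change"],
        },
    ],
}

# The metrics to collect in the experiment are the union over all questions.
all_metrics = {m for q in gqm["questions"] for m in q["metrics"]}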
Basili and Green point out the necessity of having thorough knowledge of the baseline process, against which the experiment results are compared. Before any process improvement can
be expected, the performance of the original process must be monitored over time. In order to


control misuse of experimental results, the experiments must be replicated on different projects
which vary in size. The improvement is measured against stated product goals of the organization. The experiment knowledge is documented in reusable packages. These are distributed
through consultation, documentation and automated support.

2.3.3.4 Comparison of the three frameworks for process evolution


The frameworks described in [Madhavji, 1992b] and [Nguyen and Conradi, 1996] differ significantly in organization and focus from the one described in [Basili and Green, 1994]. Still,
the mutual intention is to capture experience of process evolution and reuse the experience in
future projects.
The first two frameworks focus on controlling and documenting evolution in one software
development process. This is done by modelling all interdependencies among interacting
agents and characterizing the changes to document the reasons, decisions, and effects of evolution steps in the development process. This requires a formal model both of the process and of the change characterization.
On the other hand, an experimental attitude to process evolution investigation is taken in
[Basili and Green, 1994]. The effect of introducing new technology and ways of working on
pilot projects is measured against a base of baseline projects. If the measured effect is positive,
i.e. higher productivity or fewer defects, the new technology and way of working is incorporated
into the baseline project models. The experience with the pilots is packaged as presentations,
guidelines and project handbooks, and is available for new projects needing assistance in
assimilating the new changes.
The fine-grained focus of Madhavji and Nguyen/Conradi is closely related to their attempt to
model the production process at a very detailed level. Their motivation for this is to provide
automated support for the analysis, planning and coordination of the software production process. This way of thinking is inherited from Osterweil1, who presents ideas for a language for
process programming, and a compiler for compiling the process programs to be used for supervising the software production process. This calls for a mechanical view of software engineering, where the freedom of the software professionals is controlled and restricted. Such an
environment provides a good framework for measurement on the software process. However,
the lessons learned in the SEL consortium ([McGarry, 1994a]) should be kept in mind. These
are: (i) keep the measures few and simple, and (ii) avoid too much focus on process measures.
Measuring for the sake of measuring may have several unwanted side effects:
- Software professionals' work is interrupted by activities to report measures.
- The cost of the collection and analysis of the measures is high.
- If a measure does not contribute to the evolutionary decisions needed by management, it should be dropped. If not, the measurement program may be felt to be unnecessary by the employees, and they may start to work against it. The rationale for the measures should be agreed among all parties.
Furthermore, we question the detailed process change monitoring which distinguishes the evolution frameworks described by Madhavji and Nguyen/Conradi. Process changes do not happen every day, and we question the need for the detailed change characterization proposed by these frameworks.

1. See Osterweil's paper "Software Processes are Software too" ([Osterweil, 1987]).

Nguyen and Conradi propose that project profiles can be used to match similar projects, and then use the process evolution knowledge from the previous projects to assist
the planning and scheduling of the new project. We do not agree with such an idea. The reason
for this is twofold:
1. If the process successfully evolved in a previous project, this evolution should already be

reflected in the process model for the new project. We should use all the good things at
once, and not wait for the new projects to evolve to the good state, to which the path has
already been identified.
2. If a proposed change was previously rejected, knowledge about this is certainly essential, but believing that this knowledge can assist in the planning and scheduling activity of a new project seems naive.
Allowing variation inside projects, and identifying beneficial evolution steps in controlled experiments, seems more realistic and poses less of a risk to the production processes.

2.3.4 Summing up the evolution part


We have presented several models of evolution of both the software system and the software
development process. It is our position that models for evolving both of them are necessary to
meet the demands for improvement in existing software, and for the new software which will
be needed in the years to come. There are two aspects to be considered:
- Customers will expect more reliable software, and software with added functionality, as they become dependent on software both at their workplace and in their everyday life.
- The availability of software engineers, or people with the ability to develop the demanded software, is much lower than what is needed.


The evolution models described give a brief overview of the complexity of software evolution, and of the precautions and models needed to cope with it.
Our opinion is that controlling the evolution of the software product is more important than
controlling the evolution of the software production process. We clearly must continually improve the way we do things, but trying to control this evolution in an automated manner, as called for in [Madhavji, 1992b] and [Nguyen and Conradi, 1996], seems both infeasible and excessive. We base this view on the following observations:
- A change in the product is performed by outside actors. In order to understand a change, both the product and the performing actor need to be consulted if the reason for the product change is not properly documented.
- A change in the process means that an actor is told to do something else. The actor then immediately understands what to do, and can inform related actors that his way of working has changed.
- The process of making something is neither the means nor the result. The product is the result, while the knowledge and technology used are the means.
- While a product can exist in several different releases, both in time and space, production processes within an organization should always exist in one version, the best one.
As pointed out in [Lehman, 1994]
Whenever people are involved some degree of freedom exists; otherwise their activity could
be mechanized. [...] Hence the process can only be preplanned and defined to a limited extent



and to some arbitrary level of detail. It can only be enforced at a comparatively high level.
Any expectation that it will be carried out in detail as planned is naive in the extreme. The
process will inevitably evolve. [...] Process improvements can be developed or evaluated on
process models, but the realization and fine tuning can only be achieved in actual execution.
Process models are crucial for achieving and communicating process understanding. [...]
(But) the models are incomplete; at best a high level guide to the process. They can never
provide a precise and complete representation of actuality, of the process actually followed.

This means that process programs are unrealistic as tools to guide the software production
process.
Another factor against using a process program for guiding the process execution is that

several of the applications used in the production process incorporate their own working
process. The production tools are not simple in-transform-out tools. Users have to define
this process and implement it in the application before starting to use the application. This
means that the process program is dependent on the internal tool process. Changing this
internal tool process is often impossible once it has been defined. A change would require
that the corresponding database is changed as well, but this is beyond the capabilities of
most commercial tools. An example is a change management/problem tracking tool for
which a change process has been defined.
The process change characterization models defined are nevertheless important, as their organization could easily be adapted to describe product changes, be used effectively for characterizing and managing product changes, and form the basis for a product change management model. Such a model is needed for product change management, which includes the fields of software configuration management and software change management.

2.4 Software Maintenance - The Costs


2.4.1 Introduction
What are the typical costs of software maintenance? Can we expect that these costs can be
reduced in the future? These are typical questions one might ask about software maintenance.
In this section, we elaborate on some answers to them, and argue that maintenance costs may increase in the future if new ways of performing maintenance are not introduced.

2.4.2 Costs reported in investigations


Several investigations on the cost of software maintenance have been conducted since the late
1960s. Most articles and texts on maintenance report as a fact that maintenance costs typically
represent 50% of the total life cycle costs of a software system. The first well-known investigation of this was reported in [Lientz and Swanson, 1980]. Lientz and Swanson incorporated maintenance figures from 487 Directors/Managers/Supervisors of Data Processing; 2000 persons in that category from the US/Canadian Data Processing Management Association were asked to contribute. Results from several other investigations are publicly available. An overview is given by Krogstie in [Krogstie, 1994b], and shown in Table 5 for convenience.
TABLE 5. Maintenance investigations (from [Krogstie, 1994b])

Maintenance Cost %   Investigation                                      Year
26                   Henne [Henne, 1992]                                1992
40                   Hoskyns [Hoskyns, 1973]                            1973
40-60                Arfa & Mili [Arfa and Mili, 1990]                  1990
                     Brantley & Osajima [Brantley and Osajima, 1975]    1975
                     Ditri et al. [Ditri et al., 1971]                  1971
                     Gunderman [Gunderman, 1973]                        1973
                     Krogstie [Krogstie, 1994b]                         1994
                     Lientz & Swanson [Lientz and Swanson, 1980]        1980
                     Nosek & Palvia [Nosek and Palvia, 1990]            1990
                     Riggs [Riggs, 1969]                                1969
                     Swanson & Beath [Swanson and Beath, 1990b]         1989
                     Zelkowitz [Zelkowitz, 1978]                        1978
67                   Dekleva [Dekleva, 1992a]                           1990
70                   Brantley & Osajima [Brantley and Osajima, 1975]    1975
75                   Elshoff [Elshoff, 1976]                            1976

The variance in the reported figures is due to several factors, e.g. the type of software being
studied, sampling errors, measurement problems, and the use of different definitions for maintenance. [Krogstie, 1994b] reports that e.g. [Dekleva, 1992a] includes time for answering
questions from users as maintenance. This time is usually not included in other figures.
[Figure 8 omitted: a scatter plot of the maintenance cost percentages from Table 5 (y-axis: %, 25-100) against the year in which they were reported (x-axis: 1970-1995).]

FIGURE 8. Maintenance costs reported in investigations

Figure 8 shows the plot of the reported maintenance costs against the year in which they were
reported. We observe that there is not much improvement in reducing the maintenance costs
over the last 25 years. This is somewhat peculiar, since new techniques, methodologies, and tools for the initial system development phase have been introduced during this period. We therefore suggest that the maintenance costs of systems are relatively invariant with respect to development method. Why is this so? We believe that there is no unifying answer to this question, but provide a discussion below.

2.4.3 Increased development productivity


Application systems have grown in complexity during the last 25 years. This is a consequence
of technology improvements, and the increasing complexity of organizations' requirements as
they have become more dependent on software to be able to provide services to their clients.
[McGarry, 1994c] shows how the productivity and reuse have increased, while the defect rate
has decreased when comparing comparable baselines in the NASA Software Engineering
Laboratory. The actual figures are shown in Table 6.
TABLE 6. Project measures at NASA/SEL

                  Errors/KLOC         Cost (staff months)   % Reuse
Baseline 85-89    (1.7) ~4.5 (8.9)    (357) ~490 (755)      ~20
Baseline 90-93    (0.2) ~1.0 (2.4)    (98) ~210 (277)       ~79
Change            75% decrease        55% reduction         300% increase

In [Paulish and Carleton, 1994], Paulish and Carleton discuss a Software Engineering Institute (SEI) study performed by Hersleb et al. ([Hersleb et al., 1994]). The experience of 13 organizations over a 9-year period of software process improvement was investigated. The framework for process improvement in the organizations studied was the SEI/Capability Maturity Model (CMM) as described by Humphrey in [Humphrey, 1989]. The results from the study are shown in Table 7.
TABLE 7. Software process improvement results (CMM-based, 1987-1993)

Measure                                                                Median   Range
Productivity gain per year                                             35 %     9 - 67 %
Early detection gain per year (defects discovered prior to testing)    22 %     6 - 25 %
Yearly reduction in time to market                                     19 %     15 - 23 %
Yearly reduction in post-release defect reports                        39 %     10 - 94 %
Business value of software-process improvement investment
(value returned on each dollar invested)                               5.0      4.0 - 8.8

The results indicate that new technology and better processes in the software production arena have helped developers cope with the rising demand for new and more complex systems. The process improvement steps which helped increase development productivity also seem to reduce the impact of corrective maintenance on total maintenance costs, as the post-release defect reports have decreased. However, corrective maintenance is only a small portion of the maintenance activities.


2.4.4 Increased product size


Three different investigations on software maintenance asked for the number of lines of code in
one selected application system in each of the organizations participating in their investigations. The average system sizes were
48 KLOC in the Lientz and Swanson investigation, [Lientz and Swanson, 1980].
204 KLOC in the Nosek and Palvia investigation, [Nosek and Palvia, 1990].
283 KLOC in the Krogstie investigation, [Krogstie, 1994c].

These data show that application systems are generally larger today compared to the late 1970s. However, the variations reported by Nosek/Palvia (SD=313KLOC) and Krogstie
(SD=333KLOC) are large.
The investigation by Lientz and Swanson further reported that the number of person-hours per
year expended on the maintenance of the average application system was 2768 (median 827).
The corresponding average in the Nosek and Palvia study was 15530 (median 1920).
Now, assume that an average maintainer works 2000 hours a year. Although the variation in
the data is large, this suggests that an average maintainer took care of 34.5KLOC in 1980, and
26.3KLOC in 1990. So while research shows that development productivity has increased during the last decades, the contrary may be the case for maintenance productivity. The next section presents a model which can help explain this.
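The figures above follow from simple arithmetic, reproduced (to rounding) in the sketch below under the stated assumption of 2000 working hours per maintainer per year.

# KLOC per maintainer = average system size / number of full-time maintainers,
# where the number of maintainers = person-hours per year / 2000 (assumption).

HOURS_PER_YEAR = 2000

def kloc_per_maintainer(avg_system_kloc, person_hours_per_year):
    maintainers = person_hours_per_year / HOURS_PER_YEAR
    return avg_system_kloc / maintainers

print(kloc_per_maintainer(48, 2768))     # Lientz & Swanson, 1980: about 34.7
print(kloc_per_maintainer(204, 15530))   # Nosek & Palvia, 1990: about 26.3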

2.4.5 Contributions to maintenance effort


In [Lientz and Swanson, 1980], Lientz and Swanson presented an elaborative model describing
the amount of maintenance effort, based on (partial) correlation analyses among variables in
their investigation. The amount of maintenance effort was found to be influenced by five
causal paths involving four different variables. The causal paths influencing the dependent variable are outlined in Figure 9.
The associations are explained in [Lientz and Swanson, 1980] as:
1. This correlation was weak when the other variables of the model were controlled, but remained significant. The correlation is explained by system deterioration; hence an older system is harder to maintain regardless of maintainer skills.
2. As the system ages, enhancements made to it imply that the system grows in size. The amount of maintenance is directly related to the system size.
3. The correlation coefficients computed by Lientz and Swanson showed that the relative amount of routine debugging does not vary significantly with system age, but is positively associated with system size. [Lientz and Swanson, 1980] argues:
The total amount of maintenance effort is significantly associated with the relative amount of routine debugging, controlling for other variables in the model. A suggested interpretation is that larger systems tend to demand relatively more routine debugging, because the location and elimination of bugs is a more complex task for larger systems, and because such corrective maintenance is obligatory when needed, and thus receives priority attention. Further, a high relative amount of routine debugging is associated with a high amount of maintenance effort, independent of system size. This characterizes systems which are more troublesome to maintain on the whole.
4. As the system ages, the probability that the maintainers involved in the application maintenance have development experience from the system maintained decreases. This is due to staff turnover over a series of years. Such development experience is negatively related to the amount of maintenance effort. This means that when the development experience of the maintainers is low, the effort to maintain the system is higher than if the maintainers had this experience.
5. The relative development experience is also negatively correlated with time spent on routine debugging. This in turn causes the amount of maintenance effort to increase.
[Figure 9 omitted: a path diagram with the variables System Age, System Size, Relative Development Experience, Relative Amount of Routine Debugging and the dependent variable Amount of Maintenance Effort; the numbered paths (1-5) and their signs (+/-) correspond to the associations explained above.]

FIGURE 9. Lientz and Swanson's explorative model of maintenance effort.

2.4.6 Discussion on the direction of maintenance costs


In Section 2.4.3 we indicated that the development costs per line of code had decreased in the
last decade, due to more reuse and improved technology and processes in the software production organizations. More demanding users and more computing power have increased the size,
and hence the complexity of new software applications, as indicated in Section 2.4.4. The
increased size and complexity of the systems have kept the relative costs of maintenance and development at the same level, as indicated in Figure 8. These observations match the empirically determined explorative model of Lientz and Swanson, as discussed in Section 2.4.5.
Since today's systems are generally larger, with a similar age distribution, and are maintained by maintainers with similar relative development knowledge, we conclude that the improvements in software development technology and processes have not been beneficial to the relative amount of maintenance costs. From the data available, maintenance productivity has not increased in terms of lines of code maintained per maintainer per year. On the contrary, it seems that maintenance costs increase relative to development costs when systems get larger.
A causal explanation may be that larger systems are more complex so that maintainers can
comprehend less, and the increased system size may cause an additional level of management
in the maintenance organization. When the maintenance organization increases in size, effective communication can be a problem which further decreases individual productivity.
The question to ask is then: What factors can be improved in order to reduce the maintenance
cost below the traditional 50% level? Or rather: How can we make maintenance costs independent of system size? To be able to answer this, we look into the problems related to software
maintenance.


2.5 Software Maintenance - The Problems


In this section we present the findings of some of the major investigations on software maintenance problems. Since relative maintenance costs have not been reduced during the last two decades, it is interesting to find out which problems are perceived as the most important to address in order to reduce these costs. The results from the investigations give us some clues.
Each investigation is first presented in Section 2.5.1 to Section 2.5.6, and a discussion of the
problems found is presented in Section 2.5.7.
The investigations presented here are those of Lientz and Swanson [Lientz and Swanson,
1980], Krogstie [Krogstie, 1994c], Nosek and Palvia [Nosek and Palvia, 1990], Foffani [Foffani, 1992], Chapin [Chapin, 1985], and Dekleva [Dekleva, 1992]. All investigations were carried out in a 15 year time period, ranging from 1980 to 1994.

2.5.1 The Lientz and Swanson investigation


In [Lientz and Swanson, 1980], Lientz and Swanson present their investigation on software maintenance management. They formulated 26 potential problems of software maintenance. This list of problems has been used several times in subsequent studies by other researchers. The respondents were asked to rate the problems on a five-grade ordinal scale:
1. no problem at all,
2. somewhat minor problem,
3. minor problem,
4. somewhat major problem, and
5. major problem.

The complete list of problems is shown in Table 8.


TABLE 8. Lientz and Swanson's list of problems in maintenance

- Demand for enhancements and extensions
- Competing demands for programmer time
- Documentation quality
- Inadequate user training
- Meeting scheduled commitments
- Lack of user understanding
- Quality of original programming
- Number of maintenance programmers available
- Program processing time requirements
- Unrealistic user expectations
- Forecasting maintenance programming requirements
- Adequacy of system design specifications
- Turnover in user organization
- Maintenance personnel turnover
- Adherence to programming standards
- Skills of maintenance programmers
- System hardware and software changes
- Maintenance programming productivity
- Budgetary pressures
- Program storage requirements
- Maintenance programmer motivation
- Data integrity
- Lack of user interest
- System run failures
- Management support
- System hardware and software reliability

The major problems reported, based on the median of the ordinal responses, were (the corresponding mean on the transformed interval scale and the problem's rank on that scale are given in parentheses):

1. Demand for enhancements and extensions. (Mean = 3.20, rank = 1)
2. Competing demands for maintenance programmer time. (Mean = 3.03, rank = 2)
3. Quality of documentation. (Mean = 3.00, rank = 3)
4. Inadequate user training. (Mean = 2.76, rank = 4)
5. Meeting scheduled commitments. (Mean = 2.69, rank = 5)
6. Turnover in user organization. (Mean = 2.36, rank = 13)
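To illustrate the two summaries used here (the median of the ordinal responses versus the mean on the transformed interval scale), the sketch below uses a handful of invented ratings; the numbers are not survey data.

# Invented ratings on the 1-5 ordinal scale for a single problem, summarized
# both by the ordinal median and by the mean on the interval interpretation.

from statistics import median, mean

ratings = [5, 4, 4, 3, 3, 3, 2, 1]   # invented responses

print(median(ratings))   # ordinal summary, as used for the ranking above
print(mean(ratings))     # mean on the transformed interval scale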

A notable finding here is that of the top six perceived problems, only one is of a technical
nature. This is related to the quality of documentation. The other problems are related to
process issues and user interaction.
To find the underlying dimensions of the assessed problems, a factor analysis was undertaken.
Six factors were produced which accounted for the common variance in the 26 problems.
Table 9 shows a simplified presentation of the factor analysis.
TABLE 9. Problem groups in Lientz and Swanson's study

Problem group              Eigenvalue   % of var.   Item components
User knowledge             7.25         59.5        Lack of user understanding. Inadequate user training
Programmer effectiveness   1.45         11.9        Maintenance programming productivity & motivation. Skills of maintenance programmers
Product quality            1.16         9.5         Adequacy of system design specs. Quality of original programming. Documentation quality
Progr. time avail.         0.97         8.0         Competing demands for programmer time
Machine req.               0.76         6.8         Program storage/processing time requirements
System reliability         0.59         4.8         SW/HW reliability. Data integrity

All groups correlated significantly positively with a larger amount of maintenance effort. The second and third problem groups both correlated significantly positively with system age, as well as significantly negatively with relative development experience. System size correlated significantly positively with the second group.
The second, third and sixth problem groups were found to correlate significantly with the number of persons assigned to maintenance. This last relation may indicate, as argued in Section 2.4.6, that difficulties in communication among maintainers increase when the number of maintainers increases, resulting in less effective maintenance. Another explanation is that the perceived problems of maintenance grow when the maintenance budget becomes more visible in the organization, as suggested in [Lientz and Swanson, 1980].

2.5.2 Problem areas in Krogstie's investigation


Krogstie ([Krogstie, 1994c]) used the original set of potential problems used by Lientz and
Swanson in his investigation. Sorting on the mean score, the major problems reported in this
investigation were:
1. Quality of original application system (3.6)
2. Turnover of maintenance personnel (3.5)
3. Quality of documentation (3.5)


4. Inadequate training of user personnel (3.2)


5. Competing demands for maintenance personnel (3.2)
6. Skills of maintenance personnel (3.1)
7. Availability of maintenance personnel (3.1)

We observe that several of the highly ranked problems match those of the Lientz and Swanson
investigation. Again, documentation quality is the only technical issue among the most important problems. Turnover and lack of skills among maintenance personnel are also ranked high.

2.5.3 The Nosek and Palvia study


In [Nosek and Palvia, 1990], Nosek and Palvia present a study which also used the set of possible maintenance problems proposed by Lientz and Swanson. The respondents of the study
were a subset of the same population used by Lientz and Swanson, namely members of the
DPMA association. The questionnaire was distributed to 240 persons, with 52 responses, giving a response rate of 22%. The major problems reported in this study were:
1. User demands for enhancements (3.29)
2. Quality and documentation of software (3.17)
3. Competing demands for personnel (3.17)
4. Availability of maintenance personnel (2.65)
5. Inadequate training of user personnel (2.65)
6. Lack of user understanding the system (2.62)
7. Quality of the original application system (2.58)

The average score of all 26 problems slightly increased compared to the Lientz and Swanson
investigation, which indicates that the maintenance problems have slightly increased during the
last years. Nosek and Palvia particularly noted that some problems seemed more acute at the
70-90% confidence level. These were (1) maintenance programmer turnover, motivation and
productivity, (2) documentation quality and system design specification, and (3) unrealistic
user expectations and budgetary pressures.
The last set of problems, which was viewed as more acute, was attributed by Nosek and Palvia to the PC revolution. They argue that a new attitude has emerged among users and managers: "If I can do it so easily on my PC, why does it cost them (the maintenance engineers) so much more and take so much more time?"

2.5.4 Foffani's study


In [Foffani, 1992], Foffani reports on a study of the state of software maintenance in Italy. Nine organizations were interviewed to answer 97 questions covering the organization, tools, staff, software process, and operational issues in their organization. In the study, each maintenance staff member had an average of 173 KLOC, or 294 programs, to manage. Each maintainer
also had to deal with 137 final users on the average. Most of the applications used in the organizations were old (16% over 10 years, 34% over 6 years), and the oldest were also found to be
the most critical for the organizations.


The five top reasons for problems in the maintenance work were:

1. Continual change of the programs, which did not allow the maintainers to control the quality of the programs. This continual change was triggered by a steady flow of enhancement requests.
2. The quality of documentation. There was generally little documentation, and what existed was unreliable and out of date. This was also the state of code comments.
3. The quality of software. A large amount of the corrective maintenance was due to defective software, and the maintainability scores measured were generally low.
4. Problems with users. On average, a maintainer had contact with 137 final users, and more than half of the maintainers reported that contact with the users occurred on a daily basis. Unrealistic expectations from the users also resulted in dissatisfaction and user inquiries, and made the pressure on the maintainer even tighter. Users were also generally inadequately trained in using the application, resulting in problems and implying that the maintainer had to act in a help desk role.
5. Bad performance was reported as the fifth most important problem for maintenance.

Like the three previous studies, Foffani also used the set of possible maintenance problems
proposed by Lientz and Swanson as a starting point in his investigation.

2.5.5 Chapin's open-ended question


The studies on software maintenance problems outlined above all assess the state of software
maintenance problems using the same set of 26 predefined questions. While this gives good
control over the design of the investigation, and also shows the trendlines of shifting maintenance problems, there is a danger that new problems; problems which were not apparent when
Lientz and Swanson made the list, are more severe today. A study on software maintenance
problems trying to overcome this problem is described by Chapin ([Chapin, 1985]).
In 1984, Chapin asked 260 software maintenance managers to list their opinion of the major
problems in maintaining programs.
The problems which were most frequently reported were
1. Documentation - reported 134 times,
2. Personnel capability - 80,
3. Personnel shortage - 42,
4. Software complexity - 38,
5. Communication with users - 34, and
6. Software not well structured - 32.

The full list showing the frequencies of 37 different responses is given in [Chapin, 1985].
Chapin grouped the responses into eight categories.
- 48% of the problems identified were related to the characteristics of the software itself, such as bad documentation, complex and not well structured code, old code, etc.,
- personnel factors (20%),
- maintenance management (9%),
- environmental factors (8%),
- activities involving software (8%),
- distribution of software (2%),
- user relations (5%), and
- a miscellaneous category not accounting for any percentage.

Chapin interprets the most significant problems to be related to circumstances or conditions


under which the altering of the performance of the software is to be done. The respondents had
a list of software related problems which hindered them in doing effective maintenance. To
quote Chapin
It is as if maintenance personnel were being asked to do ballet on a muddy dirt road while
dodging traffic in the sleet - the task can be done, but the circumstances hinder good performance and actually increase the amount of work to be done.

Chapin also notes that the responding managers do not perceive the reported problems to be related to productivity issues. This is in contrast to the rating given to productivity problems in the Lientz and Swanson predefined problem list. The managers seem to react to problems when they encounter them, rather than acting in advance to avoid problems before they occur and to improve the performance of maintenance.
The lack of documentation is viewed as the largest single problem, reported by more than half of the respondents. This problem, together with the other problems classified as software characteristics (e.g. complexity, bad structure, old code, poor techniques used), is induced by the organization itself. Compared with the previous paragraph, it is symptomatic that none of the maintenance managers report problems with changing this poor state of software in their organization.
An interesting observation in the Chapin study was that only one of the 769 responses perceived user demand as a major problem. This is in strong contrast to the Nosek/Palvia and Foffani studies which reported the problem of user demand for enhancements and problems
with a large amount of user interaction among the major ones.

2.5.6 Dekleva's Delphi study on maintenance problems


Dekleva ([Dekleva, 1992]) reports on a survey undertaken in 1987 by the Software Maintenance Association1 (SMA), where responses were collected to the question "What is the biggest challenge your organization faces with regard to software maintenance?". Dekleva arranged the responses into the categories suggested by Chapin, and found that the most frequent category according to his grouping was "Environmental factors". This category, with 29% of all challenges, included lack of resources. The next two categories were "System characteristics" and "Management", with 21% and 18% of the responses. This contrasts with the results of the
Chapin study, where almost half of the responses were related to system characteristics, and
only 8% of the responses were related to environmental factors.
Dekleva also reports on the 1990 SMA survey, where the question was rephrased to "What are the three major software maintenance problems in your IS department?". Again, after classifying into categories, the four most populated categories of problems were "Management" (28%), "System characteristics" (21%), "Personnel issues" (19%), and "Environmental factors" (19%).

1. Reference to a working paper by Ball, R.K.: 1987 Annual Software Maintenance Survey: Survey Results, working paper, Software Maintenance Association, Vallejo, California, 1987. We have not been able to obtain a copy of this.
Dekleva then argues that
these results [taken together with the results of Lientz and Swanson] indicate that the perception of software maintenance problems has changed over time. In the late 1970s and early
1980s, maintainers felt that the major cause of software maintenance problems was users.
Blame later first turned toward the poor state of software systems, then toward the environment, and finally toward management.

We are sceptical of such an interpretation because of the differences in the design of the investigations: the respondents of the Lientz and Swanson investigation were asked to rate 26 potential problems, while the respondents in the three studies described above were asked to state their real problems. We identify at least three pitfalls in such an interpretation:
1. Respondents who have to formulate their own problems may be subjective and not synchronized with the rest of the organization.
2. Respondents who have to formulate their own problems may also agree with, and even give a high rating to, several of the problems outlined among the 26 problems suggested by Lientz and Swanson.
3. The classification of the problems formulated by the respondents may be performed incorrectly, and thus skew the cumulative distribution over categories.
All this said, we cannot be sure that the problems formulated by Lientz and Swanson cover today's problems, and investigations where users formulate their current real problems must be undertaken. These real problems may then be collected and used in a new investigation organized similarly to that of Lientz and Swanson.
Dekleva realized these problems with the SMA surveys and designed a three-round Delphi study to obtain consensus among a group of about 70 maintenance professionals (of whom most, 83%, were maintenance managers). The initial list of problems to build consensus around was compiled after a questionnaire at the 1990 Annual Conference of the SMA. The participants were mostly volunteers from this conference and the next annual conference. The participants were asked to rate the 19 problems on a scale from 1 (low importance) to 10 (high importance), and to give a rationale for each problem. After each round, the mean and standard deviation were computed, and this list and the response given by the participant were returned to that participant in the next round. The resulting means and standard deviations from the Delphi study are shown in Table 10, and the continuing decrease of the standard deviations indicates that consensus increased during the rounds.
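A much simplified sketch of one aggregation round in such a Delphi study is given below; the problem names and ratings are invented, and only the mean/standard deviation feedback step is shown.

# One simplified Delphi round: collect 1-10 ratings per problem and compute
# the mean and standard deviation that are fed back to the participants.

from statistics import mean, stdev

ratings = {                                  # invented ratings, five participants
    "Changing priorities": [7, 6, 8, 5, 6],
    "Inadequate testing methods": [6, 7, 5, 6, 7],
    "System documentation incomplete": [8, 5, 6, 6, 5],
}

feedback = {problem: (round(mean(r), 1), round(stdev(r), 1))
            for problem, r in ratings.items()}

for problem, (m, sd) in sorted(feedback.items(), key=lambda kv: -kv[1][0]):
    print(f"{problem}: mean={m}, SD={sd}")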
Dekleva presents the rationale for the top six problems in Table 10. These are:
1. Changing priorities: The maintenance task cycle is longer than the request cycle. Thus there is always something more critical or another user with a problem. A great deal of time is wasted on stopping and starting maintenance tasks.
2. Inadequate testing methods: There is a lack of understanding and use of testing methods, lack of time, lack of comprehensive regression test data, and lack of rigorous testing requirements as a standard for passing an application into production.
3. Performance measurement difficulties: It is difficult to measure individual and group performance of maintainers.
4. System documentation incomplete or non-existent: Lack of accurate documentation decreases maintenance productivity and increases the learning curve dramatically for new maintainers. Knowledgeable people get stuck in their jobs because of their understanding (walking documentation).
5. Adapting to the rapidly changing business environment: The business environment is changing at a tremendously rapid pace. A system is already obsolete when it is implemented. Adapting is particularly difficult in the case of old systems, because of bad code, high complexity, or old technology.
6. Large backlog: Users are dissatisfied and impatient. Limited work can be done; low priority requests will never be addressed.

TABLE 10. Delphi study on maintenance problems (from [Dekleva, 1992])

The table gives, for each problem and each of the three Delphi rounds, the mean and standard deviation of the importance ratings on the 1-10 scale; problems marked as new were added after the first round (see [Dekleva, 1992] for the full figures per problem and round). The abbreviated problem names, in the resulting rank order, are:

1. Changing priorities
2. Inadequate testing methods
3. Performance measurement difficulties
4. System documentation incomplete or non-existent
5. Adapting to the rapidly changing business environment
6. Large backlog
7. Contribution measurement difficulties
8. Low morale due to the lack of motivation and respect
9. Lack of maintenance personnel, particularly experienced maintainers
10. Lack of maintenance methodology, standards, procedures and tools
11. Program code is complex and unstructured
12. Integration of overlapping and incompatible systems or subsystems
13. Maintainers lack proper training
14. Strategic plans
15. Understanding and responding to business needs
16. Lack of managerial understanding and support
17. Antiquated systems and technology
18. Lack of support for reengineering
19. High turnover causing a loss of expertise

Round means: 5.1 (SD 2.7) in round 1, 5.4 (SD 2.3) in round 2, and 5.3 (SD 2.1) in round 3. The round 3 means per problem range from 6.4 for the highest ranked problem down to 3.8 for the lowest.


Dekleva seeks a causal model of the maintenance problems. The problems are categorized
using cluster analysis. The categories are then slightly modified to match the semantic similarities of the problems. Dekleva categorizes the problems according to either the entity directly
causing them, or the entity in the best position to control them.
The relations among the categories were sought by computing significant correlations among
them. This, together with responses from the participants in the study, provided relationships


among the deduced categories. The causal associations among the categories were determined
after counting qualitative statements indicating causality and assuring that the directional balance was at least four to one. Dekleva's causal model is shown in Figure 10.
[Figure 10 omitted: the four problem categories Maintenance Management (problems 1, 2, 3, 6, 10, 16), Personnel Factors (8, 9, 13, 15, 19), System Characteristics (4, 11, 12, 17), and Organizational Environment (5, 7, 14, 18), connected by lines labelled with correlation coefficients (0.47, 0.56, 0.4, 0.15, 0.15).]

FIGURE 10. Dekleva's causal associations among maintenance problems

The correlation coefficients among the categories are indicated on the lines. The numbers inside the categories
reflect the problem numbers from Table 10.
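The directional-balance rule can be stated compactly; the sketch below is our own paraphrase of that counting step, with invented counts.

# Dekleva's rule, as described above: a causal direction between two problem
# categories is accepted only if statements in one direction outnumber those
# in the opposite direction by at least four to one. The counts are invented.

def causal_direction(a_causes_b, b_causes_a, ratio=4.0):
    """Return the accepted direction, or None if the balance is insufficient."""
    if a_causes_b and (b_causes_a == 0 or a_causes_b / b_causes_a >= ratio):
        return "a -> b"
    if b_causes_a and (a_causes_b == 0 or b_causes_a / a_causes_b >= ratio):
        return "b -> a"
    return None

print(causal_direction(9, 2))   # 4.5 : 1   -> "a -> b"
print(causal_direction(5, 3))   # below 4:1 -> None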

2.5.7 Discussion
If we compare Dekleva's causal model of problems in Figure 10 with Lientz and Swanson's explorative model of effort in Figure 9, we observe that the variable relative development experience in the latter figure can be put into Dekleva's personnel factors category. The three other variables may be placed in the system characteristics category. If we assume that an increase in problems causes increased effort, we believe that Lientz and Swanson's model is too simple to describe causal relationships of effort in general. On the other hand, Dekleva's model does not contain sufficient information about causal associations among individual problems. This means that maintenance managers cannot use it to find out how individual problems are related to effort.
We want to summarize the results of the different investigations. As a start we list the major problems from all six studies in Table 11. We observe that nine of the twelve problems listed were also mentioned in Lientz and Swanson's proposed list of maintenance problems. In the open-ended question stated by Chapin, none of the most frequent responses given by the respondents are different from those given by Lientz and Swanson. The Dekleva study reports three new problems (j, k and l) not proposed by Lientz and Swanson.
The problems rated fourth and fifth in the Nosek and Palvia study are combined under the header "Inadequate training of user personnel". Similarly, we group the fourth and sixth rated problems of Chapin under "Quality of original system, structure and complexity". Finally, we find the problem of large backlogs of work to be similar to Lientz and Swanson's original problem of meeting scheduled commitments. Can we draw any conclusions from this table?
First of all, we notice that all problems listed are very general. None of the investigations
report specific problems of actual maintenance tasks. This is due to the positions of the
respondents in the surveys. Most of them have managerial positions at some level in their
organization. We assume that most of the respondents have been doing actual maintenance.


TABLE 11. Important problems of maintenance. The table ranks each problem in the studies of Lientz & Swanson (1980), Chapin (1985), Nosek & Palvia (1990), Foffani (1992), Dekleva (1992), and Krogstie (1994); the per-study ranks are not reproduced here.
Problem
a Documentation
b Inadequate training of user personnel
c Quality of original system, structure & complexity
d Turnover and recruiting of maintenance personnel
e Demand for enhancement and extensions
f Competing demands for maintenance personnel
g Skills of maintenance personnel
h Bad performance (a)
i Meeting scheduled commitments, large backlog
j Changing priorities
k Inadequate testing methods
l Adapting to the rapidly changing business environment
a. Termed program processing time requirements by Lientz and Swanson

We assume that most of the respondents have been doing actual maintenance work, but the responses given reflect a managerial attitude towards maintenance problems. This is a problem with the investigations, since researchers are not given specific clues to guide their research to advance the state of maintenance. Nevertheless, the investigations are valuable in that they give an overall view of the general maintenance problems. Later in this chapter, in Section 2.6, we present studies which provide an insight into important real problems of actual maintenance tasks.
Secondly, we note that we can group the problems into four categories: technology, users,
maintainers and management. We discuss each category below:
Technology (problems a, c, h, k): The quality of the original system is perceived to be low by the maintenance organization. Documentation is generally lacking, and what is available is often out-of-date. The lack of documentation reduces the productivity of the maintenance staff, and makes it difficult to keep up with user demands. Further changes to the system deteriorate the structure, documentation quality and performance, as well as increase the complexity of the system. Lack of adequate testing methods, large backlogs and changing priorities mean that changes are rushed, which in turn generates more user demands.
Users (b, e): Maintenance management finds that inadequate user training creates a problem for the maintenance staff. Since the users are not sufficiently drilled in using the application, they may ask for functionality which already exists in the product, if used correctly. Users have different background experience using other applications, and non-standard user interfaces may cause problems. As users get more accustomed to the application, new functionality which was not previously considered interesting may become necessary. It is somewhat interesting that this is perceived as a problem for the maintenance organization, as user demands are what pay for their bread and butter. The core of the problem with annoying users may be explained by a combination of the other three categories: the technology for maintenance is not sufficient, as it fails to uphold the quality of the software application and its documentation. This in turn may be attributed to lacking skills of inexperienced maintainers and management's continually changing priorities, forgetting about already agreed enhancements, which creates an increasing backlog.
We argue that a high demand for enhancements should be perceived as valuable; however, if the organization is not prepared to handle the increased activity this will imply, it could be perceived as a nuisance.
Maintainers (d, f, g): As reported in the introduction section of this chapter, working in the maintenance organization has traditionally been regarded as second-class software work. Thus recruiting personnel to maintenance positions is difficult; at the same time, turnover in the maintenance organization is reported as a major problem. This means that the persons working in maintenance may be little motivated to learn the procedures and technology used in the maintenance organization, as they are constantly looking for work in other parts of the organization. The key maintenance personnel are constantly overloaded, being pushed by management to spread their effort, by customers to provide enhancements and extensions, and by less skilled colleagues to share their knowledge in lieu of updated documentation. This also explains why the skilled maintainers do not take (or even have) the time to update the documentation in the first place.
To help in this situation, we propose that maintenance research should focus on technology which can aid new maintainers to reach full proficiency in less time. From the reported problems, we find that the most important problems to tackle are how to document and how to utilize this documentation.
Management (i, j, l): The rapidly changing business environment means that management pushes the rest of the maintenance organization to stay in front of the competition. Problems which were important yesterday may seem insignificant when the closest competitor has announced a new set of features. Decreasing the priority of previously agreed commitments may be the result, pulling maintainers away from existing assignments. Such actions increase the backlog of the maintenance organization, and generate even more inquiries from the old customers who are waiting for their agreed-upon extensions. The pressure on the maintainers is steadily increasing, resulting in increasing turnover, and even more difficulties recruiting staff to maintenance tasks.
To sum up, we can only conclude that the problems of maintenance are multi-faceted. Still, we want to highlight one thread we find particularly important:
The traditional negative view of maintenance work has resulted in high turnover in the maintenance organization.
This high throughput of personnel has made it difficult to enforce a disciplined process for maintenance.
High pressure on key maintenance personnel has made documentation work neglected in the first place.
This neglect has resulted in further pressure on the experienced maintainers to act as walking documentation, and has made inexperienced personnel use too much time on reading code in order to understand the system.
This is a vicious circle which is difficult to break.


2.6 Studies of real maintenance problems


As noted in Section 2.5.7, all investigations presented reported problems of maintenance as seen from a managerial view. In this section we report findings of several important studies which have examined how systems are maintained, and what the problems are when maintainers are working with the applications. Since we in the previous section concluded that problems with lacking documentation and high turnover of maintainers were important, we focus on important research which has studied maintainers' cognition processes when trying to get familiar with a new software system.

2.6.1 Blum's maintenance paradox


We found this paradox, nicely formulated by Blum in [Blum, 1995], to be a good description of the credibility problem of software maintenance. Of course, a reader who has read the chapter to this point will not find this paradox strange at all.
We know that the cost to maintain a software system generally exceeds the cost of its initial implementation. Thus, experience with a software system increases proportionally to its post-delivery use. We also know that we learn from experience, so that tasks become easier as the experience base grows. It follows, therefore, that as experience with a software product increases, maintenance efficiency should improve. Yet this is not the case; the older the system software is, the more costly it is to maintain.

Blum observes that the maintenance paradox is due to the lack of effective high-level documentation about the software features to be changed. This means that maintainers are forced to interpret and change the programs in the context of what the code actually does, instead of with respect to the code's intended role in the system. This reduces the flexibility of the system, and hence increases the complexity both of the software and of the maintenance tasks. Blum describes an application development environment, TEDIUM, which generates programs from specifications in the mature field of interactive information systems. Blum argues that since lacking documentation is a problem in traditional maintenance, maintainers should do all manipulations at the design/document level, and programs can be generated automatically. He reports some success with this approach in the relatively narrow domain of patient information systems for hospitals.

2.6.2 Lakhotia's program reading theory


In [Lakhotia, 1993], Lakhotia describes his rediscovery of a program comprehension theory first proposed by Brooks in [Brooks, 1983]. This theory can be summarized as:
1. The programming process is one of constructing mappings from a problem domain, possibly through several intermediate domains, into the programming domain.
2. Comprehending a program involves reconstructing part or all of these mappings.
3. This reconstruction process is expectation driven by the creation, confirmation, and refinement of hypotheses.
Lakhotia found that he as an expert skimmed through the code to look for tags which matched a mental model he had built of the system. When observing graduate students trying to make similar changes, he found that these had a different approach: they read the source code in sequence to try to build a mental model of the system's structure and functionality. Thus, while he as an expert used a top-down approach to localize the parts to change, the novices used a bottom-up approach.
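As a small, invented illustration (the code below is ours, not taken from Lakhotia's study), an expert using the top-down approach might skim a C++ file for a tag, or beacon, that confirms the hypothesis "this part sorts the records", such as a swap inside nested loops, instead of reading every statement in sequence as a novice would:

#include <string>
#include <vector>

struct Record {
    int key;
    std::string payload;
};

// An expert skimming this function spots the nested loops and the swap,
// classic beacons for a sorting plan, and hypothesizes "this sorts records
// by key" without reading every statement.
void sortRecords(std::vector<Record>& records) {
    for (std::size_t i = 0; i + 1 < records.size(); ++i) {
        for (std::size_t j = 0; j + 1 < records.size() - i; ++j) {
            if (records[j + 1].key < records[j].key) {
                Record tmp = records[j];      // the swap is the beacon
                records[j] = records[j + 1];
                records[j + 1] = tmp;
            }
        }
    }
}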


2.6.3 The systematic and as-needed strategies of Littman et al.


The observations of Lakhotia are supported by Littman et al. ([Littman et al., 1987]). They analysed video-taped protocols of experienced maintainers at work. The analyses revealed two strategies for understanding a program in order to maintain it: the systematic strategy and the as-needed strategy.
When using the systematic strategy, maintainers tried to build a global understanding of the structure and behaviour of the program by symbolic execution and data flow traces, to gain causal knowledge of component connections.
Maintainers who used the as-needed strategy tried to minimize studying of the program, focusing on local behaviour, before trying to do modifications.
The protocol analyses performed showed that maintainers who used the systematic strategy were more successful in modifying the programs. The programs used in the study were small, and Littman et al. realized that a systematic strategy would be unrealistic when larger programs were to be maintained. Too much time would be used on understanding the system before modifications could be performed. We may add to this that the period in which a system is under maintenance is relatively long compared to the development period. In addition, maintainers may work on several applications. Thus, it is difficult to keep several mental models fresh in mind for such a long time.

2.6.4 The concept assignment problem of Biggerstaff et al.


In [Biggerstaff et al., 1993], Biggerstaff et al. define the concept assignment problem in program understanding as the problem of discovering individual human-oriented concepts(1) and assigning them to their implementation-oriented counterparts(2) for a given program.
Biggerstaff et al. describe several prototypes of tools that can help programmers in the program understanding process. Their own design recovery system, DESIRE, is illustrated with an example. DESIRE contains two facilities to assist in the program understanding process: a passive assistance facility, in which the user can inspect the source code in different views (e.g. slices [Weiser, 1984], call graphs, cluster analysis), and an active facility, which includes a domain model characterizing the concepts of a domain and tries to map these concepts to parts of the code automatically, so that the user can further inspect these parts of the code with the passive assistance facility.
1. The example given is "reserve an airline seat".
2. if ((seat=request(flight)) && available(seat)) then reserve(seat, customer)
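To make the footnoted airline-seat example concrete, the following hypothetical C++ sketch (the helper functions and their stub bodies are our own invention) shows the implementation-oriented counterparts to which the human-oriented concept "reserve an airline seat" must be assigned:

#include <optional>
#include <string>

struct Seat { int row; char column; };
struct Customer { std::string name; };

// Implementation-oriented pieces; in a real program these would be scattered
// across the code (stub bodies keep the sketch self-contained).
std::optional<Seat> request(const std::string& flight) {
    if (flight.empty()) return std::nullopt;
    return Seat{12, 'C'};
}
bool available(const Seat&) { return true; }
void reserve(const Seat&, const Customer&) { /* record the booking */ }

// Concept assignment means recognizing that these calls together realize the
// human-oriented concept "reserve an airline seat".
bool reserveAirlineSeat(const std::string& flight, const Customer& customer) {
    std::optional<Seat> seat = request(flight);
    if (seat && available(*seat)) {
        reserve(*seat, customer);
        return true;
    }
    return false;
}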

2.6.5 Von Mayrhauser and Vans' protocol analysis


In [Von Mayrhauser and Vans, 1994], Von Mayrhauser and Vans describe a protocol analysis experiment with 11 experienced maintenance engineers. The aim of the protocol analysis was to acquire knowledge about what kinds of information were needed for maintaining large programs. They describe their findings in terms of a meta-model for program comprehension, consisting of:
1. A top-down model including knowledge of the application domain. This is typically active if the domain is familiar to the programmer. This model contains information about what can be expected.


2. A program model, which is built bottom-up by the maintainers as a control flow abstraction of the program. The program model contains knowledge of what is done.
3. A situation model, which uses the program model to build a data flow and functional abstraction of the program. The situation model contains knowledge of how the "what" is done.
4. A (mental) knowledge base, which is used to provide long-term storage for the knowledge represented in the other three models.


In the protocol analyses, Von Mayrhauser and Vans found that the maintainers shifted among the three models as they expanded their system knowledge. When work was constrained to one module, references to the program model were predominant. When large parts of the system had to be understood, references to the top-down model were predominant. The analyses also indicated which knowledge the maintenance engineers were consulting or looking for in the different models. In Table 12, we repeat their findings on the knowledge sought in the different cognition processes.
TABLE 12. Knowledge sought by maintainers, from [Von Mayrhauser and Vans, 1994].

Domain Information Necessary
  Product Specific Knowledge: Commands and use (e.g. HPUX vs UNIX operating system commands); system configuration (e.g. how to configure the system for test or bug reproduction); standards and product-independent information (e.g. operating system principles).
  Area Knowledge: Prior experience; formal instruction.
  Architecture: Structure (e.g. the OS as components: process, I/O, memory, file management); interconnections (e.g. how they are related).
  Cross references: Within the domain, including where to find the information we need (e.g. the expert or a specific text book).
  Key term: Very important. This guides understanding and ties the domain model to the situation and program models (e.g. process manager using a round robin scheduling algorithm).

Situation Information Necessary
  Algorithms and data structures: Language independent, detailed design level (e.g. the functional sequence of steps in the round robin algorithm, or a graphical representation of the process engine).
  Detailed design: Close to code, but language independent. Specific product information in functional terms.
  Pop-up technology connected to domain information: Find design rationale; connect the algorithm to the purpose of the application.
  Conventions: Use the same terms across comprehensions and models. These are the key terms!
  Cross reference levels of information with connection to other models: Also within the same situation model level.

Program Model Information Necessary
  Variable & component names: Key terms; meaningful mnemonics and acronyms for symbols.
  Pop-ups connected to situation and domain: Capability to follow a beacon to design and domain information.
  Critical sections of code identified: Focus attention; improve efficiency.
  Formalized beacons: Focus attention; improve efficiency.
  Cross references: Back to the situation and domain levels and connections.

Von Mayrhauser and Vans conclude that much of the information presented in Table 12 exists in manuals. However, it is poorly accessible to maintainers:
"... maintenance activities can be described by a small set of cognition processes. These aggregate into higher level processes, each with their goals and information needs. [...] Current practice of documentation and coding does not encourage efficient understanding as it compartmentalizes knowledge by type of document, and rarely provides the cross references that are needed to support programmers' cognitive needs."
Maintainers need direct access to the information, and facilities to access information at different abstraction levels (e.g. domain information, requirements, detailed design, test reports, user manuals, code) simultaneously. They argue that documentation should be organized around the information needed in the cognitive processes used by the maintenance engineers, and that this information needs to be presented to the maintenance engineer by tools, not only as paper documents.

2.6.6 Soloway and Ehrlich's programming plans


In [Soloway and Ehrlich, 1984], Soloway and Ehrlich report a study investigating the impact of programming knowledge on the ability to fill in the blanks in small programs(1). The claim of Soloway and Ehrlich is that essentially two types of programming knowledge distinguish expert programmers from novice programmers. The first is knowledge of programming plans, which are program fragments that represent action sequences in programming, e.g. nested loops for sorting, single loops for searching. The second is knowledge of rules of programming discourse, which are rules that specify conventions in programming, e.g. that variable names agree with function.
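As a hedged, invented illustration (these fragments are ours, not among Soloway and Ehrlich's actual test programs), the following C++ functions contrast a plan-conforming, alpha-style search loop with a beta-style variant that breaks the discourse rule that variable names agree with their function:

#include <vector>

// Alpha-style: a single-loop search plan, and the name "maxValue" agrees
// with what the variable holds (assumes a non-empty vector).
int largest(const std::vector<int>& values) {
    int maxValue = values.front();
    for (int v : values) {
        if (v > maxValue) maxValue = v;
    }
    return maxValue;
}

// Beta-style: the same plan, but the discourse rule is broken; the variable
// named maxValue actually accumulates the minimum, which misleads a reader
// who relies on the naming convention.
int smallest(const std::vector<int>& values) {
    int maxValue = values.front();
    for (int v : values) {
        if (v < maxValue) maxValue = v;
    }
    return maxValue;
}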
Four simple programs (alpha) were designed, using a specific set of plans and discourse rules. The plans and discourse rules used in these programs were slightly modified, which resulted in four alternative programs (beta). One statement was hidden in all eight alpha and beta programs. Novice and expert programmers were asked to fill in the hidden statements in the programs. An experiment was designed to test the following hypotheses: (i) Experienced programmers would provide more accurate responses than novice programmers for the alpha programs. (ii) The differences in response accuracy between experts and novices would decrease for the beta programs, since experts will be puzzled by broken conventions while novices will not, as they do not know these conventions. (iii) Experts can recall programs which use familiar programming plans and do not break rules of discourse more accurately than programs which do not have these characteristics.
The results from the experiment supported all three hypotheses. While the percentages of correct responses for the alpha programs were 87 for expert programmers and 69 for novice programmers, the corresponding percentages for the beta programs were 34 and 28. In addition, the time required to answer the problems was significantly shorter for expert programmers. For both categories of programmers, the response time for the beta programs was on average 50% longer than for the alpha programs.
For examples of technology to automatically locate programming plans in source code, see [Harandi and Ning, 1990], [Quilici, 1993], and [Quilici, 1994].
1. Fill-in-the-blank tests are called cloze tests, and were developed as a measure of comprehension in the prose domain. [Hall and Zweben, 1986] discusses the use of the cloze test in the software domain, and concludes that it is appropriate if certain precautions are taken when the blanks are selected. This consideration was not made by Soloway and Ehrlich, and is a criticism made by Hall and Zweben. However, Soloway and Ehrlich's experiment design does not fall into the argued traps of using the cloze test in the software domain, as all deletions are program dependent, and the comprehensibility is not compared across programs, but rather across sets of similar programs with the same semantic deletions.

2.6.7 Further references on software maintenance


Several other publications describe investigations on software maintenance. A good overview is given in [Sharpe et al., 1991], which contains an extensive list of publications in empirical software maintenance research from 1980 to 1990, and concludes that research in the area increased over the decade. A similar, but more extensive, list is given in [Jørgensen, 1994]. Jørgensen reports that a search for empirical studies in software maintenance for the period from 1978 to 1994 resulted in 113 matches. His material reveals that the yearly average number of studies published up until 1990 was around 5, while from 1990 the average has been around 12. This indicates an increased interest in empirical study of software maintenance, and possibly also in the maintenance area as a whole. Jørgensen concludes that the scarcity of empirical software maintenance research gives only a weak indication of the present state of the field, and that there is a need to invest in more studies to complement and extend the data gathered in the studies so far.
While not dedicated to software maintenance, [Basili et al., 1986] provides an extensive list of experiments in software engineering, as well as a framework for analysing such experiments.
[Jørgensen, 1994] (first published in [Jørgensen and Maus, 1993]) also reports on a study of software maintenance tasks in a Norwegian company. 124 maintenance tasks were studied. The maintainer was interviewed before initiation and after completion of each task. Interesting observations in this study were, for example, that the primary source of information was oral communication with experienced maintainers, and that application system documentation played a minor role. Jørgensen measured the maintainers' productivity as LOC/effort. He concluded that this is dangerous, as no difference was found between inexperienced and experienced maintainers. This may be explained by experienced maintainers being assigned the difficult tasks, or by less experienced maintainers producing more code to solve a task.
[Guimaraes, 1983] presents a study of variables impacting maintenance costs. 43 organizations were superficially interviewed, while 5 were more closely investigated. 7 variables were found to describe much of the variance in the cost of maintenance of application systems. An interesting observation is that the degree of useful documentation correlated significantly and positively with reduced maintenance costs. The maintainers preferred documentation written in narrative English to flowcharts and I/O-forms.
[EPSOM, 1991] identifies and describes some of the underlying problems in software maintenance. A comparison of models for software development and maintenance shows that the models for maintenance are significantly more immature than those for development. An interesting model for software maintenance is the request-driven model described in [Bennett et al., 1990]. [EPSOM, 1991] also provides an overview of supporting tools on the technical side of software maintenance. Good references are included.
[Abran and Nguyenkim, 1993] presents the organization of a measurement program to measure maintenance activity cost data directly from real projects. They argue that results published in investigations show great variance, as organizations typically do not measure or collect maintenance-type data in a timely and accurate fashion.
Few empirical studies of maintenance in object-oriented systems have been published. One such study is [Li and Henry, 1995], which investigates the distribution of types of maintenance activities across class level hierarchies in three releases of two commercial systems. For the two systems studied, all classes were subject to all kinds of maintenance in the original release. In subsequent releases, a higher proportion of classes in the upper inheritance levels are affected by perfective, corrective and adaptive maintenance changes. This may result in increasingly complex higher-level classes as the system evolves. A tendency was also that perfective and corrective maintenance dominated the maintenance activities, with corrective maintenance being the more dominant in later releases.
[Fjelstad and Hamlen, 1979] presents very interesting results on the distribution of effort in the processing of modification requests. The study shows that 25% of the effort is related to actually implementing the change, verification accounts for 28%, while understanding the software accounts for 47%. [Devanbu et al., 1991] reports about a study performed by L. Modica at AT&T which shows that 30%-60% of maintenance cost is due to what Modica calls the discovery task. [Standish, 1984] estimates that 50-90% of maintenance time is devoted to program comprehension.
[Robson et al., 1991] is an easy-to-read, yet covering, introduction to the field of program understanding. It discusses the main approaches and gives some small examples.
We do not attempt here to give an overview of the technology of software maintenance. Excellent overviews of this can be found in [Zvegintzov, 1994a], [EPSOM, 1991], and [Oman, 1990].

2.6.8 Discussion
In the study of Littman et al. the only information about the program was the source code itself. This was also the case in the more informal study by Lakhotia. We argue that the mental model with causal knowledge described by Littman et al. is already available in the specifications and designs. The developers of the software system had this model when they designed the system. One of the primary tasks of developers is to write down this knowledge in the system's specifications and design. It is the lack of access to this information which restricts maintainers from building equally strong mental models using the as-needed strategy. The situation may be, of course, that either the developers did not make a strong enough commitment to record this information, or that the recorded information has become obsolete during the maintenance process. In either case, the fault is in the processes which guide the developers and maintainers, or in the enforcement of these processes.
As reported in Section 2.6.7, maintainers may spend a significant amount of the total maintenance effort on the discovery task, as Modica labels it.
As explained by Parnas (see Section 2.3.2), maintainers who cannot acquire the causal connections in a software system may insert local changes which are not in harmony with the overall system. For large programs, maintainers must be able to use an as-needed strategy at the system level, using all kinds of knowledge available for the application to build the (partial) mental model needed to be able to comply with a request for modifications.
Littman et al. propose the as-needed strategy when programs are larger; the maintainer should then be guided to the places in the code on which he should focus his attention. This is supported by the observations of both Blum, and von Mayrhauser and Vans, which are described earlier in this section.
When descriptions of all causal relations in a software system are available, the concept assignment problem described by Biggerstaff is no longer a problem, merely an easy task.

2.7 Chapter summary and conclusion


This section summarizes what has been described in this chapter, and provides some conclusions which will be important for the rest of the thesis.

2.7.1 Summary
The aim of this chapter has been to discover the many facets of software maintenance. The following facets were examined in detail:
What is software maintenance? We examined several definitions of software maintenance (SM), and discussed how SM compares to ordinary maintenance (OM). We found significant differences, particularly that SM can add functionality to an asset, while OM typically only preserves the functionality of an asset. We provided our own definition of SM which focused on how the SM activities are related to a shift in functionality levels.
Why does software maintenance happen? Two models explaining how software products evolve were presented. Simply put, SM is a consequence of the fact that software systems are dynamic systems. They influence and evolve together with the surrounding system, the software system's environment. We also described some models of how the evolution of the software (maintenance) process can be described and controlled.
What does software maintenance cost? We focused on the fact that software maintenance costs relative to development costs have not decreased during the last 20 years, although development productivity has increased. The main issue identified was that SM costs will probably remain high, as empirical data show that systems become harder to maintain when they grow in size.
What are the problems of software maintenance? We presented and discussed several investigations on software maintenance problems. When we compared the most important problems from these investigations, we found that the most important problems had not changed much over a 15 year period. We argued that the weakness of the investigations was that they presented the problems from a managerial point of view. We then presented a set of studies which focused on problems observed in real maintenance processes, particularly focusing on how maintainers try to understand a software system. References to other interesting, but in our context less relevant, maintenance research were provided.
To summarize, we gave an overview of the what and why of software maintenance, as well as the related costs and problems.

2.7.2 Conclusion
The common issue of all studies on maintenance problems described in this chapter can be summarized as the maintainer's search for information in a world of nonexistent information. Maintainers need information to be productive; this information does not exist. And even if it did, efficient ways of conveying it to the maintainers would be necessary. As noted in Section 2.6.7, studies have shown that more than 50% of the effort spent on maintenance is spent on what we can call the discovery task.
We have found that one of the most important problems of maintenance is lacking or obsolete documentation. Studies of how maintainers assemble their system knowledge suggest that they build up a model of causal associations among the system components. We argued that this causal information should already exist in good documentation.
Why is this information and documentation lacking? We suggest a two-part answer:
1. Documentation has traditionally been a part of the delivery which the maintenance personnel has signed off for maintenance. The documentation has not been available on-line, and there has been no good way of relating information in the documentation to particular parts of the code. Therefore, the documentation has been difficult to maintain.
2. Maintenance managers may have been short-sighted. How can they defend spending extra effort on updating documentation when customers are pushing for the release which was due two months ago?
Are the managers really short-sighted? Is there any evidence that using updated documentation will pay off during maintenance? Can one expect technology support for maintaining the documentation?
In Section 2.5.7 we concluded: The traditional negative view of maintenance work has resulted in high turnover in the maintenance organization. This high throughput of personnel has made it difficult to enforce a disciplined process for maintenance. High pressure on key maintenance personnel has made documentation work neglected in the first place. This neglect has resulted in further pressure on the experienced maintainers to act as walking documentation, and has made inexperienced personnel use too much time on reading code in order to understand the system.
Thus support systems capable of presenting information useful in a given context would score high on a maintainer's wish-list. Both Littman et al., and von Mayrhauser and Vans (see Section 2.6) admit that such systems do not exist today.
We believe it is mandatory for such systems to be able to present information in both the intra- and inter-level dimensions: if information is requested for a sequence of source code statements, then the design information, requirements, test information, as well as the annotations and code dependencies needed to understand that piece of code, should be presented to the maintainer.
In the next chapter, we describe an experiment for investigating the impact of documentation availability on software maintenance productivity. In that experiment we try to provide answers to the question: does documentation really help during maintenance, or is it just wasted effort to produce it?

CHAPTER 3

Assessing the Role of Documentation in Software Maintenance

3.1 Introduction
One of the main conclusions of Chapter 2 was that the cost and problems of maintenance could
be reduced if sufficient system documentation was available during maintenance.
This chapter describes the design, analysis and results of an experiment we conducted to investigate the impact of documentation availability on software maintenance productivity. Our assumption prior to this experiment is that the systematic use of updated documentation for a software system will reduce the time needed to comprehend the system in order to perform maintenance on it, and hence also the costs of maintenance. We refine this theory into a set of experimental hypotheses later in this chapter.
The experiment results were presented at the First International Workshop on Empirical Studies of Software Maintenance, see [Tryggeseth, 1996b].

3.1.1 Motivation for experiment


All maintainers should use available documentation in order to understand different aspects of the system on which they are to make changes. In many situations, this documentation has not been updated as modification requests have been fulfilled. The documentation has therefore lost its value, and valuable design decisions which would have helped in future maintenance have been lost.
One of the principal reasons for the lack of documentation updates has been the short-sighted planning which has been common in maintenance organizations. In the short run it may not be cost effective to update documentation. When maintainers have intimate knowledge of the system, they are not dependent on documentation. Documentation updates are therefore perceived as unnecessary costs. This view on documentation updates becomes a problem in at least two situations:
1. The short-sighted cost minimization is a problem when new persons join the maintenance organization. Without sufficient documentation on the system, the learning curve to understand the system is extremely steep.



2. When systems are large, no maintainer can have intimate knowledge of the whole system. If maintainers need information about a part of the system which they do not have knowledge about, they must communicate with the maintainers of these other parts. This will cause interruptions in work and reduced productivity. In certain situations, groups of maintainers may find themselves in a deadlock situation.
Maintenance managers may not have had any choice when choosing to neglect documentation.
They have been pressured by customers, competition and owners to deliver the day before
yesterday. Some of them may have believed that updating documentation would be a wise
thing to do, but none of them has had evidence that doing this would actually decrease the
maintenance costs.
As discussed in Section 2.4, empirical investigations have consistently shown that the maintenance costs account for around 40% to 60% of the total life-cycle costs of a software system.
Other investigations (see Section 2.6.7) have shown that up to 60% of software maintenance
costs can be attributed to activities other than performing the actual changes which are needed
to comply with the set of modification requests submitted for a software system. In these costs,
activities such as maintenance management, negotiation, making priorities, change planning
(e.g. work breakdown structure, persons), and system understanding are included.
Software is becoming more and more important in our society. Producing software is a time-consuming task, and the persons who produce it are highly specialized. Even today, it is difficult to meet the demand for software professionals, and this problem will increase in the future. Reducing the time and costs of producing and maintaining software is therefore a key issue for meeting the increasing demands for software. In this thesis, we argue that documentation should be used actively to reduce the system understanding time of software maintenance. Technological support is needed to help the maintainer navigate in the large base of information which the documentation comprises.
This experiment tries to provide an answer to whether documentation really influences the productivity of software maintainers, or whether the myth of documentation as a bureaucratic nuisance persists. Our belief is that reducing documentation update costs in the short run will reduce productivity in the long run.

3.1.2 Overview of experiment


34 undergraduate students volunteered to participate in the experiment. The students had taken courses in object-oriented design and C++. In the rest of this chapter we call these students experiment subjects (or only subjects).
The subjects were partitioned into two categories. The categories were designed with similar mean and variation on a skill attribute relevant to the experiment. During the experiment the subjects in the two categories had the following information available:
Subjects in category A had only the source code (ca. 2.7 KSLOC(1) C++) of the system available. The input and generated output for a small example was also available.
Subjects in category B had the same information as those in category A. In addition, category B subjects had the following documentation for the system available: the requirements specification, design document, test report, and user manual.
1. KSLOC = Thousand Source Lines of Code, i.e. source code excluding comments.


All subjects were presented with the same modification request, calling for a number of
changes in the given software system. An oral presentation of the system functionality was
given before the modification request was presented. A demonstration of the system was also
given. The subjects were allowed to browse the handed out information during the presentation
and demonstration.
The subjects were asked to record the effort they spent on the different tasks during the experiment. For each experiment subject, the reported effort and changes made were analysed. From
this analysis, we were able to infer a number of interesting observations about our theory.

3.1.3 Related work


The works of [Littman et al., 1987], [Von Mayrhauser and Vans, 1994], and [Wilde and Huitt, 1992] inspired us to investigate the possibilities of specifying a framework for managing and using documentation in a controlled manner during software maintenance, in order to reduce the costs attributed to system understanding and change planning. The observations of Littman et al. concluded that when programs are small, maintainers are most productive when they gain a thorough understanding of the whole program. In their investigation, they found that this was most commonly done by maintainers reading sequentially through the program text. For larger programs, their investigation concluded that the approach used for small programs could not be efficiently used. Techniques and tools for extracting information relevant to the problem at hand from the program were necessary. The investigation by Von Mayrhauser and Vans also concluded that
"(1) Programmers use a multi-level approach to understand, frequently switching between program, situation, and domain (top-down) models. Effective understanding of large-scale code needs significant domain information. (2) Maintenance activities can be described by a distinct small set of cognition processes. These aggregate into higher level processes, each with their goals and information needs. (3) Current practice of documentation and coding does not encourage efficient understanding as it compartmentalizes knowledge by type of document and rarely provides the cross references that are needed to support programmers' cognitive needs."
Wilde and Huitt report that the concepts of inheritance and polymorphism, while introducing great strengths in object-oriented languages, also introduce difficulties in the analysis and understanding of object-oriented programs. The conclusion provided by the authors is that for maintenance of object-oriented programs, browsing tools are needed to manage and control the dynamic relations introduced in object-oriented programs.

3.1.4 Chapter outline


The rest of this chapter is organized as follows: Section 3.2 presents the hypotheses investigated in this experiment. In Section 3.3 we discuss the design of the experiment and define the measures needed for the statistical analyses. Section 3.4 gives a time schedule of the experiment. Section 3.5 defines the measures used to assess the quality of the changes made by the subjects and presents the data extracted from the experiment, while Section 3.6 presents a statistical analysis of the extracted data and a discussion of the answers given by the subjects in a debriefing schema at the end of the experiment. Finally, in Section 3.7 we summarize the chapter and discuss further work.
Appendix D provides the results of a preliminary data extraction and remarks about the results of each experiment subject, as well as responses to a debriefing held after the experiment.


3.2 Experimental hypotheses


In our experiment we want to measure several differences between two categories of maintainers. The categories are as follows:
Category A: Subjects in this category have only the source code available in the experiment.
Category B: Subjects in this category have all available documentation, as well as the source code, available. The documentation consists of the requirements specification, the design document, the user manual, and a test report.
The experiment is conducted without the aid of computers. To compensate for this, both categories had available the input and resulting output of the program they were asked to change.
The hypotheses we want to investigate now follow. Each hypothesis is first stated, followed by a discussion of its rationale. The measures described here are formally defined in Section 3.3.3.
H1: Maintainers in category B will on average use less effort to understand how to fulfil a modification request than maintainers in category A.
Discussion of H1: When a modification request is processed, the maintainer must identify the parts of the system which provide the functionality that is requested to be changed. When code is the only available source of information, this is difficult. The maintainer uses the available information to build a conceptual model of the application's functionality. In this case the names of the functions and variables are the only tags the maintainer can use to retrieve information to build this mental conceptual model.
When documentation is available, the meaning of the functions and variables is spelt out clearly, and models showing the application's structure may also be available. Information in the requirements document shows the current state of the application. The user documentation tells the maintainer how the functionality is presented to the user. This additional body of information allows the maintainer to build a conceptual model in less time than if code alone (category A) is the substance of information, and hence the total effort to comply with the modification request should be reduced in category B.
H2: Under the time restrictions imposed by the experiment, maintainers in category B will gain a more thorough understanding and provide more detailed specifications of the solution to the modification request than maintainers in category A. That is, the scores obtained in the experiment should be higher for subjects in category B when compared to category A.
Discussion of H2: The time available for the experiment subjects is limited compared to what would have been needed to comply with the modification request in a professional setting. We hypothesize that maintainers in category B, given that H1 holds, will have a better understanding and more time to specify the pseudo code for the needed changes, and hence be able to specify their proposed changes better.
H3: The score obtained by a subject in the experiment is expected to be positively correlated with the subject's skill, for both categories.
Discussion of H3: Subjects who are confident with OO principles and the C++ language are expected to obtain better results than inexperienced subjects. We do, however, assume that, on average, subjects in category B will obtain higher experiment scores than those in category A.


3.3 Detailed experiment design


This section presents the potential impact of several variables on the experiment, and discusses
how their impact on the experiment result is controlled by the design. We first describe the
importance of a good experiment design in Section 3.3.1. In Section 3.3.2 we present a classification for experiment variables after [Kish, 1987]. In Section 3.3.3 we describe the experiment variables and characterize them according to Kish. In Section 3.3.4 we give an overview
of how the subjects participating in the experiment were selected. Finally, Section 3.3.5
describes how the impact of a particular variable (the skill variable) was controlled by partitioning the subjects into similar categories.

3.3.1 Introduction
The selection and assessment of subjects prior to an experiment is important. The partitioning of subjects into categories is also important. The skill and experience of the subjects are important factors, both for the success of the experiment and for the confidence we can place in the results of the experiment.
When we can control the critical factors influencing the behaviour of interest in the experiment, the variance between the categories can be reduced. However, if the subjects are too well matched on the critical factors, e.g. all have exactly the same skills, the generalization of the experimental results to populations with other characteristics may be questioned. See for example Chapter 14 in [Keppel, 1991] for a good discussion of this.
To get confidence in the results of an experiment, the experiment design should be detailed enough to be replicated. Researchers often replicate their own experiments (called internal replication) to confirm their own earlier findings in the same settings, and try to replicate the experiments of other researchers (called external replication) to confirm the findings of others. Replication of experiments is a common undertaking in fields such as medicine and chemistry, but few reports have been published on experiment replication in software engineering. For a small experiment like this one, which does not use a large subject mass, control over critical factors regarding the subjects is needed for the experiment to be replicable. A good example of replication failure is described in [Daly et al., 1994b], where the results of the replication experiment totally contradicted the results of the original experiment. An analysis of why the replicated experiment did not support the findings of the original experiment showed that in one of the two categories, all 8 subjects were rated among the 12 best, while in the second category only 4 of 9 subjects were rated among the top 12. The subject rating was computed after the experiment was finished, by comparing the results of a programming test which was run prior to the experiment. Daly et al. followed the details of the original experiment design. However, the original experiment also lacked control over its subjects. When a replication failure like this happens, not only the results of the replication experiment, but also those of the original experiment, are doubted.

3.3.2 Terminology for classification of experiment variables


We use the terminology from [Kish, 1987] to identify and define the variables in the experiment. Kish distinguishes four classes of experiment variables: the explanatory variables, which are divided into predictor and predictand variables, the controlled variables, the disturbing variables, and the randomized variables. The classification is defined by Kish as:
Explanatory variables class (E): These denote the variables that embody the aims of the experiment design, among which the researcher wishes to find and measure some specific relationships. The explanatory variables include two distinct sets:
Predictor variables class (X), which comprise the sought causes of the relationships. These are the stimuli variables of the experiment, which are intentionally varied by the experiment design.
Predictand variables class (Y), which describe the predicted effects. In other words, these are the measured response variables of the experiment.
Explanatory variables are designated on the basis of substantive, scientific theories, and knowledge and insight into the field of study.
Other potentially extraneous sources of variation may exist. These must be recognized and separated from the explanatory variables. Kish identifies:
Controlled variables class (C): The variables that can be controlled adequately by the experiment design. The control is enforced either by the design of the selection process or by estimation techniques in the statistical analysis. The choices depend on foresight and knowledge. The techniques for controlling the extraneous variables are aimed at decreasing the random errors (class R) or decreasing the bias of the disturbing variables (class D).
Disturbing variables class (D) are uncontrolled extraneous variables which may be confounded with the explanatory variables (class E). Failure to remove all of these D variables either into class C of controlled variables or into class R of randomized variables is the primary disadvantage of non-experimental designs such as surveys and investigations. Some of the techniques to control disturbing variables are, for example, stratification(1) and blocking(2).
Randomized variables class (R) are uncontrolled extraneous variables that are treated as random errors. In ideal experiments they are actually operationally randomized, but in surveys and investigations they are only assumed to be randomized. Randomization may be seen as a form of experimental control, but distinct from the forms used for class C variables.
Kish further argues that efficient designs should place as much as possible of the extraneous variables into class C. This is, however, limited by feasibility, practicality, and economic concerns. One should nevertheless strive to place all class D variables into class C. The rest of this section describes the identification of the variables of consideration in the experiment, and how these are classified and controlled according to the schema above.
1. Stratification means to distribute subjects into strata based on a classification factor. If the classification factor is skill in C++ programming, 100 subjects could be distributed into, say, 5 strata, where the 20 with the lowest skill were placed into one stratum, and so on.
2. Blocking means to use the strata when grouping the subjects into categories. Instead of choosing homogeneous groups solely based on a test score, we place an equal number of subjects from each stratum in every category. Each category is now blocked with respect to ability level.
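As a minimal sketch of the stratification and blocking procedure described in these footnotes (the subject data, the number of strata, and the function itself are invented for illustration and are not part of the experiment material), the following C++ fragment sorts subjects by their test score, cuts the ordered list into strata, and deals the members of each stratum alternately into the two categories:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct Subject {
    std::string id;
    int skillScore;  // score from the C++ pre-test (hypothetical values)
};

// Stratify by score, then block: each stratum contributes members to both
// categories, so the categories end up with similar skill profiles.
void partitionIntoCategories(std::vector<Subject> subjects, std::size_t strata,
                             std::vector<Subject>& categoryA,
                             std::vector<Subject>& categoryB) {
    std::sort(subjects.begin(), subjects.end(),
              [](const Subject& a, const Subject& b) { return a.skillScore < b.skillScore; });
    std::size_t perStratum = subjects.size() / strata;  // remainder joins the last stratum
    for (std::size_t s = 0; s < strata; ++s) {
        std::size_t begin = s * perStratum;
        std::size_t end = (s + 1 == strata) ? subjects.size() : begin + perStratum;
        for (std::size_t i = begin; i < end; ++i) {
            // Alternate within the stratum so both categories draw from every ability level.
            ((i - begin) % 2 == 0 ? categoryA : categoryB).push_back(subjects[i]);
        }
    }
}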


3.3.3 Definition of experiment variables


We analysed the experiment objectives, the subjects, and the environment in which the experiment was to be performed. In Section 3.3.3.1 we identify the variables, and in Section 3.3.3.2
we show the final classification of the variables after controlling the extraneous ones.

3.3.3.1 Identification of variables


We found the following variables to be relevant to our experiment. We classify the variables after Kish.
Class E:
Predictor variables (class X):
1. Documentation: Whether documentation is available to the experiment subjects. Possible values of this variable are none, outdated, or updated.
Predictand variables (class Y): The response variables are different measurements of time, as well as two scores (Mu and Mpc). We split up the total time used into three phases, as this gives us more explanatory power on the response compared to the stimuli. As more than one change request will be proposed in the experiment, these variables will be indexed with a reference to the change request in the analysis of the experiment. All time variables are measured in minutes.
2. Time_U_D: Time_U_D measures the time spent on reading the system documentation when trying to understand a modification request and identify where to make the changes, and what changes to make. (Measured on a ratio scale.)
3. Time_U_C: Time_U_C measures the time spent on reading the source code when trying to understand a modification request and identify where to make the changes, and what changes to make. (Measured on a ratio scale.)
4. Time_C: Time_C measures the time used to implement the changes related to a particular change request. (Measured on a ratio scale.)
5. Time_D: Time_D measures the time used to update the documentation so that it is synchronized with the changes made to the implementation. (Measured on a ratio scale.)
6. Mu: Mu measures how much the user has understood. A thorough discussion and definition of this variable is given in Section 3.5.1. (Measured on an ordinal scale.)
7. Mpc: Mpc measures the degree of detail to which the user has implemented the changes needed to comply with the modification request. A thorough discussion and definition of this variable is given in Section 3.5.1. (Measured on an ordinal scale.)
Class C:
8. Skill: The C++ skill of the individual experiment subjects. The skill is measured

based on the result of a test of C++ reading and writing skills prior to the experiment
start. The categories in the experiment are partitioned to be homogeneous with respect
to the test score. Each category is blocked into three ability levels. The Skill variable
is therefore a controlled variable in the experiment. We describe how the Skill variable is controlled in the partitioning process in Section 3.3.5. (Measured on a ratio
scale.)



9. Dom_Know: This variable is a subjective measure of the domain knowledge of the

subjects in the experiment. We argue that this variable is controlled, as none of the
participating subjects have been involved in the Programming Methodology project,
and none of them has developed similar applications before. We describe the subject
selection process in Section 3.3.4.
Class D:
10. Experience: The subjects' own view of their experience with object-orientation and C++. The subjects were asked to assess their own experience by stating their C++ familiarity and how many lines of code they had written in C++. A subjective assessment such as this may disturb the experiment results if used as a criterion for partitioning the subjects into groups. We would like to move this variable to class R, as controlling it is not possible.
11. Disc_pre: The number of other subjects each subject has discussed the organization of

the application after the briefing.


12. Disc_in: The number of times each subject has consulted another subject during the

modification phase of the experiment.


Class R:
13. Error_post: The number of subjects not finishing the experiment for each category.

3.3.3.2 Final classification of variables


We minimized the effect of class D and R variables on the experiment as described below. The resulting variable classification is shown in Table 13.
TABLE 13. Variables in the experiment
Class E - Predictor (X):    Documentation
Class E - Predictand (Y):   Time_U_D, Time_U_C, Time_C, Time_D, Mu, Mpc
Class C:                    Skill, Dom_Know, Disc_pre, Disc_in
Class D:                    Experience
Class R:                    Error_post
The Disc_pre and Disc_in variables count the number of times the subjects have discussed the assignment and its solution. The impact of these variables on the experiment is controlled (minimized to zero) by the fact that the subjects were not allowed to discuss with each other during the experiment, and were physically distributed in a large auditorium to avoid this.
In Section 3.3.5 we show that the Experience variable does not correlate with the Skill variable. We conclude that the Experience variable, assigned subjectively by the subjects, can therefore be treated as a class R variable.


3.3.3.3 Other factors in a real setting


If this had not been an experiment, but real maintenance in a real environment, several other factors could influence the maintenance productivity. We present some of them here, to show that precautions may have to be taken if the variables identified in this experiment were to be used for measuring real maintenance tasks:
Organization rigidity: If support from several layers of the organization must be gathered in order to perform a necessary change, the productivity of the maintainer is reduced. The maintainer must act as a bureaucrat in his organization, rather than utilizing his creative skills in solving problems.
Workspace organization: If the maintainers are well-equipped with technical facilities, and have a comfortable office with little noise and a good climate, the chances for high maintenance productivity increase.
Stress factors: If maintainers are buffered by a help desk from routine questions from customers regarding application errors, and management gives maintainers the responsibility to make decisions of their own, the work process is not stressed, and high productivity is possible.
Experience of maintainer: Maintenance efficiency can be high when the maintainer has experience from previous maintenance projects, experience with the application to be maintained, and/or experience in the domain for which the application to be maintained is developed.
Skills of the maintainer: Several professional skill factors enhance the productivity of the maintainer, if present. These include the ability to read code and conceptually make an abstraction of the code's actions, the ability to express solutions to problems in a programming language, and skill in the development methods used for the application.
Tool support: Different types of development tools help the maintainer to control the application under maintenance. These include tools for configuration control and defect tracking, tools for build support, and computer aided software development tools to automate parts of the development process. Our tool for software system understanding is also included in this list of tools.
Documentation availability: Both the range of available documents and their state (i.e. outdated or up-to-date) influence the productivity of the maintainer.

3.3.4 Identification of experiment subjects.


Hypothesis H6 requires that the skill and experience of the subjects are on the same level as the
students in the Programming Methodology course. If the subjects have more experience than
those students, the confidence in the results of testing H6 might be questioned. For the other
hypotheses, this requirement is not relevant.
We have two possible populations to choose our experiment subjects from. Although not required since H6 is not further investigated, we argue below that the C++ programming experience of these possible subjects is not different from that of the students who took the Programming Methodology course and developed the system used for making changes in this experiment.
The first population is 4th grade students from the Department of Computer Systems and Telematics at NTH. These students are one grade above those that developed the system


used as experiment baseline. The Programming Methodology course is run in the second year
at NTH. The third year has no project where C++ programming is required. One lab-course
requires the use of lex, yacc and the C language. The assignment in the lab-course is to
develop a call-graph generator for C under X/Motif.
This indicates that students from the fourth grade at NTH do not have much more experience
in object-orientation and C++ than the second year students who developed the application
which is used in the experiment. C is a language closely related to C++, and experience using
C may be positive for using C++ compared to experience in only using Pascal.
The second population is from the Department of Informatics at the Sør-Trøndelag College. The students have the same training in object-orientation and C++ as the students from NTH. They have all had a course in object-orientation and C++, and have participated in a small project where a system was implemented using these techniques. The course at the Sør-Trøndelag College spans two semesters, while the one at NTH spans only one semester. However, during the first semester of the two-semester course, the focus is on the C heritage of C++. This is outweighed by C experience for the NTH students in other courses, making the total experience in C/C++ for both populations similar.
We conclude that the students from both of these populations have similar training in object-orientation and C++, and can be treated as one homogeneous population.
An invitation to participate in the experiment was posted to these students, ca. 200 students in total. The invitation was sent out in several rounds. The initial compensation offered was NOK 100 and pizza. This compensation was too small, as only 10 students responded to the first invitation. One invitation round clashed with a major social arrangement for the students in Trondheim, another with an examination period.
Finally, the offered compensation was increased to NOK 250 and free pizza after the experiment. 34 students accepted the final invitation to volunteer as experiment subjects. This was perceived as high enough to have sufficient confidence in the outcome of the experiment. The experiment was performed on February 8th, 1996.

3.3.5 Controlling the impact of the Skill variable


The experiment design requires that the subjects were partitioned into two different categories,
category A and category B. To ensure that these categories had similar distribution with respect
to the skill variable, we decided to use a pre-test to assess the skills of the subjects.

3.3.5.1 The C++ pre-test


Before the experiment could start properly, all subjects answered a C++ pre-test. The purpose of this test was to use the results to control the partitioning of the 34 participating subjects into the categories. The two experiment categories should have a similar distribution with respect to the subjects' skills in C++, and the pre-test was used as a control measure for the subjects' skills. The pre-test was designed to test the understanding of the object-oriented concepts in C++. The C heritage of C++ was toned down in the test. The reason for this was that, for understanding the baseline application used in the experiment, knowledge of classes, inheritance, and dynamic binding would definitely be an advantage. The pre-test was organized into 6 parts, and the purpose of each part was to test knowledge of the following:
1. Dynamic binding (15%)
2. The definition of class compared to struct (2%)
3. The use of constructors (9%)
4. Member function overloading (16%)
5. Understanding of class definitions and encapsulation (26%)
6. Code reading and error detection (32%)
The contribution of each part to the total test size is indicated as a percentage in the list above. These numbers were calibrated by measuring the time spent on each part by six persons with different C++ skills, prior to the experiment.
The pre-test and calibration are included in Appendix C in this thesis.

3.3.5.2 Partitioning of subjects into categories



FIGURE 11. Pre-test score distribution for all subjects

Figure 11 shows the experiment subjects' results from the pre-test. As seen in the figure, the measured skills had a rather wide distribution. This was as expected. We want to have similar categories with respect to C++ skill. Since we do not have unlimited resources for the experiment, we need to keep the categories relatively small. Since the number of subjects in our experiment is only 34, the risk of ending up with two very dissimilar categories would have been high if the subjects had been assigned to a category at random. The decision that the categories should be partitioned based on the results from the pre-test instead of purely at random was therefore correct. We used the following algorithm to divide the subjects into two categories (a code sketch of the procedure is given after the list):
1. Determine the number of categories (denoted c) to partition into. In this experiment we want to partition the subjects into two categories, thus c is set to 2.
2. Stratify the subjects, based on the test score results, into s = 4 blocks, where the number of subjects in each block is the same. Since 34/4 is not an integer, two of the subjects were temporarily removed from the partition process. These were subjects 47 and 60, who were randomly picked. The subjects were placed into four blocks as shown in Table 14. The column to the left in the table shows the strata of the blocks, and the numbers in the other columns are the subject numbers.
3. For each of the s = 4 blocks:
   - Randomly choose one of the subjects in the selected block, and assign this subject to one of the categories which contains the least number of subjects.
   - Repeat the previous step until all subjects in the selected block are distributed to a category.
4. The remaining subjects who were temporarily removed are randomly assigned to the categories, such that all categories have equal size.
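The procedure can be expressed compactly in code. The sketch below is a minimal illustration of the algorithm above, assuming the pre-test results are available as a mapping from subject number to score; the subject numbers and scores in the example are placeholders, not the real experiment data.

```python
import random

def partition_subjects(scores, n_categories=2, n_strata=4, seed=None):
    """Stratify subjects on their pre-test score, then deal each stratum
    out to the categories so that all categories end up the same size."""
    rng = random.Random(seed)
    subjects = sorted(scores, key=lambda s: scores[s])

    # Step 2: temporarily set aside randomly chosen subjects until the
    # remainder divides evenly into the strata (34 subjects -> 2 removed).
    removed = []
    while len(subjects) % n_strata:
        removed.append(subjects.pop(rng.randrange(len(subjects))))

    size = len(subjects) // n_strata
    strata = [subjects[i * size:(i + 1) * size] for i in range(n_strata)]

    # Step 3: within each stratum, assign subjects in random order, always
    # filling one of the currently smallest categories.
    categories = [[] for _ in range(n_categories)]
    for stratum in strata:
        for subject in rng.sample(stratum, len(stratum)):
            min(categories, key=len).append(subject)

    # Step 4: the temporarily removed subjects are assigned so that all
    # categories end up with equal size.
    for subject in removed:
        min(categories, key=len).append(subject)
    return categories

# Placeholder scores for illustration only.
scores = {s: round(random.uniform(6, 96), 1) for s in range(1, 35)}
category_a, category_b = partition_subjects(scores, seed=42)
```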
TABLE 14. Categorization of subjects
Strata          Assignments (a)
[6.4 - 28.4>    B 10   B 14   A 19   B 15   A 37   A 27   B 74   A 11
[28.4 - 45>     B 24   B 64   B 4    A 1    A 55   B 80   A 33   A 83
[45 - 62>       B 42   A 45   A 71   A 13   A 77   B 17   B 66   B 50
[62 - 95.8>     B 2    A 49   A 25   B 84   B 29   B 18   A 69   A 75
a. Two additional subjects, 47 and 60, were added to category A and B, respectively, at the beginning of the experiment. Subject 47 had a test score of 95.0, while subject 60 had a score of 74.8.

Table 14 shows how the subjects in each of the four strata were assigned to category A and B. Figure 12 shows the distribution of the test scores for these categories. The average test scores are 48.5 for category A, and 48.0 for category B. The standard deviations for the categories are 23.8 and 25.6. The average test score and standard deviation for the whole subject group are 48.2 and 24.0.
In addition, 15 of the 17 subjects in category A reported having used C++ during the last six months. For category B, all 17 reported this as true. The subjects were asked to provide an estimate of how many C++ LOC they had written prior to the experiment. The given estimates were on average 4265 LOC for category A (SD=3231), and 4559 LOC for category B (SD=3495). The total average was 4412 (SD=3269).
The subjects were asked to describe their familiarity with C++. The allowed answers were poor, mediocre, on the average, experienced, and expert. The distribution of the answers was as shown in Table 15.
TABLE 15. C++ familiarity - self assessment
Poor

Mediocre

On the average

Experienced

Expert

Category A

Category B

14

To assess our use of a C++ test to categorize the subjects, a correlation measure was computed on the test score, the estimated LOC given by the subjects, and the self-assessed C++ familiarity level.



FIGURE 12. Distribution of test scores in categories

Using Spearman rank correlation, the data showed that the subjects' reported LOC estimates correlated positively with the reported level of C++ familiarity (r=0.4819, N=34, p=0.04), while the test score result did not correlate with either of the other two. This meant that the self-assessed skill information lacked objectivity compared to the test score results. This further confirmed to us that using the results from a C++ test to categorize the subjects was necessary to obtain homogeneous groups. If we had based the categorization on the reported LOC estimates and the self-assessed C++ familiarity level, we would not have been able to control the Skill variable.

3.4 Time schedule of experiment


The experiment is scheduled for a total duration of four and a half hours per experiment subject. All subjects execute the experiment concurrently. The phases of the experiment are outlined in Table 16. We briefly describe the purpose of each phase. A more thorough explanation of the phases is given below.
TABLE 16. Experiment time schedule
Phase:  Overview & Pre-test | Break | Briefing (Pres. / Demo.) | Break | Modification | Break | Debriefing
Mins.:           60         |  30   |         20 / 10          |       |     120      |       |     20

Prior to the overview and pre-test, the students are registered. The name of the student is not
registered, but each student is randomly assigned a unique number (1-99) which anonymously
identifies him/her during the rest of the experiment.
Orientation & Pre-test: Our experiment has two different treatment conditions. The experi-

ment will be run 17 times for each of these. We term the subject group consisting of subjects
having the same treatment conditions as a category. We want these categories to be homogeneous. This is achieved by measuring the score for each subject on a set of questions

64

Assessing the Role of Documentation in Software Maintenance

regarding C++, and then distribute subjects to categories based on this score as described in
Section 3.3.5.2. The first five minutes of the pre-test is devoted to orientating the students
about the time schedule of the experiment.
First break: This break will be used to partition the subjects into homogeneous categories.

This means that we must correct the pre-tests, and compute a test score. The algorithm for
partitioning the subjects into categories given in Section 3.3.5.2 is used. At the end of the
break, the students are informed of which category they have been appointed to.
Briefing: The presentation will explain the purpose of the system which the students will

make modifications to. The functionality is presented by giving the students an overview of
the requirements. The demonstration will show how the tool is used in its current state1.
During the briefing, all subjects will have the source code available. For subjects in category B, the system documentation will be available during the briefing (as well as during
the rest of the experiment).
Second break: This short break allows students to find the places they are appointed to for

the rest of the experiment. The students are informed that they are not allowed to discuss
impressions from the briefing with any other student.
Modification: Each student in both categories will modify the system in order to fulfil two

modification requests handed out in writings. For each modification request the following
three actions are performed:
1. Each student should record the time used to understand the modification request, i.e. the

time used to understand the system and to find out where changes in the system must be
made. The student should note on the experiment schema what information led to understanding the system and the needed chances. Subjects in category B should distinguish
whether the information helping to build the conceptual mental model was drawn from
the documentation (Time_U_D) or from the source code (Time_U_C). We identify the
time used for this as the understanding time for a modification request.
2. Furthermore, the changes needed to satisfy the change request should be implemented in

C++-like pseudo code. The effort spent on this should be recorded by the student. This
effort is identified as the code modification time for a change request. (I.e. the Time_C
variable)
3. For category B which have documentation available, the time used to update the docu-

mentation should be recorded as well. This last issue is identified as the documentation
update time for a change request. (I.e. the Time_D variable.)
Third break: Some students will finish the modification phase earlier than others. The short

break will give the late finishers a break before the experiment debriefing.
Debriefing: Each subject is asked to fill in a schema with general questions on how the

experiment is perceived, and particular questions regarding the organization of the experiment.
1. The current state means the state without the modifications in place.


3.5 Measurement extraction from experiment data


This section explains what measures were chosen to extract information relevant to testing our
hypotheses. Section 3.5.1 defines the measures and provides a discussion of the selected measures with regard to measurement theory. Section 3.5.3 presents the data extracted from subjects
in category A in tabular form, while Section 3.5.4 presents the same data from subjects in category B.

3.5.1 Evaluation criteria for analysis


3.5.1.1 A note on measurement theory regarding our measure selection
Selecting the set of measures for extracting the data sought in an experiment is important. The selected measures should directly measure the objectives of the experiment, or in some way be transformed to provide a value for comparison. [Fenton, 1991] uses the terminology direct and indirect measure for this. On the other hand, the selected measures should preferably be on an interval (or better) scale, as the statistical tools for analysing such measures are less sensitive to experimental errors than measures taken on a nominal or ordinal scale (e.g. [Briand et al., 1995a]). Table 17, originally from [Siegel and Castellan, 1988] (reproduced on p. 36 in [Fenton, 1991] and partly in [Briand et al., 1995a]), shows the types of operations or statistical analyses which can sensibly be applied to particular types of measures.
TABLE 17. Siegel's summary of measurement scales and relevant statistics
Scale      Defining relations                       Examples of appropriate statistics            Appropriate statistical tests
Nominal    (1) Equivalence                          Mode, Frequency, Contingency coefficient      Non-parametric statistical tests
Ordinal    (1) Equivalence                          Median, Percentile, Spearman rs,              Non-parametric statistical tests
           (2) Greater than                         Kendall's tau, Kendall's W
Interval   (1) Equivalence                          Mean, Standard deviation,                     Non-parametric and parametric
           (2) Greater than                         Pearson product-moment corr.,                 statistical tests
           (3) Known ratio of any two intervals     Multiple product-moment corr.
Ratio      (1) Equivalence                          Geometric mean,                               Non-parametric and parametric
           (2) Greater than                         Coefficient of variation                      statistical tests
           (3) Known ratio of any two intervals
           (4) Known ratio of any two scales

[Briand et al., 1995a] argues that these recommendations should be taken with a grain of salt,
as simulations show that the use of parametric statistics may be applicable in a larger number
of circumstances than was originally thought, as the t-test or correlation coefficients are not
affected by non-linear transformations, if they are not extreme. Particularly, [Briand et al.,
1995a] outlines a procedure in which ordinal data may be transformed to an interval scale by


ranking and scaling the ranks by using domain knowledge of the ordinal values. Then parametric statistics, such as the t-test, can be used. The reason for this is that the minimum sample size for a certain statistical power is 20% less for parametric tests compared to non-parametric tests; hence the costs of an experiment may be reduced by using parametric statistics.
In this experiment, we want to investigate differences in productivity between two subject groups, where all subjects make the same changes to a software system, but where the two groups have different prerequisites for making the changes. With this in mind, we would like to define our experiment measures in such a way that they are either on an interval scale, or on an ordinal scale which may be transformed to an interval scale if the results of the non-parametric statistics are unsatisfactory, given the number of experiment subjects.
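As an illustration of such a transformation, the sketch below maps ordinal Mu levels onto an assumed interval-like scale. The weights are hypothetical and would in practice have to be justified by domain knowledge of the ordinal values, as suggested by [Briand et al., 1995a]; this is not the exact procedure from that paper.

```python
def to_interval(ordinal_values, level_weights):
    """Map ordinal scores onto assumed interval-scale values so that
    parametric statistics (e.g. the t-test) can be applied."""
    return [level_weights[v] for v in ordinal_values]

# Hypothetical weights: the step from "no understanding" (0) to "scattered
# understanding" (1) is assumed smaller than the step from "partial plan" (3)
# to "detailed understanding" (4).
weights = {0: 0.0, 1: 1.0, 2: 2.5, 3: 4.5, 4: 7.0}
print(to_interval([0, 1, 1, 2, 3, 4], weights))   # -> [0.0, 1.0, 1.0, 2.5, 4.5, 7.0]
```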

3.5.1.2 Measure alternatives


We want to measure the effort spent on understanding the system, and the effort spent on
implementing the changes asked for. These are obviously at least on an interval scale. The
measures used for this are those reported by the subjects in the experiment forms, Time_*, discussed in Section 3.3.3. In addition, we want to measure how much the subjects have understood, and how much the subjects have implemented.
A first attempt at defining this was to define the two measures Mu and Mpc as
Mu: The number of correctly proposed changes to the system divided by the needed number of changes to the system, and
Mpc: The average degree (in per cent) of proposed lines of pseudo code for the correctly proposed changes. A subject scoring 100% on this measure would have provided pseudo code for all details in all proposed changes.
We analysed the system on which changes were to be made by the subjects. We found that the changes made by the project group, given the same modification request as used in the experiment, were 8 member function declarations, 7 member function definitions, and one change to an existing member function. The number of LOC added for this modification request by the original project group was 108.
Problems related to the proposed measures arose when we started to analyse the data from the experiment. We encountered two types of problems:
1. The subjects' choice of solution. If all subjects had chosen the same solution to comply with the modification request, the counting implied by the Mu measure would be easy. In particular, this would be true if all chose the solution provided by the original development group. When analysing the experiment forms we found at least three different ways of complying with the modification request. In some situations, the same logical solution was chosen, but the actual changes were made very differently. Together, this made it difficult for us both to reliably count the actual number of changes done, and to count the number of changes needed for the different solution alternatives.
2. How to count the pseudo code. We saw several ways of expressing changes to the system. Some parts of the functionality were specified very close to actual code which could pass a compiler. Some were specified using a typical pseudo code language with control structures and structured English. Yet other parts were specified using high level English, explaining what to do and how to do it in an informal way. Computing the Mpc measure, and comparing it for the different subjects, would be difficult.


These reported problems with defining a measure on an interval scale convinced us to instead define our measures on an ordinal scale. The redefinition of the two proposed measures on an ordinal scale is given in the section below.

3.5.1.3 Selected measures


The changes specified by the subjects were analysed along two axes:
Measure Mu: How well the subject has understood the system and the changes that need to be made in order to fulfil the modification request. This measure is obtained by analysing the comments and pseudo code which the subject has written down on the experiment schema. The measure on an ordinal scale takes 5 values (0 included):
0 - No understanding of the system and the changes to be made has been shown.
1 - Some scattered understanding has been obtained.
2 - Understanding at a high level of abstraction has been obtained.
3 - A partial plan on how to specify the changes shows good understanding.
4 - A detailed understanding of how to fulfil the modification request is shown.
Measure Mpc: The degree of detail of the pseudo code with which the subject has specified the proposed changes. This measure is obtained by analysing the code written by the subject on the experiment schema. The measure on an ordinal scale takes 5 values (0 included):
0 - No pseudo code.
1 - Very limited pseudo code - the code written does not have any meaning.
2 - Some meaningful pseudo code.
3 - Good pseudo code for the necessary changes, but details are lacking.
4 - Good pseudo code for the necessary changes, with sufficient details.
The reason for this separation into two measures is that a subject may have understood the system well, but may not have been able to specify how the changes needed to fulfil the modification request should fit into the existing program. Similarly, a subject may have written very good pseudo code for the changes the subject has deemed necessary, but these changes may be incorrect. We do expect, however, that a high score on one of the measures will be accompanied by a high score on the other.
The data for these two measures, extracted from forms filled in by the subjects during the
experiment, is shown in Section 3.5.3 and Section 3.5.4.

3.5.2 The experiment baseline and modification requests


This section describes the system on which the subjects had to make changes during the experiment, and the modification requests they had to comply with. The two modification requests are identical to those given to the project group which originally developed the system. We do not expect that full compliance with both of these requests will be possible for any subject in the experiment; the first request is the focus of the experiment, while the second is included for completeness. The system on which the subjects make changes is denoted the experiment baseline in the following.

3.5.2.1 The experiment baseline


The functionality of the experiment baseline
The experiment baseline is a system for automatic maintenance of consistency in source code comments in C++ programs, and for navigating between the comments given to certain program elements.
The experiment baseline extracts certain program elements from a C++ program. The extracted program elements are file, class definition, member function declaration, member function definition, function definition, and include statements. For each of these elements, the user is asked to input different types of comments in a uniform interface. Such comments are for example a test status for files, an objective field for (member) functions, and change logs for files and (member) functions. The user may navigate among the comments for the different program elements to obtain an overview of the program. When a session finishes, the program files are updated by inserting dedicated comment fields on the lines prior to the extracted program element. This generated comment field includes both the comments which the user has entered, and comments automatically generated from program analysis done by the experiment baseline. The generated comment includes a check sum field for the relevant program element. The experiment baseline can now detect any changes in the extracted program elements by comparing the old check sum in the comment field with the new check sum generated by parsing. If changes in a program element are found, the user is asked to update the comment already entered for the program element, and add a change log describing the change. The program files are updated with the modified comments, and problems with outdated comments in the source code are prevented.
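The check-sum mechanism can be sketched as follows. The marker format, the hash function, and the function names are assumptions made for illustration; they are not the actual conventions used by the project group.

```python
import hashlib

COMMENT_TAG = "// @checksum:"      # assumed marker inserted above each program element

def checksum(element_source: str) -> str:
    """Check sum of the text of one extracted program element."""
    return hashlib.md5(element_source.encode("utf-8")).hexdigest()[:8]

def element_changed(stored_comment_line: str, element_source: str) -> bool:
    """Compare the check sum stored in the generated comment field with a
    freshly computed one; a mismatch means the element was edited and the
    user should revise its comments and add a change log entry."""
    stored = stored_comment_line.removeprefix(COMMENT_TAG).strip()
    return stored != checksum(element_source)

body = "void Scanner::skipComment() { /* ... */ }"
comment_field = f"{COMMENT_TAG} {checksum(body)}"
print(element_changed(comment_field, body))                  # False
print(element_changed(comment_field, body + " // edited"))   # True
```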
Facts about the experiment baseline
The experiment baseline was developed by a project group of five students in course 45012 Programming Methodology at NTH during the spring of 1995. The experiment baseline was one of 35 solutions to a semester project assignment given in the course. The solutions ranged in C++ size from 1.7kLOC to 8.9kLOC. The solutions were rated for functionality on a scale from 0 to 100 based on a system test. The functionality ratings varied from 0 to 99. The experiment baseline had a LOC count of 4.0kLOC of C++ code, and a functionality rating of 93. Ca. 25% of this code was comments of some sort. We removed the comments from the source code. A set of statistics gathered for all the groups of the course is given in Appendix B.
The reasons for selecting this particular experiment baseline were twofold: (1) A relatively low number of lines of code compared with a high functionality rating. (2) Few changes were made with respect to the two modification requests submitted in the project, compared to other groups with a similar functionality rating.
The development plan for the projects in the course was as shown in Table 18. The system documentation generated by the project groups consisted of a design document, a test report (test plan and test log), and a user manual.
Experiment subjects in category B had all this documentation plus the requirements specification for the semester assignment available during the experiment.



TABLE 18. Development plan for projects
Date   In/Out   What       Description
2.2    Out      Reqs       Requirements specification handed out to students
3.3    In       OOD        Deliver design document for quality assessment by other group
6.3    Out      OOD        Design documents are switched for quality assessment.
10.3   In/Out   Qual       Quality assessments are delivered and handed out to the groups
22.3   In       V0         System documentation and code delivered for system test.
23.3   Out      TestRes.   The system test and test results are handed out.
29.3   In       V1         Revised system documentation and code after changes implied by system test is delivered.
29.3   Out      Req D1     The first modification request is given to the students.
6.4    In       V2         Revised system documentation and code after changes implied by the first modification request is delivered.
7.4    Out      Req D2     The second modification request is given to the students.
27.4   In       V3         Final system documentation and code is delivered.
5.5                        Groups are told whether they passed or failed the assignment.

Both category A and B had the source code generated by the group available as a paper listing (double-sided printed on A4 paper, 2 columns on each side with ca. 100 lines per column). The system documentation and code available to the experiment subjects were based on the V3 delivery, with changes implied by Req D1 and Req D2 removed (1). The source code made available was stripped of comments. The functionality removed from the code totalled 108 lines of code for the first modification request and 15 for the second modification request. Size measures for the documentation and code handed out to the subjects for the experiment are shown in Table 20. The changes made by the student group to the system selected as the experiment baseline are described in Appendix E.
(1) These are the modification requests in the experiment, later termed delta1 and delta2.
TABLE 20. Document measures of experiment baseline (words)
Requirement specification   Design document   Test report   User Manual
5500                        6700              900           800

TABLE 19. Source code measures of the experiment baseline


WMC
LOC

2700

Files

13 .h
13 .cc

DIT

NOC

Classes

17

min

max

med

min

max

med

min

max

med

47

15

Similarly, Table 19 shows some measures of the source code of the experiment baseline. The measures WMC, DIT and NOC are three of six object-oriented measures defined in [Chidamber and Kemerer, 1994]. They are defined as follows:
Weighted Methods per Class (WMC) for a class C with member functions M1, ..., Mn is defined as $WMC = \sum_{i=1}^{n} c_i$, where $c_i$ is the complexity of member function Mi. We have defined the complexity $c_i$ as (length(Mi) div 10) + 1, where length(Mi) is the length of member function Mi, measured in lines of code.
The Depth of Inheritance Tree (DIT) for a class is the inheritance depth of the class. Classes which do not inherit have DIT = 0. If a class inherits from multiple classes, the DIT for the class is the maximum length from the class to the root of the inheritance hierarchy.
The Number of Children (NOC) of a class is the number of classes which directly inherit from the class in the class hierarchy. These three measures are illustrated in the sketch below.
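The sketch below computes the three measures on a toy class model. The class representation and the example hierarchy are assumptions for illustration, while the complexity weight follows the (length(Mi) div 10) + 1 rule defined above.

```python
class ClassInfo:
    def __init__(self, name, bases=(), method_lengths=()):
        self.name = name                            # class name
        self.bases = list(bases)                    # direct base class names
        self.method_lengths = list(method_lengths)  # LOC of each member function

def wmc(cls):
    # Weighted Methods per Class: sum of c_i = (length(M_i) div 10) + 1.
    return sum(length // 10 + 1 for length in cls.method_lengths)

def dit(cls, model):
    # Depth of Inheritance Tree: 0 if the class does not inherit, otherwise
    # 1 + the maximum depth over its direct base classes.
    if not cls.bases:
        return 0
    return 1 + max(dit(model[base], model) for base in cls.bases)

def noc(cls, model):
    # Number of Children: classes that directly inherit from cls.
    return sum(cls.name in other.bases for other in model.values())

# Hypothetical three-class hierarchy, not taken from the experiment baseline.
model = {c.name: c for c in (
    ClassInfo("Element", method_lengths=(12, 3)),
    ClassInfo("FileElement", bases=("Element",), method_lengths=(25,)),
    ClassInfo("FunctionElement", bases=("Element",), method_lengths=(8, 8)),
)}
for c in model.values():
    print(c.name, wmc(c), dit(c, model), noc(c, model))
```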

3.5.2.2 The first modification request, delta 1


The functionality for navigating the comment space in the experiment baseline as described in Section 3.5.2.1 is only usable on-line. For viewing the comments when not in front of the computer, the source code files have to be inspected. The first modification request specifies an extension to the experiment baseline for generating a sequential report of a program which has had comments generated by the experiment baseline. The modification request specifies the format of the report and its contents. The subjects must first understand the specification in the modification request, and then obtain an overview of the system to find out how to extract the information needed in the report.

3.5.2.3 The second modification request, delta 2


The second modification request is easier to implement, as the developers of the experiment baseline were able to realize the functionality specified in about 15 LOC. However, deeper knowledge of specific details is needed to understand the prerequisites for making this change. The functionality asked for in this modification request is to provide information about where any declared (member) function is defined. The definition place may take the values not_defined, defined_in_file xxx, or defined_in_class.

3.5.3 Experiment data from category A


The first column is the subject number, Time_U_C is the time (in minutes) the subject has reported to have used on reading the code, while Time_C is the time reported for implementing the changes. As expected, not much (in fact nothing) was done on the second modification request. The measures therefore all refer to the first modification request. The last two columns are the scores of the measures defined in Section 3.5.1.3.
TABLE 21. Time usage report for delta1, category A
Time_U_C

Time_C

Mu

Mpc

75

45

11

75

45

13

100

20

19

120

25

100

20

27

120

33

120

Subject #



TABLE 21. Time usage report for delta1, category A
Time_U_C

Time_C

Mu

Mpc

37

115

45

105

15

47

90

30

49

120

55

100

20

69

80

40

71

110

10

75

60

60

77

90

30

83

60

60

1640

400

21

19

Subject #

Total

3.5.4 Experiment data from category B


The columns are labelled as in Table 21, with the addition of Time_U_D, which is the reported time used to study the documentation, and Time_D, which is the time used for documentation updates. Three subjects have done something on the second modification request. The numbers for what they achieved there are not shown in the table; thus all numbers reported in the table relate to work on the first modification request. The data from category B is shown in Table 22.
TABLE 22. Time usage report for delta1, category B
Time_U_D

Time_U_C

Time_C

Time_D

Mu

Mpc

50

10

45

15

50

70

10

60

60

14

40

70

10

15

60

60

17

35

35

50

18

30

30

60

24a

40

20

15

29

45

45

30

42

30

30

60

50b

40

30

15

60

20

30

70

64

40

30

50

66c

30

30

30

74

50

60

10

80

60

60

Subject #



TABLE 22. Time usage report for delta1, category B
Time_U_D

Time_U_C

Time_C

Time_D

Mu

Mpc

84

40

45

30

Total

720

715

475

35

33

29

Subject #

a. Has produced something on delta 2 as well, used 40 minutes on that.


b. Has produced something on delta 2 as well, used 50 minutes on that.
c. Has produced something on delta 2 as well, used 25 minutes on that.

Note: We have extracted data from changes implied by modification request delta1 only. We did not expect anyone to finish more than this. The changes made by subjects 24, 50 and 66 for modification request delta2 are not included in any of the scores in Table 21 and Table 22.

3.6 Result analysis


3.6.1 Initial analysis
In Figure 13, the distribution of effort over the measured variables is shown for the two categories. The distribution of the effort is based on the reported time consumption for the two categories. Subjects 24, 50 and 66 used some of their reported effort on delta2. This explains why the total effort consumption for category B is less than that for category A. The data is taken from Table 21 and Table 22, and is summarized in Table 23 for convenience.
TABLE 23. Effort distribution on measured variables
                                 Time_U_D   Time_U_C   Time_C   Time_D   Total time
Before outlier     Category A       N/A        1640       400      N/A        2040
removal            Category B       720         715       475       35        1945
After removing     Category A       N/A        1160       400      N/A        1560
them               Category B       490         465       475       35        1465

An analysis of the material handed in by the subjects reveals that four subjects (4, 10, 15, 80) in category B spent all their time on documentation understanding. Similarly, four subjects (19, 27, 33, 49) in category A are in the same situation. 7 of these 8 subjects are in the two lower strata (see Table 14). An analysis of the notes on the experiment forms received from them shows that they had given up before the experiment ended. We discarded these subjects from the rest of the analysis. The updated values for the variables are shown in the lower part of Table 23. The modified version of Figure 13 after removing the outliers is shown in Figure 14.
When we remove these outliers from the data sets of categories A and B, the average times spent on understanding are 89 and 73 minutes, with corresponding variances of 324 and 381.
This initial analysis tells us the following:
Subjects in category A spent on the average 21.5% more time on system understanding activities than subjects in category B. Subjects in category A spent on the average 74% of the effort on system understanding activities; the corresponding value for category B is 66%.

FIGURE 13. Effort distribution in categories (initial). Category A (total 2040): Time_U_C 80%, Time_C 20%. Category B (total 1945): Time_U_D 37%, Time_U_C 37%, Time_C 24%, Time_D 2%.


FIGURE 14. Effort distribution after removal of outliers. Category A (total 1560): Time_U_C 74%, Time_C 26%. Category B (total 1465): Time_U_D 34%, Time_U_C 32%, Time_C 32%, Time_D 2%.

Subjects in category B spent on the average 27.5% more time on change implementation activities than subjects in category A.
Most of the time (A: 74%, B: 66%) was spent on activities related to understanding the system (i.e. its architecture and functionality), the modification request (i.e. its purpose and scope), and what changes were needed in the system to accommodate the requirements given by the modification request.
When documentation was available (category B), subjects spent on the average about the same amount of time consulting the documentation as they did consulting the code.


In the next section we test the hypotheses we stated earlier in the chapter, to see whether these
findings are statistically significant.

3.6.2 Testing of hypothesis H1


Hypothesis H1 is defined in Section 3.2. The null hypothesis is that the effort used for understanding the system is the same for categories A and B.
We use SPSS to compute an independent samples t-test on the two reduced samples. This shows statistical significance in the reduction of the sample mean at the 0.05 level (t=2.14, critical value = 2.06, p=0.05).
Hypothesis H1 holds under the given conditions. This means that when documentation is available, maintainers in our experiment will use less effort to understand how to fulfil a modification request than maintainers who do not have this documentation available.
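The SPSS computation can be reproduced with any statistics package. The sketch below uses scipy's independent-samples t-test; the two lists are placeholder values standing in for the per-subject understanding times of the reduced samples, not the actual measurements.

```python
from scipy import stats

# Placeholder understanding times in minutes (the real experiment has 13
# remaining subjects per category; shorter illustrative lists are used here).
time_u_category_a = [75, 120, 100, 115, 90, 110, 60, 95]
time_u_category_b = [60, 85, 70, 95, 55, 75, 65, 80]

t_statistic, p_value = stats.ttest_ind(time_u_category_a, time_u_category_b)
print(f"t = {t_statistic:.2f}, two-tailed p = {p_value:.3f}")
```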

3.6.3 Testing of hypothesis H2


Hypothesis H2 is defined in Section 3.2. The null hypothesis is that the level of understanding and the degree of detail of the pseudo code written are equal for both categories.
The measure of the detail of the pseudo code (Mpc) specified by the experiment subjects is on an ordinal scale, as defined in Section 3.5.1. The scale for the understanding measure (Mu) is also ordinal. When discarding the same outliers which were removed when testing hypothesis H1, the values and medians for the two measures are as shown in Table 24 and Table 25. Table 26 shows the same information when the two measures are added. Figure 15 presents histograms of the measures shown in Table 24 to Table 26.
TABLE 24. Scores on the understanding measure (Mu)
              Values                        Total   Median
Category A    0,1,1,1,1,1,1,2,2,2,2,3,4     21      1
Category B    1,2,2,2,2,2,3,3,3,3,3,3,4     32      3
TABLE 25. Scores on the pseudo code measure (Mpc)
              Values                        Total   Median
Category A    0,0,1,1,1,1,1,2,2,2,2,3,3     19      1
Category B    1,1,2,2,2,2,2,2,3,3,3,3,3     29      2
TABLE 26. Scores when adding pseudo code and understanding measure
              Values                        Total   Median
Category A    1,1,2,2,2,2,2,3,4,4,4,6,7     40      2
Category B    2,3,3,4,4,4,5,5,6,6,6,6,7     61      5

We use the non-parametric Mann-Whitney test on Mpc and Mu to test whether H2 holds.

FIGURE 15. Frequencies of measures for experiment evaluation (histograms of Mu, Mpc, and the combined measure, for category A and category B).

3.6.3.1 Mann-Whitney test on Mpc


We use the procedure outlined in [Brase and Brase, 1983] to compute the Mann-Whitney non-parametric test:
1. Arrange the two samples jointly in order of increasing Mpc, and compute the ranks.
2. Let R denote the sum of the ranks for the category with the smallest rank sum.
3. R is approximately normally distributed with mean $\mu_R = n_A (n_A + n_B + 1) / 2$ and standard deviation $\sigma_R = \sqrt{n_A n_B (n_A + n_B + 1) / 12}$, where $n_A$ and $n_B$ are the number of subjects in category A and category B.
4. Critical values for the Mann-Whitney test at the 0.05 level of significance are $c_1$ and $c_2$, given by $c_{1,2} = \mu_R \pm 1.96\,\sigma_R$.
The rank computation for Mpc is shown in Table 27.
TABLE 27. Rank computation for Mpc
Mpc value   Count (category A / category B)   Joint rank positions   Assigned rank
0            2  (2 / 0)                       1-2                    1.5
1            7  (5 / 2)                       3-9                    6
2           10  (4 / 6)                       10-19                  14.5
3            7  (2 / 5)                       20-26                  23
The sum of the ranks is 137 for category A, and 214 for category B. Choosing the smallest sum of ranks gives us the needed R = 137. $\mu_R$ is 175.5, and $\sigma_R$ is 19.5. This gives us the critical values c1 = 175.5 - 1.96 * 19.5 = 137.28, and c2 = 213.72.
Since R < c1, we conclude that the two categories do not have the same distribution according to Mpc. Hence the null hypothesis is rejected for Mpc, and we conclude that, at the 0.05 level of significance, the amount of documentation influences the degree of detail of the pseudo code produced by the subjects.
A check of the computation using the Mann-Whitney test in SPSS 6.1 ([Norusis, 1992]) confirms our result, with an actual two-tailed significance level of 0.0379.
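The normal-approximation procedure above is easy to reproduce. The sketch below computes the tied ranks, the rank sum R, $\mu_R$, $\sigma_R$ and the critical values for the Mpc data of Table 25, and uses scipy only as an independent cross-check analogous to the SPSS run (scipy works with the U statistic and applies a tie correction, so its p-value may differ slightly).

```python
import math
from scipy import stats

mpc_a = [0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3]   # Table 25, category A
mpc_b = [1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]   # Table 25, category B

# Step 1: joint ranking with average ranks for tied values.
pooled = sorted(mpc_a + mpc_b)
rank_of = {value: sum(i + 1 for i, v in enumerate(pooled) if v == value) / pooled.count(value)
           for value in set(pooled)}

# Step 2: rank sum for category A (the smaller rank sum in this data set).
r = sum(rank_of[v] for v in mpc_a)

# Steps 3 and 4: normal approximation and critical values.
n_a, n_b = len(mpc_a), len(mpc_b)
mu_r = n_a * (n_a + n_b + 1) / 2
sigma_r = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
c1, c2 = mu_r - 1.96 * sigma_r, mu_r + 1.96 * sigma_r
print(r, mu_r, sigma_r, round(c1, 2), round(c2, 2))   # 137.0 175.5 19.5 137.28 213.72

# Cross-check with scipy (reports the U statistic rather than the rank sum R).
print(stats.mannwhitneyu(mpc_a, mpc_b, alternative="two-sided"))
```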

3.6.3.2 Mann-Whitney test on Mu


Our null hypothesis is that the level of understanding (Mu) in category A is the same as in category B. Our hypothesis is that Mu is larger for category B.
Using SPSS 6.1 to run the Mann-Whitney test on the Mu data shows that the computed R (denoted W in Table 28) is less than c1, with an actual two-tailed significance level of 0.0139. Hence, the null hypothesis is rejected, and we conclude at the 0.05 level of significance that the amount of documentation influences the level of understanding of the system. The results from running the Mann-Whitney test using SPSS 6.1 are summarized in Table 28.
TABLE 28. Mann-Whitney results from SPSS 6.1
            Mean rank A   Mean rank B   U      W       2-tailed P
Mpc         10.54         16.46         46     137     0.0379
Mu          9.96          17.04         38.5   129.5   0.0139
Mpc + Mu    10.12         16.88         40.5   131.5   0.0217

Hypothesis H2 therefore holds for both Mpc and Mu, and we can conclude that when documentation is used during maintenance, both the level of understanding of the system and the degree of detail of the pseudo code produced during a limited time period are higher than when the documentation is not used.

3.6.4 Checking for correlation among Mpc and Mu


The Spearman rank correlation coefficient between Mpc and Mu in category A is 0.6453 (N=13, p=0.017). In category B, the result is 0.7261 (N=13, p=0.005). These results were obtained using SPSS 6.1. Combined for both categories, the Spearman rank correlation coefficient between Mpc and Mu is 0.7582 (N=26, p=0.000).
This confirms our initial assumption that these two measures would correlate, and hence it would have been sufficient to compute one of them. A scatter plot of Mpc and Mu is shown in Figure 16.
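The correlation check is straightforward to reproduce with scipy. In the sketch below the Mu and Mpc lists have the same value distributions as the category A columns of Table 24 and Table 25, but the per-subject pairing is illustrative only, since the tables do not preserve which Mu score belongs to which Mpc score.

```python
from scipy import stats

# Illustrative pairing only; the multisets match Table 24/25 (category A),
# but the actual per-subject pairs are on the experiment forms.
mu  = [0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 4]
mpc = [0, 1, 0, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3]

rho, p = stats.spearmanr(mu, mpc)
print(f"Spearman r = {rho:.4f} (N={len(mu)}, p = {p:.3f})")
```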
FIGURE 16. Scatterplot of Mpc and Mu for all subjects.

3.6.5 Testing of hypothesis H3


Recall that the criterion used for partitioning the experiment subjects into the two categories was their score on the pre-test. The procedure for this partitioning was described in Section 3.3.5.2. This categorization was based on the assumption that the partitioning procedure would result in two categories with similar mean and similar standard deviation on the skill attribute. Hypothesis H3 defined in Section 3.2 states our assumption that skilled subjects will perform better in the experiment than unskilled subjects.
The null hypothesis we test against is that there is no significant positive correlation between the results obtained in the pre-test and the ones obtained in the experiment.
Table 29 shows the computed Spearman rank correlation coefficients between
1. TR/Mu: Test results and the Mu measure.
2. TR/Mpc: Test results and the Mpc measure.
3. TR/(Mu + Mpc): Test results and the combined (Mu + Mpc) measure.

Table 29 shows some very interesting results. The rank correlation coefficients in the first row imply that we cannot reject the null hypothesis for the group of subjects (A) which had only code available for the experiment. There is a slight correlation between the test results and the experiment scores, but it is small and not significant.
TABLE 29. Spearman ranks between test results and experiment measures
                       TR/Mu               TR/Mpc              TR/(Mu + Mpc)
Category A (N=13)      0.2260 (p=0.458)    0.3020 (p=0.316)    0.2761 (p=0.181)
Category B (N=13)      0.8230 (p=0.001)    0.6182 (p=0.024)    0.7744 (p=0.002)
All subjects (N=26)    0.5311 (p=0.005)    0.4309 (p=0.028)    0.5024 (p=0.009)
However, for the group of subjects (B) which had both documentation and code available during the experiment, there is a strong correlation between the test results and the experiment scores. We conclude that the null hypothesis is rejected for category B, and that hypothesis H3 holds at a 0.05 level of significance. This correlation is so strong that we can draw the same conclusion for all subjects treated as one group.
This result deserves some discussion. The fact that hypotheses H1 and H2 hold implies the following: The aid of having documentation available during system maintenance reduces the time needed to understand the system and the changes implied by a change request (H1). It also gives the maintainer more time and better knowledge, so that s/he can make more detailed changes to the system within a restricted amount of time (H2). The correlations behind hypothesis H3 show that the aid of documentation helps the maintainer to use her/his skills (1) better than if no documentation were available. In fact, if a skilled maintainer in category B were not allowed to utilize the aid of the accompanying documentation, s/he could not expect to do her/his job better than a person with less skill. On the contrary, when this documentation is available, the skills of the maintainer very much determine the productivity of the maintainer.
This has (at least) two direct implications:
1. An organization which is about to employ a maintainer should try to get the best people available. (There is nothing revolutionary about this.)
2. An organization which has hired the best maintainers money can get cannot utilize them in an optimal manner if the systems they are set to maintain are not documented in a satisfactory way. This is at least true in the short run; controlling for this in the long run cannot be done with this experiment design, as the domain/application knowledge variable is kept constant at zero level for this experiment.

3.6.6 Examining debriefing sheets


After the two hour period designated for making the changes, a debriefing schema was handed out for the subjects to fill in. The debriefing schema asked the subjects the following questions:
1. Q1 How satisfied are you with your endeavour in the experiment? Three answer choices were given: Not satisfied, more or less satisfied, and satisfied.
(1) The skills controlled in this experiment are the abilities to understand and make changes to C++ programs.


2. Q2 What could you have done better in the experiment?
3. Q3 Assign a priority to the following statements regarding improving your endeavour in the experiment. 1 means that the statement is the most true one, while 5 means that it is the least true one, based on your perception: I would have performed better in the experiment if ...
a) ......... I had been allowed to use more time.
b) ......... the system on which changes had to be made was smaller.
c) ......... I had better knowledge of the C++ language.
d) ......... I had more documentation available on the system.
e) ......... I had a computer available where I could code my changes.
4. Q4 Please write down any comment you may have about the experiment.

3.6.6.1 Response to the debriefing schema.


Response to Q1.

The subjects (16 out of 17 responded) in category A answered in the following way: 7 were not
satisfied, 7 were more or less satisfied, and 2 were satisfied.
The subjects (16 out of 17) in category B answered as follows: 8 were not satisfied, 6 were
more or less satisfied, and 2 were satisfied.
The results from the experiment suggest that subjects in category B should have been more satisfied with their endeavour than subjects in category A. This is not reflected in the responses to question 1. There are three possible explanations for this:
The responses not satisfied and more or less satisfied are difficult to distinguish and do not allow the subjects to express their viewpoint. Choosing either the first or the latter has therefore been done more or less at random.
The values given by subjects in category A reflect their level of satisfaction based on the resources they had available. Since subjects in category B had more resources available (documentation), they assess their level of satisfaction based on different circumstances than category A. The values given for each category are therefore not directly comparable.
The more you understand, the more critical you become. The skilled subjects who achieved medium scores in the experiment are less satisfied than subjects with less skill who achieved medium scores. This may be reflected in the answers to the question, as skilled subjects underestimate their endeavour and not so skilled subjects overestimate it. This biases the answers, making any analysis of them meaningless.
Response to Q2.

The responses given by the subjects are listed in Appendix D.2.


Comparing the responses informally with the results from the experiment as shown earlier
reveals no direct surprises, and confirms our confidence in the Mpc and Mu measures.


Response to Q3.

Table 30 shows the answers to question Q3 in the debriefing schema.


TABLE 30. Priorities of question Q3 (a)
Subject   Statement a   Statement b   Statement c   Statement d   Statement e
1         3             5             4             2             1
11        3             2             1             4             5
13        4             2             3             1             5
25        4             2             5             3             1
27        5             3             1             2             4
33        3             5             2             4             1
37        5             3             2             4             1
45        1             4             5             3             2
47        4             3             5             1             2
49        4             2             5             1             3
55        5             2             1             4             3
69        4             5             1             3             2
71        3             2             5             1             4
75        1             4             5             2             3
77        2             4             5             1             3
83        1             3             5             2             4
2         1             2             5             3             4
4         1             5             2             4             3
10        3             4             1             5             2
14        2             3             4             1             5
15        4             2             1             3             5
17        1             3             5             4             2
18        1             4             5             2             3
24        1             2             4             3             5
29        3             1             4             2             5
42        4             1             5             2             3
50        1             2             4             3             5
64        4             2             5             1             3
66        2             5             4             1             3
74        4             1             5             3             2
80        4             3             2             5             1
84        3             1             4             5             2
a. Subjects 1-83 (upper block) are in category A; subjects 2-84 (lower block) are in category B.

Figure 17 shows how the statements given priority most true are distributed for the two categories. Similarly, Figure 18 shows how the statements given priority least true are distributed for the two categories. The sums of priorities given to the different statements in question Q3 in the debriefing schema are shown in Table 31.
FIGURE 17. Most true statement (distribution) in debriefing Q3. Category A: a 19%, b 0%, c 25%, d 31%, e 25%. Category B: a 37%, b 25%, c 13%, d 19%, e 6%.


FIGURE 18. Least true statement (distribution) in debriefing Q3. Category A: a 19%, b 19%, c 49%, d 0%, e 13%. Category B: a 0%, b 13%, c 37%, d 19%, e 31%.


TABLE 31. Sum of statement priorities (smaller value means higher priority)
Category   Statement a   Statement b   Statement c   Statement d   Statement e
A          52            51            55            38            44
B          39            41            60            47            53

Subjects in category A (Figure 17) believe that they would have been able to do better in the
experiment mainly if they had
1. d) more documentation,
2. c) better C++ knowledge, and
3. e) access to a computer to code their changes.

The first two were not surprisingly the two statements given highest priority. No documentation was provided for these, and to understand the system, they had to rely on their C++ abilities. Remember that there were no significant correlation between their C++ abilities and the
experiment results (discussed in Section 3.6.5). These facts seem to contradict each other.
However, investigating the data more closely reveals that the four subjects (11, 27, 55 and 69)
who gave this statement highest priority had a low experiment score (Mu + Mpc= 2, 0, 2, and
3), and three of the four had low pre-test score (27.3, 26.1, 35.4, and 68.2). This explains at
least why three of the four ranked statement c with highest priority. It is also interesting to
observe that all of the five subjects who ranked statement d highest have test scores above the
median. Investigating the data in Table 30 shows that four of these five have assigned lowest
priority to statement c. We have no logical explanation to the fact that four subjects in category
B agreed most with statement e. A speculation can be that working in front of a computer is
their preferred way of working, and being exhausted with reading code on paper they expect
that having computer access would have helped the situation. Still, all of these are in the second quartile with regard to pre-test score, with no high scores on the experiment (0, 2, 2, and
4).
Subjects in category B have given priorities very differently from category A. They believed that their experiment performance would have increased if they had
1. a) more time available,
2. b) a smaller system on which to make the changes, and
3. d) more documentation available.

The latter (only three subjects ranked this highest) may be attributed to the fact that the quality of the documentation varied, as it was obtained from a student project and not production software. Some parts of the system were documented in more detail than others, so if the subjects were looking for information on sparsely documented parts, they had good reason to ask for more documentation. The bulk of the subjects in category B ranked statements a and b highest. These statements are really two aspects of the same fact: the information available was sufficient for gaining understanding of the system, but it was too complex to digest in the allotted time. Thus, if more time had been available or if the system had been smaller, the subjects in category B reported that they would have performed better with the information available.
The priorities given to question Q3 by the category B subjects strongly support our belief that a tool for identifying and navigating the documentation will be a valuable aid to the maintainer.
Response to Q4.

The responses given by the subjects are listed in Appendix D.3.


3.7 Chapter summary


3.7.1 Experiences gained
First, some experience from running the experiment is given. Initiating an experiment requires a good experiment design. The design of this experiment, including everything from defining the hypotheses and improving my statistical skills to preparing the experiment forms to be filled in by the subjects, took about three man-months. The analysis took about one man-month, including writing this paper and a chapter of my thesis. The calendar time used was almost double that. The reason for this was a series of problems in recruiting enough volunteers as experiment subjects. The amount of administrative work in handling the subject recruiting and form management was also considerable.
The amount of work needed to finalize the experiment certainly exceeded what I had imagined. However, I believe that the experience and results obtained from carrying it through are worth the effort.

3.7.2 Summary
This chapter described the design and analysis of an experiment to investigate the impact of
documentation on software maintenance. 34 subjects were partitioned into two categories to
make the same changes with different system information available. Below follows a short
summary of the chapter.

3.7.2.1 Hypotheses
We refer to maintainers who have only source code available as category A. Maintainers who
have source code and documentation available are referred to as category B.
H1: Maintainers in category B will on the average use less effort to understand how to fulfil a
modification request than maintainers in category A.
H2: Maintainers in category B will gain more thorough understanding and provide more

detailed specifications for the solution of the modification request than maintainers in category
A.
H3: The score obtained by a subject in the experiment is expected to correlate positively with
the subject's skill.


The hypotheses were discussed in Section 3.2.

3.7.2.2 Facts and analysis


The following numbers were measured from the subjects' effort reports:
- Category A subjects spent 21.5% more time than category B on trying to understand the system.
- Category B subjects spent 27.5% more time than category A on implementing the changes. The effort saved on code reading can be used for productive work such as actually coding the needed changes.
- Most of the time (A: 74%, B: 65%) was spent on system understanding activities. The percentage was lower for category B, as expected.
- When documentation was available (category B), subjects spent on the average the same amount of time consulting the documentation as they did with code.


The hypotheses were tested with the following results (an illustrative re-computation of such tests is sketched after the list):
H1: H0: Time_U(A) = Time_U(B). An independent samples t-test shows statistical significance in the reduction of the Time_U sample mean at a 0.05 level (t=2.14, critical = 2.06, p=0.05). H0 is rejected, and H1 holds under the given conditions.
H2: H0: The sum of the ranks for Mpc and Mu have the same distribution for category A and B. The median for the two measures and the sums of their measured values are shown in Table 25. A Mann-Whitney test computed using SPSS shows that A and B have different distributions at a 5% level of significance. H0 is rejected for both Mpc and Mu, and H2 holds.

TABLE 32. Scores on Mpc and Mu

            Mpc A    Mpc B    Mu A    Mu B
Median
Total         19       29      21      32
H3: H0: There is no significant positive correlation between the subject's skills and experimental score. Table 29 shows the computed Spearman rank correlation coefficients between the test results and Mu (TR/Mu), and between the test results and the Mpc measure (TR/Mpc). Some very interesting results are observed:
TABLE 33. Spearman rank correlation coefficients

               TR/Mu               TR/Mpc
Category A     0.2260 (p=0.458)    0.3020 (p=0.316)
Category B     0.8230 (p=0.001)    0.6182 (p=0.024)
All subjects   0.5311 (p=0.005)    0.4309 (p=0.028)

1. The correlation coefficients in the first row imply that the null hypothesis for category A

cannot be rejected. Weak correlations exist among the test results and experiment scores,
but these are small and nonsignificant.
2. However, for category B, there are strong correlations among the test results and the

experiment scores. The null hypothesis is rejected for category B, and hypothesis H3
holds at a 0.05 level of significance.
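The tests above were computed with SPSS. Purely as an illustration of how such tests can be reproduced, the sketch below uses Python and scipy.stats on placeholder vectors; the variable names and numbers are ours and do not reproduce the actual experiment data:

from scipy import stats

# Placeholder vectors -- the real data are the per-subject measurements from Chapter 3.
time_u_a = [210, 195, 240, 180, 220, 230, 205, 215]   # understanding time (min), category A
time_u_b = [160, 175, 150, 190, 170, 165, 180, 155]   # understanding time (min), category B
mu_a     = [0, 0, 1, 1, 2, 2, 3, 3]                   # Mu scores, category A
mu_b     = [0, 1, 2, 2, 3, 3, 4, 4]                   # Mu scores, category B
tr_b     = [30, 42, 51, 55, 63, 70, 78, 85]           # pre-test results, category B

# H1: independent samples t-test on the understanding time.
t, p = stats.ttest_ind(time_u_a, time_u_b)

# H2: Mann-Whitney test comparing the Mu (or Mpc) distributions of A and B.
u, p_mw = stats.mannwhitneyu(mu_a, mu_b, alternative="two-sided")

# H3: Spearman rank correlation between pre-test results and experiment score.
rho, p_sp = stats.spearmanr(tr_b, mu_b)

print(f"t-test: t={t:.2f}, p={p:.3f}")
print(f"Mann-Whitney: U={u:.1f}, p={p_mw:.3f}")
print(f"Spearman: rho={rho:.3f}, p={p_sp:.3f}")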
These results deserve some discussion. The fact that hypotheses H1 and H2 hold implies the following:
- The aid of having documentation available during system maintenance reduces the time needed to understand the system and the changes implied by a change request (H1).
- It also provides the maintainer with more time and better knowledge, so that he can make more detailed changes to the system given a restricted amount of time (H2).


- The correlations in hypothesis H3 show that the aid of documentation helps the maintainer to use her/his skills better than if no documentation was available. In fact, if a skilled maintainer in category B were not allowed to utilize the aid of the accompanying documentation, he could not expect to do his job better than a person with less skill than himself. On the contrary, when this documentation is available, the skills of the maintainer very much determine the maintainer's productivity.
This has (at least) two direct implications:
- An organization which is about to employ a maintainer should try to get the best people available. (There is nothing revolutionary about this.)
- An organization which has hired the best maintainers money can buy cannot utilize them in an optimal manner if the system they are set to maintain is not documented in a satisfactory way. This is at least true in the short run; controlling for this in the long run cannot be done with this experiment design, as the domain/application knowledge variable is kept constant at zero level in this experiment.
Preserving the utility of the documentation is therefore important in software maintenance, and
using documentation as a source of information when trying to understand the system is indeed
effective.


CHAPTER 4

Prerequisites for a Framework to


Understand Software Systems

4.1 Introduction
In Chapter 2 we found that there has been no reduction in the costs of software maintenance compared to the costs of development during the last two decades. Systems have grown larger, and these large systems are more difficult to maintain than the smaller systems of 20 years ago. The maintenance process can at an abstract level be divided into three phases:
1. The change management phase, which collects all incoming modification requests, judges their relevancy, and sorts them according to priority.
2. The understanding or discovery phase, where maintainers try to understand what impact the modification request has on the system, which parts of the system need to be modified, how the system is organized today, and how the components which must be modified function.
3. The implementation and testing phase, where maintainers make the changes called for by the modification request. The changes are made based on knowledge acquired in the understanding phase.
As reported in Section 2.6.7, 50-60% of the effort spent in the maintenance process can be attributed to the activities in the understanding phase. In our own experiment, described in the previous chapter, we saw that this figure was as high as 74% when no documentation was available (see Figure 14 on page 73). From the effort reported by the subjects (described in Section 3.6.1) we calculate that those who did not have documentation available spent 21.5% more time to understand the system compared to the subjects who did. The experiment design set a time limit on how long the subjects were allowed to study and modify the system. The subjects in general did not have sufficient time to understand the whole system and all the changes which were needed to comply with the change request. The subjects only had time to specify pseudo code of varying detail rather than detailed working code.
In this chapter we present a model for how the evolution of a software system should be organized in order to minimize the risk of being exposed to the problems of maintenance that we described in Chapter 2. We argue that different kinds of software have different restrictions on how they can evolve. These restrictions influence the potential complexity of maintaining them. We argue that the software application and its documentation should be in a consistent state, internal equilibrium, and that this is more important for particular types of software than for others. In the long run, all kinds of software will have their maintenance costs reduced if internal equilibrium is enforced.


In Section 4.2 we argue that documentation should be updated in a specific way during maintenance. The resulting system state when this is done properly is termed internal equilibrium. Section 4.3 criticizes the large investments made in research on reverse engineering approaches. The main bulk of this chapter is contained in Section 4.4. There we discuss a high-level model for how different types of software should be maintained. We typify the software as either one-time, shrink-wrap, or customized. Finally, we summarize the chapter in Section 4.5, and give an outline of the next two chapters, which describe our approach to supporting the understanding processes in software maintenance.

4.2 Internal equilibrium: A result of balanced evolution


We have demonstrated that the availability of documentation is a key factor to the success of
software maintenance. In this section we discuss the importance of organizing the system documentation correctly in order to obtain high system maintainability.

4.2.1 A philosophical reflection


In any modern democratic system it would be outrageous to increase the wellbeing of some groups while leaving others behind. In order to do so, the society's leaders must either be suicidal, or have immense control over the military forces to put down the resulting riots. Modern societies are characterized by sustained growth, or a balanced evolution. Even though this may not be the fastest way to change the structures in a society, political leaders find it impossible to revolutionize the society; they settle for an evolution. By controlling the growth, all groups are more or less satisfied, and the political system is easier to control and understand. In this section we argue that software systems are more maintainable in the long run when the software evolution is balanced. By this we mean that all affected parts of a software system which may be of future use should be evolved concurrently. By doing this we restrict the possibility of losing documentation which can be valuable in the future.
Before we describe our view on how documentation should be organized, we present two very
different examples of how documentation can be organized for maintenance, and discuss their
weaknesses.
In [Parnas, 1993], Parnas reports about a visit to a major U.S. airport. He was guided around the airport, and when he came to the control room, he asked which specific situations would trigger a red alarm lamp on the air traffic controller's desk. The supervisor told him that all situations which triggered the alarm lamp were documented in the system documents. The problem was that the documentation available for the system covered a considerable amount of shelf space. So the supervisor told Parnas: "Well, you could check the documentation, but it will take you some weeks."
When Parnas further investigated the documentation he realized that the documentation changes were not integrated into one document. The documentation was organized in an astonishingly inefficient manner: when a modification was made to the system, all planning and changes made were described in a separate report which was added to the documentation set. Rather than updating the primary documentation, small reports were added as the system evolved. The requirements that specified the situations that should trigger the red alarm lamp were not situated in one document, but rather in a number of small reports.
We find the documentation update method described above to be efficient in several ways:


- The maintainer focuses on describing the actual changes made to the system with regard to a particular modification request. This will speed up the process of documenting the changes, and will make the work very visible to the client who asked for the change.
- If several maintainers modify the same components of a system simultaneously, problems can occur when they need to modify the same documents at the same time (i.e. write access conflicts). If the changes made are described separately, such situations will be avoided.
- If the original documentation is only available in printed form, or spread across several machines that the maintainer cannot access, the only solution may be to produce separate documents. This is indeed better than producing no documentation at all.
While there can be good reasons for producing documentation the way Parnas experienced, we
do not believe that this is a good approach with respect to system understanding. Indeed, as
described in the example: Finding the right information could be a severe problem. If only a
few changes are anticipated, there will be few problems with such an approach. However, if
the software is maintained for several years, and thousands of changes are made to the system,
the problems may be severe.
We are aware of organizations which are reluctant to put extra effort into documentation in the maintenance process. A common reason for this is that developers and maintainers are under pressure to get the software product right, not the software system as a whole. For example Wilson, in [Wilson, 1994], reports about a project:
No detailed requirements document is maintained. Such documents existed at system inception, but were not maintained. A cost-benefit analysis determined that change control would
best be implemented using a formal documented change process, and let the extensive user
documentation describe the system features. Similarly, only high-level design documentation
is maintained. Mentoring and work teams are used to pass on intermediate designs, and
where that fails, the Ada packaging usually provides a clear description of the system design.
Additionally, Ada dictionary packages are the only data dictionaries used.

The described project, Argus, maintains a system for security control at some of the U.S. Dept. of Energy plants. Wilson reported that the employers found it more cost-effective to keep maintainers with long experience in the project than to increase the documentation efforts. He also reported that the software is not customized for different customers; all customers get everything. We will see in the next section how this approach can be useful to reduce the costs in the maintenance process.
The Argus project used economic incentives (added salary) to keep the right people in the project. In addition, other agreements were negotiated, such as rewarding conscientious maintainers with higher positions when the project ends. Wilson admitted that the Argus management took a risk by choosing such an approach, and that they were vulnerable if some of the key personnel left the project. However, the chosen strategy had proved successful so far, as the incentives had kept the personnel with the project.
This last example may not be unique in its success, but as described in Chapter 2, some of the most common problems experienced in software maintenance are maintainer turnover and difficulties with recruiting new personnel. In particular, it is difficult to find skilled and experienced maintainers. As we showed in Chapter 3, documentation was of significant value when unskilled maintainers had to get acquainted with a system to make modifications to it.


4.2.2 Internal equilibrium


The two examples and the discussion given lead us to the following proposal which will be
central in our framework for understanding software systems.
Documentation should be updated and not added during software maintenance.
Internal equilibrium (DEF): When the system documentation is updated rather than added, and
at any time describes the current state of the software application, we say that the software system is in internal equilibrium.
Write access conflicts and problems with identifying which parts of the documentation needed to be changed were two important reasons for choosing the change report addition approach described above.
We will attack these two problems in the framework for supporting system understanding that we propose in the next two chapters. The primary mechanisms which we propose are to
1. Use configuration management for all items produced in the software development

and maintenance processes.


Just as source code is subject for configuration management, all other types of documentation integrated in a software system should be as well. When the system documentation is under configuration management, it is possible to compare two
document versions to understand what changes were made to the system due to a
modification request.
2. The architectural structure of the software system will be made explicit so that the

maintainer easy can obtain a top-down understanding of how the system is organized.
3. The maintainer will be provided with support to locate particular pieces of informa-

tion which are needed to understand a specific part of the system.


4. Relationships among different types of system components will allow the maintainer

to navigate upwards or downwards in the system abstraction hierarchy.


When the maintainer asks about which requirements that have influenced some
design decisions he will be able to easily obtain this information. Similarly, if he
wants to know which source code components that implements a particular requirement this will also be possible.
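As a minimal sketch of mechanisms 2-4, the fragment below records relationships among heterogeneous system components as simple typed triples and follows them in both directions. The component names, relation names, and file paths are hypothetical and only serve as an illustration:

# Illustrative traceability records: (source component, relation, target component).
relations = [
    ("REQ-12 Alarm triggering", "specified_by", "design/alarm_controller.md"),
    ("design/alarm_controller.md", "implemented_by", "src/alarm_controller.cc"),
    ("REQ-12 Alarm triggering", "tested_by", "test/alarm_trigger_test.cc"),
]

def related(component, relation):
    """Follow a relation downwards in the abstraction hierarchy."""
    return [dst for src, rel, dst in relations if src == component and rel == relation]

def related_to(component, relation):
    """Follow the same relation upwards (e.g. which design component a source file implements)."""
    return [src for src, rel, dst in relations if dst == component and rel == relation]

print(related("REQ-12 Alarm triggering", "specified_by"))
print(related_to("src/alarm_controller.cc", "implemented_by"))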
This proposed approach requires that all software documentation is in electronic form, at the fingertips of the maintainers. Such a solution will not be a problem, since today's technology permits software developers to distribute documentation, both for internal and external use, in electronic form. We propose that the software maintenance community should adopt such an approach.
Later we describe a model which we find suitable for deciding when maintenance should follow a process which ensures that the software system is in internal equilibrium. First we will give some criticism of a particular branch of software maintenance research which attacks the problems of software maintenance in what we believe is a short-sighted manner.


4.3 Research for quick fixes


Today, significant amounts of money are spent on research on technology to redocument
systems. This technology is called software redocumentation or software reverse engineering.
For an overview of software reverse engineering technology, two good sources are [Arnold,
1993a] and [Chikofsky and Cross II, 1990]. For convenience, we give an introduction to
reverse engineering below.
In large commercial and industrial organizations several systems exist without any kind of
documentation, except the source code. The organization is dependent on these systems in
order to keep its competitive edge in its business domain. Such systems are termed legacy systems.
When user demands and hardware platforms have been stable, the problems of running these
systems have been fairly small. Typically, initial changes to these systems have been made at a
very low rate, and often by maintainers with good knowledge of the system. New and cheaper
technology has raised new demands from users, both regarding interoperability and functionality. The owners of the system are now faced with two possibilities: Continue to provide users
with the system as of today, or extend the system to meet the new requirements.
There are several ways an organization can meet these demands. Some constraints usually hold in most situations:
- The system is large.
- The data the system generates and updates is important for the organization.
- The size of the data is large.
- A new system cannot be plugged in overnight.

By weighing these, and certainly other, constraints, the organization must decide whether they should:
- Maintain and extend the current system to meet the demands of the users, or
- Rewrite the system from scratch.

Having these two options, decision makers in the organization must choose one of them. If
they select the first, there is an obvious need for finding out how the system is constructed,
since the only information available is the source code itself (and sometimes even only the
object file).
This is the background for the field of reverse engineering. Reverse engineering aims to analyse the source code to unveil modules and their connections (i.e. the system structure), presenting the control flow of the program, finding data that is related, detecting hot spots, dead
code and clones in the source code, etc.
The reverse engineering task consists of three main phases (a toy illustration follows the list):
1. Parse the data.
2. Analyse the data.
3. Present the data.
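As a toy illustration of the three phases, the sketch below parses C++ files for #include directives (parse), inverts the result into a usage map (analyse), and prints it (present). It is only meant to make the pipeline concrete, not to represent an actual reverse engineering tool:

import re
from pathlib import Path

INCLUDE_RE = re.compile(r'^\s*#include\s+"([^"]+)"')

def parse(files):
    """Phase 1: extract the locally included headers from each source file."""
    deps = {}
    for f in files:
        with open(f) as fh:
            deps[f.name] = [m.group(1) for line in fh if (m := INCLUDE_RE.match(line))]
    return deps

def analyse(deps):
    """Phase 2: invert the map to see which files depend on a given header."""
    used_by = {}
    for src, headers in deps.items():
        for h in headers:
            used_by.setdefault(h, []).append(src)
    return used_by

def present(used_by):
    """Phase 3: present the result to the user."""
    for header, users in sorted(used_by.items()):
        print(f"{header} is included by: {', '.join(users)}")

files = list(Path(".").glob("*.cc")) + list(Path(".").glob("*.h"))
present(analyse(parse(files)))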

The first phase is a well known field through decades of research in compiler construction. The second phase is very specific to what kind of information is sought; this is the phase where reverse engineering experts are focusing their research. The last phase, sometimes strongly intertwined with the second as the analysis/presentation cycle can be iterative and user guided, focuses on how to provide the user with different views on the same data to unveil different aspects of the data.
Now here is the announced criticism: the aim of reverse engineering research is to find (semi-)automatic procedures to extract design information from source code. Similar information was indeed available during initial system development, but has been neglected by state-of-the-practice maintainers; now they want it back. While there is obviously a need for this technology in the market, our position is that resources can be used more cost-effectively on enforcing internal equilibrium in new systems. In general, research should focus on methods and technology for avoiding the problems of the past, not technology to work around them. If the latter approach is taken, maintenance managers will have good excuses for neglecting the problems of maintenance in the short run, as an easy escape may be available when the real problems emerge.
Instead of allowing software systems to follow the path of evolution leading to legacy systems, maintainers should strive for internal equilibrium. Then the need for reverse engineering technology would diminish after 5-10 years. We predict that there will still be a market for reverse engineering technology, as not all software applications can be expected to be in internal equilibrium. Our main point is that reverse engineering technology should not be a substitute for conventional design documentation during maintenance. The explicit decisions and rationale stated by the original designers and maintainers of software are far too important to overlook, and should be kept visible in the system documentation.

4.4 Economics in a software spectrum


Why do software maintenance projects continuously keep falling into the same pitfalls over and over again? Even though the problems related to maintenance are known, as we described in Chapter 2, maintenance management seems to neglect them again and again. We have argued that a key factor to successful maintenance is to accept the problems of high maintainer turnover and lacking documentation, to update documentation in a controlled manner, and to use configuration management on this documentation. At the same time, some projects seem to avoid the problems even though they take no special care in updating the documents. A good example of this was the Argus project presented earlier in this chapter. In that project the key personnel were given incentives to stay with the project. The level of system knowledge among the maintainers was the key to the project's success. It is the collective knowledge of the maintainers in a maintenance project which decides its success.

4.4.1 The maintainer profile


It is accepted that maintainers with experience in maintaining an application are more productive than those who lack such experience. We depict this in Figure 19. The system knowledge
of a maintainer is plotted as an S-curve against the system experience. The descending solid
line reflects the experience profile of the maintainers in a maintenance organization which has
problems with turnover. The curve shows us that few maintainers have long experience, while
many maintainers have little experience. If turnover is not a problem in the organization, the
experience profile is something like the increasing dotted line in the figure. In this case documentation may be superfluous, as most of the maintainers have deep knowledge of the system.


[Two curves plotted against system experience: the number of maintainers (the maintainer profile) and the system knowledge of a maintainer, rising towards 100%.]
FIGURE 19. Maintainer profile & knowledge vs. experience

When maintainers or management have been involved in a project like the Argus project, they may carry over experiences and neglect the potential problems in new projects. This can be expensive. In this section we argue that maintenance management must be foresighted and try to predict how the future evolution of the system will proceed.
In this section we propose a model which provides maintenance management with information
to use when they need to determine how they will organize the maintenance process in their
project. We present a list of factors which will influence which process to choose. The model
argues that keeping the software system in internal equilibrium will be the best solution for
most projects, but some exceptions will always exist.

4.4.2 A software spectrum.


To be able to describe different needs for software configuration management, Mahler
([Mahler, 1994]) identified three basic types of software systems.
1. One-time software is developed and maintained for a single installation, for example a control system for an oil rig or a transaction system for a bank, or something simpler like a report generator for an economics department.
2. Shrink-wrap software, on the other hand, is installed at a large group of customers. The producer is responsible for and sells only one release at a time. Examples of software of this kind are word processors, spreadsheets, and other commercial-off-the-shelf (COTS) products.
3. Customized software is tailored to different customers' needs; several variants of the software exist at all times, and the producer is responsible for maintaining all of them simultaneously. Examples of systems of this type are air traffic control systems used at different airports, but customized for each installation.
These three types of systems will exhibit different types of evolution. In software configuration management (SCM) terminology, their version graphs will be very different. We find this categorization of software systems to be pertinent for describing our selection model for maintenance strategy. We make the reader aware that the following discussion assumes that the software systems are of considerable size; typical systems under consideration have a size > 50 kLOC.

4.4.3 A short course in SCM


To be able to define the evolutionary nature of these different types of systems, we define the
basic terminology of SCM according to [Conradi and Westfechtel, 1996]:


[Version graphs for the files a.h, a.cc, and main.cc along the time axis t0-t5, with the configurations Release 1, Release 2 for Win95, and Release 2 for Macintosh marked.]
FIGURE 20. An SCM example

- A version v represents a state of an evolving item i. v is characterized by a pair v = (ps, vs), where ps and vs denote a state in the product space and in the version space.
According to the type of evolution, versions are classified into revisions and variants:
- A revision is a version which is evolving along the time dimension. Revisions are maintained to recover from erroneous updates or to fix bugs in old versions delivered to the customers.
- Variants are alternative versions of an evolving item which coexist at a given point in time. For example, variants of data structures may differ with respect to storage consumption, runtime efficiency, and access operations. Furthermore, a software product may support multiple operating systems or window systems.
At the system level, the concept of a system version is often heard. It is more correct to talk about
- A system configuration, which is a collection of item versions of all items of the system, such that these items can interact as a fully coherent system.
- A system release, which is a system configuration that is made publicly available.
Most versioning schemes identify versions with numbers. The most common version numbering scheme is the following:
<release number>.<revision number>[.<variant number>.<variant revision number>]
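As a minimal sketch, version identifiers following this numbering scheme could be represented and parsed as below; the class and field names are our own and are not taken from any particular SCM tool:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class VersionId:
    release: int
    revision: int
    variant: Optional[int] = None            # only present for variant versions
    variant_revision: Optional[int] = None

    @classmethod
    def parse(cls, text: str) -> "VersionId":
        parts = [int(p) for p in text.split(".")]
        if len(parts) in (2, 4):
            return cls(*parts)
        raise ValueError(f"not a valid version number: {text}")

    def is_variant(self) -> bool:
        return self.variant is not None

print(VersionId.parse("1.2"))        # a revision of release 1
print(VersionId.parse("1.3.1.2"))    # revision 2 of variant 1, branched from version 1.3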
Figure 20 shows how these definitions are related in a version graph for a system. The system in the example consists of three files a.h, a.cc, and main.cc. The version graph is shown for all these items. Initially, the system was released; the release was a configuration of version 1.1 of all items. As the system evolved, it became apparent that variants had to be made of a.h and a.cc in order to release the system on both the Win95 and Macintosh platforms. Version 1.2 of a.cc was split into three variants, 1.3.1.1 to 1.3.3.1. Version 1.3.1.2 is a revision of 1.3.1.1. Versions 1.3.1.2 and 1.3.2.1 of a.cc were merged into 1.3.1.3, which is a variant of 1.3.3.1. At point t4 the product was released as release 2, both for Win95 and Macintosh. After this release, the version number sequence is incremented by the user to reflect that the new versions are revisions of the second release. In most configuration management systems, the users can decide
sions of the second release. In most configuration management systems, the users can decide
themselves when to do such increments.

4.4.4 Characterizing maintenance profile


One-time systems are supported along the revision dimension. Only the latest system configuration is installed, and only at one customer, typically the organization which maintains the system itself. This means that maintainers will always make modifications to the last release.
The maintenance of shrink-wrap systems is also limited to the last release. Shrink-wrap systems are also supported along the variant dimension, as shrink-wrap type of systems are typically provided for different computing platforms. This means that several variants are
maintained concurrently. The individual logical system components may exist in different
physical files when the differences are large between the platforms. Two components which
have the same functionality for different platforms may therefore be very different in content.
This complicates maintenance, and increases the need for documentation.
For customized software, a customer dimension is also added. In addition, maintenance is not only relative to the last release; a range of different installations (different releases) are maintained concurrently.
As an example, consider an air traffic surveillance system: An airport installed the system
three years ago, and used a specific type of radar. Another airport installed the system two
years later, and used a different type of radar. In the logical system structure, the radar controller component may be very similar. Both customers may require modifications to the radar
controller. The software provider must therefore be able to maintain both the old and new version of the radar controller component.
Some parts of the air traffic control system may be shared among different installations. Since
customized software is installed at different customers, requests for modifications to the shared
parts may be incompatible. If the system parts which are shared among the different installations are large and complex, the system provider must negotiate with the customers to get
agreement on a common set of requirements. If not, the number of shared parts will be
reduced. This will increase the complexity of maintenance, and hence the maintenance costs.
Table 34 summarizes the above discussion.
TABLE 34. Characterization of software spectrum

                              One release,           One release,            Several releases,
                              one variant            several variants        several variants
One source of requests        One-time software
Several sources of requests                          Shrink-wrap software    Customized software

The system structure, i.e. the logical breakdown of system components (including any documentation component), will also evolve over time. For one-time and shrink-wrap systems, this puts no challenging requirements on the configuration management system. However, for customized systems, it means that the configuration management system must be able to handle simultaneous modifications to several system architectures. The configuration management system must provide the user with mechanisms for describing the variability1 in the system structure dimension, and for selecting among different structures, not only the last one. Variability exists in the revision, variant, and structure dimensions.


In Figure 21 we show how the logical structure of a system has evolved over time. The left upper part of the figure shows the logical component structure of one installation; the right upper part shows the new system structure. The components marked by the gray regions are identical in both structures1.
[Upper part: two component structures labelled "Old system structure" and "New system structure"; lower part: the "Combined system structure" represented as an AND/OR graph over numbered components.]
FIGURE 21. Evolving system structure

For one-time and shrink-wrap systems, the maintainers need not worry about this structural evolution, as only the last system structure is maintained. The old configurations may be stacked away somewhere in case they are needed at some future point in time.
For maintainers of customized software, the structural evolution is a challenge. The maintainers must evolve a family of systems where each family member may have a different system structure. Indeed, this both complicates the maintenance of such systems compared to the other two system types, and gives challenging requirements to the configuration management system which is used for controlling the system component versions.
In the lower part of Figure 21, we show how the family structure can be represented in a kind of AND/OR graph. The dotted lines on a pair of composition links indicate that a choice must be made between them. A dotted line on a single link indicates that the composition link is optional. A solid line on two or more composition links means that all these links must be part of a valid composition of the logical system structure. A configuration management system which is used with customized software needs to be able to represent these choices of system structure composition; a sketch of one possible representation is given below.
1. By variability we mean that several options are possible in a given situation. To stay inside the configuration management domain, a subsystem may exist in different variants. One may optimize the subsystem's functionality with respect to time, another with respect to storage requirements. Several revisions may exist for the components which comprise this system, and variants may exist for the components if several computing platforms are supported.
1. Note that although the logical system structures of two configurations are identical, the component versions which implement the structure may be different.
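As a sketch of how such composition choices might be recorded and checked, the fragment below uses hypothetical component identifiers and link kinds (mandatory, optional, and alternative groups); it illustrates the idea rather than the representation of any particular configuration management system:

# For each composite component: a list of (child, kind) pairs, where kind is
# "mandatory" (solid link), "optional" (single dotted link), or "alternative"
# (a dotted link pair from which exactly one child must be chosen).
structure = {
    1: [({2, 3}, "alternative"), (10, "mandatory"), (11, "mandatory"), (12, "optional")],
    2: [(4, "mandatory")],
    3: [({8, 9}, "alternative")],
}

def valid(selection):
    """Check that a chosen set of components is a legal composition of the family."""
    for parent, links in structure.items():
        if parent not in selection:
            continue
        for child, kind in links:
            if kind == "mandatory" and child not in selection:
                return False
            if kind == "alternative" and len(child & selection) != 1:
                return False
    return True

print(valid({1, 2, 4, 10, 11}))       # one family member: True
print(valid({1, 3, 8, 10, 11, 12}))   # another family member: True
print(valid({1, 2, 3, 10, 11}))       # both alternatives chosen at once: False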


It should be clear by now that such complexities will increase the maintenance problems of
customized software. The maintainer needs to understand how different installations (system
family members) are organized, and why they are differently organized. The maintainer also
needs to understand the differences among the different family members with respect to functionality. If internal equilibrium is not ensured for all the different system installations, the problems of understanding the system family will be immense. Particularly maintainers who are new to the system will have enormous problems acquiring sufficient knowledge of the system.

4.4.5 Software costs in the short run


Above we saw that the three types of systems have characteristic profiles with regard to the dimensions of variability along which they are maintained. Below follows a discussion where we argue that the three different types of systems also have different cost profiles in the short run with regard to who develops or maintains them:
- A one-time system is designed to perform or support a particular function in a specific organization. There are two reasons why such systems are built:
1. The organization has decided that its problems and requirements are so special that existing systems which are commercially available are not sufficient.
2. The existing systems are so expensive that it would be much cheaper to build an in-house system instead.
A one-time system is usually developed by engineers from the organization which needs the system, with additional help from outside consultants if the system is of considerable size. The engineers' general knowledge of both the application and the operational domains helps to reach a good consensus between the requested and specified level of functionality.
The consultants will typically have nothing to do with the system when the first release is delivered. It is the responsibility of the maintainers employed by the organization to maintain the system. Valuable system knowledge is lost with the consultants.
If the engineers who developed the system are responsible for the maintenance, valuable system knowledge is inherited from the development phase. If other software engineers with experience from the organization are assigned to maintain the system, they will carry with them knowledge of the organization's operational domain, and will therefore understand the conceptual functionality of the software. When maintainers are brought in from outside the organization, they first need to gain general knowledge of the organization's operational domain and then understand how the system has implemented solutions to help automate these domain concepts.
- Shrink-wrap software is typically developed by a large software company. The developers of the software are typically not specialized in the domain which the software supports. They will become familiar with the domain during the course of developing the system. Requirements have been collected from a narrow group of potential users (compared to the large user group of the final product). The creators of a shrink-wrap system must take into account that the system will be used in several different operational domains. This is difficult, and means that the initially specified functionality level will not match all requested levels of functionality.
Unlike large organizations which develop large internal software systems (one-time systems), the developers of shrink-wrap software make the software to sell in large quantities.


When the first release of the system is sent out to the market, their job is not done. They will
have their income from further selling the software, and as we saw in Chapter 2 the system
will evolve if it is successful. The original developers will therefore maintain the system
during its life-time, and will carry with them all the knowledge acquired during the initial
system development. In the short run the maintenance costs can be expected to be rather
low, and since the maintainers have high system knowledge documentation will not be as
important as for one-time systems.
- Customized software systems are also typically produced by a software company. Unlike shrink-wrap systems, which are supposed to sell in thousands, a customized software system will sell only a few copies. The complexity of this type of software will be higher than that of typical shrink-wrap systems. However, it is not a one-time system, since a customized software product is part of a system family where family members share parts of some basic functionality. Special considerations must therefore be made to plan for future evolution. As the discussion earlier in this section has pointed out, such considerations may include how to best structure the system for changing system components in different installations.
The personnel who customize the software know the basic application domain very well, and also have good knowledge of the system. However, they are typically arranged in project groups which are responsible for customizing the systems for new customers. This means that all maintainers share some knowledge, while knowledge which is special to one installation is known only to a few persons in the project group. It is therefore important that documentation is maintained at all times.


In Figure 22, we depict how the costs of maintenance can be expected to be in the short run for
systems of the three different software types. The relative costs are based on the discussion
above.

[Normalized short-term costs for One-time, Shrink-wrap, and Customized systems, shown with and without documentation.]
FIGURE 22. Costs of short term maintenance

The figure distinguishes between initial development costs and maintenance costs in the short run when documentation is and is not available:
- For one-time systems, it is often the case that the original developers follow the system during its operation. This means that the maintainers have intricate knowledge of the system, and that the need for documentation is small. Evolving the documentation with the rest of the system is generally viewed as an extra cost.
- Shrink-wrap systems have a wide variety of users, and the number of operational features in the software generally exceeds what is obvious to the software maintainer. Keeping the documentation in concert with the rest of the system is therefore more important in this case. The producer may be under pressure to release the product as fast as possible to be able to reach the market before its competitors. Several incentives may therefore exist to cut corners with respect to documentation, particularly since the software engineers have intimate knowledge of the system. Because of the variability in the system, the software producer should nevertheless enforce internal equilibrium, as the large number of changes will soon make the initial system obsolete.
soon make the initial system obsolete.
- The customized system family has a complex integration of specified functionality for the different customers. The short-term variability will therefore be more complex than for the other two types of software. The need for documentation for effective maintenance is therefore the rule, also in the short run.
In this section we showed that dropping documentation in the short run may be cost effective.
However, at some point in time the accumulated costs of maintaining the system without documentation will exceed the costs of enforcing internal equilibrium from the start. This is the
topic of the next section.

4.4.6 Software costs in the long run


In the long run software will grow in size, both with regard to the number of requirements
which must be adhered to and to the number of lines of code in the system. This means that its
complexity will increase. When the software gets older the intimate knowledge available to the
original developers is forgotten. This means that documentation gets more important in the long run as the system evolves. In Figure 23 we depict this situation.

[Accumulated maintenance costs plotted against time, with and without documentation, for a stable and an unstable environment.]
FIGURE 23. Costs of long term maintenance

The proposal of a constant cost of maintenance when documentation is evolved in concert with the rest of the system is based on an observation by Lehman and Belady. This observation shows that the increments added for the different releases are constant over the evolution period. Thus, if patchwork changes are avoided the system structure does not deteriorate, suggesting a constant incremental cost.
As discussed in the previous section, updating the documentation may be inefficient in the short run for some types of systems. The maintainers know the system intimately, and they themselves make the changes to the software. In the beginning mostly corrective changes are made, and the perfective changes are natural extensions to the existing system.
As the system evolves, new people will be included in the maintenance group. Since these people lack the system knowledge of the maintainers who participated in the original development, the incremental costs of maintaining the system will increase. If the system is changing rapidly to keep up with requirements from its users, the original documentation will be of little value to the new maintainers.


At some point in time, a pay-off from the investments made in evolving the system documentation with the rest of the system can be reaped. This point in time is determined both by the system type and particularly by the stability of the operational domain. This is portrayed by the curved lines of Figure 23.
When a system is transferred from development to maintenance, the maintenance management should determine from past experience how long the expected life-time of the system will be. As their experience base increases they will be able to predict whether it will be cost effective to enforce internal equilibrium, when they compare all the factors we have outlined in this and the previous section.

4.4.7 Factors which influence the maintainability


The discussion on the previous pages revealed a number of factors which influence the maintainability of the software system. In Table 35 we summarize these factors. The rows above the double line in the table describe factors which are related to the nature of the system type itself; the rows below the double line describe factors that affect maintainability and which are dependent on how the system is developed.

TABLE 35. Characteristics of systems in software spectrum

Factors                        One-time system             Shrink-wrap                Customized
Importance of system to        Low                         High                       High
  maintenance organization
Importance of system to        High                        Low                        High
  user organization
Short term change type         Corr.                       Corr. & Adapt.             Corr. & Perf. & Adapt.
Initial change costs           Small                       Medium                     Large
Variability dimensions         Revision                    Revision, variant          Revision, variant, structure
Comp. syst. decision by        User                        User                       Producer
System complexity              Low to high (!)             Medium                     High
What decides evolution         Business domain             Market/other companies     Individual customer
System life-time               Long                        Probably short (COTS)      Long
==========================================================================================================
System developed by            Consultant +                Software company           Software company +
                               in-house developer                                     customer in coop.
Are developers and             In short run yes,           Yes                        Yes, but maintainers
  maintainers the same people  in long run no                                         rotate in project groups
Who maintains system           In-house maintainer         Software company           Software company +
                                                                                      customer in coop.
Maintainer turnover            High                        Medium                     Low
Maintainer domain knowledge    High,                       Medium,                    High on some parts,
                               special knowledge           general knowledge          low on other parts
Is evolution predictable       Yes                         Partly, market decides     Yes, particularly in long run
Importance of documentation    Doc. required from start,   Doc. necessary when        Doc. required from start,
                               must be maintained due      success gives a long       must be maintained for the
                               to high turnover            lifetime                   separate installations
Maint. doc. required in        Not really                  Indifferent                Yes
  the short run?

4.4.8 Recapitulating the software degrading process


We use the knowledge which we have accumulated so far to describe how the software system degrades over time. In Table 36 we show the possible states in which a software system may be with respect to internal equilibrium and stability of the external environment. Even though most systems start out in the table cell labelled 1, they tend to shift to states 3 and 4 over time. Below we describe our view of why this is the case:
TABLE 36. Software system states

                               Internal equilibrium
External        Yes (Goal)                            No (Legacy systems)
environment
Stable          Documentation updated;                Documentation not updated;
                little maintenance; low pace    (1)   little maintenance; low pace    (3)
Unstable        Documentation updated;                Documentation not updated;
                much maintenance; high pace     (2)   much maintenance; high pace     (4)

- Before its first release, a software application is thoroughly tested to ensure that the customer receives a product of high quality. The application is then released to the customer. The application can now be thought to be in internal equilibrium, if the development personnel have done their job right. Since the users of the application are not yet familiar with it, the external pressures imposed on it can be thought of as few and stable. We say that the application is located in the cell labelled 1 in Table 36.
- When the application is introduced to an increasing mass of users and these users start exploiting the facilities of the system, errors and faults will be discovered. Since the correction of these processing failures is thought not to have an impact on the system organization, documentation is not updated. Now, as pointed out by Parnas, not all maintainers have full knowledge of the system, and changes not meant to affect the system organization often do have such unfortunate side effects. These side effects on the system organization are not immediately perceived to be a problem, and are not documented. The application is slowly moving from the cell labelled 1 to the one labelled 3.
- As further changes are made, maintainers find themselves struggling to understand the system. When consulting the documentation, they find it to be of little use, since it is obsolete. Updating the obsolete documentation when making further changes does not make any sense, and maintainers find that they can only trust the code. The application is stuck in the cell labelled 3, and has become what in the literature is called a legacy system1.
1. See e.g. [Bennett, 1995] for a discussion of legacy system characteristics.


- When the application has been in operation for some time, products from other vendors appear on the market. These new products are more sophisticated than the old application, and the clients are looking for an opportunity to change from the old application to one which provides a better user interface, performance, database integration, etc. This puts an enormous pressure on the maintenance organization, as management decides new directions for the application functionality. The maintenance activities change from handling a steady flow of customer requests to handling more ramifying requests from management. The customers, management, and maintenance organization change their priorities1, and the application can be characterized as being in the cell labelled 4 in Table 36.
When the application is in this state, effective maintenance is needed in order to stay in business. The maintainers find that they lack documentation to be able to make the quick decisions
needed. The only two solutions will be to try to reverse engineer the system or totally redesign
and re-implement it, as described in Section 4.3. If maintenance had enforced internal equilibrium, maintainers would have all necessary information at their fingertips, and the incremental
maintenance costs would not increase.
To sum up, we believe that internal equilibrium should be enforced for all systems with some
expected system life-time. Both maintainers and maintenance management must understand
and accept this. The maintenance managers must persuade the customers to pay for the extra
costs to be able to do this in the beginning of the maintenance process. Explaining that this will
be profitable in the long run, with the aid of the model described in this chapter, can be a good
way to persuade the customers or sponsor.

4.5 Chapter summary and the way ahead


4.5.1 Summary
Changes made to the software system should be balanced across all types of components spanning it. This means that requirements, specifications, test reports, and user documentation should be updated, as well as the source code. We gave an example that showed how difficult it is to comprehend the total system when small change reports are added to the existing documentation. Instead, these updates should be made to the original system documentation. When a system is updated in this way, we said that the software system is in internal equilibrium. The maintenance organization will then be less vulnerable to key personnel leaving the organization. When support exists for identifying and extracting the system components related to a change, and for navigating among them, the introduction of new personnel will be less painful, as the training period can be minimized. This requires that all types of documentation are available in electronic form. In particular, we identified four important requirements for a framework to support the understanding of software:
1. Use configuration management for all items produced in the software development and

maintenance processes.
1. While most studies on maintenance problems have used a proposed set of problems first used in [Lientz and Swanson, 1980], Dekleva ([Dekleva, 1992]) used a Delphi technique to allow maintenance professionals to converge on a set of important maintenance problems. The most important problem reported by Dekleva was that of changing priorities.

Chapter summary and the way ahead

103

2. The architectural structure of the software system should be made explicit so that the

maintainer easily can obtain a top-down understanding of the organization of the system
structure.
3. The maintainer should be provided with support to locate particular pieces of information which are needed to understand a specific part in the software system.
4. Relationships among different types of system components will allow the maintainer to navigate upwards, downwards, and even sideways in the system abstraction hierarchy.
When large systems are maintained without enforcing internal equilibrium, they will become increasingly less maintainable. At some stage the system has, due to system aging, become so different from the original documentation that making changes to it is hazardous. Even if the maintainer discovers where a change should be made, he dares not make the change because he does not know how the change will affect the rest of the system. Systems exhibiting such characteristics are termed legacy systems in the literature. Research is ongoing to be able to reverse-engineer the documentation from the existing source code of legacy systems. We criticized this research as being short-sighted. On the one hand, the existence of reverse engineering technology will give maintainers legitimate reasons for neglecting the evolution of documentation. On the other hand, if maintainers ensure that the systems they maintain are in internal equilibrium, the technology of reverse engineering would be superfluous.
Maintenance projects have constantly fallen into the trap of neglecting to update the system documentation because it is perceived as unnecessary during the early phases of the maintenance process. This is understandable, as many of the maintainers in the early phases of the maintenance process carry with them system knowledge from the development phase. When the system gets older, these maintainers leave the project and new maintainers are employed. These maintainers contribute to a change in the maintainer profile, resulting in few maintainers with long system experience, and many with little.
We presented a model which described how different types of software evolve. The set of software systems to be considered was divided into one-time, shrink-wrap and customized systems. These three types of systems exhibit different evolutionary characteristics. The reason for this was described using a configuration management metaphor. The costs of maintaining the systems in the short and long run were discussed. We argued that in the short run, different requirements exist for how the three types of software systems can be maintained cost-effectively. In the long run, we concluded that all systems would profit from having enforced internal equilibrium.
From domain experience, and from the history of other maintenance projects in the organization, the maintenance management should choose the approach that is most cost-effective. If the expected life-time of the system is short, enforcing internal equilibrium from day one may not be the best solution, depending on the type of system. The accumulated costs of the whole maintenance project may then be reduced by neglecting to update the documentation.
The factors which characterize the three different types of software systems with respect to
maintenance and evolution were summarized in Table 35. The process which typically
degrades the software system into a non-maintainable state was recapitulated.

4.5.2 The way ahead


We have advocated that in order to reduce the costs of the understanding phase in the maintenance process, documentation should be kept in internal equilibrium. This became particularly clear in our experiment described in Chapter 3. We demonstrated that the presence of documentation significantly helped inexperienced maintainers in understanding and specifying changes, even for a rather small software system.
Responses from the participants in the group which had documentation available during the experiment suggested that they felt they wasted much time trying to locate relevant information in the documentation. This was because they did not understand the structure of the system, had problems navigating in the documentation, and had problems relating information in the documentation to information in the code and to the requirements. Nevertheless, they spent less time reading documentation and code in order to understand the system, compared to the group which had only source code available.
In the rest of this thesis, we present our approach for supporting maintainers in locating information which is relevant to a given problem. The approach we will present is two-fold:
Support for understanding the structural evolution and system structure.

The architectural structure of the software system should be made explicit so that the maintainer can easily obtain a top-down understanding of the organization of the system structure.
When a system evolves, its structure can change significantly. The system structure is the logical composition of system components, arranged in a hierarchy. Different parts of the system may interact in different ways, not only through the hierarchical composition links. This interaction at the component level is also included in the system structure. The system structure includes not only the structure of the implementation parts, but also the structure of the documentation which describes them.
It is important to be able to describe how this system structure evolves. This is particularly important when several releases are maintained concurrently. For example, when a customer requests changes to a two-year-old installation, the software producer must be able to regenerate a system that is identical to the system installed at the customer's site.
The changes asked for by the customer may already have been included in a release to another customer. By being able to visually inspect the differences and similarities of several releases, the maintenance management will have a powerful facility to help them in planning which changes must be made.
We propose that a configuration language, developed by a small group of European researchers
including the author during the doctoral study, exhibits the functionality which we require for
describing the architectural evolution of a software system.
The system model described in the configuration language can be used by maintainers for
acquiring an up-to-date description of how the software system they maintain is organized at
an abstract level at any time.
We describe the Proteus configuration language in Chapter 6.
Support for understanding parts of the system to comply with a change request.

The maintainer should be provided with support to locate particular pieces of information
which are needed to understand a specific part in the software system.
During the specification of the Proteus configuration language, it became clear to the author that an important problem in maintenance is to be able to extract just the information you need from the large information base made up of the documentation and the source code.


Indeed, it is possible to make all relationships among all components at all levels in the software system visible by explicitly specifying them as part of the system model. This introduces a new problem and leaves another unsolved:
It would be very expensive to update such a detailed system model when the system evolves. The cost benefits from using the explicit system model to gain knowledge of the system structure would probably be lost. The high degree of model detail would also hamper the understanding of the more abstract structure, which was the most important thing to visualize with the configuration language.
While all component interactions would be made visible by choosing such a solution, two main needs of the maintainer would still be unmet. The first of these is that the maintainer would still have problems understanding the functionality of the different components, and how this functionality is presented to the user. This information is hidden in the system design and the user documentation. The second problem is that the user does not know why the different components are organized as they are, or why the system components exist in the first place. This information is contained in the high-level system design and in the requirements document. All four types of information are contained in a set of files generated by some word processor.
The types of questions which cannot be answered by the system model described in PCL include, for example:
Which parts of the system are affected by this change request?
Which system components must be changed in order to ... ?
What are the requirements that pose restrictions on this particular system component?
What does this particular function do?
Which components in the system contain solutions which fulfil this particular requirement?
Where is the source code which implements the user interface for displaying the status list of all alarm lamps in the air traffic controller system?
How can I find out whether the system takes special consideration when radar X identifies an alien aircraft?


We argue that answers to dynamic questions like these cannot be extracted prior to their use
and stored statically in a database. We propose that a dynamic extraction facility should be
used to try to locate information which can answer these questions.
In Chapter 7 we specify such a dynamic extraction facility. We describe
what kind of information should be sought,
how information in different types of system components is related,
how relationships among different types of system components can be dynamically extracted to provide for system traceability, and finally
how this information should be presented to the maintainer to aid him in reducing the time spent on trying to understand the functionality of the system.


CHAPTER 5

New Future Technologies?

5.1 Introduction
In this thesis, we propose a software maintenance model which stresses that the total system
should be maintained, not only the code. This chapter provides a rather philosophical discussion of alternative future technologies which could revolutionize the way we develop and
maintain our software.
The use of formal specification languages could make it possible to maintain only the system specification, and not the code. This would be an alternative to our approach for supporting software maintenance. The possibility of such an approach is discussed, and we present some ideas for an abstract model for semi-automating parts of software maintenance if this approach is feasible.
Additionally, we discuss the concept of an interlingua for software representation models. A problem in maintenance is that different systems are developed using different design methods. This presents a problem to a maintainer, who must learn to use a set of different design methods. An interlingua is a common representation scheme which all other representations can be translated to and from. If such an interlingua existed, a maintainer could choose to maintain any system using the design method of his choice.

5.2 A new software production paradigm?


5.2.1 Traditional software production model
Software systems are made by elaborating and developing a set of ideas into programs:
These ideas are elaborated into a specification of some sort. Furthermore, the specification forms the basis for the implementation of the program.


The validation of a software system is the process of ensuring that the program is correct according to its specification. Validation is a multi-staged process including verification and testing.
Verification of the specification is performed by different quality control techniques, e.g. perspective-based reading as described by [Basili et al., 1996], where the aim is to remove errors in the specification prior to implementation.


Testing is performed at several levels, both prior to release (component, integration and system test), when customers take over the system for operation (acceptance test), and during operational use.
Following our definition of maintenance from Chapter 2 (page 11), maintenance is performed on the specifications and source code to minimize the discrepancies between the different functionality levels. This traditional software life-cycle is depicted in Figure 24. Imagine if all maintenance could be performed at the specification level. This could potentially save considerable effort.
(Figure: ideas are elaborated into a specification, which is developed into a program; verification, testing, and maintenance connect the specification and the program.)
FIGURE 24. Traditional software life-cycle

5.2.2 Automatic program synthesis


According to Flener [Flener, 1995], automatic programming, or rather automatic program synthesis, is to automate the development process such that correct implementations are automatically generated from a formal specification. This would make testing and maintenance of implementations obsolete. This aim is depicted in Figure 25.
(Figure: ideas are elaborated into a specification; the program is generated by development from the specification, and validation and maintenance are performed at the specification level.)
FIGURE 25. Software life-cycle with automatic program synthesis

Flener further reports that utilizing such techniques for programming-in-the-large is beyond hope for several decades, and reduces the scope of automatic program synthesis to contributions for programming-in-the-small. This refinement is reflected in Figure 26, where the algorithm design and algorithm implementation activities are introduced. According to Flener, promising results have come from research on automating the algorithm design activity using algorithm synthesis, where algorithms are designed automatically, usually from formal specifications.
(Figure: ideas are elaborated into a specification; algorithm synthesis produces an algorithm, which is implemented into a program; validation, development, and maintenance connect the stages.)
FIGURE 26. Practical new software life-cycle

Flener believes that automatic program synthesis may prove useful in the future. People's ambivalent feelings towards this technology can be related back to the failure of the first
projects in the field. These projects were very ambitious, trying to generate programs from natural language specifications. Yet a look into history reveals that even the first assemblers and compilers were seen as "automatic programmers". The "real" programmers felt they were writing in some sort of specification language instead of performing real programming at the register level. However, these specification languages were soon perceived to be the natural programming languages. Flener argues that the (formal) specification languages used as the basis for automatic program synthesis today may well be the natural form of program specification tomorrow.
From the failures of the early attempts to generate programs automatically from natural language, it seems evident that any breakthrough in the use of automatic program synthesis in software engineering would come from using a formal specification language. Although there are several degrees of formality in specification languages, this implies that software developers need a strong background in mathematics and computational logic to be able to specify the software.
The current lack of adoption of formal specifications, and the more widespread use of non-formal specification languages in software engineering, suggest that the software engineering community is not yet ready for the technology of automatic program synthesis. This speculation is supported by the current trend of user involvement in the software specification process to ensure customer satisfaction. When the software engineer is not ready to adopt formal specification techniques, it can hardly be expected of the customer.

5.2.3 Automatic code maintenance


Consider the following intriguing possibility: today's automatic program synthesis is about synthesizing programs from specifications. Tomorrow's automatic program synthesizers will be termed automatic specification synthesizers, and will be about synthesizing specifications from specifications! This idea is outlined in Figure 27.
(Figure: a change requested by the user is manually formulated as a formal specification of the change; specification synthesis transforms the original specification into a new specification, from which program synthesis produces a new program.)
FIGURE 27. Specification synthesis for automatic maintenance

The idea is that, given a software specification in a particular state, a modification request can be formulated as input to a specification synthesizer such that the specification is transformed into a new state defined by the requested change.
The potential of such a technology would be tremendous; maintainers would not have to dig into thousands of lines of code in order to modify a software system. Rather, they could spend more time assuring the quality of the system by validation, and respond more rapidly to requests from the customers.


5.2.4 Relevant work


In [Baxter, 1992], the notion of a design maintenance system (DMS) is introduced, similar to the ideas presented in the preceding sections. Baxter's proposal is to capture and reuse design information from a transformational implementation process. The DMS updates the specification (what), a derivation history (how), and the design history (justification), using a formal maintenance delta to guide the revision process. The DMS requires that the software is implemented in a particular manner, and Baxter proposes to use the Maintainer's Assistant described below to transform the original system into one that is compatible with DMS.
The Maintainer's Assistant (MA), described in [Bennett et al., 1992], realizes some of the ideas described in the previous sections. Although it is not possible to transform an old system version into a new one based on a modification request, it provides assistance in applying program transformations to restructure the program:
The original source code is transformed into the internal language, WSL, which is the heart of the MA. WSL is mathematically based, so proving the equality of two program structures amounts to proving the equality of two formulas. The MA has a catalogue of 500 semantics-preserving transformations (i.e. CISC), including a small set of simple generative transformations that can be freely combined in sequence (i.e. RISC). The transformation rules can be classified as either (i) pattern-matching and replacement, (ii) algorithmic transformations such as removing a dummy loop, or (iii) hybrid transformations (a combination of the former two). The MA is typically used for improving the quality of a program, and thereby making the program more maintainable, thus making the MA principally a tool for reverse engineering. However, the MA also supports the transformation from specification to source, thereby allowing the maintainers first to restructure and abstract, then make changes on an abstract level, and then transform back to code (i.e. implement).
Thus, the MA allows for the restructuring of a system, decreasing the number of lines of code to maintain, and hence probably also increasing the program's maintainability.

5.3 A universal representation model for software engineering?


It is a well-known fact that new inventions make old technology obsolete. Users continually demand better performance, both from their software and from their computing environments. If an end-user organization needs to change its computing environment, its software systems must be ported to the new environment. The software-producing organization will normally continue the system maintenance on the old computing system and cross-compile new releases for the new one. If, on the other hand, the software environment is changed to a new one, the old data must be ported so that the users can continue to use it in the new software environment.
Just as new and better software environments become available for end-users, new and better software development environments emerge for software engineering organizations. The organization may want to start using these new environments for its software development: the software engineers must be trained in using the new development environment before starting to use it actively on new projects. The new environment may or may not be able to incorporate the systems which are maintained by the organization. If this is not possible, the organization may be stuck with two different environments: one for new development, and one or more environments for maintaining the existing systems.


Is it possible to have a common representation model which incorporates all other representation models? This would make the transition from one development environment to any other feasible. If such a representation model existed, old software systems could be migrated to new development environments, and maintenance of all applications could be performed in the same environment. Indeed, if a company uses consultants for development, but maintains the system itself, it would not matter if the two used different development environments; they could both select their environment of choice. The maintenance organization could transform the final delivery from the consultants and use the same environment for maintaining all systems.
A great opportunity would be to find a technology which allows the development of specifications in a language suitable to the task, which the user understands, and which can be transformed into a formal specification. This formal specification could then form the basis for further automatic program synthesis and maintenance. Such a view is presented in Figure 28.
(Figure: ideas I1, I2, I3 are elaborated into specifications S1, S2, S3 in different formalisms; these are transformed into a common specification Si, from which algorithm synthesis, implementation, and development produce the programs P1, P2, P3, with validation and maintenance performed at the specification level.)
FIGURE 28. Extending the suitability of automatic program synthesis

The maintainer would then only have to maintain applications specified in one formalism, and would not need to know several specification languages and supporting tool sets.
We know that solutions exist for migrating data from outdated applications, that documents
can be moved between most document formatting systems, and that natural language can be
automatically translated. What about software models? We discuss this below.

5.3.1 Translation of IS data


When changing an organizations software environment, the old data may not be directly usable by the new systems. Such problems can normally be solved. The reason for this is that the
information system providers has adopted standard ways of representing the data, i.e. the relational model. Data can then be converted when the end users change platforms, and no real
trouble exists.

5.3.2 Translation of documents


When a new word processing system is introduced, old documents may not be readily incorporated into the new system. Several intermediate formats exist for converting documents from
one processing system to another. Although the user interface and internal representation of
one word processing system may differ significantly among two such systems, the functionality provided is very much the same.
Intermediate formats can be constructed for representing the content (i.e. the text itself, its colour, its emphasis, font type, size, etc.) of a text segment. Several such intermediate formats are
available, for example RTF (Rich Text Format) used by most Microsoft products, MIF (Maker
Interchange Format) defined as an open interface to documents produced by the Framemaker document processing system, and the more universal ISO-standardized SGML (Standard Generalized Mark-up Language), which allows for the separation of the document contents from its presentation. This means that, in principle, all word processing systems following the ISO SGML standard can directly open an SGML document.

5.3.3 Translation of languages


A powerful technology for automatic language translation, particularly when multi-language
translation is the goal, is the use of a universal middle language, or interlingua. The basic idea
of an interlingua is that the translation between any two languages a and b is carried out by
first translating from the source language a to the interlingua, and then from the interlingua to
the target language b. The concept of an interlingua is shown in Figure 29, borrowed from [Weisweber, 1994].

(Figure: a source language is translated into a target language either directly or via intermediate representations R1 ... Rn; analysis and transfer steps lead towards the interlingua Rm, the most abstract level, and synthesis leads from it to the target language.)
FIGURE 29. A systematization for machine translation models
The figure suggests that translation from the source to the target language can be done at several levels, and that different parts of the translation process are carried out at different levels of representation. The transformation of a language phrase from the source language to the interlingua is done by analysis and transfer. Each of the representations Ri represents the source language at some level of abstraction. The most abstract level is the interlingua, which is the common abstraction of all languages. [Weisweber, 1994] notes that the systematization in Figure 29 can be interpreted in two ways. On the one hand, one representation can be replaced by an adjacent one within analysis or synthesis. Consequently, each representation has to contain, more or less explicitly, the complete information which is necessary for the translation of a source language sentence. This automatically leads to redundancies among the Ri. On the other hand, different representations may contain different information, and so they are without redundancies.

5.3.4 Translation of software models


A maintenance group maintains software developed for several computing platforms, using
several software development environments employing different software modelling formalisms. Each maintainer must know several software development environments, and also their
inherent formalisms.
It may therefore be reasonable to ask whether it is possible to transform software models from one development environment to another, so that all maintenance can be carried out in the same environment. In other words, what we are asking for is:


One common representation of software models, a software interlingua, which captures the

entities and relations and their semantics of all possible software representation models for
all software engineering environments.
The possibility of synthesizing the interlingua to any other representation formalism.

Is this feasible? The data represented in the software development formalism forms a large and
very complex model of a software system, and does not have any meaning unless interpreted
by the overlying software development environment.
The problem of defining a universal representation model for all software development formalisms is really a variant of the problem of automatic translation between two languages. Translation of natural languages has a serious limitation, which is apparent when studying Figure 29: all expressions in the source language must be expressible in the target language. Indeed, this is the case for natural languages, which are the focus of most machine translation.
For software development formalisms, it is possible to construct an interlingua which represents the common expressions of different formalisms in the same manner. However, most formalisms contain expressions which do not have similar counterparts in other formalisms. Interpreted according to Figure 29, this means that an analysis of an expression into a universal representation Rm is possible for all formalisms, but the synthesis of the new target language representation is not possible for all combinations of source and target languages, as their modelling capabilities may differ significantly.
A potential candidate for a software model interlingua would be the WSL language described e.g. in [Bennett et al., 1992]. Transformers exist for converting several source languages into this language, and for transforming back to the original or another language. The problems described above will, however, still persist for languages like WSL.
The complexity of such an interlingua would be extreme, and we do not find any such universal representation scheme feasible. Neither would it be very useful, as modifications would have to be made on the interlinguistic representation by an environment which could only serve as a drawing tool, since semantic relations among the different components do not exist. For example, how would one translate a system designed and implemented using JSD and Cobol to OMT and C++?

5.4 Conclusion
As the discussion on the previous pages has shown, both the available technology of automatic program synthesis and the envisioned technology of software model conversion could potentially be valuable contributions to maintenance: the former by reducing the scope of what needs to be maintained, and the latter by reducing the number of formalisms that need to be understood by the maintainer. The maintainer could maintain at a higher level of abstraction, and could be more knowledgeable in the single formalism in which all systems are maintained.
The discussion revealed, however, that the suggested technologies are a long way from being realized, particularly for large, general systems.
Most software development environments are used for verifying the consistency of a software system. In the end, the system specification is still a document of some sort, where
diagrams and text are intertwined. We believe that this situation will remain for a number of years, and that this is the situation with which most software developers and maintainers feel most comfortable.
This is why we have chosen the approach of providing support for information identification as
our solution to reducing the costs of software system understanding. We now continue with
detailing our solution to this problem.

CHAPTER 6

Specifying Structural Evolution and Understanding It Using a Configuration Language

6.1 Introduction
This chapter describes the PROTEUS1 Configuration Language (PCL). The chapter is a modified version of a paper ([Tryggeseth et al., 1995]) presented at the 5th International Workshop on Configuration Management (SCM5) in Seattle, May 1995. The paper was co-authored with Reidar Conradi and Bjørn Gulla.

1. PROTEUS was project no. 8086 in the European research programme ESPRIT III. PROTEUS ran from May 1992 to May 1995 and had a budget of 9.6 MECUs. Participants were CAP Gemini Innovation (F), Matra Marconi Space (F), CAP debis SSP (D), SINTEF (N), Lancaster University (UK), Intecs (I), CAP Sesa Telecom (F), and Hewlett Packard (F). NTNU was a subcontractor to SINTEF.
To respond to environmental changes and customer-specific requirements, industrial software
systems must often incorporate many sources of variability. Developers use a diverse range of
representations and techniques to achieve this, including structural variability, component version selection, conditional inclusion, and varying derivation processes.
This chapter advocates specifying all potential variability within a system using a single formalism. PCL, the configuration language defined in the PROTEUS project, provides uniform
facilities for expressing and controlling variability in all aspects of a system and its manufacturing process. PCL is supported by a comprehensive tool set and integrated with several
design methods.
The PCL provides us with a formalism for describing and inspecting the overall system structure, and its evolution. The system structure description is important both when we want to
obtain an overview understanding of the system, and for constraining the set of components in
which to search to dynamically extract system traceability information as explained in Section
4.5.2. The mechanisms for achieving this are described in Chapter 7.
The objective of the PROTEUS project was to provide support for system evolution. The project
has developed methods and tools for (1) domain analysis, (2) adapting existing design methods
(SDL, HOOD, MD) to support evolving systems, and (3) modelling system structure and manufacture. This chapter deals with the last issue. We use a simple example throughout this chapter to illustrate the facilities of PCL and how these are supported by the tool set.
PCL, the PROTEUS Configuration Language, is a formalism for system modelling, configuration definition and system manufacture. As systems evolve, large numbers of system and component versions with slightly different properties are created. The objective of PCL is to

support product management in a broad sense throughout the complete system lifetime: manage components and sub-systems, their interconnections, their variability, their evolution and
their potential derivation processes. PCL covers the software, hardware, and documentation
parts of products. We will in this chapter focus on aspects of software management.
The chapter is organized as follows. Section 6.2 gives a compressed state-of-the-art review of work on which PCL is partly based. Section 6.3 presents the PCL language constructs for system modelling, with emphasis on how to express variability. Section 6.4 provides an overview of tool support for the PCL language and the current status of the implementation. Section 6.5 reports some experiences gained so far in the project, while Section 6.6 provides a summary of the chapter. Finally, the full PCL source for the example used in this chapter is listed in an appendix to the chapter (Section 6.7).

6.2 State of the art


A system model is a description of the items of a system and the relationships between them.
For such a model to support configuration management, it must uniquely identify the constituent components, their static structure and their derivation processes. It is a principle in configuration management that the system model must be explicit, unambiguous, and be managed as
the system evolves [Whitgift, 1991].
Module Interconnection Languages (MILs) are a common approach for expressing system models. Sommerville and Dean [Sommerville and Dean, 1994] give an overview of existing module interconnection languages and compare these with the capabilities of PCL. System models
are also employed by current SCM systems, although the model is usually embedded in a tool
or in a database. We have extended the comparison in [Sommerville and Dean, 1994] with
more fine-grained criteria and replaced the description of some MILs with characterizations of
three SCM systems. Table 37 presents a summary of the comparison, which is to some degree
influenced by the concrete requirements expressed by the application partners in PROTEUS.
The requirements assessed in the table can be summarized as follows:
Integrated system modelling: Modelling all aspects of the product in one formalism, i.e. incorporating descriptions of, and interrelationships between, software, hardware and documentation elements.
Multiple structural viewpoints: Being able to express and show several viewpoints of the same system, e.g. its interface, its logical composition and its run-time structure.
Structural variability: The ability to define variability in the logical composition of a system, in interfaces and in relationships in which an entity participates.
Component variability: The ability to represent variability in the concrete system (e.g. revisions and variants of source files), and to allow intensional version selection. Versions should be logically characterized and related to the system model.
Flexible manufacture support: The details of the system manufacture process must be controlled from the system model. Definition of generic yet instrumentable manufacture tasks should be supported. The aggregation of such tasks into a manufacture process should be computed from the system model.


Object-oriented modelling: The extent to which the language uses the concepts provided in object-oriented formalisms, such as classification, inheritance and encapsulation.
User tailorability: The ability to provide an extensible, multi-dimensional classification scheme and offer integration with a range of different design methods. User-defined relations to tailor the modelling capabilities should be supported.
TABLE 37. Support offered by MILs & SCM systems to PROTEUS requirements

(Table comparing MIL75 [DeRemer and Kron, 1976], Cooprider's MIL [Cooprider, 1979], INTERCOL [Tichy, 1979], Jasmine [Marzullo and Wiebe, 1986], SySL [Thomson and Sommerville, 1989], ClearCase [Leblang, 1994], Adele [Estublier and Casallas, 1994], and PCL [PROTEUS, 1994b] against the seven requirements listed above; support for each requirement is rated as None, Limited, or Good. The individual ratings are discussed in the text below.)

The only two formalisms offering direct support for integrated system modelling are SySL and
PCL. In large-scale system evolution it is essential to capture the dependency relationships to
enable successful change management. Incorporating non-software items is also necessary for
proper modelling of distributed applications and embedded systems.
Although MIL75, the original module interconnection language, offered virtually no support
for most of our requirements, it did offer limited support for multiple structural viewpoints.
Later work largely ignored this early insight and still provided only limited support. PCL is
the first language to provide good facilities to model a range of these different structural viewpoints.
Narayanaswamy identified the need for structural variability [Narayanaswamy and Scacchi,
1987], although the proposed NuMIL does not contain constructs for expressing it. In SySL
some variability may be expressed using cardinality on the composition relation. ClearCase
supports limited structural variability by allowing directories to be versioned. PCL allows
structural variability to be explicitly declared, i.e. stating which parts of the system are stable and which parts vary, by using conditional expressions in the system model. It also recognises
that variance can occur within any of the structural viewpoints, and supports reconciliation
across a complete model.
Cooprider was the first to incorporate component variability into the MIL framework. INTERCOL allows structuring this information within the notion of a family, and supports version
selection, i.e. allowing the system to determine which version to use in a configuration. More
advanced SCM systems offer intensional configuration descriptions, consisting of a product
part and a version part. Such descriptions serve as partially bound system descriptions, and
must be expanded into fully bound configurations (extensional lists) by exploiting stored product and versioning information. The sequence of product and version binding varies. MILs
usually first perform product elaboration into relevant product families, and then version binding for each atomic family. In Adele, an intertwined binding process over the product is used,
exploiting preferences and constraint rules. Yet other systems, such as ClearCase and EPOS
[Lie et al., 1989], first perform version binding, allowing transparent access to a uni-version
view.
Automated support for system manufacture was introduced by Feldman [Feldman, 1979] with
the Make system. ClearCase provides more accurate and optimized re-generation by managing
configuration records for derived objects. In Adele manufacture support may be implemented by triggers. PCL advocates user control of recompilation, using automatically generated makefiles tailored to the selected product configuration.
Object-oriented modelling has recently gained popularity in the software engineering community. Some of the principles behind the object-oriented paradigm, such as information hiding and grouping, have been supported in previous languages. Only SySL, Adele and PCL offer extensive object-oriented facilities in their modelling languages.
User tailorability is an important requirement for enabling seamless integration with a diverse set of design methods. Different design methods often need specific types and relations for expressing their architectures, and rather than trying to include all possible ones in one language, an extensible framework should be offered. PCL does just that. Adele allows user-defined object and relationship types, and roles of these.

6.3 System modelling and variability


The aim of PCL is to provide a notation in which all aspects of a system family may be modelled. This includes software, hardware, documentation, possible configurations, how these
configurations are instantiated into a system, and finally how the software parts of an instantiated system are processed into executable programs.
Some of the problems reported by the industrial partners in the Proteus project revealed the lack of a high-level model of the complete product and its potential variability. See [PROTEUS, 1992] for the complete set of requirements for the Proteus Configuration Language and tools. The components are not well documented, and reuse relies heavily on the knowledge of a small kernel of developers ([Floch and Gulla, 1995]):
There is no overview of available components.
No description of similarities and differences between components and component versions is available.
It is difficult to describe dependencies between components.
No description of how components can and should be composed, and of the characteristics of the aggregates, is available.
PCL [PROTEUS, 1994b] defines six distinct entity types for modelling families of systems, as listed in Table 38. An entity description is organized in sections, each consisting of a sequence of named slots. These entity types are related to each other by a set of language-defined relations, as shown in Figure 30. The remainder of this section explains how these concepts and relations are used to support comprehensive modelling of system families.
TABLE 38. PCL entity types and sections

Entity type           Sections
family                classification, attributes, interface, parts, physical, relationships
version description   attributes, parts
tool                  inputs, outputs, attributes, scripts
relation              domain, range
class                 physical, tool
attribute type        enumeration

(Figure: the entity types family, version, class, tool, relation, and attribute_type, connected by the language-defined relations inherits, of, parts, classed_as, domain, range, input, and output.)
FIGURE 30. Language-defined relations

6.3.1 Composition structure


The basic assumption of PCL is that a system can be organized as a layered composition structure at the logical level. This means that one component may be a part of another component,
and may itself have sub-components.
The family entity is the core entity in PCL. All logical components and their structure are
defined by a set of family entities.

6.3.1.1 Logical structure


In the remainder of the chapter we will use a calculator program as a small yet complete example for exemplifying the constructs in PCL. The complete PCL source for the example
used in this section is given in the Appendix. The basic composition structure in the calculator
program can graphically be illustrated as in Figure 31. In PCL this logical system structure is
expressed as:
(Figure: CalcProg composed of the parts Calculator and mathlib.)
FIGURE 31. Logical structure of the calculator program


(i)
family CalcProg
  parts
    calc => Calculator;
    math => mathlib;
  end
end
family Calculator
end
family mathlib
end

The logical composition structure of a system is specified in the parts section. Note the use of
slots (named calc and math in the parts section) in which the actual references to the subfamilies are declared. Since PCL supports entity refinement by specialization, slots are used to
distinguish between items in a section, allowing selective addition, redefinition or removal of
information.

6.3.1.2 Physical structure


Orthogonal to the logical structure is the physical structure of the system, i.e. which files constitute the system and how these are structured in the users workspace. A logical component
may be represented by none, one or several physical components. This information is given in
the physical section. The calculator example can be extended with (omitting the parts section):
(ii)
family CalcProg
  attributes
    HOME : string default "/home/ask/proteus/test";
    workspace := HOME ++ "/calc/src/"; // string concatenation
    repository := "calc/";
  end
  physical
    main => main.C;
    defs => defs.h;
    exe => calc.x
      attributes
        workspace := HOME ++ "/calc/bin";
      end
      classifications
        status := standard.derived; // This is not a primary object
      end
  end
end
family Calculator
  attributes
    workspace := workspace ++ "Calculator/";
    repository := repository ++ "Calculator/";
  end
  physical
    calc => (Calculator.C, Calculator.h);
    expr => (expr.C, expr.h);
  end
end
family mathlib
  attributes
    workspace := workspace ++ "mathlib/";
    repository := repository ++ "mathlib/";
  end
  physical
    files => (
      math_plus.c,
      math_minus.c,
      math_mult.c,
      math_div.c,
      math_sqrt.c,
      mathlib.h);
    lib => libmath.a classifications status := standard.derived; end;
  end
end

For software, a physical object is a file in a certain directory on the user's disk. The directory where a file is located is called the workspace for the file. Typically, the files associated with one logical component tend to have the same workspace. Because of this, PCL defines a special attribute workspace which can be set in the attributes section of a PCL entity. The value of this will by default be the workspace of all files defined in the physical section. It is possible to override this, e.g. as for calc.x in CalcProg.
PCL allows propagation of attribute values along the composition hierarchy to achieve compact and easily manageable models. This is convenient, for example, when an application is moved from one directory to another. In mathlib we see that the workspace attribute is extended with the string "mathlib/" compared to CalcProg's value. Referring to an attribute name on the right-hand side of such an assignment means using the value of that attribute from the nearest entity above in the composition hierarchy; with the declarations in (ii), the workspace of mathlib thus evaluates to "/home/ask/proteus/test/calc/src/mathlib/".

6.3.2 Entity attributes


PCL supports annotation of entities with attributes of two different kinds: information attributes, which provide stable information about an entity, and variability control attributes, which are determined during system instantiation. Syntactically they are distinguished by using the "=" assignment operator for information attributes, while ":=" (or no assignment) is used for variability control attributes.


6.3.2.1 Entity information attributes


Entities may be annotated with a number of attributes of type string, integer, or user-defined
enumerations. There is a pre-defined enumeration type, boolean, whose members are true and
false. We can elaborate the calculator example with attributes for the CalcProg entity:
(iii)
family CalcProg
  attributes
    created_by = "Eirik Tryggeseth";
    created : string = "94/08/12";
    contract_no: integer = 1643256;
  end
end

Since string is the default attribute type, including the attribute type string is not necessary (e.g.
for created_by).

6.3.2.2 Variability control attributes


A family in principle represents a set of potential logical components. The differences between
the individual members of the family are declared by the use of variability control attributes. A
specific member is produced by binding values to these.
Particular members of the family are identified by determining values for all variability control attributes. The example in Figure 32, borrowed from [Gulla and Gorman, 1995], shows how the attributes span an n-dimensional space of variability. If binding attribute target to emulator-stripped, and speed to fast, a unique member of the software family is established. Variability control attributes can in principle be selected independently, although there might be some disallowed combinations, as we will show in Section 6.3.3.3.

(Figure: two dimensions of variability, speed with the values fast and slow, and target with the values host, emulator, emulator-stripped, and prom.)
FIGURE 32. Two dimensions of variability and possible family members
On the logical or architectural level, the breakdown of functionality may be different according
to what situation the entity is used in, and at the physical level, the mapping from logical structure to files may differ, and finally each file may exist in different versions. An example of a
variability control attribute is the status attribute in the example below.
(iv)
family CalcProg
  attributes
    ...
    status: status_type exported default initiated;
  end
end

The status attribute is of an enumeration type. An enumerated attribute type can be declared as:
(v)
attribute_type status_type
  enumeration initiated, module-tested, system-tested end
end

This attribute is not assigned a value as the other attributes in the example, but is rather given a
default value. The default value may be overridden by a new value, taken from a version
description, during system instantiation. See Section 6.3.7 for an explanation of the exported
qualifier.

6.3.3 Expressing variability


The parts section in a family entity defines what parts the entity consists of. Each subpart in a family is declared in a slot which syntactically has the form:
<slot> => [<conditional-expr>] <family-entity>
The types of variability we want to show here are (1) variability in the logical composition structure of a system family, (2) variability in the mapping from the logical composition to the physical objects, and (3) variability in attribute assignments.

6.3.3.1 Structural variability


Variability in the logical composition structure of a system family is expressed by associating a conditional expression with the assignment of a parts slot. Consider again the calculator example. We will extend the example to optionally include a graphical user interface. The original structure of the program was shown in Figure 31. Figure 33 illustrates the modified structure of the CalcProg entity:
(Figure: CalcProg composed of the parts XGUI, Calculator, and mathlib.)
FIGURE 33. Extending the structure of the calculator program
(vi)
family CalcProg
  attributes
    ...
    xgui : boolean default false;
  end
  parts
    ui => if xgui = true then
            XGUI
          endif;
    calc => Calculator;
    math => mathlib;
  end
end


PCL also features entity refinement through inheritance. Inheritance might also be used for expressing variability between family entities. Inheritance is, however, mainly used for achieving economy of description by allowing extraction of common information for a set of family entities. This information can then be declared once in a generic family from which the other families inherit, as sketched below.
Together, these two constructs allow variability on the structural level to be introduced at different levels as needed.
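As a minimal, hedged sketch of inheritance used for economy of description (the entity names are hypothetical, and the exact form of the inherits clause is an assumption based on the inherits relation in Figure 30; the authoritative syntax is given in [PROTEUS, 1994b]):

family generic_component
  attributes
    created_by = "Eirik Tryggeseth";
    status : status_type;            // short-hand: propagated from the nearest entity above
  end
end
family screen_gui inherits generic_component
  physical
    gui => (screen_gui.C, screen_gui.h);  // only the entity-specific information is added here
  end
end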

6.3.3.2 Variability in mapping


In (ii) we defined the Calculator entity to have four files, of which two are related to expression parsing (expr.C and expr.h). PCL can express that the mapping from the logical component Calculator into files contains variability. This is expressed by introducing conditions on the slot assignment in the physical section. In the following example we express that the binding of files to the expr slot depends on the expression attribute. In this case we distinguish whether the expression parsing should be done using infix or reverse polish notation:
(vii)
family Calculator
  attributes
    ...
    expression : expr_type default infix;
  end
  physical
    calc => (Calculator.C, Calculator.h);
    expr =>
      if expression = infix then
        (expr.C, expr.h)
      elsif expression = reverse_polish then
        (rpn_expr.C, rpn_expr.h)
      endif;
  end
end

This allows straightforward and elegant treatment of collapsing, splitting, deleting, and moving files during system evolution. Many traditional configuration management systems have problems with handling this properly.
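For illustration only, a version descriptor with a hypothetical name could select the reverse polish variant by binding the expression attribute; version descriptors and the binding process itself are described in Section 6.3.4:

version rpn-calculator of Calculator
  attributes
    expression := reverse_polish;  // selects (rpn_expr.C, rpn_expr.h) for the expr slot
  end
end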

6.3.3.3 Attribute assignment variability (constraints)


The last form of variability we present here may be used to represent simple constraints among attribute assignments. That is, an attribute may take a special value only if another attribute (or a combination of attributes) takes a particular value, as for the CCFLAGS attribute in the following example:
(viii)
attribute_type os_type
  enumeration sun, vax end
end
family CalcProg
  attributes
    ...
    expression : expr_type default infix;
    debug : boolean default false;
    os : os_type;
    DEBUG := if debug = true then "-g -Ddebug" endif;
    INCL : string := "";
    CCFLAGS : string :=
      if os = sun then
        if expression = infix then DEBUG ++ "-Dsun4"
        elsif expression = reverse_polish then DEBUG ++ "-O2"
        endif
      elsif os = vax then DEBUG ++ "-C -Dvax"
      endif
  end
end

6.3.4 System instantiation


A system family described using PCL defines a set of possible system instances. System instantiation is the process of removing all (1) structural variability and (2) physical mapping variability, and of assigning correct values to the attributes throughout the instantiated system. We call this process binding.
A system is bound in an iterative way, in which the following three activities are performed in an interleaved manner: (a) application of a version description to a family entity, (b) evaluation of attribute expressions, and (c) propagation of attribute values along the composition hierarchy.
A variability control attribute may have its value propagated from another entity in the composition structure. This feature is particularly convenient for resolving logical and physical variability to build a consistent configuration. We declare an attribute to take its value from the nearest entity above in the logical composition hierarchy which has a value assigned for the particular attribute name. We may extend the Calculator entity with the attributes:
(ix)
attributes
  status : status_type := status;
  number : integer := X + 3 default 10;
end

Since declarations of the first form are used in most cases, we allow the short-hand declaration below to mean the same.
(x)
attributes
  status : status_type;
end

The specification of version descriptors in PCL is intensional, i.e. defined in terms of the
desired properties of the final system rather than explicitly enumerating the particular instances
for each component. An example illustrates this:
(xi)
version my-version of CalcProg
  attributes
    os := sun;
    xgui := true;
  end
end


When this version descriptor is applied to the family entity CalcProg in (viii), the following
happens during Bind:
a. Most attributes are bound to their default values, e.g. expression is bound to infix.
b. The os and xgui attributes are bound to sun and to true.
c. The expressions for the attributes DEBUG and CCFLAGS are evaluated to "" and to "-Dsun4", respectively.
d. The structural variability on the ui slot assignment is resolved, so the CalcProg entity in (vi)
is composed of the XGUI, Calculator and mathlib parts.
Thus the CalcProg entity, after instantiation by applying the my-version version description,
looks like:
(xii)
family CalcProg
  attributes
    HOME : string := "/home/ask/proteus/test";
    workspace : string := "/home/ask/proteus/test/calc/src/";
    repository : string := "calc/";
    created_by : string = "Eirik Tryggeseth";
    created : string = "94/08/12";
    contract_no: integer = 1643256;
    status: status_type exported := initiated;
    xgui : boolean exported := true;
    expression : expr_type := infix;
    debug : boolean := false;
    os: os_type := sun;
    DEBUG : string := "";
    INCL : string := "";
    CCFLAGS : string := "-Dsun4";
  end
  parts
    ui => XGUI;
    calc => Calculator;
    math => mathlib;
  end
end

Assume that XGUI, Calculator and mathlib all declare the attribute INCL := INCL;. By default, the value assigned to the INCL attribute in this case is the value of the attribute in the nearest ancestor in the composition structure. Now, for some reason, during system instantiation it is discovered that the INCL attribute needs to have a different value for the XGUI entity. This is achieved by declaring a sub-version descriptor specifying the particular bindings for XGUI (Figure 34 shows how this is visualized with the PCL tool set):
(xiii)
version my-version of CalcProg
attributes
os := sun;
xgui := true;
end
parts
ui => ui-version;
end
end
version ui-version of XGUI
attributes


INCL := -I/local/X11R5/include ;
end
end

FIGURE 34. Visual presentation of a composite version descriptor

Now, when the bind operation fixes the attribute values for the XGUI entity, it uses the values applied to the entity by the ui-version version descriptor. These have higher priority than the values propagated along the composition hierarchy, which are used e.g. for Calculator and mathlib.
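
The priority order just described can be sketched as follows. This is illustrative Python, not the actual Bind implementation: a value applied directly to an entity by a version descriptor wins over a value propagated from an ancestor, which in turn wins over a declared default.

def bind_value(entity, attr, parents, descriptor, defaults):
    """Resolve one attribute for 'entity'.

    'descriptor' maps entity name -> {attribute: value}, as given by a composite
    version descriptor; 'parents' lists the ancestors of 'entity', nearest first.
    """
    if attr in descriptor.get(entity, {}):           # 1. direct binding on the entity
        return descriptor[entity][attr]
    for ancestor in parents:                          # 2. propagation from ancestors
        if attr in descriptor.get(ancestor, {}):
            return descriptor[ancestor][attr]
    return defaults.get(attr)                         # 3. declared default

descriptor = {"CalcProg": {"os": "sun", "xgui": True, "INCL": ""},
              "XGUI": {"INCL": "-I/local/X11R5/include"}}   # the ui-version sub-descriptor
print(bind_value("XGUI", "INCL", ["CalcProg"], descriptor, {}))     # -> -I/local/X11R5/include
print(bind_value("mathlib", "INCL", ["CalcProg"], descriptor, {}))  # -> '' (propagated from CalcProg)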
To assist the system instantiation process for large configurations, the PCL tools provide:
- Partial binding, to iteratively remove parts of the variability in a system model. This is useful for scrutinizing a model covering only a limited set of all possible system configurations.
- Interactive binding, to aid the process by allowing the user to interactively choose between possible attribute bindings whenever Bind cannot compute a value. This is convenient for large and unfamiliar models, where it might be hard to know which attributes exist and which may or must be assigned a value. Automatic construction of correct composite version descriptors is provided by the PCL tools so that the particular instance made during the interactive bind session can be re-instantiated.

6.3.5 Entity classification


PCL provides a framework for classifying family entities and physical objects. Classifications
are also used for defining the domain and range for user-defined relations. Requirements from
the partners in the PROTEUS project have shown that entity classification is a complex task.
Therefore PCL provides an extensible framework from which users can define their own classification hierarchies. PCL basically allows classification along four different dimensions, distinguished by different slot names:
- abstraction: Used to classify the entity according to its level of abstraction. The possible abstractions are system, process and component.
- type: Used to classify the entity as either hardware, software or an amalgam (platform). Processor is a sub-class of hardware, used for entities which can execute software processes. Platform is used for entities which are logically considered as a single entity and which include one or more processors and associated software. Application software is installed on a platform.
- category: Used to specify whether the entity is a document or a program representation produced during the system development process. Possible categories are document and program.
- status: Used to specify whether the entity can be automatically derived. Possible status assignments are primary or derived.


In addition the user may define new classification dimensions, or introduce subclasses of the
pre-defined classes. Default values for classification assignment for family entities are type =>
software, abstraction => component, category => program and status => primary.

6.3.5.1 Classification for relation definitions


In structural models of application systems there may be different kinds of relationships between the system entities. Some, such as the part-of relation, are directly provided by the parts construct in PCL. Others are specific to a particular system or a design method used in conjunction with PCL. We allow these relationships to be documented by providing the users with mechanisms to define legal relationships in their models. The restrictions on which entities may participate in these relationships are specified by restricting the domain and range in the relation definition. Only entities defined with classifications which match the classifications specified as the domain and range of a relationship may participate.
Some relations are pre-defined in PCL, such as requires, implemented-by and installed-on.
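
A minimal sketch of such a domain/range check is given below, in Python rather than PCL. The classification extract is tiny, and the domain and range chosen for implemented-by are assumed here purely for illustration.

# A small, hypothetical extract of a classification hierarchy: class -> its direct subclasses.
SUBCLASSES = {"software": ["document", "program"], "document": [], "program": []}

def matches(classification, required):
    """True if 'classification' is 'required' or a (transitive) subclass of it."""
    return classification == required or any(
        matches(classification, sub) for sub in SUBCLASSES.get(required, []))

def relate(name, domain, range_, source, target):
    """Record an instance of a user-defined relation, checking domain and range."""
    if not matches(source["category"], domain):
        raise ValueError(source["name"] + " does not match the domain of " + name)
    if not matches(target["category"], range_):
        raise ValueError(target["name"] + " does not match the range of " + name)
    return (name, source["name"], target["name"])

design = {"name": "CalcDesign", "category": "document"}       # hypothetical entities
calculator = {"name": "Calculator", "category": "program"}
print(relate("implemented-by", "document", "program", design, calculator))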

6.3.5.2 Classification for system manufacture


Classifications are particularly important for system building, as this process basically consists of finding a relationship between a physical object and a tool that is able to transform the physical object into a new form.
In the calculator example we have
(xiv)
physical
files => (Calculator.C, Calculator.h);
end

To be able to find a tool that may compile the file Calculator.C, we must be sure that the file and the input expected by the tool are of the same type.
For the calculator example, a number of sub-classes of software are defined.
(xv)
class text inherits standard.software
end
class source-code inherits text
end


class cpp-source inherits source-code


tools CC end
physical
name ++ .C;
end
end

From this example, we see that the file Calculator.C matches the classification cpp-source.
Figure 35 shows a part of the full classification hierarchy.


FIGURE 35. Extract from the PCL classification hierarchy
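
The matching of a file name against a classification via the declared physical name patterns can be sketched as follows. This is illustrative Python; the suffix table is taken from the class definitions in (xv) and the chapter appendix, while the matching rule itself is simplified.

# Suffix patterns declared in the 'physical' sections of the class definitions.
CLASS_SUFFIX = [("cpp-source", ".C"), ("c-source", ".c"), ("c-header", ".h"),
                ("obj-code", ".o"), ("library", ".a"), ("exe-file", ".x")]

def classify(filename):
    """Return the class whose declared suffix matches the file name, else a general class."""
    for cls, suffix in CLASS_SUFFIX:
        if filename.endswith(suffix):
            return cls
    return "text"

print(classify("Calculator.C"))   # -> cpp-source, so the CC tool can compile it
print(classify("libmath.a"))      # -> library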

The next section explains how we use this classification information to support system manufacture.

6.3.6 System manufacture (building)


Borison [Borison, 1986] defines software manufacture to be the process by which a software
product is derived, through an often complex sequence of steps, from the primitive components of a system. PCL provides constructs to define customizable tasks in software system
manufacturing. The PCL tools use such descriptions to find the correct steps needed in a particular system manufacture process to build e.g. an executable program.
Section 6.3.4 describes how variability is removed in a PCL model. This step identifies the
system configuration as a set of family members at the logical level. Variability is also
removed as physical objects are mapped to a file version group and further to a specific version
as described in Section 6.3.7. The configuration is then completely defined, with all variability
removed.
From such a configuration description the system manufacture process may begin. The tool entity in PCL defines the signature and behaviour of software tools which can transform a representation from one form to another, or more generally, transform a set of input representations to a set of output representations. A C++ compiler may be modelled in the following way
using the tool entity:
(xvi)
tool CC
inputs
InSrc => cpp-source;
end
outputs
OutObj => obj-code;
end
attributes
CC : string default CC ;
CCFLAGS : string default -c ;
INCL: string default ;
end
scripts
build := CC ++ CCFLAGS ++ INCL ++ -o ++ OutObj
++ -c ++ InSrc;
end
end

The inputs section specifies that the CC tool can transform physical objects which are classified as cpp-source into physical objects classified as obj-code as specified in the outputs section. This constitutes one step in the system manufacture process. The behaviour of this step is
defined in the scripts section, where two pre-defined script slots may be given an expression:
1. The build script, which specifies how the actual tool invocation on the command line is formatted. This is a catenated string expression. The CC tool entity declares three attributes which are used in the string expression. The values of these attributes are propagated from the physical object which the tool transforms. If no value is found there, the value defined for the enclosing family entity is used, or a recursive search along the system composition structure is initiated until a value is found (a sketch after this list illustrates the lookup). This facilitates customization of every manufacture step. As an example, the file Calculator.C, if the enclosing family is bound with the version descriptor in (xiii), is transformed by the following command line:
CC -Dsun4 -o Calculator.o -c Calculator.C
Since attribute INCL in entity XGUI is bound to another value, the C++ file in that entity
would be derived with
CC -Dsun4 -I/local/X11R5/include -o xgui.o -c xgui.C
2. The depend script, not used in the CC tool description, specifies the command line for
(source-level) dependency extraction for this tool. The form of this script is similar to the
build script.
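
The attribute lookup and string catenation performed for the build script can be sketched as follows. This is illustrative Python, not the PCL tools; the attribute values are those of the bound model in (xii) and the ui-version descriptor in (xiii), and the recursive search is simplified to a list of scopes.

def build_command(tool_defaults, scopes, in_src, out_obj):
    """Format one CC manufacture step.

    'scopes' holds attribute dictionaries from the most specific to the most
    general: the physical object, its enclosing family, and the families above
    it in the composition structure.
    """
    def value(attr):
        for scope in scopes:
            if attr in scope:
                return scope[attr]
        return tool_defaults.get(attr, "")

    parts = [value("CC"), value("CCFLAGS"), value("INCL"), "-o", out_obj, "-c", in_src]
    return " ".join(p for p in parts if p)          # drop empty fragments

cc_defaults = {"CC": "CC", "CCFLAGS": "-c", "INCL": ""}
calcprog = {"CCFLAGS": "-Dsun4", "INCL": ""}        # bound values from (xii)
xgui = {"INCL": "-I/local/X11R5/include"}           # bound by the ui-version descriptor

print(build_command(cc_defaults, [calcprog], "Calculator.C", "Calculator.o"))
print(build_command(cc_defaults, [xgui, calcprog], "xgui.C", "xgui.o"))
# CC -Dsun4 -o Calculator.o -c Calculator.C
# CC -Dsun4 -I/local/X11R5/include -o xgui.o -c xgui.C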
As physical objects are transformed to new representations, which again may be further transformed, the system derivation graph is built. This information is emitted to a makefile which
can be utilized by the Make program [Feldman, 1979]. The makefile generation process can be
customized in different ways, as shown in Figure 36. Generating and maintaining the makefiles for the different system configurations by hand is an expensive and error-prone task.

FIGURE 36. Menu for customizing makefile generation.

The system derivation graph for the calc program is shown in Figure 37.

FIGURE 37. Derivation graph for the calc example
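
Building on the derivation graph of Figure 37, the step from derivation graph to makefile can be sketched roughly as below. This is illustrative Python; the derivation steps, file names and command lines are hypothetical and much simplified compared to what MakeGen actually emits.

# Hypothetical derivation steps: (target, prerequisites, command line).
STEPS = [
    ("Calculator.o", ["Calculator.C", "Calculator.h"], "CC -Dsun4 -o Calculator.o -c Calculator.C"),
    ("libmath.a", ["math_plus.o", "math_minus.o"], "ar rv libmath.a math_plus.o math_minus.o"),
    ("calc.x", ["main.o", "Calculator.o", "libmath.a"], "CC -o calc.x main.o Calculator.o libmath.a -lm"),
]

def emit_makefile(steps):
    """Emit one make rule per derivation step."""
    rules = []
    for target, prereqs, command in steps:
        rules.append(target + ": " + " ".join(prereqs) + "\n\t" + command + "\n")
    return "\n".join(rules)

with open("Makefile.generated", "w") as out:
    out.write(emit_makefile(STEPS))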


6.3.7 Repository management


A PCL model refers ultimately to a set of physical objects. For elements classified as software,
a physical object corresponds to a file. These files must typically be versioned, since they
evolve over time and may exist in several variants. Real systems contain a large number of
files, and over time there will be a vast number of file versions with subtle differences. If all file versions and their particular characteristics were represented inside the PCL model, the model would soon become impractically large. In PROTEUS we have therefore chosen a two-tier approach, in which file versions and their properties are managed by a special
component library called the Repository.
Version selection is the process of determining a consistent set of versions for all elements in a
configuration. Basically, this process consists of finding a unique version identifier for each
element, so that the resulting configuration is consistent and possesses the desired properties.
PCL supports intensional version selection adopted from the Adele system [Estublier, 1985].
Version selection is done by the Select operation. It transforms a bound PCL model to a
selected model by adding explicit version identifiers to the description of each physical object
stored in the Repository. For example, the following PCL fragment:
(xvii)
family CalcProg
physical
main => main.C;
defs => defs.h;
...

might be transformed into:


(xviii)
family CalcProg
physical
main => main.C attributes repository_version := 5.14.2.4; end;
defs => defs.h attributes repository_version := 4.22; end;
...

The intensional, attribute-based version selection works as follows. For each physical object
referenced in the model, the classifications are used to determine if it is supposed to exist in the
Repository (i.e. classified as software and primary). If so, Select queries the Repository for the
best matching version for the object. The submitted query includes all attributes defined in the
family which are declared with the exported qualifier. If successful, a unique version identifier is returned.
The following example illustrates some of the available operations when stating version selection queries over attributes:
(xix)
version Calc_test of CalcProg
attributes
status >= module-tested;
time := max; // The latest version
author <> bj.*;// Note the use of regular expression
// The time and author attributes are automatically inserted
// for any version when checking it into the Repository.
end
end


This descriptor will select the latest file versions which have reached at least status module-tested and which are not entered by a user having a name starting with bj. The Repository resolves
such queries by investigating the properties of all versions of a component. Version properties
are expressed as attributes, i.e. user-defined name-value pairs. A user typically associates
attributes to characterize a version when checking it into the Repository, or after having tested
configurations in which the version occurs. It is the responsibility of the user to choose appropriate attributes which discriminate between versions.
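
How the Repository resolves such a query can be sketched as follows. This is illustrative Python; the version records, attribute values and time stamps are hypothetical, and the actual Repository query language is richer than this.

import re

# Hypothetical version records for one file, with user-defined attributes.
VERSIONS = [
    {"id": "5.13",     "status": "initiated",     "time": 100, "author": "bjorn"},
    {"id": "5.14.2.3", "status": "module-tested", "time": 200, "author": "eirik"},
    {"id": "5.14.2.4", "status": "system-tested", "time": 300, "author": "eirik"},
]
STATUS_ORDER = ["initiated", "module-tested", "system-tested"]   # from status_type

def select(versions):
    """Mimic Calc_test: status >= module-tested, author <> bj.*, time := max."""
    ok = [v for v in versions
          if STATUS_ORDER.index(v["status"]) >= STATUS_ORDER.index("module-tested")
          and not re.match(r"bj.*", v["author"])]
    return max(ok, key=lambda v: v["time"])["id"] if ok else None

print(select(VERSIONS))   # -> 5.14.2.4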
Upon a successful Select, the resulting selected PCL model may be further used to check out
the configuration from the Repository and possibly build the configuration. A selected PCL
model ensures reproducibility: it uniquely defines a system instance which may be re-created.
To summarize, the following Repository operations are available for a PCL model:
- Select: invoke intensional version selection.
- Check In: check in all changed files of a configuration, and optionally attach a set of attributes to each new version.
- Check Out: establish or update a workspace by checking out all files of a specific configuration.

6.4 Tool support


A comprehensive tool set to support the creation and use of PCL models has been developed. It
includes a graphical structural editor for entering and browsing PCL models, an interactive
PCL compiler, and a graphical browser for inspecting and manipulating the contents of the
Repository. PCL models are organized in libraries with explicit prefixing for inter-library
entity referencing. The PCL compiler supports parsing of textual PCL descriptions, binding of
models, version selection, and makefile generation. Figure 38 presents an overview of the core
PCL tool set. In addition, there is a simple reverse engineering tool for constructing a rudimentary PCL description of existing software products. Figure 39 shows the user interface for the
PCL compiler and the Repository browser.
The tool set is implemented in C++ using X11 and the OSF/Motif™ toolkit. The core tool set is
about 60 KLOC. It is currently available for Sun and HP workstations. BMS, a selective multicast implementation provided by CAP Gemini, is used for tool integration, both of the PCL
tool set itself and for integration with external design tools. The Repository is currently implemented on top of RCS [Tichy, 1985].

6.5 Experiences from PCL use


PCL and its tool set are currently being validated at four different partners in the PROTEUS project, on applications ranging from telecommunications software to system development tools. Reported benefits include (see also [Gulla and Gorman, 1995]):
- Increased system visibility, i.e. recording and formalizing knowledge previously distributed and unavailable (person dependent). This system documentation is essential for controlling the system evolution (change impact analysis), but in addition it has proved valuable for internal communication and training.


FIGURE 38. Tool overview (the PCL editor and Repository browser operate on the PCL model and the Repository; the PCL compiler applies Bind to a PCL model and a version description to produce a bound model, Select to produce a selected model, and MakeGen to produce the makefile(s))

- When the system architecture is made visible to the maintainers, they have increased their understanding of this architecture. Knowledge which was previously available only in the minds of a few maintainers is now made visible to all members of the maintenance team.
- Integrating the system manufacture process into the system configuration support has been acknowledged by several of our partners. Manual maintenance of makefiles and shell scripts for each system variant is avoided.
- People outside the development team may specify and build a release, based on desired properties expressed by customers.
- The test space, i.e. the set of configurations which must be tested after changes, is made explicit.
As a system evolves, structural changes need to be reflected in the PCL model. In order to ease
the creation and maintenance of PCL system models, different strategies have been chosen.
For one partner, a CASE tool has been tightly integrated with the PCL tool set, providing automatic propagation of changes. For file-based software systems, the PCL Reverse tool is able to
both generate an initial PCL model and to check consistency between a system model and an
actual system version in a workspace.
FIGURE 39. PCL compile main window and Repository browser.

The PCL was evaluated in a Norwegian company which experienced problems with configuring their system family for deliveries to different customers. Their software product consists of
500,000 lines of code, and a proprietary configuration description of 10,000 lines. The configuration description is unreadable for all but a few experts in the company.
A PCL model was developed for controlling the software product build process. The resulting
PCL model consisted of ca. 4000 lines of PCL code; of these, ca. 600 were part of the standard
PCL library, and 1300 lines were an automatically produced PCL template reflecting the directory structure of the example product. The makefile generated for a particular configuration
was about 6000 lines long. The PCL model can be used for building different system instances.
In addition, the PCL model contains more information than what was available in the configuration file and makefile priorly used by the company. The PCL model describes system structural variability, version selection of associated physical objects, differences in tool parameters
etc.
Another evaluation, performed in a different company and presented in [Gulla and Gorman, 1995], showed that the size of the files needed to support the system building process could be reduced by 90% by using the PCL. In addition to supporting the system building process, the
PCL model expressed structural system information which was earlier invisible to the maintainers.


6.6 Summary
In this chapter we have presented the PROTEUS Configuration Language and its supporting tool
set. PCL supports comprehensive system modelling and provides expression of variability in
the logical system model, in the mapping from the logical model to files, in the version selection, and finally in the system manufacture process. Intensional system configuration using
attribute assignment provides configuration binding and system building in a concise and
reproducible manner.
We have illustrated the important concepts in PCL with a small but complete example. The
example has been annotated with screen dumps from the PCL tools.
Experience shows that it requires some effort to build a comprehensive system model, especially if trying to incorporate all potential variability in an industrial product. However, the
benefits in terms of improved system visibility and automation are significant.
The implementation of the PCL tool set was performed by Gilbert Rondeau, Ariane Suiss and
Sergio Calabretta from Cap Gemini, and Bjørn Gulla and the author from NTNU.

6.7 Chapter appendix: The calculator example


Below follows the complete PCL description for the calculator example. Note that the latter half of
this description is independent of the actual example system, allowing it to be shared among
different models.
version my-version of CalcProg
attributes os := sun;
xgui := true;
end
parts ui => ui-version; end
end
version ui-version of XGUI
attributes INCL := -I/local/X11R5/include ; end
end
family CalcProg
attributes created_by = Eirik Tryggeseth;
created : string = 94/08/12;
contract_no: integer = 1643256;
HOME : string default /home/ask/proteus/test;
workspace := HOME ++ /calc/src/;
repository := calc/;
status: status_type exported default initiated;
xgui : boolean exported default false;
expression : expr_type default infix;
debug : boolean default false;
os : os_type;
DEBUG := if debug = true then -g -Ddebug endif;
INCL : string := ;
CCFLAGS : string :=



if os = sun then
if expression = infix then DEBUG ++ -c -Dsun4
elsif expression = reverse_polish then DEBUG++-c -O2
endif
elsif os = vax then DEBUG ++ -C -c -Dvax
endif;
end
parts ui => if xgui = true then XGUI endif;
calc => Calculator;
math => mathlib;
end
physical main => main.C;
defs => defs.h;
exe => calx.x
attributes workspace := HOME ++ /calc/bin; end
classifications status => standard.derived; end;
end
end
family XGUI
attributes INCL := INCL; end
physical files => xgui.C; end
end
family Calculator
attributes workspace := workspace ++ Calculator/;
repository := repository ++ Calculator/;
expression : expr_type := expression default infix;
status : status_type := status;
INCL := INCL;
end
physical calc => (Calculator.C, Calculator.h);
expr => if expression = infix then
(expr.C, expr.h)
elsif expression = reverse_polish then
(rpn_expr.C, rpn_expr.h)
endif;
end
end
family mathlib
attributes workspace := workspace ++ mathlib/;
repository := repository ++ mathlib/;
INCL := INCL;
end
physical files => ( math_plus.c, math_minus.c, math_mult.c,
math_div.c, math_sqrt.c, mathlib.h);
lib=> libmath.a
classifications status => standard.derived; end;
end
end
attribute_type status_type


enumeration initiated, module-tested, system-tested end


end
attribute_type os_type
enumeration sun, vax end
end
attribute_type expr_type
enumeration infix, reverse_polish end
end
tool CC
attributes CC : string default CC ;
CCFLAGS : string default ;
INCL: string default ;
end
inputs InSrc => cpp-source; end
outputsOutObj => obj-code; end
scripts build := CC ++ CCFLAGS ++ INCL ++ -o ++ OutObj
++ -c ++ InSrc;
end
end
tool cc
attributes cc : string default cc ;
CFLAGS : string default ;
INCL : string default ;
end
inputs InSrc => c-source; end
outputs OutObj => obj-code; end
scripts build := cc ++ CFLAGS ++ INCL ++ -o ++ OutObj
++ -c ++ InSrc;
end
end
tool ar
attributes AR : string default ar ;
ARFLAGS : string default rv ;
RANLIB : string default ranlib ;
end
inputs InObj : multi => obj-code; end
outputsOutLib => library; end
scripts build := AR ++ ARFLAGS ++ OutLib ++ ++ InObj ++
\n ++ RANLIB ++ OutLib;
end
end
tool ld
attributes LD : string default CC ;
LDFLAGS : string default ;
LIBS : string default -lm ; end
inputs InObj : multi => obj-code;
InLib : multi => library; end



outputs OutExe => exe-file; end
scripts build := LD ++ LDFLAGS ++ -o ++ OutExe ++ ++
InObj ++ ++ fixlib ++ InLib ++ ++ LIBS;
end
end
class text inherits standard.software end
class source-code inherits text end
class binary inherits standard.software end
class cpp-source inherits source-code
tools CC end
physical name ++ .C; end
end
class c-header inherits source-code
physical name ++ .h; end
end
class c-source inherits source-code
tools cc end
physical name ++ .c; end
end
class library inherits binary
tools ld end
physical lib ++ name ++ .a; end
end
class obj-code inherits binary
tools ar, ld end
physical name ++ .o; end
end
class exe-file inherits binary
physical name ++ .x; end
end


CHAPTER 7

Elaboration of a Framework for Supporting System Understanding

7.1 Introduction
In Chapter 2, we concluded, after a review of several survey investigations, that the most serious problem of maintenance was that of lacking documentation. This was strongly supported
by our experiment described in Chapter 3. The reason for this being perceived as a major problem is that the software system is more difficult to understand when documentation is not
available. Studies have shown that maintainers spend as much as 50-60%1 of their time trying
to understand the systems, and that the maintenance costs typically account for more than
50%2 of the total life cycle costs.
Our approach to reducing the costs of software maintenance is to reduce the time spent on system understanding. We focus on the requirement that the application must be properly documented, and that this documentation must evolve with the application; the system must be in internal equilibrium during the whole evolution period. We recognized that system documentation was necessary on two principal levels:
For the evolutionary system family at the system architecture level.
For the logical components which comprise the system family.

In Chapter 6 we showed a solution to how the evolutionary structure of a system family could
be described at an architectural level. The formalism used to describe the evolutionary software architecture was a configuration language, the PCL. By using the PCL, both the logical
and the physical structure of the software system can be described, as well as the evolution of
the system in a system family.
While working on the construction of the PCL, we felt that maintainers needed a deeper understanding of the system than the architectural information provided by the PCL can give. The PCL does not provide support for guiding maintainers in understanding the internals of the configured system. This chapter outlines our proposal for a framework which helps maintainers utilize the available documentation to reduce the costs of software maintenance by reducing the time they need to spend on trying to understand the system.
The rest of the chapter is organized as follows:
1. [Fjelstad and Hamlen, 1979], [McClure, 1992], [Devanbu et al., 1991].
2. See overview given in Chapter 2.


- In Section 7.2 we discuss the overall functionality needed for such a framework, and present an example which we will use throughout the chapter.
- In Section 7.3 we describe some thoughts for a framework for system understanding.
- In Section 7.4 we sketch our proposed solution, and discuss problems which can be encountered when trying to relate software components in different ways.
- In Section 7.5 to Section 7.11 we describe our proposed framework in detail.
- Section 7.12 presents other work related to the framework which we present in this chapter.
- Finally, Section 7.13 provides some concluding remarks to the chapter.

7.2 Goals for a framework for system understanding


This section describes the problems which will be attacked in our proposal for a framework to
support system understanding. In Section 7.2.1 the overall objective or goal of the framework
is first presented, and the benefits from reaching this goal are discussed. A set of steps identified in the process of trying to understand a software system is described in Section 7.2.2. Finally, in Section 7.2.3 an example is sketched which we will refer to throughout the chapter.

7.2.1 Overall objectives


The overall objective of our framework is the following:
Utilize the available system documentation to present to the maintainer the information necessary for complying with a modification request. The information presented should provide sufficient knowledge to understand the current requirements to the software regarding
the modification request, the corresponding design information, implementation, the perceived state of the software product, and the state which is presented to the user.
As we will show later, meeting this objective requires disciplined development and maintenance processes. The benefits, however, are significant:
- The time spent on locating information is reduced.
- The learning process is natural, from the abstract to the concrete. It should no longer be necessary to scrutinize a large number of lines of source code to locate and understand a particular functionality of the system.
- Since the learning process is simplified, the time spent on obtaining sufficient knowledge is reduced.
- Pressure on experienced maintenance personnel is reduced; they no longer need to explain the system to inexperienced maintainers.
- Focus is on more formal written documentation, and less on oral communication.
- Inexperienced maintainers can be more productive in a shorter time, thus increasing the overall productivity of the maintenance organization.
- The maintenance organization is less vulnerable when key maintenance personnel move to other positions.
- The costs of software maintenance are reduced.

In order to meet this objective and reap the stated benefits, our model requires that the software
system is in internal equilibrium. The concept was discussed in Chapter 4.
We know that two other high priority problems of maintenance are the high frequency of new
modification requests from users, and changing priorities from management. This certainly argues against maintainers spending extra time on documentation issues, but it is our firm opinion that this must be done to battle the crisis of software maintenance. The argumentation
for this standpoint should be clear from the experiment results in Chapter 3 and the discussion
of maintenance economics in Chapter 4.

7.2.2 The process of understanding software systems for evolution


The process of building up an understanding of a (part of a) software system includes several
steps. These include activities such as (in no particular order):
- Understand the objective(s) of the modification request.
- Locate the source code components which cause a problem issued in a modification request.
- Understand how a component is related to other components.
- Understand how a specific functionality is realized in a source code component.
- Isolate the problem reported in the modification request to a set of components (both documentation and code).
- Identify what needs to be done to correct the reported problem.
- Determine whether the changes necessary to comply with one modification request intersect with changes needed to comply with another request.
- Communicate with other maintainers to acquire information.
- Understand the information needs of the enhancement asked for in the modification request.
- Locate and understand the information, and its sources, needed to comply with the request for enhancement.
- Identify the logical location in the system structure for inserting the enhanced functionality.
- Identify components which are potentially affected by a modification, and localize those that really are.
- Search for documentation about the system. Several sources of information are necessary to consult:
  1. The requirements must be consulted to find out what the specified functionality was supposed to be.
  2. The design must be verified against the requirements.
  3. The source code must be verified against the design.

  4. The user manual must be checked to find out whether the reported problem was due to a user mistake.
  5. The test documentation needs to be consulted to check whether the functionality has been previously tested, and if so, the test report must be consulted.
- The change history (other system configurations) must be checked to ensure that the problem has not been reported before. If it has been reported, a solution to the problem may already have been incorporated in a more recent system release. Another possibility is that an alternative workaround exists. The user can then be told about this, and changing the software may be unnecessary.

The list above is not meant to be exhaustive, but rather to give an impression of the activities which are included in the system understanding process.

7.2.3 The problem that will be attacked


In this section we give a description of the problem which we want to attack.
Briefly, we want to be able to identify all types of components which are related to any other type of component. Consider the following example:
The Invention Inc. software company develops a software product, "Farmers Assistant"
(FsA) which is used to control the feeding processes at different type of farms. The software
has been customized for different types of farms, ranging from duck farms to elephant
farms.
The software product controls the distribution of food to each individual animal by regulating how much food is inserted into the animal's eating place. All this is done automatically. The only responsibility of the breeder is to ensure that the food depots are not
empty.
Large parts of the different installations are identical, but since each type of breeding
requires special routines for feeding and the type of food to be used, several modules must
be customized to meet these special requirements.
The FsA system has been on the market for several years, and has had great success in several countries. Invention Inc. knew from the beginning that the system would be used for several years at all installations. The farmers' bad economy does not allow frequent investments of this size.
The management of Invention Inc. realized from the start that to be able to maintain the system for 10-15 years, they needed to document all changes made to it. Invention Inc. started
out as a small company, but their success required that as many as 250 software engineers were working on evolving the FsA in the late 1980s. A recession in the agricultural market in the beginning of the 1990s meant that Invention Inc. had to let go of almost 80% of
the software engineers.
Since 1994, the farming market has been recovering, as new resources have been invested in farming. The market started to demand more efficient feeding technology, and the FsA reported a 94% increase in revenues last year. To be able to keep up with the market's requirements and new technology (both in the software environment and in feeding technology), Invention Inc. hired 42 new software engineers in each of the last two years.
The new software maintainers find it hard to understand the details of the system, even though documentation exists for every system installation. A typical installation is very large, around 1 MLoC of C and C++ code, and the accompanying documentation spans several thousand pages. Since so few of the software engineers who were employed at the end of the 1980s are still with the company, it is difficult to obtain explanations of unclear issues. A lot of time has therefore been spent on trying to understand how the system works, how the different components interact, and how different installations share components. Since the amount of system documentation is so large, the system's design cannot be described in one document. In fact, more than 100 documents describe the design. Additionally, the requirements to the system are spread across several files, at least one for each
new breeding type. This is also the case for the user and system installation manuals. Much
time is therefore wasted on locating the correct documents when the maintainers need to
find information to understand the FsA. When changes are made, much time is also used to
locate the documentation that need to be updated.
Management finds that the system maintenance is much less cost effective now than in the previous blooming period of FsA, when the system was maintained by software engineers who had been part of the initial FsA evolution.
Since the system is sufficiently documented, and great care has been taken to separate functionality which is common among all installations from less common or special functionality, there is no need to start a large redocumentation process. Documentation exists; the problem is the sheer amount of it.
Technology for managing requirement traceability has been considered, such as the RTM
product (Requirement Traceability Management) from GEC Marconi, but such products
require that they are used from the beginning of the software development cycle.
The management of Invention Inc. hopes that the maintainers will acquire sufficient knowledge over time, so that their efficiency will improve, but it continues to search for technology which can help to improve maintenance productivity.
In the next section we will discuss requirements for an enabling technology which will help
Invention Inc. to increase their maintenance productivity.

7.3 Requirements to the understanding support system


7.3.1 Introduction
We discuss solutions which can be adopted in a support system to simplify the activities of system understanding. We give an overview of the discussion in Table 39, which shows position statements, and positive and negative arguments to the positions. A positive argument is prefixed with a + and a negative one with a -. A negative argument placed to the right of another negative argument objects to that argument, and not to the position itself. The hierarchical breakdown of the table is preserved in the following discussion.
The position statements are described in subsections, like Section 7.3.2. The first-level arguments (middle column) are described in non-indented bullet lists with a con/pro indication after each bullet. The second-level arguments are discussed in indented bullet lists with a con/pro indication after the bullet.



TABLE 39. Overview of positions and arguments

Position: Continuous documentation updates
    - Time consuming
        - Training costs
        - Turnover of maintainers
        - Costs are earned over time
    + Prevents legacy problems
        + Cost of legacy
    + Solves documentation problems
Position: Use experienced people
    - Not possible
        + No career opportunities
    + OK for short-lived systems
    Development experience ratio increases
Position: Textual description or report format
    - Not informative
    + Not CASE specific
    + Higher abstraction
Position: Automatic component identification
    - Requires detailed schema?
        + Schema may evolve
        New technology
        Technical considerations
    + Schema may be unnecessary
Position: Support for modification planning
    Division of roles
Position: Hypertext system
    - Not natural
    + Direct links to related information
    Restriction of freedom
    On-line access to documentation
    Development = Maintenance

7.3.2 Continuous documentation updates


It is essential that the system is in internal equilibrium when a maintainer tries to understand
the reasons for a reported problem. If not, the maintainer may be faced with the following difficulties:
1. Assume that the system was in equilibrium, and that a perfective modification Prf1 was made without ensuring new equilibrium. Another maintainer, not aware of Prf1, forward-engineers1 another modification Mod which conflicts with Prf1. The modifications introduced by Prf1 may conflict with the changes needed to implement Mod. Since the changes introduced by Prf1 are not documented, the engineer implementing Mod reverses these and makes the changes needed to finish his job. He believes that, in addition to fulfilling the Mod changes, he also caught an error in the implementation (Prf1), since the source code component differed from its specification. Instead, an error is introduced. Errors introduced in this way are particularly difficult to catch, for the system worked sufficiently when the Prf1 changes were tested.

1. I.e. engineers the modification from specification to implementation.
2. Now, assume the opposite: The system is not in equilibrium, and everybody knows that.

When a maintainer without intimate knowledge of the system wants to make a modification, looking at the documentation is dangerous. Instead of obtaining valuable knowledge,
he can be given a wrong understanding of how the system works. When turning to the
implementation, he finds that what is specified is not what is implemented, and additional
time must be used to find out what is really implemented. Heading directly to the code to
build up his understanding would perhaps have been better.
The above examples show the importance of ensuring internal equilibrium in the software system. They also show the importance of ensuring equilibrium after every change, so that maintainers can trust the documentation at any time.
(Con) Updating is time consuming and increases costs.

Experienced maintainers need not update documentation when changing the code. This is
just a waste of time, and hence money. The experienced maintainers never use the design
documentation because they have intimate knowledge of the application's source code. As
we argued in Chapter 4, this may particularly be the case for one-time systems when the
pressure from the environment is low and stable.
(Con) Training costs will be higher

The danger of not updating the documents when changing the source code is that the
training costs when new maintainers join the team will become very high, probably
higher than the extra costs incurred in the short run by taking the time to ensure equilibrium. The new maintainers are not productive for a long time after their employment. As
we saw in Chapter 3, the subjects in our experiment were almost twice as productive
when documentation was available compared to when it was not. Recall that none of the
subjects had any experience with the application.
It is accepted that maintainers with experience in maintaining the application are more productive than those who do not have this experience. From the review of the maintenance investigations in Chapter 2, we recall that one of the major problems was the turnover of maintenance personnel. We can depict this as in Figure 40.
FIGURE 40. Maintainer profile & knowledge vs. experience (number of maintainers and system knowledge, up to 100%, plotted against system experience)

The declining line reflects the experience profile of the maintainers in a maintenance
organization which has problems with turnover. The lower of the two S-curves reflects
the system knowledge typical for a maintainer with a given system experience, given that


documentation is unavailable. The upper S-curve represents the same when the updated
documentation is available, i.e. the system is in internal equilibrium.
(Con) Turnover of maintainers

When an experienced maintainer resigns, all undocumented features of the code are lost
with him. This is a negative consequence of not updating the documentation.
(Con) Costs are earned back over time

The initial costs of updating the documentation continuously will be earned back by saving training costs of newly employed maintainers. The employment of new maintainers
to the maintenance team will reduce the productivity of the experienced ones as they
have to transfer their system knowledge to the new ones (walking documentation). The
productivity of the new maintainers will also be below average for the maintenance
group before a certain level of skill and application knowledge is acquired. The costs of
doing continuous documentation updates must therefore be compared to the productivity
losses of engaging new inexperienced maintainers.
(Pro) Prevents legacy problems

A legacy system is a software application on which an organization is dependent to run its everyday business. Legacy systems can be characterized along three dimensions:
1. They are old. Since the organizations are dependent on these systems, they have obviously been in operation for several years. The system controls large amounts of data
which are important to the organization. Typical ages of legacy systems are 5-20 years.
2. They are old-fashioned. Due to their age, the technologies used to develop the systems are
no longer standard practice in the software production departments. It is a burden to
maintain the systems, as e.g. [Bennett, 1995] points out: ... use of small main storage
meant that programmers had to save space by using variable aliasing, and single, very
large data structures, clarity and structure were traded off for program speed. In addition, due to the restrictions of old computers, large portions of the legacy systems are
coded in assembler language. Users complain about old-fashioned interfaces, and lack of
integration with newer applications.
3. They are aged. To use Parnas' terminology, as discussed in Section 2.3.2.2, the legacy
systems have been successful in their organization. Due to this success, the systems have
been subject to patchwork changes, and hence the clarity of the original structure is deteriorated. In addition, only small portions of the systems have had the corresponding documentation updated.
These factors contribute to making legacy systems hard to maintain, and they are often subjected to redevelopment through reengineering1. One of the hardest problems of re-engineering is to
understand how the systems worked in the first place. This problem is often tackled through
the use of reverse engineering technologies. This is both expensive and time consuming.
1. When we described the evolution of a patchwork, rather than the evolution of the system, we
described the basic assumptions of reengineering research. When the patchwork has reached a specific level of complexity, the costs of maintenance have exceeded the profits. The organization has
then two choices: Should the maintenance of the system be stopped, or should it be reorganized so
that maintaining it becomes more efficient. The last answer to the question is the basis of the field of
reverse engineering and reengineering. For an overview of reverse engineering and reengineering,
consult [Arnold, 1993b].


If the documentation for these systems had been continuously updated, the problems related both to old-fashionedness and to aging would be avoided. Hence the difficulty of maintenance would diminish, and reengineering for the sake of making maintenance easier would
not be necessary.
(Pro) Cost of reverse engineering of legacy systems

Legacy systems are often subject to reverse engineering or reengineering when the organization starts using new computing platforms. The old architecture may not be suitable for, for example, client/server technology. Today, adapting the legacy systems to a new computing platform often requires a major reverse engineering step prior to redevelopment. This additional step would be superfluous if the proper documentation were updated and available. The costs associated with this step could therefore be substantially reduced. A more thorough discussion of this can be found in Section 4.3.
(Pro) Solves documentation problem

Obviously, if the documentation were updated, one of the major problems reported in the maintenance investigations described in Chapter 2, namely the problem of unavailable documentation, would have been solved. Reverse engineering technologies, as discussed above,
can help an organization to build relevant documentation for a legacy system. However, this
is done when the organization has suffered maintenance problems for a period of time. The
right cure should be preventive, as proposed here, and not reactive, as when using reverse
engineering.

7.3.3 Use experienced people


If only people who are very experienced with the system and its domain are used for maintenance, the cost will be reduced. [Lientz and Swanson, 1980] found that applications which
were maintained by maintainers with a high development experience ratio correlated significantly with less maintenance effort. Lientz and Swanson defined the development experience
ratio to be the number of people maintaining an application who had participated in its original
development, divided by the total number of maintainers on the application. In their investigations, Lientz and Swanson found a development experience ratio of 0.48. Remember that the
cost of maintenance in their study accounted for 49% of the total life cycle costs.
As an aside here, we note that [Krogstie, 1994c] found that few organizations in Norway used maintenance as a training position for new employees. In the cases where this was done, the new employees were assigned easy tasks, such as changing the appearance of some screen layout. Also, as should be clear by now, being an experienced developer does not qualify one as an experienced maintainer. Maintenance managers should also be aware of a productivity paradox, reported for example in [Jørgensen, 1994]. The productivity measure used for maintenance should be objective and normalized. Jørgensen found that when the productivity of experienced and inexperienced maintainers was measured using a number-of-changed/added/deleted-lines measure, the inexperienced maintainers scored higher. This could be interpreted to the benefit of the inexperienced maintainer. However, it would contradict established models from empirical data. Jørgensen argued that this paradox could be attributed to one of two things: the experienced maintainers had more intimate knowledge and reused more
old code when making changes; or the experienced maintainers were assigned more difficult
maintenance tasks, and hence had to use more time to study the system before actual changes
could be done.


(Con) Not possible

Successful software systems are long-lived. It is impossible to plan personnel resources and
assignment of this personnel to last for such a long time. It is most probable that the entire
maintenance group of a software system has changed over that period.
(Pro) No career opportunities

Maintenance work has traditionally been regarded as a task in which a software engineer
starts, and ends, his/her work. Career opportunities have been few for engineers who
have decided to dedicate their professionalism to maintenance. This has resulted in several models for filling the needs for maintenance engineers. Some mentioned in literature
are: Placing fresh employees in maintenance, rotating maintenance chores among the
developers in the software organization, and compulsory periods in the maintenance group
before advancing in the development career path.
All this indicates that it is a dangerous bet to trade the benefits of the (isolated) extra
costs of documented systems with the risks of continuous staffing in the maintenance
organization.
(Pro) OK for short-lived systems

When a software system has a planned lifetime of 2-5 years, it may be reasonable to use the
same people for maintenance during the whole period. The personnel can be offered compensations like e.g. higher salary and career opportunities at system closure, to ensure stability in the maintenance group during this short period.
However, if supporting technologies could ensure a steeper knowledge acquisition curve for
maintenance (i.e. more knowledge acquired in less time, compare with Figure 40), keeping
maintenance personnel to control maintenance costs would not be necessary. New personnel would be more productive, and the internal equilibrium property of the system would
make the maintenance organization less vulnerable when key personnel leave.

7.3.4 Textual description or report format


Understanding is best supported by reading the documentation accompanying the thing to
understand.
(Con) Not informative

The following argument can be used against focusing on textual reports for software maintenance: It is not sufficient to rely only on textual documentation for today's complex systems. Most development organizations use one or several software development
methodologies to analyse, construct and implement the systems they deliver. These methodologies are typically supported by their own set of supporting tools, including editors to
draw design schemata, and a data dictionary to ensure consistency and completeness across
the design. Exchanging this with only textual documentation would not give enough conceptual power, and the size and complexity of the systems to be designed would be
restricted.
(Pro) Not CASE specific

So why focus on textual information; why not use a CASE tool which supports a formalized software development method? The reasons for this are twofold:


1. With the work in this thesis, we do not want to constrain our proposals to particular

CASE tools. The ideas and propositions made in this thesis could either be realized
directly, or be adopted by CASE tool vendors and included in their project support environments.
2. Development and maintenance are often performed by different groups. The personnel in

the different groups have different qualifications in using CASE tools. Several CASE
tools may have been used during the development phase (for analysis, requirements,
detailed design, etc.). Maintenance personnel may not be trained using these tools. Different tools may have been used for similar phases in different projects. This means that
maintainers must be experts in using different CASE tools, and not only in maintaining
the systems. When development is performed by outside consultants, the organization
taking over the system for maintenance may not even have licenses for the expensive
CASE tools used by the consultants. Adding even more costs to the maintenance budget
by introducing these CASE tools seems like a bad idea.
We do not claim that textual documentation should be only pure text. Textual documentation is the set of reports produced at milestones in the software development process. These
reports should be evolved with the application. Rather than calling it textual documentation,
we can call it documentation, including all kinds of documents, such as figures, databases, files
from a word processor, models from a CASE tool incorporated in reports, etc.
However, in this thesis we have to restrict ourselves, as identifying concepts from several
development methods is not trivial. We have to define our system representation, and
restrict our framework for maintenance support to that. We do this in Section 7.4.
(Pro) Higher abstraction

When you want to understand something, reading about it is a good idea. Consider an
example of a video-cassette recorder (VCR): The typical VCR has an abundance of functionality. In addition to recording and playing video cassettes, there are indexing functionality, repeated recording facilities, easy programming using ShowView codes, automatic
positioning for presetting channels, etc. When purchasing a new VCR, you expect to find all
this functionality, but how to access it is not obvious. The front panel of the VCR contains
buttons for the most basic functionality, like the main power switch, the eject button, and
buttons for starting and stopping the cassette. The rest of the functionality is typically accessible through the remote control. The buttons on the remote control are symbolized with different symbols. To activate functionality, the user typically has to press several buttons in a
predefined sequence. Finding out all this is possible by trial and error, but can more easily
be learned by reading the user manual for the VCR.
For the same reason, all source code components should be documented in written reports.
These reports are the user manuals for the maintainers. The maintainers use these reports to
gain knowledge about the system at a higher level of abstraction than source code.

7.3.5 Automatic component identification.


It must be possible to automatically identify components that contain useful information about
some given problem. Given a modification request, the maintainer should be presented with all
relevant information which may provide the maintainer with the knowledge needed to handle a
particular problem. This information should cover all aspects of the reported problem: the related requirements, design, test cases, and user manual, in addition to the source code modules. The information should be presented to the maintainer so that no further information
should be necessary in order to fulfil the modification request.
(Con) Requires detailed schema

A detailed system representation schema is required in order to automatically identify the components related to a given problem. This schema must be fixed over the whole system lifetime. When information is to be found, we must know what kind of information to look for, and where to find this information.
(Pro) Schema may evolve

The system representation schema cannot be fixed during the entire system lifetime. If
technology or information needs change, it must be possible to evolve the system representation to meet the new requirements. An example is the introduction of a new CASE
tool in the organization. Some modules may be redeveloped using the support provided
by this new CASE tool. The output from this tool may be very different compared to
what is available for the other system modules. Thus incorporating this information in
the system representation schema is very hard (if not impossible).
Maintenance is performed simultaneously over several releases. Different representations are used for different kinds of information. The problems arise when the same kind of information is represented in different ways. Due to the large number of releases
maintained, the system information cannot be transformed to a common representation
schema. This means that the queries extracting the necessary information about a problem must exist in different versions simultaneously, and strict rules must be adhered to in
order to use the right queries for the right releases.
(Pro) Schema may be unnecessary

We have chosen to view the documentation accompanying the application as a set of reports
produced at milestones and evolved with the application. This means that a detailed schema
is not necessary for extracting information from the different reports. We can define the set
of reports which must be evolved, and how the reports should be organized. Component
information across reports can be determined either through specified references for the different reports, or by automatically extracting references based on some query. This means
that the reports can be handled as one body of information, regardless of the CASE tool
which produced them.

7.3.6 Support for modification planning


The primary concern of automatic information identification is to assist the maintainer in obtaining an understanding of a component (or set of components) which comprises a particular functionality of the system.
We propose that support for modification planning is needed to aid the maintainer in finding
dependencies to other parts of the system. The dependencies are to other system components
which may be affected when changing a set of components to overcome a particular problem.
Knowing which system components need to be changed, and the sequence in which the changes should be made, is a difficult system understanding task.
Support for planning the modification of changes can be divided into two categories:

1. Information needed to understand how to comply with one modification request: Assume that the problem reported in a modification request is localized to a particular set of components (0). The types of these components include parts of reports (requirements, specifications, test plans, user manuals) as well as source code modules. Other parts of the system may need to be changed when changes are made to this set of components. The maintainer should be presented with a list of other components which potentially must be changed. This list includes the components which use the functionality related to the reported problem (1), subcomponents of the components which were modified due to the problem report (2), and components which aggregate the modified components (3). The numbers in parentheses relate these sets of components to the hypothetical structures shown in Figure 41.
FIGURE 41. Component structures (hypothetical source code structure and report structure; component sets 0, 1, 2, and 3 as referenced in the text)

When a modification has been performed, the components which potentially have to be modified due to this change should be identified, so that the maintainer can assess whether these need to be changed.
2. Information needed to understand how to comply with several modification requests: When

several modification requests are received and must be complied with simultaneously, a
change plan for performing the changes in an optimal and controlled manner should be presented. If several requests involve changes to the same components, it is important to know
this before the actual changes are made. If this knowledge is used, conflicts can be avoided,
and personnel can be more efficiently used.
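To make the two categories above concrete, the following is a minimal sketch (in Python) of how the affected-component list of item 1 and the conflict report of item 2 could be computed. The relation maps, component names, and modification request identifiers are hypothetical and are not part of our framework's implementation.

    def affected_components(changed, uses, part_of):
        # Sketch of item 1: given the changed components (0), collect their users (1),
        # their subcomponents (2), and the components which aggregate them (3).
        # 'uses' maps a user component to the components it uses;
        # 'part_of' maps a part to the whole it belongs to.
        users = {u for u, used in uses.items() if used & changed}
        subcomponents = {p for p, whole in part_of.items() if whole in changed}
        aggregates = {part_of[c] for c in changed if c in part_of}
        return users | subcomponents | aggregates

    def conflicting_requests(request_components):
        # Sketch of item 2: report pairs of modification requests that touch the
        # same components, so the changes can be planned to avoid conflicts.
        requests = list(request_components.items())
        conflicts = []
        for i, (ra, comps_a) in enumerate(requests):
            for rb, comps_b in requests[i + 1:]:
                shared = comps_a & comps_b
                if shared:
                    conflicts.append((ra, rb, shared))
        return conflicts

    # Hypothetical example:
    uses = {"report_module": {"stat_module"}}
    part_of = {"stat_helper": "stat_module", "stat_module": "application"}
    print(affected_components({"stat_module"}, uses, part_of))
    print(conflicting_requests({"MR-1": {"stat_module"}, "MR-2": {"stat_module", "gui"}}))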

7.3.7 Hypertext system


All documentation, including code, could be developed in a hypertext system. By this we mean an editor which allows the insertion of traversable links between any two items.
If such support were available, a methodology would be needed for using it properly during system development and maintenance. Both the facility for automatic identification of information relevant to a component and the automatic change planning facility would benefit from this.
(Con) Not natural

It is not natural for developers to write documentation in a hypertext system. When documents are created, the developer's mind is focused on the item currently under construction.
It would be expensive for a developer to interrupt the flow of work to include hypertext
links to other documents, and from other documents to what the developer currently works
on. In addition, it would be difficult to determine when to include hypertext links to components which are currently under development.

(Pro) Technical considerations

Other technical considerations also need to be taken into account. Among these are
1. Concurrent write access to documents. A hypertext link has a start point and an end point. If developer A works on the document containing the start point, he also needs write access to the document which contains the end point. This document may be in use by another developer.
2. A conflict resolution scheme which prevents users from overwriting each other's modifications. If an end point of a hypertext link is removed from a document due to changes in that document, the start point still exists. The system must be able to detect such situations, so that actions can be taken. The action taken can either be to remove the start point, or to move the end point to a part of the document which is not removed. In the latter case, the hypertext link must be checked so that its relevance is not lost.
(Con) New technology

There exist several document editors which allow users to concurrently modify the same
document. Such editors are termed multi-user editors1 or co-authoring systems. Two
well known multi-user editors are GROVE ([Ellis et al., 1991]) and Quilt ([Leland et al.,
1988]). Quilt also provides for the insertion of hypermedia links.
The modifications made to the document by different users are marked with a particular colour. Some multi-user editors allow users concurrent write access to the whole document (e.g. GROVE). This means that two people can modify the same sentence, or even the same word, at the same time. Other editors provide for segmentation of the document (e.g. Quilt), allowing write access to a particular segment for one user at a time. However, the suggested usage of the editors is that you should not modify what someone else has modified, i.e. something that is marked with a colour. Some systems even provide an intensity-based colour scale on the modified parts, such that recently modified portions of text
have a high intensity, while less recent modifications have less intensity. After a certain
time period, the colour marking is reset, making it safe for other persons to modify the
regions. [Ellis et al., 1991] identifies the need for a notification mechanism in multi-user
editors so that a user can be notified when other users make changes which affect his
work.
The most popular editors2 used for document formatting do not permit concurrent modification. A conflict resolution scheme must be found to handle concurrent updates in
this case. Document locking provided by a versioning system is a basic mechanism in
this case. However, file locking and the associated resolution protocols to gain access to
a locked document may reduce the productivity. Significant portions of the time may be
used to negotiate access to documents currently locked by others.
Most recent versions of popular word processors allow users to create hypertext links.
The links can be to places within the same document, or to other documents.
1. For an extensive list of multi-user editors, see the URL: www11.informatik.tu-muenchen.de/cscw/multiusereditor.html. This list is compiled by Michael Koch at the Department of Informatics, Technical University of Munich, Germany.
2. For example, FrameMaker, Word for Windows, Word Perfect.

(Pro) Direct links to related information

If all information about the software system was organized in a large hypertext system, the
maintainers would automatically have access to all pieces of related information by following the hypertext links.

7.3.8 Development = Maintenance


If transferring knowledge from the development organization to the maintenance organization is a big problem, joining the maintenance and development departments should be considered.
(Con) Restriction of freedom

The nature of work in the development department is very different from the work in the maintenance department. Mixing the roles of maintainers and developers will restrict the professional freedom of a person acting in both roles. As a maintainer she may feel that the development project administrator demands too much on the development side, compared to the effort which has to be put into the maintenance work. On the other hand, acting in the developer role, she may feel that responding to critical change requests interrupts the continuous, innovative work which has to be done in development.
(Pro) Development experience ratio increases

The number of persons in the maintenance project with experience from the development
phases of the project will increase if development and maintenance departments are joined.
[Lientz and Swanson, 1980] showed that this had a positive effect on the maintenance productivity. However, as discussed in Section 7.3.3, obtaining this increased ratio can be difficult.
(Pro) Division of roles

Developers design software systems with a specific architecture in mind. As the system evolves, this architecture gradually deteriorates because the initial architectural idea is not understood, is not technically feasible for the evolution patterns, or because the persons responsible for the system evolution disagree with the structure of the original architecture. Gradually, the system architecture changes into a patchwork.
We propose that professionals who participated in the original design of the system should be responsible for the architectural evolution of the system. This means that persons from the development groups shall participate in the maintenance work, but without responding to critical, short-term requests from customers. The development and maintenance organizations are therefore partly joined: some work on development only, some on both development and maintenance, while a third group of persons is responsible for the short-term operation of the maintenance projects.
Thus the software production organization should be split into three professional categories:
Software architects. This group consists of senior staff, particularly system analysts, software configuration management professionals, quality controllers, and designers. These have the responsibility of determining the original software architecture. They also have the responsibility of evolving the architecture during the maintenance phase, as well as assessing the maintainability of the software when transferred from development to maintenance. The PCL could be used as their tool of choice.

Software developers. This group consists of system analysts, designers, programmers, and testers. They are responsible for the development of the first release of the software together with the software architects.
Software maintainers. This group of people consists of designers, programmers, and testers. They are responsible for meeting the level of functionality required by the customers after the first release. The maintainers are guided by the software architects in order to ensure a controlled evolution of the system.
We depict this scenario in Figure 42.

FIGURE 42. Role division in software system production (Software Developers, Software Architects, Software Maintainers)

7.4 Problems of software component relations


7.4.1 The proposed solution
Our proposed solution requires that we are able to identify relations among different kinds of documents and the source code elements which actually implement the software system.
For example, a high-level design specification element may support two requirements. This high-level design specification may in turn be decomposed into two more concrete design specification elements, which describe the implementation of a set of source code files. The functionality which is specified in the design element can be described to the user in a section in the user manual. A requirement and the design elements which support that requirement may relate to a test case which formulates system input and output for correct system behaviour. We would also like to know which system components are related to a particular logical system component (i.e. a PCL entity).
Figure 43 defines the symbols which represent the system elements described in the example above. For the remainder of this chapter, we assume that these are the different types of components of which a software system is comprised.
FIGURE 43. System element symbols (legend: design spec element, logical system component, requirement, source code member, user manual element, test report)

Figure 44 shows a possible set of relations identified among a set of components of a system. We see that requirements 1 and 2 are supported by design spec element 1, which is in turn decomposed into two more concrete design spec elements, e.g. chapters.

FIGURE 44. A feature relation example

The advantages of having this information available are obvious, and twofold:
If an error is reported, or a modification is asked for, then once the offending source code member is found, we can walk back through the relation graph to find the source of the bug and the impact of the needed changes.
If a requirement changes, everything which supports that particular requirement can readily be identified, and by following relations up and down the relation graph, you can identify which components your proposed changes will affect.

7.4.2 Problems of solution alternatives


7.4.2.1 Static vs. dynamic documents
If the documents were static, i.e. the documents did not change, at least two straightforward solutions exist for representing the relations:
Hyper-links: The creator of the document could insert a link from a particular place in

one document which points directly to a particular point in another document. In order to
be able to traverse the link in both directions, information about the to/from places must
exist in both components.
Link database: When a relationship exists between two different documents, a database

entry can be made to record that e.g. line 13 on page 15 in document X describes the
requirements to function A which is described on line 32 on page 43 in document Y.
The hyper-links and the links recorded in the database could be of different types so that different relationships among components can be expressed.
Now documents, and particularly software documents, are seldom static. In particular, if the system should constantly be in internal equilibrium as we propose, the documents would constantly change.
All the links described in the examples above would then have to be updated. If 10 lines of text
are added in the beginning of the document, line 13 on page 15 would now contain different

information than it originally did. The ends of the relationships must have some mark-up wrapped around them so that they can be identified independently of their position in the document. The label and ref commands of the LaTeX text formatting package are a good example of this.
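As an illustration of such mark-up, the following sketch (Python) records links against symbolic anchor names and resolves their current positions only when needed, so that inserting text elsewhere does not invalidate the link. The <<anchor:...>> marker syntax, the file names, and the link type are purely hypothetical, chosen here in the spirit of the LaTeX label and ref commands.

    import re

    ANCHOR_RE = re.compile(r"<<anchor:([\w-]+)>>")   # hypothetical mark-up

    def anchor_positions(document_text):
        # Map each anchor name embedded in a document to its current line number,
        # so links survive insertions and deletions elsewhere in the text.
        positions = {}
        for lineno, line in enumerate(document_text.splitlines(), start=1):
            for name in ANCHOR_RE.findall(line):
                positions[name] = lineno
        return positions

    # A link is recorded against anchor names, not against page/line numbers:
    link = {"from": ("reqdoc.txt", "req-funcA"),
            "to": ("design.txt", "design-funcA"),
            "type": "is supported by"}
    design = "Chapter 3\n...\n<<anchor:design-funcA>> Design of function A\n..."
    print(anchor_positions(design))   # {'design-funcA': 3}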
Still, a considerable amount of updating is needed if large parts of the document are changed. To be able to relocate old links in the documents, the documents must be under configuration management with version control. Consider two documents A and B. When document B is changed to B', four options exist for each link into the changed document:
Delete the link into the old version of the document.
Leave the link pointing to the old version of the document.
Leave the link pointing into the old version of the document, and add a similar link pointing to the appropriate place in the new document.
Move the link from the old version of the document to the appropriate place in the new document.
If the hyper-link approach is chosen, document A must be changed because document B is
changed, and so must all other documents which have links into document B.
If the link database approach is chosen, all database entries which mention document B must
be inspected and possibly changed. In addition, the database itself must be versioned, so that
relationships among older document versions are available if someone needs to inspect the old
documents1.
Ideally, the link data should either be directly implicit in the document structure, or automatically derived. Keeping a large link database up-to-date by hand is prohibitively expensive and
error-prone. It can be considered to be a relative of, but an even larger problem than, keeping
make dependencies up-to-date by hand. Remember that the Proteus tools provide automatic
generation of makefiles, as reported in Section 6.3.6.

7.4.2.2 Data dictionary vs. literate programming


How should the documents be organized in order to be able to find the needed relations? The
two extremes are what we can call the data dictionary and the literate programming
approaches.
The first is to store all information about a given software object in one place, and derive the requirements, architecture spec, design spec, and code all from one set of sources. This is the "literate programming" notion.
In practice, literate programming is useful for combining two kinds of documents into one
(commonly code and documentation), but it seems to break down when you try to combine
three or more documents (e.g. adding in test data or requirements). The additional data typically mentions more than one component, hence cannot be properly contained in one particular
component. In cases where the components are extremely modularized, like APIs or libraries, the literate programming approach can work, since all document types, such as requirements, architecture, design, implementation, and user documentation, have a coherent tiling.
1. This is often the case when customers which have an old installation need support, either to fix a bug or to have an enhancement implemented.

If this is possible for the particular system, all information that describes a particular source
code component is located in one place, actually in the same file as the source code itself.
The second is to automatically find all references to each software element from whatever disparate sources mention it, and compile an index so that all relationships are logically linked
together (perhaps "hyperlinked"). This is the "data dictionary" notion. The data dictionary has
been particularly popular for integrated software development environments, where all possible relations are predefined by database schemata. The problem with this approach is that the
development environments are not capable of containing all information produced during a
software project. This additional information cannot be included and linked in the data dictionary.
Thus, the data dictionary approach should be extended to include all components which span the software system, and become a kind of system encyclopedia rather than a mere data dictionary.
Both the literate programming and data dictionary approaches require good support for automated construction, but for opposite reasons: Literate programming requires complicated
processing to get specific views of the software system for various purposes. The data dictionary approach requires parsing and indexing tools to cross-reference all the views that pre-exist.
What they have in common is that the "link data" is relative to a version-neutral "spacelike slice" of the whole software project.
Since the data dictionary contains all extracted relations of all predefined types, the dictionary
cannot be kept up-to-date by hand. Furthermore, the data dictionary approach is best suited for
development of one-time or shrink-wrap types of software. These types of software projects
always work on the latest configuration, and maintaining the data dictionary by incremental
evolution seems to be a good solution in this case. For customized type projects, this approach
may be more cumbersome, as different configurations of the system exist and evolve concurrently; this means that several overlapping data dictionaries must be maintained simultaneously.
The problem of ensuring that the right versions of the different types of components are related is not evident for the literate programming approach, since all types of documentation regarding one component are physically collocated. Thus maintaining the relationships for one
component is fairly easy. This is probably the main advantage of literate programming over the
data dictionary approach. However, as we described above, the literate programming approach
is only suitable for particular types of systems.

7.4.3 Different approaches for relating components


We can identify three different approaches to relating components in a software system. We describe them here in order of increasing flexibility:
1. Relations defined by a schema.
2. Relations inserted by manual interaction.
3. Relations inserted by relating extracted features which are defined by the user or the system.
We discuss each of these three approaches below.

7.4.3.1 Relations predefined by a schema


We find relations of the first type in different kinds of software development environments. These can range from design notations to CASE tools. The user has control of the relations of all components which are designed using the notations; however, there are some drawbacks:
First, if not all of the system is designed using these notations, or if the data produced by

the environment is not publicly exportable, it is not possible to relate this data to other
information produced during the project.
Second, the relationships which can be expressed are constrained by those predefined by the schema. If the user wants to add relationships to produce new interesting relations,
this cannot be supported by the environment. There may be no support for defining the
relationships in the first place, and the tool supporting the development method has no
rules for either maintaining or visualizing the relationships.
The positive side of expressing relations in this way is that relations are automatically generated, and maintained as part of the system maintenance, hence little extra effort is needed for
maintaining them. Since the relations follow a predefined schema and are inserted automatically, there is a guarantee for the correctness of the relation. (Logical correctness of the design
is of course not guaranteed.)

7.4.3.2 Manually inserted relations


Manual insertion of relations provides the user with far more flexibility than following a rigid schema of allowed relationships. This approach also requires that a method exists for capturing the existence of a relation, either through direct links or in a database. All kinds of relations can be inserted by the user, not only those defined by a schema.
While the same relations can be inserted manually as could be inserted automatically using the first approach, this approach is much more effort-intensive and error-prone.
Both inserting and maintaining the relations add considerable effort to the
maintenance process. As described earlier, all relations which reference a component must be
checked for correctness when this component has changed.
As the relations are inserted by a user, there is a possibility that relations which are needed by another user at a later stage are omitted. This, and the fact that the user may insert incorrect relations and omit updating relations after component changes, makes this approach very error-prone.

7.4.3.3 Automatically extracted relations


The last way of relating components of a software project is to automatically capture relations
in a set of components. This automatic capture is based on capturing rules specified either globally for the whole system, or locally as a response to a user query. Based on the degree of detail of the user's query, complex relationships can be extracted and reported.
The relationships can either be stored, as are those which are manually inserted, or be generated dynamically at the user's request. When the relations are stored, they can be used directly by the user just as for the other two types. However, the maintenance of the relations is not as error-prone as for the second approach; after changes, the relations are just regenerated. The computational cost of regenerating the relations is determined by the complexity of the relations, the size of the system, and the cleverness of the algorithm for regenerating them.

If all relations are generated after each change, this approach may be costly. If the query mechanism is powerful, only a restricted set of the relations of a component need to be regenerated.
If the mechanism is not so clever, a lazy approach may be chosen, i.e. a relation need only be
regenerated when someone attempts to traverse the relation. We can also choose not to store
any of the relations, and only generate those that are needed at any time.
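The lazy variant described above can be sketched as follows (Python). The extraction function and component names are hypothetical; this is only an illustration of the idea, not our framework's implementation.

    class LazyRelationCache:
        # Relations for a component are recomputed only when someone traverses them
        # after the component has changed. 'extract' is a caller-supplied function
        # computing the relations of a component.

        def __init__(self, extract):
            self.extract = extract
            self.cache = {}      # component -> extracted relations
            self.stale = set()   # components changed since last extraction

        def component_changed(self, component):
            self.stale.add(component)

        def relations_of(self, component):
            if component not in self.cache or component in self.stale:
                self.cache[component] = self.extract(component)
                self.stale.discard(component)
            return self.cache[component]

    # Hypothetical extraction function:
    cache = LazyRelationCache(lambda c: {"%s is documented by %s.doc" % (c, c)})
    print(cache.relations_of("ntrincal"))
    cache.component_changed("ntrincal")   # relations regenerated on the next traversal
    print(cache.relations_of("ntrincal"))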
When we introduced the three types of approaches for relating software components, we stated
that this last approach was most flexible. It is also the approach which is potentially the most
error-prone. However, if certain naming conventions are followed, as we will show later in this chapter, the errors produced by this approach are that it may show false relations. The identification of a relation between two components is determined by the rule or query applied. A specific query will pinpoint the correct relations, but a loose query may result in relations which are not relevant to the user's need.

7.4.3.4 Summary
We conclude the comparison of the three approaches with a summary of our observations in
Table 40.
TABLE 40. Comparison of relation capturing approaches

Issue              Pre-defined                       Manually inserted        Automatically extracted
Initial cost       Low                               High                     Low
Maintenance cost   Low                               High and error-prone     Low
Correctness        All specified are captured,       Relations may be         All captured, but depending on
                   but not all that we may need      omitted                  query detail, possibly too many
Storage required   Part of model                     Extra storage required   Extra or no storage
Flexibility        Limited                           High                     High

The approach using automatic extraction as the capturing mechanism is subject both to errors of omission and to errors of false relations. The existence of such errors does not in itself preclude the usefulness of this approach. What is important is that the number of valuable links found using this approach is significantly higher than the number one would want to maintain manually, and that the ratio of valuable relations to false ones is large. If this is true, the few false relations may be overlooked; if not, we have the needle-in-a-haystack problem. We will show later how we can ensure that the number of false relations is kept low, using a particular set of rules for naming software entities.

7.4.4 Automatic updates of stored links


We argue that automatic updates of links stored in a database can be dangerous. Consider the
following example:
A requirement X is supported by design elements A, B, and C. When requirement X is
deleted, all relations from A, B and C must be deleted as well. Some of design elements A,
B and C may have to be deleted as well, since only requirement X was the reason for their
existence. This would in turn affect other component types of the system. If some of A, B

and C support other requirements, they need not be deleted, just modified. Now if requirement X is changed, how will this affect the links from A, B, and C to the requirement? Which of the design references are no longer valid is something that a human must decide.
If, for instance, the example above were at the source code level, this could have been done automatically. Consider the following:
For some source code components, assume that component X defines a set of variables
which are used in components A, B, and C. There is an include relation from A, B, and C to
X. If some of the variable definitions are removed, there may no longer be an include relation from some of A, B, and C to X. This can be determined automatically by parsing the
code.
Thus for formally defined representations, such as source code, the links among components can be automatically updated.
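As a small illustration of how such source code relations can be recomputed by parsing, the following sketch (Python) regenerates include relations and checks which of the defining component's identifiers are still referenced. It assumes C-style #include directives with double quotes and a known list of identifiers defined in the included file; the file names in the comments are hypothetical.

    import re
    from pathlib import Path

    INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

    def include_relations(source_files):
        # Map each source file to the local headers it includes.
        relations = {}
        for path in source_files:
            text = Path(path).read_text()
            relations[path] = set(INCLUDE_RE.findall(text))
        return relations

    def identifiers_still_used(source_path, defined_identifiers):
        # Return the identifiers defined in the header that the source file still references.
        text = Path(source_path).read_text()
        return {name for name in defined_identifiers
                if re.search(r"\b%s\b" % re.escape(name), text)}

    # Hypothetical usage: if X.h no longer defines variables used by A.c, the include
    # relation from A.c to X.h can be dropped the next time the code is parsed.
    # print(include_relations(["A.c", "B.c", "C.c"]))
    # print(identifiers_still_used("A.c", {"max_intake", "species_count"}))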
What does this imply? We could argue that this implies that all documentation produced in a
software project should have a formal representation, such that all different relations could be
automatically maintained. However, for reasons argued previously in this thesis, we do not
believe that formalizing the software development notations is the best approach.
Now, consider the following example which arose during a discussion. The example expresses
many of the misunderstandings which are related to traceability links in software projects.
Suppose you modify a requirement, such that some of the design pointers to that requirement are no longer valid. Which of the design references are no longer valid is something that a
human must decide. If you use a technique of automatic extraction based on naming references, you would have to give the requirement a new name, and then go to all the documents that reference the old name and update them appropriately. In other words, using
names to derive the relationships is an easy way of implementing the relationships, but it
does not solve any of the update problems that occur when the documents are changed in
non-trivial ways.
We do not agree either with the assumptions or the conclusions in this example. The assumption of the example is that relations are just flags which show that two components are connected. This is not true. Indeed, a relation shows a connection, but there is some semantics
associated with the connection. It expresses that the two components interact so that care must
be taken when any of them are changed. Indeed, humans must inspect whether the relation is
still valid after the change. If it is not valid, the other component may have to be changed as
well to maintain the validity, or the relation may perhaps be removed. This shows the importance of a relation. Now, if the change made to the requirement implies that only some of the components initially supporting the requirement still support it, actions may be taken either to split up the requirement, or to remove the components which no longer support it.
When relations are used, we are able to know which components we must inspect, and decide what actions need to be taken. If relations were not used, there would be no way we could know these connections, and the total system would lose its structure and start aging. After the appropriate actions are taken, the connections are updated correctly again, either by an automatic approach or by changing the relations manually.
The changes may have resulted in other interesting connections among the changed components. This may not be evident to a maintainer who updates the relations manually. If an approach of updating the relations automatically is chosen, such new interesting relations can
be successfully captured automatically.

7.4.5 Conclusion of section


It is important for system understanding to have relations which show different kinds of connections among components. If the documents were static, i.e. they did not change, we could
insert all the interesting relations into the documents as we identified them. However, software
documents change, and a dynamic way of extracting interesting relations is preferable, both
since it is more cost effective to extract relations like this, and because it is less costly to maintain the relations if they are automatically extracted.
Relations among components which have a formal specification can be automatically updated after a change. This is for example the case for relations among source code components. Such relations typically describe syntactical aspects of the code. Consider the relation function F changes variable V. If function F is modified so that it no longer changes variable V, the existence of this relation in the relation database will not affect the system behaviour. When the program is
parsed again the change will be detected, and the relation database will be updated correctly.
For relations among documents, we should be careful in automatically updating manually
inserted relations when the documents change. These relations must be inspected manually.
This makes the manual insertion technique expensive. If relations among documents are
extracted automatically, they can be regenerated automatically. However, as expressed in the
discussion of the last example, the existence of a relation shows that special care must be taken
to ensure the correctness of the related component when the component on one side of the relation is changed.
We propose that relations among software document components should be extracted automatically, and that there is no need for storing them in a database, as this provides for better maintainability and more flexibility. In particular, we propose that relations should be extracted
dynamically at the request of a user, so that no explicit relations are stored, but generated based
on the query issued by the user.
We propose that the relation information for source code members of the software system
should be extracted and versioned with the source code member. The relevant relations to
extract are defined later in this chapter. To extract these relations, the source code member must be parsed. As this parsing may take some time for large files, it is best to do it per file when a changed file is checked in, instead of doing it for each checked-out configuration.
Relations which connect source code members to appropriate documents are also generated dynamically. The PCL architectural description of the system is used to constrain the number of document and source files to inspect in order to compute all the relations which are relevant for a particular document element or source code member.
In the next section we define the types of relations that we believe are interesting for supporting software understanding, and the strategies we select for extracting these from the system elements.

7.5 Relation types among system components


We distinguish the terms relation and relationship as proposed in the Proteus project [PROTEUS, 1994b]:
1. A relation is a definition of a set of possible links between entities. In a relation, you may define the relation name, domain, and range. The domain and range are expressed in terms of family entity classifications.
2. A relationship is an instance of a relation, that is, it is established between two classified entities. Relationships are directed between entity A and entity B (say). We refer to A as the source and B as the destination of the relationship. The source classification must conform to the domain specification in the relation. The destination classification must conform to the range specification.
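The distinction can be expressed directly in code. The following is a minimal sketch (Python) of a relation definition and a relationship instance whose source and destination classifications are checked against the relation's domain and range; the classification and entity names are hypothetical and not taken from the Proteus definitions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Relation:
        # A relation type: a name plus the classifications it may connect.
        name: str
        domain: str   # classification required of the source entity
        range: str    # classification required of the destination entity

    @dataclass(frozen=True)
    class Relationship:
        # A directed instance of a relation between two classified entities.
        relation: Relation
        source: str
        source_classification: str
        destination: str
        destination_classification: str

        def conforms(self) -> bool:
            # Source/destination classifications must match the relation's domain/range.
            return (self.source_classification == self.relation.domain
                    and self.destination_classification == self.relation.range)

    # Hypothetical example: a 'requires' relation between two code components.
    requires = Relation("requires", domain="code", range="code")
    r = Relationship(requires, "Calculate_Nutrition_Intake", "code",
                     "Nutrition_Database", "code")
    assert r.conforms()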
There are several categories of relations which are of interest when a maintainer needs to
increase his system knowledge:
1. Architectural relations (A-relation): These relations show the architectural composition

of the logical system structure. The A-relations will be further described in Section 7.7.
2. Instance relations (I-relation): I-relations link a logical component in PCL to unique

physical components on the disk. These physical components can be a text file document, or source code files. The different types of I-relations are depicted as dotted lines
in Figure 45. These relations are primarily maintained manually, but the PCL provides a
mechanism for automating the maintenance of these relations. The I-relations will be
described in Section 7.8.
3. Document element relations (DE-relation): A relationship of this type relates a document

element in one document to one or several document elements in other documents or in


source code. The different types of DE-relations are depicted as solid lines in Figure 45.
These relationships are generated dynamically. The DE-relations will be further
described in Section 7.9.
4. Document type relations (DT-relation): A relationship of this type relates a document

element to another document element or a list of document elements in documents which


are of the same type. Source code files are in this definition perceived to be documents.
E.g. call relations in a C program are DT-relations in our framework. The DT-relations
among elements in proper documents are generated dynamically, while the DT-relations
among elements in the source code are stored in a database. DT-relations are not shown
in Figure 45 as they are internal to the component types. The DT-relations will be further
described in Section 7.10.
The different types of relations, and an explanation of the extraction policy for each of them,
are described in the next sections. For each of the relationships, we will describe the intent of the relation, how the relation is identified, and how the relation should be extracted. In Section 7.9.8, we portray a possible user interface for specifying the search for relationships and examining the found relationships. First, however, in the next section we will describe three different strategies for how a system should be decomposed in a PCL description. The utility of the
A-relations will depend on the strategy chosen.
When the maintainer has received a modification request to implement, he will use the different relations to locate information which he finds necessary to understand the current state of
the system and how to change it to satisfy the modification request.

FIGURE 45. Relation types (legend: A-relation, I-relation, DE-relation)

7.6 Strategies for decomposing a system using PCL


The PCL description of the system defines the hierarchical logical composition of the system.
The breakdown of the system into its logical components can be made to an arbitrary level of
detail. From our experience with PCL, the breakdown should at least go down to a level such that the physical objects of a logical component are naturally related in terms of the system breakdown. As an example, consider a system that consists of three main subsystems. If each of these subsystems is implemented in 50 files, it is not recommended that the PCL decomposition is stopped at this level. The logical structure of each of these subsystems should be visible in the
PCL description. There are three different levels of decomposition which are natural:
1. Let the decomposition of the logical system description follow the directory breakdown of

the system. Each leaf logical component of the system's PCL description will then have the
files in the leaf directories as its physical objects.
A special tool, PCL Reverse, has been added to the PCL tool set which does exactly this. If a large project chooses to use PCL and has not evolved a PCL description during the initial project phases, a PCL description as described above is generated automatically. All the subsystems of the automatically generated PCL description are given the names of the corresponding directories; the physical objects of the components are those that reside in the directory. A user can later interactively change the names of the subsystems if they are cryptically named, e.g. if an eight-character limit is enforced on directory names. (A sketch of this kind of directory-driven generation is given after this list.)
2. Let the decomposition follow the directory structure as in the previous case, but decompose
the leaf directories such that the leaf logical components are mapped to a single file, or a
pair of interface and body files.
If the system was implemented in Ada, the PCL decomposition should probably stop at the package level, such that the physical source code objects of the leaf logical components are the interface and body files of the Ada package. Similarly, if the implementation language was C or C++, the system decomposition should stop when the leaf components are mapped to a header (.h) and body (.C) file.
3. Extend the decomposition of the previous case, such that even the structure of the interface/
realization pair of files is shown. In this case, several of the logical components may map to
the same physical files.
This approach may be necessary if the files are very large, and much system structure is hidden inside the files. If, for example, the implementation language was C++, each of the
classes which are defined inside a file may be modelled as a logical system component in
the PCL description.
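The directory-driven first strategy can be illustrated with a small sketch (Python). This is not the PCL Reverse tool itself; the emitted text only approximates the PCL fragments shown in this chapter, and the naming convention is an assumption made for the example.

    import os

    def family_name(path):
        # Derive an entity name from a directory name (hypothetical convention).
        return os.path.basename(os.path.abspath(path)).replace("-", "_").capitalize() or "Root"

    def emit_pcl(directory):
        # Emit one PCL-like 'family' entity per directory; files become physical objects.
        entries = sorted(os.listdir(directory))
        subdirs = [e for e in entries if os.path.isdir(os.path.join(directory, e))]
        files = [e for e in entries if os.path.isfile(os.path.join(directory, e))]
        lines = ["family %s" % family_name(directory)]
        if subdirs:
            lines.append("  parts")
            for d in subdirs:
                lines.append("    %s => %s;" % (d.lower(), family_name(os.path.join(directory, d))))
            lines.append("  end")
        if files:
            lines.append("  physical")
            for i, f in enumerate(files):
                lines.append("    file%d => %s" % (i + 1, f))
            lines.append("  end")
        lines.append("end")
        blocks = ["\n".join(lines)]
        for d in subdirs:
            blocks.append(emit_pcl(os.path.join(directory, d)))
        return "\n\n".join(blocks)

    # Example (hypothetical source tree):
    # print(emit_pcl("./farmers_assistant"))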
All of these approaches may be used in the PCL description of a system. It is up to the system
architect to decide how detailed the description of the different subsystems should be. The
organization may also decide on a strategy for how their systems should be described using
PCL.
The second strategy is the best trade-off between the effort of writing PCL descriptions and the utility gained from the architectural relations. If the third strategy is chosen, there is no limit to how detailed the PCL description can be made. We believe that the users of PCL must take care not to be too detailed, as this is costly and may reduce the effort savings introduced by using
our framework.

7.7 Architectural relations


By architectural relations, we mean the relations that are defined and maintained with the PCL.
These relations help to build up a top-down or a global understanding of the software system.
The relations of interest are:

7.7.1 Hierarchical composition


By studying this relation, the user will gain knowledge of how the system is organized. The
hierarchical composition relations are defined using the PCL parts structure declaration.
This is presented in Section 6.3.1.1, and its definition is given in Parts declaration list on
page 206 in Appendix A. A small example of hierarchical composition relations is shown
below.
family Farmers_Assistant
parts
application => Farmers_Assistant_Application;
documentation => Farmers_Assistant_Documentation;
end
end
family Farmers_Assistant_Application
parts
stat => Statistics_Package;
depot => Depot_Level_Monitor;
delivery => Meal_Delivery_Control;
end
end
family Meal_Delivery_Control
parts
in => Meal_Extract;
transport => Meal_Transport;
out => Meal_Delivery;


end
end
By browsing the structural composition of the system with the PCL browser as shown in
Figure 35 on page 129, the user can find both upward and downward composition links.

7.7.2 Requires
The requires relation can be set up between PCL entities when the source entity is dependent on the destination entities. E.g. if a system component X uses the services of another
system component Y, we would state in the PCL description for component X that it
requires Y. The requires relation was mentioned in Section 6.3.5.1, and its definition is
given in the PCL standard library described on page 209 in Appendix A.
In the example below, the logical component whose responsibility it is to calculate the nutrition intake for some species is dependent on the nutrition database and the animal database. This relation is expressed as follows:
family Calculate_Nutrition_Intake
...
relationships
REQ: requires => (Nutrition_Database, Animal_Database);
end
physical
header => ntrincal.h
body => ntrincal.C
end
end
The requires relation was originally included in the PCL language to support system manufacture. To support system understanding, the relation is necessary in the opposite direction
as well. We propose to extend the standard PCL library with the new relation
is_required_by. This new relation should be inserted in any PCL entity which is required
by another entity, e.g. the Nutrition_Database:
family Nutrition_Database
...
relationships
REQ_BY: is_required_by => (Calculate_Nutrition_Intake);
end
end
This will make it easier for a user to see that the component is indeed used by other components, and also make it easier for any impact analysis tools.
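Since is_required_by is simply the inverse of requires, it can be derived mechanically rather than written by hand. The following is a minimal sketch (Python) of that inversion; the entity names echo the hypothetical example above, and the relation maps are assumed inputs, not the PCL API.

    from collections import defaultdict

    def invert_requires(requires):
        # Given a mapping entity -> entities it requires, derive the
        # is_required_by mapping proposed above.
        is_required_by = defaultdict(set)
        for source, destinations in requires.items():
            for destination in destinations:
                is_required_by[destination].add(source)
        return dict(is_required_by)

    # Hypothetical example mirroring the PCL fragment above:
    requires = {"Calculate_Nutrition_Intake": {"Nutrition_Database", "Animal_Database"}}
    print(invert_requires(requires))
    # {'Nutrition_Database': {'Calculate_Nutrition_Intake'},
    #  'Animal_Database': {'Calculate_Nutrition_Intake'}}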

7.7.3 Documents & Is_documented_by


Not only should the application part of the system be described using PCL. The documentation should also be described using PCL. The documents and is_documented_by relations
should be used similarly to the requires and is_required_by relations.
The best way to model the report or documentation structure of a software system is to separate this structure from the application structure. The application structure will typically be
most decomposed, and the documents and is_documented_by relations will link the two
structures. It must be up to the system architect to decide how detailed the structures should
be decomposed. E.g. we could imagine that the document decomposition in PCL stopped at
the document file level, or it could even be decomposed into the different chapters. In Section 7.6 we discuss different approaches for detailing the PCL descriptions.
We now show an example of these relations:
family Farmers_Assistant_Documentation
parts
req => FsA_Requirements;
design => FsA_Design;
test => FsA_Test_Specifications;
user => FsA_User_Documentation;
end
end
Now, all of these may be decomposed, but here we show only the decomposition for the
design entity:
family FsA_Design
parts
stat => FsA_Statistics_Design;
dep_and_deliver => FsA_Food_Transport_Design;
end
end
family FsA_Food_Transport_Design
parts
stable => FsA_Stable_Parts_Design;
unstable => FsA_Unstable_Parts_Design;
end
end
family FsA_Unstable_Parts_Design
attributes
type_of_breed : breeding_type := chicken;
end
parts
intake =>
if type_of_breed = chicken then FsA_Small_Intake
elseif type_of_breed = cow then FsA_Large_Intake
endif;
delivery =>
if type_of_breed = chicken then FsA_Chicken_Design
elseif type_of_breed = cow then FsA_Cow_Design
endif;
end
end
In the previous entity, the attribute type_of_breed is defined as an enumerated type ranging over the different breeding types of interest. This can be used for conditionally determining the composition structure.
family FsA_Small_Intake
...
relationships
DOC: documents => (Calculate_Nutrition_Intake);
end
physical
file => intake_s.doc
end
end
The entity Calculate_Nutrition_Intake in the application structure can now be updated:
family Calculate_Nutrition_Intake
...
relationships
REQ: requires => (Nutrition_Database, Animal_Database);
IS_DOC: is_documented_by => (FsA_Small_Intake);
end
physical
header => ntrincal.h
body => ntrincal.C
end
end
Since there was some variability in the documentation structure, such variability would
probably be apparent in the application structure also. This is dropped in this example for
simplicity.
All of these relations can be visually inspected, either by reading the PCL description, or by
using the PCL graphical browser.
The PCL tool set provides an API which allows other applications to query the state of the different entities. In this way, we can ask which components require component X, and which documents document component Y.
Identifying correct document type
Since the documents and is_documented_by relations do not contain information about what type of documentation is involved in the relation, we use the PCL classification system to differentiate among the different types of documentation.
If the intake_s.doc file above was the design document for the subsystem described by the PCL entity Calculate_Nutrition_Intake, the physical section of the entity FsA_Small_Intake would look
like:
physical
file => intake_s.doc classifications doc_type => design; end;
end
The design classification would be predefined for a project, as well as classifications for other
document types in the project.


7.8 Instance relations


The instance relations are the class of relationships which link a logical PCL entity to the corresponding files of which that entity is an abstraction. These relations enable the user to find out which files e.g. implement a particular logical part of the system, or which subsystem a particular file belongs to. By expanding one such relationship, the user can identify other files which are also related to the same logical entity.
An example of two instance relationships:
family Calculate_Nutrition_Intake
...
relationships
REQ: requires => (Nutrition_Database, Animal_Database);
end
physical
header => ntrincal.h
body => ntrincal.C
end
end
The two instance relationships here are that Calculate_Nutrition_Intake is mapped to the physical files ntrincal.h and ntrincal.C.

7.9 Document element relations


7.9.1 Introduction
A relationship of this type relates a document element in one document to one or several document elements in other documents or in source code.
The architectural and instance relations were manually defined and maintained. The document
element relationships are on the contrary automatically identified in a dynamic fashion. The
first two types of relations were defined in the PCL. The DE-relations will be automatically
extracted from any type of physical object that is part of the software system.
For simplicity we will restrict the discussion here to the case where the physical objects are
either word processing documents or ASCII source code files. We note that the information that we find easy to extract from standard documents and ASCII files is simple to extract from any design tool when an appropriate API exists for querying. Thus the document element relations can be extended to include particular design formalisms in addition to the valuable information that exists in the written documentation.
In Figure 46 we outline the different types of DE-relations which we will discuss in this section. The meaning of the symbols is the same as in Figure 43. The relations are among elements of
Requirements and design (RD), described in Section 7.9.2.
Requirements and user documentation (RUD), described in Section 7.9.3
Requirements and test reports (RT), described in Section 7.9.4


FIGURE 46. Different DE-relations (RD, RUD, RT, DUD, DSC, and SCT)


Design and user documentation (DUD), described in Section 7.9.5
Design and source code (DSC), described in Section 7.9.6
Source code and test reports (SCT), described in Section 7.9.7

We will now describe how these relations can be extracted from the different information
sources. First we want to state three basic requirements that must hold if this approach is to be successful.
1. The system must be in internal equilibrium. Remember the discussion of the importance of

internal equilibrium from Chapter 4.


2. The requirements must be uniquely numbered, following a standard numbering scheme. It is not a requirement that a particular numbering scheme must be used, but for the sake of the example let us follow this numbering scheme: R-<YYMMDD>-<NN>, so that R-970101-01 is the first requirement of January 1st, while R-961029-12 is the twelfth requirement registered on October 29th. It is important that all requirements follow the same format, as this allows us to find them easily in the documents. (A pattern matching this format is sketched after this list.)
3. All names of identifiers in the program must follow a predefined naming format, using nat-

ural names for all identifiers. In addition to making the source code easier to understand, as
for example shown by [Laitinen, 1995], this will make it more manageable to relate information in different sources.
If these requirements are adhered to, the extraction schemes shown below for the different relations will work successfully.
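For example, under the numbering scheme assumed above, requirement identifiers can be located with a simple pattern. The following is a small Python sketch; the scheme itself is only the example format chosen in this chapter.

    import re

    # Assumed pattern for the requirement numbering scheme R-<YYMMDD>-<NN> described above.
    REQUIREMENT_ID = re.compile(r"\bR-(\d{6})-(\d{2})\b")

    def find_requirement_ids(text):
        # Return all requirement identifiers mentioned in a piece of text.
        return [match.group(0) for match in REQUIREMENT_ID.finditer(text)]

    print(find_requirement_ids("Supports requirements: R-970101-01, R-961029-12"))
    # ['R-970101-01', 'R-961029-12']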
For each of the six relations, we will describe the intent of the relationship, how a relationship of the given type is identified in the sources of information, and how we can go about extracting the relationship to be able to present it to the user. When all relationships have been described, we propose a user interface for how the relationships may be presented to the user, so that the user can use the extracted information to increase his system knowledge. The example user interface is described in Section 7.9.8.


7.9.2 RD relationships
Intent of relationship.
The user may need to know which parts of the system support a requirement.
The user may need to know which requirements put restrictions on a system component.
How the relationship is identified.
1. Each requirement in the document follows a standard numbering scheme with unique

numbers as described above. Thus in the requirements document, the section label of
each requirement is marked with one such unique number.
2. Each component in the design document refers to the requirements that it supports. In the

design document, each section down to an appropriate level mentions the number of the
requirement that it supports. In order to make things even easier, the first line in the section contains a list of the requirement numbers, following the token Supports requirements:. The name of the containing section is the name of the component that is
designed in that section.
When a user needs to find which components support a particular requirement, the design documentation is scanned for sections that mention the requirement number.
If a user needs to find which requirements put restrictions on a system component, the system component is located in the design documentation, and the list of requirement numbers is given in the first line of the section that describes the component.
Extraction policy.

In order to extract the needed information, we must be able to know something about the
storage format of the word processing system used for writing the requirement and design
documents. Preferably, one type of system is used throughout, but this is not a requirement,
it just makes things easier to implement. Below we show how we can extract the
needed information if the documents were written in LaTeX and then in FrameMaker.
LaTeX: We will assume that each requirement is started with the LaTeX unnumbered

section header \subsection*{R-YYMMDD-NN}. If this is the case it is easy to locate the


beginning of each requirement using a lexical scanner. The line number for the requirement in the processed file is remembered. Similarly a lexical scan is necessary to locate
the places in the design documentation where components that support a particular requirement are located. Each component in the documentation is described in a section at some level. The name of the component is given as the section name. A lexical scan for a section like \section*{compname}, \subsection*{compname}, or \subsubsection*{compname} where the next line contains a reference to the sought requirement after the token Supports requirements: will identify the correct set of components.
FrameMaker: Unlike LaTeX, which uses a predefined scheme to allow the user to number his sections, the FrameMaker user must choose his own names for the different section numbering levels. We require that the same names are used consistently for all documents in the system. This requires that a standard FrameMaker template is used during the whole project. It is the rule rather than the exception that this is what most organizations do. This thesis, for example, is written using FrameMaker, and the numbering
scheme used is as follows:
1. Heading1 is the name of sections on the top level, e.g. like Section 7.9.
2. Heading2 is the name of the next level section, e.g. like Section 7.9.1.
3. Heading3 is the name of the lowest numbered level, e.g. like Section 7.4.2.1
4. HeadingRunIn is the name of unnumbered sections, with emboldened print, e.g. like Intent of relationship on page 172.
Now, FrameMaker documents are saved in a binary format which is impossible to parse unless detailed information from the vendor is available. This is not a problem, since FrameMaker provides a non-destructive ASCII format, the Maker Interchange Format (MIF), in which documents can be saved. We require that the documents be saved in this format.
A section title 1.1 Introduction of the Heading1 type looks like this when saved in MIF
format:
<Para
<Unique 139175>
<PgfTag Heading1>
<PgfNumString 1.1 >
<ParaLine
<TextRectID 18>
<String Introduction>
>
> # end of Para
This is a bit more complicated to identify than the LaTeX section numbers, but still
rather easy using a simple lex script. The requirements can be given a special paragraph
type, say Requirement. To identify all requirements in the requirements documentation, we would have to find all such paragraphs. One such paragraph could look like:
<Para
<Unique 132832>
<PgfTag Requirement>
<ParaLine
<String R-961012-01>
>
> # end of Para
In order to find the places in the design document that support a particular requirement, we need to find the paragraphs which contain references to that requirement.
The lines following the section title look like this:
<Para
<Unique 139184>
<PgfTag Body>
<ParaLine
<String This is the text in the first line,>
>
<ParaLine
<String the text in the same paragraph con->
>
<ParaLine
<String tinues in different ParaLines.>
>
> # end of Para
The name of the paragraph type is Body, and the text in the paragraph is stored in ParaLines inside the Para. The Para represents the whole paragraph, while each ParaLine represents a text line as seen on the screen. A new Para is created for each carriage return. Inside each Para, there is a Unique tag. This tag is used as the anchor when FrameMaker makes its internal cross references. It is also used for generating hypertext links into a FrameMaker document.
Just as we had to find the lines in the LaTeX file which contained the information we needed, we need to find the correct Para and remember the corresponding Unique number. When we have located the Para that contains a reference to the sought requirement, we know that the previous Para contains the name of the component.
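To make the LaTeX variant of this extraction concrete, the following is a minimal sketch in Python of such a lexical scan. It assumes only the conventions described above (requirement headers of the form \subsection*{R-YYMMDD-NN}, and a Supports requirements: line as the first body line of each design section); the function names, regular expressions, and file handling are illustrative and not part of any existing tool. For MIF documents the same idea applies, except that the scanner would look for Para blocks with PgfTag Requirement and remember the Unique number instead of a line number.

import re

# Requirement headers in the LaTeX requirements document (assumed convention).
REQ_HEADER = re.compile(r'\\subsection\*\{(R-\d{6}-\d{2})\}')
# Component section headers in the LaTeX design document.
COMP_HEADER = re.compile(r'\\(?:sub){0,2}section\*\{([^}]+)\}')

def requirements_in_latex(path):
    """Return (line number, requirement id) pairs for a requirements document."""
    hits = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            m = REQ_HEADER.search(line)
            if m:
                hits.append((lineno, m.group(1)))
    return hits

def components_supporting(path, requirement):
    """Return names of design components whose first body line
    (the Supports requirements: line) mentions the given requirement id."""
    components = []
    current = None
    with open(path) as f:
        for line in f:
            m = COMP_HEADER.search(line)
            if m:
                current = m.group(1)
                continue
            if current is None or not line.strip():
                continue                      # skip blank lines directly after the header
            if line.lstrip().startswith('Supports requirements:') and requirement in line:
                components.append(current)
            current = None                    # only the first body line of a section counts
    return components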

7.9.3 RUD relationships


Intent of relationship.
The maintainer needs to know how a particular requirement is implemented and what it looks like from the user's point of view, as described in the user manual.

Given some functionality described in the user documentation, the maintainer needs to know which requirements have led to this functionality.


How the relationship is identified.

A requirement is typically stated very directly, with some additional discussion which justifies the requirement. The short requirement statement will contain important terms which will be reused throughout the project. In the user manual, the provided functionality is described using the same terms as those used in the requirement.
Extraction policy.

As stated in Section 7.4.5, the relations among documents should be extracted automatically based on a query made by the maintainer. In Section 7.11.2 we will describe the query mechanism we propose for extracting, for example, this kind of information.
When the maintainer wants to know how the functionality specified by a requirement is implemented in the final application, he specifies a query where he selects some of the key terms used for specifying the requirement. These terms are then searched for in the user manual. The corresponding matches are believed to describe functionality which relates to what is specified by the requirement.
Similarly, when a maintainer would like to find out which requirements have specified some functionality in the application, he selects some of the terms that are used in the user documentation to specify that functionality. These terms are used in a search in the requirements specification. The corresponding matches are requirements which have some impact on the functionality.

Note that the matches which are returned by the searches are not exact. The RD relationships which were described in the previous section were exact matches, as they were explicitly defined in the documentation. The relationships which are generated using the search mechanism will only be an approximation of the correct relationships. The correctness of the extracted relationships is dependent on the choice of terms used in the query. The maintainer needs to inspect the ends of the relationships to investigate how well the query matched his intentions.

7.9.4 RT relationships
Intent of relationship.
The maintainer wants to know whether a test has been made to ensure that a requirement

has been correctly supported by the system.


Given a test case, the maintainer wants access to the requirement which controls the test.
How the relationship is identified.

In the test report, the requirement is mentioned in the same manner as it is in the design document. The first line after the section name in a test case description looks like this:
Tested requirement(s): R-961201-03
In the requirements specification, each requirement is identified since its section name is the requirement number. For further requirements on the test report content, see Section 7.9.7.
Extraction policy.

The extraction policy is similar to the one described for RD relationships.

7.9.5 DUD relationships


Intent of relationship.
The maintainer is interested in knowing what the function described somewhere in the design looks like for the user, as described in the user manual.
The maintainer would like to know where in the design the component that implements a

particular functionality in the application is located.


How the relationship is identified.

The relationships are indirect and are found with queries.


Extraction policy.

The relationships are extracted using queries as described in Section 7.11.2.


The policy for choosing terms is similar to the one described for RUD relationships.

7.9.6 DSC relationships


Intent of relationship.

The maintainer reads the design document and would like to locate the place in the

source code where the component that he reads about is located.


When working with the source code, the maintainer wants to inspect the design of the

component that he is working on.


How the relationship is identified.

As described in Section 7.9.1, one requirement is that a predefined naming format is used
for naming the identifiers in the system. When this naming format is followed, the name of
an identifier will be the same both in the source code and in the design document. The identifier can be a class, member function, global variable, or whatever.
Extraction policy.

First a note on names: all languages have scoping rules. In different scopes, different variables may have the same name. In object-oriented languages, such as C++, even member functions of different classes may have identical names. In fact, it is the rule rather than the exception that different member functions use identical names. This is due to the facilities of overloading and polymorphism in object-oriented languages.
This presents us with a problem. Consider that the maintainer is inspecting the design of the member function insert_object_in_front. Several classes have a member function with this name. How then can we find the correct member function in the source code, if this is our task? Similarly, if the maintainer was working on this member function in the source code and would like to inspect its design, how could he find the correct section which describes that particular member function, and not some other member function with the same name?
To alleviate this problem, the design of a member function in the design document must uniquely identify which class the member function belongs to. We had the opportunity to test one strategy in a project course at our department, where the students were asked to develop a system using C++. In the design document, the description of the different entities had to follow this particular scheme:
1. The name of the component which is documented in a (sub)section must be part of the (sub)section name, following this rule: <type name>:<component name>.
2. The legal type names which can be used in (sub)section names at all levels are:
Class, which is the same as a C++ class
Module, which is a set of closely cooperating classes
GlobalFunction, which is a global function in C++
GlobalData, which is a variable defined outside a class in C++
3. Type names which can only be used in subsections are:
StaticFunction, which can be used in a Module chapter (a local static function)
MemberFunction, which can be used in a Class chapter (a member function in a class)
MemberData, used in a Class chapter (a data member in a class)
StaticData, used in a Module chapter (a local static variable)
The student groups did not have any problems in following this scheme. They found it a bit strange at first, but when they were told that the reason behind it was that a tool sometime in the future would automatically extract information from their design document, they agreed that the scheme was OK to use. The structure of a chapter in the design document would now be

2.0 Module:Calculate_Nutrition_Intake
2.1 GlobalFunction:XXX
2.2 GlobalData:YYY
2.3 StaticData:ZZZ
2.4 Class:Cow
2.4.1 MemberData:Weight
2.4.2 MemberData:Height
2.4.3 MemberFunction:Calculate_Fat_Level
...
We have dropped the full signature of the functions and member functions in the section names. The full signature should be mentioned inside the section.
Now we are in a position to describe the extraction policy for how information in the design document is related to information in the source code. We will first describe how we can find the source code elements given that we know about an element in the design document. Then we describe how to find the design information given that we know about a source code element.
1. From design to source code.
There are two possible ways this may be done. First, it can be done using a lexical search on the source code files; second, it can be done by querying a database which contains information about all the source code of the system.
Let us consider the first method first. It would clearly be a large search to find the name
of a function in a system which contains a million or more lines of source code. Luckily,
we are in a position that we do not have to consider all files in the system. Remember
that the PCL description describes the logical architecture of the system. If the PCL
description is made right, and according to the guidelines that we specified in Section
7.6, there should be a PCL family entity with the same name as the name of the module
where the design element we need source code information for is located. Inspect the
PCL description for the system, or use the API to the PCL tool set, to obtain a list of the
source code files that implement the module.
Suppose we need to find the place where the MemberFunction:Calculate_Fat_Level is implemented in the source code. We request a search for Cow::Calculate_Fat_Level in the files that implement the module. In this case the number of files was only two, see Section 7.7.2. This search is fast, and we easily obtain the position in the file where the implementation of this member function begins.
It is also possible to use a database query to find the source code position of the member function we requested information about. A student group designed and implemented a querying system for source code information in C++ programs under the author's supervision. We will describe this in more detail in Section 7.10.
2. From source code to design.
Let us stick to the member function Cow::Calculate_Fat_Level. Now, the maintainer is
inspecting this member function in a text editor, and wants to find out where the design
of this member function is located.
We will describe two different approaches to finding the location of this design information. The first is the straightforward approach of searching in all design documents. The second is to utilize information in the PCL description of the system architecture.


1. Searching all documents: Since we know that the FsA system is rather large, we can also assume that the design information is spread across a number of files. In this case, the document structure is not described in detail in the PCL description; all we know is the names of the files which contain design information. We start to scan the design files to search for a section whose name is MemberFunction:Calculate_Fat_Level, and this member function must be a subsection of a section named Class:Cow. After some searching, the location of this section is found. If we had a situation where we were looking for a static variable, say StaticData:ZZZ, as described earlier in this section, we could be put in a position where several modules contained the static data ZZZ. We would have to know the name of the module to which our ZZZ belongs. We do this by querying the PCL tool set for the name of the family entity which has an instance relationship to the file where our ZZZ is defined. We could now search for a section StaticData:ZZZ which is a subsection of Module:Calculate_Nutrition_Intake.
When we do the searching, we search for the most general information first, i.e. Class:Cow and Module:Calculate_Nutrition_Intake. When this is found in the design, we begin searching for the more specific information from this place, i.e. MemberFunction:Calculate_Fat_Level and StaticData:ZZZ.
2. Utilizing the PCL description of the system architecture: Remember from Section 7.7.3 that an important pair of relations in the PCL model is the documents and is_documented_by relations. In the previous example of extracting design information, these relations were not utilized, so we had to search through all design documents. In this example we assume that not only is the application structure broken down to the module level, but also the document structure, particularly the document structure for the design documents. By using the is_documented_by relation, our search for design information for MemberFunction:Calculate_Fat_Level will be much quicker. What we do is the following:
We identify the family entity in the PCL system description which has an instance relationship to the file where our member function Calculate_Fat_Level is defined. We then ask for the name of the PCL entity which has an is_documented_by relationship to this entity. The design documents we have to search in are the documents that this latter PCL entity has instance relations to. This will greatly reduce the number of documents that we need to search in, and hence reduce the search time considerably.
Our search mechanism is very scalable since we can use the information in the PCL
description to constrain the number of source code files or design documents we need to
search in to be able to establish the needed relations dynamically.
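To illustrate the design-to-source direction, here is a rough Python sketch. It assumes that the list of implementation files for a module has already been obtained from the PCL tool set; the function files_for_module below is only a stand-in for that API call, and the file names are the ones used in the example above.

import re

def files_for_module(module_name):
    # Stand-in for a query to the PCL tool set: return the files that the
    # PCL family entity with this name has instance relations to.
    examples = {'Calculate_Nutrition_Intake': ['ntrincal.h', 'ntrincal.C']}
    return examples.get(module_name, [])

def locate_member_function(module_name, class_name, function_name):
    """Search only the files implementing the module for the definition of
    Class::Function, returning (file, line number) for the first hit."""
    pattern = re.compile(r'\b{}\s*::\s*{}\s*\('.format(re.escape(class_name),
                                                       re.escape(function_name)))
    for path in files_for_module(module_name):
        try:
            with open(path) as f:
                for lineno, line in enumerate(f, start=1):
                    if pattern.search(line):
                        return path, lineno
        except OSError:
            continue        # file not checked out into the workspace
    return None

# Example: where is Cow::Calculate_Fat_Level implemented?
print(locate_member_function('Calculate_Nutrition_Intake', 'Cow', 'Calculate_Fat_Level'))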

7.9.7 SCT relationships


Intent of relationship.
When a maintainer is inspecting a component in a source code file, he wants to know whether a test case has been made for testing this component.


When a maintainer is reading a test case in the test report, he wants access to the source

code files which contain the functionality which will be tested by the test case.

The maintainer is also interested in locating the correct test drivers and stubs for the test

that is described in the test case.


How the relationship is identified.

The relationship is again identified by searching the test report for a key token followed by a name, and by searching for the correct identifier name in the source code.
Extraction policy.

A test case should be easy to identify in the test report. For simplicity of description, we
require that each test case should start with a particular heading type (recapitulate the examples of heading types in FrameMaker in Section 7.9.2). The section name for a test case
should start with Test Case:, followed by some appropriate text. In Section 7.9.4 we
required that the line after the section name should contain a list of the requirement(s)
which were put to test by the test case. Here, we additionally require that
1. The line following the line with requirement references contains references to the source code elements which are tested. The form of these references is <Module name>:<Source code element>.
2. The line after that contains the names of the test drivers and stubs which are used by
the test.
A test case would then start like this:
2.3.2 Test Case: Check calculation of fat level
Tested requirement(s): R-961210-14
Source code elements: Calculate_Nutrition_Intake:Cow::Calculate_Fat_Level
Test stubs & drivers: fattest.h, fattest.C
We start the test by initializing a list of cows with different weight and height.
....
The location of the test stubs and drivers is found by checking the physical section of the PCL entity which has an instance relation to the file which contains the source code elements mentioned in the Source code elements line of the test case.
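The fixed first lines of a test case make it easy to pull out the SCT information with a small scanner. The following Python sketch is only an illustration of the convention defined above; it assumes the test report is available as plain text lines.

def parse_test_case_header(lines):
    """Extract tested requirements, source code elements, and test stubs/drivers
    from the first lines of one test case."""
    info = {'requirements': [], 'elements': [], 'stubs_and_drivers': []}
    for line in lines[:4]:                    # title line plus the three fixed lines
        text = line.strip()
        if text.startswith('Tested requirement(s):'):
            info['requirements'] = text.split(':', 1)[1].split()
        elif text.startswith('Source code elements:'):
            info['elements'] = text.split(':', 1)[1].split()
        elif text.startswith('Test stubs & drivers:'):
            info['stubs_and_drivers'] = [n.strip() for n in text.split(':', 1)[1].split(',')]
    return info

example = [
    '2.3.2 Test Case: Check calculation of fat level',
    'Tested requirement(s): R-961210-14',
    'Source code elements: Calculate_Nutrition_Intake:Cow::Calculate_Fat_Level',
    'Test stubs & drivers: fattest.h, fattest.C',
]
print(parse_test_case_header(example))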
Now we are in a position to describe the extraction policy for SCT relationships:
The maintainer is inspecting the incredibly complex function Calculate_Fat_Level, and wonders whether the correctness of the fat calculation algorithm has been tested before. He forms a query to search the test documentation for test cases involving Cow::Calculate_Fat_Level. The query returns the information that the test case described in section 2.3.2 in the test report describes a test named Test Case: Check calculation of fat level.
The maintainer is reading the test case for checking the fat level, and he wants to know where the source code element Cow::Calculate_Fat_Level is located. From the second line of the test case, he understands that the source code element is inside the Calculate_Nutrition_Intake module, and he constructs a query to identify the position of the source code element in the files which are linked by instance relationships to the PCL entity which models the Calculate_Nutrition_Intake module.
The test stubs and drivers can be located in the same manner as the files which contain the source code elements. This was described in the previous bullet. To separate the test stubs and drivers from the application code related to a PCL entity, different slots
should be used in the physical section of the PCL entities. We propose to use the slot header
to relate to the header files related to the entity, the slot body to relate to the implementation
files of the entity, and the slot test to refer to the test stubs and drivers which are related to
an entity. We extend the example started in Section 7.8, by adding the PCL code necessary
to include the test stubs and drivers:
family Calculate_Nutrition_Intake
  ...
  relationships
    REQ: requires => (Nutrition_Database, Animal_Database);
  end
  physical
    header => ntrincal.h;
    body => ntrincal.C;
    test => if testing = TRUE then (fattest.h fattest.C) endif;
  end
end
The condition on the test slot allows the user to specify when the test files should be checked out when a configuration is made. If the testing attribute is assigned a true value, the files are checked out. This would normally be the case when the system is in the test phase. If the system is in a production phase, the attribute value would be set to false, and the files would not be checked out into the workspace. Note that the files can at any time be checked out by the user, independently of the value of the testing attribute. Thus the user can manually specify which test files he wants to check out.

7.9.8 Displaying the relations


We will here outline how a suitable user interface for displaying all relations should be built up. Figure 47 gives an idea of what the user interface should look like. This is not an image taken from a finished system; rather, it is a constructed image which contains the information we feel is necessary to utilize the framework in a useful manner. A final user interface will probably have more menus and buttons.
In the lower part of the screen, three windows are shown: the PCL description of the system in the PCL browser, a view of a document in a word processing system, and a view of the source code in a text editor.
In the upper right-hand corner, a query form is displayed. This allows the user to formulate the queries which are needed to extract the wanted relationships. In Section 7.11.2 we have described the types of queries that should be sufficient. The lower part of the query form contains facilities to define where to search. The user should enter the name of a PCL entity. Every file which is mentioned in the physical objects of that PCL entity, and recursively of any child PCL entities, is included in the search space. The upper part of the query specifies whether the query is for finding particular relationships or for searching in a particular type of documentation, where the documentation type is specified as one of the classifications available for documents as described in Section 7.7.3.
In the upper left-hand corner, there are six windows which each contain a list of matches for the last query for information of that type. The windows list matches for requirements, design information, user manual information, test case information, source code information, and PCL entity information.

FIGURE 47. Possible user interface (constructed sketch): the upper left shows six output lists for query matches (requirements, design, test, user manual, source code, and PCL entities); the upper right shows a query form with fields for the query, the relation/element type, an exact/expanded choice, and a where-to-search field naming a PCL entity; the lower part shows the PCL browser, a document window, and a source code window.


When the user double-clicks on an entry in the PCL entity information list, the PCL browser shifts focus to the PCL entity listed as that entry.
Similarly, when the user selects an entry in the source code list, the focus of the text editor window is moved to the source code element that was selected.
Finally, if the user selects an entry in any of the other four windows, the focus of the document
window is put on the document element that was chosen.
A simple example
Imagine that the maintainer has received a modification request. Based on an analysis of the requirements in the request and by browsing in the PCL model, the maintainer identifies that the request is related to a particular part of the system. Assume that the maintainer wants to know which existing requirements constrain the received request.
1. The maintainer formulates a query, and asks to search for requirements in the subsystem defined by a PCL entity. A list of matching requirements is the result of the query.

2. The maintainer inspects the requirements by selecting them in the requirement list, which highlights them in the document window. When the maintainer has identified the requirement which he feels controls the functionality that needs to change, he forms a new query.
3. The new query is to find all RD relationships from the highlighted requirement to design
documents. The appropriate headers for sections in the design document which make up the
D-end of the RD relationships are displayed in the design list.
4. The maintainer continues to explore the system in this way until a thorough understanding is gained. Then he can make the necessary changes to the system source code, and concurrently make the needed document updates in the document window.
The requirements specified in the modification request are now fulfilled.

7.9.9 Limiting the document search space


In Section 7.9.6 we claimed that the proposed solution was scalable since the queries did not
have to search through all documentation or source code. Only a limited set of documents or
source code files needed to be consulted based on a particular query.
We want to repeat here the basic steps for achieving this.
The basis for this scalability is a good system architecture model described in PCL. The application structure and the document structure of the PCL should be described down to a good
level of detail, so that each PCL entity does not have instance relations to a large number of
files.
Just as it is natural to split up the source code into several files, the documentation should also be divided into several files. Consider for example a design document. If the system consists of 20 different subsystems, it would be a good idea to split the document into one file for each subsystem. A master file could function as an umbrella, which is quite normal in modern word processing tools (in MS Word this is called a master document, in FrameMaker this is called a book). If some of the subsystems are large, the document describing the design of the subsystem may be split into yet more documents.
Since we can use the relations documents and is_documented_by in PCL to relate application
entities to document entities, the more the documentation is split up the better.
In Section 7.7.1 we showed what the top level entity for the FsA example could look like. We recapitulate that here:
family Farmers_Assistant
  parts
    application => Farmers_Assistant_Application;
    documentation => Farmers_Assistant_Documentation;
  end
end
In the rest of Section 7.7, and in Section 7.8 and Section 7.9 we gave small examples of how
the application structure could be described using the PCL.
In Section 7.7.3 we showed the top level decomposition of the document structure for the FsA
system. We recapitulate the PCL document structure description here.
family Farmers_Assistant_Documentation
  parts
    req => FsA_Requirements;
    design => FsA_Design;
    test => FsA_Test_Specifications;
    user => FsA_User_Documentation;
  end
end
Now, all of these may be decomposed; here we show only the decomposition for the design entity:
family FsA_Design
  parts
    stat => FsA_Statistics_Design;
    dep_and_deliver => FsA_Food_Transport_Design;
  end
end

family FsA_Food_Transport_Design
  parts
    stable => FsA_Stable_Parts_Design;
    unstable => FsA_Unstable_Parts_Design;
  end
end

family FsA_Unstable_Parts_Design
  attributes
    type_of_breed : breeding_type := chicken;
  end
  parts
    intake =>
      if type_of_breed = chicken then FsA_Small_Intake
      elseif type_of_breed = cow then FsA_Large_Intake
      endif;
    delivery =>
      if type_of_breed = chicken then FsA_Chicken_Design
      elseif type_of_breed = cow then FsA_Cow_Design
      endif;
  end
end
In the last entity, the attribute type_of_breed is defined as an enumerated type ranging over the different breeding types of interest. This can be used for conditionally determining the composition structure.
When the document structure is detailed, the maintainer may use the PCL browser to locate the documents that may contain the information he is interested in. He can specify that these are the documents that should be searched in the queries he specifies.
Of course the same principle should be used when the maintainer is interested in finding particular source code elements.
If this strategy is used, the proposed solution is scalable to handle very large systems without loss of search efficiency.
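As a rough indication of how the PCL description limits the search space, the following Python sketch represents the document decomposition as nested dictionaries; this is only a stand-in for the real PCL description and tool set API, and the entity and file names are invented for the example. A query scoped to one entity only touches the files collected for that entity and its children.

# Toy stand-in for the PCL document structure: each entity has either parts
# (child entities) or instance relations to physical files.
PCL_DOCS = {
    'Farmers_Assistant_Documentation':
        {'parts': ['FsA_Requirements', 'FsA_Design',
                   'FsA_Test_Specifications', 'FsA_User_Documentation']},
    'FsA_Design': {'parts': ['FsA_Statistics_Design', 'FsA_Food_Transport_Design']},
    'FsA_Statistics_Design': {'files': ['design_stat.mif']},
    'FsA_Food_Transport_Design': {'files': ['design_transport.mif']},
    'FsA_Requirements': {'files': ['requirements.mif']},
    'FsA_Test_Specifications': {'files': ['testspec.mif']},
    'FsA_User_Documentation': {'files': ['usermanual.mif']},
}

def search_space(entity):
    """Collect, recursively, every file that the entity or its children
    have instance relations to; only these files are searched by a query."""
    node = PCL_DOCS.get(entity, {})
    files = list(node.get('files', []))
    for child in node.get('parts', []):
        files.extend(search_space(child))
    return files

print(search_space('FsA_Design'))     # only the two design files, not all five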

7.10 Document type relations


The last type of relations we will discuss is what we have called document type relations. In Section 7.5 we gave the following description of document type relations:
Document type relations (DT-relation): A relationship of this type relates a document element to another document element or a list of document elements in documents which are
of the same type. Source code files are in this definition perceived to be documents. E.g.
call relations in a C program are DT-relations in our framework. The DT-relations among
elements in proper documents are generated dynamically, while the DT-relations among
elements in the source code are stored in a database.
We separate the description of DT-relations into the relations which are among proper document elements and those that are among source code elements.

7.10.1 DT-relations among document elements


Consider the following examples:
When a writer writes a document, he often refers to other sections in the document. These sections may contain additional information related to the section he is currently writing, similar ideas used in other ways, preconditions for the section, or a nice figure which he does not want to redraw. There is a wealth of possible reasons why the writer may want to reference these other parts of the document or another document. The writer will insert cross references into the document. In modern word processing documents such cross references can be followed by pointing and clicking.
A good text book contains an index at the end. When the user wants to find information about a certain topic, he uses this index instead of browsing through the book. Some of the index entries refer only to one particular place in the document, while other entries may refer to several different pages throughout the book.
A DT-relation is a dynamically generated cross reference.
Imagine that the maintainer is reading a section in some document, say the user manual. A particular concept described in the user manual catches his attention, and he wants to find out
whether this concept is detailed in other places in the user manual.
The maintainer writes a query and instructs the search engine to search for the concept in the user documentation which exists for the system. Either a phrase query or a term query is used to specify the search. The maintainer decides whether to use the exact or expanded version of the queries. The query mechanism is described in Section 7.11.
The result of the search is displayed as a list in the user manual list box (see Figure 47), and the
user can inspect the different matches to check whether interesting information was found.
Note the difference between a DT-relation for user manual elements and, e.g., the DUD relation which was described in Section 7.9.5. The DUD relation is a DE-relation, and is used to identify information across different document types. The DT-relations identify relevant information inside a particular type of documentation.

7.10.2 DT-relations among source code elements


The reader may now realize that the DT-relations among source code elements are the call, assign, declaration, inherit, etc. relations typical of programming languages. This is indeed true. It should be obvious that such relations are important to be able to find and navigate in. The job of making the parser, the database constructor, and the other bells and whistles which are necessary to utilize such relations is, however, a substantial one.
Under the supervision of the author, a group of six fourth-year CS students designed and implemented a system for extracting and navigating among source code elements in a C++ program. The prototype system was called HyperMaint, and is described in [Aamot et al., 1994]. The types of C++ elements and relations that were extracted are outlined in Figure 48. The total effort invested in the project was around 2000 man-hours.
FIGURE 48. ER model for information extracted from a C++ file. The model covers files and the source code elements they consist of (classes, class members, member data, member functions, variables, data, functions, and simple types, with subtypes such as global, local, static, and member variables and global, static, and member functions), together with the relations among them (consist_of, is_of_type, has, declares, assigns, calls, refers, inherits, and contains).

This system is able to satisfy the requirements for DT-relations for source code, hence we only refer to this system here. The system can be examined in more detail by reading the report produced by the project group. This report, [Aamot et al., 1994], is available from the author of this thesis.
The technology used for implementing the system was the CPPP C++ parser developed by Steven P. Reiss and Tony Davis at the Dept. of Computer Science at Brown University.1 The version of CPPP which was used by the student group was 1.64. The parser is sufficiently documented in accompanying text files. The only academic information I have seen about it is a paper submitted to OOPSLA'95, called Experiences Writing Object-Oriented Compiler Front Ends, by the same two persons.
1. The parser may be obtained by writing to the Software Librarian, Computer Science Department, Box 1910, Brown University, Providence, RI 02912, USA, or preferably by email: brusd@cs.brown.edu.
The students spent quite some time understanding how to use the parser. The output from the parser was an ASCII listing of all possible information about the C++ program. Since we were only interested in the information given in the ER model of Figure 48, the students made a filter to extract only the necessary information. The filter processed the output and inserted it into an MS Access database.
The information in the database could be inspected either manually through the MS Access tables, or through a custom-built hierarchical browser which was able to show all source code elements and the DT-relations among them. The latter approach used an embedded SQL-based interface to the database.
It is an expensive operation to produce the full database of DT-relations for a software system of considerable size. Due to this we do not extract the DT-relations for source code elements dynamically. We update the database incrementally when changes are made to the source code. To avoid this expensive operation when an old configuration is checked out of the repository, a full save of the database is stored together with any named configuration, i.e. a release provided to a customer. All we need to do is to check out the database from the repository together with the rest of the files.
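To indicate how such a database of DT-relations can be queried, the following sketch uses SQLite from Python with a deliberately minimal two-table schema (source code elements and a calls relation). Both the schema and the data are invented for the example; the HyperMaint prototype itself used an MS Access database with a schema covering the full ER model of Figure 48.

import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE element (id INTEGER PRIMARY KEY, name TEXT, file TEXT, line INTEGER);
CREATE TABLE calls   (caller INTEGER, callee INTEGER);  -- a DT-relation
""")

# Rows of the kind the parser/filter step would produce.
conn.executemany("INSERT INTO element VALUES (?,?,?,?)", [
    (1, 'Cow::Calculate_Fat_Level', 'ntrincal.C', 120),
    (2, 'Cow::Get_Weight',          'ntrincal.C',  80),
])
conn.execute("INSERT INTO calls VALUES (1, 2)")

# Which elements are called by Cow::Calculate_Fat_Level?
rows = conn.execute("""
    SELECT callee.name, callee.file, callee.line
    FROM calls
    JOIN element AS caller ON caller.id = calls.caller
    JOIN element AS callee ON callee.id = calls.callee
    WHERE caller.name = ?""", ('Cow::Calculate_Fat_Level',)).fetchall()
print(rows)   # [('Cow::Get_Weight', 'ntrincal.C', 80)]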

7.11 A thesaurus and a query mechanism


In this section we describe two components which are important in our framework to be able to
extract relationships among document elements dynamically.
The two components are
A thesaurus which acts like a system index. It contains the key terms used during the specification and design of the system.
A query mechanism which allows the maintainer to query the documents for either DE- or DT-relations. The query mechanism may use the thesaurus to expand the scope of the query.
The two components are described in detail in the following two subsections.

7.11.1 The thesaurus


The system thesaurus should act like an index to all terms which are in use in the system. The
terms that should be stored in the thesaurus are those that describe the key properties (i.e.
nouns) of the system, or some key behaviour (i.e. verbs) of the system.
The system thesaurus should expand during the course of the project, and insertion of terms
into the thesaurus should start from day one of the project. When new requirements are added,
or new components are designed, terms which describe these new additions should be inserted
into the thesaurus.
It is better to insert a few terms too many rather than a few too few into the thesaurus. In this way, the query mechanism described in the next subsection can be better exploited.

187

A thesaurus and a query mechanism

The thesaurus should be implemented as a database, which the user can browse to identify
good terms to be used in a query.
A particular property or behaviour in the system may be referred to differently by different maintainers, or in different contexts. The thesaurus therefore cannot be merely a list of terms used in the system. Synonym relations must exist among terms to express that the terms are similar. The synonym relation is bidirectional and there is an associated weight on the relation. The weight on the relation is used for expressing how distant two terms are. The weight value is from 0 to 1, where 0 means identical and 1 means very distant. The user chooses this weight subjectively from experience.
For example, if term W can be expressed similarly using term X, Y, or Z, the weights on the synonym relationships from W to X, Y, and Z determine which of X, Y, and Z is closest to W. Figure 49 gives an example of a small thesaurus with synonym relations and weights.
FIGURE 49. Small thesaurus example: the terms assemble, structure, organization, and arrangement connected by weighted synonym relations (weights between 0.1 and 0.4).
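A minimal Python sketch of such a thesaurus could store the weighted, bidirectional synonym relation as a dictionary keyed by term pairs; the terms below are those of Figure 49, but the particular pairing of weights to relations is illustrative only.

# Weighted, bidirectional synonym relation: 0 means identical, 1 means very distant.
SYNONYMS = {
    ('structure', 'organization'):   0.1,
    ('structure', 'arrangement'):    0.2,
    ('structure', 'assemble'):       0.2,
    ('organization', 'arrangement'): 0.3,
    ('assemble', 'arrangement'):     0.4,
}

def expansions(term):
    """Return the synonyms of a term together with their distance weights."""
    result = {}
    for (a, b), weight in SYNONYMS.items():
        if a == term:
            result[b] = weight
        elif b == term:
            result[a] = weight
    return result

print(expansions('structure'))   # {'organization': 0.1, 'arrangement': 0.2, 'assemble': 0.2}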

7.11.2 The query mechanism


In order to identify places in the different types of documents that contain some sort of information needed by the maintainer, a mechanism must exist to allow him to express a query for finding this information.
We suggest that the following forms of queries should be available to the maintainer:
1. Exact phrase query. The maintainer should be able to specify a phrase which should be queried for in the appropriate documents. An example of such a phrase query is (calculate nutrition intake). All sections of the document which contain this particular phrase will match. The search is case insensitive.
Note that if the document is stored in MIF format, this phrase could be split across two different ParaLines of one Para (see page 172). Thus ParaLines must be concatenated when we search for exact phrase matches.
2. Exact term set query. In this type of query, the maintainer can specify a set of terms and a window length within which all of these terms must exist in the document text. An example of a term set query is (production manager policy),20. This query will match all 20-word windows in the document text which contain all of the three words inside the parentheses. White space such as spaces and tabs, and punctuation marks such as exclamation marks, are not counted as words.
Note that there is an implicit AND between the terms, and that the ordering of the terms in the query does not have any importance.
3. Expanded term set query. This query is specified in the same way as the term set query. However, during the search, alternatives for the different terms are also searched for. These alternatives are the terms related to the original term by a synonym relation in the thesaurus. The results of the search are ranked by summing the synonym link weights. The match with the smallest sum of link weights is the best match. It is possible to specify that a term should not be expanded by a synonym link. Such a term is then prefixed with a ! in the query. An example of an expanded term set query is (!production organization optimization),50.
Note that the exact term set query can be expressed as an expanded term set query. The
expanded term set query (!production !manager !policy),20 is identical to the exact term set
query example above.
We can therefore speak about a phrase query and the term query in general, and be more specific when we define the searches to make clear whether the query is exact or expanded.
The exact phrase query and the two types of term set queries can be combined. The following is an example of such a query: (product manager responsibility salary).
Finally, we add the possibility of inserting the OR operator in the queries. The reader should be aware that the OR operator is just an abbreviation for specifying many queries without OR. The following is an example of a query which uses the OR operator: (product manager (responsibility OR salary)).
This last query equals the two queries (product manager responsibility) and (product manager salary).
NB! The return value of a query is not the line number where the query was matched. Rather, the return value is a reference to the section header for the section in which the match was made. For example, the return value of the query (return value !query),30 would be 7.11.2 The query mechanism.
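The following Python sketch indicates how the exact and the expanded term set queries could be evaluated over the text of one section. The tokenization, the synonym table (a dictionary mapping a term to its synonyms and weights, as could be derived from the thesaurus), and the ranking by summed link weights follow the rules above, but all names and the example data are illustrative assumptions.

import re

def tokens(text):
    # Lower-case words only; white space and punctuation are not counted as words.
    return re.findall(r'[A-Za-z0-9_]+', text.lower())

def exact_term_query(text, terms, window):
    """True if some window of `window` consecutive words contains all terms."""
    words = tokens(text)
    wanted = [t.lower() for t in terms]
    for i in range(max(1, len(words) - window + 1)):
        if all(t in words[i:i + window] for t in wanted):
            return True
    return False

def expanded_term_query(text, terms, window, synonyms):
    """Return the smallest sum of synonym link weights over all substitutions
    that make the window query match, or None if nothing matches.
    Terms prefixed with '!' are not expanded."""
    best = None

    def alternatives(term):
        if term.startswith('!'):
            return [(term[1:].lower(), 0.0)]
        t = term.lower()
        return [(t, 0.0)] + list(synonyms.get(t, {}).items())

    def explore(remaining, chosen, weight):
        nonlocal best
        if not remaining:
            if exact_term_query(text, chosen, window):
                best = weight if best is None else min(best, weight)
            return
        for alt, w in alternatives(remaining[0]):
            explore(remaining[1:], chosen + [alt], weight + w)

    explore(list(terms), [], 0.0)
    return best

section = "The production manager sets the policy for how intake is organized."
print(exact_term_query(section, ['production', 'manager', 'policy'], 20))        # True
print(expanded_term_query(section, ['!production', 'organization'], 20,
                          {'organization': {'organized': 0.2}}))                 # 0.2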

7.12 Related work


The DOCKET project, described in [Layzell et al., 1993] and [Layzell et al., 1995], provides for using multiple knowledge sources to build a static model of how important system concepts are related in these multiple knowledge sources. The system information at the different abstraction levels is integrated into a global system model which describes how components in the different knowledge sources are related. This integration process is not fully automated and requires human interaction. The text processing mechanism locates concepts and events in the text. These are located by pattern matching using certain heuristics, such as that important concepts are emphasized in the text and that events are often introduced by if or when.
The DOCKET approach suffers from its attempt to be general. No restrictions are placed on the organization of the system documentation. While this allows the DOCKET approach to be used directly on a number of existing systems, the number of useful links among different types of information is limited.
In the LaSSIE system, [Devanbu et al., 1991] takes another approach by hard-coding a knowledge base with information about the source code. Users can query the knowledge base. A semantic retrieval algorithm based on formal inference is then used to provide answers to questions formed in natural language, such as "what global variables are accessed by a function that flashes a display lamp at an attendant's console?". As knowledge is explicitly represented in the knowledge base, the acquisition and organization of the knowledge base becomes very tedious and time consuming.

Related work

189

While this approach makes LaSSIE capable of answering very detailed questions about the source code, there is no information available to answer questions like "why does the lamp flash?".
Both the DOCKET and the LaSSIE approaches will experience the problems of keeping the
information in the knowledge base synchronized with the rest of the software system.
[Boldyreff et al., 1995] describes the approach taken to application program understanding in the AMES project. The AMES approach links related elements in the documentation by inserting links between two document elements when they contain similar words, phrases, or a co-location of words. They also relate requirements documents to design documents using this approach, and design to source code. The words, phrases, and co-locations leading to a link are decided after an analysis of the vocabulary of the documentation. The document elements and source code elements are listed so that the user can navigate among them by pointing and clicking.
All of these approaches maintain a static knowledge base of the information available for the maintainer to use during the understanding task. This means that a lot of processing must be done to create the knowledge base. None of the articles describe how they update the knowledge base during system evolution, or how much effort is required for this. Neither do any of them take into consideration the potential problems of versioning the knowledge base, when for example maintenance is done concurrently on several parallel variants of the system.
[Brice and Connell, 1984] describes a project where summer students used a relational database to store data about software systems. These data had previously only been available through manual retrieval in a paper-based library. The maintainers in the company supported the work 100%, and found several advantages in having automated access to the systems' data. Several usages of the data that previously could not be achieved were realized during the test-out of the database project. Although this project inserted only a small number of system component characteristics into the system database, the importance of the example lies in the positive feedback from the users. This shows that on-line access to system documentation is highly appreciated.
The Software Document Support (SODOS) environment described in [Horowitz and Williamson, 1986a] and [Horowitz and Williamson, 1986b] introduces the concept of storing all information produced during the software development cycle as structured documents in a relational database. SODOS defines a hierarchical document structure, and the user inserts the documents into the structure by cutting and pasting from a text editor into a database entry form. In addition to the text, keywords describing the section and relations to other sections and documents are inserted. By analysing the graph model of the system which is then inherent in the database, SODOS is able to check the completeness and consistency of the software documents. The database also contains the perceived necessary traceability relations among the documents. The SODOS approach suffers from the problems of maintaining manually inserted relations and the limitations of using a predefined schema, as described in Section 7.4.3. Another limitation of the SODOS approach is that documents must be maintained, and preferably initiated, inside the SODOS environment. This means that the features of modern word processing systems cannot be utilized.
In [Turver and Munro, 1994], a similar partitioning of documents into sections, paragraphs, and keywords as introduced by Horowitz and Williamson is used for document impact analysis. Turver and Munro use the information contained in a static model of the document structure to identify potential ripple effects of planned changes. This is done by identifying all document segment entities (sections/paragraphs) which contain any of the keywords mentioned in a modification request. Since this will produce the worst-case ripple, a document stability analysis technique is applied to try to minimize the false positive ripples based on a probability connection matrix for the documentation. This matrix is manually generated, and specifies the probability that a change in one segment entity propagates the requirement for a change in another document segment entity. The same drawbacks as mentioned for the SODOS system also apply to the maintenance of this document structure for ripple effect identification.
The Whorf system, described in [Brade et al., 1994], provides the maintainer with a graphical view of the system. Each node in the graphical view contains a hypertext link to the source code implementing the node. In addition to the graphical and source code views, the maintainer may also inspect the variable and function cross-reference views of a component. There is no attempt to include information from available documentation in the views. The functionality of the Whorf system closely resembles the functionality of our HyperMaint prototype described earlier in this chapter (page 185).
An experiment described in [Brade et al., 1994] showed that persons who used Whorf to try to locate and understand a set of functions in an example system outperformed persons who had only documentation available. Brade et al. do not discuss the idea of including hypertext links to documentation and the impact this would have on the experiment.
A problem identified by Brade et al., which is also a problem with reverse engineering tools, is the overwhelming amount of information contained in the initial program view, and the problem of identifying information in the view when programs get larger. This is not a problem in my proposed approach, since the user is not confronted with a large initial system graph. Rather, as we know, queries are formed and the returned information is of limited size whatever the system size.
[Karakostas, 1990] proposes to use a transitive means/ends network for a teleological (telos = purpose) modelling approach for bridging the gap between a user requirement (goal) and the function which realizes it. An example of this is shown in Figure 50.
FIGURE 50. Example of a means/ends network, with goals (Provide X, Provide Y, Provides Z), requirement goals (Needs Y, Needs Z), and means (Function A, Function B, Function C).

The X, Y, and Z in the figure are concepts and themes in the application domain. These are classified in hierarchies before the means/ends network is modelled. If a goal has no means in the application system, the generalize/specialize and has-a links in the concept hierarchies may be followed to find means for similar concepts. The teleological modelling of the connections between domain concepts and application concepts can only be seen as complementary to other application system documentation. The teleological approach provides a model of the application domain, but does not model the structure of the application system. Other design information is needed if this is to be understood.
The Rigi reverse engineering environment is described in [Tilley et al., 1992]. Parsing techniques extract and represent structures from a software system. These structures embody visual and spatial information that serve as organizational axes for the exploration and presentation of the composed subsystem structures. These structures can be augmented with views: hypertext that highlights different aspects of the system in question. View documentation can be utilized for aiding management documentation, recovering lost information, and improving system comprehension. The central assumption made for this technology is that design information is lost or out-of-date. When documentation is available, the complexity of the views shown by Rigi will soon outweigh the utility of the views, and users will go back to the documentation.

7.13 Chapter summary


This chapter has described our approach to a framework for identifying and extracting different kinds of information from a software system. The motivation for this framework is to provide the maintainer with system information when he tries to understand the system.
We define a system to consist of a set of documents and a set of source code files. The different types of documents in the system are requirements documents, design documents, user documentation, and test reports. A set of relation types was identified among elements of the documents and source code. These were perceived to be necessary both for understanding the structure of the system and for understanding the functionality of the different components.
The relation types identified as important were
1. Architectural relations (A-relation): These relations show the architectural composition of the logical system structure. The A-relations were described in Section 7.7.
2. Instance relations (I-relation): I-relations link a logical component in PCL to unique
physical components on the disk. These physical components can be a text file document
or a source code file. The I-relations were described in Section 7.8.
3. Document element relations (DE-relation): A relationship of this type relates a document
element in one document to one or several document elements in other documents or in
source code. The DE-relations were described in Section 7.9.
4. Document type relations (DT-relation): A relationship of this type relates a document
element to another document element or a list of document elements in documents which
are of the same type. Source code files are in this definition perceived to be documents.
E.g. call relations in a C program are DT-relations in our framework. The DT-relations
were described in Section 7.10.
After a discussion of problems related to different methods of representing relations among document elements and source code, and among different types of documents, we chose to extract all the DE-relations, as well as the DT-relations among proper documents, dynamically. The approaches for how the different relations are extracted were described together with the discussion of each relation.
The A- and I-relations are part of, or extensions of, the PCL language. In fact, these relations are extensively used during the computation of particular DE- and DT-relationships. A good and detailed PCL description plays an important role in constraining the set of files to inspect in order to compute these relationships. In fact, the PCL description plays a crucial role in the proposed framework since it allows the framework to scale to managing systems of considerable size.
We proposed that a thesaurus should be used together with a querying mechanism to be able to provide good matches to the user's search for relationships.


We proposed two different types of queries, the phrase query and the term query. These two
types of queries could be combined. When using the phrase queries, the user could specify a
phrase to search for. A phrase is a sequential piece of text. A term query is a query constructed
of different disjoint terms which must be found inside a word window of N words.
An exact term query requires that all words in the query must be matched in a window. An
expanded term query allows some of the terms, potentially all, to be exchanged with terms that
are synonyms to those used in the query. The thesaurus is used for finding synonyms.
By organizing the system according to the framework presented in this chapter, the relations which are dynamically extracted provide the maintainer with dynamic system traceability information which is utilized to reduce the time needed to understand the system for making the required changes to it.

CHAPTER 8

Conclusion, Summary and Future Work

8.1 Introduction
This last chapter in our thesis sums up what has been detailed in the previous chapters, and provides a final conclusion and a set of research steps for further work based on the results of this thesis.
Section 8.2 summarizes the research approach and presents a list of the main achievements of our doctoral work. Section 8.3 summarizes the results and proposals of the experiment, the PCL, and the system understanding framework. Section 8.4 includes a final conclusion of the experiences gained through our work. Finally, Section 8.5 gives some suggestions for future work extending the results of this thesis.

8.2 Research approach and main achievements of our work


The work performed during this doctoral study has led to several contributions to the field of software maintenance. My approach to attacking the problems of software maintenance has been a pragmatic one, where I have tried to analyse the existing problems of software maintenance and propose a framework for a solution. If used properly, it is my firm belief that the proposed solution will relieve many maintenance organizations of much of their existing problems.
Below, I have prepared a short list of claimed achievements which have been discussed in detail earlier in this thesis.
An extensive overview and analysis of the problems related to software maintenance is presented. Software maintenance is analysed from several angles: what is it, why does it happen, what are its costs, and what are the problems experienced by software maintainers.
Based on the analysis of software maintenance and its problems, we concluded that the lack of documentation was crucial for software maintenance problems for several reasons. We designed and carried out an empirical experiment which confirmed several hypotheses related to the impact of documentation on software maintenance productivity.
We presented a model for how the evolution of a software system should be organized in order to minimize the risks of being exposed to the problems related to software maintenance. A particular requirement for the model is that software systems are kept in internal equilibrium - a notion which was coined to describe that all parts of the system should be updated in accordance with each other. The model stresses that reducing the costs of software maintenance is best achieved by reducing the time spent on trying to understand the system.
In collaboration with other researchers of the Proteus consortium, the Proteus Configuration Language and an associated tool set were developed. The PCL allows the architecture and evolution of a system family to be described in a single formalism, and provides control of the different configurations of the system family. The PCL makes visible the logical system structure and high level dependencies, and is a tool for maintainers to understand the high level organization of a software system.
A framework for identifying and extracting information from the system (i.e. different types of documentation and source code) was developed, to be able to swiftly obtain the knowledge needed to understand the current functionality of the system, and to understand the system functionality relevant for satisfying the requirements of a given modification request. A set of relations among different system components is defined, and extraction mechanisms for the individual relationships are presented. The PCL does not provide the maintainer with the needed detailed system information, but the PCL and the tool set are integrated in the framework to provide an approach which is scalable and which performs well even for very large systems.
This sums up the main achievements of the work performed during my doctoral study.

8.3 Summary of the main achievements


8.3.1 Assessing the Role of Documentation in Software Maintenance
In Chapter 3 we described the design and analysis of an experiment to investigate the impact of
documentation on software maintenance.
Hypotheses. We refer to maintainers who have only source code available as category A. Maintainers who have source code and documentation available are referred to as category B.
H1: Maintainers in category B will on the average use less effort to understand how to fulfil a
modification request than maintainers in category A.
H2: Maintainers in category B will gain more thorough understanding and provide more
detailed specifications of a solution for a modification request than maintainers in category A.
H3: The score obtained by a subject in the experiment is expected to correlate positively with
the subject's skill.


Facts and analysis. The following numbers were measured from the subjects' effort reports:
• Category A subjects spent 21.5% more time than category B on trying to understand the system.
• Category B subjects spent 27.5% more time than category A on implementing the changes. The effort saved on code reading can be used for productive work such as actually coding the needed changes.
• Most of the time (A: 74%, B: 65%) was spent on system understanding activities. The percentage was lower for category B, as expected.
• When documentation was available (category B), subjects spent on the average the same amount of time consulting the documentation as they did with code.


The hypotheses were tested with the following results:
• H1: H0: Time_U(A) = Time_U(B). An independent samples t-test shows statistical significance in the reduction of the Time_U sample mean at a 0.05 level (t=2.14, critical = 2.06, p=0.05). H0 is rejected, and H1 holds under the given conditions.
• H2: H0: The sum of the ranks for Mpc and Mu have the same distribution for category A and B. The median for the two measures and the sums of their measured values are shown in Table 41. A Mann-Whitney test computed using SPSS shows that A and B have different distributions at a 5% level of significance. H0 is rejected for both Mpc and Mu, and H2 holds.

TABLE 41. Scores on Mpc and Mu

           Mpc A   Mpc B   Mu A   Mu B
  Median
  Total      19      29     21     32
• H3: H0: There is no significant positive correlation between the subject skills and experimental score. Table 42 shows the computed Spearman rank correlation coefficients between the test results and Mu (TR/Mu), and between the test results and the Mpc measure (TR/Mpc). Some very interesting results are observed:

TABLE 42. Spearman rank correlation coefficients

                 TR/Mu               TR/Mpc
  Category A     0.2260 (p=0.458)    0.3020 (p=0.316)
  Category B     0.8230 (p=0.001)    0.6182 (p=0.024)
  All subjects   0.5311 (p=0.005)    0.4309 (p=0.028)

  The correlation coefficients in the first row imply that the null hypothesis for category A
  cannot be rejected. Weak correlations exist between the test results and the experiment scores,
  but these are small and nonsignificant.
  For category B, however, there are strong correlations between the test results and the
  experiment scores. The null hypothesis is rejected for category B, and hypothesis H3
  holds at a 0.05 level of significance.
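
The three tests could be reproduced with standard statistical routines. The sketch below (Python with SciPy; the vectors are small placeholder values, not the measurements from Chapter 3) shows how an independent samples t-test, a Mann-Whitney test and a Spearman rank correlation would be computed:

from scipy import stats

# Placeholder vectors; the real per-subject measurements are reported in Chapter 3.
time_u_A = [10.5, 12.0, 9.8, 11.3, 13.1]        # understanding effort, category A
time_u_B = [8.9, 9.5, 10.1, 8.2, 9.0]           # understanding effort, category B
mu_A, mu_B = [1, 2, 1, 3, 2], [3, 3, 2, 4, 3]   # Mu scores for the two categories
skill_B = [55, 60, 72, 48, 66]                  # pre-test results, category B

t, p_t = stats.ttest_ind(time_u_A, time_u_B)                       # H1
u, p_u = stats.mannwhitneyu(mu_A, mu_B, alternative="two-sided")   # H2
rho, p_rho = stats.spearmanr(skill_B, mu_B)                        # H3, within category B

print(f"t={t:.2f} (p={p_t:.3f}), U={u:.1f} (p={p_u:.3f}), rho={rho:.2f} (p={p_rho:.3f})")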

8.3.2 The PCL language for specifying system architecture


In Chapter 6 we presented the Proteus Configuration Language and its supporting tool set.
PCL supports comprehensive system modelling and provides expression of variability in the
logical system model, in the mapping from the logical model to files, in the version selection,
and finally in the system manufacture process. Intensional system configuration using attribute
assignment provides configuration binding and system building in a concise and reproducible
manner.


We illustrated the important concepts in PCL on a small, but complete example. The example
was annotated with screen dumps from the PCL tools.
The author worked jointly with other national and international researchers in the ESPRIT Proteus project. The Proteus work was carried out during the first two years of my doctoral study
period.

8.3.3 The system understanding framework


Chapter 7 described our approach to a framework for identifying and extracting different kinds
of information from a software system. The motivation for this framework is to provide the
maintainer with system information when he tries to understand the system.
We defined a system to consist of a set of documents and a set of source code files. The different types of documents in the system are requirements documents, design documents, user documentation, and test reports. A set of relation types was identified among elements of the
documents and source code. These were perceived to be necessary both for understanding the
structure of the system and the functionality of the different components.
The relation types identified as important were (each is illustrated in the sketch following this list):
1. Architectural relations (A-relation): These relations show the architectural composition of the logical system structure. The A-relations were described in Section 7.7.
2. Instance relations (I-relation): I-relations link a logical component in PCL to unique physical components on the disk. These physical components can be a text file document or a source code file. The I-relations were described in Section 7.8.
3. Document element relations (DE-relation): A relationship of this type relates a document element in one document to one or several document elements in other documents or in source code. The DE-relations were described in Section 7.9.
4. Document type relations (DT-relation): A relationship of this type relates a document element to another document element or a list of document elements in documents which are of the same type. Source code files are in this definition perceived to be documents. E.g. call relations in a C program are DT-relations in our framework. The DT-relations were described in Section 7.10.
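
To make the four relation types concrete, the following minimal sketch (Python; the component and file names are invented, and the thesis itself records A- and I-relations in PCL rather than in a programming language) shows how relationship instances of each type could be stored and queried:

from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    kind: str      # "A", "I", "DE" or "DT"
    source: str    # logical component or document element
    target: str    # logical component, physical file, or document element

relations = [
    Relation("A",  "system.ui",        "system.ui.menus"),       # architectural composition
    Relation("I",  "system.ui.menus",  "src/menus.c"),           # logical component -> physical file
    Relation("DE", "req.doc#R-12",     "design.doc#D-4"),        # element in one document type -> another
    Relation("DT", "src/menus.c#draw", "src/screen.c#refresh"),  # e.g. a call relation in C source code
]

def related(element, kind=None):
    # Return the targets related to a given element, optionally restricted to one relation type.
    return [r.target for r in relations
            if r.source == element and (kind is None or r.kind == kind)]

print(related("system.ui.menus", kind="I"))   # ['src/menus.c']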
After discussing the problems of different methods for representing relations among document
elements and source code, and among different types of documents, we chose to extract all the
DE-relations, as well as the DT-relations among proper documents, dynamically. How the different relations are extracted was described together with the discussion of each relation.
The A- and I-relations are part of, or extensions of, the PCL language. These relations are
extensively used during the computation of particular DE- and DT-relationships. A good and
detailed PCL description plays an important role in constraining the set of files to inspect in
order to compute these relationships, and it is crucial for allowing the framework to scale up to
systems of considerable size.
We proposed that a thesaurus should be used together with a querying mechanism to provide
good matches to the user's search for relationships.


8.4 Final conclusion


Successful software systems will always be subject to change. The maintenance team will be
confronted with internal (changes needed to minimize the distance between the specified and
actual functionality levels) and external (changes needed to minimize the distance between the
requested and specified functionality levels) pressure to change the system to stay competitive.
Reducing the costs of software system understanding will be the most efficient way of reducing the total maintenance costs. In order to reduce the understanding costs, the maintainers
must have easy access to the knowledge of the original system developers and other maintainers, both present ones and those who have left the maintenance team.
We have defined a framework which provides the maintainer with easy access to the different
parts of the system documentation and source code, and mechanisms to navigate among the
different types of knowledge about a system part which are recorded in various places in the
software system.
A language for describing the structure of a system family and the variation among the different family members is used to provide the maintainer with a mechanism for acquiring structural system knowledge in a top-down manner. A set of relations defined among different types
of system components and a query mechanism for identifying the different types of knowledge
and particular relationships among system components support the process of understanding
the functionality of the system in a bottom-up manner.
Efficient support from our proposed framework requires that the software system, which consists of different types of system documentation that encode system knowledge and the source
code, is kept in internal equilibrium at all times. This requires that both the organization
which develops the software system, and the organization that maintains it, are able to control
their production processes with respect to updating the documentation concurrently with
source code updates. If they do, they can reap the long term benefits of reduced overall maintenance costs.

8.5 Future work


This section describes future work to extend the results of this thesis. In Section 8.5.1 we give
an overview of the issues that should be addressed in the future, while Section 8.5.2 defines a
set of hypotheses for validating the solution to reducing system understanding time proposed
in this thesis.

8.5.1 Overview
The following issues should be addressed in future work to extend the usefulness of our proposed solution to reducing system understanding costs:
• Although parts of the proposed solution have been implemented both as working prototypes, e.g. the PCL tool set and HyperMaint, and as a demonstrator for showing how to extract information from documents, a final integration of all functionality proposed in this thesis has not been made. Such an integrated prototype should be implemented in the future.
• To validate the solution, the prototype should be tested in a laboratory experiment. A proposal for how the usefulness could be tested is described in a set of hypotheses, see Section 8.5.2, which would extend the experiment described in Chapter 3.
• To further validate the prototype, it should be used in a real project. This would be a long term effort.
• The framework for supporting system understanding currently assumes that the maintainer must query for the information needed when he tries to understand what must be done to the system to comply with the requirements of a modification request. Future research should investigate the possibilities of providing automatic identification of the information needed to understand how to carry out the changes called for in a modification request. This requires intelligent, automatic parsing of the modification request, so that correct information is found and presented to the maintainer without user interaction.
• The solution envisioned in the opening chapter of this thesis involved a change management system for keeping track of the impacts of the different modification requests and the system changes imposed by these requests. Facilities for such a change management system have not been considered in the solution presented in this thesis, but future work should investigate how to integrate such facilities in the proposed solution.
• When facilities for automatic identification of system information and a change management system have been integrated in the proposed solution, we foresee that change planning functionality can be incorporated in the framework. If several modification requests must be integrated in a configuration concurrently, the change planning functionality could optimize the assignment of satisfying them. This could be done by planning how the sequence of making the changes should be organized to create the least interference among the different changes made by the maintainers.
Much work is still left to support all the activities of the proposed approach to system understanding. We have however in this thesis demonstrated the importance of focusing on reducing the time spent on system understanding, and taken the first steps towards a software maintenance environment which will provide the maintainer with mechanisms to reach this goal.

8.5.2 Experiment extensions


We outline how the experiment described in Chapter 3 can be extended to investigate the
impact of technology that assists the maintainer in identifying and navigating documentation,
thereby reducing the system understanding time.
In this thesis, we specified a framework for how documentation should be organized and
which tools should be used to exploit this documentation to relieve the burden of trying to
understand a software system when performing maintenance on it. Let us for the moment term
such a framework an understanding support system (USS). Below, we indicate how the experiment could be extended to investigate the impact of a USS on maintenance productivity.

8.5.2.1 Maintenance circumstances


Table 43 gives an overview of a set of circumstances under which maintenance can be performed. The circumstances reflect the discussion of the previous section.
In the following, we say that a person who works in one of these circumstances performs maintenance in category (x), where x is a roman numeral i through v. The last column in the table is split into two: Experienced means maintainers with competence about the application and the application domain; Inexperienced refers to maintainers lacking this competence. Category (i) in the proposed extension equals category A in the experiment described in Chapter 3. Category (iv) equals category B.

TABLE 43. Possible maintenance situations

                    Source only   Source & outdated doc.   Source & updated doc.
                                                            Experienced (a)  Inexperienced
No tool support        (i)               (ii)                   (iii)            (iv)
With USS support                                                                  (v)

a. We have effort data on this from the group which developed the application.
Data from maintainers performing maintenance in category (iii) are obtained from data collected from student groups which have designed and implemented the software system used as
experiment baseline. The experiment baseline is one system selected from those delivered by
the student groups.
Three of the circumstances in Table 43 are left blank. The two cases to the left are blank since
the system is not in internal equilibrium, which is a requirement for the usefulness of our proposed framework for system understanding. For the table cell where maintainers are experienced, coupled with USS usage, we have no data, because a USS prototype was not
available when the project course was run.

8.5.2.2 Hypotheses extended set


Hypotheses H4, H5, and H6 reflect the aim of a USS. Since a prototype of the framework is
not functional, we cannot test these hypotheses in an experiment. When such a prototype is
available, these hypotheses should be further investigated.
Hypotheses H7 and H8 theorize how the use of outdated documentation will negatively affect
maintenance productivity. These hypotheses were not further investigated due to restricted
funds and a limited availability of potential experiment subjects.
H4: Maintainers performing maintenance in category (v) will on the average use less effort to

fulfil a change request than maintainers performing maintenance in category (i).


Discussion of H4: The argumentation for the validity of this hypothesis follows the argumentation of hypothesis H1. A risk in the testing of this hypothesis is that maintainers performing
maintenance in category (v) are not proficient in using a USS prototype. A learning effect of
using the prototype may slow them down in the work of complying with the changes. A
course about the prototype's functionality, together with testing it on an example, may
reduce the learning effect.
H5: Maintainers performing maintenance in category (v) will on the average use less effort to

fulfil a change request than maintainers performing maintenance in category (iv).


Discussion of H5: The difference between categories (iv) and (v) is that maintainers performing maintenance in the latter have support from the USS. The prototype learning effect, as
discussed for hypothesis H4, may skew the time used for performing changes in category
(v) in a negative direction (i.e. more effort). If we assume that this learning effect will not
influence the effort used, the hypothesis states that category (v) should use less effort than
category (iv) to comply with the change requests. The two categories have the same amount
of documentation available, but category (v) maintainers have the ability to easily locate
and navigate on-line among related information of different types. This helps both when the
conceptual model of the changes is built up by the maintainer, and when documentation
should be updated after the changes.
H6: Maintainers performing maintenance in category (v) will not use significantly more effort

than maintainers performing maintenance in category (iii).


Discussion of H6: Maintainers in category (iii) have much knowledge of the application and
the application domain, since they originally designed and implemented the application.
With this hypothesis we want to test whether using the USS prototype will bridge the
productivity gap between experienced and inexperienced maintainers.
Two risks are associated with testing this hypothesis. The first is the prototype learning effect discussed earlier. The second is related to the size of the software system which
is used in the experiment. The maintainers in category (iii) might have been able to fully
comprehend both the documentation and the implementation, and their inter- and intra-relations. If this is the case, the maintainers in category (iii) might have a very high productivity. The chance of this situation would have been very small if a software system of realistic
size had been chosen for the experiment.
H7: Maintainers performing maintenance in category (ii) will use more effort to comply with a

change request than maintainers performing maintenance in category (iv).


Discussion of H7: The assumption to be verified by this hypothesis is that documentation

which is not synchronized with the implementation leads to confusion, and hence more
effort to comply with change requests.
H8: Maintainers performing maintenance in category (ii) will not use significantly less effort

to comply with a change request than maintainers performing maintenance in category (i).
Discussion of H8: Outdated documentation may even be worse than no documentation. When
the maintainer finds that what is documented is not what is implemented, valuable time is
used to understand the difference. This time could have been used for complying with the
change request. If the documentation is not synchronized with any part of the implementation, a
situation where outdated documentation is worse than no documentation occurs. If only
parts of the documentation are out of synchronization with the implementation, the documentation can be of some use. The degree of mismatch between documentation and implementation, for the information needed by the maintainer for a given change request, decides the
outcome of a comparison of categories (i) and (ii).

APPENDIX A

PCL Syntax

A.1 Introduction
This appendix presents the complete syntax of the Proteus Configuration Language. In addition
to the examples and explanations given in Chapter 6, the Proteus Consortium has described
the language in a reference manual ([PROTEUS, 1994b]). The authors of the reference manual
were Ian Sommerville, Gilbert Rondeau, Bjørn Gulla, and myself. The syntax presented in this
appendix is taken from [PROTEUS, 1994b].
This syntax was used as the basis for implementing a syntax analyser for PCL. Because of this, there
are some differences, introduced to remove ambiguities, compared with the annotated description in
the reference manual. However, the concrete syntax of the language is the same.
For the convenience of the reader and for the completeness of the thesis, we have chosen to
include two chapters from the PCL reference manual ([PROTEUS, 1994b]). Appendix A.3
(section 9 in the PCL reference manual) describes how a particular configuration is selected
from a system family. Appendix A.4 (section 12 in the PCL reference manual) describes the
system building process of the PCL tool set. The process of generating a makefile from a
bound configuration is described in detail.

A.2 The concrete syntax of the PCL


A.2.1 Notation
The following notations are used in the BNF description of the PCL language.
[...]* means zero or more repetitions of the enclosed
[...]+ means one or more repetitions of the enclosed
[...] means that the enclosed items are optional
| means alternative (or)
::= means is defined as
Terminal symbols are emboldened: bold
Keywords may be written entirely in upper-case or lower-case letters, but are always
written in lower-case letters here.


Identifiers are case sensitive, i.e. a is not the same as A

A.2.2 Lex definition


digit ::=      [0-9]
letter ::=     [A-Z, a-z]
integer ::=    {digit}+
identifier ::= {letter} ({letter} | {digit} | _ | - )*
string ::=     \" [^\"]* \"
A string is a sequence of characters enclosed in quotes ("). Quotes may be included in a string
by preceding them with the escape character \. The escape character may be included in a
string by preceding it with the escape character.
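
As an illustration of these lexical rules, the following sketch (Python; the regular expressions are a transcription of the definitions above, and the sample text is invented, so this is not part of the PCL tool set) tokenizes a small fragment of PCL text:

import re

# Token patterns transcribed from the lex definition above
# (escape sequences inside strings are omitted for brevity).
TOKEN_RES = [
    ("COMMENT",    r"//[^\n]*"),
    ("STRING",     r"\"[^\"]*\""),
    ("INTEGER",    r"[0-9]+"),
    ("IDENTIFIER", r"[A-Za-z][A-Za-z0-9_-]*"),
    ("SKIP",       r"\s+"),
    ("OTHER",      r"."),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_RES))

def tokenize(text):
    for match in MASTER.finditer(text):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

sample = '// a comment\nfamily calc inherits base\nattributes version : string := "1.0"; end'
print(list(tokenize(sample)))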

A.2.3 BNF definition of PCL


A.2.3.1 Comments
Comments must be written on a separate line with the first non-blank characters on the line
being //
// this is a comment
// comments are delimited by end of line

A.2.3.2 PCL_library
pcl_library ::=
[pcl_entity]+
pcl_entity ::=
family_entity
| version_description_entity
| tool_entity
| relation_entity
| class_entity
| attribute_type_entity

A.2.3.3 PCL entity definitions


pcl_entity_reference ::=
[ library_name ] pcl_entity_name
pcl_entity_name ::=
identifier
library_name ::=
identifier


Family entity.
family_entity ::=
family_entity_decl
| name_declaration
family_entity_decl ::=
family family_entity_name [ inherits family_entity_reference]
[ family_body ]*
end
name_declaration ::=
declare name_list end
family_entity_reference ::=
pcl_entity_reference
family_body ::=
classification_section
| attributes_section
| interface_section
| parts_section
| physical_section
| relationships_section
classification_section ::=
classifications classification_decl_list end
attributes_section ::=
attributes attribute_decl_list end
parts_section ::=
parts part_decl_list end
physical_section ::=
physical physical_decl_list end
interface_section ::=
interface interface_decl_list end
relationships_section ::=
relationships relationship_decl_list end
Version description entity.
version_description_entity ::=
version version_description_entity_name inherits
version_description_entity_reference [of family_entity_reference ]
[ version_description_body ]*
end
version_description_entity_reference ::=
pcl_entity_reference
version_description_body ::=
vd_attributes_section
| sub_version_description_section
vd_attributes_section ::=
attributes attribute_assignment_list end
sub_version_description_section ::=
parts version_description_decl_list end
Class entity.
class_entity ::=
class class_entity_name
[inherits_part]
[class_body ]

end
inherits_part ::=
inherits class_entity_reference
| dimension slot_name
class_entity_reference ::=
pcl_entity_reference
slot_name ::= identifier
class_body ::= [ extension_section ]
[ tool_section ]
extension_section ::=
physical extension_decl_list end
tool_section ::=
tools tool_decl_list end

Attribute type entity.


attribute_type_entity ::=
attribute_type attribute_type_entity_name
enumeration_section
end
attribute_type_entity_name ::=
identifier
enumeration_section ::=
enumeration enumeration_decl_list end
Relation definition entity.
relation_entity ::=
relation relation_entity_name [inherits relation_entity_reference]
[ relation_body ]
end
relation_entity_name ::=
identifier
relation_entity_reference ::=
pcl_entity_reference
relation_body ::=
domain class_decl_list end
range class_decl_list end
Tool definition entity.
tool_entity ::= tool tool_entity_name [ inherits tool_entity_reference ]
[tool_body]*
end
tool_entity_name ::=
identifier
tool_entity_reference ::=
pcl_entity_reference
tool_body ::=
input_section
| output_section
| attributes_section
| scripts_section
input_section ::=
inputs parameter_decl_list end
output_section ::=
outputs parameter_decl_list end



attributes_section ::=
attributes attribute_decl_list end
scripts_section ::=
scripts script_decl_list end

A.2.3.4 PCL list definitions


Name declaration list.
name_list ::= identifier [, identifier ]
Classification declaration list.
classification_decl_list ::=
[ classification_decl ; ]*
classification_decl ::=
slot_name [ => class_entity_reference ]
Attribute declaration list.
attribute_decl_list ::=
[ attribute_decl ; ]*
attribute_decl ::=
attribute_name [: attribute_type] [ exported ]
[ attr_op attribute_expression ] [default]
attribute_type ::=
string
| integer
| attribute_type_entity_reference
attribute_type_entity_reference ::=
pcl_entity_reference
attr_op ::= := | =
attribute_expression ::=
simple_expression
| conditional_attribute_expression
simple_expression ::=
simple_operand [ op simple_expression]
conditional_attribute_expression ::=
if condition then attribute_expression [ elsif condition then
attribute_expression]* [else attribute_expression] endif
default ::= default simple_operand
simple_operand ::=
literal
| attribute_value_ref
| function_ref
literal ::= string
| integer
| enumeration_identifier
attribute_value_ref ::=
identifier
| identifier
op ::= ++ | + | -
function_ref ::= name
| function_name ( simple_expression [, simple_expression]* )


Parts declaration list.


part_decl_list ::=
[ part_decl ; ]*
part_decl ::= slot_name [ : slot_type] [ => slot_expression]
slot_type ::=
uses
| external
slot_expression ::=
simple_slot_expression
| conditional_slot_expression
conditional_slot_expression ::=
if condition then slot_expression [ elsif condition then slot_expression]*
[ else slot_expression ] endif
simple_slot_expression ::=
family_entity_reference [ [ instance_number ] ]
| ( family_entity_reference [ , family_entity_reference]* )
family_entity_reference ::=
pcl_entity_reference
instance_number ::=
integer
| attribute_value_ref
Physical declaration list.
physical_decl_list ::=
[ physical_decl ; ]*
physical_decl ::=
slot_name => physical_list
physical_list ::=
simple_physical_list
| conditional_physical_list
conditional_physical_list ::=
if condition then physical_list [ elsif condition then physical_list]* [ else
physical_list ] endif
simple_physical_list ::=
annotated_physical
| ( annotated_physical [ , annotated_physical ]* )
annotated_physical ::=
[ physical_name ] [ annotation_list ]
physical_name ::=
simple_expression // of type string
annotation_list ::=
( annotation [ , annotation ]* )
annotation ::=
attributes attribute_decl_list end
| classifications classification_decl_list end
class_entity_reference ::=
pcl_entity_reference
Interfaces declaration list.
interface_decl_list ::=
[ interface_decl ;] *
interface_decl ::=
slot_name [ => interface_expression]
interface_expression ::=
conditional_interface_expression



| simple_interface_expression
conditional_interface_expression ::=
if condition then interface_expression [ elsif condition then
interface_expression ]* [ else interface_expression] endif
simple_interface_expression ::=
interface_name
| ( interface_name [ , interface_name]*)
interface_name ::=
identifier
Relationship declaration list.

relationship_decl_list ::=
[ relationship_decl ; ]*
relationship_decl ::=
slot_name : relation_entity_reference [ slot_pragma]
[ =>[ inverse ] relationship_expression]
slot_pragma ::=
string
relation_entity_reference ::=
pcl_entity_reference
relationship_expression ::=
simple_relationship_expression
| conditional_relationship_expression
conditional_relationship_expression ::=
if condition then relationship_expression [ elsif condition then
relationship_expression]* [ else relationship_expression ] endif
simple_relationship_expression ::=
[ family_entity_reference ,]
( family_entity_reference [ , family_entity_reference]* )
Attibute assignment list.
attribute_assignment_list ::=
[ attribute_assignment ]*
attribute_assignment ::=
attribute_name ass_operator attribute_value_expression
attribute_value_expression ::=
attribute_value
| attribute_value_array
attribute_value_array ::=
positional_value_array
| named_value_array
positional_value_array ::=
( attribute_value_expression [ , attribute_value_expression]*)
named_value_array ::=
( slot_attribute_value [ , slot_attribute_value] )
slot_attribute_value ::=
attribute_index_expression => attribute_value_expression
attribute_index_expression ::=
attribute_index_range
| attribute_index_enum
| others
attribute_index_range ::=
attribute_index_value .. attribute_index_value
attribute_index_enum ::=
attribute_index_value [ | attribute_index_value]*
attribute_index_value ::=

integer
| attribute_value
ass_operator ::=
:= | <> | < | > | >= | <=


attribute_name ::=
identifier
attribute_value ::=
string
| integer
| enumeration_literal
| min
| max
enumeration_literal ::=
identifier
Version description declaration list.
version_description_decl_list ::=
[ sub_version_slot ; ]*
sub_version_slot ::=
slot_name => version_description_entity_reference
Extensions declaration list.
extension_decl_list ::= [ simple_expression ; ]*
Tool declaration list.
tool_decl_list ::=
[ tool_entity_reference , ]*
Enumeration declaration list.
enumeration_decl_list ::=
[identifier , ]*
Class declaration list.
class_decl_list ::=
[ class_entity_reference ; ]*
Parameter declaration list.
parameter_decl_list ::=
[ parameter_decl ; ]*
parameter_decl ::=
slot_name [ : multi_flag ] => class_entity_reference
[ ( tool_expression ) ]
multi_flag ::=
multi
tool_expression ::=
tool_operand [ op tool_expression ]
tool_operand ::=
string
| attribute_value_ref
| slot_name
| function_ref


Script declaration list.


script_decl_list ::=
[script_decl ; ]*
script_decl ::=
script_name [ := tool_expression ]
script_name ::=
build
| depend

A.2.3.5 Conditionals
condition ::=
atomic_condition [conditional_operator atomic_condition]*
atomic_condition ::=
[not] numeric_condition
| [not] non_numeric_condition
| ( condition )
conditional_operator ::=
and | or
numeric_condition ::=
numeric_value comp_op numeric_value
numeric_value ::=
integer
| attribute_value_ref
comp_op ::=
= | <> | < | > | >= | <=
non_numeric_condition ::=
non_numeric_value eq_op non_numeric_value
non_numeric_value ::=
attribute_value_ref
| string
| enumeration_identifier
eq_op ::=
= | <>

A.2.3.6 Standard library


This section lists the entities which have been entered in the standard PCL library. These may
be accessed without an explicit library prefix from a PCL declaration.
// 1. Predefined classifications
// 1.a Top-level
class base_class end
// 1.b Dimension type
class software dimension type end
class hardware dimension type end
class processor inherits hardware end
class platform dimension type end
// 1.c Dimension category
class program dimension category end
class document dimension category end

// 1.d Dimension abstraction
class component dimension abstraction end
class process dimension abstraction end
class system dimension abstraction end
// 1.e Dimension status
class primary dimension status end
class derived dimension status end
// 2. Predefined relations
relation requires
domain base_class; end
range base_class; end
end
relation implemented_by
domain process; end
range system; component; end
end
relation installed_on
domain system; end
range platform; end
end
// 3. Predefined attribute types
attribute_type boolean
enumeration false, true end
end

A.2.3.7 Other distinguished names


// 4. Distinguished attributes in physical objects
// Note that these are not held in the library as
// attribute declarations cannot be held separately
// hence they are shown as comments here
attributes
pragma             : string;
workspace          : string;
repository         : string;
write_access       : boolean;
repository_version : string;
end
// The following functions are defined in PCL
// See page 22 for a description of these functions
path (string) -> string
basename (string) -> string
prefix (string) -> string
suffix (string) -> string
lowercase (string) -> string
uppercase (string) -> string
length (string) -> integer
pos (string, string) -> integer

rpos (string, string) -> integer


substring (string, integer, integer) -> string
integer (string) -> integer
string (integer) -> string
string (enumeration_identifier) -> string

A.3 Version identification and selection


A PCL family description which incorporates variability represents a set of versions. A specific version may be defined by setting the variability control attributes which are associated
with that version. These are integrated with the family description to remove variability from
the family description and thus generate, in PCL, a description of the required system version
(without variability).
The definition of the attribute values which identify such a version is specified in a version
descriptor. The tools provided with PCL take such a version descriptor and a PCL family
description and remove the variability from the family description. Further tools can then take
this family description, abstract the correct versions of the files making up the system from a
version management systems and generate a script to build the system.
This chapter discusses how versions are identified, the bind transformation which is used to
associate values from a version descriptor with a family and the selection of files from the version management system.

A.3.1 Version identification


The model of version identification which is supported in PCL is that an individual version is
identified by a defining set of attributes. This is a more powerful and flexible method than
using a simple version identifier. The distinctions between versions can be represented at a
finer grain so that versions which have some differences but which have much in common are
represented by overlapping attribute sets.
For example, if we wished to represent a version of a system which had been written in Pascal,
delivered to a customer SNCF, in 1993 for installation on IBM PCs, then we could identify this
using PCL attributes as follows:
attributes
Prog_language: string := "Pascal";
Customer: string := "SNCF";
Delivery_date: string := "1993";
Platform: string := "IBM PC";
end

A.3.2 Version descriptors


Version descriptors are PCL constructs which encapsulate the set of attribute values defining a
system version. These attribute values are not just the values of PCL variability control
attributes but are also the values of attributes used to select the physical versions of files from
the underlying version storage system. This physical file selection is covered in Section A.3.8.


A.3.3 Syntax of version descriptors


version_descriptor ::= version_desc_header [version_desc_body]* end
version_desc_header ::=
version version_name [inherits version_reference]
[of family_entity_reference]
version_desc_body ::=
vd_attributes_section
|
sub_version_description_section
vd_attributes_section ::= attributes [ attribute_assignment ;]* end
// attribute_assignment is defined in Section A.3.5
sub_version_description_section ::= parts [sub_version_slot]* end
sub_version_slot ::= name => version_reference
version_reference ::= [library_qualifier] version_name

A version descriptor includes a collection of attribute values which, when associated with a
family description, uniquely identifies a version of that family.
The header of the version descriptor specifies the name of the family to which the attributes are
to be bound and the version name. If the version descriptor is created through inheritance, the
family name associated with the inheritance parent is also inherited. However, this may be
explicitly overwritten with a different family name included after the of keyword.
The vd_attributes section defines the values which are to be assigned to attributes in a PCL
family entity whose attribute values are specified to be assigned at bind-time. It also sets out
queries which are used by the Select transformation (See Section A.3.8) to associate the
required versions of physical objects with a PCL description.
In general, a system is made up of different sub-systems and modules. It is often the case that a
particular version of a system is constructed by integrating different versions of these sub-systems, so it does not necessarily follow that a single set of attribute values applies to all of
the sub-systems making up a system. We therefore allow for the possibility of sub-version
descriptors as explained in Section A.3.6.

A.3.4 Inheritance in version descriptors


Version descriptors may be created through inheritance in the same way as family entities. As
well as inheriting the attributes and parts of the parent description, a child version descriptor
also inherits the name of the family to which the descriptor is applied (as specified after the
keyword of).
Attribute values, slots in the parts section and the family name with which the version descriptor is associated may all be overwritten after inheritance. New attributes and parts slots may
also be added.

A.3.4.1 Examples of version descriptor inheritance


Say we had the following PCL declarations of version descriptors
version V1 of F1


attributes
x := 12345;
y := "abcde";
end
parts
P1 => VP1;
P2 => VP2;
end
end
version V2 inherits V1
attributes
y := "pqrst";
end
parts
P2 => VP22;
end
end

If we had modified the version descriptor V1 directly instead of using inheritance, the result
would look like:
version ??? of F1
attributes
x := 12345;
y := "pqrst";
end
parts
P1 => VP1;
P2 => VP22;
end
end
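
A minimal sketch of this inheritance resolution, assuming each version descriptor is held as a plain dictionary (this is only an illustration of the override rules above, not how the PCL tools represent descriptors internally):

def resolve(parent, child):
    # Child attribute values, parts slots and family name override those of the parent;
    # entries that the child does not mention are inherited unchanged.
    return {
        "of":         child.get("of", parent.get("of")),
        "attributes": {**parent.get("attributes", {}), **child.get("attributes", {})},
        "parts":      {**parent.get("parts", {}), **child.get("parts", {})},
    }

V1 = {"of": "F1",
      "attributes": {"x": 12345, "y": "abcde"},
      "parts": {"P1": "VP1", "P2": "VP2"}}
V2 = {"attributes": {"y": "pqrst"}, "parts": {"P2": "VP22"}}

print(resolve(V1, V2))
# {'of': 'F1', 'attributes': {'x': 12345, 'y': 'pqrst'}, 'parts': {'P1': 'VP1', 'P2': 'VP22'}}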

A.3.5 Attribute assignment


A.3.5.1 Syntax
attribute_assignment ::=
attribute_name ass_operator attribute_value_expression
attribute_value_expression ::= attribute_value | attribute_value_array
attribute_value_array ::= positional_value_array | named_value_array
positional_value_array ::=
"("attribute_value_expression ["," attribute_value_expression]*")"
named_value_array ::=
"("slot_attribute_value["," slot_attribute_value]*")"
slot_attribute_value ::=
attribute_index_expression "=>" attribute_value_expression
attribute_index_expression ::= attribute_index_range
| attribute_index_enum
| others
attribute_index_range ::= attribute_index_value ".." attribute_index_value
attribute_index_enum ::=
attribute_index_value["|" attribute_index_value]*
attribute_index_value ::= integer | attribute_value_ref
ass_operator ::=
:= | <> | < | > | <= | >=
attribute_value ::= string | integer | enumeration_literal | min | max


The attribute assignment part of a version descriptor has a dual function:


1. It defines values which are to be assigned to variability control attributes for a particular

version of the PCL family entity.


2. It defines queries which are used to select the appropriate versions of physical objects as

discussed in Section A.3.8. Attribute names may be used in these queries which are not
defined in the associated PCL family entity. These names are simply ignored in the bind
transformation and passed directly to the select transformation as discussed later in this
chapter.
When values are to be assigned to variability control attributes the assignment operator (:=)
is used. The value assigned must conform with the defined attribute type and may be a simple
string, integer or member of an enumeration. These values are used in the bind transformation
to remove variability from the PCL family description.
After the bind transformation has been applied, a further transformation (Select) is applied to
remove variability at the physical level. Queries as specified as simple logical expressions
using the operators as defined in Table 44.
TABLE 44. Operators in attribute assignments

Operator
:=    This is equivalent to =. A physical object is selected if its corresponding attribute equals the assigned value.
<>    A physical object is selected if its corresponding attribute does not equal the assigned value.
>     A physical object is selected if its corresponding attribute is greater than the assigned value.
<     A physical object is selected if its corresponding attribute is less than the assigned value.
>=    A physical object is selected if its corresponding attribute is greater than or equal to the assigned value.
<=    A physical object is selected if its corresponding attribute is less than or equal to the assigned value.

As well as simple attribute values, the keywords min and max may be used in attribute assignments. These mean:
1. min: The reference is taken to mean the minimum value of the corresponding attribute associated with the physical object.
2. max: The reference is taken to mean the maximum value of the corresponding attribute associated with the physical object.
These are used, for example, to select the most recent version of an object or to select the
object with the lowest version identifier.
It is an error to assign max or min to variability control attributes.
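
Read as a whole, the query part of an attribute assignment is a predicate over the attributes of the candidate versions of a physical object. A minimal sketch of evaluating such a predicate (Python; purely illustrative, with the min/max forms omitted, and not the Repository's actual query engine):

import operator

OPS = {":=": operator.eq, "<>": operator.ne, ">": operator.gt,
       "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def matches(version_attrs, query):
    # A candidate version matches if every (attribute, operator, value) term holds.
    return all(name in version_attrs and OPS[op](version_attrs[name], value)
               for name, op, value in query)

candidate = {"status": "tested", "version_number": 7}
query = [("status", ":=", "tested"), ("version_number", ">=", 5)]
print(matches(candidate, query))   # True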


A.3.5.2 Assigned values


The values assigned to attributes in a version descriptor may either be simple literal values or
may be arrays of values. Arrays of values are assigned to the attributes of family entities where
there are many instances of the same entity associated with a parts slot in a PCL family description. The array assignment facility allows different attribute values to be assigned in different
instances of the same entity. Thus, if there are 1000 instances of some entity X with attribute Y
associated with a slot, it is possible to set Y to Z (say) in some of the instances and to T in other
instances.
The simple values assigned must conform to the type of the attributes in the family entity and
may be either string, integer or enumeration literal values.
When an array of values is to be assigned, the values associated with each instance may
be specified using positional specification or by explicitly identifying the instance numbers
which are to be assigned specific values. Specification using mixed positional and named specification is also possible.

A.3.5.3 Positional specification


This is comparable to the way in which actual parameter values are assigned to formal parameters in a procedure in a language such as C or Pascal. The values to be assigned are written as
a list with the position in the list identifying the instance to which the value is assigned. Therefore, if we have some family entity X with attribute name Y where X is replicated 3 times, the
attribute value for Y may be assigned as follows:
version VD_of_X
Y := ("one", "two", "three")
end

In the first instance of X, Y would be assigned the value "one", in the second instance it would
be assigned the value "two", and in the third instance the value "three".

A.3.5.4 Named specification


In named specification, the indexes of the instances whose attribute is to be assigned the given
values are explicitly identified either as a range, as a list or using the keyword others. The
value to be assigned is specified along with the index identification. Therefore, if we have
some family entity X with attribute name Y where X is replicated 1000 times, the attribute
value for Y may be assigned as follows:
version VD_of_X
Y := ( 1..500 => "one",
501 | 502 => "two",
others => "three")
end

In the first 500 instances of X, Y would be assigned the value "one", in instances 501 and 502
it would be assigned the value "two", and the value "three" in the remaining instances.

A.3.5.5 Mixed specification


Specification using mixed positional and named specification is also possible. Therefore, if we
have some family entity X with attribute name Y where X is replicated 1000 times, the
attribute value for Y may be assigned as follows:


version VD_of_X
Y := ( "one", "two",
others => "three")
end

In the first instance of X, Y would be assigned the value "one", in the second instance it would
be assigned the value "two", and the value "three" in the remaining instances.
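
A minimal sketch of how a named or mixed value array could be expanded into one value per instance (Python; the helper and its input format are invented for illustration and are not part of the PCL tools):

def expand_array(spec, instances):
    # Expand a value-array specification into a list with one value per instance (1-based indexes).
    values = [None] * instances
    default = None
    position = 0
    for index_expr, value in spec:
        if index_expr == "others":                    # others => value
            default = value
        elif isinstance(index_expr, tuple):           # a range, e.g. 1..500
            lo, hi = index_expr
            for i in range(lo, hi + 1):
                values[i - 1] = value
        elif isinstance(index_expr, list):            # an enumeration, e.g. 501 | 502
            for i in index_expr:
                values[i - 1] = value
        else:                                         # positional specification
            position += 1
            values[position - 1] = value
    return [v if v is not None else default for v in values]

# Corresponds to: Y := ( 1..500 => "one", 501 | 502 => "two", others => "three" )
spec = [((1, 500), "one"), ([501, 502], "two"), ("others", "three")]
print(expand_array(spec, 1000)[498:504])   # ['one', 'one', 'two', 'two', 'three', 'three']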

A.3.6 Decomposition of version descriptors


To allow for different sub-systems with different attribute value sets, a version descriptor may
contain a parts section. This is similar to the parts section in a family entity but no conditional
slot assignments are allowed. Rather, each slot is bound to a single version descriptor.
This is perhaps best illustrated by example. Say we have a family description as follows:
family A
...
parts
P1 => B;
P2 => C ;
P3 => D ;
end
end

We wish to associate a version descriptor with A and separate version descriptors with B and
D. Family C takes the same version descriptor as A. We would therefore write the version
descriptor for A as follows:
version VD_A of A
...
parts
V1 => VD_B ;
V2 => VD_D ;
end
end

The version descriptor is applied to entity A and the specified sub-version descriptors are
applied to B and D. No version descriptor is applied to entity C. It is assumed that any attribute
values in C which should be assigned at this stage may be computed through references to
attributes of ancestors of C in the composition hierarchy.
The version descriptors VD_B and VD_D must have headers as follows:
version VD_B of B
version VD_D of D

During the bind transformation, each parts slot in the entity being processed is checked against
the list of sub-version descriptors to see if a sub-version descriptor should be applied to it.


A.3.7 The bind transformation


The bind transformation is a transformation which transforms a PCL family description plus a
PCL version description into another PCL family description. The signature of the bind transformation is:
bind (family_entity, version_descriptor) = family_entity

To explain this transformation, assume it is applied as follows:


bind (F, VD)

where F is any family entity name and VD is any version descriptor.


The semantics of the bind transformation may be explained as follows:
1. The attribute expressions associated with variability control attributes in F are evaluated

and the values are bound to the variability control attributes. If a reference is made to an
attribute name which cannot be resolved (i.e. no value is bound to that name), and a version
descriptor has been specified and a value for that name is defined in the version descriptor,
that value is taken to be the value of the referenced attribute. Note that there will always be
a version descriptor VD for the top-level entity F.
2. For each entity which is referenced in the parts section of F, the parts, interface, physical
and relationship sections are checked. Where a conditional expression is associated with a
slot, that expression is evaluated and the resulting name or relation bound to the slot.
3. For each entity PF in the parts section of F (which now has had variability removed), a bind
transformation is carried out. The bind transformation checks if a sub-version descriptor for
the entity being bound has been included in the version descriptor VD.
4. This process is repeated until the entire composition tree defined in F has been traversed
and all entities visited and bound.
Note that the bind transformation need not remove all variability. Partial binding is possible
where some but not all variability is removed.
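
A minimal sketch of this recursive binding (Python; the entity representation, the library dictionary and the reduction of conditional slot expressions to callables are simplifications invented for the example, and unresolved attribute references are handled simply by letting descriptor values override entity values):

def bind(family, descriptor, libraries):
    # 1. Bind variability control attributes; descriptor values take precedence.
    attrs = dict(family.get("attributes", {}))
    attrs.update(descriptor.get("attributes", {}))

    # 2. Evaluate conditional slot expressions in the parts section against the bound attributes.
    parts = {slot: (expr(attrs) if callable(expr) else expr)
             for slot, expr in family.get("parts", {}).items()}

    # 3./4. Recursively bind every referenced sub-entity, using a sub-version
    # descriptor when one is given, until the whole composition tree is bound.
    sub_descriptors = descriptor.get("parts", {})
    bound_parts = {
        slot: bind(libraries[name], sub_descriptors.get(slot, {"attributes": attrs}), libraries)
        for slot, name in parts.items()
    }
    return {"attributes": attrs, "parts": bound_parts}

libraries = {
    "ui_motif": {}, "ui_x11": {},
    "calc": {"attributes": {"toolkit": None},
             "parts": {"ui": lambda a: "ui_motif" if a["toolkit"] == "motif" else "ui_x11"}},
}
print(bind(libraries["calc"], {"attributes": {"toolkit": "motif"}}, libraries))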

A.3.8 The select transformation


A PCL description identifies the physical mapping of PCL entities onto files. These files themselves may be separately versioned with versions stored in some repository where the versions
are identified. In Proteus, there is explicit support for managing versions of files through the
system known as the Repository.
The purpose of the select transformation is to remove versioning variability from a bound
PCL description, i.e. to select correct versions for all software physical objects. The result of a
successful Select transformation is called a selected representation. The PROTEUS toolset
ensures configuration reproducibility from a selected PCL description.
Select transforms a PCL family description and an accompanying version description into
another family description. The signature is:
select (F, VD) -> F'

where F is a bound family entity, VD is a version description, and F' is the resulting selected
family.
During this transformation only the physical section of the family entity is affected. All other
sections of F' are made by copying the corresponding sections from F. For each software physical object an intensional version selection query is made to the Repository. The query is composed of the selection expressions over attributes stated in the version description, as well as
all attributes in F with the selection qualifier.
The Repository resolves the query by consulting all versions comprising the specified version
group and their attribute annotations. If the selection query identifies a unique version in the
Repository, the Repository sends over its identifier, and this identifier is recorded in the
'repository_version' attribute associated with the physical object. If the Repository is not able
to find a unique version (the best match) for the query, Select fails.
For example:
family F
<xxx>
<yyy>
physical
s1 => <p1> ( <a1> );
s2 => <p2> ( <a2> );
end
end

If version identifiers <vid1> and <vid2> are retrieved from the Repository when querying for
matching versions for physical objects denoted by slot s1 and s2, F' will look as follows:
family F
<xxx>
<yyy>
physical
s1 => <p1> ( <a1>; repository_version := <vid1> );
s2 => <p2> ( <a2>; repository_version := <vid2> );
end
end
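
A minimal sketch of this selection step (Python; the Repository is faked as an in-memory dictionary of version records and the query is reduced to attribute equality, so this only mirrors the behaviour described above):

def select(physical_objects, query, repository):
    # Attach a unique repository_version to every physical object, or fail.
    selected = {}
    for slot, file_name in physical_objects.items():
        candidates = [v for v in repository.get(file_name, [])
                      if all(v["attrs"].get(a) == value for a, value in query.items())]
        if len(candidates) != 1:
            raise RuntimeError(f"Select failed for {file_name}: {len(candidates)} matching versions")
        selected[slot] = {"file": file_name, "repository_version": candidates[0]["id"]}
    return selected

repository = {"menus.c": [{"id": "1.4", "attrs": {"status": "tested"}},
                          {"id": "1.5", "attrs": {"status": "draft"}}]}
print(select({"s1": "menus.c"}, {"status": "tested"}, repository))
# {'s1': {'file': 'menus.c', 'repository_version': '1.4'}}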

A.3.9 Tool processing


The version description is applied to the PCL family description by a tool called PCL-bind.
This removes variability from the PCL description to create a bound instance. This bound
instance, along with the version description is then applied to the repository (by the Repository
tool) and the appropriate versions of the physical files defined in the bound instance are identified by applying the Select transformation. A makefile is generated which references these
files and running that makefile causes a unique system version to be created.

A.4 Semantics of makefile generation


Building support is realized through the use of the Make program (see [Feldman, 1979]). We
define the semantics by describing how information expressed in PCL is mapped to constructs in
the generated makefiles. Readers are assumed to be familiar with the use of Make and the syntax
and semantics of makefiles. Assumed input for this processing is a bound, complete and consistent
PCL description.


A.4.1 Definitions
is a tool.
is a tool slot.
is a relationship slot.
is a physical object.
is a classification term.
is a string expression.
E is a family entity

Classifications is the set of all defined classification terms.


PhysicalObjects is the set of all physical objects, initially all physical objects explicitly
declared in the families. Note that files shared by several families will be treated as one file.
This is intended, and is due to the definition of a set.
The semantic function expand is defined for several types of parameters on page 222
(1)

Q is_applicable : // An action
makefile := makefile emit_rule(),
.output_slots: PhysicalObjects := PhysicalObjects outfilename(, )
Explanation: The crucial part of tool processing is a search for applicable tools. Each tool
specifies some information to go into the makefile, defined by emit_rule. If an applicable tool
is found, the PhysicalObjects set is extended with constructed templates for all output
elements. This step is repeated until no more applicable tools can be found.
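
Operationally this is a fixed-point computation: tools are applied, their outputs are added to the set of physical objects, and the search is repeated until nothing new can be derived. A minimal sketch (Python; simple file-suffix matching stands in for the classification-based subsumption of the definitions above, and the tool list is invented):

def generate_rules(physical_objects, tools):
    # Repeatedly apply tools whose input suffix matches a known object; collect makefile rules.
    objects, rules = set(physical_objects), []
    changed = True
    while changed:
        changed = False
        for in_suffix, out_suffix, command in tools:
            for obj in sorted(objects):
                if obj.endswith(in_suffix):
                    target = obj[: -len(in_suffix)] + out_suffix
                    if target not in objects:
                        objects.add(target)
                        rules.append(f"{target}: {obj}\n\t{command}")
                        changed = True
    return rules

tools = [(".c", ".o", "cc -c $< -o $@"), (".y", ".c", "yacc -o $@ $<")]
for rule in generate_rules({"menus.c", "grammar.y"}, tools):
    print(rule + "\n")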

(2)

is_applicable iff // A criterion for (1)


is_multi_rule:
.input_slots .qualifier multi :
PhysicalObjects : is_subsumed_by
is_multi_rule:
PhysicalObjects: .output_slots.first.name = .name
Explanation: A tool is_applicable if there can be found a physical object suitable for each input
slot. If the tool is a multi rule, the tool is_applicable if the tools output has been declared as a
derived object in some family entity.

(3)

is_multi_rule iff
.input_slots : .qualifier = multi
Explanation: A tool entity is a multi_rule if one of its slots is qualified with the multi qualifier.

(4)

multi_rule:
.input_slots.first = multi
o = .output_slots.first.name PhysicalObjects
.input_slots s.t. = multi:
.arg_list = closure of o( | is_subsumed_by )


.input_slots.first multi
i = .input_slots.first.name PhysicalObjects
.input_slots s.t. = multi:
.arg_list = composition structure( | is_subsumed_by )
Explanation: If the first input slot is qualified with multi, a derived object matching the output of
the tool entity must be declared with status derived in some family entity. The slots qualified
with multi will then inhale all physical objects which they subsume in the closure of the
family composition structure, starting with the family entity in which the derived object was
declared. If the first input slot is not qualified with multi, but the tool entity still is a multi_rule,
the processing is controlled by the first input parameter to the tool entity. In this case, the tool
matches all physicals which are subsumed by the first slot (still with the rest of restrictions to
be satisfied).

(5)

is_subsumed_by iff
.classification is_subtype_of .classification

.name matches expand(.file_name)


Explanation: A physical object is suitable for a slot if its classification is a subtype of the slot's
classification and the file name matches. Note that if the slot's file_name expression is empty,
is_subsumed_by is based entirely on the classifications.
(6)

is_subtype_of iff
=

Classifications : inherits is_subtype_of


Explanation: A classification is a subtype of another if they are the same or if the first one inherits
(by the inherits clause) some classification which is a subtype of the second.

(7)

matches iff
=

= expand()
Explanation: The file name of a physical object matches a slot's file_name expression if the
expression is empty or if the name is a match of the regular expression specified by the
expression.

(8)

) =
.file_name = :
expand( prefix(.input_slots.first.name) ++
suffix(.classification.physical.first)
.file_name = :
expand()

outfilename(,

Explanation: The name of an output object is constructed from the slot's file_name expression by
the expand operation. If this expression is not explicitly specified, there exists a default
expression which mirrors implicit rules in Make.

(9)

E1 requires E2 :
// An action
makefile := makefile emit_dependency( E1, E2 )
Explanation: If an entity has a requires relation to another entity, extra dependency information
is emitted to the makefile.

(10) E1 requires E2 iff // Criterion for (9)


E1. relationships_slots: .type = requires E2 .becomes
Explanation: The criterion for emitting such dependencies (13) is that there is a requires relation
from E1 to E2.

(11) derived( ) = outfilename(, 2) iff


, 1, 2 : is_applicable 1 .input_slots

is_subsumed_by 1 2 .output_slots
Explanation: The files derived from a physical entity are those that are direct outputs of tools that
act on the physical entity.

(12) emit_rule() =
.build_script :
.output_slots : emit(.name)
emit( : )
.input_slots : emit(.name)
emit(LF, TAB, expand(.build_script), LF, LF)
.depend_script :
emit(.output_slots.first.name, .d : )
.input_slots : emit(.name)
emit(LF, TAB, expand(.depend_script), LF, LF)
emit(include , .output_slots.first.name, .d, LF, LF)
Explanation: Emit_rule defines the output which is written to the generated makefile for each
applicable tool.
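As an illustration of rule (12), the sketch below shows how such a rule emission could be coded. The Tool and Slot structures and the expand helper are hypothetical stand-ins, not the PROTEUS implementation.

# Illustrative sketch of rule (12): the text emitted to the makefile for one
# applicable tool. Tool/Slot and expand() are hypothetical stand-ins.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Slot:
    name: str                      # complete file name bound to the slot

@dataclass
class Tool:
    input_slots: List[Slot]
    output_slots: List[Slot]
    build_script: Optional[str] = None
    depend_script: Optional[str] = None

def expand(script: str) -> str:
    return script                  # placeholder for the PCL expand function described below

def emit_rule(tool: Tool) -> str:
    text = []
    if tool.build_script:
        text.append(" ".join(s.name for s in tool.output_slots) + " : "
                    + " ".join(s.name for s in tool.input_slots))
        text.append("\t" + expand(tool.build_script) + "\n")
    if tool.depend_script:
        first = tool.output_slots[0].name
        text.append(first + ".d : " + " ".join(s.name for s in tool.input_slots))
        text.append("\t" + expand(tool.depend_script) + "\n")
        text.append("include " + first + ".d\n")
    return "\n".join(text)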

(13) emit_dependency( E1, E2 ) =


E1.PhysicalObjects: s.t. is_applicable(,)
emit( derived() )
emit( : )
E2.PhysicalObjects:
emit()
emit(LF,LF)
Explanation: The semantics of the requires relation for makefile generation. An entity declared as dependent on another entity with the requires relation means that the files directly derived from the physical objects of the first entity are source-level dependent on the physical objects of the second entity.


Expand.

The semantic function expand is used to compute a string value for tool expressions. It is defined by:

tool_expression ::= tool_operand [op tool_operand]*
Return the concatenation of the strings returned when calling expand on each of the sub-operands.

tool_operand ::= literal
Return the literal string value. If the literal is an integer or an enumeration, the string function (see Section A.4.2) is applied.

tool_operand ::= attribute_value_ref
Return the value of the attribute. If the attribute type is not string, the value is first converted to a string.
Define-time attributes: return the value specified in the definition included in the tool entity.
Build-time attributes: return the value associated with the physical objects. Since there are potentially several input and output objects, the value is retrieved by doing a breadth-first search among input and output physical objects and their enclosing family entities.

tool_operand ::= function_ref
Call expand on the simple_expression(s) that act as arguments and return a transformed value depending on the function. The possible function names are those defined in Section A.4.2.

tool_operand ::= slot_name
Return the complete file name of the physical object used as this parameter. In case the slot is qualified with multi, a space-separated list with all appropriate physicals matching the slot's classification is returned.
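The following sketch mirrors the recursive structure of expand over the four operand kinds; the tuple encoding of operands is an assumption made for the example, not the PCL data model.

# Sketch of expand over tool expressions. Operands are encoded here as
# ('literal', text), ('attr', value), ('func', name, [operand lists]) or
# ('slot', [file names]); this encoding is an assumption for the example.
def expand(operands, functions):
    parts = []
    for op in operands:
        if op[0] == "literal":
            parts.append(str(op[1]))                 # integers/enumerations via str()
        elif op[0] == "attr":
            parts.append(str(op[1]))                 # attribute value converted to string
        elif op[0] == "func":
            args = [expand(arg, functions) for arg in op[2]]
            parts.append(functions[op[1]](*args))    # e.g. prefix, suffix (Section A.4.2)
        elif op[0] == "slot":
            parts.append(" ".join(op[1]))            # multi slots give a space-separated list
    return "".join(parts)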

A.4.2 Function values


Various functions may be included as part of a simple expression. These functions may have zero, one, two or three parameters.
The special parameterless function name is used to discover the name of the PCL family declaration which includes the simple expression being evaluated.
1. If the simple expression is used within a family declaration as part of an attribute assignment, the function name returns the name of the enclosing family declaration as a string.
2. If the simple expression is used within a class declaration to define a pattern, the function name refers to the name of the PCL family entity which has been assigned that classification. That is, if a family entity X is classified as class Y where the definition of Y includes a reference to the function name in a pattern definition, the evaluation of the pattern will return X as the value of name.
Other functions which may be included in simple expressions are summarised in Table 45.



TABLE 45. Built-in functions for use in simple expressions

Function    Signature                                 Description
path        string <- string                          Returns everything preceding and including the last /.
                                                      Ex. path(/usr/lib/libX11.a) returns /usr/lib/.
basename    string <- string                          Returns everything between the last / and the last ".".
                                                      Ex. basename(/usr/lib/libX11.a) returns libX11.
prefix      string <- string                          Returns everything preceding the last ".".
                                                      Ex. prefix(/usr/lib/libX11.a) returns /usr/lib/libX11.
suffix      string <- string                          Returns everything following and including the last ".".
                                                      Ex. suffix(/usr/lib/libX11.a) returns .a.
lowercase   string <- string                          Converts uppercase letters to lowercase.
                                                      Ex. lowercase(/usr/lib/libX11.a) returns /usr/lib/libx11.a.
uppercase   string <- string                          Converts lowercase letters to uppercase.
                                                      Ex. uppercase(/usr/lib/libX11.a) returns /USR/LIB/LIBX11.A.
length      integer <- string                         Returns the length of a string.
                                                      Ex. length(proteus) returns 7.
pos         integer <- (string, string)               Returns the index of the leftmost occurrence of the second parameter in the first parameter.
                                                      Ex. pos(abcdabckeja,cda) returns 3.
rpos        integer <- (string, string)               Returns the index of the rightmost occurrence of the second parameter in the first parameter.
                                                      Ex. rpos(abcdabckeja,abc) returns 5.
substring   string <- (string, integer, integer)      Returns the substring of the first parameter between positions given by the 2nd and 3rd parameters (inclusively).
                                                      Ex. substring(This is a test, 6, 9) returns is a.
integer     integer <- string                         Converts a string to an integer value. If the string does not represent an integer, 0 is returned.
                                                      Ex. integer(5) returns 5.
string      string <- integer                         Converts an integer value to a string.
                                                      Ex. string(5) returns 5.
string      string <- enumeration                     Converts an enumeration value to a string representation.
                                                      Ex. string(false) returns false.
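Several of the functions in Table 45 can be restated directly; the sketch below reproduces the documented behaviour (with 1-based indices, as the pos/rpos/substring examples suggest) and is meant only as an illustration, not as the PROTEUS implementation.

# Sketch of some Table 45 functions; 1-based indices as in the examples.
def path(s):      return s[:s.rfind('/') + 1] if '/' in s else ''
def basename(s):  return s[s.rfind('/') + 1:s.rfind('.')] if '.' in s else s[s.rfind('/') + 1:]
def prefix(s):    return s[:s.rfind('.')] if '.' in s else s
def suffix(s):    return s[s.rfind('.'):] if '.' in s else ''
def pos(s, t):    return s.find(t) + 1               # 0 if t does not occur
def rpos(s, t):   return s.rfind(t) + 1
def substring(s, i, j):  return s[i - 1:j]           # inclusive, 1-based
def to_integer(s):                                   # "integer" in Table 45
    try:
        return int(s)
    except ValueError:
        return 0                                     # 0 for non-integer strings

assert path('/usr/lib/libX11.a') == '/usr/lib/'
assert basename('/usr/lib/libX11.a') == 'libX11'
assert prefix('/usr/lib/libX11.a') == '/usr/lib/libX11'
assert suffix('/usr/lib/libX11.a') == '.a'
assert pos('abcdabckeja', 'cda') == 3
assert rpos('abcdabckeja', 'abc') == 5
assert substring('This is a test', 6, 9) == 'is a'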

A.4.3 Multi input slots


If one (or more) but the first slot of a tool is declared with the multi qualifier, the tool is invoked for each (set of) physical(s) where the tool is_applicable. All physical objects in the complete composition structure which is_subsumed_by a multi input slot are also inputs in every such tool invocation.
If the first of the input slots of a tool is declared with the multi qualifier, the computation of is_applicable is a bit more complex. For such tools there may only be one applicable occurrence for a composition structure. In this context a composition structure is basically the whole family hierarchy, but by explicitly declaring proper derived objects associated with sub-families, smaller composition structures can be defined by the user.
Within a composition structure, all physical objects which is_subsumed_by a multi input slot are assigned as input arguments for the single tool occurrence. When referencing this slot_name in expressions, expand will return a list containing complete file names for all the physical objects assigned.

A.4.4 Possible tool entity instantiations


The following table lists the possible variations of tool instantiations based on the cardinality
of the in and out parameters.
TABLE 46. Possible tool instantiationsa
Input

Output

Comment

Not instantiated

Phony ruleb

Instantiated oncec

1[+ n*multi]

Instantiated once per explicit input

n [+ n* multi]

Instantiated once per applicable input set

1 [ + n* multi]

Instantiated once per explicit input

Instantiated once per applicable input set

multi

At least 1 explicit

multi

max n

multi + 1 free

At least 1 explicit

multi + n free

At least 1 explicit

Only instantiated where explicitly specified in the family composition structure

multi + multi
a. Legend for table: 0,1,n = number of slots, free = no file_name specified for the slot,
explicit = file_name specified for the slot
b. A phony rule is a rule with no outputs. It is sometimes used to initialize tools with data
for internal processing. A phony rule must not be understood as a Make phony target,
which is a Make target which is not the name of an actual file.
c. Rules with no input can be used for extracting system information, e.g. to set flags to the compiler. This is a typical use of such rules in normal make scripts. For PCL generated makefiles, attributes should be used for this where possible.

A.4.5 Utilizing the parts structure


Prior to generating the makefile, a derivation graph is computed. The graph is computed by applying rules on the physical objects in the parts hierarchy, starting with the physical objects contained in the lowermost family entities in the parts hierarchy. These are derived as far as possible.
Whenever a physical object is specified with classification status derived, the parts hierarchy (that is, the closure of the family entity in which the object is derived) is searched for objects matching the input slots qualified with multi in tool entities that can produce the specified derived object.


If the same family entity is referenced twice in a PCL description, one of them must be referenced with the uses qualifier. The makefile generator utilizes this to reflect that the corresponding physical objects are only processed once during the building of the derivation graph.
There is no restriction on referencing the same file from two different physical sections. However, if both of these physicals are in the same system composition, a controlled decision is made: the first occurrence of the file in a physical in the left-to-right breadth-first traversal of the composition structure is inserted into the set of PhysicalObjects. The attributes propagated to the tool are thus the attributes in this definition of the physical, and the attributes in the enclosing family entity.

A.4.6 Pragmatic concerns


In order to generate makefiles which are going to be processed fluently by Make, there are a number of practicalities that must be taken into account. Some are due to inherent limitations in Make, while others facilitate the practical use of the makefiles. These will not be further elaborated in this document.

Path handling: all file names are really computed as the concatenation of the value of the workspace attribute and the name expression.
Ordering: care must be taken to ensure an appropriate ordering of information inside the makefiles.
Phony targets: artificial targets are introduced as needed to control Make processing.
CheckOut and CheckIn rules: usability is improved by adding rules to support repository communication. There are rules to establish or complete the workspace structure by checking components out of the Repository, and rules to check in modified components after changes.
Trimming: remove duplicate rules which would confuse Make. Selection will be based on matching define-time attributes in the tool description with attributes of the physical object arguments and their enclosing families.
Partitioning: structure the system building information into a set of cooperating makefiles. There are two alternatives:
1. One makefile per directory, with the necessary make rules in the makefile. The problem with this solution is that dependencies can be lost in the partitioning process.
2. One makefile per directory, with one global makefile to hold all the make rules. The makefiles in the directories will contain commands on how to actually build the target by referencing the global makefile. The advantage with this solution is that all dependency information is kept in one place, and utilized correctly.

A.4.7 An example
After binding, a fragment of a PCL description might look like Figure 51. Some arrows have
been added to illustrate how the different entities are related.
A single physical object named is referenced in slot s1 of family . The classification of in
the TYPE dimension is . We assume is a sub-classification of software. Otherwise would
be ignored as far as system building is concerned. A tool is capable of processing physical
objects of type . If there are several such tools available, a selection has to be made (see Section
A.4.6, trimming).

family
attributes
a : string = 1;
b : string;
end
physical
s1 => (classification TYPE => );
end
end
classification inherits
physical ; end
tools ; end
end

classification inherits
physical ; end
end

tool
inputs
=> ; end
outputs => ; end
attributes
a : string;
b : string;
c : string = 3;
end
scripts
build => ;
depend => ;
end
end

FIGURE 51. A PCL fragment

The information shown in Figure 51 will be emitted into the generated makefile.
:
expand()
.d :
expand()
include .d
FIGURE 52. Information emitted to makefile

In Figure 51, is the filename of the output object. In this particular example the value of
will be expand(prefix()++suffix()) by the default rule.
To illustrate how expand works, lets assume is specified as:
ct -s ++ ++ a ++ b ++ c

Assuming the build-time attribute b in family is bound to 2, expand would produce the concatenation of the values:
ct -s 1 2 3


If there were several input (1 2 ... m) and output (1 2 ... n) arguments, the generated information is shown in Figure 53.
1 2... n : 1 2 ... m
expand()
1.d : 1 2 ... m
expand()
include 1.d
FIGURE 53. Resulting makefile


APPENDIX B

Analysis of Data from the Programming Methodology Course

B.1 Introduction
This document presents the data material gathered from the deliveries of the 32 groups in the 45012 Programming Methodology course at the Department of Computer Systems and Telematics at the Norwegian Institute of Technology in the 1995 spring semester. The complete set of data from which the statistics are generated is available from the author.

B.2 Description of the assignment


The assignment is to develop a system for automatic maintenance of consistency in source
code comments in C++ programs, and for navigating between the comments given to certain
program elements. The exact requirements were specified before the project started ([Tryggeseth, 1995b]), and were handed out to all student groups.
The system to be made in the assignment extracts certain program elements from a C++ program. The extracted program elements are file, class definition, member function declaration, member function definition, function definition, and include statements. For each of these elements, the user is asked to input different types of comments in a uniform interface. Such comments are for example a test status for files, an objective field for (member) functions, and change logs for files and (member) functions. The user may navigate among the comments for the different program elements to obtain an overview of the program. When a session finishes, the program files are updated by inserting dedicated comment fields on the lines prior to the extracted program element. This generated comment field includes both the comments which the user has entered, and comments automatically generated from program analysis done by the experiment baseline. The generated comment includes a check sum field for the relevant program element.
The program can now detect any changes in the program elements extracted, by comparing the old check sum in the comment field with the new check sum generated by parsing. If changes in a program element are found, the user is asked to update the comment already entered for the program element, and add a change log describing the change. The program files are updated with the modified comments, and problems with outdated comments in the source code are prevented.
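The change detection described here only needs a per-element fingerprint. A minimal sketch, assuming the element text is hashed (the concrete checksum algorithm was left to the student groups), could look as follows.

# Sketch of checksum-based change detection for an extracted program element.
# The hash choice (MD5 over whitespace-normalised text) is an assumption.
import hashlib

def checksum(element_text: str) -> str:
    normalised = " ".join(element_text.split())      # ignore whitespace-only edits
    return hashlib.md5(normalised.encode()).hexdigest()

def element_changed(stored_checksum: str, current_text: str) -> bool:
    return checksum(current_text) != stored_checksum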
Three metrics specified by Chidamber and Kemerer ([Chidamber and Kemerer, 1994]) are computed. They are defined as follows:


Weighted Methods per Class (WMC) for a class C with member functions M1, ..., Mn is defined as WMC = Σ(i=1..n) ci, where ci is the complexity of member function Mi. We have defined the complexity ci as (length(Mi) div 10) + 1, where length(Mi) is defined as the length of member function Mi, measured in lines of code.

The Depth of Inheritance Tree (DIT) for a class is the inheritance depth of the class. Classes which do not inherit have DIT = 0. If a class inherits from multiple classes, the DIT for the class is the maximum length from the class to the root of the inheritance hierarchy.

The Number of Children (NOC) of a class is the number of classes which directly inherit the class in the class hierarchy.
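A minimal sketch of these three definitions, using the complexity weighting above and a hypothetical class model, is given below.

# Sketch of the WMC, DIT and NOC computations described above.
# Classes are modelled as name -> (list of base class names, list of member
# function lengths in LOC); this representation is an assumption for the example.
classes = {
    'Shape':  ([],         [12, 4]),     # two member functions of 12 and 4 lines
    'Circle': (['Shape'],  [25]),
    'Ring':   (['Circle'], [8, 8]),
}

def wmc(name):
    # WMC = sum of ci, with ci = (length(Mi) div 10) + 1
    return sum(length // 10 + 1 for length in classes[name][1])

def dit(name):
    bases = classes[name][0]
    return 0 if not bases else 1 + max(dit(b) for b in bases)

def noc(name):
    return sum(name in bases for bases, _ in classes.values())

print(wmc('Shape'), dit('Ring'), noc('Shape'))   # -> 3 2 1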


The development plan for the projects in the course was as shown in Table 47. The system documentation generated by the project groups consisted of a design document, a test report (test
plan and test log), and a user manual.
TABLE 47. Development plan for projects

Date   In/Out   What       Description
2.2    Out      Reqs       Requirements specification handed out to students.
3.3    In       OOD        Deliver design document for quality assessment by other group.
6.3    Out      OOD        Design documents are switched for quality assessment.
10.3   In/Out   Qual       Quality assessments are delivered and handed out to the groups.
22.3   In       V0         System documentation and code delivered for system test.
23.3   Out      TestRes.   The system test and test results are handed out.
29.3   In       V1         Revised system documentation and code after changes implied by the system test is delivered.
29.3   Out      Req D1     The first modification request is given to the students.
6.4    In       V2         Revised system documentation and code after changes implied by the first modification request is delivered.
7.4    Out      Req D2     The second modification request is given to the students.
27.4   In       V3         Final system documentation and code is delivered.
5.5                        Groups are told whether they passed or failed the assignment.

The groups were asked to make extensions to the initial set of requirements. These are referred to as ReqD1 and ReqD2 in Table 47. The popular names for them are delta1 and delta2. During the rest of this appendix, V1 is referred to as delivery 3, V2 as delivery 4, and V3 as delivery 5.

The first modification request, delta 1.
The mechanism for navigating the comment space as specified in the original requirements is only usable on-line. For viewing the comments when not in front of the computer, the source code files have to be inspected. The first modification request specifies an extension to the initial requirements for generating a sequential report of a program which has had comments generated by the experiment baseline. The modification request specifies the format of the report, and its contents.


The second modification request, delta 2.

The functionality asked for in this modification request is to provide information about where any declared (member) function is defined. The definition place may take the values not_defined, defined_in_file xxx, or defined_in_class.

B.3 Partitioning of groups


For the 1995 edition of course 45012 Programming Methodology, 155 students had signed up. The participants are a mixture of second year students (in their third semester) from the Faculty of Electrical Engineering and Computer Science, Department of Computer Systems and Telematics (category a), from the Faculty of Economics and Industrial Management (category b), and other students of different grades from other faculties (category c). The students from category a will in their third year specialize in either computer science or engineering cybernetics. For the first two years, their educational schedule is identical. Students from category b have a somewhat different schedule than the former, but their education regarding computer science is identical. Students in the last category are a mixture of students from the Faculty of Civil Engineering, the Department of Electrical Power Engineering, the Department of Physical Electronics, the Faculty of Physics and Mathematics, the Faculty of Marine Technology, and finally a group of PhD students from different faculties. These may have taken other courses in computer science, but the majority of these normally have the computer science classes taken by the first two categories.
33 groups were carefully selected without mixing members from different categories. This resulted in 20 groups from category a (groups a01 through a21, leaving a12 empty), 5 groups from category b (groups b01 through b05), and 8 groups from category c (groups c01 through c10, leaving groups c03 and c06 empty).
For 84 students from category a, grades were obtained from two undergraduate computer science courses, and 17 groups were partitioned according to the following algorithm (sketched below):
1. Sort students according to average grade in the two courses, best grade average first.
2. Assign the 17 first students to groups a01 to a17.
3. Assign the next 17 students to groups a17 to a01.
4. Compute the average grade for the 17 groups.
5. Assign the 35th student to the group with the highest average (low grade), and so on, so that the 51st student is assigned to the group with the lowest average.
6. Repeat steps 4 and 5 until all students are assigned to a group.
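A sketch of this balancing procedure is shown below; the group count and student data are hypothetical, and grades follow the Norwegian scale where lower numbers are better.

# Sketch of the grade-balancing partitioning described above.
def partition(students, n_groups=17):
    """students: list of (name, average_grade); returns list of groups."""
    students = sorted(students, key=lambda s: s[1])          # best grade first
    groups = [[] for _ in range(n_groups)]
    # Steps 2-3: one forward and one backward pass over the groups.
    for i, s in enumerate(students[:n_groups]):
        groups[i].append(s)
    for i, s in enumerate(students[n_groups:2 * n_groups]):
        groups[n_groups - 1 - i].append(s)
    # Steps 4-6: repeatedly give the next-best student to the currently
    # weakest group (highest numeric grade average).
    rest = students[2 * n_groups:]
    while rest:
        batch, rest = rest[:n_groups], rest[n_groups:]
        order = sorted(range(n_groups),
                       key=lambda g: -sum(gr for _, gr in groups[g]) / len(groups[g]))
        for g, s in zip(order, batch):
            groups[g].append(s)
    return groups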
Group a12 was later split up, and members were assigned to other groups, for which grades were not available. One person was assigned from group a12 to a01. This leaves us with 16 groups (a01 through a17) for which we have controlled the group composition.
The group averages are (in random order): 2.2, 2.2, 2.1, 2.2, 2.1, 2.1, 2.2, 2.3, 2.3, 2.4, 2.3, 2.4, 2.3, 2.3, 2.4, 2.0. This results in a partitioning with average 2.24, and standard deviation 0.12. This gives us a homogeneous set of groups.
The reason for controlling this composition is to prevent skewness in the measurements, limiting the possibility of socialization in the group formation process, where very eager programmers join forces in a small number of super groups. Additionally, we believe that the work


pressure will be levelled out among the groups, as eager students may cheer up and inspire the more average students. Due to the size of the project, we believe that there is no risk of one or two persons hijacking the project, as this would imply too high a work pressure on those two students to reach the deadlines.

B.4 Total scores for the groups.


A system test was designed to unveil to what extent the systems produced by the student groups satisfied the requirements given in the requirement specification. As described earlier, the requirements were the same for all groups. The system test was designed in such a way that all requirements described in the specification were tested. In addition, the person doing the system test (for the data presented here, I have made all tests) was assumed to follow the questions asked in the test sequentially. The reason for this was that the test was designed to represent a process that would be followed by an ordinary user of the application. The questions asked during the test can be ordered into the following groups:
1. Register the application under ProKomm for the first time. This includes asking for comments and saving the generated comments. (63 points)
2. Update the comments after small changes have been made to one file. (27 points)
3. Update the comments after changes have been made to several files in the application that were used in the system test. (84 points)
4. Check that ProKomm extracts the correct information from the application used in the system test, and also that ProKomm is able to write the manual and automatically generated comments satisfactorily into the files. (60 points)
5. Assess the functionality for asking queries about the system. (45 points)
6. Check the ability to generate reports on metrics and on system structure. (6 points)
The test was designed so that if parts of the questions in groups 1 to 3 could not be answered, the test could still continue from group 4.
For each question in the test, a score of 0 to 3 points was given. 0 means that the required functionality is not taken into account in the application. 1 means that the functionality exists under a menu choice, but has not been implemented. If the score 2 is given, the system provides the required functionality, but a wrong result is given. The top score for each question, 3, means that the required functionality is present, and the result presented is correct.
Figure 54 to Figure 56 show the total system test scores (percent of maximum possible score) for all groups for the three different deliveries. Table 48 shows some descriptive statistics of the percentage scores for deliveries 3 through 5.
TABLE 48. Descriptive statistics for total scores

             Sum 3 (%)   Sum 4 (%)   Sum 5 (%)
Mean         25,7        34,8        38,9
Stdev        29,1        32,9        32,8
Median       21          27          40
1 quartile
3 quartile   32          48          52


[Bar chart: total system test score in percent of maximum for each group, a01 through c10.]
FIGURE 54. Total score for delivery 3

These statistics show a low average score for the groups. The most notable observations that can be made from these data are the high standard deviation, and the very low values for the 1st quartile. Together these indicate that many groups have not been able to produce any results at all, while other groups have delivered very useful applications.
[Bar chart: total system test score in percent of maximum for each group.]
FIGURE 55. Total score for delivery 4

After a thorough analysis of the test score sheets, we found that the largest deviations among the groups' test scores were located in categories 2 and 3 of the system test questions. These are related to the ability to locate changes in the application, and prompt for comments to (only) the changes made. This functionality relied heavily on the delivered application's ability to generate check sums for the different program blocks (1). We found that groups with relatively low total scores had almost always failed to implement the checksum algorithms for the program blocks, and therefore could not score at all in categories 2 and 3. This functionality does not take many lines to implement, but many groups considered it hard to find an algorithm for this purpose. Since this was not intended to be an obstacle for the groups, we decided to find a way to level out the scores to obtain a fairer basis for group rating. This is described in the next section.
1. The program blocks in the assignment were file, class definition, function definition, and member function definition.
[Bar chart: total system test score in percent of maximum for each group.]
FIGURE 56. Total score for delivery 5

B.5 Total scores for the groups (moderated)


The major obstacle to scoring in categories 2 and 3 of the system test questions was the need for a check sum generator for program blocks. Two groups with identical systems except for this would have a test score difference of 27 points in group 2 and 84 points in group 3, giving a total difference of 111 points (out of 285 possible points).
We decided to moderate the possible scores to be obtained in these two groups such that the maximum score in category 2 would be 6, and 12 in category 3. This would favour the groups with check sum functionality, but only moderately, giving a better rationale for judging the total functionality of the different deliveries. Figure 57 through Figure 59 report the moderated scores in percent for the different deliveries. Table 49 shows some descriptive statistics for the moderated percentage scores for all deliveries.

FIGURE 57. Total score for delivery 3 (moderated)
[Bar chart: moderated score in percent of maximum for each group.]


TABLE 49. Descriptive statistics for moderated total scores

             Mod 3 (%)   Mod 4 (%)   Mod 5 (%)
Mean         32,2        43,5        49,4
Stdev        30,6        34,6        34
Median       30          42          60
1 quartile   12
3 quartile   47          73          78

We see that the averages have increased by about 10 percent per delivery, while the standard deviations are at the same level as for the non-moderated total scores. Still we can observe that the first quartile is rather low compared to the median, and this indicates that there are still some groups which have problems with satisfying the basic requirements. We discuss this in the next section.
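For concreteness, a small sketch of the moderation is shown below. The category maxima come from the question groups listed in Section B.4; the proportional rescaling of obtained points is an assumption, since the text only fixes the new maxima.

# Sketch of the score moderation: category 2 is capped at 6 of originally 27
# points, category 3 at 12 of 84; other categories are unchanged.
ORIGINAL_MAX  = {1: 63, 2: 27, 3: 84, 4: 60, 5: 45, 6: 6}    # 285 points in total
MODERATED_MAX = {**ORIGINAL_MAX, 2: 6, 3: 12}                # 192 points in total

def moderated_total(points_per_category):
    """points_per_category: dict question group -> points obtained."""
    return sum(points * MODERATED_MAX[cat] / ORIGINAL_MAX[cat]
               for cat, points in points_per_category.items())

# A group with full marks everywhere except groups 2 and 3:
print(moderated_total({1: 63, 2: 0, 3: 0, 4: 60, 5: 45, 6: 6}))   # -> 174.0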
FIGURE 58. Total score for delivery 4 (moderated)
[Bar chart: moderated score in percent of maximum for each group.]

FIGURE 59. Total score for delivery 5 (moderated)
[Bar chart: moderated score in percent of maximum for each group.]


B.6 Discussion of test score analysis


As we observed in the two previous sections, a single requirement that would take little effort to satisfy given the right algorithm resulted in a large spread of system test scores, even for groups which had similar functionality otherwise. We calibrated for this effect by reducing the impact that this single requirement had on the test scores, and computed the moderated test scores. This unveiled that the calibration was successful in adjusting the scores for a fairer group rating, but still the 1st quartile values were very low for all three deliveries.
We would like to find what this effect could be attributed to, and found that several groups, 11 of a total of 33, had not delivered executable code ready for system test on one or more occasions. At system test, these groups got a score of 0, significantly lowering the total score. After correcting the data by extracting these groups from the average, we obtained numbers as shown in Table 50.
TABLE 50. Statistics for groups which delivered all times (N=23)

             Sum 3 %   Mod 3 %   Sum 4 %   Mod 4 %   Sum 5 %   Mod 5 %
Mean         32,3      39,1      45,3      55        50,6      62,5
Stdev        31,2      31        33,1      32,3      30,9      28
Median       22        32        42        64        45        68
1 quartile   14        20,5      21        32        31        47,5
3 quartile   43        64        70,5      77,5      83        84,5

From the table we observe that all values have increased significantly compared to Table 48 and Table 49. We argue that groups with low functionality have been reluctant to deliver their applications at the deadlines. This is further supported by the fact that the pass/fail decision is made only by a total judgement of the last delivery, so groups with low scores have nothing to lose by not delivering at deadlines 3 and 4. By not delivering, they hide the actual status of their systems from the other groups, thereby not losing prestige by unveiling that their system is in bad shape.
Finally, Figure 60 shows the average total system test scores, arranged by the different strategies we have discussed for computing them.

B.7 Measures of code


In this section we investigate the form of the C++ applications delivered by the groups. We first present two measures of the number of lines in the application, and discuss some of the problems we had in avoiding measurement errors here. We need the LOC measure for later investigation of the correlation between system test score and application size. The physical structure of the applications is looked into, in particular how the groups have organized their application into files. A large number of files can be dangerous for keeping an overview of the application's logical structure. The encapsulation unit in the C++ language is the class, but the modularization unit is the file. Therefore it can be unsatisfactory if modules are split into several files. As the last issue of this section, we compute a set of source code metrics.


[Line chart: average system test score in percent per delivery, summed and moderated, shown both for all 33 groups and for the 22 groups that delivered every time.]
FIGURE 60. Averages for the data set

B.7.1 LOC measures based on naive application closure


Figure 61 shows a lines of code (LOC) measure for all groups. The LOC measure was computed using the UNIX wc (word count) tool for counting lines in all files with extension .h and .cpp in the project directory. Prior to this counting, all file names containing the word test, and files matching the names used in the system test, were filtered out. This was done under the assumption that test files were tagged with a name that would make them stand out from the application files. As we ourselves had distributed the system test files, we could filter these out as we knew their names.
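The counting step is easy to reproduce; the sketch below is equivalent to the wc-based procedure (the system-test file name shown is a hypothetical example).

# Sketch of the LOC count: all .h and .cpp files below a project directory,
# excluding files whose names contain "test" and the known system-test files.
from pathlib import Path

SYSTEM_TEST_FILES = {"prokomm_test.cpp"}        # hypothetical example name

def count_loc(project_dir):
    total = 0
    for path in Path(project_dir).rglob("*"):
        if path.suffix not in (".h", ".cpp"):
            continue
        if "test" in path.name.lower() or path.name in SYSTEM_TEST_FILES:
            continue
        total += sum(1 for _ in path.open(errors="replace"))
    return total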
The upper sub-figure in Figure 61 shows the computed averages for all groups, grouped by delivery. Average5 is the last delivery. At first glance, the results puzzled us a bit, since the average LOC is decreasing from delivery four to five. There were two possible explanations for this:
1. Either the groups had reorganized their systems, possibly leaving out debugging code in the last delivery and removing non-functional parts of the code.
2. The other possibility was that groups which had not delivered in phase three or four influenced the average. This was indicated by the fact that some groups that had been sloppy with the deliveries usually had low functionality, and also less code than the average.
Removing all groups not having delivered through all three phases, and then computing the average LOC measure, gave us the lower sub-figure in Figure 61. This computation revealed that our second assumption was correct.
To sum up these findings, on average the number of lines of code in the applications delivered at all three deadlines had increased. The average number of lines was 4680, 5049, and 5134 for deliveries 3 to 5.


[Bar charts: LOC per group for deliveries 3-5, with per-delivery averages (Average3, Average4, Average5); the upper sub-figure covers all groups, the lower only the groups that delivered in all three phases.]
FIGURE 61. LOC measure, for all three deliveries

B.7.2 Finding the actual application (file) closure


When doing a deeper analysis of some of the individual systems, we found that it was very difficult to find which of the C++ files located in the project directories actually constituted the application. The reasons we found for this were:
1. Mixing test drivers and application code: The (student) developers had put test drivers for different parts of the application in the development directory. These were mostly identified as having the string test or tst as part of their file names.
How we handled this: When determining which files to use, we filtered out all files including the substrings test and tst.
Ideal solution: Locating test drivers in a separate directory will solve this problem. One directory with all drivers for one test is ideal, and a description file for the test should be included. This information should also be put in the test documentation.
2. Introducing backup copies in the development directory: When the developers for some reason perceived that a module had sufficient functionality, we found that they had backed up this file to another one, usually with an indexing suffix, but sometimes also to a completely different file name. This fact made us suspicious that all files potentially could be copies of each other, particularly the files which had similar names.
How we handled this: We had to find a measure for how closely related two files were. The best solution would be a pattern matching algorithm that could localize all matching blocks of text between two files, and report how much the two files were alike. An extensive search for such a tool or algorithm did not succeed (1). An alternative formulation of the problem is how many lines differ between two files. The GNU tool diff reports how many lines differ between two files. The output from the command diff file1 file2 gives a list of lines that differ, where lines in file1 that differ from file2 are prefixed with a < character. A measure for the similarity is (diff file1 file2) * (diff file2 file1) / size(file1 + file2), where (diff file1 file2) reports the number of lines in file1 which are different from similar lines in file2, and size(file1) reports the size of file1 in lines.
If two files are similar, this measure will give a value close to 0. If the files are very different the measure will be close to 1. We chose to use 0.4 as a threshold value for similarity. When the measure between two files was less than 0.4, we further inspected these two files, and left out the oldest file. A sketch of this computation is given at the end of this subsection. Table 51 below shows how the different groups have split up their applications into C++ files.
1. We find such an algorithm easy to formulate, but not so easy to implement. Still, we believe that something like this must have been implemented somewhere else!
TABLE 51. Number of application files per group

Group   Delivery 3   Delivery 4   Delivery 5      Group   Delivery 3   Delivery 4   Delivery 5
a01     22/24        22/24        22/22           a02     23/27        23/27        23/25
a03     26/26        26/26        26/26           a04     28/42        28/30        28/31
a05     16/16        17/17        17/17           a06     22/23        25/34        24/35
a07     31/31        31/31        31/31           a08     61/74        65/83        65/75
a09     42/54        44/56        44/56           a10     37/37        41/42        39/40
a11     26/27        27/34        29/29           a13     43/49        43/49        43/61
a14     30/30        30/30        30/49           a15     25/26        26/26        26/26
a16     17/18        19/20        19/20           a17     23/30        23/33        24/34
a18     14/14        14/14        14/14           a19     29/35        29/30        29/30
a20     39/46        39/46        34/36           a21     -/-          9/9          9/9
b01     24/32        24/39        22/32           b02     25/32        26/29        28/32
b03     21/21        -/-          25/26           b04     21/21        21/21        21/21
b05     19/19        21/28        21/28           c01     26/31        28/32        30/30
c02     -/-          13/15        15/16           c04     42/42        45/45        45/45
c05     10/10        -/-          10/12           c07     10/10        -/-          7/7
c08     26/29        26/30        28/33           c09     15/17        17/29        17/18
c10     -/-          -/-          19/21           TOTAL   793/893      802/929      864/987

The first number in each cell is the actual number of files in the application, after test drivers and backup files are removed. The second number in the cell is the number of C++ files initially in the development directory. A minus sign in a cell indicates that the group has delivered no files.
We see from the table a great diversity in how the groups chose to split up their applications. The number of application files ranges from 7 to 65, with an average of 28, 29, and 29 for the different deliveries, with a standard deviation of 10, 11, and 11. The average is computed from all groups which have delivered their application for all three deliveries.
If we count the number of C++ files in the application directory including the test drivers and backup files, we get the second number in the cells. We see that the total number of files is significantly higher, giving an average of 32, 34, and 33, with standard deviations of 13, 14, and 14.
The average number of files in the application before the test drivers and backup files are removed means that we must do 2 x (33² + 35² + 34²) - (33 + 35 + 34) = 2.21 million file comparisons. On a Sun Sparc10 with 128MB RAM, running the Solaris operating system, this took about 3 hours for each delivery.
Ideal solution: Using a revision control system like RCS or SCCS would obviously have been the best thing to do, but such a system was not available on the machines provided by the faculty to the undergraduate students. A second best solution is to place backup copies in dedicated directories.
The removal of test drivers and backup files resulted in, on average, 10-12% of the files put in the application directory being stripped off, i.e. they were redundant. The range of redundant files for the applications was 0 to 19 files (or 0 to 41.4%).
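A sketch of the similarity check used for detecting backup copies is given below. Because the printed formula is ambiguous about normalisation, the sketch normalises each diff count by the length of the corresponding file, which gives the behaviour described above (close to 0 for near-identical files, close to 1 for unrelated files); the 0.4 threshold is the one quoted in the text.

# Sketch of the backup-detection heuristic: pairwise file dissimilarity based
# on line differences. difflib stands in for the GNU diff counts; the exact
# normalisation is an assumption, since the thesis formula is ambiguous as printed.
import difflib

def changed_lines(a_lines, b_lines):
    """Number of lines of a that do not appear unchanged in the diff against b."""
    return sum(1 for line in difflib.ndiff(a_lines, b_lines) if line.startswith("- "))

def dissimilarity(file1, file2):
    a = open(file1, errors="replace").readlines()
    b = open(file2, errors="replace").readlines()
    if not a or not b:
        return 1.0
    return (changed_lines(a, b) / len(a)) * (changed_lines(b, a) / len(b))

def looks_like_backup(file1, file2, threshold=0.4):
    return dissimilarity(file1, file2) < threshold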

LOC measures based on actual application closure


Figure 62 shows the LOC measure for all groups after the filtering described above has been taken into account. For easier comparison, we show a plot of the average LOC from Figure 61 and Figure 62 in Figure 63. The lines termed Average1 and Moderated1 are replicated from Figure 61, while the lines termed Average2 and Moderated2 are replicated from Figure 62. In addition, two lines termed Plain Av. and Plain Mod. are the average measures obtained when counting the lines of all C++ files in the application directories.
If this file redundancy is not taken into account when computing measures such as lines of code in the application, the results may be very unsatisfactory. As an example we can mention that the group which had a file redundancy of 41.4% (group c09, delivery 4) had an actual LOC measure of 4138. When all C++ files in the directory were used as the base for measurement, the LOC measure was 5949. This is an increase of 44%. For the group which had 19 redundant files (group a14, delivery 5), we measured the actual LOC to 6017. If all C++ files were counted, this gave a LOC measure of 9787, resulting in an increase of 63%!


[Bar charts: actual LOC per group for deliveries 3-5 after removing test drivers and backup files, with per-delivery averages for all groups (upper) and for the groups that delivered every time (lower).]
FIGURE 62. Actual LOC measure, for all three deliveries

Looking only at the line counts for the applications showed that one group which had 14 redundant files had an actual LOC measure of 2684 (group a04, delivery 3). When the backup and redundant files were counted in as well, the LOC measure was 11598! This is a difference of 332%! However, this was an extreme case.
The results obtained from this comparison show how important it is to be exactly sure about the structure of the system, i.e. which files actually make up the application, which files are test drivers, which are backup files, and which are just temporary files placed in the development directory by chance.


[Line chart: average LOC per delivery for the six measures Average 1, Moderated 1 (naive closure), Average 2, Moderated 2 (actual closure), Plain Av. and Plain Mod. (all C++ files counted).]
FIGURE 63. Averages for LOC measures

For a person taking over a system for maintenance, this confusion of file content is disastrous. Until a very deep knowledge of the system has been gained, the maintainer does not dare to delete any of the files in the development directory, resulting in an extra burden when maintaining the system. The maintainer does not know in advance whether the files contain vital system components which have not yet been integrated with the rest of the system, or components that are actually part of the system. Neither does the maintainer know whether the files (if he realises they are test drivers) are applicable to the current state of the system. All this adds to the maintainer's burden of acquiring a good understanding of the system, thereby increasing the cost needed to understand it.
If this analysis had not been done, an erroneous report would have been given to the manager in charge of planning the size of the maintenance group. If the manager had used the lines of code measure as a metric for system size, we see that he could erroneously have oversized the maintenance group by more than 300%.
In our case, the feedback about system size to the course organizers could have been equally wrong, resulting in next year's students being assigned only a subset of this year's assignment if the professor was not satisfied with the size of this year's solutions.

B.8 Comparing system size and test score


The scatter plots in Figure 64 through Figure 66 show how the moderated score on the system test is distributed relative to the number of lines of code (LOC) in the delivered systems. The LOC measure used is the one where both backup and redundant files are removed before the line counting is done.
Assumption: If there is a significant positive correlation between the LOC measure and the scores obtained by the group, we argue that programming technique does not have any large impact on program functionality for the programs developed by the student groups.

[Scatter plot: LOC (0-8000) versus moderated score (0-100%), one point per group.]
FIGURE 64. Scatter plots, delivery 3, LOC/moderated

[Scatter plot: LOC (0-9000) versus moderated score (0-100%), one point per group.]
FIGURE 65. Scatter plots, delivery 4, LOC/moderated

[Scatter plot: LOC (0-9000) versus moderated score (0-100%), one point per group.]
FIGURE 66. Scatter plots, delivery 5, LOC/moderated

In this case, the size of the program is a good indicator of its functionality. We could advise our students to write large programs to get good programs - this would be sad.
On the other hand, if there is not a significant correlation between these two variables, we argue that the programming technique indeed is important for acquiring a program with good functionality given the project limitations in the programming methodology course. If this is the case, we must delve further into the programs to find out what really distinguishes a good program from a poor one.
We start our analysis by computing the Spearman rank correlation and Kendall's tau-b for the moderated scores and LOC measures. Table 52 shows the two correlation measures for the two data sets for each delivery.
TABLE 52. Correlation measures for LOC & moderated score

Correlation measure    Delivery 3   Delivery 4   Delivery 5
Spearman rank          0,3051       0,3252       0,4682
  significance         ,051         ,003         ,043
Kendall's tau-b        0,1774       0,2261       0,3142
  significance         ,089         ,044         ,005
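These coefficients are straightforward to recompute from the raw data; a sketch, assuming SciPy and purely hypothetical per-group values, is shown below.

# Sketch of the rank-correlation computation between moderated scores and LOC.
# The example data are hypothetical; scipy reports two-sided p-values, which
# can be halved for the one-tailed significances used in Tables 53 and 54.
from scipy.stats import spearmanr, kendalltau

moderated_score = [12, 64, 48, 0, 73, 55, 30]      # hypothetical, one value per group
lines_of_code   = [2100, 5200, 4300, 1800, 6100, 4900, 3500]

rho, p_rho = spearmanr(moderated_score, lines_of_code)
tau, p_tau = kendalltau(moderated_score, lines_of_code)
print(f"Spearman {rho:.3f} (p={p_rho / 2:.3f}), Kendall tau-b {tau:.3f} (p={p_tau / 2:.3f})")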

Table 53 presents the Spearman rank correlations for all combinations of moderated scores and lines of code. Table 54 gives the corresponding Kendall tau-b values.
As can be seen from Table 52, the correlation between the moderated scores and the number of lines in the application is slightly positive. The observed significance levels indicate that we cannot unambiguously conclude that there is a relationship between the test scores and the LOC measure. However, it should be noted that the values for the two different correlation coefficients computed increase over time. As the correlation among the lines of code measures for the different deliveries is high, the tendency of increasing correlation between moderated score and LOC indicates that integration problems have kept groups with a high LOC measure from getting high scores on the system tests. On the other hand, groups with less code have not implemented enough to obtain high scores on all questions in the system test. We also note that groups with low initial scores have problems competing in the final phases, as the correlation among score values is quite high. To sum up, we find that groups which have produced much code and have overcome their integration problems will get higher scores on average than groups which have produced less code.
TABLE 53. Spearman rank, 1-tailed significance

        MOD5                   LOC3                   LOC4                   LOC5                   MOD3
LOC3    ,4155 N=30 Sig ,011
LOC4    ,3844 N=29 Sig ,020    ,9054 N=27 Sig ,000
LOC5    ,4632 N=33 Sig ,003    ,9373 N=30 Sig ,000    ,9862 N=29 Sig ,000
MOD3    ,7986 N=33 Sig ,000    ,3051 N=30 Sig ,051    ,2554 N=29 Sig ,091    ,2966 N=33 Sig ,047
MOD4    ,9013 N=33 Sig ,000    ,4055 N=30 Sig ,013    ,3252 N=29 Sig ,043    ,4348 N=33 Sig ,006    ,8792 N=33 Sig ,000

TABLE 54. Kendall's tau-b, 1-tailed significance

        MOD5                   LOC3                   LOC4                   LOC5                   MOD3
LOC3    ,2675 N=30 Sig ,020
LOC4    ,2544 N=29 Sig ,028    ,7835 N=27 Sig ,000
LOC5    ,3142 N=33 Sig ,005    ,8253 N=30 Sig ,000    ,9310 N=29 Sig ,000
MOD3    ,6379 N=33 Sig ,000    ,1774 N=30 Sig ,089    ,1631 N=29 Sig ,111    ,1946 N=33 Sig ,060
MOD4    ,7537 N=33 Sig ,000    ,2801 N=30 Sig ,032    ,2261 N=29 Sig ,044    ,3100 N=33 Sig ,006    ,7518 N=33 Sig ,000

We also studied the relation between the number of uncommented lines of C++ code and the moderated test score. In this case we got the correlation measures displayed in Table 55.
TABLE 55. Correlation measures for number of uncommented LOC & moderated score

Correlation measure    Delivery 3   Delivery 4   Delivery 5
Spearman rank          0,4769       0,3941       0,5042
  significance         ,004         ,0017        ,001
Kendall's tau-b        0,3145       0,2509       0,3372
  significance         ,008         ,029         ,003

We see that there are no significant differences here compared to Table 52, and conclude that the amount of comments in the source code has not had any influence on how the groups have solved the specifications put forward by the requirements.
Finally, we investigated the correlation between the number of uncommented lines and the number of lines of comments. The correlation coefficients for this are shown in Table 56.
TABLE 56. Correlation measures for number of uncommented LOC and number of lines of comments

Correlation measure    Delivery 3   Delivery 4   Delivery 5
Spearman rank          0,5284       0,5404       0,6524
  significance         ,001         ,001         ,000
Kendall's tau-b        0,3931       0,4039       0,5038
  significance         ,001         ,001         ,000

Not surprisingly, we can conclude that there is a clear positive relation between these two figures. This means that the (student) developers regularly have commented their code, and kept commenting as the code body has increased.
Table 57 shows for reference the actual number of uncommented code lines and lines of comments for each group for the different deliveries.
a01 5648

919 6109

994 6301

comm5

loc5

comm4

loc4

comm3

loc3

Group

comm5

loc5

comm4

loc4

comm3

loc3

Group

TABLE 57. Number of code lines without comments (loc) and lines of comments (comm)

942 a19 3543 1181 3846 1149 3887 1161

a02 2493 1120 2779 1191 2853 1223 a20 2310 1087 2310 1087 2598 1114
a03 2639

579 2811 1205 2822 1209 a21

a04 1986

698 2342

781 2601

2156

719 2362

787

777 b01 3716

988 3719

988 4056 1014

a05 2252 1997 2777 2462 2795 2478 b02 3736

993 3940

985 4104 1026

a06 3123 1609 3443 1937 3781 1862 b03 1740

549

a07 2858 1407 3086 1520 3102 1528 b04 2710

2171

612

810 2884

862 2860

854

a08 4655 1552 5144 1624 5277 1667 b05 1418

668 1730

706 2012

744

a09 3698 1043 4182 1045 4182 1045 c01 2242

747 2562

900 2897

966

2219

779 2335

821

a10 2421

989 2726 1113 2725 1113 c02

a11 2319 1422 3221 1735 3336 1719 c04 3733

876 4093 4615 4102 4816

a13 4241

498

1865

526

282

1393

327

995 4241

995 4372 1025 c05 1874

a14 3608 1474 4139 1609 4332 1685 c07

724

a15 3143 1619 3552 1671 3706 1665 c08 2300 1083 2287 1126 2719 1222
a16 2275

929 2424

942 2440

949 c09 2836

a17 4531 2549 5241 2700 5441 2560 c10


a18 1415

332 1773

389 1784

997 3062 1076 3098 1089


2245

789

392

As a conclusion, we note that although there is a positive relation between the number of lines of code and the achieved test scores for the groups, this relation is not strong enough to conclude that groups which work hard and produce much code have a better chance of achieving a high test score. Other factors may additionally explain the large variation in the test scores achieved by the groups. Some of these may be the style of C++ programming the groups have used, how good their design and documentation of the system were, and how well they have

overcome the problems in the integration phase. We will now look into these issues.
It would have been interesting to investigate how much of the required functionality had been taken into account in the design, and in what state the implementation of this functionality was in the different releases. This would have given us a good indicator of major integration problems in the groups' delivered systems. We note that some groups have produced very much code (made explicit in Figure 66), without achieving a good score on the system tests.
We now turn to investigate the documentation bundled with the different releases of the systems.

B.9 Documentation measures


For each of the three deliveries, we have extracted the number of words and number of pages of documentation for the groups. There are two reasons for this:
1. We want to see whether there are any relations between having much documentation and the level of system test satisfaction. We assume here that the amount of documentation produced reflects the groups' maturity in understanding the system concepts, and hence the system architecture.
2. For our follow-up experiments we would like documentation containing prose, so that the persons trying to do the modifications have explanations of the system and the system decisions above code level.
For the documentation delivered with the final version of the system (delivery 5), the content of the documentation has been assessed and given scores for several aspects. The aspects of the documentation assessed for the groups include:
Introduction: Where the group describes the assignment in their own words. This shows how well the group has perceived the problem assigned to them.
Project plan: This document described the organization of the group, the roles of the individuals, and a time schedule for the group to meet the required delivery schedules.
System documentation: This is the system design. The layout of the design was predetermined by me. Rules for naming of sections and explicit references to requirements for tracing were put forward. Most, but not all, groups obeyed these requirements.
User manual: Explains to the user how to use the group's system.
Test report: As for the system design document, a predetermined layout was specified for this document. The document should contain information about the test cases run by the groups, problems encountered, and solutions made to solve these. The files affected should also be listed, as well as which files were used to drive the tests.
Project evaluation: This is a short section with criticisms from the groups regarding the project, including guidance, choice of project assignment, and facilities for project work (e.g. available hardware and software, and rooms for project meetings).
Neighbour group evaluation: In the beginning of the project each group should give constructive criticism to one other group. This document contains that criticism.
There is at least one risk associated with measuring the documentation produced in this project.

Since the project's duration is relatively short, the amount of documentation may have impacted the groups' resources for implementing the system documented, hence resulting in a lower functionality at system test for groups with a high amount of documentation, compared to the groups with less documentation. On the other hand, groups spending more of the initial time on polishing the documentation could expect to spend less time on implementation. The question will be whether the time left over for implementation is too small.
Figure 67 shows the word count of the groups' design documents, Figure 68 the word count of the user manuals, and Figure 69 the word count of the test reports.
[Bar chart: word count of the design documents per group for deliveries 3-5, with group averages.]
FIGURE 67. Word count, design documents

[Bar chart: word count of the user manuals per group for deliveries 3-5, with group averages.]
FIGURE 68. Word count, user manuals

[Bar chart: word count of the test documents per group for deliveries 3-5, with group averages.]
FIGURE 69. Word count, test documents

Table 58 shows how the contents of the groups' reports were evaluated. The scores for the different aspects were rated as good, average, mediocre, and blank. For this analysis, we have given the value 3 to good, 2 to average, down to 0 for blank.
TABLE 58. Content evaluation of documents (delivery 5)

Group   Total (max 21)      Group   Total (max 21)
a01     17                  a18     10
a02                         a19     14
a03     16                  a20     18
a04     14                  a21
a05     17                  b01     12
a06     11                  b02     13
a07     17                  b03     17
a08                         b04     17
a09                         b05     18
a10     11                  c01     13
a11     12                  c02     10
a13     15                  c04     11
a14     17                  c05
a15     17                  c07
a16     10                  c08
a17     10                  c09     12
                            c10

The mean of the evaluation is 12.2 (of a possible 21), while the standard deviation is 4.1. The means and standard deviations for the values presented in Figure 67 to Figure 69 are presented in Table 59. The rows in the table need some explanation:
TABLE 59. Standard deviations and means for document measures

        User manual           Design document         Test report         Total count
Del     U3     U4     U5      D3     D4     D5        T3    T4    T5      Tot3   Tot4   Tot5
Av1     720    769    1090    4410   4278   5201      253   163   195     5383   5210   6486
SD1     564    604    730     2442   2506   2733      384   301   276     2899   2937   3152
Av2     784    911    1199    5251   5092   6196      312   171   201     6347   6174   7597
SD2     526    549    707     1936   2016   1846      413   276   245     2308   2317   2183
Av3     880    976    1199    4851   4868   5918      492   490   401
SD3     495    507    673     2090   2050   2035      413   338   272
When counting the words in the documentation, there are several missing values. Most groups have delivered something for all deliveries, but many groups have not delivered all the specified material for all deliveries. The first two rows (Av1 and SD1) are the average and standard deviation for word counts when every missing value has been set to zero (0). The middle two rows (Av2 and SD2) are the same when groups which delivered nothing at some delivery have been removed from the computation for that delivery. (If a group delivered documentation at delivery 3 and 5, but failed to deliver at the 4th deadline, the group's documentation count will not influence the average for delivery 4.) The other missing values are still set to zero. Finally, the last two rows (Av3 and SD3) are the average and standard deviation when missing values are ignored. That is, only the documents actually delivered in each category are used in the computation of the average and standard deviation.
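To make the three conventions concrete, the sketch below (not part of the thesis tooling) computes Av1, Av2 and Av3 for one document type at one delivery; the GroupDelivery structure, the negative-value convention for missing documents, and the sample data are assumptions made purely for illustration.

// Minimal sketch of the three ways of treating missing word counts when
// computing the per-delivery averages in Table 59. A negative word count
// marks a missing document; deliveredAnything says whether the group handed
// in any material at all for that delivery. (Illustrative code only.)
#include <iostream>
#include <vector>

struct GroupDelivery {
    double wordCount;         // word count for one document type, < 0 if missing
    bool   deliveredAnything; // the group delivered some material at this delivery
};

// Av1: every missing value is counted as zero.
double average1(const std::vector<GroupDelivery>& groups) {
    double sum = 0;
    for (const GroupDelivery& g : groups) sum += (g.wordCount < 0 ? 0 : g.wordCount);
    return sum / groups.size();
}

// Av2: groups that delivered nothing at this delivery are removed entirely;
// other missing values are still counted as zero.
double average2(const std::vector<GroupDelivery>& groups) {
    double sum = 0; int n = 0;
    for (const GroupDelivery& g : groups)
        if (g.deliveredAnything) { sum += (g.wordCount < 0 ? 0 : g.wordCount); ++n; }
    return n ? sum / n : 0;
}

// Av3: missing values are ignored; only delivered documents are averaged.
double average3(const std::vector<GroupDelivery>& groups) {
    double sum = 0; int n = 0;
    for (const GroupDelivery& g : groups)
        if (g.wordCount >= 0) { sum += g.wordCount; ++n; }
    return n ? sum / n : 0;
}

int main() {
    // Hypothetical user-manual word counts for four groups at one delivery.
    std::vector<GroupDelivery> userManuals = {
        {720, true}, {-1, true}, {1090, true}, {-1, false}
    };
    std::cout << average1(userManuals) << " "
              << average2(userManuals) << " "
              << average3(userManuals) << "\n";
}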
A correlation analysis between the content evaluation of the final documents and the word counts of the same documents shows a significant correlation (r=0.51, p=0.001). As we remember, there was no such significant correlation between the LOC measure and the functionality of the program. There is a positive correlation (r=0.34) between the final test score and the content evaluation, but this is not significant (p=0.033).

B.10 Time resources for two deltas


Figure 70 and Figure 71 show the time resources spent by the groups on delta 1 and delta 2, respectively. Note that not all groups have delivered the changes asked for in the deltas, and those who have delivered have not all made all the changes asked for.
FIGURE 70. Resources spent for 1st delta (delivery 4) (time in mins.)
(Columns: Group, Design, TestDoc, UserMan, Code, Total. Group totals: a01 2670, a03 340, a06 455, a07 560, a08 2028, a10 310, a11 240, a14 180, a15 246, a16 258, a17 487, a19 56, b03 520, b04 1139, c01 505, c02 933, c04 805, c09 360.)

FIGURE 71. Resources spent for 2nd delta (delivery 5) (time in mins)
(Columns: Group, Design, TestDoc, UserMan, Code, Total. Group totals: a01 60, a03 20, a06 25, a08 2477,5, a10 34, a11 110, a15 107, a16 60, a19 10, b03 120, b04 63, c02 933, c04 180.)


B.11 Test scores (moderated) for all phases


In order to extract the most suitable group(s) for our verification experiment, we need to identify which groups had a high score throughout the deliveries, with little deviation in test score except for the deltas. In Figure 72 we show the moderated totals for all groups, combined for all phases.
FIGURE 72. Test points (moderated) per group/phase (y-axis: test points, 0-200; series: Delivery 3, Delivery 4, Delivery 5)

Additionally, Figure 73 shows how much the groups increased their scores in phases 4 and 5 of the project.
FIGURE 73. Point slopes for groups (per-group score increase at deliveries 4 and 5; series: Slope 4, Slope 5)

Finally, we show in Figure 74 how the different groups were able to fulfill the two additions to the requirements, delta 1 and delta 2. Total fulfillment of each of these deltas scores 3 points. For comparison, we have checked the score for delta 1 at both delivery four and delivery five.

FIGURE 74. Fulfillment of deltas (series: delta 2 at delivery 5, delta 1 at delivery 5, delta 1 at delivery 4)

B.12 Implementation metrics


We would like to investigate the structure of the different applications delivered. For all applications, the number of classes and free functions in the C++ implementation was counted. For
each class, three class metrics were calculated.
The class metrics calculated were the WMC, DIT and NOC metrics defined by Chidamber and
Kemerer in [Chidamber and Kemerer, 1994]:
Weighted Methods per Class (WMC). The WMC metric for a class C1 with methods M1, ..., Mn is defined as

    WMC = \sum_{i=1}^{n} c_i                                    (EQ 1)

where c_i is the complexity of method M_i. As a simplification we have not implemented any traditional complexity measure, but have rather defined c_i as

    c_i = ( length(M_i) div 10 ) + 1                            (EQ 2)

where length(M) is the length of method M, measured in lines of code.


Depth of Inheritance Tree (DIT). DIT measures the inheritance depth for a class. Root classes which do not inherit from any class have DIT = 0. If a class inherits through multiple inheritance, the DIT for that class is the maximum length from the class to the root of the class hierarchy.
Number of Children (NOC). The NOC of a class is the number of direct descendants of that class. A small illustrative sketch of how these three metrics can be computed is given below.
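The sketch below is illustrative only; the ClassInfo structure and the example data are assumptions made for this example, not the metrics tool actually used for the analysis.

// Illustrative computation of WMC, DIT and NOC over a parsed class model.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct ClassInfo {
    std::vector<int> methodLengths;   // length of each method, in lines of code
    std::vector<std::string> bases;   // direct base classes (empty for root classes)
};

// WMC = sum of c_i with c_i = (length(M_i) div 10) + 1, as in EQ 1 and EQ 2.
int wmc(const ClassInfo& c) {
    int sum = 0;
    for (int len : c.methodLengths) sum += len / 10 + 1;
    return sum;
}

// DIT = longest path from the class to a root class; root classes have DIT = 0.
int dit(const std::map<std::string, ClassInfo>& model, const std::string& name) {
    int deepest = -1;
    for (const std::string& base : model.at(name).bases)
        deepest = std::max(deepest, dit(model, base));
    return deepest + 1;                     // no base classes gives -1 + 1 = 0
}

// NOC = number of classes that list this class as a direct base.
int noc(const std::map<std::string, ClassInfo>& model, const std::string& name) {
    int children = 0;
    for (const auto& entry : model)
        for (const std::string& base : entry.second.bases)
            if (base == name) ++children;
    return children;
}

int main() {
    std::map<std::string, ClassInfo> model = {   // hypothetical example data
        {"Hyper",  {{12, 4, 7},  {}}},
        {"Klasse", {{35, 9},     {"Hyper"}}},
        {"Fil",    {{60, 22, 5}, {"Hyper"}}},
    };
    for (const auto& entry : model)
        std::cout << entry.first << ": WMC=" << wmc(entry.second)
                  << " DIT=" << dit(model, entry.first)
                  << " NOC=" << noc(model, entry.first) << "\n";
}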
For each of these three class metrics, the mean, standard deviation and maximum were computed. Figure 75 shows the number of classes and free functions in each of the applications in
the final delivery.


FIGURE 75. Number of classes and functions (per group; series: #Class, #Func)

The average number of classes per application was 17.6 (median 15), with a standard deviation of 9. The maximum number of classes in an application was 42, while the minimum was 6. The average number of functions was 7.3 (median 3), with a standard deviation of 9.4. The maximum number of functions was 44, while the minimum was 1 (the main() function).
FIGURE 76. Weighted methods per class (WMC) (per group: average, standard deviation and maximum WMC)

The WMC metric describes the complexity of the classes. A class is difficult to understand when its WMC value is high. It should be noted that the WMC metric obtains high values both when the class has relatively few complex methods, and also when the class consists of many small methods. Figure 76 shows the average, standard deviation and maximum values for the WMC metric for the last delivery. One outlier is not shown in the figure; the maximum value of the WMC for group a01 is 252. The mean of the WMC for all applications from the last delivery was 22.3 (median 20). The corresponding standard deviation was 10.3. The minimum average WMC for a group was 10, while the maximum was 68.
The depth of the inheritance tree (DIT) is reported by Chidamber and Kemerer as an important factor for the understanding of object-oriented code. It is difficult to understand the flow of control in an object-oriented program when inherited methods are called. Sometimes the methods called have been modified several times along the inheritance path from the base class to the class where they are used. A high DIT value indicates a class which may be problematic to understand.
FIGURE 77. Depth of inheritance tree (DIT) (per group: average, standard deviation and maximum DIT)

Figure 77 depicts the average, standard deviation and maximum of the DIT for the applications in the final delivery. The mean DIT for all applications was 0.5, with a standard deviation of 0.5. The maximum DIT for an application was 5. From the figure we note that four groups did not utilize inheritance at all in their applications.
The NOC class metric measures the fan-out in the class hierarchy. A high NOC measure for a class means that changes made to the class can affect a range of children along different inheritance paths in the application. We give an overview of the average, standard deviation and maximum of the applications' NOC measures in Figure 78. The average NOC for all groups was 0.4, with a standard deviation of 0.2. The maximum NOC in an application was 7.
We found no significant relations between any of the class metrics and the score achieved in the system test.


FIGURE 78. Number of children (NOC) (per group: average, standard deviation and maximum NOC)

B.13 Grades
We denote the group grade as the mean of the group members' grades achieved at a written examination. There is no significant correlation between the test scores of the groups and the group grade (Spearman rank: r=-0.13, p=0.316).
From the discussion in Section B.3, we recall that we only had prior grade information for 77 of the 155 students participating in the course. The group grades presented here reflect only those groups to which these students were assigned. Figure 79 shows the histogram of the group grade distribution.
FIGURE 79. Frequency histogram for group grades (mean = 1,90, std. dev. = 0,34, N = 16)


The actual group grades and their corresponding standard deviations are shown in Table 60.
TABLE 60. Mean group grade and standard deviation

Group   a01   a02   a03   a04   a05   a06   a07   a08   a09   a10   a11   a13   a14   a15   a16   a17
Mean    1,80  1,75  2,00  1,50  2,00  1,60  1,70  1,90  2,60  2,60  1,60  1,80  1,70  1,70  1,80  2,38
StDev   0,45  0,50  0,58  0,00  0,61  0,42  0,27  0,42  0,82  1,92  0,42  0,67  0,27  0,27  0,27  1,80

Figure 80 shows the histogram for the individual student grades. As can be seen from the figure, the mean grade was 1,91, and the standard deviation was 0,78.

FIGURE 80. Frequency histogram for student grades (mean = 1,9, std. dev. = 0,78, N = 77)

Finally, Figure 81 shows a histogram for the groups' test scores. The mean test score for these groups was 55,7, with a standard deviation of 33,77.
FIGURE 81. Frequency histogram for group test scores (mean = 55,7, std. dev. = 33,77, N = 16)

If we compare the students' grades from this course with the grades used for group partitioning1 (mean grade 2,21, standard deviation 0,67), we find that the grades obtained in this course are 0,3 grade points better on average. There is also a low but significant correlation between the students' course grade and their prior grade (r=0,35, p=0,02).
1. The average grade for all 155 students participating in the course was 1.92.
There is a strong correlation (r=0,84, p=0,000) between the group grade and the standard deviation of the group members' grades.
This indicates that group strength is an explaining factor for group performance. If all members of the group have participated equally in the project, the chance is high that all members will learn about the subject and get a good grade. On the other hand, if the project is driven by one or two members, these will get a good grade, while the rest will fall through.
We would have expected a correlation between the group grades and their test scores. The phrase "buy the best people" did not seem to hold here. Assembling a team of the best students could not have guaranteed us a good product. We attempt an explanation. The examination situation is known to the students, and the problems which they are asked to solve are small, albeit sometimes difficult. Carrying through a development project of this size is not familiar to the students, nor is working so intimately with other students over a long period. If an assembled group can work together well and is able to abstract and systematize the specifications for the project, the chance of success is high. The lack of training in abstraction and systematization among the students has been a negative contributing factor to the groups' problems in carrying out the project. This has outweighed the individual members' skills with regard to obtaining good test scores.

APPENDIX C
C++ Pre-Test and Calibration

C.1 Introduction
This appendix includes the C++ pre-test (Section C.2) used to test the skills of the experiment subjects, as discussed in Section 3.3.5. In Section C.3 we describe how we calibrated the pre-test to ensure that the score of the individual questions reflected their difficulty.

C.2 C++ pre-test


Subject Number:
C.2.1 Introduction
Follow the instructions and answer the questions in the following sections. The time limit for the test is 60 minutes. Take your time to think through the questions. Do not answer at random. Answer all questions inside the corresponding answer box. Note the time used for each part, and write this next to the part's heading. All questions in each part count equally.
If, at the end of the test, there are questions which time has not allowed you to answer, mark the corresponding answer box with a T. If you are uncertain about, or do not know, the correct answer, mark the corresponding answer box with a U.
Answer the following questions before starting with the test:
Have you used C++ during the last six months? YES/NO
Give an estimate of the number of C++ LOC you have written: .................
How would you characterize your familiarity with the C++ language?
Poor / Mediocre / On the average / Experienced / Expert

Answer this question when you have finished the pre-test:


Did you find the pre-test to be: Easy / Just right / Difficult ?

C.2.2 Part I (Counts 15%)


The program below is supposed to have the following properties: Each object of classes a and
b contains two integer numbers. When their member function skriv() is called, objects of class
a print out the product of the two numbers, while objects of class b print out the quotient of the integer division of the two numbers.


The program as it stands does not possess this behaviour. Examine the program and answer the
questions.
#include <iostream.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

class a {
protected:
    int x,y;
    int hmm(void)       { return x*y; }
public:
    a()                 { x=0,y=0; }
    a (int x,int y)     { a::x=x; a::y=y; }
    void skriv(void)    { cout << hmm () << endl; }
};

class b : public a {
protected:
    int hmm (void)      { return y? x/y: 0; }
public:
    b() : a()           {}
    b (int x, int y) : a(x,y) {}
};

void main(void){
    a *a_list[2];
    a_list[0] = new a(12,6);
    a_list[1] = new b(12,6);
    a_list[0] -> skriv();
    a_list[1] -> skriv();
}
What is written out by the program as it stands above?

How would you modify the program by changing one line for it to behave correctly?

C.2.3 Part II (2%)

What is the difference between a class and a struct in C++?
1. Class and struct are the same. C++ provides two different keywords to look more object-oriented.
2. Structs can only declare data members. Classes can declare both data and function members.
3. Structs and classes are equivalent except that all members are private by default in a class and public by default in a struct.
4. Classes and structs are the same, except that sub-structing is not allowed, while sub-classing is.

C.2.4 Part III (9%)


Given the following C++ program:
#include <iostream.h>
class a {
    int x;
public:
    a (int x)       { a::x = x; }
    void write()    { cout << x; }
};

void main( void ) {
    // Declare a list of pointers to a objects
    a *a_list[10];
    for (int i=0; i<10; i++) a_list[i] = new a();
    for (int i=0; i<10; i++) a_list[i] -> write();
}
Will the program fail to compile?    YES / NO    (Circle the correct answer)

How would you solve this problem?

C.2.5 Part IV (16%)


Given the following C++ program:
#include <iostream.h>
class C {
public:
    virtual void P()    { cout << "C here" << endl; }
    C()                 {}
};
class C1 : public C {
public:
    virtual void P()    { cout << "C1 here" << endl; }
    C1()                {}
};
class C1A : public C1 {
public:
    virtual void P()    { cout << "C1A here" << endl; }
    C1A()               {}
};
class C2 : public C {
public:
    virtual void P()    { cout << "C2 here" << endl; }
    C2()                {}
};

int main ( void ) {
    C   *X   = new C;
    C1  *X1  = new C1;
    C1A *X1A = new C1A;
    C2  *X2  = new C2;
    ...
}

Which of the assignments below are legal (circle the appropriate):

X1 = X;        Legal / Illegal / Requires modification
X1 = X1A;      Legal / Illegal / Requires modification
X1 = X2;       Legal / Illegal / Requires modification

Disregard the three assignments above. What is the effect of inserting the following expressions (insertion A):

X->P()         Output:
X1->P()        Output:
X1A->P()       Output:
X2->P()        Output:

Then we make the following modification (insertion B) after insertion A above:

X = X1A;

What is now the effect of (insertion C):

X->P()         Output:

We remove virtual in all definitions of the P-function above. What is then the effect of the 4 + 1 calls above?

X->P()         Output:
X1->P()        Output:
X1A->P()       Output:
X2->P()        Output:
X->P()         Output:

C.2.6 Part V (26%)


Given the following C++ program:
1   #include <stdio.h>
2   class a {
3   public:
4     int x;
5     a()              { x = 1; }
6     virtual /* 2 */
7     int getit()      { return x; } /* 3 */
8   };
9   class b : public a {
10  public: /* 4 */
11    int getit()      { return 2*x; }
12  };
13  void main(void) {
14    a *tobeb;
15    b *isb;
16    isb = new b;
17    tobeb = isb;
18    printf("isb->getit %d \n", isb->getit());
19    printf("tobeb-> getit %d \n", tobeb->getit()); /* 1 */
20  }
Answer the following questions: Fill in the correct answer in the answer box.
a) Describe what happens at compilation time and what is output at a possible execution if the program is left without changes?


1. The program crashes during compilation since tobeb cannot be a pointer to isb. Pointers
declared to point to objects of class a cannot point to objects of class b, since objects of
class b are a subtype of class a.
2. The program compiles successfully. Execution gives the following result:
isb->getit 2
tobeb->getit 2
Tobeb is declared to point to objects of class a. In line 17, tobeb is assigned to be a pointer to objects of class b. Since b is derived from a this is ok. Since getit() is declared virtual, the member function in the class which tobeb points to is called.
3. The program compiles successfully. Execution gives the following result:
isb->getit 2
tobeb->getit 1

264

C++ Pre-Test and Calibration


Tobeb is assigned to point to objects of class a. The assignment on line 17 tells tobeb to point to
an object of class b. Since getit() is virtually declared in class a, this method is called. If getit was
not virtually declared, objects at the bottom of the class hierarchy would not know that a declaration of getit() existed at the top of the class hierarchy, and the local definition of getit() would
have been used.

The correct choice is:

b) Describe what happens at compilation time and at a possible execution if the line marked

/* 4 */ is removed?
1. The compilation fails. A compilation error on line 11 is the result. Since class b inherits all attributes of a as public, the getit() member function cannot be declared as private in class b. This would have resulted in objects of class b having both a private and a public getit() method.
2. There is no change from the situation in point a) above, since deleting the public keyword has no effect. All attributes are declared public by default.
3. The compilation fails. A compilation error on line 18 is the result. Removing the line results in overriding the inherited getit() function by making it private in class b. Since the getit() method is private for objects of class b, the call to getit() on line 18 is illegal.
The correct choice is:

c) Describe what happens at compilation time and at a possible execution if the line marked

/* 2 */ is removed?
1. The compilation will fail. A compilation error on line 11 is issued. Since getit() is no longer a virtual method, it cannot be overridden in class b.


2. The program compiles successfully. Execution gives the following result:
isb->getit 2
tobeb->getit 2
Since both isb and tobeb point to the same object, the same definition of getit() is used in both
calls. Since getit() is not declared virtual in class a, the run-time system does not know about the
definition of getit() in a, and can therefore not perform the correct dynamic binding of the getit()
call, which is to the definition in class a.
3. The program compiles successfully. Execution gives the following result:
isb->getit 2
tobeb->getit 1
The type of the tobeb pointer decides how the object pointed to by tobeb must be handled. Even
though an object of a subclass is pointed to, the definition of non-virtual methods will be taken
from the declared class.

The correct choice is:


d) What happens if the lines marked /* 2 */ and /* 3 */ are removed?


1. The compilation fails. Since class a contains no definition of getit(), the compiler does

not know what function to call in line 18.


2. The compilation succeeds. Since tobeb points to an object of class b, the member function
getit() defined in this class is called both at line 17 and 18. The printed output will be
isb->getit 2
tobeb->getit 2
If tobeb had pointed to an object of class a, the compilation would still have succeeded. At execution time, a segmentation fault would occur when line 18 was reached, since no getit() method is
defined in class a.
3. The compilation fails. Class a contains no member function except the constructor. This

is illegal in C++.
The correct choice is:

C.2.7 Part VI (32%)


A bicycle is made up of two wheels, a frame, a seat, a handle, and two pedals. The two wheels have two components: an aluminium rim and a rubber tyre.
The classes have the following properties:
The weight of the parts is specified in grams, while the volume is specified in litres.
The weight and volume of each bicycle part can be set only once by the user.
When the user sets the weight and volume of a part, the user shall be informed which part he enters values for.
The bicycle and its parts know about their weight and volume.
The total weight and volume of the bicycle shall be printed by the program.
The C++ program below is an attempt to model this. However, the program contains flaws in two or three lines which make it behave differently from what is expected. At what lines must modifications be made to mend this? Write down the line number and the modified line for the two cases.
1   #include <iostream.h>
2   class abstract { // Used as a virtual base class only
3     int   weight;
4     float volume;
5   protected:
6     abstract(){ weight = 0 ; volume = 0; }
7     abstract(abstract *a) {
8       weight = a->get_weight();
9       volume = a->get_volume();
10    }
11    abstract(char *name) { set_w_and_v(name); }
12    set_w_and_v ( char *part ) {
13      cout << "Set weight of " << part << ": ";
14      cin >> weight;
15      cout << "Set volume of " << part << ": ";
16      cin >> volume;
17    }
18  public:
19    int get_weight (){ return weight; }
20    float get_volume(){ return volume; }
21  };
22  class pedal : public virtual abstract {
23  public:
24    pedal():abstract("pedal"){ }
25  };
26  class seat : public virtual abstract {
27  public:
28    seat():abstract("seat"){ }
29  };
30  class handle : public virtual abstract {
31  public:
32    handle():abstract("handle") { }
33  };
34  class frame : public virtual abstract {
35  public:
36    frame():abstract("frame") { }
37  };
38  class tyre : public virtual abstract {
39  public:
40    tyre():abstract("tyre"){ }
41  };
42  class rim : public virtual abstract {
43  public:
44    rim() : abstract( "rim" ){ }
45  };
46  class wheel : public virtual abstract {
47    tyre *t;rim *r;
48  public:
49    wheel (char *type) // The wheel is an aggregate class.
50    { cout << type << endl; t = new (tyre); r = new (rim); }
51    int get_weight()
52    { return (t->get_weight() + r>get_weight()); }
53    float get_volume()
54    { return (t->get_volume() + r->get_volume()); }
55    ~wheel(){ delete t; delete r; }
56  };
57  class bicycle : public virtual abstract {
58    pedal *p;seat *s;handle *h;frame *f;
59    wheel *front_w; wheel *back_w;
60  public:
61    bicycle () // The bicycle is an aggregate class.
62    { p = new pedal; s = new seat; h = new handle;
63      f = new frame; front_w = new wheel("front wheel");
64      back_w = new wheel("back wheel");
65    }
66    int get_weight() {
67      return p->get_weight() + s->get_weight() +
68      h->get_weight() + f->get_weight() +
69      front_w->get_weight() + back_w->get_weight();
70    }
71    float get_volume() {
72      return p->get_volume() + s->get_volume() +
73      h->get_volume() +f->get_volume() +
74      front_w->get_weight() + back_w->get_weight();
75    }
76    ~bicycle()
77    { delete p; delete s; delete h; delete f;
78      delete front_w; delete back_w; }
79  };
80  void main (void) {
81    abstract *my_bicycle = new bicycle;
82    cout << "Total weight " << my_bicycle -> get_weight() << endl;
83    cout << "Total volume " << my_bicycle -> get_volume() << endl;
84    delete my_bicycle;
85  }

The changes to be made are:

Line number        Modified line

C.2.8 Answers to problems


Answers to problems in Section C.2.2:
First question: The program writes out 78 and 2.
Second question: The member function hmm() in class a must be made virtual.
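As an illustration of that answer (the following line is not part of the pre-test as handed out), the declaration in class a would be changed to:

    virtual int hmm(void)    { return x*y; }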

Answers to problem in Section C.2.3:


The third alternative is correct.

Answers to problems in Section C.2.4:


First question: The compilation fails.
Second question: To mend this, the following modification must be made
for (int i=0; i<10; i++) a_list[i] = new a(i);

Answers to problems in Section C.2.5:


(The problem is taken from an earlier Prog. Meth. exam)


First question: The first assignment is illegal, the second legal, and the third is illegal.
Second question: The following is output:
C here
C1 here
C1A here
C2 here
After modification X = X1A, the output of X->P() is
C1A here
When virtual is removed in the definitions of the P-function:
C here
C1 here
C1A here
C2 here
and
C here

Answer to problem in Section C.2.6:


Question a): The 2nd choice is correct
Question b): The 3rd choice is correct.
Question c): The 3rd choice is correct.
Question d): The 1st choice is correct.

Answer to problem in Section C.2.7


There are two types of errors in the program as listed:
1. Since the my_bicycle pointer in the main function is defined to be a pointer to an abstract class object (line 81), the declarations of the get_weight and get_volume functions in lines 19 and 20 in class abstract must be made virtual for the correct get_volume and get_weight functions to be called at lines 82 and 83 in main. This would require modification of two lines. The corrections will be

19    virtual int get_weight ()    { return weight; }
20    virtual float get_volume()   { return volume; }

Alternatively, the my_bicycle pointer can be defined to be a pointer to a bicycle object. This would require modification of one line:

81    bicycle *my_bicycle = new bicycle;

2. The second error is in the get_weight function in class wheel. When the weight of the rim object is to be returned (line 52), a comparison between the pointer r and the get_weight function of the wheel is performed. This may result in a segmentation fault. The modified line will look like

52    { return (t->get_weight() + r->get_weight()); }


C.3 Evaluation of pre-test calibration.


C.3.1 Introduction
This section discusses the results from the calibration of the C++ pre-test. The pre-test will be used to test the C++ skills of the experiment subjects participating in my application maintenance experiment. In the experiment, ca. 50 students will be used to make modifications to a given C++ application. The students will be assigned to different categories, where each category differs in what information and tools are available to the subjects in it. We want the mean and variation of the subjects in a category to be equal to those in the other categories. The C++ pre-test provides us with the tool to design the categories this way.
The C++ pre-test consists of a series of C++ related problems. Some of the problems are answered by selecting one of several possible solutions presented as a multiple choice, others by writing down the correct answer. In the latter case, the answer can be written down in one or two lines.
The calibration described in this section was performed to check whether the test suited its purpose of differentiating among the C++ skills of experiment subjects.

C.3.2 Results from the calibration test


Six persons were asked to volunteer to answer the questions in the pre-test. The volunteers had the same restrictions as the experiment subjects will have, i.e. they were not allowed to use any reference literature, and not allowed to discuss the problems with others.
The volunteers were asked to answer the following questions regarding their own view of their experience prior to the pre-test:
1. Have you used C++ during the last 6 months?
2. How many lines do you estimate to have written in C++?
3. How would you rate your experience with the C/C++ syntax?
   Poor / A little / Average / Experienced / Expert
The volunteers' answers to the questions are shown in Table 61.
TABLE 61. Volunteers' answers to experience questions

Vol. no.   Q1    Q2       Q3
1          No    10.000   Average
2          Yes   50.000   Experienced
3          Yes   10.000   Experienced
4          No    5.000    Average
5          No    600      Average
6          Yes   10.000   Experienced


The volunteers' scores on the pre-test are summarised in Table 62.


TABLE 62. Volunteers' answers to pre-test questions.
(Per-volunteer scores for the individual questions I-a, I-b, II, III-a, III-b, IV-a through IV-m, V-a through V-d, VI-a and VI-b.)

The results from the test are summed up in Table 63.


TABLE 63. Total score, time used, experience and test judgement

               Vol 1     Vol 2        Vol 3        Vol 4      Vol 5    Vol 6
Total score a  22        23           17           21         15       15
Time used b    46        41           36           60         34       38
Experience     Average   Experienced  Experienced  Average    Average  Experienced
Opinion c      Average   Difficult    Difficult    Difficult

a. Max score is 24.
b. Max time allowed is 60 minutes.
c. After the test, the volunteers were asked to characterize the test as easy, average, or difficult.

C.3.3 Discussion of calibration test results


All volunteers completed the test in the time frame allowed. Volunteer 4 used the maximum time limit, but did not report that answers were incomplete due to this.
The test results show a good spread in the test scores. Most problems are solved by all, revealing that all volunteers have some basic knowledge of C++. Other problems reveal that the knowledge of C++ is not the same for the volunteers.
Compared to the volunteers' own judgement about their C++ experience, we see that of the 3 rating themselves as experienced, two of them obtain the worst and the fourth best test scores. Of the 3 rating themselves as average, two obtain very good scores.
Imagine that we were to divide the volunteers into two categories. The two categories should have equal values for mean and variation of C++ skills. We could then either choose to distribute them at random, based on their own experience judgement, or based on the test scores. Random selection may obviously have resulted in very skewed group compositions. Using their own experience judgement as a partition key may also have resulted in a skewed group composition, given the discussion in the previous paragraph. By composing the groups based on the test results, we can obtain more homogeneous groups.
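As an illustration of the idea (not necessarily the exact assignment procedure used for the experiment), the sketch below splits a set of subjects into two categories with similar score profiles by sorting on the pre-test score and dealing the subjects out in a zig-zag pattern; the subject list is hypothetical example data.

// Illustrative sketch: balanced two-way split of subjects based on test scores.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

struct Subject { int id; double score; };

int main() {
    std::vector<Subject> subjects = {                 // hypothetical scores
        {1, 79.6}, {2, 83.8}, {3, 51.5}, {4, 69.7}, {5, 52.0}, {6, 44.7}
    };
    // Sort by score and deal the subjects out in a zig-zag (A B B A A B ...)
    // so that both categories get a similar share of strong and weak subjects.
    std::sort(subjects.begin(), subjects.end(),
              [](const Subject& a, const Subject& b) { return a.score > b.score; });
    std::vector<Subject> catA, catB;
    for (std::size_t i = 0; i < subjects.size(); ++i) {
        bool toA = (i % 4 == 0) || (i % 4 == 3);      // zig-zag assignment pattern
        (toA ? catA : catB).push_back(subjects[i]);
    }
    for (const Subject& s : catA) std::cout << "A: " << s.id << " " << s.score << "\n";
    for (const Subject& s : catB) std::cout << "B: " << s.id << " " << s.score << "\n";
}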
The volunteers were asked to write down the time used for the different problem groups (I - VI). We observed that the time used for the different problems was not equal. This may skew the test score scale. In the next section we discuss how to fix this.

C.3.4 Calibrating the test problems


The 24 problems which the volunteers were asked to answer were not of the same degree of difficulty. This means that the measurement scale of the test is not linear. We use the time recorded by the volunteers for each problem group to compute conversion factors to obtain a linear measurement scale.
Table 64 shows the time used (in percent of the total time) on the six different problem groups.
TABLE 64. Percent of time used on different problem groups

            Vol 1   Vol 2   Vol 3   Vol 4   Vol 5   Vol 6   Total   Mean   Conv. factor
PG I (2)    10,9    12,2    16,7    18,3    23,5    10,5    92,1    15,4   7,7
PG II (1)   1,1     2,4     2,8     3,3     2,9     2,6     15,1    2,5    2,6
PG III (2)  10,9    4,9     8,3     6,7     5,9     13,2    49,9    8,4    4,2
PG IV (13)  21,7    14,6    16,7    16,7    14,7    10,5    94,9    15,9   1,2
PG V (4)    21,7    29,3    27,8    18,3    23,5    31,6    152,2   25,5   6,4
PG VI (2)   32,6    36,6    27,8    35,0    29,4    31,6    193,0   32,3   16,2
Total       98,9    100     100,1   98,3    99,9    100,0   597,2   100    100

We rate the questions in each problem group to be of equal degree of difficulty. The mean time consumption (measured in percent) for each problem group is computed. The conversion factor for a problem is then the number obtained when this mean time consumption is divided by the number of problems in the problem group. The conversion factors are shown in the last column in Table 64. To obtain a maximum converted score of 100, we have modified the conversion factor for the single problem in group II to 2,6.
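As a concrete illustration, the sketch below converts a raw score to the 0-100 scale using the conversion factors from Table 64; the per-group counts of solved problems are hypothetical example data.

// Illustrative conversion of a raw pre-test score to the calibrated 0-100 scale.
#include <iostream>

int main() {
    // Conversion factor and number of problems for each problem group (I-VI),
    // as listed in Table 64.
    const double factor[6]   = {7.7, 2.6, 4.2, 1.2, 6.4, 16.2};
    const int    problems[6] = {2, 1, 2, 13, 4, 2};

    // Hypothetical raw result: how many problems a volunteer solved per group.
    const int solved[6] = {2, 1, 2, 10, 3, 1};

    double rawScore = 0, converted = 0;
    for (int g = 0; g < 6; ++g) {
        rawScore  += solved[g];              // unweighted score (max 24)
        converted += solved[g] * factor[g];  // weighted score (max 100)
    }
    std::cout << "Raw score: " << rawScore
              << "  Converted score: " << converted << "\n";
}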
Modifying the test scores with the conversion factors gives us a measurement scale from 0 to 100 points. The scores for the volunteers on this new measurement scale are shown in Table 65.
TABLE 65. Scores after conversion.

Volunteer   1      2      3      4      5     6
Score       79,6   83,8   51,5   69,7   52    44,7

APPENDIX D
Experiment Subject Responses and Reports

D.1 Introduction
This appendix includes additional information about the experiment described in Chapter 3.

D.2 Responses to Q2 in debriefing schema


This section contains the responses to the debriefing schema described in Section 3.6.6.
The bullets below show how the subjects responded to the question "What could you have done better in the experiment?". The responses are directly translated from Norwegian.
Responses from category A subjects:
(1) Difficult to get an overview of the code
(11) Everything, my knowledge level was too low
(13) Everything, if I had understood it
(25) I could have done more if I had grep and xcoral available, and if more pages

were available for me to write on


(27) I had nothing done, and I do not feel I could have done much more on anything
(33) I could have done more changes in the program code
(37) Most of it really, it was difficult to gain an overview of the system
(45) Well, I did not get any further than reading the program code.
(47) Everything
(49) I was unable to do anything concrete
(55) Find out where the information was located in the files (? bad handwriting)
(69) I did not do much more than trying to understand the system
(71) No, not with the documentation I had available
(75) Delta 2


(77) I had to use a lot of time to gain understanding before being able to implement something. Even if I used 90 minutes on trying to understand the code, I could not locate some of the information I needed
(83) Delta 1 on the implementation details, and everything on delta 2.

Responses from category B subjects:


(2) I used too much time trying to understand the overall architecture of the system
(4) I used all time available for reading the documentation and code
(10) Changes
(14) Study the input data and the documentation
(15) I had too little knowledge about classes and object-orientation in C++
(17) I felt that I could do more on delta 1 (I did nothing on delta 2). Things really started

to roll when only 30 minutes was left.


(18) I felt that I could do more on the program changes
(24) blank
(29) I felt that too little time was available, and much of the time was used on trying to

understand the system


(42) Everything
(50) I felt that I more or less gained an overview of what had to be done and how I would

have done it, but I missed some of the details regarding data members and functions
regarding this.
(64) Understand the data structures
(66) Felt I had too little done on the C++ coding
(74) I cannot brag about having too much done. Maybe I could have performed better if

the problem had been more clearly formulated.


(80) Most of it, but I work best behind a computer. Paper programming does not suit me
(84) I am not very steady on the syntactical aspects of C++. I understood what I had to do

and how to do it, but I did not bother with doing it properly. (I do not find it interesting to
concretely do something after I have understood what must be done.)

D.3 Response to Q4 in debriefing schema


This section contains the responses to the debriefing schema described in Section 3.6.6.
The following lists the responses given to question Q4 in the debriefing schema: "Please write down any comment you may have about the experiment." The responses are translated from Norwegian by me.
Responses from category A subjects:


It was an interesting experience to take part in the experiment. Regrettably, the education never provides similar assignments. The experiment was very well organized and carried through.
I think the experiment was too long.
I felt that I had too little knowledge about C++, and that things were a bit too advanced for me. Apart from that, the experiment seemed OK.


With little documentation and sparsely commented code, things will not go OK :-)
It was not very motivating to work under a tight time limit, knowing that nothing was really going to be achieved.


It was interesting to have demonstrated beyond any doubt how chaotic even moderately

sized C++ programs can be without sufficient system documentation.


Too little time (and knowledge) available. It is not motivating to work with an assign-

ment which you know you do not have time to complete. Therefore, I feel the experiment
is somewhat misleading1
It was almost impossible to obtain an overall picture of the system given to us. It may be

unrealistic with so little (no) documentation.


It was difficult to understand what Hyper/HyperList did. It would have been easier to do the changes if it had been possible to run/debug the system. More time (1-2 hours) would have made it possible to make all the changes. With a compiler, it might have worked.
Responses from category B subjects:
Two to three more years of experience would have made a difference. I did not have

enough knowledge about C++ and programming in general.


The source code was messy and little documentation was available. The implementation

was difficult to read, and would have been much better without the hypertext system
Interesting, but maybe the system was too large for the experiment. I kind of feel that the

documentation did not help too much


The system was a bit too big to gain an overall understanding. The documentation lacked

a general section about overall architecture and how the objects were connected.
(1) The system presentation on foils and demonstration took too much time. (2) The pro-

gram was too complex for the limited time frame. I believe that you would have been
able to obtain better results from the experiment by specifying the changes in more
detail. Is the experiment tested on one or two persons in advance? Nevertheless, I think
the aspects raised by the experiment are important for software evolution. Good luck.2
The program was a bit too large to gain a thorough overview. The presentation and demonstration in advance were a bit too quick.3


1. The comment shows that the subject has not understood the prerequisites of the experiment. If the assignment had been given so that everybody would have been able to finish if their skills were sufficient, the experiment would merely have been degraded into an ordinary examination.
2. Again, the comments are based on a misconception about the aims of the experiment. This is not surprising, as this was not stressed to the experiment subjects.
3. Compare with the previous comment, which stated the opposite.


The code was too complex with little comments. The system was too complicated, it was

difficult to understand it.


The system was large and it was difficult to obtain an overview of it. The documentation

I had most use for during the problem solving process was the design document, the program example, and the source code. More documentation would have been unnecessary
as the time limits made it impossible to read even those which were handed out.
Interesting and exciting experiment. It was difficult to understand the system.
The assignment was poorly documented. I used much time on things which should have

been unnecessary to use time on at all.1

The experiment seems somewhat unplanned, as the level of the explanations given did not match the level of skill of everyone in the audience.2
The system was difficult to understand. Why did everything need to inherit from the Hyper class? The code should have more comments.
1. Not specified any further.
2. This particular subject had the third lowest test score.

D.4 Initial subject analysis


This section contains the first attempt on evaluating the subjects contributions in the experiment. The Qual_Lev subjective measure in Table 66 and Table 67 is given on an ordinal scale,
ranging from 0 to 5 (best). A short comment is attached to evaluate the effort of subject.

D.4.1 Reports for category A (code only), preliminary


A-1 Created files rapport.h and rapport.cpp, and made a rapport class which was responsible for traversing the lists and extracting information from the classes. Forgot about free functions.
A-11 Inserted a list class for the report generation. Does not indicate where elements are inserted into this, nor good pseudo-code. Has misunderstood the internal organization of the program structures.
A-13 Used most of the time on trying to understand how the data is arranged in the list structures. Could not find this out during the permitted time. Good explanation of the problems
encountered when trying to understand the code. No source code of significance, some to
extract the name of the system.
A-19 Used all time trying to understand the system. From the experiment forms filled in by the
subject, it seems that the subject has given in before the permitted time ran out. The following
three general questions were all written by the subject on the forms: Where does the data to be
reported exist? How is the system organized in memory? Where in the program should the
report function be inserted?
A-25 Used most of the time trying to understand the system. Could not really understand how
the information was organized, and suggested to make a new class for report generation. This
would be called from the menu object, with the system object as a parameter. It seems that the
subject is onto something, but has lacked time to fulfil the thought process.
TABLE 66. Time usage report for delta 1, category A a
(Columns: Subject #, Time_U_C, Time_C, Qual_Lev. Column totals: Time_U_C 1630, Time_C 410, Qual_Lev 18.)
a. Time reported is in minutes

A-27 Could not understand the system. The only thing suggested was that there had to be a
menu choice for generating the reports.
A-33 Did not understand the system. Understood where and what information was written into
the C++ comments. Could not use this knowledge to suggest how to generate report specified
by modification request delta 1.
A-37 Did not understand the system. Suggested that there had to be a menu choice for generating the reports. Found clues about where the information to be written to the report was
located. This was found in some member functions in the list and class classes. It seems
that the subject has given in before the permitted time ran out.
A-45 Understood that a menu choice had to be inserted to generate the report, and that all lists
needed a generate report member function. Did not recognize that the objects contained in
these lists needed such a function to do the actual work as well, if the intention was not to
include all this code in the list classes. If the pseudo code had been more than a one-liner, this
could have been clarified.
A-47 Understood that member functions for writing a report had to be added to the class and
member function lists. Did not understand that functions were represented as hyper objects,
and therefore the hyper list also needed a report member function. Understood that the menu


needed a choice for report generation. The pseudo code other than for the menu option is
mostly lacking, but it is evident that the subject has been onto something.
A-49 No understanding of the system. Suggests using the skrivdata member function for writing the report by adding a file parameter. It is evident that the subject has not been able to find the plan of the program, and therefore has no good suggestions for the modifications to be made. Actually explicitly complains about lacking comments and documentation on the change schema.
A-55 Found the information needed for writing a class report in klasse::skriv_kommentar,
and suggested to change this for writing the report. Produced some very high level pseudo
code for this. Has not been able to show how this can be integrated with the rest of the report.
A-69 Misinterprets the purpose of the file object, and uses the information in that class as a basis for the report function. However, the proposed solution may be logically correct and might have resulted in the right behaviour if time had permitted the subject to carry out the modifications.
A-71 The delivered schema indicates that the obtained understanding of the system is minimal.
The subject has understood that there are some lists in the program and that one has to start
with these to generate a report. No good suggestions or pseudo code.
A-75 This subject has really understood the system. At least the proposed solution is very close
to what is done by the originating group. The pseudo code is also very good. What is lacking is
the formatting information to get the report formatted correctly, else the proposed solution is
very good.
A-77 The subject has located all the lists holding the information about the system. However,
the proposed solution seemed a little immature, and reveals that a full understanding has not
been gained.
A-83 The time consumption has been thoroughly documented by this subject. The relevant
lists containing the information to be written to the reports have been located. The proposed
solution is the one implemented by the originating group, making a report member function for
all program elements. The subject has forgotten to make a report member function for member
functions, and the pseudo code is a bit weak in the details.
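For reference, the following is a minimal sketch of the report-generation pattern chosen by the originating group (cf. the SkrivRapport() additions listed in Table 70): a virtual report member on every program element, with the lists and the system object forwarding the call. The class bodies shown here are simplified illustrations, not the group's actual code.

// Illustrative sketch of the originating group's report-generation pattern.
#include <iostream>
#include <vector>

class Hyper {                          // base class for all program elements
public:
    virtual void SkrivRapport() { std::cout << "element report\n"; }
    virtual ~Hyper() {}
};

class Klasse : public Hyper {          // a C++ class found by the parser
public:
    virtual void SkrivRapport() { std::cout << "class report\n"; }
};

class HyperListe {                     // list of Hyper elements
    std::vector<Hyper*> elementer;
public:
    void LeggTil(Hyper* h) { elementer.push_back(h); }
    virtual void SkrivRapport() {
        for (Hyper* h : elementer) h->SkrivRapport();   // delegate to elements
    }
    virtual ~HyperListe() { for (Hyper* h : elementer) delete h; }
};

class System {                         // top level: owns the lists
    HyperListe klasser;
public:
    void LeggTil(Hyper* h) { klasser.LeggTil(h); }
    void SkrivRapport() { klasser.SkrivRapport(); }
};

int main() {
    System s;
    s.LeggTil(new Klasse);
    s.SkrivRapport();                  // would be triggered by the RAPPORT menu choice
}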

D.4.2 Reports for category B (code + doc), preliminary


B-2 The subject has shown an understanding of how the system is assembled, and has suggested the changes proposed by the originating group. The pseudo code is ok for the high-level generate-report function in the system class, but more unclear for the more detailed ones.
B-4 The subject has nothing to contribute. Most of the time was used trying to understand the source code.
B-10 It seems that the subject has given in after about an hour. There are no suggestions as to which changes should be made to the system, nor does it seem that the system is understood.
B-14 The subject has spent most of the time trying to understand the system. In the schema delivered, complaints are made about the lack of comments in the code. The suggested changes include only adding a case statement to the menu, to generate the report.
B-15 No suggestions were provided for enhancements. Very little was noted on the understanding phase. It seems that the subject has given in after some time.


B-17 The subject has used the documentation and code evenly to build an understanding of how the system functions. The proposals for modifications are those suggested by the originating group. The pseudo code is ok and correct for the system and class level, but more unclear at the function level.
TABLE 67. Time usage report for delta 1, category B a
(Columns: Subject #, Time_U_D, Time_U_C, Time_C, Time_D, Qual_Lev. Column totals: Time_U_D 725, Time_U_C 715, Time_C 470.)
a. Time reported is in minutes
b. Has produced something on delta 2 as well.
c. Has produced something on delta 2 as well.
d. Has produced something on delta 2 as well.

B-18 The subject has shown a good understanding of what should be written to the report. The
pseudo code is ok for the top level report and mostly for the class part of the report, but missing
for most of the function parts of the report.
B-24 The subject gives no detailed pseudo code of the changes. A class is made for the report,
and it is specified that the member function to generate the report in the report class should use
the lists in the system class (which is correct) and write the report to a file with correct formatting (which is also correct).
B-29 The subject gives an unusual proposal for how to generate the report, but the specification is correct. The pseudo code is mostly correct, but the details of how the report is generated for the individual program elements are not shown.
B-42 It is only shown how a report can be generated for the top level of the system, and how the time at which the report is generated can be obtained. It is not shown how the information about the classes and their member functions, and about the other functions, can be obtained. The pseudo code for what is suggested is ok.
B-50 It seems that the subject has gained an incomplete and spurious knowledge about the system. Has understood the basic procedure for generating the report, but is not able to turn this into pseudo code beyond a very high-level discussion.
B-60 The subject has understood how the report should be generated, but has not been able to
locate all the information to go into the different reports. Has understood that the system is the
general report generator which calls a report generator for each list, which in turn calls the
report generator for the class and functions. The details are missing, but the thoughts look ok.
B-64 Has understood where the report should be hooked into the system, but has not been able
to specify pseudo code for the details.
B-66 Has understood the procedure of how to generate a report, but has not been able to find
and write pseudo code for all the details needed to generate the report in full.
B-74 Has not written anything on the experiment forms, but has written pseudo code for the
report menu option in the source code. Rather thin.
B-80 Has understood what data should be used to generate the reports, but has not been able to
specify the pseudo code to generate the reports in full. The subject reports that understanding
what happens during menu choices is difficult. Has not reported any time consumption, but
since no pseudo code is generated, I put all the effort used into the understanding phase.
B-84 The subject has shown a good understanding of the system, and has generated detailed
pseudo code to show how a report should be generated.

APPENDIX E
Analysis of changes of group a03

E.1 Code changes


The a03 application, used as the baseline for the experiment described in Chapter 3, consisted of the files shown in Table 68. Each of the three deliveries consisted of the same files.
TABLE 68. Files and changes in a03 application
(Columns: File, Change 4, Change 5 — an "x" marks a file changed in the corresponding delivery. Files: buffer.h/.cpp, endl.h/.cpp, endring.h/.cpp, fil.h/.cpp, funkmedl.h/.cpp, funksjon.h/.cpp, hyper.h/.cpp, klasse.h/.cpp, lister.h/.cpp, menyvalg.h/.cpp, parser.h/.cpp, system.h/.cpp, norsk.h, pkmain.cpp.)
The LOC counts for the files in the three different deliveries are shown in Table 69.
TABLE 69. LOC count for a03 files

File          Del 3  Del 4  Del 5   File           Del 3  Del 4  Del 5
buffer.h      27     27     27      buffer.cpp     111    111    111
endl.h        19     20     20      endl.cpp       69     76     76
endring.h     25     27     27      endring.cpp    80     80     80
fil.h         39     39     39      fil.cpp        445    465    465
funkmedl.h    31     32     33      funkmedl.cpp   118    141    154
funksjon.h    30     31     31      funksjon.cpp   90     95     95
hyper.h       43     107    107     hyper.cpp      72     131    131
klasse.h      29     86     86      klasse.cpp     180    249    250
lister.h      122    128    128     lister.cpp     508    885    885
menyvalg.h    31     67     67      menyvalg.cpp   56     98     98
parser.h      133    133    133     parser.cpp     629    629    629
system.h      31     32     32      system.cpp     266    293    293
norsk.h       10     10     10      pkmain.cpp     24     24     24

The variation in file size is also shown graphically in Figure 82.


FIGURE 82. Variation in file size (LOC per file, deliveries 3, 4 and 5)

A more detailed description of the changes from delivery 3 to delivery 4 is in Table 70.
TABLE 70. Changes from delivery 3 to delivery 4
File

Change

endl.h

added function string DatoOgId() to class EndringsLogg

endring.h

added function string Dato() to class Endring


added function string ID() to class Endring

fil.h

changed parameter passing from reference to value for functions EkstrFunknavn,


EkstrKlassenavn and EkstSynlighet in class Fil

funksjon.h

added function void SkrivRapport() to class Funksjon

283

Code changes
TABLE 70. Changes from delivery 3 to delivery 4
File

Change

funkmedl.h

added function void SkrivRapport() to class Funksjonsmedlem

hyper.h

added ProKomm generated comment to start of file


added #include <fstream.h>
added ProKomm generated comment to class Hyper
added virtual function void SkrivRapport() to class Hyper

klasse.h

added ProKomm generated comment to start of file


added ProKomm generated comment to class Klasse

lister.h

added virtual function void SkrivRapport() to class HyperListe


added virtual function void SkrivRapport() to class KlasseListe
added virtual function void SkrivRapport() to class MFunkListe
added functions Endring *Topp(), int LeggTil (Endring *e), int LeggTilOeverst(Endring *e), and int LeggTilNederst(Endring *e) to class EndringsListe
changed constructor EndringsList() from inline to declaration

menyvalg.h

added ProKomm generated comments to start of file


added definition #define RAPPORT 7 (to ID-fields in MenyValg objects)
added ProKomm generated comments to class MenyValg

system.h

added function void SkrivRapport() to class System

endl.cpp

added definition of function string EndringsLogg::DatoOgID()


in void EndringsLogg::LesInn() function LeggTilNederst() is called rather than
LeggTil() in first while loop
in void EndringsLogg::LesNyEndring() function LeggTilOverst() is called rather
than LeggTil() in last if expression

fil.cpp

comment on parser bug in definition of string Fil::EkstrFunknavn()


changed parameter passing type in functions Fil::EkstrFunknavn, Fil::EkstrKlassenavn, and Fil::EkstrSynlighet
moved extraction of filstatus and teststatus in function Fil::LesInn from else expression on test on sjekksum to before test on sjekksum
added possibility of adding new files to system in function Fil::LesInn
added debug-printout and commented this out in case medl_funk_def in function
Fil::LesInn
removed a debug-printout after if clause in case medl_funk_def in function
Fil::LesInn
added and commented out a debug-printout as second line in case loes_funk in
function Fil::LesInn
added and commented out a debug-printout as fourth line in case medl_funk_body
in function Fil::LesInn
added and commented out a debug-printout as second line in case klasse in function Fil::LesInn

removed a commented debug-printout under case klasse in function Fil::LesInn
added cast to string on p.element_navn() under case klasse in function
Fil::SkrivKommentar
added and commented out a debug printout under case klasse in function
Fil::SkrivKommentar

funkmedl.cpp

added definition of void Funksjonsmedlem::SkrivRapport()


changed find(string(:),0) to find_first_of(:) in first line of Funksjonsmedlem::LesInn

funksjon.cpp

added definition of void Funksjon::SkrivRapport()

hyper.cpp

added ProKomm generated comments to start of file


added ProKomm generated comment to function Hyper::Hyper()
added ProKomm generated comment to function Hyper *Hyper::SkrivNavn()
added ProKomm generated comment to function void Hyper::Valgt()
added ProKomm generated comment to function int Hyper::SkrivData()

klasse.cpp

added code to print out inherited functions in function int Klasse::SkrivData()


moved definition of string navn out of while loop in void Klasse::LesInn()
in void Klasse::LesInn(): edited the klassenavn manipulation by changing klassenavn.remove(0,5) to klassenavn.remove(0,6)
added definition of function void Klasse::SkrivRapport()

lister.cpp

added ProKomm generated comments to start of file


added ProKomm generated comments to HyperListe::HyperListe()
added ProKomm generated comments to Hyper *HyperListe::Topp()
added ProKomm generated comments to Hyper *HyperListe::Forste()
added ProKomm generated comments to Hyper *HyperListe::Neste()
added ProKomm generated comments to Hyper *HyperListe::Push()
added ProKomm generated comments to Hyper *HyperListe::Pop()
added ProKomm generated comments to Hyper *HyperListe::FinnNavn()
added ProKomm generated comments to int HyperListe::Data()
added ProKomm generated comments to int HyperListe::SettData()
added ProKomm generated comments to int HyperListe::KommerEtter()
added ProKomm generated comments to int FilListe::KommerEtter()
added ProKomm generated comments to int HyperListe::LeggTil()
added ProKomm generated comments to int HyperListe::Antall()
added ProKomm generated comments to void HyperListe::DeallokerHypere()
added ProKomm generated comments to Hyper *HyperListe::SkrivRekke()
added ProKomm generated comments to void HyperListe::SkrivRekkeTilFil()

added ProKomm generated comments to void HyperListe::SkrivKolonner()
added definition of function void HyperListe::SkrivRapport()
added ProKomm generated comments to void HyperListe::DeallokerNoder()
removed comments from function void HyperListe::DeallokerNoder()
added ProKomm generated comments to Hyper *KlasseListe::SkrivRekke()
added ProKomm generated comments to void KlasseListe::SkrivHierarki()
added ProKomm generated comments to void KlasseListe::SkrivMetrikker()
added ProKomm generated comments to void KlasseListe::OppdaterFunksjonsarving()
added comment to first fourth level while loop in void KlasseListe::OppdaterFunksjonsarving()
in void KlasseListe::OppdaterFunksjonsarving()s first fourth level while loop;
changed k->funksjoner.FinnNavn to k->arvfunk.FinnNavn three times
added another fourth level while loop in void KlasseListe::OppdaterFunksjonsarving() to update the visibility for inherited functions
added definition of void KlasseListe::SkrivRapport()
added ProKomm generated comments to void FilListe::SkrivKolonner()
added ProKomm generated comments to Hyper *MFunkListe::SkrivRekke()
added definition of void MFunkListe::SkrivRapport()
added definition of EndringsListe::EndringsListe()

lister.cpp

changed definition of int EndringsListe::LeggTil to int EndringsListe::LeggTilOverste, and added ProKomm generated comments to int EndringsListe::LeggTilOverst()
added definition of int EndringsListe::LeggTilNederst()
added ProKomm generated comments to void EndringsListe::DeallokerNoder()
added ProKomm generated comments to int EndringsListe::Antall()
added ProKomm generated comments to void EndringsListe::SkrivEndringer()
removed output of endl to fil as last line of while expression in void
EndringsListe::SkrivEndringer()
added ProKomm generated comments to void EndringsListe::SkrivKolonner()
added definition of function Endring *EndringsListe::Topp()


menyvalg.cpp

added ProKomm generated comments to start of file


added ProKomm generated comments to string MenyValg::HentFilnavn()
added ProKomm generated comments to int MenyValg::SkrivData()
in int MenyValg::SkrivData() under case NYFIL: uncommented the line systempeker->klasseliste.OppdaterFunksjonsarving()
in int MenyValg::SkrivData(): added case RAPPORT expression with directions
for how to write the report.

system.cpp

removed a comment at the end of line including #include Klasse.h


added definition of void System::SkrivRapport()
in function int System::InitMeny(), added a line to push the menu choice for generating reports on the menu HyperListe
in function int System::SkrivData(), changed Prokomm v1.1 to Prokomm v1.2

Changes in the implementation from delivery 4 to delivery 5 is found in Table 71.


TABLE 71. Changes from delivery 4 to delivery 5
File

Change

funkmedl.h

for declarations of member functions void LesInnGammel(3), void LesInn(3), and


void LesInn(3) in class Funksjonsmedlem in delivery 4, the parameter Fil *fil is
added as last parameter to the first two, and removed as second parameter in the last.
the function string ReturnerDefinerti() has its declaration added in class FunksjonsMedlem

funkmedl.cpp

Addition of white space in cout lines in int Funksjonsmedlem::SkrivData() to have a


correctly indented report.
Addition of output lines for writing where the member function is defined in int
Funksjonsmedlem::SkrivData().
in function void Funksjonsmedlem::SkrivRapport(), two lines are added at the end
to output where the member function is defined. The indentation of this output is the
same as the indentation for the info in the line above.
change in parameter list of void Funksjonsmedlem::LesInnGammel() and void
Funksjonsmedlem::LesInn() to reflect the changes described for funkmedl.h
added line in body of void Funksjonsmedlem::LesInnGammel() and void Funksjonsmedlem::LesInn() with line this->fil = fil, for an unknown reason. For the void
Funksjonsmedlem::LesInn() where the parameter list was reduced, the this->fil
= fil line was removed
added definition of string FunksjonsMedlem::ReturnerDefinerti()

parser.cpp

In function bool Parser::utfoer_parsing, under case ;, the last parameter to function sett_inn_element() is changed from 1 to 0.


klasse.cpp

In void Klasse::SkrivKommentar(), where comments for a class are written to file, a


line to write the defined in information to file is added.
In void Klasse::LesInn() the buffer pointer is advanced by 4 in each while loop
instead of 3.

fil.cpp

in int Fil::LesInn(), the passing of this to funkmedlobjekt->LesInn() is removed


under case medl_funk_def
in int Fil::LesInn(), this is added as a last parameter when funkmedlobjekt->LesInnGammel() is called under case medl_funk_body
in int Fil::LesInn(), this is added as a last parameter when funkmedlobjekt->LesInn is called under case medl_funk_body
in int Fil::LesInn(), this is added as a last parameter when filbuffer.Sjekksum() is
called under case medl_funk_body

E.2 Documentation changes


This section describes the changes group a03 made in the documentation from delivery 3
through to the last delivery. Table 72 outlines the file structure of the documentation, and the
size of the files measured in number of words. Where no size is given for a file in the table, the
file is not available in that delivery. The design document konstruk.doc in delivery 3 has been
renamed to kons2.doc in deliveries 4 and 5. In addition to the files listed in the table, 6 dummy
files were included in the directories. These were empty and have been excluded from the table.

TABLE 72. Files and their size

Type of doc        File name      Size (3)   Size (4)   Size (5)   Purpose
User manual        bruker.doc     872        908        908
Test report        testrap.doc    919        763        763
Design document    kons2.doc(a)   5671       5942       6226       Design report
                   kritikk.doc    189                   175        Critique of neighbour group
                   vurder.doc                           322        Project evaluation

a. This document is called konstruk.doc in delivery 3. We have renamed it in the table for convenience.

In Table 73, we describe the changes made to the documents from delivery 3 to delivery 4.
TABLE 73. Changes in documents from delivery 3 to delivery 4
File

Section

Change

bruker.doc

Added option for generate report to main menu description.

5.5 (I)

Inserted new section for describing behaviour of the generate


report option.

Table of contents removed

5.4.2 (C)

Test case E8 removed from list of test cases

testrap.doc

kons2.doc

5.4.3 (C)

Test case E8 removed from test log list

5.4.4 (C)

Test case E8 removed from change log list

6 (D)

Removed Section Class:Buffer

6.1.1 (D)

Removed test plan

6.1.2 (D)

Removed test case description

6.1.3 (D)

Removed test log list

6.1.4 (D)

Removed change log list

6 (A)

Added new Section Class:Buffer with a comment that this


class is not yet implemented.

6.1 (C)

Added menu option generate report to menu list description

6.1.5 (A)

Added description of menu option generate report

7 (C)

Added virtual member function SkrivRapport(...) to interface


of class Hyper.

7.7 (A)

Added description of the virtual member function


Hyper::SkrivRapport(...)

7.16 (A)

Added (yet another?) description of the virtual member function Hyper::SkrivRapport(...)

NB!!!!!!

The interface declaration of HyperListe in Section 8 is not


extended with the SkrivRapport function. This should have
been done.

8.1.9 (A)

Added description of member function HyperListe::GenererRapport(...)

NB!!!!!!

The interface declaration of KlasseListe in Section 8.2 is not


extended with the SkrivRapport function. This should have
been done.

8.2.5 (A)

Added description of member function KlasseListe::GenererRapport(...)

NB!!!!!!

The interface declaration of MfunkListe in Section 8.4 is not


extended with the SkrivRapport function. This should have
been done.

8.4.2 (A)

Added description of member function MfunkListe::SkrivRapport(...).

NB!!!!!

The interface declaration of EndringsListe in Section 8.5 is not


updated with the two new member functions LeggTilOverst
and LeggTilNederst.

8.5.5 (A)

Added description of new member function


EndringsListe::LeggTilOverst(...)

8.5.6 (A)

Added description of new member function


EndringsListe::LeggTilNederst(...)

9 (C)

Added member function SkrivRapport to the interface declaration of class System.


9.5 (A)

Added description of member function System::SkrivRapport()

10.6 (C)

Added case for behaviour in main menu for the report generator
option. The description is inserted in the pseudo code for member function MenyValg::SkrivData(...)

11 NB !!!!

The interface declaration of class Fil is missing.

12 NB !!!!

The interface declaration of class Buffer is missing.

13 (C)

Corrected a minor spelling error.

NB !!!!

The interface declaration of class Endringslogg is missing.

13.3 (A)

Added description of new member function


EndringsLogg::DatoOgID(...)

NB !!!!

The interface declaration of class Endringslogg is missing.

14 NB !!!!

The interface declaration of class Endring is missing.

14.1 (C)

Corrected typing error for member data name. Member data


Endring::dato was corrected to Endring::Dato

14.5 (A)

Added description of new member function Endring::Dato()

14.6 (A)

Added description of new member function Endring::ID()

14.8 (C)

Corrected typing error in section name. Member Fuction was


corrected to MemberFunction

15 NB !!!!

The interface declaration of class Klasse is missing.

15.12 (A)

Added description of new member function Klasse::SkrivRapport()

16 NB !!!!

The interface declaration of class Funksjon is missing.

16.7 (C)

Corrected minor typing error. Skjerm corrected to skjerm

16.8 (A)

Added description of new member function Funksjon::SkrivRapport()

17 NB !!!!

The interface declaration of class Funksjonsmedlem is missing.

17.6 (A)

Added description of new member function


Funksjonsmedlem::SkrivRapport()

kritikk.doc

Not available in delivery 4

vurder.doc

Not available in delivery 4

The second column shows in which section the change has been made. An (I) after the section number
means that the section has been inserted. Similarly, a (D) means that the section has been
deleted, and a (C) means that the section has been changed from the previous version.
Table 74 reports the differences in the documentation from delivery 4 to delivery 5.
TABLE 74. Changes in documents from delivery 4 to delivery 5
File

Section

Change

bruker.doc

No changes

testrap.doc

No changes


kons2.doc

The NBs of Table 73 regarding missing interface declarations also apply in the
design document of delivery 5

kritikk.doc

3 (C)

Fixed typing error in line 2: konstruksjonspesifikasjonen


changed to kravspesifikasjonen

5 (C)

Added descriptive text to Coad/Yourdon diagrams.

17.6 (C)

Extended description of Funksjonsmedlem::SkrivRapport with


information about changes made for second delta given. The
description is sloppy.

17.9 (A)

Added description of new member function Funksjonsmedlem::ReturnerDefinerti(...).

19 (C)

Removed three of the defined error messages.

20 (A)

Added section. Describes the status of the current version of


ProKomm

20.1 (A)

Added section on known errors and missing functionality

20.2 (A)

Added section on not implemented issues

20.3 (A)

Added section discussing what is described in the design document contra what is included in the executable application.

The changes described here are compared to the document delivered with
delivery 3.

vurder.doc


Reformulated the criticism from delivery 3. No new issues are


raised.

This document is only included with this delivery.

APPENDIX F

Statistical observations

F.1 Introduction
The aim of this chapter is to summarize the statistical techniques used in the analysis of the
data, both in the data collection from the students' projects and the change/time measurements.
We choose to include it as an appendix for readers who are unfamiliar with statistics, and for
the completeness of the thesis.

F.2 Statistics of one variable


This section presents the statistical foundations for averages and variation regarding one statistical variable.

F.2.1 Measures of central tendency


There are three different measures that are normally used for describing an average. These are:
1. The mode (modus), which is the most frequent category in a series of observations. This is normally used when observations naturally fall into discrete categories, e.g. we can speak of the mode of a grade distribution.
2. The median of a distribution, which is the value such that one half of the observations have a smaller value and the other half have a larger value. To find the median the observed values need to be sorted in ascending order. If we have n observations, and n is odd, there will be (n-1)/2 observations both with smaller and larger value. If n is even, the median is the midpoint between the two middle observations.
3. The arithmetic average, which is the sum of the n observed values divided by n. When we use the term average, we mean the arithmetic average. The average is the most commonly used measure for the central tendency, and is most suited unless the frequency distribution is very skewed. In that case either the mode or the median is the better measure.
Since we do not have a natural set of discrete categories to place our observations in, we use the
median or the average to describe the central tendency in our data set. We choose to use the
average as our measure.
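To make these measures concrete, the following small Python sketch (illustrative only; the observation values are made up and not taken from the thesis data) computes the mode, the median, and the arithmetic average of a series of observations:

    # Illustrative sketch; the observations below are made-up sample data.
    from collections import Counter

    observations = [3, 5, 5, 6, 8, 9, 11]

    # Mode: the most frequent category in the series of observations.
    mode = Counter(observations).most_common(1)[0][0]

    # Median: the middle value of the sorted observations, or the midpoint of
    # the two middle values when the number of observations is even.
    s = sorted(observations)
    n = len(s)
    median = s[n // 2] if n % 2 == 1 else (s[n // 2 - 1] + s[n // 2]) / 2

    # Arithmetic average: the sum of the n observed values divided by n.
    average = sum(observations) / n

    print(mode, median, average)   # 5 6 6.714...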


F.2.2 Measures of variation


Our average measure does not tell us anything about the diversity of the observations. Statistical theory operates with several measures for this, such as:
1. The range of the observations, meaning the difference between the highest and lowest observed value. This measure is, however, extremely sensitive to noise in the data set.
2. The interquartile range, which is the difference between the third and first quartile of the observations. The median is the second quartile; the first quartile is computed like the median of the observations with value less than the median, and similarly the third quartile is computed from the observations with value greater than the median.
3. The standard deviation, which measures how far the individual observations on the average are from the arithmetic average of the observations. Given that the observations taken constitute a sample of the whole population, the formula for the standard deviation is given by

SD = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}}

where \bar{x} is the arithmetic average of the sample.
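A corresponding sketch for the three measures of variation (again purely illustrative, and using one common convention for computing the quartiles) could look like this:

    # Illustrative sketch; range, interquartile range and sample standard deviation.
    import math

    observations = [3, 5, 5, 6, 8, 9, 11]   # made-up sample data
    s = sorted(observations)
    n = len(s)

    value_range = s[-1] - s[0]               # highest minus lowest observed value

    def median(values):
        mid = len(values) // 2
        return values[mid] if len(values) % 2 == 1 else (values[mid - 1] + values[mid]) / 2

    # Quartiles computed as the medians of the lower and upper halves of the data.
    iqr = median(s[(n + 1) // 2:]) - median(s[:n // 2])

    # Sample standard deviation with the n - 1 denominator, as in the formula above.
    mean = sum(s) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in s) / (n - 1))

    print(value_range, iqr, sd)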

F.3 Statistics of several variables


The previous section presented the statistics used for computing the average and variation of a
variable. In this section we present the techniques used to check for interdependence
among variables. This interdependency is called correlation.
See pages 50-51 in [Hogg and Ledolter, 1992] about the correlation coefficient. The absolute
value of this coefficient tells us about the strength of the linear association between two variables, while the sign of the coefficient tells us whether the association is direct (+) or indirect (-).
Also read section 7 in [Undheim, 1985].

F.3.1 Correlation among variables


Statistical variables are said to correlate when observations show that a change in the value
of one variable results in a change in the value of another variable. The relation between two
statistical variables (X and Y) can be presented graphically using scatter plots.

FIGURE 83. Scatter plots (examples): positive correlation (r>0), negative correlation (r<0), and no correlation (r=0)

Figure 83 shows examples of scatter plots of two variables, and how their corresponding values of r
change. The leftmost plot shows a positive (or direct) correlation between the two variables
plotted. Such a positive correlation results if one can expect a high value of variable Y when a
high value of variable X is measured. The middle plot shows a negative (or indirect) correlation between X and Y. If a high value is measured for X, a low value can be expected for Y, and
vice versa. Finally, the rightmost plot shows a plot of two variables which are not correlated.
In this case the variables are statistically independent, meaning that we cannot estimate any
value for Y, given a measure of X (except the arithmetic average of all previous measures of
Y).
There are several statistical tools which help us to measure the degree of correlation between
two variables. The different tools highlight different characteristics of the data, and may be
used in different situations. The following paragraphs will present some of these tools.

F.3.2 Pearson's product-moment correlation coefficient


When, as for the use of the arithmetic average and standard deviation, the observations are taken on
an interval scale and the frequency distribution is more or less symmetrical around a top point (normal distribution), we can use Pearson's product-moment correlation coefficient (r) to measure
the strength of a correlation between two variables. Figure 83 shows how the value of r is related
to the scatter plots. This indicates that we cannot use a nominal or ordinal scale for such an
analysis.
John Krogstie [Krogstie, 1994c] reports that several researchers have stated that ordinal scales
can be used in correlation analysis (e.g. [Boyle, 1970], [Labovitz, 1970], and [Nie et al.,
1975]). Boyle states that using an ordinal scale in parametric statistics will be conservative
regarding the real correlation coefficient used with an interval scale. The requirement for a normal distribution can also be relaxed a bit according to Guildford ([Guildford, 1978]), but the
distribution should be unimodal (one maximum) and have symmetric properties. The number
of cases in the investigation should be above 30 ([Bergersen, 1990]).
Pearson's product-moment correlation coefficient is defined as

r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{(n - 1) S_x S_y}
The degree of relation between the two variables is indicated by the magnitude of r. In Table 75,
[Brase and Brase, 1983] report a good explanation of the magnitude of r:
TABLE 75. Degree of relation indicated by the magnitude of r

If the magnitude of r is     Then we may think of the relation between x and y as
Less than 0.20               Negligible
Between 0.20 and 0.40        Very small
Between 0.40 and 0.70        Moderate and substantial, but not too high
Between 0.70 and 0.90        Fairly high, but not extremely dependable
Between 0.90 and 1.00        High and reasonably dependable

It should be noted that it is important to examine correlation coefficients together with a scatter
plot of the data. The correlation coefficient is a measure of the strength of the linear relationship in the data, but this coefficient may be equal for several plots, even when the corresponding
plot is not linear at all. Also, it is a common mistake when interpreting the correlation coefficient to assume that correlation implies causation. No such conclusion is automatic.
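As a small illustration (the two variables below are made up, not measured data), Pearson's r can be computed directly from the definition above:

    # Illustrative sketch of Pearson's product-moment correlation coefficient.
    import math

    X = [1, 2, 3, 4, 5]        # made-up observations
    Y = [2, 4, 5, 4, 6]

    n = len(X)
    mean_x, mean_y = sum(X) / n, sum(Y) / n

    # Sample standard deviations (n - 1 denominator), as used in the formula for r.
    s_x = math.sqrt(sum((x - mean_x) ** 2 for x in X) / (n - 1))
    s_y = math.sqrt(sum((y - mean_y) ** 2 for y in Y) / (n - 1))

    r = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / ((n - 1) * s_x * s_y)
    print(round(r, 3))         # about 0.85: a fairly high positive correlation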

F.3.3 Spearman's rank


As we noted above, Pearson's correlation coefficient cannot always be trusted when the
data is on an ordinal scale. For ordinal data that do not satisfy the normality assumption,
another measure of the linear relationship between two variables, Spearman's rank correlation
coefficient, is used.
The rank correlation coefficient is the Pearson correlation coefficient based on the
ranks of the data when there are no ties. If the original data have no ties, the data for each variable
is ranked, and then the Pearson correlation coefficient between the ranks for the two variables
is computed. Like the Pearson r-coefficient, the rank correlation ranges between -1 and +1,
where -1 and +1 indicate a perfect linear relationship between the ranks of the two variables. The interpretation is therefore the same, except that the relationship between ranks, and
not values, is examined.
An example of rank computation is shown in Table 76:

TABLE 76. Ranking ordinal variables

Variable X    Variable Y    Rank X    Rank Y
2356          4564          2         1
1243          8433          1         3
6543          5245          3         2
Instead of computing Pearson's r-coefficient on variable X and variable Y, the r-coefficient
is computed on rank X and rank Y. The mean for both ranks is 2, the standard deviation is 1,
and the Pearson coefficient is computed as

r = \frac{0 - 1 + 0}{2 \cdot (1 \cdot 1)} = -0.5
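The same computation can be expressed as a small code sketch which ranks the data from Table 76 and then applies Pearson's formula to the ranks, reproducing the value -0.5:

    # Spearman's rank correlation for the data in Table 76.
    import math

    X = [2356, 1243, 6543]
    Y = [4564, 8433, 5245]

    def ranks(values):
        # rank 1 for the smallest value, rank n for the largest (no ties in this data)
        order = sorted(values)
        return [order.index(v) + 1 for v in values]

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        sa = math.sqrt(sum((v - ma) ** 2 for v in a) / (n - 1))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b) / (n - 1))
        return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / ((n - 1) * sa * sb)

    print(ranks(X), ranks(Y))            # [2, 1, 3] [1, 3, 2]
    print(pearson(ranks(X), ranks(Y)))   # -0.5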

F.3.4 Kendall's tau


When we have a table of two ordinal variables, Kendall's tau can be used to back up the findings of a Spearman's rank. To compute Kendall's tau, all pairs are checked for concordance
(P), discordance (Q), and ties (T) against all other pairs. A pair is concordant to another pair if
both values are higher (or lower) than the values in the other pair. Two pairs are discordant to
each other when one value in the first pair is higher than the corresponding value in the second
pair, and the other value in the second pair is higher than the other value in the first pair. When
the two pairs have identical values on one or both variables, they are tied. An effective way to


compute this information for a set of pairs is to arrange them in a matrix, as in Table 77. Here P
equals 1, Q equals 3, while T equals 2. The total number of pairs, N is 4. A scatter plot of these
variable pairs is depicted in Figure 84.
TABLE 77. Computation of concordance, discordance and ties

         7, 9    8, 5    3, 8    7, 5
7, 9             Q       P       Tx
8, 5                     Q       Ty
3, 8                             Q
7, 5

FIGURE 84. Sample scatter plot (the four variable pairs plotted in an X-Y diagram)

There are three variations of Kendall's tau: tau-a, tau-b, and tau-c. They differ primarily in the
way the numerator (P-Q) is normalized. The simplest measure is

Kendall's \tau_a = \frac{P - Q}{N}

If there are no pairs with ties, this measure is in the range from -1 to +1. If there are ties, the
range of possible values is narrower; the actual range depends on the number of ties. Since all
observations within the same row are tied, as are those in the same column, the resulting
tau-a measures are difficult to understand. A measure which tries to normalize P-Q considering ties on each variable in a pair separately, but not ties on both variables in a pair, is:

Kendall's \tau_b = \frac{P - Q}{\sqrt{(P + Q + T_x)(P + Q + T_y)}}

where Tx is the number of pairs tied on X but not on Y, and Ty is the number of pairs tied on Y
but not on X. If no marginal frequency is 0, tau-b can attain +1 or -1 only for a square table.
A measure that can attain, or nearly attain, +1 or -1 for any r x c table is

Kendall's \tau_c = \frac{2m(P - Q)}{N^2(m - 1)}

where m is the smaller of the number of rows and columns. The coefficients tau-b and tau-c do
not differ much in value if each margin contains approximately equal frequencies.
In our case, we find that the tau-b measure best fits our needs. We could have used tau-a,
but as there might be cases in our data where pairs are tied on X or Y, we choose to use tau-b as
our correlation measure, in addition to Spearman's rank.
Kendall's tau-b in our example is computed as \tau_b = \frac{1 - 3}{\sqrt{5 \cdot 5}} = -0.4. To compute Spearman's
rank, we first need to rank the pairs.
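A small sketch that counts concordant, discordant, and tied pairs for the four observation pairs in Table 77 reproduces this value:

    # Kendall's tau-b for the observation pairs in Table 77.
    import math
    from itertools import combinations

    pairs = [(7, 9), (8, 5), (3, 8), (7, 5)]

    P = Q = Tx = Ty = 0
    for (x1, y1), (x2, y2) in combinations(pairs, 2):
        if x1 == x2 and y1 == y2:
            continue                        # tied on both variables
        elif x1 == x2:
            Tx += 1                         # tied on X but not on Y
        elif y1 == y2:
            Ty += 1                         # tied on Y but not on X
        elif (x1 - x2) * (y1 - y2) > 0:
            P += 1                          # concordant pair
        else:
            Q += 1                          # discordant pair

    tau_b = (P - Q) / math.sqrt((P + Q + Tx) * (P + Q + Ty))
    print(P, Q, Tx, Ty, tau_b)              # 1 3 1 1 -0.4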


F.4 Testing and estimation


When using t-tests, we are looking for a significant difference between two sample means
(double-sided two-sample t-test) or between one sample mean and a specified value (single-sided one-sample t-test).
Assume that a series of observations about a phenomenon has been collected. A population
mean of these observations has been computed. Then we try to improve the production process
of the phenomenon, in order to increase (or decrease) the mean of the phenomenon in future
observations.
The question now is: How can we be sure that the modifications we have made to the production process have led to an improvement in the observed phenomenon?
Example: We have measured the mean time consumption for 500 changes made to a software system
to be \bar{x}. We decide to improve the maintenance environment by introducing a configuration
management system. After the maintenance staff has been trained in the new environment, we
record observations of the time used to process 10 changes to the system. The mean of these
observations is \bar{x}_{10} < \bar{x}, with standard deviation s. Can we conclude that the new mean time to
process a change has decreased?
This question can be answered by using a significance testing strategy. When trying to decide
whether or not the mean yield has decreased, it is not sufficient to calculate the difference
between the sample mean and the old mean: we must compare this difference with some measure of the batch-to-batch variation, such as the standard deviation.
[Caulcutt, 1983] describes the following procedure to calculate a one-sample t-test, which in
significance testing is known as a test statistic. This significance test is a one-sided test,
because we are looking for a change in one direction.
One-sample t-test: Test statistic = \frac{\bar{x} - \mu}{s / \sqrt{n}}, where \bar{x} is the sample mean, \mu is the old
population mean, s is the sample standard deviation, and n is the number of observations in the
sample.
The one-sample t-test is carried out using a six-step procedure:
1. Null hypothesis - The mean yield of all observations after the modification is equal to \mu.
2. Alternative hypothesis - The mean yield of all observations after the modifications is
greater than (or less than) \mu.
3. Test statistic: \frac{\bar{x} - \mu}{s / \sqrt{n}}.
4. Critical values - From a table of critical values for the t-test, using n-1 degrees of freedom
for a one-sided test: one critical value at the 5% significance level, and a larger critical value
at the 1% significance level.
5. Decision:
- If the test statistic is less than both critical values we cannot reject the null hypothesis.
- If the test statistic lies between the two critical values we reject the null hypothesis at the
5% significance level.


- If the test statistic is greater than both critical values we reject the null hypothesis at the
1% significance level.
6. If we in step 5 rejected the null hypothesis, we can conclude that the mean yield is greater
than (or less than) \mu. We typically report that a significant increase (decrease) in yield was
found (p < 0.05)1.
Note that a small test statistic does not inspire us to accept the null hypothesis. At the decision
step we either reject the null hypothesis or we fail to reject the null hypothesis. It would be
illogical to accept the null hypothesis, since the theory underlying the significance test is based
on the assumption that the null hypothesis is true.
Questions about the test statistic: Chapter 5 in [Caulcutt, 1983] describes two-sided t-tests.
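To illustrate the test statistic for the configuration management example (all numbers below are made up, not measured data), the computation can be sketched as:

    # One-sample t-test sketch for the maintenance example; the numbers are made up.
    import math

    old_mean = 12.0                          # assumed old mean time (hours) per change
    sample = [9.5, 11.0, 8.0, 10.5, 9.0,     # assumed times for 10 changes after the
              12.0, 8.5, 10.0, 11.5, 9.5]    # new environment was introduced

    n = len(sample)
    sample_mean = sum(sample) / n
    s = math.sqrt(sum((x - sample_mean) ** 2 for x in sample) / (n - 1))

    # Test statistic = (sample mean - old population mean) / (s / sqrt(n))
    t = (sample_mean - old_mean) / (s / math.sqrt(n))

    # For a decrease, the magnitude of the (negative) statistic is compared against
    # the one-sided critical values for n - 1 = 9 degrees of freedom (about 1.83 at
    # the 5% level and 2.82 at the 1% level, taken from a t-table).
    print(sample_mean, s, t)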

F.4.1 Estimation of confidence interval for population mean


Given that we have a measure of the sample mean \bar{x} and the standard deviation s for a sample
of size n, we would like to estimate a confidence interval for the population mean, \mu. This
interval is computed as \bar{x} \pm t \cdot s / \sqrt{n}. The value used for t (the critical value of a two-sided t-test) is selected from a table, e.g. the one in [Caulcutt, 1983].
Example: If we measure the height of 5 randomly selected 14 year old boys, and find the sample mean to be 162 cm and the sample standard deviation to be 6 cm, we can expect the 95%
confidence interval for the average height of all 14 year old boys to range from
162 - 2.78 \cdot 6 / \sqrt{5} = 154.5 cm to 162 + 2.78 \cdot 6 / \sqrt{5} = 169.5 cm.
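The same computation expressed as a short sketch (using the numbers from the height example above):

    # 95% confidence interval for the population mean; height example from the text.
    import math

    n = 5
    sample_mean = 162.0      # cm
    s = 6.0                  # cm
    t = 2.78                 # two-sided critical value for n - 1 = 4 degrees of freedom

    half_width = t * s / math.sqrt(n)
    print(sample_mean - half_width, sample_mean + half_width)   # about 154.5 and 169.5 cm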

1. Or p < 0.01.


APPENDIX G

References

G.1 Introduction
This appendix contains references to all sources cited in this thesis, and several other publications which are interesting reading for anyone interested in the field of software maintenance.
Section G.2 lists the references which are used and cited in this thesis, while Section G.3 lists a
number of other interesting and related references that I have collected during the work on this
thesis.
Around 500 references are listed in all; some of them are annotated with personal comments.

G.2 Cited references


[Abran and Nguyenkim, 1993] Abran, A. and Nguyenkim, H. (1993). Measurement of the
Maintenance Process from a Demand-based Perspective. Software Maintenance: Research and Practice, 5:63-90.
Summary: (i) Workload distribution of management effort based on actual data. 2152
work requests requiring 11332 days of effort to carry out. (i) Investigates basis of productivity analyses. (iii) Function points metric in maintenance category. Provide a basis for establishing pricing mechanisms for maintenance products and services. (iv)
The paper stresses the impact of measurements and data analyses to get more understanding of the maintenance process. (v) The analyses supports previously reported results based on interviews.
[Arfa and Mili, 1990] Arfa, L. B. and Mili, A. (1990). Software Maintenance Management in
Tunisia: A Statistical Study. In Proc. of the 1990 Conference on Software Maintenance
(CSM 1990), pp. 124-129. IEEE, 1990.
[Arnold, 1993a] Arnold, R. S. (1993a). A Road Map Guide to Software Reengineering Technology. In Arnold, R. S., editor, Software Reengineering, chapter 1, pages 3-22. IEEE
Computer Society Press.
[Arnold, 1993b] Arnold, R. S. (1993b). Software Reengineering. IEEE Computer Society
Press Tutorials. IEEE Computer Society Press, Los Alamitos, California, USA.
[Basili and Green, 1994] Basili, V. R. and Green, S. (1994). Software Process Evolution at the


SEL. IEEE Software, pages 58-66.


[Basili et al., 1986] Basili, V. R., Selby, R. W., and Hutchens, D. H. (1986). Experimentation
in Software Engineering. In IEEE Transactions on Software Engineering, SE-12(7):733-743.
[Basili et al., 1996] Basili, V. R., Green, S., Laitenberger, O., Sørumgård, S., and Zelkowitz,
M. (1996). The Empirical Investigation of Perspective-Based Reading. In Journal of
Empirical Software Engineering, 1(1):1-40, April, 1996.
[Baxter, 1992] Baxter, I. D. (1992). Design Maintenance Systems. Communications of the
ACM, 35(4):73-89.
[Bennett et al., 1990] Bennett, K.H., Estdale, J., Khabaza, J., Price, M., Van Zuylen, H., and
Younger, E. (1990). The Reverse Engineering Handbook. Technical report 2487TNWL-1027, 1990.
[Bennett et al., 1992] Bennett, K., Bull, T., and Yang, H. (1992). A Transformation System for
Maintenance - Turning Theory into Practice. In [CSM92, 1992], pages 146-155.
The goal of the ReForm project is to make a system for maintaining a program by entirely performing semantic-preserving transformation on the code level (basic source is
IBM 370 Assembler). The process of the Maintainers Assistant (MA) is basically (1) restructure the old code, (2) transform the restructured code to a high-level requirement
specification, and (3) identify appropriate abstractions. The article presents the results
after using the MA in a series of case studies, where the largest program restructured
was 4500 lines of assembler. The original source is transformed into the internal language, WSL, which is the heart of the MA. The WSL is mathematically based, so proving
the equality of two program structures is proving the equality of two formulas. The MA
has a catalogue of 500 semantic-preserving transformations (i.e. CISC), including a
small set of simple generative transformations that can be freely combined in sequence
(i.e. RISC). The transformation rules can be classified as either (i) pattern-matching and
replacement, (ii) algorithmic transformations such as removing a dummy loop, or (iii)
hybrid transformations (a combination of the two former). The MA is typically used for
improving the quality of a program, and thereby making the program more maintainable, thus making the MA principally a tool for reverse engineering. However, the MA
also supports the transformation from specification to source, thereby allowing the
maintainers first to restructure and abstract, then make changes on an abstract level,
and then transforming back to code (i.e. implement). The use of the MA (a prototype
implemented under X11/Motif) so far indicates great support in the restructuring of programs by maintainers not familiar with the code. The article points out that it is essential
that the tool is interactive, if only automatic local minimums in metrics used are
reached, if interactive global minimums are found.
[Bennett, 1995] Bennett, K. (1995). Legacy Systems: Coping with Success. IEEE Software,
12(1):19-23.
[Bergersen, 1990] Bergersen, L. (1990). Prosjektadministrasjon i systemutvikling. Aktiviteter i
planleggingsfasen som påvirker suksess. (In Norwegian). PhD thesis, ORAL, NTH,
Trondheim, Norway, 1990.
[Biggerstaff et al., 1993] Biggerstaff, T. J., Mitbander, B. G., and Webster, D. (1993). The
Concept Assignment Problem in Program Understanding. In Proceedings of the 15th International Conference on Software Engineering, pp. 482-498. Baltimore, Md., May


1993. Also in Proceedings of the Workshop on Reverse Engineering, pages 27-43.


The concept assignment problem is that of finding pieces of code that implement a functionality. Referenced in [Lakhotia, 1993].
[Blum, 1995] Blum, B. I. (1995). Resolving the Software Maintenance Paradox. Journal of
Software Maintenance: Research and Practice, 7:3-26. Supports my view on using existing knowledge for software maintenance.
[Boehm et al., 1976] Boehm, B.W., Brown, J., and Lipow, M. (1976). Quantitative Evaluation
of Software Quality. In Proc. of 2nd International Conference on Software Engineering,
pp. 592-605, San Francisco, USA, October 13-15, 1976.
[Boldyreff et al., 1995] Boldyreff, C., Burd, E. L., Hather, R. M., Mortimer, R. E., Munro, M.,
and Younger, E. J. (1995). The AMES Approach to Application Understanding: A Case
Study. In (Submitted to) International Conference on Software Maintenance 1995.
[Borison, 1986] Borison, E. (1986). A Model of Software Manufacture. In Reidar Conradi
et.al., editors, Proceedings of the International Workshop on Advanced Programming
Environments, Trondheim, Norway, June 16-18, LNCS no. 244, Springer-Verlag, Berlin, pp. 197-220.
[Boyle, 1970] Boyle, R.P. (1970). Path Analysis and Ordinal Data. American Journal of Sociology, 75:461-480, 1970.
[Brade et al., 1994] Brade, K. and Guzdial, M. and Steckel, M. and Soloway, E. (1996) Whorf:
A Hypertext Tool for Software Maintenance. In International Journal of Software Engineering and Knowledge Engineering, Vol 4(1):1-16, 1994.
[Brantley and Osajima, 1975] Brantley, C. L. and Osajima, Y. R. (1975). Continuing Development of Centrally Developed and Maintained Software Systems. In IEEE Computer
Society Conference Proceedings, IEEE, Spring, 1975.
[Brase and Brase, 1983] Brase, C. H. and Brase, C. P. (1983). Understandable Statistics
Concept and Methods. D. C. Heath and Company.
[Briand et al., 1995a] Briand, L., Emam, K. E., and Morasca, S. (1995a). On the Application of
Measurement Theory in Software Engineering. Technical Report + foil copies ISERN95-04, International Software Engineering Research Network.
[Brice and Connell, 1984] Brice, L. and Connell, J. (1984). System information database: An
automated maintenance aid. In Proceedings of the 1984 National Computer Conference,
Volume 53, pages 209-216.
Summer job students have used a relational data base to store data about software systems. These data have previously only been available through manual retrieval in a paper-based library. The maintainers in the company 100% supported the work, and found
several advantages to having automated access to the systems data. Several usages of
the data that previously could not be achieved was realized during the test-out of the database project.
[Brooks Jr., 1987] Brooks Jr., F. P. (1987). No Silver Bullet: Essence and Accidents of Software Engineering. IEEE Computer, pages 10-19.
The article was first published in Information Processing 86, ISBN No. 0-444-700773, H.-J. Kugler (ed) Elsevier Science Publishers B. V. (North-Holland), IFIP 1986.


[Brooks, 1983] Brooks, R. (1983). Towards a Theory of the Comprehension of Computer Programs. International Journal of Man-Machine Studies, 18:543-554.
Does not have this one. (Cited in [Lakhotia, 1993]). Proposes a theory on program
comprehension whose major points are summarized as: (1) The programming process
is one of constructing mappings from a problem domain, possibly through several intermediate domains, into the programming domain. (2) Comprehending a program involves reconstructing part or all of these mappings. (3) This reconstruction process is
expectation driven by the creation, confirmation, and refinement of hypotheses.
[Bush, 1988] Bush, E. (1988). A CASE for Existing Systems. Language Technology White Paper, Salem, MA, p. 4.
[Caulcutt, 1983] Caulcutt, R. (1983). Statistics in Research and Development. Chapman and
Hall.
[Chapin, 1985] N. Chapin. (1985). "Software Maintenance: A Different View". In Proc. of National Computer Conference, pp. 507-513.
[Chidamber and Kemerer, 1994] Chidamber, S. R. and Kemerer, C. F. (1994). A Metrics Suite
for Object-Oriented Design. IEEE Transactions on Software Engineering, 20(6):476-493.
Defines six static metrics to measure the complexity in the design of classes. The metrics
are specifically designed to measure the three non-implementation steps in Boochs definition of OOD. It should be easy to generalize on the usage of these metrics. The metrics
are evaluated according to Weyukers evaluation criteria for metrics, and are applied
on class libraries from two independent sites. The results are not quantitatively tested,
this is future research. However, the application of the metrics proved to support some
viewpoints of senior designers in the two sites.
[Chikofsky and Cross II, 1990] Chikofsky, E. J. and Cross II, J. H. (1990). Reverse Engineering and Design Recovery: A Taxonomy. IEEE Software, 7(1):13-17. Also in [Arnold,
1993b], pp. 54-58.
[Conradi and Westfechtel, 1996] R. Conradi and B. Westfechtel, 1996. Version Models for
Software Configuration Management. Technical Report, Norwegian University of Science and Technology, October 16, 1996, 54 pages.
[Cooprider, 1979] Cooprider, L. W. (1979). The Representation of Families of Software Systems. Ph.D. thesis, Carnegie-Mellon University, Computer Science Department, April.
[Cowie, 1989] Cowie, A. P., editor (1989). Oxford Advanced Learners Dictionary of Current
English. Oxford University Press, 4 edition.
[Cross, 1990] Cross, J. (1990). Grasp/Ada uses control structure. IEEE Software, 7(5):62
[CSM NEWSLETTER, 1992] CSM NEWSLETTER (1992). Centre for Software Maintenance Ltd., Association Newsletter. Distributed to members, Centre for Software Maintenance Ltd., Mountjoy Research Centre, Stockton Road, Durham, DH1 3SW, England.
[CSM90, 1990] CSM90 (1990). Thomas M. Pigoski (ed.) Conference on Software Maintenance, San Diego, California, November 26-29, Los Alamitos, California. IEEE Technical Committee on Software Engineering, IEEE Computer Society Press.
[CSM92, 1992] CSM92 (1992). Marc Kellner (ed.) Conference on Software Maintenance, Or-


lando, Florida, November 9-12, Los Alamitos, California. IEEE Technical Committee
on Software Engineering, IEEE Computer Society Press.
[Daly et al., 1994b] Daly, J., Brooks, A., Miller, J., Roper, M., and Wood, M. (1994b). Verification of Results in Software Maintenance Through External Replication. In Muller, H.
and Georges, M., editors, Proc. of Intl Conference on Software Maintenance, Victoria
B.C., September 1994, pages 50-57. IEEE Computer Society Press.
[DeRemer and Kron, 1976] DeRemer, F., and Kron, H. H. (1976). Programming-in-the-large
Versus Programming-in-the-small, IEEE Transactions on Software Engineering, SE-2(2), June 1976, pp. 80-86.
[Dekleva, 1992] Dekleva, S. (1992). Delphi Study of Software Maintenance Problems. In
[CSM92, 1992], pages 10-17.
[Dekleva, 1992a] Dekleva, S, M. (1992). Software Maintenance: 1990 Status. In Journal of
Software Maintenance, 4:233-247, 1992.
[Devanbu et al., 1991] Devanbu, P., Brachman, R. J., Selfridge, P. G., and Ballard, B. W.
(1991). LaSSIE: A Knowledge-Based Software Information System. Communications
of the ACM, 34(5):34-49.
Includes a personal communication reference to a man called L. Modica at AT&T
who has conducted an experiment showing that 30%-60% of maintenance cost is due to
what Modica calls the discovery task. This obviously supports the findings made in
[Fjelstad and Hamlen, 1979]. The article also contains references to work done by Soloway, reported in CACM Nov 88.
[Ditri et al., 1971] Ditri, A. E., Shaw, I. C., and Atkins, W., editors. (1971) Managing the EDP
Function. McGraw-Hill, 1971.
[EPSOM, 1991] EPSOM (1991). Matra Marconi Space France and Cap Gemini Innovation
and EDF: State of the Art in Software Maintenance. Deliverable D1.1 v1.0, ESF - Eureka Software Factory, EPSOM project, 1991.
The report identifies and describes some of the underlying problems in software maintenance. The a comparison of models software development and maintenance shows
that the models for maintenance is significantly more immature than those for development. An interesting model for software maintenance is the request driven model of
Bennett et. al. described in Reverse engineering handbook, 1990". Activities and tools
for managing maintenance is then presented. The main theme left is then an in-depth
walk-through of problems and supporting tools on the technical side of software maintenance. Good references are included, among them are references for program comprehension, advanced AI tools for supporting maintenance activities, and impact analysis.
[EPSOM, 1992] EPSOM (1992). Matra Marconi Space France and Cap Gemini Innovation:
Identification of maintenance activities. Deliverable D2.1 v2.0, ESF - Eureka Software
Factory, EPSOM project, 1991.
This report identifies triggers, the form of the triggers, and categories of maintenance.
The categories identified are (i) user support, (i) corrective (not modifying requirements), (iii) evolutive (new functionalities), (iv) perfective (improve non-functional requirements), (v) adaptive (adapt to new platform or environment), (vi) preventive (to improve the systems maintainability), and (vii) anticipative (i.e. analyses to foresee future


(i) and (vi) activity). Note: The report identifies that the problem analysis phases are
very specific to the type of maintenance performed, while the change process phases are
quite generic. A generic V life cycle for software maintenance is presented with the
phases Trigger -> Problem understanding -> Localization -> Solution analysis -> Impact analysis -> Implementation (special V cycle here) -> Regression testing -> Acceptance -> Re-insertion. This process is instantiated in the maintenance environment at a
French company, Matra, operating in the space domain. The instantiation process is
documented, with the main steps being (i) identify the actors, (ii) define the distribution
of tasks, and (iii) establish information flows. The different actors, roles, and communication reports are described. The report further contains an analysis of the experiences
of the instantiation process, and goes on with identifying tools that would be useful for
supporting the maintenance process, as defined in the report.
[Ellis et al., 1991] Ellis, C.A., Gibbs, S.J., Rein, G.L. (1991). Groupware - Some Issues and Experience. In Communications of the ACM, 34(1):39-58, January, 1991.
[Elshoff, 1976] Elshoff, I. L. (1976). An Analysis of Some Commercial PL/1 Programs. In
IEEE Transactions on Software Engineering, 2(2), June 1976.
[Estublier, 1985] Estublier, J. (1985) A configuration manager: The Adele data base of programs. In Workshop on Software Engineering Environments for Programming-in-theLarge, Harwichport, Massachusetts, June 1985, pp. 140-147.
[Estublier and Casallas, 1994] Estublier, J., and Casallas, R. (1994). The Adele Configuration
Manager. In W. F. Tichy, Configuration Management, John Wiley & Sons Ltd., Chichester, 1994, ISBN 0-471-94245-6, pp. 99-133.
[Feldman, 1979] Feldman, S. I. (1979). Make, a Program for Maintaining Computer Programs. In Software - Practice and Experience, 9(4), April 1979, pp. 255-265.
[Fenton, 1991] Fenton, N. E. (1991). Software Metrics A Rigorous Approach. Chapman &
Hall.
Presents a framework for doing measurement on software. Gives a sound theoretical
basis for measuring based on measurement theory. Contains a vast of examples of different approaches to measuring, and a good survey to internal product metrics. Presents
a general graph-theoretic way of expressing the internal complexity of software.
[Fjelstad and Hamlen, 1979] Fjelstad, R. K. and Hamlen, W. T. (1979). Application Program
Maintenance Study - Report to Our Respondents. Technical report, IBM Corporation,
DP Marketing Group. Also in Proceedings of GUIDE 48, The Guide Corporation, Philadelphia.
Contains a very interesting result on the distribution of effort in the processing of a
modification request. The study shows that 25% of the efforts are related to actually implementing the change, verification accounts for 28%, while understanding the software
accounts for 47%!
[Flener, 1995] Flener, Pierre. (1995). Logic Program Synthesis From Incomplete Information.
Kluwer Academic Publishers, ISBN 0-7923-9532-8.
[Floch and Gulla, 1995] J. Floch and B. Gulla (1995). Enabling Reuse With a Configuration
Language. In Proc. of Fourth International Conference on Software Reuse, pp. 10, 1995.
[Foffani, 1992] Foffani, F. (1992). Survey on Quality in Software Maintenance in Italy. In


[CSM NEWSLETTER, 1992].


Survey shows that the two major problems in software maintenance are the continual
change/enhancement request and quality of documentation for the systems that are
maintained. The survey only includes data from 9 Italian organizations, according to
John Krogstie.
[Gallagher, 1990] Gallagher, K. (1990). Surgeons Assistant limits side effects. IEEE Software, 7(5):64.
[Grek, 1991] Grek, M. (1991). Tools and Methods to Support Software Maintenance. In Proceedings of the 5th Software Maintenance Workshop, Durham, England.
[Guildford, 1978] Guildford, J.P. (1978) Fundamental Statistics in Psychology and Education.
McGraw-Hill, 1978.
[Guimarães, 1983] Guimarães, T. (1983). Managing Application Program Maintenance Expenditures. Communications of the ACM, 26(10):739-746.
[Gulla and Gorman, 1995] Gulla, B., and Gorman, J. (1995). Supporting Evolution of SDLBased Systems: Industrial Experience, In G. v. Bochman et al. (editor), Proceedings of
the 8th International IFIP Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, Montreal, Quebec, October 17-20, 1995, 15
pages. IFIP WG 6.1 Chapman & Hall, London.
[Gunderman, 1973] Gunderman, R. E. (1973). A Glimpse into Program Maintenance. In Datamation, June 1973.
[Hall and Zweben, 1986] Hall, W. E.. and Zweben, S. H. (1986). The Cloze Procedure and
Software Comprehensibility Measurement. In IEEE Transactions on Software Engineering, SE-12(5):608-623.
[Harandi and Ning, 1990] Harandi, M. T. and Ning, J. Q. (1990). Knowledge-Based Program
Analysis. IEEE Software, 7(1):7481. Also in [Arnold, 1993b], pp. 651658.
[Harband, 1990] Harband, J. (1990). Seela aids maintenance with code-block focus. IEEE Software, 7(5):61.
[Harel, 1992] Harel, D. (1992). Biting the Silver Bullet: Toward a Brighter Future for System
Development. IEEE Computer, pages 8-20.
[Hayward and Sparkes, 1982] Hayward, A. L. and Sparkes, J. J., editors (1982). The Concise
English Dictionary. Omega Books.
[Henne, 1992] Henne, A. (1992). Information Systems Maintenance - Problems or
Opportunities? In Proc. of Norsk Informatikk Konferanse, 1992 (NIK92)., pp. 91-104,
Tromsø, Norway, 1992.
[Hersleb et al., 1994] Hersleb, J. et al. (1994). Benefits of CMM-Based Software-Process Improvement Initial Results. CMU/SEI-94-TR-13, ESC-TR-94-013, August 1994, Software Eng. Inst., Carnegie Mellon University, Pittsburg. p.15.
[Hogg and Ledolter, 1992] Hogg, R. V. and Ledolter, J. (1992). Applied Statistics for Engineers and Physical Scientists. Maxwell Maxmillan International Editions.
[Horowitz and Williamson, 1986a] Horowitz, E. and Williamson, R. C. (1986). SODOS: A
Software Documentation Environment - Its Definition. In IEEE Transactions on Soft-


ware Engineering, SE-12(8):849-859.


[Horowitz and Williamson, 1986b] Horowitz, E. and Williamson, R. C. (1986). SODOS: A
Software Documentation Environment - Its Use. In IEEE Transactions on Software Engineering, SE-12(11):1076-1087.
[Hoskyns, 1973] Hoskyns, J. (1973). Implications of Using Modular Programming. Technical
Report Guide No. 1, Hoskyns Systems Research, London, England, 1973.
[Humphrey, 1989] Humphrey, W. S. (1989). Managing the Software Process. Addison Wesley.
[IEEE 610, 1990] IEEE 610 (1990). IEEE Std. 610.12.1990, Glossary of Software Engineering
Terminology, Software Engineering Standards. IEEE.
[Jørgensen and Maus, 1993] Jørgensen, M. and Maus, A. (1993). A Case Study of Software
Maintenance Tasks. In Haveraaen, M., editor, Norsk Informatikk Konferanse, NIK'93,
pages 101-112. Norsk Informatikkråd.
The authors have conducted personal interviews with maintainers pre and post of 124
maintenance tasks at the Norwegian Telecom Research. The paper supports the previous findings on maintenance activity distribution, but corrective maintenance seems to
previously have been overemphasized. 16 hypotheses regarding indicators for maintenance productivity were tested and the findings are described in the paper. A surprising
finding was that (in this sample) inexperienced maintainers had higher productivity
than experienced maintainers, measured in lines of code.
On page 106, there is an interesting discussion about the use of different information
sources for maintenance work. The study revealed that informal expert knowledge and
oral communication were the most valuable resources, and systems documentation (relatively) seldom were consulted (and also very seldom updated after end of maintenance
task).
[Jørgensen, 1994] Jørgensen, M. (1994). Empirical Studies of Software Maintenance. Ph.D.
thesis, Department of Informatics, University of Oslo.
[Karakostas, 1990] Karakostas, V. (1990). The Use of Application Domain Knowledge for Effective Software Maintenance. In [CSM90, 1990], pages 170-176.
[Keppel, 1991] Keppel, G. (1991). Design and Analysis A Researchers Handbook. Prentice
Hall.
[Kish, 1987] Kish, L. (1987). Statistical Design for Research. Wiley Series in Probability and
Mathematical Statistics. Wiley.
[Krogstie, 1994b] Krogstie, J. (1994b). Software Maintenance in Norway: A Survey Investigation. In [Muller and Georges, 1994], pages 304313.
[Krogstie, 1994c] Krogstie, J. (1994c). Survey Investigation: Development and Maintenance
of Information Systems: Version 1. Technical report, Norwegian Institute of Technology, Department of Computer Science and Telematics, Trondheim.
Final report on a survey on maintenance involving 52 Norwegian information systems
departments. The survey is in the spirit of Lientz and Swanson [Lientz and Swanson,
1980].
[Labovitz, 1970] Labovitz, S. (1970). The Assignment of Numbers to Rank Order Categories,


American Sociological Review, 35:515-524, 1970.


[Laitinen, 1995] Laitinen, K. (1995). Natural Naming in Software Development and Maintenance. PhD thesis, University of Oulu, Finland, October 1995.
[Lakhotia, 1993] Lakhotia, A. (1993). Understanding Someone Else's Code: Analysis of Experiences. Journal of Systems and Software, 23:269-275.
This is an interesting (though informal) article on what need to be known in order to
change somebody elses code. The authors first assumption was that the whole system
needed to be understood in order to be changed. A (commercial employed) friend of him
gave him the idea that only the part that needs to be changed needs to be understood,
but difficult to find this particular part. Two case studies on how to understand code
was carried out in student projects, and the author felt that he (as an expert) first made
up an hypothesis on what to look for and then scan through the code. The students, being
at an interim level more often sat down with the source listing and tried to understand
bottom-up, without having hypotheses for what to look for first. The author found support for his findings in literature date 10 years back, where a similar hypothesis on program comprehension was defined by Brooks [Brooks, 1983].. Also a reference to
Biggerstaff [Biggerstaff et al., 1993] on how to locate pieces of code that implements a
given functionality.
[Layzell et al., 1993] Layzell, P.J., Champion, R., and Freeman, M.J. (1993). DOCKET: Program Comprehension-in-the-Large. In Proc. 2n Workshop on Program Comprehension,
Jul 8-9, 1993, Capri, Italy, pp. 140-148.
[Layzell and Macaulay, 1994] Layzell, P. J. and Macaulay, L. A. (1994). An Investigation into
Software Maintenance Perception and Practices. Software Maintenance: Research
and Practice, 6:105-120.
[Layzell et al., 1995] Layzell, P. J., Freeman, M. J., and Benedusi, P. (1995). Improving Reverse-engineering through the Use of Multiple Knowledge Sources. Journal of Software Maintenance: Research and Practice, 7:279-299.
[Leblang, 1994] Leblang, D. L. (1994). The CM Challenge: Configuration Management that
Works. In W. F. Tichy, Configuration Management, John Wiley & Sons Ltd., Chichester, 1994, ISBN 0-471-94245-6, pp. 1-37.
[Lehman and Belady, 1985] Lehman, M. M. and Belady, L. A. (1985). Program Evolution: Processes of Software Change. A.P.I.C. Studies in Data Processing. Academic Press, London.
[Lehman, 1994] Lehman, M. M. (1994). Software Evolution. In Volume 2 of Encyclopedia of
Software Engineering, pp. 1202-1208. Wiley and Co. 1994. J. Marciniak (ed.)
[Leland et al., 1988] Leland, M.D.P., Fish, R.S., Kraut, R.E. (1988). Collaborative Document Processing Using Quilt. In Proceedings of the Conference on Computer-Supported Cooperative Work (CSCW '88), pp. 206-215, Portland, Oregon, 1988.
[Li and Henry, 1995] Li, W. and Henry, S. (1995). An Empirical Study of Maintenance Activities in Two Object-Oriented Systems. Journal of Software Maintenance: Research and
Practice, 7:131-147.
[Lie et al., 1989] A. Lie, R. Conradi, T. M. Didriksen, E.-A. Karlsson, S. O. Hallsteinsen and P. Holager. Change Oriented Versioning in a Software Engineering Database. In W. F. Tichy, editor, Proceedings of the Second International Workshop on Software Configuration Management, pp. 56-65, Princeton, NJ, October 25-27, 1989. ACM SIGSOFT Software Engineering Notes 17(7), November 1989.
[Lientz and Swanson, 1980] Lientz, B. P. and Swanson, E. B. (1980). Software Maintenance
Management. Addison-Wesley, Reading MA.
The authors surveyed 487 companies to investigate and compare application software maintenance costs. The results of this analysis are presented in the book. This survey is probably the most quoted empirical investigation on software maintenance.
[Littman et al., 1987] Littman, D. C., Pinto, J., Letovsky, S., and Soloway, E. (1987). Mental
Models and Software Maintenance. The Journal of Systems and Software, 7:341-354.
NB. The references are missing, probably on pp. 355-356? Systematic strategy (focusing on data-flow traces, understanding global behavior) vs. as-needed strategy (focusing on local program behavior).
[Liu, 1976] Liu, C. (1976). A Look at Software Maintenance. In Datamation, 22(11):51-55,
November 1976.
[Lyons, 1981] Lyons, M.J. (1981). Salvaging your Software Asset (Tools-Based Maintenance).
In AFIPS Conference Proceedings, 1981 National Computer Conference, pp. 337-352,
Chicago, USA, May 4-7, 1981.
[Madhavji, 1992a] Madhavji, N. H. (1992a). A Framework for Process Maintenance. In [CSM92, 1992], pages 245-254.
[Madhavji, 1992b] Madhavji, N. H. (1992b). Environment Evolution: The Prism Model of Changes. IEEE Transactions on Software Engineering, 18(5):380-392.
A model of changes (of items in a software development environment) + two supportive
change-related environment infrastructures featuring (i) separating changes of items
vs. environment, (ii) dependency structure for identifying change ramifications, (iii)
change structure classifying and recording changes, (iv) identification of properties of
a change, (v) built-in feedback.
[Mahler, 1994] A. Mahler. (1994) "Keeping Things Together and Telling Them Apart". In
[Tichy, 1994], pp. 73-95.
[Marzullo and Wiebe, 1986] K. Marzullo and D. Wiebe. Jasmine: A Software System Modelling Facility. In P. B. Henderson, editor, Proceedings of the 2nd ACM SIGSOFT/SIGPLAN
Software Engineering Symposium on Practical Software Development Environments,
Palo Alto, CA, December 9-11, 1986. ACM SIGPLAN Notices, 22(1), January 1987, pp.
121-130.
[McCabe, 1990] McCabe, T. (1990). Battle Map, Act show code structure, complexity. IEEE
Software, 7(5):62.
[McClure, 1992] McClure, C. (1992). The Three Rs of Software Automation: Re-engineering,
Repository, Reusability. Prentice Hall.
Chapters 1 and 2 contain some interesting observations about software maintenance, and contain the first reference that I have seen to the Fjelstad and Hamlen [Fjelstad
and Hamlen, 1979] investigation about cost distribution in the processing of a modification request.


[McGarry, 1994a] McGarry, F. (1994a). Lessons From 20 Years of Experimental Software Engineering. Slide copies, presented at NTH, 26th October 1995. (Also presented as
ICSE17 tutorial.).
[McGarry, 1994c] McGarry, F. (1994c). The Software Engineering Laboratory: An Example
Experience Factory. Slide copies, presented at NTH, 26th October 1995. (Also presented as ICSE17 tutorial.).
[Moad, 1990] Moad, J. (1990). Maintaining the Competitive Edge. In Datamation, pp 61-66,
February 1990.
[Muller and Georges, 1994] Müller, H. A. and Georges, M., editors (1994). International Conference on Software Maintenance, Victoria, British Columbia, Canada, September 19-23, Los Alamitos, California. IEEE Technical Committee on Software Engineering, IEEE Computer Society Press.
[Narayanaswamy and Scacchi, 1987] Narayanaswamy, K. and Scacchi, W. (1987). Maintaining Configurations of Evolving Software Systems. IEEE Transactions on Software Engineering, SE-13(3):324-334.
[Naur and Randell, 1969] Naur, P. and Randell, B. (editors). Software Engineering: Proceedings of the NATO Conference in Garmisch-Partenkirchen, 1968. NATO Science Committee, Scientific Affairs Division, NATO, Brussels. January 1969, 231 pages.
[Nguyen and Conradi, 1996] Nguyen, M. N. and Conradi, R. (1996). Towards a Rigorous Approach for Managing Process Evolution. Submitted to European Workshop on Software
Process Technology, Nancy, France, October 1996.
[Nie et al., 1975] N.H. Nie, C.H. Hull, J.G. Jenkins, K. Steinbrenner, D.H. Bent. Statistical Package for the Social Sciences, McGraw-Hill, 1975.
[Norusis, 1992] Norusis, M. J. (1992). SPSS for Windows: Base System User's Guide, Release 5.0. SPSS Inc.
[Nosek and Palvia, 1990] T. Nosek and P. Palvia. "Software Maintenance Management: Changes in the Last Decade". In Journal of Software Maintenance: Research and Practice, Vol. 2, pp. 157-174.
[Novobilski, 1990] Novobilski, A. (1990). Objective-C Browser details class structures. IEEE
Software, 7(5):60.
[Ogden, 1972] Ogden, J.L. (1972). Designing Reliable Software. In Datamation, 21(7):71-78,
July 1972.
[Oman, 1990] Oman, P. (1990). Maintenance Tools. IEEE Software, 7(5):59. Introduction to
the descriptions of other tools presented (on the following pages) in [Novobilski, 1990],
[Rajlich, 1990], [Harband, 1990], [Cross, 1990], [McCabe, 1990], [Vanek and Davis,
1990], [Gallagher, 1990], and [Wilde, 1990].
[Osborne, 1985] Osborne, W. M. (1985). Reports on Computer Science and Technology.
Technical Report NBS Special Publication 500-130, National Bureau of Standards.
[Osterweil, 1987] Osterweil, Leon. (1987). Software Processes are Software Too. In Proc. of
the 9th International Conference on Software Engineering (ICSE-9). IEEE Computer
Society Press, pp. 2-13.
[PROTEUS, 1992] Bjørn Grønquist, Bjørn Gulla, Ian Sommerville, Eirik Tryggeseth. P-DEL-3-4-A-LAN-1.4: Requirements for the Proteus Configuration Language and Tools, pp. 45, Proteus Technical Report, November 6, 1992.
[PROTEUS, 1994b] Ian Sommerville, Reidar Conradi, Bjørn Gulla, Eirik Tryggeseth, Gilbert Rondeau. P-DEL34b: PCL-V2 Reference Manual, pp. 85, Proteus Technical Report, September 1994.
[Parnas, 1993] D. L. Parnas. (1993). "Seminar notes: Studying design practice - empirical findings and future challenges". One-day seminar at the Dept. of Informatics, University of
Trondheim, October 29, 1993.
[Parnas, 1994] Parnas, D. L. (1994). Software Aging. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 279-287.
[Paulish and Carleton, 1994] Paulish, D. J. and Carleton, A. D. (1994). Case Studies of Software Process-Improvement Measures. In IEEE Computer, pp. 50-57, September 1994.
[Pressmann, 1987] Pressman, R.S. (1987). Software Engineering - A Practitioner's Approach.
McGraw-Hill International Editions, Computer Science Series. ISBN 0-07-100232-4.
[Quilici, 1993] Quilici, A. (1993). A Hybrid Approach to Recognizing Programming Plans. In
Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 96-103.
[Quilici, 1994] Quilici, A. (1994). A Memory-Based Approach to Recognizing Programming
Plans. Communications of the ACM, 37(5):84-93.
[Rajlich, 1990] Rajlich, V. (1990). Vifor transforms code skeletons to graphs. IEEE Software,
7(5):60.
[Reutter, 1981] Reutter, J. (1981). Maintenance is a Management Problem and a Programmer's
Opportunity. In AFIPS Conference Proceedings, 1981 National Computer Conference,
pp. 343-347, Chicago, USA, May 4-7, 1981.
[Riggs, 1969] Riggs, R. (1969). Computer Systems Maintenance. In Datamation, 15:227-235, November 1969.
[Robson et al., 1991] Robson, D. J., Bennett, K. H., Cornelius, B. J., and Munro, M. (1991). Approaches to Program Comprehension. Journal of Systems and Software, 14:79-84. Also in [Arnold, 1993b], pp. 609-614.
[Scott and Farley, 1988] Scott, T. and Farley, D. (1988). Slashing Software Maintenance
Costs. In Business Software Review, March 1988.
[Sharpe et al., 1991] Sharpe, S., Haworth, D. A., and Hale, D. (1991). Characteristics of empirical software maintenance studies: 1980-1989. Software Maintenance: Research and Practice, 3:1-15.
[Siegel and Castellan, 1988] Siegel, S. and Castellan, N.J. (1988). Nonparametric Statistics for
the Behavioral Sciences. (2nd Edition). McGraw Hill, New York, 1988.
[Soloway and Ehrlich, 1984] Soloway, E. and Ehrlich, K. (1984). Empirical Studies of Programming Knowledge. In IEEE Transactions on Software Engineering, SE-10(5):595-609.
[Sommerville and Dean, 1994] I. Sommerville and G. Dean. A configuration language for modelling evolving system architectures, 16 pages. Submitted for publication, available
through the authors at Lancaster University, England.


[Standish, 1984] Standish, T.A. (1984). An Essay on Software Reuse. In IEEE Transactions on
Software Engineering, SE-10(5), pp. 494-497, May 1984.
[Swanson, 1976] Swanson, E. B. (1976). The Dimensions of Maintenance. In Proceedings of
the Second International Conference on Software Engineering, pages 492-497.
This is the original article which divided maintenance work into the corrective, perfective and adaptive categories.
[Swanson and Beath, 1990b] Swanson, E. B. and Beath, C. M. (1990). Maintaining Information Systems in Organizations. Wiley Series in Information Systems. John Wiley &
Sons, 1989.
[Thomson and Sommerville, 1989] R. Thomson and I. Sommerville. An Approach to the Support of Software Evolution, IEE/BCS Computer Journal, 32(5), October 1989, pp. 386-396.
[Tichy, 1979] W. F. Tichy. Software Development Control Based on Module Interconnection, in Proceedings of the 4th International Conference on Software Engineering,
IEEE, September 1979, pp. 29-41.
[Tichy, 1985] Tichy, W. F. (1985). RCS - A System for Version Control. Software: Practice and Experience, 15(7):637-654.
[Tichy, 1994] Tichy, W. F., editor (1994). Configuration Management. Trends in Software.
John Wiley & Sons. (Bjørn Gulla has the book.)
[Tilley et al., 1992] Tilley, S. R., Müller, H. A., and Orgun, M. A. (1992). Documenting Software Systems with Views. In Proceedings of the 10th International Conference on Systems Documentation (SIGDOC '92), Ottawa, Ontario, October 13-16, 1992, pages 211-219, ACM Order Number 613920.
Reverse engineering extracts and represents structures from a software system. These
structures embody visual and spatial information that serve as organizational axes for
the exploration and presentation of the composed subsystem structures. These structures can be augmented with views: hypertext that highlights different aspects of the system in question.
The focus is on how to provide the relevant information to the relevant audience, i.e. to the casual user, the developer that thoroughly knows the system, the maintainer that barely knows the system, testers, technical writers, and project management. The focus is on visual data to guide the exploration of spatial data. Spatial data = information about the relative positions of the meaningful parts of a software structure. Visual data = supplies information about what a software structure looks like. The Rigi editor is an interactive graph editor for maintaining and viewing the structures (e.g. overviews and projections, in addition to more general graphical operations such as expanding, zooming, panning, grouping, etc.).
View documentation can be utilized for aiding management documentation, recovering lost information, and improving system comprehension. The central issue of the assumptions made for this technology is that design information is lost or out-of-date. This seems rather negative and pessimistic, especially when taking into account the increasing use of CASE tools and design environments.
[Tryggeseth et al., 1995] Tryggeseth, E., Gulla, B., and Conradi, R. (1995). Modelling Systems with Variability using the PROTEUS Configuration Language. In Proceedings of the 5th International Workshop on Software Configuration Management. Published in Jacky Estublier (ed.), Software Configuration Management: Selected Papers of the ICSE SCM-4 and SCM-5 Workshops, number 1005 in Lecture Notes in Computer Science, pp. 216-240, Springer Verlag.
[Tryggeseth, 1995b] Tryggeseth, E. (1995). Requirements specification for obligatory project assignment in course 45012 Programming Methodology: "ProKomm: A Programmer's Assistant for C++ Programming." Technical Report, IDT/NTH, 17 pages.
[Tryggeseth, 1996b] Tryggeseth, E. (1996). Report from an Experiment: Impact of Documentation on Maintenance. In First International Workshop on Empirical Studies of Software Maintenance, Monterey, CA, November 1996. Held in conjunction with the 1996
International Conference on Software Maintenance, Monterey, CA, 4-8 November,
1996.
[Turver and Munro, 1994] Turver, R. J. and Munro, M. (1994). An Early Impact Analysis
Technique for Software Maintenance. Software Maintenance: Research and Practice,
6:35-52.
This article is good, written in clear and concise language. The objective of the work is to investigate a basis for a technique for analyzing and measuring the impacts of change on a program system, not on source code alone. The paper surveys existing ripple analysis techniques. It presents a new technique for early detection of ripple effects: a graph-theoretic model of documentation and themes in a program system's documentation. Measurement techniques are used to attribute change proposals according to clearly defined rules (this is not described in detail in this article). The paper contains several references to persons that have defined request-driven process models; the authors also present their own (?) change proposal evaluation model. None have used impact analysis to control the maintenance process (?), but rather to determine the ripple effects of a change after it has been implemented. The authors present a good overview of several techniques for finding ripple effects, and the main criticism is that they do not provide sufficient detail for obtaining accurate estimates of the effect of a change. This is because most emphasis is on the source code level. Stability analysis is discussed (defined in [Yau and Collofello, 1980] as the resistance of a program to the amplification of changes in the program). A number of stability analysis measures have been proposed, concentrating on source code. The authors identify that documentation itself has valuable properties that can predict ripple effects. A problem is that there is no formal model of documentation (can OORAM help?). Several observations about documentation help the authors to model the thematic structure of documentation. A ripple propagation graph (RPG) deduced from the documentation observations forms the basis for their work. Pro: can be used early in development and maintenance, does not focus on source code. The RPG captures the hierarchy of documentation and the thematic structure (has-theme, co-occurs). The expressibility seems to be rather limited regarding semantic content; only general relations. The names of the themes are subjective, and a good classification system is needed. The RPG is described using set notation. A change set is defined as the set of themes where each element is described in a change proposal. The affected documentation parts are identified by a ripple effect algorithm based on a graph slice operation.
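As a purely illustrative sketch of the idea (the documentation parts, themes and structure below are invented for this note, not taken from the paper), the ripple effect can be pictured as a set-based computation over theme-labelled documentation parts:

    # Documentation part -> set of themes it covers (invented example data).
    doc_themes = {
        "Requirements 3.2": {"billing", "tax"},
        "Design 5.1":       {"billing"},
        "User manual 7":    {"invoicing"},
        "Test plan 4":      {"billing", "invoicing"},
    }
    co_occurs = {("billing", "invoicing")}   # thematic relation between themes

    def related(theme):
        # Themes reachable through the co-occurs relation (one step, for simplicity).
        out = {theme}
        for a, b in co_occurs:
            if theme in (a, b):
                out |= {a, b}
        return out

    def affected_parts(change_set):
        # Graph-slice-like operation: parts whose themes intersect the expanded change set.
        expanded = set()
        for t in change_set:
            expanded |= related(t)
        return {part for part, themes in doc_themes.items() if themes & expanded}

    # A change proposal whose change set is {"billing"} ripples to all four parts here.
    print(sorted(affected_parts({"billing"})))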
[Undheim, 1985] Undheim, J. O. (1985). Innføring i statistikk for samfunnsvitenskapelige fag (in Norwegian). Universitetsforlagets metodebibliotek.


[Vanek and Davis, 1990] Vanek, L. and Davis, L. (1990). Expert Dataflow and Static Analysis
tool. IEEE Software, 7(5):63.
[Von Mayrhauser and Vans, 1994] Von Mayrhauser, A. and Vans, A. M. (1994). Comprehension Processes During Large Scale Maintenance. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 39-48.
[Webster, 1986] Webster (1986). The Merriam-Webster Dictionary. New York: Pocket Books.
[Weiser, 1984] Weiser, M. (1984). Program Slicing. IEEE Transactions on Software Engineering, SE-10(4):352-357.
Given a statement s in a program P, we want to find a smaller program that behaves exactly like P and stops at s in the same state as P would. This is called a program slice. In particular, we want to keep in the slice all variables (assignments and uses) that would impact the value of a given variable set just before s. This variable set and s together are called the slicing criterion. By utilizing the program's flow and control graphs, slices can be found automatically. Program slices are especially useful for debugging and maintenance. (Alternative description of a program slice: Program slicing is a technique for restricting the behavior of a program to some specified subset of interest. A slice S(v,n) (of program P) on variable v, or set of variables, at statement n yields the portions of the program that contributed to the value of v just before statement n is executed.)
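To make the slicing criterion concrete, here is a small invented fragment (not taken from the paper) together with its slice on one variable:

    # Hypothetical program P; the slicing criterion is (v = total, n = the print(total) statement).
    n = 10
    total = 0
    product = 1
    for i in range(1, n + 1):
        total = total + i       # contributes to total -> kept in the slice
        product = product * i   # cannot affect total  -> removed from the slice
    print(total)                # statement n of the criterion

    # The slice S(total, n) is the smaller program:
    #   n = 10
    #   total = 0
    #   for i in range(1, n + 1):
    #       total = total + i
    #   print(total)
    # It reaches the print statement with total holding the same value as the original program.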
[Weisweber, 1994] Weisweber, Wilhelm. (1994). The Experimental MT System of the Project
KIT-FAST. In Proceedings of the International Conference: Machine Translation: Ten
Years On, Cranfield, UK, pp. 12.1-12.19.
[Whitgift, 1991] D. Whitgift. Methods and Tools for Software Configuration Management,
John Wiley & Sons Ltd., Chichester, 1991. ISBN 0-471-92940-9
[Wilde, 1990] Wilde, N. (1990). Dependency Analysis Tool Set prototype. IEEE Software,
7(5):65.
[Wilde and Huitt, 1992] Wilde, N. and Huitt, R. (1992). Maintenance Support for Object-Oriented Programs. In IEEE Transactions on Software Engineering, 18(12):1038-1044,
December, 1992.
[Yau and Collofello, 1980] Yau, S. S. and Collofello, J. S. (1980). Some Stability Measures in Software Maintenance. IEEE Transactions on Software Engineering, 6(6):545-552.
Do not have this. Other references to work by Yau and Collofello are in [Turver and Munro, 1994].
[Zelkowitz, 1978] Zelkowitz, M. V. (1978). Perspectives on Software Engineering. In Computing Surveys, 10(2), June 1978.
[Zvegintzov, 1994a] Zvegintzov, N. (1994a). Software Management Technology Reference
Guide. Software Maintenance News, Inc., B10 - Suite 237, 4546 El Camino Real, Los
Altos CA 94022 USA, 1994 edition, release 5.1 edition.
See also tutorial notes from ICSM 1994, [Zvegintzov, 1994b].
[Zvegintzov, 1994b] Zvegintzov, N. (1994b). The Technology of Maintenance and Reengineering. Tutorial Notes at International Conference on Software Maintenance, 1994,
Victoria B.C., September 19.


In addition to a summary of the tutorial slides, the material contains a copy of Zvegintzov's book [Zvegintzov, 1994a].
[Aamot et. al., 1994] Per Axel Aamot, Trygve Røste, Knut Langeggen, Per Arne Vollan, Arnvid Hellebust, and Tore Berg. (1994). "HyperMaint". Project report, in course 45075
Software Program Systems, November 1994.


G.3 Other references


[Abd-El-Hafiz, 1996] Abd-El-Hafiz, S. (1996). Evaluation of a Knowledge-Based Approach to
Program Understanding. In [ICSM 1996], pp. 275-284.
[Abdel-Hamid, 1993] Abdel-Hamid, T. K. (1993). Adapting, Correcting and Perfecting Software Estimates: A Maintenance Metaphor. IEEE Computer, pages 20-29.
[Abdel-Hamid et al., 1993] Abdel-Hamid, T. K., Sengupta, K., and Brown, D. (1993). Software Project Control: An Experimental Investigation of Judgment with Fallible Information. IEEE Transactions on Software Engineering, 19(6):603-612.
[Abran and Desharnais, 1995] Abran, A. and Desharnais, J.-M. (1995). Measurement of Functional Reuse in Maintenance. Journal of Software Maintenance: Research and Practice, 7:263-277.
[Abran and Robillard, 1993] Abran, A. and Robillard, P. N. (1993). Reliability of Function Points Productivity Model for Enhancement Projects (A Field Study). In [CSM93, 1993], pages 80-87.
[Al-Merri and McGregor, 1992] Al-Merri, J. and McGregor, D. R. (1992). Document Retrieval Using Signature Files. Technical report, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
Abstract: The signature file method has proved to be a convenient method for indexing
documents, suitable for many application areas, and in particular for office information
systems. Three signature file methods have been implemented and tested in a text-based
document retrieval system. Their performance is evaluated and discussed.
[Al-Zoubi and Prakash, 1995] Al-Zoubi, R. and Prakash, A. (1995). Program View Generation and Change Analysis Using Attributed Dependency Graphs. Journal of Software Maintenance: Research and Practice, 7:239-261.
[Alsaker, 1992] Alsaker, F. D. (1992). Modelling Quantitative Developmental Change. In Asendorpf, J. and Valsiner, J., editors, Framing stability and change: An investigation into methodological reasoning, pages 88-109. Sage, Newbury Park, California.
[Ambriola et al., 1994] Ambriola, V., Meglio, R. D., Gervasi, V., and Mercurio, B. (1994). Applying a Metric Framework to the Software Process: an Experiment. In Warboys, B. C., editor, Lecture Notes in Computer Science, no. 772: Software Process Technology - Third European Workshop, EWSPT '94, pages 207-226. Springer-Verlag.
[Antonini et al., 1987] Antonini, P., Benedusi, P., Cantone, G., and Cimitile, A. (1987). Maintenance and Reverse Engineering: Low-Level Design Documents Production and Improvement. In Proceedings of the 1987 IEEE Conference on Software Maintenance, pages 91-101.
Describes a system which can produce Jackson or Warnier/Orr diagrams which are totally consistent with the code. I.e. a reverse-engineering approach.
[Arango, 1988] Arango, G. (1988). Domain Engineering for Software Reuse. Ph.D. thesis, University of California, Irvine. Arango's Ph.D. thesis.
[Arngrimsson and Vesterager, 1991] Arngrimsson, G. and Vesterager, J. (1991). Overblik over standarden ISO-Step (in Danish). Technical report, Driftsteknisk institutt, Danmarks Tekniske Højskole, 2800 Lyngby.


Got this one from Terje Totland.
[Arnold, 1989] Arnold, R. S. (1989). Software Restructuring. Proceedings of the IEEE, 77(4):607-617. Also in [Arnold, 1993b], pp. 348-358.
[Arnold and Bohner, 1993] Arnold, R. S. and Bohner, S. A. (1993). Impact Analysis - Towards a Framework for Comparison. In [CSM93, 1993], pages 292-301.
Presents a framework for characterizing impact analysis (IA) approaches so as to be able to understand, compare and assess different IA approaches. The framework consists of three parts: (1) IA application: examines how the approach is actually used to perform IA (elements: artifact object model, decomposition, change specification, results specification, interpretation, and other); (2) IA parts: functional parts of the IA approach, what/how the IA does, duties of agents or tools (elements: interface object model, internal object model, impact model, tracing/impact approach, decomposition, repository, load/modify/browse); and (3) IA effectiveness: how well the IA approach accomplishes IA (elements: starting impact set (SIS), estimated impact set (EIS), actual impact set (AIS), system, relations, and several relations among the sets). The authors identify Madhavji's Prism model [Madhavji, 1992b] as being related to their work, but this focuses mostly on the change process, while the work presented in this article focuses on the IA technology. Five IA approaches are assessed as an exercise to validate the usability of the characterization framework. The authors hope that this framework may help other researchers and users of IA technology to assess what is claimed by others and to compare the usability of different IA approaches.
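A small sketch of how the effectiveness part can be read (the sets and the ratio names below are invented for this note, not taken from the paper): an approach is judged by how its estimated impact set relates to the set of items actually affected by the change.

    # Hypothetical impact sets for one change request.
    SIS = {"billing.c"}                               # starting impact set given by the maintainer
    EIS = {"billing.c", "tax.c", "report.c"}          # estimated impact set produced by the IA approach
    AIS = {"billing.c", "tax.c", "invoice.c"}         # actual impact set after the change was made

    missed = AIS - EIS           # under-estimation: impacts the approach failed to predict
    spurious = EIS - AIS         # over-estimation: predicted impacts that never materialized
    precision = len(EIS & AIS) / len(EIS)
    recall = len(EIS & AIS) / len(AIS)
    print(missed, spurious, round(precision, 2), round(recall, 2))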
[Arnold et al., 1993] Arnold, R. S., Slovin, M., and Wilde, N. (1993). Do Design Records Really Benefit Software Maintenance? In [CSM93, 1993], pages 234-243.
[Arthur et al., 1993] Arthur, J. D., Nance, R. E., and Balci, O. (1993). Establishing Software Development Process Control: Technical Objectives, Operational Requirements and the Foundational Framework. Journal of Systems and Software, 22:117-128.
[Arthur, 1988] Arthur, L. J. (1988). Software Evolution: The Software Maintenance Challenge. John Wiley & Sons, New York. The book gets a very good review in IEEE Computer by Andrew Marmorstein [Marmorstein, 1989].
[Authors, 1994] Authors, I. (1994). How to Measure Software Quality. Excerpt from discussion at Internet newsgroup comp.software.testing.
[authors, 1994] authors, V. (1994). Kvalitetsledelse - Hva, hvorfor og hvordan (in Norwegian). A kind of compendium, got it from Tor Stålhane.
[Baker and Eick, 1994] Baker, M. J. and Eick, S. G. (1994). Visualizing Software Systems. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 59-67.
[Banker et al., 1993] Banker, R. D., Datar, S. M., Kemerer, C. F., and Zweig, D. (1993). Software Complexity and Maintenance Costs. Communications of the ACM, 36(11):81-94.
[Banker et al., 1994] Banker, R. D., Kauffman, R. J., Wright, C., and Zweig, D. (1994). Automating Output Size and Reuse Metrics in a Repository-Based Computer-Aided Software Engineering (CASE) Environment. IEEE Transactions on Software Engineering, 20(3):169-187.
[Barker, 1990a] Barker, T. B. (1990a). Engineering Quality by Design: Interpreting the Taguchi Approach. Statistics; 113. Marcel Dekker.


[Barker, 1990b] Barker, T. B. (1990b). Taking Aim at Noise - An Arsenal of Experimental Tools, chapter 4, pages 43-69. Marcel Dekker. A chapter in [Barker, 1990a].
[Bartussek and Parnas, 1977] Bartussek, A. W. and Parnas, D. L. (1977). Using Traces to
Write Abstract Specifications for Software Modules. Technical Report UNC 77-012,
University of North Carolina at Chapel Hill.
Trace specifications make assertions about sequences of procedure calls. As with state-machine specifications, they define value-returning procedures and side-effect-producing procedures; they allow value-returning procedures to produce side effects. They provide assertions about the values returned by the V-procedures. They also give assertions about legal sequences of procedure calls, and equivalences between sequences. PS. I do not have this report. Other reports about the same topic are [McClean, 1984] and [Hoffman and Snodgrass, 1986].
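As an executable paraphrase of the idea (the stack module and the checking code are invented here; Bartussek and Parnas state such properties as algebraic assertions, not as tests), a trace specification says which call sequences are legal, which traces are equivalent, and what value a V-procedure returns after a trace:

    class Stack:
        def __init__(self):
            self.items = []
        def PUSH(self, x):
            self.items.append(x)
            return self
        def POP(self):
            assert self.items, "illegal trace: POP on an empty stack"
            self.items.pop()
            return self
        def TOP(self):                    # value-returning V-procedure
            assert self.items, "illegal trace: TOP on an empty stack"
            return self.items[-1]

    # Assertion about a returned value: T.PUSH(a).TOP() == a
    assert Stack().PUSH(1).PUSH(7).TOP() == 7
    # Equivalence between traces: T.PUSH(a).POP() is equivalent to T
    assert Stack().PUSH(1).PUSH(7).POP().items == Stack().PUSH(1).items
    # Legality: the empty trace followed by POP is not a legal trace
    try:
        Stack().POP()
    except AssertionError:
        pass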
[Basili and Perricone, 1984] Basili, V. R. and Perricone, T. (1984). Software Errors and Complexity: An Empirical Investigation. Communications of the ACM, 27(1):42-52.
[Basili and Weiss, 1984] Basili, V. R. and Weiss, D. M. (1984). A Methodology for Collecting Valid Software Engineering Data. In IEEE Transactions on Software Engineering, SE-10(6):728-738.
[Basili, 1990] Basili, V. R. (1990). Viewing Maintenance as Reuse-Oriented Software Development. IEEE Software, pages 19-25.
[Basili and Musa, 1991] Basili, V. R. and Musa, J. D. (1991). The Future Engineering of Software: A Management Perspective. IEEE Computer, pages 90-96.
[Basili, 1993] Basili, V. R. (1993). The Experience Factory and its Relationship to Other Improvement Paradigms. In Sommerville, I. and Paul, M., editors, 4th European Software
Engineering Conference (ESEC), Springer-Verlag Lecture Notes in Computer Science
717, pages 68-83.
[Basili, 1994] Basili, V. R. (1994). Building an Experience Factory for Maintenance: Reengineering Process and Product. Foil copies from keynote speech at ICSM94.
[Basili, 1995] Basili, V. R. (1995). Proposed ISERN Definitions. ISERN draft document.
[Beck and Eichmann, 1993] Beck, J. and Eichmann, D. (1993). Program and Interface Slicing for Reverse Engineering. Acquired via ftp. Full address of authors: Dept. of Statistics and Computer Science, West Virginia University, Morgantown, WV 26506 (beck@cs.wvu.wvnet.edu), and Dept. of Computer Science, University of Houston - Clear Lake, 2700 Bay Area Boulevard, Box 113, Houston, TX 77058 (eichmann@cs.wvu.wvnet.edu).
[Belkhatir and Estublier, 1986] Belkhatir, N. and Estublier, J. (1986). Experience with a Data Base of Programs. In Proc. of 2nd ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, pages 84-91.
[Bendifallah and Scacchi, 1987] Bendifallah, S. and Scacchi, W. (1987). Understanding Software Maintenance Work. IEEE Transactions on Software Engineering, SE-13(3):311-323.
Distinguishes between primary and articulation maintenance work, where primary work is what the maintainers want to do with their systems, while articulation work is what they have to do with their systems.
[Benedusi et al., 1990] Benedusi, P., Benvenuto, V., and Caporaso, M. G. (1990). Maintenance and Prototyping at the Entity-Relationship Level: A Knowledge-Based Support. In [CSM90, 1990], pages 161-170.
[Benedusi et al., 1993] Benedusi, P., Benvenuto, V., and Tomacelli, L. (1993). The Role of Testing and Dynamic Analysis in Comprehension Support. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 149-158.
[Bennett, 1991] Bennett, K. H. (1991). Automated Support of Software Maintenance. Information and Software Technology, 33(1):74-85. Also in [Arnold, 1993b], pp. 59-70.
[Bennett, 1993] Bennett, K. H. (1993). Understanding the Process of Software Maintenance. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 2-5.
[Bersoff, 1984] Bersoff, E. H. (1984). Elements of Software Configuration Management. In
IEEE Transactions on Software Engineering, SE-10(1):79-87.
[Bersoff and Davis, 1991] Bersoff, E. H. and Davis, A. M. (1991). Impacts of Life-Cycle Models on Software Configuration Management. Communications of the ACM, 34(8):104-117.
[Bertolino and Marre, 1993] Bertolino, A. and Marre, M. (1993). Deriving Path Expressions Recursively. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 177-185.
[Biggerstaff, 1989] Biggerstaff, T. J. (1989). Design Recovery for Maintenance and Reuse. IEEE Computer, pages 36-49. Also in [Arnold, 1993b], pp. 520-533.
[Biggerstaff et al., 1994] Biggerstaff, T. J., Mitbander, B. G., and Webster, D. (1994). Program Understanding and the Concept Assignment Problem. Communications of the ACM, 37(5):72-83.
[Blazy and Facon, 1993] Blazy, S. and Facon, P. (1993). Partial Evaluation as an Aid to the Comprehension of Fortran Programs. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 46-55.
[Binkley, 1992] Binkley, D. (1992). Using Semantic Differencing to Reduce the Cost of Regression Testing. In [CSM92, 1992], pages 41-50.
[Boehm, 1984] Boehm, B. W. (1984). Software Engineering Economics. In IEEE Transactions on Software Engineering, SE-10(1):4-21.
[Boehm et al., 1984] Boehm, B. W., Gray, T. E., and Seewaldt, T. (1984). Prototyping Versus Specifying: A Multiproject Experiment. IEEE Transactions on Software Engineering, 10(3):133-147.
My reference includes comments on the experiment after a presentation at the 1st Workshop on Software Process. (Abstract) In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instructions) application software product. Four teams used the Specifying approach. Three teams used the Prototyping approach. The main results of the experiment were the following. (1) Prototyping
yielded products with roughly equivalent performance, but with about 40% less code
and 45% less effort. (2) The prototyped products rated somewhat lower on functionality

Other references

319

and robustness, but higher on ease of use and ease of learning. (3) Specifying produced
more coherent designs and software was easier to integrate. The paper presents the experimental data supporting these and a number of additional conclusions.
[Bohner et al., 1993] Bohner, S. A., Schneidewind, N. F., Kuvaja, P., and Caldiera, G. (1993). Status Report: Software Maintenance Standards (Four invited papers). In [CSM93, 1993], pages 102-107.
[Bowen and Stavridou, ] Bowen, J. and Stavridou, V. Formal Methods: Epideictic or Apodeictic? In BCS/IEE Software Engineering Journal's personal view column (month/year not known).
[Bowen, 1993] Bowen, J. P. (1993). From Programs to Object Code and back again using Logic Programming: Compilation and Decompilation. Journal of Software Maintenance: Research and Practice, 5(4):205-234.
[Bowen et al., 1993a] Bowen, J. P., Breuer, P. T., and Lano, K. (1993a). Formal specifications in software maintenance: From code to Z++ and back again. Information and Software Technology, 35(11/12):679-690.
[Bowen et al., 1991] Bowen, J. P., Breuer, P. T., and Lano, K. C. (1991). The REDO Project:
Final Report. Technical Report PRG-TR-23-91, Oxford University Computing Laboratory, 11 Keble Road, Oxford OX1 3QD, England.
[Bowen et al., 1993b] Bowen, J. P., Breuer, P. T., and Lano, K. C. (1993b). A Compendium of Formal Techniques for Software Maintenance. BCS/IEE Software Engineering Journal, 8(5):253-262.
[Bowen and Hinchey, 1994] Bowen, J. P. and Hinchey, M. G. (1994). Seven More Myths of Formal Methods. Submitted to the FME '94 Symposium, Industrial Benefits of Formal Methods, Barcelona, Spain, 24-28 October 1994.
[Bowen et al., 1985] Bowen, T. P., Wigle, G. B., and Tsai, J. T. (1985). AD-A153 990: Specification of Software Quality Attributes: Software Quality Evaluation Guidebook. Technical Report RADC-TR-85-37, Boeing Aerospace Company.
[Box et al., 1978] Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978). Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. John Wiley & Sons, Inc.
[Bradac et al., 1994] Bradac, M. G., Perry, D. E., and Votta, L. G. (1994). Prototyping a Process Monitoring Experiment. IEEE Transactions on Software Engineering, 20(10):774-784.
[Braek, 1993] Braek, R. (1993). Engineering Real Time Systems. Addison Wesley.
[Briand and Basili, 1992] Briand, L. C. and Basili, V. R. (1992). A Classification Procedure for the Effective Management of Changes During the Maintenance Process. In [CSM92, 1992], pages 328-336.
[Briand and Basili, 1993] Briand, L. C. and Basili, V. R. (1993). Measuring and Assessing Maintainability at the End of High Level Design. In [CSM93, 1993], pages 88-97.
[Briand et al., 1995b] Briand, L., Emam, K. E., and Morasca, S. (1995b). Theoretical and Empirical Validation of Software Product Measures. Technical Report ISERN-95-03, International Software Engineering Research Network.


[Brodman and Johnson, 1994] Brodman, J. G. and Johnson, D. L. (1994). What Small Businesses and Small Organizations Say About the CMM. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 331-340.
[Brooks et al., 1994] Brooks, A., Daly, J., Miller, J., Roper, M., and Wood, M. (1994). Replication's Role in Experimental Computer Science. Technical Report RR/172/94EFoCS-5-94, Empirical Foundations of Computer Science, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
[Brown et al., 1995] Brown, A. W., Christie, A. M., and Dart, S. (1995). An Examination of Software Maintenance Practices in a U.S. Government Organization. Journal of Software Maintenance: Research and Practice, 7:223-238.
Interviews with maintainers discovered a need for maintenance tools to support areas such as reverse engineering and regression testing. However, these problems often
had their roots in deeper issues, such as lack of design for maintenance, ineffective communication and low status of maintenance work. The authors propose solutions to the
problems by better planning both the development work and maintenance work to allow
more time for documentation, and suggest using acceptance testing before maintenance
takes over a system from development.
[Bryan and Siegel, 1984] Bryan, W. and Siegel, S. (1984). Making Software Visible, Operational, and Maintainable in a Small Project Environment. IEEE Transactions on Software Engineering, SE-10(1):59-67.
[Brynjolfsson, 1993] Brynjolfsson, E. (1993). The Productivity Paradox of Information Technology. Communications of the ACM, 36(17):67-77.
[Buckley, 1989] Buckley, F. J. (1989). Some standards for software maintenance. IEEE Computer, pages 69-70.
[Buss et al., 1994a] Buss, E., Ewart, G., Mori, R. D., Madhavji, N., Mylopoulos, J., and Müller, H. (1994a). IBM NSERC CRD Project - A Reverse Engineering Environment. Part of documentation of RIGI, ftp-ed from tara.uvic.ca, 2 pages.
[Buss et al., 1994b] Buss, E., Mori, R. D., Gentleman, M., Henshaw, J., Johnson, H., Kontogiannis, K., Merlo, E., Müller, H., Mylopoulos, J., Paul, S., Prakash, A., Stanley, M., Tilley, S., Troster, J., and Wong, K. (1994b). Investigating Reverse Engineering Technologies: The CAS Program Understanding Project. IBM Systems Journal, 33(3), 41 pages.
[Canfora et al., 1993] Canfora, G., Cimitile, A., Munro, M., and Tortorella, M. (1993). Experiments in Identifying Reusable Abstract Data Types in Program Code. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 36-45.
[Capretz and Munro, 1992] Capretz, M. A. M. and Munro, M. (1992). COMFORM - A Software Maintenance Method Based on the Software Configuration Management Discipline. In [CSM92, 1992], pages 183-192.
[Capretz and Munro, 1994] Capretz, M. A. M. and Munro, M. (1994). Software Configuration Management Issues in the Maintenance of Existing Systems. Software Maintenance: Research and Practice, 6:1-14.
[Card et al., 1987] Card, D. N., Cotnoir, D. V., and Goorevich, C. E. (1987). Managing Software Maintenance Cost and Quality. In Proceedings of the 1987 IEEE Conference on Software Maintenance, pages 145-152.


Describes a simplistic model for managing a maintenance project. It complements the paper by Grady [Grady, 1987].
[Carlsen, 1995] Carlsen, S. (1995). Essential Workflow Problems. Presented at BEST doctoral
seminar, Bergen.
[Chapin, 1986] Chapin, N. (1986). Supervisory Attitudes Toward Software Maintenance. In Proceedings of the 1986 National Computer Conference, Volume 55, pages 61-68.
[Chapin, 1993] Chapin, N. (1993). Software Maintenance Characteristics and Effective Management. Software Maintenance: Research and Practice, 5:91-100.
(i) Managers determine maintenance effort. (ii) Classifies maintenance management into three groups: successful, indifferent and poor, and views their relationship with quality of management with respect to management effort. (iii) Well-managed groups carry a light burden of fix-it work, but more enhancement and less new development. (iv) The successful group has an even spread of experience within the group (typically 2.5-6 years of experience).
[Chen et al., 1990] Chen, S., Heisler, K. G., Tsai, W. T., Chen, X., and Leung, E. (1990). A Model for Assembly Program Maintenance. Software Maintenance: Research and Practice, 2:3-32. Also in [Arnold, 1993b], pp. 416-445.
[Cioch et al., 1996] Cioch, F., Lohrer, S., and Palazzolo, M. (1996). A Documentation Suite for Maintenance Programmers. In [ICSM 1996], pp. 286-295. (Awarded ICSM 1996 "Best paper".)
[Cimitile, 1993] Cimitile, A. (1993). Reuse Reengineering and Validation via Concept Assignment. In [CSM93, 1993], pages 216-225.
The paper identifies that completeness and adequacy are valuable properties for validating a criterion for selecting candidates from a program for reuse re-engineering.
Presents a good list of references to related work in the field. Presents a candidate criterion that is validated by using the proposed framework. The authors identify that a
candidate criterion is very context-dependent on the program system it is applied to, and
therefore must be revalidated in every context used. The article is very brief and it is
sometimes hard to get the points on the first reading. The research goal is to automatically detect candidates for reuse.
[Cimitile et al., 1992] Cimitile, A., Lanubile, F., and Visaggio, G. (1992). Traceability Based on Design Decisions. In [CSM92, 1992], pages 309-317.
[Cimitile et al., 1990] Cimitile, A., Lucca, G. A. D., and Maresca, P. (1990). Maintenance and Intermodular Dependencies in Pascal Environment. In [CSM90, 1990], pages 72-83.
[Cleveland, 1989] Cleveland, L. (1989). A Program Understanding Support Environment. IBM Systems Journal, 28(2):324-344. Also in [Arnold, 1993b], pp. 244-264.
[Collins et al., 1994] Collins, W. R., Miller, K. W., Spielman, B. J., and Wherry, P. (1994). How Good is Good Enough? An Ethical Analysis of Software Construction and Use. Communications of the ACM, 37(1):81-91.
[Collofello and Buck, 1987] Collofello, J. S. and Buck, J. J. (1987). Software Quality Assurance for Maintenance. IEEE Software, pages 46-51.


[Collofello and Vennergrund, 1987] Collofello, J. S. and Vennergrund, D. A. (1987). Ripple effect based on semantic information. In Proceedings of AFIPS Joint Computer Conference, Vol. 56, pages 675-682. I do not have this.
[Conradi et al., 1992] R. Conradi et. al. (1992). Design, Use and Implementation of SPELL, A
Language for Software Process Modelling and Evolution. In [Derniame, 1992], pp.
167-177, 1992.
[Conradi et al., 1994a] Conradi, R., Fernstrm, C., and Fugetta, A. (1994a). Promoter Book,
chapter Concepts for Evolving Software Processes, pages ???? ??
[Conradi et al., 1994b] Conradi, R., Hagaseth, M., Larsen, J.-O., Nguyen, M. N., Munch, B. P., Westby, P. H., Zhu, W., Jaccheri, M. L., and Liu, C. (1994b). EPOS: Object-Oriented and Cooperative Process Modelling. In [Finkelstein et. al., 1994], pp. 33-70.
[Continuus, 1995] Continuus Software Corporation, October 26, 1995. Response to Atria's Competitive Claims. Public flyer, available from Continuus WWW-server http://www.wji.com/continuus.
The document is a company response to email messages posted by Atria sales staff presenting a competitive evaluation of Continuus/CM (C/CM) and Atria's ClearCase. Continuus felt that many of Atria's claims, observations and questions regarding the functionality of C/CM were intentionally false, and responded with this document. The full text of Atria's email messages is included in the document. As Atria's text was biased toward ClearCase, the response from Continuus is equally biased toward C/CM. The document contains pointers to some independent evaluations of the two systems.
[Corbi, 1989] Corbi, T. A. (1989). Program Understanding: Challenge for the 1990s. IBM Systems Journal, 28(2):294-306. Also in [Arnold, 1993b], pp. 596-608.
[Cordy et al., 1990] Cordy, J. R., Eliot, N. L., and Robertson, M. G. (1990). TuringTool: A User Interface to Aid in the Software Maintenance Task. IEEE Transactions on Software Engineering, 16(3):294-301.
The TuringTool is a source code viewing system, enabling the use of source elisions
(=outlines) to present different views of a program written in the Turing language. The
projections produced are automatically updated as the projected program is edited. The
tool provides a set of primitive elision rules (e.g. structure, calls, definition(x), use(x),
...) that can be combined using set operations to present derived projections (e.g. mention(depth) = use(depth) U definition(depth)) of the program. The developers realize
that such a tool is only a small part of a larger environment for software maintenance,
comprising document handlers, configuration control, versioning, debuggers, etc. The
paper identifies that an integration with such an environment can be difficult partly due
to integration with SCM technology and partly due to presenting concurrently updated
projected views to multiple maintainers of the same program.
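A small sketch of the elision idea under our own assumptions (the rule names follow the paper's examples, but the representation and the toy program below are invented): primitive rules select sets of source lines, and derived projections are built with ordinary set operations, e.g. mention(x) = use(x) U definition(x).

    # Each primitive elision rule maps to the set of line numbers it would keep visible.
    source = {
        1: "var depth: int",
        2: "depth := 0",
        3: "procedure descend()",
        4: "    depth := depth + 1",
        5: "    draw(depth)",
    }

    def definition(name):
        return {n for n, line in source.items() if line.strip().startswith("var " + name)}

    def use(name):
        return {n for n, line in source.items() if name in line and n not in definition(name)}

    def mention(name):
        return use(name) | definition(name)   # derived projection via set union

    def project(lines):
        # Show only the selected lines; everything else is elided.
        return "\n".join(source[n] if n in lines else "..." for n in sorted(source))

    print(project(mention("depth")))   # line 3 (no mention of depth) is elided as "..."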
[Courtois and Parnas, 1993] Courtois, P.-J. and Parnas, D. L. (1993). Documentation for Safety Critical Software. In Proc. of the 15th International Conference on Software Engineering, pp. 315-323, Baltimore, Md, May 1993.
[Côté and Bohner, 1990] Côté, V. and Bohner, S. A. (1990). A Model for Estimating Perfective Software Maintenance Projects. In [CSM90, 1990], pages 328-334.
[Cousin and Collofello, 1992] Cousin, L. and Collofello, J. (1992). A Task-Based Approach to Improving the Software Maintenance Process. In [CSM92, 1992], pages 118-127.


[CSM93, 1993] CSM93 (1993). David Card (ed.) Conference on Software Maintenance, Montreal, Quebec, Canada, September 27-30, Los Alamitos, California. IEEE Technical Committee on Software Engineering, IEEE Computer Society Press.
[Cutillo et al., 1993] Cutillo, F., Lanubile, F., and Visaggio, G. (1993). Extracting Application Domain Functions from Old Code: A Real Experience. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 186-192.
[Daly et al., 1994a] Daly, J., Brooks, A., Miller, J., Roper, M., and Wood, M. (1994a). An External Replication of a Korson Experiment. Technical Report RR/162/94EFoCS-4-94,
Empirical Foundations of Computer Science, Dept. of Computer Science, Univ. of
Strathclyde, Glasgow, UK.
In-depth report to the presentation by Daly at ICSM94.
[Daly et al., 1995a] Daly, J., Miller, J., Brooks, A., Roper, M., and Wood, M. (1995a). Issues
on the Object-Oriented Paradigm: A Questionnaire Survey. Technical Report RR/95/
183EFoCS-8-95, Empirical Foundations of Computer Science, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
[Daly et al., 1995b] Daly, J., Wood, M., Brooks, A., Miller, J., and Roper, M. (1995b). Structured Interviews on the Object-Oriented Paradigm. Technical Report RR/95/182EFoCS-7-95, Empirical Foundations of Computer Science, Dept. of Computer Science,
Univ. of Strathclyde, Glasgow, UK.
[Dart, 1991] Dart, S. (1991). Concepts in Configuration Management Systems. In Proceedings of the 3rd International Workshop on Software Configuration Management, Trondheim, Norway ([Feiler, 1991]), pages 1-18.
[Dart, 1992] Dart, S. A. (1992). The Past, Present and Future of Configuration Management.
Technical Report CMU/SEI-92-TR-8, Software Engineering Institute, Carnegie Mellon
University.
[Davis et al., 1988] Davis, A. M., Bersoff, E. H., and Comer, E. R. (1988). A Strategy for Comparing Alternative Software Development Life Cycle Models. IEEE Transactions on Software Engineering, 14(10):1453-1461.
(summary) Traditionally the waterfall model or variations on it. Rapid throwaway prototyping: ensuring software meets users' needs. Incremental development: partial implementation to a quick operating system with requirements understood. Evolutionary prototyping: do not know all requirements, build upon the previous in iterations. Reuse. Automated software synthesis: transformation based. Requirements are frozen during development; if adaptable techniques could be invented (i.e. the outputs of the development are adaptable), current requirements could more easily be included. Adding a third dimension, cost, so that cost/functionality can be measured. Productivity based on functionality, not on LOC.
[de Boer, 1993] de Boer, R. R. A. (1993). Computer Aided Requirements Vehicle for Software System Development. Master's thesis, Faculty of Economic Knowledge, Erasmus University Rotterdam, The Netherlands.
The emphasis is put on a formal representation of the situation today, and the future
situation, and then automatically deriving a solution system from these specifications by
using simulations. This is the dream. The thesis tries to identify components in a requirements system that would fulfill the dream: it identifies the language needed, an automated proof system and a simulation environment. A considerable example of a computer reservation system is presented.
[De Carlini et al., 1993] De Carlini, U., De Lucia, A., Di Lucca, G.A., and Tortora, G. (1993). An Integrated and Interactive Reverse Engineering Environment for Existing Software Comprehension. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 128-137.
[DeMarco and Listener, 1987] DeMarco, T. and Lister, T. (1987). Peopleware. Dorset House Publishing.
A series of thought-provoking situations about management of software projects are
presented. Most of the stories are built on the authors' own experiences, and some are
supported by research data.
[Derniame, 1992] J-C. Derniame, editor. (1992). Proc. Second European Workshop on Software Process Technology (EWSPT92), Trondheim, Norway. 253 p. Springer Verlag
LNCS 635, September 1992.
[Devanbu and Jones, 1994] Devanbu, P. T. and Jones, M. A. (1994). The Use of Description Logics in KBSE Systems. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 23-35.
[Devanbu et al., 1994] Devanbu, P. T., Rosenblum, D. S., and Wolf, A. L. (1994). Automated Construction of Testing and Analysis Tools. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 241-250.
[Diamond, 1981] Diamond, W. J. (1981). Practical Experiment Designs for Engineers and
Scientists. Lifetime Learning Publications.
[(DND), 1995] (DND), D. N. D. (1995). Sjekkliste for IT-Strategi (in Norwegian). Available from DND, Pb. 6714 Rodeløkka, 0503 Oslo.
[Dorling, 1992] Dorling, A. (1992). SPICE: Project Overview.
[Dorling, 1993] Dorling, A. (1993). SPICE: Software Process Improvement and Capability dEtermination. Software Quality Journal, (2):209-224.
[e Abreu, 1995] e Abreu, E. F. B. (1995). Research Activities: Software Quality. ERCIM
News, no. 23.
[e Abreu et al., 1995] e Abreu, F. B., Goulao, M., and Esteves, R. (1995). Toward the Design Quality Evaluation of Object-Oriented Software Systems. In Proc. of 5th Int'l Conference on Software Quality, Austin, Texas, USA. Presents a set of metrics for controlling the quality of object-oriented designs. The acceptable value scale for the metrics is discussed, and it is argued that the high- and low-pass filter metaphor should be used to define the value limits based on experience.
[e Abreu and Melo, 1995] e Abreu, F. B. and Melo, W. (1995). Evaluating the Impact of Object-Oriented Design on Software Quality. Technical report, Lisbon Technical University and University of Maryland. Describes results where the OO design metrics presented in [e Abreu et al., 1995] are experimentally evaluated. The results obtained show how OO design mechanics influence quality characteristics such as defect density and rework.
[Ebert et al., 1980] Ebert, R., Lügger, J., and (eds), L. G. (1980). Practice in Software Adaption and Maintenance (Proceedings of the SAM Workshop, Berlin, April 1979). North-Holland Publishing Company, Amsterdam.
[Edelstein, 1993] Edelstein, D. V. (1993). Report on the IEEE Std. 1219-1993 Standard for Software Maintenance. ACM SigSoft: Software Engineering Notes, 18(4):94-95 (double paged).
The standard (published 2 June 1993) can be obtained from IEEE Inc., 445 Hoes Lane, Box 1331, Piscataway NJ 08855-1331, 800-678-IEEE, or by fax 908-981-9667.
[Edwards and Munro, 1993] Edwards, H. M. and Munro, M. (1993). Abstracting the Logical Processing Life Cycle for Entities Using the RECAST Method. In [CSM93, 1993], pages 162-171.
[Estublier, 1988] Estublier, J. (1988). Configuration Management - The Notion and the Tools. In [Winkler, 1988], pages 38-61. ISBN 3-519-02671-6.
[Feiler, 1989] Feiler, P. H. (1989). Configuration Management Models in Commercial Environments. Technical Report CMU/SEI-91-TR-7 ESD-9-TR-7, Carnegie Mellon University, Software Engineering Institute.
[Feiler, 1991] Feiler, P. H., editor (1991). Proceedings of the 3rd International Workshop on
Software Configuration Management, Trondheim, Norway. ACM Press, New York.
[Fenton, 1994] Fenton, N. E. (1994). Software Measurement: A Necessary Scientific Basis. IEEE Transactions on Software Engineering, 20(3):199-206.
[Finkelstein et. al., 1994] A. Finkelstein, J. Kramer, and B. A. Nuseibeh, editors. (1994). Software Process Modelling and Technology. Advanced Software Development Series, Research Studies Press/John Wiley & Sons, 1994. ISBN 0-86380-169-2. 362 p.
[Frost, 1985] Frost, D. (1985). Software Maintenance and Modifiability. In Proceedings of the 1985 IEEE Phoenix Conference on Computers and Communications, pages 489-494.
[Fuggetta and Picco, 1994] Fuggetta, A. and Picco, G. P. (1994). An Annotated Bibliography on Software Process Improvement. Software Engineering Notes, 19(3):66-68.
Contains 78 pointers to other references on the subject.
[Gallagher and Lyle, 1991] Gallagher, K. B. and Lyle, J. R. (1991). Using Program Slicing in Software Maintenance. IEEE Transactions on Software Engineering, SE-17(8):751-761. Also in [Arnold, 1993b], pp. 324-334.
Defines a program decomposition slice and its complement, and a set of principles for
changes that assure the maintainer that any changes made in the program decomposition
slice will not affect the rest of the program (i.e. the complement). Thus decomposition
slices break down the program into manageable pieces, and automatically assist the
maintainer in guaranteeing that there are no ripple effects induced by modifications in a
component. (See also [Gallagher, 1990]). Practical problem(?): How to identify the criteria for making a decomposition slice in large programs?
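A toy illustration of the guarantee under our own assumptions (the line numbers and sets below are invented, not taken from the paper): if every edited line stays inside the part of the decomposition slice that is not shared with the complement, the complement is untouched and no ripple effects are induced.

    # Line numbers of a hypothetical program, partitioned by a slicing tool (invented values).
    decomposition_slice = {1, 2, 4, 6, 9}   # lines that can affect the variable of interest
    complement          = {1, 3, 5, 7, 8}   # lines the rest of the program depends on (line 1 is shared)
    independent_part    = decomposition_slice - complement   # {2, 4, 6, 9}: safe region for edits

    def change_is_ripple_free(edited_lines):
        # True if all edits stay inside the independent part of the decomposition slice.
        return set(edited_lines) <= independent_part

    print(change_is_ripple_free({4, 9}))   # True: the complement is untouched
    print(change_is_ripple_free({1, 4}))   # False: line 1 also belongs to the complement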
[Ganfora and Cimitile, 1994] Canfora, G. and Cimitile, A. (1994). RE2: Reverse-engineering and Reuse Re-engineering. Software Maintenance: Research and Practice, 6:53-72.
[Garlan, 1995] Garlan, D. (1995). First International Workshop on Architectures for Software Systems - Workshop Summary. ACM SIGSOFT Software Engineering Notes, 20(3):84-89. The workshop was collocated with ICSE-17 in Seattle, April 24-25, 1995.


[Garlan et al., 1995] Garlan, D., Tichy, W., and Paulisch, F. (1995). Summary of the Dagstuhl Workshop on Software Architecture. ACM SIGSOFT Software Engineering Notes, 20(3):63-83. The workshop was located in Schloss Dagstuhl, Germany, February 20-24, 1995.
[Gelernter and Carriero, 1992] Gelernter, D. and Carriero, N. (1992). Coordination Languages and their Significance. Communications of the ACM, 35(2):96-107.
[Gillis and Wright, 1990] Gillis, K. D. and Wright, D. G. (1990). Improving Software Maintenance Using System-Level Reverse Engineering. In [CSM90, 1990], pages 84-90.
[Glass, 1992] Glass, R. L. (1992). Building Quality Software. Prentice-Hall.
The main part of the book describes numerous techniques that can be used during the
software development process to enhance the level of quality in the delivered software.
Contains sections on requirements, design, implementation, checkout, maintenance, an
attribute approach to quality, etc.
There is a discussion of the Cleanroom approach that caught my attention: "But experimental findings are beginning to appear in the literature suggesting dramatic advantages to cleanroom. Perhaps the most exciting to date finds that over 90 percent of software errors can be eliminated before testing is even begun. This is a strong argument for the use of formal verification practices by software developers. Further analysis of these findings, however, raises additional questions. Although some experiments employed formal verification, in one key experiment [Kouchakdijian and Basili, 1989] the experimenters substituted rigorous inspection techniques (i.e. peer code review) for formal verification, and also achieved 91 percent error elimination prior to testing. Thus it would appear that the most important element of the cleanroom process might be the expectation that developers will remove errors by rigorous, no-testing approaches, rather than specifically using formal verification."
[Glass, 1994a] Glass, R. L. (1994a). A Tabulation of Topics where Software Practice Leads Software Theory (Editor's Corner). Journal of Systems and Software, 25:219-222.
RLG asserts that at least in software design, software maintenance, user interfaces, programming-in-the-large, modelling and simulation, and metrics, software practice leads software theory.
[Glass, 1994b] Glass, R. L. (1994b). The Software Research Crisis. IEEE Software, pages 42-47.
[Glass and Noiseux, 1981] Glass, R. L. and Noiseux, R. A. (1981). Software Maintenance
Guidebook. Prentice-Hall.
[Grady, 1987] Grady, R. B. (1987). Measuring and Managing Software Maintenance. IEEE
Software, 4(9):35-45.
This article describes HP's "Goal-Question-Metric" approach to managing software
maintenance. The article is essential reading for anyone who will set up a metrics program to measure and manage software maintenance in an organization.
[Grady, 1992] Grady, R. B. (1992). Practical Software Metrics for Project Management and
Process Improvement. Englewood Cliffs, N.J.: Prentice-Hall.
[Grady, 1993] Grady, R. B. (1993). Practical Results from Measuring Software Quality. Communications of the ACM, 36(11):62-68.
[Grady and Caswell, 1987] Grady, R. B. and Caswell, D. L. (1987). Software Metrics: Establishing a Company-Wide Program. Englewood Cliffs, N.J.: Prentice-Hall.
The book describes the planning, introduction, and implementation of a metrics program in the HP organization. The authors were two of the focal persons in the project,
and report their experiences in this practical book.
[Gray and Hunter, 1992] Gray, E. M. and Hunter, R. B. (1992). Process Assessment and Process Improvement - The Need to Standardize? Technical report, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK and Dept. of Computer Studies, Glasgow
Polytechnic, Glasgow, UK. (Not certain about the year.).
[Gray and Hunter, 1994] Gray, E. M. and Hunter, R. B. (1994). Process Assessment and Process Improvement - The Need to Standardize? Address: Dept. of Computer Studies, Glasgow Polytechnic, Glasgow G4 0BA, Scotland and Department of Computer Science,
University of Strathclyde, Glasgow G1 1XH, Scotland.
[Greenspan et al., 1984] Greenspan, S., Mylopoulos, J., and Borgida, A. (1984). Capturing
More World Knowledge in the Requirements Specification. In Proceedings of 6th International Conference on Software Engineering, Tokyo. Reprinted in Freeman, P., and
Wasserman, A. (eds.) Tutorial on Software Design Techniques, IEEE Computer Society
Press, 1984. Also in R. Prieto-Diaz and G. Arango, Domain Analysis and Software Systems Modeling, IEEE Comp. Sci. Press, 1991. (NB: this paper was awarded the prize as
the best ICSE paper of ten years earlier at ICSE-16; see the invited paper at ICSE-16, [Greenspan
et al., 1994]).
[Greenspan et al., 1994] Greenspan, S., Mylopoulos, J., and Borgida, A. (1994). On Formal
Requirements Modeling Languages: RML Revisited. In Proceedings of 16th International Conference on Software Engineering (ICSE-16), pages 135-148. IEEE Computer
Society Press.
This paper is written for a plenary talk by the authors after being awarded the
best paper prize for their paper "Capturing More World Knowledge in the Requirements Specification" [Greenspan et al., 1984], presented at ICSE-6. Paper abstract: "Research issues related to requirements modeling are introduced and discussed through a
review of the requirements modeling language RML, its peers and its successors from the
time it was first proposed at the Sixth International Conference on Software Engineering
to the present, ten ICSEs later. We note that the central theme of 'Capturing More World
Knowledge' in the original RML proposal is becoming increasingly important in Requirements Engineering. The paper highlights key ideas and research issues that have
driven RML and its peers, evaluates them retrospectively in the context of experience
and more recent developments, and points out significant remaining problems and directions for requirements modeling research."
[Gremillion, 1984] Gremillion, L. L. (1984). Determinants of Program Repair Maintenance
Requirements. Communications of the ACM, 27(8):826-832.
[Gustafson et al., 1993] Gustafson, D. A., Tan, J. T., and Weaver, P. (1993). Software Measure
Specification. ACM SigSoft: Software Engineering Notes, 18(6):163-168.
Check the correctness of this reference!!!
[Gustafson et al., 1990] Gustafson, D. A., Melton, A. C., An, K. H., and leHong Lin (1990).
Software Maintenance Models. In [Longstreet, 1990], pages 23-35. IEEE Computer Society Press.
[Haney, 1972] Haney, F. M. (1972). Module connection analysis. Proceedings AFIPS Joint
Computer Conference, 41(5):163-167.
Does not have this.
[Harjani and Queille, 1992] Harjani, D. R. and Queille, J. P. (1992). A Process Model for the
Maintenance of Large Space Systems Software. In [CSM92, 1992], pages 127-136.
[Harrison, 1987] Harrison, R. (1987). Maintenance giant sleeps undisturbed in federal data
centers. Computerworld.
[Harrison and Miluk, 1992] Harrison, W. and Miluk, G. (1992). The Impact of Within Application Size Variability on Software Sizing Models. Technical Report TR 92-1, Portland
State University and SEI/CMU, PSU Center for Software Quality Research, Portland
State University, Portland, OR 97207-0751, USA.
Sizing models are used to predict how large a piece of software will be based on certain
implementation-independent characteristics of the application. The authors briefly discuss the problems that "Between Application" and "Within Application" errors pose
when constructing sizing models from empirical data when using Lines of Code as a sizing measure. The authors suggest a new sizing measure called Adjusted Nominal Length
which appears to provide a distribution with less variability, and hence a more stable
basis for constructing sizing models.
[Hatzimanikatis et al., 1995] Hatzimanikatis, A. E., Tsalidis, C. T., and Christodoulakis, D.
(1995). Measuring the Readability and Maintainability of Hyperdocuments. Journal of
Software Maintenance: Research and Practice, 7:77-90.
[Hazan et al., 1993] Hazan, J.E., Jarvis, S.A., Morgan, R.G., and Garigliano, G. (1993). Understanding Lolita: Program Comprehension in Functional Languages. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 26-34.
[Haziza et al., 1992] Haziza, M., Voidrot, J., E. Minor, L. P., and Blazy, S. (1992). Software
Maintenance: An Analysis of Industrial Needs and Constraints. In [CSM92, 1992], pages 18-26.
[Heindel and Kasten, 1993] Heindel, L. E. and Kasten, V. A. (1993). Managing the Software
Factory. In ??, pages 468-474. IEEE.
[Hicks, 1982] Hicks, C. R. (1982). Fundamental Concepts in the Design of Experiments. Holt,
Rinehart and Winston.
[Hinkelmann and Kempthorne, 1994] Hinkelmann, K. and Kempthorne, O. (1994). Design
and Analysis of Experiments Volume 1: Introduction to Experimental Design. Wiley
Series in Probability and Mathematical Statistics: Applied Probability and Statistics
Section. John Wiley & Sons, Inc.
[Hinley and Bennett, 1992] Hinley, D. S. and Bennett, K. H. (1992). Developing a Model to
Manage the Software Maintenance Process. In [CSM92, 1992], pages 174-182.
[Hirayama et al., 1990] Hirayama, M., Sato, H., Yamada, A., and Tsuda, J. (1990). Practice of
quality modeling and measurement on software life-cycle. In ?? Got it from Tor Stålhane, Copyright 1990, IEEE, pages 98-107.
Plus for linking quality in designs with quality in source code! Describes the ESQUT
(Evaluation of Software Quality from User's viewpoint), a system developed at Toshiba,
built on quality quantification methodologies and support tools including quality models and quality metrics for each software development phase. The metrics selected show
high correlation with the McCabe and Halstead metrics, and this is actually used as a
validation argument(!). (Actually a subset of the ESQUT metrics correlate with McCabe
and Halstead.) For quality control, the defined metrics are used in scatter diagrams to
find critical modules. The problem may be that most modules may be bad in one or more
scatter diagrams => How to pick the fishes? ESQUT-TFF (technical formula for fifty design steps method): quality metrics for design quality. Defines a design quality total
index (DQTI); the shape of the area curve reflects ill-balanced designs (deviating from
a square shape). The authors predict that if a design is properly expanded into
source code, there is a high correlation between the design and source code metrics.
This helps to predict product size. The problem may be that the design and source code
metrics are actually identical, on two different levels. => Can we say that the quality
characteristics for a design are the same as those for source code? Good: If the correlation is low in a project, the assumption is that the code badly reflects the design.
[Hoffman and Snodgrass, 1986] Hoffman, D. and Snodgrass, R. (1986). Trace Specifications:
Methodology and Models. Technical Report DCS-53-IR, Department of Computer Science, University of Victoria.
[Horebeek and Levi, 1989] Horebeek, I. V. and Levi, J. (1989). Algebraic Specifications in
Software Engineering. Springer-Verlag.
(From back of book) There is now general agreement that formal specifications are
needed to obtain quality software in large projects. Algebraic specifications form a major category of formal specifications. Many projects using algebraic specifications have
now been carried out, and their more widespread use has been prevented only by the
absence of introductory descriptions and supporting tools. The aim of this book is to
bridge the gap between theory and practice by providing a sound introduction to algebraic specifications. In the book the authors (i) show the benefits of using algebraic
specifications, (ii) present an algebraic specification language and a method to use this
language, (iii) explain the underlying mathematical foundations of algebraic specifications and the consequences of the theory for the practitioner, and (iv) present not only
small examples but also case studies of a reasonable complexity. The book will be of interest to software designers and programmers. It can also be used for an introductory
course on algebraic specifications and software engineering at undergraduate or graduate level.
[IBIS, ] IBIS. The IBIS Manual; A Short Course in IBIS Methodology. Copied by permission
from Corporate Memory Systems Inc.
[ICSM 1996] Proceedings of the International Conference on Software Maintenance.
Monterey, California, November 4-8, 1996. IEEE Computer Society Press, Los Alamitos, California, ISBN 0-8186-7678-7.
[IEEE SCM, 1987] IEEE SCM (1987). IEEE Guide to Software Configuration Management.
IEEE/ANSI Standard 1042-1987.
[Ino, 1992] Ino, M. (1992). Current State of Software Maintenance in Japan: In Depth View.
In [CSM92, 1992], pages 27-29.
[Jin et al., 1994] Jin, Y., Levitt, R. E., Christiansen, T., and Kunz, J. C. (1994). The Virtual
Design Team: A Computational Model of Engineering Design Teams. In AAAI 94
Spring Symposium on Computational Organization Design, Stanford, California.
Describes the goal and functionality of the VDT simulation environment. The goal of
VDT is to simulate changes in different aspects of project team performance, given
changes in organization structure, communication tool availability and project policy,
such as centralized decision making and formalized communication. Christiansen is
now with Det Norske Veritas (DNV) in Oslo, Norway.
[Jones, 1994] Jones, K. S. (1994). Information Retrieval: Current State, New Challenges and
Future Work. Foils presented at internal seminar at NTH.
[Jørgensen, 1995] Jørgensen, M. (1995). An Empirical Study of Software Maintenance Tasks.
Journal of Software Maintenance: Research and Practice, 7:27-48.
[Kaiser and Perry, 1987] Kaiser, G. E. and Perry, D. E. (1987). Workspaces and Experimental
Databases: Automated Support for Software Maintenance and Evolution. In Proceedings of the 1987 IEEE Conference on Software Maintenance, pages 108-114.
[Karlsson et al., 1992] E. A. Karlsson, G. Sindre, S. Sørumgård, and E. Tryggeseth. (1992).
"Weighted Term Spaces for Relaxed Search", In Proc. of International Conference on
Knowledge Management, Baltimore, 8-11 November, 1992.
[Keller and Nance, 1993] Keller, B. J. and Nance, R. E. (1993). Abstraction Refinement: A
Model for Software Evolution. Software Maintenance: Research and Practice, 5:123-145.
[Kellner et al., 1993] Kellner, M., Arnold, R. S., Chapin, N., Pigoski, T., Schneidewind, N.,
and Zvegintzov, N. (1993). CSM: Ten Years Later (Five invited papers). In [CSM93,
1993], pages 406-422.
[Kemerer, 1993] Kemerer, C. F. (1993). Reliability of Function Points Measurement - A Field
Experiment. Communications of the ACM, 36(2):85-97.
[Kinloch and Munro, 1993] Kinloch, D. and Munro, M. (1993). A Combined Representation
for the Maintenance of C Programs. In Proc. 2nd Workshop on Program Comprehension,
Jul 8-9, 1993, Capri, Italy, pp. 119-127.
[Kirkwood and Bazzana, 1994] Kirkwood, K. and Bazzana, G. (1994). A Software Reliability
Tool-Kit. Address: University of Strathclyde, Glasgow, UK, and Etnoteam Spa, Milan,
Italy.
[Kitchenham, 1987] Kitchenham, B. (1987). Towards a constructive quality model: Part I:
Software quality modelling, measurement and prediction. BCS/IEE Software Engineering Journal, pages 24-32.
[Kitchenham, 1994] Kitchenham, B. (1994). Measuring Software Quality. Published as 60
foils, got them from Tor Stålhane.
[Kitchenham and Petersen, ] Kitchenham, B. and Petersen, P. G. (??). A Strategy for Developing the COnstructive QUAlity MOdel (COQUAMO) for Software Development. Technical report, ESPRIT REQUEST (REliability and QUality of European Software Technology) project.
[Kitchenham and Pickard, 1987] Kitchenham, B. and Pickard, L. (1987). Towards a constructive quality model: Part II: Statistical techniques for modelling software quality in the
ESPRIT REQUEST project. BCS/IEE Software Engineering Journal, pages 57-69.
[Knight and Myers, 1993] Knight, J. C. and Myers, E. A. (1993). An Improved Inspection
Technique. Communications of the ACM, 36(11):51-61.
[Kontogiannis and Tilley, 1994] Kontogiannis, K. A. and Tilley, S. R. (1994). Reverse Engineering Questionnaire. Submitted to ACM SIGSOFT Software Engineering Notes.
[Kontogiannis et al., 1994] Kontogiannis, K. A., Tilley, S. R., Mori, R. D., and Müller, H. A.
(1994). User-Assisted Design Recovery of Legacy Software Systems. Submitted to
ICSE-16.
[Koshgoftaar et al., 1993] Koshgoftaar, T. M., Munson, J. C., and Lanning, D. L. (1993). A
Comparative Study of Predictive Models for Program Changes During System Testing
and Maintenance. In [CSM93, 1993], pages 72-79.
[Kouchakdijian and Basili, 1989] Kouchakdijian, A. and Basili, V. (1989). Evaluation of the
Cleanroom Methodology in the SEL. In Proceedings of the Fourteenth Annual Software
Engineering Workshop, NASA-Goddard.
Used a modified cleanroom process on an experiment on a real project using real software developers; found many advantages to the approach, including 91 percent error
removal prior to testing. But substituted rigorous inspection approaches for formal verification.
[Kozaczynski et al., 1991] Kozaczynski, W., Letovsky, S., and Ning, J. (1991). A Knowledge-Based Approach to Software System Understanding. In Proceedings of the 6th Annual
Knowledge-Based Software Engineering Conference, pages 162-170. IEEE. Also in
[Arnold, 1993b], pp. 642-650.
[Kozaczynski and Ning, 1989] Kozaczynski, W. and Ning, J. Q. (1989). SRE: A Knowledge-Based Environment for Large-Scale Software Re-engineering Activities. In Proc. 11th
Int'l Conference on Software Engineering, Pittsburgh, PA, USA, pages 113-122. IEEE/
ACM. Also in [Arnold, 1993b], pp. 632-641.
[Krogstie, 1994a] Krogstie, J. (1994a). Information Systems Development and Maintenance in
Norway - A Survey Investigation. Technical report, Norwegian Institute of Technology,
Department of Computer Science and Telematics, Trondheim.
Submitted to NOKOBIT94. Also have the questionnaire that was sent out.
[Krone and Snelting, 1994] Krone, M. and Snelting, G. (1994). On the Inference of Configuration Structures from Source Code. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 49-57.
[Laitenberger, 1995] Laitenberger, O. (1995). Perspective-based Reading: Technique, Validation and Research in Future. Technical Report ISERN-95-01, International Software Engineering Research Network.
[Lamb, 1988] Lamb, D. A. (1988). Software Engineering: Planning for Change. Englewood
Cliffs, N.J.: Prentice-Hall International.
This book is an introductory text to the software engineering field, mainly aimed towards students conducting a one-term development project in parallel.
[Landis et al., 1988] Landis, L. D., Hyland, P. M., Gilbert, A. L., and Fine, A. J. (1988). Documentation in a Software Maintenance Environment. In Proceedings of the Internation-
al Conference on Software Maintenance, pages 66-73. The Institute of Electrical and
Electronics Engineers, Inc. Also in [Arnold, 1993b], pp. 455-462.
Gives an overview of pros and cons of a selection of documentation techniques to help
maintenance programmers in understanding code. Gives a more thorough explanation
of a tool to generate Nassi-Shneiderman charts from source code written in Fortran,
Cobol, C and Ada.
[Lanubile and Visaggio, 1995] Lanubile, F. and Visaggio, G. (1995). Decision-driven Maintenance. Journal of Software Maintenance: Research and Practice, 7:91-115. See also
[Cimitile et al., 1992].
[Laski and Szermer, 1992] Laski, J. and Szermer, W. (1992). Identification of Program Modifications and its Applications in Software Maintenance. In [CSM92, 1992], pages 282-290.
[Lehman, 1991] Lehman, M. M. (1991). Software Engineering, the Software Process and
Their Support. In IEE Software Engineering Journal, Special Issue on Software Environments and Factories, 6(5), pp. 243-258, September, 1991.
[Lehmann, 1991] Lehmann, E. L. (1991). Theory of Point Estimation. Wadsworth & Brooks/
Cole Advanced Book & Software.
[Leveson et al., 1994] Leveson, N. G., Heimdahl, M. P. E., Hildreth, H., and Reese, J. D.
(1994). Requirements Specification for Process-Control Systems. IEEE Transactions
on Software Engineering, 20(9):684-707.
[Lewerentz and Lindner, 1994] Lewerentz, C. and Lindner, T. (1994). Case Study Production
Cell: A Comparative Study in Formal Specification and Verification. Technical report,
Forschungszentrum Informatik (FZI), Haid-und-Neu-Straße 10-14, 76131 Karlsruhe,
Germany. A complete report of the study is available as a printed book from the authors.
[Lieberherr and Xiao, 1993] Lieberherr, K. J. and Xiao, C. (1993). Object-Oriented Software
Evolution. IEEE Transactions on Software Engineering, 19(4):313-343.
[Lientz et al., 1978] Lientz, B. P., Swanson, E. B., and Tompkins, G. E. (1978). Characteristics of
Application Software Maintenance. Communications of the ACM, 21(6).
Surveys 69 computing installations to identify the characteristics of software maintenance. Describes what maintenance is like in a typical data-processing computing shop.
[Linos et al., 1993] Linos, P., Aubet, P., Dumas, L., Helleboid, Y., Lejeune, P., and Tulula, P.
(1993). Facilitating Comprehension of C Programs: An Experimental Study. In Proc. 2nd
Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 55-63.
[Linthicum, 1995] Linthicum, D. S. (1995). The End of Programming. Byte Magazine, pages 69-72.
[Littlewood and Strigini, 1993] Littlewood, B. and Strigini, L. (1993). Validation of Ultrahigh
Dependability for Software-based Systems. Communications of the ACM, 36(11):69-80.
[Livadas and Alden, 1993] Livadas, P.E. and Alden, S.D. (1993). A Toolset for Program Understanding. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri,
Italy, pp. 110-118.
[Livadas and Johnson, 1994a] Livadas, P. E. and Johnson, T. (1994a). A New Approach to
Finding Objects in Programs. Software Maintenance: Research and Practice, 6:249-260.
[Livadas and Johnson, 1994b] Livadas, P. E. and Johnson, T. (1994b). A New Approach to
Finding Objects in Programs. Software Maintenance: Research and Practice, 6(5):249-260.
[Lonchamp, 1993] Lonchamp, J. (1993). An Assessment Exercise. Technical report, Promoter
Esprit BRA.
[Longstreet, 1990] Longstreet, D. H. (1990). Software Maintenance and Computers. IEEE
Computer Society Press Tutorial. IEEE Computer Society Press.
This book contains 35 articles about software maintenance. The articles are all published elsewhere.
[Loyall and Mathisen, 1993] Loyall, J. P. and Mathisen, S. A. (1993). Using Dependence
Analysis to Support the Software Maintenance Process. In [CSM93, 1993], pages 282-291.
The article presents a definition of interprocedural dependence analysis; this indicates
the conditions under which a statement of one procedure is dependent on a statement of
another procedure. Maintainers are questioned, and express interest in automatic
change management for: (i) evaluating appropriateness of a proposed modification, (ii)
driving regression testing, and (iii) indicating vulnerability to critical sections of code.
The article exhaustively gives formal definitions on control flow graphs and several dependence relations (in the spirit of Podgurski and Clarke [Podgurski and Clarke,
1990]). The article is poor on experience examples, but OK if you want to implement a
dependence analysis tool.
[Lukowicz et al., 1994] Lukowicz, P., Heinz, E. A., Prechelt, L., and Tichy, W. F. (1994). Experimental evaluation in computer science: A quantitative study. Technical Report 17/
94, Fakultät für Informatik, Universität Karlsruhe, 76137 Karlsruhe, Germany.
anonymous ftp: /pub/papers/techreports/1994/1994-17.ps.Z on ftp.ira.uka.de, A similar
version will appear in the January issue of Journal of Systems and Software. A survey
of over 400 recent research articles suggests that computer scientists publish relatively
few papers with experimentally validated results. The survey includes complete volumes
of several refereed computer science journals, a conference, and 50 titles drawn at random from all articles published by ACM in 1993. The journals Optical Engineering
(OE) and Neural Computation (NC) were used for comparison. Of the papers in the random sample that would require experimental validation, 40% have none at all. In journals related to software engineering, this fraction is over 50%. In comparison, the fraction of papers lacking quantitative evaluation in OE and NC is only 15% and 12%, respectively. Conversely, the fraction of papers that devote one fifth or more of their space
to experimental validation is almost 70% for OE and NC, while it is a mere 30% for the
CS random sample and 20% for software engineering. The low ratio of validated results
appears to be a serious weakness in computer science research. This weakness should
be rectified for the long-term health of the field.
[MacDonald et al., 1995a] MacDonald, F., Miller, J., Brooks, A., Roper, M., and Wood, M.
(1995a). A Review of Tool Support for Software Inspection. Technical Report RR/95/
181EFoCS-6-95, Empirical Foundations of Computer Science, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
[MacDonald et al., 1995b] MacDonald, F., Miller, J., Brooks, A., Roper, M., and Wood, M.
(1995b). Automating the Software Inspection Process. Technical Report RR/95/187
EFoCS-13-95, Empirical Foundations of Computer Science, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
[Mamone, 1994] Mamone, S. (1994). The IEEE Standard for Software Maintenance. ACM SigSoft: Software Engineering Notes, 19(1):75-76 (double paged).
[Mancini, 1992] Mancini, L. (1992). Romancing the Quantitative Maintenance Management.
In [CSM92, 1992], pages 58-62.
[Mancl and Havanas, 1990] Mancl, D. and Havanas, W. (1990). A Study of the Impact of C++
On Software Maintenance. In [CSM90, 1990], pages 63-69.
[Marmorstein, 1989] Marmorstein, A. (1989). Review of Arthur's book [Arthur, 1988]. IEEE
Computer, page 116.
[Matson et al., 1994] Matson, J. E., Barrett, B. E., and Mellichamp, J. M. (1994). Software Development Cost Estimation Using Function Points. IEEE Transactions on Software Engineering, 20(4):275-287.
[McClean, 1984] McClean, J. (1984). A Formal Method for the Abstract Specification of Software. Journal of the ACM, 21(3):600-672.
[McGarry, 1994b] McGarry, F. (1994b). Software Process Improvement Program in the
NASA Software Engineering Laboratory. Slide copies, presented at ESI, 26th October
1994.
[Merlo et al., 1993] Merlo, E., DeMori, R., and Kontogiannis, K. (1993). A Process Algebra
Based Program and System Representation for Reverse Engineering. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 17-25.
[Middleton, 1995] Middleton, P. (1995). Maintenance Management: From Product to Process.
Journal of Software Maintenance: Research and Practice, 7:63-73.
[Midtstraum, 1987] Midtstraum, R. (1987). Forvaltningsdokumentasjon. Master's thesis, Department of Computer Systems and Telematics, Norwegian Institute of Technology.
(Thesis is in Norwegian.) Explain the role of maintenance documentation in relation to
other documentation types in the different development phases of a computer system. Try to identify which requirements we must place on the maintenance documentation. Investigate whether the work of preparing the maintenance
documentation can be supported by technical aids.
[MIL-STD-SDD, 1992] MIL-STD-SDD (1992). Military Standard: Software Development
and Documentation (Draft). Technical report, U.S. Department of Defense.
[Miller et al., 1995] Miller, J., Daly, J., Wood, M., Brooks, A., and Roper, M. (1995). Electronic Bulletin Board Distributed Questionnaires for Exploratory Research. Technical Report RR-95-186EFoCS-11-95, Empirical Foundations of Computer Science, Dept. of
Computer Science, Univ. of Strathclyde, Glasgow, UK.
[Mills, 1988] Mills, H. D. (1988). Stepwise Refinement and Verification in Box-Structured
Systems. IEEE Computer, pages 23-36.
[Mills et al., 1987] Mills, H. D., Dyer, M., and Linger, R. (1987). Cleanroom Software Engineering. IEEE Software.
The definitive article about cleanroom by its originators. Found that human verification, even though fallible, could replace debugging in software development, and that human verification is surprisingly synergistic with statistical testing.
[Mittra, 1995] Mittra, S. S. (1995). A Road Map for Migrating Legacy Systems to Client/Server. Journal of Software Maintenance: Research and Practice, 7:117-130.
[Miyoshi and Azuma, 1993] Miyoshi, T. and Azuma, M. (1993). An Empirical Study of Evaluating Software Development Environment Quality. IEEE Transactions on Software
Engineering, 19(5):425-435.
[Möller and Paulish, 1993] Möller, K. H. and Paulish, D. J. (1993). Software Metrics: A Practitioner's Guide to Improved Product Development. IEEE Press and Chapman & Hall.
Materials in this book have been developed from work carried out by the ESPRIT PYRAMID research project.
[Monk et al., 1994] Monk, S., Sommerville, I., Pendaries, J. M., and Durin, B. (1994). Supporting Design Rationale for System Evolution. Working Paper P-WOR-D3.3-03-LAN,
Proteus (ESPRIT project 6086).
Submitted to Software Practice and Experience.
[Moore, 1993] Moore, L. (1993). Managing Software Development to Cost and Schedule. In
??, pages 475-482. IEEE.
[Müller et al., 1993a] Müller, H. A., Orgun, M. A., Tilley, S. R., and Uhl, J. S. (1993a). A Reverse Engineering Approach to Subsystem Structure Identification. Software Maintenance: Research and Practice, 5(4):181-204.
[Müller et al., 1992] Müller, H. A., Tilley, S. R., Orgun, M. A., Corrie, B. D., and Madhavji,
N. H. (1992). A Reverse Engineering Environment Based on Spatial and Visual Software Interconnection Models. In Proceedings of the Fifth ACM SIGSOFT Symposium
on Software Development Environments (SIGSOFT '92), ACM Software Engineering
Notes 17(5), pages 88-98.
[Müller et al., 1993b] Müller, H. A., Tilley, S. R., and Wong, K. (1993b). Understanding Software Systems Using Reverse Engineering Technology - Perspectives from the Rigi
Project. In Proceedings of CASCON 93, Toronto, Ontario, October 25-28, 1993, pages
217-226.
[Müller et al., 1994a] Müller, H. A., Tilley, S. R., Wong, K., Whitney, M. J., and Storey, M.-A. D. (1994a). Rigi - An Extensible System for Retargetable Reverse Engineering. Part
of documentation of RIGI, ftp-ed from tara.uvic.ca, 2 pages.
[Müller et al., 1994b] Müller, H. A., Wong, K., and Tilley, S. R. (1994b). Understanding Software Systems Using Reverse Engineering Technology. Paper, Department of Computer
Science, University of Victoria, P.O. Box 3055, Victoria BC, Canada V8W 3P6.
[Munch, 1994a] Munch, B. (1994a). ECM User Guide. Technical report, IDT/NTH. October
19.
[Munch, 1994b] Munch, B. (1994b). EPIT: Product Installation Tool for EPOS. Technical report, IDT/NTH. June 29.
[Munch, 1994c] Munch, B. (1994c). EPOS CM Tool Set. Technical report, IDT/NTH. August
12.
[Munch et. al., 1995] B. P. Munch, R. Conradi, J-O. Larsen, M. N. Nguyen, and P. H. Westby.
(1995). Integrated Product and Process Management in EPOS. Journal of Integrated
CAE. (Special issue on Integrated Product and Process Modelling)
[Mylopoulos et al., 1994] Mylopoulos, J., Stanley, M., Wong, K., Bernstein, M., Mori, R. D.,
Ewart, G., Kontogiannis, K., Merlo, E., Müller, H., Tilley, S. R., and Tomic, M. (1994).
Towards an Integrated Toolset for Program Understanding. In Proceedings of CASCON
94, 13 pages.
[Navlakha, 1986] Navlakha, J. (1986). Software Productivity Metrics: Some Candidates and
Their Evaluation. In Proceedings of the 1986 National Computer Conference, Volume
55, pages 69-76.
[Neil and Bache, 1993] Neil, M. and Bache, R. (1993). Data Linkage Maps. Software Maintenance: Research and Practice, 5:155-164.
A symmetric metric for data dependency between pairs of procedures is defined. This
is used to measure the dependency between all pairs of procedures. Multidimensional
scaling is then used on these data to obtain a 2-dimensional view of the clustering of the
procedures. Very much like affinity computation. Proposed use is for sw. maintenance,
but also seems suitable for re-engineering.
[Newton and Bennett, 1993] Newton, J. and Bennett, K. (1993). Designing Systems for Future
Maintainability: A Case Study. In [CSM93, 1993], pages 272-280.
[Nguyen and Conradi, 1994] Nguyen, M. N. and Conradi, R. (1994). SPELL: A Logic Programming Language for Process Modelling. In Proc. from Workshop on Logic Programming in Software Engineering, Santa Margherita Ligure (Genova), Italy, June,
1994, pp. 15-23.
[Nuseibeh et al., 1994] Nuseibeh, B., Kramer, J., and Finkelstein, A. (1994). A Framework for
Expressing the Relationships Between Multiple Views in Requirements Specification.
IEEE Transactions on Software Engineering, 20(10):760-773.
[Ogando et al., 1994] Ogando, R. M., Yau, S. S., Liu, S. S., and Wilde, N. (1994). An Object
Finder for Program Structure Understanding in Software Maintenance. Software Maintenance: Research and Practice, 6(5):261-283.
[Oman and Hagemeister, 1992] Oman, P. and Hagemeister, J. (1992). Metrics for Assessing a
Software System's Maintainability. In [CSM92, 1992], pages 337-344.
[Osborne and Raigrodski, 1986] Osborne, W. M. and Raigrodski, R. (1986). Annotated Bibliography on Software Maintenance. Technical Report NBS Special Publication 500-141,
U.S. Department of Commerce, National Bureau of Standards.
The report contains summaries of two hundred and eighty-five software maintenance
articles or papers from computer science journals, books, proceedings, Federal publications, computer newspapers, and other technical reports. It covers the years 1972-86,
and presents an overview of the various aspects of software maintenance including
problems and issues faced in most software maintenance environments. It identifies
techniques, procedures, methodologies, and tools that have been effectively employed
throughout the software system life cycle to improve the quality of that system. Keywords: documentation, metrics, productivity, programmers, software configuration
management (SCM), software errors, software life cycle, software maintenance costs,
software packages, software quality, techniques, testing, tools, users.
[Ott and Bieman, 1992] Ott, L. and Bieman, J. (1992). Effects of Software Changes on Module
Cohesion. In [CSM92, 1992], pages 345-353.
[Ourston, 1989] Ourston, D. (1989). Program Recognition. IEEE Expert, 4(4):36-49. Also in
[Arnold, 1993a], pp. 615-628.
[Padula, 1993] Padula, A. (1993). Use of a Program Understanding Taxonomy at Hewlett-Packard. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy,
pp. 66-70.
[Papapanagiotakis, 1994] Papapanagiotakis, G. (1994). A Software Maintenance Management
Model based on Queuing Networks. Software Maintenance: Research and Practice,
6:73-97.
[Parnas, 1995] Parnas, D. L. (1995). On ICSE's Most Influential Papers. ACM SIGSOFT
Software Engineering Notes, 20(3):29-36.
[Pau and Kristinsson, 1990] Pau, L. F. and Kristinsson, J. B. (1990). Softm: A Software Maintenance Expert System in Prolog. Technical Report, March 1990, R424, Electromagnetics Institute, Technical University of Denmark.
From a first glance at the report, it looks very similar to the LaSSIE system by Devanbu
[Devanbu et al., 1991]. The following is the abstract from the technical report:
"This paper describes a software maintenance knowledge-based system called SOFTM,
which serves the three following purposes: (1) assisting a software programmer or analyst in application code maintenance tasks and reverse engineering, (2) generating or
updating automatically software correction documentation, (3) helping the end user register, and possibly interpret, observed errors on the successive application code versions.
The knowledge-based system SOFTM is written in PROLOG II, and is largely applicable to application codes written in different programming languages, provided that a
description of the application code can be retrieved. SOFTM does not address any of the
syntactic, input-output, or procedural errors normally detected by the syntactic analyzer,
compiler, or by the operating system environment, although some of these extensions
are possible. SOFTM is relying on a unique ATN network-based code description, on
diagnostic inference procedures based on context-based pattern classification, on maintenance log report generators, and on features whereby application code specific facts
can be retrieved by the operating system (VMS) using a command procedure and asserted directly in the SOFTM knowledge bases. An application case is given for FORTRAN
code."
[Pei and Victoria, 1994] Pei, G. and Victoria, A. (1994). Reusability in Software Maintenance.
Software Maintenance: Research and Practice, 6(3):165-183.
Japan benefits from a level of software reuse in software maintenance more than ten
times the level in the USA. We report in this paper on our investigation during 1992 and
1993 into the management factors associated with that difference. The four main parts
of this paper are the motivation for applying software reusability, maintenance management in the USA, maintenance management in Japan, and USA versus Japan. Our investigation included on-site visits in Japan and the USA, interviews with experts in Japan and the USA, and a review of the relevant Japanese and American literature (one
of us speaks and reads Japanese fluently).
Organizations in Japan applying software reuse report reduced maintenance backlogs,
better satisfaction of customer (user) requests, and improved maintenance productivity.
In Japan, about a third of organizations apply software reuse, and the reuse level is typically in the 50% to 75% range, we observe. In the USA, less than 2% of the organizations apply software reuse in software maintenance, and the reuse level those organizations achieve typically is below 35%. The approach to software reuse in Japan is more
business-oriented, with an emphasis on team work-methods and quality improvement.
The approach in the USA is more technically oriented, with an emphasis on tools and
operating practices. The approach in Japan reflects a tradition of building and maintaining in terms of assemblies and subassemblies to help control product quality over a long
term. The approach in the USA reflects management's pressure for a short-term payoff,
and the relatively easy availability of personnel skills when needed. In both Japan and
the USA, personnel factors are a major concern for information systems management in
determining the extent of software reuse in software maintenance. Underlying all software maintenance reuse differences between Japan and the USA are structural and cultural differences which shape the extent and manner of applying software reusability in
software maintenance.
[Perry, 1987] Perry, D. E. (1987). Software Interconnection Models. In [Riddle, 1987], pages
61-69.
[Pfleeger, 1993] Pfleeger, S. L. (1993). Lessons Learned in Building a Corporate Metrics Program. IEEE Software, pages 67-74.
[Pfleeger and Bohner, 1990] Pfleeger, S. L. and Bohner, S. A. (1990). A Framework for Software Maintenance Metrics. In [CSM90, 1990], pages 320-327.
Recognize impact analysis as a primary activity in software maintenance and present a
framework for software metrics which could be used as a basis for measuring stability
of the whole system including documentation. Traceability graph, showing the interconnections among source code, test cases, design documents and requirements.
[Pickard and Carter, 1993] Pickard, M. M. and Carter, B. D. (1993). Maintainability: What Is
It And How Do We Measure It. ACM SigSoft: Software Engineering Notes, 18(3):A36-A39.
[Pigoski and Sexton, 1990] Pigoski, T. M. and Sexton, J. (1990). Software Transition: A Case
Study. In [CSM90, 1990], pages 200-204.
[Podgurski and Clarke, 1990] Podgurski, A. and Clarke, L. A. (1990). A Formal Model of Program Dependences and Its Implication for Software Testing, Debugging and Maintenance. IEEE Transactions on Software Engineering, SE-16(9):965-979.
[Prieto-Diaz, 1991] Prieto-Diaz, R. (1991). Implementing Faceted Classification for Software
Reuse. Communications of the ACM, 34(5):88-97.
[Prieto-Diaz and Arango, 1991] Prieto-Diaz, R. and Arango, G. (1991). Domain Analysis and
Software Systems Modelling. IEEE Computer Society Press. A set of papers on domain
analysis.
[PROTEUS, 1993a] PROTEUS (1993a). Proteus Consortium, P-D22a: Specification of PCL
Tools.
[PROTEUS, 1993b] PROTEUS (1993b). Proteus Consortium, P-D31A-HOOD: Change Support with HOOD.
[PROTEUS, 1993c] PROTEUS (1993c). Proteus Consortium, P-D33a-Part2: Evolution Model
V1: Part 2 Change Process.
[PROTEUS, 1993d] PROTEUS (1993d). Proteus Consortium, P-DEL34a: Specification of the
Proteus Configuration Language.
[PROTEUS, 1993e] PROTEUS (1993e). Proteus Consortium, P-WOR-A132-BG01-CAP: A
PCL model for CSS Application.
[PROTEUS, 1994a] PROTEUS (1994a). Proteus Consortium, P-D32b: Domain Analysis
Method.
[PROTEUS, 1994c] PROTEUS (1994c). Proteus Consortium, Towards a Global Methodological Framework.
[PROTEUS, 1995] I. Sommerville, G. Mayobre, R. Braek, K. Mertes, M. Breuer, J. Leger, JM. Pendaries, M-A. Gandrieau, J. Floch, E. Tryggeseth: P-DEL-3.5.A.4.0 Proteus
Framework, Proteus Technical Report, pp. 65, January, 1995
[Rajlich, 1994] Rajlich, V. (1994). Program Reading and Comprehension. Tutorial Notes at International Conference on Software Maintenance, 1994, Victoria B.C., September 21.
[Reiss and Davis, 1995] Reiss, S. P. and Davis, T. (1995). Experiences Writing Object-Oriented Compiler Front Ends. Included with the v1.83 distribution of the Cppp (C plus plus
parser) from Brown University. The authors can be reached at: Dept. of Computer Science, Brown University, Providence, RI 02912, USA. Their mail addresses are
spr@cs.brown.edu and ted@cs.brown.edu.
[Rich and Wills, 1990] Rich, C. and Wills, L. M. (1990). Recognizing a Program's Design: A
Graph-Parsing Approach. IEEE Software, pages 82-89. Also in [Arnold, 1993b], pp.
534-541.
[Richards and Mannion, 1995] Richards, P. and Mannion, M. (1995). Building Traceability
into Requirements Specifications. Presented in the doctoral consortium, RE'95, Dept. of
Mech., Manuf. and Software Engineering, Napier University, Edinburgh, Scotland. Argues for providing increased traceability from the initial conceptual stage of requirements elicitation through to the formal written description of a system's requirements.
Also describes a CASE tool to be used for experimenting with such traceability.
[Richardson and Hodil, 1984] Richardson, G. and Hodil, E. D. (1984). Redocumentation: Addressing the maintenance legacy. In Proceedings of the 1984 National Computer Conference, Volume 53, pages 203-208.
[Riddle, 1987] Riddle, W. E., editor (1987). Proceedings of the 9th International Conference
on Software Engineering, Monterey, CA, March 30 - April 2.
[Rifkin and Cox, 1991] Rifkin, S. and Cox, C. (1991). Measurement in Practice. Technical Report CMU/SEI-91-TR-16, Software Engineering Institute, Carnegie Mellon University.
[Roche, 1994] Roche, J. M. (1994). Software Metrics and Measurement Principles. ACM Sigsoft: Software Engineering Notes, 19(1):77-85 (double paged).
[Rochkind, 1975] Rochkind, M. J. (1975). The Source Code Control System. IEEE Transactions on Software Engineering, 1(4):364-370.
[Rombach, 1987] Rombach, H. D. (1987). A Controlled Experiment on the Impact of Software
Structure on Maintainability, IEEE Transactions on Software Engineering, 13(3), 344-
354, 1987.
[Rombach and Basili, 1987] Rombach, H. D. and Basili, V. R. (1987). Quantitative Assessment
of Maintenance: An Industrial Case Study. In Proc. of the Conference on Software Maintenance, 1992, IEEE Computer Society Press, Los Alamitos, CA, pp. 294-298. (Referenced in [Brown et al., 1995], do not have own copy)
[Rombach et al., 1993] Rombach, H. D., Basili, V. R., and Selby, R. W., editors (1993). Experimental Software Engineering Issues: Critical Assessment and Future Directions, volume 706 of Lecture Notes in Computer Science. Springer Verlag. The book is the proceedings from the International Workshop, Dagstuhl Castle, Germany, September
1992.
[Roper et al., 1994] Roper, M., Miller, J., Brooks, A., and Wood, M. (1994). Towards the Experimental Evaluation of Software Testing Techniques. Technical report, Dept. of Computer Science, Univ. of Strathclyde, Glasgow, UK.
Abstract: Despite the existence of a large number of software testing techniques we are
largely ignorant of their respective powers as software engineering methods. It is argued that more experimental work in software testing is necessary in order to place testing techniques onto a scale of measurement other than the nominal. Current experimental practices are examined using a parametric framework and are shown to contribute
little towards a cohesive and useful body of knowledge. A number of suggestions are
made regarding how experimentation may progress at a faster and more productive
rate.
[Rosson and Carrol, 1991] Rosson, M. B. and Carrol, J. M. (1991). A View Matcher for Reusing Smalltalk Classes. In Proceedings of CHI'91, pages 277-283.
[Rugaber et al., 1990] Rugaber, S., Ornburn, S. B., and LeBlanc, R. J., Jr. (1990). Recognizing Design Decisions in Programs. IEEE Software, pages 46-54. Also in [Arnold,
1993b], pp. 463-471.
[Ruhl and Gunn, 1991] Ruhl, M. K. and Gunn, M. T. (1991). Software Reengineering: A Case
Study and Lessons Learned. Technical Report NIST Special Publication 500-193, US
DoC, National Institute of Standards and Technology (NIST).
Software reengineering and other related terms are defined and possible benefits that
relate to this technology are described. The use of CASE tools for reengineering is
examined. A case study that examines the feasibility and cost-effectiveness of software
reengineering is described. Study results are addressed along with recommendations
for organizations that are considering the use of reengineering.
[Sakthivel, 1994] Sakthivel, S. (1994). A Decision Model to Choose between Software Maintenance and Software Redevelopment. Software Maintenance: Research and Practice,
6:121-143.
[Sap and McGregor, 1992] Sap, M. N. M. and McGregor, D. R. (1992). Natural Language Interfaces to Databases: State of the Art. Technical report, Dept. of Computer Science,
Univ. of Strathclyde, Glasgow, UK.
[Schach, 1994] Schach, S. R. (1994). The Economic Impact of Software Reuse on Maintenance. Software Maintenance: Research and Practice, 6(3):185-196.
Software reuse has traditionally been put forward as a mechanism for reducing the cost
of developing a product. This paper shows that the overall economic impact of reuse on
maintenance is greater than its impact on development. A specific reuse example is
worked in detail to illustrate the impact of reuse on the various types of maintenance
that are performed on a software product. On average, the cost savings during maintenance as a consequence of reuse are nearly twice the corresponding savings during development. The results are generalized to show that, for an arbitrary product, the cost
savings during maintenance as a consequence of software reuse exceed the cost savings
during development as a consequence of reuse when more than about 51% of the software budget is devoted to maintenance, and increase rapidly as the proportion of the
budget devoted to maintenance becomes large.
[Schaefer, 1985] Schaefer, H. (1985). Metrics for Optimal Maintenance Management. In Proceedings IEEE Conference on Software Maintenance, pages 114-119. IEEE, Washington D.C.
I do not have this.
[Schatzberg, 1993] Schatzberg, D. R. (1993). Total Quality Management for Maintenance Process Improvement. Software Maintenance: Research and Practice, 5:1-12.
[Schneidewind, 1987] Schneidewind, N. F. (1987). The State of Software Maintenance. IEEE
Transactions on Software Engineering, pages 303-310.
[Schneidewind, 1993] Schneidewind, N. F. (1993). Report on the IEEE Standard for a Software Quality Metrics Methodology. ACM SigSoft: Software Engineering Notes,
18(6):A95-A98.
[Schwanke, 1991] Schwanke, R. W. (1991). An Intelligent Tool For Re-engineering Software
Modularity. In Proc. 13th Int'l Conference on Software Engineering, Austin, TX, USA,
pages 83-92. IEEE.
[Schwanke et al., 1989] Schwanke, R. W., Altucher, R. Z., and Platoff, M. A. (1989). Discovering, Visualizing, and Controlling Software Structure. In Proceedings of 5th International Workshop on Software Specification and Design, Pittsburgh, USA, pages 147-150. ACM, ACM Software Engineering Notes 14(3).
[Schwanke and Kaiser, 1988] Schwanke, R. W. and Kaiser, G. E. (1988). Smarter Recompilation. ACM Transactions on Programming Languages and Systems, 10(4):627-632.
An optimization to Tichy's approach to smarter recompilation.
[Schwanke and Platoff, 1989] Schwanke, R. W. and Platoff, M. A. (1989). Cross References
are Features. In Proc. 2nd International Workshop on Software Configuration Management, Princeton, USA, pages 86-95. IEEE/ACM, ACM Software Engineering Notes
14(7).
[Sefcik, 1994] Sefcik, J. G. (1994). Critical Success Factors for Implementing Software Quality Plans. ACM SigSoft: Software Engineering Notes, 19(1):72-74.
[Selby et al., 1987] Selby, R. W., Basili, V., and Baker, F. T. (1987). Cleanroom Software Development: An Empirical Evaluation. IEEE Transactions on Software Engineering.
Reports on an experiment using student software developers to develop 800 to 2300
lines of software in which productivity and quality were improved by cleanroom even
among first-time users.
[Sherer, 1992] Sherer, S. (1992). Cost Benefit Analysis and the Art of Software Maintenance.
In [CSM92, 1992], pages 70-77.
[Sherif et al., 1985] Sherif, Y. S., Ng, E., and Steinbacher, J. (1985). Computer Software Quality Measurements and Metrics. Microelectronics and Reliability, 25(6):1105-1150.
From [Fenton, 1991]: "Very good survey paper. Because it appeared in such an unusual
source, it has been largely overlooked by the software metrics community. The survey is
based on a classification of attributes which appears to amount to a new quality model. Also contains a good bibliography."
[Shneiderman, 1980] Shneiderman, B. (1980). Software Psychology - Human Factors in Computer and Information Systems. Winthrop Publishers, Inc., Cambridge, Massachusetts.
This book reviews current trends and experimental results which have immediate application in software engineering and offers a model of human behavior which may be
useful for further research. The author presents a definitive study on people who develop
and maintain software. Some of the sections included in this book are as follows: programming style, team organization, personality factors, and software quality evaluation.
[SIGMAINT1, 1990] SIGMAINT1 (1990). European Special Interest Group in Software
Maintenance, Newsletter no. 1. Distributed through SIGMAINT mailing list.
[SIGMAINT2, 1991] SIGMAINT2 (1991). European Special Interest Group in Software
Maintenance, Newsletter no. 2. Distributed through SIGMAINT mailing list.
[SIGMAINT3, 1992] SIGMAINT3 (1992). European Special Interest Group in Software
Maintenance, Newsletter no. 3. Distributed through SIGMAINT mailing list.
[SIGMAINT4, 1992] SIGMAINT4 (1992). European Special Interest Group in Software
Maintenance, Newsletter no. 4. Distributed through SIGMAINT mailing list.
[SIGMAINT5, 1993] SIGMAINT5 (1993). European Special Interest Group in Software
Maintenance, Newsletter no. 5. Distributed through SIGMAINT mailing list.
[SIGMAINT6, 1993] SIGMAINT6 (1993). European Special Interest Group in Software
Maintenance, Newsletter no. 6. Distributed through SIGMAINT mailing list.
[Signore and Loffredo, 1993] Signore, O. and Loffredo, M. (1993). Charon: A Tool for Code
Redocumentation and Re-Engineering. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 169-176.
[Sillince, 1994] Sillince, J. A. A. (1994). A Design for Information Systems which can Adapt
to Changing Organizational Requirements. Software Maintenance: Research and Practice, 6:145-160.
[Sindre et al., 1994] Sindre, G., Conradi, R., and Karlsson, E.-A. (1994). The REBOOT Approach to Software Reuse. To appear in Journal of Systems and Software, spring 1995.
[Sjøberg, 1993] Sjøberg, D. (1993). Quantifying Schema Evolution. Information and Software
Technology, 35(1):35-44.
[Slovin and Malik, 1991] Slovin, M. and Malik, S. (1991). Reengineering to Reduce System
Maintenance: A Case Study. Software Engineering, pages 14-24. Also in [Arnold,
1993b], pp. 162-172.
[Sneed, 1984] Sneed, H. M. (1984). Software Renewal: A Case Study. IEEE Software,
1(3):56-63.
[Sneed, 1991] Sneed, H. M. (1991). Economics of Software Re-engineering. Software Maintenance: Research and Practice, 3(3):163-182. Also in [Arnold, 1993b], pp. 121-140.
[Sneed and Kaposi, 1990] Sneed, H. M. and Kaposi, A. (1990). A Study on the Effect of Reengineering upon Software Maintainability. In [CSM90, 1990], pages 91-99. Also in
[Arnold, 1993b], pp. 223-231.
[Sommerville and Rodden, 1994] Sommerville, I. and Rodden, T. (1994). Requirements Engineering and Cooperative Systems. Technical Report CSCW/1/1994, Lancaster University, Computing Department, Centre for Research in CSCW.
[Stålhane, 1992] Stålhane, T. (1992). Software Metrics, Current Trends and Future Development. In Proceedings of Norsk Informatikk Konferanse (NIK-92), pages 137-148.
[Stålhane and Wedde, 1994] Stålhane, T. and Wedde, K. J. (1994). The Quest for Reliability:
A Case Study. Journal of Systems and Software, 26:69-76.
[Stark et al., 1994] Stark, G. E., Kern, L. C., and Vowell, C. W. (1994). A Software Metrics
Set for Program Maintenance Management. Journal of Systems and Software, 25:239-249.
NASA's Mission Operations Directorate (MOD) introduced a set of 13 metrics (identified by the goal/question/metric paradigm) for program maintenance management. The
effect of this introduction was more visibility of the status of ongoing maintenance
projects, and provided analysts with the information needed to schedule staffing and
other resources. Although the metrics were valuable individually, more benefit was
gained when several metrics were analyzed together. The metrics program was implemented with two simple tools, one public domain, the other inexpensive commercial. The
cost of introducing the metrics program in the organization and its contractors was remarkably low, adding only 0.1% to the maintenance budget.
[Swanson and Beath, 1990] Swanson, E. B. and Beath, C. M. (1990). Departmentalization in
Software Development and Maintenance. Communications of the ACM, 33(6):658-667.
[Takahashi et al., 1995] Takahashi, K., Oka, A., Yamamoto, S., and Isoda, S. (1995). A Comparative Study of Structured and Text-Oriented Analysis and Design Methodologies.
Journal of Systems and Software, 28:69-75. Two experiments were conducted to compare
the effectiveness of the SA/SD methodology and the conventional text-oriented analysis
and design method. In the first experiment, subjects were asked to understand requirement specifications, detect specification errors and identify change influences. The experiment showed that subjects using SA documents had higher scores for data definition
and interface, but they were prone to fail to understand specifications when relevant
constituents were distributed over the documents. In the second experiment, the requirement analysis and design processes using these two methods were compared. This experiment showed that the SA/SD method supported by a CASE environment is more efficient and reliable than the text-oriented method. The SA/SD method allows analysts to
formally specify more detailed requirements with the same effort and produce less-error-prone requirements specification and module design documents.
[Tamai and Torimitsu, 1992] Tamai, T. and Torimitsu, Y. (1992). Software Lifetime and its
Evolution Process over Generations. In [CSM92, 1992], pages 63-69.
[Terry and Cameron, 1987] Terry, B. W. and Cameron, R. D. (1987). Software Maintenance
Using Metaprogramming Systems. In Proceedings of the 1987 IEEE Conference on
Software Maintenance, pages 115-119.
[Thayer and McGettrick, 1993] Thayer, R. H. and McGettrick, A. D. M., editors, (1993). Software Engineering: A European Perspective. IEEE Press.
[Tichy, 1988] Tichy, W. F. (1988). Response to Schwanke and Kaiser's Smarter Recompilation. ACM Transactions on Programming Languages and Systems, 10(4):633-634. Response to [Schwanke and Kaiser, 1988].
[Tichy, 1992] Tichy, W. F. (1992). Programming-in-the-Large: Past, Present, and Future. In
ICSE92, Melbourne.
[Tilley, 1992] Tilley, S. R. (1992). Management Decision Support Through Reverse Engineering Technology. In Proceedings of CASCON 92, Toronto, Ontario; November 9-11,
1992, pages 319-328.
[Tilley, 1993] Tilley, S. R. (1993). Documenting-in-the-large vs. Documenting-in-the-small.
In Proceedings of CASCON 93, Toronto, Ontario, October 25-28, 1993, pages 1083-1090.
[Tilley, 1994] Tilley, S. R. (1994). Domain-Retargetable Reverse Engineering II: Personalized
User Interfaces. In International Conference on Software Maintenance.
[Tilley and Müller, 1993] Tilley, S. R. and Müller, H. A. (1993). Using Virtual Subsystems in
Project Management. In Proceedings of the Sixth International Conference on Computer Aided Software Engineering, CASE 93, Singapore, July 19-23, 1993, pages 144-153. IEEE Computer Society Press, order number 3480-02.
[Tilley et al., 1993] Tilley, S. R., Müller, H. A., Whitney, M. J., and Wong, K. (1993). Domain-Retargetable Reverse Engineering. In [CSM93, 1993], pages 142-151.
[Tilley et al., 1993b] Tilley, S. R., Whitney, M. J., Müller, H. A., and Storey, M.-A. (1993).
Personalized Information Structures. In Proceedings of the 11th International Conference on Systems Documentation (SIGDOC 93), Waterloo, Ontario, October 5-8, 1993,
pages 325-337. ACM Order Number 6139330.
[Tilley and Wong, 1994] Tilley, S. R. and Wong, K. (1994). Meta-Document Construction
(Abstract only). Submitted to ECWC94 (Utrecht, The Netherlands, November 1994),
February 1994, 1 page. Correspondence regarding this article should be directed
to Scott R. Tilley, Department of Computer Science, University of Victoria, P.O. Box
3055, Victoria BC, Canada V8W 3P6.
[Tilley et al., 1994] Tilley, S. R., Wong, K., Storey, M.-A. D., and Müller, H. A. (1994). Programmable Reverse Engineering. Preprint submitted to International Journal of Software Engineering and Knowledge Engineering, 34 pages. Correspondence regarding this article should be directed to Scott R. Tilley, Department of Computer Science,
University of Victoria, P.O. Box 3055, Victoria BC, Canada V8W 3P6.
[Tracz, 1995] Tracz, W. (1995). DSSA (Domain-Specific Software Architecture) Pedagogical
Example. ACM SIGSOFT, Software Engineering Notes, 20(3):49-62.
[Tryggeseth et al., 1993] Tryggeseth, E., Gulla, B., and Conradi, R. (1993). Software Configuration Management in Proteus. In Proceedings of 4th International Workshop on Software Configuration Management.

[Tryggeseth, 1994] Tryggeseth, E. (1994). Travel Report: International Conference on Software Maintenance, 1994; Victoria B.C., Canada, 19-23 September. Technical report, Department of Computer Systems and Telematics, Norwegian Institute of Technology, Trondheim, Norway. (31 pages).
[Tryggeseth, 1995] Tryggeseth, E. (1995). Support for Understanding and Change Request
Control in Software Maintenance. Technical Report, Norwegian Institute of Technology, Dept. of Computer Systems and Telematics. Submitted for publication. 18 pages.
May, 1995.
[Tryggeseth, 1997] Tryggeseth, E. (1997). Support for Understanding in Software Maintenance. PhD thesis submitted for defense, Department of Computer and Information Science, Norwegian University of Science and Technology, 346 pages, February 1997.
[Turner and Robson, 1993] Turner, C. D. and Robson, D. J. (1993). The State-Based Testing
of Object-Oriented Programs. In [CSM93, 1993], pages 302–310.
[Ulltang, 1994] Ulltang, T. (1994). STEP: STandard for the Exchange of Product model data.
Note (in Norwegian) for the NTH-EEU course "Mer effektivitet med mindre papir i forsvar og offshore" (More efficiency with less paper in defence and offshore). Ulltang is a doctoral candidate at the Department of Marine Design (Institutt for marin prosjektering), NTH, working on a thesis on product modelling. For STEP, see also [Arngrimsson and Vesterager, 1991].
[United States Congress, 1992] United States Congress, Office of Technology Assessment (1992). Finding a Balance: Computer Software, Intellectual Property and the Challenge of Technological Change. (OTA-TCT-527). Washington D.C.: The Office.
[Van Sickle et al., 1993] Van Sickle, L., Liu, Z. Y., and Ballantyne, M. (1993). Recovering User Interface Specifications for Porting Transaction Processing Applications. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 71-85.
[Vessey and Conger, 1994] Vessey, I. and Conger, S. A. (1994). Requirements Specification:
Learning Object, Process, and Data Methodologies. Communications of the ACM,
37(5):102–113.
[Vollman, 1990] Vollman, T. (1990). Transitioning from Development to Maintenance. In
[CSM90, 1990], pages 189–199.
[Von Mayrhauser and Vans, 1993] Von Mayrhauser, A. and Vans, A. M. (1993). From Program Comprehension to Tool Requirements for an Industrial Environment. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 78-86.
[Wang and Parnas, 1994] Wang, Y. and Parnas, D. L. (1994). Simulating the Behavior of Software Modules by Trace Rewriting. IEEE Transactions on Software Engineering,
20(10):750–759.
[Ward and Bennett, 1995] Ward, M. P. and Bennett, K. H. (1995). Formal Methods to Aid the
Evolution of Software. To appear in International Journal of Software Engineering and
Knowledge Engineering, special issue on Software Evolution.
[Wilde et al., 1993] Wilde, N., Matthews, P., and Huitt, R. (1993). Maintaining Object-Oriented Software. IEEE Software, pages 75–80.
[Wilde and Scully, 1995] Wilde, N. and Scully, M. C. (1995). Software Reconnaissance: Mapping Program Features to Code. Journal of Software Maintenance: Research and Practice, 7:49–62.
[Wilde and Casey, 1996] Wilde, N. and Casey, C. (1996). Early Field Experience with the
Software Reconnaissance Technique for Program Comprehension. In [ICSM 1996], pp.
312-318.
[Wilson, 1994] Wilson, E. F. (1994). Software Evolution for the U.S. Department of Energy: Experience Report. In [Müller et al., 1994a], pp. 434-436.
[Winkler, 1988] Winkler, J. F. H., editor (1988). International Workshop on Software Version
and Configuration Control, German Chapter of the ACM, Berichte Band 30, Grassau,
FRG, January 27–29, 1988. B. G. Teubner, Stuttgart. ISBN 3-519-02671-6.
[Wong, 1993] Wong, K. (1993). Managing Views in a Program Understanding Tool. In Proceedings of CASCON '93, Toronto, Ontario; October 25-28, 1993, pages 244–249.
This paper gives an overview of which views are interesting in a program understanding tool. Subviews, sequenceable views and meta views are suitable for structuring the view documentation with unlimited levels of abstraction. Templates are proposed for standardizing the software documentation. Canvases use the available space very efficiently (a canvas uses an onion notation). The paper contains several ideas that can be utilized in the student project running in autumn 1994, and for presenting views in my thesis.
[Wong et al., 1994] Wong, K., Tilley, S. R., Müller, H. A., and Storey, M.-A. D. (1994). Structural Redocumentation: A Case Study. Preprint submitted to (and accepted by) IEEE Software, special issue on legacy software systems, January 1995, 20 pages. Correspondence regarding this article should be directed to Kenny Wong, Department of Computer Science, University of Victoria, P.O. Box 3055, Victoria BC, Canada V8W 3P6.
The article explains how the RIGI reverse engineering system was adapted to redocument the IBM SQL/DS relational database, written in PL/AS. The case study revealed several limitations of the RIGI system, among them scalability, which led to a reorganization of the presentation subsystem of RIGI, the rigiedit. By using domain knowledge, such as component naming conventions, usable graphs were created. The information extracted from PL/AS consisted of call expressions, which were translated into C programs. These C programs were then parsed by the C parser of the RIGI system. This allowed easy adaptation on the parser side, and also reduced the system to be parsed from several million lines of PL/AS (the SQL/DS system) to some ten thousand lines of C (only calls are replicated in the transformed C program). The views generated by the RIGI system were confirmed by the maintainers of SQL/DS. The article concludes that (semi-)automatic generation of such views is cost-effective, and provides an up-to-date view of the system in production, rather than being based on outdated system documentation and program listings.
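As a loose illustration of the "calls only" translation step described above, the sketch below shows the kind of C stub that could be generated from the call expressions of a PL/AS procedure so that an ordinary C parser can recover the call graph. All procedure names are invented for illustration; the actual translation performed for SQL/DS is not reproduced here.

    /* Hypothetical "calls only" C stub derived from one PL/AS procedure.      */
    /* Only the call expressions are preserved, so a C parser sees the same    */
    /* caller/callee edges as the original code. Names are illustrative.       */
    void parse_statement(void)      {}
    void check_authorization(void)  {}
    void execute_plan(void)         {}

    void process_query(void)
    {
        parse_statement();       /* edge: process_query -> parse_statement      */
        check_authorization();   /* edge: process_query -> check_authorization  */
        execute_plan();          /* edge: process_query -> execute_plan         */
    }

    int main(void)
    {
        process_query();         /* trivial driver so the stub is self-contained */
        return 0;
    }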
[Yau and Tsai, 1987] Yau, S. S. and Tsai, J. J. (1987). Knowledge Representation of Software Component Interconnection Information for Large-Scale Software Modifications. IEEE Transactions on Software Engineering, 13(3):355-361.
[Younger and Bennett, 1993] Younger, E.J., and Bennett, K.H. (1993). Model-Based Tools to
Record Program Understanding. In Proc. 2nd Workshop on Program Comprehension, Jul
8-9, 1993, Capri, Italy, pp. 87-95.
[Younger and Ward, 1993] Younger, E. J. and Ward, M. P. (1993). Understanding Concurrent Programs Using Program Transformations. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 160-168.
[Younger et al., 1996] Younger, E., Luo, Z., Bennett, K., and Bull, T. (1996). Reverse Engineering Concurrent Programs using Formal Modelling and Analysis. In [ICSM 1996], pp. 255-264.
[Zuse, 1993] Zuse, H. (1993). Program Comprehension Derived from Software Complexity Metrics. In Proc. 2nd Workshop on Program Comprehension, Jul 8-9, 1993, Capri, Italy, pp. 8-16.
[Yu and Mylopoulos, 1994] Yu, E. S. K. and Mylopoulos, J. (1994). Understanding "Why" in Software Process Modelling, Analysis and Design. In Proc. 16th Int'l Conference on Software Engineering, Sorrento, Italy, pages 159–168.
[Zamperoni and Gerritsen, 1994] Zamperoni, A. and Gerritsen, B. (1994). Integrating the Developers' and the Management's Perspective in an Incremental Development Life Cycle. In Proceedings of the 12th Annual Pacific Northwest Software Quality Conference, Portland, Oregon.
[Zeller, 1995] Zeller, A. (1995). A Unified Configuration Management Model. Technical Report 95-03, Technische Universität Braunschweig. The paper gives an overview of the version set model, where versions, components and aggregates are grouped into sets according to their features. Feature logic is used as a formal base to denote sets and operations and to deduce consistency. It is demonstrated how the concepts of four central configuration management models are encompassed and extended by the version set model, making the version set model a unified basis for modelling, realizing and integrating configuration management tasks. A prototype tool is described which demonstrates how the model can be implemented using the C preprocessor to annotate components with features. Although the feature logic operations generally have exponential time complexity, the prototype shows that the discussed CM models can be realized and combined without loss of efficiency.
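As a rough sketch of the kind of C-preprocessor feature annotation mentioned in the annotation above (not Zeller's actual prototype), a component variant can be selected by feature macros defined at compile time; the feature names LOGGING and REMOTE below are purely illustrative.

    /* Minimal sketch: annotating a component with features via the C      */
    /* preprocessor. A configuration corresponds to the set of features    */
    /* defined when compiling, e.g.  cc -DLOGGING -DREMOTE component.c     */
    #include <stdio.h>

    static void open_session(void)
    {
    #ifdef LOGGING
        printf("session opened (variant with the LOGGING feature)\n");
    #else
        printf("session opened\n");
    #endif
    #ifdef REMOTE
        printf("using the REMOTE transport variant\n");
    #endif
    }

    int main(void)
    {
        open_session();
        return 0;
    }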
[Østerbye, 1995] Østerbye, K. (1995). Literate Smalltalk Programming Using Hypertext. IEEE Transactions on Software Engineering, 21(2):138–145.
