
EVALUATION OF LIBRARY
AND INFORMATION SERVICES
(BS-4 YEARS LIS)

Code No. 9228 Units: 1–9

DEPARTMENT OF LIBRARY AND INFORMATION SCIENCES


FACULTY OF SOCIAL SCIENCES & HUMANITIES
ALLAMA IQBAL OPEN UNIVERSITY
ISLAMABAD
(All Rights Reserved with the Publisher)

First Edition ............................ 2023

Quantity .................................. 1000

Price ........................................ Rs.

Typeset by............................... Muhammad Hameed

Printing Incharge ..................... Dr. Sarmad Iqbal

Printer ..................................... AIOU-Printing Press, Sector H-8, Islamabad.

Publisher ................................. Allama Iqbal Open University, Islamabad.

COURSE TEAM

Chairman: Dr. Pervaiz Ahmad


Associate Professor

Course Coordinator: Muhammad Jawwad


Lecturer

Compiled by: Muhammad Jawwad


Lecturer

Reviewed by: Dr. Amjad Khan

Edited by: Humera Ejaz

Layout/Typeset by: Muhammad Hameed

CONTENTS

Foreword

Preface

Acknowledgements

Introduction of the Course

Unit–1: Introduction

Unit–2: Reasons for Evaluation and Related Factors

Unit–3: Identifying Performance Issues for Evaluation

Unit–4: Qualitative Methods

Unit–5: Quantitative Methods

Unit–6: Pitfalls and Progress

Unit–7: Case Studies

Unit–8: Future Developments

Unit–9: Performance Management for the Electronic Library

FOREWORD

The Department of Library and Information Sciences was established within the
Faculty of Social Sciences and Humanities to produce trained professional
manpower. The department currently offers various programs from certificate to
PhD level and supports the mission of AIOU, keeping in view the philosophies of
distance and online education. The primary focus of its programs is to provide
quality education by targeting the educational needs of the masses at their
doorsteps across the country.

The BS 4-year in Library and Information Science (LIS) is a competency-based
learning program. Its primary aim is to produce knowledgeable, ICT-skilled
professionals. The scheme of study for this program is built on foundational and
advanced courses to provide in-depth knowledge and understanding of the areas
of specialization in librarianship. It also covers general subjects as well as the
theories, principles, and methodologies of LIS and relevant domains.

This new program has a well-defined level of LIS knowledge and includes courses
in general education and foundational skills. Students are expected to advance
beyond their secondary level and to mature and deepen their competencies in
writing, communication, mathematics, languages, and analytical and intellectual
scholarship. Moreover, a salient feature of this program is practice-based
learning, which provides students with a platform for practical knowledge of the
environment and context they will face in their professional life.

This program intends to enhance students’ abilities in planning and controlling
library functions. The program will also produce highly skilled professional
human resources to serve libraries, resource centres, documentation centres,
archives, museums, information centres, and LIS schools. Further, it will help
students improve their knowledge and skills in management, research,
technology, advocacy, problem-solving, and decision-making relevant to
information work in a rapidly changing environment, along with integrity and
social responsibility. I welcome you all and wish you well in your academic
exploration at AIOU.

Dr. Nasir Mahmood


Vice Chancellor

PREFACE

We live in an evaluation culture, the result of social change over the past thirty
years. The growth of the consumer movement in the 1970s encouraged
consumers of goods and services to view the quality of the service they received
much more critically and to complain if they were not satisfied. From the 1980s
onwards, declining public expenditure signalled the need to maximize resources
and defend existing patterns of expenditure, something which usually requires
the collection of data. The decline was compounded by economic recession,
which encouraged consumers to spend more carefully and to look critically at the
goods and services they purchased.

Although librarians have been aware of the importance of meeting users’ needs for
decades, the customer care movement of the 1990s strengthened the emphasis on
customer orientation. The movement originated in retailing, e.g., supermarkets, but
has successfully transferred to the public sector. This new emphasis on the
‘customer’ has expressed itself in customer care statements and charters. The world
in which all types of libraries function has come under the influence of a new world
of analysis and assessment. The ‘new managerialism’ has promoted an emphasis
on strategic planning, customer service and devolved budgeting in the public sector.
Strategic planning has resulted in the introduction of mission statements and
institutional targets, aims and goals; departments within the organization, such as
libraries, may have their own mission statements, which can be used as a baseline
for evaluation.

There are four principles involved, the four Cs: 1) Challenge, 2) Consult, 3)
Compare, and 4) Compete. There are many reasons for undertaking evaluation,
but the principal and overriding one is to collect information that facilitates
decision-making and justifies increased expenditure or defends existing
expenditure. Evaluation is also undertaken to assess the quality of the service
provided, both overall and in specific areas, in order to plan future improvements;
to locate operational difficulties, some of them specified under objectives; and to
identify the extent to which problems can be solved. This is usually done by
surveying, whether quantitative or qualitative.

Dean
Faculty of Social Sciences & Humanities

ACKNOWLEDGEMENTS

All praise to Almighty Allah who has bestowed on me the potential and courage to
undertake this work. Prayers and peace be upon our Prophet Hazrat Muhammad,
his family and all of his faithful companions.

I am thankful to the worthy Vice-Chancellor and the worthy Dean of FSSH for
allowing me to prepare this study guide. Without their support, this task would
not have been possible. Further, they have consistently been a source of
knowledge, inspiration, motivation, and much more.

I am highly indebted to my parents, spouse, siblings, and children, who allowed me
to utilize family time to complete this work promptly. Their continuous prayers
kept me consistent throughout this journey. I also appreciate the cooperation my
departmental colleagues extended to me whenever required. Special thanks to the
Academic Planning and Course Production (APCP) and Editing Cell of AIOU for
their valued input, which paved my path to improve and finish this study guide in
accordance with AIOU standards and guidelines. They have been very kind and
supportive as well.

I would also like to thank the Print Production Unit (PPU) of AIOU for their support
in the comprehensive formatting of the manuscript and designing an impressive
cover and title page. Special thanks also to AIOU’s library for giving me the
relevant resources to complete this task in a befitting manner. I am also thankful to
ICT officials for uploading this book on the AIOU website. There are many other
people whose names I could not mention here, but they have been a source of
motivation for the whole extent of this pursuit.

Muhammad Jawwad
Course Coordinator

OBJECTIVES OF THE COURSE

After completion of this course, you will be able to:

1. Explain what library evaluation is.

2. Familiarize yourself with the motives for and purposes of evaluation.

3. Identify the major contexts and models of evaluation.

4. Apply tools and techniques to assess the value of the library.

5. Study concrete, real-life cases that document the development and
application of approaches to evaluate library operations.

Recommended Readings:
1. Crawford, J. (2000). Evaluation of library and information services. London:
Aslib, The Association for Information Management.

2. Wallace, D. P., & Van Fleet, C. (2005). Library evaluation: A casebook and
can-do guide. Englewood, Colorado: Libraries Unlimited.

COURSE ORGANIZATION
The course has been designed to be as easy as possible for the distance mode of
learning and to help students complete their required course work. The course
carries three credit hours and comprises nine units. Each unit starts with an
introduction which provides an overview of that particular unit, and the objectives
that follow show students its basic learning purposes. The rationale behind these
objectives is that, after reading the unit, a student should be able to explain,
discuss, compare, and analyze the concepts studied in it. This study guide is
specifically structured for students to acquire the skill of self-learning through
studying prescribed reading material. Studying all this material is compulsory for
the successful completion of the course. Recommended readings are listed at the
end of each unit. A few self-assessment questions and activities have also been
put forth for the students. These questions are meant to facilitate students'
understanding and to help them assess how much they have learned.

For this course, the department will arrange a three-day workshop at the end of
the semester and four tutorial classes/meetings during the semester.
Participation/attendance in the workshop is compulsory (at least 70%). The
tutorial classes/meetings are not formal lectures as given in a formal university;
they are meant for group and individual discussion with the tutor to facilitate
students' learning. So, before attending a tutorial, prepare yourself to discuss the
course contents with your tutor (attendance in tutorial classes/meetings is not
compulsory).

‘Assignment No. 1’ is due after completing the study of the first five units. The
second assignment, ‘Assignment No. 2’, is due after the completion of the
remaining four units. These two assignments are assessed by the relevant
tutor/resource person. Students should be very careful while preparing the
assignments because these may also be checked with Turnitin for plagiarism.

Course Study Plan and Chart


As you know, the course is offered through distance education, so it is organized in
a manner that evolves a self-learning process in the absence of formal classroom
teaching. Although students can choose their own way of studying the required
reading material, they are advised to follow these steps:

Step-1: Thoroughly read the description of the course for clear identification of
reading material.

Step-2: Read carefully the way the reading material is to be used.

Step-3: Complete the first quick reading of your required study materials.

Step-4: Carefully make the second reading and note down in a notebook any
points which are not clear and need fuller understanding.

Step-5: Carry out the self-assessment questions with the help of study material
and tutor guidance.

Step-6: Revise notes. It is quite possible that many of those points which were
not clear previously will become clearer during the process of carrying
out the self-assessment questions.

Step-7: Make a third and final reading of the study material. At this stage, it is
advised to keep in view the homework (assignments). These are
compulsory for the successful completion of the course.

Assessment/Evaluation Criteria of Students’ Coursework

As per AIOU rules/policy in vogue.

Muhammad Jawwad
Course Coordinator

Unit–1

LIBRARY EVALUATION:
INTRODUCTION

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan
CONTENTS

Introduction

Objectives

1.1 Introduction

1.2 Evaluation: The Systems Approach

1.3 Characteristics of Evaluation

1.4 Need for Evaluation

1.5 Alternative Approaches to Evaluation
      1.5.1 Objective-oriented approaches
      1.5.2 Management-oriented approaches
      1.5.3 Expertise-oriented approaches
      1.5.4 Naturalistic and participant-oriented approaches

1.6 Why Are We Doing This?
      1.6.1 Administrative decision making
      1.6.2 Public relations
      1.6.3 Politics

1.7 What Exactly Do We Want to Know?

1.8 Conclusion

INTRODUCTION

This unit is developed to teach students the concept of library evaluation and the
purpose and process of evaluating library programs, services, and resources. It will
also review the main types of, and approaches to, evaluation.

OBJECTIVES

After studying this unit, the students will be able to understand:

• Library evaluation.

• Purposes of evaluation.

• Major contexts and models of evaluating library services and programs.

1.1 Introduction
Research continues to play an important role in understanding the societal needs to
which libraries should be responsive, assessing the effectiveness of approaches to
delivering library services, and guiding the evolution of library processes, practices,
and policies. Practitioners often view research and researchers as being removed
from and uninterested in pragmatic problems. Researchers and practising librarians
view the field from different perspectives, attempt to meet different standards, and
are driven by different motives.

Systematic evaluation is the nexus between the need to conduct true research into
library operations and the need to provide direct evidence of the value of libraries.
Evaluation is a vital tool for providing effective, high-quality library programs.
Evaluating library services and programs can provide data to help professionals
understand what works and what doesn't for particular programs, patron groups or
communities. In doing so, evaluation data can help professionals manage staff and
resources and communicate their library's impact on the community. Professionals
can accomplish this whether they are a director, a department head, or a member
of the programming staff.

The civil sector and nonprofit world have increasingly incorporated the practice of
evaluation, often for a variety of internally and externally driven reasons: to create
institutional change, to demonstrate the importance of specific programs or
initiatives to funders, and sometimes simply to demonstrate their impact to the
outside world. No matter the impetus, it is hard for program directors not to feel
judged or resentful during the process of evaluation.

Evaluation can, and should, be part of regular reflective professional practice, one
that incorporates deep listening to your community and your community’s needs.
It doesn’t need to feel overwhelming or frightening. It can be done by those within
the library, and it doesn’t have to be expensive or involve outside consultants.
Well-done evaluation serves internal library needs, helping the library achieve its
goals, allocate scarce resources to where they are most effective, better understand
its patrons, and serve community needs. Evaluation can save libraries time and
money by creating an environment where decision-making is based on evidence.
It supports libraries in making the case for how they can effectively achieve their
goals and connect with their patrons. It demonstrates the relevancy of libraries,
showing their critical role within communities. Evaluation can also improve how
work is done within the library.

Evaluations of libraries are inevitable and ever-present. All aspects of library
development are influenced by the results of evaluations. To design successful
evaluations, the objectives to be accomplished must be known. Also, the criteria used
in the evaluation must be specified and the implications of values must be explicit. In
the evaluations of libraries, it is my conviction that the essential evaluative criteria
should be developed by the library profession and that standards for libraries,
developed by the profession and agreed to by it, should provide the basic measures for
evaluation. Of course, precise evaluations of libraries and library services can never be
the sole basis of decision-making. In many cases, politics is involved, and a highly
subjective element enters. It is the evaluation, however, based upon sound criteria and
carried out systematically, that can temper the politics.

Certain questions have emerged in evaluations of libraries which have established
the essential components of library standards:
• Are the library's collections adequate?
• Are the library's materials organized effectively?
• Is the staff large enough and sufficiently well-trained to provide a high level
of library service?
• Do the library's services facilitate effective use of the library?
• Is the library building adequate to meet the service needs?
• Are the library's finances sufficient to support the library's operations?
• Is the library's organization and management suitable?
• Are there appropriate cooperative activities with other libraries?

All of the existing standards for libraries derive from efforts to determine why one
library is more effective than another and to decide what constitutes quality and
achievement in the libraries. The development of standards grew out of the interest
in evaluating libraries.

1.2 Evaluation: The Systems Approach


Evaluation at its best is a mechanism for understanding a system. In the library
context, evaluation has to do with understanding the library system. The systems
approach to evaluation is based on several key concepts:
1. All phenomena take place in the context of systems. No action, event, process,
or product can be divorced from the system to which it belongs.
2. Understanding the phenomenon requires understanding the system.
3. Every system is linked to other systems and its environment. No system exists
in isolation. There are no closed systems.
4. Any change to a system affects other systems. No system is an island.

Systems analysis is the process of understanding and evaluating systems. It is a
highly structured set of tools and processes that, when properly employed, yields
reliable data to describe the system, its inputs, its processes, and its products.

Operations research is a highly quantitative adjunct to systems analysis in which
systems and events are described in terms of predictable mathematical and
statistical formulas. Systems analysis and operations research are not only tools
and processes; they are also ways of thinking. The most important aspect of the
systems approach to evaluation is learning to think systemically. As evaluation is
adopted and nurtured in an institutional setting, it takes on an important systems
role that places it on a level with basic and familiar library systems such as
collection management, reference and information services, technical processes,
circulation, outreach services, and administration. In an ideal situation, evaluation
becomes a basic social and societal system of the library, and a culture of evaluation
permeates the library and all its functions and activities.
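
As a small illustration of the operations-research way of thinking applied to a
familiar library system, the sketch below (in Python) models an issue desk as a
simple single-server (M/M/1) queue. The model choice and the arrival and service
rates are illustrative assumptions, not drawn from this text:

# A minimal operations-research sketch: an issue desk modelled as an
# M/M/1 queue. All rates are hypothetical (patrons per minute).

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return standard M/M/1 queue measures."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrivals must be slower than service.")
    utilization = arrival_rate / service_rate  # fraction of time the desk is busy
    wait_in_queue = arrival_rate / (service_rate * (service_rate - arrival_rate))
    queue_length = arrival_rate * wait_in_queue  # Little's law: L = lambda * W
    return {"utilization": utilization,
            "avg_wait_minutes": wait_in_queue,
            "avg_queue_length": queue_length}

# Example: 1.5 patrons arrive per minute; the desk serves 2 per minute.
print(mm1_metrics(1.5, 2.0))
# -> {'utilization': 0.75, 'avg_wait_minutes': 1.5, 'avg_queue_length': 2.25}

A formula-based description of this kind lets a manager predict, for instance, how
average waiting time would change if a second service point were opened.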

1.3 Characteristics of Evaluation


Building a culture of evaluation is a deliberative process that requires thought,
effort, planning, patience, and evaluation. It also requires a deep understanding of
and appreciation for the fundamental characteristics of evaluation.
1. Evaluation results from design, not accident. Although assigning value is
a basic human trait, true evaluation is a deliberately designed process. The
success of the evaluation has its origin in the quality of the design. Carelessly
planned and sloppily designed evaluation inevitably yields poor results that
are of limited use. It is generally impossible to improve the quality of data once
they are gathered, so ensuring that the right data are gathered in the right way
is an essential step in the design process.
2. Evaluation has a purpose. Effective evaluation is intrinsically goal oriented.
If the purposes, goals, and objectives of the evaluation are poorly defined,
inadequately understood, or incorrectly communicated, the result will be a
faulty evaluation.
3. Evaluation is about quality. Determining how well some process is carried
out, how good some product is, how appreciated a service is, or how
thoroughly a service outlet is used are all ways of assessing quality.
Evaluation that does not address quality and that is not based on a desire to
achieve high quality is sterile and fundamentally pointless.
4. Evaluation is more than measurement. Measurement may be part of
evaluation, but that doesn’t mean that measurement is a substitute for
evaluation. Measurement must be tied to and derived from the design and
purpose of evaluation. A university library with millions of volumes may
ultimately be of less value to an undergraduate student with an undecided
major than the local public library. A huge collection that contains multiple
copies of ageing titles may be inferior in quality to a library with an active
weeding program.
5. Evaluation doesn’t have to be big. A small, focused evaluation project that
requires only a few days to complete in a single library can have as much
immediate and long-term impact as a year-long study addressing a broad
range of needs based on data drawn from a nationwide survey. It is the explicit
need and whether it is met through evaluation that determines the quality of
the evaluation effort.
6. There is no one right way to evaluate. The need for evaluation is situational,
as are the tools and resources available to carry out the evaluation and the
skills of the individuals charged with carrying out the evaluation. Although it
is appropriate and useful to seek models in professional literature or by
consulting with other librarians in other settings, imaginative librarians who
develop their ways of doing things carry out some of the best evaluation
projects. If the project has been carefully designed, if the purposes of the
evaluation are well constructed and thoroughly understood, and if the tools,
measures, and processes are appropriate to the need for evaluation, the
evaluation project will yield useful results.

1.4 Need for Evaluation


The need for evaluation is felt very keenly in libraries, particularly in libraries that
are supported by public funds. In times of rapid and profound societal and
technological change, evaluation is essential to preserving the viability and
visibility of libraries. Although most librarians probably reject the notion that
libraries will be summarily replaced by some mythical digital beast, there are
members of the general public who have extensively bought into the notion of a
paperless society and who equate libraries with the old traditions of print on paper.
Some of those believers in the digital epoch serve as municipal administrators,
members of governing boards, school principals, and university executives.

Evaluation usually involves deciding on what purpose the evaluation is to serve. It
requires collecting and analyzing data and making value judgments. Data
collection is the gathering of specific information related to a problem; the data are
collected to address specific concerns or specific problems. For example, librarians
may want information that will enable them to improve a specific library program,
or there may be an interest in achieving specific objectives, and a plan of evaluation
is required for that.

Standards are developed by the professional community to assist in the evaluation
of library programs. The quality of a library program is judged or evaluated by
experts who use professionally developed standards and their own expertise in
making their determinations.

1.5 Alternative Approaches to Evaluation


There are alternative views of evaluation, and these views have influenced
librarians writing about library evaluation, standards, and performance measures.
It is important to acknowledge the varying viewpoints and to recognize that not
all librarians agree on the approach to take in undertaking library evaluation. Four
approaches are particularly applicable to the evaluation of libraries:

1.5.1 Objective-oriented approaches


The emphasis in this approach is on specifying goals and objectives and
determining the extent to which they have been achieved. The evaluator gathers
evidence of program outcomes and compares the actual performance against the
program objectives. The work on performance measures in libraries emphasizes
this approach. Most work on performance measures in libraries stresses the need to
develop performance measures within the context of strategic planning and the
library's mission and its goals and objectives.

1.5.2 Management-oriented approaches


The emphasis is on identifying and satisfying the information needs of managerial
decision-makers. The evaluator provides information and alternatives to the
decision-maker. This method of evaluation usually is conducted by external
evaluators. Management makes known to the evaluators what they are to examine
and the kinds of outcomes that could be expected. A related approach, which is
gaining favour in library evaluation, is the use of benchmarking to guide
management decisions. In business literature and particularly in the total quality
management (TQM) literature, a benchmark means a standard of excellence against
which other similar outcomes are measured or judged. A library, seeking to
improve a particular service or process, will identify another institution which it
decides has an exemplary service or process. It then measures its own against the
exemplary one and determines the necessary changes which have to be made to
improve its own. This use of benchmarking is essentially a comparative evaluation.
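
The comparative logic of benchmarking can be shown in a short sketch. The
measures and figures below are hypothetical, chosen only to illustrate the gap
analysis a library might perform against an exemplary institution:

# A minimal benchmarking sketch: comparing a library's own process
# measures against those of an exemplary institution. All figures
# and measure names are hypothetical.

own       = {"days_to_shelve_new_titles": 21, "interlibrary_loan_days": 9}
benchmark = {"days_to_shelve_new_titles": 7,  "interlibrary_loan_days": 4}

for measure, target in benchmark.items():
    gap = own[measure] - target
    verdict = "meets benchmark" if gap <= 0 else f"gap of {gap} days to close"
    print(f"{measure}: own={own[measure]}, benchmark={target} -> {verdict}")

In practice, each library would choose measures tied to the specific service or
process it has decided to improve.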

1.5.3 Expertise-oriented approaches


The emphasis is on the direct application of professional expertise to judge quality.
The judgments are made using standards and practices accepted by the professional
community. This approach has guided the development of the standards for public
libraries historically and is the approach being used by most states in the U.S. It is
the task of library authorities and their chief librarians to assess needs, determine
priorities, and quantify the resources required to meet the needs of their
communities. Recommendations as to desirable levels of provision, based on
experience in quite different circumstances, are bound to be unreliable and
misleading. Most of the standards for libraries emphasize resources that are
required to ensure adequate collections, services, staff, and facilities. The standards
were developed out of a consensus of professionals who are considered to be
experts in the particular library service. The library standards have been particularly
useful to those libraries just being established or those which have been
inadequately funded for some time. Libraries which exceeded the statements of
standards, however, often did not feel as well served. They feared that their
resources might be reduced for the very reason that they exceeded the standards.
Furthermore, there was a strong movement toward the individual library
determining its own goals and objectives and deciding what resources were
required to achieve them. That is, the objective-oriented approach to evaluation
became the popular method of evaluation, driving out the expertise-oriented
approach.

1.5.4 Naturalistic and participant-oriented approaches


The emphasis is on the involvement of participants or stakeholders in determining
values, criteria, needs, and data. The evaluator works with stakeholders and
facilitates and interacts with the stakeholders and their interests. This approach is
guiding current research activities in the evaluation of digital library projects. It
also has been emphasized in much of the literature on performance measurement
in libraries. As Powell observed in his review of public library use studies and the
use of performance measures, "...the movement in librarianship has been towards
judging library effectiveness from the point of view of the user". The variations in
approach lead us to recognize that values and judgments play an important part in
library evaluations.

1.6 Why Are We Doing This?


Evaluation of library services is carried out for a variety of purposes that derive
from the contexts in which the library exists. The most prominent among these
purposes are administrative decision-making, public relations, and politics.
Although these purposes can be discussed independently, the systems approach to
understanding suggests a more holistic view in which it is explicitly understood
that evaluation may serve multiple purposes. The study conducted to determine
how many patron computer workstations are needed may be useful in alerting the
public to the library’s ability to provide access to the Internet and may at the same
time be used to support or defend an investment in computer technology.

1.6.1 Administrative Decision-Making
Much of the focus of evaluation is on making decisions regarding resource
allocation, personnel training and evaluation, procedure development and revision,
and planning. Evaluation projects carried out to meet these purposes tend to be very
focused and concrete and may have obvious expected outcomes.

1.6.2 Public Relations


Evaluation can play an important role in educating and persuading the public as
well as explaining the library to parallel institutions such as schools and social
service agencies. Evaluation as a public relations tool tends to focus on user
satisfaction and may involve market surveys, patron surveys, and appeal marketing.

1.6.3 Politics
Libraries of every kind exist in a political arena; library administrators in particular
must be sensitive to the political interactions of which they are an integral part.
Evaluation can serve to justify and explain administrative decisions to governing
bodies and higher-level administrators.

1.7 What Exactly Do We Want to Know?


Determining precisely what results the evaluation is intended to generate is a matter
of establishing perspective and refining the definition of the problem. Establishing
a sense of perspective helps determine the specific models, methodologies, and
tools that will be necessary for carrying out the evaluation. Some important
perspective-oriented questions are as follows:
1. Do we want to assess perceptions of the library, its services, or both? If so,
do we want to look at patron perceptions, patron subgroup perceptions, or a
combination?
2. Do we want to determine patterns of use of resources, services, or both?
3. Do we want to find out how much things cost? Are we interested in comparing
the costs of different ways of doing things to determine or improve efficiency?
4. Do we want to find out how effective library services are or how appropriately
they are being exploited?

Question definition is the process of turning a vague understanding of a problem


area into a question that can be answered. Vague questions lead to vague answers,
or much worse, to answers that are deceptively specific and don’t address the true
problem. It is generally better to obtain a good answer to a small question than to
produce a low-quality answer to a big question. Some questions to ask during the
question definition process are:
1. Can this problem be solved? Can this question be answered?
2. Will the solutions or answers be usable and useful?
3. Are there aspects of the problem that need to be clarified before proceeding?
Is there a need to evaluate the problem before evaluating the process or
service in which the problem is perceived to be present?
4. What tools are likely to be available for answering the question or solving the
problem? Are they the right tools?

1.8 Conclusion
Evaluations of libraries and library services inevitably call for comparisons and
several approaches were identified. Measures of effectiveness have remained
elusive. There have been efforts to use patron satisfaction as a measure of
effectiveness. There are problems here too, for patrons often do not know if they
were served well. Satisfaction studies note that if the patron was treated politely
and cordially, the patron reported a high level of satisfaction. There are no major
studies that measure satisfaction sometime after the library experience.

The demand for greater accountability is growing in most organizations and


institutions. Thus, efforts to evaluate and measure the performance of libraries will
continue. Libraries will continue to seek better ways to evaluate and measure their
performance. Criteria will be developed and used in the evaluations. The
profession must take responsibility for the development of those criteria and assist
libraries in applying the criteria to evaluation and decision-making.

Standards for libraries prepared and adopted by professional librarians and library
associations in countries around the world have been successful in identifying the
kinds of resources necessary for the development of library services. As librarians
met the established minimums, and as librarians in many jurisdictions began to
chafe against externally established standards, standards gave way to locally
determined missions, goals, and objectives, and measures of performance began to
be designed. While much work on performance measures has been carried out over
the past twenty-five years, these measures have not assisted libraries much in
identifying measures of quality, nor have they helped in determining the kinds of
resources needed by libraries today.

SELF-ASSESSMENT QUESTIONS

1. Describe the benefits of library evaluation.

2. Explain alternative approaches to evaluation.

3. Describe various characteristics of evaluation.

Activity:
1. With the help of a tutor, develop an Evaluation Action Plan.

RECOMMENDED READINGS

1. Abbott, C. (1994). Performance indicators in a quality context. Law Librarian,
25(4), 205–208.

2. Abbott, C. (1994). Performance measurement in library and information
services. London: Aslib.

3. Baker, S. L., & Lancaster, F. W. (1991). The measurement and evaluation
of library services (2nd ed.). Arlington, VA: Information Resources Press.

4. Bawden, D. (1990). User-orientated evaluation of information systems and
services. Aldershot: Gower.

5. Hernon, P., & McClure, C. R. (1990). Evaluation and library decision
making. Norwood, NJ: Ablex.

Unit–2

REASONS FOR EVALUATION


AND RELATED FACTORS

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Introduction

Objectives

2.1 Introduction

2.2 Why Evaluate?

2.3 Reasons for Evaluation
      2.3.1 Specific issues in evaluation

2.4 Context for Evaluation
      2.4.1 The societal context
      2.4.2 The professional context
      2.4.3 The institutional context
      2.4.4 The administrative context
      2.4.5 The functional context
      2.4.6 The technological context
      2.4.7 The patron context

2.5 Assigning Value in Evaluation
      2.5.1 Value and values
      2.5.2 Value and benefits
      2.5.3 Value and goals
      2.5.4 Value and quality
      2.5.5 Value and quantity

2.6 Conclusion

INTRODUCTION

This unit is developed to teach students the reasons for evaluation, specific issues
in evaluation, and the various contexts for evaluation. It will also discuss assigning
value in evaluation.

OBJECTIVES

After studying this unit, you will be able to explain the following:

• Reasons for evaluation.

• Specific issues in evaluation.

• The context for evaluation.

• Assigning value in evaluation.

2.1 Introduction
The ultimate aim of all management principles, methods and techniques is to help
attain the objectives of the organization efficiently, effectively, economically, and
on time. It is an evaluation that testifies whether the objectives are achieved and if
so, to what extent. The evaluation also includes accountability to the funding
authorities, the patrons and other stakeholders as to whether the resources spent
have resulted in the attainment of the desired objectives. Evaluation is a judgement
of worth. Thus, it means assessing the worth or value of the unit to the people for
whom it is meant. It is the assessment of performance against users’ expectations.
It could also be interpreted in the narrower sense of whether the output is
commensurate with the input. In the context of a system, it means the degree of
usefulness of the set-up in meeting various objectives the system has to achieve.

By and large, evaluation means testing the service or system for effectiveness and
efficiency. Lancaster has prescribed three possible levels of library evaluation: the
measurement of effectiveness, cost-effectiveness, and cost-benefit. Similarly,
Vickery and Vickery have provided a useful framework for assessing performance
in reaching objectives, covering the effectiveness of a system, the economic
efficiency of a system, and the value of a system. By effectiveness, they mean the
degree to which a system achieves its objectives; by economic efficiency, the
degree to which it minimizes costs in doing so. The combination of the two results
in cost-effectiveness. According to Vickery, value is the degree to which a system
contributes to user needs, and where it is expressed in monetary terms and
compared with the cost, it becomes a cost-benefit analysis. A look at the latter’s
framework shows that it is not different from what Lancaster has prescribed, and
the three elements fit well into Lancaster’s effectiveness, cost-effectiveness and
cost-benefit analysis, respectively.
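
One minimal way to express these three levels symbolically is sketched below; the
symbols are assumptions introduced here for illustration, not the authors' own
notation. Suppose a service receives $N$ demands and satisfies $S$ of them, at a
total cost $C$, and that the benefit to users can be estimated in monetary terms
as $B$:

\[
\text{Effectiveness } E = \frac{S}{N}, \qquad
\text{Cost-effectiveness} = \frac{C}{S} \ \text{(cost per satisfied demand)}, \qquad
\text{Cost-benefit ratio} = \frac{B}{C}.
\]

On this reading, a service justifies its cost when $B/C > 1$, and an improvement
in cost-effectiveness means lowering $C/S$ without reducing $E$.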

According to Lancaster, a library’s effectiveness is measured in terms of how well
a service satisfies the demands placed upon it by its users. Saracevic et al. consider
it to be how well a library does what it is intended to do. Ralli, on the other hand,
argues that ‘effectiveness essentially measures how we are going, to what extent
we are meeting our goals and objectives’. A system’s cost-effectiveness is
concerned with its internal operating efficiency: it measures how efficiently, in
terms of costs, the system satisfies its objectives. A cost-benefit evaluation, on the
other hand, is a measurement of the value of a service to ascertain whether the
service is worth more or less than the cost of providing it. It thus creates a basis to
justify the provision of the service. Moore concedes that there is considerable
confusion surrounding the various forms of measurement, particularly the
terminology used to describe them. According to him, there are three levels at
which one can evaluate a library: the measurement of efficiency, performance and
effectiveness.

Efficiency is concerned with how economically the library performs, a
consideration of how a library could utilise fewer resources to achieve the same
level of service; it is, therefore, a measure of cost-effectiveness. Effectiveness, by
contrast, is an assessment of the impact which a service has on its users, or an
examination of how well it is fulfilling or satisfying the needs of its user
community.

It is clear from the foregoing discussion that the user is at the centre of all these
measures of evaluation. Be it cost-effectiveness evaluation or cost-benefit
evaluation, evaluation of effectiveness, efficiency or performance – all end up
finding ways of better serving the library user, that is, satisfying the demands he
places upon the library. As for methods of evaluation, as matters stand there are
two main methods for the evaluation of a library’s effectiveness or the
measurement of its performance: the subjective and the objective methods. The
subjective method or approach primarily depends on users’ opinions or attitudes
to measure the effectiveness of a library. Normally, such opinions or attitudes are
ascertained by methods used in marketing research, namely questionnaires or
interviews or both. As a result, the subjective approach takes the user as the unit of
analysis. The assumption here is that these user evaluations are valid indicators of
library performance. This view, however, lacks consensus. There are two powerful
schools of thought representing the pros and cons. Arguing for the cons, Stecher
contends that users are not competent to give valid evaluations of library services.
He argues powerfully that ‘it seems doubtful, to say the least, that results from
subjective satisfaction measures could be taken seriously’. Lancaster and others
share the opposite opinion. They argue for the necessity of soliciting these user
evaluations for a host of reasons. They contend that some demands for materials
are either too complex or too ambiguous to cope with the constraints of the
objective measures, which tend to be predicated upon demands for specific items.
Secondly, some of the services that people use do not have objective measures of
performance. In such cases, the user, as the ultimate consumer of these services,
becomes the most qualified person to evaluate the performance or effectiveness of
such services. These potent arguments bring the subjective approach to the fore as
a useful complement in evaluating library performance. It is therefore effective and
most important once its methodological application is sound and scientific.
Stecher, one of the most ardent critics of the subjective approach, corroborates this.
He argues that ‘subjective satisfaction, as expressed and tested in a more realistic
form of user preferences, has found methodological application in several studies’.
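
A minimal sketch of how such subjective data might be summarized soundly is
given below; the questionnaire items and the five-point ratings are invented for
illustration only:

# Summarizing hypothetical Likert-scale (1-5) questionnaire responses,
# as used in the subjective approach to evaluation.
from statistics import mean

responses = {
    "Staff helpfulness":         [5, 4, 4, 5, 3, 4],
    "Availability of materials": [3, 2, 4, 3, 3, 2],
    "Opening hours":             [4, 4, 5, 4, 4, 5],
}

for question, scores in responses.items():
    satisfied = sum(s >= 4 for s in scores) / len(scores)  # share rating 4 or 5
    print(f"{question}: mean={mean(scores):.2f}, satisfied={satisfied:.0%}")

Reporting both the mean score and the proportion of satisfied respondents guards
against a few extreme ratings distorting the picture.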

It is, therefore, safe to conclude at this point that, in determining the degree of
success with which a library performs, the ultimate authority, the library user, is the
most logical source of an answer. This is well noted by Vickery and Vickery:
‘in the social process of information transfer ... the ultimate evaluation must be from
the viewpoint of the potential recipients’. User opinions, therefore, remain a valid
and potent measure of user satisfaction. Performance measurement of a library, and
in this context the evaluation of a library’s effectiveness in the services it renders,
can also be accomplished quantitatively. With this approach, performance
measurement of a library adopts the tools of the management sciences. It is an
integral part of the management process. As its extensive use testifies, it is now
accepted as an answer to the numerous problems and shortcomings of traditional
measures.

For a complete and objective evaluation of a library’s effectiveness, three important
things need to be done: 1) specification of a purpose or goal of the system and the
parts studied; 2) selection of a measure or measures reflecting this purpose; and
3) specification and collection of the data needed to apply those measures.

2.2 Why Evaluate?


The answer to the question “Why evaluate?” is frequently “Because it is required.”
Evaluation is a prominent concern in a wide variety of environments. The public
increasingly demands accountability from government agencies and elected
officials. Elected officials, if they are wise, transfer that demand to government
employees and call upon them to demonstrate that what they do is right and good.
Shareholders in public companies expect to see profitability, efficiency, and
increasing social responsibility in the actions and decisions of management.
Consumer activism creates a need to ascertain the quality of products and services.
The need for evaluation is felt very keenly in libraries, particularly in libraries that
are supported by public funds. The sharpened focus on evaluation has led to several
excellent publications over the last three decades, many of which carry the
imprimatur of major organizations such as the American Library Association. The
impressive number of publications and presentations on evaluation as a topic,
however, does not necessarily ensure that the tools and techniques of evaluation are
being delivered to and explicated to the front-line library employees who are
responsible for carrying out evaluation activities.

Librarians are feeling growing pressure, as are many others charged with the
administration of public agencies. Shrinking federal government resources and
eroding local tax bases, combined with pressing social problems, resulted in intense
competition among agencies for resources. Social scientists have recognized for
some years that the allocation of funds is a political act; in such a case, evaluation
measures become political tools. Public librarians recognize that their tools are
inadequate. They appear frustrated that they cannot defend what they know to be a
crucial and threatened public good: free public library service available to all
citizens. They know that the case for public libraries cannot be supported in the
current terms of business or politics, but researchers have not developed alternative
measures that are compelling in hostile political environments. They see a need for
a new generation of evaluation tools that better explain what librarians do and what
impact libraries have on the future.

In times of rapid and profound societal and technological change, evaluation is


essential to preserving the viability and visibility of libraries. Although most
librarians probably reject the notion that libraries will be summarily replaced by
some mythical digital beast, there are members of the general public who have
extensively bought into the notion of a truly paperless society and who equate
libraries with the ancient traditions of print on paper. Some of those believers in the
digital epoch serve as municipal administrators, members of governing boards,
school principals, and university executives. Evaluation of the library and its
benefits ultimately may be essential to the survival of the library itself.

Evaluation leads to enhanced efficiency and avoidance of errors. The history of


libraries is rich with examples of inappropriate policies, processes, tools, and
techniques that were promulgated for protracted periods because they were never
properly evaluated or much too frequently were never evaluated at all. The history
of research into the usability of library catalogues, for instance, is a depressing tale
of the precedence of rule over role. Similarly, studies of library fines have found
that their impact is generally much more negative than positive, but fines remain
an entrenched aspect of library practice. Such mistakes as creating catalogues that
please librarians more than they serve patrons and imposing fine systems that
discourage library use can be avoided through the relatively simple means of
evaluating local needs, policies, and processes.

Planning is fruitless if not accompanied by an evaluation of the appropriateness of


the planning process, the efficacy of the plan, and the outcomes of the
implementation of the plan. Even when evaluation is not required for purposes of
accountability, for demonstrating the need for libraries, for avoiding costly
mistakes, or for planning, systematic evaluation is desirable as an expression of the
library’s concern for its public trust. Libraries are among the most service-oriented
and consumer-friendly of all institutions. The focus on the public that pervades all
types of libraries and library services in itself suggests a need for evaluation, for
exploring ways to do things better, and for demonstrating that the library’s
administration and staff want to provide the best possible library. The desire to
improve, grow, and provide ever-better services and products is a deeply rooted
part of the librarian’s philosophy.

2.3 Reasons for Evaluation
There are differing ideas about what evaluation is and why it should be done.
Blagden argues that measuring performance is an integral part of good
management, undertaken for two reasons:
1. To convince the funders and the clients that the service is delivering the
benefits that were expected when the investment was made,
2. As an internal control mechanism to ensure that the resources are used
efficiently and effectively.

This approach has been informed by a decade of cost-cutting. Another approach is


to look at cost-benefit analysis and the value of the service while a third is to view
the library as a set of systems. Bawden (1990) explains the importance of evaluation
and discusses various methodologies, including user-orientated evaluation which
aims to improve services and the competence and motivation of librarians. This
signals a move towards evaluation from the viewpoint of the users, which has been
the hallmark of the 90s.

It is important to distinguish between evaluation and performance measurement.
According to Abbott (1994), “Performance indicators are simply management tools
designed to assist library managers to determine how well their service is
performing. They provide evidence on which to base judgements, but are not
substitutes for that judgement, since performance data needs to be interpreted
before such assessments can be made. In considering performance indicators we
rarely deal with absolutes.” Performance indicators contribute to the process of
evaluation, but the latter is a broader term for the assessment of performance. As
indicated above, there are different approaches to assessment. As discussed earlier,
Lancaster (1993) advocates approaching evaluation from the perspective of
performance measurement and the use of a systematic approach. It is a method
which emphasizes technical services issues, e.g., weeding of stock, and illustrates
a tension between the two main types of measure: technical services measures and
user-orientated measures. The former have a strong quantitative emphasis and may
impact services to users, e.g., speed of cataloguing sought materials, while user-
orientated measures are more qualitative and might well be those which users
would choose themselves.
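
The distinction between an indicator and a judgement can be made concrete with
a small sketch. The indicators below are computed mechanically from hypothetical
annual figures; deciding whether, say, a stock turnover of 1.5 is adequate remains
a matter for the manager's interpretation:

# Two common library performance indicators computed from hypothetical
# annual figures. The numbers provide evidence, not the judgement itself.

loans, stock, population = 120_000, 80_000, 25_000

indicators = {
    "stock_turnover":   loans / stock,       # loans per item held
    "loans_per_capita": loans / population,  # uptake across the community
}
for name, value in indicators.items():
    print(f"{name}: {value:.2f}")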

The primary focus is the evaluation of services provided to the user: how they may
be identified, how a background understanding of them may be built up, how they
can be evaluated, and how the data collected can be put to good use. Superficially,
librarianship is easily evaluated because it is mainly about the provision of discrete,
related and comprehensive services which have an element of predictability about
their operation. However, services with a major qualitative component, such as
reference services, are difficult to evaluate, and the methods used may be
controversial. The volume of activity in a service has to be related to the demand
for it in order to understand whether it functions well. The service must be
appropriate to the need of the user, which, in turn, raises the questions:
a. What is a user?
b. Are users an amorphous mass with identical needs, or are they discrete groups
with differing or even contradictory needs?

Although progress has been made in the evaluation of services to users there is still
a need for simple, generally agreed definitions:
a. What is a queue?
b. What does answering the telephone promptly mean?

2.3.1 Specific issues in evaluation

There are many reasons for undertaking evaluation, but the main ones are listed
below:
1. The principal and overriding reason for the evaluation of library services is to
collect information to facilitate decision-making and justify increasing
expenditure or defending existing expenditure.
2. To evaluate the quality of service provided, both overall and in specific
areas, and to plan for future improvements. This is usually done by
surveying, either quantitative or qualitative.
3. To identify the extent to which problems can be solved. It may or may not be
possible to solve a problem identified by the evaluation. If a problem
identified by evaluation cannot be solved this is usually due to:
a) Resource constraints e.g., a survey by a public library may indicate the
need to employ an ethnic minorities librarian but budgetary constraints
may delay or make this impossible. In this case, at least, evaluation can
contribute to the planning process.
b) The involvement of parties outside the library’s control e.g. evaluation
of a university library’s exam paper collection may indicate
considerable user dissatisfaction because of its incompleteness, which,
in turn, is due to the University administration’s failure to supply the
Library with complete sets of papers. This leaves the library with several
choices, ranging from doing nothing to undertaking new work.

4. To identify differing or contradictory needs of different user categories. It is


a serious error to assume that all library users have the same needs and that to
benefit one group is to benefit all. For example, short loan collections in
academic libraries benefit full-time students disproportionately because full-
time students always ‘get there first’. Part-time and distance learners find such
services less suited to their needs because of their irregular contact with the
campus.
5. To plan public relations work and information dissemination. The evaluation
may point to user ignorance in particular areas e.g., the photocopying service
provided, and the photocopying regulations imposed by the 1988 Copyright
Act. This may indicate a need for an appropriate leaflet. Similarly, ignorance
about the use of electronic information services might suggest changes in user
education in an academic library.
6. To provide feedback to, and to evaluate, contractors, e.g., time taken by
booksellers to supply items, and quality of OPACs. Such measures impinge
on the technical services approach but have important implications for users.
An OPAC with poor functionality or user-friendliness may only be
modifiable in conjunction with the system supplier. A clear identification of
user difficulties can strengthen the library’s hand in negotiating with systems
suppliers.
7. To involve users in management. A consequence of the rise of the profession
of librarianship in the 20th century has been the exclusion of users from
library management, where, in the case of publicly available libraries, and to
a lesser extent, academic libraries, they were once pre-eminent. Regular
evaluation of services, by whatever means, allows users to rediscover a voice
in library management and express views about service priorities.
8. To provide the basis for further improvements and direction. It may be that
evaluation will merely highlight the library’s ignorance of a particular issue
and the need for further work. This is particularly the case in brief overview
surveys which often point to issues like quality of stock and the need to
improve the situation.
9. Closing the feedback loop. Recent research in higher education has shown
that it is essential to report to users the outcomes of survey work and the
decisions taken as a result. This shows the user that evaluation is a valid
exercise to which they have contributed. Failure to close the feedback loop
may be the reason for the phenomenon known as ‘questionnaire fatigue’.

2.4 Context for Evaluation


Working within the systems approach to evaluation requires an understanding of
the context within which evaluation takes place. The need for library
evaluation derives from several important contexts, which may apply
simultaneously. An effective understanding of these contexts and their origins may
serve to foster understanding and effective employment of evaluation processes and
techniques. Failure to understand the context for evaluation may lead to evaluation
activities that are inappropriate, ineffective, or even harmful.

2.4.1 The Societal Context


A library is a manifestation of the society it supports and the society that supports
it. Any society is an exceedingly complex organism that cannot easily be
understood. Some appreciation for the societal context, though, is essential to
effective evaluation. Societies are defined by a myriad of characteristics, including
place, time, economics, politics, and other factors. The modern library is a product
of a host of societal influences, local and universal, historical and contemporary,
pragmatic and philosophical, immediate and long term. Evaluation is and must be
a response to the societal context of the library. Changes in the societal context
should be reflected in changes in library operations; effective evaluation is essential
to determining how the library should respond to societal change.

As society changes, it may be necessary to engage in evaluation processes that
reposition libraries to better suit societal evolution. Although the public perception
of libraries, particularly public libraries, is generally positive, there is reason to
believe that the public’s understanding of libraries is limited. There is an ongoing
need to seek new ways of presenting the public with understandable assessments of
the value of libraries and library services.

2.4.2 The Professional Context


Librarians are members of a highly specialized professional group with an
established set of professional concerns, ethics, policies, and practices. No library
can operate in isolation from the profession of librarianship. The American Library
Association and its subsidiary divisions, such as the Association of College and
Research Libraries, the Public Library Association, and the American Association
of School Librarians, define the professional context for libraries to a considerable
extent. Established professional standards, such as national guidelines for reference
behaviours, can readily serve as a model on which to base local evaluation activities.

Although the philosophies and policies of national professional associations do not
necessarily bind local evaluation, those philosophies and policies are always
available for local application and provide a set of guideposts for evaluation.
Conflicts between library administration and community pressure can be resolved
by relating professional association policies and recommended procedures to local
evaluation activities.

2.4.3 The Institutional Context
Every library exists within the structure of some institutional setting. Although the
concept of the library is not necessarily tied to an institution called a library, most
libraries are defined at least in part by their institutional identity. Evaluation carried
out in the library is by extension carried out on behalf of the institution that governs
the library. Although every library is governed by a unique combination of
institutional needs and requirements, there are fundamental similarities that make
it possible to transfer evaluation approaches from one institutional setting to another.

2.4.4 The Administrative Context


Evaluation is an administrative function. Regardless of who carries out the actual
evaluation, the library’s administration is responsible for the evaluation and its
results. This means that evaluation at any level must have explicit or implied
administrative consent. More importantly, it suggests an administrative
commitment to act on the outcomes of the evaluation. In the absence of such
commitment, evaluation of any kind is an empty and futile exercise. Evaluation can
be used as an approach to consciously altering the administrative structure of a
library.

2.4.5 The Functional Context


To be meaningful, useful, and beneficial, the evaluation must lead to some
pragmatic result. Ultimately, that result is either the replacement of some existing
function with a new function or a decision to perpetuate the existing function. In
this context, more than any other, objectivity is essential. If replacement of the
existing function is not a possibility, there is no need for evaluation. If retention of
the existing function is not a possibility, particularly if the replacement has already
been selected, there is no need for evaluation. Evaluation in the functional context
assumes a commitment to acting on the outcomes of the evaluation process. The
desire to implement new measures must necessarily be accompanied by a series of
activities and events designed to build a shared commitment to functional change.

2.4.6 The Technological Context


Although it is certainly possible to overstate the impact of the technological context
on evaluation and decision-making, it is impossible to deny that changing
technology inherently has a significant impact on what libraries and librarians do
and how those things are done. The evaluation must take into account the
technological context. At the same time, the evaluation of technology and its use is
essential to understanding the technological context. It is frequently the case that
the introduction of new technology has a polarizing effect on those individuals who
are affected by the change, with some people embracing the new technology
because it represents change and others rejecting it for the same reason. It is much
too often the case that neither camp has engaged in any meaningful evaluation of
the new technology.

The introduction of new technologies has had a profound impact throughout the
history of libraries and library services. The emergence of new ways of achieving
library goals must be accompanied by an evaluation of the technology itself and of
the impact of the new technology on existing processes, products, and services.
Continuity in the provision of services is frequently maintained by adapting
established evaluation techniques to new technologies, as has been the case with
the development of criteria for evaluating World Wide Web search engines.

2.4.7 The Patron Context


The ultimate context for evaluation is benefit to the patron. If there is no potential
for patron benefit, any outcome of evaluation becomes suspect. Even when the
process or product to be evaluated is buried deeply in the bowels of obscure
library processes and procedures, the patron must be the central focus for
evaluation. It is difficult to imagine any library activity for which the
patron is not the ultimate beneficiary of competent evaluation.

A currently popular expression in the library profession emphasizes the need for
libraries to be client-centred. This term derives from the business world and carries
with it the implication that the central purpose is not profit but service. In
the corporate context, the message to be sent is that the company does not
exist to make money but to provide useful products or services to its customers.
The principle of being client centered extends to the library context in a desire to
be focused not on information resources, but on information needs.

Appreciation for the patron or client context leads to the need to involve and engage
the library’s clientele in the evaluation of library services, processes, and products.
Bringing the client into the evaluation process has a bonding effect that sends the
message that patron input is important. The desire for useful and usable client input
is the principle that underlies methods such as focus groups. Client input also drives
the ongoing search for standards for professional performance.

2.5 Assigning Value in Evaluation


Within the framework of an appropriate context or contexts, evaluation is literally
and fundamentally the process of assigning value. The assignment of value can
have many motives and many meanings. Likewise, evaluation takes place for many
reasons and in many contexts, and in most cases is done for multiple simultaneous
purposes that can be understood from a variety of points of view. The concept of
evaluation is tied to some related concepts; a clear vision of the origins and linkages
of any particular evaluation activity is required for successful evaluation.

2.5.1 Value and Values


Value can be assigned only within the structure of some recognized system of
values. Values are a human phenomenon that mixes elements of personal, group,
and societal influence. Individuals develop idiosyncratic value systems that shape
their understanding of the universe. Free public library service is treated as a basic
value by many residents of the United States and is a core value of the country’s
library profession. Societies develop, foster, and in some cases enforce value
systems that vary according to geography, economics, and history.

Within an overall societal context, value systems vary across subgroups or cultures.
Free public library service may be highly prized in general, but there are
undoubtedly segments of the population to whom, for various reasons, free public
library service is irrelevant or is viewed negatively.

Evaluation, then, must recognize the various value systems that affect the entity
being evaluated. Evaluation is not value-neutral. Working from the assumption that
free public library service is a core value inherently shapes the goals, methods, and
outcomes of the evaluation of public library services. If the goal of evaluation is to
determine whether a thing is good, then the question of who determines what is
good or bad must be addressed. The first essential of evaluation is to understand
and work within the value system that applies.

2.5.2 Value and Benefits


It is easy to assume that those things that are valuable are necessarily beneficial or,
conversely, that value derives from benefit. Because the benefit is itself a function
of prevailing value systems, determining benefit is an uncertain process. Benefits,
like values, are closely tied to individual and group perceptions of importance. One
person’s benefit may be another’s detriment. Many of the most fundamental
sources of disagreement about intellectual freedom, for instance, have to do with
value-driven differences in perception of the benefit of open access to information.
Historically, it has been relatively simple to assign a relationship between value and
benefit in economic analysis, but much more difficult in attempts to evaluate social
processes. How well does a professional school meet the expectations of its various
constituent groups? Do existing measures of library performance accurately reflect
library activities? Does explicit instruction for basic reference service providers
result in improved reference behaviour? Each of these questions identifies a specific
benefit and the value that might be associated with it, but turning such statements
of benefits and values into operational evaluation processes is a difficult and
frequently elusive proposition.

The benefit may be expressed in economic terms by comparing the costs of
different products or services and allowing relative cost to serve as an indicator of
relative benefit. Knowing that the annual cost of public library service for a family
of four is less than one-half the cost of admission for the same family to a typical
amusement park, for example, expresses the library's benefit in terms anyone can
grasp.
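
As a worked illustration of such a comparison, here is a minimal sketch in Python; all of the figures are invented for the example rather than drawn from any actual survey:

    # Hypothetical annual figures for a family of four (illustrative only).
    library_cost_per_capita = 25.00   # annual cost of public library service per resident
    amusement_park_ticket = 85.00     # one day's admission per person
    family_size = 4

    library_cost = library_cost_per_capita * family_size   # 100.00 per year
    park_cost = amusement_park_ticket * family_size        # 340.00 per visit

    print(f"Library service, one year: {library_cost:.2f}")
    print(f"Amusement park, one visit: {park_cost:.2f}")
    print(f"Ratio: {library_cost / park_cost:.2f}")        # well under one-half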

2.5.3 Value and Goals


Because evaluation is so tied to diverse value systems and varying perceptions of
benefit, it is paramount to develop specific goals for any evaluation process, project,
or product. Although evaluation cannot take place outside the value system context,
the establishment of explicit goals for evaluation serves as a constant anchor in a
sea of varying values and conflicting perceptions of benefit. A goal speaks to some
set of tasks to be accomplished and the need to determine if they are accomplished
appropriately. When carefully stated, evaluation goals serve to override the
negative potential inherent in conflicting value systems.

2.5.4 Value and Quality


At its heart, the purpose of evaluation is to ascertain quality: how good
something is, how well something is done, how effectively a goal is achieved, how
appropriate a service is, and how efficiently a service is delivered. Quality is a
tenuous and amorphous concept. There is no universal measure of goodness, no
obvious definition of correctness, and no yardstick marked in units of quality. As a
result, it is necessary to develop specialized tools that are assumed to somehow aid
in determining quality even though the tools themselves do not directly address
quality. It is perilous to lose track of the distance between what the tool measures
and the phenomenon being evaluated. The old expression “a pint's a pound the
world around” and the modern “one size fits all” are excellent examples of
the danger of divorcing the measure from the thing being measured.

2.5.5 Value and Quantity


Quantification is the most obvious and most frequently employed approach to
indirectly assessing quality. Although it is possible to set non-quantitative quality
targets, adding the element of quantification lends precision, consistency, and
replicability. A desire to know if the library is being used, which in the context of
some value systems is taken as a positive indicator of quality, translates
into counts of conspicuous acts of use such as circulation transactions, numbers of
questions asked, door counts, and related measures. Quality can be quantified
through methodologies such as citation analysis, which provides a basis for
identifying core resources that transcend the scope of expert judgment. These
quantitative indicators are especially attractive in that they are easily amenable to
comparisons. They can be compared over time for a single location, among
locations for a single library system, and across locations for a broader geographic
area. They can be applied consistently and with an impressive degree of validity.
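
To make the idea of comparable quantitative indicators concrete, the following minimal Python sketch computes the year-on-year change in circulation for each branch of a hypothetical library system; the branch names and counts are invented for illustration:

    # Hypothetical circulation counts per branch, by year (illustrative only).
    circulation = {
        "Central":   {2021: 120_500, 2022: 131_200},
        "Northside": {2021: 48_300, 2022: 44_900},
    }

    for branch, counts in circulation.items():
        previous, current = counts[2021], counts[2022]
        change = (current - previous) / previous * 100   # percentage change
        print(f"{branch}: {previous:,} -> {current:,} ({change:+.1f}%)")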

A fundamental aspect of the relationship between value and quantity is the
assurance that measures are meaningful. This requires revisiting measures at
appropriate intervals to reassess their usefulness.

2.6 Conclusion
The ultimate aim of all management principles, methods and techniques is to help
attain the objectives of the organization efficiently, effectively, economically, and
on time. It is an evaluation that testifies whether the objectives are achieved and if
so, to what extent. The primary focus is the evaluation of services provided to the
user: how they may be identified, how a background understanding of them may
be built up, how they can be evaluated and how the data collected can be put to
good use.

It is important to distinguish between evaluation and performance measurement.
According to Abbott (1994), “Performance indicators are simply management tools
designed to assist library managers to determine how well their service is
performing. They provide evidence on which to base judgements, but are not
substitutes for that judgement, since performance data needs to be interpreted
before such assessments can be made.”

SELF-ASSESSMENT QUESTIONS

1. Define evaluation and explain reasons for evaluation.

2. Describe the specific issues in evaluating libraries and their key indicators.

3. Explain the context for evaluation and assigning value in evaluation.

Activity:
1. Prepare a flow chart of the societal, functional and technological context of
evaluation with the help of a tutor.

RECOMMENDED READING

1. Blagden, J. and Harrington, J. (1990). How good is your library? A review of
approaches to the evaluation of library and information services. London:
Aslib.

2. Bohme, S. and Spiller, D. (1999). Perspectives of public library use 2: A
compendium of survey information. Loughborough: LISU.

3. British Standards Institution. (1998). Information and documentation: Library
performance indicators. London: BSI. (International Standard ISO 11620)

4. Brophy, P. and Coulling, K. (1996). Quality management for information and
library managers. London: Aslib.

5. Zweizig, D. and Rodger, E. J. (1982). Output measures for public libraries: A
manual of standardized procedures. Chicago: American Library Association.

Unit–3

IDENTIFYING PERFORMANCE
ISSUES FOR EVALUATION

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Page #
Introduction ....................................................................................................... 33

Objectives ......................................................................................................... 33

3.1 Introduction .............................................................................................. 34

3.2 Subjective Character of Organizational Effectiveness ............................ 36

3.3 Criteria for Effectiveness at the Organizational Level ............................ 37

3.4 Performance Issues for Evaluation .......................................................... 39


3.4.1 Internal/local sources ..................................................................... 39

3.5 Conclusion ............................................................................................... 41

INTRODUCTION

This unit is about identifying performance issues for evaluation. This unit will also
provide students with an understanding of organizational effectiveness and
performance measurement in libraries. Students will learn the criteria for
effectiveness at the organizational level.

OBJECTIVES

After studying this unit, you will be able to explain:

1. Organizational effectiveness.

2. Performance issues for evaluation.

3. Performance measurement in libraries.

4. The subjective character of organizational effectiveness.

5. Criteria for effectiveness at the organizational level.

3.1 Introduction
Effective organizations are the ones most likely to survive and prosper. It is easy to
see, therefore, why measuring organizational effectiveness is crucial. Administrators,
managers, and trustees all have significant stakes in determining whether their
organizations are successful and why they are successful. Organizational
effectiveness is not just the concern of those who work inside organizations, however,
but is also important to consumers. When individuals select businesses or institutions
to patronize, their decisions are often based on their evaluation of organizational
effectiveness. People want to patronize organizations they believe are effective. The
criteria for effectiveness used by consumers may, of course, differ in kind and
number from those used by individuals working inside the organization.

Organizations of all kinds are regularly called on to provide evidence of their
effectiveness by measuring performance. Libraries and other information agencies
are no exception. The motivations for performance measurement in libraries may
have evolved (particularly as new services are offered and newer resources, such
as electronic ones, are made available), but increasingly libraries must demonstrate
their worth in terms of things that sometimes elude simple quantification. Identifying and
properly utilizing the tools and essential frameworks and principles needed for
collecting, analyzing, and presenting such information can be difficult and elusive.
Moreover, the risks of losing resources or having decisions made by others outside
the library mean professionals must develop the foundational skills of determining
which of a variety of factors should be measured and how.

Performance measurement in libraries may be required or motivated by bodies
outside the organization (for instance, as part of accreditation by the Joint
Commission on Accreditation of Healthcare Organizations), as part of a self-
examination of the effectiveness of current services, or perhaps as justification for
an increase in resources.

During the 1970s and 1980s interest declined in developing quantitative standards
for libraries, and output measures for performance were developed instead. As library
costs rose faster than library income, librarians sought meaningful and measurable
ways to show how their libraries were performing. The development of performance
measures does not include indicators of what excellent service might require.
Rather, the approach is that of a single library assessing its services against its
own goals and objectives.

The fact that there are many possible criteria for determining organizational
effectiveness highlights a critical point: measuring effectiveness depends, in large
part, on point of view, on who is doing the judging. Libraries should be concerned
with collective judgements: it is the collective judgement of people connected to the
library that forms the source of acceptance, stability, and prosperity for libraries.

Performance or output measures were developed first in the public library sector.
In that community, there is now a recognition that performance, to be satisfactory,
requires a certain level of resources. As King Research observed in Keys to Success,
"Performance is the relationship between resources that go into the library -- the
inputs -- and what the library achieves using those resources -- the outputs or
outcomes". What is emerging is the need for standards relating to resources (or
inputs) that will enable appropriate levels of performance (or outputs), and there is
an emerging interest in developing professional standards against which a particular
library can be evaluated. There also is emerging interest in comparative assessment
and evaluation.
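
To make the relationship between inputs and outputs concrete, here is a minimal Python sketch; the figures are invented and stand in for one library's annual statistics:

    # Hypothetical annual inputs (resources) and outputs (achievements).
    inputs = {"budget": 1_500_000, "staff_fte": 24}
    outputs = {"loans": 310_000, "visits": 420_000}

    cost_per_loan = inputs["budget"] / outputs["loans"]
    loans_per_fte = outputs["loans"] / inputs["staff_fte"]
    print(f"Cost per loan: {cost_per_loan:.2f}")
    print(f"Loans per staff FTE: {loans_per_fte:,.0f}")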

Organizational effectiveness has been described as “the ultimate dependent
variable”. That is, it is the standard to which almost all other measures are related.
Specifically, it is critical to measure and analyze organizational effectiveness for
the following reasons:
1. We want to know how well we are doing and to report our condition in an
intelligible fashion to those who want or need to know.
2. We live in an age when public accountability and tight fiscal resources are a
reality. We have many other competitors for limited resources, and there is
little evidence that the public will be substantially increasing our available
resources. Measuring organizational effectiveness provides an important
rationale or justification for why resources should be allocated to libraries.
3. There is a considerable danger to organizations such as libraries if they are
unable to measure and report performance. One of these dangers is that, in the
absence of such information, citizens will find the most obvious and
simplistic ways to assess organizational effectiveness.
4. Library use is important to the library profession. A fundamental assumption
of library management is that the organization can implement interventions
that will increase use. Only by determining if the library is being used, and if
not, why not, can the critical missions of libraries be achieved.

In sum, neither library administration nor library patrons are going to suspend
judgments regarding whether the library is effective or ineffective, no matter how
difficult the task of measuring it is. Therefore, librarians need to find intelligent
ways to determine how well the library is doing and to report the results clearly to
the public.

One approach in comparative assessment is to identify a set of institutions with
which one wishes to be compared and use that set as a referent in making
comparisons on various aspects of library performance. The Association of
Research Libraries (ARL) is experimenting with this approach. In developing the
initial set of ratios, ARL identifies three issues which must be taken into account in
assessing the reliability and validity of the data: 1) consistency, that is the way data
are collected from institution to institution and collected over time. There is
difficulty with definitions here; 2) ease vs utility, that is, what is easy to gather data
on may not be the most desirable variable to measure; and 3) values and meaning,
these may have meaning only in the context of a local situation. ARL has been
collecting statistical data from its members for many years. Thus, the Association
is in a good position to use statistical measures which will help measure the quality
and costs of library services and enable institutional comparisons.
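
By way of illustration of such comparative ratios, the sketch below computes expenditure per student for a referent group of peer institutions. The institution names and figures are invented, and any real comparison would depend on the consistency of definitions noted above:

    # Hypothetical referent group for comparative assessment (illustrative only).
    peers = {
        "University A": {"expenditure": 4_200_000, "students": 18_000},
        "University B": {"expenditure": 3_100_000, "students": 15_500},
        "Our library":  {"expenditure": 2_800_000, "students": 16_200},
    }

    for name, data in peers.items():
        ratio = data["expenditure"] / data["students"]
        print(f"{name}: {ratio:,.0f} per student")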

3.2 Subjective Character of Organizational Effectiveness


Like most organizational measures, it would be ideal if a scientific approach could
be applied. It is generally easier if one can rely on measurable, quantitative, and
easily definable factors to determine effectiveness. But management is as much an
art as it is a science. Scientific approaches can help us manage and understand
organizations but libraries, like many other organizations, are also sociological and
cultural institutions. They are value-laden institutions in which notions of social
obligation and public good are intermingled with cost-effectiveness and efficiency.
Even in the private sector, the concept of organizational effectiveness is considered
“inherently subjective”.

Recognizing this subjectivity requires in the evaluator a sense of modesty: that there
is no “one best way” to measure organizational effectiveness; there are many ways
and the evaluation itself depends heavily on who is doing the measuring and what
measures are selected.

The selection of organizational effectiveness measures must therefore be seen as
presupposing certain values and interests on the part of the evaluator. Each
evaluation reflects these values and attitudes. In libraries, for example, choosing to
survey only adult library users allows adults and library users more power than
young people or nonusers to affect library decisions. Similarly, if librarians choose
to evaluate by looking only at internal operations, greater influence is given to the
views of employees than to the opinions of library users. In this sense, decisions
concerning organizational effectiveness may tell us as much or more about those
who evaluate as about the organization itself.

Failing to explicitly recognize these interests may result in distortion. This also
highlights the need to be self-critical and reflective regarding measuring
organizational effectiveness to ensure that values are not inappropriately imposed
on the process. For example, the public library has often been accused of being an
institution that caters to the better-educated and higher-income white middle class.
Are our evaluation techniques primarily designed to assess this one set of interests?

In the development of any statement of quantitative standards, and particularly in the
development of any international standards, an important consideration is the need for
uniform practices in collecting statistics and the need to develop standard definitions.
Standardization of statistics is essential if accurate comparisons are to be made. John
Sumsion has collected international data on public libraries using the published
statistics of individual library authorities within each of 25 countries. While his
purpose was not to make comparisons, such comparisons are inevitable.

As Sumsion says, the data cannot be considered precise because of the different
methods of collection, different definitions, and problems with incomplete datasets.
It should be noted that data from different years have been used. The expenditures
have been converted to British pounds throughout, using the average exchange rate
for the year to which the statistics apply. The years covered range from 1992 to 1994.
In the tables, "loans per capita data" are frequently for loans of total stock rather
than books only.

Sumsion's work offers guidance in the development of comparable statistics
internationally. Also, the standard, ISO 2789, scheduled for revision in 1998, provides
guidance. Standardization of statistics is essential if accurate comparisons are to be
made and if international standards or benchmarks are developed for libraries.

3.3 Criteria for Effectiveness at the Organizational Level


When looking very generally at what is usually considered when measuring
effectiveness at the organizational level, a wide variety of criteria have been used.
Among the more common criteria are the following:
1. The extent to which goals are met,
2. The extent to which decision-making is performed effectively and quality
decisions are made,
3. The extent to which the organization maintains its survival,
4. The extent to which the organization meets the needs of its customers,
5. The extent to which the organization recruits and maintains a satisfactory
labour force,
6. The extent to which the organization effectively directs staff,
7. The extent to which the organization exploits opportunities to grow, and
8. The extent to which the organization treats staff and customers with respect.

An additional criterion is the extent to which the organization has a beneficial
impact on society as a whole. This concept of trying to measure the impact on
society is sometimes used to distinguish organizational effectiveness from
organizational success. An organization is a success to the extent that it
satisfies the needs of society and improves the society it serves. This would be an
appropriate, albeit elusive, measurement.

Because organizations can be viewed from multiple perspectives, multiple
approaches and measures can be adopted. It is best to use multiple approaches and
determine if, in evaluating from different perspectives, the same basic results are
being obtained. This is referred to as triangulation. Consider for example the
multiple levels in the organization that can be subjected to evaluation and the types
of measures that could be employed:
1. Individual level: To what extent does the organization create satisfaction,
motivation, and commitment in the employee?
2. Subunit level: To what extent do individual work groups perform
effectively? To what extent are group goals obtained? To what extent is there
group cohesiveness?
3. Unit level: To what extent do departments or branches meet goals? To what
extent are these departments cohesive and adaptable to change?
4. Multiunit level: How effective is the coordination and communication
between units? How cost-effective is this coordination?
5. Organizational level: To what extent are organizational goals obtained? To
what extent is the organization adaptable to change?
6. Population-level: How does the organization’s performance compare to
other similar organizations?

The variety of levels and the many ways to measure effectiveness highlight the
difference between what is referred to as macro-organizational criteria versus
micro-organizational criteria. Macro-organizational criteria measure how
effectively organizations provide for the community at large. They answer
questions such as, “Is the organization serving its entire potential market and
serving this market well?” In terms of a library, they ask, “Is the library serving all
the potential users?” or “Is the library accomplishing its broader social goals?”
Micro-organizational criteria focus on internal operations. They answer questions
such as, “Are departments working efficiently?”; “Are qualified staff being
recruited?”; and “Are employees satisfied and committed to the organization?”.

3.4 Performance Issues for Evaluation
It is important to devise a systematic regime of assessment which meets the needs
of the library and can be understood outside the library by its parent body or users.
This usually involves an annual overview survey to identify more general problems,
backed up by a range of methods, to clarify specific issues. It helps to know where
you ‘fit in’. John Sumsion, former head of the Library and Information Statistics
Unit, has devised a classification of universities as follows:
1. Large and postgraduate and miscellaneous
2. Pre-1960 universities
3. Post-1960 pre-1992 universities
4. 1992 universities (former polytechnics)
5. Higher Education colleges: education and general
6. Higher Education specialist colleges.

This gives a basic guide to comparative perspectives and the kind of issue a
particular academic library should concern itself with. Floor space for large special
collections is unlikely to be a major issue in a post-1992 academic library.

Public library service points can also be categorized as ‘large’ or ‘medium’, depending
on the range and level of services they provide. Opening hours per week can be used
as a basis for categorization (Library Association 1995). Before beginning to look into
the question of what to evaluate it is worth asking: Is a survey really necessary? Can
adequate data be obtained from pre-existing sources which might supply you with a
comparative perspective and eliminate the need for further work? Some sources are
listed below, but the Library and Information Statistics Unit at Loughborough
University has emerged as the key provider, and indeed interpreter, of LIS statistics
in the UK, and their website is often a good place to start.

3.4.1 Internal/local sources


Sources to inform the identification of evaluation issues include the following:
1. Mission statements. Whether of the library or of the parent body, these will list
objectives which might be fairly specific. A University mission statement, for example,
might include a declaration that it wishes to have a wide range of modes of
attendance so that it can recruit students who would find it difficult to attend on
a normal full-time basis. Such students might have difficulty in making the best
use of the library’s services and a study of their needs could be undertaken.
2. Course/programme board reports. Specific university courses are managed
by course or programme boards which include student and library
representatives. They give both staff and students a forum for comments and

complaints about central services. In addition, annual reports are usually
produced in which students can comment on library services. In practice,
these tend to be repetitive and focus on a limited number of issues like the
availability of basic textbooks, access to seating, noise and loan regulations.
Nevertheless, they represent a useful forum of undergraduate opinion, and
taken in conjunction with other sources, can help to identify problems
requiring further attention, even if they are the most intractable.
3. Library committees and published sources. Most types of libraries have a
library committee. In public libraries, it has a statutory function. In academic
and special libraries its role varies from a decision-making body to a talking
shop. Much will depend on whether the members bring issues to it or whether
business is led by library staff, in which case it is less likely to produce
potential evaluation issues. A supplementary and much more anarchic source
is contributions to newspapers and in-house journals. Letters to local
newspapers can stimulate debate on public library issues and in universities
student newspapers are a popular platform for debate and complaint.
Unfortunately, although frequently impassioned, such sources are not always
as thoughtful and well-informed as they might be.
4. Institutional programmes of evaluation, which regularly evaluate all the
services provided by the parent institution, are still relatively uncommon.
Such an approach is useful because it compares the library with
other services and gives an idea of its status within the institution as a whole.
Institutional programmes of evaluation, apart from the obvious purposes of
informing decision-making and identifying issues for further study, can be
used as a basis for charter design as they give a picture of the level of service
which can realistically be provided in specific areas.
5. Management information sources can offer a starting point for
evaluation. The advent of automated systems has greatly reduced the need to do
basic survey work by hand. It should be easy to extract information from automated
systems about particular categories of users and the use they make of loan and
inter-library-loan services. Simple usage statistics can be the start of an
investigation into the worth of specific services.
6. Most types of libraries have a structure of meetings, from a simple staff
meeting in small libraries to a complex system of team and related meetings
in large libraries. Sometimes this is supplemented by ‘limited life’ working
parties. Such meetings will discuss issues specific to themselves or matters of
general concern. Problems they raise may be the subject of evaluation.
7. Electronic bulletin boards and suggestion boxes offer users an opportunity
to express themselves directly to the library. In general terms they allow users to
raise qualitative issues which, if they occur sufficiently frequently, can
identify problems which need further study.

8. Programmes of research to identify evaluation issues: if resources permit
it is worthwhile considering a programme of research to identify what the
library’s ongoing performance issues are likely to be and to try to devise a
systematic programme of evaluation. As indicated above, there are different
theories as to what constitutes evaluation, and many libraries identify a
programme of basic needs and an ad hoc programme of supplementary
evaluation which looks at specific issues. An overall ‘holistic’ programme of
evaluation is therefore lacking. The major disadvantage of this approach is its
proneness to inflexibility, and, in a period of rapid change, the performance
issues identified can become out of date.

3.5 Conclusion
All responsible librarians strive to be organizationally effective, but there is no one
way to measure organizational effectiveness. Library administrators should view
organizational analysis not as a one-time activity but as an ongoing
multidimensional process. Although the goal-setting model recommended by the
Public Library Association can be useful, it should not be seen as a complete
approach. Much depends on the purposes of the evaluation and the perspective of
those conducting it. Before deciding on a particular approach or set of approaches
one must address a variety of issues. These include, but are not limited to, the
following:
1. What is the purpose of the evaluation?
2. Whose point(s) of view is critical?
3. What functions are being evaluated?
4. What levels of the organization are being evaluated?
5. How will the results of the evaluation be used?
6. How much time, money, and staff are available to conduct the evaluation?
7. What type of information is needed to conduct the evaluation?
8. Against what standard can results be assessed?
9. Who will conduct the evaluation?
10. What are the possible sources of bias in the evaluation?

By approaching organizational effectiveness from many perspectives, it is possible
to get a more accurate picture of the true performance of the institution. This, in
turn, permits the library to deal more realistically with the problems and challenges
facing it and to plan constructively in the dynamic environment in which libraries
now struggle to survive or prosper.

SELF-ASSESSMENT QUESTIONS

1. What are organizational effectiveness and the subjective character of


organizational effectiveness?

2. Discuss criteria of effectiveness at the organizational level.

3. Describe the performance issues for evaluation in libraries.

Activity:
1. Prepare, with the help of a tutor, a “Model of Library Effectiveness”.

RECOMMENDED READING

1. Barton, J. (1998). The recommendations of the Cranfield project on
performance indicators for academic libraries. SCONUL Newsletter, 14,
Autumn, pp. 15–17.

2. Barton, J. and Blagden, J. (1998). Academic library effectiveness: A
comparative approach. London: British Library Research and Innovation
Centre. (British Library Research and Innovation Report 120)

3. Batchelor, K. and Tyerman, K. (1994). Expressions of interest in Brent.
Library Association Record, 96 (10), pp. 554–555.

4. Bawden, D. (1990). User-orientated evaluation of information systems and
services. Aldershot: Gower.

5. Cullen, R.J. and Calvert, P.J. (1995). Stakeholder perceptions of university
library effectiveness. Journal of Academic Librarianship, Nov., pp. 438–448.

6. Library Association. (1995). Model statement of standards. London: Library
Association Publishing.

7. Morgan, S. (1995). Performance assessment in academic libraries. London:
Mansell.

8. Poll, R. and te Boekhorst, P. (1996). Measuring quality: International
guidelines for performance measurement in academic libraries. München:
Saur. (IFLA Publications 76)

Unit–4

QUALITATIVE METHODS

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Page #
Introduction ....................................................................................................... 45

Objectives ......................................................................................................... 45

4.1 Introduction .............................................................................................. 46

4.2 Suitable Areas of Study ........................................................................... 48

4.3 Questionnaire Design ............................................................................... 49


4.3.1 Specific points in questionnaire design........................................ 50

4.4 Sampling .................................................................................................. 51


4.4.1 Practical Advice on a sample size ................................................ 52

4.5 Questionnaire Administration .................................................................. 53

4.6 Analyzing the Data and Presenting Results ............................................. 55

4.7 Conclusion ............................................................................................... 57

INTRODUCTION

This unit will give students an understanding of the quantitative methods of
library evaluation. It will guide them on suitable areas of study, questionnaire
design, sampling techniques, analyzing the data and presenting results.

OBJECTIVES

The objectives of this unit are to impart knowledge of the following aspects:

1. Quantitative methods of evaluation.

2. Questionnaire design and administration.

3. Sampling techniques.

4. Analyzing the data and presenting results.

4.1 Introduction
We intended to assemble a set of data that would “cut to the chase”, that is, to
present in a very limited space only those data elements that most directly and
eloquently built our case. We were mindful that we live in the “sound bite” age, in
which time to read is scarce and the competition for public attention is fierce. We
knew that we had to choose our data carefully and present only a handful of
attention-getting statements, each of which would carry important information.

Useful data abound. Data from within and without the library world that can be
fashioned into an argument for increased support for libraries are relatively easy to
find and present. Much data can be found on the world wide web, analyzed using
standard spreadsheet software or a calculator, and presented in documents designed
in word processing or desktop publishing software. Finding data from library
sources is easy if you know where to look. Evaluation processes include subjective and
objective methods and approaches. Although the emphasis tends to be on objective
(quantitative) methods, the subjective (qualitative) approach can provide a balance
to the evaluation.

Quantitative methods are typically those that provide statistical data or work with
known quantities. The quantities may be manipulated or changed, and the variations
measured. These methods answer questions of “what” and “how many?” and are
straightforward to use.

Quantitative studies gather statistical data and use known quantities as a way to
look at the impact a change in one component might bring about. For example, a
library manager could look at the number of reference questions answered in terms
of different times of the day, number of staff available, location, or amount of time
available. The advantage of this approach is the ability to control the environment
to allow the effect brought about by the change in one variable to be measured. A
cause-and-effect relationship can then be established. This approach does not
address the complexity of social interactions that might impact service but can
provide useful data for hiring questions and other service decisions. As a
methodology, objective or quantitative studies are easier to administer and more
common than qualitative ones.
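
As a concrete, purely illustrative sketch of such a study, the short Python program below tallies hypothetical reference questions by hour of day and relates them to desk staffing; the timestamps and rota are invented:

    from collections import Counter

    # Hour of day at which each hypothetical reference question was received.
    question_hours = [9, 9, 10, 10, 10, 11, 14, 14, 15, 15, 15, 15, 16]

    # Staff on the reference desk in each hour (illustrative rota).
    staff_on_desk = {9: 2, 10: 2, 11: 1, 14: 1, 15: 1, 16: 2}

    questions_per_hour = Counter(question_hours)
    for hour in sorted(questions_per_hour):
        per_staff = questions_per_hour[hour] / staff_on_desk[hour]
        print(f"{hour}:00  {questions_per_hour[hour]} questions, "
              f"{per_staff:.1f} per staff member")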

Selecting a specific methodology depends on the purpose of the evaluation, its
expected uses, and the community under study. Surveys are efforts to gather
information about different aspects of services in a variety of library settings. These
usually take the form of questionnaires given to a portion of the library community.
Survey instruments help determine opinions and attitudes about the facts of library

and information services including effectiveness; they are not, however, useful in
describing why a library system is functioning effectively. Surveys need to be
carefully conducted and administered. A survey questionnaire needs to be pretested
and designed to prevent problems of question interpretation, bias, reading ability,
and other completion errors.

Survey work, in whatever form, is the most widely used method of evaluating
library services and the questionnaire is widely viewed as the most attractive
method of quantitative data collection. It can be applied to a whole range of issues
from a simple overview survey of user satisfaction to a detailed investigation of the
needs of small groups of users. However, the structure of the questionnaire and its
method of administration will vary, depending on the user group being addressed.

To its exponents, the quantitative, questionnaire-based approach has distinct
advantages. The questionnaire can report on the views of many thousands of people
and give a breadth of data not available to qualitative methods. It adopts a highly
structured approach and because it uses statistical methods and produces numerical
data its outcomes are perceived to be ‘scientific’ and therefore objectively correct.
If well-designed, it gives clear-cut answers to the questions asked and because
interested parties can complete it without intervention it appears to be neutral. It is
good for asking questions based on counts: ‘How often do you visit the library?’ A
questionnaire offering a limited number of answer options of the YES/NO or tick
box variety (closed questions) can be analyzed relatively easily and quickly.

However, as a method, it has the vices of its virtues. Because it is highly structured
it is also highly inflexible. If a fault is found in a questionnaire halfway through its
administration, not much can be done about it. It does not penetrate the attitudes
which inform the answers. A questionnaire might readily reveal that Saturday
opening does not appeal to part-time students, although superficially Saturday
opening might seem attractive to people who work during the week. Behind the
answer lies the complex world of the part-time student who has to juggle family,
social, occupational and study commitments. Furthermore, the questionnaire
answers only the questions which have been asked. If important issues have not
been addressed by the questionnaire, then its qualitative value will be diminished.
At its worst extreme, it is possible that a questionnaire designed by selecting
questions which focus on certain issues and avoiding others could produce a
misleading result. For example, it might be possible to get
an over-favourable view of library service by avoiding asking questions about
service areas which users consider to be poor. While such goings-on are unknown in
librarianship, it is interesting to note that precisely such a charge has been made

against Britain’s privatized railway companies (Independent 7.1.1999). The
accurate identification of the performance issues which inform question selection
is at the heart of good questionnaire design.

Finally, although the analysis of a questionnaire should produce useful
generalizations, it is not the outcome of a collective experience. Questionnaires are
completed by individuals and represent the views of individuals. They are not a
group experience.

4.2 Suitable Areas of Study


There is a wide range of issues suitable for study. Here are a few examples:
1. An overview user or general satisfaction survey aims to produce a general
picture of user satisfaction. This is used in all types of libraries. It often
identifies issues for further study, but its main purpose is to effect service
improvements.
2. Materials availability survey which tries to establish whether users found the
books, journals or other sources they were looking for, usually on a particular
day.
3. A reference satisfaction survey which invites the user to evaluate the quality
of the reference service received. Again, this usually relates to a particular
day or otherwise limited period.
4. Benchmarking—a comparison of one library’s performance with many
others. This presupposes the existence of a comparative database of edited
data.
5. Ethnic origin surveys: mainly used by public libraries to understand how the
library is used by ethnic minorities.
6. Lapsed users’ surveys—used mainly by public libraries to estimate the library
services’ penetration into the community.
7. Opening hours: to plan the extension or alteration of opening hours.

The availability of data from automated systems reduces the need for data
collection in areas which have a systemic element. Some aspects of satisfaction
with an inter-library loan service can be inferred from statistics and benchmarking
with other institutions but there might still be a need to collect the views of users
who attach a lot of importance to the service, like research fellows and research
students in higher education. Problems like the length of queues can be tackled by
simple counts and may not need to involve the user at all. In public libraries, some
survey priorities are impact, market penetration, promotion of services, book
buying compared with borrowing, information/enquiry services, audiovisual
services and electronic information sources.

4.3 Questionnaire Design
In questionnaire design standardized methodologies should be used as much as
possible. Standardized methodologies allow you to benefit from the experience of
others and to compare your results with similar libraries.

Although some questionnaires use an A3 format (e.g., the IPF National Standard
User Survey offers this as an option), the A4 format is widespread and influences
the design and structure of the questionnaire.

The Committee of Vice-Chancellors and Principals of the Universities of the United
Kingdom recommends short questionnaires and quotes examples of questionnaires
occupying only one side of A4. The one-sheet A4 format is ideal for a short user
survey but less attractive for longer questionnaires where several sheets might be
necessary to avoid crowding. Unfortunately, long questionnaires can be off-putting
and are perhaps best reserved for small populations with a high level of
commitment to the library or the survey. Special library users and research students
are obvious examples. In general, questionnaire questions should proceed from the
general to the specific. General questions which do not intimidate the respondent
will encourage progression to more specific ones. If questionnaires are broken up
into components or ‘question modules’, progression within each module should
again be from general to specific. It is a good idea to leave a fair amount of white
space on the paper so that the respondent does not feel overwhelmed by a mass of
print.

There are essentially two types of questions: closed (or pre-coded) and open (free
response) questions. In closed questions, the respondent is offered a choice of
answers and ticks or circles the most appropriate one. In open questions,
respondents can answer spontaneously in their own words. The closed question is
much easier to analyse, especially by computer, but the options offered must be
appropriate to the respondent. Here it is essential to build on the experience of
others and carefully identify in advance the performance issues which will inform
the design of your questionnaire questions. The answers to open questions can be
much more instructive and entertaining to read but the comments made have to be
turned into identifiable performance issues which can then be quantified. This is a
time-consuming and somewhat subjective exercise.

However, the points made contribute to the identification of performance issues and
therefore to modifications to the questionnaire in the future.
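
To illustrate why closed questions are easy to analyse, here is a minimal Python sketch that tallies pre-coded answers to a single closed question; the coding scheme and responses are invented for the example:

    from collections import Counter

    # Pre-coded answers to: 'How often do you visit the library?'
    labels = {1: "daily", 2: "weekly", 3: "monthly", 4: "rarely"}
    responses = [2, 2, 1, 3, 2, 4, 2, 3, 3, 2, 1, 2]

    counts = Counter(responses)
    total = len(responses)
    for code in sorted(counts):
        share = counts[code] / total * 100
        print(f"{labels[code]:8} {counts[code]:3}  ({share:.0f}%)")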

4.3.1 Specific points in questionnaire design
1. Don’t ask a question unless you need to know the answer. Always be guided
by the general aim of your survey and avoid peripheral or extraneous
questions. They will only bulk out the questionnaire and make it less attractive
to respondents. It will also lengthen analysis time.
2. Ask only questions which can be answered. This may seem so obvious as not
to require stating, but questions should be avoided which require respondents
to undertake significant data collection themselves.
3. Ask only questions which can be realistically and truthfully answered. Don’t
encourage the respondent to give speculative or inaccurate information. This
applies particularly to open questions.
4. Ask only questions which the user is prepared to answer. Avoid questions that
the respondent might consider embarrassing. Put sensitive questions (e.g.,
sex, age) last.
5. Ask only for information unavailable by other means; a great deal of survey
and statistical data exists in published sources. Don’t reinvent the wheel.
6. Ask precise rather than general questions. Avoid questions like ‘Are you
satisfied/dissatisfied with the library service?’ They are insufficiently probing
and likely to mask dissatisfaction with specific areas of the service.
7. Avoid double questions like ‘Would you like more books and a longer loan
period’. They are different issues and should be treated separately. Putting
them together will lead to a confused analysis.
8. Use simple jargon-free language. Questions should be short, simple and easy
to grasp. Jargon is a particular problem in a jargon-filled profession like
librarianship and can be difficult to avoid. I once designed an OPAC
satisfaction survey questionnaire in conjunction with a Psychology student
and a university lecturer, an expert in questionnaire design. Despite this,
respondents criticized what they perceived as jargon in the terminology of the
questions. Jargon is very difficult to avoid in a survey which focuses on
users’ perceptions of technical issues. In libraries with a privileged user
group, in frequent contact with the service, jargon may be more acceptable.
9. Avoid ‘gift’ questions such as ‘Would you like Sunday opening?’. The
respondent is unlikely to say ‘no’ even if he or she does not intend to make
use of such a service himself or herself. It is better, although more long-
winded, to offer a range of viable options e.g.

‘The library is considering opening on Sundays. If this were to be done, would
you prefer (circle one):
•  11.00 am to 4.00 pm
•  9.00 am to 1.00 pm
•  1.00 pm to 5.00 pm’

The question is indicating to the respondent what the library can reasonably
deliver and is inviting him or her to make a responsible choice.

10. Appreciate that the respondent may perceive a hidden agenda in the question.
A survey of a service which is not widely used but valued by those who do
use it may be interpreted by users as a signal that it will be withdrawn. This
may result in misleading answers. Surveys of the use of infrequently used
periodicals are a good example.

4.4 Sampling
Sampling is done when it is not possible to survey an entire population. The
procedure of making statements about the parent population from the evidence of
the sample is known as inferential statistics. Sampling is typically done when
surveying public library users or university undergraduates. In the case of a small
group of users, it is possible to survey the entire population (doing a census). This
usually applies to special libraries or small groups of users with specific needs,
e.g., the disabled.
Samples aim to represent the population on a small scale and if the sample is
reliable, it should be possible to reach conclusions about the whole population.

The term sampling frame means a list of the entire population as defined by
whatever factors are applied, such as gender or occupation. It may be a register of
borrowers, university student records or a list of staff who work for a company and
use a special library. Sampling theory demands that probability should be allowed
to operate fully and therefore samples should be chosen randomly. There are several
methods of random sampling:
1. Simple random sampling: each member of the population (sampling frame)
has an equal chance of being chosen. If not many people are involved, their
names can be written on pieces of paper and drawn from a hat. Where many
people are involved, a table of random numbers can be used.
2. Systematic random sampling: this also involves the use of a sampling frame.
A starting point is selected at random and then every nth number thereafter,
fourth, tenth or whatever, depending on the size of the sample desired.
3. Stratified random sampling: the sampling frame is divided by criteria like
library users by department or faculty, and random sampling takes place
within each band chosen.

To stratify, you have to know that each group is different. This is difficult to do
accurately, and it might be useful to use proportionate sampling to ensure that the
numbers for each band reflect the numbers in the sampling frame.
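The three random methods can be sketched in a few lines of Python; the sampling frame here is a hypothetical list of 1,000 borrower records:

import random

# Hypothetical sampling frame: 1,000 borrower records in two faculties.
frame = [{"id": i, "faculty": random.choice(["arts", "science"])}
         for i in range(1000)]

# 1. Simple random sampling: every member has an equal chance of selection.
simple = random.sample(frame, k=50)

# 2. Systematic random sampling: a random starting point, then every nth member.
n = len(frame) // 50
start = random.randrange(n)
systematic = frame[start::n]

# 3. Stratified random sampling (proportionate): sample within each band so the
#    numbers reflect each band's share of the sampling frame.
stratified = []
for faculty in ("arts", "science"):
    band = [m for m in frame if m["faculty"] == faculty]
    stratified.extend(random.sample(band, k=round(50 * len(band) / len(frame))))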
It is not always possible for a library to carry out a structured survey and there are
also non-random and therefore less reliable methods which are widely used:
1. Accidental sampling: whoever is available is chosen.
2. Quota sampling: whoever is available is chosen based on predetermined
characteristics such as age, gender, social class and occupation and a certain
number of people have to be surveyed in each category. It is a quick method
and convenient to administer and is widely used for opinion polling.
3. Purposive sampling: the survey population is chosen from prior knowledge,
using intuition and judgement.

The relative validity of these methods depends on the purpose of the survey. Non-
random methods can give more information about minority views. Random
methods, by their very nature, are unlikely to generate many respondents from
minority interest groups. This is the case in higher education where the views of
the largest single group, full-time undergraduates, will predominate unless the
views of other groups are sought as well. Quota sampling is a good method of
surveying changes in opinion over time. The survey would have to be repeated at
intervals to build up a meaningful, long-term picture.

4.4.1 Practical Advice on sample sizes


Although samples must be of a certain minimum size (a rule of thumb might be 30),
accuracy and representativeness are important in determining sample size. The
bigger the sample, the longer it will take to analyse the resulting data and the higher
the administrative costs will be. Priority Search Ltd (PSL) of Sheffield, which
produces survey software for academic libraries, specifies a minimum of 500
correctly completed questionnaires. SCONUL (1996) offers advice on sample sizes
and desirable response rates for its five standard questionnaires: for the General
Satisfaction Survey, a response rate of 55–60% should be expected and a minimum
of 400 returned forms is necessary for comprehensive analysis, while considerably
fewer are permissible for the Reference Satisfaction Survey and Online Search
Evaluation. For public libraries, the IPF manual recommends sample sizes of
between 740 and 1060, depending on the size of the authority. It also suggests
adding another 30% to these sample sizes to cater for non-response. An American
body, the National Education Association, has produced a formula, from which a
table has been devised, giving recommended sample sizes for populations ranging
from 10 to 1,000,000 (Krejcie & Morgan 1970). Sample size calculators can also
be found on the World Wide Web.
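The Krejcie and Morgan (1970) table can be reproduced from their published formula. A minimal sketch in Python, using the standard assumptions behind the table (chi-square of 3.841 for 95% confidence, a population proportion of 0.5 and a 5% margin of error):

import math

def krejcie_morgan(population, chi_sq=3.841, p=0.5, margin=0.05):
    # chi_sq: chi-square value for 1 degree of freedom at 95% confidence.
    # p: assumed population proportion (0.5 maximizes the required sample).
    # margin: acceptable margin of error (5%).
    numerator = chi_sq * population * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

for n in (100, 1000, 10000, 1000000):
    print(n, krejcie_morgan(n))   # 100 -> 80, 1000 -> 278, 10000 -> 370, 1000000 -> 384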

4.5 Questionnaire Administration
Before administering a questionnaire, it is advisable to pilot it. This helps to
eliminate errors before the questionnaire is administered. Problems likely to be
encountered include a poor choice of terminology and varying interpretations of the
questions due to ambiguous or misleading wording or, simply, the differing
viewpoints of respondents. Ideally, the pilot should be tried out on 10% of the
intended sample. This can, in itself, be quite a large number and it might not be
possible in which case at least some of the sample should be consulted. It is also a
good idea to seek the advice of experts. Universities, businesses and industry and
local authorities usually employ people skilled in survey or market research
techniques, and they will often offer advice or even direct help in planning and
organizing survey work.

In administering questionnaires, a range of practical, tactical and even ethical issues
must be considered. It is not thought appropriate, in the United Kingdom, to survey
minors so persons under the age of 16 cannot be asked to complete questionnaires.
Information can be elicited from children by interviewing, and this is best done
with a parent/guardian or teacher present. It is also advisable to find out if any other
survey work is being done within the institution or by or in the physical
neighbourhood of a local authority. People who have already been subjected to a
bout of surveying about something else may not take kindly to being bombarded
with questions about the library.

This phenomenon is known as ‘questionnaire fatigue’. Cynicism is another factor.
People who complete questionnaires regularly for one or more organizations may
become disillusioned if nothing seems to happen as a result of their cooperation.
This usually results in comments in returned questionnaires like:

‘I complained about this last year, but you have not done anything about it’.

The timing of questionnaire administration is important. In public libraries,
October/November is preferred, failing which the spring will do. If survey work
involving visiting people’s homes or interviewing in public places is intended the
worst months of winter are best avoided. Interviewers should be issued with
identification badges, and it might be wise to inform the police of the intended
work. In academic libraries, the time chosen might relate to the nature of the work
intended but the early part of the calendar year is a good time. Most categories of
users are present and first-year students have had a term or semester in which to
become familiar with the library’s services. There is also time to analyse and
disseminate the results before the session ends. The coming of semesterisation,
however, tends to make January a ‘dead’ month as students are either sitting
examinations or are absent from the campus.

All service points (branch libraries, campus site libraries or whatever) should be
surveyed to identify differences or problems particular to one or more sites. A large
public library service with many branches may not be able to afford to survey them
all simultaneously. In this case, all branches should be surveyed in a three-year
rotational period. Survey results from small branches should be interpreted with
care as the number of returns may be so small as to make the results of limited
worth. However, it is a good idea to cross-check the results from different branches
to try to identify similar patterns of activity. A variety of methods may be used to
administer the questionnaires, depending on the circumstances. In public libraries,
temporary staff can be hired if funds permit, but it is often better to use existing
library staff who will understand why the work is being undertaken and will be
better qualified to explain and justify the exercise to users.

The IPF manual offers precise instructions on how to administer the questionnaires.
In academic and special libraries, a range of options is available. Internal mail is an
option open to both and can yield better results than postal questionnaires to
people’s homes. While postal questionnaires typically yield response rates of
15–30%, internal mailing can double this. In university libraries, questionnaires can be
administered both inside and outside the library. Sympathetic lecturers can be
persuaded to distribute questionnaires at the beginning of lectures and if these are
filled in at the time and collected at the end of the lecture there will be a good
response rate. User education evaluation questionnaires can be distributed at the
end of user education sessions and, again, if completed at the time, this will produce
a good response rate. Within the library, the staff are the best people to administer
the questionnaire. They can distribute forms to users entering the
library and explain the purpose of the study if questioned. It is a good idea to record
the number of questionnaires distributed. From this, it will be possible to calculate
the response rate. At least 40% should be aimed for.
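The response-rate calculation itself is simple arithmetic; a minimal sketch, with hypothetical figures:

def response_rate(distributed, returned):
    # Percentage of distributed questionnaires actually returned.
    return 100.0 * returned / distributed

rate = response_rate(distributed=500, returned=230)
print(f"{rate:.1f}%")                                   # 46.0%
print("acceptable" if rate >= 40 else "below the 40% target")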

Questionnaires can also be administered electronically, over the web, by mounting
the questionnaire on the institution’s web pages or in a form service such as Google
Docs. The
questionnaire must be brought to the intended respondents’ attention. This is easy
if the questionnaire is administered in conjunction with a previously agreed activity
like participating in a web-based learning programme. Failing that, the
questionnaire could be presented as a default page on the institution’s web pages
so it is always brought to users’ attention.

In one reported study, the return rate for questionnaires administered on paper was
about 14% while the return rate for the electronically administered questionnaire
was 72%. The return rate might have been even higher had the IT skills of
respondents been better.
Time is saved in data analysis since the raw return data is already in electronic
format and does not have to be created from scratch. This method, despite its
advantages, has one obvious and serious limitation. It effectively limits the sample
to the computer literate and is most suitable for use in special libraries and
universities with few non-traditional students. If used in public libraries, it will tend
to exclude the elderly and those with low skill levels.

The staffing and management of survey work is a perennial problem. Although
library managers are keen to use survey results, they are frequently reluctant to
invest in the labour costs required. Some public and academic libraries maintain
sections or departments with a remit for performance evaluation and research. They
usually contain very few staff. Failing this, there may be a member of staff who
monitors quality as part of his/her duties. Whoever manages survey work should be
at a senior level, so that he or she can contribute to the management process. There
should also be adequate clerical support for checking completed questionnaires,
preparing data for input, contributing to analysis and writing up etc. In higher
education, it is sometimes possible to use students who need to learn about survey
work as part of their studies. However, this will only work if the interests of students
and librarians coincide. The future of staffing levels for evaluation is linked to the
development of initiatives like Best Value and the quality movement generally
since they imply the need for quality officers.

4.6 Analyzing the Data and Presenting Results


In writing up and presenting the data it is tempting to discuss, in some detail, the
methodologies chosen and how they were applied. It is best, however, to keep the
methodological description to a minimum and concentrate on results and
recommendations for action. There should be only enough discussion of
methodologies to allow others to repeat the survey and understand how the results
were obtained. The audience for the completed report will include decision-makers,
both inside and outside the library, library staff and users of the service. They will
be primarily interested in outcomes and recommendations for action especially
insofar as they are concerned.

Data can be analyzed using a spreadsheet package such as Excel, which will give
adequate statistical analysis, in terms of totals and percentages, for surveys of a
moderate size (hundreds rather than thousands of returned forms). It can also
generate attractive ways of presenting the data, like bar and pie charts. For very
large surveys, which are likely to be undertaken only occasionally, it might be
necessary to use a dedicated statistical package. One of the most frequently used is
SPSS (Statistical Package for the Social Sciences). It has a large repertoire of
statistical techniques and is well-known to social researchers, so it is fairly easy to
get advice on its application.
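For surveys of moderate size, the totals, percentages and charts just described can also be produced in a few lines of code. A minimal sketch using Python with pandas and matplotlib; the file name and column name are hypothetical:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file: one row per returned questionnaire, with a closed
# question rated 1 (poor) to 5 (excellent).
df = pd.read_csv("survey_returns.csv")

counts = df["opening_hours_rating"].value_counts().sort_index()
percentages = 100 * counts / counts.sum()
print(counts)        # totals per rating
print(percentages)   # percentages per rating

percentages.plot(kind="bar")                   # a simple bar chart of the results
plt.xlabel("Rating (1 = poor, 5 = excellent)")
plt.ylabel("% of respondents")
plt.savefig("opening_hours_rating.png")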

There are two means of communication: closed and open. Closed communication
includes written reports, newsletters and posters briefly summarizing results. Open
communication includes meetings of library staff, committee meetings and focus or other structured
groups. All these will be appropriate in disseminating results. In public libraries,
the target audience will include service managers and library managers, local
authority elected members and the general public. Summaries of the report, in press
release form, can be sent to the local newspaper.

In special libraries, the report can be summarized in a house journal or newsletter,
if one exists, and a copy of the report sent to the governing body. Similarly,
universities maintain a range of publications including a staff journal or newsletter,
a library newsletter and an alumni magazine. All these may be appropriate outlets.
If the university students’ union publishes its own newspaper, this is a good way to
communicate with undergraduates. Unfortunately, it is much easier to
communicate with full-time undergraduates than with their part-time equivalents.
Submitting information to any publication outside the library’s control means loss
of editorial control, and it might be worth structuring key press releases to
emphasize any points the library would like editors to take up.

Open communication offers the survey manager an opportunity to report on the
survey in person and respond to comments and criticisms. Such feedback is a
valuable supplement to the report and can contribute to future survey planning. It
is a good idea to report to library staff as this gives them feedback on their
performance and how users view their work. Library committees and policy-
making bodies can also be approached. However, detailed reporting to several
bodies can be extremely labour-intensive and time-consuming so this aspect of
dissemination should be planned with that in mind.

Communication and feedback are in the process of being revolutionized by the
development of email, although this development naturally affects only those with
access to a computer. However, reporting results by email is quick and cheap,
because no mailing costs are involved, and allows recipients to respond easily and
quickly. It can be used in public libraries by users on the public library’s computers,
and in special and university libraries by users on computers supplied by their
employers. In higher education, it is probably still more used to communicate with
staff rather than students, but there is great developmental potential here.

4.7 Conclusion
Good measures are valid, reliable, practical and useful. Each of these components
contributes to the success of the evaluation. A valid measure accurately reflects that
which it is meant to measure. The most appropriate methodology matches the goals
of the research with the strengths of a particular approach. Using more than one
methodology or collecting data from more than one perspective can result in a better
understanding of the service under study.

The ultimate aim of all management principles, methods and techniques is to help
attain the objectives of the organization efficiently, effectively, economically, and
on time. It is evaluation that testifies whether the objectives are achieved and, if
so, to what extent. Evaluation also includes accountability to the funding
authorities, the patrons and other stakeholders as to whether the resources spent
have resulted in the attainment of the desired objectives. It may be pertinent to state
that, despite the importance of evaluation for funding agencies and the managers of
libraries, accounts of actual evaluations of libraries or information services are very
few, notwithstanding the many theoretical expositions. Whatever the method of
evaluation is adopted, it is likely to be influenced by the type of library and the area of service
required for evaluation.

SELF-ASSESSMENT QUESTIONS

1. Define the quantitative method approach of library evaluation.

2. Discuss the various suitable areas of study in evaluating library services and
products.

3. Describe the important facts of questionnaire development and administration.

4. Explain sampling techniques, data analysis, and the process of presenting the report.

Activity:
1. Visit a nearby university library and evaluate its circulation services by
developing a questionnaire with the help of the circulation staff/librarian.

RECOMMENDED READING

1. Bawden, D. (1990). User-orientated evaluation of information systems and
services. Aldershot: Gower.

2. Blagden, J. and Harrington, J. (1990). How good is your library? A review of
approaches to the evaluation of library and information services. London:
Aslib.

3. Narayana, G.J. (1991). Library and information management. Delhi:
Prentice-Hall.

4. Lancaster, F.W. (1977). The measurement and evaluation of library services.
Arlington, Virginia: Information Resources Press.

5. Vickery, B. and Vickery, A. (1987). Information science in theory and
practice. London: Butterworth.

6. Saracevic, T. et al. (1977). Causes and dynamics of user frustration in an
academic library. College and Research Libraries, p. 8.

7. Ralli, T. (1987). Performance measures for academic libraries. Australian
Academic and Research Libraries, 18(1), p. 2.

8. Moore, N. (1989). Measuring the performance of public libraries. IFLA
Journal, 15(1), p. 17.

Unit–5

QUALITATIVE METHODS

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Page #
Introduction ....................................................................................................... 61

Objectives ......................................................................................................... 61

5.1 Introduction .............................................................................................. 62

5.2 Some Qualitative Techniques/Methods ................................................... 64


5.2.1 Focus groups ................................................................................ 64
5.2.2 Suggestions boxes ........................................................................ 67
5.2.3 Diary techniques .......................................................................... 68
5.2.4 Interviewing ................................................................................. 70
5.2.5 Observation .................................................................................. 71

5.3 Conclusion ............................................................................................... 72

INTRODUCTION

After going through this unit, you should get acquainted with the basic qualitative
methods of library evaluation. The main focus of the unit is to give an
understanding of various methods of the qualitative approach to library services
and products.

OBJECTIVES

After studying this unit, you will be able to explain the following:

•  Basics of qualitative methods.

•  Focus group, interviewing and observation techniques.

5.1 Introduction
Evaluation is concerned with determining the strengths and weaknesses of a library’s
collection and services in terms of their intrinsic quality, the extent to which
the service and collection support and further the library’s mission and goals, and
the value of the service and collection to the library’s users and potential users.
Evaluation of library services and collections is an integral part of the broader
collection development process and of planning the library’s services and structure.

Data gathering is the first step in the evaluation process. Collecting data, whether
qualitative or quantitative, is a way to describe current conditions. Evaluation
involves examining data in the context of appropriate organizational models and in
terms of the library’s mission. The resulting judgments about what the data mean
form the basis of decisions for future action. On the whole, qualitative methods
are less used than quantitative methods. They figure less in the literature, especially
standard textbooks, but are increasingly reported and becoming better understood.
Qualitative methods include such techniques as interviews, frequently conducted
on a one-to-one basis; meetings, whether loosely structured or more tightly
organized, like focus groups; suggestions boxes, whether in manual or automated
form (via an OPAC or website); observational methods; and the keeping of diaries.
Some involve direct, face-to-face interaction and require special skills and training;
others do not.

Qualitative methods work in two ways:


1. Where you already know something about the subject. For example, if a
survey indicates dissatisfaction with an aspect of the service without making
clear the reasons why, it might then be appropriate to conduct interviews or
hold a meeting with the people concerned to find out why they are dissatisfied.
2. Where you do not fully understand the situation. Interviews or meetings could
be held to ask ‘why’-based questions to find out if a problem exists and what
the performance issues are. If performance issues are identified, they can be
used either to tackle the problem or as a basis for collecting ‘hard’ quantitative
data. The issues identified will help in the designing of the questionnaire.

Behind the qualitative method lies the simple concept of the story. We make sense
of the world by telling one another stories. In talking about libraries, too, we tell
stories. What did the person come for? What were their intentions? What did
they find? What happened? What was the value? The story can take many forms.
A small child enthusing to its parents about a story hour at a public library is
engaging in positive qualitative evaluation just as much as adult members of a focus
group discussing a library service or participants in a structured interview.

They all represent direct experience. The qualitative approach has several
characteristics and advantages. It works well with small samples and is appropriate
for analyzing problems in depth. For this reason, it is useful for tackling complex
and poorly understood problems. By collecting users’ perspectives directly, it is
possible to tease out the underlying issues. Answering the question ‘Why’ rather
than ‘How often’, emphasizes the role of the participant who can present himself
or herself as an actor in the events being studied. However, the approach is much
looser. It does not produce the tidy array of statistics at the end which a quantitative
survey would and, for this reason, is perceived as being non-scientific.

There is also a danger that it may be manipulated by vocal minorities who are good
at case-making. This sometimes happens with university teaching departments who
are anxious to promote or retain a service which they see as benefiting them
specifically. For this reason, some background in quantitative data is desirable. This
is not a problem for libraries which practice a systematic regime of assessment, but,
for libraries not in that position, some background published sources such as LISU-
produced figures might suffice. Qualitative methods also allow users to participate
more in library management and give them a feeling that they are making a direct
contribution to policy formulation.

It is often said that qualitative methods are less labour-intensive and time-consuming
than quantitative methods. There is some truth in this. (The qualifications are discussed
in detail below.) However, planning and facilitating meetings require special skills
which library staff may not possess and the logistical problems of getting an
appropriate group of people into a suitable room on a certain day at a certain time
should never be underestimated. It helps considerably to have a pre-existing
organization, independent of the library, which acts as an organizational focus.
•  The library at the University of Huddersfield has built up good relations with
the University’s Students Union, which has 150 course representatives who
can be called on for focus group work.
•  Glasgow Caledonian University had, for a time, an organization called the
Partnership for Quality Initiative, which organized and facilitated meetings for
several university departments, including the library.

If no appropriate organization exists, it might be necessary to employ an outside
body, as Brent Arts and Libraries did when studying the needs of ethnic minorities
within the Borough.

5.2 Some Qualitative Techniques/Methods
5.2.1 Focus groups
Of the various qualitative methods available the focus group is probably the one
which has attracted the most attention. They are called focus groups because the
discussions start broadly and gradually narrow down to the focus of the research.
They are not rigidly constructed question-and-answer sessions. Focus groups are
used in a variety of situations. In business and industry, they are often used to test
new product ideas or evaluate television commercials. In higher education, they can
be used to ‘float’ new ideas such as embarking on a major fundraising venture.
Focus groups typically consist of 8 to 12 people, with a moderator or facilitator who
focuses the discussion on relevant topics in a non-directive manner. The role of the
facilitator is crucial. He or she must encourage positive discussion without
imposing control on the group. There is a danger that focus groups can degenerate
into ‘moan sessions’. The structured discussion group (also known as a snowball or
pyramid discussion) is a variant of the focus group which tries to address this issue.
After an introductory discussion, the participants begin by working in small groups
to identify and prioritize key themes. The groups then come together, and each
group is asked to make a point, which is then tested by the other groups. Agreement
is reached on each point in turn and a record is kept of the discussion, which is
verified towards the end of the session. Sessions last between 45 minutes and
an hour and a quarter, and about 14 points usually emerge.

Focus groups have several advantages over other forms of research which
have been usefully summarized by Young (1993):
1. Participants use their own words to express their perceptions.
2. Facilitators ask questions to clarify comments.
3. The entire focus group process usually takes less time than a written survey.
4. Focus groups offer unexpected insights and more complete information.
5. In focus groups, people tend to be less inhibited than in individual interviews.
6. One respondent’s remarks often tend to stimulate others and there is a
snowball effect as respondents comment on the views of others.
7. The focus group question design is flexible and can clear up confusing
responses.
8. Focus groups are an excellent way to collect preliminary information.
9. Focus groups detect ideas which can be fed into the questionnaire design.

There are 12 points to remember in organizing focus groups:


1. Use facilitators from outside the library. Participants may be reluctant to
discuss issues with a library facilitator and it is difficult for a facilitator who
has a vested interest in the outcome to be objective.

2. Select facilitators with expert skills. These include good communication skills
and experience with group dynamics. They do not have to be an expert on the
subject under discussion.
3. When recruiting ask for volunteers. A good way to do this is to add a brief
section to regularly used questionnaires in which respondents are asked if they
are willing to participate in further survey work. Relevant organizations
which regularly collect opinions within the institution may also be able to help
in providing names although this carries with it the risk of involving the ‘rent
a crowd’ who are only too willing to express an opinion about anything.
4. Use stratified groups. In higher education separate staff from students and
undergraduates from postgraduate students. Try to include all aspects of the
target population such as full and part-time students.
5. Schedule 8–12 people per focus group, but always overschedule, especially if
working with undergraduates. Reminders by personal visits, telephone or
prompting by lecturers may all be necessary in higher education. It is
important to remember that Students’ Representative Council members have
many calls on their time and too much cannot be expected of them.
6. Allow ample time for discussion, usually up to two hours.
7. Develop a short discussion guide, based on the objectives of the research. This
should be pre-tested on a sample population if possible. An experienced
facilitator should be able to do much of this work.
8. If possible, run three or four groups per target audience for the best results.
One group may not provide enough data but organizing more may be difficult.
9. Hold sessions in a centrally located, easily accessible room. Put up signs and
notify colleagues who may be asked for directions. Ideally, use a room with
audio-taping facilities.
10. Reward participants for their time. Many libraries have no appropriate budget
and a reward, in practice, often means nothing more than tea, coffee, biscuits
and scones. Small rewards can be made in kind such as a free photocopying
card or a book token donated by a bookseller. Such prizes can be allocated
through a raffle.
11. In analyzing and summarizing the data look for trends or comments which are
repeated in several sessions. Sessions can be analyzed from audio tapes, flip
chart paper and handwritten notes.
12. Don’t over-generalize information gained from focus groups and don’t use it
for policy decisions. Because of non-scientific sampling and the inability to
quantify results the information collected should be used carefully.

Practitioners have had varied experiences with focus groups. It is sometimes
claimed that library staff should not be participants as this can inhibit other
participants such as undergraduates. However, I have never found students to have

any inhibitions about criticizing the library service in the presence of staff. Hart
(1995) has organized seven focus groups over an academic year and, inter alia,
makes the following points:
1. The lunch period is a good time to hold them.
2. Each group lasted half an hour to an hour.
3. Misconceptions relating to all aspects of library organization were
widespread.
4. Focus groups are good for identifying problems which might not otherwise
have been considered.
5. Focus groups are good for providing insights rather than answers.
6. Focus groups are not particularly cheap, mainly in terms of staff time, which
can be 3–4 hours per session.

Focus groups can reveal distinct gaps between the perceptions of library staff and
those coming from the users. This is something librarians have to come to terms
with if they are to benefit from the experience. Users might focus on a particular
theme e.g., open access photocopying and binding facilities, and force library staff
to rethink provision. Focus groups are usually a public relations success, as they
show that the library is actively canvassing user opinion even if users’ expectations
of what the library can provide are sometimes unrealistic. Scheduling can be a real
problem and, if people cannot be recruited, can threaten the results.

Focus groups are not very helpful in discussing technical issues because of a lack
of expertise among participants but it is important to recognize that the underlying
issue may be valid even if presented in naive terms. For example, a focus group on
the use of a computer centre suggested a computer booking system, based on colour
bands, as used in swimming pools. Even if the method was not very practical, the
need for a booking system was identified. A somewhat unexpected advantage of
focus groups is that they are an end in themselves. The very act of participating
gives those present a feeling that they are ‘having their say’ and engaging in a form
of two-way communication which helps to close the feedback loop.

Focus groups can be used with children, but this is a highly specialized activity.
They can be used with children from 7 to 16 years of age. Very simple questions
should be asked, and a ‘chatty’ manner is necessary. The use of pairs of friends
encourages discussion and groups can then be built up consisting of up to three
pairs. Parental consent for participation is needed, in writing wherever possible, up
to the age of 16.

Neutral venues are best, preferably a family home. Sessions can last up to 90
minutes provided the participants are interested, the facilitator is well-prepared and
light refreshments are provided. As younger children (7–11) think concretely, it is
important to provide concrete examples like models and drawings, and practical
activities like drawing or writing down their ideas in the form of a postcard to a pen
pal. For older participants (13+) bubble diagrams are a good method: a drawing of
a simple situation includes a speech bubble and a thought bubble, and respondents
fill in what the people are thinking and saying. This is very useful for sensitive
topics. A useful collection of methodologies for use with school pupils can be found
in Nancy Everhart’s Evaluating the school library media centre (1998) which
discusses evaluation methods with practical examples of questionnaires,
interviews, focus groups, numbers gathering and observation. The practical
difficulties of relating this work to children and young people are discussed.

5.2.2 Suggestions boxes


The suggestions box has been around for a long time and is now the subject of
renewed interest in the present evaluation climate. The traditional ‘suggestions box’
in fact takes several physical forms. It might be a book in which users can write
suggestions; it might be a box with an opening for inserting bits of paper on which
comments are written; or it could be pre-printed cards which can be completed by
the user and analyzed in a fairly structured way. About 10 academic libraries used
this method in the mid-90s (Morgan 1995, p. 60) and Essex Libraries is a public
library example, but there is uncertainty about what they are for: a public relations
exercise or part of a customer care programme. Unless they are the latter, they are
largely a waste of time.
adequately publicized. It must be scrutinized regularly, preferably once a week, and
the issues raised identified. Two questions must be considered:
1. Will the author of the suggestion receive an answer?
2. Will the suggestion be acted upon or at least considered?

If the answer to both these questions is ‘no’ there is not much point in having a
suggestions box, but to answer each question by letter could be a substantial clerical
exercise. There seems to be a good deal of user cynicism about the method.
Suggestions are sometimes facetious or even obscene and, in the automated version
described below, a failure to respond rapidly can result in further questions, like:
‘Why does no one ever answer my questions?’.

Automated library systems have given a new lease of life to the suggestions box
because some have question/answer facilities included in the OPAC. Typically,
these include a screen on which users can input questions and these are then
answered by library staff. It is best if a specific member of staff has responsibility
for this and ensures that all questions are answered promptly. In practice, questions
tend to be repetitive and the responsible staff member soon builds up expertise in
replying to them. If the person who deals with the questions does not know the
answer, he or she can forward it to the relevant member of staff. The system should
collect statistical data about questions and may allow browsing and keyword
searching. Regular overviewing of the questions makes it possible to identify
performance issues and compare them with other sources of performance data. If
the module contains a stop-word list of obscenities and offensive expressions these
can be filtered out. These features make for a much more reliable evaluation tool
than the manual suggestions box, mainly because feedback is much better. In
practice, questions tend to fall into two categories:
1. Precise questions specific to the enquirer, e.g. ‘If I do not return my books by
a certain date, will I have to pay a fine?’
2. General questions about services, which are easier to answer.
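A minimal sketch of the stop-word filtering and routing just described; the stop-word list, keywords and queue names are all hypothetical:

# Hypothetical stop-word list and keyword-to-topic routing table.
OFFENSIVE = {"badword1", "badword2"}
TOPICS = {"fine": "circulation", "fines": "circulation", "opening": "opening hours"}

def triage(suggestion):
    # Filter out offensive submissions; route the rest to a staff queue.
    words = suggestion.lower().split()
    if any(w in OFFENSIVE for w in words):
        return None                   # filtered out by the stop-word list
    for word in words:
        if word in TOPICS:
            return TOPICS[word]       # forward to the responsible member of staff
    return "general"                  # default queue for everything else

print(triage("Why are the fines so high?"))   # -> 'circulation'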

Generally speaking, the ‘suggestions box’, whether manual or automated, should
be used in conjunction with other methods. The coming of the World Wide Web
makes it possible to make suggestions facilities easily available over the Internet.

5.2.3 Diary techniques


Although the diary method is used by market and social researchers, in library and
information science, its use seems to be largely confined to studying the activities
of university undergraduates, mainly in respect of their study habits and use of
library textbooks and other sources of information (Goodall 1994). It reflects many
of the problems found in ‘people’ orientated research. Completing a diary can be
viewed as a tedious task. While it might be possible to draw up a random sample
of those to be approached, those who do agree to cooperate will likely be a small,
self-selecting group. To be useful the diaries will have to be kept over a lengthy
period, but this is likely to lead to dropouts, so a short period of a few days is more
likely to produce satisfactory completion rates.

The data obtained can be varied and highly specific but, for this very reason, it can
be extremely difficult to tease out performance issues from the detailed
observations made. If diaries are kept over a long period, errors can creep in and
diarists who are aware that their writings are part of a structured research program
may begin to modify their observations, perhaps even unconsciously. Nevertheless,
they are a good way of collecting data which is difficult to collect in any other way,
provided the data collected can be set in some sort of context.

Diary techniques are usually a less casual and unstructured activity than the term
appears to imply. Although diaries allow users’ actions and reactions to be recorded
when they occur, most people are not used to keeping a record of their activities and,
without a predetermined structure for recording, the data is likely to be somewhat
difficult to analyze. Diaries are usually structured, and the respondent is given
forms with checklists or performance issues prompting him or her to comment on
the areas of study. The checklists must be easy to understand, otherwise the
respondent may become confused. Such a method can remove the element of
spontaneity and individuality which informs diary writing.

Another method is a time diary which records the respondents’ activities at different
times of the day. An example of diary research was carried out at the Centre for
Research in Library and Information Management at the University of Central
Lancashire as part of a wider study of library support for franchised courses in
higher education (Goodall 1994).

Approximately 120 first-year students were involved in the project which aimed to
document the experience of students about the provision and availability of library
resources. Students were required to complete three or four diaries and attend a
follow-up focus group discussion. Although the students received £25 each for
participating there were difficulties in recruiting a sufficient sample and the project
had to be vigorously promoted to attract sufficient interest. The result was a self-
selected sample. Although there were problems in identifying suitable pieces of
work for study, once assignments had been identified they provided issues on which
to focus and gave a structure to the exercise.

The diaries themselves were five-page A4 booklets, accompanied by guidelines and
an example of a completed page. Time and place of work were recorded and there
were also coded tick boxes, as well as space for free comments. The diaries were
used to record materials consulted by students as they completed their assignments
and also to note any difficulties they had in obtaining and/or using materials.
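The structure described above translates naturally into one record per diary entry. A minimal sketch; the field names and codes are hypothetical:

# A structured diary entry, assuming hypothetical fields modelled on the
# booklet described above: time and place of work, coded tick boxes for
# materials and difficulties, and space for free comments.
from collections import Counter

entries = [
    {
        "date": "2024-03-14",
        "place": "campus library",
        "materials": ["set_text", "journal_article"],   # coded tick boxes
        "difficulties": ["item_on_loan"],               # coded tick boxes
        "comments": "Only one copy of the set text; waited two days.",
    },
    # ...one entry per assignment, per student
]

# Coded fields tally directly; free comments still need manual coding.
print(Counter(d for e in entries for d in e["difficulties"]))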

The analysis of the diary entries then provided the framework for the focus group
discussion in that it allowed the researcher to compile a list of themes and
performance issues to use with the group. The students were encouraged to refer
back to their diaries during the group discussion so that they were able to draw from
specific examples to describe their actions in detail rather than talking in general
terms. In this case, then, the purpose of the diary project was two-fold:
•  to record data
•  to facilitate focus group discussion

The diary data was more useful when set in a wider context of discussion.

5.2.4 Interviewing
Interviewing on a one-to-one basis is something that many librarians have done at
one time or another. It is important to realize, however, that it is a structured activity
and not just a chat. It can be seen as an extension of the meeting method, but by
speaking to only one person it is possible to probe in detail into the experiences and
reactions of respondents. For this reason, it is a good method for exploring sensitive
or confidential issues like library staff’s relations with users. Interviewing is a
skilled activity and because it is about the interaction between two people, well-
developed social skills are essential.

The interviewer must be good at getting people to talk. He or she should talk as
little as possible and concentrate on listening. It is important to note the issues
which the respondent raises and also those which are not raised. Unless the library
can afford to employ paid interviewers, which is rarely the case, interviewing will
probably be done by library staff. There is a danger that this might inhibit
respondents or influence what they are prepared to say. Conversations can be
recorded in notes or using a tape recorder. The latter method allows the interviewer
to concentrate on what the respondent is saying but the tapes have to be transcribed
or at least analyzed, which takes time.

There are three types of interviews:


1. The structured or formal interview. This is based on a pre-prepared list of
questions which are not deviated from. This closely resembles the
administration of a questionnaire, except that the interviewer is present to
explain and clarify questions.
2. The semi-structured interview. The interviewer works from a pre-prepared
list of issues. The questions, derived from the issues, are likely to be open-
ended to allow the respondent to express himself or herself.
3. The unstructured interview. In this case, only the general subject is pre-
determined, and the interview is informal. This gives considerable scope to
the respondent to express his or her views but demands considerable skill on
the part of the interviewer who must be able to subtly control digressions and
tease out issues only partially examined by the respondent.

Interviewing is a skill which takes time to learn and is most needed for conducting
unstructured interviews.

5.2.5 Observation
Observing what people are doing is a relatively little-used technique in libraries but
it has obvious attractions. It allows users to be observed in their natural setting and
it makes it possible to study people who are unwilling or unlikely to give accurate
reports on their activity. The non-curricular use of computers in university libraries
is a particularly good example of this. It also enables data to be analyzed in stages
or phases as an understanding of its meaning is gained.

There are two types of observation, structured and unstructured.


1. Structured observation:
This is a predetermined activity where a form is used in which the observer
records whether specific activities take place, when and how often. A well-
designed data collection method will allow space to record unanticipated
activity. However, the form must be carefully designed at the outset to allow
for most eventualities. Because this is essentially a statistical method it is
usually considered to be a quantitative method.

2. Unstructured observation:
The observer records any behaviour or event which is relevant to the research
questions being studied. This is a much more open-ended approach and, as is
the case with most qualitative research, is especially useful in exploratory
research or where a situation is incompletely understood.

Observation, although on the face of it simple, is a highly skilled exercise, for the
observer must know enough about the situation to understand and interpret what is
going on. To return to the observation of computer use example, the observer can
note important activities like mouse and keyboarding skills, file management and
the expertise with which different software packages are being used but to do this
the observer must be highly computer literate and be able to recognize and critically
analyze and evaluate such activity.
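A structured observation form of the kind described above reduces to a pre-coded tally sheet. A minimal sketch; the activity categories are hypothetical:

from collections import Counter
from datetime import datetime

# Hypothetical pre-coded categories for observing computer use.
ACTIVITIES = {"word_processing", "email", "database_search", "other"}
log = []   # one record per observed event

def record(activity, note=""):
    # Record a pre-coded activity; 'other' plus a note captures the unanticipated.
    assert activity in ACTIVITIES
    log.append({"time": datetime.now(), "activity": activity, "note": note})

record("email")
record("other", note="student helping another with file management")
print(Counter(entry["activity"] for entry in log))   # totals per category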

The methodology has some disadvantages. People who are aware they are being
observed tend to change their behaviour, at least initially. There is an ethical
question as observation without consent can be interpreted as an intrusion into
privacy. It is not always possible to anticipate a spontaneous event and so be ready
to observe and understand it. Not all events lend themselves to observation: the
development of IT skills over time is a good example. Observation can be
very time-consuming and finally, the subjectivity of the observer must be taken into
account.

In making observations the researcher should focus only gradually on the research
questions to open up possibilities for insight. The observer should also record his
or her subjective reactions to the events observed. This helps to distance the
observer from them, an important way in which the questions of reliability and
validity can be addressed. Notes should be made as close in time as possible to the
events being recorded. Although in a library context, unobtrusive observation is
probably the norm the observer may also participate in the activities he or she is
observing. To be a successful participant observer it is necessary to be
approachable, friendly and receptive and to dress and behave appropriately.

Observation is a particularly suitable technique for observing the use of electronic
services like email, word processing and internet surfing, or electronic information
databases because precise quantitative instruments for evaluating them are still in
the process of formation. It is technically possible to observe the use of a group of
computers from a central point which makes reliable data collection easy as the
users do not know they are being observed. However, such a method necessarily
raises ethical questions.

5.3 Conclusion
Qualitative studies, in such forms as surveys, observation, interviews, and case studies,
can examine complex factors in the social interactions inherent in library settings.
Because these are unique studies, however, findings cannot be generally applied to
a larger group. The quantity of raw data gathered is likely to be large and, because
it is descriptive data, more difficult to categorize. Qualitative research, generally,
is a major research method in its own right and is useful for probing the sensitive
issues which questionnaires do not deal with so effectively. As library and
information science moves increasingly to the provision and use of electronic
services so qualitative methods may become more attractive because so many
poorly understood issues are arising which cannot be addressed with the precision
that quantitative methods require. A major concern is to see that the measurement is
done without evaluator bias and that the study is objective. This ability to examine
“real-world” aspects of libraries provides insight into the multiple human factors
involved.

SELF-ASSESSMENT QUESTIONS

1. Define the qualitative method of library evaluation.

2. Discuss the various methods and techniques of qualitative methods.

Activity:
1. Visit any public library, prepare a suggestions box, place it at the entrance,
and collect the suggestions of library users.

RECOMMENDED READING

1. Al Hijji, K.Z. and Cox, A.M. (2012). Performance measurement methods at
academic libraries in Oman. Performance Measurement and Metrics, 13(3),
183–196.

2. Barton, J. and Blagden, J. (1998). Academic library effectiveness: A
comparative approach. London: British Library Research and Innovation
Centre. (British Library Research and Innovation Report 120).

3. Batchelor, K. and Tyerman, K. (1994). Expressions of interest in Brent.
Library Association Record, 96(10), pp. 554–555.

4. Bawden, D. (1990). User-orientated evaluation of information systems and
services. Aldershot: Gower.

5. Blagden, J. and Harrington, J. (1990). How good is your library? A review of
approaches to the evaluation of library and information services. London:
Aslib.

6. Bohme, S. and Spiller, D. (1999). Perspectives of public library use 2: A
compendium of survey information. Loughborough: LISU.

7. British Standards Institution. (1998). Information and documentation: library
performance indicators. London: BSI. (International Standard ISO 11620).

8. Brophy, P. and Coulling, K. (1996). Quality management for information and
library managers. London: Aslib.

9. Lakshmi, R.S.R.V. (2003). Measurement of college library performance: An
evaluative study with standards. The International Information & Library
Review, 35, 19–37.

Unit–6

PITFALLS AND PROGRESS

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Page #
Introduction ....................................................................................................... 77

Objectives ......................................................................................................... 77

6.1 Introduction .............................................................................................. 78

6.2 Evaluation Action Plan ............................................................................ 81

6.3 Library Evaluation and Ranganathan's Five Laws .................................. 81

6.4 Evaluation and System Analysis .............................................................. 82

6.5 Conclusion ............................................................................................... 83

INTRODUCTION

The unit is designed to explain the importance of evaluation and the pitfalls and
progress of library evaluation projects. It will also explain the components of the
evaluation action plan and give an understanding of system analysis.

OBJECTIVES

After studying this unit, you will be able to explain the following:

•  Evaluation and system analysis

•  The evaluation action plan

•  Pitfalls and progress of evaluation projects

•  Ranganathan’s five laws and library evaluation.

6.1 Introduction
Much, perhaps most, evaluation is carried out on the fly as a more-or-less
emergency procedure. A problem arises when there is a perceived need for an
immediate solution, and some sort of attempt is made at evaluating the problem as
a means of deriving a solution. Sometimes the problem is imposed from outside via
political, social, or economic influences. The library community does not need to
look far for examples of societal pressures to examine problems defined by pressure
groups. An impressive number of very immediate evaluation needs centre around
the opportunity to take advantage of a funding opportunity with a fixed deadline.
Governance or governmental bodies are well known for their tendency to demand
quick responses to esoteric needs to evaluate specific functions.

The major problem with on-demand evaluation is that there is no meaningful
integration of evaluation and planning. If a member of the library board suddenly
demands to know ‘why the library is not meeting the needs of small business’, an
on-demand evaluation of whether the library is doing so is unlikely to be something
that can be carried out in a time frame that will soothe the board member. Turning
an on-demand evaluation into an integral element of the library’s planning will take
even more time and will likely constitute a de facto admission that the board
member’s concern was justified. The best response to such a demand for short
turnaround evaluation is a pre-existing evaluation process, which requires an
element of forethought that cannot be guaranteed. Having in place the results of an
evaluation activity for which no external request was made, however, has fulfilled
many a real need. Relying on ad hoc evaluation is an invitation to missed
opportunities and frantic compliance with unreasonable deadlines.

Research is a very special focus for evaluation. Herbert Goldhor, former director of
the Library Research Center at the University of Illinois, frequently lamented that every
evaluation project was almost a research study. What research adds to evaluation is
the potential for extension to other environments. When applied appropriately,
evaluation techniques reveal useful information not only about the library for which
the evaluation project was conducted but also about other libraries with similar
evaluation needs.

Carrying out an evaluation project in a research mode is not an easy process.
Determining how to carry out the evaluation project is a matter of determining a
methodology, establishing a procedure for implementing the methodology, and
determining the extent to which the problem will be explored.

78
The conclusion of a piece of survey/project work does not necessarily mark the end
of the exercise. The results should include recommendations for action, but several
factors may affect the outcomes of the evaluation study and may even lead to their
modification.
1. The problem may be insoluble for a variety of reasons. Resources may be
unavailable to tackle it. The study may show that a particular group of users
requires particular attention, but money may not be available to undertake the
work needed. It may be difficult to collect information on which to base
decision-making, as the Brent study showed. If the cooperation and support of
departments outside the library is needed, e.g., manual staff support for
extended opening hours, it may be difficult to proceed, but at least solid data
will be available for making a case.
2. More questions may be raised than answered. This is often the case with short
overview surveys which may raise puzzling issues that require further
investigation. One group of users may be much less satisfied with the service
than others. It may be necessary to mount a further study of the group to find
out why.
3. Misconceptions are an unavoidable problem. Users’ comments on survey forms
and at meetings can show that they have inaccurate perceptions or have fallen
victim to rumours. Sometimes the numbers, e.g., all the students in a particular
course, can be substantial. The consequences, in the form of angry letters to local
newspapers, or articles in student magazines, can be serious and it may be
necessary to mount a public relations exercise to clarify the situation. Sometimes
these errors can be accepted at quite a high level. I have found evidence of quite
senior academics credulously accepting unfounded rumours.
4. Contradictory results sometimes occur. These may be the results of faulty
methodology or paying too much attention to pressure from vocal interest
groups. Building on experience over years, comparisons with similar
institutions, and using a regime of assessment which consists of a range of
methods are the best ways of avoiding these problems.
5. The results of the study may generate criticism. This may be criticism of
methodology or of outcomes. Following proper procedures is the best way to
avoid criticisms of methodology. Outcomes may be criticized if they are
perceived as having deleterious implications. For example, the
recommendation to deskill a service, previously run by professional staff,
may not be well received by those concerned. This can result in delaying,
modifying, or even abandoning the proposals.
6. The information collected from a survey can go out of date quite quickly. The
current move to electronic information services is changing expectations
rapidly and this must be allowed for. It may be necessary to look at the same
problem areas regularly to identify new and changing needs.
7. As implied in (6) above, the performance issues which inform evaluation must
be regularly reviewed to ensure that the changing needs of users are being
addressed.

79
Over the years, it should be possible to develop a systematic regime of evaluation
composed of questionnaire work, formal qualitative methods like focus groups and
other meetings, ancillary methods such as suggestion boxes and comparisons with
other departments within the institution. A range of performance measures will
emerge, some recurring, others needing consideration less frequently. These will
require evaluation and comparisons should be made with similar institutions and
existing data to get an idea of context. Research-based methods which seek to
identify performance issues objectively were developed in the 1990s.

Based on this strategy, it will be possible to pass through a hierarchy of quality,
composed of the following elements:
1. Recognizing the need: identifying areas of ignorance within the library.
2. Finding out what users need.
3. Adapting services to respond to those needs.
4. Developing comparisons with other libraries.
5. Understanding users' needs from a comparative perspective.

If it proves possible to move through all five stages, then it might be possible to
consider benchmarking as a further qualitative step. Benchmarking has been variously
described as 'a systematic approach to business improvement where best practice
is sought and implemented to improve a process beyond the benchmark performance'
and as 'stealing shamelessly'. It is a technique developed in industry in which best
practice by one's competitors is studied to improve one's own performance.

Benchmarking can improve customer focus by seeing how others satisfy their
customers; in librarianship, libraries can compare themselves with each other
and with relevant service industries with a view to an all-round improvement in
performance. It is, however, difficult to compare services in different institutions
which are highly qualitative, like the enquiries service. Measures or benchmarks to
use in comparing libraries have to be chosen, and it can be difficult to find several
libraries for which the same set of measures will be appropriate. The best
benchmarking partners have to be chosen to give the exercise credibility.

Benchmarking is no longer a marginal activity. About 40% of British academic
libraries are involved in benchmarking in some way and in Australia the figure is
at least 50%.

SCONUL has conducted six benchmarking pilots focusing on the following areas:
advice and enquiry desks, information skills training, counter services and library
environment. Although these pilots have been useful, it has proved difficult to devise
generally acceptable measures, and because of ethical and confidentiality issues
the outcomes have not featured a great deal in the literature. Enquiry work has emerged
as a favourite theme for benchmarking, and it has cropped up in public libraries too.
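
The arithmetic of a benchmarking comparison is simple, and a short sketch can make
it concrete. In the illustrative Python fragment below, the libraries, measures and
figures are invented for the purpose of the example; a real exercise would use
agreed measures collected from actual benchmarking partners:

    # Hypothetical benchmarking data: each partner's score on each agreed measure.
    own = {"enquiries answered within 24 hours (%)": 78, "overall satisfaction (%)": 85}
    partners = {
        "Library A": {"enquiries answered within 24 hours (%)": 92, "overall satisfaction (%)": 88},
        "Library B": {"enquiries answered within 24 hours (%)": 81, "overall satisfaction (%)": 90},
    }

    # For each measure, find the best-performing partner and the gap to close.
    for measure, own_value in own.items():
        best_partner, best_value = max(
            ((name, values[measure]) for name, values in partners.items()),
            key=lambda pair: pair[1],
        )
        print(f"{measure}: own {own_value}, benchmark {best_value} "
              f"({best_partner}), gap {best_value - own_value:+d}")

The point of the exercise is not the computation but the choice of measures and
partners; as noted above, the credibility of the result depends on both.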

80
6.2 Evaluation Action Plan
Building toward a culture of evaluation requires making evaluation a habit, making
it more difficult not to evaluate than to evaluate. One approach to nurturing that
habit is the development of an Evaluation Action Plan to guide the evaluation
process. An Evaluation Action Plan asks the following questions:
1. What’s the problem?
2. Why am I doing this?
3. What exactly do I want to know?
4. Does the answer already exist?
5. How do I find out?
6. Who’s involved?
7. What’s this going to cost?
8. What will I do with the data?
9. Where do I go from here?

6.3 Library Evaluation and Ranganathan’s Five Laws


Indian library philosopher and educator S.R. Ranganathan proposed five
laws of library science as a guide for understanding and fostering the roles and
purposes of the field. Those five laws are:
1. Books are for use.
2. Every reader his or her book.
3. Every book its reader.
4. Save the time of the reader.
5. A library is a growing organism.

Although Ranganathan concentrated on the book in his laws, he was quite aware of
the role to be played by other information resources but consciously chose to use
the term book in a generic sense.

Ranganathan’s five laws are an excellent example of systems thinking and carry
substantial implications for the need to evaluate libraries and their processes. If
information resources are indeed for use, then there is a clear need to evaluate their
use and determine whether they are being used at all and if they are being used
appropriately. The expression 'every reader his or her book' implies the
need to evaluate the needs of individual patrons and patron groups and to design
library systems to meet those needs. Conversely, there is a need to proactively
identify those patrons who can make use of particular information resources and
develop mechanisms for getting those resources to the patrons who can best use
them. Saving the time of the patron is fundamental, although library systems have
not always been designed with the patron’s convenience in mind. It is the fifth law
that most clearly relates Ranganathan’s thinking to systems thinking. By describing
the library as a growing organism, Ranganathan recognized that the library is not

81
only a system but also a system with life. He also described the fate of an organism
that ceases to grow.

Long after Ranganathan first formulated his five laws, Maurice Line (1979)
presented an alternative view of the way things are. Line’s five laws are:
1. Books are for collecting.
2. Some readers have their books.
3. Some books have their readers.
4. Waste the time of the reader.
5. A library is a growing mausoleum.

There is a dark side of Line’s humour that lies very close to home. Any observant,
thoughtful, or simply aware librarian can think of many examples of situations and
policies that are more closely aligned with Line’s cynicism than with
Ranganathan's idealism. Many honest librarians would have to admit that they have
at one time or another been active participants in supporting the reality behind
Line's facetiousness.

6.4 Evaluation and System Analysis


System analysis and operations research are tools and processes; they are also ways
of thinking. The most important aspect of a systems approach to evaluation is
learning to think systemically. As evaluation is adopted and nurtured in an
institutional setting, evaluation takes on an important systems role that places it on
a level with basic and familiar library systems such as collection management,
reference and information services, technical processing, circulation, outreach
services, and administration. In an ideal situation, evaluation becomes a basic social
and societal system of the library, and a culture of evaluation permeates the library
and all its functions and activities.

6.4.1 Rules for understanding systems


The following three basic rules for understanding systems are adapted from the
Last Whole Earth Catalog (1974). They were originally intended to summarize, in
a semi-facetious manner, the need to pay adequate attention to ecology. Understanding
and nourishing the ecology of a library is an essential function and an important
focus for evaluation. Failure to accept and account for any of these rules can
completely undermine an evaluation activity.
1. Everything is connected to everything else. The activities of the Anytown
Public Library cannot be divorced from the activities of libraries in
surrounding communities, from the activities of the Major Public Library,
from the activities of the Regional Library, or from any library with which
the Anytown Public Library has even the most remote contact. Furthermore,

82
the Anytown Public Library is directly linked to a wide range of other
government offices, educational institutions, social service agencies,
businesses, industries, and individual members of the public. Because these
components all work together as a system, the administration and staff of the
library are responsible for exploring and understanding those connections.
2. Everything has to go somewhere. If the library's administration, based on
its evaluation, decides not to offer a particular service at a determined level,
some other entity or agency will be the recipient of the accompanying demand
unmet by the library. If the library emphasizes a particular service at a
determined level, some other entity or agency will experience a reduced
demand for that service. In a worst-case scenario, the library may be
marginalized by a decision to emphasize certain services and de-emphasize
others. Although no library can be all things to all people, it is essential to
understand that there is an intense need to evaluate the need and demand for
services and to evaluate their delivery.
3. There ain’t no such thing as a free lunch (TANSTAAFL). Around the turn
of the twentieth century, many bars and taverns advertised a “free lunch.” The
catch was that access to the free lunch was dependent on the purchase of
watered-down drinks. Library administrators and staff cannot and should not
expect to benefit from any externally provided benefit at no local cost. The
local cost of access to state-funded network services, for instance, may be a
reduction in community appreciation for the direct services of the library. In
some cases, expanded state funding for shared library resources may lead to
reduced funding for local library resources.

6.5 Conclusion
The most important potential outcome of a successful evaluation project is the
completion of at least one step on the way to creating a culture of evaluation. When
everything works as it should, when good results are rendered and positive action taken,
when a positive attitude toward evaluation has been fostered, and when people see that
evaluation can make things better for them, the result may be an increased desire to
engage in evaluation for the good of the library. When that happens, a culture of
evaluation is truly in place and things will never be the same again.

SELF-ASSESSMENT QUESTIONS
1. Explain the evaluation project and its essential steps/components.
2. Discuss Ranganathan’s five laws in the scenario of library evaluation.
3. What is benchmarking and how does it help in the evaluation of library services?

83
Activity:
1. Develop a library evaluation project, keeping in view the action plan stages,
with the help of a tutor.

RECOMMENDED READING

1. Churchman, C. W. (1968). The systems approach. New York: Delacorte Press.
2. Krejcie, R.V. and Morgan, D.W. (1970). Determining sample size for research
activities. Educational and Psychological Measurement, 30, pp. 607–610.
3. Lancaster, F.W. (1993). If you want to evaluate your library, 2nd ed., London,
Library Association.
4. Line, M. B. (1979). Review of Use of library materials: The University of
Pittsburgh Study. College & Research Libraries, 40, pp. 557–558.
5. Line M.B. and Stone, S. (1982). Library surveys: an introduction to the use,
planning, procedure and presentation of surveys, London, Clive Bingley.
6. McClure, C.R. and Lopata C L. (1996). Assessing the academic networked
environment: strategies and options, Washington, Coalition for Networked
Information.
7. McKeever, L. (1998). Intellectual resource or digital haystack? web
experiences of undergraduate business school students, [online] IRISS ‘98
Conference papers. SOSIG, 1998. Available from: http://www.sosig.ac.uk/
iriss/papers/ paper20.htm [Accessed August 1998].
8. Mendelsohn, S. (1995). Does your library come up to scratch? Library
manager, 8, pp. 6–9.
9. Ministry of Defence. (1998). HQ Information and Library Service. Library
service charter, MOD.
10. Morgan, S. (1995). Performance assessment in academic libraries, London,
Mansell.

84
Unit–7

CASE STUDIES

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

85
CONTENTS

Page #
Introduction ....................................................................................................... 87

Objectives ......................................................................................................... 87

7.1 Introduction .............................................................................................. 88

7.2 Survey Work in Public Libraries ............................................................. 89

7.3 Academic Libraries .................................................................................. 95

7.4 Special Libraries ...................................................................................... 100

7.5 Charters and Service Level Agreements .................................................. 101

7.6 Service Level Agreements in Higher Education Libraries ...................... 104

7.7 The EQUINOX Project ............................................................................ 105

7.8 Conclusion ............................................................................................... 106

86
INTRODUCTION

The unit is developed to educate students about case study methods of library
evaluation. It will also present some already conducted case studies for better
understanding.

OBJECTIVES

After studying this unit, you will be able to explain the following:

 Case study methods of library evaluation.

 Important components of case studies.

87
7.1 Introduction
Throughout their careers librarians are asked to evaluate collections, services,
policies, expenditures, and other activities that affect the institution and its patrons,
using a mixture of quantitative and qualitative criteria. With the increase in the
types of formats available that contain information, it is essential to continue to
apply the same evaluation and selection criteria to all media. Librarianship has
always been concerned with evaluating collections. The literature includes articles
and books that describe evaluating collections under the subject headings collection
development, selection, and weeding. The criteria outlined in the literature are
almost always the same and hold true today.

Many of the traditional methods for evaluating collections suggest a quantitative
analysis: that is, how many books in total, or titles from standard bibliographies,
the library owns, compared with the number of librarians, the size of the budget,
or the type of library. The Association of Research Libraries (ARL) and the American Library
Association (ALA) rank libraries based on these quantitative criteria, in addition to
other factors.

Other methods are more concerned with qualitative analysis. Many of these are
grounded in an understanding of the context for evaluation. Does the collection
reflect and support the mission of the library? Public libraries have a very different
overall mission than academic and special libraries. The former serves the local
community’s reading and reference needs, and the reference and research collection
will be as in-depth as the size of the library and the makeup of the community
dictates. Academic libraries serve the needs of undergraduates, the more
specialized needs of graduate students, and the in-depth, subject-specific needs of
the research faculty, as well as carrying professional literature for the librarians and
other professional groups on campus. Special libraries, for their part, serve the
highly specialized needs of their parent organizations.

Within libraries, different collections may serve different needs and so require
different perspectives in assessing materials. Fiction collections and subgenres
require a different evaluation knowledge base than nonfiction and evaluation of
non-print materials requires adapting basic evaluation criteria. For many years,
non-print library collections emphasized record albums, film strips, and 16mm
film. Now non-print formats include music on audiotape, videotape, and compact
disc; movies on videotape, DVDs, and laser discs; audiobooks, available in
complete or abridged versions, in fiction and nonfiction titles, and a variety of
formats designed for sighted or visually impaired listeners; and CD-ROMs, both
educational and recreational.

88
Although nonprint formats are not typically seen as suitable formats for reference
and research collections, libraries have traditionally collected other nonprint
material, such as photographs and microfilm, to complement and supplement the
reference and non-circulating collections. Audiovisual materials, microform
collections, and other nonprint resources can be evaluated using the same basic
criteria used for evaluating print materials. Particular attention must be given to
organizational factors such as arrangement, access, and equipment support, as well
as the durability and longevity of some media and the control mechanisms required
by multipart materials.

This unit looks at examples of interesting practices in public, academic and special
libraries. It considers survey work, charters and service level agreements and
examples of relevant research projects.

7.2 Survey Work in Public Libraries


Among the issues which can be explored in public library user surveys are (Spiller
1998, pp. 74–77):
 user practice (e.g., frequency of visits, types of use)
 user capabilities (e.g., the ability to use computer catalogues)
 user motivations (e.g., reasons for use)
 user satisfaction (with different aspects of service)
 user opinions (e.g., preferences for future service development).

Examples include:

Hackney user survey


Aslib and Solon Consultants surveyed Hackney Libraries in 1995 and the
information was used to radically reshape the authority’s libraries by reducing some
service points and strengthening others. It was possible to do this because the
public's views had been sought. Among the data collected were 'reasons for
borrowing'. The largest percentage, 43%, was 'read for pleasure', with 'education'
following at 21%. 'Acquire knowledge' was 16%, with 'recreations and pastimes'
at 13%. 'Work' accounted for only 7%, so, for the majority of users, the library was
a recreational service.

Bromley exit survey


This survey was carried out for Bromley Libraries by Capital Planning Information
Ltd. Nine different library sites were surveyed. A question about user capabilities
('Are you able to use the library computer catalogue unaided?') produced a
worrying 'no' response rate of 45%. Users were also asked about services that they
might like to use in the future such as access to PCs and the Internet. Responses
produced a clear age range split with the youngest being most enthusiastic about
such services and the oldest being least interested.

Survey of Birmingham Central Library


This survey, undertaken in 1996, was a follow-up to a previous survey done in 1992.
This allowed the authority to monitor the effectiveness of action taken to remedy
previously identified problems. Some service aspects showed improvement:
 time spent waiting
 ease of finding books
 study/reading places
 helpfulness of signing

However, a range of new problems caused over 20% dissatisfaction:
 range of books and other stock
 temperature levels
 noise levels
 quality of equipment.

These issues are the staples of any large library, and the identification of new
problems shows the need for repeat surveys at regular intervals.

Sandwell lapsed user survey


This survey, carried out in 1997, reflects public librarians' increasing preoccupation
with the penetration of library services into the community. Non-use is an
increasingly important issue for all types of libraries and here public librarians
seem to be giving a lead. Completed postal questionnaires were received from 138
library members who had not borrowed material in the previous three years. The
results were fairly reassuring. Many respondents had used the library: they were
lapsed borrowers rather than lapsed users.

However, in response to the question 'Have you made use of the library service
since you joined?', only 25% replied 'regularly' while 65% replied 'a few times'.
The remaining 10% had used the service only once or not at all. Non-borrower
types of use in the previous year were:
 reference materials 18%
 visited with children 17%
 photocopier 12%
 personal study 10%
 chose books for others 7%
 read newspapers/mags 6%

90
The use of outside consultants in some of these studies is interesting.
A list of priorities for future surveys might include:

Topics
• impact (quantitative measures)
• costing by function
• market penetration (lapsed/ non-users)
• promotion of services
• book buying compared with borrowing.

Services
• information/enquiries
• audio/visual services
• electronic information sources.

East Renfrewshire Council Library Survey 1999


In 1999 East Renfrewshire Council (1999) undertook its first study of its 10 branch
libraries. Other Council departments, including recreational services, undertook
parallel surveys. The questionnaire (see Figure 1) included 12 rateable performance
issues including four which would come under the general heading of environment.
Helpfulness of staff, stock, audio-visual materials, information services, computer
services, photocopying and opening hours were other issues. Across the branches,
'helpfulness of staff' received high ratings, with between 88 and 98% rating the
service as good.

Fiction stock fared less well, with typical 'good' ratings of between 51 and 82%.
Ratings for audio-visual stock were even poorer, ranging typically from 14 to 42%.
Information resources received good ratings, typically between 51 and 86% of
respondents rating this service as good, but the computer catalogue and computer
reference services fared worse, perhaps because of high levels of non-use. The
problems with audio-visual and computer services match two of the survey
priorities for public libraries listed above: audio-visual services and electronic
information resources. Two fundamental issues stand out: the quality of staff and
the quality of stock. Interestingly, the views of the Scottish suburban general
public and those of North German academic library users (see below) are
strikingly similar. The questionnaire also asked users if they were aware of the
Council's 'Let Us Know' system which allows them to make comments, suggestions
and complaints about the service provided. Only one branch produced an 'awareness'
rating of more than 50%, an indication that, laudable as the aim of open
communication may be, it can be difficult to achieve.
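
As a simple illustration of how cross-branch percentages like those quoted above
are produced, the following Python sketch tallies hypothetical questionnaire
returns for one performance issue. The branch names and counts are assumptions,
chosen only to reproduce the 88–98% range mentioned:

    # Assumed counts of responses to 'helpfulness of staff', per branch.
    branch_ratings = {
        "Branch A": {"good": 176, "adequate": 18, "poor": 6},
        "Branch B": {"good": 147, "adequate": 3, "poor": 0},
    }

    for branch, counts in branch_ratings.items():
        total = sum(counts.values())
        pct_good = 100 * counts["good"] / total
        print(f"{branch}: {pct_good:.0f}% rated the service as good")  # 88% and 98%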

91
Figure-1

92
Institute of Public Finance (IPF) Public Library User Survey (Plus)
This national standard for user surveys grew out of the Audit Commission’s
Citizens’ Charter exercise in 1992. There was a strong feeling that conventional
statistics were not adequate to assess how well libraries provided materials and
information to the public. Surveys asking users specific questions seemed to
be the answer.

Following a large-scale pilot in 1993 and follow-up work, overseen by the
Committee on Public Library Statistics, a National Standard for undertaking
surveys of users in UK public libraries was launched in late 1994. This establishes
a common base on which library authorities can compare their results and a forum
for the development of visitor surveys (England & Sumsion 1995, pp. 109–110).
The Standard, available from the IPF, takes the form of a manual of instructions,
advice and background information on the survey with appendices. The appendices
contain worked examples of the documentation and samples of the types of analysis
which could be undertaken on the data gathered. Software is also provided. After
the data has been collected, analysis can be done in-house, by an outside consultancy,
or by the IPF. Participating authorities can compare results at the service point level
within their authority, against service points in another authority of their choice or
compare purely at the authority level, depending on the type of survey chosen.

The questionnaire (see Figure 2) contains ‘core’ questions. Additions to the core
questions can be made in collaboration with the IPF. There are several supplementary
questions which some authorities have added that can be used by others. Leeds Library
and Information Services conducted its first major user survey in December 1994,
based on IPF’s Public Libraries User Survey (Plus) (Pritchard 1995).

93
Figure-2

94
The availability of the survey resolved uncertainties about what questions to ask,
sampling tables and how to collate the data. The documentation provided covered
all aspects of how to set up and run the survey. Apart from the data itself, the fact
that statistically valid data was collected was seen to be a major benefit. Currently,
92 public library authorities subscribe to the scheme and well over 200,000 surveys
are recorded on the IPF database. Members can compare their local results against
published national averages. A children’s survey was launched in 1998 following
extensive piloting and a PLUS subgroup has been formed to investigate community
surveys.

The core areas covered by the CIPFA PLUS questionnaire are: user activities in the
library; the number of books borrowed; a needs-fill question; user satisfaction
relating to several services; frequency of visits; sex, age and occupation of the
respondent; and postcode area of the respondent, a feature which makes it possible
to calculate the distance travelled to the library (Spiller 1998, p. 72).
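
To see how the postcode question supports a distance-travelled measure, consider
the Python sketch below. The library location and postcode-area centroids are
illustrative assumptions; in practice each postcode area would be matched to a
centroid from a gazetteer and the great-circle (haversine) distance computed:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance in kilometres between two (lat, lon) points.
        earth_radius_km = 6371.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * earth_radius_km * math.asin(math.sqrt(a))

    library = (53.8008, -1.5491)  # hypothetical library location
    postcode_centroids = {"LS6": (53.82, -1.57), "LS17": (53.87, -1.52)}  # assumed centroids

    for area, (lat, lon) in postcode_centroids.items():
        print(f"{area}: {haversine_km(lat, lon, *library):.1f} km travelled")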

7.3 Academic Libraries


In survey work, university libraries are also moving towards standardization, albeit
in a less structured and coherent fashion, and in this Measuring academic library
performance (Van House et al. 1990) has been influential. The Follett report
suggested that the user satisfaction survey it contained could be used in Britain in
a suitably amended form. Adapting US academic library standard forms to British
practice has several advantages (Revill 1995). They have been validated by
practice; they make it unnecessary to invent one’s own, they allow comparisons of
user satisfaction between sites and years, and they permit inter-institutional
comparisons. With the help of volunteers, SCONUL piloted five questionnaires,
covering different aspects of library service in early 1995.

The questionnaires were:
1) general user satisfaction
2) quality of service
3) availability of materials
4) enquiries and reference services
5) stock selection.

The Library of the University College of Swansea has used the Van House
originals independently of the SCONUL initiative, and Glasgow Caledonian
University Library has used the Van House general user satisfaction survey in an
increasingly modified form. Figure-3 is an example of a modified proforma,
derived from Van House by SCONUL and originating from the mythical Poppleton
Metropolitan University, beloved by Times Higher Education Supplement
readers.

Figure-3

96
As a result of various research initiatives, a modified version of the Van House user
satisfaction survey questionnaire was devised at Glasgow Caledonian University.
The performance issues in section one are derived from these initiatives; the list of
issues reflected in section one of the original Van House questionnaire was not very
relevant, and that questionnaire proved difficult to analyse satisfactorily.

The Glasgow Caledonian questionnaire was successfully administered in 1995 and
1996 and has undergone further modifications since (see Figure 4). It permits inter-
site, inter-faculty and inter-user group comparisons. There is space on the back for
comments, which produces qualitative data that can be compared with the
quantitative data also collected.

The survey has been conducted over five years (1995–1999 inclusive) and now
provides a longitudinal perspective on how the services have developed over that
time. Where a university regularly surveys its teaching, learning and central
services as a whole, the university library is surveyed, probably annually, as part
of this process. This allows the library to compare itself with other services
provided by the university. The practice is not widespread, but the University of
Central England in Birmingham is a good example. The University of Central
England maintains a Centre for Research into Quality, one of whose functions is
to conduct an annual university-wide student satisfaction survey. The 1998 annual
report (University of Central England 1998) covered central services, such as
library and computing services, word processing facilities, refectories and student
services, as well as course organisation, teaching staff, and teaching and learning.
The survey is based on a lengthy questionnaire which, in 1998, included the views
of nearly 2000 respondents and provided more than a million items of information.

97
Figure-4

98
The section on the library extends over 16 pages and is more comprehensive than
many stand-alone in-house library surveys. As academic libraries are now major
providers of computing facilities the sections on word processing and computing
facilities are also relevant. The 1998 survey recorded high levels of user
satisfaction, especially with staff helpfulness. The annual surveys have been
ongoing since 1991 and several familiar issues have emerged over this period:
range of books; up-to-datedness of books; availability of recommended course
material; multiple copies of core books; a range of journals; opening hours;
availability of study places; and noise levels. Perhaps predictably, availability of
recommended course material and multiple copies of core books are the main areas
of dissatisfaction. The Centre also undertakes specialist surveys, and these have
included the library. The Centre recognizes the need to close the feedback loop and
goes to considerable lengths to publicize the outcomes of its work.

There are advantages and some disadvantages to this method. One can be certain
that the survey has been carried out expertly, that the data is reliable and that
comparisons within the university are possible. However, the data collected is
limited, so there will still be a need for specialist surveys. The library, although it
benefits from the process, has no direct control over it. However, studies such as
this are increasingly helpful, partly because they include IT issues and partly
because they point to performance issues which are common to both the library and
other services.

A European example
The library of the University of Münster in northwestern Germany is one of the
most active in Germany in evaluation and performance measurement. It surveyed
user satisfaction in 1982 and repeated the exercise in 1996 (Buch 1997). Perhaps
because of the relative infrequency of surveying, it was a substantial and
methodologically complex exercise. After the initial survey design, it was pre-
tested on 30 users. This raised problems with jargon, especially abbreviations, and
the questionnaire was modified. The completed questionnaire comprised a total of
52 questions in 19 different service areas.

The survey was carried out over one complete week in January, from 8 a.m. to 9 p.m.
each day, and was administered by four librarians and two student assistants. The
questionnaire was administered to 8 subjects per hour who took, on average, 20
minutes to complete the form. This led the surveyors to conclude that the
questionnaire should have been shorter.

A total of 578 usable questionnaires were completed and analyzed using Excel.
Because the comments written on the form had to be classified and coded, data
preparation and analysis took until the following May. The analysis of the
quantitative data took approximately 130–140 hours and the analysis of the
qualitative data took another 50 hours, about 190 hours in total.
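
The classifying and coding of written comments, which consumed so much of the
Münster team's time, can be partially assisted by software. The following Python
sketch shows a keyword-based first pass; the coding scheme, keywords and comments
are illustrative assumptions, and real coding schemes are derived from the comments
themselves and applied with human judgement:

    from collections import Counter

    # Assumed coding scheme: category -> keywords that suggest it.
    coding_scheme = {
        "opening hours": ["opening", "hours", "saturday"],
        "stock": ["stock", "books", "journals"],
        "staff": ["staff", "helpful"],
    }

    comments = [
        "Please open on Saturday afternoons",
        "The journal stock is out of date",
        "Staff are always helpful",
    ]

    tally = Counter()
    for comment in comments:
        text = comment.lower()
        for code, keywords in coding_scheme.items():
            if any(keyword in text for keyword in keywords):
                tally[code] += 1

    for code, count in tally.most_common():
        print(code, count)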

Among the results was a strong desire for longer opening hours which resulted in
extended Saturday opening. Overall user satisfaction was high although satisfaction
with the stock was lower. The most highly rated area was 'helpfulness of staff'. The
surveyors were surprised to discover that the Internet, which appeared to be a
'favourite toy', was unfamiliar to 76% of users. Publicity for this service has since
been increased. The survey results were publicized by an exhibition and a press
conference. The labour costs of such a large survey are substantial and the time
taken to analyse qualitative comments is particularly noteworthy. The issues raised
will be familiar to many academic librarians outside Germany.

7.4 Special Libraries


Surveying special libraries is rather different from surveying public and academic
libraries because of the small number of users involved. This makes it possible to
take a census rather than a survey. Because of the frequent availability of internal
mailing, either paper or electronic, it is often possible to get a high response rate
but, even so, the absolute numbers dealt with will not be very great. Because of
this, it is possible to use a questionnaire which mixes quantitative and qualitative
elements and, because of the high level of commitment of users, it is acceptable to
make the questionnaire quite long. The library of the Scottish Agricultural Science
Agency completed an
unpublished survey of its library use in January 1996. The questionnaire was six
pages long. It was divided into four sections:
1) Visiting the library.
2) Services provided.
3) Quality of service.
4) Future.

There was a total of 23 questions, a mixture of closed and open, the latter giving
the respondents adequate opportunity to make qualitative observations. A total of
149 questionnaires were sent out and 74 were returned (49.6%). Statistical data was
generated using Excel. Satisfaction with the service given was very high, although
inevitably misconceptions surfaced. Perhaps not surprisingly in a special library,
journal circulation was the most controversial issue. Journals also figured
prominently in replies to the question on better services. Conclusions from the
questionnaire included the need for better library promotion and the impact of IT,
issues not confined to special libraries.
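
The figures above illustrate the basic survey arithmetic, which can be reproduced
in a few lines of Python. Only the 149 questionnaires sent and 74 returned come
from the survey itself; the satisfaction counts in the sketch are assumptions:

    sent, returned = 149, 74
    print(f"Response rate: {returned / sent:.2%}")  # 49.66%, the 49.6% quoted above

    # Assumed distribution of the 74 responses to an overall-satisfaction question.
    ratings = {"very satisfied": 40, "satisfied": 26, "dissatisfied": 6, "very dissatisfied": 2}
    total = sum(ratings.values())
    for level, count in ratings.items():
        print(f"{level}: {count / total:.0%}")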

100
The British Ministry of Defence operates a comprehensive programme of evaluation
which includes user survey reports for each library in the Ministry of Defence HQ
Information and Library Service. There is also a rolling programme of six-monthly
surveys aimed at giving a satisfaction performance indicator. The aim is to achieve
90% satisfaction against three key performance indicators: 'Speed', 'Information
provided' and 'Courteous and helpful'. To date, these targets have all been
achieved. Indeed, the 'Courteous and helpful' indicator regularly scores 100%.
Characteristically, the problem of getting customer feedback in a special library
means that the number of respondents is small, between 300 and 350. One of the
outcomes of the evaluation programme was the Library Service Charter, which
includes a commitment to monitoring and customer feedback.
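
Checking performance against targets of this kind is a mechanical exercise once the
satisfaction counts are in. A minimal Python sketch follows, with assumed response
counts; only the 90% target and the three indicator names are taken from the text:

    target = 0.90
    # Assumed (satisfied, total) response counts per key performance indicator.
    responses = {
        "Speed": (310, 330),
        "Information provided": (305, 330),
        "Courteous and helpful": (330, 330),
    }

    for indicator, (satisfied, total) in responses.items():
        rate = satisfied / total
        status = "met" if rate >= target else "NOT met"
        print(f"{indicator}: {rate:.0%} (target {target:.0%} {status})")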

7.5 Charters and Service Level Agreements


The Citizen’s Charter was launched by the then Prime Minister, John Major, in
June 1991. Its stated aims are to improve quality, choice and value in public
services by publishing standards which the public should expect, and by
establishing a system for redress where standards are not met. The principles of
public service listed in the Charter are:
1) Standards which are explicit and published.
2) Openness and accountability.
3) Readily available information.
4) Choice.
5) Non-discrimination.
6) Accessibility.
7) Systems for complaint and redress.

Local authorities were listed among the public services covered so the implications for
public libraries were obvious. Higher education was not mentioned specifically but the
impact has, nevertheless, been substantial. The Citizen’s Charter’s first principle of
public service, Standards, establishes the link with local customer charters:

'Every citizen is entitled to expect explicit standards, published and prominently
displayed at the point of delivery. These standards should invariably include
courtesy and helpfulness from staff, …and a commitment to prompt action, which
might be expressed in terms of a target response or waiting time.'

The challenge was swiftly taken up. By August 1992 at least 13 authorities had
published a library charter and some 14 others were working towards one. Although
these varied in length, style, organization and detail, they shared common concerns
about accessibility, appropriateness, quality and value for money. They offered a
mixture of commitments and pledges, some general, some specific, and some
supported by numeric standards. Few exhibited a link between market research and
the charter pledges (Library Association 1992).

Also in 1991, a government minister challenged the Library Association to produce
a model charter for libraries, which appeared in 1993 (Library Association 1993). It
includes a commitment to involve the community, undertake surveys regularly and
publish a Statement of Standards covering all areas of the service. There is a section
on access covering opening times, signing and publicizing, and access for those
with special needs. There are also sections on environment and facilities, books
and other stock, information services, staff, encouraging people to use the
services, and monitoring the services. This was supplemented by a model statement
of standards (Library Association 1995) which covers specifics, e.g.: 20-minute
travelling time to service points; provision for people with physical disabilities;
signing and guiding; minimum opening hours; time taken to attend to users and
answer telephones; provision for children; seating and health and safety standards;
minimum stock and information services provision; minimum numbers, and
performance, of staff; and marketing and evaluation.

This specifically mentions annual surveys, suggestion boxes and answers to
complaints. These two documents have given public libraries a valuable basis on
which to proceed.

By May 1995, 52 authorities had charters and a further 16 were preparing them.
Public library charters are usually attractively produced, sometimes in A5 leaflet
form and printed in two or more colours to attract attention to them. The City of
Westminster initially produced separate charters for libraries and archives but has
now abandoned these in favour of a single document entitled Service Standards and
Promises. This describes services simply and explains in fairly general terms what
standard of services users can expect. There is a promise to listen to comments,
respond to complaints and conduct surveys. There is also a short section explaining
how users can help the library.

In 1997 the Library Association’s Branch and Mobile Libraries Group published
its own specialized Charter for public mobile library services. It is based upon A
charter for public libraries and the Model statement of standards and includes such
specialized issues as stopping times, the need to review routes frequently and the
role of mobile libraries in community information provision. In higher education,
the position is rather different. Academic libraries are under less pressure to
produce charters and, consequently, fewer have done so. However, there are several
influences, both direct and indirect. There are general higher education charters for

102
England and Scotland and the National Union of Students has developed a student
charter which, inter alia, states that students should have 'the right to effective
learning support'. Some universities have produced general charters. Liverpool John
Moores University produced the first of these in 1993. It is divided into specific
sections which include a short item about the library.

The overall statements and promises can affect the library even if it is not
mentioned specifically e.g., provision of feedback to students, involving students
in the decision-making process, provision of a suitable learning environment and
the complaints procedure. Of the specific library charters, one of the most attractively
produced is that of the University of London Library. It is printed in two colours in
A5 leaflet form. It was produced in early 1994 and has been kept intentionally short
to give it a long shelf life, although changes to practice probably imply updating. It
outlines in six points what a reader has a right to expect from the service.

Detail is available in the library’s range of leaflets. It covers Service Delivery and
Customer Care, Quality (which includes a reference to surveys), Collections,
Information Technology, The Working Environment and Complaints Procedure. The
most detailed is that produced by Sheffield Hallam University Library, a document
extending over four sheets of A4. It is divided into eight sections:
Access, Accommodation, Materials, Information Services, Photocopying and Sales,
Audio Visual Services, Communicating with Students and Student Responsibilities. It
makes promises on specific issues e.g., responding to 75% of enquiries immediately
and 95% photocopier operational availability. What distinguishes higher education
charters, both general and library-specific, is that they tend to be contractual, in
that they specify the behaviours expected from students in return for the services
promised. Public library charters do not usually have a contractual element.

It is fair to say that there has been a good deal of cynicism about charters. They can
be seen as bland promises, merely rephrasing institutional policy or ‘weasel words’
which make impressive statements but do not promise anything measurable. They
can also be seen as a fad and, if they descend to specifics, can go out of date. They
can become an end in themselves, unrelated to the realities of the service and they
can be difficult to get right. There is also concern as to whether they are legally
binding. To be successful, they should be both the outcome of evaluation, offering
objectively deliverable promises, and part of a continuing evaluation process.
There is no doubt that the specific promises they often contain on feedback and
surveying have boosted the evaluation movement. If well designed and kept up to
date, charters can:
1) improve communications with users
2) demonstrate a commitment to quality
3) focus staff attention on performance issues.

103
Although charters have been viewed as a mid-90s fad, they have not gone away in
higher education. There seem to be two reasons for this:
1. The growth of student numbers in higher education has resulted in many
people coming to university who come from family backgrounds with no
previous experience of higher education and have no idea of what to
realistically expect from higher education services. This has led to a need for
expectation management and a charter is a good way of doing this.
2. The growth of off-campus and work-based learning which results in irregular
user contact with the library. In such circumstances laying out the ground
rules is a good idea.

7.6 Service Level Agreements in Higher Education Libraries


Although not widespread (Revill & Ford 1994), service level agreements in higher
education libraries are worth mentioning because they signal a move away from
informal promises about levels of library provision to users in favour of explicit
guarantees about levels of service which can be subject to evaluation. Service level
agreements may be the result of pressure from departments or faculties, and they can
be seen as a defence mechanism. If questioned or attacked, the library can refer to its
agreement and point out that it is performing according to it. They can also be used to
set limits to demand. It is important to involve library staff who will, after all, be
providing the service promised in the agreement, to initiate discussions with users and
create a series of partnerships with faculties. The agreement, when it is finally achieved,
should have an annual review process which, in turn, supports the overall concept of
evaluation. A continuing dialogue should be maintained with users through formal
mechanisms like the library committee and informally via faculty meetings.

Leeds Metropolitan University Library has a service level agreement, completed in
1994, which originally began with only one faculty and now applies to all schools
and faculties.

The agreement has four aims:
1) to spell out what the library provides
2) to set standards of service for the library and mechanisms by which those
standards can be monitored
3) to emphasize the necessity for close cooperation between the library and
faculties/schools if the library is to provide a quality service
4) to encourage improvements in the range and standards of service provided by
the library.

104
The agreement is intended to evolve as new services are provided. The areas it
covers are assessing user needs, opening hours, study environment, library
resources, interlibrary loans, information handling skills, information services and
photocopying.

Specific topics mentioned include complaints, noise, seat occupancy and shelf
tidying. Service level agreements have an important part to play in providing
yardsticks for evaluation and promoting service improvements and, like charters,
they need the involvement and support of both library staff and users in their design,
implementation and monitoring.

However, agreements are not contracts and although service level agreements
oblige library services to deliver a certain level of service to users, they put no
enforceable obligations on users. The library can only state that it cannot deliver on
specifics unless certain user obligations are first met, e.g., the library cannot promise
to have sufficient copies of a particular textbook available by a particular time
unless reading lists are delivered by a mutually agreed date.

7.7 The EQUINOX Project: Library Performance Measurement and Quality Management System
EQUINOX is a project funded under the Telematics for Libraries Program of the
European Commission. This project addresses the need for all libraries to develop
and use methods for measuring performance in the new networked, electronic
environment, alongside traditional performance measurement, and to operate these
methods within a framework of quality management.

The project has two main objectives. Firstly, EQUINOX aims to further develop
existing international agreements on performance measures for libraries, by
expanding these to include performance measures for the electronic library
environment. The second aim is to develop and test an integrated quality
management and performance measurement tool for library managers.

The specific objectives of the EQUINOX project were:
 To develop an integrated software tool which will assist European librarians
to manage increasingly hybrid (i.e., traditional print-based and electronic)
libraries effectively and efficiently.
 To develop a standard set of performance indicators for the hybrid library and
to move towards international agreement on this set.
 To identify the datasets that need to be collected for these performance
indicators.

105
 To provide software which will encourage all library managers to introduce
an appropriate level of quality management, without the constraints of
ISO9000.
 To validate and test the pre-production prototype system in several libraries.
 To undertake large-scale demonstration trials in libraries across Europe.
 To undertake dissemination of the approach and model across Europe.
 To ensure that Europe retains its world leadership in this area.
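
The datasets and indicators EQUINOX envisaged lend themselves to straightforward
calculation. The Python sketch below computes two indicators of the general type
discussed for electronic library services, sessions per member of the target
population and cost per session; the dataset names and figures are assumptions for
illustration, not the project's actual indicator set:

    # Assumed annual datasets for a hybrid library.
    datasets = {
        "target_population": 12000,   # members of the population to be served
        "sessions": 54000,            # sessions on electronic library services
        "e_resource_cost": 180000,    # expenditure on electronic resources
    }

    sessions_per_capita = datasets["sessions"] / datasets["target_population"]
    cost_per_session = datasets["e_resource_cost"] / datasets["sessions"]
    print(f"Sessions per member of target population: {sessions_per_capita:.1f}")  # 4.5
    print(f"Cost per session: {cost_per_session:.2f}")  # 3.33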

7.8 Conclusion
The case study method is a learning technique in which the student is confronted
with a particular problem: the case. The case study facilitates the exploration of a real
issue within a defined context, using a variety of data sources (Baxter et al., 2008).
In general terms, the case study analyzes a defined problem consisting of a real
situation and uses real information as a methodological tool. Case studies are
associated with the development of detailed information relating to a specific
business phenomenon, with phenomena across similar organizations or settings, or
with one specific case (person, organization, or setting). Case study methods may
draw on several methods to gather data, such as observation, experiments,
structured interviews, questionnaires, and/or documentary analysis. A case study
within a positivistic paradigm is guided by the tenets of a quantitative
methodology. The advantages of case studies lie in their case-specific detail and
in the use of multiple methods to gain detailed data on the case. Disadvantages
are associated with resource allocation and (with field case studies) the inability
to control all variables systematically.

The studies mentioned above will serve to build a framework on which to understand
evaluation and, most importantly, to put the methods presented into action while
building toward a culture of evaluation.

106
SELF-ASSESSMENT QUESTIONS

1. Define case studies and discuss various methods of case studies.

2. How is surveying special libraries different from surveying public and academic
libraries? Explain with examples.

3. What is meant by charters and service level agreements? Explain.

Activity:
1. With the help of a tutor, develop a case study to evaluate the Reference
Services of an academic library (University Library).

107
RECOMMENDED READING

1. Creaser, C. and Spiller, D. (1997). TFPL survey of UK special library
statistics, Loughborough, LISU, (LISU Occasional Paper no. 15).

2. Crist, M., Daub, P. and MacAdam, B. (1994). User studies: reality check and
future perfect, Wilson Library Bulletin, 68 (6), pp. 38–41.

3. Cullen, R.J. and Calvert, P.J. (1995). Stakeholder perceptions of university
library effectiveness, Journal of Academic Librarianship, Nov, pp. 438–448.

4. Greguras, G. J., Robie, C., Schleicher, D. J., Goff, M. (2003). A field study
of the effects of rating purpose on the quality of multisource ratings.
Personnel Psychology, 56, 1–21.

5. Ghorpade, J. (2000). Managing the five paradoxes of 360-degree feedback.
Academy of Management Executive, 14(1), 140–150.

6. Rotchford, N. L. (2002). Performance management. In J. W. Hedge & E. D.
Pulakos (Eds.), Implementing organizational interventions (pp. 167–197).
San Francisco: Jossey-Bass.

7. Waldman, D., & Atwater, L. E. (1998). The power of 360-degree feedback:
How to leverage performance evaluations for top productivity. Houston, TX:
Gulf Publishing.

108
Unit–8

FUTURE DEVELOPMENTS

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

109
CONTENTS

Page #
Introduction ....................................................................................................... 111

Objectives ......................................................................................................... 111

8.1 Introduction .............................................................................................. 112

8.2 Digital Library Federation (DLF) ............................................................ 114

8.3 DLF and Member Libraries ..................................................................... 117

8.4 Conclusion ............................................................................................... 119

110
INTRODUCTION

The unit will present future developments in the area of library evaluation and key
challenges faced by libraries related to assessment.

OBJECTIVES

After studying this unit, you will be able to explain the following:

 Gathering meaningful, purposeful, comparable data.

 Digital Library Federation and its various projects.

 Future areas of library evaluation (e.g., LIBQUAL+).

111
8.1 Introduction
Libraries face five key challenges related to assessment:
1. Gathering meaningful, purposeful, comparable data
2. Acquiring methodological guidance and the requisite skills to plan and
conduct assessments
3. Managing assessment data
4. Organizing assessment as a core activity
5. Interpreting library trend data in the larger environmental context of user
behaviours and constraints

Libraries urgently need statistics and performance measures appropriate to
assessing traditional and digital collections and services. They need a way to
identify unauthenticated visits to Web sites and digital collections, as well as clear
definitions and instructions for compiling composite input and output measures for
the hybrid library. They need guidelines for conducting cost-effectiveness and cost-
benefit analyses and benchmarks for making decisions. They need instruments to
assess whether students are learning by using the resources libraries provide. They
need reliable, comparative, quantitative baseline data across disciplines and
institutions as a context for interpreting qualitative and quantitative data indicative
of what is happening locally. They need assessments of significant environmental
factors that may be influencing library use to interpret trend data. To facilitate
comparative assessments of resources provided by the library, commercial vendors,
and other information service providers, DLF (Digital Library Federation)
respondents commented that they need a central reporting mechanism, standard
definitions, and national guidelines that have been developed and tested by
librarians, not by university administrators or representatives of accreditation or
other outside agencies.

Aggressive efforts are underway to satisfy all of these needs. For example, the
International Coalition of Library Consortia’s (ICOLC) work to standardize
vendor-supplied data is making headway. The Association of Research Libraries
(ARL) E-metrics and LIBQUAL+ efforts are standardizing new statistics,
performance measures, and research instruments. Collaboration with other national
organizations, including the National Center for Education Statistics (NCES) and
the National Information Standards Organization (NISO), shows promise for
coordinating standardized measures across all types of libraries. ARL’s foray into
assessing costs and learning and research outcomes could provide standards, tools,
and guidelines for these much-needed activities as well. Their plans to expand
LIBQUAL+ to assess digital library service quality and to link digital library
measures to institutional goals and objectives are likely to further enhance
standardization, instrumentation, and understanding of library performance in
relation to institutional outcomes. ARL serves as the central reporting mechanism and
generator of publicly available trend data for large research libraries. A similar
mechanism is needed to compile new measures and disseminate trend data for other
library cohort groups.

Meanwhile, libraries have diverse assessment practices and sometimes experience
failure or only partial success in their assessment efforts. Some DLF respondents
expressed dismay at the pace of progress in the development of new measures. The
pace is slower than libraries might like, in the context of the urgency of their need,
because developing and standardizing the assessment of current library resources,
resource use, and performance is very difficult. Libraries are in transition. It is hard
to define, let alone standardize, what libraries do, or to measure how much they do
or how well they do it, because what they do is constantly changing. Deciding what
data to collect and how to collect them is difficult because library collections and
services are evolving rapidly. New media and methods of delivery evolve at the
pace of technological change, which, according to Raymond Kurzweil (2000),
doubles every decade. The methods for assessing new resource delivery evolve at
a slower rate than do the resources themselves. This is the essential challenge and
rationale for the efforts of ARL, ICOLC, and other organizations to design and
standardize appropriate new measures for digital libraries. It also explains the
difficulties involved in developing good trend data and comparative measures.
Even if all libraries adopted new measures as soon as they became available,
comparing the data would be difficult because libraries evolve on different paths
and at different rates, and offer different services or venues for service. Given the
context of rapid, constant change and diversity, the new measures and initiatives
are essential and commendable. Without efforts on a national scale to develop and
field-test new measures and build a consensus, libraries would hesitate to invest in
new measures. Just as the absence of community agreement about digitization and
metadata standards impedes libraries that would otherwise digitize some of their
collections, the lack of community agreement about appropriate new measures
impedes investing in assessment.

Despite the difficulties, substantial progress is being made. Consensus is being
achieved. Libraries are slowly adopting composite measures, such as those
developed by John Carlo Bertot, Charles McClure, and Joe Ryan, to capture
traditional and digital library inputs, outputs, and performance. For example:
 Total library visits = total gate counts + total virtual visits.
 Percentage of total library visits that are virtual.
 Total library materials use = total circulation + total in-house use of materials
+ total full-text electronic resources viewed or downloaded.
 Percentage of total library materials used in electronic format.
 Total reference activity = total in-person transactions + total telephone
transactions + total virtual (for example, e-mail, chat) transactions.
 Percentage of total reference activity conducted in virtual format.
 Total serials collection = total print journal titles + total e-journal titles.
 Percentage of the total serials collection available in electronic format.
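
Compiling these composite measures involves nothing more than sums and ratios,
so even a short script can produce them from a library's annual counts. The
following sketch is a minimal illustration in Python; all figures and field names
below are invented for the example, not drawn from any real library's data.

# Minimal sketch: compiling composite measures from annual counts.
# All figures and names below are hypothetical, for illustration only.

annual = {
    "gate_count": 210_000,           # physical visits
    "virtual_visits": 390_000,       # visits to the library Web site
    "circulation": 85_000,
    "in_house_use": 22_000,
    "e_fulltext_used": 140_000,      # full-text items viewed or downloaded
    "ref_in_person": 9_500,
    "ref_telephone": 2_100,
    "ref_virtual": 4_400,            # e-mail, chat, etc.
    "print_journal_titles": 1_800,
    "e_journal_titles": 12_700,
}

def pct(part: float, whole: float) -> float:
    """Return part as a percentage of whole, guarding against division by zero."""
    return 100.0 * part / whole if whole else 0.0

total_visits = annual["gate_count"] + annual["virtual_visits"]
total_materials_use = (annual["circulation"] + annual["in_house_use"]
                       + annual["e_fulltext_used"])
total_reference = (annual["ref_in_person"] + annual["ref_telephone"]
                   + annual["ref_virtual"])
total_serials = annual["print_journal_titles"] + annual["e_journal_titles"]

print(f"Total library visits: {total_visits}, "
      f"{pct(annual['virtual_visits'], total_visits):.1f}% virtual")
print(f"Total materials use: {total_materials_use}, "
      f"{pct(annual['e_fulltext_used'], total_materials_use):.1f}% electronic")
print(f"Total reference activity: {total_reference}, "
      f"{pct(annual['ref_virtual'], total_reference):.1f}% virtual")
print(f"Total serials collection: {total_serials}, "
      f"{pct(annual['e_journal_titles'], total_serials):.1f}% electronic")

The point of the sketch is that the difficulty lies in agreeing on definitions and
collecting trustworthy counts, not in the arithmetic itself.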

Analysis of composite measures over time will provide a more comprehensive
picture of what is happening in libraries and will enable libraries to present more
persuasive cases to university administrators and other funders to support libraries
and their digital initiatives. Perhaps a lesson learned in system development applies
here. Interoperability is possible when a limited subset of metadata tags and service
offerings are supported. In the context of assessment, a limited subset of statistics
and performance measures could facilitate comparison yet also allow for local
variations and investments. ARL is taking this approach in its effort to develop a
small set of core statistics for vendor products.

Reaching a consensus on even a minimum common denominator set of new
statistics and performance measures would be a big step forward, but libraries also
need methodological guidance and training in the requisite skills. Practical manuals
and workshops, developed by libraries for libraries, that describe how to gather,
analyze, interpret, present, and apply data to decision-making and strategic
planning would facilitate assessment and increase return on investment in
assessment. ARL is producing such a manual for E-metrics. The manual will provide
the definition of each measure, its rationale, and instructions for how to collect the
data. ARL also offers workshops, Systems and Procedures Exchange Center
(SPEC) kits, and publications that facilitate skill development and provide models
for gathering, analyzing, and interpreting data. However, even if libraries take
advantage of ARL’s current and forthcoming offerings, comments from DLF
respondents indicate that gaps remain in several areas.

8.2 Digital Library Federation (DLF)
The Digital Library Federation (DLF) is a program of the Council on Library and
Information Resources (CLIR) that brings together a consortium of college and
university libraries, public libraries, museums, and related institutions with the
stated mission of "advancing research, learning, social justice, and the public good
through digital library technologies." It was formed in 1995.

DLF's mission is to enable new research and scholarship of its members, students,
scholars, lifelong learners, and the general public by developing an international
network of digital libraries. DLF relies on collaboration, the expertise of its
members, and a nimble, flexible, organizational structure to fulfill its mission. To
achieve this mission, DLF:
 Supports professional development and networking of members,
 Promotes open digital library standards, software, interfaces, and best
practices,
 Leverages shared actions, resources, and infrastructures,
 Encourages the creation of digital collections that can be brought together and
made accessible across the globe,
 Works with the public sector, educational, and private partners, and
 Secures and preserves the scholarly and cultural record.

“How-to” manuals and workshops are greatly needed in the area of user studies.
Although DLF libraries are conducting several user studies, many respondents
asked for assistance. Manuals and workshops developed by libraries for libraries
that cover the popular assessment methods (surveys, focus groups, and user
protocols) and the less well-known but powerful and cost-effective discount
usability testing methods (heuristic evaluations and paper prototypes and scenarios)
would go a long way toward providing such guidance. A helpful manual or
workshop would:
 Define the method.
 Describe its advantages and disadvantages.
 Provide instruction on how to develop the research instruments and gather
and analyze the data.
 Include sample research instruments proven successful in field testing.
 Include sample quantitative and qualitative results, along with how they were
interpreted, presented, and applied to realistic library concerns.
 Include sample budgets, timelines, and workflows.

Standard, field-tested research instruments for such things as OPAC user protocols
or focus groups to determine priority features and functionality for digital image
collections would enable comparisons across libraries and avoid the cost of
duplicated efforts in developing and testing the instruments. Similarly, budgets,
timelines, and workflows derived from real experience would reduce the cost of
trial-and-error efforts replicated at each institution.

The results of the DLF study also indicate that libraries would benefit from manuals
and workshops that provide instruction in the entire research process, from
conception through the implementation of the results, particularly if attention were
drawn to key decision points, potential pitfalls, and the skills needed at each step
of the process. Recommended procedures and tools for analyzing, interpreting, and

115
presenting quantitative and qualitative data would be helpful, as would guidance on
how to turn research findings into action plans. Many libraries have already learned
a great deal through trial and error and investments in training and professional
development. Synthesizing and packaging their knowledge and expertise in the
form of guidelines or best practices and disseminating it to the broader library
community could go a long way toward removing impediments to conducting user
studies and would increase the yield of studies conducted.

Transaction log analysis (TLA) presents a slightly different set of issues because the data are not all under the
control of the library. Through the efforts of ICOLC and ARL, progress is being
made in standardizing the data points to be delivered by vendors of database
resources. ARL’s forthcoming instruction manual on E-metrics will address
procedures for handling these vendor statistics. Similar work remains to be done
with OPAC and ILS vendors and vendors of full-text digital collections. Library-
managed usage statistics for their Web sites and local databases and digital
collections present a third source of TLA data. Use of different TLA software,
uncertainty or discrepancy in how the data points are defined and counted, and
needed analyses not supported by some of the software all complicate data gathering
and comparative analysis of the use of these different resources. Work must be done
to coordinate efforts on all these fronts to facilitate comparative assessments of
resources provided by the library, commercial vendors, and other information
service providers.

In the meantime, libraries could benefit from guidance on how to compile, interpret,
present, and use the TLA data they do have. For example, DLF libraries have taken
different approaches to compiling and presenting vendor data. A study of these
approaches and the costs and benefits of each approach would be instructive. Case
studies of additional research conducted to provide a context for interpreting and
using TLA data would likewise be informative. For example, what does the
increasing or decreasing number of queries of licensed databases mean? Is an
increase necessarily a good thing and a decrease necessarily a bad thing? Does a
decrease indicate a poor financial investment? Could a decrease in the number of
queries simply mean that users have become better searchers? What do low-use or
no-use Web pages mean? Poor Web site design? Or wasted resources producing
pages of information that no one needs? Libraries would benefit if those who have
gathered data to help answer these questions would share what they have learned.
The issue of compiling assessment data is related to managing the data and
generating trend lines over time. Libraries need a simplified way to record and
analyze input and output data on traditional and digital collections and services, as
well as an easy way to generate statistical reports and trend lines.
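
As a sketch of what such simplified recording and trend reporting might look
like, the fragment below tabulates one composite measure over several years and
fits a simple linear trend. The yearly figures are invented, and the approach (an
ordinary least-squares line over annual percentages) is one plausible choice, not
a prescribed method. It uses statistics.linear_regression, available in Python
3.10 and later.

# Minimal sketch: a trend line over annual data. Figures are hypothetical.
from statistics import linear_regression  # requires Python 3.10+

years = [2018, 2019, 2020, 2021, 2022]
virtual_visit_share = [38.0, 44.5, 61.0, 57.5, 59.0]  # % of all visits

slope, intercept = linear_regression(years, virtual_visit_share)
print(f"Average change: {slope:+.1f} percentage points per year")
for year, share in zip(years, virtual_visit_share):
    fitted = intercept + slope * year
    print(f"{year}: observed {share:5.1f}%  trend {fitted:5.1f}%")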

8.3 DLF and Member Libraries
Several DLF libraries reported conducting needs assessments for library statistics
in their institutions, eliminating data-gathering practices that did not address
strategic concerns or were not required for internal or external audiences. They also
mentioned plans to develop a homegrown MIS that supports the data manipulations
they want to perform and provides the tools to generate the graphics they want to
present. Designing and developing an MIS could take years, not counting the effort
required to train staff how to use the system and secure their commitment to using
it. Only time will tell whether the benefits to individual libraries will exceed the
cost of creating these homegrown systems.

The fact that multiple libraries are engaged in this activity suggests a serious
common need. One wonders why a commercial library automation vendor has not
yet marketed a product that manages, analyzes, and graphically presents library
data. The local costs of gathering, compiling, analyzing, managing, and presenting
quantitative data in effective ways, not to mention the cost of training and
professional development required to accomplish these tasks, could exceed the cost
of purchasing a commercial library data management system, were such a system
available. The market for such a system would probably be large enough that a
vendor savvy enough to make it affordable could also make it profitable. Such a
system would reduce the effort required of librarians to interpret and apply data
effectively, and the resulting cost savings could offset the expense of purchasing
it. The specifications and
experiences of libraries engaged in creating their own MIS could be used to develop
specifications for the design of a commercial MIS. Building a consensus within the
profession for the specification and marketing it to library automation vendors
could yield the collaborative development of a useful, affordable system.
Admittedly, the success of such a system depends in part on the entry and
verification of correct data, but this issue could begin to resolve itself, given
standard data points and a system, designed by libraries for libraries, that saves
resources and contributes to strategic planning.

The results of the DLF study suggest that individually, libraries in many cases are
collecting data without really having the will, organizational capacity, or interest to
interpret and use the data effectively in library planning. Libraries have been slow
to standardize definitions and assessment methods, develop guidelines and best
practices, and provide the benchmarks necessary to compare the results of
assessments across institutions. These problems are no doubt related to the fact that
library use and library roles are in continuous transition. The development of skills
and methods cannot keep pace with the changing environment. The problems may
also be related to the internal organization of libraries. Comments from DLF
respondents indicate that the internal organization of many libraries does not
facilitate the gathering, analysis, management, and strategic use of assessment data.
The result is a kind of purposeless data collection that has little hope of serving as
a foundation for the development of guidelines, best practices, or benchmarks. The
profession could benefit from case studies of those libraries that have conducted
research efficiently and applied the results effectively. Understanding how these
institutions created a program of assessment (how they integrated assessment into
daily library operations, how they organized the effort, how they secured the
commitment of human and financial resources, and what human and financial
resources they committed) would be helpful to the many libraries currently taking
an ad hoc approach to assessment and struggling to organize their effort. Including
budgets and workflows for the assessment program would enhance the utility of
such case studies.

Efforts to enhance research skills, to conduct and use the results of assessments, to
compile and manage assessment data, and to organize assessment as a core library
activity all shed light on how libraries and library use are changing. What remains
to be known is why libraries and library use are changing. To date, speculation and
intuition have been employed to interpret known trends; however, careful
interpretation of the data requires knowledge of the larger context within which
libraries operate. Many DLF respondents expressed a need to know what
information students and faculty use, why they use this information, and what they
do or want to do when they need information or when they find information.
Respondents acknowledged that these behaviours, including the use of the library,
are constrained by changes on and beyond the campus, including the following:
 Changes in the habits, needs, and preferences of users; for example,
undergraduate students now turn to a Web search engine instead of the library
when they need information.
 Changes in the curriculum; for example, elimination of research papers or
other assignments that require library use, distance education courses, or the
use of course packs and course management software that bundle materials
that might otherwise have been found in the library.
 Changes in the technological infrastructure; for example, penetration and
ownership of personal networked computers, network bandwidth, or wireless
capabilities on university and college campuses enable users to enter the
networked world of information without going through pathways established
by the library.
 Use of competing information service providers; for example, Ask-A
services, Questia, Web sites such as LibrarySpot, or the Web in general.

In response to this widespread need to know, the Digital Library Federation,
selected library directors, and Outsell, Inc., have designed a study to examine the
information-seeking and usage behaviours of academic users. The study will survey
several thousand students and faculty in different disciplines and different types of
institutions to begin to understand how they perceive and use the broader
information landscape. The study will provide a framework for understanding how
academics find and use information (regardless of whether the information is
provided by libraries), examine changing patterns of use in relation to changing
environmental factors, identify gaps where user needs are not being met, and
develop baseline and trend data to help libraries with strategic planning and
resource allocation. The findings will help libraries focus their efforts on current
and emerging needs and expectations of academic users, evaluate their current
position in the information landscape, and plan their future collections, services,
and roles on campus based on an informed, rather than a speculative, understanding
of academic users and uses of information.

The next steps recommended based on the results of the DLF study are the
collaborative production and dissemination of the following:
 E-metrics lite: a limited subset of digital library statistics and performance
measures to facilitate gathering baseline data and enable comparisons.
 How-to manuals and workshops for
o conducting research in general, with special emphasis on planning and
commitment to resources
o conducting and using the results of surveys, focus groups, user
protocols, and discount usability studies, with special emphasis on field-
tested instruments, timelines, budgets, workflows, and requisite skills.
 Case studies of
o the costs and benefits of different approaches to compiling, presenting,
interpreting, and using vendor TLA data in strategic planning.
o how institutions successfully organized assessment as a core library
activity.
 A specification for the design and functionality of an MIS to capture
traditional and digital library data and generate composite measures,
trend data, and effective graphical presentations.

8.4 Conclusion
Libraries today are needy. Facing rampant need and rapid change, their ingenuity
and diligence are remarkable. Where no path has been charted, they carve a course.
Where no light shines, they strike a match. They articulate what they need to serve
users and their institutional mission, and if no one provides what they need, they
provide it themselves, ad hoc perhaps, but for the most part functional. In search of
high quality, they know when to settle for good enough: good-enough data, good-
enough research and sampling methods, good enough to be cost-effective, and good
enough to be beneficial to users. In the absence of standards, guidelines,
benchmarks, and adequate budgets, libraries work to uphold the core values of
personal service and equitable access in the digital environment. Collaboration and
dissemination may be the keys to current and future success.

SELF-ASSESSMENT QUESTIONS

1. What is LIBQUAL+? Discuss its various components with examples.
2. Describe E-metrics and criteria for conducting case studies.
3. Write a comprehensive note on DLF.

Activity:
1. With the help of a tutor, develop and conduct a LIBQUAL survey of your
nearby Public Library.

RECOMMENDED READING

1. DLF Organizer’s Toolkit. (2022). CLIR. Available at:
https://www.diglib.org/dlf-organizers-toolkit/
2. HEFC Colleges Learning Resources Group. (1999). Statistics 1997/98.
HCLRG. (Available from David Brown, Harold Bridges Library, St Martin’s
College, Lancaster.)
3. Scottish Further Education Unit. (1995). Improving libraries: case studies on
implementing performance indicators in Scottish college libraries. Glasgow:
Scottish Further Education Unit.
4. IPF. (1995). A standard for the undertaking of user surveys in public libraries
in the United Kingdom: Manual of guidance. London: CIPFA.
5. Pritchard, T. A. (1995). Plus for Leeds. Library Association Record, 97(10),
549–550.
6. Public libraries. (1990, February). Which?, pp. 108–110.

Unit–9

PERFORMANCE MANAGEMENT
FOR THE ELECTRONIC LIBRARY

Compiled by: Muhammad Jawwad


Reviewed by: Dr Amjid Khan

CONTENTS

Page #
Introduction ....................................................................................................... 123

Objectives ......................................................................................................... 123

9.1 Introduction .............................................................................................. 124

9.2 What the Electronic Library Does ........................................... 125

9.3 Users and Usage ....................................................................................... 126

9.4 Identifying Performance Issues and Performance Indicators ................. 127

9.5 Performance Issues .................................................................................. 127

9.6 Equinox Indicators ................................................................................... 128

9.7 The Use of Electronic Services in Practice .............................................. 129

9.8 Survey Methods ....................................................................................... 130

9.9 Outcomes and Consequences ................................................................... 131

9.10 The Future ................................................................................ 132

9.11 Conclusion ............................................................................... 133

INTRODUCTION

The unit will introduce performance management and the key challenges that have
plagued performance measurement since its inception. It will also discuss the
assessment of the electronic library, its users and usage, and performance issues
and indicators.

OBJECTIVES

After studying this unit, you will be able to explain the following:

 Performance issues and performance indicators.

 What the electronic library does.

 The use of electronic services in practice.

9.1 Introduction
Performance management is known as the “Achilles’ Heel” of human capital
management, and it is the most difficult HR system to implement in organizations.
Performance management is consistently one of the lowest, if not the lowest,
rated areas in employee satisfaction surveys. Yet, performance management is the
key process through which work gets done. It’s how organizations communicate
expectations and drive behaviour to achieve important goals; it’s also how
organizations identify ineffective performers for development programs or other
personnel actions. There are genuine reasons why both managers and employees
have difficulties with performance management. Managers avoid performance
management activities, especially providing developmental feedback to employees,
because they don’t want to risk damaging relationships with the very individuals
they count on to get work done. Employees avoid performance management
activities, especially discussing their development needs with managers, because
they don’t want to jeopardize their pay or advancement. In addition, many
employees feel that their managers are unskilled at discussing their performance
and coaching them on how to improve. These attitudes, on the part of both
managers and employees, result in poor performance management processes that
simply don’t work well. Another problem is that many managers and employees
don’t understand the benefits of effective performance management. They often
view it as a paperwork drill required by human resources, where ratings need to be
submitted every year for record-keeping purposes – a necessary evil that warrants
the minimum investment of time. What many managers don’t realize is that
performance management is the most important tool they have for getting work
done. It’s essential for high-performing organizations, and one of their most
important responsibilities. Done correctly, performance management
communicates what’s important to the organization, drives employees to achieve
important goals, and implements the organization’s strategy.

On the other hand, done poorly, performance management has significant negative
consequences for organizations, managers, and employees. Managers who conduct
performance management ineffectively will not only fail to realize its benefits, but
they can damage relationships with or undermine the self-confidence of their
employees. If employees do not feel they are being treated fairly, they become de-
motivated, or worse, they may legally challenge the organization’s performance
management practices. This can result in serious problems that are expensive,
distracting, and damaging to an organization’s reputation and functioning.

Today’s performance management best practices are the result of ongoing efforts
to address two key challenges that have plagued performance measurement since
its inception:
1. What type of performance should be measured – abilities, skills, behaviours,
results?
2. How can we measure performance most reliably, accurately, and fairly?

To understand where we are today with performance management and why certain
approaches have become best practices, we need to understand how they evolved,
based on trial and error.

In discussing the electronic library, various terms have been used almost
interchangeably: the electronic library, the virtual library and the digital library.
The term Hybrid Library is used to denote a mixed collection of traditional paper
and electronic sources. At the most basic level, a library has been traditionally
thought of as a building with carefully selected and pre-determined resources in it.
Although the electronic library is not like this, it does have some traditional
characteristics, like being at least partly housed in a building and requiring the
support of professional staff, although some of these will not be librarians. However,
the electronic library entails a movement away from the library as a place. The
Equinox project defines a library collection as ‘All information resources provided
by a library for its users. Comprises resources held locally and remote resources for
which access has been acquired, at least for a certain period.’ The definition offered
by the Equinox Project (Equinox 1999) of electronic library services is ‘The
electronic documents and databases the library decides to make available in its
collections, plus the library OPAC and home page’.

9.2 What the Electronic Library Does
The electronic library is part of a wider networked environment which offers access
to a wide range of materials including internal management information about the
host institution and its working environment. It is an organized environment yet
paradoxically facilitates extremely anarchic patterns of use. User understanding or
lack of it will necessarily influence judgements about its effectiveness. Users may
view it as just a collection of computers in a room or as a collection of software
packages and networked services which have no essential link with electronic
information. It gives virtual access to book and journal collections and databases.
Some of the data contained in these services may be bibliographical control data or
metadata, rather than full text. Sources might include bibliographic databases as
well as full-text sources and paper sources which are, at least, ‘virtually’ accessed
through the OPAC. It augments the traditional materials provided by library services
with electronic access to datasets and images such as video clips which might be
used for educational purposes. The service is less dependent on buildings and direct
access than the traditional library. The library is the interface to electronic data,
providing remote access including 24-hour access. Navigational aids and resources
are usually provided in the form of ‘hot linked’ web pages. Among the services the
electronic library might offer are the following:
• Access to electronic journals
• Word Processing packages
• Excel and other statistical packages
• PowerPoint demonstration software
• Links to local networks
• Internet
• Email
• Bibliographic software
• Digitized books and journals
• Electronic information databases
• OPACs
• Networked CD-ROMs on local area networks
• Full-text outputs via bibliographic searching
• Sets of lecture notes
• Web-based training packages.

9.3 Users and Usage
The electronic library redefines the concept of the user and introduces the idea of
the ‘virtual visitor’ or user. The user is no longer someone who ‘comes in’ and
observes set opening hours. There is no defined service period. Users may be
accessing the electronic library remotely from home or work and may be seen only
infrequently by librarians. Skill levels of users are very variable and may not
necessarily be defined by traditional stakeholder groups. It might be necessary to
redefine stakeholder groups based on skill levels, e.g., older people or non-
traditional students who conventionally have lower IT skill levels than full-time
students. There are likely to be ‘regulars’ among the user group such as overseas
students in higher education or enthusiastic school children in public libraries.
Other types of users include distance and lifelong learners and teleworkers.
Conversely, there is the problem of non-use which includes such issues as
technophobia, and low IT skills among the unemployed and low-skill groups. These
new characteristics naturally affect the delivery of training as traditional face-to-
face-based training can only cope with a fraction of the potential user group and
cannot cope at all with remote users. Web-based training, although a powerful tool,
can only be used by those with some basic skills.
9.4 Identifying Performance Issues and Performance Indicators
It is perhaps best to begin by going back to basics and identifying what constitutes
a ‘good’ performance indicator. A good performance indicator should meet the
following criteria:
 Informative content: should be used for measuring activity, identifying
achievements and identifying problems and shortcomings.
 Reliability: produces the same or a similar result if the measurement is
repeated.
 Validity: measures what it is intended to measure and not something else.
 Appropriateness: the operations necessary to implement the process of
measurement should fit in with the library’s procedures, physical layout etc.
 Practicality: the indicator should use data that can be made available with a
reasonable amount of effort and resources.
 Comparability: comparing the performance of one library with another is a
controversial area. Comparative performance measurement for the electronic
library will encounter enormous problems although the potential is equally
enormous.

9.5 Performance Issues
In planning performance indicators for the electronic library, issues to be considered
include the following:
 Skill levels—In traditional evaluation and performance measurement it is
assumed that users do not need high skill levels to comment intelligently on
services. Users’ comments on services have to be interpreted in the
knowledge that IT skill levels vary widely.
 Real use vs. browsing—Is the user making serious, systematic use of a range
of electronic services or engaging in unstructured browsing?
 Recreational use—Are services being used for leisure, entertainment, or
inconsequential purposes? This is an important factor in academic libraries
where the demand for access to machines is high.
 Provision of unwanted/unanticipated services—The electronic library,
because it provides access to the Internet, offers a range of information
sources which librarians cannot anticipate or control.
 Queuing/booking/walk-outs—Can physical access be easily measured?
 Remote logging in/problems with—Relates to wider issues of network
access.
 Problems of outputting data—Printing, downloading to disk and emailing
of results. Printing problems include the organization of printing and whether
all computers are linked to a central printer. How is queuing organized and
what are the charging mechanisms? Printers usually require at least
intermittent staff intervention and staffing support is a quantifiable issue.
Floppy disks can involve problems of following proper procedures and may
require advice from staff. Damaged disks are another problem.
 No defined service period—Service periods can be intermittent and/or fall
outside standard opening hours.
 Quality and reliability of Internet data—This is extremely variable, and
the librarian has no means of exercising any control over it.
 Non-use—This is an extremely complex issue and involves such factors as
physical distance from the campus, access to a computer at home or work,
access to a network connection, licensing conditions of databases, IT skill
levels, technophobia as well as social class characteristics.
 Changes over time—Longitudinal studies will be affected by changing and
hopefully improving skill levels, changes and hopefully improvements in
services, changing password authorizations etc.
 Distributed resources—Clusters of computers in different places, perhaps
not even in the same building make supervision, observation and support
difficult.
 Problems outside the library’s control, e.g., unreliable networks.
 The service-orientated culture—Librarianship is increasingly a service and
evaluation-orientated culture, but librarians have to work increasingly with IT
personnel, not necessarily in a structured environment. If IT personnel mainly
concern themselves with technical issues, differences in service attitudes can
emerge.
 PCs versus Macs—Dual platform services raise training issues for support staff.

The overall picture from these points is that there is a battery of qualitative issues
to be considered that count-based methods will fail to recognize and interpret.

9.6 Equinox Indicators
The Equinox Project is an EU-funded research project which aims to develop
international agreements on standard performance measures for the electronic
library environment, building on existing agreements for traditional library
services. It proposes the following draft list of performance indicators:
1. Percentage of target population reached by electronic library services.
2. Number of logins to electronic library services per capita per month.
3. Number of remote logins to electronic library services per capita per month.
4. Number of electronic documents delivered per capita per month.
5. Cost per log-in per electronic library service.
6. Cost per electronic document delivered per electronic library service.
7. Reference enquiries submitted electronically per capita per month.
8. Library computer workstation use rate.
9. Number of library computer workstations per capita.
10. Library computer workstation hours used per capita per month.
11. Rejected logins as a percentage of total logins.
12. Systems availability.
13. Mean waiting time for access to library computer workstations.
14. IT expenditure as a percentage of total library expenditure.
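
Several of these indicators reduce to simple ratios once the underlying counts
and costs are in hand. The sketch below illustrates draft indicators 2, 5, 8 and
11 in Python; all figures are invented, and the readings of the definitions (for
instance, taking ‘workstation use rate’ as hours used divided by hours available)
are simplifying assumptions rather than the Equinox project’s formal
specifications.

# Minimal sketch: computing a few draft Equinox indicators.
# All monthly figures below are hypothetical.

target_population = 12_000            # potential users of the service
logins = 54_000                       # logins to electronic library services
rejected_logins = 1_600               # e.g., licence-limit refusals
service_cost = 21_500.0               # monthly cost of the electronic services
workstation_hours_used = 6_200
workstation_hours_available = 260 * 40  # 260 workstations, ~40 hours each

print(f"Logins per capita per month: {logins / target_population:.2f}")        # no. 2
print(f"Cost per login:              {service_cost / logins:.3f}")             # no. 5
print(f"Workstation use rate:        "
      f"{workstation_hours_used / workstation_hours_available:.1%}")           # no. 8
print(f"Rejected logins:             {rejected_logins / logins:.1%} of total") # no. 11

As with the composite measures discussed in Unit 8, the arithmetic is trivial;
the hard part is collecting comparable counts across services and institutions.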

This extremely concise list has been refined from an initial list of over fifty
indicators, which shows how difficult it is to identify reliable performance indicators
which can be widely used. They are extremely practical and cover essential areas
but should be considered in light of the largely qualitative issues raised above.

The list does not include the length of the session, which might not, in any case, be
very meaningful, and there is no real way of measuring success in use or the
qualitative level of work undertaken. The proposed performance indicators will be
supported by user surveys to gather qualitative information which will complement
the numeric nature of the performance indicator set.

9.7 The Use of Electronic Services in Practice
The mismatch between quantitative indicators and the qualitative issues which need
to be addressed shows clearly that programs of evaluation will continue to be
needed, but there are surprisingly few studies of this important area. One of the first
was undertaken by Obst at Münster University in 1995, who found that email and
the Internet were the services mostly used by students and mainly for private
purposes. A survey of web use by undergraduate business school students at the
University of Sunderland was undertaken in May 1997. This showed that most
respondents experienced such difficulties in navigating the web that its potential
use value was greatly diminished.

There was a conspicuous lack of awareness about web information gateways.
‘Entertainment purposes’ also accounted for a significant proportion of web use;
81% of respondents encountered problems frequently or occasionally. Students
customarily sought help from other students. Search engines were not found to be
very helpful. A study at Glasgow Caledonian University confirmed these
findings. A mixture of qualitative methods was used by Crawford in 1999
consisting of 47 semi-structured interviews and three independently facilitated
focus groups. The questionnaire method was not used because of a lack of clearly
defined performance issues on which to base questionnaire questions. Some
conclusions from the study were as follows:
 The distinctive mission of the Library’s Electronic Information Floor (EIF)
was not clear to users who simply viewed it as a collection of computers
located in the library.
 Much of the use was unsophisticated and centred on email, the Internet and
word processing. Electronic information services were less used.
 There was a good deal of non-curricular use centring around using email and
the Internet for recreational purposes.
 Levels of IT skills were low, especially among non-traditional students.
 Much of the learning was from other students and not from Library or other
staff.

The study also highlighted general issues requiring further study. Users did not
appear to distinguish between electronic services generally like email and word
processing packages and specific electronic information services like Science
Citation Index. They saw the matter more in terms of ‘things you can do on a
computer’. A follow-up study undertaken in Spring 1999 showed that only about
15% of computer use was devoted to electronic information services.
These findings have been confirmed by an elaborate study at Cornell University
which used a combination of different techniques such as observation, semi-
structured interviews, a questionnaire and focus groups. This found a wide
ignorance of the electronic sources available and how they are accessed. Staff
typically only used two or three databases and none of the students used the library-
provided web gateway to access databases although they did use internet search
engines to locate information for coursework. Staff and students both wanted swift
access to relevant material with minimal investment in learning and searching time.
The overall picture is of unsophisticated use. Whether this will change over time is
one of the biggest issues in the evaluation of the electronic library.

9.8 Survey Methods
There is a major evaluation challenge here. McClure & Lopata’s (1996) pioneering
work is concerned with assessing University campus networks rather than the
evaluation of electronic information services but the repertoire of techniques they
recommend is relevant and surprisingly traditional. Because evaluating the
academic networking environment is in its early stages, qualitative techniques will
be especially useful in developing an understanding of the users of networks as well
as the benefits and problems associated with network use. Participants can raise
issues which the evaluator did not anticipate. The range of qualitative methods
prescribed includes focus groups, user activity logs (diaries), interviews and
observational methods. For effective observation, a well-developed data collection
form is essential. Quantitative methods (questionnaires) can also be used. The
following methodologies can all be used: software-based counts, observation, one-
to-one interviews, focus groups, diaries and questionnaires, both offline and online.
There are, however, particular problems with the evaluation of the electronic
library. Observational techniques work well provided a data sheet is used so that
the observer can work systematically. Observation, however, requires considerable
IT expertise as the observer must be able to recognize what service is being used
and form some impressions of the user’s IT skills. Observation also tends to be
unsuited to the study of remote users. Questionnaires can be administered either on
paper or electronically. Paper-based questionnaires require no IT expertise to
complete and consequently produce a less biased return, but response rates tend to
be low. Electronic questionnaires, made available over OPACs, have been
experimented with although the method has disadvantages. In this case, completion
rates were less than 10%. Experience of administering questionnaires electronically
at Glasgow Caledonian University, on the other hand, shows that response rates are
high but, as rudimentary IT skills are required to complete them, the sample is
necessarily biased. While the use of a range of techniques facilitates comparisons
(triangulation) and consequently validates the data, systematic evaluation of the
electronic library may prove to be extremely labour-intensive because of the range
of skills needed and the importance of human-based qualitative studies. All this
assumes that the use of electronic services is rich and varied and worthy of serious
investigation. If future survey work shows this not to be the case, then the problem
will be smaller.
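
The numeric side of questionnaire work, at least, is straightforward. The
fragment below, with invented figures, computes a response rate and an
approximate 95% confidence interval for a satisfaction proportion using the
standard normal approximation; it is a rough illustration of why low response
rates matter, since a small number of returns widens the interval around any
estimate.

# Minimal sketch: response rate and a 95% confidence interval for a
# proportion (normal approximation). All figures are hypothetical.
from math import sqrt

distributed = 1_200     # questionnaires distributed
returned = 132          # usable returns
satisfied = 95          # respondents rating the service satisfactory

response_rate = returned / distributed
p = satisfied / returned
margin = 1.96 * sqrt(p * (1 - p) / returned)  # half-width of the 95% interval

print(f"Response rate: {response_rate:.1%}")
print(f"Satisfied: {p:.1%} +/- {margin:.1%} (95% CI)")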

9.9 Outcomes and Consequences
Equinox draft indicators 5, 6 and 14 (cost per log-in per electronic library service;
cost per electronic document delivered per electronic library service; and IT
expenditure as a percentage of total library expenditure) suggest that performance
measurement for the electronic library will be much more of an overt accounting
exercise than previous forms of evaluation or performance measurement. It should
be possible to look at the usage of specific services and decide whether the money
spent on them is justified. It will be easier to identify demand in general and demand
in particular subject areas. It should help libraries to assess what to do with the
money they have and correlate funding more precisely with users’ needs. While, in
the short term, the number of electronic databases libraries subscribe to is likely to
increase, in the long run, measurement of use might lead to stabilization and indeed
reduction if some databases are found to be little used. Already English language
information databases predominate, and standardized methods of measurement
might hasten this process, except perhaps in the area of social sciences where there
seems to be a continuing demand for databases of publications presented in
languages other than English. There may also be implications for information-
seeking skills training, for if it becomes possible to identify a limited number of
widely used electronic information databases, then it might be possible to devise a
training program, based around them, rather in the manner of the European
Computer Driving License, setting out a program of basic training in IT skills.
Overall, however, the changing behaviour of users may be the biggest issue
especially if the UK government’s lifelong learning initiative generates a demand
for electronic information among people previously unfamiliar with such sources.
This will be a challenge for both public and educational libraries.

9.10 The Future
The past few years have seen the spread of the performance indicator movement
and the prescription of reliably tested performance indicators together with
methodologies for their application. It remains to be seen whether this will inhibit
the evaluation movement with the wide range of methodologies at its disposal. The
application of the Best Value regime in public libraries may lead to the appointment
of quality officers but wider progress in such a movement will be dependent on the
application of standardized quality programs which would themselves be necessary
to promote the regular collection of a wide range of performance indicators. While
public libraries are progressing towards a key set of performance indicators, the
picture in higher education is still emerging.

The Cranfield project recommended a set of nine key ‘management statistics’ to be
used by British academic libraries, and a first set covering the university and larger
higher education college libraries was published in 1999. SCONUL and HCLRG
intend to maintain the series in future years if possible. Evaluation will certainly be
needed to understand the new challenges of electronic information and lifelong and
distance learning, a world in which contact with the user will be irregular and
perhaps infrequent.

Evaluation will be needed to support the development of charters and expectation
management, for here the move to a customer-based model, disability rights
legislation and the possibilities offered by the electronic information revolution will
all need to be considered. As services other than libraries within an organization
develop their programs of evaluation the evaluation of library and information
services may decline as a separate issue although the standards set by librarians will
surely continue to be influential. In higher education evaluation is increasingly seen
as a holistic exercise in which the students’ experience is evaluated as a whole
rather than just one aspect of it.
Performance issues can emerge from this process which are common to several
departments, rather than just the library. Communication is a very good example.
Another factor, which is receiving increasing attention, is the influence of outside
forces. It is no longer possible to think of library users as people who have no
measurable life other than as library users. In higher education, the increasing need
for students to have employment lessens their contact with university services and
requires more flexible modes of provision. Services must be planned and evaluated
with this in mind. In public libraries, LISU and the Southeast London Performance
Indicator Group (SELPIG) have studied the relationship between social deprivation
in a local authority area and library use. This showed that there is a statistical
relationship between deprivation and library performance and has led to further
work. So, the future may be both more ‘holistic’ and more sociological.

9.11 Conclusion
Performance management is consistently one of the lowest, if not the lowest, rated
areas in employee satisfaction surveys. Yet, performance management is the key
process through which work gets done. It’s how organizations communicate
expectations and drive behaviour to achieve important goals; it’s also how
organizations identify ineffective performers for development programs or other
personnel actions.

The electronic library redefines the concept of the user and introduces the idea of
the ‘virtual visitor’ or user. The user is no longer someone who ‘comes in’ and
observes set opening hours. There is no defined service period. Users may be
accessing the electronic library remotely from home or work and may be seen only
infrequently by librarians. Skill levels of users are very variable and may not
necessarily be defined by traditional stakeholder groups.

SELF-ASSESSMENT QUESTIONS

1. What is performance management? Discuss the performance issues and
indicators.

2. Define an electronic library and what it does. Discuss its usage and users
with examples.

3. Describe the Equinox project indicators.

Activity:
1. Visit the HEC website and write an evaluative note on the HEC digital library
with the help of a tutor.

RECOMMENDED READING

1. Equinox. (1999). Library performance measurement and quality management
system [online]. Equinox. Available from http://equinox.dcu.ie/index.html
[Accessed Dec. 1999].

2. Everhart, N. (1998). Evaluating the school library media centre: analysis
techniques and research practices. Englewood: Libraries Unlimited.

3. Forrest, M. and Mackay, D. (1999). A healthy relationship with your user.
Library Association Record, 101(8), p. 476.

4. SCONUL. (1996). User satisfaction: standard survey forms for academic
libraries. London: SCONUL. (SCONUL briefing paper).

5. SCONUL. (1997). Aide-memoire for assessors when evaluating library and
computing services. London: SCONUL. Available at
http://www.sconul.ac.uk/aidememoire.html

6. SCONUL. (1999). Annual library statistics 1997–98. London: SCONUL.

7. SCONUL and HCLRG. (1999). UK higher education library management
statistics 1997–98. London: SCONUL & HCLRG.

8. Town, J. S. (1995). Benchmarking and performance measurement.
Proceedings of the 1st Northumbria international conference on performance
measurement in libraries and information services. Newcastle: Information
North, pp. 83–88.

9. Webster, K. G. (1995). The use of IT in library evaluation: electronic surveying
at the University of Newcastle. Proceedings of the 1st Northumbria
international conference on performance measurement in libraries and
information services. Newcastle: Information North, pp. 193–198.

_____[ ]_____

