Evaluation of Library & Information Services: BS (LIS)
COURSE TEAM
CONTENTS
Foreword
Preface
Acknowledgements
FOREWORD
The Department of Library and Information Sciences was established under the flagship of the Faculty of Social Sciences and Humanities to produce trained professional manpower. The department currently offers various programs from certificate to PhD level. The department supports the mission of AIOU, keeping in view the philosophies of distance and online education. The primary focus of its programs is to provide quality education by targeting the educational needs of the masses at their doorsteps across the country.
This new program has a well-defined level of LIS knowledge and includes courses in general education and foundational skills. Students are expected to advance beyond their secondary level and to mature and deepen their competencies, including writing, communication, mathematics, languages, and analytical and intellectual scholarship. Moreover, a salient feature of this program is practice-based learning, which provides students with a platform for practical knowledge of the environment and context they will face in their professional life.
PREFACE
We live in an evaluation culture, the result of social change over the past thirty years. The growth of the consumer movement in the 1970s encouraged consumers of goods and services to view the quality of the service they received much more critically and to complain if they were not satisfied. From 1980 onwards, declining patterns of public expenditure signalled the need to maximize resources and defend pre-existing patterns of expenditure, something which usually requires the collection of data. Declining public expenditure in the 80s was compounded by economic recession, which encouraged consumers to spend more carefully and look critically at the goods and services they purchased.
Although librarians have been aware of the importance of meeting users’ needs for
decades, the customer care movement in the 90s has strengthened the emphasis on
customer orientation. The movement originated in retailing, e.g., supermarkets, but
has successfully transferred to the public sector. This new emphasis on the
‘customer’ has expressed itself in customer care statements and charters. The world
in which all types of libraries function has come under the influence of a new world
of analysis and assessment. The ‘new managerialism’ has promoted an emphasis
on strategic planning, customer service and devolved budgeting in the public sector.
Strategic planning has resulted in the introduction of mission statements and institutional targets/aims/goals, and departments within the organization, such as libraries, may have their own mission statements which can be used as a baseline for evaluation.
There are four principles involved, the four Cs: 1) Challenge, 2) Consult, 3) Compare, and 4) Compete. There are many reasons for undertaking evaluation, but the principal and overriding reason for evaluating library services is to collect information that facilitates decision-making: to justify increased expenditure or defend existing expenditure, and to evaluate the quality of the service provided, both overall and specifically, in order to plan for future improvements. This is usually done by surveying, either quantitative or qualitative, to locate operational difficulties, some specified under objectives, and to identify the extent to which problems can be solved.
Dean
Faculty of Social Sciences & Humanities
ACKNOWLEDGEMENTS
All praise to Almighty Allah who has bestowed on me the potential and courage to
undertake this work. Prayers and peace be upon our Prophet Hazrat Muhammad,
his family and all of his faithful companions.
I am thankful to the worthy Vice-Chancellor and the worthy Dean of FSSH for
allowing me to prepare this study guide. Without their support, this task would not have been possible. Further, they have consistently been a source of knowledge, inspiration, motivation, and much more.
I would also like to thank the Print Production Unit (PPU) of AIOU for their support
in the comprehensive formatting of the manuscript and designing an impressive
cover and title page. Special thanks also to AIOU’s library for giving me the
relevant resources to complete this task in a befitting manner. I am also thankful to
ICT officials for uploading this book on the AIOU website. There are many other people whose names I could not mention here, but they have been a source of motivation throughout this pursuit.
Muhammad Jawwad
Course Coordinator
OBJECTIVES OF THE COURSE
Recommended Readings:
1. Crawford, J. (2000). Evaluation of library and information services. London: Aslib, The Association for Information Management.
2. Wallace, D. P., & Van Fleet, C. (2005). Library evaluation: A casebook and can-do guide. Englewood, CO: Libraries Unlimited.
COURSE ORGANIZATION
The course has been designed to be as accessible as possible for the distance mode of learning and will help students complete their required course work. The course carries three credit hours and comprises nine units. Each unit starts with an introduction which provides an overview of that particular unit, followed by the objectives of the unit, which show students its basic learning purposes. The rationale behind these objectives is that, after reading the unit, a student should be able to explain, discuss, compare, and analyze the concepts studied in that particular unit.
This study guide is specifically structured for students to acquire the skill of self-learning through studying the prescribed reading material. Studying all this material is compulsory for the successful completion of the course. Recommended readings are listed at the end of each unit. A few self-assessment questions and activities have also been put forth for the students. These questions are meant to help students understand the material and assess how much they have learned.
For this course, the department will arrange a three-day workshop at the end of the semester and four tutorial classes/meetings during the semester. Participation/attendance in the workshop is compulsory (at least 70%). The tutorial classes/meetings are not formal lectures like those given in a formal university. They are meant for group and individual discussion with the tutor to facilitate students' learning. So, before attending a tutorial, prepare yourself to discuss the course contents with your tutor (attendance in tutorial classes/meetings is not compulsory).
After completing the study of the first five units, 'Assignment No. 1' is due. The second assignment, 'Assignment No. 2', is due after the completion of the next four units. These two assignments are to be assessed by the relevant tutor/resource person. Students should be very careful while preparing the assignments because these may also be checked with Turnitin for plagiarism.
Step-1: Thoroughly read the description of the course for clear identification of
reading material.
Step-3: Complete the first quick reading of your required study materials.
Step-4: Make a careful second reading and note down in a notebook any points which are not clear and need fuller understanding.
Step-5: Carry out the self-assessment questions with the help of study material
and tutor guidance.
Step-6: Revise your notes. It is quite possible that many points which were previously unclear will become clearer during the process of carrying out the self-assessment questions.
Step-7: Make a third and final reading of the study material. At this stage, it is
advised to keep in view the homework (assignments). These are
compulsory for the successful completion of the course.
Muhammad Jawwad
Course Coordinator
Unit–1
LIBRARY EVALUATION:
INTRODUCTION
CONTENTS
Introduction
Objectives
INTRODUCTION
This unit is developed to teach students the concept of library evaluation and the purpose and process of evaluating library programs, services, and resources. It will also review the main types of, and approaches to, evaluation.
OBJECTIVES
After studying this unit, you will be able to explain the following:
1. Library evaluation.
2. Purposes of evaluation.
3. Major contexts and models of evaluating library services and programs.
1.1 Introduction
Research continues to play an important role in understanding the societal needs to
which libraries should be responsive, assessing the effectiveness of approaches to
delivering library services, and guiding the evolution of library processes, practices,
and policies. Practitioners often view research and researchers as being removed
from and uninterested in pragmatic problems. Researchers and practising librarians
view the field from different perspectives, attempt to meet different standards, and
are driven by different motives.
Systematic evaluation is the nexus between the need to conduct true research into
library operations and the need to provide direct evidence of the value of libraries.
Evaluation is a vital tool for providing effective, high-quality library programs.
Evaluating library services and programs can provide data to help professionals
understand what works and what doesn't for particular programs, patron groups or
communities. In doing so, evaluation data can help professionals manage staff and
resources and communicate their library's impact on the community. Professionals
can accomplish this whether they are a director, a department head, or a member of the programming staff.
The civil sector and nonprofit world have increasingly incorporated the practice of
evaluation, often for a variety of internally and externally driven reasons: to create
institutional change, to demonstrate the importance of specific programs or
initiatives to funders and sometimes to simply demonstrate their impact to the
outside world. No matter the impetus, it is hard for program directors to refrain
from feeling judged or resentful during the process of evaluation.
Evaluation doesn't need to feel overwhelming or frightening. It can, and should, be part of regular reflective professional practice, one that incorporates deep listening to your community and its needs. Evaluation can be done by those within the library, and it doesn't have to be expensive or involve outside consultants. Well-done evaluation serves internal library needs, helping the library achieve its goals, allocate scarce resources to where they are most effective, better understand its patrons and serve community needs. Evaluation can save libraries time and money by creating an environment where decision-making is based on evidence. Evaluation supports libraries, making the case for how libraries can effectively achieve their goals and connect with their patrons. It demonstrates the relevancy of libraries, showing their critical role within communities. Evaluation can improve how work is done within the library.
Evaluations of libraries are inevitable and ever-present. All aspects of library
development are influenced by the results of evaluations. To design successful
evaluations, the objectives to be accomplished must be known. Also, the criteria used
in the evaluation must be specified and the implications of values must be explicit. In
the evaluations of libraries, it is my conviction that the essential evaluative criteria
should be developed by the library profession and that standards for libraries,
developed by the profession and agreed to by it, should provide the basic measures for
evaluation. Of course, precise evaluations of libraries and library services can never be
the sole basis of decision-making. In many cases, politics is involved, and a highly
subjective element enters. It is the evaluation, however, based upon sound criteria and
carried out systematically, that can temper the politics.
All of the existing standards for libraries derive from efforts to determine why one library is more effective than another and to decide what constitutes quality and achievement in libraries. The development of standards grew out of this interest in evaluating libraries.
Systems analysis is the process of understanding and evaluating systems and is a
highly structured set of tools and processes that, when properly employed, yields
reliable data to describe the system, its inputs, its processes, and its products.
1.6.1 Administrative Decision-Making
Much of the focus of evaluation is on making decisions regarding resource
allocation, personnel training and evaluation, procedure development and revision,
and planning. Evaluation projects carried out to meet these purposes tend to be very
focused and concrete and may have very obvious expected outcomes.
1.6.3 Politics
Libraries of every kind exist in a political arena; library administrators in particular
must be sensitive to the political interactions of which they are an integral part.
Evaluation can serve to justify and explain administrative decisions to governing
bodies and higher-level administrators.
1.8 Conclusion
Evaluations of libraries and library services inevitably call for comparisons, and several approaches to comparison have been identified. Measures of effectiveness have remained elusive. There have been efforts to use patron satisfaction as a measure of effectiveness, but there are problems here too, for patrons often do not know if they were served well. Satisfaction studies note that if patrons were treated politely and cordially, they reported a high level of satisfaction. There are no major studies that measure satisfaction some time after the library experience.
Standards for libraries prepared and adopted by professional librarians and library
associations in countries around the world have been successful in identifying the
kinds of resources necessary for the development of library services. As librarians
met the established minimums, and as librarians in many jurisdictions began to
chafe against externally established standards, standards gave way to locally
determined missions, goals, and objectives, and measures of performance began to
be designed. While much work on performance measures has been carried out over
the past twenty-five years, these measures have not assisted libraries much in
identifying measures of quality, nor have they helped in determining the kinds of
resources needed by libraries today.
SELF-ASSESSMENT QUESTIONS
Activity:
1. With the help of a tutor, develop an Evaluation Action Plan.
RECOMMENDED READINGS
3. Baker, S. L., & Lancaster, F. W. (1991). The measurement and evaluation of library services (2nd ed.). Arlington, VA: Information Resources Press.
Unit–2
CONTENTS
Introduction
Objectives
INTRODUCTION
This unit is developed to teach students the reasons for evaluation, specific issues in evaluation and the various contexts for evaluation. It will also discuss assigning value in evaluation.
OBJECTIVES
After studying this unit, you will be able to explain the following:
2.1 Introduction
The ultimate aim of all management principles, methods and techniques is to help attain the objectives of the organization efficiently, effectively, economically, and on time. It is evaluation that testifies whether the objectives have been achieved and, if so, to what extent. Evaluation also includes accountability to the funding authorities, the patrons and other stakeholders as to whether the resources spent have resulted in the attainment of the desired objectives. Evaluation is a judgement of worth. Thus, it means assessing the worth or value of the unit to the people for whom it is meant. It is the assessment of performance against users' expectations. It could also be interpreted in the narrower sense of whether the output is commensurate with the input. In the context of a system, it means the degree of usefulness of the set-up in meeting the various objectives the system has to achieve.
By and large, evaluation means testing the service or system for effectiveness and efficiency. Lancaster has prescribed three possible levels of library evaluation: the measurement of effectiveness, cost-effectiveness and cost-benefit. Similarly, Vickery and Vickery have provided a useful framework for assessing performance in reaching objectives: the effectiveness of a system, the economic efficiency of a system and the value of a system. By effectiveness, they mean the degree to which a system achieves its objectives; by economic efficiency, the degree to which it minimizes costs in achieving them. The combination of the two results in cost-effectiveness. According to Vickery, value is the degree to which a system contributes to user needs, and where it is expressed in monetary terms and compared with the cost, it becomes a cost-benefit analysis. A look at the latter's framework shows that it is not different from what Lancaster has prescribed, and the two therefore map onto Lancaster's effectiveness, cost-effectiveness and cost-benefit analysis, respectively.
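To make these three levels concrete, the following sketch (in Python) computes each measure for a hypothetical document-delivery service. All figures, and the assumption that a satisfied request can be assigned a monetary value, are invented purely for illustration.

# Illustrative only: hypothetical figures for a document-delivery service.
annual_cost = 50_000.0        # total cost of running the service (assumed)
requests_received = 8_000     # demands placed on the service (assumed)
requests_satisfied = 6_800    # demands met successfully (assumed)
benefit_per_satisfied = 9.0   # estimated monetary value of one met demand (assumed)

# Effectiveness: the degree to which the system achieves its objective.
effectiveness = requests_satisfied / requests_received          # 0.85

# Cost-effectiveness: cost per unit of effectiveness (here, per satisfied request).
cost_per_satisfied_request = annual_cost / requests_satisfied   # about 7.35

# Cost-benefit: value delivered, expressed in money, compared with the cost.
total_benefit = requests_satisfied * benefit_per_satisfied
benefit_cost_ratio = total_benefit / annual_cost                # about 1.22

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Cost per satisfied request: {cost_per_satisfied_request:.2f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")

Read in one line: effectiveness asks whether the demand was met, cost-effectiveness asks at what cost per met demand, and cost-benefit asks whether it was worth the money.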
The former is concerned with how well the library performs: a consideration of how a library could utilise fewer resources to achieve the same level of service. It is, therefore, a measure of cost-effectiveness, as distinct from an assessment of the impact which a service has on its users, or an examination of how it is fulfilling or satisfying the needs of its user community.
It is clear from the foregoing discussion that the user is at the centre of all these measures of evaluation. Be it cost-effectiveness evaluation or cost-benefit evaluation, evaluation of effectiveness, efficiency or performance – all end up finding ways of better serving the library user, and that means satisfying the demands he or she places upon the library.

As for methods of evaluation, there are at present two main methods for evaluating a library's effectiveness or measuring its performance: the subjective and the objective. The subjective method or approach primarily depends on users' opinions or attitudes to measure the effectiveness of a library. Normally, such opinions or attitudes are ascertained by methods used in marketing research: questionnaires, interviews or both. As a result, the subjective approach takes the user as the unit of analysis. The assumption here is that these user evaluations are valid indicators of library performance. This view, however, lacks consensus; there are two powerful schools of thought representing the pros and cons. Arguing for the cons, Stecher contends that users are not competent to give valid evaluations of library services. He argues powerfully that 'it seems doubtful, to say the least, that results from subjective satisfaction measures could be taken seriously'. Lancaster and others hold the opposite opinion. They argue for the necessity of soliciting these user evaluations for a host of reasons. They contend, first, that some demands for materials are either too complex or too ambiguous to fit within the constraints of the objective measures, which tend to be predicated upon demands for specific items. Secondly, some of the services that people use do not have objective measures of performance. In such cases, the user, as the ultimate beneficiary of these services, becomes the most qualified person to evaluate their performance or effectiveness. These potent arguments bring the subjective approach to the fore as a useful complement in evaluating library performance. It is therefore valuable, provided its methodological application is sound and scientific. Stecher, one of the most ardent critics of the subjective approach, corroborates this. He argues that 'subjective satisfaction, as expressed and tested in a more realistic form of user preferences, has found methodological application in several studies'.
It is, therefore, safe to conclude at this point that, in determining the degree of success with which a library performs, the ultimate authority, the library user, is the most logical source of an answer. This is well noted by Vickery and Vickery: 'in the social process of information transfer ... the ultimate evaluation must be from the viewpoint of the potential recipients'. User opinions, therefore, remain a valid and potent measure of user satisfaction. Performance measurement of a library, and in this context the evaluation of a library's effectiveness in the services it renders, can also be accomplished quantitatively. With this approach, performance measurement of a library adopts the tools of the management sciences. It is an integral part of the management process. As its extensive use testifies, it is now accepted as an answer to the numerous problems and shortcomings of traditional measures.
Librarians are feeling growing pressure, as are many others charged with the administration of public agencies. Shrinking federal government resources and eroding local tax bases, combined with pressing social problems, have resulted in intense competition among agencies for resources. Social scientists have recognized for some years that the allocation of funds is a political act; in such a climate, evaluation measures become political tools. Public librarians recognize that their tools are inadequate. They appear frustrated that they cannot defend what they know to be a crucial and threatened public good: free public library service available to all citizens. They know that the case for public libraries cannot be supported in the current terms of business or politics, but researchers have not developed alternative measures that are compelling in hostile political environments. They see a need for a new generation of evaluation tools that better explain what librarians do and what impact libraries have on the future.
2.3 Reasons for Evaluation
There are differing ideas about what evaluation is and why it should be done. For Blagden, performance evaluation is an integral part of good management and is undertaken for two reasons:
1. To convince the funders and the clients that the service is delivering the benefits that were expected when the investment was made,
2. As an internal control mechanism to ensure that the resources are used efficiently and effectively.
The primary focus is the evaluation of services provided to the user: how they may be identified, how a background understanding of them may be built up, how they can be evaluated and how the data collected can be put to good use. Superficially, librarianship is easily evaluated because it is mainly about the provision of discrete, related and comprehensive services which have an element of predictability about their operation. However, services with a major qualitative component, such as reference services, are difficult to evaluate, and the methods used may be controversial. The volume of activity in a service has to be related to the demand for it if we are to understand whether it functions well. The service must be appropriate to the needs of the user, which, in turn, raises the questions:
a. What is a user?
b. Are users an amorphous mass with identical needs or are they discrete groups
with differing or even contradictory needs?
Although progress has been made in the evaluation of services to users, there is still a need for simple, generally agreed definitions:
a. What is a queue?
b. What does answering the telephone promptly mean?
2.4.3 The Institutional Context
Every library exists within the structure of some institutional setting. Although the concept of the library is not necessarily tied to an institution called a library, most libraries are defined at least in part by their institutional identity. Evaluation carried out in the library is by extension carried out on behalf of the institution that governs the library. Although every library is governed by a unique combination of institutional needs and requirements, there are fundamental similarities that make it possible to generalize to other environments.
represents change. It is much too often the case that neither camp has engaged in
any meaningful evaluation of the new technology.
The introduction of new technologies has had a profound impact throughout the history of libraries and library services. The emergence of new ways of achieving library goals must be accompanied by an evaluation of the technology itself and of the impact of the new technology on existing processes, products, and services. Continuity in the provision of services is frequently maintained by adapting established evaluation techniques to new technologies, as has been the case with the development of criteria for evaluating World Wide Web search engines.
A currently popular expression in the library profession emphasizes the need for libraries to be client-centred. This term derives from the business world and carries with it the implication that the central purpose is not to be profitable but to serve the customer. In the corporate context, the message to be sent is that the company does not exist to make money but to provide useful products or services to its customers. The principle of being client-centred extends to the library context in a desire to be focused not on information resources, but on information needs.
Appreciation for the patron or client context leads to the need to involve and engage the library's clientele in the evaluation of library services, processes, and products. Bringing the client into the evaluation process has a bonding effect that sends the message that patron input is important. The desire for useful and usable client input is the principle that underlies methods such as focus groups. Client input also drives the ongoing search for standards for professional performance.
Within an overall societal context, value systems vary across subgroups or cultures.
Free public library service may be highly prized in general, but there are
undoubtedly segments of the population to whom, for various reasons, free public
library service is irrelevant or is viewed negatively.
Evaluation, then, must recognize the various value systems that affect the entity
being evaluated. Evaluation is not value-neutral. Working from the assumption that
free public library service is a core value inherently shapes the goals, methods, and
outcomes of the evaluation of public library services. If the goal of evaluation is to
determine whether a thing is good, then the question of who determines what is
good or bad must be addressed. The first essential of evaluation is to understand
and work within the value system that applies.
Translating statements of benefits and values into operational evaluation processes is a difficult and frequently elusive proposition.
Quantitative measures can be helpful in identifying core resources that transcend the scope of expert judgment. These quantitative indicators are especially attractive in that they are easily amenable to comparisons. They can be compared over time for a single location, among locations within a single library system, and across locations for a broader geographic area. They can be applied consistently and with an impressive degree of validity.
2.6 Conclusion
The ultimate aim of all management principles, methods and techniques is to help
attain the objectives of the organization efficiently, effectively, economically, and
on time. It is evaluation that testifies whether the objectives have been achieved and, if so, to what extent. The primary focus is the evaluation of services provided to the
user: how they may be identified, how a background understanding of them may
be built up, how they can be evaluated and how the data collected can be put to
good use.
SELF-ASSESSMENT QUESTIONS
2. Describe the specific issues in evaluating libraries and their key indicators.
Activity:
1. Prepare a flow chart of the societal, functional and technological context of
evaluation with the help of a tutor.
RECOMMENDED READING
Unit–3
IDENTIFYING PERFORMANCE
ISSUES FOR EVALUATION
CONTENTS
Introduction
Objectives
INTRODUCTION
This unit is about identifying performance issues for evaluation. It will also provide students with an understanding of organizational effectiveness and performance measurement in libraries. Students will learn the criteria for effectiveness at the organizational level.
OBJECTIVES
1. Organizational effectiveness.
3.1 Introduction
Effective organizations are the ones most likely to survive and prosper. It is easy to
see, therefore, why measuring organizational effectiveness is crucial. Administrators,
managers, and trustees all have significant stakes in determining whether their
organizations are successful and why they are successful. Organizational
effectiveness is not just the concern of those who work inside organizations, however,
but is also important to consumers. When individuals select businesses or institutions
to patronize, their decisions are often based on their evaluation of organizational
effectiveness. People want to patronize organizations they believe are effective. The
criteria for effectiveness used by consumers may, of course, differ in kind and
number from those used by individuals working inside the organization.
During the 1970s and 1980s, interest in developing quantitative standards for libraries declined, and output measures for performance were developed instead. As library costs rose faster than library income, librarians sought meaningful and measurable ways to show how their libraries were performing. The development of performance measures does not include indicators of what excellent service might require. Rather, the approach is that of a single library assessing its services against its own goals and objectives.
The fact that there are many possible criteria for determining organizational effectiveness highlights a critical point: measuring effectiveness depends, in large part, on point of view, on who is doing the judging. Libraries should be concerned with collective judgements, for it is the collective judgement of people connected to the library that forms the source of acceptance, stability, and prosperity for libraries.
Performance or output measures were developed first in the public library sector.
In that community, there is now a recognition that performance, to be satisfactory,
requires a certain level of resources. As King Research observed in Keys to Success,
"Performance is the relationship between resources that go into the library -- the
inputs -- and what the library achieves using those resources -- the outputs or
outcomes". What is emerging is the need for standards relating to resources (or
inputs) that will enable appropriate levels of performance (or outputs), and there is
an emerging interest in developing professional standards against which a particular
library can be evaluated. There also is emerging interest in comparative assessment
and evaluation.
In sum, neither library administration nor library patrons are going to suspend
judgments regarding whether the library is effective or ineffective, no matter how
difficult the task of measuring it is. Therefore, librarians need to find intelligent
ways to determine how well the library is doing and to report the results clearly to
the public.
One approach in comparative assessment is to identify a set of institutions with which one wishes to be compared and use that set as a referent in making comparisons on various aspects of library performance. The Association of Research Libraries (ARL) is experimenting with this approach. In developing the initial set of ratios, ARL identifies three issues which must be taken into account in assessing the reliability and validity of the data: 1) consistency, that is, the way data are collected from institution to institution and over time (there is difficulty with definitions here); 2) ease versus utility, that is, what is easy to gather data on may not be the most desirable variable to measure; and 3) values and meaning, which may exist only in the context of a local situation. ARL has been collecting statistical data from its members for many years. Thus, the Association is in a good position to use statistical measures which will help gauge the quality and costs of library services and enable institutional comparisons.
Recognizing this subjectivity requires in the evaluator a sense of modesty: that there
is no “one best way” to measure organizational effectiveness; there are many ways
and the evaluation itself depends heavily on who is doing the measuring and what
measures are selected.
Failing to explicitly recognize these interests may result in distortion. This also
highlights the need to be self-critical and reflective regarding measuring
organizational effectiveness to ensure that values are not inappropriately imposed
on the process. For example, the public library has often been accused of being an
institution that caters to the better-educated and higher-income white middle class.
Are our evaluation techniques primarily designed to assess this one set of interests?
As Sumsion says, the data cannot be considered precise because of the different methods of collection, different definitions, and problems with incomplete datasets. It should be noted that data from different years have been used. The expenditures have been converted to British pounds throughout, using the average exchange rate for the year to which the statistics apply. The years covered range from 1992 to 1994. In the tables, 'loans per capita' data are frequently for loans of total stock rather than books only.
An additional criterion is the extent to which the organization has a beneficial impact on society as a whole. This concept of trying to measure the impact on society is sometimes used to distinguish organizational effectiveness from organizational success: an organization is a success to the extent to which it satisfies the needs of society. This would be an appropriate, albeit elusive, measurement.
The variety of levels and the many ways to measure effectiveness highlight the
difference between what is referred to as macro-organizational criteria versus
micro-organizational criteria. Macro-organizational criteria measure how
effectively organizations provide for the community at large. They answer
questions such as, “Is the organization serving its entire potential market and
serving this market well?” In terms of a library, they ask, “Is the library serving all
the potential users?” or “Is the library accomplishing its broader social goals?”
Micro-organizational criteria focus on internal operations. They answer questions
such as, “Are departments working efficiently?”; “Are qualified staff being
recruited?”; and “Are employees satisfied and committed to the organization?”.
3.4 Performance Issues for Evaluation
It is important to devise a systematic regime of assessment which meets the needs of the library and can be understood outside the library by its parent body or users. This usually involves an annual overview survey to identify more general problems, backed up by a range of methods to clarify specific issues. It helps to know where you 'fit in'. John Sumsion, former head of the Library and Information Statistics Unit, has devised a classification of universities as follows:
1. Large, postgraduate and miscellaneous
2. Pre-1960 universities
3. Post-1960, pre-1992 universities
4. 1992 universities (former polytechnics)
5. Higher Education colleges: education and general
6. Higher Education specialist colleges.
This gives a basic guide to comparative perspectives and the kind of issue a
particular academic library should concern itself with. Floor space for large special
collections is unlikely to be a major issue in a post-1992 academic library.
Public library service points can also be categorized as 'large' or 'medium', depending on the range and level of services they provide. Opening hours per week can be used as a basis for categorization (Library Association 1995). Before beginning to look into the question of what to evaluate, it is worth asking: Is a survey really necessary? Can adequate data be obtained from pre-existing sources which might supply a comparative perspective and eliminate the need for further work? Some sources are listed below, but the Library and Information Statistics Unit at Loughborough University has emerged as the key provider, and indeed interpreter, of LIS statistics in the UK, and its website is often a good place to start.
complaints about central services. In addition, annual reports are usually
produced in which students can comment on library services. In practice,
these tend to be repetitive and focus on a limited number of issues like the
availability of basic textbooks, access to seating, noise and loan regulations.
Nevertheless, they represent a useful forum of undergraduate opinion, and
taken in conjunction with other sources, can help to identify problems
requiring further attention, even if they are the most intractable.
3. Library committees and published sources. Most types of libraries have a library committee. In public libraries, it has a statutory function. In academic and special libraries its role varies from a decision-making body to a talking shop. Much will depend on whether the members bring issues to it or whether its business is led by library staff, in which case it is less likely to produce potential evaluation issues. A supplementary and much more anarchic source is contributions to newspapers and in-house journals. Letters to local newspapers can stimulate debate on public library issues, and in universities student newspapers are a popular platform for debate and complaint. Unfortunately, although frequently impassioned, such sources are not always as thoughtful and well-informed as they might be.
4. Institutional programmes of evaluation, which regularly evaluate all the services provided by the parent institution, are still relatively uncommon. Such an approach is useful because it compares the library with
other services and gives an idea of its status within the institution as a whole.
Institutional programmes of evaluation, apart from the obvious purposes of
informing decision-making and identifying issues for further study, can be
used as a basis for charter design as they give a picture of the level of service
which can realistically be provided in specific areas.
5. Management information sources can offer a starting point for evaluation. The advent of automated systems has greatly reduced the need to do basic survey work. It should be easy to extract information from automated
systems about particular categories of users and the use they make of loan and
inter-library-loan services. Simple usage statistics can be the start of an
investigation into the worth of specific services.
6. Most types of libraries have a structure of meetings, from a simple staff
meeting in small libraries to a complex system of team and related meetings
in large libraries. Sometimes this is supplemented by ‘limited life’ working
parties. Such meetings will discuss issues specific to themselves or matters of
general concern. Problems they raise may be the subject of evaluation.
7. Electronic bulletin boards and suggestion boxes offer users an opportunity to express their views directly to the library. In general terms they allow users to
raise qualitative issues which, if they occur sufficiently frequently, can
identify problems which need further study.
8. Programmes of research to identify evaluation issues: if resources permit, it is worthwhile considering a programme of research to identify what the library's ongoing performance issues are likely to be and to try to devise a systematic programme of evaluation. As indicated above, there are different theories as to what constitutes evaluation, and many libraries identify a programme of basic needs and an ad hoc programme of supplementary evaluation which looks at specific issues; an overall 'holistic' programme of evaluation is therefore lacking. The major disadvantage of a fixed systematic programme is its proneness to inflexibility: in a period of rapid change, the performance issues identified can become out of date.
3.5 Conclusion
All responsible librarians strive to be organizationally effective, but there is no one
way to measure organizational effectiveness. Library administrators should view
organizational analysis not as a one-time activity but as an ongoing
multidimensional process. Although the goal-setting model recommended by the
Public Library Association can be useful, it should not be seen as a complete
approach. Much depends on the purposes of the evaluation and the perspective of
those conducting it. Before deciding on a particular approach or set of approaches
one must address a variety of issues. These include, but are not limited to, the following:
1. What is the purpose of the evaluation?
2. Whose point(s) of view is critical?
3. What functions are being evaluated?
4. What levels of the organization are being evaluated?
5. How will the results of the evaluation be used?
6. How much time, money, and staff are available to conduct the evaluation?
7. What type of information is needed to conduct the evaluation?
8. Against what standard can results be assessed?
9. Who will conduct the evaluation?
10. What are the possible sources of bias in the evaluation?
SELF-ASSESSMENT QUESTIONS
Activity:
1. With the help of a tutor, prepare a 'Model of Library Effectiveness'.
RECOMMENDED READING
Unit–4
QUANTITATIVE METHODS
CONTENTS
Introduction
Objectives
INTRODUCTION
This unit will give students an understanding of the quantitative methods of library evaluation. It will guide them on suitable areas of study, questionnaire design, sampling techniques, analyzing the data and presenting the results.
OBJECTIVES
The objectives of this unit are to impart knowledge of the following aspects:
3. Sampling techniques.
4.1 Introduction
We intended to assemble a set of data that would 'cut to the chase', that is, to present in a very limited space only those data elements that most directly and eloquently built our case. We were mindful that we live in the 'sound bite' age, in which time to read is scarce and the competition for public attention is fierce. We knew that we had to choose our data carefully and present only a handful of attention-getting statements, each of which would carry important information.
Useful data abound. Data from within and without the library world that can be fashioned into an argument for increased support for libraries are relatively easy to find and present. Much data can be found on the World Wide Web, analyzed using standard spreadsheet software or a calculator, and presented in documents designed in word processing or desktop publishing software. Finding data from library sources is easy if you know where to look. Evaluation processes include subjective and objective methods and approaches. Although the emphasis tends to be on objective (quantitative) methods, the subjective (qualitative) approach can provide a balance to the evaluation.
Quantitative methods are typically those that provide statistical data or work with known quantities. The quantities may be manipulated or changed, and the variations measured. These methods answer questions of 'what?' and 'how many?' and are straightforward to use.
Quantitative studies gather statistical data and use known quantities as a way to
look at the impact a change in one component might bring about. For example, a
library manager could look at the number of reference questions answered in terms
of different times of the day, number of staff available, location, or amount of time
available. The advantage of this approach is the ability to control the environment
to allow the effect brought about by the change in one variable to be measured. A
cause-and-effect relationship can then be established. This approach does not
address the complexity of social interactions that might impact service, but it can provide useful data for staffing questions and other service decisions. As a methodology, objective or quantitative studies are easier to administer and more common than qualitative ones.
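As a minimal sketch of such a quantitative study, the following Python fragment tallies logged reference questions by hour of day and by service desk. The log records and desk names are hypothetical; a real study would draw them from the library's enquiry statistics.

from collections import Counter

# Hypothetical transaction log: (hour of day, desk) for each question answered.
log = [
    (10, "main"), (10, "main"), (11, "branch"), (14, "main"),
    (14, "main"), (14, "branch"), (19, "main"), (19, "main"),
]

by_hour = Counter(hour for hour, _desk in log)
by_desk = Counter(desk for _hour, desk in log)

for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {by_hour[hour]} questions")
for desk, count in by_desk.items():
    print(f"{desk} desk: {count} questions")

Comparing such counts across times of day, staffing levels or locations is what allows a cause-and-effect relationship to be explored.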
Surveys can measure many aspects of library and information services, including effectiveness; they are not, however, useful in describing why a library system is functioning effectively. Surveys need to be carefully conducted and administered. A survey questionnaire needs to be pretested and designed to prevent problems of question interpretation, bias, reading ability, and other completion errors.
Survey work, in whatever form, is the most widely used method of evaluating
library services and the questionnaire is widely viewed as the most attractive
method of quantitative data collection. It can be applied to a whole range of issues
from a simple overview survey of user satisfaction to a detailed investigation of the
needs of small groups of users. However, the structure of the questionnaire and its
method of administration will vary, depending on the user group being addressed.
However, as a method, it has the vices of its virtues. Because it is highly structured
it is also highly inflexible. If a fault is found in a questionnaire halfway through its
administration, not much can be done about it. It does not penetrate the attitudes
which inform the answers. A questionnaire might readily reveal that Saturday
opening does not appeal to part-time students, although superficially Saturday
opening might seem attractive to people who work during the week. Behind the
answer lies the complex world of the part-time student who has to juggle family,
social, occupational and study commitments. Furthermore, the questionnaire
answers only the questions which have been asked. If important issues have not
been addressed by the questionnaire, then its qualitative value will be diminished.
At its worst extreme, it is possible to design a questionnaire which, by selecting questions that focus on certain issues and avoiding others, produces a misleading result. For example, it might be possible to get an over-favourable view of a library service by avoiding questions about service areas which users consider to be poor. While such goings-on are unknown in librarianship, it is interesting to note that precisely such a charge has been made against Britain's privatized railway companies (Independent, 7.1.1999). The accurate identification of the performance issues which inform question selection is at the heart of good questionnaire design.
The availability of data from automated systems reduces the need for data
collection in areas which have a systemic element. Some aspects of satisfaction
with an inter-library loan service can be inferred from statistics and benchmarking
with other institutions but there might still be a need to collect the views of users
who attach a lot of importance to the service, like research fellows and research
students in higher education. Problems like the length of queues can be tackled by
simple counts and may not need to involve the user at all. In public libraries, some
survey priorities are impact, market penetration, promotion of services, book
buying compared with borrowing, information/enquiry services, audiovisual
services and electronic information sources.
4.3 Questionnaire Design
In questionnaire design, standardized methodologies should be used as much as possible. Standardized methodologies allow you to benefit from the experience of others and to compare your results with those of similar libraries.
Although some questionnaires use an A3 format (the IPF National Standard User Survey, for example, offers this as an option), the A4 format is widespread and influences the design and structure of the questionnaire.
There are essentially two types of questions: closed (or pre-coded) and open (free
response) questions. In closed questions, the respondent is offered a choice of
answers and ticks or circles the most appropriate one. In open questions,
respondents can answer spontaneously in their own words. The closed question is
much easier to analyse, especially by computer, but the options offered must be
appropriate to the respondent. Here it is essential to build on the experience of
others and carefully identify in advance the performance issues which will inform
the design of your questionnaire questions. The answers to open questions can be
much more instructive and entertaining to read but the comments made have to be
turned into identifiable performance issues which can then be quantified. This is a
time-consuming and somewhat subjective exercise.
However, the points made contribute to the identification of performance issues and
therefore to modifications to the questionnaire in the future.
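By way of illustration, the sketch below contrasts the two question types: closed responses can be tallied directly, while open comments must first be coded into performance issues. The keyword-to-issue mapping is invented purely for illustration; in practice, coding open answers remains the subjective, iterative exercise described above.

from collections import Counter

# Closed question: respondents ticked one pre-coded option.
closed_responses = ["satisfied", "satisfied", "neutral", "dissatisfied", "satisfied"]
print(Counter(closed_responses))

# Open question: free-text comments coded into performance issues.
# This keyword-to-issue scheme is hypothetical and would be refined
# as the comments are read.
coding_scheme = {"noise": "noise levels", "seat": "seating", "queue": "queues"}
comments = ["Too much noise upstairs", "Could not find a seat", "Long queue at issue desk"]

issue_counts = Counter(
    issue
    for comment in comments
    for keyword, issue in coding_scheme.items()
    if keyword in comment.lower()
)
print(issue_counts)  # quantified performance issues derived from open answers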
4.3.1 Specific points in questionnaire design
1. Don’t ask a question unless you need to know the answer. Always be guided
by the general aim of your survey and avoid peripheral or extraneous
questions. They will only bulk out the questionnaire and make it less attractive
to respondents. It will also lengthen analysis time.
2. Ask only questions which can be answered. This may seem so obvious as not
to require stating, but questions should be avoided which require respondents
to undertake significant data collection themselves.
3. Ask only questions which can be realistically and truthfully answered. Don’t
encourage the respondent to give speculative or inaccurate information. This
applies particularly to open questions.
4. Ask only questions which the user is prepared to answer. Avoid questions that
the respondent might consider embarrassing. Put sensitive questions (e.g.,
sex, age) last.
5. Ask only for information unavailable by other means; a great deal of survey and statistical data exists in published sources. Don't reinvent the wheel.
6. Ask precise rather than general questions. Avoid questions like 'Are you satisfied/dissatisfied with the library service?'. They are insufficiently probing and likely to mask dissatisfaction with specific areas of the service.
7. Avoid double questions like 'Would you like more books and a longer loan period?'. These are different issues and should be treated separately; putting them together will lead to a confused analysis.
8. Use simple, jargon-free language. Questions should be short, simple and easy to grasp. Jargon is a particular problem in a jargon-filled profession like librarianship and can be difficult to avoid. I once designed an OPAC satisfaction survey questionnaire in conjunction with a Psychology student and a university lecturer who was an expert in questionnaire design. Despite this, respondents criticized what they perceived as jargon in the terminology of the questions. Jargon is very difficult to avoid in a survey which focuses on users' perceptions of technical issues. In libraries with a privileged user group, in frequent contact with the service, jargon may be more acceptable.
9. Avoid ‘gift’ questions such as ‘Would you like Sunday opening?’. The
respondent is unlikely to say ‘no’ even if he or she does not intend to make
use of such a service himself or herself. It is better, although more long-
winded, to offer a range of viable options e.g.
50
The question is indicating to the respondent what the library can reasonably
deliver and is inviting him or her to make a responsible choice.
10. Appreciate that the respondent may perceive a hidden agenda in the question. A survey of a service which is not widely used, but valued by those who do use it, may be interpreted by users as a signal that it will be withdrawn. This may result in misleading answers. Surveys of the use of infrequently used periodicals are a good example.
4.4 Sampling
Sampling is done when it is not possible to survey an entire population. The procedure is known as inferential statistics: it tries to make statements about the parent population from the evidence of the sample. This is typically done when surveying public library users or university undergraduates. In the case of a small group of users, it is possible to survey the entire population (a census). This usually applies to special libraries or to small groups of users with specific needs, e.g., the disabled. Samples aim to represent the population on a small scale, and if the sample is reliable, it should be possible to reach conclusions about the whole population.
The term sampling frame means a list of the entire population as defined by
whatever factors are applied, such as gender or occupation. It may be a register of
borrowers, university student records or a list of staff who work for a company and
use a special library. Sampling theory demands that probability should be allowed
to operate fully and therefore samples should be chosen randomly. There are several
methods of random sampling:
1. Simple random sampling: each member of the (sampling frame) population has an equal chance of being chosen. If not many people are involved, their names can be written on pieces of paper and drawn from a hat. Where many people are involved, a table of random numbers can be used.
2. Systematic random sampling: this also involves the use of a sampling frame.
A starting point is selected at random and then every nth number thereafter,
fourth, tenth or whatever, depending on the size of the sample desired.
3. Stratified random sampling: the sampling frame is divided by criteria like
library users by department or faculty, and random sampling takes place
within each band chosen.
To stratify, you have to know how each group differs. This is difficult to do accurately, and it can be useful to use proportionate sampling to ensure that the numbers for each band reflect the numbers in the sampling frame, as in the sketch below.
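A minimal sketch of the three random methods, assuming the sampling frame is simply a list of (borrower, faculty) pairs; the stratified step uses proportionate allocation so that each band's share of the sample mirrors its share of the frame. All data are invented for illustration.

import random

# Hypothetical sampling frame: (borrower id, faculty) pairs.
frame = [(i, "arts" if i % 3 else "science") for i in range(1, 601)]
sample_size = 60

# 1. Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(frame, k=sample_size)

# 2. Systematic random sampling: a random starting point, then every nth member.
n = len(frame) // sample_size          # the sampling interval
start = random.randrange(n)
systematic_sample = frame[start::n]

# 3. Stratified random sampling with proportionate allocation: random
#    sampling within each band, sized by the band's share of the frame.
strata = {}
for borrower in frame:
    strata.setdefault(borrower[1], []).append(borrower)
stratified_sample = []
for faculty, members in strata.items():
    k = round(sample_size * len(members) / len(frame))
    stratified_sample.extend(random.sample(members, k))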
It is not always possible for a library to carry out a structured survey and there are
also non-random and therefore less reliable methods which are widely used:
1. Accidental sampling: whoever is available is chosen.
2. Quota sampling: Whoever is available is chosen based on predetermined
characteristics such as age, gender, social class and occupation and a certain
number of people have to be surveyed in each category. It is a quick method
and convenient to administer and is widely used for opinion polling.
3. Purposive sampling: the survey population is chosen from prior knowledge,
using intuition and judgement.
The relative validity of these methods depends on the purpose of the survey. Non-
random methods can give more information about minority views. Random
methods, by their very nature, are unlikely to generate many respondents from
minority interest groups. This is the case in higher education where the views of
the largest single group, full-time undergraduates, will predominate unless the
views of other groups are sought as well. Quota sampling is a good method of
surveying changes in opinion over time. The survey would have to be repeated at
intervals to build up a meaningful, long-term picture.
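By way of illustration, quota sampling can be modelled as accepting passers-by in arrival order only while their category's quota remains unfilled. The quotas, age bands and arrival stream below are all invented.

# Hypothetical quotas by age band, and an arrival stream of (respondent, band).
quotas = {"under 25": 3, "25-59": 4, "60+": 3}
arrivals = [("r1", "25-59"), ("r2", "under 25"), ("r3", "25-59"), ("r4", "60+"),
            ("r5", "25-59"), ("r6", "under 25"), ("r7", "25-59"), ("r8", "25-59"),
            ("r9", "60+"), ("r10", "under 25"), ("r11", "60+"), ("r12", "25-59")]

filled = {band: [] for band in quotas}
for respondent, band in arrivals:
    if len(filled[band]) < quotas[band]:   # accept only while the quota is open
        filled[band].append(respondent)
    if all(len(v) == quotas[b] for b, v in filled.items()):
        break  # all quotas met; stop surveying

print(filled)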
4.5 Questionnaire Administration
Before administering a questionnaire, it is advisable to pilot it. This helps to eliminate errors before the questionnaire is administered. Problems likely to be encountered include a poor choice of terminology and varying interpretations of the questions, due to ambiguous or misleading wording or simply the differing viewpoints of respondents. Ideally, the pilot should be tried out on 10% of the intended sample. This can, in itself, be quite a large number, and it might not be possible, in which case at least some of the sample should be consulted. It is also a good idea to seek the advice of experts. Universities, businesses and industry, and local authorities usually employ people skilled in survey or market research techniques, and they will often offer advice or even direct help in planning and organizing survey work.
Surveys also raise expectations, and users whose comments are not acted upon may
respond cynically: ‘I complained about this last year, but you have not done
anything about it’. Timing matters too. The academic calendar, for example, tends
to make January a ‘dead’ month as students are either sitting examinations or are
absent from the campus.
All service points (branch libraries, campus site libraries or whatever) should be
surveyed to identify differences or problems particular to one or more sites. A large
public library service with many branches may not be able to afford to survey them
all simultaneously. In this case, branches should instead be surveyed over a three-year
rotation. Survey results from small branches should be interpreted with
care as the number of returns may be so small as to make the results of limited
worth. However, it is a good idea to cross-check the results from different branches
to try to identify similar patterns of activity. A variety of methods may be used to
administer the questionnaires, depending on the circumstances. In public libraries,
temporary staff can be hired if funds permit, but it is often better to use existing
library staff who will understand why the work is being undertaken and will be
better qualified to explain and justify the exercise to users.
The IPF manual offers precise instructions on how to administer the questionnaires.
In academic and special libraries, a range of options is available. Internal mail is an
option open to both and can yield better results than postal questionnaires sent to
people’s homes. While the latter typically yield response rates of 15%–30%, internal
mailing can double this. In university libraries, questionnaires can be
administered both inside and outside the library. Sympathetic lecturers can be
persuaded to distribute questionnaires at the beginning of lectures and if these are
filled in at the time and collected at the end of the lecture there will be a good
response rate. User education evaluation questionnaires can be distributed at the
end of user education sessions and, again, if completed at the time, this will produce
a good response rate. Within the library, the staff are the best people to administer
the questionnaire. They can distribute forms to users entering the library and
explain the purpose of the study if questioned. It is a good idea to record the number
of questionnaires distributed; from this, it will be possible to calculate the response
rate. A response rate of at least 40% should be aimed for.
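As a worked example (the figures are invented), the response rate is simply the
number of forms returned divided by the number distributed:

    # Hypothetical figures for one survey week.
    distributed = 480
    returned = 207

    response_rate = returned / distributed * 100
    print(f"Response rate: {response_rate:.1f}%")   # 43.1%, above the 40% target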
In one reported exercise, the return rate for questionnaires administered on paper
was about 14%, while the return rate for the electronically administered
questionnaire was 72%. The return rate might have been even higher had the IT
skills of respondents been better.
Time is saved in data analysis since the raw return data is already in electronic
format and does not have to be created from scratch. This method, despite its
advantages, has one obvious and serious limitation. It effectively limits the sample
to the computer literate and is most suitable for use in special libraries and
universities with few non-traditional students. If used in public libraries, it will tend
to exclude the elderly and those with low skill levels.
Data can be analyzed using a spreadsheet package such as Excel, which will give
adequate statistical analysis, in terms of totals and percentages, for surveys of a
moderate size (hundreds rather than thousands of returned forms), and which can
generate attractive ways of presenting the data, such as bar and pie charts. For very
large surveys, which are likely to be undertaken only occasionally, it might be
necessary to use a dedicated statistical package. One of the most frequently used is
SPSS (the Statistical Package for the Social Sciences). It has a large repertoire of
statistical techniques and is well known to social researchers, so it is fairly easy to
get advice on its application.
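For a survey of moderate size, the totals-and-percentages analysis described above
can equally be scripted. This sketch assumes a hypothetical export file,
survey_returns.csv, with one row per returned form and a 'satisfaction' column;
both names are invented for the example.

    import csv
    from collections import Counter

    # Read the returned forms (file name and column name are assumptions).
    with open("survey_returns.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    total = len(rows)
    counts = Counter(row["satisfaction"] for row in rows)

    # Totals and percentages for each rating, most frequent first.
    for rating, count in counts.most_common():
        print(f"{rating:<12} {count:>5} {count / total:>7.1%}")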
There are two means of communicating results: closed and open. Closed
communication includes written reports, newsletters and posters briefly
summarizing results. Open communication includes meetings of library staff,
committee meetings and focus or other structured groups. All of these can be
appropriate for disseminating results. In public libraries,
the target audience will include service managers and library managers, local
authority elected members and the general public. Summaries of the report, in press
release form, can be sent to the local newspaper.
In special libraries, results may also be reported to users and their employers. In
higher education, such communication is probably still used more with staff than
with students, but there is great developmental potential here.
4.7 Conclusion
Good measures are valid, reliable, practical and useful. Each of these components
contributes to the success of the evaluation. A valid measure accurately reflects that
which it is meant to measure. The most appropriate methodology matches the goals
of the research with the strengths of a particular approach. Using more than one
methodology or collecting data from more than one perspective can result in a better
understanding of the service under study.
The ultimate aim of all management principles, methods and techniques is to help
attain the objectives of the organization efficiently, effectively, economically, and
on time. It is evaluation that establishes whether the objectives have been achieved
and, if so, to what extent. Evaluation also involves accountability to the funding
authorities, the patrons and other stakeholders as to whether the resources spent
have resulted in the attainment of the desired objectives. It is pertinent to note that,
despite the importance of evaluation to funding agencies and the managers of
libraries, and despite the theoretical expositions, accounts of actual evaluations of
libraries or information services are very few. Whatever the method of evaluation
adopted, it is likely to be influenced by the type of library and the area of service
being evaluated.
SELF-ASSESSMENT QUESTIONS
2. Discuss the various suitable areas of study in evaluating library services and
products.
4. Explain sampling techniques, data analysis, and the process of presenting the report.
Activity:
1. Visit the nearby university library and evaluate its circulation services by
developing a questionnaire with the help of the circulation staff/librarian.
RECOMMENDED READING
Unit–5
QUALITATIVE METHODS
CONTENTS
Page #
Introduction ....................................................................................................... 61
Objectives ......................................................................................................... 61
INTRODUCTION
After going through this unit, you should get acquainted with the basic qualitative
methods of library evaluation. The main focus of the unit is to give an
understanding of various methods of the qualitative approach to library services
and products.
OBJECTIVES
After studying this unit, you will be able to explain the following:
5.1 Introduction
Evaluation is concerned with determining the strengths and weaknesses of a
library’s collection and services in terms of their intrinsic quality, the extent to
which the service and collection support and further the library’s mission and
goals, and the value of that service and collection to the library’s users and
potential users. Evaluation of library services and collections is an integral part of
the broader collection development process and of the planning of library services
and structure. Data gathering is the first step in the evaluation process. Collecting
data, whether qualitative or quantitative, is a way to describe current conditions.
Evaluation involves examining data in the context of appropriate organizational
models and in terms of the library’s mission. The resulting judgments about what
the data mean form the basis of decisions for future action. On the whole,
qualitative methods are less used than quantitative methods. They figure less in the
literature, especially standard textbooks, but are increasingly reported and
becoming better understood.
Qualitative methods include such techniques as interviews, frequently conducted
on a one-to-one basis; meetings, whether loosely structured or more tightly
organized, like focus groups; suggestion boxes, whether in manual or automated
form (via an OPAC or website); observational methods; and the keeping of diaries.
Some involve direct, face-to-face interaction, and require special skills and
training; others do not.
Behind the qualitative method lies the simple concept of the story. We make sense
of the world by telling one another stories. In talking about libraries, we also talk
about stories. What did the person come for? What were their intentions? What did
they find? What happened? What was the value? The story can take many forms.
A small child enthusing to its parents about a story hour at a public library is
engaging in positive qualitative evaluation just as much as adult members of a focus
group discussing a library service or participants in a structured interview.
They all represent direct experience. The qualitative approach has several
characteristics and advantages. It works well with small samples and is appropriate
for analyzing problems in depth. For this reason, it is useful for tackling complex
and poorly understood problems. By collecting users’ perspectives directly, it is
possible to tease out the underlying issues. Answering the question ‘Why’ rather
than ‘How often’, emphasizes the role of the participant who can present himself
or herself as an actor in the events being studied. However, the approach is much
looser. It does not produce the tidy array of statistics at the end which a quantitative
survey would and, for this reason, is perceived as being non-scientific.
There is also a danger that it may be manipulated by vocal minorities who are good
at case-making. This sometimes happens with university teaching departments who
are anxious to promote or retain a service which they see as benefiting them
specifically. For this reason, some background in quantitative data is desirable. This
is not a problem for libraries which practice a systematic regime of assessment, but,
for libraries not in that position, some background published sources such as LISU-
produced figures might suffice. Qualitative methods also allow users to participate
more in library management and give them a feeling that they are making a direct
contribution to policy formulation.
It is often said that qualitative methods are less labour-intensive and time-consuming
than quantitative methods. There is some truth in this. (The qualifications are discussed
in detail below.) However, planning and facilitating meetings require special skills
which library staff may not possess and the logistical problems of getting an
appropriate group of people into a suitable room on a certain day at a certain time
should never be underestimated. It helps considerably to have a preexisting
organization, independent of the library, which acts as an organizational focus.
The library at the University of Huddersfield has built up good relations with
the University’s Students’ Union, which has 150 course representatives who
can be called on for focus group work.
Glasgow Caledonian University had, for a time, an organization called the
Partnership for Quality Initiative which organized and facilitated meetings for
several university departments, including the library. If no appropriate
organization exists, it might be necessary to employ an outside body as Brent
Arts and Libraries did when studying the needs of ethnic minorities within
the Borough.
5.2 Some Qualitative Techniques/Methods
5.2.1 Focus groups
Of the various qualitative methods available the focus group is probably the one
which has attracted the most attention. They are called focus groups because the
discussions start broadly and gradually narrow down to the focus of the research.
They are not rigidly constructed question-and-answer sessions. Focus groups are
used in a variety of situations. In business and industry, they are often used to test
new product ideas or evaluate television commercials. In higher education, they can
be used to ‘float’ new ideas such as embarking on a major fundraising venture.
Focus groups typically consist of 8 to 12 people, with a moderator or facilitator who
focuses the discussion on relevant topics in a non-directive manner. The role of the
facilitator is crucial. He or she must encourage positive discussion without
imposing control on the group. There is a danger that focus groups can degenerate
into ‘moan sessions’. The structured discussion group (also known as a snowball or
pyramid discussion) is a variant of the focus group which tries to address this issue.
After an introductory discussion, the participants begin by working in small groups
to identify and prioritize key themes. The groups then come together, and each
group is asked to make a point which is then tested by the other groups. Agreement
is reached on each point in turn and a record is kept of the discussion, which is
verified towards the end of the session. Sessions last between 45 minutes and an
hour and a quarter, and about 14 points usually emerge.
Focus groups have several advantages over other forms of research which
have been usefully summarized by Young (1993):
1. Participants use their own words to express their perceptions.
2. Facilitators ask questions to clarify comments.
3. The entire focus group process usually takes less time than a written survey.
4. Focus groups offer unexpected insights and more complete information.
5. In focus groups, people tend to be less inhibited than in individual interviews.
6. One respondent’s remarks often tend to stimulate others and there is a
snowball effect as respondents comment on the views of others.
7. The focus group question design is flexible and can clear up confusing
responses.
8. Focus groups are an excellent way to collect preliminary information.
9. Focus groups detect ideas which can be fed into the questionnaire design.
2. Select facilitators with expert skills. These include good communication skills
and experience with group dynamics. They need not be experts on the
subject under discussion.
3. When recruiting, ask for volunteers. A good way to do this is to add a brief
section to regularly used questionnaires in which respondents are asked if they
are willing to participate in further survey work. Relevant organizations
which regularly collect opinions within the institution may also be able to help
in providing names although this carries with it the risk of involving the ‘rent
a crowd’ who are only too willing to express an opinion about anything.
4. Use stratified groups. In higher education, separate staff from students and
undergraduates from postgraduates. Try to include all segments of the
target population, such as full-time and part-time students.
5. Schedule 8–12 people per focus group, but always overschedule especially if
working with undergraduates. Reminders by personal visits, telephone or
prompting by lecturers may all be necessary for higher education. It is
important to remember that Students’ Representative Council members have
many calls on their time and too much cannot be expected of them.
6. Allow ample time for discussion, usually up to two hours.
7. Develop a short discussion guide, based on the objectives of the research. This
should be pre-tested on a sample population if possible. An experienced
facilitator should be able to do much of this work.
8. If possible, run three or four groups per target audience for the best results.
One group may not provide enough data but organizing more may be difficult.
9. Hold sessions in a centrally located, easily accessible room. Put up signs and
notify colleagues who may be asked for directions. Ideally, use a room with
audio-taping facilities.
10. Reward participants for their time. Many libraries have no appropriate budget
and a reward, in practice, often means nothing more than tea, coffee, biscuits
and scones. Small rewards can be made in kind such as a free photocopying
card or a book token donated by a bookseller. Such prizes can be allocated
through a raffle.
11. In analyzing and summarizing the data look for trends or comments which are
repeated in several sessions. Sessions can be analyzed from audio tapes, flip
chart paper and handwritten notes.
12. Don’t over-generalize information gained from focus groups and don’t use it
for policy decisions. Because of non-scientific sampling and the inability to
quantify results the information collected should be used carefully.
Ideally, sessions should be facilitated by someone other than library staff, so that
participants do not feel any inhibitions about criticizing the library service in the
presence of staff. Hart (1995) organized seven focus groups over an academic year
and, inter alia, made the following points:
1. The lunch period is a good time to hold them.
2. Each group lasted half an hour to an hour.
3. Misconceptions relating to all aspects of library organization were
widespread.
4. Focus groups are good for identifying problems which might not otherwise
have been considered.
5. Focus groups are good for providing insights rather than answers.
6. Focus groups are not particularly cheap, mainly in terms of staff time, which
can be 3–4 hours per session.
Focus groups can reveal distinct gaps between the perceptions of library staff and
those coming from the users. This is something librarians have to come to terms
with if they are to benefit from the experience. Users might focus on a particular
theme, e.g., open-access photocopying and binding facilities, and force library staff
to rethink provision. Focus groups are usually a public relations success, as they
show that the library is actively canvassing user opinion even if users’ expectations
of what the library can provide are sometimes unrealistic. Scheduling can be a real
problem and, if people cannot be recruited, can threaten the results.
Focus groups are not very helpful in discussing technical issues because of a lack
of expertise among participants but it is important to recognize that the underlying
issue may be valid even if presented in naive terms. For example, a focus group on
the use of a computer centre suggested a computer booking system, based on colour
bands, as used in
swimming pools. Even if the method was not very practical the need for a booking
system was identified. A somewhat unexpected advantage of focus groups is that
they are an end in themselves. The very act of participating gives those present a
feeling that they are ‘having their say’ and engaging in a form of two-way
communication which helps to
close the feedback loop.
Focus groups can be used with children, but this is a highly specialized activity.
They can be used with children from 7 to 16 years of age. Very simple questions
should be asked, and a ‘chatty’ manner is necessary. The use of pairs of friends
encourages discussion and groups can then be built up consisting of up to three
pairs. Parental consent for participation is needed, in writing wherever possible, up
to the age of 16.
Neutral venues are best, preferably a family home. Sessions can last up to 90
minutes provided the participants are interested, the facilitator is well-prepared and
light refreshments are provided. As younger children (7-11) think concretely it is
important to provide concrete examples like models and drawings and practical
activities like drawing or writing down their ideas in the form of a postcard to a pen
pal. For older participants (13+) bubble diagrams are a good method. A drawing of
a simple situation includes a speech bubble, and a thought bubble and respondents
fill in what the people are thinking and saying. This is very useful for sensitive
topics. A useful collection of methodologies for use with school pupils can be found
in Nancy Everhart’s Evaluating the school library media centre (1998) which
discusses evaluation methods with practical examples of questionnaires,
interviews, focus groups, numbers gathering and observation. The practical
difficulties of relating this work to children and young people are discussed.
5.2.2 Suggestion boxes
Two questions determine whether a suggestion box is worthwhile: will the
suggestions be read, and will anything be done about them? If the answer to both
these questions is ‘no’, there is not much point in having a suggestions box, but
answering each suggestion by letter could be a substantial clerical exercise. There
seems to be a good deal of user cynicism about the method.
Suggestions are sometimes facetious or even obscene and, in the automated version
described below a failure to respond rapidly can result in further questions, like:
‘Why does no one ever answer my questions?’.
Automated library systems have given a new lease of life to the suggestions box
because some have question/answer facilities included in the OPAC. Typically,
these include a screen on which users can input questions and these are then
answered by library staff. It is best if a specific member of staff has responsibility
for this and ensures that all questions are answered promptly. In practice, questions
tend to be repetitive and the responsible staff member soon builds up expertise in
replying to them. If the person who deals with the questions does not know the
answer, he or she can forward it to the relevant member of staff. The system should
collect statistical data about questions and may allow browsing and keyword
searching. Regular review of the questions makes it possible to identify
performance issues and compare them with other sources of performance data. If
the module contains a stop-word list of obscenities and offensive expressions, these
can be filtered out (see the sketch below). These features make for a much more
reliable evaluation tool
than the manual suggestions box, mainly because feedback is much better. In
practice, questions tend to fall into two categories:
1. Precise questions specific to the enquirer, e.g., ‘If I do not return my books by
a certain date, will I have to pay a fine?’
2. General questions about services, which are easier to answer.
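A minimal sketch, in Python, of the stop-word filtering and question statistics
mentioned above; the stop-word list, the category names and the function names
are all invented for illustration.

    import re
    from collections import Counter

    # Hypothetical stop-word list of terms to mask before public display.
    STOP_WORDS = {"offensiveword1", "offensiveword2"}

    def filter_suggestion(text):
        """Mask any stop-listed word in a submitted question or suggestion."""
        def mask(match):
            word = match.group(0)
            return "***" if word.lower() in STOP_WORDS else word
        return re.sub(r"[A-Za-z']+", mask, text)

    # Simple statistics: count how often each question category recurs,
    # so performance issues can be spotted at review time.
    question_log = Counter()

    def record_question(category):
        question_log[category] += 1

    record_question("opening hours")
    record_question("fines")
    record_question("opening hours")
    print(question_log.most_common())   # [('opening hours', 2), ('fines', 1)]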
5.2.3 Diaries
The data obtained from diaries can be varied and highly specific but, for this very
reason, it can be extremely difficult to tease out performance issues from the
detailed observations made. If diaries are kept over a long period, errors can creep in and
diarists who are aware that their writings are part of a structured research program
may begin to modify their observations, perhaps even unconsciously. Nevertheless,
they are a good way of collecting data which is difficult to collect in any other way,
provided the data collected can be set in some sort of context.
Diary techniques are usually a less casual and unstructured activity than the term
appears to imply. Although diaries allow users’ actions and reactions to be recorded
as they occur, most people are not used to keeping a record of their activities, and
without a predetermined structure for recording it the data is likely to be difficult
to analyze. Diaries are usually structured, and the respondent is given
forms with checklists or performance issues prompting him or her to comment on
the areas of study. The checklists must be easy to understand otherwise the
respondent may become confused. Such a method can remove the element of
spontaneity and individuality which informs diary writing.
Another method is a time diary which records the respondents’ activities at different
times of the day. An example of diary research was carried out at the Centre for
Research in Library and Information Management at the University of Central
Lancashire as part of a wider study of library support for franchised courses in
higher education (Goodall 1994).
Approximately 120 first-year students were involved in the project which aimed to
document the experience of students about the provision and availability of library
resources. Students were required to complete three or four diaries and attend a
follow-up focus group discussion. Although the students received £25 each for
participating there were difficulties in recruiting a sufficient sample and the project
had to be vigorously promoted to attract sufficient interest. The result was a self-
selected sample. Although there were problems in identifying suitable pieces of
work for study, once assignments had been identified they provided issues on which
to focus and gave a structure to the exercise.
The analysis of the diary entries then provided the framework for the focus group
discussion in that it allowed the researcher to compile a list of themes and
performance issues to use with the group. The students were encouraged to refer
back to their diaries during the group discussion so that they were able to draw from
specific examples to describe their actions in detail rather than talking in general
terms. In this case, then, the purpose of the diary project was two-fold:
• to record data
• to facilitate focus group discussion.
The diary data was more useful when set in a wider context of discussion.
5.2.4 Interviewing
Interviewing on a one-to-one basis is something that many librarians have done at
one time or another. It is important to realize, however, that it is a structured activity
and not just a chat. It can be seen as an extension of the meeting method, but by
speaking to only one person it is possible to probe in detail into the experiences and
reactions of respondents. For this reason, it is a good method for exploring sensitive
or confidential issues like library staff’s relations with users. Interviewing is a
skilled activity and because it is about the interaction between two people, well-
developed social skills are essential.
The interviewer must be good at getting people to talk. He or she should talk as
little as possible and concentrate on listening. It is important to note the issues
which the respondent raises and also those which are not raised. Unless the library
can afford to employ paid interviewers, which is rarely the case, interviewing will
probably be done by library staff. There is a danger that this might inhibit
respondents or influence what they are prepared to say. Conversations can be
recorded in notes or using a tape recorder. The latter method allows the interviewer
to concentrate on what the respondent is saying but the tapes have to be transcribed
or at least analyzed, which takes time.
Interviewing is a skill which takes time to learn and is most needed for conducting
unstructured interviews.
5.2.5 Observation
Observing what people are doing is a relatively little-used technique in libraries but
it has obvious attractions. It allows users to be observed in their natural setting and
it makes it possible to study people who are unwilling or unlikely to give accurate
reports on their activity. The non-curricular use of computers in university libraries
is a particularly good example of this. It also enables data to be analyzed in stages
or phases as an understanding of its meaning is gained.
1. Structured observation:
The observer looks for and records specific, predefined behaviours or events,
often using a checklist or recording schedule drawn up in advance.
2. Unstructured observation:
The observer records any behaviour or event which is relevant to the research
questions being studied. This is a much more open-ended approach and, as is
the case with most qualitative research, is especially useful in exploratory
research or where a situation is incompletely understood.
Observation, although on the face of it simple, is a highly skilled exercise, for the
observer must know enough about the situation to understand and interpret what is
going on. To return to the computer-use example, the observer can
note important activities like mouse and keyboarding skills, file management and
the expertise with which different software packages are being used but to do this
the observer must be highly computer literate and be able to recognize and critically
analyze and evaluate such activity.
The methodology has some disadvantages. People who are aware they are being
observed tend to change their behaviour, at least initially. There is an ethical
question as observation without consent can be interpreted as an intrusion into
privacy. It is not always possible to anticipate a spontaneous event and so be ready
to observe and understand it. Not all events lend themselves to observation; the
development of IT skills over time is a good example. Observation can be
very time-consuming and finally, the subjectivity of the observer must be taken into
account.
In making observations the researcher should focus only gradually on the research
questions to open up possibilities for insight. The observer should also record his
or her subjective reactions to the events observed. This helps to distance the
observer from them, an important way in which the questions of reliability and
validity can be addressed. Notes should be made as close in time as possible to the
events being recorded. Although in a library context, unobtrusive observation is
probably the norm the observer may also participate in the activities he or she is
observing. To be a successful participant observer it is necessary to be
approachable, friendly and receptive and to dress and behave appropriately.
5.3 Conclusion
Qualitative studies, in such forms as surveys, observation, interviews, and case
studies, can examine the complex factors in the social interactions inherent in
library settings. Because these are unique studies, however, findings cannot be
generalized to a larger group. The quantity of raw data gathered is likely to be large
and, because it is descriptive data, more difficult to categorize. Qualitative research,
generally, is a major research method in its own right and is useful for probing the
sensitive issues which questionnaires do not deal with so effectively. As library and
information science moves increasingly to the provision and use of electronic
services, qualitative methods may become more attractive, because so many poorly
understood issues are arising which cannot be addressed with the precision that
quantitative methods require. A major concern is to ensure that the measurement is
done without evaluator bias and that the study is objective. The ability to examine
‘real-world’ aspects of libraries provides insight into the multiple human factors
involved.
SELF-ASSESSMENT QUESTIONS
Activity:
1. Visit any public library, prepare a suggestion box, fix it at the entrance and
collect the suggestions of library users.
RECOMMENDED READING
Unit–6
CONTENTS
Page #
Introduction ....................................................................................................... 77
Objectives ......................................................................................................... 77
INTRODUCTION
The unit is designed to explain the importance of evaluation and the pitfalls and
progress of library evaluation projects. It will also explain the components of the
evaluation action plan and provide an understanding of systems analysis.
OBJECTIVES
After studying this unit, you will be able to explain the following:
6.1 Introduction
Much, perhaps most, evaluation is carried out on the fly as a more-or-less
emergency procedure. A problem arises when there is a perceived need for an
immediate solution, and some sort of attempt is made at evaluating the problem as
a means of deriving a solution. Sometimes the problem is imposed from outside via
political, social, or economic influences. The library community does not need to
look far for examples of societal pressures to examine problems defined by pressure
groups. An impressive number of very immediate evaluation needs centre around
the opportunity to take advantage of a funding opportunity with a fixed deadline.
Governing or governmental bodies are well known for their tendency to demand
quick responses to esoteric needs to evaluate specific functions.
Research is a very special focus for evaluation. Herbert Goldhor, former director of
the Library Research Center at the University of Illinois, frequently lamented that every
evaluation project was almost a research study. What research adds to evaluation is
the potential for extension to other environments. When applied appropriately,
evaluation techniques reveal useful information not only about the library for which
the evaluation project was conducted but also about other libraries with similar
evaluation needs.
The conclusion of a piece of survey/project work does not necessarily mark the end
of the exercise. The results should include recommendations for action, but several
factors may affect the outcomes of the evaluation study and may even lead to their
modification.
1. The problem may be insoluble for a variety of reasons. Resources may be
unavailable to tackle it. The study may show that a particular group of users
requires particular attention, but money may not be available to undertake the
work needed. It may be difficult to collect information on which to base
decision-making, as the Brent study showed. If the cooperation and support of
departments outside the library are needed, e.g., manual staff support for
extended opening hours, it may be difficult to proceed, but at least solid data
is available for making a case.
2. More questions may be raised than answered. This is often the case with short
overview surveys which may raise puzzling issues that require further
investigation. One group of users may be much less satisfied with the service
than others. It may be necessary to mount a further study of the group to find
out why.
3. Misconceptions are an unavoidable problem. Users’ comments on survey forms
and at meetings can show that they have inaccurate perceptions or have fallen
victim to rumours. Sometimes the numbers, e.g., all the students in a particular
course, can be substantial. The consequences, in the form of angry letters to local
newspapers, or articles in student magazines, can be serious and it may be
necessary to mount a public relations exercise to clarify the situation. Sometimes
these errors can be accepted at quite a high level. I have found evidence of quite
senior academics credulously accepting unfounded rumours.
4. Contradictory results sometimes occur. These may be the results of faulty
methodology or paying too much attention to pressure from vocal interest
groups. Building on experience over years, comparisons with similar
institutions, and using a regime of assessment which consists of a range of
methods are the best ways of avoiding these problems.
5. The results of the study may generate criticism. This may be criticisms of
methodologies or outcomes. Following proper procedures is the best way to
avoid criticisms of methodology. Outcomes may be criticized if they are
perceived as having deleterious implications. For example, the
recommendation to deskill a service, previously run by professional staff,
may not be well received by those concerned. This can result in delaying,
modifying, or even abandoning the proposals.
6. The information collected from a survey can go out of date quite quickly. The
current move to electronic information services is changing expectations
rapidly and this must be allowed for. It may be necessary to look at the same
problem areas regularly to identify new and changing needs.
7. As implied in (6) above the performance issues which inform evaluation must
be regularly reviewed to ensure that the changing needs of users are being
addressed.
Over the years, it should be possible to develop a systematic regime of evaluation
composed of questionnaire work, formal qualitative methods like focus groups and
other meetings, ancillary methods such as suggestion boxes and comparisons with
other departments within the institution. A range of performance measures will
emerge, some recurring, others needing consideration less frequently. These will
require evaluation and comparisons should be made with similar institutions and
existing data to get an idea of context. Research-based methods which seek to
identify performance issues objectively were developed in the 1990s.
If it proves possible to move through all five stages, then it might be possible to
consider benchmarking as a further qualitative step. Benchmarking has been variously
described as ‘a systematic approach to business improvement where best practice
is sought and implemented to improve a process beyond the benchmark performance’
and as ‘stealing shamelessly’. It is a technique developed in industry in which best
practice by one’s competitors is studied to improve one’s own performance.
Benchmarking can improve customer focus by showing how others satisfy their
customers, and in librarianship libraries can compare themselves with each other
and with relevant service industries with a view to all-round improvement in
performance. It is, however, difficult to compare highly qualitative services, such
as enquiry services, across different institutions. Measures or benchmarks to
use in comparing libraries have to be chosen and it can be difficult to find several
libraries for which the same set of measures will be appropriate. The best
benchmarking partners have to be chosen to give the exercise credibility.
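To make the idea of a benchmark gap concrete, here is a small illustrative
computation in Python; the libraries and their satisfaction scores are entirely
invented.

    # Hypothetical enquiry-desk satisfaction scores (% satisfied) for
    # three benchmarking partners; the benchmark is the best performer.
    scores = {"Library A": 87.0, "Library B": 92.5, "Library C": 78.4}

    benchmark = max(scores.values())
    for library, score in sorted(scores.items()):
        print(f"{library}: {score:.1f}% (gap to benchmark: {benchmark - score:.1f})")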
SCONUL has conducted six benchmarking pilots focusing on the following areas:
advice and enquiry desks, information skills training, counter services and library
environment. Although these pilots have been useful it has proved difficult to devise
generally acceptable measures and because of ethical and confidentiality issues
outcomes have not featured a great deal in the literature. Enquiry work has emerged as
a favourite theme for benchmarking, and this has cropped up in public libraries too.
6.2 Evaluation Action Plan
Building toward a culture of evaluation requires making evaluation a habit, making
it more difficult not to evaluate than to evaluate. One approach to nurturing that
habit is the development of an Evaluation Action Plan to guide the evaluation
process. An Evaluation Action Plan asks the following questions:
1. What’s the problem?
2. Why am I doing this?
3. What exactly do I want to know?
4. Does the answer already exist?
5. How do I find out?
6. Who’s involved?
7. What’s this going to cost?
8. What will I do with the data?
9. Where do I go from here?
Ranganathan’s five laws of library science are:
1. Books are for use.
2. Every reader his book.
3. Every book its reader.
4. Save the time of the reader.
5. The library is a growing organism.
Although Ranganathan concentrated on the book in his laws, he was quite aware of
the role to be played by other information resources but consciously chose to use
the term book in a generic sense.
Ranganathan’s five laws are an excellent example of systems thinking and carry
substantial implications for the need to evaluate libraries and their processes. If
information resources are indeed for use, then there is a clear need to evaluate their
use and determine whether they are being used at all and if they are being used
appropriately. The expression “every reader his book” implies the
need to evaluate the needs of individual patrons and patron groups and to design
library systems to meet those needs. Obversely, there is a need to proactively
identify those patrons who can make use of particular information resources and
develop mechanisms for getting those resources to the patrons who can best use
them. Saving the time of the patron is fundamental, although library systems have
not always been designed with the patron’s convenience in mind. It is the fifth law
that most clearly relates Ranganathan’s thinking to systems thinking. By describing
the library as a growing organism, Ranganathan recognized that the library is not
only a system but also a system with life. He also described the fate of an organism
that ceases to grow.
Long after Ranganathan first formulated his five laws, Maurice Line (1979)
presented an alternative view of the way things are. Line’s five laws are:
1. Books are for collecting.
2. Some readers have their books.
3. Some books have their readers.
4. Waste the time of the reader.
5. A library is a growing mausoleum.
There is a dark side to Line’s humour that lies very close to home. Any observant,
thoughtful, or simply aware librarian can think of many examples of situations and
policies that are more closely aligned with Line’s cynicism than with
Ranganathan’s idealism. Many honest librarians would have to admit that they
have, at one time or another, been active participants in supporting the reality
behind Line’s facetiousness.
1. Everything is connected to everything else. A public library such as the
hypothetical Anytown Public Library is directly linked to a wide range of other
government offices, educational institutions, social service agencies,
businesses, industries, and individual members of the public. Because these
components all work together as a system, the administration and staff of the
library are responsible for exploring and understanding those connections.
2. Everything has to go somewhere. If the library’s administration, based on
its evaluation, decides not to offer a particular service at a given level,
some other entity or agency will be the recipient of the accompanying demand
unmet by the library. If the library emphasizes a particular service at a
given level, some other entity or agency will experience a reduced
demand for that service. In a worst-case scenario, the library may be
marginalized by a decision to emphasize certain services and de-emphasize
others. Although no library can be all things to all people, it is essential to
understand that there is an intense need to evaluate the need and demand for
services and to evaluate their delivery.
3. There ain’t no such thing as a free lunch (TANSTAAFL). Around the turn
of the twentieth century, many bars and taverns advertised a “free lunch.” The
catch was that access to the free lunch was dependent on the purchase of
watered-down drinks. Library administrators and staff cannot and should not
expect to benefit from any externally provided benefit at no local cost. The
local cost of access to state-funded network services, for instance, may be a
reduction in community appreciation for the direct services of the library. In
some cases, expanded state funding for shared library resources may lead to
reduced funding for local library resources.
6.5 Conclusion
The most important potential outcome of a successful evaluation project is the
completion of at least one step on the way to creating a culture of evaluation. When
everything works as it should, when good results are rendered and positive action is taken,
when a positive attitude toward evaluation has been fostered, and when people see that
evaluation can make things better for them, the result may be an increased desire to
engage in evaluation for the good of the library. When that happens, a culture of
evaluation is truly in place and things will never be the same again.
SELF-ASSESSMENT QUESTIONS
1. Explain the evaluation project and its essential steps/components.
2. Discuss Ranganathan’s five laws in the scenario of library evaluation.
3. What is benchmarking and how does it help in the evaluation of library services?
Activity:
1. Develop a library evaluation project, keeping in view the action plan stages,
with the help of a tutor.
RECOMMENDED READING
Unit–7
CASE STUDIES
CONTENTS
Page #
Introduction ....................................................................................................... 87
Objectives ......................................................................................................... 87
INTRODUCTION
The unit is developed to educate students about case study methods of library
evaluation. It will also present some previously conducted case studies for better
understanding.
OBJECTIVES
After studying this unit, you will be able to explain the following:
7.1 Introduction
Throughout their careers librarians are asked to evaluate collections, services,
policies, expenditures, and other activities that affect the institution and its patrons,
using a mixture of quantitative and qualitative criteria. With the increase in the
types of formats available that contain information, it is essential to continue to
apply the same evaluation and selection criteria to all media. Librarianship has
always been concerned with evaluating collections. The literature includes articles
and books that describe evaluating collections under the subject headings collection
development, selection, and weeding. The criteria outlined in the literature are
almost always the same and still hold today.
Other methods are more concerned with qualitative analysis. Many of these are
grounded in an understanding of the context for evaluation. Does the collection
reflect and support the mission of the library? Public libraries have a very different
overall mission than academic and special libraries. The former serves the local
community’s reading and reference needs, and the reference and research collection
will be as in-depth as the size of the library and the makeup of the community
dictates. Academic libraries serve the needs of undergraduates, the more
specialized needs of graduate students, and the in-depth, subject-specific needs of
the research faculty, as well as carrying professional literature for the librarians and
other professional groups on campus. Special libraries serve the highly specialized
needs of their parent organizations.
Within libraries, different collections may serve different needs and so require
different perspectives in assessing materials. Fiction collections and subgenres
require a different evaluation knowledge base than nonfiction and evaluation of
non-print materials requires adapting basic evaluation criteria. For many years,
non-print library collections emphasized record albums, film strips, and 16mm
film. Now non-print formats include music on audiotape, videotape, and compact
disc; movies on videotape, DVDs, and laser discs; audiobooks, available in
complete or abridged versions, in fiction and nonfiction titles, and a variety of
formats designed for sighted or visually impaired listeners; and CD-ROMs, both
educational and recreational.
Although nonprint formats are not typically seen as suitable formats for reference
and research collections, libraries have traditionally collected other nonprint
material, such as photographs and microfilm, to complement and supplement the
reference and non-circulating collections. Audiovisual materials, microform
collections, and other nonprint resources can be evaluated using the same basic
criteria used for evaluating print materials. Particular attention must be given to
organizational factors such as arrangement, access, and equipment support, as well
as the durability and longevity of some media and the control mechanisms required
by multipart materials.
This unit looks at examples of interesting practices in public, academic and special
libraries. It considers survey work, charters and service level agreements and
examples of relevant research projects.
Examples include:
worrying ‘no’ response rate of 45%. Users were also asked about services that they
might like to use in the future such as access to PCs and the Internet. Responses
produced a clear age range split with the youngest being most enthusiastic about
such services and the oldest being least interested.
These issues are the staples of any large library, and the identification of new
problems shows the need for repeat surveys at regular intervals.
However, in response to the question ‘Have you made use of the library service
since you joined?’, only 25% replied ‘regularly’ while 65% replied ‘a few times’.
The remaining 10% had used the service only once or not at all. Non-borrower
types of use in the previous year were:
reference materials 18%
visited with children 17%
photocopier 12%
personal study 10%
chose books for others 7%
read newspapers/mags 6%
The use of outside consultants in some of these studies is interesting.
A list of priorities for future surveys might include:
Topics
• impact (quantitative measures)
• costing by function
• market penetration (lapsed/non-users)
• promotion of services
• book buying compared with borrowing.
Services
• information/enquiries
• audio/visual services
• electronic information sources.
Fiction stock fared less well, with typical ‘good’ ratings of between 51 and 82%.
Ratings for audio-visual stock were even poorer, ranging typically from 14 to 42%.
Information resources received good ratings, with typically between 51 and 86% of
respondents rating this service as good, but the computer catalogue and computer
reference services fared worse, perhaps because of high levels of non-use. The
problems with audio-visual and computer services match two of the survey
priorities for public libraries listed above: audio-visual services and electronic
information resources. Two fundamental issues stand out: the quality of staff and
the quality of stock. Interestingly, the views of the Scottish suburban general public
and those of North German academic library users (see below) are strikingly
similar. The questionnaire also asked users if they were aware of the Council’s
‘Let Us Know’ system, which allows them to make comments, suggestions and
complaints about the service provided. Only one branch produced an ‘awareness’
rating of more than 50%, an indication that, laudable as the aim of open
communication may be, it can be difficult to achieve.
Figure 1
Institute of Public Finance (IPF) Public Library User Survey (Plus)
This national standard for user surveys grew out of the Audit Commission’s
Citizens’ Charter exercise in 1992. There was a strong feeling that conventional
statistics were not adequate to assess how well libraries provided materials and
information to the public. Surveys asking users specific questions seemed to be the
answer.
The questionnaire (see Figure 2) contains ‘core’ questions. Additions to the core
questions can be made in collaboration with the IPF. There are several supplementary
questions which some authorities have added that can be used by others. Leeds Library
and Information Services conducted its first major user survey in December 1994,
based on IPF’s Public Libraries User Survey (Plus) (Pritchard 1995).
Figure 2
The availability of the survey resolved uncertainties about what questions to ask,
what sampling tables to use and how to collate the data. The documentation
provided covered all aspects of how to set up and run the survey. Apart from the
data itself, the fact that statistically valid data was collected was seen as a major
benefit. Currently,
92 public library authorities subscribe to the scheme and well over 200,000 surveys
are recorded on the IPF database. Members can compare their local results against
published national averages. A children’s survey was launched in 1998 following
extensive piloting and a PLUS subgroup has been formed to investigate community
surveys.
The core areas covered by the CIPFA PLUS questionnaire are: user activities in the
library; the number of books borrowed; a needs-fill question; user satisfaction
relating to several services; frequency of visits; sex, age and occupation of the
respondent; and postcode area of the respondent, a feature which makes it possible
to calculate the distance travelled to the library (Spiller 1998, p. 72).
The Library of the University College of Swansea has used the Van House
originals, independently of the SCONUL initiative, and Glasgow Caledonian
University Library has used the Van House general user satisfaction survey in an
increasingly modified form. Figure 3 is an example of a modified proforma,
derived from Van House by SCONUL and originating from the mythical Poppleton
Metropolitan University, beloved by Times Higher Education Supplement
readers.
Figure 3
As a result of various research initiatives, a modified version of the Van House user
satisfaction survey questionnaire was devised at Glasgow Caledonian University.
The performance issues in section one are derived from these initiatives; the list of
issues in section 1 of the original Van House questionnaire was not very relevant,
and that questionnaire proved difficult to analyse satisfactorily.
The survey has been conducted over five years (1995–1999 inclusive) and now
provides a longitudinal perspective on how the services have developed over that
time. Where a university as a whole regularly surveys its teaching, learning and
central services, the university library is surveyed, probably annually, as part of
this process. This allows the library to compare itself with other services provided
by the university. The practice is not widespread, but the University of Central
England in Birmingham is a good example. The University of Central England
maintains a Centre for Research into Quality, one of whose functions is to conduct
an annual university-wide student satisfaction survey. The 1998 annual report,
University of Central England (1998) covered central services like library and
computing services and word processing facilities, refectories and student services
as well as course organisation, teaching staff and teaching and learning. The survey
is based on a lengthy questionnaire which, in 1998, included the views of nearly
2000 respondents and provided more than a million items of information.
Figure 4
The section on the library extends over 16 pages and is more comprehensive than
many stand-alone in-house library surveys. As academic libraries are now major
providers of computing facilities the sections on word processing and computing
facilities are also relevant. The 1998 survey recorded high levels of user
satisfaction, especially with staff helpfulness. The annual surveys have been
ongoing since 1991 and several familiar issues have emerged over this period:
range of books; up-to-datedness of books; availability of recommended course
material; multiple copies of core books; a range of journals; opening hours;
availability of study places and noise levels. Perhaps predictably, availability of
recommended course material and multiple copies of core books are the main areas
of dissatisfaction. The Centre also undertakes specialist surveys, and this has
included the library. The Centre recognizes the need to close the feedback loop and
goes to considerable lengths to publicize the outcomes of its work.
There are advantages and some disadvantages to this method. One can be certain
that the survey has been carried out expertly, that the data is reliable and that
comparisons within the university are possible. However, the data collected is
limited, so there will still be a need for specialist surveys. The library, although it
benefits from the process, has no direct control over it. However, studies such as
this are increasingly helpful, partly because they include IT issues and partly
because they point to performance issues which are common to the library and
other services as well.
A European example
The library of the University of Munster in northwestern Germany is one of the
most active in Germany in evaluation and performance measurement. It surveyed
user satisfaction in 1982 and repeated the exercise in 1996 (Buch 1997). Perhaps
because of the relative infrequency of surveying, it was a substantial and
methodologically complex exercise. After the initial survey design, it was pre-
tested on 30 users. This raised problems with jargon, especially abbreviations, and
the questionnaire was modified. The completed questionnaire comprised a total of
52 questions in 19 different service areas.
The survey was carried out over one complete week in January, from 8 a.m. to 9 p.m.
each day and was administered by 4 librarians and 2 student assistants. The
questionnaire was administered to 8 subjects per hour who took, on average, 20
minutes to complete the form. This led the surveyors to conclude that the
questionnaire should have been shorter.
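At 8 respondents per hour over a 13-hour day for 7 days, this schedule implies a
ceiling of roughly 8 × 13 × 7 = 728 completed forms, assuming every hourly quota
was filled; the published account does not state the final count, so this figure is
only an indicative upper bound.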
Data preparation and analysis took until the following May. The analysis of the
quantitative data took approximately 130–140 hours and the analysis of the
qualitative data took another 50 hours, about 190 hours in total.
Among the results was a strong desire for longer opening hours which resulted in
extended Saturday opening. Overall user satisfaction was high although satisfaction
with the stock was lower. The most highly rated area was ‘helpfulness of staff’. The
surveyors were surprised to discover that the Internet, which appeared to be a
‘favourite toy’, was unfamiliar to 76% of users. Publicity for this service has since
been increased. The survey results were publicized by an exhibition and a press
conference. The labour costs of such a large survey are substantial and the time
taken to analyse qualitative comments is particularly noteworthy. The issues raised
will be familiar to many academic librarians outside Germany.
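The arithmetic behind such an exercise is worth making explicit. The following is a minimal sketch in Python, using only the figures reported above; the respondent ceiling is an upper bound, since not every hour will have yielded eight completed forms:

    # Back-of-envelope figures for the Munster survey described above.
    subjects_per_hour = 8
    hours_per_day = 13           # 8 a.m. to 9 p.m.
    days = 7                     # one complete week
    max_respondents = subjects_per_hour * hours_per_day * days
    analysis_hours = 140 + 50    # quantitative plus qualitative analysis
    print(f"Respondent ceiling: {max_respondents}")          # 728
    print(f"Analysis effort: about {analysis_hours} hours")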
One special library example used a questionnaire of 23 questions, a mixture of
closed and open, the latter giving respondents adequate opportunity to make
qualitative observations. A total of 149 questionnaires were sent out and 74 were
returned (49.6%). Statistical data was
generated using Excel. Satisfaction with the service given was very high, although
inevitably misconceptions surfaced. Perhaps not surprisingly in a special library,
journal circulation was the most controversial issue. Journals also figured
prominently in replies to the question on better services. Conclusions from the
questionnaire included the need for better library promotion and the impact of IT,
issues not confined to special libraries.
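The statistics involved in such a survey are straightforward and need not require a spreadsheet. The following minimal Python sketch uses the reported return figures together with hypothetical satisfaction ratings standing in for the completed questionnaires:

    # Response rate and a simple satisfaction tally.
    sent, returned = 149, 74
    # Hypothetical ratings: 1 = very dissatisfied ... 5 = very satisfied.
    ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
    satisfied = sum(1 for r in ratings if r >= 4)
    print(f"Response rate: {returned / sent:.1%}")
    print(f"Satisfied or very satisfied: {satisfied / len(ratings):.0%}")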
100
The British Ministry of Defence operates a comprehensive program of evaluation
which includes user survey reports for each library in the Ministry of Defence HQ
Information and Library Service. There is also a rolling program of six-monthly
surveys aimed at giving a satisfaction performance indicator. The aim is to achieve
90% satisfaction against three key performance indicators: ‘Speed’; ‘Information
provided’ and ‘Courteous and helpful’. To date, these targets have all been
achieved. Indeed, the ‘Courteous and helpful’ indicator regularly scores 100%.
Characteristically, the problem of getting customer feedback in a special library
means that the number of respondents is small, between 300 and 350. One of the
outcomes of the evaluation program was the Library Service Charter which
includes a commitment to monitoring and customer feedback.
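Checking survey scores against such targets is mechanical. A minimal Python sketch, with hypothetical six-monthly scores for the three indicators named above:

    # Compare hypothetical satisfaction scores against the 90% target.
    TARGET = 0.90
    scores = {
        "Speed": 0.93,
        "Information provided": 0.91,
        "Courteous and helpful": 1.00,
    }
    for indicator, score in scores.items():
        status = "target met" if score >= TARGET else "below target"
        print(f"{indicator}: {score:.0%} ({status})")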
When the British government's Citizen's Charter appeared in 1991, local authorities
were listed among the public services covered, so the implications for public
libraries were obvious. Higher education was not mentioned specifically but the
impact has, nevertheless, been substantial. The Citizen's Charter's first principle of
public service, Standards, establishes the link with local customer charters.
The challenge was swiftly taken up. By August 1992 at least 13 authorities had
published a library charter and some 14 others were working towards one. Although
these varied in length, style, organization and detail, they shared common concerns
about accessibility, appropriateness, quality and value for money. They offered a
mixture of commitments and pledges, some general, some specific, and some
supported by numeric standards. Few exhibited a link between market research and
the charter pledges (Library Association 1992).
By May 1995, 52 authorities had charters and a further 16 were preparing them.
Public library charters are usually attractively produced, sometimes in A5 leaflet
form and printed in two or more colours to attract attention. The City of
Westminster initially produced separate charters for libraries and archives but has
now abandoned these in favour of a single document entitled Service Standards and
Promises. This describes services simply and explains in fairly general terms what
standard of services users can expect. There is a promise to listen to comments,
respond to complaints and conduct surveys. There is also a short section explaining
how users can help the library.
In 1997 the Library Association’s Branch and Mobile Libraries Group published
its own specialized Charter for public mobile library services. It is based upon A
charter for public libraries and the Model statement of standards and includes such
specialized issues as stopping times, the need to review routes frequently and the
role of mobile libraries in community information provision. In higher education,
the position is rather different. Academic libraries are under less pressure to
produce charters and consequently, fewer have done so. However, there are several
influences, both direct and indirect. There are general higher education charters for
England and Scotland and the National Union of Students has developed a student
charter which, inter alia, states that students should have 'the right to effective
learning support'. Some universities have produced general charters. Liverpool John
Moores University produced the first of these in 1993. It is divided into specific
sections which include a short item about the library.
The overall statements and promises can affect the library even if it is not
mentioned specifically, e.g., provision of feedback to students, involving students
in the decision-making process, provision of a suitable learning environment and
the complaints procedure. Of the specific library charters, one of the most attractively
produced is that of the University of London Library. It is printed in two colours in
A5 leaflet form. It was produced in early 1994 and has been intentionally kept short
to give it a long shelf life, although changes in practice will probably require updating. It
outlines in six points what a reader has a right to expect from the service.
Detail is available in the library’s range of leaflets. It covers Service Delivery and
Customer Care, Quality (which includes a reference to surveys), Collections,
Information Technology, The Working Environment and Complaints Procedure. The
most detailed is that produced by Sheffield Hallam University Library which is a
detailed document extending over four sheets of A4. It is divided into eight sections:
Access, Accommodation, Materials, Information Services, Photocopying and Sales,
Audio Visual Services, Communicating with Students and Student Responsibilities. It
makes promises on specific issues e.g., responding to 75% of enquiries immediately
and 95% photocopier operational availability. What distinguishes higher education
charters, both general and library-specific, is that they tend to be contractual, in that
they specify the behaviours expected from students in return for the services promised.
Public library charters do not usually have a contractual element.
It is fair to say that there has been a good deal of cynicism about charters. They can
be seen as bland promises, merely rephrasing institutional policy or ‘weasel words’
which make impressive statements but do not promise anything measurable. They
can also be seen as a fad and, if they descend to specifics, can go out of date. They
can become an end in themselves, unrelated to the realities of the service, and they
can be difficult to get right. There is also concern as to whether they are legally
binding. To be successful, they should be the outcome of evaluation, offering
objectively deliverable promises, and they should be seen as part of a continuing
evaluation process. There is no doubt that the specific promises they often contain
on feedback and surveying have boosted the evaluation movement. If well designed
and kept up to date, charters can:
1) improve communications with users
2) demonstrate a commitment to quality
3) focus staff attention on performance issues.
Although charters have been viewed as a mid-90s fad, they have not gone away in
higher education. There seem to be two reasons for this:
1. The growth of student numbers in higher education has brought to university
many people from family backgrounds with no previous experience of higher
education, who have no realistic idea of what to expect from higher education
services. This has created a need for expectation management, and a charter
is a good way of meeting it.
2. The growth of off-campus and work-based learning results in irregular user
contact with the library. In such circumstances, laying out the ground rules is
a good idea.
Service level agreements are a related development. One such agreement is
intended to evolve as new services are provided. The areas it covers include
assessing user needs, opening hours, study environment, library resources,
interlibrary loans, information handling skills, information services and
photocopying.
Specific topics mentioned include complaints, noise, seat occupancy and shelf
tidying. Service level agreements have an important part to play in providing
yardsticks for evaluation and promoting service improvements and, like charters,
they need the involvement and support of both library staff and users in their design,
implementation and monitoring.
However, agreements are not contracts and although service level agreements
oblige library services to deliver a certain level of service to users, they put no
enforceable obligations on users. The library can only state that it cannot deliver on
specifics unless certain user obligations are first met, e.g., the library cannot promise
to have sufficient copies of a particular textbook available by a particular time
unless reading lists are delivered by a mutually agreed date.
The EQUINOX project has two main objectives. Firstly, it aims to further develop
existing international agreements on performance measures for libraries, by
expanding these to include performance measures for the electronic library
environment. The second aim is to develop and test an integrated quality
management and performance measurement tool for library managers.
The specific objectives of the project are:
• To provide software which will encourage all library managers to introduce
an appropriate level of quality management, without the constraints of
ISO 9000.
• To validate and test the pre-production prototype system in several libraries.
• To undertake large-scale demonstration trials in libraries across Europe.
• To undertake dissemination of the approach and model across Europe.
• To ensure that Europe retains its world leadership in this area.
7.8 Conclusion
The case study method is a learning technique in which the student is faced with a
particular problem presented in the case. The case study facilitates the exploration
of a real issue within a defined context, using a variety of data sources (Baxter et
al., 2008). In general terms, the case study analyzes a defined problem consisting
of a real situation and uses real information as a methodological tool. Case studies
are associated with the development of detailed information relating to a specific
business phenomenon, with phenomena across similar organizations or settings, or
with one specific case (person, organization, or setting). Case study methods may
draw on several methods to gather data, such as observation, experiments,
structured interviews, questionnaires, and/or documentary analysis. A case study
within a positivistic paradigm is accordingly guided by the tenets of a quantitative
methodology. The advantages of case studies are the case-specific detail they yield
and the use of multiple methods to gain detailed data on the case. Disadvantages
are associated with resource demands and (with field case studies) the inability to
control all variables systematically.
SELF-ASSESSMENT QUESTIONS
Activity:
1. With the help of a tutor, develop a case study to evaluate the Reference
Services of an academic library (University Library).
RECOMMENDED READING
2. Crist, M., Daub, P. and MacAdam, B. (1994). User studies: reality check and
future perfect, Wilson Library Bulletin, 68 (6), pp. 38–41.
4. Greguras, G. J., Robie, C., Schleicher, D. J. and Goff, M. (2003). A field study
of the effects of rating purpose on the quality of multisource ratings. Personnel
Psychology, 56, pp. 1–21.
Unit–8
FUTURE DEVELOPMENTS
CONTENTS
Page #
Introduction ....................................................................................................... 111
INTRODUCTION
The unit will present future developments in the area of library evaluation and key
challenges faced by libraries related to assessment.
OBJECTIVES
After studying this unit, you will be able to explain the following:
8.1 Introduction
Libraries face five key challenges related to assessment:
1. Gathering meaningful, purposeful, comparable data
2. Acquiring methodological guidance and the requisite skills to plan and
conduct assessments
3. Managing assessment data
4. Organizing assessment as a core activity
5. Interpreting library trend data in the larger environmental context of user
behaviours and constraints
Aggressive efforts are underway to satisfy all of these needs. For example, the
International Coalition of Library Consortia’s (ICOLC) work to standardize
vendor-supplied data is making headway. The Association of Research Libraries
(ARL) E-metrics and LIBQUAL+ efforts are standardizing new statistics,
performance measures, and research instruments. Collaboration with other national
organizations, including the National Center for Education Statistics (NCES) and
the National Information Standards Organization (NISO), shows promise for
coordinating standardized measures across all types of libraries. ARL’s foray into
assessing costs and learning and research outcomes could provide standards, tools,
and guidelines for these much-needed activities as well. Their plans to expand
LIBQUAL+ to assess digital library service quality and to link digital library
measures to institutional goals and objectives are likely to further enhance
standardization, instrumentation, and understanding of library performance in
relation to institutional outcomes. ARL serves as the central reporting mechanism and
generator of publicly available trend data for large research libraries. A similar
mechanism is needed to compile new measures and disseminate trend data for other
library cohort groups.
Examples of such composite measures include:
• Percentage of total library materials used in electronic format.
• Total reference activity = total in-person transactions + total telephone
transactions + total virtual (for example, e-mail, chat) transactions.
• Percentage of total reference activity conducted in virtual format.
• Total serials collection = total print journal titles + total e-journal titles.
• Percentage of total serials collection available in electronic format.
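These measures reduce to simple arithmetic once the component counts are in hand. A minimal Python sketch, with hypothetical counts:

    # Composite e-metrics of the kind listed above (all counts hypothetical).
    in_person, telephone, virtual = 4200, 1300, 900
    total_reference = in_person + telephone + virtual
    print(f"Total reference activity: {total_reference}")    # 6400
    print(f"Virtual share of reference activity: {virtual / total_reference:.1%}")

    print_titles, e_titles = 2500, 6100
    total_serials = print_titles + e_titles
    print(f"Total serials collection: {total_serials}")      # 8600
    print(f"Electronic share of serials: {e_titles / total_serials:.1%}")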
DLF's mission is to enable new research and scholarship of its members, students,
scholars, lifelong learners, and the general public by developing an international
network of digital libraries. DLF relies on collaboration, the expertise of its
members, and a nimble, flexible, organizational structure to fulfill its mission. To
achieve this mission, DLF:
• Supports professional development and networking of members,
• Promotes open digital library standards, software, interfaces, and best
practices,
• Leverages shared actions, resources, and infrastructures,
• Encourages the creation of digital collections that can be brought together and
made accessible across the globe,
• Works with public sector, educational, and private partners, and
• Secures and preserves the scholarly and cultural record.
“How-to” manuals and workshops are greatly needed in the area of user studies.
Although DLF libraries are conducting several user studies, many respondents
asked for assistance. Manuals and workshops developed by libraries for libraries
that cover the popular assessment methods (surveys, focus groups, and user
protocols) and the less well-known but powerful and cost-effective discount
usability testing methods (heuristic evaluations and paper prototypes and scenarios)
would go a long way toward providing such guidance. A helpful manual or
workshop would:
• Define the method.
• Describe its advantages and disadvantages.
• Provide instruction on how to develop the research instruments and gather
and analyze the data.
• Include sample research instruments proven successful in field testing.
• Include sample quantitative and qualitative results, along with how they were
interpreted, presented, and applied to realistic library concerns.
• Include sample budgets, timelines, and workflows.
Standard, field-tested research instruments for such things as OPAC user protocols
or focus groups to determine priority features and functionality for digital image
collections would enable comparisons across libraries and avoid the cost of
duplicated efforts in developing and testing the instruments. Similarly, budgets,
timelines, and workflows derived from real experience would reduce the cost of
trial-and-error efforts replicated at each institution.
The results of the DLF study also indicate that libraries would benefit from manuals
and workshops that provide instruction in the entire research process, from
conception through the implementation of the results, particularly if attention were
drawn to key decision points, potential pitfalls, and the skills needed at each step
of the process. Recommended procedures and tools for analyzing, interpreting, and
presenting quantitative and qualitative data would be helpful, as would guidance on
how to turn research findings into action plans. Many libraries have already learned
a great deal through trial and error and investments in training and professional
development. Synthesizing and packaging their knowledge and expertise in the
form of guidelines or best practices and disseminating it to the broader library
community could go a long way toward removing impediments to conducting user
studies and would increase the yield of studies conducted.
TLA presents a slightly different set of issues because the data are not all under the
control of the library. Through the efforts of ICOLC and ARL, progress is being
made in standardizing the data points to be delivered by vendors of database
resources. ARL’s forthcoming instruction manual on E-metrics will address
procedures for handling these vendor statistics. Similar work remains to be done
with OPAC and ILS vendors and vendors of full-text digital collections. Library-
managed usage statistics for their Web sites and local databases and digital
collections present a third source of TLA data. Use of different TLA software,
uncertainty or discrepancy in how the data points are defined and counted, and
needed analyses not supported by some of the software all complicate data gathering
and comparative analysis of the use of these different resources. Work must be done
to coordinate efforts on all these fronts to facilitate comparative assessments of
resources provided by the library, commercial vendors, and other information
service providers.
In the meantime, libraries could benefit from guidance on how to compile, interpret,
present, and use the TLA data they do have. For example, DLF libraries have taken
different approaches to compiling and presenting vendor data. A study of these
approaches and the costs and benefits of each approach would be instructive. Case
studies of additional research conducted to provide a context for interpreting and
using TLA data would likewise be informative. For example, what does the
increasing or decreasing number of queries of licensed databases mean? Is an
increase necessarily a good thing and a decrease necessarily a bad thing? Does a
decrease indicate a poor financial investment? Could a decrease in the number of
queries simply mean that users have become better searchers? What do low-use or
no-use Web pages mean? Poor Web site design? Or wasted resources producing
pages of information that no one needs? Libraries would benefit if those who have
gathered data to help answer these questions would share what they have learned.
The issue of compiling assessment data is related to managing the data and
generating trend lines over time. Libraries need a simplified way to record and
analyze input and output data on traditional and digital collections and services, as
well as an easy way to generate statistical reports and trend lines.
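As an illustration of the kind of report such a tool might produce, the following minimal Python sketch fits a least-squares trend line to hypothetical yearly query counts:

    # Fit a least-squares trend line to yearly usage counts (hypothetical data).
    years = [1997, 1998, 1999, 2000, 2001]
    queries = [51000, 56000, 60500, 64000, 69500]

    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(queries) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, queries))
             / sum((x - mean_x) ** 2 for x in years))
    print(f"Trend: roughly {slope:.0f} additional queries per year")   # about 4500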
8.3 DLF and Member Libraries
Several DLF libraries reported conducting needs assessments for library statistics
in their institutions, eliminating data-gathering practices that did not address
strategic concerns or were not required for internal or external audiences. They also
mentioned plans to develop a homegrown MIS that supports the data manipulations
they want to perform and provides the tools to generate the graphics they want to
present. Designing and developing an MIS could take years, not counting the effort
required to train staff how to use the system and secure their commitment to using
it. Only time will tell whether the benefits to individual libraries will exceed the
cost of creating these homegrown systems.
The fact that multiple libraries are engaged in this activity suggests a serious
common need. One wonders why a commercial library automation vendor has not
yet marketed a product that manages, analyzes, and graphically presents library
data. The local costs of gathering, compiling, analyzing, managing, and presenting
quantitative data in effective ways, not to mention the cost of training and
professional development required to accomplish these tasks, could exceed the cost
of purchasing a commercial library data management system, were such a system
available. The market for such a system would probably be large enough that a
vendor savvy enough to make it affordable could also make it profitable. Such a
system would reduce the burden of compiling and manipulating data, freeing
librarians to interpret and apply it effectively, and the resulting cost savings could
help offset the system's purchase. The specifications and
experiences of libraries engaged in creating their own MIS could be used to develop
specifications for the design of a commercial MIS. Building a consensus within the
profession for the specification and marketing it to library automation vendors
could yield the collaborative development of a useful, affordable system.
Admittedly, the success of such a system depends in part on the entry and
verification of correct data, but this issue could begin to resolve itself, given
standard data points and a system, designed by libraries for libraries, that saves
resources and contributes to strategic planning.
The results of the DLF study suggest that individually, libraries in many cases are
collecting data without really having the will, organizational capacity, or interest to
interpret and use the data effectively in library planning. Libraries have been slow
to standardize definitions and assessment methods, develop guidelines and best
practices, and provide the benchmarks necessary to compare the results of
assessments across institutions. These problems are no doubt related to the fact that
library use and library roles are in continuous transition. The development of skills
and methods cannot keep pace with the changing environment. The problems may
also be related to the internal organization of libraries. Comments from DLF
respondents indicate that the internal organization of many libraries does not
facilitate the gathering, analysis, management, and strategic use of assessment data.
The result is a kind of purposeless data collection that has little hope of serving as
a foundation for the development of guidelines, best practices, or benchmarks. The
profession could benefit from case studies of those libraries that have conducted
research efficiently and applied the results effectively. Understanding how these
institutions created a program of assessment (how they integrated assessment into
daily library operations, how they organized the effort, how they secured the
commitment of human and financial resources, and what human and financial
resources they committed) would be helpful to the many libraries currently taking
an ad hoc approach to assessment and struggling to organize their effort. Including
budgets and workflows for the assessment program would enhance the utility of
such case studies.
Efforts to enhance research skills, to conduct and use the results of assessments, to
compile and manage assessment data, and to organize assessment as a core library
activity all shed light on how libraries and library use are changing. What remains
to be known is why libraries and library use are changing. To date, speculation and
intuition have been employed to interpret known trends; however, careful
interpretation of the data requires knowledge of the larger context within which
libraries operate. Many DLF respondents expressed a need to know what
information students and faculty use, why they use this information, and what they
do or want to do when they need information or when they find information.
Respondents acknowledged that these behaviours, including the use of the library,
are constrained by changes on and beyond the campus, including the following:
• Changes in the habits, needs, and preferences of users; for example,
undergraduate students now turn to a Web search engine instead of the library
when they need information.
• Changes in the curriculum; for example, elimination of research papers or
other assignments that require library use, distance education courses, or the
use of course packs and course management software that bundle materials
that might otherwise have been found in the library.
• Changes in the technological infrastructure; for example, penetration and
ownership of personal networked computers, network bandwidth, or wireless
capabilities on university and college campuses enable users to enter the
networked world of information without going through pathways established
by the library.
• Use of competing information service providers; for example, Ask-A
services, Questia, Web sites such as LibrarySpot, or the Web in general.
In response to this widespread need to know, the Digital Library Federation,
selected library directors, and Outsell, Inc., have designed a study to examine the
information-seeking and usage behaviours of academic users. The study will survey
several thousand students and faculty in different disciplines and different types of
institutions to begin to understand how they perceive and use the broader
information landscape. The study will provide a framework for understanding how
academics find and use information (regardless of whether the information is
provided by libraries), examine changing patterns of use in relation to changing
environmental factors, identify gaps where user needs are not being met, and
develop baseline and trend data to help libraries with strategic planning and
resource allocation. The findings will help libraries focus their efforts on current
and emerging needs and expectations of academic users, evaluate their current
position in the information landscape, and plan their future collections, services,
and roles on campus based on an informed, rather than a speculative, understanding
of academic users and uses of information.
The next steps recommended based on the results of the DLF study are the
collaborative production and dissemination of the following:
E-metrics lite: a limited subset of digital library statistics and performance
measures to facilitate gathering baseline data and enable comparisons.
How-to manuals and workshops for
o conducting research in general, with special emphasis on planning and
commitment to resources
o conducting and using the results of surveys, focus groups, user
protocols, and discount usability studies, with special emphasis on field-
tested instruments, timelines, budgets, workflows, and requisite skills.
Case studies of
o the costs and benefits of different approaches to compiling, presenting,
interpreting, and using vendor TLA data in strategic planning.
o how institutions successfully organized assessment as a core library
activity.
A specification for the design and functionality of an MIS to capture
traditional and digital library data and generate composite measures,
trend data, and effective graphical presentations.
8.4 Conclusion
Libraries today are needy. Facing rampant need and rapid change, their ingenuity
and diligence are remarkable. Where no path has been charted, they carve a course.
Where no light shines, they strike a match. They articulate what they need to serve
users and their institutional mission, and if no one provides what they need, they
provide it themselves, ad hoc perhaps, but for the most part functional. In search of
high quality, they know when to settle for good enough: good-enough data, good-
enough research and sampling methods, good enough to be cost-effective, and good
enough to be beneficial to users. In the absence of standards, guidelines,
benchmarks, and adequate budgets, libraries work to uphold the core values of
personal service and equitable access in the digital environment. Collaboration and
dissemination may be the keys to current and future success.
SELF-ASSESSMENT QUESTIONS
Activity:
1. With the help of a tutor, develop and conduct a LIBQUAL survey of your
nearby Public Library.
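LIBQUAL+ scores each survey item on three nine-point scales: the minimum acceptable level of service, the desired level, and the level actually perceived. The analysis then rests on two gap scores, sketched minimally in Python below with hypothetical mean ratings for a single item:

    # LibQUAL-style gap scores for one survey item (hypothetical means, 1-9 scale).
    minimum, desired, perceived = 6.2, 8.1, 7.0
    adequacy_gap = perceived - minimum       # positive: service meets minimum expectations
    superiority_gap = perceived - desired    # usually negative: short of the desired level
    print(f"Service adequacy gap: {adequacy_gap:+.1f}")        # +0.8
    print(f"Service superiority gap: {superiority_gap:+.1f}")  # -1.1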
RECOMMENDED READING
Unit–9
PERFORMANCE MANAGEMENT
FOR THE ELECTRONIC LIBRARY
CONTENTS
Page #
Introduction ....................................................................................................... 123
INTRODUCTION
This unit will guide students through performance management and the key
challenges that have plagued performance measurement since its inception. It will
also discuss electronic library assessment, usage and users, and performance issues
and indicators.
OBJECTIVES
After studying this unit, you will be able to explain the following:
9.1 Introduction
Performance management is known as the “Achilles’ Heel” of human capital
management, and it is the most difficult HR system to implement in organizations.
Performance management is consistently one of the lowest, if not the lowest,
rated areas in employee satisfaction surveys. Yet performance management is the
key process through which work gets done. It’s how organizations communicate
expectations and drive behaviour to achieve important goals; it’s also how
organizations identify ineffective performers for development programs or other
personnel actions. There are genuine reasons why both managers and employees
have difficulties with performance management. Managers avoid performance
management activities, especially providing developmental feedback to employees,
because they don’t want to risk damaging relationships with the very individuals
they count on to get work done. Employees avoid performance management
activities, especially discussing their development needs with managers, because
they don’t want to jeopardize their pay or advancement. In addition, many
employees feel that their managers are unskilled at discussing their performance
and coaching them on how to improve. These attitudes, on the part of both
managers and employees, result in poor performance management processes that
simply don’t work well. Another problem is that many managers and employees
don’t understand the benefits of effective performance management. They often
view it as a paperwork drill required by human resources, where ratings need to be
submitted every year for record-keeping purposes – a necessary evil that warrants
the minimum investment of time. What many managers don’t realize is that
performance management is the most important tool they have for getting work
done. It’s essential for high-performing organizations, and one of their most
important responsibilities. Done correctly, performance management
communicates what’s important to the organization, drives employees to achieve
important goals, and implements the organization’s strategy.
On the other hand, done poorly, performance management has significant negative
consequences for organizations, managers, and employees. Managers who conduct
performance management ineffectively will not only fail to realize its benefits, but
they can damage relationships with or undermine the self-confidence of their
employees. If employees do not feel they are being treated fairly, they become de-
motivated, or worse, they may legally challenge the organization’s performance
management practices. This can result in serious problems that are expensive,
distracting, and damaging to an organization’s reputation and functioning.
Today’s performance management best practices are the result of ongoing efforts
to address two key challenges that have plagued performance measurement since
its inception:
1. What type of performance should be measured – abilities, skills, behaviours,
results?
2. How can we measure performance most reliably, accurately, and fairly?
To understand where we are today with performance management and why certain
approaches have become best practices, we need to understand how they evolved,
based on trial and error.
In discussing the electronic library, various terms have been used almost
interchangeably: the electronic library, the virtual library and the digital library.
The term Hybrid Library is used to denote a mixed collection of traditional paper
and electronic sources. At the most basic level, a library has been traditionally
thought of as a building with carefully selected and pre-determined resources in it.
Although the electronic library is not like this it does have some traditional
characteristics like being at least partly housed in a building and requiring the
support of professional staff although some of these will not be librarians. However,
the electronic library entails a movement away from the library as a place. The
Equinox project defines a library collection as ‘All information resources provided
by a library for its users. Comprises resources held locally and remote resources for
which access has been acquired, at least for a certain period.’ The definition offered
by the Equinox project (Equinox 1999) of electronic library services is ‘The
electronic documents and databases the library decides to make available in its
collections, plus the library OPAC and home page’.
The electronic library provides electronic access to datasets and images such as video clips which might be
used for educational purposes. The service is less dependent on a building and on
direct access than the traditional library. The library is the interface to electronic
data, providing remote access, including 24-hour access. Navigational aids and resources
are usually provided in the form of ‘hot linked’ web pages. Among the services the
electronic library might offer are the following:
• Access to electronic journals
• Word Processing packages
• Excel and other statistical packages
• PowerPoint demonstration software
• Links to local networks
• Internet
• Email
• Bibliographic software
• Digitized books and journals
• Electronic information databases
• OPACs
• Networked CD-ROMs on local area networks
• Full-text outputs via bibliographic searching
• Sets of lecture notes
• Web-based training packages.
A number of issues complicate the evaluation of these services:
Printing—Typically all computers are linked to a central printer. How is
queuing organized and what are the charging mechanisms? Printers usually
require at least intermittent staff intervention, and staffing support is a
quantifiable issue. Floppy disks can involve problems of following proper
procedures and may require advice from staff. Damaged disks are another problem.
No defined service period—Service periods can be intermittent and/or fall
outside standard opening hours.
Quality and reliability of Internet data—This is extremely variable, and
the librarian has no means of exercising any control over it.
Non-use—This is an extremely complex issue and involves such factors as
physical distance from the campus, access to a computer at home or work,
access to a network connection, licensing conditions of databases, IT skill
levels, technophobia as well as social class characteristics.
Changes over time—Longitudinal studies will be affected by changing and
hopefully improving skill levels, changes and hopefully improvements in
services, changing password authorizations etc.
Distributed resources—Clusters of computers in different places, perhaps
not even in the same building make supervision, observation and support
difficult.
Problems outside the library’s control—e.g., unreliable networks.
The service-orientated culture—Librarianship is increasingly a service and
evaluation-orientated culture, but librarians have to work increasingly with IT
personnel, not necessarily in a structured environment. If IT personnel mainly
concern themselves with technical issues, differences in service attitudes can
emerge.
PCs versus Macs—Dual platform services raise training issues for support staff.
The overall picture from these points is that there is a battery of qualitative issues
to be considered that count-based methods will fail to recognize and interpret.
This extremely concise list has been refined from an initial list of over fifty
indicators which shows how difficult it is to identify reliable performance indicators
which can be widely used. They are extremely practical and cover essential areas
but should be considered in light of the largely qualitative issues raised above.
The list does not include the length of the session, which might not, in any case, be
very meaningful and there is no real way of measuring success in use and the
qualitative level of work undertaken. The proposed performance indicators will be
supported by user surveys to gather qualitative information which will complement
the numeric nature of the performance indicator set.
One such survey used previously defined performance issues on which to base its
questionnaire questions. Some
conclusions from the study were as follows:
• The distinctive mission of the Library’s Electronic Information Floor (EIF)
was not clear to users, who simply viewed it as a collection of computers
located in the library.
• Much of the use was unsophisticated and centred on email, the Internet and
word processing. Electronic information services were less used.
• There was a good deal of non-curricular use, centring on the use of email and
the Internet for recreational purposes.
• Levels of IT skills were low, especially among non-traditional students.
• Much of the learning was from other students and not from Library or other
staff.
The study also highlighted general issues requiring further study. Users did not
appear to distinguish between electronic services generally like email and word
processing packages and specific electronic information services like Science
Citation Index. They saw the matter more in terms of ‘things you can do on a
computer’. A follow-up study undertaken in Spring 1999 showed that only about
15% of computer use was devoted to electronic information services.
These findings have been confirmed by an elaborate study at Cornell University
which used a combination of different techniques such as observation, semi-
structured interviews, a questionnaire and focus groups. This found a wide
ignorance of the electronic sources available and how they are accessed. Staff
typically only used two or three databases and none of the students used the library-
provided web gateway to access databases although they did use internet search
engines to locate information for coursework. Staff and students both wanted swift
access to relevant material with minimal investment in learning and searching time.
The overall picture is of unsophisticated use. Whether this will change over time is
one of the biggest issues in the evaluation of the electronic library.
9.11 Conclusion
Performance management is consistently one of the lowest, if not the lowest, rated
areas in employee satisfaction surveys. Yet, performance management is the key
process through which work gets done. It’s how organizations communicate
expectations and drive behaviour to achieve important goals; it’s also how
organizations identify ineffective performers for development programs or other
personnel actions.
The electronic library redefines the concept of the user and introduces the idea of
the ‘virtual visitor’ or user. The user is no longer someone who ‘comes in’ and
observes set opening hours. There is no defined service period. Users may be
accessing the electronic library remotely from home or work and may be seen only
infrequently by librarians. Skill levels of users are very variable and may not
necessarily be defined by traditional stakeholder groups.
SELF-ASSESSMENT QUESTIONS
2. Define an electronic library and what it does. Discuss its usage and users with
examples.
Activity:
1. Visit the HEC website and write an evaluative note on the HEC digital library
with the help of a tutor.
RECOMMENDED READING
_____[ ]_____