SAQA - 10313 - Learner Guide


FURTHER EDUCATION AND TRAINING CERTIFICATE: INFORMATION

TECHNOLOGY: SYSTEMS DEVELOPMENT

ID 78965 LEVEL 4 – CREDITS 165


LEARNER GUIDE
SAQA: 10313
COMPLY WITH SERVICE LEVELS AS SET OUT IN A
CONTACT CENTRE OPERATION

1|Page
Learner Information:
Details Please Complete this Section
Name & Surname:
Organisation:
Unit/Dept:
Facilitator Name:
Date Started:
Date of Completion:

Copyright
All rights reserved. The copyright of this document, its previous editions and any annexures
thereto, is protected and expressly reserved. No part of this document may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording or otherwise, without the prior written permission of the
copyright holder.

Key to Icons
The following icons may be used in this Learner Guide to indicate specific functions:

This icon means that other books are available for further information on
a particular topic/subject.

Books

This icon refers to any examples, handouts, checklists, etc.

References

This icon represents important information related to a specific topic or
section of the guide.

Important

This icon helps you to prepare for the learning to follow, or assists you
in demonstrating understanding of module content. It shows transference of
knowledge and skill.
Activities

This icon represents any exercise to be completed on a specific topic at
home by you or in a group.
Exercises
An important aspect of the assessment process is proof of competence.
This can be achieved by observation, or a portfolio of evidence can be
submitted in this regard.
Tasks/Projects

An important aspect of learning is through workplace experience.
Activities with this icon can only be completed once a learner is in the
workplace.
Workplace Activities
This icon indicates practical tips you can adopt in the future.

Tips

This icon represents important notes you must remember as part of the
learning process.

Notes

Learner Guide Introduction

About the Learner This Learner Guide provides a comprehensive overview of the unit
Guide… standard COMPLY WITH SERVICE LEVELS AS SET OUT IN A CONTACT CENTRE
OPERATION, and forms part of a series of Learner Guides that have been
developed for the FURTHER EDUCATION AND TRAINING CERTIFICATE: INFORMATION
TECHNOLOGY: SYSTEMS DEVELOPMENT, ID 78965, LEVEL 4 – 165 CREDITS. The
series of Learner Guides is presented in a modular format and is designed
to improve the skills and knowledge of learners, thus enabling them to
complete specific tasks effectively and efficiently. Learners are required
to attend training workshops as a group or as specified by their
organization. These workshops are presented in modules, and conducted by a
qualified facilitator.
Purpose The purpose of this Learner Guide is to provide learners with the
necessary knowledge related to COMPLY WITH SERVICE LEVELS AS SET
OUT IN A CONTACT CENTRE OPERATION.
Outcomes People credited with this unit standard are able to:
• Demonstrate an understanding of company specific service levels.
• Meet and maintain service levels.
Assessment Criteria The only way to establish whether a learner is competent and has
accomplished the specific outcomes is through an assessment process.
Assessment involves collecting and interpreting evidence about the
learner’s ability to perform a task. This guide may include assessments in
the form of activities, assignments, tasks or projects, as well as
workplace practical tasks. Learners are required to perform tasks on the
job to collect sufficient and appropriate evidence for their portfolio of
evidence, including proof, signed by their supervisor, that the tasks were
performed successfully.
To qualify To qualify and receive credits towards the learning programme, a
registered assessor will conduct an evaluation and assessment of the
learner's portfolio of evidence and competency.
Range of Learning This describes the situation and circumstances in which competence
must be demonstrated, and the parameters within which learners operate.
Responsibility The responsibility for learning rests with the learner, so:
• Be proactive and ask questions,
• Seek assistance and help from your facilitators, if required.

Learning Unit 1: Comply with service levels as set out in a Contact Centre Operation
UNIT STANDARD NUMBER : 10313
LEVEL ON THE NQF : 4
CREDITS : 10
FIELD : Business, Commerce and Management Studies
SUB FIELD : Marketing

PURPOSE: This unit standard forms part of the qualification National Certificate in Contact
Centre Operations, NQF Level 4. Learners working towards this unit standard will be learning
towards the full qualification, or will be working within a Contact Centre environment,
where the acquisition of competence against this standard will add value to the learner's job.
This unit standard is intended to enhance the provision of intermediate-level service within
the Contact Centre industry.
The qualifying learner is capable of:
• Demonstrating an understanding of company specific service levels.
• Meeting and maintaining service levels.

LEARNING ASSUMED TO BE IN PLACE:


Learners accessing this unit standard or qualification will have demonstrated competency against
unit standards in Contact Centres at NQF Level 2 or equivalent.
Learners are expected to have demonstrated competency in language, numeracy, literacy and
communication at NQF Level 4 or equivalent.

SESSION 1
Demonstrate an understanding of company specific service
levels.
Learning Outcomes
• All relevant service levels are explained.
• The purpose of service levels is described and explained.
• The requirements of all relevant service levels are listed, described and
explained.

All relevant service levels are explained.


Working in a Call Centre environment requires a fairly high level of computer literacy,
which is normally an entry requirement when applying for a Call Centre Agent's position.
Introduction to Service Level Agreements
Your Call Centre may have several Service Level Agreements in place for a variety of
services, for example management, information management, IT systems, HR
management or development.
Self Reflection: SLA’s
• Do you know what types of Service Level Agreements your organization has in
place?
• Who manages the Service Level Agreements – and how?

Definition of a Service Level Agreement (SLA)


A service level agreement (frequently abbreviated as SLA) is a part of a service
contract where the level of service is formally defined.
In practice, the term SLA is sometimes used to refer to the contracted delivery time (of
the service) or performance. As an example, internet service providers will commonly
include service level agreements within the terms of their contracts with customers, as a
way to signify to their customers that their service may go down from time to time, and
that they must accept such breaks in service as a (non-refundable) possibility.
A service level agreement (SLA) is a negotiated agreement between two parties where
one is the customer and the other is the service provider. This can be a legally binding
formal or informal "contract" (see internal department relationships). Contracts between
the service provider and other third parties are often (incorrectly) called SLAs — as the
level of service has been set by the (principal) customer, there can be no "agreement"
between third parties (these agreements are simply a "contract").
Operating Level Agreements or OLA(s), however, may be used by internal groups to
support SLA(s).
The purpose of service levels is described and explained.
Purpose of Service Level Agreements
The SLA records a common understanding about services, priorities, responsibilities,
guarantees, and warranties. Each area of service scope should have the "level of
service" defined.
The SLA may specify the levels of availability, serviceability, performance, operation, or
other attributes of the service, such as billing.
Note:
The "level of service" can also be specified as "target" and "minimum," which allows
customers to be informed what to expect (the minimum), whilst providing a measurable
(average) target value that shows the level of organization performance.
In some contracts, penalties may be agreed upon in the case of noncompliance with the
SLA (but see "internal" customers below).
It is important to note that the "agreement" relates to the services the customer
receives, and not how the service provider delivers that service.
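The "target" and "minimum" levels described in the note above can be sketched as a simple classification check. The 95% minimum and 99% target figures below are illustrative assumptions, not values from this guide:

```python
def service_level_status(measured, minimum, target):
    """Classify a measured service level against the contractual
    minimum and the (average) target value.

    Returns 'breach' below the minimum, 'warning' between minimum
    and target, and 'met' at or above the target."""
    if measured < minimum:
        return "breach"
    if measured < target:
        return "warning"
    return "met"

# Illustrative figures: 95% minimum availability, 99% target.
print(service_level_status(94.2, 95.0, 99.0))  # breach
print(service_level_status(97.5, 95.0, 99.0))  # warning
print(service_level_status(99.3, 95.0, 99.0))  # met
```

The "warning" band is where the provider still complies with the contract but falls short of the performance the customer was led to expect.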
Output Based SLA
Service-level agreements are, by their nature, "output" based — the result of the service
as received by the customer is the subject of the "agreement."
The (expert) service provider can demonstrate their value by organizing themselves
with ingenuity, capability, and knowledge to deliver the service required, perhaps in an
innovative way.

Input Based SLA
Organizations can also specify the way the service is to be delivered, through a
specification (a service-level specification) and using subordinate "objectives" other
than those related to the level of service. This type of agreement is known as an "input"
SLA.
This latter type of requirement is becoming obsolete as organizations become more
demanding and shift the delivery-methodology risk onto the service provider.

The requirements of all relevant service levels are listed, described and explained.
Structure of Service Level Agreements
SLAs commonly include segments to address: a definition of services, performance
measurement, problem management, customer duties, warranties, disaster recovery,
and termination of agreement.
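As a rough sketch, the segments listed above can serve as the skeleton of an SLA document. The section names and one-line summaries below are an illustrative assumption, not a prescribed format:

```python
# Skeleton of an SLA built from the segments listed above.
sla_sections = {
    "definition_of_services": "Scope and description of each service covered",
    "performance_measurement": "Metrics, targets and reporting frequency",
    "problem_management": "Incident categories, escalation and resolution paths",
    "customer_duties": "Obligations the customer must meet",
    "warranties": "Guarantees and remedies offered by the provider",
    "disaster_recovery": "Backup, contingency and recovery commitments",
    "termination_of_agreement": "Exit conditions and transition arrangements",
}

for section, summary in sla_sections.items():
    print(f"{section}: {summary}")
```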
Service-level agreements (SLAs) are critical for the success of any outsourcing initiative
as they set expectations for both parties – the outsourcer and the customer.
The following outlines the type of information contained in a Service Level Agreement:
• Operating days and hours
• Definition of work
• Processes and procedures
• Agent quality
• Agent coaching
• Agent training
• Escalation procedures
• Technology
• Uptime requirements and performance
• Backup and contingency
• Reporting
• Transaction handling
• Security

Critical Note:
The best service-level agreements are very detailed and address every aspect of your
relationship with your outsourcer, including rewards and penalties for good and bad
performance. They also include how to handle transitions when relationships end.
Service-level agreements (SLAs) should include commitments for response, escalation
and resolution time whenever possible, and should break down the different types of
issues.
Vendors often have categories predefined, such as major and minor outages.
Some companies also break down internal issues to troubles (something isn't working)
versus service (i.e., a new feature or capability, or a change).
Types of Service Level Agreements
You might have internal elements of your enterprise for which you need to guarantee
service, as well as third party (outsourced) providers that you depend on to provide
levels of service to your customer (external).
Critical Note:
You must ensure that internal objectives can be met, from which you offer external
guarantees to your customers.
This reliance of external SLAs on internal SLAs (which in turn might be dependent on
outsourced SLAs) is the key to delivering true end-to-end service level agreements.
It is important to distinguish between different types of SLAs – three types of SLAs are
defined below:
External SLA
Tracks services that you provide to your external customers. Reports are available to
your customers, showing levels of service that are being provided. In this type of SLA,
you are considered to be the provider of services for your external customer.
Internal SLA
Tracks the internal operation of your computing infrastructure. Reports generated are
for internal use only. In this SLA, you are considered to be the provider, while your
customer can also be your own organization or another internal group ultimately
responsible for providing services to an external customer. Use an internal SLA to
monitor your own internal operations, enabling you to provide services to your external
customers on a more reliable basis.
Outsourced SLA
Tracks services provided to you by a third party. For this type of SLA, you are considered
to be the customer, not the provider. You might want to define an outsourced SLA to
monitor critical services that are provided to your organization, core services for your
environment that you use to provide your services to your external customers.
Tip:
While SLAs can be created to support one of these types, SLAs are increasingly
becoming more oriented toward end-to-end and structured agreements, with a single
SLA made up of internal, external, and outsourced layers.
Company Specific SLA’s
Your Facilitator will lead a discussion on the following topics:
• Internal SLA’s in place in Club Leisure
• External SLA’s in place in Club Leisure
• Outsourced SLA’s in place in Club Leisure
For example, consider the environment depicted in the Graphic presentation below

Case Study: Showing the tiered nature of internal, external, and outsourced SLAs.
• Company Y provides a Web hosting service for Company Z that includes a point-of-
presence (represented by the circle), Web servers (A, B, and C), and a back-end database.
• The database is located at a remote site (where coordinated backup can occur) and is
accessed through a backbone network provided by Company X.
• An external SLA is depicted that represents Company Z's access to the entire Web hosting
service.
• Within Company Y, there are internal SLAs to track network connectivity between the point-
of-presence and the Web servers, and to track the availability of the back-end database.
• Also pictured is the SLA provided by Company X, that assures proper operation to the
backbone network (for which Company Y is a consumer).
• The SLA drawn as a dotted line tracks Company Y's view of the backbone network, and
serves as a way of checking the consumed SLA from Company X.
• This is the outsourced SLA.

Group Activity: SLA Indicators


In your group, discuss the following and provide feedback to the group:
1. Draw up a Mind Map of the product and specific industry knowledge that Call Centre Agents
must have in Club Leisure. The Mind Map must show main cluster areas, which must be
translatable into performance indicators and training needs.

2. From the Mind Map drafted above, identify at least 4 specific service level areas which will be
present in the internal SLA’s with Contact Centre Operators.
3. From the Mind Map drafted above, identify at least 4 key specific service level areas which
will be present in the external SLA’s with your company’s IT Provider.
4. Identify if any Outsourced Service Level Agreements would be relevant – if so, what would the
SLA service level indicators be?

Implementation processes are monitored to ensure compliance.


Monitoring Service Level Agreements
Because of their importance to business operations, companies manage sourcing
arrangements through complex contracts that contain detailed statements of work
(SOW) describing the services and deliverables to be provided, and SLAs that use
metrics to describe the desired performance standards and a framework for monitoring
the ongoing delivery of service.
When chosen wisely and implemented / monitored correctly, service level metrics are
an invaluable governance tool.
They can provide:

• Precise delivery standards for service attributes such as quality, responsiveness, and
efficiency
• An objective means for determining whether ongoing performance meets
expectations, and a basis for triggering rewards or penalties based on that
performance.
• Valuable trend and operational data that enables the rapid identification and
correction of issues
• A foundation for making informed adjustments in service delivery to meet changing
business requirements.

Tip:

Unfortunately, service level metrics rarely deliver the intended benefits listed above.
Poorly selected or constructed service metrics can actually motivate behaviours that
are detrimental to the success of the sourcing arrangement and its ability to deliver the
desired business results.
Why Service Metrics Fail
Despite their importance, service level metrics are often added as an afterthought
when negotiating a service level agreement.
1. Wrong Metrics
Companies enter into service level agreements for one reason – to further one or more
business objectives. If the goal is to streamline operations, then the service metric should
measure the service's improvement to the company's operations.

Note:
Typical mistakes made in choosing metrics:
• Going for ease of measurement rather than fit to the business objective
• Not considering the collection and analysis effort
• Choosing metrics that do not provide actionable information – if a metric does not
clearly tell you what should be done to fix the problem, it is useless
• Measuring attributes outside the service provider's control
• Picking a metric that is not clearly defined, where the methods of collecting the
metric data are open to interpretation
2. Wrong Target Setting
A service level agreement (SLA) normally contains both metrics and targets. For
example, a Call Centre metric may be “calls per rep per hour” and the target may be
set to 15. The service provider is judged (and rewarded / penalized) by its ability to
meet the target. Often companies set targets at what is desirable rather than what is
realistically achievable, or set targets too low to achieve business goals.
3. Insufficient Metrics to support sound decision making
Simplicity is a valid objective when choosing metrics for an SLA, and is often applied to
the point of listing only a few key indicators. Though such metrics may be useful, they
may not supply the entire picture or assist in troubleshooting when things go
wrong.
4. Improper set-up and infrastructure to support metric usage
Like any other tool, metrics require an investment of time, resources, and education to be
effective. SLA reporting is often seen as a burdensome overhead activity that produces
reams of number-filled reports that lie unread in a cubicle. Someone in the organization
must be responsible (and held accountable) for managing the vendor's performance
against the SLA's terms.
Tip:
Planning for and implementing metrics collection, analysis and reporting processes is
essential – and training of the Business managers (and Directors) in reading and
interpreting the metric data is just as essential.

5. Misused penalties and incentives


Performance penalties and rewards can be a powerful tool if used correctly, but can
poison the relationship if used incorrectly, even encouraging the wrong behaviours.
Metrics must be firmly aligned to business objectives, and penalties / rewards must be
meaningful contributors to those objectives.

Individual Activity: Using Metrics in Management


On your own, consider the following questions and provide feedback to the group during a
general discussion.
1. How do Service Level Agreements contribute to the management of staff and outsourced
partners?
2. How is the development of metrics linked to your organizational business strategy and goals?
3. How can the correct development of metrics support business or training needs analysis?

Steps for creating better Service Metrics


Choosing the right service metrics, creating effective service level agreements and
managing services using those agreements is critical in the Contact Centre
environment.
1. Start from the Business Objectives
• List the major business objectives.
• For each objective, list how the service contributes to the objective
• Next, consider the attributes that assess each contribution.

Example Box:

One objective in outsourcing the support of a corporate web-site may be to attract
more prospective buyers to the business.
The outsourcing engagement would contribute to this objective by developing an
attractive web-site that encourages more visitors, promotes the company's products
and services, and captures contact information for sales follow-up.
These contributions can be measured by the number of people who visit the web-site,
the views of each product's pages, and the capture of visitor information through
sign-ups.
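The measurements in this example can be sketched from a simple page-view log. The record layout and values below are illustrative assumptions:

```python
from collections import Counter

# Each record is one page view: (visitor_id, page, signed_up)
visits = [
    ("v1", "/products/widget", False),
    ("v1", "/signup", True),
    ("v2", "/products/widget", False),
    ("v3", "/products/gadget", False),
    ("v3", "/signup", True),
]

# Number of people who visit the web-site
unique_visitors = len({visitor for visitor, _, _ in visits})

# Access to each product's pages
product_views = Counter(page for _, page, _ in visits
                        if page.startswith("/products/"))

# Captured visitor information (sign-ups)
signups = sum(1 for _, _, signed_up in visits if signed_up)

print(unique_visitors, product_views["/products/widget"], signups)  # 3 2 2
```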

2. Turn the Objectives into Metrics


To turn the attributes determined above into metrics, consider the following:
• Are the attributes within the power of the service provider to control or affect?
• If the attribute is not entirely in the control of the service provider, can it be
supplemented by another metric that isolates the vendor’s responsibilities?
• Would the metric data from those attributes provide actionable insights?
• Consider the behaviour that would be motivated by the metric. If the vendor
optimizes performance to maximize this metric, does it improve business
performance?
• What would be the means of collecting and analysing the metric data?

Continuing the Example:


The bottom-line metric may be the number of new buying prospects per month.
However, this number is only partially within the control of the service provider. The other
attributes could be measured by the number of unique individuals that visit the site, the
number of page views for company products, and the number of on-line sign-ups for
demos and downloads.
Each attribute is actionable – if, for example, the number of product page views drops,
the company can see it is time for new content or better promotion.
3. Add Operational Metrics
Operational metrics fall into 4 categories:
• Volume
• Responsiveness
• Quality
• Efficiency
Continuing the Example:
Using demo and download sign-ups, the company wants to know the number of sign-ups
per time period (volume), the time needed to pass these prospects to sales
(responsiveness), the type of person signing up (quality), and the cost per sign-up
delivered (efficiency).

4. Set reasonable performance targets


Each metric should have its own performance targets in the SLA, and these must be set
realistically, based on actual history.
Continuing the Example:
The company wants 1000 download sign-ups per month; these sign-ups must reach the
sales division within 15 minutes of occurrence; at least 60% of the sign-ups must be
business people; and the cost must not exceed R15-00 per sign-up.
These targets are actionable, and set very clear expectations for the service provider.
Likewise, incentives or penalties can be applied if performance targets are exceeded
or missed.
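The four targets in this example (volume, responsiveness, quality, and efficiency) can be sketched as a single compliance check. The function below mirrors the example's figures but is an illustration, not a real SLA clause:

```python
def evaluate_signup_targets(total_signups, avg_minutes_to_sales,
                            business_signups, total_cost_rand):
    """Check one month's figures against the example's four targets."""
    return {
        "volume": total_signups >= 1000,                       # 1000 sign-ups/month
        "responsiveness": avg_minutes_to_sales <= 15,          # to sales in 15 min
        "quality": business_signups / total_signups >= 0.60,   # 60% business people
        "efficiency": total_cost_rand / total_signups <= 15.0, # <= R15 per sign-up
    }

# A month with 1200 sign-ups, a 12-minute average hand-off to sales,
# 800 business sign-ups, and R16 500 total cost meets all four targets.
print(evaluate_signup_targets(1200, 12.0, 800, 16_500))
```

Returning a pass/fail flag per target, rather than one overall verdict, keeps the result actionable: the provider can see exactly which commitment slipped.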
5. Create a metrics definition document.
The metrics definition document accompanies the service level agreement and
describes each metric in detail. It describes the intent of the metric (why it was chosen),
how the metric is measured, and how the metric is interpreted. The goal is that both
parties capture, analyze and act upon the metric in the same way, use the same tools
and analysis methods, and have clarity of action when metric data changes.
For example, is a spike in the data (up or down) actioned when it happens, or is the data
tracked over 2 – 3 months before action is required?
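The second approach, acting only when the data drifts from its recent history rather than on every spike, can be sketched with a rolling baseline. The three-month window and 20% threshold below are illustrative assumptions:

```python
def flag_drift(monthly_values, window=3, threshold=0.20):
    """Flag each month whose value deviates more than `threshold`
    from the mean of the preceding `window` months."""
    flags = []
    for i in range(window, len(monthly_values)):
        baseline = sum(monthly_values[i - window:i]) / window
        deviation = abs(monthly_values[i] - baseline) / baseline
        flags.append((i, deviation > threshold))
    return flags

# A one-month spike (130 against a baseline of 100) is flagged;
# the following month, back near the baseline, is not.
print(flag_drift([100, 102, 98, 130, 101]))  # [(3, True), (4, False)]
```

Whichever convention the parties choose, it belongs in the metrics definition document so both sides act on the same rule.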

6. Build the contract to facilitate changes in the SLA


Business conditions change, service needs and parameters change, and the SLA must
be open to change where required.
7. Match SLAs with separate customer satisfaction surveys.
Performing separate customer satisfaction surveys of a given service’s internal
customers is a critical double check of both the vendor’s performance and the quality
of the SLA and its metrics. If the SLA meets or exceeds performance targets, but
customer satisfaction is low (or vice versa), then the SLA is using the wrong metrics.
Mismatch between customer satisfaction data and SLA data is a clear indicator that
review is required!

SESSION 2
Meet and maintain service levels.
Learning Outcomes
• Relevant company specific levels are implemented.
• Implementation processes are monitored to ensure compliance.
• Service level timeframes and targets are consistently met as per company specific
requirements.
• Potential constraints in meeting and maintaining service levels are identified
and evaluated.

Company Specific Service Levels


Common Metrics
Service-level agreements can contain numerous service performance metrics with
corresponding service level objectives. A common case in IT Service Management is a
call centre or service desk.
Metrics commonly agreed to in these cases include:
• ABA (Abandonment Rate): Percentage of calls abandoned while waiting to be
answered.
• ASA (Average Speed to Answer): Average time (usually in seconds) it takes for a call
to be answered by the service desk.
• TSF (Time Service Factor): Percentage of calls answered within a definite timeframe,
e.g., 80% in 20 seconds.
• FCR (First Call Resolution): Percentage of incoming calls that can be resolved
without the use of a call-back or without having the caller call back the helpdesk to
finish resolving the case.
• TAT (Turn Around Time): Time taken to complete a certain task.
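These definitions can be sketched directly from a list of call records. The record layout is an illustrative assumption; note also that whether abandoned calls count in the TSF denominator varies by contract (the sketch below counts all offered calls):

```python
# Each call: (answered, wait_seconds, resolved_on_first_call)
calls = [
    (True, 12, True),
    (True, 25, False),
    (False, 40, False),  # abandoned while waiting
    (True, 8, True),
    (True, 18, True),
]

offered = len(calls)
answered = [c for c in calls if c[0]]

aba = 100 * (offered - len(answered)) / offered       # Abandonment Rate
asa = sum(w for _, w, _ in answered) / len(answered)  # Average Speed to Answer
tsf = 100 * sum(1 for _, w, _ in answered if w <= 20) / offered  # answered in 20 s
fcr = 100 * sum(1 for *_, ok in answered if ok) / len(answered)  # First Call Resolution

print(aba, asa, tsf, fcr)  # 20.0 15.75 60.0 75.0
```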

Uptime Agreements are another very common metric, often used for data services
such as shared hosting, virtual private servers and dedicated servers.

Common agreements include percentage of network uptime, power uptime, amount
of scheduled maintenance windows, etc.
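Network uptime percentage can be sketched as below. Excluding scheduled maintenance windows from the measured period is a common convention, but it is contract-specific, so treat it as an assumption here:

```python
def uptime_percentage(period_minutes, downtime_minutes, scheduled_minutes=0):
    """Uptime as a percentage of the measured period, with scheduled
    maintenance excluded from both the period and the downtime."""
    measured = period_minutes - scheduled_minutes
    unscheduled_down = max(0, downtime_minutes - scheduled_minutes)
    return 100 * (measured - unscheduled_down) / measured

# 30-day month, 90 min total downtime of which 60 min was a
# scheduled maintenance window.
print(round(uptime_percentage(30 * 24 * 60, 90, 60), 2))  # 99.93
```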

Group Activity: Company Specific Service Levels


In your groups – look at the Common Metrics provided above, and answer the following
questions:
1. For each of the common metric categories provided above, define the company specific
targets that are deemed acceptable in Club Leisure.
2. How are these targets communicated and maintained?
3. What monitoring system is in place to ensure adherence to the targets?
Resource Guide:
Refer to your Resource guide P5 for a Resource 1: Metrics in SLA Monitoring.

Implementation processes are monitored to ensure compliance.


Monitoring Implementation
The questions many Call Centre Team Leaders and Managers ask are:
• What do I monitor in the Call Centre environment?
• Which pressure points or areas must be included in a daily / weekly monitoring plan?
• How do I use monitoring information to overcome potential constraints in meeting
and maintaining service levels?
Real-Time Monitoring
Real time monitoring and reporting provides critical contact centre metrics and gives
supervisors the ability to manage their agent teams effectively. Authorized supervisors
can monitor live agent and customer interactions from any location. A supervisor uses a
web browser to pick an agent to monitor. A sophisticated Call Centre system will allow
the supervisor to observe, select another agent or quit monitoring.
Benefits of Real-Time Monitoring include:
• Complete visibility into your call centre operations
• Customer service quality assurance
Tip:

Performance management has been a challenge due to the varied technologies and
data intricacies often involved in Contact Centres. While numerous systems integrate to
support the operation of a Contact Centre, the data from these systems is not always
processed and visualized in the right manner to aid performance management.
Pressure Points in Monitoring: Call Centre Process Flow
The diagram below illustrates the call flow (black arrows) along with the flow of
information from Call Centre transaction systems.
Performance-related data is extracted from the individual transaction systems and
transformed for visualization in the form of reports and charts.
Conventionally, the focus of performance management was on the data present in the
data transformation and reporting layers (highlighted area).
Tip:
This led to inconsistencies in metrics and reports affecting performance management.
Data analysis of the underlying transaction systems and building a robust technology
infrastructure are essential for effective Contact Centre Performance Management.

Importance of Metrics
Contact Centres generate huge volumes of transaction data in numerous systems. This
data is used to derive performance metrics. However, the non-standardization of data
elements across systems leads to inconsistency in the derived metrics and reports.
The usability of these derived metrics is also greatly affected by the ability to quickly
visualize the required information.

Understanding the nature and source of data is essential to derive proper metrics in a
contact centre. Reporting infrastructure has to be abstracted from the raw data and a
semantic layer has to be built, so that users get faster access to metrics and reports.

Metrics help spot issues, identify root causes and control factors affecting customers.
However, complexity and proliferation of systems in a contact centre makes it difficult
to derive the right metrics.

Insight through Data Analysis


While Contact Centres use traditional reports and charts to monitor performance, these
lack the ability to provide insight into operations. Often, complex relationships and
patterns remain hidden in the data and cannot be revealed through manual analysis alone.
Data Analysis is required to uncover hidden relationships and patterns in the data. A
multidisciplinary team is required to perform the tasks involved in implementing analytic
projects. Understanding of Contact Centre processes and data that drive them are
essential for a successful implementation.
Gain insight from customer interactions
Conventional reports cannot uncover complex patterns and relationships present in
interaction data.
Analytics improves performance management of contact centres by providing such
insights.
Data Quality
Poor data quality renders an otherwise helpful report or insight useless. It is very
important to realize that a functioning contact centre is not automatically ready for
implementing performance management strategies, because the rigor applied
to validating data quality during implementation may not rise to the standard expected
for performance management.
Data Analysis is an important step in any performance management initiative. During
this process, data has to be compared against system of records to identify
inconsistencies. This activity has to be performed on a periodic basis to ensure clean
data.
Using Metrics to Monitor Performance
The following article was written by Penny Reynolds, a founding partner of a Nashville Call Centre
School, a contact centre consulting and education company.
Article: Monitoring Call Centre Performance
Read the following article for discussion in the next activity.
A new look at THE TOP 20 Contact Centre Metrics

The evolution of a simple call centre into a multichannel contact centre doesn't
happen overnight. You may need to add or upgrade technologies, and certainly staff
skills will need to expand as customer contacts begin to include e-mail and Web chat in
addition to incoming phone calls.
It's also important to rethink what performance measurements are important for this
new breed of operation. Are the measures of performance that served you well in the
call centre the same ones that will determine how well the multichannel contact centre
is working?
You can organize contact centre standards into three categories: service, quality, and
efficiency.

We've put together the top 20 metrics in these categories.


SERVICE MEASURES
The most important measures of performance in the contact centre are those
associated with service.
Some of these measures are the same for both the old-fashioned call centre and the
modern-day contact centre, while some need to change slightly to reflect the new
types of transactions.
1) BLOCKAGE
An accessibility measure, blockage — busy signals — indicates what percentage of
customers will be unable to access the centre at a given time due to insufficient
network facilities. Most centres measure blockage by time of day or by
occurrences of “all trunks busy” situations. Failure to include a blockage goal allows a
centre to always meet its speed-of-answer goal simply by blocking the excess calls. As
you can imagine, this damages customer accessibility and satisfaction, even though
the contact centre appears to be doing a great job of managing the queue.
The contact centre must also carefully determine the amount of bandwidth and e-mail
server capacity to ensure that large quantities of e-mails do not overload the system.
Likewise, the number of lines supporting fax services must be sufficient.
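As a worked illustration of the blockage measure described above, the following Python sketch applies the commonly used formula (blocked attempts divided by total attempts); the function name and figures are examples for this guide, not part of the article.

```python
def blockage_rate(calls_attempted, calls_blocked):
    """Percentage of call attempts that could not reach the centre,
    e.g. because all trunks were busy. Illustrative helper."""
    return 100 * calls_blocked / calls_attempted

# Illustrative figures: 2,000 attempts in an hour, 100 of them blocked.
print(blockage_rate(2000, 100))  # → 5.0
```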
2) ABANDON RATE
Call centres measure the number of abandons as well as the abandon rate, since both
relate to retention and revenue. Keep in mind, however, that the abandon rate is not
entirely under the call centre’s control. While abandons are affected by the average
wait time in queue (which the contact centre can control), a multitude of other factors
also influence this number, such as individual caller tolerance, time of day, availability
of service alternatives, and so on.
Abandon rate is not typically a measure associated with e-mail communications, as e-
mail does not abandon the “queue” once it has been sent, but it does apply to Web
chat interactions.
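The abandon rate described above can be sketched in Python as follows; the article defines the concept rather than a formula, so this assumes the common definition (abandoned calls divided by offered calls), with illustrative figures.

```python
def abandon_rate(calls_offered, calls_abandoned):
    """Percentage of offered calls where the caller hung up before
    reaching an agent. Assumes the common abandoned/offered definition."""
    if calls_offered == 0:
        return 0.0
    return 100 * calls_abandoned / calls_offered

# Illustrative figures: 1,000 calls offered, 45 abandoned in queue.
print(abandon_rate(1000, 45))  # → 4.5
```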
3) SELF-SERVICE AVAILABILITY
More and more contacts are being offloaded from contact centre agents to self-
service alternatives. In the contact centre, self-service usage is an important gauge of
accessibility and is typically measured as an overall number, by self-service
methodology and menu points, and by time of day or demographic group. In cases of
Web chat, automated alternatives such as FAQs or use of help functions can reduce
the requirement for the live interaction with a Web chat agent.
4) AND 5) SERVICE LEVEL AND AVERAGE SPEED OF ANSWER
Service level, the percentage of calls answered in a defined wait threshold, is the most
common speed-of-answer measure in the call centre. It is typically stated as X percent
of calls handled in Y seconds or less. Average speed of answer (ASA) represents the
average wait time of all calls in the period.
In the contact centre, speed of answer for Web chat should also be measured and
reported with a service level or an ASA number. Many centres measure initial response
as well as the back-and-forth times, as having too many open Web chat sessions can
slow the expected response time once an interaction has begun. The speed of answer
for e-mail transactions, on the other hand, is defined as “response time” and may be
depicted in terms of hours or even days, rather than in seconds or minutes of elapsed
time.
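The two speed-of-answer measures above ("X percent of calls handled in Y seconds or less" and the average wait time) can be computed directly from a list of per-call wait times, as this small sketch shows; the function names and sample data are illustrative only.

```python
def service_level(wait_times, threshold_seconds):
    """X percent of calls handled in Y seconds or less."""
    answered_in_time = sum(1 for w in wait_times if w <= threshold_seconds)
    return 100 * answered_in_time / len(wait_times)

def average_speed_of_answer(wait_times):
    """ASA: the average wait time of all calls in the period."""
    return sum(wait_times) / len(wait_times)

waits = [5, 12, 18, 25, 40]           # seconds in queue, one entry per call
print(service_level(waits, 20))       # 3 of 5 calls within 20 seconds → 60.0
print(average_speed_of_answer(waits)) # → 20.0
```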
6) LONGEST DELAY IN QUEUE
Another speed-of-answer measure is how long the oldest call in queue has been
waiting: the longest delay in queue (LDQ). A number of centres use real-time LDQ to
indicate when more staff needs to be made immediately available.
Historical LDQ is a more common measure, to indicate the “worst case” experience of
a customer over a period of time. Historical LDQ is measured in two categories. One is
the longest delay for a customer whose transaction was finally handled by an agent
(longest delay to answer), and the other is the longest delay for a customer who finally
abandoned the contact (longest delay to abandon), as might be the case in a Web
chat scenario.
QUALITY MEASURES
Perhaps a more significant indicator of customer satisfaction than the “how fast”
measures outlined above is “how well” the contact was handled.
7) FIRST RESOLUTION RATE
The percentage of transactions completed within a single contact, often called the
“one and done” ratio, is a crucial measure of quality. It gauges the ability of the centre,
as well as of an individual, to accomplish an interaction in a single step without
requiring a transfer to another person or area and without needing another interaction
at a future time to resolve the issue. The satisfactory resolution of a call is tracked overall
in the centre, as well as by type of call and perhaps by time of day, by team, or by
individual.
You should likewise track the one-contact resolution rate for e-mail transactions and
Web interactions. The resolution rate will likely be lower for e-mails, as it generally takes
multiple messages between two parties to resolve a matter to completion.
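The "one and done" ratio above can be sketched as a simple percentage; the function name and figures below are illustrative, not taken from the article.

```python
def first_contact_resolution(total_contacts, resolved_first_time):
    """The "one and done" ratio: share of contacts fully resolved
    without a transfer or a follow-up interaction."""
    return 100 * resolved_first_time / total_contacts

# Illustrative figures: 680 of 800 contacts resolved in a single step.
print(first_contact_resolution(800, 680))  # → 85.0
```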
8) TRANSFER RATE
The transfer percentage is an indication of what percentage of contacts has to be
transferred to another person or place for handling. Tracking transfers can help fine-
tune the routing strategies as well as identify performance gaps of the staff. Likewise,
tracking e-mails that must be transferred to others or text chat interactions that require
outside assistance helps to identify personnel training issues or holes in online support
tools.
9) COMMUNICATIONS ETIQUETTE
One of the critical factors that affect the caller's perception of how well a call was
handled is simple courtesy. You can monitor the degree to which telephone
communications skills and etiquette are displayed via observation or some form of
quality monitoring.
E-mail and Web chat etiquette should also be monitored. Standard wordings that
employees should follow in both types of communication should be carefully
reviewed and recorded.
10) ADHERENCE TO PROCEDURES
Adherence to procedures such as workflow processes and call scripts is particularly
important so that the customer receives a consistent interaction regardless of the
contact channel or individual agent involved. In the call centre, adherence to
processes and procedures is typically measured for individuals through simple
observation and the quality monitoring process. Adherence to processes and
procedures such as written scripts and preapproved responses is also important for e-
mail and other channels of contact.
EFFICIENCY MEASURES
Executives in every type of organization are concerned with how well their resources are
being put to use. That is especially true in the contact centre, where more than two-
thirds of operating expenses are related to personnel costs.
11) AGENT OCCUPANCY
Agent occupancy is the measure of actual time an agent is busy on customer contacts
compared with available or idle time, calculated by dividing workload hours by staff
hours. Occupancy is an important measure of how well the call centre has scheduled
its staff and how efficiently it is using its resources.
If occupancy is too low, agents are sitting around idle with not enough to do. If
occupancy is too high, agents may be overworked.
Agent occupancy rates often reflect the randomness and unpredictability of incoming
calls. In those instances, the desired level of occupancy may lead managers to pull
agents away from processing e-mails to answering phones, or vice versa. Because Web
chat interactions are essentially random events like incoming calls, the same measures
of occupancy apply here as in an incoming call scenario.
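The occupancy calculation described above (workload hours divided by staff hours) can be sketched as follows; the figures are illustrative only.

```python
def agent_occupancy(workload_hours, staff_hours):
    """Occupancy as defined above: workload hours divided by staff hours,
    expressed as a percentage."""
    return 100 * workload_hours / staff_hours

# Illustrative figures: 34 hours of contact workload over 40 staffed hours.
print(agent_occupancy(34, 40))  # → 85.0
```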
12) STAFF SHRINKAGE
Staff shrinkage is the percentage of time that employees are not available to handle
calls. It consists of meeting and training time, breaks, paid time off, off-phone work, and
general unexplained time where agents are away from their stations. Staff shrinkage is
an important number to track, since it plays an important role in how many people will
need to be scheduled each half-hour. The same measures of shrinkage that are used
for call centre calculations also apply to the multichannel contact centre.
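Shrinkage, and its effect on how many people must be scheduled, can be sketched as below. The gross-up helper reflects the common practice of dividing the on-contact requirement by (1 − shrinkage); the function names and figures are illustrative, not from the article.

```python
def staff_shrinkage(paid_hours, unavailable_hours):
    """Percentage of paid time agents are unavailable to handle contacts
    (meetings, training, breaks, paid time off, off-phone work)."""
    return 100 * unavailable_hours / paid_hours

def scheduled_staff_needed(agents_on_contacts, shrinkage_pct):
    """Gross up the number of agents needed on contacts to a schedule
    that allows for shrinkage. Illustrative helper."""
    return agents_on_contacts / (1 - shrinkage_pct / 100)

print(staff_shrinkage(40, 12))                     # → 30.0
print(round(scheduled_staff_needed(21, 30.0), 1))  # → 30.0 agents to schedule
```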
13) SCHEDULE EFFICIENCY
Workforce management is all about getting the “just right” number of people in place
each period of the day to handle customer contacts. Schedule efficiency measures
the degree of overstaffing and understaffing that result from the scheduling design.
Measure schedule efficiency for responding to the randomly arriving Web chats just as
you measure it for responding to incoming calls. Since e-mails typically represent
sequential rather than random workload, the work fits the schedule, and therefore
overstaffing and understaffing measures are less relevant.
14) SCHEDULE ADHERENCE
Schedule adherence measures the degree to which the specific hours scheduled are
actually worked by the agents. It is an overall call centre measure and is also one of the
most important team and individual measures of performance, since it has such great
impact on productivity and service. Schedule adherence is a critical measure in the
multichannel contact centre as well. Specific hours worked are less of an issue for a group
responding to e-mails than for one handling the real-time demand of calls and Web chats,
but they are still relevant to processing the work in a timely manner, especially if
response-time guarantees exist.
15) AND 16) AVERAGE HANDLE TIME AND AFTER-CALL WORK
A common measure of contact handling is the average handle time (AHT), made up of
talk time plus after-call work (ACW). To accommodate differences in calling patterns,
you should measure and identify it by time of day as well as by day of week. AHT is also
important regarding the other types of multichannel contact workload. It's harder to
calculate, however, given the difficulties of measuring how long it takes to handle an e-
mail or a Web chat transaction. An e-mail may be opened and put aside for varying
amounts of time before it is completed. Likewise, a Web chat session may appear to
take longer than a phone call, since a Web agent typically has several sessions open at
once.
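The AHT definition above (talk time plus after-call work, averaged per contact) can be sketched as follows for phone calls, where both components are readily measurable; the sample data is illustrative only.

```python
def average_handle_time(talk_times, after_call_work_times):
    """AHT: average of talk time plus after-call work (ACW) per contact,
    in seconds."""
    totals = [talk + acw for talk, acw in zip(talk_times, after_call_work_times)]
    return sum(totals) / len(totals)

talk = [180, 240, 300]  # seconds of talk time per call
acw = [30, 60, 90]      # seconds of wrap-up work per call
print(average_handle_time(talk, acw))  # → 300.0
```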
17) SYSTEM AVAILABILITY
Slow response time from the computer system can add seconds or minutes to the
handle time of a transaction. In the call centre, system speed, uptime, and overall
availability should be measured on an ongoing basis to ensure fast response times and
efficiency as well as good service to callers. For example, if the interactive voice
response (IVR) typically handles 50% of calls to completion but is out of service, more
calls will require agent assistance than normal, causing overtime costs, long delays, and
generally poor service. Often this will be a measure of performance that resides in the IT
department, but it is also a crucial measure of contact centre performance.
18) CONVERSION RATE
The conversion rate refers to the percentage of transactions in which a sales
opportunity is translated into an actual sale. It can be measured as an absolute number
of sales or as a percentage of calls that result in a sale. You should track and measure
conversion rates for incoming calls as well as outgoing calls, e-mail transactions, and
other Web interactions.
19) UPSELL/CROSS-SELL RATE
Many companies measure the up-sell or cross-sell rate as a success rate at generating
revenue over and above the original order or intention of the call. It is becoming a
more common practice, not just for pure revenue-generating contact centres but for
customer service centres as well. Although more prevalent regarding telephone calls, it
is also an appropriate measure of performance for other communications channels.
20) COST PER CALL
A common measure of operational efficiency is cost per call or cost per minute to
handle the call workload, both in a simple call centre and in a multichannel contact
environment. This cost per call can simply be a labour cost per call, or it can be a fully
loaded rate that includes wage rates in addition to telecommunications, facilities, and
other costs. In setting cost per call, it is critical to define the components being used
and to use them consistently in evaluating how well the centre is using financial
resources over time. This metric is commonly used to compare one company or site to
another in benchmarking, but that's not a good practice, as the components included
and the types of contacts will often vary.
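The distinction above between a labour-only and a fully loaded cost per call can be sketched as follows; the figures are illustrative only, and as the article notes, the components included must be defined and used consistently.

```python
def cost_per_call(total_cost, calls_handled):
    """Cost per call: pass a labour-only cost, or a fully loaded cost that
    adds telecommunications, facilities, and other expenses."""
    return total_cost / calls_handled

# Illustrative monthly figures: R50,000 labour, R18,000 other costs, 20,000 calls.
labour_only = cost_per_call(50_000, 20_000)            # → 2.5 per call
fully_loaded = cost_per_call(50_000 + 18_000, 20_000)  # → 3.4 per call
print(labour_only, fully_loaded)
```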
Alternative Monitoring Strategy
We all measure the number of calls answered in 15 seconds, and the amount of wrap-
up time. Why? Because they are easy to measure.
Richard Snow’s research indicates that they may have gone past their sell-by date.
Last year, I carried out a benchmark study into agent performance management. One
of the key questions was about what metrics companies currently use to measure how
well they are performing at handling customer interactions.
I deliberately included options that might be seen as traditional service-level measures
and others that are much more business and outcome related.
[Chart: the metrics companies use to measure interaction handling, ranked by popularity]
As the chart shows, the results were quite interesting and not really that unexpected,
given my overall experience talking to contact centre managers.
The biggest surprise was that the two metrics about pure volumes of calls and other
types of interactions handled only made it as far as 9th and 10th in the list. This suggests
companies are more interested in timing statistics, with average length of a call not
surprisingly being the number one metric and the time taken to complete after-call
work making number 4.
First-call resolution
In terms of business and outcome measures, first-call resolution rates have climbed up
the list and made it to number 2, and, given all the hype around it, not surprisingly
customer satisfaction scores make it into the top five.
But what of real business measures?
You have to look quite a long way down the list, to number 6, before you see anything
business related (number of customer saves); value of sales only makes it in at
number 12, and number of new accounts generated comes bottom of the list.
This all rather suggests that traditional service-level metrics have not yet reached their
sell-by date and that companies are more interested in how efficient their centres are
than in how effectively they are delivering against key business objectives.
The average company uses six measurements
What these top-line results don’t show is that on average companies use six metrics to
judge the performance of their centres, and indeed the six includes a mixture of
service-level metrics, outcome measures and business-related measures.
It is this that really points us to the answer as to whether service stats have outlived their
sell-by date, which is of course “yes” and “no”. Yes, because by themselves they don’t
paint the complete picture and, used incorrectly, they could actually do more harm
than good; and no, because they will always be part of any set of metrics used to judge
the performance of contact centres (or, more broadly, the handling of customer
interactions).
What I think we will see is that the mix of metrics will change, so while traditional service
metrics will remain, the balance will swing more to business- and outcome-related
metrics.
In fact, I have already seen more importance being placed on a crucial metric – first-
call (or interaction) resolution rates (FCR).
FCR is in truth a hybrid metric: it includes an element of efficiency (more interactions
closed at the first attempt means fewer follow-ups and lower costs) and an element of
outcome (more interactions closed at the first attempt means happier customers).
The challenge for companies is to measure true FCR rates: closing a call by saying
“someone will get back to you”, for example, should not be counted as closed at the first
attempt.
These days, companies have to look across multiple channels to track interactions and
define, then monitor, which are truly closed to the customer’s satisfaction.
However, what is interesting is that centre managers I have spoken with say changing
to focus on FCR brings about a change of behaviour in people handling interactions, in
that they try harder to solve the customer’s issue, which can only be good for the
customer, the company and the agent.
And this is why companies need to move on from just relying on traditional service-level
metrics and begin to include business- and outcome-related measures in a composite
set of metrics that drive better behaviours, that deliver better business results, and as a
consequence indeed drive some of the efficiencies they are so eager to see.
7. Group Activity: Alternative Monitoring Methods
In your groups, read the article provided above and then discuss the following questions:
• Do you agree with the author that the use of traditional statistics may have reached its
“sell-by” date? Motivate your answer.
• What does the author suggest as alternative monitoring parameters? Do you agree with
him? Motivate your answer.