Understanding The Top 5 Redis Performance Metrics
Introduction
Yesterday's web was mostly about populating static pages with content from relational databases. Today's web is all about near real-time interactions and low latency. The demands of real-time applications and the constraints of computer architectures have dictated the need to put in-memory data stores at the center of modern applications.
When you need low-latency, persistent, cross-platform storage, no other system is as widely used as Redis. The popularity of Redis stems from its rich API and its strength in providing low latency while offering the persistence usually seen in traditional relational databases. Redis has many prominent users, including Yahoo!, Twitter, Craigslist, Stack Overflow and Flickr [1].
The objective of this guide is to explain the five most important metrics that will allow you to use Redis more effectively. Understanding these metrics will help when troubleshooting common issues. In this guide, you will learn what these metrics are, how to access them, how to interpret their output, and how to use them to improve your Redis performance. The first few pages describe what Redis is, why developers use it, and some of the common alternatives. You can jump straight into the top 5 Redis performance metrics on page 7.
[1] http://redis.io/topics/whos-using-redis
[2] https://en.wikipedia.org/wiki/Web_cache
[3] http://en.wikipedia.org/wiki/Publish/subscribe
On top of its high performance, Redis gives you two options to persist its data durably: (1) using snapshots and/or (2) using an append-only file. Redis snapshots are point-in-time images of the in-memory data. Rather than writing every change to disk first, Redis creates a copy of itself in memory (using fork) so that the copy can be saved to disk as fast as possible. Data that was modified between now and the last successful snapshot is always at risk, so running frequent snapshots is recommended if the disk is fast enough. The optimal frequency depends on your application; the first question to ask yourself when you set that parameter is how much the application would suffer if it lost that sliver of at-risk data.
Redis also offers an append-only file (AOF) as a more reliable persistence mode. With AOF, every change made to the dataset in memory is also written to disk. With careful disk configuration, this mechanism ensures that all past writes will have been saved to disk. However, the higher frequency of disk writes will have a significant negative impact on performance. Between AOF, snapshotting, and using Redis without persistence, there is a trade-off between 100% data integrity in the case of a crash and speed of operation while the system runs.
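The two persistence modes described above map onto a handful of redis.conf directives; a minimal sketch (the specific values are illustrative, not recommendations):

```conf
# Snapshotting: "save <seconds> <changes>" takes a snapshot if at
# least <changes> keys changed within <seconds>.
save 900 1
save 300 10
save 60 10000

# Append-only file: log every write; the fsync policy trades
# durability against write latency.
appendonly yes
appendfsync everysec   # alternatives: always (safest), no (fastest)
```

With both snapshotting and AOF enabled, Redis uses the AOF on restart, since it is the more complete record.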
Redis Alternatives

There are alternative solutions to the problems tackled by Redis, but none fully overlaps with the unique mix of capabilities that Redis offers. Multithreaded caching alternatives like Memcached do not persist to disk and typically support only string values. More feature-rich database alternatives like MongoDB are typically more resource-intensive. Additional detail on these differences is provided below.
Memcached

Memcached is an open-source, in-memory, multithreaded key-value store often used as a cache to speed up dynamic web applications. If no persistence is needed and you're only storing strings, Memcached is a simple and powerful tool for caching. If your data benefits from richer data structures, you will benefit from using Redis. In terms of performance, Memcached can perform better or worse than Redis depending on the specific use case [4][5][6].
MongoDB

MongoDB is an open-source NoSQL database that supports richer data types. It is by some measures the most popular NoSQL database [7]. From the relational database

[4] http://oldblog.antirez.com/post/redis-memcached-benchmark.htm
[5] http://systoilet.wordpress.com/2010/08/09/redis-vs-memcached/
[6] http://dormando.livejournal.com/525147.html
[7] http://db-engines.com/en/ranking
Table 1: Feature comparison of Redis, Memcached and MongoDB

Feature                              Redis  Memcached  MongoDB
In-memory                              X        X
Persistent                             X                   X
Key-value store                        X        X
Multithreaded                                   X          X
Supports larger-than-memory dataset                        X
The info command output is grouped into 10 sections:

server
clients
memory
persistence
stats
replication
cpu
commandstats
cluster
keyspace
In that case, you would see slow responses and a spike in the number of total commands processed. If instead slowness and increased latency are caused by one or more slow commands, you would see your total commands metric drop or stall completely as Redis performance degrades. Diagnosing these problems requires that you track command volume and latency over time. For example, you could set up a script that periodically logs the total_commands_processed metric and also measures latency. You can then use this log of historical command volume to determine whether your total number of commands processed increased or decreased when you observed slower response times.
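A minimal sketch of such a logging script, assuming a client object with the redis-py interface (its info() and ping() methods); the sample() and log_forever() helpers are illustrative names, not part of any library:

```python
import time


def sample(client):
    """Return (total_commands_processed, ping_latency_ms) for one poll.

    `client` is expected to expose redis-py style methods: info()
    returns a dict of INFO fields, ping() round-trips to the server.
    """
    start = time.perf_counter()
    client.ping()  # time one round trip as a crude latency probe
    latency_ms = (time.perf_counter() - start) * 1000.0
    total = int(client.info()["total_commands_processed"])
    return total, latency_ms


def log_forever(client, logfile, period_s=10):
    """Append one 'timestamp,total_commands,latency_ms' line per period."""
    with open(logfile, "a") as f:
        while True:
            total, latency_ms = sample(client)
            f.write("%d,%d,%.3f\n" % (int(time.time()), total, latency_ms))
            f.flush()
            time.sleep(period_s)
```

With redis-py this could be started as log_forever(redis.Redis(), "redis_metrics.csv"); graphing the resulting log shows whether command volume rose or fell when latency spiked.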
Using number of commands processed to resolve increases in latency

If you are seeing an increase or decrease in total commands processed as compared to historical norms, it may be a sign of high command volume (more commands in queue) or of several slow commands blocking the system. Here are three ways to address latency issues caused by high command volume and slow commands:
1. Use multi-argument commands: If Redis clients send a large number of commands to the Redis server in a short period of time, you may see slow response times simply because later commands are waiting in queue for the large volume of earlier commands to complete. One way to improve latency is to reduce the overall command volume by using Redis commands that accept multiple arguments. For example, instead of adding 1,000 elements to a list with a loop of 1,000 single-element RPUSH commands, you can build the list of 1,000 elements on the client side and add them all at once with a single LPUSH or RPUSH call, both of which accept multiple arguments. The table below highlights several Redis commands that operate only on single elements and the corresponding multiple-element commands that can help you minimize overall command volume.
Table 2: Redis single-argument commands and their corresponding multi-argument alternatives

Single  Description                      Multi   Description
GET     Get the value of a key           MGET    Get the values of all the given keys
HGET    Get the value of a hash field    HMGET   Get the values of all the given hash fields
2. Pipeline commands: Another way to reduce latency associated with high command volume is to pipeline several commands together so that you reduce latency due to network usage. Rather than sending 10 client commands to the Redis server individually and taking the network latency hit 10 times, pipelining the commands sends them all at once and pays the network latency cost only once. Pipelining commands is supported by the Redis server and by most clients. This is only beneficial if network latency is significantly larger than the time your instance spends executing the commands themselves.
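A sketch of the batched case using a redis-py style pipeline (the pipeline()/execute() interface is assumed from redis-py; fetch_all is an illustrative helper, not a library function):

```python
def fetch_all(client, keys):
    """Fetch many keys in one network round trip instead of len(keys) trips."""
    pipe = client.pipeline()
    for key in keys:
        pipe.get(key)        # queued client-side; nothing is sent yet
    return pipe.execute()    # one round trip carrying all queued commands
```

execute() returns the replies in the order the commands were queued, so the result lines up with the input keys.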
3. Avoid slow commands for large sets: If increases in latency correspond with a drop in command volume, you may be inefficiently using Redis commands with high time complexity. High time complexity means that the time required to complete these commands grows rapidly as the size of the dataset processed increases. Minimizing use of these commands on large sets can significantly improve Redis performance. The table below lists the Redis commands with the highest time complexity. Specific command attributes that affect Redis performance and guidelines for optimal performance are highlighted for each of the commands.
Table 3: Redis commands with high time complexity

Command      Description                                       Improve performance by
ZINTERSTORE  intersect multiple sorted sets and store result   reducing the number of sets and/or the number of elements in the resulting set
SINTERSTORE  intersect multiple sets and store result          reducing the size of the smallest set and/or the number of sets
SINTER       intersect multiple sets                           reducing the size of the smallest set and/or the number of sets
MIGRATE      transfer key from one Redis instance to another   reducing the number of objects stored as values and/or their average size
DUMP         return serialized value for a given key           reducing the number of objects stored as values and/or their average size
ZUNIONSTORE  add multiple sorted sets and store result         reducing the total size of the sorted sets and/or the number of elements in the resulting set
SORT         sort elements in list, set, or sorted set         reducing the number of elements to sort and/or the number of returned elements
SDIFFSTORE   subtract multiple sets and store result           reducing the number of elements in all sets
SDIFF        subtract multiple sets                            reducing the number of elements in all sets
SUNION       add multiple sets                                 reducing the number of elements in all sets
LRANGE       get a range of elements from a list               reducing the start offset and/or the number of elements in range
3. Latency

Latency measures the average time in milliseconds it takes the Redis server to respond. This metric is not available through the Redis info command. To see latency, change directory to the location of your Redis installation and type the following:

./redis-cli --latency -h host -p port

where host and port are the relevant values for your system. While times depend on your actual setup, typical latency for a 1 Gbit/s network is about 200 μs.
Interpreting latency: Tracking Redis performance

Performance, and more specifically its predictably low latency, is one of the main reasons Redis is so popular. Tracking latency is the most direct way to see changes in Redis performance. For a 1 Gbit/s network, a latency greater than 200 μs likely points to a problem. Although there are some slow I/O operations that run in the background,
For a more detailed look at slow commands, you can adjust the threshold for logging. In cases where few or no commands take longer than 10 ms, lower the threshold to 5 ms by entering the following command in your redis-cli (the threshold value is given in microseconds, so 5000 corresponds to 5 ms):

config set slowlog-log-slower-than 5000
2. Monitor client connections: Because Redis is single-threaded, one process serves requests from all clients. As the number of clients grows, the percentage of resource time given to each client decreases and each client spends an increasing amount of time waiting for its share of Redis server time. Monitoring the number of clients is important because there may be applications creating client connections that you did not expect, or your application may not be efficiently closing unused connections. To see all clients connected to your Redis server, go to your redis-cli and type info clients. The first field (connected_clients) gives the number of client connections.
The default maximum number of client connections is 10,000. Even with low client command traffic, if you are seeing connection counts above 5,000, the number of clients may be significantly affecting Redis performance. If some of your clients are sending large numbers of commands, the threshold for affected performance could be much lower.
3. Limit client connections: In addition to monitoring client connections, Redis versions 2.6 and greater allow you to control the maximum number of client connections for your Redis server with the redis.conf directive maxclients. You can also set the maxclients limit from the redis-cli by typing config set maxclients <value>. You should set maxclients to between 110% and 150% of your expected peak number of connections, depending on variation in your connection load. Connections that exceed your defined limit will be rejected and closed immediately.
In
addition,
because
an
error
message
is
returned
for
failed
connection
attempts,
the
maxclient
limit
helps
warn
you
that
a
significant
number
of
unexpected
connections
are
occurring.
Both
are
important
for
controlling
the
total
number
of
connections
and
ultimately
maintaining
optimal
Redis
performance.
4. Improve memory management: Poor memory management can cause increased latency in Redis. If your Redis instance is using more memory than is available, the operating system will swap parts of the Redis process out of physical memory and onto disk. Swapping will significantly increase latency. See page 8 for more information on how to monitor and reduce memory usage.
5. Metric correlation: Diagnosing and correcting performance issues often requires you to correlate changes in latency with changes in other metrics. A spike in latency that occurs as the number of commands processed drops likely indicates slow commands blocking the system. But if latency increases as memory usage increases, you are probably seeing performance issues due to swapping. For this type of correlated metric analysis, you need historical perspective so that significant changes in metrics are perceptible, as well as the ability to see all relevant metrics across your stack in one place. To do this with Redis, you would create a script that calls the info command periodically, parses the output, and records key metrics in a log file. The log can then be used to identify when latency changes occurred and what other metrics changed in tandem.
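Parsing the raw info output is straightforward, since each non-comment line is a key:value pair; a minimal sketch (parse_info and record are illustrative helpers, and the two default metric names are just one reasonable choice):

```python
def parse_info(raw):
    """Parse raw INFO text (as printed by `redis-cli info`) into a dict.

    Section headers start with '#'; blank lines are skipped; every
    other line has the form 'key:value'.
    """
    metrics = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        metrics[key] = value
    return metrics


def record(raw, logfile, keys=("used_memory", "total_commands_processed")):
    """Append the selected metrics as one comma-separated line."""
    metrics = parse_info(raw)
    with open(logfile, "a") as f:
        f.write(",".join(metrics.get(k, "") for k in keys) + "\n")
```

Run from cron (or a loop) against the live INFO output, this builds exactly the historical log described above, ready for correlation with your latency measurements.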
Using the fragmentation ratio to predict performance issues

If the fragmentation ratio is outside the range of 1 to 1.5, it is likely a sign of poor memory management by either the operating system or your Redis instance. Here are three ways to correct the problem and improve Redis performance:
1. Restart your Redis server: If your fragmentation ratio is above 1.5, restarting your Redis server will allow the operating system to recover memory that is effectively unusable because of external memory fragmentation. External fragmentation occurs when Redis frees blocks of memory but the allocator (the piece of code responsible for managing memory distribution) does not return that memory to the operating system. You can check for external fragmentation by comparing the values of the used_memory_peak, used_memory_rss and used_memory metrics. As the name suggests, used_memory_peak measures the largest historical amount of memory used by Redis, regardless of current memory allocation. If used_memory_peak and used_memory_rss are roughly equal and both are significantly higher than used_memory, external fragmentation is occurring. All three of these memory metrics can be displayed by typing info memory in your redis-cli.
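The checks above can be expressed as a small helper over the parsed info memory fields (the field names are real INFO fields; the 10% "roughly equal" tolerance and the 1.5x peak-versus-used cutoff are illustrative assumptions, and memory_diagnosis is a hypothetical helper):

```python
def memory_diagnosis(info):
    """Classify memory health from `info memory` fields (values in bytes)."""
    ratio = float(info["mem_fragmentation_ratio"])
    used = int(info["used_memory"])
    rss = int(info["used_memory_rss"])
    peak = int(info["used_memory_peak"])

    if ratio < 1.0:
        # RSS smaller than used_memory: part of Redis has been swapped out
        return "swapping likely (RSS smaller than used memory)"
    if ratio > 1.5:
        # peak ~= rss while both dwarf used_memory -> external fragmentation
        if abs(peak - rss) < 0.1 * rss and peak > 1.5 * used:
            return "external fragmentation: consider restarting Redis"
        return "fragmentation ratio high"
    return "ok"
```

Feeding it the dict that redis-py's info("memory") returns (or your own INFO parse) gives a quick verdict before deciding whether a restart is warranted.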
5. Evictions

The evicted_keys metric gives the number of keys removed by Redis due to hitting the maxmemory limit. maxmemory is explained in more detail on page 9. Key evictions occur only if the maxmemory limit is set. When evicting a key because of memory pressure, Redis does not consistently remove the oldest data first. Instead, a random sample of keys is chosen, and either the least recently used key (LRU [12] eviction policy) or the key closest to expiration (TTL [13] eviction policy) within that random set is chosen for removal. You can select between the LRU and TTL eviction policies in the config file by setting maxmemory-policy to volatile-lru or volatile-ttl, respectively. The TTL eviction policy is appropriate if you are effectively expiring keys. If you are not using key expirations, or keys are not expiring quickly enough, it makes sense to use the LRU policy, which allows you to remove keys regardless of their expiration state.
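A sketch of the corresponding redis.conf fragment (the 100mb limit is an arbitrary illustration, not a recommendation):

```conf
# Cap Redis memory; evictions start once this limit is hit.
maxmemory 100mb

# Evict the sampled least-recently-used key among those with an expire
# set; use volatile-ttl to instead evict the sampled key closest to
# expiration.
maxmemory-policy volatile-lru
```

Watching evicted_keys after setting these tells you whether the limit is sized correctly for your working set.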
[11] For more details on jemalloc: https://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919
[12] Least recently used
[13] Time to live
Conclusion

For developers, Redis provides a fast, in-memory key-value store with an easy-to-use programming interface. However, in order to get the most out of Redis, you should understand what factors can negatively impact performance and what metrics can help you avoid pitfalls. After reading this guide, you should understand some of the key Redis metrics, how to view them, and most importantly how to use them for detecting and solving Redis performance issues.
Conor is a Software Engineer at Datadog. Equally at ease with frontend and backend code, he's made Redis his data store of choice. He started at Datadog through the HackNY intern program in New York. In his free time he enjoys hiking and the great food of Astoria, Queens.
Patrick Crosby, Product Marketer, Datadog

Patrick is a Product Marketer at Datadog. Prior to joining Datadog, he was a Sales Engineer at Innography, a venture-backed software company in Austin, TX. Patrick managed relationships for Innography's 60 largest global clients, led strategy and
About Datadog

Datadog unifies the data from servers, databases, applications, tools and services to present a unified view of on-premise and cloud infrastructure. These capabilities are provided on a SaaS-based monitoring and data analytics platform that enables Dev and Ops teams working collaboratively on the infrastructure to avoid downtime, resolve performance problems, and ensure that development and deployment cycles finish on time. To find out more, visit www.datadog.com.