A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications
Andrew Rukhin, Juan Soto, James Nechvatal,
Miles Smid, Elaine Barker, Stefan Leigh,
Mark Levenson, Mark Vangel, David Banks,
Alan Heckert, James Dray, San Vo
Abstract

This paper discusses some aspects of selecting and testing random and pseudorandom
number generators. The outputs of such generators may be used in many cryptographic
applications, such as the generation of key material. Generators suitable for use in
cryptographic applications may need to meet stronger requirements than for other
applications. In particular, their outputs must be unpredictable in the absence of
knowledge of the inputs. Some criteria for characterizing and selecting appropriate
generators are discussed in this document. The subject of statistical testing and its
relation to cryptanalysis is also discussed, and some recommended statistical tests are
provided. These tests may be useful as a first step in determining whether or not a
generator is suitable for a particular cryptographic application. However, no set of
statistical tests can absolutely certify a generator as appropriate for usage in a particular
application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The
design and cryptanalysis of generators is outside the scope of this paper.
Certain commercial equipment and materials were used in the development of this test
suite. Such identification does not imply recommendation or endorsement by the National
Institute of Standards and Technology, nor does it imply that the materials or equipment
identified are necessarily the best available for the purpose.
1 INTRODUCTION TO RANDOM NUMBER TESTING
The need for random and pseudorandom numbers arises in many cryptographic applications. For
example, common cryptosystems employ keys that must be generated in a random fashion.
Many cryptographic protocols also require random or pseudorandom inputs at various points,
e.g., for auxiliary quantities used in generating digital signatures, or for generating challenges in
authentication protocols.
This document discusses the randomness testing of random number and pseudorandom number
generators that may be used for many purposes including cryptographic, modeling and
simulation applications. The focus of this document is on those applications where randomness is
required for cryptographic purposes. A set of statistical tests for randomness is described in this
document. The National Institute of Standards and Technology (NIST) believes that these
procedures are useful in detecting deviations of a binary sequence from randomness. However, a
tester should note that apparent deviations from randomness may be due either to a poorly
designed generator or to anomalies that appear in the binary sequence that is tested (i.e., a
certain number of failures is expected in random sequences produced by a particular generator).
It is up to the tester to determine the correct interpretation of the test results. Refer to Section 4
for a discussion of testing strategy and the interpretation of test results.
There are two basic types of generators used to produce random sequences: random number
generators (RNGs - see Section 1.1.3) and pseudorandom number generators (PRNGs - see
Section 1.1.4). For cryptographic applications, both of these generator types produce a stream of
zeros and ones that may be divided into substreams or blocks of random numbers.
1.1.1 Randomness
A random bit sequence could be interpreted as the result of the flips of an unbiased “fair” coin
with sides that are labeled “0” and “1,” with each flip having a probability of exactly ½ of
producing a “0” or “1.” Furthermore, the flips are independent of each other: the result of any
previous coin flip does not affect future coin flips. The unbiased “fair” coin is thus the perfect
random bit stream generator, since the “0” and “1” values will be randomly distributed (and
[0,1] uniformly distributed). All elements of the sequence are generated independently of each
other, and the value of the next element in the sequence cannot be predicted, regardless of how
many elements have already been produced.
Obviously, the use of unbiased coins for cryptographic purposes is impractical. Nonetheless,
the hypothetical output of such an idealized generator of a true random sequence serves as a
benchmark for the evaluation of random and pseudorandom number generators.
1.1.2 Unpredictability
To ensure forward unpredictability, care must be exercised in obtaining seeds. The values
produced by a PRNG are completely predictable if the seed and generation algorithm are known.
Since in many cases the generation algorithm is publicly available, the seed must be kept secret
and should not be derivable from the pseudorandom sequence that it produces. In addition, the
seed itself must be unpredictable.
1.1.3 Random Number Generators (RNGs)

The first type of sequence generator is a random number generator (RNG). An RNG uses a non-
deterministic source (i.e., the entropy source), along with some processing function (i.e., the
entropy distillation process) to produce randomness. The use of a distillation process is needed to
overcome any weakness in the entropy source that results in the production of non-random
numbers (e.g., the occurrence of long strings of zeros or ones). The entropy source typically
consists of some physical quantity, such as the noise in an electrical circuit, the timing of user
processes (e.g., key strokes or mouse movements), or the quantum effects in a semiconductor.
Various combinations of these inputs may be used.
The outputs of an RNG may be used directly as a random number or may be fed into a
pseudorandom number generator (PRNG). To be used directly (i.e., without further processing),
the output of any RNG needs to satisfy strict randomness criteria as measured by statistical tests
in order to determine that the physical sources of the RNG inputs appear random. For example,
a physical source such as electronic noise may contain a superposition of regular structures, such
as waves or other periodic phenomena, which may appear to be random, yet are determined to be
non-random using statistical tests.
For cryptographic purposes, the output of RNGs needs to be unpredictable. However, some
physical sources (e.g., date/time vectors) are quite predictable. These problems may be
mitigated by combining outputs from different types of sources to use as the inputs for an RNG.
However, the resulting outputs from the RNG may still be deficient when evaluated by statistical
tests. In addition, the production of high-quality random numbers may be too time consuming,
making such production undesirable when a large quantity of random numbers is needed. To
produce large quantities of random numbers, pseudorandom number generators may be
preferable.
1.1.4 Pseudorandom Number Generators (PRNGs)
The second generator type is a pseudorandom number generator (PRNG). A PRNG uses one or
more inputs and generates multiple “pseudorandom” numbers. Inputs to PRNGs are called
seeds. In contexts in which unpredictability is needed, the seed itself must be random and
unpredictable. Hence, by default, a PRNG should obtain its seeds from the outputs of an RNG;
i.e., a PRNG requires an RNG as a companion.
The outputs of a PRNG are typically deterministic functions of the seed; i.e., all true randomness
is confined to seed generation. The deterministic nature of the process leads to the term
“pseudorandom.” Since each element of a pseudorandom sequence is reproducible from its seed,
only the seed needs to be saved if reproduction or validation of the pseudorandom sequence is
required.
Ironically, pseudorandom numbers often appear to be more random than random numbers
obtained from physical sources. If a pseudorandom sequence is properly constructed, each value
in the sequence is produced from the previous value via transformations which appear to
introduce additional randomness. A series of such transformations can eliminate statistical auto-
correlations between input and output. Thus, the outputs of a PRNG may have better statistical
properties and be produced faster than an RNG.
1.1.5 Testing
Various statistical tests can be applied to a sequence to attempt to compare and evaluate the
sequence to a truly random sequence. Randomness is a probabilistic property; that is, the
properties of a random sequence can be characterized and described in terms of probability. The
likely outcome of statistical tests, when applied to a truly random sequence, is known a priori
and can be described in probabilistic terms. There are an infinite number of possible statistical
tests, each assessing the presence or absence of a “pattern” which, if detected, would indicate
that the sequence is nonrandom. Because there are so many tests for judging whether a sequence
is random or not, no specific finite set of tests is deemed “complete.” In addition, the results of
statistical testing must be interpreted with some care and caution to avoid incorrect conclusions
about a specific generator (see Section 4).
A statistical test is formulated to test a specific null hypothesis (H0). For the purpose of this
document, the null hypothesis under test is that the sequence being tested is random. Associated
with this null hypothesis is the alternative hypothesis (Ha) which, for this document, is that the
sequence is not random. For each applied test, a decision or conclusion is derived that accepts or
rejects the null hypothesis, i.e., whether the generator is (or is not) producing random values,
based on the sequence that was produced.
For each test, a relevant randomness statistic must be chosen and used to determine the
acceptance or rejection of the null hypothesis. Under an assumption of randomness, such a
statistic has a distribution of possible values. A theoretical reference distribution of this statistic
under the null hypothesis is determined by mathematical methods. From this reference
distribution, a critical value is determined (typically, this value is "far out" in the tails of the
distribution, say out at the 99 % point). During a test, a test statistic value is computed on the
data (the sequence being tested). This test statistic value is compared to the critical value. If the
test statistic value exceeds the critical value, the null hypothesis for randomness is rejected.
Otherwise, the null hypothesis (the randomness hypothesis) is not rejected (i.e., the hypothesis is
accepted).
In practice, the reason that statistical hypothesis testing works is that the reference distribution
and the critical value are dependent on and generated under a tentative assumption of
randomness. If the randomness assumption is, in fact, true for the data at hand, then the resulting
calculated test statistic value on the data will have a very low probability (e.g., 0.01 %) of
exceeding the critical value.
On the other hand, if the calculated test statistic value does exceed the critical value (i.e., if the
low probability event does in fact occur), then from a statistical hypothesis testing point of view,
the low probability event should not occur naturally. Therefore, when the calculated test statistic
value exceeds the critical value, the conclusion is made that the original assumption of
randomness is suspect or faulty. In this case, statistical hypothesis testing yields the following
conclusions: reject H0 (randomness) and accept Ha (non-randomness).
                                   CONCLUSION
TRUE SITUATION                     Accept H0        Accept Ha (reject H0)
Data is random (H0 is true)        No error         Type I error
Data is not random (Ha is true)    Type II error    No error
If the data is, in truth, random, then a conclusion to reject the null hypothesis (i.e., conclude that
the data is non-random) will occur a small percentage of the time. This conclusion is called a
Type I error. If the data is, in truth, non-random, then a conclusion to accept the null hypothesis
(i.e., conclude that the data is actually random) is called a Type II error. The conclusions to
accept H0 when the data is really random, and to reject H0 when the data is non-random, are
correct.
The probability of a Type I error is often called the level of significance of the test. This
probability can be set prior to a test and is denoted as α. For the test, α is the probability that the
test will indicate that the sequence is not random when it really is random. That is, a sequence
appears to have non-random properties even when a “good” generator produced the sequence.
Common values of α in cryptography are about 0.01.
The probability of a Type II error is denoted as β. For the test, β is the probability that the test
will indicate that the sequence is random when it is not; that is, a “bad” generator produced a
sequence that appears to have random properties. Unlike α, β is not a fixed value. β can take on
many different values because there are an infinite number of ways that a data stream can be
non-random, and each different way yields a different β. The calculation of the Type II error β is
more difficult than the calculation of α because of the many possible types of non-randomness.
One of the primary goals of the following tests is to minimize the probability of a Type II error,
i.e., to minimize the probability of accepting a sequence being produced by a good generator
when the generator was actually bad. The probabilities α and β are related to each other and to
the size n of the tested sequence in such a way that if two of them are specified, the third value is
automatically determined. Practitioners usually select a sample size n and a value for α (the
probability of a Type I error – the level of significance). Then a critical point for a given statistic
is selected that will produce the smallest β (the probability of a Type II error). That is, a suitable
sample size is selected along with an acceptable probability of deciding that a bad generator has
produced the sequence when it really is random. Then the cutoff point for acceptability is
chosen such that the probability of falsely accepting a sequence as random has the smallest
possible value.
Each test is based on a calculated test statistic value, which is a function of the data. If the test
statistic value is S and the critical value is t, then the Type I error probability is P(S > t | H0 is
true) = P(reject H0 | H0 is true), and the Type II error probability is P(S ≤ t | H0 is false) =
P(accept H0 | H0 is false). The test statistic is used to calculate a P-value that summarizes the
strength of the evidence against the null hypothesis. For these tests, each P-value is the
probability that a perfect random number generator would have produced a sequence less
random than the sequence that was tested, given the kind of non-randomness assessed by the test.
If a P-value for a test is determined to be equal to 1, then the sequence appears to have perfect
randomness. A P-value of zero indicates that the sequence appears to be completely non-
random. A significance level (α) can be chosen for the tests. If P-value ≥ α, then the null
hypothesis is accepted; i.e., the sequence appears to be random. If P-value < α, then the null
hypothesis is rejected; i.e., the sequence appears to be non-random. The parameter α denotes the
probability of the Type I error. Typically, α is chosen in the range [0.001, 0.01].
• An α of 0.001 indicates that one would expect one sequence in 1000 sequences to be rejected
by the test if the sequence was random. For a P-value ≥ 0.001, a sequence would be
considered to be random with a confidence of 99.9 %. For a P-value < 0.001, a sequence
would be considered to be non-random with a confidence of 99.9 %.
• An α of 0.01 indicates that one would expect 1 sequence in 100 sequences to be rejected. A
P-value ≥ 0.01 would mean that the sequence would be considered to be random with a
confidence of 99 %. A P-value < 0.01 would mean that the conclusion was that the sequence
is non-random with a confidence of 99 %.
For the examples in this document, α has been chosen to be 0.01. Note that, in many cases, the
parameters in the examples do not conform to the recommended values; the examples are for
illustrative purposes only.
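For illustration only, the decision rule described above can be expressed as a short Python sketch (the test suite itself is written in ANSI C, and the function name below is purely illustrative):

    def assess(p_value, alpha=0.01):
        # Generic decision rule: the null hypothesis of randomness is not rejected when P-value >= alpha.
        if p_value >= alpha:
            return "accept H0: the sequence appears to be random"
        return "reject H0: the sequence appears to be non-random"

    # Example: a P-value of 0.527089 (the Frequency test example of Section 2.1) exceeds alpha = 0.01,
    # so the sequence is accepted as random at the 1 % significance level.
    print(assess(0.527089))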
1.1.6 Considerations for Randomness, Unpredictability and Testing
The following assumptions are made with respect to random binary sequences to be tested:
1.2 Definitions and Abbreviations

Asymptotic Analysis: A statistical technique that derives limiting approximations for functions of interest.

Bernoulli Random Variable: A random variable that takes on the value of one with probability p and the value of zero with probability 1 − p.

Central Limit Theorem: For a random sample of size n from a population with mean µ and variance σ², the distribution of the sample means is approximately normal with mean µ and variance σ²/n as the sample size increases.

Critical Value: The value that is exceeded by the test statistic with a small probability (significance level). A "look-up" or calculated value of a test statistic (i.e., a test statistic value) that, by construction, has a small probability of occurring (e.g., 5 %) when the null hypothesis of randomness is true.

Cumulative Distribution Function (CDF) F(x): A function giving the probability that the random variable X is less than or equal to x, for every value x. That is, F(x) = P(X ≤ x).

Incomplete Gamma Function: See the definition for igamc.

Hypothesis (Alternative): A statement Ha that an analyst will consider as true (e.g., Ha: the sequence is non-random) if and when the null hypothesis is determined to be false.

Level of Significance (α): The probability of falsely rejecting the null hypothesis, i.e., the probability of concluding that the null hypothesis is false when the hypothesis is, in fact, true. The tester usually chooses this value; typical values are 0.05, 0.01 or 0.001; occasionally, smaller values such as 0.0001 are used. The level of significance is the probability of concluding that a sequence is non-random when it is in fact random. Synonyms: Type I error, α error.

Linear Dependence: In the context of the binary rank matrix test, linear dependence refers to m-bit vectors that may be expressed as a linear combination of the linearly independent m-bit vectors.

Normal (Gaussian) Distribution: A continuous distribution whose density function is f(x; µ, σ) = (1/√(2πσ²)) e^(−(x − µ)²/(2σ²)), where µ and σ are the location and scale parameters.

Random Binary Sequence: A sequence of bits for which the probability of each bit being a "0" or "1" is ½. The value of each bit is independent of any other bit in the sequence, i.e., each bit is unpredictable.

Rank (of a matrix): Refers to the rank of a matrix in linear algebra over GF(2). Having reduced a matrix into row-echelon form via elementary row operations, the number of nonzero rows, if any, is counted in order to determine the number of linearly independent rows or columns in the matrix.

Run: An uninterrupted sequence of like bits (i.e., either all zeroes or all ones).

Standard Normal Cumulative Distribution Function: See the definition in Section 5.5.3. This is the normal function for mean = 0 and variance = 1.

Statistically Independent (Events): Two events are independent if the occurrence of one event does not affect the chances of the occurrence of the other event. The mathematical formulation of the independence of events A and B is that the probability of the occurrence of both A and B equals the product of the probabilities of A and B (i.e., P(A and B) = P(A)P(B)).

Statistical Test (of a Hypothesis): A function of the data (binary stream) which is computed and used to decide whether or not to reject the null hypothesis. A systematic statistical rule whose purpose is to generate a conclusion regarding whether the experimenter should accept or reject the null hypothesis H0.

Abbreviations:

ANSI: American National Standards Institute
1.3 Mathematical Symbols
In general, the following notation is used throughout this document. However, the tests in this
document have been designed and described by multiple authors who may have used slightly
different notation. The reader is advised to consider the notation used for each test as distinct
from the notation used in other tests.
⌊x⌋: The floor function of x; for a given real positive x, ⌊x⌋ = x − g, where ⌊x⌋ is a non-negative integer and 0 ≤ g < 1.

∇ψ²m(obs); ∇²ψ²m(obs): A measure of how well the observed values match the expected value. See Sections 2.12 and 3.12.

H0: The null hypothesis; i.e., the statement that the sequence is random.

log2(x): Defined as ln(x)/ln(2), where ln is the natural logarithm.

fn: The sum of the log2 distances between matching L-bit templates, i.e., the sum of the number of digits in the distance between L-bit templates. See Sections 2.9 and 3.9.

π: 3.14159… unless defined otherwise for a specific test.

σ: The standard deviation of a random variable; σ = √( ∫ (x − µ)² f(x) dx ).

sobs: The observed value which is used as a statistic in the Frequency test.

Sn: The nth partial sum for values Xi = {−1, +1}; i.e., the sum of the first n values of Xi.

ξj: The total number of times that a given state occurs in the identified cycles. See Sections 2.16 and 3.16.

χ²(obs): The chi-square statistic computed on the observed values. See Sections 2.2, 2.4, 2.5, 2.7, 2.8, 2.11, 2.13, 2.15, and the corresponding sections of Section 3.

Vn: The expected number of runs that would occur in a sequence of length n under an assumption of randomness. See Sections 2.3 and 3.3.

Vn(obs): The observed number of runs in a sequence of length n. See Sections 2.3 and 3.3.

W: The expected number of words in a bitstring being tested.

Wobs: The number of disjoint words in a sequence. See Sections 2.10 and 3.10.
2 RANDOM NUMBER GENERATION TESTS
The NIST Test Suite is a statistical package consisting of 16 tests that were developed to test the
randomness of (arbitrarily long) binary sequences produced by either hardware or software
based cryptographic random or pseudorandom number generators. These tests focus on a
variety of different types of non-randomness that could exist in a sequence. Some tests are
decomposable into a variety of subtests. The 16 tests are:

1. The Frequency (Monobit) Test
2. Frequency Test within a Block
3. The Runs Test
4. Test for the Longest Run of Ones in a Block
5. The Binary Matrix Rank Test
6. The Discrete Fourier Transform (Spectral) Test
7. The Non-overlapping Template Matching Test
8. The Overlapping Template Matching Test
9. Maurer's "Universal Statistical" Test
10. The Lempel-Ziv Compression Test
11. The Linear Complexity Test
12. The Serial Test
13. The Approximate Entropy Test
14. The Cumulative Sums (Cusum) Test
15. The Random Excursions Test
16. The Random Excursions Variant Test
This section (Section 2) consists of 16 subsections, one subsection for each test. Each
subsection provides a high level description of the particular test. The corresponding
subsections in Section 3 provide the technical details for each test.
Section 4 provides a discussion of testing strategy and the interpretation of test results. The
order of the application of the tests in the test suite is arbitrary. However, it is recommended
that the Frequency test be run first, since this supplies the most basic evidence for the existence
of non-randomness in a sequence, specifically, non-uniformity. If this test fails, the likelihood
of other tests failing is high. (Note: The most time-consuming statistical test is the Linear
Complexity test; see Sections 2.11 and 3.11).
Section 5 provides a user's guide for setting up and running the tests, and a discussion on
program layout. The statistical package includes source code and sample data sets. The test code
was developed in ANSI C. Some inputs are assumed to be global values rather than calling
parameters.
A number of tests in the test suite have the standard normal and the chi-square (χ²) as
reference distributions. If the sequence under test is in fact non-random, the calculated test
statistic will fall in extreme regions of the reference distribution. The standard normal
distribution (i.e., the bell-shaped curve) is used to compare the value of the test statistic obtained
from the RNG with the expected value of the statistic under the assumption of randomness. The
test statistic for the standard normal distribution is of the form z = (x − µ)/σ, where x is the
sample test statistic value, and µ and σ² are the expected value and the variance of the test
statistic. The χ² distribution (i.e., a right-skewed curve) is used to compare the goodness-of-fit of
the observed frequencies of a sample measure to the corresponding expected frequencies of the
hypothesized distribution. The test statistic is of the form χ² = ∑ (oi − ei)²/ei, where oi and
ei are the observed and expected frequencies of occurrence of the measure, respectively.
For many of the tests in this test suite, the assumption has been made that the size of the
sequence length, n, is large (of the order 10^3 to 10^7). For such large sample sizes of n,
asymptotic reference distributions have been derived and applied to carry out the tests. Most of
the tests are applicable for smaller values of n. However, if used for smaller values of n, the
asymptotic reference distributions would be inappropriate and would need to be replaced by
exact distributions that would commonly be difficult to compute.
Note: For many of the examples throughout Section 2, small sample sizes are used for
illustrative purposes only, e.g., n = 10. The normal approximation is not really applicable for
these examples.
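As an illustrative aid (not part of the NIST test code, which is written in ANSI C), the two reference-distribution computations described above can be sketched in Python, assuming SciPy is available; scipy.special.gammaincc(a, x) computes the regularized upper incomplete gamma function Q(a, x), i.e., igamc(a, x), and math.erfc is the complementary error function:

    import math
    from scipy.special import gammaincc  # gammaincc(a, x) = Q(a, x) = igamc(a, x)

    def p_value_from_half_normal(stat):
        # For statistics referred to the half-normal distribution (e.g., the Frequency test statistic),
        # the P-value is erfc(stat / sqrt(2)).
        return math.erfc(stat / math.sqrt(2.0))

    def chi_square_statistic(observed, expected):
        # chi^2 = sum over classes of (o_i - e_i)^2 / e_i
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    def p_value_from_chi_square(chi_sq, num_classes):
        # For chi-square statistics, the P-value is igamc(K/2, chi^2/2), where K is the number of
        # classes (or blocks) used by the particular test.
        return gammaincc(num_classes / 2.0, chi_sq / 2.0)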
2.1 Frequency (Monobit) Test

The focus of the test is the proportion of zeroes and ones for the entire sequence. The purpose
of this test is to determine whether the number of ones and zeros in a sequence are
approximately the same as would be expected for a truly random sequence. The test assesses
the closeness of the fraction of ones to ½, that is, the number of ones and zeroes in a sequence
should be about the same. All subsequent tests depend on the passing of this test; there is no
evidence to indicate that the tested sequence is non-random.
Frequency(n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
2.1.3 Test Statistic and Reference Distribution
sobs: The absolute value of the sum of the Xi (where Xi = 2εi − 1 = ±1) in the sequence, divided
by the square root of the length of the sequence.

The reference distribution for the test statistic is half normal (for large n). (Note: If z (where
z = sobs/√2; see Section 3.1) is distributed as normal, then |z| is distributed as half normal.) If
the sequence is random, then the plus and minus ones will tend to cancel one another out so that
the test statistic will be about 0. If there are too many ones or too many zeroes, then the test
statistic will tend to be larger than zero.

(1) Conversion to ±1: The zeros and ones of the input sequence (ε) are converted to values
of −1 and +1 and are added together to produce Sn = X1 + X2 + … + Xn, where Xi = 2εi − 1.
(2) Compute the test statistic sobs = |Sn| / √n.

For the example in this section, sobs = |2| / √10 = 0.632455532.

(3) Compute P-value = erfc( sobs / √2 ), where erfc is the complementary error function as
defined in Section 5.5.3.3.

For the example in this section, P-value = erfc( 0.632455532 / √2 ) = 0.527089.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 3 of Section 2.1.4 is ≥ 0.01 (i.e., P-value = 0.527089), the
conclusion is that the sequence is random.
Note that if the P-value were small (< 0.01), then this would be caused by |Sn| or sobs being
large. Large positive values of Sn are indicative of too many ones, and large negative values of
Sn are indicative of too many zeros.
It is recommended that each sequence to be tested consist of a minimum of 100 bits (i.e., n ≥
100).
2.1.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) n = 100
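A minimal Python sketch of steps (1)-(3) above, applied to the 100-bit example sequence of Section 2.1.8 (illustrative only; the reference implementation is in ANSI C, and the function name is an assumption of this sketch):

    import math

    def frequency_monobit_p_value(bits):
        # Frequency (Monobit) test, following steps (1)-(3) above.
        n = len(bits)
        s_n = sum(2 * b - 1 for b in bits)           # step (1): convert bits to +/-1 and sum
        s_obs = abs(s_n) / math.sqrt(n)              # step (2): test statistic
        return math.erfc(s_obs / math.sqrt(2.0))     # step (3): P-value

    # The 100-bit example sequence of Section 2.1.8:
    epsilon = ("11001001000011111101101010100010001000010110100011"
               "00001000110100110001001100011001100010100010111000")
    p = frequency_monobit_p_value([int(c) for c in epsilon])
    print(p)  # about 0.11; since P-value >= 0.01, the sequence is accepted as random at the 1 % level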
2.2 Frequency Test within a Block

The focus of the test is the proportion of ones within M-bit blocks. The purpose of this test is to
determine whether the frequency of ones in an M-bit block is approximately M/2, as would be
expected under an assumption of randomness. For block size M=1, this test degenerates to test
1, the Frequency (Monobit) test.
BlockFrequency(M,n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
χ²(obs): A measure of how well the observed proportion of ones within a given M-bit
block matches the expected proportion (1/2).

(1) Partition the input sequence into N = ⌊n/M⌋ non-overlapping blocks. Discard any unused
bits.
(2) Determine the proportion πi of ones in each M-bit block using the equation
πi = ( ∑_{j=1}^{M} ε_{(i−1)M+j} ) / M, for 1 ≤ i ≤ N.
(3) Compute the χ² statistic: χ²(obs) = 4M ∑_{i=1}^{N} (πi − ½)².

For the example in this section, χ²(obs) = 4 × 3 × ( (2/3 − 1/2)² + (1/3 − 1/2)² + (2/3 − 1/2)² ) = 1.
(4) Compute P-value = igamc (N/2, χ2(obs)/2) , where igamc is the incomplete gamma
function for Q(a,x) as defined in Section 5.5.3.3.
Note: When comparing this section against the technical description in Section 3.2, note
that Q(a,x) = 1-P(a,x).
For the example in this section, P-value = igamc(3/2, 1/2) = 0.801252.
2.2.5 Decision Rule (at the 1 % Level)
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 4 of Section 2.2.4 is ≥ 0.01 (i.e., P-value = 0.801252), the
conclusion is that the sequence is random.
Note that small P-values (< 0.01) would have indicated a large deviation from the equal
proportion of ones and zeros in at least one of the blocks.
It is recommended that each sequence to be tested consist of a minimum of 100 bits (i.e., n ≥
100). Note that n ≥ MN. The block size M should be selected such that M ≥ 20, M > .01n and
N < 100.
2.2.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) n = 100
(input) M = 10
(processing) N = 10
(processing) χ2 = 7.2
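A minimal Python sketch of steps (1)-(4) above (illustrative only; scipy.special.gammaincc plays the role of igamc, and the function name is an assumption of this sketch). For the Section 2.2.8 example (n = 100, M = 10, χ² = 7.2) it gives a P-value of about 0.706:

    from scipy.special import gammaincc  # igamc(a, x) = Q(a, x)

    def block_frequency_p_value(bits, M):
        # Frequency test within a block, following steps (1)-(4) above.
        n = len(bits)
        N = n // M                                   # step (1): non-overlapping blocks; leftover bits discarded
        chi_sq = 0.0
        for i in range(N):
            block = bits[i * M:(i + 1) * M]
            pi_i = sum(block) / M                    # step (2): proportion of ones in block i
            chi_sq += (pi_i - 0.5) ** 2
        chi_sq *= 4.0 * M                            # step (3)
        return gammaincc(N / 2.0, chi_sq / 2.0)      # step (4): P-value = igamc(N/2, chi^2/2)

    epsilon = ("11001001000011111101101010100010001000010110100011"
               "00001000110100110001001100011001100010100010111000")
    print(block_frequency_p_value([int(c) for c in epsilon], M=10))  # about 0.706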
2.3 Runs Test

The focus of this test is the total number of runs in the sequence, where a run is an uninterrupted
sequence of identical bits. A run of length k consists of exactly k identical bits and is bounded
before and after with a bit of the opposite value. The purpose of the runs test is to determine
whether the number of runs of ones and zeros of various lengths is as expected for a random
sequence. In particular, this test determines whether the oscillation between such zeros and
ones is too fast or too slow.
Runs(n), where:
Additional inputs for the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
Vn(obs): The total number of runs (i.e., the total number of zero runs + the total number of
one-runs) across all n bits.
(1) Compute the pre-test proportion π of ones in the input sequence: π = ( ∑_j εj ) / n.
(2) Determine if the prerequisite Frequency test is passed: If it can be shown that |π − 1/2| ≥ τ,
then the Runs test need not be performed (i.e., the test should not have been run because
of a failure to pass test 1, the Frequency (Monobit) test). If the test is not applicable, then
the P-value is set to 0.0000. Note that for this test, τ = 2/√n has been pre-defined in the test
code.

For the example in this section, since τ = 2/√10 ≈ 0.63246 and |π − 1/2| = |3/5 − 1/2| = 0.1
< τ, the prerequisite is satisfied and the Runs test is run.
Since the observed value π is within the selected bounds, the runs test is applicable.
(3) Compute the test statistic Vn(obs) = ∑_{k=1}^{n−1} r(k) + 1, where r(k) = 0 if εk = εk+1, and r(k) = 1 otherwise.
V10(obs)=(1+0+1+0+1+1+1+1+0)+1=7.
(4) Compute P-value = erfc( |Vn(obs) − 2nπ(1 − π)| / (2 √(2n) π(1 − π)) ).

For the example, P-value = erfc( |7 − 2 · 10 · (3/5) · (1 − 3/5)| / (2 · √(2 · 10) · (3/5) · (1 − 3/5)) ) = 0.147232.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 4 of Section 2.3.4 is ≥ 0.01 (i.e., P-value = 0.147232), the
conclusion is that the sequence is random.
Note that a large value for Vn(obs) would have indicated an oscillation in the string which is too
fast; a small value would have indicated that the oscillation is too slow. (An oscillation is
considered to be a change from a one to a zero or vice versa.) A fast oscillation occurs when
there are a lot of changes, e.g., 010101010 oscillates with every bit. A stream with a slow
oscillation has fewer runs than would be expected in a random sequence, e.g., a sequence
containing 100 ones, followed by 73 zeroes, followed by 127 ones (a total of 300 bits) would
have only three runs, whereas 150 runs would be expected.
It is recommended that each sequence to be tested consist of a minimum of 100 bits (i.e., n ≥
100).
2.3.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) n = 100
(input) τ = 0.2
(processing) π = 0.42
(processing) Vn(obs) = 52
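A minimal Python sketch of steps (1)-(4) above (illustrative only; the function name is an assumption of this sketch). For the Section 2.3.8 example it reproduces Vn(obs) = 52 and yields a P-value of about 0.50:

    import math

    def runs_test_p_value(bits):
        # Runs test, following steps (1)-(4) above.
        n = len(bits)
        pi = sum(bits) / n                                   # step (1): proportion of ones
        tau = 2.0 / math.sqrt(n)
        if abs(pi - 0.5) >= tau:                             # step (2): Frequency prerequisite
            return 0.0
        v_obs = 1 + sum(1 for k in range(n - 1) if bits[k] != bits[k + 1])   # step (3)
        num = abs(v_obs - 2.0 * n * pi * (1.0 - pi))
        den = 2.0 * math.sqrt(2.0 * n) * pi * (1.0 - pi)
        return math.erfc(num / den)                          # step (4)

    epsilon = ("11001001000011111101101010100010001000010110100011"
               "00001000110100110001001100011001100010100010111000")
    print(runs_test_p_value([int(c) for c in epsilon]))  # about 0.50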
2.4 Test for the Longest Run of Ones in a Block

The focus of the test is the longest run of ones within M-bit blocks. The purpose of this test is to
determine whether the length of the longest run of ones within the tested sequence is consistent
with the length of the longest run of ones that would be expected in a random sequence. Note
that an irregularity in the expected length of the longest run of ones implies that there is also an
irregularity in the expected length of the longest run of zeroes. Therefore, only a test for ones is
necessary. See Section 4.4.
LongestRunOfOnes(n), where:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
M The length of each block. The test code has been pre-set to accommodate three
values for M: M = 8, M = 128 and M = 10^4, in accordance with the following
table.
Minimum n    M
128          8
6272         128
750,000      10^4
χ2(obs): A measure of how well the observed longest run length within M-bit blocks
matches the expected longest length within M-bit blocks.
(2) Tabulate the frequencies νi of the longest runs of ones in each block into categories,
where each cell contains the number of runs of ones of a given length.
For the values of M supported by the test code, the vi cells will hold the following
counts:
(3) Compute χ²(obs) = ∑_{i=0}^{K} (νi − Nπi)² / (Nπi), where the values for πi are provided in Section 3.4.
The values of K and N are determined by the value of M in accordance with the
following table:
M        K    N
8        3    16
128      5    49
10^4     6    75
(4) Compute P-value = igamc( K/2, χ²(obs)/2 ).

For the example, P-value = igamc( 3/2, 4.882605/2 ) = 0.180598.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
For the example in Section 2.4.8, since the P-value ≥ 0.01 (P-value = 0.180609), the conclusion
is that the sequence is random. Note that large values of χ2(obs) indicate that the tested
sequence has clusters of ones.
2.4.8 Example
(input) ε = 11001100000101010110110001001100111000000000001001
00110101010001000100111101011010000000110101111100
1100111001101101100010110010
(input) n = 128
(processing) Subblock Max-Run Subblock Max-Run
11001100 (2) 00010101 (1)
01101100 (2) 01001100 (2)
11100000 (3) 00000010 (1)
01001101 (2) 01010001 (1)
00010011 (2) 11010110 (2)
10000000 (1) 11010111 (3)
11001100 (2) 11100110 (3)
11011000 (2) 10110010 (2)
(processing) ν0 = 4; ν1 = 9; ν2 = 3; ν3 = 0; χ² = 4.882457
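A minimal Python sketch for the M = 8 case (illustrative only; names are assumptions of this sketch). The class probabilities are the M = 8 values tabulated in Section 3.4, which is not reproduced in this excerpt; for the 128-bit example above the sketch reproduces ν0 = 4, ν1 = 9, ν2 = 3, ν3 = 0 and a P-value of about 0.18:

    from itertools import groupby
    from scipy.special import gammaincc  # igamc(a, x) = Q(a, x)

    # Class probabilities for M = 8 (longest run <= 1, == 2, == 3, >= 4), taken from Section 3.4.
    PI_M8 = (0.2148, 0.3672, 0.2305, 0.1875)

    def longest_run_in_block(block):
        # Length of the longest run of ones in a block of bits.
        return max((len(list(g)) for bit, g in groupby(block) if bit == 1), default=0)

    def longest_run_p_value_m8(bits):
        # Longest-run-of-ones test for block size M = 8 (K = 3).
        M, K, pi = 8, 3, PI_M8
        N = len(bits) // M
        nu = [0] * (K + 1)
        for i in range(N):
            run = longest_run_in_block(bits[i * M:(i + 1) * M])
            nu[min(max(run, 1), 4) - 1] += 1       # map run lengths to classes {<=1, 2, 3, >=4}
        chi_sq = sum((nu[i] - N * pi[i]) ** 2 / (N * pi[i]) for i in range(K + 1))
        return gammaincc(K / 2.0, chi_sq / 2.0)    # P-value = igamc(K/2, chi^2/2)

    # The 128-bit example of Section 2.4.8 gives chi^2 of about 4.88 and a P-value of about 0.18.
    epsilon = ("11001100000101010110110001001100111000000000001001"
               "00110101010001000100111101011010000000110101111100"
               "1100111001101101100010110010")
    print(longest_run_p_value_m8([int(c) for c in epsilon]))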
2.5 Binary Matrix Rank Test

The focus of the test is the rank of disjoint sub-matrices of the entire sequence. The purpose of
this test is to check for linear dependence among fixed length substrings of the original
sequence. Note that this test also appears in the DIEHARD battery of tests [7].
Rank(n), where:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists as a
global structure at the time of the function call; ε = ε1, ε2, … , εn.
M The number of rows in each matrix. For the test suite, M has been set to 32. If other
values of M are used, new approximations need to be computed.
Q The number of columns in each matrix. For the test suite, Q has been set to 32. If other
values of Q are used, new approximations need to be computed.
2.5.3 Test Statistic and Reference Distribution
χ²(obs): A measure of how well the observed number of ranks of various orders matches
the expected number of ranks under an assumption of randomness.
(1) Sequentially divide the sequence into M•Q-bit disjoint blocks; there will exist
N = ⌊n/(M•Q)⌋ such blocks. Discarded bits will be reported as not being used in the
computation within each block. Collect the M•Q bit segments into M by Q matrices.
Each row of the matrix is filled with successive Q-bit blocks of the original sequence ε.
(2) Determine the binary rank ( Rl ) of each matrix, where l = 1,..., N . The method for
determining the rank is described in Appendix A.
For the example in this section, the rank of the first matrix is 2 (R1 = 2), and the rank of
the second matrix is 3 (R2 = 3).
For the example in this section, FM = F3 = 1 (R2 has the full rank of 3), FM-1 = F2 = 1 (R1
has rank 2), and no matrix has any lower rank.
(4) Compute χ²(obs) = (FM − 0.2888N)² / (0.2888N) + (FM−1 − 0.5776N)² / (0.5776N)
+ (N − FM − FM−1 − 0.1336N)² / (0.1336N).

For the example in this section, χ²(obs) = (1 − 0.2888 · 2)² / (0.2888 · 2) + (1 − 0.5776 · 2)² / (0.5776 · 2)
+ (2 − 1 − 1 − 0.1336 · 2)² / (0.1336 · 2) = 0.596953.
(5) Compute P-value = e^(−χ²(obs)/2). Since there are 3 classes in the example, the P-value for
the example is equal to igamc( 1, χ²(obs)/2 ).

For the example in this section, P-value = e^(−0.596953/2) = 0.741948.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 5 of Section 2.5.4 is ≥ 0.01 (P-value = 0.741948), the
conclusion is that the sequence is random.
Note that large values of χ 2 ( obs ) (and hence, small P-values) would have indicated a deviation
of the rank distribution from that corresponding to a random sequence.
The probabilities for M = Q = 32 have been calculated and inserted into the test code. Other
choices of M and Q may be selected, but the probabilities would need to be calculated. The
minimum number of bits to be tested must be such that n ≥ 38MQ (i.e., at least 38 matrices are
created). For M = Q = 32, each sequence to be tested should consist of a minimum of 38,912
bits.
2.5.8 Example
(processing) N = 97
(processing) χ2 = 1.2619656
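A minimal Python sketch of the test (illustrative only; names are assumptions of this sketch). The rank is computed by Gaussian elimination over GF(2), and the class probabilities 0.2888, 0.5776 and 0.1336 are the M = Q = 32 approximations quoted in step (4); note that a full run requires at least 38MQ bits of input:

    import math

    def gf2_rank(matrix_rows):
        # Rank over GF(2) of a list of rows (each row a list of 0/1 bits), by Gaussian elimination.
        rows = [row[:] for row in matrix_rows]
        rank, n_cols = 0, len(rows[0])
        for col in range(n_cols):
            pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for r in range(len(rows)):
                if r != rank and rows[r][col]:
                    rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
            rank += 1
        return rank

    def binary_matrix_rank_p_value(bits, M=32, Q=32):
        # Binary matrix rank test using the M = Q = 32 approximation probabilities from step (4).
        N = len(bits) // (M * Q)
        full, full_minus_1 = 0, 0
        for i in range(N):
            block = bits[i * M * Q:(i + 1) * M * Q]
            rows = [block[r * Q:(r + 1) * Q] for r in range(M)]
            r = gf2_rank(rows)
            if r == M:
                full += 1
            elif r == M - 1:
                full_minus_1 += 1
        remainder = N - full - full_minus_1
        chi_sq = ((full - 0.2888 * N) ** 2 / (0.2888 * N)
                  + (full_minus_1 - 0.5776 * N) ** 2 / (0.5776 * N)
                  + (remainder - 0.1336 * N) ** 2 / (0.1336 * N))
        return math.exp(-chi_sq / 2.0)   # P-value = exp(-chi^2(obs)/2) for the three classes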
2.6 Discrete Fourier Transform (Spectral) Test

The focus of this test is the peak heights in the Discrete Fourier Transform of the sequence. The
purpose of this test is to detect periodic features (i.e., repetitive patterns that are near each other)
in the tested sequence that would indicate a deviation from the assumption of randomness. The
intention is to detect whether the number of peaks exceeding the 95 % threshold is significantly
different than 5 %.
DiscreteFourierTransform(n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
d: The normalized difference between the observed and the expected number of frequency
components that are beyond the 95 % threshold.
The reference distribution for the test statistic is the normal distribution.
(1) The zeros and ones of the input sequence (ε) are converted to values of –1 and +1 to
create the sequence X = x1, x2, …, xn, where xi = 2εi – 1.
For example, if n = 10 and ε = 1001010011, then X = 1, -1, -1, 1, -1, 1, -1, -1, 1, 1.
(2) Apply a Discrete Fourier transform (DFT) on X to produce: S = DFT(X). A sequence
of complex variables is produced which represents periodic components of the sequence
of bits at different frequencies (see Section 3.6 for a sample diagram of a DFT result).
(3) Calculate M = modulus(S´) ≡ |S'|, where S´ is the substring consisting of the first n/2
elements in S, and the modulus function produces a sequence of peak heights.
(5) Compute N0 = .95n/2. N0 is the expected theoretical (95 %) number of peaks (under the
assumption of randomness) that are less than T.
(6) Compute N1 = the actual observed number of peaks in M that are less than T.
(7) Compute d = (N1 − N0) / √( n (0.95)(0.05) / 2 ).
(8) Compute P-value = erfc( |d| / √2 ).

For the example in this section, P-value = erfc( 1.538968 / √2 ) = 0.123812.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 8 of Section 2.6.4 is ≥ 0.01 (P-value = 0.123812), the
conclusion is that the sequence is random.
A d value that is too low would indicate that there were too few peaks (< 95 %) below T, and
too many peaks (more than 5 %) above T.
It is recommended that each sequence to be tested consist of a minimum of 1000 bits (i.e., n ≥
1000).
2.6.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) n = 100
(processing) N1 = 46
(processing) N0 = 47.5
(processing) d = -0.973329
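A minimal Python/NumPy sketch of the test (illustrative only; names are assumptions of this sketch). Step (4), which defines the 95 % peak-height threshold T, is not reproduced in this excerpt; the sketch assumes the threshold T = √(3n) used in the original (2001) edition of the test:

    import math
    import numpy as np

    def dft_spectral_p_value(bits):
        # Discrete Fourier Transform (spectral) test, following the steps above.
        n = len(bits)
        x = np.array([2 * b - 1 for b in bits], dtype=float)   # step (1): convert to +/-1
        s = np.fft.fft(x)                                      # step (2): DFT
        mod = np.abs(s[: n // 2])                              # step (3): peak heights of the first n/2 components
        t = math.sqrt(3.0 * n)                                 # assumed 95 % peak-height threshold (step (4))
        n0 = 0.95 * n / 2.0                                    # step (5): expected number of peaks below T
        n1 = float(np.sum(mod < t))                            # step (6): observed number of peaks below T
        d = (n1 - n0) / math.sqrt(n * 0.95 * 0.05 / 2.0)       # step (7)
        return math.erfc(abs(d) / math.sqrt(2.0))              # step (8)

    # The example of Section 2.6.8 (n = 100) reports N1 = 46, N0 = 47.5 and d = -0.973329.
    epsilon = ("11001001000011111101101010100010001000010110100011"
               "00001000110100110001001100011001100010100010111000")
    print(dft_spectral_p_value([int(c) for c in epsilon]))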
2.7 Non-overlapping Template Matching Test

The focus of this test is the number of occurrences of pre-specified target strings. The purpose
of this test is to detect generators that produce too many occurrences of a given non-periodic
(aperiodic) pattern. For this test and for the Overlapping Template Matching test of Section 2.8,
an m-bit window is used to search for a specific m-bit pattern. If the pattern is not found, the
window slides one bit position. If the pattern is found, the window is reset to the bit after the
found pattern, and the search resumes.
NonOverlappingTemplateMatching(m,n)
m The length in bits of each template. The template is the target string.
n The length of the entire bit string under test.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
B The m-bit template to be matched; B is a string of ones and zeros (of length m)
which is defined in a template library of non-periodic patterns contained within
the test code.
M The length in bits of the substring of ε to be tested. M has been set to 131,072
(i.e., 2^17) in the test code.
N The number of independent blocks. N has been fixed at 8 in the test code.
χ2(obs): A measure of how well the observed number of template “hits” matches the
expected number of template “hits” (under an assumption of randomness).
(2) Let Wj (j = 1, …, N) be the number of times that B (the template) occurs within the
block j. Note that j = 1,…,N. The search for matches proceeds by creating an m-bit
window on the sequence, comparing the bits within that window against the template. If
there is no match, the window slides over one bit , e.g., if m = 3 and the current window
contains bits 3 to 5, then the next window will contain bits 4 to 6. If there is a match, the
window slides over m bits, e.g., if the current (successful) window contains bits 3 to 5,
then the next window will contain bits 6 to 8.
For the above example, if m = 3 and the template B = 001, then the examination
proceeds as follows:
Bit Positions   Block 1 Bits    W1                Block 2 Bits    W2
1-3             101             0                 111             0
2-4             010             0                 110             0
3-5             100             0                 100             0
4-6             001 (hit)       Increment to 1    001 (hit)       Increment to 1
5-7             Not examined                      Not examined
6-8             Not examined                      Not examined
7-9             001 (hit)       Increment to 2    011             1
8-10            010             2                 110             1
Thus, W1 = 2, and W2 = 1.
(3) Under an assumption of randomness, compute the theoretical mean µ and variance σ²:

µ = (M − m + 1)/2^m     σ² = M( 1/2^m − (2m − 1)/2^(2m) ).

(4) Compute χ²(obs) = ∑_{j=1}^{N} (Wj − µ)² / σ². For the example in this section,
µ = (10 − 3 + 1)/2³ = 1, σ² = 10(1/8 − 5/64) = 0.46875, and
χ²(obs) = (2 − 1)²/0.46875 + (1 − 1)²/0.46875 = 2.133333.
(5) Compute P-value = igamc( N/2, χ²(obs)/2 ). Note that multiple P-values will be
computed, i.e., one P-value will be computed for each template. For m = 9, up to 148 P-values
may be computed; for m = 10, up to 284 P-values may be computed.

For the example in this section, P-value = igamc( 2/2, 2.133333/2 ) = 0.344154.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 5 of Section 2.7.4 is ≥ 0.01 (P-value = 0.344154), the
conclusion is that the sequence is random.
If the P-value is very small (< 0.01), then the sequence has irregular occurrences of the possible
template patterns.
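A minimal Python sketch of steps (2)-(5) above for a single template (illustrative only; the 20-bit input below is reconstructed to be consistent with the example table, and scipy.special.gammaincc plays the role of igamc):

    from scipy.special import gammaincc  # igamc(a, x) = Q(a, x)

    def non_overlapping_template_p_value(bits, template, num_blocks):
        # Non-overlapping template matching test for one template B, following steps (2)-(5) above.
        m, N = len(template), num_blocks
        M = len(bits) // N                                 # block length; trailing bits discarded
        mu = (M - m + 1) / 2.0 ** m                        # theoretical mean, step (3)
        sigma_sq = M * (1.0 / 2 ** m - (2 * m - 1) / 2.0 ** (2 * m))
        chi_sq = 0.0
        for j in range(N):
            block = bits[j * M:(j + 1) * M]
            w, i = 0, 0
            while i <= M - m:
                if block[i:i + m] == template:
                    w += 1
                    i += m                                 # a hit: slide the window past the match
                else:
                    i += 1                                 # no hit: slide the window one position
            chi_sq += (w - mu) ** 2 / sigma_sq
        return gammaincc(N / 2.0, chi_sq / 2.0)            # step (5): P-value = igamc(N/2, chi^2/2)

    # A 20-bit input consistent with the example table above (blocks 1010010010 and 1110010110):
    example_bits = [int(c) for c in "10100100101110010110"]
    print(non_overlapping_template_p_value(example_bits, template=[0, 0, 1], num_blocks=2))  # about 0.344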
The test code has been written to provide templates for m = 2, 3,…,10. It is recommended that
m = 9 or m = 10 be specified to obtain meaningful results. Although N = 8 has been specified
in the test code, the code may be altered to other sizes. However, N should be chosen such that
N ≤ 100 to be assured that the P-values are valid. The test code has been written to assume a
sequence length of n = 10^6 (entered via a calling parameter) and M = 131072 (hard coded). If
values other than these are desired, be sure that M > 0.01 • n and N = n/M.
2.7.8 Example
2.8 Overlapping Template Matching Test

The focus of the Overlapping Template Matching test is the number of occurrences of pre-
specified target strings. Both this test and the Non-overlapping Template Matching test of
Section 2.7 use an m-bit window to search for a specific m-bit pattern. As with the test in
Section 2.7, if the pattern is not found, the window slides one bit position. The difference
between this test and the test in Section 2.7 is that when the pattern is found, the window slides
only one bit before resuming the search.
¹ Defined in Federal Information Processing Standard (FIPS) 186-2.
2.8.2 Function Call
OverlappingTemplateMatching(m,n)
m The length in bits of the template – in this case, the length of the run of ones.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
K The number of degrees of freedom. K has been fixed at 5 in the test code.
M The length in bits of a substring of ε to be tested. M has been set to 1032 in the
test code.
N The number of independent blocks of n. N has been set to 968 in the test code.
χ2(obs): A measure of how well the observed number of template “hits” matches the
expected number of template “hits” (under an assumption of randomness).
(2) Calculate the number of occurrences of B in each of the N blocks. The search for
matches proceeds by creating an m-bit window on the sequence, comparing the bits
within that window against B and incrementing a counter when there is a match. The
window slides over one bit after each examination, e.g., if m = 4 and the first window
contains bits 42 to 45, the next window consists of bits 43 to 46. Record the number of
occurrences of B in each block by incrementing an array vi (where i = 0,…5), such that
v0 is incremented when there are no occurrences of B in a substring, v1 is incremented
for one occurrence of B,…and v5 is incremented for 5 or more occurrences of B.
For the above example, if m = 2 and B = 11, then the examination of the first block
(1011101111) proceeds as follows:
Thus, after block 1, there are five occurrences of 11, v5 is incremented, and v0 = 0, v1 =
0, v2 = 0, v3 = 0, v4 = 0, and v5 = 1.
In a like manner, blocks 2-5 are examined. In block 2, there are 2 occurrences of 11; v2
is incremented. In block 3, there are 3 occurrences of 11; v3 is incremented. In block 4,
there are 4 occurrences of 11; v4 is incremented. In block 5, there is one occurrence of
11; v1 is incremented.
(3) Compute values for λ and η that will be used to compute the theoretical probabilities πi
corresponding to the classes of ν0: λ = (M − m + 1)/2^m, η = λ/2.
(4) Compute χ²(obs) = ∑_{i=0}^{5} (νi − Nπi)² / (Nπi), where π0 = 0.367879, π1 = 0.183940,
π2 = 0.137955, π3 = 0.099634, π4 = 0.069935 and π5 = 0.140657, as computed by the
equations specified in Section 3.8.
For the example in this section, the values of πi were recomputed, since the example
doesn’t fit the requirements stated in Section 3.8.5. The example is intended only for
illustration. The values of πi are: π0 = 0.324652, π1 = 0.182617, π2 = 0.142670, π3 =
0.106645, π4 = 0.077147, and π5 = 0.166269.
χ²(obs) = (0 − 5 · 0.324652)² / (5 · 0.324652) + (1 − 5 · 0.182617)² / (5 · 0.182617)
+ (1 − 5 · 0.142670)² / (5 · 0.142670) + (1 − 5 · 0.106645)² / (5 · 0.106645)
+ (1 − 5 · 0.077147)² / (5 · 0.077147) + (1 − 5 · 0.166269)² / (5 · 0.166269) = 3.167729.
(5) Compute P-value = igamc( 5/2, χ²(obs)/2 ).

For the example in this section, P-value = igamc( 5/2, 3.167729/2 ) = 0.274932.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 5 of Section 2.8.4 is ≥ 0.01 (P-value = 0.274932), the
conclusion is that the sequence is random.
Note that for the 2-bit template (B = 11), if the entire sequence had too many 2-bit runs of ones,
then: 1) ν5 would have been too large, 2) the test statistic would be too large, 3) the P-value
would have been small (< 0.01) and 4) a conclusion of non-randomness would have resulted.
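A minimal Python sketch of steps (2)-(5) above (illustrative only; names are assumptions of this sketch). It uses the probabilities π0, …, π5 quoted in step (4), which apply to the recommended parameters (K = 5, M = 1032); the small worked example above recomputes these probabilities for its reduced parameters, so the sketch is not meant to reproduce that example:

    from scipy.special import gammaincc  # igamc(a, x) = Q(a, x)

    # Probabilities pi_0..pi_5 quoted in step (4) for the recommended parameters.
    PI_OVERLAPPING = (0.367879, 0.183940, 0.137955, 0.099634, 0.069935, 0.140657)

    def overlapping_template_p_value(bits, template, M, pi=PI_OVERLAPPING):
        # Overlapping template matching test, following steps (2)-(5) above.
        m, K = len(template), len(pi) - 1
        N = len(bits) // M                               # number of M-bit blocks; trailing bits discarded
        nu = [0] * (K + 1)
        for j in range(N):
            block = bits[j * M:(j + 1) * M]
            count = sum(1 for i in range(M - m + 1) if block[i:i + m] == template)  # window slides one bit
            nu[min(count, K)] += 1                       # nu_5 collects 5 or more occurrences
        chi_sq = sum((nu[i] - N * pi[i]) ** 2 / (N * pi[i]) for i in range(K + 1))
        return gammaincc(K / 2.0, chi_sq / 2.0)          # step (5): P-value = igamc(5/2, chi^2/2)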
The values of K, M and N have been chosen such that each sequence to be tested consists of a
minimum of 10^6 bits (i.e., n ≥ 10^6). Various values of m may be selected, but for the time being,
NIST recommends m = 9 or m = 10. If other values are desired, please choose these values as
follows:
• n ≥ MN.
• N should be chosen so that N • (min πi) > 5.
• λ = (M − m + 1)/2^m ≈ 2
• m should be chosen so that m ≈ log2 M
• Choose K so that K ≈ 2λ. Note that the πi values would need to be
recalculated for values of K other than 5.
2.8.8 Example
2.9 Maurer's "Universal Statistical" Test

The focus of this test is the number of bits between matching patterns (a measure that is related
to the length of a compressed sequence). The purpose of the test is to detect whether or not the
sequence can be significantly compressed without loss of information. A significantly
compressible sequence is considered to be non-random.
L The length of each block. Note: the use of L as the block size is not consistent
with the block size notation (M) used for the other tests. However, the use of L as
the block size was specified in the original source of Maurer's test.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
fn : The sum of the log2 distances between matching L-bit templates, i.e., the sum of the
number of digits in the distance between L-bit templates.
The reference distribution for the test statistic is the half-normal distribution (a one-sided
variant of the normal distribution) as is also the case for the Frequency test in Section 2.1.
(1) The n-bit sequence (ε) is partitioned into two segments: an initialization segment
consisting of Q L-bit non-overlapping blocks, and a test segment consisting of K L-bit
non-overlapping blocks. Bits remaining at the end of the sequence that do not form a
complete L-bit block are discarded.
The first Q blocks are used to initialize the test. The remaining K blocks are the test
blocks (K = n/L - Q).
(2) Using the initialization segment, a table is created for each possible L-bit value (i.e., the
L-bit value is used as an index into the table). The block number of the last occurrence
of each L-bit block is noted in the table (i.e., For i from 1 to Q, Tj= i, where j is the
decimal representation of the contents of the ith L-bit block).
For the example in this section, the following table is created using the 4 initialization
blocks.
(3) Examine each of the K blocks in the test segment and determine the number of blocks
since the last occurrence of the same L-bit block (i.e., i – Tj). Replace the value in the
table with the location of the current block (i.e., Tj= i). Add the calculated distance
between re-occurrences of the same L-bit block to an accumulating log2 sum of all the
differences detected in the K blocks (i.e., sum = sum + log2(i – Tj)).
For the example in this section, the table and the cumulative sum are developed as
follows:
For block 5 (the 1st test block): 5 is placed in the “01” row of the table (i.e., T1),
and sum=log2(5-2) = 1.584962501.
For block 6: 6 is placed in the “11” row of the table (i.e., T3), and sum =
1.584962501 + log2(6-0) = 1.584962501 + 2.584962501 = 4.169925002.
For block 7: 7 is placed in the “01” row of the table (i.e., T1), and sum =
4.169925002 + log2(7-5) = 4.169925002 + 1 = 5.169925002.
For block 8: 8 is placed in the “01” row of the table (i.e., T1), and sum =
5.169925002 + log2(8-7) = 5.169925002 + 0 = 5.169925002.
For block 9: 9 is placed in the “01” row of the table (i.e., T1), and sum =
5.169925002 + log2(9-8) = 5.169925002 + 0 = 5.169925002.
For block 10: 10 is placed in the “11” row of the table (i.e., T3), and sum =
5.169925002 + log2(10-6) = 5.169925002 + 2 = 7.169925002.
(4) Compute the test statistic: $f_n = \frac{1}{K}\sum_{i=Q+1}^{Q+K}\log_2(i - T_j)$, where Tj is the table entry
corresponding to the decimal representation of the contents of the ith L-bit block.

For the example in this section, $f_n = \frac{7.169925002}{6} = 1.1949875$.
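A minimal Python sketch of steps (2)–(4) is given below (illustrative only, not the test suite code); the 20-bit input is chosen so that its L-bit blocks are exactly those of the walkthrough above (01 01 10 10 | 01 11 01 01 01 11).

```python
import math

def maurer_fn(bits: str, L: int, Q: int, K: int) -> float:
    """Average log2 distance between matching L-bit blocks (Maurer's statistic f_n)."""
    table = {}
    # Initialization segment: record the block number of the last occurrence of each value.
    for i in range(1, Q + 1):
        table[bits[(i - 1) * L : i * L]] = i
    # Test segment: accumulate log2 of the distance to the previous occurrence.
    total = 0.0
    for i in range(Q + 1, Q + K + 1):
        block = bits[(i - 1) * L : i * L]
        total += math.log2(i - table.get(block, 0))
        table[block] = i
    return total / K

print(maurer_fn("01011010011101010111", L=2, Q=4, K=6))   # ~1.1949875
```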
(5) Compute P-value = erfc$\left(\frac{|f_n - expectedValue(L)|}{\sqrt{2}\,\sigma}\right)$, where erfc is defined in Section
5.5.3.3, and expectedValue(L) and σ are taken from a table of precomputed values² (see
the table below). Under an assumption of randomness, the sample mean,
expectedValue(L), is the theoretical expected value of the computed statistic for the
given L-bit length. The theoretical standard deviation is given by
$$\sigma = c\sqrt{\frac{variance(L)}{K}}, \quad\text{where}\quad c = 0.7 - \frac{0.8}{L} + \left(4 + \frac{32}{L}\right)\frac{K^{-3/L}}{15}.$$

For the example in this section, P-value = erfc$\left(\frac{|1.1949875 - 1.5374383|}{\sqrt{2}\cdot 1.338}\right)$ = 0.767189.
Note that the expected value and variance for L = 2 are not provided in the above table,
since a block of length two is not recommended for testing. However, this value of L is
easy to use in an example. The values of the expected value and variance for the case
L = 2, although not shown in the above table, were taken from the indicated reference³.
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
² From the "Handbook of Applied Cryptography."
³ From the "Handbook of Applied Cryptography."
2.9.6 Conclusion and Interpretation of Test Results
Since the P-value obtained in step 5 of Section 2.9.4 is ≥ 0.01 (P-value = 0.767189), the
conclusion is that the sequence is random.
Theoretical expected values for ϕ have been computed as shown in the table in step (5) of
Section 2.9.4. If fn differs significantly from expectedValue(L), then the sequence is
significantly compressible.
This test requires a long sequence of bits (n ≥ (Q + K)L), which is divided into two segments
of L-bit blocks, where L should be chosen so that 6 ≤ L ≤ 16. The first segment consists of Q
initialization blocks, where Q should be chosen so that Q = 10 · 2^L. The second segment
consists of K test blocks, where K = n/L − Q ≈ 1000 · 2^L. The values of L, Q and n should be
chosen as follows:
n L Q = 10 · 2^L
≥ 387,840 6 640
≥ 904,960 7 1280
≥ 2,068,480 8 2560
≥ 4,654,080 9 5120
≥ 10,342,400 10 10240
≥ 22,753,280 11 20480
≥ 49,643,520 12 40960
≥ 107,560,960 13 81920
≥ 231,669,760 14 163840
≥ 496,435,200 15 327680
≥ 1,059,061,760 16 655360
2.9.8 Example
⁴ Defined in FIPS 186-2.
(processing) fn = 6.194107, expectedValue = 6.196251, σ = 3.125
2.10 Lempel-Ziv Compression Test

The focus of this test is the number of cumulatively distinct patterns (words) in the sequence.
The purpose of the test is to determine how far the tested sequence can be compressed. The
sequence is considered to be non-random if it can be significantly compressed. A random
sequence will have a characteristic number of distinct patterns.
LempelZivCompression(n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
Wobs: The number of disjoint and cumulatively distinct words in the sequence.
The reference distribution for the test statistic is the normal distribution.
(1) Parse the sequence into consecutive, disjoint and distinct words that will form a
"dictionary" of words in the sequence. This is accomplished by creating substrings from
consecutive bits of the sequence until a substring is created that has not been found
previously in the sequence. The resulting substring is a new word in the dictionary.
Bit Position Bit New Word? The Word is:
1 0 Yes 0 (Bit 1)
2 1 Yes 1 (Bit 2)
3 0 No
4 1 Yes 01 (Bits 3-4)
5 1 No
6 0 Yes 10 (Bits 5-6)
7 0 No
8 1 No
9 0 Yes 010 (Bits 7-9)
There are five words in the "dictionary": 0, 1, 01, 10, 010. Hence, Wobs = 5.
(2) Compute P-value = ½ erfc$\left(\frac{\mu - W_{obs}}{\sqrt{2\sigma^2}}\right)$, where µ = 69586.25 and σ² = 70.448718 when
n = 10⁶. For other values of n, the values of µ and σ² would need to be calculated. Note
that since no known theory is available to determine the exact values of µ and σ², these
values were computed (under an assumption of randomness) using SHA-1. The Blum-
Blum-Shub generator will give similar results for µ and σ².
Because the example in this section is much shorter than the recommended length, the
values for µ and σ² are not valid. Instead, suppose that the test was conducted on a
sequence of a million bits, and the value Wobs = 69600 was obtained; then
$$\text{P-value} = \tfrac{1}{2}\,erfc\!\left(\frac{69586.25 - 69600}{\sqrt{2\cdot 70.448718}}\right) = 0.949310.$$
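The parsing of step (1) and the P-value of step (2) can be sketched in Python as follows (illustrative only); math.erfc is the complementary error function, and the µ and σ² values are those quoted above for n = 10⁶.

```python
import math

def lempel_ziv_words(bits: str) -> int:
    """Number of disjoint, cumulatively distinct words in the left-to-right parsing."""
    words, current = set(), ""
    for b in bits:
        current += b
        if current not in words:      # a new word ends at this bit
            words.add(current)
            current = ""
    return len(words)

print(lempel_ziv_words("010110010"))  # 5 words: 0, 1, 01, 10, 010 (as in the table above)

mu, var = 69586.25, 70.448718         # values for n = 10^6 quoted in step (2)
w_obs = 69600
p_value = 0.5 * math.erfc((mu - w_obs) / math.sqrt(2 * var))
print(p_value)                        # ~0.949310
```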
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 2 of Section 2.10.4 is ≥ 0.01 (P-value = 0.949310), the
conclusion is that the sequence is random.
Note that for n = 10⁶, if Wobs had fallen below 69,561, then the conclusion would have been that
the sequence is significantly compressible and, therefore, not random.
2.10.7 Input Size Recommendations
It is recommended that each sequence to be tested consist of a minimum of 1,000,000 bits (i.e.,
n ≥ 10⁶).
2.10.8 Example
(input) n = 1,000,000
(conclusion) Since P-value < 0.01, reject the sequence as being random.
2.11 Linear Complexity Test

The focus of this test is the length of a linear feedback shift register (LFSR). The purpose of this
test is to determine whether or not the sequence is complex enough to be considered random.
Random sequences are characterized by longer LFSRs. An LFSR that is too short implies non-
randomness.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
K The number of degrees of freedom; K = 6 has been hard coded into the test.
2.11.3 Test Statistic and Reference Distribution
χ2(obs): A measure of how well the observed number of occurrences of fixed length
LFSRs matches the expected number of occurrences under an assumption of
randomness.
(1) Partition the n-bit sequence into N independent blocks of M bits, where n = MN.
(2) Using the Berlekamp-Massey algorithm5, determine the linear complexity Li of each of
the N blocks (i = 1,…,N). Li is the length of the shortest linear feedback shift register
sequence that generates all bits in the block i. Within any Li-bit sequence, some
combination of the bits, when added together modulo 2, produces the next bit in the
sequence (bit Li + 1).
For this block, the trial feedback algorithm works. If this were not the case, other
feedback algorithms would be attempted for the block (e.g., adding bits 1 and 3 to
produce bit 5, or adding bits 1, 2 and 3 to produce bit 6, etc.).
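Step (2) relies on the Berlekamp-Massey algorithm. The following Python sketch (illustrative only, not the test suite code) computes the linear complexity Li of a block of bits over GF(2):

```python
def berlekamp_massey(bits) -> int:
    """Length of the shortest LFSR that generates the given 0/1 sequence (over GF(2))."""
    n = len(bits)
    c, b = [0] * n, [0] * n        # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for N in range(n):
        # Discrepancy between the LFSR prediction and the actual bit.
        d = bits[N]
        for i in range(1, L + 1):
            d ^= c[i] & bits[N - i]
        if d == 1:
            t = c[:]
            for i in range(n - N + m):
                c[N - m + i] ^= b[i]
            if L <= N // 2:
                L, m, b = N + 1 - L, N, t
    return L

print(berlekamp_massey([1, 0, 1, 0, 1, 0, 1, 0]))   # 2: an alternating sequence needs a 2-stage LFSR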
(3) Under an assumption of randomness, compute the theoretical mean µ:
$$\mu = \frac{M}{2} + \frac{9 + (-1)^{M+1}}{36} - \frac{M/3 + 2/9}{2^M}.$$
⁵ Defined in The Handbook of Applied Cryptography; A. Menezes, P. van Oorschot and S. Vanstone; CRC Press, 1997.
For the example in this section, $\mu = \frac{13}{2} + \frac{9 + (-1)^{13+1}}{36} - \frac{13/3 + 2/9}{2^{13}} = 6.777222$.
(4) For each substring, calculate a value of Ti, where $T_i = (-1)^M\cdot(L_i - \mu) + 2/9$.
(6) Compute $\chi^2(obs) = \sum_{i=0}^{K}\frac{(\nu_i - N\pi_i)^2}{N\pi_i}$, where π0 = 0.01047, π1 = 0.03125, π2 = 0.125, π3 =
0.5, π4 = 0.25, π5 = 0.0625, π6 = 0.02078 are the probabilities computed by the
equations in Section 3.11.
(7) Compute P-value = igamc(K/2, χ²(obs)/2).
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 7 of Section 2.11.4 is ≥ 0.01, the conclusion is that the
sequence is random.
Note that if the P-value were < 0.01, this would have indicated that the observed frequency
counts of Ti stored in the νi bins varied from the expected values; it is expected that the
distribution of the frequencies of the Ti (in the νi bins) should be proportional to the computed πi,
as shown in step (6) of Section 2.11.5.
2.11.7 Input Size Recommendations
Choose n ≥ 10⁶. The value of M must be in the range 500 ≤ M ≤ 5000, and N ≥ 200 for the χ²
result to be valid (see Section 3.11 for a discussion).
2.11.8 Example
2.12 Serial Test

The focus of this test is the frequency of all possible overlapping m-bit patterns across the entire
sequence. The purpose of this test is to determine whether the number of occurrences of the 2^m
m-bit overlapping patterns is approximately the same as would be expected for a random
sequence. Random sequences have uniformity; that is, every m-bit pattern has the same chance
of appearing as every other m-bit pattern. Note that for m = 1, the Serial test is equivalent to the
Frequency test of Section 2.1.
Serial(m,n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
2.12.3 Test Statistics and Reference Distribution
∇ψ²m(obs) and ∇²ψ²m(obs): A measure of how well the observed frequencies of m-bit patterns
match the expected frequencies of the m-bit patterns.
(1) Form an augmented sequence ε′: Extend the sequence by appending the first m-1 bits to
the end of the sequence for distinct values of n.
(2) Determine the frequency of all possible overlapping m-bit blocks, all possible
overlapping (m-1)-bit blocks and all possible overlapping (m-2)-bit blocks. Let
ν_{i1…im} denote the frequency of the m-bit pattern i1…im; let ν_{i1…im-1} denote the frequency of
the (m-1)-bit pattern i1…im-1; and let ν_{i1…im-2} denote the frequency of the (m-2)-bit pattern
i1…im-2.
For the example in this section, when m = 3, then (m-1) = 2, and (m-2) = 1. The
frequency of all 3-bit blocks is: ν000 = 0, ν001 = 1, ν010 = 1, ν011 = 2, ν100 = 1, ν101 = 2, ν110
= 2, ν111 = 1. The frequency of all possible (m-1)-bit blocks is: ν00 = 1, ν01 = 3, ν10 = 3,
ν11 = 3. The frequency of all (m-2)-bit blocks is: ν0 = 4, ν1 = 6.
(3) Compute:
$$\psi^2_m = \frac{2^m}{n}\sum_{i_1\cdots i_m}\left(\nu_{i_1\cdots i_m}-\frac{n}{2^m}\right)^2 = \frac{2^m}{n}\sum_{i_1\cdots i_m}\nu^2_{i_1\cdots i_m} - n$$
$$\psi^2_{m-1} = \frac{2^{m-1}}{n}\sum_{i_1\cdots i_{m-1}}\left(\nu_{i_1\cdots i_{m-1}}-\frac{n}{2^{m-1}}\right)^2 = \frac{2^{m-1}}{n}\sum_{i_1\cdots i_{m-1}}\nu^2_{i_1\cdots i_{m-1}} - n$$
$$\psi^2_{m-2} = \frac{2^{m-2}}{n}\sum_{i_1\cdots i_{m-2}}\left(\nu_{i_1\cdots i_{m-2}}-\frac{n}{2^{m-2}}\right)^2 = \frac{2^{m-2}}{n}\sum_{i_1\cdots i_{m-2}}\nu^2_{i_1\cdots i_{m-2}} - n$$
(4) Compute: ∇ψ²m = ψ²m − ψ²m−1, and ∇²ψ²m = ψ²m − 2ψ²m−1 + ψ²m−2.
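The computations in steps (2)–(4) can be sketched in Python as follows (illustrative only), using P-value1 = igamc(2^(m−2), ∇ψ²m/2) and P-value2 = igamc(2^(m−3), ∇²ψ²m/2) (the latter formula also appears in Section 3.12). The 10-bit input reproduces the block frequencies of step (2) and the P-values quoted in the conclusion below (0.808792 and 0.670320).

```python
from scipy.special import gammaincc   # gammaincc(a, x) = igamc(a, x)

def psi_sq(bits: str, m: int) -> float:
    """psi^2_m = (2^m / n) * sum of squared overlapping m-bit pattern counts - n (circular)."""
    if m <= 0:
        return 0.0
    n = len(bits)
    augmented = bits + bits[: m - 1]
    counts = {}
    for j in range(n):
        block = augmented[j : j + m]
        counts[block] = counts.get(block, 0) + 1
    return (2 ** m / n) * sum(c * c for c in counts.values()) - n

def serial_p_values(bits: str, m: int):
    d1 = psi_sq(bits, m) - psi_sq(bits, m - 1)                            # nabla psi^2_m
    d2 = psi_sq(bits, m) - 2 * psi_sq(bits, m - 1) + psi_sq(bits, m - 2)  # nabla^2 psi^2_m
    return gammaincc(2 ** (m - 2), d1 / 2), gammaincc(2 ** (m - 3), d2 / 2)

print(serial_p_values("0011011101", 3))    # (~0.808792, ~0.670320)
```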
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 5 of Section 2.12.4 is ≥ 0.01 (P-value1 = 0.808792 and P-
value2 = 0.670320), the conclusion is that the sequence is random.
Note that if ∇²ψ²m or ∇ψ²m had been large, then non-uniformity of the m-bit blocks is implied.
2.12.8 Example
#00s = 250116; #01s = #10s = 249855; #11s = 250174
(conclusion) Since both P-value1 and P-value2 were ≥ 0.01, accept the sequences as random
for both tests.
2.13 Approximate Entropy Test

As with the Serial test of Section 2.12, the focus of this test is the frequency of all possible
overlapping m-bit patterns across the entire sequence. The purpose of the test is to compare the
frequency of overlapping blocks of two consecutive/adjacent lengths (m and m+1) against the
expected result for a random sequence.
ApproximateEntropy(m,n), where:
m The length of each block – in this case, the first block length used in the test.
m+1 is the second block length used.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
χ2(obs): A measure of how well the observed value of ApEn(m) (see step 6 in Section
2.13.4) matches the expected value.
2.13.4 Test Description
(1) Augment the n-bit sequence to create n overlapping m-bit sequences by appending m-1
bits from the beginning of the sequence to the end of the sequence.
For example, if ε = 0100110101 and m = 3, then n = 10. Append the 0 and 1 at the
beginning of the sequence to the end of the sequence. The sequence to be tested
becomes 010011010101. (Note: This is done for each value of m.)
(2) A frequency count is made of the n overlapping blocks (e.g., if a block containing εj to
εj+m-1 is examined at time j, then the block containing εj+1 to εj +m is examined at time
j+1). Let the count of the possible m-bit ((m+1)-bit) values be represented as C_i^m,
where i is the m-bit value.
For the example in this section, the overlapping m-bit blocks (where m = 3) become 010,
100, 001, 011, 110, 101, 010, 101, 010, and 101. The calculated counts for the 2^m = 2³ =
8 possible m-bit strings are: #000 = 0, #001 = 1, #010 = 3, #011 = 1, #100 = 1, #101 = 3, #110 = 1, #111 = 0.
(3) Compute C_i^m = #i/n for each value of i.
For the example in this section, C³000 = 0, C³001 = 0.1, C³010 = 0.3, C³011 = 0.1, C³100 = 0.1,
C³101 = 0.3, C³110 = 0.1, C³111 = 0.
(4) Compute $\varphi^{(m)} = \sum_{i=0}^{2^m-1}\pi_i\log\pi_i$, where πi = C³j, and j = log₂ i.
For the example in this section, ϕ(3) = 0(log 0) + 0.1(log 0.1) + 0.3(log 0.3) + 0.1(log
0.1) + 0.1(log 0.1) + 0.3(log 0.3) + 0.1(log 0.1) + 0(log 0) = -1.64341772.
Step 1: For the example in this section, m is now 4, the sequence to be tested becomes
0100110101010.
Step 2: The overlapping blocks become 0100, 1001, 0011, 0110, 1101, 1010, 0101,
1010, 0101, 1010. The calculated values are: #0011 = 1, #0100 = 1, #0101 = 2, #0110 =
1, #1001 = 1, #1010 = 3, #1101 = 1, and all other patterns are zero.
Step 3: C40011 = C40100 = C40110 = C41001 = C41101 = 0.1, C40101 = 0.2, C41010 = 0.3, and all
other values are zero.
Step 4: ϕ(4) = 0 + 0 + 0 + 0.1(log 0.1) + 0.1(log 0.1) + 0.2(log 0.2) + 0.1(log 0.1) +
0 + 0 + 0.1(log 0.1) + 0.3(log 0.3) + 0 + 0 + 0.1(log 0.1) + 0 + 0 = -1.83437197.
(6) Compute the test statistic: χ²(obs) = 2n[log 2 − ApEn(m)], where ApEn(m) = ϕ(m) − ϕ(m+1).
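The full computation (steps 1–6, followed by P-value = igamc(2^(m−1), χ²/2), the step-7 formula not shown in this excerpt) can be sketched in Python as follows (illustrative only); it reproduces the example P-value of 0.261961 for ε = 0100110101 and m = 3.

```python
import math
from scipy.special import gammaincc    # gammaincc(a, x) = igamc(a, x)

def phi(bits: str, m: int) -> float:
    """phi(m): sum of pi*log(pi) over the n overlapping m-bit blocks of the augmented string."""
    n = len(bits)
    augmented = bits + bits[: m - 1]
    counts = {}
    for j in range(n):
        block = augmented[j : j + m]
        counts[block] = counts.get(block, 0) + 1
    return sum((c / n) * math.log(c / n) for c in counts.values())

def approximate_entropy_p_value(bits: str, m: int) -> float:
    n = len(bits)
    ap_en = phi(bits, m) - phi(bits, m + 1)       # ApEn(m)
    chi2 = 2.0 * n * (math.log(2) - ap_en)        # test statistic of step (6)
    return gammaincc(2 ** (m - 1), chi2 / 2)      # igamc(2^(m-1), chi2/2)

print(approximate_entropy_p_value("0100110101", 3))    # ~0.261961
```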
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 7 of Section 2.13.4 is ≥ 0.01 (P-value = 0.261961), the
conclusion is that the sequence is random.
Note that small values of ApEn(m) would imply strong regularity (see step 6 of Section 2.13.4).
Large values would imply substantial fluctuation or irregularity.
2.13.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) m = 2; n = 100
(conclusion) Since P-value ≥ 0.01, accept the sequence as random.
2.14 Cumulative Sums (Cusum) Test

The focus of this test is the maximal excursion (from zero) of the random walk defined by the
cumulative sum of adjusted (-1, +1) digits in the sequence. The purpose of the test is to
determine whether the cumulative sum of the partial sequences occurring in the tested sequence
is too large or too small relative to the expected behavior of that cumulative sum for random
sequences. This cumulative sum may be considered as a random walk. For a random sequence,
the excursions of the random walk should be near zero. For certain types of non-random
sequences, the excursions of this random walk from zero will be large.
CumulativeSums(mode,n), where:
Additional input for the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
mode A switch for applying the test either forward through the input sequence (mode =
0) or backward through the sequence (mode = 1).
z: The largest excursion from the origin of the cumulative sums in the corresponding (-1,
+1) sequence.
The reference distribution for the test statistic is the normal distribution.
(1) Form a normalized sequence: The zeros and ones of the input sequence (ε) are converted
to values Xi of –1 and +1 using Xi = 2εi – 1.
(2) Compute partial sums Si of successively larger subsequences, each starting with X1 (if
mode = 0) or Xn (if mode = 1).
Mode = 0 (forward) Mode = 1 (backward)
S1 = X1 S1 = Xn
S2 = X1 + X2 S2 = Xn + Xn-1
S3 = X1 + X2 + X3 S3 = Xn + Xn-1 + Xn-2
. .
. .
Sk = X1 + X2 + X3 + … + Xk Sk = Xn + Xn-1 + Xn-2 + … + Xn-k+1
. .
. .
Sn = X1 + X2 + X3 + … + Xk + …+ Xn Sn = Xn + Xn-1 + Xn-2 + … + Xk-1 + …+ X1
That is, Sk = Sk-1 + Xk for mode 0, and Sk = Sk-1 + Xn-k+1 for mode 1.
For the example in this section, when mode = 0 and X = 1, (-1), 1, 1, (-1), 1, (-1), 1, 1, 1,
then:
S1 = 1
S2 = 1 + (-1) = 0
S3 = 1 + (-1) + 1 = 1
S4 = 1 + (-1) + 1 + 1 = 2
S5 = 1 + (-1) + 1 + 1 + (-1) = 1
S6 = 1 + (-1) + 1 + 1 + (-1) + 1 = 2
S7 = 1 + (-1) + 1 + 1 + (-1) + 1 + (-1) = 1
S8 = 1 + (-1) + 1 + 1 + (-1) + 1 + (-1) + 1 = 2
S9 = 1 + (-1) + 1 + 1 + (-1) + 1 + (-1) + 1 + 1 = 3
S10 = 1 + (-1) + 1 + 1 + (-1) + 1 + (-1) + 1 + 1 + 1 = 4
(3) Compute the test statistic z =max1≤k≤n|Sk|, where max1≤k≤n|Sk| is the largest of the absolute
values of the partial sums Sk.
(4) Compute
$$\text{P-value} = 1 - \sum_{k=\frac{-n/z+1}{4}}^{\frac{n/z-1}{4}}\left[\Phi\!\left(\frac{(4k+1)z}{\sqrt{n}}\right)-\Phi\!\left(\frac{(4k-1)z}{\sqrt{n}}\right)\right] + \sum_{k=\frac{-n/z-3}{4}}^{\frac{n/z-1}{4}}\left[\Phi\!\left(\frac{(4k+3)z}{\sqrt{n}}\right)-\Phi\!\left(\frac{(4k+1)z}{\sqrt{n}}\right)\right],$$
where Φ is the standard normal cumulative distribution function.
For the example in this section, P-value = 0.4116588.
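The P-value formula of step (4) can be evaluated directly, as in the following Python sketch (illustrative only). The summation limits are rounded to integers (ceiling for the lower limit, floor for the upper), which reproduces the example value for z = 4, n = 10.

```python
from math import erf, sqrt, ceil, floor

def Phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cusum_p_value(z: int, n: int) -> float:
    total = 0.0
    for k in range(ceil((-n / z + 1) / 4), floor((n / z - 1) / 4) + 1):
        total -= Phi((4 * k + 1) * z / sqrt(n)) - Phi((4 * k - 1) * z / sqrt(n))
    for k in range(ceil((-n / z - 3) / 4), floor((n / z - 1) / 4) + 1):
        total += Phi((4 * k + 3) * z / sqrt(n)) - Phi((4 * k + 1) * z / sqrt(n))
    return 1.0 + total

print(cusum_p_value(z=4, n=10))    # ~0.4116588
```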
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 4 of Section 2.14.4 is ≥ 0.01 (P-value = 0.411658), the
conclusion is that the sequence is random.
Note that when mode = 0, large values of this statistic indicate that there are either “too many
ones” or “too many zeros” at the early stages of the sequence; when mode = 1, large values of
this statistic indicate that there are either “too many ones” or “too many zeros” at the late stages.
Small values of the statistic would indicate that ones and zeros are intermixed too evenly.
It is recommended that each sequence to be tested consist of a minimum of 100 bits (i.e., n ≥
100).
2.14.8 Example
(input) ε = 11001001000011111101101010100010001000010110100011
00001000110100110001001100011001100010100010111000
(input) n = 100
2.15 Random Excursions Test

The focus of this test is the number of cycles having exactly K visits in a cumulative sum
random walk. The cumulative sum random walk is derived from partial sums after the (0,1)
sequence is transferred to the appropriate (-1, +1) sequence. A cycle of a random walk consists
of a sequence of steps of unit length taken at random that begin at and return to the origin. The
purpose of this test is to determine if the number of visits to a particular state within a cycle
deviates from what one would expect for a random sequence. This test is actually a series of
eight tests (and conclusions), one test and conclusion for each of the states: -4, -3, -2, -1 and +1,
+2, +3, +4.
RandomExcursions(n), where:
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
χ2(obs): For a given state x, a measure of how well the observed number of state visits
within a cycle match the expected number of state visits within a cycle, under an
assumption of randomness.
(1) Form a normalized (-1, +1) sequence X: The zeros and ones of the input sequence (ε) are
changed to values of –1 and +1 via Xi = 2εi – 1.
(2) Compute the partial sums Si of successively larger subsequences, each starting with X1.
Form the set S = {Si}.
S1 = X1
S2 = X1 + X2
S3 = X1 + X2 + X3
.
.
Sk = X1 + X2 + X3 + … + Xk
.
.
Sn = X1 + X2 + X3 + … + Xk + …+ Xn
(3) Form a new sequence S' by attaching zeros before and after the set S. That is, S' = 0, s1,
s2, … , sn, 0.
(Figure: plot of the example random walk S′, in which the zero crossings can be observed.)
(4) Let J = the total number of zero crossings in S', where a zero crossing is a value of zero
in S ' that occurs after the starting zero. J is also the number of cycles in S′, where a
cycle of S′ is a subsequence of S′consisting of an occurrence of zero, followed by no-
zero values, and ending with another zero. The ending zero in one cycle may be the
beginning zero in another cycle. The number of cycles in S ' is the number of zero
crossings. If J < 500, discontinue the test6.
For the example in this section, if S' = {0, –1, 0, 1, 0, 1, 2, 1, 2, 1, 2, 0}, then J = 3 (there
are zeros in positions 3, 5 and 12 of S'). The zero crossings are easily observed in the
above plot. Since J = 3, there are 3 cycles, consisting of {0, -1, 0}, {0, 1, 0} and {0, 1,
2, 1, 2, 1, 2, 0}.
⁶ J times the minimum of the probabilities found in the table in Section 3.15 must be ≥ 5 in order to satisfy the
empirical rule for Chi-square computations.
(5) For each cycle and for each non-zero state value x having values –4 ≤ x ≤ -1 and 1 ≤ x ≤
4, compute the frequency of each x within each cycle.
For the example in this section, in step 3, the first cycle has one occurrence of –1, the
second cycle has one occurrence of 1, and the third cycle has three occurrences each of 1
and 2. This can be visualized using the following table.
Cycles
State Cycle 1 Cycle 2 Cycle 3
x (0, -1, 0) (0, 1, 0) (0,1,2,1,2,1,2,0)
-4 0 0 0
-3 0 0 0
-2 0 0 0
-1 1 0 0
1 0 1 3
2 0 0 3
3 0 0 0
4 0 0 0
(6) For each of the eight states of x, compute νk(x) = the total number of cycles in which
state x occurs exactly k times among all cycles, for k = 0, 1, …, 5 (for k = 5, all
frequencies ≥ 5 are stored in ν5(x)). Note that $\sum_{k=0}^{5}\nu_k(x) = J$.
ν1(-4) = ν2(-4) = ν3(-4) = ν4(-4) = ν5(-4) = 0 (the -4 state occurs exactly {1,
2, 3, 4, ≥5} times in 0 cycles).
And so on….
Number of Cycles
State x 0 1 2 3 4 5
-4 3 0 0 0 0 0
-3 3 0 0 0 0 0
-2 3 0 0 0 0 0
-1 2 1 0 0 0 0
1 1 1 0 1 0 0
2 2 0 0 1 0 0
3 3 0 0 0 0 0
4 3 0 0 0 0 0
(7) For each of the eight states of x, compute the test statistic
$$\chi^2(obs) = \sum_{k=0}^{5}\frac{(\nu_k(x) - J\pi_k(x))^2}{J\pi_k(x)},$$
where πk(x) is the probability that the state x occurs k times in a random distribution (see
Section 3.15 for a table of πk values). The values for πk(x) and their method of
calculation are provided in Section 3.15. Note that eight χ² statistics will be produced
(i.e., for x = -4, -3, -2, -1, 1, 2, 3, 4).
(8) For each state of x, compute P-value = igamc(5/2, χ²(obs)/2). Eight P-values will be
produced.

For the example when x = 1, P-value = igamc(5/2, 4.333033/2) = 0.502529.
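For the state x = 1 of the example, the test statistic and P-value of steps (7) and (8) can be reproduced with the following Python sketch (illustrative only); the probabilities πk(x) are computed from the formulas given in Section 3.15.

```python
from scipy.special import gammaincc    # gammaincc(a, x) = igamc(a, x)

def pi_k(x: int, k: int) -> float:
    """Theoretical probabilities pi_k(x), k = 0..5, from Section 3.15."""
    ax = abs(x)
    if k == 0:
        return 1 - 1 / (2 * ax)
    if k <= 4:
        return (1 / (4 * x * x)) * (1 - 1 / (2 * ax)) ** (k - 1)
    return (1 / (2 * ax)) * (1 - 1 / (2 * ax)) ** 4     # pooled class k >= 5

# nu_k(1) from the table above (state x = 1), with J = 3 cycles.
nu, J, x = [1, 1, 0, 1, 0, 0], 3, 1
chi2 = sum((nu[k] - J * pi_k(x, k)) ** 2 / (J * pi_k(x, k)) for k in range(6))
print(chi2, gammaincc(5 / 2, chi2 / 2))    # ~4.33, P-value ~0.502529
```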
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
2.15.6 Conclusion and Interpretation of Test Results
Since the P-value obtained in step 8 of Section 2.15.4 is ≥ 0.01 (P-value = 0.502529), the
conclusion is that the sequence is random.
Note that if χ2(obs) were too large, then the sequence would have displayed a deviation from the
theoretical distribution for a given state across all cycles.
It is recommended that each sequence to be tested consist of a minimum of 1,000,000 bits (i.e.,
n ≥ 10⁶).
2.15.8 Example
(processing) J = 1490
(conclusion) For seven of the states of x, the P-value is ≥ 0.01, and the conclusion would be
that the sequence was random. However, for one state of x (x = -1), the P-value
is < 0.01, so the conclusion would be that the sequence is non-random. When
contradictions arise, further sequences should be examined to determine whether
or not this behavior is typical of the generator.
2.16 Random Excursions Variant Test

The focus of this test is the total number of times that a particular state is visited (i.e., occurs) in
a cumulative sum random walk. The purpose of this test is to detect deviations from the
expected number of visits to various states in the random walk. This test is actually a series of
eighteen tests (and conclusions), one test and conclusion for each of the states: -9, -8, …, -1 and
+1, +2, …, +9.
RandomExcursionsVariant(n), where:
n The length of the bit string; available as a parameter during the function call.
Additional input used by the function, but supplied by the testing code:
ε The sequence of bits as generated by the RNG or PRNG being tested; this exists
as a global structure at the time of the function call; ε = ε1, ε2, … , εn.
ξ: For a given state x, the total number of times that the given state is visited during the
entire random walk, as determined in step 4 of Section 2.16.4.
The reference distribution for the test statistic is the half normal (for large n). (Note: If ξ is
distributed as normal, then |ξ| is distributed as half normal.) If the sequence is random, then the
test statistic will be about 0. If there are too many ones or too many zeroes, then the test statistic
will be large.
(1) Form the normalized (-1, +1) sequence X in which the zeros and ones of the input
sequence (ε) are converted to values of –1 and +1 via X = X1, X2, … , Xn, where Xi = 2εi
– 1.
(2) Compute partial sums Si of successively larger subsequences, each starting with x1. Form
the set S = {Si}.
S1 = X1
S2 = X1 + X2
S3 = X1 + X2 + X3
.
.
Sk = X1 + X2 + X3 + . . . + Xk
.
.
Sn = X1 + X2 + X3 + . . . + Xk + . . .+ Xn
(3) Form a new sequence S' by attaching zeros before and after the set S. That is, S' = 0, s1,
s2, … , sn, 0.
(4) For each of the eighteen non-zero states of x, compute ξ(x) = the total number of times
that state x occurred across all J cycles.
For the example in this section, ξ(-1) = 1, ξ(1) = 4, ξ(2) = 3, and all other ξ(x) = 0.
(5) For each ξ(x), compute P-value = erfc$\left(\frac{|\xi(x) - J|}{\sqrt{2J(4|x| - 2)}}\right)$. Eighteen P-values are computed.
See Section 5.5.3.3 for the definition of erfc.

For the example in this section, when x = 1, P-value = erfc$\left(\frac{|4 - 3|}{\sqrt{2\cdot 3\,(4|1| - 2)}}\right)$ = 0.683091.
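The whole test can be sketched in a few lines of Python (illustrative only). The 10-bit input below is chosen so that its random walk matches the S′ of the example (ξ(-1) = 1, ξ(1) = 4, ξ(2) = 3, J = 3).

```python
import math

def random_excursions_variant(bits: str):
    """Return J and, for each state x = -9..-1, 1..9, the pair (xi(x), P-value)."""
    S, s = [], 0
    for b in bits:
        s += 2 * int(b) - 1                 # 0 -> -1, 1 -> +1
        S.append(s)
    walk = [0] + S + [0]                    # S' = 0, S1, ..., Sn, 0
    J = sum(1 for v in walk[1:] if v == 0)  # number of zero crossings (cycles)
    results = {}
    for x in list(range(-9, 0)) + list(range(1, 10)):
        xi = walk.count(x)
        p = math.erfc(abs(xi - J) / math.sqrt(2 * J * (4 * abs(x) - 2)))
        results[x] = (xi, p)
    return J, results

J, res = random_excursions_variant("0110110101")
print(J, res[1])    # J = 3; for x = 1: xi = 4, P-value ~0.683091
```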
2.16.5 Decision Rule (at the 1 % Level)
If the computed P-value is < 0.01, then conclude that the sequence is non-random. Otherwise,
conclude that the sequence is random.
Since the P-value obtained in step 5 of Section 2.16.4 is ≥ 0.01 for the state x = 1 (P-value =
0.683091), the conclusion is that the sequence is random.
It is recommended that each sequence to be tested consist of a minimum of 1,000,000 bits (i.e.,
n ≥ 10⁶).
2.16.8 Example
(processing) J = 1490
+9 1610 0.593930 Random
(conclusion) Since the P-value ≥ 0.01 for each of the eighteen states of x, accept the sequence
as random.
3 TECHNICAL DESCRIPTION OF TESTS
This section contains the mathematical background for the tests in the NIST test suite. Each
subsection corresponds to the appropriate subsection in Section 2. The relevant references for
each subsection are provided at the end of that subsection.
The test is derived from the well-known Central Limit Theorem for the random walk,
Sn = X1 + · · · + Xn. According to the Central Limit Theorem,
$$\lim_{n\to\infty} P\!\left(\frac{S_n}{\sqrt{n}} \le z\right) = \Phi(z) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z} e^{-u^2/2}\,du. \qquad (1)$$
This classical result serves as the basis of the simplest test for randomness.
It implies that, for positive z,
$$P\!\left(\frac{|S_n|}{\sqrt{n}} \le z\right) = 2\Phi(z) - 1.$$
According to the test based on the statistic s = |Sn|/√n, evaluate the observed value
|s(obs)| = |X1 + . . . + Xn|/√n, and then calculate the corresponding P-value, which is
2[1 − Φ(|s(obs)|)] = erfc(|s(obs)|/√2). Here, erfc is the (complementary) error function
$$erfc(z) = \frac{2}{\sqrt{\pi}}\int_{z}^{\infty} e^{-u^2}\,du.$$
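A minimal Python sketch of this test (illustrative only) follows directly from the formulas above:

```python
import math

def monobit_p_value(bits: str) -> float:
    """Frequency (monobit) test: P-value = erfc(|s(obs)| / sqrt(2))."""
    s_n = sum(2 * int(b) - 1 for b in bits)        # sum of the +/-1 values
    s_obs = abs(s_n) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_p_value("1011010101"))               # ~0.527 for this short illustrative input
```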
References for Test
[1] Kai Lai Chung, Elementary Probability Theory with Stochastic Processes.
New York: Springer-Verlag, 1979 (especially pp. 210-217).
The parameters of this test are M and N , so that n = MN , i.e., the orig-
inal string is partitioned into N substrings, each of length M. For each of
these substrings, the probability of ones is estimated by the observed relative
frequency of 1’s, πi , i = 1, . . . , N . The sum
$$\chi^2(obs) = 4M\sum_{i=1}^{N}\left(\pi_i - \frac{1}{2}\right)^2$$
The specific test used here is based on the distribution of the total number of runs, Vn. For the
fixed proportion π = Σj εj /n (which, by the Frequency test of Section 3.1, must have been
established to be close to 0.5: |π − 1/2| ≤ 2/√n),
$$\lim_{n\to\infty} P\!\left(\frac{V_n - 2n\pi(1-\pi)}{2\sqrt{n}\,\pi(1-\pi)} \le z\right) = \Phi(z). \qquad (2)$$
To evaluate Vn, define, for k = 1, . . . , n − 1, r(k) = 0 if εk = εk+1 and r(k) = 1 if εk ≠ εk+1.
Then $V_n = \sum_{k=1}^{n-1} r(k) + 1$. The P-value reported is
$$erfc\!\left(\frac{|V_n(obs) - 2n\pi(1-\pi)|}{2\sqrt{2n}\,\pi(1-\pi)}\right).$$
Large values of Vn(obs) indicate oscillation in the string of ε's which is too fast; small values
indicate oscillation which is too slow.
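A corresponding Python sketch of the runs count and P-value (illustrative only):

```python
import math

def runs_p_value(bits: str) -> float:
    """Runs test: erfc(|V_n(obs) - 2 n pi (1 - pi)| / (2 sqrt(2 n) pi (1 - pi)))."""
    n = len(bits)
    pi = bits.count("1") / n                               # observed proportion of ones
    v_n = 1 + sum(bits[k] != bits[k + 1] for k in range(n - 1))
    return math.erfc(abs(v_n - 2 * n * pi * (1 - pi)) /
                     (2 * math.sqrt(2 * n) * pi * (1 - pi)))

print(runs_p_value("1001101011"))                          # short illustrative input
```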
References for Test
[1] Jean D. Gibbons, Nonparametric Statistical Inference, 2nd ed. New York:
Marcel Dekker, 1985 (especially pp. 50-58).
[2] Anant P. Godbole and Stavros G. Papastavridis, (ed), Runs and pat-
terns in probability: Selected papers. Dordrecht: Kluwer Academic, 1994.
so that
$$P(\nu \le m) = \sum_{r=0}^{M} P(\nu \le m\,|\,r)\binom{M}{r}\frac{1}{2^M}. \qquad (3)$$
The theoretical probabilities π0 , π1 , . . . , πK of these classes are determined
from (3).
which, under the randomness hypothesis, has an approximate χ2 -distribution
with K degrees of freedom. The reported P − value is
$$\text{P-value} = \frac{\int_{\chi^2(obs)}^{\infty} e^{-u/2}\,u^{K/2-1}\,du}{\Gamma(K/2)\,2^{K/2}} = igamc\!\left(\frac{K}{2},\frac{\chi^2(obs)}{2}\right),$$
The following table contains selected values of K and M with the correspond-
ing probabilities obtained from (3). Cases K = 3, M = 8; K = 5, M = 128;
and K = 6, M = 10000 are currently embedded in the test suite code.
K = 3, M = 8
classes {ν ≤ 1} {ν = 2} {ν = 3} {ν ≥ 4}
probabilities π0 = 0.2148 π1 = 0.3672 π2 = 0.2305 π3 = 0.1875
K = 5, M = 128
classes {ν ≤ 4} {ν = 5} {ν = 6} {ν = 7}
probabilities π0 = 0.1174 π1 = 0.2430 π2 = 0.2493 π3 = 0.1752
{ν = 8} {ν ≥ 9}
π4 = 0.1027 π5 = 0.1124
K = 5, M = 512
classes {ν ≤ 6} {ν = 7} {ν = 8} {ν = 9}
probabilities π0 = 0.1170 π1 = 0.2460 π2 = 0.2523 π3 = 0.1755
{ν = 10} {ν ≥ 11}
π4 = 0.1015 π5 = 0.1077
K = 5, M = 1000
classes {ν ≤ 7} {ν = 8} {ν = 9} {ν = 10}
probabilities π0 = 0.1307 π1 = 0.2437 π2 = 0.2452 π3 = 0.1714
{ν = 11} {ν ≥ 12}
π4 = 0.1002 π5 = 0.1088
K = 6, M = 10000
Large values of χ2 indicate that the sequence has clusters of ones; the gener-
ation of “random” sequences by humans tends to lead to small values of νn
(see Revesz, 1990, p. 55).
[2] Anant P. Godbole and Stavros G. Papastavridis (ed), Runs and Patterns
in Probability: Selected Papers. Dordrecht: Kluwer Academic, 1994.
This test is a specification of one of the tests coming from the DIEHARD [1] battery of tests.
It is based on the result of Kovalenko (1972) and also
formulated in Marsaglia and Tsay (1985). The result states that the rank
R of the M × Q random binary matrix takes values r = 0, 1, 2, . . . , m where
m ≡ min(M, Q) with probabilities
$$p_r = 2^{r(Q+M-r)-MQ}\prod_{i=0}^{r-1}\frac{(1-2^{i-Q})(1-2^{i-M})}{1-2^{i-r}}.$$
The probability values are fixed in the test suite code for M = Q = 32. The
number M is then a parameter of this test, so that ideally n = M²N, where
N is the new "sample size." In practice, values for M and N are chosen so
that the discarded part of the string, n − NM², is fairly small.
FM = #{Rℓ = M},
Interpretation of this test: large values of χ2 (obs) indicate that the devi-
ation of the rank distribution from that corresponding to a random sequence
is significant. For example, pseudo random matrices produced by a shift-
register generator formed by less than M successive vectors systematically
have rank Rℓ ≡ M, while for truly random data, the proportion of such
occurrences should be only about 0.29.
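The rank probabilities pr can be evaluated directly; the Python sketch below (illustrative only) reproduces the full-rank proportion of about 0.29 mentioned above for the 32 × 32 matrices used in the test suite code.

```python
def rank_probability(r: int, M: int, Q: int) -> float:
    """P(rank = r) for a random M x Q binary matrix, from the formula for p_r above."""
    p = 2.0 ** (r * (Q + M - r) - M * Q)
    for i in range(r):
        p *= (1 - 2.0 ** (i - Q)) * (1 - 2.0 ** (i - M)) / (1 - 2.0 ** (i - r))
    return p

print(rank_probability(32, 32, 32))   # ~0.2888 (full rank)
print(rank_probability(31, 32, 32))   # ~0.5776
print(rank_probability(30, 32, 32))   # ~0.1284
```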
[3] G. Marsaglia and L. H. Tsay (1985), “Matrices and the structure of ran-
dom number sequences,” Linear Algebra and its Applications. Vol. 67, pp.
147-156.
Let xk be the kth bit, where k = 1, . . . , n. Assume that the bits are coded −1 and +1. Let
$$f_j = \sum_{k=1}^{n} x_k \exp\bigl(2\pi i (k-1)j/n\bigr),$$
A P-value based on this threshold comes from the binomial distribution. Let N1 be the number
of peaks less than h. Only the first n/2 peaks are considered. Let N0 = .95n/2 and
$d = (N_1 - N_0)/\sqrt{n(.95)(.05)/2}$. The P-value is
$$2(1 - \Phi(|d|)) = erfc\!\left(\frac{|d|}{\sqrt{2}}\right),$$
where Φ(x) is the cumulative probability function of the standard normal distribution.
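A Python sketch of the peak-counting computation is shown below (illustrative only). The 95 % peak threshold h is not defined in this excerpt; the sketch assumes h = √(3n), the threshold used by the original description of this test, and otherwise follows the d statistic and P-value above.

```python
import math
import numpy as np

def spectral_p_value(bits: str) -> float:
    """DFT (spectral) test sketch; assumes the 95% threshold h = sqrt(3 n)."""
    n = len(bits)
    x = np.array([2 * int(b) - 1 for b in bits])
    peaks = np.abs(np.fft.fft(x))[: n // 2]          # only the first n/2 peaks are considered
    h = math.sqrt(3 * n)                             # assumed threshold (see lead-in)
    n1 = int(np.sum(peaks < h))                      # observed number of peaks below h
    n0 = 0.95 * n / 2                                # expected number under randomness
    d = (n1 - n0) / math.sqrt(n * 0.95 * 0.05 / 2)
    return math.erfc(abs(d) / math.sqrt(2))

print(spectral_p_value("1001010011" * 10))           # arbitrary 100-bit illustration
```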
Other P − values based on the series fj or modj that are sensitive to de-
partures from randomness are possible. However, the primary value of the
transform comes from a plot of the series modj . In the accompanying figure,
the top plot shows the series of modj for 4096 bits generated from a satisfac-
tory generator. The line through the plot is the 95 % confidence boundary.
The P − value for this series is 0.8077. The bottom plot shows a correspond-
ing plot for a generator that produces bits that are statistically dependent
in a periodic pattern. In the bottom plot, significantly greater than 5 % of
the magnitudes are beyond the confidence boundary. In addition, there is a
clear structure in the magnitudes that is not present in the top plot. The
P − value for this series is 0.0001.
[1] R. N. Bracewell, The Fourier Transform and Its Applications. New York:
McGraw-Hill, 1986.
(Figure: plots of the DFT magnitudes mod_j versus index for the two series discussed above; the top panel is titled "No Evidence of Frequency Components" and the horizontal axis is labelled "Index".)
3.7 Non-overlapping Template Matching Test
This test rejects sequences exhibiting too many or too few occurrences of a
given aperiodic pattern.
For the test suite code, M and N are chosen so that n = MN and N = 8.
Partition the original string into N blocks of length M. Let Wj = Wj (m, M)
be the number of occurrences of the pattern B in the block j, for j = 1, . . . , N .
Aperiodic Templates for small values of 6 ≤ m ≤ 8
m=6 m=7 m=8
0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1
0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1
0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1
0 0 0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1
0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1
0 0 1 1 0 1 0 0 0 1 0 1 1 0 0 0 0 1 0 1 1
0 0 1 1 1 1 0 0 0 1 1 0 1 0 0 0 0 1 1 0 1
0 1 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 1 0 1 1 1 0 0 1 0 0 1 1 0 0 0 1 0 0 1 1
0 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 0 1 0 1
1 0 0 0 0 0 0 0 1 0 1 1 1 0 0 0 1 0 1 1 1
1 0 1 0 0 0 0 0 1 1 0 1 1 0 0 0 1 1 0 0 1
1 0 1 1 0 0 0 0 1 1 1 0 1 0 0 0 1 1 0 1 1
1 1 0 0 0 0 0 0 1 1 1 1 1 0 0 0 1 1 1 0 1
1 1 0 0 1 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1
1 1 0 1 0 0 0 1 0 0 1 1 1 0 0 1 0 0 0 1 1
1 1 1 0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 0 1
1 1 1 0 1 0 0 1 0 1 1 1 1 0 0 1 0 0 1 1 1
1 1 1 1 0 0 0 1 1 0 1 1 1 0 0 1 0 1 0 1 1
1 1 1 1 1 0 0 1 1 1 1 1 1 0 0 1 0 1 1 0 1
1 0 0 0 0 0 0 0 0 1 0 1 1 1 1
1 0 0 1 0 0 0 0 0 1 1 0 1 0 1
1 0 1 0 0 0 0 0 0 1 1 0 1 1 1
1 0 1 0 1 0 0 0 0 1 1 1 0 1 1
1 0 1 1 0 0 0 0 0 1 1 1 1 0 1
1 0 1 1 1 0 0 0 0 1 1 1 1 1 1
1 1 0 0 0 0 0 0 1 0 0 0 0 1 1
1 1 0 0 0 1 0 0 1 0 0 0 1 1 1
1 1 0 0 1 0 0 0 1 0 0 1 0 1 1
1 1 0 1 0 0 0 0 1 0 0 1 1 1 1
1 1 0 1 0 1 0 0 1 0 1 0 0 1 1
1 1 0 1 1 0 0 0 1 0 1 0 1 1 1
1 1 1 0 0 0 0 0 1 0 1 1 0 1 1
1 1 1 0 0 1 0 0 1 0 1 1 1 1 1
1 1 1 0 1 0 0 0 1 1 0 0 1 1 1
1 1 1 0 1 1 0 0 1 1 0 1 1 1 1
1 1 1 1 0 0 0 0 1 1 1 1 1 1 1
1 1 1 1 0 1 0 1 0 0 0 0 0 0 0
1 1 1 1 1 0 0 1 0 0 1 0 0 0 0
1 1 1 1 1 1 0 1 0 0 1 1 0 0 0
1 0 1 0 0 0 0 0
1 0 1 0 0 1 0 0
1 0 1 0 1 0 0 0
1 0 1 0 1 1 0 0
1 0 1 1 0 0 0 0
1 0 1 1 0 1 0 0
1 0 1 1 1 0 0 0
1 0 1 1 1 1 0 0
1 1 0 0 0 0 0 0
1 1 0 0 0 0 1 0
1 1 0 0 0 1 0 0
1 1 0 0 1 0 0 0
1 1 0 0 1 0 1 0
1 1 0 1 0 0 0 0
1 1 0 1 0 0 1 0
1 1 0 1 0 1 0 0
1 1 0 1 1 0 0 0
1 1 0 1 1 0 1 0
1 1 0 1 1 1 0 0
1 1 1 0 0 0 0 0
1 1 1 0 0 0 1 0
1 1 1 0 0 1 0 0
1 1 1 0 0 1 1 0
1 1 1 0 1 0 0 0
1 1 1 0 1 0 1 0
1 1 1 0 1 1 0 0
1 1 1 1 0 0 0 0
1 1 1 1 0 0 1 0
1 1 1 1 0 1 0 0
1 1 1 1 0 1 1 0
1 1 1 1 1 0 0 0
1 1 1 1 1 0 1 0
1 1 1 1 1 1 0 0
1 1 1 1 1 1 1 0
3.8 Overlapping Template Matching Test
This test rejects sequences which show too many or too few occurrences of
m-runs of ones, but can be easily modified to detect irregular occurrences of
any periodic pattern B.
Let W̃j = W̃j(m, n) be the number of (possibly overlapping) runs of ones of length m in the jth
block. The asymptotic distribution of W̃j is the compound Poisson distribution (the so-called
Pólya-Aeppli law, see Chrysaphinou and Papastavridis, 1988):
$$E\exp\{t\widetilde{W}_j\} \to \exp\left\{\frac{\lambda(e^t-1)}{2-e^t}\right\} \qquad (5)$$
For example,
$$P(U=0) = e^{-\eta},\qquad P(U=1) = \frac{\eta}{2}e^{-\eta},\qquad P(U=2) = \frac{\eta e^{-\eta}}{8}\,[\eta+2],$$
$$P(U=3) = \frac{\eta e^{-\eta}}{8}\left[\frac{\eta^2}{6}+\eta+1\right],\qquad P(U=4) = \frac{\eta e^{-\eta}}{16}\left[\frac{\eta^3}{24}+\frac{\eta^2}{2}+\frac{3\eta}{2}+1\right].$$
The complement to the distribution function of this random variable has the form
$$L(u) = P(U>u) = e^{-\eta}\sum_{\ell=u+1}^{\infty}\frac{\eta^{\ell}}{\ell!}\,\Delta(\ell,u), \quad\text{with}\quad \Delta(\ell,u) = \sum_{k=\ell}^{u}\frac{1}{2^{k}}\binom{k-1}{\ell-1}.$$
Choose K + 1 classes or cells for U, i.e., {U = 0}, {U = 1}, · · · , {U = K − 1}, {U ≥ K}. The
theoretical probabilities π0, π1, . . . , πK of these cells are found from the above formulas. A
reasonable choice could be K = 5, λ = 2, η = 1.
The expression for the P − value is the same as that used in Section 3.7. The
interpretation is that for very small P − values, the sequence shows irregular
occurrences of m-runs of ones.
[2] N.J. Johnson, S. Kotz, and A. Kemp, Discrete Distributions. John Wiley,
2nd ed. New York, 1996 (especially pp. 378-379).
As such, the test is claimed to measure the actual cryptographic significance of
a defect because it is “related to the running time of [an] enemy’s optimal
key-search strategy,” or the effective key size of a cipher system.
The test is not designed to detect a very specific pattern or type of sta-
tistical defect. However, the test is designed “to be able to detect any one of
the very general class of statistical defects that can be modeled by an ergodic
stationary source with finite memory.” Because of this, Maurer claims that
the test subsumes a number of the standard statistical tests.
The test is a compression-type test “based on the idea of Ziv that a uni-
versal statistical test can be based on a universal source coding algorithm.
A generator should pass the test if and only if its output sequence cannot be
compressed significantly.” According to Maurer, the source-coding algorithm
due to Lempel-Ziv “seems to be less suited for application as a statistical test”
because it seems to be difficult to define a test statistic whose distribution
can be determined or approximated.
The test requires a long (on the order of 10 · 2^L + 1000 · 2^L with 6 ≤ L ≤ 16) sequence of bits
which are divided into two stretches of L-bit blocks (6 ≤ L ≤ 16), Q (≥ 10 · 2^L) initialization
blocks and K (≈ 1000 · 2^L) test blocks.
We take K = ceiling(n/L) − Q to maximize its value. The order of mag-
nitude of Q should be specifically chosen to ensure that all possible L-bit
binary patterns do in fact occur within the initialization blocks. The test
is not suited for very large values of L because the initialization takes time
exponential in L.
The test looks back through the entire sequence while walking through the
test segment of L-bit blocks, checking for the nearest previous exact L-bit
template match and recording the distance - in number of blocks - to that
previous match. The algorithm computes the log2 of all such distances for
all the L-bit templates in the test segment (giving, effectively, the number
of digits in the binary expansion of each distance). Then it averages over all
the expansion lengths by the number of test blocks.
$$f_n = \frac{1}{K}\sum_{i=Q+1}^{Q+K}\log_2(\#\text{indices since previous occurrence of the }i\text{th template})$$
The algorithm achieves this efficiently by subscripting a dynamic look-up
table making use of the integer representation of the binary bits constituting
the template blocks. A standardized version of the statistic - the standardiza-
tion being prescribed by the test - is compared to an acceptable range based
on a standard normal (Gaussian) density, making use of the test statistic’s
mean which is given by formula (16) in Maurer (1992),
$$Ef_n = 2^{-L}\sum_{i=1}^{\infty}(1-2^{-L})^{i-1}\log_2 i.$$
The expected value of the test statistic fn is that of the random variable log₂ G, where G = G_L
is a geometric random variable with the parameter 1 − 2^−L.
There are several versions of approximate empirical formulas for the vari-
ance of the form
Here, c(L, K) represents the factor that takes into account the dependent
nature of the occurrences of templates. The latest of the approximations
(Coron and Naccache (1998): not embedded in the test suite code) has the
form
$$c(L,K) = 0.7 - \frac{0.8}{L} + \left(1.6 + \frac{12.8}{L}\right)K^{-4/L}.$$
However, Coron and Naccache (1998) report that “the inaccuracy due to [this
approximation] can make the test 2.67 times more permissive than what is
theoretically admitted.” In other words, the ratio of the standard deviation
of fn obtained from the approximation above to the true standard deviation
deviates considerably from one. In view of this fact and also since all ap-
proximations are based on the “admissible” assumption that Q → ∞, the
randomness hypothesis may be tested by verifying normality of the observed
values fn , assuming that the variance is unknown. This can be done using a
t-test.
and the P-value is
$$erfc\!\left(\frac{f_n - E(L)}{\sqrt{var(f_n)}}\right).$$
[1] Ueli M. Maurer, “A Universal Statistical Test for Random Bit Genera-
tors,” Journal of Cryptology. Vol. 5, No. 2, 1992, pp. 89-105.
[2] J-S Coron and D. Naccache, “An Accurate Evaluation of Maurer’s Uni-
versal Test,” Proceedings of SAC ’98 (Lecture Notes in Computer Science).
Berlin: Springer-Verlag, 1998.
[5] J. Ziv, “Compression, tests for randomness and estimating the statistical
model of an individual sequence,” Sequences (ed. R.M. Capocelli). Berlin:
Springer-Verlag, 1990.
[6] J. Ziv and A. Lempel, “A universal algorithm for sequential data com-
pression,” IEEE Transactions on Information Theory. Vol. 23, pp. 337-343.
The Lempel-Ziv test is thought to subsume the frequency, runs, other com-
pression, and possibly spectral tests, but it may intersect the random binary
matrix rank test. The test is similar to the entropy test and even more similar
to Maurer’s Universal Statistical test. However, the Lempel-Ziv test directly
incorporates the compression heuristic that defines modern information the-
ory.
There are several variations on the Lempel-Ziv algorithm (1977). The test
used here assumes that {Xi }ni=1 is a binary sequence, and specifically pro-
ceeds as follows:
1. Parse the sequence into consecutive disjoint strings (words) so that the
next word is the shortest string not yet seen.
3. Assign each word a prefix and a suffix; the prefix is the number of the
previous word that matches all but the last digit; the suffix is the last
digit.
Note that what drives this compression is the number of substrings in the
parsing. It is possible that, for small n, the Lempel-Ziv compression is actu-
ally longer than the original representation.
Following the work of Aldous and Shields (1988), let W (n) represent the
number of words in the parsing of a binary random sequence of length n.
They show that
$$\lim_{n\to\infty}\frac{E[W(n)]}{n/\log_2 n} = 1,$$
That difficulty was nominally overcome by Kirschenhofer, Prodinger, and
Szpankowski (1994) who prove that
$$\sigma^2[W(n)] \sim \frac{n[C + \delta(\log_2 n)]}{\log_2^3 n}$$
where C = 0.26600 (to five significant places) and δ(·) is a slowly varying
continuous function with mean zero and |δ(·)| < 10−6 .
The given sequence is parsed, and the number of words counted. It is not
necessary to go through the complete Lempel-Ziv encoding, since the number
of words, W , is sufficient. W is used to calculate
$$z = \frac{W - \dfrac{n}{\log_2 n}}{\sqrt{\dfrac{0.266\,n}{\log_2^3 n}}}.$$
For this test, the mean and variance were evaluated using SHA-1 for million-bit sequences. The
mean and variance were computed to be 69586.25 and 70.448718, respectively.
[4] U. M. Maurer (1992), “A Universal Statistical Test for Random Bit Gen-
erators,” Journal of Cryptology. 5, pp. 89-105.
[5] J. Ziv and A. Lempel (1977), “A Universal Algorithm for Sequential Data
Compression,” IEEE Transactions on Information Theory. 23, pp. 337-343.
For a given sequence sn = (ε1, . . . , εn), its linear complexity L(sn) is defined
as the length of the shortest LFSR that generates sn as its first n terms. The
possibility of using the linear complexity characteristic for testing random-
ness is based on the Berlekamp-Massey algorithm, which provides an efficient
way to evaluate finite strings.
When the binary n-sequence sn is truly random, formulas exist [2] for the mean, µn = EL(sn),
and the variance, σn² = Var(L(sn)), of the linear complexity L(sn) = Ln. The Crypt-X
package [1] suggests that the ratio (Ln − µn)/σn is close to a standard normal variable, so that
the corresponding P-values can be found from the normal error function. Indeed, Gustafson
et al. [1] (p. 693) claim that "for large n, L(sn) is approximately normally distributed with
mean n/2 and a variance 86/81," so that the standardized statistic z = (L(sn) − n/2)√(81/86)
would be approximately standard normal.
This is completely false. Even the mean value µn does not behave asymptot-
ically precisely as n/2, and in view of the boundedness of the variance, this
difference becomes significant. More importantly, the tail probabilities of
the limiting distribution are much larger than those of the standard normal
distribution.
$$\frac{1}{3\cdot 2^{2\kappa-1}} + \frac{1}{3\cdot 2^{2\kappa-2}} = \frac{1}{2^{2\kappa-1}}.$$
In view of the discrete nature of this distribution and the impossibility of
attaining the uniform distribution for P − values, the same strategy can be
used that was used with other tests in this situation. Namely, partition the
string of length n, such that n = MN, into N substrings, each of length
M. For the test based on the linear complexity statistic (6), evaluate TM
within the j-th substring of size M, and choose K + 1 classes (depending on
M.) For each of these substrings, the frequencies, ν0 , ν1 , . . . , νK , of values of
TM belonging to any of K +1 chosen classes, ν0 +ν1 +. . .+νK = N , are deter-
mined. It is convenient to choose the classes with end-points at semi-integers.
which, under the randomness hypothesis, has an approximate χ2 -distribution
with K degrees of freedom. The reported P − value is
$$\text{P-value} = \frac{\int_{\chi^2(obs)}^{\infty} e^{-u/2}\,u^{K/2-1}\,du}{\Gamma(K/2)\,2^{K/2}} = igamc\!\left(\frac{K}{2},\frac{\chi^2(obs)}{2}\right).$$
[3] R.A. Rueppel, Analysis and Design of Stream Ciphers. New York: Springer,
1986.
Specifically, for i1, · · · , im running through the set of all 2^m possible 0, 1 vectors of length m,
let ν_{i1···im} denote the frequency of the pattern (i1, · · · , im) in the "circularized" string of bits
(ε1, . . . , εn, ε1, . . . , εm−1). Set
$$\psi_m^2 = \frac{2^m}{n}\sum_{i_1\cdots i_m}\left(\nu_{i_1\cdots i_m}-\frac{n}{2^m}\right)^2 = \frac{2^m}{n}\sum_{i_1\cdots i_m}\nu_{i_1\cdots i_m}^2 - n,$$
Thus, ψ²m is a χ²-type statistic, but it is a common mistake to assume that ψ²m has the
χ²-distribution. Indeed, the frequencies ν_{i1···im} are not independent.
$$\text{P-value2} = igamc\!\left(2^{m-3},\ \nabla^2\psi_m^2/2\right)$$
The result for ∇ψ22 and the usual counting of frequencies is incorrectly given
by Menezes, van Oorschot and Vanstone (1997) on p. 181, formula (5.2): +1
should be replaced by −1.
The convergence of ∇ψ²m to the χ²-distribution was proven by Good (1953).
[1] I. J. Good (1953), “The serial test for sampling numbers and other tests
for randomness,” Proc. Cambridge Philos. Soc.. 47, pp. 276-284.
[2] M. Kimberley (1987), “Comparison of two statistical tests for keystream
sequences,” Electronics Letters. 23, pp. 365-366.
[3] D. E. Knuth (1998), The Art of Computer Programming. Vol. 2, 3rd ed.
Reading: Addison-Wesley, Inc., pp. 61-80.
1996, p. 2083).
For a fixed block length m, one should expect that in long random (irregular)
strings, ApEn(m) ∼ log 2. The limiting distribution of n[log 2 − ApEn(m)]
coincides with that of a χ2 -random variable with 2m degrees of freedom. This
fact provides the basis for a statistical test, as was shown by Rukhin (2000).
where ν_{i1···im} denotes the relative frequency of the template (i1, · · · , im) in the augmented (or
circular) version of the original string, i.e., in the string (ε1, . . . , εn, ε1, . . . , εm−1). Let
ω_{i1···im} = nν_{i1···im} be the frequency of the pattern i1 · · · im. Under our definition,
ω_{i1···im} = Σ_k ω_{i1···im k}, so that for any m, Σ_{i1···im} ω_{i1···im} = n.
When n is large, ApEn(m) and its modified version cannot differ much. Indeed, with
ω′_{i1···im} = (n − m + 1)ν′_{i1···im}, one has
$$\sum_{i_1\cdots i_m}\omega'_{i_1\cdots i_m} = n - m + 1,$$
which suggests that for a fixed m, Φ(m) and Φ̃(m) must be close for large
n. Therefore, Pincus’ approximate entropy and its modified version are also
close, and their asymptotic distributions must coincide.
[2] S. Pincus and R. E. Kalman, “Not all (possibly) “random” sequences are
created equal,” Proc. Natl. Acad. Sci. USA. Vol. 94, April 1997, pp. 3513-
3518.
The test is based on the limiting distribution of the maximum of the absolute values of the
partial sums, max1≤k≤n |Sk|,
$$\lim_{n\to\infty}P\!\left(\frac{\max_{1\le k\le n}|S_k|}{\sqrt{n}}\le z\right) = \frac{1}{\sqrt{2\pi}}\int_{-z}^{z}\sum_{k=-\infty}^{\infty}(-1)^k\exp\!\left\{-\frac{(u-2kz)^2}{2}\right\}du$$
$$= \frac{4}{\pi}\sum_{j=0}^{\infty}\frac{(-1)^j}{2j+1}\exp\!\left\{-\frac{(2j+1)^2\pi^2}{8z^2}\right\} = H(z),\quad z>0. \qquad (10)$$
With the test statistic z = max1≤k≤n |Sk|(obs)/√n, the randomness hypothesis is rejected for
large values of z, and the corresponding P-value is 1 − H(max1≤k≤n |Sk|(obs)/√n) =
1 − G(max1≤k≤n |Sk|(obs)/√n), where the function G(z) is defined by formula (11).
The series H(z) in the last line of (10) converges quickly and should be used for numerical
calculation only for small values of z. The function G(z) (which is equal to H(z) for all z) is
preferable for the calculation for moderate and large values of max1≤k≤n |Sk|(obs)/√n,
$$G(z) = \frac{1}{\sqrt{2\pi}}\int_{-z}^{z}\sum_{k=-\infty}^{\infty}(-1)^k\exp\!\left\{-\frac{(u-2kz)^2}{2}\right\}du = \sum_{k=-\infty}^{\infty}(-1)^k\left[\Phi((2k+1)z)-\Phi((2k-1)z)\right]$$
$$= \Phi(z)-\Phi(-z)+2\sum_{k=1}^{\infty}(-1)^k\left[\Phi((2k+1)z)-\Phi((2k-1)z)\right]$$
$$= \Phi(z)-\Phi(-z)-2\sum_{k=1}^{\infty}\left[2\Phi((4k-1)z)-\Phi((4k+1)z)-\Phi((4k-3)z)\right]$$
$$\approx \Phi(z)-\Phi(-z)-2\left[2\Phi(3z)-\Phi(5z)-\Phi(z)\right] \approx 1-\frac{4}{\sqrt{2\pi}\,z}\exp\!\left\{-\frac{z^2}{2}\right\},\quad z\to\infty. \qquad (11)$$
where Φ(x) is the standard normal distribution.
$$= 1 - \sum_{k=-\infty}^{\infty}P\bigl((4k-1)z < S_n < (4k+1)z\bigr) + \sum_{k=-\infty}^{\infty}P\bigl((4k+1)z < S_n < (4k+3)z\bigr).$$
Let J denote the total number of such excursions in the string. The limiting distribution for this
(random) number J (i.e., the number of zeros among the sums Sk, k = 1, 2, . . . , n when S0 = 0)
is known to be
$$\lim_{n\to\infty}P\!\left(\frac{J}{\sqrt{n}} < z\right) = \sqrt{\frac{2}{\pi}}\int_{0}^{z}e^{-u^2/2}\,du,\quad z>0. \qquad (12)$$
The test rejects the randomness hypothesis immediately if J is too small, i.e., if the following
P-value is small:
$$P(J < J(obs)) \approx \sqrt{\frac{2}{\pi}}\int_{0}^{J(obs)/\sqrt{n}}e^{-u^2/2}\,du = P\!\left(\frac{1}{2},\frac{J^2(obs)}{2n}\right).$$
If J < max(0.005√n, 500), the randomness hypothesis is rejected. Otherwise the number of
visits of the random walk S to a certain state is evaluated.
and, for k = 1, 2, . . . ,
$$P(\xi(x)=k) = \frac{1}{4x^2}\left(1-\frac{1}{2|x|}\right)^{k-1}. \qquad (14)$$
This means that ξ(x) = 0 with probability 1 − 1/(2|x|); otherwise (with probability 1/(2|x|)),
ξ(x) coincides with a geometric random variable with the parameter 1/(2|x|).
The above results are used for randomness testing in the following way. For
a ”representative” collection of x-values (say, 1 ≤ x ≤ 7 or −7 ≤ x ≤ −1:
−4 ≤ x ≤ 4 is used in the test suite code), evaluate the observed frequencies
νk(x) of the number k of visits to the state x during J excursions which occur in the string. So
νk(x) = $\sum_{j=1}^{J}\nu_k^j(x)$, with $\nu_k^j(x) = 1$ if the number of visits to x during the jth excursion
(j = 1, . . . , J) is exactly equal to k, and $\nu_k^j(x) = 0$ otherwise. Pool the values of ξ(x) into
classes, say, k = 0, 1, . . . , 4 with an
additional class k ≥ 5. The theoretical probabilities for these classes are:
$$\pi_0(x) = P(\xi(x)=0) = 1-\frac{1}{2|x|};$$
$$\pi_k(x) = P(\xi(x)=k) = \frac{1}{4x^2}\left(1-\frac{1}{2|x|}\right)^{k-1},\quad k=1,\ldots,4;$$
$$\pi_5(x) = P(\xi(x)\ge 5) = \frac{1}{2|x|}\left(1-\frac{1}{2|x|}\right)^{4}.$$
These probabilities have the form
which, for any x under the randomness hypothesis, must have approximately
a χ2 -distribution with 5 degrees of freedom. This is a valid test when
J min πk (x) ≥ 5, i.e., if J ≥ 500. (The test suite code uses π4 (x = 4)
for min πk (x).) If this condition does not hold, values of ξ(x) must be pooled
into larger classes.
The corresponding battery of P-values is reported. These values are obtained from the formula
$$1 - P\!\left(\frac{5}{2},\frac{\chi^2(obs)(x)}{2}\right).$$
The randomness hypothesis is rejected for a given state x when this value is small.
4. TESTING STRATEGY AND RESULT INTERPRETATION
Three topic areas will be addressed in this section: (1) strategies for the statistical analysis of a
random number generator, (2) the interpretation of empirical results using the NIST Statistical
Test Suite, and (3) general recommendations and guidelines.
In practice, there are many distinct strategies employed in the statistical analysis of a random
number generator. NIST has adopted the strategy outlined in Figure 1. Figure 1 provides an
architectural illustration of the five stages involved in the statistical testing of a random number
generator.
Select a hardware or software based generator for evaluation. The generator should produce a
binary sequence of 0’s and 1’s of a given length n. Examples of pseudorandom generators
(PRNG) that may be selected include a DES-based PRNG from ANSI X9.17 (Appendix C), and
two further methods that are specified in FIPS 186 (Appendix 3) and are based on the Secure
Hash Algorithm (SHA-1) and the Data Encryption Standard (DES).
For a fixed sequence of length n and the pre-selected generator, construct a set of m binary
sequences and save the sequences to a file7.
Invoke the NIST Statistical Test Suite using the file produced in Stage 2 and the desired
sequence length. Select the statistical tests and relevant input parameters (e.g., block length) to
be applied.
An output file will be generated by the test suite with relevant intermediate values, such as test
statistics, and P-values for each statistical test. Based on these P-values, a conclusion regarding
the quality of the sequences can be made.
⁷ Sample data may also be obtained from George Marsaglia's Random Number CDROM, at
http://stat.fsu.edu/pub/diehard/cdrom/.
Figure 1: Architecture of the five stages of statistical testing.
Stage 1: Select a generator.
Stage 2: A set of m binary sequences, each of length n, is produced from the selected generator.
Stage 3: Each binary stream is input into the test suite; every statistical test evaluates the sequence.
Stage 4: P-values are probabilistic values which lie in the unit interval, i.e., in the range [0, 1].
Stage 5: P-values are used to either affirm the null hypothesis (i.e., that the sequence is random) or reject the hypothesis, producing PASS/FAIL assessments.
For each statistical test, a set of P-values (corresponding to the set of sequences) is produced.
For a fixed significance level, a certain percentage of P-values are expected to indicate failure.
For example, if the significance level is chosen to be 0.01 (i.e., α = 0.01), then about 1 % of the
sequences are expected to fail. A sequence passes a statistical test whenever the P-value ≥ α
and fails otherwise. For each statistical test, the proportion of sequences that pass is computed
and analyzed accordingly. More in-depth analysis should be performed using additional
statistical procedures (see Section 4.2.2).
Three scenarios typify events that may occur due to empirical testing. Case 1: The analysis of
the P-values does not indicate a deviation from randomness. Case 2: The analysis clearly
indicates a deviation from randomness. Case 3: The analysis is inconclusive.
In the event that either of these approaches fails (i.e., the corresponding null hypothesis must be
rejected), additional numerical experiments should be conducted on different samples of the
generator to determine whether the phenomenon was a statistical anomaly or clear evidence of
non-randomness.
Given the empirical results for a particular statistical test, compute the proportion of sequences
that pass. For example, if 1000 binary sequences were tested (i.e., m = 1000), α = 0.01 (the
significance level), and 996 binary sequences had P-values ≥ .01, then the proportion is
996/1000 = 0.9960.
The range of acceptable proportions is determined using the confidence interval defined as
p̂ ± 3·√(p̂(1 − p̂)/m), where p̂ = 1 − α and m is the sample size. If the proportion falls outside of this
interval, then there is evidence that the data is non-random. Note that other standard deviation
values could be used. For the example above, the confidence interval is
.99 ± 3·√(.99(.01)/1000) = .99 ± 0.0094392 (i.e., the proportion should lie above 0.9805607). This
can be illustrated using a graph as shown in Figure 2. The confidence interval was calculated
using a normal distribution as an approximation to the binomial distribution, which is reasonably
accurate for large sample sizes (e.g., n ≥ 1000).
[Figure 2: P-value Plot]
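This check is simple to automate. Below is a minimal sketch in C of the three-standard-deviation
interval described above; the function name and the hard-coded example values (α = 0.01, m = 1000,
996 passing sequences) are illustrative only and mirror the example in the text.

#include <math.h>
#include <stdio.h>

/* Acceptable range for the proportion of passing sequences, using the
   normal approximation described above:
   p_hat +/- 3*sqrt(p_hat*(1 - p_hat)/m), where p_hat = 1 - alpha.     */
void proportionInterval(double alpha, int m, double *low, double *high)
{
    double p_hat = 1.0 - alpha;
    double half  = 3.0 * sqrt(p_hat * (1.0 - p_hat) / (double) m);

    *low  = p_hat - half;
    *high = p_hat + half;
}

int main(void)
{
    double low, high;
    int    passed = 996, m = 1000;            /* example from the text       */
    double proportion = (double) passed / m;  /* 0.9960                      */

    proportionInterval(0.01, m, &low, &high); /* range ~ [0.98056, 0.99944]  */
    printf("proportion = %.4f, acceptable range = [%.7f, %.7f]\n",
           proportion, low, high);
    return 0;
}

Compiled with the math library (e.g., cc proportion.c -lm), the program confirms that the
example proportion of 0.9960 lies inside the acceptable interval.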
4.2.2 Uniform Distribution of P-values
If the sequences under test are random, the P-values produced by each statistical test are
expected to be uniformly distributed over the unit interval.
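One concrete form of this check appears in the suite's final analysis report (Section 5), where the
unit interval is divided into ten bins and a chi-square statistic is computed over the bin counts.
The sketch below mirrors that calculation; the function name is illustrative, and the igamc call
assumes the Cephes special-function library that the suite links against (Section 5.5.3.3).

double igamc(double a, double x);   /* Cephes complemented incomplete gamma */

/* Chi-square check of the uniformity of a set of s P-values: the unit
   interval is divided into ten equal bins, the observed counts are compared
   with the expected count s/10, and the resulting statistic is converted to
   a P-value using the chi-square distribution with 9 degrees of freedom.   */
double pValueUniformity(const double *p, int s)
{
    int    F[10] = {0}, i, bin;
    double expected = s / 10.0, chi2 = 0.0;

    for (i = 0; i < s; i++) {
        bin = (int) (p[i] * 10.0);
        if (bin == 10)
            bin = 9;               /* place a P-value of exactly 1.0 in the last bin */
        F[bin]++;
    }
    for (i = 0; i < 10; i++)
        chi2 += (F[i] - expected) * (F[i] - expected) / expected;

    return igamc(9.0 / 2.0, chi2 / 2.0);
}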
In practice, many reasons can be given to explain why a data set has failed a statistical test.
The following is a list of possible explanations. The list was compiled based upon NIST
statistical testing efforts.
Unless otherwise specified, it should be assumed that each statistical test was tailored to
handle a particular problem class. Since the NIST test code has been written to allow the
selection of input parameters, the code has been generalized in a number of ways.
Unfortunately, this generality does not necessarily translate into ease of coding.
A few statistical tests have been constrained with artificial upper bounds. For
example, the random excursions tests assume that a sequence contains no more than
max{1000, n/128} cycles. Similarly, the Lempel-Ziv Compression test assumes that the longest
word is in the neighborhood of log2 n, where n is the sequence length. Conceivably, these
fixed parameters may have to be increased, depending on experimental conditions.
(b) An underdeveloped (immature) statistical test.
There are occasions when either probability or complexity theory isn’t sufficiently
developed or understood to facilitate a rigorous analysis of a statistical test.
Over time, statistical tests are revamped in light of new results. Since many
statistical tests are based upon asymptotic approximations, careful work needs to be
done to determine how good an approximation is.
It might be plausible that a hardware RNG or a software RNG has failed due to a
flaw in the design or due to a coding implementation error. In either case, careful
review must be made to rule out this possibility.
Another area that needs to be scrutinized is the harnessing of test data. The test data
produced by a (P)RNG must be processed before being used by a statistical test. For
example, processing might include dividing the output stream from the (P)RNG into
appropriately sized blocks, and translating the 0's to negative ones. On occasion, it
was determined that the failures from a statistical test were due to errors in the code
used to process the data.
In practice, a statistical test will not provide reliable results for all seemingly valid
input parameters. It is important to recognize that constraints are imposed on a
test-by-test basis. Take the Approximate Entropy Test, for example. For a
sequence length on the order of 10^6, one might expect that block lengths
approaching log2 n would be acceptable. Unfortunately, this is not the case.
Empirical evidence suggests that beyond m = 14, the observed test statistic begins
to disagree with the expected value (in particular, for known good generators,
such as SHA-1). Hence, certain statistical tests may be sensitive to input parameters.
Sequence Length
The determination of how long the sequences used for statistical testing should be is
difficult to address. If one examines the FIPS 140-1 statistical tests, it is evident that
sequences should be about 20,000 bits long.
However, relatively short sequence lengths are problematic in the sense that some
statistical tests, such as Maurer's Universal Statistical Test, require extremely long
sequences. One of the reasons is that asymptotic approximations are used in
determining the limiting distribution. Statements regarding the distribution of certain
test statistics are more difficult to make for short sequences than for their longer
counterparts.
Sample Size
The issue of sample size is tied to the choice of the significance level. NIST
recommends that, for these tests, the user should fix the significance level to be at
least 0.001, but no larger than 0.01⁸. A sample size that is disproportionate to the
significance level may not be suitable. For example, if the significance level (α) is
chosen to be 0.001, then it is expected that 1 out of every 1000 sequences will be
rejected. If a sample of only 100 sequences is selected, it would be rare to observe a
rejection. In this case, the conclusion might be drawn that a generator is producing
random sequences, when in all likelihood a sufficiently large sample was not used.
Thus, the sample size should be on the order of the inverse of the significance level
(α⁻¹). That is, for a level of 0.001, a sample should have at least 1000 sequences.
Ideally, many distinct samples should be analyzed.
Block Size
Block sizes are dependent on the individual statistical test. In the case of Maurer's
Universal Statistical test, block sizes range from 1 to 16. However, for each specific
block size, a minimum sequence length should be used. If the block size were fixed
at 16, a sequence of more than a billion bits would be required. For some users, that
may not be feasible.
Intuitively, it would seem that the larger the block size, the more information could
be gained from the parsing of a sequence, as in the Approximate Entropy test.
However, a block size that is too large should not be selected either, for otherwise
the empirical results may be misleading and incorrect, because the test statistic would
then be better approximated by a different probability distribution. In practice, NIST
advises selecting a block size no larger than log2 n, where n is the sequence length.
However, certain exceptions hold, and thus NIST suggests choosing a smaller block
size.
8 Note that for FIPS 140-2, the significance level has been set to 0.0001 for the power-up tests.
Template
Certain statistical tests are suited for detecting global non-randomness. However,
other statistical tests are better suited to assessing local non-randomness, such as tests
developed to detect the presence of too many m-bit patterns in a sequence. Still, it
stands to reason that templates of a block size greater than log2 n should not be
chosen, since the frequency counts will most probably be in the neighborhood of zero,
which does not provide any useful information. Thus, appropriate choices must be
made.
Other Considerations
Another frequently asked question concerns the need for applying a monobits test
(i.e., the Frequency test) when Maurer's Universal Statistical test is already applied. The
perception is that Maurer's Universal Statistical test supersedes the need to apply a
monobits test. This may hold true for infinite-length sequences. However, it is important
to keep in mind that there will be instances when a finite binary sequence will pass
Maurer's Universal Statistical test, yet fail the monobits test. Because of this fact, NIST
recommends that the Frequency test be applied first. If the results of this test support
the null hypothesis, then the user may proceed to apply other statistical tests.
Given a concern regarding the application of multiple tests, NIST performed a study to
determine the dependence between the tests. The performance of the tests was checked by
using a Kolmogorov-Smirnov test of uniformity on the P-values obtained from the sequences.
However, it required an assumption that the sequences that were generated to test uniformity
were sufficiently random. There are many tests in the suite. Some tests should intuitively give
independent answers (e.g., the frequency test and a runs test that conditions on frequencies
should assess completely different aspects of randomness). Other tests, such as the cusum test
and the runs test, result in P-values that are likely to be correlated.
To understand the dependencies between the tests in order to eliminate redundant tests, and to
ensure that the tests in the suite are able to detect a reasonable range of patterned behaviors, a
factor analysis of the resulting P-values was performed. More precisely, in order to assess
independence, m sequences of binary pseudorandom digits were generated, each of length n,
and all k=161 tests in the suite were applied to those sequences to determine their randomness.
Each test produced a significance probability; denote by pij the significance probability of test i
on sequence j.
Given the uniformly distributed pij, the transformation zij = Φ⁻¹(pij) leads to normally
distributed variables. Let zj be the vector of transformed significance probabilities
corresponding to the jth sequence. A principal components analysis was performed on z1, …,
zm. Usually, a small number of components suffices to explain a great proportion of the
variability, and the number of these components can be used to quantify the number of
"dimensions" of non-randomness spanned by the suite's tests. The analysis extracts 161
factors, equal to the number of tests.
The first factor is the one that explains the largest proportion of the variability. If many tests
are correlated, their P-values will depend heavily on this factor, and the fraction of total
variability explained by this factor will be large. The second factor explains the second largest
proportion of variability, subject to the constraint that the second factor is orthogonal to the
first, and so on for subsequent factors. The fractions corresponding to the first 50 factors were
plotted for the tests, based on Blum-Blum-Shub sequences of length 1,000,000. This graph
showed that there is no large redundancy among the tests.
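For readers who wish to reproduce the transformation zij = Φ⁻¹(pij), the sketch below inverts the
standard normal distribution function by bisection, using the identity Φ(z) = erfc(−z/√2)/2; it is
an illustration only and is not the routine used in the SAS-based analysis.

#include <math.h>

/* Invert the standard normal distribution function by bisection, using
   Phi(z) = erfc(-z / sqrt(2)) / 2.  Fifty bisection steps over [-10, 10]
   are ample for significance probabilities p in (0, 1).                  */
double normalScore(double p)
{
    double lo = -10.0, hi = 10.0, mid;
    int    i;

    for (i = 0; i < 50; i++) {
        mid = 0.5 * (lo + hi);
        if (0.5 * erfc(-mid / sqrt(2.0)) < p)
            lo = mid;              /* Phi(mid) is still below p: move up   */
        else
            hi = mid;              /* Phi(mid) is at or above p: move down */
    }
    return 0.5 * (lo + hi);
}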
The correlation matrix formed from z1, …, zm was constructed via a statistical software
application (SAS). The same conclusion was supported by the structure of these matrices. The
degree of duplication among the tests seems to be very small.
5. USER’S GUIDE
This section describes the set-up and proper usage of the statistical tests developed by NIST that
are available in the NIST test code. Descriptions of the algorithms and data structures that were
utilized are included in this section.
This toolbox was specifically designed for individuals interested in conducting statistical testing
of cryptographic (P)RNGs. Several implementations of PRNGs utilized during the development
phase of the project have also been included.
Caveat: The test code was developed using a SUN workstation under the Solaris operating
system. No guarantee is made regarding the compilation and execution of the PRNG
implementations on other platforms. For this reason, a switch has been incorporated into the
source codes to disable the inclusion of the PRNGs. The flag INCLUDE_GENERATORS can
be found in the defs.h header file.
This package addresses the problem of evaluating (P)RNGs for randomness.
The objectives during the development of the NIST statistical test suite included:
• Platform Independence: The source code was written in ANSI C. However, some
modification may have to be made, depending on the target platform and the compiler.
• Flexibility: The user may freely introduce their own math software routines.
• Extensibility: New statistical tests can easily be incorporated.
• Versatility: The test suite is useful in performing tests for PRNGs, RNGs and cipher
algorithms.
• Portability: With minor modifications, source code may be ported to different
platforms. The NIST source code was ported onto the SGI Origin, and a 200 MHz PC
using the Microsoft Visual C++ 6.0 development environment.
5.2 System Requirements
This software package was developed on a SUN workstation under the Solaris operating system.
All of the source code was written in ANSI C. Source code porting activities were successful for
the SGI Origin (IRIX 6.5 with the SGI C compiler) and a desktop computer (IBM PC under
Windows 98 and Microsoft C++ 6.0).
In practice, minor modifications will have to be introduced during the porting process in order to
ensure the correct interpretation of tests. In the event that a user wishes to compile and execute
the code on a different platform, sample data and the corresponding results for each of the
statistical tests have been provided. In this manner, the user will be able to gain confidence that
the ported statistical test suite is functioning properly. For additional details see Appendix C.
For the majority of the statistical tests, memory must be allocated dynamically in order to
proceed. In the event that workspace cannot be provided, the statistical test returns a diagnostic
message.
To setup a copy of the NIST test code on a workstation, follow the instructions below.
• Copy the sts.tar file into the root directory. Use the instruction, tar -xvf
sts.tar, to unbundle the source code.
• Several files and subdirectories should have been created. The subdirectories
include data/, experiments/, generators/, include/, obj/, src/ and templates/.
The four files include assess, grid, makefile, and stats.
• The data/ subdirectory is reserved for pre-existing RNG data files that are
under investigation. Currently, two formats are supported, i.e., data files
consisting of ASCII zeroes and ones, and binary formatted hexadecimal
character strings.
• The generators/ subdirectory contains the source codes for nine pseudo-
random number generators. These include Blum-Blum-Shub, Cubic
Congruential Generator, the FIPS 186 one way function based on SHA-1 (G-
SHA-1), Linear Congruential Generator, Modular Exponentiation, Micali-
Schnorr, Quadratic Congruential Generator I and II, and Exclusive OR. Code
for the ANSI X9.17 generator and the FIPS 186 one way function based on
DES (G-DES) were removed from the package because of possible export
issues. User defined PRNGs should be copied into this subdirectory, with the
corresponding modifications to the makefile, utilities1.c, defs.h, and proto.h
files.
• The include/ subdirectory contains the header files for the statistical tests,
pseudo-random number generators, and associated routines.
• The obj/ subdirectory contains the object files corresponding to the statistical
tests, pseudo random number generators and other associated routines.
• The src/ subdirectory contains the source codes for each of the statistical tests.
• Now run make to build the package. An executable file named assess should appear in the
project directory.
Follow the menu prompts. The files stats and grid correspond respectively to the
logs of the per sequence frequency of zeroes and ones and the 0-1 matrix of
fail/pass assignments for each individual sequence and each individual statistical
test.
5.4 Data Input and Output of Empirical Results
Data input may be supplied in one of two ways. If the user has a stand-alone program or
hardware device which implements a RNG, the user may want to construct as many files of
arbitrary length as desired. Files should contain binary sequences stored as either ASCII
characters consisting of zeroes and ones, or as hexadecimal characters stored in binary format.
These files can then be independently examined by the NIST Statistical Test Suite.
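For users preparing their own files, the ASCII format is straightforward to read. The routine below
is a minimal sketch (the function name is illustrative and it is not the suite's own reader); the
hexadecimal-in-binary format would require a separate routine.

#include <stdio.h>

/* Read up to n ASCII '0'/'1' characters from a data file (e.g., data/data.pi)
   into a caller-supplied array; whitespace and other characters are skipped.
   Returns the number of bits actually read, or -1 if the file cannot be
   opened.                                                                   */
long readAsciiBits(const char *filename, unsigned char *bits, long n)
{
    FILE *fp = fopen(filename, "r");
    long  count = 0;
    int   c;

    if (fp == NULL)
        return -1;
    while (count < n && (c = fgetc(fp)) != EOF) {
        if (c == '0' || c == '1')
            bits[count++] = (unsigned char) (c - '0');
    }
    fclose(fp);
    return count;
}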
In the event that storage space is a problem, the user may want to modify the reference
implementation and plug-in their implementation of the PRNG under evaluation. The bit
streams will be stored directly in the epsilon data structure, which contains binary sequences.
The output logs of empirical results will be stored in two files, stats and results, which correspond
respectively to the computational information (e.g., test statistics and intermediate parameters)
and the P-values for each statistical test applied to a data set.
If these files are not properly created, then it is most probably due to the inability to open the
files for output. See Appendix J for further details.
Five sample files have been created and are contained in the data/ subdirectory. Four of these
files correspond to the Mathematica9 generated binary expansion of several classical numbers for
over 1,000,000 bits. These files are data.e, data.pi, data.sqrt2, and data.sqrt3. The Mathematica
program used in creating these files can be found in Appendix E. A fifth file, data.sha1, was
constructed utilizing the SHA-1 hash function.
The test suite package has been decomposed into a series of modules which include the:
statistical tests, (pseudo)random number generators, empirical results (hierarchical) directories,
and data.
The three primary components of the NIST test suite are the statistical tests, the underlying
mathematical software, and the pseudo random number generators under investigation. Other
9 Mathematica, Stephen Wolfram's Computer Algebra System, http://www.mathematica.com.
components include the source code library files, the data directory and the hierarchical directory
(experiments/) containing the sample data files and empirical result logs, respectively.
The NIST test suite contains sixteen tests which will be useful in studying and evaluating the
binary sequences produced by random and pseudo random number generators. As in previous
work in this field, statistical tests must be devised which, under some hypothesized distribution,
employ a particular test statistic, such as the number of runs of ones or the number of times a
pattern appears in a bit stream. The majority of the tests in the test suite either (1) examine the
distribution of zeroes and ones in some fashion, (2) study the harmonics of the bit stream
utilizing spectral methods, or (3) attempt to detect patterns via some generalized pattern
matching technique on the basis of probability theory or information theory.
In practice, any number of problems can arise if the user executes this software in uncharted
domains. It is plausible that sequence lengths well beyond those used during the testing phase
(i.e., on the order of 10^6) may be chosen. If sufficient memory is available, there should not be
any reason why the software should fail. However, in many instances, user-defined limits are
prescribed for data structures and workspace. Under these conditions, it may be necessary to
increase certain parameters, such as MAXNUMOFTEMPLATES and MAXNUMBEROFCYCLES.
Several parameters that may be modified by a user are listed in Table 3.
The parameter ALPHA denotes the significance level that determines the region of acceptance
and rejection. NIST recommends that ALPHA be in the range [0.001, 0.01].
The parameter ITMAX is utilized by the special functions; it represents the upper bound on the
maximum number of iterations allowed for iterative computations.
The parameter KAPPA is utilized by the gcf and gser routines defined in the special-functions.c
file. It represents the desired accuracy for the incomplete gamma function computations.
Lastly, the MAXNUMBEROFCYCLES represents the maximum number of cycles that NIST
anticipates in any particular binary sequence.
Table 3. User Prescribed Statistical Test Parameters
Binary sequences are stored in the epsilon data structure. To efficiently store this information, a
bit field structure was introduced. This is a C structure defined to strictly utilize a single bit to
store a zero or a one. In essence, the bit field structure utilizes the minimum amount of storage
necessary to hold the information that will be manipulated by the statistical tests. It is flexible
enough to allow easy manipulation by accessing individual bits via an index specification.
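A minimal sketch of such a bit field structure is shown below; the type and field names are
illustrative rather than the suite's actual identifiers.

#include <stdio.h>
#include <stdlib.h>

/* A bit field structure of the kind described above: the field b is declared
   to be exactly one bit wide, so it can only hold a 0 or a 1.               */
typedef struct {
    unsigned int b : 1;
} BitField;

int main(void)
{
    long      n = 8, i;
    BitField *epsilon = (BitField *) calloc(n, sizeof(BitField));

    for (i = 0; i < n; i++)
        epsilon[i].b = (i % 3 == 0);      /* store individual bits by index */
    for (i = 0; i < n; i++)
        printf("%u", epsilon[i].b);       /* read them back the same way    */
    printf("\n");
    free(epsilon);
    return 0;
}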
For many of the tests, efficiency is desired in both time and space. A binary tree data structure is
used for this purpose. In this case, the binary tree is implemented as an array, whose root node
serves no particular purpose. The binary tree is used in several different ways. One way is as a
Boolean structure where each individual node represents either a zero or a one, but whose
content indicates the absence or presence of an individual bit. Parsing this tree indicates the
presence or absence of a word of fixed length. In addition, the binary tree structure is used as an
efficient means to tabulate the frequency of all 2^m words of length m in a finite binary sequence.
This data structure is also employed in the construction of the dictionary required in the Lempel-
Ziv coding scheme. The restriction in this case, however, is that if the stream isn't
equidistributed (i.e., is very patterned), then the Lempel-Ziv test may break down. This is due to
the unbalanced binary tree10 which may ensue. In this case, the procedure is halted by the test
suite and a warning statement is returned.
10 The binary tree will be unbalanced due to the presence of too many words exceeding log2 n, where n is the
sequence length.
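To illustrate the tabulation itself, the sketch below counts the occurrences of every m-bit word in a
sequence using a flat array of 2^m counters indexed by each word's integer value. The suite performs
the same tabulation with its array-based binary tree; the function and parameter names here are
illustrative only.

/* Tabulate the frequency of every m-bit word occurring in a binary sequence
   of length n (overlapping windows, no wrap-around).  The caller supplies a
   count array with room for 2^m entries.                                    */
void wordFrequencies(const unsigned char *bits, long n, int m, long *count)
{
    long i, value, words = 1L << m;       /* 2^m possible words */
    int  j;

    for (i = 0; i < words; i++)
        count[i] = 0;
    for (i = 0; i + m <= n; i++) {
        value = 0;
        for (j = 0; j < m; j++)           /* build the word's integer value */
            value = (value << 1) | bits[i + j];
        count[value]++;
    }
}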
5.5.3.3 Mathematical Software
Special functions required by the test suite are the incomplete gamma function and the
complementary error function. The standard normal cumulative distribution function is also
required, but it can be expressed in terms of the error function.
One of the initial concerns regarding the development of the reference implementation was the
dependencies that were required in order to gain reliable mathematical software for the special
functions required in the statistical tests. To resolve this matter, the test suite makes use of the
following libraries:
The complementary error function (erfc) utilized in the package is the ANSI C function
contained in the math.h header file and its corresponding mathematical library. This library
should be included during compilation.
The incomplete gamma function is based on an approximation formula whose origin is described
in the Handbook of Applied Mathematical Functions [1] and in the Numerical Recipes in C book
[6]. Depending on the values of its parameters a and x, the incomplete gamma function may be
approximated using either a continued fraction development or a series development.
Gamma Function
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt
Incomplete Gamma Function
Q(a,x) ≡ 1 − P(a,x) ≡ Γ(a,x)/Γ(a) = (1/Γ(a)) ∫ₓ^∞ e^(−t) t^(a−1) dt
where Q(a,0) = 1 and Q(a,∞) = 0.
NIST has chosen to use the Cephes C language special functions math library in the test
software. Cephes may be found at http://people.ne.mediaone.net/moshier/index.html#Cephes or
on the GAMS server at http://math.nist.gov/cgi-bin/gams-serve/list-module components/
CEPHES/CPROB/13192.html. The specific functions that are utilized are igamc (for the
complementary incomplete gamma function) and lgam (for the logarithmic gamma function).
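A minimal usage sketch follows, showing the erfc call from math.h together with the expression of
the standard normal distribution function in terms of it; the printed value of erfc(1.0) can be
checked against Table F.2 in Appendix F.

#include <math.h>
#include <stdio.h>

/* The standard normal cumulative distribution function expressed through the
   complementary error function: Phi(z) = erfc(-z / sqrt(2)) / 2.            */
double normalCDF(double z)
{
    return 0.5 * erfc(-z / sqrt(2.0));
}

int main(void)
{
    printf("Phi(0.0)  = %.15f\n", normalCDF(0.0)); /* 0.5                              */
    printf("erfc(1.0) = %.15f\n", erfc(1.0));      /* cf. Table F.2: 0.157299207050285 */
    return 0;
}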
A sample NIST Statistical Test Suite monolog is described below. Note: In this section bold
items indicate input.
In order to invoke the NIST statistical test suite, type assess, followed by the desired bit stream
length, n. For example, assess 100000. A series of menu prompts will be displayed in order to
select the data to be analyzed and the statistical tests to be applied. The first screen appears as
follows:
GENERATOR OPTIONS
OPTION ----> 0
User Prescribed Input File: data/data.pi
Once the user has prescribed a particular data set or PRNG, the statistical tests to be applied must
be selected. The following screen is displayed:
STATISTICAL TESTS
[01] Frequency [02] Block Frequency
[03] Cumulative Sums [04] Runs
[05] Longest Runs of Ones [06] Rank
[07] Spectral - Discrete Fourier Transform [08] Nonperiodic Template Matchings
[09] Overlapping Template Matchings [10] Universal Statistical
[11] Approximate Entropy [12] Random Excursions
[13] Random Excursions Variant [14] Serial
[15] Lempel-Ziv Complexity [16] Linear Complexity
INSTRUCTIONS
Enter 0 if you DO NOT want to apply all of the
statistical tests to each sequence and 1 if you DO.
Enter Choice: 0
In this case, 0 has been selected to indicate interest in applying a subset of the available statistical
tests. The following screen is then displayed.
INSTRUCTIONS
Enter a 0 or 1 to indicate whether or not the numbered
statistical test should be applied to each sequence. For
example, 1111111111111111 applies every test to each
sequence.
1234567891111111
0123456
0000000010000000
As shown above, the only test applied was number 9, the Nonoverlapping templates test. A
query for the desired sample size is then made.
Ten sequences will be parsed using the data.pi file. Since a file was selected as the data
specification mode, a subsequent query is made regarding the data representation. The user must
specify whether the file consists of bits stored in ASCII format or hexadecimal strings stored in
binary format.
[0] BITS IN ASCII FORMAT [1] HEX DIGITS IN BINARY FORMAT
Since the data consists of a long sequence of zeroes and ones, 0 was chosen. Given all necessary
input parameters the test suite proceeds to analyze the sequences.
During the execution of the statistical tests, two log files located under the rng/ directory are
updated. One file is the stats file, the other is the grid file. The former contains the distribution
of zeroes and ones for each binary sequence, whereas the latter contains a binary matrix of
values corresponding to whether or not sequence i passed statistical test j. Once the testing
process is complete, the empirical results can be found in the experiments/ subdirectory.
An analytical routine has been included to facilitate interpretation of the results. A file
finalAnalysisReport is generated when statistical testing is complete. The report contains a
summary of empirical results. The results are represented via a table with p rows and q columns.
The number of rows, p, corresponds to the number of statistical tests applied. The number of
columns, q = 13, are distributed as follows: columns 1-10 correspond to the frequency of P-
values12, column 11 is the P-value that arises via the application of a chi-square test13, column 12
is the proportion of binary sequences that passed, and the 13th column is the corresponding
statistical test. An example is shown in Figure 6.
11 See Section 1.2, Definitions and Abbreviations.
12 The unit interval has been divided into ten discrete bins.
13 In order to assess the uniformity of P-values in the ith statistical test.
------------------------------------------------------------------------------
RESULTS FOR THE UNIFORMITY OF P-VALUES AND THE PROPORTION OF PASSING SEQUENCES
------------------------------------------------------------------------------
------------------------------------------------------------------------------
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 P-VALUE PROPORTION STATISTICAL TEST
------------------------------------------------------------------------------
6 12 9 12 8 7 8 12 15 11 0.616305 0.9900 Frequency
11 11 12 6 10 9 8 9 17 7 0.474986 0.9900 Cusum
6 10 8 14 16 10 10 6 5 15 0.129620 0.9900 Cusum
7 9 9 11 11 11 8 12 12 10 0.978072 0.9900 Serial
13 6 13 15 9 7 3 11 13 10 0.171867 0.9600 Serial
- - - - - - - - - - - - - - - - - - - - - - - - - - - -- - - - - - - - - - - -
The minimum pass rate for each statistical test with the exception of the random
excursion (variant) test is approximately = 0.960150 for a sample
size = 100 binary sequences.
For further guidelines construct a probability table using the MAPLE program
provided in the addendum section of the documentation.
- - - - - - - - - - - - - - - - - - - - - - - - - - - -- - - - - - - - - - - -
APPENDIX A: RANK COMPUTATION FOR BINARY MATRICES
Apply elementary row operations, where the addition operator is taken to be the exclusive-OR
operation. The matrices are reduced to upper triangular form using forward row operations, and
the operation is then repeated in reverse order using backward row operations. The rank is then
taken to be the number of nonzero rows in the resulting Gaussian-reduced matrix.
Forward row operations:
1. Set i = 1.
2. If element a(i,i) = 0 (i.e., the element on the diagonal is not 1), then swap all elements in the
ith row with all elements in the next row that contains a one in the ith column (i.e., this row is
the kth row, where i < k ≤ m). If no such row contains a "1" in this position, go to step 4.
3. If element a(i,i) = 1, then for each subsequent row that contains a "1" in the ith column, replace
each element in that row with the exclusive-OR of that element and the corresponding
element in the ith row:
a. Set row = i + 1.
b. Set col = i.
c. If a(row,col) = 0, then go to step 3g.
d. a(row,col) = a(row,col) ⊕ a(i,col).
e. If col = m, then go to step 3g.
f. col = col + 1; go to step 3d.
g. If row = m, then go to step 4.
h. row = row + 1; go to step 3b.
4. If i < m − 1, then set i = i + 1 and go to step 2; otherwise, the forward elimination is complete.
Backward row operations:
1. Set i = m.
2. If element a(i,i) = 0 (i.e., the element on the diagonal is not 1), then swap all elements in the
ith row with all elements in the next row above that contains a one in the ith column (i.e., this
row is the kth row, where 1 ≤ k < i). If no such row contains a "1" in this position, go to step 4.
3. If element a(i,i) = 1, then for each preceding row that contains a "1" in the ith column, replace
each element in that row with the exclusive-OR of that element and the corresponding
element in the ith row:
a. Set row = i − 1.
b. Set col = i.
c. If a(row,col) = 0, then go to step 3g.
d. a(row,col) = a(row,col) ⊕ a(i,col).
e. If col = 1, then go to step 3g.
f. col = col − 1; go to step 3d.
g. If row = 1, then go to step 4.
h. row = row − 1; go to step 3b.
4. If i > 2, then set i = i − 1 and go to step 2; otherwise, the backward elimination is complete.
The forward and backward row operations are illustrated below on a 6 × 6 example; within each
bracketed matrix, the six-bit groups are the rows, listed from top to bottom.

A = [100000; 000001; 100001; 101010; 001011; 000010]
    The original matrix.

B = [100000; 000001; 000001; 001010; 001011; 000010]
    Since a(1,1) = 1 and rows 3 and 4 contain a "1" in the first column (see the original matrix),
    rows 3 and 4 are replaced by the exclusive-OR of that row and row 1.

C = [100000; 000001; 000001; 001010; 001011; 000010]
    Since a(2,2) ≠ 1 and no other row contains a "1" in this column (see B), the matrix is not altered.

D = [100000; 000001; 001010; 000001; 001011; 000010]
    Since a(3,3) ≠ 1, but the 4th row contains a "1" in the 3rd column (see B or C), the two rows
    are switched.

E = [100000; 000001; 001010; 000001; 000001; 000010]
    Since row 5 contains a "1" in the 3rd column (see D), row 5 is replaced by the exclusive-OR of
    row 3 and row 5.

F = [100000; 000001; 001010; 000001; 000001; 000010]
    Since a(4,4) ≠ 1 and no other row contains a "1" in this column (see E), the matrix is not altered.

G = [100000; 000001; 001010; 000001; 000010; 000001]
    Since a(5,5) ≠ 1, but row 6 contains a "1" in column 5 (see F), the two rows are switched. Since
    no row below this contains a "1" in the 5th column, the forward process is complete.

H = [100000; 000000; 001010; 000000; 000010; 000001]
    Since a(6,6) = 1 and rows 2 and 4 contain ones in the 6th column (see G), rows 2 and 4 are
    replaced by the exclusive-OR of that row and row 6.

I = [100000; 000000; 001000; 000000; 000010; 000001]
    Since a(5,5) = 1 and row 3 contains a one in the 5th column (see H), row 3 is replaced by the
    exclusive-OR of row 3 and row 5.

J = [100000; 000000; 001000; 000000; 000010; 000001]
    Since a(4,4) ≠ 1 and no other row has a one in column 4, the matrix is not altered.

K = [100000; 000000; 001000; 000000; 000010; 000001]
    Since a(3,3) = 1, but no other row has a one in column 3, the matrix is not altered.

L = [100000; 000000; 001000; 000000; 000010; 000001]
    Since a(2,2) ≠ 1 and no other row has a one in column 2, the matrix is not altered, and the
    process is complete.
Since the final form of the matrix has 4 non-zero rows, the rank of the matrix is 4.
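The same computation is compact in C. The sketch below applies the elementary row operations over
GF(2) (addition is exclusive-OR); only the forward pass is needed, since after forward elimination
the rank already equals the number of nonzero rows. The dimensions and the function name are
illustrative, not the suite's own.

#define MATRIX_SIZE 6    /* illustrative; matches the 6 x 6 example above */

/* Compute the rank of a binary matrix using elementary row operations over
   GF(2).  The matrix is modified in place; the return value is the number
   of pivot (nonzero) rows after forward elimination.                       */
int binaryRank(int a[MATRIX_SIZE][MATRIX_SIZE])
{
    int rank = 0, col, r, c, tmp;

    for (col = 0; col < MATRIX_SIZE && rank < MATRIX_SIZE; col++) {
        /* find a row at or below 'rank' with a 1 in this column */
        for (r = rank; r < MATRIX_SIZE && a[r][col] == 0; r++)
            ;
        if (r == MATRIX_SIZE)
            continue;                           /* no pivot in this column   */
        for (c = 0; c < MATRIX_SIZE; c++) {     /* swap it into the pivot row */
            tmp = a[rank][c]; a[rank][c] = a[r][c]; a[r][c] = tmp;
        }
        for (r = rank + 1; r < MATRIX_SIZE; r++)  /* clear 1s below the pivot */
            if (a[r][col] == 1)
                for (c = 0; c < MATRIX_SIZE; c++)
                    a[r][c] ^= a[rank][c];
        rank++;
    }
    return rank;        /* returns 4 for the example matrix A above */
}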
APPENDIX B: SOURCE CODE
Filename: defs.h
Debugging Aids:
1. #define FREQUENCY 1
2. #define BLOCK_FREQUENCY 1
3. #define CUSUM 1
4. #define RUNS 1
5. #define LONG_RUNS 1
6. #define RANK 1
7. #define MATRICES 0
8. #define DFT 1
9. #define APERIODIC_TEMPLATES 1
10. #define PERIODIC_TEMPLATES 1
11. #define UNIVERSAL 1
12. #define APEN 1
13. #define SERIAL 1
14. #define RANDOM_EXCURSIONS 1
15. #define RANDOM_EXCURSIONS_VARIANT 1
16. #define LEMPEL_ZIV 1
17. #define LINEAR_COMPLEXITY 1
18. #define DISPLAY_OUTPUT_CHANNELS 1
19. #define PARTITION 0
Note: For debugging purposes, switches were introduced to display/not display intermediate
computations for each statistical test. A one denotes true, i.e., show intermediate results; a zero
denotes false, i.e., do not show intermediate results.
Filename: defs.h
1. #define INCLUDE_GENERATORS 1
2. #define LONG_RUNS_CASE_8 0
3. #define LONG_RUNS_CASE_128 0
4. #define LONG_RUNS_CASE_10000 1
5. #define SAVE_FFT_PARAMETERS 0
6. #define SAVE_APEN_PARAMETERS 0
7. #define SAVE_RANDOM_EXCURSION_PARAMETERS 1
8. #define SEQ_LENGTH_STEP_INCREMENTS 5000
Note: Statistical testing alternatives have been incorporated into the test suite using switches.
Line 1 refers to the inclusion (or exclusion) of the pseudo-random number generators contained
in the NIST test suite during compilation. The ability to enable or disable this function was
introduced under the realization that underlying libraries may not port easily to different
platforms. The user can disable the sample generators and should be able to compile the
statistical tests.
Lines 2-4 refer to different probability values that have been included in the Long Runs of Ones
Test. Since the statistical test partitions a sequence into sub-strings of varying length, the user
has the freedom to select between several cases. The user should enable only one case and
disable the other two cases.
Lines 5-7 refer to the ability to store intermediate parameter values to a file for the sake of
constructing graphics. Line 5 will enable or disable the storage of the Fourier points and
corresponding moduli into the files, fourierPoints and magnitude, respectively. Line 6 will
enable or disable the storage of the sequence length and approximate entropy value for varying
sequence lengths into the files, abscissaValues and ordinateValues. Line 7 will enable or disable
the storage of the number of cycles for each binary sequence into the file, cycleInfo.
Line 8 refers to the number of sequence length step increments to be taken during the generation
and storage of the approximate entropy values in the file, ordinateValues.
Filename: defs.h
Global Constants:
Lines 1-6 correspond to test suite parameters that have been preset. Under various conditions,
the user may decide to modify them.
Line 1 refers to the significance level. It is recommended that the user select the level in the
range [0.001,0.01].
Line 2 refers to the maximum number of templates that may be used in the Nonoverlapping
Template Matching test.
Line 3 refers to the maximum number of tests that is supported in the test suite. If the user adds
additional tests, this parameter should be incremented.
Line 4 refers to the maximum number of generators that is supported in the package. If the user
adds additional generators, this parameter should be incremented.
Line 5 refers to the maximum number of expected cycles in the random excursions test. If this
number is insufficient, the user may increase the parameter appropriately.
Line 6 refers to the maximum number of files which may be decomposed by the
partitionResultFile routine. This routine is applied only for specific tests where more than one
P-value is produced per sequence. This routine decomposes the corresponding results file into
separate files, data001, data002, data003, …
APPENDIX C: EMPIRICAL RESULTS FOR SAMPLE DATA
The user is urged to validate that the statistical test suite is operating properly. For this reason,
five sample files have been provided. These five files are: (1) data.pi, (2) data.e,
(3) data.sha1, (4) data.sqrt2, and (5) data.sqrt3. For each data file, all of the statistical tests
were applied, and the results recorded in the following tables. The Block Frequency, Long Runs
of Ones, Non-overlapping Template Matching, Overlapping Template Matching, Universal,
Approximate Entropy, Linear Complexity and Serial tests require user prescribed input
parameters. The exact values used in these examples have been included in parentheses beside
the name of the statistical test. In the case of the random excursions and random excursions
variant tests, only one of the possible 8 and 18 P-values, respectively, has been reported.
Example #2: The binary expansion of e
Statistical Test P-value
Frequency 0.953749
Block Frequency (m = 100) 0.619340
Cusum-Forward 0.669887
Cusum-Reverse 0.724266
Runs 0.561917
Long Runs of Ones (M = 10000) 0.718945
Rank 0.306156
Spectral DFT 0.443864
NonOverlapping Templates (m = 9, B = 000000001) 0.078790
Overlapping Templates (m = 9) 0.110434
Universal (L = 7, Q = 1280) 0.282568
Approximate Entropy (m = 5) 0.361688
Random Excursions (x = +1) 0.778616
Random Excursions Variant (x = -1) 0.826009
Lempel Ziv Complexity 0.000322
Linear Complexity (M = 500) 0.826335
Serial (m = 5, ∇Ψm²) 0.225783
Example #4: The binary expansion of √2
Statistical Test P-value
Frequency 0.811881
Block Frequency (m = 100) 0.289410
Cusum-Forward 0.879009
Cusum-Reverse 0.957206
Runs 0.313427
Long Runs of Ones (M = 10000) 0.012117
Rank 0.823810
Spectral DFT 0.267174
NonOverlapping Templates (m = 9, B = 000000001) 0.569461
Overlapping Templates (m = 9) 0.791982
Universal (L = 7, Q = 1280) 0.130805
Approximate Entropy (m = 5) 0.853227
Random Excursions (x = +1) 0.216235
Random Excursions Variant (x = -1) 0.566118
Lempel Ziv Complexity 0.949310
Linear Complexity (M = 500) 0.317127
Serial (m = 5, ∇Ψm²) 0.873914
APPENDIX D: CONSTRUCTION OF APERIODIC TEMPLATES
For the purposes of executing the Non-overlapping Template Matching statistical test, all 2m m-
bit binary sequences which are aperiodic were pre-computed. These templates, or patterns, were
stored in a file for m = 2 to m = 21. The ANSI-C program utilized in finding these templates is
provided below. By modifying the parameter M, the template library corresponding to the
template can be constructed. This parameter value should not exceed B, since the dec2bin
conversion routine will not operate correctly. Conceivably, this source code can be easily
modified to construct arbitrary 2m m-bit binary sequences for larger m.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define B 32
#define M 6
unsigned *A;
static long nonPeriodic;
void displayBits(FILE*, long, long);
int main()
{
FILE *fp1, *fp2;
long i, count, num;
A = (unsigned*) calloc(B,sizeof(unsigned));
fp1 = fopen("template", "w");
fp2 = fopen("dataInfo", "a");
num = pow(2,M);
count = log(num)/log(2);
nonPeriodic = 0;
for(i = 1; i < num; i++) displayBits(fp1, i,count);
fprintf(fp2,"M = %d\n", M);
fprintf(fp2,"# of nonperiodic templates = %u\n",
nonPeriodic);
fprintf(fp2,"# of all possible templates = %u\n", num);
fprintf(fp2,"{# nonperiodic}/{# templates} = %f\n",
(double)nonPeriodic/num);
fprintf(fp2,"==========================================");
fprintf(fp2,"===============\n");
fclose(fp1);
fclose(fp2);
free(A);
return 0;
}
void displayBits(FILE* fp, long value, long count)
{
int i, match, c; long displayMask = 1L << (B-1);
for(i = 0; i < B; i++) A[i] = 0;
for(c = 1; c <= B; c++) {
if (value & displayMask)
A[c-1] = 1;
else
A[c-1] = 0;
value <<= 1;
}
for(i = 1; i < count; i++) {
match = 1;
if ((A[B-count]!= A[B-1]) &&
((A[B-count]!= A[B-2])||(A[B-count+1] != A[B-1]))) {
for(c = B-count; c <= (B-1)-i; c++) {
if (A[c] != A[c+i]) {
match = 0;
break;
}
}
}
if (match) {
/* printf("\nPERIODIC TEMPLATE: SHIFT = %d\n",i); */
break;
}
}
if (!match) {
for(c = B-count; c < (B-1); c++) fprintf(fp,"%u",A[c]);
fprintf(fp,"%u\n", A[B-1]);
nonPeriodic++;
}
return;
}
APPENDIX E: GENERATION OF THE BINARY EXPANSION OF
IRRATIONAL NUMBERS
The sample Mathematica program utilized in constructing four sample files is shown below.
Mathematica Program
(**********************************************************)
(* Purpose: Computes the binary expansion of num *)
(* to d digits of working precision. *)
(* *)
(* Caution: The $MaxPrecision variable must be set to *)
(* the value of d. By default, Mathematica *)
(* sets this to 50000, but this can be increased.*)
(**********************************************************)
BinExp[num_,d_] := Module[{n,L},
If[d > $MaxPrecision, $MaxPrecision = d];
n = N[num,d];
L = First[RealDigits[n,2]]
];
SE = BinExp[E,302500];
Save["data.e",{SE}];
SP = BinExp[Pi,302500];
Save["data.pi",{SP}];
S2 = BinExp[Sqrt[2],302500];
Save["data.sqrt2",{S2}];
S3 = BinExp[Sqrt[3],302500];
Save["data.sqrt3",{S3}];
APPENDIX F: NUMERIC ALGORITHM ISSUES
For each binary sequence, an individual statistical test must produce at least one P-value.
P-values are based on the evaluation of special functions, which must be as accurate as possible
on the target platform. The log files produced by each statistical test report P-values with six
digits of precision, which should be sufficient. However, if greater precision is desired, modify
the printf statements in each statistical test accordingly.
During the testing phase, NIST commonly evaluated sequences on the order of 10^6 bits; hence,
the results are based on this assumption. If the user wishes to choose longer sequence lengths, then
be aware that numerical computations may be inaccurate14 due to machine or algorithmic
limitations. For further information on numerical analysis matters, see [6]15.
For the purposes of illustration, sample parameter values and corresponding special function
values are shown in Table F.1 and Table F.2. Table F.1 compares the results for the incomplete
gamma function for selected parameter values of a and x. The results are shown for Maple16,
Matlab, and the Numerical Recipes17 routines. Recall that the gamma function and the
incomplete gamma function are defined, respectively, as:
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt
Q(a,x) = Γ(a,x)/Γ(a) = (1/Γ(a)) ∫ₓ^∞ e^(−t) t^(a−1) dt.
Since the algorithm used in the test suite implementation of the incomplete gamma function is
based on the Numerical Recipes codes, it is evident that the function is accurate to at least the
seventh decimal place. For large values of a, the precision will degrade, as will confidence in the
result (unless a computer algebra system is employed to ensure high precision computations).
Table F.2 compares the results for the complementary error function (see Section 5.3.3) for
selected parameter values for x. The results are shown for ANSI C, Maple, and Matlab. Recall
that the definition for the complementary error function is:
14 According to the GNU C specifications at /usr/local/lib/gcc-lib/sparc-sun-solaris2.5.1/2.7.2.3/specs
(gcc version 2.7.2.3) and the limits.h header file on a SUN Ultra 1 workstation, the maximum number of digits of
precision of a double is 15.
15 Visit http://www.ulib.org/webRoot/Books/Numerical_Recipes/ or http://beta.ul.cs.cmu.edu/webRoot/
Books/Numerical_Recipes/, particularly Section 1.3 (Error, Accuracy, and Stability).
16 See Section 1.2, Definitions and Abbreviations.
17 The parameter values for eps and itmax were fixed at 3×10⁻¹⁵ and 2,000,000, respectively. Special function
routines based on Numerical Recipes codes will be replaced by non-proprietary codes in the near future.
erfc(z) = (2/√π) ∫_z^∞ e^(−u²) du
Table F.1: Selected Input Parameters for the Incomplete Gamma Function
Table F.2: Selected Input Parameters for the Complementary Error Function
x erfc(x) x erfc(x)
0.00 Test Suite 1.000000000000000 0.50 Test Suite 0.479500122186953
Maple 1.000000000000000 Maple 0.479500122186950
Matlab 1.000000000000000 Matlab 0.479500122186953
1.00 Test Suite 0.157299207050285 1.50 Test Suite 0.033894853524689
Maple 0.157299207050280 Maple 0.033894853524690
Matlab 0.157299207050285 Matlab 0.033894853524689
2.00 Test Suite 0.004677734981047 2.50 Test Suite 0.000406952017445
Maple 0.004677734981050 Maple 0.000406952017440
Matlab 0.004677734981047 Matlab 0.000406952017445
3.00 Test Suite 0.000022090496999 3.50 Test Suite 0.000000743098372
Maple 0.000022090497000 Maple 0.000000743098370
Matlab 0.000022090496999 Matlab 0.000000743098372
Thus, it is evident that the various math routines produce results that are sufficiently close to
each other. The differences are negligible. To reduce the likelihood for obtaining an inaccurate
P-value result, NIST has prescribed recommended input parameters.
APPENDIX G: HIERARCHICAL DIRECTORY STRUCTURE
rng/
makefile The NIST Statistical Test Suite makefile. This file is invoked in order to
recompile the entire test suite, including PRNGs.
makefile2 The NIST Statistical Test Suite makefile. This file is invoked in order to
recompile the NIST test suite without the PRNGs (Note: the PRNGs may
not compile on all platforms without user intervention).
assess The NIST Statistical Test Suite executable file is called assess.
data/ This subdirectory contains the names of all data files to be analyzed.
Sample files include the binary expansions of well-known constants such
as e, π, √2, and √3.
experiments/ This subdirectory contains the empirical result subdirectories for each
RNG.
AlgorithmTesting/ BBS/
CCG/ G-SHA-1/
LCG/ MODEXP/
MS/ QCG1/
QCG2/ XOR/
apen/ block-frequency/
cumulative-sums/ fft/
frequency/ lempel-ziv /
linear-complexity/ longest-run/
nonperiodic-templates/ overlapping-templates/
random-excursions/ random-excursions-variant/
rank/ runs/
serial/ universal/
For each nested directory there are two files created upon execution of an
individual statistical test. The results file contains a P-value list for each
binary sequence, and the stats file contains a list of statistical information
for each binary sequence.
generators/ This subdirectory contains the source code for each PRNG. In the
event that the user is interested in evaluating their PRNG (online), their
source code may, for example, be added as generators4.c in this directory,
with additional changes made in the utilities2.c file and the defs.h file.
grid This file contains bits which represent the acceptance or rejection of a
particular sequence for each individual statistical test that is run. The P-
value computed for the sequence is compared to the chosen significance
level α.
include/ This subdirectory contains all of the header files that prescribe
any global variables, constants, and data structures utilized in the
reference implementation. In addition, the subdirectory contains all
function declarations and prototypes.
cephes-protos.h config.h
config2.h defs.h
f2c.h generators1.h
generators2.h generators3.h
globals.h lip.h
lippar.h matrix.h
mconfig.h mp.h
proto.h sha.h
special-functions.h utilities1.h
utilities2.h
obj/ This subdirectory contains the object files corresponding to the statistical
tests, pseudo-random number generators, and other associated routines.
approximateEntropy.o assess.o
cephes.o cusum.o
dfft.o discreteFourierTransform.o
frequency.o functions.o
generators1.o generators2.o
generators3.o lempelZivCompression.o
linearComplexity.o lip.o
matrix.o mp.o
nonOverlappingTemplateMatchings.o
overlappingTemplateMatchings.o
randomExcursions.o randomExcursionsVariant.o
rank.o runs.o
sha.o special-functions.o
universal.o utilities1.o utilities2.o
src/ This subdirectory contains the source codes for the statistical tests.
templates/ This subdirectory contains the templates (or patterns) which are evaluated
in the NonOverlapping Template Matching Test. The corresponding file
is opened for the prescribed template block length m. Currently, the only
options for which nonperiodic templates have been stored are those which
lie in [2,21]. In the event that m > 21, the user must pre-compute the
non-periodic templates.
APPENDIX H: VISUALIZATION APPROACHES
There are several visualization approaches that may be used to investigate the randomness of
binary sequences. Three techniques involve the Discrete Fourier Transform, approximate
entropy and the linear complexity profile.
(a) Discrete Fourier Transform (Spectral) Graph
Figure H.1 depicts the spectral components (i.e., the modulus of the DFT) obtained via
application of the Fast Fourier Transform on a binary sequence (consisting of 5000 bits)
extracted from the Blum-Blum-Shub pseudo-random number generator18. To demonstrate how
the spectral test can detect periodic features in the binary sequence, every 10th bit was changed to
a one. To pass this test, no more than 5% of the peaks should surpass the 95% cutoff, determined
to be √(3·5000) ≈ 122.4744871. Clearly, greater than 5% of the peaks exceed the cutoff point in
the figure. Thus, the binary sequence fails this test.
[Figure H.1: spectral components (modulus of the DFT) of the modified Blum-Blum-Shub sequence.]
18 The Blum-Blum-Shub pseudo-random number generator, based on the intractability of the quadratic
residuosity problem, is described in the Handbook of Applied Cryptography, by Menezes et al.
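The peak-count criterion is easy to reproduce once the DFT moduli are available. The sketch below
assumes that the moduli of the first n/2 DFT points have already been computed (as is conventional
for a real-valued sequence); the function name is illustrative, and this is only the visual screening
criterion, not the statistical test proper.

#include <math.h>

/* Count how many of the supplied DFT moduli exceed the 95% cutoff sqrt(3*n)
   and compare the observed fraction with the 5% allowance described above.
   Returns 1 if the sequence passes the screening criterion, 0 otherwise.   */
int spectralPeakCheck(const double *modulus, long n)
{
    double threshold = sqrt(3.0 * (double) n);   /* ~122.47 for n = 5000 */
    long   i, exceed = 0, half = n / 2;

    for (i = 0; i < half; i++)
        if (modulus[i] > threshold)
            exceed++;
    return ((double) exceed / (double) half <= 0.05) ? 1 : 0;
}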
(b) Approximate Entropy (ApEn) Graph
Figure H.2 depicts the approximate entropy values (for block length = 2) for three binary
sequences: the binary expansions of e and π, and a binary sequence taken from the SHA-1
pseudo-random number generator. In theory, for an n-bit sequence, the maximum entropy value
that can be attained is ln(2) ≈ 0.69314718. The x-axis reflects the number of bits considered in
the sequence. The y-axis reflects the deficit from maximal irregularity, that is, the difference
between ln(2) and the observed approximate entropy value. Thus, for a fixed sequence
length, one can determine which sequence appears to be more random. For a sequence of
1,000,000 bits, e appears more random than both π and the SHA-119 sequence. However, for
larger block sizes, this is not the case.
[Figure H.2: approximate entropy deficit (block length 2) versus sequence length for the binary
expansions of e and π and a G-SHA-1 sequence.]
19 It is worth noting that, for larger block sizes and sequence lengths on the order of 10^6, SHA-1
binary sequences yield deficit values on the order of 10^-9.
(c) Linear Complexity Profile
Figure H.3 depicts the linear complexity profile for a pseudo-random number generator that is
strictly based on the XOR (exclusive-or) operator. The generator is defined as follows: given a
random binary seed, x1, x2, …, x127, subsequent bits in the sequence are generated according to
the rule xi = xi−1 ⊕ xi−127 for i ≥ 128.
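A minimal sketch of this generator, written directly from the rule above, is shown below; the seed
bits used in the example are arbitrary and purely illustrative.

#include <stdio.h>

#define SEED_BITS 127

/* Extend a 127-bit seed x[0..126] to n bits using the rule described above:
   in 0-based indexing, x[i] = x[i-1] XOR x[i-127] for i >= 127.             */
void xorGenerator(unsigned char *x, long n)
{
    long i;
    for (i = SEED_BITS; i < n; i++)
        x[i] = x[i - 1] ^ x[i - SEED_BITS];
}

int main(void)
{
    unsigned char bits[1000];
    long i;

    for (i = 0; i < SEED_BITS; i++)      /* illustrative seed only           */
        bits[i] = (unsigned char) ((i * i + 1) % 2);
    xorGenerator(bits, 1000);
    for (i = 0; i < 64; i++)             /* print the first 64 generated bits */
        printf("%u", bits[i]);
    printf("\n");
    return 0;
}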
The Berlekamp-Massey20 algorithm computes the connection polynomial that, for some seed
value, reconstructs the finite sequence. The degree of this polynomial corresponds to the length
of the shortest Linear Feedback Shift Register (LFSR) that represents the polynomial. The linear
complexity profile depicts the degree, which for a random finite length (n-bit) sequence is about
n/2. Thus, the x-axis reflects the number of bits observed in the sequence thus far. The y-axis
depicts the degree of the connection polynomial. At n = 254, observe that the degree of the
polynomial ceases to increase and remains constant at 127. This value precisely corresponds to
the number of bits in the seed used to construct the sequence.
20 For a description of the algorithm see Chapter 6 - Stream Ciphers, which may be accessed at
http://www.cacr.math.uwaterloo.ca/hac/.
APPENDIX I: INSTRUCTIONS FOR INCORPORATING ADDITIONAL
STATISTICAL TESTS
In order to add another statistical test to the test suite, the user should make the following
modifications:
Insert any test input parameters into the testParameters structure. Increment the value
of NUMOFTESTS by the number of tests to be added.
Embed the test function call into the nist_test_suite function. For example, if the
current number of tests is 16, and one test is to be added, insert the following code:
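A plausible form of the call, assuming the new test is guarded by its testVector entry in the same
way as in the fixParameters example below (the function name myNewTest and the parameter tp.n
are assumptions), is:

/* hypothetical call for a seventeenth test; adjust the index and the
   parameters to match the new test's definition */
if (testVector[17] == 1)
    myNewTest(tp.n);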
Define the statistical test function. Note: The programmer should embed fprintf
statements using stats[x] and results[x] as the corresponding output channels for writing
intermediate test statistic parameters and P-values, respectively, where x is the total number of
tests.
(a) In the function, openOutputStreams, insert the string, “myNewTest” into the
testNames variable. In the function, chooseTests, insert the following lines of code (as
modified by the actual number of total tests):
printf("\t\t\t 12345678911111111\n");
printf("\t\t\t 01234567\n");
Note: For each PRNG defined in the package, a sub-directory myNewTest must be
created.
(b) In the function, displayTests, insert a printf statement. For example, if the total
number of tests is 17, insert
printf(" [17] My New Test\n");
(c) If an input test parameter is required, in the function, fixParameters, insert the
following lines of code (under the assumption that myNewTestParameter is an
integer). For example, if the total number of tests is 17, insert
if (testVector[17] == 1) {
printf("\tEnter MyNewTest Parameter Value: ");
scanf("%d", &tp.myNewTestParameter);
}
APPENDIX J: INSTRUCTIONS FOR INCORPORATING ADDITIONAL
PRNGs
In order to add a PRNG to the test suite, the user should make the following modifications:
Define the generator function. The general scheme for each PRNG defined in the test
suite is as follows:
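A plausible outline is sketched below, assuming the generated bits are written into the epsilon
bit-field structure described in Section 5; the epsilon name, its .b field, and the function
signature are assumptions for illustration.

typedef struct { unsigned int b : 1; } BitField;   /* cf. the epsilon bit-field structure */
extern BitField *epsilon;                          /* the suite's global sequence buffer
                                                      (name assumed for illustration)     */

void myNewPRNG(long n)
{
    long i;
    unsigned char bit;

    /* ... initialize the generator's internal state / seed here ...        */
    for (i = 0; i < n; i++) {
        bit = 0;                 /* replace with the generator's next output bit */
        epsilon[i].b = bit;      /* store the bit for the statistical tests      */
    }
}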
Note: A sub-directory called myNewPRNG/ must be created. Under this new directory,
a set of sub-directories must be created for each of the test suite statistical tests. The
script createScript has been included to facilitate this operation.
(c) In the function, openOutputStreams, insert the generator string name into the
generatorDir variable. For example,
char generatorDir[20][20] = {"AlgorithmTesting/",
…,"XOR/", "MYNEWPRNG/"};
A similar change must be made in the routine partitionResultFile, in the file assess.c.
APPENDIX K: GRAPHICAL USER INTERFACE (GUI)
K.1 Introduction
A simple Tcl/Tk graphical user interface (GUI) was developed as a front-end to the NIST
Statistical Test Suite. The source code may be found in the file, rng-gui.tcl. The interface
consists of a single window with four regions. The topmost region contains the software product
laboratory affiliation. The left half of the window consists of a checklist for the sixteen statistical
tests. The user should select or de-select the set of statistical tests to be executed.
The right half of the window is sub-divided into an upper and lower portion. The upper portion
consists of required parameters that must be provided in order to execute the tests. The lower
portion consists of test dependent parameters that must be provided only if the corresponding test
has been checked.
[Figure K.1 appears here. The GUI window shows the banner "The National Institute of Standards
and Technology (NIST), Information Technology Laboratory (ITL), Statistical Test Suite,
Copyright 2000", a checklist of the sixteen statistical tests (e.g., Rank, Discrete Fourier
Transform (Spectral), Non-overlapping Template Matchings, Overlapping Template Matchings,
Linear Complexity), the required and test dependent input parameter fields (e.g., block frequency
block length: 100; non-overlapping template length in bits: 9; linear complexity substring
length in bits: 500), and Execute and Quit buttons.]
Figure K.1: Tcl/Tk GUI for the NIST Statistical Test Suite
Once the user has selected the statistical tests, the required input parameters, and the test
dependent input parameters, then the user should depress the Execute button to invoke the
battery of statistical tests. This will result in the de-iconification of the GUI. Upon completion,
the GUI will re-iconify. The user should then proceed to review the file, finalAnalysisReport.txt
to assess the results.
K.2 An Example
The following example illustrates the use of the GUI. The user has checked all sixteen of the
statistical tests and entered (among other parameters):
• data.e as the binary data stream filename
• 9 as the overlapping template block length
Section 2 provides the recommended parameter choices for each statistical test.
K.6 References
[1] Brent Welch, Practical Programming in Tcl and Tk, 2nd edition. Prentice Hall PTR, 1997.
[2] Clif Flynt, Tcl/Tk for Real Programmers. Academic Press, 1999.
APPENDIX L: DESCRIPTION OF THE REFERENCE PSEUDO RANDOM
NUMBER GENERATORS
The NIST Statistical Test Suite supplies the user with nine pseudo-random number generators.
A brief description of each pseudo-random number generator follows. The user supplied
sequence length determines the number of iterations for each generator.
The input parameter for the Fishman and Moore21 LCG22 is fixed in code but may be altered by
the user.
Input Parameter:
z0 = 23482349
Description:
Given a seed z0, subsequent numbers are computed based on zi+1 = a·zi mod (2^31 − 1), where a is a
function of the current state. These numbers are then converted to uniform values in [0,1]. At
each step, output '0' if the number is ≤ 0.5; otherwise, output '1'.
The input parameters to the QCG-I are fixed in code, but may modified by the user.
Input Parameters:
p = 987b6a6bf2c56a97291c445409920032499f9ee7ad128301b5d0254aa1a9633fdbd378
d40149f1e23a13849f3d45992f5c4c6b7104099bc301f6005f9d8115e1
x0 = 3844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c244734b62e
a9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5
21 Fishman, G. S. and L. R. Moore (1986). An exhaustive analysis of multiplicative congruential random number
generators with modulus 2^31 − 1, SIAM Journal on Scientific and Statistical Computation, 7, 24-45.
22 Additional information may be found in Chapter 16 (Pseudo-Random Sequence Generators & Stream Ciphers),
Section 16.1 (Linear Congruential Generators) of Bruce Schneier's book, Applied Cryptography: Protocols,
Algorithms and Source Code in C, 2nd edition, John Wiley & Sons, 1996.
Description:
Using a 512-bit prime p and a random 512-bit seed x0, construct subsequent elements of the
sequence (each a 512-bit number) via the rule:
The input parameter to the QCG-II is fixed in code, but may be modified by the user.
Input Parameter:
x0 = 7844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c244734b62e
a9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5
Description:
Using a 512-bit modulus and a random 512-bit seed x0, construct subsequent elements (each a
512-bit number) in the sequence via the rule:
The input parameter to the CCG is fixed in code, but may be modified by the user.
Input Parameter:
x0 = 7844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c244734b62ea
9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5
Description:
Given a 512-bit seed x0, construct subsequent 512-bit strings via the rule:
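A hedged GNU MP sketch follows. Both the cubing rule x_{i+1} = x_i^3 mod 2^512 and the reduction modulo 2^512 are assumptions suggested only by the generator's name and its 512-bit outputs, not statements of the suite's exact rule.

#include <stdio.h>
#include <gmp.h>

/* Sketch of a cubic congruential generator.  The cubing rule and the
 * modulus 2^512 are assumptions made for illustration; consult the
 * reference code for the exact recurrence. */
int main(void)
{
    mpz_t x, m;
    mpz_init_set_str(x,
        "7844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c2447"
        "34b62ea9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5", 16);
    mpz_init(m);
    mpz_ui_pow_ui(m, 2, 512);          /* modulus 2^512 keeps elements 512 bits wide */

    for (int i = 0; i < 4; i++) {
        mpz_powm_ui(x, x, 3, m);       /* x <- x^3 mod 2^512 */
        gmp_printf("%Zx\n", x);
    }

    mpz_clear(x);
    mpz_clear(m);
    return 0;
}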
The input parameter to the XORG is a 127-bit seed that is fixed in code, but may be user
modified.
Input Parameter:
Description:
Choose a bit sequence x1, x2, ..., x127. Construct subsequent bits via the rule:
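A minimal sketch follows, assuming the linear recurrence x_i = x_(i-1) XOR x_(i-127) for i > 127; this recurrence is an assumption, and the suite's reference code defines the exact rule. The all-ones initial fill simply stands in for the 127-bit seed fixed in the code.

#include <stdio.h>

#define TAPS 127                        /* length of the initial bit sequence */

/* Sketch of the XORG construction, assuming the linear recurrence
 * x_i = x_(i-1) XOR x_(i-127) for i > 127.  The all-ones initial fill is an
 * illustrative stand-in for the 127-bit seed fixed in the suite's code. */
int main(void)
{
    unsigned char x[TAPS];
    for (int i = 0; i < TAPS; i++)      /* x_1 .. x_127 (illustrative seed) */
        x[i] = 1;

    for (int i = 0; i < 64; i++) {      /* emit 64 bits beyond the seed */
        /* The buffer always holds the most recent 127 bits, oldest first. */
        unsigned char next = x[TAPS - 1] ^ x[0];   /* x_(i-1) XOR x_(i-127) */
        for (int j = 0; j < TAPS - 1; j++)         /* drop the oldest bit */
            x[j] = x[j + 1];
        x[TAPS - 1] = next;
        putchar(next ? '1' : '0');
    }
    putchar('\n');
    return 0;
}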
The input parameters to the MODEXPG are fixed in code, but they may be user modified.
Input Parameters:
seed = 7AB36982CE1ADF832019CDFEB2393CABDF0214EC
p = 987b6a6bf2c56a97291c445409920032499f9ee7ad128301b5d0254aa1a9633fdbd378
d40149f1e23a13849f3d45992f5c4c6b7104099bc301f6005f9d8115e1
g = 3844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c244734b62ea
9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5
Description:
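The exact recurrence used by MODEXPG is defined in the suite's reference code and is not restated here. Purely as an assumed illustration of a modular-exponentiation generator built from the parameters above, the GNU MP sketch below iterates x_{i+1} = g^(x_i) mod p starting from the given seed; it should not be read as the suite's actual rule.

#include <stdio.h>
#include <gmp.h>

/* Assumed illustration only: one common modular-exponentiation construction
 * iterates x_{i+1} = g^(x_i) mod p from the given seed.  Consult the
 * reference code for the exact rule used by MODEXPG. */
int main(void)
{
    mpz_t p, g, x;
    mpz_init_set_str(p,
        "987b6a6bf2c56a97291c445409920032499f9ee7ad128301b5d0254aa1a9633f"
        "dbd378d40149f1e23a13849f3d45992f5c4c6b7104099bc301f6005f9d8115e1", 16);
    mpz_init_set_str(g,
        "3844506a9456c564b8b8538e0cc15aff46c95e69600f084f0657c2401b3c2447"
        "34b62ea9bb95be4923b9b7e84eeaf1a224894ef0328d44bc3eb3e983644da3f5", 16);
    mpz_init_set_str(x, "7AB36982CE1ADF832019CDFEB2393CABDF0214EC", 16);

    for (int i = 0; i < 4; i++) {
        mpz_powm(x, g, x, p);           /* x <- g^x mod p */
        gmp_printf("%Zx\n", x);
    }

    mpz_clear(p); mpz_clear(g); mpz_clear(x);
    return 0;
}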
The input parameters to the SHA1G are fixed in code, but may be user modified. The length of
the key, keylen, should be chosen in the interval [160, 512].
Input Parameters:
seedlen = 160
Xseed = 237c5f791c2cfe47bfb16d2d54a0d60665b20904
keylen = 160
Xkey = ec822a619d6ed5d9492218a7a4c5b15d57c61601
Description:
For a detailed description of SHA1G (the FIPS 186 one-way function using SHA-1), visit
http://www.cacr.math.waterloo.ca/hac/about/chap5.pdf.zip, especially p. 175.
The input parameters to the BBSG are not fixed in code. They are variable parameters, which
are time dependent. The three required parameters are two primes, p and q, and a random integer
s.
Input Parameters:
Two primes p and q such that each is congruent to 3 modulo 4. A random integer s (the seed),
selected in the interval [1, pq-1] such that gcd(s,pq) = 1. The parameters p, q and s are not fixed
in code; thus, the user will not be able to reconstruct the original sequence because these values
will vary (i.e., they are dependent on the system time). To reproduce a sequence the user must
modify the code to fix the variable input parameters.
Description:
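BBSG is the Blum-Blum-Shub construction; the parameter requirements above (p and q congruent to 3 modulo 4, and gcd(s, pq) = 1) are exactly those of that generator. With n = pq, the state evolves as x_{i+1} = x_i^2 mod n and each iteration outputs the least significant bit of the state. The toy-scale C sketch below illustrates the construction; the tiny primes and seed are illustrative stand-ins for the large, time-dependent values the suite draws at run time.

#include <stdio.h>
#include <stdint.h>

/* Toy sketch of the Blum-Blum-Shub construction: n = p*q with
 * p = q = 3 (mod 4), state x_{i+1} = x_i^2 mod n, and one output bit per
 * step (the least significant bit of the state).  The tiny primes and seed
 * below are illustrative only; the suite draws its parameters at run time. */
static const uint64_t P = 2999, Q = 3643;     /* both primes are 3 mod 4 (toy values) */

int main(void)
{
    uint64_t n = P * Q;
    uint64_t x = 1234567 % n;                 /* seed s with gcd(s, n) = 1 (toy value) */

    for (int i = 0; i < 64; i++) {
        x = (x * x) % n;                      /* x_{i+1} = x_i^2 mod n */
        putchar((x & 1) ? '1' : '0');         /* emit the low-order bit */
    }
    putchar('\n');
    return 0;
}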
The input parameters to the MSG are not fixed in code. They are variable parameters, which are
time dependent. The four required parameters are two primes, p and q, an integer e, and the seed
x0.
Input Parameters:
Two primes p and q. A parameter e, selected such that 1 < e < φ = (p-1)(q-1), gcd(e, φ) = 1, and
80e < N = floor(lg n) + 1, where n = pq. A random sequence x0 (the seed) consisting of r (a function of e and
n) bits is chosen. The parameters e, p, q, and x0 are not fixed in code; thus, the user will not be
able to reconstruct the original sequence because these values will vary (i.e., they are dependent
on the system time). To reproduce a sequence the user must modify the code to fix the variable
input parameters.
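The parameter constraints above can be checked mechanically. The C sketch below does so for illustrative toy values; with a small modulus the condition 80e < N necessarily fails, which is one way to see why Micali-Schnorr requires a large modulus in practice.

#include <stdio.h>
#include <stdint.h>

/* Sketch of the Micali-Schnorr parameter constraints quoted above:
 * 1 < e < phi = (p-1)(q-1), gcd(e, phi) = 1, and 80*e < N, where N is the
 * bit length of n = p*q.  The primes and e below are toy values. */
static uint64_t gcd_u64(uint64_t a, uint64_t b)
{
    while (b != 0) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

static unsigned bit_length(uint64_t n)        /* N = floor(lg n) + 1 */
{
    unsigned bits = 0;
    while (n != 0) { bits++; n >>= 1; }
    return bits;
}

int main(void)
{
    const uint64_t p = 2999, q = 3643, e = 5; /* toy values for illustration */
    uint64_t n = p * q, phi = (p - 1) * (q - 1);
    unsigned N = bit_length(n);

    int ok = (e > 1) && (e < phi) && (gcd_u64(e, phi) == 1) && (80 * e < N);
    printf("N = %u bits, e = %llu: parameters %s\n",
           N, (unsigned long long)e, ok ? "acceptable" : "rejected");
    return 0;
}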
Description:
An ANSI C reference implementation may be located at
ftp://www.mindspring.com/users/pate/crypto/chap05/micali.c.
The following table depicts test-by-test failures for the above reference generators.
Statistical Test         Excessive Rejections / Lacks Uniformity    Generator

Frequency                X X                                        Modular Exponentiation
                         X X                                        Cubic Congruential
                         X X                                        Quadratic Congruential (Type I)
Block Frequency          X                                          Cubic Congruential
                         X X                                        XOR
Cusum                    X X                                        Micali-Schnorr
                         X X                                        Modular Exponentiation
                         X X                                        Cubic Congruential
                         X X                                        Quadratic Congruential (Type I)
Runs                     X                                          Modular Exponentiation
                         X X                                        Cubic Congruential
                         X                                          Quadratic Congruential (Type I)
Rank                     X X                                        XOR
Spectral                 X X                                        Cubic Congruential
                         X                                          Quadratic Congruential (Type II)
Aperiodic Templates      X                                          ANSI X9.17
                         X                                          Micali-Schnorr
                         X                                          Modular Exponentiation
                         X X                                        Cubic Congruential
                         X                                          Quadratic Congruential (Type I)
                         X                                          Quadratic Congruential (Type II)
                         X X                                        XOR
Periodic Templates       X                                          Modular Exponentiation
                         X X                                        XOR
Approximate Entropy      X X                                        Modular Exponentiation
                         X X                                        Cubic Congruential
                         X X                                        Quadratic Congruential (Type I)
                         X X                                        XOR
Serial                   X X                                        Modular Exponentiation
                         X X                                        Cubic Congruential
                         X X                                        Quadratic Congruential (Type I)
                         X X                                        XOR
APPENDIX M: REFERENCES
[4] U. Maurer, “A Universal Statistical Test for Random Bit Generators,” Journal of
Cryptology, Vol. 5, No. 2, 1992, pp. 89-105.
[5] A. Menezes, et al., Handbook of Applied Cryptography. CRC Press, Inc., 1997.
See http://www.cacr.math.uwaterloo.ca/hac/about/chap5.pdf.zip.
[11] FIPS 180-1, Secure Hash Standard, Federal Information Processing Standards
Publication 180-1. U.S. Department of Commerce/NIST, National Technical
Information Service, Springfield, VA, April 17, 1995.
[12] FIPS 186, Digital Signature Standard (DSS), Federal Information Processing
Standards Publication 186. U.S. Department of Commerce/NIST, National
Technical Information Service, Springfield, VA, May 19, 1994.
[13] MAPLE, A Computer Algebra System (CAS). Available from Waterloo Maple Inc.;
http://www.maplesoft.com.