Algorithms, key size and parameters report – 2014
November, 2014
About ENISA
The European Union Agency for Network and Information Security (ENISA) is a centre of network and
information security expertise for the EU, its member states, the private sector and Europe’s citizens.
ENISA works with these groups to develop advice and recommendations on good practice in
information security. It assists EU member states in implementing relevant EU legislation and works
to improve the resilience of Europe’s critical information infrastructure and networks. ENISA seeks to
enhance existing expertise in EU member states by supporting the development of cross-border
communities committed to improving network and information security throughout the EU. More
information about ENISA and its work can be found at www.enisa.europa.eu.
Authors
Contributors to this report:
This work was commissioned by ENISA under contract ENISA D-COD-14-TO9 (under F-COD-13-C23) to
the consortium formed by K.U.Leuven (BE) and University of Bristol (UK).
Contributors: Nigel P. Smart (University of Bristol), Vincent Rijmen (KU Leuven), Benedikt
Gierlichs (KU Leuven), Kenneth G. Paterson (Royal Holloway, University of London), Martijn
Stam (University of Bristol), Bogdan Warinschi (University of Bristol), Gaven Watson
(University of Bristol).
Editor: Nigel P. Smart (University of Bristol).
ENISA Project Manager: Rodica Tirtea.
Acknowledgements
We would like to extend our gratitude to:
External Reviewers: Michel Abdalla (ENS Paris), Kenneth G. Paterson (Royal Holloway,
University of London), Ahmad-Reza Sadeghi (T.U. Darmstadt), Michael Ward (Mastercard) for
their comments, suggestions and feedback.
We also thank a number of people for providing anonymous input and Cas Cremers (Oxford
University) and Hugo Krawczyk (IBM) for detailed comments on various aspects.
Contact
To contact the authors, please use sta@enisa.europa.eu.
For media enquiries about this paper, please use press@enisa.europa.eu.
Legal notice
Notice must be taken that this publication represents the views and interpretations of the authors and
editors, unless stated otherwise. This publication should not be construed to be a legal action of ENISA or the
ENISA bodies unless adopted pursuant to the Regulation (EU) No 526/2013. This publication does not
necessarily represent state-of-the-art and ENISA may update it from time to time.
Third-party sources are quoted as appropriate. ENISA is not responsible for the content of the external
sources including external websites referenced in this publication.
This publication is intended for information purposes only. It must be accessible free of charge. Neither ENISA
nor any person acting on its behalf is responsible for the use that might be made of the information contained
in this publication.
Copyright Notice
© European Union Agency for Network and Information Security (ENISA), 2014
Reproduction is authorised provided the source is acknowledged.
Catalogue number TP-05-14-084-EN-N ISBN 978-92-9204-102-1 DOI 10.2824/36822
Contents
1 Executive Summary
3 Primitives
  3.1 Comparison
  3.2 Block Ciphers
    3.2.1 Future Use Block Ciphers
    3.2.2 Legacy Block Ciphers
    3.2.3 Historical (non-endorsed) Block Ciphers
  3.3 Hash Functions
    3.3.1 Future Use Hash Functions
    3.3.2 Legacy Hash Functions
    3.3.3 Historical (non-endorsed) Hash Functions
  3.4 Stream Ciphers
    3.4.1 Future Use Stream Ciphers
    3.4.2 Legacy Stream Ciphers
    3.4.3 Historical (non-endorsed) Stream Ciphers
  3.5 Public Key Primitives
    3.5.1 Factoring
    3.5.2 Discrete Logarithms
    3.5.3 Pairings
  3.6 Key Size Analysis
  4.8.6 PV Signatures
  4.8.7 (EC)Schnorr
6 General Comments
  6.1 Side-channels
    6.1.1 Countermeasures
  6.2 Random Number Generation
    6.2.1 Terminology
    6.2.2 Architectural model for PRNGs
    6.2.3 Security Requirements for PRNGs
    6.2.4 Theoretical models
    6.2.5 Implementation considerations
    6.2.6 Specific PRNGs and their analyses
    6.2.7 Designing around bad randomness
  6.3 Key Life Cycle Management
    6.3.1 Key Management Systems
Bibliography
Acronyms
SA Security Association
SHA Secure Hash Algorithm
SIMD Single Instruction Multiple Data
SIV Synthetic Initialization Vector
SK Sakai–Kasahara (ID-based encryption)
SN Serving Network (i.e. a provider in UMTS/LTE)
SPAKE Single-Party Public-Key Authenticated Key Exchange
SPD Security Policy Database
SQL Structured Query Language
SSE Symmetric Searchable Encryption
SSH Secure Shell
SSL Secure Sockets Layer
TKW Triple-DEA Key Wrap
TDKW A Key Wrap Scheme
TRNG True Random Number Generator
UEA UMTS Encryption Algorithm
UF Universally Unforgeable
UIA UMTS Integrity Algorithm
UMAC Universal hashing based MAC
UMTS Universal Mobile Telecommunications System
VPN Virtual Private Network
WEP Wired Equivalent Privacy
WPA Wi-Fi Protected Access
XTS XEX Tweakable Block Cipher with Ciphertext Stealing
Chapter 1
Executive Summary
During 2013, ENISA prepared and published its first reports with cryptographic guidelines support-
ing the security measures required to protect personal data in online systems. Recently published
EC Regulations on the measures applicable to the notification of personal data breaches [118] make
reference to ENISA, as a consultative body, in the process of establishing a list of appropriate
cryptographic protective measures.
This report provides an update of the 2013 report [113] produced by ENISA. As was the
case with the 2013 report, the cryptographic guidelines of ENISA should serve as a reference
document; they cannot fill in for the existing lack of cryptographic recommendations at EU level. As
such we provide rather conservative guiding principles, based on current state-of-the-art research,
addressing the construction of new systems with a long life cycle. This report aims to be a reference
in the area, focusing on commercial online services that collect, store and process the personal data
of EU citizens.
In the report of 2013 there was a section on protocols; for this year we decided to extend the part
on implementation by adding to this report a section on side-channels, random number generation,
and key life cycle management. The summary of protocols is now covered in a sister report [114].
It should be noted that this is a technical document addressed to decision makers, in particular
specialists designing and implementing cryptographic solutions, within commercial organisations.
In this document we focus on just two decisions which we feel are more crucial to users of cryptog-
raphy.
Firstly, whether a given primitive or scheme can be considered for use today if it is already
deployed. We refer to such use as legacy use within our document. Our first guiding principle is
that if a scheme is not considered suitable for legacy use, or is only considered for such use with
certain caveats, then this should be taken as a strong advise that the primitive or scheme should
be replaced as a matter of urgency.
Secondly, we consider the issue of whether a primitive or scheme is suitable for deployment in
new or future systems. In some sense mechanisms which we consider usable for new and future
systems meet cryptographic requirements described in this document; they generally will have
proofs of security, will have key sizes equivalent to 128-bit symmetric security or more1, will have
no structural weaknesses, will have been well studied, will have been standardized, and will
have a reasonably-sized existing user base. Thus the second guiding principle is that decision
makers now make plans and preparations for the phasing out of what we term legacy mechanisms
over a period of say 5-10 years, and replacing them with systems we deem secure for future use.
This document does not consider any mechanisms which are currently only of academic interest.
In particular all the mechanisms we discuss have been standardized to some extent, and have either
been deployed, or are slated to be deployed, in real systems. This selection is a means of focusing the
document on mechanisms which will be of interest to decision makers in industry and government.
Further limitations of scope are mentioned in the introductory chapter which follows. Further
restrictions are mentioned in Chapter 2 “How to Read this Document”. Such topics, which are not
explored by this document, could however be covered in the future.
1 See Section 3.6 for the equivalence mapping between symmetric key sizes and public key sizes.
Chapter 2
How to Read this Document
This document collates a series of proposals for algorithms and key sizes. The 2013 version of this
report [113] also contained a section on protocol proposals; as remarked in the executive summary
this has now been moved to a separate report [114]. In some sense the current document su-
persedes the ECRYPT and ECRYPT2 “Yearly Report on Algorithms and Key Lengths” published
between 2004 and 2012 [104–111]. However, it should be considered as completely distinct. The
current document tries to provide a focused set of proposals in an easy to use form. The prior
ECRYPT documents provided more general background information and discussions on general
concepts with respect to key size choice, and tried to predict the future ability of cryptanalytic
attacks via hardware and software.
In this document we focus on just two decisions which we feel are more crucial to users of
cryptography. Firstly, whether a given primitive, scheme, or keysize can be considered for use
today if it is already deployed. We refer to such use as legacy use within our document. If a scheme
is not considered suitable for legacy use, or is only considered for such use with certain caveats,
then this should be taken as strong advice that the primitive or scheme be possibly replaced as a
matter of urgency (or even that an attack exists). A system which we deem not secure for legacy
use may still actually be secure if used within a specific environment, e.g. limited key life times,
mitigating controls, or (in the case of hash functions) relying on non-collision resistance properties.
However, in such instances we advise the user consults expert advise to see whether such specific
details are indeed relevant.
In particular, we stress that schemes deemed to be legacy are currently considered secure, but
that for future systems there are better choices available, which means that retaining schemes
which we deem to be legacy in future systems is not best practice. We summarize this distinction
in Table 2.1.
Secondly, we consider the issue of whether a primitive, scheme, or key size is suitable for
deployment in new or future systems. In some sense mechanisms which we consider usable for
new and future systems meet a gold standard of cryptographic strength; they generally will have
proofs of security, will have key sizes equivalent to 128-bit symmetric security or more, will have
no structural weaknesses, will have been well studied and standardized.

Table 2.1: Classification of mechanisms.

Classification    Meaning
Legacy ✗          Attack exists or security considered not sufficient.
                  Mechanism should be replaced in fielded products as a matter of urgency.
Legacy ✓          No known weaknesses at present.
                  Better alternatives exist.
                  Lack of security proof or limited key size.
Future ✓          Mechanism is well studied (often with security proof).
                  Expected to remain secure in a 10-50 year lifetime.
As a general rule of thumb we consider symmetric 80-bit security levels to be sufficient for
legacy applications for the coming years1 , but consider 128-bit security levels to be the minimum
requirement for new systems being deployed. Thus the key take home message is that decision
makers now make plans and preparations for the phasing out of what we term legacy mechanisms
over a period of say 5-10 years. In selecting key sizes for future applications we consider 128-bit
to be sufficient for all but the most sensitive applications. Thus we make no distinction between
high-grade security and low-grade security, since 128-bit encryption is probably secure enough in
the near term.
However, one needs to also take into account the length of time data needs to be kept secure
for. For example it may well be appropriate to use 80-bit encryption into the near future for
transactional data, i.e. data which only needs to be kept secret for a very short space of time;
but to insist on 128-bit encryption for long lived data. All proposals in this document need to be
read with this in mind. We concentrate on proposals which imply a minimal security level across
all applications; i.e. the most conservative approach. Thus this does not imply that a specific
application cannot use security levels lower than considered here.
The document does not consider any mechanisms which are currently only of academic interest.
In particular all the mechanisms we discuss have been standardized to some extent, and have either
been deployed or are due to be deployed in real world systems. This is not a critique of academic
research, but purely a means of focusing the document on mechanisms which will be of interest to
decision makers in industry and government.
Unlike the previous report [113] we consider implementation issues such as side channels result-
ing from timing, power, cache analysis etc., insufficient randomness generation and key life-cycle
management; as well as implementation issues related to the mathematical instantiation of the
1 An exception is made for SHA-1: although it (probably) does not offer 80-bit security, it is still included for
legacy use in this year’s document. However, we propose removing SHA-1 from applications as soon as possible.
Cryptographic primitives are considered the basic building blocks, about which one needs to
make some assumption. This assumption is the level of difficulty of breaking this precise building
block; this assumption is always the cryptographic community’s current “best guess”. We discuss
primitives in detail in Chapter 3.
In Chapter 4 we then go on to discuss basic cryptographic schemes, and in Chapter 5 we discuss
more advanced or esoteric schemes. By a scheme we mean some method for taking a primitive, or
set of primitives, and constructing a cryptographic service out of the primitive. Hence, a scheme
could refer to a digital signature scheme or a mode of operation of a block cipher. It is considered
good cryptographic practice to only use schemes for which there is a well defined security proof
which reduces the security of the scheme to that of the primitive. So for example an attack against
CBC mode using AES should result in an attack against the AES primitive itself. Cryptographic
protocols are dealt with in the companion report [114].
In this 2014 report we add in a new chapter, Chapter 6, on a number of general issues related
to the deployment of cryptographic primitives and schemes. In this edition of the report we restrict
ourselves to hardware and software side-channels, to random number generation and to key life-cycle
management.
1. A security proof which reduces security of a scheme to the security of an underlying primitive
can introduce a security loss. The “loss” is the proportion of additional effort an attacker
who can break the scheme needs to expend so as to break the primitive. This loss leads some
cryptographers to state that the key size of the primitive should be chosen with respect to
this loss. With such a decision, unless proofs are tight2 , the key sizes used in practice will be
larger than one would normally expect. The best one can hope for is that the key size for the
scheme matches that of the underlying primitive.
2. Another school of thought says that a proof is just a design validation, and the fact a tight
proof does not exist may not be for fundamental reasons but could be because our proof
techniques are not sufficiently advanced. They therefore suggest picking key sizes to just
ensure the underlying primitive is secure.
It is this second, pragmatic, approach which we adopt in this document. It is also the approach
commonly taken in industry.
The question then arises as to how to read this document. Whilst the order of the document
is one of going from the ground up, the actual order of making a decision should be from the top
down. We consider two hypothetical situations: one in which a user wishes to select a public key
signature algorithm and another in which he wishes to select a public key encryption algorithm
for use in a specific protocol. Let us not worry too much about which protocol is being used, but
assume that the protocol says that one can select either RSA-PSS or EC-Schnorr as the public key
signature algorithm, and either RSA-OAEP or ECIES as the public key encryption algorithm.
needs to consider “which” RSA primitive to use (i.e. the underlying RSA key size) and which hash
function to use. The scheme itself will impose some conditions on the relevant sizes so they match
up, but this need not concern a reader of this document in most cases. Returning to RSA-PSS
we see that the user should use 1024-bit RSA moduli only for legacy applications and SHA-1 as a
hash function only for legacy applications. If that is all the user requires then this document would
support the user’s decision. However, if the user is looking at creating a new system without any
legacy concerns then this document cannot be used as a justification for using RSA moduli of 1024
bits and SHA-1 as the hash function. The user would instead be forced to consider RSA moduli of
3072 bits (or more) and a hash function such as the 256-bit variant of SHA-2.
Another form of comparison can be made with the documents of various standards organisa-
tions. The ones which have been most referred to in this report are those of IETF, ISO and NIST.
Divergences from our analysis (if any) in these standards are again due to the distinct audiences.
The IETF standardises the protocols which keep the internet running; their main concern is hence
interoperability. As we have seen in recent years, with attacks on TLS and IPSec, this often leads to
compromises in algorithm selection and choice. The ISO takes a very liberal approach to standard-
ising cryptographic algorithms, with far more algorithms standardized than a report like this could
reasonably cover. We have selected algorithms from ISO (and dubbed them suitable/unsuitable
for legacy and future use) due to our perception of their importance in other applications. Finally
the NIST documents are more focused, with only a small subset of schemes being standardized.
A major benefit in the NIST standardization is that when security proofs are available they are
alluded to, and so one can judge the scientific basis of the recommendations.
Finally, we compare with the recommendations of the European Payments Council (EPC). In
their document [119] the EPC also divide cryptographic systems into those for legacy and those
for future use. They classify SHA-1, RSA moduli with 1024 bits, ECC keys of 160 bits as suitable
for legacy use, and 3DES, AES-128, SHA-2 (256 and 512 bit variants), SHA-3, Whirlpool, RSA
moduli with 2048 bits, ECC keys of 224 bits or more as suitable for future use. These are broadly
in line with our analysis.
• Currently practical Post-Quantum Systems: The reason for examining these now is that
systems currently being deployed may need to be resistant against the future development of
a quantum computer, or may need to be designed so that a switch to a post-quantum system
is simple.
• Short signatures and signatures with message recovery: Short signatures are used in multiple
scenarios, and signatures with message recovery are used in currently deployed systems such
as the chip-and-pin system EMV. The current document does not cover such cryptographic
schemes.
• Encryption schemes which enable de-duplication of ciphertexts: The use of such schemes, and
of other deterministic encryption schemes such as format preserving encryption, is becoming
more widely used in real systems. Encryption which enables de-duplication is important to enable
secure cloud backup.
It is hoped that if this document were to be revised in future years, the opportunity would be
taken to also include the aforementioned mechanisms.
[Figure 2.1 (diagram): ECIES splits into an ECIES-KEM and a DEM. Recoverable labels: the KEM
branch involves the ECDLP, a curve size choice (256 bits or 512 bits) and a KDF (X9.63-KDF,
NIST-800-108-KDF, NIST-800-56-KDF-A/B or NIST-800-56-KDF-C) built on a hash function (SHA-256,
SHA-512 or SHA-3); the DEM branch offers OCB, CCM, Encrypt-then-MAC, EAX, CWC or GCM, which
in turn rely on a MAC function (HMAC, EMAC or CMAC), an IND-CPA encryption mode (CTR mode
or CBC mode) and a block cipher (AES-128, AES-192 or AES-256).]

Figure 2.1: Just some of the design space for instantiating the ECIES public key encryption algorithm.
Note that not all standards documents will support all of these options. To read this diagram: a group
of arrows starting with a circle implies the implementer needs to choose one of the resulting paths. A set
of three arrows implies a part of the decision tree which we have removed due to space. In addition (again
for reasons of space) we do not list all possible choices, e.g. some hash functions can be block cipher based.
Even with these restrictions one can see the design space for a cipher as well studied and understood as
ECIES can be quite complex.
Chapter 3
Primitives
This chapter is about basic cryptographic building blocks, the atoms out of which all other crypto-
graphic constructions are produced. In this section we include basic symmetric key building blocks,
such as block ciphers, hash functions and stream ciphers; as well as basic public key building blocks
such as factoring, discrete logarithms and pairings. With each of these building blocks there is some
mathematical hard problem underlying the primitive. For example the RSA primitive is based on
the difficulty of factoring, whereas the AES primitive is (usually) assumed to behave as a keyed pseudo-
random permutation. That these problems are hard or, equivalently, that the primitives are secure, is
an assumption which needs to be made. This assumption is often based on the specific parameters,
or key lengths, used to instantiate the primitives.
Modern cryptography then takes these building blocks/primitives and produces cryptographic
schemes out of them. The de facto methodology, in modern work, is to then show that the result-
ing scheme, when attacked in a specific cryptographic model, is secure assuming the underlying
assumption on the primitive holds. So another way of looking at this chapter and the next, is that
this chapter presents the constructions for which we cannot prove anything rigorously, whereas
the next chapter presents the schemes which should have proofs relative to the primitives in this
chapter actually being secure.
In each section we use the term observation to point out something which may point to a longer
term weakness, or is purely of academic interest, but which is not a practical attack at the time of
writing. In each section we also give a table, and group the primitives within the table in order of
security strength (usually).
3.1 Comparison
In making a decision as to which cryptographic mechanism to employ, one first needs to choose the
mechanism and then decide on the key length to be used. In later sections and chapters we focus
on the mechanism choice, whereas in this section we focus just on the key size. In some schemes
the effective key length is hardwired into the mechanism, in others it is a parameter to be chosen,
in some there are multiple parameters which affect the effective key length.
There is common understanding that what we mean by an effective key length is that an attack
should take 2^k operations for an effective key length of k. Of course this understanding is itself not
well defined as we have not defined what an operation is; but as a rule of thumb it should be the
“basic” operation of the mechanism. This lack of definition of what is meant by an operation means
that it is hard to compare one mechanism against another. For example the best attack against a
block cipher of key length k_b should be equivalent to 2^{k_b} block cipher invocations, whereas the best
known attack against an elliptic curve system with group order of k_e bits should be 2^{k_e/2} elliptic
curve group operations. This often leads one to conclude that one should take k_e = 2 · k_b, but this
assumes that a block cipher call is about the same cost as an elliptic curve group operation (which
may be true on one machine, but not true on another).
This has led authors and standards bodies to conduct a series of studies as to how key sizes
should be compared across various mechanisms. The “standard” method is to equate an effective
key size with a specific block cipher (say 112 corresponds to two- or three-key Triple-DES, 128
corresponds to AES-128, 192 corresponds to AES-192, and 256 corresponds to AES-256), and then
try to establish an estimate for another mechanism's key size which equates to this specific quantum
of effective key size.
In comparing the different literature one meets a major problem in that not all studies compare
the same base symmetric key sizes, or even do an explicit comparison. The website
http://www.keylength.com takes the various proposed models from the literature and presents a
mechanism to produce such a concrete comparison. In Table 3.1 we present either the concrete
recommendations to be found in the literature, or the inferred recommendations presented on the
web site http://www.keylength.com.
We focus on the symmetric key size k, the RSA modulus size ℓ(N) (which is also the size
of a finite field for DLP systems) and the discrete logarithm subgroup size ℓ(q); all of which are
measured in bits. Of course these are just crude approximations and hide many relationships
between parameters which we discuss in future sections below. As one can see from the table the
main divergence in estimates is in the selection of the size ℓ(N) of the RSA modulus.
As one can see, as the symmetric key size increases the size of the associated RSA moduli needs
to become prohibitively large. Ignoring such large value RSA moduli we see that there is surprising
agreement in the associated size of the discrete logarithm subgroup q, which we assume to be an
elliptic curve group order.
Our implicit assumption is that the above key sizes are for (essentially) single use applications.
As a key is used over and over again its security degrades, due to various time-memory tradeoffs.
There are often protocol and scheme level procedures to address this issue; for example salting in
password hashing or the use of short lived session keys. The same holds true in other situations, for
example in [46], it is shown that AES-128 has only 85-bit security if 2^43 encryptions of an arbitrary
fixed text under different keys are available to the attacker.
Very little literature discusses the equivalent block length for block ciphers or the output length
                               k = 80          k = 112         k = 128         k = 192          k = 256
Source                         ℓ(N)   ℓ(q)     ℓ(N)   ℓ(q)     ℓ(N)   ℓ(q)     ℓ(N)    ℓ(q)     ℓ(N)    ℓ(q)
Lenstra–Verheul 2000 [217] ⋆   1184   142      3808   200      5888   230      20160   350      46752   474
Lenstra 2004 [214] ⋆           1329   160      3154   224      4440   256      12548   384      26268   512
IETF 2004 [268] ⋆              1233   148      2448   210      3253   242      7976    367      15489   494
SECG 2009 [319]                1024   160      2048   224      3072   256      7680    384      15360   512
NIST 2012 [262]                1024   160      2048   224      3072   256      7680    384      15360   512
ECRYPT2 2012 [107]             1248   160      2432   224      3248   256      7936    384      15424   512

Table 3.1: Key Size Comparisons in Literature. An entry marked with a ⋆ indicates an inferred
comparison induced from the web site http://www.keylength.com. Where a range is given by the
source we present the minimum values. In the columns k is the symmetric key size, ℓ(N) is the
RSA modulus size (or finite field size for finite field discrete logarithms) and ℓ(q) is the subgroup
size for finite field and elliptic curve discrete logarithms.
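The mapping in Table 3.1 can be turned directly into a small lookup helper. The sketch below (our own illustration, not part of the report) hard-codes the NIST 2012 row and returns the matching ℓ(N) and ℓ(q) for a requested symmetric security level:

```python
# Key-size lookup based on the NIST 2012 row of Table 3.1 (illustrative helper).
NIST_2012 = {  # symmetric k (bits): (ell_N, ell_q) in bits
    80: (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

def equivalent_sizes(k: int) -> tuple:
    """Return (ell_N, ell_q) for the smallest tabulated level that covers k."""
    for level in sorted(NIST_2012):
        if level >= k:
            return NIST_2012[level]
    raise ValueError("no tabulated security level covers k = %d" % k)

print(equivalent_sizes(128))  # -> (3072, 256)
```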
of hash functions or MAC functions; since this is very much scheme/protocol specific. A good rule
of thumb for hash function outputs is that they should correspond in length to 2·k, since often hash
functions need to be collision resistant. However, if only preimage or second-preimage resistance is
needed then output sizes of k can be tolerated.
The standard [259] implicitly recommends that the MAC key and MAC output size should
be equal to the underlying symmetric key size k. However, the work of Preneel and van Oorschot
[285, 286] implies attacks on MAC functions requiring 2^{n/2} operations, where n is the key size or
the size of the MAC function's internal memory. These recommendations can be problematic with
some MAC function constructions based on block ciphers at high security levels, as no major block
cipher has a block length of 256 bits. In addition one needs to distinguish between off-line attacks,
in which a large MAC output size is probably justified, and on-line attacks, where smaller MAC
output sizes can be tolerated. Thus the choice of the MAC output size can be very much scheme,
protocol, or even system, dependent.
is a mathematical assumption, akin to the statement that factoring 3072-bit moduli is hard. The
schemes we present in Chapter 4, that use block ciphers, are often built on the assumption that
the block cipher is secure in the above sense.
Some cryptanalysts include the resistance against related-key attacks in the security evaluation
of a block cipher. We include these results for completeness. Note however that the existence of a
related-key attack on a given block cipher does not contradict the assumption that the block cipher
acts as a pseudo-random permutation. Furthermore, the soundness of security models allowing for
related-key attacks is still under investigation.
Generally speaking we feel the minimum key size for a block cipher should be 128 bits; the
minimum for the block size depends on the precise application but in many applications (for example
construction of MAC functions) a 128-bit block size should now be considered the minimum. We
also consider that the maximum amount of data which should be encrypted under the same key
should be bounded by 2^{n/2} blocks, where n is the block size in bits. However, as indicated before
some short lived cryptograms may warrant smaller block and key sizes in their constructions; but
for general applications we advise a minimum of 128 bits.
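To see why the 2^{n/2}-block bound above matters in practice, the following short calculation (our own illustration) compares the resulting single-key data limits for a 64-bit block cipher (such as 3DES or Blowfish) and a 128-bit block cipher (such as AES):

```python
# Single-key data limits implied by the 2^(n/2)-block rule of thumb above.
def single_key_limit_bytes(block_bits: int) -> int:
    """2^(n/2) blocks of n/8 bytes each."""
    return (2 ** (block_bits // 2)) * (block_bits // 8)

for n in (64, 128):
    limit = single_key_limit_bytes(n)
    print(f"{n}-bit block: 2^{n // 2} blocks, i.e. about {limit / 2**30:.0f} GiB")
# 64-bit blocks give roughly 32 GiB per key, which is easily exceeded in
# practice; 128-bit blocks give an enormously larger margin.
```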
Again, for each primitive we give a short description of the state of the art with respect to known
attacks; we then give guidelines for minimum parameter sizes for future and legacy use. For
convenience these guidelines are summarised in Table 3.2.
Table 3.2: Block ciphers.

                             Classification
Primitive                    Legacy    Future
AES                          ✓         ✓
Camellia                     ✓         ✓
Three-Key-3DES               ✓         ✗
Two-Key-3DES                 ✓         ✗
Kasumi                       ✓         ✗
Blowfish (≥ 80-bit keys)     ✓         ✗
DES                          ✗         ✗
Observation: The strong algebraic structure of the AES cipher has led some researchers to suggest
that it might be susceptible to algebraic attacks [87, 240]. However, such attacks have not been
shown to be effective [77, 219].
For the 192- and 256-bit key versions there are related key attacks [44, 45]. For AES-256 this
attack, using four related keys, requires time 2^99.5 and data complexity 2^99.5. The attack works
due to the way the key schedule is implemented for the 192- and 256-bit keys (due to the mismatch
in block and key size), and does not affect the security of the 128-bit variant. Related key attacks
can clearly be avoided by always selecting cryptographic keys independently at random.
A bi-clique technique can be applied to the cipher to reduce the complexity of exhaustive
key search. For example in [51] it is shown that one can break AES-128 with 2^126.2 encryption
operations and 2^88 chosen plaintexts. For AES-192 and AES-256 these numbers become 2^189.7/2^40
and 2^254.4/2^80 respectively.
Camellia
The Camellia block cipher is used as one of the possible cipher suites in TLS, and unlike AES is of
a Feistel cipher design. Camellia has a block length of 128 bits and supports 3 key lengths: 128,
192 and 256 bits [224]. The versions with a 192- or a 256-bit key are 33% slower than the versions
with a 128-bit key.
Observation: Just as for AES there is a relatively simple set of algebraic equations which define
the Camellia transform; this might leave it open to algebraic attacks. However, just like AES such
attacks have not been shown to be effective.
Kasumi
This cipher [117], used in 3GPP, has a 128-bit key and a 64-bit block size, and is a variant of MISTY-1.
Kasumi is called UIA1 in UMTS and A5/3 in GSM.
Observation: Whilst some provable security against linear and differential cryptanalysis has been
established [188], the cipher suffers from a number of problems. A related key attack [41] requiring
2^76 operations and 2^54 plaintext/ciphertext pairs has been presented. In [100] a more efficient
related key attack is given which requires 2^32 time and 2^26 plaintext/ciphertext pairs. These
attacks do not affect the practical use of Kasumi in applications such as 3GPP; however, given these
attacks, we do not advise the use of Kasumi in new applications.
Blowfish
This cipher [317] has a 64-bit block size, which is too small for some applications and is the reason we
only advise it for legacy use. It also has a key size ranging from 32 to 448 bits, which we only
endorse at 80 bits and above, and then only for legacy applications. The Blowfish block cipher is
used in some IPsec configurations.
Observation: There have been a number of attacks on reduced round versions [189, 292, 341] but
no attacks on the full cipher.
Table 3.3: Hash functions.

                   Output             Classification
Primitive          Length             Legacy    Future
SHA-2              256, 384, 512      ✓         ✓
SHA3               256, 384, 512      ✓         ✓
Whirlpool          512                ✓         ✓
SHA3               224                ✓         ✗
SHA-2              224                ✓         ✗
RIPEMD-160         160                ✓         ✗
SHA-1              160                ✓ (1)     ✗
MD-5               128                ✗         ✗
RIPEMD-128         128                ✗         ✗
and then truncates the output. Since we consider symmetric security levels of less than 128 bits
to be suitable only for legacy applications, we place SHA-224 in the legacy-only division of our
analysis.
Observation: For SHA-224/SHA-256 (resp. SHA-384/SHA-512), collision attacks on reduced round
variants with 31 out of 64 rounds (resp. 24 out of 80 rounds) have been reported [156, 233, 310]. In
addition, reduced round variants with 43 (resp. 46) rounds have also been attacked for preimage
resistance [17, 136].
SHA3
The competition organised by NIST to find an algorithm for SHA3 ended on October 2nd, 2012,
with the selection of Keccak [127]. In April 2014, a draft version of FIPS 202 describing SHA3 was
released [121]. The draft standard contains 4 hash functions: SHA3-224, SHA3-256, SHA3-384
and SHA3-512.
Observation: Reduced round collision attacks (4 out of 24) have been reported [95]. For appli-
cations that use a secret key as part of the input of a hash function, cube attacks with practical
complexity have been shown for up to 6 rounds of Keccak [96].
Whirlpool
Whirlpool produces a 512-bit hash output and is not in the MD-X family, being built from AES
style methods; it is thus a good alternative to use to ensure algorithm diversity.
Observation: Preimage attacks on 5 (out of 10) rounds have been given [311], as well as collisions
on 5.5 rounds [210], with complexity 2^120. In [314] this is extended to 6 rounds, with 2^481 computa-
tion cost. Collision attacks are also given in [314] where eight rounds are attacked with complexity
2^120.
SHA-1
SHA-1 is in widespread use and was designed to provide protection against collision finding of 2^80;
it was standardized in NIST-180-4 [248]. However, several authors claim that collisions can be found
with a computational effort that is significantly lower [236, 346, 347]. The current best analysis is
that of 2^61 operations, reported in [330]. On the other hand explicit collisions for the full SHA-1
have not yet been found, despite collisions for a reduced round variant (73 rounds out of 80) being
found [102].
Due to its importance we repeat the footnote from Chapter 2: we have decided to keep SHA-1
as a legacy use algorithm since SHA-3 has not yet been officially standardized. The expectation is
that as soon as SHA-3 is standardized then SHA-1 will be removed from the legacy use category.
Therefore it is recommended that parties take immediate steps to stop using SHA-1 in legacy
applications.
Observation: The literature also contains preimage attacks on a variant reduced to 45-48 rounds
[18, 66].
RIPEMD-128
Given an output size of 128 bits, collisions can be found in RIPEMD-128 in time 2^64 using generic
attacks, thus RIPEMD-128 can no longer be considered secure in a modern environment irrespective
of any cryptanalysis which reduces the overall complexity. Practical collisions for a 3-round variant
were reported in 2006 [235]. In [211] further cryptanalytic results were presented which lead one
to conclude that RIPEMD-128 is not to be considered secure.
Table 3.4: Stream ciphers.

                 Classification
Primitive        Legacy    Future
HC-128           ✓         ✓
Salsa20/20       ✓         ✓
ChaCha           ✓         ✓
SNOW 2.0         ✓         ✓
SNOW 3G          ✓         ✓
SOSEMANUK        ✓         ✓
Grain            ✓         ✗
Mickey 2.0       ✓         ✗
Trivium          ✓         ✗
Rabbit           ✓         ✗
A5/1             ✗         ✗
A5/2             ✗         ✗
E0               ✗         ✗
RC4              ✗         ✗
SNOW 2.0
SNOW 2.0 comes in 128- and 256-bit key variants. The cipher is included in ISO/IEC 18033-4 [165].
Observation: A distinguishing attack against SNOW 2.0 is theoretically possible [266], but it
requires 2^174 bits of key-stream and work. A related-key attack exists on SNOW 2.0 with a 256-bit
key [195].
SNOW 3G
SNOW 3G is an enhanced version of SNOW 2.0, the main change being the addition of a second
S-Box as a protection against future advances in algebraic cryptanalysis. It uses a 128-bit key and
a 128-bit IV. The cipher is the core of the algorithms UEA2 and UIA2 of the 3GPP UMTS system,
which are identical to the algorithms 128-EIA1 and 128-EEA1 in LTE.
SOSEMANUK
SOSEMANUK was an entrant to the eSTREAM competition and included in the final eSTREAM
portfolio as promising for software implementations [34]. SOSEMANUK supports key lengths from
128 to 256 bits together with a 128-bit initialisation vector. The designers of SOSEMANUK do not
claim more than 128 bits of security for any key length.
The literature contains several attacks on SOSEMANUK, which do not break the claim of 128-
bit security. An attack requiring only a few words of key stream and with time complexity 2^176
was shown in [122]. An attack requiring 2^138 words of key stream and with time complexity 2^138
was shown in [76, 213].
Mickey 2.0
Mickey 2.0 was evaluated by the eSTREAM competition and included in the final eSTREAM portfo-
lio as promising for hardware implementations [22]. It uses an 80-bit key and an 80-bit initialisation
vector. There exists also a scaled-up version Mickey-128 using 128-bit keys and initialisation values,
but this version has not been officially evaluated by eSTREAM [22].
Rabbit
Rabbit was an entrant to the eSTREAM competition and included in the final eSTREAM portfolio
as promising for software implementations. Rabbit uses a 128-bit key together with a 64-bit IV.
Rabbit is described in RFC 4503 and is included in ISO/IEC 18033-4 [165]. In [91] a distinguishing
attack on Rabbit is described. The effect of this in practice has yet to be quantified; thus for 2014
we downgrade Rabbit from being suitable for future use to being suitable only for legacy use.
Trivium
Trivium was an entrant to the eSTREAM competition and included in the final eSTREAM portfolio
as promising for hardware implementations. It has been included in ISO/IEC 29192-3 on lightweight
stream ciphers [168]. Trivium uses an 80-bit key together with an 80-bit IV.
Observation: There have been a number of papers on the cryptanalysis of Trivium and there
currently exists no attack against full Trivium. Aumasson et al. [20] present a distinguishing attack
with complexity 2^30 on a variant of Trivium with the initialisation phase reduced to 790 rounds
(out of 1152). Maximov and Biryukov [228] present a state recovery attack with time complexity
around 2^83.5. This attack shows that Trivium with keys longer than 80 bits provides no more
security than Trivium with an 80-bit key. It is an open problem to modify Trivium so as to obtain
128-bit security in the light of this attack.
A5/2
A5/2 is a weakened version of A5/1 to allow for (historic) export restrictions to certain countries.
It is therefore not considered to be secure.
E0
The E0 stream cipher is used to encrypt data in Bluetooth systems. It uses a 128-bit key and no
IV. The best attack recovers the key using the first 24 bits of 2^24 frames and 2^38 computations [221].
This cipher is therefore not considered to be secure.
RC4
RC4 comes in various key sizes. Despite widespread deployment the RC4 cipher has for many years
been known to suffer from a number of weaknesses. There are various distinguishing attacks [222],
and state recovery attacks [229]. (An efficient technique to recover the secret key from an internal
state is described in [40].)
An important shortcoming of RC4 is that it was designed without an IV input. Some ap-
plications, notably WEP and WPA “fix” this by declaring some bytes of the key as IV, thereby
effectively enabling related-key attacks. This has led to key-recovery attacks on RC4 in WEP [344].
When initialised the first 512 output bytes of the cipher should be discarded due to statistical
biases. If this step is omitted, then key-recovery attacks can be accelerated, e.g. those on WEP
and WPA [322].
Despite statistical biases being known since 1995, SSL/TLS does not discard any of the output
bytes of RC4; this results in recent attacks by AlFardan et al. [6] and Isobe et al. [158].
Table 3.5: Parameter size recommendations for public key primitives; ℓ(x) denotes the size of x in
bits, i.e. the logarithm to base two of a number; a ⋆ denotes some conditions which also need to
be tested, which are explained in the text.

Primitive          Parameters      Legacy System Minimum             Future System Minimum
RSA Problem        N, e, d         ℓ(N) ≥ 1024,                      ℓ(N) ≥ 3072,
                                   e ≥ 3 or 65537, d ≥ N^{1/2}       e ≥ 65537, d ≥ N^{1/2}
Finite Field DLP   p, q, n         ℓ(p^n) ≥ 1024,                    ℓ(p^n) ≥ 3072,
                                   ℓ(p), ℓ(q) > 160                  ℓ(p), ℓ(q) > 256
ECDLP              p, q, n         ℓ(q) ≥ 160, ⋆                     ℓ(q) > 256, ⋆
Pairing            p, q, n, d, k   ℓ(p^{k·n}) ≥ 1024,                ℓ(p^{k·n}) ≥ 3072,
                                   ℓ(p), ℓ(q) > 160                  ℓ(p), ℓ(q) > 256
3.5.1 Factoring
Factoring is the underlying hard problem behind all schemes in the RSA family. In this section we
discuss what is known about the mathematical problem of factoring; we then specialise to the math-
ematical understanding of the RSA Problem. The RSA Problem is the underlying cryptographic
primitive; we are not considering the RSA encryption or signature algorithm at this point. In fact
vanilla RSA should never be used as an encryption or signature algorithm; the RSA primitive (i.e.
the RSA Problem) should only be used in combination with one of the well defined schemes from
Chapter 4.
Since the mid-1990s the state of the art in factoring numbers of general form has been determined
by the factorisation of the RSA-challenge numbers. In the last decade this has progressed at the
following rate: RSA-576 (2003) [124], RSA-640 (2005) [125], RSA-768 (2009) [197]. These records
have all been set with the Number Field Sieve algorithm [216]. It would seem prudent that only
legacy applications should use a 1024-bit RSA modulus going forward, and that future systems should
use RSA keys with a minimum size of 3072 bits.
Since composite moduli for cryptography are usually chosen to be the product of two large
primes N = p · q, to ensure they are hard to factor it is important that p and q are chosen of the
same bit-length, but not too close together. In particular:

• If ℓ(p) ≪ ℓ(q) then factoring can be made easier by using the small value of p (via the ECM
method [184]). Thus selecting p and q such that 0.1 < |ℓ(p) − ℓ(q)| ≤ 20 is a good choice.

• On the other hand, if |p − q| is less than N^{1/4} then factoring can be accomplished by
Coppersmith’s method [80].

Selecting p and q to be random primes of bit length ℓ(N)/2 will, with overwhelming probability,
ensure that N is hard to factor with both these techniques.
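As an illustration of the prime-selection criteria above (this sketch is ours, not the report's, and assumes the third-party Python `cryptography` package), one can generate a 3072-bit modulus and sanity-check that the primes are of balanced size but not too close together:

```python
from cryptography.hazmat.primitives.asymmetric import rsa

def check_rsa_parameters(key_size: int = 3072) -> bool:
    key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    nums = key.private_numbers()
    p, q = nums.p, nums.q
    n = nums.public_numbers.n

    # Balanced primes: bit lengths should differ by at most ~20 bits
    # (approximating the |ell(p) - ell(q)| <= 20 criterion with integer lengths).
    balanced = abs(p.bit_length() - q.bit_length()) <= 20
    # Not too close: require |p - q| > N^(1/4), otherwise Coppersmith-style
    # factoring applies (checked as |p - q|^4 > N to stay with integers).
    well_separated = abs(p - q) ** 4 > n
    return balanced and well_separated

print("parameters acceptable:", check_rsa_parameters())
```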
RSA Problem
Cryptosystems based on factoring are actually usually based not on the difficulty of factoring but
on the difficulty of solving the RSA problem. The RSA Problem is defined as follows: given an
RSA modulus N = p · q, an integer value e such that gcd(e, (p − 1) · (q − 1)) = 1, and a value
y ∈ Z/NZ, find the value x ∈ Z/NZ such that x^e = y (mod N).
If e is too small such a problem can be easily solved, assuming some side information, using
Coppersmith’s lattice based techniques [78, 79, 81]. Thus for RSA based encryption schemes it is
common to select e ≥ 65537. For RSA based signature schemes such low values of e do not seem
to be a problem, thus it is common to select e ≥ 3. For efficiency one often takes e to be as small
a prime as the above results would imply; thus it is very common to find choices of e = 65537
for encryption and e = 3 for signatures in use. In keeping with the conservative nature of the
suggestions in this report we suggest using e = 65537 for future systems using RSA signatures.
The RSA private key is given by d = 1/e (mod (p − 1) · (q − 1)). Some implementers may be
tempted to choose d “small” and then select e so as to optimise the private key operations. Clearly,
just from naive analysis d cannot be too small. However, lattice attacks can also be applied to
choices of d less than N^{0.292} [53, 350]. Lattice attacks in this area have also looked at situations
in which some of the secret key leaks in some way, see for example [115, 151]. We therefore advise
that d is chosen such that d > N^{1/2}; this will happen with overwhelming probability if the user
selects e first and then finds d. Indeed, if standard practice is followed and e is selected first then
d will be of approximately the same size as N with overwhelming probability.
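The relationship between e, d and N described above can be checked with a few lines of Python; the toy primes below are far too small for real use and serve only to illustrate the arithmetic:

```python
# Toy illustration (insecure parameters) of d = e^{-1} mod (p-1)(q-1) and of the
# observation that choosing e first makes d roughly the size of N.
p, q, e = 1009, 1013, 65537
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # modular inverse (Python 3.8+)
assert pow(pow(42, e, n), d, n) == 42    # (x^e)^d = x (mod N)
print(f"N has {n.bit_length()} bits, d has {d.bit_length()} bits")
```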
3.5.2 Discrete Logarithms

• Computational Diffie–Hellman problem: Given g^x and g^y for hidden x and y, compute g^{x·y}.

• Decision Diffie–Hellman problem: Given g^x, g^y and g^z for hidden x, y and z, decide whether
g^z = g^{x·y}.

• Gap Diffie–Hellman problem: Given g^x and g^y for hidden x and y, compute g^{x·y}, given an
oracle which allows solution of the Decision Diffie–Hellman problem.
Clearly the ability to solve the DLP will also give one the ability to solve the above three problems,
but the converse is not known to hold in general (although it is in many systems widely believed
to be the case).
ECDLP
Standard elliptic curve cryptography (i.e. ECC not using pairings) comes in two flavours in practice:
either systems are based on elliptic curves over a large prime field E(F_p), or they are based on elliptic
curves over a field of characteristic two E(F_{2^n}). We denote the field size by p^n in what follows, so
when writing p^n we implicitly assume either p = 2 or n = 1. We let q denote the largest prime
factor of the group order and let h denote the “cofactor”, so h · q = #E(F_{p^n}). To avoid known
• The smallest t such that q divides p^{t·n} − 1 is such that extracting discrete logarithms in the
finite field of size p^{t·n} is hard. This is the so called MOV condition [237].
• If n = 1 then we should not have p = q. These are the so-called anomalous curves for which
there is a polynomial time attack [315, 321, 329].
• If p = 2 then n should be prime. This is to avoid so-called Weil descent attacks [129].
The above three conditions are denoted by ⋆ in Table 3.5. It is common, to avoid small subgroup
attacks, for the curve to be chosen such that h = 1 in the case of n = 1 and h = 2 or 4 in the
case of p = 2. To avoid implementation mistakes in protocols we strongly advise that curves are
selected with h = 1. Some fast implementations can be obtained when h = 4, but when using these,
protection against small subgroup attacks also needs to be implemented.
There is a subclass of curves called Koblitz curves in the case of p = 2 which offer some
performance advantages, but we do not consider the benefit to outweigh the cost for modern
processors; thus our discussion focuses on general curves only. Some standards, e.g. [116], stipulate
that the class number of the associated endomorphism ring must be larger than some constant (e.g.
200). We see no cryptographic reason for making this recommendation, since no weakness is known
for such curves. If curves are selected at random it is overwhelmingly likely that the curve has a
large endomorphism ring in any case.
The largest ECDLP records have been set for the case of n = 1 with a p of size 109-bits [56],
and for p = 2 with n = 109 [70]. These record setting achievements are all performed with the
method of distinguished points [340], which is itself based on Pollard’s rho method [282]. To avoid
such “generic attacks” the value q should be at least 160 bits in length for legacy applications and
at least 256 bits in length for new deployments.
Various standards, e.g. [13, 14, 320] specify a set of recommended curves; many of which also
occur in other standards and specifications, e.g. in TLS [48]. Due to issues of interoperability the
authors feel that using a curve specified in a standard is best practice. Thus the main choice for
an implementer is between curves in characteristic two and large prime characteristic.
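In code, following the advice to use a curve from a recognised standard typically amounts to a single call; the sketch below (illustrative only, assuming the Python `cryptography` package) selects NIST P-256, a prime-field curve with cofactor h = 1:

```python
from cryptography.hazmat.primitives.asymmetric import ec

# NIST P-256 (secp256r1): a standardized large-prime-characteristic curve with
# a 256-bit prime-order subgroup and cofactor h = 1.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()
print(public_key.curve.name, public_key.curve.key_size, "bits")
```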
3.5.3 Pairings
Pairing based systems take two elliptic curves E(F_{p^n}) and Ê(F_{p^{n·d}}), each containing a subgroup of
order q. We denote the subgroup of order q in each of these elliptic curves by G_1 and G_2. Pairing
based systems also utilise a finite field F_{p^{k·n}}, where q divides p^{k·n} − 1. These three structures are
linked via a bilinear mapping t̂ : G_1 × G_2 −→ G_T, where G_T is the multiplicative subgroup of F_{p^{k·n}}
of order q. The value k is called the embedding degree, and we always have 1 ≤ d ≤ k. Whilst there
are many hard problems on which pairing based cryptography is based, the most efficient attack is
almost always the extraction of discrete logarithms in either one of the elliptic curves or the finite
field (although care needs to be taken with some schemes due to the additional information the
scheme makes available).
Given our previous discussion on the finite field DLP and the ECDLP the parameter choices
for legacy and new systems are immediate. In addition, note that the conditions in Table 3.5 for
pairings immediately imply all the special conditions for elliptic curve based systems indicated by
a ⋆ in the ECDLP row. This explains the lack of a ⋆ in the pairing row of Table 3.5.
3.6 Key Size Analysis

1. Block Ciphers: For near term use we advise AES-128 and for long term use AES-256.

2. Hash Functions: For near term use we advise SHA-256 and for long term use SHA-512.

3. Public Key Primitive: For near term use we advise 256 bit elliptic curves, and for long term
use 512 bit elliptic curves.

Table 3.6: Key Size Analysis. A ⋆ indicates that the value could be smaller due to specific protocol or
system reasons; the value given is for general purposes.
Note that all of our guidelines need to be read given the aspects described in Section 2.4 which
we do not cover in this report. Finally, we note that the guidelines above, and indeed all analysis
in this document, are on the basis that there is no breakthrough in the construction of quantum
computers. If the development of quantum computers became imminent, then all of this document's
guidelines would need to be seriously reassessed. In particular all of the public key based primitives
in this document should be considered to be insecure.
Chapter 4
As mentioned previously a cryptographic scheme usually comes with an associated security proof.
This is (most often) an algorithm which takes an adversary against the scheme in some well defined
model, and turns the adversary into one which breaks some property of the underlying primitive (or
primitives) out of which the scheme is constructed. If one then believes the primitive to be secure,
one then has a strong guarantee that the scheme is well designed. Of course other weaknesses may
exist, but the security proof validates the basic design of the scheme. In modern cryptography all
schemes should come with a security proof.
The above clean explanation however comes with some caveats. In theoretical cryptography a
big distinction is made between schemes which have proofs in the standard model of computation,
and those which have proofs in the random oracle model. The random oracle model is a model
in which hash functions are assumed to be idealised objects. A similar issue occurs with some
proofs using idealised groups (the so-called generic group model), or idealised ciphers (a.k.a. the
ideal cipher model). In this document we take, as do most cryptographers working with real world
systems, the pragmatic view: that a scheme with a proof in the random oracle model is better than
one with no proof, and that the use of random oracles etc. can be justified if they produce schemes
which have performance advantages over schemes which have proofs in the standard model.
It is sometimes tempting for an implementer to use the same key for different purposes. For example, one might use a symmetric AES key both as the key for an application of AES in an encryption scheme and for the use of AES within a MAC scheme, or within different modes of operation [130]. As another example, one can imagine using an RSA private key both as a decryption key and as a key to generate RSA signatures; indeed this latter use-case is permitted in the EMV chip-and-pin system [93]. Another example would be to use the same encryption key on a symmetric channel between Alice and Bob for two way communication, i.e. using one bidirectional key as opposed to two unidirectional keys. Such usage can often lead to unexpected system behaviour, thus it is good security practice to design explicit key separation into systems.
Key separation means we can isolate the system's dependence on each key and its usages; and
indeed many security proofs implicitly assume that key separation is being deployed. However, in some specific instances one can show, for specific pairs of cryptographic schemes, that key separation is not necessary. We do not discuss this further in this document but refer the reader to [9,93,271], and simply warn the reader that the key separation principle should only ever be violated with extreme caution. In general key separation is a good design principle in systems, which can help to avoid logical errors in other system components. If key separation is to be violated then we advise that this is only done following a rigorous analysis and associated security proofs.
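As an illustration of explicit key separation, the following sketch derives independent encryption and MAC keys from a single master secret using HKDF with distinct context labels (here using the Python cryptography library as an assumed implementation choice; the label strings are purely illustrative and not part of any standard):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_subkeys(master_secret: bytes) -> tuple[bytes, bytes]:
        # Derive two independent sub-keys from one master secret; the distinct
        # `info` labels provide the key separation discussed above.
        enc_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                       info=b"example-protocol: encryption key").derive(master_secret)
        mac_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"example-protocol: MAC key").derive(master_secret)
        return enc_key, mac_key

With this pattern, compromise or misuse of one derived key does not directly expose the other, and the security analysis of each component can be carried out independently.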
In Tables 4.1, 4.2, 4.4 and 4.5 we present our summary of the various symmetric and asymmetric
schemes considered in this document. In each scheme we assume the parameters and building blocks
have been chosen so that the guidelines of Chapter 3 apply.
In Table 4.1 we give (some of) the security notions for symmetric encryption achieved by the various constructions presented in Sections 4.1 and 4.3. Whether a construction is suitable for future or legacy use needs to be decided by consideration of the underlying block cipher, and therefore by reference to Table 3.2. For general encryption of data we strongly advise the use of an authenticated encryption scheme, and of CCM, EAX or GCM modes in particular. The columns IND-CPA, IND-CCA and IND-CVA refer to indistinguishability under chosen plaintext, chosen ciphertext and ciphertext validity attacks respectively. The latter class of attacks lies somewhere between IND-CPA and IND-CCA and includes padding oracle attacks. Of course some of the padding oracle attacks imply a specific choice as to how padding is performed in such schemes; in our table a scheme which does not meet IND-CVA fails to do so for a specific padding method. A scheme for which it is probably true that it is IND-CVA is marked with a bracketed tick. Similarly, an authenticated encryption scheme which does not meet IND-CCA is one which does not meet this goal for a specific choice of underlying components.
4.1.1 ECB
Electronic Code Book (ECB) mode [254] should be used with care. It should only be used to
encrypt messages with length at most that of the underlying block size, and only for keys which
are used in a one-time manner. This is because without such guarantees ECB mode provides no
modern notion of security.
4.1.2 CBC
Cipher Block Chaining (CBC) mode [254] is the most widely used mode of operation. Unless used with a one-time key, an independent and random IV must be used for each message; with such usage the mode can be shown to be IND-CPA secure [30], if the underlying block cipher is secure. With a non-random or predictable IV, CBC mode is insecure. In particular, using a nonce as the IV is insufficient to prove security.
The mode is not IND-CCA secure as ciphertext integrity is not ensured; for applications requiring IND-CCA security an authenticated encryption mode should be used (for example by applying a message authentication code to the output of CBC encryption). For further details see Section 4.3. Since CBC mode requires padding of the underlying message before encryption, the mode suffers from certain padding oracle attacks [272, 342, 354]. Again, usage of CBC within an authenticated encryption scheme (utilising uniform error reporting in the case of Encrypt-then-MAC schemes) can mitigate such attacks.
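The recommended usage (a fresh random IV per message and padding before encryption) can be sketched as follows, using AES-128 in CBC mode via the Python cryptography library as an assumed implementation choice. This provides confidentiality only; for IND-CCA security the output should additionally be authenticated, e.g. with HMAC in an Encrypt-then-MAC construction (Section 4.3).

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
        iv = os.urandom(16)                      # independent random IV per message
        padder = padding.PKCS7(128).padder()     # CBC requires input padded to the block size
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()   # prepend the IV to the ciphertext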
4.1.3 OFB
Output Feedback (OFB) mode [254] produces a stream cipher from a block cipher primitive, using
an IV as the initial input to the block cipher and then feeding the resulting output back into the
blockcipher to create a stream of blocks. To improve efficiency the stream can be precomputed.
The mode is IND-CPA secure when the IV is random (this follows from the security result for CBC mode). If the IV is merely a nonce then IND-CPA security is not satisfied. The mode is not IND-CCA secure as ciphertext integrity is not ensured; for applications requiring IND-CCA security an authenticated encryption mode should be used (cf. Section 4.3). OFB mode does not require padding so does not suffer from padding oracle attacks.
4.1.4 CFB
Cipher Feedback (CFB) mode [254] produces a self-synchronising stream cipher from a block cipher.
Unless used with a one-time key, an independent and random IV must be used for each message; with such usage the mode can be shown to be IND-CPA secure [8], if the underlying block cipher is secure and the IV is random (i.e. not merely a nonce).
The mode is not IND-CCA secure as ciphertext integrity is not ensured. For applications
requiring IND-CCA security an authenticated encryption mode is to be used (cf. Section 4.3).
CFB mode does not require padding so does not suffer from padding oracle attacks.
4.1.5 CTR
Counter (CTR) mode [254] produces a stream cipher from a block cipher primitive, using a counter
as the input message to the block cipher and then taking the resulting output as the stream cipher
sequence. The counter (or IV) should be a nonce to achieve IND-CPA security [30]. The scheme is
rendered insecure if the counter is repeated.
The mode is not IND-CCA secure as ciphertext integrity is not ensured; for applications requiring IND-CCA security an authenticated encryption mode should be used (cf. Section 4.3). No padding is necessary so the mode does not suffer from padding oracle attacks.
Unlike all previously mentioned modes, CTR mode is easily and fully parallelisable, allowing for much faster encryption and decryption.
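The following sketch (again using the Python cryptography library as an assumed choice) illustrates CTR mode with a fresh 128-bit initial counter block drawn at random for each message; the essential requirement is that no counter value is ever reused under the same key.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
        initial_counter = os.urandom(16)          # must never repeat under the same key
        enc = Cipher(algorithms.AES(key), modes.CTR(initial_counter)).encryptor()
        return initial_counter + enc.update(plaintext) + enc.finalize()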
4.1.6 XTS
XTS mode [257] is short for XEX Tweakable Block Cipher with Ciphertext Stealing and is based on
the XEX tweakable block cipher [297] (using two keys instead of one). The mode was specifically
designed for encrypted data storage using fixed-length data units, and was used in the TrueCrypt
system.
Due to the specific application of disc encryption the standard notion of IND-CPA security is
not appropriate for this setting. It is mentioned in [257] that the mode should provide slightly more
protection against data manipulation than standard confidentiality-only modes. The exact notion
remains unclear and as a result XTS mode does not have a proof of security. Further technical
discussion on this matter can be found in [299, Chapter 6] and [220]. The underlying tweakable
block cipher XEX is proved secure as a strong pseudo-random permutation [297].
Due to its “narrow-block” design XTS mode offers significant efficiency benefits over “wide-
block” schemes.
4.1.7 EME
ECB-mask-ECB (EME) mode was designed by Halevi and Rogaway [141] and has been improved
further by Halevi [139]. EME mode is designed for the encrypted data storage setting and is proved
secure as a strong tweakable pseudo-random permutation. Due to its wide block design it will be
half the speed of XTS mode but in return does offer greater security. EME is patented and its use
is therefore restricted.
Scheme   Legacy   Future   Building Block
CMAC     ✓        ✓        Any block cipher as a PRP
HMAC     ✓        ✓        Any hash function as a PRF
UMAC     ✓        ✓        An internal universal hash function
EMAC     ✓        ✗        Any block cipher as a PRP
GMAC     ✓        ✗        Finite field operations
AMAC     ✓        ✗        Any block cipher
Table 4.2: Symmetric Key Based Authentication Summary Table. When instantiating the primitives they should be selected according to our division into legacy and future use, to provide the MAC function with the same level of security.
EMAC
The algorithm was introduced in [274] and is specified as Algorithm 2 in ISO-9797-1 [170]. There are known attacks against the scheme that require 2^(n/2) MAC operations, where n is the block size. The scheme should therefore not be used, unless frequent rekeying is employed. For a variant of the scheme that uses two independent keys, provable security guarantees have been derived in [274,276]. Note however that the security of the scheme is bounded by 2^k, where k is the length of a single key. There are no known guarantees for the version where the two keys are derived from a single key in the way specified by the standard. The function LMAC obtains the same security bounds as EMAC but uses one fewer encryption operation.
AMAC
The algorithm was introduced in [11] and is also specified as Algorithm 3 in ISO 9797-1 [170]. The
algorithm is known as ANSI Retail MAC, or just AMAC for short, and is deployed in banking
applications with DES as the underlying block cipher. There are known attacks against the scheme
that require 2^(n/2) MAC operations, where n is the block size. The scheme should therefore not be
used, unless frequent rekeying is employed.
CMAC
The CMAC scheme was introduced in [172] and standardized as Algorithm 5 in [170]. It enjoys
provable security guarantees under the assumption that the underlying block-cipher is a PRP [242].
In particular this requires frequent rekeying; for example when instantiated with AES-128 existing
standards recommend that the scheme should be used for at most 2^48 messages. Furthermore, the
scheme should only be used in applications where no party learns the enciphering of the all-0 string
under the block-cipher underlying the MAC scheme. (This is a problem if Key Check Values as
defined in ANSI X9.24-1:2009 [12] are used.)
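The following minimal sketch computes and verifies a CMAC tag with AES, using the Python cryptography library as an assumed implementation choice:

    from cryptography.hazmat.primitives import cmac
    from cryptography.hazmat.primitives.ciphers import algorithms

    def cmac_tag(key: bytes, message: bytes) -> bytes:
        c = cmac.CMAC(algorithms.AES(key))
        c.update(message)
        return c.finalize()                       # 128-bit tag when AES is the block cipher

    def cmac_verify(key: bytes, message: bytes, tag: bytes) -> None:
        c = cmac.CMAC(algorithms.AES(key))
        c.update(message)
        c.verify(tag)                             # raises InvalidSignature on mismatch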
HMAC
Even when the underlying hash function is no longer collision resistant, HMAC can still be considered to be reasonably secure, provided that the collision attacks do not yield distinguishing attacks against the pseudo-randomness of the underlying compression function. HMAC-MD4 should therefore not be used, while HMAC-SHA1 and HMAC-MD5 are still choices for which forgeries cannot be made. However, we do not propose usage with MD5 even for legacy applications, and use with SHA-1 is proposed only with the usual caveats mentioned before. Conservative instantiations should consider HMAC-SHA2 and HMAC-SHA3.
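A minimal HMAC example using the Python standard library (hashlib and hmac), with SHA-256 as a conservative hash choice, is sketched below; the constant-time comparison during verification matters in practice (see Section 6.1).

    import hashlib
    import hmac

    def hmac_tag(key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def hmac_verify(key: bytes, message: bytes, tag: bytes) -> bool:
        # compare_digest avoids leaking the position of a mismatch through timing
        return hmac.compare_digest(hmac_tag(key, message), tag)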
UMAC
UMAC was introduced in [47] and specified in [208]. The scheme has provable security guaran-
tees [47]. The scheme internally uses a universal hash function for which the computation can be parallelized, which in turn allows for efficient implementations with high throughput. The scheme requires a nonce for each application. One should ensure that the input nonces do not repeat. Rekeying should occur after 2^64 applications. Due to the analysis of Handschuh and Preneel [143], the 32-bit output version allows a full key recovery after a few chosen texts and 2^40 verifications. This implies one also needs to limit the number of verifications, irrespective of nonce reuse. In all cases MAC tags of 64 bits in length should be used.
GMAC
GMAC is the MAC function underlying the authenticated encryption mode GCM. It makes use of
polynomials over the finite field GF(2^128), and evaluates a message-dependent function at a fixed value. This can lead to some weaknesses; indeed, in the use of SNOW 3G in LTE, a highly similar construction alters the fixed value at each invocation. Without this fix, there is a growing body of work examining weaknesses of the construction, e.g. [143, 287, 307]. Due to these potential
issues we leave the use of GMAC outside of GCM mode in the legacy only division. See the entry
on GCM mode below for further commentary.
A treatment of how randomised and nonce-based constructions of this type may be composed securely can be found in the paper by Namprempre et al. [241].
4.3.2 OCB
Offset Codebook (OCB) mode [167] was proposed by Rogaway et al. [301]. The mode’s design is
based on Jutla’s authenticated encryption mode, IAPM. OCB mode is provably secure assuming
the underlying block cipher is secure. OCB mode is a one-pass mode of operation making it highly
efficient. Only one block cipher call is necessary for each plaintext block, (with an additional two
calls needed to complete the whole encryption process).
The adoption of OCB mode has been hindered by two U.S. patents. As of January 2013, the author has stated that OCB mode is free for software usage under a GNU General Public License, and for other non-open-source software under a non-military license [300].
4.3.3 CCM
CCM mode [255] was proposed in [349] and essentially combines CTR mode with CBC-MAC, using
the same block cipher and key. The mode is defined only for 128-bit block ciphers and is used in
802.11i. A proof of security was given in [176], and a critique has been given in [303].
The main drawback of CCM mode is its inefficiency: each plaintext block requires two block cipher calls. Secondly, the mode is not “online”; as a result the whole plaintext must be known before encryption can be performed. (An online scheme allows encryption to be performed on-the-fly as and when plaintext blocks are available.) For this reason (amongst others) CCM mode has in some sense been superseded by EAX mode.
4.3.4 EAX
EAX mode [167] was presented in [33], where an associated proof of security was also given. It is
very similar to CCM mode, also being a two-pass method based on CTR mode and CBC-MAC but
with the advantage that both encryption and decryption can be performed in an online manner.
4.3.5 CWC
Carter-Wegman + Counter (CWC) mode was designed by Kohno, Viega and Whiting [202]. As
the name suggests it combines a Carter-Wegman MAC, to achieve authenticity, with CTR mode
encryption, to achieve privacy. It is provably secure assuming the IV is a nonce and the underlying
block cipher is secure. Care should be taken to ensure that IVs are never repeated otherwise forgery
attacks may be possible. When considering whether to standardise CWC mode or GCM, NIST
ultimately chose GCM. As a result GCM is much more widely used and studied.
4.3.6 GCM
Galois/Counter Mode (GCM) [256] was designed by McGrew and Viega [230, 231] as an improve-
ment to CWC mode. It again combines Counter mode with a Carter-Wegman MAC (i.e. GMAC),
whose underlying hash function is based on polynomials over the finite field GF(2^128). GCM is
widely used and is recommended as an option in the IETF RFCs for IPsec, SSH and TLS. The
mode is online, is fully parallelisable and its design facilitates efficient implementations in hardware.
GCM is provably secure [173] assuming that the IV is a nonce and the underlying block cipher
is secure. Note that repeating IVs lead to key recovery attacks [143]. Joux [178] demonstrated
a problem in the NIST specification of GCM when non-default length IVs are used. Ferguson’s
[177] critique highlights a security weakness when short authentication tags are used. To prevent
attacks based on short tags it is wise to insist that authentication tags have length at least 96 bits.
Furthermore it is wise to also insist that the length of nonces is fixed at 96 bits. Saarinen [307]
raises the issue of weak keys, which may lead to cycling attacks. The work of Proctor and Cid [287]
presents an algebraic analysis which demonstrates even more weak keys. In the conclusion of their
paper Proctor and Cid discuss the significance of weak key attacks. They state that although it is
highly undesirable for almost every subset of the keyspace to be a weak key class, for many schemes
(GCM included) this will not reduce the security to an unacceptable level.
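The following sketch shows GCM usage respecting the constraints discussed above (a unique 96-bit nonce per key and a full-length 128-bit tag), using the AESGCM interface of the Python cryptography library as an assumed implementation choice:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)                      # uses a full 128-bit authentication tag

    nonce = os.urandom(12)                    # 96-bit nonce; must be unique per key
    aad = b"associated header data"           # authenticated but not encrypted
    ciphertext = aesgcm.encrypt(nonce, b"secret payload", aad)
    plaintext = aesgcm.decrypt(nonce, ciphertext, aad)   # raises InvalidTag on failure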
Primitive                      Legacy   Future   Building Block
NIST-800-108-KDF (all modes)   ✓        ✓        A PRF
X9.63-KDF                      ✓        ✓        Any hash function
NIST-800-56-KDF-A/B            ✓        ✓        Any hash function
NIST-800-56-KDF-C              ✓        ✓        A MAC function
HKDF                           ✓        ✓        HMAC based PRF
IKE-v2-KDF                     ✓        ✓        HMAC based PRF
TLS-v1.2-KDF                   ✓        ✓        HMAC (SHA-2) based PRF
IKE-v1-KDF                     ✓        ✗        HMAC based PRF
TLS-v1.1-KDF                   ✓        ✗        HMAC (MD5 and SHA-1) based PRF
Table 4.4: Key Derivation Function Summary Table. When instantiating the primitives they should be selected according to our division into legacy and future use, to provide the PRF with the same level of security.
4.4.1 NIST-800-108-KDF
NIST-SP800-108 [251] defines a family of KDFs based on pseudo-random functions (PRFs). These
KDFs can produce arbitrary length output and they are formed by repeated application of the
PRF. One variant (Counter mode) applies the PRF with the input secret string as key, to an input
consisting of a counter and auxiliary data; one variant (Feedback mode) does the same but also
takes as input in each round the output of the previous round. The final double pipelined mode
uses two iterations of the same PRF (with the same key in each iteration), but the output of the
first iteration (working in a feedback mode) is passed as input into the second iteration; with the
second iteration forming the output. The standard does not define how any key material is turned
into a key for the PRF, but this is addressed in NIST-SP800-56C [261].
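As an illustration of the Counter mode variant, the following simplified sketch uses HMAC-SHA-256 as the PRF. The exact field encodings (counter width, separator and length fields) are assumptions made for illustration; a real implementation must follow the encoding fixed by the standard and the application profile.

    import hashlib
    import hmac
    import struct

    def kdf_counter_mode(key_in: bytes, label: bytes, context: bytes, out_len: int) -> bytes:
        out = b""
        counter = 1
        length_field = struct.pack(">I", out_len * 8)     # requested output length in bits
        while len(out) < out_len:
            data = struct.pack(">I", counter) + label + b"\x00" + context + length_field
            out += hmac.new(key_in, data, hashlib.sha256).digest()
            counter += 1
        return out[:out_len]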
4.4.2 X9.63-KDF
This KDF is defined in the ANSI standard X9.63 [14] and was specifically designed in that standard
for use with elliptic curve derived keys; although this is not important for its application. The
KDF works by repeatedly hashing the concatenation of the shared random string, a counter and
the shared info. The KDF is secure in the random oracle model; however, there are now better designs for KDFs than this one. We still include it for future use, as there are no reasons (bar the existence of better schemes) to degrade it to legacy only.
4.4.3 NIST-800-56-KDFs
A variant of the X9.63-KDF is defined in NIST-SP800-56A/B [259, 260]. The main distinction is that the hash function is repeatedly applied to the concatenation of the counter, the shared random string and the shared info (i.e. a different order is used). Similar comments apply to its use in future and legacy systems as those made for X9.63-KDF above.
In NIST-SP800-56C [261] a different KDF is defined which uses a MAC function application
to obtain the derived key; with a publicly known parameter (or salt value) used as the key to the
MAC. This KDF has stronger security guarantees than the hash function based KDFs (for example
one does not need a proof in the random oracle model). However, the output length is limited to
the output length of the MAC, which can be problematic when deriving secret keys for use in
authenticated encryption schemes requiring double length keys (e.g. Encrypt-then-MAC). For this
reason the standard also specifies a key expansion methodology based on NIST-800-108 [251], which
takes the same MAC function used in the KDF, and then uses the output of the KDF as the key
to the MAC function so as to define a PRF.
4.4.5 TLS-KDF
This is the KDF defined for use in TLS; it is specified in [94] and [48]. In the TLS v1.0 and v1.1
versions of the KDF, HMAC-SHA1 and HMAC-MD5 are used as KDFs and their outputs are then
exclusive-or’d together; producing a PRF sometimes called HMAC-MD5/HMAC-SHA1. In TLS
v1.2 the PRF is simply HMAC instantiated with SHA-2. In both cases the underlying PRF is used
to both extract randomness and for key expansion.
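For concreteness, the TLS v1.2 PRF (the P_hash expansion defined in RFC 5246, instantiated here with HMAC-SHA-256) can be sketched as follows:

    import hashlib
    import hmac

    def tls12_prf(secret: bytes, label: bytes, seed: bytes, out_len: int) -> bytes:
        full_seed = label + seed
        a = full_seed                                           # A(0) = seed
        out = b""
        while len(out) < out_len:
            a = hmac.new(secret, a, hashlib.sha256).digest()    # A(i) = HMAC(secret, A(i-1))
            out += hmac.new(secret, a + full_seed, hashlib.sha256).digest()
        return out[:out_len]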
• Certification: Public keys almost always need to be certified in some way; i.e. a crypto-
graphic binding needs to be established between the public key and the identity of the user
claiming to own that key. Such certification usually comes in the form of a digital certifi-
cate, produced using a proposed signing algorithm. This is not needed for the identity based
schemes considered later.
• Domain Parameter Validation: Some schemes, such as those based on discrete logarithms,
share a set of parameters across a number of users; these are often called Domain Parameters.
Before using such a set of domain parameters a user needs to validate them to be secure, i.e.
to meet the security level that the user is expecting. To ease this concern it is common to
select domain parameters which have been specified in a well respected standards document.
• Public Key Validation: In many schemes and protocols long term or ephemeral public
keys need to be validated. By this we mean that the data being received actually corresponds
to a potentially valid public key (and not a potentially weak key). For example this could
consist of checking whether a received elliptic curve point actually is a point on the given
curve, or does not lie in a small subgroup. These checks are very important for security but
often are skipped in descriptions of protocols and academic treatments.
Public Key Encryption/Key Encapsulation
Scheme            Legacy   Future   Notes
RSA-OAEP          ✓        ✓        See text
RSA-KEM           ✓        ✓        See text
PSEC-KEM          ✓        ✓        See text
ECIES-KEM         ✓        ✓        See text
RSA-PKCS#1 v1.5   ✗        ✗
4.6.2 RSA-OAEP
Defined in [279], and first presented in [32], this is the preferred method of using the RSA primitive
to encrypt a small message. It is known to be provably secure in the random oracle model [126],
and the proof has been verified in the Coq theorem proving system [27]. A decryption failure oracle attack is possible [223] if implementations are not careful to ensure uniform error reporting and constant-time behaviour. Security is proved in the random oracle model, i.e. under the assumption that the hash functions used in the scheme behave as random oracles. It is good practice to ensure that the hash functions used in the scheme are instantiated with SHA-1 for legacy applications and SHA-2/SHA-3 for future applications.
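A minimal usage sketch with the Python cryptography library (an assumed implementation choice), using a 3072-bit modulus and SHA-256 inside OAEP, looks as follows:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_key = private_key.public_key()

    ciphertext = public_key.encrypt(b"a short message", oaep)
    recovered = private_key.decrypt(ciphertext, oaep)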
A public key encapsulation mechanism (KEM) is combined with a symmetric data encapsulation mechanism (DEM), i.e. a symmetric encryption algorithm, and the combination is referred to as a hybrid cipher. This is the preferred method for performing public key encryption of data, and is often called the KEM-DEM paradigm.
Various standards specify the precise DEM to be used with a specific KEM. So for example
ECIES can refer to a standardized scheme in which a specific choice of DEM is mandated for use
with ECIES-KEM. In this document we allow any DEM to be used with any KEM, the exact choice
is left to the user. The precise analysis depends on the security level (legacy or future) we assign
to the DEM and the constituent parts; as well as the precise instantiation of the underlying public
key primitive.
4.7.1 RSA-KEM
Defined in [164], this Key Encapsulation Method takes a random element m ∈ Z/NZ and encrypts it using the RSA function. The resulting ciphertext is the encapsulation of a key. The output key is given by applying a KDF to m, so as to obtain a key in {0, 1}^k. The scheme is secure in the random oracle model (modelling the KDF as a random oracle), with very good security guarantees [147,324]. We assume that the KDF used in the scheme is one of the good ones mentioned in Section 4.4.
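The construction is simple enough to sketch directly. In the following (pure Python) sketch, SHA-256 stands in for one of the KDFs of Section 4.4, purely for illustration; it encapsulates a 128-bit key under an RSA public key (n, e):

    import hashlib
    import secrets

    def rsa_kem_encapsulate(n: int, e: int, key_len: int = 16) -> tuple[int, bytes]:
        byte_len = (n.bit_length() + 7) // 8
        m = secrets.randbelow(n - 1) + 1                 # random element of Z/NZ
        c = pow(m, e, n)                                 # encapsulation: raw RSA applied to m
        k = hashlib.sha256(m.to_bytes(byte_len, "big")).digest()[:key_len]
        return c, k                                      # send c; use k as the symmetric key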
4.7.2 PSEC-KEM
This scheme is defined in [164] and is based on elliptic curves. Again, when modelling the KDF as a random oracle, this scheme is provably secure, assuming the computational Diffie–Hellman problem is hard in the group in which the scheme is instantiated. Whilst this gives a stronger security guarantee than ECIES-KEM below, in that security is not based on gap Diffie–Hellman, the latter scheme is often preferred due to performance considerations. Again we assume that the KDF used in the scheme is one of the good ones from Section 4.4.
4.7.3 ECIES-KEM
This is the discrete logarithm based encryption scheme of choice. Defined in [14, 164, 319], the
scheme is secure assuming the KDF is modelled as a random oracle. However, this guarantee requires one to assume that the gap Diffie–Hellman problem is hard (which holds in general elliptic curve groups but sometimes not in pairing groups). Earlier versions of standards defining ECIES had issues related to how the KDF was applied, producing a form of benign malleability, which although not a practical security weakness did give the scheme unwelcome features. Again we assume that the KDF used in the scheme is one of the good ones from Section 4.4.
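The core of ECIES-KEM is an ephemeral Diffie–Hellman operation followed by a KDF. The sketch below uses the Python cryptography library with the P-256 curve and HKDF-SHA-256 as assumed choices; the exact inputs to the KDF differ between the standards cited above.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def ecies_kem_encapsulate(recipient_pub: ec.EllipticCurvePublicKey) -> tuple[bytes, bytes]:
        eph = ec.generate_private_key(ec.SECP256R1())        # ephemeral key pair
        shared = eph.exchange(ec.ECDH(), recipient_pub)      # Diffie-Hellman shared secret
        eph_point = eph.public_key().public_bytes(
            serialization.Encoding.X962,
            serialization.PublicFormat.UncompressedPoint)
        key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=eph_point).derive(shared)
        return eph_point, key                                # encapsulation is the ephemeral point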
4.8.2 RSA-PSS
This scheme, defined in [279], can be shown to be UF-CMA secure in the random oracle model [175].
It is used in a number of places including e-passports.
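A usage sketch with the Python cryptography library (assumed choices: a 3072-bit key, SHA-256 and the maximum salt length) is as follows:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    message = b"message to be signed"

    signature = private_key.sign(message, pss, hashes.SHA256())
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())  # raises on failure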
4.8.3 RSA-FDH
The RSA-FDH scheme hashes the message to the group Z/NZ and then applies the RSA (decryption) function to the output. The scheme has strong provable security guarantees [82, 83, 185], but it is not considered wise to use it in practice due to the difficulty of defining a suitably strong hash function with codomain the group Z/NZ. Thus, whilst conceptually simple and appealing, the scheme is not practically deployable.
One way to instantiate the hash function for an ℓ(N)-bit modulus would be to use a hash function with an output length of more than 2·ℓ(N) bits, and then take the output of this hash function modulo N so as to obtain the pre-signature. This means the full domain of the RSA function will be utilised with very little statistical bias in the distribution obtained. This should be compared with ISO’s DS3 below.
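Such an instantiation can be sketched as follows, using SHAKE-256 as an (assumed) extendable-output hash to obtain more than 2·ℓ(N) bits before reduction modulo N:

    import hashlib

    def hash_to_ZN(message: bytes, n: int) -> int:
        modulus_bytes = (n.bit_length() + 7) // 8
        out_len = 2 * modulus_bytes + 16                 # comfortably more than 2*l(N) bits
        digest = hashlib.shake_256(message).digest(out_len)
        return int.from_bytes(digest, "big") % n         # nearly uniform on Z/NZ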
In the DS3 scheme the full RSA domain is not used to produce signatures. The fact that a hash image is not taken into the full group Z/NZ means the security proof for RSA-FDH does not apply. We therefore do not propose the use of DS3 for future applications.
4.8.5 (EC)DSA
The Digital Signature Algorithm (DSA) and its elliptic curve variant (ECDSA) are widely standardized [13, 249, 319], and there exist a number of variants including the German DSA (GDSA) [152, 161], the Korean DSA (KDSA) [161, 337] and the Russian DSA (RDSA) [133, 162]. The basic construct is to produce an ephemeral public key (the first component of the signature), then hash the message to an element in Z/qZ, and finally to combine the hashed message, the ephemeral secret and the long term secret in a “signing equation” to produce the second part of the signature.
All (EC)DSA variants (bar KDSA) have weak provable security guarantees; whilst some proofs
do exist they are in less well understood models (such as the generic group), for example [61]. The
reason for this is that the hash function is only applied to the message and not the combination of
the message and the ephemeral public key.
The KDSA algorithm uses a hash function to compute the r-component of the signature; a full proof in the random oracle model can be given for this variant [59]. Thus KDSA falls into our category of suitable for future use. KDSA also has a simpler signing equation than DSA, as it does not require a modular inversion; however, the extra hash function invocation is likely to counterbalance this benefit.
All (EC)DSA variants also suffer from lattice attacks against poor ephemeral secret generation
[154, 245, 246]. A method to prevent this, proposed in [293] but known to be “folklore”, is to derive the ephemeral secret key by applying a PRF (with a default key) to a message containing the static secret key and the message to be signed. This however needs to be used with extreme caution, as the use of a deterministic ephemeral key derivation technique could leave an implementation open to side-channel analysis.
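The folklore countermeasure can be sketched as follows; the fixed PRF key and the encoding below are illustrative assumptions only, and a carefully analysed variant of this idea is specified in RFC 6979.

    import hashlib
    import hmac

    def derive_ephemeral_key(static_key: bytes, message: bytes, q: int) -> int:
        # Derive the (EC)DSA ephemeral secret deterministically from the static
        # secret key and the message, instead of from a (possibly faulty) RNG.
        prf_key = b"ephemeral-key-derivation"            # fixed, non-secret PRF key (illustrative)
        block = hmac.new(prf_key, static_key + message, hashlib.sha512).digest()
        # Reduce 512 bits into [1, q-1]; the bias is negligible when q is much
        # shorter than the PRF output, as it is for the group sizes used in practice.
        return 1 + int.from_bytes(block, "big") % (q - 1)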
4.8.6 PV Signatures
ISO 14888-3 [161] defined a variant of DSA signatures (exactly the same signing equation as for
DSA), but with the hash function computed on the message and the ephemeral key. This scheme
is due to Pointcheval and Vaudenay [281], and the scheme is often denoted as the PV signature scheme3 . The PV signature scheme can be shown to be provably secure in the random oracle model, and so has many of the benefits of Schnorr signatures. However Schnorr signatures have a
simpler to implement signing equation (no need for any modular inversions). Whilst only defined
in the finite field setting in ISO 14888-3, the signatures can trivially be extended to the elliptic
curve situation.
3
This should not be confused with another PV signature scheme, due to Pintsov and Vanstone [277], which is a signature scheme with message recovery originally used to secure electronic postal franks.
Just like (EC)DSA signatures, PV signatures suffer from issues related to poor randomness
in the ephemeral secret key. Thus the defences proposed for (EC)DSA signatures should also be
applied to PV signatures.
4.8.7 (EC)Schnorr
Schnorr signatures [318], standardized in [162], are like (EC)DSA signatures with two key differ-
ences; firstly the signing equation is simpler (allowing for some optimisations) and secondly the hash
function is applied to the concatenation of the message and the ephemeral key. This last property
means that Schnorr signatures can be proved UF-CMA secure in the random oracle model [280].
There is also a proof in the generic group model [244]. In addition the signature size can be
made shorter than that of DSA. We believe Schnorr signatures are to be preferred over DSA style
signatures for future applications.
Just like (EC)DSA signatures, Schnorr signatures suffer from issues related to poor randomness
in the ephemeral secret key. Thus the defences proposed for (EC)DSA signatures should also be
applied to Schnorr signatures.
Chapter 5
In this chapter we discuss more esoteric or specialised schemes. These include password based key
derivation, password based encryption, key-wrap algorithms and identity based encryption. We
summarize our conclusions in Table 5.1.
Scheme    Legacy   Future   Notes
Password Based Key Derivation
PBKDF2    ✓        ?        See text
bcrypt    ✓        ?        See text
scrypt    ✓        ?        See text
Key Wrap Algorithms
KW        ✓        ✗        No security proof; no associated data
TKW       ✓        ✗        No security proof; no associated data
KWP       ✓        ✗        No security proof; no associated data
AESKW     ✓        ✗        No security proof; inefficient
TDKW      ✓        ✗        No security proof; inefficient
AKW1      ✓        ✗        No security proof; no associated data
AKW2      ✗        ✗        Not fully secure
SIV       ✓        ✓        See text
Identity Based Encryption
BB        ✓        ✓        See text
SK        ✓        ✓        See text
BF        ✓        ✗        See text
5.1.1 PBKDF2
NIST SP 800-132 [253] standardises the PBKDF2 function, which was first defined in RFC 2898 [186].
PBKDF2 is based on any secure PRF; in [186] it is defined with HMAC using SHA-1. Additionally,
PBKDF2 is defined by an iteration count which specifies the number of times the PRF is iterated.
The iteration count is used to increase the workload of dictionary attacks and should be as large as
possible whilst ensuring the compute time is not unnecessarily long. A minimum of 1000 iterations
is proposed.
The input to the key-derivation function is the password, a salt and the desired key length. The
salt is used to generate a large set of keys for each password. It should be generated with a secure
random number generator (cf. Section 6.2) and be at least 128 bits long. The key length should be
at least 112 bits.
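A minimal example using the Python standard library (hashlib.pbkdf2_hmac), following the guidance above (random 128-bit salt, well above the 1000-iteration minimum, 128-bit output key); the iteration count shown is an illustrative value and should be set as high as the application can tolerate:

    import hashlib
    import os

    salt = os.urandom(16)                       # at least 128 bits, from a secure RNG
    iterations = 100_000                        # illustrative; use as many as is tolerable
    key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                              salt, iterations, dklen=16)
    # Store (salt, iterations) alongside the derived key so the derivation can be repeated later.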
Despite the ability to adjust the number of iterations, it is still possible to implement dictionary attacks relatively cheaply on ASICs or GPUs. The bcrypt and scrypt functions provide progressively greater resistance to such attacks, due to the progressively larger amounts of RAM that the corresponding attacks require.
5.1.2 bcrypt
bcrypt was designed by Provos and Mazières [289]. It is based on the blockcipher Blowfish (cf.
Section 3.2.2). bcrypt is more resistant to dictionary attacks than PBKDF2.
5.1.3 scrypt
scrypt [273] was designed by Colin Percival to create a key derivation function which was much
more resistant to dictionary attacks than bcrypt. The scheme was introduced in 2009 and so is
much younger than other schemes meaning it has not been subject to as much usage and analysis.
5.2.2 KWP
The scheme AES Key Wrap with Padding, abbreviated KWP, is specified in [258] and RFC
5649 [153]. It shares the variable input length cipher from KW, but due to the use of an ex-
plicit padding scheme, inputs of any number of octets are allowed.
5.2.4 AKW1
The scheme AKW1 is specified in ANSI X9.102 [10] and consists essentially of a SHA1 based padding
scheme, followed by two layers of CBC encryption, one with a random IV and one with a fixed IV,
where the underlying blockcipher is 3DES. The random IV makes the scheme probabilistic, making
classification as an authenticated encryption scheme (without associated data) more accurate than
as a key wrap scheme. Even when instantiated with a modern block cipher instead of 3DES, AKW1
should be considered a legacy only construction.
5.2.5 AKW2
The scheme AKW2 is specified in ANSI X9.102 [10] and corresponds to an Encrypt-then-MAC
scheme using related keys. For the encryption, CBC mode using TDEA is stipulated, whereas for
authentication CBC-MAC is used. The scheme supports associated data and indeed, the first block
of associated data is used as initialisation vector for the CBC mode. AKW2 is demonstrably not a
secure key wrap scheme [302] and we believe it should not be used.
5.2.6 SIV
Synthetic Initialisation Vector (SIV) authenticated encryption was introduced by Rogaway and
Shrimpton [302]. It is a 2-pass mode based on using an IV-based encryption scheme with a pseudo-
random function. The pseudo-random function is used to compute a tag that is used both for
authentication purposes and as IV to the encryption scheme. SIV is captured by RFC 5297 [144],
combining CMAC with AES in counter mode. SIV is provably secure and relatively efficient.
5.4.2 BB
The Boneh–Boyen IBE scheme [52] is secure in the standard model under the decision Bilinear
Diffie–Hellman assumption, but only in a weak model of selective ID security. However, the scheme,
as presented in the IEEE 1363.3 standard [155], hashes the identities before executing the main BB
scheme. The resulting scheme is therefore fully secure in the random oracle model. The scheme
is efficient, including at high security levels, and has a number of (technical) advantages when
compared to other schemes.
5.4.3 SK
The Sakai–Kasahara key construction is known to be fully secure in the random oracle model,
and at the same curve/field size it outperforms the prior two schemes. The construction comes as an encryption scheme [72] and a KEM construction [73], and is also defined in the IEEE 1363.3 standard [155]. The main concern with using this scheme is that the underlying hard problem (the
q-bilinear Diffie–Hellman inversion problem) is not as hard as the underlying hard problems of the other schemes. This concern arises from a series of results, initiated by those of Cheon [74], on q-style assumptions.
Chapter 6
General Comments
In this chapter we discuss a number of general issues related to the deployment of cryptographic
primitives and schemes. In this edition of the report we restrict ourselves to hardware and software
side-channels, random number generation and key life-cycle management.
6.1 Side-channels
Traditionally, cryptographic algorithms are designed and analysed in the black-box model. In
this model, an algorithm is merely regarded as a mathematical function that will be applied to
some input to generate some output, regardless of implementation details. An evaluation of a
keyed algorithm in the black-box model assumes that an adversary knows the specification of the
algorithm and can observe pairs of inputs I and outputs O = Ek (I) of a black box implementing
the algorithm.
When cryptography is implemented on embedded devices, black-box analysis is not sufficient to
get a good picture of the security provided. The cryptographic algorithms are executed on a device
that is in the possession and under the physical control of the user, who may have an interest in
breaking the cryptography, e.g. in banking applications or digital rights management applications.
The physical accessibility of embedded devices allows for a much wider range of attacks against
the cryptographic system, not targeting the strength of the algorithm as an abstract mathematical
object, but the strength of its concrete implementation in practice.
Classical examples of side-channels include the execution time of an implementation [200], the power consumption of a chip [201] and its electromagnetic radiation [128]. More exotic examples
include acoustics [19], temperature [60] and light emission [328]. Some side-channels can be observed
only by means of an invasive attack, where the computing device is opened. Others can be observed
in a passive attack, where the device is not damaged.
There are many reasons for side-channel leakage, including hardware circuit architectures, micro-
architectural features and implementations. Interestingly, many side-channels arise from optimisa-
tions. For example, circuits in modern CMOS technology consume power only when the internal
state changes. The amount of power consumed is proportional to the number of state bits that
change. This is clearly a side-channel. For other examples of the relation between optimisation
and side-channels, please see Section 6.1.1.
Many cryptographic algorithms are constructed as product ciphers [323]: one or a few cryp-
tographically weak functions are iterated many times such that the composition is secure. Other
algorithms use a small number of complex operations. In order to implement such algorithms,
however, these complex operations are usually broken down into sequences of less complex op-
erations. Hence, their implementations are similar to product ciphers. Furthermore, in keyed
algorithms (or their implementations), typically the key is introduced gradually: the dependence
of the intermediate data on the key increases in the course of the algorithm (or implementation).
Side-channel attacks capitalise on this property of gradually increasing security. While it is
(supposedly) hard to attack the full cryptographic algorithm, it is much easier to attack the cryp-
tographically weak intermediate variables. Depending on the side-channel, measurements of the
leakage contain information about the intermediate variables at each instance of time (e.g. power
consumption), or about an aggregate form thereof (e.g. execution time). Thus, side-channel measurements allow an attacker to zoom in on the algorithm and to work on only a few iterations of the cryptographically weak functions. By working with intermediate variables that depend only on a fraction of the bits of the secret key, side-channel attacks allow a divide-and-conquer strategy to be applied.
6.1.1 Countermeasures
Countermeasures against side-channel attacks can be classified into two categories. In the first
category, one tries to eliminate or to minimise the leakage of information. This is achieved by
reducing the signal-to-noise ratio of the side-channel signals. In the second category, one tries to
ensure that the information that leaks through side-channels cannot be exploited to recover secrets.
Typically, one will implement a combination of countermeasures. Increasing the key size will (in
general) not improve the resilience against side-channel attacks.
Constant-time algorithms
The first academic publication of a physical attack is the timing attack on RSA [200]. In a naive
implementation of modular exponentiation, the execution time depends on the value of the expo-
nent, i.e. the private key. By observing the execution time of a series of decryptions or signatures,
an adversary can easily deduce the value of the private key.
Hence, a first countermeasure to be taken is to ensure that the execution time of the crypto-
graphic algorithm doesn’t depend on the value of secret information. The difficulty of this task
depends greatly on the features of the processor that the software will run on and the compiler
that is being used to translate high-level code into low-level assembly instructions.
Simple processors and low-level programming languages give the programmer absolute access
to the control flow of the program, making it possible to write code that executes in constant time.
Modern pipelined processors contain units for branch prediction, out-of-order execution and other
systems that may complicate the task of predicting the exact execution time of an algorithm or a
subroutine. These units may interact with compiler options and settings in ways that are difficult
to fully understand. In such environments, it may be difficult to achieve 100% constant-time code.
Observe that constant-time code is usually slow code. Indeed, any optimisation that can be
applied only for a fraction of the values that a secret variable can take, leads to non-constant
execution time and therefore has to be excluded.
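As a small illustration of the principle, the following compares two MAC tags without a data-dependent early exit; a naive byte-wise equality test returns as soon as the first mismatch is found, leaking the mismatch position through timing. (In high-level languages such as Python the interpreter gives no hard constant-time guarantees, which is part of the difficulty discussed above.)

    import hmac

    def tags_equal(tag_a: bytes, tag_b: bytes) -> bool:
        # Library routine designed for timing-resistant comparison of digests.
        return hmac.compare_digest(tag_a, tag_b)

    def tags_equal_manual(tag_a: bytes, tag_b: bytes) -> bool:
        # Hand-written variant: always scan the full length, accumulating differences.
        if len(tag_a) != len(tag_b):
            return False
        diff = 0
        for x, y in zip(tag_a, tag_b):
            diff |= x ^ y
        return diff == 0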
Constant-time implementations of block ciphers can nevertheless be very competitive with table-based implementations [190]. For the specific case of AES (and other algorithms using the AES S-box), the AES-NI instructions can be used to avoid table lookups.
Masking
The purpose of masking is to ensure that the value of individual data elements is uncorrelated to
secrets. Hence, if there is leakage on the value of individual data elements, this will not lead to
recovery of the secrets. Clearly, if an attacker can combine signals of different elements, he can
again start to recover the secrets, but the approach can be generalised to higher orders, making pairs, triples, . . . of data elements independent of the value of the secret [288]. Masking can be done for
software implementations and for hardware implementations.
In hardware, masking can be employed at gate level [157, 335], at algorithm level [5], or in
combination with circuit design approaches [283].
The Threshold Implementation method is a masking approach that achieves provable security
based on secret sharing techniques at a moderate cost in hardware complexity [247, 284]. It can
also be used to mask software implementations. An alternative approach based on Shamir’s secret
sharing scheme is presented in [134].
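A first-order Boolean masking of a single secret byte can be sketched as follows; each share on its own is uniformly distributed and hence uncorrelated with the secret, and all subsequent computation must be carried out on the shares (the hard part, not shown here, is doing so through non-linear operations such as S-boxes).

    import secrets

    def mask_byte(secret_byte: int) -> tuple[int, int]:
        mask = secrets.randbelow(256)             # fresh uniform mask for every use
        return secret_byte ^ mask, mask           # (masked share, mask share)

    def unmask_byte(masked: int, mask: int) -> int:
        return masked ^ mask                      # recombination: XOR of the two shares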
• Netscape’s implementation of SSL, which was discovered in 1996 to make use of a random
number generator in which the only sources of entropy used to seed the generator were the
time of day, the process ID and the parent process ID [131].
• The Debian OpenSSL randomness failure, in which a patch applied by a Debian developer led
to substantially reduced entropy being available for key generation in OpenSSL [92]. Affected
keys included SSH keys, OpenVPN keys, DNSSEC keys, and key material for use in X.509
certificates and session keys used in SSL/TLS connections, with all keys produced between
September 2006 and May 2008 being potentially suspect.
• Two independent analyses of public keys found on the Internet [150, 215], which discovered,
amongst other things, that many pairs of RSA public keys had common factors, making the
derivation of the corresponding private keys a relatively trivial matter. The identified issues
are at least in part attributable to poor randomness generation procedures, especially in the
Linux kernel [150]. A follow-up study on a particular smart-card deployment involving RSA
is reported in [38].
• Ristenpart and Yilek studied how randomness is handled across virtual machine resets [293],
discovering that the state of the PRNG can often be predicted to the point where an attack
against a DSA signing key can be mounted in the context of TLS (two signatures on distinct
messages being produced with the same random input leading to immediate recovery of the
DSA private key).
6.2.1 Terminology
We refer to Random Number Generators (RNGs), but these are also often referred to as Random
Bit Generators in the literature. A suitable source of random bits can always be turned into a
source of random numbers that are approximately uniformly distributed in a desired range by
various means (see [123, Section 10.8], [264, Appendix B] for extensive discussion of this important
practical issue). In what follows we make extensive reference to the NIST standard [264], however
the interested reader should also consult the ISO 18031 standard [163] and ANSI X9.82 [15].
We distinguish between True Random Number Generators (TRNGs) and Pseudo-Random Num-
ber Generators (PRNGs). TRNGs usually involve the use of special-purpose hardware (e.g. elec-
tronic circuits, quantum devices) followed by suitable post-processing of the raw output data to
generate random numbers. In an ideal world, all random number requirements would be met by
using TRNGs. But, typically, TRNGs operate at low output rates (relative to PRNGs) and are of
moderate-to-high cost (relative to PRNGs which are usually implemented in software). A TRNG
device might be used to generate highly sensitive cryptographic keys, for example system master
keys, in a secured environment, but would be considered “overkill” for general-purpose use. PRNGs
are suitable for general-purpose computing environments and usually involve a software-only ap-
proach. Here, the approach is to deterministically generate random-looking outputs from an initial
seed value. We note that NIST [264] refer to PRNGs as DRBGs, where “D” stands for “determin-
istic”, stressing the non-random nature of the generation process. Here, we focus on PRNGs, since
TRNGs do not in general offer the flexibility and cost profile offered by (software) PRNGs.
A PRNG usually includes a capability for reseeding (or refreshing) the generator with a fresh
source of randomness. The problem of obtaining suitable and assured high-quality randomness
for the purposes of reseeding is one of the most challenging aspects of designing systems that use
PRNGs.
PRNGs are sometimes described as being blocking or non-blocking. For example, the Linux
kernel PRNG provides two different RNGs, one of each type. A blocking RNG will prevent outputs
from the RNG from being delivered to the application requesting random numbers if it deems that
doing so would be inappropriate for some reason.
• Health test function: this function is intended to provide a mechanism by which the PRNG
can be tested to be functioning correctly.
We note that the last two components are often not explicitly present in PRNG implementations.
Moreover, many PRNGs do not have “other inputs” or allow the use of personalisation strings.
Some generators in the literature do not fully separate the reseed and generate functions, mixing
entropy directly into the state of the generator, for example.
• Forward security: Compromise of the internal state of the generator should not allow an
attacker to compute previous outputs of the generator, nor to distinguish previous outputs
from random. This requirement implies in particular that it must be hard to compute any
previous state of the generator from its current state. In turn, this implies that the generator
state must be updated after each output in a one-way manner.
Note that none of these requirements directly refer to the quality of entropy inputs, but that
this rapidly emerges as a key concern in meeting the requirements.
Entropy sources: Foremost amongst these implementation issues are the questions of how to
identify suitable sources of entropy, how to manage and process these sources, and how (and indeed,
whether) to assess the quality of the entropy that is extracted from these sources when reseeding
a PRNG. A good general overview of these issues can be found in [103, 137].
outputs; on the other hand, waiting too long provides poor protection against state compromises, weakening forward security. The majority of practical PRNG designs do some form of entropy
estimation. However, Ferguson et al. [123] contend that no procedure can accurately assess entropy
(or rather, the amount of entropy unknown to an attacker) across all environments. Their Fortuna
PRNG design attempts to get around the problem of entropy estimation by allocating gathered
entropy, represented by events, to a sequence of entropy pools in order. The Fortuna generator
then uses the pools at different intervals to reseed the generator. An analysis of this approach was
recently provided in [98].
The Fortuna design sets out to avoid the need for entropy estimation whilst preventing state-
extension attacks. As pointed out by Barak and Halevi [23], this approach works well so long
as the entropy is well-spread across the different pools, but does not work well if the entropy is
concentrated in one pool that is not often accessed when doing state refreshes. It is possible that an
adversary could arrange for this to occur by generating large numbers of spurious events under his
control. The view of Barak and Halevi is that it is better to accumulate entropy over a long period
of time in a single pool and do infrequent reseeds, but without doing any entropy estimation, since
in their view “at best the entropy estimator provides a false sense of security”. A third approach is
to perform conservative entropy estimation, and to reseed only when sufficient entropy is available
– this is the approach taken in the Linux /dev/urandom and /dev/random PRNGs, for example.
Generator initialisation: An important special case of seeding is the setting of the initial state
(which is done via the Instantiate function in the NIST model). A PRNG should block until properly initialised, either with entropy supplied by the user or with entropy gathered from the local environment. There is anecdotal evidence that this is not popular with software developers – see [150], where it is explained how one SSH implementation uses the non-blocking Linux /dev/urandom PRNG in preference to the blocking /dev/random one when generating cryptographic keys. We
reiterate that accessing a PRNG before it is properly seeded for the first time has been identified
as a source of serious security problems, particularly in key generation [150].
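In application code, cryptographic keys and nonces should therefore be drawn from the operating system's CSPRNG rather than from a general-purpose PRNG. A minimal Python sketch is given below; on recent Linux kernels and Python versions these calls block until the kernel generator has been initialised, in line with the recommendation above (the exact behaviour is platform dependent and should be checked for the target environment).

    import secrets

    aes_key = secrets.token_bytes(16)     # 128-bit symmetric key from the OS CSPRNG
    gcm_nonce = secrets.token_bytes(12)   # 96-bit nonce
    # Never use general-purpose generators such as random.random() for key material.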
There are several PRNGs that are supplied as part of crypto libraries. Prominent amongst
these is the OpenSSL PRNG. This generator has a rather ad hoc design. It was analysed in [306],
and some changes were made as a result of this analysis. However, as far as we are aware, it has not
been subjected to any further cryptographic analysis since then. Gutmann has designed a PRNG
that is made available as part of his cryptlib software development kit1 . This PRNG and its
design are described in detail in [137].
The NIST special publication [264] contains several PRNG designs. As far as we are aware,
none of these has been thoroughly analysed with the exception of the Dual Elliptic Curve generator.
A pseudo-randomness property was proven for this generator in [62], based on some reasonable
number-theoretic assumptions. However, the generator is relatively slow and known to have a
small bias in its outputs. The generator has the potential to contain a backdoor, enabling its
internal state to be reconstructed given sufficient output [327], and it is widely believed that this
potential was exploited during the NIST standardisation process by NSA. A recent study [71] found
the generator to be in surprisingly widespread use. The controversy surrounding this Dual Elliptic
Curve generator led to the withdrawal of the generator from the NIST special publication [264]
and the opening of a comment period on a revised version of the NIST document2 .
These NIST PRNG designs do not include a full specification of how to gather and process
entropy sources for seeding/reseeding purposes, which is consistent with the over-arching approach
in [264].
The Fortuna generator from [123] incorporates lessons learned from the earlier Yarrow design [192]. Its basic design of using entropy pools to collect entropy for reseeding at different rates was recently validated by the analysis of [98]; see [142, 326] for two analyses of Intel's hardware RNG.
• For signatures, there is a folklore de-randomisation technique which neatly sidesteps security
issues arising from randomness failures: simply augment the signature scheme’s private key
with a key for a pseudo-random function (PRF), and derive any randomness needed during
signing by applying this PRF to the message to be signed; meanwhile verification proceeds
as normal.
• In the symmetric encryption setting, Rogaway [298] argued for the use of nonce-based en-
cryption, thus reducing reliance on randomness. Rogaway and Shrimpton [302] initiated the
study of misuse-resistant authenticated encryption (AE), considering the residual security of
1
See http://www.cryptlib.com/
2
See http://www.nist.gov/itl/csd/sp800-90-042114.cfm.
AE schemes when nonces are repeated. Katz and Kamara [187] considered the security of
symmetric encryption in a chosen-randomness setting, wherein the adversary has complete
control over the randomness used for encryption (except for the challenge encryption which
uses fresh randomness).
• In the public key encryption (PKE) setting, Bellare et al. [29] considered security under chosen
distribution attack, wherein the joint distribution of message and randomness is specified by
the adversary, subject to containing a reasonable amount of min entropy. Bellare et al. gave
several designs for PKE schemes achieving this notion in the Random Oracle Model (ROM)
and in the standard model. A follow-up work [291] considers a less restrictive adversarial
setting.
• Also in the PKE setting, Yilek [355], inspired by virtual machine reset attacks in [293],
considered the scenario where the adversary can force the reuse of random values that are
otherwise well-distributed and unknown to the adversary. This is referred to in [355] as
the Reset Attack (RA) setting. In [355], Yilek also gave a general construction achieving
security for public key encryption in his RA setting. The RA setting was recently extended
to a setting where the adversary can to a certain extent control the randomness that is used
during encryption, the so-called Related Randomness Attack (RRA) setting [270].
• Ristenpart and Yilek [293] studied the use of “hedging” as a general technique for protecting
against broad classes of randomness failures in already-deployed systems, and implemented
and benchmarked this technique in OpenSSL. Hedging in the sense of [293] involves replacing
the random value r required in some cryptographic scheme with a hash of r together with
other contextual information, such as a message, algorithm or unique operation identifier, etc.
Their results apply to a variety of different randomness failure types but have their security
analyses restricted to the ROM.
1. Protecting the confidentiality and authenticity of secret and private keys, as well as protecting
secret and private keys against unauthorised use.
To accomplish these three goals we need to examine the whole key life cycle, from generation of the key material through to its destruction.
Key Generation
Secret keys and private keys need to be unpredictable. Symmetric primitives usually do not have
additional requirements for the secret keys, except that some primitives have a small fraction of
weak keys, which should not be used. Asymmetric primitives usually have additional requirements,
both on their private and public keys. For example, they often require the generation of prime
numbers that need to satisfy extra properties. Keys can either be generated at random within a
protocol, in which case generating them with a sufficient amount of entropy turns out to be a very
challenging task in practice (see Section 6.2), or they can be derived from other data as part of the
protocol definition. There are numerous well-documented attacks on systems for which not enough
entropy was used to generate the underlying key material.
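
As a minimal illustration of generating unpredictable key material from the operating system's
entropy source, the following Python sketch uses the standard-library secrets module; the 256-bit
key size is an illustrative assumption, and asymmetric key generation (e.g. primes with extra
properties) should be left to a vetted library.

# Minimal sketch: symmetric-key generation from the operating system CSPRNG.
# The 256-bit default is illustrative; asymmetric keys additionally require
# structured values (e.g. primes with extra properties) and should not be
# hand-rolled.
import secrets


def generate_symmetric_key(bits: int = 256) -> bytes:
    if bits % 8 != 0:
        raise ValueError("key size must be a multiple of 8 bits")
    return secrets.token_bytes(bits // 8)   # draws from the OS entropy source


key = generate_symmetric_key()
print(len(key) * 8, "bit key generated")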
Key Registration/Certification
Keys need to be associated with their owner (user). For example, public keys are linked to their
owner by means of (public-key) certificates. Through the issuing of a certificate, a certification
authority guarantees that a certain key belongs to a certain user, and associated policy statements
specify for what purposes the owner may use the key. A certificate also has a validity period.
Certificates are usually public documents. Their authenticity is ensured by means of a digital
signature, placed by the certification authority. However, one needs to trust the certification authority
and its public key, which is itself authenticated by another certification authority, creating a certificate
chain. At the root of the chain is a root certification authority. These root certificates can be
distributed to relying parties and signatories alike, for example by including them in applications
(as in a web browser) or by having them downloaded from an authoritative source (e.g. a designated
public authority), for the purpose of establishing trust.
Various issues have come to light in the last few years concerning the ability of users to fully
trust the root certificates in their browsers. Thus certification is a technology which is (still) not
completely reliable. Hence, when using certificates in a non-public application (e.g. in
a corporate environment), care needs to be taken over the underlying policy framework and how
it is implemented and enforced.
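
The following purely conceptual Python sketch illustrates the chain-of-trust idea described above;
the Certificate record and its signature_ok field (standing in for a real signature verification under
the issuer's public key) are assumptions for illustration only, not an X.509 implementation.

# Conceptual sketch of certificate-chain validation (not real X.509 processing):
# walk from the end-entity certificate towards a trusted root, checking validity
# periods, signatures and issuer/subject linkage.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Certificate:
    subject: str
    issuer: str
    not_before: datetime
    not_after: datetime
    signature_ok: bool   # stand-in for "verifies under the issuer's public key"


def validate_chain(chain, trusted_roots, at):
    # Require each certificate to be within its validity period, carry a valid
    # signature, and be linked to the next one; the chain must end at a trusted root.
    for i, cert in enumerate(chain):
        if not (cert.not_before <= at <= cert.not_after):
            return False
        if not cert.signature_ok:
            return False
        if i + 1 < len(chain) and cert.issuer != chain[i + 1].subject:
            return False
    return chain[-1].issuer in trusted_roots


now = datetime.now(timezone.utc)
leaf = Certificate("www.example.org", "Example Intermediate CA",
                   now - timedelta(days=30), now + timedelta(days=335), True)
inter = Certificate("Example Intermediate CA", "Example Root CA",
                    now - timedelta(days=400), now + timedelta(days=1000), True)
print(validate_chain([leaf, inter], {"Example Root CA"}, now))   # True

Note that revocation checking, discussed further below, is deliberately omitted from this sketch.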
Key Use
The goal of key management is to put keys in place such that they can be used for a certain period
of time. During the lifetime of a key, it has to be protected against unauthorised use by attackers.
The key must also be protected against unauthorised uses by its owner, e.g. even the owner
should not be allowed to export the key or to use it in an insecure environment. This
protection can be provided by storing the key on secure hardware and by using secure software,
which includes authorisation checks.
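
As a toy, software-only illustration of a key that can be used but not exported, the following
sketch wraps a secret key in an object that exposes only a MAC operation behind a simple
authorisation check; the KeyHandle class is an assumption for illustration, and real deployments
would enforce this inside secure hardware rather than in ordinary application code.

# Toy illustration of non-exportable key use behind an authorisation check.
# Real systems would keep the key inside secure hardware; the point here is
# only that callers obtain an operation (a MAC), never the key bytes themselves.
import hashlib
import hmac
import secrets


class KeyHandle:
    def __init__(self, authorised_users):
        self.__key = secrets.token_bytes(32)   # not exposed through the class interface
        self._authorised = set(authorised_users)

    def mac(self, user: str, data: bytes) -> bytes:
        if user not in self._authorised:
            raise PermissionError(f"{user} is not authorised to use this key")
        return hmac.new(self.__key, data, hashlib.sha256).digest()


handle = KeyHandle({"alice"})
print(handle.mac("alice", b"payment record").hex())
# handle.mac("mallory", b"payment record")  # would raise PermissionError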
Key Storage
By using secure hardware, it is possible to store keys such that they can never be exported, and
hence are very secure against theft or unauthorised use. However, sometimes keys get lost and it
might be desirable to have a backup copy. Organisations might require backups of keys in order to
be able to access data after employees leave. Similarly, expired keys might be archived in order to
keep old data accessible. Finally, under certain conditions law enforcement agencies might request
access to certain keys. Technical systems that implement access for law enforcement agencies are
called key escrow mechanisms or key recovery mechanisms.
Backup, archival and escrow/recovery of keys complicate key management, because they in-
crease the risk for loopholes for unauthorised access to keys. The advanced security requirement of
non-repudiation requires that the owner of a key is the only one who has access to the key at all
times from generation to key retirement. For example, keys that are used for advanced electronic
signatures have to be under the sole control of the user. Archival, backup or storage of such keys
is difficult. To rely on the non-repudiation property in a court of law, one may require that special
procedures for digital signature generation be followed.
Revocation/Validation
Cryptographic keys expire and are replaced. Sometimes it can happen that keys have to be taken out
of use before the planned end of their lifetime, e.g. if secret keys leak to outsiders or if developments
in cryptanalysis make schemes insecure. This process is called revocation. In centralised systems,
revocation can usually be achieved relatively easily, but in distributed systems special measures
have to be implemented to prevent people from using or relying on keys that have been revoked early. In
the context of revocation, validation has a very specific meaning. It means to check whether a
cryptographic operation, e.g. placing a digital signature, was performed with a key that is valid,
or was valid at the time the operation took place.
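
A minimal sketch of validation in the above sense, i.e. checking whether a key was valid at the
time an operation took place, might look as follows; the KeyRecord fields are illustrative
assumptions rather than a standardised format.

# Illustrative sketch of "validation": was the key valid at the time a given
# operation (e.g. placing a digital signature) was performed?
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class KeyRecord:
    valid_from: datetime
    valid_until: datetime            # planned end of lifetime
    revoked_at: Optional[datetime]   # None if the key was never revoked


def was_valid_at(record: KeyRecord, operation_time: datetime) -> bool:
    if not (record.valid_from <= operation_time <= record.valid_until):
        return False                 # outside the planned lifetime
    if record.revoked_at is not None and operation_time >= record.revoked_at:
        return False                 # key had already been revoked
    return True


rec = KeyRecord(datetime(2013, 1, 1), datetime(2015, 1, 1), datetime(2014, 6, 1))
print(was_valid_at(rec, datetime(2014, 3, 1)))   # True: before revocation
print(was_valid_at(rec, datetime(2014, 9, 1)))   # False: after revocation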
Key Archive/Destruction
When the lifetime of the key has expired, it has to be removed from the hardware. This requires
a secure deletion process. In most operating systems and applications, the deletion of a file only
clears a logical flag; it does not result in actual removal of the data until the disk space used to store
the file is reclaimed and overwritten by another application. On many file storage media, even
after a file has been overwritten, it is possible to recover the original file using some moderately
advanced equipment. This is called data remanence. Various techniques have been developed to
counter data remanence. At the logical level, one can overwrite the disk space repeatedly with
certain bit patterns in order to make recovery difficult. At the physical level, one can degauss (on
magnetic media) or employ other operations that restore the storage media to a pristine state, or one
can physically destroy the storage media.
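
The following sketch illustrates logical-level overwriting prior to deletion; it is an illustration only
and, because of data remanence and the behaviour of SSDs and journaling or copy-on-write file
systems, it gives no guarantee that the underlying blocks are actually overwritten.

# Minimal sketch of logical-level overwriting before deletion, as described above.
# Caveat: on SSDs and journaling or copy-on-write file systems this does not
# guarantee that the underlying blocks are overwritten; secure hardware or
# physical destruction remains the stronger option.
import os


def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite with a fresh random pattern
            f.flush()
            os.fsync(f.fileno())        # push the pattern towards the disk
    os.remove(path)


# Example usage on a throwaway file:
with open("old_key.bin", "wb") as f:
    f.write(b"\x01" * 32)
overwrite_and_delete("old_key.bin")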
requirements which any framework needs to meet; from this, any given system can be mapped onto
the framework by stating how and in what way the specific system meets the given framework.
Bibliography
[1] Masayuki Abe, editor. Advances in Cryptology - ASIACRYPT 2010 - 16th International
Conference on the Theory and Application of Cryptology and Information Security, Singa-
pore, December 5-9, 2010. Proceedings, volume 6477 of Lecture Notes in Computer Science.
Springer, 2010.
[2] Carlisle M. Adams, Ali Miri, and Michael J. Wiener, editors. Selected Areas in Cryptogra-
phy, 14th International Workshop, SAC 2007, Ottawa, Canada, August 16-17, 2007, Revised
Selected Papers, volume 4876 of Lecture Notes in Computer Science. Springer, 2007.
[3] Leonard M. Adleman. The function field sieve. In Leonard M. Adleman and Ming-Deh A.
Huang, editors, ANTS, volume 877 of Lecture Notes in Computer Science, pages 108–121.
Springer, 1994.
[4] Martin Ågren, Martin Hell, Thomas Johansson, and Willi Meier. Grain-128a: a new version
of Grain-128 with optional authentication. IJWMC, 5(1):48–59, 2011.
[5] Mehdi-Laurent Akkar and Christophe Giraud. An implementation of DES and AES, secure
against some attacks. In Çetin Kaya Koç et al. [69], pages 309–318.
[6] Nadhem J. AlFardan, Daniel J. Bernstein, Kenneth G. Paterson, Bertram Poettering, and
Jacob C. N. Schuldt. On the security of RC4 in TLS. In Samuel T. King, editor, USENIX
Security, pages 305–320. USENIX Association, 2013.
[7] Nadhem J. AlFardan and Kenneth G. Paterson. Lucky Thirteen: Breaking the TLS and
DTLS Record Protocols. In IEEE Symposium on Security and Privacy. IEEE Computer
Society, 2013.
[8] Ammar Alkassar, Alexander Geraldy, Birgit Pfitzmann, and Ahmad-Reza Sadeghi. Optimized
self-synchronizing mode of operation. In Mitsuru Matsui, editor, FSE, volume 2355 of Lecture
Notes in Computer Science, pages 78–91. Springer, 2001.
[9] Jee Hea An, Yevgeniy Dodis, and Tal Rabin. On the security of joint signature and encryption.
In Knudsen [198], pages 83–107.
[10] ANSI X9.102. Symmetric key cryptography for the financial services industry - wrapping of
keys and associated data. American National Standard Institute, 2008.
[11] ANSI X9.19. Financial institution retail message authentication. American National Standard
Institute, 1996.
[12] ANSI X9.24. Retail financial services symmetric key management part 1: Using symmetric
techniques. American National Standard Institute, 2009.
[13] ANSI X9.62. Public key cryptography for the financial services industry – The elliptic curve
digital signature algorithm (ECDSA). American National Standard Institute, 2005.
[14] ANSI X9.63. Public key cryptography for the financial services industry – Key agreement
and key transport using elliptic curve cryptography. American National Standard Institute,
2011.
[15] ANSI X9.82. Random number generation part 1: Overview and basic principles. American
National Standard Institute, 2006.
[17] Kazumaro Aoki, Jian Guo, Krystian Matusiewicz, Yu Sasaki, and Lei Wang. Preimages for
step-reduced SHA-2. In Matsui [226], pages 578–597.
[18] Kazumaro Aoki and Yu Sasaki. Meet-in-the-middle preimage attacks against reduced SHA-0
and SHA-1. In Halevi [140], pages 70–89.
[19] Dmitri Asonov and Rakesh Agrawal. Keyboard acoustic emanations. In IEEE Symposium
on Security and Privacy, pages 3–11. IEEE Computer Society, 2004.
[20] Jean-Philippe Aumasson, Itai Dinur, Willi Meier, and Adi Shamir. Cube testers and key
recovery attacks on reduced-round MD6 and Trivium. In Orr Dunkelman, editor, FSE,
volume 5665 of Lecture Notes in Computer Science, pages 1–22. Springer, 2009.
[21] Jean-Philippe Aumasson, Simon Fischer, Shahram Khazaei, Willi Meier, and Christian Rech-
berger. New features of latin dances: Analysis of Salsa, ChaCha, and Rumba. In Nyberg [265],
pages 470–488.
[22] Steve Babbage and Matthew Dodd. The MICKEY stream ciphers. In Robshaw and Billet [295],
pages 191–209.
[23] Boaz Barak and Shai Halevi. A model and architecture for pseudo-random generation with
applications to /dev/random. In Vijay Atluri, Catherine Meadows, and Ari Juels, editors,
ACM Conference on Computer and Communications Security, pages 203–212. ACM, 2005.
[24] Razvan Barbulescu, Pierrick Gaudry, Antoine Joux, and Emmanuel Thomé. A quasi-
polynomial algorithm for discrete logarithm in finite fields of small characteristic, 2013.
[25] Romain Bardou, Riccardo Focardi, Yusuke Kawamoto, Lorenzo Simionato, Graham Steel,
and Joe-Kai Tsay. Efficient padding oracle attacks on cryptographic hardware. In Safavi-
Naini and Canetti [308], pages 608–625.
[26] Elad Barkan, Eli Biham, and Nathan Keller. Instant ciphertext-only cryptanalysis of GSM
encrypted communication. J. Cryptology, 21(3):392–429, 2008.
[27] Gilles Barthe, Benjamin Grégoire, and Santiago Zanella Béguelin. Formal certification of
code-based cryptographic proofs. In Zhong Shao and Benjamin C. Pierce, editors, POPL,
pages 90–101. ACM, 2009.
[28] Mihir Bellare. New proofs for NMAC and HMAC: Security without collision resistance. In
Dwork [101], pages 602–619.
[29] Mihir Bellare, Zvika Brakerski, Moni Naor, Thomas Ristenpart, Gil Segev, Hovav Shacham,
and Scott Yilek. Hedged public-key encryption: How to protect against bad randomness. In
Matsui [226], pages 232–249.
[30] Mihir Bellare, Anand Desai, E. Jokipii, and Phillip Rogaway. A concrete security treatment
of symmetric encryption. In FOCS, pages 394–403. IEEE Computer Society, 1997.
[31] Mihir Bellare and Chanathip Namprempre. Authenticated encryption: Relations among
notions and analysis of the generic composition paradigm. In Tatsuaki Okamoto, editor,
ASIACRYPT, volume 1976 of Lecture Notes in Computer Science, pages 531–545. Springer,
2000.
[32] Mihir Bellare and Phillip Rogaway. Optimal asymmetric encryption. In Alfredo De Santis,
editor, EUROCRYPT, volume 950 of Lecture Notes in Computer Science, pages 92–111.
Springer, 1994.
[33] Mihir Bellare, Phillip Rogaway, and David Wagner. The EAX mode of operation. In Roy
and Meier [305], pages 389–407.
[34] Côme Berbain, Olivier Billet, Anne Canteaut, Nicolas Courtois, Henri Gilbert, Louis Goubin,
Aline Gouget, Louis Granboulan, Cédric Lauradoux, Marine Minier, Thomas Pornin, and
Hervé Sibert. Sosemanuk, a fast software-oriented stream cipher. In Robshaw and Billet [295],
pages 98–118.
[35] Côme Berbain, Henri Gilbert, and Alexander Maximov. Cryptanalysis of Grain. In Robshaw
[294], pages 15–29.
[37] Daniel J. Bernstein. Snuffle 2005: the Salsa20 encryption function, 2007.
http://cr.yp.to/snuffle.html.
[38] Daniel J. Bernstein, Yun-An Chang, Chen-Mou Cheng, Li-Ping Chou, Nadia Heninger, Tanja
Lange, and Nicko van Someren. Factoring RSA keys from certified smart cards: Coppersmith
in the wild. In Sako and Sarkar [309], pages 341–360.
[39] Eli Biham. A fast new DES implementation in software. In Eli Biham, editor, FSE, volume
1267 of Lecture Notes in Computer Science, pages 260–272. Springer, 1997.
[40] Eli Biham and Yaniv Carmeli. Efficient reconstruction of RC4 keys from internal states. In
Nyberg [265], pages 270–288.
[41] Eli Biham, Orr Dunkelman, and Nathan Keller. A related-key rectangle attack on the full
KASUMI. In Roy [304], pages 443–461.
[42] Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems. J. Cryp-
tology, 4(1):3–72, 1991.
[43] Alex Biryukov, editor. Fast Software Encryption, 14th International Workshop, FSE 2007,
Luxembourg, Luxembourg, March 26-28, 2007, Revised Selected Papers, volume 4593 of Lec-
ture Notes in Computer Science. Springer, 2007.
[44] Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir. Key
recovery attacks of practical complexity on AES-256 variants with up to 10 rounds. In Henri
Gilbert, editor, EUROCRYPT, volume 6110 of Lecture Notes in Computer Science, pages
299–319. Springer, 2010.
[45] Alex Biryukov and Dmitry Khovratovich. Related-key cryptanalysis of the full AES-192 and
AES-256. In Matsui [226], pages 1–18.
[46] Alex Biryukov, Sourav Mukhopadhyay, and Palash Sarkar. Improved time-memory trade-
offs with multiple data. In Bart Preneel and Stafford E. Tavares, editors, Selected Areas in
Cryptography, volume 3897 of Lecture Notes in Computer Science, pages 110–127. Springer,
2005.
[47] John Black, Shai Halevi, Hugo Krawczyk, Ted Krovetz, and Phillip Rogaway. UMAC: Fast
and secure message authentication. In Wiener [351], pages 216–233.
[48] S. Blake-Wilson, N. Bolyard, V. Gupta, C. Hawk, and B. Moeller. Elliptic Curve Cryptog-
raphy (ECC) Cipher Suites for Transport Layer Security (TLS). RFC 4492 (Informational),
May 2006. Updated by RFC 5246.
[49] Daniel Bleichenbacher. Chosen ciphertext attacks against protocols based on the RSA en-
cryption standard PKCS #1. In Hugo Krawczyk, editor, CRYPTO, volume 1462 of Lecture
Notes in Computer Science, pages 1–12. Springer, 1998.
[50] Lenore Blum, Manuel Blum, and Mike Shub. A simple unpredictable pseudo-random number
generator. SIAM J. Comput., 15(2):364–383, 1986.
[51] Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger. Biclique cryptanalysis
of the full AES. In Lee and Wang [212], pages 344–371.
[52] Dan Boneh and Xavier Boyen. Efficient selective-ID secure identity-based encryption without
random oracles. In Christian Cachin and Jan Camenisch, editors, EUROCRYPT, volume 3027
of Lecture Notes in Computer Science, pages 223–238. Springer, 2004.
[53] Dan Boneh and Glenn Durfee. Cryptanalysis of RSA with private key d less than N^0.292.
IEEE Transactions on Information Theory, 46(4):1339–1349, 2000.
[54] Dan Boneh and Matthew K. Franklin. Identity-based encryption from the Weil pairing. In
Kilian [194], pages 213–229.
[55] Dan Boneh and Matthew K. Franklin. Identity-based encryption from the Weil pairing. SIAM
J. Comput., 32(3):586–615, 2003.
[56] Joppe W. Bos and Marcelo E. Kaihara. PlayStation 3 computing breaks 2^60 barrier: 112-bit
prime ECDLP solved. EPFL Laboratory for cryptologic algorithms - LACAL, 2009.
[57] Cyril Bouvier. Discrete logarithm in GF(2^809) with FFS. Post to NM-
BRTHRY@LISTSERV.NODAK.EDU, 2013.
[58] Gilles Brassard, editor. Advances in Cryptology - CRYPTO ’89, 9th Annual International
Cryptology Conference, Santa Barbara, California, USA, August 20-24, 1989, Proceedings,
volume 435 of Lecture Notes in Computer Science. Springer, 1990.
[59] Ernest F. Brickell, David Pointcheval, Serge Vaudenay, and Moti Yung. Design validations
for discrete logarithm based signature schemes. In Hideki Imai and Yuliang Zheng, editors,
Public Key Cryptography, volume 1751 of Lecture Notes in Computer Science, pages 276–292.
Springer, 2000.
[60] Julien Brouchier, Tom Kean, Carol Marsh, and David Naccache. Temperature attacks. IEEE
Security & Privacy, 7(2):79–82, 2009.
[61] Daniel R. L. Brown. Generic groups, collision resistance, and ECDSA. Des. Codes Cryptog-
raphy, 35(1):119–152, 2005.
[62] Daniel R. L. Brown and Kristian Gjøsteen. A security analysis of the NIST SP 800-90 elliptic
curve random number generator. In Alfred Menezes, editor, CRYPTO, volume 4622 of Lecture
Notes in Computer Science, pages 466–481. Springer, 2007.
[63] BSI. Kryptographische Verfahren: Empfehlungen und Schlüssellängen. BSI TR-02102 Ver-
sion 2013.2, https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/
TechnischeRichtlinien/TR02102/BSI-TR-02102_pdf.pdf?__blob=publicationFile,
2013.
[64] Bundesnetzagentur. Bekanntmachung zur elektronischen Signatur nach dem Sig-
naturgesetz und der Signaturverordnung. http://www.bundesnetzagentur.de/
SharedDocs/Downloads/DE/Sachgebiete/QES/Veroeffentlichungen/Algorithmen/
2013Algorithmenkatalog.pdf?__blob=publicationFile&v=1, 2013.
[65] Mihir Bellare, Ran Canetti, and Hugo Krawczyk. Keying hash functions for message authen-
tication. In Koblitz [199], pages 1–15.
[66] Christophe De Cannière and Christian Rechberger. Preimages for reduced SHA-0 and SHA-1.
In Wagner [345], pages 179–202.
[67] Anne Canteaut and Kapalee Viswanathan, editors. Progress in Cryptology - INDOCRYPT
2004, 5th International Conference on Cryptology in India, Chennai, India, December 20-22,
2004, Proceedings, volume 3348 of Lecture Notes in Computer Science. Springer, 2004.
[68] Larry Carter and Mark N. Wegman. Universal classes of hash functions. J. Comput. Syst.
Sci., 18(2):143–154, 1979.
[69] Çetin Kaya Koç, David Naccache, and Christof Paar, editors. Cryptographic Hardware and
Embedded Systems - CHES 2001, Third International Workshop, Paris, France, May 14-16,
2001, Proceedings, volume 2162 of Lecture Notes in Computer Science. Springer, 2001.
[70] Certicom. Certicom announces elliptic curve cryptosystem (ECC) challenge winner. Certicom
Press Release, 2009.
[71] Stephen Checkoway, Matthew Fredrikson, Ruben Niederhagen, Adam Everspaugh, Matthew
Green, Tanja Lange, Thomas Ristenpart, Daniel J. Bernstein, Jake Maskiewicz, and Hovav
Shacham. On the practical exploitability of Dual EC in TLS implementations. In USENIX
Security Symposium, 2014.
[72] Liqun Chen and Zhaohui Cheng. Security proof of Sakai-Kasahara’s identity-based encryption
scheme. In Nigel P. Smart, editor, IMA Int. Conf., volume 3796 of Lecture Notes in Computer
Science, pages 442–459. Springer, 2005.
[73] Liqun Chen, Zhaohui Cheng, John Malone-Lee, and Nigel P. Smart. An efficient ID-KEM
based on the Sakai–Kasahara key construction. IEE Proc. Information Security, 153:19–26,
2006.
[74] Jung Hee Cheon. Security analysis of the strong Diffie-Hellman problem. In Vaudenay [343],
pages 1–11.
[75] Olivier Chevassut, Pierre-Alain Fouque, Pierrick Gaudry, and David Pointcheval. Key deriva-
tion and randomness extraction. IACR Cryptology ePrint Archive, 2005:61, 2005.
[76] Joo Yeon Cho and Miia Hermelin. Improved linear cryptanalysis of Sosemanuk. In Donghoon
Lee and Seokhie Hong, editors, ICISC, volume 5984 of Lecture Notes in Computer Science,
pages 101–117. Springer, 2009.
[77] Carlos Cid and Gaëtan Leurent. An analysis of the XSL algorithm. In Roy [304], pages
333–352.
[78] Don Coppersmith. Finding a small root of a bivariate integer equation; Factoring with high
bits known. In Maurer [227], pages 178–189.
[79] Don Coppersmith. Finding a small root of a univariate modular equation. In Maurer [227],
pages 155–165.
[80] Don Coppersmith. Small solutions to polynomial equations, and low exponent RSA vulner-
abilities. J. Cryptology, 10(4):233–260, 1997.
[81] Don Coppersmith, Matthew K. Franklin, Jacques Patarin, and Michael K. Reiter. Low-
exponent RSA with related messages. In Maurer [227], pages 1–9.
[82] Jean-Sébastien Coron. On the exact security of full domain hash. In Mihir Bellare, editor,
CRYPTO, volume 1880 of Lecture Notes in Computer Science, pages 229–235. Springer, 2000.
[83] Jean-Sébastien Coron. Optimal security proofs for PSS and other signature schemes. In
Knudsen [198], pages 272–287.
[84] Jean-Sébastien Coron, Marc Joye, David Naccache, and Pascal Paillier. New attacks on
PKCS#1 v1.5 encryption. In Bart Preneel, editor, EUROCRYPT, volume 1807 of Lecture
Notes in Computer Science, pages 369–381. Springer, 2000.
[85] Jean-Sébastien Coron, David Naccache, and Julien P. Stern. On the security of RSA padding.
In Wiener [351], pages 1–18.
[86] Jean-Sébastien Coron, David Naccache, Mehdi Tibouchi, and Ralf-Philipp Weinmann. Prac-
tical cryptanalysis of ISO/IEC 9796-2 and EMV signatures. In Halevi [140], pages 428–444.
[87] Nicolas Courtois and Josef Pieprzyk. Cryptanalysis of block ciphers with overdefined systems
of equations. In Yuliang Zheng, editor, ASIACRYPT, volume 2501 of Lecture Notes in
Computer Science, pages 267–287. Springer, 2002.
[88] Ronald Cramer, editor. Advances in Cryptology - EUROCRYPT 2005, 24th Annual Inter-
national Conference on the Theory and Applications of Cryptographic Techniques, Aarhus,
Denmark, May 22-26, 2005, Proceedings, volume 3494 of Lecture Notes in Computer Science.
Springer, 2005.
[89] Joan Daemen and Vincent Rijmen. The Design of Rijndael: AES - The Advanced Encryption
Standard. Springer, 2002.
[90] Ivan Damgård. A design principle for hash functions. In Brassard [58], pages 416–427.
[91] Nasser Ramazani Darmian. A distinguish attack on Rabbit stream cipher based on multiple
cube tester. IACR Cryptology ePrint Archive, 2013:780, 2013.
[92] Debian. Debian Security Advisory DSA-1571-1: OpenSSL – predictable random number
generator, 2008. http://www.debian.org/security/2008/dsa-1571.
[93] Jean Paul Degabriele, Anja Lehmann, Kenneth G. Paterson, Nigel P. Smart, and Mario
Strefler. On the joint security of encryption and signature in EMV. In Orr Dunkelman, editor,
CT-RSA, volume 7178 of Lecture Notes in Computer Science, pages 116–135. Springer, 2012.
[94] T. Dierks and C. Allen. The TLS Protocol Version 1.0. RFC 2246 (Proposed Standard),
January 1999. Obsoleted by RFC 4346, updated by RFCs 3546, 5746, 6176.
[95] Itai Dinur, Orr Dunkelman, and Adi Shamir. Improved practical attacks on round-reduced
Keccak. J. Cryptology, 27(2):183–209, 2014.
[96] Itai Dinur, Pawel Morawiecki, Josef Pieprzyk, Marian Srebrny, and Michal Straus. Practical
complexity cube attacks on round-reduced Keccak sponge function. IACR Cryptology ePrint
Archive, 2014:13, 2014.
[97] Yevgeniy Dodis, David Pointcheval, Sylvain Ruhault, Damien Vergnaud, and Daniel Wichs.
Security analysis of pseudo-random number generators with input: /dev/random is not ro-
bust. In Ahmad-Reza Sadeghi, Virgil D. Gligor, and Moti Yung, editors, ACM Conference
on Computer and Communications Security, pages 647–658. ACM, 2013.
[98] Yevgeniy Dodis, Adi Shamir, Noah Stephens-Davidowitz, and Daniel Wichs. How to eat your
entropy and have it too - optimal recovery strategies for compromised rngs. In Juan A. Garay
and Rosario Gennaro, editors, CRYPTO (2), volume 8617 of Lecture Notes in Computer
Science, pages 37–54. Springer, 2014.
[99] Leo Dorrendorf, Zvi Gutterman, and Benny Pinkas. Cryptanalysis of the random number
generator of the Windows operating system. ACM Trans. Inf. Syst. Secur., 13(1), 2009.
[100] Orr Dunkelman, Nathan Keller, and Adi Shamir. A practical-time related-key attack on the
KASUMI cryptosystem used in GSM and 3G telephony. In Rabin [290], pages 393–410.
[101] Cynthia Dwork, editor. Advances in Cryptology - CRYPTO 2006, 26th Annual International
Cryptology Conference, Santa Barbara, California, USA, August 20-24, 2006, Proceedings,
volume 4117 of Lecture Notes in Computer Science. Springer, 2006.
[102] E. A. Grechnikov. Collisions for 72-step and 73-step SHA-1: Improvements in the method of
characteristics. Cryptology ePrint Archive, Report 2010/413, 2010. http://eprint.iacr.org/.
[103] D. Eastlake 3rd, J. Schiller, and S. Crocker. Randomness Requirements for Security. RFC
4086 (Best Current Practice), June 2005.
[104] ECRYPT II NoE. ECRYPT II Yearly Report on Algorithms and Key Lengths (2008-2009).
ECRYPT II deliverable D.SPA.7-1.0, 2009.
[105] ECRYPT II NoE. ECRYPT II Yearly Report on Algorithms and Key Lengths (2009-2010).
ECRYPT II deliverable D.SPA.13-1.0, 2010.
[106] ECRYPT II NoE. ECRYPT II Yearly Report on Algorithms and Key Lengths (2010-2011).
ECRYPT II deliverable D.SPA.17-1.0, 2011.
[107] ECRYPT II NoE. ECRYPT II Yearly Report on Algorithms and Key Lengths (2011-2012).
ECRYPT II deliverable D.SPA.20-1.0, 2012.
[108] ECRYPT NoE. ECRYPT Yearly Report on Algorithms and Key Lengths (2004). ECRYPT
deliverable D.SPA.10-1.1, 2004.
[109] ECRYPT NoE. ECRYPT Yearly Report on Algorithms and Key Lengths (2005). ECRYPT
deliverable D.SPA.16-1.0, 2005.
[110] ECRYPT NoE. ECRYPT Yearly Report on Algorithms and Key Lengths (2006). ECRYPT
deliverable D.SPA.21-1.0, 2006.
[111] ECRYPT NoE. ECRYPT Yearly Report on Algorithms and Key Lengths (2007-2008).
ECRYPT deliverable D.SPA.28-1.0, 2008.
[113] ENISA. Algorithms, key size and parameters report – 2013 recommendations. ENISA XXXX,
2013.
[115] Matthias Ernst, Ellen Jochemsz, Alexander May, and Benne de Weger. Partial key exposure
attacks on RSA up to full size exponents. In Cramer [88], pages 371–386.
[116] ETSI TS 102 176-1. Electronic signatures and infrastructures (ESI); Algorithms and param-
eters for secure electronic signatures; Part 1: Hash functions and asymmetric algorithms.
European Telecommunications Standards Institute, 2007.
[117] ETSI/SAGE Specification. Specification of the 3GPP Confidentiality and Integrity Algo-
rithms. Document 2: Kasumi Algorithm Specification. ETSI/SAGE, 2011.
[118] EU. EC regulation (EU) No 611/2013 on the measures applicable to the notification
of personal data breaches under Directive 2002/58/EC on privacy and electronic commu-
nications. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2013:173:
0002:0008:en:PDF.
[119] European Payments Council. Guidelines on algorithms usage and key management, 2013.
[120] Federal Information Processing Standards Publication 197. Advanced encryption standard
(AES). National Institute of Standards and Technology, 2001.
[121] Federal Information Processing Standards Publication 202. SHA-3 standard: Permutation-
based hash and extendable-output functions (draft). National Institute of Standards and
Technology, 2014.
[122] Xiutao Feng, Jun Liu, Zhaocun Zhou, Chuankun Wu, and Dengguo Feng. A byte-based guess
and determine attack on Sosemanuk. In Abe [1], pages 146–157.
[123] Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno. Cryptography Engineering — Design
Principles and Practical Applications. Wiley, 2010.
[124] Jens Franke. RSA576. Post to various internet discussion boards/email lists, 2003.
[125] Jens Franke. RSA576. Post to various internet discussion boards/email lists, 2005.
[126] Eiichiro Fujisaki, Tatsuaki Okamoto, David Pointcheval, and Jacques Stern. RSA-OAEP is
secure under the RSA assumption. In Kilian [194], pages 260–274.
[127] G. Bertoni, J. Daemen, M. Peeters, and G. Van Assche. The Keccak sponge function family.
http://keccak.noekeon.org/.
[128] Karine Gandolfi, Christophe Mourtel, and Francis Olivier. Electromagnetic analysis: Con-
crete results. In Çetin Kaya Koç et al. [69], pages 251–261.
[129] Pierrick Gaudry, Florian Hess, and Nigel P. Smart. Constructive and destructive facets of
Weil descent on elliptic curves. J. Cryptology, 15(1):19–46, 2002.
[130] Danilo Gligoroski, Suzana Andova, and Svein J. Knapskog. On the importance of the key
separation principle for different modes of operation. In Liqun Chen, Yi Mu, and Willy Susilo,
editors, ISPEC, volume 4991 of Lecture Notes in Computer Science, pages 404–418. Springer,
2008.
[131] Ian Goldberg and David Wagner. Randomness and the Netscape browser, 1996. http:
//www.drdobbs.com/windows/184409807.
[132] Daniel M. Gordon. Discrete logarithms in GF(P) using the number field sieve. SIAM J.
Discrete Math., 6(1):124–138, 1993.
[133] GOST R 34-10-2001. Information technology – Cryptography data security – Formation and
verification process of [electronic] signatures. State Standard of the Russian Federation, 2001.
[134] Louis Goubin and Ange Martinelli. Protecting AES with Shamir’s secret sharing scheme. In
Bart Preneel and Tsuyoshi Takagi, editors, CHES, volume 6917 of Lecture Notes in Computer
Science, pages 79–94. Springer, 2011.
[136] Jian Guo, San Ling, Christian Rechberger, and Huaxiong Wang. Advanced meet-in-the-
middle preimage attacks: First results on full Tiger, and improved results on MD4 and
SHA-2. In Abe [1], pages 56–75.
[137] Peter Gutmann. Software generation of practically strong random numbers. In Aviel D.
Rubin, editor, USENIX Security. USENIX Association, 1998.
[138] Zvi Gutterman, Benny Pinkas, and Tzachy Reinman. Analysis of the Linux random number
generator. In IEEE Symposium on Security and Privacy, pages 371–385. IEEE Computer
Society, 2006.
[139] Shai Halevi. EME* : Extending EME to handle arbitrary-length messages with associated
data. In Canteaut and Viswanathan [67], pages 315–327.
[140] Shai Halevi, editor. Advances in Cryptology - CRYPTO 2009, 29th Annual International
Cryptology Conference, Santa Barbara, CA, USA, August 16-20, 2009. Proceedings, volume
5677 of Lecture Notes in Computer Science. Springer, 2009.
[141] Shai Halevi and Phillip Rogaway. A parallelizable enciphering mode. In Okamoto [267], pages
292–304.
[142] Mike Hamburg, Paul Kocher, and Mark E. Marson. Analysis of Intel’s Ivy Bridge digital ran-
dom number generator, March 2012. http://www.cryptography.com/public/pdf/Intel_
TRNG_Report_20120312.pdf.
[143] Helena Handschuh and Bart Preneel. Key-recovery attacks on universal hash function based
MAC algorithms. In Wagner [345], pages 144–161.
[144] D. Harkins. Synthetic Initialization Vector (SIV) Authenticated Encryption Using the Ad-
vanced Encryption Standard (AES). RFC 5297 (Informational), October 2008.
[145] D. Harkins and D. Carrel. The Internet Key Exchange (IKE). RFC 2409 (Proposed Standard),
November 1998. Obsoleted by RFC 4306, updated by RFC 4109.
[146] Johan Håstad. Solving simultaneous modular equations of low degree. SIAM J. Comput.,
17(2):336–341, 1988.
[147] Johan Håstad and Mats Näslund. The security of all RSA and discrete log bits. J. ACM,
51(2):187–230, 2004.
[148] Martin Hell, Thomas Johansson, Alexander Maximov, and Willi Meier. The Grain family of
stream ciphers. In Robshaw and Billet [295], pages 179–190.
[149] Martin Hell, Thomas Johansson, and Willi Meier. Grain: a stream cipher for constrained
environments. IJWMC, 2(1):86–93, 2007.
[150] Nadia Heninger, Zakir Durumeric, Eric Wustrow, and J. Alex Halderman. Mining your Ps and
Qs: Detection of widespread weak keys in network devices. In USENIX Security Symposium
– 2012, pages 205–220, 2012.
[151] Mathias Herrmann and Alexander May. Maximizing small root bounds by linearization and
applications to small secret exponent RSA. In Phong Q. Nguyen and David Pointcheval,
editors, Public Key Cryptography, volume 6056 of Lecture Notes in Computer Science, pages
53–69. Springer, 2010.
[152] Erwin Hess, Marcus Schafheutle, and Pascale Serf. The digital signature scheme ECGDSA,
2006.
[153] R. Housley and M. Dworkin. Advanced Encryption Standard (AES) Key Wrap with Padding
Algorithm. RFC 5649 (Informational), August 2009.
[154] Nick Howgrave-Graham and Nigel P. Smart. Lattice attacks on digital signature schemes.
Des. Codes Cryptography, 23(3):283–290, 2001.
[155] IEEE P1363.3 (Draft D5). Identity-based public key cryptography using pairings. Institute
of Electrical and Electronics Engineers Standard, 2012.
[156] Sebastiaan Indesteege, Florian Mendel, Bart Preneel, and Christian Rechberger. Collisions
and other non-random properties for step-reduced SHA-256. In Roberto Maria Avanzi, Liam
Keliher, and Francesco Sica, editors, Selected Areas in Cryptography, volume 5381 of Lecture
Notes in Computer Science, pages 276–293. Springer, 2008.
[157] Yuval Ishai, Amit Sahai, and David Wagner. Private circuits: Securing hardware against
probing attacks. In Dan Boneh, editor, CRYPTO, volume 2729 of Lecture Notes in Computer
Science, pages 463–481. Springer, 2003.
[158] Takanori Isobe, Toshihiro Ohigashi, Yuhei Watanabe, and Masakatu Morii. Full plaintext
recovery attack on broadcast RC4. In Moriai [239], pages 179–202.
[160] ISO/IEC 11770-6. Information technology – Security techniques – Key management – Part
6: Key derivation. International Organization for Standardization, Under Development.
[161] ISO/IEC 14888-3. Information technology – Security techniques – Digital signatures with
appendix – Part 3: Discrete logarithm based mechanisms. International Organization for
Standardization, 2009.
[162] ISO/IEC 14888-3. Information technology – Security techniques – Digital signatures with
appendix – Part 3: Discrete logarithm based mechanisms – Amendment 1. International
Organization for Standardization, 2009.
[163] ISO/IEC 18031. Information technology – Security techniques – Random bit generator.
International Organization for Standardization, 2011.
[169] ISO/IEC 9796-2. Information technology – Security techniques – Digital signatures giving
message recovery – Part 2: Integer factorization based schemes. International Organization
for Standardization, 2010.
[170] ISO/IEC 9797-1:2011. Information technology – Security techniques – Message Authentication
Codes (MACs) – Part 1: Mechanisms using a block cipher. International Organization for
Standardization, 2011.
[172] Tetsu Iwata and Kaoru Kurosawa. OMAC: One-key CBC MAC. In Thomas Johansson,
editor, FSE, volume 2887 of Lecture Notes in Computer Science, pages 129–153. Springer,
2003.
[173] Tetsu Iwata, Keisuke Ohashi, and Kazuhiko Minematsu. Breaking and repairing GCM secu-
rity proofs. In Safavi-Naini and Canetti [308], pages 31–49.
[174] Thomas Johansson and Phong Q. Nguyen, editors. Advances in Cryptology - EUROCRYPT
2013, 32nd Annual International Conference on the Theory and Applications of Cryptographic
Techniques, Athens, Greece, May 26-30, 2013. Proceedings, volume 7881 of Lecture Notes in
Computer Science. Springer, 2013.
[175] Jakob Jonsson. Security proofs for the RSA-PSS signature scheme and its variants. Cryptol-
ogy ePrint Archive, Report 2001/053, 2001. http://eprint.iacr.org/.
[176] Jakob Jonsson. On the security of CTR + CBC-MAC. In Kaisa Nyberg and Howard M.
Heys, editors, Selected Areas in Cryptography, volume 2595 of Lecture Notes in Computer
Science, pages 76–93. Springer, 2002.
[177] Antoine Joux. Comments on the choice between CWC or GCM – authentication weaknesses in
GCM. http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/comments/CWC-GCM/
Ferguson2.pdf.
[178] Antoine Joux. Comments on the draft GCM specification – authentication failures in NIST
version of GCM. http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/comments/
800-38_Series-Drafts/GCM/Joux_comments.pdf.
[180] Antoine Joux. Faster index calculus for the medium prime case. Application to 1175-bit and
1425-bit finite fields. In Johansson and Nguyen [174], pages 177–193.
[181] Antoine Joux. A new index calculus algorithm with complexity L(1/4 + o(1)) in small
characteristic. In Tanja Lange, Kristin Lauter, and Petr Lisonek, editors, Selected Areas in
Cryptography, volume 8282 of Lecture Notes in Computer Science, pages 355–379. Springer,
2013.
[182] Antoine Joux, Reynald Lercier, Nigel P. Smart, and Frederik Vercauteren. The number field
sieve in the medium prime case. In Dwork [101], pages 326–344.
[183] Marc Joye and Sung-Ming Yen. The Montgomery powering ladder. In Burton S. Kaliski
Jr., Çetin Kaya Koç, and Christof Paar, editors, CHES, volume 2523 of Lecture Notes in
Computer Science, pages 291–302. Springer, 2002.
[184] Hendrik W. Lenstra Jr. Factoring integers with elliptic curves. Annals of Mathematics,
126(3):649–673, 1987.
[185] Saqib A. Kakvi and Eike Kiltz. Optimal security proofs for full domain hash, revisited. In
David Pointcheval and Thomas Johansson, editors, EUROCRYPT, volume 7237 of Lecture
Notes in Computer Science, pages 537–553. Springer, 2012.
[186] B. Kaliski. PKCS #5: Password-Based Cryptography Specification Version 2.0. RFC 2898
(Informational), September 2000.
[187] Seny Kamara and Jonathan Katz. How to encrypt with a malicious random number generator.
In Nyberg [265], pages 303–315.
[188] Ju-Sung Kang, Sang Uk Shin, Dowon Hong, and Okyeon Yi. Provable security of KASUMI
and 3GPP encryption mode f8. In Colin Boyd, editor, ASIACRYPT, volume 2248 of Lecture
Notes in Computer Science, pages 255–271. Springer, 2001.
[189] Orhun Kara and Cevat Manap. A new class of weak keys for Blowfish. In Biryukov [43],
pages 167–180.
[190] Emilia Käsper and Peter Schwabe. Faster and timing-attack resistant AES-GCM. In Christophe
Clavier and Kris Gaj, editors, CHES, volume 5747 of Lecture Notes in Computer Science,
pages 1–17. Springer, 2009.
[191] C. Kaufman. Internet Key Exchange (IKEv2) Protocol. RFC 4306 (Proposed Standard),
December 2005. Obsoleted by RFC 5996, updated by RFC 5282.
[192] John Kelsey, Bruce Schneier, and Niels Ferguson. Yarrow-160: Notes on the design and
analysis of the yarrow cryptographic pseudorandom number generator. In Howard M. Heys
and Carlisle M. Adams, editors, Selected Areas in Cryptography, volume 1758 of Lecture Notes
in Computer Science, pages 13–33. Springer, 1999.
[193] John Kelsey, Bruce Schneier, David Wagner, and Chris Hall. Cryptanalytic attacks on pseu-
dorandom number generators. In Serge Vaudenay, editor, FSE, volume 1372 of Lecture Notes
in Computer Science, pages 168–188. Springer, 1998.
[194] Joe Kilian, editor. Advances in Cryptology - CRYPTO 2001, 21st Annual International
Cryptology Conference, Santa Barbara, California, USA, August 19-23, 2001, Proceedings,
volume 2139 of Lecture Notes in Computer Science. Springer, 2001.
[195] A. Kircanski and A. M. Youssef. On the sliding property of SNOW 3G and SNOW 2.0. IET
Inf. Secur., 5(4):199–206, 2011.
[196] Thorsten Kleinjung. Discrete logarithms in GF(p) — 160 digits. Post to NM-
BRTHRY@LISTSERV.NODAK.EDU, 2007.
[197] Thorsten Kleinjung, Kazumaro Aoki, Jens Franke, Arjen K. Lenstra, Emmanuel Thomé,
Joppe W. Bos, Pierrick Gaudry, Alexander Kruppa, Peter L. Montgomery, Dag Arne Osvik,
Herman J. J. te Riele, Andrey Timofeev, and Paul Zimmermann. Factorization of a 768-bit
RSA modulus. In Rabin [290], pages 333–350.
[198] Lars R. Knudsen, editor. Advances in Cryptology - EUROCRYPT 2002, International Confer-
ence on the Theory and Applications of Cryptographic Techniques, Amsterdam, The Nether-
lands, April 28 - May 2, 2002, Proceedings, volume 2332 of Lecture Notes in Computer
Science. Springer, 2002.
[199] Neal Koblitz, editor. Advances in Cryptology - CRYPTO ’96, 16th Annual International
Cryptology Conference, Santa Barbara, California, USA, August 18-22, 1996, Proceedings,
volume 1109 of Lecture Notes in Computer Science. Springer, 1996.
[200] Paul C. Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other
systems. In Koblitz [199], pages 104–113.
[201] Paul C. Kocher, Joshua Jaffe, and Benjamin Jun. Differential power analysis. In Wiener [351],
pages 388–397.
[202] Tadayoshi Kohno, John Viega, and Doug Whiting. CWC: A high-performance conventional
authenticated encryption mode. In Roy and Meier [305], pages 408–426.
[203] H. Krawczyk, M. Bellare, and R. Canetti. HMAC: Keyed-Hashing for Message Authentica-
tion. RFC 2104 (Informational), February 1997.
[204] Hugo Krawczyk. The order of encryption and authentication for protecting communications
(or: How secure is SSL?). In Kilian [194], pages 310–331.
[205] Hugo Krawczyk. Cryptographic extraction and key derivation: The HKDF scheme. In
Rabin [290], pages 631–648.
[206] Hugo Krawczyk. HMAC-based extract-and-expand key derivation function (HKDF). RFC 5869
(Informational), 2010.
[207] Hugo Krawczyk, Kenneth G. Paterson, and Hoeteck Wee. On the security of the TLS protocol:
A systematic analysis. In Ran Canetti and Juan A. Garay, editors, CRYPTO (1), volume
8042 of Lecture Notes in Computer Science, pages 429–448. Springer, 2013.
[208] T. Krovetz. UMAC: Message Authentication Code using Universal Hashing. RFC 4418
(Informational), March 2006.
[209] Patrick Lacharme, Andrea Röck, Vincent Strubel, and Marion Videau. The Linux pseudoran-
dom number generator revisited. IACR Cryptology ePrint Archive, 2012:251, 2012.
[210] Mario Lamberger, Florian Mendel, Christian Rechberger, Vincent Rijmen, and Martin
Schläffer. Rebound distinguishers: Results on the full Whirlpool compression function. In
Matsui [226], pages 126–143.
[211] Franck Landelle and Thomas Peyrin. Cryptanalysis of full RIPEMD-128. In Johansson and
Nguyen [174], pages 228–244.
[212] Dong Hoon Lee and Xiaoyun Wang, editors. Advances in Cryptology - ASIACRYPT 2011 -
17th International Conference on the Theory and Application of Cryptology and Information
Security, Seoul, South Korea, December 4-8, 2011. Proceedings, volume 7073 of Lecture Notes
in Computer Science. Springer, 2011.
[213] Jung-Keun Lee, Dong Hoon Lee, and Sangwoo Park. Cryptanalysis of Sosemanuk and SNOW
2.0 using linear masks. In Josef Pieprzyk, editor, ASIACRYPT, volume 5350 of Lecture Notes
in Computer Science, pages 524–538. Springer, 2008.
[214] Arjen Lenstra. Key lengths. In Hossein Bidgoli, editor, Handbook of Information Security:
Volume II: Information Warfare; Social Legal, and International Issues; and Security Foun-
dations, pages 617–635. Wiley, 2004.
[215] Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and
Christophe Wachter. Public keys. In Safavi-Naini and Canetti [308], pages 626–642.
[216] Arjen K. Lenstra and Hendrik W. Lenstra. The development of the number field sieve, volume
1554 of Lecture Notes in Mathematics. Springer, 1993.
[217] Arjen K. Lenstra and Eric R. Verheul. Selecting cryptographic key sizes. Datenschutz und
Datensicherheit, 24(3), 2000.
[218] Gaëtan Leurent. Message freedom in MD4 and MD5 collisions: Application to APOP. In
Biryukov [43], pages 309–328.
[219] Chu-Wee Lim and Khoongming Khoo. An analysis of XSL applied to BES. In Biryukov [43],
pages 242–253.
[220] Moses Liskov and Kazuhiko Minematsu. Comments on the proposal to approve XTS-AES
– Comments on XTS-AES. http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/
comments/XTS/XTS_comments-Liskov_Minematsu.pdf.
[221] Yi Lu, Willi Meier, and Serge Vaudenay. The conditional correlation attack: A practical
attack on Bluetooth encryption. In Shoup [325], pages 97–117.
[222] Subhamoy Maitra and Goutam Paul. New form of permutation bias and secret key leakage
in keystream bytes of RC4. In Nyberg [265], pages 253–269.
[223] James Manger. A chosen ciphertext attack on RSA optimal asymmetric encryption padding
(OAEP) as standardized in PKCS #1 v2.0. In Kilian [194], pages 230–238.
[224] M. Matsui, J. Nakajima, and S. Moriai. A Description of the Camellia Encryption Algorithm.
RFC 3713 (Informational), April 2004.
[225] Mitsuru Matsui. Linear cryptanalysis method for DES cipher. In Tor Helleseth, editor,
EUROCRYPT, volume 765 of Lecture Notes in Computer Science, pages 386–397. Springer,
1993.
[226] Mitsuru Matsui, editor. Advances in Cryptology - ASIACRYPT 2009, 15th International
Conference on the Theory and Application of Cryptology and Information Security, Tokyo,
Japan, December 6-10, 2009. Proceedings, volume 5912 of Lecture Notes in Computer Science.
Springer, 2009.
[227] Ueli M. Maurer, editor. Advances in Cryptology - EUROCRYPT ’96, International Con-
ference on the Theory and Application of Cryptographic Techniques, Saragossa, Spain, May
12-16, 1996, Proceeding, volume 1070 of Lecture Notes in Computer Science. Springer, 1996.
[228] Alexander Maximov and Alex Biryukov. Two trivial attacks on Trivium. In Adams et al. [2],
pages 36–55.
[229] Alexander Maximov and Dmitry Khovratovich. New state recovery attack on RC4. In
Wagner [345], pages 297–316.
[230] David A. McGrew. Efficient authentication of large, dynamic data sets using Galois/Counter
mode (GCM). In IEEE Security in Storage Workshop, pages 89–94. IEEE Computer Society,
2005.
[231] David A. McGrew and John Viega. The security and performance of the Galois/Counter
mode (GCM) of operation. In Canteaut and Viswanathan [67], pages 343–355.
[232] Florian Mendel, Tomislav Nad, Stefan Scherz, and Martin Schläffer. Differential attacks on
reduced RIPEMD-160. In Dieter Gollmann and Felix C. Freiling, editors, ISC, volume 7483 of
Lecture Notes in Computer Science, pages 23–38. Springer, 2012.
[233] Florian Mendel, Tomislav Nad, and Martin Schläffer. Improving local collisions: New attacks
on reduced SHA-256. In Johansson and Nguyen [174], pages 262–278.
[234] Florian Mendel, Thomas Peyrin, Martin Schläffer, Lei Wang, and Shuang Wu. Improved
cryptanalysis of reduced RIPEMD-160. In Sako and Sarkar [309], pages 484–503.
[235] Florian Mendel, Norbert Pramstaller, Christian Rechberger, and Vincent Rijmen. On the
collision resistance of RIPEMD-160. In Sokratis K. Katsikas, Javier Lopez, Michael Backes,
Stefanos Gritzalis, and Bart Preneel, editors, ISC, volume 4176 of Lecture Notes in Computer
Science, pages 101–116. Springer, 2006.
[236] Florian Mendel, Christian Rechberger, and Martin Schläffer. Update on SHA-1. Presented
at Rump Session of Crypto 2007, 2007.
[237] Alfred Menezes, Tatsuaki Okamoto, and Scott A. Vanstone. Reducing elliptic curve loga-
rithms to logarithms in a finite field. IEEE Transactions on Information Theory, 39(5):1639–
1646, 1993.
[238] Ralph C. Merkle. A certified digital signature. In Brassard [58], pages 218–238.
[239] Shiho Moriai, editor. Fast Software Encryption - 20th International Workshop, FSE 2013,
Singapore, March 11-13, 2013. Revised Selected Papers, volume 8424 of Lecture Notes in
Computer Science. Springer, 2014.
[240] Sean Murphy and Matthew J. B. Robshaw. Essential algebraic structure within the AES.
In Moti Yung, editor, CRYPTO, volume 2442 of Lecture Notes in Computer Science, pages
1–16. Springer, 2002.
[241] Chanathip Namprempre, Phillip Rogaway, and Thomas Shrimpton. Reconsidering generic
composition. In Phong Q. Nguyen and Elisabeth Oswald, editors, EUROCRYPT, volume
8441 of Lecture Notes in Computer Science, pages 257–274. Springer, 2014.
[242] Mridul Nandi. A unified method for improving PRF bounds for a class of blockcipher based
MACs. In Seokhie Hong and Tetsu Iwata, editors, FSE, volume 6147 of Lecture Notes in
Computer Science, pages 212–229. Springer, 2010.
[243] National Security Agency. Suite B cryptography. http://www.nsa.gov/ia/programs/
suiteb_cryptography/index.shtml, 2009.
[244] Gregory Neven, Nigel P. Smart, and Bogdan Warinschi. Hash function requirements for
Schnorr signatures. J. Mathematical Cryptology, 3(1):69–87, 2009.
[245] Phong Q. Nguyen and Igor Shparlinski. The insecurity of the digital signature algorithm with
partially known nonces. J. Cryptology, 15(3):151–176, 2002.
[246] Phong Q. Nguyen and Igor Shparlinski. The insecurity of the elliptic curve digital signature
algorithm with partially known nonces. Des. Codes Cryptography, 30(2):201–217, 2003.
[247] Svetla Nikova, Vincent Rijmen, and Martin Schläffer. Secure hardware implementation of
nonlinear functions in the presence of glitches. J. Cryptology, 24(2):292–321, 2011.
[248] Federal Information Processing Standards Publication 180-4. Secure hash standard (SHS).
National Institute of Standards and Technology, 2012.
[249] Federal Information Processing Standards Publication 186-4. Digital signature standard (DSS).
National Institute of Standards and Technology, 2013.
[250] Federal Information Processing Standards Publication 198-1. The keyed-hash message authen-
tication code (HMAC). National Institute of Standards and Technology, 2008.
[251] NIST Special Publication 800-108. Recommendation for key derivation using pseudorandom
functions. National Institute of Standards and Technology, 2009.
[252] NIST Special Publication 800-130. A framework for designing cryptographic key management
systems. National Institute of Standards and Technology, 2013.
[253] NIST Special Publication 800-132. Recommendation for password-based key derivation –
Part 1: Storage applications. National Institute of Standards and Technology, 2010.
[254] NIST Special Publication 800-38A. Recommendation for block cipher modes of operation –
Modes and techniques. National Institute of Standards and Technology, 2001.
[255] NIST Special Publication 800-38C. Recommendation for block cipher modes of operation –
The CCM mode for authentication and confidentiality. National Institute of Standards and
Technology, 2004.
[256] NIST Special Publication 800-38D. Recommendation for block cipher modes of operation –
Galois/Counter Mode (GCM) and GMAC. National Institute of Standards and Technology,
2007.
[257] NIST Special Publication 800-38E. Recommendation for block cipher modes of operation –
The XTS-AES mode for confidentiality on storage devices. National Institute of Standards
and Technology, 2010.
[258] NIST Special Publication 800-38F. Recommendation for block cipher modes of operation –
Methods for Key Wrapping. National Institute of Standards and Technology, 2012.
[259] NIST Special Publication 800-56A. Recommendation for pair-wise key establishment schemes
using discrete logarithm cryptography. National Institute of Standards and Technology, 2007.
[260] NIST Special Publication 800-56B. Recommendation for pair-wise key establishment schemes
using integer factorization cryptography. National Institute of Standards and Technology,
2009.
[261] NIST Special Publication 800-56C. Recommendation for key derivation through extraction-
then-expansion. National Institute of Standards and Technology, 2009.
[262] NIST Special Publication 800-57. Recommendation for key management – Part 1: General
(Revision 3). National Institute of Standards and Technology, 2012.
[263] NIST Special Publication 800-67-Rev1. Recommendation for the Triple Data Encryption
Algorithm (TDEA) block cipher. National Institute of Standards and Technology, 2012.
[264] NIST Special Publication 800-90A. Recommendation for random number generation using
deterministic random bit generators. National Institute of Standards and Technology, 2012.
[265] Kaisa Nyberg, editor. Fast Software Encryption, 15th International Workshop, FSE 2008,
Lausanne, Switzerland, February 10-13, 2008, Revised Selected Papers, volume 5086 of Lec-
ture Notes in Computer Science. Springer, 2008.
[266] Kaisa Nyberg and Johan Wallén. Improved linear distinguishers for SNOW 2.0. In Robshaw
[294], pages 144–162.
[267] Tatsuaki Okamoto, editor. Topics in Cryptology - CT-RSA 2004, The Cryptographers’ Track
at the RSA Conference 2004, San Francisco, CA, USA, February 23-27, 2004, Proceedings,
volume 2964 of Lecture Notes in Computer Science. Springer, 2004.
[268] H. Orman and P. Hoffman. Determining Strengths For Public Keys Used For Exchanging
Symmetric Keys. RFC 3766 (Best Current Practice), April 2004.
[269] Christof Paar and Jan Pelzl. Understanding cryptography: A textbook for students and
practitioners. Springer, 2009.
[270] Kenneth G. Paterson, Jacob C. N. Schuldt, and Dale L. Sibborn. Related randomness attacks
for public key encryption. In Hugo Krawczyk, editor, Public Key Cryptography, volume 8383
of Lecture Notes in Computer Science, pages 465–482. Springer, 2014.
[271] Kenneth G. Paterson, Jacob C. N. Schuldt, Martijn Stam, and Susan Thomson. On the joint
security of encryption and signature, revisited. In Lee and Wang [212], pages 161–178.
[272] Kenneth G. Paterson and Arnold K. L. Yau. Padding Oracle Attacks on the ISO CBC Mode
Encryption Standard. In Okamoto [267], pages 305–323.
[273] C. Percival and S. Josefsson. The scrypt Password-Based Key Derivation Function draft-
josefsson-scrypt-kdf-01. Internet-Draft (Informational), September 2012.
[274] Erez Petrank and Charles Rackoff. CBC MAC for real-time data sources. J. Cryptology,
13(3):315–338, 2000.
[275] Raphael Chung-Wei Phan. Related-key attacks on triple-DES and DESX variants. In
Okamoto [267], pages 15–24.
[276] Krzysztof Pietrzak. A tight bound for EMAC. In Michele Bugliesi, Bart Preneel, Vladimiro
Sassone, and Ingo Wegener, editors, ICALP (2), volume 4052 of Lecture Notes in Computer
Science, pages 168–179. Springer, 2006.
[277] Leon A. Pintsov and Scott A. Vanstone. Postal revenue collection in the digital age. In Yair
Frankel, editor, Financial Cryptography, 4th International Conference, FC 2000 Anguilla,
British West Indies, February 20-24, 2000, Proceedings, volume 1962 of Lecture Notes in
Computer Science, pages 105–120. Springer, 2000.
[280] David Pointcheval and Jacques Stern. Security arguments for digital signatures and blind
signatures. J. Cryptology, 13(3):361–396, 2000.
[281] David Pointcheval and Serge Vaudenay. On provable security for digital signature algorithms.
Technical Report LIENS-96-17, 1996.
[282] John M. Pollard. Monte Carlo methods for index computation (mod p). Math. Comput.,
32(143):918–924, 1978.
[283] Thomas Popp and Stefan Mangard. Masked dual-rail pre-charge logic: DPA resistance without
routing constraints. In Josyula R. Rao and Berk Sunar, editors, CHES, volume 3659 of Lecture
Notes in Computer Science, pages 172–186. Springer, 2005.
[284] Axel Poschmann, Amir Moradi, Khoongming Khoo, Chu-Wee Lim, Huaxiong Wang, and
San Ling. Side-channel resistant crypto for less than 2,300 GE. J. Cryptology, 24(2):322–345,
2011.
[285] Bart Preneel and Paul C. van Oorschot. MDx-MAC and building fast MACs from hash
functions. In Don Coppersmith, editor, CRYPTO, volume 963 of Lecture Notes in Computer
Science, pages 1–14. Springer, 1995.
[286] Bart Preneel and Paul C. van Oorschot. On the security of iterated message authentication
codes. IEEE Transactions on Information Theory, 45(1):188–199, 1999.
[287] Gordon Procter and Carlos Cid. On weak keys and forgery attacks against polynomial-based
MAC schemes. In Moriai [239], pages 287–304.
[288] Emmanuel Prouff and Matthieu Rivain. Masking against side-channel attacks: A formal
security proof. In Johansson and Nguyen [174], pages 142–159.
[289] Niels Provos and David Mazières. A future-adaptable password scheme. In USENIX Annual
Technical Conference, FREENIX Track, pages 81–91. USENIX, 1999.
[290] Tal Rabin, editor. Advances in Cryptology - CRYPTO 2010, 30th Annual Cryptology Con-
ference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings, volume 6223 of Lecture
Notes in Computer Science. Springer, 2010.
[291] Ananth Raghunathan, Gil Segev, and Salil P. Vadhan. Deterministic public-key encryption
for adaptively chosen plaintext distributions. In Johansson and Nguyen [174], pages 93–110.
[292] Vincent Rijmen. Cryptanalysis and design of iterated block ciphers. PhD thesis, Katholieke
Universiteit Leuven, 1997.
[293] Thomas Ristenpart and Scott Yilek. When good randomness goes bad: Virtual machine reset
vulnerabilities and hedging deployed cryptography. In NDSS. The Internet Society, 2010.
[294] Matthew J. B. Robshaw, editor. Fast Software Encryption, 13th International Workshop,
FSE 2006, Graz, Austria, March 15-17, 2006, Revised Selected Papers, volume 4047 of Lecture
Notes in Computer Science. Springer, 2006.
[295] Matthew J. B. Robshaw and Olivier Billet, editors. New Stream Cipher Designs - The
eSTREAM Finalists, volume 4986 of Lecture Notes in Computer Science. Springer, 2008.
[297] Phillip Rogaway. Efficient instantiations of tweakable blockciphers and refinements to modes
OCB and PMAC. In Pil Joong Lee, editor, ASIACRYPT, volume 3329 of Lecture Notes in
Computer Science, pages 16–31. Springer, 2004.
[298] Phillip Rogaway. Nonce-based symmetric encryption. In Roy and Meier [305], pages 348–359.
[299] Phillip Rogaway. Evaluation of some blockcipher modes of operation. Cryptography Research
and Evaluation Committees (CRYPTREC) for the Government of Japan, 2011.
[312] Yu Sasaki and Kazumaro Aoki. Finding preimages in full MD5 faster than exhaustive search.
In Antoine Joux, editor, EUROCRYPT, volume 5479 of Lecture Notes in Computer Science,
pages 134–152. Springer, 2009.
[313] Yu Sasaki, Lei Wang, Kazuo Ohta, and Noboru Kunihiro. Security of MD5 challenge and
response: Extension of APOP password recovery attack. In Tal Malkin, editor, CT-RSA,
volume 4964 of Lecture Notes in Computer Science, pages 1–18. Springer, 2008.
[314] Yu Sasaki, Lei Wang, Shuang Wu, and Wenling Wu. Investigating fundamental security
requirements on Whirlpool: Improved preimage and collision attacks. In Xiaoyun Wang and
Kazue Sako, editors, ASIACRYPT, volume 7658 of Lecture Notes in Computer Science, pages
562–579. Springer, 2012.
[315] Takakazu Satoh and Kiyomichi Araki. Fermat quotients and the polynomial time discrete log
algorithm for anomalous elliptic curves. Commentarii Math. Univ. St. Pauli, 47:81–92, 1998.
[316] J. Schaad and R. Housley. Advanced Encryption Standard (AES) Key Wrap Algorithm. RFC
3394 (Informational), September 2002.
[317] Bruce Schneier. Description of a new variable-length key, 64-bit block cipher (Blowfish). In
Ross J. Anderson, editor, FSE, volume 809 of Lecture Notes in Computer Science, pages
191–204. Springer, 1993.
[318] Claus-Peter Schnorr. Efficient identification and signatures for smart cards. In Brassard [58],
pages 239–252.
[319] SEC 1. Elliptic curve cryptography – version 2.0. Standards for Efficient Cryptography
Group, 2009.
[320] SEC 2. Recommended elliptic curve domain parameters – version 2.0. Standards for Efficient
Cryptography Group, 2010.
[321] Igor A. Semaev. Evaluation of discrete logarithms in a group of p-torsion points of an elliptic
curve in characteristic p. Math. Comput., 67(221):353–356, 1998.
[322] Pouyan Sepehrdad, Serge Vaudenay, and Martin Vuagnoux. Statistical attack on RC4 -
distinguishing WPA. In Kenneth G. Paterson, editor, EUROCRYPT, volume 6632 of Lecture
Notes in Computer Science, pages 343–363. Springer, 2011.
[323] Claude E. Shannon. Communication theory of secrecy systems. Bell System Technical Jour-
nal, 28(4):656–715, 1949.
[324] Victor Shoup. A proposal for an ISO standard for public key encryption. Cryptology ePrint
Archive, Report 2001/112, 2001. http://eprint.iacr.org/.
[325] Victor Shoup, editor. Advances in Cryptology - CRYPTO 2005: 25th Annual International
Cryptology Conference, Santa Barbara, California, USA, August 14-18, 2005, Proceedings,
volume 3621 of Lecture Notes in Computer Science. Springer, 2005.
[326] Thomas Shrimpton and R. Seth Terashima. A provable security analysis of Intel's Secure Key
RNG. IACR Cryptology ePrint Archive, 2014:504, 2014.
[327] Dan Shumow and Niels Ferguson. On the possibility of a back door in the NIST SP 800-90
Dual EC PRNG, August 2007. Rump session presentation at Crypto 2007,
http://rump2007.cr.yp.to/15-shumow.pdf.
[328] Sergei P. Skorobogatov. Using optical emission analysis for estimating contribution to power
analysis. In Luca Breveglieri, Israel Koren, David Naccache, Elisabeth Oswald, and Jean-
Pierre Seifert, editors, FDTC, pages 111–119. IEEE Computer Society, 2009.
[329] Nigel P. Smart. The discrete logarithm problem on elliptic curves of trace one. J. Cryptology,
12(3):193–196, 1999.
[330] Marc Stevens. New collision attacks on SHA-1 based on optimal joint local-collision analysis.
In Johansson and Nguyen [174], pages 245–261.
[331] Marc Stevens, Arjen K. Lenstra, and Benne de Weger. Chosen-prefix collisions for MD5
and colliding X.509 certificates for different identities. In Moni Naor, editor, EUROCRYPT,
volume 4515 of Lecture Notes in Computer Science, pages 1–22. Springer, 2007.
[332] Marc Stevens, Arjen K. Lenstra, and Benne de Weger. Chosen-prefix collisions for MD5 and
applications. IJACT, 2(4):322–359, 2012.
[333] Marc Stevens, Alexander Sotirov, Jacob Appelbaum, Arjen K. Lenstra, David Molnar,
Dag Arne Osvik, and Benne de Weger. Short chosen-prefix collisions for MD5 and the creation
of a rogue CA certificate. In Halevi [140], pages 55–69.
[334] Kris Tiri and Ingrid Verbauwhede. A logic level design methodology for a secure DPA resistant
ASIC or FPGA implementation. In DATE, pages 246–251. IEEE Computer Society, 2004.
[335] Elena Trichina, Tymur Korkishko, and Kyung-Hee Lee. Small size, low power, side channel-immune
AES coprocessor: Design and synthesis results. In Hans Dobbertin, Vincent Rijmen,
and Aleksandra Sowa, editors, AES Conference, volume 3373 of Lecture Notes in Computer
Science, pages 113–127. Springer, 2004.
[336] Eran Tromer, Dag Arne Osvik, and Adi Shamir. Efficient cache attacks on AES, and
countermeasures. J. Cryptology, 23(1):37–71, 2010.
[337] TTA.KO-12.0001/R1. Digital signature scheme with appendix – Part 2: Certificate-based
digital signature algorithm. Korean Telecommunications Technology Association, 2000.
[338] Kyushu University, NICT, and Fujitsu Laboratories. Achieve world record cryptanalysis
of next-generation cryptography. http://www.nict.go.jp/en/press/2012/06/PDF-att/
20120618en.pdf, 2012.
[339] Paul C. van Oorschot and Michael J. Wiener. A known plaintext attack on two-key triple en-
cryption. In Ivan Damgård, editor, EUROCRYPT, volume 473 of Lecture Notes in Computer
Science, pages 318–325. Springer, 1990.
[340] Paul C. van Oorschot and Michael J. Wiener. Parallel collision search with cryptanalytic
applications. J. Cryptology, 12(1):1–28, 1999.
[341] Serge Vaudenay. On the weak keys of Blowfish. In Dieter Gollmann, editor, FSE, volume
1039 of Lecture Notes in Computer Science, pages 27–32. Springer, 1996.
[342] Serge Vaudenay. Security flaws induced by CBC padding - Applications to SSL, IPSEC,
WTLS ... In Knudsen [198], pages 534–546.
[343] Serge Vaudenay, editor. Advances in Cryptology - EUROCRYPT 2006, 25th Annual Interna-
tional Conference on the Theory and Applications of Cryptographic Techniques, St. Peters-
burg, Russia, May 28 - June 1, 2006, Proceedings, volume 4004 of Lecture Notes in Computer
Science. Springer, 2006.
[344] Serge Vaudenay and Martin Vuagnoux. Passive-only key recovery attacks on RC4. In Adams
et al. [2], pages 344–359.
[345] David Wagner, editor. Advances in Cryptology - CRYPTO 2008, 28th Annual International
Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2008. Proceedings, volume
5157 of Lecture Notes in Computer Science. Springer, 2008.
[346] Xiaoyun Wang. New collision search for SHA-1. Presented at Rump Session of Crypto 2005,
2005.
[347] Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu. Finding collisions in the full SHA-1. In
Shoup [325], pages 17–36.
[348] Brent Waters. Efficient identity-based encryption without random oracles. In Cramer [88],
pages 114–127.
[349] Doug Whiting, Russ Housley, and Niels Ferguson. Submission to NIST: Counter with CBC-
MAC (CCM) – AES mode of operation.
http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/ccm.pdf.
[350] Michael J. Wiener. Cryptanalysis of short RSA secret exponents. IEEE Transactions on
Information Theory, 36(3):553–558, 1990.
[351] Michael J. Wiener, editor. Advances in Cryptology - CRYPTO ’99, 19th Annual International
Cryptology Conference, Santa Barbara, California, USA, August 15-19, 1999, Proceedings,
volume 1666 of Lecture Notes in Computer Science. Springer, 1999.
[352] Hongjun Wu. A new stream cipher HC-256. In Roy and Meier [305], pages 226–244.
[353] Hongjun Wu. The stream cipher HC-128. In Robshaw and Billet [295], pages 39–47.
[354] Arnold K. L. Yau, Kenneth G. Paterson, and Chris J. Mitchell. Padding oracle attacks on
CBC-mode encryption with secret and random IVs. In Henri Gilbert and Helena Handschuh,
editors, FSE, volume 3557 of Lecture Notes in Computer Science, pages 299–319. Springer,
2005.
[355] Scott Yilek. Resettable public-key encryption: How to encrypt on a virtual machine. In Josef
Pieprzyk, editor, CT-RSA, volume 5985 of Lecture Notes in Computer Science, pages 41–56.
Springer, 2010.
Index
(EC)DSA, 52, 55, 56
(EC)Schnorr, 14, 52, 56
3DES, 21, 23, 24
3GPP, 24
802.11i, 47
A5/1, 28, 31
A5/2, 28, 31
A5/3, 24
AES, 13, 15, 20, 21, 23, 26, 28, 36, 43, 44, 66
AES-NI instructions, 66
authenticated encryption, 15, 46–48
BB, 57, 61
bcrypt, 58
BF, 57, 61
BLAKE, 29
block ciphers, 22–25
  modes of operation, 39–42
Blowfish, 23, 25
Camellia, 16, 23, 24
CBC mode, 13, 19, 40, 41, 46
CBC-MAC, 43–44, 46, 47
  AMAC, 42–44
  CMAC, 15, 43, 44
  EMAC, 42–44
  LMAC, 43
CCM mode, 19, 39, 40, 47
certificates, 51
CFB mode, 40, 41
ChaCha, 28, 29
CMAC, 19, 42
CTR mode, 15, 19, 28, 40, 41, 47
CWC mode, 19, 40, 47
Data Encapsulation Mechanism, see DEM
Decision Diffie–Hellman problem, 33, 34
DEM, 15, 19, 28, 46, 52
DES, 16, 23, 25, 44
Diffie–Hellman problem, 33, 34, 53
discrete logarithm problem, see DLP
DLP, 32–37
DNSSEC, 66
domain parameters, 51
DSA, 67
E0, 28, 31
EAX mode, 19, 39, 40, 47
ECB mode, 39, 40
ECDLP, 19, 32, 34–37
ECIES, 14, 15, 19, 53
ECIES-KEM, 15, 19, 52, 53
elliptic curves, 21, 34–37
  pairings, 32, 35
EMAC, 19
EME mode, 40, 42
EMV, 38
Encrypt-and-MAC, 40, 46
Encrypt-then-MAC, 15, 19, 40, 46, 50
Encrypted Storage, 61
Entropy, 70
factoring, 32
gap Diffie–Hellman problem, 33, 34, 53
GCM, 45
GCM mode, 19, 39, 40, 47, 48
GDSA, 52, 55
GMAC, 45, 48
Grain, 28, 30
GSM, 24
hash functions, 25–27
HC-128, 28
HKDF, 48–50
HMAC, 19, 42, 44, 49, 50
IAPM, 47
IBE, 61
ID-IND-CCA, 61
Identity Based Encryption, 61–62
IKE-KDF, 49, 50
IND-CCA, 39–41, 46, 52
IND-CPA, 39–41, 46
IND-CVA, 39
INT-CTXT, 46
IPsec, 13, 25, 48, 50
ISO 19772, 46
ISO-9796
  RSA DS1, 52, 54
  RSA DS2, 52, 54
  RSA DS3, 52, 54
Kasumi, 23, 24
KDF, 15, 19, 48–50, 53
  Password Based, 58–59
KDSA, 52, 55
KEM, 15, 46, 51–53, 61
Key Derivation Functions, see KDF
Key Encapsulation Mechanism, see KEM
Key Management, 73–77
key separation, 38
Key Wrap
  AESKW, 60
  AKW1, 60
  AKW2, 60
  KW, 59
  KWP, 60
  SIV, 60
  TDKW, 60
  TKW, 59
key wrap, 76
Key Wrapping, 59–60
LTE, 23, 29
MAC, 15, 19, 22, 23, 46, 47, 50
MAC-then-Encrypt, 40, 46
MACs, 42–46
MD-5, 16, 26, 27, 45, 49, 50
message authentication codes, see MAC
Mickey 2.0, 28, 30
Montgomery ladder, 65
NIST-800-108-KDF, 19, 49
NIST-800-56-KDF, 19, 49, 50
NMAC, 44
OCB mode, 19, 40, 47
OFB mode, 40, 41
OpenSSL, 66
OpenVPN, 66
Password-Based Encryption, 58
PBKDF2, 58
PRF, 44, 49, 50
primitive, 13
protocol, 13
PSEC-KEM, 52, 53
PV Signatures, 52, 55
quantum computers, 37
Rabbit, 28, 30
Random Number Generation, 66–73
  NIST-DRBG, 67
  Dual Elliptic Curve, 72
  Fortuna, 72
  Linux PRNG, 71
  OpenSSL PRNG, 72
RC4, 28, 31
RDSA, 52, 55
RIPEMD-128, 26, 27
RIPEMD-160, 26, 27
RSA, 20, 21, 32–33, 37, 51, 53, 55, 67
  timing attack, 64
RSA-FDH, 52, 54
RSA-KEM, 52, 53
RSA-OAEP, 14, 15, 51, 52
RSA-PKCS#1
  encryption, 51, 52
  signatures, 52, 54
RSA-PSS, 14, 52, 54
Salsa20/20, 28, 29
scheme, 13
scrypt, 59
SHA-1, 15, 16, 26, 27, 45, 49, 50, 52
SHA-2, 15, 19, 25, 26, 37, 45, 49, 50, 52
SHA-3, 19, 45, 52
SHA3, 26
Shamir Secret Sharing, 66
Side-channels, 63–66
  cache attacks, 65
SK, 57, 61
SNOW 2.0, 28, 29
SNOW 3G, 16, 28, 29
SOSEMANUK, 28, 29
SSH, 48
SSL, 51, 66
Stream Ciphers, 28–31
UIA1, 24
UMAC, 42, 45
UMTS, 24, 29
Whirlpool, 26
X9.63-KDF, 15, 19, 49
XEX, 41
XTS mode, 40, 41
ENISA
European Union Agency for Network and Information Security
Science and Technology Park of Crete (ITE)
Vassilika Vouton, 700 13, Heraklion, Greece
Athens Office
1 Vassilis Sofias, Marousi 151 24, Athens, Greece
Catalogue number TP-05-14-084-EN-N
ISBN 978-92-9204-102-1
DOI 10.2824/36822