Lattices
This is a draft chapter from version 0.8 of the book “Mathematics of Public Key Cryptography” by
Steven Galbraith, available from http://www.isg.rhul.ac.uk/~sdg/crypto-book/. The copyright for
this chapter is held by Steven Galbraith.
This book is yet to be completed and so the contents of this chapter may change. In particular,
many chapters are currently too long and will be shortened. Hence, the Chapter, Section or Theorem
numbers are likely to change.
By downloading this chapter you agree to email S.Galbraith@math.auckland.ac.nz if you find
any mistakes, or if the explanation is unclear or misleading, or if there are any missing references,
or if something can be simplified, or if you have any suggestions for additional theorems, examples
or exercises. All feedback on the draft of the book is very welcome and will be acknowledged.
The word ‘lattice’ has two different meanings in mathematics. One meaning is related to the
theory of partial orderings on sets (for example, the lattice of subsets of a set). The other meaning,
which is the one relevant to us, is discrete subgroups of R^n.
There are several reasons for presenting lattices in this book. First, there are hard computational
problems on lattices which have been used as a building block for public key cryptosystems (e.g.,
the Goldreich-Goldwasser-Halevi (GGH) cryptosystem, the NTRU cryptosystem, the Ajtai-Dwork
cryptosystem, and the LWE cryptosystem). Second, lattices are used as a fundamental tool for
cryptanalysis of public key cryptosystems (e.g., lattice attacks on knapsack cryptosystems, Cop-
persmith’s method for finding small solutions to polynomial equations, attacks on signatures, and
attacks on variants of RSA). Third, there are applications of lattices to efficient implementation of
discrete logarithm systems (such as the GLV method; see Section 12.3.3). Finally, lattices are used
as a theoretical tool for security analysis of cryptosystems (e.g., bit security of Diffie-Hellman key
exchange using the hidden number problem, see Section 22.5).
Some good references for lattices, applications of lattices and/or lattice reduction algorithms
are: Cassels [116], Siegel [524], Cohen [129], von zur Gathen and Gerhard [584], Grötschel, Lovász
and Schrijver [246], Nguyen and Stern [431, 432], Micciancio and Goldwasser [393], Hoffstein, Pipher
and Silverman [265], Lenstra’s chapter in [107], Micciancio and Regev’s chapter in [46] and the
proceedings of the conference LLL+25.
Definition 17.1.1. Let {b_1, . . . , b_n} be a linearly independent set of (row) vectors in R^m (m ≥ n).
The lattice generated by {b_1, . . . , b_n} is the set

    L = { ∑_{i=1}^{n} l_i b_i : l_i ∈ Z }

of integer linear combinations of the b_i. The vectors b_1, . . . , b_n are called a lattice basis. The
lattice rank is n and the lattice dimension is m. If n = m then L is said to be a full rank
lattice.
Let L ⊂ R^m be a lattice. A sublattice is a subset L′ ⊂ L which is a lattice.
A basis matrix B of a lattice L is an n × m matrix formed by taking the rows to be basis
vectors b_i. Thus B_{i,j} is the j-th entry of the row b_i and

    L = {xB : x ∈ Z^n}.
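To make the row-vector convention xB concrete, here is a small Python sketch (the basis matrix and the coefficient vectors below are arbitrary choices for illustration):

    import numpy as np

    # An example rank 2 basis matrix B in R^3 (the rows are the basis vectors b_1, b_2).
    B = np.array([[1, 2, 0],
                  [0, 3, 5]])

    # A few integer coefficient vectors x in Z^2; the corresponding lattice points are x B.
    for x in [(1, 0), (0, 1), (2, -1), (-3, 4)]:
        v = np.array(x) @ B
        print(x, "->", v)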
Proof: Given the n × m basis matrix B with rows b_i, define V = span{b_1, . . . , b_n} ⊂ R^m, which
has dimension n by assumption. Choose (perhaps by running the Gram-Schmidt algorithm) a basis
v_1, . . . , v_n for V which is orthonormal with respect to the inner product in R^m. Define the linear map
P : V → R^n by P(v_i) = e_i. For v = ∑_{i=1}^{n} x_i v_i ∈ V we have ‖v‖ = √(⟨v, v⟩) = √(∑_{i=1}^{n} x_i^2) = ‖vP‖.
Since the vectors b_i form a basis for V, the vectors b_i P are linearly independent. Hence, BP is an
invertible matrix and P(L) is a lattice of rank n.
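The projection in this proof can be carried out numerically. The following sketch (the basis matrix is an arbitrary illustrative choice) obtains an orthonormal basis of V from a QR decomposition, which is one way of running Gram-Schmidt, and checks that lengths are preserved and that BP is invertible:

    import numpy as np

    B = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 5.0]])          # n = 2, m = 3

    # The columns of Q are an orthonormal basis of V = span of the rows of B.
    Q, _ = np.linalg.qr(B.T)                 # Q is m x n
    P = Q                                    # matrix of the map P : V -> R^n, v |-> vP

    v = np.array([2.0, -1.0]) @ B            # an arbitrary vector of V
    print(np.linalg.norm(v), np.linalg.norm(v @ P))   # equal: P preserves lengths on V
    print(np.linalg.det(B @ P))              # non-zero: BP is invertible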
We can now prove the following fundamental result.
Lemma 17.1.7. Two n × m matrices B and B ′ generate the same lattice L if and only if B and
B ′ are related by a unimodular matrix, i.e., B ′ = U B where U is an n × n matrix with integer
entries and determinant ±1.
Proof: (⇒) Every row of B′ is an integer linear combination

    b_i′ = ∑_{j=1}^{n} u_{i,j} b_j

with u_{i,j} ∈ Z.
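For full rank lattices, Lemma 17.1.7 suggests a simple test of whether two basis matrices generate the same lattice: compute U = B′B^{-1} and check that it is an integer matrix of determinant ±1. A rough floating-point sketch follows (exact integer or rational arithmetic would be preferable in practice; the matrices and the tolerance are illustrative choices):

    import numpy as np

    def same_lattice(B1, B2, tol=1e-9):
        """Heuristic check (full rank case) that B1 and B2 generate the same lattice:
        U = B2 * B1^{-1} must have integer entries and determinant +-1."""
        U = B2 @ np.linalg.inv(B1)
        if not np.allclose(U, np.round(U), atol=tol):
            return False
        return np.isclose(abs(np.linalg.det(U)), 1.0, atol=tol)

    B1 = np.array([[1.0, 0.0], [0.0, 1.0]])
    B2 = np.array([[3.0, 1.0], [2.0, 1.0]])   # = U B1 with det(U) = 1
    print(same_lattice(B1, B2))                # True
    print(same_lattice(B1, 2 * B1))            # False: 2Z^2 is a proper sublattice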
Definition 17.1.12. Let L be a lattice in R^m of rank n with basis matrix B. The Gram matrix
of B is BB^T. This is the matrix whose (i, j)-th entry is ⟨b_i, b_j⟩.
Lemma 17.1.13. Let L be a lattice in R^m of rank n with basis matrix B. Then det(L) = √(det(BB^T)).
Proof: As noted, we have det(L) = |det(BP)|. So det(L)^2 = det(BP) det((BP)^T) = det(BP P^T B^T).
Now, the (i, j) entry of BP P^T B^T is ⟨b_i P, b_j P⟩ and, since P preserves lengths and hence inner
products on V (cf. part (5) of Lemma 2.10.3), this equals ⟨b_i, b_j⟩. Hence BP P^T B^T = BB^T and
the result follows.
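As a small numerical illustration of Lemma 17.1.13 (the rank 2 basis in R^3 below is an arbitrary choice), the Gram matrix and the determinant can be computed as follows; note that the determinant need not be an integer even though the basis vectors are integral:

    import numpy as np

    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])     # a rank 2 lattice in R^3

    gram = B @ B.T                      # Gram matrix, entries <b_i, b_j>
    det_L = np.sqrt(np.linalg.det(gram))
    print(gram)                         # the 2 x 2 Gram matrix [[2, 1], [1, 2]]
    print(det_L)                        # 1.732... = sqrt(3), not an integer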
Note that an integer lattice of non-full rank may not have integer determinant.
Exercise 17.1.14. Find an example of a lattice of rank 1 in Z^2 whose determinant is not an integer.
Lemma 17.1.15. Let b_1, . . . , b_n be an ordered basis for a lattice L in R^m and let b_1^*, . . . , b_n^* be the
Gram-Schmidt orthogonalisation. Then det(L) = ∏_{i=1}^{n} ‖b_i^*‖.
Proof: The case m = n is already proved in Lemma 2.10.8. For the general case let v_i = b_i^*/‖b_i^*‖ be
the orthonormal basis required for the construction of the projection P. Then P(b_i^*) = ‖b_i^*‖ e_i. Write
B and B^* for the n × m matrices formed by the rows b_i and b_i^* respectively. It follows that B^* P is
an n × n diagonal matrix with diagonal entries ‖b_i^*‖. Finally, by the Gram-Schmidt construction,
B^* = U B for some n × n matrix U such that det(U) = 1. Combining these facts gives

    det(L) = |det(BP)| = |det(U BP)| = |det(B^* P)| = ∏_{i=1}^{n} ‖b_i^*‖.

(The formula BP = U^{-1}(B^* P) is the QR decomposition of BP.)
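Lemma 17.1.15 can also be checked numerically by computing the Gram-Schmidt vectors directly; the basis below is an arbitrary example and gram_schmidt is an illustrative helper, not a routine from the text:

    import numpy as np

    def gram_schmidt(B):
        """Return the Gram-Schmidt orthogonalisation b_1*, ..., b_n* (not normalised)."""
        Bs = []
        for b in B:
            bstar = b.astype(float)
            for prev in Bs:
                bstar = bstar - (np.dot(b, prev) / np.dot(prev, prev)) * prev
            Bs.append(bstar)
        return np.array(Bs)

    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
    Bstar = gram_schmidt(B)
    print(np.prod(np.linalg.norm(Bstar, axis=1)))      # product of the ||b_i*||
    print(np.sqrt(np.linalg.det(B @ B.T)))             # equals det(L) = sqrt(3)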
Definition 17.1.17. Let {b_1, . . . , b_n} be a basis for a lattice L in R^m. The orthogonality defect
of the basis is

    ( ∏_{i=1}^{n} ‖b_i‖ ) / det(L).
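As a quick illustration (with two arbitrary bases of Z^2), the orthogonality defect is 1 for an orthogonal basis and grows as the basis becomes more skewed:

    import numpy as np

    def orthogonality_defect(B):
        norms = np.linalg.norm(B, axis=1)
        det_L = np.sqrt(np.linalg.det(B @ B.T))
        return np.prod(norms) / det_L

    print(orthogonality_defect(np.array([[1.0, 0.0], [0.0, 1.0]])))   # 1.0
    print(orthogonality_defect(np.array([[1.0, 0.0], [10.0, 1.0]])))  # approx 10.05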
Exercise 17.1.18. Show that the orthogonality defect of {b1 , . . . , bn } is 1 if and only if the basis is
orthogonal.
It follows that 0 < λ_1 ≤ λ_2 ≤ · · · ≤ λ_n. In general there is not a basis consisting of vectors whose
lengths are equal to the successive minima, as the following example shows.
Example 17.1.20. Let n ∈ N and let L = {x ∈ Z^n : x_1 ≡ x_2 ≡ · · · ≡ x_n (mod 2)} be the set of
integer vectors whose entries all have the same parity.
It is easy to check that this is a lattice. The vectors 2e_i ∈ L for 1 ≤ i ≤ n are linearly independent
and have length 2. Every other vector x ∈ L with even entries has length ≥ 2. Every vector x ∈ L
with odd entries has all x_i ≠ 0 and so ‖x‖ ≥ √n.
If n = 2 the successive minima are λ_1 = λ_2 = √2 and if n = 3 the successive minima are
λ_1 = λ_2 = λ_3 = √3. When n ≥ 4 then λ_1 = λ_2 = · · · = λ_n = 2. For n ≤ 4 one can construct a
basis for the lattice with vectors of lengths equal to the successive minima. When n > 4 there is no
basis for L consisting of vectors of length 2, since a basis must contain at least one vector having
odd entries.
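The claims about the successive minima in this example can be verified by brute force for small n. The sketch below (successive_minima_bruteforce is an illustrative helper name) searches the box of lattice vectors with entries in {−2, . . . , 2}, which suffices here because all the minima are at most 2; it is a naive exhaustive search intended only as a sanity check:

    import itertools
    import numpy as np

    def successive_minima_bruteforce(n, box=2):
        """Naive search over small-coordinate vectors of the lattice
        L = { x in Z^n : all entries of x have the same parity }."""
        pts = [np.array(p) for p in itertools.product(range(-box, box + 1), repeat=n)
               if any(p) and len(set(c % 2 for c in p)) == 1]
        pts.sort(key=lambda v: float(v @ v))
        minima, chosen = [], []
        for v in pts:
            if np.linalg.matrix_rank(np.array(chosen + [v])) > len(chosen):
                chosen.append(v)
                minima.append(np.linalg.norm(v))
            if len(chosen) == n:
                break
        return minima

    for n in (2, 3, 4, 5):
        print(n, successive_minima_bruteforce(n))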
Exercise 17.1.21. For n = 2, 3, 4 in Example 17.1.20 write down a basis for the lattice consisting
of vectors of lengths equal to the successive minima.
Exercise 17.1.22. For n > 4 in Example 17.1.20 show there is a basis for the lattice such that
‖b_i‖ = λ_i for 1 ≤ i < n and ‖b_n‖ = √n.
Definition 17.1.23. Let L ⊆ R^m be a lattice and write V ⊆ R^m for the R-vector space spanned
by the vectors in L. The dual lattice of L is L^* = {y ∈ V : ⟨x, y⟩ ∈ Z for all x ∈ L}.
Exercise 17.1.24. Show that the dual lattice is a lattice. Let B be a basis matrix of a full rank
lattice L. Show that (B^T)^{-1} is a basis matrix for the dual lattice. Hence, show that the determinant
of the dual lattice is det(L)^{-1}.
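A numerical sanity check (not a proof) of the statements in Exercise 17.1.24, for an arbitrary full rank example:

    import numpy as np

    B = np.array([[2.0, 1.0],
                  [0.0, 3.0]])               # basis of a full rank lattice L in R^2
    D = np.linalg.inv(B.T)                   # candidate basis of the dual lattice L*

    print(B @ D.T)                           # the identity matrix, so <b_i, d_j> in Z
    print(np.linalg.det(B) * np.linalg.det(D))   # det(L) * det(L*) = 1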
Proof: See Theorem 1.5 of [393], Theorem 6.25 of [265], or Theorem 12.2.1 of [115].
Exercise 17.2.4. Show that Minkowski's theorem is tight. In other words, find a lattice L in R^n
for some n and a symmetric convex subset S ⊆ R^n such that the volume of S is 2^n det(L) and yet
S ∩ L = {0}.
Exercise 17.2.5. Show that, with respect to the ℓ_∞ norm, λ_1 ≤ det(L)^{1/n}. Show that, with respect
to the ℓ_1 norm, λ_1 ≤ (n! det(L))^{1/n} ≈ n det(L)^{1/n}/e.
Exercise 17.2.6. Let a, b ∈ N. Show that there is a solution r, s, t ∈ Z to r = as + bt such that
s^2 + r^2 ≤ 2b.
Definition 17.2.7. Let n ∈ N. The smallest real number γ_n such that

    λ_1^2 ≤ γ_n det(L)^{2/n}

for all lattices L of rank n is called the Hermite constant.
1. Let {b_1, b_2} be a Gauss reduced basis (see Definition 18.1.1 of the next Section) for a dimension
2 lattice in R^2. Define the quadratic form N(x, y) = ‖x b_1 + y b_2‖^2. Show that, without loss of
generality, N(x, y) = ax^2 + 2bxy + cy^2 with a, b, c ≥ 0 and a ≤ c.

2. Using N(1, −1) ≥ N(0, 1) (which follows from the property of being Gauss reduced), show
that 2b ≤ a. Hence show that 3ac ≤ 4(ac − b^2).

3. Show that det(L)^2 = |b^2 − ac|. Hence deduce that Hermite's constant satisfies γ_2 ≤ 2/√3.

4. Show that the lattice L ⊂ R^2 with basis {(1, 0), (−1/2, √3/2)} satisfies λ_1^2 = (2/√3) det(L).
(This lattice corresponds to the ring of integers of Q(√−3).)
Section 6.5.2 of Nguyen [425] lists the first 8 values of γ_n, gives the bound n/(2πe) ≤ γ_n ≤ n/(πe)
and gives further references.
We refer to Section 6.5.3 of [425] and Section 6.5.3 of [265] for discussion and references.
3. kernel lattice: Given an m × n integer matrix A, compute a basis for the lattice ker(A) =
{x ∈ Z^m : xA = 0}.
Exercise 17.3.1. Describe explicit algorithms for the above problems and determine their com-
plexity.
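As an illustration of the kernel lattice problem, here is a sketch of one possible approach: perform unimodular row operations on the augmented matrix [A | I_m] until the A-part is in echelon form; the rows whose A-part vanishes record a basis of ker(A). The helper names are illustrative, exact Python integers are assumed, and no attention is paid to coefficient growth, so this is not an answer to the complexity question in Exercise 17.3.1:

    def extended_gcd(a, b):
        """Return (g, s, t) with g = s*a + t*b and |g| = gcd(a, b)."""
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t

    def kernel_lattice_basis(A):
        """Basis of ker(A) = { x in Z^m : x A = 0 } for an m x n integer matrix A
        (given as a list of rows).  Unimodular row operations reduce the augmented
        matrix [A | I_m]; the I_m-parts of the rows whose A-part becomes zero form
        a basis of the kernel lattice."""
        m, n = len(A), len(A[0])
        M = [list(A[i]) + [int(i == j) for j in range(m)] for i in range(m)]
        row = 0
        for col in range(n):
            if row == m:
                break
            for i in range(row + 1, m):
                a, b = M[row][col], M[i][col]
                if b == 0:
                    continue
                if a == 0:
                    M[row], M[i] = M[i], M[row]      # swapping rows is unimodular
                    continue
                # g = s*a + t*b; the 2x2 block [[s, t], [-b//g, a//g]] has determinant 1.
                g, s, t = extended_gcd(a, b)
                M[row], M[i] = (
                    [s * M[row][k] + t * M[i][k] for k in range(n + m)],
                    [(a // g) * M[i][k] - (b // g) * M[row][k] for k in range(n + m)],
                )
            if M[row][col] != 0:
                row += 1
        return [M[i][n:] for i in range(m) if not any(M[i][:n])]

    # Example: integer relations x with x A = 0 for a small 3 x 2 matrix A.
    A = [[1, 2],
         [2, 4],
         [3, 5]]
    print(kernel_lattice_basis(A))   # e.g. [[-2, 1, 0]] (or another valid basis)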
1. The shortest vector problem (SVP) is the computational problem: given a basis matrix
B for L, compute a non-zero vector v ∈ L such that ‖v‖ is minimal (i.e., ‖v‖ = λ_1).
2. The closest vector problem (CVP) is the computational problem: given a basis matrix B
for L and a vector w ∈ Q^m (one can work with high-precision approximations in R^m, but this
is essentially still working in Q^m), compute v ∈ L such that ‖w − v‖ is minimal.
3. The decision closest vector problem (DCVP) is: given a basis matrix B for a lattice L,
a vector w ∈ Q^m and a real number r > 0, decide whether or not there is a vector v ∈ L such
that ‖w − v‖ ≤ r.
4. The decision shortest vector problem is: given a basis matrix B for a lattice L and a real
number r > 0, decide whether or not there is a non-zero v ∈ L such that ‖v‖ ≤ r.
5. Fix γ > 1. The approximate SVP is: given a basis matrix B for L, compute a non-zero
vector v ∈ L such that ‖v‖ ≤ γλ_1.
6. Fix γ > 1. The approximate CVP is: given a basis matrix B for L and a vector w ∈ Q^m,
compute v ∈ L such that ‖w − v‖ ≤ γ‖w − xB‖ for all x ∈ Z^n.
In general, these computational problems are known to be hard when the rank is sufficiently
large by the theory of computational complexity (we do not give details of complexity theory in this
book; in particular we do not define the term “NP-hard”). It is known that CVP is NP-hard (this is
shown by relating CVP with subset-sum; for details see Chapter 3 of [393]). Also, SVP is NP-hard
under randomised reductions and non-uniform reductions (see Chapter 4 of [393] for explanation of
these terms and proofs). Nguyen [425] gives a summary of the complexity results and current best
running times of algorithms for these problems.
On the other hand, if a lattice is sufficiently nice then these problems may be easy.
Example 17.3.3. Let L ⊂ R^2 be the lattice with basis matrix

    B = [ 1001    0 ]
        [    0 2008 ].
Then every lattice vector is of the form (1001a, 2008b) where a, b ∈ Z. Hence the shortest non-zero
vectors are clearly (1001, 0) and (−1001, 0). Similarly, the closest vector to w = (5432, 6000) is
clearly (5005, 6024).
Why is this example so easy? The reason is that the basis vectors are orthogonal. Even in
large dimensions the SVP and CVP problems are easy if one has an orthogonal basis for a lattice.
When given a basis which is not orthogonal it is less obvious whether there exists a non-trivial linear
combination of the basis vectors which gives a vector strictly shorter than the shortest basis vector.
A basis for a lattice which is “as close to orthogonal as it can be” is therefore convenient for solving
some computational problems.
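For instance, with the diagonal (hence orthogonal) basis of Example 17.3.3 the closest vector to w can be found by writing w = yB with y real and rounding y coordinate-wise; this rounding is exact precisely because the basis vectors are orthogonal (a sketch, using floating point for illustration):

    import numpy as np

    B = np.array([[1001.0, 0.0],
                  [0.0, 2008.0]])
    w = np.array([5432.0, 6000.0])

    # Write w = y B with y real, round y to the nearest integer vector, and map back.
    y = w @ np.linalg.inv(B)
    v = np.round(y) @ B
    print(v)                 # [5005. 6024.], the closest lattice vector to w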