Solns

Download as pdf or txt
Download as pdf or txt
You are on page 1of 38

Appendix B

Hints and Solutions to


Exercises

Chapter 1: Introduction
1.3.3: Encryption is deterministic so one can compare the challenge ciphertext c with
me0 (mod N ).
1.3.4: Given c, submit c′ = c2e (mod N ) to the decryption oracle to get 2m (mod N )
and hence compute m.
1.3.5: If it does not have semantic security there is some function f : Mκ → {0, 1}
which can be computed given a ciphertext, so choose messages m0 , m1 such that f (mi ) = i
and then one can break IND security.
1.3.7: A UF-CMA adversary is a randomised polynomial-time algorithm which takes
as input a public key pk for a signature scheme, is allowed to query a signing oracle on
messages of its choice, and outputs a message m and a signature s. The adversary wins
if the signature s satisfies the verification algorithm on message m and key pk and if
m was not one of the queries to the signing oracle. A scheme has UF-CMA security if
every adversary succeeds with only negligible probability (in the security parameter). See
Definition 12.2 of Katz and Lindell [334].
1.3.8: Yes, if the RSA problem is hard.
1.3.9: Choose random s and set m = se (mod N ).
1.3.10: Given m call signing oracle on 2e m (mod N ) to get s′ . Output s = s′ 2−1 (mod N ).

Chapter 2: Basic Algorithms


2.2.3: See Section 9.5.1 of Crandall and Pomerance [162], Section 3.5 of Shoup [556] or
Section 8.1 of von zur Gathen and Gerhard [238].
2.2.8: See Section 9.2.2 of [162], Sections 1.5.1 and 1.5.2 of Brent and Zimmer-
mann [100] or Section 1.7.1 of Cohen [136]. The complexity is O(M (log(a))),
2.2.9: Since e ≤ log2 (N ) the idea is to compute N 1/e for e = 1, 2, . . . , ⌊log2 (N )⌋ to
find the largest e such that N 1/e ∈ N.
2.4.4: See Section 5.9 of Bach and Shallit [22] or Algorithm 2.3.5 of Crandall and
Pomerance [162].
2.4.5: Show the algorithm is faster than computing gcd(m, n) using Euclid; the fun-
damental step is computing m mod n but the numbers are always at most as large as
those in Euclid’s algorithm.

605
606 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

2.4.6: Choose random 1 < a < p and compute ( ap ) until the Legendre symbol is −1.
Since the probability of success is at least 0.5 for each trial, the expected number of trials
is 2. This is a Las Vegas algorithm (it may never terminate, but the answer is always
correct). See the proof of Lemma 2.9.5 for further details.
2.4.9: This goes back to A. Cobham. See Shoup [557] or Bach and Sorensen [23] for
the analysis.
2.4.10: Computing Legendre symbols using quadratic reciprocity requires O(log(p)2 )
bit operations while computing a(p−1)/2 (mod p) needs O(log(p)M (log(p))) bit operations
(see Corollary 2.8.4). So using quadratic reciprocity is (theoretically) always better.
Q
2.5.5: First compute and store b2 , . . . , bm where bi = ij=1 aj , then b−1m , then, for
−1 −1 −1 −1
i = m downto 2 compute ai = bi−1 bi , bi−1 = ai bi . See Algorithm 11.15 of [16],
Section 2.4.1 of [100] or Section 2.2.5 of [274].
2.6.4:
2.8.6: For pseudocode of the algorithm see Algorithm IV.4 of [64] or Algorithm 9.10
of [16].
2.8.7: Consider a window of length w representing an odd integer. If the bits of m are
uniformly distributed then the probability the next most significant bit after the window
is 0 is 0.5. Hence Pthe expected number of zeroes before the first 1 is 0.5 · 0 + 0.25 · 1 +

0.125 · 2 + · · · = i=1 i/2i+1 = 1.
2.8.10: For pseudocode see Section 2.2 of Möller [432]. The problem is that there is
no fixed precomputation of powers of g.
2.8.11: The values mi in addition chain satisfy mi ≤ 2i .
2.9.8: See Adleman, Manders and Miller [3].
2.10.2: See Chapters 2 and 3 of von zur Gathen and Gerhard [238].
2.10.4: Assume that F (0) 6= 0 so that x is coprime to F (x). Replace R = 2k in the
description of Montgomery reduction in Section 2.5 by R = xd (mod F (x)). See Koç and
Acar [349] for details and discussion.
2.10.5: See Section 9.6.3 of [162].
2.12.1: Compute F ′ (x). If F ′ (x) = 0 then, by Lemma A.5.2, F (x) = G(x)p where
p = charFq , and F (x) is not square-free. Otherwise, compute gcd(F (x), F ′ (x)) and if
this is not 1 then F (x) is not square-free. The computation requires O(deg(F )2 ) field
operations.
2.12.2: Given F (x) ∈ Fq [x] compute F ′ (x). If F ′ (x) = 0 then F (x) = G(x)p and
repeat process on G(x). Otherwise, compute F1 (x) = F (x)/ gcd(F (x), F ′ (x)) and factor
F1 (x). Once the factors of F1 (x) are known then one can factor F (x)/F1 (x) efficiently by
testing divisibility by the known factors of F1 (x).
2.12.3: One computes xq (mod F (x)) in O(log(q)d2 ) field operations and a further
O(d2 ) operations are needed for the gcd computation. The answer to the decision problem
is “yes” if and only if deg(R1 (x)) > 0.
2.12.5: One is essentially doing “divide and conquer” to separate the deg(R1 (x)) ≤ d
roots. Each trial is expected to split R1 (x) into two polynomials of degree ≈ deg(R1 (x))/2.
Hence, an expected O(log2 (deg(R1 (x)))) = O(log2 (d)) trials are required to factor R1 (x).
Since each trial involves computing u(x)(q−1)/2 (mod R1 (x)) (which has complexity
O(log(q)M (d)) operations in Fq ) and then computing the gcd of polynomials of degree ≤ d
(which requires O(d2 ) operations in Fq ) the total expected cost is O(log(q) log(d)d2 ) field
operations. For a detailed analysis of the probabilities see Section 21.3.2 of Shoup [556].
2.12.10: If F (x) is not irreducible then it has a factor of degree ≤ deg(F (x))/2.
i
Hence it is sufficient that gcd(F (x), xq − x) = 1 for 1 ≤ i ≤ d/2. To compute all these
polynomials efficiently one computes s1 (x) ≡ xq (mod F (x)) and then gcd(F (x), s1 (x) −
x). For the next step compute s2 (x) = s1 (x)q (mod F (x)) and gcd(F (x), s2 (x) − x),
607

and so on. There are d/2 steps, each taking O(log(q)d2 ) field operations, so the overall
complexity is O(d3 log(q)). See Algorithm IPT of Section 21.1 of [556] or Section 3 of
[370].
b
2.12.11: One first computes gcd(xq − x, F (x)) in time O(b log(q)) operations on poly-
nomials of degree at most d. The rest of the algorithm is the same as the root finding
algorithm of Exercise 2.12.5 where we raise polynomials to at most the power q b .
m−1
2.14.1: Construct {θ, θq , . . . , θq } in O(m3 ) operations in Fq (using the fact that
q-th powering is linear) and then O(m3 ) operations in Fq for the Gaussian elimination.
Hence the complexity is an expected O(m3 logq (m)) bit operations.
2.14.4: Suppose θ2 = d. Given g = g0 + g1 θ we must solve for x, y ∈ Fq such that
(x + θy)2 = g0 + g1 θ. Expanding gives x2 + dy 2 = g0 and 2xy = g1 and one can eliminate
y to get 4x4 − 4g0 x2 + dg12 = 0 which can be solved using two square roots in Fq .
2.14.5: See Fong-Hankerson-López-Menezes [207].
2.14.10: Computing Sm requires O(m3 ) operations in Fq . Computing a linear depen-
dence among m vectors can be done using Gaussian elimination in O(m3 ) field operations.
2.14.11: Since 1 − 1/q ≥ 1/2 the number of trials is at most 2. The complexity is
therefore an expected O(m3 ) operations in Fq .
2.14.12: Expected O(m3 ) operations in Fqm .
2.14.13: The cost of finding a root of the polynomial F1 (x) of degree m is an expected
O(log(m) log(q)m2 ) operations in Fq and this dominates the cost of the isomorphism
algorithm in the forwards direction. The cost of the linear algebra to compute the inverse
to this isomorphism is O(m3 ). (If one repeats the algorithm with the roles of F1 and F2
swapped then one gets a field isomorphism from Fq [y]/(F2 (y)) to Fq [x]/(F1 (x)) but it is
not necessarily the inverse of the function computed in the first step.
Qm
2.15.1: Writing q − 1 = i=1 liei we have m = O(log(q)) and then computing g (q−1)/li
requires O(log(q)3 ) bit operations.
2.15.8: The naive solution requires ≈ 2/3 log2 (N ) group operations in the precom-
putation, followed by ≈ 3 21 log2 (N 2/3 ) = log2 (N ) group operations. The same trick can
be used in the first stage of Algorithm 4, giving ≈ 23 log2 (N ) group operations. But the
three exponentiations in the second stage are all to different bases and so the total work is
≈ 32 log2 (N ) group operations. The naive solution is therefore better. The naive method
becomes even faster compared with Algorithm 4 if one uses window methods.

Chapter 5: Varieties

5.1.3: V (y − x2 ), V (y 2 − x), V ((x − 1)3 − y 2 ).


5.1.4: If f (x, y) has no monomials featuring y then it is a polynomial in x only; take
any root a ∈ k and then {(a, b) : b ∈ k} ⊆ V (f ). For the remaining case, for each a ∈ k
then f (a, y) is a polynomial in y and so has at least one root b ∈ k.
5.1.7: Let z = x + iy ∈ Fp2 . Then z p = x − iy. Hence z p+1 = 1 if and only if
1 = (x + iy)(x − iy) = x2 + y 2 . If follows that the map x + iy 7→ (x, y) is a bijection from
G to X(Fp ). Showing that this is a group isomorphism is straightforward.
5.1.10: In X = A1 (Fp ) we have X = V (xp − x).
5.2.8: V (x2 + y 2 − z 2 )(R) is the same as the circle V (x2 + y 2 − 1)(R) ⊆ A2 (R). Over
C it would also contain the points (1 : ±i : 0).
V (yz − x2 ) is the parabola y = x2 in A2 together with the single point (0 : 1 : 0)
at infinity. Note that this algebraic set is exactly the same as the one in Example 5.2.7
under a change of coordinates.
5.2.21: (1 : 0 : 0), (0 : 1 : 0), (0 : 0 : 1).
608 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

5.2.35: We have X = V (f (x0 , x1 , x2 )) = V (f ). A point in X − X has x2 = 0 and at


least one of x0 , x1 6= 0. Hence, the points at infinity are in the union of (X ∩U0 )∩V (x2 ) =
{(1 : y : 0) : f (1, y, 0) = 0} and (X ∩ U1 ) ∩ V (x2 ) = {(x : 1 : 0) : f (x, 1, 0) = 0}. Both sets
are finite.
5.3.3: It decomposes as V (x, z) ∪ V (w − x, x2 − yz) and one can see that neither of
these sets is equal to X.
5.3.5: k[x, y]/(y − x2 ) ∼ = k[x] is an integral domain, so (y − x2 ) is prime.
5.3.6: If gh ∈ Ik (X) then g(f1 (t), . . . , fn (t))h(f1 (t), . . . , fn (t)) is a polynomial in t
which is identically zero on k. The result follows. See Proposition 5 of Section 4.5 of Cox,
Little and O’Shea [158] for details and a generalisation.
5.3.14: If U1 ∩ U2 = ∅ then X = (X − U1 ) ∪ (X − U2 ) is a union of proper closed sets.
5.4.6: We show that (f1 /f2 )(P ) − (f3 /f4 )(P ) = 0. By assumption f2 (P ), f4 (P ) 6= 0
and so it suffices to show (f1 f4 − f2 f3 )(P ) = 0. This follows since (f1 f4 − f2 f3 ) ∈ Ik (X).
5.5.7: Recall from Exercise 5.3.14 that U1 ∩ U2 6= ∅. Then apply the same arguments
as in the proof of Theorem 5.4.8.
5.5.9: The solution is not unique. An example of maps φ : X → Y and ψ : Y → X is
φ(x, y) = (x : 1 : 1) and ψ(x0 : x1 : x2 ) = (x0 /x1 , x1 /x0 ). One can check that φ ◦ ψ and
ψ ◦ φ are the identity when they are defined.
5.5.14: The line of slope t through (−1, 0) has equation y = t(x + 1) and hits X and
(−1, 0) and a point φ(t) = (x(t), y(t)) = ((1 − t2 )/(1 + t2 ), 2t/(1 + t2 )). Hence, define
φ : P1 → X by f ([t : u]) = ((u2 − t2 )/(u2 + t2 ), 2ut/(u2 + t2 )). The inverse morphism
ψ : X → P1 is ψ(x, y) = (y : x + 1) = (x − 1 : y); note that these two descriptions of ψ
are equivalent (multiply the former through by y/(x + 1)) and that the former is regular
away from (−1 : 0) and the latter regular away from (1 : 0). Since the slope of the line
between (−1.0) and (x, y) is y/(x + 1) it follows that ψ ◦ φ and φ ◦ ψ are the identity.
5.5.19: Write Z for the Zariski closure of φ(X) in Y . If Z = Z1 ∪ Z2 is a union of
closed sets then X = φ−1 (Z1 ) ∪ φ−1 (Z2 ) is also a union of closed sets so, without loss of
generality, φ−1 (Z1 ) = X and so Z1 contains the Zariski closure of φ(X).
5.5.26: Suppose θ(f ) = 0 for some f ∈ K1∗ . Then f f −1 = 1 and so 1 = θ(1) =
θ(f )θ(f −1 ) = 0 which is a contradiction.
5.6.5: A proof is given in Proposition 1.13 of Hartshorne [278]. We also give a proof
here: Let f ∈ k[x1 , . . . , xn ]. Without loss of generality suppose f contains at least
one monomial featuring xn . Then write f = fr xrn + fr−1 xr−1 n + · · · + f0 with fi ∈
k[x1 , . . . , xn−1 ] or k[x0 , . . . , xn−1 ]. One can check that the field k(X) contains a subfield
isomorphic to k(x1 , . . . , xn−1 ). Finally, k(X) is an algebraic extension of k(x1 , . . . , xn ).
5.6.6: k(X) = k[X] = k and so Ik (X) is a maximal ideal and so by the Nullstellensatz
X = {P }.
5.6.10: Let X be a variety of dimension 1 and let Y be a proper closed subset. Let
Y = ∪ni=1 Yi be the decomposition of Y into irreducible components. By Corollary 5.6.9
we have dim(Yi ) = 0. Exercise 5.6.6 implies #Yi (k) = 1 and so #Y (k) = n.
5.7.5: V (y12 − y22 , y1 y2 − 1).
2 2 2 2
5.7.6: V (y1,1 − y1,2 + y2,1 − y2,2 − 1, y1,1 y1,2 + y2,1 y2,2 − 1).

Chapter 6: Tori, LUC and XTR

6.1.4: If p ∤ n then
Y 
Φnp (x) = (xnp/d − 1)µ(d) (xn/d − 1)µ(dp)
d|n
609

and use µ(dp) = −µ(d). If p | n then


  
Y Y
Φnp (x) =  (xnp/d − 1)µ(d)   (xnp/d − 1)µ(d)  .
d|n d|np,d∤n

and note that if d | np but d ∤ n then p2 | d and so µ(d) = 0. The final statement follows
since, when n is odd, z n = 1 if and only if (−z)2n = 1.
6.3.2: To compute (u + vθ)(u′ + v ′ θ) compute uu′ , vv ′ and (u + v)(u′ + v ′ ). To square
(u + vθ) compute u2 , v 2 and (u + v)2 . One inverts (u + vθ) by computing (u + vθ)/(u2 −
Auv + Bv 2 ).
6.3.3: The first two parts are similar to Exercise 6.3.2. For inversion, note that
(u + vθ)−1 = (u − vθ)/(u2 + v 2 ). For square roots, note that (u + vθ)2 = (u2 − v 2 ) + 2uvθ
so to compute the square root of a + bθ requires solving these equations. We assume here
2 2
that a + bθ is a square, and so ( a +b q ) = 1. One finds that u4 − au2 − b2 /4 = 0 and so

u2 = (a ± a2 + b2 )2−1 . √Only one of the two choices is a square, so one must compute
2 2
the Legendre symbol ( a+ aq +b ) to determine the answer (taking into account that ( 2q )

is known from q (mod 8)). One then multiplies the appropriate (a ± a2 + b2 ) by 2−1 ,
which is easy (either shift right or add q and shift right). Taking the square root gives u.
An inversion and multiplication gives v.
6.3.15: See Williams [634].
6.3.17: The characteristic polynomial of g is (x − g)(x − g q ) = x2 − TrFq2 /Fq (g)x + 1
so TrFq2 /Fq (g) = TrFq2 /Fq (g ′ ) implies g ′ is a root of the same polynomial as g.
6.3.22: 34.
6.3.23: Exercises 6.3.2 and 6.3.3 give costs of 3 squarings in Fq for computing a
squaring in Fq2 and 3 squarings plus 3 multiplications in Fq for a square-and-multiply.
Lucas sequences give just one multiplication and squaring for each operation; only one
third the number of operations in the worst case.
6.4.5: Need q ≡ 2 (mod 3) or else ζ3 ∈ Fq . Also need q 2 6≡ 1 (mod 9) or else ζ9 ∈ Fq2 .
2
6.4.7: Consider 0 = f (a)q = a3q −tq a2q +tq aq −1 = (−1+tq a−q −ta−2q +a−3q )(−a3q ).
This shows that f (a−q ) = 0. Now, suppose f (x) is neither irreducible nor split completely.
Then f (x) = (x − u)g(x) where u ∈ Fq2 and g(x) is an irreducible quadratic over Fq2 . Let
2
a ∈ Fq4 − Fq2 be a root of g(x). Then a−q is the other root, but then (a−q )−q = aq = a
and we have a ∈ Fq2 .
6.4.10: See Lenstra and Verheul [375].
6.4.11: Squaring requires computing (t2n+1 , t2n , t2n−1 ) which is one iteration of each
formula in the previous exercise. Square-and-multiply requires computing (t2n+2 , t2n+1 , t2n )
which is twice formula 2 and once formula 3 above. Let M (respectively S, P ) be the
costs of a multiplication (respectively, squaring, q-th powering) in Fq2 . Then a squaring
(i.e., going from tn to t2n ) requires 4M + S + 5P while a square-and-multiply requires
2M + 2S + 4P . The reader is warned that in practice it is often more efficient to use other
ladder algorithms than traditional square-and-multiply, see Chapter 3 of Stam [579].
6.4.12: The trick is to represent tn using the triple (tn+1 , tn , tn−1 ) as above when n is
odd, but (tn+2 , tn+1 , tn ) when n is even. One then must use formula 1 of Exercise 6.4.10.
For example, if n is even and one is computing (t2n+2 , t2n+1 , t2n ) then the middle term
is computed as t2(n+1)−1 . See Lenstra and Verheul [375] for details.
6.4.13: t6 = 7 + 48i, t7 = 35 + 65i, t8 = 6 + 8i.
6.4.14: One can represent the equivalence class of g n by (tn , t−n ). For the ladder one
can store the 6-tuple (tn+1 , tn , tn−1 , t−n−1 , t−n , t−n+1 ) and compute an analogous 6-tuple
610 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

centered at (t2n , t−2n ) or (t2n+1 , t−2n−1 ). See Gong and Harn [259] and also Stam [579]
for further details.
6.4.15: See Shirase et al [551].
6.6.2: Modulo each prime pi there are usually two values h such that TrFp2 /Fpi (h) =
i
TrFp2 /Fpi (g). Hence, by the Chinese remainder theorem, there are 2k values for h in
i
general.
The second claim follows immediately, or one can prove it using the same argument
as the first part: modulo each prime pi , 2 + (pi − 1)/2 ≈ pi /2 values arise as the trace of
an element in Gpi ,2 .

Chapter 7: Curves and Divisor Class Groups


7.1.7: The first claim is immediate, since f ∈ OP implies f (P ) ∈ k is defined, and
(f − f (P )) ∈ mP . For the second claim, note that mP /m2P is an OP /mP -module, i.e.,
a k-vector space. Without loss of generality, P = (0, . . . , 0). and mP is the OP -ideal
(x1 , . . . , xn ). Then every monomial xi xj for 1 ≤ i, j ≤ n lies in m2P . Therefore every
element of mP is of the form a1 x1 + · · · an xn + g where ai ∈ k for 1 ≤ i ≤ n and g ∈ m2P .
Hence there is a map d : mP → kn given by

d(a1 x1 + · · · + an xn + g) = (a1 , . . . , an ).

7.1.9: x = y 2 − yx − x4 ∈ m2P so {y} is a basis. For the second example there is no


linear relation between x and y and so {x, y} is a basis.
7.1.15: Corollary 7.1.14: By Exercise 5.6.5 the dimension of X is d = n − 1, so
n − d = 1. Now, the Jacobian matrix is a single row vector and so the rank is 1 unless
the vector is the zero vector. Corollary 7.1.14 is just a restatement of this.
7.2.4: Putting z = 0 gives the equation x3 = 0 and so x = 0. Hence, (0 : 1 : 0) is the
only point at infinity. Taking the affine part by setting y = 1 gives the equation E(x, z) =
z +a1 xz +a3 z 2 −(x3 +a2 x2 z +a4 xz 2 +a6 z 3 ) and one can check that (∂E/∂z)(0, 0) = 1 6= 0.
7.3.7: Write φ(P ) = (φ0 (P ) : · · · : φn (P )) and let n = max{vP (φi ) : 0 ≤ i ≤ n}. Let
tP be a uniformizer at P . Then (t−n −n
P φ0 : · · · : tP φn ) is regular at P . See Proposition
II.2.1 of Silverman [564] for further details.
7.4.3: The first 3 statements are straightforward. The definitions also directly imply
that mv is an Rv -ideal. The proof that mv is maximal and that Rv is a local ring is
essentially the same as the proof of Lemma 7.1.2. Statement 5 follows from Statement 2.
7.4.6: The result follows since mP,k (C)m = k(C) ∩ mP,k′ (C)m for any m ∈ Z≥0 .
m+1
7.4.8: mm m m
P = (tP ) so f ∈ mP and f 6∈ mP is equivalent to f = tmP u where u ∈

OP − mP and hence u ∈ OP .
7.4.9: Write f = taP u and h = tbP v where u, v ∈ OP∗ and note that f h = ta+b P uv.
v (f )
7.4.12: Let f = f1 /f2 with f1 , f2 ∈ OP,k (C). By Lemma 7.4.7 we have f1 = tPP 1 u1
v (f2 )
and f2 = tPP u2 for some u1 , u2 ∈ OP,k (C)∗ . If vP (f ) = vP (f1 ) − vP (f2 ) > 0 then
v (f1 )−vP (f2 )
f = tPP u1 /u2 ∈ OP,k (C) and so P is not a pole of f . On the other hand, if P
v (f2 )−vP (f1 )
is a pole of f then 1/f = tPP u2 /u1 ∈ mP,k (C).

7.7.3: If f ∈ k then div(f ) = 0 and the result follows.
7.7.5: (5) follows immediately from part 4 of Lemma 7.4.14.
7.7.7: First, Prink (C) ⊆ Divk (C) since functions only have finitely many poles and
zeros. Second, if f is defined over k and σ ∈ Gal(k/k) then f = σ(f ) and so σ(div(f )) =
div(f ) and so Prink (C) ⊆ Divk (C). Finally, Prink (C) is closed under addition by Lemma 7.7.4.
611

7.7.9: Write f as a ratio of homogeneous polynomials of the same degree and then
factor them as in equation (7.8).
7.7.15: f /h ∈ k(C) has div(f /h) = 0 and so f /h ∈ k∗ .
7.9.6: See Cassels [122], Reid [497] or Silverman and Tate [567].

Chapter 8: Rational Maps on Curves and Divisors


8.1.2: Any non-constant morphism φ has a representative of the form (φ1 : φ2 ) where
φ2 is not identically zero, and so define f = φ1 /φ2 . That φ is a morphism follows from
Lemma 7.3.6.
8.1.10: k(x, y/x) = k(x, y).
8.1.11: k(x, y) = k(x2 , y)(x) = φ∗ (k(C2 ))(x), which is a quadratic extension.
8.2.3: See Proposition III.1.4 of Stichtenoth [589].
8.2.10: By Lemma 7.4.7, t is a uniformizer at Q if and only if vQ (t) = 1. The problem
is therefore equivalent to Lemma 8.2.9.
8.2.11: Since φ∗ (k(C2 )) = k(C1 ) it follows directly from the definition in terms of OP
and mP that if φ(P ) = Q and f ∈ k(C2 ) then vQ (f ) = vP (f ◦ φ).
8.2.16: Follows from Lemma 8.2.9.
8.3.12: The function can also be written φ((x : z)) = (1 : z 2 /x2 ). One has φ∗ (D) =
(1 : 1) + (1 : 0) − (0 : 1), φ∗ (D) = (i : 1) + (−i : 1) + 2(1 : 0) − 2(0 : 1), φ∗ φ∗ (D) = 2(−1 :
1) + 2(1 : 0) − 2(0 : 1) = 2D and φ∗ φ∗ (D) = (1 : 1) + (−1 : 1) + 2(1 : 0) − 2(0 : 1).
8.3.15: Follows from Exercise 8.2.17.
8.4.3: See Section 8.2 of Fulton [216] or I.4.5 to I.4.9 of Stichtenoth [589].
8.4.6: By Corollary 7.7.13 all non-constant functions have a pole. Hence Lk (0) = k.
Since div(f ) ≡ 0 the second result follows from part 6 of Lemma 8.4.2.
8.4.10: The dimensions are 1, 1, 2, 3, 4, 5, 6.
8.5.15: δ(xp ) = pxp−1 δ(x) = 0.
8.5.27: If ω1 = f1 dtP and ω2 ≡ ω1 then ω2 = f2 dtP with f1 ≡ f2 in k(C), so there
isn’t anything to show here.
For the second point suppose t and u are two uniformizers at P . Write ω = f1 dt =
f2 du, so that f1 /f2 = ∂u/∂t. We must show that vP (f1 ) = vP (f2 ). Since u ∈ mP,k (C) =
(t) we have u = ht for some h ∈ k(C) such that h(P ) 6= 0. Then ∂u/∂t = ∂(ht)/∂t =
h + t(∂h/∂t). It follows that vP (∂u/∂t) = 0 and so vP (f1 ) = vP (f2 ).
8.5.29: If ω ′ = f ω then div(ω ′ ) = div(ω) + div(f ).
8.6.3: An elliptic curve with only one point, e.g., y 2 + y = x3 + x + 1 over F2 .

Chapter 9: Elliptic Curves


9.1.1: It is clear that the formula gives the doubling formula when x1 = x2 , y1 = y2 . For
the other case it is sufficient to compute
  
y1 − y2 y1 + y2 + (a1 x2 + a3 )
x1 − x2 y1 + y2 + (a1 x2 + a3 )
and simplify to the above formula.
9.1.4: (These ideas originate in the work of Seroussi and Knudsen.) Dividing the
equation by x2P gives (yP /xP )2 + (yP /xP ) = xP + a2 + a6 /x2P and the result follows by
Exercise 2.14.7.
The formula for λQ is immediate from equation (9.2) and the formulae for xP and
yP also follow immediately. If P = [2]Q then xP = λ2Q + λQ + a2 for some λQ ∈ F2m
and so TrF2m /F2 (xP + a2 ) = 0. Since TrF2m /F2 (xP + a2 + a6 /x2P ) = 0 it follows that
TrF2m /F2 (a6 /x2P ) = 0.
612 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

Conversely, if TrF2m /F2 (a6 /x2P ) = 0 then one can solve for xQ ∈ F2m the equation
x2Q +xP +a6 /x2Q = 0. Further, TrF2m /F2 (xQ +a2 +a6 /x2Q ) = TrF2m /F2 (xQ +x2Q +a2 +xP ) =
0 so there is an element yQ ∈ F2m such that Q = (xQ , yQ ) ∈ E(F2m ). It then follows that
P = [2]Q.
Finally, the formulae for point halving follow from substituting yQ = xQ (λQ + xQ )
into the formula yP = (xP + xQ )λQ + xQ + yQ .
9.1.5: We have λ = (y1 /z1 − y2 /z2 )/(x1 /z1 − x2 /z2 ) = (y1 z2 − y2 z1 )/(x1 z2 − x2 z1 ).
Similarly, putting x1 /z1 , x2 /z2 and y1 /z1 in the addition formulae in place of x1 , x2 and
y1 gives the above formulae.
9.3.2: There is an inverse morphism which is a group homomorphism, also defined
over k by definition, such that the composition is the identity.
9.3.8: From E1 to E2 is just φ(x, y) = (x + 1, y + x). From E1 to E3 is φ(x, y) =
(x+s2 , y +sx+t) where s ∈ F24 satisfies s4 +s+1 = 0 and t ∈ F28 satisfies t2 +t = s6 +s2 .
9.4.5: For t ∈ F24 show that TrF24 /F2 (s6 + s2 ) = 1 + u(s4 + s + 1)2 = 0. Theorem 9.3.4
implies every element of Aut(E) is of this form. Since there are 24 choices for (u, s, t),
each giving a different function on E(F2 ), one has #Aut(E) = 24. The fact that Aut(E) is
non-Abelian follows can be shown as follows. Let φi (x, y) = (u2i x+ s2i , u3i y + u2i si x+ ti ) for
i = 1, 2. One can check that the x-coordinates of φ1 ◦ φ2 and φ2 ◦ φ1 are u21 u22 x + u21 s22 + s21
and u21 u22 x + u22 s21 + s22 respectively. These are not equal when, for example, u1 = 1,
s1 = ζ3 , u2 = ζ3 and s2 is arbitrary.
9.5.5: The first part of the exercise follows from Theorem 9.3.4. Points P = (xP , yP )
either satisfy a1 xP + a3 = 0 and so P = −P or a1 xP + a3 6= 0 and so yP is a solution to

y 2 + y = F (x)/(a1 x + a3 )2 .

By Exercise 2.14.7 this is soluble if and only if TrF2n /F2 (F (xP )/(a1 xP + a3 )2 ) = 0. Hence,
every xP ∈ F2n is such that either a1 xP + a3 = 0 (in which case both E and E e have a
single point each with x-coordinate xP ) or precisely one of y 2 + y = F (xP )/(a1 xP + a3 )2
and y 2 + y = F (xP )/(a1 xP + a3 )2 + α has a solution. The result follows.
9.5.6: If σ(φ−1 ) ◦ φ = 1 = φ−1 ◦ φ for all σ then σ(φ−1 ) = φ−1 for all σ and φ−1 is
defined over k.
9.5.8: Follow the method of Lemma 9.5.7; also see Proposition 1 of Hess, Smart and
Vercauteren [284].
9.5.9: We have j(E) = 0 and the only elliptic curves E/F e 2 with j(E) e = 0 in short
2 3
Weierstrass form are y + y = x + a4 x + a6 with (a4 , a6 ) ∈ {(0, 0), (0, 1), (1, 0), (1, 1)}.
One can verify by direct calculation that for any pair E1 , E2 of distinct curves in this
form, the isomorphism between them is not defined over F2 .
9.6.2: Let E2 : y 2 + H(x)y = F (x). If φ(x, y) = (φ1 (x, y), φ2 (x, y)) is a morphism then
so is −φ(x, y) = (φ1 (x, y), −φ2 (x, y) − H(φ1 (x, y)).
9.6.7: We know that [0] and [1] are isogenies and that the inverse of an isogeny is an
isogeny, so we may assume that n ≥ 2. Lemma 9.6.6 shows that the sum of two isogenies
is an isogeny. It follows by induction that [n] = [n − 1] + [1] is an isogeny.
9.6.10: Use the fact that [2]P = OE if and only if P = −P = ι(P ). Everything is
direct calculation except for the claim that the quartic has distinct roots, which follows
by observing that if x1 is a repeated root then the corresponding point (x1 , y1 ) would be
singular.
9.6.16: There are many proofs: using differentials; showing that ker(πq ) = {OE };
determining k(E)/πq∗ (k(E)). The statement of the degree follows by Lemma 9.6.13.
9.6.20: This is an immediate consequence of Theorem 9.6.18 (the fact that λ is an iso-
morphism comes from considering degrees). The details are also given in Proposition 12.12
of Washington [626].
613

9.6.22: We have (ψ − φ)b ◦ φ = [0] and so the result follows by the same argument using
degrees as in the proof of Lemma 9.6.11. One can also use Theorem 9.6.18.
9.6.25: This follows since ρ∗ is essentially ρ−1 (x, y) = (ζ3−1 x, y) = (ζ32 x, y).
9.6.29: A point of order 3 means [2]P = −P and the subgroup being rational means
either P is defined over k or P is defined over k′ /k and σ(P ) = −P for all non-trivial
σ ∈ Gal(k′ /k). It follows that k′ /k is quadratic. Translating x to 0 gives a point (0, v).
Since char(k) 6= 2 we assume E : y 2 = x3 + a2 x2 + a4 x + a6 and clearly v 2 = a6 . The
condition [2](0, v) = (0, −v) implies a2 = (a4 /2v)2 = a24 /(4a6 ) and re-arranging gives the
first equation. For the twist write w = a4 /2a6 , X = wx, Y = w3/2 y and A = a6 w3 . Then

X 3 +A(X+1)2 = w3 x3 +(A/a26 )(a6 wx+a6 )2 = w3 (x3 +(A/(w3 a26 ))((a4 /2)x+a6 )2 = w3 y 2 = Y 2 .

It is easy to check that this final equation is singular if and only if A = 0 or 27/4.
9.6.30: See Doche, Icart and Kohel [183].
e over a field k
9.7.6: Let φ(x, y) = (φ1 (x), cyφ1 (x)′ + φ3 (x)) be an isogeny from E to E

of characteristic 2. By definition, X = φ1 (x) and Y = cyφ1 (x) + φ3 (x) satisfy the curve
equation Y 2 + (e a3 )Y = X 3 + e
a1 X + e a2 X 2 + ea4 X + e
a6 . Now

Y 2 + (e a3 )Y = (cyφ1 (x)′ + φ3 (x))2 + (e


a1 X + e a3 )(cyφ1 (x)′ + φ3 (x))
a1 φ1 (x) + e

a3 = c(a1 x + a3 )φ1 (x)′ . Hence


a1 φ1 (x) + e
and e

Y 2 + (e a3 )Y = (cφ1 (x)′ )2 (y 2 + (a1 x + a3 )y) + φ3 (x)2 + (e


a1 X + e a1 φ1 (x) + e
a3 )φ3 (x).

It follows that φ3 (x) must satisfy the following quadratic equation over k(x)

φ3 (x)2 +c(e a3 )φ1 (x)′ φ3 (x)+(cφ1 (x)′ )2 (x3 +a2 x2 +a4 x+a6 )+φ1 (x)3 +e
a1 φ1 (x)+e a2 φ1 (x)2 +e
a4 φ1 (x)+e
a6 = 0.

Since k(x) is a field there are at most two possible values for φ3 (x).
9.8.3: [3]P = OE if and only if [2]P = −P . Hence, if P = (x, y) is such that [3]P = OE
then λ2 = 3x = 0 where λ = (3x2 + 2a2 x + a4 )/(2y) = (2a2 x + a4 )/(2y). The statements
all follow from this.
b = [t]
9.9.5: Use the fact that the dual isogeny of φ2 − [t]φ + [d] is φb2 − [t]φb + [d] since [t]
etc.
9.10.4: For each y ∈ Fp there is a unique solution to x3 = y 2 − a6 and so there are p
solutions to the affine equation. Counting the point at infinity gives the result.
9.10.5: Write the curve as y 2 = x(x2 + a4 ). Since −1 is not a square, for any x ∈ Fp
we have either x(x2 + a4 ) = 0 or exactly one of x(x2 + a4 ) and −x(x2 + a4 ) is a square.
Hence there are p solutions to the affine equation.
9.10.10: Maintain a pair of the form (ti , ti+1 ). So start with (t1 = t, t2 = t2 − 2q).
Given (ti , ti+1 ) we can compute (t2i , t2i+1 ) by first computing t2i and then t2i+1 using the
above formulae. To go from (ti , ti+1 ) to (t2i+1 , t2i+2 ) one computes t2i+1 = ti ti+1 − q i t
and t2i+2 = t2i+1 − 2q i+1 . One can then perform a ladder algorithm (working through the
binary expansion of n) as in Lemma 6.3.19.
9.10.11: The first statement is easy to verify by counting points. The second statement
follows since if m | n then Ea (F2m ) is a subgroup of Ea (F2n ) and so #Ea (F2m ) | #Ea (F2n ).
Also, if m ≥ 3 then #Ea (F2m ) > 4. Hence, it suffices to restrict attention to prime values
for n in the third part. The values for E1 are 5, 7, 11, 17, 19, 23, 101, 107, 109, 113, 163
and the values for E0 are 5, 7, 9, 13, 19, 23, 41, 83, 97, 103, 107, 131.
9.10.21: The possibilities are p + 1 ± 46 and p + 1 ± 60. One can test which arises by
choosing a random point P ∈ E(Fp ) and computing [p + 1 − t]P for each choice of t. One
finds that the orders are p + 1 + 46, p + 1 − 60, p + 1 − 46, p + 1 − 46 respectively.
614 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

9.10.22: The equation p = a2 + ab + b2 implies p = (a − bζ3 )(a − bζ32 ) and the


corresponding value for π + π as in Theorem 9.10.18 is (2a − b(ζ3 + ζ32 )) = 2a + b. To get
the other solutions one notes that the solutions to x2 + xy + y 2 = p are obtained from
(a, b) via the symmetries (a, b) 7→ (−a, −b); (a, b) 7→ (b, a) and (a, b) 7→ (b, a + b) which
corresponds to (a − bζ3 ) 7→ ζ3 (a − bζ3 ). One finds 12 pairs (a, b) and these give rise to the
6 values for t claimed.
9.11.1: The first claim is proved by induction, starting with t ≡ 0 (mod p). For the
second claim, recall that points in E[p] correspond to roots of the p-th division polynomial
and hence must be defined over some finite extension of Fq . But if E(Fqn ) has a point of
order p then #E(Fqn ) ≡ 0 (mod p).
√ √
9.11.10: We have (ignoring for the moment the easy case P (T ) = (T ± q)2 ) 1q P (T q) |
Φm (T 2 ) | T 2m − 1 = (T m − 1)(T m + 1) in R[x] for some m ∈ {1, 2, 3, 4, 6}. It follows (this
requires some work) that P (T ) | (T m − q m/2 )(T m + q m/2 ) in Z[x]. Hence, if α is a root
of P (T ) then αm = ±q m/2 ∈ Z. Similarly, #E(Fq ) = P (1) | (q m − 1).
9.11.14: Take any supersingular elliptic curve E1 over Fp with #E(Fp ) = p + 1 and
let E2 be a non-trivial quadratic twist.
9.12.6: The statements about (X1 , Z1 ) and (X2 , Z2 ) are immediate. For the others,
substitute Xn /Zn , Xm /Zm and Xm−n /Zm−n for x1 , x2 and x4 in Lemma 9.12.5.
9.12.7: The idea is to store (Xn , Zn , Xn+1 , Zn+1 ) (taking m = n+1 we have (Xm−n , Zm−n ) =
(xP : 1) and so the above formulae may be used). The exponentiation is done using
a “ladder algorithm” (analogous to Lemma 6.3.19) which computes either (X2n , Z2n ,
X2n+1 , Z2n+1 ) or (X2n+1 , Z2n+1 , X2n+2 , Z2n+2 ). Every step is therefore a doubling and
an addition. See Algorithm 13.35 of [16]. For the improved formulae see [436] or Section
13.2.3.a of [16].
9.12.8: One has
[2](x, y) = (Bλ2 − 2x − a2 , ⋆)
where λ = (3x2 + 2a2 x + a4 )/(2By) and where the formula for the y-coordinate is not
relevant here. Solving [2](x, y) = (0, 0) is solving Bλ2 = 2x + a2 and so 0 = (3x2 + 2a2 x +

a4p )2 − 4(x3 + a2 x2 + a4 x)(2x + a2 ) = (x2 − a4 )2 . Hence, x = ± a4 and the y-values are
± x(x2 + a2 x + a4 )/B as stated.
9.12.10: The point (1, 1) on (A + 2)y 2 = x(x2 + Ax + 1) has order 4.
9.12.17: Homogenising gives the equation (ax2 + y 2 )z 2 = z 4 + dx2 y 2 and setting z = 0
leads to the equation dx2 y 2 = 0 and hence the two points (1 : 0 : 0) and (0 : 1 : 0). To
show that (1 : 0 : 0) is singular it suffices to set x = 1 and show that the point (0, 0) on
(a + y 2 )z 2 = z 4 + dy 2 is singular, which is easy. The other case is similar.
9.12.23: Writing the curve equation as x2 = (1 − y 2 /(a − dy 2 ) the quadratic twist is
ux = (1 − y 2 )/(a − dy 2 ) which gives the result.
2

9.12.24: Follows directly from Theorem 9.12.12.


9.12.27: Irreducibility follows since y 2 − g(x) must factor as (y + g1 (x))(y + g2 (x)) and
it follows that g2 (x) = −g1 (x) (which is prohibited by a2 6= d). The point (0 : 1 : 0) on
y 2 z 2 = x4 + 2ax2 z 2 + z 4 is singular. An affine singular point on C must satisfy y = 0,
4x(dx2 + a) = 0 and dx4 + 2ax2 + 1 = 0, which again is impossible if a2 6= d.
The birational map is easy to verify: 2Y 2 = X(2X/x2 ) and X(X 2 − 2aX + (a2 − d)) =
X(a2 + 2a(y + 1)/x2 + (y + 1)2 /x4 − 2a2 − 2a(y + 1)/x2 + a2 − d) and the result follows
by substituting for y 2 . Finally, the discriminant of X 2 √ − 2aX + (a2 − d) is 4d so when d
is a square in k then there are three points (0, 0), (a ± d, 0) of order 2.
9.12.29: Let P be a point of order 2 and move P to (0, 0) so that the curve is Y 2 =
X(X 2 + Ax + B). Write a = −A/2 and d = a2 − B so that a twist of the curve is in the
form of equation (9.16). The result follows from Exercise 9.12.27.
615

Chapter 10: Hyperelliptic Curves


10.1.3: When we substitute z = 0 we get the equation HD−1 xD−1 y = FD xD where HD−1
(respectively, FD ) is the coefficient of the monomial xD−1 (resp., xD ) in H(x) (resp.,
F (x)). The possible solutions are x = 0, giving the point (x : y : z) = (0 : 1 : 0), or
HD−1 y = FD x giving the point (HD−1 : FD : 0).
For the second part, we have HD = 0 and so the only point at infinity is (0 : 1 : 0).
Making the curve affine by setting y = 1 yields z D−2 + z D−1 H(x/z) = z D F (x/z) and
when D > 3 every monomial has degree at least 2. Hence the two partial derivatives
vanish.
10.1.5: Make the change of variable y = Y + x3 + 1. Then

y 2 + (1 − 2x3 )y = Y 2 + 3Y − x6 + x3 + 2

and so the affine curve is isomorphic to Y 2 + 3Y = x − 1. This curve is birational to


P1 (taking the map (x, Y ) 7→ Y ) and hence, by Theorem 8.6.1 and Definition 8.6.2, the
curve has genus 0.
10.1.13: If H0 = F1 = F0 = 0 then (0, 0) ∈ C(k) is singular. (This also follows since
C † is birational to C and the genus is a birational invariant.)
10.1.20: 1. One way is (x, y) 7→ (y : xd : xd−1 : · · · : x : 1). The other way is
(Y : Xd : Xd−1 : · · · : X1 : X0 ) 7→ (Y, Xd /Xd−1 ).
2. The statement about ι is clear.
3. Identifying Xi = xi z d−i it follows that points at infinity (i.e., with z = 0) have
Xi = 0 when 0 ≤ i < d.
4. Set Xd = 1 to give the point (Y, Xd−1 , . . . , X0 ) = (α, 0, . . . , 0) on the affine curve

Y 2 + H(Xd−1 , . . . , X0 )y = F (Xd−1 , . . . , X0 )

together with various other equations, including Xi = X⌈(d+i)/2⌉ X⌊(d+i)/2⌋ for 0 ≤ i ≤ d−


2. One must determine the Jacobian matrix (see Theorem 7.1.12 and Corollary 7.1.13) at
the point (α, 0, . . . , 0). The number of variables is d+ 1 and the dimension is 1, so we need
to show that the rank is d. Note that each of the d−1 equations Xi = X⌈(d+i)/2⌉ X⌊(d+i)/2⌋
yields a row (0, 0, . . . , 0, 1, 0, . . . , 0) in the Jacobian matrix (with the 1 corresponding to
variable Xi ). Hence, the rank is at least d − 1. The curve equation yields the row

(2α + Hd , Hd−1 α − F2d−1 , 0, . . . , 0)

in the Jacobian matrix, so to complete the proof we must show that either 2α + Hd 6= 0
or Hd−1 α + F2d−1 6= 0. Note that at least one of {F2d , F2d−1 , Hd } is non-zero, otherwise
we would replace d by d − 1.
If char(k) 6= 2 and both terms are zero then α = −Hd /2 which implies that F2d =
−(Hd /2)2 and F2d−1 = −Hd Hd−1 /2 which violates the condition of Lemma 10.1.8. If
char(k) = 2 then we must consider the three cases in Lemma 10.1.6. The first case is
α = 0 and F2d−1 6= 0 and so Hd−1 α + F2d−1 6= 0. The second and third cases have Hd 6= 0
and so 2α + Hd = Hd 6= 0.
10.1.23: The statement v∞ (x) = −2 follows from the fact that vρ(∞) (Z) = 2 as in
Lemma 10.1.21. The second claim follows since ∞ = (1 : 0 : 0) and 1/x = c(y/xd )2 for
some constant c, in which case c(y/xd ) = xd−1 /y. The third claim is immediate from
Lemma 10.1.22.
10.1.25: Write d = g + 1. Note that

v∞ (y) = v∞ ((y/xd )xd ) = v∞ (y/xd ) + dv∞ (x).


616 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

Hence, if C is in ramified model then v∞ (y) = 1 − 2d = −(2d − 1) = −(2g + 1). Similarly,


if C is in split model then v∞+ (y) = 0 − d = −(g + 1).
10.1.26: vP (A − yB) + vP (A − (−y − H)B) = vP (A2 + HAB − F B 2 ) which is 2e when
P = ι(P ) and is e otherwise.
10.1.27: A uniformizer at ∞+ is Xd−1 /Xd if ι(∞+ ) 6= ι(∞+ ) and Y /Xd − α+ other-
wise.
10.1.28: The choice of the leading coefficient already implies deg(G+ (x)2 +H(x)G+ (x)−
F (x)) ≤ 2d − 1. Write G+ (x) = α+ xd + Gd−1 xd−1 + · · · + G0 . One can solve a linear
equation in Gd−1 so that the degree of G+ (x)2 + H(x)G+ (x) − F (x) is at most 2d − 2.
Continuing this way one solves a linear equation for each variable Gi to lower the degree
to at most d + i − 1. The result for G− (x) follows by starting with α− instead of α+ .
10.1.29: First note that (y − G+ (x))/xd = (y/xd − α+ ) + (1/x)G(1/x) for some
G(x) ∈ k[x] and so is zero at ∞+ . Since α+ 6= α− it follows that (y − G+ (x))/xd is
not zero at ∞− . Also, v∞− (y − G+ (x)) ≥ max{v∞− (y), v∞− (G+ (x))} = −d = −(g + 1)
Now, deg(G+ (x)2 + H(x)G+ (x) − F (x)) ≤ d − 1 = g so the affine effective divisor
div(y − G+ (x)) ∩ A2 has degree at most g. Since div(y − G+ (x)) must have degree 0 and
v∞− (y − G+ (x)) = −(g + 1) it follows that v∞+ (y − G+ (x)) ≥ 1.
One can also complete the proof directly from the equation

(y − G+ (x))(−y − H(x) − G+ (x)) = G+ (x)2 + H(x)G+ (x) − F (x),

the right hand side of which is a polynomial of degree at most g. It follows that the
negative integer v∞+ (y − G+ (x)) + v∞+ (−y − H(x) − G+ (x)) is at least −g.
10.2.4: This is similar to Exercise 9.5.5.
10.2.5: First take a root α of F (x) and move it to ∞. Then perform a change of
variable on the remaining roots so that one is 0, another is 1 and change y so that the
polynomial is monic.
10.2.7: Follows immediately from Theorem 10.2.1.
10.3.8: If P = (xP , yP ) 6= ι(P ) and (x − xP )eP ku(x) then the claim is immediate
from vP (u(x)) = eP and vP (y − v(x)) = vP ((y − v(x))(−y − H(x) − v(x)) = vP (v(x)2 +
H(x)v(x) − F (x)) ≥ vP (u(x)). If P = ι(P ) = (xP , yP ) then y − yP is a uniformizer at
P . We have vP (u(x)) = vP (x − xP ) = 2 and since v(x) = yP + (x − xP )G(x) for some
polynomial G(x) ∈ k[x] we have vP (y − v(x)) = vP ((y − yP ) − (x − xP )G(x)) = 1.
10.3.11: The first statement follows from the following two facts: (1) ifP D is effective
then so is σ(D);Q(2) σ(ι(P )) = ι(σ(P )). The second Q follows since σ(D)Q= P nP (σ(P ))
and if u(x) = P (x − xP )nP then σ(u(x)) = P (x − σ(xP ))nP = P (x − xσ(P ) )nP
etc. The third statement follows from the second and the uniqueness of the Mumford
representation.
10.3.12: Since the ui (x) are co-prime the composition does not have any special cases
and the equality of divisors is clear. This can also be seen by considering points over k.
10.3.15: Note that ι(6, 4) = (6, 4) and so 2D1 ≡ 2(0, 4). The answers are (x2 +
5x, 4), (x2 − x, −3x + 4), (x2 , 10x + 4), (x4 + 4x3 + 6x2 , 4 + 10x + 6x2 + 3x3 ).
10.3.16: All polynomials are of degree O(m) and multiplication, computing the ex-
tended gcd, and division can be performed
P in O(m2 ) fieldPmultiplications.
10.4.2: The inverse of D = P nP (P ) is ι∗ (D) = P nP (ι(P )). Hence, since the
points P in the divisor class represented by (u(x), v(x)) are of the form (xi , v(xi )) where
u(xi ) = 0 the corresponding points for the inverse of D are (xi , −v(xi ) − H(xi )). The
result follows.
10.4.5: Just re-arrange the equation D3 − d3 (∞) ≡ D1 − d1 (∞) + D2 − d2 (∞).
10.4.8: One has v∞− (y −v ‡ (x)) = v∞− (y −G+ (x)) = −d. The affine part of the second
claim is already proved in the proof of Lemma 10.4.6. Since deg(div(y − v ‡ (x))) = 0 it
617

follows that v∞+ (y − v ‡ (x)) = −(du + du† − d). (There is also a direct proof of this fact.)
10.4.15: This is very similar to the proofs of Lemma 10.4.6 and Lemma 10.4.11.
Let du = deg(u(x)). The leading d − (du − 1) coefficients of v ‡ (x) agree with G+ (x)
and hence deg(v ‡ (x)2 + H(x)v ‡ (x) − F (x)) ≤ 2d − (d − (du − 1)) = d + du − 1. It follows
that deg(u† (x)) ≤ d − 1. Since v∞− (y − v ‡ (x)) = −d and since div(y − v ‡ (x)) has degree
0 one has v∞+ (y − v ‡ (x)) = −(deg(u(x) + deg(u† (x)) − d). The result follows from the
analogue of equation (10.17).
10.4.17: Clearly,

ι∗ (D + n(∞+ ) + (g − du − n)(∞− ) − D∞ )
= ι∗ (D) + n(∞− ) + (g − du − n)(∞+ ) − ι∗ (D∞ ))

When g is even then ι∗ (D∞ ) = D∞ and the result is immediate by Exercise 10.4.2. When
g is odd note that ι∗ (D∞ ) = D∞ + (∞− ) − (∞+ ) and so

ι∗ (D + n(∞+ ) + (g − deg(u(x)) − n)(∞− ) − D∞ )


= ι∗ (D) + (n − 1)(∞− ) + (g − deg(u(x)) − n + 1)(∞+ ) − D∞ .

If n = 0 then this divisor is not effective and so it is necessary to perform composition


and reduction at infinity.
10.4.20: This is the special case D1 = D2 = 0, n1 = 0, n2 = ⌈g/2⌉ + 1 of Theo-
rem 10.4.19, since D = (1, 0, 0) is principal.
10.4.21: This follows from Theorem 10.4.19.
10.4.22: Let ρP (x, y) = (1/(x − xP ), y/(x − xP )g+1 ) map P to ∞+ . If div(f (x, y)) =
n((P ) − (ι(P )) then div(f ◦ ρ−1 +
P ) = n((∞ ) − (∞ )).

10.5.2: By Lemma 8.1.3 and Theorem 8.2.7 we know φ is surjective. Hence, for any
P ∈ E(k) choose any point P ′ ∈ C(k) such that φ(P ′ ) = P and let P ′′ ∈ C(k) be such
that φ(P ′′ ) = OE . Then φ∗ ((P ′ )−(P ′′ )) = (P )−(OE ) as required. The second statement
follows since the kernel of φ∗ is contained in the kernel of φ∗ φ∗ = [deg(φ)].
10.6.3: See Mumford [445] pages 3.31-3.32.
10.7.7: An elegant proof using power series is given in Exercises 17.19 and 17.20 of
Shoup [556].
10.7.8: One has t1 = 0, t2 = −42, t3 = 0. Hence a1 = 0, a2 = 21 and a3 = 0. It follows
that #Pic0F7 (C) = 512.
10.7.14: First, one needs to count various types of points in C(Fq ) and C(Fq2 ). Note
that there is a single point at infinity ∞ on C(Fqn ) for all n ∈ N. Write m for the number
of roots of F (x) in Fq . Let u = 21 (N1 − 1 − m) = 21 (q − t1 − m). Then there are u values
for xP ∈ Fq such that there are points P = (xP , yP ) ∈ C(Fq ) with P 6= ι(P ). Hence
#C(Fq ) = 1 + m + 2u. There are q possible values for xP ∈ Fq . We have shown that
m + u of them arise as x-coordinates of points in C(Fq ) (i.e., F (xP ) is a quadratic residue
for those values). For the remaining q − m − u values xP it follows that F (xP ) is not a
square in Fq . But F (xP ) is a square in Fq2 so xP must arise as the x-coordinate of a point
in C(Fq2 ). Hence there are q − m − u = 12 (q + t1 − m) such values for xP . Hence, there
are 21 (q − m + t1 ) points P = (xP , yp ) ∈ C(Fq2 ) with xP ∈ Fq , yP 6∈ Fq and P 6= ι(P ).
Exercise 10.4.4 classifies divisor classes so it is sufficient to count the number of rep-
resentatives for each case. Case 1 gives N1 divisor classes.
For case 2, let m be the number of roots of F (x) in Fq . Therefore, there are m + 1
points P ∈ C(Fq ) such that P = ι(P ). Hence, case 2 gives N1 − 1 − m divisor classes.
For case 3, we first count the set of pairs (P, Q) such that P, Q 6= ∞, and Q 6∈ {P, ι(P )}.
There are N1 − 1 choices for P , and for each there are either N1 − 2 or N1 − 3 choices
for Q (depending on whether P = ι(P ) or not). Finally, since in this case we always have
618 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

P 6= Q, the number of choices for {P, Q} is half the number of pairs (P, Q). Hence the
total number of divisor classes in case 3 is
1
2 ((N1 − m − 1)(N1 − 3) + m(N1 − 2)).

We finally consider case 4 of Exercise 10.4.4. Note that K = Fq2 is the unique quadratic
extension of Fq . We have P = σ(P ) if and only if P ∈ C(Fq ). Hence, the number of
choices for {P, σ(P )} is 21 (N2 − N1 ). From these choices we must subtract the number
of pairs {P, σ(P )} such that σ(P ) = ι(P ). This happens when xP ∈ Fq but yP 6∈ Fq and
by the above argument there are 21 (q − m + t1 ) such points. Hence, the total number of
divisor classes in case 4 is
1 2
2 (q + 1 − t2 − (q + 1 − t1 ) − (q − m + t1 )) = 21 (q 2 − 2q − t2 + m).

Adding up all the cases gives

(q + 1 − t1 ) + (q − t1 − m) + 12 (q 2 − q(2t1 + 2) + t21 + 2t1 + m) + 21 (q 2 − 2q − t2 + m)

which simplifies to
q 2 − t1 q + (t21 − t2 )/2 − t1 + 1
as required.
10.9.4: This proof follows the same layout as Lemma 9.11.8 and Corollary 9.11.9. The
details are given in [218].
10.9.6: We have p | a1 and p | a2 , which is equivalent to the conditions for supersin-
gularity.
10.9.7: First, simply show that #C(F2nk ) is always odd and so t1 and t2 (notation
as in Lemma 10.7.6) are even. Hence a1 , a2 are even (for the details of this calculation
see Lemma 2 of [218]). The result then follows from part 1 of Theorem 10.9.5 and
Exercise 10.9.6.

Chapter 11: Basic Algorithms for Algebraic Groups


11.1.2: If a is odd at any stage then the new value (a − ai ) ≡ 0 (mod 4) so the non-zero
coefficient ai is followed by ai+1 = 0.
11.1.6: 100 = 10100100 etc.
11.1.9: The largest possible NAF of length l + 1 is 2l + 2l−2 + 2l−4 + · · · and taking
dl = 2l−2 + 2l−4 + · · · + 1 when l is even and dl = 2l−2 + 2l−4 + · · · + 2 when l is odd
gives the formula for dl . If al = −1 then a = −2l + dl < 0, which is a contradiction. The
minimal value for a is therefore 2l − dl . One can check that 2l + dl + 1 = 2l+1 − dl+1 .
Finally, 2l+1 /3 < 2l − dL ≤ a so l ≤ log2 (a) − log2 (2/3) ≈ log2 (a) + 0.585. Since the
binary expansion of a requires ⌊log2 (a)⌋ + 1 bits the result follows.
11.1.15: Write Nk for the number of NAFs of length k (so k = l + 1 in the above
notation). We have N1 = 3 and N2 = 5. Also Nk+1 = Nk + 2Nk−1 from which the
formula follows.
11.1.21: Essentially the same proof as Lemma 11.1.8. See Proposition 2.1 or Muir and
Stinson [442].
11.1.23: Proposition 2.4 of Muir and Stinson [442].
11.1.26: 300070007 and 10010001001.
11.2.2: There are 2n values ub1 ,...,bn to compute: n + 1 of them come for free (corre-
sponding to the vectors (b1 , . . . , bn ) that are all zero or have only one non-zero entry) and,
if the computation is organised appropriately, each of the others can be obtained from
619

an earlier value using one additional multiplication. The second claim of the exercise is
clear. Qn
11.2.3: For windows of length w one must precompute all ub1 ,...,bn = i=1 gibi for
0 ≤ bi < 2w under the constraint that at least one of the bi is odd. The number of such
values is 2nw − 2n(w−1) of which n come for free. Hence the precomputation requires
2nw − 2n(w−1) − n multiplications (actually, when w > 1 the computation can be arranged
so that some of these are squarings). See Yen, Laih and Lenstra [638] for details.
11.2.4: See Algorithm 3.51 of [274].
11.3.1: Given P = (xP , yP ) compute a solution to λ2Q + λQ = xP + a2 , determine
whether the corresponding value of x2Q satisfies the trace condition (and if not, replace
λQ by λQ + 1), compute a square root (which is easy in finite fields of characteristic two),
and solve for yQ . See Knudsen [342] for details.
11.3.4: The cost depends on the number of non-zero values for ai , which is the weight.
The number of elliptic curve additions is one less than the weight. We ignore the cost of
computing the Frobenius maps.
11.3.10: Let n0 = 107 and n1 = 126 so a0 = −1 and the new values for (n0 , n1 ) are
(180, −54). The Frobenius expansion is
−1 − π 5 − π 7 − π 9 + π 11 + π 13 + π 16 .
11.3.13: P ∈ E(Fpm ) if and only if (πm − 1)P = OE .
11.3.16: It is clear that P is in the kernel of the endomorphism corresponding to this
polynomial. Now, note that (x19 − 1)/(x − 1) ≡ −457 + 314x (mod x2 − x + 2) and that
−457 + 314π has norm r. Rounding n(−457 + 314π̄)/r gives −67 − 148π from which we
get
n − (−67 − 148π)(−457 + 314π) = −107 − 126π
as was found using the GLV method. P
11.3.19: One first precomputes all w−1 j
j=0 [aj ]π (P ). To do this one computes all [aj ]P ,
which needs one doubling and (when q is odd) (q − 5)/2 additions or (when q is even)
q/2 − 2 additions. One computes [aj ]π j (P ) as π j ([aj ]P ) and we ignore the cost of this.
To compute all the terms requires at most q w−1 additions. Hence, the total cost of
precomputation is at most q/2 + q w−1 group operations.
The exponentiation itself then requires at most (l + 1)/w additions.
11.3.20: First compute integers a′0 , a′1 such that a′0 + a′1 π ≡ a(π) (mod π 2 − tπ + q)
(this can be done efficiently using Lucas sequences). Then find the eigenvalue λ for π by
computing gcd(x2 − tx + q, xm − 1). Finally, compute a ≡ a′0 + a′1 λ (mod r). See Brumley
and Järvinen [112].
11.3.23: Analogous to the method at the end of Section 11.3.2.
11.3.24: The first claims are immediate since #E(Fp2 ) = (p + 1 − t)(p + 1 + t). The
map ψ is an endomorphism since ψ, π and φ−1 are endomorphisms (alternatively, because
ψ is a morphism fixing OE ′ ). One can directly compute that ψ 2 = [−1]. The formula
ψ 2 − [t]ψ + [p] = 0 follows since π satisfies this formula and ψ is just a conjugation of π.
Since r2 ∤ #E ′ (Fp2 ) it follows that ψ(P ) ∈ hP i and so λ exists. The formula for λ follows
from solving the simultaneous equations λ2 ≡ −1 (mod r) and λ2 − tλ + p ≡ 0 (mod r).
11.4.7: Let S be the set of integer k-tuples (a1 , . . . , ak ) with 0 ≤ ai < mi for 1 ≤ i ≤ k.
Let
Q Tai be Q the set of (a1 , . . . , ak ) ∈ S in such that the value of equation (11.2) is 1. Then
bi
g
i i = i gi if and only if the vector with entries ai − bi (mod mi ) for 1 ≤ i ≤ k lies
in T . It follows that #G = #S/#T and that the method samples uniformly from the
group G.
11.4.9: A simple solution is to choose a, b ∈ Fq uniformly at random and reject if
a2 − ab + b2 = 1. This happens with probability roughly 1/q. One can then use the
620 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

decompress map to obtain an element of G6,q . This process misses several points in G6,q ,
such as the element 1 and the element θ2 of order 3 (which corresponds to the point
(0, 0, 0) not appearing in the image of the map g in Example 6.4.4).
11.4.10: The expected number of trials to get a soluble quadratic equation is roughly

2 (precisely, there are at least (q − 2 q)/2 values for x0 such that y 2 + H(x0 )y − F (x0 ) is a

square in Fq and so the probability to be successful is at least (q − 2 q)/(2q) ≈ 1/2). One
solves the quadratic equation either by computing square roots using Exercise 2.14.3 (ex-
pected complexity of O(log(q)2 M (log(q))) bit operations) or, in the case of characteristic
2, by Lemma 2.14.8 (complexity O(log(q)3 ) bit operations). The goto statement in line
14 occurs with probability at most 12 q3 < 12 and so the expected number of repeats is less
than 2. Hence, the algorithm for finding random points on elliptic curves has expected
complexity O(log(q)4 ) bit operations.
11.4.12: See Section 5 of Shallue and van de Woestijne [545].
11.4.13: The first claim is checked by an elementary calculation. For the second claim
equate y − ux = v = (3A − u4 )/(6u) and compute the roots in Fq of the polynomial
6u(y − ux) − (3A − u4 ).
11.4.17: Set l′ = l/n and use the method of Example 11.4.2 n times with l′ -bit strings.
11.5.2: When the algorithm ends, the points P and Q generate the subgroup of E(Fq )
of order N0 . One simply adds to P a point R of order N1 (found using an analogue of
Algorithm 5).
11.5.3: If the group is cyclic then the probability that a random element is a generator
is ϕ(le )/le = 1 − 1/l. Hence, the probability that among two elements at least one of
them is a generator is 1 − (1 − (1 − 1/l))2 = 1 − 1/l2 .
If the group is not cyclic, suppose its structure is (Z/lf Z) × (Z/le−f Z) where 1 ≤
f ≤ e − f . Consider the projection of points {P, Q} to elements {(p1 , p2 ), (q1 , q2 )} of this
group. The points generate E(Fq )[le ] if and only if the vectors {(p1 , p2 ), (q1 , q2 )} are a
basis for the vector space (Z/lZ)2 . Hence, one needs the first vector to be non-zero (this
occurs with probability 1−1/l2) and the second vector to not lie on the line corresponding
to the first vector (this occurs with probability 1 − 1/l).
The probability of success overall is therefore the product for all primes li , which gives
the stated formula. Finally, ϕ(N0 )/N0 > 1/(3 log(log(N0 ))) = O(1/ log(log(q))) and
Y  Y 
1 − l12 > 1 − l12 = ζ(2)−1 = 6/π 2 = O(1)
l|N0 l

where the second product is over all primes l. Hence one expects O(log(log(q))) iterations
overall.
For generalisations and more detail see Section 6.1 of Miller [429].
11.6.2: Use point halving to attempt r − 1 halvings and then check whether the point
can be halved further over Fq (this is only possible if the original point has odd order).
11.7.2: First, we know that Tr(xP ) = Tr(a2 ) = 0. We may assume that xP 6= 0 and
so
(yP /xP )2 + (yP /xP ) = xP + a2 + a6 /x2P .
√ √
Hence, Tr(a6 /x2P ) = 0 and so Tr( a6 /xP ) = 0. It is immediate that (0, a6 ) has order
2, and since Tr(0) = 0 it follows that it can be halved in E(F2n ). The statement about

(xP , yP ) + (0, a6 ) follows from the formulae for the group law. Since (xP , yP ) and

(0, a6 ) can both be halved, their sum can also be halved; this also follows directly since

Tr( a6 /xP ) = 0.

The receiver gets an n − 1 bit string representing either xP or a6 /xP . The receiver

then computes the corresponding value z ∈ {xP , a6 /xP }. One can verify that, in both
621

cases, Tr(z + a2 + a6 /z 2 ) = 0. Hence, one can solve u2 + u = z + a2 + a6 /z 2 for u ∈ F2n



such that Tr(u) = 0. If z = xP then u is yP /xP . If z = a6 /xP then u is

yP a6
+ + xP + 1,
xP xP
which does satisfy Tr(u) = 0. In both cases, one determines the corresponding value y as
y = uz.
It remains to determine the two cases. Suppose 2i k#E(F2n ). Then P can be halved
√ √
at least i times while (0, a6 ) can only be halved i − 1 times. It follows that P + (0, a6 )
can only be halved i − 1 times. Hence, if one can repeatedly halve (z, y) at least i times

then set xP = z and yP = xP u. Otherwise, set xP = a6 /z and solve for yP subject to
Tr(yP /xP ) = 1. For further discussion see King [339].
11.7.3: Use action of 2-power Frobenius on xP (represented using a normal basis) so
that the least significant bits are a predictable bit pattern (for example, the longest run
of ones followed by a zero). Since the expected length of the longest run of ones is roughly
log2 (n) one can expect to save this many bits. One can also combine this method with
the Seroussi trick of Example 11.7.1. For details see Eagle, Galbraith and Ong [188].

Chapter 12: Primality Testing and Integer Factorisation using Algebraic


Groups
12.1.4: See Section 3.6 of Crandall and Pomerance [162].
12.1.5: (Gordon [262]) Let E : y 2 = x3 + x which has N + 1 points over Z/N Z when
N is prime. So√choose a random point P ∈ E(Z/N Z) (by choosing a random x and
computing y = x3 + x using the Tonelli-Shanks algorithm) and compute [N + 1]P and
see if it is OE . If anything goes wrong then N is not prime.
12.1.6: If N is prime then the elliptic curve E : y 2 = x3 + a4 x has N + 1 ± 2a
points for some integer a such that p = a2 + b2 (there are four possible group orders); see
Example 9.10.20. Hence, given N one can run the Cornacchia algorithm to compute a
and b, choose a random point P ∈ E(Z/N N), and test whether [N + 1 ± 2a]P = OE ; if
anything goes wrong then one deduces that N is composite.
12.1.8: Since N − 1 = 24 · 35 the Miller-Rabin sequence for the base 2 is

263, 166, 67, 1, 1.

The fact that 67 6≡ −1 (mod 561) implies that 561 is composite.


Indeed, the non-trivial solution to x2 ≡ 1 (mod N ) leads to a partial factorisation
of N via x2 − 1 = (x + 1)(x − 1) ≡ 0 (mod N ). In this case, gcd(67 − 1, 561) = 33.
Unfortunately the Miller-Rabin test is not a good factoring algorithm since the condition
aN −1 6≡ 1 (mod N ) is usually not satisfied.
12.2.4: Choose q, then find x0 such that Φk (x0 ) ≡ 0 (mod q), then choose p = x0 + lq
for suitable l. See Section 3.1 of Lenstra and Verheul [375].
12.2.5: A solution is Gordon’s algorithm; see Algorithm 4.53 of [418].
12.2.7: Theorem 4.1.1 of [162].
12.2.9: Theorem 4.1.3 of [162].
12.2.10: Generate a random prime q together with certificate (using, for example,
Maurer’s algorithm [403]) and determine if p = 2q + 1 is prime. It is easy to form a
certificate for p.
12.3.6: Since there are B/ log(B) primes we can save a log(B) term as long as one can
find the next prime in O(log(B)M (log(N ))) bit operations; in practice this is easily done
(e.g., using a sieve or Miller-Rabin).
622 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

12.3.7: List the primes B < Q1 < Q2 < · · · < Qk ≤ B and use the fact that
bQi+1 ≡ bQi bQi+1 −Qi (mod N ). One precomputes all the increments bQi+1 −Qi (mod N ).
12.3.8: Apply Theorem 2.16.1.
12.3.9: See Williams [634] (note that he credits Guy and Conway for the suggestion).
12.5.1: Exercise 2.2.9 showed that one can determine if N is a prime power (and factor
it) in polynomial time. So suppose N has at least two distinct prime factors. If p2 | N
then either p < N 1/3 or N/p2 < N 1/3 . Use the Pollard-Strassen method with B = ⌈N 1/6 ⌉
to find all prime factors of N that are less than N 1/3 in Õ(N 1/6 ) bit operations. Then
test whether the remaining unfactored part of N is a perfect power.

Chapter 13: Basic Discrete Logarithm Algorithms


13.0.3: Split N using Miller-Rabin style idea once know order of random g modulo N .
13.2.7: First show, using calculus, that f (x) = x/ log2 (x) is monotonically increasing
for x ≥ 2.7183. Hence deduce that if B ≥ 4 and 2 ≤ x ≤ B then x ≤ B log2 (x)/ log2 (B).
Finally, prove the result
p using induction on n. √
13.3.3: Set m = ⌈ r/2⌉ and loop giant steps up to ⌈ 2r⌉.
13.3.4: Let X be the random variable max{x, y} over (x, y) ∈ [0, r]2 . If 0 ≤ t ≤ r then
Pr(X ≤ t) is the area of the box [0, t]2 divided by the 2
Prarea of [0, r] , i.e.,
2 2
R r t /r 2 . Hence,
2 2 2
Pr(X > t) = 1−t /r and the expected value of X √ is t=0 Pr(X > t) ≈ 0 (1−t /r )dt =
2r/3. Hence, choosing
√ both tables to have size r gives an algorithm with average-case
running time 2 32 r group operations and which requires storing the same number of
group elements. See Section 3 of Pollard [488].
−b a
p It is convenient to replace h by hg so that h = g with 0 ≤ a < w. Then set
13.3.6:
m = ⌈ w/2⌉ and run Algorithm 14 with trivial modifications (the while loop now runs
to 2m).

13.3.7: Let m = ⌈ w/2⌉. If h = g a then write a = a0 + 2ma1 where −m ≤ a0 ≤ m
and 0 ≤ a1 ≤ 2m. Then one needs m group operations to produce the list of baby steps
g a0 and, on average, 12 2m = m group operations for the giant steps.
13.3.8: We have a = b + ma1 for some integer a1 in the range 0 ≤ a1 < w/m. Let
h1 = hg −b andpg1 = g m . Then h1 = g1a1 . The BSGS algorithm can be used to solve this
problem in O( w/m) group operations.
13.3.9: See van Oorshot and Wiener [472]. √
13.3.10: One can just run BSGS twice, but a better solution is to set m = ⌈ 2w⌉,
h1 = hg −b1 and h2 = hg −b2 . One computes m baby steps as usual and then computes
m/2 giant steps starting from h1 and another m/2 giant steps starting from h2 .
13.3.11: Solve the DLP instance 1 = g a .
13.3.13: Blackburn and Teske [60].
13.3.14: By Exercise 11.1.15 the number of NAFS is ≈ 2n+2 /3. If a is a NAF then
a = a0 + 2⌈n/2⌉ a1 where a0 and a1 are NAFs of length l = ⌈n/2⌉. There is an efficient
procedure to list all NAFs a0 of length l (see Exercise 11.1.16) so one can compute and
store all g a0 in a sorted list. This gives a BSGS algorithm requiring O(2⌈n/2⌉+2 /3) time
and space.
13.4.6: See Maurer [404].
13.5.4: Let m = ⌊l/2⌋. Use the equation
−a
g1a1 · · · gm
am m+1
= hgm+1 · · · gl−al .

Let N1 = #S1 · · · #Sm and N2 = #Sm+1 · · · #Sl . Compute and store the left hand side
of the equation in time and space O(N1 ) then compute the right hand side and check for
623

a match. The running time is O(N1 + N2 ) group operations and the storage is O(N1 )
group elements.
13.5.5: Define S1′ = {a1 · · · a⌊l/2⌋ : aj ∈ Sj for 1 ≤ j ≤ l/2} and S2′ = {a⌊l/2⌋+1 · · · al :
aj ∈ Sj }. We assume #S1′ ≤ #S2′ . Compute and store (g a1 , a1 ), sorted according to
−1
the first component, for all a1 ∈ S1′ then compute each ha2 for a2 ∈ S2′ and check for a
match with the earlier list. The running time is O(#S1′ + #S2′ ) group operations and the
storage is O(#S1′ ) group elements.
⌈αn⌋ 
13.6.10: Find the largest α ∈ R such that α ≤ 1/2 and M ≥ ⌊αw⌋ . Compute M
n−⌈αn⌋ 
“baby steps” and then w−⌊αw⌋ “giant steps”.
13.8.2: A solution is to sort L1 and then, for each x2 ∈ L2 to check whether x2 ∈ L1 .
For the “hash-join” solution see Wagner [625].
13.8.5: See Wagner [625].
13.8.7: D = {x ∈ {0, 1}n : LSBm (x) = 0}.
13.8.9: See Wagner [625].

Chapter 14: Factoring and Discrete Logarithms using Pseudorandom Walks


2
−l /2N
14.1.3: By
√ solving e < p) for l, for probability 0.99 the number √ of trials needed is
≈ 3.035 N and for probability 0.999 the number of trials is ≈ 3.717 N .
14.2.4: lt = 20, lh = 10, so x20 ≡ x40 (mod p). The walk in this example is mostly
p which is “not very random” and which justifies why lt + lh is so much larger
squarings,
than πr/2.
14.2.5: lt = 7, lh = 6, so first collision is x12 = x24 .
14.3.3: Solving e−α = 1/r (the probability to simply guess the correct solution to the
DLP) gives α = log(r).
14.2.18: On can store some bits of H(xn ), where H is a hash function. A further
variant is for the clients not to compute or store the values ai and bi . Instead, the
algorithm is structured so that the values (a0 , b0 ) of the initial point x0 = g a0 hb0 are a
function of a short random “seed”; the client then sends the end-point of the walk and the
seed to the server. The server then re-calculates the walk, this time including the values
ai and bi , once a collision has been detected. We refer to Bailey et al. [25] for details.
14.2.21: Each step of the algorithm requires computing g a0 hb0 for random 0 < a0 , b0 <
r, which (naively) requires 3 log2 (r) group operations. The cost can be reduced to around
log2 (r) (and even further) with modest precomputation √ and storage. In any case the
total cost of the algorithm
√ would be around 1.25 r log 2 (r) group operations on average,
compared with 1.41 r for baby-step-giant-step. The storage requirements are similar.
14.4.9: xi+2 = (xi g)−1 g = x−1 i . To have the cycle we need S(x̂i+1 ) = S(x̂i ) which
occurs with probability 1/nS and we need x̂i+1 = x−1 i+1 which occurs with probability
1/NC .
14.5.6: The worst case is when h = g a withp a = 0 or a = w − 1. The expected cost is
then
√ w/(2m) + m. To minimise this take m = w/2. The worst-case complexity is then

(2 2 + o(1)) w group operations.
14.5.7: Mean jump size is m ≈ 5.38. Expected length of walk is l ≈ 8.7. Probability
of success is 1 − (1 − 1/m)l ≈ 0.83 ≈ 5/6. See Pollard [488].
14.5.9: The algorithm doesn’t really change. Let d be the expected distance of the wild
kangaroo from the center of the interval (for the uniform distribution on {0, 1, . . . , w − 1}
2
we had d = w/4. Then the expected running √ time is d/m + 4(1 + ǫ)m/NP + 1/θ group
operations. The√ optimal value for m is NP d/2, giving an average-case expected running
time 4(1 + ǫ) d/NP + 1/θ group operations.
624 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

14.5.10: If w ≥ πr/8 ≈ 0.4r then one should use the rho algorithm. If w is significantly
smaller than 0.4r then one would use the kangaroo method. Determining the exact
crossover would require careful experimentation.
14.5.12: See Galbraith, Pollard and Ruprai [226]. p
14.6.3: The heuristic cost is w/(2m) + 4m/NP2 , which is minimised by m = NP w/8.
14.7.6: We always√ have #(T ∩ W ) √ = r where r = #G. The heuristic complexity is
therefore (1 + o(1)) πr ≈ (1.77 + o(1)) r group operations, which is slower than Pollard
rho.
14.7.7: See Galbraith and Ruprai [227].
14.8.1 See van Oorchot and Wiener [473].
14.9.5: x2 + 1 is fast to compute and does not have any trivial fixed points or short
cycles. The other suggestions have fixed points independent of p | N .
14.9.6: The idea is that f (x) is now m-to-1, so the expected length of the rho is
shorter (this is the same as the reason why taking nS > 3 is a good idea; see the end of
Section 14.2.5). For more discussion of this specific case see Brent and Pollard [99].
14.9.7: One cannot “see” a match modulo p without computing a gcd.

Chapter 15: Factoring and Discrete Logarithms in Subexponential Time


15.1.4: We have the error term like exp(−u(log(u) + c log(log(u)))) for a suitable function
c ≥ 1. This splits as u−u log(u)−cu . If the second term is u−f (u) for some function then
taking logs gives f (u) log(u) = cu log(log(u)) and so f (u) = cu log(log(u))/ log(u) = o(u).
15.1.7: The first 4 properties are straightforward to verify. Property 5 follows since
log(log(N )m ) = m log(log(N )) < c log(N )a log(log(N ))1−a = log(LN (a, c)) for suffi-
ciently large N . Property 6 follows from property 5 and property 2. To prove property 7
let f (N ) = m(log(log(N ))/ log(N ))a and note that Ln (a, f (N )) = exp(m log(log(N ))) =
log(N )m . It is easy to check that limN →∞ f (N ) = 0 and so f (N ) = o(1). Properties 8
and 9 are easily checked.
15.1.9: Let B = LN (1/2, c) and
p p
u = log(N )/ log(B) = log(N )/(c log(N ) log(log(N ))) = 1c log(N )/ log(log(N )).

The probability of being B-smooth is therefore Ψ(N, B)/N = u−u(1+o(1)) as N → ∞.


Now log(u) = log(1/c) + 21 (log(N ) log(N ) − log(log(log(N )))) = (1/2 + o(1)) log(log(N )).
The result follows.
15.2.1: For each prime p | N (necessarily odd) the equation x2 ≡ 1 (mod p) has
exactly two distinct solutions modulo p. If pe | N for e > 1 then by Hensel lifting the
equation x2 ≡ 1 (mod pe ) also has exactly 2 solutions. The result follows by the Chinese
remainder theorem.
15.2.3: Some random relations are

17172 ≡ 22 · 33 · 5 · 7 (mod N )
2
2365 ≡ 23 · 32 · 5 · 7 (mod N )
17572 ≡ 27 · 33 (mod N )
5192 ≡ 25 · 3 · 52 (mod N )

and so the relation matrix is  


2 3 1 1
 3 2 1 1 
 .
 7 3 0 0 
5 1 2 0
625

Adding the first three rows modulo 2 gives the all zero vector (again, we are lucky that
5 relations are not required) and let X = 1717 · 2365 · 1757 (mod N ) and Y = 26 · 34 · 5 ·
7 (mod N ). Then gcd(X − Y, N ) = 73. One has N = 53 · 73.
15.2.5: See Exercise 16.5 of Shoup [556].
15.2.8: First note that (#B)2 = LN (1/2, 1 + o(1)) as required. For the second part,
write u = log(N 1/2+o(1) )/ log(B) and note that one expects TB ≈ uu . Now,
p p
u = ( 12 + o(1)) log(u)/( 12 log(N ) log(log(N ))) = (1 + o(1)) ln(N )/ ln(ln(N )).

Hence, p
log(uu ) = u log(u) = ( 21 + o(1)) log(N ) log log(N ).
Hence #BTB = O(LN (1/2, 1 + o(1))) = LN (1/2, 1 + o(1)). See Section 6.1 of [162] or
Section 16.4.2 of [556]
√ for further details. √
15.2.10: If x = kN +ǫ then either x2 −kN or kN −x2 is a positive integer ≤ 2 kN ǫ.
The algorithm then proceeds the same way.
15.3.4: One expects to make LN (1/3, 1/(2c)+o(1)) trials until the value x is L√N (2/3, c)-
smooth. One can use ECM to factor the numbers (if they are smooth) in Lp (1/2, 2+o(1))
operations. Inserting p = LN (2/3, c) gives the result.
15.5.1: Let u ∈ hgi ∩ G′ = {1}. The order of u divides r and (p − 1)/r. Since
gcd(r, (p − 1)/r) = 1 it follows that the order of u is 1.
15.5.2: If r < (p − 1)/r is large then one can efficiently generate a random element
δ ∈ G′ by choosing a random element w ∈ F∗p and computing δ = wr (mod p). If
r ≥ (p − 1)/r then it is better to precompute an element g1 ∈ F∗p of order (p − 1)/r and
then compute δ = g1i (mod p) for a random value of i (one can speed this up further using
a Pollard-style pseudo-random walk, which is also a good solution when r is small).
15.5.3: See Algorithm 16.1 and Lemma 16.2 of Shoup [556].
15.5.7: Let B = Lp (1/2, c). By Corollary 15.1.8, TB = Lp (1/2, 1/(2c) + o(1)). Hence
the running time of the relation collection is O(Lp (1/2, 2c + 1/(2c) + o(1))) bit operations
and the running time of the linear algebra is O(Lp (1/2, 2c + o(1))) bit operations. The
optimal value of c is 1/2 and the complexity is as stated.
15.5.8: Let re k(p − 1) and let γ ∈ Fp∗ have order re . Let G′ be the subgroup of order
(p − 1)/re . Generate relations of the form γ z δ (mod p) with 1 < z < re and δ ∈ G′ .
One needs a relation featuring g and a relation featuring h. The linear algebra is now
modulo re (see Exercise 15.4 of [556]). One finds γ Z1 g Z2 = 1 from which it follows that
Z′ e−1
re−1 | A and so γ1 1 g Z2 = 1 with γ1 = γ r and Z1′ = Z1 /re−1 . One therefore compute
the logarithm of g to the base γ1 and similarly for h.
15.5.9: Once the first s relations are found only one relation is needed for each hi , and
this takes O(uu #BM (log(p))) = O(Lp (1/2, c + 1/(2c) + o(1))) = O(Lp (1/2, 3/2 + o(1))
bit operations (using trial division for the relations). With our current formulation the
full linear algebra has to be repeated each time (actually, this can be avoided by using a
different approach), but this only costs O(Lp (1/2, 1 + o(1))) bit operations.

15.5.10: We are testing numbers of size p for B-smoothness so the number of
′ √
trials before getting a B-smooth value for w1 is (u′ )u where u′ = log( p)/ log(B) =
1
2 log(p)/ log(B). However, since we need both w1 and w2 to be smooth the number of

trials is (u′ )2u = (u/2)u . For B = Lp (1/2, c) it follows that log((u/2)u ) = u log(u/2) ≈
u log(u) and so one gets the same complexity as before (but a smaller o(1) term).
15.5.14: Suppose b is even. One can write the sum as pb /b + pb−1 /(b − 1) + · · · +
pb/2 /(b/2) plus fewer than 2b terms which are all O(pb/2 ). Since pb−i /(b−i) ≤ pb−i /(b/2) =
2pb−i /b for 0 ≤ i ≤ b/2 the first result follows.
626 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

For the approximation just note that pb /b + pb−1 /(b − 1) + · · · ≈ pb /b(1 + 1/p + 1/p2 +
· · · ) = pb /b(1 − 1/p). p
15.5.16: Following
p the discussion earlier in this section, let b = c n log(n)/ log(p) and
u = n/b = 1c n log(p)/ log(n). Then #B < pb and so #B = O(Lpn (1/2, c)). Now
p 
log(uu ) = u log(u) = 1c n log(p)/ log(n) log(1/c) + 12 (log(n) + log(log(p)) − log(log(n))
1
p
= 2c + o(1) n log(n) log(p).

Hence uu = O(Lpn (1/2, 1/(2c) + o(1))) as usual. The total running time is therefore
O(Lpn (1/2, c + 1/(2c) + o(1)) √ + Lpn (1/2, 2c + o(1))) bit operations. This is minimised
when 2c2 = 1, i.e., c = 1/ 2, which gives the stated complexity. We refer to Section
4 of Odlyzko
p [469] for more details, but note that Odlyzko writes the complexity as
O(exp(c n log(n)) = O(Lpn/ log(n) (1/2, c)).
15.5.18: Note that F2d = F2 [x]/(A(x)) where d = deg(A(x)). Let α ∈ F2d be a root
of A(x). If the equation x2 + x = α has no solutions then A(x2 + x) is irreducible. If the
equation x2 + x = α has two solutions (namely, α and α + 1) in F2d then A(x2 + x) is the
product of their minimal polynomials.
15.5.22: For example F1 (t) = t4 + t2 + t + 1 and F2 (t) = t4 + t2 .
15.5.23: Note that φ1 (ψ1 (x)) = t, φ1 (ψ1 (y)) = φ1 (F1 (x)) = F1 (t), φ2 (ψ2 (x)) =
φ2 (F1 (y)) = F2 (F1 (t)) ≡ t (mod F (t)) and φ2 (ψ2 (y)) = φ2 (y) = F1 (t).
15.6.2: Set b = ⌈logq (Lqg (1/2, c))⌉. Construct the factor base B as above, generate
random group elements using the algorithm of Section 15.5.1 and use Theorem 15.6.1 to
determine the expected number of trials to get a smooth relation. The group operations
and polynomial factorisation are all polynomial-time and can be ignored. The algo-
rithm therefore has the usual complexity #BLqg (1/2, 1/(2c) + o(1)) + (#B)2+o(1) which
is Lqg (1/2, c + 1/(2c) + o(1))√ + Lqg (1/2, 2c + o(1)) when g is sufficiently large. This is
optimised by taking c = 1/ 2, giving the stated running time. For the technical details
see Enge and Gaudry [195] or Section VII.6.1 of [65].
15.6.3: See page 36 of Adleman, DeMarrais and Huang [4] for the case of one point at
infinity.
15.6.4: Clearly degree 1 prime divisors are points and #C(Fq ) = q + 1 + t where |t| ≤
√ 
2g q = o(q). As we know, there are approximately q g divisor classes and #C(Fqg)+g−1 =
q(1+o(1))+g−1

g = q g (1 + o(1))/g! divisors formed by taking sums of prime divisors of
degree 1 (and subtracting a suitable divisor at infinity to get a degree 0 divisor).
For hyperelliptic curves with a single point at infinity, Lemma 10.3.24 shows the
Mumford representation is unique and the above argument therefore proves the result.
However, for general hyperelliptic curves uniqueness does not necessarily hold and so this
argument needs some care. To deal with this one uses the fact that there are #Pic0Fq (C) =
q g (1 + o(1)) divisor classes and hence only o(q g ) divisor classes contain more than one
reduced divisor. It follows that only o(q g ) sums of g points in C(Fq ) can yield divisor
classes containing more that one reduced divisor, and so the above arguments do prove
what is required.
15.6.5: See [242] or Section VII.6.2 of [65].
15.8.3: One can easily check all three properties for n = 2. The case n = 3 follows
directly from the group law: if x1 6= x2 , then x1 + x2 + x3 = λ2 where λ = (y2 − y1 )/(x2 −
x1 ). Expanding gives 2y1 y2 = y12 + y22 + (x2 − x1 )(x1 + x2 + x3 ); squaring both sides and
substituting x3i + a4 xi + a6 for yi2 gives (x1 − x2 )2 times the stated equation. The case
x1 = x2 follows by using λ = (3x21 + a4 )/(2y1 ).
The general case follows since P1 + · · · + Pn = OE if and only if P1 + · · · + Pn−2 = −R
and Pn−1 + Pn = R for some point R on the curve. In other words, P1 + · · · + Pn−2 + R =
627

Pn−1 + Pn + (−R)) = OE . The symmetry is obvious, and the statement about degrees
follows by induction and using the fact that the resultant of a quadratic and a degree
2n−2 polynomial is a degree 2n−1 polynomial.

Chapter 16: Lattices


16.1.4: A hint for last part is that the√ lattice contains the sublattice with basis M ei
16.1.13: b1 = (1, 1) has volume 2.
16.2.5: (You need to have studied the proof of Theorem 16.2.3.) A radius r = λ1 + ǫ
hypercube around the origin has volume (2r)n . Taking ǫ > 0 arbitrarily small gives the
result. For the second claim, consider the convex region {v ∈ Rn : kvk1 ≤ r}, which is a
hyper-tetrahedron of volume 2n rn /n!. The approximation is using Stirling’s formula.
16.2.6: Consider the lattice basis {(1, a), (0, b)} of rank n = 2 and determinant b.
p √is of the form (s, as + bt) for some s, t ∈ Z. By Minkowski,
Every element of the lattice
the disk of radius u = b 2 has√volume πu2 > 4b and so contains a non-zero lattice
point. Hence, 0 < s2 + r2 < u2 = 2b.
16.3.1: Problem 1 is achieved by solving xB = v over Q (or with sufficiently accurate
floating point arithmetic, rounding to the nearest integer solution and then checking).
To solve problem 2, compute the HNF of the matrix B whose rows are b1 , . . . , bn and
discard the zero rows. For problem 3, write A′ = U A be the HNF of A. If the first r rows
of A′ are zero then the first r rows of U are a basis for ker(A) (since if x is any vector
with last m − r entries zero then 0 = xA′ = (xU )A).
To solve problem 4, concatenate the n rows of the n × n matrix M In to the matrix
A to apply an extended matrix A′ . Then use the method used to solve problem 3 on the
matrix A′ . To see this is correct note that (x1 , . . . , xm )A ≡ 0 (mod M ) if and only if
there are n integers xn+1 , . . . , xn+m such that then (x1 , . . . , xn+m )A′ = 0.

Chapter 17: Lattice Basis Reduction


17.1.6: Clearly each change of basis is invertible and so the output is a basis of the lattice.
Since B1 ∈ N is strictly decreasing the algorithm must terminate after a finite number of
steps.
17.1.8: {(−1, 0), (0, 2)}.
17.2.5: Yes, no, yes, no, yes.
17.2.7: An example is b1 = (1, 0) and b2 = (0.49, 0.8).
17.2.10: The proof closely follows the proof of Lemma 17.2.8. For part 2 one should
Pi−1 √ k √ i
find kbi k2 ≤ (1 + 14 k=1 2 )Bi ≈ (0.146 + 2 /1.46)Bi and one gets the result. Since

1/6 ≤ ( 2 − 1)2(i−1)/2 for i ≥ 1 one has kbj k2 ≤ 2j/2 Bj and part 3 follows.
17.2.16: An example of the situation kv Q1 k 6= kb1 k was seen in Exercise 17.2.5. For the
second part, we have 2n(n−1)/4 det(L) ≥ nj=1 kv j k ≥ kv i kn+1−i .
17.4.4: See [373] or Section 2.6.1 of [136].
17.5.2: Let L be any lattice of dimension n. We have proved than an LLL-reduced
basis for L exists. Hence, by part 5 of Theorem 17.2.12 λ1 ≤ 2(n−1)/4 det(L)1/n . Also see
Section 12.2 of Cassels [121].

Chapter 18: Algorithms for the Closest and Shortest Vector Problem:
18.1.9: The complexity of computing the Gram-Schmidt basis is given by Theorem 17.3.4.
Using the same techniques, one can show that wi can be represented using exact Q-
arithmetic with denominators bounded by X n−1 and numerator bounded by X n .
628 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

18.1.10: The inductive process uses the fact that the orthogonal complement of bn is
the span of {b∗1 , . . . , b∗n−1 }. If one starts at b1 instead of bn then one needs an analogue
of Lemma 18.1.1. P P P
18.2.5: If w = ni=1 li bi and u = ni=1 li′ bi then kw − uk2 = ni=1 (li − li′ )2 kbi k2 . The
result follows.
√ 18.2.7: w ≈ 24.09b1 +12.06b2 +26.89b3 so v = (99, 204, 306) and kv−wk = k(1, 1, −1)k =
3.
18.3.4: (100, 77, 104).
18.4.5: See Figure 1 of Hanrot and Stehlé [275] or Algorithm 10.6 of Joux [317].
18.4.7: See Section 3 (under “Combinatorial methods”) of Micciancio and Regev [423].

Chapter 19: Coppersmith’s Method and Related Applications


19.1.8: The dimension is 2d, the determinant is M d X 2d(2d−1)/2 , the condition (ignoring
the constants) (M d X 2d(2d−1)/2 )1/2d ≤ M leads to X ≈ M 1/(2d−1) . See Section 2 of
Coppersmith [142].
19.1.14: Set x0 = px for x ∈ Z. There is a similar example in Section 3 of Copper-
smith [142].
19.4.5: See Section 3.2 of May [410] (pages 34-36).
19.4.6: Set ǫ = 1/ log2 (N ) and apply Theorem 19.4.2 a constant number of times,
each with a different guess for the most-significant 2 bits of p − p̃.
19.4.7: Guess all N 1/4 values for p̃ and try Coppersmith’s algorithm (in polynomial-
time) on each of them.
19.4.8: Just use F (x) = (p̃ + x) as before and note that the proof of Theorem 19.4.2
does not change. This is mentioned on page 52 of Howgrave-Graham [297].
19.4.9: Assume that 0 ≤ p̃ < M . It follows that M x + p̃ has a small root modulo p.
Hence apply the same methods using the polynomial F (x) = M x + p̃, which can be made
monic as F (x) = x + (p̃M −1 ) (mod N ).
19.4.10: Let p = p̃ + x0 with 0 ≤ x0 < X then setting

q̃ = ⌊N/(p̃ + X)⌋

means q̃ ≤ q ≤ N/p̃, which is an interval of width N X/(p̃(p̃ + X)) < qX/p̃. Hence
q = q̃ + y0 with 0 ≤ y0 ≤ q(X/p̃).
19.4.12: Let p̃ = 5000 and X = 10. We seek a small root of F (x) = (p̃ + x)3 modulo
a large factor of N . Reducing the matrix
 
N 0 0 0 0
 0 NX 0 0 0 
 
 0 0 NX 2
0 0 
 3 
 p̃ 3p̃2 X 3p̃X 2 X3 0 
0 p̃3 X 3p̃2 X 2 3p̃X 3 X 4

yields the row (1980391527, −16046759730, 20944500000, 35123963000, 35110000). This


corresponds to the polynomial 3511x4 + 35123963x3 + 209445000x2 − 1604675973x +
1980391527, which has the root x = 3, giving
 p = 5003. 
19.4.14: The algorithm requires O( ne log(P )3 ) bit operations, and at least ne >
(n/e)e bit operations. The input size is O(n log(pn )) bits (assuming all 0 ≤ ri < pi ). The
algorithm is not polynomial-time: a polynomial-time algorithm would need ≤ (n log(pn ))c
bit operations for some constant c, and so the logarithm of the running time would be
629

≤ c(log(n) + log log(pn )); whereas if e > log(n) then the logarithm of the running time of
the algorithm is ≥ log((n/e)e ) > log(n)2 − log(n) log log(n).
19.4.16: pn ≈ n log(n) so P > n! > (n/e)n (warning: the p e here is 2.71828... from
Stirling’s
p formula) and so log(P
p ) > n(log(n) − 1). Then log(X)/ log(P ) log(pn ) ≈
log(X)/(n log(n)) log(n) = log(X) log(n)/n. The claimed value for e follows. This is
obviously > log(n) for large n and sufficiently small X.
19.4.17: Let m = ⌈log2 (U )⌉. Then 2m ≤ U < 2m+1 and 2m+1 ≤ 2U ≤ V .
19.4.19: For the first case we get x = −347641 and for the second x = 512.
19.5.2: Given q let pi = ⌊qαi ⌉ so that |qαi − pi | ≤ 1/2.
19.5.4: The first row output by LLL on the matrix is approximately (−0.0000225, 0.0002499, 0.0002499, 0.0007500
Now compute q = 0.0000225Q/ǫ = 2250. One finds p1 = 3499, p2 = 1735, p3 = 750.
19.6.1: Note that 21 b1/3 < qb < b1/3 and so |y| < 21 qb as required. Also, note that
the continued fraction method finds qa /qb as long as |(ã/b̃) − (qa /qb )| < 1/(2qb2 ). First
note that b−1/3 < 1/qb < 2b−1/3 so 12 b−2/3 < 1/(2qb2 ). Now, the difference (ã/b̃) − (qa /qb )
is given by equation (19.6), whose numerator is bounded by 2 12 b1/3 14 b1/3 . Since 1/qb <
2b−1/3 we have the difference bounded by 21 b1/3 /b̃. It follows that the difference is less
than 1/(2qb2 ) as required and so Euclid solves the problem.
19.6.2: If α < β then the target gcd is smaller than the errors – one therefore expects
very many solutions.
The first condition comes from the Diophantine approximation step: Namely that the
right hand side of equation (19.6) (which is 1/b̃1−β ) is at most 1/(2qb2 ) ≈ 1/b̃2(1−α) . The
second condition comes from the requirement that |y| < 21 qb .
19.6.3: The process is the same as  the generalisation of Diophantine approximation
β
in Section 19.5. The lattice is b̃0 ãb̃ which has determinant roughly b̃1+β and a short
vector (qa b̃β , qb x − qa y) of length roughly b̃1+β−α . The result follows.
19.7.4: See [496] for the first reduction. The second is almost immediate, but needs
some care.

Chapter 19a: Cryptosystems Based on Lattices


19.9.1: Given a ciphertext c one can check if it is an encryption of m by testing whether
c − mG′ (respectively, c − mB ′ ) is a valid error vector for the system.
19.9.2: Check if c is an encryption of m by testing if c−m lies in the code (respectively,
lattice) corresponding to the public key.
19.9.3: Given c add m′ G′ (respectively, m′ B ′ ) to get a ciphertext c′ 6= c which is an
encryption of m + m′ . Alternatively, add an extremely small error vector e′ to get c′ and
call the decryption oracle; hopefully this will return the message m.
19.9.8: (2, 3).
19.9.9: Set w = v and, for i = n downto 1 do w = w − ⌊hw, b∗i i/hb∗i , b∗i i⌋.
P m = (−1, 0, 1), e = (1, −1, 1).
19.10.3: m = (1, 2, 0), e = (1, 1, 1);
19.13.2: There is a solution s = ni=1 xi bi if and only if s′ = s − xn bn is a subset sum
of {b1 , . . . , bn−1 }.
P
19.13.3: Assume n even. Compute all 2n/2 integers n/2 i=1 xi bi for xi ∈ {0,
P1} and store
n
in a sorted list, binary tree or hash table. Then compute all 2n/2 values s − i=n/2+1 xi bi
and determine whether or not it appears in the list. For details see Algorithm 3.94 of
[418].
19.13.4: Just divide everything by the gcd.
19.13.7: Given s = 112 we just subtract the largest possible element, in this case 80,
which leaves 112 − 80 = 32. We then subtract the largest element less than 32 to get
630 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

32−20 = 12. We then take 12−7 = 5. Hence, we have computed that 112 = 5+7+20+80,
which is the solution vector (0, 1, 1, 1, 0, 1, 0).
19.13.10: n/(n − 1) = 1 + 1/(n − 1).
19.13.11: 8/ log2 (430) ≈ 0.9145.
19.13.12: By Exercise 19.13.8 bn ≥ 2n−1 and so the density is at most n/(n − 1) =
1 + 1/(n − 1).
19.13.15: 0110010.
19.13.16: 0.782
19.13.17: (154, 184, 43, 69, 125, 62), c = 384.
19.13.18: Encryption is deterministic.
19.13.19: Given a ciphertext c one can call c + a1 and c − a1 to the decryption oracle.
One of them is a valid ciphertext and corresponds to the original message with the first
bit flipped.
19.13.21: Since 0 < ai < M we expect an average ciphertext to be a sum of around
n/2 integers of size M/2. Hence c ≈ nM/4 on average (and, in all cases, 0 ≤ c < nM ).
Since M ≥ 2n we expect c to require at least log2 (nM/4) > log2 (n) + n − 2 bits.
19.13.22: Take bi = 2i−1 for 1 ≤ i ≤ n, M = 2n + 1 and any W such that W bi 6≡
n
2 (mod M ) for all i. Then the density is > 1. Of course, it is easy to compute the
private key corresponding to such a public key.
Pn19.13.24: Since the integers ai,j are like random integers modulo t
Mi we expect
t n
j=1 ai,j ≈ nMi /2 and so Mi+1 > nMi /2. Hence, Mt > (n/2) M1 ≥ (n/2) 2 . The
ciphertext is expected to be a sum of around n/2 integers of size Mt /2 and so around
nMt /4. Therefore, on average,

log2 (c) = log2 (n(n/2)t M1 /4) > log2 (n) + t(log2 (n) − 1) + n − 2.

Since the at,i are somewhat like randomly chosen integers modulo Mt we expect to have
max{at,i } ≈ Mt . We conservately assume in what follows that, on average, max{at,i } >
Mt /2. Hence the density is at most n/ log2 (Mt /2) < n/(t(log2 (n) − 1) + n − 1). For
example, n = 200 and t = 5 gives expected density less than 0.87; in any case, the density
of an iterated Merkle-Hellman knapsack is always significantly less than 1.
19.13.25: From Exercise 19.13.8 we have bn > 2n−2 b1 and so M > (2n−2 + · · · + 2 +
1 + 1)b1 = 2n−1 b1 . So b1 b2 > M > 2n−1 b1 implies b2 > 2n−1 . Exercise 19.13.8 also shows
M > 2n−3 b2 and so M > 22n−4 . Then log2 (nM/4) > log2 (n) + (2n − 4) − 2.
19.13.26: See if W −1 ai (mod M ) are small and allow efficient solution to the subset
sum problem using a greedy algorithm.
19.13.28: a1 b2 − a2 b1 = 7 · 233 · 37589 · 143197. The only factor of size ≈ a3 is
M = 233 · 37589 = 8758237. This gives W = a1 b−1 1 (mod M ) = 5236910. One verifies
that W −1 ai (mod M ) is a superincreasing sequence.
19.13.30: Suppose W is known and assume no permutation is used. Note that there
are integers ki for 1 ≤ i ≤ n such that

ai = bi W + ki M

and ki < bi . So a1 ≡ k1 M (mod W ) and a2 ≡ k2 M (mod W ). Hence, writing c =


a2 a−1
1 (mod W ), we have k1 c ≡ k2 (mod W ). If k1 k2 < M (which is plausible since
k1 k2 < b1 b2 < M and W is usually about the same size as M ) one can apply the same
methods as used in Example 19.13.29 to find (k1 , k2 ) and hence M .
19.13.32: 11100001.
19.13.33: 10111100.
631

Chapter 20: The Diffie-Hellman Problem and Cryptographic Applications


20.2.7: Suppose l | n is prime and write g1 = g n/l . Then g c = g ab implies g cn/l = g abn/l
and so (g1 , g1a , g1b , g1c ) = (g n/l , (g a )n/l , (g b )n/l , (g c )n/l ) is a valid Diffie-Hellman tuple. If
l = O(log(n)) then one can solve the DLP in hg1 i (this is just Pohlig-Hellman) and hence
test DDH in hg1 i. If (g, g a , g b , g c ) is a random tuple in G4 then with probability 1/l the
resulting tuple in hg1 i is not a valid Diffie-Hellman tuple. The algorithm therefore has a
noticeable advantage in Definition 20.2.4.
20.4.1: In the first case, c2 = mhk and c′2 = mhAk+B . Hence, cA B ′
2 h /c2 = m
A−1
and
so one can compute m assuming that (A − 1) is coprime to the order of the group G. In
the second case, query the decryption oracle on (c1 , c2 ) to get m and hence hk . One can
then compute (hk )A hB and decrypt c2 .
20.4.3: As above we can self-correct the CDH oracle to make it reliable. Once one
knows g ax then one can decrypt to get M . For the second part: The problem is that
given the message M corresponding to the public key (g, h) and ciphertext (c1 , c2 ) one
can compute M ⊕ c2 = H(g ax ) but if H is hard to invert then one cannot compute g ax .
20.4.6: Given (c1 , c2 ) the user returns m = c2 c−a 1 so set c1 = h
−1
and c2 arbitrary and
get ha = mc−12 . If the group contains an element h with relatively small order l then one
can easily solve the DLP to get a (mod l). If this can be repeated for sufficiently many
coprime values l then a can be determined using the Chinese remainder theorem.
20.4.9: See Boneh, Joux and Nguyen [82] for the full details of this attack. The basic
idea is as follows: For suitable constant c one has m = m1 m2 where 1 ≤ mi ≤ c2m/2+ǫ
with a certain noticeable probability (for c = 1 [82] states the probability is at least
log(1 + 2ǫ)).
20.5.2: Eve, pretending to be Bob, sends g y (where y is known to Eve). She makes
a corrupt query to Alice and, knowing g xy , can compute g ab . Eve can now compute a
shared key with Alice, whereas Alice believes she is sharing with Bob.

Chapter 21: The Diffie-Hellman Problem


21.1.10: For both problems let the space of instances be (G − {1})2 . Given an Inverse-DH
−1 −1
oracle A and instance (g, g a ) one has g a = A(g x , (g a )xy )yx for 1 ≤ x, y < r. Given a
2 2 −1
Square-DH oracle A and instance (g, g a ) one has g a = (A(g x , (g a )xy )(xy ) .
For the self-correction: Run the non-perfect Square-DH oracle repeatedly to produce
2
a list L which is expected to contain g a . Then choose 0 ≤ u < r and repeat the process
2
on (g, g g ). This givies a list L which is expected to contain g (a+u) . Finally, determine
a u ′
2
whether there is a unique pair (Z, Z ′ ) in L × L′ such that Z(g a )2u g u = Z ′ , and if so
return Z. The precise details are similar to Theorem 21.3.8.
21.3.9: Suppose A is correct with noticeable probability ǫ. Since the reduction makes
at least log2 (r) oracle queries, the probability that the result is correct is at most ǫlog2 (r)
which is negligible. Instead, one should self-correct the oracle A to obtain an oracle A′
with success probability greater than 1 − 1/(4 log2 (r)). By Theorem 21.3.8 this requires
O(log(log(r))/ǫ) oracle queries. One can then perform the reduction of Lemma 21.1.13
using A′ , with success probability ≥ 1/2.
21.1.19: For the first part, given (g1 , g2 , g3 ) = (g, g a , g b ), call O1 (g r2 , g2r2 , g3r2 ) to get
g abr2 and call O2 (g r1 , g2r1 , g3r1 ) to get g abr1 . Finally compute s, t ∈ Z such that r1 s+r2 t = 1
and then (g abr1 )s (g abr2 )t = g ab as required.
For the next part: The problem Inverse-DH is only defined when a is coprime to the
group order. If a = r1 , as would be required in several places, then we cannot make
a meaningful query to an Inverse-DH oracle. Also, in the proof of Lemma 21.1.5 then
g 6∈ hg1 i so a CDH oracle can’t work. Indeed, Shoup has shown (Theorem 5 of [553])
632 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

that a generic algorithm for Fixed-CDH with respect to g r1 , even when given a perfect

Fixed-CDH oracle with respect to g, takes Ω( r2 ) group operations.
21.1.20: For the proof, historical references and a major generalisation of such prob-
lems see [102].
21.4.12: For example, with a perfect Fixed-CDH oracle, Pohlig-Hellman only and
using projective coordinates the number of oracle queries is O(log(r) log log(r)) oracle
queries and O((l12 + l2 ) log(r)2 / log(max{l1 , l2 })) group operations.
21.4.13: We give the results only for the case of Pohlig-Hellman combined with ex-
haustive search. Note that not every a ∈ Fr corresponds to an element a + bθ of G2,r ,
whereas every a ∈ A1 (Fr ) corresponds to an element of T2 (Fr ). Hence, embedding the
DLP instance into the algebraic group G2,r requires an expected O(log(r)) oracle queries
(computing Legendre symbols and taking square roots).
The group operation using the first representation requires no inversions but the group
operation for the second representation requires inversions. The number of Fixed-CDH
oracle queries respectively is O(log(r) log log(r)) and O(log(r)2 log log(r)). Hence, the
first representation is better. If one has a CDH oracle then inversions are a single oracle
query and so the number of oracle queries is O(log(r) log log(r)) in both cases.
21.5.5: See Cheon [131].
21.5.9: See Brown and Gallant [111].
21.6.4: See Kaliski [327] or Fischlin and Schnorr [203].
21.6.9: Let A be a perfect oracle which, given h = g x for x ∈ {0, 1, . . . , r − 1} outputs
b(x). It suffices to show how to use A to determine the least significant bit of x. We may
assume that h 6= g −1 , since if h = g −1 then we know x. If A(h) = 1 then (x1 , x0 ) = (0, 1)
or (1, 0), so A(gh) determines the LSB. Similarly, if A(h) = 0 then (x1 , x0 ) = (0, 0) or
(1, 1) and so A(gh) determines the LSB (since h 6= g −1 there is no carry when computing
gh).
21.6.11: The integer r in binary begins as 11000 · · · so the highest order 2 bits of a
random integer modulo r are essentially 00, 01 or 10 with probability close to 1/3 each.
Hence, both predicates are 0 with probability close to 2/3.
21.6.13: Let A be an oracle such that A(g x ) = b(x). Let h = g a . Set j = 1 and if
A(h) = 0 then set l = 1, and if A(h) = 1 then set l = 2. Given, at step j, (l − 1)r/2j ≤
j j
a < lr/2j one calls A(h2 ). If A(h2 ) = 0 then (2l − 2)r/2j+1 ≤ a < (2l − 1)r/2j+1 ,
otherwise, (2l − 1)r/2j+1 ≤ a < 2lr/2j+1 .
We remark that Blum and Micali [73](generalised by Long and Wigderson [393]) use
Legendre symbols and square roots modulo p to show this predicate is hardcore in the
group F∗p when g is a primitive element (their method does not work in a prime order
cyclic subgroup).
21.6.15: See Section 7 of Li, Näslund and Shparlinski [387].
21.7.4: This is the same argument as Exercise 21.6.13. One bounds α as (l − 1)p/2j ≤
α < lp/2j and refines (l, j) by computing A1 (2j (mod p)) = MSB1 (α2j (mod p)).
21.7.6: Use the same argument as Exercise 21.6.4. See Fischlin and Schnorr [203] for
details.
21.7.13: Given a DDH instance (g, g a , g b , g c ) one can compute MSB1+ǫ (g c ) and com-
pare with the result of the oracle. If g c = g ab then the results will agree. Repeating
for random self-reduced versions of the original DDH instance gives the result. We refer
to Blake and Garefelakis [62]and Blake, Garefelakis and Shparlinski [63]for details and
generalisation to small subgroups of F∗p and to elliptic curve groups.
i
21.7.14: Just call A(g, (g a )2 , g b ) to get the i-th bit of the representation of g ab .
21.7.15: Call A(g, g a g z , g b ) about m times for uniformly random z. Each output
yields a linear equation over F2 in the unknown bits of g ab . Solving the system of linear
633

equations over F2 gives g ab .

Chapter 22: Digital Signatures Based on Discrete Logarithms


22.1.2: The second and fourth are valid transcripts (i.e., the equation (22.1) is satisfied).
In the third case one even has s0 6∈ hgi.
22.1.3: As2 + B − s′2 ≡ a(As1 − s′1 ) (mod r) which can be solved for a.
22.1.6: If s1 can be guessed then choose any 0 ≤ s2 < r and set s0 = g s2 h−s1 . Send s0
in the first stage of the protocol and respond to the challenge s1 with s2 .
22.1.8: We need a multi-exponentiation (in equation (22.1)) and this is not well-defined
in algebraic group quotients.
22.1.11: If m is a message and (s1 , s2 ) is a signature which is valid for two distinct
public keys hA and hB then we have

s1 = H(mkg s2 h−s s2 −s1


A ) = H(mkg hB ).
1

If s1 = 0 then the signature is valid for all public keys. If s1 6= 0 then h−s s2 −s1
A 6= g hB
1
and
so we have a hash collision of a very special form: namely H(mkR1 ) = H(mkR2 ) where
R1 and R2 are distinct elements of hgi. If the bit-length of the hash output is l such that
2l < r then, by the pigeonhole principle, there must be distinct R1 , R2 ∈ hgi such that
H(mkR1 ) = H(mkR2 ). Indeed, one expects many such pairs if 2l is significantly smaller
than r.
Even when 2l is significantly larger than r then, by the birthday paradox, one expects
there to be a collision of the form H(mkR1 ) = H(mkR2 ). However, if 2l is larger than r2
then the probability of such a collision is rather low.
As for security, the existence of two keys for which the same signature is valid on
the same message is not considered to lead to any practical attack in any real-world
application. In any case, it appears to be impossible to construct the actual signatures,
given the hash collision H(mkR1 ) = H(mkR2 ), without solving at least two instances of
the discrete logarithm problem.
22.1.12: Change s2 to k − as1 (mod r). All the security arguments are unchanged by
this.
22.2.2: Write the verification equation as

hF (s1 ) ss12 g r−H(m) = 1.

If gcd(s2 , #G) = 1, and h is known to lie in hgi then this equation (together with the
possibly simpler check that s1 ∈ G) implies s1 ∈ hgi.
One sees that Verify is computing a 3-dimensional multi-exponentiation with expo-
nents all general elements of Z/rZ and only one base fixed. Using basic methods (i.e.,
not windows or signed expansions) this computation will require roughly twice the cost
of the computation in Example 22.1.13. Even when using more advanced methods such
as windows, the fact that s1 is a variable base in the multi-exponentiation is always a
disadvantage. An additional cost in Elgamal signature verification (also for signing) is
computing a hash function whose image is Z/rZ rather than {0, 1}l .
22.2.3: Hint: Set s1 = g u h for random 0 ≤ u < r.
Set s1 = g u h for random 0 ≤ u < r. and s2 = −F (s1 ) (mod r). One checks that

hF (s1 ) ss12 = hF (s1 ) g −uF (s1 ) h−F (s1 ) = g −uF (s1 ) .

Hence, taking m = −uF (s1 ) (mod r) gives the existential forgery, More elaborate versions
of this attack are given in Section 4.2.3 of Elgamal [192].
634 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

22.2.4: Hint: Use the Chinese remainder theorem.


Given the public key h and a message with hash H(m) the adversary chooses a random
′ −1
1 ≤ s′1 , s2 < r and computes s′′1 = (g H(m) h−s1 )s2 (mod p). Now, compute an integer s1
using the Chinese remainder theorem so that

s1 ≡ s′′1 (mod p) and s1 ≡ s′1 (mod r).

Note that, under our assumptions, F (s1 ) = s′1 and that sr1 ≡ 1 (mod p). Then hF (s1 ) ss12 =
′ ′
hs1 g H(m) h−s1 ≡ g H(m) (mod p) as required.
22.2.6: Hint: If s1 = ur for any u ∈ N then hF (s1 ) = 1.
Set s1 = ur where u ∈ N is such that r divides the order of s1 modulo p (this is easy).
The trick is to set
(p−1)/r
g = s1 (mod p)
which is a generator for the subgroup of order r. Use g as part of the system parameters
of the scheme. The signature forgeries will all use the same value for s1 (actually, s1 can
be varied by multiplying s1 by any integer of order dividing (p − 1)/r modulo p, as long
as one always treats s1 as an integer and does not reduce modulo p). Note that, for any
public key h, we have hF (s1 ) = 1. Finally, set s2 = H(m)(p − 1)/r ∈ N so that

hF (s1 ) ss12 ≡ ss12 ≡ g H(m) (mod p).

A nice extension of the attack, which works even when the checks on s1 and s2 are
performed, is to choose s1 = ur so that 1 < s1 < p and so that the order of s1 modulo p
is equal to r. One can then take g = s1 . Though it is easy to arrange that the order of
s1 equals r, it does not seem to be easy to do this while still keeping the integer ur less
than p.
More details of these attacks, and variants in the case where g is a primitive root, are
given in Bleichenbacher [66].
22.2.7: Try random m1 , m2 until r = |H(m1 ) − H(m2 )| is prime. See Vaudenay [615].
22.2.8: The signatures are all valid with probability at most 1/r. So suppose signature
j is not valid. Choose all wi for i 6= j randomly. Then there is at most a 1/(r − 1) chance
that wj is chosen so that the equation is satisfied.
Qt w F (s ) �t
When all hi = h just replace i=1 hi i 1,i by h i=1 wi F (s1,i ) . One can now break
the equation into t/3 3-dimensional multi-exponentiations, so the total cost is about 1/3
of the naive case.
For the third part see Yen and Laih [637].
It seems to be impossible to do something similar for Schnorr signatures since each
g s2 h−s1 needs to be computed separately. √
22.2.12: First, precompute and store g1 = g ⌈ r⌉ . Then, for each signature (s0 , s2 )
to be verified run Euclid’s algorithm√on inputs u2 = F (s0 )s−1 2 (mod r) and r until the
current remainder is approximately r. Write v for the resulting coefficient of u2 (in
the notation of Section 2.3 this is v = si which clashes horribly with the√notation of this
chapter). By part √ 6 of Lemma 2.3.3 it follows that √ v, vu2 (mod r) ≈ r. Also, write
u1 v ≡ w0 + w1 ⌈ r⌉ (mod r) with 0 ≤ w0 , w1 < r. Equation 22.6 can therefore be
written as
g w0 g1w1 hu2 v s−v
0 =1

All exponents in this 4-dimensional multi-exponentiation are of size roughly r. For
further details see Antipa et al [11]. A further improvement in [11] (assuming the public
key also contains a suitable power of h) leads to a 6-dimensional multi-exponentiation
with exponents of size r1/3 .
635

22.2.13: There is no algorithm to generate signatures!


2 2
22.2.18: For example, to check that g a is correct one can test whether g2a ) = 
 e(g1 , −1
(m+a)
e(ψ(g2a ), g2a ). The other elements are tested similarly. For the second part, e g1 , g2a , g2m
should equal z.

Chapter 23: Public Key Encryption Based on Discrete Logarithms

23.1.5: The proof proceeds almost identically to the previous case, except that when a
decryption query is made then only one call to the oracle is required. This variant is
studied in Section 10.4 of Cramer and Shoup [161].
23.1.7: Given a Hash-DH instance (g, g a , g b ), if one can compute g ab then one can
compute kdf(g ab ). Given a DDH instance (g, g a , g b , g c ) one can compute K = kdf(g c )
and, using an oracle for Hash-DH, distinguish it from kdf(g ab ).
23.2.2: For the first statement note that for each 0 ≤ z1 < r there is a unique choice
for z2 . The second statement is straightforward. For the final statement write g2 = g1w ,

h = g1v and u2 = g k with 0 ≤ k ′ < r and k ′ 6= k. The fact that (z1 , z2 ) ∈ Xg1 ,g2 ,h imposes
the linear equation z1 + wz2 ≡ v (mod r). To prove the result we need to show that one
can simultaneously solve kz1 + k ′ wz2 ≡ x (mod  r) for any 0 ≤ x < r. The result follows
since the determinant of the matrix k1 kw ′
w is not zero modulo r.
23.2.7: First has v 6∈ G, second does not satisfy equation (23.1), third has message
m = 1.
23.2.11: The adversary returns eu1−z1 u2−z2 just as the Decrypt algorithm does.
23.2.12: Given a challenge ciphertext (u1 , u2 , e, v) compute u′1 = u1 g1 , u′2 = u2 g2 and
e = eh (these are g1k+1 , g2k+1 and mhk+1 respectively). Then compute α′ = H(u′1 , u′2 , e′ )

′ ′
and set v = (u′1 )x1 +y1 α (u′2 )x2 +y2 α . Calling the decryption oracle on (u′1 , u′2 , e′ , v ′ ) gives
m.
23.2.13: Let u1 be a random element of F∗p of order l. Set u2 = ua1 and v = ub1 for
random 1 ≤ a, b < l and choose any e ∈ F∗p . Call the decryption oracle on (u1 , u2 , e, v).
With probability 1/l the decryption oracle does not return ⊥, and indeed returns some
message m. One therefore has

r−z1 +a(r−z2 )
u1 = me−1 .

Since it is easy to compute the discrete logarithm of me−1 to the base u1 when l is small
one obtains a linear equation in z1 and z2 modulo l. Repeating the attack and solving
gives z1 (mod l) and z2 (mod l).
Q
If p − 1 has distinct small prime factors l1 , . . . , lt so that ti=1 li > r then, by repeating
the above attack, one can determine the private key uniquely using the Chinese remainder
theorem.
23.3.7: Just set c′2 = c2 ⊕ s for some non-zero string s ∈ {0, 1}l and query the
decryption oracle on (c1 , c′2 ) to get m ⊕ s.
23.3.8: Given Qid = H1 (id) set R = ψ(g)Qid . Invert the hash function to find an
identity id′ such that H1 (id′ ) = R. Then request the private key for identity id′ to receive
R′ = Rs = (ψ(g)Qid )s . One can obtain Q′id = Qsid as R′ ψ(g ′ )−1 .
23.3.10: If one can solve CDH in GT then compute z = e(Q, g), z1 = e(Q, g a ) = z a
and z2 = e(Q, g b ) = z b . Then the solution to the CDH instance (z, z1 , z2 ) is the required
value. The case of CDH in G2 is similar.
636 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

Chapter 24: The RSA and Rabin Cryptosystems


24.1.1: Repeatedly choose random primes p such that 2κ/2−1 < p < 2κ/2 and let q =
⌈u2κ−l /p⌉ + ǫ (where ǫ ∈ Z≥0 is small, possibly just always zero) until q is prime and the
top l bits of pq are equal to u.
24.1.3: Generate random primes r2 and r3 of the required size. Generate a random
prime r1 of the form 2r3 x + 1 for suitably sized x ∈ N. Solve p0 ≡ 1 (mod r1 ) and
p0 ≡ −1 (mod r2 ) using the Chinese remainder theorem. Try random y ∈ N of the
appropriate size until p = p0 + r1 r2 y is prime. This algorithm is due to Gordon [264].
24.1.5: See Galbraith, Heneghan and McKee [220]. For parameter restrictions see
[220] and [69].
24.1.7: One finds that m ≡ 385699 (mod p) and m ≡ 344504 (mod q). Solving
(385699 + px)3 ≡ c (mod p2 ) gives x = 1177 and m ≡ 1234567890 (mod p2 ). Performing
another iteration of Hensel lifting gives the same value for m, as does the CRT. Hence,
m = 1234567890.
24.1.10: Instead of computing cdp (mod p) and cdq (mod q) for two primes p, q ≈ 22500
we compute one exponentiation where p and dp are 1/5 the size. Hence the cost is about
1 1 2.58
2(5) ≈ 0.008 the time.
For the attack, choose an integer m > p (e.g., m = 2600 ), compute c = me (mod N )
and then ask for c to be decrypted. On receipt of the message 1 < m′ < p one knows
me ≡ (m′ )e (mod p) and so m′ ≡ m (mod p). Hence, gcd(N, m − m′ ) yields p.
24.1.11: An adversary can choose messages m0 and m1 such that ( mN0 ) = −1 and
m1
( N ) = 1 and then compute the Jacobi symbol of the challenge ciphertext to decide
which message was encrypted.
24.1.18: Suppose A is a perfect algorithm that, on input an RSA public key (N, e),
outputs the private key d. Let N = pq be an integer to be factored. Choose 1 < e < N
uniformly at random. If A(N, e) =⊥ then repeat for another choice of e (in this case one
knows that gcd(e, ϕ(N )) 6= 1; this information is potentially useful, but we ignore it in
our proof). Let d = A(N, e). It follows that ed − 1 is a multiple of λ(N ) and so the result
follows from Lemma 24.1.17.
The expected number of trials to find e coprime to ϕ(N )√is bounded by O(log(N ))
(since that is the number of trials to choose a prime value e > N ). The reduction there-
fore requires at most polynomially many queries to the oracle A and at most polynomially
many bit operations.
24.1.19: The idea is to note that ϕ(N ) = N − (p + q) + 1 so one can compute p and
q by taking the roots of the quadratic x2 + (ϕ(N ) − N − 1)x + N (which can be done
deterministically using numerical analysis).
24.1.20: Let N = pq where p and q are odd. We use the same ideas as Lemma 24.1.17.
One chooses a random 1 < g < N and computes gcd(g, N ) (if this is not equal to 1 then
we have factored N ). Use A to determine the order M of g modulo N . With probability
at least 3/4 one has M even (so if M is odd then repeat for a different choice of g). Now,
with probability at least 1/2, gcd((g M/2 (mod N )) − 1, N ) factors N (this final argument
uses careful analysis of the 2-adic valuations of p − 1 and q − 1).
24.1.24: Given a small number of message-signature pairs (mi , si ) one expects to
find two messages mi , mj such that gcd(H(mi ), H(mj )) = 1 (by Theorem A.14.4, the
probabilty for each pair is at least 0.6). Euclid gives integers s, t such that sH(mi ) +
tH(mj ) = 1. If si and sj are the corresponding signatures then
(ssi stj )e ≡ a (mod N )
and so an e-th root of a has been computed. One can then forge signatures for any
message.
637

24.2.5: m = 1234567890 in all three cases.


24.2.6: One determines b2 by computing ( Nc ). One then computes c′ = cu−b 2
2
and

c
determines b1 by computing ( p ).
24.2.7: The advantage is that one speeds up the square roots modulo p, q and r since
the primes are smaller (see Example 24.1.4). The main disadvantage is that there are
now 8 square roots, in general, to choose from.
24.2.8: There are still only four square roots (two square roots modulo p, which
correspond via Hensel lifting to two square roots modulo pr , and similarly for q). Hence,
one can use exactly the same type of redundancy schemes as standard Rabin. One speeds
up computing square roots by using the Chinese remainder theorem and Hensel lifting as
in Example 24.1.6. Hence, there is a significant advantage of using Rabin in this way.
x
24.2.17: Choose random 1 < x < N , call oracle on (x2 (mod N ), −( N ), 0) to get x′
and compute gcd(x′ − x, N ).
24.2.21: Now there are nine possible roots to choose from. Redundancy in the message
can be used in this case (the other two redundancy schemes are more directly related to
square roots and cannot be used in this case).
For the security, choose a random integer 1 < x < N that does not satisfy the
redundancy scheme and compute c = x3 (mod N ). Suppose a decryption oracle outputs
a solution x′ to (x′ )3 ≡ c (mod N ) so that x′ satisfies the redundancy scheme. (This
happens with probability at least (1 − 1/2l )7 1/2l .)
Now (x′ )3 − x3 = (x′ − x)((x′ )2 + x′ x + x2 ) ≡ 0 (mod N ). If, say, x′ ≡ x (mod p) and

x 6≡ x (mod q) then one can split N . This situation occurs with probability 2/9. The
idea of cubing is mentioned in Rabin [494].
x
24.2.23: Choose random x such that ( N ) = −1 and set y = A(x4 , x2 , x2 ). Then
gcd(x − y, N ) splits N with probability 1/2. This argument is due to Shmuely [552]. For
precise details see Biham, Boneh and Reingold [57].
24.2.24: Compute ϕ(N ) = 2(N + 2) − M and then solve a quadratic equation to get
p and q.
24.2.26: Use the same method as Exercise 24.2.23.
24.2.27: Since it is hard to choose random points in E(Z/N Z) one cannot apply the
method of Shmuely with the first oracle. For the second oracle one can choose P and
then fit the curve through it. Shmuely’s method then applies.
24.3.1: Obviously, (m1 m2 )2 ≡ m21 m22 (mod N ). But we need to ensure that decryption
of the product of the ciphertexts really does return the product of the messages. Since
there is no guarantee that m1 m2 (mod N ) has the correct bit pattern in the l least
significant bits, redundancy in the message is not suitable for homomorphic encryption.
The “extra bits” redundancy does not work either, since the least significant bit of
m1 m2 (mod N ) may not be the product (or any other easily computable function) of the
least significant bits of m1 and m2 .
Finally, the Williams redundancy scheme is also not compatible with homomorphic
encryption, since P (m1 )P (m2 ) (mod N ) is almost always not equal to P (m1 m2 ) (mod N ).
24.3.5: See Paillier [474].
24.3.7: One computes cp−1 (mod p2 ) to get 1 + pq(p − 1)m ≡ 1 + p(−qm) (mod p2 )
and hence m (mod p). Similarly one computes m (mod q) and hence m (mod N ). As with
standard RSA decryption using the CRT, we replace one modular exponentiation with
two modular exponentiations where the exponents and moduli are half the size. Hence,
the new method is about 3 times faster than the old one.
24.3.8: The security depends on the decisional problem: Given y, is y ≡ hx (mod N 2 )
for some integer 0 ≤ x < 2k . This problem is a variant of composite residuosity, and
638 APPENDIX B. HINTS AND SOLUTIONS TO EXERCISES

also similar to the discrete logarithm problem in an interval; see Exercise 13.3.6 and
Section 14.5 for algorithms to solve this problem in O(2k/2 ) multiplications.
Suppose a user’s public key consists of elements hi = uN 2
i (mod N ) for 1 ≤ i ≤ l
(i−1)/l
(where the ui are all random, or perhaps ui = u⌊N ⌋
(mod N 2 ) for a randomly chosen
1 < u < N ). To encrypt to the user one could compute c = (1 + N m)ha1 1 · · · hal l (mod N 2 )
using a sliding window multi-exponentiation method, where 0 ≤ a1 , . . . , al < 2k for some
value k (one possibility would be 2k ≈ N 1/l ).
24.3.11: One encrypts 0 ≤ m < N k as
k
c = (1 + mN )uN (mod N k+1 )

where 1 < u < N k is chosen randomly. Decryption requires some care since
 
λ(N )
cλ(N ) ≡ 1 + λ(N )mN + m2 N 2 + · · · (mod N k+1 ).
2

Hence, one first determines m (mod N ) from the coefficient λ(N )mN , and then m (mod N 2 )
from the coefficients of N and N 2 , and so on. This variant is not used in practice, since
messages are usually small for public key encryption.
24.3.12: Since (uN )p−1 ≡ 1 (mod p2 ) for any u ∈ (Z/N Z)∗ we have

g p−1 ≡ (1 − p)p−1 ≡ 1 − (p − 1)p ≡ 1 + p (mod p2 ).

It follows that cp−1 ≡ (1 + p)m ≡ 1 + pm (mod p2 ). The homomorphic property follows


since g m1 uN
1 g u2 = g m1 +m2 (u1 u2 )N .
m2 N

Let G be the subgroup of (Z/N Z)∗ of order (p − 1)(q − 1) containing all p-th powers
in (Z/N Z)∗ . In other words, g 6∈ G. This subgroup is unique. Determining whether a
ciphertext c encrypts a message m is precisely determining whether cg −m ∈ G. Okamoto
and Uchiyama call this the p-subgroup problem.
Finally, suppose one has a decryption oracle for this scheme. Choose an integer x >
N 1/3 , compute c = g x (mod N ), and let m be the message returned by the decryption
oracle. Then m ≡ x (mod p) and so p = gcd(x − m, N ).
24.4.4: One uses the formula c(m−1 e e
1 ) ≡ m2 (mod N ) to obtain a time/memory
tradeoff. We refer to [82] for discussion of the success probability of this attack.
24.4.5: 535573983004, 7873538602921, 8149260569118, 3195403782370.
24.4.6: Either set F1 (x) = xe − c1 and F2 (x) = (ax + b)e − c2 as before, or reduce to
the previous case by multiplying c2 by a−e (mod N ).
24.4.7: To find m we construct the polynomials F1 (x) = x3 − c1 and F2 (x) = (x +
10 3
2 ) − c2 and compute their gcd as polynomials in the ring ZN [x]. The result is

G(x) = x − 1234567890

from which we deduce that m = 1234567890.


24.4.8: Write F1 (x) = xe − c1 and F2 (y) = y e − c2 . Take the resultant of F1 (x) and
P (x, y) with respect to x to obtain a polynomial F (y) of degree de in y. Then compute the
gcd of F (y) and F2 (y). The complexity is dominated by the computation of the resultant,
which is the determinant of a (d + e) × (d + e) matrix whose entries are polynomials of
degree at most d. Using naive Gaussian elimination gives the result. For further details
see Coppersmith et al [144]. √
24.4.11: Let m be the message to forge and try to find x, y ≈ N such that

(P + x) ≡ (P + m)(P + y) (mod N ).
639

Then x and y satisfy x − y(P + m) ≡ P^2 − P + Pm (mod N). In other words, we seek a small solution to x − Ay ≡ B (mod N) for fixed A, B and N. We have seen how to solve such a problem in Section 11.3.2 under the name “Gallant-Lambert-Vanstone method”.
24.4.12: Take P = A^{−1}B (mod N).
24.5.1: One checks a guess for d by testing whether y^d ≡ x (mod N) for some precomputed pair 1 < x < N and y = x^e (mod N). If one precomputes y^2 (mod N) then one can compute the next value y^{d+2} (mod N) from y^d using a single modular multiplication as y^d · y^2 (mod N). The total complexity is therefore O(dM(log(N))) bit operations.
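For illustration (not from the text), a Python sketch of this search:

def find_small_d(N, e, bound):
    x = 2                          # fixed (ideally random) base with gcd(x, N) = 1
    y = pow(x, e, N)
    y2 = (y * y) % N               # precomputed y^2
    yd = y                         # y^d for the current odd candidate d
    for d in range(1, bound, 2):   # d is odd since ed = 1 + k*phi(N) and phi(N) is even
        if yd == x:
            return d               # candidate; confirm with a second base if desired
        yd = (yd * y2) % N         # y^(d+2) costs one modular multiplication
    return None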
24.5.5: Write λ(N) = ϕ(N)/r where r = gcd(p − 1, q − 1), so that the equation ed = 1 + kλ(N) corresponds to the equation −edr + krN ≈ kru (with u ≈ √N). The Wiener method computes dr, as long as the condition drkru < N holds or, in other words, if d < N^{1/4}/(√3 r). One can determine r as gcd(dr, N − 1) and hence determine d.
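For comparison, an illustrative Python sketch of the basic Wiener continued-fraction attack (the variant above recovers dr in the same way); this is the standard textbook algorithm, not code from the exercise:

from math import isqrt

def wiener(e, N):
    # continued fraction expansion of e/N
    a, b, quotients = e, N, []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    # walk through the convergents k/d of e/N
    k0, k1, d0, d1 = 0, 1, 1, 0
    for q in quotients:
        k0, k1 = k1, q * k1 + k0
        d0, d1 = d1, q * d1 + d0
        k, d = k1, d1
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # check whether X^2 - (N - phi + 1)X + N has an integer root, i.e. whether N splits
        s = N - phi + 1
        disc = s * s - 4 * N
        if disc >= 0:
            root = isqrt(disc)
            if root * root == disc and (s + root) % 2 == 0:
                return d
    return None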
24.5.6: One finds gcd(p − 1, q − 1) = 10 and d = 97.
24.5.7: If (e, k) satisfy ed = 1 + kϕ(N ) then so do (e′ , k ′ ) = (e + lϕ(N ), k + ld). Taking
l large enough one can ensure that duk ′ > N and so the attack does not apply.
24.5.8: Choose random 1 < x < N, set y = x^e (mod N) and, for each odd integer 1 < d_p in turn, compute gcd(x − y^{d_p} (mod N), N).
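An illustrative Python sketch of this search (not from the text):

from math import gcd

def find_small_dp(N, e, bound):
    x = 2                           # fixed base with gcd(x, N) = 1
    y = pow(x, e, N)
    y2 = (y * y) % N
    ydp = y                         # y^(d_p) for the current odd candidate d_p
    for dp in range(1, bound, 2):   # d_p is odd since e*d_p = 1 (mod p-1) and p-1 is even
        g = gcd(x - ydp, N)
        if 1 < g < N:
            return g, dp            # g is a nontrivial factor of N
        ydp = (ydp * y2) % N
    return None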
24.5.10: Since 0 ≤ p + q < 3√N we have 0 ≤ x + y ≤ (p + q − 2)/r < 3N^{1/4} and so v and u give x + y and xy exactly. One can therefore solve the quadratic equation X^2 − vX + u = 0 to find x and hence p.
24.5.11: The first calculations are straightforward. The exhaustive search is performed by trying each value for c and testing whether r^2 c^2 + (2rv + 4)c + v^2 − 4u is a perfect square (for example, using the method of Exercise 2.4.9).
24.5.12: The exponent is lcm(p − 1, q − 1) = lcm(xr, yr). Choose random 1 < z < N, let g = z^r (mod N) and h = z^{ur} (mod N); then h ≡ g^c (mod N) where 0 ≤ c < √N/r^2. One therefore applies standard algorithms for the discrete logarithm problem, such as in Exercise 13.3.6 or Section 14.5. This gives c modulo the order of g. With overwhelming probability the order of g is much larger than √N and so c is found.
24.5.13: Since we may assume one can factor N − 1 in under 2^{128} bit operations, it follows that a list of possible values for r is known. Assuming this list is short, one can apply the attack in Exercise 24.5.12 for each possible value for r, to compute p. The cost of the attack for a given guess for r is O(N^{1/4}/r) modular multiplications. To have N^{1/4}/r > 2^{128} means r < 2^{3072/4−128} = 2^{640}.
24.5.14: Small r can be found by factoring N − 1. Large r makes N vulnerable to a Pollard-rho-style attack with the map x ↦ x^{N−1} + 1 (mod N).
If r is known then one can determine p + q modulo r, and hence perform a similar attack
to Exercise 24.5.12 that will split N in O(N 1/4 /r) ring operations if p and q are of similar
size.
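An illustrative Python sketch of such a rho-style search (not from the text; the starting point and step bound are arbitrary choices):

from math import gcd

def rho_shared_factor(N, x0=2, max_steps=10**6):
    # Floyd cycle-finding with the map x -> x^(N-1) + 1 (mod N).
    # If r divides gcd(p-1, q-1) then r divides N-1, so x^(N-1) mod p lies in a
    # subgroup of order at most (p-1)/r and collisions modulo p appear unusually quickly.
    f = lambda x: (pow(x, N - 1, N) + 1) % N
    slow = fast = x0
    for _ in range(max_steps):
        slow = f(slow)
        fast = f(f(fast))
        g = gcd(slow - fast, N)
        if 1 < g < N:
            return g
        if g == N:
            return None            # unlucky start; retry with another x0
    return None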
24.6.2: See Appendix A of Coron [149].
24.6.3: If (h/N) = −1 then set f = 2, otherwise f = 1. Then (fh/N) = 1. Now, if (fh/p) = −1 then set e = −1, otherwise e = 1. It follows that (efh/p) = 1 and (efh/q) = 1.
24.6.5: Use the fact that, for any 1 ≤ a < p, (a^{(p+1)/4})^2 ≡ (a/p) a (mod p).
24.6.13: If e | H_2(m‖s_1) then one can compute the e-th root of H_1(id), which is the private key of the user.
24.6.14: Generating a signature is the same as in the original Shamir scheme. For the selective forgery, suppose (s_1, s_2) is a valid signature for identity id on a message m where gcd(e, H_2(m)) = 1. Let m′ be the message to forge. There are integers a, b ∈ Z such that ae − bH_2(m′) = −H_2(m). Set s_1′ = s_1^b and s_2′ = s_2 s_1^a so that (s_1′, s_2′) is a valid signature on m′.
24.6.15: The signature is s = s_id^{H_2(m)} (mod N).
For the attack, suppose s_1 and s_2 are valid signatures for identity id on messages m_1 and m_2 such that gcd(H_2(m_1), H_2(m_2)) = 1. Let a, b ∈ Z be such that aH_2(m_1) + bH_2(m_2) = 1. Then

(s_1^a s_2^b)^e ≡ H_1(id) (mod N)

and the user's private key is obtained.
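An illustrative Python sketch of this combination step (not from the text; it assumes gcd(H_2(m_1), H_2(m_2)) = 1 and that s_1, s_2 are invertible modulo N):

def recover_key(s1, s2, h1, h2, N):
    # extended Euclid: a*h1 + b*h2 = gcd(h1, h2) = 1
    old_r, r = h1, h2
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    assert old_r == 1
    # s1^a * s2^b is the user's private key; Python 3.8+ pow() accepts
    # negative exponents with a modulus when the base is invertible
    return (pow(s1, old_a, N) * pow(s2, old_b, N)) % N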
24.7.2: If both the top and bottom bits are 1 then let c′ = c2^e (mod N). Depending on the precise sizes of N and r, the decryption oracle on c′ returns either m′ = 2 + 4m + 2^{258} r + 2^{3072} or m′ − N. Since N is odd it is easy to determine which case has arisen and, in the latter case, one simply adds N back again. Solving for m is immediate.
24.7.6: For the proof to go through it is necessary that κ1 > 2(κ + 2)/3 and n =
κ − κ0 − κ1 < κ/9. For full details see Boneh [74].
Chapter 25: Isogenies of Elliptic Curves
25.1.1: Since λ(P) = O if and only if P = O it follows that ker(λ ◦ φ) = ker(φ). On the other hand, ker(φ ◦ λ) = λ^{−1}(ker(φ)). So if λ does not fix ker(φ) then the isogenies are not equivalent.
25.1.3: Just add composition with a power of Frobenius.
25.1.4: The existence of ψ1 and ψ2 follows from applying Theorem 9.6.18 to φ2 ◦ φ1 :
E → E2 .
25.1.9: The kernel of φ is the group G. Apply Vélu's formula and write the function X as a rational function in x. Once the result for φ_1(x) is proved, the result for φ_2(x, y) follows from Theorem 9.7.5.
25.1.11: The calculation is essentially the same as a calculation in the proof of Theo-
rem 9.7.5.
25.1.14: One performs d steps, each step being some arithmetic operations in F_{q^n}. The x-coordinates of points in G are all roots of a polynomial of degree (d − 1)/2 when d is odd, or roots of yF(x) where deg(F(x)) < d/2 when d is even. The y-coordinates are defined over quadratic extensions.
It is not quite true that n < d in general. Indeed, F_{q^n} is a compositum of fields of degrees corresponding to the degrees of the irreducible factors, so n can be bigger than d (e.g., a kernel polynomial that splits as a quadratic times a cubic). When d is prime there are no such problems, as the kernel subgroup is generated by a single x-coordinate and a quadratic extension for y.
25.1.17: Note that t(Q) = 2F_x(Q) − a_1 F_y(Q) = 2(3x_Q^2 + 2a_2 x_Q + a_4 − a_1 y_Q) − a_1(−2y_Q − a_1 x_Q − a_3), and re-arranging gives the result. Similarly, u(Q) = F_y(Q)^2 = (2y_Q + a_1 x_Q + a_3)^2 = 4(y_Q^2 + y_Q(a_1 x_Q + a_3)) + (a_1 x_Q + a_3)^2, and one can replace y_Q^2 + y_Q(a_1 x_Q + a_3) by x_Q^3 + a_2 x_Q^2 + a_4 x_Q + a_6.
25.1.18: The first statement uses x_Q = x − (x − x_Q); the rest are equally easy. For example, the final statement follows from x_Q^3 = x^3 − 3x^2 x_Q + 3x x_Q^2 − (x − x_Q)^3 together with some of the earlier cases.
25.1.20: Combine Theorem 25.1.6 with Exercise 25.1.17. Then simplify the formulae
using Exercises 25.1.18 and 25.1.19. For details see pages 80-81 of [383].
25.2.2: One computes the powers of j(E) in F_q in O(ℓM(log(q))) bit operations. In the polynomial Φ_ℓ(x, y) there are O(ℓ^2) terms to consider and the coefficients are of size O(ℓ log(ℓ)), so reducing each coefficient to an element of F_q requires (the hardest case being when q is prime and large) O(ℓ log(ℓ) log(q)) or O(M(ℓ log(ℓ))) bit operations. We also need to multiply each coefficient by a suitable power of j(E). The total cost is therefore O(ℓ^2(ℓ log(ℓ) log(q) + M(log(q)))) bit operations. The resulting polynomial requires O(ℓ log(q)) bits. The claim about root finding is immediate from Exercise 2.12.5.
25.2.3: The first statement follows since j(E) = j(E ′ ) and the Fq -rational ℓ-isogenies
are given by the roots of Φℓ (j(E), y). For the second statement, let φ : E → E ′ be the
isomorphism and consider the map EndFq (E) → EndFq (E ′ ) given by ψ 7→ φ ◦ ψ ◦ φ−1 .
One can check that this is a ring homomorphism and, since φ is an isomorphism, it is
surjective.
25.3.15: The eigenvalues are 3, √2, 1, 0, −√2, −2. We have λ(X) = √2 < 2√(3 − 1), so the graph is Ramanujan. We have δ_v({1}) = {2, 4}, δ_v({1, 2}) = {3, 4, 5} and δ_v({1, 3}) = {2, 4, 6}. The expander property is easy to verify. It also follows from equation (25.5) and λ_1(X) = √2, giving #δ_v(A) ≥ ((3 − √2)/6)#A > 0.26#A.
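Such eigenvalue checks are easy to carry out numerically; here is an illustrative sketch with numpy (the adjacency matrix A is whatever regular graph one wishes to test, not necessarily the one in the exercise):

import numpy as np

def is_ramanujan(A, tol=1e-9):
    # A: adjacency matrix of a connected k-regular (multi)graph
    A = np.asarray(A, dtype=float)
    k = int(round(A.sum(axis=1)[0]))               # regular degree
    eig = np.linalg.eigvalsh(A)                    # real eigenvalues (A is symmetric)
    nontrivial = [x for x in eig if abs(abs(x) - k) > tol]
    return all(abs(x) <= 2 * (k - 1) ** 0.5 + tol for x in nontrivial)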
25.3.16: Let n > 2/c be even and let X be the 2-regular “line” on n vertices (vertex i, for 1 < i < n, connected to vertices i − 1 and i + 1, and vertices 1 and n having single loops). Let A = {1, . . . , n/2} so that #A = n/2 and c#A > 1. But δ_v(A) = {n/2 + 1}.
25.3.18: We have h(−p) = 5 and ⌊p/12⌋ + 1 = 9, so there are nine isomorphism classes of supersingular elliptic curves over F_{p^2}, five of which are defined over F_p.
Label the vertices of the graph as 24, 80, 23, 69, 34, α, ᾱ, β and β̄. The values α and ᾱ are the roots of t^2 + 84t + 73, while β and β̄ are roots of t^2 + 63t + 69. The graph is 3-regular. Vertices 24 and 80 have two loops, and an edge to 23. Vertex 23 has an edge to 69. Vertex 69 has a loop and an edge to 34. Vertex 34 has edges to α and ᾱ. Vertex α has edges to ᾱ and β. Vertex ᾱ has edges to α and β̄. Finally, vertex β has two edges to β̄.
25.3.19: The graph has ⌊p/12⌋ + 1 = 2 vertices. So the vertices are j = 0 and
j = 1728 ≡ 1 (mod 11). Using the modular polynomial Φ2 (x, y) one finds there are three
edges from 0 to 1 and two edges from 1 to 0 (and a loop from 1 to itself). Hence, at least
two of the three isogenies from 0 to 1 have the same dual isogeny.
25.3.20: Take p = 131. There are 10 supersingular j-invariants in Fp . Taking 2-
isogenies gives two components: one containing the j-invariants {25, 82} and the other
with {0, 10, 28, 31, 50, 62, 94, 113}.
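One way to reproduce this computation (a sketch, not from the text) is to take the supersingular j-invariants listed above and, for each of them, find the roots in F_p of the classical modular polynomial Φ_2(j, y):

p = 131
ss = [0, 10, 25, 28, 31, 50, 62, 82, 94, 113]   # supersingular j-invariants from the solution

def phi2(x, y):
    # the classical modular polynomial Phi_2(x, y)
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

# 2-isogeny neighbours of each supersingular j-invariant: roots of Phi_2(j, y) mod p
graph = {j: [y for y in range(p) if phi2(j, y) % p == 0] for j in ss}
print(graph)   # the two components {25, 82} and {0, 10, 28, 31, 50, 62, 94, 113} can be read off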
25.4.9: The graph has two components. Both are trees with a root and 4 leaves. One
component has root 42 and leaves 33, 51, 35, 57. The other component has root 14 and
leaves 44, 4, 18, 32. The diameter of XE,Fq ,{2,3} is 2.
25.4.10: Clearly, j = 11 and j = 31 are on the floor. It follows that End(E) is the order of index 2 in Z[(1 + √−7)/2]. Hence, End(E) ≅ Z[√−7].
Since the isogenies from j = 10 to 11 and 31 are descending it follows that the
ascending isogeny is to j = 29 and so this curve is on the surface.
25.5.2: The solution to the isogeny problem is a chain of length O(log(q)) of prime-degree isogenies. Assuming that the chain is represented as a sequence of j-invariants, the cost of the Elkies and Vélu steps is O(ℓ^{2+ε} log(q)^{1+ε}) bit operations. Since ℓ = O(log(q)^m), the cost for each isogeny is bounded by O(log(q)^{2m+1+ε}) bit operations. The total cost is therefore O(log(q)^{2m+2+ε}) bit operations.
25.5.3: Dijkstra's algorithm has complexity linear in the number of vertices, so it needs more than √q operations to find the chain. The advantage is that the isogeny itself can be computed faster, but probably only by a constant factor.
Chapter 26: Pairings on Elliptic Curves
26.1.1: A function f such that div(f) = D_1 is defined up to multiplication by an element of k̄^∗. So let f be such a function and write f(D_2) = ∏_P f(P)^{n_P}. Then (uf)(D_2) = u^{∑_P n_P} f(D_2). The term u^{∑_P n_P} is 1 for all u ∈ k̄^∗ if and only if deg(D_2) = 0.
26.3.5: The fact that the divisors of the functions are correct is immediate. To show the functions are normalised at infinity it is necessary to show that the functions y and x are normalised at infinity. To see this note that t_∞^{−3} = (y/x)^3 = y^3/x^3 = y(x^3 + a_2 x^2 + a_4 x + a_6 − a_1 xy − a_3 y)/x^3 = y(1 + u) where u is zero at O_E. Hence, y is normalised at infinity. Similarly, t_∞^{−2} = (y/x)^2 = (x^3 + a_2 x^2 + a_4 x + a_6 − a_1 xy − a_3 y)/x^2 = x + u where u(O_E) = a_2 and so x is normalised at infinity too. It follows that l(x, y) = y − λx + c and v(x, y) = x − c are normalised at infinity.
26.3.8: This follows since if div(f_{n,P}) = n(P) − n(O_E) then div(f_{n,P}^m) = mn(P) − mn(O_E). So take m = N/n.
26.3.10: Let Q_1, Q_2 ∈ E[r] be such that Q_1 ≠ Q_2. Suppose Q_1 and Q_2 were in the same class in E(F_{q^k})/rE(F_{q^k}). Then Q_1 − Q_2 = [r]R for some R ∈ E(F_{q^k}). It would follow that R has order r^2, but the conditions imply that no such group element exists.
26.3.12: If v(x) = (x − a) is a vertical line function over F_q then v(Q) = x_Q − a ∈ F_{q^{k/2}}. It follows that v(Q)^{(q^k−1)/r} = 1.
26.3.17: The first statement follows directly from the definition. To show part 2, first note that div(f_{s,x−s,Q}) = ([s]Q) − s(Q) + (s − 1)(O_E) = div(f_{s,Q}^{−1}). Now, s ≡ q^m ≡ T^m (mod r) and so, by Exercise 26.3.14, we have a power of the ate pairing. Part 3 follows from

div(f_{s,h(x)x,Q}) = ∑_{i=0}^{d} h_i(([s^{i+1}]Q) − (O_E)) = ∑_{i=0}^{d} h_i(([s^i][s]Q) − (O_E)) = div(f_{s,h(x),[s]Q})

and the facts that

a_{s,h(x)}([s]Q, P) = a_{s,h(x)}(π_{q^m}(Q), P) = a_{s,h(x)}(Q, P)^{q^m} = a_{s,h(x)}(Q, P)^s
(this is essentially the same argument as in equation (26.5)). The additive property follows
from the fact that the divisor of a product is the sum of the divisors. The multiplicative
property follows from the additive property and from part 3.
26.5.3: Given P, [a]P, [b]P choose a point Q such that e(P, Q) ≠ 1 and one can invert pairings with respect to that point Q. Compute z = e(P, Q)^{ab} as in Lemma 26.5.2, then call the pairing inversion oracle on (Q, z) to get [ab]P.