Big O-Little o BEST
Definition 1 (Big O notation). Suppose that $f(x)$ and $h(x)$ are functions
such that $h(x) > 0$ for $x > x_0$. We write $f(x) = O(h(x))$ if there exists a
constant $C > 0$ such that
\[
|f(x)| < C h(x)
\]
for all $x > x_1$ with some $x_1 \ge x_0$.
For example,
\[
\sin x = O(1), \qquad x \sin x = O(x).
\]
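For instance, to verify the second of these directly from Definition 1: take $h(x) = x$, which is positive for $x > 0$, and $C = 2$ (any constant larger than $1$ would do); then
\[
|x \sin x| = x\, |\sin x| \le x < 2x \quad \text{for all } x > 0,
\]
so $x \sin x = O(x)$, with $x_1 = 0$ in the definition.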
Example. For all $r \ge 1$ we have
\[
\int_2^x \frac{dt}{\log^r t} = O(x).
\]
Proof. Split the integral at $\sqrt{x}$. In the integral over $[2, \sqrt{x}]$ use $\log^r t \ge \log^r 2$,
so
\[
\int_2^{\sqrt{x}} \frac{dt}{\log^r t} \le \frac{\sqrt{x}}{\log^r 2} = O\!\left(\sqrt{x}\right).
\]
In the integral over $[\sqrt{x}, x]$ use $\log^r t \ge \log^r \sqrt{x} = 2^{-r} \log^r x$. Then
\[
\int_{\sqrt{x}}^x \frac{dt}{\log^r t} \le \frac{2^r x}{\log^r x} = O(x). \tag{1}
\]
Hence
\[
\int_2^x \frac{dt}{\log^r t} = \int_2^{\sqrt{x}} \frac{dt}{\log^r t} + \int_{\sqrt{x}}^x \frac{dt}{\log^r t} = O\!\left(\sqrt{x}\right) + O(x) = O(x).
\]
An important observation is that in (1) the constant depends on $r$; here,
as $r$ grows, the constant $2^r$ grows. We indicate this by writing
\[
\int_{\sqrt{x}}^x \frac{dt}{\log^r t} = O_r(x),
\]
the subscript recording that the implied constant depends on $r$.
Proof. Integration by parts.
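If, say, the statement being proved here is the estimate $\int_2^x \frac{dt}{\log t} = \frac{x}{\log x} + O\!\left(\frac{x}{\log^2 x}\right)$, then a sketch of the computation is
\[
\int_2^x \frac{dt}{\log t} = \left[ \frac{t}{\log t} \right]_2^x + \int_2^x \frac{dt}{\log^2 t} = \frac{x}{\log x} + O(1) + \int_2^x \frac{dt}{\log^2 t},
\]
and splitting the last integral at $\sqrt{x}$, as in the earlier example, gives $O\!\left(\sqrt{x}\right) + O\!\left(x/\log^2 x\right) = O\!\left(x/\log^2 x\right)$, since $\sqrt{x} \le x/\log^2 x$ for all sufficiently large $x$; so the whole expression is $\frac{x}{\log x} + O\!\left(x/\log^2 x\right)$.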
Unfortunately, in this course we do not prove these stronger versions of
the PNT. As an illustration of how the big O notation arises, a weaker form
of the Prime Number Theorem is
\[
\pi(x) = \frac{x}{\log x} + O\!\left( \frac{x}{\log^2 x} \right).
\]
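Unpacked, this says only that there is some constant $C > 0$ such that
\[
\left| \pi(x) - \frac{x}{\log x} \right| \le C\, \frac{x}{\log^2 x}
\]
for all sufficiently large $x$; it gives no information about the size of $C$.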
Other Properties

If $f(x) = O(h(x))$ then $f(x) g(x) = O(h(x) g(x))$ if $g(x) > 0$ for all
sufficiently large $x$; otherwise $f(x) g(x) = O(h(x) |g(x)|)$.

If $f(x) = O(h(x))$ and $g(y) \ge 0$ then
\[
\int_a^x f(y) g(y)\, dy = O\!\left( \int_a^x h(y) g(y)\, dy \right).
\]
Proof of the final point. The assumption $f(x) = O(h(x))$ means that
there exists $C > 0$ such that $|f(x)| \le C h(x)$. Then
\[
\left| \int_a^x f(y) g(y)\, dy \right| \le \int_a^x |f(y) g(y)|\, dy \le \int_a^x C h(y) g(y)\, dy = O\!\left( \int_a^x h(y) g(y)\, dy \right).
\]
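As a small illustration of this property: if $f(y) = O(1)$ (so we may take $h(y) = 1$) and $g(y) = 1/y > 0$, then for $x \ge 1$
\[
\int_1^x \frac{f(y)}{y}\, dy = O\!\left( \int_1^x \frac{dy}{y} \right) = O(\log x).
\]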
Example (a weak form of Stirling's formula). We have
\[
N! = O\!\left( e^{-N} N^{N+1/2} \right).
\]
Exponentiate the estimate $\log N! = \left( N + \tfrac{1}{2} \right) \log N - N + E(N)$, where $E(N) = O(1)$, as
\[
N! = N^{N+1/2} e^{-N} e^{E(N)}.
\]
But $E(N) = O(1)$ means there exists $C > 0$ such that $|E(N)| < C$, i.e.
$-C < E(N) < C$. Thus $e^{-C} < e^{E(N)} < e^{C}$. This again means $e^{E(N)} = O(1)$.

The previous example can be summed up as $e^{O(1)} = O(1)$.
Definition 8. If we have both $f(x) \ll g(x)$ and $g(x) \ll f(x)$, which means
that $f$ and $g$ are both non-negative and there exist positive constants $c$ and
$C$ for which $c\, g(x) \le f(x) \le C g(x)$, we write $f(x) \asymp g(x)$.
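For example, $2x^2 + x \asymp x^2$; for $x \ge 1$ one can take $c = 1$ and $C = 3$, since
\[
x^2 \le 2x^2 + x \le 3x^2 \quad \text{for all } x \ge 1.
\]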
We write $f(x) = o(h(x))$ if
\[
\lim_{x \to \infty} \frac{f(x)}{h(x)} = 0.
\]
Note that $f(x) = o(g(x)) \Rightarrow f(x) = O(g(x))$, but the converse need not
hold. For example, if $p(x)$ is a polynomial of degree $n$ then $p(x) = O(x^n)$
but $p(x) \ne o(x^n)$.
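To see the last claim, write $p(x) = a_n x^n + \dots + a_1 x + a_0$ with $a_n \ne 0$; then
\[
\lim_{x \to \infty} \frac{p(x)}{x^n} = a_n \ne 0,
\]
so the limit required in the definition of little $o$ is not $0$.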
Note that $f(x) = o(1)$ means that $f(x) \to 0$ as $x \to \infty$. For such a function
$f(x)$, its exponential satisfies $e^{f(x)} \to 1$ by continuity. In fact we can say a
little more (and this result is used in the lectures).
Proof. It can be shown (and was done so in MATH2010) that if $|y| < 1/2$
then
\[
|e^y - 1| < 2|y|. \tag{2}
\]
The assumption $f(x) = o(1)$ means that $|f(x)| < 1/2$ for all $x$ sufficiently
large. An application of (2) then gives $|e^{f(x)} - 1| < 2|f(x)| \to 0$, which is the
stated result.
This example could be summed up as $e^{o(1)} = 1 + o(1)$.
Extend the definition of $o$-notation to $f(x) = g(x) + o(h(x))$ if
\[
\lim_{x \to \infty} \frac{f(x) - g(x)}{h(x)} = 0.
\]
We write $f(x) \sim g(x)$ if
\[
\lim_{x \to \infty} \frac{f(x)}{g(x)} = 1.
\]
Note that
\[
\lim_{x \to \infty} \frac{f(x)}{g(x)} = 1 \iff \lim_{x \to \infty} \left( \frac{f(x)}{g(x)} - 1 \right) = 0 \iff \lim_{x \to \infty} \frac{f(x) - g(x)}{g(x)} = 0.
\]
That is, $f(x) \sim g(x)$ if and only if $f(x) = g(x) + o(g(x))$.
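For example, the Prime Number Theorem in its usual form, $\pi(x) \sim x/\log x$, is by this equivalence the statement
\[
\pi(x) = \frac{x}{\log x} + o\!\left( \frac{x}{\log x} \right),
\]
which is weaker than the version with error term $O\!\left(x/\log^2 x\right)$ quoted earlier.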
In Analytic Number Theory you are required to have an understanding
of the relative growths of different functions. For example, can you prove the
following?
Example 14. For any $r > 0$ and $\varepsilon > 0$ (so think of $r$ as large and $\varepsilon$ as small)
we have
\[
\log^r x \ll_{r, \varepsilon} x^{\varepsilon},
\]
for all $x > 1$.
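One possible argument (a sketch): since $\log y \le y$ for all $y \ge 1$, taking $y = x^{\varepsilon/r} \ge 1$ gives $\frac{\varepsilon}{r} \log x \le x^{\varepsilon/r}$, and raising both sides to the $r$-th power,
\[
\log^r x \le \left( \frac{r}{\varepsilon} \right)^r x^{\varepsilon} \quad \text{for all } x > 1,
\]
so one admissible constant is $C(r, \varepsilon) = (r/\varepsilon)^r$.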
Subtle point. This example says that given $r > 0$ and $\varepsilon > 0$ there exists
$C = C(r, \varepsilon)$ such that $\log^r x \le C x^{\varepsilon}$ for all $x > 1$. Yet
\[
\lim_{x \to \infty} \frac{\log^r x}{x^{\varepsilon}} = 0,
\]
so that
\[
\log^r x < x^{\varepsilon}
\]
for all $x > x_0$ (normally rephrased as "for $x$ sufficiently large"). So, in this
case, the $\ll_{r, \varepsilon}$ can be replaced by $<$ on replacing $x > 1$ by $x > x_0$ for some
$x_0$ depending on $r$ and $\varepsilon$.
This is an important example since it can be used to bound the following
integral found in many error terms in the course.
for all $x \ge 2$.
Again the $\ll$ can be replaced by $<$, with the resulting inequalities valid
for $x > x_0$, with $x_0$ depending on the appropriate $\varepsilon$ or $A$.
Using the last example, show that this error term grows more slowly than
$x/\log^A x$, however large $A$ is, but faster than $x^{1-\varepsilon}$, however small $\varepsilon > 0$ is.
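If, for instance, the error term in question has the shape $x \exp\!\left(-c\sqrt{\log x}\right)$ for some constant $c > 0$ (a common shape for such error terms), the comparison comes down to
\[
\exp\!\left(-c\sqrt{\log x}\right) \le \frac{1}{\log^A x} \quad \text{and} \quad \exp\!\left(-c\sqrt{\log x}\right) \ge x^{-\varepsilon}
\]
for all sufficiently large $x$, which follow because $c\sqrt{\log x} \ge A \log\log x$ and $c\sqrt{\log x} \le \varepsilon \log x$ once $x$ is large enough (how large depending on $A$, respectively on $\varepsilon$, and on $c$).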