DIFFERENTIATION UNDER THE INTEGRAL SIGN
KEITH CONRAD
An anti-derivative of $(x+t)^2$ with respect to $x$ is $\frac{1}{3}(x+t)^3$, so

$$\int_0^1 (x+t)^2\,dx = \frac{(x+t)^3}{3}\bigg|_{x=0}^{x=1} = \frac{(1+t)^3 - t^3}{3} = \frac{1}{3} + t + t^2.$$
This answer is a function of t, which makes sense since the integrand depends on t. We integrate
over x and are left with something that depends only on t, not x.
Since an integral like $\int_a^b f(x,t)\,dx$ is a function of $t$, we can ask about its $t$-derivative, assuming that $f(x,t)$ is sufficiently nicely behaved. The rule is: the $t$-derivative of the integral of $f(x,t)$ is the integral of the $t$-derivative of $f(x,t)$:

$$\frac{d}{dt}\int_a^b f(x,t)\,dx = \int_a^b \frac{\partial}{\partial t} f(x,t)\,dx. \tag{1.2}$$
This procedure is called differentiation under the integral sign. Since you are used to thinking
mostly about functions with one variable, not two, keep in mind that (1.2) involves integrals and
derivatives with respect to separate variables: integration is with respect to x and differentiation is
with respect to t.
Example 1.2. We saw in Example 1.1 that $\int_0^1 (x+t)^2\,dx = 1/3 + t + t^2$, whose $t$-derivative is $1 + 2t$. According to (1.2), we can also compute the $t$-derivative of the integral like this:
$$\begin{aligned}
\frac{d}{dt}\int_0^1 (x+t)^2\,dx &= \int_0^1 \frac{\partial}{\partial t}(x+t)^2\,dx \\
&= \int_0^1 2(x+t)\,dx \\
&= \int_0^1 (2x+2t)\,dx \\
&= \left(x^2 + 2tx\right)\Big|_{x=0}^{x=1} \\
&= 1 + 2t.
\end{aligned}$$
The answers agree.
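Example 1.2 is easy to check numerically. The sketch below is our own illustration, not part of the text: it approximates the integral with a composite Simpson rule and the $t$-derivative with a central difference, then compares against $1 + 2t$.

```python
import math

def integral(t, n=2000):
    # Simpson's rule for the integral of (x + t)^2 over [0, 1]
    h = 1.0 / n
    s = t ** 2 + (1 + t) ** 2
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * (i * h + t) ** 2
    return s * h / 3

t = 0.7
eps = 1e-5
# left side of (1.2): differentiate the integral with respect to t
lhs = (integral(t + eps) - integral(t - eps)) / (2 * eps)
# right side of (1.2): integrating the t-derivative gives 1 + 2t in closed form
rhs = 1 + 2 * t
print(lhs, rhs)
```

Since the integrand is a quadratic in $t$, both the quadrature and the central difference are essentially exact here.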
2. Eulers factorial integral in a new light
For integers $n \geq 0$, we have already seen Euler's integral formula

$$\int_0^\infty x^n e^{-x}\,dx = n!, \tag{2.1}$$

which reduces to

$$\int_0^\infty e^{-x}\,dx = 1 \tag{2.2}$$

when $n = 0$. Now we are going to derive (2.1) by repeated differentiation from (2.2) after introducing a parameter $t$ into (2.2).
For any $t > 0$, let $x = tu$. Then $dx = t\,du$ and (2.2) becomes

$$\int_0^\infty t e^{-tu}\,du = 1. \tag{2.3}$$

Dividing by $t$ and writing $u$ as $x$ again,

$$\int_0^\infty e^{-tx}\,dx = \frac{1}{t}.$$

Differentiating both sides of this with respect to $t$, using (1.2) on the left, gives $\int_0^\infty -x e^{-tx}\,dx = -1/t^2$, so

$$\int_0^\infty x e^{-tx}\,dx = \frac{1}{t^2}. \tag{2.4}$$
Differentiate both sides of (2.4) with respect to $t$, again using (1.2) to handle the left side. We get

$$\int_0^\infty -x^2 e^{-tx}\,dx = -\frac{2}{t^3}.$$

Taking out the sign on both sides,

$$\int_0^\infty x^2 e^{-tx}\,dx = \frac{2}{t^3}. \tag{2.5}$$
If we continue to differentiate each new equation with respect to $t$ a few more times, we obtain

$$\int_0^\infty x^3 e^{-tx}\,dx = \frac{6}{t^4},$$

$$\int_0^\infty x^4 e^{-tx}\,dx = \frac{24}{t^5},$$

and

$$\int_0^\infty x^5 e^{-tx}\,dx = \frac{120}{t^6}.$$
Do you see the pattern? It is

$$\int_0^\infty x^n e^{-tx}\,dx = \frac{n!}{t^{n+1}}. \tag{2.6}$$
We have used the presence of the extra variable $t$ to get these equations by repeatedly applying $d/dt$. Now specialize $t$ to 1 in (2.6). We obtain

$$\int_0^\infty x^n e^{-x}\,dx = n!,$$

which is (2.1).
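As a numerical sanity check on (2.6) at $t = 1$ (our own illustration, not part of the text), truncate the integral at $x = 60$, where the tail is negligible for small $n$:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

results = []
for n in range(6):
    # integral of x^n e^{-x} over [0, 60] approximates the integral over [0, oo)
    approx = simpson(lambda x: x ** n * math.exp(-x), 0.0, 60.0, 6000)
    results.append(approx)
    print(n, approx, math.factorial(n))
```

Each approximation should match $n!$ to many decimal places.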
$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}. \tag{3.1}$$

This is important in signal processing and Fourier analysis. Since $(\sin x)/x$ is even, an equivalent formula over the whole real line is $\int_{-\infty}^\infty \frac{\sin x}{x}\,dx = \pi$.
We will work not with $(\sin x)/x$, but with $f(x,t) = e^{-tx}(\sin x)/x$, where $t \geq 0$. Note $f(x,0) = (\sin x)/x$. Set

$$g(t) = \int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx$$

for $t \geq 0$. Our goal is to show $g(0) = \pi/2$, and we are going to get this by studying $g'(t)$ for variable $t$.
Using differentiation under the integral sign,

$$g'(t) = \int_0^\infty \frac{\partial}{\partial t}\left(e^{-tx}\,\frac{\sin x}{x}\right)dx = -\int_0^\infty e^{-tx}\sin x\,dx.$$

The integrand $e^{-tx}\sin x$, as a function of $x$, can be handled by integration by parts:

$$\int e^{ax}\sin x\,dx = \frac{a\sin x - \cos x}{1+a^2}\,e^{ax}.$$

Applying this with $a = -t$ and turning the indefinite integral into a definite integral,

$$g'(t) = -\int_0^\infty e^{-tx}\sin x\,dx = \frac{t\sin x + \cos x}{1+t^2}\,e^{-tx}\bigg|_{x=0}^{x=\infty}.$$
As $x \to \infty$, $t\sin x + \cos x$ oscillates a lot, but in a bounded way (since $\sin x$ and $\cos x$ are bounded functions), while the term $e^{-tx}$ decays exponentially to 0 since $t > 0$. So the value at $x = \infty$ is 0. We are left with the negative of the value at $x = 0$, giving

$$g'(t) = -\frac{1}{1+t^2}.$$
We know an explicit antiderivative of $1/(1+t^2)$, namely $\arctan t$. Since $g(t)$ has the same $t$-derivative as $-\arctan t$ for $t > 0$, they differ by a constant: $g(t) = -\arctan t + C$ for $t > 0$. Let's write this out explicitly:

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = -\arctan t + C. \tag{3.2}$$
Notice we obtained (3.2) by seeing both sides have the same $t$-derivative, not by actually finding an antiderivative of $e^{-tx}(\sin x)/x$.
To pin down $C$ in (3.2), let $t \to \infty$ in (3.2). The integrand on the left goes to 0, so the integral on the left vanishes. Since $\arctan t \to \pi/2$ as $t \to \infty$ we get

$$0 = -\frac{\pi}{2} + C,$$

so $C = \pi/2$. Feeding this back into (3.2),

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t. \tag{3.3}$$
Now let $t \to 0^+$ in (3.3). Since $e^{-tx} \to 1$ and $\arctan t \to 0$, we obtain

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2},$$

so we're done!
Notice again the convenience introduced by a second parameter t. After doing calculus with the
second parameter, we let it go this time to a boundary value (t = 0) to remove it and solve our
problem.
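The closed form (3.3) can be spot-checked numerically. This sketch is ours, not the text's: it truncates the damped integral at $x = 60$, which is harmless when $t$ is bounded away from 0, and compares against $\pi/2 - \arctan t$.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def g(t):
    # g(t) = integral of e^{-tx} sin(x)/x over [0, oo), using the limit value 1 at x = 0
    f = lambda x: math.exp(-t * x) * (math.sin(x) / x if x else 1.0)
    return simpson(f, 0.0, 60.0, 60000)

checks = [(g(t), math.pi / 2 - math.atan(t)) for t in (0.5, 1.0, 2.0)]
for got, want in checks:
    print(got, want)
```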
The formula

$$\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx = 1 \tag{4.1}$$

is fundamental to probability theory and Fourier analysis (and number theory!). The function $\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$ is called a Gaussian, and (4.1) says the integral of the Gaussian over the whole real line is 1.
The physicist Lord Kelvin (after whom the absolute temperature scale is named) once wrote (4.1) on the board in a class and said "A mathematician is one to whom that [pointing at the formula] is as obvious as twice two makes four is to you." Our derivation of (4.1) will not make it seem terribly obvious, alas. If you take further courses you may learn more natural derivations of (4.1) so that the result really does become obvious. For now, just try to follow the argument here step-by-step.
We are going to aim not at (4.1), but at an equivalent formula over the range $x \geq 0$ (after replacing $x$ with $\sqrt{2}\,x$):

$$\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}. \tag{4.2}$$

Make sure you see why (4.1) and (4.2) are equivalent before proceeding.
For $t > 0$, consider (!) the functions

$$A(t) = \left(\int_0^t e^{-x^2}\,dx\right)^2, \qquad B(t) = \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx.$$

Notice $A(t)$ has $t$ as an upper variable of integration, while $B(t)$ has $t$ inside the integrand. If $I$ denotes the integral on the left side of (4.2), then $A(\infty) = I^2$.
We are going to compare $A'(t)$ and $B'(t)$. First we compute $A'(t)$. By the chain rule and the fundamental theorem of calculus,

$$A'(t) = 2\left(\int_0^t e^{-x^2}\,dx\right)\frac{d}{dt}\int_0^t e^{-x^2}\,dx = 2\left(\int_0^t e^{-x^2}\,dx\right)e^{-t^2}.$$
To calculate $B'(t)$ we use differentiation under the integral sign:

$$\begin{aligned}
B'(t) &= \int_0^1 \frac{\partial}{\partial t}\left(\frac{e^{-t^2(1+x^2)}}{1+x^2}\right)dx \\
&= \int_0^1 -2t\,e^{-t^2(1+x^2)}\,dx \\
&= -2e^{-t^2}\int_0^1 t\,e^{-t^2x^2}\,dx \\
&= -2e^{-t^2}\int_0^t e^{-u^2}\,du \quad (u = tx,\ du = t\,dx).
\end{aligned}$$
This is the same as $A'(t)$ except for an overall sign. Thus $A'(t) = -B'(t)$ for all $t > 0$, so there is a constant $C$ such that

$$A(t) = -B(t) + C \tag{4.3}$$

for all $t > 0$.
To find $C$, we let $t \to 0^+$ in (4.3). The left side tends to $\left(\int_0^0 e^{-x^2}\,dx\right)^2 = 0$ while the right side tends to $-\int_0^1 dx/(1+x^2) + C = -\pi/4 + C$. Thus $C = \pi/4$, so (4.3) becomes

$$\left(\int_0^t e^{-x^2}\,dx\right)^2 = \frac{\pi}{4} - \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx \tag{4.4}$$
for all $t > 0$. We already gained information by letting $t \to 0^+$. Now we look in the other direction: let $t \to \infty$. The integrand on the right side of (4.4) goes to 0, so the integral becomes 0 and (4.4) turns into

$$\left(\int_0^\infty e^{-x^2}\,dx\right)^2 = \frac{\pi}{4}.$$

Taking (positive) square roots gives (4.2).
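Both key facts of this section invite a numerical check: $A(t) + B(t)$ should equal $\pi/4$ for every $t$, and the truncated Gaussian integral should approach $\sqrt{\pi}/2$. A quick sketch of our own:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def A(t):
    return simpson(lambda x: math.exp(-x * x), 0.0, t, 2000) ** 2

def B(t):
    return simpson(lambda x: math.exp(-t * t * (1 + x * x)) / (1 + x * x), 0.0, 1.0, 2000)

sums = [A(t) + B(t) for t in (0.5, 1.0, 3.0)]
print(sums, math.pi / 4)              # A(t) + B(t) is constant in t, namely pi/4

gauss = simpson(lambda x: math.exp(-x * x), 0.0, 10.0, 2000)
print(gauss, math.sqrt(math.pi) / 2)  # the t -> oo limit, i.e. (4.2)
```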
5. Higher moments of the Gaussian
For every integer $n \geq 0$ we want to compute a formula for

$$\int_{-\infty}^\infty x^n e^{-x^2}\,dx. \tag{5.1}$$

(Integrals of the form $\int x^n f(x)\,dx$ are called the moments of $f$, which explains the title of this section.) The case $n = 0$ is the formula

$$\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}. \tag{5.2}$$
(The right side is twice the value of (4.2) since the integration is carried out over all real $x$, not just $x \geq 0$.) To get formulas for (5.1) when $n \neq 0$, we follow the same strategy as our treatment of the factorial integral in Section 2: stick a $t$ into the exponent of $e^{-x^2}$ and then differentiate repeatedly with respect to $t$.
For $t > 0$, replacing $x$ with $\sqrt{t}\,x$ in (5.2) gives

$$\int_{-\infty}^\infty e^{-tx^2}\,dx = \frac{\sqrt{\pi}}{\sqrt{t}}. \tag{5.3}$$
Differentiate both sides of (5.3) with respect to $t$, using differentiation under the integral sign on the left:

$$\int_{-\infty}^\infty -x^2 e^{-tx^2}\,dx = -\frac{\sqrt{\pi}}{2t^{3/2}},$$

so

$$\int_{-\infty}^\infty x^2 e^{-tx^2}\,dx = \frac{\sqrt{\pi}}{2t^{3/2}}. \tag{5.4}$$
Differentiate both sides of (5.4) with respect to $t$. After removing a common minus sign on both sides, we get

$$\int_{-\infty}^\infty x^4 e^{-tx^2}\,dx = \frac{3\sqrt{\pi}}{4t^{5/2}}. \tag{5.5}$$
Differentiating both sides of (5.5) with respect to $t$ a few more times, we get

$$\int_{-\infty}^\infty x^6 e^{-tx^2}\,dx = \frac{3\cdot 5\,\sqrt{\pi}}{8t^{7/2}},$$

$$\int_{-\infty}^\infty x^8 e^{-tx^2}\,dx = \frac{3\cdot 5\cdot 7\,\sqrt{\pi}}{16t^{9/2}},$$

and

$$\int_{-\infty}^\infty x^{10} e^{-tx^2}\,dx = \frac{3\cdot 5\cdot 7\cdot 9\,\sqrt{\pi}}{32t^{11/2}}.$$

Quite generally, when $n$ is even

$$\int_{-\infty}^\infty x^n e^{-tx^2}\,dx = \frac{1\cdot 3\cdot 5\cdots(n-1)}{2^{n/2}}\cdot\frac{\sqrt{\pi}}{t^{(n+1)/2}},$$

where the numerator is the product of the positive odd integers from 1 to $n - 1$ (understood to be the empty product 1 when $n = 0$).
In particular, taking $t = 1$ we have computed (5.1):

$$\int_{-\infty}^\infty x^n e^{-x^2}\,dx = \frac{1\cdot 3\cdot 5\cdots(n-1)}{2^{n/2}}\,\sqrt{\pi}.$$
As an application of (5.4), we now compute $(\frac{1}{2})! := \int_0^\infty x^{1/2} e^{-x}\,dx$. Set $u = x^{1/2}$, so $x = u^2$ and $dx = 2u\,du$. Then

$$\int_0^\infty x^{1/2} e^{-x}\,dx = \int_0^\infty u\,e^{-u^2}(2u)\,du = 2\int_0^\infty u^2 e^{-u^2}\,du.$$

From (5.4) at $t = 1$, $\int_0^\infty u^2 e^{-u^2}\,du = \frac{1}{2}\cdot\frac{\sqrt{\pi}}{2} = \frac{\sqrt{\pi}}{4}$ (half the integral over the whole real line), so $\int_0^\infty x^{1/2} e^{-x}\,dx = 2\cdot\frac{\sqrt{\pi}}{4} = \frac{\sqrt{\pi}}{2}$. Thus $(\frac{1}{2})! = \frac{\sqrt{\pi}}{2}$.
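The value $(\frac{1}{2})! = \sqrt{\pi}/2$ agrees with the gamma function, since $x! = \Gamma(x+1)$. A one-line check:

```python
import math

# (1/2)! = Gamma(3/2), which should equal sqrt(pi)/2
print(math.gamma(1.5), math.sqrt(math.pi) / 2)
```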
6. A cosine transform of the Gaussian
We are going to compute the integral

$$I(t) = \int_{-\infty}^\infty \cos(tx)\,e^{-x^2/2}\,dx.$$

Here we are including $t$ in the integral from the beginning. (The corresponding integral with $\sin(tx)$ in place of $\cos(tx)$ is zero since $\sin(tx)e^{-x^2/2}$ is an odd function of $x$.) We calculate $I(t)$ by looking at its $t$-derivative:

$$I'(t) = \int_{-\infty}^\infty -x\sin(tx)\,e^{-x^2/2}\,dx. \tag{6.1}$$
This looks good from the viewpoint of integration by parts since $-x e^{-x^2/2}$ is the derivative of $e^{-x^2/2}$. So we apply integration by parts to (6.1):

$$u = \sin(tx), \qquad dv = -x e^{-x^2/2}\,dx$$

and

$$du = t\cos(tx)\,dx, \qquad v = e^{-x^2/2}.$$
Then

$$\begin{aligned}
I'(t) &= \int_{-\infty}^\infty u\,dv \\
&= uv\,\Big|_{x=-\infty}^{x=\infty} - \int_{-\infty}^\infty v\,du \\
&= \sin(tx)\,e^{-x^2/2}\,\Big|_{x=-\infty}^{x=\infty} - t\int_{-\infty}^\infty \cos(tx)\,e^{-x^2/2}\,dx \\
&= \sin(tx)\,e^{-x^2/2}\,\Big|_{x=-\infty}^{x=\infty} - tI(t).
\end{aligned}$$
As $x \to \pm\infty$, $e^{-x^2/2} \to 0$ while $\sin(tx)$ stays bounded, so the boundary term vanishes. Therefore

$$I'(t) = -tI(t).$$
2
We know the solutions to this differential equation: constant multiples of et /2 . So I(t) = Cet /2
for some constant C:
Z
2
2
cos(tx)ex /2 dx = Cet /2 .
To find $C$, set $t = 0$. The left side is $\int_{-\infty}^\infty e^{-x^2/2}\,dx$, which we computed to be $\sqrt{2\pi}$ in Section 4. Therefore $C = \sqrt{2\pi}$ and

$$\int_{-\infty}^\infty \cos(tx)\,e^{-x^2/2}\,dx = \sqrt{2\pi}\,e^{-t^2/2}.$$
Amazing!
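Here is a numerical spot check of ours for the final formula, truncating the even integrand at $|x| = 12$:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def I(t):
    # the integrand is even in x, so integrate over [0, 12] and double
    return 2 * simpson(lambda x: math.cos(t * x) * math.exp(-x * x / 2), 0.0, 12.0, 12000)

checks = [(I(t), math.sqrt(2 * math.pi) * math.exp(-t * t / 2)) for t in (0.0, 1.0, 2.5)]
for got, want in checks:
    print(got, want)
```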
7. Nasty logs, part I
Consider the following integral over a finite interval:

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx.$$

Since $1/\log x \to 0$ as $x \to 0^+$, the integrand vanishes at $x = 0$. As $x \to 1^-$, $(x^t - 1)/\log x \to t$. Therefore the integrand makes sense as a continuous function on $[0,1]$, so it is not an improper integral.
The $t$-derivative of this integral is

$$\int_0^1 \frac{x^t \log x}{\log x}\,dx = \int_0^1 x^t\,dx = \frac{1}{1+t},$$
which we recognize as the $t$-derivative of $\log(1+t)$. Therefore

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(1+t) + C.$$
To find $C$, set $t = 0$. The integral and the log term vanish, so $C = 0$. Thus

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(1+t).$$
For example,

$$\int_0^1 \frac{x - 1}{\log x}\,dx = \log 2.$$
Notice we computed this definite integral without computing an anti-derivative of $(x-1)/\log x$.
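The formula is easy to test numerically. In this sketch of ours, the integrand is extended to $[0,1]$ by its limit values 0 at $x = 0$ and $t$ at $x = 1$, exactly as in the continuity discussion above:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def integrand(x, t):
    if x == 0.0:
        return 0.0   # since 1/log x -> 0 as x -> 0+
    if x == 1.0:
        return t     # since (x^t - 1)/log x -> t as x -> 1-
    return (x ** t - 1) / math.log(x)

checks = []
for t in (1.0, 2.0, 5.0):
    val = simpson(lambda x: integrand(x, t), 0.0, 1.0, 20000)
    checks.append((val, math.log(1 + t)))
    print(val, math.log(1 + t))
```

The quadrature is only moderately accurate near $x = 0$, where the integrand has unbounded derivatives, but it agrees with $\log(1+t)$ to several digits.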
8. Nasty logs, part II
We now consider the integral

$$I(t) = \int_2^\infty \frac{dx}{x^t \log x}$$

where $t > 1$. The integral converges by comparison with $\int_2^\infty dx/x^t$. We know that at $t = 1$ the integral diverges to $\infty$:
$$\begin{aligned}
\int_2^\infty \frac{dx}{x\log x} &= \lim_{b\to\infty} \int_2^b \frac{dx}{x\log x} \\
&= \lim_{b\to\infty} \log\log x\,\Big|_2^b \\
&= \lim_{b\to\infty} (\log\log b - \log\log 2) \\
&= \infty.
\end{aligned}$$
So we expect that as $t \to 1^+$, $I(t)$ should blow up. But how does it blow up? By analyzing $I'(t)$ and then integrating back, we are going to show $I(t)$ behaves essentially like $-\log(t-1)$ as $t \to 1^+$.
Using differentiation under the integral sign,

$$\begin{aligned}
I'(t) &= \int_2^\infty \frac{\partial}{\partial t}\left(x^{-t}\right)\frac{dx}{\log x} \\
&= \int_2^\infty x^{-t}(-\log x)\,\frac{dx}{\log x} \\
&= -\int_2^\infty \frac{dx}{x^t} \\
&= -\frac{x^{-t+1}}{-t+1}\bigg|_{x=2}^{x=\infty} \\
&= \frac{2^{1-t}}{1-t}.
\end{aligned}$$
We want to bound this derivative from above and below when t > 1. Then we will integrate to get
bounds on the size of I(t).
For $t > 1$, the difference $1 - t$ is negative, so $2^{1-t} < 1$. Dividing both sides by $1 - t$, which is negative, reverses the sense of the inequality and gives

$$\frac{2^{1-t}}{1-t} > \frac{1}{1-t}.$$
This is a lower bound on $I'(t)$. To get an upper bound on $I'(t)$, we want to use a lower bound on $2^{1-t}$. For this purpose we use a tangent line calculation. The function $2^x$ has the tangent line $y = (\log 2)x + 1$ at $x = 0$ and the graph of $y = 2^x$ is everywhere above this tangent line, so

$$2^x \geq (\log 2)x + 1$$

for all $x$. Taking $x = 1 - t$,

$$2^{1-t} \geq (\log 2)(1-t) + 1. \tag{8.1}$$

When $t > 1$, $1 - t$ is negative, so dividing (8.1) by $1 - t$ reverses the sense of the inequality:

$$\frac{2^{1-t}}{1-t} \leq \log 2 + \frac{1}{1-t}.$$
This is an upper bound on $I'(t)$. Combining both bounds,

$$\frac{1}{1-t} < I'(t) \leq \log 2 + \frac{1}{1-t} \tag{8.2}$$

for all $t > 1$.
We are concerned with the behavior of $I(t)$ as $t \to 1^+$. Let's integrate (8.2) from $a$ to 2, where $1 < a < 2$:

$$\int_a^2 \frac{dt}{1-t} < \int_a^2 I'(t)\,dt \leq \int_a^2 \left(\log 2 + \frac{1}{1-t}\right) dt.$$
Using the Fundamental Theorem of Calculus,

$$-\log(t-1)\,\Big|_a^2 < I(t)\,\Big|_a^2 \leq \big((\log 2)t - \log(t-1)\big)\Big|_a^2,$$

so

$$\log(a-1) < I(2) - I(a) \leq (\log 2)(2-a) + \log(a-1).$$
Manipulating to get inequalities on $I(a)$, we have

$$(\log 2)(a-2) - \log(a-1) + I(2) \leq I(a) < -\log(a-1) + I(2).$$

Since $a - 2 > -1$ for $1 < a < 2$, $(\log 2)(a-2)$ is greater than $-\log 2$. This gives the bounds

$$-\log(a-1) + I(2) - \log 2 \leq I(a) < -\log(a-1) + I(2).$$
Writing $a$ as $t$, we get

$$-\log(t-1) + I(2) - \log 2 \leq I(t) < -\log(t-1) + I(2),$$

so $I(t)$ is a bounded distance from $-\log(t-1)$ when $1 < t < 2$. In particular, $I(t) \to \infty$ as $t \to 1^+$.
9. Smoothly dividing by t
Let $h(t)$ be an infinitely differentiable function for all real $t$ such that $h(0) = 0$. The ratio $h(t)/t$ makes sense for $t \neq 0$, but it also can be given a reasonable meaning at $t = 0$: from the very definition of the derivative, when $t \to 0$ we have

$$\frac{h(t)}{t} = \frac{h(t) - h(0)}{t - 0} \to h'(0).$$
Therefore the function

$$r(t) = \begin{cases} h(t)/t, & \text{if } t \neq 0, \\ h'(0), & \text{if } t = 0 \end{cases}$$

is continuous for all $t$. We can see immediately from the definition of $r(t)$ that it is better than continuous when $t \neq 0$: it is infinitely differentiable when $t \neq 0$. The question we want to address is this: is $r(t)$ infinitely differentiable at $t = 0$ too?
If $h(t)$ has a power series representation around $t = 0$, then it is easy to show that $r(t)$ is infinitely differentiable at $t = 0$ by working with the series for $h(t)$. Indeed, write

$$h(t) = c_1 t + c_2 t^2 + c_3 t^3 + \cdots$$

for all small $t$. Here $c_1 = h'(0)$, $c_2 = h''(0)/2!$ and so on. For small $t \neq 0$, we divide by $t$ and get

$$r(t) = c_1 + c_2 t + c_3 t^2 + \cdots, \tag{9.1}$$

which is a power series representation for $r(t)$ for all small $t \neq 0$. The value of the right side of (9.1) at $t = 0$ is $c_1 = h'(0)$, which is also the defined value of $r(0)$, so (9.1) is valid for all small $t$ (including $t = 0$). Therefore $r(t)$ has a power series representation around 0 (it's just the power series for $h(t)$ at 0 divided by $t$). Since functions with power series representations around a point are infinitely differentiable at the point, $r(t)$ is infinitely differentiable at $t = 0$.
However, this is an incomplete answer to our question about the infinite differentiability of $r(t)$ at $t = 0$ because we know by the key example of $e^{-1/t^2}$ (at $t = 0$) that a function can be infinitely differentiable at a point without having a power series representation at the point. How are we going to show $r(t) = h(t)/t$ is infinitely differentiable at $t = 0$ if we don't have a power series to help us out? Maybe there's actually a counterexample?
The way out is to write $h(t)$ in a very clever way using differentiation under the integral sign. Start with

$$h(t) = \int_0^t h'(u)\,du.$$

(This is correct since $h(0) = 0$.) For $t \neq 0$, introduce the change of variables $u = tx$, so $du = t\,dx$. At the boundary, if $u = 0$ then $x = 0$. If $u = t$ then $x = 1$ (we can divide the equation $t = tx$ by $t$ because $t \neq 0$). Therefore

$$h(t) = \int_0^1 h'(tx)\,t\,dx = t\int_0^1 h'(tx)\,dx.$$
Dividing by $t$ when $t \neq 0$, we get

$$\frac{h(t)}{t} = \int_0^1 h'(tx)\,dx.$$
The left and right sides don't have any $t$ in the denominator. Are they equal at $t = 0$ too? The left side at $t = 0$ is $r(0) = h'(0)$. The right side is $\int_0^1 h'(0)\,dx = h'(0)$ too, so

$$r(t) = \int_0^1 h'(tx)\,dx \tag{9.2}$$
for all $t$, including $t = 0$. This is a formula for $h(t)/t$ where there is no longer a $t$ being divided! Now we're set to use differentiation under the integral sign. The way we have set things up here, we want to differentiate with respect to $t$; the integration variable on the right is $x$. We can use differentiation under the integral sign on (9.2) when the integrand is differentiable. Since the integrand is infinitely differentiable, $r(t)$ is infinitely differentiable!
Explicitly,

$$r'(t) = \int_0^1 x\,h''(tx)\,dx$$

and

$$r''(t) = \int_0^1 x^2\,h'''(tx)\,dx,$$

and more generally

$$r^{(k)}(t) = \int_0^1 x^k\,h^{(k+1)}(tx)\,dx.$$

In particular,

$$r^{(k)}(0) = \int_0^1 x^k h^{(k+1)}(0)\,dx = \frac{h^{(k+1)}(0)}{k+1}.$$
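As a concrete illustration of ours, take $h(t) = \sin t$ (so $h(0) = 0$); then (9.2) reads $r(t) = \int_0^1 \cos(tx)\,dx$, and we should find $r'(0) = h''(0)/2 = 0$ and $r''(0) = h'''(0)/3 = -1/3$. Finite differences confirm this:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def r(t):
    # with h(t) = sin t, formula (9.2) reads r(t) = integral of cos(tx) over [0, 1]
    return simpson(lambda x: math.cos(t * x), 0.0, 1.0, 1000)

eps = 1e-3
r1 = (r(eps) - r(-eps)) / (2 * eps)              # should be h''(0)/2 = 0
r2 = (r(eps) - 2 * r(0.0) + r(-eps)) / eps ** 2  # should be h'''(0)/3 = -1/3
print(r1, r2)
```

(Indeed $r(t) = \sin t/t = 1 - t^2/6 + \cdots$, whose second derivative at 0 is $-1/3$.)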
10. A counterexample
We have seen many examples where differentiation under the integral sign can be carried out
with interesting results, but we have not actually stated conditions under which (1.2) is valid. The
following example shows that some hypothesis is needed beyond just the fact that the integrals on
both sides of (1.2) exist.
For any real numbers $x$ and $t$, let

$$f(x,t) = \begin{cases} \dfrac{xt^3}{(x^2+t^2)^2}, & \text{if } x \neq 0 \text{ or } t \neq 0, \\[4pt] 0, & \text{if } x = 0 \text{ and } t = 0. \end{cases}$$
Let

$$F(t) = \int_0^1 f(x,t)\,dx.$$

Then $F(0) = \int_0^1 f(x,0)\,dx = \int_0^1 0\,dx = 0$. When $t \neq 0$,
$$\begin{aligned}
F(t) &= \int_0^1 \frac{xt^3}{(x^2+t^2)^2}\,dx \\
&= \int_{t^2}^{1+t^2} \frac{t^3}{2u^2}\,du \quad (\text{where } u = x^2 + t^2) \\
&= -\frac{t^3}{2u}\bigg|_{u=t^2}^{u=1+t^2} \\
&= -\frac{t^3}{2(1+t^2)} + \frac{t^3}{2t^2} \\
&= \frac{t}{2(1+t^2)}.
\end{aligned}$$
This formula also works at $t = 0$, so $F(t) = t/(2(1+t^2))$ for all $t$. Therefore $F(t)$ is differentiable and

$$F'(t) = \frac{1-t^2}{2(1+t^2)^2}$$

for all $t$. In particular, $F'(0) = \frac{1}{2}$.
Now we compute $\frac{\partial}{\partial t} f(x,t)$ and then $\int_0^1 \frac{\partial}{\partial t} f(x,t)\,dx$. Since $f(0,t) = 0$ for all $t$, $f(0,t)$ is differentiable in $t$ and $\frac{\partial}{\partial t} f(0,t) = 0$. For $x \neq 0$, $f(x,t)$ is differentiable in $t$ and

$$\frac{\partial}{\partial t} f(x,t) = \frac{(x^2+t^2)^2(3xt^2) - xt^3\cdot 2(x^2+t^2)\cdot 2t}{(x^2+t^2)^4} = \frac{xt^2(3x^2 - t^2)}{(x^2+t^2)^3}.$$

Combining both cases ($x = 0$ and $x \neq 0$),

$$\frac{\partial}{\partial t} f(x,t) = \begin{cases} \dfrac{xt^2(3x^2-t^2)}{(x^2+t^2)^3}, & \text{if } x \neq 0, \\[4pt] 0, & \text{if } x = 0. \end{cases} \tag{10.1}$$
In particular $\frac{\partial}{\partial t}\big|_{t=0} f(x,t) = 0$ for all $x$. Therefore at $t = 0$ the left side of (1.2) is $F'(0) = 1/2$ and the right side of (1.2) is $\int_0^1 \frac{\partial}{\partial t}\big|_{t=0} f(x,t)\,dx = 0$. The two sides are unequal!
The problem in this example is that $\frac{\partial}{\partial t} f(x,t)$ is not a continuous function of $(x,t)$. Indeed, the denominator in the formula in (10.1) is $(x^2+t^2)^3$, which has a problem near $(0,0)$. Specifically, while this derivative vanishes at $(0,0)$, if we let $(x,t) \to (0,0)$ along the line $x = t$, then $\frac{\partial}{\partial t} f(x,t)$ has the value $1/(4x)$, which does not tend to 0 as $(x,t) \to (0,0)$.
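The mismatch is easy to exhibit numerically. This sketch of ours estimates the left side of (1.2) at $t = 0$ by a central difference of the numerically computed $F$, while the right side is identically 0 since $\frac{\partial}{\partial t} f(x,t)$ vanishes at $t = 0$:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (2 if i % 2 == 0 else 4) * f(a + i * h)
    return s * h / 3

def f(x, t):
    if x == 0.0 and t == 0.0:
        return 0.0
    return x * t ** 3 / (x * x + t * t) ** 2

def F(t):
    return simpson(lambda x: f(x, t), 0.0, 1.0, 4000)

print(F(0.5), 0.5 / (2 * (1 + 0.25)))   # F(t) = t/(2(1+t^2)) in closed form

eps = 0.05
left = (F(eps) - F(-eps)) / (2 * eps)   # F'(0) by a central difference, about 1/2
right = 0.0                             # integral of (d/dt f)|_{t=0} over [0, 1]
print(left, right)
```

The two sides differ by about $1/2$, exactly the failure described above.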
Theorem 10.1. The two sides of (1.2) both exist and are equal at a point $t = t_0$ provided the following two conditions hold:

- $f(x,t)$ and $\frac{\partial}{\partial t} f(x,t)$ are continuous for $x$ in the range of integration and $t$ in an interval around $t_0$,
- there are upper bounds $|f(x,t)| \leq A(x)$ and $|\frac{\partial}{\partial t} f(x,t)| \leq B(x)$, both independent of $t$, such that $\int_a^b A(x)\,dx$ and $\int_a^b B(x)\,dx$ converge.
In Table 1 we include choices for $A(x)$ and $B(x)$ for each of the functions we have treated. Since the calculation of a derivative at a point only depends on an interval around the point, we have replaced a $t$-range such as $t > 0$ with $t \geq c > 0$ in some cases to obtain choices for $A(x)$ and $B(x)$.
Section | $f(x,t)$ | $x$ range | $t$ range | $A(x)$ | $B(x)$
2 | $x^n e^{-tx}$ | $[0,\infty)$ | $t \geq c > 0$ | $x^n e^{-cx}$ | $x^{n+1} e^{-cx}$
3 | $e^{-tx}\,\frac{\sin x}{x}$ | $[0,\infty)$ | $t \geq c > 0$ | $e^{-cx}$ | $e^{-cx}$
4 | $\frac{e^{-t^2(1+x^2)}}{1+x^2}$ | $[0,1]$ | $0 \leq t \leq c$ | $\frac{1}{1+x^2}$ | $2c$
5 | $x^n e^{-tx^2}$ | $\mathbf{R}$ | $t \geq c > 0$ | $x^n e^{-cx^2}$ | $x^{n+2} e^{-cx^2}$
6 | $\cos(tx)\,e^{-x^2/2}$ | $\mathbf{R}$ | $\mathbf{R}$ | $e^{-x^2/2}$ | $|x|\,e^{-x^2/2}$
7 | $\frac{x^t-1}{\log x}$ | $(0,1]$ | $0 \leq t \leq c$ | ?? | $1$
8 | $\frac{1}{x^t \log x}$ | $[2,\infty)$ | $t \geq c > 1$ | $\frac{1}{x^c \log x}$ | $\frac{1}{x^c}$
9 | $x^k h^{(k+1)}(tx)$ | $[0,1]$ | $\mathbf{R}$ | ?? | ??

Table 1.
11. Exercises

1. Starting with the indefinite integral formulas

$$\int \cos(tx)\,dx = \frac{\sin(tx)}{t}, \qquad \int \sin(tx)\,dx = -\frac{\cos(tx)}{t},$$

compute formulas for $\int x^n \sin x\,dx$ and $\int x^n \cos x\,dx$ for $n = 2, 3, 4$.