Integral Transform Notes Collation


Integral Transforms

Remark. (Riemann–Lebesgue Lemma) If f is an integrable function then
\[
\int_{-\infty}^{\infty} f(x)\cos(ax)\,dx \to 0
\quad\text{and}\quad
\int_{-\infty}^{\infty} f(x)\sin(ax)\,dx \to 0
\qquad\text{as } a \to \infty.
\]

A consequence of this lemma is that the Fourier coefficients a_n and b_n tend to zero as n → ∞.
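For instance (a sketch, assuming the usual normalisation of Fourier coefficients on [−π, π]): writing f̃ for f extended by zero outside [−π, π], the cosine coefficients take exactly the form covered by the lemma,
\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx
    = \frac{1}{\pi}\int_{-\infty}^{\infty} \tilde f(x)\cos(nx)\,dx \longrightarrow 0
\qquad (n \to \infty),
\]
and similarly for b_n with sin(nx) in place of cos(nx).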


[Locally integrable] f : R → R is locally integrable if it is integrable on any bounded interval.

1 The Dirac δ-function and Distributions.

1.1 Delta function

Aim: define a "function" δ with the properties
\[
\delta(x) = 0 \ \text{ for } x \ne 0;
\qquad
\int_{-\infty}^{\infty} \delta(x)\,dx = 1.
\]

Suppose now that φ : R → R is continuous. Then for any ε > 0 there exists ∆ > 0 such that

\[
-\Delta < x < \Delta \implies -\varepsilon < \phi(x) - \phi(0) < \varepsilon.
\]

Note that
\[
\int_{-\infty}^{\infty} \delta(x)\phi(x)\,dx
= \int_{-\infty}^{-\Delta} \delta(x)\phi(x)\,dx
+ \int_{-\Delta}^{\Delta} \delta(x)\phi(x)\,dx
+ \int_{\Delta}^{\infty} \delta(x)\phi(x)\,dx
= \int_{-\Delta}^{\Delta} \delta(x)\phi(x)\,dx
\]

and so we would expect
\[
\phi(0) - \varepsilon
= (\phi(0) - \varepsilon)\int_{-\Delta}^{\Delta} \delta(x)\,dx
\le \int_{-\Delta}^{\Delta} \delta(x)\phi(x)\,dx
\le (\phi(0) + \varepsilon)\int_{-\Delta}^{\Delta} \delta(x)\,dx
= \phi(0) + \varepsilon.
\]

As this is true for all ε > 0, we have
\[
\int_{-\infty}^{\infty} \delta(x)\phi(x)\,dx = \phi(0)
\qquad\text{when } \phi \text{ is continuous.}
\]
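As an illustration, one possible approximating sequence (a hypothetical choice for this sketch; the sequence δ_n used in Proposition 1.4 below may be defined differently) is the family of "box" functions
\[
\delta_n(x) = \begin{cases} n/2, & |x| \le 1/n, \\ 0, & |x| > 1/n, \end{cases}
\qquad
\int_{-\infty}^{\infty} \delta_n(x)\,dx = 1.
\]
For continuous φ,
\[
\int_{-\infty}^{\infty} \delta_n(x)\phi(x)\,dx
= \frac{n}{2}\int_{-1/n}^{1/n} \phi(x)\,dx \longrightarrow \phi(0)
\qquad (n \to \infty),
\]
since the right-hand side is the average of φ over a shrinking interval about 0.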

1.2 Test Functions and Distributions

It seems that to understand δ we need to set aside our narrow view of a function f being defined simply
at points and instead try to understand the map
\[
\phi \mapsto \langle f, \phi\rangle = \int_{-\infty}^{\infty} f(x)\phi(x)\,dx
\]

where φ(x) is a continuous function. Certainly continuous functions f can be uniquely represented this
way, and we have seen that δ can be understood this way as "evaluation at 0". Note though that this
effectively treats f(x) as a functional on the space of continuous functions.
Differentiation:
\[
\phi \mapsto \langle f', \phi\rangle = \int_{-\infty}^{\infty} f'(x)\phi(x)\,dx
\]

Applying integration by parts, if φ were differentiable then we would have
\[
\int_{-\infty}^{\infty} f'(x)\phi(x)\,dx
= \bigl[f(x)\phi(x)\bigr]_{-\infty}^{\infty}
- \int_{-\infty}^{\infty} f(x)\phi'(x)\,dx
\]

So we might also require that
\[
\lim_{x \to \infty} \phi(x) = \lim_{x \to -\infty} \phi(x) = 0.
\]

This would mean that f′ was the generalized function
\[
\phi \mapsto -\int_{-\infty}^{\infty} f(x)\phi'(x)\,dx
\]

[Test function] A map φ : R → R is a test function if it is smooth (i.e. infinitely differentiable) and if
there exists X such that φ(x) = 0 when |x| > X.
We denote the vector space formed by test functions as D.
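A standard example (not given explicitly in these notes) is the bump function
\[
\phi(x) = \begin{cases} \exp\!\left(-\dfrac{1}{1-x^2}\right), & |x| < 1, \\[4pt] 0, & |x| \ge 1, \end{cases}
\]
which is smooth everywhere and vanishes for |x| ≥ 1, so X = 1 works in the definition.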

Theorem 1.1 Let f : R → R be a continuous function such that


\[
\int_{-\infty}^{\infty} f(x)\phi(x)\,dx = 0
\]

for all test functions φ. Then f = 0.

[ For each continuous function f we then have a functional F_f on the space D of test functions
\[
F_f : \phi \mapsto F_f(\phi) = \langle f, \phi\rangle = \int_{-\infty}^{\infty} f(x)\phi(x)\,dx
\]
What the above theorem shows is that the map f ↦ F_f is 1-1. ]


We should think of δ as the following functional

δ : φ 7→ φ(0)

[Continuous] A linear functional F is continuous if whenever φ and φ_n (n ≥ 1) are test functions
which are all zero outside some bounded interval I and each φ_n^(k) converges uniformly to φ^(k) as n → ∞,
then F(φ_n) → F(φ).
[Distribution] A distribution or generalized function F is a continuous linear functional from D to R.
[ We write D′ for the space of distributions. Also, we write ⟨F, φ⟩ for the real number F(φ). When we
want to emphasise the variable of the distribution (which really means the variable of the test functions), we
may write F(x) instead of just F. ]

Proposition 1.2 Given a locally integrable function f then Ff is a distribution. Such distributions
are called regular distributions.

[Heaviside function] The functions
\[
H_1(x) = \begin{cases} 1, & x > 0, \\ 0, & x \le 0, \end{cases}
\qquad
H_2(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0, \end{cases}
\]

both lead to the same distribution
\[
H : \phi \mapsto \langle H, \phi\rangle = \int_0^{\infty} \phi(x)\,dx
\]
This distribution is called the Heaviside function.

Proposition 1.3 δ is a distribution.

Proposition 1.4 Suppose that the integrable function f (x) is continuous at x = 0. Then, as n → ∞
\[
\langle \delta_n, f\rangle = \int_{-\infty}^{\infty} \delta_n(x) f(x)\,dx \to f(0)
\]

and hence δn → δ.

[ This proposition shows us that, although it is defined by its action on a test function, the delta function
also works when integrated against a continuous function. One can define δ and other distributions via
limits of approximating sequences, but this approach is fraught with technical problems. ]

1.3 More properties of distributions

[Translation of a distribution] Let F(x) be a distribution and a ∈ R. The translation of F through
a, written F(x − a), is defined by its action
\[
\langle F(x-a), \phi(x)\rangle = \langle F(x), \phi(x+a)\rangle
\]

[Sifting property] When the distribution is the delta function, this is known as the sifting property:
\[
\langle \delta(x-a), \phi(x)\rangle = \phi(a)
\]
This applies to any locally integrable function which is continuous at a:
\[
\langle \delta(x-a), f(x)\rangle = f(a)
\]

Remark. One might view δ(x − a) as a continuous version of the Kronecker delta δ_{ij}.
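For instance (illustrative values, not from the notes): taking f(x) = x², which is locally integrable and continuous everywhere,
\[
\langle \delta(x-2), x^2\rangle = 2^2 = 4,
\qquad
\langle \delta(x-a), e^{x}\rangle = e^{a}.
\]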

[The distribution f F] If f is a smooth function (i.e. it has derivatives of all orders) and F is a
distribution then the distribution f F has action ⟨f F, φ⟩ = ⟨F, f φ⟩.
We should show that f F satisfies the properties of a distribution (linearity and continuity). Clearly
f F is linear. Suppose that φ_n → φ uniformly and that the φ_n, φ are zero outside some bounded interval I. Then
f φ_n → f φ uniformly (and likewise (f φ_n)^(k) → (f φ)^(k) uniformly for each k, by the Leibniz rule) and hence
\[
\langle f F, \phi_n\rangle = \langle F, f\phi_n\rangle \to \langle F, f\phi\rangle = \langle f F, \phi\rangle
\]
Hence f F is continuous and so a distribution.


[The derivative of distributions] Given a distribution F and test function φ we define
\[
\langle F', \phi\rangle = -\langle F, \phi'\rangle
\]
If F is a distribution and f a smooth function then we have the product rule
\[
(f F)' = f'F + f F'
\]
as for any test function φ we have
\[
\begin{aligned}
\langle (f F)', \phi\rangle &= -\langle f F, \phi'\rangle \\
&= -\langle F, f\phi'\rangle \\
&= -\langle F, (f\phi)'\rangle + \langle F, f'\phi\rangle \\
&= \langle F', f\phi\rangle + \langle f'F, \phi\rangle \\
&= \langle f F', \phi\rangle + \langle f'F, \phi\rangle
\end{aligned}
\]

Proposition 1.5 If F is a distribution then so is F′.

[ This proof shows that distributions inherit the infinite differentiability of test functions. ]
Remark. We can differentiate a function with a singularity such as a jump discontinuity at a point (the Heaviside
function is a simple example) and interpret the result without having to take limits from the left and
right. Such functions fit naturally into the framework of distributions.
Example
(a) H′ = δ
\[
\langle H', \phi\rangle = -\langle H, \phi'\rangle = -\int_0^{\infty} \phi'(x)\,dx = \phi(0) = \langle \delta, \phi\rangle
\]
(b) ⟨δ′, φ⟩ = −φ′(0)
\[
\langle \delta', \phi\rangle = -\langle \delta, \phi'\rangle = -\phi'(0)
\]

Remark. If f has continuous derivative f′, then ⟨δ′, f⟩ = −f′(0). Just as we can extend the action
of δ from test functions to continuous functions, so we can extend the action of δ′ to continuously
differentiable functions.
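Two further worked examples in the same spirit (standard identities, not derived in the notes above), combining the product f F with f(x) = x and the derivative of δ:
\[
\langle x\delta, \phi\rangle = \langle \delta, x\phi\rangle = 0\cdot\phi(0) = 0,
\qquad\text{so } x\,\delta = 0,
\]
\[
\langle x\delta', \phi\rangle = \langle \delta', x\phi\rangle = -(x\phi)'(0)
= -\bigl(\phi(0) + 0\cdot\phi'(0)\bigr) = -\phi(0) = \langle -\delta, \phi\rangle,
\qquad\text{so } x\,\delta' = -\delta.
\]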

Proposition 1.6 Every distribution F has an antiderivative G such that G′ = F.
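For instance (connecting back to Example (a) above): the Heaviside distribution H is an antiderivative of δ, since
\[
\langle H', \phi\rangle = -\langle H, \phi'\rangle = \phi(0) = \langle \delta, \phi\rangle .
\]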

2 Laplace Transform. Applications to ODEs.


[Laplace transform] Let f(x) be a real- or complex-valued function defined for x ≥ 0. Then the
Laplace transform f̄(p) of f(x) is defined to be
\[
\bar f(p) = \int_0^{\infty} f(x)e^{-px}\,dx
\]
for those complex p where this integral exists. f̄ is also commonly denoted Lf, and the Laplace
transform itself L.
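A short worked example (standard, and consistent with the table later in these notes): for f(x) = e^{ax} with a ∈ C,
\[
\bar f(p) = \int_0^{\infty} e^{ax}e^{-px}\,dx
          = \left[\frac{e^{-(p-a)x}}{-(p-a)}\right]_0^{\infty}
          = \frac{1}{p-a}
\qquad\text{for } \operatorname{Re} p > \operatorname{Re} a.
\]
In particular, taking a = 0 gives the transform of the constant function 1, namely 1/p for Re p > 0.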

Proposition 2.1 Let f(x) be a complex-valued function defined for x ≥ 0, such that f̄(p₀) is well-
defined for some complex number p₀. Then
\[
\bar f(p) \ \text{exists for all } \operatorname{Re} p > \operatorname{Re} p_0 .
\]

Proposition 2.2 Let f(x) be a continuous complex-valued function on [0, ∞) such that f̄(p₀) exists.
Then
\[
\bar f(p) \to 0 \quad \text{as } \operatorname{Re} p \to \infty.
\]

Proposition 2.3 Let a ∈ C with Re a > 0 and assume f¯(p) converges on some half-plane Re p > c.
Then the function g(x) = f(x)e^{−ax} has transform
\[
\bar g(p) = \bar f(p + a)
\]
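For example (a sketch using the transform of cos ax listed in the table below): with real b and Re a > 0, the damped oscillation g(x) = e^{−ax} cos(bx) has
\[
\bar g(p) = \frac{p+a}{(p+a)^2 + b^2},
\]
obtained by replacing p with p + a in the transform p/(p² + b²) of cos(bx).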

Proposition 2.4 Provided the Laplace transforms of f′(x) and f(x) converge for Re p > c, and
provided f(x)e^{−px} → 0 as x → ∞ with p in the region of convergence, then
\[
\overline{f'}(p) = p\bar f(p) - f(0)
\]
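The identity comes from a single integration by parts (a sketch under the stated decay assumption):
\[
\overline{f'}(p) = \int_0^{\infty} f'(x)e^{-px}\,dx
= \bigl[f(x)e^{-px}\bigr]_0^{\infty} + p\int_0^{\infty} f(x)e^{-px}\,dx
= -f(0) + p\bar f(p),
\]
where the boundary term at infinity vanishes because f(x)e^{−px} → 0.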

Corollary 2.5 Provided that the Laplace transforms of f(x), f′(x), f″(x) converge, and with good be-
haviour at infinity as above, then
\[
\overline{f''}(p) = p^2\bar f(p) - pf(0) - f'(0)
\]

Claim: The Laplace transform L is injective. So if f̄(p) = ḡ(p) on some half-plane Re p > c we can
conclude that f(x) = g(x).
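A minimal worked ODE example (illustrative, not taken from the notes): to solve y″ + y = 0 with y(0) = 0, y′(0) = 1, transform the equation using Corollary 2.5,
\[
p^2\bar y(p) - p\,y(0) - y'(0) + \bar y(p) = 0
\quad\Longrightarrow\quad
\bar y(p) = \frac{1}{p^2 + 1},
\]
which is the transform of sin x (see the table below), so by injectivity y(x) = sin x.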

Proposition 2.6 Assuming the Laplace transform f̄(p) of f(x) to exist on some half-plane Re p > c,
and a > 0, then
\[
g(x) = f(x-a)H(x-a)
\]
has transform ḡ(p) = f̄(p)e^{−ap}.

Proposition 2.7 Assuming that the Laplace transforms of f(x) and g(x) = xf(x) converge, then
\[
\bar g(p) = -\frac{d\bar f}{dp}
\]
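For instance (a quick check against the table): with f(x) = e^{ax}, so that f̄(p) = 1/(p − a), the function g(x) = x e^{ax} has
\[
\bar g(p) = -\frac{d}{dp}\left(\frac{1}{p-a}\right) = \frac{1}{(p-a)^2}.
\]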

Table of transforms

    f(x)                      f̄(p)
    x^n                       n!/p^{n+1}
    e^{ax}                    1/(p − a)
    cos ax                    p/(p^2 + a^2)
    sin ax                    a/(p^2 + a^2)
    δ(x − a)                  e^{−ap}
    f′(x)                     p f̄(p) − f(0)
    f″(x)                     p^2 f̄(p) − p f(0) − f′(0)
    x f(x)                    −df̄/dp
    f(x − a)H(x − a)          e^{−ap} f̄(p)
    e^{−ax} f(x)              f̄(p + a)

3 Convolution and Inversion
[Convolution] Given two functions f, g whose Laplace transforms f̄, ḡ exist for Re p > c, we define the
convolution h = f ∗ g by
\[
h(x) = (f * g)(x) = \int_0^x f(t)g(x-t)\,dt \qquad \text{for } x \ge 0.
\]

Remark. (f ∗ g)(x) = (g ∗ f )(x).

Theorem 3.1 (Convolution theorem) Let f and g be two functions whose Laplace transforms f̄
and ḡ exist for Re p > c. Then
\[
\bar h = \bar f\,\bar g
\]
where h = f ∗ g.
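A quick check (illustrative, not from the notes): take f = g = 1 on [0, ∞). Then
\[
h(x) = \int_0^x 1\cdot 1\,dt = x,
\qquad
\bar h(p) = \frac{1}{p^2} = \frac{1}{p}\cdot\frac{1}{p} = \bar f(p)\,\bar g(p),
\]
in agreement with the table entry for x^n with n = 1.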

Theorem 3.2 (Injectivity of the Laplace Transform) Let f be a continuous function on [0, ∞),
bounded by some function M e^{cx}, such that f̄(p) = 0 for Re p > c. Then f = 0.

Theorem 3.3 (Inversion Theorem for Laplace Transform) Let f be a differentiable function on
(0, ∞) such that f̄(p) exists for Re p > c. Then for x > 0
\[
f(x) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty} \bar f(p)e^{px}\,dp
\qquad (\sigma > c).
\]
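A sketch of a standard inversion (assuming familiarity with contour integration and residues, which this excerpt does not cover): for f̄(p) = 1/(p − a) take any σ > Re a; closing the contour to the left for x > 0 picks up the residue at p = a, giving
\[
f(x) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty} \frac{e^{px}}{p-a}\,dp = e^{ax},
\]
consistent with the table entry for e^{ax}.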
