Chapter 12: Properties of The Fourier Transform: 12.A Introduction
12.A Introduction.
The power of the Fourier transform derives principally from the many
theorems describing the properties of the transformation operation which
provide insight into the nature of physical systems. Most of these theorems have
been derived within the context of communications engineering to answer
questions framed like "if a time signal is manipulated in such-and-such a way,
what happens to its Fourier spectrum?" As a result, a way of thinking about the
transformation operation has developed in which a Fourier transform pair
y(t) ↔ Y( f ) is like the two sides of a coin, with the original time or space signal
on one side and its frequency spectrum on the other. The two halves of a Fourier
transform pair are thus complementary views of the same signal and so it makes
sense that if some operation is performed on one half of the pair, then some
equivalent operation is necessarily performed on the other half.
12.B Theorems
Linearity
Scaling a function scales its transform pair. Adding two functions corresponds
to adding the two frequency spectra.
If h(x) ↔ H(f) and g(x) ↔ G(f), then h(x) + g(x) ↔ H(f) + G(f) [12.2]
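The discrete analog of linearity is easy to verify numerically. The sketch below uses NumPy's FFT and arbitrary random signals and weights (all chosen here for illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(64)
g = rng.standard_normal(64)

# The transform of a weighted sum equals the weighted sum of the transforms.
lhs = np.fft.fft(3 * h + 5 * g)
rhs = 3 * np.fft.fft(h) + 5 * np.fft.fft(g)
assert np.allclose(lhs, rhs)
```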
Scaling
Compressing a function along its abscissa expands its transform, and vice versa:
If h(x) ↔ H(f) then h(ax) ↔ H(f/a) / |a|
Notice that this theorem differs from the corresponding theorem for discrete
spectra (Fig. 6.3) in that the ordinate scales inversely with the abscissa. This is
because the Fourier transform produces a spectral density function rather than a
spectral amplitude function, and therefore is sensitive to the scale of the frequency
axis.
Time/Space Shift
If h(x) ↔ H(f) then h(x − x₀) ↔ e^{−i2πfx₀} H(f) [12.5]
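For sampled data the shift theorem holds exactly for circular shifts: shifting by k samples multiplies the DFT by a linear phase. A minimal NumPy sketch (signal and shift chosen arbitrarily):

```python
import numpy as np

N = 64
x = np.random.default_rng(1).standard_normal(N)
k = 5                                  # discrete analog of the shift x0
X = np.fft.fft(x)
m = np.arange(N)                       # DFT frequency index

# Shifting in time multiplies the spectrum by the phase e^(-i*2*pi*m*k/N).
lhs = np.fft.fft(np.roll(x, k))
rhs = X * np.exp(-2j * np.pi * m * k / N)
assert np.allclose(lhs, rhs)
```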
Frequency Shift
If h(x) ↔ H(f) then h(x) e^{i2πxf₀} ↔ H(f − f₀) [12.6]
Modulation
if h(x) ↔ H(f) then
    h(x) e^{i2πxf₀} ↔ H(f − f₀)
    h(x) e^{−i2πxf₀} ↔ H(f + f₀)
    h(x) cos(2πxf₀) ↔ [H(f − f₀) + H(f + f₀)] / 2
    h(x) sin(2πxf₀) ↔ [H(f − f₀) − H(f + f₀)] / 2i [12.7]
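The cosine line of the modulation theorem can be checked with a DFT, where the two shifted copies of the spectrum become circular shifts of the frequency bins. The pulse shape and carrier frequency below are arbitrary choices for illustration:

```python
import numpy as np

N = 64
t = np.arange(N) / N
h = np.exp(-((t - 0.5) ** 2) / 0.01)    # smooth pulse (assumed test signal)
f0 = 8                                  # integer carrier frequency, cycles/unit

# Multiplying by cos(2*pi*f0*t) splits the spectrum into two half-weight
# copies shifted to +f0 and -f0 (circularly, for the DFT).
H = np.fft.fft(h)
lhs = np.fft.fft(h * np.cos(2 * np.pi * f0 * t))
rhs = (np.roll(H, f0) + np.roll(H, -f0)) / 2
assert np.allclose(lhs, rhs)
```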
Differentiation
Differentiation of a function induces a 90° phase shift in the spectrum and scales
the magnitude of the spectrum in proportion to frequency. Repeated
differentiation leads to the general result:
If h(x) ↔ H(f) then d^n h(x)/dx^n ↔ (i2πf)^n H(f) [12.8]
This theorem explains why differentiation of a signal has the reputation for being
a noisy operation. Even if the signal is band-limited, noise will introduce high
frequency signals which are greatly amplified by differentiation.
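Eq. [12.8] can be exercised directly: multiply the spectrum by i2πf and transform back. The sketch below differentiates sin(2πt) spectrally and recovers 2π cos(2πt); the signal is an arbitrary band-limited choice:

```python
import numpy as np

N = 64
t = np.arange(N) / N
x = np.sin(2 * np.pi * t)                      # band-limited test signal
f = np.fft.fftfreq(N, d=1 / N)                 # frequencies in cycles/unit

# Differentiate by multiplying the spectrum by (i*2*pi*f), per eq. 12.8.
dx = np.fft.ifft(2j * np.pi * f * np.fft.fft(x)).real
assert np.allclose(dx, 2 * np.pi * np.cos(2 * np.pi * t))
```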
Integration
Integration of a function induces a -90° phase shift in the spectrum and scales the
magnitude of the spectrum inversely with frequency.
If h(x) ↔ H(f) then ∫_{−∞}^{x} h(u) du ↔ H(f)/(i2πf) + constant [12.9]
From this theorem we see that integration is analogous to a low-pass filter which
blurs the signal.
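The spectral form of integration can be demonstrated by dividing the spectrum by i2πf. For a zero-mean signal the f = 0 bin, which carries the arbitrary constant of eq. [12.9], can simply be set to zero. The test signal is an arbitrary choice:

```python
import numpy as np

N = 64
t = np.arange(N) / N
x = np.sin(2 * np.pi * t)              # zero-mean, so the constant is fixed
f = np.fft.fftfreq(N, d=1 / N)

# Integrate by dividing the spectrum by (i*2*pi*f); the f = 0 bin is set to
# zero, which selects the zero-mean antiderivative.
X = np.fft.fft(x)
Y = np.zeros_like(X)
Y[1:] = X[1:] / (2j * np.pi * f[1:])
y = np.fft.ifft(Y).real
assert np.allclose(y, -np.cos(2 * np.pi * t) / (2 * np.pi))
```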
Transform of a transform
We normally think of using the inverse Fourier transform to move from the
frequency spectrum back to the time/space function. However, if instead the
spectrum is subjected to the forward Fourier transform, the result is a time/space
function which has been flipped about the y-axis. This gives some appreciation
for why the kernels of the two transforms are complex conjugates of each other:
the change in sign in the reverse transform flips the function about the y-axis a
second time so that the result matches the original function.
If F{h(t)} = H(f), then F{H(f)} = h(−t) [12.10]
One practical implication of this theorem is a 2-for-1 bonus: every transform pair
brings with it a second transform pair at no extra cost.
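The discrete counterpart is that applying the forward DFT twice returns the original sequence flipped about the origin (indices taken modulo N) and scaled by N. A small sketch with an arbitrary sequence:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
N = len(x)

# Two forward DFTs give N times the circularly reversed sequence,
# the discrete analog of eq. 12.10.
y = np.fft.fft(np.fft.fft(x)).real / N
flipped = x[(-np.arange(N)) % N]          # [1, 5, 4, 3, 2]
assert np.allclose(y, flipped)
```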
This theorem highlights the fact that the Fourier transform operation is
fundamentally a mathematical relation that can be completely divorced from the
physical notions of time and frequency. It is simply a method for transforming a
Chapter 12: Properties of The Fourier Transform Page 111
function of one variable into a function of another variable. So, for example, in
probability theory the Fourier transform is used to convert a probability density
function into a moment-generating function, neither of which bear the slightest
resemblance to the time or frequency domains.
Central ordinate
By analogy with the mean Fourier coefficient a₀ of discrete spectra, the central
ordinate value H(0) represents the total area under the function h(x).
If h(x) ↔ H(f) then H(0) = ∫_{−∞}^{∞} h(u) e^{−i0} du = ∫_{−∞}^{∞} h(u) du

For the inverse transform,

h(0) = ∫_{−∞}^{∞} H(u) e^{i0} du
     = ∫_{−∞}^{∞} H(u) du [12.12]
     = ∫_{−∞}^{∞} Re[H(u)] du + i ∫_{−∞}^{∞} Im[H(u)] du
Note that for a real-valued function h(t) the imaginary portion of the spectrum will
have odd symmetry, so the area under the real part of the spectrum is all that
needs to be computed to find h(0).
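In the DFT these relations become H(0) = Σh (the discrete "area") and h(0) = (1/N)ΣH. A quick check with an arbitrary random signal:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.standard_normal(32)
H = np.fft.fft(h)

# Zero-frequency bin equals the sum of the samples, and the first sample
# equals the sum of the spectrum divided by N.
assert np.allclose(H[0], h.sum())
assert np.allclose(h[0], H.sum() / len(h))
```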
For example, in optics the line-spread function (LSF) and the optical transfer
function (OTF) are Fourier transform pairs. Therefore, according to the
central-ordinate theorem, the central point of the LSF is equal to the area under the OTF.
In two dimensions, the transform relationship exists between the point-spread
function (PSF) and the OTF. In such 2D cases, the integral must be taken over an
area, in which case the result is interpreted as the volume under the 2D surface.
Equivalent width
Dividing the two central-ordinate relations gives

[∫_{−∞}^{∞} h(u) du] / h(0) = H(0) / [∫_{−∞}^{∞} H(u) du]
The ratio on the left side of this last expression is called the "equivalent width" of
the given function h because it represents the width of a rectangle with the same
central ordinate and the same area as h. Likewise, the ratio on the right is the
inverse of the equivalent width of H. Thus we conclude that the equivalent
width of a function in one domain is the inverse of the equivalent width in the
other domain as illustrated in Fig. 12.1. For example, as a pulse in the time
domain gets shorter, its frequency spectrum gets longer. This theorem quantifies
that relationship for one particular measure of width.
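As a concrete check on this reciprocal relationship, consider a unit-height rectangular pulse of width w (a standard example, not one worked in the text):

```latex
% h(x) = 1 for |x| < w/2 and 0 elsewhere; its transform is
% H(f) = w \, \sin(\pi w f)/(\pi w f), so H(0) = w and \int H = h(0) = 1.
W_h = \frac{\int_{-\infty}^{\infty} h(u)\,du}{h(0)} = \frac{w}{1} = w,
\qquad
W_H = \frac{\int_{-\infty}^{\infty} H(u)\,du}{H(0)} = \frac{1}{w}
```

so the equivalent widths of the pulse and its spectrum are indeed reciprocals.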
[Fig. 12.1. Reciprocal equivalent widths: w in one domain, 1/w in the other.]
Convolution
If h(x) ↔ H(f) and g(x) ↔ G(f), then
    h(x) ∗ g(x) ↔ H(f) ⋅ G(f)
    h(x) ⋅ g(x) ↔ H(f) ∗ G(f)
In words, this theorem says that if two functions are multiplied in one domain,
then their Fourier transforms are convolved in the other domain. Unlike the
cross-correlation operation described next, convolution obeys the commutative,
associative, and distributive laws of algebra. That is,
    f ∗ g = g ∗ f
    f ∗ (g ∗ h) = (f ∗ g) ∗ h
    f ∗ (g + h) = f ∗ g + f ∗ h
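For sampled signals the theorem holds exactly in its circular form: the inverse DFT of a product of spectra is the circular convolution of the sequences. A sketch with arbitrary short sequences:

```python
import numpy as np

N = 4
h = np.array([1.0, 2.0, 3.0, 0.0])
g = np.array([4.0, 5.0, 6.0, 0.0])

# Circular convolution computed directly from the definition...
direct = np.array([sum(h[k] * g[(n - k) % N] for k in range(N))
                   for n in range(N)])
# ...and via the convolution theorem: multiply spectra, transform back.
via_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(g)).real
assert np.allclose(direct, via_fft)
```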
Derivative of a convolution
Combining the derivative theorem with the convolution theorem leads to the
conclusion
If h(x) = f(x) ∗ g(x) then dh/dx = (df/dx) ∗ g = f ∗ (dg/dx) [12.17]
In words, this theorem states that the derivative of a convolution is equal to the
convolution of either of the functions with the derivative of the other.
Cross-correlation
q = h ★ g means q(x) = ∫_{−∞}^{∞} g(u − x) h(u) du [12.18]

If h(x) ↔ H(f) and g(x) ↔ G(f), then
    h(x) ★ g(x) ↔ H(f) ⋅ G(−f)
    h(−x) ⋅ g(x) ↔ H(f) ★ G(f)
    h(x) ⋅ g(x) ↔ H(−f) ★ G(f) [12.19]
Combining eqns. [12.14] and [12.16] indicates the spectrum of the product of two
functions can be computed two ways, h ⋅ g ↔ H(f) ∗ G(f) and
h ⋅ g ↔ H(−f) ★ G(f). Since the spectrum of a function is unique, the implication
is that
    H(f) ∗ G(f) = H(−f) ★ G(f)
Auto-correlation
The quantity h★h is known as the autocorrelation function of h and the quantity
HH* is called the power spectral density function of h. This theorem says that the
autocorrelation function and the power spectral density function comprise a
Fourier transform pair.
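This transform pair (the Wiener-Khinchin relation) can be verified in its circular discrete form: the DFT of the circular autocorrelation of a real sequence is |H|². The sequence below is an arbitrary example:

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 4.0])
N = len(h)

# Circular autocorrelation per eq. 12.18 with g = h: q(x) = sum_u h(u-x) h(u).
q = np.array([sum(h[u] * h[(u - x) % N] for u in range(N))
              for x in range(N)])

# Its inverse-DFT counterpart is the power spectral density H * conj(H).
H = np.fft.fft(h)
assert np.allclose(q, np.fft.ifft(H * np.conj(H)).real)
```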
Parseval/Rayleigh
The left hand integral is interpreted as the total amount of energy in the signal as
computed in the time domain, whereas the right hand integral is the total
amount of energy computed in the frequency domain. The modulus symbols
(|·|) serve as a reminder that the integrands are in general complex valued, in
which case it is the magnitude of these complex quantities which is being
integrated.
If h(x) ↔ H(f) then ∫_{−∞}^{∞} |h(x)|² dx = ∫_{−∞}^{∞} |H(f)|² df
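The discrete form of this energy identity carries a 1/N factor from the DFT normalization convention. A quick numerical check with an arbitrary random signal:

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(128)
X = np.fft.fft(x)

# Energy computed in the time domain equals energy computed in the
# frequency domain (divided by N for NumPy's unnormalized forward DFT).
assert np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / len(x))
```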
The other two methods for getting the same answer correspond to the idea of
convolution. In Fig. 12.4 the unit impulse response is drawn without
displacement along the x-axis. In the same figure we also draw the input pulses,
but notice that they are drawn in reverse sequence. We then shift the input train
of impulses along the x-axis by the amount t0 , which in this example is 3 units,
and overlay the result on top of the impulse response. Now the arithmetic is as
follows: using the x-location of each impulse in turn, locate the corresponding
point on the unit impulse response function, and scale the ordinate value of h(t)
by the height of the impulse. Repeat for each impulse in the input sequence and
add the results. The result will be exactly the same as given above in eqn. [12.24].
The arithmetic for evaluating the response in Fig. 12.5 is the same as in Fig.
12.4: multiply each ordinate value of the impulse response function by the
amplitude of the corresponding impulse and add the results. In fact, this is
nothing more than an inner product. To see this, we write the sequence of input
pulses as a stimulus vector s = (s₀, s₁, s₂) = (4,5,6) and the strength of the impulse
response at the same points in time could be written as the vector h = (h₀, h₁, h₂) =
(a,b,c). The operation of reversing the impulse response to plot it along the u-axis
would change the impulse response vector to h′ = (h₂, h₁, h₀) = (c,b,a). Accordingly,
the method described above for computing the response at time t0 is
r(t₀) = Σ_{k=0}^{2} s_k ⋅ h′_k = s • h′ [12.25]
Although this result was illustrated by the particular example of t0 = 3, the same
method obviously applies for any point in time and so the subscript notation
may be dropped at this point without loss of meaning.
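The inner-product view of eq. [12.25] is easy to check numerically. The stimulus (4,5,6) is from the text; the impulse-response values standing in for (a,b,c) are assumed here for illustration:

```python
import numpy as np

s = np.array([4.0, 5.0, 6.0])   # stimulus vector (s0, s1, s2) from the text
h = np.array([1.0, 2.0, 3.0])   # impulse response (a, b, c); values assumed

# Response at one instant: inner product of the stimulus with the reversed
# impulse response...
r = np.dot(s, h[::-1])          # 4*3 + 5*2 + 6*1
# ...which is exactly one sample of the full linear convolution.
assert np.isclose(r, np.convolve(s, h)[2])
```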
If we now generalize the above ideas so that the input signal is a continuous
function s(t), then the inner product of vectors in eqn. [12.25] becomes the inner
product between continuous functions.
r(t) = ∫_{−∞}^{∞} s(u) h′(u − t) du
     = ∫_{−∞}^{∞} s(u) h(t − u) du [12.26]
     = s(t) ∗ h(t)
Notice that the abscissa variable in Fig. 12.5 becomes a dummy variable of
integration u in eqn. 12.26 and so we recognize the result as the convolution of
the stimulus and impulse response. Therefore, we conclude that convolution
yields the superposition of responses to a collection of point stimuli. This is a major
result because any stimulus can be considered a collection of point stimuli.
If the development of this result had centered on Fig. 12.4 instead of Fig. 12.5,
the last equation would have been:
r(t) = ∫_{−∞}^{∞} h(u) s′(u − t) du
     = ∫_{−∞}^{∞} h(u) s(t − u) du [12.27]
     = h(t) ∗ s(t)
Since we observed that the same result is achieved regardless of whether it is the
stimulus or the impulse response that is reversed and shifted, this demonstrates
that the order of the functions is immaterial for convolution. That is, s ∗ h = h ∗ s,
which is the commutative law stated earlier.
Although the transition from Fourier series to the Fourier transform is a major
advance, it is also a retreat since not all functions are eligible for Fourier analysis.
In particular, the sinusoidal functions which were the very basis of Fourier series
are excluded by the preceding development of the Fourier transform operation.
This is because one condition for the existence of the Fourier transform for any
particular function is that the function be "absolutely integrable", that is, the
integral of the absolute value over the range -∞ to +∞ must be finite, and a true
sinusoid lasting for all time does not satisfy this requirement. The same is true
for constant signals. On the other hand, any physical signal that an
experimentalist encounters will have started at some definite time and will
inevitably finish at some time. Thus, empirical signals will always have Fourier
transforms, but our mathematical models of these signals may not. Since the
function sin(x) is a very important element of mathematical models, we must
show some ingenuity and find a way to bring such functions into the domain of
Fourier analysis. That is the purpose of delta functions.
Recall that the transition from Fourier series to transforms was accompanied
by a change in viewpoint: the spectrum is now a display of amplitude density.
As a result, the emphasis shifted from the ordinate values of a spectrum to the
area under the spectrum within some bandwidth. This is why a pure sinusoid
has a perfectly good Fourier series representation, but fails to make the transition
to a Fourier transform: we would need to divide by the bandwidth of the signal,
which is zero for a pure sinusoid. On the other hand, if what is important
anyway is the area under the transform curve, which corresponds to the total
amplitude in the signal, then we have a useful work-around. The idea is to
invent a function which looks like a narrow pulse (so the bandwidth is small)
that is zero almost everywhere but which has unit area. Obviously as the width
approaches zero, the height of this pulse will have to become infinitely large to
maintain its area. However, we should not let this little conundrum worry us
since the only time we will be using this new function is inside an integral, in
which case only the area of the pulse is relevant. This new function is called a
unit delta function and it is defined by the two conditions:
δ(t) = 0 for t ≠ 0
∫_{−∞}^{∞} δ(t) dt = 1 [12.28]
These defining conditions lead directly to the sifting property: for any function g
continuous at a,

∫_{−∞}^{∞} g(u) δ(u − a) du = ∫_{−∞}^{a⁻} g(u) δ(u − a) du + g(a) ∫_{a⁻}^{a⁺} δ(u − a) du + ∫_{a⁺}^{∞} g(u) δ(u − a) du
                            = 0 + g(a) + 0 = g(a) [12.29]
Applying this result to the convolution integral we see that convolution of any
function with a delta function located at x = a reproduces the function, shifted so
that its origin lies at x = a
g(t) ∗ δ(t − a) = ∫_{−∞}^{∞} g(u) δ(t − a − u) du
                = g(t − a) ∫_{−∞}^{∞} δ(t − a − u) du = g(t − a) [12.30]
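The discrete analog of eq. [12.30] replaces the delta function with a unit impulse (Kronecker delta): convolving with an impulse at index a shifts the sequence by a. The sequence and shift are arbitrary choices:

```python
import numpy as np

g = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
a = 2
d = np.zeros(5)
d[a] = 1.0                       # discrete unit impulse at index a

# Convolution with an impulse at a shifts g by a positions.
shifted = np.convolve(g, d)[:5]
assert np.allclose(shifted, [0.0, 0.0, 1.0, 2.0, 3.0])
```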
The Fourier transform of a delta function follows directly from the sifting
property: δ(t) ↔ 1. In other words, a delta function at the origin has a flat Fourier
spectrum, which means that all frequencies are present to an equal degree.
Likewise, the inverse Fourier transform of a unit delta function at the origin in the
frequency domain is a constant (d.c.) value. These results are shown pictorially in
Fig. 12.6 with the delta function represented by a spike or arrow along with a
number indicating the area under the spike. Thus we see that the Fourier
transform of a constant is a delta function at the origin, 1 ↔ δ(f). Applying the
modulation theorem to this result we may infer that the spectrum of a cosine or
sine wave is a pair of delta functions
cos(2πf₀x) ↔ [δ(f − f₀) + δ(f + f₀)] / 2      sin(2πf₀x) ↔ [δ(f − f₀) − δ(f + f₀)] / 2i [12.32]

[δ(x − x₀) + δ(x + x₀)] / 2 ↔ cos(2πfx₀)      [δ(x + x₀) − δ(x − x₀)] / 2i ↔ sin(2πfx₀) [12.33]
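The discrete picture of eq. [12.32] is a pair of spikes in the DFT of a sampled cosine: the normalized spectrum has weight 1/2 at bins ±f₀ and is zero elsewhere. The length and frequency below are arbitrary:

```python
import numpy as np

N = 16
t = np.arange(N) / N
f0 = 3
x = np.cos(2 * np.pi * f0 * t)

# Normalized DFT of a cosine: two spikes of weight 1/2 at +f0 and -f0
# (bin N - f0 represents -f0), all other bins zero.
X = np.fft.fft(x) / N
assert np.allclose(X[f0], 0.5)
assert np.allclose(X[N - f0], 0.5)
assert np.allclose(np.delete(X, [f0, N - f0]), 0.0)
```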
Complex conjugates
If the complex conjugate is taken of a function, its spectrum is conjugated and
reflected about the origin. This statement, and related results, are summarized in
Table 12.1.

    y(x)           Y(f)
    h*(x)          H*(−f)
    h*(−x)         H*(f)
    h(−x)          H(−f)
    2Re{h(x)}      H(f) + H*(−f)
    2iIm{h(x)}     H(f) − H*(−f)
We saw in Ch. 5 that any function y(x) can be written as the sum of even and odd
components, y(x) = E(x) + O(x) , with E(x) and O(x) in general being complex-
valued. Applying this fact to the definition of the Fourier transform yields
Y(f) = ∫_{−∞}^{∞} y(x) cos(2πxf) dx − i ∫_{−∞}^{∞} y(x) sin(2πxf) dx
     = ∫_{−∞}^{∞} E(x) cos(2πxf) dx − i ∫_{−∞}^{∞} O(x) sin(2πxf) dx [12.34]
from which we may deduce the symmetry relations of Table 12.2 between the
function y(x) in the space/time domain and its Fourier spectrum, Y(f). A
graphical illustration of these relations may be found in Ch. 2 of Bracewell (1978).
    y(x)     Y(f)
    even     even
    odd      odd
    y(x)               Y(f)
    h ∗ g              H ⋅ G
    h ∗ g(−)           H ⋅ G(−)
    h ∗ g*(−)          H ⋅ G*
    h ∗ g*             H ⋅ G*(−)
    h*(−) ∗ g*(−)      H* ⋅ G*
    h*(−) ∗ g*         H* ⋅ G*(−)
    h* ∗ g*            H*(−) ⋅ G*(−)
    h ∗ h              H²
    h*(−) ∗ h*(−)      [H*]²
    h* ∗ h*            [H*(−)]²
    h ★ h              H ⋅ H(−)
    h(−) ★ h(−)        H ⋅ H(−)
    h* ★ h*            H* ⋅ H*(−)