Pope Fourier and Spectral


Appendix D

Fourier transforms

The purpose of this appendix is to provide definitions and a summary of


the properties of the Fourier transforms used elsewhere in this book. For
further explanations and results, the reader may refer to standard texts, e.g.,
Bracewell (1965), Lighthill (1970), and Priestley (1981).

Definition
Given a function f(t), its Fourier transform is
g(ω) = F{f(t)} ≡ (1/2π) ∫_{−∞}^{∞} f(t) e^{−iωt} dt,   (D.1)

and the inverse transform is


f(t) = F^{−1}{g(ω)} = ∫_{−∞}^{∞} g(ω) e^{iωt} dω.   (D.2)

For f(t) and g(ω) to form a Fourier-transform pair it is necessary that the
above integrals converge, at least as generalized functions. The transforms
shown here are between the time domain (t) and the frequency domain
(ω); the corresponding formulae for transforms between physical space (x)
and wavenumber space (κ) are obvious. Some useful Fourier-transform pairs
are given in Table D.1 on page 679, and a comprehensive compilation is
provided by Erdélyi, Oberhettinger, and Tricomi (1954).
There is not a unique convention for the definition of Fourier transforms.
In some definitions the negative exponent −iωt appears in the inverse
transform, and the factor of 2π can be split in different ways between F and
F −1 . The convention used here is the same as that used by Batchelor (1953),
Monin and Yaglom (1975), and Tennekes and Lumley (1972).
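As a quick numerical sanity check of this convention, the transform (D.1) of the Gaussian pair from Table D.1 can be evaluated by quadrature. This is a minimal sketch assuming numpy is available; the grid width, resolution, b, and ω are arbitrary choices.

```python
import numpy as np

# Quadrature check of the convention (D.1), using the tabulated Gaussian pair:
# f(t) = exp(-t^2/(2 b^2)) / (b sqrt(2 pi))  has  g(w) = exp(-b^2 w^2 / 2) / (2 pi).
b = 0.7
t = np.linspace(-40.0, 40.0, 200_001)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * b**2)) / (b * np.sqrt(2 * np.pi))

def fourier(omega):
    # g(w) = (1/2pi) * integral of f(t) exp(-i w t) dt; a plain rectangle rule
    # suffices because f vanishes at the ends of the grid
    return np.sum(f * np.exp(-1j * omega * t)) * dt / (2 * np.pi)

omega = 1.3
err = abs(fourier(omega) - np.exp(-b**2 * omega**2 / 2) / (2 * np.pi))
```

With the factor of 1/(2π) moved to the inverse transform instead, the same check would fail by exactly that factor, which is why the convention matters when comparing tables from different books.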


Table D.1. Fourier-transform pairs (a, b, and ν are real constants with b > 0 and ν > −1/2)

    f(t)                                   g(ω)
    1                                      δ(ω)
    δ(t − a)                               (1/2π) e^{−iωa}
    δ^{(n)}(t − a)                         ((iω)^n/2π) e^{−iωa}
    e^{−b|t|}                              b/[π(b² + ω²)]
    (1/(b√(2π))) e^{−t²/(2b²)}             (1/2π) e^{−b²ω²/2}
    H(b − |t|)                             sin(bω)/(πω)
    (b² + t²)^{−(ν+1/2)}                   (√π/(π Γ(ν + 1/2))) (|ω|/(2b))^ν K_ν(b|ω|)

Derivatives
The Fourier transforms of derivatives are
F{d^n f(t)/dt^n} = (iω)^n g(ω),   (D.3)

F^{−1}{d^n g(ω)/dω^n} = (−it)^n f(t).   (D.4)

The cosine transform


If f(t) is real, then g(ω) has conjugate symmetry:
g(ω) = g*(−ω), for f(t) real,   (D.5)
as may be seen by taking the complex conjugate of Eq. (D.2). If f(t) is real
and even (i.e., f(t) = f(−t)), then Eq. (D.1) can be rewritten
g(ω) = (1/2π) ∫_{−∞}^{∞} f(t) cos(ωt) dt
     = (1/π) ∫_{0}^{∞} f(t) cos(ωt) dt,   (D.6)

showing that g(ω) is also real and even. The inverse transform is
f(t) = 2 ∫_{0}^{∞} g(ω) cos(ωt) dω.   (D.7)

Equations (D.6) and (D.7) define the cosine Fourier transform and its inverse.
In considering spectra, it is sometimes convenient to consider twice the
Fourier transform of a real even function f(t), i.e.,

ḡ(ω) ≡ 2g(ω) = (2/π) ∫_{0}^{∞} f(t) cos(ωt) dt,   (D.8)

so that the inversion formula

f(t) = ∫_{0}^{∞} ḡ(ω) cos(ωt) dω   (D.9)

does not contain a factor of 2 (cf. Eq. (D.7)).

The delta function


The Fourier transform of the delta function δ(t − a) is (from Eq. (D.1) and
invoking the sifting property Eq. (C.11))
F{δ(t − a)} = (1/2π) e^{−iωa},   (D.10)

and, in particular,

F{δ(t)} = 1/(2π).   (D.11)

Setting g(ω) = (1/2π) e^{−iωa}, the inversion formula (Eq. (D.2)) yields

δ(t − a) = ∫_{−∞}^{∞} (1/2π) e^{iω(t−a)} dω.   (D.12)

This is a remarkable and valuable result. However, since the integral in


Eq. (D.12) is clearly divergent, it – like δ(t − a) – must be viewed as a
generalized function. That is, with G(t) being a test function, Eq. (D.12) has
the meaning
∫_{−∞}^{∞} G(t) δ(t − a) dt = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (1/2π) G(t) e^{iω(t−a)} dω dt = G(a).   (D.13)

Further explanation is provided by Lighthill (1970) and Butkov (1968).



Convolution
Given two functions fa (t) and fb (t) (both of which have Fourier transforms)
their convolution is defined by
h(t) ≡ ∫_{−∞}^{∞} f_a(t − s) f_b(s) ds.   (D.14)

With the substitution r = t − s, the Fourier transform of the convolution is


F{h(t)} = (1/2π) ∫_{−∞}^{∞} e^{−iωt} ∫_{−∞}^{∞} f_a(t − s) f_b(s) ds dt
        = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−iω(r+s)} f_a(r) f_b(s) ds dr
        = 2π [(1/2π) ∫_{−∞}^{∞} e^{−iωr} f_a(r) dr] [(1/2π) ∫_{−∞}^{∞} e^{−iωs} f_b(s) ds]
        = 2π F{f_a(t)} F{f_b(t)}.   (D.15)

That is, the Fourier transform of the convolution equals 2π times the product of the Fourier transforms of the two functions.
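The convolution theorem (D.15) can be checked on a grid. This sketch assumes numpy; the two Gaussian test functions, the grid, and the test frequency are arbitrary choices.

```python
import numpy as np

# Check that F{fa * fb} = 2 pi F{fa} F{fb} (Eq. D.15) on a uniform grid.
t = np.linspace(-30.0, 30.0, 12_001)
dt = t[1] - t[0]
fa = np.exp(-t**2)           # arbitrary rapidly decaying test functions
fb = np.exp(-2.0 * t**2)

# h(t) = integral of fa(t - s) fb(s) ds; mode="same" keeps the t-axis aligned
h = np.convolve(fa, fb, mode="same") * dt

def fourier(f, omega):
    # Eq. (D.1) by a rectangle rule (the integrands vanish at the grid ends)
    return np.sum(f * np.exp(-1j * omega * t)) * dt / (2 * np.pi)

omega = 0.9
err = abs(fourier(h, omega) - 2 * np.pi * fourier(fa, omega) * fourier(fb, omega))
```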

Parseval’s theorems
We consider the integral of the product of two functions fa (t) and fb (t) that
have Fourier transforms ga (ω) and gb (ω). By writing fa and fb as inverse
Fourier transforms, we obtain
∫_{−∞}^{∞} f_a(t) f_b(t) dt = ∫_{−∞}^{∞} [∫_{−∞}^{∞} g_a(ω) e^{iωt} dω] [∫_{−∞}^{∞} g_b(ω′) e^{iω′t} dω′] dt   (D.16)

  = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} g_a(ω) g_b(−ω″) e^{i(ω−ω″)t} dω dω″ dt,   (D.17)

where the substitution ω″ = −ω′ has been made.

The integral of the exponential term over all t yields 2πδ(ω − ω″) (see Eq. (D.12)), so that the integration over ω″ is readily performed, producing

∫_{−∞}^{∞} f_a(t) f_b(t) dt = 2π ∫_{−∞}^{∞} g_a(ω) g_b(−ω) dω.   (D.18)

This is Parseval’s second theorem.


For the case in which fa and fb are the same function (i.e., fa = fb = f and
correspondingly ga = gb = g), Eq. (D.18) becomes Parseval’s first theorem:
∫_{−∞}^{∞} f(t)² dt = 2π ∫_{−∞}^{∞} g(ω) g(−ω) dω.   (D.19)

If f(t) is real, this can be re-expressed as


∫_{−∞}^{∞} f(t)² dt = 2π ∫_{−∞}^{∞} g(ω) g*(ω) dω = 4π ∫_{0}^{∞} g(ω) g*(ω) dω.   (D.20)
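Parseval's first theorem in the form (D.20) can be verified for the exponential pair of Table D.1. A minimal sketch assuming numpy; the truncation at ω = 200 and the grid spacing are arbitrary choices.

```python
import numpy as np

# Check of Eq. (D.20) for the real function f(t) = exp(-|t|), whose transform
# (Table D.1 with b = 1) is g(w) = 1/(pi (1 + w^2)).
# Left-hand side: integral of f(t)^2 = integral of exp(-2|t|) dt = 1.
omega = np.linspace(0.0, 200.0, 400_001)
dw = omega[1] - omega[0]
gg = (1.0 / (np.pi * (1.0 + omega**2)))**2        # g g* (g is real here)

# trapezoidal rule for 4 pi * integral from 0 to infinity of g g* dw
rhs = 4 * np.pi * (np.sum(gg) - 0.5 * gg[0] - 0.5 * gg[-1]) * dw
err = abs(rhs - 1.0)
```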

EXERCISES
D.1 With f(t) being a differentiable function with Fourier transform g(ω),
obtain the following results:
f(0) = ∫_{−∞}^{∞} g(ω) dω,   (D.21)

∫_{−∞}^{∞} f(t) dt = 2π g(0),   (D.22)

∫_{−∞}^{∞} (d^n f/dt^n)² dt = 2π ∫_{−∞}^{∞} ω^{2n} g(ω) g(−ω) dω.   (D.23)

Re-express the right-hand sides for the case of f(t) being real.
D.2 Let fa(t) be the zero-mean Gaussian distribution with standard deviation a, i.e.,
f_a(t) = N(t; 0, a²) ≡ (1/(a√(2π))) e^{−t²/(2a²)},   (D.24)
and let fb (t) = N (t; 0, b2 ), where a and b are positive constants. Show
that the convolution of fa and fb is N (t; 0, a2 + b2 ).
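A numerical sketch of the result asked for in Exercise D.2, assuming numpy; the values of a and b and the grid are arbitrary choices.

```python
import numpy as np

# Exercise D.2, checked numerically: N(t;0,a^2) convolved with N(t;0,b^2)
# should equal N(t;0,a^2+b^2).
a, b = 0.6, 0.8
t = np.linspace(-20.0, 20.0, 8_001)
dt = t[1] - t[0]

def normal(t, var):
    return np.exp(-t**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

conv = np.convolve(normal(t, a**2), normal(t, b**2), mode="same") * dt
err = float(np.max(np.abs(conv - normal(t, a**2 + b**2))))
```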
Appendix E
Spectral representation of stationary
random processes

The purpose of this appendix is to show the connections among a statistically
stationary random process U(t), its spectral representation (in terms of
Fourier modes), its frequency spectrum E(ω), and its autocorrelation function
R(s).
A statistically stationary random process has a constant variance, and
hence does not decay to zero as |t| tends to infinity. As a consequence, the
Fourier transform of U(t) does not exist. This fact causes significant technical
difficulties, which can be overcome only with more elaborate mathematical
tools than are appropriate here. We circumvent this difficulty by first devel-
oping the ideas for periodic functions, and then extending the results to the
non-periodic functions of interest.

E.1 Fourier series


We start by considering a non-random real process U(t) in the time interval
0 ≤ t < T . The process is continued periodically by defining
U(t + NT ) = U(t), (E.1)
for all non-zero integer N.
The time average of U(t) over the period is defined by
⟨U(t)⟩_T ≡ (1/T) ∫_{0}^{T} U(t) dt,   (E.2)
and time averages of other quantities are defined in a similar way. The
fluctuation in U(t) is defined by
u(t) = U(t) − ⟨U(t)⟩T , (E.3)
and clearly its time average, ⟨u(t)⟩T , is zero.


For each integer n, the frequency ωn is defined by


ωn = 2πn/T . (E.4)
We consider both positive and negative n, and observe that
ω−n = −ωn . (E.5)
The nth complex Fourier mode is
eiωn t = cos(ωn t) + i sin(ωn t)
= cos(2πnt/T ) + i sin(2πnt/T ). (E.6)
Its time average is

⟨e^{iω_n t}⟩_T = 1 for n = 0, and 0 for n ≠ 0; that is, ⟨e^{iω_n t}⟩_T = δ_{n0},   (E.7)

and hence the modes satisfy the orthogonality condition

⟨e^{iω_n t} e^{−iω_m t}⟩_T = ⟨e^{i(ω_n − ω_m)t}⟩_T = δ_{nm}.   (E.8)

The process u(t) can be expressed as a Fourier series,

u(t) = Σ_{n=−∞}^{∞} (a_n + i b_n) e^{iω_n t} = Σ_{n=−∞}^{∞} c_n e^{iω_n t},   (E.9)

where {an , bn } are real, and {cn } are the complex Fourier coefficients. Since
the time average ⟨u(t)⟩T is zero, it follows from Eq. (E.7) that c0 is also zero.
Expanded in sines and cosines, Eq. (E.9) is

u(t) = Σ_{n=1}^{∞} [(a_n + a_{−n}) + i(b_n + b_{−n})] cos(ω_n t) + Σ_{n=1}^{∞} [i(a_n − a_{−n}) − (b_n − b_{−n})] sin(ω_n t).   (E.10)

Since u(t) is real, cn satisfies conjugate symmetry,


c_n = c*_{−n},   (E.11)
(i.e., an = a−n and bn = −b−n ) so that the imaginary terms on the right-hand
side of Eq. (E.10) vanish. Thus the Fourier series (Eq. (E.10)) becomes

u(t) = 2 Σ_{n=1}^{∞} [a_n cos(ω_n t) − b_n sin(ω_n t)],   (E.12)

which can also be written



u(t) = 2 Σ_{n=1}^{∞} |c_n| cos(ω_n t + θ_n),   (E.13)

where the amplitude of the nth Fourier mode is


|c_n| = (c_n c_n*)^{1/2} = (a_n² + b_n²)^{1/2},   (E.14)
and its phase is
θ_n = tan^{−1}(b_n/a_n).   (E.15)
An explicit expression for the Fourier coefficients is obtained by multiply-
ing Eq. (E.9) by the −mth mode and averaging:
⟨e^{−iω_m t} u(t)⟩_T = ⟨Σ_{n=−∞}^{∞} c_n e^{iω_n t} e^{−iω_m t}⟩_T = Σ_{n=−∞}^{∞} c_n δ_{nm} = c_m.   (E.16)

It is convenient to introduce the operator F_{ω_n}{ } defined by

F_{ω_n}{u(t)} ≡ ⟨u(t) e^{−iω_n t}⟩_T = (1/T) ∫_{0}^{T} u(t) e^{−iω_n t} dt,   (E.17)
so that Eq. (E.16) can be written
Fωn{u(t)} = cn . (E.18)
Thus, the operator Fωn{ } determines the Fourier coefficient of the mode
with frequency ωn .
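The operator F_{ω_n}{ } can be exercised directly on a signal whose coefficients are known in advance. A minimal sketch assuming numpy; the period, signal, and coefficients are arbitrary choices.

```python
import numpy as np

# Recover known Fourier coefficients by the time average of Eq. (E.17).
T = 2.0
t = np.linspace(0.0, T, 4096, endpoint=False)
dt = t[1] - t[0]

def w(n):                       # Eq. (E.4): omega_n = 2 pi n / T
    return 2 * np.pi * n / T

# u(t) = 2[a1 cos(w1 t) - b1 sin(w1 t)] has c_1 = a1 + i b1 (cf. Eq. E.12)
a1, b1 = 0.5, -0.3
u = 2 * (a1 * np.cos(w(1) * t) - b1 * np.sin(w(1) * t))

def F_n(n):
    # F_{w_n}{u} = (1/T) * integral over one period of u(t) exp(-i w_n t) dt
    return np.sum(u * np.exp(-1j * w(n) * t)) * dt / T

err = abs(F_n(1) - (a1 + 1j * b1))
leak = abs(F_n(2))              # orthogonality: no content at omega_2
```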
Equation (E.9) is the spectral representation of u(t), giving u(t) as the sum
of discrete Fourier modes eiωn t , weighted with Fourier coefficients, cn . With
the extension to the non-periodic case in mind, the spectral representation
can also be written

u(t) = ∫_{−∞}^{∞} z(ω) e^{iωt} dω,   (E.19)

where

z(ω) ≡ Σ_{n=−∞}^{∞} c_n δ(ω − ω_n),   (E.20)

with ω being the continuous frequency. The integral in Eq. (E.19) is an


inverse Fourier transform (cf. Eq. (D.2)), and hence z(ω) is identified as the
Fourier transform of u(t). (See also Exercise E.1.)

E.2 Periodic random processes


We now consider u(t) to be a statistically stationary, periodic random process.
All the results obtained above are valid for each realization of the process. In
particular, the Fourier coefficients cn are given by Eq. (E.16). However, since
u(t) is random, the Fourier coefficients cn are random variables. We now
show that the means ⟨cn ⟩ are zero, and that the coefficients corresponding to
different frequencies are uncorrelated.
The mean of Eq. (E.9) is

⟨u(t)⟩ = Σ_{n=−∞}^{∞} ⟨c_n⟩ e^{iω_n t}.   (E.21)

Recall that c_0 is zero; for n ≠ 0, the stationarity condition – that ⟨u(t)⟩ be independent of t – evidently implies that ⟨c_n⟩ is zero.
The covariance of the Fourier modes is

⟨c_n c_m⟩ = ⟨⟨e^{−iω_n t} u(t)⟩_T ⟨e^{−iω_m t′} u(t′)⟩_T⟩
         = (1/T²) ∫_{0}^{T} ∫_{0}^{T} e^{−iω_n t} e^{−iω_m t′} ⟨u(t) u(t′)⟩ dt′ dt
         = (1/T) ∫_{0}^{T} e^{−i(ω_n + ω_m)t} [(1/T) ∫_{−t}^{T−t} e^{−iω_m s} R(s) ds] dt
         = δ_{n(−m)} F_{ω_m}{R(s)}.   (E.22)

The third line follows from the substitution t′ = t + s, and from the definition
of the autocovariance
R(s) ≡ ⟨u(t)u(t + s)⟩, (E.23)

which (because of stationarity) is independent of t. The integrand e−iωm s R(s)


is periodic in s, with period T , so the integral in large parentheses is
independent of t. The last line then follows from Eqs. (E.8) and (E.17).
It is immediately evident from Eq. (E.22) that the covariance ⟨cn cm ⟩ is
zero unless m equals −n: that is, Fourier coefficients corresponding to different
frequencies are uncorrelated. For m = −n, Eq. (E.22) becomes

⟨c_n c_{−n}⟩ = ⟨c_n c_n*⟩ = ⟨|c_n|²⟩ = F_{ω_n}{R(s)}.   (E.24)

Thus the variances ⟨|cn |2 ⟩ are the Fourier coefficients of R(s), which can
therefore be expressed as

R(s) = Σ_{n=−∞}^{∞} ⟨c_n c_n*⟩ e^{iω_n s} = 2 Σ_{n=1}^{∞} ⟨|c_n|²⟩ cos(ω_n s).   (E.25)

It may be observed that R(s) is real and an even function of s, and that it depends only on the amplitudes |c_n|, not on the phases θ_n.
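The phase independence of R(s) is easy to see numerically: build a periodic signal from a few modes with fixed amplitudes and random phases, and compare its time-averaged autocovariance with Eq. (E.25). A minimal sketch assuming numpy; the period, mode variances, seed, and lag are arbitrary choices (the amplitudes |c_n| are held fixed here so that the time average alone reproduces R).

```python
import numpy as np

# Check Eq. (E.25): R(s) = 2 sum |c_n|^2 cos(w_n s), independent of phases.
rng = np.random.default_rng(1)
T, M = 2 * np.pi, 4096
dt = T / M
t = np.arange(M) * dt
n = np.array([1, 2, 3])
w_n = 2 * np.pi * n / T                       # here simply w_n = n
c_sq = np.array([1.0, 0.5, 0.25])             # fixed |c_n|^2
theta = rng.uniform(0.0, 2 * np.pi, n.size)   # random phases (Eq. E.13)

u = 2 * np.sum(np.sqrt(c_sq)[:, None] *
               np.cos(w_n[:, None] * t + theta[:, None]), axis=0)

k = 500                                       # time lag s = k dt
R_avg = np.mean(u * np.roll(u, -k))           # periodic time average of u(t)u(t+s)
R_formula = 2 * np.sum(c_sq * np.cos(w_n * k * dt))
err = abs(R_avg - R_formula)
```

Rerunning with a different seed changes the phases θ_n, and hence the signal itself, but leaves R_avg unchanged to machine precision.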
Again with the extension to the non-periodic case in mind, we define the
frequency spectrum by

Ě(ω) = Σ_{n=−∞}^{∞} ⟨c_n c_n*⟩ δ(ω − ω_n),   (E.26)

so that the autocovariance can be written


R(s) = Σ_{n=−∞}^{∞} ⟨c_n c_n*⟩ e^{iω_n s} = ∫_{−∞}^{∞} Ě(ω) e^{iωs} dω   (E.27)

(cf. Eq. E.19).


It may be seen from its definition that Ě(ω) is a real, even function of
ω (i.e., Ě(ω) = Ě(−ω)). It is convenient, then, to define the (alternative)
frequency spectrum by
E(ω) = 2Ě(ω), for ω ≥ 0, (E.28)
and to rewrite Eq. (E.27) as the inverse cosine transform (Eq. (D.9))
R(s) = ∫_{0}^{∞} E(ω) cos(ωs) dω.   (E.29)

Setting s = 0 in the above equation, we obtain


R(0) = ⟨u(t)²⟩ = Σ_{n=−∞}^{∞} ⟨c_n c_n*⟩ = ∫_{−∞}^{∞} Ě(ω) dω = ∫_{0}^{∞} E(ω) dω.   (E.30)

Consequently, ⟨cn c∗n ⟩ represents the contribution to the variance from the nth
mode, and similarly
∫_{ω_a}^{ω_b} E(ω) dω

is the contribution to ⟨u(t)²⟩ from the frequency range ω_a ≤ |ω| < ω_b. It is


clear from Eq. (E.26) that, like R(s), the spectrum E(ω) is independent of
the phases.
It may be observed that Eq. (E.27) identifies R(s) as the inverse Fourier
transform of Ě(ω) (cf. Eq. (D.2)). Hence, as may be verified directly from
Eq. (E.25), Ě(ω) is the Fourier transform of R(s):
Ě(ω) = (1/2π) ∫_{−∞}^{∞} R(s) e^{−iωs} ds.   (E.31)

Similarly, E(ω) is twice the Fourier transform of R(s):


E(ω) = (2/π) ∫_{0}^{∞} R(s) cos(ωs) ds.   (E.32)

Having identified R(s) and Ě(ω) as a Fourier-transform pair, we now take


Eq. (E.31) as the definition of Ě(ω) (rather than Eq. (E.26)).
The spectrum Ě(ω) can also be expressed in terms of the Fourier transform
z(ω), defined in Eq. (E.20). Consider the infinitesimal interval (ω, ω + dω),
which contains either zero or one of the discrete frequencies ωn . If it contains
none of the discrete frequencies, then

z(ω) dω = 0, Ě(ω) dω = 0. (E.33)

On the other hand, if it contains the discrete frequency ωn , then

z(ω) dω = c_n,  Ě(ω) dω = ⟨c_n c_n*⟩.   (E.34)

Thus, in general,
Ě(ω) dω = ⟨z(ω) z(ω)*⟩ dω².   (E.35)

The essential properties of the spectrum are that

(i) Ě(ω) is non-negative (Ě(ω) ≥ 0) (see Eq. (E.35));


(ii) Ě(ω) is real (because R(s) is even, i.e., R(s) = R(−s)); and
(iii) Ě(ω) is even, i.e., Ě(ω) = Ě(−ω) (because R(s) is real).

Table E.1 provides a summary of the relationships among u(t), cn , z(ω), R(s),
and Ě(ω).

EXERCISES
E.1 By taking the Fourier transform of Eq. (E.9), show that z(ω) given
by Eq. (E.20) is the Fourier transform of u(t). (Hint: see Eq. (D.12).)
E.2 Show that the Fourier coefficients cn = an + ibn of a statistically
stationary, periodic random process satisfy

⟨c_n²⟩ = 0, ⟨a_n²⟩ = ⟨b_n²⟩, ⟨a_n b_n⟩ = 0,   (E.36)

⟨a_n a_m⟩ = ⟨a_n b_m⟩ = ⟨b_n b_m⟩ = 0, for n ≠ m.   (E.37)



Table E.1. Spectral properties of periodic and non-periodic statistically stationary random processes

Autocovariance:
    Periodic:       R(s) ≡ ⟨u(t)u(t + s)⟩, periodic in s
    Non-periodic:   R(s) ≡ ⟨u(t)u(t + s)⟩, with R(±∞) = 0

Spectrum (both cases):
    Ě(ω) ≡ (1/2π) ∫_{−∞}^{∞} R(s) e^{−iωs} ds  (discrete in the periodic case, continuous in the non-periodic case)
    E(ω) = 2Ě(ω) = (2/π) ∫_{0}^{∞} R(s) cos(ωs) ds
    R(s) = ∫_{−∞}^{∞} Ě(ω) e^{iωs} dω = ∫_{0}^{∞} E(ω) cos(ωs) dω

Fourier coefficient (periodic case):
    c_n = ⟨e^{−iω_n t} u(t)⟩_T = F_{ω_n}{u(t)}

Fourier transform (periodic case):
    z(ω) = Σ_{n=−∞}^{∞} c_n δ(ω − ω_n)

Spectral representation:
    Periodic:       u(t) = ∫_{−∞}^{∞} e^{iωt} z(ω) dω = Σ_{n=−∞}^{∞} c_n e^{iω_n t}
    Non-periodic:   u(t) = ∫_{−∞}^{∞} e^{iωt} dZ(ω)

Spectrum from the representation:
    Periodic:       Ě(ω) dω = ⟨z(ω) z(ω)*⟩ dω²
    Non-periodic:   Ě(ω) dω = ⟨dZ(ω) dZ(ω)*⟩

E.3 Non-periodic random processes

We now consider u(t) to be a non-periodic, statistically stationary random


process. Instead of being periodic, the autocovariance R(s) decays to zero as
|s| tends to infinity. Just as in the periodic case, the spectrum Ě(ω) is defined
to be the Fourier transform of R(s), but now Ě(ω) is a continuous function
of ω, rather than being composed of delta functions.
In an approximate sense, the non-periodic case can be viewed as the
periodic case in the limit as the period T tends to infinity. The difference

between adjacent discrete frequencies is

Δω ≡ ω_{n+1} − ω_n = 2π/T,   (E.38)

which tends to zero as T tends to infinity. Consequently, within any given


frequency range (ωa ≤ ω < ωb ), the number of discrete frequencies (≈
(ωb − ωa )/∆ω) tends to infinity, so that (in an approximate sense) the
spectrum becomes continuous in ω.
Mathematically rigorous treatments of the non-periodic case are given
by Monin and Yaglom (1975) and Priestley (1981). Briefly, while the non-
periodic process u(t) does not have a Fourier transform, it does have a
spectral representation in terms of the Fourier–Stieltjes integral
u(t) = ∫_{−∞}^{∞} e^{iωt} dZ(ω),   (E.39)

where Z(ω) is a non-differentiable complex random function. It may be


observed that dZ(ω) (in the non-periodic case) corresponds to z(ω) dω
(in the periodic case, Eq. (E.20)). The spectrum for the non-periodic case
corresponding to Eq. (E.35) for the periodic case is

Ě(ω) dω = ⟨dZ(ω) dZ(ω)*⟩.   (E.40)

In Table E.1 the spectral representations for the periodic and non-periodic
cases are compared.

E.4 Derivatives of the process


For the periodic case, the process u(t) has the spectral representation
Eq. (E.9). On differentiating with respect to time, we obtain the spectral
representation of du/dt:
du(t)/dt = Σ_{n=−∞}^{∞} iω_n c_n e^{iω_n t}.   (E.41)

Similarly, the spectral representation of the kth derivative is


u^{(k)}(t) ≡ d^k u(t)/dt^k = Σ_{n=−∞}^{∞} (iω_n)^k c_n e^{iω_n t}.   (E.42)

Because u(k) (t) is determined by the Fourier coefficients of u(t) (i.e., cn in


Eq. (E.42)), the autocovariance and spectrum of u(k) (t) are determined by
R(s) and E(ω).

It follows from the same procedure as that which leads to Eq. (E.25) that
the autocorrelation of u(k) (t) is
R_k(s) ≡ ⟨u^{(k)}(t) u^{(k)}(t + s)⟩ = Σ_{n=−∞}^{∞} ω_n^{2k} ⟨c_n c_n*⟩ e^{iω_n s}.   (E.43)

By comparing this with the (2k)th derivative of Eq. (E.25),


d^{2k} R(s)/ds^{2k} = (−1)^k Σ_{n=−∞}^{∞} ω_n^{2k} ⟨c_n c_n*⟩ e^{iω_n s},   (E.44)

we obtain the result


R_k(s) = (−1)^k d^{2k} R(s)/ds^{2k}.   (E.45)

The spectrum of u(k) (t) is (cf. Eq. (E.26))



Ě_k(ω) ≡ Σ_{n=−∞}^{∞} ω_n^{2k} ⟨c_n c_n*⟩ δ(ω − ω_n)
       = ω^{2k} Σ_{n=−∞}^{∞} ⟨c_n c_n*⟩ δ(ω − ω_n)
       = ω^{2k} Ě(ω),   (E.46)
a result that can, alternatively, be obtained by taking the Fourier transform
of Eq. (E.45).
In summary, for the kth derivative u(k) (t) of the process u(t), the autoco-
variance Rk (s) is given by Eq. (E.45), while the spectrum is
Ě_k(ω) = ω^{2k} Ě(ω).   (E.47)
These two results apply both to the periodic and to non-periodic cases.
Appendix F
The discrete Fourier transform

We consider a periodic function u(t), with period T , sampled at N equally


spaced times within the period, where N is an even integer. On the basis of
these samples, the discrete Fourier transform defines N Fourier coefficients
(related to the coefficients of the Fourier series) and thus provides a discrete
spectral representation of u(t).
The fast Fourier transform (FFT) is an efficient implementation of the
discrete Fourier transform. In numerical methods, the FFT and its inverse
can be used to transform between time and frequency domains, and between
physical space and wavenumber space. In DNS and LES of flows with
one or more directions of statistical homogeneity, pseudo-spectral methods
are generally used (in the homogeneous directions), with FFTs being used
extensively to transform between physical and wavenumber spaces.
The time interval ∆t is defined by
Δt ≡ T/N,   (F.1)
the sampling times are

tj ≡ j∆t, for j = 0, 1, . . . , N − 1, (F.2)

and the samples are denoted by

uj ≡ u(tj ). (F.3)

The complex coefficients c̃_k of the discrete Fourier transform are then defined for 1 − N/2 ≤ k ≤ N/2 by

c̃_k ≡ (1/N) Σ_{j=0}^{N−1} u_j e^{−iω_k t_j} = (1/N) Σ_{j=0}^{N−1} u_j e^{−2πijk/N},   (F.4)


where (as with Fourier series) the frequency ωk is defined by

ω_k = 2πk/T.   (F.5)
As demonstrated below, the inverse transform is
u_ℓ = Σ_{k=1−N/2}^{N/2} c̃_k e^{iω_k t_ℓ} = Σ_{k=1−N/2}^{N/2} c̃_k e^{2πikℓ/N}.   (F.6)

In order to confirm the form of the inverse transform, we consider the


quantity
I_{j,N} ≡ (1/N) Σ_{k=1−N/2}^{N/2} e^{2πijk/N}.   (F.7)

Viewed in the complex plane, Ij,N is the centroid of the N points e2πijk/N for
the N values of k. For j being zero or an integer multiple of N, each point
is located at (1, 0), so Ij,N is unity. For j not being an integer multiple of
N, the points are distributed symmetrically about the origin, so Ij,N is zero.
Thus
I_{j,N} = 1 for j/N integer, and 0 otherwise.   (F.8)

With this result, the right-hand side of Eq. (F.6) can be written
Σ_{k=1−N/2}^{N/2} c̃_k e^{2πikℓ/N} = Σ_{k=1−N/2}^{N/2} (1/N) Σ_{j=0}^{N−1} u_j e^{−2πijk/N} e^{2πikℓ/N}
  = Σ_{j=0}^{N−1} u_j (1/N) Σ_{k=1−N/2}^{N/2} e^{2πik(ℓ−j)/N}
  = Σ_{j=0}^{N−1} u_j I_{(ℓ−j),N} = u_ℓ.   (F.9)

In the final sum, the only non-zero contribution is for j = ℓ. This verifies the
inverse transform, Eq. (F.6).
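The definition (F.4) maps directly onto numpy's FFT convention (numpy computes the unnormalized sum Σ_j u_j e^{−2πijk/N} for k = 0, ..., N − 1), so c̃_k = fft(u)[k]/N with negative k read off modulo N. A minimal sketch assuming numpy; N, the seed, and the tested indices are arbitrary choices.

```python
import numpy as np

# Relate Eq. (F.4) to numpy's FFT: c~_k = np.fft.fft(u)[k % N] / N.
N = 16
rng = np.random.default_rng(0)
u = rng.standard_normal(N)

def c_tilde(k):
    j = np.arange(N)
    return np.sum(u * np.exp(-2j * np.pi * j * k / N)) / N   # Eq. (F.4)

c_fft = np.fft.fft(u) / N
err_pos = abs(c_tilde(3) - c_fft[3])           # 0 <= k <= N/2: same index
err_neg = abs(c_tilde(-5) - c_fft[(-5) % N])   # k < 0: index k mod N
```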
It is informative to study the relationship between the coefficients of the
discrete Fourier transform c̃k and those of the Fourier series ck . From the

definitions of these quantities (Eqs. (F.6) and (E.9)) we have


u_ℓ = Σ_{k=1−N/2}^{N/2} c̃_k e^{iω_k t_ℓ} = Σ_{k=−∞}^{∞} c_k e^{iω_k t_ℓ}.   (F.10)

Before considering the general case, we consider the simpler situation in


which the Fourier coefficients ck are zero for all modes with |ωk | ≥ ωmax ,
where ωmax is the highest frequency represented in the discrete Fourier
transform,
ω_max ≡ π/Δt = ω_{N/2}.   (F.11)

In this case, the sums in Eq. (F.10) are both effectively from −(N/2 − 1) to (N/2 − 1), so the coefficients c̃_k and c_k are identical.
For the general case, we need to consider frequencies higher than ωmax .
For k in the range −(N/2 − 1) ≤ k ≤ N/2, and for a non-zero integer m, the (k + mN)th mode has the frequency

ω_{k+mN} = ω_k + 2mω_max,   (F.12)

with

|ω_{k+mN}| ≥ ω_max.   (F.13)

At the sampling times t_j, the (k + mN)th mode is indistinguishable from the kth mode, since

e^{iω_{k+mN} t_j} = e^{2πij(k+mN)/N} = e^{2πijk/N} = e^{iω_k t_j}.   (F.14)
The (k + mN)th mode is said to be aliased to the kth mode.
The coefficients c̃k can be determined from their definition (Eq. (F.4)) with
the Fourier series substituted for uj :
c̃_k = (1/N) Σ_{j=0}^{N−1} [Σ_{n=−∞}^{∞} c_n e^{2πinj/N}] e^{−2πijk/N}
    = Σ_{n=−∞}^{∞} I_{(n−k),N} c_n = Σ_{m=−∞}^{∞} c_{k+mN}.   (F.15)

Thus the coefficient c̃k is the sum of the Fourier coefficients of all the modes
that are aliased to the kth mode.
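Aliasing is easy to demonstrate on samples of a single high-frequency mode. A minimal sketch assuming numpy; N = 8 and the mode number 10 are arbitrary choices (10 = 2 + 1·8, so the mode aliases to k = 2).

```python
import numpy as np

# Aliasing (Eqs. F.14, F.15): sampled at t_j = jT/N, the mode k + mN = 10 is
# indistinguishable from the mode k = 2 (N = 8, m = 1), so all of its energy
# appears in c~_2 (and its conjugate partner c~_{N-2}).
N, T = 8, 1.0
tj = np.arange(N) * T / N
u = np.cos(2 * np.pi * 10 * tj / T)     # pure mode at a frequency above w_max

c = np.fft.fft(u) / N                   # discrete coefficients c~_k
err = abs(c[2] - 0.5) + abs(c[N - 2] - 0.5)
rest = float(np.max(np.abs(np.delete(c, [2, N - 2]))))
```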
In view of conjugate symmetry, the N complex coefficients c̃_k can be expressed in terms of N real numbers (e.g., ℜ{c̃_k} for k = 0, 1, ..., N/2 and ℑ{c̃_k} for k = 1, 2, ..., N/2 − 1; see Exercise F.1). The discrete Fourier transform and its inverse provide a one-to-one mapping between u_j and c̃_k. On the order of N² operations are required to evaluate c̃_k directly from the sum in Eq. (F.4). However, the same result can be obtained in on the order of N log N
(1974)). Thus, for periodic data sampled at sufficiently small time intervals,
the FFT is an efficient tool for evaluating Fourier coefficients, spectra,
autocorrelations (as inverse Fourier transforms of spectra), convolutions,
and derivatives as
d^n u(t)/dt^n = F^{−1}{(iω)^n F{u(t)}}.   (F.16)
As with the Fourier transform, there are various definitions of the discrete
Fourier transform. The definition used here makes the most direct connection
with Fourier series. In numerical implementations, the alternative definition
given in Exercise F.2 is usually used.
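The derivative formula (F.16) can be sketched with the FFT, assuming numpy and a periodic, band-limited signal; N and the test signal sin(3t) are arbitrary choices.

```python
import numpy as np

# Spectral differentiation via Eq. (F.16): du/dt = F^{-1}{(i w) F{u}}.
N, T = 64, 2 * np.pi
t = np.arange(N) * T / N
u = np.sin(3 * t)                                # periodic and band-limited

omega = 2 * np.pi * np.fft.fftfreq(N, d=T / N)   # frequencies w_k, FFT ordering
du = np.fft.ifft((1j * omega) * np.fft.fft(u)).real
err = float(np.max(np.abs(du - 3 * np.cos(3 * t))))
```

For band-limited data the result is exact to round-off; for data with content at or above ω_max, the aliasing described above contaminates the derivative.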

EXERCISES
F.1 Show that, for real u(t), the coefficients c̃k satisfy
c̃_k = c̃*_{−k}, for |k| < N/2,   (F.17)

and that c̃_0 and c̃_{N/2} are real. Show that

cos(ω_max t_j) = (−1)^j,   (F.18)

sin(ω_max t_j) = 0.   (F.19)
F.2 An alternative definition of the discrete Fourier transform is
c̄_k = Σ_{j=0}^{N−1} u_j e^{−2πijk/N}, for k = 0, 1, ..., N − 1.   (F.20)

Show that the inverse is

u_ℓ = (1/N) Σ_{k=0}^{N−1} c̄_k e^{2πikℓ/N}.   (F.21)

What is the relationship between the coefficients c̃_k and c̄_k?
Appendix G
Power-law spectra

In the study of turbulence, considerable attention is paid to the shape


of spectra at high frequency (or large wavenumber). The purpose of this
appendix is to show the relationships among a power-law spectrum (E(ω) ∼ ω^{−p}, for large ω), the underlying random process u(t), and the second-order structure function D(s).
We consider a statistically stationary process u(t) with finite variance ⟨u2 ⟩
and integral timescale τ̄. The autocorrelation function
R(s) ≡ ⟨u(t)u(t + s)⟩ (G.1)
and half the frequency spectrum form a Fourier-cosine-transform pair:
R(s) = ∫_{0}^{∞} E(ω) cos(ωs) dω,   (G.2)

E(ω) = (2/π) ∫_{0}^{∞} R(s) cos(ωs) ds.   (G.3)

The third quantity of interest is the second-order structure function


D(s) ≡ ⟨[u(t + s) − u(t)]²⟩ = 2[R(0) − R(s)] = 2 ∫_{0}^{∞} [1 − cos(ωs)] E(ω) dω.   (G.4)

By definition, a power-law spectrum varies as


E(ω) ∼ ω^{−p}, for large ω,   (G.5)

whereas a power-law structure function varies as

D(s) ∼ s^q, for small s.   (G.6)


The aim here is to understand the significance of particular values of p and q, and the connection between them.
The first observation – obtained by setting s = 0 in Eq. (G.2) – is that the
variance is
⟨u²⟩ = R(0) = ∫_{0}^{∞} E(ω) dω.   (G.7)

By assumption ⟨u²⟩ is finite. Hence, if E(ω) is a power-law spectrum, the


requirement that the integral converges dictates p > 1.
A sequence of similar results stems from the spectra of the derivatives of
u(t). Suppose that the nth derivative of u(t) exists, and denote it by
u^{(n)}(t) = d^n u(t)/dt^n.   (G.8)
The autocorrelation of u(n) (t) is

R_n(s) ≡ ⟨u^{(n)}(t) u^{(n)}(t + s)⟩ = (−1)^n d^{2n} R(s)/ds^{2n}   (G.9)
(see Appendix E, Eq. (E.45)), and its frequency spectrum is

E_n(ω) = ω^{2n} E(ω)   (G.10)

(see Eq. (E.47)). Hence we obtain


⟨(d^n u/dt^n)²⟩ = R_n(0) = ∫_{0}^{∞} E_n(ω) dω = ∫_{0}^{∞} ω^{2n} E(ω) dω.   (G.11)

The left-hand side is finite if u(t) is differentiable n times (in a mean-square


sense). Then, if E(ω) is a power-law spectrum, the requirement that the
integral in Eq. (G.11) converges dictates

p > 2n + 1. (G.12)

For an infinitely differentiable process – such as the velocity evolving by


the Navier–Stokes equations – it follows from Eq. (G.12) that (for large ω)
the spectrum decays more rapidly than any power of ω: it may instead decay
as exp(−ω) or exp(−ω 2 ), for example. Nevertheless, over a significant range
of frequencies (ωl < ω < ωh , say) a power-law spectrum may occur, with
exponential decay beyond ωh .
Fig. G.1. Non-dimensional power-law spectra E(ω): Eq. (G.14) for ν = 1/6, 1/3, ..., 11/6, 2.

If the process u(t) is at least once continuously differentiable, then, for


small s, the structure function is
D(s) ≈ ⟨(du/dt)²⟩ s²,   (G.13)

i.e., a power law (Eq. (G.6)) with q = 2.


It is instructive to examine the non-dimensional power-law spectrum

E(ω) = (2/π) (α²/(α² + ω²))^{(1+2ν)/2},   (G.14)
for various values of the positive parameter ν. The non-dimensional integral
timescale is unity, i.e.,
∫_{0}^{∞} R(s) ds = (π/2) E(0) = 1,   (G.15)
while (for given ν) α is specified (see Eq. (G.18) below) so that the variance is unity. Figure G.1 shows E(ω) for ν between 1/6 and 2. For large ω, the straight lines on the log–log plot clearly show the power-law behavior with

p = 1 + 2ν.   (G.16)
The corresponding autocorrelation (obtained as the inverse transform of
Eq. (G.14)) is

R(s) = (2α/(√π Γ(ν + 1/2))) (sα/2)^ν K_ν(sα),   (G.17)
Fig. G.2. Autocorrelation functions R(s), Eq. (G.19), for ν = 1/6, 1/3, ..., 11/6, 2.

where K_ν is the modified Bessel function of the second kind. The normalization condition ⟨u²⟩ = R(0) = 1 yields

α = √π Γ(ν + 1/2)/Γ(ν),   (G.18)

so that Eq. (G.17) can be rewritten


R(s) = (2/Γ(ν)) (sα/2)^ν K_ν(sα).   (G.19)

These autocorrelations are shown in Fig. G.2. (For ν = 1/2, the autocorrelation given by Eq. (G.19) is simply R(s) = exp(−|s|).)
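The ν = 1/2 special case can be checked numerically against the cosine-transform pair (G.2): with α = 1, Eq. (G.14) gives E(ω) = (2/π)/(1 + ω²), whose transform should be exp(−|s|). A minimal sketch assuming numpy; the truncation at ω = 1000, the grid, and the lag s are arbitrary choices.

```python
import numpy as np

# Check that, for nu = 1/2 and alpha = 1, the cosine transform (G.2) of
# E(w) = (2/pi)/(1 + w^2) recovers R(s) = exp(-|s|).
omega = np.linspace(0.0, 1000.0, 1_000_001)
dw = omega[1] - omega[0]
E = (2.0 / np.pi) / (1.0 + omega**2)

s = 1.5
integrand = E * np.cos(omega * s)
# trapezoidal rule (halve the endpoint weights)
R = (np.sum(integrand) - 0.5 * integrand[0] - 0.5 * integrand[-1]) * dw
err = abs(R - np.exp(-s))
```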
The expression for the autocorrelation is far from revealing. However,
the expansion for Kν (sα) (for small argument) leads to very informative
expressions for the structure function (for small s):

D(s) = 2 [Γ(1 − ν)/Γ(1 + ν)] (αs/2)^{2ν} + ..., for ν < 1,
D(s) = (2/(ν − 1)) (αs/2)² + ..., for ν > 1.   (G.20)
Hence the structure function varies as a power law with exponent

q = 2ν, for ν < 1;  q = 2, for ν > 1.   (G.21)
This behavior is evident in Fig. G.3.
The conclusions to be drawn from the expressions for the power-law
exponents p and q (Eqs. (G.16) and (G.21)) are straightforward. The case
Fig. G.3. Second-order structure functions D(s), Eq. (G.20), for ν = 1/6, 1/3, ..., 11/6, 2. Observe that, for ν > 1 and small s, all the structure functions vary as s².

Table G.1. The relationships among the spectral exponent p, the structure-function exponent q, and the differentiability of the underlying process u(t) for the power-law spectrum Eq. (G.14)

    ν      Spectrum E(ω) ∼ ω^{−p}    Structure function D(s) ∼ s^q    n such that ⟨(d^n u/dt^n)²⟩ < ∞
    1/3    p = 5/3                   q = 2/3                          0
    1/2    p = 2                     q = 1                            0
    >1     p > 3                     q = 2                            ≥1
    >2     p > 5                     q = 2                            ≥2

ν > 1 corresponds to an underlying process that is at least once continuously


differentiable, for which p is greater than 3 and q is 2. The case 0 < ν < 1
corresponds to a non-differentiable process u(t), and the power laws are
connected by
p = q + 1. (G.22)

Table G.1 displays these results for particular cases.


For ν < 1, for large ω the power-law spectrum is

E(ω) ≈ C₁ ω^{−(1+q)},   (G.23)



while for small s the structure function is


D(s) ≈ C₂ s^q.   (G.24)
These relations stem from Eqs. (G.14) and (G.20), from which C1 and C2
(which depend upon q) can be deduced. It is a matter of algebra (see
Exercise G.1) to show that C1 and C2 are related by
C₁/C₂ = (1/π) Γ(1 + q) sin(πq/2).   (G.25)

For the particular case q = 2/3, this ratio is 0.2489; or, to an excellent approximation,

(C₁/C₂)_{q=2/3} ≈ 1/4.   (G.26)
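The quoted value is a one-line arithmetic check using only the standard-library gamma function:

```python
import math

# Eq. (G.25) at q = 2/3: C1/C2 = Gamma(1 + q) sin(pi q / 2) / pi,
# quoted in the text as 0.2489, approximately 1/4 (Eq. G.26).
q = 2.0 / 3.0
ratio = math.gamma(1.0 + q) * math.sin(math.pi * q / 2.0) / math.pi
```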
Although we have considered a specific example of a power-law spectrum
(i.e., Eq. (G.14)), the conclusions drawn are general. If a spectrum exhibits the power-law behavior E(ω) ≈ C₁ ω^{−p} over a significant range of frequencies, then there is corresponding power-law behavior D(s) ≈ C₂ s^q for the structure function, with q = min(p − 1, 2). (This assertion can be verified by analysis of Eq. (G.4); see Monin and Yaglom (1975).) For q < 2, C₁ and C₂ are related
by Eq. (G.25).

EXERCISE
G.1 Identify C1 and C2 in Eqs. (G.23) and (G.24). With the use of the
following properties of the gamma function:
Γ(1 + ν) = ν Γ(ν),   (G.27)

Γ(ν) Γ(1 − ν) = π/sin(πν),   (G.28)

Γ(ν) Γ(ν + 1/2) = (2π)^{1/2} 2^{1/2−2ν} Γ(2ν),   (G.29)

verify Eq. (G.25).
