
Review of Linear System Theory

The following is a (very) brief review of linear system theory and Fourier analysis. I work
primarily with discrete signals, but each result developed in this review has a parallel in terms
of continuous signals and systems. I assume the reader is familiar with linear algebra (as
reviewed in my handout Geometric Review of Linear Algebra), and least squares estimation (as
reviewed in my handout Least Squares Optimization).

1 Linear shift-invariant systems

A system is linear if it obeys the principle of superposition: the response to a weighted sum
of any two inputs is the (same) weighted sum of the responses to each individual input.
A system is shift-invariant (also called translation-invariant for spatial signals, or time-invariant for temporal signals) if the response to any input shifted by an amount ∆ is equal to the response to the original input, itself shifted by ∆.
These two properties are completely independent: a system can have one of them, both, or neither [think of an example of each of the 4 possibilities].
The rest of this review is focused on systems that are both linear and shift-invariant (known as
LSI systems). The diagram below decomposes the behavior of such an LSI system. Consider
an arbitrary discrete input signal. We can rewrite it as a weighted sum of impulses (also called
“delta functions”). Since the system is linear, the response to this weighted sum is just the
weighted sum of responses to each individual impulse. Since the system is shift-invariant, the
response to each impulse is just a shifted copy of the response to the first. The response to the
impulse located at the origin (position 0) is known as the system’s impulse response.
Putting it all together, the full system response is the weighted sum of shifted copies of the
impulse response. Note that the system is fully characterized by the impulse response: This is
all we need to know in order to compute the response of the system to any input!
To make this explicit, we write an equation that describes this computation:
y[n] = Σ_m x[m] r[n − m]

This operation, by which input x and impulse response r are combined to generate output
signal y is called a convolution. It is often written using a more compact notation: y = x ∗ r.
Although we think of x and r playing very different roles, the operation of convolution is
actually commutative: substituting k = n − m gives:
y[n] = Σ_k x[n − k] r[k] = (r ∗ x)[n]
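As a concrete check, here is a minimal numpy sketch (my own illustration; the helper name conv and the test vectors are not from the handout) that evaluates the sum directly and confirms commutativity:

    import numpy as np

    def conv(x, r):
        # Direct evaluation of y[n] = sum_m x[m] r[n - m], treating samples
        # outside either vector as zero (aperiodic convolution).
        N = len(x) + len(r) - 1
        y = np.zeros(N)
        for n in range(N):
            for m in range(len(x)):
                if 0 <= n - m < len(r):
                    y[n] += x[m] * r[n - m]
        return y

    x = np.array([1.0, 2.0, 3.0])
    r = np.array([0.5, 0.5])
    assert np.allclose(conv(x, r), np.convolve(x, r))  # matches numpy's built-in
    assert np.allclose(conv(x, r), conv(r, x))         # commutativity: x * r = r * x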

• Author: Eero Simoncelli, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York
University.
• Thanks to Jonathan Pillow for generating some of the figures.
• Created: Fall 2001. Revised: Nov 2004.
• Send corrections or comments to eero.simoncelli@nyu.edu
[Figure: an input signal written as a weighted sum of impulses (weights v1, v2, v3, v4); the LSI system L maps each weighted impulse to a correspondingly weighted, shifted copy of the impulse response; the output is the sum of these responses.]

It is also easy to see that convolution is associative: a ∗ (b ∗ c) = (a ∗ b) ∗ c. And finally, convolution is distributive over addition: a ∗ (b + c) = a ∗ b + a ∗ c.

Back to LSI systems. The impulse response r is also known as a “convolution kernel” or “linear filter”. Looking back at the definition, each component of the output y is computed as an inner product of a chunk of the input vector x with a reverse-ordered copy of r. As such, the convolution operation may be visualized as a weighted sum over a window that slides along the input signal.
[Figure: a window of kernel weights (r1, r2, r3, in reverse order) sliding along the input signal; each output sample is the inner product of the reversed kernel with the chunk of input under the window.]
For finite-length discrete signals (i.e., vectors), one must specify how convolution is handled at the boundaries. The standard solution is to consider each vector to be one period of an infinitely periodic signal. Thus, for example, when the convolution operation would require one to access an element just beyond the last element of the vector, one need only “wrap around” and use the first element. There are other methods of handling boundaries. For example, one can pad the signal with zeros, or reflect it about the boundary.
[Figure: the matrix of a circular convolution, with the wraparound entries appearing in its corners.]
The convolution operation is easily extended to multidimensional signals. For example, in 2D, both the signal and the convolution kernel are two-dimensional arrays of numbers, and the operation corresponds to taking sums over a 2D window of the signal, with weights specified by the kernel.
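The wraparound rule is easy to state in code. Here is a minimal sketch (circ_conv is a hypothetical helper name; it assumes the kernel has been zero-padded to the signal length):

    import numpy as np

    def circ_conv(x, r):
        # Circular convolution: x and r are one period (length N) of periodic
        # signals; the index (n - m) % N implements the boundary wraparound.
        N = len(x)
        y = np.zeros(N)
        for n in range(N):
            for m in range(N):
                y[n] += x[m] * r[(n - m) % N]
        return y

    x = np.arange(6, dtype=float)
    r = np.zeros(6)
    r[:3] = [0.25, 0.5, 0.25]     # kernel zero-padded to the signal length
    print(circ_conv(x, r))        # note the wraparound at the two ends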

2 Sinusoids and Convolution

The sine function, sin(θ), gives the y-coordinate of the points on a unit circle, as a function of the angle θ. The cosine function, cos(θ), gives the x-coordinate. Thus, sin²(θ) + cos²(θ) = 1. The angle, θ, is (by convention) assumed to be in units of radians: a full sweep around the unit circle corresponds to an angle of 2π radians.
For our purposes here, we will consider sinusoidal signals. A continuous sinusoidal signal
in time might be written A sin(Ωt − φ), where A is the amplitude, Ω gives the frequency (in
radians per unit time) and φ corresponds to a “phase shift”, or a temporal delay by amount
φ/Ω. A discretized sinusoid might be written as: A sin(2πωn − φ). Here, n is the index into the
vector, and the frequency (in cycles per sample) is determined by ω.
The usefulness of these “sinusoidal” functions comes from their unique behavior with respect
to LSI systems. First, consider what happens when the function cos(Ωt) is translated by an
amount ∆. This corresponds to a change in phase φ = Ω∆, which may be written (using a
basic trigonometric identity from high school mathematics):
A cos(Ωt − φ) = A cos(φ) cos(Ωt) + A sin(φ) sin(Ωt)
That is, a shifted copy of a sinusoid is equivalent to a linear combination of the (non-shifted)
sin and cos functions!
The left and right sides of this equation provide two different characterizations of a sinusoidal function: (a) in terms of its amplitude and phase, or (b) in terms of its decomposition into sinusoidal and cosinusoidal components. These two descriptions are just polar or rectangular coordinate system representations of the same information (see figure).
[Figure: a vector of length A at angle φ; its rectangular coordinates are (A cos φ, A sin φ).]
The converse of this trigonometric identity is that a linear combination of sinusoids of the same frequency is a single sinusoid of that frequency, with a particular phase and amplitude.
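A quick numerical check of this identity (the values of A, φ, and Ω below are arbitrary choices, not from the handout):

    import numpy as np

    A, phi, Omega = 2.0, 0.7, 3.0           # arbitrary amplitude, phase, frequency
    t = np.linspace(0.0, 2.0 * np.pi, 100)

    lhs = A * np.cos(Omega * t - phi)       # amplitude-and-phase form
    rhs = A * np.cos(phi) * np.cos(Omega * t) + A * np.sin(phi) * np.sin(Omega * t)
    assert np.allclose(lhs, rhs)            # the two characterizations agree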
Now consider the response of an LSI system to a sinusoid:

y = x ∗ r
y[n] = Σ_m r[m] A sin(2πω(n − m) − φ)
     = Σ_m r[m] A sin(2πωn − (2πωm + φ))

The response is a sum of sinusoids, all of the same frequency (2πω, in radians per sample), but with different phases (2πωm + φ). This sum is just a single sinusoid of the same frequency but (possibly) different
phase and amplitude. That is, sinusoids preserve their frequency when they pass through LSI
systems, even though their amplitude and phase may change.

Sinusoids as eigenfunctions of LSI systems. The relationship between LSI systems and si-
nusoidal functions may be expressed more compactly (and deeply) by bundling together a
sine and cosine function into a single complex exponential:
e^{iθ} ≡ cos(θ) + i sin(θ)

where i = √−1 is the imaginary number. This rather mysterious relationship can be derived by expanding the exponential in a Taylor series, and noting that the even (real) terms form the series expansion of a cosine and the odd (imaginary) terms form the expansion of a sine [see Feynman’s beautiful derivation in his Lectures on Physics].
The notation may seem a bit unpleasant, but now changes in the amplitude and phase of a
sinusoid may be expressed more compactly, and the relationship between LSI systems and
sinusoids may be written as:

L{e^{iΩt}} = A_L e^{i(Ωt + φ_L)} = A_L e^{iφ_L} e^{iΩt}

where A_L and φ_L represent the amplitude and phase change that the LSI system L imposes on sinusoids of frequency Ω. Summarizing, the action of an LSI system on the complex exponential function is simply to multiply it by the complex number A_L e^{iφ_L}. That is, the complex exponentials are eigenfunctions of LSI systems!
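Here is a small numpy sketch of the eigenfunction property (the kernel and frequency number are arbitrary choices): circularly convolving a complex exponential with a kernel merely rescales it by a single complex number.

    import numpy as np

    N = 64
    k = 5                                  # frequency number (arbitrary)
    n = np.arange(N)
    e = np.exp(2j * np.pi * k * n / N)     # complex exponential input

    r = np.zeros(N)
    r[:3] = [0.25, 0.5, 0.25]              # arbitrary small kernel, zero-padded

    # circular convolution of e with r, directly from the definition
    y = np.array([np.sum(r * e[(m - n) % N]) for m in range(N)])

    ratio = y / e                          # should be one constant complex number
    assert np.allclose(ratio, ratio[0])    # eigenfunction: output = const * input
    print(abs(ratio[0]), np.angle(ratio[0]))   # the system's A_L and phi_L at this k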

3 Fourier Transform(s)

A collection of sinusoids may be used as a linear basis for representing (realistic) signals. The transformation from the standard representation of the signal (e.g., as a function of time) to a set of coefficients representing the amount of each sinusoid needed to create the signal is called the Fourier Transform.
There are really four variants of Fourier transform, depending on whether the signal is contin-
uous or discrete, and on whether it is periodic or aperiodic:

SIGNAL          continuous                            discrete

aperiodic       Fourier Transform                     Discrete-Time (-Space) Fourier Transform
                (transform: continuous, aperiodic)    (transform: continuous, periodic)
periodic        Fourier Series                        Discrete Fourier Transform
                (transform: discrete, aperiodic)      (transform: discrete, periodic)

For our purposes here, we’ll focus on the Discrete Fourier Transform (DFT), defined for periodic discretized signals. We can write one period of such a signal as a vector of length (say) N. The following collection of N sinusoidal functions forms an orthogonal basis [verify]:

c_k[n] ≡ (1/√N) cos(2πkn/N),   k = 0, 1, . . . , N/2
s_k[n] ≡ (1/√N) sin(2πkn/N),   k = 1, 2, . . . , N/2 − 1

Thus, we can write any vector ~v as a weighted sum of these basis functions:

v[n] = Σ_{k=0}^{N/2} a_k c_k[n] + Σ_{k=1}^{N/2−1} b_k s_k[n]

Since the basis is orthogonal, the Fourier coefficients {a_k, b_k} may be computed by projecting the input vector ~v onto each basis function:

a_k = Σ_{n=0}^{N−1} v[n] c_k[n]
b_k = Σ_{n=0}^{N−1} v[n] s_k[n]

Now, using the properties of sinusoids developed earlier, we can combine the cosine and sine terms into a single phase-shifted sinusoid:

v[n] = Σ_{k=0}^{N/2} A_k sin(2πkn/N − φ_k)

with amplitudes A_k = √(a_k² + b_k²) and phases φ_k = tan⁻¹(b_k/a_k). These are referred to as the Fourier amplitudes and Fourier phases of the signal v[n]. Again, this is just a polar coordinate representation of the original amplitudes {a_k, b_k}.
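These projections are easy to carry out numerically. One caveat for the sketch below (the names B, G, and the test vector are mine): with the 1/√N scaling, the interior basis functions actually have squared norm 1/2 (only the k = 0 and k = N/2 cosines have unit norm), so the code solves the diagonal Gram system rather than assuming unit norms.

    import numpy as np

    N = 8
    n = np.arange(N)
    C = np.array([np.cos(2 * np.pi * k * n / N) for k in range(N // 2 + 1)]) / np.sqrt(N)
    S = np.array([np.sin(2 * np.pi * k * n / N) for k in range(1, N // 2)]) / np.sqrt(N)
    B = np.vstack([C, S])             # rows are the N basis functions c_k, s_k

    G = B @ B.T                       # Gram matrix: diagonal, so the basis is orthogonal
    v = np.random.randn(N)            # an arbitrary test vector
    coef = np.linalg.solve(G, B @ v)  # the coefficients {a_k, b_k} (scaled projections)
    assert np.allclose(B.T @ coef, v) # perfect reconstruction from the sinusoidal basis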
The figure illustrates successive approximations of a triangle wave with sinusoids. The top panel shows the original signal. The next shows the approximation one gets by projecting onto a single zero-frequency sinusoid (i.e., the constant function). The next shows the result with three frequency components, and the last panel shows the result with six. [Try matlab’s xfourier to see a live demo with a square wave.]
[Figure: four panels: original signal; one frequency (k = 0); 3 frequencies; 6 frequencies.]

Alternatively, we can use a complex-valued number, A_k e^{iφ_k}, to represent the amplitude and phase of each frequency component. Now the Fourier amplitudes and phases correspond to the amplitude and phase of this complex number. This is the standard representation of the Fourier coefficients.

Shown are Fourier amplitude spectra (the amplitudes plotted as a function of frequency number k) for three simple signals: an impulse, a Gaussian, and a cosine function. These are shown here symmetrically arranged around the origin, but some authors plot only the positive half of the frequency axis. Note that the cosine function is constructed by adding a complex exponential to its frequency-negated cousin; this is why the Fourier transform shows two impulses.
[Figure: amplitude spectra of an impulse (constant spectrum), a Gaussian (Gaussian spectrum), and a cosine (two impulses).]

Shifting property. When we shift an input signal, each sinusoid in the Fourier representation must be shifted. Specifically, shifting by m samples means that the phase of each sinusoid changes by 2πkm/N. Note that the phase change is different for each frequency k, and also that the amplitude is unchanged.
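A short numpy check of the shifting property (note that numpy's DFT puts a minus sign in the forward exponent, so the phase change shows up as −2πkm/N):

    import numpy as np

    N = 16
    m = 3                                      # shift amount
    v = np.random.randn(N)
    k = np.arange(N)

    V = np.fft.fft(v)
    Vs = np.fft.fft(np.roll(v, m))             # DFT of the circularly shifted signal

    assert np.allclose(np.abs(Vs), np.abs(V))  # amplitudes are unchanged by the shift
    # each phase changes by 2*pi*k*m/N (negated under numpy's sign convention)
    assert np.allclose(Vs, V * np.exp(-2j * np.pi * k * m / N))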

Stretching property. If we stretch the input signal (i.e., rescale the x-axis by a factor α), the Fourier transform will be compressed by the same factor (i.e., the frequency axis is rescaled by 1/α). For example, the Fourier amplitude spectrum of a Gaussian signal is also a Gaussian, with standard deviation inversely proportional to that of the original signal.

4 Convolution Theorem

The most important property of the Fourier representation is its relationship to LSI systems
and convolution. To see this, we need to combine the eigenfunction property of complex exponentials with the Fourier transform. The diagram below illustrates this. Consider applying an
LSI system to an arbitrary signal. Use the Fourier transform to rewrite it as a weighted sum
of sinusoids. The weights in the sum may be expressed as complex numbers, Ak eiφk , repre-
senting the amplitude and phase of each sinusoidal component. Since the system is linear, the
response to this weighted sum is just the weighted sum of responses to each of the individual
sinusoids. But the action of an LSI on a sinusoid with frequency number k will be to multiply
the amplitude by a factor A_L(k) and shift the phase by an amount φ_L(k). Finally, the system response is a sum of sinusoids with amplitudes/phases corresponding to

(A_L(k) A_k) e^{i(φ_L(k) + φ_k)} = (A_L(k) e^{iφ_L(k)}) (A_k e^{iφ_k}).

[Figure: an input signal decomposed into weighted sinusoidal components (weights v1, v2, v3); the LSI system scales each component’s coefficient by A_L(k) e^{iφ_L(k)}; the output is the recombined weighted sum.]

Earlier, using a similar sort of diagram, we explained that LSI systems can be characterized by their impulse response, r[n]. Now we have seen that the complex numbers A_L(k) e^{iφ_L(k)} provide an alternative characterization. We now want to find the relationship between these two characterizations. First, we write an expression for the convolution (the response of the LSI system):

y[n] = Σ_m r[m] x[n − m]

Now take the DFT of both sides of the equation:

Y[k] = Σ_n y[n] e^{i2πnk/N}
     = Σ_n Σ_m r[m] x[n − m] e^{i2πnk/N}
     = Σ_m r[m] Σ_n x[n − m] e^{i2πnk/N}
     = Σ_m r[m] e^{i2πmk/N} X[k]
     = X[k] R[k]

where the next-to-last step substitutes l = n − m in the inner sum (this is just the shifting property).

This is quite amazing: the DFT of the LSI system response, y[n], is just the product of the DFT of the input and the DFT of the impulse response! That is, the complex numbers A_L(k) e^{iφ_L(k)} correspond to the Fourier transform of the impulse response r[n].
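A direct numerical check of this result (numpy's fft fixes one particular sign convention in the exponent, but the theorem holds under any consistent convention):

    import numpy as np

    N = 32
    x = np.random.randn(N)
    r = np.random.randn(N)

    # circular convolution, straight from the definition
    y = np.array([sum(r[m] * x[(n - m) % N] for m in range(N)) for n in range(N)])

    # the DFT of the output is the product of the DFTs of input and impulse response
    assert np.allclose(np.fft.fft(y), np.fft.fft(x) * np.fft.fft(r))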

Summarizing, the response of the LSI system may be computed by a) Fourier-transforming the input signal, b) multiplying each Fourier coefficient by the associated Fourier coefficient of the impulse response, and c) inverse Fourier-transforming. A more colloquial statement of this Convolution theorem is: “convolution in the signal domain corresponds to multiplication in the Fourier domain”. Reversing the roles of the two domains (since the inverse transformation is essentially the same as the forward transformation) means that “multiplication in the signal domain corresponds to convolution in the Fourier domain”.
[Figure: two equivalent routes from input v to output: convolve with r directly, or take the F.T., multiply by the F.T. of r, and apply the inverse F.T.]
Why would we want to bother going through three sequential operations in order to compute a convolution? Conceptually, multiplication is easier to understand than convolution, and thus we can often gain a better understanding of an LSI system by thinking about it in terms of its effect in the Fourier domain. More practically, there are very efficient algorithms for the Discrete Fourier Transform, known as the Fast Fourier Transform (FFT), such that this three-step computation may be more computationally efficient than direct convolution.
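A minimal sketch of the three-step recipe (fft_conv is my own name for it; this version assumes periodic signals of equal length):

    import numpy as np

    def fft_conv(x, r):
        # transform, multiply coefficients, inverse transform
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(r)))

    N = 1024
    x = np.random.randn(N)
    r = np.random.randn(N)
    direct = np.array([sum(r[m] * x[(n - m) % N] for m in range(N))
                       for n in range(N)])
    assert np.allclose(fft_conv(x, r), direct)  # same answer: O(N log N) vs O(N^2)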

As an example of conceptual simplification, consider two impulse responses, along with their Fourier amplitude spectra. It is often difficult to anticipate the behavior of these systems solely from their impulse responses, but their Fourier transforms are quite revealing. The first is a lowpass filter, meaning that it discards high-frequency sinusoidal components (by multiplying them by zero). The second is a bandpass filter: it allows a central band of frequencies to pass, discarding the lower and higher frequency components.
[Figure: a lowpass filter and a bandpass filter, each shown as an impulse response alongside its Fourier amplitude spectrum.]

As another example of conceptual simplification, consider an impulse response formed by the product of a Gaussian function and a sinusoid (known as a Gabor function). How can we visualize the Fourier transform of this product? We need only compute the convolution of the Fourier transforms of the Gaussian and the sinusoid. The Fourier transform of a Gaussian is a Gaussian. The Fourier transform of a sinusoid is an impulse at the corresponding frequency. Thus, the Fourier transform of the Gabor function is a Gaussian centered at the frequency of the sinusoid.
[Figure: a Gaussian envelope multiplied by a sinusoid yields a Gabor function; its Fourier amplitude spectrum is a Gaussian bump centered at the sinusoid’s frequency.]
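A sketch of this construction (the Gaussian width and sinusoid frequency below are arbitrary choices):

    import numpy as np

    N = 256
    n = np.arange(N) - N // 2
    gabor = np.exp(-0.5 * (n / 10.0) ** 2) * np.cos(2 * np.pi * 0.1 * n)  # Gaussian x sinusoid

    spec = np.abs(np.fft.fftshift(np.fft.fft(gabor)))
    freqs = np.fft.fftshift(np.fft.fftfreq(N))  # frequencies in cycles per sample
    print(freqs[np.argmax(spec)])               # peak near +/- 0.1: a Gaussian bump
                                                # centered at the sinusoid's frequency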
