Lecture 01
Digital Communications
Lecture 1: Introduction to Digital Communication
That is, rather than focusing on hardware and software for these systems,
which is much like hardware and software for many other kinds of systems, we
focus on the fundamental system aspects of modern digital communication.
On holiday, you use your fancy mobile phone to take a picture and send it straight away to a friend.
At the airport, waiting for boarding, you switch on your laptop and go to your favourite website.
Or you play a game over the Internet on your mobile phone.
The basis of the theory was developed 60 years ago by Claude Shannon, and
is called information theory.
For the first 25 years or so of its existence, information theory served as a rich source of academic research problems and as a tantalizing suggestion that communication systems could be made more efficient and more reliable by using these approaches. This began to change as the theory found its way into practice, for two reasons.
First, by that time there were a sizable number of engineers who understood both information theory and communication system development.
Second, the low cost and increasing processing power of digital hardware made it possible to implement increasingly sophisticated algorithms.
Here, as is appropriate for a graduate class, the focus is much more on the
connections between engineering and theory.
The theory deals with relationships and analysis for models of real systems. A
good theory (and information theory is one of the best) allows for simple
analysis of simplified models.
It also provides structural principles that allow insights from these simple
models to be applied to more complex and realistic models. Problem sets
provide students with an opportunity to analyze these highly simplified models,
and, with patience, to start to understand the general principles.
The important point here is that engineering (at this level) cannot really be separated
from theory. Engineering is necessary to theory in choosing appropriate models, and
theory is necessary to engineering to create principles and quantitative results.
Engineering sometimes becomes overly concerned with detail, and theory overly concerned with mathematical niceties.
This is a new subject, both in terms of content and approach. Some undergraduate
courses aim to make the student familiar with a large variety of different systems that
have been implemented historically.
In our opinion such an approach does not help much in understanding the new systems
being designed currently, and provides little insight into the different design choices that
might be made.
Our objective here is to develop the relatively small number of underlying principles
guiding all of these systems, with the hope that we can then understand the details of
any system of interest on our own.
[Block diagram: Source -> Input Transducer -> Communication System -> Output Transducer -> Sink; the transducer interfaces carry the input signal and the output signal.]
Source: generates information. The source might be voice, text, image etc.
Input transducer: converts information to an electrical signal (e.g., voltage, current).
Output transducer: converts output signal to the desired message form.
Sink: receives the desired information
Example: the transducers in a voice communication system could be a microphone at the input and a
loudspeaker at the output.
[Block diagram: Transmitter -> Channel -> Receiver, with transmitted signal, received signal, and output signal; noise, interference, and distortion enter at the channel.]
How do communication systems perform in the presence of NOISE?
The communication systems that we use every day (e.g., the telephone system, the Internet) are networks of incredible complexity, made up of an enormous variety of equipment made by different manufacturers at different times following different design principles.
For example, the standard interface in the telephony system for decades has been the 4 kHz voice channel.
You can plug a telephone into a wall plug anywhere in the world (subject to further electrical, mechanical and dialing interface standards), and expect to be able to send a nominal 4 kHz voice signal anywhere in the Public Switched Telephone Network (PSTN).
The trend in recent decades has been to make all standardized interfaces
digital, at least inside the telephone and other networks, and increasingly at
the user interface as well.
Basically, one sends and receives bits, regardless of the ultimate application.
Often peer modules occur in pairs, each containing both transmit and receive functions.
In more complicated situations, there may be more than two paired elements
in a layer.
The multi-access control issues that arise in such cases are beyond the scope of this course; different aspects of these issues might be addressed in, e.g., Antenna & Propagation, Selected Topics, and similar courses.
Again we must very much keep in mind how we intend to recover a more-or-less faithful replica of the original bit sequence from the channel output in the remote receiver, despite the distortions that may be introduced by the channel.
The output bit sequence of the source encoder is the input bit sequence of
the channel encoder, and the output bit sequence of the channel decoder is
the input bit sequence of the source decoder.
The source-channel coding separation theorem does not rule out the
possibility of smaller delay or lower complexity using joint source-channel
coding. Also, there are more complex network scenarios in which this theorem
does not hold.
The input is the source signal. It might be a sequence of symbols such as letters from the English or Arabic alphabet, binary symbols from a computer file, etc.
This is one of the reasons why probability is an essential prerequisite for this
subject. It is not obvious why inputs to communication systems should be
modeled as random, and in fact this was not appreciated before Shannon
developed information theory in 1948.
The study of communication before that time (and well after that time) was
based on Fourier analysis, which basically studies the effect of passing sine
waves through various kinds of systems and components.
The study of channels can be started with this kind of analysis (often called
Nyquist theory in the context of digital communication) to develop basic
results about sampling, intersymbol interference, and bandwidth.
However, Shannon's view was that if the recipient knows that a sine wave of a
given frequency is to be communicated, why not simply regenerate it at the
output rather than send it over a long distance?
The objective then is to transform each possible input into a transmitted signal
in such a way that each possible transmitted signal can be distinguished from
the others at the output.
We shall see in the rest of this subject how this point of view drives the
processing of the inputs as they pass through a communication system.
The source encoder in Figure 1 has the function of converting the input from its original form into a sequence of bits.
The simplest source coding techniques involve simply representing the source
signal by a sequence of symbols from some finite alphabet, and then coding
the alphabet symbols into fixed-length blocks of bits.
For example, letters from the 27-symbol English alphabet (including a space symbol) may be encoded into 5-bit blocks. Or, upper-case letters, lower-case letters, and a great many other special symbols may be converted into 8-bit blocks ("bytes") using the standard ASCII code.
Beyond the basic objective of conversion to bits, the source encoder often has the further objective of doing this as efficiently as possible, i.e., transmitting as few bits as possible, subject to the need to reconstruct the input adequately at the output.
The interface between the source coding and channel coding layers is a
sequence of bits. However, this simple characterization does not tell the whole
story. Some sources, such as voice or video, produce a virtually unending
signal that must be encoded into a sequence of bits, usually at some constant
rate measured in bits per second (b/s).
These sources are often called virtual circuit sources. For other sources, such as email or data files, the data are encoded into packets, each consisting of a finite sequence of bits.
These packets are usually presented to the interface as a unit. These sources
are known as packet sources. Other sources are more complicated
combinations of both virtual circuit and packet sources. Thus the bits coming
into the interface from the source have some time structure, possibly constant
rate, possibly irregularly arriving packets that must be queued, etc.
The issue of interest here is simply that of mapping the source output into a
sequence of bits.
Typical packets are long enough that we can ignore their finite duration (to
first order), and simply model all the sources of interest as unending
waveforms or sequences of letters.
As we discuss later, this source coding must be consistent with the required
quality of service (i.e., distortion, encoding delay, etc.).
The channel encoder at the other side of the digital interface must be
capable of keeping up with the stream of incoming bits, encoding and
transmitting them so that they can be recreated at the decoder with a
suitably small error probability.
At that point, the interface problem becomes simple: if the channel encoder
can successfully transmit bits at a rate at least as large as the rate produced
by the source coder, then communication will be successful.
The specification for this layer must include a data quality parameter, such as
probability of error per bit or per packet. The usual objective for channel coding is to make this error probability very small, but it usually cannot be reduced to zero. The source coding layer must therefore be able to cope with a small but nonzero error probability.
For other kinds of sources, such as computer files, no transmission errors can be
tolerated.
The usual approach in such cases is to protect each packet with an error-detection code, and then at the receiver to request a retransmission of any packet in which an error has been detected.
However, if these are not problems, then an ARQ system can easily be
engineered to provide nearly error-free transmission at the cost of a small
amount of overhead and some variable delay.
Thus, to a source code designer, the channel might be a digital channel with bits as input and output; to a telephone-line modem designer, it might be a 4 kHz voice channel; to a cable modem designer, it might be a physical coaxial cable of up to a certain length, with certain bandwidth restrictions.
For a channel code designer, the channel is often a "physical channel"; e.g., a pair of wires, a coaxial cable, or an optical fiber going from the source location to the destination. It also might be open space between source and destination over which electromagnetic radiation can carry signals.
As in the study of signals and systems, we view a channel in terms of its input,
its output, and some description of how the input affects the output, which in
this course will usually be a probabilistic description.
Suppose that there were no noise and a single input voltage level could be
communicated exactly.
Then, representing that voltage level by its infinite binary expansion, we would
in principle be able to transmit an infinite number of binary digits by
transmitting a single real number.
Again, it was Shannon in 1948 who realized that noise provides the
fundamental limitation to performance in communication systems.
The most common channel model involves a waveform input X(t), an added noise waveform Z(t), and a waveform output Y(t) = X(t) + Z(t) that is the sum of the input and the noise, as shown in Figure 2.
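The additive-noise model Y(t) = X(t) + Z(t) can be sketched in discrete time, with the noise samples drawn independently of the input; the antipodal input values and the Gaussian standard deviation below are assumed parameters for illustration.

```python
# Minimal sketch of an additive white Gaussian noise channel on
# discrete-time samples: Y = X + Z, with Z drawn independently of X.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def awgn_channel(x, noise_std):
    """Add independent Gaussian noise to each input sample."""
    return [xi + random.gauss(0.0, noise_std) for xi in x]

x = [1.0, -1.0, 1.0, 1.0, -1.0]        # an antipodal input signal
y = awgn_channel(x, noise_std=0.1)
z = [yi - xi for yi, xi in zip(y, x)]  # the noise is just Y - X
```

Defining Z = Y - X is always possible, which is why the model only has content once we add the requirement that Z is statistically independent of X, as the text explains next.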
Each of these waveforms is viewed as a stochastic process, but for now they can be viewed simply as waveforms.
The noise Z(t) is often modeled as white Gaussian noise (also to be studied
and explained later). The input is usually constrained in power and in
bandwidth.
Observe that for any channel with input X(t) and output Y(t), we could define the noise to be Z(t) = Y(t) - X(t), so there must be something more to an additive-noise channel model than what is expressed in Figure 2.
The missing ingredient is that the noise must be statistically independent of the
input. Thus, whenever we say that a channel is subject to additive noise, we
mean implicitly that the noise is statistically independent of the input.
The noise may even be non-white here, but, as we shall see later, this further "generalization" is actually not more general.
This linear Gaussian channel model is typically not a bad model for wire line
communication or for line-of-sight wireless communication.
When engineers, journals, or texts fail to say what kind of channel they are
talking about, this model is a safe bet.
The linear Gaussian channel model is rather poor for non-line-of-sight mobile
communication.
Here, multiple paths usually exist from source to destination, and these paths
usually change in time in a way best modeled as random.
A better model for mobile communication is to filter the input by a randomly-time-varying linear filter that represents the multiple paths as they change in time, as shown in Figure 4.
The channel encoder box in Figure 1 has the function of mapping the binary
sequence at the source/channel interface into channel inputs.
The general objective here is to map binary inputs at the maximum bit rate
possible into waveforms such that the channel decoder can recreate the
original bits with low probability of error.
In the simplest modulators, each bit is independently mapped into one of two
possible waveforms.
Each of the eight possible combinations of three binary digits is then mapped into a different numerical signal level (e.g., -7, -5, -3, -1, 1, 3, 5, 7).
It is easy to think of many ways to map binary digits into signals and then signals into waveforms. We shall find that there is a simple geometric "signal-space" approach to looking at these various combinations in an integrated way.
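A minimal sketch of this 3-bits-to-level mapping, assuming the natural binary labeling k -> 2k - 7 (the text does not fix a particular labeling; Gray labeling is also common in practice):

```python
# Map each 3-bit block to one of the eight PAM levels
# -7, -5, -3, -1, 1, 3, 5, 7 (natural binary labeling assumed).
def bits_to_level(b2, b1, b0):
    """Interpret the 3 bits as an integer k in 0..7 and map to 2k - 7."""
    k = 4 * b2 + 2 * b1 + b0
    return 2 * k - 7

levels = [bits_to_level(b2, b1, b0)
          for b2 in (0, 1) for b1 in (0, 1) for b0 in (0, 1)]
# levels == [-7, -5, -3, -1, 1, 3, 5, 7]
```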
Because of the noise on the channel, the received signal is almost certainly
not equal to one of the possible transmitted signals. A major function of the
demodulator is that of detection.
The detector attempts to choose which possible input signal is most likely to
have given rise to the given received signal. The geometric signal-space
approach will be invaluable in understanding the detection problem.
Finally, if the transmitter uses carrier modulation, then the receiver must track
the carrier frequency and phase.
We will look at these issues after clearly understanding the geometric structure
of the basic modulation and demodulation problem.
One possible solution is to separate the channel coder into two layers, first an
error-correcting code, and then a simple modulator.
As a very simple example, the bit rate into the channel encoder could be
reduced by a factor of 3, and then each binary input could be repeated 3
times before entering the modulator.
If at most one of the 3 binary digits coming out of the demodulator were
incorrect, it could be corrected, thus reducing the error probability of the
system at a considerable cost in data rate.
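The rate-1/3 repetition scheme described above can be sketched as follows, with majority-vote decoding correcting any single error per 3-bit block (the function names are hypothetical):

```python
# Rate-1/3 repetition code: each bit is transmitted three times,
# and the decoder takes a majority vote over each 3-bit block.
def rep3_encode(bits):
    """Repeat each input bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority-vote each 3-bit block; corrects any single error per block."""
    out = []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        out.append(1 if sum(block) >= 2 else 0)
    return out

tx = rep3_encode([1, 0, 1])          # -> [1,1,1, 0,0,0, 1,1,1]
rx = tx[:]
rx[1] = 0                            # one error in the first block
assert rep3_decode(rx) == [1, 0, 1]  # the single error is corrected
```

The cost is plain: the data rate drops by a factor of three, while two errors in the same block still defeat the decoder; this is the trade-off that motivates the more sophisticated codes Shannon's theorem promises.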
What Shannon showed was the very unintuitive fact that more sophisticated
coding schemes can achieve arbitrarily low error probabilities without
lowering the data rate below a certain data rate that depends on the
channel being used, called the channel capacity.
In this course, we will not prove this result, or even describe it very precisely,
although we will provide various types of insight into coding and decoding.
For example, for an AWGN channel with bandwidth W and input power P, if the one-sided power spectral density of the noise is N0, then Shannon showed that the channel capacity in bits per second is

C = W log2(1 + P/(N0 W)).
Only in the past few years have channel coding schemes been developed
that can closely approach this channel capacity. When we discuss channel
coding, channel capacity will be our benchmark.
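The AWGN capacity C = W log2(1 + P/(N0 W)) used as this benchmark can be evaluated numerically; the bandwidth, power, and noise-density values below are illustrative assumptions, not figures from the text.

```python
# Evaluate the Shannon capacity of a bandlimited AWGN channel.
import math

def awgn_capacity(W, P, N0):
    """Capacity C = W * log2(1 + P / (N0 * W)), in bits per second."""
    return W * math.log2(1.0 + P / (N0 * W))

# e.g. a nominal 4 kHz channel at a signal-to-noise ratio of 1000 (30 dB):
C = awgn_capacity(W=4000.0, P=1000.0, N0=1.0 / 4000.0)
# P/(N0*W) = 1000, so C = 4000 * log2(1001), roughly 39.9 kb/s
```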
Until about 20 years ago, channel coding usually involved a two-layer system similar to that above, where an error-correcting code is followed by a modulator.
At the receiver, the waveform is first demodulated, and then the error
correction code is decoded.
More recently, it has been recognized that coding and modulation should be
considered as a unit, in schemes called coded modulation. Moreover, coding
for the AWGN channel is a problem that is best viewed in the geometric
signal-space context.