Introduction, Basics and Terminologies


ETN840
Special Topics in Communication Systems
Introduction, Basics and Terminologies

1
Courtesy of Text
 Special Thanks to Dr. Muhammad Iqbal for these slides
 Slides Wireless Communication by :- Melkamu Deressa
 wireless communications (slides) by Andreas F. Molisch
 http://www.sharetechnote.com/
 http://users.ecs.soton.ac.uk/sqc/EL334N/InfThe-L1.pdf
 http://www.science4all.org/article/shannons-information-theory/
 http://nonot.lecturer.pens.ac.id/mobile%20comp/02-cellular%5B1%5D.ppt
 Cellular Systems: An Introduction by: Reynold Cheng and Dr. Nasir D. Gohar
 A Location Management Problem Dr. Reynold Cheng Hong Kong Polytechnic University
 S-72.245 Transmission Methods in Telecommunication Systems
 http://www.slideshare.net
 http://ehm.kocaeli.edu.tr/dersnotlari_data/saldirmaz/7-Channel%20Models.pptx
 http://www.comlab.hut.fi/studies/3320/3320%20fading%20channels.ppt
 Shiv Kalyanaraman slides on Wireless Channel
 http://www.slideshare.net/rogerpitiot/information-theory
 Handoff in Cellular Systems by AJAL.A.J
 https://people.richland.edu/james/ictcm/2006/ Simplex Example
 Slides Spatial Diversity and Multiuser Diversity in Wireless Communications by Bengt Holter
 https://wirelesscafe.wordpress.com/2009/07/10/tutorial-i-basic-elements-of-digital-communication-system/
 http://www.science4all.org/article/shannons-information-theory/
 http://www.winlab.rutgers.edu/~narayan/Index.html#Teaching
 https://www.st-andrews.ac.uk/~www_pa/Scots_Guide/intro/electron.htm
 www.flann.com
 http://electronicdesign.com/communications/understanding-modern-digital-modulation-techniques
 http://www.britannica.com/technology/telecommunication/Modulation
 http://www.eecs.yorku.ca/course_archive/2010-11/F/3213/CSE3213_07_ShiftKeying_F2010.pdf
 http://www.ece.tamu.edu/~sunil/courses/ee449/notes/pulse.pdf
 HIAST – Advanced Mobile Communication By Ayman Alsawah, 2013/2014

2
Course Information
• Grading Policy
• Midterm 25%
• Final Examination 50%
• Quiz/Assignments
(Presentation + Simulation + Report) 25%

3
Course Syllabus
Basics of Communication Systems
Topics from recent research, e.g.:
Heterogeneous Networks
IoT
Physical Layer Security
Cooperative Communication
NOMA
Full Duplex
RRM (Radio Resource Management)
Energy Harvesting
Coexistence
Cloud RAN
Security/Encryption
D2D
Decoupling
Deep Learning
4
Text Book and Reference
Selected Research Papers

5
What is Communication?
• Communication is transferring data reliably from one point to another
– Data could be: voice, video, codes etc…
• It is important to receive the same information that was sent from the
transmitter.
• Communication system
– A system that allows transfer of information reliably
• Information Source
• The source of data
• Data could be: human voice, data from a storage device (e.g., a CD), video, etc.
• Data types:
• Discrete: finite set of outcomes ("digital")
• Continuous: infinite set of outcomes ("analog")
• Transmitter
• Converts the source data into a suitable form for transmission through
signal processing
• Data form depends on the channel
6
Why Wireless?
 Freedom from wires
 No cost of installing wires or rewiring
 No bunches of wires running here and there
 Instantaneous communication without physical connection setup, e.g.,
Bluetooth and WiFi
 Global Coverage
 Communications can reach where wiring is infeasible or costly, e.g., rural areas,
old buildings, battlefields, vehicles and outer space (communication satellites)
 Stay connected
 Roaming allows flexibility to stay connected anywhere and anytime
 Flexibility
 Services reach you wherever you go
 Connect to multiple devices simultaneously
 Increasing dependence on telecommunication services for business/personal use
 Consumers and business are willing to pay for it
Any disadvantages?

Types of wireless communication:
• Mobile: cellular phones (GSM, cdma2000), etc.
• Portable: IEEE 802.11b (WiFi), IEEE 802.15.3 (UWB)
• Fixed: IEEE 802.16 (Wireless MAN)
7
History of Wireless Communication
• Wireless telegraph invented by Marconi in 1896
• First telegraphic signal traveled across the Atlantic ocean in 1901
• The origin of mobile phone
• America’s mobile phone age started in 1946 with MTS – the phone weighed 40 kg!
• The first mobile phones were very bulky, expensive and hardly portable
• Operator-assisted, with a maximum of 250 users

8
Cellular Subscribers Growth in Pakistan

[Chart: growth of cellular subscribers in Pakistan by license category.]

NGMS: Next Generation Mobile Services
Long Distance International (LDI)
Local Loop (LL)
Fixed Local Loop (FLL)
Wireless Local Loop (WLL)
Class Value Added Services (CVAS)
Data CVAS
Voice CVAS
Telecom Infrastructure Provider (TIP)
For more recent information, visit the PTA website. 9
Annual Cellular Mobile Teledensity (%) in Pakistan
Telephone density, or tele-density, is the number of telephone connections per hundred individuals living within an area.

For more recent information, visit the PTA website. 10


11
12
Frequency Assignment to Cellular Mobile Operators in Pakistan

[Chart: frequency assignments per operator.]

Cellular Subscribers' Market Share – Mobile Phone Users in Pakistan (PTA, May 2014):
Mobilink: 38 million (28%)
Telenor: 36 million (26%)
Zong: 26 million (19%)
Ufone: 24 million (18%)
Warid: 13 million (9%)
Total: 137 million (more than the adult population of Pakistan)
13

Wireless Communication
 Transmitting voice and data using electromagnetic waves in
open space
 The information from sender to receiver is carried over a
well defined frequency band. This is called a channel
 Each channel has a fixed frequency bandwidth (in kHz) and a capacity (bit rate)
 Different frequency bands (channels) can be used to transmit information in parallel and independently.

15
TYPES OF WIRELESS COMMUNICATION
RADIO TRANSMISSION: easily generated, omnidirectional, travels long distances, easily penetrates buildings.
• PROBLEMS: frequency dependent, relatively low bandwidth for data communication, tightly licensed by governments.

MICROWAVE TRANSMISSION: widely used for long-distance communication, gives a high S/N ratio, relatively inexpensive.
• PROBLEMS: does not pass through buildings, weather and frequency dependent.

INFRARED AND MILLIMETER WAVES: widely used for short-range communication, unable to pass through solid objects, used for indoor wireless LANs, not for outdoors.

LIGHT WAVE TRANSMISSION: unguided optical signals such as lasers, unidirectional, easy to install, no license required.
• PROBLEMS: unable to penetrate rain or thick fog, a laser beam can be easily deflected by air.
16
Frequencies for communication (Electromagnetic Spectrum)

[Figure: the electromagnetic spectrum, from twisted pair and coax cable at low frequencies through radio, microwave, infrared, visible light, UV and X-rays to cosmic rays; wavelengths from 1 Mm down to 1 µm and below, frequencies from 300 Hz up to 300 THz and beyond, with the bands VLF, LF, MF, HF, VHF, UHF, SHF, EHF marked.]

Frequency bands:
VLF = Very Low Frequency, LF = Low Frequency, MF = Medium Frequency, HF = High Frequency, VHF = Very High Frequency, UHF = Ultra High Frequency, SHF = Super High Frequency, EHF = Extra High Frequency, UV = Ultraviolet Light

Typical frequencies:
FM radio 88 MHz, TV broadcast 200 MHz, GSM 900/1800 MHz, GPS 1.2 GHz, PCS phones 1.8 GHz, Bluetooth 2.4 GHz, Wi-Fi 2.4 GHz

 Frequency and wavelength: λ = c/f, where λ is the wavelength, c ≈ 3×10^8 m/s is the speed of light, and f is the frequency.

17
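As a quick numerical check of the λ = c/f relation, here is a minimal Python sketch (not part of the original slides) that computes the wavelength for the typical carrier frequencies listed above.

```python
# Minimal sketch: wavelength = c / f for the typical carrier frequencies above.
C = 3e8  # speed of light in m/s (approximate)

typical_frequencies_hz = {
    "FM radio": 88e6,
    "TV broadcast": 200e6,
    "GSM 900": 900e6,
    "GSM 1800": 1.8e9,
    "GPS": 1.2e9,
    "Bluetooth / Wi-Fi": 2.4e9,
}

for name, f in typical_frequencies_hz.items():
    wavelength_m = C / f
    print(f"{name:>18}: f = {f/1e6:8.1f} MHz -> lambda = {wavelength_m*100:6.1f} cm")
```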
Signal Types

18
• Channel capacity (C): the maximum rate at which data can be transmitted over a given communication path, or channel, under given conditions.
• Data rate (bps): the rate at which data can be communicated; impairments such as noise limit the data rate that can be achieved.
• Bandwidth (B): the bandwidth of the transmitted signal as constrained by the transmitter and the nature of the transmission medium (in hertz).
• Noise (N): impairments on the communication path.
• Error rate: the rate at which errors occur (BER)
 error = transmit 1 and receive 0, or transmit 0 and receive 1

19
Wireless Communication

20
decibels
 RF power measurement is carried out very often by engineers. A power meter is usually used to measure the integrated RF power across the band under test; a spectrum analyzer is used to measure the instantaneous power at various frequencies of interest.

 The bel is a logarithmic unit of power ratios. One bel corresponds to an increase of power by a factor of 10 relative to some reference power Pref. dB stands for decibel (one tenth of a bel) and is taken as 10 times the log of the ratio of two power values (or 20 times the log of the ratio of two voltage values).

 The equations may also be used to express a ratio of voltages (or field strengths) provided that they appear across the same impedance (or in a medium with the same wave impedance). See how to draw logarithmic plots in MATLAB.

• dBm stands for decibel with respect to 1 milliwatt; dBW stands for decibel with respect to 1 watt.
• Power (dBm) = 10 log10(Power in milliwatts)
• Difference between dBm and dBW: XdBW = XdBm − 30. Hence 0 dBW = +30 dBm and −30 dBW = 0 dBm.
21
decibels
 Expressed as equations, the power (and voltage) ratios above are:

P[bel] = log10(P / Pref),   P[dB] = 10 log10(P / Pref),   V[dB] = 20 log10(V / Vref)
22
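The dB, dBm, and dBW relations above are easy to verify numerically. A minimal Python sketch (illustrative, not from the slides):

```python
import math

def watts_to_dbm(p_watts):
    """Power in dBm = 10*log10(power in milliwatts)."""
    return 10 * math.log10(p_watts * 1e3)

def watts_to_dbw(p_watts):
    """Power in dBW = 10*log10(power in watts)."""
    return 10 * math.log10(p_watts)

for p in (0.001, 1.0, 20.0):  # 1 mW, 1 W, 20 W
    dbm = watts_to_dbm(p)
    dbw = watts_to_dbw(p)
    # Check the slide's relation: X dBW = X dBm - 30
    print(f"P = {p:7.3f} W -> {dbm:6.1f} dBm, {dbw:6.1f} dBW (difference = {dbm - dbw:.1f} dB)")
```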
• Shannon, during WWII, defined the notion of channel capacity and provided a mathematical model. The key result states that the capacity of the channel is given by the maximum of the mutual information between the input and output of the channel, where the maximization is over the input distribution.

The Shannon–Hartley theorem states the channel capacity C, i.e., the theoretical tightest upper bound on the information rate:

C = B log2(1 + Ps/PN) = B log2(1 + Ps/(B·N0))

where
C [bps] is the link capacity,
B [Hz] is the bandwidth,
Ps [W] is the signal power,
PN [W] is the noise power,
N0 [W/Hz] is the noise power spectral density.
Ps/PN is the signal-to-noise ratio (SNR), or the carrier-to-noise ratio (CNR), of the communication signal to the Gaussian noise, expressed as a linear power ratio (not in logarithmic decibels).

• To cope with noise, the transmitted signal must exceed the noise: a high signal-to-noise ratio (SNR).
• Or use spread-spectrum technology: embed the signal over a wide range of frequencies with low power.
23
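As an illustration of the Shannon–Hartley formula above, a small Python sketch (the bandwidth and noise values are hypothetical, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, signal_power_w, noise_psd_w_per_hz):
    """C = B * log2(1 + Ps / (B * N0)), in bits per second."""
    snr = signal_power_w / (bandwidth_hz * noise_psd_w_per_hz)
    return bandwidth_hz * math.log2(1 + snr)

# Example (hypothetical numbers): a 1 MHz channel with N0 = 1e-17 W/Hz
B = 1e6
N0 = 1e-17
for ps in (1e-12, 1e-11, 1e-10):
    c = shannon_capacity(B, ps, N0)
    snr_db = 10 * math.log10(ps / (B * N0))
    print(f"Ps = {ps:.0e} W (SNR = {snr_db:5.1f} dB) -> C = {c/1e6:.2f} Mbit/s")
```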
Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature.

This channel is assumed to corrupt the signal by adding n(t), which denotes a sample function of an additive white Gaussian noise process with zero mean and two-sided power spectral density N0/2.

24
Capacity tradeoffs

The upper, right-hand side of the plot is the so-called bandwidth-limited region. There, the desired spectral efficiency Rb/B for a fixed B (i.e., a desired data rate) is traded off against unconstrained transmission power (unconstrained Eb/N0), under a given Pb.
An example of this would be a terrestrial DVB transmitting station, where Rb/B is fixed (standardized), and where the transmitting power is limited only by regulatory or technological constraints.
25
Capacity tradeoffs

The lower, left-hand side of the plot is the so-called power-limited region. There, the Eb/N0 is very poor and we have to sacrifice spectral efficiency (b/s/Hz) to get a given transmission quality (Pb: probability of error).

An example of this is deep-space communication, where the received SNR is extremely low due to the huge free-space losses in the link. The only way to get a reliable transmission is to drop the data rate to very low values.
26
Capacity tradeoffs

Channel capacity tradeoffs


 Pb (bit error probability) is a required target quality and further limits the attainable region in the spectral efficiency/SNR plane, depending on the framework chosen (modulation type, channel coding scheme, and so on).
 For fixed spectral efficiency (fixed Rb/B), we move along a horizontal line where we manage the Pb versus Eb/N0 tradeoff.
 For fixed SNR (fixed Eb/N0), we move along a vertical line where we manage the Pb versus Rb/B tradeoff.
27
C = B log2(1 + Ps/(B·N0))
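The bandwidth-limited/power-limited tradeoff above can be made concrete. Writing Ps = Eb·Rb and setting Rb = C gives Rb/B ≤ log2(1 + (Eb/N0)·(Rb/B)), i.e., Eb/N0 ≥ (2^η − 1)/η for spectral efficiency η = Rb/B. A small Python sketch (illustrative, not from the slides) of this Shannon limit:

```python
import math

def min_ebn0_db(spectral_efficiency):
    """Shannon limit on Eb/N0 (in dB) for a given spectral efficiency eta = Rb/B."""
    eta = spectral_efficiency
    ebn0_linear = (2 ** eta - 1) / eta
    return 10 * math.log10(ebn0_linear)

for eta in (0.1, 0.5, 1, 2, 4, 8):
    print(f"eta = {eta:4.1f} b/s/Hz -> minimum Eb/N0 = {min_ebn0_db(eta):6.2f} dB")
# As eta -> 0 the limit approaches ln(2), i.e. -1.59 dB (power-limited region);
# for large eta the required Eb/N0 grows rapidly (bandwidth-limited region).
```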

What did Shannon do?


Wonders! Among these wonders was an amazingly simple solution
to communication. This idea comes from the observation that all
messages can be converted into binary digits, better known as bits.
For instance, the picture is digitized into bits as follows:

28
Bits

To understand how important his ideas are, let’s go back in time


and consider telecommunications in the 1940s. Back then, the
telephone network was quickly developing, both in North America
and Europe. The two networks then got connected. But when
a message was sent through the Atlantic Ocean, it couldn’t
be read at the other end.

Why? What happened?


As the message travelled through the Atlantic Ocean, it got weaker and weaker. Eventually, it was so weak that it was unreadable. Imagine the message was the logo of Science4All; the following figure displays what happened.
Why not amplify the message along the way?

29
Why not amplify the message along the way?
Because of the unpredictable perturbation of the message! This perturbation is called noise. This noise is precisely what prevents a message from getting through.
When you amplify the message, you also amplify the noise. Thus, even though the noise is small, as you amplify the message over and over, the noise eventually gets bigger than the message. And if the noise is bigger than the message, then the message cannot be read. This is displayed below:

30
Now, instead of simply amplifying the message, we can read it first. Because the digitized message is a sequence of 0s and 1s, it can be read and repeated exactly.

By replacing simple amplifiers with readers and amplifiers (known as regenerative repeaters), we can now easily get messages through the Atlantic Ocean, and all over the world, as displayed below:

31
Shannon’s Bits

What’s the definition?


 According to Shannon’s brilliant theory, the concept of information strongly depends on the context. For instance, suppose someone’s full name is Abdul Basit.

 In western countries, people simply call him Abdul.

 In Pakistan, people either use his full name or just his second name.

 Somehow, the word Abdul is not enough to identify him in Pakistan, as it is a common name there.
 In other words, the word Abdul carries less information in Pakistan than in western countries.
 Similarly, if you talk about “the man with hair”, you are not giving away a lot of information, unless you are surrounded by soldiers who nearly all have their hair cut.
But what is a context in mathematical terms?
32
Shannon’s Bits

But what is a context in mathematical terms?


A context corresponds to what messages you expect. More precisely, the context is defined by the probability of the messages. In our example, the probability of calling someone Abdul is much lower in western countries than in Pakistan. Thus, the context of messages in Pakistan strongly differs from the context of western countries.
OK… So now, what’s information?
Well, we said that the information of Abdul is greater in
western countries…
So the rarer the message, the more
information it has?
33
If p is the probability of the message, then its information is
related to 1/p. But this is not how Shannon quantified it, as
this quantification would not have nice properties.

Shannon’s great idea was to define information rather as the


number of bits required to write the number 1/p. This
number is its logarithm in base 2, which we denote log2(1/p)

Now, this means that it would require more bits to digitize


the word Abdul in western countries than in Pakistan, as
displayed below:

34
Why did Shannon use the logarithm?
Because of its nice properties. First, the logarithm enables us to bring enormous numbers 1/p down to more reasonable ones. But mainly, if you consider half of a text, it is common to say that it has half the information of the whole text. This sentence can only be true if we quantify information as the logarithm of 1/p.

This is due to the property of the logarithm to transform multiplication (which appears in probabilistic reasoning) into addition (which we actually use). Now, this logarithm doesn’t need to be in base 2, but for digitization and interpretation it is very useful to do so.
35
Information Measure
 How is information content measured?
· Information sent from a digital source when the jth message is transmitted:

I_j = log2(1/P_j) = −(1/log10(2)) · log10(P_j) = −(1/ln(2)) · ln(P_j)   (bits)

where P_j is the probability of transmitting the jth message


 Information content will, in general, vary from one message
to the next since Pj is usually variable
· Bit = unit of information, and
· Bit = unit of binary data (0, 1), but they are not the same
· We must use context to determine the intended meaning

36
Information Measure
 Since information content varies from message to message, we must measure the average information:

H = Σ_{j=1..m} P_j · I_j = Σ_{j=1..m} P_j · log2(1/P_j)   (bits)

- where m is the number of possible source messages
- H is also called the "entropy" of the source

 Rate of information:

R = H / T   bits/s
37
Information Measure
Example: an 8-digit word (message) with two possible states per digit (binary). Find the entropy if (a) all words are equally likely and (b) half the words have P_j1 = 1/512.

a) m = 2^8 = 256, and since all words are equally likely, P_j = 1/m = 1/256:

H = m · P_j · log2(1/P_j) = 256 · (1/256) · log2(256) = 8 bits

b) Note: Σ P_j = 1 (definition of probability), so 128 P_j1 + 128 P_j2 = 1
⇒ P_j2 = (1/128)(1 − 128 P_j1) = (1/128)(3/4) = 3/512

H = 128 P_j1 log2(1/P_j1) + 128 P_j2 log2(1/P_j2) = 2.25 + 5.56 = 7.81 < 8 !!

The messages must be equally likely for the average information content to equal the number of digits.
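The entropy numbers in the example above can be checked numerically. A minimal Python sketch (illustrative):

```python
import math

def entropy_bits(probabilities):
    """H = sum of p * log2(1/p) over all messages with p > 0."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

# Case (a): 256 equally likely 8-digit binary words
H_a = entropy_bits([1 / 256] * 256)

# Case (b): 128 words with probability 1/512, 128 words with probability 3/512
H_b = entropy_bits([1 / 512] * 128 + [3 / 512] * 128)

print(f"(a) equally likely words:  H = {H_a:.2f} bits")   # 8.00 bits
print(f"(b) unequal probabilities: H = {H_b:.2f} bits")   # about 7.81 bits
```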
Channel Capacity
 Ideal channel capacity, shown by Shannon to be:

C = B log2(1 + S/N)   bits/s

where B is the channel bandwidth (Hz) and S/N is the linear (watts/watts) signal-to-noise ratio (not in dB) at the input to the baseband (not RF) part of the digital receiver.

 Actual channel data rate Rc < C

39
Channel Capacity
 C ∝ B, so more bandwidth means a higher data rate
 The PSD of a rectangular pulse train is (sin x / x)^2

[Figure: the binary sequence 0 1 0 0 1 0 1 0 with symbol period Ts = Tb (bit period) and its PSD; signal bandwidth Bs ≈ 1/Tb, and 1/Ts is the first-null bandwidth (FNBW).]

 As Tb decreases, the data rate Rc increases, since Rc ∝ (Tb)^-1, but B also increases !!
 Increasing the signal bandwidth will increase the data rate if everything else remains the same
40
Channel Capacity
 C is also ∝ S/N, so does higher signal power mean larger channel capacity?
· A larger S/N makes it easier to differentiate (detect) multiple states per digital symbol in the presence of noise
 ⇒ a higher data rate for the same symbol period and bandwidth

[Figure: two signals with Ts1 = Ts2 but R1 = 2·R2 – a 4-level symbol stream (00 01 00 10 00 11 00 01) vs. a binary stream (0 1 0 1 0 1 0 1).]

**Note that (S/N)1 > (S/N)2 is required to achieve the higher data rate with the same bit error probability**
41
Digital System Performance
 Critical performance measures:
· Bit Error Rate (BER)
· Channel BW = transmitted signal BW
· Received S/N ∝ signal power
· Channel data rate (Rc)
 We desire a high data rate with small signal BW, low signal power, and low BER!!
 There is a fundamental tradeoff between signal power and BW
· Example: error coding – add coding bits to the data stream but keep the same data rate
» For the same Rc, Ts must decrease, and so the BW increases
» But coding will correct errors, allowing weaker signal power for the same BER
42
Well, if I read only half of a text, it may contain most of the information of
the text rather than the half of it…
This is an awesome remark! Indeed, if the fraction of the text you read is its abstract, then you already kind of know the information the whole text carries. Similarly, Abdul Basit, even in Pakistan, doesn’t have twice the information that Abdul has.
Does Shannon’s quantification account for that?
It does! And the reason it does is because the first fraction of the message
modifies the context of the rest of the message. In other words, the
conditional probability of the rest of the message is sensitive to the first
fraction of the message. This updating process leads to counter-intuitive
results, but it is an extremely powerful one.

Are there applications of this quantification of information?


Yes! As Shannon put it in his seminal paper, telecommunication cannot be thought of in terms of the information of a particular message. Indeed, a communication device has to be able to work with any information of the context. This led Shannon to (re)define the fundamental concept of entropy, which talks about the information of a context.
43
Shannon Entropy
• Context:
– A message is sent from a transmitter to a receiver
through a channel.
– Messages can be modified by the channel.
– The receiver tries to infer the message sent by the
transmitter.
• Shannon entropy is the expected value of the
information that can be inferred about the
message.

44
What’s Shannon’s definition of entropy?
Shannon’s entropy is defined for a context and equals the
average amount of information provided by messages of the
context.
Since each message is given with probability p and has
information log2(1/p), the average amount of information is
the sum for all messages of plog2(1/p). This is explained in
the following figure, where each color stands for a possible
message of the context:

45
Entropy is a probabilistic measure such that:
Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0, since the next character will always be a 'B'.
Shannon showed that:
If the experiment is a source that puts out symbols sn from a set A, then the entropy is a measure of the average number of binary symbols (bits) needed to encode the source.

46
Shannon’s Capacity
A communication consists in sending symbols through a channel to some other end. Now, we usually consider that this channel can carry a limited amount of information every second. Shannon calls this limit the capacity of the channel. It is measured in bits per second, although nowadays we rather use units like megabits per second (Mbit/s) or megabytes per second (MB/s).

Why would channels have capacities?


The channel is usually using a physical measurable quantity to send a
message. This can be the pressure of air in case of oral communication.
For longer telecommunications, we use the electromagnetic field. The
message is then encoded by mixing it into a high frequency signal. The
frequency of the signal is the limit, as using messages with higher
frequencies would profoundly modify the fundamental frequency of the
signal. But don’t bother too much with these details. What’s of concern
to us here is that a channel has a capacity.
47
Shannon-Summary
Channel Properties
 Channels can only transport physical signals, e.g., electrical signals. Therefore,
digital signals must be converted to appropriate formats (remember the line coding
or RF)
 Even if the signal is adapted to the channel, it does not pass through it undisturbed!! The channel introduces errors
 There is always an upper bound on the number of correct bits that you can send over the channel (the Shannon channel capacity C)

The theory provides answers to two fundamental


questions (among others):
(a)What is the irreducible complexity below which a
signal cannot be compressed?
(b)What is the ultimate transmission rate for reliable
communication over a noisy channel?
The Source Coding Theorem - Shannon's first
theorem
The theorem can be stated as follows:
Given a discrete memoryless source of entropy H(S), the average code-word length L for any distortionless source coding is bounded as
L ≥ H(S)

Theorem (Entropy)

 The minimum average length of a codeword is:

H(s_n) = Σ_n P(s_n) log_b(1/P(s_n)) = −Σ_n P(s_n) log_b P(s_n)

 Entropy is the minimum expected average length.

 If the average length were smaller than H, then the code could not be decoded uniquely.

http://cwww.ee.nctu.edu.tw/course/channel_coding/CC01.pdf
Channel Coding Theorem (Shannon's 2nd theorem)
The channel coding theorem for a discrete memoryless channel is stated in two parts as follows:
(a) Let a discrete memoryless source with an alphabet S have entropy H(S) and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then if

H(S)/Ts ≤ C/Tc

there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error.

(b) Conversely, if

H(S)/Ts > C/Tc

it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.

The theorem specifies the channel capacity C as a fundamental limit on the rate at which the transmission of reliable, error-free messages can take place over a discrete memoryless channel.
Information Capacity Theorem (also known as the Shannon–Hartley law or Shannon's 3rd theorem)
The information capacity of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is given by

C = B log2(1 + Ps/(N0·B))

where Ps is the average transmitted power.

This theorem implies that, for a given average transmitted power Ps and channel bandwidth B, we can transmit information at the rate of C bits per second with arbitrarily small probability of error by employing sufficiently complex encoding systems.
Imagine there was a gigantic network of telecommunications spread all over the world to exchange data, like texts and images. Let’s call it the Internet. How fast can we download images from the servers of the Internet to our computers? Using the basic format called Bitmap or BMP, we can encode images pixel by pixel. The encoded images are then decomposed into a certain number of bits. The average rate of transfer is then deduced from the average size of the encoded images and the channel’s capacity:

In the example, using bitmap encoding, the images can be transferred at the rate of 5 images per second. In the webpage you are currently looking at, there are about a dozen images. This means that more than 2 seconds would be required for the webpage to be downloaded to your computer. That’s not very fast…

52
Can’t we transfer images faster? Yes, we can. The capacity cannot be exceeded, but the encoding of the images can be improved. Now, what Shannon proved is that we can come up with encodings such that the average size of the images nearly matches Shannon’s entropy! With these nearly optimal encodings, an optimal rate of image file transfer can be reached, as displayed below:

This result is called Shannon’s fundamental theorem for noiseless channels. It is basically a direct application of the concept of entropy.
We have so far assumed that the received data is identical to what was sent! This is not the case in actual communication. As opposed to what we discussed in the first section of this article, even bits can be badly communicated.

53
Shannon’s Redundancy
In actual communication, it’s possible that 10% of the bits get corrupted.
Does this mean that only 90% of the information gets through? No! The problem is that we don’t know which bits got corrupted. In that case, the information that gets through is thus less than 90%.
So how did Shannon cope with noise?
 His amazing insight was to consider that the received, deformed message is still described by a probability, which is conditional on the sent message.

 This is where the language of equivocation, or conditional entropy, is essential. In the noiseless case, given a sent message, the received message is certain.

 In other words, the conditional probability is reduced to a probability of 1 that the received message is the sent message.

 In Shannon’s powerful language, this all beautifully boils down to saying that the conditional entropy of the received message is nil. Or, even more precisely, the mutual information equals both the entropy of the received message and that of the sent message.

54
[Figure: (A) transmission without redundancy and (B) transmission with redundancy, each showing the transmitted signal, the received signal, and the interpretation of the received signal.]

Information theory: how a measure of redundancy in the transmission of a message can improve the probability of its being correctly interpreted on reception. In (A), a simple message in binary digits is transmitted, losing 33% of its information in transmission; on receipt, 25% of the message is incorrectly interpreted. In (B), by transmitting the message with 50% redundancy, i.e., with each digit repeated, and with the same loss in transmission, sufficient information is received for the original message to be correctly reconstructed.

55
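To make the redundancy idea in the figure above concrete, here is a small Python sketch (not from the slides). The figure repeats each digit once; the sketch below uses a 3x repetition with majority-vote decoding, so errors can actually be corrected, over an assumed bit-flipping (binary symmetric) channel:

```python
import random

def repeat_encode(bits, n=3):
    """Add redundancy by repeating every bit n times."""
    return [b for b in bits for _ in range(n)]

def flip_channel(bits, p_error, rng):
    """Binary symmetric channel: each bit is flipped with probability p_error."""
    return [b ^ 1 if rng.random() < p_error else b for b in bits]

def majority_decode(bits, n=3):
    """Decode each group of n received bits by majority vote."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(10000)]

# Without redundancy: raw bit error rate equals the channel error probability
received_raw = flip_channel(message, p_error=0.1, rng=rng)
ber_raw = sum(a != b for a, b in zip(message, received_raw)) / len(message)

# With redundancy: 3x repetition plus majority decoding lowers the error rate
received_coded = majority_decode(flip_channel(repeat_encode(message), 0.1, rng))
ber_coded = sum(a != b for a, b in zip(message, received_coded)) / len(message)

print(f"BER without redundancy: {ber_raw:.3f}")    # about 0.10
print(f"BER with 3x repetition: {ber_coded:.3f}")  # about 0.028 (= 3p^2 - 2p^3)
```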
What about the general case?
The relevant information received at the other end is the mutual information. This mutual information is precisely the entropy communicated by the channel.

Shannon’s revolutionary theorem says that we can provide the missing information by sending a correction message whose entropy is the conditional entropy of the sent message given the received message.

This correction message is known as Shannon’s redundancy.

This fundamental theorem is described in the following figure, where the word entropy can be replaced by average information:

Shannon proved that by adding redundancy with enough entropy, we can reconstruct the information perfectly almost surely (with a probability as close to 1 as desired). This is another of Shannon’s earthshaking ideas. Quite often, the redundant message is sent along with the message and guarantees that, almost surely, the message will be readable once received. It’s like having to read an article again and again to finally retrieve its information.

56
So redundancy is basically repeating the message?
 Shannon’s theorem for noisy channels provides a limit to the minimum quantity of
redundancy required to almost surely retrieve the message. In practice, this limit is
hard to reach though, as it depends on the probabilistic structure of the
information.
Does Shannon’s theorem explain why the English language is so redundant?
Yes! Redundancy is essential in common languages, as we don’t actually catch most of what’s said. But, because English is so redundant, we can guess what’s missing from what we’ve heard. For instance, whenever you hear I l*v* cake, you can easily fill in the blank. What’s particularly surprising is that we actually do most of this reconstitution without even being aware of it!
It wouldn’t surprise me to find out that languages are nearly optimized for oral communication in Shannon’s sense, although there definitely are other factors coming into play, which may explain, for instance, why the French language is so much more redundant than English…
What we learned here are just a few of Shannon’s fundamental ideas for messages with discrete probabilities. Claude Shannon then moves on to generalize these ideas to discuss communication using actual electromagnetic signals, whose probabilities now have to be described using probability density functions. Although this doesn’t affect the profound fundamental ideas of information and communication, it does lead to a much more complex mathematical study.
57
• The key result states that the capacity of the channel is given by the maximum of the mutual information between the input and output of the channel, where the maximization is over the input distribution.
The Shannon–Hartley theorem states the channel capacity C, i.e., the theoretical tightest upper bound on the information rate:

C = B log2(1 + Ps/PN) = B log2(1 + Ps/(B·N0))
58
 Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature.
 This channel is assumed to corrupt the signal by adding n(t), which denotes a sample function of an additive white Gaussian noise process with zero mean and two-sided power spectral density N0/2.

59
cellular system
Wireless communication technology in which several small exchanges (called cells), equipped with low-power radio antennas and strategically located over a wide geographical area, are interconnected through a central exchange. As a receiver (cell phone) moves from one place to the next, its identity, location, and radio frequency are handed over from one cell to another without interrupting the call.

[Figure: distributed cells with mobile stations and fixed base-station transceivers, illustrating multiple access, the downlink (forward) and uplink (reverse) links, handoff, and the use of different frequencies or codes in neighboring cells.]

• High capacity is achieved by limiting the coverage of each base station to a small geographic region called a cell.
• The same frequencies/timeslots/codes are reused by spatially separated base stations.
• A switching technique called handoff enables a call to proceed uninterrupted when a user moves from one cell to another.
• Neighboring base stations are assigned different groups of channels so as to minimize the interference.
60
Analog-to-digital conversion begins with sampling, or measuring
the amplitude of the analog waveform at equally spaced discrete
instants of time. A communications signal is actually a complex
wave—essentially the sum of a number of component sine waves,
all of which have their own precise amplitudes and phases—the rate
of variation of the complex wave can be measured by the
frequencies of oscillation of all its components.
The difference between the maximum rate of oscillation (or highest
freq.) and the minimum rate of oscillation (or lowest frequency) of
the sine waves making up the signal is known as the bandwidth (B)
of the signal. Bandwidth thus represents the maximum frequency
range occupied by a signal.

61
Flash Analog-to-Digital Converter

62
A flash ADC (also known as a direct-conversion ADC) is a type of analog-to-digital
converter that uses a linear voltage ladder with a comparator at each "rung" of the
ladder to compare the input voltage to successive reference voltages. Often these
reference ladders are constructed of many resistors; however, modern implementations
show that capacitive voltage division is also possible. The output of these comparators
is generally fed into a digital encoder, which converts the inputs into a binary value (the
collected outputs from the comparators can be thought of as a unary value).

63
Four-Bit D/A Converter
One way to achieve D/A conversion is to use a summing amplifier.

64
Sampling

The sample rate, also referred to as sampling rate, is not directly related to the
bandwidth specification. Sample rate is the frequency at which the ADC converts the
analog input waveform to digital data. The oscilloscope samples the signal after any
attenuation, gain, and/or filtering has been applied to the analog input path and converts
the resulting waveform to digital representation. It does so in snapshots, similar to the
frames of a movie. The faster the oscilloscope samples, the greater the resolution and
detail that can be seen in the waveform. Fig. Flattop sampling
65
Nyquist Sampling Theorem
The Nyquist Sampling Theorem explains the relationship between the sample rate and the
frequency of the measured signal. It states that the sample rate fs must be greater than twice
the highest frequency component of interest in the measured signal. This frequency is often
referred to as the Nyquist frequency, fN.

http://www.ni.com/white-paper/2709/en/ 66
In signal processing and related disciplines, aliasing is an effect that causes different
signals to become indistinguishable (or aliases of one another) when sampled. It also
refers to the distortion or artifact that results when the signal reconstructed from
samples is different from the original continuous signal.

http://www.ni.com/white-paper/2709/en/ 67
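The Nyquist criterion and aliasing can be illustrated numerically: sampling a tone below twice its frequency makes it indistinguishable from a lower-frequency alias. A small Python/NumPy sketch (the frequencies are illustrative choices, not from the slides):

```python
import numpy as np

f_signal = 7.0   # Hz, frequency of the measured tone
fs = 10.0        # Hz, sample rate BELOW the Nyquist rate (2 * 7 = 14 Hz)
f_alias = abs(f_signal - fs)  # expected alias at |f - fs| = 3 Hz

t = np.arange(0, 2.0, 1 / fs)                       # 2 seconds of samples
samples_true = np.sin(2 * np.pi * f_signal * t)
samples_alias = np.sin(2 * np.pi * (-f_alias) * t)  # a 3 Hz tone (sign flipped)

# The two sets of samples are numerically identical: the 7 Hz tone "aliases" to 3 Hz.
print("max difference between samples:", np.max(np.abs(samples_true - samples_alias)))
```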
Time and Frequency Domain
The Fourier series is used to represent a periodic function by a discrete sum of complex exponentials, while the Fourier transform is used to represent a general, nonperiodic function by a continuous superposition or integral of complex exponentials. The Fourier transform can be viewed as the limit of the Fourier series of a function as the period approaches infinity, so the limits of integration change from one period to (−∞, ∞).

69
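A short NumPy sketch (illustrative) of the time-domain/frequency-domain relationship: the FFT of a sampled two-tone signal shows peaks at the component frequencies.

```python
import numpy as np

fs = 1000                      # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)  # 1 second of samples
# Two-tone test signal: 50 Hz and 120 Hz components with different amplitudes
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                       # one-sided spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)  # corresponding frequency axis
magnitude = np.abs(X) / len(x) * 2       # scale to recover the tone amplitudes

# Print the frequencies of the two largest spectral peaks
peaks = freqs[np.argsort(magnitude)[-2:]]
print("dominant frequencies (Hz):", sorted(peaks))   # [50.0, 120.0]
```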
Time and Frequency Domain

Fourier Transform of Chirp Signals:

70
Time and Frequency Domain

71
http://www.beis.de/Elektronik/DeltaSigma/DeltaSigma.html

72
Quantization: In order for a sampled signal to be stored or
transmitted in digital form, each sampled amplitude must be
converted to one of a finite number of possible values, or levels. For
ease in conversion to binary form, the number of levels is usually a
power of 2—that is, 8, 16, 32, 64, 128, 256, and so on, depending on
the degree of precision required. In digital transmission of voice, 256
levels are commonly used.

73
Quantization:

74
Quantization:

75
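A minimal Python sketch (illustrative) of uniform quantization as described above, mapping each sample to one of 2^b levels:

```python
import numpy as np

def uniform_quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Map each sample to the nearest of 2**n_bits uniformly spaced levels."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels
    # Index of the quantization level for each sample (clipped to the valid range)
    idx = np.clip(np.floor((x - x_min) / step), 0, levels - 1).astype(int)
    # Reconstruction value: the midpoint of the selected quantization interval
    xq = x_min + (idx + 0.5) * step
    return idx, xq

t = np.linspace(0, 1, 50)
x = np.sin(2 * np.pi * t)                # analog-like test signal
idx, xq = uniform_quantize(x, n_bits=3)  # 8 levels

print("first few level indices:", idx[:8])
print("worst-case quantization error:", np.max(np.abs(x - xq)))  # <= step / 2
```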
Bit mapping: In the next step in the digitization process, the output of the quantizer is mapped into a binary sequence. An encoding table that might be used to generate the binary sequence is shown below:

Quantization Level   Binary Code
        0                000
        1                001
        2                010
        3                011
        4                100
        5                101
        6                110
        7                111
76
Source encoding: As is pointed out in analog-to-digital conversion, any available
telecommunications medium has a limited capacity for data transmission. This capacity is
commonly measured by the parameter called bandwidth. Since the bandwidth of a signal
increases with the number of bits to be transmitted each second, an important function of a
digital communications system is to represent the digitized signal by as few bits as possible—
that is, to reduce redundancy. Redundancy reduction is accomplished by a source encoder,
which often operates in conjunction with the analog-to-digital converter.

Huffman Code
Huffman coding assigns shorter encodings to elements with a high frequency Fe. It differs from block encoding in that it is able to assign codes of different bit lengths to different elements. Elements with the highest frequency Fe get assigned the shortest bit-length code. The key to decompressing Huffman code is a Huffman tree.

A Huffman tree is a special binary tree called a trie (pronounced "try"). A binary trie is a binary tree in which a 0 represents a left branch and a 1 represents a right branch. The numbers on the nodes of the binary trie represent the total frequency F of the tree below. The leaves of the trie represent the elements e to be encoded. The elements are assigned the encoding which corresponds to their place in the binary trie.

77
Huffman Code Example
Message to be Encoded

dad ade fade bead ace dead cab bad fad cafe face

Block Encoding

011 000 011 000 011 100 101 000 011 101 001 100 000 011 000 010 100 011 100
000 011 010 000 001 001 000 011 101 000 011 010 000 101 100 101 000 010 100

The block encoding above is a fixed-length encoding. If the alphabet contains i distinct elements, block encoding requires ⌈log2(i)⌉ bits to encode each element e.

Spaces have been inserted between the strings of bits which represent each character
in both the Block Encoding and the Huffman Encoding.

http://www.ccs.neu.edu/home/jnl22/oldsite/cshonor/jeff.html

78
Huffman Code Example
Message to be Encoded
dad ade fade bead ace dead cab bad fad cafe face
Block Encoding
011 000 011 000 011 100 101 000 011 101 001 100 000 011 000 010 100 011 100
000 011 010 000 001 001 000 011 101 000 011 010 000 101 100 101 000 010 100

Huffman Encoding
01 10 01 10 01 111 110 10 01 111 000 111 10 01 10 001 111 01 111 10 01 001 10 000
000 10 01 110 10 01 001 10 110 111 110 10 001 111

79
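A compact Python sketch of Huffman coding for the message above (illustrative; it may not reproduce the exact codewords shown on the slide, since ties in frequency can be broken differently):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (subtree frequency, tie-breaker, {symbol: partial code})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # least frequent subtree -> prefix bit 0
        f2, _, right = heapq.heappop(heap)  # next least frequent   -> prefix bit 1
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

message = "dad ade fade bead ace dead cab bad fad cafe face".replace(" ", "")
code = huffman_code(message)
encoded = "".join(code[ch] for ch in message)
print("codebook:", code)
print("encoded length:", len(encoded), "bits vs",
      3 * len(message), "bits with 3-bit block encoding")
```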
Huffman Code Example
Message to be Encoded
dad ade fade bead ace dead cab bad fad cafe face

80
Huffman Code

http://www.urgenthomework.com/huffman-encoding-homework-help

81
Real-World Source Coding: Morse (1844)
Channel encoding
The strategy of the channel encoder, on the other hand, is to add redundancy to the
transmitted signal—in this case so that errors caused by noise during transmission can be
corrected at the receiver. The process of encoding for protection against channel errors is
called error-control coding. Error-control codes are used in a variety of applications, including
satellite communication, deep-space communication, mobile radio communication, and
computer networking.

83
Modulation: In many telecommunications systems, it is necessary to represent an information-
bearing signal with a waveform that can pass accurately through a transmission medium. This
assigning of a suitable waveform is accomplished by modulation, which is the process by which
some characteristic of a carrier wave is varied in accordance with an information signal, or
modulating wave. The modulated signal is then transmitted over a channel, after which the
original information-bearing signal is recovered through a process of demodulation.

84
Analog modulation is the process of converting an analog input signal into a signal that is
suitable for RF transmission.
The analog carrier signal is modulated by an analog information signal so that the information-bearing analog signal can travel larger distances without being lost to absorption.
The Analog modulation is of two types: Amplitude Modulation and Angle Modulation
The Angle modulation is further classified as Frequency modulation and Phase Modulation.
Amplitude Modulation: In this type of modulation the strength of the carrier signal is varied
with the modulating signal.
Frequency Modulation: In this type of modulation the frequency of the carrier signal is varied
with the modulating signal.
Phase Modulation: In this type of modulation the phase of the carrier signal is varied with the modulating signal. It is a variant of frequency modulation.

85
Digital modulation is the process of converting a digital bitstream into an analog signal suitable for RF transmission by varying some characteristic of the carrier wave in proportion to the information signal.
In amplitude shift keying, the amplitude of the signal is modulated to represent the information. The simplest type of modulation is called on-off keying, where the carrier signal is turned on to represent a 1 and turned off to represent a 0. (Details in the next slides.)
In frequency shift keying, the frequency of the wave is modulated, whereas in phase shift keying, the phase of the wave is modulated. Quadrature amplitude modulation is a type of modulation where the amplitude and phase are both modulated; because there are several different combinations, this type of modulation can represent many different values for the signal.

86
Difference Between Analog and Digital
Modulation
Allowed Values
Analog Modulation: An analog modulated
signal can represent any value within a range.
Digital Modulation: A digitally modulated signal
can only represent one of a set of discrete
values.
Variation with Time
Analog Modulation: Analog modulation can
produce a signal that carries continually
changing information.
Digital Modulation: Digital modulation
produces a signal whose value changes at
specific intervals of time.
Separation of Noise
Analog Modulation: It is difficult to separate
the signal from noise in analog modulation.
Digital Modulation: In digital modulation, the
signal can be easily separated from noise.
87
ASK – strength of carrier signal is varied to represent binary 1 or 0
• both frequency & phase remain constant while amplitude changes
• commonly, one of the amplitudes is zero
• advantage: simplicity
• disadvantage: ASK is very susceptible to noise interference. Noise
usually (only) affects the amplitude, therefore ASK is the modulation
technique most affected by noise

88
FSK – the frequency of the carrier signal is varied to represent binary 1 or 0
• peak amplitude & phase remain constant during each bit interval

• advantage: FSK is less susceptible to errors than ASK – the receiver looks for specific frequency changes over a number of intervals, so voltage (noise) spikes can be ignored
• disadvantage: the FSK spectrum is 2 x the ASK spectrum
BPSK – PSK is equivalent to multiplying the carrier signal by +1 when the information is 1, and by −1 when the information is 0
• advantage: PSK is less susceptible to errors than ASK, while it requires/occupies the same bandwidth as ASK; more efficient use of bandwidth (a higher data rate) is possible, compared to FSK !!!
• disadvantage: more complex signal detection/recovery process than in ASK and FSK

90
QPSK
QPSK = 4-PSK – PSK that uses phase shifts of 90° = π/2 rad ⇒ 4 different signals are generated, each representing 2 bits
advantage:
• higher data rate than in PSK (2 bits per bit interval), while bandwidth occupancy remains the same
• 4-PSK can easily be extended to 8-PSK, i.e., n-PSK; however, higher-rate PSK schemes are limited by the ability of equipment to distinguish small differences in phase

91
QAM: Quadrature Amplitude Modulation
– uses "two-dimensional" signalling
• the original information stream is split into two sequences that consist of odd and even symbols, e.g., Bk and Ak
• the Ak sequence (in-phase component) is modulated by cos(2πfct); the Bk sequence (quadrature-phase component) is modulated by sin(2πfct)
• the composite signal is sent through the channel

92
Signal Constellation
Constellation Diagram – used to represent the possible symbols that may be selected by a given modulation scheme as points in a 2-D plane
• the X-axis is related to the in-phase carrier cos(ωct); the projection of the point on the X-axis defines the peak amplitude of the in-phase component
• the Y-axis is related to the quadrature carrier sin(ωct); the projection of the point on the Y-axis defines the peak amplitude of the quadrature component
• the length of the line that connects the point to the origin is the peak amplitude of the signal element (combination of X & Y components); the angle the line makes with the X-axis is the phase of the signal element
QAM can also be seen as a combination of ASK & PSK

93
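A small Python sketch (illustrative) of the constellation idea: QPSK points in the I/Q plane, where each pair of bits selects one of four phases. The Gray bit-to-symbol mapping below is one common choice, an assumption rather than anything specified on the slide.

```python
import numpy as np

# Gray-mapped QPSK: each 2-bit pair selects one of four phases (45, 135, 225, 315 deg).
QPSK_MAP = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def qpsk_modulate(bits):
    """Map a bit sequence (even length) to complex QPSK constellation points."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([QPSK_MAP[p] for p in pairs])

bits = [0, 0, 0, 1, 1, 1, 1, 0]
symbols = qpsk_modulate(bits)
for pair, s in zip(zip(bits[0::2], bits[1::2]), symbols):
    # amplitude = distance from the origin, phase = angle from the I (X) axis
    print(pair, "-> I =", round(s.real, 3), " Q =", round(s.imag, 3),
          " amplitude =", round(abs(s), 3),
          " phase =", round(np.degrees(np.angle(s)), 1), "deg")
```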
Digital Baseband Modulation: Line Coding
Also known as digital baseband modulation (1,0,1,1,0, … )
encoding digital information to make it resistant to certain forms of signal loss during
transmission

94
What is Pulse Modulation?
Pulse modulation involves communication using a train of recurring pulses.
• A common means of modulating data in digital communication
– A key advantage is that multiple signals can be sent using Time Division Multiplexing
• There are several pulse modulation techniques:
 Pulse Amplitude Modulation (PAM)
 Pulse Width Modulation (PWM)
 Pulse Position Modulation (PPM)
 Pulse Code Modulation (PCM)
95
What is Pulse Modulation?
Pulse modulation involves communication using a train of recurring pulses.
• A common means of modulating data in digital communication
– A key advantage is that multiple signals can be sent using Time Division Multiplexing
• There are several pulse modulation techniques:

 PAM: the message information is encoded in the form of the amplitude of pulses. A pulse is transmitted every T seconds, and the amplitude of the pulse is quantized to Q values for PAM-Q.

 PWM: here we modulate the width of the pulses (or their duty cycle) to convey information. The example figure shows the PWM signal (bottom) corresponding to a sinusoidal signal (top).

 PPM: suppose we want to send one of M message bit patterns every T seconds. PPM modulates the message by transmitting a single pulse in one of 2^M time slots, each time slot being T/2^M seconds long.

 PCM: a means to represent an analog signal in a digital manner. Sample the analog signal every T seconds into P values (P is usually a power of two) and transmit log2(P) bits every T seconds (compression can also be applied).
96
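A minimal Python sketch (illustrative, with assumed signal values) of the PCM idea described above: sample, quantize to P = 2^b levels, and emit log2(P) bits per sample.

```python
import numpy as np

def pcm_encode(signal, n_bits, x_min=-1.0, x_max=1.0):
    """Quantize each sample to 2**n_bits levels and return the bit stream."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels
    idx = np.clip(np.floor((signal - x_min) / step), 0, levels - 1).astype(int)
    bits = "".join(format(i, f"0{n_bits}b") for i in idx)  # log2(P) bits per sample
    return bits

fs = 8000                                # 8 kHz sampling, as in telephony
t = np.arange(0, 0.005, 1 / fs)          # 5 ms of signal -> 40 samples
x = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
bitstream = pcm_encode(x, n_bits=8)      # 256 levels, as commonly used for voice

print("samples:", len(t), " bits:", len(bitstream),
      f"({len(bitstream)//len(t)} bits/sample)")
print("first bits:", bitstream[:24])
```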
Properties of Digital Communication:
1. Digital signals are very easy to receive. The receiver just has to detect whether the pulse is low or high.
2. AM & FM signals become corrupted over much shorter distances as compared to digital signals. With digital signals, the original signal can be reproduced accurately.
3. Signals lose power as they travel, which is called attenuation. When AM and FM signals are amplified, the noise also gets amplified. But digital signals can be cleaned up to restore the quality and amplified by regenerators.
4. Noise may change the shape of the pulses but not the pattern of the pulses.
5. AM and FM signals can be received by anyone with a suitable receiver. But digital signals can be coded so that only the intended person can receive them.
6. AM and FM transmitters are 'real-time systems', i.e., they can be received only at the time of transmission. But digital signals can be stored at the receiving end.
7. Digital signals can be stored, used to produce a display on a computer monitor, or converted back into an analog signal to drive a loudspeaker.
97
Digital Communications System (Signal Processing Perspective)

[Block diagram: an analogue source (audio/video) passes through an anti-alias filter, A/D conversion (Nyquist sampling, quantisation), source coding, channel coding (FEC/ARQ, block/convolutional codes), pulse shaping (ISI control) and passband modulation (ASK/FSK/PSK, binary or M'ary) before the communications channel, which adds loss, interference, noise and distortion; the receiver reverses the chain with demodulation (envelope/coherent detection, carrier recovery), regeneration (matched filter, decision threshold, timing recovery), channel decoding (FEC/ARQ, block/convolutional), source decoding and D/A conversion (low-pass filtering, quantisation noise).]

Baseband transmission is transmission of the encoded signal using its own baseband frequencies, i.e., without any shift (up-conversion) to higher frequency ranges, while passband transmission is transmission after shifting the baseband frequencies to some higher frequency range (called the passband) using modulation (which can include passband filtering to ensure that our signal is separated in its passband from neighboring passbands). 98
Digital Communications System (Signal Processing Perspective)

[Block diagram repeated from the previous slide.]

• Source: originates a message (human voice, a television picture, an email, or a data message).
• Input Transducer: converts a non-electric message (human voice, email text, TV video) into an electric waveform called the message or baseband signal, using physical devices such as a microphone, a computer keyboard, or a CCD camera.
• Transmitter: modifies the baseband signal for efficient transmission; it may consist of an A/D converter, an encoder, and a modulator.
• Receiver: reprocesses the signal received from the channel by reversing the signal modifications made at the transmitter and removing the noise added by the channel; it consists of a demodulator, a decoder, and a D/A converter.
• Output Transducer: converts the electric signal back to its original form (the message). 99
Digital Communications System (Signal Processing Perspective)

[Block diagram repeated from the previous slides.]
100
Digital Communications System (Information Theoretic Perspective)

[Block diagram of Shannon's wireless communication system: source → source encoder → channel encoder → modulator → wireless channel → demodulator → channel decoder → source decoder → user; the message signal, channel code word, modulated/transmitted signal, received signal, and the estimates of the channel code word and message are indicated along the chain.]

Information Source and Input Transducer:
The source of information can be analog or digital, e.g., analog: an audio or video signal; digital: a teletype signal. In digital communication the signal produced by this source is converted into a digital signal consisting of 1's and 0's. For this we need a source encoder.

Source Encoder:
In digital communication we convert the signal from the source into a digital signal as mentioned above. The point to remember is that we would like to use as few binary digits as possible to represent the signal. Such an efficient representation of the source output results in little or no redundancy. This sequence of binary digits is called the information sequence.
Source Encoding or Data Compression: the process of efficiently converting the output of either an analog or a digital source into a sequence of binary digits is known as source encoding.
101
Digital Communications System (Information Theoretic Perspective)
[Shannon's wireless communication system block diagram, repeated.]
Channel Encoder:
The information sequence is passed through the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy into the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel.
E.g. take k bits of the information sequence and map those k bits to a unique n-bit sequence called a code word. The amount of redundancy introduced is measured by the ratio n/k, and the reciprocal of this ratio, k/n, is known as the code rate. 102
Digital Communications System (Information Theoretic Perspective)
[Shannon's wireless communication system block diagram, repeated.]
Channel Decoder:
This sequence of numbers is then passed through the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.
The average probability of a bit error at the output of the decoder is a measure of the performance of the demodulator-decoder combination. THIS IS THE MOST IMPORTANT POINT; we will discuss this BER (Bit Error Rate) at length later.
Source Decoder:
At the end, if an analog signal is desired, the source decoder tries to decode the sequence from knowledge of the encoding algorithm, which results in an approximate replica of the input at the transmitter end.
103
Digital Communications System (Information Theoretic Perspective)
In Summary:
1. The source coding algorithm plays an important role in obtaining an efficient (high-rate) representation of the source.
2. The channel encoder introduces controlled redundancy into the data.
3. The modulation scheme plays an important role in deciding the data rate and the immunity of the signal to the errors introduced by the channel.
4. The channel introduces many types of impairment, such as multipath propagation and errors due to thermal noise.
5. The demodulator and decoder should provide a low BER.
104
Noise
• Undesirable interference and disturbances that corrupt signals passing through the communication channel (different from channel distortion).
• Random and unpredictable.
• External noise: interference from signals transmitted on nearby channels, human-made noise generated by faulty contact switches in electrical equipment, automobile ignition radiation, and cell-phone emissions.
• Internal noise: results from the thermal motion of charged particles in conductors, random emission, and diffusion or recombination of charged carriers in electronic devices.
Noise limits the rate of telecommunications.
The channel distorts the signal and noise accumulates along the path in a practical communication system.
Signal strength decreases with distance from the transmitter while the noise level remains steady, so the signal-to-noise ratio worsens along the length of the channel.
What about amplifying the received signal to make up for the attenuation?

105
The Effect of noise on digital signal

106
There are four categories of noise:
Thermal (Gaussian) noise - due to the thermal agitation of electrons in a conductor; it is present in all electronic devices and transmission lines and is a function of temperature. It is distributed uniformly across the frequency spectrum and is often referred to as white noise. It cannot be eliminated, and it limits overall system performance.
• Thermal noise power is proportional to the product of bandwidth and temperature.
• Mathematically, the noise power is N = kTB, where N = noise power, k = Boltzmann's constant (1.38×10⁻²³ J/K), B = bandwidth, and T = absolute temperature in kelvin (e.g., 17 °C = 290 K).
Intermodulation noise - can occur if signals at different frequencies share the same transmission line. It results in signals that are the sum or difference of the original signals, and it occurs when there is some non-linearity in the communication system (which may be caused by component malfunction or excessive signal strength). In other words, it is the generation of unwanted sum and difference frequencies produced when two or more signals mix in a nonlinear device.

107
The sum and difference frequencies are called cross products. Unwanted cross products can interfere with the information signal. Cross products are produced when harmonics as well as the fundamental frequency mix in a nonlinear device.
Crosstalk - the phenomenon that allows you to hear someone else's conversation whilst using the telephone; it occurs due to electrical coupling between two or more transmission paths (such as adjacent twisted-pair cables).
Impulse noise - consists of random pulses (or spikes) of noise, usually of short duration and relatively high amplitude. Causes include external electromagnetic disturbances such as lightning, vehicle ignition systems, heavy-duty electrical equipment, and faults in the communications system itself. It is usually only a minor annoyance for analogue systems such as a telephone link, but it is a primary cause of errors in digital communication.

108
NOISE VOLTAGE
• The equivalent circuit of a thermal noise source (figure omitted) is an internal resistance R_I in series with the rms noise voltage V_N.
• For the worst case the load resistance R = R_I, so the noise voltage dropped across R is half the source voltage, V_R = V_N/2.
• The noise power P_N developed across the load resistor equals kTB, which gives
  N = kTB = (V_N/2)² / R = V_N² / (4R)
  V_N² = 4RkTB,  so  V_N = √(4RkTB)
Interference
• A form of external noise ("to interfere" means to disturb or detract from).
• Electrical interference occurs when information signals from one source produce frequencies that fall outside their allocated bandwidth and interfere with information signals from another source.
• Most interference occurs when harmonic frequencies from one source fall into the passband of a neighboring channel.
109
Example
• P_T = 10 W, free-space loss 117 dB, antenna gains 8 dB & 0 dB, total system losses 8 dB, receiver antenna temperature 290 K, receiver bandwidth 1.25 MHz.
• Find P_R, the thermal noise power (k = 1.38×10⁻²³ J/K), and the SNR at the receiver.
• Solution
• P_R = 10 dBW + 8 dB + 0 dB − 117 dB − 8 dB = −107 dBW
• P_thermal = kTB = 1.38×10⁻²³ × 290 × 1.25×10⁶ ≈ 5×10⁻¹⁵ W = −143 dBW
• SNR = −107 − (−143) = 36 dB

110
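A minimal numerical check of this link budget, sketched in Python (our own sketch; the numbers are the ones given on the slide):

import math

# Link-budget check for the example above.
k = 1.38e-23          # Boltzmann's constant, J/K
T, B = 290, 1.25e6    # antenna temperature (K) and receiver bandwidth (Hz)

P_T_dBW = 10 * math.log10(10)          # 10 W transmit power -> 10 dBW
P_R_dBW = P_T_dBW + 8 + 0 - 117 - 8    # add antenna gains, subtract path and system losses
N_dBW   = 10 * math.log10(k * T * B)   # thermal noise floor
print(P_R_dBW, N_dBW, P_R_dBW - N_dBW) # -> -107.0, ~-143.0, ~36 dB SNR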
DETERMINISTIC MODELS
 In deterministic models the conditions under which an experiment is carried out determine the exact
outcome of the experiment.
 In deterministic mathematical models, the solution of a set of mathematical equations specifies the
exact outcome of the experiment.
 Circuit theory is an example of a deterministic mathematical model.

Random experiments
 Sequential random experiments - performing a sequence of simple random sub-experiments, e.g., first toss a coin, then throw a die.
 Sometimes the second sub-experiment depends on the outcome of the first, e.g., toss a coin first; if it is a head, then throw a die.
 A random experiment may also involve a continuum of measurements; say, the height of a student takes some value between 1.4 m and 2 m.

111
Imagine you have a bucket with blue balls and red balls inside. You dip your hand in the bucket and grab some balls. A probability question asks: "given the number of blue and red balls in the bucket, what can you tell me about the balls in your hand?" A statistics question asks: "given the balls in your hand, what can you tell me about the balls in the bucket?"
Axioms of probability:
a) The probability of a sure thing is 1.
b) The probability of an impossible outcome is 0.
c) The sum of the probabilities of all possible outcomes is 1.
d) The probability of any random event lies between 0 and 1.
 The sample space of a random experiment is defined as the set of all possible outcomes.
 Outcomes are mutually exclusive in the sense that they cannot occur simultaneously.
 Experiment or Trial: an action where the result is uncertain.

112
113
Dependent Events
Example: Marbles in a Bag
2 blue and 3 red marbles are in a bag.
What are the chances of getting a blue marble?
The chance is 2 in 5

 But after taking one out you change the chances!


 So the next time:
 if you got a red marble before, then the chance of a blue marble next is 2 in 4
 if you got a blue marble before, then the chance of a blue marble next is 1 in 4

114
Problem: A simple binary communication channel carries messages by using only two signals, say 0 and 1.
We assume that, for a given binary channel, 40% of the time a 1 is transmitted; the probability that a
transmitted 0 is correctly received is 0.90, and the probability that a transmitted 1 is correctly received is
0.95. Determine
(a) the probability of a 1 being received,
(b) given a 1 is received, the probability that 1 was transmitted.

115
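One way to work parts (a) and (b) out is with the total-probability rule and Bayes' rule; a small Python check (our own sketch, using only the numbers given in the problem):

# Total probability and Bayes' rule for the binary channel above.
p_tx1, p_tx0 = 0.4, 0.6
p_rx1_given_tx1 = 0.95          # a transmitted 1 is correctly received
p_rx1_given_tx0 = 1 - 0.90      # a transmitted 0 is flipped into a 1

p_rx1 = p_tx1 * p_rx1_given_tx1 + p_tx0 * p_rx1_given_tx0   # (a)
p_tx1_given_rx1 = p_tx1 * p_rx1_given_tx1 / p_rx1           # (b)
print(p_rx1, round(p_tx1_given_rx1, 4))                     # 0.44, 0.8636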
Random variable: A random variable is a function that associates a real number with each element in the sample space.
In an experiment a number is often attached to each outcome: X : S → R, i.e., each outcome s in the sample space S is mapped to a real number X(s).
A random variable X is thus a function defined on S which takes values on the real axis.
Difference between a random variable (RV) and a random process (RP):
• Random variable: the outcome is mapped into a number.
• Random process: the outcome is mapped into a function of time.
A random variable is characterized by its probability density/mass/distribution function.
116
The Expectation of a Random Variable
• Expectation of a discrete random variable with p.m.f. p_i:  E(X) = Σ_i x_i p_i
• Expectation of a continuous random variable with p.d.f. f(x):  E(X) = ∫_{state space} x f(x) dx
• The expected value of a random variable is also called the mean of the random variable.
Independence
• Two random variables X and Y are said to be independent if f(x, y) = f_X(x) f_Y(y) for all x and y.
117
The Expectation of a Random Variable
• Example (discrete random variable): a repair cost takes the values x_i = $50, $200, $350 with probabilities p_i = 0.3, 0.2, 0.5. The expected repair cost is
  E(cost) = ($50 × 0.3) + ($200 × 0.2) + ($350 × 0.5) = $230
• Example (continuous random variable): the diameter of a metal cylinder has p.d.f.
  f(x) = 1.5 − 6(x − 50.0)²  for 49.5 ≤ x ≤ 50.5,  f(x) = 0 elsewhere.
  The expected diameter is E(X) = ∫_{49.5}^{50.5} x (1.5 − 6(x − 50.0)²) dx.
  With the change of variable y = x − 50:
  E(X) = ∫_{−0.5}^{0.5} (y + 50)(1.5 − 6y²) dy = ∫_{−0.5}^{0.5} (−6y³ − 300y² + 1.5y + 75) dy
       = [−(3/2)y⁴ − 100y³ + 0.75y² + 75y]_{−0.5}^{0.5} = 25.09375 − (−24.90625) = 50.0
• Median: information about the "middle" value of the random variable (the point m with P(X ≤ m) = 0.5).

118
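A quick numerical check of both expectation examples (our own Python sketch):

import numpy as np

# Discrete example: expected repair cost.
x = np.array([50, 200, 350]); p = np.array([0.3, 0.2, 0.5])
print((x * p).sum())                      # 230.0

# Continuous example: expected cylinder diameter, f(x) = 1.5 - 6(x - 50)^2 on [49.5, 50.5].
xs = np.linspace(49.5, 50.5, 200_001)     # fine grid over the support
f = 1.5 - 6 * (xs - 50.0) ** 2
dx = xs[1] - xs[0]
print(np.sum(xs * f) * dx)                # ~50.0 (Riemann-sum approximation)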
The Variance of a Random Variable
• Variance (σ²): a positive quantity that measures the spread of the distribution of the random variable about its mean value. Larger values of the variance indicate that the distribution is more spread out.
  Var(X) = E((X − E(X))²) = E(X² − 2X E(X) + (E(X))²) = E(X²) − 2E(X)E(X) + (E(X))² = E(X²) − (E(X))²
• Standard deviation σ: the positive square root of the variance.
[Figure: two distributions with identical mean values but different variances.]

119
Covariance
Covariance and correlation describe how two variables are related.

Variables are positively related if they move in the same direction.


Variables are inversely related if they move in opposite directions.

Both covariance and correlation indicate whether variables are positively or


inversely related. Correlation also tells you the degree to which the variables tend
to move together.

120
Covariance
• Covariance:
  Cov(X, Y) = E((X − E(X))(Y − E(Y)))
            = E(XY − X E(Y) − E(X) Y + E(X)E(Y))
            = E(XY) − E(X)E(Y) − E(X)E(Y) + E(X)E(Y)
            = E(XY) − E(X)E(Y)
• May take any positive or negative value.
• Independent random variables have a covariance of zero.
• What if the covariance is zero? (Zero covariance does not by itself imply independence.)
• Correlation:
  Corr(X, Y) = Cov(X, Y) / √(Var(X) Var(Y))
  Values lie between −1 and 1, and independent random variables have a correlation of zero.
121
Summary of Variance

122
If there is a positive relationship between the scores of job incumbents
on a job knowledge test and actual job performance, which of the
following graphs would most likely be an accurate representation of
this situation?

Ans: a
Graph a: positively related
Graph b: unrelated
Graph c: inversely related
Graph d: unrelated

123
124
Correlation is a way to determine the extent to which two variables covary (normalized to be between −1 and 1). Coherence is similar, but instead assesses "similarity" by looking at the similarity of the two variables in frequency space rather than time space.
[Figure: instantaneous correlation between synthetic signals - x(t) = cos(πt) vs. y(t) = sin(πt), and x(t) vs. z(t) = x²(t) = cos²(πt), with their 1-D correlations computed by uni- and bi-directional methods (η = 0.5).]

125
Autocorrelation, also known as serial correlation or cross-autocorrelation, is the cross-correlation of a signal with itself at different points in time (that is what the "cross" stands for). Informally, it is the similarity between observations as a function of the time lag between them.
Correlation is a technique used to compare the similarity of two signals. The correlation integral is given by:
  R_xy(τ) = ∫ x(t) y(t + τ) dt

126
A large correlation value represents a strong similarity between the two signals, while a value near zero represents little similarity. Correlation will be used in this system to compare the signals coming from the anchors and to highlight parts of the signal that are the same, i.e., both parts are 1 or both are 0. The signals will be cross-correlated, highlighting the delay between the two signals. The delay between the RF signal and the Bluetooth signal will correspond to the distance between the anchors.
The first figure shows two identical signals being cross-correlated. As can be seen, there are a number of instances where (1×1) occurs.
The second figure represents the cross-correlation with a delay between the signals. This is along the lines of what I expect to see from my system. As can be seen, due to the delay between the signals, the instances where both signals are 1 (1×1) are fewer. This delay will be proportional to the distance between the anchors.
127
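A minimal Python sketch of this idea; the 8-sample delay and the signal length are made up for illustration, not taken from the system described above:

import numpy as np

# Estimate the delay between two binary (0/1) signals by cross-correlation.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 200)              # reference signal
b = np.roll(a, 8)                        # the same signal delayed by 8 samples

xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
lags = np.arange(-len(a) + 1, len(a))
print(lags[np.argmax(xc)])               # -> 8: b is a copy of a delayed by 8 samples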
Poisson Distribution
• The distribution of
• The number of defects in an item
• The number of radioactive particles emitted by a substance
• The number of telephone calls received by an operator with a certain
time limit
• That is, the number of “events” that occur within certain specified
boundaries.

• A random variable X distributed as a Poisson random variable with parameter λ, written X ~ P(λ), has the probability mass function
  P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, 3, …
• The Poisson distribution is often useful to model the number of times that a certain event occurs per unit of time, distance, or volume, and it has mean and variance both equal to the parameter value λ:
  E(X) = Var(X) = λ
128
Example: The number of calls received in a telephone exchange follows a Poisson distribution with an average of 10 calls per minute. What is the probability that in a one-minute interval
a) no call is received,
b) exactly 5 calls are received,
c) more than 3 calls are received?
Let X be the random variable representing the number of calls received. Given X ~ P(λ) with p_X(k) = e^{−λ} λ^k / k!, where λ = 10. Therefore,
(i) the probability that no call is received is p_X(0) = e^{−10};
(ii) the probability that exactly 5 calls are received is p_X(5) = e^{−10} 10⁵ / 5!;
(iii) the probability that more than 3 calls are received is
  1 − Σ_{k=0}^{3} p_X(k) = 1 − e^{−10} (1 + 10 + 10²/2! + 10³/3!).
129
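A short numerical check of the three answers (our own Python sketch):

from math import exp, factorial

# Poisson example above: lambda = 10 calls per minute.
lam = 10
p = lambda k: exp(-lam) * lam**k / factorial(k)

print(p(0))                              # (i)  ~4.54e-05
print(p(5))                              # (ii) ~0.0378
print(1 - sum(p(k) for k in range(4)))   # (iii) P(X > 3) ~0.9897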
The Uniform Distribution
• It has a flat pdf over a region: if X ~ U(a, b), X takes on values between a and b, and
  f(x) = 1/(b − a),  for a ≤ x ≤ b
  P(c ≤ X ≤ d) = (d − c)/(b − a),  for a ≤ c ≤ d ≤ b
• Mean and variance:
  E(X) = (a + b)/2,   V(X) = (b − a)²/12

130
Exponential Distribution
  pdf: f(x) = λ e^{−λx}  for x ≥ 0
  cdf: F(x) = 1 − e^{−λx}  for x ≥ 0
  Mean and variance: E(X) = 1/λ and V(X) = 1/λ²
[Figure: probability density function of an exponential distribution with parameter λ = 1.]
• The exponential distribution often arises, in practice, as the distribution of the amount of time until some specific event occurs. For example: the amount of time until an earthquake occurs, the amount of time until a new war breaks out, or the amount of time until a telephone call you receive turns out to be a wrong number.
131
Gaussian (Normal) Distribution
Gaussian or normal distribution: X ~ N(m, σ²)
  f(x) = (1 / (σ√(2π))) e^{−(x−m)² / (2σ²)},  −∞ < x < ∞
  E(X) = m,  Var(X) = σ²
[Figure: example densities N(5, 0.25), N(5, 4) and N(10, 4).]

132
The Standard Normal Distribution
  p.d.f.: φ(x) = (1/√(2π)) e^{−x²/2},  −∞ < x < ∞,  i.e. m = 0 and σ² = 1.
  c.d.f.: Φ(x) = ∫_{−∞}^{x} φ(y) dy
  1 − Φ(x) = P(Z ≥ x) = P(Z ≤ −x) = Φ(−x)
[Figure: the N(0, 1) density and its c.d.f. Φ(x), which rises from 0 through 0.5 at x = 0 toward 1.]

133
Probability Calculation for Standard Normal Distributions
[Figure: the standard normal density and the cumulative distribution function Φ(x) of a standard normal distribution; probabilities are computed as areas under the density, i.e. from Φ(x).]
134
The Central Limit Theorem
  X_i ~ D(μ, σ²) i.i.d., 1 ≤ i ≤ n
  ⇒ X̄ → N(μ, σ²/n)  and  Σ_{i=1}^{n} X_i → N(nμ, nσ²)
  (X_i ~ D(μ, σ²): the X_i are independent and identically distributed with mean μ and variance σ², for some distribution D.)

135
Rayleigh Distribution
• The signal from the transmitter may be reflected from objects such as hills, buildings, or vehicles.
• When the MS is far from the BS (no dominant line-of-sight path), the envelope distribution of the received signal is the Rayleigh distribution. The pdf is
  p(r) = (r/σ²) e^{−r²/(2σ²)},  r ≥ 0
  where σ is the standard deviation.
• The median value r_m of the envelope within the sample range satisfies P(r ≤ r_m) = 0.5, which gives r_m ≈ 1.177σ.
[Figure: the envelope distribution for σ = 1, 2, 3.]
136
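A minimal simulation sketch (our own, in Python): the Rayleigh envelope is generated as the magnitude of a zero-mean complex Gaussian, and its median is compared with 1.177σ:

import numpy as np

# Rayleigh envelope = magnitude of a zero-mean complex Gaussian (no line-of-sight).
rng = np.random.default_rng(1)
sigma = 1.0                                        # assumed scale parameter
n = 100_000
r = np.abs(sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

print(np.median(r))                                # ~1.177*sigma  (= sigma*sqrt(2 ln 2))
print(r.mean(), np.sqrt((r**2).mean()))            # mean ~1.253*sigma, rms = sigma*sqrt(2)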
Rician Distribution
• When a dominant direct (line-of-sight) path exists in addition to the scattered paths - e.g., when the MS is near the BS - the envelope distribution of the received signal is the Rician distribution. The pdf is
  p(r) = (r/σ²) e^{−(r² + A²)/(2σ²)} I₀(Ar/σ²),  r ≥ 0
  where
  σ is the standard deviation,
  I₀(x) is the zero-order modified Bessel function of the first kind,
  A is the amplitude of the direct signal.
137
Rician Distribution
  p(r) = (r/σ²) e^{−(r² + A²)/(2σ²)} I₀(Ar/σ²),  r ≥ 0
[Figure: the pdf p(r) of the envelope variation for A = 0 (Rayleigh), 1, 2, 3 with σ = 1.]
138
139
Why we study this.
You will learn while doing your
simulation assignment

140
Simulation
[Figures (slides 141-147): a random 50-bit stream is grouped into 25 four-level symbols (integer values 0-3), the symbols are mapped onto a four-point (QPSK-style) constellation, AWGN is added at SNRs of 2, 5, 10, 15 and 20 dB, and the resulting received constellations, received symbols and received bit streams are plotted. At 20 dB the received points cluster tightly around the constellation points and the bit stream is recovered cleanly; at 2 dB the clusters spread into one another and symbol/bit errors appear.]
147
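A minimal sketch of the simulated chain above in Python (the bit-to-symbol mapping and the constellation used below are assumptions for illustration, not necessarily the ones used in the assignment):

import numpy as np

# Random bits -> 4-level symbols -> QPSK-style constellation -> AWGN -> decision.
rng = np.random.default_rng(0)
snr_db = 10                                    # try 2, 5, 10, 15, 20 dB
bits = rng.integers(0, 2, 50)
symbols = bits[0::2] * 2 + bits[1::2]          # group bit pairs into integers 0..3
const = np.array([1+1j, -1+1j, 1-1j, -1-1j]) / np.sqrt(2)   # unit-energy points
tx = const[symbols]

noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)  # per real dimension, Es = 1
rx = tx + noise_std * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

decided = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)   # nearest point
print("symbol errors:", np.count_nonzero(decided != symbols))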
Antenna
• Converts the signal to electromagnetic waves; its size must be consistent with the wavelength.
• Types:
  • Directional - e.g., satellite communication
  • Omnidirectional - e.g., cell phones, car radios
  • MIMO - e.g., wireless routers
Antenna Gain
• How well an antenna converts input power into radio waves headed in a specified direction; depends on the antenna's directivity and electrical efficiency.
• Gain: the ratio of the power produced by the antenna to the power produced by a hypothetical lossless isotropic antenna. Unitless, usually expressed in decibels (dB). Directional → high gain; omnidirectional → low gain.
Attenuation
• Reduction in signal strength with distance, propagation medium, and atmospheric conditions; typically high for high frequencies.
• Friis free-space equation:
  P_R(d) = P_T G_T G_R λ² / ((4π)² d²)
  where P_R, P_T are the received and transmitted powers (in watts or milliwatts), G_T, G_R the antenna gains, λ the wavelength (m), and d the distance (m).
Example
• The transmission frequency is 881.52 MHz and the antenna gains are 8 dB and 0 dB for the base station and the mobile station. What is the signal attenuation at a distance of 1,500 m? (c = 299,792,458 m/s)
• Solution:
  λ = c/f = 299,792,458 / (881.52×10⁶) ≈ 0.34 m
  8 dB = 10^0.8 ≈ 6.3;  0 dB = 10⁰ = 1
  P_R(d)/P_T = G_T G_R λ² / ((4π)² d²) = 6.3 × 1 × 0.34² / ((4π)² × 1500²) ≈ 2.0497×10⁻⁹
  P_T/P_R(d) ≈ 4.8788×10⁸
  Loss = P_T (dB) − P_R (dB) ≈ 86.89 dB
148
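A numerical check of the Friis example (our own Python sketch):

import math

# Free-space attenuation at 881.52 MHz, d = 1500 m, gains 8 dB and 0 dB.
c, f, d = 299_792_458, 881.52e6, 1500
G_T, G_R = 10 ** 0.8, 10 ** 0.0        # dB gains converted to linear

lam = c / f                            # ~0.34 m
ratio = G_T * G_R * lam**2 / ((4 * math.pi) ** 2 * d**2)   # P_R / P_T
print(ratio)                           # ~2.05e-9
print(-10 * math.log10(ratio))         # ~86.9 dB loss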
Attenuation
• Reduction in signal strength with distance, propagation medium, and atmospheric conditions; typically high for high frequencies.
• Friis free-space equation (as above): P_R(d) = P_T G_T G_R λ² / ((4π)² d²)
Complex Attenuation
• When the signal encounters obstacles, high-frequency signals experience:
  1. Absorption
  2. Shadowing - when the object >> λ
  3. Reflection - when the object >> λ
  4. Refraction
  5. Diffraction
  6. Scattering - when the object ≤ λ
Based on empirical evidence, it is more reasonable to model P_R with a log-distance path-loss model:
  P_R(d) = P_0(d_0) − 10 n_p log(d/d_0) + X_σ
  where n_p is the path-loss exponent and X_σ is a zero-mean Gaussian random variable with standard deviation σ; all power values are in dBm.
Source: S. Rao, "Estimating the ZigBee transmission-range ISM band," EDN, May 2007. 149
Attenuation
• The same Friis free-space and log-distance path-loss relations as on the previous slide, together with typical measured path-loss exponents (a numerical sketch of the model follows this slide):

  Building                 Freq (MHz)   Path-loss exponent
  Retail store             914          2.2
  Office, hard partition   1500         3.0
  Office, soft partition   900/1900     2.4/2.6
  Factory, line of sight   1300         2.0
  Suburban, indoor street  900          3.0

Source: S. Rao, "Estimating the ZigBee transmission-range ISM band," EDN, May 2007. 150
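A minimal sketch of the log-distance model with shadowing (the reference power, exponent and shadowing standard deviation below are illustrative values, not taken from the slide):

import numpy as np

# Log-distance path loss with log-normal shadowing, all powers in dBm.
rng = np.random.default_rng(2)
P0_dBm, d0 = -40.0, 1.0      # assumed power at reference distance d0 = 1 m
n_p, sigma = 3.0, 4.0        # office-like exponent, shadowing std in dB

d = np.array([10, 50, 100, 300])
P_R = P0_dBm - 10 * n_p * np.log10(d / d0) + sigma * rng.standard_normal(d.size)
print(np.round(P_R, 1))      # received power in dBm at each distance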
Complex Attenuation
• When signal encounters
obstacles
• High-frequency
signals experience
Absorption
Shadowing
• When object >> λ
Reflection
• When object >> λ
Refraction
Diffraction
Scattering
• When object ≤ λ
Exercise
Reflection of wireless signals
occurs when
a)wavelength is constant
b)object size << wavelength
c)object size ≈ wavelength
d)object size >> wavelength
http://wireless.navigator.co.uk/radio_link.htm
http://computer-help-tips.blogspot.com/2011/04/radio-frequency-behaviors.html
http://elmag.org/en/propagation-modeling-of-shadowing-by-vegetation-for-mobile-satcom-%26-satnav-systems
http://computer-help-tips.blogspot.com/2011/04/radio-frequency-behaviors.html
http://www.astrosurf.com/luxorion/qsl-propa.htm 151
http://newhorizons.bg/blog/2010/12/wireless-101-terminology-part-2-implementing-cisco-unified-wireless-networking-essentials-iuwne
Example – Attenuation Experienced by Mobile Phones
Multipath Propagation

• Receive same signal through different


paths
• Different arrival times
• Inter Symbol Interference (ISI)
• Different levels of attenuation
• Different levels of distortion

www.intechopen.com/books/matlab-a-fundamental-tool-for-scientific-computing-and-engineering-applications-volume-2/mobile-radio-propagation-
prediction-for-two-different-districts-in-mosul-city
http://www.ni.com/white-paper/6427/en 152
Example – Attenuation Experienced by Mobile Phones
Multipath Propagation

www.intechopen.com/books/matlab-a-fundamental-tool-for-scientific-computing-and-engineering-applications-volume-2/mobile-radio-propagation-
prediction-for-two-different-districts-in-mosul-city
http://www.ni.com/white-paper/6427/en 153
Fading problem (flat fading)
Fading of the Rx power causes:
 - degradation in BER if the bit rate is fixed
 - limitation of the bit rate if the BER is fixed
[Figure: Rx power (dBm) vs. time - the average Rx power sits a fading margin above the minimum required power (Rx sensitivity); a deep fade that drops below this level violates the target BER.]
154
Example of performance over flat fading
[Figure: error probability vs. SNR (dB) for uncoded BPSK with coherent detection - over flat Rayleigh fading (coherence time ≥ Tb) the error probability falls only slowly with average SNR, in contrast to the steep waterfall of the AWGN curve.]
155
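The qualitative behaviour in that figure can be reproduced from the standard uncoded-BPSK expressions; a short Python sketch (our own):

import math

# Coherent, uncoded BPSK: AWGN vs. flat Rayleigh fading (textbook formulas).
def ber_awgn(snr_db):
    g = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(g))          # = Q(sqrt(2*g))

def ber_rayleigh(avg_snr_db):
    g = 10 ** (avg_snr_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

for snr in (0, 10, 20, 30):
    print(snr, ber_awgn(snr), ber_rayleigh(snr))
# The AWGN curve drops like a waterfall; Rayleigh only roughly as 1/(4*SNR).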
Diversity

Fading: Signal fluctuations caused by


multipath propagation and shadowing
effects.
Diversity: Receiving the same information
bearing signal over 2 or more fading
channels.
Space: Transmission using multiple
transmit/receive antennas.
Frequency: Transmission using multiple
frequency channels separated by at least
the coherence bandwidth.
Time : Transmission using multiple time slots
separated by at least the coherence time.

156
1 - Time diversity via coding & interleaving
[Figure: Rx power vs. time under a block-fading approximation (coherence time Tc) - a deep fade produces a burst of errors in the transmitted code-word bits.]
Example: GSM
• A coded speech packet is interleaved over 8 bursts, with 1 user-assigned burst every frame of ~5 ms, so the packet is interleaved over 40 ms.
• At 900 MHz and 120 km/h: f_d = 100 Hz, Tc = 10 ms.
  Non-interleaved: A B C D E F G H I
  Interleaved:     A D G B E H C F I
  Deinterleaved:   A B C D E F G H I
After deinterleaving, isolated errors have "more chance" of being corrected (interleaving depth ~ Tc').
157
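A minimal sketch of the 3×3 block interleaver used in the example above (our own Python illustration):

import numpy as np

# Write the code word row-wise into a 3x3 matrix and read it column-wise,
# which gives the A D G B E H C F I transmission order shown above.
cw = np.array(list("ABCDEFGHI"))
interleaved = cw.reshape(3, 3).T.reshape(-1)
print("".join(interleaved))                  # ADGBEHCFI

# A burst of 3 consecutive channel errors hits transmitted positions 3..5.
hit = np.zeros(9, bool); hit[3:6] = True
deinterleaved_hits = hit.reshape(3, 3).T.reshape(-1)   # the interleaver is its own inverse
print(np.flatnonzero(deinterleaved_hits))    # [1 4 7]: after deinterleaving the burst is spread out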
2 - Frequency diversity via frequency hopping
[Figure: power gain (dB) vs. time at 900 MHz, 3 km/h, Rayleigh flat fading, for several hop frequencies.]
Frequencies must be spaced by more than the coherence bandwidth Bc.
Example: GSM
• Slow FH, ~200 hops/s (optional feature)
• Frame ≈ 4.6 ms (8 user bursts)
• Typical Urban: τ_RMS ≈ 1 µs, so Bc = 1/(5 τ_RMS) = 200 kHz
158
3 - Spatial diversity via multiple antennas
• For uniform surrounding scatterers: uncorrelated power gains if the antenna spacing is λ/2.
• In practice: spacing of several λ (see the example below).
[Figure: power gain (dB) vs. time for two receive antennas (Ant 1, Ant 2) - the two branches fade independently.]
Example: GSM900
• 2 Rx antennas at the BTS
• λ = 30 cm, separation = 2-3 m
159
Rx Diversity: 1 - Antenna Selection
 Select the branch with the highest power gain (max |h_k|²).
 Suitable for non-coherent detection, where the fading phases are not needed.
 CSI at Rx = {mag(h_k)}.
 Used on the LTE UL with 2 antennas.
160
Rx Diversity: 2 - Antenna Switching
Switch to the antenna with the larger power gain when the gain of the current one falls below a given threshold.
CSI at Rx = {mag(h_k)}.
Simplified hardware, at the price of degraded error performance compared to "Antenna Selection".
Better solutions?
161
Beware of terminology: Multiple Access vs. Multiplexing
◮ Medium: physical entity that physically bears the transmission.
◮ Channel: logical entity, defined between the input and output of some subsystem or device of the same hierarchical level.
Any communication has to be routed through a physical medium:
 the air interface (radio communications), or a cable (optical, electrical) or waveguide.
Medium access is about...
 allocating transmission resources to communicating users;
 managing the actual transmission medium.
Medium access techniques are not independent of the characteristics of the transmission medium, and they can have an impact on counteracting the impairments of the medium/channel.
• Most communication systems require the sharing of channels; shared media are common in cable television, telephone systems, and data communications.
• The Shannon-Hartley law demonstrates the theoretical limit to how much information can be delivered over a medium.
• Hartley's law shows that time and bandwidth are equivalent: a communications medium can be shared equally by dividing either quantity among users.
• The frequency spectrum can be divided by using:
  • FDM (frequency-division multiplexing)
  • TDM (time-division multiplexing)
  • CDMA (code-division multiple access)
162
Two types of combining signals:
• Multiplexing - combining signals from the same source.
• Multiple access - combining signals from multiple sources.
Multiple access: access to the resources is performed on a decentralized basis - each user meets the other ones directly in the medium.
Multiplexing: a centralized entity handles the communications from different users; these communications are organized into a frame by that entity. 163
Frequency-division multiplexing (FDM) is distinct from FDMA. FDM is a physical-layer technique that combines and transmits low-bandwidth channels through a high-bandwidth channel. FDMA, on the other hand, is an access method in the data link layer.
In TDM, two or more data streams appear to be transferred as sub-channels of one channel, but in fact they take turns on a single channel. TDMA is a subset of TDM in which multiple transmitters are connected to a single receiver.
Multiple access schemes
• Frequency Division Multiple Access - each subscriber is assigned a unique frequency; when the subscriber enters another cell a new frequency is assigned. Used in analog systems.
• Time Division Multiple Access - each subscriber is assigned a time slot to send/receive a data burst. Used in digital systems.
• Code Division Multiple Access - each subscriber is assigned a code, which is used to multiply the signal sent or received by that subscriber. 164
165
The transmission from the BS in the downlink can be heard by each and every mobile user in the cell, and is referred to as broadcasting. Transmission from the mobile users in the uplink to the BS is many-to-one, and is referred to as multiple access.
Multiple access schemes allow many users to simultaneously share a finite amount of radio spectrum resources. They should not result in severe degradation in the performance of the system as compared to a single-user scenario. Approaches can be broadly grouped into two categories: narrowband and wideband.
Multiple Access Techniques - Narrowband Systems
• Transmission experiences nonselective fading. This means that when fades occur, all of the information (i.e. the whole channel) is affected.
• Channelized systems: generally the total spectrum is divided into a number of relatively narrow radio channels (e.g. FDMA). Call blocking occurs if all channels are in use, and unused bandwidth in one channel cannot be used by other users.
Multiple Access Techniques - Wideband Systems
• The main feature of wideband systems is that either all of the available spectrum (e.g. CDMA, TDMA) or a considerable portion of it is used by each user (e.g. TDMA+FDMA).
• The advantage of wideband systems is that the transmission bandwidth always exceeds the coherence bandwidth, so the signal experiences only selective fading. That is, only a small fraction of the frequencies composing the signal is affected by fading.
• The signal can be distorted, and therefore equalization is needed, but it is unlikely that a total signal fade will occur. 166
Duplexing
• For voice or data communications, must assure two way communication
(duplexing, it is possible to talk and listen simultaneously). Duplexing may be
done using frequency or time domain techniques.
• Forward (downlink) band provides traffic from the BS to the mobile
• Reverse (uplink) band provides traffic from the mobile to the BS.

167
Frequency division duplexing (FDD)
• Provides two distinct bands of frequencies for every
user, one for downlink and one for uplink.
• A large interval between these frequency bands
must be allowed so that interference is minimized.

[Figure: reverse channel at f_c,R and forward channel at f_c,F on the frequency axis, separated by a fixed frequency separation.]
The frequency separation should be carefully decided and is constant.
168
Time division duplexing (TDD)
• In TDD communications, both directions of transmission use one
contiguous frequency allocation, but two separate time slots to
provide both a forward and reverse link.
• Because transmission from mobile to BS and from BS to mobile
alternates in time, this scheme is also known as “ping pong”.
• As a consequence of the use of the same frequency band, the
communication quality in both directions is the same. This is
different from FDD.

Slot number: 0 1 2 3 4 5 6 7 …
Channel:     F R F R F R F R …
[Figure: the forward and reverse channels occupy alternating time slots (Ti, Ti+1) separated in time.]
169
TDMA/TDD and TDMA/FDD
• In TDMA/TDD system, half of the time slots in the
frame information message would be used for the
forward link channels and half would be used for
reverse link channels. Same channel conditions.
• In TDMA/FDD systems, same frame structure can be
used for both forward and reverse transmission but
carrier frequencies used are different.

170
Code Division Multiple Access (CDMA)
• In CDMA, the narrowband message signal is multiplied by a
very large bandwidth signal called spreading signal (code)
before modulation and transmission over the air. This is called
spreading.

• CDMA is also called DSSS (Direct Sequence Spread Spectrum).


DSSS is a more general term.

• Message consists of symbols


• Has symbol period and hence, symbol rate

171
Code Division Multiple Access
(CDMA)

• Spreading signal (code) consists of chips
• Has a chip period and hence, a chip rate
• The spreading signal uses a pseudo-noise (PN) sequence (a pseudo-random sequence)
• The PN sequence is called a codeword
• Each user has its own codeword
• Codewords are orthogonal (low cross-correlation)
• The chip rate is an order of magnitude (or more) larger than the symbol rate
• The receiver correlator distinguishes the sender's signal by examining the wideband signal with the same time-synchronized spreading code
• The sent signal is recovered by the despreading process at the receiver
172
CDMA Principle
Represent bit 1 with +1 and bit 0 with −1.
[Figure: a data stream (one bit per symbol period) is multiplied by a much faster ±1 code sequence (one value per chip period); the resulting coded signal is the input to the modulator (phase modulation).]
173
CDMA Principle

174
CDMA Example – transmission from two sources

A Data
1 0 1 1

A 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1
Codeword

Data  Code 1 0 1 1 0 0 0 1 0 0 1 1 1 0 1 1 0 0 1 0 1 1 0 0
A Signal

B Data 0 0 1 0

1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
B
Codeword
Data  Code 1 0 1 0 1 0 1 0 1 0 1 0 0 1 0 1 0 1 1 0 1 0 1 0
B Signal

Transmitted
A+B
Signal
175
CDMA Example – recovering signal A at the receiver

A+B
Signal
received

A
Codeword
at
receiver

(A  B)  Code

Integrator
Output

Comparator
Output 0 1 0 0

Take the inverse of this to obtain A


176
CDMA Example – using wrong codeword at the receiver

A+B
Signal
received

Wrong
Codeword
Used at
receiver

Integrator
Output

Comparator
Output X 0 1 1
Noise
Wrong codeword will not be able to decode the original data!
177
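A minimal Python sketch of the spreading/despreading idea; note that it uses length-4 Walsh codes rather than the exact chip patterns on the slides:

import numpy as np

# Two users spread with orthogonal +-1 codes, the signals add on the air,
# and each receiver despreads (integrates) with its own code.
code_a = np.array([+1, +1, -1, -1])
code_b = np.array([+1, -1, +1, -1])           # orthogonal: code_a @ code_b == 0

bits_a = np.array([1, 0, 1, 1])               # user A data
bits_b = np.array([0, 0, 1, 0])               # user B data
sym = lambda b: 2 * b - 1                     # bit 1 -> +1, bit 0 -> -1

tx = np.concatenate([sym(a) * code_a + sym(b) * code_b
                     for a, b in zip(bits_a, bits_b)])

# Receiver for A: correlate each symbol period with A's code and compare to 0.
rx = tx.reshape(-1, 4)
decisions = (rx @ code_a > 0).astype(int)
print(decisions)                              # [1 0 1 1] -> A's data recovered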
Hybrid Spread Spectrum Techniques
• FDMA/CDMA
• Available wideband spectrum is frequency divided into
number narrowband radio channels. CDMA is employed
inside each channel.
• DS/FHMA
• The signals are spread using spreading codes (direct
sequence signals are obtained), but these signal are not
transmitted over a constant carrier frequency; they are
transmitted over a frequency hopping carrier frequency.

178
Hybrid Spread Spectrum Techniques
• Time Division CDMA (TCDMA)
• Each cell is using a different spreading code (CDMA employed
between cells) that is conveyed to the mobiles in its range.
• Inside each cell (inside a CDMA channel), TDMA is employed
to multiplex multiple users.
• Time Division Frequency Hopping
• At each time slot, the user is hopped to a new frequency
according to a pseudo-random hopping sequence.
• Employed in severe co-interference and multi-path
environments.
• Bluetooth and GSM are using this technique.

179
SDMA
• Use spot beam antennas
• The different beam area can use TDMA, FDMA, CDMA
• Sectorized antenna can be thought of as a SDMA
• Adaptive antennas can be used in the future (simultaneously
steer energy
in the direction of many users)
spot beam
antenna

180
Features:

• A large number of independently steered high-gain


beams can be formed without any resulting
degradation in SNR ratio.
• Beams can be assigned to individual users, thereby
assuring that all links operate with maximum gain.
• Adaptive beam forming can be easily implemented
to improve the system capacity by suppressing co
channel interference.

181
182
183
WIRELESS COMMUNICATIONS AND NETWORKING By Vijay Garg
Sensor networks usually require data rates from a few bits per second to about 1
kbit/s.

Speech communications usually require between 5 and 64 kbit/s depending on the


required quality and the amount of compression.

Elementary data services require between 10 and 100 kbit/s. One category of these
services uses the display of the cellphone to provide Internet-like information.

Communications between computer peripherals and similar devices: for the


replacement of cables that link computer peripherals, like mouse and keyboard, to the
computer (or similarly for cellphones), wireless links with data rates around 1Mbit/s
are used

High-speed data services: WLANs and 3G cellular systems are used to provide fast
Internet
access, with speeds that range from 0.5 to 100 Mbit/s (currently under development).

Personal Area Networks (PANs) is a newly coined term that refers mostly to the range
of a wireless network (up to 10m), but often also has the connotation of high data
rates (over 100 Mbit/s), mostly for linking the components of consumer entertainment
systems (streaming video from computer or DVD player to a TV) or high-speed
computer connections (wireless Universal Serial Bus (USB)).
wireless communication by andreas molisch 184
Range and Number of Users
• 300 m, and the number of users connected to one BS is of the same order as for WLANs. Note, however, that wireless PABXs can have much larger ranges and user numbers - as mentioned before, they can be seen essentially as small private cellular systems.
• Cellular systems have a range that is larger than, e.g., the range of WLANs. Microcells typically cover cells with 500-m radius, while macrocells can have a radius of 10 or even 30 km. Depending on the available bandwidth and the multiple access scheme, the number of active users in a cell is usually between 5 and 50. If the system is providing high-speed data services to one user, the number of active users usually shrinks.
• Fixed wireless access services cover a range that is similar to that of cellphones - namely, between 100 m and several tens of kilometers. Also, the number of users is of a similar order as for cellular systems.
• Satellite systems provide even larger cell sizes, often covering whole countries and even continents. Cell size depends critically on the orbit of the satellite: geostationary satellites provide larger cell sizes (1,000-km radius) than LEOs.
185
186
transmission techniques for such devices lies in avoiding the laying of cables. Even
though the devices are not mobile, the propagation channel they transmit over can
change with time: both due to people walking by and due to changes in the
environment (rearranging of machinery, furniture, etc.).

Fixed wireless access is a typical case in point. Note also that all wired
communications (e.g., the PSTN) fall into this category.

• Nomadic devices: nomadic devices are placed at a certain location for a limited
duration of time (minutes to hours) and then moved to a different location. This means
that during one “drop” (placing of the device), the device is similar to a fixed device.
However, from one drop to the next, the environment can change radically. Laptops are
typical examples: people do not operate their laptops while walking around, but place
them on a desk to work with them. Minutes or hours later, they might bring them to a
different location and operate them there.

• Low mobility: many communications devices are operated at pedestrian speeds.


Cordless phones, as well as cellphones operated by walking human users are typical
examples. The effect of the low mobility is a channel that changes rather slowly, and –
in a system with multiple BSs – handover from one cell to another is a rare event.

• High mobility usually describes speed ranges from about 30 to 150 km/h. Cellphones
operated by people in moving cars are one typical example.
• Extremely high mobility is represented by high-speed trains and planes, which cover
speeds between 300 and 1000 km/h. 187
188
different devices is relatively simple. Limits on transmit power (identical for all users) are a
key component of this approach – without them, each user would just increase the
transmit power to drown out interferers, leading essentially to an “arms race” between
users.

◦ Free spectrum: is assigned for different services as well as for different operators. The
ISM band at 2.45 GHz is the best known example – it is allowed to operate microwave
ovens, WiFi LANs, and Bluetooth wireless links, among others, in this band. Also for this
case, each user has to adhere to strict emission limits, in order not to interfere too much
with other systems and users. However, coordination between users (in order to minimize
interference) becomes almost impossible – different systems cannot exchange
coordination messages with each other, and often even have problems determining the
exact characteristics (bandwidth,duty cycle) of the interferers.

After 2000, two new approaches have been promulgated, but are not yet in widespread
use:
• Ultra Wide Bandwidth systems (UWB) spread their information over a very large
bandwidth, while at the same time keeping a very low-power spectral density. Therefore,
the transmit band can include frequency bands that have already been assigned to other
services, without creating significant interference.
• Adaptive spectral usage: another approach relies on first determining the current
spectrum usage at a certain location and then employing unused parts of the spectrum.

189
Direction of Transmission
Not all wireless services need to convey information in both directions.
• Simplex systems send the information only in one direction – e.g., broadcast systems
and pagers.
• Semi-duplex systems can transmit information in both directions. However, only one direction is allowed at any time. Walkie-talkies, which require the user to push a button in order to talk, are a typical example. Note that one user must signify (e.g., by using the word "over") that (s)he has finished his/her transmission; then the other user knows that now (s)he can transmit.
• Full-duplex systems allow simultaneous transmission in both directions – e.g., cellphones and cordless phones.

• Asymmetric duplex systems: for data transmission, we often find that the required data
rate in one direction (usually the downlink) is higher than in the other direction. However,
even in this case, full duplex capability is maintained.

190
Wired and wireless communications
• Wired: The communication takes place over a more or less stable medium like copper wires or optical fibers. The properties of the medium are well defined and time-invariant.
  Wireless: Due to user mobility as well as multipath propagation, the transmission medium varies strongly with time.
• Wired: Increasing the transmission capacity can be achieved by using a different frequency on an existing cable, and/or by stringing new cables.
  Wireless: Increasing the transmit capacity must be achieved by more sophisticated transceiver concepts and smaller cell sizes (in cellular systems), as the amount of available spectrum is limited.
• Wired: The range over which communications can be performed without repeater stations is mostly limited by attenuation by the medium (and thus noise); for optical fibers, the distortion of transmitted pulses can also limit the speed of data transmission.
  Wireless: The range that can be covered is limited both by the transmission medium (attenuation, fading, and signal distortion) and by the requirements of spectral efficiency (cell size).
• Wired: Interference and crosstalk from other users either do not happen, or the properties of the interference are stationary.
  Wireless: Interference and crosstalk from other users are inherent in the principle of cellular communications. Due to the mobility of the users, they are also time-variant.
191
Wired and wireless communications
• Wired: The delay in the transmission process is constant, determined by the length of the cable and the group delay of possible repeater amplifiers.
  Wireless: The delay of the transmission depends partly on the distance between the base station and the Mobile Station (MS), and is thus time-variant.
• Wired: The Bit Error Rate (BER) decreases strongly (approximately exponentially) with increasing Signal-to-Noise Ratio (SNR). This means that a relatively small increase in transmit power can greatly decrease the error rate.
  Wireless: For simple systems, the average BER decreases only slowly (linearly) with increasing average SNR. Increasing the transmit power usually does not lead to a significant reduction in BER; however, more sophisticated signal processing helps.
• Wired: Due to the well-behaved transmission medium, the quality of wired transmission is generally high.
  Wireless: Due to the difficult medium, transmission quality is generally low unless special measures are used.
192
Wired and wireless communications
• Wired: Jamming and interception of dedicated links with wired transmission is almost impossible without consent by the network operator.
  Wireless: Jamming a wireless link is straightforward, unless special measures are taken. Interception of the on-air signal is possible; encryption is therefore necessary to prevent unauthorized use of the information.
• Wired: Establishing a link is location based. In other words, a link is established from one outlet to another, independent of which person is connected to the outlet.
  Wireless: Establishing a connection is based on the (mobile) equipment, usually associated with a specific person. The connection is not associated with a fixed location.
• Wired: Power is either provided through the communications network itself (e.g., for traditional landline telephones) or from the traditional power mains (e.g., fax). In neither case is energy consumption a major concern for the designer of the device.
  Wireless: MSs use rechargeable or one-way batteries. Energy efficiency is thus a major concern.

193
194
195
Typical example of fading. The thin line is the (normalized) instantaneous field strength; the
thick line is the average over a 1-m distance.

196
Intersymbol Interference

The runtimes for different MPCs are different. We have already mentioned above that
this can lead to different phases of MPCs, which lead to interference in narrowband
systems. In a system with large bandwidth, and thus good resolution in the time
domain,3 the major consequence is signal dispersion: in other words, the impulse
response of the channel is not a single delta pulse but rather a sequence of pulses
(corresponding to different MPCs), each of which has a distinct arrival time in addition
to having a different amplitude and phase (see Figure 2.5). This signal dispersion leads
to InterSymbol Interference (ISI) at the RX. MPCs with long runtimes, carrying
information from bit k, and MPCs with short runtimes, carrying contributions from bit k +
1 arrive at the RX at the same time, and interfere with each other (see Figure 2.6).
Assuming that no special measures4 are taken, this ISI leads to errors that cannot be
eliminated by simply increasing the transmit power, and are therefore often called
irreducible errors.

ISI is essentially determined by the ratio between symbol duration and the duration of
the impulse response of the channel. This implies that ISI is not only more important for
higher data rates but also for multiple access methods that lead to an increase in
transmitted peak data rate (e.g., time division multiple access, ). Finally, it is also
noteworthy that ISI can even play a role when the duration of the impulse response is
shorter (but not much shorter) than bit duration

197
198
199
Assigned Frequencies

• Below 100 MHz: at these frequencies, we find Citizens’ Band (CB) radio, pagers,
and analog
cordless phones.
• 100–800 MHz: these frequencies are mainly used for broadcast (radio and TV)
applications.
• 400–500 MHz: a number of cellular and trunking radio systems make use of this
band. It is mostly systems that need good coverage, but show low user density.
• 800–1000 MHz: several cellular systems use this band (analog systems as well as
secondgeneration cellular). Also some emergency communications systems
(trunking radio) make use of this band.
• 1.8–2.1 GHz: this is the main frequency band for cellular communications. The
current (secondgeneration) cellular systems operate in this band, as do most of the
third-generation systems.Many cordless systems also operate in this band.
• 2.4–2.5 GHz: the Industrial, Scientific, and Medical (ISM) band. Cordless phones,
Wireless Local
Area Networks (WLANs) and wireless Personal Area Networks (PANs) operate in
this band; they share it with many other devices, including microwave ovens.
• 3.3–3.8 GHz: is envisioned for fixed wireless access systems.
• 4.8–5.8 GHz: in this range, most WLANs can be found. Also, the frequency range
between 5.7 and 5.8 GHz can be used for fixed wireless access, complementing the
3-GHz band. Also car-to-car communications are working in this band.
• 11–15 GHz: in this range we can find the most popular satellite TV services, which use 14.0–14.5 GHz for the uplink and 11.7–12.2 GHz for the downlink.
200
Basics of link budgets
• Link budgets show how different components and propagation processes influence the available SNR.
• Link budgets can be used to compute the required transmit power, the possible range of a system, or the required receiver sensitivity.
• Link budgets can be easily set up using logarithmic power units (dB):
  dB = 10 log10(Pout/Pin)
201
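A toy link-budget bookkeeping sketch in dB (all numbers below are made up for illustration):

import math

to_dB = lambda ratio: 10 * math.log10(ratio)   # the dB definition above

tx_power_dBm = 30          # 1 W transmitter
gains_dB     = [15, 2]     # Tx antenna, Rx antenna
losses_dB    = [120, 3, 8] # path loss, cable loss, fading margin

rx_power_dBm = tx_power_dBm + sum(gains_dB) - sum(losses_dB)
print(rx_power_dBm)        # -84 dBm available at the receiver
print(to_dB(100))          # 20 dB corresponds to a power ratio of 100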
SINGLE LINK
The link budget – a central concept

202
Amplification and attenuation

203
Fading margin
Interference is subject to fading, while noise is typically constant (averaged over a short time interval). To determine a fading margin, we statistically assume that the desired signal is weaker than its median value 50% of the time and that the interfering signal is stronger than its median value 50% of the time.
PL, the admissible path loss, is the ratio of the EIRP (transmit power) to the mean received power.

204
205
206
207
208
209
210
Narrowband and wideband systems. HC(f ), channel transfer function; hC(τ ), channel impulse
response.

211
Narrowband and wideband systems. HC(f ), channel transfer function; hC(τ ), channel impulse
response.

212
Figure 6.6 Squared magnitude of the impulse response |h(t, τ )|2 measured in hilly terrain near
Darmstadt, Germany. Measurement duration 140 s; center frequency 900 MHz. τ denotes the
excess delay.

213
Figure 6.7 Spreading function computed from
the data of Figure 6.6.
214
Cooperative Communications

• dedicated relays, i.e., relays that never act as source or destination of the information,
but whose sole purpose is to facilitate the information exchange of other nodes;
• peer nodes acting as relays. Such peer nodes, e.g., mobile handsets or sensor nodes,
can change their roles depending on the situation at hand – sometimes they help to
forward information and sometimes they act as a source or destination.

215
Cooperative Communications

What is a Relay?
 A simple repeater: Receive, boost, and re-send a signal.
 Cellular network: Different node, carrier owned infrastructure, tree topology.
IEEE 802.16j (mobile multihop relay).
Sensor network: Identical node, subscriber equipment, mesh topology.
IEEE 802.15.5 (WPAN mesh)/ 802.11s (WLAN mesh).

A. Chandra - Green wireless communication with relays 216


Cooperative Communications
Why Use a Relay?
 Save Tx energy: - Reduced transmission distance.
 Performance improvement: - Enhance QoS, capacity, range.
- Load balancing.
 CapEx benefit: - Temporary coverage, gradual rollout

A. Chandra - Green wireless communication with relays 217


218
Decoding at Relay
 Amplify and forward: - Relays act as analog repeaters.

 Decode and forward: - Relays act as digital regenerative repeaters.

 Compress and forward: - Relays quantize and compress.

219
220
A key feature of the wireless propagation channel is the broadcast effect : when one
node transmits a signal, it can be received by any node in the vicinity – a fact that can
be positively exploited in multinode networks.1 While the multi-hopping strategy
described above does not make use of the broadcast effect, the more advanced
cooperative communications approach does use it. Consider Figure 22.2a, which just
slightly redraws Figure 22.1a. When node A transmits, the signal reaches not only node
B but also (in weaker form) node C. This weak signal might not be enough by itself for
node C to decode, but it can be used to augment the signal received in a subsequent
transmission from node B to node C. The broadcast effect has even more significant
impact in larger networks, e.g., the situation depicted in Figure 22.2b: if the first node
transmits, the signal reaches nodes B and D at about equal strength. Reaching those
two nodes in the network thus does not “cost” anything more (i.e., does not require
more transmit power) than reaching a single node. The two nodes B and D can now
cooperate to forward the information to node C, and – as we will show later on – such a
cooperative transmission can be more efficient than if only a single node transmits. The
same principle holds in even larger networks, like the one depicted in Figure 22.2c.

221
The subsequent two subsections deal with larger networks, where a message is
not transmitted in just two "hops" (a transmission from the source to the relay(s), and a
second transmission from the relay(s) to the destination), but where multiple
transmissions from one relay (or set of relays) to another are performed. For such
transmission via a sequence of relays, the routing problem arises, i.e., which
nodes should be used for relaying, and in what sequence.

222
Fundamentals of Relaying Protocols

Consider the three-node network shown in Figure 22.3. A source is connected to a
relay and a destination, with the channels between the nodes given by h_sr, h_sd, and
h_rd, respectively. The relay can now help in various ways in forwarding the
information; the signal model sketched below makes this setup explicit.

223
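A minimal signal model for this three-node setup (standard two-phase relaying notation; the noise terms and the symbol x are introduced here for illustration and are not taken from the slides):

  y_r      = h_sr * x + n_r          (phase I: source -> relay)
  y_d,(1)  = h_sd * x + n_d,(1)      (phase I: source -> destination)
  y_d,(2)  = h_rd * f(y_r) + n_d,(2) (phase II: relay -> destination)

Here f(.) depends on the relaying protocol: for amplify-and-forward, f(y_r) = G * y_r with a power-normalizing gain G; for decode-and-forward, f(y_r) is the re-encoded version of the decoded message. The destination can then combine y_d,(1) and y_d,(2), e.g., by maximum ratio combining.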
224
225
KPIs
Key Performance Indicators (KPIs) – 6 Key Requirements:
 1000X capacity (traffic and connections)
 10X user rate anywhere (100 Mbps–1 Gbps)
 10X peak data rate (10+ Gbps)
 1000X energy & cost reduction
 10X lower latency, high reliability
 5–10X spectrum efficiency

[Figure: spectral efficiency (bps/Hz) versus SINR (dB), comparing 4G performance with the 5G requirement]

Slides: http://wireless.egr.uh.edu/research.htm 226
[Diagram: 5G capacity scaling, built around the formula C = N * W * T * log(1 + SINR)]
 N – no. of APs: ultra-dense deployment (LTE-Hi and further evolution)
 W – bandwidth: new spectrum (mmWaves, LTE-U, cognitive radio)
 T – scaled transmission time: more scenarios and use cases (M2M, D2D, V2X)
 log(1 + SINR) – link efficiency (SINR = signal-to-noise-plus-interference ratio): massive MIMO, 3D-BF, full-duplex
A small numerical sketch of how these factors multiply follows.
227
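The following toy calculation (all numbers are assumptions chosen only to illustrate the multiplicative structure of the formula, not measured values) shows how densification, new spectrum, and better link efficiency combine toward the capacity target.

```python
import math

def capacity(n_aps, bandwidth_hz, time_fraction, sinr_linear):
    """Aggregate capacity C = N * W * T * log2(1 + SINR), in bit/s."""
    return n_aps * bandwidth_hz * time_fraction * math.log2(1 + sinr_linear)

# Illustrative 4G-like baseline (assumed values)
c_base = capacity(n_aps=1, bandwidth_hz=20e6, time_fraction=1.0,
                  sinr_linear=10 ** (10 / 10))

# Illustrative dense, wideband, high-SINR scenario: 10x more APs, 10x more spectrum
c_dense = capacity(n_aps=10, bandwidth_hz=200e6, time_fraction=1.0,
                   sinr_linear=10 ** (20 / 10))

print(f"Baseline: {c_base / 1e6:.0f} Mbit/s, dense + wideband: {c_dense / 1e6:.0f} Mbit/s, "
      f"gain ~{c_dense / c_base:.0f}x")
```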
10X Peak Rate
5G Requirement

[Chart: peak data rate evolution across generations – roughly 100 Kbps (2G), 10 Mbps (3G),
1 Gbps (4G) – toward a 10 Gbps 5G peak rate, with use cases such as immediate downloading
and 3D-UHDTV, enabled by full duplex, NOMA, ultra-dense networks, and high spectrum (mmWave)]

Full-duplex communication can be one possible solution to meet the future wireless challenges.
Background of Full-Duplex Techniques

• Traditional half duplex: using orthogonal resources


• Time-division duplex
• Frequency-division duplex

Fig. 3 Time-division duplex    Fig. 4 Frequency-division duplex

• Problem: the orthogonal resources are split between reception and transmission,
which results in a spectral loss compared with full-duplex communication.
Full Duplex Introduction
• A full duplex system allows simultaneous transmission and reception on the
same time and frequency resources.

: Signal of interest

: Self interference

Fig. 5 Full duplex communication

• Advantages
• High spectral efficiency (a rough rate comparison follows below)
• Same time & same frequency band
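A rough sketch of the advantage listed above (assumed power levels and a simplified Shannon-rate model introduced here for illustration): full duplex can approach twice the half-duplex spectral efficiency only when the residual self-interference after cancellation is small.

```python
import math

def rate_bps_hz(signal_dbm, noise_dbm, interference_dbm=None):
    """Shannon spectral efficiency log2(1 + SINR) for powers given in dBm."""
    noise_mw = 10 ** (noise_dbm / 10)
    interf_mw = 10 ** (interference_dbm / 10) if interference_dbm is not None else 0.0
    sinr = 10 ** (signal_dbm / 10) / (noise_mw + interf_mw)
    return math.log2(1 + sinr)

signal_dbm, noise_dbm = -70.0, -95.0                 # assumed received signal and noise floor
half_duplex = rate_bps_hz(signal_dbm, noise_dbm)     # one direction at a time

for residual_si_dbm in (-100.0, -85.0, -70.0):       # residual self-interference after cancellation
    full_duplex = 2 * rate_bps_hz(signal_dbm, noise_dbm, residual_si_dbm)
    print(f"residual SI {residual_si_dbm} dBm: HD {half_duplex:.2f} vs FD {full_duplex:.2f} bps/Hz")
```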
Main Challenges
• Traditional challenges
• Very large self-interference
   • 50–110 dB larger than the signal of interest
   • Depends on the inter-node distance
• ADC is the bottleneck
   • Limited dynamic range: saturation distortion
   • Limited precision: the signal of interest ends up below the quantization noise
   • For a 12-bit ADC, INR(dB) > SNR(dB) + 35 dB implies the self-interference is too strong
• The self-interference must therefore be reduced before the ADC
   (a back-of-the-envelope dynamic-range check follows)

Fig. 6 Very large self-interference (received signal = signal of interest + self-interference)
Fig. 7 Signal after quantization
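A back-of-the-envelope check (the 6.02·N + 1.76 dB rule is the standard ideal-ADC figure; the INR and target-SNR values below are assumptions for illustration): a 12-bit converter offers roughly 74 dB of dynamic range, so a strong self-interferer consumes most of that range and pushes the signal of interest toward the quantization floor.

```python
def adc_dynamic_range_db(bits):
    """Ideal ADC dynamic range (SQNR) in dB: ~6.02*N + 1.76."""
    return 6.02 * bits + 1.76

bits = 12
inr_db = 60.0          # assumed self-interference-to-noise ratio at the ADC input
target_snr_db = 25.0   # assumed SNR needed by the signal of interest

dr_db = adc_dynamic_range_db(bits)
# Range left for the desired signal once the ADC must accommodate the interference
headroom_db = dr_db - inr_db
print(f"{bits}-bit ADC dynamic range: {dr_db:.1f} dB")
print(f"Headroom left for the desired signal: {headroom_db:.1f} dB "
      f"(target SNR {target_snr_db} dB -> {'OK' if headroom_db >= target_snr_db else 'insufficient'})")
```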
Self-interference Cancellation

• Self-interference channel h_I
• Passive propagation suppression
   • Design the antennas to increase the propagation loss of h_I
• Active cancellation
   • Active analog cancellation: cancel the interference in the analog part
Passive Propagation Cancellation

• Antenna placement
   • Separation d between TX and RX (Fig. 7 Antenna separation)
• Place antennas on opposite sides of the device (Fig. 8 Device cancellation)
• Directional antennas
   • Used in full-duplex relays (Fig. 9 Directional antenna)
Active Analog Cancelation (1)

• Objective: drive the self-interference at the Rx antenna to (ideally) exactly zero
• Cancellation path = negative of the interfering path
• These techniques require additional analog parts (pre-mixer or post-mixer architectures)
Active Analog Cancelation (2)

[Figure: pre-mixer and post-mixer active analog cancellation architectures; the cancellation
signal is generated via DAC/ADC paths and subtracted from the received signal]
Active Digital Cancelation

• Conceptually simpler – requires no new "parts"
• Two-step cancellation (a minimal sketch follows):
   • Estimate the residual self-interference channel h_RI from training symbols
   • Cancel h_RI x[n] at baseband
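A minimal sketch of the two-step digital cancellation (the least-squares estimator, the single-tap flat channel model, and all signal names are simplifying assumptions introduced here; real designs estimate a multi-tap channel and must also handle transmitter nonlinearities):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known transmitted baseband samples (training followed by data), BPSK for simplicity
n_train, n_data = 64, 256
x = (rng.integers(0, 2, n_train + n_data) * 2 - 1).astype(complex)

# Unknown residual self-interference channel (assumed single complex tap)
h_ri = 0.8 * np.exp(1j * 0.3)
noise = 0.01 * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
desired = 0.05 * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))  # far-end signal
y = h_ri * x + desired + noise          # received baseband signal

# Step 1: least-squares estimate of h_RI from the training portion
h_est = np.vdot(x[:n_train], y[:n_train]) / np.vdot(x[:n_train], x[:n_train])

# Step 2: cancel h_RI * x[n] at baseband
y_clean = y - h_est * x

print(f"h_RI estimate error: {abs(h_est - h_ri):.4f}")
print(f"SI power before: {np.mean(abs(h_ri * x) ** 2):.3f}, "
      f"after cancellation: {np.mean(abs((h_ri - h_est) * x) ** 2):.2e}")
```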
System Implementation in PKU
(1)
Full-Duplex Cooperative Communication

Full-duplex cooperative relaying Full-duplex two-way relaying

• Resource allocation problems


• Adaptive switching
• Power control
• Relay and antenna selection
• Radio resource allocation
Why Cooperation?
[Figure: a base station (BS) serving Mobile Station (MS) 1 and Mobile Station (MS) 2]

Why cooperation in wireless networks?
• Increased coverage
• Reduced transmission power
• Cooperative diversity
• Cooperative coding gain
Cooperative Communications
[Figure: timeline with slots t1 and t2 comparing non-cooperation and cooperation between
User 1 (User 1 data) and User 2 (User 2 data)]

Applications
o Cellular networks
o Wireless sensor networks
o Wireless ad hoc networks
2-Hop Relay Networks

Source Destination

Relay

2-hop relay network with a direct link

Two-phase transmission:

I: The source broadcasts to the relay and the destination

II: The relay forwards to the destination
Relaying Protocols

• Amplify and forward (AF)


• Decode and forward (DF)
• Adaptive Relaying Protocol (ARP)
• Compress and forward (CF)

Source Destination

Relay
Amplify and Forward (AF)

• The relay is used as an amplifier

• It amplifies both the desired signal and the noise received from the source
Decode and Forward (DF)
• Introduces error propagation when decoding errors occur at the relay

• DF is superior to AF when the S-R channel quality is good enough, because DF
can eliminate the effect of the noise accumulated at the relay

• AF is superior to DF when the S-R channel quality is poor, because DF would then
introduce serious error propagation (see the end-to-end SNR sketch below)
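A minimal numerical sketch using the commonly cited end-to-end SNR expressions (standard textbook approximations, not taken from the slides): for two-hop AF, gamma_AF = gamma_sr * gamma_rd / (gamma_sr + gamma_rd + 1); for DF, min(gamma_sr, gamma_rd) is an optimistic bound that is limited by the weaker hop but neglects the error propagation the slide warns about when the S-R link is poor.

```python
def snr_af(gamma_sr, gamma_rd):
    """End-to-end SNR of a two-hop amplify-and-forward link (no direct path)."""
    return gamma_sr * gamma_rd / (gamma_sr + gamma_rd + 1)

def snr_df_bound(gamma_sr, gamma_rd):
    """Optimistic DF bound: limited by the weaker hop. It neglects the error
    propagation that dominates actual DF performance when the S-R link is poor."""
    return min(gamma_sr, gamma_rd)

gamma_rd = 50.0                      # assumed relay-destination SNR (~17 dB)
for gamma_sr in (100.0, 2.0):        # good (~20 dB) vs poor (~3 dB) source-relay link
    print(f"gamma_sr = {gamma_sr:6.1f}: "
          f"AF = {snr_af(gamma_sr, gamma_rd):6.2f}, "
          f"DF bound = {snr_df_bound(gamma_sr, gamma_rd):6.2f}")
```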
Full-Duplex Cooperative Relaying
• The figure illustrates the simplest FD relay network, consisting of
one HD source, one HD destination, and one FD cooperative relay node.
• The source and the relay use the same time-frequency resource, and the
relay works in FD mode with two antennas.
Full-duplex cooperative relaying

The communication process


• The source transmits signals to both the FD relay and the destination;
• at the same time, the relay forwards its previously received signal to the destination while receiving from the source.
Joint Antenna and Relay Selection
• A joint relay and antenna selection scheme is studied in general full-duplex (FD) relay networks.
• The system has one source, one destination, and N FD amplify-and-forward relays.
• Each FD relay is equipped with two antennas (Tx/Rx configurable; see the selection criteria below).

① Kun Yang, Hongyu Cui, Lingyang Song, and Yonghui Li, "Joint Relay and Antenna Selection for Full-Duplex AF
Relay Networks," IEEE International Conference on Communications, Sydney, Australia, Jun. 2014.
② Kun Yang, Hongyu Cui, Lingyang Song, and Yonghui Li, "Efficient Full-Duplex Relaying with Joint Antenna-Relay
Selection and Self-Interference Suppression," to appear, IEEE Transactions on Wireless Communications.
Selection Criteria (1)
• Antenna selection criterion:
• Assumption: each antenna of the relay is able to
transmit/receive the signal.
• The relay configures the Tx/Rx antenna via the channel state
information.
• Maximum end-to-end SINR criterion

Mode 1: Rx antenna T1 and Tx antenna T2

Mode 2: Rx antenna T2 and Tx antenna T1


Selection Criteria (2)

• Joint antenna and relay selection scheme


• Maximum end-to-end SINR criterion
   {R_opt, mode_opt} = arg max_i  max{ γ_i^(mode 1), γ_i^(mode 2) }

• 2N candidate configurations (N relays, each with two Tx/Rx antenna modes); a selection sketch follows below
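A minimal sketch of the selection rule above (the SINR values are random placeholders; in the actual scheme each γ would be the end-to-end SINR computed from the channel state information of relay i in the given Tx/Rx antenna mode):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4  # number of FD amplify-and-forward relays

# Placeholder end-to-end SINRs (linear scale) for each relay in each antenna mode:
# mode 1 = (Rx on T1, Tx on T2), mode 2 = (Rx on T2, Tx on T1)
gamma = rng.exponential(scale=10.0, size=(N, 2))   # shape (relays, modes) -> 2N candidates

# Joint relay and antenna-mode selection by maximum end-to-end SINR
best_relay, best_mode = np.unravel_index(np.argmax(gamma), gamma.shape)
print(f"Selected relay R_{best_relay + 1}, antenna mode {best_mode + 1}, "
      f"end-to-end SINR {gamma[best_relay, best_mode]:.2f}")
```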
Radio Resource Management from Wiki

Radio resource management (RRM) is the system level management of co-channel interference, radio resources
and other radio transmission characteristics in wireless communication systems, for example cellular networks,
wireless local area networks and wireless sensor systems.[1][2] RRM involves strategies and algorithms for
controlling parameters such as transmit power, user allocation, beamforming, data rates, handover criteria,
modulation scheme, error coding scheme, etc. The objective is to utilize the limited radio-frequency spectrum
resources and radio network infrastructure as efficiently as possible.

RRM concerns multi-user and multi-cell network capacity issues, rather than the point-to-point channel capacity.
Traditional telecommunications research and education often dwell upon channel coding and source coding with a
single user in mind, although it may not be possible to achieve the maximum channel capacity when several users
and adjacent base stations share the same frequency channel. Efficient dynamic RRM schemes may increase the
system spectral efficiency by an order of magnitude, which often is considerably more than what is possible by
introducing advanced channel coding and source coding schemes. RRM is especially important in systems limited
by co-channel interference rather than by noise, for example cellular systems and broadcast networks
homogeneously covering large areas, and wireless networks consisting of many adjacent access points that may
reuse the same channel frequencies.

The cost for deploying a wireless network is normally dominated by base station sites (real estate costs, planning,
maintenance, distribution network, energy, etc.) and sometimes also by frequency license fees. The objective of
radio resource management is therefore typically to maximize the system spectral efficiency in bit/s/Hz/area unit or
Erlang/MHz/site, under some kind of user fairness constraint, for example, that the grade of service should be
above a certain level. The latter involves covering a certain area and avoiding outage due to co-channel
interference, noise, attenuation caused by path losses, fading caused by shadowing and multipath, Doppler shift,
and other forms of distortion. The grade of service is also affected by blocking due to admission control, scheduling
starvation, or the inability to guarantee the quality of service requested by the users.

249
Static radio resource management[edit]
Static RRM involves manual as well as computer-aided fixed cell planning or radio
network planning. Examples:

Frequency allocation band plans decided by standardization bodies, by national


frequency authorities and in frequency resource auctions.
Deployment of base station sites (or broadcasting transmitter site)
Antenna heights
Channel frequency plans
Sector antenna directions
Selection of modulation and channel coding parameters
Base station antenna space diversity, for example
Receiver micro diversity using antenna combining
Transmitter macro diversity such as OFDM single frequency networks (SFN)
Static RRM schemes are used in many traditional wireless systems, for example 1G
and 2G cellular systems, in today's wireless local area networks and in non-cellular
systems, for example broadcasting systems. Examples of static RRM schemes are:

Circuit mode communication using FDMA and TDMA.


Fixed channel allocation (FCA)
Static handover criteria

250
Examples of dynamic RRM schemes are:

Power control algorithms (a minimal distributed power-control sketch is given after this list)


Precoding algorithms
Link adaptation algorithms
Dynamic Channel Allocation (DCA) or Dynamic Frequency Selection (DFS) algorithms, allowing "cell breathing"
Traffic adaptive handover criteria, allowing "cell breathing"
Re-use partitioning
Adaptive filtering
Single Antenna Interference Cancellation (SAIC)
Dynamic diversity schemes, for example
Soft handover
Dynamic single-frequency networks (DSFN)
Phased array antennas with beamforming
Multiple-input multiple-output communications (MIMO)
Space-time coding
Admission control
Dynamic bandwidth allocation using resource reservation multiple access schemes or statistical multiplexing, for
example Spread spectrum and/or packet radio
Channel-dependent scheduling, for instance
Max-min fair scheduling using for example fair queuing
Proportionally fair scheduling using for example weighted fair queuing
Maximum throughput scheduling (gives low grade of service due to starvation)
Dynamic packet assignment (DPA)
Packet and Resource Plan Scheduling (PARPS) schemes
Mobile ad hoc networks using multihop communication
Cognitive radio
Green communication
QoS-aware RRM
Femtocells 251
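As one concrete instance of the power-control entry in the list above, the following sketch implements the classical Foschini–Miljanic distributed power-control iteration (the link gains, noise level, and SINR targets are assumed values chosen for illustration): each transmitter scales its power by the ratio of its target SINR to its currently measured SINR.

```python
import numpy as np

# Assumed 3-link interference scenario: G[i, j] = gain from transmitter j to receiver i
G = np.array([[1.00, 0.10, 0.05],
              [0.08, 1.00, 0.12],
              [0.06, 0.09, 1.00]])
noise = 1e-3
target_sinr = np.array([4.0, 3.0, 5.0])   # linear SINR targets per link
p = np.full(3, 0.1)                        # initial transmit powers

for _ in range(50):
    received = G @ p                                   # total received power per receiver
    interference = received - np.diag(G) * p + noise   # everything except the own signal
    sinr = np.diag(G) * p / interference
    p = p * target_sinr / sinr                         # Foschini-Miljanic update

print("Final powers:", np.round(p, 4))
print("Achieved SINRs:", np.round(np.diag(G) * p / (G @ p - np.diag(G) * p + noise), 3))
```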
The main RRM functions in a heterogeneous cellular system include: resource
allocation/partitioning among macrocells and small cells, link adaptation, packet
scheduling, radio admission control, handover management, etc.

252
Classification of RRM schemes

Lee, Ying Loong, Teong Chee Chuah, Jonathan Loo, and Alexey Vinel. "Recent advances in radio resource management
for heterogeneous LTE/LTE-A networks." IEEE Communications Surveys & Tutorials 16, no. 4 (2014): 2142-2180. 253
http://cc.ee.ntu.edu.tw/~ihsiangw/NICLab/Courses.html
262