4 Noise in Communication Systems
Noise can be defined as any unwanted signals, random or deterministic, which interfere
with the faithful reproduction of the desired signal in a system. Stated another way, any
interfering signal, which is usually noticed as random fluctuations in voltage or current
tending to obscure and mask the desired signals is known as noise. These unwanted
signals arise from a variety of sources and can be classified as man-made or naturally
occurring.
Man-made interference (noise) can arise from practically any piece of electrical or electronic equipment and includes such things as electromagnetic pick-up of other radiated signals, inadequate power-supply filtering, or aliasing terms. Man-made sources of noise all have the common property that their effect can be eliminated, or at least minimised, by careful engineering design and practice.
Interference caused by naturally occurring noise is not controllable in such a direct way, and its characteristics are best described statistically. Natural noise comes from the random thermal motion of electrons, atmospheric absorption and cosmic sources.
Since noise is mostly random in nature, it is best described through its statistical properties. In this section the main parameters used to describe noise, and the relations between them, are presented and analysed. Without going into details of random variables and stochastic processes, expressions are developed to describe noise through its power spectral density (frequency domain) or, equivalently, the auto-correlation function (time domain). This section gives a quick review of some elementary principles; a full description can be found in the literature [Papoulis, 1984, Proakis, 1989]. It should be noted that the description is valid for both random and deterministic signals.
Figure 4.1: Noise signal n(t) (left) and its sliding mean \overline{n(t)} (right) over a 200 ms record, computed for different averaging times T
Mean Value
The mean value of n(t) will be referred to as \overline{n(t)}, \eta_T or E\{n(t)\}. It is given by:

\overline{n(t)} = E\{n(t)\} = \eta_T = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} n(t)\, dt.    (4.1)
where \overline{n(t)} is often referred to as the dc or average value of n(t). For practical calculation of the mean value, the averaging time T has to be chosen large enough to smooth the fluctuations of n(t) adequately. Fig. 4.1 shows the averages \overline{n(t)} calculated by sliding
a window centred at t and extending from t − T /2 to t + T /2 over n(t). It is seen that
for small averaging time T = 5 ms there is still a considerable amount of fluctuation
present whereas for T = 400 ms the window covers the whole time signal which results
in a constant average value.
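As an illustration, the sliding-window average of Fig. 4.1 can be reproduced numerically. The following is a minimal sketch in Python/NumPy; the sampling rate, dc value and window lengths are assumed for illustration and are not taken from the figure:

```python
import numpy as np

fs = 1000.0                              # assumed sampling rate in Hz
t = np.arange(0, 0.2, 1 / fs)            # 200 ms record
rng = np.random.default_rng(0)
n = 4.0 + rng.normal(0.0, 1.0, t.size)   # noise with an assumed dc value of 4

def running_mean(x, T_window, fs):
    """Sliding-window estimate of the mean over a window of T_window seconds."""
    N = max(1, int(T_window * fs))
    kernel = np.ones(N) / N
    # mode="same" keeps the output aligned with a window centred at each sample
    return np.convolve(x, kernel, mode="same")

for T_window in (0.005, 0.05, 0.2):      # 5 ms, 50 ms and the full record
    eta = running_mean(n, T_window, fs)
    print(f"T = {1e3*T_window:5.0f} ms: spread of the estimate = {eta.max() - eta.min():.3f}")
```

As in the figure, the short window still shows considerable fluctuation, while a window covering the whole record yields an essentially constant estimate.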
Mean-Square Value
\overline{n^2(t)} = E\{n^2(t)\} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |n(t)|^2\, dt.    (4.2)
Aside from a scaling factor, the mean-square value \overline{n^2(t)} in (4.2) gives the time-averaged power P of n(t). Assuming n(t) to be the noise voltage or current, the scaling factor will be equivalent to a resistance, which is often set equal to 1 Ω. The square root of \overline{n^2(t)} is known as the root-mean-square (rms) value of n(t). The advantage of the rms notation is that the units of \sqrt{\overline{n^2(t)}} are the same as those of n(t).
AC Component
The ac or fluctuation component σ(t) of n(t) is the component that remains after "removing" the mean value \overline{n(t)}, and is defined as (see also Fig. 4.2):

\sigma(t) = n(t) - \overline{n(t)}.    (4.3)
Variance
The variance of n(t),

\overline{\sigma^2(t)} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |\sigma(t)|^2\, dt,    (4.4)

is equal to the power of the ac component of n(t) (aside from a resistance scaling factor). This can be shown by substituting (4.3) into (4.2), giving:
\overline{n^2(t)} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \left|\overline{n(t)} + \sigma(t)\right|^2 dt.    (4.5)
Using the fact that \overline{n(t)} is a constant and that the mean of σ(t) is zero by definition, we get:
\overline{n^2(t)} = \left|\overline{n(t)}\right|^2 + \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |\sigma(t)|^2\, dt.    (4.6)
In the above equation, the time-averaged power of n(t) is written as the sum of the powers of the dc and ac signal components. The variance is a measure of how strongly the signal fluctuates about its mean value.
Figure 4.2: AC component σ(t) (left) and variance (right) of n(t) for averaging times T = 5 ms, 50 ms and 400 ms
Now if the time average ηT computed from a single realization of x(t) tends to the
ensemble average η as T → ∞ then the random process is called mean-ergodic.
As such, for ergodic random processes a single time realization can be used to obtain
the moments of the process. Thus, the expressions in (4.1) to (4.6) are only correct if
the stochastic process is both stationary and ergodic [Papoulis, 1984, Proakis, 1989].
In addition, since the measuring time T is finite the quantities are only estimated values
of the moments. In practice, the only quantity accessible to measurements is n(t),
which forces us to assume a stationary, ergodic stochastic process.
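A quick numerical check of the decomposition (4.6) using a synthetic record (a sketch with assumed dc value and noise level, not tied to the measurements in the figures):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3.0 + 1.5 * rng.standard_normal(100_000)   # assumed dc value 3, ac part with sigma = 1.5

dc  = n.mean()            # estimate of the mean value of n(t)
sig = n - dc              # ac component sigma(t)
var = np.mean(sig ** 2)   # variance = power of the ac component
msq = np.mean(n ** 2)     # mean-square value = total power (1 ohm scaling)
rms = np.sqrt(msq)

print(f"dc^2 + variance = {dc**2 + var:.4f}")
print(f"mean-square     = {msq:.4f}")          # the two agree, cf. (4.6)
print(f"rms value       = {rms:.4f}")
```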
Drill Problem 33 Calculate the (a) average value, (b) ac component, and (c) rms value
of the periodic waveform v(t) = 1 + 3 cos(2πf t).
Drill Problem 34 A voltage source generating the waveform of drill problem 33 is con-
nected to a resistor R = 6 Ω. What is the power dissipated in the resistor?
If the signal f(t) is a power signal, i.e. a signal with finite power but infinite energy, the integral in (4.10) will diverge. However, considering the practical case of a finite observation time T and assuming that the signal is zero outside this interval is equivalent to multiplying the signal by the unit gate function rect(t/T). In this case the Fourier transform can be written as:
F_T(f) = \mathcal{F}\{f(t)\,\mathrm{rect}(t/T)\} = \int_{-T/2}^{T/2} f(t)\, e^{-j 2\pi f t}\, dt    (4.12)
Note that the multiplication by the rect-function in the time domain is equivalent to a
convolution by a sinc-function in the frequency domain.
In the following, two statistical functions are introduced which can be used to investigate
the similarity between random functions.
Auto-Correlation Function
The auto-correlation function of f(t) is defined as:

R_f(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} f(t + \tau)\, f^*(t)\, dt    (4.13)

where f*(t) is the complex conjugate of f(t). Note that the subscript f is added to the auto-correlation function R(τ) to indicate the random variable or function that is considered. The auto-correlation function (4.13) is often used in signal analysis; it gives a measure of the similarity of the signal f(t) with itself versus a relative time shift by an amount τ. For slowly varying time signals, the signal values do not change rapidly over time, which results in a flat auto-correlation function Rf(τ). Noise signals, on the other hand, tend to have rapid fluctuations, giving rise to an auto-correlation function with a sharp peak at τ = 0 (no time shift) that quickly falls to zero for increasing τ. As an example, Figs. 4.3 and 4.4 show the time signals and the corresponding auto-correlation functions for an exponential and a random noise signal.
Figure 4.3: Time signal and auto-correlation function of exponential waveform
Figure 4.4: Time signal and auto-correlation function of random noise waveform
Thus we arrive at the important conclusion that the correlation integral resulting in Rf(τ) corresponds to a multiplication in the frequency domain. Equivalently, instead of evaluating the integral in (4.13) we can calculate the Fourier transform of f(t) according to (4.12), determine |F_T(f)|² and use the inverse Fourier transform to get Rf(τ). The limit for T → ∞ in (4.14) mainly reminds us of the finite observation time; in practice this limit means that we have to observe the signal for a sufficiently long time period.
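A minimal sketch of this frequency-domain route for sampled data (the record length, sampling rate and 1/T-style normalisation are assumptions of the sketch, not prescribed by the text):

```python
import numpy as np

def autocorr_fft(x, fs):
    """Estimate R(tau) of a sampled record x via |F_T(f)|^2 and an inverse FFT."""
    N = x.size
    X = np.fft.fft(x, n=2 * N)                 # zero padding avoids circular wrap-around
    R = np.fft.ifft(np.abs(X) ** 2).real / N   # 1/T normalisation (N samples observed)
    return np.arange(N) / fs, R[:N]            # lags in seconds, R(tau) for tau >= 0

fs = 1000.0
rng = np.random.default_rng(2)
noise = rng.standard_normal(4000)
lags, R = autocorr_fft(noise, fs)
print(R[0], R[1:4])   # sharp peak at tau = 0, near zero elsewhere (cf. Fig. 4.4)
```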
Cross-Correlation Function
In analogy to (4.13), the cross-correlation function of two signals f(t) and g(t) is defined as:

C_{fg}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} f(t + \tau)\, g^*(t)\, dt    (4.15)

It gives a measure of the similarity between the two signals as a function of the relative time shift τ.
Figure 4.5: Time signal and auto-correlation function of exponential waveform plus
noise
Drill Problem 35 Consider two functions f(t) = sin(2πf t) and g(t) = sin(2πf t + θ). Find the expression for the cross-correlation function C_{fg}(τ) of the two functions. Compute the value of C_{fg}(τ) for θ = 0, π/4, and π/2 rad. Note that the first case represents the auto-correlation of sin x, and the last is the cross-correlation of sin x with cos x.
Drill Problem 36 Two voltage sources v_1(t) and v_2(t) are connected in series such that the resulting voltage is v_s(t) = v_1(t) + v_2(t). Calculate the total power of v_s(t) (1 Ω scaling), assuming the signals of the two voltage sources to be completely uncorrelated with each other, i.e. R_{v_1 v_2}(τ) = R_{v_2 v_1}(τ) ≡ 0. The signals v_1(t) and v_2(t) are assumed to have rms voltages of 3 V and 5 V respectively.
In the following, we will be dealing with truncated signals, i.e. the signals are only considered within a finite time interval [−T/2, T/2], thus assuming the signal to be zero outside this interval. The mathematical representation is not as straightforward as for infinite time signals; however, practical considerations show that this extra effort is needed.
Parseval's theorem for truncated signals states that:

\int_{-T/2}^{T/2} |f(t)|^2\, dt = \int_{-\infty}^{\infty} |F_T(f)|^2\, df.    (4.16)
Noting the similarity between the first term in (4.16) and the time-averaged power P of a signal as given in (4.2), we write:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2\, dt = \lim_{T \to \infty} \frac{1}{T} \int_{-\infty}^{\infty} |F_T(f)|^2\, df.    (4.17)
The first integral in the equation above is easy to understand: to obtain the total power of a signal we must add together the power contribution of each time increment, which is done through the integration over time t. Whether we are dealing with voltage or current is immaterial if we assume a 1 Ω resistance. We know that evaluating the integral over a finite time period gives us the signal power within this time; the first integral is thus also valid over each time interval. However, Parseval's equation suggests a second procedure to calculate the total power, which is performed in the frequency domain. The last term in (4.17) shows that the summation of |F_T(f)|² over all frequencies f will also result in the total power P. Defining a power spectral density function S(f), in units of Watts per Hz, such that its integral over frequency is equal to the total power gives:
\int_{-\infty}^{\infty} S(f)\, df = \lim_{T \to \infty} \frac{1}{T} \int_{-\infty}^{\infty} |F_T(f)|^2\, df.    (4.18)
In addition, we insist that S(f) also gives the power over each frequency increment, which means that integrating the power spectral density over a frequency range ∆f gives the total power within this frequency interval. It can be shown that, under certain conditions –which are fulfilled for most practical signals of interest– S(f) is related to |F_T(f)|² through:
S(f) = \lim_{T \to \infty} \frac{|F_T(f)|^2}{T}.    (4.19)
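A minimal periodogram-style sketch of (4.19), estimating S(f) as |F_T(f)|²/T from a finite sampled record (the record and sampling rate are illustrative):

```python
import numpy as np

def periodogram(x, fs):
    """Two-sided power spectral density estimate S(f) = |F_T(f)|^2 / T."""
    N = x.size
    T = N / fs                       # observation time in seconds
    F = np.fft.fft(x) / fs           # approximates the Fourier integral F_T(f)
    S = np.abs(F) ** 2 / T           # W/Hz for a 1-ohm scaling
    f = np.fft.fftfreq(N, d=1 / fs)
    return f, S

fs = 1000.0
x = np.random.default_rng(3).standard_normal(10_000)
f, S = periodogram(x, fs)
# integrating S over frequency returns the total power, cf. (4.18)
print(np.sum(S) * fs / x.size, np.mean(x ** 2))
```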
Through (4.14) the relation between the power spectral density and the Fourier transform of the auto-correlation function is given:

S(f) = \int_{-\infty}^{\infty} R_f(\tau)\, e^{-j 2\pi f \tau}\, d\tau = \mathcal{F}\{R_f(\tau)\}    (4.20)
When evaluating the distribution of noise power over frequency, the power spectral density S(f) is the function to examine, rather than the Fourier transform. The reason is that the Fourier transform of a random quantity (cf. the expression in (4.12)) is itself a random quantity, which in this sense does not give us any useful information. As we know, for random signals we need to investigate the statistical properties. Thus, if we are interested in the frequency content of noise, we compute the Fourier transform of the auto-correlation function as given in the expression above. Collecting the results, the total power can be written as:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2\, dt = \int_{-\infty}^{\infty} S(f)\, df = R_f(0)    (4.21)

The above expression can be used to calculate the total power using either the time-domain signal, the power spectral density function, or the auto-correlation function. Care must be taken when the resistive scaling factor is not equal to 1 Ω.
The power spectral density will now be used to describe noise. Knowing that random noise tends to have rapid fluctuations, we assume a noise voltage n(t) having the auto-correlation function:

R_n(\tau) = \frac{N_o}{2}\, \delta(\tau)    (4.22)

where δ(τ) is the impulse function. Thus, Rn(τ) is zero for all τ ≠ 0, which indicates a completely uncorrelated noise signal except for zero time shift. Taking the Fourier transform of Rn(τ), the power spectral density is:

S_n(f) = \frac{N_o}{2}    (4.23)
The power spectral density is constant for all frequencies, thus it contains all fre-
quency components with equal power weighting. This type of noise is designated as
white noise in analogy to white light. The factor of one-half in (4.23) is necessary to
have a two-sided power spectral density.
A problem arises when we try to calculate the total power of white noise, since:

P_n = \int_{-\infty}^{\infty} \frac{N_o}{2}\, df \to \infty    (4.24)
which implies an infinite amount of power and thus cannot be used to describe any
physical process.
However, it turns out to be a good model for many cases in which the bandwidth is
limited through the system. In this case the power spectral density can be assumed flat
within the finite measuring bandwidth, which will restrict the total noise power. What
we are dealing with in this case is band-limited white noise which will appear as white
noise to the measuring system.
The power of band-limited white noise is independent of the choice of operating
frequency f0 . If n(t) is zero-mean white noise with the power spectral density equal to
No /2 Watts per Hz, then across a bandwidth B the noise power is
P_n = \int_{f_0 - B/2}^{f_0 + B/2} \frac{N_0}{2}\, df + \int_{-f_0 - B/2}^{-f_0 + B/2} \frac{N_0}{2}\, df = 2 \int_{-B/2}^{B/2} \frac{N_0}{2}\, df = N_o B \ \text{Watt}    (4.25)
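A small numerical check of (4.25), with illustrative values of N0 and B (not taken from the text), confirming that the result does not depend on the operating frequency f0:

```python
import numpy as np

N0 = 4e-21          # W/Hz, so the two-sided density is N0/2 on each side (illustrative)
B  = 20e6           # bandwidth in Hz

def bandlimited_power(N0, B, f0):
    """Integrate the two-sided density N0/2 over bands of width B around +f0 and -f0."""
    f = np.linspace(f0 - B / 2, f0 + B / 2, 10_001)
    S = np.full_like(f, N0 / 2)
    return 2 * np.trapz(S, f)        # factor 2: positive and negative frequency bands

for f0 in (0.0, 1e9, 10e9):
    print(f0, bandlimited_power(N0, B, f0), N0 * B)
```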
The transformation of an input signal x(t) through a linear time-invariant (LTI) system is described in the time domain through the convolution integral:

y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau    (4.26)
where y(t) is the output signal and h(t) is the impulse response of the LTI system. If
the input signal is random, what we are interested in is the power spectral density Sy (f )
of the output signal. Substituting (4.26) into (4.13) and performing a transformation of
variables we obtain the auto-correlation function of the output signal [Proakis, 1989]:
R_y(\tau) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} R_x(\tau + \alpha - \beta)\, h^*(\alpha)\, h(\beta)\, d\alpha\, d\beta    (4.27)
Taking the Fourier transform of (4.27) gives:

S_y(f) = S_x(f)\, |H(f)|^2    (4.28)

Thus, we have the important result that the power density spectrum of the output signal is the product of the input power density spectrum and the magnitude squared of the frequency transfer function. If the auto-correlation function is desired, it is usually easier to compute the power density spectrum through (4.28) and then perform the inverse transform:

R_y(\tau) = \mathcal{F}^{-1}\{S_x(f)\, |H(f)|^2\}    (4.29)
If the random input signal is white noise n_i(t) with a power spectral density N_o/2, then (4.28) becomes:

S_{out}(f) = \frac{N_o}{2}\, |H(f)|^2    (4.30)
The total noise output power from a system with known frequency transfer function |H(f)| can be calculated using (4.28) and (4.21). If the input noise is white, this becomes:

P_{out} = \int_{-\infty}^{\infty} S_{out}(f)\, df = N_o \int_{0}^{\infty} |H(f)|^2\, df.    (4.31)
The integral is a constant for a given system frequency transfer function. We would like to have a simple expression similar to (4.25) for the output noise power. A reasonable approach is to define an equivalent noise bandwidth B_N of an ideal filter such that the output noise power from the ideal filter and from the real system are equal. As shown in Fig. 4.7, we assume that the ideal filter's frequency transfer function is flat and equal to H(f_o) within the bandwidth B_N around the centre frequency f_o and zero otherwise. Equating the two output noise powers, N_o |H(f_o)|^2 B_N = N_o \int_0^{\infty} |H(f)|^2\, df, gives:

B_N = \frac{1}{|H(f_o)|^2} \int_{0}^{\infty} |H(f)|^2\, df
Figure 4.7: Ideal filter response |H(f)|², flat over the equivalent noise bandwidth from f_o − B_N/2 to f_o + B_N/2
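As a numerical sketch, the equivalent noise bandwidth of a simple first-order lowpass can be computed directly from its |H(f)|²; the RC values and noise density below are illustrative assumptions, and the analytic result B_N = (π/2) f_c holds for any first-order response:

```python
import numpy as np

R, C = 1e3, 1e-6                      # illustrative RC lowpass
fc = 1 / (2 * np.pi * R * C)          # 3-dB cutoff frequency

f = np.linspace(0, 1000 * fc, 2_000_001)
H2 = 1 / (1 + (f / fc) ** 2)          # |H(f)|^2 of the first-order lowpass

BN = np.trapz(H2, f) / H2[0]          # equivalent noise bandwidth
print(BN, np.pi / 2 * fc)             # numeric integral vs. analytic (pi/2) * fc

N0 = 2e-3                             # assumed white-noise density: Sin = N0/2 W/Hz
print("output noise power:", N0 * BN * H2[0], "W")   # cf. (4.31)
```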
Drill Problem 38 Compute the equivalent noise bandwidth and the 3-dB bandwidth of
the lowpass filter of drill problem 37 with R = 30 Ω and L = 25 mH. Then compute the
output noise power for Sin (f ) = N0 /2 = 20 × 10−3 W Hz−1 .
Let the input signal power for a given device be \overline{s_{in}^2(t)} and let the input noise power of the device be \overline{n_{in}^2(t)}. The input signal-to-noise ratio SNR_in is defined as the ratio of the total available signal power to the total available noise power at the input and is given by

SNR_{in} = \frac{\overline{s_{in}^2(t)}}{\overline{n_{in}^2(t)}}    (4.34)
Thus the SNR as defined above gives an indication of the amount of noise power relative to the signal power. The signal-to-noise ratio at the output of the device is defined analogously. Note also that the definition of the signal-to-noise ratio is independent of the noise source and type.
The SNR is a power ratio which is most often expressed in decibels:

SNR_{dB} = 10 \log_{10}(SNR)    (4.35)

Thus an SNR of 13 dB means that the signal power is about twenty times higher than the noise power, while SNR = 0 dB means equal signal and noise power.
Drill Problem 39 An amplifier has an input SNR of 12 dB. Calculate the noise power
at the input if the signal power is −40 dBm.
Drill Problem 40 A signal 6 cos(2πf t) V with f = 200 Hz is fed to the input of the filter in drill problem 37. Taking the values of drill problem 38, compute the signal-to-noise ratio at the output of the filter.
Planck’s Law
In 1900, Max Planck found the law that governs the emission of electromagnetic radi-
ation from a black body in thermal equilibrium [Planck, 1900]. A black body is simply
defined as an idealized, perfectly opaque material that absorbs all the incident radia-
tion at all frequencies, reflecting none. A body in thermodynamic equilibrium emits to
its environment the same amount of energy it absorbs from its environment. Hence, in
addition to being a perfect absorber, a blackbody also is a perfect emitter. The essential
point of Planck’s derivation is that energy can only be exchanged in discrete portions
or quanta equal to hf , where h is Planck’s constant h = 6.626 × 10−34 J s and f is the
frequency in Hertz.
Then, the energy of the ground level (or state) is 0, that of the first level hf, of the second level 2hf, and so on. In general:

E_v = v \cdot hf \quad \text{for } v = 0, 1, 2, \ldots    (4.36)

where v is the level or state number. If N_v oscillators occupy level v (in Planck's publication the exchanged portions hf are referred to as energy elements), the energy of that level is v N_v hf. The total energy is obtained by summing over all states, thus

E_{tot} = N_0 \cdot 0 + N_1 \cdot hf + N_2 \cdot 2hf + N_3 \cdot 3hf + \ldots    (4.37)
To determine the average energy, we divide the total energy by the total number of oscillators. The expression can be simplified to give:

E(f) = \frac{hf}{e^{hf/kT} - 1}    (4.39)
Using the density of modes we find Planck's law for black-body radiation. Expressed in terms of the brightness of the radiated energy from a blackbody, this is given by:

B_f(f) = \frac{2hf^3}{c^2}\, \frac{1}{e^{hf/kT} - 1}    (4.40)
Thermal radiation is system inherent and is generated through the random thermal
motion of electrons in a conducting medium such as a resistor. The path of each
electron is randomly oriented due to interaction with other electrons. The net effect
of the electron motion is a random current flowing in the conduction medium with an
average value of zero. The power spectral density of thermal noise is given by Planck’s
distribution law (4.39). For the normal range of temperatures and for frequencies well below the optical range, the parameter hf/kT is very small, so that e^{hf/kT} ≈ 1 + hf/kT, and (4.39) can be approximated by:

S_n(f) = kT    (4.41)
The power spectral density as given by (4.41) is independent of frequency and hence
is referred to as white noise spectrum. Within the bandwidth B the available noise
power then is
Pn = kT B (4.42)
The above expression shows that if the bandwidth is fixed, it is sufficient to know the temperature in order to compute the noise power. This is the reason why it is common to speak of the noise temperature when referring to the noise power (even if the noise source is not thermal).
For T =300 K, i.e. at room temperature, we get a noise power of NT = −114 dBm per
MHz bandwidth. It is worth remembering this number as a reference and using it to
compute the approximate noise power for a given bandwidth. For example the noise
power for a 20 MHz system would be NT = −101 dBm.
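The reference numbers quoted above are easy to reproduce from (4.42); a small sketch (k is Boltzmann's constant):

```python
import math

k = 1.380649e-23          # Boltzmann's constant in J/K

def thermal_noise_dbm(T, B):
    """Available thermal noise power kTB expressed in dBm."""
    return 10 * math.log10(k * T * B / 1e-3)

print(thermal_noise_dbm(300, 1e6))    # about -113.8 dBm per MHz at T = 300 K
print(thermal_noise_dbm(300, 20e6))   # about -100.8 dBm for a 20 MHz system
```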
Knowing the power available to the network, we want to define the circuit equivalent of the noisy resistor. This is done by considering a voltage source of rms voltage V_n connected in series with a noise-free resistor R. For the available power of this equivalent source to equal kTB_N, the rms noise voltage must be

V_n = \sqrt{4kTRB_N}

Figure 4.8: Noisy resistor connected to a network (left) and its equivalent circuit (right)
Space is the source of mostly broadband noise, which can be considered as plane electromagnetic waves. Cosmic radiation has to be accounted for if either the main lobe or the sidelobes of the receiving antenna are directed towards space. The noise sources are both thermal and non-thermal emissions from the Sun, the Moon, Cassiopeia A and the planets, and from elsewhere in our galaxy and other galaxies.
If the emission is of thermal origin, its contribution to the noise power can be described through the spectral brightness as given in (4.40), which is the power density in Watt per unit solid angle per unit area per Hertz. At radio frequencies, where hf ≪ kT, the spectral brightness B_f is given by the Rayleigh-Jeans law:

B_f = \frac{2kT_c}{\lambda^2} \quad \left[\frac{\text{Watt}}{\text{m}^2 \cdot \text{sr} \cdot \text{Hertz}}\right]    (4.46)
where
Tc is the brightness temperature,
λ is the wavelength and
k is Boltzmann’s constant
The actual noise power received within a narrow frequency range depends on the
direction of the main lobe and the side lobes of the receiving antenna and on the
effective area of the antenna. Thus in general the spectral brightness of an extended
source is a function of the direction relative to the antenna coordinates. For discrete
sources (such as the Sun), which lie within the main lobe of the antenna and subtend
a solid angle Ωs that is much smaller than the antenna main-beam solid angle, the
spectral power density becomes
2kTc
p= Ωs W m−2 Hz (4.47)
λ2
Further use of the spectral brightness will be made in a later section, when the total noise power received by an antenna is evaluated in detail.
In the general case B_f varies as λ^n, where n is known as the spectral index; for the thermal emission of a black body n = −2. For non-thermal emission (4.46) can still be used, but the brightness temperature T_c is then no longer related to thermal emission.
When energy is absorbed by a body, the same energy is reradiated as noise, as shown by the theory of black-body emission; otherwise the temperature of some bodies would rise and that of others fall. In the case of a radiating antenna, the energy is partially absorbed by the atmosphere and reradiated as noise. The effective absorption noise temperature T_ab as a function of the ambient temperature T_a and the attenuation L_a is:

T_{ab} = T_a (L_a - 1)    (4.48)

Note that T_ab is not identical to the physical (ambient) temperature of the atmosphere and increases with increasing atmospheric attenuation. Table 4.1 shows some values for L_a and T_ab when the ambient temperature T_a is 300 K.
La [dB] 0 1 3 10
La (power ratio) 1 1.26 2 10
Tab [K] 0 78 300 2700
Table 4.1: Absorption noise temperature Tab for different values of attenuation and an
ambient temperature Ta = 300K
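The entries of Table 4.1 follow directly from (4.48); a small sketch:

```python
def absorption_noise_temperature(La_dB, Ta=300.0):
    """Effective absorption noise temperature Tab = Ta * (La - 1), with La given in dB."""
    La = 10 ** (La_dB / 10)          # attenuation as a power ratio
    return Ta * (La - 1)

for La_dB in (0, 1, 3, 10):
    print(La_dB, "dB ->", round(absorption_noise_temperature(La_dB)), "K")
```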
Shot noise: this type of noise occurs when the quantisation of the electrical charge carriers becomes manifest. It arises in physical devices when a charged particle moves through a potential gradient without collision and with a random starting time. This is the case in vacuum tubes, due to the random emission of electrons from the cathode, and in many semiconductor components, as a result of the diffusion of minority carriers and the random generation and recombination of electron-hole pairs. For these cases the power spectral density is approximately flat up to frequencies on the order of 1/τ, where τ is the transit time or lifetime of the charge carriers. In terms of the mean current, the power spectral density is

S_{shot} = q\,\overline{i(t)} + 2\pi\, \overline{i(t)}^2\, \delta(f)    (4.49)

where q is the charge of an electron, q = 1.6 · 10^{-19} coulomb. The first term in (4.49) corresponds to the ac or fluctuation part of the noise current and the second term corresponds to the nonzero mean value.
1/f noise: many components exhibit 1/f noise, which appears at low frequencies (below roughly 1 MHz, 10 kHz or 1 kHz, depending on the process). Several theories exist about the origin of this noise, which is difficult to measure because of the low frequencies involved.
[Misra and Moreira, 1991] Misra, T. and Moreira, A. (1991). Simulation and perfor-
mance evaluation of the real-time processor for the E-SAR system of DLR. Tech.
note, German Aerospace Center, Microwaves and Radar Institute.
[Papoulis, 1984] Papoulis, A. (1984). Probability, random variables, and stochastic
processes. McGraw-Hill, 2 edition.
[Planck, 1900] Planck, M. (1900). Zur Theorie des Gesetzes der Energieverteilung im Normalspectrum. Verhandlungen der Deutschen Physikalischen Gesellschaft, 2:237–245.
[Proakis, 1989] Proakis, J. G. (1989). Digital Communications. McGraw-Hill, 2 edition.
[Stremler, 1982] Stremler, F. G. (1982). Introduction to Communication Systems.
Addison-Wesley, 2 edition.
[Widrow et al., 1996] Widrow, B., Kollár, I., and Liu, M.-C. (1996). Statistical theory of quantization. IEEE Transactions on Instrumentation and Measurement, 45(2):353–361.
5 Noise Applications
Consider the multi-channel communication system of Fig. 5.1. For each channel, an antenna is connected to an amplifier (receiver) through a lossy transmission line. The beam-former multiplies the signal of each channel by a complex weight and sums up all the weighted signals. In this section we shall develop expressions for the noise power at the output of each stage of the system. This will include the contribution of the noise received by the antenna, the absorption noise of the transmission line, the thermal noise of the receiver and the effect of the beam-former.
Figure 5.1: Multi-channel receiving system: in each channel the antenna signal s_i(t) passes through a lossy transmission line (L_T) and an amplifier (gain G, noise figure NF) before being weighted by w_i and summed in the beam-former
Any real device always adds some noise, so that the input signal-to-noise ratio is higher than the output signal-to-noise ratio. To measure the amount of degradation, we define the noise figure NF as the ratio of the input to the output signal-to-noise ratio:

NF = \frac{SNR_{in}}{SNR_{out}}    (5.1)
By definition, a fixed value for the input noise power is used when determining the
noise figure of a device using (5.1). This noise power is equivalent to the thermal noise
power provided by a resistor (as described in section 4.4.1) matched to the input and
at a temperature of T0 = 290 K.
The noise figure is commonly expressed in decibels:

NF_{dB} = 10 \log_{10} NF    (5.2)

The noise figure of a perfect noise-free device is unity (or 0 dB), and the introduction of additional noise causes the noise figure to be larger than unity, i.e. NF_{dB} > 0 dB.
Figure 5.2: Noisy two port device and its equivalent model
Consider the two port device A as shown in Fig. 5.2 with the transfer function H(f )
and the equivalent noise bandwidth BN . The gain of the device, defined as the ratio of
the signal output power to the signal input power, is G. Thus the output signal power
is1 Sout = Sin G. The output noise power consists of the amplified input noise Nin G in
addition to the noise added by the device itself Nadded , thus Nout = Nin G + Nadded . To
describe this added noise an equivalent noise free device A′ will be assumed with a
noise generator at its input such that the total output noise of A and A′ are equal. As
shown in Fig. 5.2 the output noise becomes Nout = Nin G + Ne G and through (5.1) the
noise figure can be represented as
NF = \frac{SNR_{in}}{SNR_{out}} = \frac{S_{in}/N_{in}}{S_{in}G/(N_{in}G + N_e G)} = 1 + \frac{N_e}{N_{in}}    (5.3)
¹ For simplicity we will write S_{out,in} instead of P_{S_{out,in}}, and N_{out,in} instead of P_{N_{out,in}}. These quantities denote the total power as described in section 4.2.4, equation (4.21).
The additional noise can be assumed to originate from an equivalent thermal noise source at temperature T_e, thus N_e = kT_eB_N. By definition the input noise is N_{in} = kT_oB_N, and substituting into (5.3) gives

NF = 1 + \frac{kT_e B_N}{kT_o B_N} = 1 + \frac{T_e}{T_o}    (5.4)
It should be noted that the effective temperature T_e is merely the equivalent physical temperature of a resistor that would generate the same noise power as the device; the actual noise source need not be thermal. Nevertheless, (5.4) gives a simple formula to calculate the effective temperature from the noise figure. The noise figure is useful for comparing different systems regarding their noise performance. The noise temperature, on the other hand, can be used directly to calculate the actual amount of noise present in the system.
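A small helper illustrating the conversion in (5.4); the reference temperature T0 = 290 K is the standard value mentioned above:

```python
import math

def nf_to_te(nf_db, T0=290.0):
    """Effective noise temperature Te from a noise figure given in dB, cf. (5.4)."""
    return T0 * (10 ** (nf_db / 10) - 1)

def te_to_nf(Te, T0=290.0):
    """Noise figure in dB from an effective noise temperature Te."""
    return 10 * math.log10(1 + Te / T0)

print(nf_to_te(3.0))    # about 289 K: a 3-dB noise figure roughly doubles the input noise
print(te_to_nf(50.0))   # about 0.69 dB for Te = 50 K
```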
A better understanding of the meaning of the noise figure is obtained by rewriting equation (5.3):

NF = 1 + \frac{N_e}{N_{in}} = \frac{N_{in} + N_e}{N_{in}} = \frac{G(N_{in} + N_e)}{GN_{in}} = \frac{N_{out}}{GN_{in}}.    (5.5)
From the above expression it is seen that the noise figure can be defined as the ratio
of the total output noise to the total output noise of the noise free device, i.e.
NF = \frac{\text{total output noise}}{\text{total output noise of noise-free device}}    (5.6)
Note: At first glance (5.4) and (5.5) seem to be frequency independent, since the noise figure does not depend on the transfer function of the device. The justification would be that both noise and signal pass through the same device, so that |H(f)| cancels out when forming the signal-to-noise ratio. This, however, is not correct, since the noise generated within the device, N_added, will be frequency dependent in most cases; thus we should write T_e(f) and keep in mind that the noise figure F in (5.4) can at most be assumed constant within some frequency range. Based on (5.5) we can write an expression for the band noise figure \overline{NF}, which is frequency independent and gives the noise figure for the total frequency band:

\overline{NF} = \frac{\int_0^{\infty} F(f)\,|H(f)|^2 N_{in}\, df}{\int_0^{\infty} |H(f)|^2 N_{in}\, df} = \frac{\int_0^{\infty} F(f)\,|H(f)|^2\, df}{\int_0^{\infty} |H(f)|^2\, df} = \frac{\int_0^{\infty} F(f)\,|H(f)|^2\, df}{B_N |H(f_o)|^2}    (5.7)
In the above equation the gain G has been replaced by the squared magnitude of the transfer function, |H(f)|², which relates the input and output spectral power densities (see (4.28)). The total output noise at each frequency is found as the product of the output noise of the noise-free device, |H(f)|²N_in, and the noise figure. The last term in (5.7) makes use of the equivalent noise bandwidth from section 4.3.3.
In this section expressions for the noise figure for a combination of cascaded networks
will be derived. Consider the cascaded two-port devices shown in Fig. 5.3.
Figure 5.3: Equivalent model for the transmission of noise through a cascaded system
Using the definition of the noise figure from (5.5), and knowing that the total noise power output is N_{out2} = G_1G_2(N_{in} + N_{e1}) + G_2N_{e2}, the noise figure of the system NF_{12} becomes:

NF_{12} = \frac{\text{total output noise}}{\text{total output noise of noise-free device}} = \frac{G_1 G_2 (N_{in} + N_{e1}) + G_2 N_{e2}}{N_{in} G_1 G_2}    (5.8)
Expanding (5.8) and using N_{e2}/N_{in} = NF_2 − 1 gives the cascade formula

NF_{12} = NF_1 + \frac{NF_2 - 1}{G_1}    (5.9)

The effective noise temperature of the cascaded system can readily be obtained from NF_{12} and (5.4).
If the two-port networks are assumed to have identical input and output impedances, it can be shown that the minimum of the equivalent noise figure (or the equivalent temperature) is reached if the networks are arranged in order of increasing noise figure of the individual stages.
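A sketch of the cascade rule, written for an arbitrary number of stages; the stage gains and noise figures below are purely illustrative:

```python
import math

def cascade_noise_figure(stages):
    """Total noise figure of cascaded two-ports, stages = [(gain_linear, nf_linear), ...].
    Generalises (5.9): NF = NF1 + (NF2 - 1)/G1 + (NF3 - 1)/(G1*G2) + ..."""
    nf_total, gain_product = 0.0, 1.0
    for i, (G, NF) in enumerate(stages):
        nf_total += NF if i == 0 else (NF - 1) / gain_product
        gain_product *= G
    return nf_total

db = lambda x: 10 ** (x / 10)
# illustrative chain: LNA (G = 20 dB, NF = 1 dB) followed by a mixer (G = -6 dB, NF = 8 dB)
stages = [(db(20), db(1)), (db(-6), db(8))]
NF12 = cascade_noise_figure(stages)
print(10 * math.log10(NF12), "dB total noise figure")
print(290 * (NF12 - 1), "K equivalent noise temperature")   # via (5.4)
```

Note how the first stage dominates: the second stage's contribution is suppressed by the gain of the first, which is the reasoning behind the ordering rule stated above.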
The noise figure of cascaded networks (5.9) provides a simple and convenient way to evaluate the noise performance of a system. An important point to note, however, is that the noise figure assumes a perfect match between the inputs and outputs of all the networks in the cascaded structure. If this condition is not fulfilled, the easy-to-use equations can no longer be applied; the procedure is nevertheless straightforward and mainly involves the derivations already made.
Consider the system shown in Fig. 5.1. The voltage at the terminals of the input of the
beam-former is
x(t) = a(φ, ϑ)s(t) + n(t) (5.10)
where the first term represents the signal, while the second term is the noise voltage. Later the quantity of interest will be the power, which, assuming root-mean-square voltages, is written as

p_x = \langle x(t)\, x^*(t) \rangle = |a(\varphi, \vartheta)|^2 \langle |s(t)|^2 \rangle + \langle |n(t)|^2 \rangle    (5.11)

where ⟨·⟩ denotes time-domain averaging and ∗ the complex conjugate. In the above it has been assumed that the signal of interest is uncorrelated with the noise, thus ⟨s(t)n∗(t)⟩ = 0, which is true for a non-multiplicative internal noise contribution.
In the following we investigate the various noise contributions, up to the input of the
beam-former.
The noise at the output terminals of a lossless antenna (cf. Fig. ??) is considered. All real antennas are directional antennas [Balanis, 1997], i.e. they radiate or receive electromagnetic waves more effectively in some directions than in others. The directional properties of an antenna can be described through the radiation pattern (cf. section 1.5). If we assume spherical coordinates, the radiation pattern C(θ, ψ) gives the ratio of the field strength in a given direction (θ, ψ) from the antenna to the maximum field strength. Whether we are dealing with a receiving or a transmitting antenna makes no difference for the definition of C(θ, ψ). Since the radiation pattern is normalized to the maximum value, the ratio of the power density to the maximum power density is given by C²(θ, ψ). When dealing with the noise received by the antenna, we will use the radiation pattern as a weighting function to combine the effect of the different noise sources with the directional properties of the antenna.
All noise sources contributing to the total noise power received by the antenna will be represented through their brightness B as described in section 4.4.2. As seen by the antenna, the brightness will be a function of the direction (θ, ψ) referred to the antenna coordinates. The spectral density per unit area of the noise power is then given by:

s = \frac{1}{2} \iint_{4\pi} B(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega \quad [\text{W}/(\text{m}^2 \cdot \text{Hz})]    (5.12)
In the above equation the noise is assumed to be unpolarised, while the antenna is assumed to receive a single polarisation; this is the reason for the factor of 1/2, since only half the noise power is received. Within the bandwidth ∆f around the centre frequency f_o the power density is

S = \int_{f_o - \Delta f/2}^{f_o + \Delta f/2} \frac{1}{2} \iint_{4\pi} B(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega\, df \quad [\text{W}/\text{m}^2]    (5.13)
The available power at the antenna terminals is calculated using the effective antenna area A_r, which by definition is the area that, when multiplied by the incident power density, gives the power delivered to the load (cf. section 1.5). The effective area and the radiation pattern are related through

A_r = \frac{\lambda^2}{\iint_{4\pi} C^2(\theta, \psi)\, d\Omega}    (5.14)
Using A_r and (5.13) to calculate the available noise power N_ant we get

N_{ant} = \frac{A_r}{2} \int_{f_o - \Delta f/2}^{f_o + \Delta f/2} \iint_{4\pi} B(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega\, df    (5.15)
In the above equation the inner integral stands for the summation over all noise sources. The noise power depends strongly on the environment and on the look direction of the antenna. An antenna pointing towards empty space will have a very low noise temperature, on the order of 3 K, provided the side lobes are not pointing towards a noisy environment. A narrow-beam antenna, on the other hand, with its beam directed toward the Sun might encounter a noise temperature of up to 300 000 K. The outer integral in (5.15) sums the noise power over the frequency band of interest, and the multiplication by A_r transforms the power density at the antenna into the available power at the antenna terminals; the effective area thus also includes the effect of antenna mismatch.
An expression as given by (5.15) is not practical when dealing with or comparing different receiving antennas. What is needed is a simple figure of merit such as the equivalent noise temperature of the antenna, which can be found by simplifying (5.15). A narrow bandwidth ∆f is assumed, such that the spectral brightness (4.46) can be considered constant over ∆f. In addition, if the integration of the spectral brightness is compared to the integration of the frequency transfer function as described in section 4.3.3, an equivalent bandwidth B_N can be introduced which reduces the integration over ∆f to a multiplication by the equivalent bandwidth at f_0 = c/λ_0; thus (5.15) becomes
N_{ant} = \frac{A_r}{2} \int_{f_o - \Delta f/2}^{f_o + \Delta f/2} \iint_{4\pi} \frac{2kT_c(\theta, \psi)}{\lambda^2}\, C^2(\theta, \psi)\, d\Omega\, df    (5.16)
       = \frac{A_r}{2} \int_{f_o - \Delta f/2}^{f_o + \Delta f/2} \frac{2k}{\lambda^2} \iint_{4\pi} T_c(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega\, df
       = A_r B_N \frac{k}{\lambda_0^2} \iint_{4\pi} T_c(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega
Comparing the above equation with (4.42) for the thermal noise power from a resistor immediately suggests the definition of an equivalent antenna noise temperature T_ant of the form

T_{ant} = \frac{\iint_{4\pi} T_c(\theta, \psi)\, C^2(\theta, \psi)\, d\Omega}{\iint_{4\pi} C^2(\theta, \psi)\, d\Omega}    (5.18)
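A sketch of evaluating (5.18) numerically on a (θ, ψ) grid; the brightness-temperature map and the radiation pattern used here are placeholders, not the scenario of any drill problem:

```python
import numpy as np

theta = np.linspace(0, np.pi, 361)           # polar angle
psi = np.linspace(0, 2 * np.pi, 721)         # azimuth
TH, PS = np.meshgrid(theta, psi, indexing="ij")

# placeholder scene: cold sky above the horizon, warm ground below
Tc = np.where(TH < np.pi / 2, 10.0, 290.0)

# placeholder pattern: a pencil beam towards zenith plus a -30 dB sidelobe floor
C2 = np.maximum(np.exp(-(TH / np.radians(2.0)) ** 2), 1e-3)

dOmega = np.sin(TH) * (theta[1] - theta[0]) * (psi[1] - psi[0])
Tant = np.sum(Tc * C2 * dOmega) / np.sum(C2 * dOmega)     # eq. (5.18)
print(f"antenna noise temperature: {Tant:.1f} K")
```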
Figure 5.4: An antenna enclosed in an absorbing chamber at temperature T delivers the noise power N_out = kTB_N to a receiver of bandwidth B_N, just like a resistor R_r at physical temperature T

With this definition, and using (5.14), the available noise power at the antenna terminals becomes

N_{ant} = kT_{ant}B_N    (5.19)

For an antenna completely enclosed by an absorber at the constant temperature T, as sketched in Fig. 5.4, T_ant = T and the available power is kTB_N,
which is the same power available from a resistor at temperature T, as given in section 4.4.1. From the standpoint of an ideal receiver of bandwidth B_N, the antenna connected to its input terminals is equivalent to a resistance R_r known as the antenna radiation resistance. Although in both cases the receiver is connected to a "resistor", in the case of the real resistor the noise power available at its output terminals is determined by the physical temperature of the resistor, while in the case of the antenna the available power is determined by the temperature of the blackbody enclosure, whose walls may be at any distance from the antenna. Moreover, the physical temperature of the antenna structure has no bearing on its output power as long as the antenna is lossless.
An important insight from (5.19) is that the total noise power received by an antenna is independent of the radiation pattern of the antenna if the surrounding environment is assumed to have a constant brightness temperature.
Drill Problem 42 A reflector antenna used for geostationary satellite receiving is po-
sitioned such that its main beam lies at 37◦ above the horizon. For simplicity the
main beam is supposed to extend over ± 1◦ both in elevation and azimuth as shown
in Fig. 5.5(a). The value of the radiating pattern outside the main beam is −32 dB.
As shown in Fig. 5.5(b), the antenna ’sees’ the sky with a brightness temperature of
Tc,sky = 2 K for θ < 90◦ , the sun at Tc,sun = 8000 K within a solid angle of Ωs = 0.5◦ and
the earth at Tc,earth = 300 K for θ > 90◦ . Calculate the antenna noise temperature with
the sun outside the main beam of the antenna.
The noise temperature of a lossy transmission line can be calculated using the results
of section 4.4.3. Consider the transmission line shown in Fig. 5.1. If the transmission
line is at the physical temperature Tp and has an attenuation of LT = 1/GT = Pin /Pout ,
then the equivalent noise temperature TeT at the input of the transmission line as given
Figure 5.5: (a) Antenna pattern (not to scale) and (b) Brightness temperature
by (4.48) is
TeT = Tp (LT − 1) (5.20)
which corresponds to a thermal noise power of kTeT BN . If the transmission line is
connected to the terminals of an antenna having the noise temperature Tant , the total
noise power at the input of the transmission line becomes NinT = k(TeT + Tant )BN . The
noise power at the output of the transmission line is then
N_{outT} = \frac{N_{inT}}{L_T} = \frac{kT_p(L_T - 1)B_N}{L_T} + \frac{kT_{ant}B_N}{L_T}    (5.21)
and the noise temperature at the output becomes
T_{outT} = \frac{T_p(L_T - 1)}{L_T} + \frac{T_{ant}}{L_T}    (5.22)
which again corresponds to the noise power kToutT BN at the output of the transmission
line. The above results are only valid if the antenna is matched to the transmission line,
thus assuming no reflected power at the transmission line terminal.
In practice, real antennas are not lossless devices. Part of the energy received (or transmitted) by the antenna is absorbed by the antenna material in the form of heat loss. This would require including the antenna losses when calculating the noise temperature T_outT. If the calculations are carried out, it is easily seen that the antenna losses may just as well be included in the transmission-line losses without altering the results, provided both the antenna and the transmission line are at the same physical, i.e. ambient, temperature T_p. This means that equation (5.22) can be used with the antenna losses included in L_T.
5.2.3 Amplifier
Until now, the noise contributions of the antenna and the transmission line have been considered using the equivalent noise temperature. All quantities are referenced to the input of the amplifier, i.e. the noise power computed is that at the input of the amplifier. Next, the aim is to obtain the noise power at the output of the amplifier, which is the noise power that goes into the beam-former. This is the noise power kT_outT B_N amplified by the gain G of the amplifier, to which we add the contribution of the amplifier itself as given by its noise figure NF. The total output noise power then becomes
\frac{\langle |n(t)|^2 \rangle}{R_o} = k\left[\frac{T_{ant}G}{L_T} + \frac{T_p(L_T - 1)G}{L_T} + T_0(NF - 1)G\right]B_N    (5.23)
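A numeric sketch of the noise budget in (5.23) for one channel; all component values are assumed for illustration only:

```python
import math

k = 1.380649e-23       # Boltzmann's constant in J/K
BN = 20e6              # equivalent noise bandwidth in Hz (assumed)
Tant = 60.0            # antenna noise temperature in K (assumed)
Tp = 290.0             # physical temperature of the lossy line in K (assumed)
LT = 10 ** (1.5 / 10)  # 1.5 dB transmission-line loss (assumed)
G = 10 ** (30 / 10)    # 30 dB amplifier gain (assumed)
NF = 10 ** (2.0 / 10)  # 2 dB amplifier noise figure (assumed)
T0 = 290.0             # reference temperature

# noise power at the amplifier output, cf. (5.23)
N_out = k * (Tant * G / LT + Tp * (LT - 1) * G / LT + T0 * (NF - 1) * G) * BN
print(f"output noise power: {10 * math.log10(N_out / 1e-3):.1f} dBm")
```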
5.2.4 Beam-Forming
Next the beam-forming is considered. To account for the multiple input signals, we add the index i, where i = 1 . . . N and N is the total number of inputs (channels) to the beam-former. Then (5.10) for channel i becomes

x_i(t) = a_i(\varphi, \vartheta)\, s(t) + n_i(t)    (5.24)

Note that in the above equation the signal s(t) is not indexed, since it is the same for all input channels; this is consistent with taking the electric field strength at the antenna aperture to be E(t) for all channels, since it is attributed to a single point source at (φ, ϑ).
Beamforming can then be described as the operation

y(t) = \sum_{i=1}^{N} w_i\, x_i(t)    (5.25)

where the w_i are the (complex) unitless weights. Using vector notation, thus writing x = [x_1(t), x_2(t), \ldots, x_N(t)]^T, this becomes:

y(t) = \mathbf{w}^T \mathbf{x}(t) = \mathbf{w}^T \mathbf{a}(\varphi, \vartheta)\, s(t) + \mathbf{w}^T \mathbf{n}(t)    (5.26)
It should be pointed out, that the term wT a(φ, ϑ) in the above equation can be used
to explain the antenna array properties and effects; specifically, if normalized, this term
represents the radiation pattern.
Now the power p_y at the output of the beam-former becomes

p_y = \langle y(t)\, y^*(t) \rangle = |\mathbf{w}^T \mathbf{a}(\varphi, \vartheta)|^2 \langle |s(t)|^2 \rangle + \mathbf{w}^H \langle \mathbf{n}^* \mathbf{n}^T \rangle \mathbf{w}    (5.27)

where the earlier assumption that the signal and noise are uncorrelated is maintained. If the noise of the individual channels is taken to be uncorrelated, then ⟨n_i(t)n_j^*(t)⟩ = 0 for i ≠ j and ⟨n^* n^T⟩ becomes a diagonal matrix.
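A sketch of the beam-former output power for uncorrelated channel noise; the array geometry, steering vector, weights and power levels are placeholders chosen for illustration:

```python
import numpy as np

N = 4                                             # number of channels
phi = np.radians(20.0)                            # assumed direction of the point source
d_over_lambda = 0.5                               # assumed element spacing in wavelengths
a = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(N) * np.sin(phi))  # steering vector

w = np.conj(a) / N                                # simple phase-aligned (conjugate) weights
Ps = 1e-9                                         # signal power per channel, <|s|^2> (assumed)
Pn = 4e-10                                        # noise power per channel (assumed, uncorrelated)
Rn = Pn * np.eye(N)                               # diagonal noise covariance <n* n^T>

p_signal = np.abs(w @ a) ** 2 * Ps                # |w^T a|^2 <|s|^2>
p_noise = np.real(np.conj(w) @ Rn @ w)            # w^H <n* n^T> w
print(p_signal / p_noise, "output SNR vs", Ps / Pn, "per channel")
```

With uncorrelated noise and phase-aligned weights the output SNR improves by a factor of N relative to a single channel, which is the array gain exploited by the beam-former.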
[Balanis, 1997] Balanis, C. (1997). Antenna Theory Analysis & Design. John Wiley &
Sons, 2 edition.
[Bundy, 1998] Bundy, S. C. (1998). Noise figure, antenna temperature and sensitivity
level for wireless communication receivers. Microwave Journal, 41(3):108–116.