Computer Networks UNIT-4 Syllabus: The Medium Access Control Sublayer-The Channel Allocation Problem-Static
UNIT-4
In principle, CDMA could be used, but a number of constraints make it difficult to apply in the general case: for example, senders and receivers must know the chip sequence beforehand, and the scheme requires tight synchronization.
Assumptions for Dynamic Channel Allocation:
1. Independent Traffic. The model consists of N independent stations (e.g., computers,
telephones), each with a program or user that generates frames for transmission. The expected
number of frames generated in an interval of length Δt is λΔt, where λ is a constant (the arrival
rate of new frames). Once a frame has been generated, the station is blocked and does nothing
until the frame has been successfully transmitted.
2. Single Channel. A single channel is available for all communication. All stations can transmit
on it and all can receive from it. The stations are assumed to be equally capable, though
protocols may assign them different roles (e.g., priorities).
3. Observable Collisions. If two frames are transmitted simultaneously, they overlap in time and
the resulting signal is garbled. This event is called a collision. All stations can detect that a
collision has occurred. A collided frame must be transmitted again later. No errors other than
those generated by collisions occur.
4. Continuous or Slotted Time. Time may be assumed continuous, in which case frame
transmission can begin at any instant. Alternatively, time may be slotted or divided into discrete
intervals (called slots). Frame transmissions must then begin at the start of a slot. A slot may
contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a
collision, respectively.
5. Carrier Sense or No Carrier Sense. With the carrier sense assumption, stations can tell if the
channel is in use before trying to use it. No station will attempt to use the channel while it is
sensed as busy. If there is no carrier sense, stations cannot sense the channel before trying to use
it. They just go ahead and transmit. Only later can they determine whether the transmission was
successful.
MULTIPLE ACCESS PROTOCOLS:
When nodes or stations are connected and use a common link, called a multipoint or broadcast
link, we need a multiple-access protocol to coordinate access to the link. The problem of
controlling the access to the medium is similar to the rules of speaking in an assembly. Many
formal protocols have been devised to handle access to a shared link. We categorize them into
three groups: random access, controlled access, and channelization.
RANDOM ACCESS
In a random access method, each station has the right to the medium without being controlled by
any other station. However, if more than one station tries to send, there is an access conflict, called a collision, and the frames will be either destroyed or modified. To avoid the access conflict, or to resolve it when it happens, each station follows a procedure to answer the following questions:
When can the station access the medium?
What can the station do if the medium is busy?
How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?
The random access methods have evolved from a very interesting protocol known as ALOHA,
which used a very simple procedure called multiple access (MA). The method was improved
with the addition of a procedure that forces the station to sense the medium before transmitting.
This was called carrier sense multiple access. This method later evolved into two parallel
methods: carrier sense multiple access with collision detection (CSMA/CD) and carrier sense
multiple access with collision avoidance (CSMA/CA). CSMA/CD tells the station what to do
when a collision is detected. CSMA/CA tries to avoid the collision.
ALOHA:
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new
and elegant method to solve the channel allocation problem. Their work has been extended by
many researchers since then (Abramson, 1985). We will discuss two versions of ALOHA here:
pure and slotted.
Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be
sent. There will be collisions, of course, and the colliding frames will be damaged. However, due
to the feedback property of broadcasting, a sender can always find out whether its frame was
destroyed by listening to the channel, the same way other users do.
If the frame was destroyed, the sender just waits a random amount of time and sends it again.
The waiting time must be random or the same frames will collide over and over, in lockstep.
Systems in which multiple users share a common channel in a way that can lead to conflicts are
widely known as contention systems.
A sketch of frame generation in an ALOHA system is given in figure. We have made the frames
all the same length because the throughput of ALOHA systems is maximized by having a
uniform frame size rather than by allowing variable length frames.
We assume that the population of users generates new frames according to a Poisson distribution with mean N frames per frame time. If N > 1, the user community is generating frames at a higher rate than the channel can handle, and nearly every frame will suffer a collision. For reasonable throughput we would expect 0 < N < 1.
In addition to the new frames, the stations also generate retransmissions of frames that
previously suffered collisions. Let us further assume that the probability of k transmission
attempts per frame time, old and new combined, is also Poisson, with mean G per frame time.
Clearly, G ≥N. At low load (i.e., N ≈0), there will be few collisions, hence few retransmissions,
so G≈N. At high load there will be many collisions, so G > N. Under all loads, the throughput, S,
is just the offered load, G, times the probability, P0, of a transmission succeeding—that is, S =
GP0, where P0 is the probability that a frame does not suffer a collision.
A frame will not suffer a collision if no other frames are sent within one frame time of its start,
as shown in figure. Let t be the time required to send a frame. If any other user has generated a
frame between time t0 and t0 + t, the end of that frame will collide with the beginning of the
shaded one. In fact, the shaded frame's fate was already sealed even before the first bit was sent,
but since in pure ALOHA a station does not listen to the channel before transmitting, it has no
way of knowing that another frame was already underway. Similarly, any other frame started
between t0 + t and t0 + 2t will bump into the end of the shaded frame.
Figure 4-2. Vulnerable period for the shaded frame.
The probability that k transmission attempts are made during a given frame time, in which G attempts are expected, is given by the Poisson distribution

Pr[k] = (G^k e^-G) / k!    (Equation 4)

so the probability of zero frames is just e^-G. In an interval two frame times long, the mean number of frames generated is 2G. The probability of no other traffic being initiated during the entire vulnerable period is thus given by P0 = e^-2G. Using S = GP0, we get

S = G e^-2G
The relation between the offered traffic and the throughput is shown in figure.
The maximum throughput occurs at G = 0.5, with S = 1/(2e), which is about 0.184. In other words,
the best we can hope for is a channel utilization of 18 percent. This result is not very
encouraging, but with everyone transmitting at will, we could hardly have expected a 100
percent success rate.
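As a quick numerical check of S = G e^-2G, the following Python fragment (an illustration added here, not part of the original text) evaluates the pure ALOHA throughput for a few offered loads and confirms the peak near G = 0.5:

```python
import math

def pure_aloha_throughput(G):
    """Pure ALOHA throughput S = G * e^(-2G); the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

# The peak occurs at G = 0.5, giving S = 1/(2e), approximately 0.184.
for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G = {G:4.2f}  ->  S = {pure_aloha_throughput(G):.3f}")
```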
Slotted ALOHA
In Roberts' method, which has come to be known as slotted ALOHA, in contrast to pure
ALOHA, a computer is not permitted to send whenever a carriage return is typed. Instead, it is
required to wait for the beginning of the next slot. Thus, the continuous pure ALOHA is turned
into a discrete one. Since the vulnerable period is now halved, the probability of no other traffic
during the same slot as our test frame is e^-G, which leads to

S = G e^-G
As you can see from Figure, slotted ALOHA peaks at G = 1, with a throughput of S =1/e or
about 0.368, twice that of pure ALOHA. If the system is operating at G = 1, the probability of an
empty slot is 0.368 (from Eq. 4-2).
The best we can hope for using slotted ALOHA is 37 percent of the slots empty, 37 percent
successes, and 26 percent collisions. Operating at higher values of G reduces the number of
empties but increases the number of collisions exponentially. To see how this rapid growth of
collisions with G comes about, consider the transmission of a test frame. The probability that it will avoid a collision is e^-G, the probability that all the other users are silent in that slot. The probability of a collision is then just 1 - e^-G. The probability of a transmission requiring exactly k attempts (i.e., k - 1 collisions followed by one success) is

P_k = e^-G (1 - e^-G)^(k-1)

so the expected number of transmissions per frame is E = Σ k·P_k = e^G. As a result of the exponential dependence of E upon G, small increases in the channel load can drastically reduce its performance.
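To see this exponential dependence numerically, the short Python sketch below (an illustration under the Poisson assumptions above, not from the original text) computes the slotted ALOHA throughput S = G e^-G and the expected number of attempts E = e^G for several loads:

```python
import math

def slotted_aloha_throughput(G):
    """Slotted ALOHA throughput S = G * e^(-G); the vulnerable period is one slot."""
    return G * math.exp(-G)

def expected_attempts(G):
    """Expected number of transmission attempts per frame, E = e^G."""
    return math.exp(G)

for G in (0.5, 1.0, 2.0, 3.0):
    print(f"G = {G:3.1f}  S = {slotted_aloha_throughput(G):.3f}  E = {expected_attempts(G):.1f}")
```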
Slotted ALOHA is important for a reason that may not be initially obvious. It was devised in the 1970s, used in a few early experimental systems, and then almost forgotten. When Internet access over cable was invented, all of a sudden there was a problem of how to allocate a shared channel among multiple competing users, and slotted ALOHA was pulled out of the garbage can to save the day.
MAC Address:
A medium access control (MAC) address, or hardware address, is a unique value assigned to a network adapter on a LAN. This address is used to identify each node in a network in a
distinct manner. MAC addresses are 48 bits long and are represented as 12-digit hexadecimal numbers, as follows:
AA:AA:AA:BB:BB:BB or AA-AA-AA-BB-BB-BB
The first half specifies the manufacturer's ID of the adapter and the second half specifies the serial number associated with the adapter. This serial number is assigned to the adapter by the manufacturer.
In the widely used IEEE 802/OSI network architecture, the data link layer is divided into the MAC and logical link control (LLC) sublayers; MAC addresses are used to uniquely identify a computer on a network.
CARRIER SENSE MULTIPLE ACCESS PROTOCOLS:
With slotted ALOHA the best channel utilization that can be achieved is 1/e. In local area
networks, however, it is possible for stations to detect what other stations are doing, and adapt
their behavior accordingly. These networks can achieve a much better utilization than 1/e.
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are called
carrier sense protocols.
Persistent and Non-persistent CSMA:
1-persistent CSMA: When a station has data to send, it first listens to the channel to see if
anyone else is transmitting at that moment. If the channel is busy, the station waits until it
becomes idle. When the station detects an idle channel, it transmits a frame. If a collision occurs,
the station waits a random amount of time and starts all over again. The protocol is called 1-
persistent because the station transmits with a probability of 1 when it finds the channel idle.
The propagation delay has an important effect on the performance of the protocol. There is a
small chance that just after a station begins sending, another station will become ready to send
and sense the channel. If the first station's signal has not yet reached the second one, the latter
will sense an idle channel and will also begin sending, resulting in a collision.
Even if the propagation delay is zero, there will still be collisions. If two stations become ready
in the middle of a third station's transmission, both will wait politely until the transmission ends
and then both will begin transmitting exactly simultaneously, resulting in a collision. If they were
not so impatient, there would be fewer collisions. This approach will lead to a higher
performance than pure ALOHA.
Non-persistent CSMA: In this protocol, a station senses the channel. If no one else is sending,
the station begins doing so itself. However, if the channel is already in use, the station does not
continually sense it for the purpose of seizing it immediately upon detecting the end of the
previous transmission. Instead, it waits a random period of time and then repeats the algorithm.
Consequently, this algorithm leads to better channel utilization but longer delays than 1-
persistent CSMA.
P-persistent CSMA: It applies to slotted channels. When a station becomes ready to send, it
senses the channel. If it is idle, it transmits with a probability p. With a probability q = 1 - p, it
defers until the next slot. If that slot is also idle, it either transmits or defers again, with
probabilities p and q. This process is repeated until either the frame has been transmitted or
another station has begun transmitting. In the latter case, the unlucky station acts as if there had
been a collision (i.e., it waits a random time and starts again). If the station initially senses the
channel busy, it waits until the next slot and applies the above algorithm. Figure shows the
computed throughput versus offered traffic for all three protocols, as well as for pure and slotted
ALOHA.
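The per-slot decision of p-persistent CSMA can be sketched as follows. This is a minimal illustration, not a full simulator; channel_idle is a hypothetical helper that reports the sensed state of the channel in the current slot.

```python
import random

def p_persistent_transmit_slot(channel_idle, p, max_slots=1000):
    """Return the slot in which the station transmits, or None if another
    station seizes the channel first (treated as a collision/back-off case)."""
    for slot in range(max_slots):
        if not channel_idle():
            return None            # another station began transmitting; back off and retry later
        if random.random() < p:
            return slot            # transmit with probability p in this idle slot
        # with probability q = 1 - p, defer to the next slot and repeat
    return None
```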
CSMA/CD, as well as many other LAN protocols, uses the conceptual model of figure. At the
point marked t0, a station has finished transmitting its frame. Any other station having a frame to
send may now attempt to do so. If two or more stations decide to transmit simultaneously, there
will be a collision. Collisions can be detected by looking at the power or pulse width of the
received signal and comparing it to the transmitted signal.
The contention interval can be modeled as a slotted ALOHA system with slot width 2τ. On a 1-km-long coaxial cable, τ ≈ 5 μsec. For simplicity we will assume that each slot contains just 1 bit. Once the channel has been seized, a station can transmit at any rate it wants to, of course, not just at 1 bit per 2τ sec.
It is important to realize that collision detection is an analog process. The station's hardware must
listen to the cable while it is transmitting. If what it reads back is different from what it is putting
out, it knows that a collision is occurring. The implication is that the signal encoding must allow
collisions to be detected (e.g., a collision of two 0-volt signals may well be impossible to detect).
For this reason, special encoding is commonly used.
It is also worth noting that a sending station must continually monitor the channel, listening for
noise bursts that might indicate a collision. For this reason, CSMA/CD with a single channel is
inherently a half-duplex system. It is impossible for a station to transmit and receive frames at
the same time because the receiving logic is in use, looking for collisions during every
transmission.
To avoid any misunderstanding, it is worth noting that no MAC-sublayer protocol guarantees
reliable delivery. Even in the absence of collisions, the receiver may not have copied the frame
correctly for various reasons (e.g., lack of buffer space or a missed interrupt).
CSMA/CA:
CSMA/CA (carrier sense multiple access with collision avoidance) avoids collisions by using three strategies: the interframe space (IFS), the contention window, and acknowledgments.
1. Interframe Space (IFS)
Whenever the channel is found idle, the station does not transmit immediately. It waits for a
period of time called interframe space (IFS).
When channel is sensed to be idle, it may be possible that same distant station may have already
started transmitting and the signal of that distant station has not yet reached other stations.
Therefore the purpose of IFS time is to allow this transmitted signal to reach other stations.
If, after this IFS time, the channel is still idle, the station can send, but it still needs to wait a time equal to the contention time.
2. Contention Window
The contention window is an amount of time divided into slots. A station that is ready to send chooses a random number of slots as its wait time. The number of slots in the window changes according to the binary exponential back-off strategy: it is set to one slot the first time and then doubles each time the station cannot detect an idle channel after the IFS time. This is very similar to the p-persistent method except that a random outcome defines the number of slots taken by the waiting station. During the contention window, the station needs to sense the channel after each time slot. If the station finds the channel busy, it does not restart the process; it just stops the timer and restarts it when the channel is sensed as idle.
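A minimal sketch of this binary exponential back-off rule, written in Python for illustration (the slot-count ceiling is an assumed parameter, not a value from the text), looks like this:

```python
import random

def backoff_slots(failed_attempts, max_exponent=10):
    """Contention-window size starts at one slot and doubles after each attempt
    in which no idle channel was found, up to 2**max_exponent slots."""
    window = 2 ** min(failed_attempts, max_exponent)
    return random.randrange(window)   # random wait in [0, window) slots

print([2 ** k for k in range(5)])     # window growth: 1, 2, 4, 8, 16 slots
```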
3. Acknowledgment
Despite all these precautions, collisions may still occur and destroy the data. A positive acknowledgment and a time-out timer help guarantee that the receiver has received the frame.
CSMA/CA Procedure:
This is the CSMA protocol with collision avoidance. A station that is ready to transmit senses the line using one of the persistent strategies. As soon as it finds the line to be idle, the station waits for an IFG (interframe gap) amount of time.
It then waits for a random back-off time and sends the frame. After sending the frame, it sets a timer and waits for the acknowledgement from the receiver.
If the acknowledgement is received before the timer expires, the transmission is successful. But if the transmitting station does not receive the expected acknowledgement before the timer expires, it increments the back-off parameter, waits for the back-off time, and re-senses the line.
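The whole procedure can be summarized in a small sketch. The helpers send_frame and wait_for_ack, as well as the timing constants, are hypothetical placeholders used only to show the control flow (wait IFG, back off, send, wait for ACK, retry).

```python
import random
import time

def csma_ca_send(send_frame, wait_for_ack, ifg=0.01, slot=0.001, max_attempts=8):
    """Sketch of the CSMA/CA sender loop; assumes the line has already been
    sensed idle by one of the persistent strategies."""
    backoff_window = 1
    for attempt in range(max_attempts):
        time.sleep(ifg)                                        # interframe gap
        time.sleep(random.randrange(backoff_window) * slot)    # random back-off
        send_frame()
        if wait_for_ack(timeout=0.05):                         # ACK before the timer expires?
            return True                                        # transmission successful
        backoff_window *= 2                                    # increment the back-off parameter
    return False                                               # give up after max_attempts
```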
CONTROLLED ACCESS
In controlled access, the stations consult one another to find which station has the right to send.
A station cannot send unless it has been authorized by other stations. We discuss three popular
controlled-access methods.
Reservation
In the reservation method, a station needs to make a reservation before sending data. Time is
divided into intervals. In each interval, a reservation frame precedes the data frames sent in that
interval. If there are N stations in the system, there are exactly N reservation mini-slots in the
reservation frame. Each mini-slot belongs to a station. When a station needs to send a data frame,
it makes a reservation in its own mini-slot. The stations that have made reservations can send
their data frames after the reservation frame.
Figure 12.18 shows a situation with five stations and a five-minislot reservation frame. In the
first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station
1 has made a reservation.
Figure: The reservation access method.
Polling
Polling works with topologies in which one device is designated as a primary station and the
other devices are secondary stations. All data exchanges must be made through the primary
device even when the ultimate destination is a secondary device. The primary device controls the
link; the secondary devices follow its instructions. It is up to the primary device to determine
which device is allowed to use the channel at a given time. The primary device, therefore, is
always the initiator of a session (see Figure). If the primary wants to receive data, it asks the secondaries if they have anything to send; this is called the poll function. If the primary wants to send data, it tells the secondary to get ready to receive; this is called the select function.
Select
The select function is used whenever the primary device has something to send. Remember that
the primary controls the link. If the primary is neither sending nor receiving data, it knows the
link is available. If it has something to send, the primary device sends it. What it does not know,
however, is whether the target device is prepared to receive. So the primary must alert the
secondary to the upcoming transmission and wait for an acknowledgment of the secondary's
ready status. Before sending data, the primary creates and transmits a select (SEL) frame, one
field of which includes the address of the intended secondary.
Poll
The poll function is used by the primary device to solicit transmissions from the secondary
devices. When the primary is ready to receive data, it must ask (poll) each device in turn if it has
anything to send. When the first secondary is approached, it responds either with a NAK frame if
it has nothing to send or with data (in the form ofa data frame) if it does. If the response is
negative (a NAK frame), then the primary polls the next secondary in the same manner until it
finds one with data to send. When the response is positive (a data frame), the primary reads the
frame and returns an acknowledgment (ACK frame), verifying its receipt.
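The primary station's behaviour can be sketched as a simple loop. The helpers select, send_data, and poll are hypothetical stand-ins for the SEL/data/poll frame exchanges described above.

```python
def primary_cycle(secondaries, outgoing, select, send_data, poll):
    """One cycle of the primary station: select secondaries that have frames
    waiting for them, then poll every secondary for data in turn."""
    # Select function: the primary has something to send.
    for addr, frame in outgoing.items():
        if select(addr):               # SEL frame acknowledged: secondary is ready
            send_data(addr, frame)
    # Poll function: ask each secondary whether it has anything to send.
    for addr in secondaries:
        frame = poll(addr)             # returns a data frame, or None for a NAK
        if frame is not None:
            print(f"received and acknowledged a frame from {addr}")
```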
Figure: The select and poll functions in the polling access method.
Collision-Free Protocols:
Although collisions do not occur with CSMA/CD once a station has unambiguously captured the channel, they can still occur during the contention period. Here we examine some protocols that resolve the contention for the channel without any collisions at all, not even during the contention period.
Most of these protocols are not currently used in major systems, but in a rapidly changing field,
having some protocols with excellent properties available for future systems is often a good
thing.
We assume that there are exactly N stations, each programmed with a unique address from 0 to N
− 1. It does not matter that some stations may be inactive part of the time. We also assume that
propagation delay is negligible.
A Bit-Map Protocol:
In our first collision-free protocol, the basic bit-map method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during slot 0. No other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the
opportunity to transmit a 1 bit during slot 1, but only if it has a frame queued. In general, station j
may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots have
passed by, each station has complete knowledge of which stations wish to transmit. At that point,
they begin transmitting frames in numerical order.
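One contention period of the basic bit-map protocol can be sketched as follows (an illustration only; wants_to_send is a hypothetical per-station flag):

```python
def bitmap_contention_period(wants_to_send):
    """Each station j announces a frame by putting a 1 bit in slot j; afterwards
    the stations with a 1 bit transmit in numerical order."""
    reservation_slots = [1 if queued else 0 for queued in wants_to_send]
    return [j for j, bit in enumerate(reservation_slots) if bit == 1]

# Example: of 5 stations, stations 1, 3 and 4 have frames queued.
print(bitmap_contention_period([False, True, False, True, True]))   # -> [1, 3, 4]
```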
Figure: A token-passing network.
Binary Countdown:
A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be the same length. The bits in each address position from different stations are Boolean ORed together by the channel when they are sent at the same time. We will call this protocol binary countdown.
To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a
high-order bit position that is 0 in its address has been overwritten with a 1, it gives up. For
example, if stations 0010, 0100, 1001, and 1010 are all trying to get the channel, in the first bit
time the stations transmit 0, 0, 1, and 1, respectively. These are ORed together to form a 1.
Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the
channel, so they give up for the current round. Stations 1001 and 1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up.
The winner is station 1010 because it has the highest address. After winning the bidding, it may
now transmit a frame, after which another bidding cycle starts. The protocol is illustrated in Fig.
4-8. It has the property that higher- numbered stations have a higher priority than lower-
numbered stations, which may be either good or bad, depending on the context.
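The arbitration rule can be sketched directly in code. The following illustration (not from the text) replays the example above with 4-bit addresses and shows that station 1010 wins:

```python
def binary_countdown(addresses, width=4):
    """Stations broadcast their addresses high-order bit first; the channel ORs
    the bits together, and a station drops out as soon as one of its 0 bits is
    overwritten by a 1."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):                     # high-order bit first
        channel = max((a >> bit) & 1 for a in contenders)    # Boolean OR on the channel
        if channel == 1:
            contenders = [a for a in contenders if (a >> bit) & 1]
    return contenders[0]                                     # the highest address wins

print(format(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010]), '04b'))   # -> 1010
```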
Limited-Contention Protocols:
It would be nice if we could combine the best properties of the contention and collision-free protocols, arriving at a new protocol that used contention at low load to provide low delay, but used a collision-free technique at high load to provide good channel efficiency. Such protocols are called limited-contention protocols.
The only contention protocols we have studied have been symmetric. That is, each station
attempts to acquire the channel with some probability, p, with all stations using the same p.
Interestingly enough, the overall system performance can sometimes be improved by using a
protocol that assigns different probabilities to different stations.
Wireless LAN Protocols:
A common configuration for a wireless LAN is an office building with access points (APs)
strategically placed around the building. The APs are wired together using copper or fiber and
provide connectivity to the stations that talk to them.
There is an even more important difference between wireless LANs and wired LANs. A station
on a wireless LAN may not be able to transmit frames to or receive frames from all other stations
because of the limited radio range of the stations. In wired LANs, when one station sends a
frame, all other stations receive it. The absence of this property in wireless LANs causes a
variety of complications.
A naive approach to using a wireless LAN might be to try CSMA: just listen for other
transmissions and only transmit if no one else is doing so. The trouble is, this protocol is not
really a good way to think about wireless because what matters for reception is interference at
the receiver, not at the sender. To see the nature of the problem, consider Fig. 4-11, where four
wireless stations are illustrated. For our purposes, it does not matter which are APs and which are
laptops. The radio range is such that A and B are within each other’s range and can potentially
interfere with one another. C can also potentially interfere with both B and D, but not with A.
A wireless LAN. (a) A and C are hidden terminals when transmitting to B. (b) B and C are
exposed terminals when transmitting to A and D.
First consider what happens when A and C transmit to B, as depicted in Fig. 4-11(a). If A sends and
then C immediately senses the medium, it will not hear A because A is out of range. Thus C will
falsely conclude that it can transmit to B. If C does start transmitting, it will interfere at B, wiping
out the frame from A. (We assume here that no CDMA-type scheme is used to provide multiple
channels, so collisions garble the signal and destroy both frames.) We want a MAC protocol that
will prevent this kind of collision from happening because it wastes bandwidth. The problem of a
station not being able to detect a potential competitor for the medium because the competitor is
too far away is called the hidden terminal problem.
Now let us look at a different situation: B transmitting to A at the same time that C wants to
transmit to D, as shown in Fig. 4-11(b). If C senses the medium, it will hear a transmission and
falsely conclude that it may not send to D (shown as a dashed line). In fact, such a transmission
would cause bad reception only in the zone between B and C, where neither of the intended
receivers is located. We want a MAC protocol that prevents this kind of deferral from happening
because it wastes bandwidth. The problem is called the exposed terminal problem.
An early and influential protocol that tackles these problems for wireless LANs is MACA
(Multiple Access with Collision Avoidance) (Karn, 1990). The basic idea behind it is for the
sender to stimulate the receiver into outputting a short frame, so stations nearby can detect this
transmission and avoid transmitting for the duration of the upcoming (large) data frame. This
technique is used instead of carrier sense.
MACA is illustrated in following Fig. Let us see how A sends a frame to B. A starts by sending
an RTS (Request To Send) frame to B, as shown in Fig. 4-12(a). This short frame (30 bytes)
contains the length of the data frame that will eventually follow. Then B replies with a CTS
(Clear To Send) frame, as shown in Fig. 4-12(b). The CTS frame contains the data length
(copied from the RTS frame). Upon receipt of the CTS frame, A begins transmission.
The MACA protocol. (a) A sending an RTS to B. (b) B responding with a CTS to A.
ETHERNET
LOCAL AREA NETWORK (LAN) is a computer network that is designed for a
limited geographic area such as a building or a campus. Most LANs today are also
linked to a wide area network (WAN) or the Internet.
The LAN market has seen several technologies such as Ethernet, Token Ring, Token
Bus, FDDI, and ATM LAN. Some of these technologies survived for a while, but
Ethernet is by far the dominant technology.
The IEEE Standard Project 802 was designed to regulate manufacturing and interconnectivity between different LANs. Ethernet has changed to meet market needs and to make use of new technologies.
E.g.,
IEEE 802.3 Ethernet
IEEE 802.11 Wireless LAN (Wi-Fi)
IEEE 802.15 Wireless PAN (Bluetooth, etc)
IEEE STANDARDS:
In 1985, the Computer Society of the IEEE started a project, called Project 802, to define certain LAN standards. Project 802, which consists of a variety of LAN standards, deals primarily with physical layer and data link layer issues.
The standard was adopted by the American National Standards Institute (ANSI). In 1987,
the International Organization for Standardization (ISO) also approved it as an
international standard under the designation ISO 8802.
The IEEE has subdivided the data link layer into two sublayers: logical link control
(LLC) and media access control (MAC). IEEE has also created several physical layer
standards for different LAN protocols.
In IEEE 802, the HDLC-style frame is divided into a PDU at the LLC sublayer and a frame at the MAC sublayer.
The LLC PDU contains the address of the destination service access point (DSAP), the address of the source service access point (SSAP), a control field, and upper-layer data.
The control field is responsible for flow and error control.
Need for LLC: The purpose of the LLC is to provide flow and error control for the upper-layer protocols that actually demand these services.
Media Access Control (MAC):
IEEE Project 802 has created a sublayer called media access control (MAC) that defines the specific access method for each LAN; the multiple access methods include random access, controlled access, and channelization.
For example, it defines CSMA/CD as the media access method for Ethernet LANs and the
token passing method for Token Ring and Token Bus LANs.
In contrast to the LLC sublayer, the MAC sublayer contains a number of distinct modules; each defines the access method and the framing format specific to the corresponding LAN protocol.
PHYSICAL LAYER:
The physical layer is dependent on the implementation and type of physical media used.
IEEE defines detailed specifications for each LAN implementation.
For example, although there is only one MAC sublayer for Standard Ethernet, there is a different physical layer specification for each Ethernet implementation.
MANCHESTER ENCODING:
Manchester encoding is a synchronous clock encoding technique used by the physical
layer to encode the clock and data of a synchronous bit stream. In this technique, the
actual binary data to be transmitted over the cable are not sent as a sequence of logic 1s and 0s. Instead, the bits are translated into a format in which every bit interval contains a transition.
The data are represented not by constant logic 1 or 0 levels, but by line transitions: a logic 0 is represented by a transition from HIGH to LOW, and a logic 1 is represented by a transition from LOW to HIGH.
The data to encode is the binary number 10010, reading from left to right. The coding
occurs on every falling edge of the clock. On the first falling edge of the clock, the coded
signal has a LOW to HIGH transition, because the data is HIGH. On the second falling
edge of the clock, the code has a HIGH to LOW transition because the data is LOW. The
same algorithm is applied for the rest of the signal.
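The rule described above (0 = HIGH-to-LOW, 1 = LOW-to-HIGH) can be illustrated with a small encoder that turns each bit into two half-bit levels; this sketch is for illustration only and ignores clock-alignment details:

```python
def manchester_encode(bits):
    """Encode each bit as two half-bit levels: 0 -> HIGH then LOW, 1 -> LOW then HIGH."""
    levels = []
    for b in bits:
        levels += ['H', 'L'] if b == 0 else ['L', 'H']
    return levels

print(manchester_encode([1, 0, 0, 1, 0]))
# -> ['L', 'H', 'H', 'L', 'H', 'L', 'L', 'H', 'H', 'L']
```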
The Differential Manchester Coding:
The Differential Manchester code is a variation of the Manchester code.
The transmission line is kept HIGH when no data is sent. There are two encoding methods: the first is "Transition on LOW" and the second is "Transition on HIGH". This example uses the first method, "Transition on LOW".
If the data bit is 0, then a polarity transition occurs (if was HIGH it goes LOW, and if it
was LOW it goes HIGH), otherwise the line remains unchanged.
In our example, the data that is transmitted is the binary '10011101' (starting from left to
right). Each data bit is transmitted during negative transition of the clock. Between each
bit transmission, the code line changes polarity. This is done to help the receiver
recreate the clock signal and synchronize with the transmitter.
With the "Transition on LOW" method, when a bit is transmitted, if the bit is 0 the data line changes polarity; otherwise, if the bit is 1, the data line polarity remains unchanged.
Suppose now that we want to transmit this byte (10011101). The code line is HIGH. The
transmission is initiated by pulling the code line LOW. After half a pulse, the output is
pulled HIGH. This is part of the synchronization transition that occurs every middle of a
bit transmission. Half a pulse next, the first bit is transmitted. The first bit is 1 (starting
from left), so the code line polarity remains unchanged. After half a pulse, the code line
polarity changes state and goes LOW. After one full pulse, the second bit is about to be
transmitted. This bit is the 0, so the code line changes polarity and goes HIGH. The same
algorithm is used to transmit all 8 bits of the data. Finally, the code line is pulled HIGH
and the transmission ends.
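The boundary/mid-bit rule of the "Transition on LOW" method can be sketched as follows. This is an illustrative encoder only; it ignores the idle-HIGH start-up transition described in the worked example above and simply assumes an initial line level:

```python
def differential_manchester_encode(bits, start_level='H'):
    """'Transition on LOW': a 0 bit adds a transition at the bit boundary, a 1 bit
    does not; every bit also has a mid-bit (clock) transition."""
    flip = lambda level: 'L' if level == 'H' else 'H'
    level, levels = start_level, []
    for b in bits:
        if b == 0:
            level = flip(level)        # extra transition at the start of a 0 bit
        levels.append(level)           # first half of the bit
        level = flip(level)            # mid-bit clock transition
        levels.append(level)           # second half of the bit
    return levels

print(differential_manchester_encode([1, 0, 0, 1, 1, 1, 0, 1]))
```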
STANDARD ETHERNET:
The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC).
Ethernet has gone through four generations:
Standard Ethernet (10 Mbps)
Fast Ethernet (100 Mbps)
Gigabit Ethernet (1 Gbps)
Ten-Gigabit Ethernet (10 Gbps)
MAC Sublayer:
In Standard Ethernet, the MAC sublayer governs the operation of the access method. It
also frames data received from the upper layer and passes them to the physical layer.
Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of
protocol data unit (PDU), upper-layer data, and the CRC.
Ethernet does not provide any mechanism for acknowledging received frames, making it
what is known as an unreliable medium. Acknowledgments must be implemented at the
higher layers.
The format of the MAC frame
Preamble. The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that alert the receiving system to the coming frame and enable it to synchronize its input timing. The preamble is added at the physical layer and is not formally part of the frame.
Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the beginning of the frame. The last 2 bits are 11 and alert the receiver that the next field is the destination address.
Destination address (DA). 6 bytes long and contains the physical address of the
destination station.
Source address (SA). 6 bytes long and contains the physical address of the sender
station.
Length or type. This field is used either as a type field, defining the upper-layer protocol, or as a length field, defining the number of bytes in the data field.
Data. This field carries data encapsulated from the upper-layer protocols. It is a
minimum of 46 and a maximum of 1500 bytes.
CRC. The last field contains error detection information, in this case a CRC-32.
Frame Length
Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame.
Minimum length:
An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes which
includes header and the trailer. If we count 18 bytes of header and trailer (6 bytes of source
address, 6 bytes of destination address, 2 bytes of length or type, and 4 bytes of CRC), then the
minimum length of data from the upper layer is 64 - 18 = 46 bytes. If the upper-layer packet is
less than 46 bytes, padding is added to make up the difference.
Maximum length:
The standard defines the maximum length of a frame as 1518 bytes. If we subtract the 18
bytes of header and trailer, the maximum length of the payload is 1500 bytes. The maximum
length restriction has two reasons.
1. To reduce the size of the buffer.
2. To prevent one station from monopolizing the shared medium, blocking other stations that have data to send.
Frame length:
Minimum: 64 bytes (512 bits) Maximum: 1518 bytes (12,144 bits)
Addressing
Each station on an Ethernet network has its own network interface card (NIC). The NIC provides the station with a 6-byte physical address, written in hexadecimal notation.
Unicast address: defines only one recipient; the relationship between the sender and the
receiver is one-to-one.
Multicast address: defines a group of addresses; the relationship between the sender
and the receivers is one-to-many.
The least significant bit of the 1st byte defines the type of address.
If the bit is 0,the address is UNICAST.
If the bit is 1,the address is MULTICAST.
Broadcast address: The recipients are all the stations on the LAN. A broadcast
destination address is forty-eight 1’s.
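These three cases can be checked programmatically from the least significant bit of the first byte; the sketch below is an illustration, and the sample addresses are arbitrary:

```python
def classify_ethernet_address(mac):
    """Classify a destination address: LSB of the first byte is 0 for unicast,
    1 for multicast; forty-eight 1s (all FF bytes) is the broadcast address."""
    octets = [int(part, 16) for part in mac.replace('-', ':').split(':')]
    if all(octet == 0xFF for octet in octets):
        return 'broadcast'
    return 'multicast' if octets[0] & 0x01 else 'unicast'

print(classify_ethernet_address('4A:30:10:21:10:1A'))   # 0x4A is even  -> unicast
print(classify_ethernet_address('47:20:1B:2E:08:EE'))   # 0x47 is odd   -> multicast
print(classify_ethernet_address('FF:FF:FF:FF:FF:FF'))   # all ones      -> broadcast
```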
Access Method: CSMA/CD:
Standard Ethernet uses 1-persistent CSMA/CD.
Slot Time: In an Ethernet network, the round-trip time required for a frame to travel from one
end of a maximum-length network to the other plus the time needed to send the jam sequence is
called the slot time.
Slot time =round-trip time + time required to send the jam sequence
The slot time in Ethernet is defined in bits: it is the time required for a station to send 512 bits. This means that the actual slot time depends on the data rate; for traditional 10-Mbps Ethernet it is 51.2 μs.
Slot Time and Collision: The slot time was chosen to allow the proper functioning of CSMA/CD. To understand the situation, let us consider two cases.
1. The sender sends a minimum-size packet of 512 bits.
Before the sender can send the entire packet out, the signal travels through the
network and reaches the end of the network. If there is another signal at the end of the network, a
collision occurs. The sender has the opportunity to abort the sending of the frame and to send a
jam sequence to inform other stations of the collision. The round-trip time plus the time required
to send the jam sequence should be less than the time needed for the sender to send the minimum
frame, 512 bits. The sender needs to be aware of the collision before it is too late, that is, before
it has sent the entire frame.
2. The sender sends a frame larger than the minimum size (between 512 and 12,144 bits).
If the station has sent out the first 512 bits and has not heard a collision, it is
guaranteed that collision will never occur during the transmission of this frame. The reason is
that the signal will reach the end of the network in less than one-half the slot time. If all stations
follow the CSMA/CD protocol, they have already sensed the existence of the signal (carrier) on
the line and have refrained from sending. If they sent a signal on the line before one-half of the
slot time expired, a collision has occurred and the sender has sensed the collision.
Slot Time and Maximum Network Length: There is a relationship between the slot time and
the maximum length of the network.
MaxLength = PropagationSpeed × SlotTime / 2
MaxLength = (2 × 10^8 m/s) × (51.2 × 10^-6 s) / 2 = 5120 m
Of course, we need to consider the delay times in repeaters and interfaces, and the time
required to send the jam sequence. These reduce the maximum-length of a traditional Ethernet
network to 2500 m, just 48 percent of the theoretical calculation.
MaxLength=2500 m.
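The slot-time and maximum-length figures above can be reproduced with a few lines of arithmetic (an illustrative calculation, ignoring repeater delays and the jam sequence):

```python
bit_rate = 10e6                          # 10-Mbps traditional Ethernet
slot_time = 512 / bit_rate               # time to send 512 bits = 51.2 microseconds

propagation_speed = 2e8                  # metres per second in the cable
max_length = propagation_speed * slot_time / 2

print(f"slot time  = {slot_time * 1e6:.1f} microseconds")
print(f"max length = {max_length:.0f} m (about 2500 m in practice)")
```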
Physical Layer Implementations:
The Standard Ethernet defines several physical layer implementations.
Encoding and Decoding:
All standard implementations use digital signaling (baseband) at 10 Mbps.
At the sender, data are converted to a digital signal using the Manchester scheme;
At the receiver, the received signal is interpreted as Manchester and decoded into data.
Manchester encoding is self-synchronous, providing a transition at each bit interval.
10Base2: Thin Ethernet:
The second implementation is called 10Base2, thin Ethernet, or Cheapernet.
10Base2 also uses a bus topology, but the cable is much thinner and more flexible.
The cable can be bent to pass very close to the stations. In this case, the transceiver is normally part of the network interface card (NIC), which is installed inside the station, and the collision occurs in the thin coaxial cable. This implementation is more cost effective than 10Base5 because thin coaxial cable is less expensive than thick coaxial and the tee connections are much cheaper than taps.
Installation is simpler because the thin coaxial cable is very flexible. However, the length
of each segment cannot exceed 185 m (close to 200 m) due to the high level of
attenuation in thin coaxial cable.
10Base-T: Twisted-Pair Ethernet:
In 10Base-T, the maximum length of the twisted cable is defined as 100 m, to minimize the effect of attenuation in the twisted cable.
Summary of Standard Ethernet implementations:
FAST ETHERNET:
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fibre Channel.
The goals of Fast Ethernet are as follows:
1. Upgrade the data rate to 100 Mbps.
2. Make it compatible with Standard Ethernet.
3. Uses the same 48-bit address.
4. Uses the same frame format.
5. Uses the same minimum and maximum frame lengths.
MAC Sublayer:
The MAC sublayer of Fast Ethernet is the same as that of traditional Ethernet.
1. Access method: The access method remains the same, CSMA/CD, for the half-duplex approach. For full-duplex Fast Ethernet there is no need for CSMA/CD, but it is kept for backward compatibility with Standard Ethernet.
2. Frame Format: Same as that of traditional Ethernet.
3. Minimum and maximum frame lengths: Same as that of traditional Ethernet.
4. Addressing: Same as that of traditional Ethernet.
Auto negotiation:
A new feature added to Fast Ethernet is called autonegotiation. It allows two devices to negotiate the mode or data rate of operation.
It was designed particularly for the following purposes:
1. To allow incompatible devices to connect to one another
2. To allow one device to have multiple capabilities.
3. To allow a station to check a hub's capabilities.
Physical Layer
The physical layer in Fast Ethernet is more complicated than the one in Standard Ethernet.
Topology
Fast Ethernet is designed to connect two or more stations together. If there are only two stations,
they can be connected point-to-point. Three or more stations need to be connected in a star
topology with a hub or a switch at the center.
Implementation
Fast Ethernet implementation at the physical layer can be categorized as either two-wire or
four-wire. The two-wire implementation can be either category 5 UTP (100Base-TX) or fiber-optic cable (100Base-FX). The four-wire implementation is designed only for category 3 UTP (100Base-T4).
Encoding
Manchester encoding needs a 200-Mbaud bandwidth for a data rate of 100 Mbps, which
makes it unsuitable for a medium such as twisted-pair cable.
The Fast Ethernet designers therefore had to find alternative encoding/decoding schemes; three different encoding schemes were chosen.
100Base-TX:
It uses two pairs of twisted-pair cable (either category 5 UTP or STP).
For this implementation, the MLT-3(Multi-Level Transmit) scheme was selected since it
has good bandwidth performance. However, since MLT-3 is not a self-synchronous line
coding scheme, 4B/5B block coding is used to provide bit synchronization by preventing
the occurrence of a long sequence of 0s and 1s.
This creates a data rate of 125 Mbps, which is fed into MLT-3 for encoding.
100Base-FX:
It uses two pairs of fiber-optic cables. Optical fiber can easily handle high bandwidth
requirements by using simple encoding schemes.
The designers of 100Base-FX selected the NRZ-I encoding scheme for this
implementation.
However,NRZ-I has a bit synchronization problem for long sequences of 0s (or 1s, based
on the encoding). To overcome this problem, the designers used 4B/5B block encoding
as we described for 100Base-TX.
The block encoding increases the bit rate from 100 to 125 Mbps, which can easily be
handled by fiber-optic cable.
100Base-T4:
The implementation uses four pairs of UTP for transmitting 100 Mbps.
Encoding/decoding in 100Base-T4 is more complicated. As this implementation uses
category 3 UTP, each twisted-pair cannot easily handle more than 25 Mbaud.
In this design, one pair switches between sending and receiving. Three pairs of category 3 UTP, however, can together handle only 75 Mbaud (25 Mbaud each), so we need an encoding scheme that converts 100 Mbps into a 75-Mbaud signal.
In 8B/6T, eight data elements are encoded as six signal elements. This means that 100 Mbps requires only (6/8) × 100 Mbps, or 75 Mbaud.
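The signalling rates implied by these block codes follow from simple ratios; the short calculation below is an illustration of the 4B/5B and 8B/6T figures quoted above:

```python
data_rate = 100e6                        # Fast Ethernet data rate in bits per second

baud_4b5b = data_rate * 5 / 4            # 4B/5B: 5 signal bits per 4 data bits -> 125 Mbaud
baud_8b6t = data_rate * 6 / 8            # 8B/6T: 6 signal elements per 8 data bits -> 75 Mbaud
per_pair_100base_t4 = baud_8b6t / 3      # spread over three category 3 pairs -> 25 Mbaud each

print(baud_4b5b / 1e6, baud_8b6t / 1e6, per_pair_100base_t4 / 1e6)   # 125.0 75.0 25.0
```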
IEEE 802.11:
IEEE has defined the specifications for a wireless LAN, called IEEE 802.11, which covers the
physical and data link layers.
Architecture
The standard defines two kinds of services:
Basic service set (BSS)
Extended service set (ESS)
Basic Service Set:
The building block of a wireless LAN in IEEE 802.11 is the basic service set (BSS). The
BSS contains several stations(STAs) and an optional central base station, known as the
access point (AP).
The BSS without an AP is a stand-alone network and cannot send data to other BSSs. It is
called an ad hoc architecture, stations can form a network without the need of an AP;
they can locate one another and agree to be part of a BSS.
A BSS with an AP is sometimes referred to as an infrastructure network.
Station Types (STA):
This component is used to connect to the wireless medium. Station can be any device, it
can be a mobile device, a Network interface card etc. It provides the services of
authentication, privacy and delivery of the data.
Access Point(AP):
It provides same services as STA.
The function of the AP is to provide both the communication to the wired LAN and the
local relay function for the BSS.
MAC Sublayer:
IEEE 802.11 defines two MAC sublayers:
Distributed coordination function (DCF)
Point coordination function (PCF).
The figure shows the relationship between the two MAC sublayers, the LLC sublayer, and the physical layer.
Distributed Coordination Function
Distributed coordination function (DCF) is the protocol defined at the MAC sublayer by IEEE 802.11. DCF uses CSMA/CA as the access method.
Wireless LANs cannot implement CSMA/CD for three reasons:
1. For collision detection a station must be able to send data and receive collision signals at
the same time. This can mean costly stations and increased bandwidth requirements.
2. Collision may not be detected because of the hidden station problem.
3. The distance between stations can be great. Signal fading could prevent a station at one
end from hearing a collision at the other end.
Frame Exchange Timeline: The data and control frames exchange in time.
1. Before sending a frame, the source station senses the medium by checking the energy
level at the carrier frequency.
a. The station uses a persistence strategy with back-off until the channel is idle.
b. After the station is found to be idle, the station waits for a period of time called the distributed
interframe space (DIFS); then the station sends a control frame called the request to send (RTS).
2. After receiving the RTS and waiting a period of time called the short interframe space
(SIFS), the destination station sends a control frame, called the clear to send (CTS), to the
source station. This control frame indicates that the destination station is ready to receive
data.
3. The source station sends data after waiting an amount of time equal to SIFS.
4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is needed
in this protocol because the station does not have any means to check for the successful
arrival of its data at the destination.
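The four-step exchange above can be sketched as a single sender-side routine. The helpers send and receive, and the DIFS/SIFS values, are hypothetical placeholders chosen only to show the ordering of the frames, not the actual 802.11 timing constants:

```python
import time

def dcf_exchange(send, receive, difs=50e-6, sifs=10e-6, cts_timeout=1e-3, ack_timeout=1e-3):
    """Sender-side sketch of the DCF timeline: DIFS, RTS, CTS, DATA, ACK."""
    time.sleep(difs)                            # wait DIFS after sensing an idle channel
    send('RTS')
    if not receive('CTS', timeout=cts_timeout):
        return False                            # no CTS: assume a collision, back off and retry
    time.sleep(sifs)
    send('DATA')
    return receive('ACK', timeout=ack_timeout)  # ACK confirms successful delivery
```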
Network Allocation Vector:
When a station sends an RTS frame, it includes the duration of time that it needs to
occupy the channel. The stations that are affected by this transmission create a timer
called a network allocation vector (NAV) that shows how much time must pass before
these stations are allowed to check the channel for idleness.
Each time a station accesses the system and sends an RTS frame, other stations start their
NAV.
Process Flowchart
The figure shows the process flowchart for CSMA/CA as used in wireless LANs.
Collision During Handshaking
Two or more stations may try to send RTS frames at the same time. These control frames
may collide. However, because there is no mechanism for collision detection, the sender
assumes there has been a collision if it has not received a CTS frame from the receiver.
The back-off strategy is employed, and the sender tries again.
Point Coordination Function (PCF)
The point coordination function (PCF) is an optional access method that can be
implemented in an infrastructure network. It is implemented for time sensitive
transmission.
PCF has a centralized, contention-free polling access method. The AP performs
polling for stations that are capable of being polled. The stations are polled one after
another, sending any data they have to the AP.
To give priority to PCF over DCF, another set of interframe spaces has been defined: PIFS and SIFS. The SIFS is the same as that in DCF, but the PIFS (PCF IFS) is shorter
than the DIFS. This means that if, at the same time, a station wants to use only DCF and
an AP wants to use PCF, the AP has priority.
Due to the priority of PCF over DCF, stations that only use DCF may not gain access to
the medium. To prevent this, a repetition interval has been designed to cover both
contention-free (PCF) and contention-based (DCF) traffic. The repetition interval, which
is repeated continuously, starts with a special control frame, called a beacon frame.
When the stations hear the beacon frame, they start their NAV for the duration of the
contention-free period of the repetition interval. Figure 14.6 shows an example of a
repetition interval.
Fragmentation
The wireless environment is very noisy; a corrupt frame has to be retransmitted. The
protocol, therefore, recommends fragmentation-the division of a large frame into smaller
ones. It is more efficient to resend a small frame than a large one.
Frame Format
The MAC layer frame consists of nine fields.
Frame control field: The FC field is 2 bytes long and defines the type of frame and
some control information.
Duration: In all frame types except one, this field defines the duration of the
transmission that is used to set the value of NAV. In one control frame, this field defines
the ID of the frame.
Addresses. There are four address fields, each 6 bytes long. The meaning of each address
field depends on the value of the To DS and From DS subfields.
Sequence control. This field defines the sequence number of the frame to be used in
flow control.
Frame body. This field, which can be between 0 and 2312 bytes, contains information
based on the type and the subtype defined in the FC field.
FCS. The FCS field is 4 bytes long and contains a CRC-32 error detection sequence.
Frame Types
A wireless LAN defined by IEEE 802.11 has three categories of frames:
Management frames, control frames, and data frames.
Management Frames: Management frames are used for the initial communication between
stations and access points.
Control Frames: Control frames are used for accessing the channel and acknowledging frames.
Data Frames: Data frames are used for carrying data and control information.
Addressing Mechanism:
The IEEE 802.11 addressing mechanism specifies four cases, defined by the values of the two flags in the FC field, To DS and From DS. Each flag can be either 0 or 1, resulting in four different situations.
The interpretation of the four addresses (address 1 to address 4) in the MAC frame depends on the value of these flags.
Hidden and Exposed Station Problems
Hidden Station Problem: Station B has a transmission range shown by the left oval;
every station in this range can hear any signal transmitted by station B. Station C has a
transmission range shown by the right oval (sphere in space); every station located in this
range can hear any signal transmitted by C.
Station C is outside the transmission range of B; likewise, station B is outside the transmission range of C. Station A, however, is in the area covered by both B and C; it can hear any signal transmitted by B or C.
Assume that station B is sending data to station A. In the middle of this transmission,
station C also has data to send to station A. However, station C is out of B's range and
transmissions from B cannot reach C. Therefore C thinks the medium is free.
Station C sends its data to A, which results in a collision at A because this station is receiving data from both B and C. In this case, we say that stations B and C are hidden from each other with respect to A. Hidden stations can reduce the capacity of the network because of the possibility of collision.
The solution to the hidden station problem is the use of the handshake frames (RTS and CTS). The RTS message from B reaches A, but not C. However, because both B and C are within the range of A, the CTS message, which contains the duration of the data transmission from B to A, reaches C. Station C knows that some hidden station is using the channel and refrains from transmitting until that duration is over.
Figure: Use of handshake frames (RTS and CTS) to solve the hidden station problem.
Exposed Station Problem
In this problem a station refrains from using a channel when it is, in fact, available.
Station A is transmitting to station B.
Station C has some data to send to station D, which can be sent without interfering with
the transmission from A to B. However, station C is exposed to transmission from A; it
hears what A is sending and thus refrains from sending. In other words, C is too
conservative and wastes the capacity of the channel.
The handshaking messages RTS and CTS cannot help in this case, despite what you might think. Station C hears the RTS from A but does not hear the CTS from B. Station C, after hearing the RTS from A, can wait for a time so that the CTS from B reaches A; it then sends an RTS to D to show that it needs to communicate with D. Both stations B and A may hear this RTS, but station A is in the sending state, not the receiving state. Station B, however, responds with a CTS. The problem is here: if station A has started sending its data, station C cannot hear the CTS from station D because of the collision; it cannot send its data to D. It remains exposed until A finishes sending its data.