Unit 3
1-Persistent CSMA:-
When a station has data to send, it first listens to the
channel to see if anyone else is transmitting at that
moment.
If the channel is busy, the station waits until it becomes
idle.
When the station detects an idle channel, it transmits a
frame. If a collision occurs, the station waits a random
amount of time and starts all over again.
The protocol is called 1-persistent because the
station transmits with a probability of 1 whenever it finds
the channel idle.
The propagation delay has an important
effect on the performance of the protocol:
the longer the propagation delay, the worse
the performance of the protocol.
Even if the propagation delay is zero, there
will still be collisions.
If two stations sense that the channel is idle
at the same time, both will send a frame and
a collision will occur.
Nonpersistent CSMA:-
In this protocol, before sending, a station senses the
channel.
If no one else is sending, the station begins
doing so itself.
However, if the channel is busy, the station
does not continually sense it; instead, it waits a
random amount of time and repeats the
process.
This algorithm leads to better channel
utilization but longer delays than 1-
persistent CSMA.
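The two sensing disciplines can be contrasted in a short, illustrative Python model; the channel load and the backoff range here are assumed values chosen for the sketch, not figures from the text:

```python
import random

# Illustrative channel model: on each sense, the channel is busy with an
# assumed probability (60% load here, purely for demonstration).
def channel_busy():
    return random.random() < 0.6

def one_persistent_wait():
    """1-persistent: sense continuously; transmit as soon as the channel is idle."""
    slots_waited = 0
    while channel_busy():
        slots_waited += 1          # keep sensing, slot after slot
    return slots_waited            # then transmit with probability 1

def nonpersistent_wait():
    """Nonpersistent: if the channel is busy, wait a random time, then re-sense."""
    slots_waited = 0
    while channel_busy():
        slots_waited += random.randint(1, 5)   # random pause before re-sensing
    return slots_waited

random.seed(1)
print(one_persistent_wait(), nonpersistent_wait())
```

The random pause in the nonpersistent variant is what spreads contending stations out in time, trading delay for fewer collisions.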
With persistent CSMA, what happens if two
stations become active when a third station is
busy?
Both wait for the active station to finish, then
simultaneously launch a packet, resulting in a
collision.
There are two ways to handle this problem:
(a) p-persistent CSMA and (b) exponential backoff.
The first technique is for a waiting station not to launch a
packet immediately when the channel becomes idle, but
first toss a coin, and send a packet only if the coin comes
up heads.
If the coin comes up tails, the station waits for some time
(one slot for slotted CSMA), then repeats the process.
The idea is that if two stations are both waiting for the
medium, this reduces the chance of a collision from 100%
to 25%.
A simple generalization of the scheme is to use a biased
coin, so that the probability of sending a packet when the
medium becomes idle is not 0.5 but p, where 0 < p < 1.
We call such a scheme p-persistent CSMA.
The original scheme, where p = 1, is thus called 1-persistent
CSMA.
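The coin-toss argument (collision chance dropping from 100% to 25% for two fair coins) can be checked numerically; this is a minimal Python sketch, with the trial count an arbitrary choice:

```python
import random

def sends_this_slot(p):
    """p-persistent rule: when the medium goes idle, send with probability p;
    otherwise defer one slot and repeat the process."""
    return random.random() < p

random.seed(0)
trials = 100_000
# Two waiting stations, fair coins (p = 0.5): both transmit in the same
# first slot (a collision) only when both coins come up heads: 0.5 * 0.5.
collisions = sum(
    1 for _ in range(trials)
    if sends_this_slot(0.5) and sends_this_slot(0.5)
)
print(collisions / trials)   # close to 0.25
```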
The key idea is that each station, after transmitting a
packet, checks whether the packet
transmission was successful.
Successful transmission is indicated either by an
explicit acknowledgement from the receiver or the
absence of a signal from a collision detection
circuit.
If the transmission is successful, the station is done.
Otherwise, the station retransmits the packet,
simultaneously realizing that at least one other
station is also contending for the medium.
To prevent its retransmission from colliding
with the other station's retransmission, each
station backs off (that is, idles) for a random
time chosen from the interval
[0, 2*max_propagation_delay] before
retransmitting its packet.
If the retransmission also fails, the station
backs off for a random time in the interval
[0, 4*max_propagation_delay], and tries again.
Each subsequent collision doubles the backoff
interval length, until the retransmission finally
succeeds.
On a successful transmission, the backoff
interval is reset to the initial value.
We call this type of backoff exponential backoff.
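The doubling rule can be written down directly; the value of `max_prop_delay` below is an assumed illustrative figure, not one given in the text:

```python
import random

def backoff_interval(n_collisions, max_prop_delay):
    """Upper end of the backoff interval after n consecutive collisions:
    [0, 2*d] after the first, [0, 4*d] after the second, doubling each time."""
    return (2 ** n_collisions) * max_prop_delay

def backoff_time(n_collisions, max_prop_delay):
    # Idle for a random time drawn uniformly from the current interval.
    return random.uniform(0.0, backoff_interval(n_collisions, max_prop_delay))

d = 25.6e-6   # assumed maximum propagation delay in seconds, for illustration
print(backoff_interval(1, d), backoff_interval(2, d))   # 2*d, then 4*d
# On a successful transmission, the sender resets n_collisions to 0,
# restoring the initial backoff interval.
```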
In many wireless LANs, unlike wired LANs, the
station has no idea whether the packet collided with
another packet or not until it receives an
acknowledgement from the receiver.
In this situation, collisions have a greater effect on
performance than with CSMA/CD, where colliding
packets can be quickly detected and aborted.
Thus, it makes sense to try to avoid collisions, if
possible.
CSMA/CA is basically p-persistent CSMA, with the twist
that when the medium becomes idle, a station must
wait for a time called the interframe spacing, or IFS,
before contending for a slot.
A station gets higher priority if it is allocated a
smaller interframe spacing.
When a station wants to transmit data, it first checks if the
medium is busy.
If it is, it continuously senses the medium, waiting for it to
become idle.
When the medium becomes idle, the station first waits for an
interframe spacing corresponding to its priority level, then
sets a contention timer to a time interval randomly selected in
the range [0,CW], where CW is a predefined contention window
length.
When this timer expires, it transmits a packet and waits for
the receiver to send an ack.
If no ack is received, the packet is assumed lost to collision,
and the source tries again, choosing a contention timer at
random from an interval twice as long as the one before
(binary exponential backoff).
If the station senses that another station has begun
transmission while it was waiting for the expiration of the
contention timer, it does not reset its timer, but merely freezes
it, and restarts the countdown when the packet completes
transmission.
In this way, stations that happen to choose a longer timer
value get higher priority in the next round of contention.
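The countdown-with-freezing behaviour can be sketched as follows; the class name, the initial contention window, and the slot-by-slot interface are assumptions made for the illustration:

```python
import random

CW = 8  # assumed initial contention window length, in slots

class CsmaCaStation:
    """Sketch of the CSMA/CA contention countdown with timer freezing."""
    def __init__(self, cw=CW):
        self.cw = cw
        self.timer = random.randint(0, self.cw)   # random pick from [0, CW]

    def tick(self, medium_idle):
        """Called once per slot; returns True when the station transmits."""
        if not medium_idle:
            return False          # freeze: keep the remaining timer value
        if self.timer == 0:
            return True           # timer expired on an idle slot: transmit
        self.timer -= 1           # count down only while the medium is idle
        return False

    def on_collision(self):
        """No ack received: double the window (binary exponential backoff)."""
        self.cw *= 2
        self.timer = random.randint(0, self.cw)
```

A station whose frozen timer carries over to the next contention round effectively keeps its place in line, which is the priority effect described above.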
A Bit-Map Protocol:-
In the basic bit-map method, each contention period
consists of exactly N slots.
If station 0 has a frame to send, it transmits a 1 bit
during the zeroth slot.
No other station is allowed to transmit during this slot.
Regardless of what station 0 does, station 1 gets the
opportunity to transmit a 1 during slot 1, but only if it
has a frame queued.
In general, station j may announce the fact that it has a
frame to send by inserting a 1 bit into slot j.
After all N slots have passed by, each station has
complete knowledge of which stations wish to transmit.
Since everyone agrees on who goes next, there
will never be any collisions.
After the last ready station has transmitted its
frame, an event all stations can easily monitor,
another N bit contention period is begun.
If a station becomes ready just after its bit slot
has passed by, it is out of luck and must remain
silent until every station has had a chance and
the bit map has come around again.
Protocols like this in which the desire to transmit
is broadcast before the actual transmission are
called reservation protocols.
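One contention period of the bit-map scheme can be modelled in a few lines of Python; the function name and the list-based interface are illustrative assumptions:

```python
def bitmap_round(wants_to_send):
    """One contention period of the basic bit-map protocol.
    wants_to_send[j] is True if station j has a frame queued.
    Station j announces itself with a 1 bit in slot j; after all N slots,
    every station knows the transmission order, so no collisions occur."""
    reservation_map = [1 if w else 0 for w in wants_to_send]  # the N bit slots
    order = [j for j, bit in enumerate(reservation_map) if bit == 1]
    return order  # ready stations then transmit, one after another, in order

print(bitmap_round([False, True, False, True]))  # → [1, 3]
```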
Token Passing:-
In the token-passing method, the stations in a
network are organized in a logical ring.
In other words, for each station, there is a
predecessor and a successor.
The predecessor is the station which is logically
before the station in the ring; the successor is
the station which is after the station in the ring.
The current station is the one that is accessing
the channel now.
The right to this access has been passed from
the predecessor to the current station.
The right will be passed to the successor when
the current station has no more data to send.
A special packet called a token circulates
through the ring.
The possession of the token gives the station
the right to access the channel and send its
data.
When a station has some data to send, it waits
until it receives the token from its predecessor.
It then holds the token and sends its data.
When the station has no more data to send, it
releases the token, passing it to the next logical
station in the ring.
Token management is needed for this access
method.
Stations must be limited in the time they can
have possession of the token.
In a token-passing network, stations do not have
to be physically connected in a ring; the ring can
be a logical one.
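One circulation of the token around such a logical ring can be sketched as follows; the station ids and the dict-based queue representation are illustrative, and a real network would also enforce the token-holding time limit mentioned above:

```python
def token_ring_round(stations, frames):
    """One circulation of the token around a logical ring.
    stations: station ids in ring order; frames: station id -> queued frames.
    Only the current token holder may send; once its queue is empty it
    releases the token to its successor (the next loop iteration)."""
    sent = []
    for station in stations:            # token passes predecessor -> successor
        while frames.get(station):      # holder sends while it has data
            sent.append((station, frames[station].pop(0)))
    return sent

log = token_ring_round(["A", "B", "C"], {"A": ["a1"], "C": ["c1", "c2"]})
print(log)   # [('A', 'a1'), ('C', 'c1'), ('C', 'c2')]
```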
The figure below shows four different physical
topologies that can create a logical ring.
In the physical ring topology, when a station sends
the token to its successor, the token cannot be
seen by other stations; the successor is the next
one in line.
This means that the token does not have to have
the address of the next successor.
The problem with this topology is that if one of the
links (the medium between two adjacent stations)
fails, the whole system fails.
The dual ring topology uses a second
(auxiliary) ring which operates in the reverse
direction compared with the main ring.
The second ring is for emergencies only (such
as a spare tire for a car).
If one of the links in the main ring fails, the
system automatically combines the two rings
to form a temporary ring.
After the failed link is restored, the auxiliary
ring becomes idle again.
The high-speed Token Ring networks called
FDDI (Fiber Distributed Data Interface) and
CDDI (Copper Distributed Data Interface) use
this topology.
In the bus ring topology, also called a token bus,
the stations are connected to a single cable
called a bus.
They, however, make a logical ring, because each
station knows the address of its successor (and
also predecessor for token management
purposes).
When a station has finished sending its data, it
releases the token and inserts the address of its
successor in the token.
Only the station with the address matching the
destination address of the token gets the token
to access the shared media.
The Token Bus LAN, standardized by IEEE, uses
this topology.
The IEEE 802.3 standard is for a 1-persistent CSMA/CD
LAN.
Xerox built a 2.94-Mbps CSMA/CD system to
connect over 100 personal workstations on a
1-km cable.
This system was called Ethernet, after the
luminiferous ether through which electromagnetic
radiation was once thought to propagate.
Xerox, DEC, and Intel later devised another standard,
for a 10-Mbps Ethernet.
The 802.3 standard differs from the older one in
that it runs at speeds from 1 to 10 Mbps on various media.
The second difference between these two is in
the header.
I) Preamble:
Each frame starts with a preamble of 7 bytes, each containing the bit pattern 10101010.
II) Start of frame byte:
It denotes the start of the frame itself. It contains 10101011.
III) Destination address:
This gives the destination address. The high-order bit is zero for an ordinary address and
1 for a group address (multicasting). If all bits in the destination field are 1s, the frame will
be delivered to all stations (broadcasting). The 46th bit (adjacent to the high-order
bit) is used to distinguish local from global addresses.
IV) Length field:
This tells how many bytes are present in the data field, from 0 to 1500.
V) Data field:
This contains the actual data that the frame contains.
VI) Pad:
A valid frame must be at least 64 bytes long from destination address to checksum. If the frame size is less
than 64 bytes, the pad field is used to fill out the frame to the minimum size.
VII) Checksum:
It is used to find out whether the received frame is correct or not. A CRC is used
here.
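The minimum-size and pad rules can be illustrated with a short Python sketch; it builds only the destination-through-data portion of the frame (the preamble, start-of-frame byte, and actual CRC computation are omitted), using the standard field sizes of 6-byte addresses, a 2-byte length field, and a 4-byte checksum:

```python
def pad_frame(dest, src, data):
    """Pad an 802.3 frame so that the destination-through-checksum span
    is at least 64 bytes long, as the minimum frame size requires."""
    length = len(data).to_bytes(2, "big")   # length field: bytes in data field
    body = dest + src + length + data       # destination ... data, no CRC yet
    minimum_without_crc = 64 - 4            # the checksum adds 4 more bytes
    if len(body) < minimum_without_crc:
        body += b"\x00" * (minimum_without_crc - len(body))  # the pad field
    return body

frame = pad_frame(b"\xff" * 6, b"\x02" * 6, b"hi")
print(len(frame) + 4)  # → 64, once the 4-byte checksum is appended
```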
802.3 frames do not have priorities, making
them unsuited for real-time systems in which
important frames should not be held up waiting
for unimportant frames.
A simple system with a known worst case is a
ring in which the stations take turns sending
frames.
If there are n stations and it takes T sec to send
a frame, no frame will ever have to wait more
than nT sec to be sent.
Physically, the token bus is a linear or tree-
shaped cable onto which the stations are
attached.
Logically, the stations are organized into a ring,
with each station knowing the address of the
station to its “left” and “right”.
When the logical ring is initialized, the highest
numbered station may send the first frame.
After it is done, it passes permission to its
immediate neighbour by sending the neighbour
a special control frame called a token.
The token propagates around the logical ring,
with only the token holder being permitted to
transmit frames.
Since only one station at a time holds the token,
collisions do not occur.
Since the cable is inherently a broadcast medium,
each station receives each frame, discarding
those not addressed to it.
When a station passes the token, it sends a token
frame specifically addressed to its logical
neighbour in the ring, irrespective of where that
station is physically located on the cable.
It is also worth noting that when stations are first
powered on, they will not be in the ring, so the
MAC protocol has provisions for adding stations to
and deleting stations from the ring.
For the physical layer, the token bus uses the 75-
ohm broadband coaxial cable used for cable
television.
Both single and dual-cable systems are allowed,
with or without head-ends.
The frame control field is used to distinguish
data frames from control frames.
For data frames, it carries the frame’s
priority.
It can also carry an indicator requiring the
destination station to acknowledge correct or
incorrect receipt of the frame.
For control frames, the frame control field is
used to specify the frame type.
The allowed types include token passing and
various ring maintenance frames, including
the mechanism for letting new stations enter
the ring, the mechanism for allowing stations
to leave the ring, and so on.
LANS can be connected by devices called bridges,
which operate in the data link layer.
Bridges do not examine the network layer header
and can thus copy IP, IPX, and OSI packets equally
well.
There are various reasons for using bridges.
1) Many university and corporate departments have
their own LANs, primarily to connect their own
personal computers, workstations, and servers.
Since the goals of the various departments differ,
different departments choose different LANs,
without regard to what other departments are
doing.
Sooner or later, there is a need for interaction, so
bridges are needed.
2) The organization may be geographically spread
over several buildings separated by considerable
distances.
It may be cheaper to have separate LANs in each
building and connect them with bridges and
infrared links than to run a single coaxial cable
over the entire site.
3) It may be necessary to split what is logically a
single LAN into separate LANs to accommodate
the load.
If all the workstations were put on a single LAN, the
total bandwidth needed would be far too high.
Instead, multiple LANs connected by bridges are
used.
4) In some situations, a single LAN would be
adequate in terms of the load, but the
physical distance between the most distant
machines is too great (e.g., more than 2.5km
for 802.3).
Even if laying the cable is easy to do, the network
would not work due to the excessively long round-
trip delay.
The only solution is to partition the LAN and install
bridges between the segments.
5) There is the matter of reliability. On a single LAN, a
defective node that keeps outputting a continuous
stream of garbage will cripple the LAN.
Bridges can be inserted at critical places, to prevent
a single node which has gone berserk from bringing
down the entire system.
6) And last, bridges can contribute to the
organization’s security.
By inserting bridges at various places and being
careful not to forward sensitive traffic, it is possible
to isolate parts of the network so that their traffic
cannot escape and fall into the wrong hands.
Simple Bridge
Simple bridges are the most primitive and least
expensive type of bridge.
A simple bridge links two segments and contains a
table that lists the addresses of all the stations
included in each of them.
Before a simple bridge can be used, an operator
must sit down and enter the addresses of every
station.
Whenever a new station is added, the table must be
modified.
If a station is removed, the newly invalid address
must be deleted.
Installation and maintenance of simple bridges are
time-consuming and potentially more trouble than
the cost savings are worth.
Transparent Bridge
A transparent, or learning, bridge builds its table of station
addresses on its own as it performs its bridge functions.
When the transparent bridge is first installed, its table is
empty.
As it encounters each packet, it looks at both the
destination and the source addresses.
It checks the destination to decide where to send the
packet.
If it does not yet recognize the destination address, it relays
the packet to all of the stations on both segments.
It uses the source address to build its table. As it reads the
source address, it notes which side the packet came from
and associates that address with the segment to
which it belongs.
By continuing this process even after the table is complete,
a transparent bridge is also self-updating.
This bridge uses the flooding and backward
learning algorithms.
The routing procedure for an incoming frame
depends on the LAN it arrives on (the source
LAN) and the LAN its destination is on (the
destination LAN), as follows.
1) If the destination and source LANs are the same,
discard the frame.
2) If the destination and source LANs are
different, forward the frame.
3) If the destination LAN is unknown, use
flooding.
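These three routing rules, together with backward learning, can be sketched for a two-port bridge as follows; the function name and the port numbering are illustrative assumptions:

```python
def bridge_forward(table, in_port, frame_src, frame_dst):
    """Backward learning plus forwarding for a two-port transparent bridge.
    table maps station address -> port; returns the list of ports to
    forward on (an empty list means the frame is discarded)."""
    table[frame_src] = in_port            # learn: source was seen on in_port
    out_port = table.get(frame_dst)
    if out_port is None:
        return [p for p in (0, 1) if p != in_port]  # rule 3: unknown, flood
    if out_port == in_port:
        return []                         # rule 1: same LAN, discard
    return [out_port]                     # rule 2: different LAN, forward

table = {}
print(bridge_forward(table, 0, "A", "B"))   # B unknown: flood to port 1
print(bridge_forward(table, 1, "B", "A"))   # A was learned on port 0
print(bridge_forward(table, 0, "C", "A"))   # A is on port 0 itself: discard
```

Because every frame teaches the bridge where its source lives, the table stays current without any operator intervention, which is exactly the self-updating property described above.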