CCN Assignment II

1. Explain MAC (Medium Access Control) Sublayer

The Medium Access Control (MAC) sublayer is a critical component of the data link layer in
network architecture. It is responsible for controlling how devices on a network gain access
to the shared medium and obtain permission to transmit data. This sublayer is particularly
significant in Local Area Networks (LANs), where multiple devices share the same
communication medium.

Key Responsibilities

1. Channel Access Control: In a network, especially a broadcast network, multiple
devices may need to transmit data over a single communication channel. The MAC
sublayer ensures that only one device transmits at a time to avoid collisions and
interference. This is crucial in a shared medium to maintain order and efficiency.
2. Protocol Management: The MAC sublayer employs various protocols to determine
which device should use the channel next. These protocols are necessary because,
unlike in face-to-face communication where visual cues can help manage turn-taking,
network devices require systematic protocols to decide transmission order.
3. Handling Multiaccess Channels: In LANs, many devices use a shared
communication channel known as a multiaccess channel. The MAC sublayer
facilitates communication by managing how devices access this shared medium. This
includes handling collision detection and avoidance mechanisms to ensure smooth
data transmission.
4. Distinction from Point-to-Point Links: Unlike WANs, which often use point-to-
point links where communication occurs between two specific nodes, LANs and their
MAC sublayer deal with multiaccess channels. This distinction is essential for
understanding the unique challenges and solutions the MAC sublayer addresses in a
LAN environment.

Importance in LANs

LANs heavily rely on the MAC sublayer because they involve multiple devices
communicating over the same medium. The MAC sublayer's protocols ensure that data
transmission is orderly, efficient, and free from chaos that would result from simultaneous
transmissions.

Layer Structure

Although the MAC sublayer is technically part of the lower portion of the data link layer, it is
often studied after understanding point-to-point protocols. This educational approach is
taken because comprehending multi-party protocols (managed by the MAC sublayer) is
easier once two-party protocols are well understood.

2. What is channel allocation? Explain channel allocation techniques in detail.

THE CHANNEL ALLOCATION PROBLEM: we discuss how to allocate a single broadcast channel among competing users.
 The channel might be a portion of the wireless spectrum in a geographic region, or a
single wire or optical fiber to which multiple nodes are connected.
 In both cases, the channel connects each user to all other users and any user who
makes full use of the channel interferes with other users who also wish to use the
channel.

1. STATIC CHANNEL ALLOCATION


 The traditional way of allocating a single channel, such as a telephone trunk, among
multiple competing users is to chop up its capacity by using one of the multiplexing
schemes, such as FDM (Frequency Division Multiplexing).
 If there are N users, the bandwidth is divided into N equal-sized portions, with each
user being assigned one portion. Since each user has a private frequency band, there is
no interference among users.
 When there are only a small and constant number of users, each of which has a
steady stream or a heavy load of traffic, this division is a simple and efficient
allocation mechanism.
 A wireless example is FM radio stations. Each station gets a portion of the FM band
and uses it most of the time to broadcast its signal.

FDM
 However, when the number of senders is large and varying or the traffic is bursty,
FDM presents some problems.
 If the spectrum is cut up into N regions and fewer than N users are currently
interested in communicating, a large piece of valuable spectrum will be wasted.
 If more than N users want to communicate, some of them will be denied permission
for lack of bandwidth, even if some of the users who have been assigned a frequency
band hardly ever transmit or receive anything.
 The poor performance of static FDM can easily be seen with a simple queueing
theory calculation.
 Let us start by finding the mean time delay, T, to send a frame on to a channel of
capacity C bps. We assume that the frames arrive randomly with an average arrival
rate of λ frames/sec, and that the frames vary in length with an average length of 1/μ
bits. With these parameters, the service rate of the channel is μC frames/sec.
 A standard queueing theory result is T = 1/(μC − λ).
 If C is 100 Mbps, the mean frame length, 1/μ, is 10,000 bits, and the frame arrival
rate, λ, is 5000 frames/sec, then T = 200 μsec.
 Note that if we ignored the queueing delay and just asked how long it takes to send a
10,000 bit frame on a 100-Mbps network, we would get the (incorrect) answer of 100
μsec. That result only holds when there is no contention for the channel.
 Now let us divide the single channel into N independent subchannels, each with
capacity C/N bps. The mean input rate on each of the subchannels will now be λ/N.
 Recomputing T, we get T_N = 1/(μ(C/N) − (λ/N)) = N/(μC − λ) = NT.

 The mean delay for the divided channel is N times worse than if all the frames were
somehow magically arranged orderly in a big central queue.
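
To see these formulas in action, the short Python sketch below plugs in the numbers from the text; the choice of N = 10 subchannels is only an illustrative assumption.

# Worked example of the delay formulas above, using the values given in the text.
C = 100e6                  # channel capacity, bits/sec
mean_frame_bits = 10_000   # mean frame length 1/mu, in bits
lam = 5000                 # arrival rate lambda, frames/sec
N = 10                     # number of static FDM subchannels (illustrative assumption)

mu = 1 / mean_frame_bits              # frames per bit
T = 1 / (mu * C - lam)                # single shared channel: 1/(muC - lambda)
T_N = 1 / (mu * (C / N) - lam / N)    # one of N equal static subchannels

print(T * 1e6)     # 200.0 microseconds
print(T_N * 1e6)   # 2000.0 microseconds, i.e. N times worse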

2. ASSUMPTIONS FOR DYNAMIC CHANNEL ALLOCATION: Underlying all the work done in this area are the following five key assumptions:
1. Independent Traffic: The model consists of N independent stations (e.g.,
computers, telephones), each with a program or user that generates frames for
transmission. The expected number of frames generated in an interval of length Δt is
λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been
generated, the station is blocked and does nothing until the frame has been
successfully transmitted.
2. Single Channel: A single channel is available for all communication. All stations can
transmit on it and all can receive from it. The stations are assumed to be equally
capable, though protocols may assign them different roles (e.g., priorities).
3. Observable Collisions: If two frames are transmitted simultaneously, they overlap
in time and the resulting signal is garbled. This event is called a collision.
● All stations can detect that a collision has occurred.
● A collided frame must be transmitted again later. No errors other than those
generated by collisions occur.
4. Continuous or Slotted Time: Time may be assumed continuous, in which case
frame transmission can begin at any instant. Alternatively, time may be slotted or
divided into discrete intervals (called slots).
● Frame transmissions must then begin at the start of a slot. A slot may contain 0, 1, or
more frames, corresponding to an idle slot, a successful transmission, or a collision,
respectively.
5. Carrier Sense or No Carrier Sense: With the carrier sense assumption, stations
can tell if the channel is in use before trying to use it.
● No station will attempt to use the channel while it is sensed as busy.
● If there is no carrier sense, stations cannot sense the channel before trying to use it.
They just go ahead and transmit. Only later can they determine whether the
transmission was successful.

3. Explain carrier sense multiple access protocols

Protocols in which stations listen for a carrier (i.e., a transmission) and act
accordingly are called carrier sense protocols. The main variants are persistent and non-persistent CSMA.

 The first carrier sense protocol is called 1-persistent CSMA (Carrier Sense Multiple
Access).
 When a station has data to send, it first listens to the channel to see if anyone else is
transmitting at that moment. If the channel is idle, the station sends its data. Otherwise,
if the channel is busy, the station just waits until it becomes idle. Then the station
transmits a frame.
 If a collision occurs, the station waits a random amount of time and starts all over
again. The protocol is called 1-persistent because the station transmits with a
probability of 1 when it finds the channel idle.

 A second carrier sense protocol is non-persistent CSMA.
 In this protocol, a conscious attempt is made to be less greedy than in the previous
one. As before, a station senses the channel when it wants to send a frame, and if no
one else is sending, the station begins doing so itself.
 However, if the channel is already in use, the station does not continually sense it for
the purpose of seizing it immediately upon detecting the end of the previous
transmission. Instead, it waits a random period of time and then repeats the algorithm.

 The last protocol is p-persistent CSMA.
 It applies to slotted channels and works as follows. When a station becomes ready to
send, it senses the channel. If it is idle, it transmits with a probability p.
 With a probability q = 1 − p, it defers until the next slot. If that slot is also idle, it
either transmits or defers again, with probabilities p and q. This process is repeated
until either the frame has been transmitted or another station has begun transmitting.
 In the latter case, the unlucky station acts as if there had been a collision (i.e., it
waits a random time and starts again). If the station initially senses that the channel
is busy, it waits until the next slot and applies the above algorithm.
 IEEE 802.11 uses a refinement of p-persistent CSMA.
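
The three persistence strategies differ only in what a ready station does after sensing the channel. The Python sketch below is a simplified model of that decision logic (channel_busy, send, and the wait helpers are hypothetical callables; real collision handling and backoff are omitted).

import random

def one_persistent(channel_busy, send):
    while channel_busy():          # keep sensing continuously while the channel is busy
        pass
    send()                         # transmit with probability 1 as soon as it goes idle

def non_persistent(channel_busy, send, wait_random):
    while channel_busy():
        wait_random()              # back off a random time, then sense again
    send()

def p_persistent(channel_busy, send, wait_for_next_slot, p=0.3):
    while True:
        if channel_busy():
            wait_for_next_slot()           # busy: wait for the next slot and retry
        elif random.random() < p:
            send()                         # idle: transmit with probability p
            return
        else:
            wait_for_next_slot()           # idle but deferred (probability q = 1 - p)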

4. Explain CSMA/CD concept


5. Describe CSMA/CA in brief.
6. Write short note on pure & slotted ALOHA.
7. Explain IEEE 802.3 MAC Sublayer frame format.
 Preamble: This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert the
receiving system to the coming frame and enable it to synchronize its clock if it’s out
of synchronization. The pattern provides only an alert and a timing pulse. The 56-bit
pattern allows the stations to miss some bits at the beginning of the frame. The
preamble is actually added at the physical layer and is not (formally) part of the
frame.
 Start frame delimiter (SFD): This field (1 byte: 10101011) signals the beginning of
the frame. The SFD warns the station or stations that this is the last chance for
synchronization. The last 2 bits are (11)₂ and alert the receiver that the next field is
the destination address. This field is actually a flag that defines the beginning of the
frame. We need to remember that an Ethernet frame is a variable-length frame. It
needs a flag to define the beginning of the frame. The SFD field is also added at the
physical layer.
 Destination address (DA): This field is six bytes (48 bits) and contains the link-layer
address of the destination station or stations to receive the packet. We will discuss
addressing shortly. When the receiver sees its own link-layer address, or a multicast
address for a group that the receiver is a member of, or a broadcast address, it
decapsulates the data from the frame and passes the data to the upper-layer protocol
defined by the value of the type field.
 Source address (SA): This field is also six bytes and contains the link-layer address
of the sender of the packet. We will discuss addressing shortly.
 Type: This field defines the upper-layer protocol whose packet is encapsulated in the
frame. This protocol can be IP, ARP, OSPF, and so on. In other words, it serves the
same purpose as the protocol field in a datagram and the port number in a segment
or user datagram. It is used for multiplexing and demultiplexing.
 Data: This field carries data encapsulated from the upper-layer protocols. It is a
minimum of 46 and a maximum of 1500 bytes. We discuss the reason for these
minimum and maximum values shortly. If the data coming from the upper layer is
more than 1500 bytes, it should be fragmented and encapsulated in more than one
frame. If it is less than 46 bytes, it needs to be padded with extra 0s. A padded data
frame is delivered to the upper-layer protocol as it is (without removing the padding),
which means that it is the responsibility of the upper layer to remove or, in the case
of the sender, to add the padding. The upper-layer protocol needs to know the length
of its data. For example, a datagram has a field that defines the length of the data.
 CRC: The last field contains error detection information, in this case a CRC-32. The
CRC is calculated over the address, type, and data fields. If the receiver calculates
the CRC and finds that it is not zero (corruption in transmission), it discards the
frame.

Frame Length: Ethernet has imposed restrictions on both the minimum and maximum
lengths of a frame. The minimum length restriction is required for the correct operation of
CSMA/CD, as we will see shortly. An Ethernet frame needs to have a minimum length of 512
bits or 64 bytes. Part of this length is the header and the trailer. If we count 18 bytes of
header and trailer (6 bytes of source address, 6 bytes of destination address, 2 bytes of
length or type, and 4 bytes of CRC), then the minimum length of data from the upper layer
is 64 − 18 = 46 bytes. If the upper-layer packet is less than 46 bytes, padding is added to
make up the difference. The standard defines the maximum length of a frame (without
preamble and SFD field) as 1518 bytes. If we subtract the 18 bytes of header and trailer, the
maximum length of the payload is 1500 bytes. The maximum length restriction has two
historical reasons. First, memory was very expensive when Ethernet was designed; a
maximum length restriction helped to reduce the size of the buffer. Second, the maximum
length restriction prevents one station from monopolizing the shared medium, blocking
other stations that have data to send.
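
As a rough illustration of the padding and length rules just described, the Python sketch below assembles the DA + SA + Type + Data + CRC portion of a frame (the sample addresses and the use of zlib.crc32 are illustrative assumptions; the exact FCS bit ordering of real Ethernet is not reproduced here).

import struct
import zlib

def build_frame(dst: bytes, src: bytes, eth_type: int, payload: bytes) -> bytes:
    """Assemble DA + SA + Type + padded Data + CRC (preamble and SFD are omitted,
    since they are added at the physical layer)."""
    if len(payload) < 46:                        # pad short payloads with zeros
        payload = payload + b"\x00" * (46 - len(payload))
    if len(payload) > 1500:
        raise ValueError("payload must be fragmented into more than one frame")
    header = dst + src + struct.pack("!H", eth_type)
    crc = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + crc                # 64 to 1518 bytes in total

frame = build_frame(b"\xff" * 6,                   # broadcast destination (example)
                    b"\x02\x00\x00\x00\x00\x01",   # made-up source address
                    0x0800,                        # IPv4, as an example type value
                    b"hello")
print(len(frame))    # 64: the minimum frame length described in the text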

8. Difference between 802.3, 802.4 & 802.5 IEEE Standard

Topology: IEEE 802.3 uses a bus topology. IEEE 802.4 uses a bus or tree topology. IEEE 802.5 uses a ring topology.

Frame size: The frame format in IEEE 802.3 is 1572 bytes. In IEEE 802.4 it is 8202 bytes. In IEEE 802.5 the frame is of variable size.

Priority: IEEE 802.3 gives no priority to stations. IEEE 802.4 supports priorities for stations. In IEEE 802.5 priorities are possible.

Data field: In IEEE 802.3 the data field is 0 to 1500 bytes. In IEEE 802.4 it is 0 to 8182 bytes. In IEEE 802.5 there is no limit on the size of the data field.

Minimum frame: IEEE 802.3 requires a minimum frame of 64 bytes. IEEE 802.4 can handle short minimum frames. IEEE 802.5 supports both short and large frames.

Performance: In IEEE 802.3, efficiency decreases as speed increases and throughput is affected by collisions. In IEEE 802.4 and IEEE 802.5, throughput and efficiency at very high loads are outstanding.

Modems: IEEE 802.3 does not require modems. IEEE 802.4 requires modems. Like IEEE 802.4, IEEE 802.5 also requires modems.

Protocol complexity: The IEEE 802.3 protocol is very simple. The IEEE 802.4 protocol is extremely complex. The IEEE 802.5 protocol is moderately complex.

Applications: IEEE 802.3 is not applicable to real-time, interactive, or client-server applications. IEEE 802.4 is applicable to real-time traffic. IEEE 802.5 can be applied to real-time and interactive applications because there is no limit on the size of the data field.

9. Describe fiber distributed data interface (FDDI) standard in detail.

FDDI is a high-speed networking standard primarily designed for local area networks (LANs)
and capable of extending up to 200 kilometers (124 miles). Based on the token ring
protocol, it supports high-capacity and high-speed data transfer. While it was once widely
used, especially in backbone networks for wide area networks (WANs) and campus area
networks (CANs), it has largely been replaced by more modern networking technologies.

Topology and Design

 Dual Ring Structure: FDDI networks typically employ two token rings—a primary ring
and a secondary ring for redundancy.
1. Primary Ring: Offers up to 100 Mbps capacity.
2. Secondary Ring: Serves as a backup and can also be used to double the
capacity to 200 Mbps if necessary.
 Directional Operation: The rings operate in opposite directions, one clockwise and
the other counterclockwise.
 Distance:
1. Single Ring: Can extend up to 200 km.
2. Dual Ring: Can extend up to 100 km.
 Topology: Although FDDI uses a token ring topology, it can also be implemented in
a star topology structure.

Protocol and Operation

 Token Passing: Only the device with the token can transmit data, ensuring orderly
access to the network.
 Timed Token: Ensures maximum wait times for devices, supporting both
synchronous (guaranteed timings) and asynchronous configurations.
 Standards: FDDI operates at the OSI model’s Layer 1 (physical) and Layer 2 (the
media access control portion of the data link layer).
 Frame Size: It supports a large maximum transmission unit (MTU) frame size of
4,352 bytes.
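
A toy round-robin model of token passing is sketched below in Python (station names and queued frame counts are invented; timed-token bookkeeping such as the target token rotation time is left out).

from collections import deque

# Toy token-passing ring: only the station holding the token may transmit.
stations = deque(["A", "B", "C", "D"])            # invented station names
pending = {"A": 2, "B": 0, "C": 1, "D": 3}        # frames queued at each station

for _ in range(8):                                # pass the token around the ring twice
    holder = stations[0]
    if pending[holder] > 0:
        pending[holder] -= 1
        print(f"{holder} transmits one frame, then releases the token")
    else:
        print(f"{holder} has nothing to send, passes the token")
    stations.rotate(-1)                           # token moves on to the next station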

Physical Media

 Primary Medium: Single-mode fiber optic cable.


 Alternative Media: Non-fiber optic options such as Copper Distributed Data
Interface (CDDI) and Twisted-Pair Physical Medium-Dependent (TP-PMD).

Device Attachment

 Dual Attachment Stations (DAS): Devices connected to both rings.


 Single Attachment Stations (SAS): Devices connected via a single fiber optic
connection.

10. Discuss issue in Data Link layer and about its protocol on the
process of layering protocol

Media Access Control Sub-layer (MAC): It is the second (lower) sub-layer of the data link
layer. It controls the flow and multiplexing for the transmission medium. Transmission of
data packets is controlled by this layer. This layer is responsible for sending the data over
the network interface card.

Functions are:

1. To perform the control of access to the transmission media.
2. To perform unique addressing of stations directly connected to the LAN.
3. Detection of errors.

KEY ISSUES IN THE DATA LINK LAYER

Error Detection and Correction:

 Issue: Data can get corrupted during transmission.


 Protocols: CRC (Cyclic Redundancy Check), Parity Check, Hamming Code.
 Solution: Implementing error detection and correction mechanisms to ensure data
integrity.
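
As a minimal example of error detection, the sketch below computes and verifies a single even-parity bit over a byte string (CRC and Hamming codes are stronger, but the idea of adding redundant check information is the same).

def even_parity_bit(data: bytes) -> int:
    ones = sum(bin(b).count("1") for b in data)   # count all the 1 bits
    return ones % 2

def parity_ok(data: bytes, parity: int) -> bool:
    return even_parity_bit(data) == parity

p = even_parity_bit(b"hello")
print(parity_ok(b"hello", p))   # True
print(parity_ok(b"hellp", p))   # False here, because an odd number of bits changed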

Flow Control:

 Issue: A fast sender can overwhelm a slow receiver.


 Protocols: Stop-and-Wait, Sliding Window.
 Solution: Ensuring that data is sent at a rate that the receiver can handle.
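
A bare-bones sketch of the Stop-and-Wait idea follows (send and recv_ack are placeholder callables, not a real network API; the 1-bit sequence number mirrors the classic protocol).

def stop_and_wait(frames, send, recv_ack, max_retries=3):
    # Send one frame at a time; do not send the next until the current one is acknowledged.
    for seq, frame in enumerate(frames):
        for _attempt in range(max_retries):
            send(seq % 2, frame)         # alternate 0/1 sequence number
            if recv_ack() == seq % 2:    # wait for the matching acknowledgement
                break                    # acknowledged: move on to the next frame
        else:
            raise RuntimeError("no acknowledgement received; giving up")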

Frame Synchronization:

 Issue: Delimiting the start and end of each frame.


 Protocols: Bit stuffing, Byte stuffing.
 Solution: Using specific patterns or flags to indicate frame boundaries.
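
Byte stuffing can be shown in a few lines. Assuming a FLAG byte that marks frame boundaries and an ESC byte that escapes FLAG/ESC occurrences inside the payload (the specific byte values are just examples):

FLAG, ESC = 0x7E, 0x7D      # example delimiter and escape byte values

def stuff(payload: bytes) -> bytes:
    body = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)    # escape any data byte that looks like a delimiter
        body.append(b)
    return bytes([FLAG]) + bytes(body) + bytes([FLAG])   # flags mark the frame boundaries

print(stuff(b"\x01\x7e\x02").hex())   # 7e017d7e027e
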
Medium Access Control (MAC):

 Issue: Multiple devices sharing the same physical medium can lead to collisions.
 Protocols: CSMA/CD (Carrier Sense Multiple Access with Collision Detection),
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), Token Ring.
 Solution: Implementing rules and protocols to manage access to the physical
medium.

Addressing:

 Issue: Each device must be uniquely identifiable on the network.


 Protocols: MAC addresses, ARP (Address Resolution Protocol).
 Solution: Assigning unique hardware addresses to each network interface.

Quality of Service (QoS):

 Issue: Ensuring priority for different types of traffic (e.g., video, voice, data).
 Protocols: 802.1Q (VLAN tagging), 802.1p (Priority tagging).
 Solution: Prioritizing traffic based on type and requirements.

KEY DATA LINK LAYER PROTOCOLS

Ethernet (IEEE 802.3):

 Description: Widely used LAN technology.


 Features: Uses CSMA/CD for collision detection, supports various speeds (Fast
Ethernet, Gigabit Ethernet).

Wi-Fi (IEEE 802.11):

 Description: Wireless LAN technology.


 Features: Uses CSMA/CA for collision avoidance, supports various speeds and
frequencies (2.4 GHz, 5 GHz).

Point-to-Point Protocol (PPP):

 Description: Used for direct connections between two nodes.


 Features: Provides error detection, supports multiple network layer protocols.

High-Level Data Link Control (HDLC):

 Description: Bit-oriented protocol for communication over point-to-point and


multipoint links.
 Features: Provides frame synchronization, error control, and flow control.

Fiber Distributed Data Interface (FDDI):

 Description: High-speed network standard using fiber optics.


 Features: Dual-ring architecture, token-passing protocol for medium access control.

PROCESS OF LAYERING PROTOCOLS

Layered Approach:
 Purpose: Simplifies network design by dividing it into layers, each responsible for
specific functions.
 Benefits: Modularity, ease of troubleshooting, and the ability to develop protocols
independently.

Inter-Layer Communication:

 Description: Each layer interacts with the layer directly above and below it.
 Mechanism: Layers pass data and control information using Service Access Points
(SAPs).

Encapsulation:

 Process: Each layer adds its own header (and sometimes trailer) to the data before
passing it to the next layer.
 Benefit: Ensures that data is appropriately processed at each layer.

De-Encapsulation:

 Process: The receiving end reverses encapsulation, removing headers/trailers at each
layer to extract the original data.
 Benefit: Ensures data is interpreted correctly by the corresponding layer on the
receiving side.
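
The encapsulation/de-encapsulation flow can be sketched as each layer wrapping the data in its own header on the way down, with the receiver stripping the headers in the opposite order (layer names and header strings here are purely illustrative).

layers = ["transport", "network", "data-link"]    # illustrative layer names, top to bottom

def encapsulate(data: str) -> str:
    for layer in layers:                 # going down the stack, each layer prepends its header
        data = f"[{layer}-hdr]{data}"
    return data                          # the outermost header belongs to the lowest layer

def de_encapsulate(frame: str) -> str:
    for layer in reversed(layers):       # going up the stack, strip the outermost header first
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix)
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("app data")
print(wire)                   # [data-link-hdr][network-hdr][transport-hdr]app data
print(de_encapsulate(wire))   # app data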
