
COMPUTER NETWORK

Q1. Explain computer network (definition/advantages/uses/classification)?
Ans Definition
A computer network is a collection of interconnected devices
(such as computers, servers, network devices, and other
digital systems) that communicate and share resources with
each other. The primary purpose of a computer network is to
enable the sharing of information and resources (like files,
applications, printers, etc.) among multiple users.
Classification
Computer networks can be classified based on various
criteria:
1. By Geographic Scope
 Local Area Network (LAN): Covers a small geographic area like a single building or campus. Examples include home or office networks.
 Metropolitan Area Network (MAN): Spans a city or a large campus. Examples include city-wide wireless networks.
 Wide Area Network (WAN): Covers a large geographic area, such as a country or continent. The internet is the largest WAN.
 Personal Area Network (PAN): A network for personal devices, typically within a range of a few meters, like Bluetooth connections.
2. By Connection Type
 Wired Networks: Use physical cables like Ethernet for connections.
 Wireless Networks: Use wireless technologies like Wi-Fi, Bluetooth, or cellular networks for connections.
3. By Network Topology
 Bus Topology: All devices are connected to a single central cable, or bus.
 Star Topology: All devices are connected to a central hub or switch.
 Ring Topology: Devices are connected in a circular fashion, where each device is connected to two other devices.
 Hybrid Topology: A combination of two or more different topologies.
4. By Control Type
 Peer-to-Peer (P2P): All devices have equal status and can communicate directly with each other.
 Client-Server: Devices are divided into clients (requesting services) and servers (providing services).
Advantages
1. Resource Sharing: Networks allow multiple users to
share hardware resources like printers, scanners, and
storage devices, which can save costs.
2. Data Sharing: Users can easily share files, data, and
information over a network.
3. Internet Access: Networks provide users with access to
the internet, allowing for web browsing, email, and
other online services.
4. Centralized Data Management: Networks enable
centralized data management, making data backup,
storage, and retrieval more efficient.
5. Communication: Networks provide various forms of
communication, including email, instant messaging, and
video conferencing.
Uses
1. Business: Enables data exchange and resource sharing among
employees; supports business applications like email,
ERP, and CRM systems.
2. Education: Provides access to educational resources,
online courses, and collaborative tools for students and
teachers.
3. Entertainment: Supports streaming services, online
gaming, and social media platforms.
4. Government: Enhances public services, communication,
and data management across various departments.
5. Home: Enables internet access, smart home devices,
and personal communication and entertainment.
Q2. Explain the OSI model and its functions
Ans OSI Model Overview
The Open Systems Interconnection (OSI) model is a
conceptual framework used to understand and standardize
the functions of a telecommunication or computing system
irrespective of its underlying internal structure and
technology. The OSI model divides these functions into seven
distinct layers, each with specific roles and responsibilities.
This modular approach helps in designing, troubleshooting,
and understanding network protocols and interactions.
OSI Model Layers
1. Physical Layer (Layer 1)
2. Data Link Layer (Layer 2)
3. Network Layer (Layer 3)
4. Transport Layer (Layer 4)
5. Session Layer (Layer 5)
6. Presentation Layer (Layer 6)
7. Application Layer (Layer 7)
Functions of Each Layer
1. Physical Layer (Layer 1)
 Function: Deals with the physical connection between devices, including the transmission and reception of raw bit streams over a physical medium.
 Key Components: Cables, connectors, hubs, repeaters, and network interface cards (NICs).
 Responsibilities:
o Defines hardware specifications, such as electrical signals, cables, and connectors.
o Manages data encoding and signal transmission techniques.
o Ensures bit synchronization and bit rate control.
2. Data Link Layer (Layer 2)
 Function: Provides node-to-node data transfer and detects (and may correct) errors that occur at the physical layer.
 Key Components: Bridges, switches, and MAC (Media Access Control) addresses.
 Responsibilities:
o Frames data packets.
o Manages MAC addresses and physical addressing.
o Controls access to the physical medium.
o Handles error detection and correction (e.g., using CRC).
3. Network Layer (Layer 3)
 Function: Manages logical addressing and determines the best path for data transfer across multiple networks.
 Key Components: Routers, IP addresses.
 Responsibilities:
o Logical addressing (e.g., IP addresses).
o Routing and forwarding of packets.
o Packet fragmentation and reassembly.
o Inter-networking (connecting different types of networks).
4. Transport Layer (Layer 4)
 Function: Ensures reliable data transfer between two systems, providing error checking and flow control.
 Key Components: TCP (Transmission Control Protocol), UDP (User Datagram Protocol).
 Responsibilities:
o End-to-end connection establishment and termination.
o Segmentation and reassembly of data.
o Flow control and congestion avoidance.
o Error detection and recovery (retransmission of lost packets).
5. Session Layer (Layer 5)
 Function: Establishes, manages, and terminates sessions (dialogues) between applications.
6. Presentation Layer (Layer 6)
 Function: Translates data formats and handles encryption and compression so that applications can interpret the data.
7. Application Layer (Layer 7)
 Function: Provides network services (e.g., HTTP, FTP, SMTP, DNS) directly to end-user applications.
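To make the layering concrete, here is a minimal, illustrative Python sketch of encapsulation: application data is wrapped with a simplified transport header, then a network header, then a link-layer frame with a CRC trailer. The field layouts, sizes, and addresses are assumptions for illustration and do not follow any real protocol exactly.

```python
import ipaddress
import struct
import zlib

def transport_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Simplified transport header: source port, destination port, payload length.
    return struct.pack("!HHH", src_port, dst_port, len(payload)) + payload

def network_packet(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Simplified network header: 4-byte source and destination IP addresses.
    return ipaddress.ip_address(src_ip).packed + ipaddress.ip_address(dst_ip).packed + segment

def link_frame(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    # Simplified frame: destination MAC, source MAC, payload, CRC-32 trailer for error detection.
    body = dst_mac + src_mac + packet
    return body + struct.pack("!I", zlib.crc32(body))

app_data = b"GET /index.html"                                    # application-layer data
segment = transport_segment(app_data, 50000, 80)                 # transport layer adds ports
packet = network_packet(segment, "192.168.1.10", "203.0.113.5")  # network layer adds IP addresses
frame = link_frame(packet, bytes.fromhex("001A2B3C4D5E"),
                   bytes.fromhex("001A2B3C4D6F"))                # data link layer adds MACs + CRC
print(len(app_data), len(segment), len(packet), len(frame))      # each layer adds its own overhead
```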
Q3. What are the fundamental ways by which network performance is measured in a computer network?
Ans Network performance in a computer network is
measured using several fundamental metrics that provide
insights into the efficiency, reliability, and speed of the
network. These metrics help network administrators
understand the current state of the network, identify
potential issues, and make informed decisions to optimize
performance. Here are the primary ways network
performance is measured:
1. Bandwidth
 Definition: Bandwidth is the maximum rate at which data can be transmitted over a network connection, usually measured in bits per second (bps), kilobits per second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps).
 Importance: Higher bandwidth allows more data to be transmitted simultaneously, improving overall network speed and capacity.
2. Throughput
 Definition: Throughput is the actual rate at which data is successfully transmitted over the network, typically measured in bits per second (bps) or bytes per second (Bps).
 Importance: Throughput indicates the effective performance of the network and can be influenced by various factors like network congestion, packet loss, and latency.
3. Latency
 Definition: Latency is the time it takes for a data packet to travel from the source to the destination, measured in milliseconds (ms).
 Importance: Lower latency is crucial for real-time applications such as video conferencing, online gaming, and VoIP, where delays can affect user experience.
4. Packet Loss
 Definition: Packet loss occurs when data packets transmitted across the network fail to reach their destination, usually expressed as a percentage of total packets sent.
 Importance: High packet loss can degrade network performance, leading to retransmissions, reduced throughput, and poor quality of service (QoS).
5. Jitter
 Definition: Jitter refers to the variation in packet arrival times, causing packets to arrive out of order or with varying delays.
 Importance: Low jitter is essential for applications requiring consistent data flow, such as streaming media and VoIP.
6. Error Rate
 Definition: The error rate measures the number of corrupted bits or packets in the transmitted data, typically expressed as a percentage or a ratio (e.g., bit error rate - BER).
 Importance: A high error rate can necessitate retransmissions, increasing latency and reducing throughput.
7. Network Utilization
 Definition: Network utilization is the ratio of the current network traffic to the maximum available bandwidth, expressed as a percentage.
 Importance: High network utilization indicates heavy use of network resources, which can lead to congestion and performance degradation.
8. Quality of Service (QoS)
 Definition: QoS refers to the overall performance of the network, particularly in terms of bandwidth, latency, jitter, and packet loss, ensuring that critical applications receive the necessary resources.
 Importance: QoS is vital for maintaining the performance of priority applications and services, such as VoIP and streaming media.
9. Round-Trip Time (RTT)
 Definition: RTT measures the time it takes for a data packet to travel from the source to the destination and back again, typically measured in milliseconds (ms).
 Importance: RTT is crucial for understanding the overall responsiveness of the network and can impact the performance of interactive applications (a small measurement sketch follows this list).
10. Mean Opinion Score (MOS)
 Definition: MOS is a subjective measure of the quality of voice and video transmissions, typically on a scale from 1 (bad) to 5 (excellent).
 Importance: MOS provides a user-centric view of network performance, especially for VoIP and video conferencing services.
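Several of these metrics can be measured or computed directly. The sketch below is a minimal example using Python's standard library; the host name, the transfer figures, and the 100 Mbps link capacity are assumptions for illustration, and timing a TCP handshake is only an approximation of RTT.

```python
import socket
import time

def measure_rtt(host: str, port: int = 443) -> float:
    """Approximate round-trip time (seconds) by timing a TCP handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.monotonic() - start

def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Actual data rate achieved over an interval, in bits per second."""
    return (bytes_transferred * 8) / seconds

def utilization_percent(throughput: float, bandwidth: float) -> float:
    """Network utilization: current traffic as a percentage of available bandwidth."""
    return 100.0 * throughput / bandwidth

rtt = measure_rtt("example.com")            # assumed reachable host
print(f"RTT ~ {rtt * 1000:.1f} ms")
tput = throughput_bps(10_000_000, 2.5)      # e.g. 10 MB transferred in 2.5 s
print(f"Throughput ~ {tput / 1e6:.1f} Mbps")
print(f"Utilization ~ {utilization_percent(tput, 100e6):.1f}% of an assumed 100 Mbps link")
```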
Q4. Explain the TCP/IP reference model?
Ans TCP/IP Reference Model Overview
The TCP/IP reference model, also known as the Internet
Protocol Suite, is a conceptual framework for understanding
and designing the network protocols used in the internet and
similar networks. It was developed by the United States
Department of Defense to enable reliable and standardized
communication between different computer systems. The
TCP/IP model is composed of four layers, each with specific
functions and protocols.
Layers of the TCP/IP Model
1. Network Interface Layer (Link Layer)
2. Internet Layer
3. Transport Layer
4. Application Layer
Functions of Each Layer
1. Network Interface Layer (Link Layer)
 Function: Handles the physical transmission of data over a network medium, including hardware addressing and error detection at the data link level.
 Key Protocols: Ethernet, Wi-Fi, ARP (Address Resolution Protocol), PPP (Point-to-Point Protocol).
 Responsibilities:
o Defines how data is formatted for transmission.
o Manages hardware addresses (e.g., MAC addresses).
o Controls access to the physical transmission medium.
o Ensures error detection and correction for frames.
2. Internet Layer
 Function: Facilitates the logical addressing and routing of data packets across network boundaries to enable inter-network communication.
 Key Protocols: IP (Internet Protocol), ICMP (Internet Control Message Protocol), IGMP (Internet Group Management Protocol).
 Responsibilities:
o Defines logical addressing using IP addresses.
o Routes data packets across multiple networks (routing).
o Manages packet fragmentation and reassembly.
o Handles error reporting and diagnostics.
3. Transport Layer
 Function: Provides reliable data transfer services between two hosts, ensuring data integrity, error recovery, and flow control.
 Key Protocols: TCP (Transmission Control Protocol), UDP (User Datagram Protocol).
 Responsibilities:
o Establishes and terminates connections between hosts.
o Provides reliable data transfer with error detection and recovery (TCP).
o Supports flow control and congestion avoidance (TCP).
o Enables connectionless data transfer for applications that do not require reliability (UDP).
4. Application Layer
 Function: Provides network services directly to end-user applications, enabling communication between software applications and the network.
 Key Protocols: HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), DNS (Domain Name System), Telnet, SSH (Secure Shell), SNMP (Simple Network Management Protocol).
 Responsibilities:
o Facilitates application-specific networking functions.
o Manages network communication for user applications.
o Provides protocols for file transfer, email, web browsing, remote login, and more.
o Supports translation of data formats, encryption, and data compression.
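A hedged sketch of these layers working together from the application's point of view: DNS (application layer) resolves a host name, a TCP connection (transport layer) carries a minimal HTTP request, and the internet and link layers are handled by the operating system. The host name is an assumption; any reachable web server would behave similarly.

```python
import socket

HOST = "example.com"  # assumed reachable web server

# Application layer: DNS resolves the name to an IP address used by the internet layer.
ip_address = socket.gethostbyname(HOST)
print("Resolved", HOST, "to", ip_address)

# Transport layer: open a TCP connection (reliable, connection-oriented).
with socket.create_connection((ip_address, 80), timeout=5) as conn:
    # Application layer: send a minimal HTTP/1.1 request over the TCP connection.
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode("ascii"))
    status_line = conn.recv(1024).split(b"\r\n", 1)[0]
    print("Server replied:", status_line.decode(errors="replace"))
```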


Comparison with OSI Model
The TCP/IP model is simpler and has fewer layers than the
OSI model, which has seven layers. The TCP/IP model's layers
can be mapped to the OSI model as follows:
 Network Interface Layer (TCP/IP) ≈ Physical Layer + Data Link Layer (OSI)
 Internet Layer (TCP/IP) ≈ Network Layer (OSI)
 Transport Layer (TCP/IP) ≈ Transport Layer (OSI)
 Application Layer (TCP/IP) ≈ Session Layer + Presentation Layer + Application Layer (OSI)
Q5. What is the difference between connection-oriented and connectionless services?
Ans In computer networks, communication between devices
can be classified into two types: connection-oriented and
connectionless services. These terms describe how data is
transmitted and managed during the communication
process.
Connection-Oriented Service
Definition
A connection-oriented service establishes a dedicated
connection between the communicating devices before any
data is transmitted. This connection is maintained for the
duration of the communication session and is terminated
once the session ends.
Key Characteristics
1. Connection Establishment: Before any data is
transferred, a connection is established between the
sender and receiver.
2. Reliability: The service ensures that data packets are
delivered in the correct order and without errors.
3. Flow Control and Congestion Control: Mechanisms are
in place to manage the rate of data transmission,
preventing network congestion and ensuring smooth
data flow.
4. Error Detection and Correction: The service includes
methods for detecting and correcting errors that may
occur during data transmission.
5. Stateful Communication: The network maintains
information about the connection, such as the sequence
of packets and status of the transmission.
Examples
 TCP (Transmission Control Protocol): Used for applications where reliable delivery is crucial, such as web browsing (HTTP/HTTPS), email (SMTP), and file transfer (FTP).
Connectionless Service
Definition
A connectionless service does not establish a dedicated
connection before data is transmitted. Instead, data packets
are sent independently of each other, and each packet may
take a different path to reach the destination.
Key Characteristics
1. No Connection Establishment: Data is sent without
establishing a connection, which can reduce the initial
delay.
2. Best-Effort Delivery: There is no guarantee of packet
delivery, order, or integrity. Packets may arrive out of
order, be duplicated, or be lost.
3. Stateless Communication: The network does not
maintain information about the state of the
transmission.
4. Less Overhead: Without the need for connection
establishment and maintenance, there is typically less
overhead, leading to faster communication for certain
types of data.
Examples
 UDP (User Datagram Protocol): Used for applications where speed is more critical than reliability, such as streaming media (video and audio), online gaming, and DNS (Domain Name System) queries. A minimal socket sketch contrasting the two services is shown below.
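A minimal loopback sketch contrasting the two services with Python sockets: the TCP exchange requires a connection (SOCK_STREAM) before any data flows, while the UDP exchange (SOCK_DGRAM) simply sends independent datagrams. The port numbers and the tiny echo servers are assumptions for illustration.

```python
import socket
import threading
import time

def tcp_echo_server(port: int = 50007) -> None:
    # Connection-oriented: accept a connection, then echo data back over it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def udp_echo_server(port: int = 50008) -> None:
    # Connectionless: each datagram is handled on its own, with no setup or teardown.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind(("127.0.0.1", port))
        data, addr = srv.recvfrom(1024)
        srv.sendto(data, addr)

threading.Thread(target=tcp_echo_server, daemon=True).start()
threading.Thread(target=udp_echo_server, daemon=True).start()
time.sleep(0.2)  # give the servers a moment to bind

with socket.create_connection(("127.0.0.1", 50007)) as c:    # TCP: connection established first
    c.sendall(b"hello over TCP")
    print(c.recv(1024))

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as u:  # UDP: just send a datagram
    u.sendto(b"hello over UDP", ("127.0.0.1", 50008))
    print(u.recvfrom(1024)[0])
```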
Comparison

Feature           | Connection-Oriented Service                            | Connectionless Service
Connection Setup  | Required before data transfer                          | Not required
Reliability       | High; ensures error-free and ordered delivery          | Low; no guarantees on delivery or order
Overhead          | Higher, due to connection management                   | Lower, due to lack of connection management
Flow Control      | Supported                                              | Not supported
Error Handling    | Built-in mechanisms for error detection and correction | Minimal error handling
State Information | Stateful; maintains information about the connection   | Stateless; no information about the state of transmission
Use Cases         | Web browsing, email, file transfer                     | Streaming media, online gaming, DNS queries

UNIT 2
Q1. Explain Go-Back-N and Selective Repeat protocols?
Ans Go-Back-N and Selective Repeat Protocols
Go-Back-N and Selective Repeat are two error-control
protocols used in computer networks to ensure reliable data
transmission over an unreliable or noisy communication
channel. Both protocols are part of the Automatic Repeat
request (ARQ) family, which uses acknowledgments and
retransmissions to ensure data integrity.
Go-Back-N ARQ Protocol
Definition
Go-Back-N (GBN) is a sliding window protocol that allows the
sender to send multiple frames before needing an
acknowledgment for the first one, but requires that frames
be acknowledged in order. If an error is detected or a frame
is lost, all subsequent frames are retransmitted.
Key Characteristics
1. Sliding Window: The sender can send several frames
specified by the window size (N) before needing an
acknowledgment.
2. Sequential Acknowledgment: The receiver only
acknowledges the last correctly received frame in
sequence. If a frame is lost or an error is detected, all
subsequent frames are discarded.
3. Retransmission: If an acknowledgment (ACK) for a
frame is not received within a certain time (due to loss
or error), the sender retransmits that frame and all
subsequent frames in the window.
Example
Consider a window size of 4 (N=4):
 The sender sends frames 0, 1, 2, and 3.
 If the receiver receives frames 0, 1, and 3 (frame 2 is lost), it acknowledges only frame 1, the last in-order frame, and discards frame 3.
 When no acknowledgment for frame 2 arrives before the timeout, the sender goes back and retransmits frame 2 and frame 3, even though frame 3 was previously sent. A small simulation of this behaviour is sketched below.
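A minimal in-memory simulation of this behaviour (a sketch, not a real network implementation); the window size, frame names, and the single lost frame are assumptions for illustration.

```python
def go_back_n(frames, window=4, lost=frozenset({2})):
    """Simulate Go-Back-N: on a gap, the receiver discards later frames and the
    sender resends everything from the first unacknowledged frame onwards."""
    base = 0                 # first unacknowledged frame
    expected = 0             # next in-order frame the receiver will accept
    delivered = []
    first_pass = True
    while base < len(frames):
        for seq in range(base, min(base + window, len(frames))):   # send the current window
            dropped = first_pass and seq in lost
            print(f"send frame {seq}" + (" (lost)" if dropped else ""))
            if not dropped and seq == expected:   # in-order frame accepted and delivered
                delivered.append(frames[seq])
                expected += 1
            # out-of-order frames are simply discarded by the receiver
        first_pass = False
        base = expected       # cumulative ACK slides the window forward
    return delivered

print(go_back_n(["f0", "f1", "f2", "f3", "f4"]))   # frames 2 and 3 are resent after the loss
```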
Advantages
 Simplicity: The protocol is straightforward and easy to implement.
Disadvantages
 Inefficiency: Retransmitting all frames after a lost or erroneous frame can be inefficient, especially with a large window size or high error rate.
Selective Repeat ARQ Protocol
Definition
Selective Repeat (SR) is a sliding window protocol that also
allows the sender to send multiple frames before needing an
acknowledgment. However, unlike Go-Back-N, only the
erroneous or lost frames are retransmitted.
Key Characteristics
1. Sliding Window: Both sender and receiver maintain a
window of acceptable frames, which can be out of
order.
2. Individual Acknowledgment: Each frame is
acknowledged individually. The receiver sends an
acknowledgment for each correctly received frame,
even if it is out of order.
3. Selective Retransmission: The sender only retransmits
the frames for which a negative acknowledgment
(NACK) is received or the acknowledgment is not
received within the timeout period.
Example
Consider a window size of 4 (N=4):
 The sender sends frames 0, 1, 2, and 3.
 If the receiver receives frames 0, 1, and 3 (frame 2 is lost), it sends ACKs for frames 0, 1, and 3, buffers frame 3, and reports frame 2 as missing (NACK).
 The sender, upon receiving the NACK for frame 2 (or timing out), retransmits only frame 2.
 Once frame 2 is correctly received, the receiver acknowledges it and delivers frames 2 and 3 to the upper layer in order, as illustrated in the sketch below.
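A matching in-memory sketch of Selective Repeat under the same assumed loss: the out-of-order frame is buffered rather than discarded, and only the missing frame is resent.

```python
def selective_repeat(frames, lost=frozenset({2})):
    """Simulate Selective Repeat: out-of-order frames are buffered and only the
    missing (unacknowledged) frames are retransmitted."""
    buffer = {}                              # receiver buffer, keyed by sequence number
    pending = set(range(len(frames)))        # frames not yet acknowledged
    first_pass = True
    while pending:
        for seq in sorted(pending):
            dropped = first_pass and seq in lost
            print(f"send frame {seq}" + (" (lost, will be resent)" if dropped else " (ACKed)"))
            if not dropped:
                buffer[seq] = frames[seq]    # accepted and buffered even if out of order
        pending -= set(buffer)               # only the lost frames remain unacknowledged
        first_pass = False
    return [buffer[i] for i in sorted(buffer)]   # delivered to the upper layer in order

print(selective_repeat(["f0", "f1", "f2", "f3"]))   # only frame 2 is retransmitted
```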


Advantages
 Efficiency: Only the lost or erroneous frames are retransmitted, which reduces the amount of retransmission and increases efficiency, especially with a large window size or high error rate.
Disadvantages
 Complexity: Implementing the Selective Repeat protocol is more complex due to the need for maintaining the state of individual frames and handling out-of-order frames.
Comparison
Feature                   | Go-Back-N (GBN)                                                   | Selective Repeat (SR)
Window Size               | N (number of frames that can be sent without acknowledgment)     | N (number of frames that can be sent without acknowledgment)
Acknowledgment            | Cumulative (acknowledges up to the last in-order frame received) | Individual (acknowledges each frame separately)
Retransmission            | Retransmits all frames from the last acknowledged frame          | Retransmits only the erroneous or lost frames
Receiver Buffer           | Small buffer, as out-of-order frames are discarded               | Larger buffer to hold out-of-order frames until they can be processed
Efficiency                | Lower, due to redundant retransmissions                          | Higher, due to selective retransmissions
Implementation Complexity | Simpler to implement                                              | More complex to implement

Q2. Explain flow control and error control?
Ans Flow Control and Error Control
Flow control and error control are crucial mechanisms in
computer networks to ensure reliable and efficient
communication between devices. They are essential
components of data link and transport layer protocols.
Flow Control
Definition
Flow control is a technique used to manage the rate of data
transmission between a sender and receiver to prevent the
receiver from being overwhelmed by too much data too
quickly. It ensures that the sender does not send more data
than the receiver can process and store.
Techniques
1. Stop-and-Wait Protocol:
o Mechanism: The sender transmits one frame and waits for an acknowledgment (ACK) from the receiver before sending the next frame.
o Pros: Simple to implement.
o Cons: Inefficient for high-latency networks, as the sender remains idle while waiting for the ACK.
2. Sliding Window Protocol:
o Mechanism: The sender can transmit multiple frames before needing an acknowledgment, but within a window size limit. The window slides over the sequence of frames, allowing for continuous transmission.
o Pros: More efficient use of the network, better throughput.
o Cons: More complex to implement compared to stop-and-wait.
3. Credit-Based Flow Control:
o Mechanism: The receiver sends credit (permission) to the sender, indicating how many frames or bytes it can receive. The sender can transmit data up to the credit limit (a minimal sketch of this idea follows the list).
o Pros: Adaptable to varying receiver capabilities.
o Cons: Requires additional control information to be exchanged.
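A minimal sketch of the credit-based idea in plain Python (class names and the buffer size are assumptions): the sender checks how much the receiver can accept before transmitting and pauses when the credit is exhausted.

```python
from collections import deque

class Receiver:
    def __init__(self, buffer_size=3):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def credit(self):
        # Credit = how many more frames the receiver can currently accept.
        return self.buffer_size - len(self.buffer)

    def deliver(self, frame):
        self.buffer.append(frame)

    def consume(self):
        # The receiving application reads the data, freeing buffer space (new credit).
        while self.buffer:
            print("consumed", self.buffer.popleft())

def send_with_credit(frames, receiver):
    i = 0
    while i < len(frames):
        grant = receiver.credit()            # receiver advertises how much it can take
        if grant == 0:
            receiver.consume()               # sender must pause until credit is granted
            continue
        for _ in range(min(grant, len(frames) - i)):
            print("sending", frames[i])
            receiver.deliver(frames[i])
            i += 1
    receiver.consume()

send_with_credit([f"frame{i}" for i in range(7)], Receiver(buffer_size=3))
```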
Importance
 Prevents Buffer Overflow: Avoids scenarios where the receiver's buffer overflows, leading to data loss.
 Optimizes Network Utilization: Maintains an efficient data flow, reducing idle times and maximizing throughput.
 Improves Communication Reliability: Ensures that data is transmitted at a manageable rate for both sender and receiver.
Error Control
Definition
Error control is a technique used to detect and correct errors
that occur during data transmission. It ensures data integrity
and reliability in communication.
Techniques
1. Error Detection:
o Parity Check: Adds a parity bit to the data to make the number of 1s either even (even parity) or odd (odd parity). Simple but not very robust.
o Checksum: Sums up the data segments and sends the result along with the data. The receiver performs the same sum and checks for discrepancies.
o Cyclic Redundancy Check (CRC): Uses polynomial division to generate a CRC value, which is appended to the data. The receiver performs the same polynomial division to detect errors. A small parity/checksum/CRC sketch appears after this list.
2. Error Correction:
o Forward Error Correction (FEC): The sender includes enough redundant information in the data so that the receiver can detect and correct errors without needing a retransmission. Example: Reed-Solomon codes.
o Automatic Repeat reQuest (ARQ): The receiver detects errors and requests the sender to retransmit the erroneous data. Types include:
 Stop-and-Wait ARQ: The sender waits for an ACK or NACK (negative acknowledgment) before sending the next frame.
 Go-Back-N ARQ: The sender can send several frames before needing an ACK, but must retransmit from the last unacknowledged frame if an error is detected.
 Selective Repeat ARQ: The sender retransmits only the frames that were negatively acknowledged or timed out.
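A minimal sketch of the error-detection techniques listed above, using Python's standard library: an even-parity bit, a simple 16-bit checksum with carry folding, and CRC-32 via zlib. The one-byte corruption is simulated for illustration; real protocols define their own exact checksum and CRC algorithms.

```python
import zlib

def parity_bit(data: bytes) -> int:
    """Even parity: returns 1 when the count of 1-bits is odd, so the total becomes even."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

def checksum16(data: bytes) -> int:
    """Simple 16-bit checksum: add 16-bit words and fold the carries back in."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

message = b"HELLO"
corrupted = b"HELLP"   # one byte changed "in transit"

print("parity:  ", parity_bit(message), "vs", parity_bit(corrupted))
print("checksum:", hex(checksum16(message)), "vs", hex(checksum16(corrupted)))
print("CRC-32:  ", hex(zlib.crc32(message)), "vs", hex(zlib.crc32(corrupted)))
```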
Importance
 Ensures Data Integrity: Detects and corrects errors to ensure that the data received is accurate and intact.
 Enhances Communication Reliability: Provides mechanisms to recover from errors, maintaining reliable communication over unreliable networks.
 Supports Data Consistency: Maintains consistency of data across transmissions, crucial for applications like file transfers and data synchronization.
Q3. Explain the data link layer and the data link layer address with an example?
Ans Data Link Layer Overview
The Data Link Layer is the second layer in both the OSI (Open
Systems Interconnection) and TCP/IP reference models. It is
responsible for node-to-node data transfer, framing, error
detection, and managing access to the physical transmission
medium. The Data Link Layer ensures that data is reliably
transferred over the physical layer to the next layer in the
stack.
Key Functions of the Data Link Layer
1. Framing: Encapsulates raw bits from the physical layer
into frames, which are the Data Link Layer's data units.
Framing includes adding headers and trailers to the data
packets.
2. Error Detection and Correction: Detects errors in frames
using methods like checksums, CRC (Cyclic Redundancy
Check), and performs error correction if needed.
3. Flow Control: Manages the rate of data transmission
between sender and receiver to prevent the receiver
from being overwhelmed.
4. Access Control: Determines which device has control
over the communication medium at any given time,
especially in shared network environments like
Ethernet.
5. Logical Link Control (LLC): Provides a way to multiplex
multiple network protocols over the same physical link
and provides flow control and error management.
6. Media Access Control (MAC): Manages protocol access
to the physical network medium, which includes MAC
addressing and controlling how devices on the same
network segment communicate.
Data Link Layer Address (MAC Address)
A Data Link Layer address, commonly known as a MAC
(Media Access Control) address, is a unique identifier
assigned to network interfaces for communication on the
physical network segment. MAC addresses are used within
the Data Link Layer to ensure that data packets are sent to
the correct device on a local network.
Characteristics of MAC Addresses
 Uniqueness: Each MAC address is unique to the network interface card (NIC). It is assigned by the manufacturer and is usually hardcoded into the device.
 Format: A MAC address is a 48-bit (6-byte) value, typically written in hexadecimal. For example, 00:1A:2B:3C:4D:5E.
 Scope: MAC addresses operate within the local network segment and are not used for routing data between different networks.
Example
Consider a simple local area network (LAN) with two devices:
 Device A: MAC Address 00:1A:2B:3C:4D:5E
 Device B: MAC Address 00:1A:2B:3C:4D:6F
When Device A wants to send data to Device B, the process involves the following steps:
1. Framing: Device A encapsulates the data into a frame
and includes Device B's MAC address in the frame's
header as the destination address. Device A's MAC
address is included as the source address.
2. Transmission: The frame is transmitted over the
network medium.
3. Reception: All devices on the local network segment
receive the frame, but only Device B recognizes its own
MAC address in the destination field and processes the
frame.
4. Error Checking: Device B performs error checking using
the frame's CRC or checksum.
5. Data Processing: If the frame is error-free, Device B extracts the data and processes it. If errors are detected, the frame is discarded. A small framing sketch of these steps follows below.
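A minimal Python sketch of these steps with an illustrative (not real Ethernet) frame layout: destination MAC, source MAC, payload, and a CRC-32 trailer. The receiver accepts the frame only if the destination address matches its own MAC and the CRC checks out.

```python
import struct
import zlib

DEVICE_A = bytes.fromhex("001A2B3C4D5E")   # Device A's MAC address (source)
DEVICE_B = bytes.fromhex("001A2B3C4D6F")   # Device B's MAC address (destination)

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    # Framing: destination MAC + source MAC + payload, followed by a CRC-32 trailer.
    body = dst + src + payload
    return body + struct.pack("!I", zlib.crc32(body))

def receive_frame(frame: bytes, my_mac: bytes):
    body, (crc,) = frame[:-4], struct.unpack("!I", frame[-4:])
    dst, src, payload = body[:6], body[6:12], body[12:]
    if dst != my_mac:
        return None                         # frame is not addressed to this device
    if zlib.crc32(body) != crc:
        return None                         # error detected: corrupted frame is discarded
    return payload

frame = build_frame(DEVICE_B, DEVICE_A, b"hello from A")
print(receive_frame(frame, DEVICE_B))       # Device B accepts the frame: b'hello from A'
print(receive_frame(frame, DEVICE_A))       # Device A ignores it: None
```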
Q4. Describe the stop-and-wait protocol technique?
Ans Stop-and-Wait Protocol
The Stop-and-Wait Protocol is a simple and fundamental
flow control method used in data communication to ensure
reliable data transfer between a sender and receiver. This
protocol is particularly relevant in scenarios where reliable
delivery is more critical than the speed of transmission.
Key Characteristics
1. Simple Flow Control: The sender sends one frame (or
packet) at a time and waits for an acknowledgment
(ACK) from the receiver before sending the next frame.
2. Error Detection and Retransmission: If the sender does
not receive an acknowledgment within a certain timeout
period, it assumes the frame was lost or corrupted and
retransmits it.
3. ACK/NACK: The receiver sends an ACK if the frame is
correctly received and a negative acknowledgment
(NACK) or no response if there is an error, prompting
the sender to retransmit.
Operation
The Stop-and-Wait Protocol can be broken down into the
following steps:
1. Initialization: The sender and receiver are initialized and
ready for communication.
2. Frame Transmission: The sender transmits a frame to
the receiver.
3. Wait for ACK: The sender waits for an acknowledgment
from the receiver.
4. ACK Received:
o If an ACK is received within the timeout period, the sender transmits the next frame.
5. Timeout:
o If the ACK is not received within the timeout period, the sender retransmits the frame.
6. Error Handling:
o If the receiver detects an error in the frame, it discards the frame and does not send an ACK, prompting the sender to retransmit after a timeout. A minimal simulation of these steps is sketched below.
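A minimal simulation of these steps (a sketch, not a network implementation): the 30% loss rate, frame names, and retry limit are assumptions; a real stop-and-wait implementation also numbers frames so the receiver can discard duplicates when an ACK, rather than the frame, is lost.

```python
import random

def unreliable_send(message):
    """Simulated channel: roughly 30% of transmissions are lost (assumption)."""
    return None if random.random() < 0.3 else message

def stop_and_wait(frames, max_retries=10):
    delivered = []
    for seq, frame in enumerate(frames):
        for _ in range(max_retries):
            received = unreliable_send((seq, frame))          # 2. frame transmission
            if received is None:
                print(f"frame {seq}: lost, retransmitting")   # 5. timeout -> retransmit
                continue
            ack = unreliable_send(seq)                        # receiver returns an ACK
            if ack == seq:                                    # 4. ACK received -> next frame
                delivered.append(received[1])
                break
            print(f"frame {seq}: ACK lost, retransmitting")
        else:
            raise RuntimeError(f"frame {seq} not delivered after {max_retries} attempts")
    return delivered

print(stop_and_wait(["f0", "f1", "f2"]))
```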
Advantages
 Simplicity: The protocol is easy to implement and understand.
 Reliability: Ensures that each frame is acknowledged before the next one is sent, making it reliable for error detection and correction.
Disadvantages
 Inefficiency: The protocol can be inefficient, especially
over long distances or high-latency networks, because
the sender spends a lot of time waiting for
acknowledgments.
 Low Throughput: The stop-and-wait nature limits the
maximum throughput, as the sender can only send one
frame at a time.
