

OSI Model

Physical Layer (Layer 1):


• Definition: Manages the physical connection between devices, including the transmission
and reception of raw bitstreams over a physical medium.
• Services:
• Bit transmission
• Definition of hardware specifications (e.g., cables, connectors)
• Modulation and signal encoding
• Data rate control and synchronization
• Data Link Layer (Layer 2):
• Definition: Provides error detection and correction, and handles frame synchronization and
MAC addressing to ensure reliable data transfer between two directly connected devices.
• Services:
• Frame synchronization
• Error detection and correction (e.g., CRC)
• MAC (Media Access Control) addressing
• Flow control
• Access control to the shared media (e.g., CSMA/CD in Ethernet)
• Network Layer (Layer 3):
• Definition: Manages logical addressing (IP addresses), routing, and forwarding of packets
across different networks to ensure data reaches its destination.
• Services:
• Logical addressing (e.g., IP addressing)
• Packet forwarding and routing
• Path determination
• Fragmentation and reassembly of packets
• Inter-networking (connecting different networks)
• Transport Layer (Layer 4):
• Definition: Ensures reliable data transfer between devices by handling error correction, flow
control, and data segmentation, using protocols like TCP and UDP.
• Services:
• Reliable data transfer (e.g., TCP)
• Connection establishment, maintenance, and termination
• Segmentation and reassembly of data
• Flow control and congestion avoidance
• Error detection and recovery
• Session Layer (Layer 5):
• Definition: Manages sessions or connections between applications, establishing,
maintaining, and terminating communication sessions.
• Services:
• Session establishment, maintenance, and termination
• Session checkpointing and recovery
• Dialog control (manages which device transmits data when)
• Synchronization of data streams
• Presentation Layer (Layer 6):
• Definition: Translates data between the application layer and the network, handling data
encoding, encryption, and compression.
• Services:
• Data translation (e.g., EBCDIC to ASCII)
• Data encryption and decryption (e.g., SSL/TLS)
• Data compression and decompression
• Data formatting (e.g., JPEG, MPEG)
• Application Layer (Layer 7):
• Definition: Provides network services directly to end-user applications, facilitating network-
related activities like email, file transfer, and web browsing.
• Services:
• Network process to application communication
• Application services such as email (SMTP), file transfer (FTP), and web browsing
(HTTP)
• Directory services (e.g., DNS)
• Network resource sharing and management (e.g., file servers, print servers)

MAC Addresses

A MAC address is a unique 48-bit hardware number embedded into a network card (the Network Interface Card, NIC) during manufacturing. The MAC address is also known as the physical address of a network device.
It is typically written as six groups of two hexadecimal digits separated by colons or hyphens (e.g., 00:11:22:33:44:55); the newer EUI-64 identifier format extends this to 64 bits.

IP address

An IP address is a unique address used to identify computers or nodes on the internet or a local network. Globally, blocks of IP addresses are allocated by IANA (the Internet Assigned Numbers Authority), which operates under ICANN (the Internet Corporation for Assigned Names and Numbers).

What is the Internet?
The Internet is a global network of billions of computers and other electronic devices that allows people to communicate, access information, and share resources over long distances.

Broadband:

- Transmits multiple signals simultaneously over a single medium (like a cable or fiber)
- Uses modulation techniques (such as amplitude, frequency, and phase modulation) to carry each signal on its own frequency band and limit interference between signals
- Allows for faster data transfer rates and multiple channels (like TV channels or internet
connections)
- Think of it like a highway with many lanes, where each lane can carry a different signal

Baseband:

- Transmits a single digital signal over a medium (like a cable or fiber)


- Uses line codes such as NRZ, RZ, or Manchester encoding to represent the digital data
- Has a lower data transfer rate compared to broadband
- Think of it like a single-lane road, where only one signal can travel at a time

In summary, broadband is like a multi-lane highway for multiple signals, while baseband is like a
single-lane road for a single signal.

What is a Protocol?
A protocol is a set of rules that determines how data is formatted, transmitted, and received over a network.

What is the difference between TCP and IP?

TCP (Transmission Control Protocol): TCP is responsible for providing reliable, connection-oriented communication between devices over a network. It ensures that data is delivered in the correct order and without errors. TCP handles flow control, acknowledgment of received data, and retransmission of lost data packets.

IP (Internet Protocol): IP is responsible for addressing and routing packets of data so that they can be sent from the source to the destination across different networks. It provides logical addressing (IP addresses) that uniquely identifies devices on a network and determines the best path for data to travel.
What is the purpose of a subnet mask in TCP/IP networking?
A subnet mask is used in TCP/IP networking to divide an IP address into two parts: the
network address and the host address. It’s used in conjunction with an IP address to
determine whether a destination IP is within the same local network or on a different
network. By comparing the destination IP address with the subnet mask, a device can
determine if the communication should happen within the local network (using MAC
addresses) or if it needs to be forwarded to another network through a router.
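A minimal sketch of this check using Python's standard ipaddress module (the addresses and the /24 mask below are only illustrative values):

import ipaddress

# Our own interface: IP address plus subnet mask.
local_if = ipaddress.ip_interface("192.168.1.10/255.255.255.0")
destination = ipaddress.ip_address("192.168.1.77")

# Compare network portions: same network -> deliver locally (ARP/MAC),
# different network -> send to the default gateway (router).
if destination in local_if.network:
    print(f"{destination} is on the local network {local_if.network}")
else:
    print(f"{destination} is on a different network; forward via the router")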

What is pipelining? Pipelining means that the sender can transmit several frames before the first acknowledgement is received.

Data link layer

Data Link Layer Framing Techniques:


Framing in the data link layer involves encapsulating a network layer packet into a frame
before transmission over the physical medium. The framing process includes adding headers
and trailers to the packet to create a complete data unit that can be easily transmitted and
interpreted by the receiving device.
1. Character Count:
• Definition: This technique involves adding a field in the frame header that specifies
the total number of characters (or bytes) in the frame.
• Advantage: Simple to implement.
• Disadvantage: Vulnerable to errors; if the character count field is corrupted, the
receiver cannot correctly determine the frame boundaries.
2. Byte Stuffing (Character Stuffing):
• Definition: Special characters are used to indicate the beginning and end of a frame.
If these special characters appear in the data, extra bytes (escape characters) are
inserted to differentiate them from actual frame delimiters.
• Advantage: Allows variable-length frames and easy synchronization.
• Disadvantage: Increases the frame size due to additional stuffing bytes.
3. Bit Stuffing:
• Definition: A special bit pattern (like a flag sequence) is used to mark the start and
end of a frame. If this bit pattern appears in the data, extra bits are inserted to prevent
confusion with actual frame delimiters.
• Advantage: Efficient use of bandwidth with minimal overhead.
• Disadvantage: Adds complexity to the framing process as bits need to be stuffed and
unstuffed.
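A short, illustrative Python sketch of bit stuffing; the flag 01111110 and the sample payload are generic HDLC-style examples, not taken from a specific protocol in these notes:

FLAG = "01111110"

def bit_stuff(bits):
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:           # five consecutive 1s: insert a 0
            out.append("0")
            ones = 0
    return "".join(out)

payload = "0111111011111"                  # data that could imitate the flag
frame = FLAG + bit_stuff(payload) + FLAG   # stuffed payload between flags
print(frame)    # the receiver removes the 0 that follows any run of five 1s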

Error control techniques


Error control techniques in networking are methods used to detect and correct errors that
occur during data transmission. These techniques ensure that data is transmitted accurately
and reliably across a network. Error control can be divided into two main categories: Error
Detection and Error Correction.

1. Error Detection Techniques:


• Parity Check:
• Definition: A parity bit is added to the data to make the number of 1's either even
(even parity) or odd (odd parity). The receiver checks the parity to detect any errors.
• Advantage: Simple and easy to implement.
• Disadvantage: Can only detect an odd number of bit errors, not correct them.
• Checksum:
• Definition: The data is divided into equal segments, and their sum (checksum) is
calculated and sent along with the data. The receiver computes the checksum again
and compares it with the received one to detect errors.
• Advantage: Can detect many types of errors.
• Disadvantage: Cannot correct errors, and some errors may go undetected.
• Cyclic Redundancy Check (CRC):
• Definition: A polynomial division is performed on the data to generate a CRC code,
which is appended to the data. The receiver performs the same division and compares
the result to detect errors.
• Advantage: Highly effective in detecting common types of errors, including burst
errors.
• Disadvantage: Complex to implement compared to parity checks and checksums;
cannot correct errors.
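The CRC computation described above is modulo-2 (XOR) division. A minimal Python sketch follows; the generator 1011 (x^3 + x + 1) and the data bits are only example values:

def crc_remainder(data, generator):
    padded = list(data + "0" * (len(generator) - 1))   # append r zero bits
    for i in range(len(data)):
        if padded[i] == "1":                            # XOR the generator in at this position
            for j, g in enumerate(generator):
                padded[i + j] = str(int(padded[i + j]) ^ int(g))
    return "".join(padded[-(len(generator) - 1):])      # last r bits are the CRC

data = "1101011011"
crc = crc_remainder(data, "1011")
codeword = data + crc            # sender transmits the data followed by the CRC
# The receiver repeats the division over the codeword; a zero remainder means no error detected.
print(codeword, crc_remainder(codeword, "1011"))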

2. Error Correction Techniques:


• Automatic Repeat reQuest (ARQ):
• Definition: If an error is detected, the receiver requests the sender to retransmit the
corrupted frame. Common ARQ strategies include Stop-and-Wait ARQ, Go-Back-N
ARQ, and Selective Repeat ARQ.
• Advantage: Simple and reliable method to ensure correct data transmission.
• Disadvantage: Introduces latency due to retransmissions, especially in high-latency
networks.

• Hamming Code:
• Definition: A specific type of forward error correction (FEC) that uses multiple parity bits positioned at specific intervals within the data to detect and correct single-bit errors.
• Advantage: Can detect and correct single-bit errors and detect double-bit errors.
• Disadvantage: Limited to detecting and correcting single-bit errors; more complex
to implement.
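A minimal Python sketch of Hamming(7,4): four data bits are protected by three parity bits at positions 1, 2, and 4, and the recomputed parity (the syndrome) gives the position of a single-bit error. The data word below is arbitrary:

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

def hamming74_correct(code):
    # Recompute each parity; the syndrome (read as binary) is the 1-based error position.
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s3 * 4 + s2 * 2 + s1
    if syndrome:
        code[syndrome - 1] ^= 1            # flip the erroneous bit
    return code

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                               # introduce a single-bit error
print(hamming74_correct(code) == hamming74_encode([1, 0, 1, 1]))   # True
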
Data Link Layer Sliding Window Techniques (Flow Control):
A sliding window protocol is a method used in the data link layer of the OSI model to manage
the flow of data between two devices. It is particularly useful for ensuring that data frames are
transmitted efficiently and without errors.

Key Concepts:
1. Window Size: The "window" refers to a set of consecutive sequence numbers that a sender
is allowed to transmit without waiting for an acknowledgment. The size of the window
determines how many frames can be sent before the sender must wait for an
acknowledgment from the receiver.
2. Sliding Window: As frames are acknowledged, the window "slides" forward, allowing the
sender to transmit new frames. This means the sender can send more frames before having
to stop and wait for an acknowledgment.
3. Acknowledgments (ACKs): The receiver sends an acknowledgment for each frame that it
successfully receives. If a frame is lost or an error is detected, the receiver may request
retransmission.
4. Flow Control: Sliding window protocols provide flow control by ensuring that the sender
does not overwhelm the receiver with too much data at once.
5. Error Control: The protocol can detect and correct errors by retransmitting frames that
were not acknowledged.

Types of Sliding Window Protocols:


1. Go-Back-N (GBN): In this variant, if an error occurs in one frame, that frame and all subsequent frames are retransmitted, even if some of them were received correctly (a small sender simulation follows this list).
2. Selective Repeat (SR): Only the erroneous frames are retransmitted. The receiver can
accept and store out-of-order frames and wait for the missing frames to be retransmitted.
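Below is a small, self-contained Go-Back-N sender simulation in Python. The window size, sequence numbers, and the ACK pattern (in which the acknowledgement for frame 2 is lost) are invented purely for illustration:

N = 4                        # window size
frames = list(range(8))      # sequence numbers 0..7 to deliver
base, next_seq = 0, 0
acks = [0, 1, None, 2, 3, 4, 5, 6, 7]   # None = timeout while waiting for ACK 2

while base < len(frames):
    # Send every frame that fits inside the current window.
    while next_seq < base + N and next_seq < len(frames):
        print(f"send frame {next_seq} (window {base}..{base + N - 1})")
        next_seq += 1
    ack = acks.pop(0)
    if ack is None:
        print(f"timeout: go back to frame {base}")
        next_seq = base          # retransmit everything from the window base
    else:
        base = ack + 1           # a cumulative ACK slides the window forward
print("all frames acknowledged")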
Piggybacking is a technique in networking where an acknowledgment (ACK) is included with a
data packet being sent back to the sender, reducing the need for separate acknowledgment packets
and improving efficiency. It helps optimize bandwidth and reduce network congestion.
What is network congestion?

Network congestion occurs when the demand for network resources exceeds the available capacity,
leading to slower data transmission, increased latency, packet loss, and reduced overall network
performance. It typically happens when too many devices or applications try to send data
simultaneously over a network with limited bandwidth, causing a bottleneck.
Medium Access Control (MAC) Protocols:
Medium Access Control (MAC) protocols are crucial in network communication, especially
in shared network environments like local area networks (LANs) and wireless networks. The
primary function of a MAC protocol is to manage how multiple devices share a common
communication medium (e.g., a single cable or a wireless channel) to avoid collisions and
ensure efficient data transmission.

Types of Medium Access Control (MAC) Protocols:


1. Random Access Protocols:
• Definition: In random access protocols, any device can transmit data whenever it has
data to send. The devices compete for the medium, and if a collision occurs, they
follow specific rules to retransmit the data.
• Examples:
• ALOHA: A simple protocol where devices send data whenever they have it.
If a collision occurs, the device waits for a random time before retransmitting.
• Pure ALOHA: Devices transmit data at any time but must resend if a
collision occurs, leading to lower efficiency.
• Slotted ALOHA: Time is divided into slots, and devices can only
send data at the beginning of a time slot, reducing the chances of
collisions and increasing efficiency.
• Carrier Sense Multiple Access (CSMA): Devices listen to the medium
before transmitting. If the medium is idle, they transmit; otherwise, they wait.
• CSMA/CD (Collision Detection): Used in wired Ethernet. Devices detect collisions during transmission, stop, and retransmit after a random backoff period; it is typically used over shorter (wired) distances. A backoff sketch follows the summary below.
• CSMA/CA (Collision Avoidance): Used in wireless networks (e.g.,
Wi-Fi). Devices try to avoid collisions by waiting for a random
backoff time before transmitting and using acknowledgment packets
to confirm successful transmission.
2. Controlled Access Protocols:
• Definition: In controlled access protocols, the medium is divided among the devices
using predefined rules, ensuring that collisions do not occur.
• Examples:
• Polling: A central controller (or master) polls each device in a specific order, giving it permission to transmit. This is used in scenarios like master-slave configurations in industrial networks. Drawbacks: if the master fails, the entire network goes down, and polling adds delay.
• Token Passing: A token, a special data packet, circulates around the network.
Only the device holding the token can transmit data. After transmission, the
token is passed to the next device in the sequence.
• Token Ring: Used in Token Ring networks, where devices are
connected in a ring topology.
• Token Bus: Similar to Token Ring but used in a bus topology.
• Reservation Protocols: Devices reserve the medium in advance before
transmission. This method is often used in time-division multiplexing (TDM)
systems where each device is assigned a time slot.
3. Channelization Protocols:
• Definition: Channelization protocols divide the communication medium into
separate channels, and each device is assigned a specific channel or a share of the
channels to avoid collisions.
• Examples:
• Frequency Division Multiple Access (FDMA): The available bandwidth is
divided into frequency bands, and each device is assigned a different
frequency band for transmission.
• Time Division Multiple Access (TDMA): The time is divided into slots, and
each device is assigned a specific time slot for transmission.
• Code Division Multiple Access (CDMA): Each device is assigned a unique
code, allowing multiple devices to share the same frequency band
simultaneously by encoding their transmissions uniquely.

Summary:
• Random Access Protocols: Devices compete for the medium; collisions are possible and
handled after they occur.
• Controlled Access Protocols: Access to the medium is controlled or coordinated to prevent
collisions.
• Channelization Protocols: The medium is divided into distinct channels, each assigned to
different devices or transmissions to prevent collisions.
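As a follow-up to CSMA/CD above, here is a minimal Python sketch of binary exponential backoff; the slot time and the cap of 10 doublings follow classic 10 Mbps Ethernet, and the printed run is just an example:

import random

SLOT_TIME_US = 51.2          # slot time of classic 10 Mbps Ethernet, in microseconds

def backoff_slots(collision_count):
    k = min(collision_count, 10)           # exponent is capped at 10
    return random.randint(0, 2 ** k - 1)   # wait 0 .. 2^k - 1 slot times

for attempt in range(1, 6):
    slots = backoff_slots(attempt)
    print(f"collision #{attempt}: wait {slots} slots = {slots * SLOT_TIME_US:.1f} us")
# After 16 unsuccessful attempts the frame is discarded and an error is reported.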

Network Layer

Store-and-forward packet switching is a method where each network node fully receives, stores, and checks a packet for errors before forwarding it to the next node, ensuring data integrity. This allows reliable transmission but introduces some delay, since each node must process the entire packet before passing it along.

Connection-Oriented Service:
• This service requires a connection to be established between the sender and receiver before
any data is transmitted. The connection remains active throughout the communication
session, ensuring that data is delivered in order and reliably. An example of a connection-
oriented protocol is TCP (Transmission Control Protocol).
• Connectionless Service:
• In this service, data is sent without establishing a dedicated connection between the sender
and receiver. Each packet is treated independently, and there’s no guarantee of delivery,
order, or error correction. An example of a connectionless protocol is UDP (User Datagram
Protocol).

What is an IP Address?
• Definition: An IP (Internet Protocol) address is a unique identifier assigned to each device
connected to a network, allowing it to communicate with other devices over the Internet or a
local network.
• Function: It acts as a mailing address for devices, ensuring data sent over a network reaches
the correct destination.

Types of IP Addresses
1. Based on Version:
• IPv4:
• Format: 32-bit address (e.g., 192.168.1.1).
• Total Addresses: Approximately 4.3 billion.
• IPv6:
• Format: 128-bit address (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).
• Total Addresses: Vastly larger address space than IPv4.

2. Based on Addressing Scheme:
• Classful Addressing: An IP addressing scheme that divides the address space into fixed classes (A, B, C, D, E) with predefined subnet masks, leading to inefficient use of address space.
• Classless Addressing (CIDR, Classless Inter-Domain Routing): An IP addressing method that allows variable-length subnet masks and flexible address allocation, improving efficiency and scalability by eliminating fixed address classes.
• Subnetting: The process of dividing a larger IP network into smaller, more manageable sub-networks, or subnets, to optimize address usage and improve network performance and security.
• Supernetting: The process of combining multiple smaller IP networks into a larger network, or supernet, to simplify routing and reduce the number of entries in routing tables.
• A subnet mask is a 32-bit number used in IP networking to define the boundary between the network and host portions of an IP address. It helps determine which part of an IP address identifies the network and which part identifies individual devices (hosts) within that network. In binary, a subnet mask is a run of 1s followed by 0s (for example, /24 = 255.255.255.0).
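A short sketch with Python's standard ipaddress module showing the mask/prefix relationship and subnetting; the 192.168.1.0/24 network and the /26 split are illustrative values:

import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
print(network.netmask)                    # 255.255.255.0: 24 ones followed by 8 zeros

# Subnetting: split the /24 into four /26 subnets of 64 addresses each.
for subnet in network.subnets(new_prefix=26):
    print(subnet, subnet.netmask, subnet.num_addresses, "addresses")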
Routing Algorithms

Routing algorithms are essential for determining the optimal path for data packets
across a network.

1. Adaptive Routing Algorithms:


• Definition: These algorithms dynamically adjust the routing paths based on current network
conditions, such as traffic load, link failures, or changes in network topology.
a. Isolated Adaptive Routing:
• Definition: In isolated adaptive routing, each router or node makes routing decisions based
solely on local information or measurements. The routing decisions are not influenced by the
global network state.
b. Centralized Adaptive Routing:
• Definition: Centralized adaptive routing involves a central authority or control point that
gathers information from all nodes in the network and makes routing decisions based on the
global network state.
c. Distributed Adaptive Routing:
• Definition: In distributed adaptive routing, each router or node makes routing decisions
based on information from neighboring nodes, and there is no single point of control.
Information about network state is shared among nodes to adapt to changes.
2. Non-Adaptive Routing Algorithms:
• Definition: These algorithms use fixed routing paths or simple mechanisms that do not
change based on current network conditions. They typically rely on predetermined strategies
for routing packets.
a. Flooding:
• Definition: Flooding is a non-adaptive routing algorithm where every incoming packet is
sent to all outgoing links, except the one it came from. This approach ensures that packets
reach their destination by traversing all possible paths through the network.
b. Random Walk:
• Definition: In random walk routing, packets are forwarded randomly from node to node.
The choice of the next node is made at random, and this process continues until the packet
reaches its destination. This method does not adapt to network conditions and relies on
chance for packet delivery.
3. Hybrid Routing Algorithms:
• Definition: Hybrid routing algorithms combine aspects of different routing strategies to
balance their strengths and weaknesses, aiming to achieve more efficient and adaptable
routing.
a. Link-State Routing:
• Definition: In link-state routing, each router maintains a map of the entire network topology
and computes the best path to each destination based on this global view. Routers
periodically exchange information about the state of their links to keep the topology map up
to date.
• Cons:
• Black Hole: A situation where packets are dropped or lost because the routing table
incorrectly directs packets to a non-existent or unreachable destination.
• Looping: Occurs when packets get stuck in a cycle between routers due to incorrect
routing table entries, causing inefficient use of network resources and potential
packet loss.
b. Distance Vector Routing:
• Definition: Distance vector routing algorithms involve each router maintaining a table of the
best known distance to each destination and the next hop to reach that destination. Routers
periodically exchange distance vectors with their neighbors to update their tables based on
received information.
• Cons:
• Black Hole: Can occur if incorrect distance information is propagated, leading to
packets being sent to non-reachable nodes.
• Looping: Distance vector protocols can suffer from routing loops due to the slow
convergence of routing tables, leading to packets circulating endlessly in the
network.
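A minimal sketch of one distance-vector (Bellman-Ford style) update at a router A, using a small invented topology; for each destination A picks the neighbour that minimizes link cost plus the neighbour's advertised distance:

link_cost = {"B": 1, "C": 4}                 # A's direct links and their costs
advertised = {                               # distance vectors received from neighbours
    "B": {"A": 1, "B": 0, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "C": 0, "D": 1},
}

table = {}
for dest in ("B", "C", "D"):
    best = min(
        ((nbr, link_cost[nbr] + advertised[nbr].get(dest, float("inf")))
         for nbr in link_cost),
        key=lambda item: item[1],
    )
    table[dest] = best                       # (next hop, total cost)

print(table)   # D is cheaper via C (4 + 1 = 5) than via B (1 + 5 = 6)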

1. Broadcast Routing: Sends a packet from a source node to all nodes in the network or a specific subnet (one-to-many).
2. Multicast Routing: Sends a packet from a source node to a specific group of nodes that have expressed interest in receiving the information (one-to-group).

Congestion Control
When network traffic is high, the response time of the network slows down.
• Definition: Congestion control refers to mechanisms and strategies used to manage and
alleviate network congestion by controlling the amount of data entering the network,
ensuring that the network's capacity is not exceeded and preventing packet loss, delays, and
degradation in performance.
• Leaky Bucket:
• Definition: The leaky bucket algorithm is a congestion control method where incoming data is placed in a bucket and sent into the network at a constant rate (the leak). If data arrives too quickly and the bucket overflows, the excess data is discarded, which controls the rate at which data enters the network and smooths out bursts of traffic.
• Token Bucket:
• Definition: The token bucket algorithm is a congestion control method where tokens are
added to a bucket at a fixed rate. Each token represents permission to send a unit of data. If
there are tokens available, data can be sent. If the bucket is empty, data must wait until
tokens are available, allowing for bursty traffic while controlling the overall rate of data
transmission.
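A minimal token bucket sketch in Python; the rate, capacity, and packet sizes are arbitrary example values:

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity   # tokens per second, burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the elapsed time, never beyond the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size     # spend tokens and transmit
            return True
        return False                       # bucket empty: wait or drop

bucket = TokenBucket(rate=1000, capacity=4000)    # 1000 bytes/s, bursts up to 4000 bytes
for size in (1500, 1500, 1500, 1500):
    print(size, "sent" if bucket.allow(size) else "delayed")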

Internetworking: Connecting multiple distinct networks to create a larger, unified network, allowing communication between devices across different networks.
Intra-networking: Connecting devices within a single network to enable communication and resource sharing among devices in that same network.
1. LAN-to-LAN:
• Example: Two departments within the same college, such as the Computer Science
and the Electrical Engineering departments, connect their local area networks
(LANs) to share resources and communicate internally.
2. LAN-to-WAN:
• Example: A college's local area network (LAN) connects to a wide area network
(WAN) to access online resources and services provided by an external education
portal or to connect with other branch campuses of the college.
3. WAN-to-WAN:
• Example: Two colleges in different cities connect their WANs to collaborate on joint
research projects, share academic resources, or facilitate inter-campus
communication and data exchange.
1. LAN-to-LAN:
• Example: A multinational corporation connects its local area networks (LANs) at
different branch offices within the same city to enable seamless data sharing and
collaboration between departments.
2. LAN-to-WAN:
• Example: A company's LAN at its headquarters connects to a wide area network
(WAN) to access cloud services, connect to remote branch offices, or integrate with
external business partners' networks.
3. WAN-to-WAN:
• Example: Two global corporations connect their respective WANs to establish a
secure and reliable link for data exchange, joint ventures, or global supply chain
management.
Transport Layer
Segmentation and Reassembly:
• Segmentation: The transport layer divides large application data into smaller segments for
easier and more efficient transmission.
• Reassembly: At the receiving end, these segments are reassembled into the original data
stream before being passed to the application.
Connection Control:
• TCP (Connection-Oriented): Establishes a connection using a 3-way handshake (SYN,
SYN-ACK, ACK) before data transmission. Ensures that a reliable communication channel
is set up.
• UDP (Connectionless): Sends data without establishing a connection, allowing for faster
but less reliable communication.
Reliability:
• Error Detection: Uses checksums to detect errors in data segments. Each segment includes
a checksum value calculated over the data; the receiver recalculates this value to verify data
integrity.
• Acknowledgments (ACKs): The receiver sends ACK packets back to the sender to confirm
the successful receipt of data segments. If the sender does not receive an ACK within a
specified time, it assumes the segment was lost or corrupted and retransmits it.
• Retransmission Timeout (RTO): A timer set by the sender to wait for an ACK. If the timer
expires before an ACK is received, the sender retransmits the segment. RTO is dynamically
adjusted based on network conditions.
Flow Control:
• TCP: Uses a sliding window mechanism to manage the rate of data transmission. The
receiver advertises a window size indicating how much data it can buffer, and the sender
adjusts its data transmission rate based on this window size.
Error Detection and Correction:
• Checksums: Each segment includes a checksum to detect errors during transmission. If an
error is detected, the corrupted segment is discarded, and the sender is asked to retransmit
the data.
• Automatic Repeat reQuest (ARQ): Retransmission requests are made when errors are
detected, ensuring that corrupted or lost segments are corrected.
Multiplexing:
• Port Numbers: Uses source and destination port numbers to differentiate between multiple
applications using the same network connection. This allows simultaneous communication
between different applications over the same network interface.
Where is TCP Used?
• Sending Emails
• Transferring Files
• Web Browsing

Where is UDP Used?


• Gaming
• Video Streaming
• Online Video Chats

Differences between TCP and UDP


• Type of Service: TCP is a connection-oriented protocol; the communicating devices must establish a connection before transmitting data and close it after the data has been transmitted. UDP is a datagram-oriented protocol; there is no overhead for opening, maintaining, or terminating a connection, which makes UDP efficient for broadcast and multicast transmission.
• Reliability: TCP is reliable, as it guarantees delivery of data to the destination. In UDP, delivery of data to the destination cannot be guaranteed.
• Error-checking mechanism: TCP provides extensive error-checking mechanisms because it provides flow control and acknowledgment of data. UDP has only a basic error-checking mechanism using checksums.
• Acknowledgment: TCP uses acknowledgment segments; UDP has no acknowledgment segment.
• Sequence: Sequencing of data is a feature of TCP, so packets arrive in order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.
• Speed: TCP is comparatively slower than UDP; UDP is faster, simpler, and more efficient.
• Retransmission: Retransmission of lost packets is possible in TCP, but not in UDP.
• Header Length: TCP has a variable-length header of 20 to 60 bytes; UDP has a fixed 8-byte header.
• Weight: TCP is heavy-weight; UDP is lightweight.
• Handshaking Techniques: TCP uses handshakes (SYN, SYN-ACK, ACK); UDP is connectionless, so there is no handshake.
• Broadcasting: TCP does not support broadcasting; UDP supports broadcasting.
• Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet; UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
• Stream Type: A TCP connection is a byte stream; UDP is message-oriented (datagrams).
• Overhead: TCP overhead is low but higher than UDP's; UDP overhead is very low.
• Applications: TCP is primarily used where safe and trustworthy communication is necessary, such as email, web browsing, and military services. UDP is used where quick communication is needed and occasional loss is acceptable, such as VoIP, game streaming, and video and music streaming.

Which Protocol is Better: TCP or UDP?


There is no single answer; it depends on the task being performed and on the type of data being delivered.
• Choose TCP if you need reliable, ordered, and error-checked delivery of data.
TCP is ideal for applications where data integrity and reliability are critical,
such as web services, email, and file transfers.
• Choose UDP if you need fast, efficient, and real-time communication where occasional data
loss is acceptable. UDP is suitable for applications like live streaming, online gaming, and
VoIP, where timely delivery is more important than perfect accuracy.
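A brief sketch with Python's socket module contrasting the two choices; the hosts and ports are illustrative placeholders rather than services referenced by these notes:

import socket

# TCP: connection-oriented, reliable, ordered byte stream.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(("example.com", 80))           # the 3-way handshake happens here
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(1024).split(b"\r\n")[0])    # e.g. the HTTP status line

# UDP: connectionless; the datagram may be lost and no error is reported.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("192.0.2.10", 9999))     # fire-and-forget datagram
udp.close()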

3-Way Handshake (TCP Connection Establishment):


1. SYN: Client sends a SYN (synchronize) packet to the server to initiate the connection.
2. SYN-ACK: Server responds with a SYN-ACK (synchronize-acknowledge) packet to
acknowledge the client's request.
3. ACK: Client sends an ACK (acknowledge) packet back to the server to confirm the
connection.

4-Way Handshake (TCP Connection Termination):


1. FIN: Client sends a FIN (finish) packet to the server to signal the end of data transmission.
2. ACK: Server acknowledges the FIN packet with an ACK (acknowledge).
3. FIN: Server sends its own FIN packet to the client to signal it’s also done sending data.
4. ACK: Client acknowledges the server’s FIN packet with an ACK.
Application Layer

The Domain Name System (DNS) in the Application Layer translates human-readable domain
names (like www.example.com) into IP addresses that computers use to identify each other on the
network. It acts as the internet's phone book, enabling users to access websites using easy-to-
remember names.

What is the Need for DNS?


Every host is identified by an IP address, but remembering numbers is difficult for people, and IP addresses are not always static, so a mapping from domain names to IP addresses is required. DNS is used to convert the domain names of websites into their numerical IP addresses.

Types of Domain
There are various kinds of domains:
• Generic Domains: .com(commercial), .edu(educational), .mil(military), .org(nonprofit
organization), .net(similar to commercial) all these are generic domains.
• Country Domain: .in (India) .us .uk
• Inverse Domain: Used to find the domain name for a given IP address (IP-to-domain-name mapping). DNS can provide the mapping in both directions; for example, to find the IP address of geeksforgeeks.org we can type
nslookup www.geeksforgeeks.org
• Root Domain Server: Highest-level DNS servers that direct queries to TLD servers.
• Top-Level Domain (TLD): DNS categories like .com or .org that direct queries to
authoritative servers.
• Authoritative Server: DNS servers that provide definitive answers for specific domain
names.

How Does DNS Work?


• User Request: A client sends a domain name to a DNS resolver to find its
corresponding IP address.
• DNS Query: The resolver queries root domain servers, which direct it to the appropriate
Top-Level Domain (TLD) server.
• TLD Lookup: The TLD server points the resolver to the authoritative DNS server for the
specific domain.
• Authoritative Response: The authoritative server provides the IP address, which the
resolver sends back to the client to complete the request.
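A small Python sketch of a lookup from the application's point of view; the operating system's stub resolver and the configured recursive resolver perform the root, TLD, and authoritative steps described above (the hostname is just an example):

import socket

name = "www.example.com"
print(name, "->", socket.gethostbyname(name))      # one IPv4 address

# getaddrinfo returns every address (IPv4 and IPv6) the name resolves to.
for *_, sockaddr in socket.getaddrinfo(name, None):
    print(sockaddr[0])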

What is DNS Lookup?


DNS lookup (or DNS resolution) is the process that allows devices and applications to translate human-readable domain names into the corresponding IP addresses that computers use for communicating over the web.

What is DNS Resolver?


A DNS Resolver (or DNS client) initiates the DNS lookup process to translate domain names into IP addresses. It enables applications to access websites and services by resolving user-friendly domain names into their corresponding IP addresses.
Root DNS Server:
• Definition: The top-level DNS servers in the DNS hierarchy that manage queries for the
root zone. They direct queries to the appropriate Top-Level Domain (TLD) servers based on
the domain suffix (e.g., .com, .org).
• Function: Acts as the starting point in the DNS resolution process, guiding the request to the
correct TLD server.
Name Server:
• Definition: A general term for any DNS server that handles domain name queries and
provides IP addresses. This category includes various types of DNS servers like
authoritative, recursive, and caching servers.
• Function: Resolves domain names to IP addresses and vice versa, ensuring that users can
access websites and services using human-readable domain names.
Host Server:
• Definition: Often refers to the server that hosts a website or online service. It can also mean
the server that maintains authoritative DNS records for a specific domain.
• Function: For websites, it stores the site’s files and data. For DNS, it provides the final DNS
records (such as IP addresses) for a domain.

Address resolution techniques


• Iterative Method: The DNS resolver queries multiple DNS servers in sequence, with each
server providing either an answer or a referral to another server. The resolver itself continues
querying until it obtains the final answer.
• Recursive Method: The DNS resolver queries a DNS server, which then takes
responsibility for resolving the domain name completely. The server performs all necessary
queries on behalf of the resolver and returns the final result.
• Non-Recursive Query: A DNS query where the server responds with the information it has
available, either from its cache or directly if it has the answer. It does not perform further
queries to other servers to resolve the request.

What is DNS Caching?


DNS caching is the process used by DNS resolvers to store previously resolved DNS information (domain names and their IP addresses) for a period of time. The main purpose of DNS caching is to speed up future DNS lookups and reduce the overall time of DNS resolution.

What do you mean by a Level 3 DNS Server?


Level 3 refers to a third-party public DNS server that is completely free and open to the public.

Is Domain Name System (DNS) a protocol?


Yes. The Domain Name System (DNS) is a protocol used to convert easily readable names into IP addresses for communication over the network, so that users do not have to remember IP addresses.

How can you categorize DNS as TCP or UDP?


DNS is designed to work over both TCP and UDP. It normally uses UDP and switches to TCP when it cannot communicate over UDP, for example when a response is too large to fit in a single UDP datagram.

Basic Protocols
• Stateful: The server maintains context and state across multiple interactions (e.g.,
SMTP, FTP).
• Stateless: Each interaction is independent, with no retained context (e.g., HTTP).
• In-Band: Control and data messages are transmitted through the same channel (e.g., HTTP).
• Out-of-Band: Control messages are transmitted through a separate channel from the data
(e.g., network management protocols).
• SMTP (Simple Mail Transfer Protocol): TCP, In-Band, Stateful, default port 25. Used for sending emails between servers (carries only 7-bit ASCII).
• MIME (Multipurpose Internet Mail Extensions): Not a transport protocol, so it has no port of its own. An extension to SMTP that allows multimedia (non-ASCII) content to be sent in emails.
• WWW (World Wide Web): TCP, In-Band, Stateless, no single default port. A system of interlinked hypertext documents accessed via the internet.
• HTTP (Hypertext Transfer Protocol): TCP, In-Band, Stateless, default port 80. Used for transferring hypertext requests and information on the web.
• FTP (File Transfer Protocol): TCP, Out-of-Band (separate control and data connections), Stateful, default port 21 for control. Used for transferring files between client and server over a network.
