Networks model 3


UNIT 3

(1) Describe the GSM architecture in detail.(13)

The GSM architecture consists of three major interconnected subsystems that interact with each other and with users through defined network interfaces. The subsystems are the Base Station Subsystem (BSS), the Network Switching Subsystem (NSS) and the Operational Support Subsystem (OSS). The Mobile Station (MS) is also a subsystem, but it is usually considered part of the BSS.

1. Mobile Station (MS): The Mobile Station is made up of two entities.

A. Mobile Equipment (ME):
• It is a portable, vehicle-mounted or hand-held device.
• It is uniquely identified by an IMEI number.
• It is used for voice and data transmission.
• It also monitors the power and signal quality of surrounding cells for optimum handover. SMS messages up to 160 characters long can also be sent using the Mobile Equipment.

B. Subscriber Identity Module (SIM):

• It is a smart card that contains the International Mobile Subscriber Identity (IMSI) number.
• It allows users to send and receive calls and to receive other subscriber services.
• It is protected by a password or PIN.
• It contains encoded network identification details and the key information needed to activate the phone.
• It can be moved from one mobile to another.
2. Base Station Subsystem (BSS): Also known as the radio subsystem, it provides and manages the radio transmission paths between the mobile station and the Mobile Switching Centre (MSC). The BSS also manages the interface between the mobile station and all other subsystems of GSM. It consists of two parts.
A. Base Transceiver Station (BTS):
• It encodes, encrypts, multiplexes, modulates and feeds the RF signal to the antenna.
• It consists of transceiver units.
• It communicates with mobile stations via the radio air interface and with the BSC via the Abis interface.
B. Base Station Controller (BSC):

• It manages radio resources for the BTSs. It assigns frequency and time slots for all mobile stations in its area.
• It handles call set-up, transcoding and rate adaptation, handover for each MS, and radio power control.
• It communicates with the MSC via the A interface and with the BTS via the Abis interface.
3. Network Switching Subsystem (NSS): It manages the switching functions of the system and allows MSCs to communicate with other networks such as the PSTN and ISDN. It consists of:

A. Mobile Switching Centre (MSC):
• It is the heart of the network. It manages communication between GSM and other networks.
• It manages call set-up, routing and basic switching, and provides billing information.
• The MSC performs the gateway function when its subscribers roam to other networks, by using the HLR/VLR.
B. Home Location Register (HLR):
• It is a permanent database about the mobile subscribers in a large service area.
• Its database contains the IMSI, MSISDN, prepaid/post-paid status, roaming restrictions and supplementary services.

The International Mobile Subscriber Identity (IMSI) is divided into three parts:
1. Mobile Country Code (MCC): Identifies the subscriber's home country.
• Examples include:
• 310 for the United States
• 234 for the United Kingdom
• 460 for China
2. Mobile Network Code (MNC):
• Represents the national part of a subscriber's home network identification.
• Necessary because a country can have multiple independent mobile
networks.
• Examples in the United Kingdom:
• 10 for O2
• 15 for Vodafone
• 30 for T-Mobile
3. Mobile Subscriber Identification Number (MSIN):
• The remaining digits of the IMSI.
• Uniquely identifies a subscriber within the home network.
For example, in the IMSI 310-260-123456789:
• 310 is the MCC for the United States.
• 260 is the MNC, representing a specific carrier within the United States.
• 123456789 is the MSIN, uniquely identifying the subscriber within that carrier's network.
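A minimal Python sketch of this split is shown below; the 3-digit MNC length used here is an assumption (some networks use 2-digit MNCs).

```python
def split_imsi(imsi: str, mnc_digits: int = 3) -> dict:
    """Split an IMSI into MCC, MNC and MSIN.

    mnc_digits is 2 or 3 depending on the home network's numbering plan;
    3 digits are assumed for the US example below.
    """
    return {
        "mcc": imsi[:3],                      # Mobile Country Code
        "mnc": imsi[3:3 + mnc_digits],        # Mobile Network Code
        "msin": imsi[3 + mnc_digits:],        # Mobile Subscriber Identification Number
    }

# Example from the text: IMSI 310-260-123456789
print(split_imsi("310260123456789"))
# {'mcc': '310', 'mnc': '260', 'msin': '123456789'}
```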
C. Visitor Location Register (VLR):
• It is a temporary database that is updated from the HLR whenever a new MS enters its area.
• It controls the mobiles roaming in its area and reduces the number of queries to the HLR.
• Its database contains the IMSI, TMSI, MSISDN, MSRN, location area and authentication key.
D. Authentication Centre (AuC):
• It provides protection against intruders on the air interface.
• It maintains the authentication keys and algorithms and provides security triplets (RAND, SRES, Kc).
E. Equipment Identity Register (EIR):
• It is a database that is used to track handsets using the IMEI number.
• It is made up of three sub-classes: the white list, the black list and the grey list.
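To make the triplet idea under the Authentication Centre concrete, here is a hedged Python sketch. The real A3/A8 algorithms (e.g. COMP128) are operator-specific and not shown; HMAC-SHA-1 is used purely as a stand-in to show how the random challenge RAND and the secret key Ki yield SRES and the cipher key Kc.

```python
import hmac, hashlib, os

def generate_triplet(ki: bytes) -> tuple:
    """Illustrative GSM triplet generation (RAND, SRES, Kc).

    HMAC-SHA-1 stands in for the operator's A3/A8 algorithms, which are not
    public; only the shape of the computation is shown here.
    """
    rand = os.urandom(16)                              # 128-bit random challenge
    digest = hmac.new(ki, rand, hashlib.sha1).digest()
    sres = digest[:4]                                  # 32-bit signed response (A3 output)
    kc = digest[4:12]                                  # 64-bit cipher key (A8 output)
    return rand, sres, kc

ki = os.urandom(16)                                    # subscriber's secret key, stored in SIM and AuC
rand, sres, kc = generate_triplet(ki)
```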
4. Operational Support Subsystem (OSS): It supports the operation and maintenance of GSM and allows system engineers to monitor, diagnose and troubleshoot all aspects of the GSM system. It supports one or more Operation and Maintenance Centres (OMC), which are used to monitor the performance of each MS, BTS, BSC and MSC within a GSM system. It has three main functions:
• To maintain all telecommunication hardware and network operations within a particular market.
• To manage all charging and billing procedures.
• To manage all mobile equipment in the system.
Interfaces used in the GSM network:
1) Um interface – used for communication between the BTS and the MS.
2) Abis interface – used for communication between the BSC and the BTS.
3) A interface – used for communication between the BSC and the MSC.
4) Signalling protocol (SS7) – used for communication between the MSC and other networks.
The GSM Subsystems
1. Base Station Subsystem (BSS):
• Manages wireless communication between mobile devices and the network.

• Includes Base Transceiver Station (BTS) and Base Station Controller (BSC).
2. Network Subsystem (NSS):
• Handles call switching, subscriber, and mobility management.
• Components: Mobile Switching Center (MSC), Home Location Register (HLR),
Visitor Location Register (VLR), Authentication Center (AuC).
3. Intelligent Network Subsystem (IN):
• Adds optional functionalities like prepaid services to the network.
• Comprises Service Control Point (SCP) databases.

The Mobile Switching Center (MSC) serves as the central element in a mobile
telecommunication network, also known as a Public Land Mobile Network (PLMN). In a
traditional circuit-switched network, the MSC manages all connections between subscribers,
routing them through the switching matrix.
1. Registration of Mobile Subscribers:
• When a mobile device (Mobile Station or MS) is powered on, it registers with the
network, becoming reachable by other subscribers.
2. Call Establishment and Routing:
• Manages the establishment and routing of calls between two subscribers, ensuring
efficient communication.
3. Forwarding SMS (Short Messaging Service) Messages:
• Facilitates the forwarding of SMS messages, enabling text communication between
mobile subscribers.

(2) Discuss how small screen web browsing is done over GPRS and EDGE. (APR/MAY 2018)
1. GPRS (General Packet Radio Service) Characteristics:
• Bearer for IP Packets: GPRS serves as a bearer for IP packets, allowing mobile devices
to transmit data over the internet.
• Latency: GPRS is characterized by longer latency, meaning there is a delay in the
transmission of data. In moving environments, the latency can vary, impacting the user
experience.
• Coverage Limitations: Users may experience a loss of service if they move outside the
coverage area of the network.

• Device Limitations: Devices using GPRS often have limited capabilities, such as small
screens and relatively low processing power compared to PCs.
2. EDGE (Enhanced Data Rates for GSM Evolution) Characteristics:
• Sufficient Bandwidth: EDGE provides sufficient bandwidth for web browsing, offering
faster data transfer compared to GPRS.
3. WAP 1.1 for Early GPRS Devices:
• Bandwidth Limitations: WAP 1.1 was designed for devices with very limited
bandwidth, affecting the speed of page downloads.
• Processing Power: Constrained devices had very limited processing power, impacting
the speed at which pages could be rendered on the screen.
• Connection Reliability: Due to limited bandwidth, reliability of the connection was
crucial to reduce the impact of interruptions on user experience.
• Media Support: WAP 1.1 supported only black and white images in WBMP format,
suitable for the limited capabilities of early mobile devices.
• Protocol Stack: WAP 1.1 used WML and a special protocol stack (WSP) instead of
HTTP for page transfers.


4. WAP 2.0:
• Graphics Support: WAP 2.0 browsers added support for additional graphics formats like
GIF, allowing for more visually appealing content.
• Gateway Role: The WAP 2.0 gateway continued to play a role in billing, control
functionality, and as a simple HTTP proxy.
5. Small Screen Web Browsing with Network Side Compression:
High-End Mobile Devices: Modern high-end mobile devices come with built-in web browsers
capable of downloading and displaying standard web pages.
• Drawbacks: Slow download speed over GPRS and EDGE, along with limited processing
power, result in a degraded user experience.

• Network Side Compression: To overcome limitations, some web browsers use network
side compression servers to compress standard web pages before downloading them to
the mobile device.
• Intelligent Zooming and Reflow: The compressed content allows for intelligent
zooming and reflow mechanisms, displaying standard web pages effectively on small
screens without horizontal scrolling.
• User Experience: This approach offers an excellent web-browsing experience, especially
on devices with smaller screens, as it minimizes data transmission time and adapts
content for optimal display.
• Impact on Mobility: The use of compressed content is particularly advantageous in
mobile environments, where standard web pages are downloaded quickly, and coverage
issues have less impact on the overall experience compared to using uncompressed
content.
Small screen web browsing over GPRS (General Packet Radio Service) and EDGE
(Enhanced Data rates for GSM Evolution) poses unique challenges and opportunities. Here's
a comprehensive discussion:
1. Slow Data Speeds: GPRS and EDGE are 2G and 2.5G technologies, respectively, offering relatively slower data speeds compared to 3G, 4G, and 5G. This limitation affects the speed at which web pages load on small screens.
2. Limited Processing Power: Devices using GPRS and EDGE for web browsing often have limited processing power, which can impact the rendering and loading of complex web pages.
3. Built-in Web Browsers: Modern high-end mobile devices come equipped with built-in web browsers capable of rendering standard web pages. However, the slow data speeds of GPRS and EDGE can lead to a suboptimal user experience.
4. Network Side Compression: To address slow download speeds, some web browsers utilize network side compression servers. These servers compress standard web pages before transmitting them to the mobile device, optimizing data usage and speeding up page loading times.
5. Intelligent Zooming and Reflow: Compressed content allows for intelligent zooming and reflow mechanisms. This means that web pages can be displayed effectively on small screens without horizontal scrolling, enhancing the user experience.
6. Data Efficiency: Network side compression not only speeds up data transfer but also reduces the amount of data transmitted over the network. This is crucial for optimizing bandwidth usage in GPRS and EDGE environments.
7. User Experience: Despite the limitations of slow data speeds, the use of network side compression can significantly enhance the web-browsing experience on small screens. The intelligent adaptation of content ensures that users can navigate websites effectively.
8. Cost Considerations: GPRS and EDGE often come with data usage costs. Network side compression, by reducing the amount of data transmitted, can lead to cost savings for users, especially in regions where data plans are charged based on usage.
9. Adaptability to Mobile Environments: The use of compressed content is particularly advantageous in mobile environments. Since standard web pages are downloaded quickly, coverage issues have a reduced impact on the overall experience compared to using uncompressed content.

(3) Mobility Management and Session Management over GPRS (GMM/SM)
GPRS Mobility Management and Session Management (GMM/SM)
The GPRS network is responsible for both forwarding data packets between subscribers
and the Internet and managing the mobility and sessions of subscribers. This is achieved
through the GPRS Mobility Management (GMM) and Session Management (SM)
protocols.
Mobility Management Tasks
1. Subscriber Connection:

• Users must connect to the GPRS network before establishing an Internet
connection, similar to attaching to the circuit-switched part of the network.
• An authentication procedure, akin to GSM authentication, is initiated by the
network when a subscriber wants to attach.
2. Location Update:
• If authentication is successful, the Serving GPRS Support Node (SGSN) sends
a location update message to the Home Location Register (HLR) to update the
subscriber's location information in the network's database.
• The HLR acknowledges with an 'insert subscriber data' message, containing
subscription information, so that further communication with the HLR is
unnecessary until the subscriber changes location.
• The SGSN then sends an attach accept message to the subscriber, and the
attach procedure is complete when the subscriber returns an attach complete
message.
3. Handling Previous Attachments:
• If the subscriber was previously attached to a different SGSN, the new SGSN
requests identification information from the old SGSN.
• After successful authentication, the new SGSN sends a location update
message to the HLR, which, in turn, sends a cancel location message to the old
SGSN before returning the insert subscriber data message to the new SGSN.
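The attach and location update exchange above can be condensed into a small message-flow sketch. The class names and return values below are invented for illustration and only mirror the order of the messages, not the real GTP/MAP encodings.

```python
class HLR:
    def __init__(self):
        self.locations = {}                  # IMSI -> serving SGSN

    def update_location(self, imsi, sgsn_name):
        self.locations[imsi] = sgsn_name
        # 'insert subscriber data' carries the subscription information
        return {"msg": "insert subscriber data", "allowed_apns": ["internet", "wap"]}

class SGSN:
    def __init__(self, name, hlr):
        self.name, self.hlr = name, hlr
        self.subscribers = {}

    def attach(self, imsi):
        # 1. authentication (omitted), 2. location update towards the HLR,
        # 3. attach accept back to the subscriber
        data = self.hlr.update_location(imsi, self.name)
        self.subscribers[imsi] = data["allowed_apns"]
        return "attach accept"

hlr = HLR()
sgsn = SGSN("SGSN-1", hlr)
assert sgsn.attach("310260123456789") == "attach accept"
```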


GPRS Session Management
1. PDP Context Activation:
• To communicate with the Internet, a Packet Data Protocol (PDP) context must
be requested after the attach procedure, akin to obtaining an IP address.
• The subscriber sends a PDP context activation request message to the SGSN,
specifying the Access Point Name (APN), which is a reference used by the
Gateway GPRS Support Node (GGSN) as a gateway to an external network.
• The SGSN checks the requested APN against the allowed APNs received from
the HLR during the attach procedure.
• A DNS lookup is performed using the APN to obtain the IP address of the
GGSN.
• The SGSN forwards the request to the GGSN, including the APN, user's
International Mobile Subscriber Identity (IMSI), and a Tunnel Identifier
(TID) for the virtual connection.

2. Logical Connection:
• Unlike circuit-switched calls, resources are used only during data transmission
in a packet call, allowing for efficient resource utilization.
• The PDP context represents a logical connection to the Internet, remaining
active even during periods of inactivity, often referred to as 'always on'.
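A compressed, illustrative view of the PDP context activation steps (APN check against the HLR data, DNS lookup of the GGSN, assignment of a tunnel identifier) is sketched below; the APN-to-GGSN mapping and the TID format are made up for the example.

```python
import itertools

# Assumed DNS records: APN name -> GGSN IP address
GGSN_DNS = {"internet.t-mobile.com": "10.0.0.1", "wap": "10.0.0.2"}
_tid = itertools.count(1)

def activate_pdp_context(imsi, apn, allowed_apns):
    """Sketch of the SGSN side of PDP context activation."""
    if apn not in allowed_apns:                # APN checked against HLR subscription data
        return {"result": "rejected"}
    ggsn_ip = GGSN_DNS.get(apn)                # DNS lookup using the APN as the domain name
    if ggsn_ip is None:
        return {"result": "rejected"}
    tid = next(_tid)                           # tunnel identifier for the user-plane tunnel
    return {"result": "accepted", "ggsn": ggsn_ip, "tid": tid}

ctx = activate_pdp_context("310260123456789", "wap", ["internet.t-mobile.com", "wap"])
print(ctx)   # {'result': 'accepted', 'ggsn': '10.0.0.2', 'tid': 1}
```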

GPRS Session Management (Continued)


3. Tunneling User Data:
• The SGSN assigns a Tunnel Identifier (TID) for the virtual connection, which is
crucial for tunneling user data packets through the GPRS network later on.
• The TID facilitates the establishment of a virtual connection between the subscriber
and the GGSN, ensuring proper routing and delivery of data packets.
4. Access Point Name (APN):
• The APN serves as a reference point for the GGSN to access external networks. It
can be a fully qualified domain name (e.g., 'internet.t-mobile.com') or a simpler
identifier like 'Internet' or 'wap'.
• GPRS network operators have the flexibility to choose APN names based on their
specific services and connectivity requirements.
5. Domain Name Service (DNS) Lookup:
• The SGSN performs a DNS lookup with the specified APN as the domain name to
locate the IP address of the GGSN.
• This DNS lookup mirrors the process a web browser undergoes to obtain the IP
address of a web server.
6. Logical Connection Efficiency:
• Unlike circuit-switched calls that reserve resources continuously, the PDP context
in a GPRS packet call uses resources only during data transmission.
• Resources are freed up once data transmission is complete, allowing for efficient
utilization and ensuring that resources are available for other subscribers.

7. Always-On Connectivity:
• The concept of 'always on' refers to the ability of a packet call to remain established
indefinitely without blocking resources, even during periods of inactivity.
• This ensures that subscribers can quickly resume data transmission without the
need for re-establishing the connection.

Aspect | Mobility Management | Session Management
Definition | Deals with the management of user mobility in a network. | Involves the control and coordination of user sessions within a network.
Scope | Focuses on tracking and maintaining user location changes. | Manages the initiation, maintenance, and termination of user sessions.
Objective | Ensures seamless connectivity for users during movement. | Ensures the establishment and maintenance of application-level connections.
Handovers | Involves handovers to maintain communication during movement. | Deals with application-level handovers between different servers or services.
Technology | Associated with mobile communication technologies (e.g., GSM, CDMA, LTE). | Applicable to various networking technologies, including mobile and fixed-line networks.
Protocols | Involves Mobile IP, SIP, and others for managing changes in user location. | Utilizes higher-layer protocols like HTTP, FTP, and RTP for managing user sessions.
Authentication | Often includes authentication mechanisms during handovers. | Includes authentication processes for users accessing specific services.
Context Transfer | Requires transfer of user context information for a seamless handover. | Manages transfer of session context to maintain continuity during user interactions.
Network Elements | Involves entities like HLR, VLR, and SGSN. | Involves elements like load balancers, proxies, and application servers.
Roaming | Addresses challenges of users moving between different network providers. | Deals with challenges related to users accessing services from different geographical locations.
Location Update | Involves processes like location updating and registration. | May involve updating session information when users move between networks.
Resource Management | Manages radio and network resources efficiently. | Manages application-level resources for optimal performance.
Cell Handover | Includes mechanisms for intra-cell and inter-cell handovers. | Does not involve cell handovers but handles transitions between different servers or services.
QoS Support | Ensures QoS for mobile users by adapting to network conditions. | Supports QoS mechanisms to prioritize and optimize delivery of application-level services.
Connection Persistence | Aims to provide persistent connections for mobile users. | Focuses on maintaining persistent connections at the application layer.
Security | Typically involves security measures for secure handovers. | Implements security measures to protect user sessions and data.
Dynamic IP Assignment | Assigns dynamic IP addresses to mobile users. | Assigns dynamic or static IP addresses to users during sessions.

(4) (i) Explain the Classic SS-7 Protocol Stack
(ii) Explain the IP-Based SS-7 Protocol Stack

The SS-7 standard defines three basic types of network nodes:

• Service Switching Points (SSPs) are switching centers that are more generally referred to as network elements that are able to establish, transport or forward voice and data connections.
• Service Control Points (SCPs) are databases and application software that can influence the establishment of a connection.
• Signaling Transfer Points (STPs) are responsible for the forwarding of signaling messages between SSPs and SCPs, as not all network nodes have a dedicated link to all other nodes of the network. STPs only forward signaling messages that are necessary for establishing, maintaining and clearing a call. The calls themselves are directly carried on dedicated links between the SSPs.
The Classic SS-7 Protocol Stack

1. Message Transfer Part (MTP):


• MTP Level 1 (MTP1): At this level, signaling bits are physically transmitted over the
transmission medium. It specifies characteristics like the voltage levels, timing, and
physical connectors for the signaling links.
• MTP Level 2 (MTP2): MTP2 provides error checking, flow control, and message
sequencing. It ensures that messages are delivered reliably between signaling points. If
errors occur during transmission, MTP2 is responsible for detecting and handling them.
• MTP Level 3 (MTP3): MTP3 is mainly concerned with network routing. It determines
the route a message should take through the SS7 network based on the destination
point code. MTP3 also handles congestion control and network management functions.
2. Signaling Connection Control Part (SCCP):
• SCCP enhances MTP by providing additional services. One crucial function is global title
translation, where telephone numbers are translated into network addresses, allowing
for more flexible routing of messages. SCCP also supports connectionless and
connection-oriented services.
3. ISDN User Part (ISUP):
• ISUP is specifically designed for the setup, maintenance, and release of connections for
voice calls on the PSTN. It defines the messages exchanged between switches to
establish and tear down voice circuits. ISUP also handles call-related signaling for
services like call waiting, call forwarding, and three-way calling.
4. Transaction Capabilities Application Part (TCAP):
• TCAP is a versatile part of SS7 that enables the exchange of non-circuit-related
information. It provides a way for applications to communicate and request services
beyond basic call control. For example, TCAP is used for services like toll-free number
translation and intelligent network features.
5. Operations, Maintenance, and Administration Part (OMAP):
• OMAP is responsible for the management and maintenance of the SS7 network. It
includes functions like network monitoring, fault detection, and performance
measurement. OMAP facilitates the efficient operation of the SS7 network by ensuring
that issues are identified and addressed promptly.
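As a small illustration of the routing idea behind MTP3 and SCCP global title translation, the sketch below maps a dialled-number prefix to a destination point code; the prefixes and point codes are invented for the example.

```python
# Hypothetical global title translation table: dialled-number prefix -> point code
GTT_TABLE = {
    "1800": 0x2010,     # toll-free numbers routed to a service control point
    "49":   0x1003,     # international gateway exchange
}

def route_message(called_number: str, own_point_code: int = 0x1001) -> int:
    """Return the destination point code for a message (longest-prefix match)."""
    for prefix in sorted(GTT_TABLE, key=len, reverse=True):
        if called_number.startswith(prefix):
            return GTT_TABLE[prefix]
    return own_point_code        # no translation entry: handle the message locally

assert route_message("18005551234") == 0x2010
```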

In summary, each part of the SS7 protocol stack has a specific role in enabling the reliable and efficient
operation of telecommunication networks. The stack works cohesively to provide the necessary
signaling functions for services such as voice calls, messaging, and various supplementary features.

IP-Based SS-7 Protocol Stack

• When using an IP network for the transmission of SS-7 signaling messages, the MTP-1 and MTP-2 protocols are replaced by IP and the transport-medium-dependent lower layer protocols (e.g. Ethernet); in practice, SCTP together with adaptation layers such as M3UA carries the upper SS-7 layers (SCCP, TCAP, ISUP) over IP.
UNIT 4
(1)Explain hybrid 4G wireless networks protocols. (APR/MAY 2018).
Hybrid 4G wireless networks refer to the integration of multiple wireless technologies to
enhance network performance, coverage, and reliability. The term "hybrid" typically implies the
coexistence and seamless integration of Long-Term Evolution (LTE) or 4G networks with other
technologies like Wi-Fi, small cells, or even older-generation cellular networks. The aim is to
provide users with a more robust and efficient wireless experience. Here are some key protocols
and technologies involved in hybrid 4G wireless networks:
1. LTE (Long-Term Evolution):
• Physical Layer Protocols: LTE uses Orthogonal Frequency Division Multiple
Access (OFDMA) for downlink transmission and Single Carrier Frequency
Division Multiple Access (SC-FDMA) for uplink transmission.
• Medium Access Control (MAC) Protocol: Manages the communication between
the mobile device and the network, including data scheduling and coordination.
2. Wi-Fi:
• IEEE 802.11 Protocols: Hybrid 4G networks often integrate Wi-Fi, and the IEEE
802.11 family of protocols is commonly used. The specific standard (e.g.,
802.11ac, 802.11ax) can vary depending on the deployment.
• Medium Access Control (MAC) Protocol: Coordinates communication between
Wi-Fi-enabled devices and the network, managing access to the shared medium.
3. Small Cells:
• LTE-U (LTE in Unlicensed Spectrum): Allows LTE to operate in unlicensed
frequency bands, improving capacity in crowded areas.
• Coordinated Multipoint (CoMP): Enables coordination between multiple small
cells to optimize handovers and improve coverage in dense urban environments.
4. HetNet (Heterogeneous Networks):
• Interference Management Protocols: HetNets leverage multiple types of cells, such
as macrocells and small cells, to enhance coverage and capacity. Protocols like
Enhanced Inter-Cell Interference Coordination (eICIC) help manage interference
between cells.
• Self-Optimizing Network (SON) Protocols: Automatically adjust network
parameters to optimize performance and resource utilization.
5. Carrier Aggregation:
• LTE-A (LTE Advanced): Introduces carrier aggregation, allowing devices to use
multiple LTE frequency bands simultaneously. This enhances data rates and
network capacity.
6. Dual Connectivity:
• LTE-Wi-Fi Aggregation: Allows devices to simultaneously connect to LTE and
Wi-Fi networks, enabling faster data speeds and improved reliability.
Hybrid Wireless Networks:
Definition:
• Connectivity Options:
• In hybrid wireless networks, any mobile node can establish connectivity either
directly or through a gateway node to an infrastructure network. This infrastructure
network could be an IP network (Internet), a 3G wide area wireless network, or an
802.11 local area wireless network.
• Intra-technology vs. Inter-technology:
• Intra-technology Hybrid Network:
• Mobile nodes communicate with networks of similar technology. For example, a
mobile node in an ad hoc 802.11 network communicating with an 802.11 Access
Point (AP) in an infrastructure network.
• Inter-technology Hybrid Network:
• Mobile nodes communicate with networks of different technologies. For instance, a
mobile node in an 802.11 network communicating with a 3G network.
Motivations for Hybrid Network Design:
• Existing Hardware Utilization:
• Leveraging the ubiquity of wireless access points and the pre-installation of Wi-Fi
capabilities in laptops and PDAs.
• Smartphone Integration:
• Some smartphones integrate multiple wireless technologies (e.g., GSM and Wi-Fi),
offering advantages such as high-bandwidth Internet access and voice
conversations over different networks.

Infrastructure WLAN (BS-oriented network):


• Fixed Base Stations (BS):
• In this structure, the network relies on fixed Base Stations connected by a wired
backbone. Base Stations serve as access points for mobile devices to connect to the
network.
• Centralized Administration:
• The network has centralized administration, implying that there are standard
support services regularly available. This centralization simplifies network
management and maintenance.
• Single-hop or Cellular Architecture:
• The architecture is often described as single-hop or cellular, meaning that devices
communicate directly with the nearest base station.
Non-infrastructure WLAN (Ad hoc WLAN):
• Ad Hoc Networks:
• Unlike the infrastructure model, ad hoc networks do not have fixed base stations.
Instead, devices communicate with each other directly, forming a decentralized
network without a central administration.
• Direct Device Communication:
• Mobile devices in an ad hoc WLAN communicate directly with other devices in
their proximity. Each device in the network acts as both a user and a relay point for
data transmission.
• No Standard Support Services:
• Ad hoc networks lack centralized support services regularly available on the
network. Devices rely on direct communication for data exchange.
Advantages:
• Higher Throughput:
• Orchestrating hybrid wireless networks can lead to architectures that allow users to
achieve higher throughput by switching between different types of networks.
• Seamless Access to Services:
• Users experience seamless access to integrated or distributed services, enhancing
their overall connectivity experience.
• Cost Reduction:
• Smartphones integrating Wi-Fi and other technologies can reduce operating costs
by offering voice conversations over internal or home networks using Voice over
Internet Protocol (VoIP) techniques.
Business Opportunities:
• Extended Coverage Zones:

• Ad hoc networks can be used to extend the coverage zone of an infrastructure


network, providing users in Wi-Fi hotspot regions with seamless service access.
• New Business Models:
• Hybrid networks create new business opportunities for service providers and
network operators, allowing them to attract a wider user base and introduce
high-speed wireless data services.

(2)Interconnection with UMTS and GSM in LTE Networks


When a mobile device approaches the edge of the LTE network coverage, it needs to switch to
alternative network layers such as UMTS and GSM to maintain connectivity. This process
involves three primary procedures:
1. Cell Reselection from LTE to UMTS or GSM:
• In RRC Idle state, the mobile device receives broadcast information from eNodeBs
about neighboring GSM, UMTS, and CDMA cells.
• When a configured signal level threshold is reached, the device searches for non-LTE cells based on reception level and usage priority.
• Once the decision to move to a GSM or UMTS cell is made, the mobile device
performs a location area update or routing area update.
2. RRC Connection Release with Redirect from LTE to UMTS or GSM:
• In LTE RRC Connected state, the network coordinates the mobile device's
unavailability periods for measurements on other channels.
• The eNode-B instructs the device to search for neighboring cells on specified
frequencies and bands.
• A transmission gap pattern for measurements is provided, and if the signal level
deteriorates, the RRC connection is released with a redirection order.
• The mobile device, upon receiving the redirection order, changes to the new
frequency and RAT, followed by a location or routing area update.
3. Inter-RAT Handover from LTE to UMTS or GSM:
• Similar to intra-frequency LTE handovers, additional actions are required for inter-RAT handovers.
• The eNode-B reconfigures the radio connection for measurements on other frequencies.
• The handover command in the RRC reconfiguration message contains information about
the new frequency, RAT technologies, and other parameters.
• After the handover, the mobile device performs a routing area update to update the core
network nodes and the HSS with its current position.
Network Integration and Planning Aspects:
• LTE networks need to be connected with GSM and UMTS networks for seamless
exchange of subscriber context (IP address, QoS settings, authentication keys).
• Common network elements like the MME, MSC, SGSN, and EPC are shared to ensure
consistent subscriber management.
• Interworking functions like the SGs interface between LTE and GSM, and the Iu interface
between LTE and UMTS facilitate communication and coordination.
• Circuit-switched services support is maintained through mechanisms like CS Fallback and
SRVCC for voice calls.
Network Planning Challenges:
• Meticulous network planning is crucial to minimize interference and ensure high
performance.
• Single Frequency Network (SFN) reuse is employed to extend capacity, but challenges
arise in bands with limited spectrum or unsuitable channel conditions.
• Cell edge performance is optimized using Intercell Interference Coordination (ICIC)
messages exchanged over the X2 interface.
Voice and SMS over LTE:
• LTE's packet-based core necessitates solutions for offering traditional circuit-switched
services over an IP connection.
• CS Fallback is a method to deliver voice calls by falling back to GSM or UMTS for
circuit-switched connections.
• SMS over SGs facilitates SMS message delivery between GSM/UMTS MSCs and LTE
MMEs.
VoLGA (Voice over LTE via Generic Access):
• VoLGA reuses Generic Access Network (GAN) specifications, originally designed for
Wi-Fi, adapting them for LTE.
• Dual-mode mobile devices connect to the LTE core network over the LTE link and
Internet, similar to GAN's principles.


In summary, the interconnection with UMTS and GSM in LTE networks involves seamless
mobility procedures, common network elements, interworking functions, and careful network
planning to ensure efficient integration and service continuity across different generations of
mobile networks.

(3)Write short notes on:


(i) LTE Security Architecture.

(ii) Mobility Management in Idle State.

(i)LTE Security Architecture


Figure 14.2 gives an overview of the complete security architecture for LTE. The
stratums identified, each addressing a sufficiently isolated category of security
threats, are the application, home, serving and transport stratum.

1. User Plane Security:
• Encryption: LTE uses the Advanced Encryption Standard (AES) to encrypt user data as it travels between the mobile device and the base station (eNodeB). This ensures that unauthorized parties cannot easily decipher the information.
2. Control Plane Security:
• Integrity Protection: Messages exchanged between the mobile device and the network undergo integrity protection to prevent tampering. This is achieved through the use of integrity algorithms.
• Authentication: Mutual authentication occurs between the user device (UE - User Equipment) and the network. This ensures that both parties are who they claim to be. Authentication is based on the use of Subscriber Identity Modules (SIM cards) and authentication algorithms.
3. Key Management:
• Key Derivation: LTE uses key derivation functions to derive session keys used for encryption and integrity protection. These keys are dynamically generated and updated regularly to enhance security.
• Key Agreement: Key agreement protocols ensure that the network and user device agree on a set of secret keys to be used for secure communication. The establishment of these keys is a crucial aspect of the security architecture.
4. Network Domain Security:
• Firewall: Firewalls are implemented in the LTE network to control and monitor the incoming and outgoing network traffic. This helps protect against unauthorized access and potential security threats.
• Intrusion Detection and Prevention Systems (IDPS): These systems monitor network and/or system activities for malicious exploits or security policy violations. They play a role in identifying and mitigating potential security breaches.
5. Evolved Packet Core (EPC) Security:
• Security Gateways (SeGW): The SeGW provides
secure communication between the LTE network and
external networks, such as the internet. It helps protect
against various attacks and ensures the integrity and
confidentiality of data in transit.
• Home Subscriber Server (HSS): The HSS is responsible for storing user subscription-related information and plays a role in authentication and authorization processes.
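To make the key derivation idea concrete, here is a minimal sketch using HMAC-SHA-256 in the spirit of the LTE key hierarchy; the labels and key names are illustrative only and do not follow the exact 3GPP TS 33.401 encoding.

```python
import hmac, hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    """Derive a 256-bit subkey from a parent key (illustrative, HMAC-SHA-256 based)."""
    return hmac.new(key, label, hashlib.sha256).digest()

k_asme = bytes(32)                          # placeholder for the key shared with the MME
k_enb = kdf(k_asme, b"eNodeB")              # key handed down to the base station
k_up_enc = kdf(k_enb, b"UP-encryption")     # user-plane encryption key
k_rrc_int = kdf(k_enb, b"RRC-integrity")    # control-plane integrity key
```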
It's important to note that the LTE security architecture is comprehensive, addressing various aspects
of security to create a robust and secure communication environment for mobile users.
(ii)Mobility Management in Idle State.
In the context of mobile networks, Mobility Management in Idle State refers to the set of procedures
and protocols that enable a mobile device to maintain its connection and availability while not actively
engaged in a communication session. When a mobile device is in an idle state, it is not actively
involved in a call, data session, or any other communication activity.

During this idle state, the mobile device periodically communicates with the network to ensure that it
is reachable and can receive incoming calls or messages. This involves updating the location
information of the mobile device in the network's tracking area or location area.

Key components of Mobility Management in Idle State include:

1. Location Update: The mobile device periodically informs the network about its current location. This allows the network to keep track of the mobile device's location and update the routing information accordingly.
2. Periodic Location Update: The mobile device performs periodic location updates to ensure that the network has the latest information about its location. This helps in optimizing the network resources and ensuring efficient call routing.
3. Cell Reselection: The mobile device monitors neighboring cells and may decide to reselect a new cell if it determines that it would provide better signal quality or other advantages. This is known as cell reselection.
4. Paging: When the network needs to reach the mobile device (e.g., for an incoming call or message), it initiates a paging procedure to locate the device. The network broadcasts a paging message to the cells in the tracking area, and the mobile device responds if it is within the specified area.
These procedures help in efficient management of mobile devices in the idle state, ensuring that they
can be reached when needed and that network resources are utilized optimally.
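The cell reselection decision mentioned above can be illustrated with a small ranking function; the hysteresis margin and the signal measurements are assumed values.

```python
def reselect(serving_rsrp: float, neighbours: dict, hysteresis_db: float = 3.0):
    """Pick a better cell only if it beats the serving cell by the hysteresis margin."""
    best_cell, best_rsrp = max(neighbours.items(), key=lambda kv: kv[1])
    if best_rsrp > serving_rsrp + hysteresis_db:
        return best_cell
    return None                       # stay on the current cell

# Example: neighbour B is 5 dB stronger than the serving cell, so the device reselects to it.
print(reselect(-100.0, {"A": -101.0, "B": -95.0}))   # -> 'B'
```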

(4) Integrate S1 and X2 handover scenarios. Why is X2 handover not suitable in all situations?

Integration of S1 and X2 handover scenarios
In an X2 handover, the source eNodeB prepares and executes the handover directly with the target eNodeB over the X2 interface; the MME is only informed afterwards, through a path switch request, so that the user-plane path is moved to the target cell. When no X2 interface exists between the two eNodeBs, or the X2 link is unsuitable, an S1 handover is used instead: the source eNodeB sends a handover-required message to the MME, which coordinates the preparation of the target eNodeB over the S1 interface before the device is commanded to change cells. The two procedures therefore complement each other in a live network.
Why X2 handover is not suitable in all situations
X2 handover, also known as direct handover, involves the transfer of a mobile device's connection from one eNodeB (Evolved Node B) to a neighboring eNodeB directly over the X2 interface. While X2 handover has its advantages, it may not be suitable in all situations for several reasons:
1. Interoperability Issues:
• X2 handover relies on the presence of an X2 interface between neighboring eNBs for direct
communication. In some network deployments, especially in heterogeneous networks or multivendor
environments, there might be challenges in ensuring seamless interoperability between different
vendors' equipment.
2. Network Topology:
• In certain network topologies, the X2 interface might not be available or feasible. For example, in
scenarios where eNBs are not directly connected, X2 handover may not be a viable option.
3. Backhaul Limitations:
• X2 handover requires a low-latency and high-capacity backhaul connection between eNBs. In
situations where the backhaul capacity is limited or not reliable, X2 handover performance may be
compromised.
4. Load Imbalance:
• X2 handover may not be suitable in cases of significant load imbalance between neighboring eNBs. If
one eNB is heavily loaded while another is relatively idle, it might be more efficient to perform a
handover to a more suitable neighboring cell, even if it involves a higher signaling overhead.
5. Handover Decision Complexity:
• The decision to perform an X2 handover involves complex algorithms and considerations, such as
signal strength, interference, and load balancing. In certain scenarios, these decision-making
processes may not be well-suited to X2 handovers, and other handover types (e.g., S1 handover) may
be more appropriate.
6. Limited Mobility Support:
• X2 handover is typically more suitable for scenarios with moderate mobility. In cases of high-speed
mobility or handovers involving cells from different tracking areas, other handover types like S1
handover may be more suitable.
7. Security Concerns:
• Security considerations may impact the deployment of X2 handovers, especially in situations where
secure communication between neighboring eNBs is challenging to implement. Security
vulnerabilities could be exploited if not adequately addressed.
8. Resource Allocation Challenges:
• Efficient resource allocation is essential for X2 handovers. If the network is not designed to handle the
dynamic resource needs associated with X2 handovers, it may lead to suboptimal performance.
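The considerations above can be condensed into a simple decision rule: use X2 when a direct, healthy X2 interface to the target eNodeB exists, otherwise fall back to the MME-coordinated S1 handover. The sketch below encodes that rule; the attributes and the latency threshold are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class NeighbourCell:
    has_x2_link: bool          # direct X2 interface configured to this eNodeB
    x2_latency_ms: float       # measured backhaul latency on the X2 link

def choose_handover(target: NeighbourCell, max_x2_latency_ms: float = 20.0) -> str:
    """Prefer the faster X2 handover, fall back to S1 when X2 is absent or too slow."""
    if target.has_x2_link and target.x2_latency_ms <= max_x2_latency_ms:
        return "X2 handover"
    return "S1 handover"       # coordinated via the MME over the S1 interface

print(choose_handover(NeighbourCell(has_x2_link=False, x2_latency_ms=0.0)))  # S1 handover
```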
UNIT 1
(1)Explain the remote access technologies with diagram.

Remote access is the ability of users to access a device or a network from any location. With that
access, users can manage files and data that are stored on a remote device, allowing for continued
collaboration and productivity from anywhere.

This is different from using a cloud solution, as it provides access to an on-premises environment
rather than being hosted offsite in a shared environment and available via the internet. This makes
remote access crucial for businesses of all sizes which have not moved to a cloud-first model, or
which require access to on-premises machines or resources. Three of the most common remote access technologies are:

Remote Desktop Services (RDS), also known as Terminal Services, is one of the most common methods used by SMBs to enable remote work. By using RDS, individuals can remotely connect to an endpoint device or server which supports Remote Desktop Protocol (RDP) via a Terminal Server.

The connection can be made over a local network or internet connection and gives the user full access
to the tools and software installed on the machine they connect to. This method is frequently used by IT
departments to remotely access servers, or to provide easy local software access to multiple
employees.

One common business application which is frequently used with RDS is Intuit
Quickbooks. Many companies install the application on a central Terminal Server instead of individual
computers, allowing multiple users to connect to the software on a remote device via RDS and access the
toolset.

Remote Access Software

Remote Access Software offers an alternative to RDS and leverages a dedicated software to remotely
connect users to an endpoint device from anywhere in the world via the internet. This method of
remote access is typically the easiest to implement, as it only requires the user to install the software on
the computer to be accessed. This type of remote access is especially useful when most of the
organization’s endpoint devices are desktops.

Virtual Private Network

A Virtual Private Network (VPN) is a technology which creates a smaller, private network on top of a larger public network – most commonly the internet. By logging into the VPN, users can gain internet-based access to applications that would otherwise only work on local networks. The goal of any client-based VPN solution is to provide remote employees with the same level of access as onsite staff. However, this is functionally different from an RDS session, as it does not allow full access to an entire desktop, but only specific applications, software, and other resources which the user has been given access to.
What Is A Remote Access Device?
A remote access device is a phone or a computer through which you remotely access another phone or
computer. This means you can access another device without physically touching the accessed device.
This remote access is done through remote access software or an application, as appropriate for the device. An internet connection is a mandatory requirement, though the controlled and controller devices can be on different internet connections.

Here are mainly four types of device remote access available, and they are explained below.

• 1. Access Phone from Phone: You can remotely access the host phone(Android or iOS) from the
client phone(Android or iOS). The phones can be connected to cellular network as well as Wi-Fi
network.

• 2. Access Phone from Computer: You can remotely access the host phone(Android or iOS) from
the client computer(Windows, macOS, Linux). The connected devices can have different
network connections.

• 3. Access Computer from Phone: You can remotely access the host computer(Windows, macOS,
Linux) from the client phone(Android, iOS).

• 4. Access Computer from Computer: You can remotely access the host computer from the
client computer irrespective of the operating system installed on them.
(2) What are the components required for designing a network? Explain.

Servers
Servers or Host computers are powerful computers that store data or applications and connect to resources that
are shared by the users of a network.
• They act as a central repository for all the data and applications used by the network.
• Some common examples of servers are file servers, print servers, web servers, and mail servers.
Servers can be classified based on their functionality, such as application servers, database servers, and domain
name servers.

Clients
A client is the computer used by the users of the network to access the servers and shared resources (such as
hard disks and printers). So, a personal computer is a client. Clients can be classified based on their functionality,
such as thin clients and thick clients. Thin clients are lightweight computers that rely on the server to perform
most of the processing, whereas thick clients have their own processing power and can run applications locally.

Channels
• The technical name of channels is a network circuit. It is the pathway over which information travels
between the different computers (clients and servers) that comprise the network.
• Channels can be classified based on their transmission medium, transmission rate or bandwidth,
transmission directional capability, and the type of the signal.
• Transmission medium is the physical medium of the channel, which can be either wireline or wireless.

• The wireline is called the guided media or line-based media. The wireline is of several kinds such as twisted pair
wire, coaxial cable, and fiber optic cable. In wireless media, there is no physical wire along which information
travels, and the information is transmitted without wires from one transmission station to the next. Common
examples are radio, mobile networks, microwave, and satellite.

• Signal type can be analog and digital. Analog signals are ‘continuous’ (they take on a wide range of values)
and digital signals are ‘discrete’ and binary (take on only two values). Digital signals are more suitable for
computer networks because computers represent all information in binary.

Interface Devices
The devices that connect clients and servers (and sometimes other networks) to the channel are called interface
devices. The common examples are modems and network interface cards. Network interface cards (NICs) are
hardware devices that are installed in a client or server and provide a physical connection to the network.
Modems are used to connect to remote networks or the Internet through a telephone line.
Operating Systems
This is the Network Software. It serves the same purpose that the operating system serves in a stand-alone
computer. The operating system controls the overall functioning of the network, including managing resources,
scheduling tasks, and handling security. Some common examples of network operating systems are Windows
Server, Linux, and Unix.
Designing a network involves several key components to ensure a robust and efficient system. Here
are the essential elements:
1. Requirements Analysis: Understand the specific needs of the organization or users. Consider
factors such as the number of users, types of applications, data volume, and security
requirements.
2. Topology: Decide on the network topology, which defines the physical or logical layout of the
network. Common topologies include bus, star, ring, and mesh. The choice depends on factors
like scalability, fault tolerance, and cost.
3. Hardware: Choose the appropriate networking devices, such as routers, switches, and access
points. Consider factors like data transfer rates, scalability, and compatibility with other devices.
4. Software: Select the necessary network protocols and operating systems for routers, switches,
and other devices. Ensure compatibility and security features.
5. IP Addressing and Subnetting: Plan the IP addressing scheme and subnetting to efficiently
allocate and manage IP addresses. This is crucial for proper communication within the network.
6. Routing and Switching: Design the routing and switching infrastructure to enable data to flow
efficiently between devices. Choose routing protocols and configure switches for optimal
performance.
7. Security: Implement security measures, including firewalls, encryption, and access control, to
protect the network from unauthorized access and cyber threats.
8. Scalability: Design the network with scalability in mind to accommodate future growth in
terms of users, devices, and data traffic.
9. Redundancy and High Availability: Integrate redundancy mechanisms to ensure
uninterrupted network operation in case of device failure. This may include redundant links,
devices, or data centers.
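For the IP addressing and subnetting step (item 5 above), Python's standard ipaddress module can sketch how an address block is split into subnets; the example block and prefix length are arbitrary.

```python
import ipaddress

# Split a /24 office block into four /26 subnets, e.g. one per department.
block = ipaddress.ip_network("192.168.10.0/24")
for subnet in block.subnets(new_prefix=26):
    print(subnet, "usable hosts:", subnet.num_addresses - 2)   # minus network/broadcast
# 192.168.10.0/26 usable hosts: 62
# 192.168.10.64/26 usable hosts: 62 ...
```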

(3) Explain in detail about:
(i) DWDM and OFDM
(ii) Firewalls and L3 Switches

OFDM:

Definition: Orthogonal Frequency Division Multiplexing (OFDM) is a method of digital


data modulation, whereby a single stream of data is divided into several separate
sub-streams for transmission via multiple channels.

OFDM uses the principle of frequency division multiplexing (FDM), where the
available bandwidth is divided into a set of sub-streams having separate frequency
bands.
Frequency Division Multiplexing (FDM) is a technology that transmits multiple
signals simultaneously over a single transmission path, such as cable or wireless
system.

Each carrier is modulated by the data such as text, voice, video etc. Orthogonal
FDM spread spectrum technique distributes the data over a large number of
carriers that are spaced apart at precise frequencies.

This spacing provides the "orthogonality." This technique prevents the demodulators from seeing frequencies other than their own.

Working Principle of OFDM:OFDM is a specialised FDM having the constraint that


the sub-streams in which the main signal is divided, are orthogonal to each other.
Orthogonal signals are signals that are perpendicular to each other. A main
property of orthogonal signals is that they do not interfere with each other.

When any signal is modulated by the sender, its sidebands spread out either side.
A receiver can successfully demodulate the data only if it receives the whole signal.
In case of FDM, guard bands are inserted so that interference between the signals,
resulting in cross-talks, does not occur. However, since orthogonal signals are used
in OFDM, no interference occurs between the signals even if their sidebands
overlap. So, guard bands can be removed, thus saving bandwidth. The criterion that must be maintained is that the carrier spacing should be equal to the reciprocal of the symbol period.

In order that OFDM works, there should be very accurate synchronization between
the communicating nodes. If frequency deviation occurs in the sub-streams, they
will not be orthogonal any more, due to which interference between the signals
will occur.
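A minimal numerical sketch of this principle: an inverse FFT places the data symbols on orthogonal subcarriers whose spacing equals the reciprocal of the symbol period, and the receiver recovers them with a forward FFT. NumPy is assumed; the subcarrier count and cyclic-prefix length are arbitrary.

```python
import numpy as np

N = 64                                   # number of subcarriers (assumed)
symbols = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK data symbols

time_signal = np.fft.ifft(symbols)       # one OFDM symbol: IFFT places data on orthogonal subcarriers
cp = time_signal[-16:]                   # cyclic prefix, guards against multipath delay spread
tx_symbol = np.concatenate([cp, time_signal])

# Receiver: strip the cyclic prefix and take the FFT to recover the data symbols.
rx = np.fft.fft(tx_symbol[16:])
assert np.allclose(rx, symbols)
```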

The following diagram plots FDM versus OFDM, to depict the saving in bandwidth obtained
by OFDM –
Dense wavelength division multiplexing (DWDM) is a technology that
multiplexes data signals from different sources so they can share a single optical
fibre pair while maintaining complete separation of the data streams.

DWDM can handle higher speed protocols up to 100 Gbps per channel. Each
channel is only 0.8nm apart. Dense wavelength division multiplexing works on the
same principle as CWDM but in addition to the increased channel capacity it can
also be amplified to support much longer distances.

The diagram given below represents the dense wavelength division multiplexing (DWDM)

Working of DWDM
The working of DWDM is explained below −

• DWDM modulates multiple data channels into optical signals that have different
frequencies and then multiplexes these signals into a single stream of light that is sent
over a fibre-optic cable.
• Each optical signal has its own frequency, so up to 80 data streams can be transmitted
simultaneously over the fibre using only eight different light wavelengths.
• DWDM based networks can transmit data in IP, ATM, SONET/SDH, and Ethernet and
can handle bit rates between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM based
networks can carry different types of traffic at different speeds over an optical channel.
• At the other end, a multiplexer demultiplexes the signals and distributes them to their
various data channels.
Technical advantages of DWDM

The technical advantages of DWDM are explained below −

• Transparency − Because DWDM is physical layer architecture it can transparently


support both TDM and data formats such as ATM, Gigabit Ethernet, ESCON, and Fibre
Channel with open interfaces over a common physical layer.
• Scalability − DWDM can leverage the abundance of dark fibre in many metropolitan area and enterprise networks to quickly meet demand for capacity on point-to-point links and on spans of existing SONET/SDH rings.
• Dynamic provisioning − Fast, simple, and dynamic provisioning of network connections
give providers the ability to provide high bandwidth services in days rather than
months.

Features

The features of dense wavelength-division multiplexing are as follows

• It is a type of technology that increases the bandwidth while transmitting data signals over the fibre.
• The signal carrying capacity can be increased to a large extent. The speed of the data
signals can be up to 400 Gbps.

Firewalls and L3 Switches

What is a Layer 3 switch?
• Also called a multilayer switch, it is a specialized hardware device that has a lot in
common with the traditional router—both in physical appearance and function.
• Layer 3 switches support the same routing protocols as routers and inspect incoming
packets, as well as make vital routing decisions the same way routers do. And they do
these routing tasks in addition to performing switching duties. Like routers,
• Layer 3 switches can be configured to support such routing protocols as:
• Routing Information Protocol (RIP)
• Open Shortest Path First (OSPF)
• Enhanced Interior Gateway Routing Protocol (EIGRP)

The main features of a Layer 3 switch are the following:
• Performance on two OSI layers: Layer 2 and Layer3
• Usually come in 24 or 48 Ethernet port models—however, without the WAN interface
• Connects devices within the same subnet
• Uses a simple switching algorithm
• Routing protocols are simple
What are the benefits of a Layer 3 switch?
These switches have many uses in an extensive, busy network. They:
• Support routing between VLANs

• Enhance fault isolation

• Streamline security management

• Reduce the volume of broadcast traffic

• Ease the configuration process for VLANs (Note: A separate router is not needed
between each VLAN.)
• Separate routing tables, thus separating traffic better

• Support flow accounting and high-speed scalability

• Lower network latency because a packet does not have to make extra hops to go
through a router

FIREWALLS
A firewall is a network security device that monitors and filters incoming and outgoing network traffic based on an organization's previously established security policies.
Four techniques that firewalls use to control access and enforce the site's security policy are as follows:
• Service control – determines the type of internet services that can be accessed,
inbound or outbound. The firewall may filter traffic on this basis of IP address and TCP
port number; may provide proxy software that receives and interprets each service
request before passing it on; or may host the server software itself, such as web or
mail service.
• Direction control – determines the direction in which particular service request may be
initiated and allowed to flow through the firewall.
• User control – controls access to a service according to which user is attempting to
access it.
• Behavior control – controls how particular services are used.

TYPES OF FIREWALLS
There are 3 common types of firewalls.
• Packet filters

• Application-level gateways

• Circuit-level gateways

Packet Filtering Router


A packet filtering router applies a set of rules to each incoming IP packet and then forwards
or discards the packet.
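A packet filter of this kind can be sketched as a rule list checked top-down; the rule fields and the example packets are invented.

```python
RULES = [
    # (action, protocol, destination port); None means "any port"
    ("allow", "tcp", 80),     # web traffic
    ("allow", "tcp", 25),     # mail
    ("deny",  "any", None),   # default rule: drop everything else
]

def filter_packet(protocol: str, dst_port: int) -> str:
    """Return the action of the first rule that matches the packet."""
    for action, rule_proto, rule_port in RULES:
        if rule_proto in (protocol, "any") and rule_port in (dst_port, None):
            return action
    return "deny"

print(filter_packet("tcp", 80))    # allow
print(filter_packet("udp", 53))    # deny (falls through to the default rule)
```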

Application-Level Gateway
An application-level gateway, also called a proxy server, acts as a relay of application-level traffic.

Circuit-Level Gateway
• A circuit-level gateway can be a stand-alone system or it can be a specialized function performed by an application-level gateway for certain applications.

• A Circuit-Level Gateway (CLG) is a type of firewall or network security device that operates at
the session layer (Layer 5) of the OSI model. Unlike packet-filtering firewalls that work at the
network layer (Layer 3) and examine individual packets, a circuit-level gateway works at a
higher level of abstraction.
Steps involved in Circuit-Level Gateway operation (just read the headings alone)

1. Session Establishment: When a connection request is initiated from an internal network to an external network, the circuit-level gateway establishes a connection on behalf of the internal system. It acts as a proxy between the internal and external systems.
2. Authentication: The CLG may perform user authentication before allowing the connection to
proceed. This adds an extra layer of security by verifying the identity of the user or system
initiating the connection.
3. Connection Tracking: Once the connection is established, the circuit-level gateway monitors
the ongoing communication between the internal and external systems. It keeps track of the
state of the connection and ensures that only valid and authorized communication is allowed.
4. Proxying: The CLG acts as an intermediary for data exchange between the internal and external
systems. It can hide the internal network structure by forwarding requests on behalf of internal
systems and returning the responses.
5. Stateful Inspection: Circuit-level gateways maintain state information about active
connections. This allows them to make access control decisions based on the context of the
connection, considering factors such as the state of the connection and the rules defined for it.
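
To make the session-establishment and proxying steps concrete, here is a minimal TCP relay sketch using plain Python sockets (not a production gateway): it accepts a connection from an internal client, opens a second connection to the external server on the client's behalf, and copies bytes in both directions. The listening port and destination address are placeholders, and the authentication and connection-tracking steps described above are omitted.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)       # where internal clients connect (placeholder)
DEST_ADDR   = ("203.0.113.10", 80)    # external server (placeholder)

def pipe(src, dst):
    """Copy bytes from src to dst until the connection closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client):
    # The gateway opens the outbound connection itself, acting as a proxy
    # between the internal client and the external server.
    remote = socket.create_connection(DEST_ADDR)
    threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
    pipe(remote, client)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```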

(4)(i)Explain the concept of shared media networks.


(ii)Describe in detail about the remote access technologies and
devices.

(i)Explain the concept of shared media networks.


In advanced network principles and protocols, a shared media network refers to a type of
network topology where multiple devices share a common communication medium, such as a
physical cable or a wireless channel, to transmit and receive data. This concept is primarily
associated with older networking technologies like Ethernet in its original form (e.g., 10BASE5
or 10BASE2) and bus topology networks.

Here are some key points about shared media networks:

1. *Single Communication Channel:* In a shared media network, all devices on the network
share a single communication channel. This means that when one device transmits data, all
other devices connected to the same medium can potentially hear and receive that data.

2. *Contention for Access:* Devices in a shared media network contend for access to the
communication medium. This contention can lead to collisions, where multiple devices attempt
to transmit data simultaneously, causing data corruption.

3. *CSMA/CD Protocol:* To manage contention and collisions, shared media networks often
use protocols like Carrier Sense Multiple Access with Collision Detection (CSMA/CD). CSMA/CD
helps devices on the network listen for activity on the medium and wait for a clear channel
before transmitting data. In the event of a collision, devices follow a backoff mechanism to
retry transmission (a small backoff sketch appears after this list).

4. *Ethernet as an Example:* Ethernet networks, especially in their earlier forms like 10BASE5
and 10BASE2, are classic examples of shared media networks. In these networks, devices were
physically connected to a common coaxial cable, and they had to contend for access to
transmit data.

5. *Evolution to Switched Networks:* Shared media networks have largely been replaced by
switched networks, where each device has a dedicated point-to-point connection to a switch.
This improves network performance and reduces contention and collisions.

6. *Wireless Shared Media:* In the context of wireless networks, shared media refers to the
shared radio frequency spectrum where multiple wireless devices contend for access to
transmit data. Protocols like CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
are used to manage wireless contention.
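
As referenced in point 3, here is a small sketch of the truncated binary exponential backoff that CSMA/CD stations use after a collision; the 51.2 µs slot time and 16-attempt limit are the classic 10 Mbps Ethernet values, and the simulation is illustrative only.

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time in microseconds
MAX_ATTEMPTS = 16     # transmission is abandoned after 16 failed attempts

def backoff_delay(attempt):
    """Return a random backoff delay (in microseconds) after the given collision count."""
    k = min(attempt, 10)                  # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1) # pick a random number of slot times to wait
    return slots * SLOT_TIME_US

# Example: delays chosen after the 1st, 2nd and 3rd collision of one frame.
for attempt in (1, 2, 3):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")
```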

It's important to note that shared media networks are less common in modern networking
environments due to the limitations mentioned. Switched networks, which provide dedicated
connections between devices, have become the standard for improved network performance and
scalability.

Advantages of Shared Media Networks:

1. *Simplicity:* Shared media networks are relatively simple to set up, making them cost-effective
for smaller network environments.

2. *Cost-Efficiency:* They require less cabling and infrastructure compared to switched networks,
which can be cost-effective for small to medium-sized networks.

3. *Ease of Maintenance:* Fewer cables and connection points can make shared media networks
easier to manage and troubleshoot.

Disadvantages of Shared Media Networks:

1. *Limited Scalability:* Shared media networks are not easily scalable because as more
devices are added, contention for the medium increases, leading to reduced network
performance.
2. *Collision Issues:* Contention for access to the medium can result in collisions, leading to
data corruption and the need for collision detection and retransmission mechanisms, which
can reduce network efficiency.

3. *Performance Degradation:* As network traffic increases, shared media networks can
suffer from performance degradation due to congestion.

4. *Security Concerns:* In shared media networks, all devices on the network can potentially
eavesdrop on each other's communications, which can pose security risks. This is less of a
concern in switched networks where traffic is isolated to individual connections.

*Evolution to Switched Networks:*

The limitations of shared media networks have led to the widespread adoption of switched
networks in modern networking. In switched networks, each device has a dedicated point-to-point
connection to a network switch, which overcomes the scalability and collision issues associated
with shared media networks. This results in improved network performance and security.

*Examples of Shared Media Networks:*

1. *Ethernet Hubs:* Early Ethernet networks used hubs, which were essentially shared
media networks. All devices connected to the hub shared the same collision domain.

2. *Wireless LANs:* In wireless local area networks (WLANs), multiple devices share the
same radio frequency spectrum, creating a shared medium. However, WLANs use protocols
like CSMA/CA to manage wireless contention and avoid collisions.

Overall, while shared media networks played a significant role in the history of networking, they
are less prevalent today in favor of switched networks and more advanced protocols that offer
improved performance and scalability.

(ii)Remote access technology and devices

(refer from the 1st qn)

UNIT2
(1) Discuss in detail the Mobile WiMAX technologies which improve performance in terms of
speed, throughput and capacity.

(2) Examine the following frame structures. (i) FDD uplink frame structure. (7) (ii) TDD frame structure. (6)

(i)FDD UPLINK FRAME


A frame structure in the context of wireless communication and networking refers to the
organization and arrangement of data and control information within a predefined time
interval. Frame structures are used to transmit data and signaling in a structured and
synchronized manner. They are essential for effective communication between network
devices.

KEY COMPONENTS:

Header: The header contains control and synchronization information.

Payload: The payload contains the actual data to be transmitted. This can include user data,
voice, video, or any other type of information that needs to be communicated between
devices.

Trailer: The trailer is used for error detection and correction

Delimiter: Delimiters are used to indicate the start and end of a frame.

Frame Length: The frame structure specifies the length of the frame.

FDD uplink frame structure.

FDD (Frequency Division Duplex) :

• FDD is a duplexing technique that separates the uplink and downlink communication by
allocating distinct frequency bands for each direction of communication.

• In an FDD uplink frame structure, the uplink signals are transmitted on a dedicated
frequency band.
• The key point is that these frequency bands are separate and allocated to specific directions
of communication. In FDD, one frequency band is reserved for uplink (from mobile devices
to the network), and another frequency band is reserved for downlink (from the network to
mobile devices). This separation in frequency bands helps prevent interference between
uplink and downlink signals.

Frame Duration: FDD frames have a fixed duration, usually on the order of milliseconds. Common
frame durations in cellular networks are 10 milliseconds or 20 milliseconds.

Time Division: FDD frames are divided into time slots or subframes, where each subframe
corresponds to a specific time interval within the frame. For example, a 10 ms frame might
consist of ten 1 ms subframes.

Uplink Data Slots: Within each subframe, there are time slots allocated for uplink data
transmission. These slots are typically reserved for user data and control information
generated by mobile devices. Multiple users can transmit data simultaneously in their
allocated time slots.

Guard Periods: Guard periods or guard intervals are inserted between subframes to ensure that
there is no overlap or interference between adjacent subframes. These guard periods help in
maintaining synchronization and reducing interference.

Control Channels: FDD uplink frames also include control channels, such as the Physical Uplink
Control Channel (PUCCH) and the Physical Uplink Shared Channel (PUSCH), for transmitting
control and signaling information.
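
As a quick worked example of the frame/subframe timing described above (using the 10 ms frame with ten 1 ms subframes mentioned earlier; real systems add guard periods and a finer slot structure):

```python
FRAME_MS = 10        # frame duration from the example above
SUBFRAMES = 10       # ten subframes per frame
SUBFRAME_MS = FRAME_MS / SUBFRAMES

print(f"{1000 // FRAME_MS} frames per second, each with {SUBFRAMES} subframes")
for i in range(SUBFRAMES):
    # Start and end time of each uplink subframe within one frame.
    print(f"subframe {i}: {i * SUBFRAME_MS:.1f} ms to {(i + 1) * SUBFRAME_MS:.1f} ms")
```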

(ii)TDD Frame Structure:


TDD is another duplexing technique that divides the communication into time slots within a shared
frequency band. In a Time Division Duplex (TDD) system, uplink and downlink signals share the same
frequency band but are transmitted in different time slots. TDD divides the frame into time slots, and
during each time slot, the system switches between uplink and downlink transmission. This means that
both uplink and downlink signals use the same frequency band but at different times.

Frame Duration: TDD frames have a fixed duration, similar to FDD, typically on the order of
milliseconds.

Time Division: TDD frames are divided into time slots. However, in TDD, the allocation of time slots is
dynamic and can vary based on the network's configuration and scheduling algorithm.
Uplink/Downlink Time Slots: TDD frames alternate between uplink and downlink time slots. For
example, the first portion of the frame might be allocated to downlink communication, and the second
portion to uplink communication.

Dynamic Slot Allocation: TDD systems can dynamically allocate time slots to either uplink or downlink
communication based on network load and traffic conditions. This flexibility allows for efficient
resource utilization.

Control Channels: TDD frame structures also include control channels for signaling and
synchronization purposes, but the allocation of these control channels can vary based on the current
configuration.
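
The dynamic downlink/uplink split of a TDD frame can be sketched as below; the 10-slot frame and the simple demand-proportional rule are assumptions chosen only to illustrate the idea, not any standard's actual scheduler.

```python
FRAME_MS = 10
SLOTS_PER_FRAME = 10

def allocate_tdd(downlink_demand, uplink_demand):
    """Split the frame's slots between downlink and uplink in proportion to demand."""
    total = downlink_demand + uplink_demand
    dl_slots = round(SLOTS_PER_FRAME * downlink_demand / total) if total else SLOTS_PER_FRAME // 2
    dl_slots = max(1, min(SLOTS_PER_FRAME - 1, dl_slots))  # keep at least one slot each way
    return {"downlink_slots": dl_slots, "uplink_slots": SLOTS_PER_FRAME - dl_slots}

print(allocate_tdd(downlink_demand=8, uplink_demand=2))  # download-heavy traffic -> 8 DL / 2 UL
print(allocate_tdd(downlink_demand=3, uplink_demand=3))  # symmetric traffic -> 5 DL / 5 UL
```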

(3)Describe the architecture of mobile WiMAX IEEE802.16e.


WiMAX stands for Worldwide Interoperability for Microwave Access. This technology is based on
IEEE 802.16. It is used to provide higher data rates with increased coverage and is based on MAN
(Metropolitan Area Network) technology. Its range is up to 50 km, it can provide speeds up to 70 Mbps,
and it can operate in non-line-of-sight conditions. This technology is fast, convenient and cost-effective.

Architecture:

1. Physical Layer: This layer specifies the frequency band, synchronization between transmitter and
receiver, data rate and multiplexing scheme.
This layer is responsible for encoding and decoding of signals and manages bit transmission and
reception. It converts MAC layer frames into signals to be transmitted. Modulation schemes used on
this layer include QPSK, QAM-16 and QAM-64.

2. MAC Layer:
This layer provides an interface between the convergence layer and the physical layer of the WiMAX
protocol stack. It provides point-to-multipoint communication using a connection-oriented, scheduled
access mechanism (unlike Wi-Fi, which relies on CSMA/CA). The MAC layer is responsible for transmitting
data in frames and controlling access to the shared wireless medium. The MAC protocol defines how and
when a subscriber may initiate a transmission on the channel.

3. Convergence Layer:
This layer provides the information of the external network. It accepts higher layer protocol data
unit (PDU) and converts it to lower layer PDU. It provides functions depending upon the service
being used.

1. Mobile Station (MS):


• The Mobile Station, often referred to as a "Subscriber Station" (SS), is the user's device
(e.g., smartphone, laptop, CPE) that connects to the WiMAX network.
• The MS communicates with the Base Station (BS) and follows the network's scheduling for
data transmission.
2. Base Station (BS):
• The Base Station, also known as a WiMAX Base Station, acts as the access point for the
Mobile Stations in a particular coverage area.
• The BS is responsible for managing the connection and data traffic to and from the MS within
its cell.
3. ASN Gateway (Access Service Network Gateway):
• The ASN Gateway, sometimes called the ASN-GW, serves as the interface between the WiMAX
access network and the core network.
• It manages the allocation of IP addresses and the routing of data to and from the MS.
4. Controller (ASN Base Station Controller - ASN-BSC):
• The ASN Base Station Controller is responsible for coordinating and managing multiple Base
Stations within an ASN.
• It handles tasks like handovers, resource allocation, and radio resource management.
5. Connectivity Service Network (CSN):
• The Connectivity Service Network provides connectivity to external networks, such as the
internet and private networks.
• It includes various network elements like the ASN Gateway, Home Agent, and other
components necessary for routing and forwarding data.
6. Home Agent (HA):
• The Home Agent is responsible for managing the mobility of MS within the WiMAX
network.
• It plays a crucial role in ensuring that the MS can maintain its IP address and connectivity
as it roams between different BS.
7. Interworking Function (IWF):
The Interworking Function is responsible for interfacing with external networks and
translating protocols to enable connectivity between the WiMAX network and different types
of networks.
8. ASN Transport Network:
This is the network infrastructure that connects the ASN Gateway, Base Stations, and other
network elements. It ensures the transport of data between these components.
9. Backhaul Network:
The Backhaul Network connects multiple Base Stations to the ASN Gateway and provides
the necessary transport for data to and from the core network.
10. Access Network (AN):
The Access Network consists of Base Stations and their associated MS. It provides the last-mile connectivity
to the user devices.

Advantages of WiMAX:

1. Wide Coverage Area: WiMAX can cover an area of up to 50 kilometers, making it suitable
for providing broadband access in rural and underserved areas.
2. High Data Rates: WiMAX can provide data rates of up to 75 Mbps, which is higher than
many other wireless technologies.
3. Scalability: WiMAX can be easily scaled to support a large number of users and devices.
4. Interoperability: WiMAX is based on an international standard, which allows for
interoperability between different vendors’ equipment.

5. Cost-effective: WiMAX is a cost-effective solution for providing broadband access in areas


where it is not economically feasible to deploy wired infrastructure.

Disadvantages of WiMAX:

1. Limited Mobility: WiMAX is designed for fixed or nomadic (semi-fixed) use, not for mobile use.
2. Interference: WiMAX operates in the same frequency range as other wireless technologies, which
can lead to interference.
3. Security Concerns: WiMAX uses a shared spectrum, which can make it vulnerable to security threats
such as eavesdropping and jamming.
4. Limited device availability: WiMAX devices are not as widely available as devices for other wireless
technologies, such as WiFi.
5. Limited penetration: WiMAX signals may have trouble penetrating through walls, buildings and
other obstacles.

Applications:
WiMAX technology is used in a variety of real-life applications, including:
• Broadband Internet Access
• Wireless Backhaul
• Mobile Broadband
• Public Safety
• Smart Grid
• Telemedicine
• VoIP (Voice over Internet Protocol)
• Video Surveillance

(4) Explain the following QoS parameters: (i) UGS (4) (ii) Best Effort Service (BE) (4) (iii) The Real-Time Polling Service (5)
Quality of service (QoS) refers to any technology that manages data traffic to reduce packet loss,
latency and jitter on a network. QoS controls and manages network resources by setting priorities for
specific types of data on the network. Enterprise networks need to provide predictable and measurable
services as applications -- such as voice, video and delay-sensitive data -- traverse a network.
Organizations use QoS to meet the traffic requirements of sensitive applications, such as real-time voice
and video, and to prevent the degradation of quality caused by packet loss, delay and jitter.

UGS (Unsolicited Grant Service):


• UGS is a QoS parameter used in IEEE 802.16 (WiMAX) networks. It is designed to provide a
specific quality of service for real-time traffic, particularly for applications like voice and video
streaming that require a constant and predictable data rate.

• Constant Bit Rate (CBR): UGS offers a Constant Bit Rate service. This means that it guarantees a
steady and unchanging data rate. For real-time applications like voice and video streaming,
which require a continuous and predictable flow of data, this is crucial. Users can expect a
consistent and reliable data rate throughout the duration of their UGS connection.
 Voice Communication: In Voice over IP (VoIP) applications, a consistent data rate ensures that
voice packets are delivered without variations in timing. This is essential for maintaining clear
and uninterrupted voice calls.
 Video Streaming: Video streaming, especially for live events or video conferencing, demands a
steady data rate to prevent buffering, artifacts, or interruptions in the video feed.

Low Latency: UGS is engineered to provide low and deterministic latency. In other words, it ensures
that data packets are delivered with minimal and consistent delay. This is essential for real-time
communication applications such as Voice over IP (VoIP) and video conferencing. With UGS, the
network strives to keep latency to a minimum, making it suitable for time-sensitive traffic.
Fixed Data Rate: UGS connections maintain a fixed data rate throughout the entire duration of the
connection. This unchanging data rate ensures that the data is transmitted at a steady pace, without
fluctuations. This stability is particularly important for applications that are sensitive to variations in
data rate, ensuring a smooth and uninterrupted user experience.

Resource Reservation: To meet these stringent QoS requirements, UGS connections typically reserve
network resources in advance. When a user or device establishes a UGS connection, a specific amount
of network bandwidth and other resources are allocated exclusively for this connection. This resource
reservation ensures that the UGS traffic has the necessary resources to meet its CBR requirements.
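
A quick worked calculation of the fixed grant a UGS connection might reserve in every frame; the 64 kbit/s voice rate and 20 ms frame duration are illustrative assumptions, not values mandated by the standard.

```python
BIT_RATE_BPS = 64_000   # constant bit rate of the voice stream (assumed)
FRAME_MS = 20           # air-interface frame duration (assumed)

bits_per_frame = BIT_RATE_BPS * FRAME_MS / 1000
grant_bytes = bits_per_frame / 8

print(f"UGS grant: {bits_per_frame:.0f} bits = {grant_bytes:.0f} bytes every {FRAME_MS} ms")
# -> 1280 bits = 160 bytes reserved in each frame
```

Because the grant is issued unsolicited in every frame, the subscriber station never has to contend for the channel or send bandwidth requests for this flow, which is what keeps latency low and constant.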

Usage and Applications:

UGS is particularly beneficial for real-time applications like Voice over IP (VoIP) calls and video streaming,
where a consistent and predictable data transmission is essential to maintain call quality and video playback
without interruptions or jitter.

Best Effort Service (BE):
The best-effort model is a single-service model and also the simplest service model. In this service
model, the network does its best to deliver packets, but does not guarantee delivery or control delay.
The best-effort service model is the default model in the Internet and applies to most network
applications.

Best Effort Service (BE):

Best Effort is a fundamental Quality of Service (QoS) parameter used in various communication
networks, including the internet and many other data networks. The concept of Best Effort essentially
means that network resources are allocated on a "first-come, first-served" basis without any specific
guarantees regarding the quality of service.

Key Characteristics of Best Effort Service (BE):

No Guarantees: One of the key characteristics of BE is that it provides no guarantees regarding the
quality of service. It does not promise specific levels of bandwidth, latency, or packet loss. Instead, it
simply delivers data packets as resources are available.

Shared Resources: In a network offering Best Effort Service, network resources are shared among all
users and applications. No priority is given to any specific type of traffic. This means that network
performance may vary based on the level of network congestion and other factors.

Low Priority: In situations where network resources become congested, Best Effort traffic is typically
the first to experience performance degradation. This can result in increased latency, packet loss, or
reduced data rates when the network is under heavy load.
Simplicity and Fairness: The simplicity of Best Effort makes it a fair way to allocate resources among
different users and applications. It ensures that all users get a fair share of available resources, but it
doesn't provide preferential treatment to any particular type of traffic.

The Real-Time Polling Service (rtPS) is a Quality of Service (QoS) class in wireless communication
systems, particularly in the context of WiMAX (Worldwide Interoperability for Microwave Access) or
other similar technologies. It is designed to cater to the specific needs of streaming applications,
like WebTV or MPEG streams, which require consistent and uninterrupted data transmission.

The primary features and characteristics of the rtPS QoS class include:

1. Bandwidth Guarantee: rtPS is designed to ensure that streaming applications receive the required
and guaranteed bandwidth for uninterrupted data transmission. This is crucial for maintaining the
quality of service for real-time and streaming media content.

2. Unicast Request Opportunities: In the rtPS class, base stations provide dedicated unicast request
opportunities to subscriber stations (SS). Instead of relying on a contention-based uplink resource
request mechanism at the beginning of a communication frame (which can lead to competition for
resources among different devices), the network schedules specific time slots in the second field of an
uplink subframe. During these scheduled time slots, only the particular subscriber station requiring
additional uplink bandwidth can send a request.

3. Uplink Bandwidth Request: During the scheduled unicast request opportunities, a subscriber station
can request additional uplink bandwidth as needed for its streaming application. This ensures that the SS
can adapt to changing bandwidth requirements, such as when streaming video quality needs to be
adjusted dynamically.

4. Predictable and Low Latency: rtPS aims to provide low-latency communication, making it suitable
for real-time applications. The guaranteed bandwidth and dedicated request opportunities help
minimize delays and packet loss, which is crucial for streaming services where interruptions or buffering
are undesirable.

Overall, the Real-Time Polling Service (rtPS) QoS class is designed to provide a reliable and predictable
data transmission service for streaming applications. By scheduling dedicated request opportunities
and ensuring adequate bandwidth, it helps maintain the quality and performance of real-time
multimedia services over wireless networks.
UNIT5

(1) Explain the design of the Software-Defined Network framework.
Software-Defined Networking (SDN) is an approach to network management that enables programmability and
control of network resources through software applications. The primary goal of SDN is to make networks more
flexible, scalable, and programmable by separating the control plane from the data plane. Here is an overview of the
key components and design principles of a Software-Defined Network framework:

1. Separation of Control Plane and Data Plane:

• Control Plane: This is responsible for making decisions about where to send traffic in the network. In
SDN, the control plane is decoupled from the physical devices and centralized in a software-based
controller.

• Data Plane: This is responsible for actually forwarding the network traffic based on the decisions
made by the control plane. SDN allows for the centralized control of multiple network devices from a
single controller.

2. SDN Controller:

• The SDN controller is a critical component that acts as the brain of the SDN framework. It
communicates with the network devices and makes decisions about how to forward traffic.

• The controller provides a northbound API (Application Programming Interface) that allows
applications to communicate with it, enabling the development of network applications and services.

3. Southbound APIs:

• These interfaces are used by the SDN controller to communicate with the network devices in the
data plane. Examples of southbound APIs include OpenFlow, NETCONF, and RESTful APIs.
• OpenFlow is one of the most widely used southbound APIs. It standardizes communication between
the SDN controller and the network devices, allowing for a more interoperable SDN ecosystem.

4. Network Devices:

• These include switches and routers that make up the physical infrastructure. In an SDN framework,
these devices have a simpler role in the data plane, primarily responsible for forwarding packets
based on the instructions received from the controller.

5. Application Layer (Northbound APIs):

• SDN allows for the development of applications that can directly interact with the network
infrastructure through the SDN controller. These applications can be created for specific network
services, traffic optimization, security, and more.
• The northbound APIs allow applications to communicate with the SDN controller. This enables the
development of custom applications and services that can dynamically control and manage the
network.

6. Network Virtualization:

• SDN facilitates network virtualization, allowing the creation of multiple logical networks on top of a
shared physical infrastructure. This is particularly beneficial for multi-tenancy and cloud
environments.

7. Dynamic Policy Enforcement:

• SDN enables the dynamic application of policies to the network. Network administrators can define
and enforce policies centrally through the SDN controller, leading to more efficient and responsive
network management.

8. Open Standards:

• SDN frameworks are designed to be open and standards-based, promoting interoperability and
preventing vendor lock-in. Open standards ensure that different components from various vendors
can work seamlessly together.

(2)SDN Architecture
The architecture of software-defined networking (SDN) consists of three main layers: the application layer, the control
layer, and the infrastructure layer. Each layer has a specific role and interacts with the other layers to manage and
control the network.

1. Infrastructure Layer: The infrastructure layer is the bottom layer of the SDN architecture, also known as the
data plane. It consists of physical and virtual network devices such as switches, routers, and firewalls that are
responsible for forwarding network traffic based on the instructions received from the control plane.
2. Control Layer: The control layer is the middle layer of the SDN architecture, also known as the control plane.
It consists of a centralized controller that communicates with the infrastructure layer devices and is responsible
for managing and configuring the network.
The controller interacts with the devices in the infrastructure layer using protocols such as OpenFlow to
program the forwarding behaviour of the switches and routers. The controller uses network policies and rules
to make decisions about how traffic should be forwarded based on factors such as network topology, traffic
patterns, and quality of service requirements.

3. Application Layer: The application layer is the top layer of the SDN architecture and is responsible for providing
network services and applications to end-users. This layer consists of various network applications that interact
with the control layer to manage the network.

Examples of applications that can be deployed in an SDN environment include network virtualization, traffic
engineering, security, and monitoring. The application layer can be used to create customized network services that
meet specific business needs.

The main benefit of the SDN architecture is its flexibility and ability to centralize control of the network. The separation
of the control plane from the data plane enables network administrators to configure and manage the network more
easily and in a more granular way, allowing for greater network agility and faster response times to changes in network
traffic.

o Centralized Network Control: One of the key benefits of SDN is that it centralizes the control of the network
in a single controller, making it easier to manage and configure the network.
o Programmable Network: In an SDN environment, network devices are programmable and can be
reconfigured on the fly to meet changing network requirements
o Cost Savings: With SDN, network administrators can use commodity hardware to build a network, reducing
the cost of proprietary network hardware.
o Enhanced Network Security: The centralized control of the network in SDN makes it easier to detect and
respond to security threats.
o Scalability: SDN makes it easier to scale the network to meet changing traffic demands.
o Simplified Network Management: SDN can simplify network management by abstracting the underlying
network hardware and presenting a logical view of the network to administrators.

Disadvantages of SDN

o Complexity: SDN can be more complex than traditional networking because it involves a more sophisticated
set of technologies and requires specialized skills to manage. For example, the use of a centralized controller
to manage the network requires a deep understanding of the SDN architecture and protocols.
o Dependency on the Controller: The centralized controller is a critical component of SDN, and if it fails, the
entire network could go down.
o Compatibility: Some legacy network devices may not be compatible with SDN, which means that
organizations may need to replace or upgrade these devices to take full advantage of the benefits of SDN.

(3)Describe the centralized and distributed control of SDN.


Centralized SDN Controller Architecture:
The centralized model follows a typical three-layer architecture:

• Application Layer: This layer sits at the top and represents the applications and services that users interact with. These
could be anything from web applications and mobile apps to enterprise software and big data analytics platforms.

• Control Plane Layer: This layer is responsible for managing and orchestrating the underlying infrastructure. It includes
components like the centralized controller, which is responsible for making decisions about how to route traffic and
allocate resources. The control plane also includes southbound APIs like REST-API and OpenFlow which allow the
controller to communicate with the underlying data plane.

• Data Plane Layer: This layer is responsible for actually moving data around the network. It consists of physical or virtual
servers, storage devices, and networking equipment. The data plane layer is where the workloads run and the data is
stored.

Working Process:

1. Network Devices: Switches and other network devices forward data packets based on flow rules they receive from
the central controller.
2. Southbound Interface: The controller communicates with network devices using a southbound API, like OpenFlow in
this case. This API allows the controller to configure the devices' flow tables, which specify how to handle different
types of data packets.

3. Flow Rules: The controller creates flow rules based on various factors, such as the source and destination of the data
packets, the type of traffic, and the desired network behavior. These rules are then sent to the network devices
through the southbound interface.

4. Packet Forwarding: When a network device receives a data packet, it looks up the packet's header information in its
flow table. The flow table entry corresponding to the packet's header tells the device where to forward the packet.

5. Northbound Interface: Applications and network administrators can interact with the controller using a northbound
API, like RESTful APIs in this case. This allows them to configure the controller, monitor the network, and
troubleshoot issues.
Terms:

• Centralized Controller: A single entity responsible for managing and controlling the entire network. It has a global
view of the network topology and makes all decisions about traffic flow and resource allocation.

• Southbound Interface: The communication channel between the controller and network devices. It uses protocols
like OpenFlow to configure the devices' flow tables.

• Northbound Interface: The communication channel between the controller and applications or network
administrators. It uses protocols like RESTful APIs to manage the controller and network.

• Flow Rules: Instructions installed in network devices' flow tables that specify how to handle different types of data
packets. These rules are created by the controller based on network policies and requirements.

• OpenFlow: A popular southbound API for SDN that allows the controller to programmatically configure the
forwarding behavior of network devices.

• RESTful APIs: A set of programming interfaces that follow the REST architectural style. They are commonly used for
communication between applications and web services.

Key characteristics:

• Single controller: A single controller manages the entire network, responsible for:
o Maintaining a global view of the network topology
o Making all forwarding decisions
o Distributing flow rules to switches

• Direct communication: Switches communicate directly with the central controller using a southbound API (e.g.,
OpenFlow).

Advantages:
• Global Visibility: The controller has a complete view of the network, enabling optimal decision-making.

• Simplified Management: Single point of control for policy configuration and network automation.

• Efficient Resource Utilization: Centralized view allows for better traffic engineering and load balancing.

Disadvantages:

• Scalability: Can become a bottleneck for large or geographically dispersed networks.

• Single Point of Failure: Controller failure can disrupt the entire network.

• Latency: Communication overhead between switches and the central controller can impact performance.

Distributed SDN Control
Distributed SDN control is an architectural approach that distributes the control plane of a Software-Defined Network
(SDN) across multiple controllers, rather than relying on a single centralized controller. Here's a breakdown of its key
aspects:

Key Characteristics:
Multiple Controllers: The network is divided into domains or clusters, each managed by a separate controller.

Decentralized Decision-Making: Controllers collaborate and make decisions independently, based on their local view of
the network and information exchanged with other controllers.

Coordination Mechanisms: Controllers use inter-controller communication protocols (e.g., AMQP, East/Westbound APIs)
to exchange information, synchronize state, and resolve conflicts.

Organization Structures: Controllers can be arranged in flat structures (equal peers), hierarchical
structures (parent-child relationships), or hybrid models.

Components:

• SDN Controllers (A, B, C): Multiple controllers, each responsible for a specific domain or cluster of switches.

• Modules (1 to n): Functional components within each controller, handling tasks like topology discovery, flow
management, and security.

• Inter-controller Messenger (AMQP): A messaging protocol (AMQP) enabling controllers to communicate and
exchange information.

• Core: The central decision-making logic within each controller.

• OpenFlow: Southbound API used by controllers to communicate with switches.

Domains (A, B, C): Network segments managed by the respective controllers.

Working Process:

1. Network Partitioning: The network is divided into domains, with each controller managing a specific domain.

2. Controller-Switch Communication: Controllers interact with switches in their domain using OpenFlow, receiving
network updates and installing flow rules.

3. Inter-Controller Communication: Controllers exchange information and synchronize state using the inter-controller
messenger (AMQP).
4. Distributed Decision-Making: Controllers collaborate to make decisions about traffic routing, resource allocation, and
policy enforcement, considering the global network view.

5. Flow Rule Installation: Controllers install flow rules on switches to guide traffic forwarding, ensuring consistency
across domains.

6. Traffic Flow: Data packets traverse the network, following the flow rules installed by the controllers.

Inter-controller Messenger (AMQP):

Provides a reliable and secure communication channel for controllers to exchange information like:

o Network topology updates
o Flow rule changes
o Security threats
o Resource availability
o Policy changes
Core Decision-Making Logic:

• Analyzes information from modules and the messenger to make informed decisions about:
o Traffic routing across domains
o Load balancing to optimize resource utilization
o Policy conflict resolution
o Failover in case of controller failure

Key Advantages:

• Scalability: Handles large, complex networks by distributing control across multiple controllers.

• Reliability: Localized control reduces impact of controller failures.

• Reduced Latency: Switches communicate with closer controllers, improving response times.

• Improved Fault Tolerance: Failure of one controller doesn't disrupt the entire network.

Challenges:

• Controller Coordination: Requires efficient mechanisms for information exchange and synchronization to maintain
consistency.

• Conflict Resolution: Potential for conflicting policies between controllers.

• Overhead: Communication and coordination among controllers introduce overhead.

(3) i) Discuss the OpenFlow-Based Software-Defined Networks.
ii) Give the structure of a hybrid control environment for a transport network that includes OpenFlow control.
i) OpenFlow-Based Software-Defined Networks (SDNs):
Overview: Software-Defined Networking (SDN) is an approach to networking that uses software-based controllers or
application programming interfaces (APIs) to direct traffic on the network and communicate with the underlying
hardware infrastructure. OpenFlow is a key protocol associated with SDN that enables the communication between the
SDN controller and the networking devices, such as switches and routers.

Components of OpenFlow-Based SDNs:

1. SDN Controller:
• The SDN controller is the brain of the SDN architecture. It acts as a central point of control, making
decisions about where to send traffic based on a global view of the network.

• It communicates with the OpenFlow-enabled devices using the OpenFlow protocol.

2. OpenFlow Protocol:
• OpenFlow is a standardized protocol that enables communication between the SDN controller and
the forwarding elements (switches and routers) in the network.

• It allows the controller to dynamically modify the behavior of the network devices, defining how
packets should be forwarded through the network.

3. OpenFlow Switches:

• These are network devices (switches or routers) that support the OpenFlow protocol.

• The switches maintain flow tables that contain rules defining how to handle different types of traffic.

4. Flow Table:

• The flow table is a critical component of OpenFlow-enabled switches. It consists of rules (flow
entries) that define how to process packets.

• Each rule specifies matching criteria (e.g., source and destination addresses, protocol type) and
corresponding actions (e.g., forward, drop).
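
A minimal sketch of the match-then-act lookup described above; the match fields, priorities and actions are simplified stand-ins written in plain Python rather than any controller framework's API.

```python
# Each entry: (priority, match fields, action). The highest-priority matching entry wins.
FLOW_TABLE = [
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:2"),
    (100, {"ip_dst": "10.0.0.5"},                "output:3"),
]

def lookup(packet):
    for priority, match, action in sorted(FLOW_TABLE, key=lambda e: -e[0]):
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"   # table miss: punt the packet to the SDN controller

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # output:2
print(lookup({"ip_dst": "10.0.0.9", "tcp_dst": 22}))   # send-to-controller
```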

How OpenFlow-Based SDNs Work:

1. Packet Processing:

• When a packet enters an OpenFlow-enabled switch, the switch looks up the flow table for a
matching rule.

• If a match is found, the switch applies the specified actions. If there's no match, the switch forwards
the packet to the SDN controller for decision-making.

2. Controller Decision:

• The SDN controller receives packet information from switches and makes decisions based on the
global network view.

• It can dynamically adjust flow tables on switches, rerouting traffic or applying new policies in
real time.

3. Centralized Network Intelligence:


• SDN provides a centralized view of the network, allowing for more efficient traffic management,
better resource utilization, and easier implementation of network policies.

4. Flexibility and Programmability:


• OpenFlow's programmability allows network operators to define and implement policies without
making changes to individual network devices.

5. Benefits:

• Faster network provisioning, easier management, and better adaptability to changing network
conditions are among the key benefits of OpenFlow-based SDNs.

ii)
Hybrid Control Environment for a Transport Network with OpenFlow
Control:
Overview: A hybrid control environment in a transport network refers to the integration of traditional network control
mechanisms with SDN/OpenFlow control. This is often done to facilitate a smooth transition from legacy networking to
SDN. Here's the structure of a hybrid control environment for a transport network:

Components:

1. Legacy Network Elements:


• These include traditional switches, routers, and devices that operate using conventional networking
protocols.

• They may not be OpenFlow-enabled and continue to operate based on traditional networking
principles.

2. OpenFlow-Enabled Devices:

• These are network elements that support the OpenFlow protocol.

• They have flow tables and can be controlled by the SDN controller.

3. SDN Controller:
• The SDN controller in a hybrid environment manages both legacy network elements and
OpenFlow-enabled devices.

• It has the capability to communicate using traditional networking protocols as well as OpenFlow.

4. Translation Layer:

• A translation layer may be required to facilitate communication between the SDN controller and
legacy network elements.

• This layer translates SDN commands into commands compatible with traditional networking
protocols.

5. Global Network View:

• The SDN controller maintains a global view of the entire network, including both legacy and
SDN-enabled devices.
• It makes decisions based on the overall network state, ensuring that policies are consistently applied
across the entire infrastructure.

Operation:

1. Policy Application:

• The SDN controller defines and enforces network policies across both legacy and SDN-enabled
devices.

• Policies can include traffic prioritization, load balancing, and security measures.

2. Dynamic Adaptation:

• The SDN controller can dynamically adapt to changes in the network, adjusting both
OpenFlow-enabled and traditional devices as needed.

• For example, it can reroute traffic in response to congestion or failures.

3. Gradual Migration:

• A hybrid control environment allows for a gradual migration from traditional networking to SDN.

• New OpenFlow-enabled devices can be added to the network, and legacy devices can be replaced
over time.

4. Coexistence:
Legacy and OpenFlow-controlled devices coexist within the same network, allowing for
interoperability during the transition period.

Benefits:

1. Smooth Transition:

The hybrid approach enables a gradual transition, avoiding the need for a complete network overhaul.

2. Compatibility:

Existing legacy infrastructure can continue to operate alongside new OpenFlow-enabled devices.
3. Flexibility:
Network operators can leverage the benefits of SDN in specific areas without disrupting the entire
network.

4. Optimized Resource Utilization:

• The SDN controller can optimize the use of network resources across both legacy and SDN environments based on
the global network view.

(4)Explain the network overlays in detail.

What are network overlays?

• A network overlay is a virtual network that is created on top of an existing physical network infrastructure.

• It's a software-defined abstraction that enables you to create multiple virtual networks with their own unique
characteristics, even if the underlying physical network is shared.

• It's like having multiple separate highways built on top of the same physical roads, each with its own rules and traffic
patterns.
How do network overlays work?

1. Encapsulation: Packets are encapsulated with additional headers that contain information about the overlay
network, such as virtual addresses and routing information.

2. Tunneling: These encapsulated packets are then tunneled through the physical network infrastructure, allowing
them to traverse different physical paths and devices while still maintaining the logical structure of the overlay
network.

3. Decapsulation: When the encapsulated packets reach their destination, the overlay headers are removed, and the
original packets are delivered to the intended recipient.
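
A toy illustration of the encapsulate/tunnel/decapsulate cycle; the header layout here is a deliberately simplified stand-in (real overlays such as VXLAN use a fixed binary header plus outer UDP/IP headers), so treat it only as a picture of the mechanism.

```python
import json

def encapsulate(original_frame: bytes, vni: int, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap an original frame in a simplified overlay header (illustrative format only)."""
    outer = json.dumps({"vni": vni, "src": tunnel_src, "dst": tunnel_dst}).encode()
    return len(outer).to_bytes(2, "big") + outer + original_frame

def decapsulate(tunneled: bytes):
    """Strip the overlay header and recover the original frame."""
    hdr_len = int.from_bytes(tunneled[:2], "big")
    header = json.loads(tunneled[2:2 + hdr_len])
    return header, tunneled[2 + hdr_len:]

packet = encapsulate(b"original ethernet frame", vni=5001,
                     tunnel_src="10.1.1.1", tunnel_dst="10.2.2.2")
header, frame = decapsulate(packet)
print(header)   # overlay routing information carried across the physical network
print(frame)    # b'original ethernet frame' delivered unchanged at the far end
```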
Network overlays, as we established, are like virtual highways built on top of existing physical infrastructure. They offer
incredible flexibility and control over your network, allowing you to create isolated, dynamic, and scalable virtual
networks tailored to specific needs. But with this flexibility comes a diverse range of types, each with its own strengths
and weaknesses. Let's delve into the most common overlay network types and see how they're utilized:

1. Layer 2 Overlays:

These overlays operate at the data link layer (Layer 2) of the OSI model, primarily concerned with physical addressing and
media access control (MAC). Imagine them as dedicated lanes on a highway, each with its own set of traffic rules and
regulations.

Virtual LANs (VLANs): The most familiar type, VLANs segment a physical network into smaller broadcast domains based
on department, function, or security requirements. Think of them as dividing a highway into separate lanes for trucks,
cars, and motorcycles.

Virtual Extensible LANs (VXLANs): Extending VLAN capabilities across physical boundaries, VXLANs encapsulate Layer 2
frames within Layer 3 packets, enabling Layer 2 connectivity over any IP network. Imagine extending a dedicated
highway lane through a tunnel to another city.

Generic Routing Encapsulation (GRE): A more generic tunneling protocol, GRE can encapsulate any type of data, not just
Layer 2 frames, making it suitable for various applications beyond traditional networking. Think of it as a tunnel that can
accommodate any type of vehicle, not just cars.
2. Layer 3 Overlays:

Operating at the network layer (Layer 3) of the OSI model, these overlays deal with logical addressing and routing.
Picture them as independent highways with their own unique signage and navigation systems.
IP Virtual Private Networks (VPNs): Creating secure tunnels over public networks like the internet, IP VPNs connect
geographically dispersed networks as if they were directly connected. Imagine having a private, secure highway
bypassing the bustling public roads.

Multiprotocol Label Switching (MPLS) VPNs: Offering high performance and scalability, MPLS VPNs utilize labels
attached to packets for efficient routing within service provider networks. Think of them as express lanes on a highway
with designated priority for certain types of vehicles.

Overlay Transport Virtualization (OTV): Designed for data center environments, OTV extends Layer 3 connectivity across
multiple data centers, enabling seamless workload migration and disaster recovery. Imagine seamlessly connecting
multiple highways across different cities.

3. Application-Specific Overlays:

Tailored to specific applications or services, these overlays cater to unique needs beyond traditional networking
functionalities. Think of them as specialized roads optimized for bicycles, trains, or even aircraft.
• Content Delivery Networks (CDNs): Strategically distributing content across geographically dispersed
servers, CDNs deliver content to users with minimal latency and improved performance. Imagine having
strategically placed warehouses along a highway to quickly deliver goods to different regions.

Peer-to-Peer (P2P) networks: Enabling direct communication between devices without a central server, P2P networks
are often used for file sharing and distributed computing. Imagine a network of interconnected side roads allowing
direct travel between towns without needing a central highway.

Benefits of network overlays:


• Flexibility: Create multiple virtual networks with different topologies, security policies, and Quality of
Service (QoS) requirements, without modifying the physical infrastructure.

• Scalability: Easily add or remove nodes and virtual networks to meet changing needs.

• Isolation: Separate traffic for different applications or tenants, enhancing security and performance.

• Resilience: Improve fault tolerance by providing alternative paths for traffic in case of failures in the
physical network.

• Abstraction: Simplify network management by decoupling virtual networks from the physical
infrastructure.

Common use cases:

• Data center virtualization

• Cloud computing

• Software-Defined Networking (SDN)

• Wide Area Network (WAN) optimization

• Network segmentation and isolation

• Security and compliance

• Disaster recovery

Additional considerations:

• Overhead: Encapsulation and tunneling add overhead to network traffic, which can impact performance.

• Management complexity: Overlay networks can add a layer of complexity to network management.

• Security: Overlay networks can introduce new security risks if not properly designed and implemented.

PART C
(3)When the mobile device is attached to the GSM network,
it can be either in ‘idle’ mode as long as there is no
connection or in ‘dedicated’ mode during a voice call or
exchange of signaling information. Explain the GPRS State
model in detail.
General Packet Radio Service (GPRS) is a packet-switched technology used for 2G and
3G cellular communication networks, enabling data transmission alongside voice
communication. The GPRS state model defines different states that a mobile device
can be in during communication over the GPRS network. These states include:
1. Idle State:

Standby Mode (GSM Idle): In this mode, the mobile device is attached
to the GSM network but is not actively involved in any data
communication. It can receive calls and text messages but is not
connected to GPRS. The device is said to be in Packet Idle (Pkt Idle)
mode.

2. Ready State:

Cell Selection State (GSM Ready): When the mobile device is in idle
mode and initiates a data transfer request, it enters the Cell Selection
state. In this state, the device selects a suitable GPRS cell for
communication.

3. Packet Transfer States:

• Packet Idle (Pkt Idle): The mobile device is attached to the GPRS network but is not actively sending or
receiving data packets. It's waiting for a data transfer request.

• Packet Access (Pkt Access): The device is actively initiating a data transfer request. It sends a Packet
Channel Request to the network, requesting resources for packet data transfer.

• Packet Transfer (Pkt Transfer): After successfully acquiring resources, the device enters the Packet
Transfer state, where actual data transfer occurs.

4. TBF (Temporary Block Flow) Establishment and Release:

• TBF Establishment: When a mobile device initiates data transfer, a Temporary Block Flow is
established. It involves the allocation of radio resources and the setup of a logical connection
for data transmission.

• TBF Release: After completing the data transfer or when there is no further need for data
communication, the TBF is released, freeing up the allocated resources.

5. Network Mode Modification:

• GSM Cell Update: If the mobile device moves to a new GSM cell while
in an active GPRS session, it performs a GSM cell update to inform the
network of its new location.

• Routing Area Update: In some cases, a GPRS mobile device may perform a Routing Area Update
to update the network about its current location within the GPRS network.
Paging Mode:

In this state, the network can initiate a connection with the mobile device by sending a
paging message.

The device may transition from Idle State to Paging Mode when the network needs to
establish a GPRS connection.

Cell Reselection State:


When the mobile device needs to change its serving cell within the same location area,
it enters the Cell Reselection State.

This state allows the device to switch to a different cell for better signal quality or
other network-related reasons.

1. PDP Context Activation:
Before data transfer, the device activates a PDP context, defining data transfer parameters.

2. Routing Area Update:
Movement between routing areas triggers a Routing Area Update for optimized location tracking.

3. Temporary Block Flow:
Temporarily pausing data transmission occurs in the Temporary Block Flow state.

4. Suspend State:
Active data sessions can be temporarily suspended, allowing for interruptions like incoming voice calls.

5. Reconnect Mode:
Resuming a suspended data session involves transitioning to the Reconnect Mode.

6. Release State:
After completing a data session, the device enters the Release State, informing the network about
resource deallocation.
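
A simplified sketch of the GPRS mobility-management states (IDLE, STANDBY, READY) and the events that move a device between them; the event names are abbreviated and many signalling cases are omitted, so this is only an illustration of the state model.

```python
# Simplified GPRS mobility-management state machine (IDLE / STANDBY / READY).
TRANSITIONS = {
    ("IDLE",    "gprs_attach"):          "READY",
    ("READY",   "ready_timer_expiry"):   "STANDBY",
    ("READY",   "gprs_detach"):          "IDLE",
    ("STANDBY", "send_or_page"):         "READY",   # data to send, or answering a page
    ("STANDBY", "standby_timer_expiry"): "IDLE",
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)  # unknown events leave the state unchanged

state = "IDLE"
for event in ("gprs_attach", "ready_timer_expiry", "send_or_page", "gprs_detach"):
    state = next_state(state, event)
    print(event, "->", state)
```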
Internet-based voice service cannot directly interact with the
transport network and it is hence difficult to prefer IP packets that
contain voice. Suggest methods for Voice services.
To enable internet-based voice services to interact with the transport
network more effectively and prioritize IP packets containing voice,
several methods and protocols can be implemented. Here are some
suggestions:
1. Quality of Service (QoS):
 Implement QoS mechanisms to prioritize voice traffic over other types of
data. This can be achieved by configuring routers and switches to give
preferential treatment to voice packets based on their priority markings.
2. Differentiated Services Code Point (DSCP):
 Use DSCP markings to classify and prioritize IP packets. VoIP packets
can be marked with a higher DSCP value, allowing network devices to
prioritize their transmission and ensure low latency for voice traffic
(a small socket-marking sketch follows this list).
3. Resource Reservation Protocol (RSVP):
 RSVP can be used to reserve resources along the network path, ensuring
that there is sufficient bandwidth and minimizing packet loss for voice
traffic. This can be particularly useful in scenarios where strict quality
guarantees are required.
4. Traffic Engineering (TE):
 Implement traffic engineering mechanisms to optimize network resource
utilization. This involves dynamically adjusting the routing of voice
traffic based on network conditions to ensure the most efficient and
reliable path.
5. Multiprotocol Label Switching (MPLS):
 MPLS can be employed to create label-switched paths for voice traffic,
offering a more direct and efficient route through the network. MPLS
allows for traffic engineering and can improve the overall performance
of voice services.
6. VoIP-specific Protocols:
 Utilize VoIP-specific protocols like Session Initiation Protocol (SIP) and
Real-Time Transport Protocol (RTP). These protocols are designed for
efficient voice communication and can help in better handling of voice
packets across the network.
7. Traffic Shaping and Policing:
 Implement traffic shaping to control the rate of outgoing traffic, ensuring
a smooth and consistent flow for voice packets. Traffic policing can be
used to enforce traffic profiles and discard packets that exceed defined
limits.
8. Packet Fragmentation:
 Adjust the Maximum Transmission Unit (MTU) to reduce packet sizes
for voice traffic. This can help in avoiding packet fragmentation issues
and improve the overall efficiency of voice packet transmission.
9. Network Monitoring and Management:
 Regularly monitor network performance using tools like SNMP (Simple
Network Management Protocol) to identify and address potential
bottlenecks or issues affecting voice services.
10. Use of Virtual LANs (VLANs):
 Implement VLANs to logically segregate voice traffic from other types
of data, providing a dedicated network path for voice communication.
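
As referenced in point 2, here is a minimal sketch of marking outgoing voice packets with the Expedited Forwarding (EF) DSCP on a UDP socket; the destination address is a placeholder, and IP_TOS handling can differ across operating systems and may require suitable privileges.

```python
import socket

DSCP_EF = 46                 # Expedited Forwarding, the usual class for voice
TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper six bits of the TOS byte -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Works on Linux; other platforms may expose different socket options.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

sock.sendto(b"RTP voice payload...", ("192.0.2.10", 5004))  # placeholder destination
```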
By combining these methods and protocols, you can enhance the
performance and reliability of internet-based voice services over the
transport network. The specific approach may depend on the network
infrastructure, requirements, and available technologies in your
environment.
