
DISTRIBUTED SYSTEM

RTU SOLVED PAPER

2023

ER SAHIL KA GYAN
PART A: 10 QUESTIONS = 20 MARKS

Prob.1: What are the goals of a distributed system?
The main goals of a distributed system are:

1. Transparency
○ Hides complexities from users (location, access, migration, replication, failure).
○ Example: Accessing Google Docs looks the same regardless of where it's stored.
2. Openness
○ Should be built using standardized protocols/interfaces so different systems can interoperate.
3. Scalability
○ Must handle growing workloads or users without performance degradation.
○ Example: E-commerce platforms scale up during festive seasons.
4. Fault Tolerance
○ System should continue operating even if parts fail.
○ Example: If one server fails, another should take over (failover).
5. Resource Sharing
○ Allows shared access to files, printers, databases across the network.

Prob.2: State the difference between Distributed Operating System and Network
Operating System.

Feature | Distributed OS | Network OS
Integration | Tightly integrated | Loosely coupled
Transparency | Provides complete transparency | No transparency; user knows system boundaries
Resource Sharing | Automatic and seamless | Requires manual configuration
Fault Tolerance | Higher; automatic failover possible | Lower; failure in one node affects availability
Example | Amoeba, Plan 9 | Windows Server, UNIX/Linux with NFS
Prob.3: Explain working of RPC (Remote Procedure Call).
RPC allows a program to invoke a procedure on a remote machine as if it were local.
Steps in RPC:

1. Client Stub calls remote function.

2. Marshaling: Arguments are packaged.

3. Message Sent to server over the network.

4. Server Stub receives and unpacks the call.

5. Procedure is executed on server.

6. Result is marshaled and sent back to client.

7. Client Stub unpacks and returns result.

Example: A Java client calls a getUserData() method hosted on a remote Python server using RPC.
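A minimal sketch of this flow using Python's standard xmlrpc library on both ends (the getUserData function, host, and port are illustrative placeholders; the Java client in the example would follow the same steps through its own stub):

# Server side: registers the procedure; the library acts as the server stub
# (steps 4-6: receive, unmarshal, execute, marshal the result).
from xmlrpc.server import SimpleXMLRPCServer

def getUserData(user_id):
    return {"id": user_id, "name": "Alice"}

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(getUserData)
# server.serve_forever()   # run this in one process

# Client side: the proxy object is the client stub (steps 1-3 and 7).
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
# print(proxy.getUserData(42))   # looks like a local call; RPC happens underneath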
Prob.4: How is distributed file system different from centralized file
system?

Aspect | Centralized File System (CFS) | Distributed File System (DFS)
Location | Files stored on a single server | Files distributed across multiple nodes
Scalability | Limited | High scalability
Fault Tolerance | Low (single point of failure) | High (redundancy, replication)
Access | Centralized control | Transparent, remote access
Example | Windows File Sharing | Hadoop HDFS, NFS, Google File System

Prob.5: Explain concept of logical clocks.

Logical clocks are used to order events in a distributed system where physical clocks may not be synchronized.

● Lamport’s Logical Clock:

○ Every event increases a counter.

○ Messages carry timestamps.

○ Ensures "happened-before" relation (a → b ⇒ C(a) < C(b)).

● Vector Clocks:

○ Each node maintains a vector of counters.

○ Captures causality and concurrent events.

Use Case: Helps in maintaining event order in distributed logging, database transactions, etc.
Prob.6: How concurrency is handled in distributed system?

Concurrency ensures that multiple processes or users can safely access shared resources.

● Techniques Used:

○ Locks and Semaphores

○ Timestamp Ordering

○ Distributed Transactions

○ Two-Phase Locking (2PL)


● Goals:
○ Maintain consistency.
○ Avoid race conditions.
○ Ensure atomicity across multiple nodes.

Example: In distributed banking systems, concurrency control ensures two users don’t overdraw the same account simultaneously.

Prob.7: What is Non-Uniform Memory Access (NUMA) model?

NUMA is a memory design where memory access time depends on memory location relative to the processor.

● Characteristics:

○ Each CPU has local memory (faster access).

○ Can access remote memory from other CPUs (slower access).

○ Appears as single address space to software.

● Advantages:

○ Better performance for multithreaded and parallel applications.

○ Reduces bottleneck of shared memory.

Example: High-performance servers with multiple processors using local RAM per CPU.

Prob.8: Explain distributed mutual exclusion.

Distributed mutual exclusion ensures only one process in a distributed system can enter its critical section at a time.

● Techniques:

○ Token-based: Only the current token holder may enter the critical section (e.g., Suzuki-Kasami broadcast algorithm).

○ Permission-based: A process requests permission from the other processes before entering (e.g., Ricart-Agrawala).

● Requirements:

○ Mutual Exclusion

○ Freedom from Deadlock

○ Fairness (no starvation)

Use Case: Ensures consistent updates to shared resources like a file or database record.

Prob.9: Explain concept of faults in distributed agreement.

Faults in distributed agreement affect the reliability and correctness of consensus.

● Types of Faults:

○ Crash Fault – Node stops working.

○ Omission Fault – Messages are lost.

○ Timing Fault – Delays or incorrect timing.

○ Byzantine Fault – Node behaves maliciously or unpredictably.

● Handling:

○ Algorithms like Byzantine Fault Tolerance (BFT) or Paxos help in achieving agreement despite faults.

Example: Byzantine generals problem – ensuring all loyal generals agree on the same plan, even with traitors.
Prob.10: How data is handled in distributed databases?

Distributed databases manage data across multiple nodes or sites while maintaining consistency and performance.

● Techniques Used:

○ Partitioning: Data split into fragments.

○ Replication: Copies of data on multiple sites.

○ Two-Phase Commit (2PC): For atomic distributed transactions.

○ Concurrency Control: Ensures isolation.

○ Consistency Models: Strong, eventual, causal consistency.

Example: A shopping app stores inventory data on servers in different regions to reduce latency and handle local demand.

PART B: 5/7 QUESTIONS = 20 MARKS

✅ Prob.11: Explain how microkernels can be used to organize an operating
system in a client-server fashion.

A microkernel is a minimalistic operating system kernel that provides only essential services such as communication between processes, basic
scheduling, and low-level hardware access. All other services like device drivers, file systems, and networking are moved to user space and
implemented as servers, resulting in a client-server architecture.

🔷 Key Features of Microkernel-based Client-Server OS:

1. Modularity

● The OS is decomposed into small, isolated modules.

● Each module (like file system, device driver) runs as a user-level server.

● Clients (applications or system utilities) interact with these services via IPC (Inter-Process Communication).

2. Client-Server Communication

● Clients send requests to OS services through message passing.


● Servers process the request and send the result back.
Example: A text editor (client) requests file saving → Message sent to File System Server → Action performed.

3. Scalability and Flexibility

● Servers can be independently loaded, unloaded, or updated.


● Facilitates tailoring the OS for specific needs like embedded systems, real-time OS, etc.

4. Fault Isolation

● Since each component runs in its own address space, a crash in one server doesn’t affect others.
● Faulty components can be restarted without rebooting the entire OS.

5. Enhanced Security

● Servers and clients are isolated, reducing the attack surface.


● Privileges are minimized; only the microkernel operates in kernel mode, improving system integrity.

6. Customization and Extensibility

● Developers can modify or replace individual components easily.


● Enables the creation of specialized OS for different applications (e.g., IoT devices, phones).

✅ Prob.12: Explain transport-level communication services for building
distributed applications.
Transport-level communication is crucial in distributed applications to enable reliable and efficient data exchange across networks. It acts
as a bridge between application-level protocols and lower-level networking.

🔷 Key Services Offered at the Transport Layer:

1. Connection Establishment & Termination

● Establishes a logical communication link between sender and receiver.


● Terminates connection after successful data exchange.
● Protocols: TCP (connection-oriented), UDP (connectionless).

2. Reliability

● Ensures data arrives without errors, in order, and without duplication.


● Mechanisms:
○ Acknowledgements
○ Retransmission on timeout
○ Error detection via checksums

3. Flow Control

● Prevents the sender from overwhelming the receiver.
● TCP uses a sliding window mechanism to manage data flow.

4. Congestion Control

● Detects and reacts to network congestion.
● TCP uses algorithms like:
○ Slow Start
○ Congestion Avoidance
○ Fast Retransmit

5. Multiplexing and Demultiplexing

● Allows multiple applications to use the network simultaneously.
● Uses port numbers to route data to the correct application.

6. Quality of Service (QoS)

● Prioritizes specific data streams (e.g., video calls) based on:
○ Bandwidth
○ Delay
○ Reliability

7. Security Features

● Includes:
○ TLS/SSL for encryption
○ Authentication to verify sender identity
○ Integrity checks to prevent tampering

📝 Conclusion:

Transport-level services form the backbone of reliable distributed communication, offering error handling, connection management, security, and performance optimization. These features enable developers to build robust, scalable, and secure distributed applications.
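To make the connection-oriented service concrete, here is a minimal sketch of a TCP exchange with Python's socket API (host, port, and payload are illustrative assumptions; a server listening on that port is presumed):

import socket

HOST, PORT = "localhost", 9000   # illustrative endpoint; a TCP server must be listening here

def tcp_request(payload: bytes) -> bytes:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))      # connection establishment (three-way handshake)
        s.sendall(payload)           # reliable, ordered, checksummed delivery handled by TCP
        reply = s.recv(1024)         # blocks until data arrives (or the peer closes)
    return reply                     # connection released when the with-block exits

# reply = tcp_request(b"hello")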

✅ Prob.13: What is an Object Adapter?

An Object Adapter is a middleware component in distributed object systems (like CORBA) that helps bridge the gap between the client request
and the actual server object (servant) that processes the request. It handles object life cycle, identity mapping, request dispatching, and other
object management tasks.

🔷 Purpose of Object Adapter:

● Acts as a translator between the Object Request Broker (ORB) and the servant object.

● Supports various policies like persistence, activation, and servant management.

● Allows reuse of server logic across different client implementations.

🔷 Responsibilities of an Object Adapter:


1. Management of Servants

● Allows servers to register, activate, and deactivate servant objects.


● Controls how many servants are active at a time and maps them to object references.

2. Object Lifecycle Management

● Maintains state of objects (active, deactivated, destroyed).


● Manages the activation and destruction of objects.

3. Object Reference Generation & Interpretation

● Generates unique object references that clients can use to identify server-side objects.
● Parses object references received in client requests.

4. Mapping Object References to Servants

● Matches incoming request’s object ID to the appropriate servant.


● Enables method invocation on the correct object instance.

5. Method Invocation

● Demarshals client request data (binary) into method parameters.


● Calls the required method on the servant object.
● Returns the result to the ORB for transmission back to client.

🔶 Example (CORBA System):

● Client sends request to getAccountDetails().

● Object adapter interprets object reference → Locates the servant → Calls method → Sends result back.
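The register → map → dispatch role can be sketched in a few lines of Python (this is not the CORBA POA API; the class and method names are hypothetical and only illustrate the idea):

class AccountServant:
    def getAccountDetails(self, account_id):
        return {"id": account_id, "balance": 100.0}

class ObjectAdapter:
    def __init__(self):
        self.active_object_map = {}                   # object ID -> servant

    def register(self, object_id, servant):           # servant registration/activation
        self.active_object_map[object_id] = servant

    def dispatch(self, object_id, method, *args):     # called with an unmarshaled request
        servant = self.active_object_map[object_id]   # map object reference to servant
        return getattr(servant, method)(*args)        # invoke and return result to the ORB

adapter = ObjectAdapter()
adapter.register("Account-1", AccountServant())
# adapter.dispatch("Account-1", "getAccountDetails", 42)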

📝 Conclusion:

The Object Adapter in a distributed object system plays a crucial role in request processing by linking client requests to server objects (servants).
It enhances reusability, flexibility, and modularity, and supports transparent communication in systems like CORBA.

Prob.14: Does using time stamping for concurrency control ensure serializability?
Discuss.

Yes, timestamp-based concurrency control ensures conflict serializability by executing transactions in an order consistent with their
timestamps. Every transaction is assigned a unique timestamp when it enters the system, and operations are scheduled based on these
timestamps to avoid conflicts and maintain consistency.

✦ Key Terms:

● TS(Ti) → Timestamp of transaction Ti

● W-timestamp(X) → Latest write timestamp on data item X

● R-timestamp(X) → Latest read timestamp on data item X


✦ Timestamp Ordering Protocol:

➤ If Ti issues a read(X) operation:
● If TS(Ti) < W-timestamp(X) → Reject the operation (Ti is too late to read).
● If TS(Ti) ≥ W-timestamp(X) → Allow the read and update R-timestamp(X) to max(R-timestamp(X), TS(Ti)).

➤ If Ti issues a write(X) operation:
● If TS(Ti) < R-timestamp(X) → Reject (Ti is too late to write).
● If TS(Ti) < W-timestamp(X) → Reject and roll back Ti.
● Else → Perform the write and update W-timestamp(X).

✦ Advantages:

1. ✅ Ensures Serializability: Transactions are executed in timestamp order, ensuring conflict-serializable schedules.
2. ✅ Deadlock-Free: No locking mechanism is used, so deadlocks cannot occur.
3. ✅ Consistency: The database remains in a consistent state after transactions.

✦ Disadvantages:

1. ❌ High Abort Rate: Frequent transaction rollbacks in high-contention scenarios.
2. ❌ Poor for Long Transactions: Long-running transactions (with older timestamps) are prone to repeated rejection and restart.
3. ❌ Not Ideal for Real-Time Systems: Frequent aborts and restarts can degrade performance in interactive applications.
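A minimal sketch of the read/write checks described above (illustrative only; a real scheduler would also roll back and restart a rejected transaction with a new timestamp):

class DataItem:
    def __init__(self):
        self.r_ts = 0   # R-timestamp(X): largest TS of any transaction that read X
        self.w_ts = 0   # W-timestamp(X): largest TS of any transaction that wrote X

def read(item, ts):
    if ts < item.w_ts:                       # Ti is too late to read this version
        return "reject"
    item.r_ts = max(item.r_ts, ts)
    return "read allowed"

def write(item, ts):
    if ts < item.r_ts or ts < item.w_ts:     # a younger transaction already read/wrote X
        return "reject"
    item.w_ts = ts
    return "write allowed"

x = DataItem()
print(read(x, ts=5))     # allowed; R-timestamp(X) becomes 5
print(write(x, ts=3))    # rejected: a transaction with TS 5 has already read X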

Prob.15: Discuss connectionless communication between client and server using
sockets.

Connectionless communication enables client-server data exchange without establishing a persistent connection. It is typically implemented
using the User Datagram Protocol (UDP) and is suitable for scenarios where low overhead, speed, or broadcasting is prioritized over
guaranteed delivery.

✦ Key Features of Connectionless Communication (UDP):

🔧 Socket Creation

○ Both client and server create UDP sockets using socket(AF_INET, SOCK_DGRAM, 0)

📡 No Connection Setup (No Handshake)

○ Unlike TCP, no three-way handshake is required

○ Data can be sent/received immediately

📦 Stateless Communication

● Each message (datagram) is self-contained.
● No memory of previous packets or client state is maintained.

⚠ No Acknowledgment Mechanism

● Delivery is not guaranteed.
● The application must handle lost or out-of-order packets itself.

📉 Possibility of Packet Loss

● Since UDP is unreliable, some packets may be lost or arrive out of order.
● Suitable for applications that can tolerate minor loss (e.g., VoIP, online games).

📢 Broadcast and Multicast Support

● Broadcast: send to all devices on the network.
● Multicast: send to a group of subscribed devices.

⚡ Lightweight Protocol

● Low overhead makes it suitable for real-time or low-latency applications.
● No built-in congestion control, ordering, or reliability mechanisms.

✦ Example Use Cases:

● DNS queries
● VoIP applications (e.g., Zoom, Skype)
● Online multiplayer games
● Video streaming services (non-critical updates)
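A minimal sketch of a connectionless exchange with Python's datagram sockets (host, port, and payload are illustrative; the timeout stands in for the application-level loss handling mentioned above):

import socket

HOST, PORT = "localhost", 9999

def udp_server_once():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((HOST, PORT))
        data, addr = s.recvfrom(1024)        # each datagram is self-contained; no accept()
        s.sendto(b"ack: " + data, addr)      # reply is best-effort, not guaranteed

def udp_client():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(1.0)                    # the application handles possible loss itself
        s.sendto(b"hello", (HOST, PORT))     # no handshake; send immediately
        try:
            reply, _ = s.recvfrom(1024)
        except socket.timeout:
            reply = None                     # request or reply datagram was lost
    return reply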
Prob. 16: How is the state of a distributed system recorded? Explain with a suitable diagram.

✅ Definition: Distributed System State

A distributed system consists of processes running on multiple physical machines, communicating via message-passing. Since there's no
shared memory or global clock, determining a global state (i.e., the combined state of all processes and channels) is non-trivial.

✅ Problem in State Recording

● A process can record its local state, but messages in transit (not yet received) may be missed.
● This leads to inconsistent or incorrect global state.

✅ Solution: Chandy-Lamport Snapshot Algorithm

Chandy and Lamport proposed a non-intrusive snapshot algorithm to record a consistent global state.

✅ Assumptions of the Algorithm

1. No shared memory or global clock.


2. FIFO, unidirectional communication channels.
3. Finite number of processes and channels.
4. There’s a communication path between all processes.
✅ Terminology

● Marker: A special message used to trigger state recording.


● Local State (LS): State of a single process.
● Channel State (SC): Messages in transit on a channel.

✅ Algorithm Description

🔵 Marker Sending Rule (at initiator P)

● Record own local state.


● Send a marker on all outgoing channels before sending other messages.

🔴 Marker Receiving Rule (at receiver Q)

● If Q has not recorded its state:


○ Record local state.
○ Record incoming channel as empty.
○ Send marker on all outgoing channels.
● If Q has already recorded its state:
○ Record messages received on that channel after state recorded but before marker arrived.
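A compact sketch of these two rules at a single process (illustrative only; channel transport, snapshot collection, and the application's normal message handling are omitted):

MARKER = "MARKER"

class SnapshotProcess:
    def __init__(self, pid, incoming_ids, outgoing_channels):
        self.pid = pid
        self.incoming_ids = incoming_ids       # ids of incoming FIFO channels
        self.outgoing = outgoing_channels      # objects with a .send(msg) method
        self.local_state = None                # LS: recorded local state
        self.chan_state = {}                   # SC: channel id -> in-transit messages
        self.recording = set()                 # incoming channels still being recorded

    def take_snapshot(self):                   # marker sending rule
        self.local_state = {"pid": self.pid}   # record own state (placeholder value)
        self.recording = set(self.incoming_ids)
        for ch in self.outgoing:
            ch.send(MARKER)                    # marker goes out before any later message

    def receive(self, chan_id, msg):           # marker receiving rule
        if msg == MARKER:
            if self.local_state is None:       # first marker seen: record state now
                self.take_snapshot()
                self.chan_state[chan_id] = []  # this channel is recorded as empty
            self.recording.discard(chan_id)    # stop recording this channel
        elif self.local_state is not None and chan_id in self.recording:
            self.chan_state.setdefault(chan_id, []).append(msg)  # message was in transit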

✅ Purpose of Recording Global State

● 🔹 Checkpointing
● 🔹 Deadlock Detection
● 🔹 Garbage Collection
● 🔹 Debugging and Recovery
✅ Definition of Consistent Global State

A global state is consistent if: Every message recorded as received is also recorded as sent.

📌 Formal Condition:
For any message mᵢⱼ:

send(mᵢⱼ) ∉ LSᵢ ⇔ mᵢⱼ ∉ SCᵢⱼ ∧ rec(mᵢⱼ) ∉ LSⱼ

✅ Diagram: Chandy-Lamport Algorithm

Timeline →

P₁: e₁₁ ●──m₁₂─────▶ LS¹₁

P₂: ◀────m₂₁──● e₂₂ LS²₂

P₃: e₃₃ ●──▶ LS³₃

GS₁ = {LS¹₁, LS²₂, LS³₃} → ❌ Inconsistent: m₁₂ received by P₂, but not sent by P₁.

GS₂ = {LS²₁, LS³₂, LS⁴₃} → ✅ Consistent: All channels either empty or have pending (not-yet-received) messages.

✅ Conclusion

Chandy-Lamport snapshot algorithm captures consistent global state without halting or disrupting the system. It's a foundational technique in distributed computing for
reliable system monitoring and recovery.
Prob. 17: Explain distributed shared memory (DSM) with the help of a diagram.

✅ Definition: Distributed Shared Memory (DSM)

DSM provides the abstraction of shared memory across multiple machines without physically sharing memory.

✔ Applications behave as if they are accessing a single shared memory, even though data is distributed across nodes.

✅ DSM Architecture Diagram

Process 1  ←→  Process 2  ←→  ...  ←→  Process n
    |              |                      |
  Memory         Memory                Memory
  Manager        Manager               Manager
      \             |                    /
       ----------------------------------
       |     Shared Virtual Memory      |
       ----------------------------------

✅ How It Works

● Each node runs a memory manager.
● The managers coordinate to move data across nodes.
● A shared virtual memory space is maintained across all nodes.

✅ Types of DSM

1. On-Chip Memory
○ Data on CPU chip.
○ Fast access, expensive, and complex.
2. Bus-based Multiprocessors
○ Shared bus connects memory and CPUs.
○ Cache memory used to minimize traffic.
3. Ring-based Multiprocessors
○ Token ring topology.
○ Shared memory emulated through token-passing.

✅ Advantages of DSM

1. Simplified Abstraction
○ Programmers write as if using shared memory.
2. Easier Portability
○ Natural migration from sequential to distributed systems.
3. Locality of Data
○ Nearby data fetched together for efficiency.
4. On-demand Data Movement
○ Reduces unnecessary communication overhead.
5. Larger Virtual Memory Space
○ Combined memory of all nodes acts as a single space.

PART C: 3/5 QUESTIONS = 30 MARKS

Prob. 18: Integration of Two CORBA Systems with Independent Naming
Services

CORBA (Common Object Request Broker Architecture) is a standard defined by the OMG that enables objects to communicate across networks regardless of programming language, platform, or operating system.

Integration Strategy:

To integrate two CORBA systems, each with its own Naming Service, follow the steps below:

1. Naming Service Interoperability


Ensure both systems use interoperable naming services like OMG's CosNaming service. This supports cross-system object referencing.

2. ORB Configuration
Configure each system’s ORB (Object Request Broker) to recognize and reference both local and remote naming services.
○ Include necessary details like hostnames, ports, and protocols.

3. Object Registration
Register critical objects in each system's naming service using unique name bindings

4. Cross-System Object Discovery
Enable each system to query and resolve names from the remote naming context.
○ Use context bindings or federated naming trees.

5. Marshalling & Unmarshalling
Ensure ORBs can serialize (marshal) and deserialize (unmarshal) object references for communication across systems.

6. Security & Access Control
Integrate authentication, authorization, and encryption mechanisms to secure inter-system communication.

7. Error Handling & Fault Tolerance
Use retry mechanisms, exception handling, and fallback procedures to handle:
○ Communication failures
○ Inconsistent references

8. Testing & Validation
Conduct thorough testing across both systems for:
○ Object discovery
○ Remote method invocation
○ Error resilience

✅ Result: Objects in one CORBA system can be discovered and used by clients in another system.

Prob. 19

(a) Side Effects in Coda’s RPC2 System

The RPC2 system in the Coda Distributed File System supports remote procedure calls with potential side effects due to the distributed
nature of execution.

Common Side Effects:

1. Network Latency & Failures


○ RPCs can fail due to packet loss, congestion, or downtime.
2. Partial Execution
○ Crashes or disconnections during RPC may result in incomplete operation execution.
3. Out-of-Order Execution
○ Messages may arrive out of sequence, causing race conditions.
4. Non-Idempotent Operations
○ Re-executing an RPC can cause inconsistent state changes (e.g., duplicate file writes).
5. Transaction Inconsistency
○ If an RPC spans multiple operations, failure in the middle leads to inconsistent state.
6. Concurrency Control Issues
○ Multiple concurrent RPCs may lead to:
■ Race conditions
■ Deadlocks
7. Resource Leaks
○ Unreleased memory or file handles can cause performance degradation.

🎯 Goal: Implement timeout, rollback, retry mechanisms, and strong concurrency control to manage these side effects.
(b) Implementation and Resolution of a Coda File Identifier (FID)

A Coda FID uniquely identifies every file or directory in the system.

Structure of a File Identifier (FID):

Component | Description
VolumeID | Identifies the volume/partition
Vnode | Identifies the specific file/directory within the volume
Unique | Differentiates versions/replicas

Implementation of FID:

1. VolumeID
○ Assigned during volume creation.
○ Uniquely distinguishes each volume.
2. Vnode
○ Maps to an internal metadata structure.
○ Represents a unique file or directory.
3. Unique
○ Distinguishes file replicas or versions.
○ Supports replication and version control.

Resolution of FID:

1. Volume Lookup
○ Use the VolumeID to locate the volume server.
2. Vnode Lookup
○ Identify the file/directory using the vnode number.
3. Object Version Identification
○ Use the Unique ID to resolve the correct replica/version.
4. Final Access
○ Once resolved, access operations like read/write/delete are allowed.

🔍 Example: A client retrieves a file using its FID by performing volume, vnode, and version resolution sequentially.
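A small illustrative sketch of the three-part identifier and its step-wise resolution (the field names, lookup tables, and values below are hypothetical placeholders, not Coda's actual data structures):

from collections import namedtuple

FID = namedtuple("FID", ["volume_id", "vnode", "unique"])

volume_servers = {7: "server-A"}                       # VolumeID -> volume server
vnode_tables = {7: {42: {"name": "notes.txt"}}}        # per-volume vnode metadata

def resolve(fid):
    server = volume_servers[fid.volume_id]             # 1. volume lookup
    metadata = vnode_tables[fid.volume_id][fid.vnode]  # 2. vnode lookup
    version = fid.unique                               # 3. pick the replica/version
    return server, metadata, version                   # 4. ready for read/write/delete

print(resolve(FID(volume_id=7, vnode=42, unique=1)))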

Prob.20: Explain the following strategies used for deadlock handling in distributed
systems: (a) Deadlock Prevention (b) Deadlock Avoidance (c) Deadlock Detection and Recovery

✅ (a) Deadlock Prevention Policies in Distributed Systems

Deadlock prevention ensures that at least one of the four necessary conditions for deadlock does not hold. In distributed systems, two
popular techniques are used:

1. Ordered Resource Request (Global Ordering Method):

● Each resource type is assigned a unique level number.


● A process can only request resources with higher levels than the ones it is currently holding.
● This prevents the circular wait condition.
🔄 Example:

● Let resources have levels 1 to 10.
● A process holds resources R5 and R8.
● It cannot request R7 (lower than 8) unless R8 is released.

❌ If it tries to request R7 while holding R8 → Request denied (R8 > R7)
✅ If it releases R8 and then requests R7 → Request accepted

      R5        R8
      ↓         ↓
   [Process] → Request R10 (✅)
      ↑
   Request R7 (❌ Denied as R8 > R7)
2. Collective Request Policy (All-or-Nothing Request):

● Prevents the Hold and Wait condition.
● A process must request all needed resources at once.
○ If all are available → resources are allocated.
○ If any one is unavailable → the request is denied.

   Process
     ↓
   Request R1, R2, R3
   → If R1 & R2 are available but R3 is unavailable → Request denied ❌

🧠 Key Rule: No partial allocation is allowed.

Case A:
● Process holds R1 and requests R2 → ❌ Denied (it already holds a resource)

Case B:
● Process holds nothing and requests R2 → ✅ Accepted

✅ (b) Deadlock Avoidance

● Involves preemptive decision making: The system only grants resource requests if they do not lead to a
deadlock.

● It ensures that the system is always in a safe state.

🔁 Banker's Algorithm:

A resource allocation algorithm used to check if a safe sequence of process execution exists.

Step-by-step Summary:

1. Input Matrices:
○ Allocation: Currently allocated resources.
○ Max: Maximum resources a process may need.
○ Available: Free resources in the system.
○ Need = Max - Allocation

2. Safe State Conditions:
○ Start with Work = Available.
○ Find a process Pi such that: Finish[i] = false and Need[i] <= Work.
○ If found: Work = Work + Allocation[i]; Finish[i] = true; add Pi to the Safe Sequence.

3. Repeat until all processes finish or no suitable Pi is found.

Example (resource types A, B, C, D), Available: A=1, B=5, C=2, D=0

Process | Allocation (A B C D) | Max (A B C D) | Need = Max - Allocation
P0 | 0 0 1 2 | 0 0 1 2 | 0 0 0 0
P1 | 1 0 0 0 | 1 7 5 0 | 0 7 5 0
P2 | 1 3 5 4 | 2 3 5 6 | 1 0 0 2
P3 | 0 6 3 2 | 0 6 5 2 | 0 0 2 0
P4 | 0 0 1 4 | 0 6 5 6 | 0 6 4 2

✅ Safe Sequence (as per the algorithm):

Step-by-step execution:
→ P0 satisfies the conditions → Work = Work + Allocation of P0
→ P2 satisfies → Work updated
→ P3 satisfies → Work updated
→ P4 satisfies → Work updated
→ P1 satisfies → Work updated

✅ Final Safe Sequence: <P0, P2, P3, P4, P1>
→ All Finish[i] = true ⇒ the system is in a Safe State.
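A short sketch of the safety check applied to the table above (the values are copied from the example; this is only the safety test, not a full resource manager):

def safe_sequence(available, allocation, need):
    work = list(available)                    # Work = Available
    finish = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi finishes, releases its resources
                finish[i] = True
                order.append(f"P{i}")
                progressed = True
        if not progressed:
            return None                       # no safe sequence exists -> unsafe state
    return order

allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
maximum    = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
print(safe_sequence([1,5,2,0], allocation, need))   # ['P0', 'P2', 'P3', 'P4', 'P1']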

✅ (c) Deadlock Detection and Recovery

Deadlock Detection allows the system to detect deadlocks after they occur and then recover from them.

🔍 Detection:

● Similar to the Banker's Algorithm, but allows allocations without checking for a safe state.
● Uses Wait-For Graph (WFG):
A cycle in the graph → Deadlock detected

🔁 Recovery Techniques:

1. Process Termination:

○ Abort all deadlocked processes.


○ Abort one at a time until deadlock cycle breaks.

2. Resource Preemption:

○ Forcefully take resources from some processes.


○ Rollback the preempted processes to a safe state.

3. Checkpoints and Rollbacks:

○ Roll back to a previous checkpoint to recover from deadlock.


○ Requires maintaining consistent state snapshots.
Strategy | Key Idea | Pros | Cons
Prevention | Design the system so the deadlock conditions cannot hold | Simple policies to implement | May lead to low resource utilization
Avoidance | Check before allocation; keep the system dynamically in safe states | Deadlock is avoided | Requires advance knowledge of resource needs
Detection & Recovery | Allow deadlocks and then handle them | Resources utilized efficiently | Overhead of detection & recovery

Prob.21: Explain Lamport's Clock and Vector Clock with the help of suitable example.

Sol. Lamport's Logical Clock:

Lamport's logical clock is a mechanism to order events in a distributed system where no global clock exists.

Key Rules (Lamport's Clock Algorithm):

1. LC1 (Increment Rule): Each process increments its clock before executing an event.

2. LC2 (Send Rule): When a process sends a message, it sends its timestamp (logical clock value) along with the message.

3. LC3 (Receive Rule): When a process receives a message, it sets its clock to max(local clock, timestamp in message) + 1.

Let’s say we have 3 processes: P1, P2, and P3, and a sequence of events:

P1: a → m1 b → m2 c
(to P2) (to P3)

P2: d ← m1 e

P3: f ← m2 g


Process | Event | Time
P1 | a | 1
P1 | Send m1 | 2
P2 | Receive m1 (timestamp = 2), so max(0, 2) + 1 | d = 3
P2 | e | 4
P1 | b | 3
P1 | Send m2 | 4
P3 | Receive m2 (timestamp = 4), so max(0, 4) + 1 | f = 5
P3 | g | 6
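A minimal sketch of the three rules, reproducing the timestamps in the table above (process and event names follow the example):

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                    # LC1: increment before a local event
        self.time += 1
        return self.time

    def send(self):                    # LC2: the timestamp travels with the message
        return self.tick()

    def receive(self, msg_ts):         # LC3: max(local, received) + 1
        self.time = max(self.time, msg_ts) + 1
        return self.time

p1, p2, p3 = LamportClock(), LamportClock(), LamportClock()
p1.tick()                 # a        -> 1
m1 = p1.send()            # send m1  -> 2
print(p2.receive(m1))     # d        -> 3
p2.tick()                 # e        -> 4
p1.tick()                 # b        -> 3
m2 = p1.send()            # send m2  -> 4
print(p3.receive(m2))     # f        -> 5
p3.tick()                 # g        -> 6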

Vector Clock:

Vector clocks solve a limitation in Lamport’s clocks by allowing causal ordering of events (not just "happens-before").

Each process maintains a vector of size n (number of processes).

Vector Clock Rules:

1. VC1 (Increment): Before executing an event, a process increments its own clock.

2. VC2 (Send Rule): On sending a message, a process includes its vector clock.

3. VC3 (Receive Rule): On receiving a message, it takes element-wise max(received clock, local clock) and increments its own clock.

Event | Vector Clock
P1: a | [1, 0, 0]
P1: Send m1 | [2, 0, 0] → to P2
P2: Receive m1 | element-wise max([0,0,0], [2,0,0]) + increment own entry = [2, 1, 0]
P2: e | [2, 2, 0]
P1: Send m2 | [3, 0, 0] → to P3
P3: Receive m2 | element-wise max([0,0,0], [3,0,0]) + increment own entry = [3, 0, 1]
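The same events with vector clocks, as a minimal sketch matching the table above (three processes, indices 0-2):

class VectorClock:
    def __init__(self, n, i):
        self.v = [0] * n               # one counter per process
        self.i = i                     # index of this process

    def tick(self):                    # VC1: increment own entry before an event
        self.v[self.i] += 1
        return list(self.v)

    def send(self):                    # VC2: the whole vector travels with the message
        return self.tick()

    def receive(self, msg_v):          # VC3: element-wise max, then increment own entry
        self.v = [max(a, b) for a, b in zip(self.v, msg_v)]
        self.v[self.i] += 1
        return list(self.v)

p1, p2, p3 = VectorClock(3, 0), VectorClock(3, 1), VectorClock(3, 2)
p1.tick()                 # a        -> [1, 0, 0]
m1 = p1.send()            # send m1  -> [2, 0, 0]
print(p2.receive(m1))     # recv m1  -> [2, 1, 0]
p2.tick()                 # e        -> [2, 2, 0]
m2 = p1.send()            # send m2  -> [3, 0, 0]
print(p3.receive(m2))     # recv m2  -> [3, 0, 1]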


Feature | Lamport Clock | Vector Clock
Type | Scalar | Vector
Tracks Causality | ✗ (partial ordering only) | ✓ (full causality)
Space Complexity | O(1) | O(n)
Use Case | Event ordering | Causal consistency

Prob.22: Explain sender-initiated and receiver-initiated algorithms for dynamic load sharing
and balancing in distributed systems.

Sol. Dynamic Load Sharing:

Dynamic load sharing in distributed systems aims to redistribute processes from overloaded nodes to underloaded nodes to ensure
optimal system performance.

(A) Sender-Initiated Algorithm:

[Overloaded Node] → probe → [Underloaded Node]
                  ← response ←
                  → Transfer Task →

Concept:

● An overloaded node (sender) initiates the process of load sharing.


● It searches for an underloaded node (receiver) and transfers the excess load.

Steps:

1. Overloaded node identifies itself based on threshold.


2. It sends probe messages to other nodes to check their load.
3. Once an underloaded node is found, it offloads some tasks.
Advantages:

● Quick response to overload conditions.

● Efficient when system load is light to moderate.

Disadvantages:

● High overhead in finding underloaded nodes under heavy load.
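A toy sketch of the sender-initiated steps above (the threshold, probe limit, and node table are assumptions standing in for real load metrics and probe messages):

import random

THRESHOLD = 5        # queue length above which a node considers itself overloaded
PROBE_LIMIT = 3      # how many randomly chosen nodes to probe before giving up

def sender_initiated(local_queue, node_loads):
    # node_loads: node name -> current queue length (stand-in for probe replies)
    if len(local_queue) <= THRESHOLD:
        return None                                   # not overloaded; do nothing
    candidates = random.sample(list(node_loads), k=min(PROBE_LIMIT, len(node_loads)))
    for node in candidates:                           # probe a few nodes
        if node_loads[node] < THRESHOLD:              # found an underloaded receiver
            task = local_queue.pop()                  # transfer one task to it
            node_loads[node] += 1
            return node, task
    return None                                       # no receiver found; keep the work

print(sender_initiated(list(range(8)), {"node-B": 2, "node-C": 9}))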


(B) Receiver-Initiated Algorithm:

[Underloaded Node] → request → [Overloaded Node]
                   ← response ←
                   ← Receive Task ←

Concept:

● An underloaded node (receiver) initiates load balancing.
● It requests load from overloaded nodes.

Steps:

1. Underloaded node detects idle state.


2. It sends a request to other nodes to ask for extra load.
3. If overloaded node is found, it sends a process to the underloaded node.
Advantages:

● Effective under heavy system load.

● Low overhead in high-load situations.

Disadvantages:

● Less efficient when most nodes are underloaded (idle scanning).

Feature | Sender-Initiated | Receiver-Initiated
Initiator | Overloaded node | Underloaded node
Best when | System is lightly loaded | System is heavily loaded
Risk | Sender may not find a receiver | Idle receivers waste effort scanning
Example Use Case | Job migration, thread offloading | Task fetching by an idle server

THANK YOU
SUBSCRIBE AND SHARE - ER SAHIL KA GYAN

