
SYNCHRONIZATION

Process Synchronization is the task of coordinating the execution of processes so that no two processes access the same shared data and resources at the same time.
It is especially needed in a multi-process system, where multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
This can lead to inconsistency of shared data: a change made by one process is not necessarily reflected when other processes access the same shared data. To avoid this kind of inconsistency, the processes need to be synchronized with each other.

On the basis of synchronization, processes are categorized as one of the following two types:
• Independent Process : Execution of one process does not affect the
execution of other processes.
• Cooperative Process : Execution of one process affects the execution of
other processes.
How Process Synchronization Works?

For example, suppose process A is changing the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.

Process Synchronization is the procedure involved in preserving the appropriate order of execution of cooperative processes. Various synchronization mechanisms exist to synchronize processes. Processes have to be scheduled so that concurrent access to shared data does not create inconsistencies. Data inconsistency can result in what is called a race condition. A race condition occurs when two or more operations are executed at the same time, are not scheduled in the proper sequence, and do not exit the critical section correctly.
Sections of a Program

• Entry Section: It is the part of the process which decides the entry of a particular process into the critical section.
• Critical Section: This part allows only one process to enter and modify the shared variable.
• Exit Section: The exit section allows the other processes that are waiting in the Entry Section to enter the Critical Section. It also ensures that a process that has finished its execution is removed through this section.
• Remainder Section: All other parts of the code, which are not in the Critical, Entry and Exit Sections, are known as the Remainder Section.
• Race Condition
When more than one process is executing the same code, accessing the same memory, or using a shared variable, there is a possibility that the output or the value of the shared variable is wrong, because the processes race with each other to finish first. This condition is known as a race condition. A race condition is a situation that may occur inside a critical section. It happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
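As a minimal illustration (a sketch assuming POSIX threads; the counter name and iteration count are made up), two threads that increment a shared variable without any synchronization can lose updates, because counter++ is a non-atomic read-modify-write:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                     /* shared variable accessed by both threads */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write: not atomic, so updates can be lost */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the printed value is often smaller because of the race. */
    printf("counter = %ld\n", counter);
    return 0;
}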

• Critical Section Problem
The critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables which need to be synchronized to maintain the consistency of data variables.

• The entry to the critical section is handled by the wait() function, and it
is represented as P().
• The exit from a critical section is controlled by the signal() function,
represented as V().

• Only one process can be executed inside the critical section at a time. Other
processes waiting to execute their critical sections have to wait until the
current process finishes executing its critical section.

Rules for Critical Section


Any solution for the critical section must enforce all three of the following rules:

• Mutual Exclusion: Mutual exclusion acts like a special type of binary semaphore used for controlling access to the shared resource. No more than one process can execute in its critical section at one time.
• Progress: This rule applies when no one is in the critical section and someone wants in. Then the processes not in their remainder section should decide who goes in, within a finite time.
• Bounded Waiting: When a process makes a request to get into its critical section, there is a specific limit on the number of other processes that can get into their critical sections first. So, when the limit is reached, the system must allow the request of the process to get into its critical section.

Any solution to the critical section problem must satisfy three
requirements:

• Mutual Exclusion : If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next, and
the selection can not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that request is
granted.

Solutions for the Critical Section

The critical section plays an important role in process synchronization, so the critical section problem must be solved. Some widely used methods to solve the critical section problem are as follows:

Peterson's Solution

This is a widely used, software-based solution to the critical section problem. Peterson's solution was developed by the computer scientist Gary L. Peterson, which is why it is named so.
With this solution, whenever a process is executing in its critical section, the other process only executes the rest of the code, and vice versa. This method also ensures that only a single process can run in the critical section at a specific time.

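A minimal sketch of the structure of process Pi in Peterson's solution, in C-like form for two processes Pi and Pj, using the shared flag[] and turn variables described below (on modern hardware this plain form also needs atomics or memory barriers, as noted later):

/* Shared between the two processes P0 and P1 */
boolean flag[2] = { FALSE, FALSE };   /* flag[i] = TRUE means Pi wants to enter its critical section */
int turn;                             /* whose turn it is to enter */

/* Structure of process Pi (j denotes the other process) */
do {
    flag[i] = TRUE;                   /* Pi announces its interest */
    turn = j;                         /* politely give the turn to the other process */
    while (flag[j] && turn == j)
        ;                             /* busy wait while Pj is interested and it is Pj's turn */

    /* critical section */

    flag[i] = FALSE;                  /* Pi leaves: Pj may now enter */

    /* remainder section */
} while (TRUE);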
In Peterson’s solution, we have two shared variables:

• boolean flag[i] : initialized to FALSE; initially no one is interested in entering the critical section.
• int turn : the process whose turn it is to enter the critical section.

This solution preserves all three conditions:

• Mutual Exclusion is assured, as at any time only one process can access the critical section.
• Progress is also assured, as a process that is outside the critical section does not block other processes from entering the critical section.
• Bounded Waiting is assured, as every process gets a fair chance to enter the Critical section.

The above shows the structure of process Pi in Peterson's solution.

• Suppose there are N processes (P1, P2, ... PN) and at some point of time every process requires to enter the Critical Section.

• A FLAG[] array of size N is maintained, which is false by default. Whenever a process requires to enter the critical section, it has to set its flag to true. Example: if Pi wants to enter, it will set FLAG[i] = TRUE.

• Another variable, called TURN, indicates the process number whose turn it is to enter the critical section.

• The process that enters the critical section, while exiting, changes TURN to another number from the list of processes that are ready.

Example: if the turn is 3, then P3 enters the Critical Section, and while exiting it sets turn = 4, so P4 breaks out of its wait loop.

Disadvantages of Peterson’s Solution
• It involves busy waiting. (In Peterson’s solution, the code statement “while(flag[j] && turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.) The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion. Busy-waiting, busy-looping or spinning is a technique in which a process repeatedly checks whether a condition is true, such as whether keyboard input or a lock is available. Busy waiting means a process simply spins (does nothing but continue to test its entry condition) while it is waiting to enter its critical section.
• It is limited to 2 processes.
• Peterson’s solution cannot be used as-is on modern CPU architectures, because compilers and out-of-order processors may reorder the memory reads and writes it relies on.
Spin lock

In software engineering, a spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting.
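A minimal sketch of a spinlock built on C11 atomics (the type and function names here are illustrative, not taken from the text):

#include <stdatomic.h>

typedef struct {
    atomic_flag held;                         /* clear = free, set = held */
} spinlock_t;

spinlock_t lock = { ATOMIC_FLAG_INIT };       /* shared lock, initially free */

void spin_lock(spinlock_t *l) {
    /* Atomically set the flag; if it was already set, somebody else holds the lock, so spin. */
    while (atomic_flag_test_and_set(&l->held))
        ;                                     /* busy waiting: burns CPU cycles until the lock is free */
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear(&l->held);              /* release, letting one spinning thread acquire the lock */
}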

Synchronization Hardware

Sometimes the problems of the Critical Section are also resolved by hardware. Some operating systems offer a lock functionality where a process acquires a lock when entering the Critical Section and releases the lock after leaving it.

So when another process tries to enter the critical section, it will not be able to enter, as the section is locked. It can do so only once the section is free, by acquiring the lock itself.

• Mutex Locks
As the synchronization hardware solution is not easy to implement for
everyone, a strict software approach called Mutex Locks was introduced.
In this approach, in the entry section of code, a LOCK is acquired over the
critical resources modified and used inside the critical section, and in the
exit section that LOCK is released.
As the resource is locked while a process executes its critical section, no
other process can access it.
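As a rough illustration using POSIX threads (the balance variable and function name are made up for this sketch), the LOCK is acquired in the entry section and released in the exit section:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects the shared resource */
int shared_balance = 0;                            /* shared data modified in the critical section */

void deposit(int amount) {
    pthread_mutex_lock(&lock);     /* entry section: acquire the LOCK */
    shared_balance += amount;      /* critical section: only one thread can be here at a time */
    pthread_mutex_unlock(&lock);   /* exit section: release the LOCK */
}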

• TestAndSet

TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take either of two values:
• 0 means unlocked
• 1 means locked

Before entering into the critical section, a process inquires about the lock. If it is
locked, it keeps on waiting until it becomes free and if it is not locked, it takes the
lock and executes the critical section.
In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting is not. Since no queue is maintained for the processes stuck in the while loop, bounded waiting cannot be ensured. Bounded waiting means that a process waits only a bounded (finite) amount of time before entering the critical section.
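A sketch of a TestAndSet-based lock in the usual textbook form (in real hardware test_and_set executes as a single atomic instruction; the C function below only models its behavior):

/* Models the atomic TestAndSet instruction: returns the old value of the lock and sets it to 1. */
int test_and_set(int *lock) {
    int old = *lock;
    *lock = 1;
    return old;
}

int lock = 0;                      /* shared lock variable: 0 = unlocked, 1 = locked */

do {
    while (test_and_set(&lock))
        ;                          /* lock was already 1: keep spinning until it becomes free */

    /* critical section */

    lock = 0;                      /* release the lock */

    /* remainder section */
} while (1);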

• Semaphores
Semaphore is simply a variable that is non-negative and shared between threads. It
is another algorithm or solution to the critical section problem. A semaphore is a
signaling mechanism and a thread that is waiting on a semaphore can be signaled
by another thread. This is different from a mutex, as a mutex can be signaled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process
synchronization.
A semaphore either allows or disallows access to the resource, which depends on
how it is set up.

WAIT(S):
    while (S <= 0);
    S = S - 1;

SIGNAL(S):
    S = S + 1;

The classical definitions of wait and signal are:

Wait:
This operation decrements the value of its argument S as soon as S becomes positive (greater than or equal to 1). It mainly helps you control the entry of a task into the critical section. While the value of S is negative or zero, no operation is executed and the caller keeps waiting. The wait() operation was originally termed P, so it is also known as the P(S) operation. The definition of the wait operation is as follows:
wait(S)
{
    while (S <= 0);    // no operation: busy wait
    S--;
}
• Signal:
This operation increments the value of its argument S. It is mainly used to control the exit of a task from the critical section. The signal() operation was originally termed V, so it is also known as the V(S) operation.
signal(S)
{
    S++;
}
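A rough mapping of these operations onto POSIX semaphores (sem_wait corresponds to wait/P and sem_post to signal/V; the helper function names are made up):

#include <semaphore.h>

sem_t s;                                  /* shared semaphore */

void init(void)  { sem_init(&s, 0, 1); }  /* initial value 1: behaves as a binary semaphore */

void enter(void) { sem_wait(&s); }        /* wait / P(S): blocks (no busy waiting) until S > 0, then decrements */
void leave(void) { sem_post(&s); }        /* signal / V(S): increments S and wakes a waiting thread, if any */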

Properties of Semaphores

• Semaphores are simple and always have a non-negative integer value.
• They work with many processes.
• There can be many different critical sections, each guarded by a different semaphore.
• Each critical section has its own unique access semaphore.
• Semaphores can permit multiple processes into the critical section at once, if desirable.
There are two types of semaphores:

• Counting Semaphores:
A counting semaphore can be used when we need to have more than one process in the critical section at the same time. The value of a counting semaphore at any point of time indicates the maximum number of additional processes that can enter the critical section at that time.
A process which wants to enter the critical section first decreases the semaphore value by 1 and then checks whether it has become negative. If it has, the process is pushed into the list of blocked processes (i.e., the queue); otherwise it enters the critical section.
When a process exits from the critical section, it increases the counting semaphore by 1 and then checks whether the value is still negative or zero. If it is, that means at least one process is waiting in the blocked state; hence, to ensure bounded waiting, the first process in the list of blocked processes is woken up and enters the critical section.
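As a short sketch (assuming POSIX semaphores; the initial value 3 and the names are arbitrary), a counting semaphore initialized to 3 lets at most three threads use the guarded resources at once:

#include <semaphore.h>

sem_t slots;                       /* counting semaphore guarding a pool of 3 identical resources */

void setup(void) {
    sem_init(&slots, 0, 3);        /* up to 3 threads may be in the guarded region at the same time */
}

void use_resource(void) {
    sem_wait(&slots);              /* blocks if all 3 slots are currently taken */
    /* ... use one of the resources ... */
    sem_post(&slots);              /* free the slot for a waiting thread */
}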

• Binary Semaphores
With a counting semaphore alone, mutual exclusion is not provided, because a set of processes may be allowed to execute in the critical section simultaneously.

A binary semaphore, however, strictly provides mutual exclusion. Here, instead of having more than one slot available in the critical section, we can have at most one process in the critical section. The semaphore can have only two values, 0 or 1.

In this type of semaphore, the wait operation succeeds only if the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are easier to implement than counting semaphores.
They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process waits until the semaphore value is 1, then sets it to 0 and starts its critical section. When it completes its critical section, it resets the value of the mutex semaphore to 1, and some other process can enter its critical section.
Advantages of Semaphores

• Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
• There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
• Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores

• Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
• Semaphores may lead to a priority inversion, where low priority processes may access the critical section first and high priority processes later.
The producer-consumer problem

• The Producer-Consumer problem is a classic problem that is used for multi-process synchronization, i.e., synchronization between more than one process.
• In the producer-consumer problem, there is one Producer that is producing something and there is one Consumer that is consuming the products produced by the Producer. The producers and consumers share the same fixed-size memory buffer.
• The job of the Producer is to generate data, put it into the buffer, and start generating data again, while the job of the Consumer is to consume the data from the buffer.

What's the problem?
• The producer should produce data only when the buffer is not full. If the
buffer is full, then the producer shouldn't be allowed to put any data into the
buffer.
• The consumer should consume data only when the buffer is not empty. If
the buffer is empty, then the consumer shouldn't be allowed to take any data
from the buffer.
• The producer and consumer should not access the buffer at the same time.

To solve this problem, we need two counting semaphores – Full and Empty.
“Full” keeps track of number of items in the buffer at any given time and
“Empty” keeps track of number of unoccupied slots.
Initialization of semaphores –
• mutex = 1 // binary semaphore giving exclusive access to the buffer
• Full = 0 // Initially, all slots are empty. Thus full slots are 0
• Empty = n // All slots are empty initially


Solution for Producer –
do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while(true);

When the producer produces an item, the value of “empty” is reduced by 1 because one slot will now be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of “full” is increased by 1. The value of mutex is also increased by 1 because the task of the producer is complete and the consumer can access the buffer.

Solution for Consumer –

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while(true);

As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the value of mutex is also reduced, so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, it increases the value of “empty” by 1. The value of mutex is also increased so that the producer can access the buffer again.
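Putting the two halves together, a minimal compilable sketch using POSIX threads and semaphores (the buffer size N, the integer items and the function names are illustrative):

#include <pthread.h>
#include <semaphore.h>

#define N 8                         /* buffer size (illustrative) */

int buffer[N];
int in = 0, out = 0;                /* next slot to fill / next slot to empty */

sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 0; ; item++) {
        sem_wait(&empty_slots);     /* wait(empty): wait for a free slot */
        sem_wait(&mutex);           /* wait(mutex): lock the buffer */
        buffer[in] = item;          /* place the item in the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);           /* signal(mutex) */
        sem_post(&full_slots);      /* signal(full): one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full_slots);      /* wait(full): wait for a filled slot */
        sem_wait(&mutex);           /* wait(mutex) */
        int item = buffer[out];     /* remove the item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);           /* signal(mutex) */
        sem_post(&empty_slots);     /* signal(empty): one more free slot */
        (void)item;                 /* consume the item here */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);   /* Empty = n */
    sem_init(&full_slots, 0, 0);    /* Full = 0 */
    sem_init(&mutex, 0, 1);         /* mutex = 1 */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}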

Readers-Writers Problem

Consider a situation where we have a file shared between many people.
If one of them tries to edit the file, no other person should be reading or writing it at the same time; otherwise, the changes will not be visible to them.
If some person is reading the file, then others may read it at the same time.
In an OS, this situation is known as the readers-writers problem.
Problem parameters:

• One set of data is shared among a number of processes


• Once a writer is ready, it performs its write. Only one writer may write at a
time
• If a process is writing, no other process can read it
• If at least one reader is reading, no other process can write
• Readers may not write and only read
Solution:
The solution for readers and writers can be implemented using binary semaphores.
Three variables are used to implement the solution: mutex, wrt and readcnt.
• semaphore mutex, wrt; // the semaphore mutex ensures mutual exclusion when readcnt is updated, i.e., when any reader enters or exits the critical section; the semaphore wrt is used by both readers and writers
• int readcnt; // readcnt tells the number of processes performing reads in the critical section, initially 0
Functions on a semaphore:
• wait() : decrements the semaphore value.
• signal() : increments the semaphore value.

Writer process:
• Writer requests the entry to critical section.

• If allowed, i.e., the wait() call returns, it enters and performs the write. If not allowed, it keeps on waiting.
• It exits the critical section.
do {
    // writer requests entry to the critical section
    wait(wrt);
    // performs the write
    // leaves the critical section
    signal(wrt);
} while(true);

Reader process:
• Reader requests the entry to critical section.
• If allowed:
– it increments the count of the number of readers inside the critical section. If this reader is the first reader entering, it locks the wrt semaphore to restrict the entry of writers while any reader is inside.
– It then signals mutex, as any other reader is allowed to enter while others are already reading.
– After performing the read, it exits the critical section. When exiting, it checks whether any reader is still inside; if not, it signals the semaphore “wrt”, so that a writer can now enter the critical section.
• If not allowed, it keeps on waiting.
do {
    // Reader wants to enter the critical section
    wait(mutex);
    // The number of readers has now increased by 1
    readcnt++;
    // There is at least one reader in the critical section.
    // This ensures no writer can enter if there is even one reader; thus we give preference to readers here.
    if (readcnt == 1) {
        wait(wrt);
    }
    // Other readers can enter while this current reader is inside the critical section
    signal(mutex);

    // Current reader performs reading here

    wait(mutex);     // a reader wants to leave
    readcnt--;
    // If no reader is left in the critical section,
    if (readcnt == 0) {
        signal(wrt); // writers can enter
    }
    signal(mutex);   // reader leaves
} while(true);

Thus, the semaphore ‘wrt‘ is queued on both readers and writers in such a manner that preference is given to readers if writers are also waiting. No reader has to wait simply because a writer has requested to enter the critical section.

Writer Process:

wait(wrt);
// WRITE INTO THE FILE
signal(wrt);

If a writer wishes to access the file, the wait operation is performed on the wrt semaphore, which decrements wrt to 0 so that no other writer can access the file. When the writer who was accessing the file completes the writing job, the signal operation is performed on wrt.
Dining Philosopher Problem

The dining philosophers problem is another classic synchronization problem, used to evaluate situations where multiple resources need to be allocated to multiple processes.

Five philosophers are sitting around a circular table, and their job is to think and eat alternately. A bowl of noodles is placed at the center of the table along with five chopsticks, one between each pair of philosophers. To eat, a philosopher needs both the left and the right chopstick. A philosopher can eat only if both the immediate left and right chopsticks are available. If they are not both available, the philosopher puts down whichever chopstick they hold (left or right) and starts thinking again.

Problem Statement:
Consider five philosophers sitting around a circular dining table. The dining table has five chopsticks and a bowl of rice in the middle.

At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original place.
From the problem statement, it is clear that a philosopher can think for an indefinite amount of time, but when a philosopher starts eating, he has to stop at some point of time. The philosopher is in an endless cycle of thinking and eating.
SOLUTION
A solution to the Dining Philosophers Problem is to use a semaphore to represent each chopstick. A chopstick can be picked up by executing a wait operation on its semaphore and released by executing a signal operation.
The chopsticks are declared as:
semaphore chopstick[5];
Initially all elements of chopstick are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher.
The code for philosopher i looks like:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);   // mod is used because the dining table is circular:
                                  // after the last chopstick we wrap back to the first
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    /* think */
} while(1);
When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to be available and picks it up too. After eating, he puts both chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, a deadlock occurs, because each will wait forever for the other chopstick. The possible solutions for this are:

• A philosopher must be allowed to pick up the chopsticks only if both the left and right chopsticks are available.
• Allow only four philosophers to sit at the table. That way, if the four philosophers pick up four chopsticks, there will be one chopstick left on the table. So, one philosopher can start eating and eventually two chopsticks will be available. In this way, deadlock can be avoided.
• An even-numbered philosopher should pick up the right chopstick first and then the left, while an odd-numbered philosopher should pick up the left chopstick first and then the right, as sketched below.
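A rough sketch of this asymmetric ordering, reusing the chopstick[5] semaphores from the solution above (chopstick[i] is philosopher i's left chopstick and chopstick[(i+1) % 5] the right one):

do {
    if (i % 2 == 0) {                 /* even-numbered philosopher: right chopstick first */
        wait(chopstick[(i+1) % 5]);
        wait(chopstick[i]);
    } else {                          /* odd-numbered philosopher: left chopstick first */
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
    }
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    /* think */
} while(1);

Because at least one pair of neighbours now contends for the same chopstick first, the circular wait that causes the deadlock cannot form.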

Monitors in Operating System
Monitors are used for process synchronization. With the help of programming
languages, we can use a monitor to achieve mutual exclusion among the
processes.
An example of monitors is Java's synchronized methods; Java also offers the wait() and notify() constructs.
In other words, monitors are defined as a programming-language construct which helps in controlling access to shared data.
A monitor is a module or package which encapsulates a shared data structure, procedures, and the synchronization between concurrent procedure invocations.

Characteristics of Monitors
• Inside a monitor, only one process can execute at a time.
• Monitors are a group of procedures and condition variables that are merged together in a special type of module.
• If a process is running outside the monitor, then it cannot access the monitor’s internal variables, but it can call the procedures of the monitor.
• Monitors offer a high level of synchronization.
• Monitors were devised to reduce the complexity of synchronization problems.
• Only one process can be active at a time inside the monitor.

Components of Monitor
There are four main components of the monitor:

• Initialization: Initialization comprises the code that is executed exactly once, when the monitor is created.
• Private data: This comprises all the private data, including private procedures, that can only be used within the monitor. Private data is not visible outside the monitor.
• Monitor procedure: Monitor procedures are those procedures that can be called from outside the monitor.
• Monitor entry queue: The monitor entry queue is another essential component of the monitor; it holds all the threads that have called a monitor procedure and are waiting to enter the monitor.
Syntax of monitor
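A minimal sketch of the usual monitor syntax, in C-like pseudocode (MonitorName, proc1 and proc2 are placeholders, not names from the text):

monitor MonitorName
{
    // shared (private) data, visible only inside the monitor
    condition a, b;                   // condition variables

    procedure proc1(...) { ... }      // monitor procedures callable from outside
    procedure proc2(...) { ... }

    initialization_code(...) { ... }  // runs exactly once, when the monitor is created
}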

Condition Variables
There are two types of operations that we can perform on the condition
variables of the monitor:
• Wait
• Signal
Suppose there are two condition variables
condition a, b // Declaring variable

Wait Operation
a.wait() : The process that performs a wait operation on the condition variable is suspended and placed in the blocked queue of that condition variable.
Signal Operation
a.signal() : If a signal operation is performed by a process on the condition variable, then one of the blocked processes is given a chance to resume.
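For comparison, a rough sketch of the same wait and signal behavior using POSIX condition variables (the condition name "a", the flag and the function names are made up; in a real monitor the compiler generates this locking automatically):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of the monitor's implicit lock */
pthread_cond_t  a = PTHREAD_COND_INITIALIZER;              /* condition variable "a" */
bool condition_holds = false;                              /* the state the condition describes */

void waiter(void) {
    pthread_mutex_lock(&monitor_lock);
    while (!condition_holds)                   /* a.wait(): suspend until signaled */
        pthread_cond_wait(&a, &monitor_lock);  /* releases the lock while blocked, reacquires on wakeup */
    /* ... proceed inside the "monitor" ... */
    pthread_mutex_unlock(&monitor_lock);
}

void signaller(void) {
    pthread_mutex_lock(&monitor_lock);
    condition_holds = true;
    pthread_cond_signal(&a);                   /* a.signal(): wake one blocked process */
    pthread_mutex_unlock(&monitor_lock);
}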

Advantages of Monitor
Monitors make parallel programming easier, and programs that use monitors are less error-prone than those using semaphores.

Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The compiler must generate code for them. This gives the compiler the additional burden of having to know what operating system facilities are available to control access to critical sections in concurrent processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada and Concurrent Euclid.
