Chapter-3 Real Time OS


Software Frameworks for Real-

time and Embedded Systems

Chapter-3

2/26/2024 Prepared By; Elias B. 1


Schedule
 Real-time operating system
 Features of a real-time operating system
 General and specific microprocessors
 Inter process communication
 Real time task scheduling
 Dynamic allocation of tasks
 Scheduling
 Cyclic scheduling
 Priority-based scheduling
 Multi-tasking and Concurrency issue
 Handling resource sharing and dependencies
 Resource sharing protocols
 Fault tolerance
 Synchronization techniques
 Centralized & distributed clock synchronization
 Real-time applications
 RTOS support for semaphores, queues, and events



1. Real-time operating system
 A real-time operating system (RTOS) is an OS that
guarantees real-time applications a certain capability within a
specified deadline.
 RTOSs are designed for critical systems and for devices like
microcontrollers that are timing-specific.
 RTOS processing time requirements are measured in
milliseconds.
 Any delays in responding could have disastrous
consequences.
 Real-time operating systems have similar functions as
general-purpose OSes (GPOSes), like Linux, Microsoft
Windows or macOS, but are designed so that a scheduler in
the OS can meet specific deadlines for different tasks.



RTOS
 RTOSs also commonly appear in embedded systems, which
are a combination of hardware and software designed for a
specific function and may also operate within a larger system.
 Often, embedded systems are used in real-time environments
and use a real-time operating system to communicate with
the hardware.
 RTOSs are designed to handle multiple processes at one time,
ensuring that these processes respond to events within a
predictable time limit.
 Processing in an RTOS occurs within defined time constraints, and the OS monitors the priority of tasks.
 An RTOS is also able to make changes to task priority.
 Systems that are event-driven often switch between tasks
based on priority.
RTOSs provide the following functionality:
 multitasking, where tasks are rapidly switched between to give the impression that multiple programs are executing concurrently;
 process and thread prioritization; and
 a sufficient number of interrupt levels.
Some real-time operating systems are created for special applications, while others are more general purpose.



RTOSs are included in the following devices:
 air traffic control systems;
 anti-lock brakes and air bags;
 cameras;
 medical systems; and
 PCs.



Characteristics of a real-time operating system

 Real-time operating systems generally have the following characteristics:
 Small footprint. Compared to general-purpose OSs, real-time operating systems are lightweight.
 High performance. RTOSs are typically fast and responsive.
 Determinism. Repeated inputs produce the same output.
 Safety and security. Safety-critical and security standards are typically the highest priority, as RTOSs are frequently used in critical systems.
 Priority-based scheduling. Tasks assigned a high priority are executed first, followed by lower-priority jobs.
 Timing information. RTOSs are responsible for timing and provide application programming interfaces (APIs) for it.



RTOS vs. GPOS
 General-purpose OSs, like Windows or Unix, can handle multiple
tasks concurrently but are not ideal for important, time-sensitive
applications due to latency and synchronization issues.
 GPOSs can also operate without time constraints, so tasks may
sometimes fail or take longer to execute. In other words, they
provide a potential nondeterministic, soft real-time response to
tasks.
 RTOSs typically provide a deterministic, hard real-time response
with the goal of providing a quick reaction to events.
 While GPOSs are good for general consumer use, RTOSs are
designed for instances where a system needs to respond to an
event or task within a short, designated time frame.
 RTOSs need to be quick and accurate for the typically embedded
systems they reside in.



RTOS Architectures

 Aside from the minute details, two prevailing design philosophies affect RTOS design:
monolithic kernel versus microkernel.
 These systems are differentiated by their structure; whereas monolithic kernel systems
run in a single space, microkernel systems compartmentalize different components of
the architecture.
1. Microkernel Systems
 In microkernel architecture, components are stored in separate “rooms,” which are
independent from one another but share a similar space. A room can be renovated
without impacting those around it. However, to get from one to another, you have to
step through the doorway and head down the hall, which wastes time. Any action has to
return to the kernel before it can move to the component it references, meaning some
operations take much longer than necessary.
2. Monolithic Systems
 In a monolithic system, there are no “walls” between the rooms, so you can step from
one to another much more quickly. Rather than implementing a small kernel, monolithic
kernels provide services of their own as well as regulating those of other areas. With
exceptions, operations are executed in the kernel space, removing the recurrent need to
return to the kernel and improving speed and performance. However, making a change
in one area could have ramifications for the entire system.



Microkernel vs Monolithic system

Microkernel System: The kernel and operations are housed in separate spaces, with the kernel itself being bare (hence "micro"). Operation spaces are not given access to one another and must return to the kernel.

Monolithic System: The kernel and operation processes share the same space. Operations move more quickly, and the systems boast higher performance. However, updates may require an extensive overhaul.



Real-Time System Examples

 RTOSs can be found in countless products around the world, with VxWorks alone powering more than two billion devices.
 Systems from car engines to deep-space telescopes to helicopter guidance systems to the Mars rovers use embedded systems that run a real-time operating system.
Examples

 A&D (aerospace and defense): flight display controller, engine turbine, drones, extraterrestrial rovers
 Telecom: 5G modem, satellite modem, base station
 Transportation: functional safety systems, emergency braking systems, engine warning systems
 Medical: magnetic resonance imaging, surgery equipment, ventilators
 Manufacturing: factory robotics systems, safety systems, oil and gas vibration monitors


Features of Real Time Operating Systems

 A real-time OS occupies very little space.
 The response time of a real-time OS is predictable.
 It consumes relatively few resources.
 In a real-time OS, the kernel restores the state of a task and passes control of the CPU to that task.
Advantages of Real Time Operating Systems

 A real-time OS is easy to develop and execute for real-time applications.
 Real-time operating system structures are more compact.
 Real-time operating system structures require less memory space.
 Memory allocation in these types of systems is managed easily.
 These types of systems are largely error-free.



Disadvantages of Real Time Operating Systems

 Real-time OSs have complicated design principles.
 Real-time OSs are very costly to develop.
 Real-time OSs are very complex.
 Real-time OSs can consume critical CPU cycles.



Applications of Real Time OS

Real-time OSs are used in:
 Airline reservation systems.
 Air traffic control systems.
 Systems that provide immediate updating.
 Any system that provides up-to-the-minute information on stock prices.
 Defense application systems like RADAR.
 Networked multimedia systems.
 Command and control systems.
 Internet telephony.
Types of Operating Systems (OS)
 An operating system is a well-organized collection of
programs that manages the computer hardware. It is a
type of system software that is responsible for the smooth
functioning of the computer system.



OS Types



Types of Processors

General Purpose Processor
 The General-Purpose Processor (GPP) is programmed by the user. It can handle a wide variety of applications. The GPP is designed especially for general-purpose computers such as desktops and workstations. The primary consideration when designing GPPs is speed; cost is always a secondary consideration.
Types of general-purpose processor:
 Microprocessor
 Microcontroller
 Embedded Processors
 Digital Signal Processor
Microprocessor
 A microprocessor is a processor fabricated completely on a single chip. It contains the ALU, control unit and registers.
 Intel developed the first microprocessor, the 4004, in 1971. It was the first processor containing all the CPU components on a single chip.
GP processor
 The microprocessor is the brain of the computer, doing all of its work. It is a computer processor that incorporates all the functions of the CPU (Central Processing Unit) on a single IC (Integrated Circuit), or at most a few ICs. Microprocessors were first introduced in the early 1970s.
 The 4004 was the first general-purpose microprocessor, used by Intel in building personal computers. The arrival of low-cost general-purpose microprocessors has been instrumental in the development of modern society.



Microprocessor Characteristics

 Microprocessors are multipurpose devices that can be designed for generic or specialized functions. The microprocessors in laptops and smartphones are general purpose, whereas those designed for graphics processing or machine vision are specialized.
 Some characteristics are common to all microprocessors.
 The most important defining characteristics of a microprocessor are:
 Clock speed
 Instruction set
 Word size
Clock Speed

 Every microprocessor has an internal clock that regulates the speed at which it executes instructions and synchronizes it with other components. The speed at which the microprocessor executes instructions is called the clock speed. Clock speeds are measured in MHz or GHz, where 1 MHz means 1 million cycles per second and 1 GHz equals 1 billion cycles per second. Here a cycle refers to a single electrical signal cycle.
 Mainstream microprocessors currently have clock speeds in the range of a few GHz; pushing much beyond this generates enough heat to damage the chip itself. To overcome this, manufacturers put multiple processor cores working in parallel on a chip.

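The MHz/GHz figures above translate directly into per-cycle durations. A throwaway sketch of that arithmetic (the clock rates used here are illustrative, not from the slides):

```python
# Cycle-time arithmetic from the definitions above:
# 1 MHz = 1e6 cycles/s and 1 GHz = 1e9 cycles/s, so one cycle at f Hz lasts 1/f seconds.

def cycle_time_ns(clock_hz: float) -> float:
    """Duration of one clock cycle, in nanoseconds."""
    return 1e9 / clock_hz

print(cycle_time_ns(1e6))  # 1 MHz -> 1000.0 ns per cycle
print(cycle_time_ns(3e9))  # 3 GHz -> ~0.333 ns per cycle
```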


Word Size

 The number of bits that a processor can process in a single instruction is called its word size. Word size determines the amount of RAM that can be accessed in one go and the total number of pins on the microprocessor.
 The total number of input and output pins in turn determines the architecture of the microprocessor.
 The first commercial microprocessor, the Intel 4004, was a 4-bit processor.
 It had 4 input pins and 4 output pins; the number of output pins is always equal to the number of input pins. Currently most microprocessors use 32-bit or 64-bit architecture.



Instruction Set

 A command given to a digital machine to perform an operation on a piece of data is called an instruction.
 The basic set of machine-level instructions that a microprocessor is designed to execute is called its instruction set.
 These instructions carry out the following types of operations:
 Data transfer
 Arithmetic operations
 Logical operations
 Control flow
 Input/output and machine control



Specific Processors (Microprocessor Components)
 Compared to the first microprocessors, today’s processors are very small
but still they have these basic parts right from the first model −
 CPU
 Bus
 Memory
CPU
 CPU is fabricated as a very large scale integrated circuit (VLSI) and has
these parts −
 Instruction register − It holds the instruction to be executed.
 Decoder − It decodes the instruction and sends it to the ALU (Arithmetic Logic Unit).
 ALU − It has necessary circuits to perform arithmetic, logical, memory,
register and program sequencing operations.
 Register − It holds intermediate results obtained during program
processing. Registers are used for holding such results rather than RAM
because accessing registers is almost 10 times faster than accessing RAM.



Bus

 The connection lines used to connect the internal parts of the microprocessor chip are called the bus. There are three types of buses in a microprocessor:
 Data Bus − Lines that carry data to and from memory are called the data bus. It is a bidirectional bus with a width equal to the word length of the microprocessor.
 Address Bus − A unidirectional bus responsible for carrying the address of a memory location or I/O port from the CPU to memory or the I/O port.
 Control Bus − Lines that carry control signals like clock, interrupt or ready signals are called the control bus. They are bidirectional. The signal that denotes that a device is ready for processing is called the ready signal, and the signal that tells a device to interrupt its process is called an interrupt signal.



Memory

 A microprocessor works with two types of memory:
 RAM − Random Access Memory is volatile memory that is erased when power is switched off. All data and instructions in use are stored in RAM.
 ROM − Read-Only Memory is non-volatile memory whose data remains intact even after power is switched off. The microprocessor can read from it at any time but cannot write to it. It is preprogrammed by the manufacturer with essential data such as the boot sequence.



Types of Microprocessors (based on instruction sets)

 Let us learn more about the two types of microprocessors based on their instruction set.
1. RISC
 RISC stands for Reduced Instruction Set Computer. It has a small set of highly optimized instructions. Complex instructions are implemented using simpler ones, reducing the size of the instruction set. The design philosophy of RISC incorporates these salient points:
 The number of instructions should be minimal.
 Instructions should be of the same length.
 Simple addressing modes should be used.
 Memory references to retrieve operands are reduced by adding registers.
 Some of the techniques used by RISC architectures include:
 Pipelining − The fetching and execution of successive instructions are overlapped.
 Single-cycle execution − Most RISC instructions take one CPU cycle to execute.
 Examples of RISC processors include ARM, MIPS, PowerPC and RISC-V.
2. CISC

 CISC stands for Complex Instruction Set Computer. It supports hundreds of instructions. Computers supporting CISC can accomplish a wide variety of tasks, making them ideal for personal computers. These are some characteristics of CISC architecture:
 A larger set of instructions
 Instructions of variable length
 Complex addressing modes
 Instructions that take more than one clock cycle
 Works well with simpler compilers
 Examples of CISC processors are the Intel 386 and 486, Pentium, Pentium II and III, and the Motorola 68000.



Popular RTOS software
 FreeRTOS from Amazon Web Services. This open source
microcontroller OS is designed to simplify the development,
security, deployment and maintenance of microcontroller
edge devices.
 QNX Neutrino from BlackBerry. This commercial real-
time operating system is similar to Unix and is designed to
work in embedded systems. This is also one of the first
commercially successful microkernel OSs.
 VxWorks from Wind River. This RTOS supports
application deployment in containers. Mars Exploration
Rovers run VxWorks.
 SafeRTOS from Wittenstein. This pre-certified, safety-focused real-time operating system is designed for embedded processors.
RTOS lists
 Lots of details about the following RTOSs
available at:
 Linux
 RTLinux
 RTAI
 RTEMS
 QNX
 VxWorks
 LynxOS



Inter process communication
A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Although one might think that processes running independently will execute very efficiently, in reality there are many situations where co-operation can increase computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions.
The communication between these processes can be seen as a method of co-
operation between them. Processes can communicate with each other through
both:
 Shared Memory
 Message passing

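The two IPC styles named above can be sketched side by side. This minimal sketch uses Python threads in place of full OS processes for brevity (real processes would use pipes, `multiprocessing`, or shared memory segments); the worker names and counts are illustrative:

```python
import threading
import queue

# 1. Message passing: the producer hands values to the consumer via a queue.
msgs = queue.Queue()
received = []

def producer():
    for i in range(3):
        msgs.put(i)          # send a message
    msgs.put(None)           # sentinel: no more messages

def consumer():
    while (item := msgs.get()) is not None:
        received.append(item)  # receive a message

# 2. Shared memory: two workers update one shared counter behind a lock.
counter = {"value": 0}
lock = threading.Lock()

def incrementer():
    for _ in range(1000):
        with lock:           # synchronize access to the shared data
            counter["value"] += 1

threads = [threading.Thread(target=f)
           for f in (producer, consumer, incrementer, incrementer)]
for t in threads: t.start()
for t in threads: t.join()

print(received)           # [0, 1, 2]
print(counter["value"])   # 2000
```

Note the design difference: with message passing the synchronization is built into the queue, whereas with shared memory the programmer must add it explicitly (the lock).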


Process Management in OS

 A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
 There may exist more than one process in the system that requires the same resource at the same time, so the operating system has to manage all the processes and resources in a convenient and efficient way.
 Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
 The operating system is responsible for the following activities in
connection with Process Management
 Scheduling processes and threads on the CPUs.
 Creating and deleting both user and system processes.
 Suspending and resuming processes.
 Providing mechanisms for process synchronization.
 Providing mechanisms for process communication.



Attributes of a process

 The Attributes of the process are used by the Operating System to create the process control block
(PCB) for each of them. This is also called context of the process. Attributes which are stored in the
PCB are described below.
1. Process ID
 When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program counter
 A program counter stores the address of the instruction at which the process was suspended. The CPU uses this address when execution of the process is resumed.
3. Process State
 The Process, from its creation to the completion, goes through various states which are new, ready,
running and waiting. We will discuss about them later in detail.
4. Priority
 Every process has its own priority. The process with the highest priority among the processes gets
the CPU first. This is also stored on the process control block.
5. General Purpose Registers
 Every process has its own set of registers which are used to hold the data which is generated during
the execution of the process.
6. List of open files
 During the Execution, Every process uses some files which need to be present in the main memory.
OS also maintains a list of open files in the PCB.
7. List of open devices
 The OS also maintains a list of all open devices used during the execution of the process.
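The seven PCB attributes listed above can be collected into one record. A minimal sketch (the field types, state names, and example values are illustrative, not from any real OS):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a process control block holding the attributes above."""
    pid: int                                          # 1. unique process ID
    program_counter: int = 0                          # 2. address to resume from
    state: str = "new"                                # 3. new/ready/running/waiting
    priority: int = 0                                 # 4. scheduling priority
    registers: dict = field(default_factory=dict)     # 5. saved general-purpose registers
    open_files: list = field(default_factory=list)    # 6. list of open files
    open_devices: list = field(default_factory=list)  # 7. list of open devices

# Hypothetical usage: create a process and move it to the ready queue.
pcb = PCB(pid=42, priority=5)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```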
Process states



Operations on the Process

1. Creation
 Once the process is created, it will be ready and come into the ready
queue (main memory) and will be ready for the execution.
2. Scheduling
 Out of the many processes present in the ready queue, the Operating
system chooses one process and start executing it. Selecting the process
which is to be executed next, is known as scheduling.
3. Execution
 Once the process is scheduled for the execution, the processor starts
executing it. Process may come to the blocked or wait state during the
execution then in that case the processor starts executing the other
processes.
4. Deletion/killing
 Once the purpose of the process is served, the OS will kill the process. The context of the process (PCB) will be deleted and the process will be terminated by the operating system.



Scheduling in OS
 Operating system uses various schedulers for the process scheduling described below.
1. Long term scheduler
 Long term scheduler is also known as job scheduler. It chooses the processes from the
pool (secondary memory) and keeps them in the ready queue maintained in the
primary memory.
 Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of
long term scheduler is to choose a perfect mix of IO bound and CPU bound processes
among the jobs present in the pool.
 If the job scheduler chooses more IO bound processes then all of the jobs may reside
in the blocked state all the time and the CPU will remain idle most of the time. This will
reduce the degree of Multiprogramming. Therefore, the Job of long term scheduler is
very critical and may affect the system for a very long time.
2. Short term scheduler
 Short term scheduler is also known as CPU scheduler. It selects one of the Jobs from
the ready queue and dispatch to the CPU for the execution.
 A scheduling algorithm is used to select which job is going to be dispatched for the
execution. The Job of the short term scheduler can be very critical in the sense that if it
selects job whose CPU burst time is very high then all the jobs after that, will have to
wait in the ready queue for a very long time.
 This problem is called starvation, which may arise if the short-term scheduler makes some mistakes while selecting the job.
Scheduling in OS
3. Medium term scheduler
 Medium term scheduler takes care of the swapped out
processes.
 If a running process needs some I/O time for its completion, its state must change from running to waiting.
 Medium term scheduler is used for this purpose. It
removes the process from the running state to make room
for the other processes. Such processes are the swapped
out processes and this procedure is called swapping. The
medium term scheduler is responsible for suspending and
resuming the processes.
 It reduces the degree of multiprogramming. The swapping
is necessary to have a perfect mix of processes in the
ready queue.
Times related to process



Times in process..
1. Arrival Time
 The time at which the process enters into the ready queue is
called the arrival time.
2. Burst Time
 The total amount of time required by the CPU to execute the whole process is called the burst time. This does not include waiting time. It is difficult to know the execution time of a process before executing it, so scheduling algorithms based purely on burst time cannot be implemented exactly in practice.
3. Completion Time
 The Time at which the process enters into the completion
state or the time at which the process completes its
execution, is called completion time.
Times…
4. Turnaround time
 The total amount of time spent by the process
from its arrival to its completion, is called
Turnaround time.
5. Waiting Time
 The Total amount of time for which the process
waits for the CPU to be assigned is called waiting
time.
6. Response Time
 The difference between the arrival time and the
time at which the process first gets the CPU is
called Response Time.
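The timing definitions above reduce to three identities, sketched below with made-up numbers for one process:

```python
# From the definitions above:
#   turnaround = completion - arrival
#   waiting    = turnaround - burst
#   response   = first_run  - arrival

def times(arrival, burst, completion, first_run):
    """Return (turnaround, waiting, response) for one process."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    response = first_run - arrival
    return turnaround, waiting, response

# Hypothetical process: arrives at t=2, needs 5 units of CPU,
# first gets the CPU at t=4, and completes at t=12.
tat, wt, rt = times(arrival=2, burst=5, completion=12, first_run=4)
print(tat, wt, rt)  # 10 5 2
```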
CPU scheduling
 In Multiprogramming systems, the Operating system
schedules the processes on the CPU to have the maximum
utilization of it and this procedure is called CPU scheduling.
The Operating System uses various scheduling algorithm to
schedule the processes.
 This is a task of the short term scheduler to schedule the
CPU for the number of processes present in the Job Pool.
Whenever the running process requests some IO operation
then the short term scheduler saves the current context of
the process (also called PCB) and changes its state from
running to waiting.
 While a process is in the waiting state, the short-term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.
What is saved in the Process Control Block?

 The Operating system maintains a process control block during the lifetime of the process.
 The Process control block is deleted when
the process is terminated or killed.
 The following information is saved in the process control block and changes with the state of the process.



Process control block



Why do we need Scheduling?

 In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle.
 The task of the operating system is to optimize the utilization of resources.
 If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system.
 Hence, to reduce this overhead, the OS needs to schedule jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
CPU scheduling Algorithms
There are various algorithms which are
used by the Operating System to schedule
the processes on the processor in an
efficient way.
The Purpose of a Scheduling algorithm
 Maximum CPU utilization
 Fair allocation of CPU
 Maximum throughput
 Minimum turnaround time
 Minimum waiting time
 Minimum response time
Algorithms
1. First Come First Serve
 It is the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the lower the arrival time, the sooner the process gets the CPU.
 It is a non-preemptive type of scheduling.
2. Round Robin
 In the Round Robin scheduling algorithm, the OS defines a time
quantum (slice). All the processes will get executed in the cyclic
way. Each of the process will get the CPU for a small amount of
time (called time quantum) and then get back to the ready queue
to wait for its next turn.
 It is a preemptive type of scheduling.
3. Shortest Job First
 The job with the shortest burst time gets the CPU first: the lower the burst time, the sooner the process gets the CPU.
 It is a non-preemptive type of scheduling.
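The Round Robin behaviour described above (run for at most one time quantum, then rejoin the back of the ready queue) can be sketched as follows. The process set and quantum are hypothetical, and all jobs are assumed to arrive at t = 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}, all arriving at t=0.
    Returns {name: completion_time}."""
    ready = deque(bursts.items())         # the cyclic ready queue
    remaining = dict(bursts)
    completion, t = {}, 0
    while ready:
        name, _ = ready.popleft()
        run = min(quantum, remaining[name])  # run for at most one quantum
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = t
        else:
            ready.append((name, remaining[name]))  # back of the queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```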
Cont….
4. Shortest remaining time first
 It is the preemptive form of SJF. In this algorithm, the OS
schedules the Job according to the remaining time of the
execution.
5. Priority based scheduling
 In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If the priorities of two processes are the same, they are scheduled according to their arrival time.
6. Highest Response Ratio Next
 In this scheduling Algorithm, the process with highest
response ratio will be scheduled next.
 This reduces the starvation in the system.
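Highest Response Ratio Next (point 6 above) can be sketched directly: at each decision the scheduler picks the arrived process maximizing (waiting time + burst time) / burst time, which favors short jobs but ages long waiters so they cannot starve. The process table here is hypothetical:

```python
def hrrn(procs):
    """procs: list of (name, arrival, burst).
    Returns names in non-preemptive execution order."""
    pending = sorted(procs, key=lambda p: p[1])   # sort by arrival time
    order, t = [], 0
    while pending:
        # candidates that have arrived; if none, jump to the earliest arrival
        arrived = [p for p in pending if p[1] <= t] or [pending[0]]
        # response ratio = (time already waited + burst) / burst
        best = max(arrived, key=lambda p: ((t - p[1]) + p[2]) / p[2])
        pending.remove(best)
        t = max(t, best[1]) + best[2]             # run the chosen job to completion
        order.append(best[0])
    return order

print(hrrn([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 2)]))
# ['P1', 'P2', 'P4', 'P3']
```

At t = 9 in this run, P4 (ratio 2.5) overtakes the longer-waiting P3 (ratio 2.25), but P3's ratio keeps growing while it waits, which is how starvation is reduced.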
First Come First Serve CPU Process Scheduling in Operating Systems

 In this tutorial, we are going to learn an important concept in CPU process scheduling algorithms: First Come First Serve. This is the basic algorithm that every student must learn to understand the basics of CPU process scheduling algorithms.
 First Come First Serve paves the way for understanding other algorithms. This algorithm may have many disadvantages, but those disadvantages led to very new and efficient algorithms. So, it is worth learning about the First Come First Serve CPU process scheduling algorithm.
Important Abbreviations

 CPU - - - > Central Processing Unit
 FCFS - - - > First Come First Serve
 AT - - - > Arrival Time
 BT - - - > Burst Time
 WT - - - > Waiting Time
 TAT - - - > Turn Around Time
 CT - - - > Completion Time
 FIFO - - - > First In First Out



First Come First Serve
 The First Come First Serve CPU scheduling algorithm, known as FCFS for short, is the first of the CPU process scheduling algorithms.
 In the First Come First Serve algorithm we allow processes to execute in a linear manner.
 This means that whichever process enters the ready queue first is executed first.
 This shows that the First Come First Serve algorithm follows the First In First Out (FIFO) principle.
 The First Come First Serve algorithm can be executed in a pre-emptive or non-pre-emptive manner. Before going into examples, let us understand the pre-emptive and non-pre-emptive approaches in CPU process scheduling.



FCFS (First Come First Serve)
1. Pre-emptive
 In this instance of Pre-Emptive Process Scheduling, the OS allots
the resources to a Process for a predetermined period of time.
 The process transitions from running state to ready state or from
waiting state to ready state during resource allocation.
 This switching happens because the CPU may assign other
processes precedence and substitute the currently active process
for the higher priority process.
2. Non Pre-Emptive Approach
 In this case of Non Pre-Emptive Process Scheduling, the resource
cannot be withdrawn from a process before the process has
finished running.
 When a running process finishes and transitions to the waiting
state, resources are switched.



Characteristics of FCFS CPU Process Scheduling

 The characteristics of FCFS CPU process scheduling are:
 Implementation is simple.
 It does not cause any complications in use.
 It can adopt either a non-pre-emptive or a pre-emptive strategy.
 It runs each procedure in the order in which it is received.
 Arrival time is used as the selection criterion for procedures.

Advantages of FCFS CPU Process Scheduling
 The advantages of FCFS CPU process scheduling are:
 To allocate processes, it uses a First In First Out queue.
 The FCFS CPU scheduling process is straightforward and easy to implement.
 In FCFS scheduling, there is no chance of a process starving.
 As there is no consideration of process priority, it is an equitable algorithm.


Example-1 FCFS



Non Pre-Emptive Approach

 Now, let us solve this problem with the help of the scheduling algorithm named First Come First Serve in a non-pre-emptive approach.



Solution for example-1



Calculation of times
1. The Average Completion Time is:
 Average CT = (9 + 12 + 14 + 18 + 21 + 23) / 6 = 97 / 6 = 16.1667
2. The Average Waiting Time is:
 Average WT = (0 + 8 + 11 + 13 + 16 + 18) / 6 = 66 / 6 = 11
3. The Average Turn Around Time is:
 Average TAT = (9 + 11 + 13 + 17 + 19 + 20) / 6 = 89 / 6 = 14.8333
Pre-emptive Approach
 Now, let us solve this problem using the First Come First Serve scheduling algorithm in a pre-emptive approach.
Shortest Job First (SJF) Scheduling
 Till now, we were scheduling processes according to their arrival time (FCFS scheduling). The SJF algorithm instead schedules processes according to their burst time.
 In SJF scheduling, the process with the lowest burst time among the processes available in the ready queue is scheduled next.
 However, it is very difficult to predict the burst time a process will need, so the algorithm is hard to implement in practice.
Advantages of SJF
 Maximum throughput
 Minimum average waiting and turnaround time
Disadvantages of SJF
 Long processes may suffer from starvation.
 It is not strictly implementable because the exact burst time of a process can't be known in advance.
 There are techniques by which the CPU burst time of a process can be estimated; we will discuss them later in detail.
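A minimal non-preemptive SJF sketch, assuming burst times are known in advance (which, as noted above, real systems can only estimate). The process set and `sjf` helper are hypothetical:

```python
# Non-preemptive SJF: at each scheduling point, pick the arrived process
# with the smallest burst time; ties go to the earlier arrival.
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival time
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                   # CPU idles until the next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        remaining.remove((name, arrival, burst))
        time += burst                   # runs to completion
        done[name] = time               # record completion time
    return done

print(sjf([("P1", 1, 7), ("P2", 3, 3), ("P3", 6, 2)]))
```

As in the slide's example, the CPU idles until the first arrival, then the shortest available burst wins each decision.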
Example
 In the following example, there are five jobs
named as P1, P2, P3, P4 and P5. Their arrival time
and burst time are given in the table below.
Example
 Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at which the first process arrives).
 According to the algorithm, the OS schedules the process with the lowest burst time among the available processes in the ready queue.
 Till now, we have only one process in the ready queue, so the scheduler will dispatch it to the processor regardless of its burst time.
 It will execute until time unit 8.
 By then, three more processes have arrived in the ready queue, so the scheduler will choose the process with the lowest burst time.
 Among the processes given in the table, P3 will execute next since it has the lowest burst time of all the available processes.
Shortest Remaining Time First (SRTF) Scheduling Algorithm
 This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution of a process can be stopped after a certain amount of time. At the arrival of every process, the short-term scheduler schedules the process with the least remaining burst time among the available processes and the running process.
 Once all the processes are in the ready queue, no further preemption is done and the algorithm works as SJF scheduling. The context of a process is saved in its Process Control Block when the process is removed from execution and the next process is scheduled. This PCB is accessed on the next execution of the process.
Example
 In this example, there are six jobs P1, P2, P3, P4, P5 and P6. Their arrival times and burst times are given below in the table.
Process vs their times
Scheduling Gantt chart
Cont…
 The Gantt chart is prepared according to the arrival and burst times given in the table.
 At time 0, the only available process is P1, with CPU burst time 8; since it is the only process in the list, it is scheduled.
 The next process arrives at time unit 1. Since the algorithm is SRTF, which is preemptive, the current execution is stopped and the scheduler checks for the process with the least remaining burst time.
 There are now two processes in the ready queue. The OS has executed P1 for one unit of time, so the remaining burst time of P1 is 7 units. The burst time of process P2 is 4 units, so process P2 is scheduled on the CPU according to the algorithm.
 The next process, P3, arrives at time unit 2. At this time, the execution of P2 is stopped and the process with the least remaining burst time is selected. Since process P3 has 2 units of burst time, it is given priority over the others.
Cont…
 The next process, P4, arrives at time unit 3. At this arrival, the scheduler stops the execution of P3 and checks which process has the least remaining burst time among the available processes (P1, P2, P3 and P4). P1 and P2 have remaining burst times of 7 units and 3 units respectively.
 P3 and P4 each have a remaining burst time of 1 unit. Since they are equal, scheduling is done according to arrival time: P3 arrived earlier than P4, and therefore it is scheduled again. The next process, P5, arrives at time unit 4. By this time, process P3 has completed its execution and is no longer in the list. The scheduler compares the remaining burst times of all available processes; since the burst time of process P4 is 1, the least of all, it is scheduled.
 The next process, P6, arrives at time unit 5; by this time, process P4 has completed its execution. We now have four available processes: P1 (7), P2 (3), P5 (3) and P6 (2). The burst time of P6 is the least of all, so P6 is scheduled. Since all the processes are now available, the algorithm works the same as SJF: P6 executes to completion, and then the process with the least remaining time is scheduled.
 Once all the processes have arrived, no further preemption is done and the algorithm works as SJF.
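The SRTF walk-through above can be reproduced with a unit-time simulation. The sketch below uses a simplified hypothetical workload (three processes, not the slide's six-process table); ties on remaining time are broken by arrival time, as in the example:

```python
# SRTF (preemptive SJF) simulated one time unit at a time.
def srtf(processes):
    # processes: list of (name, arrival, burst)
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                  # CPU idle: no process has arrived
            time += 1
            continue
        # least remaining burst first; earlier arrival breaks ties
        n = min(ready, key=lambda x: (remaining[x], arrival[x]))
        remaining[n] -= 1              # run the chosen process one unit
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = time           # record completion time
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```

Here P1 is preempted at t=1 by P2, which is in turn preempted at t=2 by P3, mirroring the decision rule described above.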
Round Robin Scheduling Algorithm
 We now turn to the Round Robin CPU process scheduling algorithm. This algorithm is notable because it addresses several of the flaws we identified in the previous CPU scheduling algorithms.
 Round Robin scheduling is popular largely because it operates only in a preemptive mode, which makes its behavior predictable and fair.
 Important Abbreviations
CPU - - - > Central Processing Unit
AT - - - > Arrival Time
BT - - - > Burst Time
WT - - - > Waiting Time
TAT - - - > Turn Around Time
CT - - - > Completion Time
FIFO - - - > First In First Out
TQ - - - > Time Quantum
Round Robin CPU Scheduling
 Round Robin is one of the most widely used CPU scheduling algorithms. It relies on a Time Quantum (TQ): a fixed slice of time that is deducted from a process's remaining burst time each time the process runs, letting the process complete in chunks.
 Time sharing is the main emphasis of the algorithm. Each step of the algorithm is carried out cyclically, and the system defines a specific time slice known as the time quantum.
 First, the processes that are eligible enter the ready queue. The process at the head of the ready queue is executed for one time quantum. After that slice, the process is removed from the CPU; if it still requires more time to complete its execution, it is added back to the tail of the ready queue.
Cont…
 The ready queue holds each process at most once; it never contains duplicate entries, since holding the same process more than once would introduce redundancy.
 Once a process has completed its execution, it is not placed back in the ready queue.
Advantages
 The advantages of Round Robin CPU scheduling are:
 A fair amount of CPU time is allocated to each job.
 Because it doesn't depend on burst time, it can actually be implemented in a real system.
 It is not affected by the convoy effect or the starvation problem seen in the First Come First Serve algorithm.
Disadvantages
 The disadvantages of Round Robin CPU scheduling are:
 A very small time quantum lowers effective CPU output because of frequent context switches.
 The Round Robin approach spends comparatively more time on context switching.
 The time quantum has a significant impact on its performance.
 Processes cannot be assigned priorities.
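The quantum-based cycling described above can be sketched as follows; for simplicity this hypothetical workload assumes all processes are already in the ready queue at time 0:

```python
from collections import deque

# Round Robin with time quantum tq. Each process is (name, burst_time).
def round_robin(processes, tq):
    queue = deque(processes)            # FIFO ready queue
    time, completion = 0, {}
    while queue:
        name, burst = queue.popleft()   # head of the ready queue
        run = min(tq, burst)            # run one quantum, or less if done
        time += run
        if burst > run:
            queue.append((name, burst - run))  # back to the tail
        else:
            completion[name] = time     # finished: not re-enqueued
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], tq=2))
```

Note how a finished process is never re-enqueued, matching the ready-queue rule stated above.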
Example
Times …
Average Completion Time
 Average Completion Time = ( 31 +9 + 55 +56 +66 + 50 ) / 6
 Average Completion Time = 267 / 6
 Average Completion Time = 44.5
Average Waiting Time
 Average Waiting Time = ( 5 + 26 + 5 + 42 + 42 + 37 ) / 6
 Average Waiting Time = 157 / 6
 Average Waiting Time = 26.16667
Average Turn Around Time
 Average Turn Around Time = ( 31 + 8 + 53 + 53 + 62 + 46 ) / 6
 Average Turn Around Time = 253 / 6
 Average Turn Around Time = 42.16667
RR Scheduling Example
Ready Queue
 Since P6 has completed, it will not be added back to the queue. Only two processes remain in the ready queue; the next process, P2, requires only 2 more units of time.
Cont…

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 units
Highest Response Ratio Next (HRRN) Scheduling
 Highest Response Ratio Next (HRRN) is one of the most optimal scheduling algorithms.
 It is a non-preemptive algorithm in which scheduling is done on the basis of an extra parameter called the Response Ratio.
 A response ratio is calculated for each of the available jobs, and the job with the highest response ratio is given priority over the others.
 The response ratio is calculated by the formula:
 Response Ratio = (W + S) / S
HRRN
 Where,
 W → Waiting Time
 S → Service Time or Burst Time
 Looking at the formula, the job with the shorter burst time is given priority, but the formula also includes an extra factor: the waiting time.
 Since RR ∝ W and RR ∝ (1/S),
 this algorithm not only favors shorter jobs but also accounts for the waiting time of the longer jobs.
 Its mode is non-preemptive, so context switching is minimal in this algorithm.
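The response-ratio rule can be sketched as below; the three-process workload and the `hrrn` helper are hypothetical, chosen so that the waiting-time term visibly influences the decision:

```python
# HRRN: non-preemptive; at each decision point compute
# Response Ratio = (W + S) / S for every arrived job and run the highest.
def hrrn(processes):
    remaining = list(processes)         # (name, arrival, burst)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                   # idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # ratio = (waiting + service) / service, waiting = time - arrival
        name, arrival, burst = max(
            ready, key=lambda p: ((time - p[1]) + p[2]) / p[2])
        remaining.remove((name, arrival, burst))
        order.append(name)
        time += burst                   # runs to completion
    return order

print(hrrn([("P1", 0, 10), ("P2", 1, 5), ("P3", 3, 2)]))
```

At t=10, P2's ratio is (9+5)/5 = 2.8 while P3's is (7+2)/2 = 4.5, so P3 runs next even though P2 arrived first.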
Example
Cont…
 RR (P2) = ((8-4) +4)/4 = 2
 RR (P3) = (2+1)/1 = 3
 RR (P4) = (0+2)/2 = 1
Cont…
 RR ( P2) = (5+4)/4 = 2.25
 RR (P4) = (1+2)/2 = 1.5
Priority Scheduling Algorithm in
OS (Operating System)
 In priority scheduling, a priority number is assigned to each process.
 In some systems, the lower the number, the higher the priority; in others, the higher the number, the higher the priority.
 The process with the highest priority among the available processes is given the CPU. Two types of priority scheduling algorithm exist:
 preemptive priority scheduling and non-preemptive priority scheduling.
Priority scheduling
Non Preemptive Priority Scheduling

 In non-preemptive priority scheduling, processes are scheduled according to the priority number assigned to them. Once a process is scheduled, it runs to completion. Generally, the lower the priority number, the higher the priority of the process.
 Example
 In this example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their priorities, arrival times and burst times are given in the table.
Cont…
 During the execution of P1, two more processes, P2 and P3, arrive. Since the priority of P3 is 3, the CPU will execute P3 before P2.
 During the execution of P3, all the remaining processes become available in the ready queue. The process with the lowest priority number is given the CPU next: since P6 has priority number 4, it is executed just after P3.
 After P6, P4 has the lowest priority number among the available processes, so it executes for its whole burst time.
 Since all the jobs are now in the ready queue, they execute according to their priorities. If two jobs have the same priority number, the one with the earlier arrival time is executed first.
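A sketch of the non-preemptive rule above, using the slides' convention that a lower priority number means higher priority; the process table and `priority_np` helper are hypothetical:

```python
# Non-preemptive priority scheduling.
# Each process is (name, arrival, burst, priority_number);
# lower priority number = higher priority, ties broken by arrival time.
def priority_np(processes):
    remaining = list(processes)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        p = min(ready, key=lambda x: (x[3], x[1]))  # priority, then arrival
        remaining.remove(p)
        order.append(p[0])
        time += p[2]                           # runs to completion
    return order

print(priority_np([("P1", 0, 5, 2), ("P2", 1, 3, 1), ("P3", 2, 4, 1)]))
```

Note that P1 runs first even though P2 has a better priority: it was the only arrived process at t=0, and once started it cannot be preempted.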
Preemptive Priority Scheduling
 In preemptive priority scheduling, when a process arrives in the ready queue, its priority is compared with the priorities of the other processes in the ready queue as well as with that of the process currently executing on the CPU. The process with the highest priority among all available processes is given the CPU next.
 The difference from non-preemptive priority scheduling is that, in the preemptive variant, the job being executed can be stopped at the arrival of a higher-priority job.
 Once all the jobs are available in the ready queue, the algorithm behaves as non-preemptive priority scheduling: the scheduled job runs to completion and no further preemption is done.
 Example
 There are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their respective priorities, arrival times and burst times are given in the table below.
Given processes
Scheduling
Cont…
 The completion time of each process is determined with the help of the Gantt chart. The turnaround time and the waiting time are calculated by the following formulas:
 Turn Around Time = Completion Time - Arrival Time
 Waiting Time = Turn Around Time - Burst Time
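The two formulas can be checked with made-up numbers:

```python
# Hypothetical values: a process arrives at t=2, needs 5 units of CPU,
# and the Gantt chart shows it completing at t=12.
arrival, burst, completion = 2, 5, 12
turnaround = completion - arrival   # Turn Around Time = CT - AT
waiting = turnaround - burst        # Waiting Time = TAT - BT
print(turnaround, waiting)          # 10 5
```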
Communication of processes
An operating system can implement both methods of inter-process communication. First, we will discuss the shared-memory method of communication, and then message passing. Communication between processes using shared memory requires the processes to share some variable, and it depends entirely on how the programmer implements it.
One way of communication using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously, and they share some resources or use information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly.
Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to other processes.
Let's discuss an example of communication between processes using the shared-memory method.
Shared vs message passing
i) Shared Memory Method
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The Producer produces some items and the Consumer consumes those items.
The two processes share a common space or memory location known as a buffer, where the items produced by the Producer are stored and from which the Consumer takes items as needed.
There are two versions of this problem. The first is the unbounded buffer problem, in which the Producer can keep producing items with no limit on the size of the buffer. The second is the bounded buffer problem, in which the Producer can produce only up to a certain number of items before it must wait for the Consumer to consume some. We will discuss the bounded buffer problem. First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the number of produced items equals the size of the buffer, the Producer waits for the Consumer to consume some. Similarly, the Consumer first checks for the availability of an item; if no item is available, the Consumer waits for the Producer to produce one. If items are available, the Consumer consumes them.
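A bounded-buffer sketch: Python's `queue.Queue` already provides the blocking behavior described above (`put` blocks when the buffer is full, `get` blocks when it is empty), so it can stand in for the shared buffer. The buffer size and item count are arbitrary:

```python
import queue
import threading

BUFFER_SIZE = 3
buf = queue.Queue(maxsize=BUFFER_SIZE)  # the shared bounded buffer
consumed = []

def producer(n):
    for i in range(n):
        buf.put(i)      # blocks if the buffer already holds 3 items

def consumer(n):
    for _ in range(n):
        consumed.append(buf.get())  # blocks if no item is available

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)
```

Threads stand in for the two processes here; a real shared-memory IPC example would use OS facilities such as shared segments, but the wait/produce/consume logic is the same.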
ii) Message Passing Method
 We now discuss communication between processes via message passing. In this method, processes communicate with each other without using any kind of shared memory.
 If two processes p1 and p2 want to communicate with each other, they proceed as follows:
 Establish a communication link (if a link already exists, there is no need to establish it again).
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
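The send/receive primitives can be modelled with two one-way message links. In this sketch, threads stand in for processes p1 and p2, and the queues are the only channel between them; the `result` list exists only to observe the outcome:

```python
import queue
import threading

link_12 = queue.Queue()   # communication link: p1 -> p2
link_21 = queue.Queue()   # communication link: p2 -> p1
result = []

def send(link, message):
    link.put(message)     # the send(message) primitive

def receive(link):
    return link.get()     # the receive(message) primitive (blocking)

def p1():
    send(link_12, "ping")
    result.append(receive(link_21))   # wait for p2's reply

def p2():
    msg = receive(link_12)            # wait for p1's message
    send(link_21, msg.upper())        # reply over the other link

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)             # ['PING']
```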
Message passing
Advantages of IPC:
 Inter-process communication (IPC) is the mechanism through which processes or threads can communicate and exchange data with each other on a computer or across a network. IPC is an important aspect of modern operating systems, as it enables different processes to work together and share resources, leading to increased efficiency and flexibility.
 Advantages of IPC:
 Enables processes to communicate with each other and share resources, leading to increased efficiency and flexibility.
 Facilitates coordination between multiple processes, leading to better overall system performance.
 Allows for the creation of distributed systems that can span multiple computers or networks.
 Can be used to implement various synchronization and communication protocols, such as semaphores, pipes, and sockets.
Disadvantages of IPC:
 Increases system complexity, making it harder to design, implement, and debug.
 Can introduce security vulnerabilities, as processes may be able to access or modify data belonging to other processes.
 Requires careful management of system resources, such as memory and CPU time, to ensure that IPC operations do not degrade overall system performance.
 Can lead to data inconsistencies if multiple processes try to access or modify the same data at the same time.
 Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism for modern operating systems that enables processes to work together and share resources in a flexible and efficient manner. However, IPC systems must be designed and implemented carefully in order to avoid potential security vulnerabilities and performance issues.
Task Scheduling in an RTOS

1. Preemptive scheduling algorithms allow the RTOS to interrupt a running task and switch to a
higher-priority task. This is important for ensuring that critical tasks meet their deadlines, even if lower-
priority tasks are already running.
2. Non-preemptive scheduling algorithms do not allow the RTOS to interrupt a running task. This can
be useful for tasks that need to run to completion without being interrupted, but it can also lead to missed
deadlines for higher-priority tasks if a lower-priority task takes a long time to complete.
 RTOS task scheduling algorithms also consider the following factors:
a. Task priority: Each task in an RTOS is assigned a priority. Higher-priority tasks are always scheduled
to run before lower-priority tasks.
b. Task deadline: Some RTOS task scheduling algorithms also consider the deadlines of tasks when
making scheduling decisions. For example, a deadline-based scheduling algorithm might give priority to
tasks that are closer to their deadlines.
c. Task execution time: Some RTOS task scheduling algorithms also consider the estimated execution
time of tasks when making scheduling decisions. This can help to ensure that all tasks are scheduled to
complete within their deadlines.
d. Resource availability: Some tasks may require resources, such as memory or peripherals. The RTOS
scheduler will only schedule tasks if the required resources are available.

Task scheduling in RTOS
3. Priority-based scheduling: This is the most common type
of RTOS task scheduling algorithm. It simply assigns a priority
to each task and schedules the task with the highest priority to
run first.
4. Round-robin scheduling: This algorithm gives each task
an equal amount of time to run, regardless of its priority. This
can be useful for tasks that need to be executed periodically,
but it can also lead to missed deadlines for higher-priority
tasks.
5. Deadline-based scheduling: This algorithm schedules
tasks based on their deadlines. Tasks with earlier deadlines are
scheduled to run first. This algorithm can help to ensure that
all tasks meet their deadlines, but it can be complex to
implement.
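A deadline-based scheduler of the kind described in point 5 can be sketched as a per-tick, earliest-deadline-first simulation; the two-task set and the `edf` helper are hypothetical:

```python
# Earliest-deadline-first: at every tick, run the released task with the
# nearest absolute deadline.
def edf(tasks, horizon):
    # tasks: list of (name, release_time, work_units, deadline)
    tasks = [list(t) for t in tasks]   # mutable copies for remaining work
    schedule = []
    for t in range(horizon):
        ready = [x for x in tasks if x[1] <= t and x[2] > 0]
        if not ready:
            schedule.append(None)      # CPU idle this tick
            continue
        job = min(ready, key=lambda x: x[3])  # earliest deadline first
        job[2] -= 1                    # run the chosen task one tick
        schedule.append(job[0])
    return schedule

print(edf([("A", 0, 2, 10), ("B", 1, 2, 4)], horizon=5))
# → ['A', 'B', 'B', 'A', None]
```

Task B preempts A at t=1 because its deadline (4) is nearer than A's (10), so both tasks finish on time.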
Process Synchronization in OS
(Operating System)
 When two or more processes cooperate with each other, their order of execution must be preserved; otherwise there can be conflicts in their execution, and inappropriate outputs can be produced.
 A cooperative process is one that can affect, or be affected by, the execution of another process. Such processes need to be synchronized so that their order of execution can be guaranteed.
 The procedure involved in preserving the appropriate order of execution of cooperative processes is known as Process Synchronization. There are various synchronization mechanisms that are used to synchronize processes.
 Race Condition
 A race condition typically occurs when two or more threads read, write and possibly make decisions based on memory that they are accessing concurrently.
Critical Section
 The regions of a program that access shared resources and may cause race conditions are called the critical section.
 To avoid race conditions among processes, we need to ensure that only one process at a time can execute within the critical section.
Critical Section Problem in OS
(Operating System)
 The critical section is the part of a program that accesses shared resources. That resource may be any resource in a computer: a memory location, a data structure, the CPU or any I/O device.
 Because the critical section must not be executed by more than one process at the same time, the operating system faces difficulty in allowing and disallowing processes to enter it.
 The critical section problem is about designing a set of protocols that ensure that a race condition among the processes can never arise.
 In order to synchronize the cooperative processes, our main task is to solve the critical section problem. We need to provide a solution such that the following conditions are satisfied.
Requirements of Synchronization
mechanisms
 Primary
 Mutual Exclusion
 Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside the critical section, then no other process may enter the critical section.
Progress
 Progress means that if one process doesn't need to execute in the critical section, it should not stop other processes from getting into the critical section.
 Secondary
 Bounded Waiting
 We should be able to bound the waiting time for every process to get into the critical section. No process should wait endlessly to enter the critical section.
 Architectural Neutrality
 Our mechanism must be architecturally neutral: if our solution works on one architecture, it should also run on other architectures.
Lock Variable
 This is the simplest synchronization mechanism: a software mechanism implemented in user mode. It is a busy-waiting solution that can be used for more than two processes.
 In this mechanism, a lock variable is used. The lock can take two values, 0 or 1. Lock value 0 means that the critical section is vacant, while lock value 1 means that it is occupied.
 A process that wants to get into the critical section first checks the value of the lock variable. If it is 0, the process sets the lock to 1 and enters the critical section; otherwise it waits.
 The pseudo code of the mechanism looks like the following.
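The slide's pseudo code listing is not reproduced here, so the sketch below reconstructs the described logic in Python. Note that the test-then-set in the entry section is deliberately non-atomic; that is exactly the flaw analyzed in the following slides:

```python
# Lock-variable mechanism: 0 = critical section vacant, 1 = occupied.
# This reproduces the described logic, NOT a safe implementation.
lock = 0

def enter_critical_section():
    global lock
    while lock == 1:     # entry section: busy-wait while occupied
        pass
    lock = 1             # claim the critical section (not atomic!)

def exit_critical_section():
    global lock
    lock = 0             # exit section: mark the section vacant

enter_critical_section() # succeeds immediately: lock was 0
in_section = lock        # lock reads 1 while inside the critical section
exit_critical_section()
print(in_section, lock)  # 1 0
```

Two processes can both observe lock == 0 between the check and the assignment, which is why this mechanism fails to guarantee mutual exclusion.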
Cont…
 Looking at the pseudo code, we find three sections: the entry section, the critical section and the exit section.
 Initially the value of the lock variable is 0. A process that needs to get into the critical section enters the entry section and checks the condition in the while loop.
 The process waits as long as the value of lock is 1 (that is what the while loop implies). Since at the very first attempt the critical section is vacant, the process enters the critical section, setting the lock variable to 1.
 When the process exits from the critical section, it reassigns the value of lock to 0 in the exit section.
 Every synchronization mechanism is judged on the basis of four conditions.
Cont…
 Mutual Exclusion
 Progress
 Bounded Waiting
 Portability
 Of the four parameters, mutual exclusion and progress must be provided by any solution. Let's analyze this mechanism on the basis of the above-mentioned conditions.
 Mutual Exclusion
 The lock variable mechanism does not provide mutual exclusion in some cases. This is best seen by looking at the pseudo code from the operating system's point of view, i.e. as the assembly code of the program. Converting the code into assembly language gives:
 1. Load Lock, R0
 2. CMP R0, #0
 3. JNZ Step 1
 4. Store #1, Lock
 5. Store #0, Lock
Semaphores in OS (Operating System)
 To get rid of the problem of wasted wake-up signals, Dijkstra proposed an approach that involves storing all the wake-up calls.
 Dijkstra states that, instead of giving the wake-up calls directly to the consumer, the producer can store the wake-up call in a variable. Any of the consumers can read it whenever it needs to do so.
 A semaphore is the variable that stores all the wake-up calls being transferred from producer to consumer.
 It is a variable on which read, modify and update happen atomically in kernel mode.
 A semaphore cannot be implemented in user mode, because a race condition may arise when two or more processes try to access the variable simultaneously. It always needs support from the operating system to be implemented.
 Depending on the situation, semaphores can be divided into two categories:
 Counting Semaphore
 Binary Semaphore or Mutex
Counting Semaphore
 There are scenarios in which more than one process needs to execute in the critical section simultaneously. A counting semaphore can be used when we need more than one process in the critical section at the same time.
 The programming code of a semaphore implementation is shown below; it includes the structure of the semaphore and the logic by which entry to and exit from the critical section are performed.
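Since the listing itself is not reproduced on the slide, here is a sketch of a counting semaphore built on a condition variable; the class name and its wait/signal API are our own reconstruction:

```python
import threading

# Counting semaphore: up to `value` processes may be inside the critical
# section at once.
class CountingSemaphore:
    def __init__(self, value):
        self.value = value               # number of free slots
        self._cond = threading.Condition()

    def wait(self):                      # P / down: entry to the section
        with self._cond:
            while self.value == 0:       # block while no slot is free
                self._cond.wait()
            self.value -= 1              # take a slot

    def signal(self):                    # V / up: exit from the section
        with self._cond:
            self.value += 1              # free a slot
            self._cond.notify()          # wake one blocked waiter

sem = CountingSemaphore(2)               # two slots in the critical section
sem.wait(); sem.wait()                   # both slots taken, value is now 0
sem.signal()                             # one slot released
print(sem.value)                         # 1
```

A third `wait()` while the value was 0 would block until some holder called `signal()`, which is exactly the stored-wake-up behavior described above.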
Binary Semaphore or Mutex
 The counting semaphore did not provide mutual exclusion, because we had a set of processes that were allowed to execute in the critical section simultaneously.
 A binary semaphore, however, strictly provides mutual exclusion. Instead of having several slots available in the critical section, we can have at most one process in the critical section. The semaphore can take only two values, 0 or 1.
 Let's see a programming implementation of a binary semaphore.
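A corresponding binary-semaphore sketch (again a reconstruction, not the slide's listing); the usage example shows four threads updating a shared counter under mutual exclusion:

```python
import threading

# Binary semaphore (mutex): value is only ever 0 or 1, so at most one
# process can be inside the critical section.
class BinarySemaphore:
    def __init__(self):
        self.value = 1                   # 1 = free, 0 = taken
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while self.value == 0:       # block while the section is taken
                self._cond.wait()
            self.value = 0               # take the single slot

    def signal(self):
        with self._cond:
            self.value = 1               # release the slot
            self._cond.notify()          # wake one blocked waiter

counter = 0
mutex = BinarySemaphore()

def worker():
    global counter
    for _ in range(1000):
        mutex.wait()                     # enter critical section
        counter += 1                     # shared update, now exclusive
        mutex.signal()                   # leave critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                           # 4000: no updates were lost
```

Because every increment happens inside the wait/signal pair, the final count is exactly 4 × 1000, which is the mutual-exclusion guarantee the slide describes.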