
UNIT-4 (KOE-062) EMBEDDED SYSTEM

Interrupt Handling and Time Management in RTOS

Interrupt Handling
Interrupt handling in Real-Time Operating Systems (RTOS) is the mechanism by
which the OS handles unexpected events that occur during the execution of a task.
These events can be caused by hardware or software interrupts.

An interrupt is a signal sent to the CPU by a hardware device or software application, indicating that it requires immediate attention. When an interrupt is received, the CPU temporarily suspends the current task and begins executing an interrupt handler or interrupt service routine (ISR) to handle the interrupt.

RTOS provides interrupt handling mechanisms to ensure timely and efficient processing of interrupts. This ensures that the system remains responsive to external events and that tasks are executed in a predictable manner.

Types of Interrupts
There are two types of interrupts in RTOS:
 Maskable interrupts: These interrupts can be enabled or disabled by the OS. The
OS can choose to handle or ignore these interrupts, depending on the current
system state.
 Non-maskable interrupts: These interrupts cannot be disabled by the OS. They
are typically associated with critical events that require immediate attention,
such as power failures or hardware faults.
Interrupt Handling Methods
RTOS uses the following methods to handle interrupts:

 Polling: The OS periodically checks for the occurrence of an interrupt by querying the status of the interrupt source. If an interrupt is detected, the OS begins executing the ISR.
 Interrupt service routine (ISR): A special function that is executed in response
to an interrupt. The ISR performs the necessary processing to handle the
interrupt and then returns control to the OS.
 Interrupt handler: A component of the OS that manages the ISR and ensures that
the interrupt is handled in a timely and efficient manner.

 Interrupt: An interrupt is like a shopkeeper. If one needs a service or product, he goes to him and apprises him of his needs. In the case of interrupts, when the flags or signals are received, they notify the controller that they need to be serviced.

 Polling: The polling method is like a salesperson. The salesman goes from door to door while requesting to buy a product or service. Similarly, the controller keeps monitoring the flags or signals one by one for all devices and provides service to whichever component needs its service.

Time Management
Time management is the process of scheduling tasks and managing time-critical
resources in RTOS. It involves ensuring that tasks are executed in a predictable
manner, and that the system meets its timing requirements.

RTOS provides various mechanisms for time management, including:

 Clocks and timers: Hardware components that generate periodic interrupts to indicate the passage of time. The OS can use these interrupts to schedule tasks and manage time-critical resources. (A small host-system timer sketch follows this list.)
 Task scheduling: The process of determining which task should be executed at a
given time. RTOS provides various scheduling algorithms, such as rate-monotonic
scheduling and deadline-monotonic scheduling, to ensure timely execution of
tasks.
 Time-triggered scheduling: A scheduling approach in which tasks are executed
at fixed intervals of time. This ensures predictable and deterministic execution of
tasks.
 Priority-based scheduling: A scheduling approach in which tasks are assigned a
priority based on their importance and time-criticality. The OS schedules tasks
based on their priority, with higher-priority tasks being executed before lower-
priority tasks.
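As a concrete host-system illustration of the clock-tick mechanism, the hedged C sketch below uses the POSIX setitimer() call to deliver SIGALRM at a fixed 100 ms period, much as a hardware timer raises the periodic tick interrupt that drives an RTOS scheduler. The period and tick count are illustrative choices, not values any particular RTOS mandates.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* A periodic "tick" via setitimer: the kernel delivers SIGALRM at a
   fixed interval, standing in for a hardware timer interrupt. */
static volatile sig_atomic_t ticks = 0;

static void on_tick(int signo)
{
    (void)signo;
    ticks++;              /* an RTOS would run its scheduler from here */
}

int main(void)
{
    struct itimerval tv = {
        .it_interval = { 0, 100000 },    /* period: 100 ms */
        .it_value    = { 0, 100000 }     /* first expiry   */
    };

    signal(SIGALRM, on_tick);
    setitimer(ITIMER_REAL, &tv, NULL);

    while (ticks < 10)                   /* run for about one second */
        pause();                         /* sleep until the next tick */
    printf("observed %d timer ticks\n", (int)ticks);
    return 0;
}

Each tick is the point where an RTOS would update time-outs, wake delayed tasks, and decide whether to preempt the running task.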
In summary, interrupt handling and time management are critical components of
RTOS design. Effective interrupt handling mechanisms ensure timely and efficient
processing of interrupts, while time management mechanisms ensure predictable
and timely execution of tasks. By properly managing interrupts and time-critical
resources, RTOS can ensure real-time responsiveness and predictable system
behavior.

Interrupt routines in a real-time operating system (RTOS) environment handle interrupt source calls. They must follow two rules that are not applicable to task code:

 An interrupt routine cannot call any RTOS functions that might block.
 An interrupt routine cannot call any RTOS functions that might cause the RTOS to switch
tasks unless the RTOS knows that an interrupt routine is executing.
Interrupt routines have higher priorities than OS functions and application tasks. The
elapsed time of an ISR should be short compared to the intervals between interrupts to
avoid delays in processing other functions.

An RTOS can provide for two levels of interrupt service routines (ISRs): a fast level ISR
(FLISR) and a slow level ISR (SLISR). The FLISR is also called hardware interrupt ISR, and
the SLISR is called software interrupt ISR. In Windows CE, the SLISR is called interrupt
service thread (IST).
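The split between a fast, minimal handler and deferred task-level processing can be sketched on a host system. In the example below, a POSIX SIGALRM stands in for the hardware interrupt (purely an illustrative assumption): the signal handler plays the role of the FLISR and only records the event, while the deferred work, the SLISR/IST part, runs at task level, where blocking services would be permitted.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Set by the "ISR"; sig_atomic_t guarantees atomic reads and writes. */
static volatile sig_atomic_t irq_pending = 0;

/* Stand-in for the fast-level ISR: do the minimum and return.
   No blocking calls and no heavy work belong here. */
static void isr(int signo)
{
    (void)signo;
    irq_pending = 1;
}

int main(void)
{
    signal(SIGALRM, isr);
    alarm(1);                  /* the "hardware" raises an interrupt in 1 s */

    for (;;) {                 /* task-level code */
        if (irq_pending) {
            irq_pending = 0;
            /* Deferred (slow-level) processing happens here, at task
               level, where blocking RTOS services would be allowed. */
            printf("interrupt serviced at task level\n");
            return 0;
        }
        pause();               /* idle until a signal arrives */
    }
}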

Here are the steps that occur when an interrupt occurs:

1. The OS finishes the critical code until the preemption point.
2. The OS saves the context of the previous task onto a stack.
3. The OS calls the ISR routine for the interrupt.
4. The ISR can post an event or message to the OS to initiate a waiting task.
5. The OS initiates the waiting task(s) based on their priorities.
Edge Triggering vs. Level Triggering

Interrupt modules are of two types − level-triggered or edge-triggered.

Level Triggered | Edge Triggered

A level-triggered interrupt module always generates an interrupt whenever the level of the interrupt source is asserted. | An edge-triggered interrupt module generates an interrupt only when it detects an asserting edge of the interrupt source. The edge gets detected when the interrupt source level actually changes; it can also be detected by periodic sampling, where an asserted level is seen after a de-asserted previous sample.

If the interrupt source is still asserted when the firmware interrupt handler handles the interrupt, the interrupt module will regenerate the interrupt, causing the interrupt handler to be invoked again. | Edge-triggered interrupt modules can be acted on immediately, no matter how the interrupt source behaves.

Level-triggered interrupts are cumbersome for firmware. | Edge-triggered interrupts keep the firmware's code complexity low, reduce the number of conditions for firmware, and provide more flexibility when interrupts are handled.

Low- and high-level ISRs

Low-level ISR

A low-level interrupt service routine (LISR) executes as a normal ISR, which includes using
the current stack. Nucleus RTOS saves context before calling an LISR and restores context
after the LISR returns. Therefore LISRs may be written in C and may call other C routines.
However, there are only a few Nucleus RTOS services available to an LISR. If the interrupt
processing requires additional Nucleus RTOS services, a high-level interrupt service
routine (HISR) must be activated. Nucleus RTOS supports nesting of multiple LISRs.
High-level ISR

HISRs are created and deleted dynamically. Each HISR has its own stack space and its own
control block. The memory for each is supplied by the application. Of course, the HISR must
be created before it is activated by an LISR.

Since an HISR has its own stack and control block, it can be temporarily blocked if it tries to
access a Nucleus RTOS data structure that is already being accessed.

There are three priority levels available to HISRs. If a higher priority HISR is activated
during processing of a lower priority HISR, the lower priority HISR is preempted in much
the same manner as a task gets preempted. HISRs of the same priority are executed in the
order in which they were originally activated. All activated HISRs are processed before
normal task scheduling is resumed.

Multitasking is the ability of a computer system to manage multiple tasks or processes at the same time.

Multitasking can be of two types: preemptive and non-preemptive.

Scheduling Schemes

Scheduling is the method used by the operating system to allocate resources to processes.

In real-time systems, the scheduler is considered the most important component, and it is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time of each of the associated processes rather than to handle each deadline.

A better approach is designed by combining both preemptive and non-preemptive scheduling. This can be done by introducing time-based interrupts in priority-based systems, which means the currently running process is interrupted on a time-based interval, and if a higher-priority process is present in the ready queue, it is executed by preempting the current process.

Based on schedulability, implementation (static or dynamic), and the result (self or dependent) of analysis, scheduling algorithms are classified as follows:

 Static table-driven approaches
 Static priority-driven preemptive approaches
 Dynamic planning-based approaches
 Dynamic best effort approaches

Dynamic best effort approaches consider deadlines instead of feasible schedules; therefore, a task is aborted if its deadline is reached. This approach is widely used in most real-time systems.
There are different types of scheduling schemes such as FIFO, SJF, RR, and priority-based
scheduling.

Real-time operating systems (RTOS) use scheduling algorithms to manage tasks. The most
common RTOS task scheduling algorithms are:

Priority-based scheduling

Assigns a priority to each task and schedules the highest priority task first. This algorithm
is easy to implement and doesn't require information about job release and execution
times.

Priorities can be either dynamic or static. Static priorities are allocated during creation,
whereas dynamic priorities are assigned depending on the behavior of the processes while
in the system. To illustrate, the scheduler could favor input/output (I/O) intensive tasks,
which lets expensive requests be issued as soon as possible.

Priorities may be defined internally or externally. Internally defined priorities make use of some measurable quantity to calculate the priority of a given process. In contrast, external priorities are defined using criteria beyond the operating system (OS), which can include the significance of the process, the type and amount of resources being utilized, user preference, and business or other factors such as politics.

Round-robin scheduling

Gives each task an equal amount of time to run, regardless of priority. This can be useful for
periodic tasks but can lead to missed deadlines for higher-priority tasks.

Round Robin(RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin(RR) scheduling, preemption is
added which enables the system to switch between processes.

 A fixed time, called a quantum, is allotted to each process for execution.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
 This algorithm is simple and easy to implement, and most importantly, it is starvation-free, as all processes get a fair share of the CPU.
 It is important to note that the length of the time quantum is generally from 10 to 100 milliseconds.

Some important characteristics of the Round Robin(RR) Algorithm are as follows:

 The Round Robin scheduling algorithm resides under the category of preemptive algorithms.
 It is one of the oldest, easiest, and fairest algorithms.
 It is a real-time algorithm because it responds to an event within a specific time limit.
 The time slice assigned should be the minimum a specific task needs to be processed, though it may vary for different operating systems.
 It is a hybrid model and is clock-driven in nature.
 It is a widely used scheduling method in traditional operating systems.

Important terms

1. Completion Time: the time at which a process completes its execution.
2. Turn Around Time: the time difference between completion time and arrival time. The formula to calculate it is:

Turn Around Time = Completion Time – Arrival Time

3. Waiting Time (W.T.): the time difference between turn around time and burst time, calculated as:

Waiting Time = Turn Around Time – Burst Time
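A minimal C sketch ties these formulas together: it simulates Round Robin with a 2-unit quantum, under the simplifying assumption that all processes arrive at time 0, and with illustrative burst times.

#include <stdio.h>

int main(void)
{
    int burst[] = {5, 3, 8};           /* illustrative burst times */
    int n = 3, quantum = 2;
    int rem[3], completion[3], t = 0, done = 0;

    for (int i = 0; i < n; i++) rem[i] = burst[i];

    while (done < n) {                 /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int run = rem[i] < quantum ? rem[i] : quantum;
            t += run;                  /* process runs for one quantum  */
            rem[i] -= run;
            if (rem[i] == 0) { completion[i] = t; done++; }
        }
    }
    for (int i = 0; i < n; i++) {
        int tat  = completion[i];      /* arrival = 0, so TAT = completion */
        int wait = tat - burst[i];     /* W.T. = TAT - burst time          */
        printf("P%d: completion=%d TAT=%d wait=%d\n", i, completion[i], tat, wait);
    }
    return 0;
}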

Deadline-based scheduling

Schedules tasks based on their deadlines, with earlier deadlines running first.

The Deadline Scheduler is an I/O scheduler for the Linux kernel that guarantees a start service time for each request. It imposes deadlines on all I/O operations in order to prevent starvation of requests. Two deadline queues, one for read requests and one for write requests (sorted by their deadlines), are maintained. For every new request, the scheduler selects which queue will serve it. Read queues are given higher priority than write queues, because processes usually block during read operations.

The deadline scheduler then checks whether the first request in the deadline queue has expired; if so, it is served immediately. Otherwise, the scheduler serves a batch of requests following the chosen request in the sorted queue.

By default, the expiration time of read requests is 500 ms, and that of write requests is 5 seconds.

Let us understand it with a scenario: suppose there are three processes P1, P2 and P3 with respective deadlines. The deadline scheduler then guarantees that if P1 makes a request at time t, the request will be served within its deadline rather than being postponed indefinitely.

First Come, First Served (FCFS)

A non-preemptive algorithm that doesn't assign priority levels to tasks. The first task to
enter the scheduling queue is put into the running state first.

FCFS Scheduling algorithm automatically executes the queued processes and requests in
the order of their arrival. It allocates the job that first arrived in the queue to the CPU, then
allocates the second one, and so on. FCFS is the simplest and easiest CPU scheduling
algorithm, managed with a FIFO queue. FIFO stands for First In First Out. The FCFS
scheduling algorithm places the arriving processes/jobs at the very end of the queue. So,
the processes that request the CPU first get the allocation from the CPU first. As any process
enters the FIFO queue, its Process Control Block (PCB) gets linked with the queue’s tail. As
the CPU becomes free, the process at the very beginning gets assigned to it. Even if the CPU starts working on a longer job, many shorter ones have to wait behind it. The FCFS scheduling algorithm is used in most batch operating systems. (A short simulation appears after the characteristics list below.)
Characteristics of FCFS Scheduling
 FCFS follows a non-preemptive approach, meaning, once the CPU lets a process take
control, it won’t preempt until the job terminates.
 It follows the criteria of arrival time for the selection process.
 The processor selects the very first job in the ready queue, and it runs till
completion.
 It is a non-preemptive scheduling algorithm.
 All the jobs execute on a first-come, first-serve basis.
 Smaller processes take the lead in case of a tie.
 The general wait time is quite high because of the modus operandi that FCFS
follows.
 The algorithm is feasible to use and implement in the long run.
 The process is not very complicated, thus easy to understand.
 Every implementation follows the First In First Out (FIFO) ready queue.
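As a hedged illustration of the characteristics above, the short C program below serves three illustrative processes strictly in arrival order and computes their completion, turnaround, and waiting times under FCFS.

#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2};           /* illustrative arrival times */
    int burst[]   = {6, 4, 2};           /* illustrative burst times   */
    int n = 3, t = 0;

    for (int i = 0; i < n; i++) {        /* queue order == arrival order */
        if (t < arrival[i]) t = arrival[i];  /* CPU idles until arrival  */
        t += burst[i];                       /* runs to completion       */
        int tat  = t - arrival[i];
        int wait = tat - burst[i];
        printf("P%d: completion=%d TAT=%d wait=%d\n", i + 1, t, tat, wait);
    }
    return 0;
}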

Benefits of FCFS Scheduling

 The algorithm is easy to understand and implement.
 The process is simple, thus easy to handle and comprehend.
 FCFS is a very fair algorithm since no priority is involved: the process that comes first gets served first.
 The implementation follows the FIFO queue for organizing the data structure, thus simplifying all the processes.
 FCFS doesn’t lead to any starvation.
 The scheduling is non-preemptive. Thus, no process gets paused.
 It is the most simplified form of CPU scheduling algorithm- easy to program and
operate.
 The FCFS algorithm is better for the processes with comparatively large burst time
since it involves no context switch between the processes.

Limitations of FCFS Scheduling

 The FCFS method is poor in performance.
 Its general wait time gets too high due to the non-preemptive scheduling.
 Once a process gets allocated to the CPU, it never releases the CPU until the end of
execution.
 The Convoy effect takes place since smaller processes need to wait for one large process at the front to finish first.
 Since it doesn’t guarantee a short response time, it may not be appropriate for
interactive systems.
 FCFS doesn’t prioritize any process or its burst time.
 The simplicity makes FCFS very inefficient.
 This algorithm does not comply with the time-sharing systems.

Shortest Job First (SJF)

A non-preemptive algorithm that requires the scheduler to know each task's execution time. This scheduling algorithm schedules processes according to their respective burst times.

In SJF scheduling, the process with the lowest burst time is always selected from the list of available processes that are ready in the queue, and the selected processes are scheduled to run one after another. Proper scheduling helps the OS conduct each operation (execution of a program) in sequence.

Without breaking this order, the OS keeps track of each process's burst time, and the systematic execution of each process lets a user work seamlessly.

Sometimes it is difficult to predict a process's burst time correctly, and the algorithm is hard to implement without a proper prediction; the operating system therefore needs a way to estimate burst times automatically.

In shortest job first scheduling, the processor always compares the processes waiting to be executed to choose the one with the shortest burst time, and makes the comparison again after the ongoing process ends.
To understand it thoroughly, consider a set of processes arriving at a processor with given arrival and burst times, as in the sketch below. (The original notes include a table of processes and a Gantt chart depicting how they are processed under shortest job first; the figure is not reproduced here.)
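A minimal C sketch of non-preemptive SJF selection, with illustrative arrival and burst times: whenever the CPU becomes free, the shortest ready job is chosen and runs to completion.

#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};        /* illustrative data */
    int burst[]   = {7, 4, 1, 3};
    int n = 4, t = 0, finished = 0, done[4] = {0};

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* shortest ready job wins */
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t++; continue; } /* nothing ready: CPU idles */
        t += burst[pick];                /* non-preemptive: runs to end */
        done[pick] = 1; finished++;
        printf("P%d: completion=%d TAT=%d wait=%d\n",
               pick + 1, t, t - arrival[pick],
               t - arrival[pick] - burst[pick]);
    }
    return 0;
}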

In the Shortest Remaining Time First (SRTF) scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Since the currently executing process is, by definition, the one with the shortest remaining time, and since that time only decreases as execution progresses, a process will always run until it completes or until a new process arrives that requires a smaller amount of time.

First In First Out (FIFO):

FIFO stands for First In First Out; it is the queueing discipline on which FCFS scheduling is built. The scheduler places arriving processes/jobs at the very end of the queue, so the processes that request the CPU first get the allocation from the CPU first. As any process enters the FIFO queue, its Process Control Block (PCB) gets linked with the queue's tail.

Scheduling in RTOS can help maximize system throughput and performance. It can also
ensure that time-critical tasks are executed promptly, which improves system
responsiveness and reliability.

Types of Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) in a single
computer system.

The two types of multiprocessing are symmetric and asymmetric.

Symmetric Multiprocessing

In symmetric multiprocessing, all processors are equal and share resources such as
memory and I/O devices.

Each processor can execute any process, making it a more efficient use of resources.

In symmetric multiprocessing (SMP), all processors are treated as equals. They can run any task, and they all share a common memory space. This allows for high performance and scalability, but it also requires careful synchronization to avoid conflicts.

SMP offers improved performance through load balancing and task distribution.

Asymmetric Multiprocessing

In asymmetric multiprocessing, one processor is designated as the master, and all other
processors are slaves.

The master processor manages the system resources and distributes tasks to the slave
processors.

In asymmetric multiprocessing (AMP), each processor is given a specific role or set of tasks.
One processor often serves as the "master" and the others serve as "slaves." This can
simplify synchronization and reduce overhead, but it may not scale as well as SMP.

Less flexible than SMP but can be simpler to implement.

Difference between Multitasking and Multiprocessing

1. Multi-tasking :

Multi-tasking is the logical extension of multiprogramming. In this system, the CPU executes multiple jobs by switching among them, typically using a small time quantum, and these switches occur so frequently that the users can interact with each program while it is running. Multitasking is further classified into two categories: Single User & Multiuser.
2. Multiprocessing :

Multiprocessing is a system that has two or more than two processors. In this, CPUs are
added for increasing computing speed of the system. Because of Multiprocessing, there are
many processes that are executed simultaneously. Multiprocessing is further classified into
two categories: Symmetric Multiprocessing and Asymmetric Multiprocessing.
Difference between Multitasking and Multiprocessing :

S. No. | Multi-tasking | Multiprocessing

1. | The execution of more than one task simultaneously is known as multitasking. | The availability of more than one processor per system, which can execute several sets of instructions in parallel, is known as multiprocessing.
2. | The number of CPUs is one. | The number of CPUs is more than one.
3. | It takes a moderate amount of time for job processing. | It takes less time for job processing.
4. | In this, one job is executed at a time. | In this, more than one process can be executed at a time.
5. | It is economical. | It is less economical.
6. | The number of users is more than one. | The number of users can be one or more than one.
7. | Throughput is moderate. | Throughput is maximum.
8. | Its efficiency is moderate. | Its efficiency is maximum.
9. | It is of two types: single-user multitasking and multiple-user multitasking. | It is of two types: Symmetric Multiprocessing and Asymmetric Multiprocessing.

Cooperative Multitasking

Cooperative multitasking is a type of multitasking where processes voluntarily give up control of the CPU.

The operating system does not forcefully take control of the CPU from a process.
Preemptive Scheduling

Preemptive scheduling is a method that may be used when a process switches from a
running state to a ready state or from a waiting state to a ready state. The resources are
assigned to the process for a particular time and then removed. If the resources still have
the remaining CPU burst time, the process is placed back in the ready queue. The process
remains in the ready queue until it is given a chance to execute again.

When a high-priority process arrives in the ready queue, it doesn't have to wait for the running process to finish its burst time. Instead, the running process is interrupted in the middle of its execution and placed in the ready queue until the high-priority process uses the resources. As a result, each process in the ready queue gets some CPU time. This adds the overhead of switching a process between the running and ready states, but it also increases the flexibility of preemptive scheduling. It may or may not include SJF and priority scheduling.

Let us take an example of preemptive scheduling with four processes: P0, P1, P2, and P3.

Process | Arrival Time | CPU Burst Time (in millisec.)

P0 | 3 | 2
P1 | 2 | 4
P2 | 0 | 6
P3 | 1 | 4
 Firstly, the process P2 comes at time 0. So, the CPU is assigned to process P2.
 When process P2 was running, process P3 arrived at time 1, and the
remaining time for process P2 (5 ms) is greater than the time needed by
process P3 (4 ms). So, the processor is assigned to P3.
 When process P3 was running, process P1 came at time 2, and the remaining
time for process P3 (3 ms) is less than the time needed by processes P1 (4
ms) and P2 (5 ms). As a result, P3 continues the execution.
 When process P3 continues the process, process P0 arrives at time 3. P3's
remaining time (2 ms) is equal to P0's necessary time (2 ms). So, process P3
continues the execution.
 When process P3 finishes, the CPU is assigned to P0, which has a shorter
burst time than the other processes.
 After process P0 completes, the CPU is assigned to process P1 and then to
process P2.
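The walkthrough above follows shortest-remaining-time-first preemption. The small C simulation below reproduces that exact schedule; following the narrative, ties are resolved in favor of the currently running process.

#include <stdio.h>

/* SRTF simulation of the example: P0(arrival 3, burst 2),
   P1(2, 4), P2(0, 6), P3(1, 4). */
int main(void)
{
    int arrival[] = {3, 2, 0, 1};          /* indexed by process number */
    int rem[]     = {2, 4, 6, 4};          /* remaining burst times     */
    int n = 4, t = 0, finished = 0, prev = -1;

    while (finished < n) {
        /* start from the running process so ties do not preempt it */
        int pick = (prev >= 0 && rem[prev] > 0) ? prev : -1;
        for (int i = 0; i < n; i++)
            if (rem[i] > 0 && arrival[i] <= t &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }   /* CPU idle */
        if (pick != prev)
            printf("t=%d: run P%d\n", t, pick);
        prev = pick;
        rem[pick]--;                       /* execute one time unit */
        t++;
        if (rem[pick] == 0) finished++;
    }
    return 0;
}

Running it prints the context-switch points t=0 (P2), t=1 (P3), t=5 (P0), t=7 (P1) and t=11 (P2), matching the bullet-by-bullet walkthrough.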

Advantages of preemptive Scheduling:

 It is a more robust method because a process may not monopolize the processor.
 Each event causes an interruption in the execution of ongoing tasks.
 It improves the average response time.
 It is more beneficial when you use this method in a multi-programming
environment.

Disadvantages of preemptive scheduling:

 It requires the use of limited computational resources.
 It takes more time to suspend the executing process, switch the context, and dispatch the new incoming process.
 If several high-priority processes arrive at the same time, the low-priority process
would have to wait longer.
Non-Preemptive Scheduling

Non-preemptive scheduling is a method that may be used when a process terminates or switches from a running to a waiting state. When a processor is assigned to a process, it keeps the process until the process is eliminated or reaches a waiting state. Once the processor starts executing a process, it must complete it before executing another process; it may not be interrupted in the middle.

When a non-preemptive process with a high CPU burst time is running, the other process
would have to wait for a long time, and that increases the process average waiting time in
the ready queue. However, there is no overhead in transferring processes from the ready
queue to the CPU under non-preemptive scheduling. The scheduling is strict because the
execution process is not even preempted for a higher priority process.

For example:

Let's take the above preemptive scheduling example and solve it in a non-preemptive
manner.

 The process P2 comes at time 0, so the processor is assigned to process P2, which takes 6 ms to execute.
 All of the other processes, P0, P1, and P3, arrive in the ready queue in the meantime, but they all wait until process P2 finishes its CPU burst time.
 After that, the process that came after process P2, i.e., P3, is assigned to the CPU until it finishes its burst time, and P1 follows it.
 When process P1 completes its execution, the CPU is given to process P0.

Advantages of non-preemptive scheduling:

 It provides a low scheduling overhead.
 It is a very simple method.
 It uses less computational resources.
 It offers high throughput.

Disadvantages of non-preemptive Scheduling:

 It has a poor response time for the process.
 A machine can freeze up due to bugs.

Preemptive Scheduling | Non-Preemptive Scheduling

1. | The resources are assigned to a process for a limited time period. | Once resources are assigned to a process, they are held until it completes its burst period or changes to the waiting state.
2. | A process may be paused in the middle of its execution. | Once the processor starts executing a process, it must complete it before executing another; it may not be interrupted in the middle.
3. | When a high-priority process continuously arrives in the ready queue, a low-priority process can starve. | When a process with a long burst time is using the CPU, another process with a shorter burst time can starve.
4. | It is flexible. | It is rigid.
5. | It has costs associated with it. | It has no such costs associated with it.
6. | It has overheads associated with process scheduling. | It has no scheduling overhead.
7. | It affects the design of the operating system kernel. | It doesn't affect the design of the OS kernel.
8. | Its CPU utilization is very high. | Its CPU utilization is low.
9. | Examples: Round Robin and Shortest Remaining Time First. | Examples: FCFS and SJF.

Shared memory

Process communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process
letting another process know that some event has occurred or transferring of data from
one process to another. One of the models of process communication is the shared memory
model.

Shared memory is a feature of an operating system that allows multiple programs to access
the same memory at the same time to communicate with each other and avoid redundant
copies. Shared memory is an efficient way to pass data between programs.

The shared memory in the shared memory model is the memory that can be
simultaneously accessed by multiple processes. This is done so that the processes can
communicate with each other. All POSIX systems, as well as Windows operating systems
use shared memory.

[Figure: shared memory model of process communication]

Advantage of Shared Memory Model

 Memory communication is faster on the shared memory model as compared to the message passing model on the same machine.

Disadvantages of shared memory model:

 Processes must ensure they don't write to the same memory location
 May create synchronization and memory protection problems
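A minimal POSIX sketch of the model (error checking omitted for brevity; the name "/demo_shm" is illustrative): a child process writes into a shared mapping and the parent reads the same bytes.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* create and size a named shared memory object, then map it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                    /* child: writer */
        strcpy(buf, "hello from child");
        return 0;
    }
    wait(NULL);                           /* crude synchronization */
    printf("parent read: %s\n", buf);     /* sees the child's write */
    shm_unlink("/demo_shm");
    return 0;
}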

Task communication:

Task communication in embedded systems is the art of exchanging information between different tasks or modules within a microcontroller or microprocessor system. This is often done using either interrupt-based or interrupt-free communication methods. Interrupt-based communication involves the use of hardware interrupts to signal the completion of a task, while interrupt-free communication relies on a shared memory space between tasks. The choice of communication method depends on factors such as the system requirements, the complexity of the tasks, and the available hardware resources. Proper task communication is essential for ensuring the correct functioning and reliability of an embedded system.

Different types of task communication:-

1. Cooperation by sharing

The processes may cooperate by sharing data, including variables, memory, databases, etc.
The critical section provides data integrity, and writing is mutually exclusive to avoid
inconsistent data.

For example, processes P1 and P2 may cooperate by using shared data such as files, databases, variables, and memory.
2. Cooperation by Communication

The cooperating processes may cooperate by using messages. If every process waits for a
message from another process to execute a task, it may cause a deadlock. If a process does
not receive any messages, it may cause starvation.

For example, processes P1 and P2 may cooperate by exchanging messages to communicate.

Inter-task Communication and Synchronization Options In Embedded/RTOS Systems

There are three broad paradigms for inter-task communications and synchronization:

Task-owned facilities – attributes that an RTOS imparts to tasks that provide communication (input) facilities. The example we will look at in more detail is signals.

Kernel objects – facilities provided by the RTOS which represent stand-alone communication or synchronization facilities. Examples include: event flags, mailboxes, queues/pipes, semaphores and mutexes.

Message passing – a rationalized scheme where an RTOS allows the creation of message objects, which may be sent from one task to another or to several others. This is fundamental to the kernel design and leads to the description of such a product as a “message passing RTOS”.

The facilities that are ideal for each application will vary. There is also some overlap in
their capabilities and some thought about scalability is worthwhile. For example, if an
application needs several queues, but just a single mailbox, it may be more efficient to
realize the mailbox with a single-entry queue. This object will be slightly non-optimal, but
all the mailbox handling code will not be included in the application and, hence, scalability
will reduce the RTOS memory footprint.

Interprocess Communication:

In general, Inter Process Communication is a type of mechanism usually provided by the operating system (or OS). The main aim or goal of this mechanism is to provide communications between several processes. In short, intercommunication allows a process to let another process know that some event has occurred.

Definition

"Inter-process communication is used for exchanging useful information between


numerous threads in one or more processes (or programs)."

OR

“Inter-process communication (IPC) is a mechanism that allows processes to communicate and share data with each other. In an embedded system, IPC is the "glue" that connects different service-providing processes into a cohesive whole. IPC is important for software requirements like performance, modularity, network bandwidth, and latency.”

OR

IPC allows processes to synchronize their actions and exchange information, which enables
them to work together to accomplish a specific task. Processes can communicate with each
other through shared memory or message passing.

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter process communication. Typically, this is provided by
interprocess communication control mechanisms, but sometimes it can also be controlled
by communication processes.
The following methods are used to provide synchronization:

 Mutual Exclusion
 Semaphore
 Barrier
 Spinlock
 Mutual Exclusion:

It is generally required that only one process thread can enter the critical section at a time.
This also helps in synchronization and creates a stable state to avoid the race condition.

Mutex Functionality:

A Mutex or Mutual Exclusion Object is used to allow only one of the processes access to the
resource at a time. The Mutex object allows all processes to use the same resource, but the
resource is accessed by one process at a time. Mutex uses a lock-based approach to handle
critical section issues.

Each time a process requests a resource from the system, the system creates a mutex object
with a unique name or ID. So whenever a process wants to use that resource, it acquires a
lock on the object. After locking, the process uses the resource and eventually releases the
mutex object. Other processes can then create and use mutex objects in the same way.
Advantages of Mutex
 A mutex creates a barrier that prevents two different threads from accessing a resource simultaneously. This prevents a resource from being unavailable when another thread needs it.
 A mutex can also help with code reliability. Resources accessed by a thread can become unavailable if the CPU's memory management fails. By preventing access to a resource at such a time, the system can recover from errors that caused the failure in memory management and still have the resource available.

Disadvantages of Mutex
 It cannot be locked or unlocked by any context other than the context that acquired it.
 Typical implementations can result in busy-wait states that waste CPU time.
 If the thread that acquired the lock goes to sleep or is preempted, other threads may be stuck waiting. This can lead to starvation.
 Only one thread at a time is allowed into the critical section.
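A minimal POSIX-threads sketch of mutex functionality (compile with -pthread): two threads increment a shared counter, and the lock makes each read-modify-write in the critical section atomic. Without the lock, updates could interleave and some increments would be lost.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;                     /* protected shared update */
        pthread_mutex_unlock(&lock);   /* leave critical section  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}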

Semaphore:-

Semaphore is a type of variable that usually controls access to shared resources by several processes. A semaphore is an integer variable S that is initialized with the number of resources present in the system and is used for process synchronization. The value of S is changed using two functions: wait() and signal(). These two functions are used to change the value of a semaphore, but they allow only one process to change the value at any given time; two processes cannot change the value of a semaphore at the same time.

Semaphore is further divided into two types which are as follows:

 Binary Semaphore:
The value of a semaphore variable in binary semaphores is either 0 or 1. The value
of the semaphore variable is initially set to 1, but if a process requests a resource,
the wait() method is invoked, and the value of this semaphore is changed from 1 to
0. When the process has finished using the resource, the signal() method is invoked,
and the value of this semaphore variable is raised to 1. If the value of this semaphore
variable is 0 at a given point in time, and another process wants to access the same
resource, it must wait for the prior process to release the resource. Process
synchronization can be performed in this manner.

Features of binary semaphores:

o A binary semaphore is one with an integer value between 0 and 1.
o It is nothing more than a lock with two possible values: 0 and 1. 0 denotes busy, while 1 denotes free.
o The rationale behind a binary semaphore is that it only enables one process to enter the critical section at a time, thus allowing it to access the shared resource.
o The value 0 indicates that a process or thread is in the critical section, i.e. it is accessing a shared resource, and that the other processes or threads should wait for it to finish. The value 1, on the other hand, indicates that the critical region is free because no process is accessing the shared resource.
o It ensures mutual exclusion by ensuring that no two processes are in the critical section at the same time.
o It cannot ensure bounded waiting because it is only a variable that holds an integer value. It is possible that a process never gets a chance to enter the critical section, causing it to starve. We don't want that to happen.

 Counting Semaphore:
o A counting semaphore is a semaphore whose counter can take multiple values. The value can range over an unrestricted domain.
o It is a structure comprising a variable, known as the semaphore variable, that can take more than two values, and a list of tasks or entities (processes or threads).
o The value of the semaphore variable is the number of processes or threads to be allowed inside the critical section.
o The value of a counting semaphore can range between 0 and N, where N is the number of processes that are free to enter and exit the critical section.
o As mentioned, a counting semaphore can allow multiple processes or threads to access the critical section at once; hence mutual exclusion is not guaranteed.
o Since waiting processes are queued, a counting semaphore can guarantee bounded waiting: a process that cannot enter the critical section waits only for the processes ahead of it to enter and leave, implying that no process will starve.
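A hedged POSIX sketch of a counting semaphore with N = 2 (compile with -pthread; the thread count and sleep are illustrative): at most two of the four threads hold a resource slot at any instant.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;                   /* counting semaphore */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&slots);                 /* wait(): S--, block if S == 0 */
    printf("thread %ld acquired a slot\n", id);
    sleep(1);                         /* use the shared resource */
    printf("thread %ld released a slot\n", id);
    sem_post(&slots);                 /* signal(): S++ */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&slots, 0, 2);           /* N = 2 resource instances */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}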

Barrier:-

A barrier does not allow an individual process to proceed until all the processes reach it. It is used by many parallel languages, and collective routines impose barriers.

Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it does not perform any useful operation (or task).
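A minimal sketch of a spinlock built from a C11 atomic flag (an illustration of the busy-waiting idea, not a production lock): acquiring the lock loops until the flag is released.

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* busy wait: spin until test-and-set finds the flag clear */
    while (atomic_flag_test_and_set(&flag))
        ;
}

static void spin_unlock(void)
{
    atomic_flag_clear(&flag);
}

int main(void)
{
    spin_lock();                         /* enter critical section */
    puts("inside the critical section");
    spin_unlock();                       /* leave critical section */
    return 0;
}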

These are a few different approaches for Inter- Process Communication:

 Pipes
 Shared Memory
 Message Queue
 Direct Communication
 Indirect communication
 Message Passing
 FIFO

Pipe:-
A pipe is a type of data channel that is unidirectional in nature, meaning the data in this channel can move in only a single direction at a time. Still, one can use two channels of this type to send and receive data between two processes. Typically, a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX systems as well as in different versions of Windows operating systems.

A pipe is a communication medium between two or more related or interrelated processes. It can be either within one process or a communication between a child and a parent process. Communication can also be multi-level, such as communication between the parent, the child and the grand-child, etc. Communication is achieved by one process writing into the pipe and the other reading from the pipe. The pipe() system call creates two file descriptors: one for writing into the pipe and another for reading from it.

Pipe mechanism can be viewed with a real-time scenario such as filling water with the pipe
into some container, say a bucket, and someone retrieving it, say with a mug. The filling
process is nothing but writing into the pipe and the reading process is nothing but
retrieving from the pipe. This implies that one output (water) is input for the other
(bucket).
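A minimal POSIX sketch of this analogy (error checking omitted): the child "fills" the pipe by writing and the parent "retrieves" by reading; fd[0] is the read end and fd[1] the write end.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                           /* fd[0] = read, fd[1] = write */
    if (fork() == 0) {                  /* child: writer */
        close(fd[0]);                   /* child does not read */
        write(fd[1], "water in the pipe", 18);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                       /* parent: reader */
    read(fd[0], buf, sizeof buf);
    printf("parent read: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}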
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by multiple
processes simultaneously. It is primarily used so that the processes can communicate with
each other. Therefore the shared memory is used by almost all POSIX and Windows
operating systems as well.

Message Queue:-
In general, several different processes are allowed to read messages from and write messages to the message queue. The messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is used by all operating systems.

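A hedged POSIX message queue sketch (the queue name "/demo_mq" and attributes are illustrative; on Linux, link with -lrt): the message stays in the queue until the receiving process retrieves it.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    char buf[64];

    if (fork() == 0) {                        /* child: sender */
        mq_send(mq, "sensor reading: 42", 19, 0);
        return 0;
    }
    mq_receive(mq, buf, sizeof buf, NULL);    /* blocks until a message
                                                 arrives in the queue */
    printf("received: %s\n", buf);
    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}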

Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each other. By using message passing, processes can communicate with each other without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations:

 send (message)
 receive (message)
Direct Communication:-
In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one
link can exist.

Indirect Communication
Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links. These
shared links can be unidirectional or bi-directional.

FIFO:-
It is a type of general communication between two unrelated processes. It can also be
considered as full-duplex, which means that one process can communicate with another
process and vice versa.

Some other different approaches

Socket:-
It acts as a type of endpoint for receiving or sending data in a network. It can be used for data sent between processes on the same computer or between different computers on the same network. Hence, it is used by several types of operating systems.

File:-
A file is a type of data record or a document stored on the disk and can be acquired on
demand by the file server. Another most important thing is that several processes can
access that file as required or needed.

Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. Typically, they are system messages sent from one process to another. Therefore, they are generally not used for sending data but for issuing remote commands between processes.

Why we need interprocess communication?

There are numerous reasons to use inter-process communication for sharing the data. Here
are some of the most important reasons that are given below:

 It helps to speed up modularity
 Computational speedup
 Privilege separation
 Convenience
 Helps operating system to communicate with each other and synchronize their
actions as well.

MailBoxes:

A mailbox is a storage location in an embedded system that can be used to store a single
variable of type ADDR. Mailboxes are similar to queues and can be used to share data
between tasks. They can also be used for synchronization.

Here are some things to know about mailboxes in embedded systems:

 Mailboxes can be used for asynchronous communication.
 Mailboxes have a fixed number of bits and can be used for small messages.
 A mailbox should include the message itself and a flag that indicates whether the message has been placed (set) or removed (cleared).
 Mailboxes can have different semantics if the RTOS also supports an IPC queue.
 Some RTOSs allow you to choose the number of messages in a mailbox when you create it.
 Some RTOSs allow a certain number of messages in each mailbox, while others allow only one message in a mailbox at a time.
 Mailboxes can be a good choice if you need strong control over prioritization.
 A mailbox's message size is usually fixed and set by the programmer.

You can write to a mailbox, and then read from it or reset it. Trying to send to a full mailbox
or read from an empty one may result in an error or task suspension.
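The sketch below models the concept as a one-message mailbox: the stored value plus a full flag, guarded by a mutex and condition variable. It illustrates the flag-plus-message idea only; it is not any particular RTOS's mailbox API.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    void           *msg;      /* the single stored value (the "ADDR")    */
    bool            full;     /* flag: message placed (set) or removed   */
    pthread_mutex_t lock;
    pthread_cond_t  changed;
} mailbox_t;

static void mbox_post(mailbox_t *mb, void *msg)
{
    pthread_mutex_lock(&mb->lock);
    while (mb->full)                       /* full: sender blocks */
        pthread_cond_wait(&mb->changed, &mb->lock);
    mb->msg  = msg;
    mb->full = true;
    pthread_cond_broadcast(&mb->changed);
    pthread_mutex_unlock(&mb->lock);
}

static void *mbox_pend(mailbox_t *mb)
{
    pthread_mutex_lock(&mb->lock);
    while (!mb->full)                      /* empty: reader blocks */
        pthread_cond_wait(&mb->changed, &mb->lock);
    void *msg = mb->msg;
    mb->full = false;
    pthread_cond_broadcast(&mb->changed);
    pthread_mutex_unlock(&mb->lock);
    return msg;
}

int main(void)
{
    mailbox_t mb = { NULL, false,
                     PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
    int value = 42;

    mbox_post(&mb, &value);                /* write to the mailbox */
    printf("pended: %d\n", *(int *)mbox_pend(&mb));
    return 0;
}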

Priority Inversion and Priority Inheritance

In one line, Priority Inversion is a problem while Priority Inheritance is a solution. Priority
Inversion means that the priority of tasks gets inverted and Priority Inheritance means that
the priority of tasks gets inherited. Both of these phenomena happen in priority scheduling.
Basically, in Priority Inversion, the higher priority task (H) ends up waiting for the middle
priority task (M) when H is sharing a critical section with the lower priority task (L) and L
is already in the critical section. Effectively, H waiting for M results in inverted priority i.e.
Priority Inversion. One of the solutions to this problem is Priority Inheritance. In Priority
Inheritance, when L is in the critical section, L inherits the priority of H at the time when H
starts pending for the critical section. By doing so, M doesn’t interrupt L and H doesn’t wait
for M to finish. Please note that inheriting of priority is done temporarily i.e. L goes back to
its old priority when L comes out of the critical section.
Priority Inversion:

Priority inversion occurs when a low-priority task blocks a high-priority task because the
low-priority task holds a lock, such as a mutex or semaphore, needed by the high-priority
task. Here are some possible solutions to priority inversion:

 Priority inheritance: This is the most common method for dealing with priority
inversion. It promotes the priority of any process when it requests a resource from
the operating system. The low-priority task that holds the resource inherits the
priority of the highest-priority task that is waiting for that resource. This way, the
low-priority task can finish its critical section and release the resource faster,
allowing the high-priority task to resume. The operating system or an application
can implement priority inheritance using a special type of lock or semaphore that
supports this feature.

 Priority ceiling protocol: This gives each shared resource a predefined priority
ceiling. When a task acquires a shared resource, the task is hoisted (has its priority
temporarily raised) to the priority ceiling of that resource. This synchronization
protocol avoids unbounded priority inversion and mutual deadlock due to wrong
nesting of critical sections.

 Random boosting: This strategy is used by the scheduler in Microsoft Windows to avoid deadlock due to priority inversion. Ready threads holding locks are randomly boosted in priority and allowed to run long enough to exit the critical section.

 Avoid sharing resources between tasks of differing priorities

 Raise the low-priority task's priority while it is in the critical section

 Use non-blocking algorithms such as read-copy-update
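On systems that implement the POSIX realtime-threads options, priority inheritance can be requested per mutex. The hedged sketch below shows only the attribute setup; the lock usage is illustrative.

#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    /* A low-priority thread holding `lock` temporarily inherits the
       priority of the highest-priority thread blocked on it. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);
    puts("holder inherits waiters' priority while the lock is held");
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

Selecting PTHREAD_PRIO_PROTECT together with pthread_mutexattr_setprioceiling() would choose the priority ceiling protocol instead.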

Priority Inversion | Priority Inheritance

1. | In priority inversion, a higher-priority process is preempted by a lower-priority process. | It is a method used to eliminate the problems of priority inversion.
2. | It is the inversion of the priorities of two processes waiting for a resource. | With its help, the scheduling algorithm raises the priority of a process to the maximum priority of any other process waiting for the same resource.
3. | It can cause the system to malfunction. | Priority inheritance can lead to poorer worst-case behavior when there are nested locks.
4. | Priority inversions lead to the implementation of corrective measures. | Priority inheritance can be implemented so that there is no penalty when locks do not contend.
5. | To deal with the problem of priority inversion, we have several techniques such as priority ceiling, random boosting, etc. | It is the basic technique at the application level for managing priority inversion.

RT Linux and Vx works:

VxWorks and RTLinux are both real-time operating systems (RTOS) that are used in a
variety of applications. VxWorks is a commercial, preemptive RTOS that's designed for
embedded systems. RTLinux provides deterministic, hard real-time performance.

Here are some differences between VxWorks and RTLinux:

 Context switch time: VxWorks has a more deterministic context switch time than
RTLinux.
 Real-time performance: VxWorks is a preemptive RTOS that prioritizes real-time
embedded applications. RTLinux provides deterministic, hard real-time
performance, which is suitable for time-critical tasks.
 Uses: VxWorks is used in a variety of applications, including network and communication devices, automotive systems, and consumer products. RTLinux is suitable for a wide range of applications that require real-time performance.

Here are some advantages of VxWorks:

 Lower system development costs
 Broad connectivity
 Complete security for connected devices
 Expandable and upgradable architecture
 Lower risk and fast integration of third-party technology
 Easier upgrades and less testing

Here are some advantages of RTLinux:

 Deterministic, hard real-time performance
 Minimal latency and predictable timing

µC/OS-II
