Unit 4 Embedded System Notes
Interrupt Handling
Interrupt handling in Real-Time Operating Systems (RTOS) is the mechanism by
which the OS handles unexpected events that occur during the execution of a task.
These events can be caused by hardware or software interrupts.
Types of Interrupts
There are two types of interrupts in RTOS:
Maskable interrupts: These interrupts can be enabled or disabled by the OS. The
OS can choose to handle or ignore these interrupts, depending on the current
system state.
Non-maskable interrupts: These interrupts cannot be disabled by the OS. They
are typically associated with critical events that require immediate attention,
such as power failures or hardware faults.
Interrupt Handling Methods
RTOS uses the following methods to handle interrupts:
Interrupt Polling
Time Management
Time management is the process of scheduling tasks and managing time-critical
resources in RTOS. It involves ensuring that tasks are executed in a predictable
manner, and that the system meets its timing requirements.
An interrupt routine cannot call any RTOS functions that might block.
An interrupt routine cannot call any RTOS functions that might cause the RTOS to switch
tasks unless the RTOS knows that an interrupt routine is executing.
Interrupt routines have higher priorities than OS functions and application tasks. The
elapsed time of an ISR should be short compared to the intervals between interrupts to
avoid delays in processing other functions.
An RTOS can provide for two levels of interrupt service routines (ISRs): a fast level ISR
(FLISR) and a slow level ISR (SLISR). The FLISR is also called hardware interrupt ISR, and
the SLISR is called software interrupt ISR. In Windows CE, the SLISR is called interrupt
service thread (IST).
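To make the two-level split concrete, here is a minimal sketch in C. It assumes a hypothetical RTOS API: read_device_register, rtos_queue_put_from_isr, rtos_queue_get and process_sample are illustrative names only, not functions of any particular kernel. The fast-level ISR just captures the data and defers the lengthy work to a task-level handler.

/* Minimal sketch of two-level interrupt handling. The function names are
 * hypothetical; the point is that the fast level does the minimum work with
 * interrupts masked, and the slow level runs later under task scheduling. */

#include <stdint.h>

extern uint32_t read_device_register(void);          /* hypothetical device access    */
extern void rtos_queue_put_from_isr(uint32_t value); /* hypothetical ISR-safe service */
extern uint32_t rtos_queue_get(void);                /* hypothetical blocking receive */
extern void process_sample(uint32_t value);          /* application-level processing  */

/* Fast-level ISR (FLISR): runs with interrupts masked, so it must be short. */
void uart_fast_isr(void)
{
    uint32_t data = read_device_register(); /* acknowledge and read the device */
    rtos_queue_put_from_isr(data);          /* hand the data to the slow level */
}

/* Slow-level ISR (SLISR / IST): an ordinary task that does the long work. */
void uart_slow_isr_task(void *arg)
{
    (void)arg;
    for (;;) {
        uint32_t data = rtos_queue_get(); /* blocks until the FLISR posts data      */
        process_sample(data);             /* lengthy processing done at task level  */
    }
}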
The following describes how an interrupt is serviced at each of these two levels, using the terminology of the Nucleus RTOS (low-level and high-level ISRs):
Low-level ISR
A low-level interrupt service routine (LISR) executes as a normal ISR, which includes using
the current stack. Nucleus RTOS saves context before calling an LISR and restores context
after the LISR returns. Therefore LISRs may be written in C and may call other C routines.
However, there are only a few Nucleus RTOS services available to an LISR. If the interrupt
processing requires additional Nucleus RTOS services, a high-level interrupt service
routine (HISR) must be activated. Nucleus RTOS supports nesting of multiple LISRs.
High-level ISR
HISRs are created and deleted dynamically. Each HISR has its own stack space and its own
control block. The memory for each is supplied by the application. Of course, the HISR must
be created before it is activated by an LISR.
Since an HISR has its own stack and control block, it can be temporarily blocked if it tries to
access a Nucleus RTOS data structure that is already being accessed.
There are three priority levels available to HISRs. If a higher priority HISR is activated
during processing of a lower priority HISR, the lower priority HISR is preempted in much
the same manner as a task gets preempted. HISRs of the same priority are executed in the
order in which they were originally activated. All activated HISRs are processed before
normal task scheduling is resumed.
Scheduling Schemes
Scheduling is the method used by the operating system to allocate resources to processes.
In real-time systems, the scheduler is considered the most important component; it is typically a short-term task scheduler whose main focus is to reduce the response time of each process rather than to handle deadlines directly. Other approaches consider deadlines rather than just feasible schedules, so a task is aborted once its deadline is reached. This deadline-oriented approach is widely used in real-time systems.
There are different types of scheduling schemes such as FIFO, SJF, RR, and priority-based
scheduling.
Real-time operating systems (RTOS) use scheduling algorithms to manage tasks. The most
common RTOS task scheduling algorithms are:
Priority-based scheduling
Assigns a priority to each task and schedules the highest priority task first. This algorithm
is easy to implement and doesn't require information about job release and execution
times.
Priorities can be either dynamic or static. Static priorities are allocated during creation,
whereas dynamic priorities are assigned depending on the behavior of the processes while
in the system. To illustrate, the scheduler could favor input/output (I/O) intensive tasks,
which lets expensive requests be issued as soon as possible.
Priorities may be defined internally or externally. Internally defined priorities make use of
some measurable quantity to calculate the priority of a given process. In contrast, external
priorities are defined using criteria beyond the operating system (OS), such as the importance of the process, the type and amount of resources being used, user preference, and business or policy considerations.
Round-robin scheduling
Gives each task an equal amount of time to run, regardless of priority. This can be useful for
periodic tasks but can lead to missed deadlines for higher-priority tasks.
Round Robin(RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin(RR) scheduling, preemption is
added which enables the system to switch between processes.
Important terms
1. Completion Time: the time at which a process completes its execution.
2. Turn Around Time: the difference between completion time and arrival time. It is calculated as: Turn Around Time = Completion Time – Arrival Time
3. Waiting Time (W.T): the difference between turn around time and burst time, calculated as: Waiting Time = Turn Around Time – Burst Time
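A tiny C example, using assumed values, illustrates the two formulas:

/* Illustration of the formulas above with assumed example values (ms):
 * Turn Around Time = Completion Time - Arrival Time
 * Waiting Time     = Turn Around Time - Burst Time */
#include <stdio.h>

int main(void)
{
    /* One process: arrives at 2, needs 4 of CPU, completes at 10. */
    int arrival = 2, burst = 4, completion = 10;

    int turnaround = completion - arrival;  /* 10 - 2 = 8 */
    int waiting    = turnaround - burst;    /* 8 - 4 = 4  */

    printf("Turn Around Time = %d ms, Waiting Time = %d ms\n", turnaround, waiting);
    return 0;
}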
Deadline-based scheduling
Schedules tasks based on their deadlines, with earlier deadlines running first.
The Deadline Scheduler is an I/O scheduler for the Linux kernel that guarantees a start service time for each request. It imposes deadlines on all I/O operations in order to prevent starvation of requests. Two deadline queues, for read and write requests (sorted by their deadlines), are maintained. For every new request, the scheduler selects which queue will serve it. Read queues are given higher priority than write queues because processes usually get blocked during read operations.
The deadline scheduler first checks whether the request at the head of the deadline queue has expired; if so, that request is served, otherwise a batch of requests from the sorted queue is served. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue.
By default, the expiration time of read requests is 500 ms and that of write requests is 5 seconds.
For example, suppose there are three processes P1, P2, and P3 with respective deadlines. If P1 makes a request at time t, the deadline scheduler guarantees that the request is served before its deadline expires instead of waiting indefinitely behind later requests.
First-Come, First-Served (FCFS) scheduling
A non-preemptive algorithm that doesn't assign priority levels to tasks. The first task to enter the scheduling queue is put into the running state first.
FCFS Scheduling algorithm automatically executes the queued processes and requests in
the order of their arrival. It allocates the job that first arrived in the queue to the CPU, then
allocates the second one, and so on. FCFS is the simplest and easiest CPU scheduling
algorithm, managed with a FIFO queue. FIFO stands for First In First Out. The FCFS
scheduling algorithm places the arriving processes/jobs at the very end of the queue. So,
the processes that request the CPU first get the allocation from the CPU first. As any process
enters the FIFO queue, its Process Control Block (PCB) gets linked with the queue’s tail. As
the CPU becomes free, the process at the very beginning of the queue gets assigned to it. If the CPU starts working on a long job, many shorter ones may have to wait behind it. The FCFS scheduling algorithm is used in most batch operating systems.
Characteristics of FCFS Scheduling
FCFS follows a non-preemptive approach, meaning, once the CPU lets a process take
control, it won’t preempt until the job terminates.
It follows the criteria of arrival time for the selection process.
The processor selects the very first job in the ready queue, and it runs till
completion.
Strict FCFS is non-preemptive, although the FIFO queue discipline can also be combined with preemptive schemes such as round robin.
All the jobs execute on a first-come, first-serve basis.
Smaller processes take the lead in case of a tie.
The general wait time is quite high because of the modus operandi that FCFS
follows.
The algorithm is feasible to use and implement in the long run.
The process is not very complicated, thus easy to understand.
Every implementation follows the First In First Out (FIFO) ready queue.
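A minimal FCFS sketch in C (the process set below is assumed purely for illustration) shows the FIFO discipline and why short jobs can end up waiting behind a long one:

/* FCFS sketch: processes are served strictly in arrival order, so a long job
 * at the head of the queue delays every shorter job behind it. */
#include <stdio.h>

struct proc { const char *name; int arrival; int burst; };

int main(void)
{
    /* Already ordered by arrival time, as a FIFO ready queue would hold them. */
    struct proc q[] = { {"P1", 0, 8}, {"P2", 1, 2}, {"P3", 2, 2} };
    int n = sizeof q / sizeof q[0];
    int now = 0;

    for (int i = 0; i < n; i++) {
        if (now < q[i].arrival)
            now = q[i].arrival;             /* CPU idles until the process arrives */
        int start = now;
        now += q[i].burst;                  /* runs to completion, no preemption   */
        printf("%s: start=%d finish=%d wait=%d\n",
               q[i].name, start, now, start - q[i].arrival);
    }
    return 0;
}

Running it shows P2 and P3 each waiting several time units simply because the long job P1 arrived first.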
Shortest Job First (SJF) scheduling
A non-preemptive algorithm that requires the scheduler to know each task's execution time. SJF schedules processes according to their respective burst times.
In SJF scheduling, the process with the lowest burst time is always selected first from the list of available processes in the ready queue, and the processes are scheduled to run one after another. Proper scheduling helps the OS execute each process in an orderly sequence.
The OS maintains this order and keeps track of each process's burst time, and the systematic execution of processes gives the user smooth, predictable behaviour. In practice, however, the burst time of a process is often difficult to predict in advance, so SJF is hard to implement exactly; the operating system usually has to estimate burst times automatically, for example from past behaviour.
In the shortest job first scheduling, the processor always compares the processes waiting to
be executed to choose the one with the shortest burst time and again makes the
comparison after the ongoing process ends.
To understand it thoroughly, consider an example: given a table of processes with their arrival and burst times, a Gantt chart of the resulting schedule shows the processor repeatedly picking, whenever the current job finishes, the ready process with the shortest burst time.
Shortest Remaining Time First (SRTF) scheduling
In the Shortest Remaining Time First (SRTF) scheduling algorithm, the process with the smallest
amount of time remaining until completion is selected to execute. Since the currently
executing process is the one with the shortest amount of time remaining by definition, and
since that time should only reduce as execution progresses, processes will always run until
they complete or a new process is added that requires a smaller amount of time.
Scheduling in RTOS can help maximize system throughput and performance. It can also
ensure that time-critical tasks are executed promptly, which improves system
responsiveness and reliability.
Types of Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) in a single
computer system.
Symmetric Multiprocessing
In symmetric multiprocessing, all processors are equal and share resources such as
memory and I/O devices.
Each processor can execute any process, making it a more efficient use of resources.
In symmetric multiprocessing (SMP), all processors are treated as equals. They can run any
task, and they all share a common memory space. This allows for high performance and
scalability, but it also requires careful synchronization to avoid conflicts.
Asymmetric Multiprocessing
In asymmetric multiprocessing, one processor is designated as the master, and all other
processors are slaves.
The master processor manages the system resources and distributes tasks to the slave
processors.
In asymmetric multiprocessing (AMP), each processor is given a specific role or set of tasks.
One processor often serves as the "master" and the others serve as "slaves." This can
simplify synchronization and reduce overhead, but it may not scale as well as SMP.
1. Multi-tasking:
The execution of more than one task at a time on a single CPU, achieved by switching the CPU between tasks, is known as multitasking.
2. Multiprocessing:
Multiprocessing is a system that has two or more processors. Here, CPUs are added to increase the computing speed of the system. Because of multiprocessing, many processes are executed simultaneously. Multiprocessing is further classified into two categories: symmetric multiprocessing and asymmetric multiprocessing.
Difference between Multitasking and Multiprocessing :
1. Multi-tasking: The execution of more than one task simultaneously is known as multitasking.
   Multiprocessing: The availability of more than one processor per system, which can execute several sets of instructions in parallel, is known as multiprocessing.
2. Multi-tasking: The number of CPUs is one.
   Multiprocessing: The number of CPUs is more than one.
3. Multi-tasking: Jobs are executed one at a time.
   Multiprocessing: More than one process can be executed at a time.
4. Multi-tasking: The number of users is more than one.
   Multiprocessing: The number of users can be one or more than one.
Cooperative Multitasking
In cooperative multitasking, the operating system never forcefully takes the CPU away from a running process; each process must voluntarily yield control (for example, by blocking or explicitly giving up the CPU) before another task can run.
Preemptive Scheduling
Preemptive scheduling is a method that may be used when a process switches from a
running state to a ready state or from a waiting state to a ready state. The resources are
assigned to the process for a particular time and then removed. If the resources still have
the remaining CPU burst time, the process is placed back in the ready queue. The process
remains in the ready queue until it is given a chance to execute again.
When a high-priority process arrives in the ready queue, it doesn't have to wait for the running process to finish its burst time. Instead, the running process is interrupted in the middle of its execution and placed back in the ready queue until the high-priority process has used the resources. As a result, each process gets some CPU time. This increases the overhead of switching a process between the running and ready states, but it makes preemptive scheduling more flexible. Preemption may or may not be combined with SJF and priority scheduling.
Let us take an example of preemptive scheduling with four processes: P0, P1, P2, and P3.
Process   Arrival Time (ms)   Burst Time (ms)
P0        3                   2
P1        2                   4
P2        0                   6
P3        1                   4
Firstly, the process P2 comes at time 0. So, the CPU is assigned to process P2.
When process P2 was running, process P3 arrived at time 1, and the
remaining time for process P2 (5 ms) is greater than the time needed by
process P3 (4 ms). So, the processor is assigned to P3.
When process P3 was running, process P1 came at time 2, and the remaining
time for process P3 (3 ms) is less than the time needed by processes P1 (4
ms) and P2 (5 ms). As a result, P3 continues the execution.
While process P3 is still running, process P0 arrives at time 3. P3's
remaining time (2 ms) is equal to P0's necessary time (2 ms). So, process P3
continues the execution.
When process P3 finishes, the CPU is assigned to P0, which has a shorter
burst time than the other processes.
After process P0 completes, the CPU is assigned to process P1 and then to
process P2.
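The following small C simulation uses the same table of arrival and burst times and reproduces the order described above (P2, then P3, then P0, P1, and finally the remainder of P2). As in the walkthrough, ties in remaining time are broken in favour of the currently running process.

/* Shortest-remaining-time-first simulation of the example above. */
#include <stdio.h>

#define N 4

int main(void)
{
    const char *name[N] = { "P0", "P1", "P2", "P3" };
    int arrival[N]   = { 3, 2, 0, 1 };   /* arrival times from the table */
    int remaining[N] = { 2, 4, 6, 4 };   /* burst times from the table   */
    int done = 0, t = 0, last = -1;

    while (done < N) {
        /* Keep running the current process on a tie in remaining time. */
        int pick = (last >= 0 && remaining[last] > 0) ? last : -1;
        for (int i = 0; i < N; i++) {
            if (arrival[i] <= t && remaining[i] > 0
                && (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;                /* arrived process with least time left */
        }
        if (pick < 0) { t++; continue; } /* no process ready: idle tick */
        if (pick != last) {
            printf("t=%2d: run %s\n", t, name[pick]);
            last = pick;
        }
        remaining[pick]--;               /* execute for one time unit */
        t++;
        if (remaining[pick] == 0)
            done++;
    }
    return 0;
}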
Advantages of preemptive scheduling:
It is a more robust method because a process cannot monopolize the processor.
Each event causes an interruption in the execution of ongoing tasks.
It improves the average response time.
It is more beneficial when you use this method in a multi-programming
environment.
Non-preemptive Scheduling
When a non-preemptive process with a high CPU burst time is running, the other processes
would have to wait for a long time, and that increases the process average waiting time in
the ready queue. However, there is no overhead in transferring processes from the ready
queue to the CPU under non-preemptive scheduling. The scheduling is strict because the
execution process is not even preempted for a higher priority process.
For example, let's take the above preemptive scheduling example and solve it in a non-preemptive manner (non-preemptive SJF): P2 runs to completion from time 0 to 6; at time 6 the shortest waiting job is P0 (2 ms), which runs from 6 to 8; P3 and P1 both need 4 ms, so they run in arrival order, P3 from 8 to 12 and P1 from 12 to 16.
Preemptive vs. non-preemptive scheduling:
Preemptive: The resources are assigned to a process for a limited time period.
Non-preemptive: Once resources are assigned to a process, they are held until it completes its burst period or changes to the waiting state.
Preemptive: A process may be paused in the middle of its execution.
Non-preemptive: When the processor starts executing a process, it must complete it before executing another process; it may not be interrupted in the middle.
Preemptive: When high-priority processes keep arriving in the ready queue, a low-priority process can starve.
Non-preemptive: When a process with a high burst time is using the CPU, another process with a shorter burst time can starve.
Preemptive: It is flexible.
Non-preemptive: It is rigid.
Preemptive: It affects the design of the operating system kernel.
Non-preemptive: It doesn't affect the design of the OS kernel.
Preemptive: Its CPU utilization is comparatively high.
Non-preemptive: Its CPU utilization is comparatively low.
Preemptive: Round Robin and Shortest Remaining Time First are examples.
Non-preemptive: FCFS and SJF are examples.
Shared memory
Process communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process
letting another process know that some event has occurred or transferring of data from
one process to another. One of the models of process communication is the shared memory
model.
Shared memory is a feature of an operating system that allows multiple programs to access
the same memory at the same time to communicate with each other and avoid redundant
copies. Shared memory is an efficient way to pass data between programs.
The shared memory in the shared memory model is the memory that can be
simultaneously accessed by multiple processes. This is done so that the processes can
communicate with each other. All POSIX systems, as well as Windows operating systems
use shared memory.
Processes must ensure they don't write to the same memory location at the same time.
Shared memory may create synchronization and memory-protection problems.
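As an illustration, here is a minimal sketch using the POSIX shared-memory API; the object name /demo_shm and the message are arbitrary choices, and a second process could open and map the same object to read what was written (on some Linux systems, link with -lrt).

/* Minimal sketch of POSIX shared memory: create an object, map it, write to it. */
#include <fcntl.h>      /* O_CREAT, O_RDWR  */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>   /* shm_open, mmap   */
#include <unistd.h>     /* ftruncate, close */

int main(void)
{
    const size_t size = 4096;

    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600); /* create/open the object */
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; } /* set its size   */

    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(mem, "hello from the writer");   /* visible to any process that maps it */
    printf("wrote: %s\n", mem);

    munmap(mem, size);
    close(fd);
    shm_unlink("/demo_shm");                /* remove the object when done */
    return 0;
}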
Task communication:
1. Cooperation by sharing
The processes may cooperate by sharing data, including variables, memory, databases, etc.
The critical section provides data integrity, and writing is mutually exclusive to avoid
inconsistent data.
For example, processes P1 and P2 may cooperate by sharing data such as files, databases, variables, and memory.
2. Cooperation by Communication
The cooperating processes may cooperate by using messages. If every process waits for a
message from another process to execute a task, it may cause a deadlock. If a process does
not receive any messages, it may cause starvation.
For example, processes P1 and P2 may cooperate by exchanging messages with each other.
There are three broad paradigms for inter-task communications and synchronization:
Message passing – a rationalized scheme where an RTOS allows the creation of message
objects, which may be sent from one task to another or to several others. This is
fundamental to the kernel design and leads to the description of such a product as being a
“message passing RTOS”.
The facilities that are ideal for each application will vary. There is also some overlap in
their capabilities and some thought about scalability is worthwhile. For example, if an
application needs several queues, but just a single mailbox, it may be more efficient to
realize the mailbox with a single-entry queue. This object will be slightly non-optimal, but
all the mailbox handling code will not be included in the application and, hence, scalability
will reduce the RTOS memory footprint.
Interprocess Communication:
Definition:
IPC allows processes to synchronize their actions and exchange information, which enables
them to work together to accomplish a specific task. Processes can communicate with each
other through shared memory or message passing.
Synchronization is one of the essential parts of interprocess communication. Typically, it is provided by the interprocess communication control mechanisms, but sometimes it can also be handled by the communicating processes themselves.
These are the methods used to provide synchronization:
Mutual Exclusion
Semaphore
Barrier
Spinlock
Mutual Exclusion:
It is generally required that only one process thread can enter the critical section at a time.
This also helps in synchronization and creates a stable state to avoid the race condition.
Mutex Functionality:
A Mutex or Mutual Exclusion Object is used to allow only one of the processes access to the
resource at a time. The Mutex object allows all processes to use the same resource, but the
resource is accessed by one process at a time. Mutex uses a lock-based approach to handle
critical section issues.
Each time a process requests a resource from the system, the system creates a mutex object
with a unique name or ID. So whenever a process wants to use that resource, it acquires a
lock on the object. After locking, the process uses the resource and eventually releases the
mutex object. Other processes can then create and use mutex objects in the same way.
Advantages of Mutex
A mutex creates a barrier that prevents two different threads from accessing a resource simultaneously, so the resource is never left inconsistent or unavailable when another thread needs it.
A mutex can also help with code reliability. A resource accessed by a thread can become unavailable if the CPU's memory management fails; by preventing access to the resource at such a time, the system can recover from the error and still have the resource available.
Disadvantages of Mutex
A mutex cannot be locked or unlocked by any context other than the one that acquired it.
Typical implementations can result in busy-wait states that waste CPU time.
If the thread that holds the lock goes to sleep or is preempted, other threads may be stuck waiting on it. This can lead to starvation.
Only one thread at a time is allowed in the critical section.
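A minimal sketch of mutex usage with POSIX threads (the shared counter is just an assumed example resource) shows the lock/unlock pattern around a critical section:

/* Mutual exclusion with a POSIX mutex: two threads increment a shared counter,
 * and the lock ensures only one of them is in the critical section at a time. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                       /* the shared resource */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);           /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* always 200000 with the mutex */
    return 0;
}

Without the mutex, the two threads could interleave their read-modify-write sequences and lose updates; with it, the final count is always 200000.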
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. A semaphore is an integer variable S that is initialized with the number
of resources present in the system and is used for process synchronization. The value of S is changed using two functions: wait() and signal(). These two functions are used to change the
value of a semaphore, but they allow only one process to change the value at any given
time. Two processes cannot change the value of a semaphore at the same time.
Binary Semaphore:
The value of a semaphore variable in binary semaphores is either 0 or 1. The value
of the semaphore variable is initially set to 1, but if a process requests a resource,
the wait() method is invoked, and the value of this semaphore is changed from 1 to
0. When the process has finished using the resource, the signal() method is invoked,
and the value of this semaphore variable is raised to 1. If the value of this semaphore
variable is 0 at a given point in time, and another process wants to access the same
resource, it must wait for the prior process to release the resource. Process
synchronization can be performed in this manner.
Counting Semaphore:
o A counting semaphore is a semaphore that has multiple values of the
counter. The value can range over an unrestricted domain.
o It is a structure, which comprises a variable, known as a semaphore variable
that can take more than two values and a list of task or entity, which is
nothing but the process or the thread.
o The value of the semaphore variable is the number of process or thread that
is to be allowed inside the critical section.
o The value of the counting semaphore can range between 0 and N, where N is
the number of processes that are free to enter and exit the critical section.
o As mentioned, a counting semaphore can allow multiple processes or threads
to access the critical section, hence mutual exclusion is not guaranteed.
o Since multiple processes can access the shared resource at any time, a
counting semaphore can provide bounded waiting: a process that wants to
enter the critical section waits at most until the processes ahead of it have
entered and left, implying that no process will starve.
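A minimal sketch with the POSIX semaphore API (the thread count, slot count, and sleep are arbitrary illustration values) shows wait() and signal() as sem_wait() and sem_post(), with the counter initialised to the number of identical resource slots:

/* Counting semaphore with POSIX semaphores: initialised to 3, so at most
 * three threads can be in the "resource in use" section at the same time. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;

static void *user(void *arg)
{
    long id = (long)arg;
    sem_wait(&slots);                     /* wait(): decrement, block if 0    */
    printf("thread %ld acquired a slot\n", id);
    sleep(1);                             /* pretend to use the resource      */
    printf("thread %ld released a slot\n", id);
    sem_post(&slots);                     /* signal(): increment, wake waiter */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&slots, 0, 3);               /* N = 3 identical resource slots */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}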
Barrier:-
A barrier does not allow an individual process to proceed until all participating processes have reached it. Barriers are used by many parallel languages, and collective routines impose them.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it does not perform any useful work.
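A minimal sketch of a spinlock built on a C11 atomic flag illustrates the busy wait:

/* Spinlock on a C11 atomic flag: a thread that fails to acquire the lock
 * simply keeps retrying in a loop (busy waiting). */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* test-and-set returns the previous value; spin while it was already set */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                               /* busy wait: burns CPU, does no work */
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}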
These are the approaches used to implement interprocess communication:
Pipes
Shared Memory
Message Queue
Direct Communication
Indirect communication
Message Passing
FIFO
Pipe:-
The pipe is a type of data channel that is unidirectional in nature. It means that the data in
this type of data channel can be moved in only a single direction at a time. Still, one can use two channels of this type so that data can be sent and received between two processes in both directions. Typically, a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of the Windows operating system as well.
Pipe mechanism can be viewed with a real-time scenario such as filling water with the pipe
into some container, say a bucket, and someone retrieving it, say with a mug. The filling
process is nothing but writing into the pipe and the reading process is nothing but
retrieving from the pipe. This implies that one output (water) is input for the other
(bucket).
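A minimal sketch in C shows the unidirectional channel: the parent writes into one end of the pipe and the child reads from the other, mirroring the filling and retrieving roles described above.

/* Unidirectional pipe between a parent (writer) and child (reader) process. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                              /* fds[0] = read end, fds[1] = write end */
    char buf[64];

    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: the reader  */
        close(fds[1]);                       /* not writing        */
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fds[0]);
    } else {                                 /* parent: the writer */
        close(fds[0]);                       /* not reading        */
        const char *msg = "water through the pipe";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
    }
    return 0;
}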
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by multiple
processes simultaneously. It is primarily used so that the processes can communicate with
each other. Therefore the shared memory is used by almost all POSIX and Windows
operating systems as well.
Message Queue:-
In general, several different processes are allowed to read and write data to the message queue. The messages are stored in the queue until
their recipients retrieve them. In short, we can also say that the message queue is very
helpful in inter-process communication and used by all operating systems.
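A minimal sketch with the POSIX message-queue API (the queue name /demo_mq, its attributes, and the message text are arbitrary illustration values; link with -lrt on Linux) shows a message staying in the queue until it is received:

/* POSIX message queue: send a message, then retrieve it. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];

    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* The message stays in the queue until some process retrieves it. */
    mq_send(mq, "sensor reading: 42", strlen("sensor reading: 42") + 1, 0);

    if (mq_receive(mq, buf, sizeof buf, NULL) > 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}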
Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each
other. However, by using message passing, the processes can communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations that are as
follows:
send (message)
receive (message)
Direct Communication:-
In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one
link can exist.
Indirect Communication
Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links. These
shared links can be unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated processes. It can also be
considered as full-duplex, which means that one process can communicate with another
process and vice versa.
Socket:-
It acts as an endpoint for sending or receiving data in a network. It can be used for data sent between processes on the same computer or between different computers on the same network. Hence, it is used by several types of operating systems.
File:-
A file is a type of data record or a document stored on the disk and can be acquired on
demand by the file server. Another most important thing is that several processes can
access that file as required or needed.
Signal:-
As the name implies, signals are used for interprocess communication in a minimal way. They are system messages sent from one process to another. They are normally not used to transfer data but to deliver remote commands between processes.
There are numerous reasons to use inter-process communication, such as information sharing, computational speedup, modularity, and convenience.
MailBoxes:
A mailbox is a storage location in an embedded system that can be used to store a single
variable of type ADDR. Mailboxes are similar to queues and can be used to share data
between tasks. They can also be used for synchronization.
You can write to a mailbox, and then read from it or reset it. Trying to send to a full mailbox
or read from an empty one may result in an error or task suspension.
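Purely as an illustration (not the API of any specific RTOS), a one-entry mailbox can be sketched in C as a single slot protected by a mutex and condition variable, so that posting to a full mailbox or pending on an empty one blocks the caller:

/* Illustrative one-entry mailbox: a single address-sized slot with blocking
 * post and pend operations, built on POSIX threads primitives. */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uintptr_t       value;   /* the single stored item (an address-sized word) */
    bool            full;
    pthread_mutex_t lock;
    pthread_cond_t  changed;
} mailbox_t;

mailbox_t mbox = { .full = false,
                   .lock = PTHREAD_MUTEX_INITIALIZER,
                   .changed = PTHREAD_COND_INITIALIZER };

void mailbox_post(mailbox_t *mb, uintptr_t value)
{
    pthread_mutex_lock(&mb->lock);
    while (mb->full)                          /* sending to a full mailbox waits */
        pthread_cond_wait(&mb->changed, &mb->lock);
    mb->value = value;
    mb->full  = true;
    pthread_cond_broadcast(&mb->changed);
    pthread_mutex_unlock(&mb->lock);
}

uintptr_t mailbox_pend(mailbox_t *mb)
{
    pthread_mutex_lock(&mb->lock);
    while (!mb->full)                         /* reading an empty mailbox waits */
        pthread_cond_wait(&mb->changed, &mb->lock);
    uintptr_t value = mb->value;
    mb->full = false;
    pthread_cond_broadcast(&mb->changed);
    pthread_mutex_unlock(&mb->lock);
    return value;
}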
In one line, Priority Inversion is a problem while Priority Inheritance is a solution. Priority
Inversion means that the priority of tasks gets inverted and Priority Inheritance means that
the priority of tasks gets inherited. Both of these phenomena happen in priority scheduling.
Basically, in Priority Inversion, the higher priority task (H) ends up waiting for the middle
priority task (M) when H is sharing a critical section with the lower priority task (L) and L
is already in the critical section. Effectively, H waiting for M results in inverted priority i.e.
Priority Inversion. One of the solutions to this problem is Priority Inheritance. In Priority
Inheritance, when L is in the critical section, L inherits the priority of H at the time when H
starts waiting for the critical section. By doing so, M cannot preempt L, and H doesn't wait
for M to finish. Please note that inheriting of priority is done temporarily i.e. L goes back to
its old priority when L comes out of the critical section.
Priority Inversion:
Priority inversion occurs when a low-priority task blocks a high-priority task because the
low-priority task holds a lock, such as a mutex or semaphore, needed by the high-priority
task. Here are some possible solutions to priority inversion:
Priority inheritance: This is the most common method for dealing with priority
inversion. It promotes the priority of any process when it requests a resource from
the operating system. The low-priority task that holds the resource inherits the
priority of the highest-priority task that is waiting for that resource. This way, the
low-priority task can finish its critical section and release the resource faster,
allowing the high-priority task to resume. The operating system or an application
can implement priority inheritance using a special type of lock or semaphore that
supports this feature.
Priority ceiling protocol: This gives each shared resource a predefined priority
ceiling. When a task acquires a shared resource, the task is hoisted (has its priority
temporarily raised) to the priority ceiling of that resource. This synchronization
protocol avoids unbounded priority inversion and mutual deadlock due to wrong
nesting of critical sections.
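As a sketch, POSIX threads expose priority inheritance through a mutex attribute; on systems that support _POSIX_THREAD_PRIO_INHERIT, a mutex created as below makes the holder inherit the priority of the highest-priority waiter:

/* Enabling the priority-inheritance protocol on a POSIX mutex. */
#include <pthread.h>

pthread_mutex_t shared_lock;

void init_shared_lock(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Ask the OS to apply priority inheritance to this mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

A priority ceiling can be requested in the same way with PTHREAD_PRIO_PROTECT and pthread_mutexattr_setprioceiling().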
VxWorks and RTLinux are both real-time operating systems (RTOS) that are used in a
variety of applications. VxWorks is a commercial, preemptive RTOS that's designed for
embedded systems. RTLinux provides deterministic, hard real-time performance.
Context switch time: VxWorks has a more deterministic context switch time than
RTLinux.
Real-time performance: VxWorks is a preemptive RTOS that prioritizes real-time
embedded applications. RTLinux provides deterministic, hard real-time
performance, which is suitable for time-critical tasks.
Uses: VxWorks is used in a variety of applications, including network and communication
devices, automotive systems, and consumer products. RTLinux is suitable for a wide range
of applications that require real-time performance.
µC/OS-II