Introduction To Embedded Systems: Melaku M


INTRODUCTION TO

EMBEDDED SYSTEMS

By Melaku M.
Definition

 “Any sort of device which includes a programmable computer but itself is not intended to be a general-purpose computer.”
Types of Embedded Systems

E.g. sonar – a device that uses hydrophones (in the same manner as radar uses radio waves) to locate objects underwater.
Typical Embedded Systems

 Are designed to observe (through sensors) and control something (through actuators).
 E.g. an air conditioner senses the room temperature and maintains it at the set temperature via a thermostat.
Embedded System Block Diagram (example figure)
Some common characteristics of embedded
systems

 Single-functioned
 Executes a single program, repeatedly

 Tightly-constrained
 Low cost, low power, small, fast, etc.

 Reactive and real-time


 Continually reacts to changes in the
system’s environment
 Must compute certain results in real-
time without delay
 Must finish operations by deadline
Cont…d
 Application-specific functionality
 specialized for one application or one class of applications
 Deadline constrained operation
 system may have to perform its function(s)
within specific time periods to achieve
successful results
 Resource challenged
 systems typically are configured with a
modest set of resources to meet the
performance objectives
Cont…d
 Power efficient
 many systems are battery-powered and must
conserve power to maximize the usable life of
the system.
 Form factor
 many systems are lightweight and low-volume so that they can be used as components in host systems
 Manufacturable
 usually small and inexpensive to manufacture
based on the size and low complexity of the
hardware.
Design Constraints
Design Challenges
 Does it really work?
 Is the specification correct?
 Does the implementation meet the spec?
 How do we test for real-time characteristics?
 How do we test on real data?

 How do we work on the system?


 Observability, controllability?
 What is our development platform?

 More importantly – optimizing design metrics!

11
Design Metrics
• Common metrics
• Unit cost: the monetary cost of manufacturing
each copy of the system, excluding NRE cost
• NRE cost (Non-Recurring Engineering
cost): The one-time monetary cost of designing
the system
• Size: the physical space required by the system
• Performance: the execution time or
throughput of the system
• Power: the amount of power consumed by the
system
Cont…d
• Time-to-prototype: the time needed to
build a working version of the system
• Time-to-market: the time required to
develop a system to the point that it can be
released and sold to customers
• Maintainability: the ability to modify the
system after its initial release
• Correctness, safety, many more
Input and output technology
 A sensor converts some physical characteristics of its environment into electrical signals.

 Examples of sensors:
 Photo-voltaic sensor
 Temperature sensor
 Pressure sensor
Actuators

 An actuator converts electrical signals into physical actions.

 The physical action may be motion, or a change in the thermal, electrical, pneumatic, or other physical characteristics of some object.
Actuators

 Examples of actuators
 Motors
 Heaters
 Hydraulic and pneumatic actuators
Processors
 Microprocessors and Microcontrollers

 Key requirement:

 Energy Efficiency

 High code density


Microprocessors
 CPU for computers
 No RAM, ROM, I/O on CPU chip itself

 Example: Intel’s x86, Motorola’s 680x0

[Block diagram: the microprocessor (CPU) connects over the data bus and address bus to external RAM, ROM, a general-purpose I/O port, a serial port, and a timer.]
Microcontroller

 A microcontroller is a single silicon chip device which integrates a number of the components of a microprocessor system onto one chip:
 The CPU core
 Memory (both ROM & RAM)
 Some parallel digital I/O & more
Components of a Micro-controller

 A timer module to allow the microcontroller to perform tasks for certain time periods.
 A serial I/O port to allow data flow between the microcontroller and other devices.
Components of a Micro-controller

 An ADC to allow the microcontroller to accept analogue input data for processing.
Why Micro-controller?
 Low cost, small packaging

 Low Power Consumption

 Programmable

 Re-programmable

 Lots of I/O Capability

 Single purpose
Microcontroller architecture
PIC16F877A
INTERFACING
 Interfacing is nothing but connecting an I/O device or some other device to our microcontroller.

 Basically, it is effective interfacing that completes the embedded system. Various components like LCDs, switches, LEDs, relays, etc. have to be interfaced with the microcontroller.
Memory Organization
26

 Data Memory

 Program Memory

 Access to both is possible in each cycle because each has a distinct bus.


The PIC Family: Program Memory
27

 EPROM (Erasable Programmable Read-Only Memory)
• One-Time Programmable (OTP) chips are EPROM chips, but with no erase window.
 FLASH
• Re-writable
• Much faster to develop
The PIC Family: Program Memory
28

 Mid-range PIC processors have a 13-bit program counter.

 The width of the program memory bus is 14 bits.
 The program memory space is divided into 4 pages of 2K each.
Memory Map
29
Stack
30

 Mid-range PICs have an 8-level deep, 13-bit wide hardware stack.

 The PC is pushed onto the stack when a CALL instruction is executed or an interrupt occurs.
The PIC Family: Data Memory
31

 PICs use general-purpose “file registers” for RAM (each register is 8 bits wide on all PICs).
Interrupts

 An interrupt is any service request that causes the CPU to stop its current execution stream & to execute an instruction stream that services the interrupt.
Interrupts

 When the CPU finishes servicing the interrupt, it returns to the original execution stream at the point where it left off.
Interrupt request mechanism
34
Interrupts in PIC

 Sources of interrupts are many:
 INT pin interrupts from external source
 PORTB change interrupt (RB7:RB4)
 USART interrupts
 A/D conversion interrupt
 LCD interrupts
 Others
Interrupts Management

 The INTCON register holds the interrupt status & control bits.
cont…d
37

 Bit 7: Global Interrupt Enable (GIE)
 Enables (if set) all unmasked interrupts, or (if cleared) disables all interrupts.
Interrupts Management

 Bits 6, 5, 4, 3: enable the peripheral (EEPROM write-complete), Timer0, external (INT pin), and Port B (pins 4–7) change interrupts, respectively.
 Bits 2, 1, 0: the Timer0, INT, and port-change interrupt flags, respectively.
 Flag bits get set when the interrupt condition occurs, regardless of the value of the enable bit (a configuration sketch follows below).
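As an illustration, here is a minimal Timer0-overflow configuration sketch in mikroC-style C for the PIC16F877A. The single interrupt() handler and the .Bx bit-access syntax follow the mikroC convention used later in these notes, the bit positions follow the INTCON description above, and the prescaler value and the tick counter are illustrative assumptions, not part of the slides.

unsigned int tick = 0;         // advanced by the ISR on every Timer0 overflow

void interrupt() {             // mikroC routes all PIC16 interrupts to this handler
  if (INTCON.B2) {             // bit 2: Timer0 overflow flag set?
    tick++;                    // service the interrupt
    INTCON.B2 = 0;             // clear the flag in software before returning
  }
}

void main() {
  OPTION_REG = 0x07;           // Timer0 on the internal clock, prescaler 1:256
  TMR0 = 0;                    // start counting from zero
  INTCON.B5 = 1;               // bit 5: enable the Timer0 overflow interrupt
  INTCON.B7 = 1;               // bit 7: GIE - global interrupt enable
  while (1) {
    // foreground work; 'tick' keeps advancing in the background
  }
}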
Peripheral Interrupts

 Managed using the PIE & PIR registers.

 PIE registers contain bits for enabling interrupts from individual peripherals.
 PIR registers contain flag bits for individual peripheral interrupts.
Interrupt Processing

 When an interrupt is responded to:
 The GIE bit is cleared to disable further interrupts.
 The PC is pushed onto the stack.
 The PC is loaded with 0004h (the interrupt vector).
 STATUS is saved in a temporary memory location.
Interrupt Processing

 The return-from-interrupt instruction (retfie) exits the ISR and sets the GIE bit to allow pending interrupts to execute.
Interrupt Constraints

 Each interrupt source is characterized by:
 the minimum time interval between interrupts from the same source.
 the maximum time it takes the CPU to execute the interrupt source's handler.
Interrupt Constraints

 Servicing of interrupts must not be delayed beyond the limit imposed by the timing requirements of the source.
Interrupt Assignment
44

 An interrupt controller connects multiple external interrupts to either FIQ or IRQ.
 IRQs are normally assigned to general-purpose interrupts.
 Example: periodic timer interrupts used to force a context switch.
Interrupt Assignment
45

 FIQ is reserved for an interrupt source which requires a fast response time.
Interrupt Latency
46

 Latency has hardware and software components.
 Software methods to reduce latency:
• Use a nested handler, which allows further interrupts to occur even while servicing an existing interrupt, by re-enabling interrupts inside the service routine.
Interrupt Latency
47

• Program the interrupt controller to ignore interrupts of the same or lower priority.
 Higher-priority interrupts will then have lower average latency.
PIC Peripherals: Digital I/O
 All PICs have digital I/O pins, grouped into ports.

 Ports are used to control & monitor external devices.

 Each port has 2 control registers (see the sketch below):
 TRISX sets whether each pin is an input or an output.
 PORTX sets the output bit levels (and reads back the pin levels).
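A minimal sketch of these two registers in use, in the mikroC-style C used later in these notes (the choice of a switch on RB4 and an LED on RB0 is purely illustrative):

// TRISB bit = 1 makes the pin an input, 0 makes it an output.
void main() {
  TRISB = 0xF0;              // RB7..RB4 as inputs (e.g. switches), RB3..RB0 as outputs
  PORTB = 0x00;              // drive the output pins low initially
  while (1) {
    if (PORTB & 0x10)        // read the switch on RB4
      PORTB |= 0x01;         // pressed: turn on the LED on RB0
    else
      PORTB &= ~0x01;        // released: turn it off
  }
}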
Embedded programming basics
 Select the language to be used
 Use the PIC16F877A
 Write a program in mikroC
 Create the hex file (compile)
 Draw the circuit diagram in Proteus
 Load the hex file into the controller
 Simulate it
What language to use?
 In the process of making a better embedded system, the selection of the programming language is very important.

 Factors in selecting a language:

 Size: the memory that the program occupies is very important, as embedded processors like microcontrollers have a very limited amount of ROM.
Cont…d

 Speed: the programs must run as fast as possible. The hardware should not be slowed down by slow-running software.

 Portability: the same program can be compiled for different processors.

 Ease of implementation and maintenance

 Readability
Multiline comments . . . . . denoted using /* … */
Single-line comments . . . . . denoted using //
Preprocessor directives . . . . . #include <…> or #define
Global variables . . . . . accessible anywhere in the program
Function declarations . . . . . declaring functions
Main function . . . . . execution begins here
{
  Local variables . . . . . variables confined to the main function
  Function calls . . . . . calling other functions
  Infinite loop . . . . . like while(1) or for(;;)
  Statements . . . . .
  …
}
Function definitions . . . . . defining the functions
{
  Local variables . . . . . local variables confined to this function
  Statements . . . . .
  …
}
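For concreteness, a small mikroC-style program that follows the skeleton above. The LED on RB0 and the 500 ms period are illustrative assumptions; Delay_ms() is the mikroC delay routine (it expects a constant argument).

/* Blink an LED on RB0 of the PIC16F877A, following the skeleton above. */

#define LED_MASK 0x01                  // preprocessor directive

unsigned int blink_count = 0;          // global variable

void set_led(unsigned char on);        // function declaration

void main() {                          // execution begins here
  unsigned char led_state = 0;         // local variable confined to main()
  TRISB = 0x00;                        // all PORTB pins as outputs
  PORTB = 0x00;
  while (1) {                          // infinite loop
    led_state = !led_state;
    set_led(led_state);                // function call
    Delay_ms(500);                     // statement: wait half a second
  }
}

void set_led(unsigned char on) {       // function definition
  if (on) PORTB |= LED_MASK;           // statements local to this function
  else    PORTB &= ~LED_MASK;
  blink_count++;                       // count the toggles in the global
}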
Configuring PIC16F877A

 Every pin has its own name.

 A pin can be declared as an input or an output.

 Pin declaration (mikroC bit syntax):
 TRISB.B6 = 1; // 1 makes RB6 an input, 0 would make it an output
 PORTB.B6 = 1; // drives RB6 high, 0 would drive it low
54

Real-Time Operating System
Why we have an OS in an embedded device
55

 Support for
• Multitasking, scheduling, & synchronization
• Timing aspects
• Memory management
• File system
• Networking
• Interfacing wide range of I/O devices
• Security and power management
Introduction
56

[Figure: two layers – the Application Program running directly on the Hardware.]

 The application program directly controls the hardware resources: I/O and memory devices.
 Simple applications can be implemented in this manner.
Introduction
57

 However, for more complex and real-time systems, a software layer in between is needed.

[Figure: three layers – APPLICATION PROGRAM on top of an INTERMEDIATE layer (the operating system) on top of the HARDWARE.]
Introduction
58

 It is the interface between the hardware and the user.
 It provides a user-friendly environment.
 It performs hardware management.
 It controls the execution of a number of tasks & allows inter-task communication.
Evolution of Operating System
59

 Simple Batch Systems

 Multi-programmed Batch Systems

 Time-Sharing Systems

 Real Time System

 Distributed System
Batch Processing System
60

 Simple and primitive.
 Jobs were submitted using punch cards and processed in batch mode.

[Figure: the BATCH MONITOR runs JOB 1, JOB 2, …, JOB N one after another between the start and the end of the batch; only one job is under execution at a time.]
Disadvantage
61

 It leads to poor processor utilization.
 Long turn-around time:
 the time difference between completion time and arrival time.
 Turn-Around Time = Completion Time − Arrival Time
Multi-Programmed System
62

[Figure: a MULTIPROGRAM SUPERVISOR manages JOB 1 … JOB N, all resident in memory.]

 All jobs are memory resident.
 There is a separate I/O processor.
 Jobs are either:
• I/O-bound
• CPU-bound
 Priority-based scheduling: I/O-bound jobs are given higher priority.
 Better CPU utilization.
 Turn-around time is still not good.
Time-Sharing System
63

[State diagram: NEW –admit→ READY –dispatch→ RUNNING –release→ EXIT; RUNNING –time-out→ READY; RUNNING –event wait→ BLOCKED; BLOCKED –event occurs→ READY.]

No priority
Round-robin fashion
Time slice
Advantage
64

[Figure: Task1, Task2 and Task3 sharing the CPU in successive time slices.]

 Short turn-around time
 Better CPU utilization
 Task scheduling is pre-emptive.
Characteristics of Subtasks
65

 Subtasks do not require equal time.
 Some subtasks are periodic in nature.
 Some subtasks are asynchronous in nature.
Characteristics of Subtasks
66

 Some subtasks have higher priority.
 Some subtasks have deadlines.
 Some subtasks are triggered by events.
Real-Time Operating System
67

 A time-sharing OS is not suitable.
 Real-time applications require timely responses.
• Hard real-time tasks
• Soft real-time tasks
Real-Time Operating System
68

 A real-time operating system should have the following features:
• Creation of subtasks for a single application
• Ability to assign priorities
• User-defined priority of interrupts
• Task scheduling that is priority-driven or deadline-driven
Example
69

Earliest Deadline First (EDF)

Task   Arrival   Duration   Deadline
T1     0         10         33
T2     4         3          28
T3     5         10         29

[Gantt chart: the three tasks scheduled by EDF over the time line 0–24.]
Built-In Hardware Feature to
support RTOS
70

 Powerful interrupt structure:
• Interrupts must have priority levels.
• A mechanism to enable/disable interrupts.
• The latency of interrupts should be small.
• Dynamic change of priority levels.
Built-In Hardware Feature to support RTOS
71

 Programmable Timer
• Generate timing signal

 Register Banks
72

Real-time Task
Definition
73

 Process (or task)
 A sequence of instructions that, in the absence of other activities, is continuously executed by the processor until completion.

[Figure: task τi on the time line, with arrival time ai, start time si and finishing time fi.]
Definition
74

 Real-time tasks are generated due to certain event occurrences:
• either internal or external.
 Example:
• A task may be generated when a temperature sensor senses a high level.
Task States
75

 A task is said to be:
• ACTIVE: if it can be executed by the CPU.
• BLOCKED: if it is waiting for an event.
 An active task can be:
• RUNNING: if it is being executed by the CPU.
• READY: if it is waiting for the CPU.
Task State Transitions
76

[State diagram: activation → READY –dispatching→ RUNNING –termination→ done; RUNNING –preemption→ READY; RUNNING –wait→ BLOCKED –signal→ READY.]
Ready Queue
77

The ready tasks are kept in a waiting queue, called the ready queue.
 The strategy for choosing which ready task to execute on the CPU is the scheduling algorithm.

[Figure: activation places tasks τ3, τ2, τ1 in the ready queue; dispatching hands the head of the queue to the CPU; termination removes it.]
Real-Time Task parameters
78
[Figure: task τi on the time line, showing request time ri, start time si, finishing time fi, absolute deadline di, relative deadline Di, execution time Ci and period Ti.]

• Ti: inter-arrival time (period) – applicable to periodic tasks
• ri: request time (arrival time ai)
• si: start time
• Ci: worst-case execution time (WCET)
• di: absolute deadline
• Di: relative deadline (relative to the arrival time)
• fi: finishing time
Task Criticality
79

 HARD real-time tasks
• Missing a deadline is highly undesirable.
• A miss may cause catastrophic effects on the system.
Task Criticality
80

 SOFT real-time tasks
• An occasional miss of a deadline is tolerable.
• A miss causes a performance degradation.

 An operating system able to handle hard real-time tasks is called a hard real-time system.
Examples
81

 HARD real time tasks


• Flight control system
• Car brake system
 SOFT real time tasks
• Reading data from the keypad
• Playing video
Real-time tasks
82

 Periodic
• Periodic tasks repeat after a certain fixed time interval.
 Sporadic
• Sporadic tasks recur at random instants.
 Aperiodic
• Same as sporadic, except that the minimum separation between two instances can be 0.
Activation modes
83

 Time driven: periodic tasks


• The task is automatically activated by the
kernel at regular intervals
Activation modes
84

 Event driven: aperiodic/sporadic


tasks
• The task is activated upon the arrival of an
event or through an explicit invocation of
the activation primitive
85

Scheduling Policies
Tasks (Recap.)
86

 Let {Ti} be a set of tasks, where:
• ci is the execution time of Ti
• di is the deadline interval, i.e. the time between Ti becoming available & the time by which Ti has to finish execution
• li is the laxity or slack, defined as li = di − ci
Scheduling Point
87

 At these points on the time line:
• the scheduler makes a decision regarding which task to run next.
 Clock-driven
• Scheduling points are defined by the interrupts of a periodic timer.
 Event-driven
• Scheduling points are defined by task completion & task arrival events.
Task Scheduling On Uniprocessor
88

• Real-time task schedulers can be broadly classified into:

#1 Clock-driven
#2 Event-driven
Clock-driven scheduling
89

 The decision regarding which job to run next is made only at the clock interrupt instants.
• Timers are used to determine the scheduling points.
• The job list, as well as which task is to run and for how long, is stored in a table.
Clock-driven scheduling
90

 Popular examples:
• Table-driven scheduler
• Cyclic scheduler
 Also called an offline or static scheduler.
 Cannot schedule aperiodic tasks.
Table-driven scheduler
91

Task   Start time   Stop time
T1     0            100
T2     101          150
T3     151          225
Table-driven scheduling
92

 For scheduling n periodic tasks:
• The scheduler develops a permanent schedule for a period equal to LCM(P1, P2, …, Pn).
• The schedule is stored in a table.
• The schedule is repeated forever (a sketch follows below).
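A generic sketch of the idea in plain C. The slot times reuse the start times from the table-driven example above; the task bodies and the wait_until_ms() timing hook are placeholder assumptions, not a specific RTOS API.

// Table-driven scheduler sketch: the precomputed schedule for one major cycle
// (the LCM of the task periods) is stored in a table and replayed forever.

typedef struct {
    void (*task)(void);          // task to run in this slot
    unsigned long start_ms;      // slot offset within the major cycle
} sched_entry_t;

static void task_T1(void) { /* application work for T1 */ }
static void task_T2(void) { /* application work for T2 */ }
static void task_T3(void) { /* application work for T3 */ }
static void wait_until_ms(unsigned long t) { /* spin or sleep on a timer tick */ (void)t; }

static const sched_entry_t schedule_table[] = {
    { task_T1,   0 },            // T1 starts at offset 0
    { task_T2, 101 },            // T2 at offset 101
    { task_T3, 151 },            // T3 at offset 151
};
#define N_ENTRIES (sizeof schedule_table / sizeof schedule_table[0])
#define MAJOR_CYCLE_MS 226UL     // the whole table repeats with this period

void scheduler(void) {
    unsigned long cycle_start = 0;
    for (;;) {                                   // the schedule is repeated forever
        unsigned int i;
        for (i = 0; i < N_ENTRIES; i++) {
            wait_until_ms(cycle_start + schedule_table[i].start_ms);
            schedule_table[i].task();            // run the slot's task to completion
        }
        cycle_start += MAJOR_CYCLE_MS;
    }
}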
Table-driven scheduling
93

 Table-driven schedulers are:
• Simple: used in low-cost applications.
• Efficient: very little runtime overhead.
• Inflexible: very difficult to accommodate dynamic tasks.
Disadvantage
94

 When the number of tasks is large:
• a large number of timers has to be set.
• The number of timers supported by an operating system is restricted for efficiency reasons.
Cyclic Schedulers
95

 Cyclic schedulers are very popular:
• They are extensively used in industry.
• A large majority of the small embedded applications being manufactured use a cyclic scheduler.
Cyclic Schedulers
96

 A cyclic scheduler repeats a pre-computed schedule.
 The schedule needs to be stored only for one major cycle.
 A major cycle is divided into one or more minor cycles (frames).
Cyclic Schedulers (Cont.)
97

 The scheduling points for a cyclic scheduler occur at the beginning of frames.
 Each task is assigned to run in one or more frames.
Major Cycle
98

 A major cycle contains a set of tasks:
• in each major cycle, the different tasks recur identically.

[Figure: successive major cycles repeating on the time line.]
Minor Cycle
99

 Each major cycle:
• usually contains an integral number of minor cycles, or frames.
• The period of the major cycle is LCM(P1, P2, …, Pn).
 Frame boundaries are marked by interrupts from a periodic timer.
Selecting an appropriate frame size (F)
100

 Minimize context switches:
• F should be larger than the execution time of each task.
 Minimize the table size:
• F should squarely divide the major cycle.
 Satisfaction of task deadlines:
• between the arrival of a task & its deadline, at least one full frame must exist (a sketch of these checks follows below).
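A small host-side sketch that enumerates candidate frame sizes against the three criteria. The task set is made up for illustration, and the third criterion is encoded with the common textbook form 2F − gcd(F, pi) ≤ Di, stated here as an assumed formalization of the "one full frame between arrival and deadline" requirement above.

#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b) { while (b) { unsigned t = a % b; a = b; b = t; } return a; }
static unsigned lcm(unsigned a, unsigned b) { return a / gcd(a, b) * b; }

int main(void) {
    /* illustrative task set: execution time e, period p, relative deadline d */
    unsigned e[] = { 1, 1, 2 };
    unsigned p[] = { 4, 8, 16 };
    unsigned d[] = { 4, 8, 16 };
    unsigned n = 3, i, F;

    unsigned major = p[0], e_max = e[0];
    for (i = 1; i < n; i++) { major = lcm(major, p[i]); if (e[i] > e_max) e_max = e[i]; }

    for (F = e_max; F <= major; F++) {          /* criterion 1: F holds the largest task */
        int ok = (major % F == 0);              /* criterion 2: F divides the major cycle */
        for (i = 0; ok && i < n; i++)
            if (2 * F - gcd(F, p[i]) > d[i])    /* criterion 3: full frame before deadline */
                ok = 0;
        if (ok) printf("candidate frame size F = %u\n", F);
    }
    return 0;                                   /* prints F = 2 and F = 4 for this set */
}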
Disadvantages of a cyclic scheduler
101

 As the number of tasks increases:
• it becomes difficult to select a suitable frame size.
 CPU time in many frames is wasted.
Event-Driven Schedulers
102

 Unlike clock-driven schedulers:
• they can handle both sporadic & aperiodic tasks.
• They are used in more complex applications.
Event-Driven Scheduling
103

 Scheduling points:
• Defined by task completion & event arrival
 Preemptive Scheduler
• On arrival of higher priority task, the
running task may be preempted.
 These are greedy schedulers:
• Never keep the processor idle if a task is
ready.
Event-Driven Static Schedulers
104

 The task priorities, once assigned by the programmer:
• do not change during runtime.
• RMA (the Rate Monotonic Algorithm) is the optimal static-priority scheduling algorithm.
Event-Driven Dynamic Schedulers
105

 The task priorities can change


during runtime
• Based on the relative urgency of completion
of tasks
• EDF (Earliest Deadline First) is the optimal
uni-processor scheduling algorithm
Earliest Deadline First (EDF)
106

 Algorithm (a queue sketch follows below):
• Each time a new ready task arrives, it is inserted into a queue of ready tasks, sorted by their deadlines.
• If the newly arrived task ends up at the head of the queue, the currently executing task is pre-empted.
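A minimal sketch of this queue discipline in plain C (the task record and the idea of signalling preemption through the return value are illustrative choices, not a particular kernel's API):

#include <stddef.h>

typedef struct task {
    const char   *name;
    unsigned long abs_deadline;   /* absolute deadline of this task instance */
    struct task  *next;
} task_t;

static task_t *ready_head = NULL; /* head = earliest deadline = task to run */

/* Insert a newly arrived task into the deadline-sorted ready queue.
 * Returns nonzero if it became the new head, i.e. the currently executing
 * task must be preempted. */
int edf_enqueue(task_t *t)
{
    task_t **pp = &ready_head;
    while (*pp && (*pp)->abs_deadline <= t->abs_deadline)
        pp = &(*pp)->next;        /* keep FIFO order among equal deadlines */
    t->next = *pp;
    *pp = t;
    return ready_head == t;
}

/* Called at every scheduling point (task completion or arrival). */
task_t *edf_pick_next(void)
{
    return ready_head;            /* the earliest absolute deadline runs */
}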
Earliest Deadline First (EDF)
107

 Scheduling is complex
• Processes to be sorted according to deadline
at each scheduling instant.
• Sorted list maintained as a heap.
• Each update requires O(log n).
Example
108

Earliest Deadline First (EDF)

Task   Arrival   Duration   Deadline
T1     0         10         33
T2     4         3          28
T3     5         10         29

[Gantt chart, 0–24: T1 runs 0–4; T2 (earlier deadline) preempts and runs 4–7; T3 runs 7–17; T1 resumes 17–23.]
Accumulated Utilization
109

Accumulated utilization:  U = Σ (i = 1..n) ci / pi

 Necessary condition for schedulability (with m = number of processors):  U ≤ m
EDF Schedulability Check
110

 The sum of the task utilizations is at most one:

   Σ (i = 1..n) ei / pi  =  Σ ui  ≤  1

 This is both a necessary and a sufficient condition for EDF schedulability (on a uniprocessor, with di = pi); a small check is sketched below.
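A minimal design-time check, in plain C:

/* EDF schedulability on a uniprocessor with di = pi:
 * schedulable if and only if the total utilization does not exceed 1. */
int edf_schedulable(const double e[], const double p[], int n)
{
    double u = 0.0;
    int i;
    for (i = 0; i < n; i++)
        u += e[i] / p[i];          /* accumulate utilization ui = ei/pi */
    return u <= 1.0;               /* 1 = schedulable, 0 = not schedulable */
}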


EDF is a dynamic algorithm
111

 The priority of a task can be determined at any point in time.
 The longer a task waits in the ready queue, the higher the chance (probability) of its being taken up for scheduling.
Implementation of EDF
112

 Simple FIFO queue


• A freshly arriving task is inserted at the end
of the queue
 Sorted queue
• Priority queue
Implementation of EDF
113

A queue is maintained for each distinct deadline.
 When a task arrives:
• its absolute deadline is computed and the task is inserted into the corresponding queue.
EDF Properties
114

 EDF is optimal for a uniprocessor when task pre-emption is allowed:
• if a feasible schedule exists, then EDF will schedule the tasks.
EDF Properties
115

 EDF can achieve 100% processor utilization.
• When an EDF system is overloaded and misses a deadline, it will have run at 100% capacity for some time before the deadline is missed.
Minimum Laxity First (MLF)
116

 Priorities are a decreasing function of the laxity.
 Laxity = relative deadline − time required to complete the task's execution.
Minimum Laxity First (MLF)
117

 The task that is most likely to fail

first is assigned highest priority


• Less laxity, higher priority
• Dynamically changing priority
• Pre-emptive
Minimum Laxity First (MLF)
118

• It requires calling the scheduler periodically to re-compute the laxities; this causes overhead from the many scheduler invocations & the many context switches (a selection sketch follows below).
• It detects missed deadlines early.
• It requires knowledge of the execution times.
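A minimal selection sketch in plain C (the task record and the caller-supplied current time are illustrative assumptions):

typedef struct {
    long abs_deadline;    /* absolute deadline of the task instance */
    long remaining;       /* remaining execution time */
} mlf_task_t;

/* Recompute the laxity of every ready task and return the index of the task
 * with the least laxity (the one most likely to fail first), or -1 if none. */
int mlf_pick(const mlf_task_t task[], int n, long now)
{
    int best = -1;
    long best_laxity = 0;
    int i;
    for (i = 0; i < n; i++) {
        long laxity = (task[i].abs_deadline - now) - task[i].remaining;
        if (best < 0 || laxity < best_laxity) {
            best = i;
            best_laxity = laxity;
        }
    }
    return best;
}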
Example
119

Task   Arrival   Duration   Deadline
T1     0         10         33
T2     4         3          28
T3     5         10         29

[Gantt chart, 0–24, with the laxities recomputed at successive scheduling points:]
l(T1) = 33 − 4 − 6 = 23    l(T1) = 33 − 5 − 6 = 22    l(T1) = 33 − 15 − 6 = 10
l(T2) = 28 − 4 − 3 = 21    l(T2) = 28 − 5 − 2 = 21    l(T2) = 28 − 15 − 2 = 9
l(T3) = 29 − 5 − 10 = 14
Rate monotonic (RM) Scheduling
120

 Most well-known technique for


scheduling independent periodic
tasks
Rate monotonic (RM) Scheduling
121

 Assumptions:
• All tasks have hard deadlines and are periodic.
• All tasks are independent.
• di = pi for all tasks.
• ci is constant and known for all tasks.
• The time required for context switching is negligible.
Rate monotonic (RM) Scheduling
122

 Scheduling policy:
• The priority of a task is a monotonically decreasing function of its period.
 The higher the frequency (i.e. the lower the period) of a task, the higher its priority.
• At any time, the highest-priority task among all those that are ready for execution is allocated the CPU.
Rate monotonic (RM) Scheduling
123

[Figure: priority plotted as an increasing function of task frequency.]
Rate monotonic (RMA) Scheduling
124

 RMA is the optimal uniprocessor static-priority scheduling algorithm:
• if RMA cannot schedule a set of periodic tasks, no other static-priority scheduling algorithm can.
Cases of failing RMA scheduling
125

Task 1: period 5, execution time 2
Task 2: period 7, execution time 4

[Gantt chart, 0–20: under RMA, Task 2 misses a deadline and its remaining computation is scheduled in the next period.]
RMA Schedulability (utilization bound)
126

 The sum of the task utilizations is at most one:

   Σ (i = 1..n) ei / pi  =  Σ ui  ≤  1

 This is a necessary condition for schedulability, but not a sufficient one.
RMA Schedulability
127

 Utilization bound II (Liu & Layland, 1973):

   Σ ui  ≤  n (2^(1/n) − 1)

• ui is the processor utilization due to task Ti.
• n is the number of tasks.
 This bound is a sufficient condition; a check routine is sketched below.
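A minimal sketch of the bound check in plain C. Note that the bound is only sufficient: a failed check is inconclusive, which is exactly what the completion time theorem below addresses.

#include <math.h>

/* Liu & Layland utilization-bound test for RMA:
 * passes if sum(ei/pi) <= n * (2^(1/n) - 1).
 * Returns 1 = schedulable by the bound, 0 = inconclusive. */
int rma_utilization_test(const double e[], const double p[], int n)
{
    double u = 0.0, bound;
    int i;
    for (i = 0; i < n; i++)
        u += e[i] / p[i];
    bound = n * (pow(2.0, 1.0 / (double)n) - 1.0);
    return u <= bound;
}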
Example 1
128

 Check whether the following task set is schedulable using RMA:

T1: e1 = 1, p1 = 4, d1 = 4
T2: e2 = 2, p2 = 6, d2 = 6
T3: e3 = 3, p3 = 20, d3 = 20
Solution
129

   Σ (i = 1..3) ei / pi  =  1/4 + 2/6 + 3/20  =  11/15  ≈  0.733

   n (2^(1/n) − 1)  =  3 (2^(1/3) − 1)  ≈  0.780

   Σ ui ≈ 0.733  ≤  0.780

Therefore, the task set is schedulable.
Completion Time Theorem
(Liu & Lehoczky)
130

 Even if a task set fails the Liu–Layland test, it may still be RMA-schedulable.
 How to check the schedulability of a task set:
• Consider zero phasing of all tasks.
• Draw up the schedule till the first deadline of each task.
• Observe whether each task meets that deadline.
Completion Time Theorem: Basic
premise
131

 The worst-case completion time for a task occurs when it is in phase with its higher-priority tasks.
Completion Time Theorem: Basic
premise
132

[Figure: T1 = (10, 30), T2 = (60, 120).
In phase (both Ф = 0):  T1 0–10, T2 10–30, T1 30–40, T2 40–60, T1 60–70, T2 70–90 → T2 completes at 90.
T1 lags T2 by 20 ms:    T2 0–20, T1 20–30, T2 30–50, T1 50–60, T2 60–80 → T2 completes at 80.
The worst-case completion time of T2 occurs when it is in phase with T1.]
Completion Time Theorem (Liu & Lehoczky)
133

 Drawing the schedule is cumbersome when the number of tasks is large.

 A task Ti will meet its first deadline if

   ei + Σ (j = 1..i−1) ⌈pi / pj⌉ · ej  ≤  pi

 where pi is the period of Ti and p1 ≤ p2 ≤ p3 ≤ … ≤ pn. A check routine is sketched below.
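A minimal check routine in plain C. The worked demands in the closing comment apply it to the task set of the next example and are shown purely as a usage illustration (they are not taken from the slides).

#include <math.h>

/* Completion time (first-deadline) test under zero phasing:
 * Ti meets its first deadline if  ei + sum_{j<i} ceil(pi/pj)*ej <= pi,
 * with tasks indexed so that p[0] <= p[1] <= ... <= p[n-1]. */
int rma_completion_time_test(const double e[], const double p[], int n)
{
    int i, j;
    for (i = 0; i < n; i++) {
        double demand = e[i];
        for (j = 0; j < i; j++)                 /* interference from higher-priority tasks */
            demand += ceil(p[i] / p[j]) * e[j];
        if (demand > p[i])
            return 0;                           /* Ti misses its first deadline */
    }
    return 1;                                   /* every task meets its first deadline */
}

/* Usage sketch with e = {20, 30, 60}, p = {100, 150, 200}:
 * demands are 20, 30 + 2*20 = 70, 60 + 2*20 + 2*30 = 160 -> all within the periods. */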
Example
134

Check whether the following task set is schedulable using RMA:

T1: e1 = 20 ms, p1 = 100 ms
T2: e2 = 30 ms, p2 = 150 ms
T3: e3 = 60 ms, p3 = 200 ms
Example
135

Check whether the following task set is schedulable using RMA:

T1: e1 = 20 ms, p1 = 100 ms
T2: e2 = 30 ms, p2 = 150 ms
T3: e3 = 90 ms, p3 = 200 ms
Implementation of RMA
136

 A naïve RMA implementation:
• maintain the tasks in a FIFO queue
 Insertion: O(1)
 Searching: O(n)

 A better RMA implementation:
• maintain the tasks in a priority queue
 Insertion: O(log n)
 Searching: O(1)
Deadline Monotonic Algorithm (DMA)
137

 When task deadlines & periods are different:
• RMA is not an optimal scheduling algorithm.
• DMA is optimal for such tasks.
 DMA assigns priority based on the task deadline (the shorter the deadline, the higher the priority).
 RMA will always fail if DMA fails.
Overhead due to context switching
138

 In the worst case, each task incurs at most two context switches:
1) when it starts running (possibly preempting the currently running task)
2) when it completes
Overhead due to context switching
139

 Let the context-switching time be constant & equal to C.

 Effectively, the execution time of each task increases to (ei + 2·C).
Overhead due to context switching
140

 Assume three periodic tasks.
 Assume the context-switching time is 1 ms; determine whether the task set is schedulable.

T1: e1 = 10 ms, p1 = d1 = 50 ms
T2: e2 = 25 ms, p2 = d2 = 150 ms
T3: e3 = 50 ms, p3 = d3 = 200 ms
Solution
141

Task T1: 10 + 2 = 12 ms < 50 ms, schedulable.
Task T2: 27 + 3 × 12 = 63 < 150, schedulable.
Task T3: 52 + 4 × 12 + 2 × 27 = 154 < 200, schedulable.
Self Suspension
142

 When does a task suspend execution?
• when it performs input/output operations
• when it is waiting for some event to occur
Self Suspension
143

 Self-suspension introduces an
additional scheduling point.
• The OS removes the self-suspended task
from the queue and places it in the blocked
queue.
• The OS then dispatches the next eligible
task.
Self Suspension
144

 Scheduling points need to be redefined:
• task completion
• task arrival
• self-suspension events
Self Suspension
145

 Revised RMA condition:

   ei + bti + Σ (j = 1..i−1) ⌈pi / pj⌉ · ej  ≤  pi

   where  bti = bi + Σ (j = 1..i−1) min(ej, bj)

 bti – the delay that task Ti incurs due to its own self-suspension & that of all higher-priority tasks.
 bi – the worst-case self-suspension time of task Ti. A check routine is sketched below.
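A minimal sketch extending the completion-time check with the bti term, in plain C (tasks are again assumed to be ordered by non-decreasing period):

#include <math.h>

/* Revised RMA test with self-suspension:
 *   ei + bti + sum_{j<i} ceil(pi/pj)*ej <= pi,
 * where bti = bi + sum_{j<i} min(ej, bj). */
int rma_self_suspension_test(const double e[], const double p[],
                             const double b[], int n)
{
    int i, j;
    for (i = 0; i < n; i++) {
        double bt = b[i], demand;
        for (j = 0; j < i; j++)
            bt += (e[j] < b[j]) ? e[j] : b[j];  /* min(ej, bj) */
        demand = e[i] + bt;
        for (j = 0; j < i; j++)
            demand += ceil(p[i] / p[j]) * e[j];
        if (demand > p[i])
            return 0;                           /* Ti may miss its deadline */
    }
    return 1;
}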
Example
146

Consider the following set of periodic real-time tasks:

T1: e1 = 10 ms, p1 = 50 ms
T2: e2 = 25 ms, p2 = 150 ms
T3: e3 = 50 ms, p3 = 200 ms
Self-suspension times: b1 = 3 ms, b2 = 3 ms, b3 = 5 ms

 Determine whether the tasks are schedulable using RMA.
Solution
147

For T1: 10 + 3 + 0 = 13 < 50, schedulable.
For T2: 25 + (3 + 3) + 3 × 10 = 61 < 150, schedulable.
For T3: 50 + (3 + 3 + 6) + 4 × 10 + 2 × 25 = 152 < 200, schedulable.
Comparison of EDF and RMS
148

Task 1: period 5, execution time 2
Task 2: period 7, execution time 4

[Gantt charts, 0–20, for the same task set as in the RMA failure example: EDF schedules both tasks with no deadline miss (total utilization 2/5 + 4/7 ≈ 0.97 ≤ 1), whereas RMS misses a deadline for Task 2.]
Slack Stealing – to improve average response time
149

 Every periodic job slice must be scheduled in a frame that ends no later than its deadline.
 Let the total amount of time allocated to all the slices in frame k be xk.
 Slack time in frame k = F − xk, where F is the frame size.
Slack Stealing – to improve average response time
150

 If the aperiodic job queue is non-empty at the beginning of frame k, an aperiodic job may be scheduled in the slack time without causing any deadline miss, as sketched below.
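A minimal frame-level sketch in plain C (the three hooks are placeholder assumptions standing in for the cyclic executive's own routines):

extern unsigned aperiodic_pending(void);        /* any aperiodic jobs queued? */
extern void run_aperiodic_for(unsigned ms);     /* run aperiodic jobs for at most ms */
extern void run_periodic_slices(unsigned k);    /* the periodic slices assigned to frame k */

/* Run frame k of size F whose periodic slices need x_k time units:
 * give the slack (F - x_k) to waiting aperiodic jobs first, which improves
 * their average response time without endangering any periodic deadline. */
void run_frame(unsigned k, unsigned F, unsigned x_k)
{
    unsigned slack = F - x_k;
    if (slack > 0 && aperiodic_pending())
        run_aperiodic_for(slack);
    run_periodic_slices(k);
}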
