4.1 Timing Guarantees
Hard real-time systems abound, often in safety-critical applications:
aeronautics, automotive, the train industry, manufacturing control.
Example: an airbag in a car, with a required reaction time of less than 10 ms.
Real-Time Systems
Embedded controllers are expected to finish their tasks
reliably within time bounds.
Essential: an upper bound on the execution time of every task must be statically known.
This bound is commonly called the Worst-Case Execution Time (WCET).
Analogously, the Best-Case Execution Time (BCET) is a lower bound.
Modern Hardware Features
Modern processors increase performance by using:
Caches, Pipelines, Branch Prediction, Speculation
These features make WCET computation difficult:
Execution times of instructions vary widely.
Best case - everything goes smoothly: no cache miss,
operands ready, needed resources free, branch correctly predicted.
Worst case - everything goes wrong: all loads miss the
cache, needed resources are occupied, operands are not ready.
The span between the two may be several hundred cycles.
(Most of) Industry’s Best Practice
Measurements: determine execution times directly by
observing the execution or a simulation on a set of inputs.
This does not guarantee an upper bound on all executions.
Exhaustive execution is in general not possible:
the space (input domain) x (set of initial execution states) is too large.
Compute upper bounds along the structure of the
program:
Programs are hierarchically structured.
Instructions are “nested” inside statements.
So, compute the upper bound for a statement from the upper
bounds of its constituents (see the sketch below).
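A rough sketch of this structural composition, assuming loop bounds are known and ignoring hardware effects such as caches and pipelines:
WCET(S1; S2) <= WCET(S1) + WCET(S2)
WCET(if C then S1 else S2) <= WCET(C) + max(WCET(S1), WCET(S2))
WCET(while C do S, at most n iterations) <= (n+1) * WCET(C) + n * WCET(S)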
Determine the WCET
Complexity:
in the general case: it is undecidable whether a bound exists.
for restricted programs: simple for "old" architectures,
very complex for new architectures with pipelines, caches, interrupts, virtual memory, etc.
Analytic (formal) approaches:
for hardware: typically requires hardware synthesis
for software: requires availability of machine programs; complex
analysis (see, e.g., www.absint.de); requires precise machine
(hardware) model.
4.2 Why Multiple Processes?
The concept of concurrent processes reflects the intuition
about the functionality of embedded systems.
Processes help us manage timing complexity:
multiple rates
• multimedia
• automotive
asynchronous input
• user interfaces
• communication systems
Overview
There are MANY structured ways of programming an
embedded system.
Only the main principles will be covered:
time triggered approaches
• periodic
• cyclic executive
• generic time-triggered scheduler
event triggered approaches
• non-preemptive
• preemptive – stack policy
• preemptive – cooperative scheduling
• preemptive – multitasking
4.2.1 Time-Triggered Systems
Pure model:
no interrupts except by timer
schedule computed off-line → complex, sophisticated
algorithms can be used
deterministic behavior at run-time
interaction with environment through polling
[Figure: the CPU is driven by timer interrupts and interacts with sensor/actuator interfaces by polling.]
Simple Periodic TT Scheduler
Timer interrupts regularly with period P.
All processes have same period P.
[Timeline: starting at t(0), every timer interrupt (period P) releases T1, T2 and T3, which run one after another in a fixed order.]
Properties:
later processes (T2, T3) have unpredictable starting times
no problem with communication between processes or use of
common resources, as there is a static ordering
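A minimal C sketch of this scheme (not taken from the lecture; timer_isr is assumed to be registered as the handler of a hardware timer that fires with period P, and T1…T3 stand for the task bodies):

void T1(void) { /* task body */ }
void T2(void) { /* task body */ }
void T3(void) { /* task body */ }

void timer_isr(void)   /* assumed to be invoked once per period P */
{
    T1();              /* fixed static order: shared resources need no locking  */
    T2();              /* T2 starts whenever T1 happens to finish ...           */
    T3();              /* ... so later start times are not fixed in advance     */
}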
TT Cyclic Executive Scheduler
Processes may have different periods.
The period P is partitioned into frames of length f.
[Timeline: the period P (here 20 time units) is partitioned into frames of length f; processes T1…T4 are statically assigned to frames.]
Problem: if there are long processes, they need to be
partitioned into a sequence of small processes; this is
TERRIBLE, as local state must be extracted and stored globally.
TT Cyclic Executive Scheduler
Some conditions (p(k): period of process k, D(k): relative deadline of process k):
A process executes at most once within a frame: f <= p(k) for all k.
The period P is the least common multiple of all periods p(k).
Processes start and complete within a single frame: f >= WCET(k) for all k.
Between the release time and deadline of every task there is at least one frame boundary: 2f - gcd(f, p(k)) <= D(k) for all k.
Example: Cyclic Executive Scheduler
Conditions: f >= max WCET(k) = 2.0, f <= min p(k) = 4, and 2f - gcd(f, p(k)) <= D(k) for all k.
Possible solution: f = 2.

T(k)   D(k)   p(k)   WCET(k)
T1      4      4      1.0
T2      5      5      1.8
T3     20     20      1.0
T4     20     20      2.0

Feasible solution (f = 2, P = 20):
[Timeline: the period P = 20 is divided into frames of length f = 2; e.g. T1 and T3 run in the first frame, T2 in the second, T1 and T4 in the third, and so on.]
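As an illustration (not part of the lecture material), a small C sketch that tests frame lengths f against the per-task conditions of the previous slide for this task set; it reports f = 2 as the only feasible frame length:

#include <stdio.h>

static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

/* Returns 1 if frame length f satisfies the per-task conditions. */
static int frame_ok(int f, int n, const int p[], const int d[], const double wcet[])
{
    for (int k = 0; k < n; k++) {
        if ((double)f < wcet[k])         return 0;  /* complete within a frame      */
        if (f > p[k])                    return 0;  /* at most once per frame       */
        if (2 * f - gcd(f, p[k]) > d[k]) return 0;  /* frame boundary before D(k)   */
    }
    return 1;
}

int main(void)
{
    const int    p[]    = { 4, 5, 20, 20 };        /* periods of T1..T4             */
    const int    d[]    = { 4, 5, 20, 20 };        /* relative deadlines            */
    const double wcet[] = { 1.0, 1.8, 1.0, 2.0 };  /* worst-case execution times    */
    const int    P      = 20;                      /* lcm of the periods            */

    for (int f = 1; f <= P; f++)
        if (frame_ok(f, 4, p, d, wcet))
            printf("f = %d satisfies the conditions\n", f);
    return 0;
}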
Generic Time-Triggered Scheduler
In an entirely time-triggered system, the temporal control structure
of all tasks is established a priori by off-line support-tools. This
temporal control structure is encoded in a Task-Descriptor List
(TDL) that contains the cyclic schedule for all activities of the node.
This schedule considers the required precedence and mutual
exclusion relationships among the tasks such that an explicit
coordination of the tasks by the operating system at run time is not
necessary. […]
The dispatcher is activated by the synchronized clock tick. It looks at
the TDL, and then performs the action that has been planned for
this instant [Kopetz].
Simplified Time-Triggered Scheduler
main:                                  // usually done offline:
  determine the static schedule (t(k), T(k)) for k = 0, 1, …, n-1;
  determine the period P of the schedule;
  set i = k = 0 initially; set the timer to expire at t(0);
  while (true) sleep();                // set CPU to low-power mode;
                                       // returns after an interrupt

Timer Interrupt:
  k_old := k;
  i := i + 1; k := i mod n;
  set the timer to expire at floor(i/n) * P + t(k);
  execute process T(k_old);            // e.g. using a function pointer in C;
  return;                              // the process returns after finishing

Example (n = 5, P = 16):
  k    t(k)   T(k)
  0     0     T1
  1     3     T2
  2     7     T1
  3     8     T3
  4    12     T2

possible extensions: execute aperiodic background tasks if the
system is idle; check for task overruns (WCET too long)
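A minimal C sketch of this dispatcher, assuming a hypothetical platform API set_timer_expiry()/sleep_cpu() and using a function-pointer table for T(k), filled with the example table above:

#include <stdint.h>

#define N 5                                   /* number of table entries      */
#define P 16                                  /* period of the schedule       */

typedef void (*task_fn)(void);
void T1(void); void T2(void); void T3(void);  /* task bodies (not shown)      */

static const uint32_t t_k[N]    = { 0, 3, 7, 8, 12 };      /* start times t(k) */
static const task_fn  task_k[N] = { T1, T2, T1, T3, T2 };  /* processes T(k)   */

static uint32_t i = 0, k = 0;

void set_timer_expiry(uint32_t t);            /* platform-specific (assumed)  */
void sleep_cpu(void);                         /* low-power wait for interrupt */

void timer_isr(void)                          /* runs on every timer expiry   */
{
    uint32_t k_old = k;
    i = i + 1;
    k = i % N;
    set_timer_expiry((i / N) * P + t_k[k]);   /* program the next release     */
    task_k[k_old]();                          /* run task via function pointer*/
}

int main(void)
{
    set_timer_expiry(t_k[0]);                 /* first release at t(0)        */
    for (;;)
        sleep_cpu();                          /* all work happens in the ISR  */
}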
Summary Time-Triggered Scheduler
deterministic schedule; conceptually simple (static table);
relatively easy to validate, test and certify
no problems in using shared resources
external communication only via polling
inflexible, as there is no adaptation to the environment
serious problems if there are long processes
Extensions:
allow interrupts (shared resources? WCET?) → be careful!!
allow preemptable background processes
Event-Triggered Systems
The schedule of processes is determined by the
occurrence of external interrupts:
dynamic and adaptive: there are possible problems with
respect to timing, the use of shared resources and buffer over-
or underflow
guarantees can be given either off-line (if bounds on the
behavior of the environment are known) or during run-time
[Figure: the CPU is driven by interrupts both from the timer and from the sensor/actuator interfaces.]
Non-Preemptive ET Scheduling
Principle:
Each event has an associated process that will be executed.
Events are emitted (a) by external interrupts and (b) by the processes themselves.
Events are collected in a queue; depending on the queuing
discipline, an event is chosen for execution.
Processes cannot be preempted.
Extensions:
A background process can run (and be preempted!) if the event queue is empty.
Timed events enter the queue only after a time interval has elapsed; this enables, for example, periodic process instantiation. (See the event-loop sketch below.)
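A minimal C sketch of such a non-preemptive event loop (illustrative only; post_event, background_task etc. are not a real API):

#include <stddef.h>

typedef void (*handler_fn)(void);

#define QUEUE_LEN 32
static handler_fn queue[QUEUE_LEN];           /* event queue (FIFO discipline)     */
static size_t head = 0, tail = 0;

/* called from interrupt handlers or from running processes to emit an event */
int post_event(handler_fn h)
{
    /* in a real system, queue accesses from ISRs must be protected,
       e.g. by briefly disabling interrupts */
    size_t next = (tail + 1) % QUEUE_LEN;
    if (next == head)
        return -1;                            /* queue full: buffer overflow!      */
    queue[tail] = h;
    tail = next;
    return 0;
}

static void background_task(void) { /* preemptable idle-time work */ }

void event_loop(void)
{
    for (;;) {
        if (head != tail) {
            handler_fn h = queue[head];       /* pick the next event (FIFO)        */
            head = (head + 1) % QUEUE_LEN;
            h();                              /* runs to completion: no preemption */
        } else {
            background_task();                /* extension: background work if idle*/
        }
    }
}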
Non-Preemptive ET Scheduling
Properties:
communication between processes is simple (no problems with
shared resources); interrupts may cause problems with shared
resources
buffer overflow if too many events are generated by
environment or processes
long processes prevent others from running and may cause buffer overflow
• partition processes into smaller ones
• local context must be stored
[Figure: a long process split into part 1 and part 2; its local context is kept in global memory between the parts, as sketched below.]
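An illustrative C sketch of such a partitioning (post_event() refers to the event-queue sketch above; the loop bounds are made up):

/* local state of the long process, kept in global memory between the parts */
static struct { int i; long acc; } ctx;

int post_event(void (*h)(void));              /* event queue from the sketch above */
void long_process_part2(void);

void long_process_part1(void)
{
    ctx.acc = 0;
    for (ctx.i = 0; ctx.i < 500; ctx.i++)     /* first half of the work            */
        ctx.acc += ctx.i;
    post_event(long_process_part2);           /* schedule the continuation         */
}

void long_process_part2(void)
{
    for (; ctx.i < 1000; ctx.i++)             /* resume from the stored state      */
        ctx.acc += ctx.i;
}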
Preemptive ET Scheduling – Stack Policy
Similar to the non-preemptive case, but processes can be
preempted by others; this partly resolves the problem of
long tasks.
If the order of preemption is restricted, we can use the usual
stack-based context mechanism of function calls
(process = function).
main() {
  …
  task1();
  …
}

task1() {
  …
  task2();
  …
}

[Figure: main memory with the stack; the contexts of task 1 and task 2 are kept on the ordinary call stack.]
Preemptive ET Scheduling – Stack Policy
[Timeline: T1 is preempted by T2, which is in turn preempted by T3; each process resumes only after the preempting process has finished.]
Processes must finish in LIFO order of their instantiation.
restricts flexibility
not useful if several processes wait an unknown time for external events
Shared resources (communication between processes!)
must be protected, for example by disabling interrupts or by using semaphores.
Process
A process is a unique execution of a program.
Several copies of a “program” may run simultaneously or at
different times.
A process has its own state. In the case of a thread, this
state consists mainly of:
register values;
the memory stack.
Processes and CPU
Activation record:
copy of the process state
includes registers and PC
local data structures
Context switch:
the current CPU context goes out
the new CPU context goes in
[Figure: the CPU registers are exchanged with the activation records of thread 1 and thread 2 stored in memory.]
Co-operative Multitasking
Each process allows a context switch at cswitch() call.
Separate scheduler chooses which process runs next.
Advantages:
predictable where context switches can occur
fewer errors with the use of shared resources
Problems:
programming errors can keep other threads out if a thread never gives up the CPU
real-time behavior is at risk if it takes too long before a context switch is allowed (see the sketch below)
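An illustration in C, assuming a scheduler-provided cswitch() as named above and a hypothetical work function process_sample():

void cswitch(void);                           /* yield point provided by the scheduler  */
int  process_sample(int i);                   /* hypothetical bounded chunk of work     */

void worker_thread(void)
{
    for (int i = 0; ; i++) {
        process_sample(i);                    /* must be short enough for the deadlines */
        cswitch();                            /* a context switch may happen ONLY here  */
    }
}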
2.6 A Typical Programming Interface
Example of a co-operative multitasking OS for small
devices: NutOS (as used in the practical exercises).
The semantics of the calls is expressed using Petri nets:
A Petri net is a bipartite graph consisting of places and transitions.
Data and control are represented by moving tokens.
Tokens are moved by transitions according to the following rule: a transition
can fire (is enabled) if there is at least one token in every input
place. After firing, one token is removed from each input place
and one is added to each output place.
[Figure: a transition before and after firing.]
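A toy C sketch of this firing rule (not part of NutOS): marking[] holds the number of tokens in each place, and in[]/out[] list the indices of a transition's input and output places:

#include <stdbool.h>
#include <stddef.h>

bool enabled(const int marking[], const size_t in[], size_t n_in)
{
    for (size_t i = 0; i < n_in; i++)
        if (marking[in[i]] < 1)
            return false;                     /* some input place has no token    */
    return true;
}

bool fire(int marking[], const size_t in[],  size_t n_in,
                         const size_t out[], size_t n_out)
{
    if (!enabled(marking, in, n_in))
        return false;
    for (size_t i = 0; i < n_in; i++)
        marking[in[i]]--;                     /* remove one token per input place */
    for (size_t i = 0; i < n_out; i++)
        marking[out[i]]++;                    /* add one token per output place   */
    return true;
}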
Preemptive Multitasking
Most powerful form of multitasking:
the scheduler (OS) controls when context switches occur;
the scheduler (OS) determines what process runs next.
Use of timers to call the OS and switch contexts:
[Figure: the hardware timer periodically interrupts the CPU and transfers control to the OS.]
Use hardware or software interrupts, or direct calls to
OS routines to switch context.
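A conceptual C sketch of this mechanism (save_context(), restore_context() and pick_next() are placeholders for the architecture-specific OS routines, not a real API):

typedef struct context context_t;             /* saved registers, PC, stack pointer */

extern context_t *current;                    /* context of the running process     */
context_t *pick_next(void);                   /* scheduling decision of the OS      */
void save_context(context_t *c);
void restore_context(const context_t *c);

void timer_isr(void)                          /* raised periodically by the timer   */
{
    save_context(current);                    /* current CPU context goes out       */
    current = pick_next();                    /* OS decides what runs next          */
    restore_context(current);                 /* new CPU context goes in            */
}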