   -- Two task objects (variables)
   Yellow_LED : Flash_LED (Color  => Yellow,
                           Period => 4);
   Green_LED  : Flash_LED (Color  => Green,
                           Period => 3);

   -- A pointer to a task object
   type Flash_Ptr is access Flash_LED;
   Blue_LED : Flash_Ptr;

begin
   Put_Line ("Flashing LEDs");
   Blue_LED := new Flash_LED (Color  => Blue,
                              Period => 8);
   Put_Line ("Main task is done");
end LED_Task_Examples;
When this program executes, five tasks run concurrently to display the
state (on or off) of four different-colored light-emitting diodes (LEDs). You
can download this program from our web site, build it, and run it to see the
output displayed by the five tasks.
It takes only a single line to declare the singleton task Flash_Red_LED. This
declaration provides only the task's name. The body contains the sequence
of instructions executed by the task. In this case, our task body contains
an infinite loop that alternately calls a procedure to turn on the red LED
and a procedure to turn it off.
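The declaration and body described here look something like the following sketch. We reconstructed it from the description above; the exact period and the procedure names Turn_On_LED and Turn_Off_LED are assumptions, following the pattern of the revised Flash_LED body shown later in this chapter.

```ada
task Flash_Red_LED;   -- the single-line declaration

task body Flash_Red_LED is
begin
   loop
      Turn_On_LED (Red);
      delay 0.5;           -- half of an assumed one-second period
      Turn_Off_LED (Red);
      delay 0.5;
   end loop;
end Flash_Red_LED;
```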
The declaration of task type Flash_LED includes two discriminants. As
with record discriminants, we supply an actual value when we use the type
to declare an object. The declarations of variables Yellow_LED and Green_LED
illustrate the use of these discriminants. The values we gave to the
discriminants Color and Period are used by the task body. Looking at
the body for task type Flash_LED, you may wonder why we declared the
discriminant Period to be an integer subtype rather than Duration. That
would save us the trouble of converting the whole number to a Duration
value. Task discriminants, like record discriminants, are limited to discrete
types and access types. Because Duration is a real type, Duration discriminants are illegal.
We have shown two ways to create a task: defining a singleton task or
defining a task type and declaring task variables. Tasks may also be created
dynamically using access types. Our program declares the access type Flash_Ptr
that designates a Flash_LED task and an access variable that holds the
location of a Flash_LED task. Like all access objects, the initial value of
Blue_LED is null. In the main subprogram we use the allocator new to create
a new Flash_LED task and assign its location to Blue_LED. We supplied
appropriate values for the two discriminants when we allocated the task.
Unlike a package, a task cannot be written and compiled by itself. The
task declaration and body must exist within some other program unit such
as a package or subprogram. All of the example tasks are written within the
declarative part of the program LED_Task_Examples.
3.2 The task life cycle
A task passes through several different states during its lifetime. The state
diagram in Figure 3.1 shows the basic states and transitions that define the
behavior of a task. In the next sections we'll look at each of the different
states in this diagram.
[Figure 3.1 The basic states of a task. The diagram shows the states
Creation, Activation, Executable, and, within a region labeled "Finished,"
Completed and Terminated. Transitions: Creation passes to Activation when
the elaboration of the entire declarative part is successful [static creation]
or when allocation is successful [dynamic creation]; Activation passes to
Executable when activation is successful; Executable passes to Completed
when the task reaches the end of its body; an "exception raised" transition
leads from Creation, Activation, and Executable into the Finished region;
Completed passes to Terminated when all dependents have terminated.]
3.2.1 Task creation
Our LED examples demonstrated the two approaches for creating task objects.
Static creation is done when we define a singleton task or we declare
a variable of a task type. Dynamic creation uses the allocator operation to
assign a location to an access variable that designates a task. Static creation
is done through elaboration, the run-time processing of declarations. The
elaboration of ordinary variables includes allocation of memory for them on
the stack and the assignment of any initial values. The elaboration of a task
object (singleton task or task variable) also allocates memory. This memory
includes a task control block and a run-time stack where the task's local
variables and subprogram call frames are stored. Since each task has its own
run-time stack, tasks of the same type have their own separate copies of any
local objects defined in the task body. Tasks created dynamically obtain
their memory when the allocator operator is invoked.
In program LED_Task_Examples, tasks Yellow_LED and Green_LED are created
when those variables are elaborated. The task designated by the pointer
Blue_LED is not created until it is assigned a value from the allocator when
the assignment statement in the main subprogram is executed.
The amount of memory allocated to each task for its task control block and
run-time stack is determined by a default value in the compiler. This amount
may be insufficient for tasks that require large amounts of storage for local
variables and/or are highly recursive. On the other hand, the default stack
size may be overly large for a program with many tasks each requiring little
stack space. We can use the pragma Storage_Size to specify the amount of
memory allocated for each singleton task and for each task object created
from a task type. This pragma is given in the declaration of the singleton
task or the declaration of the task type. The following task declarations
include pragmas to set the memory available for our four flashing tasks to
1000 storage units.
task Flash_Red_LED is
   pragma Storage_Size (1000);
end Flash_Red_LED;

task type Flash_LED (Color  : Color_Type;
                     Period : Natural) is
   pragma Storage_Size (1000);
end Flash_LED;
Comparing this code to our original declarations of these tasks in program
LED_Task_Examples, you'll notice that we needed to include end clauses in
the declarations. Later we'll include other task items in task declarations.
It is possible that an exception is raised during the elaboration of the
declarative region containing task definitions. For example, the constant
declaration
-- calling function LED_Count to initialize Count
Count : constant Positive := LED_Count;
would raise a Constraint_Error exception when it is elaborated if the function
LED_Count returns zero. Any tasks that were created prior to this failed
constant elaboration become terminated. A wise programmer will avoid
declarations that could raise exceptions during elaboration.
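One way to follow this advice is to declare the object without an initial value and assign it in the executable part, where a handler can catch any failure. The following fragment is our own sketch built around the LED_Count function from the example above:

```ada
Count : Positive;       -- no initial value, so elaboration cannot fail here
begin
   Count := LED_Count;  -- a zero result raises Constraint_Error here,
                        -- where the handler below can catch it
exception
   when Constraint_Error =>
      Put_Line ("LED_Count returned a value outside of Positive");
end;
```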
3.2.2 Task activation
During this phase of a task's life cycle, the declarative part of the task body
is elaborated. Space is allocated for any local variables on the task's run-time
stack. The task body's declarative part could include the definition of
another task, which is created as part of the task's elaboration.
A dynamically created task is activated immediately after a successful
allocation. The activation of static tasks is deferred until the elaboration
of the declarative region in which they are defined is complete. In our example
program, the tasks Flash_Red_LED, Yellow_LED, and Green_LED are
activated after the elaboration of program LED_Task_Examples is complete.
They are activated before the execution of the first executable statement in
the program.
Should the elaboration of the task body's declarative part raise an exception,
the task becomes completed. Again, the wise programmer will avoid
complex declarations that might raise exceptions during their elaboration.
3.2.3 Task execution
A task enters the executable state immediately after a successful activation.
In this state the statements in the task's body are executed. The executable
state does not mean that the task is actually executing statements. If there
are more tasks than processors, some of the tasks must wait until a processor
is available. A task may also need to suspend executing its statements until
some resource is available. For example, a task calling Get to read information
will be blocked until the input is available. We'll discuss resources in the
next two chapters. Executing a delay statement is another way a task may
block itself. Figure 3.2 shows the substates of a task within the executable
state.
[Figure 3.2 The substates of executable. Within the Executable state a
task is Ready, Running, or Blocked: Ready passes to Running on "dispatch";
Running returns to Ready on "preempt"; Running passes to Blocked on
"block"; Blocked returns to Ready on "wakeup"; Running leaves the
Executable state on "done".]
When a task first becomes executable, it starts off in the ready state. Tasks
in the ready state have all of the resources they need to run except for a
processor. When a processor becomes available, the task scheduler dispatches
the task to the running state. In this state the task executes its sequence
of statements. Should the running task execute a delay statement or need
some resource that is not currently available, it relinquishes the processor
and moves to the blocked state. When the delay expires or the required
resource becomes available, the task is awoken by moving it to the ready
state. A running task is done when it has finished execution of the statements
in its body or raised an exception.
The replacement of the running task with a ready task is called a context
switch. When there is more than one task in the ready state, the scheduler
must select one of them. We'll discuss the scheduler in detail in Chapters 7
and 8. For now we'll mention that the set of ready tasks is usually stored
in either a FIFO queue or a priority queue. The scheduler simply picks the
task at the front of the queue. Figure 3.2 also includes a transition from
running to ready labeled preempt. In many scheduling schemes, a task may
be preempted by the scheduler because a higher priority task is now ready
or the task has used up its allotment of processor time.
3.2.4 Task finish
A finished task is either in the completed state or the terminated state. For
most purposes, it makes little difference which of these two states a finished
task is in. There are two ways an executable task moves to the completed
state. Completion of all of the statements in the task body is the normal
way. An exception handled by the task body provides another expected way
to complete the task. An unhandled exception in the task body also sends a
task to the completed state. As we mentioned in our discussion of activation,
should an exception be raised during the elaboration of the task body, it will
go directly to the completed state.
A task moves from completed to terminated when all its dependent tasks
are terminated. To understand this state transition requires an understanding
of the master-dependent relationship. We'll discuss that in the next
section.
3.3 Task hierarchies
While the sequential programs we discussed in Chapter 2 defined no task
objects, a task was still part of their execution. Sequential programs have
a single thread of control carried out by a predefined task called the environment
task. The environment task provides the thread of control that
executes the main subprogram. Before it calls the main subprogram, the
environment task carries out the elaboration of the bodies of packages that
are part of the program. Elaboration of a package body consists of the
elaboration of any declarations followed by the execution of the optional
package body initialization sequence.
Each task is part of two hierarchical relationships. The parent-child
hierarchy is most relevant to task creation. The master-dependent hierarchy
is most relevant to the finishing of tasks.
3.3.1 Parent-child
A task that is directly responsible for creating another task is called the
parent of the task, and the task it creates is called the child. A parent task
may create a child statically through the elaboration process or dynamically
through use of the allocator. In program LED_Task_Examples (page 107), the
environment task is the parent of the four child tasks created by the main
subprogram.
When a parent task creates a child, it is suspended until that child completes
its activation. When that activation is complete, both parent and child
execute concurrently. Ada's block structure allows us to declare task objects
in any block, including within a task body. Such task nesting can give rise
to a hierarchy of suspensions. For example, while a parent task is suspended
for the activation of a child task, that child task may have its activation
suspended while a task it created activates. The environment task is at the
top of the parent-child hierarchy.
Program LED_Task_Examples is suspended three times during its elaboration
to create the three child tasks Flash_Red_LED, Yellow_LED, and Green_LED.
The program is suspended a fourth time during its execution when it
uses the allocator to create task Blue_LED.all. Figure 3.3 adds the suspended
state to the basic task state diagram we illustrated in Figure 3.1.
3.3.2 Master-dependent
Every task depends on a master. The concept of master is more complicated
than that of parent. The direct master of a task is a block structure. Block
structures can be task bodies, block statements, subprogram bodies, or
library packages. Determining the direct master of a task depends on how
the task was created. The direct master of a statically created task is the
block in which the task object is declared. The direct master of a dynamically
allocated task is the block in which the access type is declared (not the
block in which the task pointer variable is declared). If the direct master is
a block statement or a subprogram, then this master is dependent on the
[Figure 3.3 Parent tasks are suspended during the activation of a child
task. The diagram adds a "Suspend for Child Activation" state to the
diagram of Figure 3.1: a parent in the Activation state [static creation] or
the Executable state [dynamic creation] moves to this state on "create
child" and returns to its previous state on "child activation complete".]
task executing the block. Therefore, a task may have multiple masters: the
block or subprogram acting as its direct master, and the task that executes
the block as an indirect master.
An important property of the master-dependent relationship is that a
master may not terminate until all of its dependents have terminated. Figure
3.1 illustrates this property for the case in which a master is a task. A
task moves from completed to terminated when all of its dependents have
terminated. When the direct master of a task is a block statement or subprogram,
the task executing that block may not exit the block until all of that
block's dependent tasks have terminated. Procedure LED_Task_Examples is
the master of all four tasks in that program. Control does not leave this
procedure after executing its three statements. The environment task must
wait until the dependents of the main subprogram terminate. As these tasks
contain infinite loops, the main subprogram never terminates. Let's look at
another example. The following procedure is passed an array of real numbers
and returns the sum and the product of all the numbers in that array.
procedure Math (Values  : in  Float_Array;
                Sum     : out Float;
                Product : out Float) is

   task Multiply;

   task body Multiply is
   begin
      Product := 1.0;
      for Index in Values'Range loop
         Product := Product * Values (Index);
      end loop;
   end Multiply;

begin  -- Math
   Sum := 0.0;
   for Index in Values'Range loop
      Sum := Sum + Values (Index);
   end loop;
end Math;
The singleton task Multiply is declared locally within procedure Math.
The task whose thread of control called procedure Math is the parent of task
Multiply. The scope rules described in Chapter 2 apply to all identifiers,
including tasks. As task Multiply is nested within procedure Math, it has
global access to procedure Math's parameters. Task Multiply is activated
during the elaboration of procedure Math. After task Multiply completes
its activation, the code in its body executes concurrently with the code in
the body of procedure Math. Procedure Math is the direct master of task
Multiply. The task whose thread of control called procedure Math is an
indirect master of task Multiply. Should procedure Math complete the calculation
of Sum before task Multiply completes the calculation of Product,
Math's thread of control will be suspended until the dependent task is terminated.
Should task Multiply complete the calculation of Product before
procedure Math completes its calculation of Sum, task Multiply will become
completed. As task Multiply has no dependent tasks, it will then immediately
move to the terminated state. The main point in this example is that
control does not leave a block while that block is the master of a task that
is not terminated.
Let's look at another example. The following procedure sorts an array of
real numbers. It uses two tasks to accomplish its objective. One task sorts
the first half of the array and the other sorts the second half. Each task
calls a sequential sort procedure to do the actual sorting. We call a merge
function to combine the two sorted slices into a single sorted array.
procedure Concurrent_Sort (Values : in out Array_Type) is
   Split : constant Integer := Values'First + Values'Length / 2;
begin
   declare  -- a block statement
      task type Worker_Task (First : Positive;
                             Last  : Positive);

      task body Worker_Task is
      begin
         -- Call a sequential sort to sort the assigned slice
         Sort (Values (First .. Last));
      end Worker_Task;

      -- Declare two worker tasks
      Low_Part  : Worker_Task (First => Values'First,
                               Last  => Split - 1);
      High_Part : Worker_Task (First => Split,
                               Last  => Values'Last);

   begin  -- executable part of block statement
      -- do nothing (wait for the worker tasks to terminate)
      null;
   end;   -- of block statement

   -- Call a function to merge the two sorted halves
   Values := Merge (Values (Values'First .. Split - 1),
                    Values (Split .. Values'Last));
end Concurrent_Sort;
We cannot merge the two array halves until the worker tasks have completed
their work. To ensure that we don't call the merge function before the
two array halves are sorted, we declared the worker tasks in a block statement.
We used two task discriminants to assign a different slice of the array
to each of our worker tasks. The block statement is the direct master of the
two worker tasks. Control will not exit this block until both worker tasks
have terminated. Thus we are guaranteed that the merge function will not
be called until both halves of the array are sorted. The executable portion of
the block contains only a null statement. As an alternative, we could have
divided the array into three pieces, giving one third to each of the worker
tasks and one third to the calling task. For the most efficient sort, we would
have as many concurrent tasks sorting as there are processors in our system.
You can find a complete program, Concurrent_Sort_Demo, on our web site
that demonstrates procedure Concurrent_Sort.
You can find more details on the termination of tasks in section 9.3 of the
ARM. Section 7.6.1 of the ARM discusses completion of masters, including
masters that include controlled types. A controlled type allows us to write
destructor code for objects. Dale and McCormick (2007) provide simple
examples that use controlled types to reclaim memory used by dynamic
data structures.
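As a brief illustration of that idea (our own sketch, not an example from Dale and McCormick), a controlled type overrides the Finalize procedure, which the run-time system calls automatically when an object of the type ceases to exist:

```ada
with Ada.Finalization;
package Resource_Holders is
   type Holder is new Ada.Finalization.Controlled with record
      In_Use : Boolean := False;
   end record;
   overriding procedure Finalize (Object : in out Holder);
end Resource_Holders;

package body Resource_Holders is
   overriding procedure Finalize (Object : in out Holder) is
   begin
      Object.In_Use := False;   -- destructor code: release the resource
   end Finalize;
end Resource_Holders;
```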
3.4 Exceptions
When a sequential program running under a typical operating system raises
an exception for which it has no exception handler, the exception is propagated
to the operating system, which displays a message. When a task
raises an exception for which it has no exception handler, the exception is
not propagated; the task dies silently.¹ We recommend including an others
handler with every task body that will inform us of an unhandled exception.
The following task body includes such a handler.
task body Bad_Logic is
   subtype Dozen_Range is Integer range 1 .. 12;
   Dozen : Dozen_Range;
begin
   Dozen := 1;
   loop
      Dozen := Dozen + 1;
      exit when Dozen > 12;
   end loop;
exception
   when Except : others =>
      Put_Line
         (File => Standard_Error,
          Item => "Task Bad_Logic died with the exception " &
                  Ada.Exceptions.Exception_Name (Except));
end Bad_Logic;
This code includes an exception feature we did not discuss in Chapter
2. The package Ada.Exceptions (section 11.4.1 of the ARM) provides the
type Exception_Occurrence and a number of operations for exceptions.
The handler for others includes the Exception_Occurrence choice parameter
Except. This parameter is assigned the occurrence information on the
exception that this handler caught. We use this parameter in the call to
Exception_Name to obtain the name of the exception. Our call to Put_Line
writes this information to the standard error file. Here is the output from
this handler.
Task Bad_Logic died with the exception CONSTRAINT_ERROR
Such an exception handler will not handle exceptions raised in the task
body's declarative part. These exceptions must be handled in an enclosing
scope. It is usually better to simplify declarations (e.g. give an object its
initial value in an executable statement rather than in its declaration)
than to handle exceptions.
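For example, when an exception raised in a task's declarative part causes its activation to fail, Ada raises Tasking_Error in the scope that awaits that activation. The following fragment is our own sketch (Worker_Type is a hypothetical task type) showing where such a handler might go:

```ada
begin
   declare
      Worker : Worker_Type;   -- its declarative part may raise an exception
   begin
      null;   -- normal work with Worker goes here
   end;
exception
   when Tasking_Error =>
      Put_Line (File => Standard_Error,
                Item => "a task failed during activation");
end;
```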
Handling exceptions that are raised during the creation or activation of a
task is a complex matter. We recommend keeping declarations simple (for
example, avoid using function calls to initialize declared constants and vari-
ables). Burns and Wellings (2007) provide additional details on exceptions
and tasking.
1 Package Ada.Task_Termination provides mechanisms we may use to identify a protected
procedure that is called by the implementation when a task terminates. We discuss protected
procedures in Chapter 4.
3.5 The implementation of Ada tasking
Many programmers have their first introduction to the concept of the process
in a class on operating systems. All modern operating systems provide the
facilities for creating processes. The term process has specific and sometimes
different meanings in the context of particular operating systems. Many
operating systems support different levels of sequential processes. For example,
Linux uses the word process for an instance of an executing program. We
might run a word processing program and a spreadsheet program simultaneously
on a Linux machine, each executing as a process. Each process
has its own private address space. One process may not access another
process's address space. In order for processes to communicate, they must use
inter-process communication mechanisms provided by the operating system.
Linux uses the word thread to refer to a sequential subprogram executing
concurrently within the context of a process. The threads within a process
share that process's address space. Within our word processing program,
a spell checking thread may execute concurrently with a thread that takes
our input from the keyboard. Since they share the same memory space,
these threads may access the same variables. The terms heavyweight and
lightweight are often used with process and thread. A process is said to be
heavyweight because it takes significantly more effort to create a process
with its new address space than a thread in an existing address space.
An Ada task executes within a memory space shared by all tasks in the
program. The usual scope rules determine which objects a task can access.
So the task is more like a Linux thread than a process. If our Ada program is
running under Linux, it is very likely that the Ada run-time support system
maps the Ada tasks to operating system threads. Alternatively, it is possible
that the Ada program might have its own tasking support system, making
the tasks invisible to the operating system. We may also use such a tasking
support system for running concurrent Ada programs on bare (no operating
system) hardware. In practice, you rarely need to know the underlying
implementation of the task. We'll give more details on the implementation
of Ada tasks in our discussions of run-time systems in Chapter 9.
3.6 Other task features
In this section we'll introduce some less commonly used features of Ada
tasks. You can find more details in the ARM and in Burns and Wellings
(2007).
3.6.1 Aborting tasks
The abort statement allows us to halt one or more tasks. The following
statement aborts the task named Yellow_LED and the task that the pointer
Blue_LED designates.
abort Yellow_LED, Blue_LED.all;
These tasks were defined in the program LED_Task_Examples on page 107.
The execution of an abort statement results in the named tasks becoming
abnormal. Tasks that are dependent on a task named in an abort statement
also become abnormal.
The use of an abort statement may have catastrophic effects upon the
entire program. It is a means of last resort. We, like many others (Burns
and Wellings, 2007; Taft et al., 2006; Wikibooks, 2010b), strongly discourage
using the abort statement.
You can abort a task any time after its creation. Aborting an abnormal
task or a completed task has no effect. The execution of the abort statement
is finished once the named task and all of its dependents become abnormal.
Abnormal tasks that were not running when they were aborted move
immediately from the abnormal state to the completed state. Abnormal tasks
that were running when they were aborted may not move to the completed
state immediately. Such tasks may continue to execute until they reach an
abort completion point. Section 9.8 of the ARM lists these points. Many
of the points involve concepts we cover in Chapters 4 and 5. It is possible
that a task may never reach an abort completion point and continue
its execution, further corrupting the program. A program making use of
the Real-Time Systems Annex (discussed in Chapter 8) will have an upper
bound on the time it takes for an abnormal task to complete. To make the
task abortion issue even more complicated, there are a number of abort-deferred
operations which must be allowed to complete before an abnormal
task completes. These operations are also listed in section 9.8 of the ARM.
3.6.2 Task identification
In most circumstances a task name or a pointer that designates a task is all we
need to reference that task. You may find in some circumstances that it is
useful to have a unique identifier rather than a name. The package
Ada.Task_Identification (section C.7.1 of the ARM) defines unique identifiers for
tasks. This package includes the private type Task_Id, an equality operator
for comparing IDs, a function for converting an ID into a string, and a
function that returns the ID of the calling task. Here is a revised version of
the task body in program LED_Task_Examples that calls function Current_Task
to obtain the ID of the task that calls it and function Image to convert
the ID to a string.
task body Flash_LED is
   use Ada.Task_Identification;
   My_ID : Task_Id;
begin
   My_ID := Current_Task;
   Put_Line (Color_Type'Image (Color) &
             "'S ID is " & Image (My_ID));
   loop
      Turn_On_LED (Color);
      delay Duration (Period) / 2;
      Turn_Off_LED (Color);
      delay Duration (Period) / 2;
   end loop;
end Flash_LED;
The output produced by the Put_Line statement in the three tasks executing
this body is

YELLOW'S ID is yellow_led_003E7110
GREEN'S ID is green_led_003EA160
BLUE'S ID is blue_led_003ED2E8
Package Ada.Task_Identification also provides the attribute Identity,
which returns the identity of the task. For example, the assignment statement

My_ID := Flash_Red_LED'Identity;

assigns the value of the singleton task Flash_Red_LED's ID to the variable
My_ID.
3.6.3 Task attributes
The generic package Ada.Task_Attributes (section C.7.2 of the ARM)
allows you to create, set, and query your own attributes for tasks. Once you
create an instance of this package, the attribute is added to each task. You
use the task IDs defined in package Ada.Task_Identification to specify
the task for which you want to query or set your attribute.
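As a sketch of such an instantiation (the package name and the choice of attribute are our own assumptions):

```ada
with Ada.Task_Attributes;

--  Every task in the program now carries an Integer "job number"
--  attribute whose initial value is zero
package Job_Number is new Ada.Task_Attributes
                             (Attribute => Integer, Initial_Value => 0);
```

A task could then call Job_Number.Set_Value (42) to set its own attribute and Job_Number.Value to query it; both operations also accept a Task_Id parameter so one task can act on the attribute of another.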
Summary
• A task is a program unit that executes concurrently with other tasks.
• The definition of a task includes two parts: a declaration and a body.
• We use a task type declaration to create multiple tasks that execute the
same sequence of statements.
• Each instance of a task type has its own run-time stack and therefore its
own copies of variables defined in the task body.
• We use a singleton task declaration when we want to ensure that there is
only one task executing a sequence of statements.
• Tasks may be created statically (through elaboration of declarations) or
dynamically (through use of an allocator).
• Each task passes through several different states during its lifetime.
• During task creation, memory is allocated for the task control block and
run-time stack.
• During task activation, the task body is elaborated.
• When a task is executable, it may be in one of three substates: Ready,
Running, and Blocked.
• The task scheduler is responsible for deciding which tasks run at a given
time.
• The environment task provides the thread of control for the main subprogram.
• The parent-child hierarchy is most relevant to task creation. A parent task
is suspended while its child activates.
• The master-dependent hierarchy is most relevant to task finalization.
• A master may be a block statement, subprogram, or package.
• Control will not exit a block statement or subprogram as long as it has a
dependent that has not terminated. This feature allows us to synchronize
some concurrent activities.
• A task is complete when the execution of the sequence of statements in
its body is complete or it raises an exception.
• A completed task is terminated when all its dependents are terminated.
• A task usually dies silently when it raises an unhandled exception.
• The abort statement provides a means for halting tasks. We strongly
discourage its use.
• The package Ada.Task_Identification defines unique identifiers for tasks.
• The generic package Ada.Task_Attributes allows you to create, set, and
query your own attributes for tasks.
Exercises
3.1 Obtain a copy of the program LED_Task_Examples (page 107) from our
web site. Build and run it. Do you get the same output each run? Why?
3.2 Obtain a copy of the program LED_Task_Examples (page 107) from our
web site. Rewrite the specification of task type Flash_LED so that the
discriminant Period is a pointer to a duration value rather than an
integer.
3.3 Obtain a copy of the program LED_Task_Examples (page 107) from
our web site. Delete the singleton task definition. Delete the variables
Yellow_LED, Green_LED, and Blue_LED. Declare an array type indexed
by Color_Type with Flash_Ptr components. Write a for loop in the
main subprogram that dynamically creates a flashing task for each of
the four colors with periods of 1, 2, 3, and 4 seconds. You'll find the
attribute Pos useful in calculating the periods in the loop.
3.4 Write a program with a task type called Put_Name that has a discriminant
that is a pointer to String. The task body should display the
designated name five times, waiting one second between each display.
Declare the task variable Name_Writer as an instance of type Put_Name.
Within this declaration, use the new operator to obtain the memory
containing a string with your first name and assign its location to the
task discriminant. The main task should simply display the word Hello.
Build and execute your program.
3.5 Obtain a copy of the program Master_Demo from our web site. This
program calls procedure Math (page 115). Add a Put_Line statement
after the for loop in task Multiply that displays "Product Computed".
Add a Put_Line statement after the for loop in procedure Math that
displays "Sum Computed". Increase the size of the array to 10. Build
and run the program. Which calculation was completed first? Does that
calculation always complete first?
3.6 Obtain a copy of the program Concurrent_Sort_Demo from our web
site. Rewrite procedure Concurrent_Sort (page 116) so that the array
is divided into four chunks. Three of these chunks are sorted by three
worker tasks while the fourth is sorted by procedure Concurrent_Sort.
You'll need to carry out three merges.
3.7 Given the following program, which is available on our web site:
with Ada.Text_IO;               use Ada.Text_IO;
with Ada.Numerics.Float_Random; use Ada.Numerics.Float_Random;

procedure Exercise is
   Generator : Ada.Numerics.Float_Random.Generator;

   task type Do_Nothing (Letter : Character);

   task body Do_Nothing is
      Period : Duration;
   begin
      Period := Duration (Random (Generator)) / 10;
      if Letter = 'W' or Letter = 'X' then
         Period := 2 * Period;
      end if;
      for Count in 1 .. 10 loop
         Put (Letter);
         delay Period;
      end loop;
   end Do_Nothing;

   type A_Ptr is access Do_Nothing;

   procedure Do_Nothing_More is
      type B_Ptr is access Do_Nothing;
      X : A_Ptr;
      Y : B_Ptr;
   begin
      X := new Do_Nothing (Letter => 'X');
      declare
         Z : Do_Nothing (Letter => 'Z');
      begin
         for Count in 1 .. 10 loop
            Put ('*');
            delay 0.008;
         end loop;
      end;
      Y := new Do_Nothing (Letter => 'Y');
   end Do_Nothing_More;

   W : Do_Nothing (Letter => 'W');

begin
   New_Line;
   Put_Line ("Calling Do_Nothing_More");
   Do_Nothing_More;
   New_Line;
   Put_Line ("Returned from Do_Nothing_More");
end Exercise;
1. How many tasks are in this program?
2. What is the parent of each of these tasks?
3. What is the direct master of each of these tasks?
4. Are we guaranteed that nothing is displayed on the screen by this
program before the phrase "Calling Do_Nothing_More"? Why or why
not?
5. Are we guaranteed that the last thing displayed on the screen by
this program is the phrase "Returned from Do_Nothing_More"? Why
or why not?
6. Are we guaranteed that no 'Y' will be displayed before a 'Z'? Why
or why not?
7. Are we guaranteed that no 'Y' will be displayed before an 'X'?
Why or why not?
3.8 Obtain a copy of the program Handler_Demo from our web site. This
program includes the task Bad_Logic whose body is on page 118. Build
and run it. Now add a definition for the exception My_Exception to
this program. Add a raise statement that raises My_Exception after
the fourth iteration of the loop in the body of task Bad_Logic. Build
and run the revised program.
3.9 Extend the state diagram in Figure 3.1 with the state Suspend for
Child Activation from Figure 3.3 and a state Waiting Dependent
Termination to indicate that a master is waiting for its dependents to
terminate.
3.10 Extend the state diagram in Figure 3.1 with the state Abnormal.
4
Communication and synchronization based on shared objects
In Chapter 3 we introduced the task. The tasks in all of the examples in
that chapter were independent. They did not share any resources nor did
they communicate with each other. As suggested by the cooking examples
of Chapter 1, the solution to most parallel problems requires cooperation
among tasks. Such cooperation involves communication, mutual exclusion,
and synchronization. Communication is the exchange of data or control
signals between processes. Mutual exclusion is a mechanism that prevents
two processes from simultaneously using the same resource. Synchronization
ensures that multiple processes working on a job complete the steps
of that job in the correct order. There are two approaches for communication
among processes. In this chapter we look at indirect communication via
shared objects. In Chapter 5, we'll look at direct communication via Ada's
rendezvous mechanism.
4.1 Mutual exclusion
Let's start with a simple example that illustrates the need for mutually
exclusive access to a resource used by multiple tasks. A small but popular
museum has a limited capacity. When that capacity is reached, the sale of
tickets must be suspended until space is made available by the departure of
some visitors. Recently, the museum has installed turnstiles at each of its
four entrances and hired a programmer to write the software to monitor the
turnstiles and keep count of the number of visitors currently in the museum.
Here is the specification of the package that provides the input from the
turnstile sensors at the museum's four entrances.
package Turnstile is
   -- This package provides an interface to the
   -- Museum's turnstile sensors
   -- The Museum has four doors

   type Entrance_Type is (North, South, East, West);

   -- A person can enter or leave through a turnstile
   type Direction_Type is (Enter, Leave);

   procedure Get (Entrance  : in  Entrance_Type;
                  Direction : out Direction_Type);
   -- The caller of this procedure is blocked until a
   -- person passes through the turnstile at the given
   -- entrance. Its return value indicates whether the
   -- person has entered or left the Museum.
end Turnstile;
This package encapsulates the code that monitors the turnstiles at the four
museum entrances. Procedure Turnstile.Get waits until someone passes
through the designated turnstile. It then returns the direction in which the
person passed. Here is the first version of a program that uses the information
from the turnstiles to track the current number of visitors in the museum.
with Turnstile;           use Turnstile;
with Ada.Exceptions;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
with Ada.Text_IO;         use Ada.Text_IO;

procedure Museum_V1 is
   -- This program uses a global variable shared by five tasks
   -- to monitor the number of visitors inside a museum

   protected ID_Generator is
      procedure Get (ID : out Positive);
      -- Returns a unique positive ID number
   private
      Next_ID : Positive := 1;
   end ID_Generator;

   protected body ID_Generator is
      procedure Get (ID : out Positive) is
      begin
         ID      := Next_ID;
         Next_ID := Next_ID + 1;
      end Get;
   end ID_Generator;

   Value          : Positive;
   Consumer_Delay : Duration;

begin
   -- Consume everything in the bounded buffer
   for Count in 1 .. Number_Of_Producers * Producer_Iterations loop
      The_Buffer.Take (Value);
      Put (Item => Value, Width => 2);
      New_Line;
      -- Simulate the time to do the work
      -- to actually consume something
      Random_Duration.Get (Consumer_Delay);
      delay Consumer_Delay / Number_Of_Producers;
   end loop;
end Producer_Consumer_Demo;
Let's begin with the protected type Bounded_Buffer. We use this type to
declare the protected object The_Buffer. As with task types, we can include
a discriminant with the declaration of a protected type. In this program, the
discriminant specifies the capacity of the buffer. Protected type Bounded_Buffer
has three operations. The protected procedure Clear may be called
to empty the buffer. The protected entry Put appends a positive integer
to the buffer and the protected entry Take removes and returns the first
number from the buffer. The data encapsulated within this protected type
is a FIFO queue of positive integers instantiated from the generic bounded
queue package from Chapter 2. We use the value of the protected type's
discriminant to specify the capacity of the FIFO queue.
The body of the protected type Bounded_Buffer includes the definition
of a barrier for each of our two entries. Recall that the barrier must be open
(True) for the call to the entry to take place. The barrier for entry Put states
that the encapsulated FIFO queue Buffer must not be full. The barrier for
entry Take states that the encapsulated FIFO queue Buffer must not be
empty. These two barriers ensure that a calling task does not take a value
from an empty buffer or attempt to add a value to a full buffer. The bodies
of the protected procedure and the two protected entries call the appropriate
FIFO queue operations to accomplish their jobs.
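The bounded buffer just described can be sketched as a protected type. This is our reconstruction from the description above, not the book's exact listing: the queue type and the queue operation names (Full, Empty, Enqueue, Dequeue, Clear) are assumptions about Chapter 2's generic package.

```ada
protected type Bounded_Buffer (Max_Size : Positive) is
   procedure Clear;
   entry Put  (Item : in  Positive);  -- blocks while the buffer is full
   entry Take (Item : out Positive);  -- blocks while the buffer is empty
private
   Buffer : Queue_Type (Max_Size => Max_Size);  -- FIFO queue, assumed from Chapter 2
end Bounded_Buffer;

protected body Bounded_Buffer is
   procedure Clear is
   begin
      Clear (Buffer);  -- assumed queue operation
   end Clear;

   entry Put (Item : in Positive) when not Full (Buffer) is
   begin
      Enqueue (Item => Item, Queue => Buffer);
   end Put;

   entry Take (Item : out Positive) when not Empty (Buffer) is
   begin
      Dequeue (Item => Item, Queue => Buffer);
   end Take;
end Bounded_Buffer;
```

Note that the barriers test only the encapsulated queue, so they can be reevaluated whenever a protected action on The_Buffer completes.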
Now let's move down to the producer tasks. We define the task type
Producer and use it to declare the array of five tasks, Producers. The
body of task type Producer begins with a call to the protected object
ID_Generator's procedure Get to obtain a unique identification number.
ID_Generator encapsulates a counter used to provide these numbers. The
producer task then puts eight copies of its identification number into The_Buffer.
After placing each number into the buffer, the task blocks itself for
a random time. The amount of time is obtained from the random number
generator encapsulated in the shared protected object Random_Duration.
The one consumer task in this example executes the main subprogram.
It takes numbers out of The_Buffer and displays them. Like the producer
tasks, the consumer task obtains a random delay time from the shared protected
object Random_Duration which it uses in a delay statement before
taking the next number out of The_Buffer.
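A producer task body matching this description might look like the following sketch. The parameter names and exact structure are assumptions based on the prose, not the program's actual code.

```ada
task type Producer;

task body Producer is
   My_ID    : Positive;
   My_Delay : Duration;
begin
   ID_Generator.Get (ID => My_ID);  -- obtain a unique identification number
   for Count in 1 .. 8 loop
      The_Buffer.Put (Item => My_ID);  -- may block when the buffer is full
      -- Block for a random time before producing the next item
      Random_Duration.Get (My_Delay);
      delay My_Delay;
   end loop;
end Producer;
```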
In Chapter 1 we introduced the scenario as a means for understanding
concurrent algorithms. Let's look at a scenario that illustrates the behavior
of the protected entries in program Producer_Consumer_Demo.
1. Producers(1) obtains its ID number
2. Producers(2) obtains its ID number
3. The consumer task calls entry Take; the barrier is closed (because Buffer
is empty) so the consumer task is blocked
4. Producers(1) calls entry Put; the barrier is open so its number is enqueued
into Buffer
5. Take's barrier is now open; the consumer task is unblocked and removes
the number from Buffer
6. Producers(3) obtains its ID number
7. Producers(2) calls entry Put; the barrier is open so its number is enqueued
into Buffer
8. Producers(4) obtains its ID number
9. Producers(1) gets a random delay value
10. Producers(1) executes the delay statement and is blocked
11. Producers(4) calls entry Put; the barrier is open so its number is enqueued
into Buffer
12. Producers(3) calls entry Put; the barrier is open so its number is enqueued
into Buffer
13. Producers(5) obtains its ID number
14. Producers(5) calls entry Put; the barrier is closed (because Buffer is
full) so the task is blocked
15. The consumer task's delay time has expired
16. The consumer task calls entry Take; the barrier is open so a number is
removed from the Buffer
17. Put's barrier is now open; Producers(5) is unblocked and enqueues its
number into Buffer
18. And so on . . .
Steps 3 and 5 illustrate how entry Take's barrier blocks consumers until
there is something in Buffer. Steps 14 and 17 show how entry Put's barrier
blocks producers when Buffer is full. Mutual exclusion of shared data and
task synchronization are completely handled by the protected objects. We
need not include any special code within the tasks to obtain these necessary
protections.
4.5 Restrictions
A critical section is a group of instructions that must not be executed
concurrently by multiple tasks. The bodies of protected operations are critical
sections. Other tasks are forced to wait while one task is executing a protected
operation. It is therefore important to keep the code in protected
operations as short as possible.
It is also important that a task is not blocked while executing a protected
operation.¹ It is a bounded error² to execute a potentially blocking operation
from within a protected operation. Potentially blocking operations include
- Delay statements
- Calls to protected object entries
- Creation or activation of a task
- Calls to subprograms whose body contains a potentially blocking operation
- Others to be described in Chapter 5

¹ Reasons for this restriction include a very efficient implementation of mutual exclusion and
decreasing the potential of deadlock scenarios.
² A bounded error need not be detected either prior to or during run time.
Most input/output operations are potentially blocking. You should never
call a Get or Put procedure within a protected operation. While calling
a protected entry is potentially blocking, calling a protected function or a
protected procedure is not potentially blocking.
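As a sketch of the usual remedy (the names here are ours, not from the book): keep the protected operation limited to updating the shared state, and let the calling task perform the I/O after the protected call returns.

```ada
protected Event_Counter is
   procedure Record_Event (Total : out Natural);
private
   Count : Natural := 0;
end Event_Counter;

protected body Event_Counter is
   procedure Record_Event (Total : out Natural) is
   begin
      -- No Put_Line here: I/O is potentially blocking
      Count := Count + 1;
      Total := Count;
   end Record_Event;
end Event_Counter;

-- In the calling task, outside the protected operation:
--    Event_Counter.Record_Event (Total => My_Total);
--    Put_Line ("events so far" & Natural'Image (My_Total));  -- safe here
```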
While we are discussing restrictions, we'll remind you again that the
Boolean expressions used in barriers should limit their testing to variables
within the protected object. This limitation facilitates the reevaluation of
the barrier. Barriers should not test variables outside of the protected object.
The barriers are only reevaluated when some task completes the execution
of the body of a protected procedure or entry. A task changing some variable
global to the protected object does not trigger a reevaluation of the barriers.
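A minimal sketch of this pitfall (all names hypothetical): the barrier below tests a variable declared outside the protected object, so a task that sets Go to True never triggers a barrier reevaluation and callers of Pass may remain blocked indefinitely.

```ada
Go : Boolean := False;  -- global variable changed by some other task

protected Gate is
   entry Pass;
end Gate;

protected body Gate is
   entry Pass when Go is  -- WRONG: Go is outside the protected object
   begin
      null;
   end Pass;
end Gate;
```

The correct approach is to keep the flag inside the protected object and change it with a protected procedure (say, Gate.Open), so the barrier is reevaluated when that protected action completes.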
4.6 Entry queues
Each protected entry has a queue associated with it. When a task calls an
entry whose barrier is closed, it is added to that entry's queue. Figure 4.3
illustrates the two entry queues in our protected object The_Buffer from
the program Producer_Consumer_Demo. Currently, the consumer task is executing
the protected entry Take. The read/write lock is active, delaying
tasks Producer(1) and Producer(3). The entry queue for Take is empty
and its barrier, indicated by the hinged gate on the right side of the queue,
is open. Three producer tasks are blocked and waiting in the entry queue
for Put. As the Buffer is currently full, the barrier for Put is closed.
When the consumer task completes the removal of a number from the
buffer and exits the protected object, all barriers are reevaluated. Entry Put's
barrier is opened and, depending on whether or not the consumer removed
the last item in the queue, entry Take's barrier either remains open or is
closed. As there are producer tasks waiting in the queue, the task at the
head of the queue is selected, removed from the queue, and allowed to put
its value into the buffer.
Our simple example demonstrates the order that Ada imposes on the
selection of the task that executes next within the protected object. Queued
entry calls have precedence over other operations on the protected object.
Figure 4.3 shows producer tasks (1) and (3) outside of the protective shell
of the read/write lock. Producer tasks (2), (4) and (5) are within the shell.
When each of these latter three tasks began the execution of entry Put,
the read/write lock was not active. Each activated the lock and checked the
barrier condition. As the barrier was closed, the task was blocked and placed
on Put's entry queue. Upon joining the entry queue, the newly blocked task
released the read/write lock. These three tasks will not be delayed again by
a read lock or a read/write lock before they execute the body of entry Put.
These tasks, as Ben-Ari (2009) states, have already passed the outer shell
and are considered "part of the club" and have preference over calls that have
not yet commenced.

Figure 4.3 Entry queues for the protected bounded buffer
When a task completes the execution of a protected procedure or protected
entry and leaves the protected object, all barriers are reevaluated. If
one or more barriers are open, one of the entry queues is serviced: the
task at the head of that queue is removed and begins execution of the entry
body. The read/write lock remains active during the switch from the
task that exited the protected object to the task now executing within it.
Figure 4.4 shows The_Buffer just after the consumer task completed its removal
of the number from Buffer in Figure 4.3. After Producer task (4)
completes the body of entry Put and leaves the protected object, the two
barriers are reevaluated. Since Buffer is again full, the barrier for entry Put
will close.
After evaluating all the barriers, it is possible to have more than one entry
queue with waiting tasks and open barriers. Ada does not specify which of
these queues is serviced. You should assume that the selection is arbitrary.
However, the Real-Time Systems Annex (Appendix D of the ARM) provides
the pragma Queuing_Policy which we may use to specify a deterministic
selection. We'll discuss this pragma in Chapter 8.
Figure 4.4 Tasks in the entry queues have precedence over those delayed
behind the lock
The bodies of protected operations may use the Count attribute to query
the current length of an entry queue. The Count attribute may also be
used in barriers. We'll look at some examples using this attribute in the
next section. While the Count attribute would seem a useful operation,
one must use it with care in programs that use abort statements or timed
entry calls (Chapter 5). A queue with five entries becomes a queue of four
entries after one task in the queue is aborted.
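As a preview of such uses, here is a sketch (our own names; Group_Size is an assumed constant) of a common pattern in which 'Count appears in a barrier: no task passes until all Group_Size tasks have arrived, after which the whole group is released.

```ada
Group_Size : constant := 4;  -- assumed number of participating tasks

protected Group_Gate is
   entry Wait_For_All;
private
   Releasing : Boolean := False;
end Group_Gate;

protected body Group_Gate is
   entry Wait_For_All when Wait_For_All'Count = Group_Size or Releasing is
   begin
      -- Hold the barrier open until the last queued task has left
      Releasing := Wait_For_All'Count > 0;
   end Wait_For_All;
end Group_Gate;
```

This sketch also shows why aborted tasks are a hazard: if one queued task is aborted, Wait_For_All'Count may never reach Group_Size.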
4.7 Some useful concurrent patterns
The concurrent programming literature is filled with many patterns for communication
between and synchronization of processes. The protected object
provides a simple, efficient mechanism for implementing many of these patterns.
We have already shown you how the bounded buffer pattern can
easily be implemented by a protected object. In this section we look at a
number of additional patterns and their implementations. The implementations
given here are based on those of Burns and Wellings (1998). These
same authors provide an updated set of concurrency utilities based on Ada's
object-oriented features (Burns and Wellings, 2007).
4.7.1 Semaphores
A semaphore is a classic concurrency control construct. It combines a counter
and a lock to provide both mutually exclusive access to critical sections of
code and synchronization between tasks. The semaphore's counter is initialized
during its creation. If initialized to one, it is called a binary semaphore
and if it is initialized to a value greater than one, it is called a counting
semaphore. There are two semaphore operations.
Wait: Wait until the counter is greater than zero, then decrement the counter.
The test and decrement of the counter is done as an atomic action.
Signal: Increment the counter. The increment is done as an atomic action.
Each semaphore has an associated queue of tasks. When a task performs a
wait operation on a semaphore whose counter is zero, the task is added to the
semaphore's queue. When another task increments the semaphore through
a signal operation and there are tasks on the queue, a task is removed from
the queue and it resumes execution. The first thing that the task does on
resumption is to decrement the counter.
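These two operations map directly onto a protected type. The following sketch anticipates the Semaphore type used in the code fragment below; the book's own implementation may differ in detail.

```ada
protected type Semaphore (Initial_Value : Natural) is
   entry Wait;        -- block until the counter is positive, then decrement it
   procedure Signal;  -- increment the counter
private
   Count : Natural := Initial_Value;
end Semaphore;

protected body Semaphore is
   entry Wait when Count > 0 is
   begin
      Count := Count - 1;
   end Wait;

   procedure Signal is
   begin
      Count := Count + 1;
   end Signal;
end Semaphore;
```

The barrier on Wait gives us the atomic test-and-decrement, and the entry queue plays the role of the semaphore's queue of waiting tasks.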
We associate a different semaphore with each shared resource. This association
is done informally; there is no formal syntax to connect the resource
and the semaphore. To ensure mutually exclusive access to a shared resource,
every task that accesses the resource must call wait before accessing the resource
and call signal after completing its use of the resource. The following
code fragment illustrates the use of a binary semaphore to provide mutually
exclusive access to the global variable Population in the first version of our
museum visitor counting program.
-- The shared variable
Population : Natural := 0;

-- A binary semaphore which we have associated with Population
Sem : Semaphore (Initial_Value => 1);

-- The interrupt handler is in this protected object
procedure Handler is
begin
   Conversion_Complete := True;
end Handler;
task Counter is
   entry Increment (Client : in  Positive;
                    Value  : out Natural);
end Counter;

task body Counter is
   Count    : Natural := 0;
   My_Delay : Duration;
begin
   loop  -- Each iteration, rendezvous with two clients
      Random_Duration.Get (My_Delay);
      delay My_Delay;
      accept Increment (Client : in  Positive;
                        Value  : out Natural) do
         Put_Line ("Counter is in #1 rendezvous with task"
                   & Positive'Image (Client));
         Count := Count + 1;
         Value := Count;
      end Increment;
      Random_Duration.Get (My_Delay);
      delay My_Delay;
      accept Increment (Client : in  Positive;
                        Value  : out Natural) do
         Put_Line ("Counter is in #2 rendezvous with task"
                   & Positive'Image (Client));
         Count := Count + 1;
         Value := Count;
      end Increment;
   end loop;
end Counter;

task type Client is
   entry Assign_ID (Assigned_ID : in Positive);
end Client;

task body Client is
   My_ID    : Positive;
   My_Delay : Duration;
   Current  : Natural;
begin
   accept Assign_ID (Assigned_ID : in Positive) do
      My_ID := Assigned_ID;
   end Assign_ID;
   for Count in 1 .. Number_Of_Iterations loop
      Random_Duration.Get (My_Delay);
      delay My_Delay;
      Counter.Increment (Client => My_ID,
                         Value  => Current);
      Put_Line ("Client" & Positive'Image (My_ID) &
                " got a value of" &
                Natural'Image (Current));
   end loop;
end Client;

type Client_Array is array (1 .. Number_Of_Clients) of Client;

Clients : Client_Array;

begin
   -- Rendezvous with all the tasks in array Clients
   for Index in Client_Array'Range loop
      -- Rendezvous with a client,
      -- give it the same ID as its index
      Clients (Index).Assign_ID (Assigned_ID => Index);
   end loop;
end Rendezvous_Demo;
Program Rendezvous_Demo contains the singleton task Counter and task
type Client. Task Counter includes the declaration of the entry Increment
and task type Client includes the declaration of the entry Assign_ID.
Clients is an array of five Client tasks.
The main subprogram executes a for loop that makes an entry call to
each task in the array Clients. The first statement in the body for task
type Client is an accept statement. When both the entry call and the
accept have been made, the rendezvous between the main task and one of
the five tasks making up array Clients begins. During this rendezvous,
the ID from the main task is copied to the local variable My_ID. When the
assignment statement inside the accept statement is completed, both the
main task and the task Clients(ID) continue to execute concurrently. In
these five rendezvous, the five tasks making up the array Clients act as
servers for the main task.
After the main task makes its five rendezvous, each of the client tasks has
an ID equal to the index of its position in the array Clients. In Chapter 4,
we used task discriminants to assign IDs to tasks (see, for example, the four
door monitoring tasks in the museum program on page 130). Discriminants
cannot be used with arrays of tasks so the rendezvous is a common approach
for assigning IDs to tasks making up an array.
After receiving their IDs, each of the five client tasks executes a loop in
which they delay for a random duration and then call task Counter's entry
Increment. When the rendezvous is complete, the client displays its ID and
the value of the count it got back from Counter.
Task Counter contains a loop in which it participates in two different rendezvous.
While we could obtain the same results with one accept statement
instead of two in our loop, we wanted to demonstrate that there can be
multiple accept statements for every entry declared in a task's specification.
With five client tasks calling the one server task's entry, there is sure to
be some competition between the clients for the server's attention. As with
protected entries, each task entry includes a queue in which entry calls wait
their turn. Note that this queue is associated with the entry, not each accept
statement. When tasks are waiting on an entry queue, the execution of any
accept statement in the task removes the first client task from the entry
queue for the rendezvous.
The basic rendezvous follows a wait forever model. Once a client has made
an entry call, it must wait until the server accepts the call. There is no limit
on the amount of time the client will wait for the server to rendezvous.
Similarly, once a server has executed an accept statement, it must wait until
some client issues an entry call. Again, there is no limit on the amount of
time the server will wait for a client to rendezvous. In the next sections,
we'll look at methods for giving servers and clients more control over the
indefinite wait for their rendezvous partner.
5.2 The selective accept statement
The selective accept statement gives the server member of the rendezvous
pair more control in the acceptance of entry calls. As this statement has
many options, it is worthwhile looking at its definition in Ada's variant of
BNF.¹

selective_accept ::= select
                        [guard]
                        select_alternative
                     {or
                        [guard]
                        select_alternative }
                     [else
                        sequence_of_statements ]
                     end select;
The select statement provides one or more alternatives. Like an if statement,
when the select statement is executed only one of the alternatives is

¹ Backus-Naur Form. Section 1.1.4 of the ARM describes this BNF variant. The symbol ::= is
read "is defined as". Bold text and punctuation are literals. Square brackets, [ ], enclose
optional expressions. Curly brackets, { }, enclose expressions that may occur zero or more
times. Vertical bars, |, separate alternatives.
executed. Here are the definitions of the two new terms given in the definition
of the selective accept.

select_alternative ::= accept_alternative
                     | delay_alternative
                     | terminate_alternative

accept_alternative ::= accept_statement
                          [sequence_of_statements]

guard ::= when condition =>

The delay alternative, terminate alternative, and else part are mutually
exclusive: only one of them can be included in a given select statement.
We will examine all these alternatives in the next sections.
5.2.1 The accept alternative
We use the accept alternative with its optional sequence of statements to
wait for a rendezvous on one of several entries. Here is an example of its use.
The following task is part of an application that controls the preparation of
hot drinks in a beverage vending machine. Other tasks in this application
include one that takes orders, accepts money, and makes change, one that
plays music to attract customers, and one that controls the animated signage
on the front of the machine.
task Make_Beverage is
   entry Coffee (Sugar : in Boolean;
                 Cream : in Boolean);
   entry Tea (Sugar : in Boolean;
              Milk  : in Boolean;
              Lemon : in Boolean);
   entry Cocoa;
end Make_Beverage;

task body Make_Beverage is
   With_Sugar : Boolean;
   With_Cream : Boolean;
   With_Milk  : Boolean;
   With_Lemon : Boolean;
begin
   loop
      select
         accept Coffee (Sugar : in Boolean;
                        Cream : in Boolean) do
            With_Sugar := Sugar;
            With_Cream := Cream;
         end Coffee;
         Run_Coffee_System (With_Sugar, With_Cream);
         Sanitize_Coffee_System;
      or
         accept Tea (Sugar : in Boolean;
                     Milk  : in Boolean;
                     Lemon : in Boolean) do
            With_Sugar := Sugar;
            With_Milk  := Milk;
            With_Lemon := Lemon;
         end Tea;
         Run_Tea_System (With_Sugar, With_Milk, With_Lemon);
         Sanitize_Tea_System;
      or
         accept Cocoa;
         Run_Cocoa_System;
         Sanitize_Cocoa_System;
      end select;
      Sanitize_Dispenser_System;
   end loop;
end Make_Beverage;
Task Make_Beverage has three entries. The select statement in the task
body has three accept alternatives. When the select statement is executed,
all the entry queues are evaluated. There are three possibilities.
- If all the entry queues are empty, the server task is blocked. The server
task will wake up as soon as some client calls one of the three entries.
The server task then makes the rendezvous with the appropriate accept
statement.
- If all of the entry queues save one are empty when the server task executes
the select statement, the server task will immediately rendezvous with the
first task in that non-empty entry queue.
- If there are tasks waiting in more than one entry queue when the server
task executes the select statement, the server task will immediately rendezvous
with a task from one of the entry queues. Which entry queue
is selected is not specified by the language. You should assume that the
selection is arbitrary. The Real-Time Systems Annex (Appendix D of the
ARM) provides pragma Queuing_Policy which we may use to specify a
deterministic selection. We'll discuss this pragma in Chapter 8.
The entry Cocoa has no parameters. For parameterless entries, we usually
omit the accept statement's optional sequence of statements executed during
the rendezvous. Compare the accept statement for Cocoa with those for
Coffee and Tea. The rendezvous code for Coffee and Tea is short: it
copies the parameters to local variables. The client task is not blocked during
the actual brewing or cleaning of the system. These jobs are done after each
rendezvous is complete.
5.2.2 Guarded alternatives
The BNF definition of the selective accept shows that each selective accept
can have a guard associated with it. A guard serves the same function as
the protected entry's barrier. A guard is a Boolean expression that is evaluated
when the select statement is executed. If the expression is False, the
alternative is said to be closed. A closed select alternative is not eligible for
selection during this execution of the select statement. If the expression is
True, the alternative is said to be open and is eligible for selection during this
execution of the select statement. An alternative without a guard is always
open. The exception Program_Error is raised if all alternatives are closed.
It is important to note that all of the guards in a select statement are
evaluated when the select statement first begins its execution; they are
not reevaluated if the task is blocked by the select statement.
Let's look at an example. The following task controls an electric motor. We
can set the motor's direction, start the motor, and stop the motor. To start
the motor, we first energize a set of low-current draw windings. After the
motor begins turning, we switch over to the running windings. The motor
could be damaged if we change its direction while it is running. To ensure
that the motor is not damaged, we include a guard on the select alternative
that sets the motor's direction. When the motor is running, the guard is
False and the accept for Set_Direction is not eligible for selection.
type Direction_Type is (Clockwise, Counter_Clockwise);
type Winding_State is (Off, On);

task Motor is
   entry Start;
   entry Stop;
   entry Set_Direction (Direction : in Direction_Type);
end Motor;

task body Motor is
   Running : Boolean := False;
begin
   loop
      select
         accept Start;
         Set_Start_Windings (To => On);
         Running := True;
         delay 30.0;
         Set_Start_Windings (To => Off);
         Set_Run_Windings (To => On);
      or
         accept Stop;
         Set_Run_Windings (To => Off);
         Running := False;
      or
         when not Running =>
            accept Set_Direction (Direction : in Direction_Type) do
               Set_Winding_Polarity (Direction);
            end Set_Direction;
      end select;
   end loop;
end Motor;
As with barriers, we recommend that guards do not use global variables
which may be changed by other tasks.
5.2.3 The terminate alternative
The terminate alternative provides a means for ending the execution of a
task when it is no longer required. The terminate alternative is selected
when two conditions are true:
- The task's master (see Section 3.3.2) is completed.
- Each task that depends on the master considered is either already
  terminated or similarly blocked at a select statement with an open
  terminate alternative.
Here is the select statement for our coffee vending machine example with
the addition of a terminate alternative.
   select
      accept Coffee (Sugar : in Boolean;
                     Cream : in Boolean) do
         With_Sugar := Sugar;
         With_Cream := Cream;
      end Coffee;
      Run_Coffee_System (With_Sugar, With_Cream);
      Sanitize_Coffee_System;
   or
      accept Tea (Sugar : in Boolean;
                  Milk  : in Boolean;
                  Lemon : in Boolean) do
         With_Sugar := Sugar;
         With_Milk  := Milk;
         With_Lemon := Lemon;
      end Tea;
      Run_Tea_System (With_Sugar, With_Milk, With_Lemon);
      Sanitize_Tea_System;
   or
      accept Cocoa;
      Run_Cocoa_System;
      Sanitize_Cocoa_System;
   or
      terminate;
   end select;
With the terminate alternative, task Make_Beverage will terminate when
any task that could rendezvous with it is either finished or also waiting at
a select statement with an open terminate alternative.
5.2.4 The delay alternative
The delay alternative provides a means to limit the amount of time that a
server waits for a client to rendezvous. You can use either of two different
delay statements to express this time. Here are the BNF definitions for the
delay alternative.
   delay_alternative ::= delay_statement
                         [sequence_of_statements]
   delay_statement ::= delay_relative_statement | delay_until_statement
   delay_relative_statement ::= delay delay_expression;
   delay_until_statement ::= delay until delay_expression;
The expected type for the delay expression in a delay relative statement
is the predefined type Duration. We have already used the relative delay
statement in many examples. The expected type for the delay expression in
a delay until statement is a time type: either the type Time from package
Ada.Calendar (described in section 9.6 of the ARM) or the type Time from
the package Ada.Real_Time (described in section D.8 of the ARM). The
clock in package Ada.Calendar is often referred to as a "wall clock." Its
value is adjusted for leap years and may jump forward or backward as a
result of daylight saving time changes and leap seconds. The high-resolution
clock in package Ada.Real_Time keeps monotonic time: time that never
decreases. A call to that clock returns the amount of time that has passed
since some implementation-defined epoch, unaffected by the events that
change wall clock time. We will discuss package Ada.Real_Time in more
detail in Chapter 8.
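One practical consequence of this distinction: for a periodic activity, an absolute delay until computed from the previous activation time avoids the cumulative drift that a relative delay would introduce. Here is a minimal sketch using Ada.Real_Time (the procedure name and the half-second period are illustrative assumptions, not from an example in this chapter):

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Periodic_Sketch is
   Period : constant Time_Span := Milliseconds (500);
   Next   : Time := Clock;
begin
   loop
      --  Do the periodic work here, then sleep until the next
      --  activation time.  Writing "delay 0.5;" instead would add
      --  the work's execution time to every period, so the cycle
      --  would slowly drift.
      Next := Next + Period;
      delay until Next;
   end loop;
end Periodic_Sketch;
```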
Relative delays
As an example of a delay alternative with a relative delay, let's look at a
task that controls the temperature of a house. Here is its specification:
   task Thermostat is
      entry Enable (Set_Point : in Degrees);
      entry Disable;
   end Thermostat;
Entry Enable turns on the heating system and sets the desired temperature.
Entry Disable turns off the heating system. When it is enabled, the
thermostat turns the furnace on when the temperature is half a degree below
the desired temperature and turns the furnace off when the temperature is
half a degree above the desired temperature. When the furnace is off, the
thermostat checks the temperature every 0.5 seconds. While the furnace is
running, it checks the temperature every 1.5 seconds. Here is the body of
the thermostat task:
   task body Thermostat is
      Current    : Degrees;           -- Current house temperature
      Target     : Degrees;           -- Desired house temperature
      Furnace_On : Boolean := False;  -- Flag for guards
   begin
      accept Enable (Set_Point : in Degrees) do
         Target := Set_Point;
      end Enable;
      loop
         select
            accept Enable (Set_Point : in Degrees) do
               Target := Set_Point;
            end Enable;
         or
            accept Disable;
            Turn_Furnace_Off;
            Furnace_On := False;
            accept Enable (Set_Point : in Degrees) do
               Target := Set_Point;
            end Enable;
         or
            when Furnace_On =>
               delay 1.50;
         or
            when not Furnace_On =>
               delay 0.50;
         end select;
         Get_Temperature (Current);
         if Furnace_On and Current >= Target + 0.5 then
            Turn_Furnace_Off;
            Furnace_On := False;
         elsif not Furnace_On and Current <= Target - 0.5 then
            Turn_Furnace_On;
            Furnace_On := True;
         end if;
      end loop;
   end Thermostat;
Initially, the heating system is off. The accept Enable statement before
the loop turns on the system. The select statement in the loop has four
alternatives. We may call Enable to change the desired temperature or Disable
to turn the system off. After Disable is called, the furnace is turned off
and the thermostat task is blocked until Enable is called. Note that the
accept statement for this Enable is a simple accept statement, not a select
alternative. The other two alternatives are guarded delay statements. At any
given time, only one of these alternatives will be open. If neither Enable
nor Disable is called before the open delay expires, that delay alternative is
selected. Control continues to the code that checks the current temperature
and, if necessary, makes appropriate calls to the furnace.
Absolute delays
Now let's look at an example of a delay alternative with an absolute delay.
The following task body controls a commercial-size tea kettle. So that the
water is hot when the staff arrives in the morning, we have included a delay
alternative that will turn the heater on at 5:00 AM every morning. Another
delay alternative turns the heater off when the cafe closes at 11:00 PM. Two
entries allow us to turn the heater on and off manually. For this application,
the clock in Ada.Calendar is appropriate.
   task body Kettle is
      Year     : Ada.Calendar.Year_Number;
      Month    : Ada.Calendar.Month_Number;
      Day      : Ada.Calendar.Day_Number;
      Seconds  : Ada.Calendar.Day_Duration;
      On_Time  : Ada.Calendar.Time;
      Off_Time : Ada.Calendar.Time;
   begin
      -- Read the clock for current date/time
      On_Time := Ada.Calendar.Clock;
      -- Split the time value into Year, Month, Day, and Seconds
      Ada.Calendar.Split (On_Time, Year, Month, Day, Seconds);
      -- Set the turn-on time to midnight (the start of today)
      On_Time := Ada.Calendar.Time_Of (Year, Month, Day, 0.0);
      -- Set the turn-on time to 5:00 AM (5 hrs after midnight)
      On_Time := On_Time + Duration (5 * 60 * 60);
      -- Set the turn-off time to 11:00 PM (18 hrs after on time)
      Off_Time := On_Time + Duration (18 * 60 * 60);
      loop
         select
            accept Turn_On;    -- manual turn on
            Turn_On_Heat;
         or
            accept Turn_Off;   -- manual turn off
            Turn_Off_Heat;
         or
            delay until On_Time;   -- automated turn on
            Turn_On_Heat;
            -- Turn on again in 24 hours
            On_Time := On_Time + Duration (24 * 60 * 60);
         or
            delay until Off_Time;  -- automated turn off
            Turn_Off_Heat;
            -- Turn off again in 24 hours
            Off_Time := Off_Time + Duration (24 * 60 * 60);
         end select;
      end loop;
   end Kettle;
Function Ada.Calendar.Clock returns the current time. We split the time
value into the components Year, Month, Day, and Seconds. Then we recombine
these components with a value of zero for Seconds to obtain the time
of midnight of the current day. Finally, we add five hours to midnight to
obtain our 5:00 AM start time. Our closing time is then calculated to be
18 hours after the opening time. Each time one of the two delay alternatives
expires, we carry out the operation and reset the time for 24 hours
later.
5.2.5 The else part
The optional else part of the select statement and its sequence of statements
is executed if no other alternative can be selected immediately. Such is the
case when no client has issued an entry call to the server or when the guards
have closed every alternative whose entry queue is not empty. Use this option
with caution: removing the possibility of blocking the task on some entry can
lead to busy waiting (polling) logic that may starve other tasks. Here is an
example.
   task body Monitor_Temperature is
      My_Delay : Duration := 5.0;
      Current  : Degrees;
   begin
      loop
         select
            accept Change_Rate (Value : in Duration) do
               My_Delay := Value;
            end Change_Rate;
         else
            null;
         end select;
         Get_Temperature (Current);
         Display_Temperature (Current);
         delay My_Delay;
      end loop;
   end Monitor_Temperature;
Task Monitor_Temperature periodically obtains a temperature and displays
it. At the beginning of each iteration, the task checks to see if it should
change the temperature sampling rate. Usually, no change is requested and
the task executes the empty else part, samples and displays the temperature,
and delays until the next cycle. As there is a delay statement after
the select statement, the use of the else part does not create a busy waiting
loop that could starve other tasks. An alternative solution would be to use
a delay alternative in place of the else part. However, with this alternative,
a change request would shorten the cycle for that one period. Moving the
delay out of the select statement ensures that the delay is never terminated
by an entry call.
5.3 Entry call options
In the last section we looked at options that the server (the called task) has
available when handling entry calls. The server may wait indefinitely, wait
for a specific amount of time, wait until all of its clients have terminated, or
not wait at all. The client tasks also have options when making entry calls.
The calls can be to task entries or to protected entries. We'll look at the
options for entry calls in this section.
5.3.1 Timed entry calls
A timed entry call allows a client to specify how long it is willing to
wait for the protected object or server to accept the call. Here is an example
of a timed entry call to the Disable entry of the Thermostat task on
page 177.
   select
      Thermostat.Disable;  -- Attempt to rendezvous
      Put_Line ("Disabled the thermostat");
   or
      delay 5.0;           -- Wait for 5 seconds for rendezvous
      Put_Line ("Gave up waiting for rendezvous");
   end select;
After disabling the furnace once, a second entry call to Disable is blocked
since task Thermostat is waiting to accept an entry call to Enable. Using
the timed entry call, our client can abandon the attempt to rendezvous with
the thermostat task.
When the time specified in a timed entry call expires, the task is removed
from the entry queue in which it was waiting. Removal from the queue
can cause problems for a server or protected object that uses the Count
attribute to determine how many client tasks are waiting in a queue. See
Section 4.7 and Exercise 4.14 for some examples that may be complicated
by the use of timed entry calls or abort statements.
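The hazard can be sketched in a few lines. Suppose a server polls the Count attribute before accepting (Request is a hypothetical entry, not one from the examples above):

```ada
if Request'Count > 0 then
   --  A caller was queued when 'Count was read, but a timed entry
   --  call may expire between this test and the accept.  The queue
   --  is then empty and the server blocks here indefinitely.
   accept Request;
end if;
```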
5.3.2 Conditional entry calls
Conditional entry calls may be used to call protected entries or to perform a
rendezvous. A conditional entry call is equivalent to a timed entry call with
an immediate expiration time. Here is an example of a conditional entry call
to the Disable entry of the Thermostat task.
   select
      Thermostat.Disable;  -- Attempt to rendezvous
      Put_Line ("Disabled the thermostat");
   else
      Put_Line ("Could not rendezvous right away");
   end select;
5.4 State machines
The state machine is an important model in the design of real-time and
embedded systems. We assume that the reader is familiar with the basics of
state machine diagrams and their usefulness in the design of applications.
For a detailed description of state diagrams in software design, see Blaha
and Rumbaugh (2004) or Douglass (1998). A state diagram describes the
behavior of an object. Each state represents a stage in the behavior pattern
of an object. A transition is a change from one state to another. It is triggered
by an event that is either internal or external to the object. In Chapter 3
we used simple state diagrams to illustrate the behavior of the Ada task.
We typically draw a state diagram for each object in our design that has
non-trivial behavior.
Figure 5.2 State machine diagram notation: a rounded state box containing
the state name and a "do / activity" entry, and a transition arrow labeled
"event [condition] / effect"
A state diagram is built from the two primary elements shown in Figure
5.2: rounded boxes representing states and arrows indicating transitions
from one state to another. The rounded rectangle includes the name of the
state and a do option that indicates an activity that continues while the
object is in this state. Do activities include continuous operations such as
flashing a light and sequential activities that terminate by themselves after
an interval of time, such as closing an airlock door. A do activity may be
interrupted by an event that is received during its execution. For example,
a closing door may encounter an obstacle, causing it to cease moving. The
arrow is annotated with the name of the event that causes the transition
out of a state. A transition may include an optional effect and an optional
guard condition. An effect is an action that happens instantaneously, such
as displaying a message, incrementing a counter, or setting a bit. Like the
guard on a select alternative, the guard condition on a state transition closes
the transition when the condition is False. When a transition is closed, the
event does not cause a change of state.
There are a number of techniques for implementing state machines in Ada.
Since a state is an abstraction of an object's attributes, we can represent
it as a collection of values. We change the state by changing these values.
When the state of the object is controlled by a single task, this collection
of values can be as simple as a record. When multiple tasks generate state-changing
events, we can encapsulate the data within a protected object. Both
of these approaches work well for passive objects. A passive object is one
whose state changes only in response to events generated by other objects.
A passive object does not change its state by itself. An active object is one
that can change its state by itself. An active object requires its own thread
of control to change itself, so active objects are often implemented as tasks.
Usually, we can determine whether an object is active or passive from its
state diagram. Indications that an object is active include do activities and
transitions based on time. In the next sections we'll look at some techniques
for implementing the state machines for passive and active objects.
5.4.1 Implementing state machines for passive objects
Figure 5.3 is a state diagram for the headlights in a car. The black circle
with an unlabeled transition to the Off state indicates that Off is the initial
state for this diagram. In the On state, the headlights are illuminated at the
normal brightness level. In the High state, the high beam filament is powered
to give additional light. The driver controls the headlights with two switches
on the dashboard. These switches may be polled by a task that periodically
checks all dashboard controls or may generate an interrupt when a switch
is changed. In either case, some external object generates the events shown
in the figure. The switch off event is also generated by the main task when
the car is turned off.
Figure 5.3 State diagram for car headlights: states Off, On, and High, with
transitions switch on (Off to On), switch off (On or High to Off), high on
(On to High), and high off (High to On)
As is usual for state diagrams, each state in Figure 5.3 is affected by only
a few of the possible events. For example, while in the state Off, the event
high on does not change the state of the headlights; they remain off.
Here is a package specification for the headlights object.
   package Headlights is
      type State_Type is (Off, On, High);
      type Event_Type is (Switch_On, Switch_Off, High_On, High_Off);

      procedure Signal_Event (Event : Event_Type);
      -- Signal an event that may possibly
      -- change the state of the headlights

      function Current_State return State_Type;
      -- Return the current state of the headlights
   end Headlights;
Nested case statements
We will show you two of the many techniques for implementing a passive
object state machine. The most common technique uses nested case
statements. Since multiple tasks generate headlight events, we'll encapsulate
these case statements within a protected object. Here is the body that uses
this approach.
   with Lights;  -- Package with light control/status registers
   package body Headlights_V1 is

      protected Control is
         procedure Signal_Event (Event : Event_Type);
         function Current_State return State_Type;
      private
         State : State_Type := Off;
      end Control;

      protected body Control is
         procedure Signal_Event (Event : Event_Type) is
         begin
            case State is
               when Off =>
                  case Event is
                     when Switch_On =>
                        State := On;
                        Lights.Set_Headlights_On;
                     when Switch_Off | High_On | High_Off =>
                        null;
                  end case;
               when On =>
                  case Event is
                     when Switch_Off =>
                        State := Off;
                        Lights.Set_Headlights_Off;
                     when High_On =>
                        State := High;
                        Lights.Set_High_Beams_On;
                     when Switch_On | High_Off =>
                        null;
                  end case;
               when High =>
                  case Event is
                     when Switch_Off =>
                        State := Off;
                        Lights.Set_Headlights_Off;
                        Lights.Set_High_Beams_Off;
                     when High_Off =>
                        State := On;
                        Lights.Set_High_Beams_Off;
                     when Switch_On | High_On =>
                        null;
                  end case;
            end case;
         end Signal_Event;

         function Current_State return State_Type is
         begin
            return State;
         end Current_State;
      end Control;

      -- External Operations

      procedure Signal_Event (Event : Event_Type) is
      begin
         Control.Signal_Event (Event);
      end Signal_Event;

      function Current_State return State_Type is
      begin
         return Control.Current_State;
      end Current_State;

   end Headlights_V1;
The bodies for the two operations defined in the package specification
make calls to operations in the protected object Control. The outer case
statement in the protected procedure Signal_Event selects the alternative
for the current state. Within each of those alternatives, a case statement
based on the event may change the state or not. When the state does change,
calls are made to appropriate procedures to change the powering of the
headlight filaments. These are the effects of the state change.
With the nested case statement approach, the amount of code necessary
is proportional to the product of the number of states and the number of
events. This approach can become unwieldy when the number of states or
the number of events is large.
State transition tables
Another technique for implementing passive state machines is based on the
state transition table. Given the current state and an event, the state transition
table can tell us the next state. Table 5.1 is a state transition table for
our headlight state diagram. The first row shows the states we can transition
to from the Off state, the second row shows the states we can transition to
from the On state, and the third row shows the states we can transition to
from the High state.
           switch on   switch off   high on   high off
   Off     On          Off          Off       Off
   On      On          Off          High      On
   High    High        Off          High      On

Table 5.1 A state transition table for the state machine of Figure 5.3
If we are in the Off state and the event switch on occurs, we can look at
the first column and determine that the next state is On. If we are in the
Off state and the event high on occurs, we can look at the third column
and determine that the next state is Off; there is no change of state.
We can put the data from a state transition table into a two-dimensional
array with the state as the row index and the event as the column index. To
find the next state, we simply look up the value in the row for the current
state and the column for the event.
There is one additional problem to solve for this implementation. With
each transition, there is often an effect that requires that we execute some
code. For example, when our headlights transition from the Off state to the
On state, we must change the bit in the control/status register of the car's
lighting system that energizes the main bulb filaments. We implement this
by storing a pointer to an appropriate effect procedure with each entry in
our two-dimensional array. We first declare a pointer type that designates a
procedure with no parameters. Then we define a pointer to each of our five
effect procedures using the attribute Access with each procedure name.
Here is the package body that uses a state transition table approach.
   with Lights;  -- Package with light control/status registers
   package body Headlights_V2 is

      procedure Both_Off is
      begin
         Lights.Set_Headlights_Off;
         Lights.Set_High_Beams_Off;
      end Both_Off;

      type Effect_Ptr is access procedure;

      L_Off : constant Effect_Ptr := Lights.Set_Headlights_Off'Access;
      L_On  : constant Effect_Ptr := Lights.Set_Headlights_On'Access;
      H_Off : constant Effect_Ptr := Lights.Set_High_Beams_Off'Access;
      H_On  : constant Effect_Ptr := Lights.Set_High_Beams_On'Access;
      B_Off : constant Effect_Ptr := Both_Off'Access;

      type Table_Entry is
         record
            Next_State : State_Type;
            Effect     : Effect_Ptr;
         end record;

      type Transition_Array is array (State_Type, Event_Type)
        of Table_Entry;

      Transition_Table : constant Transition_Array :=
        (((On, L_On),   (Off, null),  (Off, null),  (Off, null)),
         ((On, null),   (Off, L_Off), (High, H_On), (On, null)),
         ((High, null), (Off, B_Off), (High, null), (On, H_Off)));

      protected Control is
         procedure Signal_Event (Event : Event_Type);
         function Current_State return State_Type;
      private
         State : State_Type := Off;
      end Control;

      protected body Control is
         procedure Signal_Event (Event : Event_Type) is
         begin
            -- Carry out any effect for the state change
            if Transition_Table (State, Event).Effect /= null then
               Transition_Table (State, Event).Effect.all;
            end if;
            -- Determine the next state from the table
            State := Transition_Table (State, Event).Next_State;
         end Signal_Event;

         function Current_State return State_Type is
         begin
            return State;
         end Current_State;
      end Control;

      -- External Operations

      procedure Signal_Event (Event : Event_Type) is
      begin
         Control.Signal_Event (Event);
      end Signal_Event;

      function Current_State return State_Type is
      begin
         return Control.Current_State;
      end Current_State;

   end Headlights_V2;
The array Transition_Table is defined as a constant. The value of this
constant is set by a two-dimensional array aggregate. The elements of this
array aggregate are record aggregates.
5.4.2 Implementing state machines for active objects
Figure 5.4 is a state diagram for a large electric induction motor. To start
this motor, we apply power to a set of low-current draw start windings.
After one minute, the motor is turning fast enough that we can switch over
to the normal run windings. While the motor is running, we monitor its
temperature. Should the temperature exceed some maximum value, we turn
the motor off.
Figure 5.4 State diagram for a large induction motor: states Off, Starting,
and Running (do / monitor temperature), with transitions switch on (Off to
Starting), "1 minute" (Starting to Running), "temperature > max" (Running
to Off), and switch off (Starting or Running to Off)
Two features of this state diagram indicate that the motor controller is an
active object. First, the transition between Starting and Running is based
on time. Second, while in the Running state the do activity requires that the
motor continually check its temperature. Here is our specification for this
motor.
   package Induction_Motor is
      type State_Type is (Off, Starting, Running);
      -- External motor events
      procedure Switch_On;
      procedure Switch_Off;
      function Current_State return State_Type;
   end Induction_Motor;
Our implementation of this active motor object uses a task that responds
to external events and generates its own events. The code within the motor
control task is similar to our nested case statement solution for passive
objects. We use a case statement based on the three possible states of the
motor. In each case alternative we use a select statement. Each select
statement includes an alternative for each possible entry.
We keep the state of the motor in a protected object that is shared by
the motor control task and any external task that calls function
Current_State. Here is the package body.
   with Induction_Motor.Operations; use Induction_Motor.Operations;
   package body Induction_Motor is

      protected Motor is  -- For the shared state
         procedure Set_State (To : in State_Type);
         function Current_State return State_Type;
      private
         State : State_Type := Off;
      end Motor;

      protected body Motor is
         procedure Set_State (To : in State_Type) is
         begin
            State := To;
         end Set_State;

         function Current_State return State_Type is
         begin
            return State;
         end Current_State;
      end Motor;

      task Control is
         entry Switch_On;
         entry Switch_Off;
      end Control;

      task body Control is
      begin
         loop
            case Motor.Current_State is
               when Off =>
                  select
                     accept Switch_Off;
                  or
                     accept Switch_On;
                     Motor.Set_State (To => Starting);
                     Operations.Set_Start_Windings (To => On);
                  end select;
               when Starting =>
                  select
                     accept Switch_Off;
                     Motor.Set_State (To => Off);
                     Operations.Set_Start_Windings (To => Off);
                  or
                     accept Switch_On;
                  or
                     delay 60.0;  -- for motor to get up to speed
                     Motor.Set_State (To => Running);
                     Operations.Set_Start_Windings (To => Off);
                     Operations.Set_Run_Windings (To => On);
                  end select;
               when Running =>
                  select
                     accept Switch_Off;
                     Motor.Set_State (To => Off);
                     Operations.Set_Run_Windings (To => Off);
                  or
                     accept Switch_On;
                  or
                     delay 1.0;  -- Check temperature once a second
                     if Operations.Current_Temperature > Max then
                        Motor.Set_State (To => Off);
                        Operations.Set_Run_Windings (To => Off);
                     end if;
                  end select;
            end case;
         end loop;
      end Control;

      -- External Operations

      procedure Switch_On is
      begin
         Control.Switch_On;
      end Switch_On;

      procedure Switch_Off is
      begin
         Control.Switch_Off;
      end Switch_Off;

      function Current_State return State_Type is
      begin
         return Motor.Current_State;
      end Current_State;

   end Induction_Motor;
The child package Operations contains the low-level operations (setting
the windings bits and reading the temperature) called by task Control.
The external procedures Switch_On and Switch_Off rendezvous with task
Control on entries of the same names. The external function Current_State
calls the protected function of the same name in the shared object Motor.
It is important that every select statement has an accept alternative for
each of the task's entries. If we, for example, left out the accept alternative
for Switch_Off in the case alternative for state Off, a task calling
Switch_Off when the motor is in the off state would be blocked
indefinitely.
Summary

- The rendezvous is a mechanism for direct communication between two
  tasks.
- The rendezvous uses a client-server model.
- A rendezvous does not take place until both the client and the server
  request it. The client requests a rendezvous by executing an entry call.
  The server requests a rendezvous by executing an accept statement.
- The client and server synchronize when a rendezvous takes place.
- The entry parameters provide two-way direct communication between the
  client and the server.
- The client is blocked while the server executes the optional rendezvous
  code (the sequence of statements within the accept statement). The
  rendezvous code should be as short as possible to minimize blocking.
- The selective accept statement allows a server to wait for one of several
  different entries. Only one alternative is serviced during the execution of
  the select statement.
- We use guards to control individual alternatives in a selective accept
  statement. Closed alternatives are not considered during the execution of
  the selective accept statement.
- The delay alternative of the selective accept statement allows a server to
  give up waiting for a rendezvous after some period of time.
- The terminate alternative is used to shut down a server task when all of
  its potential clients are finished.
- The optional else clause of the selective accept statement is executed when
  none of the alternatives are ready. Use it with care to avoid busy wait
  loops.
- The delay alternative, terminate alternative, and else part are mutually
  exclusive; only one of them can be included in a given select statement.
- The timed entry call allows a task to limit how long it will wait to start
  a protected entry or rendezvous.
- The conditional entry call is similar to a timed entry call with no wait
  time.
- State machines are important models for objects in real-time embedded
  applications. Objects may be active (change state by themselves) or passive
  (only change state in response to some external event).
Exercises
5.1 Modify program Museum (page 130). Replace the four separate task
variables, North_Door, South_Door, East_Door, and West_Door, with
an array of tasks indexed by compass direction. Replace the discriminant
of task type Entrance_Monitor with an entry that is called to
give the task its identity (door). Add the code to the main program to
rendezvous with each task in the array and give them an identity equal
to their location in the array.
5.2 The body of the server task Counter in program Rendezvous_Demo
(page 168) contains an infinite loop. Why is this loop necessary? What
would happen if we removed the loop from this task body?
5.3 True or false. When a select statement is executed, only one of the
alternatives is executed.
5.4 Suppose that when the select statement in task Make_Beverage (page
172) is executed, there are two tasks waiting to rendezvous on entry
Coffee, three tasks waiting to rendezvous on entry Tea, and one task
waiting to rendezvous on entry Cocoa. Which beverage will be prepared?
5.5 Download program Coffee_Vending from the book website. This program
makes random rendezvous with task Make_Beverage (page 172).
Compile and run the program. Delete the terminate alternative from
task Make_Beverage, then compile and run the program again. What
is different between the two executions? Explain the difference.
5.6 In this exercise you will extend the task Make_Beverage (page 172) to
handle the restocking of materials in the vending machine. The machine
holds enough supplies to make 500 cups of each drink. Here is a new
extended task specification.

   type Beverage_Type is (Coffee, Tea, Cocoa);

   task Make_Beverage is
      entry Coffee (Sugar : in Boolean;
                    Cream : in Boolean);
      entry Tea (Sugar : in Boolean;
                 Milk  : in Boolean;
                 Lemon : in Boolean);
      entry Cocoa;
      entry Refill (Beverage : in Beverage_Type);
   end Make_Beverage;
1. Entry Refill restocks the supply of the given drink. Extend the
code of the original task body to handle this new entry.
2. Add guards so that when the machine is out of stock of a particular
drink the appropriate select alternative is closed.
3. Finally, add appropriate calls in the task to the following procedure
that controls three out-of-stock lights on the front of the vending
machine.
   type Light_State is (Off, On);

   procedure Switch_Light (Which : in Beverage_Type;
                           To    : in Light_State);
5.7 After adding the guards described in the previous question to task
Make_Beverage, the order-taking task seems to hang when the customer
selects a beverage that is out of stock. When the guarded alternative
is closed, the order-taking task is blocked until the vending machine is
restocked. This is a serious problem as no other orders can be taken
while the order-taking task is waiting.
1. Write a timed entry call for the order-taking task that displays "Out
of stock, please make another selection" if it is unable to rendezvous
with the Make_Beverage task within two seconds.
2. Using a timed entry call works fine as long as there is only a single
task calling the beverage entries. Explain why this solution would
fail if the vending machine were to have multiple order-taking tasks.
3. Devise a solution that works for multiple order-taking tasks.
5.8 Write a program that calls the function Ada.Calendar.Clock to obtain
the current time. Your program should then display this time in the
format
Day/Month/Year Hour:Minute:Seconds
where Hour ranges from 0 to 23.
5.9 All of the state machine examples in this chapter have been singletons
with the state machine object encapsulated in a package body. In this
exercise you will develop a state machine class and declare multiple
state machine objects. The following simple state machine describes
the behavior of a turnstile in the museum described in Chapter 4.
There is a turnstile at each of the four entrances to the museum.
Write a package called Turnstiles that implements a state machine
type. The specification of Turnstiles should contain the types for
the states and events. It should also contain the specification for the
protected type Turnstile_Control that implements the state machine.
[State diagram: two states, Unlocked and Locked; transition labels:
ticket / unlock, pass / lock, pass / alarm]
Type Turnstile_Control should include the discriminant ID of type
Natural.
Write the body for package Turnstiles. This body will contain the
body for protected type Turnstile_Control. It should also contain
the three procedures Alarm, Lock, and Unlock that carry out the three
effects for each state transition. Each procedure should simply display
an appropriate message that includes the identification number of the
turnstile and the effect (turnstile jumper detected, locking, or
unlocking).
Finally, write a program that tests all possible combinations of states
and events. Use at least two variables of type Turnstile_Control.
5.10 Suppose that entry Switch_On in our induction motor control task
(page 189) is called when a start button is pressed by an operator.
A particularly impatient operator pushes the start button to switch
on the motor. After 30 seconds have passed, the operator notices that
the motor is running slowly so he pushes the start button again. After
another 30 seconds, the motor is still being powered by the start
windings. So the operator presses the start button a third time. The
operator continues to press the start button every 30 seconds. The motor
never switches from the start windings to the run windings.
1. Explain why the switch from start windings to run windings was
never made.
2. Make the necessary changes and additions to fix the problem. You
may assume that the motor will never be started during a daylight
savings time change.
6
Distributed systems with Ada
Distribution is an important issue in software engineering, providing scalable
systems across networks. Distribution supports a variety of new services, from
banking systems to social networks to video-on-demand. It is also used in
embedded systems such as cars and planes built with numerous interconnected
computation-intensive blocks. When one processor is not enough, distribution
is a natural complement to concurrency. In this chapter, we provide
some of the elements needed to build a distributed system using Ada.[1]
6.1 What are distributed systems?
Formally speaking, a distributed system is a federation of interconnected
computers that collaborate to fulfill a task. As examples, one can cite
web-based applications or constellations of satellites. Distributed systems
rely on mechanisms to let information circulate from one node to another.
In this section, we review key elements of distributed systems and discuss
their strengths and limits.
6.1.1 Why distribute systems?
Before entering into the details of distribution and middleware, it is
important to clarify the reasons why someone would want to build a distributed
system. Distributing an application across several nodes requires an
understanding of (1) the network (topology, technology, etc.), (2) its
maintenance lifecycle (potential down time), and (3) its cost (performance and
money). In addition, we need to be able to evaluate the value it adds to our
application.
[1] We focus only on the communication aspects of distributed systems:
programming language artifacts and library support. Distributed algorithms
are not discussed.
From a historical point of view (Vinoski, 2004), distribution was introduced
in the early 1960s in large mainframe computer systems used for airline
ticketing and banking. The objective was to share information quickly
across a large geographic area without the need for traditional paper mail.
To support these operations, the first middleware was developed to allow
communication over regular phone lines in a point-to-point topology.
The use of a dedicated network topology, with many-to-many connections,
provided a way to avoid single points of failure. The United States' DARPA[2]
commissioned several teams to build a decentralized network capable of
providing an acceptable level of service when a portion of the network fails.
In 1969, the first four nodes of ARPANET[3] were deployed in California and
Utah. From this pioneering work, ARPANET evolved into the internet, a
network used by universities in the 1980s. With the introduction of HTML
in the 1990s, internet usage rapidly spread to a wider audience.
Several applications such as electronic mail and bulletin boards used this
network to allow people to communicate and exchange data. Computers
could now share information in order to provide more computational power
for large applications.
Distributed systems are now mature and are present in a large variety
of systems. The smart electronic systems of today's cars are built from
interconnected microcontrollers. Nearly every company and many homes now
have networks to connect all of their resources. Web 2.0 provides social
networking, cloud computing, and the infrastructure required for
computation-intensive simulations needed to build the next generation of
automobiles, aircraft, and energy systems. Distribution fulfills many needs
including:
Information exchange Information is exchanged in order to share knowledge
or to build a bigger knowledge set. For instance, the list of seats
available for a concert allows attendees to make a better purchasing
decision.
Information availability A piece of information may be made available
to multiple recipients. Distributed systems allow us to transmit it quickly.
Information redundancy Information can be duplicated among multiple
sites. This redundancy provides a backup in case one of the copies is
not available.
Sharing CPU resources For large applications, one may need more than
one processor. Distributed systems can provide load-balancing features, or
build a virtual processor by aggregating several nodes.
[2] Defense Advanced Research Projects Agency.
[3] Advanced Research Projects Agency Network.
In addition to satisfying these needs, middleware must also provide support
services such as security to guarantee data integrity, safety to ensure
the system is resilient to faults, and performance or even real-time
guarantees for systems with stringent timing requirements. Services are also
needed to resolve heterogeneity across nodes, to provide naming in order
to access entities through a logical name rather than a cryptic numeric ID,
and to define protocols that provide a standard organization of the data
being exchanged.
Middleware was introduced to provide a framework for building distributed
systems. A middleware provides a collection of supporting routines to help
in the development of distributed systems. There is a large variety of
middleware designed to cope with the even larger variety of application needs.
6.1.2 Family of middleware
[Figure: on each node (#1 through #n), the application runs on top of the
middleware, which runs on top of the operating system; the nodes are
connected through the network.]
Figure 6.1 Overview of a middleware
A middleware is a set of software components that sits between the
application and the operating system (see Figure 6.1). It provides an abstract
view of the operating system and transport mechanisms to provide a uniform
framework. A middleware supports one or more distribution models.
A distribution model is defined by the various distribution mechanisms it
makes available to applications. Remote Procedure Calls (RPC), Distributed
Objects, and Message Passing are the most commonly used distribution
mechanisms. See Figure 6.2.
Message Passing is notionally equivalent to sending a message from a
publisher to a consumer. Messages contain data defined by the application.
Figure 6.2 Distribution models
Remote Procedure Call (RPC) is a distributed analog of Ada's procedure
call. Calling a remote procedure ultimately results in a set of messages
on the network and interaction with a remote server that processes
the messages. The client becomes blocked after sending the request to the
server. After the server processes the messages and sends the answer back,
the caller retrieves the results and continues its execution.
Distributed Objects extend the RPC by defining the notion of distributed
objects that can be placed on any node in the distributed application.
When the object has been located on a particular node, interactions
are equivalent to those in an RPC.
Distributed Shared Memory provides transparent access to memory
areas distributed across nodes.
Other distribution models have been defined to support a wider range of
semantics. Web services are close to Distributed Objects over the internet.
Distributed Components provide a stricter consistency check of interfaces.
Service-Oriented Architecture and Cloud Computing address scalability
and reliability issues to meet requirements for large internet-based
applications like social networks. For more details about distribution,
please refer to Coulouris et al. (2005).
To support these mechanisms, middleware is defined as an API[4] or as
an extension to a programming language. Middleware is a proxy between
application code and operating system primitives that implement the
distribution model. In a message-based middleware, an API is defined to
support the full message life cycle (creation, sending, receiving, and
analyzing) and the quality of services (policies for retransmission,
buffering, etc.). In an RPC or Distributed Object Computing middleware,
stubs and skeletons are defined to support transparent request building
and sending. The stub is responsible for client-side management of requests;
the skeleton performs the dual role on the server side. In addition,
middleware may define different quality of service policies to exchange
requests.

[4] Application Program Interface.
6.1.3 Misconceptions of distributed systems
We commonly think that distributing a system across a network helps solve
many problems of modern computing (e.g. scalability and gaining access to
more computation power). Actually, distribution may hinder our application.
Building a distributed system requires an understanding of many heterogeneous
concerns including protocols, operating system issues, concurrency
issues, and system engineering. Debugging a distributed system can be
extremely challenging. The occurrence of transient errors that are hard to
reproduce when logging or debugging tools are activated is particularly
frustrating. Such errors are often called "Heisenbugs," named after the
Heisenberg uncertainty principle.
Peter Deutsch, a Sun Fellow, published what are now known as the "Eight
Fallacies of Distributed Computing." They remind the developer that building
a distributed system is always a complex task that should not be
underestimated:

The Eight Fallacies of Distributed Computing
Peter Deutsch

Essentially everyone, when they first build a distributed application, makes
the following eight assumptions. All prove to be false in the long run and
all cause big trouble and painful learning experiences.
1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
4. The network is secure
5. Topology doesn't change
6. There is one administrator
7. Transport cost is zero
8. The network is homogeneous
Middleware may help address these issues by providing an analytical
framework to evaluate and use resources. Middleware also helps us to
configure and deploy applications. In the following sections, we introduce
the key elements of middleware.
6.2 Middleware, architectures, and concepts
Middleware is software between your application and the underlying operating
system and hardware. Being in-between adds some fuzziness to the
concept. Where exactly does middleware stand?
There are different families of middleware, ranging from generic ones for
all-purpose applications to ones specialized for video streaming, banking
systems, etc. In this chapter, we restrict our focus to generic middleware.
Middleware implementations share common assets. To illustrate them, let us
consider a simple interaction between two nodes. We will use the following
convention: the client is a node requesting a service, provided by a server.
A node is a software process.
Let us suppose we want to build a system to manage students' registration
for courses. A student is a client of this system, requiring a service from
a server. This client can perform activities like registering for a course,
or retrieving a score for a course.
To support this interaction, depicted in Figure 6.3, we need to coordinate
several elementary services. Each service provides one small step to support
this interaction,[5] and provides an answer to one simple question: what do I
need to enable interaction? To do so, a middleware needs to address many
issues.
Figure 6.3 Interaction using a middleware
[5] This list of services has been fully defined in the PolyORB middleware
(Hugues et al., 2003) that we present later in this chapter.
Naming The client needs to find the server. The naming service manages
a directory that stores mappings from a logical name (e.g. "Student
Management System") to its actual location (e.g. network address +
node-specific identifier).
Binding Once the server has been located, the student's client needs to
be bound to the server in a way that enables communication. The binding
service handles resource allocation, such as opening a communication
channel.
Protocol/transport Now that both the client and the server are connected,
they need to agree on an exchange mechanism. One usually splits
it into two parts: protocol and transport. Protocol defines the organization
and semantics of a set of messages: meta-data and a payload; it also
defines how to combine these to form valid interactions. A typical case is
an RPC made of a request and a reply message. The transport mechanism
defines how it is transferred. In most cases, the Internet Protocol TCP/IP
is used, but embedded systems may use other transport mechanisms based
on dedicated buses.
Marshalling The payload is the user's data. We need to agree on a
representation mechanism for a serialized version of the user data we want
to send. The marshalling scheme defines the corresponding value for each
type. So, for example, in one scheme the value 20₁₆ represents the space
character, while in another scheme it represents the integer 32.
This common representation also helps resolve the heterogeneous
representation of data by different processors: all data exchanged shall use
this unique description.
Activation Prior to engaging in any interaction with the client, the server
needs to declare its willingness to do so. It first needs to activate some
elements to accept it. Basically, the server has to register some software
element with the middleware to make it accessible to the client through
the naming service.
Execution The client may now engage in interaction with the server. But
to do so, the server needs to allocate some execution resource, provided
by the execution service. Execution resources may be memory to handle
the incoming request, threads to process it, etc.
One can show these services are the root of all distribution middleware,
for all distribution models. In the following two sections, we illustrate how
these services are coordinated to support two well-known models: Ada's
Distributed Systems Annex, and CORBA.
6.3 DSA, the Distributed Systems Annex
The Ada standard defines a set of annexes to address various issues: the
standard library, connections with other languages, etc. The 1995 revision
of Ada defined the Distributed Systems Annex (Annex E). In this section, we
introduce this annex, illustrate its behavior, and discuss one implementation
based on PolyORB.
6.3.1 Introduction to the Distributed Systems Annex
The Distributed Systems Annex (DSA) defines a set of compilation pragmas
to map an ordinary monolithic application onto a distributed one. This is a
major strength of the annex: we write the code for a system the same way
whether it is going to be executed distributed across several interconnected
nodes or as a program running on a single processor. Annex E was designed
to minimize the number of modifications of the source needed to convert a
monolithic program into a distributed one.
As defined by the Ada Reference Manual, a distributed system is the
interconnection of several processing nodes and zero or more storage nodes.
A distributed program is made of several partitions. A partition is an
aggregate of library units,[6] communicating using shared data or RPCs. The
DSA distinguishes between active partitions, housing threads of control, and
passive partitions, which are storage nodes.
The application programmer controls which library units are assigned to
a partition. In the spirit of the DSA, partitioning is a post-compilation
process: the programmer categorizes each unit making up the program. This
information is processed during the building process to build the full
application. Four pragmas are used to categorize library units:
1. Pure To categorize stateless packages.
2. Remote subprograms A remote subprogram is an extension of a subprogram
call to the distributed case. Such subprograms are declared in
units categorized as Remote_Call_Interface.
3. Distributed Objects Access types can designate remote objects. A remote
call is performed transparently when a primitive dispatching operation
is invoked on an object. Such types are declared in units categorized
as Remote_Types.
4. Shared objects Shared objects provide a shared memory space for active
partitions. Entryless protected objects allow safe concurrent access and
update of shared objects such as a database. Such types are declared in
units categorized as Shared_Passive.

[6] See Section 2.5.
To support these mechanisms, the DSA defines a set of coding guidelines.
In order to enable distribution and to ensure that the application is fast
and reliable, these guidelines restrict some constructs. For instance, the
notion of remote rendezvous does not exist: tasks cannot be invoked directly
from one partition to another.
6.3.2 Categorization pragmas
A pragma is a compiler directive. Pragmas are used by the compiler to
configure the application. We have already described a number of pragmas
including Atomic, Inline, Pack, Storage_Size, and Volatile. In this
section, we introduce the configuration pragmas that are used to distribute
a program.
pragma Pure
Pragma Pure indicates that a package does not contain any state information
(e.g. variables). Packages designated as pure are useful to define entities
(types, constants, subprograms) shared by several packages. Because it has
no state, a pure package can be safely replicated on several partitions.
   package Pure_Package is
      pragma Pure;  -- Package defined as pure, i.e. stateless
      type A_Type is new Integer;
   end Pure_Package;
pragma Remote_Call_Interface
Library units categorized by the pragma Remote_Call_Interface (RCI) can
include subprograms that may be called and executed remotely. RCI units
are servers for remote calls.
A subprogram call that invokes such a subprogram is an RPC operation.
It is a statically bound call because the compiler can compute the identity
of the subprogram being called.
We can also define dynamically bound calls using access types:
Remote Access-to-Subprogram (RAS) is the dereference of an
access-to-subprogram.
Remote Access-to-Class-Wide (RACW) is a dispatching call whose dispatching
argument is an access-to-class-wide type. They can be defined
in RCI packages.
RAS and RACW types define logical names to refer to remote entities.
Internally, such a name is composed of an identifier of the remote partition
for a particular protocol, and a local identifier for representing the
subprogram or the object.
The Ada Reference Manual indicates that an implementation may allow
multiple copies of an RCI unit, provided that internal states are propagated
to all replicas. As the propagation of internal states requires significant
overhead at run-time, RCI units are rarely replicated.
In the following, we will show how to implement the example we presented
in Section 6.2 using the DSA, using the different mechanisms to support
interactions: first using Remote Access-to-Subprogram, then Remote
Access-to-Class-Wide.
The example below[7] illustrates statically bound RCIs. It defines a student
management system. A server provides basic services for accessing and
managing a student's account with operations for checking scores, managing
course registrations, etc. Our example is built from three packages:
The Types package defines some types, shared by both the client
(RCIClient) and the server (RCIStudent).
The RCIStudent package defines the server side of the application: the
server managing student information. It uses the pragma
Remote_Call_Interface to indicate that these subprograms may be called
remotely. All other declarations are ordinary Ada code having nothing to do
with the distribution.
The RCIClient procedure defines the client side of the application: a client
requesting scores for the student "joe" for the Ada lectures.
   package Types is
      --  This package defines some types to be used by the
      --  Student example.
      pragma Pure;
      type Student_Type  is new String (1 .. 10);
      type Password_Type is new String;
   end Types;
[7] See the RCIClient example on our website for the full code of this
example.
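The specification of RCIStudent does not appear in this excerpt. The following is only a sketch of what such an RCI unit might look like, assuming declarations consistent with the prose above (the Compute_Score profile mirrors the one shown later in RACWStudent):

```ada
   --  Sketch only, not the book's listing. An RCI unit: calls to
   --  these subprograms from other partitions are statically
   --  bound remote procedure calls.
   with Types; use Types;
   package RCIStudent is
      pragma Remote_Call_Interface;
      function Compute_Score (Student  : in Student_Type;
                              Password : in Password_Type)
                              return Integer;
   end RCIStudent;
```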
   with RASStudent;
   with Types; use Types;

   package body Compute_Score is

      function Compute (Student  : in Student_Type;
                        Password : in Password_Type)
                        return Integer is
         The_Score : Integer := 0;
      begin
         ...
         return The_Score;
      end Compute;

   begin
      --  Register a dynamically bound remote subprogram (Compute)
      --  through a statically bound remote subprogram (Register)
      --  during the elaboration of this package
      RASStudent.Register (Compute'Access);
   end Compute_Score;
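The body above registers Compute with a package RASStudent whose specification is not shown in this excerpt. A hedged sketch of that unit, with all details assumed, might declare a Remote Access-to-Subprogram type and the Register operation:

```ada
   --  Sketch only: the real RASStudent specification is not
   --  shown here. An access-to-subprogram type declared in an
   --  RCI unit is a RAS type; dereferencing a value of this type
   --  from another partition performs a dynamically bound
   --  remote call.
   with Types; use Types;
   package RASStudent is
      pragma Remote_Call_Interface;
      type Compute_Ptr is access function
        (Student  : in Student_Type;
         Password : in Password_Type) return Integer;
      procedure Register (F : in Compute_Ptr);
   end RASStudent;
```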
Now, let us suppose we want the Registrar of the University to send messages
to students when their registration has been validated.[10] We wish to
send messages using whatever mechanism a student prefers (mobile phones,
electronic mail, etc.). We can define a generic terminal using polymorphic
(tagged) types to handle all possible mechanisms. Each student can register
their own terminal with the server. In the code below, Terminal_Type is the
root type of the distributed terminal hierarchy.
   with Types; use Types;

   package Terminal is
      pragma Pure;

      type Terminal_Type is abstract tagged limited private;
      --  Terminal_Type is an abstract tagged type, meaning it
      --  is an interface that will be later implemented by other
      --  types that specialize it.

      procedure Notify (MyTerm  : access Terminal_Type;
                        Student : in Student_Type;
                        Score   : in Integer) is abstract;
      --  Notify is an abstract procedure attached to the
      --  Terminal_Type. Each specialization of Terminal_Type
      --  shall either be abstract itself, or provide an
      --  implementation of this procedure

   private
      type Terminal_Type is abstract tagged limited null record;
   end Terminal;
The RCI unit RACWStudent defines Terminal_Access, a remote access to
a class-wide type. Terminal_Access is a reference to a distributed object so
it is declared as a general access type.
   with Types;    use Types;
   with Terminal; use Terminal;

   package RACWStudent is
      pragma Remote_Call_Interface;

      type Terminal_Access is access all Terminal_Type'Class;

      procedure Register (MyTerminal : in Terminal_Access;
                          Student    : in Student_Type;
                          Password   : in Password_Type);
      ...
      function Compute_Score (Student  : in Student_Type;
                              Password : in Password_Type)
                              return Integer;
   end RACWStudent;
[10] See the RACWClient example on our website for the full code of this
example.
In the next section, we will see how to derive and extend Terminal_Type,
how to create a distributed object, and how to use a reference to it using
remote types.
pragma Remote_Types
Library units categorized with the pragma Remote_Types (RT) define
distributed objects and remote methods attached to them. RCI and RT units
can both define a remote access type (RACW). It is important to note that
the subprogram itself is not distributed, only the access type is. The code
of the unit is replicated on each partition that uses it.
If we want to implement the notification feature proposed in the previous
section, we must create a specialized Terminal_Type for each user.
This extension of our tagged type can be done in a remote types unit as
in A_Terminal below. Any object of type Email_Terminal
becomes a distributed object and any reference to such an object becomes
a "fat pointer," a reference to a distributed object (see the declaration
of Terminal_Access in the previous section). A typical pointer is just a
numerical value referring to a memory area; a fat pointer has more
information to refer to both the partition and the memory area on this
partition.
From that point, the server implementation in RACWStudent can call back
the student notification function. Note that we have to be cautious when
manipulating references to objects. Since there is no global address space,
we cannot exchange references to objects that are dynamically allocated,
but only to statically allocated objects, as shown in Student_Terminal.
This restriction ensures we can easily compute the fat pointer associated
with this variable.
   with Types;    use Types;
   with Terminal; use Terminal;

   package A_Terminal is
      pragma Remote_Types;

      type Email_Terminal is new Terminal_Type with null record;
      --  Email_Terminal is a specialization of Terminal_Type

      overriding
      procedure Notify (MyTerm  : access Email_Terminal;
                        Student : in Student_Type;
                        Score   : in Integer);
   end A_Terminal;
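To illustrate the restriction to statically allocated objects, here is a hypothetical client-side sketch (the package name Student_Terminal comes from the prose; everything else is our assumption, not the book's listing):

```ada
   --  Hypothetical sketch: the terminal object is declared at
   --  library level (statically allocated), so a fat pointer to
   --  it can safely be passed to the server.
   with Terminal;   use Terminal;
   with A_Terminal; use A_Terminal;
   package Student_Terminal is
      My_Terminal : aliased Email_Terminal;
   end Student_Terminal;

   with Types;            use Types;
   with RACWStudent;
   with Student_Terminal;
   procedure Register_Terminal is
   begin
      RACWStudent.Register
        (MyTerminal => Student_Terminal.My_Terminal'Access,
         Student    => "joe       ",  --  Student_Type is String (1 .. 10)
         Password   => "secret");
   end Register_Terminal;
```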
$$U = \sum_{i=1}^{n} \frac{C_i}{P_i} \qquad (7.3)$$
with n being the number of tasks in our application.
We are guaranteed that all task deadlines are met when this processor
utilization factor is less than or equal to the bound computed by

$$B = n \left( 2^{1/n} - 1 \right)$$
(Liu and Layland, 1973). When n is large, this bound tends to B = 0.69.
This test is valid only when priorities have been assigned according to Rate
Monotonic and the tasks are run with a preemptive scheduler.
This feasibility test is a sufficient but not necessary schedulability test,
which means that:
1. When U ≤ B, all task deadlines will be met.
2. When U > B, the result is not definitive. It is still possible that all
our task deadlines may be met. But it is also possible that one or more task
deadlines will be missed. In other words, when U > B the feasibility test
cannot prove whether or not task deadlines will be met.
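As an illustration of the test, consider a hypothetical task set of our own (these values are not from the text): three tasks with capacities and periods C_1 = 2, P_1 = 10; C_2 = 3, P_2 = 15; C_3 = 5, P_3 = 30.

```latex
U = \frac{2}{10} + \frac{3}{15} + \frac{5}{30} \approx 0.57
\qquad
B = 3\,(2^{1/3} - 1) \approx 0.78
```

Since U ≤ B here, all deadlines are guaranteed to be met; had U exceeded 0.78, the test would simply have been inconclusive.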
A harmonic set of periodic tasks is a case that illustrates the second
property. A harmonic set of periodic tasks is one in which the period of
each task is an exact multiple of the period of every task that has a
shorter period. For example, a set of four tasks with periods 10, 20, 40,
and 120 forms a harmonic set. Harmonic sets never miss deadlines when
U ≤ 1.

[2] The LCM is the smallest non-zero number that is a multiple of two or
more numbers.
The processor utilization factor test is a pessimistic test. In contrast to
exhaustive simulations, which compute the exact scheduling of the system,
the processor utilization factor test is not always able to provide an exact
result. Furthermore, it is limited to simple task sets as in our example of
synchronous periodic tasks.
Another feasibility test, based on worst case response time, provides a
more accurate test of schedulability. The worst case response time
feasibility test consists of computing for each task its worst case response
time. We can then compare that with its deadline. A task's worst case
response time, noted r_i, is the delay between task release time and task
completion time.
Equation (7.4), conceived by Joseph and Pandya (1986), expresses the
worst-case response time for a set of synchronous periodic tasks with a
preemptive fixed priority scheduler.

$$\forall i : r_i \le D_i \quad \text{with} \quad r_i = C_i + \sum_{j \in hp(i)} \left\lceil \frac{r_i}{P_j} \right\rceil C_j \qquad (7.4)$$
hp(i) is the set of tasks which have a higher priority level than task i and
⌈x⌉ returns the smallest integer not smaller than x.[3]
In the context of synchronous periodic tasks, this feasibility test is a
sufficient and a necessary condition to check task deadlines. In contrast to
the processor utilization factor test, equation (7.4) can be easily extended
to take into account different delays such as those due to shared resource
access, the execution environment, message exchanging, and others. Note that
this test does not require that task priority assignment follow Rate
Monotonic.[4] The test makes no assumptions on how priorities are assigned.
However, the analysis of an application is easier if all tasks have different
priority levels.
To see how equation (7.4) is derived, let us assume that task priority levels
are assigned according to Rate Monotonic. The worst case response time of
task i shown by this equation is the sum of two terms:
1. The execution time of task i: C_i.
2. The time that task i has to wait while higher-priority tasks execute.
3
This operator is called the ceiling operator.
4
Highest priority levels are assigned to the tasks with shortest periods.
266 Real-time systems and scheduling concepts
Figure 7.4 helps show us how to calculate the second term of the sum. Assume a task j that has a higher priority level than task i. With Rate Monotonic, if task j has a priority level greater than that of task i, it means that the period of j is smaller than the period of i. Then, during the response time of task i (the duration from 0 to r_i, see Figure 7.4), task j is released ⌈r_i / P_j⌉ times. The waiting time of task i due to task j is therefore:

⌈r_i / P_j⌉ · C_j
Figure 7.4 Task i worst case response time
Since several tasks may have a higher priority level than task i, the set hp(i) may contain several tasks, and the actual waiting time of task i due to higher priority tasks is:

Σ_{j∈hp(i)} ⌈r_i / P_j⌉ · C_j
Now let's see how to solve equation (7.4) to determine the worst case response time, r_i, of each task i in a program. Because r_i appears on both sides of this equation, it is difficult to solve. One technique we can use to solve equation (7.4) is to repeatedly use an approximate value of r_i to calculate a better approximation of r_i. Newton's method, which we used to calculate square roots on page 29, uses this approach. From equation (7.4), we derive the following recurrence equation:

w_i^{n+1} = C_i + Σ_{j∈hp(i)} ⌈w_i^n / P_j⌉ · C_j        (7.5)
Each evaluation of recurrence equation (7.5) uses an approximation of the worst case response time for task i, w_i^n, to compute a better approximation, w_i^{n+1}. We use the following algorithm to compute the final value. We compute the first element of the recurrence by w_i^0 = C_i. Then, we compute w_i^1, w_i^2, w_i^3, ..., w_i^k until:

Either w_i^k > P_i. In this case, the recurrence equation will never converge. Therefore, no task response time can be computed for task i. We can conclude that task i's deadline will be missed.

Or w_i^k = w_i^{k−1}. In this case, w_i^k is the actual worst case response time for task i. We can then compare w_i^k with the task deadline. When convergence exists, it also means that w_i^k ≤ P_i.
Let us look at an example of worst case task response time analysis with a task set composed of three synchronous periodic tasks T_1, T_2, and T_3, defined by the following parameters:
Task   Period    Capacity   Deadline   Priority
T_1    P_1 = 7   C_1 = 3    D_1 = 7    High
T_2    P_2 = 12  C_2 = 2    D_2 = 12   Medium
T_3    P_3 = 20  C_3 = 5    D_3 = 20   Low
We compute each worst case response time individually:

1. The highest priority task T_1 never has to wait for the processor at its release times. Its response time is equal to its capacity: r_1 = C_1 = 3. Note that the set hp(T_1) is empty.
2. The task T_2 is the middle priority task. Sometimes it has to wait for the execution of task T_1: the set hp(T_2) contains T_1. We apply equation (7.5) to compute the first elements of the recurrence:

   w_2^0 = C_2 = 2
   w_2^1 = C_2 + ⌈w_2^0 / P_1⌉ · C_1 = 2 + ⌈2/7⌉ · 3 = 5
   w_2^2 = C_2 + ⌈w_2^1 / P_1⌉ · C_1 = 2 + ⌈5/7⌉ · 3 = 5

   We stop at w_2^2 since w_2^2 = w_2^1. The actual worst case response time of T_2 is then equal to r_2 = 5.
3. The task T_3 is the lowest priority task. It is delayed by both T_1 and T_2: the set hp(T_3) contains both T_1 and T_2, which leads to the following elements of the recurrence:

   w_3^0 = C_3 = 5
   w_3^1 = C_3 + ⌈w_3^0 / P_1⌉ · C_1 + ⌈w_3^0 / P_2⌉ · C_2 = 5 + ⌈5/7⌉ · 3 + ⌈5/12⌉ · 2 = 10
   w_3^2 = C_3 + ⌈w_3^1 / P_1⌉ · C_1 + ⌈w_3^1 / P_2⌉ · C_2 = 5 + ⌈10/7⌉ · 3 + ⌈10/12⌉ · 2 = 13
   w_3^3 = C_3 + ⌈w_3^2 / P_1⌉ · C_1 + ⌈w_3^2 / P_2⌉ · C_2 = 5 + ⌈13/7⌉ · 3 + ⌈13/12⌉ · 2 = 15
   w_3^4 = C_3 + ⌈w_3^3 / P_1⌉ · C_1 + ⌈w_3^3 / P_2⌉ · C_2 = 5 + ⌈15/7⌉ · 3 + ⌈15/12⌉ · 2 = 18
   w_3^5 = C_3 + ⌈w_3^4 / P_1⌉ · C_1 + ⌈w_3^4 / P_2⌉ · C_2 = 5 + ⌈18/7⌉ · 3 + ⌈18/12⌉ · 2 = 18

   This time we stop at w_3^5 since w_3^5 = w_3^4. The actual worst case response time of T_3 is then equal to r_3 = 18.

Since ∀i : r_i ≤ D_i, we have shown that this set of tasks is schedulable.
An example: analysis of a car with embedded software
In this section we look at an example to help clarify the analytic techniques
discussed in the previous sections. Suppose we have a car with an embedded
processor. This processor runs a real-time application which is composed of
the following three synchronous periodic tasks:
1. T_display, a task which displays the car's speed.
2. T_speed, a task which reads the car's speed sensor.
3. T_engine, a task which performs an engine monitoring algorithm.

We extracted the necessary timing requirements from the customer's specification and determined that the deadlines are equal to the task periods. For the execution environment, we use the VxWorks operating system. This operating system provides 256 levels of priority, ranging from 0 to 255. The highest priority level is 0.
The scheduling analysis has to check that the customers requirements
can be met according to the functional parts of the system and the execu-
tion environment chosen for this application. After the implementation of
the functional parts of each task, we made some measurements by running
each task alone several times. These measurements have allowed us to com-
pute an approximation of their WCET. The following table summarizes the
requirements and the capacity test results. Priorities are assigned by Rate
Monotonic.
Task        Period (ms)        Capacity (ms)      Deadline (ms)       Priority
T_display   P_display = 100    C_display = 20     D_display = 100     0
T_engine    P_engine = 500     C_engine = 150     D_engine = 500      2
T_speed     P_speed = 250      C_speed = 50       D_speed = 250       1
With these parameters, we can perform analyses with the three different methods presented in the previous section.

Analysis with the processor utilization test  The processor utilization factor is computed as:

U = Σ_{i=1}^{n} C_i / P_i = 20/100 + 150/500 + 50/250 = 0.7

and the bound as:

Bound = n · (2^{1/n} − 1) = 3 · (2^{1/3} − 1) ≈ 0.779

Since U ≤ Bound, we can be sure that all task deadlines will be met.
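As a quick check, the two numbers above can be reproduced with a short sketch (our own illustration; the function and variable names are assumptions):

```python
def utilization(tasks):
    """Processor utilization factor U = sum of C_i / P_i."""
    return sum(c / p for p, c in tasks)

def rm_bound(n):
    """Liu and Layland bound n * (2^(1/n) - 1) for n tasks."""
    return n * (2 ** (1 / n) - 1)

# (period, capacity) in milliseconds for T_display, T_engine, T_speed
car = [(100, 20), (500, 150), (250, 50)]
u = utilization(car)
bound = rm_bound(len(car))
print(u, round(bound, 3))   # 0.7 0.78
print(u <= bound)           # True: all deadlines will be met
```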
Analysis with worst case task response times  We use equation (7.5) to calculate the worst case response time for the three tasks in our system. Task T_display is the highest priority task. This task is never delayed by another task. Its worst case response time is equal to its capacity, 20 milliseconds.

Task T_speed is the middle priority task. It is delayed by the one higher priority task, T_display. Its worst case response time is computed by:

   w_speed^0 = C_speed = 50
   w_speed^1 = C_speed + ⌈w_speed^0 / P_display⌉ · C_display = 50 + ⌈50/100⌉ · 20 = 70
   w_speed^2 = C_speed + ⌈w_speed^1 / P_display⌉ · C_display = 50 + ⌈70/100⌉ · 20 = 70

We stopped iterating at w_speed^2 since w_speed^2 = w_speed^1. The worst case response time for task T_speed is 70 milliseconds.
Task T_engine has the lowest priority level of the three tasks. It is delayed by both of the other tasks in the application. Its worst case response time is computed by:

   w_engine^0 = C_engine = 150
   w_engine^1 = C_engine + ⌈w_engine^0 / P_display⌉ · C_display + ⌈w_engine^0 / P_speed⌉ · C_speed
              = 150 + ⌈150/100⌉ · 20 + ⌈150/250⌉ · 50 = 150 + 40 + 50 = 240
   w_engine^2 = 150 + ⌈240/100⌉ · 20 + ⌈240/250⌉ · 50 = 150 + 60 + 50 = 260
   w_engine^3 = 150 + ⌈260/100⌉ · 20 + ⌈260/250⌉ · 50 = 150 + 60 + 100 = 310
   w_engine^4 = 150 + ⌈310/100⌉ · 20 + ⌈310/250⌉ · 50 = 150 + 80 + 100 = 330
   w_engine^5 = 150 + ⌈330/100⌉ · 20 + ⌈330/250⌉ · 50 = 150 + 80 + 100 = 330

We stopped iterating at w_engine^5 since w_engine^5 = w_engine^4, and its worst case response time is 330 milliseconds.
The worst case response times of this task set are: r_engine = 330 milliseconds, r_display = 20 milliseconds, and r_speed = 70 milliseconds. All worst case response times are shorter than the corresponding deadlines.
Analysis by exhaustive scheduling simulation: scheduling simulation during the hyper-period  We can also check the deadlines of our car application by determining the scheduling during a hyper-period. We use the interval [0, LCM(P_i)] for the hyper-period. With our set of tasks, the interval is [0, 500].

We start by drawing a time line of the length of the hyper-period for each of our three tasks. On each time line we mark the release times of each task with vertical bars. Figure 7.5 shows the hyper-period marked with six release times for T_display, two release times for T_engine, and three release times for T_speed.

Figure 7.5 Template for analysis with exhaustive simulations

Now we fill in the execution times of each of the tasks. Since T_display has the highest priority, it always runs when it is ready. We can fill in its 20 millisecond capacity beginning at each release. The top line of Figure 7.6 shows the results.

Moving on to the second highest priority task, T_speed, we fill in its 50 millisecond executions. It begins its first execution after T_display has relinquished the processor. At its second execution, there is no need to wait as T_display is not executing at this point in time. The middle line of Figure 7.6 shows the results. The worst case response time for T_speed is 70 milliseconds.
Lastly we fill in the 150 millisecond capacity of the lowest priority task, T_engine. We may only fill in the squares for this task when the other two tasks are not executing. The bottom line of Figure 7.6 shows the results. T_engine does not start running until the other two tasks have completed their first release. Then it is preempted twice by T_display and once by T_speed. Despite these interruptions, task T_engine is able to complete its capacity 330 milliseconds after its release.
Figure 7.6 Analysis with exhaustive simulations
As all deadlines were met within the hyper-period, we are guaranteed that
all deadlines beyond that interval will also be met. Note that the worst case
task response times determined by Figure 7.6 are exactly the same as the
ones computed with equation (7.5).
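This kind of exhaustive simulation can be sketched as a unit-time simulator (our own illustration, not from the book; tasks are listed from highest to lowest priority, and deadlines equal periods):

```python
def simulate_fixed_priority(tasks, hyper_period):
    """Simulate a preemptive fixed priority scheduler one time unit at a
    time over [0, hyper_period).  tasks: list of (period, capacity)
    pairs ordered from highest to lowest priority.  Returns the worst
    case response time observed for each task."""
    n = len(tasks)
    remaining = [0] * n      # capacity left for the current release
    release = [0] * n        # time of the current release
    worst = [0] * n
    for t in range(hyper_period):
        for i, (period, capacity) in enumerate(tasks):
            if t % period == 0:                   # new release
                remaining[i] = capacity
                release[i] = t
        # run the highest priority ready task for one time unit
        for i in range(n):
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:             # task completes here
                    worst[i] = max(worst[i], t + 1 - release[i])
                break
    return worst

car = [(100, 20), (250, 50), (500, 150)]   # T_display, T_speed, T_engine
print(simulate_fixed_priority(car, 500))   # [20, 70, 330]
```

The simulated response times match the values computed with equation (7.5), as the text notes.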
7.2.2 Earliest Deadline First, a dynamic priority scheduler
Properties and assumptions
We now present an example of a dynamic priority scheduler: Earliest Deadline First (EDF). EDF is a scheduler based on task priorities that change during the execution of the application; it is therefore a dynamic priority scheduler. It works in a manner similar to the way many students
determine an order for completing their class assignments. The assignment
whose due date is closest is usually given the highest priority while an as-
signment due in the distant future is given the lowest priority. Assignment
priorities change as time passes, assignments are completed, and more as-
signments are received. Assignments may be periodic (prepare for the weekly
quiz) or aperiodic (on Tuesday, the instructor gives an assignment to complete exercises 7.1 through 7.4 for Thursday's class).
EDF can handle:
Dynamic real-time systems A dynamic real-time system is one in
which the number of tasks may change during execution time. The run-
time creation and destruction of tasks makes it impossible to fully analyze
the schedulability of the application at design time.
Synchronous periodic tasks With synchronous periodic tasks, EDF is known to be optimal: if EDF is not able to find a scheduling sequence in which all task deadlines are met, no other scheduler will be able to find one. The processor utilization for EDF-scheduled synchronous periodic tasks can reach 100 percent with no missed deadlines.
Aperiodic tasks Tasks whose release times are not fixed.
EDF has some drawbacks. It is more difficult to implement within an operating system than fixed priority schedulers. But the biggest drawback of EDF is its behavior when the processor becomes overloaded, a condition that can occur in dynamic systems. When overloading occurs, it is difficult to predict which tasks will miss their deadlines. With fixed priority scheduling, we know that the lowest priority tasks will miss their deadlines when the processor is overloaded. For this reason, EDF is not suitable for critical real-time systems. However, EDF is very useful for off-line scheduling of applications with high levels of criticality. We can compute a scheduling sequence at design time and embed it into the application, which can then be executed without a scheduler in the operating system but with a cyclic executive (see page 258).
How it works
At each unit of time during the execution of the application, EDF performs
two steps:
1. First, the scheduler computes a dynamic priority for each task. Dynamic priorities are deadlines that the scheduler will try to meet. The dynamic priority of task i at time t is noted D_i(t) and is computed as follows:

   If i is an aperiodic task, then D_i(t) = S_i + D_i.
   If i is a periodic task, then D_i(t) = k_i + D_i, where k_i is the last task i release time prior to t.

2. Second, after all task dynamic priorities are updated, EDF runs the ready task with the smallest dynamic priority: the task with the closest deadline.
Let's look at an example. Here is a set of three synchronous periodic tasks.
Task   Period    Capacity   Deadline
T_1    P_1 = 8   C_1 = 4    D_1 = 8
T_2    P_2 = 12  C_2 = 3    D_2 = 12
T_3    P_3 = 4   C_3 = 1    D_3 = 4
It is not necessary to compute the dynamic priorities of the tasks at each time unit. The priority changes occur only at a task release time, when some k_i changes. Therefore, for our set of tasks, we need only calculate the dynamic priorities at times 0, 4, 8, 12, 16, and 20. Table 7.1 shows the dynamic priorities that EDF computes during the scheduling of the task set.
t       D_1(t) = k_1 + D_1   D_2(t) = k_2 + D_2   D_3(t) = k_3 + D_3
0..3    0 + 8 = 8            0 + 12 = 12          0 + 4 = 4
4..7    0 + 8 = 8            0 + 12 = 12          4 + 4 = 8
8..11   8 + 8 = 16           0 + 12 = 12          8 + 4 = 12
12..15  8 + 8 = 16           12 + 12 = 24         12 + 4 = 16
16..19  16 + 8 = 24          12 + 12 = 24         16 + 4 = 20
20..23  16 + 8 = 24          12 + 12 = 24         20 + 4 = 24

Table 7.1 Dynamic priorities handled by Earliest Deadline First
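Table 7.1 can be reproduced with a small sketch (our own illustration; k_i is computed as the last multiple of the period not exceeding t, since the tasks are synchronous with first release at 0):

```python
def dynamic_priority(t, period, deadline):
    """EDF dynamic priority D_i(t) = k_i + D_i, where k_i is the last
    release time of the periodic task prior to (or at) t."""
    k = (t // period) * period
    return k + deadline

tasks = [(8, 8), (12, 12), (4, 4)]    # (P_i, D_i) for T1, T2, T3
for t in (0, 4, 8, 12, 16, 20):
    # t=0 prints [8, 12, 4], matching the first row of Table 7.1
    print(t, [dynamic_priority(t, p, d) for p, d in tasks])
```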
Figure 7.7 shows the scheduling of our three synchronous periodic tasks with EDF and a preemptive scheduler. We used the dynamic priorities calculated in Table 7.1 to fill in this timing diagram.
Figure 7.7 Scheduling with a preemptive EDF scheduler
At the critical instant, task T_3 has the greatest dynamic priority. After this task completes its capacity, the second greatest priority task, T_1, executes its 4 units of capacity. At time 4, T_3 is released again and its dynamic priority is re-computed: its dynamic priority is 8. T_3 then has the greatest dynamic priority and runs again for its capacity.
At time 6, only one task is ready to run: task T_2. Task T_2 runs during 2 units of time and at time 8, T_1 and T_3 are released again with new dynamic priorities of 16 and 12. At this time T_2 and T_3 have the same priority level and the scheduler can choose either of them for the next unit of time. Running task T_2 at time 8 avoids requiring the scheduler to do a context switch.

The remainder of the schedule illustrated in Figure 7.7 is derived in the same manner. We have seen that at some times, EDF leads to the scheduling of tasks with the same dynamic priority. This is also the case at the end of the hyper-period: at t = 20, all three tasks of our example have the same dynamic priority. At this time, we could run T_1, T_2, or T_3. Each such choice between equal priority tasks leads to different scheduling sequences. The actual choice is implementation-defined: it depends on how the scheduler is actually implemented inside the operating system. This lack of predictability is one of the reasons EDF is less suitable for critical real-time applications.
Figure 7.8 shows the same task set, but scheduled with a non-preemptive EDF scheduler. This scheduling sequence changes at time 16, where task T_2 cannot be preempted by task T_3 due to the non-preemptive scheduler.
Figure 7.8 Scheduling with a non-preemptive EDF scheduler
Checking deadlines
Even if EDF is less predictable than fixed priority schedulers, a designer can use methods similar to those for fixed priority schedulers to check task deadlines at design time. The deadlines of a set of synchronous periodic tasks can be checked by exhaustive simulations, by processor utilization factor feasibility tests, or by worst case response time feasibility tests.

To perform exhaustive simulations with EDF, we must compute a scheduling sequence during the hyper-period. To determine the hyper-period with EDF, we use the same equations as with fixed priority scheduling/Rate Monotonic: equation (7.1) or (7.2), depending on the first release time of the tasks. For our sample set of tasks, the hyper-period is 24 time units. Figure 7.7 shows that each of our three tasks runs its full capacity for each release time in the hyper-period. Therefore, this task set is schedulable with EDF. Rate Monotonic priority assignment and fixed priority scheduling fail to schedule this particular set of tasks (see Exercises 7.15 and 7.16).
Deadlines can also be checked with processor utilization factor tests. Equation (7.6) presents an example of such a test. This test is a sufficient and necessary condition and demonstrates the optimality of EDF. We can meet all task deadlines even if the processor utilization factor reaches 100 percent. This is another illustration which shows that EDF efficiency is better than fixed priority scheduling/Rate Monotonic efficiency.

U ≤ 1   with   U = Σ_{i=1}^{n} C_i / P_i        (7.6)
For our example set of tasks, the processor utilization factor is computed as:

U = Σ_{i=1}^{n} C_i / P_i = 4/8 + 3/12 + 1/4 = 1

A processor utilization factor not greater than 1.0 confirms our earlier analysis by exhaustive simulation.
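An exhaustive EDF simulation over the hyper-period can also be sketched as a unit-time simulator (our own illustration, not from the book; ties between equal dynamic priorities are broken by task index, one of the implementation-defined choices discussed earlier):

```python
def simulate_edf(tasks, hyper_period):
    """Preemptive EDF simulation over [0, hyper_period).  tasks: list of
    (period, capacity, deadline) triples, all released at time 0.
    Returns True when no deadline is missed."""
    n = len(tasks)
    remaining = [0] * n
    abs_deadline = [0] * n
    for t in range(hyper_period):
        for i, (period, capacity, deadline) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:      # previous release not finished
                    return False
                remaining[i] = capacity
                abs_deadline[i] = t + deadline
        ready = [i for i in range(n) if remaining[i] > 0]
        if ready:
            # run the ready task with the closest absolute deadline
            i = min(ready, key=lambda k: abs_deadline[k])
            remaining[i] -= 1
            if remaining[i] == 0 and t + 1 > abs_deadline[i]:
                return False
    return True

tasks = [(8, 4, 8), (12, 3, 12), (4, 1, 4)]
print(simulate_edf(tasks, 24))   # True: schedulable even though U = 1
```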
Finally, feasibility tests based on the worst case response time were proposed by Spuri (1996). Since EDF is a dynamic scheduler, worst case response time feasibility tests are more complex and are outside the scope of this chapter. You can read more about feasibility tests for EDF and fixed priority scheduling in George and Spuri (1996).
7.2.3 EDF versus fixed priority scheduling

In the previous sections, we presented a fixed and a dynamic priority scheduler.
Fixed priority scheduling is well suited for critical and static real-time embedded systems. EDF is better suited to less critical systems.

From an implementation point of view, fixed priority scheduling is simple to implement in a real-time operating system. Most real-time operating systems include such a scheduler. EDF is more difficult to implement, so it is rarely implemented in real-time operating systems.

Fixed priority scheduling with Rate Monotonic priority assignment is used to easily schedule periodic tasks. EDF can easily schedule both periodic and aperiodic tasks.

Fixed priority scheduling is more predictable than EDF, especially in the case of over-loading. Even in the absence of over-loading, the arbitrary choice made when two EDF tasks have the same dynamic priority introduces a lack of predictability.

Fixed priority scheduling with Rate Monotonic priority assignment efficiency (69%) is lower than EDF efficiency (100%).
Table 7.2 summarizes these two schedulers. Of course, the real picture is a bit more complicated than this summary.

                       Fixed priority scheduling    EDF
Applications           critical, static             dynamic, less critical
RTOS implementation^a  easy                         more difficult
Tasks                  periodic (Rate Monotonic)    aperiodic and periodic
Efficiency             up to 69% (Rate Monotonic    up to 100%
                       only)
Predictability         high                         less than fixed priority
                                                    if U > 1

Table 7.2 A simple comparison of fixed priority scheduling and EDF
^a Real-Time Operating System.

Both EDF and fixed priority scheduling have been adapted in different contexts to increase their usability, their flexibility, or their efficiency. For example, we previously stated that fixed priority scheduling with Rate Monotonic priority assignment has a limited efficiency: the processor utilization factor should not be greater than 69% to be sure that all task deadlines are met. In fact, in some systems, task deadlines can be met even when this bound is exceeded. This is the case of a harmonic task set, for which a processor utilization factor reaching 1 is possible with no missed deadlines.
The behavior of EDF is non-deterministic during a transient processor over-load. However, an adaptation of EDF, called D-over, has been developed to take care of processor over-loads (Koren and Shasha, 1992).
Another adaptation is related to the scheduling of aperiodic tasks with fixed priority scheduling and Rate Monotonic priority assignment. Several approaches have been proposed to handle aperiodic tasks with fixed priority scheduling (Sprunt et al., 1989). The polling server is one of these approaches. Basically, a polling server is a periodic task defined by a period and a capacity. When aperiodic tasks are released in the system, they are stored in a queue. At each periodic release time of the polling server, it runs aperiodic tasks which are waiting for the processor in the queue. The polling server runs aperiodic tasks for the duration of its capacity. If the aperiodic task queue is empty at the polling server's release times, the server does nothing. It waits for its next periodic release time. This mechanism allows the processor to run aperiodic tasks as soon as possible but without delaying periodic tasks that are critical. A polling server can be included in a feasibility test, such as the processor utilization factor test, as a normal periodic task. The system is checked assuming the worst case aperiodic task arrivals, where the polling server runs aperiodic tasks during all its capacity at all its release times.
For further reading, Buttazzo (2003) presents a more detailed comparison
of Rate Monotonic and EDF.
7.3 Dependent tasks
In the previous sections, we made an unrealistic assumption: all tasks in
the application are independent. Most real applications contain dependent
tasks.
When two tasks share data, the data is usually accessed through
a synchronization construct such as a protected object or semaphore (see
Chapter 4). Both provide a pre-protocol to assert a desire to access the
shared resource and a post-protocol to indicate that the shared resource is
available. These protocols are encapsulated within a protected type. The
programmer using semaphores must explicitly invoke the protocols to pro-
tect shared data. We will use the semaphore in our discussions in this section
so that the protocols are completely visible. The same principles apply to
protected objects. A task wanting to access shared data has to wait when
the semaphore is locked by another task. This shared resource waiting time
may increase the worst case response time of the waiting task.
Precedence constraints are a second type of dependency. These constraints
are raised when tasks exchange messages as in a rendezvous (see Chapter
5). A task receiving a message may have to wait until its message actually
arrives. Similarly, a task wanting to send a message may have to wait for the
receiver task to accept the message. These waiting times may also change
the worst case response times of tasks.
The basic analysis methods presented in the previous sections do not deal
with these additional waiting times. The next two sections describe what
we need to add to the basic analysis methods to handle the delays resulting
from shared resources and precedence constraints.
7.3.1 Shared resource problems
Figure 7.9 shows a set of three tasks, two of which share a resource. This resource is protected by a semaphore called mutex. In the figure, P(mutex) depicts the semaphore locking and V(mutex) the semaphore unlocking. P and V come from the original Dutch words chosen by Dijkstra. In Chapter 4, we used the names Wait and Signal for these operations.
Tasks T_1 and T_3 share the resource. These tasks are scheduled with a preemptive fixed priority scheduler. Task T_3 is the highest priority task and T_1 is the lowest priority task. T_2 is a medium priority task that does not use the shared resource protected by the semaphore mutex.

Figure 7.9 shows the scheduling sequence of this task set. This scheduling leads to a priority inversion. A priority inversion is a scheduling sequence in which a lower priority task blocks a higher priority task.

Figure 7.9 A priority inversion

In this figure, a priority inversion occurs between task T_3 and task T_2 as follows:
1. At time 0, task T_1 is released. T_1 locks the semaphore at time 1.
2. Task T_3 is released at time 2 and immediately preempts the lower priority task, T_1. T_1's preemption occurs before it has released the semaphore.
3. T_3 becomes blocked when it requests the semaphore at time 3.
4. Task T_1 resumes its execution, continuing its use of the shared resource protected by semaphore mutex.
5. At time 4, T_1 is preempted as T_2 is released. Task T_1's second preemption occurs before it has released the semaphore.
6. At this point the medium priority task T_2 blocks the high priority task T_3 through the low priority task T_1: this is a priority inversion.
Task T_2 delays task T_3 even though these two tasks are independent of each other. The problem results from the medium priority task preempting a low priority task that has locked a resource that a high priority task requires. We must accept the fact that while it is using the shared resource, low priority task T_1 blocks high priority task T_3. We saw the consequences of unprotected resource sharing in Chapter 4. But this is not the case between T_2 and T_3: they do not share a resource.
We need to determine how long a task can be blocked while waiting to access a shared resource. We represent the worst case shared resource waiting time of task i by B_i. This delay must be taken into account by feasibility tests such as equation (7.4).
The problem is made worse when we introduce additional tasks with priority levels between our low priority task T_1 and our high priority task T_3. These tasks can effectively prevent the low priority task from releasing the resource. The high priority task could easily miss its deadline.
While we cannot eliminate priority inversion, it is possible to limit the amount of time a task will be blocked by a lower priority task that shares the resource. If we know the maximum value of B_i, we can determine whether or not task i will meet its deadlines.

Priority inheritance protocols provide the means to put an upper bound on B_i. These protocols increase the priority of the low priority task to a value above the medium priority tasks. Then the low priority task can finish its use of the shared resource and release it to the high priority task. Various priority inheritance protocols have been proposed for both fixed priority schedulers and dynamic priority schedulers. They include the Priority Ceiling Protocol (PCP) (Sha et al., 1990), the Priority Inheritance Protocol (PIP) (Sha et al., 1990), and the Stack Based Resource Control (SRP) (Baker, 1991). In the next section we will look at PIP and PCP.
7.3.2 Shared resource protocols
PIP is the simplest inheritance protocol. With PIP, a task which blocks a high priority task due to a shared resource sees its priority increased to the priority level of the blocked task. With such a mechanism, B_i can be computed as the sum of the critical section durations of the tasks which have a priority lower than the priority of task i.
Figure 7.10 presents a new scheduling sequence of the task set of Figure 7.9, but with PIP. At time 3, when the high priority task T_3 attempts to lock the semaphore mutex, the low priority task T_1 sees its priority level increased up to the priority level of the high priority task T_3. This priority change prevents the medium priority task T_2 from preempting T_1 when T_2 is released at time 4. T_1 signals that it is finished with the shared resource at time 5, allowing the high priority task T_3 to access it before the medium priority task T_2 ever ran. Of course, when T_1 is finished with the shared resource, its priority level returns to its initial value.
Our example shows that PIP is a very simple protocol. However, it has a major drawback: PIP cannot be used with more than one shared resource, as it can lead to deadlock. So while PIP provides a simple model to explain priority inheritance, it is not used in practice. Instead, PCP is widely used in real-time operating systems to prevent unbounded priority inversions and task deadlocks.
Figure 7.10 A scheduling sequence with PIP
There are several versions of PCP. We will look at one called the Immediate Ceiling Priority Protocol (ICPP). ICPP works as follows:

Each task has a static and a dynamic priority. The static priority is assigned according to rules such as Rate Monotonic or Deadline Monotonic.
Each shared resource has a priority. This priority is called a ceiling priority, and its value is equal to the maximum static priority of all the tasks which use the shared resource.
The scheduler always selects the ready task with the highest dynamic priority. The dynamic priority is equal to the maximum of the task's static priority and the ceiling priorities of any resources the task has locked.
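These rules can be sketched in a few lines (our own illustration with hypothetical task and resource names, not code from the book; here larger numbers denote higher priorities):

```python
def ceiling_priority(resource_users, static_priority):
    """Ceiling priority of a resource = the maximum static priority of
    all the tasks that use it."""
    return max(static_priority[t] for t in resource_users)

def dynamic_priority(task, locked, static_priority, ceiling):
    """ICPP: dynamic priority = max of the task's static priority and
    the ceiling priorities of the resources it currently holds."""
    return max([static_priority[task]] + [ceiling[r] for r in locked[task]])

# Three tasks; T1 and T3 share the resource guarded by 'mutex'.
static_priority = {'T1': 1, 'T2': 2, 'T3': 3}
ceiling = {'mutex': ceiling_priority(['T1', 'T3'], static_priority)}
locked = {'T1': ['mutex'], 'T2': [], 'T3': []}   # T1 holds the resource
print(dynamic_priority('T1', locked, static_priority, ceiling))  # 3
```

As soon as T1 locks the semaphore, its dynamic priority jumps to T3's level, which is exactly the behavior shown in Figure 7.11.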
Figure 7.11 shows our task set scheduled with ICPP. The ceiling priority of the resource protected by the semaphore is equal to the static priority assigned to our high priority task T_3. When task T_1 first locks the semaphore at time 1, its dynamic priority is increased to the ceiling priority of the shared resource. As T_1 is now running with the highest priority, it very quickly completes its access to the shared resource and releases it at time 2. At that point, the priority of task T_1 returns to its low static value and the scheduler runs T_3.
With ICPP, the priority inheritance is applied sooner than with PIP. This allows the task T_1 to quickly leave the critical section. With ICPP, B_i can be computed more accurately than with PIP: B_i is the largest critical section of the tasks sharing the same set of resources.
Figure 7.11 A scheduling sequence with ICPP
7.3.3 Feasibility tests extended with shared resources

With known values of B_i, we can perform deadline verifications with two of the feasibility tests presented in Section 7.2.1: tests on the processor utilization factor and tests on worst case response time.

The processor utilization factor test (Klein et al., 1994), extended with shared resource blocking time, with a preemptive fixed priority scheduler and Rate Monotonic priority assignment, is:

∀i, 1 ≤ i ≤ n :   Σ_{k=1}^{i−1} C_k / P_k + (C_i + B_i) / P_i ≤ i · (2^{1/i} − 1)        (7.7)
The same feasibility test with a preemptive EDF scheduler instead is:

∀i, 1 ≤ i ≤ n :   Σ_{k=1}^{i−1} C_k / P_k + (C_i + B_i) / P_i ≤ 1        (7.8)

Tests (7.7) and (7.8) assume that tasks are ordered according to their priority level in a decreasing manner: task i − 1 has a higher priority level than task i.
Furthermore, worst case response times r
i
, that take into account blocking
time B
i
with a preemptive xed priority scheduler, can be computed as
follows (Klein et al., 1994):
r
i
= C
i
+B
i
+
jhp(i)
_
r
i
P
j
_
C
j
(7.9)
7.3 Dependent tasks 283
with hp(i) the set of tasks which have a higher priority level than task i.
Equation (7.9) can be solved by the recurrence equation:
w_i^(n+1) = C_i + B_i + Σ(j ∈ hp(i)) ⌈w_i^n / P_j⌉ · C_j
7.3.4 Precedence constraints
There are various kinds of precedence constraints. In this section we focus on
a precedence constraint that delays the release of a task until the completion
of another task. For example, a task i waiting to receive a message won't be
released until after the execution of a task j which sends the message. To
compute the worst case response times of tasks i and j, we can apply a feasibility
test based on equation (7.4) adapted to take this precedence
relationship into account (Audsley et al., 1993; Tindell and Clark, 1994). This feasibility
test introduces a new parameter called jitter (noted J_i). The jitter J_i of
task i is an upper bound on the delay that task i may suffer between the
time it is supposed to be released and the time it is actually released.
Let's look at a simple example before we directly address the delays due
to precedence constraints. Suppose a periodic task i has a period of 15 ms.
It should be released at t = 15, 30, 45, ... However, measurements indicate
that the first release of this task is at 21 ms!
What is responsible for this 6 ms delay in the release of task i? In most
operating systems, periodic tasks are released by a timer. The timer is usually
an interrupt handler that is triggered by a clock device. On each interrupt,
the timer updates the system clock, checks task statuses, and re-schedules
them. The timer can be seen as a periodic task that is defined by a period
and a capacity:

- The period defines the accuracy of the system clock. For example, in the
  case of a Linux system with an Intel architecture, the clock period is about
  10 ms.(5)
- As with any program, the interrupt handler has a worst case execution time.

(5) Some processors, such as the Intel Pentium, have registers to store the system clock. These
registers allow users to read the system clock with a higher resolution (e.g. nanoseconds with
Pentium processors). However, this does not mean that the system is able to wake up tasks
with the same accuracy: timer periods are usually longer in order to reduce system overhead.
Let us assume P_timer = 10 ms and C_timer = 1 ms. Figure 7.12 shows the
1 ms execution times of the timer task when the system clock interrupts
at 0, 10, and 20 ms. This figure also shows why our periodic task that was
supposed to be released at t = 15 was delayed 6 ms. At t = 10, the timer
task checks the status of all tasks waiting on the timer. Task i is not ready
to be released at this time. The next timer interrupt does not occur until
t = 20. At this time, the timer detects that task i must be released. Since
the timer capacity is 1 ms, task i will actually be released at t = 21 ms.
Figure 7.12 Task release time jitter
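The 6 ms of this example can be recomputed with a back-of-the-envelope sketch (a simplified model of the timer, ours rather than the book's): a task due at time t is only detected at the first timer interrupt at or after t, and released once the handler has run.

```python
from math import ceil

def actual_release(t, p_timer, c_timer):
    # The task due at time t is detected at the first timer interrupt
    # at or after t, then released after the handler (capacity
    # c_timer) has executed.
    next_interrupt = ceil(t / p_timer) * p_timer
    return next_interrupt + c_timer

# With P_timer = 10 ms and C_timer = 1 ms, a release due at t = 15
# actually happens at t = 21, i.e. a jitter of 6 ms.
```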
Therefore, the nature of the operating system timer guarantees that:

- Tasks will not be released before their theoretical release times.
- But tasks may be released after their theoretical release times.

To compute the worst case response time of task i taking this timer
lateness into account, the jitter of task i is set to J_i = 6 ms and the task worst
case response times are computed as follows:
r_i ≤ D_i
r_i = w_i + J_i     (7.10)

with

w_i = C_i + Σ(j ∈ hp(i)) ⌈(w_i + J_j) / P_j⌉ · C_j
Equation (7.10) can be used to model any lateness in a real-time system.
In particular, we can use this equation to compute the worst case response
time of tasks that have precedence constraints: if a task i must be released
after the completion time of task j, the worst case response time of i can be
computed with J_i = r_j.
This approach is called the holistic analysis approach and can be used to
compute end to end worst case response times (Tindell and Clark, 1994).
In the context of the holistic analysis, tasks are organized into sequences
of tasks. A sequence of tasks is composed of several tasks that have to run
sequentially: a precedence constraint exists between each pair of successive
tasks in the sequence. Often, we also assume that tasks are assigned to
different processors. In this case, the deadline we want to check is not a
deadline that is local to a given scheduler. Instead, we expect to check that
the delay between the release time of the first task in the sequence and
the completion time of the last task in the sequence is not greater than some
global deadline.
10  ∀i : J_i := 0, r_i := 0, prev_r_i := 0;
20  ∀i : compute the worst case response time r_i;
30  while (∃i : r_i ≠ prev_r_i) {
40      ∀i : J_i := max(J_i, max{r_j : j → i});
50      ∀i : prev_r_i := r_i;
60      ∀i : compute the worst case response time r_i;
    }
Figure 7.13 Holistic analysis
To check this global deadline with an end to end worst case response time,
the task scheduling of several processors may be investigated simultaneously.
Figure 7.13 presents an algorithm for this analysis. In line 40, we update the
jitter of each task i with the worst case response times of the tasks j that have
to run before task i. The worst case response times are then re-computed with
these modified jitters in line 60. Lines 40 to 60 are run until the worst case
response times converge. In Figure 7.13, the symbol → expresses that
a precedence constraint exists between two tasks: j → i means that task i
cannot be released before the completion time of task j.
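Assuming a single preemptive fixed priority processor (the multiprocessor case only changes where each r_i is computed), the Figure 7.13 loop can be sketched as follows; the names and structure are ours, not the book's:

```python
from math import ceil

def response_time(i, C, P, J):
    # Jitter-aware response time, as in equation (7.10);
    # index 0 is the highest priority task.
    w = C[i]
    while True:
        w_next = C[i] + sum(ceil((w + J[j]) / P[j]) * C[j] for j in range(i))
        if w_next == w:
            return w + J[i]
        w = w_next

def holistic_response_times(C, P, pred):
    # pred[i] lists the tasks j with j -> i (j must complete before i).
    n = len(C)
    J = [0] * n
    r = [response_time(i, C, P, J) for i in range(n)]
    prev = None
    while r != prev:                     # lines 30-60 of Figure 7.13
        for i in range(n):               # line 40: update the jitters
            J[i] = max([J[i]] + [r[j] for j in pred[i]])
        prev = r                         # line 50
        r = [response_time(i, C, P, J) for i in range(n)]   # line 60
    return r
```

With no precedence constraints the loop converges immediately to the plain response times; adding j → i inflates r_i by the jitter inherited from r_j.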
Summary

- A task is defined by a set of parameters which specify its temporal
  behavior.
- Tasks can be classified as critical, urgent, independent or dependent, and
  periodic or aperiodic.
- Critical functions of real-time systems are modeled by periodic tasks. Non-critical
  functions are modeled by aperiodic tasks.
- Real-time schedulers usually provide algebraic analysis methods which
  allow the designer to check task temporal constraints without computing
  the scheduling sequences. These algebraic analysis methods are called
  feasibility tests.
- Most of the schedulers implemented in real-time operating systems are
  fixed priority schedulers.
- Fixed priority scheduling is well suited for critical and static real-time
  embedded applications. Rate Monotonic is an optimal rule to assign priorities
  in the context of fixed priority schedulers.
- Earliest Deadline First is a dynamic scheduler. It is optimal but less suitable
  for critical real-time embedded applications.
- Two types of task dependencies must be taken into account: shared
  resources and precedence constraints of tasks.
Exercises
7.1 Define the terms context, preemption, and context switch.
7.2 When is a task
1. Both critical and urgent?
2. Critical but not urgent?
3. Urgent but not critical?
7.3 What distinguishes a dependent task from an independent task?
7.4 Describe two forms of task dependency.
7.5 Define the terms activation time, release time, periodic task, sporadic
task, and aperiodic task.
7.6 What distinguishes a fixed priority scheduler from a dynamic priority
scheduler?
7.7 How are task priorities assigned differently when using Deadline
Monotonic than when using Rate Monotonic?
7.8 Explain why there is no need to use protected objects or semaphores to
provide mutually exclusive access to resources shared by tasks sched-
uled by a cyclic executive.
7.9 What distinguishes a preemptive scheduler from a non-preemptive
scheduler?
7.10 Define the terms optimal scheduler and valid schedule.
7.11 What distinguishes a static real-time system from a dynamic real-time
system?
7.12 What properties must a set of tasks possess in order to be a set of
synchronous periodic tasks?
7.13 Define the terms deadlines on request and critical instant.
7.14 What does the processor utilization factor express?
7.15 Practice with scheduling and analysis with fixed priority scheduling
and Rate Monotonic priority assignment. Given the following set of
synchronous periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 8    C1 = 4     D1 = 8
T2     S2 = 0            P2 = 12   C2 = 3     D2 = 12
T3     S3 = 0            P3 = 4    C3 = 1     D3 = 4
1. Calculate the processor utilization factor and the utilization bound.
2. What do these calculations tell us about our ability to schedule this
task set with fixed priority scheduling and Rate Monotonic priority
assignment?
7.16 Using the task set from the previous exercise:
1. What is the hyper-period?
2. Using preemptive fixed priority scheduling with Rate Monotonic
priority assignment, draw a scheduling sequence (as in Figure 7.2) of
this task set to demonstrate that a deadline is missed.
7.17 Practice with scheduling and analysis with fixed priority scheduling
and Rate Monotonic priority assignment. Given the following set of
synchronous periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 29   C1 = 7     D1 = 29
T2     S2 = 0            P2 = 5    C2 = 1     D2 = 5
T3     S3 = 0            P3 = 10   C3 = 2     D3 = 10
1. We schedule this task set according to a preemptive fixed priority
scheduler and a Rate Monotonic priority assignment. Apply the
processor utilization factor test. Do the tasks meet their deadlines?
2. Draw the scheduling sequence of this task set (as in Figure 7.2)
during the first 30 units of time.
3. Re-draw the scheduling sequence of this task set during the first 30
units of time with a non-preemptive fixed priority scheduler and
with Rate Monotonic. Compare the results with the drawing you
made for the previous question.
288 Real-time systems and scheduling concepts
7.18 Practice with scheduling and analysis with Rate Monotonic and fixed
priority scheduling. Given the following harmonic set of synchronous
periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 30   C1 = 6     D1 = 30
T2     S2 = 0            P2 = 5    C2 = 3     D2 = 5
T3     S3 = 0            P3 = 10   C3 = 2     D3 = 10
1. Why is this task set harmonic?
2. We schedule this task set according to a preemptive fixed priority
scheduler and Rate Monotonic. Apply the processor utilization factor
test. Do the tasks meet their deadlines?
3. Draw the scheduling sequence of this task set (as in Figure 7.2)
during the first 30 units of time.
4. Compute the worst case response time of each task with the Joseph
and Pandya method (equations (7.4) and (7.5)). You should find
worst case response times that are consistent with the response times
in the picture you drew for the previous part of this exercise.
7.19 Practice with scheduling and analysis with Earliest Deadline First.
Given the following set of synchronous periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 10   C1 = 6     D1 = 10
T2     S2 = 0            P2 = 30   C2 = 9     D2 = 30
1. We schedule this task set according to a preemptive Earliest Deadline
First scheduler. Apply the processor utilization factor test. Do
the tasks meet their deadlines?
2. Compute the hyper-period of this task set. How many idle units of
time should we have during this hyper-period?
3. Draw the scheduling sequence (as in Figure 7.7) of this task set
during the hyper-period. (Hint: fill in a table of dynamic priorities
like Table 7.1.) Is the number of idle units of time consistent with
your answer to part 2?
4. Now assume a non-preemptive Earliest Deadline First scheduler.
Re-draw the scheduling sequence of this task set during the hyper-period
and compare this scheduling with the scheduling sequence of
part 3. Compare the behaviors of the preemptive and non-preemptive
schedulers for this task set.
7.20 Practice with scheduling and analysis with Earliest Deadline First.
Given the following set of synchronous periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 12   C1 = 5     D1 = 12
T2     S2 = 0            P2 = 6    C2 = 2     D2 = 6
T3     S3 = 0            P3 = 24   C3 = 5     D3 = 24
1. We schedule this task set according to a preemptive Earliest Deadline
First scheduler. Apply the processor utilization factor test. Do
the tasks meet their deadlines?
2. Compute the hyper-period of this task set. How many idle units of
time should we have during this hyper-period?
3. Draw the scheduling sequence (as in Figure 7.7) of this task set
during the hyper-period. (Hint: fill in a table of dynamic priorities
like Table 7.1.) Is the number of idle units of time consistent with
your answer to part 2?
4. Now assume a non-preemptive Earliest Deadline First scheduler.
Re-draw the scheduling sequence of this task set during the hyper-period
and compare this scheduling with the scheduling sequence of
part 3. Compare the behaviors of the preemptive and non-preemptive
schedulers for this task set.
5. Now assume that some aperiodic tasks are released: tasks TA1 and
TA2. TA1 has a capacity of 1 unit of time, is released at time 7,
and has to meet a deadline at time 9. TA2 has a capacity of 3 units
of time, is released at time 12, and has to meet a deadline at time
21. We assume that the deadlines of the aperiodic tasks defined above
are the dynamic priorities that EDF must handle to schedule
the task set (e.g. D_i(t)). Draw the scheduling sequence of this new
task set for the first 30 units of time. Can we meet all the deadlines
under the preemptive EDF scheduler?
7.21 Practice with fixed priority scheduling and Rate Monotonic with
aperiodic tasks. Given the following set of synchronous periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 15   C1 = 4     D1 = 15
T2     S2 = 0            P2 = 7    C2 = 1     D2 = 7
These periodic tasks are scheduled with preemptive fixed priority
scheduling and Rate Monotonic. We expect to schedule aperiodic tasks
together with the periodic task set. To analyze the schedule, we define
a polling server with a period of 5 units of time and a capacity of 1
unit of time.
1. Check the schedulability of this set of three periodic tasks.
2. Now consider the arrival of two aperiodic tasks: tasks TA1 and TA2.
TA1 has a capacity of 1 unit of time, is released at time 7, and has
to meet a deadline at time 9. TA2 has a capacity of 3 units of time,
is released at time 12, and has to meet a deadline at time 21. Draw
the scheduling sequence of this new task set.
7.22 In this exercise, you will investigate the schedulability of a set of tasks
that share a block of memory. Given the following set of periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 6    C1 = 2     D1 = 6
T2     S2 = 0            P2 = 8    C2 = 2     D2 = 8
T3     S3 = 0            P3 = 12   C3 = 5     D3 = 12
These three tasks are scheduled with a preemptive fixed priority
scheduler and Rate Monotonic. T1 and T3 make use of a block of memory.
This block of memory is protected by a semaphore to enforce a
critical section.
T1 requires the block of memory during the second half of its capacity
while T3 needs the block of memory during all of its capacity.
1. First, we assume that no inheritance protocol is activated with the
semaphore. Draw the scheduling of this task set during its hyper-period.
In the scheduling sequence, find when the semaphore is allocated
and released by tasks T1 and T3. Can you see on the graph
at which time a priority inversion occurs?
2. Now we assume that the PIP protocol is active during semaphore
allocations and releases. Draw the scheduling sequence of this task
set during its hyper-period. In the scheduling sequence, find when
the semaphore is allocated and released. Also check that no priority
inversion occurs.
3. Compute worst case blocking times B1, B2, and B3.
7.23 In the previous exercise, you investigated the use of PIP to put an
upper bound on the length of time a high priority task has to wait for
a shared resource. Because PIP is difficult to implement, PCP protocols
are more commonly used. In this exercise, we study the ICPP, a kind
of PCP. Given the following set of periodic tasks:
Task   Initial release   Period    Capacity   Deadline
T1     S1 = 0            P1 = 31   C1 = 8     D1 = 31
T2     S2 = 2            P2 = 30   C2 = 8     D2 = 30
We schedule this task set with a preemptive fixed priority scheduler
and Rate Monotonic.
T1 and T2 require access to two shared resources called R1 and R2
that are controlled by the two semaphores mutex1 and mutex2.

- T1 needs R1 from the 2nd to the 8th units of time of its capacity.
- T1 needs R2 from the 4th to the 8th units of time of its capacity.
- T2 needs R1 from the 6th to the 8th units of time of its capacity.
- T2 needs R2 from the 2nd to the 8th units of time of its capacity.
1. First, we assume that PIP is activated with the semaphores. Draw
the scheduling sequence of this task set during the first 30 units
of time. In the scheduling sequence, find when the semaphores are
allocated and released by the tasks T1 and T2. Describe what you
see in the scheduling sequence.
2. Second, we assume that ICPP is activated for these semaphores.
Draw the scheduling sequence of this task set during the first 30
units of time. On the scheduling sequence, find when the semaphores
are allocated and released by the tasks T1 and T2. Compare the
scheduling sequences of questions 1 and 2.
7.24 This exercise summarizes the different topics presented in this chapter.
You will investigate the performance of an application responsible for
displaying data to the driver of a car. The application is composed of
5 tasks:

- Task SENSOR 1 reads a speed sensor every 10 ms.
- Task SENSOR 2 reads the temperature inside the car every 10 ms.
- Task SENSOR 3 reads the GPS location of the car every 40 ms.
- Task DISPLAY 1 computes every 12 ms a summary of the data produced
  by SENSOR 1, SENSOR 2, and SENSOR 3 and displays
  the results on a first screen.
- When the driver requests it, task DISPLAY 2 displays, on a second
  screen, the road map centered around the current location of the car.
  This screen is refreshed every 6 ms.

We have implemented this set of tasks, and, after some measurements,
we have noticed that:

- The execution time of tasks SENSOR 2 and DISPLAY 1 is bounded
  by 2 ms.
- Task SENSOR 3 needs a processor time ranging from 2 to 4 ms.
- The execution times of tasks SENSOR 1 and DISPLAY 2 never
  exceeded 1 ms.
We also assume that all tasks have the same first release time. The
engineers would like to check that all tasks finish their current work
before being released to do the next one. They have selected an operating
system that provides 32 priority levels ranging from 0 to 31.
Priority level 31 is the highest priority. The scheduler is a preemptive
fixed priority scheduler. This system does not allow us to assign the
same priority level to several tasks.
1. The engineers would like you to help them with the priority assignment
of each task.
- Explain the common criteria that people use to assign priorities
  to tasks.
- Assign a priority for each task of this system. Explain your choice.
2. From your priority assignment, and without drawing the scheduling
sequence, compute the worst case response time for tasks
SENSOR 1, SENSOR 2, and SENSOR 3.
3. In practice, tasks SENSOR 1, SENSOR 2, and SENSOR 3 share
a block of memory through the semaphore mutex. SENSOR 1 and
SENSOR 2 need the mutex during all of their capacity. SENSOR 3
needs the mutex during the first two units of time of its capacity.
The mutex uses ICPP.
Explain why the worst case response times of part 2 are incorrect
with the use of this mutex.
4. Draw the scheduling sequence of this system up to time 25. Show
when each shared resource is allocated and/or released.
5. In order to decrease the response time of the system, we run the
task set on an architecture with two processors, a and b. The task
parameters are now defined by:
Task        Processor   Capacity   Period
DISPLAY 1   b           4          6
SENSOR 3    b           2          20
DISPLAY 2   a           2          3
SENSOR 1    a           2          5
SENSOR 2    a           2          5
These tasks are still scheduled with a preemptive fixed priority scheduler.
Priorities are assigned according to Rate Monotonic. We will
not take the shared resources into account for this question. You
may assume that all tasks are independent. Show that at least
one task cannot meet its deadline.
6. In the previous question, the engineers assigned tasks to processors
randomly. Each task may be run on either processor a or on
processor b. However, running a given task on a or on b changes
the task's worst case execution time: processor b is twice as fast as
processor a.
Assign each task to processor a or b in order to meet all task deadlines.
Explain how you made this assignment and how you have
checked the deadlines. You may again assume that all tasks are
independent.
8
Real-time programming with Ada
In Chapter 7, we presented an overview of real-time scheduling theory. Those
discussions were independent of any programming language. This chapter
explains how to write real-time applications in Ada that are compliant with
that scheduling theory. A compliant Ada program can be analyzed, thereby
increasing the application's reliability.
Ada practitioners have two international standards available for producing
compliant Ada programs: Ada 2005 and POSIX 1003.1b. In order to apply
real-time scheduling theory with these standards, we need some means to:
1. Implement periodic tasks. This implementation requires a way to represent
time, a way to enforce periodic task release times, and a way to
assign priorities to both tasks and shared resources.
2. Activate priority inheritance protocols for the shared resources.
3. And of course, select and apply a scheduler such as those presented in
Chapter 7 (e.g. fixed priority scheduler or Earliest Deadline First).
The Ada 2005 and POSIX 1003.1b standards provide the means to create
compliant programs through pragmas and specific packages. Real-time
specific pragmas and packages for Ada 2005 are defined in Annex D of the Ada
Reference Manual. The Ada binding POSIX 1003.5 provides the packages
that allow us to create compliant real-time Ada programs using the POSIX
standard.
The first six sections of this chapter discuss and use the packages and
pragmas of Annex D. Section 8.1 shows how to express timing constraints of
tasks with an Ada package called Ada.Real_Time. Section 8.2 explains how
to implement periodic tasks. Periodic task release times are enforced with
the delay until statement introduced in Chapter 5. We also show how to
assign priorities to tasks and to protected objects with the priority pragma.
Section 8.3 uses the concepts of the first two sections to implement the car
application introduced and analyzed in Chapter 7. Section 8.4 explains how
to deal with priority inversion. Section 8.5 is devoted to Ada's scheduling
model and explains how we can select a particular scheduler for a given
application. Finally, Section 8.6 focuses on Ravenscar, a set of restrictions
of Ada features that allows us to easily check real-time scheduling theory
compliance at compile time.
The remainder of this chapter is devoted to POSIX 1003.1b. Section 8.7
presents the POSIX 1003.1b scheduling model and the POSIX application
programming interface (API) for Ada. The POSIX 1003.1b Ada API provides
a simple means for implementing Ada applications that run under
an operating system which does not provide an Ada 2005 standard compliant
run-time. We will talk more about run-time systems in Chapter 9. In
Section 8.8, we'll revisit the car application and implement it with POSIX
processes rather than Ada tasks.
Finally, in Section 8.9 we'll compare the Ada tasking and POSIX approaches
for developing real-time applications. For a more comprehensive
treatment of both the POSIX C and the Ada 2005 application programming
interfaces see Gallmeister (1995), Burns and Wellings (2007), and Burns and
Wellings (2009).
8.1 Expressing time
To implement periodic tasks with Ada, we need some means to express the
timing parameters of these tasks. The package Ada.Real_Time provides the
necessary resources. Ada.Real_Time is similar to Ada.Calendar but is more
suited for real-time applications. Ada.Real_Time provides a documented,
monotonic, high-resolution clock. Monotonic time does not jump forward or
backward as wall clock time does for leap seconds or changes due to daylight
saving time. Here is the specification of this package.
package Ada.Real_Time is

   type Time is private;
   Time_First : constant Time;
   Time_Last  : constant Time;
   Time_Unit  : constant := implementation-defined-real-number;

   type Time_Span is private;
   Time_Span_First : constant Time_Span;
   Time_Span_Last  : constant Time_Span;
   Time_Span_Unit  : constant Time_Span;

   Tick : constant Time_Span;

   function To_Duration  (TS : Time_Span) return Duration;
   function To_Time_Span (D  : Duration)  return Time_Span;

   function Clock return Time;

   function "+"  (Left : Time;      Right : Time_Span) return Time;
   function "+"  (Left : Time_Span; Right : Time)      return Time;
   function "-"  (Left : Time;      Right : Time_Span) return Time;
   function "-"  (Left : Time;      Right : Time)      return Time_Span;

   function "<"  (Left, Right : Time) return Boolean;
   function "<=" (Left, Right : Time) return Boolean;
   function ">"  (Left, Right : Time) return Boolean;
   function ">=" (Left, Right : Time) return Boolean;

   function "+"  (Left, Right : Time_Span) return Time_Span;
   function "-"  (Left, Right : Time_Span) return Time_Span;
   function "-"  (Right : Time_Span) return Time_Span;
   function "*"  (Left : Time_Span; Right : Integer)
      return Time_Span;
   function "*"  (Left : Integer; Right : Time_Span)
      return Time_Span;
   function "/"  (Left, Right : Time_Span) return Integer;
   function "/"  (Left : Time_Span; Right : Integer)
      return Time_Span;
   ...
   function Nanoseconds  (NS : Integer) return Time_Span;
   function Microseconds (US : Integer) return Time_Span;
   function Milliseconds (MS : Integer) return Time_Span;
   function Seconds      (S  : Integer) return Time_Span;
   function Minutes      (M  : Integer) return Time_Span;
   ...
private
   ... -- not specified by the language
end Ada.Real_Time;
Package Ada.Real_Time defines two time-related types: Time and
Time_Span. Let's look at the definitions of these two types provided in section
D.8 of the Ada Reference Manual.
1. Type Time implements an absolute time. Time begins at an arbitrary
starting point called the epoch that is the same for all values of Time.
As the exact starting point is not important for the analysis of real-time
applications, Ada does not define the epoch. It might be the time of
system initialization or some time standard (e.g. 1 January 1970 00:00:00
UT). The range of type Time shall be sufficient to represent real times up
to 50 years from the start of our program's execution. Package
Ada.Real_Time provides several constants of type Time.
- Time_First and Time_Last are the smallest and largest values of the
  Time type, respectively.
- Time_Unit is the smallest amount of real time representable by the
  Time type. It is implementation-defined and shall be less than or equal
  to 20 microseconds.
2. Type Time_Span represents the length of a duration of time. Package
Ada.Real_Time provides several constants of type Time_Span.
- Time_Span_First and Time_Span_Last are the smallest and largest values
  of the Time_Span type, respectively. Time_Span_First shall be no
  greater than −3600 seconds, and Time_Span_Last shall be no less than
  3600 seconds.
- Tick is a constant of type Time_Span that represents the average length
  of time during which the clock value does not change. Tick shall be no
  greater than 1 ms.
- Time_Span_Unit is the difference between two successive values of the
  Time type. It is also the smallest positive value of type Time_Span.
In addition to these types and constants, package Ada.Real_Time provides
various subprograms for obtaining and manipulating time-related values.
Function Clock is called to determine the current time. It returns the
amount of time that has passed since the epoch. The functions To_Duration,
To_Time_Span, Nanoseconds, Microseconds, Milliseconds, Seconds, and
Minutes allow us to convert timing values to and from Time, Time_Span,
Duration, and Integer variables. As we will show in Section 8.2, we use
these functions to assign periods to our tasks.
Finally, package Ada.Real_Time provides a variety of arithmetic and
relational operators that allow us to write expressions containing values of
Time_Span, Time, and Integer types.
Besides defining types and subprograms, package Ada.Real_Time explicitly
states the timing properties that Ada run-time system implementers
must document. As the clock hardware on various platforms differs, such
documentation is critical to analyzing the timing requirements of our
application on a particular target. Section D.8 of the Ada Reference Manual
requires that an implementation shall document:
1. The values of Time_First, Time_Last, Time_Span_First,
Time_Span_Last, Time_Span_Unit, and Tick.
2. The properties of the underlying time base used for the clock and for type
Time. These properties include the range of values supported and any
relevant aspects of the underlying hardware or operating system facilities
used.
3. The upper bound on the real-time duration of a clock tick.
4. The upper bound on the drift rate of Clock with respect to real time.
5. The upper bound on the size of a clock jump. A clock jump is the difference
between two successive distinct values of the clock (as observed by
calling the Clock function).
6. The upper bound on the execution time of a call to the Clock function.
7. and more . . .
Some of these documentation requirements allow us to perform schedulability
verifications, a feature available for very few programming languages.
These documentation requirements help practitioners to write Ada applications
that are compliant with real-time scheduling theory. This is the case
for the second item, which is mandatory for evaluating the worst case response
times of tasks.
8.2 Implementing periodic tasks
Now, let us see how to implement periodic tasks with Ada. The scheduling
models discussed in Chapter 7 require that the periodic release times of our
periodic tasks be deterministic. The release time is the time at which a task
begins to work on its activity. A periodic task is released periodically. The
scheduling models also require that each task be assigned a priority. In this
section, we look at implementing both of these requirements in Ada.
8.2.1 Implementing periodic release times
Periodic release times can be implemented with the delay statement. In
Section 5.2.4, we showed that the delay statement makes it possible to
block a task for a relative duration of time (e.g. delay seven seconds) or
until an absolute time (e.g. delay until 15:07).
A task can be blocked for a relative delay with the delay expr statement,
where expr is the relative delay in seconds. This statement blocks a task for at
least expr seconds. The task will not be released before this amount of time
has elapsed, but it can be released after it.
A task can also be blocked until an absolute time with the delay until
expr statement, where expr is the time at which the task can be released.
The delay until statement blocks a task at least until time expr is
reached. A task will not be released before time expr but can be released
after this time.
The Ada Reference Manual does not require any bound on the lateness
of a task's release time. But again, the Ada Reference Manual provides the
information needed to make verifications with real-time scheduling theory. It
requires the documentation of metrics on the implementation of the delay and
delay until statements. One of the required metrics is the upper bound
on the lateness of a delay statement. We can then use this upper bound as
a jitter parameter (see page 283) in our calculation of worst case response
times. Documentation of metrics is related to the implementation of the Ada
run-time, which is discussed in Section 9.3.1.
Let's look at an example of a simple periodic task.
with Ada.Real_Time; use Ada.Real_Time;
with Text_IO;       use Text_IO;

procedure A_Periodic_Task_Demo is

   --  The following procedure contains a job that
   --  we want to execute periodically
   procedure Run_Job_Of_The_Task is
   begin
      Put_Line ("Periodic task is released and executes its job");
   end Run_Job_Of_The_Task;

   task A_Periodic_Task;  --  Define a task

   --  The body of our periodic task
   task body A_Periodic_Task is
      Next_Time : Ada.Real_Time.Time := Clock;
      Period    : constant Time_Span := Milliseconds (250);
   begin
      loop
         --  The task is released: it runs its job
         Run_Job_Of_The_Task;
         --  Determine the next release time
         Next_Time := Next_Time + Period;
         --  Wait for the next release time
         delay until Next_Time;
      end loop;
   end A_Periodic_Task;

begin
   null;
end A_Periodic_Task_Demo;
In Chapter 7, we characterized a synchronous periodic task with a period and a capacity. The period of this task is 250 ms. At each release time the task runs the procedure Run_Job_Of_The_Task. The capacity of this example is the worst case execution time (WCET) of the procedure Run_Job_Of_The_Task.
Periodic release times of the task in this example are implemented with the delay until statement. We prefer the absolute delay to the relative delay because the latter suffers from cumulative drift. Figure 8.1 illustrates the concepts of cumulative and local drift.
Figure 8.1a shows the release times (heavy vertical bars) of a periodic task that uses the relative delay 0.250; statement to periodically release it. The first release is at time 0. The second release is at 250 + ε₁ ms, where ε₁ is the release time jitter. The third release is 250 + ε₂ ms after the second release, at 500 + ε₁ + ε₂ ms. The fourth release is at 750 + ε₁ + ε₂ + ε₃ ms. You can see that each successive release is further from the regular 250 ms intervals. The drift is a result of the accumulation of jitters.
Figure 8.1b shows the release times (heavy vertical bars) of a periodic task that uses the absolute delay until Next_Time; statement to periodically release it. The first release is at time 0. The second release is at 250 + ε₁ ms. The third release is at 500 + ε₂ ms. The fourth release is at 750 + ε₃ ms. While the jitter causes some local drift from the regular 250 ms intervals, the jitter does not accumulate over the lifetime of the periodic task.
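For contrast, a version of the periodic task body built on the relative delay would look like the following sketch (our own fragment, reusing the names from the example above). Because each delay is measured from the moment the statement is reached, any lateness carries over into all later releases:

```ada
task body A_Periodic_Task is
begin
   loop
      Run_Job_Of_The_Task;
      --  The 250 ms are counted from *now*, so any jitter in reaching
      --  this statement accumulates from one release to the next
      delay 0.250;
   end loop;
end A_Periodic_Task;
```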
8.2.2 The Ada priority model
Another important property of a periodic task is its priority level. Ada's priority model is based on two values: each task has a base priority and an active priority.

The task's base priority is the priority that is assigned to the task prior to the execution of the application. The task's active priority is the priority that is actually used by the operating system (or the Ada run-time system) to schedule the set of ready tasks. A base priority is a fixed priority while the active priority is a dynamic one. Different ways exist to assign fixed/base priorities to tasks. In Chapter 7, we described one of the most famous fixed priority assignment rules, Rate Monotonic. In that chapter we also discussed shared resource protocols such as ICPP that allow a task's priority to change as a result of priority inheritance. Active priorities represent those changing priority levels. At any time, the active priority of a task is the maximum of its base priority and all the priorities the task is inheriting at that instant. Priority inheritance may occur:

- During the execution of protected operations, if ICPP is activated. The task inherits the priority of the protected object. See section D.3 of the Ada Reference Manual. We will look at ICPP in Section 8.4.
- During a rendezvous. A task which accepts an entry call inherits the priority of the task making the entry call. See sections 9.5.3 and D.1 of the Ada Reference Manual.
- During task activation. During activation, a task inherits the active priority of its parent. See section 9.2 of the Ada Reference Manual.

[Figure 8.1 A comparison of the drift in release times for using relative and absolute delays: (a) cumulative drift of relative delay, with releases at 250 + ε₁, 500 + ε₁ + ε₂, ... ms; (b) local drift of absolute delay, with releases at 250 + ε₁, 500 + ε₂, ... ms; ε denotes release time jitter.]
Ada types related to priority are defined in the System package. Here are the relevant parts of this package for a particular target:
package System is
   ...
   --  Priority-related Declarations (Reference Manual D.1)
   Max_Priority           : constant Positive := 30;
   Max_Interrupt_Priority : constant Positive := 31;

   subtype Any_Priority       is Integer      range  0 .. 31;
   subtype Priority           is Any_Priority range  0 .. 30;
   subtype Interrupt_Priority is Any_Priority range 31 .. 31;

   Default_Priority : constant Priority := 15;
   ...
end System;
Priority levels can be assigned to both tasks and interrupt handlers. System.Interrupt_Priority defines priority levels for interrupt handlers. The range of System.Interrupt_Priority shall include at least one value. We assigned an interrupt priority to our interrupt handler in our analog to digital device driver (page 158). System.Priority defines priority levels that are available for tasks. System.Priority must provide at least 30 priority levels, but the actual number of available priority levels depends on the operating system or run-time system. Having more levels is better for real-time scheduling analysis. Most of the feasibility tests presented in Chapter 7 are easier to use when each task has a unique priority level.
Two pragmas are available to assign priority levels to interrupt handlers
and base priority levels to tasks.
The pragma dedicated to interrupt handlers is:
pragma Interrupt_Priority [(expression)];
The expression in this pragma is optional. When not given, the priority value assigned to the interrupt handler is Interrupt_Priority'Last. We do not discuss interrupt handlers further in this chapter. You can review Chapter 4 for further details.
The second pragma, the Priority pragma, is used to assign base priorities
to tasks and ceiling priorities to protected objects.
pragma Priority (expression);
Here expression represents the priority level to be assigned. We can place pragma Priority immediately within a task definition, a protected object or protected type definition, or the declarative part of a subprogram body. The value of the priority expression must be in the range of System.Priority. A priority pragma can also be assigned to the main procedure. If you do not use this pragma to assign a priority to a task, it is assigned a priority equal to the priority of the task that created it. Finally, if no priorities are assigned, each task is assigned the default priority value defined by the constant System.Default_Priority.
Here is an example that assigns priority level 10 to the periodic task within
our example program on page 299. Notice that we must use the long form of
the task declaration that we previously used only when our task had entries.
task A_Periodic_Task is
   pragma Priority (10);
end A_Periodic_Task;
8.3 Ada implementation of the car application
Let's use what we have learned to implement the car application introduced and analyzed in Chapter 7 (see page 268). This application has three tasks. Rather than write three different tasks, we will make use of Ada's generic facilities. While we cannot define a generic task, we can define a generic package that includes a task type. Then, we can instantiate three separate instances for our three tasks. Here is the specification of our generic package:
with System;
generic
   with procedure Run;  --  The job of the periodic task
package Generic_Periodic_Task is
   --  Two discriminants characterize this periodic task
   task type Periodic_Task
     (Task_Priority          : System.Priority;
      Period_In_Milliseconds : Natural) is
      pragma Priority (Task_Priority);
   end Periodic_Task;
end Generic_Periodic_Task;
This generic package has a single formal parameter, Run, that is a parameterless procedure. We will supply an actual procedure that our periodic task will call each time it is released.
The task type Periodic_Task has two discriminants: Task_Priority and Period_In_Milliseconds. Notice that the priority pragma associated with this task type uses the value of the first discriminant to set the priority of the task. When we create an actual task from this type, we will specify its priority through the first discriminant and its period through the second discriminant. We use the discriminant Period_In_Milliseconds in the delay logic of the task body, which is in the package body below.
with Ada.Real_Time; use Ada.Real_Time;
package body Generic_Periodic_Task is
   task body Periodic_Task is
      Next_Time : Ada.Real_Time.Time := Clock;
      Period    : constant Time_Span :=
                    Milliseconds (Period_In_Milliseconds);
   begin
      loop
         Run;  --  Call the procedure with the code for this job
         --  Wait until the next period
         Next_Time := Next_Time + Period;
         delay until Next_Time;
      end loop;
   end Periodic_Task;
end Generic_Periodic_Task;
Now let's see how to use this generic periodic task package to implement the car application. Here are the relevant characteristics with priority levels determined by Rate Monotonic assignment for each of the three tasks:
Task        Period (ms)          Priority
T_display   P_display = 100      12
T_engine    P_engine  = 500      10
T_speed     P_speed   = 250      11
We will create one instance of our generic task type for each of our three
tasks using the priorities and periods from this table. Here is the complete
program:
with Generic_Periodic_Task;
with Text_IO; use Text_IO;

procedure Car_System is

   --  The following three parameterless procedures are stubs for
   --  the actual code that runs in the car
   procedure Display_Speed is
   begin
      Put_Line ("Tdisplay displays the speed of the car");
   end Display_Speed;

   procedure Read_Speed is
   begin
      Put_Line ("Tspeed reads speed sensor");
   end Read_Speed;

   procedure Monitor_Engine is
   begin
      Put_Line ("Tengine performs engine monitoring algorithm");
   end Monitor_Engine;

   --  Instantiate a periodic task package from the generic package
   --  for each of our three run procedures
   package P1 is new Generic_Periodic_Task (Run => Display_Speed);
   package P2 is new Generic_Periodic_Task (Run => Read_Speed);
   package P3 is new Generic_Periodic_Task (Run => Monitor_Engine);

   --  Our three tasks
   Tdisplay : P1.Periodic_Task (Task_Priority          => 12,
                                Period_In_Milliseconds => 100);
   Tspeed   : P2.Periodic_Task (Task_Priority          => 11,
                                Period_In_Milliseconds => 250);
   Tengine  : P3.Periodic_Task (Task_Priority          => 10,
                                Period_In_Milliseconds => 500);

   pragma Priority (20);  --  Priority of main subprogram

begin
   Put_Line ("Tasks will start after the main subprogram completes!");
end Car_System;
We defined stubs for the three jobs each of our tasks must perform. Each stub simply displays a message. Following the definition of these stubs, we instantiate three different packages, one for each of our tasks. We pass the appropriate job to run for each of our tasks. Next are the declarations of our three tasks. We include the values of the priority and period discriminants in the declarations that create the three tasks.

We also choose to assign priority level 20 to the environment task, the task that executes the main procedure of this program. This higher priority level ensures that the three tasks will not start running before the main procedure has finished its initialization. This approach makes our program compliant with the critical instant assumption (see page 261). The pragma Partition_Elaboration_Policy (see section 5.7 of the Ada Reference Manual) can also be used to defer task activations to implement a critical instant.
8.4 Handling shared resources
As we discussed in Chapter 4, protected objects are the preferred construct for encapsulating resources shared among tasks. In Chapter 7, we showed that in order to perform a schedulability analysis, specific locking protocols are needed with shared objects. In Section 7.3.2, we discussed priority inheritance protocols. These protocols require us to assign priorities to both tasks and shared resources. We discussed the use of pragma Priority for tasks in the last section. The same pragma can be used to assign a priority to a protected object. We simply place the pragma in the declaration of our protected type or protected object. Below is the declaration of the protected bounded buffer type used in the producer-consumer demonstration program on page 136. We have used the priority pragma to assign a priority of 15 to all objects of this type.
protected type Bounded_Buffer (Max_Size : Positive) is
   pragma Priority (15);
   procedure Clear;
   --  delete all of the items in the buffer
   entry Put (Item : in Positive);
   --  add a value to the buffer
   entry Take (Item : out Positive);
   --  remove a value from the buffer
private
   Buffer : Integer_Queue.Queue_Type (Max_Size);
end Bounded_Buffer;
8.4.1 Locking policies
Each protected object also has a locking policy. Protected object priorities are properties related to the protected object's locking policy. The locking policy specifies how the protected object can be accessed and the relationships between task priorities and protected object priorities. If a protected object declaration does not contain a priority pragma, then a default priority is specified by the assigned locking policy.
We specify a given locking policy with the following pragma:
pragma Locking_Policy (policy-to-activate);
where policy-to-activate is the protected object locking policy. Ada compiler and run-time implementers are allowed to define any locking policies but must implement at least the Immediate Ceiling Priority Protocol (ICPP) described in Section 7.3.2. ICPP is identified by the policy name Ceiling_Locking. To activate ICPP during protected object access, we must add the following pragma to the application:
pragma Locking_Policy (Ceiling_Locking);
The Ada Reference Manual describes exactly how Ceiling_Locking works:
- Each protected object priority level is called a ceiling priority. If no ceiling priority has been assigned to a protected object, then it has a default priority of System.Priority'Last.
- The ceiling priority of a protected object must be equal to the maximum active priority of all the tasks which may call the protected object. The exception Program_Error is raised when a task attempts to access a protected object whose ceiling priority is less than the task's active priority.
- When a task gains access to a protected object and its active priority is lower than the ceiling priority, the task inherits the ceiling priority of the protected object: the task's active priority is increased to reach the ceiling priority. Thus the active priority of a task at a given time is equal to the maximum of its base priority and the ceiling priorities of any protected objects the task has locked.
You should notice that this specification of Ceiling_Locking is similar to the description we gave in Section 7.3.2 for ICPP.
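As an illustration of the second rule, here is a small sketch of our own (not part of the applications in this chapter) in which a high priority task calls a protected object whose ceiling is too low; under Ceiling_Locking the call raises Program_Error:

```ada
pragma Locking_Policy (Ceiling_Locking);

with Text_IO; use Text_IO;

procedure Ceiling_Violation_Demo is

   protected Low_Ceiling_Object is
      pragma Priority (5);       --  ceiling priority of 5
      procedure Touch;
   end Low_Ceiling_Object;

   protected body Low_Ceiling_Object is
      procedure Touch is
      begin
         null;
      end Touch;
   end Low_Ceiling_Object;

   task High_Caller is
      pragma Priority (10);      --  base priority above the ceiling
   end High_Caller;

   task body High_Caller is
   begin
      Low_Ceiling_Object.Touch;  --  ceiling violation
   exception
      when Program_Error =>
         Put_Line ("Ceiling violation detected");
   end High_Caller;

begin
   null;
end Ceiling_Violation_Demo;
```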
8.4.2 Queuing policies
Section D.4 of the Real-Time Systems Annex defines a second important pragma for protected objects.
pragma Queuing_Policy (policy_identifier);
Two language-defined policy identifiers exist: FIFO_Queuing and Priority_Queuing. A compiler vendor may supply additional policies. FIFO_Queuing is the default when no Queuing_Policy pragma is specified. Each queuing policy governs:

- The order in which tasks are queued for entry service.
- The order in which different entry queues are considered for servicing.

FIFO_Queuing orders entry callers according to the time the entry calls are made. All of our examples of entry queues in Chapters 4 and 5 were based on FIFO_Queuing. With Priority_Queuing, entry callers are ordered in the queues according to their active priority.
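To select this deterministic ordering for an application, we add the corresponding configuration pragma:

```ada
pragma Queuing_Policy (Priority_Queuing);
```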
In Chapter 4 we said that when two or more open entry queues in a protected object have waiting tasks, the language does not specify which entry queue is selected; the selection is arbitrary. This non-deterministic behavior is defined by the default FIFO_Queuing policy. The Priority_Queuing policy provides a deterministic selection: the entry queue with the highest priority task is selected. Should more than one entry queue contain tasks with the highest priority, the selection is based on the textual position of the entry declaration. Each entry has priority over the entries declared after it.

In Chapter 5 we said that when there are tasks waiting on more than one alternative of a selective accept statement, the language does not specify which accept alternative is selected for a rendezvous; the selection is arbitrary. Again, this non-deterministic behavior is defined by the default FIFO_Queuing policy. The Priority_Queuing policy provides a deterministic select statement: the accept alternative whose entry queue contains the highest priority task is selected. Should there be more than one open accept alternative with tasks of equal priority waiting, the selection is based on the textual position of the accept alternative within the selective accept statement. Each accept alternative has priority over the accept alternatives that follow it. If we were to use Priority_Queuing in our beverage vending machine software, task Make_Beverage on page 172 would rendezvous with a task requesting coffee before those with equal priority requesting tea or cocoa.
8.5 The Ada scheduling model
In the previous sections, we explained how to implement and parameterize tasks and protected objects so that our application is compliant with real-time scheduling theory. In this section, we look at the model Ada uses to schedule our tasks, and at how such applications are scheduled by an Ada compliant run-time.
8.5.1 Conceptual design of an Ada scheduler
Figure 8.2 illustrates how tasks are organized by the Ada run-time system. Each processor has several ready queues. Each queue is devoted to a single priority level and stores ready tasks that have the same active priority level. Each queue has a tail and a head. In the example shown in Figure 8.2, the queue associated with active priority 8 contains two tasks that are ready, the queue of active priority 7 contains four ready tasks, and the queue of active priority Priority'First contains one ready task.
Each queue has an associated dispatching policy. The dispatching policy specifies when tasks are inserted into and deleted from the ready queue and how the ready tasks are ordered in the queue. The dispatching policy defines the rules for selecting which ready task the processor must run and how task preemption is handled. Recall that preemption is where the scheduler chooses to stop the running task and replace it with another ready task.
[Figure 8.2 Ada's scheduling model]
With the Ada scheduling model, preemption may only occur at specific instants called task dispatching times. Each dispatching policy specifies what operations can be or must be done at these task dispatching times.
The scheduling model of Ada can be seen as two-level scheduling. At any time, the scheduler:

1. Selects the highest priority queue that is not empty.
2. Using the queue's dispatching policy, selects the next task to run from that queue.
8.5.2 Task dispatching policies
Dispatching is the process of giving the processor to a ready task. As dispatching is dependent on the states of the various tasks in the application, you may find it useful to review the basic states a task goes through during its execution, shown in Figure 3.2 on page 112. Ada provides several different dispatching policies. A dispatching policy is specified by the Task_Dispatching_Policy pragma. The most important dispatching policy is FIFO_Within_Priorities and is specified by including the following pragma in the application:
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);
The dispatching policy FIFO_Within_Priorities specifies the following scheduling rules:

- The scheduler is preemptive. When there is a ready task with an active priority higher than the running task, the scheduler will preempt the running task.
- The tasks in the ready queue are in first in, first out (FIFO) order. When the running task blocks itself or is preempted, the scheduler will dispatch the task at the head of the ready queue with the highest active priority.
- When a task changes state from blocked to ready, it is inserted at the tail of the ready queue for its active priority.
- Should the active priority of a task in one of the ready queues change, that task is moved to the tail of the queue for its new active priority.
The dispatching policy FIFO_Within_Priorities is well suited for critical applications. If all the tasks have different priority levels, we can easily apply the fixed priority scheduling feasibility tests described in Section 7.2.1. Because of the widespread use of the fixed priority scheduling feasibility tests, this policy is perhaps the most commonly used dispatching policy for real-time systems. However, FIFO_Within_Priorities may not be suitable for all applications. Ada provides other dispatching policies that can be useful for less critical applications. The Ada 2005 standard defines three additional dispatching policies:
- Round_Robin_Within_Priorities is a preemptive dispatching policy that allows us to assign a time quantum to each priority level. The scheduler preempts a running task when it has exhausted its quantum of time and inserts it at the tail of the ready queue. The ready queues are FIFO. When the running task blocks itself or is preempted, the scheduler will dispatch the task at the head of the ready queue with the highest priority. Round robin scheduling is popular in time sharing environments.
- Non_Preemptive_FIFO_Within_Priorities is a scheduling policy without preemption. The running task must voluntarily relinquish the processor by executing some blocking statement such as a delay statement, an accept statement, a select statement, or an entry call. The ready queues are FIFO. When the running task blocks itself, the scheduler will dispatch the task at the head of the ready queue with the highest priority.
- EDF_Across_Priorities is a preemptive dispatching policy that implements the Earliest Deadline First scheduler described in Section 7.2.2. The tasks in the ready queues are ordered by their deadlines. The task at the head of each queue has the shortest deadline. The package Ada.Dispatching.EDF provides operations for setting and querying task deadlines. It also provides the procedure Delay_Until_And_Set_Deadline for the implementation of periodic tasks that reset their deadlines at the beginning of each iteration. As we mentioned in Section 7.2.3, EDF is less predictable than fixed priority scheduling during overload conditions. However, it can effectively utilize 100% of the processor compared to only about 69% for fixed priority scheduling with Rate Monotonic priority assignment.
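As a sketch of these EDF facilities (the period value and task names are ours, chosen for illustration), a periodic task can combine its release delay and its deadline update in one call to Delay_Until_And_Set_Deadline:

```ada
pragma Task_Dispatching_Policy (EDF_Across_Priorities);

with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Dispatching.EDF; use Ada.Dispatching.EDF;
with Text_IO;             use Text_IO;

procedure EDF_Demo is

   task An_EDF_Periodic_Task;

   task body An_EDF_Periodic_Task is
      Period    : constant Time_Span := Milliseconds (250);
      Next_Time : Time := Clock;
   begin
      loop
         Put_Line ("EDF task is released and executes its job");
         Next_Time := Next_Time + Period;
         --  Block until the next release time; upon release, the
         --  task's new absolute deadline is Next_Time + Period
         Delay_Until_And_Set_Deadline
           (Delay_Until_Time => Next_Time,
            Deadline_Offset  => Period);
      end loop;
   end An_EDF_Periodic_Task;

begin
   null;
end EDF_Demo;
```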
We can specify different dispatching policies for different priority queues. This approach allows an application to run its tasks according to several policies at the same time. We use pragma Priority_Specific_Dispatching in place of pragma Task_Dispatching_Policy to assign a scheduling policy to each priority level. Here is an example that divides our ready queues into three different groups, each with its own scheduling policy.
pragma Priority_Specific_Dispatching
         (FIFO_Within_Priorities, 3, 31);
pragma Priority_Specific_Dispatching
         (EDF_Across_Priorities, 2, 2);
pragma Priority_Specific_Dispatching
         (Round_Robin_Within_Priorities, 0, 1);
With this set of pragmas, we assign FIFO preemptive fixed priority scheduling to the highest levels of priority, from level 3 to 31. These priority levels are devoted to highly critical tasks. The other priority levels are devoted to the EDF (priority level 2) and round robin schedulers (priority levels 0 and 1). Figure 8.3 illustrates the assignment of scheduling policies to priority queues specified by these three pragmas.
In this design, priority levels ranging from 0 to 2 are intended for tasks that are less critical. Since they are hosted by the lowest priority levels, they cannot disturb high priority/critical tasks, and the worst case response times of these high priority/critical tasks can always be computed with the methods described in Section 7.2.1. These mechanisms provide a natural and easy means to run both critical and non-critical tasks together.
Of course, for highly critical applications, the picture must be simpler so that we can use the highly predictable fixed priority scheduling approach to schedule all of our tasks. For such applications we use the following two pragmas, which specify that our scheduler satisfies the conditions for fixed priority scheduling and ICPP:
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);
pragma Locking_Policy (Ceiling_Locking);
[Figure 8.3 Example of an application using several different dispatching policies]
8.6 Ravenscar
The richness of the Ada language provides numerous features that may not be compliant with the analysis methods available in current real-time scheduling theory. For example, in the previous sections, we have shown that Ada allows a program to assign a different dispatching protocol to each priority level. Use of such features may lead to the development of an application that cannot be analyzed with feasibility tests such as the worst case response time feasibility test for fixed priority scheduling presented in Chapter 7.
We must ensure that our application meets all the assumptions of the feasibility tests we use. To meet these requirements, we must be sure not to use an Ada feature that violates some feasibility test assumption. Fortunately, Ada provides a mechanism which performs this check automatically: the Ravenscar profile.

Ravenscar is a profile which defines a subset of the Ada language that is compliant with fixed priority scheduling feasibility tests.¹ A profile is a set of restrictions a program must meet. Restrictions are expressed with pragmas that are checked at compile time and enforced at execution time. We use the following pragma to activate all of the restrictions in the Ravenscar profile:
pragma Profile (Ravenscar);
¹ The Ravenscar profile is described in section D.13.1 of the Ada Reference Manual.
This one pragma is equivalent to the following collection of pragmas:
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);
pragma Locking_Policy (Ceiling_Locking);
pragma Detect_Blocking;
pragma Restrictions (
          No_Abort_Statements,
          No_Dynamic_Attachment,
          No_Dynamic_Priorities,
          No_Implicit_Heap_Allocations,
          No_Local_Protected_Objects,
          No_Local_Timing_Events,
          No_Protected_Type_Allocators,
          No_Relative_Delay,
          No_Requeue_Statements,
          No_Select_Statements,
          No_Specific_Termination_Handlers,
          No_Task_Allocators,
          No_Task_Hierarchy,
          No_Task_Termination,
          Simple_Barriers,
          Max_Entry_Queue_Length => 1,
          Max_Protected_Entries  => 1,
          Max_Task_Entries       => 0,
          No_Dependence => Ada.Asynchronous_Task_Control,
          No_Dependence => Ada.Calendar,
          No_Dependence => Ada.Execution_Time.Group_Budgets,
          No_Dependence => Ada.Execution_Time.Timers,
          No_Dependence => Ada.Task_Attributes);
Together, these restrictions help make it possible to perform a static analysis of an Ada program. Some of these restrictions are directly related to the scheduling analysis with fixed priority scheduling feasibility tests presented in Chapter 7. In particular:

1. Restrictions enforced by the pragmas Task_Dispatching_Policy (FIFO_Within_Priorities) and No_Dynamic_Priorities require that all tasks are scheduled by preemptive, fixed priority scheduling.
2. Restrictions enforced by the pragma Locking_Policy (Ceiling_Locking) ensure that protected objects are shared between tasks according to the ICPP protocol. This pragma makes it possible to give an upper bound on the time each task waits to access protected objects.
3. Restrictions enforced by No_Task_Hierarchy and No_Task_Allocators forbid dynamic task allocations and allow task declarations only at the library level. These constraints help to produce a critical instant, an instant in which all tasks are started at the same time. The critical instant is a major assumption of most of the feasibility tests (see Section 7.3).
4. Restriction No_Relative_Delay disables the use of the relative delay statement to implement periodic release times. As we showed in Figure 8.1, the use of the relative delay to implement periodic task releases yields a cumulative drift away from the desired intervals.
5. Finally, the Ravenscar profile includes a number of restrictions that forbid non-deterministic temporal behaviors. These restrictions are necessary for determining the capacity (WCET) of each task. This is the case of the restrictions No_Task_Allocators, No_Protected_Type_Allocators, and No_Implicit_Heap_Allocations that forbid dynamic memory allocation, and also the restrictions No_Requeue_Statements and No_Select_Statements that prevent the use of overly complex (and non-deterministic) synchronization mechanisms.
These pragmas do not cover all fixed priority scheduling feasibility test assumptions. The Ravenscar profile does not specify how priorities are assigned to each task. The use of equation (7.3) to calculate the processor utilization factor requires that priorities be assigned according to Rate Monotonic. However, such a priority assignment is not required to use equation (7.4) to calculate worst case response times. Furthermore, both equations (7.3) and (7.4) assume periodic tasks, a property also not addressed by the Ravenscar profile.
These feasibility tests also require that we know the number of tasks in the application. Again, this information is not specified by the Ravenscar profile. We can, however, include the additional restriction Max_Tasks in our program to bound the number of tasks.
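For example, for the car application of Section 8.3, which declares exactly three tasks, we might write:

```ada
--  Bound the number of task creations over the partition's lifetime;
--  the environment task is not counted in this limit
pragma Restrictions (Max_Tasks => 3);
```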
So while the Ravenscar profile does not provide checks for all the necessary
restrictions, it is an excellent first step to ensure that a program is compliant
with real-time scheduling theory assumptions. We still need to follow good
software engineering principles that complement the tools provided by Ada.
Besides its schedulability analysis support, the Ravenscar profile also pro-
vides very significant size and speed improvements, both of which are im-
portant to real-time and embedded applications (Dobing, 1999; Shen and
Baker, 1999).
8.7 POSIX 1003.1b and its Ada binding
In the previous sections of this chapter, we described how to use features in
Ada's Real-Time Systems Annex to implement applications that are com-
pliant with real-time scheduling theory. Since annexes are optional, some
Ada compilers and run-times may not provide support for this annex. For-
tunately, another standard is available for Ada practitioners: POSIX.
POSIX is a standardized application programming interface (API) to an
operating system similar to Unix (Vahalia, 1996). POSIX helps program-
mers port their Unix programs from one Unix operating system to another.
Many programmers assume that only C programs may use the POSIX API.
However, POSIX also defines a standard API for Ada programs. The POSIX
standard is published by ISO and IEEE. Table 8.1 lists the chapters in the
standard relevant to this discussion. Each chapter covers a family of Unix
operating system facilities. Chapter 1003.1 and chapter 1003.2 are the most
well known chapters. Chapter 1003.1 defines an operating system API that
provides the usual Unix services such as fork and exec. Chapter 1003.2
defines programs or commands that Unix shells should provide.
Chapter           Content
POSIX 1003.1      System Application Program Interface
POSIX 1003.2      Shell and utilities
POSIX 1003.1b     Real-time extensions
POSIX 1003.1c     Threads
POSIX 1003.5      Ada POSIX binding

Table 8.1 Some chapters of the POSIX 1003 standard
In this book, we are most concerned with chapters 1003.1b, 1003.1c and
1003.5. Chapters 1003.1b and 1003.1c standardize concurrency and real-time
features for C programs and chapter 1003.5 provides the Ada binding for
the other chapters (including 1003.1b and 1003.1c).
Each chapter provides a set of services. A service may be mandatory or
optional. Some of the services provided in chapter 1003.1b are:
- POSIX_PRIORITY_SCHEDULING provides fixed priority scheduling resources.
- POSIX_REALTIME_SIGNALS is related to Unix signals which cannot be lost.
- POSIX_ASYNCHRONOUS_IO provides subprograms to perform asynchronous
  input and output.
- POSIX_TIMERS provides watchdog timer services. We'll use these timers
  to implement periodic task releases.
- POSIX_SEMAPHORES contains several usual synchronization tools.
Implementations of chapter 1003.1b are available for most real-time oper-
ating systems including Lynx/OS, VxWorks, Solaris, and QNX.
Unfortunately, most of the services in chapter 1003.1b are designated as
optional, a major weakness of this POSIX chapter. However, POSIX is
a flexible standard and POSIX programs (both C and Ada) are able to
discover at run-time if the services they need are provided by the under-
lying operating system. In addition, POSIX compliant programs are able
to adapt themselves to the underlying operating system capabilities. For
example, when a POSIX program starts, it can check how many priority
levels the operating system provides and could then change its priority level
assignments (see Section 8.7.2). Still, the large number of services not des-
ignated as mandatory may encourage the development of programs that are
not portable.
There is a major difference between the Ada standard with its Ravenscar
profile and the POSIX 1003 standard with its real-time extensions. Ada
checks at compile time whether the application is compatible with services
and resources of the underlying operating system. POSIX gives hints to the
application so that it might adapt itself to the features and resources that
are actually available. Another important difference for Ada programmers
is related to the concurrency abstraction. POSIX relies on system processes
rather than on Ada tasks. Rather than declaring tasks, the POSIX Ada
programmer must use the more primitive fork mechanism to create the
processes for their program.
8.7.1 Scheduling model
POSIX uses preemptive fixed priority scheduling that is quite similar to
Ada's. Figure 8.4 illustrates the POSIX scheduling model. It is composed of
several queues that store ready processes with each queue devoted to a given
priority level. This model is identical to that of Ada depicted in Figure 8.2.
A POSIX compliant system must provide at least 32 priority levels. At any
time, the running process is chosen from among the processes of the highest
non-empty priority level queue.
POSIX dispatching is also similar to Ada's. When a given priority queue
is selected, the choice of the process to run is performed according to a
policy. In a POSIX compliant operating system, we usually find at least the
following three policies:
1. SCHED_FIFO. This policy is quite similar to the FIFO_Within_Priorities
scheduling policy of Ada. But instead of dispatching Ada tasks we dis-
patch Unix processes. With this policy, ready processes in a given priority
level get the processor according to their order in the FIFO queue. The
process at the head of the queue runs first and keeps the processor until it
executes some statement that blocks it, explicitly releases the processor,
or finishes.
Figure 8.4 POSIX 1003.1b scheduling model
2. SCHED_RR. This policy provides a round-robin mechanism inside a priority
level. It can be seen as a SCHED_FIFO policy but with a time quantum and
some extra rules on queue management. A time quantum is a maximum
duration that a process can run on the processor before being preempted
by another process of the same queue. When the quantum is exhausted,
the preempted process is moved from the head to the tail of the queue
and the new process at the head of the queue is dispatched.
3. SCHED_OTHER. The behavior of this policy is not defined in the POSIX
standard. It is implementation-defined. Sometimes, this policy provides
a time-sharing scheduler. This policy is used by Linux for all processes
with a priority level of 0. These processes are put in a SCHED_OTHER
queue. With Linux, the process in the SCHED_OTHER queue that has waited
longest for the processor is dispatched first.
Different operating systems can implement the POSIX scheduling pol-
icy differently. For example, some operating systems allow each individual
process to specify the policy under which it is scheduled, while others do
not.
In VxWorks 5.3, the quantum can be initialized by programmers with
the kernelTimeSlice operation. By default, the quantum is equal to zero
and all VxWorks tasks are SCHED_FIFO tasks. If a programmer changes the
quantum with kernelTimeSlice, then all tasks in the application become
SCHED_RR tasks. In contrast to VxWorks, Solaris allows us to specify a given
policy for each Solaris process (Mauro and Dougall, 2001).
As we mentioned earlier, Linux also implements POSIX 1003. Priority
level 0 is devoted to SCHED_OTHER processes. Processes which have a priority
level between 1 and 99 can be scheduled either with SCHED_FIFO or
SCHED_RR.
Due to all these implementation differences, an Ada POSIX compliant
application is generally not as portable as an Ada compliant application.
The portability issue is exacerbated when the operating system also im-
plements POSIX 1003.1c, the POSIX chapter related to threads. Pro-
cesses and threads are distinguished by their address spaces: each pro-
cess has its own address space while threads share an address space. De-
pending on the operating system, POSIX threads may be implemented
either as kernel threads or as user-level threads (Anderson et al., 1992;
Mauro and Dougall, 2001). This may also change the scheduling of the
application.
Programmers must carefully check how POSIX (process or thread) schedul-
ing is actually implemented in order to analyze the schedulability of their
applications.
8.7.2 POSIX Application Programming Interface
Using the Ada binding POSIX 1003.5, Ada programmers can write and run
POSIX 1003 applications. In this section we will look at two package spec-
ifications that are a small part of Florist, a POSIX 1003.5 implementation
written by Baker and Oh (1997) for the GNAT compiler (see Chapter 9).

Implementing periodic POSIX processes
Package POSIX.Timers provides the Ada bindings for the POSIX 1003 ser-
vice POSIX_TIMERS. Here is a portion of that package.
with POSIX.Signals;
package POSIX.Timers is

   type Clock_ID is private;
   type Timer_ID is private;

   Clock_Realtime : constant Clock_ID;

   procedure Set_Initial
     (State   : in out Timer_State;
      Initial : in     POSIX.Timespec);

   procedure Set_Interval
     (State    : in out Timer_State;
      Interval : in     POSIX.Timespec);

   function Get_Interval
     (State : in Timer_State)
      return POSIX.Timespec;

   procedure Set_Time
     (Clock : in Clock_ID;
      Value : in POSIX.Timespec);

   function Get_Time
     (Clock : in Clock_ID := Clock_Realtime)
      return POSIX.Timespec;

   function Create_Timer
     (Clock : in Clock_ID;
      Event : in POSIX.Signals.Signal_Event)
      return Timer_ID;

   procedure Arm_Timer
     (Timer     : in Timer_ID;
      Options   : in Timer_Options;
      New_State : in Timer_State);

   ...

private
   ...
end POSIX.Timers;
Like package Ada.Real_Time, package POSIX.Timers provides subpro-
grams to handle time. This POSIX package allows Ada programmers to
activate timers. Timers are used to implement periodic releases by waking
up a process after a delay (Harbour et al., 1997). Let's look at an example.
The following program demonstrates how to use a timer to implement a
periodic process with a period of one second.
with POSIX;         use POSIX;
with POSIX.Signals; use POSIX.Signals;
with POSIX.Timers;  use POSIX.Timers;
with Text_IO;       use Text_IO;
procedure Posix_Timer_Example is

   -- The job carried out by the periodic process
   procedure Run_Periodic_Job_Of_The_Process is
   begin
      Put_Line ("Periodic task is released and executes its job");
   end Run_Periodic_Job_Of_The_Process;

   Info       : POSIX.Signals.Signal_Info;
   Waited_Set : POSIX.Signals.Signal_Set;
   Event      : POSIX.Signals.Signal_Event;
   Timer_Id   : POSIX.Timers.Timer_Id;
   Timer_Data : POSIX.Timers.Timer_State;
   Options    : POSIX.Timers.Timer_Options;

   Nanosecond_Period : Nanoseconds := 0;
   Second_Period     : Seconds     := 1;
   Period            : POSIX.Timespec;

begin
   -- Set up signal management
   POSIX.Signals.Set_Signal
     (Event => Event,
      Sig   => POSIX.Signals.SIGUSR1);
   POSIX.Signals.Set_Notification
     (Event  => Event,
      Notify => POSIX.Signals.Signal_Notification);
   POSIX.Signals.Delete_All_Signals (Waited_Set);
   POSIX.Signals.Add_Signal
     (Set => Waited_Set,
      Sig => POSIX.Signals.SIGUSR1);

   -- Create and initialize the timer to the period of the process
   Timer_Id := POSIX.Timers.Create_Timer
     (Clock => POSIX.Timers.Clock_Realtime,
      Event => Event);
   Period := POSIX.To_Timespec (S  => Second_Period,
                                NS => Nanosecond_Period);
   POSIX.Timers.Set_Initial
     (State   => Timer_Data,
      Initial => Period);
   POSIX.Timers.Set_Interval
     (State    => Timer_Data,
      Interval => Period);
   POSIX.Timers.Arm_Timer
     (Timer     => Timer_Id,
      Options   => Options,
      New_State => Timer_Data);

   -- The periodic process
   loop
      -- Block until the signal is received
      Info := POSIX.Signals.Await_Signal (Waited_Set);
      -- Carry out the job of the periodic process
      Run_Periodic_Job_Of_The_Process;
   end loop;
end Posix_Timer_Example;
The first part of this program issues calls to various procedures in package
POSIX.Signals to set up the POSIX signal SIGUSR1 we have decided to
use for our timer. Then the program calls the function Create_Timer to
define a POSIX timer. The calls to Set_Initial and Set_Interval set up
the timer so it will periodically raise the Unix signal SIGUSR1. We raise this
signal once per second. POSIX timer delays can be expressed in seconds and
nanoseconds. Of course, the actual precision will depend on the underlying
operating system's clock. The POSIX timer is then activated by the call to
procedure Arm_Timer.
The loop at the end of this example program implements our periodic
process. The process calls function Await_Signal which blocks it until the
timer sends a SIGUSR1 signal to the process. The process then runs its job
and starts the loop again.
Compare this example to program A_Periodic_Task_Demo on page 299
which uses only Ada constructs to execute a periodic job. POSIX requires a
large number of variables and API calls to accomplish the same thing.
POSIX scheduling
Now let's look at how we can control scheduling policies in POSIX compliant
Ada programs. Our second Florist package, POSIX.Process_Scheduling,
allows programmers to customize the scheduling of their processes. Here is
a portion of that package.
with POSIX.Process_Identification;
package POSIX.Process_Scheduling is

   subtype Scheduling_Priority is Integer;
   type Scheduling_Parameters is private;

   function Get_Priority (Parameters : in Scheduling_Parameters)
      return Scheduling_Priority;

   procedure Set_Priority
     (Parameters : in out Scheduling_Parameters;
      Priority   : in     Scheduling_Priority);

   type Scheduling_Policy is new Integer;
   Sched_FIFO  : constant Scheduling_Policy := ...;
   Sched_RR    : constant Scheduling_Policy := ...;
   Sched_Other : constant Scheduling_Policy := ...;

   procedure Set_Scheduling_Parameters
     (Process    : in POSIX.Process_Identification.Process_ID;
      Parameters : in Scheduling_Parameters);

   function Get_Scheduling_Parameters
     (Process : in POSIX.Process_Identification.Process_ID)
      return Scheduling_Parameters;

   procedure Set_Scheduling_Policy
     (Process    : in POSIX.Process_Identification.Process_ID;
      New_Policy : in Scheduling_Policy;
      Parameters : in Scheduling_Parameters);

   function Get_Scheduling_Policy
     (Process : in POSIX.Process_Identification.Process_ID)
      return Scheduling_Policy;

   function Get_Maximum_Priority (Policy : in Scheduling_Policy)
      return Scheduling_Priority;

   function Get_Minimum_Priority (Policy : in Scheduling_Policy)
      return Scheduling_Priority;

   function Get_Round_Robin_Interval
     (Process : in POSIX.Process_Identification.Process_ID)
      return POSIX.Timespec;

   ...

private
   ...
end POSIX.Process_Scheduling;
This package contains subprograms to handle priority and policy. Func-
tions Get_Priority, Get_Scheduling_Parameters, and Get_Scheduling_
Policy read the current priority and scheduling policy of a given pro-
cess. Processes are identified by unique numbers whose type is defined in
the package POSIX.Process_Identification. In the same way, procedures
Set_Priority, Set_Scheduling_Parameters, and Set_Scheduling_Policy
change the priority and scheduling policy of a process.
Package POSIX.Process_Scheduling also defines subprograms that per-
mit an application to adapt itself to the underlying operating system. Func-
tions Get_Maximum_Priority, Get_Minimum_Priority, and Get_Round_
Robin_Interval may be called to obtain the maximum priority level, the
minimum priority level, and the quantum that will be applied to SCHED_RR
processes.
The following program demonstrates how the operations in package
POSIX.Process_Scheduling can be used to query and change process schedul-
ing characteristics. It first displays the priority level and the policy of the
current process. Then it changes the process so that it is scheduled under
the SCHED_FIFO policy with a priority level of 10.
with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
with POSIX.Process_Identification; use POSIX.Process_Identification;
with POSIX.Process_Scheduling;     use POSIX.Process_Scheduling;
procedure Posix_Scheduling_Example is

   procedure Display_Policy (Policy : in Scheduling_Policy) is
   begin
      if Policy = SCHED_RR then
         Put ("/SCHED_RR");
      elsif Policy = SCHED_OTHER then
         Put ("/SCHED_OTHER");
      elsif Policy = SCHED_FIFO then
         Put ("/SCHED_FIFO");
      else
         Put ("/unknown policy");
      end if;
   end Display_Policy;

   Pid   : Process_ID;
   Sched : Scheduling_Parameters;

begin
   -- Read current process unique identifier
   Pid := Get_Process_Id;

   -- Display the default priority and scheduling parameters
   -- of a process in the current operating system
   Sched := Get_Scheduling_Parameters (Pid);
   Put ("Default priority/policy on this POSIX operating system : ");
   Put (Get_Priority (Sched));
   Display_Policy (Policy => Get_Scheduling_Policy (Pid));
   New_Line;

   -- Change both priority and policy
   -- Apply SCHED_FIFO with a priority level of 10
   Set_Priority (Parameters => Sched,
                 Priority   => 10);
   Set_Scheduling_Policy (Process    => Pid,
                          New_Policy => SCHED_FIFO,
                          Parameters => Sched);
   Set_Scheduling_Parameters (Process    => Pid,
                              Parameters => Sched);

   -- Display the updated priority and scheduling parameters
   Put ("Updated priority/policy : ");
   Put (Get_Priority (Sched));
   Display_Policy (Policy => Get_Scheduling_Policy (Pid));
   New_Line;
end Posix_Scheduling_Example;
Handling shared resources
We have now seen how to implement a periodic process and how to run it
with a given priority under a particular scheduling policy. Package
POSIX.Semaphores provides semaphores that can be used to implement
shared resources. We can use these semaphores to provide communication
and synchronization between processes. If the threads extension is sup-
ported, semaphores may also be used with threads.
Section 4.7.1 provided a brief description of the semaphore and how it
might be implemented as a protected object. In Section 7.3 we used the
semaphore in our descriptions of protocols for accessing shared resources.
POSIX semaphores provide the necessary tools for implementing the Priority
Ceiling Protocol (PCP).
POSIX mutexes and condition variables provide a second means of
controlling access to resources shared by threads. They are more efficient
than semaphores and thus recommended when using POSIX threads. See
Gallmeister (1995) for more information on sharing resources with POSIX.
8.8 POSIX implementation of the car application
As our final POSIX example, we return to our car application of Chapter
7. On page 304 we presented an Ada solution based on periodic tasks. Now
we'll write an equivalent program using periodic POSIX processes. We use
the same task characteristics listed on page 304.
We take a similar approach to our earlier solution of using a generic unit
as a pattern for our three different periodic jobs. Rather than use a generic
package containing a task type, our POSIX implementation is based on a
generic procedure that implements a periodic process. Here is its specifica-
tion:
with POSIX;
with POSIX.Process_Scheduling; use POSIX.Process_Scheduling;
generic
   with procedure Run;
   Priority_Parameter : Scheduling_Priority;
   Policy_Parameter   : Scheduling_Policy;
   Nanosecond_Period  : POSIX.Nanoseconds;
   Second_Period      : POSIX.Seconds;
procedure Generic_Periodic_POSIX_Process;
-- This procedure contains an infinite loop that calls
-- procedure Run (the job of the process) once every period
This specification includes five parameters. Parameter Run is the proce-
dure that contains the code we want executed periodically. The remaining
four parameters specify the process priority, the scheduling policy, and the
period. The period is expressed in seconds and nanoseconds.
The body of our generic procedure uses the constructs for periodic release
and scheduling priority and policy developed in our previous two examples.
with POSIX;                        use POSIX;
with POSIX.Signals;                use POSIX.Signals;
with POSIX.Timers;                 use POSIX.Timers;
with POSIX.Process_Identification; use POSIX.Process_Identification;
with POSIX.Process_Scheduling;     use POSIX.Process_Scheduling;
procedure Generic_Periodic_POSIX_Process is

   Info       : POSIX.Signals.Signal_Info;
   Waited_Set : POSIX.Signals.Signal_Set;
   Event      : POSIX.Signals.Signal_Event;
   Timer_Id   : POSIX.Timers.Timer_Id;
   Timer_Data : POSIX.Timers.Timer_State;
   Options    : POSIX.Timers.Timer_Options;
   Pid        : Process_ID;
   Sched      : Scheduling_Parameters;
   Period     : POSIX.Timespec;

begin
   -- Set up signal management
   POSIX.Signals.Set_Signal (Event => Event,
                             Sig   => SIGUSR1);
   POSIX.Signals.Set_Notification
     (Event  => Event,
      Notify => POSIX.Signals.Signal_Notification);
   POSIX.Signals.Delete_All_Signals (Waited_Set);
   POSIX.Signals.Add_Signal (Set => Waited_Set,
                             Sig => SIGUSR1);

   -- Create and initialize the timer to the period
   Timer_Id := POSIX.Timers.Create_Timer
     (Clock => POSIX.Timers.Clock_Realtime,
      Event => Event);
   Period := POSIX.To_Timespec (S  => Second_Period,
                                NS => Nanosecond_Period);
   POSIX.Timers.Set_Initial
     (State   => Timer_Data,
      Initial => Period);
   POSIX.Timers.Set_Interval
     (State    => Timer_Data,
      Interval => Period);
   POSIX.Timers.Arm_Timer
     (Timer_Id, Options, Timer_Data);

   -- Change priority and policy
   Pid   := Get_Process_Id;
   Sched := Get_Scheduling_Parameters (Pid);
   Set_Priority (Parameters => Sched,
                 Priority   => Priority_Parameter);
   Set_Scheduling_Policy (Process    => Pid,
                          New_Policy => Policy_Parameter,
                          Parameters => Sched);
   Set_Scheduling_Parameters (Process    => Pid,
                              Parameters => Sched);

   -- The periodic process
   loop
      Run;  -- Call the procedure with the code for this job
      Info := POSIX.Signals.Await_Signal (Waited_Set);
   end loop;
end Generic_Periodic_POSIX_Process;
Finally, here is the program that makes use of the generic procedure to
create and run three POSIX processes.
with POSIX;                          use POSIX;
with POSIX.Process_Scheduling;       use POSIX.Process_Scheduling;
with POSIX.Process_Identification;   use POSIX.Process_Identification;
with POSIX.Unsafe_Process_Primitives;
with Ada.Text_IO;                    use Ada.Text_IO;
with Generic_Periodic_POSIX_Process;
procedure Posix_Car_Example is

   -- The following three parameterless procedures are stubs for
   -- the actual code that runs in the car

   procedure Read_Speed is
   begin
      Put_Line ("Tspeed reads speed sensor");
   end Read_Speed;

   procedure Monitor_Engine is
   begin
      Put_Line ("Tengine performs engine monitoring algorithm");
   end Monitor_Engine;

   procedure Display_Speed is
   begin
      Put_Line ("Tdisplay displays the speed of the car");
   end Display_Speed;

   -- Instantiate an infinite loop procedure
   -- for each of our three processes

   procedure Tdisplay is new Generic_Periodic_POSIX_Process
     (Run                => Display_Speed,
      Priority_Parameter => 12,
      Policy_Parameter   => SCHED_FIFO,
      Nanosecond_Period  => 1_000_000,
      Second_Period      => 0);

   procedure Tspeed is new Generic_Periodic_POSIX_Process
     (Run                => Read_Speed,
      Priority_Parameter => 11,
      Policy_Parameter   => SCHED_FIFO,
      Nanosecond_Period  => 2_500_000,
      Second_Period      => 0);

   procedure Tengine is new Generic_Periodic_POSIX_Process
     (Run                => Monitor_Engine,
      Priority_Parameter => 10,
      Policy_Parameter   => SCHED_FIFO,
      Nanosecond_Period  => 5_000_000,
      Second_Period      => 0);

   Pid   : Process_ID;
   Sched : Scheduling_Parameters;

begin
   -- Priority for environment procedure
   -- (for main POSIX process)
   Pid   := Get_Process_Id;
   Sched := Get_Scheduling_Parameters (Pid);
   Set_Priority (Parameters => Sched,
                 Priority   => 20);
   Set_Scheduling_Policy (Process    => Pid,
                          New_Policy => SCHED_FIFO,
                          Parameters => Sched);
   Set_Scheduling_Parameters (Process    => Pid,
                              Parameters => Sched);

   -- The main process creates a new child process
   Pid := POSIX.Unsafe_Process_Primitives.Fork;
   if Pid = Null_Process_ID then  -- the child process?
      Tengine;   -- Execute the engine process
   else
      -- The main process creates a second child
      Pid := POSIX.Unsafe_Process_Primitives.Fork;
      if Pid = Null_Process_ID then  -- the child process?
         Tdisplay;   -- Execute the display process
      else
         -- The main process creates a third child
         Pid := POSIX.Unsafe_Process_Primitives.Fork;
         if Pid = Null_Process_ID then  -- the child process?
            Tspeed;   -- Execute the speed process
         end if;
      end if;
   end if;

   -- All child processes start to work after the
   -- higher priority main process completes!
   Put_Line ("Processes will start after " &
             "the main process completes!");
end Posix_Car_Example;
Creating POSIX processes is more complex than creating Ada tasks. The
first set of statements in this program assigns a high priority level to the
process. Then, a call to the POSIX function Fork is performed. This function
creates a new child process. The child process executes the same code as its
parent beginning with the statement that follows the call to function Fork.
So after the child process is created, the child and its parent both execute
the if statement that tests the value of variable Pid. This if statement allows
a process to determine if it is the child or the parent. In the child process,
function Fork returns a value of Null_Process_ID. In the parent process,
function Fork returns the process ID of its child, a value not equal to
Null_Process_ID. The child process then calls procedure Tengine while the
main process makes a second call to function Fork. The same logic is used to
differentiate between the second child and its parent. The second child calls
procedure Tdisplay while the main process makes a third call to function
Fork. The logic is repeated with the third child calling procedure Tspeed
while the main process displays a message and finishes.
Each process runs one of the three instantiated procedures implementing
a periodic task with a given priority and scheduling policy. Similar to the
Ada tasking implementation, we choose to assign priority level 20 to the
main process. This higher priority level ensures that the three processes
will not start running before the main process has completed. Again, this
approach makes our program compliant with the critical instant assumption.
8.9 Ada tasks versus POSIX processes
Both Ada and POSIX provide compliance with real-time scheduling theory.
In this section, we briefly compare these two approaches to writing real-time
applications.
POSIX has two major advantages. It is supported by a large number of
real-time operating systems. And, as it is language-independent, it allows us
to combine processes written in different languages. Unfortunately, writing
real-time applications with POSIX has some drawbacks. POSIX schedul-
ing was intended for process or thread scheduling and we saw that POSIX
scheduling may be implemented differently in several operating systems. As
we saw in our car application, we must do far more work to write a periodic
POSIX process than we do for an Ada task. The Ada compiler is responsible
for expanding our tasks into the API calls we have to make ourselves when
using POSIX (see Section 9.1 for more information on expansion).
Both POSIX and Ada are intended to write portable programs. But since
many of the necessary POSIX constructs are optional, we expect that Ada
achieves a higher level of portability.
The relationship between Ada tasks and POSIX processes is implemen-
tation-defined. So it is best to not mix Ada tasks or protected objects with
POSIX processes and threads. With all of the negative aspects of POSIX,
we might question if Ada programmers should use the POSIX Ada binding
to write their real-time applications. The answer is yes, if no Ada 2005
compliant run-time and compiler is available for the underlying hardware
and run-time system.
In Chapter 9, we discuss run-time systems and some tools that are avail-
able to write and verify concurrent real-time Ada applications.
Summary
Two standards are available for programmers to write real-time Ada ap-
plications that can be compliant with the real-time scheduling theory:
Ada 2005 and POSIX 1003.1b.
With Ada:
- Periodic task release times must be implemented with the delay until
  statement.
- A task has a base priority and an active priority.
- Base priorities of tasks and priorities of protected objects are defined
  with the Priority pragma.
- Scheduling of tasks is specified by dispatching policies. Ada provides
  several dispatching policies.
- The real-time scheduling theory compliance of an Ada program can be
  enforced by the Ravenscar profile.
With POSIX:
- The scheduling model is similar to Ada's.
- POSIX timers may be used to implement periodic task release times.
- POSIX is flexible: it allows an Ada program to adapt itself to the un-
  derlying operating system.
- We do not handle tasks but processes or threads.
- The portability level of an Ada program written with POSIX may be less
  than that of an Ada program written with Ada 2005.
Exercises
8.1 Contrast type Time in package Ada.Calendar to type Time in package
Ada.Real_Time.
8.2 List what mechanisms enforce real-time scheduling compliance.
8.3 Explain the difference between cumulative drift and local drift with
respect to periodic tasks.
8.4 Why should we use the delay until statement instead of the delay
statement?
8.5 Write an Ada program that displays the following values defined in the
package System (see page 301) on your computer.
- Max_Priority
- Max_Interrupt_Priority
- The lowest value of subtype Any_Priority
- The highest value of subtype Any_Priority
- The lowest value of subtype Priority
- The highest value of subtype Priority
- The lowest value of subtype Interrupt_Priority
- The highest value of subtype Interrupt_Priority
8.6 What pragmas must we include in an Ada program to be compliant
with fixed priority scheduling and ICPP?
8.7 What is the major advantage of using the Ravenscar profile? Describe
at least one disadvantage of using this profile.
8.8 In this exercise you will practice with POSIX 1003 scheduling policies.
We assume a Linux operating system that implements the POSIX
1003 standard with a scheduling model with 100 priority levels (from
0 to 99, with 99 being the highest priority). Priority level 0 is devoted
to time-sharing processes. All processes that have a 0 priority level
are scheduled according to the SCHED_OTHER policy. The Linux
SCHED_OTHER policy schedules processes according to how long it has been
since they last had the processor. The process dispatched first from the queue
at priority level 0 is the one that has waited the longest for the processor.
This waiting time is not just its waiting time in the ready queue
for priority 0. It includes the time that the process may have been
blocked for a resource or for input or output. Processes which have a
priority level between 1 and 99 are scheduled either with SCHED_FIFO
or SCHED_RR. Of course, the processes in priority queues 1 through 99
are dispatched before any processes in the lower priority queue 0. We
assume a quantum of one unit of time for the SCHED_RR policy. The
scheduling is preemptive.
Given the following process characteristics:
Name     Capacity   Release time   Priority   Policy
other1      8            0             0      SCHED_OTHER
rr1         5            2             3      SCHED_RR
rr2         5            2             3      SCHED_RR
fifo1       3            4             5      SCHED_FIFO
fifo2       3            3             2      SCHED_FIFO
fifo3       2            4             5      SCHED_FIFO
Fill in Figure 8.5 with the schedule from time 0 to 25 by darkening
the squares where a particular process is running on the processor.
Figure 8.5 Drawing to fill (a blank timing grid with one row for each
process, other1, rr1, rr2, fifo1, fifo2, and fifo3, and one column for
each time unit from 0 to 25)
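Before filling in the grid, it may help to see the dispatching rule stated as code. The following C sketch (the types and names are ours, not part of POSIX) picks the next process to run under the rules described above: highest priority first, and FIFO order within a priority level. It encodes only the selection rule, not a solution to the exercise; SCHED_RR differs from SCHED_FIFO only in that a running process is moved to the back of its queue when its quantum expires.

```c
/* Hedged sketch of the dispatching rule: among ready processes the
 * highest priority wins; ties are broken by queue arrival order. */
struct proc {
    int priority;   /* 0..99, 99 highest */
    int enqueued;   /* logical time it joined its ready queue */
    int ready;      /* 1 if ready to run */
};

/* Return the index of the process to dispatch, or -1 if none ready. */
int pick_next(const struct proc *p, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!p[i].ready)
            continue;
        if (best < 0 ||
            p[i].priority > p[best].priority ||
            (p[i].priority == p[best].priority &&
             p[i].enqueued < p[best].enqueued))
            best = i;
    }
    return best;
}
```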
8.9 What are the advantages and disadvantages of using Ada tasks or POSIX
1003.1b processes to implement real-time applications?
8.10 In this exercise you will practice with POSIX 1003 scheduling policies.
We assume a system composed of two periodic processes T1 and T2
defined with S1 = 0, S2 = 2, C1 = 1, C2 = 2, P1 = 8, and P2 = 7.
We also assume that the process deadlines are equal to their periods
(for all i: Di = Pi).
The operating system is a POSIX 1003.1b operating system with
32 priority levels (priority levels ranging from 0 to 31). Priority level
31 is the highest priority level. This operating system provides the POSIX
1003.1b SCHED_FIFO, SCHED_RR, and SCHED_OTHER policies. In the case
of SCHED_RR, the quantum is about one unit of time. SCHED_OTHER
implements time-sharing scheduling. The scheduling is preemptive.
1. Check the schedulability of the periodic process set.
2. Besides T1 and T2, the processor owns a set of aperiodic processes
defined as follows:
Name     Capacity   Release time   Priority   Policy
fifo1       4            5            20      SCHED_FIFO
other1      5            0             0      SCHED_OTHER
fifo2       3            4             5      SCHED_FIFO
rr1         3            3             7      SCHED_RR
rr2         3            4             7      SCHED_RR
3. Assign a priority level to processes T1 and T2 which allows these
periodic processes to meet their deadlines and which is compliant with
the operating system properties (related to the POSIX 1003.1b scheduling
model). Explain your choice.
4. Draw the scheduling of all processes (both periodic and aperiodic
processes) from time 0 to time 30.
8.11 In the implementation of the Generic_Periodic_Task package on pages
303 and 304, each synchronous periodic task computes its own first
release time by a local call to the Clock subprogram. When the number
of tasks is large, the resulting scheduling of this set of synchronous
periodic tasks may be different from the one we expect. Why? Propose
an adaptation of this package to solve this problem.
9
Tools for building and verifying
real-time applications
In the previous chapters, we introduced both Ada concurrency features
(tasks, protected objects, and rendezvous) and advanced features to
support real-time constructs (the notion of priority, protocols to bound
priority inversion, and task dispatching policies). Our Ada applications do not run
alone. They execute within a run-time configuration. A run-time configuration
consists of the processor that executes our application and the
environment in which that application operates. This environment includes
memory systems, input/output devices, and operating systems.
An important early design goal of Ada was to provide a compilation system
that can create from one Ada program different executables that run
in an equivalent way within a wide variety of run-time configurations. With
little or no modification to the Ada code, the same program should run on a
variety of platforms: from small embedded platforms, such as those based
on 8-bit micro-controllers and PDAs, to single processor microcomputers,
multi-core processors, and multiprocessor systems. The run-time configuration
may include no operating system; may include an RTOS (real-time
operating system) like VxWorks™, LynxOS™, ORK+, MaRTE OS, and
RTEMS; or may include a complete general purpose operating system such
as Linux, Windows™, and Mac OS™.
In this chapter, we detail how Ada concurrency constructs are supported
by the Ada run-time environment. We first introduce a generic architecture
for mapping Ada constructs onto an operating system. Then, we examine
three different run-times for the open-source Ada compiler GNAT. The
run-time environment plays a large role in the timing and scheduling of
tasks. In our final section, we discuss how to validate the schedule of our
application.
9.1 Ada run-times to implement real-time applications
A run-time or run-time system (RTS) is a collection of code designed to
support the execution of an application. A run-time system provides services
for common operations (such as input and output) and the implementation
of complex programming language features (such as the automatic management
of controlled types, exceptions, and tasking). The run-time system provides
an abstraction that hides the complexity and variability of services offered by
different operating systems. When we compile our application, the compiler
inserts calls to run-time system functions and procedures that implement
services and language features for a particular run-time configuration.
In this section, we discuss how the run-time system supports Ada
constructs in a portable way. Let us imagine that we want to write an
implementation of the Ada run-time system. We'll begin this thought experiment
with some questions to help us see how to build it.
Question 1: How are Ada features handled?
The Ada compiler translates our Ada program into machine language that
is later run on the target computer. This translation is a complex process
that usually involves many intermediate steps. Each step translates abstract
constructs into one or more simpler steps. Even a seemingly elementary
construct like an assignment statement may map to a number of different
machine language instructions to carry out the mathematical operations
and conditional logic to handle potential range violations. Such translation
of abstract to concrete is common in other domains. In Section 1.1.2 we
developed several sets of actions for cooks in a kitchen. The human cooks
will later refine these steps into simpler actions (taking an object, using it,
cleaning it, taking another one, etc.).
In Chapter 3 we introduced the task life cycle. Translating a task
declaration and body into machine language is a complex process. The compiler
will create the necessary code to implement the task. This code will include
calls to functions and procedures in the run-time system to carry out the
creation, activation, execution, and finalization of the task.
Consider this simple piece of Ada code:
procedure Simple_Task is
   task A_Task;
   task body A_Task is
   begin
      null;
   end A_Task;
begin
   null;
end Simple_Task;
The GNAT Ada compiler allows us to view the intermediate steps of the
compilation process. The compiler can produce a pseudo-source listing of
the expanded code (by using the -gnatDG compilation flag). Here is a
portion of the expanded code generated by
the GNAT compiler for procedure Simple_Task:
--  A pseudo source listing of expanded code produced
--  by the Ada compiler
with system.system__parameters;
with system.system__tasking;
with system.system__task_info;
with ada.ada__real_time;
. . .
procedure simple_task is
   procedure simple_task__clean is
   begin
      . . .
      return;
   end simple_task__clean;
begin
   system__soft_links__enter_master.all;
   chain : aliased system__tasking__activation_chain;
   . . .
   task type simple_task__a_taskTK;
   . . .
   type simple_task__a_taskTKV is limited record
      _task_id : system__tasking__task_id;
   end record;
   . . .
   procedure simple_task__a_taskTKB (_task : access
     simple_task__a_taskTKV) is
   begin
      . . .
   at end
      simple_task__a_taskTK__clean;
   end simple_task__a_taskTKB;
   a_taskTKE := true;
   $system__tasking__stages__activate_tasks
     (chain'unchecked_access);
   null;
   return;
at end
   simple_task__clean;
end simple_task;
This listing is only a small portion of the pseudo-source code that is
generated. The purpose of the generated code is to perform each of the actions
discussed when we introduced the life cycle of a task in Chapter 3. This
expanded code has dependencies on the Ada and System package hierarchies.
The latter is dedicated to implementation-specific source code to support
Ada constructs and provides a major portion of the run-time system.
Similar code is generated to support protected objects, rendezvous,
controlled types, and pointers, and to add range checks on arrays or
arithmetic operations.
Question 2: Do I need to use the operating system scheduling services?
As we have seen in the previous section, the compiler expands source code
into simpler elements: a task is expanded into a number of data structures
and subprogram calls. The operating system may provide some support for
task creation, management, and termination. In Section 3.5, we mentioned
that the Ada run-time relies on the operating system concurrency mechanisms,
but did not justify this choice. You may wonder at first if this support
from the operating system is needed. Ada has specific semantics, such as the
priority range of tasks, the numerical value of the highest priority level,
and support of scheduling policies, that may be different from those of the target
operating system. Why can't the Ada run-time system simply provide these
semantics directly?
We will use an example to illustrate why the Ada run-time system cannot
always provide our task scheduling. Imagine the following run-time system
design. Each Ada task is managed as a specific data structure maintained
by the Ada run-time system. Each task is scheduled by the Ada run-time
system: it is given processor time whenever required (e.g. upon a
timing event). The run-time system maintains its own ready queues and
entry queues. To support context switching between tasks, all we need is a
way to store the current task's context (typically the current value of all the
processor registers) and load the context of the next task in the ready queue.
Now consider the following application:
with Ada.Text_IO; use Ada.Text_IO;
procedure Process_Blocking is

   task Hello;
   task body Hello is
   begin
      loop
         Put_Line ("Hello");
         delay 0.5;
      end loop;
   end Hello;

   task Wait;
   task body Wait is
      C : Character;
   begin
      Get (C);
      Put_Line ("Got " & C);
   end Wait;

begin
   null;
end Process_Blocking;
This program has two simple tasks: Hello periodically displays a string.
Wait waits for the user to enter a character and hit the return key. What
happens when we run this program? We expect the string "Hello" to be
displayed periodically, and whenever a character is entered, the text "Got "
and the character are displayed.
Now, let us imagine we compile this code with the Ada run-time system
we just described. Each task is replaced by a procedure that is executed
from time to time. Suppose when we execute the program that the run-time
system first schedules task Hello. This task displays its greeting and then
executes its delay statement. The run-time system saves the context of task
Hello, finds task Wait in its ready queue, and loads its context. Task Wait
executes a call to procedure Get. Input is handled by the operating system.
The operating system will halt our program until the user enters a character.
When the program is blocked, its run-time system cannot execute and
schedule another ready task. The whole program seems stuck until the user
inputs some data. Only when the user enters a character will our program be
unblocked and the character displayed. As the program is no longer blocked,
our run-time system can handle the completion of task Wait and schedule
the next execution of task Hello.
This is not the behavior we expected of our program. We expected task
Hello to continuously display its message while task Wait waited for input.
The problem is that the operating system sees our program as a single
process that it schedules to run or not. The operating system is not aware
of the run-time system code within the program that provides the scheduling of
the Ada tasks (for more details, see Anderson et al., 1992).
When running a concurrent Ada program under a general or real-time
operating system, we must make our tasks visible to the operating system
and allow it to schedule them. We can, however, run a concurrent Ada
program on a target without an operating system, in what are called bare
board systems. In this case, the tasking semantics must be included in the
run-time system.
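The contrast with kernel-level threads can be demonstrated directly. In this hedged C sketch (all names are ours), a hello thread keeps printing while another thread blocks on input, which is exactly what the user-level run-time imagined above cannot achieve. We read from a pipe rather than the keyboard so the example is self-contained.

```c
/* Hedged sketch: with kernel-level threads, the kernel keeps
 * scheduling one thread while another is blocked on input. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static int fd[2];                 /* pipe standing in for the keyboard */
static volatile int greetings = 0;

static void *hello(void *arg)     /* plays the role of task Hello */
{
    (void)arg;
    for (int i = 0; i < 5; i++) {
        printf("Hello\n");
        greetings++;
        struct timespec t = {0, 20 * 1000000L};   /* 20 ms "period" */
        nanosleep(&t, NULL);
    }
    write(fd[1], "x", 1);         /* simulate the user's keystroke */
    return NULL;
}

/* Plays the role of the whole program: returns the number of
 * greetings printed while the "Wait" side was blocked in read(). */
int run_demo(void)
{
    char c;
    pthread_t h;

    if (pipe(fd) != 0)
        return -1;
    pthread_create(&h, NULL, hello, NULL);
    /* This read() blocks in the kernel, but only this thread is
     * blocked: the kernel keeps dispatching hello() meanwhile. */
    read(fd[0], &c, 1);
    printf("Got %c\n", c);
    pthread_join(h, NULL);
    return greetings;
}
```

All five greetings appear before "Got x" because only the reading thread, not the whole process, is suspended.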
Question 3: How are portability issues handled?
The Ada language defines uniform semantics for all its constructs, including
concurrency. In some cases, the target operating system may not support
all the concurrency features available in Ada. Ada is a modular language. In
Section 8.6, we briefly introduced the restriction mechanism that allows
us to reduce the constructs allowed.
These restrictions allow us to indicate that a specific Ada run-time support
needs only a subset of the available concurrency mechanisms. Furthermore,
some elements of the language are designated as being optional. For example,
a compiler vendor need not supply or support Annex D, which is necessary
to build real-time applications. Since concurrency constructs are part of the
language, the Ada compiler can detect the use of unsupported features and
reject programs that cannot be executed on the target system.
We sometimes need to know the limits of some types, such as priority
ranges or the precision of time-related types. The Ada standard requires that
all of this information be defined in the two packages System and Standard.
Here are some excerpts from these two packages.
package System is
   --  Excerpt from the System package, for Mac OS X target

   Max_Interrupt_Priority : constant Positive := 63;
   Max_Priority           : constant Positive := Max_Interrupt_Priority - 1;

   subtype Any_Priority is Integer range 0 .. Max_Interrupt_Priority;
   subtype Priority is Any_Priority range 0 .. Max_Priority;
   subtype Interrupt_Priority is Any_Priority
     range Priority'Last + 1 .. Max_Interrupt_Priority;

   Default_Priority : constant Priority :=
     (Priority'Last - Priority'First) / 2;
end System;
package Standard is
   --  Excerpt from the Standard package, as shown for Mac OS X

   type Duration is delta 0.000000001
     range -((2 ** 63 - 1) * 0.000000001) ..
           +((2 ** 63 - 1) * 0.000000001);
   for Duration'Small use 0.000000001;
end Standard;
These declarations define some implementation limits as bounds on Ada types.
By carefully using all this information, it is possible to write Ada programs
that remain portable across different implementations of the Ada run-time
system. Should we exceed the capabilities of a particular run-time
configuration, the Ada compiler will inform us.
Question 4: How do I write an Ada run-time?
From our answers to the previous questions, we come to the conclusion that
the Ada run-time system is based on two levels of facilities:
1. A set of low-level routines and data structures that support the semantics
of Ada constructs. The Ada compiler will map each element (task,
protected object, etc.) to a set of statements that use these routines
and data structures.
2. A library supporting concurrency, provided with the target operating
system, to support the concepts of threads (independent flows of execution),
locks (to protect data against concurrent access), and semaphores.
Writing the expander inside a compiler is a large challenge. We need to
understand precisely how the Ada source code is represented internally, and
how to map it onto elementary low-level routines. A typical approach taken
when writing a compiler is to isolate run-time configuration specific elements
so as to maximize code reusability when porting the compiler to new
run-time configurations. In addition, the low-level routines must rely on the
operating system library for mapping tasks or protected objects onto
lower-level mechanisms.
A good compiler design clearly separates the expansion phase from the
run-time system. Figure 9.1 illustrates such a design. Application code and
the code derived through expansion interact with a generic Ada run-time.
This generic run-time system provides services to build all constructs. The
generic run-time interacts with a target-specific library that hides all
implementation details necessary to interact either with the concurrency library
or directly with the operating system.
9.2 Some variants of the GNAT run-time
Figure 9.1 Layered design of an Ada program

In this section, we explore different implementations of the GNAT run-time.
GNAT is an Ada compiler, initially started as a project at New York
University (see Schonberg and Banner, 1994, for the seminal work on the
project). This compiler is now fully integrated in the GCC collection of
compilers, a free software compiler suite that is part of the GNU project
(http://gcc.gnu.org), and is supported by AdaCore (http://www.adacore.com).
We selected this compiler for our discussions because it is fully open source.
We can analyze the source code of both the compiler and the run-time system.
Before we begin our discussion of GNAT run-times, let us mention that
Ada and its real-time features are implemented and supported by numerous
vendors. Aonix provides the ObjectAda compiler and support for the
Ravenscar profile through its RAVEN™ line of run-time systems and compilers.
IBM Rational's Apex Ada gives you a choice of three run-time systems:
Wind River Tornado™, LynxOS™, and their own Rational Exec. Green
Hills Software's Ada has three different versions of their Integrity™ RTOS.
Irvine Compiler Corporation's Ada supports a variety of bare machine and
RTOS environments. In addition to the three open source GNAT run-times
that we discuss, AdaCore provides a variety of commercial run-time systems
for embedded, real-time, and safety-critical applications.
9.2.1 Architecture of the GNAT run-time
The GNAT compiler targets a wide variety of operating systems, from bare
board systems (without target operating systems) to large servers. GNAT
supports more than 60 run-time configurations.
To reduce portability cost and enhance maintainability, the architecture of
the GNAT run-time follows the elements we outlined in Section 9.1. The
run-time is made of two layers (see Giering and Baker, 1994, for some historical
background on the design):
- GNARL: the GNU Ada Run-time Library is the upper layer. The
  higher level Ada constructs are expanded into calls to GNARL. Once
  expanded, the code of GNARL is treated as regular Ada code.
- GNULL: the GNU Low-Level interface is the lower layer. It holds the
  portability layer that provides the interface to the host operating system.
  On a bare board system, GNULL provides all the necessary low-level
  support of Ada semantics.
The combination of these two layers provides:
1. Support of Ada semantics: GNARL completes the host operating system
constructs so as to support full Ada semantics precisely. Whenever possible,
GNARL supports the Real-Time Systems Annex (see Chapter 8).
2. Separation of concerns: defining the run-time as a set of Ada packages
called by the compiler allows one to separate compiler techniques from
run-time concerns, easing the implementation of the run-time system
on a new operating system. It also allows incremental implementation:
one may first implement a basic run-time without concurrency, then
extend it to support a restricted tasking profile, and then full tasking
capabilities.
3. Portability: an initial requirement of GNAT was that it be written
in Ada as much as possible. This requirement was meant to provide a
proof of concept of the compiler: it should be able to compile itself, a
process known as bootstrapping. A second reason for this requirement
was that all Ada checks are made on the Ada run-time itself.
Target-specific code is isolated in the GNULL and kept as small as possible.
4. Efficiency: the run-time should be as efficient as possible and tailorable
whenever possible. The run-time should avoid dependency on complex
code patterns to ease the analysis required to meet the requirements of
complex, critical applications.
In the following sections, we discuss three GNAT Ada run-times. We
selected these particular run-times for two reasons. First, they are all part of
open source projects, so we can inspect the source code. Second, they
illustrate the complete range of run-time system support of Ada constructs.
1. ORK+, a bare board Ravenscar run-time for mission-critical space
systems based on the LEON2 processor.
2. MaRTE OS, a full Ada run-time and a companion RTOS that supports
advanced scheduling algorithms for bare board x86 targets.
3. RTEMS, a full Ada run-time, based on a C RT-POSIX RTOS, that
supports a wide variety of processors and advanced tools like an NFS or
HTTP server and a shell.
The distinction between bare boards and regular operating systems is
thin. Usually, a bare board provides the minimum resources to schedule
tasks. An RTOS adds some drivers and libraries but runs only one application.
A full-fledged operating system like Linux allows us to run multiple
programs. An RTOS is usually divided up into a core scheduler and
additional services, hence a bare board system and libraries. A complete
operating system like Linux boots up as a bare board, and then launches one
particular service (called init) that allows it to spawn multiple processes.
9.2.2 ORK+, a bare board run-time
The Open Ravenscar Real-Time Kernel (ORK+, see
http://polaris.dit.upm.es/~ork/) is an open-source real-time kernel.
ORK is an Ada run-time of reduced size and complexity used
to develop high-integrity real-time applications in a subset of the Ada 2005
language compatible with the Ravenscar profile. ORK was originally
developed under contract with the European Space Agency (ESA). It was
based on Ada 95 and targeted to the ERC32 processor. The current version,
ORK+, supports the Ada 2005 standard and the LEON2 processor.
The ERC32 and the LEON2 processors are two derivatives of the SPARC
family of processors chosen by the European Space Agency. ESA selected
this platform for its on-board mission systems. Because these systems operate
outside of the Earth's atmosphere, these processors have been strengthened
to resist radiation. The SPARC architecture permits the addition of
mechanisms to recover from errors in memory induced by ionizing or
electromagnetic perturbations.
ORK+ (see de la Puente et al., 2000) is integrated as the lower-level
interface of the GNAT run-time. The kernel and a modified run-time library
are packaged together with the compiler and other tools into a comprehensive
package. Hence, one can generate a full binary, ready to deploy, in one
compilation step. The binary can be executed on either a real platform or a
simulator.
ORK+ implements only the elements required to support the Ravenscar
profile on a monoprocessor system. The Ravenscar profile greatly reduces
the set of available Ada constructs (see Section 8.6 for a detailed description).
The benefits are twofold. First, it allows for deterministic concurrency.
Second, it allows for an efficient implementation. The GNULL of ORK+ has
only 2,000 SLOC (source lines of code, i.e. only code statements; no blank
lines or comments are counted) fully written in Ada, plus some assembly
code for low-level code such as context switch procedures. This reduced
kernel is amenable to the thorough code analysis and inspection required
to ensure the quality of the code. The small size also makes it easier to make
the necessary timing measurements of capacity required to perform the
scheduling analyses discussed in Chapter 7.
This small size is a result of limiting the kernel to a small set of operations.
Tasks are created only at elaboration time; they cannot terminate.
Furthermore, there is no rendezvous allowed. All communications are performed
through protected objects, and they all follow the simple-to-implement
Priority Ceiling Protocol (PCP) that we discussed in Chapter 7.
In addition, ORK+ provides drivers to support serial communications and
the SpaceWire spacecraft communication network (ESTEC, 2003) derived
from the Firewire protocol.
Let's look at a small example for the ORK+ run-time. The first file,
named gnat.adc, configures the compiler to enforce all compile-time checks
required by the Ravenscar profile. This file contains the single line:

pragma Profile (Ravenscar);
Our program contains a single task that periodically outputs the string
"ping!". To comply with the Ravenscar No_Task_Hierarchy restriction, all
tasks must be defined in a separate package; they cannot be nested within
the main subprogram. Here is the specification and body of the package that
contains our periodic task.
package Task_1 is
   task Task_1 is
      pragma Priority (10);
   end Task_1;
end Task_1;

with Ada.Real_Time; use type Ada.Real_Time.Time;
with System.IO;
package body Task_1 is
   task body Task_1 is
      Start_Time : Ada.Real_Time.Time := Ada.Real_Time.Clock;
   begin
      loop
         System.IO.Put_Line ("ping!");
         Start_Time := Start_Time
                     + Ada.Real_Time.Milliseconds (500);
         delay until Start_Time;
      end loop;
   end Task_1;
end Task_1;
Here is our main subprogram:
with Task_1;
with Ada.Real_Time;
procedure Simple_ORK is
begin
   --  Suspend the environment task forever
   delay until Ada.Real_Time.Time_Last;
end Simple_ORK;
The main subprogram Simple_ORK delays itself forever. The task defined
in our package provides all of the functionality of this program.
GNAT Pro High Integrity™ is a commercially supported run-time for
GNAT, derived from the original ORK+ and supported by AdaCore. It is
being used by the Ocean & Land Colour Instrument (OLCI) project. This
project is currently being developed by Thales Alenia Space and GMV,
under contract of the European Space Agency, for the Sentinel-3 mission.
Sentinel-3 is a scientific observation mission that will routinely monitor ocean
and land surfaces.
Within the OLCI project, the Instrument Control Module (ICM) software
is a critical component responsible for interfacing with the Satellite
Management Unit, the central satellite control computer. The ICM also
controls the rest of the instrument units. The Ada run-time for this critical
software must be qualified to space standard ECSS-E-40B level B on the
target processor. In the context of space systems, qualification requires the
testing of every possible execution path in the system. Such exhaustive path
testing of a run-time system is only possible in small run-times like ORK+
and GNAT Pro High Integrity™.
9.2.3 MaRTE OS, a minimum POSIX run-time
MaRTE OS (Rivas and Harbour, 2001) is a Hard Real-Time Operating
System for embedded applications that follows the Minimal Real-Time POSIX
subset (see http://marte.unican.es/). Unlike ORK+, which supports only Ada,
MaRTE OS provides support for mixed language applications written in
Ada, C, C++, or Java. It also provides libraries for POSIX mechanisms
(threads, mutexes, condition variables) and a deterministic memory allocator.
It fully supports the Ada 2005 Real-Time Annex and includes a wide range
of schedulers, including application-defined scheduling and EDF.
MaRTE OS acts as a particular instance of GNULL. It is an x86 kernel
written in Ada with more than 16,000 SLOC. This code is also exported
as C functions to allow the construction of mixed language applications.
Compared to ORK+, MaRTE OS provides a wider range of drivers: serial
communication and various network protocols (Ethernet, CAN, RT-EP,
etc.). In addition, the PolyORB middleware has been adapted to run on
MaRTE OS and take advantage of its drivers.
MaRTE served as the test bench for the new scheduling features of Ada
2005: application-driven scheduling and EDF. Both of these schedulers
rely on a dedicated API to control the dispatching parameters associated
with each task.
The package Ada.Dispatching.EDF provides the API to control the
deadline associated with each task.
with Ada.Real_Time;
with Ada.Task_Identification;
package Ada.Dispatching.EDF is
   pragma Preelaborate;

   subtype Deadline is Ada.Real_Time.Time;
   Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;

   procedure Set_Deadline
     (D : Deadline;
      T : Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task);

   procedure Delay_Until_And_Set_Deadline
     (Delay_Until_Time : Ada.Real_Time.Time;
      Deadline_Offset  : Ada.Real_Time.Time_Span);

   function Get_Deadline
     (T : Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task)
      return Deadline;
end Ada.Dispatching.EDF;
To select the EDF dispatching protocol, simply congure the run-time by
specifying the task dispatching policy with
8
See http://marte.unican.es/.
346 Tools for building and verifying real-time applications
pragma Task_Dispatching_Policy (EDF_Across_Priorities);
Using EDF or any API-based dispatching policy requires a good understanding
of the policy itself and of its interaction with the run-time system. While
EDF was defined by Liu and Layland (1973), the complexity of implementing
it in a run-time meant it took considerably longer to appear in practice:
it was first integrated into Ada in 2005.
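As an illustration, here is a minimal sketch of a periodic task scheduled under EDF; the task name, period, and relative deadline below are our own illustrative choices, not taken from the standard. Each release computes its absolute deadline as the release time plus a fixed offset.

```ada
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Dispatching.EDF; use Ada.Dispatching.EDF;

--  Assumes pragma Task_Dispatching_Policy (EDF_Across_Priorities)
--  is in force for this partition
procedure EDF_Sketch is
   task Worker;
   task body Worker is
      Period       : constant Time_Span := Milliseconds (10);
      Rel_Deadline : constant Time_Span := Milliseconds (8);
      Next_Release : Time := Clock;
   begin
      Set_Deadline (Next_Release + Rel_Deadline);
      loop
         --  ... do the periodic work here ...
         Next_Release := Next_Release + Period;
         --  Suspend until the next release and install the new
         --  absolute deadline (release time + offset) atomically
         Delay_Until_And_Set_Deadline
           (Delay_Until_Time => Next_Release,
            Deadline_Offset  => Rel_Deadline);
      end loop;
   end Worker;
begin
   null;
end EDF_Sketch;
```

The combined Delay_Until_And_Set_Deadline operation avoids a window in which the task has been released but still carries its old, stale deadline.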
9.2.4 RTEMS, a complete POSIX run-time
RTEMS is the Real-Time Operating System for Multiprocessor Systems
(see http://www.rtems.org).
It is a full-featured RTOS that supports a variety of open API and interface
standards, including the Real-Time Executive Interface Definition (RTEID)
and the Open Real-Time Kernel Interface Definition (ORKID). RTEMS
includes support for POSIX threads and real-time extensions. It also includes
a TCP/IP protocol stack, several networking clients and servers (such as a
web server and remote file systems), and a small shell to load/unload
application code. RTEMS supports multiple languages including C, C++, and
Ada. RTEMS itself is written in C. It targets around a dozen CPU platforms.
The POSIX interface of RTEMS serves as the basis of the GNULL part of
the Ada run-time. However, RTEMS is more complicated to configure than
the previous run-times. It relies on a series of C macros to configure part
of its internals. Here are some examples of these macros. The full source is
available on our web site.
/* configuration information */
#define CONFIGURE_TEST_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_TEST_NEEDS_CLOCK_DRIVER
#define CONFIGURE_MICROSECONDS_PER_TICK \
          RTEMS_MILLISECONDS_TO_MICROSECONDS(1)
#define CONFIGURE_POSIX_INIT_THREAD_TABLE
#define CONFIGURE_MAXIMUM_SEMAPHORES 10
#define CONFIGURE_GNAT_RTEMS
#define CONFIGURE_INIT
Some of these declarations define different macros used to activate parts
of the RTEMS libraries. Others configure the number of RTEMS tasks and
semaphores used to support Ada tasks and protected objects. These macros
are preprocessed before compiling the code. Since Ada does not provide a
preprocessing mechanism, we must write some C code to write Ada code!
Writing some C code to tailor the RTOS for our Ada application is common
for RTOSs written in C.
RTEMS is used in a large variety of projects written in both Ada and C
(see http://rtems.org/wiki/index.php/RTEMSApplications for more details).
The Avenger Air Defense System is one project that uses Ada.
9.3 Validating scheduling of a system
In the previous chapters, we discussed the Ada constructs that support con-
currency and the extensions for real-time systems. In this chapter, we have
presented different run-time architectures that support these constructs.
Meeting deadlines is a requirement for real-time applications. In Chapter 7
we discussed real-time scheduling theory and different methods to determine
whether or not scheduling constraints can be met. In Chapter 8 we showed
how to write Ada code compliant with this theory.
In this section we complete the discussion by showing how we can ensure
that our application will meet all of its deadlines. We need to tie together
the impact of the run-time on scheduling (such as latency, jitter, and over-
head), information on the application code, and an appropriate analytical
framework.
9.3.1 Impact of the Ada run-time
The Ada standard goes well beyond the definition of the language. It also
defines how to tailor the language to meet specific needs through restrictions
and dedicated run-times.
Annex M.1 of the Ada Reference Manual, titled Specific Documentation
Requirements, defines the key elements that make up a run-time
configuration. In particular, this annex includes metrics associated with
aspects of the run-time such as interrupt handlers, package Task_Attributes,
procedure Set_Priority, setting the priority of a protected object, aborts,
package Real_Time, delay statements, entry-less protected objects, execution
time, and timing events.
These metrics are measured on particular test benches. Obviously, they
depend on the architecture of the run-time, the processor being used, the
release of the compiler, and the compilation options used. For example,
Vardanega et al. (2005) have provided a full analysis of one version of the
ORK+ run-time to evaluate how the number of tasks or protected objects
impacts each of the parameters defined in Annex M.1 and to derive analytical
formulas for each metric based on these run-time elements.
Compiler vendors may provide direct support for computing these metrics,
but this is not mandatory: computing metrics is a costly process that
requires a dedicated test bench. It is done only when required for a
specific contract or project.
9.3.2 From source code to analyzable systems
Of course, the user-provided source code itself plays the major role in
evaluating scheduling. Concurrency entities animate user code, and the code
itself takes time to execute. The complexity of the source code (measured by
such metrics as Big-O complexity, cyclomatic complexity, etc.) provides some
general measure of the efficiency and understandability of our code. It is
the role of the software designer to evaluate the algorithms and their
impact on the execution time of the system.
Another point is to use Ada constructs correctly so as to minimize
uncertainty. The careful use of patterns is important. By using regular
patterns for periodic and sporadic tasks and protected objects, the code
becomes aligned with the analytical framework we introduced for
fixed-priority scheduling. Reusing the same set of patterns allows an easy
mapping from Ada code to task sets. In Chapter 8 we presented a generic
package that provides a pattern for periodic tasks, which we used to create
the three tasks of our car application.
It is important to be able to compute an upper bound on the time taken by
each function so we can calculate the capacity of each task. These capacities
are needed to evaluate the various feasibility tests described in Chapter 7. We
will discuss the determination of worst case execution times in Section 9.3.4.
9.3.3 Applying scheduling analysis
In Chapter 7 we discussed a number of techniques for determining whether
or not all the tasks in an application could meet their deadlines. As these
analyses were algorithmic in nature, it should not be a surprise that tools
exist to support the work we did manually in that chapter. Using a model
of our application, scheduling analysis tools allow us to run the various
feasibility tests.
In some cases the system might be too complex for such tools. For exam-
ple, most tools cannot directly handle interactions with the environment.
In such situations, it is important to explore other possible approaches to
scheduling analysis, such as model checking or simulation. There are formal
methods with different schedulability analysis approaches based on Petri
Nets, timed automata, or process algebra. These formal languages have led
to the development of various simulators and model checkers that can be
used for schedulability analysis. TIMES (Fersman et al., 2006), VERSA
(Clarke et al., 1995; Philippou and Sokolsky, 2007), and various Petri Net
tools (Berthomieu and Vernadat, 2006; Hamez et al., 2006; Wells, 2006) are
examples. There is no one-size-fits-all analysis technique. A real-time
multimedia system requires simulation because of the uncertainty of its
run-time environment, whereas a critical system in a spacecraft will
probably require an analytical framework that provides a definitive
yes/no answer.
MAST (Harbour et al., 2001, University of Cantabria; see
http://mast.unican.es/) and Cheddar (Singhoff et al., 2004, University of
Brest; see http://beru.univ-brest.fr/~singhoff/cheddar/) are two open
source scheduling analysis tools. There are also commercial tools such as
Rapid-RMA from Tri-Pacific (2003) and TimeWiz from TimeSys (2002). All of
these tools
implement state-of-the-art analysis techniques to compute:
Processors/tasks: worst/best/average response time, number of context
switches/preemptions, missed deadlines, etc.
Shared resources: worst/best/average shared resource blocking time,
priority inversion, deadlock, etc.
Buffers: maximum/average message waiting time and maximum/average
number of messages, etc.
Note that each tool implements only a subset of these analyses, so you
have to select the tool that matches your requirements.
In Chapter 7 we presented a number of ways to compute the schedulability
of a set of tasks. Before starting any computations, we must be careful to
ensure that the hypotheses of the underlying theory hold. Furthermore, the
computation itself is rather tedious, often requiring many steps. Software
engineering tools help us by (1) requiring us to determine the scheduling
parameters of each task in order to model it in the tool and (2) computing
the results.
Modeling is usually the first step in building a complete application. Its
importance is often underestimated by software engineers. By carefully
modeling your system, you have a clearer view of what you intend to build.
This modeling process is supported at different levels by each tool.
There are many modeling tools available today for designers of real-time
systems. Some of them provide Ada support. Modeling tools usually provide
a means to graphically design the architecture of a real-time system. These
tools may interact with other software engineering tools by model exchange
to generate source code of the system, to produce documentation, or to
perform various analyses (including schedulability analysis).
Modeling activities assume the use of a language that is able to model
real-time features: we should be able to design a model of a real-time sys-
tem composed of tasks, shared resources, and any other components of
a real-time system, with all the parameters presented in Chapter 7.
Examples of real-time system modeling languages are various UML profiles
such as MARTE (Frederic et al., 2006; OMG, 2007; not to be confused with
the MaRTE OS of Section 9.2.3), the SAE AADL standard (SAE, 2009),
HOOD-HRT (Burns and Wellings, 1994), EAST-ADL (Debruyne et al., 2005),
and PPOOA (Fernandez and Marmol, 2008). Some
of these languages are proprietary; others are international standards. Mod-
eling languages are the link between modeling tools, source code generators,
analysis tools, and many others. Use of standardized modeling languages
may increase tool interoperability.
Stood (Dissaux, 2004) from Ellidiss Technologies, Rational Rhapsody
(Chadburn, 2010) from IBM, and Artisan Studio from Atego (2009) are
modeling tools available to Ada practitioners. Stood supports AADL,
HOOD-HRT, and UML. The others are mostly UML oriented.
Cheddar allows you to model your Ada tasks and some communication
aspects in order to evaluate their schedulability. Cheddar is composed of a
graphical editor and a library of processing modules written in Ada. The
designer uses the editor to model their real-time systems. Cheddar has its
own modeling language. A real-time system is modeled by a set of tasks,
processors, shared resources, buffers, and address spaces. The modeling
features of Cheddar are limited to real-time scheduling. Designers need to
use separate modeling tools for the other aspects of the modeling activity.
Cheddar has been directly coupled with Stood, using the AADL language to
link the two. This coupling allows real-time systems designers to easily
apply schedulability analysis to the models they prepared with Stood.
The Cheddar library implements many feasibility tests and classical real-
time scheduling algorithms (all the feasibility tests and algorithms presented
in Chapter 7). This library also offers a domain-specific language together
with an interpreter and a compiler. This language is used for the design and
analysis of schedulers that are not implemented in the library.
In Section 7.2.1 we presented the example of a car application with three
periodic tasks. Given the period, capacity, and deadline of each of these
tasks, we performed a set of computations to conclude that the system is
actually schedulable. Let's use Cheddar to carry out these computations
for us. Figure 9.2 shows the window for entering the scheduling parameters
that we defined in Chapter 7. These parameters include priority, capacity,
deadline, period, start time, and jitter. Cheddar will use the parameters to
model each task.
Figure 9.2 Characterizing the parameters of a task with Cheddar
Figure 9.3 shows the results of running the feasibility tests detailed in
Chapter 7. The results of both the processor utilization factor and worst
case response time feasibility tests are displayed. The feasibility tests
provide a quick answer based on the recursive computation developed in
Chapter 7. The result is a yes or no, together with a theoretical upper
bound on the response time.
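These two tests can be sketched in a few lines of Ada. The three-task parameters below are illustrative placeholders, not the actual values of the car application, and the program names are our own.

```ada
with Ada.Text_IO;                       use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure Feasibility_Sketch is
   subtype Task_Index is Integer range 1 .. 3;
   --  Hypothetical task set, ordered by decreasing priority
   Period   : constant array (Task_Index) of Positive := (10, 20, 50);
   Capacity : constant array (Task_Index) of Positive := (2, 5, 12);

   N     : constant Float := Float (Task_Index'Last);
   U     : Float := 0.0;
   Bound : Float;
begin
   --  Processor utilization factor test: U <= n * (2**(1/n) - 1)
   for I in Task_Index loop
      U := U + Float (Capacity (I)) / Float (Period (I));
   end loop;
   Bound := N * (2.0 ** (1.0 / N) - 1.0);
   Put_Line ("U =" & Float'Image (U) & "  bound =" & Float'Image (Bound));

   --  Worst case response time, by the recursive computation:
   --  R := C(i) + sum over higher-priority J of ceil (R / T(J)) * C(J),
   --  iterated to a fixed point (it converges for a schedulable set)
   for I in Task_Index loop
      declare
         R, Previous : Natural := Capacity (I);
      begin
         loop
            Previous := R;
            R := Capacity (I);
            for J in Task_Index'First .. I - 1 loop
               --  Integer ceiling division: ceil (Previous / T(J))
               R := R + ((Previous + Period (J) - 1) / Period (J))
                        * Capacity (J);
            end loop;
            exit when R = Previous;
         end loop;
         Put_Line ("Task" & Integer'Image (I)
                   & " response time =" & Integer'Image (R));
      end;
   end loop;
end Feasibility_Sketch;
```

A task is then declared schedulable when each computed response time is no greater than the corresponding deadline, which is the check Cheddar reports in Figure 9.3.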
Figure 9.4 shows a simulation of the car application over the hyper-period
of the tasks. Only a portion of the hyper-period is visible in this figure.
As there is no interaction between tasks and no external source of events
Figure 9.3 Output of the Cheddar tool: feasibility tests
(for example the input of a driver) in this simple model, the simulation and
feasibility tests return the same results.
This simulation performs a step-by-step execution of the system, as we
did manually to fill in Figure 7.6 on page 271. At each simulated instant,
the simulator computes the task to be dispatched based on the priority and
running-state information of each task. This simulation allows us to see
accurately how the system behaves. It tells us how many context switches
occur and, if there are shared resources, when mutexes (locks) are seized.
This information provides a good estimate of the resource usage in the
system. We can feed the overhead incurred by the context switches back
into the processor utilization calculations and refine the priority ceiling
values of the shared resources.
While schedulability analysis tools may provide easy answers, one should
be careful when using them: they all require accurate modeling to avoid
misinterpreting the actual behavior of the application. Poor modeling
can lead to false, non-representative results. Real-time scheduling theory
Figure 9.4 Output of the Cheddar tool: simulation
is not always easy to apply (Singhoff et al., 2009). Remember that each
feasibility test requires that the target real-time system fulfill several
assumptions (e.g. critical instant, requests on deadlines, etc.). Thus, it
may be difficult to choose the relevant feasibility test. The current
generation of modeling languages and their software engineering tools
provide very limited support for the automatic application of real-time
scheduling theory. Still, schedulability analysis tools play an increasing
role in software engineering and are used for ever more complex designs.
9.3.4 Computation of the worst case execution time (WCET)
All of the techniques discussed in Chapter 7 for determining whether or not
a set of tasks will meet their deadlines require an estimate of the capacity
of each task. The capacity of a periodic task is the amount of execution
time required for one iteration. Although the actual amount of time a task
requires for a particular iteration can vary, we always use the worst case
execution time in our scheduling analysis to ensure that deadlines are met
even in the worst cases.
There are two basic approaches for determining WCET:
Measurement techniques typically involve inserting measurement code
into the tasks, running them, and recording the time.
Static Analysis techniques are based on models of the processor on
which we run our application. Essentially, we use the model to determine
the amount of time required to execute each machine language instruction.
We then sum the times for the instructions in our task.
Both approaches require tedious and error-prone work. Tools do exist to
help compute WCET (see Wilhelm et al., 2008, for details). RapiTime™
is a tool for analyzing the WCET of Ada programs. It provides a hybrid
approach combining measurement and static analysis.
Both measurement and static analysis techniques require us to determine
the longest execution path in each task. Once we have determined that
path, we can instrument our code or calculate the time from our processor
model. Obviously, these paths must exist, but it may not be easy to find
them. Tools may help analyze source code or assembly code to find worst
case scenarios. Such determinations require a good understanding of Ada
semantics. Some constructs, such as pointers to subprograms, unbounded
loops, run-time checks, and exceptions, complicate the longest path
determination. We may decide to forbid such constructs to simplify our work.
In physics, the term observer effect (or Heisenberg effect) refers to changes
that the act of observation makes on the thing being observed. It is
possible that adding measurement code to our tasks may affect the time it
takes the code to execute.
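A minimal sketch of the measurement approach using Ada.Real_Time follows; the workload and iteration counts are placeholders. It runs the code under test many times and keeps the largest observation. Keep in mind that this yields the worst observed time, which is only a lower bound on the true WCET, and that the calls to Clock themselves add a small probe overhead, the observer effect just described.

```ada
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Measure_Sketch is

   --  Stand-in for the task body whose capacity we want to estimate
   procedure Work_Under_Test is
      Sum : Long_Integer := 0;
   begin
      for K in 1 .. 100_000 loop
         Sum := Sum + Long_Integer (K);
      end loop;
   end Work_Under_Test;

   Start, Stop : Time;
   Worst       : Time_Span := Time_Span_Zero;
begin
   for Run in 1 .. 1_000 loop
      Start := Clock;   --  probe: adds a little overhead itself
      Work_Under_Test;
      Stop := Clock;
      if Stop - Start > Worst then
         Worst := Stop - Start;
      end if;
   end loop;
   Put_Line ("Worst observed execution time:"
             & Duration'Image (To_Duration (Worst)) & " s");
end Measure_Sketch;
```

Industrial tools refine this idea by instrumenting individual basic blocks rather than whole task bodies, which is the measurement half of the hybrid approach mentioned above.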
Static analysis requires a good understanding of how source code is ex-
ecuted. We need to know how the Ada code is translated into machine
instructions for both simple constructs (like assignment, decision, and loop
statements) and abstract ones (like tasks and protected objects). We usually
require cooperation from the Ada compiler to obtain such information.
Modern processors present many challenges to static analysis of WCET.
Patterson and Hennessy (2008) discuss the architecture of processors in
depth. The common use of complex memory architectures is a major problem
in modeling a processor. By providing multiple levels of different-speed
memories, cache memory provides faster average access to data and
instructions. Different processors use different heuristics to locate items
in the memory hierarchy. Since the location of data and instructions
depends on past execution, all these heuristics have one property in
common: the introduction of non-determinism. A study by Bernat et al.
(2008) showed that
cache memory heuristics introduce a large dispersion (around 20%) around
a mean value. For hard real-time systems where determinism is crucial, it is
usually advisable to disable the cache, or to use it conservatively.
Furthermore, processors use different mechanisms to predict the next
instruction to execute. Again, this introduces variability in the execution
time. Hence, it is difficult to create the faithful model of the processor
necessary to statically compute how much time each statement actually
consumes.
The determination of WCET is complex, requiring the mastery of many
algorithmic, architectural, and software engineering techniques. But it is a
price we must pay to make sure that our planes, rockets, and trains never
miss their deadlines. From its beginnings, Ada was designed to make such
analysis as easy as possible.
Summary
The role of the Ada compiler is to map constructs, from simple arrays
to tasks and protected objects, onto elementary code (machine language)
that is run on the target processor.
The Ada compiler relies on a run-time system library to support some
of the more complex constructs such as tasking. It maps such constructs
onto calls to this library.
A well-designed run-time library maximizes portability to ease
maintainability. The design should separate target-independent code from
operating system and target-specific code.
We introduced some elements of the GNAT run-time. The GNAT run-time
separates concerns to maximize portability on a wide variety of platforms.
GNARL provides the target-independent portions of the run-time system
while GNULL provides the target-specic support.
We introduced three different run-times supported by GNAT. The mechanism
to support other operating systems is similar: isolate the operating-system-
specific code in GNULL.
By limiting the use of certain Ada constructs, the size and complexity of
the run-time system can be reduced.
ORK+ is an Ada Ravenscar run-time for SPARC platforms. Thanks to
the restrictions imposed by the Ravenscar profile, its architecture has been
highly optimized to reduce resource consumption, but also to enhance
analysis.
MaRTE OS is an Ada run-time for x86 platforms. It supports the Minimum
POSIX Profile, and thus provides a range of scheduling algorithms,
including EDF and application-defined scheduling.
RTEMS is a C RTOS that supports a large variety of CPUs. RTEMS
is highly configurable. However, the need to write a C adaptation layer
to configure the required components makes the construction of an Ada
program more complex.
A knowledge of the architecture of the Ada run-time and the supporting
operating system is often necessary to perform scheduling analysis of the
system. The Ada Reference Manual defines the set of metrics to perform
such analysis. These metrics are usually not provided with the compiler,
but tool vendors may provide support to compute them for a particular
platform.
By using stringent coding rules, you may align your source code with the
formal task set required to perform scheduling analysis, or with some other
modeling approach, to check that your system meets all its real-time
deadlines.
Different tools support a wide variety of techniques, from analytical
frameworks, to partial analysis of states through simulation, to complete
analysis of all states through model checking. The selection of one method
or another depends on the complexity of your system and the applicability
domain of each tool.
The determination of worst case execution time is an important step in the
analysis of the schedulability of a set of tasks. WCET may be determined
by measurement or static analysis.
Exercises
9.1 Locate the Mac OS definition for type Duration in package Standard
on page 338. This definition uses the attribute definition clause Small.
Look up this attribute definition clause in the Ada Reference Manual
and discuss its use in the definition of type Duration.
9.2 Explain why it is recommended to use fixed-point types for Duration.
9.3 In some cases, like RTEMS, the Ada run-time uses the POSIX
concurrency library instead of an available lower-level one. The latter may
be faster or use fewer resources. Explain why it is still reasonable for the
Ada run-time to use the less efficient POSIX library.
9.4 Download the GNAT compiler. The tool gnatls allows you to locate
the path to the run-time library using the -v flag. Locate these files,
and review the corresponding Ada code. Pay attention to the children
of System; they host the definition of protected objects, tasks, etc.
Try to isolate the GNARL and GNULL elements.
9.5 We claimed that the support of the Priority Ceiling Protocol is trivial
in the context of the Ravenscar profile for the ORK+ run-time. Explain
why.
9.6 What are the restrictions to support the Ada Real-Time Annex on a
GNU/Linux system? You may check the GNAT Reference Manual for
more details.
9.7 Download Cheddar or MAST and model the examples and exercises
from Chapter 7. What additional information would you need to per-
form more accurate analyses? Can you obtain any of that information
from the Ada Reference Manual?
9.8 Install ORK+ or MaRTE OS and implement an example that you
analyzed in the previous exercise.
9.9 Read the documentation for ORK+ or MaRTE OS. Evaluate the
complexity of implementing fixed-priority scheduling with Rate Monotonic
priority assignment or an Earliest Deadline First-based system.
9.10 Explain why using pointers to functions makes WCET analysis more
complex. Explain why limiting run-time checks is often desirable for
WCET analysis.
References
Amdahl, G. M. 1967. Validity of the single processor approach to achieving
large scale computing capabilities. Pages 483–485 of: AFIPS '67 (Spring):
Proceedings of the April 18–20, 1967, Spring Joint Computer Conference. ACM.
Anderson, T. E., Bershad, B. N., Lazowska, E. D., and Levy, H. M. 1992.
Scheduler activations: effective kernel support for the user-level management
of parallelism. ACM Transactions on Computer Systems (TOCS), 10(1), 53–79.
Atego. 2009. Artisan Studio Architect Enterprise Edition. White paper.
Audsley, A. N., Burns, A., Richardson, M., and Tindell, K. 1993. Applying new
scheduling theory to static priority pre-emptive scheduling. Software
Engineering Journal, 9(5), 284–292.
Baker, T. P. 1991. Stack-based scheduling for realtime processes. Journal of
Real Time Systems, 3(1), 67–99.
Baker, T. P., and Cirinei, M. 2006. A necessary and sometimes sufficient
condition for the feasibility of sets of sporadic hard-deadline tasks. Pages
178–190 of: Proceedings of the 27th IEEE International Real-Time Systems
Symposium. IEEE.
Baker, T. P., and Oh, D.-I. 1997. Ada bindings for C interfaces: lessons
learned from the florist implementation. Pages 13–22 of: Hardy, K., and
Briggs, J. S. (eds), Ada-Europe. Lecture Notes in Computer Science,
vol. 1251. Springer.
Barnes, J. 2003. High Integrity Software, The SPARK Approach to Safety and Se-
curity. Pearson Education Limited.
Barnes, J. 2006. Programming in Ada 2005. Pearson Education Limited.
Ben-Ari, M. 1982. Principles of Concurrent Programming. Prentice Hall Interna-
tional.
Ben-Ari, M. 2009. Ada for Software Engineers. 2nd edn. Springer-Verlag.
Bernat, G., Colin, A., Esteves, J., Garcia, G., Moreno, C., Holsti, N.,
Vardanega, T., and Hernek, M. 2008. Considerations on the LEON cache
effects on the timing analysis of onboard applications. In: Proceedings of
the Data Systems In Aerospace Conference (DASIA'08). ESA Publications.
Berthomieu, B., and Vernadat, F. 2006. Time Petri nets analysis with TINA.
In: Proceedings of the 3rd International Conference on the Quantitative
Evaluation of Systems (QEST 2006). IEEE Computer Society.
Blaha, M. R., and Rumbaugh, J. R. 2004. Object-Oriented Modeling and Design
with UML. 2nd edn. Pearson Prentice Hall.
Blair, M., Obenski, S., and Bridickas, P. 1992. Patriot Missile Defense:
Software Problem Led to System Failure at Dhahran, Saudi Arabia. Tech.
report GAO/IMTEC-92-26. United States General Accounting Office.
Briand, L. P., and Roy, D. M. 1999.Meeting Deadlines in Hard Real-Time Systems,
The Rate Monotonic Approach. IEEE Computer Society.
Burns, A., and Wellings, A. J. 1994. HRT-HOOD: a design method for hard
real-time systems. Real Time Systems Journal, 6(1), 73–114.
Burns, A., and Wellings, A. 1998. Concurrency in Ada. Cambridge University
Press.
Burns, A., and Wellings, A. 2007. Concurrent and Real-Time Programming in
Ada. Cambridge University Press.
Burns, A., and Wellings, A. 2009. Real-Time Systems and Programming
Languages. 4th edn. Addison Wesley.
Buttazzo, G. 2003. Rate monotonic vs. EDF: judgment day. In: Proceedings of
the 3rd ACM International Conference on Embedded Software. ACM.
Carey, R. W., Van Arsdall, P. J., and Woodruff, J. P. 2003. The national
ignition facility: early operational experience with a large Ada control
system. Ada Letters, 23(1), 11.
Carlow, G. D. 1984. Architecture of the Space Shuttle Primary Avionics Software
System. Communications of the ACM, 27(9).
Chadburn, F. 2010. Exploiting UML2 code generation with IBM Rational Rhapsody
in Ada. White paper.
Clarke, D., Lee, I., and Xie, H. 1995. VERSA: a tool for the specification
and analysis of resource-bound real-time systems. Journal of Computer and
Software Engineering, 3(2).
Cohen, N. 1996. Ada as a Second Language. McGraw-Hill Series in Computer Sci-
ence. McGraw-Hill.
Cottet, F., Delacroix, J., Kaiser, C., and Mammeri, Z. 2002. Scheduling in Real
Time Systems. John Wiley and Sons Ltd.
Coulouris, G., Dollimore, J., and Kindberg, T. 2005. Distributed Systems: Concepts
and Design. 4th edn. Addison-Wesley.
Dale, N., and McCormick, J. 2007. Ada Plus Data Structures: An Object-Oriented
Approach. 2nd edn. Jones and Bartlett.
Dale, N., Weems, C., and McCormick, J. 2000. Programming and Problem Solving
with Ada 95. 2nd edn. Jones and Bartlett.
de la Puente, J. A., Ruiz, J. F., and Zamorano, J. 2000. An open Ravenscar real-
time kernel for GNAT. In: Proceedings of the Reliable Software Technologies.
Ada-Europe 2000. Lecture Notes in Computer Science, 1845. Springer Verlag.
Debruyne, V., Simonot-Lion, F., and Trinquet, Y. 2005. EAST-ADL: an
architecture description language. Pages 181–195 of: Architecture
Description Languages, IFIP International Federation for Information
Processing. Springer Verlag.
DeRemer, F., and Kron, H. 1975. Programming-in-the-large versus
programming-in-the-small. Pages 114–121 of: Proceedings of the
International Conference on Reliable Software. ACM.
Dijkstra, E. 1968. Cooperating sequential processes. Pages 43–112 of:
Genuys, F. (ed), Programming Languages. Academic Press. Reprinted from the
original, Technological University, Eindhoven, The Netherlands, September
1965.
Dissaux, P. 2004. Using the AADL for mission critical software development.
In: 2nd European Congress ERTS, Embedded Real Time Software.
Dobing, B. 1999. The Ravenscar profile: experience report. ACM SIGAda Ada
Letters, 19(2), 28–32.
Douglass, B. P. 1998. Real-Time UML: Developing Efficient Objects for
Embedded Systems. 2nd edn. Addison Wesley Longman.
Duff, R., Hugues, J., Pautet, L., Quinot, T., and Tardieu, S. 2010. PolyORB
2.7.0w User's Guide. http://www.adacore.com/wp-content/files/auto_update/
polyorb-docs/polyorb_ug.html/.
Eisenstadt, M. 1997. My hairiest bug war stories. Communications of the ACM,
40(4), 30–37.
English, J. 2001. Ada 95: The Craft of Object-Oriented Programming.
http://www.it.bton.ac.uk/staff/je/adacraft/.
ESTEC, ESA. 2003. ECSS-E-50-12A SpaceWire Links, Nodes, Routers and
Networks. Tech. rept. European Space Agency.
Fernandez, J. L., and Marmol, G. 2008. An effective collaboration of a
modeling tool and a simulation and evaluation framework. In: 18th Annual
International Symposium, INCOSE 2008. Systems Engineering for the Planet.
INCOSE.
Fersman, E., Mokrushin, L., Pettersson, P., and Yi, W. 2006. Schedulability
analysis of fixed-priority systems using timed automata. Theoretical
Computer Science, 354(2), 301–317.
Flynn, M. 1974. Some computer organizations and their effectiveness. IEEE
Transactions on Computers, C-21, 948–960.
Frederic, T., Gerard, S., and Delatour, J. 2006. Towards an UML 2.0 profile
for real-time execution platform modeling. In: Proceedings of the 18th
Euromicro Conference on Real-Time Systems (ECRTS 06), work in progress
session. IEEE.
Gallmeister, B. O. 1995. POSIX.4: Programming for the Real World. O'Reilly
and Associates.
George, L., Rivierre, N., and Spuri, M. 1996. Preemptive and Non-Preemptive
Real-time Uni-processor Scheduling. Tech. rept. INRIA.
Giering, III, E. W., and Baker, T. P. 1994. The GNU Ada runtime library
(GNARL). Pages 97–107 of: WADAS '94: Proceedings of the Eleventh Annual
Washington Ada Symposium & Summer ACM SIGAda Meeting on Ada. ACM.
GNAT. 2010. GNAT Reference Manual. AdaCore.
Hamez, A., Hillah, L., Kordon, F., Linard, A., Paviot-Adet, E., Renault, X.,
and Thierry-Mieg, Y. 2006. New features in CPN-AMI 3: focusing on the
analysis of complex distributed systems. Pages 273–275 of: 6th International
Conference on Application of Concurrency to System Design (ACSD'06). IEEE
Computer Society.
Harbour, M. G., Moyano, J. M. D., Rivas, M. A., and Fernández, J. G. 1997.
Implementing robot controllers under real-time POSIX and Ada. Pages 57–64
of: IRTAW '97: Proceedings of the Eighth International Workshop on
Real-Time Ada. ACM.
Harbour, M. G., Garcia, J. J. G., Gutierrez, J. C. P., and Moyano, J. M. D.
2001. MAST: Modeling and Analysis Suite for Real Time Applications. Pages
125–134 of: Proceedings of the 13th Euromicro Conference on Real-Time
Systems (ECRTS 2001). IEEE Computer Society.
Heitmeyer, C., and Mandrioli, D. 1996. Formal Methods for Real Time Computing.
John Wiley and Sons Ltd.
362 References
Hugues, J., Pautet, L., and Kordon, F. 2003. Contributions to middleware architectures to prototype distribution infrastructures. Pages 124–131 of: Proceedings of the 14th IEEE International Workshop on Rapid System Prototyping (RSP'03). IEEE.
Joseph, M., and Pandya, P. 1986. Finding response time in a real-time system. Computer Journal, 29(5), 390–395.
Klein, M. H., Ralya, T., Pollak, B., Obenza, R., and Harbour, M. G. 1994. A Practitioner's Handbook for Real-Time Analysis. Kluwer Academic Publishers.
Kleinrock, L. 1975a. Queueing Systems: Computer Application. Wiley-Interscience.
Kleinrock, L. 1975b. Queueing Systems: Theory. Wiley-Interscience.
Knoll, K. T. 1993. Risk Management In Fly-By-Wire Systems. NASA STI/Recon.
NASA.
Koren, G., and Shasha, D. 1992. D-Over, an optimal on-line scheduling algorithm for overloaded real-time systems. Tech. rept. RT-0138. INRIA.
Laplante, P. 2004. Real-Time Systems Design and Analysis. Piscataway, New Jersey: IEEE Press.
Liu, C. L., and Layland, J. W. 1973. Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the Association for Computing Machinery, 20(1), 46–61.
Lyu, M. R. 1995. Handbook of Software Reliability Engineering. McGraw-Hill.
Mauro, J., and McDougall, R. 2001. Solaris Internals: Core Kernel Architecture. Prentice Hall.
McCormick, J. 1997. Forum letter. Communications of the ACM, 40(8), 30.
McHale, C. 2007. CORBA Explained Simply.
Nyberg, K. 2007. Multi-core + multi-tasking = multi-opportunity. Ada Letters, 27(3), 79–81.
OAR. 2004. RTEMS Ada User's Guide. OAR Corporation.
OMG. 2001. Ada Language to IDL Mapping v1.2. OMG.
OMG. 2004. The Common Object Request Broker: Architecture and Specification, Revision 3.0.3. OMG. OMG Technical Document formal/2004-03-12.
OMG. 2007. A UML Profile for MARTE, Beta 1. OMG Document Number: ptc/07-08-04.
Panwalkar, S. S., and Iskander, W. 1976. A survey of scheduling rules. Operations Research, 25(1), 45–61.
Patterson, D. A., and Hennessy, J. L. 2008. Computer Organization & Design: The
Hardware/Software Interface. 4th edn. Morgan Kaufmann.
Pautet, L., and Tardieu, S. 2000. GLADE: a framework for building large object-oriented real-time distributed systems. Page 244 of: ISORC '00: Proceedings of the Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing. IEEE Computer Society.
Pautet, L., Quinot, T., and Tardieu, S. 1999. CORBA & DSA: divorce or marriage? In: Proceedings of the 4th International Conference on Reliable Software Technologies. Springer Verlag.
Pease, M., Shostak, R., and Lamport, L. 1980. Reaching agreement in the presence of faults. Journal of the ACM, 27(2), 228–234.
Philippou, T. A., and Sokolsky, O. 2007. Process-algebraic analysis of timing and
schedulability properties. In: Handbook of Real-Time and Embedded Systems.
Chapman and Hall/CRC.
Riehle, R. 2003. Ada Distilled: An Introduction to Ada Programming for Experienced Computer Programmers. Tech. rept. AdaWorks Software Engineering.
Rivas, M. A., and Harbour, M. G. 2001. MaRTE OS: an Ada kernel for real-time embedded applications. In: Proceedings of the International Conference on Reliable Software Technologies, Ada-Europe-2001.
SAE. 2009. Architecture Analysis and Design Language (AADL) AS 5506. Tech.
rept. The Engineering Society For Advancing Mobility Land Sea Air and
Space, Aerospace Information Report, Version 2.0.
Schonberg, E., and Banner, B. 1994. The GNAT Project: A GNU-Ada 9X Compiler. Pages 48–57 of: TRI-Ada. ACM.
Sha, L., Rajkumar, R., and Lehoczky, J. P. 1990. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, 39(9), 1175–1185.
Shen, S., and Baker, T. P. 1999. A Linux kernel module implementation of restricted Ada tasking. ACM SIGAda Ada Letters, 19(2), 96–103.
Singhoff, F., Legrand, J., Nana, L., and Marcé, L. 2004. Cheddar: a flexible real-time scheduling framework. Pages 1–8 of: SIGAda '04: Proceedings of the 2004 Annual ACM SIGAda International Conference on Ada. ACM.
Singhoff, F., Plantec, A., Dissaux, P., and Legrand, J. 2009. Investigating the usability of real-time scheduling theory with the Cheddar project. Journal of Real-Time Systems, 43(3), 259–295.
Sprunt, B., Sha, L., and Lehoczky, J. P. 1989. Aperiodic task scheduling for hard real-time systems. Journal of Real-Time Systems, 1, 27–60.
Spuri, M. 1996. Analysis of Deadline Scheduled Real-Time Systems. Tech. rept. RR-2772. INRIA.
Stankovic, J. 1988. Misconceptions about real-time computing. IEEE Computer, 21(10), 10–19.
Taft, S. T., Duff, R. A., Brukardt, R. L., Ploedereder, E., and Leroy, P. 2006. Ada 2005 Reference Manual. Language and Standard Libraries. International Standard ISO/IEC 8652/1995(E) with Technical Corrigendum 1 and Amendment 1. Springer Verlag, LNCS 4348.
TimeSys. 2002. Using TimeWiz to Understand System Timing before you Build or
Buy. Tech. rept. TimeSys Corporation.
Tindell, K. W., and Clark, J. 1994. Holistic schedulability analysis for distributed hard real-time systems. Microprocessing and Microprogramming, 40(2–3), 117–134.
Tri-Pacific. 2003. Rapid-RMA: The Art of Modeling Real-Time Systems. Tech. rept. Tri-Pacific Software, Inc.
Turley, J. 1999. Embedded processors by the numbers. Embedded Systems Programming, 12(5), 99.
Vahalia, U. 1996. UNIX Internals: the New Frontiers. Prentice Hall.
Vardanega, T., Zamorano, J., and de la Puente, J. A. 2005. On the dynamic semantics and the timing behavior of Ravenscar kernels. Real-Time Systems, 29, 59–89.
Vinoski, S. 2004. An overview of middleware. Pages 35–51 of: Proceedings of the 9th International Conference on Reliable Software Technologies Ada-Europe 2004 (RST'04). Springer Verlag.
Wells, L. 2006. Performance analysis using CPN tools. Page 59 of: Valuetools '06: Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools. ACM.
Wikibooks. 2010a. Ada Programming. http://en.wikibooks.org/wiki/Ada_
Programming.
Wikibooks. 2010b. Ada Quality and Style Guide. http://en.wikibooks.org/wiki/Ada_Style_Guide.
Wilhelm, R., Engblom, J., Ermedahl, A., Holsti, N., Thesing, S., Whalley, D., Bernat, G., Ferdinand, C., Heckmann, R., Mitra, T., Mueller, F., Puaut, I., Puschner, P., Staschulat, J., and Stenström, P. 2008. The worst-case execution time problem: overview of methods and survey of tools. ACM Transactions on Embedded Computing Systems, 7(3), 36–53.
Wind River. 1997. VxWorks: Programmer's Guide. Wind River Systems.
Index
abstract data type, 69
accept statement, 166
activation time, see release time
actuator, 82
Ada Reference Manual, 23
aggregate
array, 188
record, 55
aliased object, 61
allocator, 58, 109
Amdahl's Law, 3
analog to digital converter, 15, 88
aperiodic task, 256
ARM, see Ada Reference Manual
array, 49
constrained, 52
unconstrained, 52
assembly language, 98
atomic action, 10
attribute, 45, 53, 76, 86, 121
first, 45, 53
image, 45, 46
last, 45, 53
length, 53
partition id, 220
pos, 86
pred, 45
range, 53
size, 97
succ, 45
val, 86
version, 219
attribute denition clause, 86
address, 87
component size, 97
size, 86, 87, 97
small, 356
barrier, 135, 147
Ben-Ari, 5, 142
block, 62
broadcast, 148
capacity, 256, 300, 314, 353
case statement, 27
casting, see type conversion
ceiling priority, 307
Cheddar, 349
child package, 75
class, 76
client-server, 166
communication, 7, 126
concatenation, 52
concurrent programming, 5
condition variables, 324
context, 253
context clause, 24
context switch, 15, 113, 254
conversion, see type conversion
CORBA, 221
CDR, 239
GIOP, 239
IDL, 221, 223
IIOP, 239
IOR, 234, 241
naming service, 237
ORB, 222
POA, 243
servant, 243
critical instant, 261, 305, 313
critical section, 8, 132, 140
critical task, 254
cumulative drift, 300
cyclic executive, 258
deadline, 12, 257
deadline monotonic, 258
deadlines on requests, 261
deadlock, 9
declarative part, 24
declarative region, 33
delay statement
absolute delay, 178
relative delay, 177
dependent task, 255
dereference, 59
device driver, 83, 88
digital to analog converter, 15
Dijkstra, 5
discriminant
record, 56
task, 109
dispatching policy, 309
distributed object, 198
distributed program, 11
distributed shared memory, 198
distributed system, 195
distributed systems annex, 202
distribution mechanisms, 197
distribution model, 197
DSA, see distributed systems annex
dynamic priority scheduler, 258, 272
earliest deadline first, 272, 345
EDF, see earliest deadline first
elaboration, 63, 110, 111, 219
embedded, 14
entry call, 135, 166
conditional, 181
timed, 180
entry queue, 141, 307
environment task, 113
epoch, 296
exception, 62, 63, 117
handler, 64
exhaustive simulations, 252
fairness, 11
feasibility tests, 252
FIFO queuing, 307
firm real-time, 13
fixed priority scheduler, 258
fixed priority scheduling, 260, 311
Florist, 318
for loop, 29, 45
fork, 316
generalization, 77
generic package, 39, 73
guard, 174
hard real-time, 13
harmonic set, 264
holistic analysis, 285
homograph, 33
hyper-period, 263
ICPP, see immediate ceiling priority protocol
IDL, 221, 223
if statement, 26
immediate ceiling priority protocol, 281, 306, 311
independent task, 254
inheritance, see generalization
interface, 223
interrupt, 155
interrupt handler, 156
jitter, 283
library unit, 65
livelock, 21
liveness, 11
local drift, 300
loop parameter, 29
loop statement, 28
for scheme, 29, 45
while scheme, 29
machine scalar, 92
marshalling, 240
MaRTE, 344
MAST, 349
memory mapped I/O, 86
message passing, 197
method, 76
middleware, 197
MIMD, 4
MISD, 4
model number, 38
module, 66, 223
monotonic time, 295
mutexes, 324
mutual exclusion, 7, 126
name precedence, 33
naming service, 237
natural, 49
non-preemptive scheduler, 259
null pointer, 58
null range, 30
object, 76
object adapter, 243
off-line scheduler, 258
OMG, 221
on-line scheduler, 258
optimal scheduler, 260
ORK+, 342
package, 66
child, 75
data abstraction, 69
definition, 66
generic, 39, 73
service, 68
parallel program, 2
parameter
association, 25
mode, 32
partition, 202
Patriot missile, 41
PCP, see priority ceiling protocol
PCS, 213
period, 256, 300
periodic task, 255
parameters, 256
PIP, see priority inheritance protocol
pointer, see type, access
polling, 94, 155, 179
polling server, 277
polymorphism, 77
port mapped I/O, 88
positive, 49
POSIX, 314
API, 318
post-protocol, 8, 132, 144, 278
pragma, 39
all calls remote, 219
asynchronous, 215
atomic, 153
atomic components, 155
attach handler, 160
inline, 39
interrupt handler, 161
interrupt priority, 160, 302
locking policy, 306
partition elaboration policy, 305
priority, 302
profile, 312
pure, 203
queuing policy, 307
remote call interface, 203
remote types, 208
shared passive, 210
storage size, 111
volatile, 88, 153
volatile components, 155
warnings, 233
pre-protocol, 8, 132, 144, 278
precedence constraint, 255
preemption, 253, 308
preemptive scheduler, 259
priority, 258
active, 300
base, 300
ceiling, 307
priority ceiling protocol, 280, 324, 343
priority inheritance protocol, 280
priority inversion, 278
priority queuing, 307
priority specific dispatching, 311
private operations, 149
process, 107, 119, 318, see task
processor utilization factor, 264
producer-consumer, 7, 135
profile, 312
protected entry, 135
protected function, 132
protected object, 130
locks, 132
protected procedure, 132
quality of service, 14, 239
RACW, see remote access-to-class-wide
RAS, see remote access-to-subprogram
rate monotonic, 258, 260
Ravenscar, 312
real-time, 12
release time, 255, 298
remote access-to-class-wide, 204
remote access-to-subprogram, 203
remote procedure call, 198
rendezvous, 166
representation clause
enumeration, 85
record, 91
requeue, 149, 161, 166
reserved word, 24
RPC, see remote procedure call
RTEMS, 346
run-time configuration, 333
run-time system, 119, 334
scenario, 9, 139
scheduler, 257
scope, 33
select statement, 171
accept alternative, 172, 308
delay alternative, 176
else part, 179
guarded alternative, 174
terminate alternative, 175
semaphore, 144
sensor, 82
servant, 222, 243
shadow register, 90
SIMD, 3
simulations, exhaustive, 252
SISD, 3
skeleton, 222
soft real-time, 13
sporadic task, 256
starvation, 11
state machine, 181
active object, 188
passive object, 183
state transition table, 185
static real-time system, 260
storage pool, 61
storage unit, 92
string, 54
stub, 222
subtype, 48
natural, 49
positive, 49
synchronization, 6, 134, 167
synchronous periodic task, 261
task, 107
aborting, 120
activation, 111
attributes, 121
completion, 113
context, 253
creation, 110
defining, 107
exceptions, 117
execution, 112
hierarchy
master-dependent, 114
parent-child, 114
identification, 120
implementation, 119
life cycle, 109
states, 109, 112, 114, 253
termination, 113
task control block, 254
task dispatching times, 309
task dispatching policy, 309
thread, 119, 315
transducer, 82
type
access, 57, 186
general, 61
pool specic, 61
array, 49
constrained, 52
unconstrained, 52
atomic, 36
composite, 36
definition of, 35
derived, 57, 76
discrete, 43
enumeration, 43
extension, 80
integer
modular, 47
signed, 46
private, 72, 74
protected, 130
real, 37
decimal, 42
fixed point, 39
floating point, 38
record, 55
scalar, 36
string, 54
tagged, 80
type conversion, 25, 47
unchecked, 96
typedef, 224
urgent task, 254
use clause, 25, 65, 67
use type clause, 67
valid schedule, 260
WCET, 22, 256, 300, 353
while loop, 29
with clause, 24, 65
worst case response time feasibility test, 265