EE8691 - Embedded Systems
ENGINEERING COLLEGES
Department of EEE
Prepared by
Sl. No.   Name of the Faculty        Designation   Affiliating College
1         Mr. V. Vignesh Arumugam    AP            FXEC
Verified by DLI, CLI and Approved by the Centralized Monitoring Team dated
SYLLABUS
TEXT BOOKS:
1. Rajkamal, "Embedded Systems - Architecture, Programming, Design", McGraw Hill, 2013.
2. Peckol, "Embedded System Design", John Wiley & Sons, 2010.
3. Lyla B. Das, "Embedded Systems - An Integrated Approach", Pearson, 2013.
REFERENCES:
1. Shibu K. V., "Introduction to Embedded Systems", Tata McGraw Hill, 2009.
2. Elecia White, "Making Embedded Systems", O'Reilly (SPD), 2011.
3. Tammy Noergaard, "Embedded Systems Architecture", Elsevier, 2006.
4. Han-Way Huang, "Embedded System Design Using C8051", Cengage Learning, 2009.
5. Rajib Mall, "Real-Time Systems: Theory and Practice", Pearson Education, 2007.
Web resources
http://nptel.iitk.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Embedded%20systems/New_index1.html
http://www.ecs.umass.edu/ece354/ECE354HomePageFiles/Labs_files/01bigPicture.pdf
http://patricklam.ca/ece155/lectures/
http://studentsblog100.blogspot.in/2013/02/embedded-systems-notes-anna-university.html
TEXT BOOKS:
T1. Rajkamal, "Embedded Systems - Architecture, Programming, Design", McGraw Hill, 2013.
T2. Peckol, "Embedded System Design", John Wiley & Sons, 2010.
REFERENCE BOOKS:
R1. Shibu K. V., "Introduction to Embedded Systems", Tata McGraw Hill, 2009.
R3. Tammy Noergaard, "Embedded Systems Architecture", Elsevier, 2006.
Sl. No.  UNIT  Topics                                                          No. of Hours  Cumulative Hours  Book No.
1        I     Introduction to Embedded Systems                                1             1                 T1
5        I     DMA                                                             1             5                 T1
10       II    Embedded Networking: Introduction                               1             10                R3
13       II    RS232 standard                                                  1             13                R3
14       II    RS422 - RS485                                                   1             14                T1
15       II    CAN Bus                                                         1             15                R3
16       II    Serial Peripheral Interface (SPI)                               1             16                R3
19       III   Embedded Product Development Life Cycle - objectives            1             19                R1
21       III   Modeling of EDLC                                                1             21                R1
26       III   Concurrent Model                                                1             26                T1
31       IV    Multitasking                                                    1             31                T1, R3
36       IV    Synchronization between processes - semaphores, mailbox, pipes  1             36                T1
39       IV    RT Linux                                                        1             39                T1, R1
INDEX

UNIT I
PART A: Questions 1-15
PART B:
1. Build process
2. Structural units
3. DMA & memory management
4. ICE & Timer/Counter
5. Selection of processor
6. Target hardware debugging

UNIT II
PART A: Questions 1-13
PART B:
1. SPI protocol and interface
2. I2C Bus
3. CAN Bus
4. RS232 & RS485
5. I/O ports & UART

UNIT III
PART A: Questions 1-10
PART B:
1. Objectives of EDLC
2. Phases of EDLC
3. Approaches of EDLC
4. Issues in hardware and software co-design
5. Computational models

UNIT IV
PART A: Questions 1-15
PART B:
1. Pre-emptive and non-pre-emptive scheduling
2. Interrupt routines
3. Process, threads and tasks
4. Multitasking
5. Semaphore, mailbox and pipes
6. Shared memory, message passing, priority inheritance and priority inversion
7. µC/OS-II, VxWorks and RT Linux

UNIT V
PART A: Questions 1-10
PART B:
1. Washing machine design
2. Automotive application
3. Design and interface of smart card system

University Questions
Cross Compiler: A compiler that runs on one computer platform and produces code for
another is called a cross-compiler. The use of a cross-compiler is one of the defining
features of embedded software development.
7. List the important considerations when selecting a processor.
Instruction set
Maximum bits in an operand
Clock frequency
Processor ability
8. Classify the processors in embedded system.
a. General purpose processor
Microprocessor
Microcontroller
Embedded processor
Digital signal processor
Media processor
b. Application specific system processor
c. Multiprocessor system using GPPs and ASSPs
d. GPP core or ASIP core integrated into either an ASIC or a VLSI circuit, or an FPGA core integrated with a processor unit in a VLSI chip.
9. What are the various types of memory in embedded systems?
RAM (internal and external)
ROM/PROM/EEPROM/Flash
Cache memory
10. What are the two essential units of a processor in an embedded system?
(i) Program Flow control Unit. (ii) Execution Unit
11. What are the different modes of DMA transfer? Which one is suitable for
embedded system?
Single transfer at a time and then release of the hold on the system bus.
Burst transfer at a time and then release of the hold on the system bus. A burst
may be of a few kB.
Bulk transfer and then release of the hold on the system bus after the transfer
is completed.
UNIT – 1
PART B (16 marks)
1. Explain the build process for embedded system.
The term build refers either to the process of converting source code files into stand-alone software artifacts that can be run on a computer, or to the result of doing so.
The process of converting the source-code representation of the embedded software into an executable binary image involves three distinct steps:
Each of the source files must be compiled or assembled into an object file.
All of the object files that result from the first step must be linked together to
produce a single object file called the re-locatable program.
Physical memory addresses must be assigned to the relative offsets within the
re- locatable program in a process called relocation.
The result of the final step is a file containing an executable binary image that is ready
to run on the embedded system.
Figure: The build process - on the host computer, Led.c is pre-processed and compiled/assembled (Led.asm, Led.o), the object files are linked into a re-locatable file, and the locator produces the executable image (Led.exe) that is placed on the target embedded system's processor.
Pre-processing: A C program has the following pre-processor structural elements (a minimal sketch follows the list):
1. Include directives for file inclusion.
2. Definitions of preprocessor global variables (global means throughout the program module).
3. Definitions of constants.
4. Declarations of global data types, type declarations and data structures, macros and functions.
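A minimal sketch of these elements is shown below; all of the names used here are invented for illustration and are not from the notes.

/* Sketch of the pre-processor structural elements listed above (names illustrative) */
#include <stdint.h>                      /* include directive for file inclusion */

#define NUM_LEDS   4                     /* definition of a constant */
#define LED_ON(n)  (1u << (n))           /* macro definition */

typedef struct { uint8_t port; uint8_t pin; } led_t;   /* global data type / structure */

extern led_t leds[NUM_LEDS];             /* declaration of global data */
void led_init(void);                     /* function declaration */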
code until this process. An object file needs to be linked with many C run-time library files, system functions, etc., to form an executable object file. For example, the livesimple.c program uses a printf statement, so printf.o must be linked in. The linker (ld) performs all these tasks.
Syntax: [root@host~]# ld -dynamic-linker /
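For reference, a minimal livesimple.c of the kind referred to above could look like the following sketch; the actual source is not reproduced in these notes, so the contents are assumed from the file name and the output described later.

/* livesimple.c - minimal example program (sketch) */
#include <stdio.h>          /* pulls in the declaration of printf */

int main(void)
{
    /* printf comes from the C run-time library, so printf.o (or libc)
       must be linked with livesimple.o to form the executable */
    printf("LIVESIMPLE!\n");
    return 0;
}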
Locating: The tool that performs the conversion from a relocatable program to an executable binary image is called a locator. In embedded systems, the step after linking is the use of a locator for the program codes and data, in place of a loader. The features of the locator are:
The programmer specifies to the locator the available RAM and ROM addresses in the target. The programmer has to define the available addresses at which to load the code, and a file is created for permanently locating the codes using a device programmer.
It uses the cross-assembler output and a memory allocation map and produces the locator program output file. It is the final step of the software design process for the embedded system.
The locator places the I/O tasks and hardware device-driver codes at their addresses without relocation, because the port and device addresses for these are fixed for a given system.
The locator program relocates the linked file and creates a file for permanent location of codes in a standard format.
Output:
Syntax: [root@host~]# ./livesimple
The output is
LIVESIMPLE!
In Windows, the executable file would be named livesimple.exe, but no .exe extension is needed in a Linux environment.
2. Discuss in detail the structural units in an embedded processor.
Figure: Processor connected to memory through the address, data and control buses.
3. Pre-fetch Control Unit (PFCU):
A unit that controls the fetching of data into the I-Cache and D-Cache in advance from the memory units. The instructions and data are delivered when needed by the processor's execution units, so the processor does not have to fetch data just before executing the instruction. The pre-fetching unit improves performance by fetching instructions and data in advance for processing.
4. Instruction Cache (I-Cache):
It sequentially stores, like an instruction queue, the instructions in FIFO mode. It lets the processor execute instructions at great speed using the PFCU, compared to external system memories, which are accessed at relatively much slower speeds.
5. Branch Target Cache (BT Cache):
It facilitates ready availability of the next instruction – set when a branch
instruction like jump, loop or call is encountered. Its fetch unit foresees a
branching instruction at the I-Cache.
6. Data Cache (D-Cache):
It stores the pre-fetched data from external memory. A data cache generally
holds both the key (address) and value (word) together at a location. It also
stores write through data when so configured. Write through data means data
from the execution unit that transfer through the cache to external memory
addresses.
7. Memory Management Unit (MMU):
It manages the memories such that the instructions and data are readily available
for processing.
8. System Register Set (SRS):
It is a set of registers used while processing the instructions of the supervisory
system program.
9. Floating Point Processing Unit (FLPU):
A unit separate from the ALU for floating-point processing, which is essential for processing mathematical functions fast in a microprocessor or DSP.
10. Floating Point Register set (FRS):
A register set dedicated for storing floating point numbers in a standard format
and used by FLPU for its data and stack.
11. Multiply and Accumulate Unit (MAC):
There is also a MAC unit for multiplying coefficients of a series and accumulating
these during computations.
protection increases the memory requirement for each task and also the
execution time of the code of the task.
The memory manager optimizes the memory needs and memory utilization. It manages the following:
1. Use of memory addresses space.
2. Mechanism to share memory space
3. Mechanism to restrict sharing of a given memory space.
4. Optimization of memory.
4. Write short note on (i) ICE (ii) Timer & Counter.
(i) IN-CIRCUIT EMULATOR (ICE):
In-circuit emulation is the use of a hardware device, the in-circuit emulator, to debug the software of an embedded system. It operates by using a processor with the additional ability to support debugging operations as well as to carry out the main functions of the system.
An ICE is a computer chip that is used to emulate a microprocessor, so that embedded system software can be tested by developers. It allows a programmer to change or debug the software in an embedded system. ICEs also give the option of debugging by single-stepping.
Working Principle:
The programmer uses the emulator to load programs into the embedded system,
run them, step through them slowly and view and change data used by the
system‘s software. It imitates the central processing unit of the embedded
system‘s computer.
Microchip offers three types of in-circuit emulators: MPLAB ICE 2000, MPLAB ICE 4000 and REAL ICE.
ICE consists of a small dual port pod. One port is a probe that plugs into the
microprocessor socket of the target system. The second port is interfaced to a
computer (or) workstation.
Limitations of ICE:
Availability and cost
On chip functions
Transparency
Features and capabilities of ICE are
1. Ability to map resources between target and host.
2. Ability to run and test code in real time without target hardware.
3. Ability to step or run programs from/to specified states or break points.
4. Ability to observe and modify microprocessor registers.
5. Ability to Observe and modify memory contents.
6. Ability to trace program execution using internal Logic Analyzers.
(ii) TIMERS AND COUNTERS:
A timer is a very common and useful peripheral. It is a device that counts clock pulses arriving at a regular interval (δT) at its input. The count is stored and incremented on each pulse, and the timer has output bits for the current count. The count multiplied by the interval δT gives the elapsed time:
Time = number of counted intervals × δT
A timer is a programmable device, i.e. the time period can be adjusted by writing specific bit patterns to some of the registers, called timer-control registers.
A counter is a more general version of the timer. It is a device that counts input events that may occur at regular or irregular intervals. The count gives the number of input events or pulses since it was last read.
A simple timer, shown in figure (a), has a 16-bit up-counter which increments with each input clock pulse. Thus the output value 'Cnt' represents the number of pulses since the counter was last reset to zero. An additional output 'Top' indicates when the terminal count has been reached; it may go high for a predetermined time as set by the programmable control word inside the timer unit. The count can also be loaded by the external program.
Figure (b) shows the structure of another timer, in which a multiplexer is used to choose between an internal clock and an external clock; a mode bit, when set or reset, decides the selection. With the internal clock (Clk) it behaves like the timer in figure (a); with the external count input (Cnt_in) it simply counts the number of occurrences. A small software model of the up-counter is sketched below.
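The following is a minimal host-side C sketch that models the 16-bit up-counter described above; the structure fields and the tick interval are illustrative, not a real device's registers.

/* Sketch: software model of the 16-bit up-counter timer (names illustrative) */
#include <stdint.h>
#include <stdio.h>

#define DELTA_T_US 10u                  /* assumed clock interval deltaT = 10 us */

typedef struct {
    uint16_t cnt;                       /* current count since last reset */
    uint8_t  top;                       /* set when the terminal count (wrap) is reached */
} timer16_t;

static void timer_tick(timer16_t *t)
{
    if (++t->cnt == 0)                  /* 16-bit wrap-around */
        t->top = 1;
}

int main(void)
{
    timer16_t t = {0, 0};
    for (long i = 0; i < 70000; i++)    /* feed 70000 input pulses */
        timer_tick(&t);
    /* elapsed time = count x deltaT (plus one full wrap if top is set) */
    printf("cnt=%u top=%u elapsed~%lu us\n", (unsigned)t.cnt, (unsigned)t.top,
           (unsigned long)(t.cnt + (t.top ? 65536UL : 0)) * DELTA_T_US);
    return 0;
}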
started, it is never reset or reloaded to another value. Example: the DS1307 chip is a real-time clock integrated circuit.
Consider the block diagram shown below. An Arduino UNO reads the time from the DS1307 and displays it on a 16x2 LCD. The DS1307 sends the time/date to the Arduino over two lines. A buzzer is also used for alarm indication; it beeps when the alarm is activated.
5. How to select the processor based upon its architecture and applications?
PROCESSOR:
A processor is the heart of the embedded system. For an embedded system
designer, knowledge of microprocessors and microcontrollers is a prerequisite.
PROCESSOR IN A SYSTEM:
A processor has 2 essential units.
1. Program flow Control Unit (CU)
2. Execution Unit (EU)
The CU includes a fetch unit for fetching instructions from the memory. The EU has
circuits that implement the instructions pertaining to data transfer operations and data
conversions from one form to another. The EU includes the Arithmetic and Logic Unit
(ALU) and also the circuits that execute instructions for a program control task, say halt,
interrupt or jump to another set of instructions. It can also execute instructions for a call
or branch to another program and for a call to a function.
A processor runs the cycles of fetch and execute. The instructions defined in the
processor instruction set are executed in the sequence that they are fetched from the
memory. An embedded system processor chip can be one of the following:
1. General Purpose Processor (GPP)
Microprocessor
Microcontroller
Embedded Processor
Digital Signal Processor (DSP)
Media Processor
2. Application Specific System Processor (ASSP) as additional processor.
2. The 8051 has only two timers, Timer 0 and Timer 1, but the 8052 has an additional timer, Timer 2, which is used for the development of real-time operating systems.
3. Some Microcontrollers support master slave mechanism with the features of
I2C and SPI supported in-built pins.
4. 8051 family member 83C152JA has two direct memory access (DMA)
channels on-chip.
5. For interfacing more number of devices, we need more pins in
microcontrollers to develop a particular application.
6. 80196KC has a PTS (Peripheral Transactions Server) that supports DMA
functions.
MICROPROCESSORS:
A microprocessor is a single VLSI chip that has a CPU and may have some other units, for example caches, a floating-point unit, pipelining, etc. RAM, ROM and I/O units are not present within the microprocessor chip itself. A microprocessor accepts binary data as input, processes the data according to the instructions given, and provides the results as output. The CPU has two basic functional units, the control unit and the Arithmetic Logic Unit (ALU). Examples: 8085, 8086.
MICROCONTROLLER:
MECHANISM IN MICROCONTROLLERS:
STEPS:
1. CPU gets the Instruction (MOV,MVI) from ROM.
2. CPU also gets the data (A=5, B=6) from RAM or from Peripheral Registers.
3. Now the CPU registers hold the data and send it to the ALU to perform the mathematical or logical operation.
4. Finally the results are returned to the CPU registers; in turn, the CPU registers send the data to RAM or to peripheral devices.
PROCESSOR ARCHITECTURES:
A processor is the logic circuitry that responds to and processes the basic
instructions that drive a system. The term processor has generally replaced the term
central processing unit. The processor in a personal computer or embedded in small
devices is often called a microprocessor. The presence of Microprocessor and Memory
within a single chip is called as Microcontroller. The various processor architectures are
1. Von Neumann Architecture
2. Harvard Architecture
3. Super Harvard Architecture
4. Layered Architecture
VON NEUMANN ARCHITECTURE
Single memory and a single bus for transferring data into and out of the CPU. Multiplication of two numbers requires at least three clock cycles.
HARVARD ARCHITECTURE
Separate memories for data and program, with separate buses for each. Both program instructions and data values can be fetched at the same time, so the operational speed is higher than in the Von Neumann architecture.
Examples:
The 8051 microcontroller is a Harvard architecture with a CISC instruction set.
The PIC microcontroller is a Harvard architecture with a RISC instruction set.
SUPER HARVARD ARCHITECTURE (DSP processors)
DSP algorithms spend most of their time in loops, so an instruction cache is added. DSP processors are capable of processing many high-frequency signals in real time. The significant feature of a DSP processor is the multiply-accumulate (MAC) operation,
Example: a ← a + (B × C)
With this feature the multiply-and-add is performed in a single clock cycle; a sketch of such a loop is given below. Applications run on these processors include fixed-point and floating-point operations, matrix operations, convolution, correlation, parallelism, etc.
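The following small C sketch shows a multiply-accumulate loop of the kind a ← a + (B × C), as used for example in an FIR filter tap sum; the coefficient and sample values are made up for illustration.

/* Sketch: multiply-accumulate (MAC) loop; on a DSP each iteration maps to one MAC instruction */
#include <stdio.h>

int main(void)
{
    const float b[4] = {0.1f, 0.2f, 0.3f, 0.4f};   /* coefficients */
    const float c[4] = {1.0f, 2.0f, 3.0f, 4.0f};   /* samples */
    float a = 0.0f;                                 /* accumulator */

    for (int i = 0; i < 4; i++)
        a += b[i] * c[i];                           /* the MAC operation a <- a + (B x C) */

    printf("a = %f\n", a);                          /* 0.1*1 + 0.2*2 + 0.3*3 + 0.4*4 = 3.0 */
    return 0;
}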
LCD: An LCD (Liquid Crystal Display) gives a convenient way of displaying debugging information. It is also useful for many different applications that need a text display output. It is a module that displays text characters; a common screen size is 2 rows of 16 characters. Most LCD modules use the HD44780 controller chip, which is why the LCD routines built into high-level languages usually work.
Advantages: a) Very quick update (about 40 µs with a 4-bit data bus)
b) Useful in many projects as the main display interface.
c) Simple to interface to an 8-bit port (only needs six of the eight bits).
Disadvantages: a) Uses up an 8-bit port.
b) Hardware is more expensive (for example, compared to a serial port chip).
LED: Using an LED as a microcontroller "alive" indicator. Even though blinking an LED on and off is a simple thing, it is extremely useful as a debugging tool, because you can tell at a glance whether the code you just downloaded is working. Sometimes an incorrectly set parameter in the programming software or compiler will stop the code dead.
The LED indicator gives a quick, easy-to-see health check for your microcontroller.
Pin Debugging: This is the simplest and crudest debugging method, using any available port pin. Simply set and reset this pin at any point in the code that you want to monitor. It has minimal impact on the code speed or size and can give you the following information (a sketch follows the list):
You can tell if the code is active.
It gives you the repetition rate
It gives you the routine time length (if you set the pin at the start and reset it at
the end).
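A minimal sketch of the set/reset pattern is shown below; PORT_REG and DEBUG_PIN are hypothetical names standing in for the actual memory-mapped port register and bit on a given microcontroller.

/* Sketch: pin debugging by toggling a spare port pin around the code of interest */
#include <stdint.h>

#define DEBUG_PIN   (1u << 3)                    /* assumed: bit 3 of the port */
static volatile uint8_t PORT_REG;                /* stand-in for a memory-mapped port register */

static void routine_under_test(void)
{
    /* ... code whose activity and duration we want to observe ... */
}

int main(void)
{
    for (int i = 0; i < 3; i++) {
        PORT_REG |= DEBUG_PIN;                   /* set pin at the start of the routine */
        routine_under_test();
        PORT_REG &= (uint8_t)~DEBUG_PIN;         /* reset pin at the end: pulse width = run time */
    }
    return 0;
}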
Logic Analyzer: This tool attaches to the pins you want to observe and captures the waveforms, displaying multiple traces on a single display. It uses a trigger module that can be set to activate on combinations of the input signals or on their duration, so you can trigger on specific patterns, on glitches, or both.
For non-microcontroller-based systems where the data and address buses are exposed, a logic analyzer can show the address and data organised into hex words, i.e. in readable form. Some can disassemble the instructions, showing what the processor was doing at the trigger point.
For a microcontroller-based system the logic analyzer can be useful in examining peripheral operation, for example for debugging the SPI or I2C buses; some logic analyzers also have built-in support for these protocols.
Another use of the logic analyzer is to capture output over a long period of time
depending on the memory capacity of the logic analyzer.
Easy implementation
Moderate speed (upto 100 kbps).
UNIT-2
PART B (16 marks)
In this configuration the CS and SCK lines are connected in parallel, and the SDO pin of each chip is connected to the SDI pin of the next.
Steps (a C sketch of this sequence is given after the list):
Master sends start condition (S) and controls the clock signal.
Master sends a unique 7-bit slave device address.
Master sends read / write bit (R/W) as 0 for slave receive and 1 for slave
transmit.
Wait for (or) send an acknowledge bit (A).
Send (or) receive the data byte (8 bits) (DATA).
Expect / send acknowledge bit (A)
Send the stop bit (P).
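The sketch below walks through the master write sequence listed above using stub bus primitives; i2c_start, i2c_send_byte and i2c_stop are illustrative stand-ins defined here, not a real driver API.

/* Sketch: I2C master write sequence with stub primitives (all names illustrative) */
#include <stdio.h>
#include <stdint.h>

static void i2c_start(void)            { printf("S (start)\n"); }
static void i2c_stop(void)             { printf("P (stop)\n"); }
static int  i2c_send_byte(uint8_t b)   { printf("byte 0x%02X, wait for ACK\n", b); return 0; /* 0 = ACK */ }

static int i2c_master_write(uint8_t addr7, const uint8_t *data, int len)
{
    i2c_start();                               /* master sends start and controls the clock */
    if (i2c_send_byte((uint8_t)(addr7 << 1)))  /* 7-bit address + R/W = 0 (slave receive) */
        return -1;
    for (int i = 0; i < len; i++)
        if (i2c_send_byte(data[i]))            /* each data byte must be acknowledged */
            return -1;
    i2c_stop();                                /* master sends the stop bit */
    return 0;
}

int main(void)
{
    const uint8_t msg[2] = {0x10, 0x20};
    return i2c_master_write(0x50, msg, 2);     /* e.g. write two bytes to slave address 0x50 */
}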
2. Remote Frame:
The remote frame is used by the receiving unit to request transmission of a
Message from the transmitting unit. It consists of 6 fields: Start of Frame (SOF)
Arbitration field, Control field, CRC field, ACK field and End of frame (EOF) field.
4. Error frame:
Error frames are generated and transmitted by the CAN hardware and are used
to indicate when an error has occurred during transmission. This frame consists of an
Error flag and an Error delimiter. The error flag is of two types:
1. Active error flag - six dominant bits
2. Passive error flag - six recessive bits
The error delimiter consists of eight recessive bits.
Signal                 Abbreviation    Pin numbers
Transmit data          TXD             2    3    2    3
Receive data           RXD             3    2    3    2
Request to send        RTS             4    7    8    5
Clear to send          CTS             5    8    7    4
Data set ready         DSR             6    6    4    20
Data carrier detect    DCD             8    1    1    8
Data terminal ready    DTR             20   4    6    6
Ring indicator         RI              22   9    9    22
Signal ground          SG              7    5    5    7
RS485:
RS-485 allows multiple devices (up to 32) to communicate at half-duplex or full-duplex at distances up to 1200 meters.
Both the length of the network and the number of nodes can easily be extended using a variety of repeaters.
Data is transmitted differentially on two wires twisted together, referred to as a
twisted pair.
A 485 network can be configured in two ways: two-wire or four-wire.
In a two-wire network the transmitter and receiver of each device are connected
to a twisted pair.
Four-wire networks have one master port with the transmitter connected to each
of the slave receivers on one twisted pair. The slave transmitters are all
connected to the master receiver on a second twisted pair.
In either configuration, devices are addressable, allowing each node to be
communicated to independently.
Only one device can drive the line at a time, so drivers must be put into a high-
impedance mode (tri-state) when they are not in use.
Two-wire 485 networks have the advantage of lower wiring costs and the ability for nodes to talk amongst themselves, but they are limited to half-duplex operation.
5. Write short Note on (i) Input and output ports (ii) UART
(i) INPUT AND OUTPUT PORTS:
Ports:
A port is a device used to receive bytes from external peripherals, to be read later by instructions executed on the processor, or to send bytes to an external peripheral, device or processor using instructions executed on the processor. A port connects to the processor through an address decoder and the system buses.
The processor uses the addresses of the port registers for programming the port functions or modes, for reading the port status and for writing or reading bytes.
Example:
Serial Peripheral Interface (SPI) in the 68HC11.
Each RXD (receive data) bit within a byte is received at fixed intervals, but the received bytes themselves are not synchronised.
The receiver does not get clock information along with the data bytes.
Bytes are received at variable intervals (or phase differences); asynchronous serial input is therefore also called UART input.
A 1-to-0 transition indicates the start of reception of a byte.
The time period for one byte is 10 T, which includes one start bit, 8 data bits and one stop bit; a worked example of the resulting byte rate is given below.
The peripheral saves the byte in a port register from where the microprocessor reads the byte.
Examples: keyboard input and modem input in a computer.
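The small calculation below illustrates the 10 T frame cost; the 9600 baud rate is an assumed example, not a value from the notes.

/* Sketch: effective byte rate of an asynchronous (UART) link with 10 bits per frame */
#include <stdio.h>

int main(void)
{
    const unsigned baud = 9600;              /* assumed example baud rate */
    const unsigned bits_per_frame = 10;      /* start + 8 data + stop */
    printf("bytes per second = %u\n", baud / bits_per_frame);            /* 960 */
    printf("time per byte    = %f ms\n", 1000.0 * bits_per_frame / baud); /* ~1.04 ms */
    return 0;
}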
UNIT-3
EMBEDDED FIRMWARE DEVELOPMENT ENVIRONMENT
PART – A( 2 marks)
1. What is EDLC?
Embedded Product Development Life Cycle (EDLC) is an analysis – Design –
Implementation based standard problem solving approach for embedded product
development. EDLC defines the interaction and activities among various groups of a
product development sector including project management, system design and
development, system testing, release management and quality assurance.
2. What is Model?
The life cycle of a product development is commonly referred to as a model, and a model defines the various phases involved in a product's life cycle. The embedded product life cycle model contains the phases: Need, Conceptualization, Analysis, Design, Development and Testing, Deployment, Support, Upgrades and Retirement/Disposal.
3. Define conceptualization phase.
Conceptualization phase is the phase dealing with product concept development. It
includes activities like product feasibility analysis, cost benefit analysis, product scoping
and planning for next phases.
4. Name three categories of product development.
The 3 categories are
New or custom product development
Product Re-engineering
Product maintenance
Data flow is a type of process network model. In data flow, a program is specified by a
directed graph. The nodes of the graph represent computational functions that map
input data into output data. Data is represented by a circle and data flow is represented
using arrows.
8. Define deployment phase.
The deployment phase deals with the launching of the product. Product deployment
notification, training plan execution, product installation, product post implementation
review, etc., are the activities performed during deployment phase.
9. Define product design space and development phase.
Design phase – It deals with the implementation aspects of the required functionalities
for the product
Development Phase – It transforms the design into a realizable product. The detailed
specifications generated during the design phase are translated into hardware and
firmware during the development phase.
10. What are the differences between data flow model and state machine model?
Both data flow and finite state machine are models of computation. The data flow
model of computation is used in signal processing design and modeling of DSP
algorithms. On the other hand FSMs have been developed to solve a different class of
problems, namely sequential control. FSMs are an appropriate modeling approach for
control-dominant applications. Mixing data flow with FSMs is a good solution for
representing a system which requires both signal processing and control. This integration is very useful in eliminating the drawbacks of each.
UNIT-3
PART-B(16 marks)
and activities among various groups of a product development sector including project
management, system design & development, system testing, release management &
quality assurance. EDLC standards are needed to design for the product development,
which provides the uniformity in development approaches.
OBJECTIVES OF EDLC:
The aim of any product development is the Marginal benefit. Generally, marginal
benefit is expressed as ―Return on Investment‖ (ROI). The investment for a
product development includes initial investment, manpower investment &
infrastructure investment etc.
A developed product needs to be acceptable to the end user; it has to meet the requirements of the end user in terms of quality, reliability and functionality.
EDLC helps in ensuring all these requirements through three objectives:
Ensure that high quality products are delivered to end user.
Risk minimization & defect prevention in product development through project
management
Maximize the productivity
Ensuring High quality for products:
The primary definition of quality in any embedded product development is return
on investment achieved by the product. In order to survive in market, quality is
the most important factor to be taken care of while developing the product.
Qualitative attributes depend on the budget of the product, because budget allocation is very important.
Budget allocation is done after analyzing the market, trends and requirements of the product, competition, etc.
EDLC must ensure that the development of the product has taken account of all
the quantitative & qualitative attributes of the embedded system.
Risk minimization & defect prevention through management:
Project management is essential in product development and deserves significant attention.
Project management adds an extra cost to the budget, but it is essential for ensuring that the development process is going in the right direction and that the schedules of the development activity are being met.
Projects which are complex and require timeliness should have a dedicated and skilled project management team; hence they are said to be highly bound to project management.
Project management is essential for predictability, coordination and risk
minimization
The time frame may be expressed in number of person days (PDS)
Predictability – Analyze the time to finish the product.
Coordination – Developers are needed to do the job
Risk management – 1. Backup of resources to overcome critical situation
2. Ensuring defective product is not developed
Increased Productivity:
Productivity is a measure of efficiency as well as Return on Investment. Different ways
to improve the productivity are:
Saving manpower effort will definitely result in increased productivity.
Use of automated tools wherever required.
Work which has been done for the previous product can be used, when there is a
presence of similarities between the previous and present product. This is called
as ―Re-usable effort‖.
Usage of resource persons with specific set of skills, which exactly matches the
requirements of the product. This reduces the time in training the resource.
Example: resource with expertise in zigbee wireless technology for developing a
wireless interface for the product. This kind of resource does not need training.
Need:
Any embedded product may evolve as an output of a need. Need may come from an
individual, from the public, or from a company (generally from an end user or client). The need initiates the concept proposal. The human resource management and funding agency reviews the concept proposal and provides the approval. Then it goes to a product
development team. The product development need can be visualized in any one of the
following three needs.
a) New or Custom Product Development:
The need for a new product which does not exist in the market or a product which acts
as a competitor to an existing product in the current market will lead to the development
of completely new product. Example: Various manufacturers act as competitors in
developing the mobile phones.
b) Product Re-engineering:
The process of making changes in an existing product design and launching it as a new
version is known as re-engineering a product. It is termed as product upgrade. Re-
engineering an existing product comes as a result of the following needs.
Change in business requirements
User interface enhancements
Technology upgrades
c) Product Maintenance:
The technical support provided to the end user for an existing product in the market is the "product maintenance" need. It has two categories: 1. Corrective maintenance deals
with making corrective actions following a failure or non-functioning. 2. Preventive
maintenance is the scheduled maintenance to avoid the failure or non-functioning of the
product.
Conceptualization:
This is the product concept development phase and it begins immediately after a
concept proposal is approved. This phase defines the scope of the concept, performs
cost benefit analysis and feasibility study and prepares project management and risk
management plans.
Design:
This deals with the entire design of the product taking the requirements into
consideration and focuses on how the functionalities can be delivered. The preliminary design covers the following:
Inputs and outputs are defined here.
The process of launching the fully functional model into the market is called deployment. It is also known as first customer shipping. The essential tasks performed during the
deployment phase are,
Notification of product deployment
Execution of training plan
Product installation
Product Post – Implementation Review
Support:
This deals with the operation and maintenance of the product. Support should be provided to the end user to fix bugs in the product. The various activities involved in
the support phase are:
Setup a dedicated support wing. Example: Customer Care.
Identification of bugs and Areas of improvement.
Upgrades:
It deals with
Releasing a new version of a product which already exists in the market.
Releasing major bug fixes (firmware upgradation).
Retirement/Disposal:
Everything in the world changes; the technology you feel is the most advanced and best today may not be the same tomorrow. For this reason, a product cannot sustain itself in the market for a long time. It has to be disposed of at the right time, before it causes losses. The disposal of the product is essential due to the following reasons:
1. Rapid technological advancements.
2. Increased user needs.
3. Mention and explain the approaches of EDLC?
MODELLING OF EDLC: (EDLC APPROACHES)
The various approaches in modelling the EDLC are:
Linear or water fall model:
In this model, each phase of the EDLC is executed in sequence and the flow is unidirectional, with the output of one phase serving as the input to the next phase. The feedback of each phase is available only locally and only after the phase is executed.
Review mechanisms are employed to ensure that the process flows in the right direction.
Bugs are not fixed immediately; they are postponed to the support phase.
Advantages:
Rich documentation
Easy project management
Good control over cost & schedule
Drawbacks:
It assumes that all analysis can be done without any design work.
Risk analysis is performed only once throughout the development.
Bugs are fixed only at the support phase.
The shortcomings of the prototype after each cycle are evaluated and fixed in the next cycle.
After the initial requirements analysis, a design is made and the development process is started. The prototype is then sent to the customer for evaluation, and the customer provides feedback to the developer for further improvement.
The developer then repeats the process with some additional features, and the final product is delivered to the outside world.
Advantages:
Feedback after each implementation and fine tuning is also possible.
By using the prototype model, the risk is spread across each proto development
cycle & it is under control
Drawbacks:
Deviations from expected cost and schedule due to requirements refinement.
Increased project management
Minimal documentation on each prototype may create problems in backward
prototype traceability
Increased configuration management activities
Spiral Model:
Spiral model is best suited for the development of complex embedded products &
situations where the requirements are changing from customer side
Risk evaluation in each stage helps in reducing risk
It is the combination of the linear and prototype models, giving the best possible risk-minimized EDLC model. The activities present in the four quadrants of the spiral model are:
o Determine objectives, alternatives, constraints
o Evaluate alternatives, identity & resolve risks
o Develop & test
o Plan
4. What are the fundamental issues in Hardware-Software co-design?
Co design
The meeting of system-level objectives by exploiting the trade-offs between hardware
and software in a system through their concurrent design
Key concepts
o Concurrent: hardware and software developed at the same time on parallel
paths
o Integrated: interaction between hardware and software developments to
produce designs that meet performance criteria and functional specifications
Fundamental issues:
Some of the issues in hardware-software co-design are:
Selecting the model:
A model is a formal system consisting of objects & composition rules. It is hard to
make a decision on which model should be followed in a particular system design
Models are used for capturing & describing the system characteristics
Designers switch between a variety of models, because the objective varies with
each phase
Selecting the architecture:
The architecture specifies how the system is going to be implemented, in terms of the number and types of different components and the interconnections among them. Some architectures fall into the application-specific architecture class, while others fall into either the general-purpose architecture class or the parallel processing class. The commonly used architectures in system design are:
1. The controller architecture - implements the finite state machine model using a state register that holds the present state and combinational circuits that generate the next state and the output.
2. The data path architecture – suitable for implementing the data flow graph model.
A datapath represents a channel between the input and output. The datapath contains
registers, counters, memories & ports. Ports connect the datapath to multiple buses.
3. The finite state machine datapath – this architecture combines the controller
architecture with datapath architecture. The controller generates the control input,
whereas the datapath processes the data.
4. The complex instruction set computing (CISC) – this architecture uses an
instruction set for solving complex operations. The use of a single complex instruction in
place of multiple simple instructions greatly reduces the program memory access &
program memory size requirement. On the other hand, Reduced Instruction Set
Computing (RISC) architecture uses the multiple RISC instructions to perform a
complex operation. RISC architecture supports extensive pipelining
5. The Very Long Instruction Word (VLIW) – this architecture implements functional
units in the datapath
6. Parallel processing architecture – implements multiple concurrent processing
elements and each processing element may associate a datapath containing register &
local memory. Single Instruction Multiple Data (SIMD) & Multiple Instruction Multiple
Data (MIMD) architectures are examples for parallel processing architecture. SIMD –
eg: Reconfigurable process, MIMD – eg: Multiprocessor systems
Selecting the language:
A programming language captures a 'computational model' and maps it into architecture. A model can be captured using multiple programming languages like C, C++, C#, Java, etc. for software implementations, and languages like VHDL, SystemC, Verilog, etc. for hardware implementations. C++ is a good language for capturing an object-oriented model.
Partitioning system requirements into hardware and software:
Various hardware-software trade-offs are used for making a decision on the hardware-software partitioning.
5. Discuss about the various computational models in embedded design.
COMPUTATIONAL MODELS IN EMBEDDED DESIGN
The commonly used computational models in embedded system design are
(i) Data flow graph (DFG) model:
This model translates the data-processing requirements into a data flow graph. It is a data model which emphasises the data and the operations on the data that transform the input data into output data. In the DFG model, data is represented by a circle and data flow is represented using arrows; an inward arrow to a process denotes input data and an outward arrow denotes output data. Data-driven embedded systems are modeled using DFGs.
A DFG model is said to be acyclic (ADFG) when it does not contain multiple input values and multiple output values, and non-acyclic when an output is fed back to the input.
(ii) State machine model:
The state machine model describes the system behaviour with states, events, actions and transitions. A state represents the current situation. An event is an input to the state and acts as the stimulus for a state transition. The activity carried out in the state machine is the action. This model is used for event-driven embedded systems, e.g. control and industrial applications.
A Finite State Machine (FSM) model is one in which the number of states is finite.
Consider the FSM model for an automatic seat belt warning system as an example; a switch-based sketch of it is given below.
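The following is a minimal switch-based C sketch of such a seat belt warning state machine; the state names, event names and transitions are illustrative assumptions, not the exact model from the textbook.

/* Sketch: switch-based FSM for a seat belt warning system (names/transitions assumed) */
#include <stdio.h>

typedef enum { WAIT, ALARM_ON, ALARM_OFF } state_t;
typedef enum { EV_IGNITION_ON, EV_BELT_ON, EV_TIMEOUT } event_t;

static state_t step(state_t s, event_t e)
{
    switch (s) {
    case WAIT:
        if (e == EV_IGNITION_ON) { printf("alarm on\n");  return ALARM_ON; }
        break;
    case ALARM_ON:
        if (e == EV_BELT_ON || e == EV_TIMEOUT) { printf("alarm off\n"); return ALARM_OFF; }
        break;
    case ALARM_OFF:
        break;
    }
    return s;                        /* no transition for this event */
}

int main(void)
{
    state_t s = WAIT;
    s = step(s, EV_IGNITION_ON);     /* ignition turned on -> alarm starts */
    s = step(s, EV_BELT_ON);         /* belt fastened (or timeout) -> alarm stops */
    return 0;
}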
Motor control software design is another example for state machine model.
      SetTimer(3);
      Start_Alarm();
      while ((check_seat_belt() == OFF) && (check_ignition() == OFF) && (Timer_Expire() == NO))
          ;
      Stop_Alarm();
    }
  }
}
Flow chart Approach:
Figure: Concurrent Processing program model for seat belt warning system.
The concurrent processing model is commonly used for the modeling of real-time systems.
(v) Object – oriented model:
It is an object based model
It splits the complex software requirement into well defined pieces called objects
It brings re-usability, maintainability & productivity in system design
Each object is characterized by a set of unique behavior & state. A class is an
abstract of a set of objects.
A class represents the state of an object through member variables and object
behavior through member functions.
The member variables can be public, private or protected.
Private members are accessible only within the class; public members are accessible both within the class and outside it.
The concepts of object and class bring abstraction, hiding and protection.
UNIT 4
RTOS BASED EMBEDDED SYSTEM DESIGN
Part- A ( 2 marks)
1. Define process.
Process is a computational unit that processes on a CPU under the control of a
scheduling kernel of an OS. It has a process structure, called Process control block. A
process defines a sequentially executing program and its state.
2. What are the states of a process?
a. Running b. Ready c. Waiting
3. What is a thread?
Thread is a concept in Java and UNIX and it is a light weight sub process or
process in an application program. It is controlled by the OS kernel. It has a process
structure, called a thread stack, in memory. It has a unique ID. Its states in the system are: starting, running, blocked and finished.
4. Define scheduling.
Scheduling is the process of selecting which process has the right to use the processor at a given time.
5. What are the types of scheduling?
1. Time division multiple access scheduling. 2. Round robin scheduling.
6. Define round robin scheduling and priority scheduling
Round robin scheduling: This type of scheduling also employs the hyper period as an
interval. The processes are run in the given order.
Priority scheduling: A simple scheduler maintains a priority queue of processes that
are in the run able state.
7. Give the different styles of inter process communication.
1. Shared memory. 2. Message passing.
8. Differentiate pre-emptive and non pre-emptive multitasking.
Preemptive multitasking differs from non-preemptive multitasking in that the operating
system can take control of the processor without the task‘s cooperation
9. What is meant by PCB?
'Process Control Block' is abbreviated as PCB. The PCB is a data structure which contains all the information and components regarding the process.
10. Define Semaphore and mutex.
A semaphore provides a mechanism to let a task wait until another finishes. It is a way of synchronizing concurrent processing operations. When a semaphore is taken by a task, that task has access to the necessary resources; when it is given (released), the resources are unlocked. A mutex is a semaphore that, at any instant, gives two tasks mutually exclusive access to resources.
11. Define priority inversion & priority inheritance.
Priority inversion: A problem in which a low-priority task holding a resource inadvertently does not release it for a higher-priority task, keeping the higher-priority task waiting, is called priority inversion.
Priority inheritance: Priority inversion problems are eliminated by a method called priority inheritance. The priority of the process holding the resource lock is raised to the maximum priority of any process waiting for that resource. This is the programming methodology of priority inheritance; a sketch using POSIX threads is given below.
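The following C sketch shows one way to request the priority-inheritance protocol, using POSIX threads rather than a specific RTOS API; it assumes a system that provides the POSIX priority-inheritance option and omits error handling. Compile with -pthread.

/* Sketch: mutex with the priority-inheritance protocol (POSIX threads analogy) */
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutex_t m;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* a low-priority task that holds m temporarily inherits the priority of any
       higher-priority task blocked on m, avoiding unbounded priority inversion */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    puts("resource locked with priority inheritance enabled");
    pthread_mutex_unlock(&m);

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}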
12. What is RTOS?
An RTOS is an OS for response time controlled and event controlled processes.
RTOS is an OS for embedded systems, as these have real time programming issues to
solve.
13. What are the real time system level functions in UC/OS II? State some.
1. Initiating the OS before starting the use of the RTOS functions.
2. Starting the use of RTOS multi-tasking functions and running the states.
3. Starting the use of RTOS system clock.
14. Name any two mailbox related functions.
a. OS_EVENT *OSMboxCreate(void *mboxMsg)
b. void *OSMboxAccept(OS_EVENT *mboxMsg)
15. Name any two queue related functions for the inter-task communications.
a. OS_EVENT *OSQCreate(void **QTop, unsigned byte qSize)
b. unsigned byte OSQPostFront(OS_EVENT *QMsgPointer, void *qmsg)
UNIT – 4
PART B(16 marks)
1. Discuss deeply about the pre-emptive & non –pre-emptive scheduling with
suitable diagrams.
NON – PREEMPTIVE SCHEDULING:
Non- Preemptive scheduling is employed in non-preemptive multitasking systems. In
this scheduling type, the currently executing task/process is allowed to run until it
terminates or enters the wait state waiting for an I/O or system resource. The various
types of non-preemptive scheduling algorithms are,
1. First Come First Served (FCFS) Scheduling:
Advantages:
1. Better for long process
2. Simple method
3. No starvation
Disadvantages:
1. Convoy effect occurs. Even very small process should wait for its turn to come to
utilize the CPU. Short process behind long process results in lower CPU utilization.
2. Throughput is not emphasized.
2. Shortest Job First Scheduling (SJF):
This algorithm associates with each process the length of its next CPU burst. Shortest job first scheduling is also called shortest process next (SPN). The process with the shortest expected processing time is selected for execution from among the available processes in the ready queue. Thus, a short process will jump to the head of the queue over long jobs.
If the next CPU bursts of two processes are the same, then FCFS scheduling is used to break the tie. The SJF scheduling algorithm is provably optimal: it gives the minimum average waiting time for a given set of processes. However, it cannot be implemented at the level of short-term CPU scheduling, since there is no way of knowing the length of the next CPU burst. SJF can be preemptive or non-preemptive. A worked example comparing FCFS and SJF is given after the advantages and disadvantages below.
A preemptive SJF algorithm will preempt the currently executing process if the next CPU burst of a newly arrived process is shorter than what is left of the currently executing process.
A non-preemptive SJF algorithm will allow the currently running process to finish.
Preemptive SJF scheduling is sometimes called shortest remaining time first algorithm.
Advantages:
1. It gives superior turnaround-time performance, because a short job is given immediate preference over a longer running job.
2. Throughput is high.
Disadvantages:
1. Elapsed time must be recorded, which results in additional overhead on the processor.
2. Starvation may be possible for the longer processes.
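As a worked example of the comparison above, the small C program below computes the average waiting time for three processes with CPU bursts of 24, 3 and 3 time units under FCFS order and under SJF order; the burst values are chosen only for illustration.

/* Sketch: average waiting time under FCFS arrival order vs SJF order */
#include <stdio.h>

static double avg_wait(const int burst[], int n)
{
    int wait = 0, finished = 0;            /* time consumed by earlier processes */
    for (int i = 0; i < n; i++) {
        wait += finished;                  /* this process waits for all earlier ones */
        finished += burst[i];
    }
    return (double)wait / n;
}

int main(void)
{
    const int fcfs[3] = {24, 3, 3};        /* order of arrival */
    const int sjf[3]  = {3, 3, 24};        /* shortest burst first */
    printf("FCFS average wait = %.2f\n", avg_wait(fcfs, 3));  /* (0+24+27)/3 = 17.00 */
    printf("SJF  average wait = %.2f\n", avg_wait(sjf, 3));   /* (0+3+6)/3  =  3.00 */
    return 0;
}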
PRE-EMPTIVE SCHEDULING:
In preemptive mode, the currently running process may be interrupted and forced to release the CPU on certain events, such as a clock interrupt, an I/O interrupt or a system call; it is then moved to the ready state by the OS.
When a new process arrives or an interrupt occurs, preemptive policies may incur greater overhead than non-preemptive ones, but they may provide better results.
It is desirable to maximize CPU utilization and throughput and to minimize
turnaround time, waiting time and response time.
The various types of pre-emptive scheduling are
1. Priority – Based Scheduling:
Each process is assigned a priority. The ready list contains an entry for each
process ordered by its priority. The process at the beginning of the list (highest priority)
is picked first.
A variation of this scheme allows preemption of the current process when a
higher priority process arrives.
Another variation of the policy adds an aging scheme, whereby the priority of a process increases the longer it remains in the ready queue. Hence every process will eventually execute to completion.
If a process of equal priority is ready, the CPU is allocated to it after the completion of the presently running process, even if another process of equal priority arrives in the meantime.
Advantage:
Very good response for the highest priority process over non-pre-emptive version
of it.
Disadvantage:
Starvation may be possible for the lowest priority processes.
A hardware source calls an ISR directly. The ISR just sends an ISR-enter message to the RTOS; this message informs the RTOS that an ISR has taken control of the CPU (2).
This case involves two functions, the ISR and the OS functions, in two memory blocks.
ISR code can send into a mailbox or message queue(3), but the task
waiting for a mailbox or message queue does not start before the return from
the ISR (4).
When ISR finishes, it sends Exit message to OS.
On return from ISR by retrieving saved context, the RTOS later on returns to the
interrupted process or reschedules the process.
RTOS action depends on the event messages, whether the task waiting for the
event messages from the ISR is a task of higher priority than the interrupted task on the
interrupt.
The special ISR semaphore used in this case is OSISRSemPost ( ) which
executes the ISR. OS ensures that OSISRSemPost is returned after any system call
from the ISR.
2. RTOS first interrupting on an interrupt, then RTOS calling the
corresponding ISR:
On interrupt of a task, say, Nth task, the RTOS first gets itself the hardware
source call (1) and initiates the corresponding ISR after saving the present processor
status (2).
Then the ISR (3) during execution then can post one or more outputs (4) for the
events and messages into the mail boxes or queues.
When an interrupt from source k occurs (1), the OS finishes the critical code up to the pre-emption point and calls the ISR routine for interrupt k, called ISR k (3), after saving the context of the previous task N onto a stack (2).
The ISR during execution can send one or more outputs for the events and
messages into the mailboxes or queues for the ISTs (4). The IST executes the device
and platform independent code.
The ISR just before the end enables further pre-emption from the same or other
hardware sources (5). The ISR can post messages into the FIFO for the ISTs after
recognizing the interrupt source and its priority. The ISTs in the FIFO that have received the messages from the ISR execute (6) as per their priorities on return (5) from the ISR.
The ISR has the highest priority and preempts all pending ISTs and tasks, when
no ISR or IST is pending execution in the FIFO, the interrupted task runs on return (7).
i) PROCESS
Defn: A process is a computational unit that executes on a CPU and whose state changes under the control of the kernel of an OS. It has a state which, at an instance, is defined by the process status (running, blocked or finished), the process structure - its data, objects and resources - and the process control block.
A process runs on scheduling by OS (kernel) which gives the control of CPU to
the process. Process runs instructions and the continuous changes of its state take
place as the Program counter changes.
Fig: Processes
Process control block
The PCB is a data structure holding the information that the OS uses to control the process state. The PCB is stored in protected memory addresses at the kernel. The PCB consists of the following information about the process state.
1. Process ID,process priority,parent process,child process and address to the next
process PCB which will run next.
2. Allocated program memory address blocks in physical memory and in secondary
memory for the process codes.
3. Allocated process-specific data address blocks.
4. Allocated process heap addresses.
5. Allocated process stack addresses for the functions called during running of the
process.
6. Allocated addresses of the CPU registers.
7. Process-state signal mask.
8. Signal dispatch table.
9. OS-allocated resource descriptors.
10. Security restrictions and permissions.
ii) THREAD
Application program can be said to consist of a number of threads or a number of
processes and threads
1. A thread consists of a sequentially executable program (code) under state control by an OS.
2. The state information of a thread is represented by the thread state (started, running, blocked or finished), the thread structure - its data, objects and a subset of the process resources - and the thread stack.
3. A thread is a light weight entity
Defn: A thread is a process or sub process within a process that has its own PC, its
own SP and stack ,its own priority and its own variables that load into the processor
registers on context switching and is processed concurrently along with other threads.
Fig :Threads
A multiprocessing OS runs more than one process. When a process consists of
multiple threads, it is called multithreaded process.
A thread can be considered as a daughter process.
A thread defines a minimum unit of a multithreaded process that an OS
schedules onto the CPU and allocates the other system resources.
Different threads of a process may share a common process structure.
Multiple threads can share the data of the process.
Thread is a concept used in Java or Unix.
Thread is a process controlled entity
iii) TASKS
Task is the term used for the process in the RTOSes for the embedded systems. A task
is similar to a process or thread in an OS.
Defn: A task is an embedded program computational unit that runs on a CPU under the state control of the kernel of an OS. It has a state which, at an instance, is defined by its status (running, blocked or finished), its structure - its data, objects and resources - and its task control block.
Fig: Tasks
A task consists of a sequentially executable program under a state-control by an
OS.
The state information of a task is represented by the task state (running, blocked
or finished),structure – its data, objects and resources and task control block.
Embedded software for an application may consist of a number of tasks .
Each task is independent in that it takes control of the CPU as scheduled by the
scheduler at the OS.
A task is an independent process
Figure: Tasks and threads - each task (and each thread within a task) has its own program code, its own registers and its own stack, all managed by the OS.
Figure: Interleaving of tasks - the OS switches the CPU among process 1, process 2, process 3 and process 4 over time.
Task hierarchy
OS Initial Task
Tasks are structured as a hierarchy of parent and child tasks and when an
embedded kernel starts up only one task exists.
All tasks create their child task through system calls.
The OS gains control and creates the Task Control Block (TCB)
Memory is allocated for the new child task ,its TCB and the code to be executed
by the child task.
After the task is set up to run, the system call returns and the OS releases control back to the main program (a POSIX-thread analogy is sketched below).
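The C sketch below shows a parent creating a child task, using POSIX threads as an analogy for the embedded kernel's task-creation system call (it is not the RTOS's own API). Compile with -pthread.

/* Sketch: parent task creating a child task (POSIX threads analogy) */
#include <pthread.h>
#include <stdio.h>

static void *child_task(void *arg)
{
    printf("child task running: %s\n", (const char *)arg);
    return NULL;
}

int main(void)                       /* plays the role of the initial (parent) task */
{
    pthread_t tid;
    /* the call sets up the child's control block and stack, then the parent
       continues once the child has been created */
    pthread_create(&tid, NULL, child_task, "created by parent");
    pthread_join(tid, NULL);
    return 0;
}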
Types of Multitasking:
Multitasking involves the switching of execution among multiple tasks. It can be
classified into different types.
1. Cooperative multitasking - this is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since each task depends on the mercy of the others for getting CPU time, it is known as cooperative multitasking. If the currently executing task is non-cooperative, the other tasks may have to wait a long time to get the CPU.
2. Preemptive multitasking: This ensures that every task gets a chance to execute. When and for how much time a process gets the CPU depends on the implementation of the preemptive scheduling. In preemptive scheduling, the currently running task is preempted to give other tasks a chance to execute. The preemption of a task may be based on time slices or on task/process priority.
3. Non-preemptive multitasking: In non-preemptive multitasking, the task which currently holds the CPU is allowed to execute until it terminates or enters the blocked/wait state waiting for an I/O or system resource. Cooperative and non-preemptive multitasking differ in their behavior when tasks are in the blocked/wait state.
In cooperative multitasking, the currently executing process/task need not relinquish
the CPU when it enters the ‗Blocked/wait‘ state waiting for an I/O or a shared resource
access or an event to occur where as in non-preemptive multitasking the currently
executing task relinquishes the CPU when it waits for an I/O or system resource or an
event to occur.
Binary Semaphore
A semaphore is called a binary semaphore when it takes only the values 0 and 1. When its
value is 0, it is assumed that it has been taken (accepted); when it is 1, it is assumed that it
has been released (sent or posted) and no task has taken it yet.
An ISR can release the token. A task can release the token as well as accept the
token or wait to take the token.
Example
Consider an Automatic Chocolate Vending Machine (ACVM). After a task delivers the
chocolate, it has to notify the display task to run a waiting section of the code to
display "Collect the nice chocolate. Thank you, Visit Again". The waiting section for the
display of the thank-you message takes this notice (the semaphore) and then starts
displaying the message, as sketched below.
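A minimal code sketch of this notification, assuming µC/OS-II style semaphore calls; the names semDisplay, TaskDeliver and TaskDisplay and the helpers deliver_chocolate() and display() are illustrative assumptions, not from the text:

/* Binary semaphore used as a notification flag between two tasks */
static OS_EVENT *semDisplay;                 /* created with OSSemCreate(0)      */

void TaskDeliver(void *pdata)                /* task that delivers the chocolate */
{
    for (;;) {
        deliver_chocolate();                 /* application code (assumed)       */
        OSSemPost(semDisplay);               /* release: notify the display task */
    }
}

void TaskDisplay(void *pdata)                /* waiting section of the code      */
{
    INT8U err;
    for (;;) {
        OSSemPend(semDisplay, 0, &err);      /* take: wait for the notice        */
        display("Collect the nice chocolate. Thank you, Visit Again");
    }
}

/* elsewhere, before OSStart():  semDisplay = OSSemCreate(0); */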
MAILBOX
In the mailbox, when the time-and-date message from a clock process
arrives, the time is displayed at the side corner of the top line.
When the message is from another task to display a phone number, it is
displayed at the middle.
When the message is to display the signal strength of the antenna, it is
displayed at the vertical bar on the left.
Mailbox types at the different operating systems (OSes)
1. Multiple Unlimited Messages Queueing up
2. One message per mailbox
3. Multiple messages with a priority parameter for each message
A queue may be considered a special case of a mailbox with provision for
multiple messages or message pointers.
An OS can provide a queue from which reads are on a FIFO basis,
or it can provide multiple mailbox messages with each message
having a priority parameter.
Even if the messages are inserted in a different order, deletion (reading) is as per
the assigned priority parameter.
OS functions for the Mailbox
Create
Write(Post)
Accept
Read(Pend)
Query
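A minimal sketch of how the mobile-phone display example above could use these mailbox functions, assuming µC/OS-II style calls; the mailbox name mboxDisplay, the task names and the helpers format_time() and show_on_top_corner() are illustrative assumptions:

/* One-message mailbox between a clock task and a display task */
static OS_EVENT *mboxDisplay;

void TaskClock(void *pdata)                 /* sender: clock process             */
{
    static char timeStr[9];
    for (;;) {
        format_time(timeStr);               /* application code (assumed)        */
        OSMboxPost(mboxDisplay, timeStr);   /* write (post) the message          */
        OSTimeDly(OS_TICKS_PER_SEC);        /* wait about one second             */
    }
}

void TaskDisplay(void *pdata)               /* receiver: display task            */
{
    INT8U err;
    char *msg;
    for (;;) {
        msg = (char *)OSMboxPend(mboxDisplay, 0, &err);   /* read (pend)         */
        show_on_top_corner(msg);            /* application code (assumed)        */
    }
}

/* elsewhere, before OSStart():  mboxDisplay = OSMboxCreate((void *)0); */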
Fig: a) Functions at the operating system and the use of write and read functions by
task A and task B, b) Pipe messages in a message buffer
6. Write short notes on i) shared memory, ii) message passing, iii) priority
inheritance iv) priority inversion.
Shared Memory:
Shared memory is an efficient means of passing data between programs. One
program creates a memory portion which other processes can access.
A shared memory segment is an extra piece of memory that is attached to some address
spaces for their owners to use. As a result, all of these processes share the same
memory segment and have access to it. Consequently, race conditions may occur if
memory accesses are not handled properly. The following figure shows two processes
and their address spaces. The rectangular box is a shared memory segment attached to
both address spaces, and both process 1 and process 2 can access this shared
memory as if it were part of their own address space. In some sense, the
original address space is "extended" by attaching this shared memory.
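As an illustration only (the text does not prescribe an API), one common way to realize such a shared segment on a POSIX system is with shm_open() and mmap(); the segment name "/demo_shm" and the 4 KB size are assumptions, and error checking is omitted for brevity:

/* Minimal sketch: create and attach a shared memory segment (POSIX) */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create/open the segment */
    ftruncate(fd, 4096);                                /* set the segment size    */

    /* attach the segment into this process's address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(p, "hello");   /* another process mapping the same name sees this data */

    munmap(p, 4096);
    close(fd);
    /* shm_unlink(name) removes the segment when it is no longer needed */
    return 0;
}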
Message Passing:
The OS delivers the pointer to the message to the receiver task and then deletes the
copy of the pointer held by the message-sender task.
The features of a message queue IPC are
1. An OS provides for inserting and deleting the message pointers or messages.
2. Each queue for the message needs initialization before using functions in
kernel for the queue.
3. Each Created queue has an ID.
4. Each queue has a user defined size.
5. When an OS call inserts a message into the queue, the number of bytes copied is as
per the pointed number of bytes.
6. When a queue becomes full, there is an error-handling function to handle it.
There are two pointers for the queue head and tail memory locations:
QHEAD and
QTAIL
The µC/OS-II functions for a queue are:
1. OSQCreate 2. OSQPost 3. OSQPend 4. OSQAccept 5. OSQFlush 6.
OSQQuery 7. OSQPostFront.
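A minimal sketch of posting and pending on a µC/OS-II message queue; the queue size, the message contents and the helper process() are illustrative assumptions:

/* Message queue holding up to Q_SIZE message pointers */
#define Q_SIZE 8
static void     *qStorage[Q_SIZE];
static OS_EVENT *msgQ;

void TaskSender(void *pdata)
{
    static char msg[] = "reading ready";
    for (;;) {
        OSQPost(msgQ, msg);                    /* insert a message pointer (FIFO) */
        OSTimeDly(10);
    }
}

void TaskReceiver(void *pdata)
{
    INT8U err;
    char *m;
    for (;;) {
        m = (char *)OSQPend(msgQ, 0, &err);    /* wait for the next pointer       */
        process(m);                            /* application code (assumed)      */
    }
}

/* elsewhere, before OSStart():  msgQ = OSQCreate(&qStorage[0], Q_SIZE); */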
Priority Inversion
Priority inversion is a byproduct of the combination of blocking-based process
synchronization and preemptive priority scheduling. Priority inversion is the condition in
which a high-priority task needs to wait for a low-priority task to release a resource
which is shared between the high-priority task and the low-priority task, while a medium-
priority task which does not require the shared resource continues its execution by
preempting the low-priority task.
Priority-based preemptive scheduling ensures that a high-priority task
is always executed first, whereas the lock-based process synchronization mechanism
ensures that a process will not access a shared resource which is currently in use by
another process.
Priority Inheritance
A low-priority task that is currently accessing (by holding the lock on) a shared
resource requested by a high-priority task temporarily 'inherits' the priority of that high-
priority task from the moment the high-priority task raises the request. Boosting the
priority of the low-priority task to that of the task which requested the
shared resource held by the low-priority task eliminates the preemption of the low-
priority task by other tasks whose priorities are below that of the requesting task, and
thereby reduces the delay the high-priority task experiences in waiting for the requested
resource. The priority of the low-priority task, which is temporarily boosted, is brought
back to its original value when it releases the shared resource. Implementing the
priority-inheritance workaround in the priority-inversion problem discussed for the
process A, process B and process C example changes the execution sequence as
shown in the figure.
Priority inheritance is only a workaround; it does not eliminate the delay of the
high-priority task in waiting to get the resource from the low-priority task. It only
helps the low-priority task to continue its execution and release the shared
resource as soon as possible. The moment the low-priority task releases the
shared resource, the high-priority task kicks the low-priority task out and grabs the CPU –
a true form of selfishness. Priority inheritance handles priority inversion at the cost of
run-time overhead at the scheduler: it imposes the overhead of checking the priorities of all
tasks which try to access shared resources, and of adjusting their priorities. A sketch of a
priority-inheritance mutex is given below.
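As a hedged illustration of the mechanism (not from the text), a VxWorks-style mutex can be created with the SEM_INVERSION_SAFE option so that the kernel applies priority inheritance automatically; the task body and use_shared_resource() are assumptions:

/* Minimal sketch: a mutex with priority inheritance enabled (VxWorks style) */
#include <vxWorks.h>
#include <semLib.h>

SEM_ID resourceMutex;

void init_mutex(void)
{
    /* SEM_INVERSION_SAFE asks the kernel to apply priority inheritance */
    resourceMutex = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void lowPriorityTask(void)
{
    semTake(resourceMutex, WAIT_FOREVER);  /* this task may inherit a higher
                                              priority here if a high-priority
                                              task blocks on the same mutex     */
    use_shared_resource();                 /* application code (assumed)        */
    semGive(resourceMutex);                /* original priority restored here   */
}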
Under these assumptions, let n be the number of tasks, Ei be the execution time
of task i, and Ti be the period of task i. Then, all deadlines will be met if the
following inequality is satisfied.
Σ (Ei / Ti) ≤ n (2^(1/n) - 1)
Example: suppose we have 3 tasks. Task 1 runs at 100 Hz and takes 2 ms. Task 2
runs at 50 Hz and takes 1 ms. Task 3 runs at 66.7 Hz and takes 7 ms. Apply
RMS theory.
(2/10) + (1/20) + (7/15) = 0.717 ≤ 3(2^(1/3) - 1) = 0.780
Thus, all the deadlines will be met.
General solution:
As n goes to infinity, the right-hand side of the inequality goes to ln(2) = 0.6931.
Thus you should design your system to use less than about 60-70% of the CPU; a small
utilization check for this example is sketched below.
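A small utilization check for the worked example above, written as an illustrative C sketch (the function and variable names are assumptions):

/* RMS schedulability check: U = sum(Ei/Ti) against the bound n(2^(1/n) - 1) */
#include <stdio.h>
#include <math.h>

int rms_schedulable(const double E[], const double T[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += E[i] / T[i];                        /* total CPU utilization       */
    double bound = n * (pow(2.0, 1.0 / n) - 1);  /* RMS utilization bound       */
    printf("U = %.3f, bound = %.3f\n", u, bound);
    return u <= bound;                           /* 1 => all deadlines met      */
}

int main(void)
{
    double E[] = {2.0, 1.0, 7.0};    /* execution times in ms */
    double T[] = {10.0, 20.0, 15.0}; /* periods in ms         */
    printf("%s\n", rms_schedulable(E, T, 3) ? "schedulable" : "not guaranteed");
    return 0;
}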
Task creation:
Two functions for creating a task are
o OSTaskCreate ( )
o OSTaskCreateExt ( )
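A minimal sketch of creating a task with OSTaskCreate(); the stack size, priority, task body and the helper do_work() are illustrative assumptions:

/* Create one application task under µC/OS-II */
#define APP_STK_SIZE   128
#define APP_TASK_PRIO  10

static OS_STK appTaskStk[APP_STK_SIZE];

static void AppTask(void *pdata)
{
    for (;;) {
        do_work();                      /* application code (assumed)   */
        OSTimeDly(OS_TICKS_PER_SEC);    /* run roughly once per second  */
    }
}

/* called after OSInit() and before OSStart() */
void create_app_task(void)
{
    OSTaskCreate(AppTask,                        /* task function             */
                 (void *)0,                      /* argument                  */
                 &appTaskStk[APP_STK_SIZE - 1],  /* top of stack (grows down) */
                 APP_TASK_PRIO);                 /* unique priority           */
}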
Task Management:
After the task is created, the task has to get a stack in which it will store its
data.
A stack must consist of contiguous memory locations.
It is necessary to determine how much stack space a task actually uses.
Deleting a task means the task will be returned to its dormant state and does
not mean that the code for the task will be deleted. The calling task can delete itself.
If another task tries to delete the current task, the resources are not freed and
thus are lost. So the task has to delete itself after it uses its resources.
Priority of the calling task or another task can be changed at run time.
A task can suspend itself or another task; a suspended task can be resumed by another task.
A task can obtain information about itself or other tasks. This information can
be used to know what the task is doing at a particular time.
Memory Management:
The memory management includes:
1. Initializing the memory manager
2. Creating a memory partition
3. Obtaining status of a memory partition
4. Obtaining a memory block
5. Returning a memory block
6. Waiting for memory blocks from a memory partition.
Each memory partition consists of several fixed – sized memory blocks.
A task obtains memory blocks from the memory partition.
A memory partition must be created before a task can use it.
Allocation and de-allocation of these fixed – sized memory blocks is done in
constant time and is deterministic.
Multiple memory partitions can exist, so a task can obtain memory blocks of
different sizes.
A specific memory block should be returned to the memory partition from which it
came.
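A minimal sketch of a µC/OS-II memory partition with 16 blocks of 32 bytes; the sizes and names are illustrative assumptions:

/* Fixed-size block allocation from a memory partition */
static OS_MEM *part;
static INT8U   partStorage[16][32];        /* 16 blocks of 32 bytes each        */

void init_partition(void)
{
    INT8U err;
    part = OSMemCreate(&partStorage[0][0], 16, 32, &err);  /* create partition  */
}

void use_partition(void)
{
    INT8U err;
    void *blk = OSMemGet(part, &err);      /* obtain a block in constant time   */
    if (blk != (void *)0) {
        /* ... use the block ... */
        OSMemPut(part, blk);               /* return it to the same partition   */
    }
}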
Time management:
Clock Tick: A clock tick is a periodic time source to keep track of time delays and
time outs.
o Time intervals: 10~100 ms
o The faster the tick rate, the higher the overhead imposed on the system.
Whenever a clock tick occurs, µC/OS-II increments a 32-bit counter.
o The counter starts at zero and rolls over after 2^32 - 1 (4,294,967,295) ticks.
A task can be delayed, and a delayed task can also be resumed.
Five services:
o OSTimeDly ( )
o OSTimeDlyHMSM ( )
o OSTimeDlyResume ( )
o OSTimeGet ( )
o OSTimeSet ( )
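A brief illustrative sketch of the time services listed above (the task body and toggle_led() are assumptions):

/* Delay a task by hours/minutes/seconds/milliseconds */
void TaskBlink(void *pdata)
{
    for (;;) {
        toggle_led();                   /* application code (assumed)        */
        OSTimeDlyHMSM(0, 0, 1, 500);    /* delay 0 h, 0 min, 1 s, 500 ms     */
    }
}

/* INT32U now = OSTimeGet();    reads the current tick counter              */
/* OSTimeSet(0);                sets (here, resets) the tick counter        */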
Inter task communication:
Tasks can wait and signal along with an optional time out.
Task Management:
The VxWorks real-time kernel (wind) provides a basic multitasking environment. VxWorks
offers both POSIX and a proprietary (wind) scheduling mechanism. Both preemptive priority
and round-robin scheduling mechanisms are available.
The difference between POSIX and wind scheduling is that wind scheduling applies
the scheduling algorithm on a system-wide basis, whereas the POSIX scheduling
algorithms are applied on a process-by-process basis.
In VxWorks, the states encountered by a task are of 8 different types:
1. Suspended: idle state just after creation, or a state in which execution is inhibited.
2. Ready: waiting to run and to get CPU access when scheduled by the scheduler, but
not waiting for a message through an IPC.
3. Pending: the task is blocked as it waits for a message from an IPC or from a
resource; only then will the CPU be able to process it further.
4. Delayed: sent to sleep for a certain time interval.
5. Delayed + suspended: delayed and then suspended if it is not preempted
during the delay period.
6. Pended for an IPC + suspended: pended and then suspended if the blocked
state does not change.
7. Pended for an IPC + delayed: pended with a timeout, and taken out of the pend
after the delayed (timeout) interval.
8. Pended for an IPC + delayed + suspended: pended with a timeout and then
suspended during the delayed time interval.
Kernel library functions are included in the header files 'vxWorks.h' and 'kernelLib.h'.
Task and system library functions are included in 'taskLib.h' and 'sysLib.h'. User task
priorities are between 101 and 255; the lowest priority corresponds to the highest priority
number (255). System tasks have priorities from 0 to 99. For user tasks, the highest
priority is 100 by default.
The functions involved in task management:
1. Task spawn function: it is used for creating and activating a task. The prototype is
int taskId = taskSpawn (name, priority, options, stackSize, entryPt, arg1, arg2, ..., arg10);
2. Task suspending and resuming functions:
taskSuspend (taskId): inhibits the execution of the task identified by taskId.
taskResume (taskId): resumes the execution of the task identified by taskId.
taskRestart (taskId): first terminates a task and then spawns it again with its originally
assigned arguments.
3. Task deletion and deletion-protection function: taskDelete (taskId): this
permanently inhibits the execution of the task identified by taskId and cancels the
allocation of the memory blocks for the task stack and TCB.
Many times each task should itself execute the code for the following:
Memory de-allocation.
Ensure that the waiting task gets the desired IPC.
Close a file which was opened there.
Delete child tasks when the parent task executes the exit ( ) function.
4. Delaying a task to let a lower-priority task get access:
int sysClkRateGet ( ) returns the frequency of the system ticks (ticks per second).
Therefore, to delay by 0.25 seconds, the call taskDelay (sysClkRateGet ( ) / 4) is used,
as sketched below.
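A minimal sketch of spawning and delaying a VxWorks task; the entry function, priority, stack size and the helper toggle_led() are illustrative assumptions:

/* Spawn a periodic task and delay it by a quarter of a second */
#include <vxWorks.h>
#include <taskLib.h>
#include <sysLib.h>

extern void toggle_led(void);              /* application code (assumed)        */

void blinkTask(void)
{
    for (;;) {
        toggle_led();
        taskDelay(sysClkRateGet() / 4);    /* delay 0.25 s                      */
    }
}

void start_blink(void)
{
    int tid = taskSpawn("tBlink",              /* task name                     */
                        120,                   /* priority (user range)         */
                        0,                     /* options                       */
                        4096,                  /* stack size in bytes           */
                        (FUNCPTR)blinkTask,    /* entry point                   */
                        0,0,0,0,0,0,0,0,0,0);  /* arg1 ... arg10                */
    /* later: taskSuspend(tid), taskResume(tid) or taskDelete(tid) manage it */
}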
Memory Management:
In VxWorks, all system tasks and all application tasks share the same address space. This
means that a faulty application could accidentally access system resources and
compromise the stability of the entire system. An optional tool named VxVMI is available
that can be used to allow each task to have its own address space. The default physical
page size used is 8 KB. Virtual memory support is available with the VxVMI tool. VxWorks
does not offer privilege protection; the privilege level is always 0.
Interrupts:
To achieve the fastest possible response to external interrupts, interrupt service routines
in VxWorks run in a special context outside of any thread's context, so that no
thread context switches are involved. The C function that the user attaches to an interrupt
vector is not the actual ISR; interrupts cannot directly vector to C functions.
The ISR's address is stored in the interrupt vector table and is called directly from the
hardware. The ISR performs some initial work and then calls the C function that was
attached by the user. For this reason, we use the term interrupt handler to designate the
user-installed C handler function. VxWorks uses an ISR design that is different from a
task design.
The features of the ISR in VxWorks are:
1. ISRs have the highest priorities and can preempt any running task.
2. An ISR inhibits the execution of tasks till it returns.
3. An ISR does not execute like a task and does not have a regular task context.
4. An ISR should not use a mutex semaphore.
5. An ISR should just write the required data to memory or a buffer.
6. An ISR should not use floating-point functions, as these take a longer time to
execute.
A sketch of attaching a user interrupt handler is given below.
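A hedged sketch of attaching a user C handler to an interrupt vector with intConnect(); the vector number, the handler body and the use of a binary semaphore to signal a waiting task are illustrative assumptions:

/* Attach a user C handler to an interrupt and signal a task from it */
#include <vxWorks.h>
#include <intLib.h>
#include <iv.h>
#include <semLib.h>

static SEM_ID dataReadySem;

void myIntHandler(int arg)
{
    /* keep it short: acknowledge the device, then signal a waiting task */
    semGive(dataReadySem);            /* binary semaphore, not a mutex       */
}

void connect_handler(void)
{
    dataReadySem = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
    intConnect(INUM_TO_IVEC(5),             /* vector number (assumed)       */
               (VOIDFUNCPTR)myIntHandler,    /* user C handler                */
               0);                           /* argument passed to handler    */
}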
Performance:
Real-time performance: Capable of dealing with the most demanding time
constraints, VxWorks is a high-performance RTOS tuned for both determinism and
responsiveness.
Reliability: A high-reliability RTOS, VxWorks provides the certification evidence
required by strict security standards. Even for non-safety-critical systems, VxWorks is
counted on to run forever, error free.
Scalability: An indispensable RTOS foundation for very small-scale devices,
large-scale networking systems and everything in between, VxWorks is the first RTOS
to provide full 64-bit processing to support the ever-growing data requirements of
embedded real-time systems. VxWorks is scalable in terms of memory footprint and
functionality, so it can be tuned to the requirements of the project.
Interrupt latencies: Interrupt latency is the time elapsed between the occurrence of an
interrupt and the execution of the first instruction of the interrupt handler; interrupt
dispatch latency is the time from the last instruction in the handler to the first
instruction of the next task scheduled to run. VxWorks exhibits an interrupt latency of
1.4 to 2.6 microseconds and a dispatch latency of 1.6 to 2.4 microseconds.
Priority inheritance: VxWorks has a priority inheritance mechanism that exhibits
optimal performance, which is essential for an RTOS.
Footprint: VxWorks has a completely configurable and tunable small memory
footprint for today's memory-constrained systems. The user can control how much of
the operating system is included.
Applications:
VxWorks RTOS is widely used in the market for a great variety of applications.
Its reliability makes it a popular choice for safety-critical applications. VxWorks has been
successfully used in both military and civilian avionics, including the Apache attack
helicopter, the Boeing 787 and 747-8 and the Airbus A400M. It is also used in on-ground
avionics systems, such as civilian and military radar stations. Another safety-critical
application that entrusts VxWorks is BMW's iDrive system.
However, VxWorks is also widely used in non-safety-critical applications where
performance is at a premium. The Xerox Phaser, a PostScript printer, is controlled by a
VxWorks-powered platform, and Linksys wireless routers use VxWorks for operating
switches.
VxWorks has been used in several space applications. In spacecraft, where design
challenges are greatly increased by the need for extremely low power consumption and
the lack of access to regular maintenance, the VxWorks RTOS can be chosen as the
operating system for the On-Board Computer (OBC). For example, Clementine, launched
in 1994, ran VxWorks 5.1 on a MIPS-based CPU responsible for the star tracker and
image-processing algorithms. The Spirit and Opportunity Mars exploration rovers were
installed with VxWorks. VxWorks is also used as the operating system in several industrial
robots and distributed control systems.
Summary:
Developing real-time embedded applications is always a challenge,
especially when expensive hardware is at risk. The complex nature of such systems
requires many special design considerations, an understanding of physical systems, and
efficient management of limited resources. Perhaps one of the most difficult choices
embedded system designers have to make is which operating system they are going to
use. It is critical to have an operating system that is fail-safe, secure,
scalable, fast and robust in multitask management, while being friendly to the
application developers. VxWorks is an RTOS which meets almost all of these
requirements.
iii) RT Linux
RT Linux is a hard real-time RTOS microkernel that runs the entire Linux operating
system as a fully preemptive process. It was developed by Victor Yodaiken, Michael
Barabanov and others at the New Mexico Institute of Mining and Technology and then
as a commercial product at FSMLabs. FSMLabs has two editions of RT Linux:
RT Linux Pro and RT Linux Free. RT Linux Pro is the priced edition and RT Linux Free
is the open-source release. RT Linux supports hard real-time applications. The Linux
kernel has been modified by adding a layer of software between the hardware and the
Linux kernel. This additional layer is called a 'virtual machine'. A footprint of 4 MB is
required for RT Linux.
The new RT Linux layer has a separate task scheduler. This task scheduler
assigns the lowest priority to the standard Linux kernel. Any task that has to
meet real-time constraints runs under RT Linux. Interrupts from Linux are disabled to
achieve real-time performance.
UNIT-5
EMBEDDED SYSTEM APPLICATION DEVELOPMENT
PART-A(2 marks)
1. What is Multi-state system?
A system that can exist in multiple states (one state at a time) and transition from one
state to another state is known as a multi-state system. There are 3 types of multi-state
system:
Timed multi state system
Input – based multi state system
Input – based / Timed multi state system
2. What is a motor driver?
A motor driver is a little current amplifier. The function of motor drivers is to take a low
current control signal and then turn it into a higher current signal that can drive a motor.
3. Define RTC
A real time clock is a computer clock that keeps track of the current time even when
the computer is turned off. Real Time Clock (RTC) runs on a special battery that is not
connected to the normal power supply.
4. Write in brief about the PIC microcontroller.
PIC (Peripheral Interface Controller) is a family of Harvard Architecture
microcontrollers made by Microchip Technology. It has an in-built PWM generator,
which is very useful for controlling motors.
5. What is Smart Card?
A smart card stores and processes information through electronic circuits embedded in
silicon in a plastic substrate body. It is a portable and tamper-resistant computer. It
carries both processing power and information.
6. Define class and objects.
Class: A class is a user-defined type or data structure, declared with the keyword class,
that has data and functions as its members, whose access is governed by the three
access specifiers private, protected and public. A class is a collection of data members
and member functions.
Object: An object is a variable of a class type; objects are also called instances of the
class. Each object contains all the members declared in the class.
7. What is synchronization in RTOS?
To let each section of code, each task and each ISR run and gain access to the CPU one
after the other, sequentially or concurrently, following a scheduling strategy, so that there
is predictable operation at any instance.
8.List the characteristics of multi – state system.
The system will operate in two or more states.
Each state may be associated with one or more function calls.
Transitions between states may be controlled by the passage of time, by
system inputs or by a combination of time and inputs.
Transition between states may also involve function calls.
9. What is an adaptive control algorithm?
An adaptive control algorithm refers to algorithm parameters which adapt to the present
status of the control inputs in place of a constant set of mathematical parameters in
algorithmic equations.
10. What are the task functions for a smart card?
resetTask
task_ReadPort
task_PW
task_Appl
UNIT – 5
PART-B(16 marks)
Fig: Washing machine controller block diagram (inputs: start switch, selector dial, water level sensor, temperature sensor; outputs: water valve, water heater, water pump, drum motor, detergent hatch, LED or LCD indicators)
5. When the lid is open, the system should not work. If the door is accidentally opened
during a wash operation, the system should stop working in the minimum possible
time.
6. The system should provide all the basic features of a washing machine, like
washing, rinsing, spinning, drying, cold wash, hot wash, etc.
7. The system should provide an easy option for upgrading to new features.
8. The system should work on single-phase AC from 190 V AC to 250 V AC. The
system should protect itself from power supply voltage variations.
9. In the event of a power failure, the washing machine should automatically restart
its cycle from the point of interruption when power is resumed.
Hardware Design:
The PIC18F452 is the heart of the system. Most of its peripheral features have been
utilized to implement the design. Controlling the motor is a very crucial part of the design.
The PWM feature of the microcontroller controls the motor speed; the PWM output is fed
to the driver circuit and then to the motor.
To rotate the motor in two different directions (forward and reverse),
control blocks are used. A motor speed sensor is interfaced to the microcontroller.
The microcontroller reads the speed of the motor and appropriately controls the speed of
the motor in the different phases of washing using the PWM output. The door sensor,
pressure sensor and keypad are also interfaced to the microcontroller. The EEPROM and
RTC are interfaced to the MSSP module of the controller. An in-circuit serial programming
facility is provided for quick and easy programming and debugging.
Software Design:
A provisional list of functions that could be used to develop the washing machine
software is:
Read_Select_Dial ( )
Read_Start_Switch ( )
Read_Water_Level ( )
Read_Water_Temperature ( )
Control_Detergent_Hatch ( )
Control_Door_Lock ( )
Control_Motor ( )
Control_Pump ( )
Control_Water_Heater ( )
Control_Water_Valve ( )
Frame Work:
1. System states – Initialization, start, fill drum, heat water, Wash 1, Wash 2,
Error.
2. User defined data – a) Maximum Fill duration – 1000 seconds. b) Maximum
water heat duration – 1000 seconds. c) Maximum wash 1 duration – 3000 seconds.
3. Functions involved in each state or function call
Initialization:
Control_Motor (OFF)
Control_Pump (OFF)
Control_Water_Heater (OFF)
Control_Water_Valve (OFF)
Read_Select_Dial (ON)
Now switch to the start state.
Start:
Control_Door_Lock (ON)
Control_Water_Valve (ON)
Control_Detergent_Hatch (ON)
Now switch to the fill-drum state.
Fill Drum:
Read_Water_Level (ON)
Control_Water_Heater (ON)
Now switch to either the heat-water state or the Wash 1 state, depending upon the
condition.
Heat Water:
Read_Water_Temperature (ON)
Now switch to the Wash 1 state.
Wash 1:
Control_Motor (ON)
After the completion of the duration, the system switches to the Wash 2 state.
Wash 2:
In Wash 2 state, the system performs the washing operation until its duration is expired.
Error:
If any crash or error happens between the states, the system goes to the
default state called the error state. It then restarts the particular state where the error
occurred.
4. Function definitions:
Thus the software design is explained with its states and functions; a skeleton of this
state machine is sketched below.
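A skeleton of this state machine as an illustrative C sketch; only the Control_/Read_ function names come from the provisional list above, while the enum, the ON/OFF arguments (1/0) and helper predicates such as required_level(), hot_wash_selected(), wash1_duration_elapsed() and recover_last_state() are assumptions:

/* Washing-machine framework as a simple state machine (sketch only) */
typedef enum { INIT, START, FILL_DRUM, HEAT_WATER, WASH_1, WASH_2, ERROR } State;

/* functions from the provisional list, assumed defined elsewhere */
extern void Control_Motor(int on), Control_Pump(int on), Control_Water_Heater(int on),
            Control_Water_Valve(int on), Control_Door_Lock(int on),
            Control_Detergent_Hatch(int on), Read_Select_Dial(void);
extern int  Read_Water_Level(void), Read_Water_Temperature(void);
/* assumed helper predicates (hypothetical) */
extern int  required_level(void), required_temperature(void), hot_wash_selected(void),
            wash1_duration_elapsed(void), wash2_duration_elapsed(void);
extern State recover_last_state(void);

void washing_machine(void)
{
    State state = INIT;
    for (;;) {
        switch (state) {
        case INIT:                             /* everything off, read the dial */
            Control_Motor(0); Control_Pump(0);
            Control_Water_Heater(0); Control_Water_Valve(0);
            Read_Select_Dial();
            state = START;
            break;
        case START:                            /* lock the door, start filling  */
            Control_Door_Lock(1); Control_Water_Valve(1);
            Control_Detergent_Hatch(1);
            state = FILL_DRUM;
            break;
        case FILL_DRUM:                        /* wait for the required level   */
            if (Read_Water_Level() >= required_level())
                state = hot_wash_selected() ? HEAT_WATER : WASH_1;
            break;
        case HEAT_WATER:
            Control_Water_Heater(1);
            if (Read_Water_Temperature() >= required_temperature())
                state = WASH_1;
            break;
        case WASH_1:
            Control_Motor(1);
            if (wash1_duration_elapsed()) state = WASH_2;
            break;
        case WASH_2:
            if (wash2_duration_elapsed()) return;     /* cycle complete          */
            break;
        case ERROR:                            /* restart the state that failed */
            state = recover_last_state();
            break;
        }
    }
}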
This control method is used to maintain a constant speed in cruise mode, to decelerate
when a vehicle comes in front at a distance less than the safe distance, and to accelerate
back to cruise mode, using an adaptive control algorithm.
Adaptive Control algorithm
An adaptive control algorithm is one whose parameters adapt to the present status of the
control inputs, in place of a constant set of mathematical parameters in the algorithmic
equations. The parameters adapt dynamically.
For an ACC system, an adjustable-system subunit generates the output control
signal for the throttle valve.
The desired preset cruise velocity, the desired preset distance d_set and the safe preset
distance d_safe are the inputs to the measuring subunit.
The measured velocity v and distance d are inputs to the computing unit.
The comparison and decision subunit sends outputs which are inputs to the
adjustable-system subunit.
1. Task_ACC is an abstract class from which extended classes like
Task_Align, Task_Signal, Task_ReadRange, Task_RangeRate and Task_Algorithm are
derived to measure the range and the errors.
2. Task_Control is an abstract class from which extended classes like
Task_Brake, Task_Throttle and Task_Speed are derived to control the brake, throttle
and speed.
3. There are two ISR objects, ISR_ThrottleControl and ISR_BrakeControl.
ACC Hardware Architecture
The ACC embeds the following hardware units:
Microcontroller - runs the service routines and tasks except task_Algorithm. A CAN
port interfaces with the CAN bus of the car.
Processor with RAM/ROM - to execute task_Algorithm.
Speedometer.
Stepper-motor-based alignment unit.
Stepper-motor-based throttle control unit.
Transceiver - for transmitting pulses through an antenna hidden under the
plastic plates.
Display panel.
Port devices - the five port devices are
Port_Align, Port_Speed, Port_ReadRange, Port_Throttle and Port_Brake.
Synchronization Model
1. The task task_Align sends the signal to a stepper motor port, Port_Align, and to
Port_Ranging. The stepper motor moves by one step clockwise or anticlockwise.
2. The task task_ReadRange is for measuring the front-end car range.
3. task_Speed gets the port reading at a port Port_Speed. The task sends v, using the
count N and the counted interval between the initial and the Nth rotation.
4. task_RangeRate sends rangeNow. It calculates both range and rate errors and
transmits both rangeNow and speedNow.
5. task_Algorithm runs the main adaptive algorithm. It gets inputs from task_RangeRate,
and its outputs are events to Port_Throttle and the brake. Port_Throttle attaches to the
vacuum-actuator stepper motor.
Requirements:
Assume a contact-less smart card for bank transactions; let it not be a magnetic card.
The requirements of the smart card communication system with a host are:
1. Purpose:
Enabling authentication and verification of the card and card holder by a host, and
enabling a GUI at the host machine to interact with the card holder/user for the required
transactions: for example, financial transactions with a bank or credit card transactions.
2. System Functioning:
1. The card is inserted at the host machine. The radiation from the host activates a
charge pump at the card.
2. On power-up, a system reset signals the resetTask to start. The resetTask sends
the messages requestHeader and requestStart to the waiting task task_ReadPort.
3. task_ReadPort sends a request for host identification and reads, through
Port_IO, the host identification and the requestStart from the host for card
identification.
4. task_PW sends, through Port_IO, the requested card identification after the
system receives the host identity through Port_IO.
5. task_Appl then runs the required application. The requestApplClose message closes
the application.
6. The card can now be withdrawn, and all transactions between the card holder (user)
and the host now take place through GUIs at the host control panel.
3. Inputs:
Receives headers and messages at the IO port Port_IO from the host through the antenna.
4. Signals and Events and Notifications:
1. On power-up by the radiation-powered charge-pump supply of the card, a signal to
start the system boot program at the resetTask.
2. Card-start requestHeader message to task_ReadPort from the resetTask.
3. Host-authentication requestStart message to task_ReadPort from the resetTask to
enable requests at Port_IO.
4. User PW (password) verification message (notification) through Port_IO from the host.
5. Card-application-close requestApplClose message to Port_IO.
5. Outputs:
Transmitted headers and messages at Port_IO through the antenna.
6. Control Panel:
No control panel is at the card. The control panel and GUIs activate at the host
machine.
7. Design Metrics:
1. Power source and dissipation: radiation-powered contact-less operation.
2. Code Size: the code size generated should be optimum. The card system memory
needs should not exceed 64 KB of memory.
3. File System(s): Three layered file system for the data. One file for the master file
to store all file headers. A header has strings for file status, access conditions and
file-lock. The second file is a dedicated file to hold a file grouping and headers. The third
file is the elementary file to hold the file header and file data.
4. File Management: there is either fixed-length file management or variable-length file
management, with each file having a predefined offset.
5. Microcontroller hardware: generates distinct coded physical addresses for the
program and data logical addresses; protected once-writable memory space.
6. Validity: the system is embedded with an expiry date, after which card
authorization through the hosts is disabled.
7. Extendibility: The system expiry date is extendable by transactions and
authorization of master control unit.
8. Performance: less than 1 s for transferring control from the card to the host machine.
9. Process Deadlines: None.
10. User Interfaces: at the host machine, graphics on an LCD or touch-screen display
and commands for card-holder (card user) transactions.
11. Engineering Cost: US $ 50,000 (assumed)
12. Manufacturing Cost: US $ 1 (assumed)
Test and Validation Conditions:
Tested on different host machine versions for fail-proof card-host communication.
Class Diagram:
An abstract class is Task Communication. The figure shows the class diagram of
Software using Java Card provides one solution. The JVM has a thread scheduler built in,
so no separate multitasking OS is needed when using Java, because all Java byte codes
run in the JVM environment. Java provides features to support (i) security, using the class
java.lang.SecurityManager, and (ii) cryptographic needs. Java provides support for
connections, datagrams, IO streams and network sockets.
Java mix is a technology in which the native applications of the card run in C or
C++ and downloadable applications run in Java Card; the system then needs both the
OS and the JVM.
SmartOS is an assumed, hypothetical OS in this example, used as the RTOS in the card.
Remember that similar OS function names are used, for understanding purposes,
identical to MUCOS, but an actual SmartOS would be different from MUCOS; its file
structure is different. It has two functions as follows.
The function unsigned char *SmartOSEncrypt (unsigned char *applStr, EnType type)
encrypts the string as per the chosen encryption method, EnType = "RSA" or "DES",
and returns the encrypted string.
Synchronization Model:
Following are the actions when the card is placed near the host machine antenna in a
machine slot.
Step 1: Receive from the host, on card installation, the radiation of the carrier frequency,
or clock signals in case of a contact card. Extract charge for the system power
supply for the modem, processor, memories and port IO device.
Step 2: Execute the codes for a boot-up task on reset, resetTask. Let us code it in a
similar way as the codes for the first task: the codes begin to execute from main, and
main creates and initiates this task and starts the SmartOS.