Graphics System in Vehicle Electronics Report
1 Introduction
1.1 Background
1.2 Problem Statement
1.3 Method
1.4 Limitations
1.5 Contributions

2 Theory
2.1 Device drivers in OSE
2.1.1 Device drivers in OSE 5
2.1.2 Device drivers in OSE Epsilon
2.1.3 Analysis of context regarding drivers
2.2 Investigation of graphics libraries
2.2.1 Background
2.2.2 Choice of graphics library
2.2.3 OpenGL ES
2.2.4 Mobile 3D Graphics API for Java
2.2.5 EGL
2.2.6 Conclusions
2.3 Evaluate existing CAN protocol header
2.3.1 Background
2.3.2 How it looks today
2.3.3 Problems in the implementation
2.3.4 Mapping of the signals
2.3.5 Design proposal
2.3.6 Conclusions
2.4 Prerequisites for GPS antenna driver
2.4.1 What is GPS?
2.4.2 Conditions in this project
2.4.3 Conclusion
2.5 Summary and Conclusions

3 Design & Implementation
3.1 Implementing graphics port driver
3.1.1 Used hardware
3.1.2 Connection of LCD
3.1.3 LCD Controller- and Framebuffer driver
3.2 Adjustments of CAN protocol header
3.2.1 Converting an OSE signal to a CAN message
3.2.2 Converting a CAN message to an OSE signal
3.3 Extend OSE 5 BSP with GPS support on i.MXS board

4 Result
4.1 Discussion and Conclusions
4.1.1 Graphics
4.1.2 CAN
4.1.3 GPS
4.2 Future Work
4.2.1 CAN
4.2.2 Graphics
4.2.3 GPS

A Abbreviations
Chapter 1
Introduction
1.1 Background
Nowadays vehicles contain many embedded computers. One disadvantage is that such systems become static after the development process: the possibility to update the system is often limited to professionals at service garages. Embedded systems are becoming more important in the automotive industry, and as the demands from customers increase, the lack of possibilities to modify the systems creates a problem.
The EU research project DySCAS is an attempt to solve this problem. The project started in June 2006 and will, after 30 months, end during the period of this Master's thesis. The overall purpose of the project is to create a middleware, SHAPE, that makes it easier to integrate consumer electronic devices in an automotive environment. ENEA AB is involved in the project through the creation of SHAPE and the development of a demonstration platform to test the essential ideas on which the project is built.
The creation of SHAPE will benefit both consumers and vehicle manufacturers. Consumers will get more options to buy third-party solutions, and vehicle manufacturers will cut their expenses by not being forced to keep spare parts in stock for the long term.
1.2 Problem Statement

The main task is to provide working graphics drivers for the demonstration platform.
A natural first step in order to manage the task is to answer the question:

Q1. What graphics library is appropriate to use in SHAPE?

A further step is to find out which choices will be crucial for system performance. The following question can contribute to that understanding:

Q2. What choices can be made to reduce the system resources needed for graphics?
The subtasks create an elementary understanding needed to cope with the main task. The first subtask is to:

• Modify the communication over the CAN bus to work better on the demonstration platform

To be able to change the CAN communication, an analysis of the existing problem must be done first. To identify the problem, the following question must be answered:

Q3. What limitations and shortcomings does communication through CAN imply in DySCAS?
The questions Q1-Q5 will be addressed and answered in the theory chapter
of the report.
1.3 Method
The approach for this Master's thesis is to start with an academic literature study based on books, technical manuals and research papers within the areas of CAN, GPS and several graphics libraries. The development process and the design of drivers make up the predominant part of this thesis work. The thesis report contains the results of the literature study and the results from the design and implementation part of the thesis.
The development is carried out on an existing setup consisting of, among other things, a Freescale development board with an i.MX31 processor and two telematics units from an automotive environment communicating through CAN.
The purpose of analysing the GPS system, the CAN bus and the graphics libraries is to get a better understanding of the possibilities and limitations of these technologies. A basic understanding of GPS is also needed to pick out information from the protocol and pass that information on to other parts of the system.
1.4 Limitations
The work includes a demonstration setup to show the results of my work. The three tasks all use different hardware, so a demonstration is implemented for the various components individually. Since the thesis work is only 20 weeks long, time is the parameter that determines how advanced the graphical presentation will be. With more time, the different parts of the thesis could be linked together to show that they work jointly.
1.5 Contributions
A brief summary of a framework for writing drivers for the operating system OSE is given. There are essentially two recommended ways to write drivers: the signal-based interface and the system call interface.

Based on a set of requirements, an analysis has been made to find out which graphics libraries are appropriate to use in embedded systems. The search has focused on suiting the i.MX31ADS hardware and ENEA's operating system OSE 5. One of the requirements has been that the graphics libraries must be operating system independent, as a future goal is to add a graphical HMI (Human-Machine Interface) to the middleware SHAPE. SHAPE is currently implemented in combination with OSE and Linux.
A working driver for presenting graphics on a display attached to the i.MX31ADS hardware has been developed, based on an existing BSP (Board Support Package) for Linux. The driver builds on a storage area, the framebuffer, from which the LCD controller (hardware) retrieves data that is then presented on the display. It is all made possible by controlling software that allocates memory for the framebuffer, informs interested parts of the system (e.g. graphics libraries) about the framebuffer, and configures the LCD controller based on the size and address of the framebuffer and display-specific settings (e.g. resolution, color information).

An investigation has been made of the CAN communication in a working system. Two main problems are identified and corrected which otherwise contribute to an incorrect order when messages are sent. The order in which CAN messages are sent is determined by the construction of the CAN protocol header. The length of the data field in the CAN message has the biggest impact on the order in which a message is sent, and unique signal numbers had been assigned to the signals in a non-structured way. A modification of the CAN protocol header has been introduced, and the signal numbers are now assigned based on how often the messages are sent and how important they are for the system to work.
A study has been made of the hardware and the conditions for complementing an existing BSP (for OSE 5) with a device driver for a GPS antenna. The hardware is custom-built and aimed for use in a vehicle. The focus has been on the theoretical aspects, to provide a basis for how the device driver should be designed. A design proposal for a rough construction of the driver is also provided. An analysis of the GPS receiver's output has been made, and a design proposal for a way to pick out information from the stream of data is given.
Chapter 2
Theory
The theory chapter is divided into four parts. It begins with a description of the ways to write device drivers for the operating system OSE. That knowledge is necessary for several of the tasks this thesis contains.

This is followed by a study of graphics libraries suitable for embedded systems, adapted especially to fit the hardware, i.MX31ADS, used in the graphics work of the thesis. Based on the requirements of the report, two graphics libraries remain, OpenGL ES and Mobile 3D Graphics API for Java, which could be combined to obtain a greater range of applications. Should only one of these graphics libraries be used, the choice is OpenGL ES, as it is adapted for applications written in low-level languages.

An evaluation is then performed of an existing CAN communication and the way in which signals are given priority when several signals want to be sent simultaneously. Problems with the current solution are identified and a solution to the problems is given, which is also implemented in a later section of the report.

A description is given of how GPS works and what characteristics and conditions this project's GPS has. The GPS will be used with an existing system, and therefore the adjustments that need to be made for the GPS to function in this system are also described.

Finally, a summary of all the sections in the theory chapter is provided, and the questions posed in the introduction of the report are briefly answered.
2.1 Device drivers in OSE

This is, however, not the case in this thesis. The following sections are based on information from OSE Core User's Guide [7], OSE Device Drivers User's Guide [8] and C166 Board Support Package User's Manual [6].
2. Device drivers shall use the DRM interface for the management of board resources and configuration parameters. Every driver provides one function, the DRM entry, which is the entry point for all the DRM functionality.
The DDA (Device Driver Architecture) consists of two parts:

• Device manager
• Device drivers
Device manager
One could say that the device manager plays the central role in the DDA. The device manager's main task is to act as a negotiator when a process wants to access functionality from a device driver. It is the device manager that looks up the suitable device driver with the features that the process asks for.
The device manager defines a signal interface, which is used for adding new device drivers to the database. Every device driver reports its services to the device manager, and the device manager keeps a database of the installed device drivers. When the device manager passes configuration information to device drivers it also constructs a device path for each driver. The device manager uses this information for building a logical device tree,
which models the way the hardware components depend on each other. The device manager is the root of this tree. When a new device is found, the device manager searches the database to find a match for the device.

[Figure 2.1: Requests reach the device manager through the DRM interface; the device manager directs them to the device drivers, which sit on top of the hardware.]
Device drivers
A device driver consists of a collection of functions used to implement a
physical or virtual device functionality for the benefit of a higher level device
client. Technically the device driver is just a collection of operating system
independent functions used to handle the device hardware. Device drivers
provide a service to applications via a device client interface. DDA provides
two different interfaces for device clients (often applications) to communicate
with the device drivers. The two interfaces are presented in the following
sections.
Signal based interface

The first category of device drivers is implemented using a signal based interface. These drivers usually execute in their own process contexts (often in relation with interrupt processes) rather than executing code in the device client. Through this interface, the device client (User mode privileges) and the device driver (Supervisor mode privileges) execute in different process contexts. The advantage of this way of communicating is that there is no need to worry about mutual exclusion for shared resources. Seen from another perspective, this is also a drawback, since data has to be transferred between different memory buffers. The signal based interface is the interface that makes it easiest to create a safe system, since the device client is never allowed to execute in supervisor mode. Another drawback is that there is a limit on the amount of data permitted per signal.
[Figure 2.2: The device client and the device driver process exchange signals; the driver process requests hardware resources from the kernel.]
This can be seen as the device client sending a request (1) and handing over a buffer to the device driver process. The device client may send several requests without any data being lost. When the buffer(s) have been read by the device driver process, the driver process demands the requested hardware resources (2) from the kernel and returns a reply (3). If the device driver has data available, but no buffer to write to, the device driver is forced to throw the data away. This means that a device driver can never overflow a device client with data. See Figure 2.2.
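As an illustration, the request/reply exchange in Figure 2.2 could be sketched in C roughly as follows. This is a minimal sketch, not code from the thesis: the signal numbers, the signal structs and the reply payload are hypothetical, and only the core OSE primitives (alloc, send, receive, free_buf) are assumed.

    #include "ose.h"

    #define READ_REQUEST 1001   /* hypothetical signal numbers */
    #define READ_REPLY   1002

    struct read_request { SIGSELECT sig_no; };
    struct read_reply   { SIGSELECT sig_no; char data[32]; };

    /* In OSE the application defines union SIGNAL for its signals. */
    union SIGNAL {
        SIGSELECT sig_no;
        struct read_request read_request;
        struct read_reply   read_reply;
    };

    /* Device client: hand a request buffer to the driver process (1)
     * and wait for the reply signal (3). */
    static void client_read(PROCESS driver_pid)
    {
        static const SIGSELECT any_sig[] = { 0 }; /* receive any signal */
        union SIGNAL *sig;

        sig = alloc(sizeof(struct read_request), READ_REQUEST);
        send(&sig, driver_pid);                /* buffer now owned by driver */

        sig = receive((SIGSELECT *)any_sig);
        if (sig->sig_no == READ_REPLY) {
            /* use sig->read_reply.data here */
        }
        free_buf(&sig);
    }

Note how the buffer ownership follows the signal: after send() the client no longer owns the request buffer, which is why no explicit locking of shared data is needed.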
System call interface

The second category of device drivers is implemented using a system call interface, where the registered drivers are kept in a table. Each table element consists of a string containing the driver name and a pointer to the entry function of the driver. The handle (see the "Open" call defined below) represents the table index for a corresponding driver name. Using the BIOS system call is also a way to change privilege level, letting user mode code execute in supervisor mode in a controlled manner.
Init - This call initializes the BIOS module and clears the table content.
Open - This call is used by the device client to find the registered entry
points. If the right entry point is found, this is returned as a handle.
Call - To execute functions through the given handle, the device client can
call that entry point. BIOS will then jump to the registered entry
point and enable execution.
This is the method the OSE Kernel uses internally to provide all the
system calls to the applications that are running in user mode and/or are
not linked to the core module.
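The table-based dispatch can be illustrated with a small sketch. This is an illustrative reconstruction, not the actual OSE BIOS implementation; the table size, the names and the entry-point signature are invented for the example.

    #include <string.h>

    typedef long (*bios_entry_t)(long func, void *arg); /* hypothetical signature */

    struct bios_slot { const char *name; bios_entry_t entry; };

    #define BIOS_MAX_DRIVERS 16
    static struct bios_slot bios_table[BIOS_MAX_DRIVERS];

    /* Init: clear the table content. */
    void bios_init(void) { memset(bios_table, 0, sizeof(bios_table)); }

    /* Open: look up a registered entry point by driver name;
     * the handle is simply the table index. */
    int bios_open(const char *name)
    {
        for (int i = 0; i < BIOS_MAX_DRIVERS; i++)
            if (bios_table[i].name && strcmp(bios_table[i].name, name) == 0)
                return i;          /* handle */
        return -1;                 /* no such driver registered */
    }

    /* Call: jump to the registered entry point through the handle. */
    long bios_call(int handle, long func, void *arg)
    {
        return bios_table[handle].entry(func, arg);
    }

In the real system the Call step is also where the controlled switch to supervisor mode would take place.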
An advantage of this method is that there is no need to shuffle data, since the device client is able to call functions exported by the device driver through a supplied handle, which implies faster execution. A major drawback is the problem if a device client with corrupted data gets access to execute in supervisor mode. This increases the demands on well-written functions exported from the device driver, and the risk of loopholes is larger.
Graphics port driver

Both the device driver for the GPS antenna and the driver for the graphics port are intended for OSE 5. Almost the same arguments as for CAN can be used to motivate why the graphics port driver should be implemented using the system call interface. Graphics claims many resources and sends large amounts of data, so it is important that the communication between the application and the driver is fast. As mentioned earlier, there is a limit on the signal size used within the signal based interface, and this might limit the capacity of the system.
GPS antenna driver

The driver for the GPS antenna does not have the same requirements on fast information exchange. Furthermore, not much data is sent at a time, so it is not a bad choice to implement this driver using the signal based interface. This implies an easier way to make the system safe, since the device clients never get to execute in supervisor mode.
2.2 Investigation of graphics libraries

2.2.1 Background
In the early 1980s mostly 2D graphics standards and hardware-oriented industry standards existed, while hardware vendors created specialized graphics workstations and introduced graphics libraries for their own hardware. The beginning of 3D graphics computing was characterized by very expensive and highly primitive hardware and software.
Beginning around 1990, 3D standards like OpenGL appeared on the market. These standards were developed further and exposed more and more parts of a graphics rendering pipeline (rendering is the process of generating an image from a 3D model by means of a software program [5]). Still, the workstation hardware was very sophisticated and expensive [18]. Expressions like ubiquitous computing and interactive mobile graphics were unimaginable. Mobile devices, such as cell phones, used small monochromatic displays, slow CPUs and little memory [4].
The resources have improved, though. CPU development has led to smaller and higher-performance CPUs, and the amount of available memory has increased. During the last decades, high-performance 3D graphics applications have moved from large systems with specialized hardware and software to regular low-cost PC hardware and gaming devices [18, 22]. This is the result of agreement on common standards such as OpenGL (manufacturers want open standard solutions in order to reduce development costs and to shorten product development times) in combination with rapidly improving graphics technology.
There has been fast progress within mobile graphics technology during the last five years, and it has followed a path similar to that on PCs: early proprietary software engines running on integer hardware paved the way for open standards for graphics hardware acceleration [22]. In a later phase of the technical evolution, mobile computing devices have been used in areas other than mobile telephony and PDAs (Personal Digital Assistants) [18].
According to Antochi et al. [16], the number of users of interactive 3D graphics applications is predicted to increase significantly in the near future. The usage of 3D graphics in embedded and small-scale systems is growing substantially in all areas of life. Especially popular within gaming applications, 3D graphics are also distributed on a wide range of devices, such as mobile phones, car radios, navigation systems, PDAs and devices with 3D-style user interfaces, public and private infotainment systems and interactive entertainment [10]. The application that drives graphics technology development on mobile devices the most seems to be the same one as on PCs, namely gaming [22].
Until a few years ago there was no commonly accepted API (Application Programming Interface) for 3D graphics on mobile devices [5]. Recently, as a result of the development within graphics technology and the high interest in embedded 3D graphics, APIs tailored to mobile 3D graphics, such as OpenGL ES and the Mobile 3D Graphics API for J2ME, have appeared. A 3D graphics API is a prerequisite for easily developing 3D applications [17].
The two interfaces are designed to complement each other and can take advantage of the same underlying rendering engine [21].
OpenGL ES and Mobile 3D Graphics are the two major platform independent graphics APIs. Mobile 3D Graphics actually works as a layer between Java and OpenGL ES (you make Mobile 3D Graphics calls from Java, and Mobile 3D Graphics in turn invokes OpenGL ES to manage the graphics rendering).

Using OpenGL ES is the main solution to the problem, but the possibility of complementing the graphics library with Mobile 3D Graphics exists. Mobile 3D Graphics does not necessarily have to use OpenGL ES, but such a stand-alone solution would not be as economical with resources, since it is based on a high-level 3D API.
OpenGL ES also includes a specification of a common platform interface layer, called EGL. This layer is platform independent and may optionally be included as part of OpenGL ES, but developers may also choose to define their own platform-specific embedding layer. OpenGL ES, Mobile 3D Graphics and EGL are presented in more detail in later sections.
Combining two graphics libraries

For graphics on embedded systems, OpenGL ES, with the possibility of adding Mobile 3D Graphics on top, enables hardware accelerated or software-based interactive 3D applications with small demands on memory and computation resources [23]. OpenGL ES is in fact the standard for embedded applications. It is a main standard in the embedded avionics market due to its power and cross-platform nature [25].
2.2.3 OpenGL ES
The demand and interest for using 3D graphics on mobile devices have increased successively during the last decade. By using 3D technologies, mobile devices gain access to powerful tools. To provide such a graphics library for the ubiquitous computing market, performance and memory limitations are some of the challenges to master [26].

When designing an open low-level multiplatform graphics API, OpenGL constitutes an excellent benchmark. OpenGL has long been a well-used and accepted API for graphics rendering on desktop computers. OpenGL is the main 3D API for MacOS, Linux and most Unixes, and it is available for Windows. It is itself based on previous APIs and has clear design principles that many 3D programmers are used to. OpenGL was, however, not designed to minimize the size of the API and its memory footprint (size of implementation). During its lifetime it has grown in complexity and in the number of functions, along with the adaption to new hardware platforms [21].
Since OpenGL is too big and is designed for more powerful hardware than embedded systems generally have, the graphics library needed to be cleaned of redundant functionality and rarely used functions to better meet the requirements of mobile devices. The majority of the work was put into minimizing the need for resources that are scarce on mobile devices. The Khronos Group [14], a member-funded industry consortium focused on the creation of open standard and royalty free APIs for handheld and embedded devices, took the responsibility to implement the changes and create OpenGL ES, where ES is an abbreviation of Embedded Systems.
Functionality with the widest scope of use was kept, while the other functions were stripped away. Since some features are more basic than others, hard to emulate using the remaining functions, or impossible for the API to do without, some small overlaps remained. Among other things, all primitives were simplified. Only points, lines and triangles were kept, whereas quadrangles and general polygons were removed. These are typically represented by numerous triangles in this reduced API.
The second revision, OpenGL ES 1.1, was released during 2004. In contrast to the first version, which was made as minimal as possible, OpenGL ES 1.1 aimed at systems with better resources. Five optional and two mandatory features were introduced. The largest update within the area of visual quality was further elaborated functions and better support for texture mapping.

OpenGL ES 2.0 is currently the latest version of the API for embedded systems. It is defined relative to the OpenGL ES 1.1 and OpenGL 2.0 specifications and introduces a programmable 3D graphics pipeline with the ability to create shader and program objects using the OpenGL ES Shading Language [2]. OpenGL ES 2.0 is not backward compatible with OpenGL ES 1.x, since it does not support the fixed function pipeline that OpenGL ES 1.x applies. Thus the OpenGL ES 2.0 programmable vertex shader replaces the fixed function vertex units implemented in OpenGL ES 1.x. Although OpenGL ES 2.0 is not backward compatible with 1.x, the APIs are designed so that drivers for both versions can be implemented on the same hardware, allowing 2.0 and 1.x applications to be executed on the same device [22].
Basing OpenGL ES on OpenGL has further advantages:

• There already exists a lot of useful literature, which eases the adoption of the API among developers.

• Knowledge of the strong and weaker points of the original API makes it easier to accept similar weaknesses in the upcoming API.

OpenGL ES has now been adopted as the main 3D API for Symbian OS, an open mobile operating system mainly for cell phones and PDAs.
2.2.4 Mobile 3D Graphics API for Java
Mobile 3D Graphics (JSR 184) is an optional high-level 3D API for mobile Java that facilitates the manipulation of 3D graphics elements on mobile devices. The Mobile 3D Graphics API 1.0 is fully specified in JSR 184 [13]. Mobile 3D Graphics uses the same rendering model as OpenGL ES; the two standards were developed concurrently, partly by the same developers. If a single device supported both OpenGL ES and Mobile 3D Graphics on the same essential graphics engine, particularly a hardware accelerated one, it could take advantage of both APIs. In fact, it is even recommended to implement Mobile 3D Graphics as a complement to OpenGL ES [22].
[Figure 2.4: Mobile 3D Graphics can work as a layer between Java applications and OpenGL ES, beside native applications. You make Mobile 3D Graphics calls from Java and Mobile 3D Graphics invokes OpenGL ES to manage the graphics rendering.]
Mobile 3D Graphics borrowed many basic design rules from Java3D. The new design largely offers the same fundamental functionality as Java3D, but avoids the unnecessary and overly generalized class hierarchies that prevented the simplification of Java3D. As a result, the footprint (number of classes and methods) of Mobile 3D Graphics is considerably smaller than that of Java3D [21, 22].
In the same way as the graphics engine, the applications and programs using the engine need to be small. Since Java applications often are installed over a wireless connection, especially on mobile phones, and many users have high demands on download times, the size of the application is very important. Providing high-level functionality, such as a scene graph and animation capabilities, in the graphics API, together with the ability to encode geometry, textures and animation data into small binary files, allows the final applications to be more compact or include more content. It is essential that the low and high level APIs complement rather than compete with each other and support similar hardware models, so that the same hardware can accelerate both of them [21].
2.2.5 EGL

EGL (Embedded-System Graphics Library) is a library specification [18] and a platform independent API originally designed to facilitate the integration of OpenGL ES into the native windowing system of the operating system. It is similar to GLX [1] (the OpenGL Extension to the X Window System) and allows interfacing with and control of resources such as framebuffers, memory, windows and platform resources. There is no requirement to provide EGL when implementing OpenGL ES. On the other hand, the OpenGL ES API does not state how to create a rendering context or how the rendering context gets attached to the native windowing system [23].
To operate correctly, the commands in OpenGL ES depend on EGL before any rendering can begin. EGL is needed to create a rendering context, which stores information about the proper OpenGL ES state. The rendering context is also where all the other pieces of the graphics library come together to actually display bits on an output device or display. The rendering context needs to be attached to an appropriate surface before rendering can begin.
The drawing surface defines the buffers required for rendering, e.g. the color buffer, depth buffer and stencil buffer. The drawing surface also details the bit depth of each of the required buffers. There are three types of drawing surfaces that can be created in EGL [23]: pbuffers, pixmaps and windows. Windows are attached to the native window system and are rendered to graphics memory. Both pbuffers (off-screen rendering to user memory) and pixmaps (off-screen rendering to OS native images) are pixel buffers that do not get displayed, but can be used as drawing surfaces [22].
EGL is needed to query the displays available on a device and to initialize them. For example, within a car there might be several LCD panels, and it is possible to render to surfaces, using OpenGL ES, that can be displayed on more than one panel.

The EGL API implements the above features and also further functionality, for instance support for multiple rendering contexts in one process, power management, and sharing objects (e.g. textures or vertex buffers) amongst rendering contexts in a process. The latest available version is EGL 1.4.
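A typical EGL bring-up sequence for such a rendering context could look as follows. The calls are standard EGL 1.x API, but the configuration attributes and the native window handle are placeholders that depend on the platform-specific embedding layer; this is a sketch, not the thesis implementation.

    #include <EGL/egl.h>

    /* Minimal EGL bring-up: display -> config -> surface -> context. */
    int egl_setup(EGLNativeWindowType win)
    {
        static const EGLint cfg_attrs[] = {
            EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_BLUE_SIZE, 5,
            EGL_DEPTH_SIZE, 16,            /* request a depth buffer for 3D */
            EGL_NONE
        };
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLConfig  cfg;
        EGLint     ncfg;
        EGLSurface surf;
        EGLContext ctx;

        if (!eglInitialize(dpy, NULL, NULL)) return -1;
        if (!eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &ncfg) || ncfg == 0)
            return -1;

        surf = eglCreateWindowSurface(dpy, cfg, win, NULL); /* on-screen window */
        ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        if (surf == EGL_NO_SURFACE || ctx == EGL_NO_CONTEXT) return -1;

        /* Attach the context to the surface; rendering can now begin. */
        return eglMakeCurrent(dpy, surf, surf, ctx) ? 0 : -1;
    }

Using a pbuffer or pixmap surface instead of a window only changes the surface-creation call; the rest of the sequence is the same.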
2.2.6 Conclusions
The usage of 3D graphics in embedded and small-scale systems is growing substantially in all areas of life. One of the reasons for the increasing interest in graphics for embedded systems is the advances in display technologies. OpenGL ES and Mobile 3D Graphics are the two major platform independent graphics APIs. Mobile 3D Graphics actually works as a layer between Java and OpenGL ES: you make Mobile 3D Graphics calls from Java, and Mobile 3D Graphics invokes, in turn, OpenGL ES to manage the graphics rendering.
OpenGL ES is a low-level API intended for applications written in C. OpenGL ES is in fact the standard for embedded applications, and a main standard in the embedded avionics market due to its power and cross-platform nature. It is a subset of desktop OpenGL, which served as a benchmark when designing OpenGL ES. The full OpenGL library contains too many features that are not applicable to systems with restricted resources, such as embedded systems. One example is that calculations in OpenGL depend to a great extent on floating point arithmetic, but support for floating point arithmetic is often limited or even missing on mobile devices.
Mobile 3D Graphics is an optional high-level 3D API for mobile Java that facilitates the manipulation of 3D graphics elements on mobile devices. When designing Mobile 3D Graphics, the major goal was to overcome the performance penalty caused by Java virtual machines. This was achieved by raising the abstraction level of the API above that of OpenGL ES, allowing applications to reduce the amount of Java code required for regular animation and rendering tasks. Mobile 3D Graphics uses the same rendering model as OpenGL ES.
There are a number of operating systems that allow end users to install applications written in C, but that number is small compared to those that support installation of Java applications. Java is supported by many different devices and operating systems.
Embedded software is very dependent on the hardware of a product. When rendering graphics, the most demanding operations are floating point calculations. Though software implementations are possible, dedicated hardware acceleration provides better performance. Dedicated graphics hardware can also provide faster execution and lower power consumption.

EGL allows interfacing with and control of resources such as framebuffers, memory, windows and platform resources. Two of the drawing surface types are rendered off-screen; reducing their unnecessary use contributes to reduced use of system resources.
Should a choice be made between the two graphics libraries, the given choice is OpenGL ES. This is because OpenGL ES is designed for applications written in low-level languages (like C). Mobile 3D Graphics is designed, partly by the same developers, to supplement OpenGL ES. If it is possible to use both graphics libraries, that would give a wider range of applications, since Java is often used in areas such as mobile telephony.
2.3 Evaluate existing CAN protocol header

Another problem is that the signal numbers assigned to the signals do not follow any pattern. Finally, a design proposal is provided in which the above problems do not exist, and conclusions are given at the end of the section.
2.3.1 Background
Many vehicles contain several embedded computers. The increase of electronics within the automotive sector is the result of customers' wishes for greater comfort and better safety, and also of organizations' requirements for improved emission control and reduced fuel consumption. The latter is also an important aspect for vehicle owners.

CAN (Controller Area Network) is a vehicle bus standard designed to allow devices to communicate with each other within the network without a host computer. On a CAN bus, data is sent and received using message frames. The message frames carry data from the sending node to one or more receiving nodes.
more receiving nodes.
When a CAN device transmits data onto the network, a unique signal
identifier precedes the data. The signal identifier does not only hold infor-
mation of the content of the message, but is also used to set the priority
of the message. In real time processing, requirements vary on whether a
message is sent fast or not. Some parameters change rapidly, while other
barely changes at all and this explains why all messages do not have the
same priority. Assigning priorities to the messages is a way to increase the
chance that all messages are delivered in time. The lower the numerical
value of the binary representation of the message frame is, the higher the
priority[19].
When more than one node wants to transmit information at the same time, a non-destructive procedure guarantees that messages are sent in the correct order and that no information is lost. The message with the highest priority ID is sent first, and all the lower prioritized nodes automatically become receivers of the highest priority message, retrying after the highest priority message has been sent. The priorities of the messages are assigned during the initial phase of system design and cannot be changed dynamically.

Somewhat simplified, the operation can be described as follows: if a node wants to send a message to one or several other nodes, it waits for the bus to be available (Ready). The message is sent and the priority of the message is compared against other contending signals (Send message). The signal with the highest priority continues to send, and the other nodes automatically become receivers of the message (Receive message). Every node that received the message correctly performs a check to determine if the message was intended for that node (Control). If the message was of use for the node it is accepted (Accept), otherwise it is ignored [9]. See Figure 2.5.
[Figure 2.5: The states of CAN communication: Ready, Send Message, Receive Message, Control and Accept, connected via the CAN bus.]

There are two permitted sizes of the data frame. To get the two formats to work together on a single bus, the shorter message type has been given higher priority in case of a bus access collision between signals with the same identifier.
In addition to the data frame, the CAN standard also specifies a remote frame. The remote frame holds the ID, but no data. A CAN device uses the remote frame to request that another device transmit the data frame for that ID. In other words, the remote frame supplies a way to poll for data [19].
In a CAN network, the messages transferred across the network are called frames. ENEA's CAN protocol supports two frame formats, the essential difference being the length of the data field. In the short frame format 8 bits of data are permitted, while the long frame format needs several signals in order to be completely sent. Figure 2.6 shows the fields of the frame formats, and the following sections describe each field. The CAN protocol header used in DySCAS today is 29 bits long, in accordance with the manual of the microcontroller used [15].

A main difference in ENEA's structure of the CAN protocol header is that a bit (priority) that resolves the length of the data frame precedes the signal ID, which otherwise determines the priority of the signal.
priority - The priority bit differentiates short frames from long frames. Because the priority bit is dominant (0) for short frames and recessive (1) for long frames, short frames always have higher priority than long frames.

signal ID - The signal ID field contains the identifier for a CAN frame. Both formats (long and short) have a 7-bit field (7 bits = 128 different signals). In both formats, the bits of the signal ID are transmitted from high to low order.

LE - The endian bit decides the byte order. There are two choices in this protocol: big or little endian. Big endian (0) stores the most significant byte first (at the lowest address) and little endian (1) stores the least significant byte first. Big endian is most commonly used.
Figure 2.7: Send any signal from process 3 in node 1 to process 1 in node 6
Figure 2.7 exemplifies how the protocol is used. The fields marked with
X can have any values that correspond to the description of the protocol.
[Figure 2.8: Four nodes attempt to transmit simultaneously, with identifiers node 1: 0x4D = 1001101, node 2: 0x5B = 1011011, node 3: 0x4A = 1001010 and node 4: 0x4B = 1001011 (binary); dominant bits (0) win the bus.]

As described earlier, the CAN protocol must implement a bus allocation method that guarantees explicit bus allocation even when several nodes access the bus simultaneously. This procedure decides which signal has the highest priority and consequently gets to send its message. CAN uses the established method CSMA/CD (Carrier Sense Multiple Access with Collision Detection). In the DySCAS project it is used with an enhanced capability of non-destructive bitwise arbitration to grant collision resolution, hence CSMA/CR (Carrier Sense Multiple Access with Collision Resolution).
The method of non-destructive bitwise arbitration uses the CAN protocol header of the message to resolve any collision by means of a comparison of the binary representations. This is achieved by CAN transmitting data through a binary model of "dominant" and "recessive" bits, where dominant is a logical 0 and recessive is a logical 1. When a comparison is made, 0 has higher priority than 1, just as in a logical AND gate. This also contributes to a more efficient use of the CAN bus.
Figure 2.8 shows four CAN nodes attempting to transmit messages, using the identifiers 0x5B, 0x4D, 0x4A and 0x4B respectively. All messages are presumed to have the same length (i.e. the same value of the priority bit). As each node transmits the 7 bits of its signal identifier, it checks the bus to find out if a higher priority identifier is being sent simultaneously. If an identifier collision is detected, the losing devices immediately stop sending and wait for the higher priority message to complete before automatically retrying. Since the highest priority signal identifier continues its communication without interruption, this scheme is referred to as non-destructive bitwise arbitration. The ability to resolve collisions and continue with high priority transmissions is one feature that makes CAN ideal for real-time applications.
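Because dominant bits are logical 0, the node with the numerically lowest identifier always wins, and each node can detect a loss bit by bit. The following sketch, which only illustrates the principle and is not driver code, models the bus as a logical AND of all transmitted bits:

    #include <stdio.h>

    /* Bitwise arbitration over a 7-bit identifier field: each node drives
     * its bit; the bus carries the AND of all driven bits (dominant 0 wins).
     * A node that sends recessive (1) but reads back dominant (0) backs off.
     * Handles up to 8 contending nodes. */
    static unsigned arbitrate(const unsigned *ids, int n)
    {
        int alive[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };
        for (int bit = 6; bit >= 0; bit--) {      /* high to low order */
            unsigned bus = 1;                      /* recessive unless pulled down */
            for (int i = 0; i < n; i++)
                if (alive[i]) bus &= (ids[i] >> bit) & 1u;
            for (int i = 0; i < n; i++)            /* recessive senders lose */
                if (alive[i] && ((ids[i] >> bit) & 1u) != bus) alive[i] = 0;
        }
        for (int i = 0; i < n; i++) if (alive[i]) return ids[i];
        return 0;
    }

    int main(void)
    {
        unsigned ids[] = { 0x4D, 0x5B, 0x4A, 0x4B };   /* nodes from Figure 2.8 */
        printf("winner: 0x%02X\n", arbitrate(ids, 4)); /* prints 0x4A */
    }

Running the sketch on the identifiers from Figure 2.8 yields 0x4A, the numerically lowest and hence highest priority identifier.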
1. The priority bit has the greatest impact on the message priority.

2. The signal IDs are not assigned in a pre-defined manner.

An incorrect outcome can be obtained when the priority bit determines the order between two signals. This could, for example, lead to a short signal that only updates something of minor relevance being sent before a long, but important, signal from the system level. The signal IDs are not assigned in a pre-defined manner that would benefit important and commonly recurring signals. The signal IDs have simply been given the first available values and consequently do not have the correct intergroup order. Signals from applications and from the system level appear in a mixed scheme.

These shortcomings affect the order in which signals are sent according to the non-destructive bitwise arbitration.
New CAN protocol header

[Figure 2.9: The proposed CAN protocol header: priority (8 bits), dnode (5 bits), snode (5 bits), dprocess (5 bits), sprocess (5 bits), LE (1 bit).]
bits bits bits
accurate priorities to the signals. The main goal of the restructure was to
reduce the significance of the length of the data frame and to assign higher
priorities to system level signals relative application signals.
Figure 2.9 shows the proposed CAN protocol header. The old CAN protocol header used 8 bits (1 bit + 7 bits) for priority and signal ID. These 8 bits were divided into three parts in the new design: group, long and signal ID.

"Group" determines if the signal is of system or application type. The group bit of system signals is assigned to be dominant (0), while the group bit of application signals is assigned to be recessive (1).

The "long" bit has the same functionality as the prior priority bit; hence it resolves the difference between short and long signals. Placing this bit after the group bit makes system signals more prioritized than application signals. Within each of the two groups, information that takes a short time to send also has a higher priority than signals that need a long time to complete. This is achieved in the new design.

The "signal ID" also has the same meaning as the earlier signal ID, except that it is only 6 bits long. This allows up to 64 different signals within each group, which can be both short and long signals.
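Packing these fields into the 29-bit extended CAN identifier could be sketched as below. The field order follows Figure 2.9 (the 8-bit priority field split into group, long and signal ID), but the exact bit layout of the thesis implementation is not reproduced here, so this is only an illustration of the packing idea:

    #include <stdint.h>

    /* Illustrative packing of the proposed 29-bit header:
     * [group:1][long:1][signal_id:6][dnode:5][snode:5][dprocess:5][sprocess:5][LE:1] */
    static uint32_t pack_header(unsigned group, unsigned is_long,
                                unsigned signal_id, unsigned dnode,
                                unsigned snode, unsigned dprocess,
                                unsigned sprocess, unsigned le)
    {
        uint32_t id = 0;
        id |= (uint32_t)(group     & 0x01) << 28; /* 0 = system, 1 = application */
        id |= (uint32_t)(is_long   & 0x01) << 27; /* 0 = short frame, 1 = long   */
        id |= (uint32_t)(signal_id & 0x3F) << 21; /* 64 signals per group        */
        id |= (uint32_t)(dnode     & 0x1F) << 16; /* destination node            */
        id |= (uint32_t)(snode     & 0x1F) << 11; /* source node                 */
        id |= (uint32_t)(dprocess  & 0x1F) << 6;  /* destination process         */
        id |= (uint32_t)(sprocess  & 0x1F) << 1;  /* source process              */
        id |= (uint32_t)(le        & 0x01);       /* 0 = big endian, 1 = little  */
        return id;  /* lower numeric value => higher bus priority */
    }

Placing group and long in the most significant positions is exactly what makes any system signal win arbitration over any application signal, and any short signal win over any long signal within its group.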
2.3.6 Conclusions
CAN is used in the automotive sector as a means of communication between electronic control units, and it uses priorities to determine the order in which signals are sent. The existing solution within DySCAS is based on the signal's length having the biggest impact on the priority: short signals are considered more important than long signals and shall have a higher priority.
In the old solution there is no structure in the choice of signal IDs, since the signals have been given their signal IDs in the order they were created. A signal created in a late phase of the development has therefore received a lower priority. These are the problems with the old solution of the CAN protocol header.
With the new structure of the CAN protocol header, signals are given a more correct order in the priority determination. It means that system signals will always be considered more important than application signals. Short signals will still be given a higher priority than long signals.
2.4 Prerequisites for GPS antenna driver

2.4.1 What is GPS?
the lines of latitude and longitude when producing maps.
Possible Use
The most obvious uses of GPS receivers, based on the description of how the system works, are to exploit the accurate time information that the satellites send out and to use the information about where you are. The second use case provides, together with map software, driving directions to the driver. You can also get information about the surroundings by combining this information with a GIS (Geographical Information System).

The GPS receivers commercially available today have the drawback of being static to a great extent. In the DySCAS project there are possibilities to use the GPS to a larger extent. The greatest possibility to broaden the use of GPS receivers is to combine them with other technologies.

Please note that the possible uses proposed are outside the scope of this thesis. This thesis will focus on providing working device drivers, picking out simple information from the GPS receiver and forwarding that information to the rest of the system.
Planning the trip Often when you plan your travels you are not in the
vehicle. The ability to connect additional devices to the vehicle in DySCAS
allows you to use a portable GPS or a cell phone equipped with GPS. You
can plan your trip at home and transfer the travel information when you
get to the vehicle.
2.4.2 Conditions in this project
Hardware prerequisites
The GPS receiver used in this project is integrated on the development board. The hardware is a microcontroller from Freescale, the MC9328MXS [11], based on an ARM9 core. The GPS receiver module on the development board is a u-blox TIM-LA [27]. It supports both active and passive antennas, and it provides two serial ports and seven GPSMODE pins that offer boot-time configuration capabilities.
[Figure 2.10: The quad UART bridges serial communication (GPS, RS232) and parallel communication (processor bus).]
The hardware is equipped with a UART chip, external to the processor, with four UARTs: a Philips SC16C554, a 4-channel Universal Asynchronous Receiver and Transmitter (QUART) used for serial data communications. Its principal function is to convert parallel data into serial data and vice versa. The hardware's internal GPS chip is connected to UART4 of the quad UART, and a full 9-pin RS232 interface is connected to UART1. A working driver for the external UART should be available in the board support package (BSP) that the work is based on.
The Philips SC16C554 is connected to the processor (see Figure 2.10) with an eight-bit parallel data bus and a five-bit address bus. The chip contains an address decoder that controls the chip select signals of the UART controller, so that the UART registers can be accessed according to the memory allocation registered for the controller. The total memory for the quad UART controller is separated into four equally large parts. Each of the four UARTs has an interrupt signal connected to the processor. Configuration of the memory and interrupts is performed by the bootloader, but the operating system needs to be configured the same way. This is done in a configuration file before the system is booted. The bootloader is software stored in the flash memory that starts when the system is powered on. Sometimes the bootloader is used to boot the operating system and to set up the system.
[Figure 2.11: Sequence of the registration of a service and the initialisation of the communication between the application and the Device Handler. The GPS module and GPS driver reside in the Platform layer, the Device Handler in the Instantiation layer, the Resolver in the Middleware layer, and the application in the Application layer.]
which uses functionality from the built-in UART control of the system hardware. An exact copy of this Device Handler can be used, or adjustments can be made to fit the GPS that will now be added to the system (see Figure 2.11).

The Device Handler for the COM port is divided into two parts: an Interrupt Handler and the ordinary COM Handler. The Interrupt Handler receives an interrupt from the hardware when a new byte of information is available. The Interrupt Handler then collects the data and sends it to the COM Handler. The COM Handler receives the byte and sends the character to the application that has requested its services. If the COM Handler receives a byte but has no subscriber to its services, it throws the character away. The COM Handler in use runs at a transfer rate of 19200 baud. The analysis of the data, to collect the information that is wanted, is made in the applications. One way to extract information from the NMEA messages is presented later in this chapter.
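The division of labour between the two parts could be sketched as follows, assuming OSE-style signal passing; the process names, the signal number and the subscriber handling are hypothetical:

    #include "ose.h"

    #define CHAR_IND 2001              /* hypothetical signal number */

    struct char_ind { SIGSELECT sig_no; char c; };
    union SIGNAL { SIGSELECT sig_no; struct char_ind char_ind; };

    static PROCESS subscriber = 0;     /* set when an application registers */

    /* Called from the interrupt process when the UART has a new byte. */
    void int_handler_byte(PROCESS com_handler, char byte)
    {
        union SIGNAL *sig = alloc(sizeof(struct char_ind), CHAR_IND);
        sig->char_ind.c = byte;
        send(&sig, com_handler);
    }

    /* COM Handler main loop: forward bytes, or drop them if nobody listens. */
    void com_handler_loop(void)
    {
        static const SIGSELECT any_sig[] = { 0 };
        for (;;) {
            union SIGNAL *sig = receive((SIGSELECT *)any_sig);
            if (sig->sig_no == CHAR_IND && subscriber != 0)
                send(&sig, subscriber);  /* relay the character */
            else
                free_buf(&sig);          /* no subscriber: throw it away */
        }
    }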
Communication interface
The GPS used in this project comes with a highly flexible communication interface. It supports the NMEA (National Marine Electronics Association) and UBX binary protocols, and it is able to accept differential correction data (RTCM). The GPS is multiport capable, which means that each protocol can be assigned to several ports at the same time with different settings per port (baud rate etc.). It is also multiprotocol capable, which means that more than one protocol can be assigned to a single port. This is especially helpful for debugging purposes.

The UBX binary protocol can set and poll all the available actions and messages of the GPS. Using asynchronous RS232 ports, the GPS can alternatively communicate with a host platform through the UBX protocol to carry GPS data. This proprietary protocol has the following key features:
demonstration platform. This is partly because the NMEA protocol does not take advantage of all the features of the GPS, due to bandwidth limitations of the ASCII protocol. A short presentation of the fundamentals of the NMEA protocol follows in the next section.
NMEA protocol
The NMEA protocol is an industry-standard protocol that was developed for marine electronics and was originally designed to allow data exchange between various sensors and navigation equipment aboard ships. Nowadays it is a de facto standard for GPS receiver data output [27]. The protocol expresses the data in ASCII format, which is a standard format for GPS applications.

The GPS supports NMEA-0183 version 2.3, which is upward compatible with version 3.0. NMEA consists of sentences, in which the first word is called a data type. The data type defines the interpretation of the rest of the sentence. Each data type has its own unique interpretation, defined in the NMEA standard. Sentences start with a $ and end with carriage return/line feed. GPS-specific messages all start with $GPxxx, where xxx is a three-letter identifier of the message data that follows. NMEA messages have a checksum, which allows detection of corrupted data transfers. Below is a list of some of the NMEA standard sentences the GPS supports:
GGA - Global Positioning System Fixed Data

[Figure 2.12: Position fix related data such as position, time, number of satellites in use, etc.]
The GGA sentence gives most of the information needed to accomplish the task. The most important NMEA sentences besides GGA are the RMC, which provides the recommended minimum GPS data, and the GSA, which provides satellite status data.
What does the stream look like? As mentioned, the sentences stream from the GPS receiver via the serial port. Each sentence starts with a "start of sentence" character, "$", followed by a two-character prefix that identifies the type of the transmitting unit, a three-character identification of the sentence, several data fields separated by commas, an optional checksum, and a two-character end of sentence (see below). The format for standard sentences may be defined by the following production rules [24]:
Sentence: ”$” <prefix> <sentence ID>,<data fields>*<checksum> <eos>
1. $: start of sentence.

2. prefix: two characters that identify the type of the transmitting unit. The most commonly used is "GP", identifying a generic GPS. These are the only sentences that can be transmitted by a GPS receiver, plus some proprietary sentences depending on the GPS receiver.

3. sentence ID: a three-character identification of the sentence.

4. data fields: the data fields are the biggest difference between different sentences. They hold the information interesting to the users of GPS receivers (e.g. position, speed).

5. checksum: a hexadecimal string formed as a bitwise XOR of all characters between the "start of sentence" character and the asterisk (see the production rules above).
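The checksum rule translates directly into code; a minimal sketch:

    /* NMEA checksum: bitwise XOR of all characters between '$' and '*'. */
    static unsigned char nmea_checksum(const char *s)
    {
        unsigned char sum = 0;
        if (*s == '$') s++;                  /* skip start of sentence */
        while (*s && *s != '*')
            sum ^= (unsigned char)*s++;
        return sum;   /* compare against the two hex digits after '*' */
    }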
Writing a parser To be able to read the NMEA sentences, you need some functionality that can receive the serial data stream and analyze it. First, it has to decode the stream into sentences, by finding the "start of sentence" character. Secondly, it has to parse the sentences for their content, and finally convert the content into something understandable by the user, which is not a trivial task.

The second step indicates that there is a need for a parser that searches for and extracts the data fields. This can be done by keeping track of the character position within the sentence. An even better solution is to extract everything between two commas. Since some data fields are of variable length or may even be empty, this solution makes the parser independent of the number of digits in the individual data fields. It will even be able to handle empty fields (",,"). Besides using the commas, there are several ways of giving the parser more functionality. Additional functionality can be introduced to error check the sentences (e.g. by validating the checksum). If any error is found, the NMEA parser should reject the data output by the GPS receiver and by no means forward it to, in this thesis' case, the application.
[Figure: Two state machines for processing an NMEA sentence: (a) search for "$", retrieve the address field, receive the sentence data and read the two checksum characters; (b) wait for the start of sentence, process the NMEA sentence ID, receive the data fields, compute the checksum and wait for the end-of-sentence characters, restarting on timeout, checksum error or unsupported sentence.]
To track the structure of the NMEA data you can use a state machine. The state machine might be formed to look for dollar signs, record characters until it sees a comma, and decide whether the sentence is interesting. If it is, it records characters until the end of the line, then validates the checksum and parses the fields it wants. If the sentence is not one of interest, or if any error occurs, it goes back to waiting for a dollar sign.
Design proposal for parser The parser can be composed of two main parts. One function, messageSplit, manages the breakdown of the sentence by using the comma character "," that exists between all data fields. It presents the words as strings and keeps track of whether there are more data fields or not.

The second function keeps track of whether the requested sentence is supported. It receives the data stream directly from the GPS receiver (as a call parameter) via the serial port, and it is also called with a struct parameter that can be used to store information. It uses the first function to read one data field at a time. By scanning the first field (e.g. $GPGGA) the function knows what sentence type the stream transmits and can then (if the type is supported) overwrite/update all data fields. These are finally converted to a form readable and easily understood by humans. Using this method, you are forced to make comparisons between the first field and strings in, for example, case or if statements.
Most of the work done by the parser consists of determining the sentence type and separating the different items. The parsed data are stored as strings in a struct. The parser can be extended to recognize whether the data coming from the GPS receiver is valid by using the optional checksum.
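A sketch of the two functions in C is given below. The names (messageSplit and a hypothetical parseSentence), the struct layout and the supported sentence type are illustrative only; a real implementation would also validate the checksum as described earlier.

    #include <string.h>

    struct gps_data {              /* hypothetical storage struct */
        char time[16];
        char lat[16];
        char lon[16];
    };

    /* messageSplit: copy the next comma-separated data field from *stream
     * into field; returns 1 if more fields remain, 0 at end of sentence. */
    static int messageSplit(const char **stream, char *field, size_t max)
    {
        size_t i = 0;
        while (**stream && **stream != ',' && **stream != '*' &&
               **stream != '\r' && i + 1 < max)
            field[i++] = *(*stream)++;
        field[i] = '\0';
        if (**stream == ',') { (*stream)++; return 1; } /* more fields follow */
        return 0;
    }

    /* parseSentence: identify the sentence type from the first field and,
     * if it is supported, update the fields of the struct. */
    static int parseSentence(const char *stream, struct gps_data *out)
    {
        char field[32];
        messageSplit(&stream, field, sizeof(field));
        if (strcmp(field, "$GPGGA") == 0) {
            messageSplit(&stream, out->time, sizeof(out->time)); /* UTC time  */
            messageSplit(&stream, out->lat,  sizeof(out->lat));  /* latitude  */
            messageSplit(&stream, field,     sizeof(field));     /* N/S       */
            messageSplit(&stream, out->lon,  sizeof(out->lon));  /* longitude */
            return 0;
        }
        return -1;  /* unsupported sentence type */
    }

Because messageSplit stops at every comma, an empty field (",,") simply yields an empty string, which keeps the parser independent of the number of digits in each field.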
2.4.3 Conclusion
The GPS system gives everyone with a GPS receiver the possibility to determine their position. The basis for position determination is to calculate time differences for signals sent from satellites to receivers, which in turn are converted to distances.
The most obvious uses of GPS are to exploit the accurate time information that the satellites send out and to use the information about where you are. The second use case provides, together with map software, driving directions to the driver.

In the DySCAS project there are possibilities to use the GPS to a larger extent. The greatest possibility to broaden the use of GPS receivers is to combine them with other technologies.

A possible feature is to plan a trip at home using another GPS receiver and to transfer that information when you arrive at the vehicle. Another use case is to predict how long it will take before the vehicle reaches a certain point.

The scope of use is more or less endless, depending on the number of devices you can combine the GPS with.
The NMEA protocol will be used for the outgoing communication in the
demonstration platform. It is an industry-standard ASCII-based protocol
that defines how data is transmitted in a "sentence" from one transmitter to
one receiver at a time. The most important NMEA sentences for this thesis
are GGA, which gives the position as coordinates in a longitude/latitude
system; RMC, which provides the recommended minimum GPS data; and GSA,
which provides satellite status data.
Q2. What choices can be made to reduce the system resources needed for
graphics?
Embedded software is very dependent on the hardware of a product. By
using dedicated hardware acceleration for floating point calculations and
dedicated graphics hardware you can get faster execution and lower power
consumption.
EGL allows interfacing with and control of resources such as framebuffers,
memory, window and platform resources. Reducing the unnecessary use of
off-screen drawing surfaces contributes to better use of system resources.
Q3. What limitations and shortcomings does communication through CAN imply
in DySCAS?
The existing solution within DySCAS is based on the premise that signal
length has the biggest impact on priority. Short signals are considered more
important than long signals and are given higher priorities. The old solution
has an insufficient structure of the signal IDs, since the signals have
basically been given their signal IDs in the order they were created.
The proposed structure of the CAN protocol header corrects this by
grouping the signals into two categories: system signals and application
signals. A review of the signal IDs has been made and they have been given
new values. Short signals are still considered more prioritized than long
signals.
Q5. How do you pick out data from the GPS stream?
The NMEA specification is not strict: there are no requirements on the
number of digits in a data field, etc. But there is always a comma between
two data fields, which makes the easiest way to extract information from an
NMEA message to write a parser that searches for two commas and reads out
all data in between. This way the parser is independent of the number of
digits in the individual message fields and can even handle ",,".
Chapter 3
Design & Implementation
3.1 Implementing graphics port driver
3.1.1 Used hardware
The work with the graphics drivers has been conducted on an i.MX31ADS
Application Development System (ADS) from Freescale Semiconductor. The
work is carried out with a BSP for Linux as a base. The ADS consists
of a Base board, a CPU board, and an MC13783 board (see Figure 3.1).
Peripherals, such as displays and sensors, are attached to the ADS via the
Image Processing Unit (IPU) hardware. The IPU is built to offer video and
graphics processing functions and to be an interface between the i.MX31
processor and video/still image sensors and displays. The IPU contains
functionality that is redundant for the area on which this thesis work is
focused, for example video and image data pre- and post-processing such as
resizing and rotating images/videos and color conversion.
The ADS supports several different types of LCD interfaces, and the display
that comes with the development system is a dumb (memory-less) active
LCD panel, a Sharp LQ035Q7DB02. This is the display that has been used
during the master's thesis. The Sharp display has a synchronous LCD
interface provided with scan control by the i.MX31. This is comparable to
the LCD interfaces of previous i.MX processors [12]. That the display is
memory-less means that it has no memory buffer of its own to read data
from, but must fully rely on a created framebuffer (see section 3.1.3)
that is in the hardware's RAM. It is a 3.5" Quarter VGA display, which
implies a resolution of 240x320 pixels. It contains no integrated control
circuit, but it has support for backlight control and has touch screen
capabilities. The display is connected through a flat panel cable to the
base board. The base board can be used with only the CPU board connected,
but the touch screen controller is built into the MC13783 chip, and
therefore the MC13783 board is required for this function to operate.
Figure 3.1: The i.MX31ADS hardware and Sharp display
Synchronous LCD Connector

    VCC        1    2   GND
    OE_ACD     3    4   FLM_VSYNC_SPS
    LP_HSYNC   5    6   LSCLK
    LD5_B5     7    8   LD4_B4
    LD3_B3     9    10  LD2_B2
    LD11_G5    11   12  LD10_G4
    LD9_G3     13   14  LD8_G2
    LD17_R5    15   16  LD16_R4
    LD15_R3    17   18  LD14_R2
    CONTRAST   19   20  LCDON
    SPL_SPR    21   22  REV
    PS         23   24  CLS
    LD1_B1     25   26  LD0_B0
    LD7_G1     27   28  LD6_G0
    LD13_R1    29   30  LD12_R0
    TOP        31   32  BOTTOM
    LEFT       33   34  RIGHT
[Figure: (a) Initialization order, from kernel initialization through board initialization, IPU initialization, LCD framebuffer driver initialization and LCD Controller driver initialization (fully working parts versus thesis work). (b) LCD initialization: LCD probe, generic framebuffer initialization, initialize LCD framebuffer, copy/set information to structures, hardware registers and framebuffer, LCD power on.]
take care of setting voltage levels, calls functions to initialize the LCD,
turns it on, and finally calls a function to notify the kernel that the
installation of the LCD driver is done. The probe function is a large and
important part of the driver for the graphics port. It gathers a large
number of function calls to other parts of the system in order to link the
LCD framebuffer driver and the LCD Controller driver together.
The next step is to initialize the framebuffer, which is done from the
probe function. This step may need to be carried out once or twice depending
on whether you want a single background graphics plane or also a graphics
window (also known as the foreground plane). The LCD Controller includes
two parallel data paths, one for the background plane and one for the
graphics window. Data for the background and foreground planes are collected
from system memory (e.g. RAM) via the LCD framebuffer driver into separate
buffers. The LCD Controller will perform various actions depending on how
the hardware registers are set, for example byte swapping on the data if a
different endianness has been set in a hardware register. The LCD Controller
has built-in hardware support for combining the two graphics planes. This is
also used when an image is combined with a hardware cursor.
During the initialization of the framebuffer, information is copied and/or
transferred between different structures (e.g. specific panel information) and
information is set in the hardware registers. Important information is also
written to the framebuffer. After this step the LCD is actually initialized.
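For orientation, the sketch below shows roughly what the probe step amounts to in a Linux framebuffer driver. It is an illustrative skeleton using the generic Linux fbdev API (framebuffer_alloc, register_framebuffer), not the BSP's actual code; the driver name is hypothetical:

    #include <linux/module.h>
    #include <linux/fb.h>
    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>

    static struct fb_ops sharp_fb_ops = {
        .owner = THIS_MODULE,
        /* drawing operations (fb_fillrect etc.) omitted in this sketch */
    };

    static int sharp_lcd_probe(struct platform_device *pdev)
    {
        struct fb_info *info;
        dma_addr_t fb_phys;
        size_t fb_size = 240 * 320 * 2;     /* QVGA at 16 bits per pixel */

        info = framebuffer_alloc(0, &pdev->dev);
        if (!info)
            return -ENOMEM;

        /* copy panel-specific information into the kernel structures */
        info->fbops = &sharp_fb_ops;
        info->var.xres = 240;
        info->var.yres = 320;
        info->var.bits_per_pixel = 16;
        info->fix.line_length = 240 * 2;

        /* the memory-less panel relies entirely on this buffer in RAM */
        info->screen_base = dma_alloc_coherent(&pdev->dev, fb_size,
                                               &fb_phys, GFP_KERNEL);
        if (!info->screen_base) {
            framebuffer_release(info);
            return -ENOMEM;
        }
        info->fix.smem_start = fb_phys;
        info->fix.smem_len   = fb_size;

        /* ...program the LCD Controller registers, power the panel on... */

        return register_framebuffer(info);  /* notify the kernel */
    }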
The last step is to turn on the panel and to register a client notifier.
#define SIGNAL_NAME 14           /* a unique signal number, e.g. 14 */

struct signal_name
{
    SIGSELECT sig_no;            /* must equal SIGNAL_NAME */
    U8 source_proc;              /* sending process */
    U8 source_node;              /* sending node */
    U8 dest_proc;                /* receiving process */
    U8 dest_node;                /* receiving node */
    ...
};
The notifier holds information such as a pointer to the function that is
called when the installation of the LCD driver is done.
[Figure: A CAN message travels from the platform (CAN bus) via the link handler input to the middleware.]
What should be stressed is that all units in the system use the same
endianness and therefore the system does not use that bit. No tests are done
to determine what type of endianness the devices have.
However, the endian bit had to remain in the CAN protocol header for
future use of the system, when devices with different endianness could be
used. This has no impact on the way of reading the CAN protocol header,
since this part of the CAN message is always read the same way. The endian
bit determines which way the following data should be read.
At start-up of OSE Epsilon a number of prioritized processes are declared.
The first process to be declared is the link handler, which gives that
process the highest priority. The link handler manages the network
communication and makes signals go from the source process to the
destination process within or between two or more nodes. Every signal sent
that is unknown to the system will be trapped by the link handler of that
node (a separate instance of SHAPE is available at each node). The link
handler decides, based on the signal content, whether to forward the signal
within the node or to send it to another node's link handler. This fact
makes it necessary that every signal has a unique signal number.
There are two registers that store the information for the CAN protocol
header: the upper arbitration register (UAR) and the lower arbitration
register (LAR). These registers are 16 bits in size each, of which 3 bits
are reserved in the LAR. These 29 bits are what set the limit on the CAN
protocol header size. Figure 3.5 gives a simplified view of the parts
affected by the CAN protocol header change.
[Figure: Upper Arbitration Register, bits 15-0. The 29-bit identifier is composed of the fields group, long, signal number, destination node, source node, destination process, source process and endian, e.g. sig_no = ((group & 0x0080) >> 1) | (signal_number & 0x3F).]
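As an illustration of how such a header could be assembled, the sketch below packs the fields into the two 16-bit arbitration registers. Apart from the example expression in the figure, the field widths and bit positions are assumptions made for illustration, not the project's actual layout:

    #include <stdint.h>

    /* Header fields; widths below are illustrative assumptions. */
    struct can_header {
        uint8_t group;         /* system vs. application signals */
        uint8_t long_sig;      /* 1 = long signal */
        uint8_t sig_number;    /* 6-bit signal number */
        uint8_t dest_node, src_node;
        uint8_t dest_proc, src_proc;
        uint8_t endian;        /* kept for future mixed-endian systems */
    };

    /* Assemble the 29-bit identifier, then split it into the 16-bit
     * UAR and LAR (3 LAR bits are reserved by the hardware). */
    static void pack_header(const struct can_header *h,
                            uint16_t *uar, uint16_t *lar)
    {
        uint32_t id = 0;

        id |= (uint32_t)(h->group      & 0x1)  << 28;  /* assumed */
        id |= (uint32_t)(h->long_sig   & 0x1)  << 27;  /* positions */
        id |= (uint32_t)(h->sig_number & 0x3F) << 21;
        id |= (uint32_t)(h->dest_node  & 0xF)  << 17;
        id |= (uint32_t)(h->src_node   & 0xF)  << 13;
        id |= (uint32_t)(h->dest_proc  & 0x3F) << 7;
        id |= (uint32_t)(h->src_proc   & 0x3F) << 1;
        id |= (uint32_t)(h->endian     & 0x1);

        *uar = (uint16_t)(id >> 13);           /* upper 16 ID bits */
        *lar = (uint16_t)((id & 0x1FFF) << 3); /* lower 13; 3 reserved */
    }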
Figure 3.7: A normal 8N1 data transmission of a single byte: a start bit, eight data bits D0-D7, one stop bit, no parity bit; a possible start of the next transfer follows.
[Figure 3.8: Layered view: the Device Handler in the Instantiation Layer, the GPS antenna driver and interrupt process in the Platform Layer, and the UART chip and GPS module in the hardware. The GPS module sends data to the driver via UART4 of the UART chip. The driver returns the data to UART1 of the UART chip after an analysis. The data is sent via RS232 to the Device Handler.]
¹A peripheral device that transfers data one byte at a time, such as a parallel or serial port.
time the GPS receiver sends data to the UART port. So the final step is to
make this function forward (or analyze and then send) the data to the COM
Handler.
The device driver could be able to receive both NMEA messages and
UBX messages from the GPS module. This is a design decision, since the
rest of the system is not interested in UBX messages. If no UBX messages
are to be passed on to the rest of the system, the driver can simply ignore
or discard this data. The default settings configure the GPS module to
be able to transmit both NMEA messages and UBX messages, but no UBX
messages are activated at start-up. These must be activated via UBX input
messages or with the GPSMODE pins (the 7 configuration pins determining the
start-up settings). To build the minimum platform, no UBX messages will
be received by the device driver from the interrupt process.
However, the device driver should be able to send UBX messages to the
GPS module to make configuration changes to the GPS receiver hardware.
This includes settings related to the configuration pins used to determine
the GPS module's boot settings, for example update rate, baud rate,
TIME PULSE and which sentences are used.
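For reference, the UBX framing itself is simple enough to sketch: two sync characters (0xB5 0x62), a message class and ID, a little-endian payload length, the payload, and a two-byte Fletcher checksum computed over everything between the sync characters and the checksum. A minimal encoder, assuming the caller provides an output buffer with room for the payload plus eight framing bytes:

    #include <stdint.h>
    #include <stddef.h>

    /* Frame a UBX message into out; returns the total frame length. */
    static size_t ubx_frame(uint8_t cls, uint8_t id,
                            const uint8_t *payload, uint16_t len,
                            uint8_t *out)
    {
        uint8_t ck_a = 0, ck_b = 0;
        size_t n = 0, i;

        out[n++] = 0xB5;                  /* sync characters */
        out[n++] = 0x62;
        out[n++] = cls;
        out[n++] = id;
        out[n++] = (uint8_t)(len & 0xFF); /* little-endian length */
        out[n++] = (uint8_t)(len >> 8);
        for (i = 0; i < len; i++)
            out[n++] = payload[i];

        for (i = 2; i < n; i++) {         /* checksum skips the syncs */
            ck_a += out[i];
            ck_b += ck_a;
        }
        out[n++] = ck_a;
        out[n++] = ck_b;
        return n;
    }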
The analysis that can be made of the received data is first and foremost
to determine what kind of message has been received. How this analysis
can be made is covered by the text describing how to write a message parser
(see section 2.4.2). The part of the message that differs is the three-letter
identifier. Since only a few sentences are used in the system, these are the
only ones that will be sent to the Device Handler. All other sentences
received by the device driver will be thrown away.
If the message is to be analyzed, a buffer with capacity for at least a
full sentence is allocated. The incoming data is stored starting at the
first position, and the buffer is filled continuously until a "new line"
character is collected. Then the entire sentence has been collected, and
the data can be passed on to the system if it is complete. If a dollar sign
is collected, that means the first character of a new sentence has arrived,
and it should be stored at the first position after the old data has been
thrown away.
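A minimal sketch of this buffering logic, where complete_cb is a hypothetical callback that passes a finished sentence on to the system:

    #define SENTENCE_MAX 83           /* a full NMEA sentence plus NUL */

    static char buf[SENTENCE_MAX];
    static int  pos;

    void gps_rx_byte(char c, void (*complete_cb)(const char *))
    {
        if (c == '$') {               /* new sentence: discard old data */
            pos = 0;
            buf[pos++] = c;
        } else if (c == '\n') {       /* sentence complete */
            buf[pos] = '\0';
            if (pos > 0)
                complete_cb(buf);
            pos = 0;
        } else if (c == '\r') {
            /* ignore the carriage return */
        } else if (pos > 0 && pos < SENTENCE_MAX - 1) {
            buf[pos++] = c;           /* store only inside a sentence */
        }
    }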
Chapter 4
Result
For the i.MX31ADS hardware there is now a driver implemented for the graphics
port under Linux that supports displays with the same control signals (VSYNC,
HSYNC) as the SHARP screen that was used during this work. The framebuffer
driver, which activates the LCD Controller driver, is used by most graphics
libraries (Qt, C/PEG, etc.) and lets applications talk directly to the
system's framebuffer. The already implemented touch screen driver also works
with the panel, and with other displays that use the same four pins on the
LCD connection port as the SHARP panel does.
The long-term goal of my thesis was that there will in the future be an
opportunity to introduce graphics in the SHAPE framework. The short-term
objective was, theoretically and practically, to investigate the possibilities
of achieving this goal. That an implementation was made in Linux for the
hardware also shows that it would work, because the graphics libraries are
not operating system dependent and communicate with the system's framebuffer
via a framebuffer driver. The framebuffer works in the same way in Linux and
OSE, so a working driver for OSE would have given the same outcome.
Unfortunately there is no working driver for the graphics port for ENEA's
operating system OSE on the i.MX31ADS hardware.
The system works after the CAN protocol header change. However, no tests
have been done to evaluate whether the system has become more robust and
future-proof.
The work with the GPS antenna has had a focus on theory and design
choices. A driver for the built-in GPS antenna on the Freescale hardware
with an i.MXS processor was not implemented because of problems with the
hardware.
4.1 Discussion and Conclusions
4.1.1 Graphics
Generally, the display is connected to the hardware's integrated LCD
Controller (in the processor) with a flat panel cable; the controller picks
data from the framebuffer (RAM). The LCD Controller driver is the software
that initializes the LCD Controller (the hardware register values, and the
addresses and size of the framebuffer, i.e. LCD-specific information) and
then does not need to do anything more. Both in Linux and OSE, there is a
system framebuffer.
For OSE, there is a framebuffer driver for the i.MX21ADS hardware, which
manages the allocation of RAM for the framebuffer, initializes the LCD
Controller, and then informs interested clients (e.g. a graphics library)
where the framebuffer is located. This driver uses the signal-based
interface (fbdev.sig) described in section 2.1.1. The two boards (i.MX21ADS
and i.MX31ADS) have many similarities but also some differences. Still, the
work on the i.MX21ADS is a basis for achieving the corresponding
functionality for the i.MX31ADS.
Restrictions
The work is based on an existing BSP for the Linux operating system. The
BSP initially contained lots of functionality related to graphics and the
framebuffer.
Graphics support for the operating system OSE exists only to a small extent
within the company. The implementations that do exist target other hardware,
and none of them make use of the IPU.
The i.MX31 includes a Graphics Processing Unit (GPU), which supplements
the processor by hardware accelerating 2D and 3D graphics. In order to take
advantage of what OpenGL ES has to offer, it is required to use the built-in
graphics accelerator on the application development system. For the Linux
BSP that was used, the drivers for the graphics accelerator were delivered
as a binary (already compiled code). Together with the graphics port driver
now available for Linux, it is possible to write OpenGL ES applications.
The source code was needed to be able to implement drivers for the graphics
accelerator in OSE, and further to be able to run OpenGL ES applications
with satisfactory results in OSE. Simply using the CPU would cause the
system not to cope with the required calculations, and other parts of the
system would suffer.
Improvement opportunities
the same image appears for a long time as when the screen is constantly
updated with information from the framebuffer.
Although this is a problem with the panel, it can be balanced by features
that save energy in other ways. The ability to turn off the screen in a
simple way should be available, and tasks that can be moved from the
processor to other parts of the system should be moved, since graphics is
demanding on the processor. It would be a large improvement of the system
as a whole if all graphics used the built-in graphics accelerator.
Conclusions
There are many graphics libraries to choose from on the market. Based on
the requirements set out in the report, there are basically two graphics
libraries that meet these requirements, namely OpenGL ES and Mobile 3D
Graphics API for Java. The recommended choice is to use OpenGL ES, as it
is a low-level graphics library and does not have the same overhead as
Mobile 3D Graphics API for Java. Mobile 3D Graphics API for Java is not a
low-level graphics library and is not recommended for use on its own.
However, it can be combined with OpenGL ES, since both graphics libraries
can take advantage of the same underlying rendering engine. This would
increase the system's usefulness, since there is a large market for Java
games and the like in other areas, such as mobile telephony. That is the
answer to the question "Q1. What graphics library is appropriate to use in
SHAPE?" that was posed in the introductory chapter of the report.
The question "Q2. What choices can be made to reduce the system resources
needed for graphics?" has been answered with a non-software solution.
Embedded software is very dependent on the hardware of a product. By
using dedicated hardware acceleration for floating point calculations and
dedicated graphics hardware you can get faster execution and lower power
consumption. When using e.g. OpenGL ES, there is EGL, which allows
interfacing with and control of resources such as framebuffers, memory,
window and platform resources. Reducing the unnecessary use of off-screen
drawing surfaces contributes to better use of system resources.
Implementation-wise, a driver for the graphics port has been implemented
for Linux. A corresponding driver for the graphics port was not written for
OSE. This has however not been a problem, since the operating systems work
the same way when it comes to graphics: graphics libraries write data to
the system's framebuffer and this is sent to the display.
4.1.2 CAN
System appearance
The protocol that has been analyzed and changed was already in use in
an existing system. The system is not large and has at some points not
taken into account (i.e. slightly simplified) factors which may occur in
larger systems with more diverse hardware and resources. The system makes
use of hardware with the same limitations and requirements on the CAN
protocol header. By that I mean, for example, that no difference in
endianness exists. The number of nodes and processes is so far limited.
Were a larger system to be used, more information for both nodes and
processes ought to be allowed in the CAN protocol header. At present that
information space is rather unused relative to the space allowed. For
future use within the project framework, a more robust network
communication channel is required.
Improvements
A trade-off must always be made when deciding what is to be prioritized.
If the system changes, or if other software changes the conditions (for
example by taking over parts of the prioritization), then there are
improvements to be made. As mentioned, the system is composed of units that
have the same hardware requirements and thus does not use all bits of the
protocol (the endian bit). There are also not as many nodes in the system
as the protocol permits, but that headroom is a solution for the future.
that the design has been implemented on is specific to the existing
hardware. However, both the design and the implementation are easy to
transfer to different hardware with similar conditions.
Conclusions
The theoretical work in the field of CAN communication has been driven
by the question "Q3. What limitations and shortcomings does communication
through CAN imply in DySCAS?" Two main problems were identified in the
existing CAN communication. The signal IDs were not assigned in a predefined
manner, and the priority bit had the greatest impact on the message priority.
This was corrected by identifying all the signals and then assigning them
new signal numbers based on how important the signals are and how often they
are used. The CAN protocol header was also redesigned so that the group
(system signals versus application signals) has the greatest impact on the
signal priority.
4.1.3 GPS
Time was spent on getting ENEA's OS to boot on the hardware. The operating
system had reportedly worked on the hardware in the past, but attempts
yielded no positive results despite the help of experienced co-workers.
There are several possible explanations for why the operating system did
not start, ranging from faulty hardware to faulty software or configuration
of the system. The error that occurred was that reading and writing memory
(within the configured area of RAM) threw errors, and the system would not
start. No new software was used when attempting to get the operating system
to start.
Conditions
The hardware belonged to a third-party company and the time with the
hardware was limited. The time with the hardware was not enough, and the
decision was taken to leave the task and instead spend time on other tasks
within the thesis framework. The time estimated (in my time plan) for
implementing the driver was used for troubleshooting.
Conclusions
An area extending beyond the boundaries of this master's thesis is the answer
to the question "Q4. What features can a GPS contribute to DySCAS?" The
most obvious use of a GPS receiver is to exploit the time information and
to use the information about the location, combined with map software,
to give driving directions. In DySCAS, a great way to extend the areas of
use of a GPS receiver is to combine the technology with other technologies
(e.g. cell phones). A possible feature is to plan your trip at home by using
another GPS receiver and to transfer that information when you arrive at the
vehicle. This could also be done in the opposite direction, if you are
heading somewhere on foot after having traveled a distance by car.
Information from the GPS receiver can be used to predict how long it will
take before the vehicle reaches a certain point. The scope of use is more
or less endless depending on the number of devices you can combine the GPS
with.
The GPS used in the thesis can use the NMEA protocol for output to the
system. It fits the existing COM port handler, which receives signals from
a serial cable; after a complete message has been received, it is sent
forward to an application that extracts the requested information from the
string. One part of the analysis is covered by the question "Q5. How do
you pick out data from the GPS stream?" The NMEA specification allows
several appearances of the data fields, but they are always separated by
commas. The easiest way to extract information from the data stream is
to write a parser that reads out the information between two commas. This
way an individual data field can have as many digits as you want, or even
no information at all. To change settings in the GPS hardware,
communication is made through the same serial port but with the UBX
protocol.
4.2.2 Graphics
A corresponding graphics port driver could be created for the operating
system OSE by porting the Linux code unit by unit (it is possible to test
certain parts separately), or by first getting the graphics to work in OSE
and then moving parts into the SHAPE framework.
The i.MX31ADS hardware also supports the use of multiple displays
simultaneously. This support can be implemented in software and utilized
in the future.
Within ENEA there are at least four application development systems
with partial support for graphics (Omap5912 OSK, i.MX31ADS, i.MX21ADS,
Redhawk). The hardware with most similarities to the i.MX31ADS is the
i.MX21ADS, which actually has support for the same display as the one used
in this work. Within the company there is a touch screen driver written for
the i.MX21ADS hardware (perhaps for the same display as used in this work),
which corresponds to the Linux driver for the i.MX31ADS. It can be moved
over to the i.MX31ADS when the screen is operational in OSE.
4.2.3 GPS
If the operating system can be made to start on the hardware, the task of
implementing the GPS antenna driver remains. Based on the design proposal
that has been given, the driver for the GPS antenna could be implemented.
It should be linked with an existing GPS handler located in the SHAPE
middleware as an intermediary between an application and the driver. The
GPS can use the protocols already in place, or the number of supported
varieties of NMEA sentences can even be expanded.
Bibliography
[3] SungHo Ahn, DongMyung Sul, SeungHan Choi, and KyungHee Lee.
Implementation of lightweight graphic library builder for embedded system.
In Proceedings of the 8th International Conference on Advanced Communication
Technology (ICACT 2006), volume 1, 3 pp., February 2006.
[13] JSR 184 Expert Group. Mobile 3D Graphics API for J2ME.
http://www.jcp.org/en/jsr/detail?id=184.
[14] The Khronos Group. Open Standards for Media Authoring and Accel-
eration. http://www.khronos.org/.
[20] Antti Nurminen. m-LOMA - a mobile 3D city map. In Web3D ’06: Pro-
ceedings of the eleventh international conference on 3D web technology,
pages 7–18, New York, NY, USA, 2006. ACM.
[22] Kari Pulli. New APIs for mobile graphics. In Proc. SPIE Electronic
Imaging: Multimedia on Mobile Devices II, SPIE, pages 1–13, 2006.
[23] Kari Pulli, Tomi Aarnio, Ville Miettinen, Kimmo Roimela, and Jani
Vaarala. Mobile 3D Graphics with OpenGL ES and M3G. Morgan
Kaufmann, 2007.
[27] u-blox. TIM-LA GPS Receiver Module, Data Sheet.
Appendix A
Abbreviations
LAR - Lower Arbitration Register