Unit 2
The CPU works in a cycle of fetching an instruction, decoding it, and executing it, known as the
fetch-decode-execute cycle. The cycle begins when an instruction is fetched from a memory
location pointed to by the PC to the IR via the data bus.
For embedded system design, many factors impact the CPU selection, e.g., the maximum width (number of bits) of a single ALU operand (8, 16, 32, or 64 bits) and the CPU clock frequency used for timing control, i.e., the number of ticks (clock cycles) per second, measured in MHz.
Memory : Embedded system memory can be either on-chip or off-chip. On-chip memory access is much faster than off-chip memory access, but the size of on-chip memory is much smaller than the size of off-chip memory. Usually, it takes at least two I/O ports as external address lines, plus a few control lines such as R/W and ALE, to enable the extended memory. Generally, data is stored in RAM and the program is stored in ROM.
I/O Ports : The I/O ports are used to connect input and output devices. The common input
devices for an embedded system include keypads, switches, buttons, knobs, and all kinds of
sensors (light, temperature, pressure, etc.).
The output devices include Light Emitting Diodes (LED), Liquid Crystal Displays (LCD),
printers, alarms, actuators, etc. Some devices support both input and output, such as
communication interfaces including Network Interface Cards (NIC), modems, and mobile
phones.
Communication Interfaces : To transfer data or to interact with other devices, embedded devices are provided with various communication interfaces like RS232, RS422, RS485, USB, SPI (Serial Peripheral Interface), SCI (Serial Communication Interface), Ethernet, etc.
Application Specific Circuitry : The embedded system sometimes receives input from a sensor or drives an actuator. In such situations certain signal-conditioning circuitry is needed. This hardware circuitry may contain ADCs, op-amps, DACs, etc. Such circuitry interacts with the embedded system to give the correct output.
Power supply: Most embedded systems nowadays work on battery-operated supplies, because low power dissipation is always required; hence the systems are designed to work with batteries.
Specialties of Embedded Systems : An embedded system has certain specialties when compared to a normal computer system such as a workstation or a mainframe computer system.
(i). Embedded systems are dedicated to specific tasks, whereas PCs are generic computing platforms.
(ii). Embedded systems are supported by a wide array of processors and processor architectures.
(iii). Embedded systems are usually cost sensitive.
(iv). Embedded systems have real-time constraints.
(v). If an embedded system uses an operating system, it is most likely a real-time operating system (RTOS), not Windows 9x, Windows NT, Windows 2000, Unix, Solaris, etc.
(vi). The implications of software failure are much more severe in embedded systems than in desktop systems.
(vii). Embedded systems often have power constraints.
(viii). Embedded systems must be able to operate under extreme environmental conditions.
(ix). Embedded systems utilize fewer system resources than desktop systems.
(x). Embedded systems often store all their object code in ROM.
(xi). Embedded systems require specialized tools and methods to be designed efficiently, compared to desktop computers.
(xii). Embedded microprocessors often have dedicated debugging circuitry.
(xiii). Embedded systems have software upgrade capability.
(xiv). Embedded systems have large user interfaces for real-time applications.
The accumulator (A) is a special data register that stores the result of ALU operations. It can
also be used as an operand. The Program Counter (PC) stores the memory location of the next
instruction to be executed. The Instruction Register (IR) stores the current machine instruction to
be decoded and executed.
The Data Buffer Registers store the data received from the memory or the data to be sent to
memory. The Data Buffer Registers are connected to the data bus. The Address Register stores
the memory location of the data to be accessed (get or set). The Address Register is connected to
the address bus.
In an embedded system, the CPU may never stop; it runs forever. As described above, the CPU works in a cycle of fetching an instruction, decoding it, and executing it, known as the fetch-decode-execute cycle. The cycle begins when an instruction is fetched, via the data bus, from the memory location pointed to by the PC into the IR.
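As an illustration only (not real hardware, and with invented opcodes), the fetch-decode-execute loop can be sketched in Kotlin roughly as follows:

// Illustrative simulation of the fetch-decode-execute cycle; all opcodes are made up.
fun main() {
    val memory = intArrayOf(0x01, 0x02, 0x00)  // tiny code memory: two dummy opcodes, then HALT
    var pc = 0                                 // Program Counter: address of the next instruction
    var ir = 0                                 // Instruction Register: holds the fetched instruction
    var running = true
    while (running) {
        ir = memory[pc]                        // fetch: copy the instruction pointed to by the PC into the IR
        pc++                                   // advance the PC to the next instruction
        when (ir) {                            // decode and execute
            0x00 -> running = false            // dummy HALT opcode
            0x01 -> println("executing opcode 0x01")
            0x02 -> println("executing opcode 0x02")
            else -> println("unknown opcode")
        }
    }
}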
The memory is divided into data memory and code memory. Most data is stored in Random Access Memory (RAM) and code is stored in Read Only Memory (ROM); this is due to the RAM constraints of the embedded system and its memory organization. RAM is readable and writable; it is faster but more expensive volatile storage, and it can be used to store either data or code. Once the power is turned off, all information stored in the RAM is lost. The RAM chip can be SRAM (static) or DRAM (dynamic). SRAM is faster than DRAM, but is more expensive.
ROM, EPROM, and Flash memory are all read-only-type memories often used to store code in an embedded system; the embedded system code does not change after the code is loaded into memory. The ROM is programmed at the factory and cannot be changed afterwards. Newer microcontrollers come with EPROM or Flash instead of ROM, and most microcontroller development kits come with EPROM as well. EPROM and Flash memory are easier to rewrite than ROM. EPROM is an Erasable Programmable ROM whose contents can be field programmed by a special burner and erased by a UV light source. The size of EPROM ranges up to 32 KB in most embedded systems. Flash memory is an electrically erasable PROM which can be reprogrammed from software, so developers do not need to physically remove the EPROM from the circuit to re-program it; it is much quicker and easier to rewrite Flash than other types of EPROM. When the power is turned on, the PC is loaded with the address of the first instruction in ROM (the reset vector); the CPU then fetches that instruction from the ROM location pointed to by the PC, stores it in the IR, and starts the continuous fetch-and-execute cycle. The PC is advanced to the address of the next instruction depending on the length of the current instruction, or to the destination of a jump instruction.
Clock : The clock is used to control the timing of the CPU for executing instructions and the configuration of timers. For example, the 8051 clock cycle is (1/12) × 10⁻⁶ second (1/12 µs) because the clock frequency is 12 MHz. A simple 8051 instruction takes 12 clock cycles (1 µs) to complete. Of course, some multi-cycle instructions take more clock cycles.
A timer is a real-time clock for real-time programming. Every timer comes with a counter which can be configured by programs to count the incoming pulses. When the counter overflows (resets to zero) it fires a timeout interrupt that triggers predefined actions. Many time delays can be generated by timers. For example, a timer counter configured to 24,000 will trigger the timeout signal after 24,000 × 1/12 µs = 2 ms.
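The same arithmetic can be written as a small Kotlin helper (illustrative only; the 12 MHz clock and the 24,000 count are taken from the example above):

// Delay produced by counting 'count' pulses of a clock running at 'clockHz'.
fun timerDelaySeconds(count: Long, clockHz: Long): Double = count.toDouble() / clockHz

fun main() {
    // 24,000 pulses at 12 MHz -> 24,000 x (1/12) us = 2 ms
    val delay = timerDelaySeconds(count = 24_000, clockHz = 12_000_000)
    println("Delay = ${delay * 1000} ms")  // prints: Delay = 2.0 ms
}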
In addition to time-delay generation, timers are also widely used in real-time embedded systems to schedule multiple tasks in multitasking programming. The watchdog timer is a special timing device that resets the system after a preset time delay in case of a system anomaly. The watchdog starts up automatically after the system powers up.
One may need to reboot a PC now and then due to various faults caused by hardware or software. An embedded system cannot normally be rebooted manually, because it has been embedded into its host system. That is why many microcontrollers come with an on-chip watchdog timer, which can be configured just like the counter in a regular timer. If the system gets stuck (for example, the power-supply voltage goes out of range or a regular timer does not issue a timeout after reaching zero count), the watchdog eventually restarts the system to bring it back to a normal operational condition.
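The watchdog itself is a hardware peripheral, but its behaviour can be sketched as a software analogy in Kotlin (purely illustrative; the class, timeout, and reset action are invented):

import java.util.concurrent.atomic.AtomicLong

// Software analogy of a watchdog: if the main program stops "kicking" it in time, a reset action fires.
class SoftwareWatchdog(private val timeoutMs: Long, private val onTimeout: () -> Unit) {
    private val lastKick = AtomicLong(System.currentTimeMillis())

    fun kick() {                                      // healthy code calls this periodically
        lastKick.set(System.currentTimeMillis())
    }

    fun start() {
        Thread {
            while (true) {
                Thread.sleep(timeoutMs / 4)
                if (System.currentTimeMillis() - lastKick.get() > timeoutMs) {
                    onTimeout()                       // system considered stuck: trigger the "reset"
                    kick()
                }
            }
        }.apply { isDaemon = true }.start()
    }
}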
ADC & DAC :
Many embedded system applications need to deal with non-digital external signals such as electrical voltage, music or voice, temperature, pressure, and many other signals in analog form. The digital computer cannot process these data unless they are converted to digital format. The ADC is responsible for converting analog values to binary digits. The DAC is responsible for outputting analog signals for automation controls such as DC motor or HVDC furnace control.
In addition to these peripherals, an embedded system may also have sensors, display modules like LCD or touch-screen panels, debug ports, and communication peripherals like I2C, SPI, Ethernet, CAN, and USB for high-speed data transmission. Nowadays various sensors are also becoming an important part in the design of real-time embedded systems. Sensors like temperature sensors, light sensors, PIR sensors, and gas sensors are widely used in application-specific circuitry.
The central part, or nucleus, of the operating system is the kernel. A kernel connects the application software to the hardware of an embedded system. The other important components of the OS are the device manager, communication software, libraries, and the file system. The kernel takes care of task scheduling, priorities, memory management, etc. It manages the tasks to achieve the desired performance of the system; it schedules the tasks and provides inter-process communication between different tasks.
The device manager manages the I/O devices through interrupts and device drivers. The device drivers provide the necessary interface between the application and the hardware. A device driver is a specific type of software developed to allow interaction with hardware devices. It constitutes an interface for communicating with the device through the specific system bus or communications subsystem that the hardware is connected to, providing commands to, and receiving data from, the device and, on the other end, offering the requisite interfaces to the operating system and software applications.
The communication software provides necessary protocols to make the embedded system
network enabled. This software integrates the upper layer protocols such as TCP/IP stack with
the operating system.
The Application Programming Interface (API) is used by the designer to write the application software. The API provides the function calls to access the operating system services.
Application Specific software : It sits above the OS. The application software is developed according to the features of the development tools available for the OS. These development tools provide the function calls to access the services of the OS. These function calls include creating a task, reading data from a port, writing data to memory, etc.
The various function calls provided by an operating system are:
i. To create, suspend, and delete tasks.
ii. To perform task scheduling so as to provide a real-time environment.
iii. To create inter-task communication and achieve synchronization between tasks.
iv. To access the I/O devices.
v. To access the communication protocol stack.
The designer develops the application software based on these function calls.
Communication Software: To connect to the external world through the Internet, an embedded system needs a communication interface. The communication software includes the Ethernet interface and the TCP/IP protocol suite. Nowadays even small embedded systems like mobile phones and PDAs are network enabled through this TCP/IP support. The TCP/IP protocol suite is shown in the diagram below.
Application layer
Transport Layer TCP/UDP
IP Layer
Data Link Layer
Physical Layer
This suite consists of different layers: the application layer, transport layer, IP layer, etc. TCP stands for Transmission Control Protocol; it ensures that data is delivered to the application layer without any errors. UDP (User Datagram Protocol) provides a connectionless service, without TCP's error control and flow control. This TCP/IP protocol suite helps to understand the working of communication software packages.
Cross-platform development: Sometimes the host computer used for the development of application software cannot be used to debug or compile the software. Then another system, which contains all of the necessary development tools (editors, compilers, assemblers, debuggers, etc.), may be used. Choosing another host system, other than the original host system, is known as cross-platform development. Some common differences between host and target machines are a different operating system, different system boards, or a different CPU.
A cross-platform development environment allows you to maximize the use of all your resources. This can include everything from your workstations and servers to their disk space and CPU cycles.
Here the host machine is the machine on which you write and compile programs. A target machine may be another general-purpose computer, a special-purpose device employing a single-board computer, or any other intelligent device. Debugging is an important issue in cross-platform development: since you are usually not able to execute the binary files on the host machine, they must be run on the target machine. The flow for cross-platform development is described below.
In this method, the source code is first developed on the host computer system and is compiled and linked using the cross-platform development tools. Then the code is downloaded onto the target and debugged on the target system. If the code works properly, it is burned into the EPROM or Flash ROM, and finally the code is run on the target system. If the code is not correct, it is sent back to the development stage, where it is corrected.
Cross-compilation tools are very important for successful product development. Selection of these tools should be made based on the embedded system itself as well as on the features needed to test and debug software remotely. The necessary tools for cross-platform development are:
• Cross-compiler
• Cross-assembler
• Cross-linker
• Cross-debugger
• Cross-compiled libraries
These components enable you to compile, link, and debug code for the target environment through the cross-compilation environment.
Boot Sequence : Booting means starting the system. An embedded system can be booted in one of the following ways:
i). Execute from ROM, using the RAM for data.
ii). Execute from RAM after loading the image from ROM.
iii). Execute from RAM after downloading the image from the host.
Normally booting from ROM is the fastest process. The process for executing from ROM using the RAM for data is described below.
Executing from ROM Using RAM for Data :
Some embedded devices have such limited memory resources that the program image executes directly out of the ROM. Sometimes the board vendor provides the boot ROM, and the code in the boot ROM does not copy instructions out to RAM for execution. In such cases, the data sections must still reside in RAM. The boot sequence for an image running from ROM is given below.
Two CPU registers are important here: the Instruction Pointer (IP) register and the Stack Pointer (SP) register. The IP points to the next instruction (code in the .text section) that the CPU must execute, while the SP points to the next free address in the stack. The stack is created from a space in RAM, and the system stack pointer register must be set appropriately at start-up.
The boot sequence for an image running from ROM is as follows :
i).The CPU’s IP is hardwired to execute the first instruction in memory (the reset vector).
ii). The reset vector jumps to the first instruction of the .text section of the boot image. The .text section remains in ROM; the CPU uses the IP to execute it. This code, called bootstrap code, initializes the memory system, including the RAM.
iii).The .data section of the boot image is copied into RAM because it is both readable and
writeable.
iv).Space is reserved in RAM for the .bss section of the boot image because it is both readable
and writeable. There is nothing to transfer because the content for the .bss section is empty.
v).Stack space is reserved in RAM.
vi).The CPU’s SP register is set to point to the beginning of the newly created stack.
At this point, the boot completes. The CPU continues to execute the code in the .text section and
initializes all the hardware and software components until it is complete or until the system is
shut down.
Embedded System Development Tools : Basically, the embedded tools are divided into two types: (i) hardware development tools and (ii) software development tools.
Hardware development tools : Hardware tools for embedded development include development or evaluation boards for specific processors, like FriendlyARM's Mini2440, Pandaboard, Beagleboard, Craneboard, etc. In addition, various other instruments like digital multimeters, logic analyzers, spectrum analyzers, and digital CROs are also required in embedded design.
The digital multimeter is used to measure voltages and currents and to check continuity in the circuits of an embedded system, because the embedded system also contains some application-specific circuitry which sometimes requires debugging.
The logic analyzer is used to check the timing of the signals and their correctness.
The spectrum analyzer is helpful for analyzing the signals in the frequency domain.
The digital CRO helps to display the output waveforms and also to store a portion of the waveforms.
Software development tools / testing tools : The software development tools include the operating system development suite, cross-platform development tools, ROM emulator, EPROM programmer, and In-Circuit Emulator (ICE). The operating system development suite consists of API calls to access the OS services; this suite can run on either Windows or UNIX/Linux systems.
Among the cross-platform tools, the compiler generates the object code for source code developed in high-level languages like C, C++, or Java. For Linux systems a number of GNU tools are available.
The EPROM programmer is used for in-circuit programming by burning the code into the memory of the target system.
The Instruction Set Simulator (ISS) software creates a virtual version of the processor on the PC.
Assembler and Compiler: The binary code obtained by translating an assembly language
program using an assembler is smaller and runs faster than the binary code obtained by
translating a high level language using a compiler since the assembly language gives the
programmer complete control over the functioning of a processor. The advantage of using a high
level language is that a program written in a high level language is easier to understand and
maintain than a program written in assembly language. Hence time critical applications are
written in assembly language while complex applications are written in a high level language.
Cross compilation tools are very important for successful product development. Selection of
these tools should be made based upon the embedded system itself as well as features to test and
debug software remotely. The cross-platform development tools should be compatible with the
host machine. Depending upon CPU family used for the target system, the toolset must be
capable of generating code for the target machine. In the case of GNU development tools, we
need to have a number of things to work together to generate executable code for the target. At
least one of the following tools must be available on the machine.
•Cross compiler
•Cross assembler
•Cross linker
•Cross debugger
•Cross-compiled libraries for the target host.
•Operating system-dependent libraries and header files for the target system
Simulator: A simulator is a software tool that runs on the host and simulates the behavior of the
target’s processor and memory. The simulator knows the target processor’s architecture and
instruction set. The program to be tested is read by the simulator and as instructions are executed
the simulator keeps track of the values of the target processor’s registers and the target’s
memory. Simulators provide single step and breakpoint facilities to debug the program.
Emulator : Another important tool is the ICE (In-Circuit Emulator), which emulates the CPU. An
emulator is a hardware tool that helps in testing and debugging the program on the target. The
target’s processor is removed from the circuit and the emulator is connected in its place. The
emulator drives the signals in the circuit in the same way as the target’s processor and hence the
emulator appears to be the processor to all other components of the embedded system. Emulators
also provide features such as single step and breakpoints to debug the program.
Software emulators are software tools that can emulate a particular CPU. Using a software
emulator one can debug the code and find out CPU register values, stack pointers and other
information without having a real CPU. Software emulators are useful when we don’t have the
real hardware available for testing and debugging and want to see how the CPU will behave
when a program is run on it.
In the most basic form, the mobile app architecture embraces a set of
patterns and techniques that developers follow to build a fully structured mobile
application. The specific elements of the architecture are chosen based on the
app's features and requirements. Most mobile app architectures are organized into three layers:
• Presentation Layer
• Business Layer
• Data Access Layer
Presentation Layer
The presentation layer pays attention to the components of the User Interface and
UI process components. The primary focus of this layer is how the application
would be presented to the end user. While designing this layer, app developers are
supposed to determine the correct client type that is compliant with the
infrastructure.
When discussing this layer, the mobile app developer should define how the mobile app will present itself to the end user. Important things like themes, fonts, colors, etc. should also be decided at this stage.
Business Layer
It represents the core of the mobile app and exposes its functionality. The business logic layer can be deployed on a backend server and used remotely by the mobile application to reduce the load, since the resources available on mobile devices are limited.
The layer mainly focuses on the business side: it includes workflows, business components, and entities under the hood.
Data Access Layer
This layer is responsible for the app's data. The Repository pattern is one of many examples of patterns for data layers. For large mobile projects, the Repository pattern is a good solution because it solves the problem of managing multiple data sources and mapping the data entities used by the business logic components.
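A minimal Kotlin sketch of the Repository idea (all names here are invented for illustration): the business layer asks one repository object for data and does not care whether the data comes from the network or from a local cache.

// Hypothetical data entity and data sources.
data class User(val id: Int, val name: String)

interface RemoteUserSource { fun fetchUser(id: Int): User? }
interface LocalUserCache { fun getUser(id: Int): User?; fun saveUser(user: User) }

// The repository hides both data sources behind one interface used by the business layer.
class UserRepository(
    private val remote: RemoteUserSource,
    private val cache: LocalUserCache
) {
    fun getUser(id: Int): User? =
        cache.getUser(id) ?: remote.fetchUser(id)?.also { cache.saveUser(it) }
}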
Common navigation approaches used in mobile apps include:
• Single view
• Stacked navigation bar
• Scroll view
• Modular controller
• Gesture-based navigation
• Search-driven navigation
• Tab controller
There are three main app types that define app architecture:
• native apps;
• hybrid apps;
• mobile web apps.
Native apps
Native mobile apps are stored and run locally on a device. These apps are similar to built-in apps like web browsers or mail, and they can use all the features and APIs of a mobile device. There are a great number of native apps in the app stores.
Native apps are built for a specific mobile platform with the use of particular programming languages and frameworks. For example, for building an Android app you'll need Java and Android Studio. If you then want to run the same app on the iOS platform, you'll need to build a new app from scratch using tools suitable for iOS, like Swift and AppCode.
Native apps are fast, work offline, user-friendly, and work smoothly on suitable
devices. However, they require considerable investments of time and money into
development, need frequent upgrades, and are not flexible as you’ll have to
develop a new app once you decide to explore more mobile app platforms.
Hybrid apps
Hybrid apps address the limitation of native apps, which function only on one platform. They are built using web technologies and run within a native app, displaying their web-based content in the native app wrapper. Their content can be packaged with the app or accessed from a web server. Therefore, these apps have access to the hardware of a device while being web-based, combining web and native screens. These apps can also be found in app stores.
Hybrid apps are usually much cheaper and faster to develop than native apps, while they can still use native APIs such as contacts, camera, and so on. They have one codebase for Android and iOS apps, meaning that you don't need to develop two apps from scratch for each platform. Hybrid apps are also simpler to maintain than native apps.
As for the downsides, they have connection limitations, can't work offline, and are much slower than native apps. It may be difficult to reach native functionality, as not all the device features can be incorporated in your app. It is hard to maintain high and equal performance for both platforms, as this requires a lot of code modifications, resulting in a worse user experience than with native apps.
Mobile web apps
These apps are built with the help of HTML, JavaScript, and CSS technologies and get automatically updated from the web without any submission process or vendor approval.
Mobile web apps are highly compatible with any platform as they run in a browser,
as a result, they have a broader audience. They are easier and cheaper to maintain
as you need to edit or change the content or design only once and the changes get
implemented across all the mobile platforms.
However, mobile web apps don't have access to native device features like GPS, cameras, and so on. They can have trouble with different screen sizes, so software developers have to make lots of adjustments. They can work offline only with very limited functionality. All of this has a negative effect on the user experience.
The standard iOS mobile app architecture can be divided into four blocks:
• Kernel level (Core OS) — works with the file system, controls the validity
of various certificates belonging to the applications. Also responsible for the
security of the entire system. Contains low-level access to the elements of the
device.
• Core services level (Core Services) — provides the fundamental system services, such as file, network, and database access, that applications build on.
• Media level (Media) — contains tools that allow for processing most media data formats.
• Interface level (Cocoa Touch) — has many elements for creating mobile
interfaces, and also provides the remaining layers with information coming from
the user.
MVC (Model-View-Controller) and its derivatives are used to create a high-quality iOS mobile application architecture. In practice, Cocoa MVC encourages you to write a "Massive View Controller", because the controller is so involved in the View's life cycle that it is difficult to say it is a separate entity. Although you still have the opportunity to move some of the business logic and data conversion into the Model, when it comes to moving work out of the View, you have few options.
A mobile user interface (mobile UI) is the graphical and usually touch-sensitive display
on a mobile device, such as a smartphone or tablet, that allows the user to interact with the
device’s apps, features, content and functions.
Mobile user interface (UI) design requirements are significantly different from those for desktop
computers. The smaller screen size and touch screen controls create special considerations in UI
design to ensure usability, readability and consistency. In a mobile interface, symbols may be
used more extensively and controls may be automatically hidden until accessed. The symbols
themselves must also be smaller and there is not enough room for text labels on everything,
which can cause confusion.
Users have to be able to understand a command icon and its meaning whether through legible
text or comprehensible graphical representation. Basic guidelines for mobile interface design are
consistent across modern mobile operating systems.
• The layout of the information, commands, and content in an app should echo those of the operating system in placement, composition, and colors. While apps may diverge to some degree in style, consistency on most of these points allows users to intuit, or at least quickly learn, how to use an interface.
• Click points must be usable for touch-based selection with a finger. This means a
click point can't be too small or narrow in any direction, to avoid unwanted selection
of nearby items, sometimes referred to as fat fingering.
• Maximize the content window size. On small screens, the UI should not
unnecessarily dominate screen size. It’s important to recognize that the object of a
UI is to facilitate use of content and apps, not just use of the interface.
• The number of controls or commands displayed at any given time should be
appropriate to avoid overwhelming the user or making viewing/interacting with
content confusing.
It can be challenging to strike a balance between attending to design considerations and dealing
with the specific requirements of different apps. Furthermore, an app UI should be customized
for each mobile OS, as that is the visual language the device user will be immersed in and
typically most familiar with. To that end, mobile OS developers generally provide resources to
familiarize UI designers with the way their OS does its interface.
Your app's user interface is everything that the user can see and interact with. Android provides a
variety of pre-built UI components such as structured layout objects and UI controls that allow
you to build the graphical user interface for your app. Android also provides other UI modules
for special interfaces such as dialogs, notifications, and menus.
Layouts
A layout defines the structure for a user interface in your app, such as in an activity. All elements
in the layout are built using a hierarchy of View and ViewGroup objects. A View usually draws something the user can see and interact with, whereas a ViewGroup is an invisible container that defines the layout structure for View and other ViewGroup objects, as shown in figure 1.
The View objects are usually called "widgets" and can be one of many subclasses, such as Button or TextView. The ViewGroup objects are usually called "layouts" and can be one of many types that provide a different layout structure, such as LinearLayout or ConstraintLayout.
Declaring your UI in XML allows you to separate the presentation of your app from the code
that controls its behavior. Using XML files also makes it easy to provide different layouts for
different screen sizes and orientations (discussed further in Supporting Different Screen Sizes).
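For instance, a minimal layout resource (file name, text, and structure are illustrative) with a LinearLayout ViewGroup containing a TextView and a Button could look like this:

<?xml version="1.0" encoding="utf-8"?>
<!-- res/layout/activity_main.xml : a ViewGroup (LinearLayout) holding two View widgets -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Click me" />

</LinearLayout>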
Material design is a comprehensive guide for visual, motion, and interaction design across
platforms and devices. To use material design in your Android apps, follow the guidelines
defined in the material design specification and use the new components and styles available in
the material design support library. This page provides an overview of the patterns and APIs you
should use.
Android provides the following features to help you build material design apps:
To take advantage of the material features such as styling for standard UI widgets, and to
streamline your app's style definition, apply a material-based theme to your app.
To provide your users a familiar experience, use material's most common UX patterns:
• Promote your UI's main action with a Floating Action Button (FAB).
• Show your brand, navigation, search, and other actions with the App Bar.
• Show and hide your app's navigation with the Navigation Drawer.
• Use one of many other material components for your app layout and navigation, such as
collapsing toolbars, tabs, a bottom nav bar, and more. To see them all, check out the Material
Components for Android catalog
In addition to the X and Y properties, views in Android have a Z property. This new property
represents the elevation of a view, which determines:
• The size of the shadow: views with higher Z values cast bigger shadows.
• The drawing order: views with higher Z values appear on top of other views.
Elevation is often applied when your layout includes a card-based layout, which helps you
display important pieces of information inside cards that provide a material look. You can use
the CardView widget to create cards with a default elevation.
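For example (an illustrative snippet), on API 21 and higher a view can be given a Z value directly in the layout with the android:elevation attribute:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Elevated label"
    android:elevation="4dp" />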
Animations
The new animation APIs let you create custom animations for touch feedback in UI controls,
changes in view state, and activity transitions.
Touch feedback animations are built into several standard views, such as buttons. The new APIs
let you customize these animations and add them to your custom views.
Drawables
These new capabilities for drawables help you implement material design apps:
• Vector drawables are scalable without losing definition and are perfect for single-color
in-app icons. Learn more about vector drawables.
• Drawable tinting lets you define bitmaps as an alpha mask and tint them with a color at
runtime. See how to add tint to drawables.
• Color extraction lets you automatically extract prominent colors from a bitmap image.
See how to select colors with the Palette API.
Styles and Themes
Styles and themes on Android allow you to separate the details of your app design from the UI
structure and behavior, similar to stylesheets in web design.
A style is a collection of attributes that specify the appearance for a single View. A style can
specify attributes such as font color, font size, background color, and much more.
A theme is a collection of attributes that's applied to an entire app, activity, or view hierarchy—
not just an individual view. When you apply a theme, every view in the app or activity applies
each of the theme's attributes that it supports. Themes can also apply styles to non-view
elements, such as the status bar and window background.
Styles and themes are declared in a style resource file in res/values/, usually named styles.xml.
To create a new style or theme, open your project's res/values/styles.xml file. For each style
you want to create, follow these steps:
1. Add a <style> element with a name that uniquely identifies the style.
2. Add an <item> element for each style attribute you want to define.
The name in each item specifies an attribute you would otherwise use as an XML attribute in
your layout. The value in the <item> element is the value for that attribute.
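For example, the GreenText style applied by the TextView shown next could be defined in res/values/styles.xml roughly as follows (the parent and the attribute values are illustrative and assume the AppCompat library):

<resources>
    <style name="GreenText" parent="TextAppearance.AppCompat">
        <item name="android:textColor">#00FF00</item>
        <item name="android:textSize">16sp</item>
    </style>
</resources>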
<TextView
style="@style/GreenText"
... />
Each attribute specified in the style is applied to that view if the view accepts it. The view simply
ignores any attributes that it does not accept.
Buttons
A button consists of text or an icon (or both text and an icon) that communicates what action
occurs when the user touches it.
Depending on whether you want a button with text, an icon, or both, you can create the button in
your layout in three ways:
• With text, using the Button class:
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/button_text"
... />
• With an icon, using the ImageButton class:
<ImageButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:src="@drawable/button_icon"
android:contentDescription="@string/button_icon_desc"
... />
• With text and an icon, using the Button class with the android:drawableLeft attribute:
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/button_text"
android:drawableLeft="@drawable/button_icon"
... />
When the user clicks a button, the Button object receives an on-click event.
To define the click event handler for a button, add the android:onClick attribute to
the <Button> element in your XML layout. The value for this attribute must be the name of the
method you want to call in response to a click event. The Activity hosting the layout must then
implement the corresponding method.
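For example, for android:onClick="sendMessage" (the name later used in the borderless-button snippet), the hosting Activity would contain a public method like the following sketch; the body is illustrative, and imports of android.view.View and android.widget.Toast are assumed:

// The method named in android:onClick must be public, return Unit, and take a single View parameter.
fun sendMessage(view: View) {
    // Illustrative action: show a short message when the button is clicked.
    Toast.makeText(this, "Button clicked", Toast.LENGTH_SHORT).show()
}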
The appearance of your button (background image and font) may vary from one device to
another, because devices by different manufacturers often have different default styles for input
controls.
You can control exactly how your controls are styled using a theme that you apply to your entire
application. For instance, to ensure that all devices running Android 4.0 and higher use the Holo
theme in your app, declare android:theme="@android:style/Theme.Holo" in your
manifest's <application> element. Also read the blog post, Holo Everywhere for information
about using the Holo theme while supporting older devices.
One design that can be useful is a "borderless" button. Borderless buttons resemble basic buttons
except that they have no borders or background but still change appearance during different
states, such as when clicked.
To create a borderless button, apply the borderlessButtonStyle style to the button. For example:
<Button
android:id="@+id/button_send"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/button_send"
android:onClick="sendMessage"
style="?android:attr/borderlessButtonStyle" />
Checkboxes
Checkboxes allow the user to select one or more options from a set. Typically, you should
present each checkbox option in a vertical list.
When the user selects a checkbox, the CheckBox object receives an on-click event.
To define the click event handler for a checkbox, add the android:onClick attribute to
the <CheckBox> element in your XML layout. The value for this attribute must be the name of
the method you want to call in response to a click event. The Activity hosting the layout must
then implement the corresponding method.
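A Kotlin sketch of such a handler (the method name and the reaction are illustrative; android.widget.CheckBox is assumed to be imported):

// Referenced from the layout as android:onClick="onCheckboxClicked".
fun onCheckboxClicked(view: View) {
    val checked = (view as CheckBox).isChecked   // true if the box is now checked
    // React to the new state, e.g. enable or disable some option.
    println(if (checked) "checkbox selected" else "checkbox cleared")
}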
Radio Buttons
Radio buttons allow the user to select one option from a set.
To create each radio button option, create a RadioButton in your layout. However, because radio
buttons are mutually exclusive, you must group them together inside a RadioGroup. By grouping
them together, the system ensures that only one radio button can be selected at a time.
• RadioButton
• RadioGroup
When the user selects one of the radio buttons, the corresponding RadioButton object receives an
on-click event.
To define the click event handler for a button, add the android:onClick attribute to
the <RadioButton> element in your XML layout. The value for this attribute must be the name
of the method you want to call in response to a click event. The Activity hosting the layout must
then implement the corresponding method.
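For example (ids, labels, and the handler name are illustrative), two mutually exclusive options grouped inside a RadioGroup:

<RadioGroup
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <RadioButton
        android:id="@+id/radio_option_a"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Option A"
        android:onClick="onRadioButtonClicked" />

    <RadioButton
        android:id="@+id/radio_option_b"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Option B"
        android:onClick="onRadioButtonClicked" />

</RadioGroup>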
Toggle Buttons
A toggle button allows the user to change a setting between two states.
You can add a basic toggle button to your layout with the ToggleButton object. Android
4.0 (API level 14) introduces another kind of toggle button called a switch that provides
a slider control, which you can add with a Switch object. SwitchCompat is a version of the
Switch widget which runs on devices back to API 7.
Spinners
Spinners provide a quick way to select one value from a set. In the default state, a spinner
shows its currently selected value. Touching the spinner displays a dropdown menu with all
other available values, from which the user can select a new one.
You can add a spinner to your layout with the Spinner object. You should usually do so in your
XML layout with a <Spinner> element. For example:
<Spinner
android:id="@+id/planets_spinner"
android:layout_width="match_parent"
android:layout_height="wrap_content" />
To populate the spinner with a list of choices, you then need to specify
a SpinnerAdapter in your Activity or Fragment source code.
For instance, if the available choices for your spinner are pre-determined, you can
provide them with a string array defined in a string resource file:
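For example (the array name and values are illustrative, matching the planets_spinner id used above), the choices can be declared in a string-array resource and attached with ArrayAdapter.createFromResource() inside the Activity:

<!-- res/values/strings.xml -->
<string-array name="planets_array">
    <item>Mercury</item>
    <item>Venus</item>
    <item>Earth</item>
</string-array>

// In the Activity, after setContentView():
val spinner: Spinner = findViewById(R.id.planets_spinner)
ArrayAdapter.createFromResource(
    this,
    R.array.planets_array,
    android.R.layout.simple_spinner_item
).also { adapter ->
    adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item)
    spinner.adapter = adapter          // attach the adapter to the spinner
}

Selections can then be handled with an AdapterView.OnItemSelectedListener, whose callback is shown below.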
override fun onItemSelected(parent: AdapterView<*>, view: View?, pos: Int, id: Long) {
// An item was selected. You can retrieve the selected item using
// parent.getItemAtPosition(pos)
}
Pickers
Android provides ready-to-use dialogs for selecting a time or a date:
• DatePickerDialog
• TimePickerDialog
Once you've defined a DialogFragment like the one shown above, you can display the
time picker by creating an instance of the DialogFragment and calling show().
For example, here's a button that, when clicked, calls a method to show the dialog:
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/pick_time"
android:onClick="showTimePickerDialog" />
When the user clicks this button, the system calls the following method:
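A minimal sketch of that method, assuming the DialogFragment referred to above is implemented as a class named TimePickerFragment (the class itself is not reproduced here):

fun showTimePickerDialog(v: View) {
    // Create the (hypothetical) DialogFragment and show it with a unique tag.
    TimePickerFragment().show(supportFragmentManager, "timePicker")
}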
This method calls show() on a new instance of the DialogFragment defined above.
The show() method requires an instance of FragmentManager and a unique tag name for the
fragment.
Once you've defined a DialogFragment like the one shown above, you can display the
date picker by creating an instance of the DialogFragment and calling show().
For example, here's a button that, when clicked, calls a method to show the dialog:
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/pick_date"
android:onClick="showDatePickerDialog" />
When the user clicks this button, the system calls the following method:
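Analogously, a sketch of that method, assuming a hypothetical DialogFragment class named DatePickerFragment:

fun showDatePickerDialog(v: View) {
    DatePickerFragment().show(supportFragmentManager, "datePicker")
}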
This method calls show() on a new instance of the DialogFragment defined above.
The show() method requires an instance of FragmentManager and a unique tag name for
the fragment.
Tooltips
A tooltip is a small descriptive message that appears near a view when users long
press the view or hover their mouse over it. This is useful when your app uses an icon
to represent an action or piece of information to save space in the layout. This page
shows you how to add these tooltips on Android 8.0 (API level 26) and higher.
Some standard widgets display tooltips based on the content of the title or content
description properties. Starting in Android 8.0, you can specify the text displayed in the
tooltip regardless of the value of other properties.
To specify the tooltip text in your XML files, set the android:tooltipText attribute, as shown
in the following example:
<android.support.design.widget.FloatingActionButton
android:id="@+id/fab"
android:tooltipText="Send an email" />
To specify the tooltip text in your code, use the setTooltipText(CharSequence) method, as
shown in the following example:
val fab: FloatingActionButton = findViewById(R.id.fab)
fab.tooltipText = "Send an email"
Dialogs
A dialog is a small window that prompts the user to make a decision or enter additional
information. A dialog does not fill the screen and is normally used for modal events that require
users to take an action before they can proceed.
The Dialog class is the base class for dialogs, but you should avoid
instantiating Dialog directly. Instead, use one of the following subclasses:
AlertDialog
A dialog that can show a title, up to three buttons, a list of selectable items, or a custom
layout.
DatePickerDialog or TimePickerDialog
A dialog with a pre-defined UI that allows the user to select a date or time.
A dialog layout typically has three regions:
1. Title
This is optional and should be used only when the content area is occupied by a
detailed message, a list, or custom layout. If you need to state a simple message or
question (such as the dialog in figure 1), you don't need a title.
2. Content area
This can display a message, a list, or other custom layout.
3. Action buttons
There should be no more than three action buttons in a dialog.
Adding buttons
The set...Button() methods require a title for the button (supplied by a string resource) and
a DialogInterface.OnClickListener that defines the action to take when the user presses the
button.
Positive
You should use this to accept and continue with the action (the "OK" action).
Negative
You should use this to cancel the action.
Neutral
You should use this when the user may not want to proceed with the action, but doesn't
necessarily want to cancel. It appears between the positive and negative buttons. For
example, the action might be "Remind me later."
You can add only one of each button type to an AlertDialog. That is, you cannot have
more than one "positive" button.
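A Kotlin sketch of adding the three button types with AlertDialog.Builder (the message, button labels, and listener bodies are illustrative):

val builder = AlertDialog.Builder(this)
builder.setMessage("Delete this item?")
    .setPositiveButton("OK") { _, _ ->
        // Positive: accept and continue with the action.
    }
    .setNegativeButton("Cancel") { _, _ ->
        // Negative: cancel the action.
    }
    .setNeutralButton("Remind me later") { _, _ ->
        // Neutral: neither accept nor cancel.
    }
builder.create().show()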
Adding a list
There are three kinds of lists available with the AlertDialog APIs:
• A traditional single-choice list
• A persistent single-choice list (radio buttons)
• A persistent multiple-choice list (checkboxes)
To create a single-choice list like the one in figure 3, use the setItems() method:
override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
    return activity?.let {
        val builder = AlertDialog.Builder(it)
        builder.setTitle(R.string.pick_color)
            .setItems(R.array.colors_array,
                DialogInterface.OnClickListener { dialog, which ->
                    // The 'which' argument contains the index position
                    // of the selected item
                })
        builder.create()
    } ?: throw IllegalStateException("Activity cannot be null")
}
Beginning with Android 3.0 (API level 11), Android-powered devices are no longer
required to provide a dedicated Menu button. With this change, Android apps should
migrate away from a dependence on the traditional 6-item menu panel and instead
provide an app bar to present common user actions.
Although the design and user experience for some menu items have changed, the
semantics to define a set of actions and options is still based on the Menu APIs. This
guide shows how to create the three fundamental types of menus or action
presentations on all versions of Android:
Options menu and app bar
The options menu is the primary collection of menu items for an activity. It's where you
should place actions that have a global impact on the app, such as "Search," "Compose
email," and "Settings."
Context menu and contextual action mode
A context menu is a floating menu that appears when the user performs a long-click on
an element. It provides actions that affect the selected content or context frame.
The contextual action mode displays action items that affect the selected content
in a bar at the top of the screen and allows the user to select multiple items.
Popup menu
A popup menu displays a list of items in a vertical list that's anchored to the view that
invoked the menu. It's good for providing an overflow of actions that relate to specific
content or to provide options for a second part of a command. Actions in a popup menu
should not directly affect the corresponding content—that's what contextual actions are
for. Rather, the popup menu is for extended actions that relate to regions of content in
your activity.
For all menu types, it is good practice to define the menu and its items in an XML menu resource rather than in your activity's code, because:
• It separates the content for the menu from your application's behavioral code.
• It allows you to create alternative menu configurations for different platform versions,
screen sizes, and other configurations by leveraging the app resources framework.
To define the menu, create an XML file inside your project's res/menu/ directory and build
the menu with the following elements:
<menu>
Defines a Menu, which is a container for menu items. A <menu> element must be the root
node for the file and can hold one or more <item> and <group> elements.
<item>
Creates a MenuItem, which represents a single item in a menu. This element may
contain a nested <menu> element in order to create a submenu.
<group>
An optional, invisible container for <item> elements. It allows you to categorize menu
items so they share properties such as active state and visibility. For more information,
see the section about Creating Menu Groups.
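For example, a menu resource (file name, ids, and titles are illustrative) combining these elements:

<?xml version="1.0" encoding="utf-8"?>
<!-- res/menu/example_menu.xml -->
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/action_search"
        android:title="Search" />
    <group android:id="@+id/group_settings">
        <item
            android:id="@+id/action_settings"
            android:title="Settings" />
    </group>
</menu>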
• Pointer events are used to get basic contact info such as location and device type, extended info such
as pressure and contact geometry, and to support more complex interactions.
• Gesture events are used to handle static single-finger interactions such as tapping and press-and-hold
(double-tap and right-tap are derived from these basic gestures).
• Manipulation events are used for dynamic multi-touch interactions such as pinching and stretching,
and interactions that use inertia and velocity data such as panning/scrolling, zooming, and rotating.
Gestures
A gesture is the physical act or motion performed on, or by, the input device (finger, fingers, pen/stylus, mouse,
and so on). For example, to launch, activate, or invoke a command you would use a single finger tap for a touch
or touchpad device (equivalent to a left-click with a mouse, a tap with a pen, or Enter on a keyboard).
Here is the basic set of touch gestures for manipulating the UI and performing an interaction.
Tap (static gesture): One finger touches the screen and lifts up.
Press and hold (static gesture): One finger touches the screen and stays in place.
Slide (manipulation gesture): One or more fingers touch the screen and move in the same direction.
Swipe (manipulation gesture): One or more fingers touch the screen and move a short distance in the same direction.
Turn (manipulation gesture): Two or more fingers touch the screen and move in a clockwise or counter-clockwise arc.
Pinch (manipulation gesture): Two or more fingers touch the screen and move closer together.
Stretch (manipulation gesture): Two or more fingers touch the screen and move farther apart.
Manipulations
A manipulation is the immediate, ongoing reaction or response an object or UI has to a gesture. For example,
both the slide and swipe gestures typically cause an element or UI to move in some way.
The final outcome of a manipulation, how it is manifested by the object on the screen and in the UI, is the
interaction.
Interactions
Interactions depend on how a manipulation is interpreted and the command or action that results from the
manipulation. For example, objects can be moved through both the slide and swipe gestures, but the results
differ depending on whether a distance threshold is crossed. Slide can be used to drag an object or pan a view
while swipe can be used to select an item or display the AppBar.
This section describes some common interactions.
Learning
The press and hold gesture displays detailed info or teaching visuals (for example, a tooltip or context menu)
without committing to an action or command. Panning is still possible if a sliding gesture is started while the
visual is displayed. For more info, see Guidelines for visual feedback.
Commanding
The tap gesture invokes a primary action, for example launching an app or executing a command.
Panning
The slide gesture is used primarily for panning interactions but can also be used for moving, drawing, or
writing. Panning is a touch-optimized technique for navigating short distances over small sets of content within
a single view (such as the folder structure of a computer, a library of documents, or a photo album). Equivalent
to scrolling with a mouse or keyboard, panning is necessary only when the amount of content in the view
causes the content area to overflow the viewable area. For more info, see Guidelines for panning.
Zooming
The pinch and stretch gestures are used for three types of interactions: optical zoom, resizing, and Semantic
Zoom.
Optical zoom adjusts the magnification level of the entire content area to get a more detailed view of the
content. In contrast, resizing is a technique for adjusting the relative size of one or more objects within a
content area without changing the view into the content area. The top two images here show an optical zoom,
and the bottom two images show resizing a rectangle on the screen without changing the size of any other
objects. For more info, see Guidelines for optical zoom and resizing.
Semantic Zoom
Semantic Zoom is a touch-optimized technique for presenting and navigating structured data or content within
a single view (such as the folder structure of a computer, a library of documents, or a photo album) without the
need for panning, scrolling, or tree view controls. Semantic Zoom provides two different views of the same
content by letting you see more detail as you zoom in and less detail as you zoom out. For more information,
see Guidelines for Semantic Zoom.
Rotating
The rotate gesture simulates the experience of rotating a piece of paper on a flat surface. The interaction is
performed by placing two fingers on the object and pivoting one finger around the other or pivoting both
fingers around a center point, and swiveling the hand in the desired direction. You can use two fingers from the
same hand, or one from each hand. For more information, see Guidelines for rotation.
The swipe gesture reveals various command bars or the login screen.
App commands are revealed by swiping from the bottom or top edge of the screen. Use the AppBar to display
app commands.
System commands are revealed by swiping from the right edge, recently used apps are revealed by swiping
from the left edge, and swiping from the top edge to the bottom edge reveals docking or closing commands.
• What is functionality? It is the ability of the system to do the work for which it was intended. A task requires that many or most of the system's elements work in a coordinated manner to complete the job. If the elements have not been assigned the correct responsibilities or have not been endowed with the correct facilities for coordinating with other elements, the system will be unable to deliver the required functionality.
2. Business qualities
• Environment. The stimulus occurs within certain conditions. The system may
be in an overload condition or may be running when the stimulus occurs, or
some other condition may be true.
• Artifact. Some artifact is stimulated. This may be the whole system or some
pieces of it.
• Response. The response is the activity undertaken after the arrival of the
stimulus.
AVAILABILITY
• A system failure occurs when the system no longer delivers a service consistent
with its specification.
MODIFIABILITY
PERFORMANCE
• A performance scenario begins with a request for some service arriving at the
system. Satisfying the request requires resources to be consumed. While this is
happening the system may be simultaneously servicing other requests.
4. Assurance is the property that the parties to a transaction are who they
purport to be.
5. Availability is the property that the system will be available for legitimate
use.
6. Auditing is the property that the system tracks activities within it at levels
sufficient to reconstruct them.
TESTABILITY
• Software testability refers to the ease with which software can be made to demonstrate its faults through testing.
• More precisely, testability refers to the probability, assuming the software has at least one fault, that it will fail on its next test execution.
USABILITY
Usability is concerned with how easy it is for the user to accomplish a desired task
and the kind of user support the system provides. It can be broken down into the
following areas:
• Minimizing the impact of errors. What can the system do so that a user error
has minimal impact?
• Adapting the system to user needs. How can the user (or the system itself)
adapt to make the user's task easier?
• Increasing confidence and satisfaction. What does the system do to give the
user confidence that the correct action is being taken?