
UNIT II

INTRODUCTION: An embedded system is an electronic system that includes a single-chip microcomputer (microcontroller), such as the ARM Cortex-based Stellaris LM3S1968. It is configured to perform a specific, dedicated application. Software is programmed into the on-chip ROM of the single-chip computer. This software is not accessible to the user, and it solves only a limited range of problems. Here the microcomputer is embedded or hidden inside the system. Every embedded microcomputer system accepts inputs, performs computations, generates outputs, and runs in "real time."
For example, a typical automobile nowadays contains an average of ten microcontrollers. In fact, modern houses may contain as many as 150 microcontrollers, and on average a consumer now interacts with microcontrollers up to 300 times a day. The general areas that employ embedded systems cover every branch of day-to-day science and technology, namely communications, automotive, military, medical, consumer, machine control, etc.
Ex: cell phone, digital camera, microwave oven, MP3 player, portable digital assistant, automobile antilock brake system, etc.

Basics of embedded systems design:


Every embedded system consists of custom-built hardware components supported by a Central Processing Unit (CPU), which is the heart of a microprocessor (µP) or microcontroller (µC). A microcontroller is an integrated chip that comes with built-in memory, I/O ports, timers, and other components. Most embedded systems are built on microcontrollers, which run faster than a custom-built system with a microprocessor, because all components are integrated within a single chip. Operating systems play an important role in many embedded systems, but not all embedded systems use one. Only systems with high-end applications use an operating system, because running one requires a large memory capacity; this is not feasible in low-end applications like remote systems, digital cameras, MP3 players, robot toys, etc. The architecture of an embedded system with an OS can be depicted as a layered structure, as shown below. The OS provides an interface between the hardware and the application software.
In the case of embedded systems with an OS, once the application software is loaded into memory, the system runs the application without any host system.
Coming to the hardware details, an embedded system consists of the following important blocks:
• CPU (Central Processing Unit)
• RAM and ROM
• I/O devices
• Communication interfaces
• Sensors etc. (application-specific circuitry)
This hardware architecture can be shown by the following block diagram.
Central Processing Unit: A CPU is composed of an Arithmetic Logic Unit (ALU), a Control Unit (CU), and many internal registers that are connected by buses. The ALU performs all the mathematical operations (add, subtract, multiply, divide), logical operations (AND, OR), and shifting operations within the CPU. The timing and sequencing of all CPU operations are controlled by the CU, which is actually built of many selection circuits, including latches and decoders. The CU is responsible for directing the flow of instructions and data within the CPU and for continuously running program instructions step by step.

The CPU works in a cycle of fetching an instruction, decoding it, and executing it, known as the fetch-decode-execute cycle. The cycle begins when an instruction is fetched from the memory location pointed to by the Program Counter (PC) into the Instruction Register (IR) via the data bus.
For embedded system design, many factors influence CPU selection, e.g., the maximum size (number of bits) of a single ALU operand (8, 16, 32, or 64 bits), and the CPU clock frequency for timing control, i.e., the number of ticks (clock cycles) per second, measured in MHz.
Memory: Embedded system memory can be either on-chip or off-chip. On-chip memory access is much faster than off-chip memory access, but the size of on-chip memory is much smaller than that of off-chip memory. Usually, it takes at least two I/O ports acting as external address lines, plus a few control lines such as R/W and ALE, to enable extended memory. Generally, data is stored in RAM and the program is stored in ROM.
I/O Ports: The I/O ports are used to connect input and output devices. The common input devices for an embedded system include keypads, switches, buttons, knobs, and all kinds of sensors (light, temperature, pressure, etc.).
The output devices include Light Emitting Diodes (LED), Liquid Crystal Displays (LCD), printers, alarms, actuators, etc. Some devices support both input and output, such as communication interfaces including Network Interface Cards (NIC), modems, and mobile phones.
Communication Interfaces: To transfer data or to interact with other devices, embedded devices are provided with various communication interfaces like RS232, RS422, RS485, USB, SPI (Serial Peripheral Interface), SCI (Serial Communication Interface), Ethernet, etc.
Application Specific Circuitry: An embedded system sometimes receives input from a sensor or drives an actuator. In such situations certain signal-conditioning circuitry is needed. This hardware circuitry may contain ADCs, op-amps, DACs, etc. Such circuitry interacts with the embedded system to produce the correct output.
Power supply: Most embedded systems nowadays work on battery-operated supplies, because low power dissipation is always required. Hence the systems are designed to work with batteries.
Specialties of an Embedded System: An embedded system has certain specialties when compared to a normal computer system, a workstation, or a mainframe computer system.
(i). Embedded systems are dedicated to specific tasks, whereas PCs are generic computing platforms.
(ii). Embedded systems are supported by a wide array of processors and processor architectures.
(iii). Embedded systems are usually cost sensitive.
(iv). Embedded systems have real-time constraints.
(v). If an embedded system uses an operating system, it is most likely a real-time operating system (RTOS), not Windows 9x, Windows NT, Windows 2000, Unix, Solaris, etc.
(vi). The implications of software failure are much more severe in embedded systems than in desktop systems.
(vii). Embedded systems often have power constraints.
(viii). Embedded systems must be able to operate under extreme environmental conditions.
(ix). Embedded systems utilize fewer system resources than desktop systems.
(x). Embedded systems often store all their object code in ROM.
(xi). Embedded systems require specialized tools and methods to be designed efficiently, compared to desktop computers.
(xii). Embedded microprocessors often have dedicated debugging circuitry.
(xiii). Embedded systems have software upgrade capability.
(xiv). Embedded systems may have large user interfaces for real-time applications.

Recent trends in Embedded systems: With the fast developments in the semiconductor industry and VLSI technology, one can find tremendous changes in embedded system design in terms of processor speed, power, communication interfaces (including network capabilities), and software developments such as operating systems and programming languages.
Processor speed and Power: With the advancements in processor technology, embedded systems are nowadays designed with 16- and 32-bit processors that can work in real-time environments. These processors can perform high-speed signal processing, which has enabled high-definition communication devices like 3G mobiles. The recent developments in VLSI technology have also paved the way for low-power battery-operated devices, which are very handy and have long battery life. In addition, present-day embedded systems are provided with higher memory capacities, so many of them run compact operating systems such as Android.
Communication interfaces: Most present-day embedded systems are aimed at internet-based applications, so communication interfaces like Ethernet, USB, and wireless LAN have become very common resources in almost all embedded systems. Developments in memory technology have also helped in porting the TCP/IP protocol stack and HTTP server software onto embedded systems. Such embedded systems can provide a link between any two devices anywhere in the globe.
Operating systems: With recent software developments, there is considerable growth in the availability of operating systems for embedded systems. Mainly, new operating systems are being developed for real-time applications. There are both commercial RTOSes like VxWorks, QNX, and Windows CE, and open-source RTOSes like RTLinux. The Android OS in mobiles has revolutionized the embedded industry.
Programming Languages: There has also been remarkable development in programming languages. Languages like C++ and Java are now widely used in embedded application programming. For example, with a Java virtual machine on a mobile phone, one can download Java applets from a server and execute them on the phone.
In addition to these developments, nowadays we also find new devices like ASICs and FPGAs in the embedded system market. These hardware devices are popular as programmable and reconfigurable devices.
Design Constraints for mobile applications, hardware related:
The hardware architecture of an embedded system is very important, because it is one of the powerful tools that can be used to understand an embedded system's design or to resolve challenges faced while designing a new system. The hardware architecture of any embedded system consists of three sections, namely: core, Central Processing Unit (CPU), and peripherals.
The core is the component that executes instructions. The CPU contains the core and the other components that support the core in executing programs. Peripherals are the components that communicate with other systems or the physical world (ports, ADCs, DACs, watchdog timers, etc.). The core is separated from the other components by the system bus.
The CPU in an embedded system may be a general-purpose processor like a microcontroller or a special-purpose processor like a DSP (digital signal processor). Any CPU consists of an Arithmetic Logic Unit (ALU), a Control Unit (CU), and many internal registers that are connected by buses. The ALU performs all the mathematical operations (add, subtract, multiply, divide), logical operations (AND, OR), and shifting operations within the CPU.
The timing and sequencing of all CPU operations are controlled by the CU, which is actually built of many selection circuits, including latches and decoders. The CU is responsible for directing the flow of instructions and data within the CPU and for continuously running program instructions step by step.
There are many internal registers in the CPU.

The accumulator (A) is a special data register that stores the result of ALU operations. It can also be used as an operand. The Program Counter (PC) stores the memory location of the next instruction to be executed. The Instruction Register (IR) stores the current machine instruction to be decoded and executed.
The Data Buffer Registers store data received from memory or data to be sent to memory; they are connected to the data bus. The Address Register stores the memory location of the data to be accessed (read or written); it is connected to the address bus.
In an embedded system, the CPU may never stop; it runs forever. The CPU works in a cycle of fetching an instruction, decoding it, and executing it, known as the fetch-decode-execute cycle. The cycle begins when an instruction is fetched from the memory location pointed to by the PC into the IR via the data bus.
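
To make the register roles and the fetch-decode-execute cycle concrete, here is a minimal toy simulator in Java. The three-instruction instruction set, the opcodes, and the program are invented purely for illustration and do not correspond to any real CPU:

// Toy illustration of the fetch-decode-execute cycle (hypothetical 3-instruction ISA).
public class ToyCpu {
    static final int LOAD = 0, ADD = 1, HALT = 2;      // invented opcodes

    public static void main(String[] args) {
        int[] codeMemory = {LOAD, 5, ADD, 7, HALT};    // program: A = 5; A = A + 7; stop
        int pc = 0;                                    // Program Counter: next instruction
        int acc = 0;                                   // Accumulator: holds ALU results

        boolean running = true;
        while (running) {
            int ir = codeMemory[pc++];                 // FETCH into the Instruction Register
            switch (ir) {                              // DECODE: the CU selects an action
                case LOAD: acc = codeMemory[pc++]; break;        // EXECUTE: load operand
                case ADD:  acc = acc + codeMemory[pc++]; break;  // EXECUTE: ALU addition
                case HALT: running = false; break;
            }
        }
        System.out.println("Accumulator = " + acc);    // prints "Accumulator = 12"
    }
}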
The memory is divided into data memory and code memory. Most data is stored in Random Access Memory (RAM) and code is stored in Read-Only Memory (ROM). This is due to the RAM constraints of embedded systems and the memory organization. RAM is readable and writable volatile storage with faster access, but it is more expensive; it can be used to store either data or code. Once the power is turned off, all information stored in RAM is lost. The RAM chip can be SRAM (static) or DRAM (dynamic), depending on the manufacturer. SRAM is faster than DRAM, but more expensive.

ROM, EPROM, and Flash memory are all read-only-type memories often used to store code in an embedded system; the code does not change after it is loaded into memory. ROM is programmed at the factory and cannot be changed over time. Newer microcontrollers come with EPROM or Flash instead of ROM, and most microcontroller development kits come with EPROM as well. EPROM and Flash memory are easier to rewrite than ROM. EPROM is an Erasable Programmable ROM whose contents can be field-programmed by a special burner and erased with a UV light source. The size of EPROM ranges up to 32 KB in most embedded systems. Flash memory is an electrically erasable PROM that can be programmed from software, so developers don't need to physically remove it from the circuit to re-program it. It is much quicker and easier to re-write Flash than other types of EPROM. When the power comes on, the address of the first instruction in ROM is loaded into the PC; the CPU then fetches the instruction from the ROM location pointed to by the PC and stores it in the IR, starting the continuous fetch-and-execute cycle. The PC advances to the address of the next instruction depending on the length of the current instruction, or to the destination of a jump instruction.
The I/O ports are used to connect input and output devices. The common input devices for an embedded system include keypads, switches, buttons, knobs, and all kinds of sensors (light, temperature, pressure, etc.). The output devices include Light Emitting Diodes (LED), Liquid Crystal Displays (LCD), printers, alarms, actuators, etc. Some devices support both input and output, such as communication interfaces including Network Interface Cards (NIC), modems, and mobile phones.
Clock: The clock drives the CPU's instruction execution and the configuration of timers. For example, the 8051 clock cycle is (1/12)×10⁻⁶ second (1/12 µs) when the clock frequency is 12 MHz. A simple 8051 instruction takes 12 clock cycles (1 µs) to complete; of course, some multi-cycle instructions take more clock cycles.
A timer is a real-time clock for real-time programming. Every timer comes with a counter that can be configured by programs to count incoming pulses. When the counter overflows (resets to zero) it fires a timeout interrupt that triggers predefined actions. Many time delays can be generated by timers. For example, a timer counter configured to 24,000 will trigger the timeout signal in 24,000 × 1/12 µs = 2 ms.
In addition to time-delay generation, timers are also widely used in real-time embedded systems to schedule multiple tasks in multitasking programming. The watchdog timer is a special timing device that resets the system after a preset time delay in case of a system anomaly. The watchdog starts up automatically after the system powers up.
One needs to reboot a PC now and then due to various faults caused by hardware or software. An embedded system cannot be rebooted manually, because it is embedded inside its host system. That is why many microcontrollers come with an on-chip watchdog timer, which can be configured just like the counter in a regular timer. If the system gets stuck (for example, the power supply voltage goes out of range, or a regular timer fails to issue a timeout after reaching zero count), the watchdog eventually restarts the system to bring it back to a normal operational condition.
ADC & DAC:
Many embedded system applications need to deal with non-digital external signals such as electronic voltage, music or voice, temperature, pressure, and many other signals in analog form. A digital computer cannot process these data unless they are converted to digital format. The ADC is responsible for converting analog values to binary digits. The DAC is responsible for outputting analog signals for automation controls, such as DC motor or HVAC furnace control.
In addition to these peripherals, an embedded system may also have sensors, display modules like LCDs or touch-screen panels, debug ports, and communication peripherals like I2C, SPI, Ethernet, CAN, and USB for high-speed data transmission. Nowadays various sensors are also becoming an important part of real-time embedded system design. Sensors like temperature sensors, light sensors, PIR sensors, and gas sensors are widely used in application-specific circuitry.

Design Constraints for mobile applications, software related: To design an efficient embedded system, both hardware and software aspects are equally important. The software of an embedded system is mainly aimed at accessing the hardware resources properly. The software of an embedded system means both the operating system and the application software, but not every embedded system contains an operating system. For low-end applications, an operating system is not needed; in such cases the designer has to write the necessary software routines to access the hardware. The architecture of the software in an embedded system can be shown by the following figure.

The central part, or nucleus, of the operating system is the kernel. A kernel connects the application software to the hardware of an embedded system. The other important components of the OS are the device manager, communication software, libraries, and the file system. The kernel takes care of task scheduling, priorities, memory management, etc. It manages the tasks to achieve the desired performance of the system; it schedules the tasks and provides inter-process communication between different tasks.
The device manager manages the I/O devices through interrupts and device drivers. The device drivers provide the necessary interface between the application and the hardware. A device driver is a specific type of software developed to allow interaction with hardware devices. It constitutes an interface for communicating with the device through the specific system bus or communications subsystem that the hardware is connected to, providing commands to and receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications.

The communication software provides necessary protocols to make the embedded system
network enabled. This software integrates the upper layer protocols such as TCP/IP stack with
the operating system.
The Application Programming Interface (API) is used by the designer to write the application software. The API provides the function calls to access the operating system services.

Application Specific software: It sits above the OS. The application software is developed according to the features of the development tools available in the OS. These development tools provide the function calls to access the services of the OS. These function calls include creating a task, reading data from a port, writing data to memory, etc.
The various function calls provided by an operating system are:
i. To create, suspend, and delete tasks.
ii. To do task scheduling to provide a real-time environment.
iii. To create inter-task communication and achieve synchronization between tasks.
iv. To access the I/O devices.
v. To access the communication protocol stack.
The designer develops the application software based on these function calls.
Communication Software: To connect to the external world through the internet, an embedded system needs a communication interface. The communication software includes the Ethernet interface and the TCP/IP protocol suite. Nowadays even small embedded systems like mobile phones and PDAs are network-enabled through this TCP/IP support. The TCP/IP protocol suite is shown in the diagram below.

• Application Layer
• Transport Layer (TCP/UDP)
• IP Layer
• Data Link Layer
• Physical Layer

This suite consists of different layers: the application layer, the transport layer, the IP layer, etc. TCP means Transmission Control Protocol; it ensures that data is delivered to the application layer without any errors. UDP (User Datagram Protocol) provides a connectionless service without TCP's error control and flow control. This TCP/IP protocol suite helps in understanding the working of communication software packages.
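
To make the difference between the two transport services concrete, the following Java sketch sends the same message once over TCP (connection-oriented, reliable) and once over UDP (connectionless, no delivery guarantee). The host name and port number are placeholders, not part of any real service:

import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;

public class TransportDemo {
    public static void main(String[] args) throws Exception {
        // TCP: a connection is established first; the stack handles error and flow control.
        try (Socket tcp = new Socket("device.example.com", 9000)) {
            OutputStream out = tcp.getOutputStream();
            out.write("hello over TCP".getBytes());
        }

        // UDP: each datagram is sent independently, with no delivery or ordering guarantees.
        byte[] payload = "hello over UDP".getBytes();
        try (DatagramSocket udp = new DatagramSocket()) {
            udp.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("device.example.com"), 9000));
        }
    }
}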

Cross platform development: Sometimes the host computer used for developing the application software cannot be used to debug or compile the software. Then another system, which contains all of the necessary development tools (editors, compilers, assemblers, debuggers, etc.), may be used. Choosing another host system, other than the original host system, is known as cross-platform development. Common differences between host and target machines are a different operating system, different system boards, or a different CPU.
A cross-platform development environment allows you to maximize the use of all your resources. This can include everything from your workstations and servers to their disk space and CPU cycles.
Here the host machine is the machine on which you write and compile programs. A target machine may be another general-purpose computer, a special-purpose device employing a single-board computer, or any other intelligent device. Debugging is an important issue in cross-platform development: since you are usually not able to execute the binary files on the host machine, they must be run on the target machine. The flow chart for cross-platform development is shown below.

In this method, the source code is first developed on the host computer system, then compiled and linked using the cross-platform development tools. The code is then downloaded onto the target and debugged on the target system. If the code works properly, it is burned into the EPROM or Flash ROM, and finally the code is run on the target system. If the code is not correct, it is sent back to the development stage, where it is corrected.
Cross-compilation tools are very important for successful product development. The selection of these tools should be based upon the embedded system itself as well as the features needed to test and debug software remotely. The necessary tools for cross-platform development are:
• Cross-compiler
• Cross-assembler
• Cross-linker
• Cross-debugger
• Cross-compiled libraries
These components enable you to compile, link, and debug code for the target environment through the cross-compilation environment.

Boot Sequence: Booting means starting the system. An embedded system can be booted in one of the following ways:
i). Execute from ROM, using the RAM for data.
ii). Execute from RAM after loading the image from ROM.
iii). Execute from RAM after downloading from the host.
Normally, booting from ROM is the fastest process. The process for executing from ROM using the RAM for data is shown in the figure below.
Executing from ROM Using RAM for Data :

Some embedded devices have such limited memory resources that the program image executes directly out of the ROM. Sometimes the board vendor provides the boot ROM, and the code in the boot ROM does not copy instructions out to RAM for execution. In such cases, the data sections must still reside in RAM. The boot sequence for an image running from ROM is shown in the figure below.

Two CPU registers are important here: the Instruction Pointer (IP) register and the Stack Pointer (SP) register. The IP points to the next instruction (code in the .text section) that the CPU must execute, while the SP points to the next free address in the stack. The stack is created from a space in RAM, and the system stack pointer registers must be set appropriately at start-up.
The boot sequence for an image running from ROM is as follows:
i). The CPU's IP is hardwired to execute the first instruction in memory (the reset vector).
ii). The reset vector jumps to the first instruction of the .text section of the boot image. The .text section remains in ROM; the CPU uses the IP to execute it. This code, called bootstrap code, initializes the memory system, including the RAM.
iii). The .data section of the boot image is copied into RAM because it is both readable and writable.
iv). Space is reserved in RAM for the .bss section of the boot image because it is both readable and writable. There is nothing to transfer, because the content of the .bss section is empty.
v). Stack space is reserved in RAM.
vi). The CPU's SP register is set to point to the beginning of the newly created stack.
At this point, the boot completes. The CPU continues to execute the code in the .text section and
initializes all the hardware and software components until it is complete or until the system is
shut down.
Embedded system Development Tools: Basically, embedded tools are divided into two types: (i) hardware development tools and (ii) software development tools.
Hardware development tools: Hardware tools for embedded development include development or evaluation boards for specific processors, like FriendlyARM's Mini2440, Pandaboard, Beagleboard, and Craneboard. In addition, various other instruments like digital multimeters, logic analyzers, spectrum analyzers, and digital CROs are also required in embedded design.
The digital multimeter is used to measure voltages and currents and to check continuity in the circuits of an embedded system, because the embedded system also contains application-specific circuitry that sometimes requires debugging.
The logic analyzer is used to check the timing of the signals and their correctness.
The spectrum analyzer is helpful for analyzing signals in the frequency domain.
The digital CRO helps to display the output waveforms and also to store a portion of the waveforms.
Software development tools / testing tools: The software development tools include the operating system development suite, cross-platform development tools, ROM emulator, EPROM programmer, and In-Circuit Emulator (ICE). The operating system development suite consists of API calls to access the OS services; this suite can run on either Windows or Unix/Linux systems.
Among the cross-platform tools, the compiler generates object code from source code developed in high-level languages like C, C++, or Java. For Linux systems, a number of GNU tools are available.
The EPROM programmer is used for in-circuit programming by burning the code into the memory of the target system.
The Instruction Set Simulator (ISS) software creates a virtual version of the processor on the PC.
Assembler and Compiler: The binary code obtained by translating an assembly language
program using an assembler is smaller and runs faster than the binary code obtained by
translating a high level language using a compiler since the assembly language gives the
programmer complete control over the functioning of a processor. The advantage of using a high
level language is that a program written in a high level language is easier to understand and
maintain than a program written in assembly language. Hence time critical applications are
written in assembly language while complex applications are written in a high level language.
The cross-platform development tools should be compatible with the host machine, and depending upon the CPU family used for the target system, the toolset must be capable of generating code for the target machine. In the case of GNU development tools, a number of things need to work together to generate executable code for the target. The following tools must be available on the machine:
• Cross-compiler
• Cross-assembler
• Cross-linker
• Cross-debugger
• Cross-compiled libraries for the target host
• Operating-system-dependent libraries and header files for the target system
Simulator: A simulator is a software tool that runs on the host and simulates the behavior of the target's processor and memory. The simulator knows the target processor's architecture and instruction set. The program to be tested is read by the simulator, and as instructions are executed the simulator keeps track of the values of the target processor's registers and the target's memory. Simulators provide single-step and breakpoint facilities to debug the program.
Emulator: Another important tool is the ICE (In-Circuit Emulator), which emulates the CPU. An emulator is a hardware tool that helps in testing and debugging the program on the target. The target's processor is removed from the circuit and the emulator is connected in its place. The emulator drives the signals in the circuit in the same way as the target's processor, and hence the emulator appears to be the processor to all other components of the embedded system. Emulators also provide features such as single-stepping and breakpoints to debug the program.
Software emulators are software tools that can emulate a particular CPU. Using a software
emulator one can debug the code and find out CPU register values, stack pointers and other
information without having a real CPU. Software emulators are useful when we don’t have the
real hardware available for testing and debugging and want to see how the CPU will behave
when a program is run on it.

Architecting Mobile Applications


Mobile app architecture is a set of techniques and patterns that must be followed in order to build a fully structured mobile application. These patterns and requirements are formulated by keeping the vendor's requirements and industry standards in mind. The specific elements of the architecture are chosen based on the app's features and requirements.

A good mobile app architecture separates responsibilities into multiple layers and enforces good programming patterns and principles such as SOLID or KISS. Meeting these conditions allows you to accelerate development and makes future maintenance much easier, saving time and money. A wisely selected architecture, together with a platform-specific technology like Swift for iOS or Kotlin for Android, is the most effective way to resolve complex business problems in mobile projects.
Good architecture must be abstract enough that it can be applied to platforms such as iOS or Android. One of the most crucial features of a good architecture is the separation of responsibility layers.
The Multiple Layers of Mobile App Architecture Design
One of the most popular multilayer architectures is the three-layer architecture. The three main layers are:

• Presentation Layer
• Business Layer
• Data Access Layer

Presentation Layer
The presentation layer consists of the User Interface components and UI process components. The primary focus of this layer is how the application is presented to the end user. While designing this layer, app developers should determine the correct client type that is compliant with the infrastructure. The developer should also define how the mobile app will present itself in front of the end user; important things like themes, fonts, colors, etc. should be decided at this stage.

Business Layer
It represents the core of the mobile app, which exposes its functionality. The business logic layer can be deployed on a backend server and used remotely by the mobile application to reduce the load caused by the limited resources available on mobile devices. The layer mainly focuses on the business front; it includes workflows, business components, and entities beneath the hood.

Data Access Layer
This layer is created from the combination of data utilities, data access components, and service agents. The data access layer meets the application's requirements and facilitates secure data transactions. It is important to design this layer well so that it can scale in the future.
Another factor in designing this layer is selecting the correct data format and putting a strong validation technique in place, so that the app is protected from invalid data input. Mobile app developers should also focus on decoupling the business logic from the presentation-layer code.

Mobile App Architecture Usage
A clear and defined architecture makes the developer's work easier and faster, and gives developers better control over the work and data flow in the application. It not only makes things easier but also makes testing more efficient and increases the quality of the application.
In the image of the three-layer model, you can see that the implementation of each layer depends on its purpose or project scope. The presentation layer relies on the screen designs and their behavior. The business layer depends on what kind of data is provided by the data layer, and on how this data needs to be processed to match the presentation layer's requirements. The data layer is responsible for managing the sources of data, synchronizing them, and providing the data to the higher levels. The data layer has the most specific scope, making it the perfect starting point for optimization.
To start optimizing the data layer, it is important to select a programming pattern that solves common problems and makes work easier and faster. The optimal pattern for the data layer in mobile projects is the so-called Repository pattern.
The Repository pattern is one of the most popular patterns for creating enterprise-level applications. It is a clean, easy-to-use pattern that keeps developers from working directly with the data in the app, and it helps to create separate layers for database operations, business logic, and the app's UI. If a mobile app does not follow the Repository pattern, it may have the following problems:

• It is hard to implement database caching.
• Database operation code gets duplicated.
Using the Repository pattern brings the following advantages:

• The database access code can be reused.
• It is easy to implement domain logic.
• Your domain entities or business entities are strongly typed with annotations.
• Your database access code is centrally managed, so it is easy to implement database access policies such as caching.

The Repository pattern is one of many examples of patterns for data layers. For
large mobile projects, Repository pattern is a perfect solution because it resolves
the problem of managing multiple data sources and mapping data entities used by
business logic components.
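
As a concrete illustration, here is a minimal sketch of the pattern in Java. All of the names (User, RemoteUserApi, UserRepository) are hypothetical, invented for the example; the point is that business logic talks only to the repository interface, while caching and source selection stay hidden behind it:

import java.util.HashMap;
import java.util.Map;

// Hypothetical entity and remote data source, just enough to make the sketch compile.
class User {
    private final String id;
    User(String id) { this.id = id; }
    String getId() { return id; }
}

interface RemoteUserApi {
    User fetchUser(String id);
    void uploadUser(User user);
}

// The repository interface: business logic depends only on this abstraction.
interface UserRepository {
    User getUser(String id);
    void saveUser(User user);
}

// The implementation centralizes data-access policies such as caching.
class DefaultUserRepository implements UserRepository {
    private final Map<String, User> cache = new HashMap<>();
    private final RemoteUserApi api;

    DefaultUserRepository(RemoteUserApi api) { this.api = api; }

    @Override
    public User getUser(String id) {
        // Check the local cache first; fall back to the remote source on a miss.
        return cache.computeIfAbsent(id, api::fetchUser);
    }

    @Override
    public void saveUser(User user) {
        cache.put(user.getId(), user);
        api.uploadUser(user);
    }
}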

Important Factors to Consider While Developing Mobile App Architecture

1. Determining The Device Type
Smartphones come in different categories, and this is what you need to keep in mind whilst developing mobile app architecture. The type of smartphone is generally decided by the operating system it runs on. As you already know, Android smartphones are completely different from iPhones; these two are totally different categories, and this is a pivotal deciding factor when selecting the mobile app architecture. The other important device characteristics you should consider are:

• Screen Size & Resolution


• CPU Characteristics
• Memory
• Storage Capacity
• Availability of Development Tool Framework
Thus, all you need to keep in mind is to determine the type of device before
choosing the mobile application architecture.

2. Considering Bandwidth Scenarios
There may be times when internet connectivity is zero or very limited. In such cases, you should take into account the bandwidth scenario, the local internet network of the demographic region, and the region where your target audience is. Very low internet speed frustrates users, leads to a poor user experience, and eventually makes users abandon the app. Hence you should consider the worst possible internet network while developing the mobile app architecture.

3. Selecting the Optimal Navigation Method
Another salient factor is the app navigation method, since the needs and priorities of customers can be fulfilled by choosing the optimal one. Mobile app navigation has a large influence on user experience, so it is important to go with an optimal navigation method for the app. You can choose from a list of navigation methods:

• Single view
• Stacked navigation bar
• Scroll view
• Modular controller
• Gesture-based navigation
• Search-driven navigation
• Tab controller

Follow guidelines to understand the requirements of the user, considering different scenarios.

4. Stating User Interface (UI)
A confusing UI leads to the failure of an app. Users should be able to interact with the app seamlessly, so it is important to keep things simple. Make sure you do not pour all your creativity into the user interface; remember the rule of thumb that simplicity is the best medicine for designing a highly interactive and intuitive UI.
5. Real-time Updates vs Push Notifications
When deciding the correct app architecture for your app, ask yourself whether your
users need real-time updates or push notifications. Real-time updates can be a
compelling feature, but it might be an expensive feature. Plus, this feature also
consumes the phone’s battery and data.
Types of Mobile App:

There are three main app types that define app architecture:

• native apps;
• hybrid apps;
• mobile web apps.
Native apps
Native mobile apps are stored and run locally on a device. These apps are similar
to built-in apps like web browsers or mail and they can use all the features and
APIs of a mobile device. There are a large number of native apps on the app stores.

Native apps are built for a specific mobile platform with the use of definite
programming languages and frameworks. For example, for building an Android
app you’ll need Java and Android Studio. Therefore, if you want to run the same
app on the iOS platform, you’ll need to build a new app from scratch using tools
suitable for iOS like Swift and AppCode.

Native apps are fast, work offline, user-friendly, and work smoothly on suitable
devices. However, they require considerable investments of time and money into
development, need frequent upgrades, and are not flexible as you’ll have to
develop a new app once you decide to explore more mobile app platforms.

Hybrid apps
Hybrid apps are a solution to the problem that native apps function on only one platform. They are developed using web technologies and run within native apps, displaying their web-based content in a native app wrapper. Their content can be packaged in the app or accessed from a web server. Such apps therefore have access to the hardware of a device while being web-based, combining web and native screens. These apps can also be found in app stores.

Hybrid apps are usually much cheaper and faster to develop than native apps while
they can use native APIs such as contacts, camera, and so on. They have one
codebase for Android and iOS apps, meaning that you don’t need to develop two
apps from scratch for each platform. Hybrid apps are simpler to maintain than
native apps.

As for the downsides, they have connection limitations, can't work offline, and are much slower than native apps. It may be difficult to reach native functionality, as not all device features can be incorporated in your app. It is hard to maintain high and equal performance on both platforms, as doing so requires a lot of code modifications, resulting in a worse user experience than a native app provides.

Mobile web apps


Mobile web apps are completely based on web technology and are accessible through a URL in a browser. For more convenience, many mobile web app providers create icons that are placed on the home screen and can be launched from there. However, the app isn't installed on the device but bookmarked on the screen.

These apps are built with the help of HTML, JavaScript, and CSS technologies and
get automatically updated from the web without any submission process or vendor
approval.

Mobile web apps are highly compatible with any platform as they run in a browser,
as a result, they have a broader audience. They are easier and cheaper to maintain
as you need to edit or change the content or design only once and the changes get
implemented across all the mobile platforms.

However, mobile web apps don’t have access to native device features like GPS, cameras, and so on. They can have trouble with screen sizes, forcing software developers to make lots of adjustments. They can work offline only with very limited functionality. All of this has a negative effect on user experience.

Android Mobile App Architecture

This architecture allows your application to be independent of frameworks, databases, and more. Transitions between layers in such an Android mobile app architecture are carried out through Boundaries, that is, through two interfaces: one for the request and one for the answer. They are needed so that the inner layer does not depend on the outer layer (following the Dependency Rule) while still being able to transmit data to it.
In order for a dependency in such an Android mobile application architecture to be directed against the flow of data, the principle of dependency inversion is applied (the letter D in the abbreviation SOLID). That is, instead of Use Cases depending directly on the Presenter (which would violate the Dependency Rule), they depend on an interface in their own layer, and the Presenter must implement this interface.
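
A minimal Java sketch of this idea follows. The names (GetOrdersOutput, GetOrdersUseCase, OrdersPresenter) are hypothetical; the point is that the use case depends only on an interface declared in its own layer, and the presenter in the outer layer implements it:

import java.util.List;

// Output boundary, declared in the use-case (inner) layer.
interface GetOrdersOutput {
    void presentOrders(List<String> orders);
}

// Inner layer: knows nothing about the concrete Presenter (Dependency Rule holds).
class GetOrdersUseCase {
    private final GetOrdersOutput output;   // dependency inversion: an interface, not a class

    GetOrdersUseCase(GetOrdersOutput output) { this.output = output; }

    void fetchOrders(String userId) {
        List<String> orders = List.of("order-1", "order-2");  // stub data for illustration
        output.presentOrders(orders);       // data flows outward through the boundary
    }
}

// Outer layer: the Presenter implements the inner layer's interface.
class OrdersPresenter implements GetOrdersOutput {
    @Override
    public void presentOrders(List<String> orders) {
        System.out.println("Showing " + orders.size() + " orders");
    }
}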

iOS Mobile App Architecture

The standard iOS mobile app architecture can be divided into four blocks:

• Kernel level (Core OS) — works with the file system and controls the validity of the various certificates belonging to the applications. It is also responsible for the security of the entire system and contains low-level access to the elements of the device.
• Core Services — provides access to databases and file controls.
• Media level (Media) — contains tools that allow processing of most media data formats.
• Interface level (Cocoa Touch) — has many elements for creating mobile interfaces, and also provides the remaining layers with information coming from the user.

MVC (Model-View-Controller) and its derivatives are used to create a high-quality iOS mobile application architecture. Cocoa MVC, however, encourages you to write a "Massive View Controller", because the controller is so involved in the View life cycle that it is difficult to say it is a separate entity. Although you still have the opportunity to move some of the business logic and data conversion into the Model, when it comes to moving work out of the View, you have few options.

Problems That Occur When Ignoring Mobile App Architecture

Selecting the right mobile app architecture is a mandatory step, and it is one of the primary elements in the design and planning phase of software development. Due to a developer's negligence, rush, or lack of experience and knowledge, the concept of architecture is often overlooked.
The lack of architecture in apps causes a few major problems:

• The app will be difficult to develop and maintain.
• The app will be more error-prone.
• The code will be less readable.
• Without architecture or design patterns, the source code is hard to test, which results in missing unit tests for key functionality. The lack of tests in turn makes the software harder to maintain: no regression control, much harder refactoring, harder bug fixing, etc.

Developing an app or software without architecture or design patterns is like constructing a building without a foundation. Inexperienced developers may feel that skipping architecture speeds up the process at the start, but what seems faster at first soon turns out to be a dead end. Regardless of the size and complexity of the project, it is mandatory to consider the mobile application architecture to get the best results.
User Interfaces for mobile Apps

A mobile user interface (mobile UI) is the graphical and usually touch-sensitive display
on a mobile device, such as a smartphone or tablet, that allows the user to interact with the
device’s apps, features, content and functions.

Mobile user interface (UI) design requirements are significantly different from those for desktop
computers. The smaller screen size and touch screen controls create special considerations in UI
design to ensure usability, readability and consistency. In a mobile interface, symbols may be
used more extensively and controls may be automatically hidden until accessed. The symbols
themselves must also be smaller and there is not enough room for text labels on everything,
which can cause confusion.

Users have to be able to understand a command icon and its meaning whether through legible
text or comprehensible graphical representation. Basic guidelines for mobile interface design are
consistent across modern mobile operating systems.

Mobile UI design best practices include the following:

• The layout of the information, commands, and content in an app should echo those
of the operating system in placement, composition and colors. While apps may
diverge to some degree in style, consistency on most of these points allows users to
intuit, or at least quickly learn, how to use an interface.

• Click points must be usable for touch-based selection with a finger. This means a
click point can't be too small or narrow in any direction, to avoid unwanted selection
of nearby items, sometimes referred to as fat fingering.

• Maximize the content window size. On small screens, the UI should not
unnecessarily dominate screen size. It’s important to recognize that the object of a
UI is to facilitate use of content and apps, not just use of the interface.
• The number of controls or commands displayed at any given time should be
appropriate to avoid overwhelming the user or making viewing/interacting with
content confusing.

It can be challenging to strike a balance between attending to design considerations and dealing
with the specific requirements of different apps. Furthermore, an app UI should be customized
for each mobile OS, as that is the visual language the device user will be immersed in and
typically most familiar with. To that end, mobile OS developers generally provide resources to
familiarize UI designers with the way their OS does its interface.

Your app's user interface is everything that the user can see and interact with. Android provides a
variety of pre-built UI components such as structured layout objects and UI controls that allow
you to build the graphical user interface for your app. Android also provides other UI modules
for special interfaces such as dialogs, notifications, and menus.

Layouts
A layout defines the structure for a user interface in your app, such as in an activity. All elements in the layout are built using a hierarchy of View and ViewGroup objects. A View usually draws something the user can see and interact with, whereas a ViewGroup is an invisible container that defines the layout structure for View and other ViewGroup objects, as shown in figure 1.

Figure 1. Illustration of a view hierarchy, which defines a UI layout

The View objects are usually called "widgets" and can be one of many subclasses, such as Button or TextView. The ViewGroup objects are usually called "layouts" and can be one of many types that provide a different layout structure, such as LinearLayout or ConstraintLayout.

You can declare a layout in two ways:


• Declare UI elements in XML. Android provides a straightforward XML vocabulary that
corresponds to the View classes and subclasses, such as those for widgets and layouts.
You can also use Android Studio's Layout Editor to build your XML layout using a
drag-and-drop interface.
• Instantiate layout elements at runtime. Your app can create View and ViewGroup
objects (and manipulate their properties) programmatically — see the sketch below.

Declaring your UI in XML allows you to separate the presentation of your app from the code
that controls its behavior. Using XML files also makes it easy to provide different layouts for
different screen sizes and orientations (discussed further in Supporting Different Screen Sizes).
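
As a sketch of the second approach, the following activity builds a small hierarchy entirely at runtime instead of inflating an XML file. It is a minimal illustration, not a recommendation to abandon XML layouts:

import android.app.Activity;
import android.os.Bundle;
import android.widget.LinearLayout;
import android.widget.TextView;

public class RuntimeLayoutActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // A ViewGroup (LinearLayout) containing a single View (TextView),
        // created and configured programmatically.
        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);

        TextView text = new TextView(this);
        text.setText("Hello, runtime layout!");
        layout.addView(text);

        setContentView(layout);   // use this hierarchy as the activity's UI
    }
}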

Material Design for Android

Material design is a comprehensive guide for visual, motion, and interaction design across
platforms and devices. To use material design in your Android apps, follow the guidelines
defined in the material design specification and use the new components and styles available in
the material design support library. This page provides an overview of the patterns and APIs you
should use.

Android provides the following features to help you build material design apps:

• A material design app theme to style all your UI widgets


• Widgets for complex views such as lists and cards
• New APIs for custom shadows and animations

Material theme and widgets

To take advantage of the material features such as styling for standard UI widgets, and to
streamline your app's style definition, apply a material-based theme to your app.

To provide your users a familiar experience, use material's most common UX patterns:
• Promote your UI's main action with a Floating Action Button (FAB).
• Show your brand, navigation, search, and other actions with the App Bar.
• Show and hide your app's navigation with the Navigation Drawer.
• Use one of many other material components for your app layout and navigation, such as
collapsing toolbars, tabs, a bottom nav bar, and more. To see them all, check out the Material
Components for Android catalog.

Elevation shadows and cards

In addition to the X and Y properties, views in Android have a Z property. This new property
represents the elevation of a view, which determines:

• The size of the shadow: views with higher Z values cast bigger shadows.
• The drawing order: views with higher Z values appear on top of other views.

Elevation is often applied when your layout includes a card-based layout, which helps you
display important pieces of information inside cards that provide a material look. You can use
the CardView widget to create cards with a default elevation.
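
Elevation can also be set from code on API level 21 and higher. A minimal sketch, assuming the inflated layout contains a view with the hypothetical id card:

// Inside an Activity, after setContentView(): raise a view on the Z axis
// so it casts a larger shadow and draws above sibling views.
View card = findViewById(R.id.card);
card.setElevation(8f);   // value in pixels; convert from dp in real code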

Animations

The new animation APIs let you create custom animations for touch feedback in UI controls,
changes in view state, and activity transitions.

These APIs let you:

• Respond to touch events in your views with touch feedback animations.

• Hide and show views with circular reveal animations.

• Switch between activities with custom activity transition animations.

• Create more natural animations with curved motion.


• Animate changes in one or more view properties with view state change animations.

• Show animations in state list drawables between view state changes.

Touch feedback animations are built into several standard views, such as buttons. The new APIs
let you customize these animations and add them to your custom views.
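
For instance, a circular reveal is created with ViewAnimationUtils.createCircularReveal() (API level 21+). A minimal sketch, assuming a currently invisible view with the hypothetical id panel:

// Inside an Activity: reveal a hidden view with a circular animation.
View panel = findViewById(R.id.panel);
int cx = panel.getWidth() / 2;                    // center of the reveal circle
int cy = panel.getHeight() / 2;
float endRadius = (float) Math.hypot(cx, cy);     // radius that covers the whole view

Animator reveal =
        ViewAnimationUtils.createCircularReveal(panel, cx, cy, 0f, endRadius);
panel.setVisibility(View.VISIBLE);                // make the view visible, then animate
reveal.start();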

Drawables

These new capabilities for drawables help you implement material design apps:

• Vector drawables are scalable without losing definition and are perfect for single-color
in-app icons. Learn more about vector drawables.
• Drawable tinting lets you define bitmaps as an alpha mask and tint them with a color at
runtime. See how to add tint to drawables.
• Color extraction lets you automatically extract prominent colors from a bitmap image.
See how to select colors with the Palette API.
Styles and Themes

Styles and themes on Android allow you to separate the details of your app design from the UI
structure and behavior, similar to stylesheets in web design.

A style is a collection of attributes that specify the appearance for a single View. A style can
specify attributes such as font color, font size, background color, and much more.

A theme is a collection of attributes that's applied to an entire app, activity, or view hierarchy—
not just an individual view. When you apply a theme, every view in the app or activity applies
each of the theme's attributes that it supports. Themes can also apply styles to non-view
elements, such as the status bar and window background.

Styles and themes are declared in a style resource file in res/values/, usually named styles.xml.

Create and apply a style

To create a new style or theme, open your project's res/values/styles.xml file. For each style
you want to create, follow these steps:

1. Add a <style> element with a name that uniquely identifies the style.
2. Add an <item> element for each style attribute you want to define.
The name in each item specifies an attribute you would otherwise use as an XML attribute in
your layout. The value in the <item> element is the value for that attribute.

For example, if you define the following style:


<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="GreenText" parent="TextAppearance.AppCompat">
        <item name="android:textColor">#00FF00</item>
    </style>
</resources>

You can apply the style to a view as follows:

<TextView
    style="@style/GreenText"
    ... />
Each attribute specified in the style is applied to that view if the view accepts it. The view simply
ignores any attributes that it does not accept.

Buttons

A button consists of text or an icon (or both text and an icon) that communicates what action
occurs when the user touches it.

Depending on whether you want a button with text, an icon, or both, you can create the button in
your layout in three ways:

• With text, using the Button class:

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/button_text"
    ... />

• With an icon, using the ImageButton class:

<ImageButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/button_icon"
    android:contentDescription="@string/button_icon_desc"
    ... />
• With text and an icon, using the Button class with the android:drawableLeft attribute:

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/button_text"
    android:drawableLeft="@drawable/button_icon"
    ... />

Responding to Click Events

When the user clicks a button, the Button object receives an on-click event.

To define the click event handler for a button, add the android:onClick attribute to
the <Button> element in your XML layout. The value for this attribute must be the name of the
method you want to call in response to a click event. The Activity hosting the layout must then
implement the corresponding method.

For example, here's a layout with a button using android:onClick:

<?xml version="1.0" encoding="utf-8"?>
<Button xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/button_send"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/button_send"
    android:onClick="sendMessage" />
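
Within the Activity that hosts this layout, the method named by android:onClick must be public, return void, and take a View as its only parameter. A minimal sketch of the corresponding method (the body is a placeholder):

import android.app.Activity;
import android.view.View;

public class MainActivity extends Activity {
    // Called when the user taps the button declared with android:onClick="sendMessage".
    public void sendMessage(View view) {
        // Respond to the click here, e.g. start another activity.
    }
}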

Styling Your Button

The appearance of your button (background image and font) may vary from one device to
another, because devices by different manufacturers often have different default styles for input
controls.

You can control exactly how your controls are styled using a theme that you apply to your entire
application. For instance, to ensure that all devices running Android 4.0 and higher use the Holo
theme in your app, declare android:theme="@android:style/Theme.Holo" in your
manifest's <application> element. Also read the blog post, Holo Everywhere for information
about using the Holo theme while supporting older devices.

To customize individual buttons with a different background, specify the android:background attribute with a drawable or color resource. Alternatively, you can apply a style to the button, which works in a manner similar to HTML styles to define multiple style properties such as the background, font, size, and others. For more information about applying styles, see Styles and Themes.
Borderless button

One design that can be useful is a "borderless" button. Borderless buttons resemble basic buttons
except that they have no borders or background but still change appearance during different
states, such as when clicked.

To create a borderless button, apply the borderlessButtonStyle style to the button. For example:

<Button
android:id="@+id/button_send"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/button_send"
android:onClick="sendMessage"
style="?android:attr/borderlessButtonStyle" />

Checkboxes

Checkboxes allow the user to select one or more options from a set. Typically, you should
present each checkbox option in a vertical list.

Responding to Click Events

When the user selects a checkbox, the CheckBox object receives an on-click event.

To define the click event handler for a checkbox, add the android:onClick attribute to
the <CheckBox> element in your XML layout. The value for this attribute must be the name of
the method you want to call in response to a click event. The Activity hosting the layout must
then implement the corresponding method.

For example, here are a couple CheckBox objects in a list:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<CheckBox android:id="@+id/checkbox_meat"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/meat"
android:onClick="onCheckboxClicked"/>
<CheckBox android:id="@+id/checkbox_cheese"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/cheese"
android:onClick="onCheckboxClicked"/>
</LinearLayout>
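
The hosting Activity must then implement the onCheckboxClicked method named above. A minimal sketch using the ids from this layout (the comments are placeholders for real logic):

fun onCheckboxClicked(view: View) {
    if (view is CheckBox) {
        val checked: Boolean = view.isChecked
        when (view.id) {
            R.id.checkbox_meat ->
                if (checked) {
                    // Put some meat on the sandwich
                } else {
                    // Remove the meat
                }
            R.id.checkbox_cheese ->
                if (checked) {
                    // Add the cheese
                } else {
                    // Remove the cheese
                }
        }
    }
}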
Radio Buttons

Radio buttons allow the user to select one option from a set.

To create each radio button option, create a RadioButton in your layout. However, because radio
buttons are mutually exclusive, you must group them together inside a RadioGroup. By grouping
them together, the system ensures that only one radio button can be selected at a time.

Key classes are the following:

• RadioButton
• RadioGroup

Responding to Click Events

When the user selects one of the radio buttons, the corresponding RadioButton object receives an
on-click event.

To define the click event handler for a button, add the android:onClick attribute to
the <RadioButton> element in your XML layout. The value for this attribute must be the name
of the method you want to call in response to a click event. The Activity hosting the layout must
then implement the corresponding method.

For example, here are a couple RadioButton objects:

<?xml version="1.0" encoding="utf-8"?>
<RadioGroup xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical">
<RadioButton android:id="@+id/radio_pirates"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/pirates"
android:onClick="onRadioButtonClicked"/>
<RadioButton android:id="@+id/radio_ninjas"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/ninjas"
android:onClick="onRadioButtonClicked"/>
</RadioGroup>
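
As with checkboxes, the hosting Activity must implement the onRadioButtonClicked handler named in the layout. A minimal sketch using the ids above (the comments are placeholders):

fun onRadioButtonClicked(view: View) {
    if (view is RadioButton) {
        val checked = view.isChecked
        when (view.id) {
            R.id.radio_pirates ->
                if (checked) {
                    // The user picked pirates
                }
            R.id.radio_ninjas ->
                if (checked) {
                    // The user picked ninjas
                }
        }
    }
}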

Toggle Buttons
A toggle button allows the user to change a setting between two states.

You can add a basic toggle button to your layout with the ToggleButton object. Android
4.0 (API level 14) introduces another kind of toggle button called a switch that provides
a slider control, which you can add with a Switch object. SwitchCompat is a version of the
Switch widget which runs on devices back to API 7.

If you need to change a button's state yourself, you can use the CompoundButton.setChecked() or CompoundButton.toggle() methods.
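
For example, a short sketch (assuming a ToggleButton with the id togglebutton in the layout):

val toggle: ToggleButton = findViewById(R.id.togglebutton)
toggle.isChecked = true   // equivalent to setChecked(true)
toggle.toggle()           // flips the current state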

Responding to Button Presses


To detect when the user activates the button or switch, create a CompoundButton.OnCheckedChangeListener object and assign it to the button by calling setOnCheckedChangeListener(). For example:
val toggle: ToggleButton = findViewById(R.id.togglebutton)
toggle.setOnCheckedChangeListener { _, isChecked ->
    if (isChecked) {
        // The toggle is enabled
    } else {
        // The toggle is disabled
    }
}

Spinners
Spinners provide a quick way to select one value from a set. In the default state, a spinner
shows its currently selected value. Touching the spinner displays a dropdown menu with all
other available values, from which the user can select a new one.

You can add a spinner to your layout with the Spinner object. You should usually do so in your
XML layout with a <Spinner> element. For example:

<Spinner
android:id="@+id/planets_spinner"
android:layout_width="match_parent"
android:layout_height="wrap_content" />

To populate the spinner with a list of choices, you then need to specify
a SpinnerAdapter in your Activity or Fragment source code.

Populate the Spinner with User Choices


The choices you provide for the spinner can come from any source, but they must be provided through a SpinnerAdapter, such as an ArrayAdapter if the choices are available in an array or a CursorAdapter if the choices are available from a database query.

For instance, if the available choices for your spinner are pre-determined, you can
provide them with a string array defined in a string resource file:

<?xml version="1.0" encoding="utf-8"?>
<resources>
<string-array name="planets_array">
<item>Mercury</item>
<item>Venus</item>
<item>Earth</item>
<item>Mars</item>
<item>Jupiter</item>
<item>Saturn</item>
<item>Uranus</item>
<item>Neptune</item>
</string-array>
</resources>
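
With the array defined, you can populate the spinner from your Activity using an ArrayAdapter. A minimal sketch, assuming the planets_spinner id from the earlier layout:

val spinner: Spinner = findViewById(R.id.planets_spinner)
// Create an ArrayAdapter using the string array and a default spinner layout
ArrayAdapter.createFromResource(
    this,
    R.array.planets_array,
    android.R.layout.simple_spinner_item
).also { adapter ->
    // Specify the layout to use when the list of choices appears
    adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item)
    // Apply the adapter to the spinner
    spinner.adapter = adapter
}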

Responding to User Selections


When the user selects an item from the drop-down, the Spinner object receives an on-item-selected event.

To define the selection event handler for a spinner, implement the AdapterView.OnItemSelectedListener interface and the corresponding onItemSelected() callback method. For example, here's an implementation of the interface in an Activity:

class SpinnerActivity : Activity(), AdapterView.OnItemSelectedListener {

    override fun onItemSelected(parent: AdapterView<*>, view: View?, pos: Int, id: Long) {
        // An item was selected. You can retrieve the selected item using
        // parent.getItemAtPosition(pos)
    }

    override fun onNothingSelected(parent: AdapterView<*>) {
        // Another interface callback
    }
}

The AdapterView.OnItemSelectedListener interface requires the onItemSelected() and onNothingSelected() callback methods.

Then you need to specify the interface implementation by calling setOnItemSelectedListener():

val spinner: Spinner = findViewById(R.id.spinner)
spinner.onItemSelectedListener = this

Pickers
Android provides controls for the user to pick a time or pick a date as ready-to-use dialogs.
Each picker provides controls for selecting each part of the time (hour, minute, AM/PM) or date
(month, day, year). Using these pickers helps ensure that your users can pick a time or date
that is valid, formatted correctly, and adjusted to the user's locale.
We recommend that you use DialogFragment to host each time or date picker.
The DialogFragment manages the dialog lifecycle for you and allows you to display the pickers
in different layout configurations, such as in a basic dialog on handsets or as an embedded part
of the layout on large screens.

Key classes are the following:

• DatePickerDialog

• TimePickerDialog

Creating a Time Picker


To display a TimePickerDialog using DialogFragment, you need to define a fragment class that extends DialogFragment and return a TimePickerDialog from the fragment's onCreateDialog() method.

Extending DialogFragment for a time picker

To define a DialogFragment for a TimePickerDialog, you must:

• Define the onCreateDialog() method to return an instance of TimePickerDialog

• Implement the TimePickerDialog.OnTimeSetListener interface to receive a callback when the user sets the time.

class TimePickerFragment : DialogFragment(), TimePickerDialog.OnTimeSetListener {

    override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
        // Use the current time as the default values for the picker
        val c = Calendar.getInstance()
        val hour = c.get(Calendar.HOUR_OF_DAY)
        val minute = c.get(Calendar.MINUTE)

        // Create a new instance of TimePickerDialog and return it
        return TimePickerDialog(activity, this, hour, minute,
                DateFormat.is24HourFormat(activity))
    }

    override fun onTimeSet(view: TimePicker, hourOfDay: Int, minute: Int) {
        // Do something with the time chosen by the user
    }
}

Showing the time picker

Once you've defined a DialogFragment like the one shown above, you can display the
time picker by creating an instance of the DialogFragment and calling show().

For example, here's a button that, when clicked, calls a method to show the dialog:

<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/pick_time"
android:onClick="showTimePickerDialog" />

When the user clicks this button, the system calls the following method:

fun showTimePickerDialog(v: View) {
    TimePickerFragment().show(supportFragmentManager, "timePicker")
}

This method calls show() on a new instance of the DialogFragment defined above.
The show() method requires an instance of FragmentManager and a unique tag name for the
fragment.

Creating a Date Picker


Creating a DatePickerDialog is just like creating a TimePickerDialog. The only difference is
the dialog you create for the fragment.

To display a DatePickerDialog using DialogFragment, you need to define a fragment class that extends DialogFragment and return a DatePickerDialog from the fragment's onCreateDialog() method.

Extending DialogFragment for a date picker

To define a DialogFragment for a DatePickerDialog, you must:

• Define the onCreateDialog() method to return an instance of DatePickerDialog


• Implement the DatePickerDialog.OnDateSetListener interface to receive a callback when the user sets the date.

class DatePickerFragment : DialogFragment(), DatePickerDialog.OnDateSetListener {

    override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
        // Use the current date as the default date in the picker
        val c = Calendar.getInstance()
        val year = c.get(Calendar.YEAR)
        val month = c.get(Calendar.MONTH)
        val day = c.get(Calendar.DAY_OF_MONTH)

        // Create a new instance of DatePickerDialog and return it
        return DatePickerDialog(activity, this, year, month, day)
    }

    override fun onDateSet(view: DatePicker, year: Int, month: Int, day: Int) {
        // Do something with the date chosen by the user
    }
}

Showing the date picker

Once you've defined a DialogFragment like the one shown above, you can display the
date picker by creating an instance of the DialogFragment and calling show().

For example, here's a button that, when clicked, calls a method to show the dialog:

<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/pick_date"
android:onClick="showDatePickerDialog" />

When the user clicks this button, the system calls the following method:

fun showDatePickerDialog(v: View) {
    val newFragment = DatePickerFragment()
    newFragment.show(supportFragmentManager, "datePicker")
}

This method calls show() on a new instance of the DialogFragment defined above.
The show() method requires an instance of FragmentManager and a unique tag name for
the fragment.
Tooltips

A tooltip is a small descriptive message that appears near a view when users long
press the view or hover their mouse over it. This is useful when your app uses an icon
to represent an action or piece of information to save space in the layout. This page
shows you how to add these tooltips on Android 8.0 (API level 26) and higher.

Some scenarios, such as those in productivity apps, require a descriptive method of communicating ideas and actions. You can use tooltips to display a descriptive message, as shown in figure 1.

Figure 1. Tooltip displayed in an Android app.

Some standard widgets display tooltips based on the content of the title or content
description properties. Starting in Android 8.0, you can specify the text displayed in the
tooltip regardless of the value of other properties.

Setting the tooltip text


You can specify the tooltip text in a View by calling the setTooltipText() method. You can
set the tooltipText property using the corresponding XML attribute or API.

To specify the tooltip text in your XML files, set the android:tooltipText attribute, as shown
in the following example:

<android.support.design.widget.FloatingActionButton
android:id="@+id/fab"
android:tooltipText="Send an email" />

To specify the tooltip text in your code, use the setTooltipText(CharSequence) method, as
shown in the following example:
val fab: FloatingActionButton = findViewById(R.id.fab)
fab.tooltipText = "Send an email"
Dialogs
A dialog is a small window that prompts the user to make a decision or enter additional
information. A dialog does not fill the screen and is normally used for modal events that require
users to take an action before they can proceed.

The Dialog class is the base class for dialogs, but you should avoid
instantiating Dialog directly. Instead, use one of the following subclasses:

AlertDialog

A dialog that can show a title, up to three buttons, a list of selectable items, or a custom
layout.

DatePickerDialog or TimePickerDialog

A dialog with a pre-defined UI that allows the user to select a date or time.

Creating a Dialog Fragment


You can accomplish a wide variety of dialog designs—including custom layouts and
those described in the Dialogs design guide—by extending DialogFragment and creating
an AlertDialog in the onCreateDialog() callback method.
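
For example, here is a minimal sketch of such a fragment, following this pattern (the string resources and button actions are placeholders):

class ConfirmActionDialogFragment : DialogFragment() {
    override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
        return activity?.let {
            // Use the Builder class for convenient dialog construction
            val builder = AlertDialog.Builder(it)
            builder.setMessage(R.string.dialog_confirm_action)
                    .setPositiveButton(android.R.string.ok) { dialog, id ->
                        // User confirmed the action
                    }
                    .setNegativeButton(android.R.string.cancel) { dialog, id ->
                        // User cancelled the dialog
                    }
            // Create the AlertDialog object and return it
            builder.create()
        } ?: throw IllegalStateException("Activity cannot be null")
    }
}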

Building an Alert Dialog


The AlertDialog class allows you to build a variety of dialog designs and is often the only
dialog class you'll need. As shown in figure 2, there are three regions of an alert dialog:

1. Title
This is optional and should be used only when the content area is occupied by a
detailed message, a list, or custom layout. If you need to state a simple message or
question (such as the dialog in figure 1), you don't need a title.
2. Content area
This can display a message, a list, or other custom layout.
3. Action buttons
There should be no more than three action buttons in a dialog.
Adding buttons

To add action buttons like those in figure 2, call the setPositiveButton() and setNegativeButton() methods:
val alertDialog: AlertDialog? = activity?.let {
    val builder = AlertDialog.Builder(it)
    builder.apply {
        setPositiveButton(R.string.ok,
                DialogInterface.OnClickListener { dialog, id ->
                    // User clicked OK button
                })
        setNegativeButton(R.string.cancel,
                DialogInterface.OnClickListener { dialog, id ->
                    // User cancelled the dialog
                })
    }
    // Set other dialog properties
    ...

    // Create the AlertDialog
    builder.create()
}

The set...Button() methods require a title for the button (supplied by a string resource) and
a DialogInterface.OnClickListener that defines the action to take when the user presses the
button.

There are three different action buttons you can add:

Positive

You should use this to accept and continue with the action (the "OK" action).

Negative

You should use this to cancel the action.

Neutral

You should use this when the user may not want to proceed with the action, but doesn't
necessarily want to cancel. It appears between the positive and negative buttons. For
example, the action might be "Remind me later."
You can add only one of each button type to an AlertDialog. That is, you cannot have
more than one "positive" button.

Adding a list

There are three kinds of lists available with the AlertDialog APIs:

• A traditional single-choice list

• A persistent single-choice list (radio buttons)

• A persistent multiple-choice list (checkboxes)

To create a single-choice list like the one in figure 3, use the setItems() method:
override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
    return activity?.let {
        val builder = AlertDialog.Builder(it)
        builder.setTitle(R.string.pick_color)
                .setItems(R.array.colors_array,
                        DialogInterface.OnClickListener { dialog, which ->
                            // The 'which' argument contains the index position
                            // of the selected item
                        })
        builder.create()
    } ?: throw IllegalStateException("Activity cannot be null")
}

Adding a persistent multiple-choice or single-choice list

To add a list of multiple-choice items (checkboxes) or single-choice items (radio buttons), use the setMultiChoiceItems() or setSingleChoiceItems() methods, respectively, as sketched below.
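
Here is a minimal sketch of a multiple-choice list that records which items the user checks (R.array.toppings and the title string are placeholder resources):

override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
    val selectedItems = mutableSetOf<Int>() // Where we track the selected items
    return activity?.let {
        val builder = AlertDialog.Builder(it)
        builder.setTitle(R.string.pick_toppings)
                // Specify the list array, the items selected by default (null for none),
                // and the listener that receives callbacks when items are selected
                .setMultiChoiceItems(R.array.toppings, null) { dialog, which, isChecked ->
                    if (isChecked) {
                        // The user checked the item: add it to the selected items
                        selectedItems.add(which)
                    } else {
                        // The user unchecked the item: remove it
                        selectedItems.remove(which)
                    }
                }
                .setPositiveButton(android.R.string.ok) { dialog, id ->
                    // User clicked OK: save the selectedItems results somewhere
                }
        builder.create()
    } ?: throw IllegalStateException("Activity cannot be null")
}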
Menus

Menus are a common user interface component in many types of applications. To provide a familiar and consistent user experience, you should use the Menu APIs to present user actions and other options in your activities.

Beginning with Android 3.0 (API level 11), Android-powered devices are no longer
required to provide a dedicated Menu button. With this change, Android apps should
migrate away from a dependence on the traditional 6-item menu panel and instead
provide an app bar to present common user actions.

Although the design and user experience for some menu items have changed, the semantics to define a set of actions and options are still based on the Menu APIs. This guide shows how to create the three fundamental types of menus or action presentations on all versions of Android:

Options menu and app bar

The options menu is the primary collection of menu items for an activity. It's where you
should place actions that have a global impact on the app, such as "Search," "Compose
email," and "Settings."

See the section about Creating an Options Menu.

Context menu and contextual action mode

A context menu is a floating menu that appears when the user performs a long-click on
an element. It provides actions that affect the selected content or context frame.

The contextual action mode displays action items that affect the selected content
in a bar at the top of the screen and allows the user to select multiple items.

See the section about Creating Contextual Menus.

Popup menu
A popup menu displays a list of items in a vertical list that's anchored to the view that
invoked the menu. It's good for providing an overflow of actions that relate to specific
content or to provide options for a second part of a command. Actions in a popup menu
should not directly affect the corresponding content—that's what contextual actions are
for. Rather, the popup menu is for extended actions that relate to regions of content in
your activity.
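
For example, here is a minimal sketch that shows a popup menu anchored to the clicked view (R.menu.actions_menu is a placeholder menu resource, and the method is assumed to live in an Activity):

fun showPopup(v: View) {
    PopupMenu(this, v).apply {
        // Inflate the menu resource into the popup's Menu
        menuInflater.inflate(R.menu.actions_menu, menu)
        setOnMenuItemClickListener { item ->
            // Handle the selected item here
            true
        }
        show()
    }
}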

Defining a Menu in XML


For all menu types, Android provides a standard XML format to define menu items.
Instead of building a menu in your activity's code, you should define a menu and all its
items in an XML menu resource. You can then inflate the menu resource (load it as
a Menu object) in your activity or fragment.

Using a menu resource is a good practice for a few reasons:

• It's easier to visualize the menu structure in XML.

• It separates the content for the menu from your application's behavioral code.

• It allows you to create alternative menu configurations for different platform versions,
screen sizes, and other configurations by leveraging the app resources framework.

To define the menu, create an XML file inside your project's res/menu/ directory and build
the menu with the following elements:

<menu>

Defines a Menu, which is a container for menu items. A <menu> element must be the root
node for the file and can hold one or more <item> and <group> elements.

<item>

Creates a MenuItem, which represents a single item in a menu. This element may
contain a nested <menu> element in order to create a submenu.

<group>

An optional, invisible container for <item> elements. It allows you to categorize menu
items so they share properties such as active state and visibility. For more information,
see the section about Creating Menu Groups.

Here's an example menu named game_menu.xml:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@+id/new_game"
android:icon="@drawable/ic_new_game"
android:title="@string/new_game"
android:showAsAction="ifRoom"/>
<item android:id="@+id/help"
android:icon="@drawable/ic_help"
android:title="@string/help" />
</menu>
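
To use this menu as an activity's options menu, inflate it in onCreateOptionsMenu() and handle selections in onOptionsItemSelected(). A minimal sketch for the game_menu example above (the comments are placeholders):

override fun onCreateOptionsMenu(menu: Menu): Boolean {
    // Inflate the menu resource into the app bar's menu
    menuInflater.inflate(R.menu.game_menu, menu)
    return true
}

override fun onOptionsItemSelected(item: MenuItem): Boolean {
    return when (item.itemId) {
        R.id.new_game -> {
            // Start a new game
            true
        }
        R.id.help -> {
            // Show help
            true
        }
        else -> super.onOptionsItemSelected(item)
    }
}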

Touch events and Gestures


With touch interactions, your app can translate and use physical gestures to emulate the direct
manipulation of UI elements.
Touch interactions provide a natural, real-world experience when users interact with the elements on the
screen. By contrast, interacting with an object through its properties window or other dialog box is considered
indirect manipulation. Windows also supports touch interactions across input modes and devices, including
touch, mouse, and pen.
The Windows Runtime platform APIs support user interactions through three types of interaction events:
pointer, gesture, and manipulation.

• Pointer events are used to get basic contact info such as location and device type, extended info such
as pressure and contact geometry, and to support more complex interactions.
• Gesture events are used to handle static single-finger interactions such as tapping and press-and-hold
(double-tap and right-tap are derived from these basic gestures).
• Manipulation events are used for dynamic multi-touch interactions such as pinching and stretching,
and interactions that use inertia and velocity data such as panning/scrolling, zooming, and rotating.

Gestures
A gesture is the physical act or motion performed on, or by, the input device (finger, fingers, pen/stylus, mouse,
and so on). For example, to launch, activate, or invoke a command you would use a single finger tap for a touch
or touchpad device (equivalent to a left-click with a mouse, a tap with a pen, or Enter on a keyboard).
Here is the basic set of touch gestures for manipulating the UI and performing an interaction.

Name            Type                  Description
Tap             Static gesture        One finger touches the screen and lifts up.
Press and hold  Static gesture        One finger touches the screen and stays in place.
Slide           Manipulation gesture  One or more fingers touch the screen and move in the same direction.
Swipe           Manipulation gesture  One or more fingers touch the screen and move a short distance in the same direction.
Turn            Manipulation gesture  Two or more fingers touch the screen and move in a clockwise or counter-clockwise arc.
Pinch           Manipulation gesture  Two or more fingers touch the screen and move closer together.
Stretch         Manipulation gesture  Two or more fingers touch the screen and move farther apart.

Manipulations
A manipulation is the immediate, ongoing reaction or response an object or UI has to a gesture. For example,
both the slide and swipe gestures typically cause an element or UI to move in some way.
The final outcome of a manipulation, how it is manifested by the object on the screen and in the UI, is the
interaction.

Interactions
Interactions depend on how a manipulation is interpreted and the command or action that results from the
manipulation. For example, objects can be moved through both the slide and swipe gestures, but the results
differ depending on whether a distance threshold is crossed. Slide can be used to drag an object or pan a view
while swipe can be used to select an item or display the AppBar.
This section describes some common interactions.

Learning

The press and hold gesture displays detailed info or teaching visuals (for example, a tooltip or context menu)
without committing to an action or command. Panning is still possible if a sliding gesture is started while the
visual is displayed. For more info, see Guidelines for visual feedback.

Commanding
The tap gesture invokes a primary action, for example launching an app or executing a command.

Panning

The slide gesture is used primarily for panning interactions but can also be used for moving, drawing, or
writing. Panning is a touch-optimized technique for navigating short distances over small sets of content within
a single view (such as the folder structure of a computer, a library of documents, or a photo album). Equivalent
to scrolling with a mouse or keyboard, panning is necessary only when the amount of content in the view
causes the content area to overflow the viewable area. For more info, see Guidelines for panning.

Zooming
The pinch and stretch gestures are used for three types of interactions: optical zoom, resizing, and Semantic
Zoom.

Optical zoom and resizing

Optical zoom adjusts the magnification level of the entire content area to get a more detailed view of the
content. In contrast, resizing is a technique for adjusting the relative size of one or more objects within a
content area without changing the view into the content area. The top two images here show an optical zoom,
and the bottom two images show resizing a rectangle on the screen without changing the size of any other
objects. For more info, see Guidelines for optical zoom and resizing.
Semantic Zoom

Semantic Zoom is a touch-optimized technique for presenting and navigating structured data or content within
a single view (such as the folder structure of a computer, a library of documents, or a photo album) without the
need for panning, scrolling, or tree view controls. Semantic Zoom provides two different views of the same
content by letting you see more detail as you zoom in and less detail as you zoom out. For more information, see Guidelines for Semantic Zoom.

Rotating

The rotate gesture simulates the experience of rotating a piece of paper on a flat surface. The interaction is
performed by placing two fingers on the object and pivoting one finger around the other or pivoting both
fingers around a center point, and swiveling the hand in the desired direction. You can use two fingers from the
same hand, or one from each hand. For more information, see Guidelines for rotation.

Selecting and moving


The slide and swipe gestures are used in a cross-slide manipulation, a movement perpendicular to the panning
direction of the content area. This is interpreted as either a selection or, if a distance threshold is crossed, a
move (drag) interaction. This diagram describes these processes. For more info, see Guidelines for cross-slide.

Displaying command bars

The swipe gesture reveals various command bars or the login screen.
App commands are revealed by swiping from the bottom or top edge of the screen. Use the AppBar to display
app commands.

System commands are revealed by swiping from the right edge, recently used apps are revealed by swiping
from the left edge, and swiping from the top edge to the bottom edge reveals docking or closing commands.

Achieving Quality Constraints


Quality Attributes:
1. Functionality and Architecture

• Functionality and quality attributes are orthogonal.

• What is functionality? It is the ability of the system to do the work for which it was intended. A task requires that many or most of the system's elements work in a coordinated manner to complete the job. If the elements have not been assigned the correct responsibilities or have not been endowed with the correct facilities for coordinating with other elements, the system will be unable to do its job.

• Functionality may be achieved through the use of any of a number of possible structures. In fact, if functionality were the only requirement, the system could exist as a single monolithic module with no internal structure at all. Instead, it is decomposed into modules to make it understandable and to support a variety of other purposes. In this way, functionality is largely independent of structure. Software architecture constrains the allocation of functionality to structure when other quality attributes are important.

2. Architecture and Quality Attributes

• Achieving quality attributes must be considered throughout design, implementation, and deployment.

• Architecture is critical to the realization of many qualities of interest in a system, and these qualities should be designed in and can be evaluated at the architectural level.

• Architecture, by itself, is unable to achieve qualities. It provides the foundation for achieving quality, but this foundation will be to no avail if attention is not paid to the details.

• Types of quality attributes:

1. Qualities of the system

2. Business qualities

3. Qualities that are about the architecture itself

3. System Quality Attributes

QUALITY ATTRIBUTE SCENARIOS

A quality attribute scenario is a quality-attribute-specific requirement. It consists of six parts.

• Source of stimulus. This is some entity (a human, a computer system, or any other actuator) that generated the stimulus.
• Stimulus. The stimulus is a condition that needs to be considered when it
arrives at a system.

• Environment. The stimulus occurs within certain conditions. The system may
be in an overload condition or may be running when the stimulus occurs, or
some other condition may be true.

• Artifact. Some artifact is stimulated. This may be the whole system or some
pieces of it.

• Response. The response is the activity undertaken after the arrival of the
stimulus.

• Response measure. When the response occurs, it should be measurable in some fashion so that the requirement can be tested.

Quality attribute parts

AVAILABILITY

• Availability is concerned with system failure and its associated consequences.

• A system failure occurs when the system no longer delivers a service consistent
with its specification.

• Such a failure is observable by the system's users, either humans or other systems.

Availability General Scenarios

MODIFIABILITY

Modifiability is about the cost of change.


Modifiability General Scenario Generation

Portion of Scenario   Possible Values
Source                End user, developer, system administrator
Stimulus              Wishes to add/delete/modify/vary functionality, quality attribute, capacity
Artifact              System user interface, platform, environment; system that interoperates with target system
Environment           At runtime, compile time, build time, design time
Response              Locates places in architecture to be modified; makes modification without affecting other functionality; tests modification; deploys modification
Response Measure      Cost in terms of number of elements affected, effort, money; extent to which this affects other functions or quality attributes

Sample modifiability scenario

PERFORMANCE

• Performance is about timing. Events (interrupts, messages, requests from users, or the passage of time) occur, and the system must respond to them.
• One of the things that make performance complicated is the number of event
sources and arrival patterns.

• A performance scenario begins with a request for some service arriving at the
system. Satisfying the request requires resources to be consumed. While this is
happening the system may be simultaneously servicing other requests.

• An arrival pattern for events may be characterized as either periodic or stochastic.

Performance General Scenario Generation

Portion of Scenario   Possible Values
Source                One of a number of independent sources, possibly from within system
Stimulus              Periodic events arrive; sporadic events arrive; stochastic events arrive
Artifact              System
Environment           Normal mode; overload mode
Response              Processes stimuli; changes level of service
Response Measure      Latency, deadline, throughput, jitter, miss rate, data loss

Sample performance scenario


SECURITY

• Security is a measure of the system's ability to resist unauthorized usage while still providing its services to legitimate users.

• Security can be characterized as a system providing nonrepudiation, confidentiality, integrity, assurance, availability, and auditing.

1. Nonrepudiation is the property that a transaction (access to or modification of data or services) cannot be denied by any of the parties to it.

2. Confidentiality is the property that data or services are protected from unauthorized access.

3. Integrity is the property that data or services are being delivered as intended.

4. Assurance is the property that the parties to a transaction are who they
purport to be.

5. Availability is the property that the system will be available for legitimate
use.

6. Auditing is the property that the system tracks activities within it at levels
sufficient to reconstruct them.

Security General Scenario Generation

Portion of Scenario   Possible Values
Source                Individual or system that is correctly identified, identified incorrectly, or of unknown identity; internal or external; authorized or not authorized; with access to limited resources or vast resources
Stimulus              Tries to display data, change/delete data, access system services, or reduce availability of system services
Artifact              System services; data within system
Environment           Either online or offline, connected or disconnected, firewalled or open
Response              Authenticates user; hides identity of the user; blocks access to data and/or services; allows access to data and/or services; grants or withdraws permission to access data and/or services; records access/modifications or attempts to access/modify data/services by identity; stores data in an unreadable format; recognizes an unexplainably high demand for services, informs a user or another system, and restricts availability of services
Response Measure      Time/effort/resources required to circumvent security measures with probability of success; probability of detecting attack; probability of identifying individual responsible for attack or access/modification of data and/or services; percentage of services still available under denial-of-service attack; extent to which data/services can be restored; extent to which data/services are damaged and/or legitimate access is denied

Sample security scenario

TESTABILITY

• Software testability refers to the ease with which software can be made to
demonstrate its faults through testing.

• Testability refers to the probability, assuming that the software has at least one fault, that it will fail on its next test execution.

• For a system to be properly testable, it must be possible to control each component's internal state and inputs and then to observe its outputs.
Testability General Scenario Generation

Portion of Scenario   Possible Values
Source                Unit developer, increment integrator, system verifier, client acceptance tester, system user
Stimulus              Analysis, architecture, design, class, or subsystem integration completed; system delivered
Artifact              Piece of design, piece of code, complete application
Environment           At design time, at development time, at compile time, at deployment time
Response              Provides access to state values; provides computed values; prepares test environment
Response Measure      Percent of executable statements executed; probability of failure if fault exists; time to perform tests; length of longest dependency chain in a test; length of time to prepare test environment

Sample testability scenario

USABILITY

Usability is concerned with how easy it is for the user to accomplish a desired task
and the kind of user support the system provides. It can be broken down into the
following areas:

• Learning system features. If the user is unfamiliar with a particular system or a particular aspect of it, what can the system do to make the task of learning easier?
• Using a system efficiently. What can the system do to make the user more
efficient in its operation?

• Minimizing the impact of errors. What can the system do so that a user error
has minimal impact?

• Adapting the system to user needs. How can the user (or the system itself)
adapt to make the user's task easier?

• Increasing confidence and satisfaction. What does the system do to give the
user confidence that the correct action is being taken?

Usability General Scenario Generation

Portion of Scenario   Possible Values
Source                End user
Stimulus              Wants to learn system features; use system efficiently; minimize impact of errors; adapt system; feel comfortable
Artifact              System
Environment           At runtime or configure time
Response              System provides one or more of the following responses:
                      to support "learn system features": help system is sensitive to context; interface is familiar to user; interface is usable in an unfamiliar context
                      to support "use system efficiently": aggregation of data and/or commands; re-use of already entered data and/or commands; support for efficient navigation within a screen; distinct views with consistent operations; comprehensive searching; multiple simultaneous activities
                      to "minimize impact of errors": undo, cancel, recover from system failure, recognize and correct user error, retrieve forgotten password, verify system resources
                      to "adapt system": customizability; internationalization
                      to "feel comfortable": display system state; work at the user's pace
Response Measure      Task time, number of errors, number of problems solved, user satisfaction, gain of user knowledge, ratio of successful operations to total operations, amount of time/data lost
Sample usability scenario
