CSC 4310 Human Computer Interface Note


Yobe State University, Damaturu

Km 7 Gujba Road, P.M.B. 1144, Damaturu, Yobe State, Nigeria

DEPARTMENT OF COMPUTER SCIENCE

CSC 4310
Human Computer Interface

2023
Course Contents
Foundations of HCI, Principles of GUI, GUI toolkits;
Human-centred software evaluation and development;
GUI design and programming.
Foundations of HCI
Definition
The following definition is given by the Association for Computing Machinery (ACM)
"Human-computer interaction is a discipline concerned with the design, evaluation and
implementation of interactive computing systems for human use and with the study of major
phenomena surrounding them."
Human-computer interaction is thus the study of the interaction between people (otherwise
called users) and computers. This interaction occurs at the interface of software and hardware,
ranging from ordinary computer peripherals to large-scale mechanical systems such as those in
aircraft and power plants. HCI can also be regarded as the intersection of computer science,
the behavioural sciences, design and several other fields of study.

Overview
Since human-computer interaction studies a human and a machine in conjunction, it draws from
supporting knowledge on both the machine and the human side. On the machine side, techniques
in computer graphics, operating systems, programming languages, and development
environments are relevant. On the human side, communication theory, graphic and industrial
design disciplines, linguistics, social sciences, cognitive psychology, and human performance are
relevant. Engineering and design methods are also relevant.
The multidisciplinary nature of HCI enables people with different backgrounds to contribute to
its success. HCI is also sometimes referred to as man-machine interaction (MMI) or computer-
human interaction (CHI).

The goals of HCI Studies:


A basic goal of HCI study is to improve the interactions between users and computers by making
computers more usable and receptive to the user's needs in the following ways:
– Methodologies and processes for designing interfaces and their related styles (i.e., given a
task and a class of users, design the best possible interface within given constraints,
optimizing for a desired property such as learnability or efficiency of use)
– Methods or techniques for implementing interfaces (e.g. software toolkits and libraries;
efficient algorithms)
– Techniques for evaluating and comparing interfaces
– Developing new interfaces and interaction techniques
– Developing descriptive and predictive models and theories of interaction
– Designing systems that minimize the barrier between the human's cognitive model of what
they want to accomplish and the computer's understanding of the user's task.

Research
Part of research in human-computer interaction involves exploring easier-to-learn or more
efficient interaction techniques for common computing tasks. This includes inventing new
techniques and comparing existing techniques using the scientific method as follows:
1. Designing graphical user interfaces and web interfaces.
2. Developing new design methodologies,
3. Experimenting with new hardware devices,
4. Prototyping new software systems,
5. Exploring new paradigms for interaction, and
6. Developing models and theories of interaction.

Interaction technique
An interaction technique or user interface technique is a combination of input and output
consisting of hardware and software elements that provides a way for computer users to
accomplish a simple task. For example, one can go back to the previously visited page on a Web
browser by either clicking a button, hitting a key, performing a mouse gesture or uttering a
speech command.

The computing perspective of interaction technique:


Here, an interaction technique involves one or several physical input devices, including a piece
of code which interprets user input into higher-level commands, possibly producing user
feedback and one or several physical output devices. Consider for example, the process of
deleting a file using a contextual menu. This first requires a mouse and a screen (input/output
devices). Then, a piece of code needs to paint the contextual menu on the screen and animate the
selection when the mouse moves (user feedback). The software also needs to send a command to
the file system when the user clicks on the "delete" item (interpretation).
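The pipeline described above (input devices, user feedback, interpretation) can be sketched in code. The following is a minimal, hypothetical model rather than a real toolkit: the two handlers stand in for the code that paints the contextual menu and the code that interprets the user's click into a file-system command.

```python
import os

def handle_right_click(target_file, menu_items=("open", "rename", "delete")):
    """'Paint' the contextual menu for target_file (the user-feedback step)."""
    print(f"menu for {target_file}: {', '.join(menu_items)}")
    return list(menu_items)

def handle_menu_click(target_file, chosen_item):
    """Interpret a click on a menu item into a higher-level command."""
    if chosen_item == "delete":
        os.remove(target_file)            # the command sent to the file system
        return f"deleted {target_file}"   # feedback confirming the action
    return f"{chosen_item} not implemented"
```

A real window system would add pointer tracking and menu animation, but the division of labour is the same: devices supply events, code interprets them, and the interpretation ends in a command to the underlying system.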

The user view of interaction technique:


Here, an interaction technique is a way to perform a simple computing task and can be described
by the way of instructions or usage scenarios. For example, "right-click on the file you want to
delete, then click on the delete item".

The conceptual view of interaction technique:


Here, an interaction technique is an idea and a way to solve a particular user interface design
problem. It does not always need to be bound to a specific input or output device. For example,
menus can be controlled with many sorts of pointing devices.
Interaction techniques as conceptual ideas can be refined, extended, modified and combined. For
example, pie menus are a radial variant of contextual menus. Marking menus combine pie menus
with gestures. In general, a user interface can be seen as a combination of many interaction
techniques, some of which are not necessarily widgets.

Interaction styles
Interaction techniques that share the same metaphor or design principles can be seen as
belonging to the same interaction style. Examples are command line and direct manipulation user
interfaces.
More details are provided in subsequent chapters of this guide.

Paradigms and History


Paradigms are predominant theoretical frameworks or scientific world views, such as the
Aristotelian, Newtonian, and Einsteinian (relativistic) paradigms in physics. Understanding HCI
history is largely about understanding a series of paradigm shifts. The study of paradigms is
concerned with how an interactive system is developed and how its usability can be
demonstrated or measured. The history of interactive system design also provides paradigms for
usable designs.
Paradigms of interaction
Paradigms of interaction conceptually outline the arrival of new technologies creating a new
perception of the human-computer relationship. Some of these paradigm shifts can be traced in
the history of interactive technologies as follows:
Batch processing, Timesharing, Networking, Graphical display, Microprocessor, World Wide
Web (WWW) and Ubiquitous computing.

The initial paradigm started with batch processing, which signified impersonal computing. The
paradigm shifts commenced with timesharing systems, which signified interactive computing.
Principles of GUI
Survey of Human Computer Interaction Practices
Nature of Interactivity
Going down memory lane, remote computer interaction was once done through batch
processing, involving punched card stacks or large data files prepared in advance, with a long
wait for the line printer output. And if the result was not right, the wait continued indefinitely ...
But now most computing is truly interactive, with rapid feedback, and the user is in control most
of the time, with routine processing taken over by the computer. A typical computer system
interaction is carried out through devices such as the screen or monitor, keyboard, and mouse or
track pad. These devices exist in variations across desktop, laptop and mainframe computers and
Personal Digital Assistants (PDAs), and they dictate the styles of interaction that the system
supports. If we use different devices, then the interface will support a different style of
interaction. In order to understand the nature of human-computer interaction, one needs to
understand the computer system.

Basic Components of Human Computer Interaction


The components of Human Computer Interaction comprise the Interaction models that concern
translation between the user and the computer system, Ergonomics that describe the physical
characteristics of interaction, the Interaction styles that express the nature of user and system
dialog and finally the context of the social, organizational and the motivational aspect of
interaction.

The Interaction Models


The interaction models comprise: the terms of interaction, the Donald Norman model and the
interaction framework.

Terms of interaction
Domain: This is the area of work under study, e.g. graphic design.
Goal: This is what you want to achieve, e.g. to create a solid red triangle.
Task: This concerns how you go about doing it, ultimately in terms of operations or actions,
e.g. select the fill tool, click over the triangle.
Donald Norman's model
These are in seven stages as follows:
1. The user establishes the goal
2. The user formulates intention
3. The user specifies actions at interface
4. The user executes the action
5. The user perceives the system state
6. The user interprets the system state
7. The user evaluates the system state with respect to goal

Norman's model concentrates on the user's view of the interface: the execution and evaluation loop.

Interpretation
Goal: The user establishes the goal
Execution: The user formulates intention. The user specifies actions at interface and user
executes the action
Evaluation: The user perceives the system state. The user interprets the system state and the user
evaluates the system state with respect to goal.

Application of Donald Norman's model


Norman's model can be applied through:
The Gulf of Execution, which evaluates whether the user's formulation of actions matches the
actions allowed by the system, and the Gulf of Evaluation, which considers whether the user's
expectation of the changed system state matches the actual presentation of that state.

Interaction can harbour human errors, which may be slips or mistakes. With a slip, the user
understands the system and the goal and formulates the correct action, but performs the action
incorrectly. With a mistake, the user may not even have the right goal! To fix slips, better
interface design should be carried out, while to avoid mistakes the user should better understand
the system.

To avoid some of these human errors, the Abowd and Beale framework is adopted. The Abowd
and Beale framework is an extension of Norman's model and it has 4 parts, namely:
i. the user,
ii. the input,
iii. the system, and
iv. the output,
where each part has its own unique language.
If interaction is the translation between languages, and if there are problems in interaction, then
there would be problems in translation.

Using Abowd & Beale's model


The user's intentions are translated into actions at the interface, which are translated into
alterations of system state, reflected in the output display, and finally interpreted by the user.
As a general framework for understanding interaction, it shows that interaction is not restricted
to electronic computer systems, that all major components involved in interaction should be
identified, and that comparative assessment of systems is possible. The framework also provides
a useful abstraction.

Ergonomics
This considers both the physical aspects of interfaces and industrial interfaces. Ergonomics is
the study of the physical characteristics of interaction; it is also known as human factors.
Ergonomics is good at defining standards and guidelines for constraining the way we design
certain aspects of systems. Examples of ergonomic concerns include: the arrangement of
controls and displays, such as controls grouped according to function, frequency and sequence
of use; the surrounding environment, such as seating arrangements adaptable to cope with all
sizes of user; health issues, such as physical position, environmental conditions (temperature,
humidity), lighting and noise; and the use of colour, such as red for warning and green for okay,
with awareness of colour-blindness. The user interacts with the real world through interface
issues, feedback and delays.
Common Interaction styles
Two major aspects of interaction styles will be considered: interaction as a dialogue between
computer and user, and the distinct styles that dialogue can take. Both are expressed in the
following common forms of interfaces:
• Command line interface
• Menus
• Natural language
• Question and answer, and query dialogue
• Form-fills and spreadsheets
• WIMP
• Point and click
• Three-dimensional interfaces

Command line interface


This is the way of expressing instructions to the computer directly through the function keys,
single characters, short abbreviations, whole words, or a combination suitable for repetitive
tasks. The interface is better designed for expert users than novices because it offers direct access
to system functionality. However, the command names and abbreviations used should be
meaningful!
A typical example is the Unix system command line interface.
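The essence of a command line interface, as described above, is mapping short, meaningful abbreviations directly to system functionality. A minimal sketch of such an interpreter follows; the command set here is hypothetical and stands in for Unix-style commands, for illustration only.

```python
# Map short command names to actions, as a Unix-like shell does.
COMMANDS = {
    "ls": lambda args: "listing: " + ", ".join(args or ["."]),
    "rm": lambda args: "removing: " + ", ".join(args),
    "cp": lambda args: f"copying {args[0]} to {args[1]}",
}

def interpret(line):
    """Split a command line into name and arguments and dispatch it."""
    name, *args = line.split()
    action = COMMANDS.get(name)
    if action is None:
        return f"{name}: command not found"   # novices see this often!
    return action(args)
```

The directness that makes this style powerful for experts (one line, no navigation) is exactly what makes it hard for novices: nothing on screen hints at which names exist.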

Menus
A menu is a set of options displayed on the screen. Because the options are visible, menus rely
on recognition rather than recall, which makes them easier to use; the option names should
therefore be meaningful. Selection is done through numbers, letters, arrow keys, the mouse, or
a combination of these, e.g. mouse plus accelerators. Often, the options are hierarchically
grouped, and sensible grouping is needed.

Natural language
This is the language familiar to the user. It may be in form of speech recognition or a typed
natural language. Problems with in this kind of interaction are that the language may be vague,
ambiguous, and hard to be recognised. Design solutions to language interface problems are for
the user to try to understand a subset and pick on key words.

Query interfaces
These comprise question and answer interfaces, in which the user is led through the interaction
via a series of questions. Though restricted in functionality, this kind of interface is suitable for
novice users. It is often used in information systems.
Query languages (e.g. SQL): These are used to retrieve information from a database. They
require an understanding of the database structure and the language syntax, and hence some
expertise.
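The expertise a query language demands can be seen in a small example using Python's built-in sqlite3 module. The `students` table and its contents are invented for illustration; the point is that the user must know both the table structure and the SQL syntax to phrase the request.

```python
import sqlite3

# An in-memory database with a hypothetical table of student scores.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("Aisha", 72), ("Bello", 58), ("Chidi", 91)])

# Retrieving "everyone who passed" requires knowing the column names,
# the comparison syntax, and the ORDER BY clause:
rows = conn.execute(
    "SELECT name FROM students WHERE score >= 60 ORDER BY name"
).fetchall()
print(rows)  # [('Aisha',), ('Chidi',)]
```

A question-and-answer interface would instead lead the user to the same result through prompts, trading flexibility for ease of use.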

Form-fills and Spreadsheets


Form-fills are primarily designed for data entry or data retrieval. A form-fill is a screen like a
paper form, in which data is entered in the relevant places. It requires a good design and
obvious correction facilities. See the illustration below:

Example of a form-fill

Spreadsheets


Spreadsheets are sophisticated variations of form-filling, in which a grid of cells contains
values or formulae. A formula can involve the values of other cells, e.g. the sum of all cells in
a column. The user can enter and alter data, and the spreadsheet maintains consistency. The
first spreadsheet introduced was VisiCalc, followed by Lotus 1-2-3. Microsoft Excel is the most
common today.
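The core spreadsheet idea (cells holding either a value or a formula over other cells, re-evaluated to maintain consistency) can be sketched in a few lines. The grid below is a plain dictionary and the formula a function, both invented for illustration.

```python
# A toy grid: A1 and A2 hold values, A3 holds a formula over them.
cells = {
    "A1": 10,
    "A2": 32,
    "A3": lambda c: c["A1"] + c["A2"],   # formula: sum of the column
}

def evaluate(cells):
    """Resolve each cell in order, calling formulas with the grid so far."""
    resolved = {}
    for name, content in cells.items():
        resolved[name] = content(resolved) if callable(content) else content
    return resolved

print(evaluate(cells)["A3"])  # 42
```

Changing A1 and re-evaluating updates A3 automatically, which is the consistency a real spreadsheet maintains on every edit (a real one also handles arbitrary dependency order and detects cycles).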

WIMP Interface
This interface comprises Windows, Icons, Menus, and Pointers or Windows, Icons, Mice, and
Pull-down menus! The interface is the default style for majority of interactive computer systems,
especially PCs and desktop machines.
Elements of the WIMP interface: The elements include windows, icons, menus, and pointers.
In some other cases they may be buttons, toolbars, palettes, and dialog boxes. Understanding the
concept of 'Look and feel' WIMP systems have the same elements: as windows, icons., menus,
pointers, buttons, etc. but have different window systems that behave differently. For example,
Macintosh Operating System (MacOS) compared with Windows menus. The combination of the
appearance and the behaviour is the 'look and feel'

Windows: Windows are areas of the screen that behave as if they were independent. They can
contain text or graphics and can be moved or resized. They can overlap and obscure each other,
or can be laid out next to one another (tiled).

Icons: Icons are small pictures or images that represent objects or actions in the interface.
Windows can be 'iconised', that is, closed down to a small representation, so that many windows
remain accessible. Icons can be many and various, ranging from highly stylized to realistic
representations.

Menus: These are a choice of operations or services offered on the screen. The required option
is selected with the pointer. However, menus take a lot of screen space; this problem is partly
solved by pop-up menus that appear only when needed.
Kinds of menus: A menu bar normally sits at the top of the screen, and the menu drags down
from it.
1. Pull-down menu - mouse hold and drag down the menu
2. Drop-down menu - mouse click reveals the menu
3. Fall-down menu - the mouse just moves over the bar!

A contextual menu appears where you are, and pop-up menus present actions for the selected
object.
Pie menus are arranged in a circle, making it easier to select an item because of the larger
target area. Selection is also quicker because the pointer moves the same distance to any
option. Pie menus are, however, not widely used!
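The "same distance to any option" claim can be made concrete with a small calculation. The item height and pie radius below are assumed values for illustration; the comparison, not the numbers, is the point.

```python
ITEM_HEIGHT = 20   # pixels per row in a linear menu (assumed)
RADIUS = 30        # pie menu radius in pixels (assumed)

def linear_travel(item_index):
    """Pointer distance from the menu origin to the centre of a row."""
    return ITEM_HEIGHT * (item_index + 0.5)

def pie_travel(item_index):
    """Every pie slice sits at the same radius from the centre."""
    return RADIUS

# In an 8-item linear menu the last item is 150 px away; in a pie
# menu every item is 30 px away, whatever its position.
distances = [(linear_travel(i), pie_travel(i)) for i in range(8)]
```

Since pointing time grows with distance (and shrinks with target size), the constant short travel and wedge-shaped targets are exactly why pie menu selection is quicker.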

Cascading menus: These have a hierarchical menu structure in which a menu selection opens a
new menu, and so on, ad infinitum.

Keyboard accelerators: These are key combinations with the same effect as a menu item. They
operate in two modes:
1. active when the menu is open - usually the first letter, and
2. active when the menu is closed - usually Ctrl + letter.

Menus design issues: In order to design an effective menu, the following issues should be
considered:
• which kind to use
• what to include in menus at all
• words to use (in action or description)
• how to group items
• choice of keyboard accelerators

Palettes and tear-off menus:
• Palettes are little windows of actions, shown or hidden via a menu option, e.g. the available
shapes in a drawing package.
• In tear-off and pin-up menus, the menu 'tears off' to become a palette.

Pointers: Pointers are important WIMP-style components used to point at and select items.
They are activated by the use of a mouse, track pad, joystick, trackball, cursor keys or keyboard
shortcuts, and come in a wide variety of graphical images.

Point and click interfaces: Point and click interfaces are used in multimedia, web browsers, and
hypertext. You just click something such as icons, text links or location on map. It requires
minimal typing.

Scrollbars: Scrollbars allow the user to move the contents of the window up and down or from
side to side.

Title bars: Title bars describe the name of the window.

Buttons: A button is an individual and isolated region within a display that can be selected to
invoke an action. Special kinds exist: radio buttons, which present a set of mutually exclusive
choices, and check boxes, which present a set of non-exclusive choices.
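The two selection models behind these widgets can be sketched without any GUI toolkit: a radio group keeps exactly one choice at a time, while a check-box group keeps an independent set. The class and option names are hypothetical, for illustration only.

```python
class RadioGroup:
    """Mutually exclusive: selecting one option deselects the rest."""
    def __init__(self, options):
        self.options = options
        self.selected = options[0]       # one option is always current

    def select(self, option):
        self.selected = option           # previous choice is replaced

class CheckBoxGroup:
    """Non-exclusive: each option toggles independently."""
    def __init__(self, options):
        self.options = options
        self.selected = set()            # any subset may be selected

    def toggle(self, option):
        self.selected ^= {option}        # add if absent, remove if present

size = RadioGroup(["small", "medium", "large"])
size.select("large")                     # only "large" is now selected
styles = CheckBoxGroup(["bold", "italic"])
styles.toggle("bold"); styles.toggle("italic")   # both selected at once
```

A real toolkit wires these models to painted widgets, but the exclusive-versus-independent distinction is exactly the one the user sees.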

Toolbars: These are long lines of icons with fast access to common actions and are often
customizable: You can choose which toolbars to see and choose what options are on it.
Dialogue boxes: These are information windows that pop up to report an important event or to
request information. For example, when saving a file, a dialogue box is displayed to allow the
user to specify the filename and location; once the file is saved, the box disappears. Dialogue
boxes focus the interaction and form part of the look and feel.
Other types of interaction styles include speech-driven interfaces. The development of this kind
of interface is yet to become perfect and accurate, though it is rapidly improving. An example
is a speech-driven airline reservation dialogue, which relies on reliable "yes" and "no" answers,
with the system reflecting back its understanding: "you want a ticket from New York to
Boston?"

Three Dimensional Interfaces


These include virtual reality and 'ordinary' window systems enhanced with three-dimensional
effects such as highlighting and visual affordance; indiscriminate use can however be confusing!
There are also three-dimensional (3D) workspaces that provide extra virtual space, with light
and occlusion giving depth and distance effects. On typical computer displays, three-dimensional
images are projected in two dimensions. Three-dimensional graphics are currently mostly used
in computer games, art and computer-aided design (CAD). There have been several attempts at
making three-dimensional desktop environments, like Sun's Project Looking Glass. A three-
dimensional computing environment could also be used for collaborative work: for example,
scientists could study three-dimensional models of molecules in a virtual reality environment,
or engineers could work on assembling a three-dimensional model of an airplane.
The Technologies
The use of three-dimensional graphics has become increasingly common in mainstream
operating systems, but has mainly been confined to creating attractive interfaces (eye candy)
rather than serving functional purposes only possible using three dimensions. For example,
user switching may be represented by rotating a cube whose faces are each user's workspace,
and window management is represented in the form of Exposé on Mac OS X. In both cases, the
operating system transforms windows on the fly while continuing to update their content.
Interfaces for the X Window System have also implemented advanced three-dimensional user
interfaces through compositing window managers such as Beryl and Compiz using the AIGLX
or XGL architectures, allowing for the usage of OpenGL to animate the user's interactions with
the desktop.
Another branch in the three-dimensional desktop environment is the three-dimensional GUI that
takes the desktop metaphor a step further, like BumpTop, where a user can manipulate
documents and windows as if they were "real world" documents, with realistic movement and
physics.
The Zooming User Interface (ZUI) is a related technology that promises to deliver the
representation benefits of 3D environments without their usability drawbacks of orientation
problems and hidden objects. It is a logical advancement on the GUI, blending some three-
dimensional movement with two-dimensional or "2.5D" vector objects.

Context: Social and Organisational


These issues and concerns involve all possible interactions between a user and a system during
its lifecycle, including the development stage, use in context, and the impact of such use on
individuals, organizations, society, and future systems development.

Context Analysis: Context analysis includes understanding the technical, environmental and
social settings where the information systems will be used. It examines whether and how the
interaction between physical and social environment and the physiological and psychological
characteristics of the user would impact users interacting with the system. There are four aspects
in Context Analysis: physical context, technical context, organizational context, and social and
cultural context. Overall, context analysis can provide ideas for design factors such as metaphor
creation, selection and patterns of communications between users and the system.
Physical context: This considers where the tasks are carried out, what entities and resources
are implicated in task operation, and what physical structures and entities are necessary to
understand observed task action. For example, an ATM can be used in a mall, outside a bank
office, or in a night club. These environments provide different levels of lighting, crowdedness,
and noisiness. Thus the legibility of the screen, the use of audible devices for input or output,
or even the size of the working space to prevent people nearby from seeing the screen could be
designed differently.
Technical context: This considers the technology infrastructure: platforms, hardware and
system software, and wired or wireless network connections. For example, an e-commerce
website may be designed to allow access only to people with certain browser versions, or to
allow access from small-screen devices such as PDAs or mobile phones.

Organizational context: Organizational context may play different roles in internal and
external situations. For an organizational information system to be used by the
organization's own employees, organizational context analysis answers questions such as:
o What is the larger system where this information system is embedded?
o What are the interactions with other entities in the organization?
o What are the organizational policies or practice that may affect individual's
attitude and behavior towards using the system?
For example, assuming that Lotus Notes is used by an organization as a communication
and collaboration tool, management may depend on the tool to set up meetings by
checking employees' calendars for mutually available time slots. The effectiveness of
setting up meetings depends on whether employees use the tool, and how they use it; the
whether and how can be enforced by organizational policies.

Social and cultural context: What are the social or cultural factors that may affect user
attitudes and eventual use of the information system? In an e-commerce website
example, the website can be accessed from all over the world. It is thus a design
consideration whether the website allows access by people of any language and cultural
background who can pay by credit card with foreign currency exchange, or whether it is
only accessible to people who speak certain languages and are from certain cultures.

Interactions are also affected by other aspects of the social and organizational context, as follows:
• Other people: a desire to impress, competition among stakeholders, and fear of failure
• Motivation: fear, allegiance, ambition and self-satisfaction among employees, as against
motivation imposed by management
• Existing inadequate systems, which may cause frustration and lack of motivation.

The organizational, social and cultural context in which humans interact with IT is largely the
result of the broad adoption of IT by organizations and society to support organizational
functions and goals and to enhance society's development. For example, organizational
efficiency may be expected due to redesign of workflows among critical business units that is
affected by the implemented IT; satisfaction and retention of customers/clients are anticipated
due to accurate and fast information gathering and presentations, to name a few. Some of the
organizational or societal impacts may not be tangible or directly attributed to HCI
considerations. This assertion is in line with the issues of determining IT values in organizations
and societies. While each of these HCI concerns may have its own importance in different
situations in relation to human motivation, it would be helpful for designers to see an overview
picture of the potential HCI concerns and goals. The purpose of this picture is not to force every
IT to be compliant with all the HCI concerns, but to provide an overall framework so that
designers can use it as a roadmap and to apply it according to different situations.

Human-centred software evaluation and development
This briefly describes some less common technologies associated with human-computer
interaction. These are innovations that improve upon the user interface, particularly those
benefiting the disabled. Technologies such as the phonetic typewriter, earcons, auditory icons,
and recognition and gesture devices for the disabled and the elderly are described.

Critical Evaluation of Computer Based Technology


This section intends to describe multi-modal, multi-media and multi-sensory systems; to
introduce speech interfaces and the phonetic typewriter; to explain earcons and auditory icons
as important components of multi-modal systems; and to show that recognition and gesture
devices are essential for the elderly and disabled.

Multi-Sensory Systems
Here, more than one sensory channel is involved in interaction, as in sounds, text, hypertext,
animation, video, gestures and vision. They are used in a range of applications, particularly for
users with special needs and in virtual reality. The components of multi-sensory systems are
speech, non-speech sounds, and handwriting, together with their applications and principles.
Usable senses: The five senses (sight, sound, touch, taste and smell) are used by us every day,
and each is important on its own. Together, they provide a fuller interaction with the natural
world. Ideally, computers could use all the available senses, but in practice this is impossible
because computers rarely offer such a rich interaction. We can use sight, sound, and sometimes
touch, but we cannot yet use taste and smell.

Multi-modal and Multi-media systems


Multi-modal systems: These use more than one sense (or mode) of interaction, such as the
visual and aural senses. For example, a text processor may speak the words as well as echo
them to the screen.
Multi-media systems: These use a number of different media to communicate information. For
example, a computer-based teaching system may use video, animation, text and still images:
different media, all using the visual mode of interaction. They may also use sounds, both speech
and non-speech; two or more media are then using different modes.
Speech: Human beings have a great and natural mastery of speech, which makes it difficult to
appreciate its complexities, but it is an easy medium for communication.

Simple terminologies used to describe speech: Phonemes are the basic atomic units of speech,
and there are about 40 of them. A phoneme sounds slightly different depending on the context
it is in; these variants are called allophones. There are between 120 and 130 allophones in the
language, and they are formed into morphemes, the smallest units of language that have
meaning.
Prosody is the alteration in tone and quality: variations in emphasis, stress, pauses and pitch
that impart more meaning to sentences.
Co-articulation is the effect of context on the sound; it transforms phonemes into allophones.
Syntax is the term used for the structure of sentences, while semantics is the term used for the
meaning of sentences.
Problems in speech recognition: Different people speak differently; accent, intonation, stress,
idiom, volume, etc. all vary. The syntax of semantically similar sentences may also vary, while
background noises can interfere. People often say "ummm..." and "errr...", and words alone are
not enough: semantics are also needed. It requires intelligence to understand a sentence,
because the context of the utterance often has to be known, as well as information about the
subject and speaker. For example, even if "Errr... I, um, don't like this" is recognised, it is a
fairly useless piece of information on its own.

Speech Related Human-Interaction Technologies


The Phonetic Typewriter: This was developed for Finnish, a phonetic language. The machine,
trained on one speaker, will generalise the training to others. A neural network is trained to
cluster together similar sounds, which are then labelled with the corresponding character.
When recognising speech, the sounds uttered are allocated to the closest corresponding output,
and the character for that output is printed. A large dictionary of minor variations is required
to correct the general mechanism.
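The core recognition step (allocate a sound to the closest labelled cluster and print that cluster's character) can be sketched as a nearest-centroid classifier. The two-dimensional "sound features" and cluster centres below are invented for illustration; the real system used a self-organising neural network over spectral features.

```python
import math

# Labelled cluster centres: feature vector -> character (assumed values).
CENTRES = {(0.0, 1.0): "a", (1.0, 0.0): "k", (0.9, 0.9): "s"}

def recognise(sound):
    """Return the character of the cluster centre closest to the sound."""
    nearest = min(CENTRES, key=lambda centre: math.dist(centre, sound))
    return CENTRES[nearest]

# A sound near the "a" centre is labelled "a".
char = recognise((0.1, 0.9))
```

The need for a large correction dictionary follows directly from this scheme: a sound that falls nearest to the wrong centre is silently mislabelled, so known minor variations must be corrected afterwards.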

Usefulness of Speech Recognition


Speech recognition is useful for a single user or where limited vocabulary systems exist, for
example in computer dictation. In public and open places, limited vocabulary systems can work
satisfactorily, e.g. some voice-activated telephone systems. For the general user with wide
vocabulary systems, problems do occur. Its great potential value, however, manifests when the
user's hands are already occupied, as in driving or manufacturing, and particularly for users
with physical disabilities. Another advantage is that it is lightweight and suited to mobile
devices.

Speech Synthesis
This is the generation of speech. It is useful because it is a natural and familiar way of
receiving information. It is successful in certain constrained applications, where the user has
few alternatives and is particularly motivated to overcome problems. However, it has its own
problems, similar to those of speech recognition, particularly in prosody. Additional problems
can arise from intrusion, calling for the use of headphones, particularly due to noise in the
workplace. Its transient nature is a problem because speech is harder to review and browse
than text. Examples occur in screen readers, which read the textual display to the user, e.g. as
utilised by visually impaired people, and in warning signals, where spoken information is
sometimes presented to pilots whose visual and haptic skills are already fully occupied while
flying.

Sounds
Non-Speech Sounds: These are bongs, bangs, squeaks, clicks, etc. that are commonly used for
warnings and alarms. Audible key clicks have been found to reduce typing mistakes. Non-speech
sound is also important in video games, which become uninteresting without it.

Auditory Icons: These use natural, everyday sounds to represent interface objects and actions,
e.g. the SonicFinder for the Macintosh.

Earcons: These are synthetic sounds used to convey information.

Family earcons: Here, similar types of earcons represent similar classes of action or similar
objects. For example, the family of "errors" would contain syntax and operating system errors.
Earcons are easily grouped and refined due to their compositional and hierarchical nature, but
they are harder to associate with the interface task since there is no natural mapping.
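The compositional, hierarchical nature of family earcons can be illustrated with a small sketch: a family shares a base motif, and each member appends a distinguishing motif. The note names and event names below are illustrative assumptions, not a real earcon standard:

```python
# Hypothetical sketch of compositional earcons: every member of the
# "error" family starts with the same base motif, so a listener can
# recognise the class first, then the specific error.
ERROR_FAMILY = ["C4", "E4"]          # base motif shared by all errors

earcons = {
    "error/syntax": ERROR_FAMILY + ["G4"],   # family motif + distinguishing note
    "error/os":     ERROR_FAMILY + ["A4"],
}

def motif(event):
    """Return the sequence of notes to play for an interface event."""
    return earcons[event]

print(motif("error/syntax"))  # ['C4', 'E4', 'G4']
```

Because members are built by composition, new earcons are easy to add and refine within a family; the difficulty, as noted above, is that nothing in the sound naturally suggests "syntax error" to the user.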

Recognition and Gestures


Touch recognition: This comprises the following:
i. Haptic interaction, made up of cutaneous perception, which provides tactile sensation and
vibrations on the skin;
ii. Kinaesthetic perception, comprising movement, position, and force feedback.

Touch recognition also includes information on shape, texture, resistance, temperature, and
comparative spatial factors. Technologies for touch recognition include electronic Braille
displays and force-feedback devices, e.g. the Phantom, which conveys resistance and texture.

Handwriting recognition: Handwriting is another communication mechanism which we are used to
in day-to-day life. Handwriting consists of complex strokes and spaces. It is captured by a
digitising tablet, with each stroke transformed into a sequence of dots. Large tablets are
suitable for digitising maps and technical drawings; smaller devices, some incorporating thin
screens to display the information, include PDAs such as the Palm Pilot and tablet PCs. The
problems associated with handwriting recognition are personal differences in letter formation
and co-articulation effects. A breakthrough in this technology is recognising the stroke, not
just the bitmap, as in special 'alphabets' like Graffiti on PalmOS. The technology is usable
even without training, though many people still prefer keyboards!
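The capture step described above - a tablet turning pen movement into per-stroke sequences of dots - can be sketched as follows. The event format and function name are hypothetical, for illustration only:

```python
# Hypothetical sketch of how a digitising tablet might represent
# handwriting: a stroke is the sequence of (x, y) dots sampled between
# pen-down and pen-up, and a character is a list of such strokes.
def pen_trace_to_strokes(events):
    """Group raw tablet events into strokes, splitting at pen-up events."""
    strokes, current = [], []
    for ev in events:
        if ev == "PEN_UP":          # pen lifted: close the current stroke
            if current:
                strokes.append(current)
            current = []
        else:
            current.append(ev)      # ev is an (x, y) dot on the tablet
    if current:
        strokes.append(current)
    return strokes

# The letter 'T' drawn as two strokes: a horizontal bar, then a vertical stem.
events = [(0, 0), (4, 0), "PEN_UP", (2, 0), (2, 6), "PEN_UP"]
print(pen_trace_to_strokes(events))  # two strokes: the bar, then the stem
```

Recognisers that work on this stroke representation, rather than on the final bitmap, are what made single-stroke alphabets like Graffiti practical.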

Gesture technology: This can be found in various applications such as gestural input - e.g.
"put that there" - and sign language. The technology comprises data gloves and position-sensing
devices such as the MIT Media Room. Gesture provides the benefit of a natural form of
interaction by pointing, and it can enhance communication between signing and non-signing
users. The problems with gesture interaction are that it is user dependent, since each user
gestures differently, and that co-articulation effects again arise.

Devices for the Elderly and Disabled


The development of HCI technology has helped users with disabilities as follows:
Visual impairment: Use of screen readers and the SonicFinder
Hearing impairment: Use of text communication, gesture and captions
Physical impairment: Use of speech input and output, eye gaze, gesture, predictive systems
(e.g. the reactive keyboard)
Speech impairment: Use of speech synthesis and text communication
Dyslexia: Use of speech input and output
Autism: Use of communication and education devices
Older people: Use of disability aids, memory aids, and communication tools to prevent social
isolation
Children: Use of appropriate input and output devices for education, games and fun
Cultural differences: The interpretation of interface features - e.g. the interpretation and
acceptability of language, cultural symbols, gesture and colour - is influenced by nationality,
generation, gender, race, sexuality, class, religion, political persuasion, etc.
Since the basic goal of HCI study is to improve the interactions between users and computers
by making computers more usable and receptive to the user's needs, there is continuous
research in human-computer interaction that explores easier-to-learn or more efficient
interaction techniques for common computing tasks. This includes inventing new techniques and
comparing them with existing ones using scientific methods.
