Virtual Keyboard: P-ISM


A projection keyboard is a virtual keyboard that can be projected and touched on any surface. The keyboard watches finger movements and translates them into keystrokes on the device. Most systems can also function as a virtual mouse or even as a virtual piano.[1] A proposed system called the P-ISM would combine the technology with a small video projector to create a portable computer the size of a fountain pen.[2]
How a projection keyboard generally works:

1. A laser or beamer projects a visible virtual keyboard onto a level surface.
2. A sensor or camera in the projector picks up finger movements.[3]
3. The detected coordinates determine the actions or characters to be generated.
Some devices use a second (invisible infrared) beam:

1. An invisible infrared beam is projected just above the virtual keyboard.
2. A finger makes a keystroke on the virtual keyboard. This breaks the infrared beam, and infrared light is reflected back to the projector.
3. The reflected infrared beam passes through an infrared filter to the camera.
4. The camera photographs the angle of the incoming infrared light.
5. A sensor chip determines where the infrared beam was broken.
6. The detected coordinates determine the actions or characters to be generated (a minimal mapping sketch follows this list).
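As a rough illustration of the last step, here is a minimal sketch in Python of mapping a detected surface coordinate to a key. The layout table and key pitch are made-up values, not taken from any real product:

```python
# Minimal sketch of mapping a detected (x, y) coordinate to a key.
# The layout fragment and the 19 mm key pitch are hypothetical.

KEY_WIDTH_MM = 19.0
KEY_HEIGHT_MM = 19.0

# Row-major fragment of a QWERTY template (illustrative only).
LAYOUT = [
    list("QWERTYUIOP"),
    list("ASDFGHJKL"),
    list("ZXCVBNM"),
]

def key_at(x_mm: float, y_mm: float) -> str | None:
    """Return the key under the detected fingertip coordinate, or None."""
    row = int(y_mm // KEY_HEIGHT_MM)
    col = int(x_mm // KEY_WIDTH_MM)
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
        return LAYOUT[row][col]
    return None

print(key_at(25.0, 5.0))  # -> 'W'
```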
An optical virtual keyboard[3] was invented and patented by IBM engineers in 1992. It optically detects and analyses human hand and finger motions and interprets them as operations on a physically non-existent input device, such as a surface with painted or projected keys. In this way it can emulate unlimited types of manually operated input devices (mouse, keyboard, etc.). All mechanical input units can be replaced by such virtual devices, optimized for the current application and for the user's physiology, while maintaining the speed, simplicity and unambiguity of manual data input.
In 2002, the start-up company Canesta developed a projection keyboard
using their proprietary "electronic perception technology".[4][5][6] The company
subsequently licensed the technology to Celluon of Korea.[7]
Contents

1 Projection keyboards connectivity
2 How laser keyboards work
   2.1 Template projection (Projection module)
   2.2 Reference plane illumination (Micro-illumination Module™)
   2.3 Map reflection coordinates (Sensor module)
   2.4 Interpretation and communication (Sensor module)
3 References
Projection keyboards connectivity

Projection keyboards connect to the devices they are used with through either Bluetooth or USB.
The Bluetooth projection keyboard is a wireless virtual keyboard: a pocket-size device that projects a full-size keyboard onto any flat surface.[8]
Bluetooth dongle technology enables point-to-multipoint connectivity between the projection keyboard and other Bluetooth devices, such as PCs, PDAs and mobile phones. Bluetooth is an open specification for wireless data transmission that operates on the globally available 2.4 GHz radio frequency.[9]
How the Bluetooth projection keyboard is connected to a device varies with the specific laptop, phone or computer the user intends to use it with. Connectivity instructions normally come with the product and basically consist of turning on the Bluetooth connection on the device and then turning on the keyboard.
The USB projection keyboard works like a regular USB keyboard. The connection between the virtual keyboard and the device is made through a USB port, which is available on every computer, laptop and other device compatible with the projection keyboard. Connection instructions likewise come with the product and the manufacturer's specifications, and mainly consist of plugging in the device (plug and play).
How laser keyboards work
Laser keyboards use laser and infrared technology to create the virtual keyboard and to project the keyboard image (often described as a hologram) onto a flat surface.[10] The projection is realized in four main steps via three modules: the projection module, the sensor module and the illumination module. The main devices and technologies used are a diffractive optical element, a red laser diode, a CMOS camera, a sensor chip and an infrared (IR) laser diode.
Template projection (Projection module)
A template produced by a specially designed, highly efficient holographic element and a red diode laser is projected onto the adjacent interface surface.[11] The template is not, however, involved in the detection process; it serves only as a reference for the user. In a fixed environment, the template can just as easily be printed onto the interface surface.
Reference plane illumination (Micro-illumination Module™)
An infrared plane of light is generated just above and parallel to the interface surface. The light is invisible to the user and hovers a few millimeters above the surface. When a key position is touched on the surface, light from the infrared plane is reflected in the vicinity of the key and directed towards the sensor module.
Map reflection coordinates (Sensor module)
The light reflected from user interactions with the interface surface is passed through an infrared filter and imaged onto a CMOS image sensor in the sensor module. The sensor chip has custom hardware embedded, such as the Virtual Interface Processing Core™, and it can make a real-time determination of the location from which the light was reflected. The processing core can track multiple light reflections at the same time, so it can support multiple keystrokes and overlapping cursor-control inputs.
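Turning an imaged reflection into a position on the surface is essentially triangulation. Here is a minimal sketch of the idea; the geometry (camera height, tilt, focal length) is entirely made up, since real modules calibrate these values per unit:

```python
import math

# Minimal sketch of mapping a sensor pixel row to a distance along the
# surface by triangulation. All geometry constants are hypothetical.

CAM_HEIGHT_MM = 80.0                 # assumed sensor height above the surface
CAM_TILT_RAD = math.radians(55.0)    # assumed downward tilt of the optical axis
FOCAL_PX = 400.0                     # assumed focal length in pixels

def surface_distance(row_offset_px: float) -> float:
    """Distance along the surface to the reflection, from the pixel row
    offset relative to the image center."""
    ray = math.atan2(row_offset_px, FOCAL_PX)   # ray angle vs. optical axis
    angle = CAM_TILT_RAD + ray                  # total angle below horizontal
    return CAM_HEIGHT_MM / math.tan(angle)      # intersect the surface plane

print(round(surface_distance(30.0), 1))  # larger offsets map to nearer keys
```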
Interpretation and communication (Sensor module)
The micro-controller in the sensor module receives the positional information corresponding to the light flashes from the sensor processing core, interprets the events, and then communicates them through the appropriate interface to external devices. An event here means any keystroke, mouse or touchpad control.
Most projection keyboards use a red diode laser as a light source and can project a full-size QWERTY-layout keyboard. The projected keyboard is usually 295 mm × 95 mm and is projected at a distance of 60 mm from the virtual keyboard unit. A projection keyboard can detect up to 400 characters per minute, and it can be connected using either USB or Bluetooth.
The projection keyboard unit runs on lithium-ion batteries with a capacity of at least 120 minutes of continuous typing. The projection unit's size varies by manufacturer, but it is normally no bigger than 35 mm × 92 mm × 25 mm.

A Virtual Keyboard Based on True-3D Optical Ranging

Huan Du¹, Thierry Oggier², Felix Lustenberger², Edoardo Charbon¹

¹ Ecole Polytechnique Fédérale Lausanne, 1015 Lausanne, SWITZERLAND
huan.du | edoardo.charbon@epfl.ch

² CSEM SA, Badenerstrasse 569, 8048 Zurich, SWITZERLAND
thierry.oggier | felix.lustenberger@csem.ch

Abstract

In this paper, a complete system is presented which mimics a QWERTY keyboard on an arbitrary surface. The system consists of a pattern projector and a true-3D range camera for detecting the typing events. We exploit depth information acquired with the 3D range camera and detect the hand region using a pre-computed reference frame. The fingertips are found by analyzing the hands' contour and fitting the depth curve with different feature models. To detect a keystroke, we analyze the feature of the depth curve and map it back to a global coordinate system to find which key was pressed. These steps are fully automated and do not require human intervention. The system can be used in any application requiring zero form factor and minimized or no contact with a medium, as in a large number of cases in human-to-computer interaction, virtual reality, game control, 3D design, etc.

Keywords: virtual keyboard, computer vision, range camera, finger tracking, feature extraction, Swissranger, time-of-flight imaging

1 Introduction

As the demand for computing environments evolves, new human-computer interfaces have been implemented to provide multiform interactions between users and machines. Nonetheless, the basis for most human-to-computer interactions remains the binomial keyboard/mouse. Ordinary keyboards, however, to be comfortable and effective, must be reasonably sized. Thus they are cumbersome to carry and often require wiring. To overcome these problems, a smaller and more mobile touch-typing device [1] has been proposed which does not have physical support. This device is known as a virtual keyboard [2] or zero-form-factor interface.

Finger tracking and finger-tracking-based interfaces have been an actively researched problem for several years now. For example, glove-based systems, such as the "Key-glove" by Won, Lee et al. [3], require the user to wear a tethered glove to recognize signal variations caused by the movement of the fingers. Recently, other kinds of sensors have also been used in wearable virtual keyboards. Senseboard, for example, has developed a virtual keyboard system [4] based on two devices made of a combination of rubber and plastic. The devices recognize typing events by analyzing the data from pressure sensors attached to the user's palm. Other systems based on more sophisticated sensing devices have been proposed. One such example is the SCURRY system [5], which uses an array of gyroscopes attached to the user's hands to detect the movements of the fingers and wrist.

Computer vision researchers as well have made significant advances in the development of vision-based devices that require no wearable hardware. Canesta and VKB, for example, have designed virtual keyboard systems [6][7] using infrared cameras to detect the interaction between the fingers and a projected image from a laser diode. Stereo vision has also been employed to obtain finger positioning and to perform rudimentary finger tracking. Lee and Woo, for example, describe ARKB [8], a 3D vision system based on a stereo camera.

This paper describes a system based on the Swissranger SR-2 camera demonstrator [9], a novel 3D optical range camera that utilizes both the gray-scale and depth information of the scene to reconstruct typing events. The tracking of the fingers is not based on skin color detection and thus needs no training to adapt to the user. After initial calibration for the environment setup, the system is fully automated and does not require human intervention. Besides its application in a planar setup, the system can be employed in a real 3D space for virtual reality and gaming applications. Since physical contact is not a requirement of the system, important applications designed for disabled computer users and for users operating in hostile and sterile environments are envisioned.

The paper is organized as follows. In Section 2, the system architecture is described, including all main hardware modules and software components of the system. Section 3 outlines the camera-calibration process, including the models for all known sources of error caused by the camera. The dynamic event detection algorithm is discussed in Section 4, where we describe the techniques for fingertip and keystroke detection. Results are presented in Section 5.

2 System Architecture

Figure 1 shows the physical setup of the system. The 3D range camera is placed several centimeters over the input surface, with a well-defined angle facing the working area. The size of the working area, limited by the spatial resolution of the camera, is 15 cm × 25 cm, which is comparable to a full-size laptop-computer keyboard. The display projector is mounted on the camera, facing the same area, and generates the visual feedback for the keyboard and input information.

The proposed system consists of three main hardware modules: (1) a 3D optical range camera, (2) visual feedback, and (3) a processing platform. The range camera is connected to the processing platform, presently a personal computer (PC), via a USB 2.0 interface. The visual feedback module communicates with the computer via a serial port.

The Swissranger SR-2 3D optical range camera simultaneously measures the gray-scale image and depth map of a scene. The sensor used in the camera is based on CMOS/CCD technology and is equipped with an optical band-pass filter to reject background light. It delivers gray-scale and depth measurements based on the time-of-flight (TOF) measurement principle with a spatial resolution of 160 × 124 pixels. The light source of the camera is an array of LEDs modulated at 20 MHz with a total optical power output of approx. 800 mW; however, only a fraction of this total light power is utilized in the current setup. The depth resolution under these conditions is only about 1.5 cm without any spatial filtering, and may reach 0.6 cm with spatial filtering with a window size of 5 × 5 pixels. A detailed discussion of the theoretical background and practical implementation of the camera can be found, e.g., in [10].

Figure 1. Projected-keyboard demonstration-system setup
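For readers unfamiliar with continuous-wave TOF sensing, a minimal sketch of the phase-to-distance conversion follows. The 20 MHz modulation frequency is the one cited above; the rest is illustrative and omits the per-pixel calibration a real device performs:

```python
import math

# Minimal sketch of continuous-wave time-of-flight (TOF) ranging: distance
# is recovered from the phase delay of the modulated light.

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency, Hz (as cited in the text)

def phase_to_distance(phase_rad: float) -> float:
    """Distance in meters from a measured phase delay in [0, 2*pi).
    The light travels out and back, hence the extra factor of 2."""
    return (C * phase_rad) / (4 * math.pi * F_MOD)

# Unambiguous range: a full 2*pi cycle corresponds to c / (2 * F_MOD) = 7.5 m.
print(round(phase_to_distance(math.pi / 2), 3))  # quarter cycle -> ~1.874 m
```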

The depth information supplied by the range camera allows developing simpler and more efficient computer-vision algorithms to estimate the position of fingertips and to locate the corresponding stricken key. Simplicity and efficiency are key elements to enable real-time or even portable applications.

However, there are still some challenges associated with the range camera utilized in this project. A number of problems, such as light scattering and "close target" artifacts, impact the achievable depth accuracy. Moreover, the image resolution of the camera is relatively low, thus restricting its use to applications with a large view window. As a result, the working area of the keyboard is today limited to a sub-optimal size. The current power consumption and size of the range camera are also impediments to its use in truly portable applications. Naturally, the current demo system is composed, in part, of prototypes. Nonetheless, based on our research experience in the field of full-field TOF-based rangefinders, we believe that all these problems will be solved in the near future, and all-solid-state 3D cameras will soon fall into adequate size and price ranges.

The visual feedback module is constructed using projection of a dynamically generated image based on a mini LCD. Whenever the processing algorithm detects a key-striking or key-bouncing event, it sends an UPDATE command to the visual feedback module with specific key information. The feedback module updates the generated display according to the command, and thus the user can see the change of the keyboard image as well as textual or graphical updates. Additional audio feedback is used to help the user identify successful keystrokes.

The processing algorithm consists of five main modules, as shown in Figure 2: (1) depth map error correction, a camera-dependent module based on specific models designed for the range camera, (2) background subtraction, (3) central column estimation, (4) fingertip detection, and (5) keystroke detection. Note that software modules (2) to (5) are camera-independent modules applying computer vision algorithms to track the movement of fingers and to detect the typing event.

The 3D range camera is calibrated at startup; the projection matrix of the camera is estimated during calibration. The depth map delivered by the range camera is first processed by the error-correction routine to compensate for errors caused by parallax and distributed clock skews in the camera. The rectified range measurements, combined with the gray-scale image, are then subtracted from the reference image and binarized to extract the hand region. Next, central column estimation finds the pixel segments associated with fingers that are good candidates for an event, by searching for the local extrema in the x-coordinate along the hand boundary; fingertip detection then extracts features by curve modeling, so that the precise locations of fingertips can be found in the hand region. Finally, keystroke detection is performed by fitting the depth curve with another feature model, and the corresponding hitting positions are mapped back to the world coordinate system to infer the stricken keys. The updated key status is then sent to the visual feedback module to generate a refreshed display.

Figure 2. Software flowchart

3 Camera Calibration

The 3D optical range camera requires certain calibration procedures. To estimate the projection matrix M_c of the camera, we use the classic method proposed by Tsai [11] with respect to a world coordinate frame attached to the table. We also find that there exist some camera-specific errors in the range measurement, and we analyze the cause of these errors and how to model and correct them.
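As a reminder of what the calibrated projection matrix does, here is a minimal sketch of projecting a world point on the table into pixel coordinates. The matrix entries are made up; the real M_c comes from the Tsai calibration described above:

```python
import numpy as np

# Minimal sketch of applying a 3x4 projection matrix M_c (world -> pixel).
# The values below are hypothetical, for illustration only.

M_c = np.array([
    [400.0,   0.0,  80.0,  80.0],
    [  0.0, 400.0,  62.0,  62.0],
    [  0.0,   0.0,   1.0,   1.0],
])

def project(x: float, y: float, z: float) -> tuple[float, float]:
    """Project a world point (homogeneous coordinates) to pixel (u, v)."""
    u, v, w = M_c @ np.array([x, y, z, 1.0])
    return u / w, v / w

print(project(0.1, 0.05, 0.0))  # a point on the table plane (z = 0)
```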

Figure 3. (a) Depth map of the desk before error correction and (b) depth map of the desk after error correction

There are mainly two types of errors in the range measurement. One is caused by the parallax effect, the other by skew in the clock distribution of the sensor. Both errors result in location-dependent under- or overestimation of depth. The parallax error is estimated using the triangular geometry of the scene projection. The clock-skew error increases linearly with the x- and y-coordinates in the image plane. The gain, or slope, in the x- and y-directions may be estimated using LMS. Figure 3 shows the depth map of a flat table surface before and after error correction, and Table 1 lists the statistics of the two cases. The mean value of the latter case includes the offset.
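A minimal sketch of the clock-skew correction idea: fit the linear trend in x and y over a known-flat reference surface by least squares and subtract the tilt. The synthetic data below stands in for a real depth map:

```python
import numpy as np

# Minimal sketch of clock-skew correction: the error grows linearly with the
# x- and y-coordinates, so fit a plane d = a*x + b*y + c to the depth map of
# a flat surface by least squares (LMS) and subtract the tilt.

H, W = 124, 160                              # SR-2 resolution (y, x)
ys, xs = np.mgrid[0:H, 0:W]

# Synthetic "measured" depth of a flat table: offset + linear skew + noise.
rng = np.random.default_rng(0)
depth = 35.0 + 0.02 * xs + 0.01 * ys + rng.normal(0, 0.5, (H, W))

# Least-squares fit of the plane parameters (a, b, c).
A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
(a, b, c), *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)

corrected = depth - (a * xs + b * ys)        # keep the constant offset c
print(depth.std(), corrected.std())          # the spread shrinks markedly
```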

                     Mean (cm)   Standard deviation (cm)
Before correction    38.42       12.73
After correction     35.84       1.46

Table 1. Statistics of the depth map before and after error correction

4 Dynamic Event Detection

To detect the movement of fingers, the proposed system applies a series of computer-vision algorithms to the gray-scale image and depth map. The first step in the process is the segmentation of the scene into its foreground and background components. The foreground is defined as the fingers and forearms. Once the segmentation has been performed, the process of detecting the fingertips starts with estimating and extracting features.

For the detection of the hand region, the algorithm uses a previously acquired background image as its reference frame. The background is modeled using both the gray-scale image and the depth map. During the detection process, the input frame is compared to the reference frame, and its differential distance map is computed.

Figure 4. (a) Differential depth map and (b) binarized hand region after background subtraction

Pixels of the resulting differential map that have values larger than a given threshold (one for the depth map and one for the gray-scale image) correspond to the foreground. Binarizing the differential map results in a separation of the hand region from the background scene. Median filtering is then applied to the binary frame to reduce the impact of noise. The resulting hand image is clean and contains only a few gaps. Figure 4(a) shows the differential depth map, and Figure 4(b) shows the segmented hand region computed for an example image.
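A minimal sketch of this segmentation step, using NumPy and SciPy's median filter; the two thresholds are made-up values standing in for the tuned ones:

```python
import numpy as np
from scipy.ndimage import median_filter

# Minimal sketch of hand-region segmentation: threshold the differential
# depth and gray-scale maps against a pre-computed reference frame, then
# clean the binary mask with a median filter. Thresholds are assumptions.

DEPTH_THRESH = 1.0   # cm, assumed
GRAY_THRESH = 20.0   # gray levels, assumed

def hand_mask(depth, gray, ref_depth, ref_gray):
    """Binary hand-region mask from depth and gray-scale differentials."""
    diff_depth = np.abs(ref_depth - depth)
    diff_gray = np.abs(ref_gray - gray)
    mask = (diff_depth > DEPTH_THRESH) | (diff_gray > GRAY_THRESH)
    # Median filtering suppresses isolated noise pixels and fills small gaps.
    return median_filter(mask, size=3)
```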

According to our analysis of the differential map after background subtraction, the difference is very small near the edges of the fingers and is submerged in Gaussian noise, in both the gray-scale field and the depth field. Therefore, it would be difficult to detect the precise location of a fingertip based solely on the thresholded hand region. Instead, the central pixel column associated with a finger is first estimated from the binarized hand region; the fingertip is then detected by extracting features from the central-column curve of the depth map. No temporal coherence is assumed for this system: the frame rate of the demo system (33 frames per second) is not high enough to apply cross-frame tracking at such a close distance, and target tracking would additionally require a relatively complicated hand model, which is not suitable for our real-time algorithm.

The most likely positions of the central columns can be derived from the finger trunks based on the segmented hand region. We first compute the boundary of the hand by noting that any pixel in the hand region which has 4-connectivity with a non-hand region belongs to the boundary. Then we compute an approximation of the k-curvature for each pixel on the boundary. K-curvature is defined by the angle between the two vectors [P(i-k), P(i)] and [P(i), P(i+k)], where k is a constant and P(i) = (x(i), y(i)) is the list of contour points. Segen and Kumar used local curvature extrema [12], but with our specifically configured angle and elevation of the camera, we need only compute the local extrema in the x-coordinate instead of the angle.

Figure 5. (a) Example of local-extrema detection and (b) result of central column estimation

With an appropriate value of k, the local extrema in the x-coordinates can be used to find the central columns of the fingers. For each pixel on the boundary of the hand region, we extend two vectors k pixels away along the contour. Since this is an approximation of the k-curvature and the computation of local extrema is simplified to sign detection, we may get a series of contiguous "local extrema" near the actual central columns. In such a situation, the center pixel of this series is taken as the estimate of the central column.
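A minimal sketch of this simplified k-curvature test; contour extraction is assumed done, and the run-grouping deliberately skips wrap-around handling for brevity:

```python
import numpy as np

# Minimal sketch: walk the hand contour and flag pixels where the backward
# and forward vectors, k samples away, change sign in the x-direction, then
# collapse contiguous runs of candidates to their center pixel.

def _runs(idx: list[int]) -> list[list[int]]:
    """Group consecutive indices (no wrap-around handling, for brevity)."""
    groups: list[list[int]] = []
    for i in idx:
        if groups and i == groups[-1][-1] + 1:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

def local_extrema_x(contour: np.ndarray, k: int = 8) -> list[int]:
    """Indices of contour points that are local extrema in x.

    contour: (N, 2) array of (x, y) boundary points in walking order.
    """
    n = len(contour)
    candidates = []
    for i in range(n):
        back = contour[i][0] - contour[(i - k) % n][0]   # x-component of B
        fwd = contour[(i + k) % n][0] - contour[i][0]    # x-component of F
        if back * fwd < 0:       # sign change in x => candidate extremum
            candidates.append(i)
    return [run[len(run) // 2] for run in _runs(candidates)]
```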

Figure 5(a) shows an example of a local extremum in the x-coordinate. The figure indicates how the sign change is used to detect local extrema. The current pixel is marked in black, and the forward and backward vectors from this pixel are denoted F and B. If F and B have different signs in the x-direction, the current pixel is marked as a local extremum; otherwise it is skipped. Figure 5(b) shows the result of central column detection in the gray-scale image; the detected central columns are marked with white lines.

To detect the fingertip in the segmented
hand image, the feature model of the differential depth curve is applied. The differential depth curve represents the depth values of the differential map along the central column of a finger in the x-direction. It appears to be the piecewise superposition of a linear segment with a gradient close to zero, corresponding to the finger, and a curved segment which can be modeled as a non-decreasing second-order polynomial. We take the intersection point of the two sections of the curve as the fingertip.

Figure 6 shows the differential depth curve of the central column and the fitted curves of its piecewise parts, respectively.

Figure 6. Differential depth curve and the fitted curve
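A minimal sketch of the piecewise fit: scan candidate breakpoints, fit a line to the finger side and a quadratic to the tail, and keep the breakpoint with the lowest total residual. The breakpoint search strategy is an assumption; the paper does not specify one:

```python
import numpy as np

# Minimal sketch of the fingertip model: along a finger's central column,
# find the breakpoint between a near-flat linear segment (the finger) and
# a second-order polynomial tail, by minimizing the combined fit residual.

def fingertip_index(curve: np.ndarray) -> int:
    """Index of the best piecewise (linear + quadratic) breakpoint."""
    x = np.arange(len(curve))
    best_i, best_err = -1, np.inf
    for i in range(4, len(curve) - 4):       # keep both segments fittable
        lin = np.polyfit(x[:i], curve[:i], 1)
        quad = np.polyfit(x[i:], curve[i:], 2)
        err = (np.sum((np.polyval(lin, x[:i]) - curve[:i]) ** 2) +
               np.sum((np.polyval(quad, x[i:]) - curve[i:]) ** 2))
        if err < best_err:
            best_i, best_err = i, err
    return best_i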

The contact between fingertips and virtual keys is also detected by feature models. The depth curve of the fingertip appears as a smooth second-degree parabola when the finger touches the working surface or is very close to it. When the finger is lifted away from the working surface, the depth curve exhibits a discontinuity in the parabolic shape. The keystroke hypothesis is therefore tested by curve fitting with a second-order polynomial: when the fitted curve exhibits a deviation larger than a predefined threshold, the finger is classified as lifted rather than touching. This large deviation is caused by the perspective projection of the scene and the discontinuity in the depth map between the finger and the table.
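In code, the touch test might look like this minimal sketch; the RMS threshold is a made-up value standing in for the empirically tuned one:

```python
import numpy as np

# Minimal sketch of the touch test: fit a parabola to the fingertip's depth
# curve and compare the RMS residual against a threshold.

TOUCH_RMS_THRESH = 0.3   # cm, assumed

def is_touching(depth_curve: np.ndarray) -> bool:
    """True if the depth curve fits a smooth parabola (finger on surface)."""
    x = np.arange(len(depth_curve))
    coeffs = np.polyfit(x, depth_curve, 2)
    residual = depth_curve - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(residual ** 2))) < TOUCH_RMS_THRESH
```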

Figure 7. Depth curve in the collision and non-collision cases

Figure 7 shows the depth curve in the finger-touching and the finger-lifting cases, respectively. The position of the typing fingertip is then mapped back to the world coordinate frame with the previously calibrated projection matrix, and the corresponding virtual key is inferred from the differential depth map, which encodes the x- and z-coordinates of the keystroke.

5 Experimental Results

In this section, preliminary experimental results of the proposed system are presented. The range camera is connected to a PC with a Pentium 4 1.8 GHz CPU. With these resources, the system can capture images and process them in approx. 30 ms intervals, i.e., 33 frames per second on average. This frame rate is high enough for normal typing, which requires a finger speed of at most 10 cm/s. The camera was pre-calibrated with respect to a world coordinate frame associated with the surface, and the construction of the reference frame was achieved in less than 3 seconds.

Figure 8 shows a human hand's typing motion being tracked by our algorithm and the detected keystroke events in the image sequence, which are marked with white dots. A sequence of frames leading to a specific keystroke is shown. The keystroke of the moving fingertip is detected accurately in Figure 8(b). Note that the other fingers are also correctly detected, as they are stationary keystrokes; stationary keystrokes can be interpreted as REPEAT. The current challenge for the system is the finger-occlusion problem in two-handed typing, which is common to most vision-based hand tracking systems. A solution to this problem is to apply more complicated 3D hand models so that the position of an occluded finger can be estimated. However, this may dramatically lower the system frame rate and thus violate our real-time requirement.

Figure 8. (a), (b), (c) and (d) Results from the image sequence

The proposed system could also be extended to a virtual mouse application. The finger tracking method can precisely locate the position of a moving finger in the working area and detect the click event in the same way as a keystroke event. The remaining challenge for this application is to track multiple fingers and record their traces in the scene. In the trivial case we can assume that there is only one finger in the scene as the input source. However, for the case where the left and the right buttons are both simulated, or for a gesture-control interface, temporal coherence information can be employed and a more complicated tracking algorithm must be devised.

Table 2 lists the results of the system's usability tests, which involved users of different races, typing skills and genders, reflecting the natural distribution of left- and right-handed subjects. The subjects were required to type test patterns indoors under different lighting conditions and at slow, normal and fast typing speeds. The carefully designed test patterns cover all key positions and most of their possible combinations. Statistics were compiled to evaluate the false detection, miss and incorrect detection rates. A false detection occurs when a key is not pressed but a character is issued. A miss occurs when a key is pressed but no character is issued. An incorrect detection is equivalent to a misprint. User set 1 consists of users who had never used the system before, and user set 2 of users who had practiced with the system for less than 10 minutes. From Table 2 we found that the dominant detection error in both cases is false detection. This error is mainly caused by the low lateral resolution of the camera, which could not resolve very small floating distances between the fingers and the table, though the fact that some users could not adapt to a flat typing surface is also a factor. Experiments showed that with a short period of practice, accuracy and typing speed are both observably enhanced. It was also shown that an appreciably experienced user can reach a normal typing speed of approx. 30 words per minute with the presented system.

             False detection   Missed strokes   Incorrect detection   Typing     Average typing
             rate              rate             rate                  accuracy   speed (words/min)
User set 1   8.5%              3.5%             1%                    87%        21.6
User set 2   6.7%              2.7%             1%                    89.7%      30.8
Total        7.4%              3%               1%                    88.6%      27.1

Table 2. Summary of results of the system-usability tests

In general, the surrounding lighting conditions would impact the scene segmentation method if it were based on a static reference frame alone. However, the infrared LED light source and the camera filter are narrow-banded and centered on a wavelength that few natural light sources have as their main component. This makes it possible to extract the hand region against the pre-computed reference frame with very low processing complexity.

In this paper, only static background conditions were discussed. One major problem of a mobile input device is that of a dynamic scene. We are currently working on methods of automatically updating the reference frame when movement of the background is detected. Furthermore, for the comfort of the user's wrist and elbow, virtual keyboard systems are generally developed on a 2D flat keyboard area. However, with the depth information supplied by the range camera, our system could be extended to a 3D keyboard without any actual typing surface, which is also the direction of our future work.

6 Conclusions

A virtual keyboard system based on a true-3D optical range camera has been presented. Keystroke events are accurately tracked independently of the user. No training is required by the system, which automatically adapts itself to the background conditions when turned on. No specific hardware must be worn, and in principle no dedicated goggles are necessary to view the keyboard, since it is projected onto an arbitrary surface by optical means. The feedback text and/or graphics may be integrated with the projector, thus enabling a truly virtual working area. Experiments have shown the suitability of the approach, which achieves high accuracy and speed.

Acknowledgements

The project was supported by a grant of the Swiss National Science Foundation, Grant Nr. 620-066110. The authors wish to thank Nicholas Blanc as well as the entire crew of the Image Sensing group of CSEM SA for their technical support with the Swissranger SR-2 range camera.

References

[1] Kölsch, M. and Turk, M. Keyboards without Keyboards: A Survey of Virtual Keyboards, Workshop on Sensing and Input for Media-centric Systems, Santa Barbara, CA, June 20-21, 2002.
[2] Elliot, C., Schechter, G., Yeung, R., and Abi-Ezzi, S. TBAG: A High Level Framework for Interactive, Animated 3D Graphics Applications, in Proceedings of ACM SIGGRAPH 94, ACM Press / ACM SIGGRAPH, New York, pp. 421-434, 1994.
[3] Won, D. H., Lee, H. G., Kim, J. Y., and Park, J. H. Development of a Wearable Input Device Recognizing Human Hand and Finger Motions as a New Mobile Input Device, International Conference on Control, Automation and Systems, pp. 1088-1091, 2001.
[4] Senseboard Tech. AB, http://www.senseboard.com
[5] Spring, T. Virtual Keyboards Let You Type in Air, PCWorld article, 2001, http://www.pcworld.com/news/article/0,aid,0568,tk,dnWknd1117,00.asp
[6] Canesta Inc., http://www.canesta.com
[7] VKB Inc., http://vkb.co.il
[8] Lee, M. and Woo, W. ARKB: 3D Vision-based Augmented Reality Keyboard, International Conference on Artificial Reality and Telexistence, pp. 54-57, 2003.
[9] CSEM SA, http://www.swissranger.ch
[10] Oggier, T., Lehmann, M., Kaufmann, R., Schweizer, M., Richter, M., Metzler, P., Lang, G., Lustenberger, F., and Blanc, N. An All-Solid-State Optical Range Camera for 3D Real-Time Imaging with Sub-Centimeter Depth Resolution (SwissRanger), in Proceedings of SPIE 2003, pp. 534-545, 2003.
[11] Tsai, R. Y. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, Vol. 3, pp. 323-344, 1987.
[12] Segen, J. and Kumar, S. Human-Computer Interaction Using Gesture Recognition and 3D Hand Tracking, in Proceedings of ICIP, Chicago, pp. 188-192, 1998.


A system augments stylus keyboarding with shorthand gesturing. The system defines a shorthand symbol for each word
according to its movement pattern on an optimized stylus keyboard. The system recognizes word patterns by identifying an
input as a stroke, and then matching the stroke to a stored list of word patterns. The system then generates and displays the
matched word to the user.

Claims

What is claimed is:

1. A method of recognizing word patterns, comprising: defining word patterns of a plurality of known words by a plurality of paths, wherein each path connects elements in a word on a virtual keyboard layout; accepting an input as a stroke based on a virtual keyboard layout; matching the inputted stroke to a word having a word pattern that approximates the inputted stroke; and generating the matched word having the word pattern that approximates the inputted stroke.

2. The method of claim 1, further comprising displaying the matched word.

3. The method of claim 2, further comprising analyzing the input to differentiate between a tapping input and a stroke input.

4. The method of claim 3, wherein generating the matched word comprises generating at least one matched candidate word
based on the stroke input.

5. The method of claim 4, wherein generating the at least one matched candidate word comprises comparing the at least
one matched candidate word to a predetermined matching threshold.

6. The method of claim 3, wherein the input comprises the tapping input that represents at least one element of the word.

7. The method of claim 4, wherein displaying the matched word comprises displaying a plurality of candidate matched words
in a graphical format.

8. The method of claim 7, wherein the graphical format includes a pie chart graphical display.

9. The method of claim 7, further comprising selecting a desired candidate matched word from the displayed candidate
matched words by accepting a gesture input in a direction of the desired candidate matched word.

10. The method of claim 1, wherein the virtual keyboard layout matches a physical keyboard layout.

11. A computer program product having executable instruction codes stored on a computer-readable medium, for recognizing word patterns, comprising: a set of instruction codes for defining word patterns of a plurality of known words by a plurality of paths, wherein each path connects elements in a word on a virtual keyboard layout; a set of instruction codes for accepting an input as a stroke based on a virtual keyboard layout; a set of instruction codes for matching the inputted stroke to a word having a word pattern that approximates the inputted stroke; and a set of instruction codes for generating the matched word having the word pattern that approximates the inputted stroke.

12. The computer program product of claim 11, further comprising a set of instruction codes for displaying the matched
word.

13. The computer program product of claim 12, further comprising a set of instruction codes for differentiating between a
tapping input and a stroke input.

14. The computer program product of claim 13, wherein the set of instruction codes for generating the matched word
comprises generating at least one matched candidate word based on the stroke input.

15. The computer program product of claim 14, wherein the set of instruction codes for generating the at least one matched
candidate word compares the at least one matched candidate word to a predetermined matching threshold.

16. The computer program product of claim 13, wherein the input comprises the tapping input that represents at least one element of the word.

17. The computer program product of claim 14, wherein the set of instruction codes for displaying the matched word
comprises displaying a plurality of candidate matched words in a graphical format.

18. The computer program product of claim 17, wherein the graphical format includes a pie chart graphical display.

19. The computer program product of claim 17, further comprising a set of instruction codes for selecting a desired candidate matched word from the displayed candidate matched words by accepting a gesture input in a direction of the desired candidate matched word.

20. The computer program product of claim 11, wherein the virtual keyboard layout matches a physical keyboard layout.

21. A system for recognizing word patterns, comprising: means for defining word patterns of a plurality of known words by a plurality of paths, wherein each path connects elements in a word on a virtual keyboard layout; means for accepting an input as a stroke based on a virtual keyboard layout; means for matching the inputted stroke to a word having a word pattern that approximates the inputted stroke; and means for generating the matched word having the word pattern that approximates the inputted stroke.

22. The system of claim 21, further comprising means for displaying the matched word.

23. The system of claim 22, further comprising means for differentiating between a tapping input and a stroke input.

24. The system of claim 23, wherein the means for generating the matched word comprises generating at least one matched
candidate word based on the stroke input.

25. The system of claim 24, wherein the means for generating the at least one matched candidate word comprises means
for comparing the at least one matched candidate word to a predetermined matching threshold.

26. The system of claim 23, wherein the input comprises the tapping input that represents at least one element of the word.

27. The system of claim 24, wherein the means for displaying the matched word displays a plurality of candidate matched
words in a graphical format.

28. The system of claim 27, wherein the graphical format includes a pie chart graphical display.

29. The system of claim 27, further comprising an input device for selecting a desired candidate matched word from the
displayed candidate matched words by accepting a gesture input in a direction of the desired candidate matched word.

30. The system of claim 21, wherein the virtual keyboard layout matches a physical keyboard layout.

31. The method of claim 1, wherein the virtual keyboard layout comprises alphabetical letters.

32. The method of claim 1, wherein the virtual keyboard layout comprises punctuations that correspond to the elements of
the word.

33. The method of claim 1, wherein the virtual keyboard layout comprises symbols that correspond to the elements of the
word.

34. The method of claim 33, wherein the symbols comprise phonetic symbols.

35. The method of claim 1, wherein the virtual keyboard layout comprises elements in a non-alphabetical language that
correspond to the elements of the word.

36. The method of claim 1, wherein the known words comprise a finite number of fragments of words.

37. The method of claim 1, wherein the known words comprise a finite number of names.

38. The method of claim 1, wherein the known words comprise a finite number of abbreviations.

39. The method of claim 1, further comprising inputting the stroke by any one of a digital pen and a stylus, on a sensing surface.

40. The method of claim 1, further comprising inputting the stroke by a hand gesture on a sensing surface.

41. The computer program product of claim 11, wherein the virtual keyboard layout comprises alphabetical letters.

42. The computer program product of claim 11, wherein the virtual keyboard layout comprises punctuations that correspond
to the elements of the word.

43. The computer program product of claim 11, wherein the virtual keyboard layout comprises symbols that correspond to
the elements of the word.

44. The computer program product of claim 43, wherein the symbols comprise phonetic symbols.

45. The computer program product of claim 11, wherein the virtual keyboard layout comprises elements in a non-alphabetical language that correspond to the elements of the word.

46. The computer program product of claim 11, wherein the known words comprise a finite number of fragments of words.

47. The computer program product of claim 11, wherein the known words comprise a finite number of names.

48. The computer program product of claim 11, wherein the known words comprise a finite number of abbreviations.

49. The computer program product of claim 11, further comprising inputting the stroke by any one of a digital pen and a
stylus, on a sensing surface.

50. The computer program product of claim 11, further comprising inputting the stroke by a hand gesture on a sensing
surface.

51. The system of claim 21, wherein the virtual keyboard layout comprises alphabetical letters.

52. The system of claim 21, wherein the virtual keyboard layout comprises punctuations that correspond to the elements of
the word.

53. The system of claim 21, wherein the virtual keyboard layout comprises symbols that correspond to the elements of the
word.

54. The system of claim 53, wherein the symbols comprise phonetic symbols.

55. The system of claim 21, wherein the virtual keyboard layout comprises elements in a non-alphabetical language that
correspond to the elements of the word.

56. The system of claim 21, wherein the known words comprise a finite number of fragments of words.

57. The system of claim 21, wherein the known words comprise a finite number of names.

58. The system of claim 21, wherein the known words comprise a finite number of abbreviations.

59. The system of claim 21, further comprising means for inputting the stroke by any one of a digital pen and a stylus, on a
sensing surface.

60. The system of claim 21, further comprising means for inputting the stroke by a hand gesture on a sensing surface.

Description

FIELD OF THE INVENTION

The present invention generally relates to text entry devices for computers, particularly text entry via virtual keyboards for computer-based speed writing that augments stylus keyboarding with shorthand gesturing. Shorthand gestures for words are defined as the stroke sequentially formed by the user after the pattern defined by all the letters in a word on a virtual keyboard.
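For illustration, a minimal sketch of what such a word pattern is: the polyline through the centers of a word's letter keys on a virtual keyboard layout. The key positions below are made up, not those of any actual layout:

```python
# Minimal sketch of building a word's shorthand prototype: the ordered
# sequence of key-center coordinates for its letters. Positions are
# hypothetical, for illustration only.

KEY_POS = {"t": (4.5, 0.0), "h": (5.5, 1.0), "e": (2.0, 0.0), "y": (5.5, 0.0)}

def word_prototype(word: str) -> list[tuple[float, float]]:
    """Ordered key-center coordinates defining the word's gesture pattern."""
    return [KEY_POS[ch] for ch in word]

print(word_prototype("they"))
```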
BACKGROUND OF THE INVENTION

Text input constitutes one of the most frequent computer user tasks. The QWERTY keyboard has been accepted as the standard tool for text entry for desktop computing. However, the emergence of handheld and other forms of pervasive or mobile computing calls for alternative solutions. These devices have small screens and limited keypads, limiting the ability of the user to input text. Consequently, text input has been revived as a critical research topic in recent years. The two classes of solutions that have attracted the most attention are handwriting and stylus-based virtual keyboarding.

Handwriting is a rather "natural" and fluid mode of text entry due to users' prior experience. Various handwriting recognition systems have been used in commercial products. However, the fundamental weakness of handwriting as a text entry method is its limited speed. While adequate for entering names and phone numbers, handwriting is too limited for writing longer text.

Virtual keyboards tapped serially with a stylus are also available in commercial products. The keyboard provided on the screen is typically the familiar QWERTY layout. Stylus keyboarding requires intense visual attention at almost every key tap, preventing the user from focusing attention on text output. To improve movement efficiency, optimization of the stylus keyboard layout has been considered both by trial and error and algorithmically. Using a keyboard layout such as ATOMIK (Alphabetically Tuned and Optimized Mobile Interface Keyboard), text entry is relatively fast. Reference is made to S. Zhai, M. Hunter & B. A. Smith, "Performance Optimization of Virtual Keyboards," Human-Computer Interaction, Vol. 17 (2-3), pp. 229-270, 2002.

The need for entering text on mobile devices has driven numerous inventions in text entry in recent years. The idea of optimizing gestures for speed is embodied in the Unistrokes alphabet. In the Unistrokes alphabet, every letter is written with a single stroke, but the more frequent ones are assigned simpler strokes. If mastered, a user could potentially write faster in the Unistrokes alphabet than in the Roman alphabet. The fundamental limitation of the Unistrokes alphabet, however, is the nature of writing one letter at a time.

The Quikwriting method uses continuous stylus movement on a radial layout to enter letters. Each character is entered by moving the stylus from the center of the radial layout to one of the eight outer zones, sometimes crossing to another zone, and returning to the center zone. The stylus trajectory determines which letter is selected. While it is possible to develop "iconic gestures" for common words like "the", such gestures are relatively complex due to the fact that the stylus has to return to the center after every letter. In this sense, the Quikwriting method is fundamentally a character entry method.

Cirrin (Circular Input) operates on letters laid out on a circle. The user draws a word by moving the stylus through the letters.
Cirrin explicitly attempts to operate on a word level, with the pen being lifted up at the end of each word. Cirrin also attempts
to optimize pen movement by arranging the most common letters closer to each other. However, Cirrin is neither location
nor scale independent.

It is important to achieve at least partial scale and location independence for the ease and speed of text entry. If all the letters defining a word on the keyboard have to be precisely crossed, the time to trace these patterns cannot be expected to be any shorter than tapping. As an example, if it is desired to draw a line from key "r" to key "d" as part of the word "word", within a tunnel connecting the two keys, such a closed-loop drawing process would take more time and visual attention than tapping "d" after "r". The user must place the pen in the proper position before drawing the word and ensure that the movement of the pen from letter to letter falls within the allowed pen stroke boundaries.

It is also important to facilitate skill transfer from novice behavior to expert performance in text entry by designing similar movement patterns for both types of behavior. The idea of bridging novice and expert modes of use by a common movement pattern is used in the "marking menu". Instead of having pull-down menus and shortcut keys, two distinct modes of operation for novice and expert users respectively, a marking menu uses the same directional gesture on a pie menu for both types of users. For a novice user whose action is slow and needs visual guidance, the marking menu "reveals" itself by displaying the menu layout after a pre-set time delay. For an expert user whose action is fast, the marking menu system does not display visual guidance. Consequently, the user's actions become open-loop marks. However, the marking menu is not used for text entry due to the limited number of items that can be reliably used in each level of a pie menu (8, or at the most 12). Reference is made to G. Kurtenbach and W. Buxton, "User Learning and Performance with Marking Menus," Proc. CHI 1994, pp. 258-264; and G. Kurtenbach, A. Sellen, and W. Buxton, "An Empirical Evaluation of Some Articulatory and Cognitive Aspects of Marking Menus," Human-Computer Interaction, 1993, 8(1), pp. 1-23.

A self-revealing menu approach, T-Cube, defines an alphabet set by cascaded pie menus that are similar to a marking menu. A novice user enters characters by following the visual guidance of the menus, while an expert user can enter the individual characters by making menu gestures without visual display. A weakness of the T-Cube is that it works at the alphabet level; consequently, text entry using T-Cube is inherently slow.

Dasher, another approach using continuous gesture input, dynamically arranges letters in multiple columns. Based on the preceding context, likely target letters appear closer to the user's cursor location. A letter is selected when it passes through the cursor; consequently, cursor movement is minimized. This minimization, however, is at the expense of visual attention. Because the letter arrangement constantly changes, Dasher demands the user's visual attention to dynamically react to the changing layout.

One possibility for introducing gesture-based text entry would be the use of shorthand. Traditional shorthand systems are efficient, but hard to learn for the user and difficult to recognize for the computer. Shorthand has no duality; it cannot be used by experts and beginners alike. In addition, shorthand has no basis in a virtual keyboard, so the user cannot identify the required symbol from the keyboard. If the user forgets shorthand symbols, a separate table must be consulted to find the symbol.

What is therefore needed is a form of continuous gesture-based text input that requires minimal visual attention and that is based on keyboard entry, wherein a system and method recognize word patterns based on a virtual keyboard layout. The need for such a system and method has heretofore remained unsatisfied.

SUMMARY OF THE INVENTION

The present invention satisfies this need, and presents a system and associated method (collectively referred to herein as "the system" or "the present system") for recognizing word patterns based on a virtual keyboard layout. The present system combines handwriting recognition with a virtual, graphical, or on-screen keyboard to provide a text input method with relative ease of use. The system allows the user to input text quickly with little or no visual attention.

The design of the present system is based on five principles for achieving gesture-based text input on a virtual keyboard. The first principle is that for word pattern gesturing to be effective, patterns must be recognized independently of scale and location. This is especially critical for small device screens or virtual keyboards such as those on a PDA. As long as the user produces a pattern that matches the shape of a word pattern defined on the keyboard layout, the system should recognize and type the corresponding word for the user. If so, users could produce these patterns with much less visual attention, in a more open-loop fashion, and with presumably greater ease and comfort.

The second principle of the current work lies in efficiency. In comparison to handwriting alphabetic or logographic characters such as Chinese, writing a word pattern defined by a stylus keyboard can be much more efficient. Each letter constitutes only one straight stroke, and the entire word is one shape. In other words, the present system is a form of shorthand writing.

The present system can be defined on any keyboard layout. However, if defined on the familiar QWERTY layout, frequent left-right zigzag strokes would be required, because commonly used consecutive keys are deliberately arranged on opposite sides of QWERTY. An alternative keyboard layout would be the ATOMIK (Alphabetically Tuned and Optimized Mobile Interface Keyboard) layout. The ATOMIK keyboard layout is optimized to reduce movement from one key to another; consequently, it is also optimized for producing word patterns of minimum length.

The third principle involves the concept of duality, that is, the ability of advanced users to primarily use gestures for an increasing set of frequently used words while beginning users primarily use stylus tapping to enter text. Traditional shorthand writing systems take significant time and effort to master. With the exception of touch-typing on physical keyboards, users are typically reluctant to invest time in learning a human-computer interaction skill. A shorthand system defined on a stylus keyboard, however, does not have to contain a complete or even a large set of words, because one can use both tapping and shorthand gesturing. For familiar words whose patterns are well remembered, the user can use gestures. For the less familiar, one can use stylus tapping. Both modes of typing are conducted on the same input surface; the present system distinguishes tapping from stroking and provides the output accordingly. Consequently, users do not have to learn many gestures before beginning to benefit from the present system.

The fourth principle recognizes that word frequency in a language tends to follow Zipf's law, with a highly skewed distribution. Zipf's law models the observation that the frequency of occurrence of an event f, as a function of its rank i, is a power-law function f ~ 1/i^a with the exponent a close to unity. For example, the 100 most common individual words make up 46% of the entire British National Corpus (BNC). The word "the" alone constitutes over 6% of the BNC. Consequently, a relatively small set of shorthand gestures can cover a large percentage of text input. The use of shorthand equivalents for a small set of common words dramatically increases text input speed for the user.
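A minimal numerical sketch of why this works: under Zipf's law with exponent a = 1, the rank-i frequency is proportional to 1/i. The vocabulary size below is a made-up assumption for illustration:

```python
# Minimal sketch of Zipf coverage: how much text the top-ranked words cover
# when rank-i frequency ~ 1/i. The vocabulary size is an assumption.

VOCAB = 50_000
weights = [1 / i for i in range(1, VOCAB + 1)]
total = sum(weights)

coverage_100 = sum(weights[:100]) / total
print(f"top 100 words cover ~{coverage_100:.0%} of text")  # ~46%, as in the BNC
```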

The fifth principle recognizes that a user's repertoire of shorthand gesture symbols can be gradually expanded with practice, providing a gradual and smooth transition from novice to expert behavior. Gesturing and tapping a word share a common movement pattern that may facilitate skill transfer between the two modes. For a novice user, visually guided tapping is easier. When a word has been tapped enough times, the user may switch to the more fluid "expert" mode of shorthand gesturing. If a shorthand gesture is forgotten, one can fall back on tapping, which reinforces the pattern and pushes the user back toward expert mode.

BRIEF DESCRIPTION OF THE DRAWINGS

The various features of the present invention and the manner of attaining them will be described in greater detail with reference to the following description, claims, and drawings, wherein reference numerals are reused, where appropriate, to indicate a correspondence between the referenced items, and wherein:
FIG. 1 is a schematic illustration of an exemplary operating environment in which a word pattern recognition system of the
present invention can be used;

FIG. 2A represents a process flow chart that illustrates a preferred method of operation of the word pattern recognition
system of FIG. 1;

FIG. 2B represents a process flow chart that illustrates an alternative embodiment for the steps of matching the shorthand gesture against the list of known words and for generating the best matched word, for use in the operation of the word pattern recognition system of FIG. 1;

FIG. 3 is an exemplary virtual keyboard layout that can be used with the word pattern recognition system of FIGS. 1 and 2;

FIG. 4 is comprised of FIGS. 4A, 4B, 4C, and 4D, and represents an exemplary keyboard diagram illustrating one approach
in which the word pattern recognition system of FIG. 1 resolves ambiguity in shorthand gestures; and

FIG. 5 is comprised of FIGS. 5A and 5B, and represents a screen shot of a virtual keyboard using the word pattern
recognition system of FIG. 1 illustrating the input of the word "they".

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The following definitions and explanations provide background information pertaining to the technical field of the present
invention, and are intended to facilitate the understanding of the present invention without limiting its scope:

ATOMIK: Alphabetically Tuned and Optimized Mobile Interface Keyboard, optimized by an algorithm in which the keyboard was treated as a "molecule" and each key as an "atom". The atomic interactions among the keys drove the movement efficiency toward the minimum. Movement efficiency is defined by the summation of all movement times between every pair of keys, weighted by the statistical frequency of the corresponding pair of letters. ATOMIK is also alphabetically tuned, causing a general tendency for letters from A to Z to run from the upper left corner to the lower right corner of the keyboard, helping users find keys that are not yet memorized. ATOMIK is one exemplary virtual keyboard that can be used in combination with the current invention.

Elastic Matching: A conventional handwriting recognition method. Reference is made to Tappert, C. C., "Speed, Accuracy, Flexibility Trade-offs in On-line Character Recognition," Research Report RC13228, Oct. 28, 1987, IBM T. J. Watson Research Center, 1987; and Charles C. Tappert, Ching Y. Suen, Toru Wakahara, "The State of the Art in On-Line Handwriting Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 8, August 1990.

PDA: Personal Digital Assistant. A pocket-sized personal computer. PDAs typically store phone numbers, appointments, and to-do lists. Some PDAs have a small keyboard; others have only a special pen that is used for input and output on a virtual keyboard.

Virtual Keyboard: A computer-simulated keyboard with touch-screen interactive capability that can be used to replace or supplement a keyboard using keyed entry. The virtual keys are typically tapped serially with a stylus. It is also called a graphical keyboard, on-screen keyboard, or stylus keyboard.

FIG. 1 portrays an exemplary overall environment in which a system 10 and associated method 200 for recognizing word
patterns on a virtual keyboard according to the present invention may be used. System 10 includes software programming
code or a computer program product that is typically embedded within, or installed on, a computer. The computer in which
system 10 is installed can be a mobile device such as a PDA 15 or a cellular phone 20. In addition, system 10 can be
installed in devices such as a tablet computer 25, a touch screen monitor 30, an electronic white board 35, and a digital pen 40.
System 10 can be installed in any device using a virtual keyboard or similar interface for entry, represented by auxiliary
device 45. Alternatively, system 10 can be saved on a suitable storage medium such as a diskette, a CD, a hard drive, or like
devices.

With reference to FIG. 2A, a preferred method 200 of operation of system 10 is illustrated by a high-level flow chart. At block
205, the user forms a stroke on the virtual keyboard. The stroke can be short, as in a tap, or long, as in a shorthand gesture.

System 10 records the stroke at block 210. Then, at decision block 215, system 10 determines whether the stroke or mark
was short. If so, the user is in tapping mode (block 220) and the system is instructed to select letters individually on the virtual
keyboard. System 10 then correlates the user's tap with a letter by matching the location of the mark with keyboard
coordinates at block 225, and by generating one letter at block 230. System 10 then returns to block 205 when the user
forms another stroke.
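
A minimal sketch of this dispatch step is given below. It assumes a simple path-length threshold separates a tap from a gesture; the threshold value and the key_at and best_match helpers are hypothetical, since the patent does not pin down the exact criterion.

```python
import math

TAP_THRESHOLD = 10.0  # assumed length threshold, in keyboard units

def handle_stroke(points, keyboard, recognizer):
    """Dispatch one recorded stroke (a list of (x, y) samples): a short
    mark is treated as a tap (blocks 220-230), a long one as a shorthand
    gesture (block 235 onward)."""
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    if length < TAP_THRESHOLD:
        return keyboard.key_at(*points[0])     # hypothetical key lookup
    return recognizer.best_match(points)       # gesture recognition path
```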

If at decision block 215 the user's stroke on the virtual keyboard is not short, the user is in shorthand gesturing mode (block 235).
The recognition system of system 10 can be based on, for example, a classic elastic matching algorithm that computes the
minimum distance between two sets of points by dynamic programming. One set of points is from the shape that a user
produces on a stylus tablet or touch screen (i.e., an unknown shape). The other is from a prototype, i.e., an ideal
shape defined by the letter key positions of a word. The recognition system can also be implemented by other handwriting
recognition systems. Reference is made to Charles C. Tappert, Ching Y. Suen, Toru Wakahara, "The State of the Art in On-
Line Handwriting Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 8, August 1990.

After preprocessing, filtering, and normalization in scale, system 10 matches the unknown shape against the known word
patterns (block 240) by computing the distance between the unknown shape and the prototypes using elastic matching or
other algorithms. The word whose prototype best matches the user's input sample above a certainty threshold is returned
as the recognized word at block 245.
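
For illustration, such a dynamic-programming distance might look like the sketch below. It assumes a DTW-style recurrence over the sampled points and a dictionary mapping each word to its prototype polyline; the real matcher's cost function and normalization may differ.

```python
import math

def elastic_distance(unknown, prototype):
    """Minimum cumulative point-to-point distance between two point
    sequences, computed by dynamic programming."""
    n, m = len(unknown), len(prototype)
    d = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(unknown[i - 1], prototype[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch the unknown
                                 d[i][j - 1],      # stretch the prototype
                                 d[i - 1][j - 1])  # match the pair
    return d[n][m]

def best_match(unknown, prototypes, threshold):
    """Return the word whose prototype lies closest, if within threshold."""
    word, dist = min(prototypes.items(),
                     key=lambda kv: elastic_distance(unknown, kv[1]))
    return word if dist <= threshold else None
```

In practice both the unknown shape and each prototype would first be resampled to a fixed number of points and normalized in scale, as the preceding paragraph notes.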

At decision block 250, system 10 determines whether the best matched word found at block 245 is above the desired
recognition threshold. If the shorthand gesture formed by the user is clearly one word in the list of known word patterns,
system 10 displays that word to the user at block 255.

System 10 then returns to block 205 when the user forms another stroke. If, at decision block 250, no single word clearly
matches the shorthand gesture, system 10 displays a message or alternative word choices to the user at block 260.

An alternative method 300 for matching the shorthand gesture against the list of known words and generating the best
matched word is shown in FIG. 2B. If the result of matching the shorthand gesture against a known list of words (block 240) is
a unique match (decision block 265), system 10 proceeds to decision block 250, and proceeds as described earlier in
connection with FIG. 2A.

Otherwise, system 10 presents to the user multiple choices that match the shorthand gesture at block 270. The user then
selects the desired word from the candidates at block 275. System 10 then returns to block 205 and proceeds
as described earlier in connection with FIG. 2A.

One aspect of the present system is its ability to handle ambiguity generated by shorthand gestures. The shape of a
shorthand gesture is not always unique, particularly for some short words. This is illustrated by FIG. 3, which shows
an exemplary virtual keyboard layout, referred to as the ATOMIK keyboard layout. For example, the gesture patterns of the
words "can", "an", and "to" are completely identical when scale and location are ignored. The same is true for the words
"do" and "no".

One method for resolving ambiguities in the alternative embodiment of FIG. 2B is through the use of transient pie menus. As
shown in FIG. 4A, the user gestures the word "can" with a stroke 405 from left to right on a virtual keyboard 410. It should be
noted that the stroke 405 does not need to be carried out over the actual letters c-a-n; rather, it could be made at any
location on the virtual keyboard 410, so long as the stroke 405 connects the three letters c-a-n. While the present invention
is described in terms of a pie menu for exemplification purposes only, it should be clear that other known or available menus
could alternatively be used, such as a linear menu.

The word pattern recognition system 10 finds more than one match to the gesture or stroke 405: "can", "an", and "to" (block
240 of FIG. 2B). In response, system 10 displays a pie menu 415 with all three candidate words in a consistent order (block
270). A user inexperienced with this particular ambiguous word would look at the menu and make a straight stroke 420 in
the direction of the desired candidate on the pie menu, independent of location. With experience, the user will not have
to look at the menu, because the candidates are always presented in the same segment of the pie.

The selection of a choice depends on direction only, regardless of the location of the stroke. An experienced user may simply
remember the second stroke as part of the shorthand for that word. For example, a right horizontal stroke 425 (FIG. 4D)
followed by a stroke 430 to the upper left will always be the word "can". Similarly, left and down is always the
word "to", and a left stroke followed by a stroke to the upper right will always be the word "an".

FIGS. 5A and 5B further illustrate the use of system 10. As seen in the screenshot 500 of a virtual keyboard system
operating with system 10, the user is presented with a virtual keyboard such as the ATOMIK keyboard 505. The user wishes
to enter the word "they". A novice user would tap the keys "t" 510, "h" 515, "e" 520, and "y" 522. As the user becomes more
familiar with the pattern of these letters, the tapping sequence is replaced with the shorthand gesture 525 that follows the
same pattern as tapped for the word "they". Eventually, the user will not need a keyboard for entry, simply entering the
shorthand gesture 525 as shown in FIG. 5B.

Table 1 of the Appendix shows additional exemplary word patterns generated using system 10 based on the ATOMIK virtual
keyboard layout.

It is to be understood that the specific embodiments of the invention that have been described are merely illustrative of
certain applications of the principles of the present invention. Numerous modifications may be made to the system and
method for recognizing word patterns based on a virtual keyboard layout described herein without departing from
the spirit and scope of the present invention. For example, the input unit may also be a fragment of a word (such as "tion"),
an abbreviation (e.g., "asap"), and the like, whose pattern is defined on a virtual keyboard layout just as a word's is. Moreover,
while the present invention is described for illustration purposes only in relation to the ATOMIK virtual keyboard, it should be
clear that the invention is applicable as well to any virtual keyboard layout.
Appendix

TABLE 1 [In the original patent, each entry pairs a word with an image of its shorthand gesture pattern on the ATOMIK layout; the gesture images are not reproducible in this text version.] Words shown: the, knowing, and, about, in, could, inside, think, have, people, has, after, had, right, having, because, he, between, him, before, his, through, it, place, its, become, they, such, them, change, was, point, their, system, not, group, for, number, you, however, your, again, she, world, her, course, with, company, on, while, that, problem, this, against, these, service, those, never, did, house, does, down, done, school, doing, report, are, start, our, country, from, really, which, provide, will, local, were, member, said, within, can, always, whose, follow, went, without, gone, during, other, bring, another, although, being, example, seeing, question, knew.

*****

Other References

- S. Zhai et al., "Performance Optimization of Virtual Keyboards," Human-Computer Interaction, vol. 17 (2, 3), pp. 229-270, 2002.
- G. Kurtenbach et al., "User Learning and Performance with Marking Menus," Proc. CHI 1994, pp. 258-264.
- C. C. Tappert, "Speed, accuracy, flexibility trade-offs in on-line character recognition," Research Report RC13228, IBM T. J. Watson Research Center, Oct. 28, 1987.
- J. Mankoff et al., "Cirrin: A Word-Level Unistroke Keyboard for Pen Input," pp. 213-214, 1998.
- K. Perlin, "Quikwriting: Continuous Stylus-Based Text Entry," pp. 215-216, 1998.
- D. Venolia et al., "T-Cube: A Fast, Self-Disclosing Pen-Based Alphabet," pp. 265-270, 1994.
- D. Goldberg et al., "Touch-Typing With a Stylus," 1993.
- D. Ward et al., "Dasher—A Data Entry Interface Using Continuous Gestures and Language Models," pp. 129-137, 2000.
- Per-Ola Kristensson, "Design and Evaluation of a Shorthand Aided Soft Keyboard," Master's Thesis in Cognitive Science, Department of Computer and Information Science, Linköping University, Sweden, Aug. 28, 2002.

VIRTUAL KEYBOARD
A SEMINAR REPORT
Submitted by
MOHAMMED AJMAL RAHMAN
in partial fulfillment for the award of the degree
of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE & TECHNOLOGY,
KOCHI-682022
NOVEMBER 2008

DIVISION OF COMPUTER SCIENCE & ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE & TECHNOLOGY, KOCHI-682022

CERTIFICATE
Certified that this is a bonafide record of the seminar entitled
VIRTUAL KEYBOARD
done by the following student
MOHAMMED AJMAL RAHMAN
of the VIIth semester, Computer Science and Engineering, in the year 2008, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in Computer Science and Engineering of Cochin University of Science and Technology.

Ms. Remya Mol, Seminar Guide
Dr. David Peter S, Head of the Department
Date: 19/09/2008

ACKNOWLEDGEMENT
I thank my seminar guide, Ms. Remya Mol, Lecturer, CUSAT, for her proper guidance and valuable suggestions. I am indebted to Dr. David Peter, the HOD, Computer Science division, and other faculty members for giving me an opportunity to learn and do this seminar. If not for the above-mentioned people, my seminar would never have been completed successfully. I once again extend my sincere thanks to all of them.
MOHAMMED AJMAL RAHMAN

ABSTRACT
Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50 years or so is the input device, the good old QWERTY keyboard. Virtual Keyboard uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Virtual Devices have developed a flashlight-size gadget that projects an image of a keyboard on any surface and lets people input data by typing on the image. The Virtual Keyboard uses light to project a full-sized computer keyboard onto almost any surface, and disappears when not in use. Used with smart phones and PDAs, the VKEY provides a practical way to do email, word processing and spreadsheet tasks, allowing the user to leave the laptop computer at home.

TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
1. INTRODUCTION
2. QWERTY KEYBOARDS
   2.1 Introduction
   2.2 Working
   2.3 Difficulties
3. VIRTUAL KEYBOARD
   3.1 Introduction
   3.2 Virtual Keyboard Technology
   3.3 Different Types
       3.3.1 Developer VKB
       3.3.2 Canesta
       3.3.3 Senseboard Technologies
       3.3.4 Kitty
       3.3.5 InFocus
4. ADVANTAGES
5. DRAWBACKS
6. APPLICATIONS
7. CONCLUSION
8. REFERENCES

LIST OF FIGURES
Fig 3.1: Virtual keyboard used in PDAs
Fig 3.2: Sensor Module
Fig 3.3: IR-light source
Fig 3.4: Pattern projector
Fig 3.5: Developer VKB
Fig 3.6: Canesta Keyboard
Fig 3.7: Senseboard Technologies
Fig 3.8: Kitty
1. INTRODUCTION
Virtual Keyboard is just another example of today's computer trend of "smaller and faster". Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50 years or so is the input device, the good old QWERTY keyboard. The virtual keyboard technology is the latest development.

The virtual keyboard technology uses sensor technology and artificial intelligence to let users work on any flat surface as if it were a keyboard. Virtual Keyboard lets you easily create multilingual text content on almost any existing platform and output it directly to PDAs or even web pages. Virtual Keyboard, being a small, handy, well-designed and easy-to-use application, turns into a perfect solution for cross-platform text input.

The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, support for copy/paste and similar operations just as in a regular text editor, no change in already existing system language settings, an easy and user-friendly interface and design, and small file size.

The report first gives an overview of the QWERTY keyboards and the difficulties arising from using them. It then gives a description of the virtual keyboard technology and the various types of virtual keyboards in use. Finally the advantages, drawbacks and the applications are discussed.

2. QWERTY KEYBOARDS
2.1 Introduction
QWERTY is the most common keyboard layout on English-language computer and typewriter keyboards. It takes its name from the first six characters seen at the far left of the keyboard's top row of letters.
2.2 Working
The working of a typical QWERTY keyboard is as follows (a simplified sketch of the scan loop appears after this list):
1. When a key is pressed, it pushes down on a rubber dome sitting beneath the key. A conductive contact on the underside of the dome touches (and hence connects) a pair of conductive lines on the circuit below.
2. This bridges the gap between them and allows electric current to flow (the open circuit is closed).
3. A scanning signal is emitted by the chip along the pairs of lines to all the keys. When the signal in one pair becomes different, the chip generates a "make code" corresponding to the key connected to that pair of lines.
4. The code generated is sent to the computer either via a keyboard cable (using on-off electrical pulses to represent bits) or over a wireless connection. It may be repeated.
5. A chip inside the computer receives the signal bits and decodes them into the appropriate keypress. The computer then decides what to do on the basis of the key pressed (e.g. display a character on the screen, or perform some action).
6. When the key is released, a break code (different from the make code) is sent to indicate the key is no longer pressed. If the break code is missed (e.g. due to a keyboard switch) it is possible for the keyboard controller to believe the key is pressed down when it is not, which is why pressing and then releasing the key again will release the key (since another break code is sent).
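
The scan-and-report cycle in steps 3 and 6 can be sketched as follows. The matrix size, scan rate, and the read_switch/emit callbacks are illustrative assumptions, not any particular controller's firmware.

```python
import time

# Hypothetical 2x3 key matrix; real keyboards use a much larger grid.
KEYS = {(0, 0): "A", (0, 1): "B", (0, 2): "C",
        (1, 0): "D", (1, 1): "E", (1, 2): "F"}

def scan_loop(read_switch, emit):
    """Repeatedly scan every line pair; emit a make code when a contact
    closes (step 3) and a break code when it opens again (step 6)."""
    pressed = set()
    while True:
        for pos, key in KEYS.items():
            closed = read_switch(pos)   # True if the dome contact is closed
            if closed and pos not in pressed:
                pressed.add(pos)
                emit(("make", key))
            elif not closed and pos in pressed:
                pressed.remove(pos)
                emit(("break", key))
        time.sleep(0.001)               # rescan the matrix ~1000 times/s
```

Losing a break code leaves the receiving side believing the key is still down until another press-and-release generates a fresh one, matching the recovery behaviour described in step 6.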
2.3 Difficulties
It is now recognized that it is important to be correctly seated while using a computer. A comfortable working position will help with concentration and quality of work, and reduce the risk of long-term problems. This is important for all who use computers, and especially so for those with disabilities.

The increased repetitive motions and awkward postures attributed to the use of computer keyboards have resulted in a rise in cumulative trauma disorders (CTDs) that are generally considered to be the most costly and severe disorders occurring in the office. Lawsuits for arm, wrist, and hand injuries have been filed against keyboard manufacturers, alleging that keyboarding equipment is defectively designed and that manufacturers fail to provide adequate warnings about proper use to avoid injury.

As early as 1926, Klockenberg described how the keyboard layout required the typist to assume body postures that were unnatural, uncomfortable and fatiguing. For example, standard keyboard design forces operators to place their hands in a flat, palm-down position called forearm pronation. The compact, linear key arrangement also causes some typists to place their wrists in a position that is skewed towards the little fingers, called ulnar deviation. These awkward postures result in static muscle loading, increased muscular energy expenditure, reduced muscular waste removal, and eventual discomfort or injury. Researchers also noted that typing on the QWERTY keyboard is poorly distributed between the hands and fingers, causing the weaker ring and little fingers to be overworked.

3. VIRTUAL KEYBOARD
3.1 Introduction
Virtual Keyboard is just another example of today's computer trend of "smaller and faster". Computing is now not limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50 years or so is the input device, the good old QWERTY keyboard. Alternatives came in the form of handwriting recognition, speech recognition, abcd input (for SMS in cell phones) etc. But they all lack the accuracy and convenience of a full-blown keyboard. Speech input has an added issue of privacy. Even folded keyboards for PDAs are yet to catch on. Thus a new generation of virtual input devices is now being paraded, which could drastically change the way we type.

Virtual Keyboard uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Virtual Devices have developed a flashlight-size gadget that projects an image of a keyboard on any surface and lets people input data by typing on the image.

The device detects movement when fingers are pressed down. Those movements are measured and the device accurately determines the intended keystrokes and translates them into text. The Virtual Keyboard uses light to project a full-sized computer keyboard onto almost any surface, and disappears when not in use. The translation process also uses artificial intelligence. Once the keystroke has been decoded, it is sent to the portable device either by cable or via wireless.

Fig 3.1: Virtual keyboard used in PDAs
The Virtual Keyboard uses light to project a full-sized computer keyboard onto almost any surface, and disappears when not in use. Used with smart phones and PDAs, it provides a practical way to do email, word processing and spreadsheet tasks, allowing the user to leave the laptop computer at home. The technology has many applications in various high-tech and industrial sectors. These include data entry and control panel applications in hazardous and harsh environments, and medical markets.

Projection keyboards or virtual keyboards claim to provide the convenience of compactness with the advantages of a full-blown QWERTY keyboard. An interesting use of such keyboards would be in sterile environments, or where silence or low noise is essential, like operation theaters. The advantage of such a system is that you do not need a surface for typing; you can even type in plain air. The company's Virtual Keyboard is designed for anyone who's become frustrated with trying to put information into a handheld but doesn't want to carry a notebook computer around. There is also the provision for a pause function to avoid translating extraneous hand movements, so that users can stop to eat, drink etc.

It is also a superior desktop computer keyboard, featuring dramatically easier-to-learn touch-typing and leaving one hand free for mouse or phone. Combination key presses ("chords") of five main and two extra control keys allow users to type at 25-60 words per minute, with possibly greater speeds achieved through the use of abbreviation expansion software. Most users will find memorizing the chords easy and fun with the included typing tutorial (a hypothetical chord mapping is sketched below). The scanner can keep up with the fastest typist, scanning the projected area over 50 times a second. The keyboard doesn't demand a lot of force, easing strain on wrists and digits. Virtual keyboards solve the problem of sore thumbs that can be caused by typing on the tiny keyboards of various gadgets like PDAs and cell phones. They are meant to meet the needs of mobile computer users struggling with cumbersome, tiny, or nonexistent keyboards. It might help to prevent RSI injuries.

The Virtual Keyboard uses an extremely durable material which is extremely easy to clean. The Virtual Keyboard is not restricted to the QWERTY touch-typing paradigm; adjustments can be made to the software to fit other touch-typing paradigms as well, such as the DVORAK keyboard. It will work with all types of Bluetooth-enabled devices such as PDAs and smart phones, as well as wearable computers. Applications include computer/PDA input, gaming control, TV remote control, and musical applications. Thus virtual keyboards will make typing easier, faster, and almost a pleasure.
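
To make the chording idea concrete, here is a purely hypothetical sketch of how five main keys could encode characters as combinations; the mapping is invented for illustration and is not the product's documented layout.

```python
# Chords are sets of the five main keys: thumb T, index I, middle M,
# ring R, little L. This table is an assumption for illustration only.
CHORDS = {
    frozenset("I"): "a",
    frozenset("IM"): "b",
    frozenset("IMR"): "c",
    frozenset("T"): " ",
    frozenset("TL"): "\n",
}

def decode_chord(pressed, shift=False):
    """Translate one simultaneous key combination into a character;
    one of the two extra control keys could plausibly act as shift."""
    ch = CHORDS.get(frozenset(pressed), "?")
    return ch.upper() if shift else ch

print(decode_chord({"I", "M"}))        # -> b
print(decode_chord({"I", "M"}, True))  # -> B
```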

3.2 Virtual Keyboard Technology
This system comprises three modules:
1. The sensor module,
2. The IR-light source, and
3. The pattern projector.

Sensor module:
Fig 3.2: Sensor Module
The Sensor Module serves as the eyes of the Keyboard Perception technology. It operates by locating the user's fingers in 3-D space and tracking the intended keystrokes or mouse movements. Mouse tracking and keystroke information is processed and can then be output to the host device via a USB or other interface.

Electronic Perception Technology:
Electronic perception technology enables ordinary electronic devices to see the world around them so they can perceive and interact with it. Now everyday electronic devices in a variety of markets can perceive users' actions, gaining functionality and ease of use.

The tiny electronic perception chips and embedded software work by developing a 3D distance map of nearby objects in real time. This information is factored through an on-chip processor running imaging software that translates the image into defined events before sending it off-chip for application-specific processing. It is an action that is continually repeated, generating over 30 frames of 3D information per second.

Electronic perception technology has a fundamental advantage over classical image processing, which struggles to construct three-dimensional representations using complex mathematics and images from multiple cameras or points of view. This single-chip contour mapping approach results in a high reduction of complexity, making it possible to embed the application-independent processing software directly into the chips themselves, so they may be used in the most modestly priced, and even pocket-sized, electronic devices.
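
As a rough illustration of "translating the image into defined events" for a keyboard, the sketch below scans one 3D distance frame for fingertips touching the surface. The thresholds, frame format, and key-region masks are assumptions, not Canesta's actual pipeline.

```python
import numpy as np

TOUCH_MM = 8.0    # assumed: a fingertip within 8 mm of the surface is down
MIN_PIXELS = 20   # assumed: minimum blob size to count as a finger

def keys_touched(depth_mm, surface_mm, key_regions):
    """Given one depth frame (2D array of distances in mm) and the known
    distance to the typing surface, return the key regions being touched.
    key_regions maps a key name to a boolean mask of the frame's shape."""
    height = surface_mm - depth_mm                 # clearance above surface
    touching = (height >= 0) & (height < TOUCH_MM)
    return [name for name, mask in key_regions.items()
            if np.count_nonzero(touching & mask) >= MIN_PIXELS]
```

Run at the 30-plus frames per second quoted above, a real pipeline would additionally track fingers across frames and debounce touches before reporting a keystroke.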

IR-light source:
Fig 3.3: IR-light source
The Infrared Light Source emits a beam of infrared light. This light beam is designed to overlap the area on which the keyboard pattern is projected or printed. This is done so as to illuminate the user's fingers with the infrared light beam, which helps in recognizing the hand movements and the pressing of keys. The light beam facilitates scanning the image. Accordingly, the information is passed on to the sensor module, which decodes it.

An invisible infrared beam is projected above the virtual keyboard. A finger makes a keystroke on the virtual keyboard. This breaks the infrared beam, and infrared light is reflected back to the projector. The reflected infrared beam passes through an infrared filter to the camera. The camera photographs the angle of the incoming infrared light. The sensor chip in the sensor module determines where the infrared beam was broken, and the detected co-ordinates determine the actions or characters to be generated.
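
The angle measurement described above supports a simple triangulation: knowing how high the camera sits above the surface and the angle of the incoming reflection, the distance to the break point follows from basic trigonometry. The sketch below is a geometric illustration only; the mounting height, row pitch, and offsets are invented values.

```python
import math

def touch_distance(camera_height_mm, incoming_angle_deg):
    """Distance along the surface to where the beam was broken, given the
    camera's height and the reflection's angle below the horizontal."""
    return camera_height_mm / math.tan(math.radians(incoming_angle_deg))

def key_row(distance_mm, first_row_mm=60.0, row_pitch_mm=19.0):
    """Map that distance to a keyboard row (19 mm row pitch is assumed)."""
    return int((distance_mm - first_row_mm) // row_pitch_mm)

d = touch_distance(30.0, 11.3)   # a reflection 11.3 degrees below horizontal
print(d, key_row(d))             # ~150 mm from the module -> row 4
```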

The pattern projector:
Fig 3.4: Pattern projector
The Pattern Projector or optional printed image presents the image of the keyboard or mouse zone of the system. This image can be projected on any flat surface. The projected image is that of a standard QWERTY keyboard, with all the keys and control functions as on a physical keyboard.

The projector features a wide-angle lens so that a large pattern can be projected from relatively low elevations. A printed image with replaceable templates allows system flexibility, permitting almost any kind of keyboard configuration for greater functionality.

In some types of virtual keyboards, a second infrared beam is not necessary. Here the projector itself takes the inputs, providing dual functionality. A sensor or camera in the projector picks up the finger movements, and passes the information on to the sensor module.

3.3 Different Types
There are different types of virtual keyboards, manufactured by various companies, which provide different levels of functionality. The different types of virtual keyboards are:

3.3.1 Developer VKB
Fig 3.5: Developer VKB
Its full-size keyboard can be projected onto any surface and uses laser technology to translate finger movements into letters. The compact unit was developed working with Siemens Procurement Logistics Services, and rechargeable batteries similar to those in cell phones power it. The keyboard is full size and the letters are in a standard format. As a Class 1 laser, the output power is below the level at which eye injury can occur.

3.3.2 Canesta
The Canesta Keyboard is a laser-projected keyboard in which the same laser is also used to scan the projection field and extract 3D data. Hence, the user sees the projected keyboard, and the device "sees" the position of the fingers over the projected keys. Canesta also has a chip set, Electronic Perception Technology, which it supplies for third parties to develop products using the projection/scanning technology. Canesta appears to be the most advanced in this class of technology and the only one shipping product. It has a number of patents pending on its technology.
Fig 3.6: Canesta Keyboard
3.3.3 Senseboard Technologies
The Senseboard SB 04 technology is an extreme case of a hybrid approach. The sensing transducer is neither a laser scanner nor a camera. Rather, it is a bracelet-like transducer that is worn on the hands and captures hand and finger motion. In fact, as demonstrated, the technology does not incorporate a projection component at all; rather, it relies on the user's ability to touch type, and then infers the virtual row and key being typed by sensing relative hand and finger movement. The system obviously could be augmented to aid non-touch typists, for example, by the inclusion of a graphic representation of the virtual keyboard under the hands/fingers. In this case, the keyboard graphically represented would not be restricted to a conventional QWERTY keyboard, and the graphical representation could be projected or even printed on a piece of paper. I include it here, as it is a relevant related input transducer that could be used with a projection system. The technology has patents pending, and is currently in preproduction proof-of-concept form.
Fig 3.7: Senseboard Technologies
Sensors made of a combination of rubber and plastic are attached to the user's palms in such a way that they do not interfere with finger motions. Through the use of Bluetooth technology, the "typed" information is transferred wirelessly to the computer, where a word processing program analyzes and interprets the signals into readable text. The device is currently usable via existing ports on personal digital assistants (PDAs) from Palm and other manufacturers. Senseboard officials say it eventually will be compatible with most brands of pocket PCs, mobile phones and laptop computers.

3.3.4 KITTY
KITTY is a finger-mounted keyboard for data entry into PDAs, pocket PCs and wearable computers, which has been developed at the University of California, Irvine.
Fig 3.8: Kitty
KITTY, an acronym for Keyboard-Independent Touch-Typing, is a finger-mounted keyboard that uses touch typing as a method of data entry. The device targets the portable computing market, and in particular its wearable computing systems, which are in need of a silent, invisible data entry system based on touch typing. The new device combines the idea of a finger-mounted coding device with the advantages of a system that uses touch typing.

3.3.5 InFocus
InFocus is one of the leading companies providing video and data projectors. Their projectors are conventional, in that they do not use laser technology. This has the advantage of delivering high-quality colour images with a mature technology. However, it has the disadvantage of larger size, lower contrast, and higher power requirements, compared to laser projection systems. In 2000, InFocus merged with Proxima, which had been one of its competitors. I include InFocus/Proxima in this survey not only because they make projectors: in their early days, Proxima developed one of the first commercially available projection/vision systems. It was called Cyclops, and they still hold a patent on the technology. Cyclops augmented the projector by adding a video camera that was registered to view the projection area. The video camera had a band-pass filter over the lens, which passed only the wavelength of a laser pointer. The system therefore enabled the user to interact with the projected image, using a provided laser pointer as the input device. The camera detected the presence of the laser pointer on the surface, and calculated its coordinates relative to the currently projected image. Furthermore, the laser pointer had two intensity levels, which enabled the user not only to point, but to have the equivalent of a mouse button, with the vision system interpreting the two levels as distinguishing button-up and button-down events.
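
The Cyclops detection step can be sketched as a simple brightness analysis of the band-pass-filtered camera frame; the two threshold values below are assumptions chosen to illustrate the dual-intensity trick.

```python
import numpy as np

POINT_LEVEL = 120   # assumed brightness of the pointer's low setting
CLICK_LEVEL = 220   # assumed brightness of its high ("button down") setting

def find_pointer(frame):
    """Locate the laser dot in a filtered grayscale frame and report
    (x, y, button_down), or None if no dot is visible."""
    ys, xs = np.nonzero(frame > POINT_LEVEL)
    if xs.size == 0:
        return None
    x, y = xs.mean(), ys.mean()                # centroid of the bright blob
    button_down = bool(frame[ys, xs].max() > CLICK_LEVEL)
    return x, y, button_down
```

The reported coordinates would then be mapped through the projector-camera registration (for example, a homography) into the coordinate system of the currently projected image.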

4. ADVANTAGES
1. It can be projected on any surface, or you can type in plain air.
2. It can be useful in places like operation theaters where low noise is essential.
3. Typing does not require a lot of force, easing the strain on wrists and hands.
4. The Virtual Keyboard is not restricted to the QWERTY touch-typing paradigm; adjustments can be made to the software to fit other touch-typing paradigms as well.
5. No driver software is necessary; it can be used as a plug-and-play device.
6. High battery life. The standard coin-sized lithium battery lasts about eight months before needing to be replaced.

5. DRAWBACKS
1. The virtual keyboard is hard to get used to. Since it involves typing in thin air, it requires a little practice. Only people who are good at typing can use a virtual keyboard efficiently.
2. It is very costly, ranging from 150 to 200 dollars.
3. The room in which the projected keyboard is used should not be very bright, so that the keyboard remains properly visible.

6. APPLICATIONS
1. High-tech and industrial sectors.
2. Used with smart phones and PDAs for email, word processing and spreadsheet tasks.
3. Operation theatres.
4. As computer/PDA input.
5. Gaming control.
6. TV remote control.

7. CONCLUSION
Virtual Keyboard uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Projection keyboards or virtual keyboards claim to provide the convenience of compactness with the advantages of a full-blown QWERTY keyboard. The company's Virtual Keyboard is designed for anyone who's become frustrated with trying to put information into a handheld but doesn't want to carry a notebook computer around.

Canesta appears to be the most advanced in this class of technology. Different types of virtual keyboards suit different typing styles. Thus virtual keyboards will make typing easier, faster, and almost a pleasure.

8. REFERENCES
1. http://www.newscom.com/cgi-bin/prnh
