5 Pen PC Technology
CERTIFICATE

This is to certify that the seminar report on the topic "5 Pen PC Technology", submitted by Mr. Subrat Kumar Nayak (Roll No. 2106151015) and Mr. Acharya Kumar Bahubalindra (Roll No. 2106151016) under the supervision of Mr. Sanjib Kumar Nayak, Assistant Professor, Department of Computer Application, in partial fulfilment of the requirements for the award of the degree of Master in Computer Application, has been found satisfactory and is approved for submission.
CERTIFICATE
This is to certify that Subrat Kumar Nayak and Acharya Kumar Bahubalindra, students of Master in Computer Application (MCA) 3rd semester, bearing Roll No. 2106151015 and 2106151016 respectively, have submitted their seminar entitled "5 Pen PC Technology" towards partial fulfilment of the requirements for the award of the degree of Master in Computer Application (MCA) during the session 2022-23 under my guidance.
Guide
Department of Computer Application
VEER SURENDRA SAI UNIVERSITY OF TECHNOLOGY
(Formerly, University College of Engineering, Burla)
Burla, Sambalpur, Odisha, 768018
Certificate of Completion
This is to certify that Subrat Kumar Nayak and Acharya Kumar Bahubalindra, students of Master in
Computer Application (MCA) 3rd semester, bearing Roll No. 2106151015 & 2106151016,
respectively, have presented and successfully completed their seminar entitled "5 Pen PC
Technology" in presence of the undersigned dignitaries.
H.O.D
ACKNOWLEDGEMENT
I wish to express my heartfelt thanks to my seminar guide Mr. Sanjib Kumar Nayak for his valuable suggestions, keen interest and co-operation. I am greatly indebted to him for his constructive and helpful guidance from time to time during the progress of the seminar, without which the seminar would not have been completed.

I also wish to thank the other faculty members who helped me, directly or indirectly, to complete this seminar. Finally, I express my sincere gratitude and thanks to the department fraternity for their technical and non-technical help, encouragement and suggestions during the tenure of this seminar. At last, I offer my gratitude to my friends for their hearty help and interaction.
ABSTRACT

When writing a quick note, pen and paper are still the most natural things to use. The 5 Pen PC technology, with its digital pen and paper, makes it possible to obtain a digital copy of handwritten information and have it sent to digital devices via Bluetooth.

P-ISM (Pen-style Personal Networking Gadget Package) is a new concept that is under development at NEC Corporation. It is a new invention in computing and is associated with the communication field, and it is expected to have a great impact on the computer field. In this device, Bluetooth is the main interconnection between the different peripherals.
P-ISM is a gadget package including five functions: a pen-style cellular phone with a handwriting data
input function, virtual keyboard, a very small projector, camera scanner, and personal ID key with
cashless pass function. P-ISMs are connected with one another through short-range wireless
technology. The whole set is also connected to the Internet through the cellular phone function. This
personal gadget in a minimalist pen style enables the ultimate ubiquitous computing.
LIST OF CONTENTS

1. INTRODUCTION ............................................................. 1
   1.1. COMPONENT NAME ...................................................... 2
   1.2. HISTORY ............................................................. 3
2. CPU PEN .................................................................. 4
   2.1. CONTROL UNIT ........................................................ 5
   2.2. MICROPROCESSOR ...................................................... 6
   2.3. OPERATIONS .......................................................... 7
   2.4. DESIGN & IMPLEMENTATION ............................................. 9
   2.5. CLOCK RATE .......................................................... 11
   2.6. PERFORMANCE ......................................................... 12
3. COMMUNICATION PEN ........................................................ 14
   3.1. BLUETOOTH ........................................................... 15
   3.2. IEEE 802.11 ......................................................... 17
   3.3. CELLULAR NETWORK .................................................... 23
4. VIRTUAL KEYBOARD ......................................................... 24
   4.1. TYPES ............................................................... 25
   4.2. SECURITY CONSIDERATION .............................................. 26
5. DIGITAL CAMERA ........................................................... 27
   5.1. TYPES OF DIGITAL CAMERA ............................................. 28
CHAPTER 1.
INTRODUCTION
P-ISM is a gadget package including five functions: a CPU pen, a communication pen with a cellular phone function, a virtual keyboard, a very small projector, and a camera. P-ISMs are connected with one another through short-range wireless technology. The whole set is also connected to the Internet through the cellular phone function. This personal gadget in a minimalist pen style enables the ultimate ubiquitous computing.
1.1. Component Name

Component            Function                                                   Concept Reliability
CPU Pen              Computing engine                                           Open
Communications Pen   Cell phone, pressure-sensitive pointing device, pointer    Near term
                     and earpiece; communications using Bluetooth
Display              LED projector, A4 size, approx. 1024 x 768                 Slightly farther out than the phone and camera
Keyboard             Projector keyboard with 3D IR sensor                       Slightly farther out than the phone and camera
Camera               Digital camera                                             Near term
Battery              Charger and mass-storage based                             Open
1.2. History
The conceptual prototype of the “pen” computer was built in 2003. The prototype device, dubbed the
“P-ISM”, was a “Pen-style Personal Networking Gadget” created in 2003 by Japanese technology
company NEC. The P-ISM was featured at the 2003 ITU Telecom World held in Geneva, Switzerland.
The designer of the 5 Pen Technology, Toru Ichihashi, said that in developing this concept he asked himself, "What is the future of IT when it is small?" The pen was a logical choice. He also wanted a product that you could touch and feel. Further, the intent was to allow for an office anywhere.

However, although a conceptual prototype of the "pen" computer was built in 2003, such devices are not yet available to consumers.

An article about the device published on the Wave Report website in 2004 explains: "At ITU Telecom World we got a sample of another view by NEC. It is based on the pen and called P-ISM. This concept is so radical that we went to Tokyo to learn more."
"The design concept uses five different pens to make a computer. One pen is a CPU, one creates a virtual keyboard, another projects the visual output and thus the display, another is a communicator (a phone), and another is a camera. All five pens can rest in a holding block which recharges the batteries and holds the mass storage. Each pen communicates with the others over short-range wireless."
"A Pen-style Personal Networking Gadget Package: it seems that information terminals are getting infinitely smaller. However, we will continue to manipulate them with our hands for now. We have visualized the connection between the latest technology and the human, in the form of a pen. P-ISM is a gadget package including five functions: a pen-style cellular phone with a handwriting data input function, virtual keyboard, a very small projector, camera scanner, and personal ID key with cashless pass function. P-ISMs are connected with one another through short-range wireless technology. The whole set is also connected to the Internet through the cellular phone function. This personal gadget in a minimalistic pen style enables the ultimate ubiquitous computing."
However, the prototype displayed at ITU Telecom World was apparently the only sample that was built, and it reportedly cost $30,000. Thus, while the prototype may have proved that such technology is feasible, it is currently unclear when, or even if, personal computers of this type will become available to the public. Several years on from the initial launch of the P-ISM conceptual prototype, there seems to be little information available about future plans.
CHAPTER 2.
CPU PEN
The function of the CPU is performed by one of the pens, also known as the computing engine. It has a dual-core processor embedded in it and works with the Windows operating system. The central processing unit (CPU) is the portion of a computer system that carries out the instructions of a computer program and is the primary element carrying out the computer's functions. The central processing unit carries out each instruction of the program in sequence, to perform the basic arithmetical, logical and input/output operations of the system. The term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframe computers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
[Figure: CPU, core memory, and external bus interface of a DEC PDP-8/I, made of medium-scale integrated circuits.]

The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip". At first only very basic, non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands.
In 1964 IBM introduced its System/360 computer architecture, which was used in a series of computers that could run the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In the same year (1964), Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line, which originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability and the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.
2.2. Microprocessor
The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs. Since the introduction of the first commercially available microprocessor (the Intel 4004) in 1970 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction-set-compatible microprocessors. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs, usually just one. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.
While the complexity, size, construction, and general form of CPUs have changed drastically over the years, the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing, such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
2.3. Operations
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory.

There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures.
The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as the operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable, so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
After the fetch and decode steps, the execute step is performed. During this step, various portions of
the CPU are connected so they can perform the desired operation. If, for instance, an addition
operation was requested, the arithmetic logic unit (ALU) will be connected to a set of inputs and a set
of outputs. The ALU contains the circuitry to perform simple arithmetic and logical operations on the
inputs (like addition and bitwise operations). If the addition operation produces a result too large for
the CPU to handle, an arithmetic overflow flag in a flags register may also be set.
The final step, writeback, simply "writes back" the result of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly producing result data. These are generally called "jumps" and facilitate behaviour like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.
After the execution of the instruction and the writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the memory access stage of the pipeline.
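As an aside, the fetch-decode-execute-writeback cycle described above can be sketched in Python for a hypothetical accumulator machine; the tiny instruction set (LOAD, ADD, JMP, HALT) is invented purely for illustration and is not part of any real CPU or of the P-ISM design.

    # Toy CPU illustrating the fetch-decode-execute-writeback cycle.
    memory = [
        ("LOAD", 5),   # acc = 5
        ("ADD", 3),    # acc = acc + 3
        ("JMP", 3),    # jump to address 3
        ("HALT", 0),
    ]

    pc = 0                           # program counter
    acc = 0                          # accumulator register
    flags = {"overflow": False}      # a minimal flags register

    while True:
        opcode, operand = memory[pc]     # fetch: read the instruction at the PC
        pc += 1                          # PC advances by the instruction length (1 word here)
        # decode + execute
        if opcode == "LOAD":
            result = operand
        elif opcode == "ADD":
            result = acc + operand
            flags["overflow"] = result > 255   # set a flag, as a real ALU would
        elif opcode == "JMP":
            pc = operand                 # jumps modify the PC instead of producing data
            continue
        elif opcode == "HALT":
            break
        acc = result                     # writeback: store the result in a register

    print(acc, flags)                    # 8 {'overflow': False}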
2.4. Design & Implementation

The way a CPU represents numbers is a design choice that affects the most basic ways in which the device functions. Some early digital computers used an electrical model of the common decimal (base ten) numeral system to represent numbers internally. A few other computers have used more exotic numeral systems like ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.
[Figure: MOS 6502 microprocessor in a dual in-line package, an extremely popular 8-bit design.]

Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a bit refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "word size", "bit width", "data path width", or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits (each digit having two possible values), that is, 2^8 or 256 discrete numbers. In effect, integer size sets a hardware limit on the range of integers the software run by the CPU can utilize.
Integer range can also affect the number of memory locations the CPU can address (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2^32 octets, or 4 GB. This is a very simple view of CPU address space, and many designs use more complex addressing methods like paging in order to locate more memory than their integer range would allow with a flat address space.
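As a small worked check of the figures above (2^8 values for an 8-bit word, 2^32 octets for a 32-bit address), sketched in Python:

    # Number of distinct values an n-bit integer can represent, and the memory
    # reachable with an n-bit address when each address names one octet.
    for bits in (8, 16, 32, 64):
        print(f"{bits}-bit word: {2 ** bits} distinct values")

    addressable_octets = 2 ** 32          # one octet per 32-bit address
    print(addressable_octets / 2 ** 30)   # 4.0 -> 4 GB of addressable memory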
Higher levels of integer range require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and general expense. It is not at all uncommon, therefore, to see 4- or 8-bit microcontrollers used in modern applications, even though CPUs with much higher range (such as 16-, 32-, 64-, even 128-bit) are available. The simpler microcontrollers are usually cheaper,
use less power, and therefore dissipate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra range (most often the additional address space) are more significant and often affect design choices. To gain some of the advantages afforded by both lower and higher bit lengths, many CPUs are designed with different bit widths for different portions of the device. For example, the IBM System/370 used a CPU that was primarily 32-bit, but it used 128-bit precision inside its floating-point units to facilitate greater accuracy and range in floating-point numbers. Many later CPU designs use similar mixed bit widths, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating-point capability is required.
2.5. Clock Rate

The clock rate is the speed at which a microprocessor executes instructions. Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components. The CPU requires a fixed number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the more instructions the CPU can execute per second.

Most CPUs, and indeed most sequential logic devices, are synchronous in nature. That is, they are designed around, and operate on, assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals need to move through the various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.
This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. By setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism.
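Purely as an illustration of this constraint, with made-up numbers, a designer might check the relation between the worst-case propagation delay and the clock period as follows:

    # The clock period must exceed the worst-case propagation delay through the
    # slowest logic path; that delay therefore caps the usable clock rate.
    worst_case_delay_s = 0.4e-9           # assume 0.4 ns through the slowest path
    max_clock_hz = 1 / worst_case_delay_s
    print(max_clock_hz / 1e9)             # 2.5 -> at most ~2.5 GHz for this design

    clock_hz = 2.0e9                      # a 2 GHz clock...
    period_s = 1 / clock_hz
    print(period_s > worst_case_delay_s)  # True: the 0.5 ns period leaves margin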
However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs require multiple identical clock signals to be provided, in order to avoid delaying any single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rates increase, so does heat dissipation, causing the CPU to require more effective cooling solutions.
One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable late CPU design that uses clock gating is that of the IBM PowerPC-based Xbox 360; it utilizes extensive clock gating to reduce the power requirements of the aforementioned video game console in which it is used. Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.
2.6. Performance
The performance or speed of a processor depends on the clock rate and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, various standardized tests such as SPECint
have been developed to attempt to measure the real, effective performance in commonly used applications.
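As a rough worked example of the relation just described (the clock rate and IPC figures below are assumed, not measured):

    # Instructions per second follow from the clock rate and the average
    # instructions completed per clock cycle (IPC).
    clock_hz = 3.0e9      # 3 GHz clock
    ipc = 1.5             # assumed average instructions per cycle
    ips = clock_hz * ipc
    print(ips / 1e9)      # 4.5 -> 4.5 billion instructions per second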
CHAPTER 3.
COMMUNICATION PEN
P-ISMs are connected with one another through short-range wireless technology. The whole set is also connected to the Internet through the cellular phone function. The pens are connected through tri-wireless modes (Bluetooth and 802.11 B/G) and can exchange terabytes of data, exceeding the capacity of today's hard disks.

This is very effective because we are able to connect whenever we need to, without wires. The pens operate in the 2.4 GHz ISM frequency band (although they use different access mechanisms). The Bluetooth mechanism is used for exchanging signal-status information between two devices. Although techniques have been developed that do not require communication between the two devices (such as Bluetooth's Adaptive Frequency Hopping), the most efficient and comprehensive solution for the most serious problems can be accomplished by silicon vendors, who can implement information-exchange capabilities within their Bluetooth designs.
3.1. Bluetooth
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands (1 MHz each, centred from 2402 to 2480 MHz) in the range 2400-2483.5 MHz (allowing for guard bands). This range is the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
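For illustration, the 79 hop channels mentioned above can be listed directly, assuming the usual spacing of 2402 + k MHz:

    # Classic Bluetooth hops across 79 channels of 1 MHz each,
    # centred at 2402 + k MHz for k = 0 .. 78.
    channels_mhz = [2402 + k for k in range(79)]
    print(len(channels_mhz))                  # 79
    print(channels_mhz[0], channels_mhz[-1])  # 2402 2480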
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available; subsequently, since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a "BR/EDR radio".
Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to seven slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long, but in all cases the master's transmission will begin in even slots and the slave's in odd slots.
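A minimal sketch of this slot arithmetic, assuming single-slot packets, is shown below:

    # Bluetooth basic clock: 312.5 us ticks; two ticks make one 625 us slot;
    # the master transmits in even-numbered slots, the slave in odd ones.
    TICK_US = 312.5
    SLOT_US = 2 * TICK_US          # 625 us
    SLOT_PAIR_US = 2 * SLOT_US     # 1250 us

    def transmitter(slot_number):
        # single-slot packets: master owns even slots, slave owns odd slots
        return "master" if slot_number % 2 == 0 else "slave"

    print(SLOT_US, SLOT_PAIR_US)                # 625.0 1250.0
    print([transmitter(s) for s in range(4)])   # ['master', 'slave', 'master', 'slave']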
Bluetooth provides a secure way to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.

A master Bluetooth device can communicate with up to seven devices in a piconet (an ad-hoc computer network using Bluetooth technology). The devices can switch roles, by agreement, and the slave can become the master at any time.

At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. The Bluetooth Core Specification provides
for the connection of two or more piconets to form a scatternet, in which certain devices serve as bridges, simultaneously playing the master role in one piconet and the slave role in another.
Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth dongles, however, have limited capabilities, offering only the Bluetooth Enumerator and a less-powerful Bluetooth Radio incarnation. Such devices can link computers via Bluetooth over a distance of up to 100 meters, but they do not offer as many services as modern adapters do.
A personal computer that does not have embedded Bluetooth can be used with a Bluetooth
adapter that will enable the PC to communicate with other Bluetooth devices (such as mobile phones,
mice and keyboards). While some desktop computers and most recent laptops come with a built-in
Bluetooth radio, others will require an external one in the form of a dongle.
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth
allows multiple devices to communicate with a computer over a single adapter.
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0, which includes Classic Bluetooth, Bluetooth high speed and Bluetooth low energy protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols. This version was adopted as of June 30, 2010.

Cost-reduced single-mode chips, which will enable highly integrated and compact devices, will feature a lightweight Link Layer providing ultra-low-power idle mode operation, simple device discovery and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost. The Link Layer in these controllers will enable Internet-connected sensors to schedule Bluetooth low energy traffic between Bluetooth transmissions.
Many of the services offered over Bluetooth can expose private data or allow the connecting party to control the Bluetooth device. For security reasons it is therefore necessary to control which devices are allowed to connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection automatically, without user intervention, as soon as they are in range.

To resolve this conflict, Bluetooth uses a process called pairing. Two devices need to be paired to communicate with each other. The pairing process is typically triggered automatically the first time
a device receives a connection request from a device with which it is not yet paired (in some cases the device user may need to make the device's Bluetooth link visible to other devices first). Once a pairing has been established, it is remembered by the devices, which can then connect to each other without user intervention. When desired, the pairing relationship can later be removed by the user.
3.2. IEEE 802.11

IEEE 802.11 is a set of standards for implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6, and 5 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base current version of the standard is IEEE 802.11-2007.
The 802.11 family consists of a series of over-the-air modulation techniques that use the same basic protocols, which are amendments to the original standard. 802.11-1997 was the first wireless networking standard, but 802.11b was the first widely accepted one, followed by 802.11g and 802.11n. Security was originally purposefully weak due to export requirements of some governments, and was later enhanced via the 802.11i amendment after governmental and legislative changes. 802.11n is a new multi-streaming modulation technique. Other standards in the family (c-f, h, j) are service amendments and extensions or corrections to previous specifications.
802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of the US Federal Communications Commission Rules and Regulations. Because of this choice of frequency band, 802.11b and g equipment may occasionally suffer interference from microwave ovens, cordless telephones and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum and OFDM signaling methods, respectively. 802.11a uses the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping channels, rather than the 2.4 GHz ISM frequency band, where all channels overlap. Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment.
The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US,
802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules
and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the
2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under
Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content
or encryption.
Current 802.11 standards define "frame" types for use in the transmission of data as well as the management and control of wireless links. Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload and frame check sequence (FCS). Some frames may not have a payload. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. The frame control field is further subdivided into the following subfields:
• Protocol Version : Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use.
• Type : Two bits identifying the type of WLAN frame. Control, Data and Management are the frame types defined in IEEE 802.11.
• Sub Type : Four bits providing additional discrimination between frames. Type and subtype together identify the exact frame.
• To DS and From DS : Each is one bit in size. They indicate whether a data frame is headed for a distribution system. Control and management frames set these values to zero. All data frames will have one of these bits set. However, communication within an IBSS network always sets these bits to zero.
• More Fragments : The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set.
• Retry : Sometimes frames require retransmission, and for this there is a Retry bit which is set to one when a frame is resent. This aids in the elimination of duplicate frames.
• Power Management : This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-save bit.
• More Data : The More Data bit is used to buffer frames received in a distributed system. The access point uses this bit to facilitate stations in power-save mode. It indicates that at least one frame is available, and it addresses all connected stations.
• WEP : The WEP bit is modified after processing a frame. It is toggled to one after a frame has been decrypted, or if no encryption is set it will have already been one.
• Order : This bit is only set when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order, as doing so causes a transmission performance penalty.
The next two bytes are reserved for the Duration/ID field. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is used for filtering purposes by the receiver.

The Sequence Control field is a two-byte section used for identifying message order as well as eliminating duplicate frames. The first 4 bits are used for the fragmentation number and the last 12 bits are the sequence number.
An optional two-byte Quality of Service control field was added with 802.11e. The Frame Body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers.
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for an integrity check of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission.
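To make the field layout above concrete, the following sketch unpacks the 16-bit frame control field and the 16-bit sequence control field from raw integer values. The bit positions assumed here follow the usual 802.11 ordering (protocol version in the two least-significant bits, and so on), and the example values are invented.

    # Unpack an 802.11 frame control word (16 bits) into its subfields.
    def parse_frame_control(fc):
        return {
            "protocol_version": fc & 0b11,
            "type":            (fc >> 2) & 0b11,     # 0 = management, 1 = control, 2 = data
            "subtype":         (fc >> 4) & 0b1111,
            "to_ds":           (fc >> 8) & 1,
            "from_ds":         (fc >> 9) & 1,
            "more_fragments":  (fc >> 10) & 1,
            "retry":           (fc >> 11) & 1,
            "power_mgmt":      (fc >> 12) & 1,
            "more_data":       (fc >> 13) & 1,
            "wep":             (fc >> 14) & 1,
            "order":           (fc >> 15) & 1,
        }

    def parse_sequence_control(sc):
        # first 4 bits: fragment number; remaining 12 bits: sequence number
        return {"fragment": sc & 0xF, "sequence": (sc >> 4) & 0xFFF}

    # A made-up data frame header value: type = 2 (data), To DS bit set.
    print(parse_frame_control(0b0000_0001_0000_1000))
    print(parse_sequence_control(0x0135))   # fragment 5, sequence 19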
Management frames allow for the maintenance of communication. Some common 802.11 subtypes include :
• Authentication Frame : 802.11 authentication begins with the WNIC sending an authentication frame to the access point containing its identity. With open system authentication, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own. With shared key authentication, after the WNIC sends its initial authentication request it will receive an authentication frame from the access point containing challenge text. The WNIC sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point ensures the text was encrypted with the correct key by decrypting it with its own key. The result of this process determines the WNIC's authentication status.
• Association Request Frame : Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC.
• Association Response Frame : Sent from an access point to a station containing the acceptance or rejection of an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates.
• Beacon Frame : Sent periodically from an access point to announce its presence and provide the SSID,
and other parameters for WNICs within range.
• Deauthentication Frame : Sent from a station wishing to terminate a connection with another station.
• Disassociation Frame : Sent from a station wishing to terminate a connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table.
• Probe Response Frame : Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame.
• Reassociation Request Frame : A WNIC sends a reassociation request when it drops out of range of the currently associated access point and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point.
• Reassociation Response Frame : Sent from an access point containing the acceptance or rejection of a WNIC reassociation request frame. The frame includes information required for association, such as the association ID and supported data rates.
Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include :
• Acknowledgement (ACK) Frame : After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found. If the sending station does not receive an ACK frame within a predetermined period of time, the sending station will resend the frame.
• Request to Send (RTS) Frame : The RTS and CTS frames provide an optional collision reduction scheme for access points with hidden stations. A station sends an RTS frame as the first step in a two-way handshake required before sending data frames.
• Clear to Send (CTS) Frame : A station responds to an RTS frame with a CTS frame. It provides clearance for the requesting station to send a data frame. The CTS provides collision control management by including a time value for which all other stations are to hold off transmission while the requesting station transmits.
In 2001, a group from the University of California, Berkeley, presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks.
The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously this work was handled as part of the broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA) based on a subset of the then-current IEEE 802.11i draft. These started to appear in products in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and it uses government-strength encryption, the Advanced Encryption Standard (AES), instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES Pre-Shared Key), and for the enterprise space it is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS.
In January 2005, the IEEE set up yet another task group, TGw, to protect management and broadcast frames, which previously were sent unsecured.
3.3. Cellular Network

A cellular network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver known as a cell site or base station. When joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g. mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
An example of a simple non-telephone cellular system is an old taxi driver's radio system, where the taxi company has several transmitters based around a city that can communicate directly with each taxi.
In a cellular radio system, a land area to be supplied with radio service is divided into regular-shaped cells, which can be hexagonal, square, circular or some other irregular shape, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies, and the same frequencies are not reused in adjacent neighboring cells, as that would cause co-channel interference.
The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the fact that the same radio frequency can be reused in a different area for a completely different transmission. If there is a single plain transmitter, only one transmission can be used on any given frequency. Unfortunately, there is inevitably some level of interference from the signals of the other cells which use the same frequency. This means that, in a standard FDMA system, there must be at least a one-cell gap between cells which reuse the same frequency.
In the simple case of the taxi company, each radio had a manually operated channel selector knob to
tune to different frequencies. As the drivers moved around, they would change from channel to
channel. The drivers know which frequency covers approximately what area. When they do not receive
a signal from the transmitter, they will try other channels until they find one that works. The taxi drivers
only speak one at a time, when invited by the base station operator (in a sense
TDMA).
To distinguish signals from several different transmitters, frequency division multiple access (FDMA)
and code division multiple access (CDMA) were developed. With FDMA, the transmitting and receiving
frequencies used in each cell are different from the frequencies used in each neighboring cell. In a
simple taxi system, the taxi driver manually tuned to a frequency of a chosen cell to obtain a strong
signal and to avoid interference from other cells.
The principle of CDMA is more complex, but achieves the same result: the distributed transceivers can select one cell and listen to it.
Other available methods of multiplexing, such as polarization division multiple access (PDMA) and time division multiple access (TDMA), cannot be used to separate signals from one cell to the next, since the effects of both vary with position and this would make signal separation practically impossible. Time division multiple access, however, is used in combination with either FDMA or CDMA in a number of systems to give multiple channels within the coverage area of a single cell.
The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies; however, there is no problem with two cells sufficiently far apart operating on the same frequency. The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D, is calculated as D = R√(3N), where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 km to 30 km. The boundaries of the cells can also overlap between adjacent cells, and large cells can be divided into smaller cells.
The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K, according to some books), where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9, and 1/12 (or 3, 4, 7, 9, and 12, depending on notation).
In the case of N sector antennas on the same base station, each with a different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among the N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only utilize a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK. Code division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually.
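A small worked example of these relations, with assumed figures for B, K, N and the cell radius:

    import math

    # Assumed figures for illustration only.
    B_mhz = 25.0     # total available bandwidth
    K = 7            # cluster size, i.e. reuse factor 1/7
    N_sectors = 3    # sector antennas per base-station site
    R_km = 2.0       # cell radius

    per_cell_mhz = B_mhz / K
    per_sector_mhz = B_mhz / (N_sectors * K)
    reuse_distance_km = R_km * math.sqrt(3 * K)   # D = R * sqrt(3 * cluster size)

    print(round(per_cell_mhz, 2))       # 3.57 MHz usable per cell
    print(round(per_sector_mhz, 2))     # 1.19 MHz per sector
    print(round(reuse_distance_km, 2))  # 9.17 km between co-channel cells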
Depending on the size of the city, a taxi system may not have any frequency reuse within its own city, but certainly in other nearby cities the same frequency can be used. In a big city, on the other hand, frequency reuse could certainly be in use.
Recently, orthogonal frequency-division multiple access-based systems such as LTE are also being deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit inter-cell interference; there are various means of Inter-Cell Interference Coordination (ICIC) already defined in the standard. Coordinated scheduling, multi-site MIMO and multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future.
Although the original two-way-radio cell towers were at the centers of the cells and were omni-directional, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge. Each tower has three sets of directional antennas aimed in three different directions, with 120 degrees for each cell (totaling 360 degrees), receiving and transmitting into three different cells at different frequencies. This provides a minimum of three channels (from three towers) for each cell. The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high-volume areas.
This network is the foundation of the GSM system network. There are many functions that are performed by this network in order to make sure customers get the desired service, including mobility management, registration, call setup and handover.
Any phone connects to the network via an RBS (radio base station) in the corresponding cell, which in turn connects to the MSC (mobile switching centre). The MSC allows the onward connection to the PSTN (public switched telephone network). The link from a phone to the RBS is called an uplink, while the other direction is termed the downlink.
Radio channels effectively use the transmission medium through the use of the following multiplexing and access techniques: frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and space division multiple access (SDMA).
CHAPTER 4.
VIRTUAL KEYBOARD
The Virtual Laser Keyboard (VKB) is a new gadget for PC users. The VKB projects a laser image of a keyboard with a QWERTY arrangement of keys onto the desk; in other words, it uses a laser beam to generate a full-size, fully operational keyboard that connects smoothly to a PC and to most handheld devices. As the user types on the laser projection, the device analyses what is being typed according to the coordinates of the finger positions.
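As a rough illustration of resolving a keystroke from coordinates, the hypothetical Python sketch below maps a detected fingertip position on the projected layout to a key. The three-row layout and 19 mm key pitch are assumptions for the example, not the actual VKB design:

# Hypothetical sketch: map a fingertip position on the projected keyboard to a key.
# The layout and the 19 mm key pitch are assumed values, not the real VKB firmware.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_PITCH_MM = 19.0  # assumed centre-to-centre key spacing

def key_at(x_mm, y_mm):
    """Return the character under the fingertip, or None if outside the layout."""
    row = int(y_mm // KEY_PITCH_MM)
    col = int(x_mm // KEY_PITCH_MM)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(key_at(3.0, 3.0))    # 'q'
print(key_at(60.0, 25.0))  # 'f'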
A virtual keyboard is a software component that allows a user to enter characters. A virtual keyboard
can usually be operated with multiple input devices, which may include a touch screen, an actual
keyboard, a computer mouse, a head mouse and an eye mouse.
4.1. Types
On a desktop PC, one purpose of a virtual keyboard is to provide an alternative input mechanism for users with disabilities who cannot use a physical keyboard. Another major use for an on-screen keyboard is for bi- or multi-lingual users who switch frequently between different character sets or alphabets. Although hardware keyboards are available with dual keyboard layouts (for example Cyrillic/Latin letters in various national layouts), the on-screen keyboard provides a handy substitute while working at different stations or laptops, which seldom come with dual layouts.
The standard on-screen keyboard utility on most windowing systems allows hot-key switching between layouts from the physical keyboard (typically Alt-Shift, but this is user-configurable), simultaneously changing both the hardware and the software keyboard layout. In addition, a symbol in the system tray alerts the user to the currently active layout.
Although Linux supports this fast manual keyboard-layout switching, many popular Linux on-screen keyboards such as GTKeyboard or Kvkbd do not react correctly. Kvkbd, for example, defines its visible layout according to the first layout defined in Keyboard Preferences rather than the default layout, causing the application to output incorrect characters if the first layout on the list is not the default. Activating a hot-key layout switch will cause the application to change its output according to another keyboard layout, but the visible on-screen layout does not change, leaving users blind as to which keyboard layout they are using. Multi-lingual, multi-alphabet users should instead choose a Linux on-screen keyboard that supports this feature, such as Florence.
Virtual keyboards are commonly used as an on-screen input method on devices with no physical keyboard, where there is no room for one, such as a pocket computer, personal digital assistant (PDA), tablet computer or touchscreen-equipped mobile phone. It is common for the user to input text by tapping a virtual keyboard built into the operating system of the device. Virtual keyboards are also used as features of emulation software for systems that have fewer buttons than a computer keyboard would have.
• Physical keyboards with distinct keys comprising electronically changeable displays integrated in the keypads.
• Virtual keyboards that allow input from a variety of input devices, such as a computer mouse, switch or other assistive technology device.
An optical virtual keyboard was invented and patented by IBM engineers in 2008. It optically detects and analyses human hand and finger motions and interprets them as operations on a physically non-existent input device, such as a surface with painted keys. In that way it can emulate unlimited types of manually operated input devices, such as a mouse or keyboard. All mechanical input units can be replaced by such virtual devices, optimized for the current application and for the user's physiology, maintaining the speed, simplicity and unambiguity of manual data input.
On the Internet, various JavaScript virtual keyboards have been created, allowing users to type their own language on foreign keyboards, particularly in Internet cafes.
Virtual keyboards may be used in some cases to reduce the risk of keystroke logging. For example, Westpac's online banking service uses a virtual keyboard for password entry, as does TreasuryDirect. It is more difficult for malware to monitor the display and mouse to obtain the data entered via a virtual keyboard than it is to monitor real keystrokes. However, it is still possible, for example by recording screenshots at regular intervals or upon each mouse click.
The use of an on-screen keyboard on which the user “types” with mouse clicks can increase the risk of password disclosure by shoulder surfing, because:
• An observer can typically watch the screen more easily (and less suspiciously) than the keyboard, and see which characters the mouse moves to.
• Some implementations of the on-screen keyboard may give visual feedback of the “key” clicked, e.g. by changing its color briefly, which makes it much easier for an observer to read the data from the screen.
• A user may not be able to “point and click” as fast as they could type on a keyboard, making it easier for the observer.
CHAPTER 5.
DIGITAL CAMERA
The digital camera is in the shape of a pen. It is useful for video recording and video conferencing; in short, it acts as a webcam. It also connects to other devices through Bluetooth. It is a 360-degree visual communication device: this terminal lets us observe the surrounding environment and supports group communication, using a surround display and a central super-wide-angle camera.
A digital camera (or digicam) is a camera that takes video or still photographs, or both, digitally by
recording images via an electronic image sensor. Most 21st century cameras are digital.
Digital cameras can do things film cameras cannot: displaying images on a screen immediately after they are recorded, storing thousands of images on a single small memory device, and deleting images to free storage space. The majority, including most compact cameras, can record moving video with sound as well as still photographs.
Some can crop and stitch pictures and perform other elementary image editing. Some have a built-in GPS receiver and can produce geotagged photographs.
The optical system works the same as in film cameras, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Most digicams, apart from camera phones and a few specialized types, have a standard tripod screw.
Digital cameras are incorporated into many devices, ranging from PDAs and mobile phones (called camera phones) to vehicles. The Hubble Space Telescope and other astronomical instruments are essentially specialized digital cameras.
Digital cameras are made in a wide range of sizes, prices and capabilities. The majority are camera phones, operated as a mobile application through the cell phone menu. Professional photographers and many amateurs use larger, more expensive digital single-lens reflex cameras (DSLRs) for their greater versatility. Between these extremes lie digital compact cameras and bridge digital cameras that “bridge” the gap between amateur and professional cameras. Specialized cameras, including spectral imaging equipment and astrographs, continue to serve the scientific, military, medical and other special purposes for which digital photography was invented.
Compact cameras are designed to be tiny and portable and are particularly suitable for casual “snapshot” use; they are therefore also called point-and-shoot cameras. The smallest, generally less than 20 mm thick, are described as subcompacts or ultra-compacts, and some are nearly credit-card sized.
Most, apart from ruggedized or water-resistant models, incorporate a retractable lens assembly, allowing a thin camera to have a moderately long focal length and thus fully exploit an image sensor larger than on a camera phone, and a mechanized lens cap to cover the lens when retracted. The retracted and capped lens is protected from keys, coins and other hard objects, making a thin, pocketable package. Subcompacts commonly have one lug and a short wrist strap, which aids extraction from a pocket, while thicker compacts may have two lugs for attaching a neck strap.
Compact cameras are usually designed to be easy to use, sacrificing advanced features and picture quality for compactness and simplicity; images can usually only be stored using lossy compression
(JPEG). Most have a built-in flash, usually of low power, sufficient for nearby subjects. Live preview is almost always used to frame the photo. Most have limited motion-picture capability. Compacts often have macro capability, and their zoom range is usually less than that of bridge and DSLR cameras. Generally, a contrast-detect autofocus system, using the image data from the live preview feed of the main imager, focuses the lens. Typically, these cameras incorporate a nearly silent leaf shutter into their lenses.
For lower cost and smaller size, these cameras typically use image sensors with a diagonal of approximately 6 mm, corresponding to a crop factor of around 6. This gives them weaker low-light performance, greater depth of field, generally closer focusing ability, and smaller components than cameras using larger sensors.
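The crop factor mentioned above is simply the ratio of the 35 mm full-frame diagonal (about 43.3 mm) to the sensor diagonal. A quick check in Python (the sensor diagonals below are assumed example values; a diagonal nearer 7 mm is what actually works out to a crop factor of about 6):

import math

# Crop factor = full-frame (36 x 24 mm) diagonal / sensor diagonal.
FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)  # about 43.3 mm

def crop_factor(sensor_diagonal_mm):
    """Crop factor relative to a 36 x 24 mm full-frame sensor."""
    return FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm

for d in (6.0, 7.2):  # assumed small-sensor diagonals in mm
    print("%.1f mm diagonal -> crop factor %.1f" % (d, crop_factor(d)))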
Bridge cameras are higher-end digital cameras that physically and ergonomically resemble DSLRs and share with them some advanced features, but share with compacts the use of a fixed lens and a smaller sensor. Like compacts, most use live preview to frame the image. Their autofocus uses the same contrast-detect mechanism, but many bridge cameras have a manual focus mode, in some cases using a separate focus ring, for greater control.
Due to the combination of a large physical size but a small sensor, many of these cameras have very highly specified lenses with a large zoom range and fast aperture, partially compensating for the inability to change lenses. To compensate for the lower sensitivity of their small sensors, these cameras almost always include an image stabilization system to enable longer handheld exposures. The longest zoom lens so far on a bridge camera is on the Nikon Coolpix P500, which covers an equivalent of a super-wide to ultra-telephoto 22.5-810 mm range (36x zoom).
These cameras are sometimes marketed as, and confused with, digital SLR cameras since their appearance is similar. Bridge cameras lack the reflex viewing system of DSLRs, are usually fitted with fixed (non-interchangeable) lenses (although some have a lens thread to attach accessory wide-angle or telephoto converters), and can usually take movies with sound. The scene is composed by viewing either the liquid crystal display or the electronic viewfinder (EVF). Most have a longer shutter lag than a true SLR, but they are capable of good image quality (with sufficient light) while being more compact and lighter than DSLRs. Many of these cameras can store images in a raw image format, as processed JPEG-compressed images, or both. The majority have a built-in flash similar to those found in DSLRs.
In bright sun, the quality difference between a good compact camera and a digital SLR is minimal, but bridge cameras are more portable, cost less and have similar zoom ability to DSLRs. Thus a bridge camera may better suit outdoor daytime activities, except when professional-quality photos are needed. In low-light conditions and/or at ISO equivalents above 800, most bridge cameras (or megazooms) lack image quality compared with even entry-level DSLRs. The first 3D photo mode on a bridge camera was announced by Olympus: the Olympus SZ-30 MR can take a 3D photo in any mode, from macro to landscape, by releasing the shutter for the first shot and slowly panning until the camera automatically takes a second image from a slightly different perspective. Because the 3D processing is built into the camera, the resulting .MPO file can easily be displayed on 3D televisions or laptops.
CHAPTER 6.
LED PROJECTOR
The role of the monitor is taken by the LED projector, which projects the display onto a screen. The projector is approximately A4 size and has an approximate resolution of 1024 X 768, giving good clarity and picture quality.
A video projector is a device that receives a video signal and projects the corresponding image onto a projection screen using a lens system. All video projectors use a very bright light to project the image, and most modern ones can correct curvature, blurriness, and other inconsistencies through manual settings. Video projectors are widely used for conference room presentations, classroom training, home theatre and live event applications. Projectors are widely used in many schools and other educational settings, connected to an interactive whiteboard to teach pupils interactively.
6.1. Overview
A video projector, also known as a digital projector, may be built into a cabinet with a rear-projection screen (rear-projection television, or RPTV) to form a single unified display device, now popular for “home theater” applications.
Common display resolutions for a portable projector include SVGA (800 X 600 pixels), XGA (1024 X 768
pixels), 720p (1280 X 720 pixels), and 1080p (1920 X 1080 pixels).
The cost of a device is determined not only by its resolution, but also by its brightness. A projector with a higher light output (measured in lumens, symbol “lm”) is required for a larger screen or a room with a high amount of ambient light. A rating of 1500 to 2500 ANSI lumens or lower is suitable for smaller screens with controlled lighting or low ambient light.
Between 2500 and 4000 lm is suitable for medium-sized screens with some ambient light or dimmed light. Over 4000 lm is appropriate for very large screens in a large room with no lighting control (for example, a conference room). Projected image size is important because the total amount of light does not change: as the image gets larger, it becomes dimmer. Image sizes are typically quoted diagonally, obscuring the fact that larger images require much more light (proportional to the image area, not just the length of a side). Increasing the diagonal measure of the image by 25% reduces the image brightness by more than one-third (about 36%); an increase of 41% reduces brightness by half.
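The brightness figures quoted above follow from the assumption that, for a fixed lumen output and aspect ratio, image brightness falls with the projected area, i.e. with the square of the diagonal. A minimal sketch:

# Relative brightness of a projected image when only its diagonal is scaled,
# assuming a fixed light output and aspect ratio: brightness ~ 1 / diagonal^2.
def relative_brightness(diagonal_scale):
    return 1.0 / diagonal_scale ** 2

for scale in (1.25, 1.41):
    loss = 1.0 - relative_brightness(scale)
    print("diagonal x%.2f: brightness reduced by about %.0f%%" % (scale, loss * 100))
# diagonal x1.25 -> about 36% reduction; x1.41 -> about 50% reduction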
CRT projectors use cathode ray tubes, typically one blue, one green, and one red tube. This is the oldest system still in regular use, but it is falling out of favor largely because of the bulky cabinet. However, it does provide the largest screen size for a given cost. This also covers three-tube home models which, while bulky, can be moved (but then usually require complex picture adjustments to get the three images to line up correctly).
LCD projectors use LCD light gates. This is the simplest system, making it one of the most common and affordable for home theater and business use. Its most common problem is a visible “screen door” or pixelation effect, although recent advances have minimized this.
DLP projectors use one or more Digital Micromirror Devices (DMDs). The most common problem with the single- or two-DMD varieties is a visible “rainbow” effect, which some people perceive when moving their eyes. More recent projectors with higher-speed (2x or 4x) and otherwise optimized color wheels have lessened this artifact. Systems with three DMDs never have this problem, as they display each primary color simultaneously.
LED projectors use one of the above-mentioned technologies for image creation, with the difference that they use an array of light-emitting diodes as the light source, negating the need for lamp replacement.
Hybrid LED and laser diode systems have been developed by Casio. They use a combination of light-emitting diodes and 445 nm laser diodes as the light source, while the image is processed with a DLP (DMD) chip.
Laser diode projectors have been developed by Microvision and Aaxa Technologies. Microvision laser projectors use Microvision’s patented laser beam-steering technology, whereas Aaxa Technologies uses laser diodes with LCoS.
CHAPTER 7.
BATTERY
The most important considerations in a portable computer are the battery and the storage capacity. Batteries must be small in size and work for a long time; for normal use, a charge can last about two weeks. The battery used here is a lithium-ion battery. The storage device is of a tubular holographic type. The choice of a lithium-ion battery for this gadget is driven by its energy density, durability and cost.
By making the Five Pen PC feasible, it will enable ubiquitous computing and therefore make computing easier for people. Many applications can be imagined with this new technology. As it makes use of e-fingerprinting, the gadget is more secure and allows only the owner to activate the PC, so even if we lose it, no one else can access the gadget. All the pens communicate with each other with the help of Bluetooth technology, and the whole gadget is connected to the Internet via Wi-Fi. This technology is very portable, feasible and efficient, and everybody can use it in an efficient manner. Some prototypes were already developed in 2003 and are quite feasible, but the output is currently unclear. Further enhancements to this technology can be expected in the coming years.
CHAPTER 8.
REMARK
8.1. Advantages
• Portable
• Feasible
• Ubiquitous
• Makes use of Wi-Fi technology
8.2. Disadvantages
• Currently unclear
• Cost
• Keyboard concept is not new
• Easily misplaced
As the gadget is very costly, consumers may not be able to afford it. Virtual keyboards are already offered by companies such as Lumio and Virtual Devices Inc.
CHAPTER 9.
CONCLUSION
Communication devices are becoming smaller and more compact. This is only one example of the start of this new technology, and we can expect more such developments in the future. It seems that information terminals are getting ever smaller; however, we will continue to manipulate them with our hands for now. We have visualized the connection between the latest technology and the human in the form of a pen. P-ISM is a gadget package with five functions: a pen-style cellular phone with a handwriting data input function, a virtual keyboard, a very small projector, a camera scanner, and a personal ID key with a cashless pass function. P-ISMs are connected to one another through short-range wireless technology. The whole set is also connected to the Internet through the cellular phone function. This personal gadget, in a minimalistic pen style, enables the ultimate ubiquitous computing.
The design concept uses five different pens to make a computer. One pen is a CPU, another is a camera, one creates a virtual keyboard, another projects the visual output (and thus acts as the display), and another is a communicator (a phone). All five pens can rest in a holding block which recharges the batteries and holds the mass storage. Each pen communicates wirelessly, possibly via Bluetooth.