1.1 Pervasive Computing
Ubiquitous computing (ubicomp), or pervasive computing, is an advanced computing concept in which computing is made to appear everywhere and anywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets, and terminals embedded in everyday objects such as a fridge or a pair of glasses. The underlying technologies that support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials. This paradigm is also described as pervasive computing, ambient intelligence, or "everyware". Each term emphasizes slightly different aspects. When primarily concerning the objects involved, it is also known as physical computing, the Internet of Things, haptic computing, and "things that think". Rather than propose a single definition for ubiquitous computing and these related terms, researchers have proposed a taxonomy of properties for ubiquitous computing, from which different kinds or flavours of ubiquitous systems and applications can be described. Ubiquitous computing touches on a wide range of research topics, including distributed computing, mobile computing, location computing, mobile networking, context-aware computing, sensor networks, human-computer interaction, and artificial intelligence. Pervasive computing is a rapidly developing area of Information and Communications Technology (ICT). The term refers to the increasing integration of ICT into people's lives and environments, made possible by the growing availability of microprocessors with built-in communications facilities. Pervasive computing has many potential applications, from health and home care to environmental monitoring and intelligent transport systems.
This briefing provides an overview of pervasive computing and discusses the growing debate over privacy, safety and environmental implications.
Eight billion embedded microprocessors are produced each year. This number is expected to rise dramatically over the next decade, making electronic devices ever more pervasive. These devices will range from a few millimetres in size (small sensors) to several metres (displays and surfaces). They may be interconnected via wired and wireless technologies into broader, more capable networks. Pervasive computing systems (PCS) and services may give users greater knowledge of, and control over, the surrounding environment, whether at home, in an office, or in a car. They may also show a form of intelligence. For instance, a smart electrical appliance could detect its own impending failure and notify its owner as well as a maintenance company to arrange a repair. Some core technologies have already emerged, although battery technologies and user interfaces pose particular development challenges. It may be another 5-10 years before complete PCS become widely available; this depends on market forces, industry, public perceptions, and the effects of any policy/regulatory frameworks. There have been calls for more widespread debate on the implications of pervasive computing while it is still at an early stage of development.
Ubiquitous computing can be considered the new hype in the information and communication world. It is normally associated with a large number of small electronic devices (small computers) with computation and communication capabilities, such as smart mobile phones, contactless smart cards, handheld terminals, sensor network nodes, and Radio Frequency IDentification (RFID) tags, which are used in our daily life (Sakamura & Koshizuka, 2005). These small computers are equipped with sensors and actuators, allowing them to interact with the living environment. In addition, the availability of communication functions enables data exchange between devices and their environment. With the advent of this new technology, learning styles have progressed from electronic learning (e-learning) to mobile learning (m-learning), and from mobile learning to ubiquitous learning (u-learning).
Ubiquitous learning, also known as u-learning, is based on ubiquitous technology. The most significant role of ubiquitous computing technology in u-learning is to construct a ubiquitous learning environment, which enables anyone to learn at any place at any time. Nonetheless, the definition and characteristics of u-learning are still unclear and are being debated by the research community. Researchers hold different views on defining and characterizing u-learning, which leads to misconceptions and misunderstandings of the original idea of u-learning.
1.2 HISTORY
Mark Weiser coined the phrase "ubiquitous computing" around 1988, during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC). Both alone and with PARC Director and Chief Scientist John Seely Brown, Weiser wrote some of the earliest papers on the subject, largely defining it and sketching out its major concerns.[5][6][7] Recognizing that the extension of processing power into everyday scenarios would necessitate understandings of social, cultural and psychological phenomena beyond its proper ambit, Weiser was influenced by many fields outside computer science, including "philosophy, phenomenology, anthropology, psychology, post-Modernism, sociology of science and feminist criticism". He was explicit about "the humanistic origins of the 'invisible ideal' in post-modernist thought",[7] referencing as well the ironically dystopian Philip K. Dick novel Ubik. Andy Hopper of Cambridge University, UK, proposed and demonstrated the concept of "teleporting", in which applications follow the user wherever he or she moves. Roy Want, while a researcher and student working under Andy Hopper at Cambridge University, worked on the "Active Badge System", an advanced location computing system in which personal mobility is merged with computing. Bill Schilit (now at Google) also did some early work on this topic, and participated in the early Mobile Computing workshop held in Santa Cruz in 1996. Dr. Ken Sakamura of the University of Tokyo, Japan, leads the Ubiquitous Networking Laboratory (UNL), Tokyo, as well as the T-Engine Forum. The joint goal of Sakamura's Ubiquitous Networking specification and the T-Engine Forum is to enable any everyday device to broadcast and receive information.[8][9] MIT has also contributed significant research in this field, notably the Things That Think consortium (directed by Hiroshi Ishii, Joseph A.
Paradiso and Rosalind Picard) at the Media Lab[10] and the CSAIL effort known as Project Oxygen.[11] Other major contributors include the University of Washington's Ubicomp Lab (directed by Shwetak Patel), Georgia Tech's College of Computing, Cornell University's People Aware Computing Lab, NYU's Interactive Telecommunications Program, UC Irvine's Department of Informatics, Microsoft Research, Intel Research and Equator,[12] and Ajou University UCRi & CUS.[13]
As Chief Technologist of Xerox PARC, Mark Weiser began the Ubiquitous Computing project in 1988 and set out his vision in Weiser, M., "The Computer for the Twenty-First Century", Scientific American, 1991, 94-100. In Weiser's vision, computers are everywhere, disappearing into and integrated with the environment and the objects around us; the computer no longer isolates us from our tasks and environment, and is no longer the focus of attention. Its social impact is comparable to that of writing, which is found everywhere from clothing labels to billboards, or of electricity, which surges invisibly through the walls of every home, office, and car. As Weiser put it, the purpose of a computer is to help you do something else; the best computer is a quiet, invisible servant; and since the more you can do by intuition the smarter you are, the computer should extend your unconscious.
1.3 Need
Generally there are two types of approaches to recognizing situations: knowledge-driven and data-driven [1]. Knowledge-driven approaches use expert knowledge to define situations with logic rules and apply reasoning engines to infer the appropriate situations from sensor input [2-4]. Data-driven approaches use machine learning and data-mining techniques to learn correlations between sensor data and situations from training data [5, 6]. However, the complexity, heterogeneity, and uncertainty of pervasive systems undermine the performance of these existing approaches. Developers of knowledge-driven approaches need to consider all of the relevant knowledge and provide precise definitions of situations; developers of data-driven approaches need good-quality training data, which is extremely difficult and time-consuming to collect.
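The contrast between the two approaches can be made concrete with a minimal sketch. The situation names, sensor fields, thresholds, and training examples below are all illustrative assumptions, not taken from any of the cited systems:

```python
# Knowledge-driven vs. data-driven situation recognition, in miniature.
# All labels, fields and thresholds here are invented for illustration.

def recognize_by_rules(reading):
    """Knowledge-driven: expert-written logic rules over sensor input."""
    if reading["motion"] and reading["light"] > 300:
        return "occupied-active"
    if not reading["motion"] and reading["light"] < 50:
        return "vacant-dark"
    return "unknown"

def recognize_by_data(reading, training):
    """Data-driven: label a reading via its nearest labelled training example."""
    def dist(a, b):
        # Scale light (lux) so both features contribute comparably.
        return (a["motion"] - b["motion"]) ** 2 + ((a["light"] - b["light"]) / 500) ** 2
    return min(training, key=lambda ex: dist(reading, ex[0]))[1]

training = [
    ({"motion": 1, "light": 400}, "occupied-active"),
    ({"motion": 0, "light": 10}, "vacant-dark"),
]
sample = {"motion": 1, "light": 350}
```

The rule-based path fails when the expert forgets a case (the `"unknown"` branch); the data-driven path fails when the training set is too sparse to cover a new reading, which is exactly the trade-off described above.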
We propose a new data structure, the context lattice, which combines the advantages of both the knowledge- and data-driven approaches and compensates for their deficiencies [7, 8]. Built on lattice theory, a context lattice is a mathematical abstraction that supports the systematic study of the semantics of situations. Context lattices have a powerful ability to represent the semantics of sensor data and domain knowledge.
There are many middleware technologies that provide a set of application programming interfaces (APIs) as well as network protocols that can meet the network requirements. Such middleware establishes a software platform enabling all devices that form the network to talk to each other, irrespective of their operating systems or interface constraints. In these environments, each device provides a service to other devices in the network. Each device publishes its own interfaces, which other devices can use to communicate with it and thereby access its particular service. This approach ensures compatibility and standardized access among all devices.
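The publish/discover pattern described above can be sketched as follows. Real middleware does this over network protocols (UPnP and Jini are well-known examples); here an in-process registry stands in for the network, and the device and method names are invented:

```python
# Minimal sketch of middleware-style service publishing and discovery.
# An in-process dict stands in for the network protocol layer.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, interface):
        """A device publishes the interface other devices may call."""
        self._services[name] = interface

    def discover(self, name):
        """Another device looks up a service by name, regardless of
        the publisher's operating system or hardware."""
        return self._services.get(name)

registry = ServiceRegistry()

# A hypothetical smart thermostat publishes a one-method interface.
registry.publish("thermostat", {"read_temp": lambda: 21.5})

# Any other device can now discover and invoke it.
svc = registry.discover("thermostat")
temp = svc["read_temp"]()
```

The key point is that callers depend only on the published interface, not on how the device implements it, which is what gives the compatibility and standardized access the text refers to.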
Device technologies for pervasive computing include power-provisioning technologies, display technologies, memory technologies, communication technologies, processor technologies, interfacing technologies, sensor technologies, and authentication technologies.
1.4.2 Technology Aspects: Low-power Device Technologies
Since many of the devices in a pervasive computing environment must be small and must live on their battery/power units, low power consumption and extended power-provisioning periods assume critical significance. In addition, preventing excessive heating also requires attention. Power requirements can be reduced by several means, from material selection and chip-level design to software design and communication-system design. Power-provisioning technology, including battery-design technology, plays a very important role in this process.
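One software-level technique alluded to above is duty cycling: keeping the device asleep most of the time and waking only briefly to sense or transmit. A back-of-envelope sketch shows why this matters; all currents and the battery capacity are illustrative assumptions, not figures from the text:

```python
# Why duty cycling extends battery life in a small pervasive device.
# Current draws (mA) and capacity (mAh) below are invented examples.

def avg_current_ma(active_ma, sleep_ma, duty_cycle):
    """Time-weighted average draw for a device that is active for the
    given fraction of each period and asleep for the rest."""
    return duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma

def lifetime_hours(capacity_mah, avg_ma):
    return capacity_mah / avg_ma

# A sensor node drawing 20 mA when active and 0.01 mA asleep,
# powered by a 1000 mAh battery:
always_on   = lifetime_hours(1000, avg_current_ma(20, 0.01, 1.0))
duty_cycled = lifetime_hours(1000, avg_current_ma(20, 0.01, 0.02))
```

With these assumed numbers, an always-on node lasts about 50 hours, while the same node active only 2% of the time lasts roughly 2,440 hours (over three months), which is why duty cycling is a standard design lever alongside material and chip-level choices.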
1.4.3 Connectivity Aspects: The Role of Communication Architectures in Pervasiveness
A pervasive computing system needs at least two basic elements to pervade wherever it is required to pervade:
o Computing elements, to take care of computational needs; and
o Communication elements, to interconnect these computing elements either through wires or wirelessly (with or without mobility).
From the end user's perspective, and in many practical situations, wireless-communication-based mobile computing is becoming increasingly important. From the back-end system's viewpoint, however, wireline-communication-based computing remains an attractive option because of its sheer traffic volume, low error rates, better noise immunity, and low cost. Therefore, hybrid architectures will possibly continue to exist, even though end users may not be aware of them.
Identifying the multi-technology mobile communication architectures of relevance involves several generations, gradual enhancements, and coexistence and transition between them.
Generations of wireless communication networking standards:
o First-generation global mobile radio standard (1G): voice only, no data.
o Second-generation global mobile radio standard (2G): GSM, 9.6 Kbps (circuit-switched voice/data).
o Enhanced second-generation global mobile radio standard (2.5G): GSM-GPRS (combination of circuit- and packet-switched voice/data); GPRS-136, under 100 Kbps (packet-switched).
o Third-generation global mobile radio standard (3G): CDMA2000, up to 2 Mbps (packet-switched voice/data).
o Fourth-generation global mobile radio standard (4G, near future): approximately 20-40 Mbps (packet-switched voice/data).
Inside the GSM network subsystem:
o MSC (Mobile Services Switching Center) acts like a normal switching node and provides the connection to the fixed networks (such as the PSTN or ISDN).
o HLR (Home Location Register) contains information on each subscriber registered in the corresponding GSM network, along with the current location of the mobile. There is logically one HLR per GSM network.
o VLR (Visitor Location Register) contains selected information from the HLR, necessary for call control and provision of the subscribed services, for each mobile currently located in the geographical area controlled by the VLR.
o EIR (Equipment Identity Register) is a database that contains a list of all valid mobile equipment on the network.
o AuC (Authentication Center) is a protected database holding the secret key of each SIM.
GSM uses TDMA/FDMA to share the limited radio spectrum: the FDMA part divides the bandwidth of not more than 25 MHz into 124 carrier frequencies spaced 200 kHz apart, and each of these carrier frequencies is then divided in time using a TDMA scheme. GSM is a circuit-switched digital network.
o SGSN (Serving GPRS Support Node) keeps track of the location of the mobile within its service area and sends packets to and receives packets from the mobile, passing them on to, or receiving them from, the GGSN.
o GGSN (Gateway GPRS Support Node) converts the GSM packets into other packet protocols (e.g. IP or X.25) and sends them out into another network.
GPRS users share the radio-link resource, which is used only when users are actually sending or receiving data.
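The GSM channel arithmetic above can be checked directly: a 25 MHz band at 200 kHz spacing yields 125 raw slots, of which 124 are usable carriers (one 200 kHz slot is conventionally kept as a guard band), and each carrier is time-divided into 8 TDMA timeslots:

```python
# Worked numbers for GSM's FDMA/TDMA spectrum sharing as described above.

BAND_HZ = 25_000_000            # total bandwidth: not more than 25 MHz
CARRIER_SPACING_HZ = 200_000    # FDMA carrier spacing: 200 kHz
TIMESLOTS_PER_CARRIER = 8       # TDMA timeslots per carrier

raw_slots = BAND_HZ // CARRIER_SPACING_HZ            # 125 raw 200 kHz slots
usable_carriers = raw_slots - 1                      # 124, after a guard band
physical_channels = usable_carriers * TIMESLOTS_PER_CARRIER  # 992 channels
```

This recovers the 124 carriers quoted in the text and shows the total of 992 physical channels that the combined FDMA/TDMA scheme provides per 25 MHz band.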
GPRS is based on GMSK, a modulation technique known as Gaussian minimum-shift keying. It can support a theoretical upper limit of 171.2 Kbps, as against GSM's 9.6 Kbps.
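The 171.2 Kbps ceiling follows from the timeslot structure: GPRS can, in theory, aggregate all 8 GSM timeslots, each carrying 21.4 Kbps under the least-protected coding scheme (CS-4), whereas plain GSM offers a single 9.6 Kbps circuit-switched channel:

```python
# Where the 171.2 Kbps theoretical GPRS ceiling comes from.

KBPS_PER_TIMESLOT_CS4 = 21.4   # per-timeslot rate with coding scheme CS-4
TIMESLOTS = 8                  # all 8 TDMA timeslots aggregated
GSM_DATA_KBPS = 9.6            # single circuit-switched GSM data channel

gprs_max_kbps = KBPS_PER_TIMESLOT_CS4 * TIMESLOTS   # 171.2 Kbps
speedup = gprs_max_kbps / GSM_DATA_KBPS             # ~17.8x over GSM
```

In practice handsets support fewer timeslots and more robust coding schemes, so real throughput is well below this theoretical maximum.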
CHAPTER 2