DESIGN & IMPLEMENTATION OF WIRELESS SENSOR NETWORKS
FOR CONDITION BASED MAINTENANCE
The members of the Committee approve the master’s
thesis of Ankit Tiwari
Frank L. Lewis
Supervising Professor
______________________________________
Jonathan Bredow
______________________________________
Qilian Liang
______________________________________
Copyright © by Ankit Tiwari 2004
All Rights Reserved
DESIGN & IMPLEMENTATION OF WIRELESS SENSOR NETWORKS
FOR CONDITION BASED MAINTENANCE
by
ANKIT TIWARI
Presented to the Faculty of the Graduate School of
The University of Texas at Arlington in Partial Fulfillment
of the Requirements
for the Degree of
MASTER OF SCIENCE IN ELECTRICAL ENGINEERING
THE UNIVERSITY OF TEXAS AT ARLINGTON
May 2004
ACKNOWLEDGEMENTS
I would like to thank Dr. Frank Lewis for his support and guidance throughout this work. He not only helped me in my academic development but also in
my overall development as an engineer and a professional. It is an absolute privilege for
anyone to work under his guidance.
I would also like to thank all ACS ARRI group members, especially Jyotirmay
Gadewadikar, for their cooperation at all times. Thanks to Mariam John for
proofreading the report.
This work was supported by ARO grant DAAD 19-02-1-0366 and NSF grant IIS-0326505.
April 07, 2004
ABSTRACT
DESIGN & IMPLEMENTATION OF WIRELESS SENSOR NETWORKS FOR
CONDITION BASED MAINTENANCE
Publication No. ______
Ankit Tiwari, M. S. in Electrical Engineering
The University of Texas at Arlington, 2004
Supervising Professor: Frank L. Lewis
A new application architecture is designed for continuous, real-time, distributed
wireless sensor networks. We develop a wireless sensor network for machinery
condition-based maintenance (CBM) using commercially available products, including
a hardware platform, networking architecture, and medium access communication
protocol. We outline the design requirements for wireless sensor network (WSN) systems specifically for CBM, and thus take an application-driven approach to system design. We
also investigate the physical layer of WSN by modeling the battery consumption of
radio hardware used on the sensor nodes. We thus incorporate both application
requirements and physical layer functionality in the design of our single-hop
networking architecture, and User-Configured Time Division Multiple Access (UC-TDMA) MAC protocol. In our design, we emphasize the energy efficiency and latency requirements posed by resource-constrained WSNs and by our application domain, respectively. We use modified RTS-CTS mechanisms to combine contention with scheduling, providing an overall energy-efficient, scalable, and adaptable MAC protocol with no collisions and minimal protocol overhead.
We implement a single-hop sensor network to facilitate real-time monitoring
and extensive data processing for machine monitoring. A LabVIEW graphical user
interface is described that allows for signal processing, including FFT, and computation
of various moments, including kurtosis. A wireless CBM sensor network
implementation on a Heating & Air Conditioning Plant is presented as a case study.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ........................................................................................ v
ABSTRACT .................................................................................................................. vi
LIST OF ILLUSTRATIONS ........................................................................................ xii
LIST OF TABLES ........................................................................................................ xiv
Chapter
1. INTRODUCTION ............................................................................................... 1
   1.1 Gathering, Analyzing, and Reacting – A Definition ................................... 3
   1.2 Sensor Network Applications ...................................................................... 4
   1.3 Data Processing ........................................................................................... 4
   1.4 Challenges ................................................................................................... 5
   1.5 Organization ................................................................................................ 5
2. HARDWARE ARCHITECTURE FOR SENSOR NODES ............................... 6
   2.1 Overview ..................................................................................................... 6
   2.2 General Node Architecture ......................................................................... 7
      2.2.1 Processor Module .............................................................................. 8
      2.2.2 Radio Module .................................................................................... 10
      2.2.3 Sensor Module .................................................................................. 11
      2.2.4 Memory Module ............................................................................... 13
      2.2.5 Power Supply Module ...................................................................... 13
   2.3 Microstrain’s Node Architecture ................................................................ 14
3. CONDITION BASED MAINTENANCE .......................................................... 19
   3.1 Motivation ................................................................................................... 19
   3.2 Overview ..................................................................................................... 21
   3.3 Complete CBM Architecture ...................................................................... 23
   3.4 CBM Diagnostics ........................................................................................ 24
      3.4.1 Offline Phase ..................................................................................... 24
      3.4.2 Online Phase ..................................................................................... 25
   3.5 CBM Prognostics ........................................................................................ 26
   3.6 Motivation for Wireless Sensor Networks in CBM ................................... 29
4. DESIGN REQUIREMENTS .............................................................................. 31
   4.1 Continuous Sensing .................................................................................... 31
   4.2 Periodic Data Transmission ........................................................................ 31
   4.3 User-Prompted Data Querying ................................................................... 31
   4.4 Emergency Addressing and Alarms ........................................................... 32
   4.5 Real-Time Potential .................................................................................... 32
   4.6 Adaptability ................................................................................................ 32
   4.7 Network Reconfigurability ......................................................................... 33
   4.8 Scalability ................................................................................................... 33
   4.9 Energy Efficiency ....................................................................................... 33
   4.10 Feedback Control ...................................................................................... 34
5. SYSTEM DESCRIPTION .................................................................................. 35
   5.1 Topology ..................................................................................................... 37
6. UC-TDMA MAC PROTOCOL ......................................................................... 42
   6.1 MAC Protocol Attributes ............................................................................ 44
   6.2 TDMA Slot Allocation ............................................................................... 45
   6.3 Energy Model of Radio .............................................................................. 46
   6.4 Sleep Scheduling ........................................................................................ 49
   6.5 Modes of Operation .................................................................................... 51
   6.6 Network Setup ............................................................................................ 52
   6.7 Main Thread ............................................................................................... 54
   6.8 Adaptability and Reconfigurability ............................................................ 56
   6.9 Scalability ................................................................................................... 57
   6.10 Emergency Addressing and Alarm ........................................................... 58
   6.11 State Machine for Nodes .......................................................................... 59
7. IMPLEMENTATIONS
   7.1 MATLAB Implementation ......................................................................... 62
      7.1.1 Check connection between Host and Base Station .......................... 63
      7.1.2 Check sync between Base Station and Link Device ........................ 63
      7.1.3 Read from a particular EEPROM address on-board a Link Device ..... 63
      7.1.4 Write to a particular EEPROM address on-board a Link Device ........ 64
      7.1.5 Download a page of data from a Link Device ................................. 64
      7.1.6 Erase all data on-board a Link Device ............................................. 64
      7.1.7 Trigger a data capture session on-board a Link Device ................... 65
      7.1.8 Trigger a data capture session on-board a Link Device
            with supplied trigger name .............................................................. 65
      7.1.9 Initiate real-time streaming data collection from a Link Device ...... 65
      7.1.10 Initiate low-power periodic sleep mode ......................................... 66
   7.2 LabVIEW Implementation ......................................................................... 68
8. CONCLUSION .................................................................................................... 80
REFERENCES ............................................................................................................. 82
BIOGRAPHICAL INFORMATION ........................................................................... 87
LIST OF ILLUSTRATIONS
Figure
Page
2.1 Overview of Wireless Sensor Network Architecture ...................................... 6
2.2 Wireless Sensor Node Architecture ................................................................ 7
2.3 General node architecture of Microstrain sensor node.................................... 14
2.4 Internal block diagram of V-Link node........................................................... 16
2.5 G-link Sensor Node ......................................................................................... 17
2.6 An SG-link Sensor Node ................................................................................. 18
3.1 Overview of CBM System .............................................................................. 21
3.2 CBM Architecture ........................................................................................... 23
3.3 CBM fault diagnostics procedure .................................................................... 26
3.4 Maintenance prescription and scheduling procedures .................................... 27
5.1 System Overview ............................................................................................ 36
5.2 Any-to-Any Paradigm ..................................................................................... 37
5.3 Many-to-One Paradigm ..................................................................... 37
6.1 UC-TDMA frame showing time slots for N nodes in network ....................... 45
6.2 Symbolic Radio Model.................................................................................... 47
6.3 Flow Chart for UC-TDMA MAC Protocol ..................................................... 53
6.4 State-machine running on each node .............................................................. 59
6.5 IEEE 1451 Standard for Smart Sensor Networks ........................................... 60
6.6 A general model of smart sensor ..................................................................... 61
7.1 Data Packet format for real-time streaming .................................................... 65
7.2 MATLAB – Real-time display of acceleration along 3-axes .......................... 66
7.3 Screen shot of GUI created in MATLAB ....................................................... 67
7.4 Implementation Architecture........................................................................... 68
7.5 OSI reference model – layers implemented .................................................... 69
7.6 Heating and Air Conditioning plant at ARRI.................................................. 70
7.7 Screen shot of first screen of Network Configuration Wizard (NCW) ........... 71
7.8 Second Screen of NCW................................................................................... 72
7.9 Dialog Box for selecting an existing configuration file .................................. 72
7.10 NCW screen showing actual physical location of sensors in plant ................. 73
7.11 NCW Main Configuration Screen ................................................................... 74
7.12 Introductory Screen of Application GUI......................................................... 77
7.13 Time Domain display of real-time data from three different sensors ............. 78
7.14 Screen Shot of Application GUI with Frequency Domain Signals ................. 79
LIST OF TABLES
Table
Page
2.1 Measurements for Wireless Sensor Networks ................................................ 12
CHAPTER 1
INTRODUCTION
Wireless distributed sensor networks require an application driven system
design. Unlike traditional communication networks, these networks are deployed for
specific tasks and applications. Different tasks might pose different energy, latency,
throughput, scalability, adaptability, quality, and lifetime requirements for these
systems. Due to the limited energy and communication bandwidth resources available to these networks, it becomes essential to use innovative design techniques to utilize resources efficiently in the context of the application. In order to design good protocols, it is important to understand the parameters that are relevant to sensor applications [14].
With most research efforts targeting applications such as habitat monitoring [28], area monitoring [29], and surveillance [50], environmental sensing and processing remains the principal stimulant in the evolution of sensor networks. Many protocol architectures, viz. S-MAC [49], PAMAS [42], T-MAC [9], and ER-MAC [18], are designed for applications where data is acquired only when an interesting event occurs or when prompted by the user. In contrast, we have focused on applications requiring turn-wise, continuous, periodic, and real-time transmission of data from the sensors. Examples of such applications include monitoring gradual changes in ambient conditions during the course of a laboratory experiment, hazard detection systems, and condition based maintenance
(CBM) of machines and equipment for reliability and health maintenance. Real-time monitoring and control increases equipment utilization and positively impacts yield [1]. It also tightens the monitoring of process variability, an essential requirement in modern manufacturing industries, since unacceptable variations cause degradation in overall product quality [19]. CBM is key to avoiding breakdowns, process variations, unscheduled maintenance, temporary repairs, equipment-caused defects, loss of equipment speed, and many other factors that add to manufacturing cost. By reducing manning levels in factories, maximizing the productivity and lifetime of equipment, and avoiding overheads in manufacturing, CBM, if implemented efficiently, promises to save billions of dollars in manufacturing costs. WSNs, in turn, enable fast, efficient, and low-cost implementation of CBM.
This thesis develops a new application domain for distributed wireless sensor networks. It presents specific design requirements, a topology, and the limitations and guidelines for implementing sensor networks for many such applications. It describes the hardware platform, networking architecture, and medium access protocol for such networks. The implementation of a single-hop sensor network to facilitate real-time monitoring and extensive data processing for machine monitoring, using commercially available Microstrain wireless sensors, is also presented. A LabVIEW graphical user interface has been written that allows for signal processing, including the FFT and various moments such as kurtosis. Time plots can be displayed in real time, and alarm levels can be set by the user. A wireless CBM sensor network implementation on a Heating & Air Conditioning plant is presented as a case study.
1.1 Gathering, Analyzing, and Reacting – A Definition
Remote sensing and measurement are becoming more important with accelerating advances in technology. The field has ascended from manual meter reading, to data acquisition systems, to a new era of wireless sensor networks (WSN). Earlier, sensor readings were recorded manually. This method was both inefficient and error prone because, being a monotonous task, it was difficult for humans to perform well over an extended period of time. The next generation, data acquisition systems, automated the recording by wiring sensors to a central data storage unit. Installation and maintenance of these systems are costly; the systems are inflexible once installed and require a great amount of expertise from the installer. The required wiring even makes them infeasible in certain situations.
Wireless sensor networks now provide an intelligent platform to gather, analyze, and react to data without human intervention. Typically, a sensor network consists of autonomous wireless sensing nodes that organize themselves to form a network. Each node is equipped with sensors, an embedded processing unit, a short-range radio communication module, onboard data memory, and a battery power supply. These nodes are capable of communicating with other nodes and passing their data to a base station, where the data are compiled, analyzed, processed, and reacted upon. The base station forms the link between the sensors and the higher-level application. Thus, a wireless sensor network can be defined as an intelligent system capable of performing distributed sensing and processing, along with collaborative decision making, for carrying out a particular task.
1.2 Sensor Network Applications
With recent innovations in micro-machined ceramic and MEMS sensor technology, wireless sensor networks hold significant promise in many application domains. Smart spaces that cater to the needs of occupants, condition based machine maintenance, patient monitoring, vehicle monitoring for breakdown or accident prevention, atmospheric monitoring for harmful chemicals, habitat monitoring, military surveillance, structural health monitoring (bridges, dams, buildings, etc.), seismic detection, inventory tracking, vehicle tracking and detection, and electric metering are a few to name. With the miniaturization of technology and the growing capabilities of sensor networks, one can expect them to permeate the lives of everyone.
1.3 Data Processing
Depending upon application requirements, some processing might be needed locally, while other tasks may require collaborative processing at a base station. Each node performs analog-to-digital conversion of its sensor input signal. Noisy signals may need local filtering to improve the signal-to-noise ratio. Nodes may locally compare the signal with a set threshold to determine a control action, or to decide whether the data should be discarded, stored locally, or transmitted over the network. Collaborative processing includes averaging the readings of temperature sensors in a room and running decision-making algorithms on fused data to determine the overall status of the environment. These decision-making algorithms may include neural networks, fuzzy logic, statistical analysis, probabilistic analysis, etc.
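As a minimal sketch of such local node-side processing, the following MATLAB fragment applies a smoothing filter and a threshold to decide whether a block of samples is worth transmitting. The signal, window length, and threshold are illustrative assumptions, not parameters of any particular node.

    % Minimal sketch of local node-side processing (illustrative values only).
    x = randn(1, 256) + 0.1*sin(2*pi*(1:256)/32);   % stand-in for sampled sensor data
    w = 5;                                          % smoothing window length (assumed)
    xf = filter(ones(1, w)/w, 1, x);                % moving-average filter to raise SNR
    threshold = 0.5;                                % tolerance level (assumed)
    if max(abs(xf)) > threshold
        disp('Transmit block to base station');     % measurement exceeds tolerance
    else
        disp('Discard or store locally');           % nothing noteworthy in this block
    end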
1.4 Challenges
Wireless sensor networks face many technical limitations, including available processing power, transmission rate, synchronization, and robustness of operation; energy and memory constraints limit battery life and the local data storage available for in-network processing. Issues of proper sensor placement and sensor density must be solved to provide both operational and cost effectiveness. Interference among neighboring nodes may cause collisions in a network organized in clusters. The challenges in the hierarchy of detecting the relevant quantities, monitoring and collecting the data, assessing and evaluating the information, formulating meaningful user displays, and performing decision-making and alarm functions are enormous [27]. To overcome these limitations, one needs an application-specific, optimized wireless sensor network design.
1.5 Organization
Chapter 2 explains the generic architecture of the sensor nodes used in WSN and details the sensor network hardware from Microstrain Inc. used in our implementations. Chapter 3 briefly describes condition based maintenance from the perspective of diagnosis and prognosis of faults and failures. Chapter 4 outlines the design requirements of WSN for CBM. Chapter 5 describes the overall system and the network topology used. Chapter 6 presents the UC-TDMA protocol designed. Chapter 7 describes the MATLAB and LabVIEW implementations of the proposed system on a heating and air conditioning plant. Chapter 8 concludes with a summary of key lessons and recommendations for future work.
CHAPTER 2
HARDWARE ARCHITECTURE FOR SENSOR NODES
2.1 Overview
A wireless sensor network generally consists of a data acquisition network and a data distribution network, monitored and controlled by a management center. The plethora of available technologies makes even the selection of components difficult, let alone the design of a consistent, reliable, and robust overall system. The study of wireless sensor networks is challenging in that it requires an enormous breadth of knowledge from an enormous variety of disciplines. In this chapter we outline the hardware architecture of the wireless sensor nodes used in these networks.
Figure 2.1 Overview of Wireless Sensor Network Architecture [27]
2.2 General Node Architecture
Currently available wireless sensor nodes are built entirely by integrating commercial off-the-shelf (COTS) components. A large variety of prototype systems has been implemented and tested. However, these systems tend to be developer specific and require substantial overhead to demonstrate more than one application. They are designed to provide low-power functionality, small form factor, high processing and memory capability, long communication range, high flexibility, low cost, and the ability to scale the energy consumption of the entire system in order to maximize lifetime and reduce global energy consumption. The node architecture of a typical wireless sensor is shown in Figure 2.2. It consists of the following five major modules.
Figure 2.2 Wireless Sensor Node Architecture
2.2.1 Processor Module
This module performs all the computation required for sensing, acquiring, processing, storing, and communicating the data, and thus forms the heart and brain of the sensor node. Microprocessors with the required external peripherals, or microcontrollers with built-in peripherals, are used for this module. Multiple processors are used in certain architectures to separate the application-specific computation from the communication-specific processing. The plethora of commercially available low-power microprocessors and microcontrollers makes it both flexible and difficult to pick one for the node. There are microcontrollers like the ATmega128 which provide spartan processing and memory but low power consumption. On the other hand, there are microprocessors like the StrongARM 1100, SH-4, etc., which, along with external memory, are equivalent to PCs in their computing and memory capabilities but render limited lifetime and a larger form factor to the overall node design. Selection is then made on the basis of possible application requirements, and attempts are made to give the nodes wide applicability in several application domains.
To acquire data from the sensors, the analog signal is first sampled and then converted into digital data using an A/D converter with precision ranging anywhere from 8 to 32 bits. This can be an external ADC or the built-in ADC of the microcontroller on the node. UC Berkeley's low-power MICA [6] node uses the built-in 8-channel, 10-bit ADC of its ATmega128 microcontroller, whereas MIT's µAMPS [40] node utilizes an external 12-bit A/D converter sampling at 125 K samples/second, along with a StrongARM SA-1110 microprocessor, for acquiring the data from sensors.
Data processing requirements for a node stretch from simple processing, such as computing the FFT of locally stored data, filtering the local data, and averaging the collective data from various nodes, to complex processing such as executing Kalman filters, beamforming algorithms, neural networks, etc. The processor module of low-power nodes like MICA allows simple processing to be performed locally on the node. To enable complex local processing, however, the WINS [37] node from Rockwell uses the powerful SA-1100 processor integrated with 4 MB of external flash and 1 MB of SRAM.
Utilizing a single processor for both data and communication protocol processing often overloads the central controller. This approach is good enough for nodes requiring little data processing, so some nodes separate the two processing tasks for more effective data processing. For example, the Medusa MK-2 [3] from UCLA uses two microcontrollers. The first is an Atmel ATmega128L with 32 KB of flash and 4 KB of RAM running at 4 MHz; it is dedicated to the frequent but less computationally demanding tasks of the node, such as radio baseband processing, sensor sampling, and sensor trigger monitoring. The more powerful Atmel AT91FR4081 processor handles the more challenging computational tasks; it runs at 40 MHz, has 1 MB of flash memory and 136 KB of RAM, and acts as a computation coprocessor for less frequent but more computationally demanding tasks. MIT's µAMPS, in addition to its SA-1110 processor, uses a Xilinx FPGA for additional protocol processing and data recovery.
2.2.2 Radio Module
This module delivers data and control messages to neighboring nodes. The processor module transfers data and control messages (for neighboring nodes and the base station) to the radio over the system bus; the radio then transmits these messages on the radio channel for reception by the intended nodes. On receiving any message on the radio channel, the radio module passes it on to the processor. The module consists of a transceiver and the set of discrete components required for its operation. As most of the energy consumption on a node is due to the radio module, considerable attention is paid to the choice of an appropriate transceiver from the different available modules. These are short-range, low-power chips operating in the ISM radio frequency bands. An external antenna can be used to improve the reliability and range of transmission. Effective power consumption in a radio module is governed by the MAC and routing protocols used in the network. The radio module is thus flexible enough to let higher-layer protocols exploit features like programmable transmit power and data rate, frequency band selection, and operating mode selection (Sleep, Idle, Receive, Transmit).
Different sensor nodes use different commercially available radio transceivers in their radio modules. MICA nodes use a low-power RFM TR1000 single-IC transceiver, which uses amplitude shift keying to modulate a carrier frequency of 916 MHz; the transmission range can be controlled by adjusting the transmit power through a DS1804 digital potentiometer. The µAMPS node, on the other hand, uses a Bluetooth-compatible single-chip 2.4 GHz transceiver in its radio module which, using two different power amplifiers, is capable of transmitting 1 Mbps at a range of up to 100 m.
2.2.3 Sensor Module
A sensor, or more appropriately a transducer, is a device that converts energy from one domain to another. In our application, it converts the quantity to be sensed into useful signals that can be directly measured and processed. The outputs of the transducers useful for sensor networks are generally currents and voltages. Micro-electromechanical systems (MEMS) sensors are by now very well developed and are available for most sensing applications in wireless networks [27].
Sensors of many types suitable for wireless network applications are available commercially. Table 2.1 shows which physical principles may be used to measure various quantities; MEMS sensors are available for most of these measurands.
Different sensor nodes carry different kinds of sensors on board. For example, MICA motes include most of the sensors necessary for environmental monitoring via a sensor board [7] that interfaces with the mote. These sensor boards contain a thermistor (YSI 44006), capable of 0.2 °C accuracy; a light sensor, a simple photocell with maximum sensitivity at a wavelength of 690 nm; an acoustic sensor (a microphone circuit); a 2-axis accelerometer, a MEMS surface-micromachined +/- 2 G device; and a 2-axis magnetometer, a Honeywell HMC1002 sensor. The board also contains a prototyping area for interfacing external sensors, and all sensors have power control circuits for switching them on and off. Rockwell's WINS nodes likewise contain separate modules for acoustic, magnetometer, accelerometer, and seismic sensors.
Table 2.1 Measurements for Wireless Sensor Networks [27]

Measurand                    Transduction Principle

Physical Properties
  Pressure                   Piezoresistive, capacitive
  Temperature                Thermistor, thermomechanical, thermocouple
  Humidity                   Resistive, capacitive
  Flow                       Pressure change, thermistor

Motion Properties
  Position                   E-mag, GPS, contact sensor
  Velocity                   Doppler, Hall effect, optoelectronic
  Angular velocity           Optical encoder
  Acceleration               Piezoresistive, piezoelectric, optical fiber

Contact Properties
  Strain                     Piezoresistive
  Force                      Piezoelectric, piezoresistive
  Torque                     Piezoresistive, optoelectronic
  Slip                       Dual torque
  Vibration                  Piezoresistive, piezoelectric, optical fiber, sound, ultrasound

Presence
  Tactile/contact            Contact switch, capacitive
  Proximity                  Hall effect, capacitive, magnetic, seismic, acoustic, RF
  Distance/range             E-mag (sonar, radar, lidar), magnetic, tunneling
  Motion                     E-mag, IR, acoustic, seismic (vibration)

Biochemical
  Biochemical agents         Biochemical transduction

Identification
  Personal features          Vision
  Personal ID                Fingerprints, retinal scan, voice, heat plume, vision motion analysis
2.2.4 Memory Module
Nodes using microcontrollers in their processor module have scanty data memory and EEPROM, insufficient to carry out significant processing at the node. They therefore use an external, serially interfaced memory chip for storing the data points, routing tables, TDMA tables, etc. required for communication and data processing. Nodes sometimes also store sensor data in these memory blocks, which can later be downloaded by base stations. MICA motes use a 4 MB external flash from Atmel. Accessing this serially interfaced memory, however, consumes a great deal of power.
2.2.5 Power Supply Module
Power for the node is provided by the power supply module. These modules are designed to regulate the supply voltage of the system using a DC-DC converter, providing the constant supply required for proper radio operation. These converters are low-voltage, synchronous-rectified, step-up DC-DC converters intended for use in devices powered by multiple-cell alkaline or lithium batteries. They take an input voltage of even less than 1 V and boost it to a range of 2.0 V to 4.0 V. In an alkaline battery, more than 50% of the energy lies below 1.2 V [11]; hence, without a converter, this energy remains unusable [16].
Most available nodes are powered by a standard 3.6 V lithium-ion battery with a capacity of approximately 2400 mAh. Standard 9 V rechargeable batteries can also be used. Berkeley MICA motes are powered by a standard pair of AA batteries producing 3.2 V down to 2.0 V, with a Maxim MAX1678 DC-DC converter providing a constant 3.0 V supply. The µAMPS node, however, uses a single 3.6 V DC source.
2.3 Microstrain’s Node Architecture
We use Microstrain's hardware for this thesis work [31]. Unlike the various commercially available sensor network platforms designed specifically for environmental monitoring, these nodes are designed specifically for machine monitoring in industrial setups. The block diagram in Figure 2.3 shows the general node architecture of a Microstrain sensor node.
Figure 2.3 General node architecture of Microstrain sensor node [32]
Each node takes input from its sensors and transmits the data wirelessly to a base station connected to a terminal through a serial RS-232 link.
Each node is in itself a complete wireless measurement system, with a Microchip PIC 16F877A microcontroller [30] at its heart. This RISC CPU has just 35 single-word instructions, all single-cycle except program branches, which take two cycles; it operates at DC to 20 MHz, with up to 8K x 14 words of flash program memory, up to 368 x 8 bytes of data memory (RAM), up to 256 x 8 bytes of EEPROM data memory, and low power consumption due to the CMOS technology used. The data EEPROM stores the sensor calibration coefficients, filter parameters, and a 16-bit unique node ID. The external 2 MB data memory on the nodes is an Atmel AT45DB041B serial flash, used to store the data logged from the sensors; it can hold up to one million data points. Analog data from the sensors is converted into digital data by an external A/D converter, a Microchip MCP3204. It is an 8-channel, 12-bit ADC featuring an SPI serial interface and a sampling rate of 100 K samples/second. The microcontroller communicates with the transceiver over a serial interface. The nodes contain a low-power RF Monolithics TR1000 transceiver [36] using on-off keyed (OOK) modulation of a 916 MHz carrier frequency and providing a transmission rate of 19.2 kbps. It uses a half-wave monopole antenna of 50 ohm impedance to provide up to 30 m of range in the above configuration, and it draws input currents of 3.1 mA, 12 mA, and 0.7 µA in the receive, transmit, and sleep modes of operation, respectively. The sensor nodes are multi-channel, with a maximum of 8 sensors supported by a single wireless node. A single receiver (base station) addresses multiple nodes; a maximum of 2^16 nodes can be addressed, as each node has a unique 16-bit address. All nodes support a 9 V external rechargeable battery. The baud rate on the serial RS-232 link between the base station (BS) and the terminal PC is 38400.
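As a rough, illustrative calculation (not a vendor specification), the following MATLAB fragment estimates node battery life from the current draws quoted above under an assumed duty cycle; the battery capacity and the duty-cycle split are assumptions made for the sake of the example.

    % Rough battery-life estimate for a node (illustrative assumptions only).
    I_rx = 3.1e-3;  I_tx = 12e-3;  I_sleep = 0.7e-6;  % TR1000 currents quoted above (A)
    d_rx = 0.01;    d_tx = 0.01;   d_sleep = 0.98;    % assumed fractions of time per mode
    capacity_mAh = 1200;                              % assumed 9 V battery capacity (mAh)
    I_avg = I_rx*d_rx + I_tx*d_tx + I_sleep*d_sleep;  % average current draw (A)
    life_hours = (capacity_mAh*1e-3) / I_avg;         % estimated lifetime (h)
    fprintf('Average current %.3f mA, lifetime about %.0f hours (%.1f days)\n', ...
            I_avg*1e3, life_hours, life_hours/24);

Under these assumed numbers the average draw is about 0.15 mA, giving a lifetime on the order of months; heavier transmit duty cycles shorten this dramatically, which motivates the sleep scheduling of Chapter 6.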
Figure 2.4 shows the internal block diagram of a V-Link sensor node, which can take input from a variety of external sensors and transmit the readings to the base station. Sensor nodes for measuring specific physical quantities like vibration, strain, and temperature are also available. A set of wireless sensor nodes, together with a base station, forms a complete sensor network.
Figure 2.4 Internal block diagram of V-Link node [33]
There are four channels that provide differential input, each including both programmable gain and programmable offset. Three further channels allow direct input into the A/D converter without any amplification, accommodating direct voltage input from sensors with a range of 0 to 3.0 volts. The system uses a twelve-bit A/D converter, and its output is converted to volts using the transfer function

    OutputVolts = OutputBits × (3.00 / 4096)

where 3.00 V is the maximum voltage that can be obtained from any sensor and 4096 (= 2^12) is the corresponding digital value; for example, a raw reading of 2048 bits corresponds to 1.50 V. Additionally, an internal temperature sensor is provided on channel 8 of the A/D converter to allow for temperature measurement [33].
Figure 2.5 shows the G-Link, a high-speed tri-axial accelerometer node designed to operate as part of an integrated wireless sensor system. Its very small form factor makes it possible to place the accelerometer in tight contact with the structure being measured. It contains an Analog Devices ADXL2XXJE tri-axial accelerometer, which can measure acceleration in the range of +/- 10 G and has shock limits of 500 G. These nodes have an operating temperature range of -40 to +85 °C.
Figure 2.5 G-link Sensor Node [32]
The rest of the blocks and the functionality of the G-Link are the same as the V-Link, except that three of its channels take inputs from the three axes of the accelerometer and the remaining channels are unused. Digital accelerometer readings can be calibrated to output acceleration in G's using a defined set of procedures.
Figure 2.6 shows the SG-Link node from Microstrain Inc., which offers a small form factor suitable for installation almost anywhere. It is a complete wireless strain gauge node, designed for integration with high-speed wireless sensor networks, combining full strain gauge conditioning with the wireless sensor node. It can have up to three channels of strain gauge inputs measuring strains in the range of +/- 1 µstrain for three-quarter-bridge installations; quarter-bridge, half-bridge, or full-bridge configurations can be used to measure the strains. It provides +3 VDC bridge excitation, with the capability for pulsed bridge excitation and synchronous A/D conversion to conserve power, so that only the channel being sampled is excited. Thus, adding an additional strain gauge bridge does not increase the rate of power consumption.
Figure 2.6 An SG-link Sensor Node [31]
The base station, which communicates with all the nodes in the network, in turn communicates with the terminal PC over a serial RS-232 link.
CHAPTER 3
CONDITION BASED MAINTENANCE
Manufacturing equipment in industry is subjected to heavy wear and tear during the course of its operation. Hence, it requires maintenance and cannot be left alone after initial installation. Maintenance action based on the actual condition of equipment health is termed condition based maintenance. CBM can be defined as dynamic maintenance scheduling, based on the instantaneous operating condition of a machine, designed to have minimum impact on the overall functioning of the system. A comprehensive approach to CBM, prognostics, and health management has been developed by Dr. George Vachtsevanos at Georgia Tech [45], [46].
3.1 Motivation
Manufacturing facilities in industrial sectors are becoming more complex and highly sophisticated, with emphasis on higher throughput and better quality along with maximum yield. The manufacture of typical products such as aircraft, automobiles, appliances, and medical equipment involves a large number of complicated machines: turbines, engines, motors, expanders, pumps, compressors, and generators, plus various integral components, make up each individual system. Manufacturing processes involving this equipment are often complex and are characterized by highly nonlinear dynamics coupling a variety of physical phenomena in the temporal and spatial domains. It is not surprising, therefore, that these processes are not well understood and that their operation is "tuned" by experience rather than through the application of scientific principles [46]. Machine breakdowns are common, limiting uptime in critical situations. Failure conditions are difficult and, in certain cases, almost impossible to identify and localize in a timely manner, and scheduled maintenance practices tend to increase downtime, resulting in loss of productivity [46].
An architecture is hence desired that can collect data from on-line sensors, assess the current condition of components, decide upon the maintenance needs of those components, and schedule maintenance operations so as to have minimum downtime. Such an architecture, if implemented efficiently, promises to:
Increase equipment utilization.
Positively impact the overall yield.
Tighten process variability monitoring.
Avoid breakdowns.
Avoid process variations.
Avoid unscheduled maintenance.
Avoid equipment-caused defects.
Avoid loss of equipment speed.
Avoid manufacturing overheads.
Reduce manning levels in factories.
Maximize productivity and lifetime of equipment.
Hence, effective condition based maintenance can save billions of dollars in manufacturing costs.
3.2 Overview
Faults and failures are the terms that form the basis for condition based maintenance (CBM) systems: a fault, if not taken care of, propagates to a failure. Figure 3.1 outlines the architecture of CBM and shows the building blocks of an integrated CBM system.
Various measurements from on-line sensors are acquired and collected by the data acquisition block, which performs distributed sensing to obtain measurements for all critical components of the equipment. These measurements are then made available to the diagnostics block for identification of faults.
The diagnostic module assesses the current state of critical machine components. This involves continuous monitoring of sensor data and classification of impending faults. The fault classification is often made by comparing currently obtained measurements with historically available measurements for particular faults. On determining a fault condition, the diagnostic module triggers the prognostic module and provides the failure-pertinent sensor data [46].
Figure 3.1 Overview of CBM System
The prognostic module takes input from the diagnostician and decides upon the need to maintain certain machine components on the basis of historical failure-rate data and fault models. This module serves to answer the question: what is the remaining useful lifetime of a machine component once an impending failure condition is detected and identified? It projects the future temporal behavior of the faulted component [46].
Propagation of a fault (say, the growth of a crack in a bearing) is difficult to model accurately. The lack of available historical data, and the strong dependency of fault growth models on system architecture, operating conditions, environmental effects, etc., make the task even tougher. The prognostic module receives fault data from the diagnostic module and determines the allowable time during which machine maintenance must be performed so that the integrity of the process is maintained. This determination of time-to-failure must, however, be updated dynamically as more information becomes available from the diagnostician [46].
The scheduling module takes the time-to-failure/remaining useful lifetime as input from the prognostics module and schedules the maintenance operation such that the other functionality of the system is not disturbed. This involves determining the type of maintenance to be performed, the time required to perform it, and the total time available during which it must be performed. The scheduler then schedules the maintenance depending upon redundant machine availability, timing constraints, production requirements, and resource and maintenance personnel availability [46].
3.3 Complete CBM Architecture
Figure 3.2 shows a complete CBM architecture, spanning data acquisition and monitoring (preprocessing, monitoring, and post-processing in LabVIEW), analysis techniques (neural networks, fuzzy logic, wavelets, and time and frequency domain methods), and functional building blocks for feature extraction, fault pattern matching, training, self-learning, health assessment, sensor fusion, and prognosis and diagnosis.
Figure 3.2 CBM Architecture [24]
3.4 CBM Diagnostics
Diagnostics involves both fault and failure diagnosis. Detecting, isolating, and identifying an impending or incipient failure condition, while the affected component is still operational although in a degraded mode, is fault diagnosis. Failure diagnosis, however, is detecting, isolating, and identifying the system component that has ceased to operate. There are two phases to CBM diagnostics: an offline phase and an online phase. The offline phase involves background studies, namely physics-based fatigue modeling and fault mode analysis; the online phase performs real-time fault monitoring and diagnosis [23].
3.4.1 Offline Phase
The background study in the offline phase aims to model various faults and study the physics associated with them. It involves identifying the best features to track for effective diagnosis, identifying the measured outputs needed to compute those features, and building the fault pattern library. This helps to accurately model fault growth patterns and predict remaining useful life while performing real-time fault monitoring and diagnosis.
Physics-based fatigue modeling, such as crack initiation models, must account for variations in temperature, stress ratio, cycle frequency, and sustained hold time, as well as the interaction of damage mechanisms [45].
Fault mode analysis involves identifying the failure and fault modes, classifying the failure modes according to their criticality, relating failure events to their root causes, and identifying means of detecting incipient faults. Different components of a system have different fault modes. For example, an electro-hydraulic flight actuator has fault modes such as control surface loss, excessive bearing friction, hydraulic system leakage, air in the hydraulic system, malfunctioning of the pump control valve, etc. [43].
The required inputs for the diagnostic models are termed feature vectors. The feature vectors contain information about the current fault status of the system and may contain many sorts of information about it, including both system parameters relating to fault conditions (bulk modulus, leakage coefficient, temperatures, pressures) and vibration and other signal analysis data (FFT, energy, kurtosis). Feature vector components are selected using physical models and legacy data: physical models show that parameters such as bulk modulus and leakage coefficient should be included, while legacy data shows the importance of vibration signature energy, kurtosis, etc. Different feature vectors are needed to diagnose different subsystems [25].
Most feature vector components cannot be measured directly using physical sensors; hence, proper sensor measurements must be chosen for extracting the feature vectors through system identification and digital signal processing. A fault pattern library can then be created based on conditions on the selected feature vectors. Feature vectors are time varying and are monitored continuously; at each time, the fault status is determined by comparing the feature vector to the library of stored fault patterns, as sketched below.
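The following MATLAB fragment illustrates, using a simulated signal and a hypothetical two-entry pattern library, how vibration features such as signature energy (RMS) and kurtosis might be extracted and matched against stored fault patterns; it is a sketch of the idea, not the diagnostic algorithm of the cited work.

    % Illustrative feature extraction and fault-pattern matching (all values assumed).
    fs = 1000;  t = 0:1/fs:1-1/fs;
    x = sin(2*pi*60*t) + 0.3*randn(size(t));        % stand-in vibration signal
    xc = x - mean(x);
    rmsval = sqrt(mean(x.^2));                      % vibration signature energy (RMS)
    kurt = mean(xc.^4) / mean(xc.^2)^2;             % kurtosis, computed directly
    feat = [rmsval, kurt];                          % feature vector
    % Hypothetical fault pattern library: each row is [RMS, kurtosis].
    library = [0.75 3.0;                            % assumed "healthy" pattern
               1.20 6.5];                           % assumed "bearing fault" pattern
    [~, idx] = min(sum((library - feat).^2, 2));    % nearest stored pattern
    fprintf('Closest stored fault pattern: %d\n', idx);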
3.4.2 Online Phase
After the fault modes have been identified, the feature vectors selected, and the fault pattern library built, the online phase involves sensing, online feature extraction, fault classification, fault pattern diagnosis, and reasoning. Figure 3.3 shows the fault diagnostic procedure.
Figure 3.3 CBM fault diagnostics procedure [23]
3.5 CBM Prognostics
Prognostics aims to determine the time window over which maintenance must be performed without compromising the system's operational integrity. Prediction of remaining useful lifetime, or time-to-failure, is the most difficult part of CBM, as many uncertainties are associated with the process. Fault propagation and progression impact the prediction and demand a dynamic assessment of time-to-failure.
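As an illustration of the idea (not of any specific prognostic algorithm from the cited work), the following MATLAB fragment fits a linear trend to a monitored fault feature and extrapolates it to an assumed failure threshold, yielding a crude remaining-useful-life estimate that would be updated dynamically as new measurements arrive; all numbers are assumed.

    % Crude remaining-useful-life estimate by trend extrapolation (assumed data).
    t = (0:10)';                                  % inspection times (hours)
    f = 0.10 + 0.02*t + 0.005*randn(size(t));     % monitored fault feature (simulated growth)
    p = polyfit(t, f, 1);                         % linear fault-growth trend
    threshold = 0.40;                             % assumed failure threshold on the feature
    t_fail = (threshold - p(2)) / p(1);           % time at which the trend crosses threshold
    rul = t_fail - t(end);                        % remaining useful life from the last sample
    fprintf('Estimated RUL: %.1f hours\n', rul);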
Prognostics again involves two phases: an offline phase comprising background study and remaining useful lifetime (RUL) analysis, and an online phase comprising real-time prognostics and remaining useful lifetime prediction [24]. Various innovative methodologies can be used to effectively integrate the diagnostic results with maintenance scheduling, which is the ultimate objective of CBM.
The background study for fault prognostics involves fault mode time analysis, identifying the feature combinations to track for effective prognosis and RUL, identifying the best decision schemes to compute those feature combinations, and building a failure time pattern library. Identification of the mean time to failure (MTTF) for each fault condition forms an important part of the offline study for preparing the failure time pattern library. Figure 3.4 shows the maintenance prescription and scheduling procedure.
Figure 3.4 Maintenance prescription and scheduling procedures [24]
The prescription library (PL) and decision support systems shown in Figure 3.4 are case-based reasoning systems, derived from experience, for making correct decisions. Prescriptions for diagnosed fault conditions are based on experience and urgency, where the urgency is conveyed by the prognostics, the RUL, and the priority measures. Prescription libraries can be constructed such that the addition of rules and knowledge adaptively integrates new information through learning. The prescription library generates maintenance requests.
To suitably schedule the prescriptions coming from the PL, priority dispatching information can be extracted from the various performance priority measures. Priority factors such as estimated time to failure and mission due date requirements are hard limits and cannot be exceeded. In some situations, mission criticality and estimated time to failure can interact and progressively escalate the required maintenance prescribed [26].
Given the prescriptions and their due dates and cost priorities, it is necessary to generate work orders and a maintenance plan with priority rankings that guarantee quality of service. Advanced planning and scheduling techniques can be used to schedule component and subsystem activity in such a way that overall product due dates and required delivery numbers are satisfied.
Given a maintenance plan and work orders with priority rankings, it is necessary to assign maintenance units to perform the tasks. This is similar to the shared-resource assignment problem in manufacturing: since some resources are shared, they must be assigned based on priority orderings, and care must be taken to avoid deadlocks and blocking, where units are held up waiting for other units or resources [26].
3.6 Motivation for Wireless Sensor Networks in CBM
Distributed data acquisition and real-time data interpretation are the two primary ingredients of an efficient CBM system. The two are mutually dependent: the richer the acquired data, the more mature the interpretation becomes, and the more mature the interpretation, the less data needs to be acquired. Data interpretation algorithms are learning systems that mature with time. Distributed data acquisition should thus be adequate both for machine maintenance and for learning by the monitoring system. In control theory terms, one needs both a component to control the machinery and a component to probe or identify the system.
Wireless sensors are playing an important role in providing this capability. In wired systems, the installation of enough sensors is often limited by the cost of wiring, which runs from $10 to $1000 per foot. Previously inaccessible locations, rotating machinery, hazardous or restricted areas, and mobile assets can now be reached with wireless sensors, which can also be easily moved should a sensor need to be relocated.
Often, companies use manual techniques to calibrate, measure, and maintain equipment. In some cases, workers must physically connect PDAs to the equipment to extract data from particular sensors, and then download the data to a PC [21]. This labor-intensive method not only increases the cost of maintenance but also makes the system prone to human error. Especially in US Naval shipboard systems, reduced manning levels make it imperative to install automated maintenance monitoring systems.
Wireless sensor networks are highly flexible, unattended, self-operative systems with low installation costs and minimal intrusion into the existing infrastructure. WSNs are quick and easy to install, requiring no configuration tools and only limited technical expertise of the installer. A WSN is also the best solution for temporary installations when troubleshooting or testing machines.
WSNs also make it feasible to install redundant sensors that effectively measure the same physical quantity. This sidesteps the important issue of proper sensor placement, which is in itself a huge research area in the field of condition based maintenance.
CHAPTER 4
DESIGN REQUIREMENTS
In order to design an efficient architecture for WSN, it is important to understand the requirements that are relevant to the sensor applications. We chalk out the following requirements for the implementation of WSN for CBM and many similar applications.
4.1 Continuous Sensing
Critical manufacturing processes and equipment must be continuously monitored for any variations or malfunctions. A slight shift in performance can adversely affect overall product quality or the health of the manufacturing equipment. Continuous sensing is therefore necessary for the system.
4.2 Periodic Data Transmission
CBM systems rely on historical data for the diagnosis of impending failures and defects. They are dynamic systems that continuously learn during operation. Periodic data transmission keeps the historical record up to date, which in turn improves the overall efficacy of the system in diagnosing and prognosing failures and in computing the remaining useful lifetime of equipment.
4.3 User-Prompted Data Querying
With a group of sensing nodes monitoring various manufacturing processes and equipment and transmitting data periodically, situations may arise where an engineer wants to query specific nodes to estimate the current status of a particular process or machine. A provision for breaching the cycle of periodic transmissions to address user-prompted querying is thus required.
4.4 Emergency Addressing and Alarms
In any industrial setup, with several critical processes and pieces of equipment
running in production, there can be situations of unforeseen malfunction or variation
beyond prescribed tolerance bands. A mechanism is hence required to define a tolerance
band for each sensing module. When measurements at a particular node exceed the
tolerance, the node must breach the periodic cycle to send an alarm about the
emergency.
4.5 Real-Time Potential
In case of emergency situations, or during some vital processes, it is sometimes
required to monitor certain critical measurements in real time. This helps guarantee
safety objectives and acceptable quality of the system. Therefore, the architecture
should be capable of facilitating critical measurements in real time whenever desired.
4.6 Adaptability
CBM systems are adaptive learning systems, characterized by their evolutionary
behavior over time. They learn and improve with their maturation. They should be
capable of adapting to new situations and incorporating new knowledge into their own
knowledgebase. This inherent adaptability of CBM systems demands a similar
characteristic from the WSN architecture.
4.7 Network Reconfigurability
During the setup phase of a CBM system, or during its normal operational phase,
the maintenance engineer may want to alter the functionality of individual nodes. This
may include changes in sampling rate, the number of data points transmitted during each
transmission, the sequence in which nodes transmit, the number of channels transmitted
from each node, the tolerance band for each sensor node, etc. Such a re-tasking provision
should be built into the design of the WSN.
4.8 Scalability
Over the duration of operation, some sensing nodes may fail or their batteries
may become depleted. Also, a need may arise for installation of more sensing nodes to
monitor processes and equipments more closely and precisely. The WSN should be
scalable to accommodate changes in number of nodes without affecting the entire
system operation.
4.9 Energy Efficiency
Sensor nodes are autonomous devices that usually derive their power from a
battery mounted on each node. It is therefore necessary to build an energy-saving
notion into every component of the WSN system, to prolong the lifetime of each node in
the network. This helps relax the battery recharging requirements for the various nodes.
All layers of the architecture are thus required to have built-in power awareness.
4.10 Feedback Control
To provide real time control capability for certain dynamic processes, features
might be added to allow breaches in normal network operation to transmit control
signals back to the nodes. This could help in reducing manning levels by eliminating
minor manual control or machine resetting operations.
CHAPTER 5
SYSTEM DESCRIPTION
Having explicitly defined the requirements for the given application domain, we
now look at the actual architecture, network topology and protocol design to address
those requirements. Figure 5.1 gives an overview of our system. Battery-operated
sensing nodes are distributed all over the machinery, continuously sensing and
monitoring various measurements. These nodes periodically transmit their data to the
central control for analysis and storage. In case an emergency occurs (i.e., if the
measurement at any node exceeds the set threshold limits), the pertinent data is
immediately transmitted to the base station (BS). Nodes are also required to be able to
transmit their data whenever prompted by the central control center.
The central control center communicates with the distributed sensors through
the BS, which is capable of communicating with multiple sensors using a single channel
RF-link. Figure 5.1 depicts the scenario where all the wireless nodes transmit their data
to the BS. From the BS, the data is then used by other modules of the system. The Data
Analysis module uses the data to run various data-interpretation and decision-making
algorithms with high computational requirements. The Data Base module is used to store
the results obtained from the analysis and also to store various fault pattern and
prescription libraries. The Data Analysis and Data Base modules often run in unison to
obtain useful conclusions and decisions, which are then displayed using the display
module.
Figure 5.1 System Overview
Minor control or machine-resetting operations recommended by the analysis can
be performed by providing a feedback actuator mechanism at each sensing node.
The intent of our WSN is to collect data from distributed sensors so that we can
test-run various data analysis and decision-making algorithms on the combined data
from the various sensors. We wish to compare these runs with a stored fault pattern
library to diagnose faults or impending failures, and to upgrade the existing fault
pattern library. Finally, it is required to estimate the remaining useful life (RUL) of the
equipment and display the results to maintenance personnel.
At the same time, we wish to take energy constraints, latency, and other design
requirements into consideration while selecting various constituents of our overall
system.
5.1 Topology
In consideration of our design requirements, we propose an adaptive and scalable
data-gathering wireless sensor network with an event-driven emergency alarm tipster. In
traditional wireless ad hoc networks, with the any-to-any communication paradigm
shown in figure 5.2, any node may wish to communicate with any other node in the
network. In contrast, the multiple sensor nodes in our network transmit to a single sink
for collective data analysis, decision-making, and storage. The many-to-one paradigm,
shown in figure 5.3, thus becomes the obvious mode of communication.
Figure 5.2 Any-to-Any Paradigm
Figure 5.3 Many-to-One Paradigm
Broadly, two different topological arrangements are used in the many-to-one
network model. In a single-hop topology, all nodes in the network transmit directly to the
central BS, whereas in a multi-hop topology nodes communicate with the central base
station through intermediate nodes. Each node then not only transmits its own data but
also relays data from other nodes to the base station, acting as a router. In doing so,
multi-hop seeks to minimize the distance over which each individual node must
transmit, and hence the energy dissipation at each node. However, it increases the
overall energy consumption of the network.
For the short-range radio used on the nodes, energy consumption in the transmitter
electronics is much greater than the energy required to generate the RF output power
[39]. For a low-power transceiver available today [36], the current contributing to an RF
output power of 1.5 dBm is only 0.45 mA out of 12 mA of total current consumption in
the transmitter section. Using a multi-hop topology would be more energy exhausting, as
a minimum of 11 mA would be drawn by the transmitter section of each node for
every transmission.
To verify the above argument, more rigorous calculations were performed
for another commercially available radio transceiver from Chipcon [4], transmitting
12-bit encoded data at 19.2 kbps using OOK modulation and no threshold at the receiver.
A 3 dB filter bandwidth of 14.4 kHz is used (noise BW = 1.25 x 3 dB BW). A receiver
noise figure of 7.5 dB is assumed. Antennas with 1 dB of gain are used. A 20 dB fade
margin is chosen (99% Rayleigh probability). Packets are 38 bytes long (excluding
preamble), or 456 bits. The system goal is to achieve 90% packet reads on the first try.
The operating frequency is 868 MHz. Assuming the 20 dB fade margin and 1 dB
transmitter/receiver antenna gains, we obtain an allowed path loss of 80.9 dB + P_O,
where P_O is the transmitter output power in dBm. For details refer to [35]. For the
indoor environment of typical cubicle office spaces, the distance relation (D in meters)
is:

P_O + 80.9 dB = -27.6 dB + 20 log(F) + 40 log(D),  F in MHz.

On substituting F = 868 MHz, we have:

P_O + 49.73 dB = 40 log(D)
Using the above relationship, we found that for transmitting over a distance
of 12.7 meters in a single hop, the current drawn by the transmitter is 18.1 mA [4],
whereas transmitting over the same distance using two hops of 5.3 meters takes 13.7 mA
at each of the two transmitters, a total of more than 27.4 mA, since the relay node must
additionally receive the data before retransmitting it. Here we have not considered the
additional current required to drive the remaining node circuitry, nor the protocol
overheads. From this argument it is clear that for short-range transmissions it is wise to
use single-hop transmission.
With a multi-hop topology, nodes near the base station dissipate more energy, as
they end up relaying data for all the distant nodes. The data generated at the routing
nodes often gets delayed because of the data from neighboring nodes awaiting
transmission. These nearby nodes thus die out fast, resulting in degraded overall
network performance and sometimes even terminating the network operation. To
overcome these drawbacks of the multi-hop topology, clustering [14] is often used,
which makes strong assumptions, for example that neighboring nodes have highly
correlated data.
In actual implementations of the multi-hop topology, network performance has
not been satisfactory for resource-constrained distributed wireless sensor networks. As
data hops from node to node across a multi-hop network, information may be lost
along the way. Intel faced a similar problem with motes forming a multi-hop network,
and the problem gets worse as the network size increases [17].
With continuous sensing and periodic transmissions, our network generates high
rates of data traffic. We thus desire the maximum possible throughput for the network.
In many-to-one communication models, throughput capacity is defined as the per-source
data throughput when all sources are transmitting to a single receiver or sink [10]; that
is, the amount of data that can be moved from any of the given sources to a single sink
in a given time period.
If there are n nodes in the network and each of them can transmit at a maximum
rate of W bits/second, then the maximum achievable throughput (with all nodes
transmitting to a single base station) is W/n bits/second per source; for example, ten
nodes sharing a 100 kbps channel can each deliver at most 10 kbps. This maximum
throughput can be achieved only when every source can directly reach the sink [10].
Hence, for a many-to-one communication network, single-hop transmission achieves
the highest possible throughput.
A single-hop topology is capable of providing central control to the network,
which is one of our architectural requirements. With all data transmissions destined for
a single sink, centralized control becomes viable. Nodes in the network can
communicate with each other through the base station, so the nodes dissipate the least
energy and the base station dissipates the most. The base station can thus perform all the
energy-intensive tasks in the network, with nodes just sensing and transmitting their
own data. Central control in a network can support efficient MAC protocols based
on TDMA or FDMA with dynamic frequency allocation, which are otherwise complex
to implement.
Keeping in view the energy constraints, latency requirements, and required
simplicity at the nodes, and to avoid all the control overheads, we adopted the single-hop
topology for our network.
Single-hop transmission facilitates the delivery of data in real time and caters to
the low-latency requirements of time-critical data. It avoids the delay incurred at each
node of a multi-hop network. The single-hop topology alleviates the need for a routing
protocol and consequently saves both energy and complexity at the nodes. It has the
inherent advantage of negligible control overhead. Also, even if a single node in the
network fails, the rest of the network remains unaffected.
CHAPTER 6
UC-TDMA MAC PROTOCOL
For our application domain it is necessary for the user to explicitly define the
sequence in which data will be collected. This helps in establishing relationships
between measurements and drawing conclusions. We here design a User
Configured Time Division Multiple Accessing (UC-TDMA) based MAC protocol for
our network.
TDMA is intrinsically less energy consuming than contention protocols. Many
researchers have focused their work on MAC protocols specifically for WSN [48], [41],
[18], [2], [15], [49]. Ye et al. [49] proposed S-MAC, a contention-based protocol
which sets the radio to sleep during the transmissions of other nodes. It is inspired by
PAMAS [42], another contention-based protocol in which nodes sleep to avoid
overhearing neighboring nodes. PAMAS, however, uses out-of-channel signaling, in
contrast to the in-channel signaling used by S-MAC. S-MAC sacrifices latency and
ignores throughput considerations in its design. Although it eliminates the need to
maintain TDMA schedules at the nodes, it necessitates maintaining the sleep schedules
of all the nodes.
In [9], Dam and Langendoen describe T-MAC, another contention-based MAC
protocol, which uses an adaptive sleep/listen duty cycle in contrast to the fixed duty
cycle used in S-MAC. It seeks to achieve better energy conservation by adaptively
ending the active part of the duty cycle if no activation event occurs for a certain time
(TA). Although it achieves better energy conservation under higher load conditions,
messages arriving immediately after the time TA suffer high latency.
Kannan et al. proposed the TDMA-based ER-MAC [18], which seeks to balance
the energy consumption of the overall network. The sleep/listen duty cycle in ER-MAC
is different for each node and is based on node criticality: more critical nodes sleep
longer. Each node sleeps only in its own time slot and listens in the time slots assigned
to the other nodes even if they do not transmit, so it suffers from the overhearing
problem. Moreover, each node has to maintain a two-tuple receive table, and there is
increased protocol overhead due to the voting and selection of the critical nodes.
Carley et al. describe a contention-free periodic message scheduler MAC in
[2]. It requires each node to run a real-time scheduling algorithm to determine which
message has access to the medium, thereby trading memory at each node for
computation at each node. It also introduces a high probability of interference between
networking and computational tasks.
In our work we have tried to incorporate the advantages of a sleep/listen duty
cycle for energy savings and overhearing reduction. Our hybrid UC-TDMA protocol
combines scheduling and contention to achieve the lowest possible protocol overhead
and 100% collision avoidance with deterministic slot allocation. We also seek to
achieve negligible protocol processing at the nodes and minimal contention overhead.
Most of these features can be attributed to the single-hop topology and the centralized
control given to the base station.
6.1 MAC Protocol Attributes
To satisfy our application design requirements, we have translated them into
desired attributes of the MAC protocol for our network. We have focused on the
following facets to design an efficient MAC for the network.
First of all is energy efficiency. As these networks are intended to operate for
long durations, to get the most out of battery-operated sensor nodes it is necessary to
have an energy-aware MAC protocol. The MAC should hence consume the minimum
possible energy in channel assignment and access.
Scalability of the MAC is important for overall network scalability. Some of the
nodes in the network might fail, and others might be added later in time. The MAC
protocol should be capable of scaling to such changes in the network.
To provide effective services to the adaptive application layer (CBM),
adaptability at the MAC is also desired. The channel access requirements of different
nodes in a network change with changes in the data requirements of the application. A
good MAC should easily adapt to such changes.
In contrast to many other proposed MAC protocols for WSN [49], [9],
throughput and latency are significant attributes of our design. Our MAC should
induce minimum latency, to provide real-time data from the sensors. As the nodes in
our network continuously sense and transmit data, much heavier traffic is generated in
the network, and as the number of nodes increases, the maximum achievable throughput
decreases. The highest possible throughput is therefore desired from the MAC.
Figure 6.3 gives the flowchart of our UC-TDMA protocol.
6.2 TDMA Slot Allocation
Frame 1: | 1 | 2 | 1 | 3 | ... | N |   Frame 2: | 1 | 2 | 1 | 3 | ... | N |   -> time
Figure 6.1 UC-TDMA frame showing time slots for N nodes in network
Even though TDMA-based protocols offer natural collision avoidance and
energy preservation, they are sometimes not preferred for memory-constrained sensor
networks. Neither the traditional table-driven approach nor the scheduler approach to
TDMA implementation is simple. Maintaining a TDMA table at each node takes up a
major chunk of its valuable memory. This can hamper other memory-hungry
operations, such as in-network data processing, which must then share the same
limited onboard memory with the TDMA tables. Running a TDMA scheduler on each
node is complex, again memory intensive, and needs tight coordination among the
nodes.
We circumvent this major impediment by using our central base station to
maintain the TDMA slot assignment table. Being a single-hop network, all the nodes
communicate directly with the BS. It is thus easier to maintain the time slots for all the
nodes at the BS alone and communicate with them accordingly. With this approach,
nodes need not maintain any table or do any scheduling for the time slots, which saves
both memory and complexity at the nodes.
To assign time slots to the different nodes in the network, the user can define the
sequence in which nodes access the channel and also the time duration of their
respective slots. Depending on the application, nodes may access the channel more
than once in a given time frame, and the length of each slot can differ. Figure 6.1 shows
the UC-TDMA frame for our network. Note that this is a little different from the usual
TDMA method, in the sense that we trade off fairness at each node for meeting our
application needs.
6.3 Energy Model of Radio
Knowledge of the application domain, along with the physical-layer functionality
of the network hardware, is the key to an effective energy-aware MAC protocol for
wireless sensor networks. Having looked at the application-specific design requirements,
we now investigate the physical layer. Using the radio model for power consumption by
Shih et al. [40], we developed a model for the battery consumption of the radio used on
our nodes.
We have considered three modes of operation for our radio model, namely – Transmit
mode, Receive mode, and Sleep mode. Figure 6.2 shows the symbolic block diagram of
a typical transceiver. It shows three different sections described as follows.
Transmitter Electronics: whenever the radio is in transmit mode this section is
activated, drawing a total current of Itx amperes, of which Itxm is the actual
modulation current that produces the RF output power.
Receiver Electronics: the radio activates this section to receive any signal on
the RF channel when in receive mode. It draws a total current of Irx.
Other Electronics: this section constitutes the circuitry necessary to drive the
transmitter or receiver section. Symbolically, the diagram shows that Itx is the total
current drawn when the transmitter section is active (i.e., in transmit mode), Irx is the
current drawn when the receiver section is active (i.e., in receive mode), and when
neither section is active the radio is in sleep mode. Is is the current drawn by the radio
in sleep mode.
Figure 6.2 Symbolic Radio Model
Radios on the sensor nodes are powered by the batteries mounted on the nodes.
The amount of energy that can be stored in a battery, i.e., its capacity, is measured in
ampere-hours. Hence we model the energy consumption of the radio in ampere-hours,
to establish a direct relationship with the battery consumption due to the radio. The
Battery Consumption Equation, given below, measures the battery capacity consumed
by the radio in one hour; the total battery consumption can then be obtained from this
per-hour measurement.
AmpHrs/Hr = N_{s/rx-tx} [ I_tx T_{s/rx-tx} + (I_tx + I_txm) T_tx ]
          + N_{s/tx-rx} [ I_rx T_{s/tx-rx} + I_rx T_rx ]
          + N_{rx/tx-s} [ I_s T_{rx/tx-s} + I_s T_s ]
          + I_{rx/tx} T_{turn-on}
N_{s/rx-tx} is the number of times per hour that the radio switches to transmit mode
from either sleep or receive mode; similarly, N_{s/tx-rx} and N_{rx/tx-s} are the
numbers of times per hour that the radio switches to receive mode and sleep mode,
respectively, from either of the remaining modes. T_{s/rx-tx} is the time taken by the
radio to switch to transmit mode from either sleep or receive mode; similarly,
T_{s/tx-rx} and T_{rx/tx-s} are the times taken to switch to receive and sleep mode,
respectively, from either of the remaining modes. T_{tx/rx/s} is the actual time for
which the radio transmits, receives, or sleeps after switching to the respective mode.
T_{turn-on} is the time required by the radio to become operational in either mode
after power-on. I_tx is the current drawn by the transmitter electronics, and I_txm is
the modulation current responsible for generating the RF output power; the RF output
power is usually directly proportional to the square of the modulation current
(P_rf is proportional to I_txm squared). I_{rx/s} is the current drawn by the radio in
receive/sleep mode. Note that no transmission or reception can be performed while
switching or starting up.
For typical short-range radios operating in the 916 MHz ISM band, I_tx and I_rx are
comparable and both are far greater than I_s. Switching times between the various
modes are also critical, as they are often on the order of milliseconds. This relationship,
along with the battery consumption equation described above, sets the foundation for an
efficient MAC design.
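As an illustration, the battery consumption equation can be evaluated numerically.
The sketch below is a minimal MATLAB rendering of the model; all current and timing
values are placeholder assumptions, not data-sheet figures, and should be replaced with
the parameters of the actual transceiver:

% Placeholder radio parameters (assumed, not from any data sheet)
Itx  = 12.0e-3;  Itxm = 0.45e-3;   % transmitter / modulation current (A)
Irx  = 12.0e-3;  Is   = 2.7e-6;    % receiver / sleep current (A)
Tsw  = 30e-3;                      % switching time per mode change (s)

Ntx = 120; Nrx = 120; Nsl = 120;   % mode entries per hour (assumed)
Ttx = 0.1; Trx = 0.1;              % dwell time per transmit / receive (s)
Tsl = (3600 - Ntx*(Tsw+Ttx) - Nrx*(Tsw+Trx) - Nsl*Tsw) / Nsl;  % sleep dwell (s)

% Ampere-hours consumed per hour, per the battery consumption equation
% (the one-time turn-on term is omitted here)
ah_per_hr = ( Ntx*(Itx*Tsw + (Itx+Itxm)*Ttx) ...
            + Nrx*(Irx*Tsw + Irx*Trx) ...
            + Nsl*(Is*Tsw  + Is*Tsl) ) / 3600;
fprintf('battery drain: %.4f mAh per hour\n', ah_per_hr*1e3);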
6.4 Sleep Scheduling
A typical radio for short-range transmission operates in one of the following
modes. Transmit mode, in which it transmits data on the RF channel. Receive mode, in
which it receives data transmitted to it on the RF channel. Idle/listen mode, in which it
keeps tracking the RF channel for any intended data. Sleep mode, during which the
radio shuts off; it can neither receive nor transmit in this mode. Current consumption
in idle mode and receive mode is almost the same, and some radios have a common
idle and receive mode: such radios, when idle, are in receive mode as they keep
listening to the RF channel. For commercially available radios, the ratio of current
drawn in sleep mode to that in listen/idle mode is on the order of 1:4500 or more [36].
As nodes are idle most of the time, except when they transmit or receive, the
listen/idle mode leads to substantial energy wastage.
Many MAC protocols for WSN exploit this radio hardware feature by putting
the radio into sleep mode when it is neither transmitting nor receiving on the channel. In
the periodic sleep-and-listen scheme used by many protocols, the radio sleeps for a
certain duration and then listens for a certain time to see if anyone wants to talk to it,
switching back and forth between the two modes. To keep latency within tolerable
limits, the frequency of this switching is usually kept high, so that a sender need not
wait long for the receiver to wake up.
However, on considering our radio energy model, we discovered, surprisingly,
that the energy consumed in periodically switching modes can exceed that required for
the transmission of data. Suppose that in the steady phase of a sensor network the radio
switches every 30 seconds from sleep to awake and from awake to sleep. On the basis
of the energy model given in section 6.3, the current consumption in just switching
modes is then 0.059 mA-hour per hour, enough to transmit data for 17.7 seconds per
hour; this is much more than the typical transmission time per hour for any sensor in
the network. (The figures are for a commercially available radio [36].) Pertaining to our
application requirements, we thus seek to minimize the frequency of switching between
sleep and receive modes while keeping latency at its minimum.
Given the sweep rate for each node, the number of data points from each node,
the frequency at which each node transmits (every r hours), and the sequence in which
the nodes transmit, the sleep duration for every node in the network can be calculated
using the following formulation:
T_p = U [diag(S_r)]^{-1}

S_d = (N_s T_p^T) U - U diag(N_s^T T_p)    if updating rate not given    (2)

S_d = 3600 R_u - U diag(N_s^T T_p)    if updating rate given    (3)
where S_r (1 x n) is the matrix containing the sweep rates of all the nodes in the
sequence in which they transmit, N_s (1 x n) is the matrix containing the number of
data points transmitted by the respective nodes, U (1 x n) is a unit matrix, R_u (1 x n)
is the updating rate matrix containing the rate (every r hours) at which each node
transmits its data, T_p (1 x n) is the time period matrix (1/sweep rate) for each node,
and S_d (1 x n) is the matrix containing the sleep duration (in seconds) calculated for
each node in the network. Here N_s T_p^T is the total frame duration (a scalar) and
U diag(N_s^T T_p) is the row vector of individual slot durations N_s(i) T_p(i).
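A minimal MATLAB sketch of this calculation, under the matrix definitions above
(the node values below are illustrative, not those of the test-bed):

% Nodes listed in their TDMA transmit sequence (example values)
Sr = [2048 2048 512];        % sweep rates (sweeps/sec)
Ns = [512 512 256];          % data points per transmission
Tp = 1 ./ Sr;                % time period per sweep: T_p = U*diag(S_r)^-1
slot = Ns .* Tp;             % individual slot durations, U*diag(N_s'*T_p)

% Eq. (2), continuous mode: each node sleeps while all the others transmit
Sd_cont = sum(slot) - slot;

% Eq. (3), non-continuous mode: node i transmits every Ru(i) hours
Ru = [1 1 2];                % updating rates in hours (assumed)
Sd_noncont = 3600 .* Ru - slot;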
In calculating the sleep durations we have neglected the time taken by the
nodes to set up the link with the base station before transmitting data. This provides a
margin to tackle the clock drifts of the nodes, as each node wakes up a little before its
turn to transmit.
In calculating sleep durations we take advantage of our single-hop topology,
with the central base station maintaining the TDMA schedule for all the nodes. We thus
schedule the sleep times of the nodes such that they sleep for the maximum possible
duration with the minimum possible switching frequency.
6.5 Modes of Operation
Broadly, we have considered two modes of operation for our network,
categorized by the data-gathering frequency of the network.
The first is continuous mode. This is useful for newly deployed networks, where
we are usually confronted with the question of how frequently data should be collected
from the various sensor nodes. In this mode we collect data continuously and
sequentially from each node, keeping the base station busy all the time. The sleep
durations given by equation (2) are used in this mode: nodes transmit their data and
then sleep for the time during which the other nodes transmit.
The other mode is the non-continuous mode of network operation. After operating
a newly installed network in continuous mode and properly analyzing the data obtained,
one can answer the question posed above. We can then form an updating rate matrix
R_u (1 x n) and use equation (3) to obtain the sleep durations. This updating rate matrix
gives the updating rates for all the nodes in the network: if it specifies r as the updating
rate for a given node, then that node transmits data every r hours. All nodes can have
different updating rates. In this mode, too, a node sleeps all the time except when it
transmits in its turn.
6.6 Network Setup
To set up and run the sensor network for machine monitoring, we first need to
physically install the sensors at proper locations on the machine. Each sensor contains a
16-bit node type associated with the physical quantity it measures (such as vibration,
temperature, pressure, or strain). The base station is connected through a serial port to a
handheld computing device (such as a PDA or laptop), which runs the application
program. At power-on, all nodes in the network are in receive mode. In the flow chart
given in figure 6.3, the first few blocks describe the setup of the network.
Through a user interface, the user defines the functionality of the various nodes
in the network and can set different parameter values for different nodes. Node
parameters include sweep rate, number of sweeps, node type, sequence number of the
node in the TDMA frame, and active channels. The base station keeps these parameters
in separate parameter arrays, ordered by the sequence of the nodes in the TDMA time
frame.
After obtaining the functional definition of every node in the network, the base
station checks the availability of the defined nodes. If any node is found to be missing,
the base station alerts the user about the missing node and updates all of its parameter
arrays. The sleep duration for each node is then calculated using equation (2) or (3).
The base station (BS) then configures all the nodes one by one with the defined
parameters and calculated sleep durations.
Figure 6.3 Flow Chart for UC-TDMA MAC Protocol
6.7 Main Thread
After setting up the network and configuring all the nodes, we are ready to
gather data from the distributed sensors. Data is collected in accordance with the
UC-TDMA frame maintained by the base station. We seek to minimize two major
sources of energy wastage, viz., collisions and protocol overhead, by using our modified
version of the RTS (Request To Send) and CTS (Clear To Send) mechanism used in
IEEE 802.11.
We exploit our mains-powered base station to effect this modified RTS-CTS
mechanism. From the TDMA schedule it maintains, the base station knows which node
has access to the channel at any particular instant. The instant any node acquires the
channel according to its schedule, the BS itself generates a virtual RTS signal on behalf
of that node, after ensuring that no other node is communicating with it. The node also
wakes up at this instant and is ready to receive the CTS signal from the base station
before it transmits its data.
The BS sends a CTS signal with the node address appended to it. On receiving
this signal, the node transmits the predetermined number of data points to the BS. After
successful reception of the data points, the BS acknowledges the node with a
request-to-sleep signal. In a similar manner, data is collected sequentially from all the
nodes in the UC-TDMA frame, and the frame is then repeated to collect data from the
network indefinitely.
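In outline, the base station's data-gathering loop behaves as sketched below;
stop_requested, send_cts, read_data, and send_sleep are hypothetical helpers standing
in for the serial commands of chapter 7:

% Schematic base-station loop for one UC-TDMA frame, repeated indefinitely.
% (Hypothetical helper functions; the emergency handling of section 6.10
% is omitted for clarity.)
while ~stop_requested()
    for i = 1:numel(sequence)            % nodes in user-configured slot order
        node = sequence(i);
        % Virtual RTS: the BS only proceeds once no other node holds the
        % channel, so the node itself never contends.
        send_cts(node);                  % CTS with the node address appended
        data = read_data(node, npts(i)); % node replies with its data points
        send_sleep(node, sleep_dur(i));  % acknowledgement doubles as sleep request
    end
end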
Collisions cause a significant amount of energy wastage, as messages must be
retransmitted. Retransmission of messages also sometimes causes a loop in the schedule
and may spoil other ongoing transmissions. It is thus wise to spend some energy on a
contention mechanism along with scheduling. Yet even with contention, collisions are
not reduced to zero, and latency is increased as the sender waits a random duration
before contending again for the channel.
With our modified RTS-CTS mechanism, the BS generates the virtual RTS, so
there is no chance that any other node will contend for the channel during normal
network operation. Even if a node is scheduled to access the channel while the BS is
talking to some other node, the virtual RTS will be generated only after the BS finishes
talking with the current node. This reduces the probability of collision to zero.
The virtual RTS scheme also reduces the contention control overhead to zero for
the nodes in the network. Because these control overheads are short packets, they are
highly energy exhausting: in the transmission of small packets, the energy wasted in
starting up the transmitter electronics turns out to be more than the energy required for
the actual transmission of the packets [40].
In the original RTS-CTS mechanism [20], a considerable amount of energy is
wasted in switching modes from sleep to receive, then to transmit, and then again to
receive each time an RTS signal is sent. In our scheme, the processing required at a
node to acquire the channel is also reduced to nearly zero: the node simply sleeps and
wakes up according to a timer. The modified RTS-CTS scheme thus saves a fairly large
amount of energy by exploiting our overall network arrangement and data-gathering
requirements.
6.8 Adaptability and Reconfigurability
During normal operation of the network in non-continuous mode, the
application's data requirements might change for a set of measurements. Our network
should then be able to adapt to such changes by adjusting certain node parameters
(such as sampling rate, number of data points, and active channels), and this adjustment
should not disrupt the normal functioning of the rest of the network.
To render the adaptability mentioned above, the base station uses the new node
parameters to calculate a new sleep schedule for the desired nodes. Here we have
assumed that the application does not compel the network to change the sequence of
nodes in order to adapt. To configure the nodes with the new schedule and parameters,
after completing the ongoing cycle the BS appends these parameters and the new sleep
schedule to the CTS signal sent to the nodes as they acquire the channel, and then
retrieves data according to the newly configured parameters.
To facilitate the re-tasking provision mentioned in section 4.7, which also
includes changing the sequence in which nodes transmit, the BS stops acknowledging
the nodes with the sleep command once it determines that there is a reconfiguration
request. Consequently, all the nodes in the network remain awake at the end of the
cycle. This situation is similar to the setup phase: the BS calculates a new sleep
schedule for the entire network and uses a new UC-TDMA frame to sequence the
nodes. Configuring the nodes with the new parameters takes one complete TDMA
frame, so it takes one cycle for the network to begin operating with the new
configuration. It is as good as starting the network again with new node parameters and
a new UC-TDMA frame.
6.9 Scalability
To address the scalability requirements posed in section 4.8, the following two
aspects must be taken care of: failure of existing nodes and addition of new nodes.
If a node is not able to transmit its data at its scheduled time, it is considered to
have failed. As can be seen from the flowchart, if data from any node is not retrieved, it
is declared failed. The BS then reports the missing node (with its node type and node
address) to the engineer. The UC-TDMA frame is then scaled by removing the failed
node from the sequence. The sleep schedule for the remaining nodes is recalculated
with the new TDMA slot sequence, and the new sleep schedule for each node is
appended to the CTS signal sent to the nodes according to the existing slot sequence.
As can be seen from the flowchart, the new UC-TDMA frame takes effect after the new
schedule has been sent to all nodes according to the existing schedule.
Addition of new nodes is not so frequent an affair in a CBM network. To scale
the UC-TDMA frame with the addition of new nodes, after every ten repetitions of the
TDMA frame the BS pings for the availability of newly installed nodes in the network.
On detecting a new node, the base station reads its node type, its sequence number in
the UC-TDMA time frame, and its other node parameters. These parameters are then
inserted into the respective parameter arrays at the location specified by the sequence
number. The newly calculated sleep schedules are appended to the CTS signals for the
respective nodes, and the nodes then access the channel according to the new slot
sequence. After retrieving data from a node, it is set to sleep for the time specified by
the new schedule, so that the network can carry on with its normal operation from the
next cycle. Refer to figure 6.3.
6.10 Emergency Addressing and Alarm
To address any emergency situation as described in section 4.4, our nodes keep
sensing the physical quantity even when the radio is in sleep mode. Each node
continuously compares the measured value with the set threshold. On determining that
the measured value exceeds the threshold, the node declares an emergency situation to
be addressed immediately. The node then wakes up its radio and transmits its node
address to the BS until it gets a response.
In the continuous mode of network operation, the BS remains busy talking to
other nodes all the time. So, when a node in emergency transmits its address on a
channel already occupied by some other node, the collision results in a continuous
checksum error at the base station. Since our MAC protocol assures that there are no
collisions in any other situation, the BS interprets these continuous collisions as an
indicator of emergency. To handle this, the BS suspends the ongoing operation and
receives the address of the node in emergency. After addressing the emergency, the BS
catches up with the node scheduled to access the channel at that particular instant.
In the non-continuous mode of operation, on the other hand, there is a high
probability that the BS is idle at the time the emergency occurs, which makes it easier
to handle. On receiving the node address on the RF channel, the base station starts
communicating with the node and addresses the emergency. In case an emergency
occurs while the BS is communicating with another node, the BS can address it by
following steps similar to those described above.
6.11 State Machine for Nodes
One of the important aspects of our network organization and protocol is to
minimize the processing required at the nodes for enabling their communication with
the BS. The simple state machine running at each node is shown in figure 6.4. At
power-on, nodes enter the receive state, as it consumes less startup energy than the
transmit state. From the energy model given above and typical radio specifications
[36], we found that it takes 69.78% more energy to start the radio in transmit mode
than in receive mode.
In the receive state, the node looks for commands from the BS. In the setup
state, the node sets up various parameters such as active channels, sweep rate, number
of data points, sequence number, and node type. In the sleep state, the node turns its
radio off but keeps sensing the physical quantity; it comes out of sleep in case of
emergency or timeout. In the transmit state, the node transmits the data or other
parameters requested by the BS.
Figure 6.4 State machine running on each node
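The node-side logic can be sketched as follows (illustrative MATLAB-style
pseudocode; next_command, apply_parameters, send_data, emergency, and timed_out
are hypothetical stand-ins for the node firmware):

% Sketch of the state machine of figure 6.4
state = 'receive';                   % power-on state: cheaper startup than transmit
while true
    switch state
        case 'receive'               % wait for a command from the BS
            switch next_command()
                case 'set',      state = 'setup';
                case 'transmit', state = 'transmit';
                case 'sleep',    state = 'sleep';
            end
        case 'setup'                 % store sweep rate, channels, node type, etc.
            apply_parameters();  state = 'receive';
        case 'transmit'              % send data or parameters to the BS
            send_data();         state = 'receive';
        case 'sleep'                 % radio off; sensing continues
            if emergency() || timed_out()
                state = 'receive';   % wake on emergency or timeout
            end
    end
end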
This simplified state machine at the nodes is crucial as sensor networks
penetrate wider application areas. With numerous sensor manufacturers and network
products on the market, compatibility between components from different
manufacturers is a major concern [27]. Hence, the adoption of standards like IEEE
1451 is gaining momentum.
In 1993 the IEEE and the National Institute of Standards and Technology
(NIST) began work on a standard for smart sensor networks. IEEE 1451, the Standard
for Smart Sensor Networks, was the result. The objective of this standard is to make it
easier for different manufacturers to develop smart sensors and to interface those
devices to networks.
Figure 6.5 IEEE 1451 Standard for Smart Sensor Networks
Smart Sensor, Virtual Sensor. Figure 6.5 shows the basic architecture of IEEE
1451 [8]. Major components include the STIM, TEDS, TII, and NCAP, as detailed in
the figure. A major outcome of the IEEE 1451 studies is the formalized concept of a
Smart Sensor: a sensor that provides extra functions beyond those necessary for
generating a correct representation of the sensed quantity [12]. These might include
signal conditioning, signal processing, and decision-making/alarm functions.
Figure 6.6 A general model of a smart sensor
Objectives for smart sensors include moving the intelligence closer to the point
of measurement; making it cost effective to integrate and maintain distributed sensor
systems; creating a confluence of transducers, control, computation, and
communications towards a common goal; and seamlessly interfacing numerous sensors
of different types. The concept of a Virtual Sensor is also depicted. A virtual sensor is
the physical sensor/transducer plus the associated signal conditioning and digital signal
processing (DSP) required to obtain reliable estimates of the required sensory
information. The virtual sensor is a component of the smart sensor.
Incorporating these standards obviously expends some of the sensor node
resources. Hence it is important to keep the processing and memory requirements
minimal for resource-constrained sensor nodes.
CHAPTER 7
IMPLEMENTATIONS
For all the implementations, the following hardware and software were used.
Hardware, from Microstrain Inc. [31]: one V-link wireless sensor node, one
SG-link wireless sensor node, two G-link wireless sensor nodes, and one base station.
Standard external 9-volt batteries with 150 mAh capacity were used to power all the
nodes. The base station was plugged into an AC outlet using an AC adapter. A laptop
with an Intel Pentium 4 1.99 GHz processor and 256 MB of RAM was used as the
terminal PC. A USB-to-serial converter was used to establish the RS-232 serial link
between the terminal PC and the base station.
Software: MATLAB version 6.5.1 and LabVIEW version 6.1 were used for all
the software implementations, running on Microsoft Windows XP Home Edition.
7.1 MATLAB Implementation
The data link layer for establishing an RF communication link between the base
station and any of the wireless sensor nodes was implemented. A serial link between
the base station and the terminal PC is first created, enabling the terminal program to
issue commands to the base station for communicating with the wireless nodes.
A serial object was created in MATLAB with the following properties:
data bits – 8, stop bits – 1, no parity, baud rate – 115.2 kbps, byte order – big-endian,
output buffer size – 50000, input buffer size – 50000, timeout – 0.5 seconds. The data
bits, stop bit, parity, and baud rate recognized by the base station were provided by
Microstrain Inc.
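In MATLAB this serial object is created as follows (the COM port number is
installation specific):

% Create and open the serial object with the properties listed above
s = serial('COM1', ...                    % port assigned to the USB-serial converter
    'BaudRate', 115200, 'DataBits', 8, 'StopBits', 1, 'Parity', 'none', ...
    'ByteOrder', 'bigEndian', ...
    'OutputBufferSize', 50000, 'InputBufferSize', 50000, 'Timeout', 0.5);
fopen(s);                                 % open the port before issuing commands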
The following is the set of commands recognized by the base station, along with
the details of the command byte and command data to be issued and the format of the
response obtained from the base station for each command [31]. All these commands
are issued using the serial object created above.
7.1.1 Check connection between Host and Base Station
The Host sends a 1-byte command and the Base Station responds with a 1-byte
echo indicating communication is established between the Host and the Base Station.
Command Byte – 0x01, Command Data – None, Response – 0x01 (no response if the
base station is not communicating).
7.1.2 Check sync between Base Station and Link Device
The Host sends a 3-byte command and the Base Station responds with a 1-byte
echo indicating communication is established between the Base Station and the link
device. Command Byte – 0x02, Command Data – MSB of link device address followed
by its LSB, Response – 0x02 (note: 0x21 if no link device found by the BS).
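For illustration, these two commands can be issued through the serial object as
follows (the 16-bit node address 0x0215 is a made-up example):

% 7.1.1 - ping the base station and read back the 1-byte echo
fwrite(s, hex2dec('01'), 'uint8');
echo = fread(s, 1, 'uint8');              % expect 0x01

% 7.1.2 - check sync with a link device (address MSB first, then LSB)
addr = hex2dec('0215');                   % example node address
fwrite(s, [hex2dec('02'), floor(addr/256), mod(addr,256)], 'uint8');
resp = fread(s, 1, 'uint8');              % 0x02 on success, 0x21 if not found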
7.1.3 Read from a particular EEPROM address on-board a Link Device
The Host sends a 5-byte command and the Base Station responds with a 5-byte
echo indicating communication is established between the Base Station and the Link
Device. Command Byte – 0x03, Command Data – MSB of link device address followed
by its LSB, then MSB of the location to be read followed by its LSB, Response – 0x03
byte followed by MSB of value, LSB of value, MSB of checksum, LSB of checksum.
7.1.4 Write to a particular EEPROM address on-board a Link Device
The Host sends an 8-byte command and the Base Station responds with a 1-byte
echo indicating communication is established between the Base Station and the Link
Device. Command Byte – 0x04, Command Data – MSB of link device address followed
by its LSB, then the EEPROM location (1 byte), then MSB of value followed by its
LSB, then MSB of checksum followed by its LSB, Response – None.
7.1.5 Download a page of data from a Link Device
The Host sends a 5-byte command and the Base Station responds with a
267-byte return containing the 132 data values on the particular data page requested.
Command Byte – 0x05, Command Data – MSB of link device address followed by its
LSB, MSB of page, LSB of page, Response – 0x05 byte followed by MSB/LSB pairs
of data points 1–132, then MSB of the page data checksum followed by its LSB.
7.1.6 Erase all data on-board a Link Device
The Host sends a 7-byte command to erase all data memory on the Link Device.
The base station does not return an acknowledgement when finished. The process can
take approximately 20 seconds, and the node will not respond to any commands during
this period. This inactivity can be used to detect completion, using a simple ping loop.
Command Byte – 0x06, Command Data – MSB of link device address followed by its
LSB, 0x08 byte, 0x10 byte, 0x0C byte, 0xFF byte, Response – None.
7.1.7 Trigger a data capture session on-board a Link Device
The Host sends a 3-byte command and the Base Station responds with a 1-byte
echo indicating the Base Station has triggered a data capture session on-board the Link
Device. The trigger name will be the next available number. Command Byte – 0x07,
Command Data – MSB of link device address followed by its LSB, Response – 0x07
byte.
7.1.8 Trigger a data capture session on-board Link Device with supplied Trigger name
The Host sends a 4-byte command and the Base Station responds with a 1-byte
echo indicating the Base Station has triggered a data capture session on-board the Link
Device. The trigger name will be the number supplied. Command Byte – 0x0C,
Command Data – MSB of link device address followed by its LSB, followed by an
integer between 0x00 and 0xFF representing the trigger name, Response – 0x0C byte.
7.1.9 Initiate real time streaming data collection mode from a Link Device
The Host sends a 3-byte command and the Base Station responds with a
“stream” of bytes, which is a data session being generated in real time on-board the
Link Device. The “stream” is passed through to the Host by the Base Station. Command
Byte – 0x38, Command Data – MSB of link device address followed by its LSB.
0xFF (Header) | Channel 1 MSB | Channel 1 LSB | ... | Channel N MSB | Channel N LSB | Checksum Byte

Figure 7.1 Data packet format for real-time streaming.
Response – An undefined number of data packets with the format shown in figure 7.1.
Data is sent in 12-bit format, justified 1 bit to the left. The checksum is the sum of the
MSBs and LSBs of the data from all the channels. The data is rebuilt into 12-bit format
using the relationship: (Channel N MSB x 256 + Channel N LSB)/2.
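For illustration, one streamed packet could be rebuilt as in the sketch below,
assuming N active channels and that the 0xFF header byte has just been read (the
modulo-256 checksum comparison is an assumption about how the single checksum
byte is formed):

N = 3;                                    % number of active channels (example)
raw = fread(s, 2*N + 1, 'uint8');         % MSB/LSB pairs plus the checksum byte
msb = raw(1:2:2*N);
lsb = raw(2:2:2*N);
if mod(sum(msb) + sum(lsb), 256) == raw(end)
    samples = (msb*256 + lsb) / 2;        % rebuild 12-bit values, undoing the
else                                      % 1-bit left justification
    samples = [];                         % checksum error: discard the packet
end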
7.1.10 Initiate low-power periodic sleep mode
The Host sends a 1-byte command and the Base Station responds with a 1-byte
echo indicating streaming from the Link Device has been stopped and the device has
entered the low-power periodic sleep mode. Command Byte – 0xFF, Command Data –
None, Response – 0xFF.
All of the above commands can be issued using the serial object in a MATLAB
m-file. A separate m-file was created for streaming the data in real time and displaying
the calibrated data in the time domain. Figure 7.2 shows the data obtained from a
G-link sensor in real time.

Figure 7.2 MATLAB - Real-time display of acceleration along 3 axes
To display the data read from the serial port in real time, after writing the
command the serial port is checked for data availability; all available data is picked up,
processed into the proper format, and plotted on the graph, after which the program
again looks for new data on the serial port and plots it. For this, the plot command is
used inside a for-loop. To minimize the time required to plot the points, erase mode is
used so that the figure window is not refreshed every time data points are plotted. This
saves time and allows real-time display without any loss of data. Data packets with
checksum errors are discarded. Figure 7.3 shows the GUI created in MATLAB.

Figure 7.3 Screen shot of GUI created in MATLAB
7.2 LabVIEW Implementation
After implementing the data link layer to extract data from a sensor in real time
in MATLAB, it was found that this approach may be exposed to inherent time lags, as
MATLAB graphics are inherently slow. LabVIEW, on the other hand, intrinsically
supports real-time data acquisition. But for the application it was desired to use
MATLAB tools such as DSP, Fuzzy Logic, Neural Networks, and Statistical Analysis
for advanced data processing and interpretation. Thus, the implementation architecture
shown in figure 7.4 was developed. As seen there, LabVIEW is used for acquiring the
data, displaying it in real time, and storing it in data files.

Figure 7.4 Implementation Architecture

These data files can be accessed directly by various
MATLAB tools for analysis and decision making, the results of which can again be
displayed using the LabVIEW GUI. We hence combine the fast and efficient real-time
data acquisition tools and user-friendly GUI development tools of LabVIEW with the
easy-to-implement data processing, analysis, and interpretation tools of MATLAB, for
a rich overall implementation.
We implemented most of the features of the UC-TDMA protocol in LabVIEW.
Figure 7.5 shows an overview of the OSI reference model layers addressed in the
implementation. The Session and Transport layers are not addressed specifically for
wireless sensor networks. The physical layer is provided by Microstrain Inc., and the
UC-TDMA protocol hence provides all the services required by the application layer.
Application / Presentation – Application GUIs in LabVIEW; Session / Transport /
Network / Data Link – User Configured TDMA protocol with emergency tipster;
Physical – provided.
Figure 7.5 OSI reference model – layers implemented
All the sensor nodes were physically installed at optimal locations in the
Heating and Air Conditioning Plant at ARRI. G-link sensors were used to measure the
vibrations of the operating pump, the V-link was used to measure the operating
temperature, and the SG-link sensor measured the load. Although the small form factor
of the sensing nodes facilitated proper contact of the sensing element with the physical
measurands, it was found that the sensors had to be mounted tightly in order to obtain
accurate measurements of vibration and load. The base station was plugged into an AC
outlet; it communicated with the laptop through a 9-pin RS-232 serial connector and a
USB port converter.
Figure 7.6 shows the Heating and Air Conditioning test-bed, monitored by the
installed wireless sensors.

Figure 7.6 Heating and Air Conditioning plant at ARRI
We created two separate GUIs: the Network Configuration Wizard, for creating
configuration files, and the application GUI, for displaying the real-time measurements
and the frequency-domain display, and for storing the data in data files to be used by
MATLAB. Together, the two programs implement the UC-TDMA protocol and the
application layer.
The Network Configuration Wizard provides an engineer's interface with which
one can specify various parameters for the network. Figure 7.7 shows the very first
screen of the wizard.

Figure 7.7 Screen shot of first screen of Network Configuration Wizard (NCW)

Clicking the Next tab starts the wizard and replaces the current screen with the screen
shown in figure 7.8, which asks the user whether to create a new configuration file or
use an already existing file on the system.

Figure 7.8 Second Screen of NCW

On selecting Create New Configuration File, the user is prompted with a dialog box to
give the name of the file and the directory in which to store it.

Figure 7.9 Dialog Box for selecting an existing configuration file
On selecting Run from Existing File, the user is instead prompted with the dialog
box shown in figure 7.9. On selecting a configuration file, the program loads the settings
for the different nodes in the network from the file. This is extremely useful when the
user wants to make minor changes for some of the nodes in the network. When a new
configuration file is to be created, default settings for the various nodes are loaded;
these defaults give the user an idea of the kind of selections to be made for the various
sensor nodes.
The next screen of the wizard shows the plant under observation with the
various sensing nodes installed. Figure 7.10 shows this screen with a snapshot of the
ARRI test-bed with installed sensors.

Figure 7.10 NCW screen showing actual physical location of sensors in Plant

To configure any particular sensor node, the user clicks on it, and the next
screen appears with the current settings for that particular sensor node. The user is free
to choose another sensor node either from the drop-down menu at the top or by
clicking the Back tab and then reselecting the desired node by clicking on it. This
eliminates the issue of node naming in many respects; for example, the user need not
remember the names of the vibration sensors installed on the front of the machine
versus those on the rear. Nodes are named on the basis of the physical quantities they
measure, and the node-naming issue is thus taken care of by the uppermost layer,
avoiding complications at lower layers.
Figure 7.11 shows the main configuration screen of the wizard; various
parameters for all the nodes in the network are set through this screen.

Figure 7.11 NCW Main Configuration Screen
For each node, the following parameters are defined.
1. Sweep Rate: the data sampling rate (one sweep represents one sample from all
active channels). It can be chosen as 32, 64, 128, 256, 512, 1024, or 2048 sweeps/sec.
The sweep rate determines the maximum frequency of signal that can be recovered
from the samples; by the Nyquist criterion, the maximum signal frequency recoverable
from data sampled at 2048 sweeps/sec is 1024 Hz.
2. Number of Sweeps: the number of data points transmitted each time the node
transmits (one data point represents one sample point from all active channels).
3. Sequence Number: the position at which a particular node transmits. The node with
sequence number two transmits after the node with sequence number one, and so on.
The node with sequence number one occupies the first slot of the UC-TDMA time
frame and hence is the first node to send its data to the base station in the complete
data acquisition cycle.
4. Active Channels: the channels that transmit each time a node transmits. Each node
can have a maximum of eight channels; the user checks the desired channels to include
their data in each transmission.
5. Node Number: a 16-bit number that is the unique node address in the network,
stored in the EEPROM of the sensor node. It also identifies the type of physical
quantity measured by the node.
6. No Setup: provides the flexibility of skipping some sensors in the acquisition cycle
so as to obtain more frequent data from a few selected sensors, for example when
troubleshooting a particular machine component. This option is checked for sensors
that need not be configured for transmission.
7. Comm. Port: identifies the communication port number at which the base station is
connected to the terminal PC. All nodes communicating with the same base station
must therefore have the same communication port setting.
On clicking the “Save & Exit” tab, all the settings are saved in the named
configuration file. Several such configuration files can be created and saved for the
different network tasks to be carried out during different phases of operation.
These configuration files are used by the application GUI to set up the node parameters for all the nodes in the network and to create the UC-TDMA time frame. The application program thus performs the following operations: it extracts the settings from the configuration file selected by the user, creates the UC-TDMA time frame, sequences the real-time display windows according to the time frame, removes the windows for the non-configured sensors, acquires the data from the network, processes it into the proper format, displays it in real time in the corresponding display windows (for this it implements most of the commands implemented in MATLAB), calculates and displays the FFT of the time-domain signal, and saves the raw data in a file that can be used by MATLAB for analysis. The application program runs continuously by repeating the UC-TDMA frame until interrupted by a click on the “STOP” tab on the screen; this cycle is sketched below.
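As a rough outline only (the actual program is a LabVIEW VI), the following Python sketch shows this cycle; read_slot, display, and stop_requested are hypothetical stand-ins for the serial-port acquisition, the plot windows, and the “STOP” tab.

import numpy as np

def run_network(node_configs, stop_requested, read_slot, display, log):
    # Order the configured nodes by sequence number; "No Setup" nodes
    # are dropped from the acquisition cycle.
    frame = sorted((c for c in node_configs if not c["no_setup"]),
                   key=lambda c: c["sequence_number"])
    while not stop_requested():          # repeat the UC-TDMA frame
        for cfg in frame:
            samples = read_slot(cfg)     # one slot's data from the base station
            display(cfg["node_number"], "time", samples)
            display(cfg["node_number"], "freq", np.abs(np.fft.rfft(samples)))
            np.savetxt(log, np.atleast_2d(samples))  # raw data for MATLAB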
The application program uses the sweep rate, number of sweeps, and sequence number arrays to create the UC-TDMA time frame. The sweep rate, together with the number of sweeps, determines the duration of the TDMA slot for a particular node; thus, by properly selecting the sweep rate and the number of sweeps, the user can define the length of the slot for any particular node. The sequence number, on the other hand, determines the position of the slot in the frame. The user can choose to have more than one slot for a particular node. The application program communicates with the nodes through the base station connected to the serial port.
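To make the slot arithmetic concrete, the sketch below (reusing the assumed configuration records from earlier) computes each node's slot duration as the number of sweeps divided by the sweep rate and lays the slots out by sequence number. The RTS/CTS turnaround observed later is deliberately ignored here, so real frames run slightly longer.

def build_time_frame(node_configs):
    # Skip "No Setup" nodes, then order slots by sequence number.
    active = [c for c in node_configs if not c["no_setup"]]
    schedule, elapsed = [], 0.0
    for cfg in sorted(active, key=lambda c: c["sequence_number"]):
        slot_len = cfg["num_sweeps"] / cfg["sweep_rate"]  # seconds
        schedule.append((cfg["node_number"], elapsed, slot_len))
        elapsed += slot_len
    return schedule, elapsed  # (node, start, duration) triples; frame length

# Example: 256 sweeps at 512 sweeps/sec gives a 0.5 s slot, and by the
# Nyquist criterion signals up to 256 Hz are recoverable from that node.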
Figure 7.12 shows the introductory screen of the application GUI. When the “Click to Continue…” tab is clicked, the user is prompted to select a saved configuration file for operating the network.
Figure 7.12 Introductory Screen of Application GUI
After a proper configuration file is selected, the application program performs all the steps necessary to configure the network. It first configures all the nodes with the given parameters, then organizes the main display screen according to the given sequence and starts network operation. Figure 7.13 shows a screen shot of the application GUI with the network configured for three nodes.
Figure 7.13 Time Domain display of real-time data from three different sensors
These real-time plots move to the viewer's left as new data arrives in real time. To obtain the frequency-domain display of these data points, one can click the “Freq Domain” tab for the corresponding measurement. Frequency plots are obtained by taking the FFT of the data points transmitted by the sensor node in one time slot; the FFT is updated each time the node transmits new data to the base station. These are thus time-varying FFT plots of the real-time data acquired by the BS, representing the current vibration frequency signature at any instant. Figure 7.14 shows a screen shot displaying the frequency-domain signals along with the corresponding time-domain signals; a sketch of the underlying computation follows the figure.
Figure 7.14 Screen Shot of Application GUI with Frequency Domain Signals
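The following NumPy fragment illustrates the computation behind each such plot. The GUI itself is a LabVIEW implementation, so this is only a stand-in, and the helper name is assumed.

import numpy as np

def slot_fft(samples, sweep_rate):
    """Return (frequencies in Hz, magnitude spectrum) for one slot's samples."""
    n = len(samples)                       # the configured number of sweeps
    spectrum = np.abs(np.fft.rfft(samples)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sweep_rate)  # spans 0 .. sweep_rate/2
    return freqs, spectrum

# Each new transmission replaces `samples`, so successive calls yield the
# time-varying frequency signature described above.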
Data acquired by each sensor over the course of network operation is saved in a data file selected by the user. This data can be used by any other application program for further, detailed analysis using tools such as fuzzy logic and neural networks. The network was operated in the continuous mode of the UC-TDMA protocol to gather the data. It was found that after a node finishes sending its data to the BS, there is a small delay before the next node in the sequence starts transmitting. This delay is attributed to the time taken by the BS to generate the CTS signal and by the node to respond with its data.
CHAPTER 8
CONCLUSION
Emerging wireless sensor network technologies exhibit huge potential for efficient implementation of Condition Based Maintenance systems, which in turn promise to save industry billions of dollars in manufacturing costs. The two technologies can benefit each other as they mature: CBM benefits from the distributed sensing and processing techniques developed for wireless sensor networks, while representing a completely new application domain for WSN.
It is evident that the topmost and lowest layers of the OSI architecture determine the overall wireless sensor network system design. A design driven by both the application and the physical layer can give surprisingly favorable results for resource-constrained WSN.
We demonstrated that a single-hop topology not only offers a simplified overall design, but also satisfies most of the application requirements that would otherwise have been difficult to address. Single-hop transmission thus provides the best topological solution for CBM networks.
We found that proper sensor deployment scenarios not only loosen the design constraints but also offer potential solutions to many of the problems faced in efficient network operation. For CBM, a typical deployment scenario is an industrial setting, where powering some of the nodes from AC outlets can be feasible. With this we hope to alleviate many of the implementation difficulties associated with the multi-hop networks used for large coverage areas.
For deployment in large industries, the proposed scheme can be used as a prototype, with several such small networks forming clusters that communicate with each other and with a central control centre wirelessly through their base stations. Higher levels of our UC-TDMA protocol can be used for inter-cluster communication in such networks.
Consideration of RF propagation characteristics when designing the MAC protocol is also important for robust system design. Misinterpreting fades as emergencies or failures, and misreading emergencies because of the capture effect, must be anticipated and avoided.
With a plethora of sensing and networking product manufacturers on the market, it is essential to adopt industry standards such as IEEE 1451 to resolve the associated compatibility issues. It is equally important that the processing and memory requirements incurred by network operation be minimal for resource-constrained sensing nodes, which must also support in-network data processing and the evolving standards for sensor networks.
The ultimate aim of the design and development of wireless sensor networks for condition based maintenance, however, should be to facilitate distributed sensing and processing, along with networked data, for collaborative processing in CBM systems. At the same time, WSN systems should be capable of adapting to the challenging dynamic requirements posed by adaptive and learning CBM systems. This affords many opportunities for further study.
BIOGRAPHICAL INFORMATION
Ankit Tiwari was born in Indore, India. He obtained his Bachelor of Engineering degree in Electronics Engineering from SV Institute of Technology & Science in 2002. He joined UT Arlington in August 2002 and began working at the Automation & Robotics Research Institute in January 2003. He is a member of Tau Beta Pi (National Engineering Honors Society). His study at UTA was funded in part through a Teaching Assistantship from the EE department. He completed his Master's in Electrical Engineering in May 2004.