Detection of Forest Fire Using Wireless Sensor Network
I. INTRODUCTION
According to a 2001 U.S. National Research Council report, the use
of this technology throughout society "could well dwarf previous
milestones in the information revolution." In 2003 it was heralded
as one of "10 emerging technologies that will change the world."
This revolutionary technology is known as Wireless Sensor Networks
(WSNs). A sensor network is an
infrastructure comprised of sensing (measuring), computing,
and communication elements that gives an administrator the
ability to instrument, observe, and react to events and
phenomena in a specified environment. A wireless sensor
network (WSN) is a wireless network consisting of spatially
distributed autonomous devices using sensors to cooperatively
monitor physical or environmental conditions, such as
temperature, sound, vibration, pressure, motion or pollutants, at
different locations. There are four basic components in a sensor
network: (1) an assembly of distributed or localized sensors; (2)
an interconnecting network (usually, but not always, wireless-based); (3) a central point of information clustering; and (4) a
set of computing resources at the central point (or beyond) to
handle data correlation, event trending, status querying, and data
mining. The communication between sensor nodes starts with data
being forwarded, via multiple hops, to a sink (sometimes denoted
as controller or monitor) that either uses the data locally or is
connected to other networks (e.g., the Internet) through a
gateway. The design issues for network setup and communication
include cost, energy efficiency and sufficiency, the ability to
cope with node and communication failures, the nature of the nodes
(homogeneous or heterogeneous), scalability to large deployments,
environment accessibility, ease of use, and more.
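The multi-hop forwarding described above can be illustrated with a minimal sketch. The node names and the fixed parent table are hypothetical, purely for illustration; a real WSN stack builds and repairs these routes dynamically:

```python
# Minimal sketch of multi-hop forwarding to a sink.
# The topology below is hypothetical: each node has a static
# parent, and a reading travels hop by hop until it reaches
# the sink, which collects (origin, value, hop_count).

PARENT = {"N3": "N2", "N2": "N1", "N1": "sink"}  # next hop toward sink

def forward(origin, value):
    """Forward a reading from `origin` along the parent chain to the sink."""
    node, hops = origin, 0
    while node != "sink":
        node = PARENT[node]
        hops += 1
    return (origin, value, hops)

print(forward("N3", 37.5))  # ('N3', 37.5, 3)
```

In a real deployment the parent table is what the routing layer (e.g., a mesh protocol) maintains; the sketch only shows the data path.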
Figure 1: Two XML elements that represent the same Film. Nodes are labelled
by their XML tag name.
II. RELATED WORK
In [2], Ananthakrishna et al. exploit the dimensional hierarchies
typically associated with dimension tables in data warehouses to
develop a duplicate elimination algorithm called Delphi, which
significantly reduces the number of false positives without
missing duplicates. They rely on these hierarchies to detect an
important class of equivalence errors in each relation and to
efficiently reduce the number of false positives.
Carvalho and da Silva proposed a similarity-based approach in [3]
to identifying similar identities among objects from multiple Web
sources. The approach works like the join operation in relational
databases: where the traditional join uses an equality condition
to identify tuples that can be joined, here a similarity function
based on information retrieval techniques takes the place of the
equality condition. They present four different strategies for
defining the similarity function using the vector space model and
report experimental results showing, for Web sources in three
different application domains, that the approach is quite
effective in finding objects with similar identities, achieving
precision above 75%.
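A similarity function in the vector space model can be sketched as cosine similarity over term-frequency vectors. This is a simplification for illustration only: [3] evaluates several weighting strategies, none of which are reproduced here:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity of two strings under a plain term-frequency
    vector space model (no IDF weighting; illustrative only)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Shared tokens raise the score; extra tokens lower it.
print(cosine_similarity("forest fire detection", "detection of forest fire"))
```

A join-like operation would then pair records whose similarity exceeds a chosen threshold instead of requiring exact equality.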
DogmatiX is a generalized framework for duplicate detection [4]
that divides the problem into three components: candidate
definition (which objects are to be compared), duplicate
definition (when two duplicate candidates are actually
duplicates), and duplicate detection (how to efficiently find
those duplicates). The algorithm is very effective when the errors
are typos: edit distance compensates for them, and the similarity
measure is specifically designed to identify duplicates despite
missing data. Synonyms, on the other hand, although they have the
same meaning, are recognized as contradictory data and decrease
the similarity. They are more difficult to detect without
additional knowledge, such as a thesaurus or a dictionary, so that
scenario yields poorer results.
Milano et al. propose a novel distance measure for XML data, the
structure-aware XML distance [5], which copes with the flexibility
that is usual for XML files while taking proper account of the
semantics implicit in structural schema information. The
structure-aware XML distance treats XML data as unordered. The
edit distance between tokens t1 and t2 is the minimum number of
edit operations (delete, insert, transpose, and replace) required
to change t1 into t2; this value is normalized by the sum of their
lengths.
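Since transpositions count as single operations, the distance just described corresponds to the restricted Damerau-Levenshtein distance; a sketch, with the normalization by the sum of the token lengths as stated above:

```python
def edit_distance(t1, t2):
    """Restricted Damerau-Levenshtein distance: minimum number of
    insert, delete, replace, and adjacent-transpose operations."""
    m, n = len(t1), len(t2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if t1[i - 1] == t2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # replace / match
            if (i > 1 and j > 1 and t1[i - 1] == t2[j - 2]
                    and t1[i - 2] == t2[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transpose
    return d[m][n]

def normalized_distance(t1, t2):
    """Edit distance normalized by the sum of the token lengths."""
    total = len(t1) + len(t2)
    return edit_distance(t1, t2) / total if total else 0.0

print(edit_distance("fire", "fier"))        # 1 (one transposition)
print(normalized_distance("fire", "fier"))  # 0.125
```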
In [6], the authors propose a novel method for detecting
duplicates in XML data that has structural diversity. The method
uses a Bayesian network to compute the probability that any two
XML objects are duplicates, considering not only child elements
but complete subtrees. By computing all these probabilities, the
method performs accurately on various datasets. Figure 1 shows two
XML trees that contain duplicate data even though the values are
represented differently.
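The actual network structure and conditional probabilities of [6] are not reproduced here; the following is a much-simplified, hypothetical stand-in that only conveys the flavor of the idea, combining the similarity of two nodes' own values with the duplicate probabilities of their child subtrees under an independence assumption (the function name, the `prior` parameter, and the product rule are all illustrative, not the paper's model):

```python
def duplicate_probability(value_sim, child_probs, prior=0.9):
    """Hypothetical, much-simplified stand-in for the Bayesian
    network of [6]: multiply a prior, the similarity of the two
    nodes' own values, and the duplicate probabilities of the
    corresponding child subtrees, assuming independence."""
    p = prior * value_sim
    for cp in child_probs:
        p *= cp
    return p

# Two nodes with similar values and two compared subtrees:
print(duplicate_probability(0.9, [0.8, 1.0]))  # ≈ 0.648
```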
The base for the proposed system, presented in [1], extends the
work done in [6] by adding a pruning algorithm to improve the
efficiency of the network evaluation: it reduces the number of
comparisons by discarding pairs that cannot reach a given
duplicate probability threshold. Little user input is required;
the user only needs to provide the attributes to be compared,
their respective default probability values, and a similarity
threshold. The system is also flexible in that it allows different
similarity measures and different combinations of probabilities.
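The pruning idea can be sketched under the same simplified product model used above (a hypothetical illustration, not the bound actually derived in [1]): because every probability factor is at most 1, a running product that has already fallen below the threshold can never recover, so the remaining factors need not be evaluated.

```python
def prune_or_evaluate(factors, threshold):
    """Multiply probability factors in [0, 1], stopping as soon as
    the running product drops below `threshold`: the product can
    only decrease, so pruned pairs skip the remaining work."""
    p = 1.0
    for i, f in enumerate(factors):
        p *= f
        if p < threshold:
            return None, i + 1   # pruned after evaluating i + 1 factors
    return p, len(factors)       # fully evaluated

print(prune_or_evaluate([0.9, 0.5, 0.9], threshold=0.5))  # (None, 2)
print(prune_or_evaluate([0.9, 0.9, 0.9], threshold=0.5))  # (≈0.729, 3)
```

The savings come from ordering the cheap, discriminative factors first, so unpromising pairs are rejected before the expensive comparisons run.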
III. METHODOLOGY
Third Layer
This layer is also called the Client Layer (visualization and
analysis tools). It provides the user with visualization software
and a graphical interface for managing the network. MoteView is
used for this purpose; it bundles software from all three layers
to provide an end-to-end solution.
Algorithm: Detection of forest fire
Input: Sensed temperature
Output: Whether a forest fire is detected
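The detection step on sensed temperatures can be sketched as follows. The 50 °C absolute threshold and the 10 °C rise-rate check are illustrative assumptions for the sketch, not values taken from the deployment:

```python
FIRE_TEMP_C = 50.0  # assumed absolute alarm threshold (illustrative)
MAX_RISE_C = 10.0   # assumed abnormal jump between consecutive samples

def fire_detected(readings):
    """Return True if any sensed temperature exceeds the absolute
    threshold, or if it rises abnormally fast between samples."""
    if any(t >= FIRE_TEMP_C for t in readings):
        return True
    return any(b - a >= MAX_RISE_C for a, b in zip(readings, readings[1:]))

print(fire_detected([24.0, 25.5, 48.0, 61.0]))  # True
print(fire_detected([24.0, 24.5, 25.0]))        # False
```

In the deployed system each node would apply such a check to its own samples (or forward raw readings to the sink for a central decision).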
The proposed system uses all these algorithms but needs small user
intervention: first, the user has to provide the parameters on
which the comparison is performed; second, the action to be taken
after duplicate detection, which may be elimination, updating, or
any other operation. The next section describes all the
algorithms, with an example showing how the original trees in
Figure 1 are converted to a Bayesian network.
IV. DESIGN AND SPECIFICATION
In this system, there are three design layers. They are as
follows: [31]
First Layer
This layer is also called the Mote Layer (TinyOS firmware). XMesh
resides here; it is the software that runs on the cloud of sensor
nodes forming a mesh network. Nodes N1 to N8 form the mesh
network, as shown in the diagram below. XMesh provides the
networking algorithms required to form a reliable communication
backbone connecting all the nodes within the mesh cloud to the
server.
V. EXPERIMENTAL SETUP
The sensor nodes are configured using the mote software. The code
is burned onto the nodes and the base station. Sensor nodes are
deployed in the environment whose ambient temperature is to be
measured. The radio board is connected to the USB port of a
computing device, which acts as the gateway. Simulation of the
DYMO routing protocol is done using TOSSIM (TinyOS Simulator)
[28].
The data received from Terminal:
VI. CONCLUSION
In this project report, we covered the basics of wireless sensor
networks (WSNs) and the software and hardware, with their
specifications, required to build a wireless sensor network
system. By optimally routing the data with the DYMO routing
protocol and extracting information from it effectively, the
wireless sensor network provides early detection of forest fires
efficiently, making an environment-friendly system that reduces
the damage caused by forest fires.
VII. FUTURE WORK

REFERENCES
[1] L. Leitao, P. Calado, and M. Herschel, "Efficient and Effective
Duplicate Detection in Hierarchical Data," IEEE Trans. on Knowledge and
Data Engineering, vol. 25, no. 5, May 2013.
[2] R. Ananthakrishna, S. Chaudhuri, and V. Ganti, "Eliminating Fuzzy
Duplicates in Data Warehouses," Proc. Conf. Very Large Databases (VLDB),
pp. 586-597, 2002.
[3] J.C.P. Carvalho and A.S. da Silva, "Finding Similar Identities among
Objects from Multiple Web Sources," Proc. CIKM Workshop Web Information
and Data Management (WIDM), pp. 90-93, 2003.
[4] M. Weis and F. Naumann, "DogmatiX Tracks Down Duplicates in XML,"
Proc. ACM SIGMOD Conf. Management of Data, pp. 431-442, 2005.
[5] D. Milano, M. Scannapieco, and T. Catarci, "Structure Aware XML
Object Identification," Proc. VLDB Workshop Clean Databases (CleanDB),
2006.
[6] L. Leitao, P. Calado, and M. Weis, "Structure-Based Inference of XML
Similarity for Fuzzy Duplicate Detection," Proc. 16th ACM Int'l Conf.
Information and Knowledge Management, pp. 293-302, 2007.
[7] K.-H. Lee, Y.-C. Choy, and S.-B. Cho, "An Efficient Algorithm to
Compute Differences between Structured Documents," IEEE Transactions on
Knowledge and Data Engineering (TKDE), vol. 16, no. 8, pp. 965-979,
Aug. 2004.
[8] E. Rahm and H.H. Do, "Data Cleaning: Problems and Current
Approaches," IEEE Data Eng. Bull., vol. 23, no. 4, pp. 3-13, Dec. 2000.